We are a rapidly growing startup building an advanced management and analytics platform for companies operating on fan platforms such as OnlyFans, Fansly, and others. Our platform helps businesses efficiently manage large numbers of accounts and teams, offering powerful tools for operational optimization and revenue growth. At the core of our system are automation workflows, advanced data collection, and actionable insights that help our users streamline their operations.
The Role
We’re looking for an experienced backend engineer to take a leading role in this project. You’ll be responsible for designing the system architecture, writing hands-on code, and driving technical decisions.
What You’ll Do:
• Lead the development of the statistics subsystem, focusing on analytics and reporting.
• Design and implement microservices to automate key processes for our users.
• Build and maintain high-performance data stores such as ClickHouse and TimescaleDB, alongside other specialized tools, to handle large-scale data and optimize analytics flows.
• Work with large volumes of events and data streams, ensuring optimal performance and reliability.
• Collaborate with cross-functional teams to ensure seamless integration of automation and statistics features.
• Continuously optimize existing systems for performance, scalability, and reliability.
Your Expertise:
• Strong experience with Node.js, focused on backend development and building scalable, high-load systems.
• Experience with high-load data stores and processing engines such as ClickHouse, TimescaleDB, DynamoDB, Cassandra, and Spark for handling and processing large volumes of data efficiently.
• Expertise in real-time data processing, with a focus on streaming analytics and optimizing performance.
• Experience designing and implementing microservices, ensuring high availability, fault tolerance, and scalability.
• Strong knowledge of performance optimization for low-latency systems that can handle large-scale data processing.
• Solid understanding of cloud platforms (AWS, GCP, etc.) and containerization technologies (Docker, Kubernetes) for scalable deployments.
• Excellent problem-solving skills, with the ability to design efficient solutions for high-traffic data workloads.
• Familiarity with event buses and message queues (e.g., Kafka, RabbitMQ, SQS) for real-time data processing.