We are looking for a Full-Stack Data Engineer to help build the foundation of our company’s data platform and take ownership of the early architecture, core pipelines, and analytics layer. Do you enjoy working with end-to-end data workflows, solving practical engineering challenges, and shaping a scalable data foundation for a growing company? If so, you might be the perfect fit for our team!
About the client:
An IT service company that provides fully ready, end-to-end product solutions for its partners and clients. Our partners operate in various markets, including the iGaming sector.
Key Responsibilities:
- Build & Enhance Core Data Pipelines:
- Develop and maintain reliable ETL/ELT pipelines to ingest data into ClickHouse from production databases, APIs, and auxiliary systems such as BigQuery.
- Create modular transformations, staging layers, and data models suited for incremental growth.
- Ensure pipelines are robust, well-monitored, and cost-efficient.
- Expand and Optimise the Analytics Layer:
- Improve the structure and performance of our ClickHouse analytics environment.
- Implement best practices for schema design, partitioning, materialised views, and query optimisation.
- Integrate and manage additional data sources as the company’s needs evolve.
- Data Modelling and Foundation Layer:
- Design clean data models supporting analytics, reporting, and product insights.
- Build fact/dimension tables and maintain documentation as the data landscape expands.
- Work closely with product/engineering to understand data semantics and define KPIs.
- BI & Visualisation Support:
- Collaborate with the analytics team by providing well-modelled data, optimised queries, and reliable data sources.
- Maintain and scale the BI environment built on Apache Superset.
- Ensure stakeholders have access to accurate, performant datasets for reporting and analytics.
- Support self-service analytics through clear table structures, documentation, and accessible data.
- Basic Infrastructure & Orchestration:
- Manage lightweight orchestration using tools such as Airflow, Cron, Prefect, or dbt.
- Set up CI/CD for data workflows in a simple, maintainable way.
- Collaborate with engineering on access control, monitoring, and basic DevOps support.
- Collaboration & Growth of the Data Ecosystem:
- Work directly with product, engineering, operations, and analytics teams to identify data needs.
- Help define data quality checks, naming conventions, and governance practices.
- Gradually evolve the platform as data volume, complexity, and team size grow.
Requirements:
- Strong SQL skills: comfortable writing optimised queries for large datasets.
- Hands-on experience with ClickHouse (or another columnar OLAP database).
- Experience building ETL/ELT pipelines from scratch.
- Comfortable working with AWS-managed SQL databases (Postgres, MySQL, Aurora, etc.).
- Proficiency in Python for data engineering tasks.
- Experience with Apache Superset for BI and reporting.
- Familiarity with BigQuery or similar cloud data warehouses.
- Familiarity with cloud services (AWS S3, EC2, IAM, Lambda, etc.).
- Version control (Git) and basic CI/CD knowledge.
Tools:
- Amazon Redshift.
- Snowflake.
- Google BigQuery.
- ClickHouse.
- Tinybird.
- Apache Druid.
- Apache Pinot.
- StarRocks.
Benefits:
- Paid vacation and sick leave.
- Opportunities for career growth and professional development.
- Mentorship and participation in the company’s strategic projects.
- Young, dynamic, and international team.
- Regular team-building activities and corporate events.
- Access to international networks and projects.
- Opportunities to explore new markets and experiment with innovative tools within your role.
- Possibility of relocation to Bulgaria (relocation package included).
Nice-to-have skills:
- Experience with dbt (or interest in using it as the stack matures).
- Exposure to Airflow, Prefect, Dagster, or similar orchestrators.
- Understanding of basic data governance and quality frameworks.
- Experience with event-driven architectures or streaming pipelines (Kafka/Kinesis).
- Familiarity with data monitoring tools or building your own checks.
- Experience with Metabase or other BI tools beyond Superset.
Read this and feel like it’s exactly what you’ve been looking for? Then message us soon — let’s discuss your future in this role!
Contact us:
e-mail: Anastasiia.pi@visiongrid.io
TG: @anastasiia_recruiter12