About the Project:
A consulting firm specializing in custom data processing solutions, data analytics, and integration. The project involves building and optimizing data pipelines, ensuring seamless integration of various data sources, and tailoring solutions to client needs. Currently, the company is developing the data infrastructure for a new streaming aggregator, with a focus on efficient data flow and scalability.
About the Position:
We are looking for an experienced Data Engineer who can design and implement data pipelines from the ground up, optimize workflows, and ensure smooth data processing.
Requirements:
— SQL — deep knowledge, ability to write and optimize complex queries.
— Azure Data Services — experience with Azure Data Factory, Azure Synapse, Azure Databricks.
— ETL Processes — ability to build, optimize, and scale workflows.
— Data Modeling & Warehousing — experience designing efficient data schemas.
— Python or Scala — proficiency for data processing and automation tasks.
— Big Data Technologies — experience with Spark, Kafka, or similar is a plus.
— CI/CD for Data Pipelines — knowledge of automated deployment strategies.
— English proficiency (B2+) — required for collaboration with the London-based team.
— Ability to work independently and take ownership of data pipeline development.
— Strong analytical and problem-solving skills, with a focus on scalability and performance.
Responsibilities:
— Develop, test, and maintain ETL pipelines and data workflows.
— Integrate various data sources and optimize data flow efficiency.
— Utilize Azure Data Services to support cloud infrastructure and data processing.
— Write and optimize SQL queries for data transformation and reporting.
— Support data quality and governance practices.
— Collaborate with team members and other departments to deliver high-quality data solutions.
__________________________
Important information about the project duration: the project lasts 3 to 6+ months, with a possible extension.