About the Role
We are looking for a Middle Data Engineer to design and implement scalable data pipelines that support Generative AI applications. You will work with large-scale datasets, optimize pipeline performance, and ensure seamless data integration across AI workflows.
Responsibilities:
- Design and develop scalable data pipelines to support AI/ML models.
- Work with structured and unstructured data for training and inference.
- Optimize ETL workflows for real-time and batch processing.
- Ensure data quality, governance, and security in cloud environments.
- Collaborate with AI/ML engineers to streamline data workflows.
- Deploy and manage data infrastructure on AWS, GCP, or Azure.
Requirements:
- 3+ years of experience in Data Engineering and Big Data technologies.
- Expertise in ETL pipelines, data warehousing, and distributed computing.
- Strong knowledge of SQL, NoSQL, and data lakes.
- Experience with cloud-based data services (e.g., AWS Glue, BigQuery, Azure Data Factory).
- Proficiency in Python, Spark, Kafka, and Airflow.
- Experience with CI/CD, DevOps, and Agile methodologies.
❤️ What We Offer
- 🚀 A role with real ownership and influence on delivery
- 🌍 International eCommerce projects with global clients
- 📈 Transparent growth path toward Middle+ → Senior Data Engineer
- 🎓 Company-supported learning & certification programs
- 🏠 Fully remote work with a flexible schedule
- 🏝 Paid vacation & sick leave
- 🩺 Medical insurance & well-being support
- 💰 Competitive compensation
- 🔎 Mature processes and open team culture