• Strong experience building cloud-native data engineering solutions on GCP or AWS;
• 2+ years of development experience with Big Data technologies;
• Prior experience building telemetry data ingestion pipelines in GCP/AWS, including application monitoring, performance monitoring, and network monitoring logs;
• Background in building data integration applications using Spark or MapReduce frameworks;
• Track record of producing high-quality software artifacts by adhering to coding standards, design patterns, and best practices;
• Strong background in SQL / BigQuery / PubSub;
• Experience with GCP products such as BigQuery, Cloud Composer, Data Fusion, GCS, and GKE, or corresponding technologies on the AWS platform;
• High proficiency with Git, automated builds, and CI/CD pipelines;
• Working knowledge of Terraform scripting (adding new datasets, buckets, IAM policies);
• Intermediate level of English or higher.
Challenging tasks & professional growth;
A friendly experienced team;
Up to 50% reimbursement of educational courses and conference fees for professional growth;
Free English classes;
23 business days of paid leave;
Gifts for significant life events;
Opportunity to grow into people management.
Maintain and support existing ingestion pipelines;
Optimize queries based on performance testing;
Design and implement new ingestion pipelines that bring in data from external data sources (HTTPS, SFTP) or internal data sources (JDBC, HTTP, MQTT);
Work with application engineers and product managers on refining data requirements;
Implement and test fine-grained access control setup (per dataset, per column).
Our company is looking for a Data Engineer to join a global industrial project. You will work with one of the largest B2B data sets in the world using cutting-edge technologies.