• Experience building cloud-native data engineering solutions on GCP or AWS;
• 2+ years of development experience with Big Data technologies;
• ETL scripting in Python;
• Experience building telemetry data ingestion pipelines in GCP/AWS, including application monitoring, performance monitoring and network monitoring logs;
• Background in building data integration applications using Spark or MapReduce frameworks;
• Track record of producing software artifacts of exceptional quality by adhering to coding standards, design patterns and best practices;
• Strong background in SQL, BigQuery and Pub/Sub;
• High proficiency with Git, automated builds and CI/CD pipelines;
• Knowledge of Terraform scripting (adding new datasets, buckets, IAM) is a plus;
• Intermediate level of English or higher.
• Challenging tasks & professional growth;
• A friendly, experienced team;
• Up to 50% reimbursement of educational course and conference fees for professional growth;
• Free English classes;
• 23 business days of paid leave;
• Fascinating corporate events;
• Gifts for significant life events.
• Maintain and support existing ingestion pipelines;
• Optimize queries based on performance testing;
• Design and implement new ingestion pipelines that bring in data from external sources (HTTPS, SFTP) or internal sources (JDBC, HTTP, MQTT);
• Work with application engineers and product managers to refine data requirements;
• Implement and test fine-grained access control (per dataset, per column).
Our company is looking for a Python developer to join a global industrial project. You will take on data engineering tasks with one of the largest B2B data sets in the world, using cutting-edge technologies.