— BS in CS (Master's preferred) with a solid understanding of CS concepts such as common data structures and algorithms;
— Proficiency with Java;
— Proficiency with Python;
— Experience building pipelines for processing large amounts of data;
— Experience with AWS and cloud-based computing such as EC2, ECS, EMR, Kinesis and Redshift;
— Good knowledge of event-driven architecture (EDA), its best practices, and common approaches (in the context of Kafka and RabbitMQ);
— Experience with Kafka and RabbitMQ;
— Experience with AWS SNS is a plus (optional);
— Good knowledge of ETL processes, best practices, and approaches;
— Experience with Docker containers;
— Solid experience with Hadoop (including MapReduce), Hive, and Spark;
— Working with cross-functional teams and business customers is second nature to you.
— Familiarity with Node.js and Scala;
— You have built data pipelines and the infrastructure required to deploy machine learning algorithms and real-time analytics in low-latency environments;
— Experience with Tableau, including report and dashboard development;
— Understanding of build systems and other software configuration tools such as Jenkins, Rundeck, and Ansible;
— Experience building Airflow pipelines;
— General knowledge of other key AWS services such as S3 and IAM;
— Experience with Apache Avro or other similar systems such as Thrift, Protocol Buffers.
— Work on interesting projects across various niches and tech stacks in a dynamic company;
— Flexible work schedule;
— Friendly and engaging professional team environment open for professional growth;
— Sports compensation;
— Medical service;
— Foosball (kicker) and other relaxing activities;
— Regular corporate events and activities;
— Nice office with a beautiful kitchen and delicious buffet in the business center :)
— Develop and maintain scalable data pipelines;
— Gather requirements, scope, architect, develop, build, release, and maintain data-oriented projects for different parts of the organization, ensuring performance, stability, and error-free operation;
— Identify and resolve pipeline issues and discover opportunities for improvement;
— Architect scalable and reliable data solutions to move data across systems from multiple products in near real time;
— Help evaluate new tools and technologies to keep the technology stack at the cutting edge;
— Help debug critical issues in complex designs or coding schemes;
— Monitor existing metrics, analyze data, and partner with other internal teams to solve difficult problems creating a better customer experience.
ITRex Group is a talented and dedicated team of IT professionals with wide experience in delivering superior products and custom solutions.
Our Kiev office now numbers 33 professionals; we are expanding our department and looking for a Data Engineer (ETL Engineer) to join us!