1. 1+ year of experience in data engineering;
2. Strong knowledge of Python for building ETL data pipelines;
3. Strong knowledge of pandas/NumPy;
4. Strong knowledge of SQL databases;
5. English (upper-intermediate level, written and spoken);
1. Experience with PySpark;
2. Experience with workflow systems such as Airflow/Luigi;
3. Experience with Message Brokers (RabbitMQ, Kafka, etc.);
4. Experience with Machine Learning;
1. Competitive compensation depending on skills and experience level;
2. A personal MacBook Pro;
3. Varied projects and tasks that help you grow professionally and advance your career;
4. Flexible working hours and flexible approach to work;
5. Paid vacation and sick days, national holidays;
6. English classes;
7. Corporate events and parties;
8. A team that really rocks.
The product helps enterprise leaders align the cost of software-driven product creation with speed to revenue. It is built on an industry-first, event-driven resource relationship network that maps every resource to business value, unlocking significant revenue and market-growth potential for enterprises.