What you will do:
You will work in a variety of settings to build systems that collect, manage, and convert raw data into usable information for other systems, data scientists, and business analysts to interpret.
You will make data accessible so that organizations can use it to evaluate and optimize their performance.
Candidates need strong experience with Python, Apache Airflow, and Apache NiFi. Experience with AWS or Azure is required.
Requirements:
- Hands-on experience with software development, data engineering, and systems architecture.
- Experience designing, developing, and maintaining high-performance, low-latency, large-scale data pipelines.
- Experience with SQL optimization and performance tuning, and development experience in programming languages such as Python, PySpark, and Scala.
- Cloud data engineering experience in Azure and/or AWS
- Experience building and operating highly available, distributed systems for extracting, ingesting, and processing large data sets
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with modern data technologies such as SQL and NoSQL databases, REST/GraphQL APIs, data event streaming technologies, and in-memory data storage.
- Experience with version control systems such as Bitbucket/GitHub, and with deployment and CI/CD tools
- Excellent communication skills, both verbal and written
We offer:
- Flexible schedule and opportunities to work remotely (8-hour workday)
- Attendance at professional conferences, summits, workshops, and seminars (70% of the cost covered by the company)
- The opportunity to influence the development of the project
- A friendly, professional team and a warm atmosphere
- An environment where you can implement your ideas
- 18 days of paid vacation and up to 10 sick days annually
- Medical insurance and gym membership
- Participation in educational activities and thematic conferences
- Team parties and corporate events