January 28, 2022

Big Data Engineer (vacancy closed)

Kyiv

Raiffeisen Bank is a part of the Raiffeisen Bank International AG (Austria), a leading financial entity in 14 countries. In Ukraine, we have more than 2.7 million clients and almost seven thousand colleagues.

Now we are changing. We took off our old-fashioned jacket to transform into a digital partner. Our transformation journey has started. We welcome you to join Raiffeisen Tech, a kind of IT company within the bank.

We are fully responsible for all tech development. During our transformation we are cutting bureaucracy, increasing efficiency, and delivering digital products at high speed. We implement modern engineering practices and work on innovations that build high-quality interaction with our customers.

Sounds interesting? Join the Raiffeisen team!

We will support you professionally and personally:

  • You will work in a large international company that provides opportunities for professional and personal growth
  • Involvement in challenging, large-scale, and diverse projects that have an impact on our customers
  • Knowledge sharing with colleagues from abroad (a strong IT community spanning 14 Raiffeisen Group banks)
  • Great corporate events and team-building activities
  • Competitive salary and bonuses for your efforts and contribution
  • Stability and a social package that includes 28 days of paid vacation and medical insurance

Your responsibilities:

  • Collaborate with data and analytics experts to strive for greater functionality in our data systems
  • Design, build, and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL and AWS big data technologies (DevOps & continuous integration)
  • Drive the advancement of the data infrastructure by designing and implementing the logic and structure for how data is set up, cleansed, and ultimately stored for organizational use
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Build data integrations from various sources and technologies into the data lake infrastructure as part of an agile delivery team
  • Monitor system capabilities and react to unplanned interruptions, ensuring that environments are provisioned and loaded on time

Preferred qualifications:

  • Minimum 3 years' experience in a dedicated data engineering role
  • Experience working with large structured and unstructured data sets in various formats
  • Knowledge of or experience with streaming data frameworks and distributed data architectures (e.g. Spark Structured Streaming, Apache Beam, or Apache Flink)
  • Experience with cloud technologies (preferably AWS or Azure)
  • Experience with cloud services (Dataflow, Dataproc, BigQuery, Pub/Sub)
  • Hands-on experience with the Big Data stack: Hadoop, HDFS, Hive, Presto, Kafka
  • Experience with Python for creating ETL data pipelines
  • Experience with Data Lake / Data Warehouse solutions (AWS S3 / MinIO)
  • Development skills in a Docker / Kubernetes environment
  • Intermediate level of English (or higher)
  • Open, team-minded personality and communication skills
  • Willingness to work in an agile environment
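The qualifications above mention Python for building ETL data pipelines. As a rough illustration of what that means in practice, here is a minimal sketch of an extract-transform-load flow. All names and the sample data are hypothetical; a real pipeline at this scale would read from sources like S3 or Kafka and write to a data lake or warehouse rather than an in-memory SQLite database:

```python
import csv
import io
import sqlite3

# Hypothetical raw input; in production this would come from S3, Kafka, etc.
RAW_CSV = """client_id,balance,currency
1001,2500.50,UAH
1002,,UAH
1003,130.00,EUR
"""

def extract(raw: str) -> list[dict]:
    """Parse CSV text into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Drop rows with missing balances and cast fields to proper types."""
    cleaned = []
    for row in rows:
        if not row["balance"]:
            continue  # skip incomplete records
        cleaned.append((int(row["client_id"]), float(row["balance"]), row["currency"]))
    return cleaned

def load(records: list[tuple], conn: sqlite3.Connection) -> int:
    """Insert cleaned records into a warehouse-style table; return row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS balances "
        "(client_id INTEGER, balance REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO balances VALUES (?, ?, ?)", records)
    return conn.execute("SELECT COUNT(*) FROM balances").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(transform(extract(RAW_CSV)), conn)
print(loaded)  # the row with a missing balance is filtered out
```

The same extract/transform/load shape scales up when the functions are swapped for connectors to real sources and sinks and the steps are orchestrated by a scheduler.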
