— About 3 years of experience implementing integration components and applications, especially in big data / cloud projects or systems integration
— 2 or more years of Hadoop experience (Hortonworks or Cloudera preferred; AWS Cloud) building data ingestion and transformation
— Hands-on experience in data discovery, data blending, and data cleansing for analytical purposes across various data sources
— Professional experience in designing and developing solutions in Python
— Profound experience with CD/DevOps methodology and a good overview of related tools and toolchains
— Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency and workload management
— Fluent English; German or another CEE language is appreciated, but not mandatory
— An open, team-minded personality and strong communication skills
— Willingness to work in an agile environment
— Willingness to travel (Austria/Vienna)
— Experience with real-time data processing technologies such as Kafka
— Profound experience in Data Engineering
— Understanding of banking business in general
— Join our dynamic and motivated team in one of the leading banking groups in Austria and Central and Eastern Europe
— Strong support from an international banking & technology team
— Competitive salary (based on EUR NBU rate)
— Long-term official employment in a sustainable and stable working environment; sick leave and paid vacation (31 days per year)
— Medical insurance
— Dedication and commitment to developing and educating our employees
— Possibility to travel and to contribute to and benefit from international communities
— Comfortable work conditions
— Collaborate with data and analytics experts to strive for greater functionality in our data systems
— Design, use and test the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (DevOps & Continuous Integration)
— Drive the advancement of RBI data infrastructure by designing and implementing the underlying logic and structure for how data is set up, cleansed, and ultimately stored for organizational usage
— Assemble large, complex data sets that meet functional / non-functional business requirements
— Build data integration from various sources and technologies to the data lake infrastructure as part of an agile delivery team
— Monitor capabilities and react to unplanned interruptions, ensuring that environments are provisioned and loaded on time
— Manage incidents reported by data deliverers or data consumers and provide service reporting
We are building RBI's next-generation data lake as the backbone for data science and advanced analytics within RBI Group across Europe.
As a Cloud Engineer you will work hand in hand with data scientists, big data architects, and data consumers across CEE to plan and execute data integration. You will be in charge of data definition and of mapping raw data to final data products running on the data lake ecosystem. Additionally, as classic ETL evolves, you will take part in rapid prototyping, visualization, and BI projects for big data consumers.