At least 3 years of development experience in Java/Scala
Experience with ETL and building pipelines using Big Data processing techniques
Strong knowledge of the Hadoop Ecosystem
Experience in Data Warehousing technologies and back-end reporting systems
Strong scripting skills to perform data/file manipulation
Experience with automated testing practices
Performing data analysis
Designing, reviewing, implementing, testing, and optimizing ETL processes
Designing and developing big data applications from scratch
Providing development support for existing systems
Troubleshooting data and/or system issues
Creating system documentation
Our Customer is one of the EU’s biggest GameTech companies, owner of one of the largest mobile gambling platforms with millions of users.
The data engineering team is building a data warehouse for the whole company using the latest big data technology in the cloud: Google BigQuery, Google Pub/Sub, Apache Beam, and Apache Airflow.
This is a great opportunity to join the subteam responsible for the DWH, covering data ingestion from multiple sources, efficient data organization, and analytics built on top of it.
Our ideal candidate is a Big Data Engineer who applies the best engineering practices in daily work and is flexible and motivated to work in a fast-paced environment.
We are looking for a developer who can help us integrate new data sources. Structured, clean, and maintainable code is paramount within the team.