February 1, 2023

Data Engineer (vacancy inactive)


Newage is a product-led company connecting people and technology to make our teams and clients succeed. The product teams inside Newage develop platforms for fintech, iGaming, insurtech, and other businesses.

One of our main products is an iGaming platform based on a cloud-native SaaS B2B approach. It has been in production for 5 years and is used by several dozen international brands.

JVM is the main stack for developing microservices.

Our teams follow the main principles of the Scrum framework to deliver the best software product in the shortest time.


Requirements

  • 2+ years of experience in Data Engineering;
  • Proficient with SQL and RDBMS, preferably PostgreSQL;
  • Professional experience using Python for data processing purposes;
  • Experience designing and monitoring data pipelines;
  • Strong data mining/processing skills;
  • Understanding of Data Warehouse design principles;
  • Hands-on experience designing ETL/ELT data pipelines and exposing processed data to end users.
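The posting does not describe the actual pipelines; purely as an illustration of the hands-on ETL work listed above, here is a minimal extract-transform-load sketch in Python (SQLite stands in for PostgreSQL, and the table and field names are invented for the example):

```python
import sqlite3

def extract():
    """Extract step: pull raw records from a source (hard-coded here for illustration)."""
    return [
        {"user_id": 1, "amount": "10.50", "currency": "usd"},
        {"user_id": 2, "amount": "3.25", "currency": "eur"},
    ]

def transform(rows):
    """Transform step: normalize types and casing before loading."""
    return [(r["user_id"], float(r["amount"]), r["currency"].upper()) for r in rows]

def load(rows, conn):
    """Load step: write the cleaned rows into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments (user_id INTEGER, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

In a real pipeline the extract step would read from an external source and the load step would target PostgreSQL; the three-stage shape stays the same.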


Responsibilities

  • Architect, design, and implement ETL pipelines for multiple data sources, ingesting massive amounts of data daily;
  • Develop and maintain new projects and tools around the data warehouse;
  • Optimize, improve, and refactor current applications where needed;
  • Collaborate with software engineers to capture, format, and prepare data for various purposes;
  • Develop and maintain the API interfaces needed to access the data.

Will be a plus

  • Experience with ElasticSearch or other NoSQL data stores;
  • Experience with streaming-based, real-time Data Warehouses; we are mostly interested in Apache Kafka, KSQL, Spark, or similar;
  • Familiarity with CI/CD using Jenkins pipelines or similar tools;
  • Experience with data synchronization tools/processes between heterogeneous data sources (SQL, NoSQL, DMS, Debezium, ElasticSearch, etc.);
  • Degree in Data Science, Statistics, Applied Math, Econometrics, Computer Science, or other related fields.
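As a rough illustration of the data-synchronization point above, the following sketch applies Debezium-style change events ("c" = create, "u" = update, "d" = delete) to keep a target store in sync with a source. The event shape and stores are heavily simplified, and the field names are assumptions for the example, not the project's actual format:

```python
def apply_event(store, event):
    """Apply one simplified change event to an in-memory target store."""
    op, key = event["op"], event["key"]
    if op in ("c", "u"):       # create or update: take the "after" image of the row
        store[key] = event["after"]
    elif op == "d":            # delete: drop the key if present
        store.pop(key, None)
    return store

# A hypothetical change log, as a real CDC tool might emit it.
events = [
    {"op": "c", "key": 1, "after": {"name": "alice"}},
    {"op": "c", "key": 2, "after": {"name": "bob"}},
    {"op": "u", "key": 1, "after": {"name": "alicia"}},
    {"op": "d", "key": 2, "after": None},
]

target = {}
for e in events:
    apply_event(target, e)
# target now holds only key 1 with the updated row
```

Real CDC events (e.g. from Debezium) carry much more metadata, but the replay loop over an ordered change log is the core idea.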

Technical details of the project:

  • Reactive microservice architecture with more than 200 microservices in production;
  • Event-driven architecture that uses Apache Kafka as event storage;
  • gRPC for low-latency connections between critical services;
  • JVM as the main stack for developing microservices;
  • Scala as the main programming language;
  • Akka (Actor, Typed, Cluster, Persistence);
  • Cats, ZIO, Monocle, Magnolia;
  • Slick, Elastic4S;
  • PostgreSQL, Elasticsearch, ClickHouse as the data warehouse;
  • Developing each microservice evolutionarily, we select the most modern, optimal stack and approaches;
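As an illustration of the event-driven approach above, the sketch below rebuilds state by folding over an ordered event log, with a plain Python list standing in for a Kafka topic (the event names and amounts are invented for the example):

```python
def apply(balance, event):
    """Fold one event into the current state (an account balance here)."""
    if event["type"] == "deposit":
        return balance + event["amount"]
    if event["type"] == "withdrawal":
        return balance - event["amount"]
    return balance  # ignore unknown event types

# A hypothetical event log; in the platform this would be a Kafka topic.
event_log = [
    {"type": "deposit", "amount": 100},
    {"type": "withdrawal", "amount": 30},
    {"type": "deposit", "amount": 5},
]

balance = 0
for e in event_log:
    balance = apply(balance, e)
# balance == 100 - 30 + 5 == 75
```

Because state is derived from the log rather than stored directly, any consumer can rebuild it by replaying events from the beginning, which is what makes the log usable as event storage.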

Delivery Process:

  • AWS hosting;
  • AWS RDS for persistent storage;
  • Kubernetes;
  • Continuous integration and continuous delivery based on Jenkins;
  • GitOps for deployment in a multi-version environment;
  • Grafana and Prometheus for monitoring;
  • Monitoring and alerting use both business metrics and infrastructure metrics;

We take care of you:


  • Health insurance coverage;
  • 20 paid working days of vacation;
  • 10 days of paid sick leave;


  • Paid lunches in the office;
  • Full support with all PE-related activities;
  • Competitive salary and encouragement for your efforts and contribution;
  • Financial support in critical situations;

Professional development

  • Transparent career path and growth opportunities;
  • Internal and external learning activities;
  • Corporate English courses;
  • Opportunity to attend paid conferences, events, etc.