29 November 2024

Middle Data Engineer

Remote

We are a product R&D company that creates solutions for the dynamic iGaming Ecosystem.

Our Mission is to create cutting-edge platforms to reinvent the iGaming industry.

Our product is a top-quality iGaming ecosystem built on a broad range of technologies, stacks, and programming languages.


Responsibilities:

  • Provide 24/7 support for existing ETL/ELT pipelines to ensure smooth operation and timely resolution of issues.
  • Administer the Databricks environment, including Data Lake and Data Warehouse management, configuration, and optimization.
  • Develop, optimize, and maintain ETL pipelines to ensure efficient data processing and transformation.
  • Ensure data quality, implementing best practices for data validation, monitoring, and governance.
  • Foster a culture of technical excellence, ensuring alignment with business goals and delivering value to stakeholders.
  • Develop data-intensive applications.

Key Requirements:

Programming Languages & Technologies:

  • Python — Strong skills for developing ETL pipelines, working with large datasets, and automating processes.
  • SQL — Ability to optimize queries and write complex SQL statements (joins, window functions, aggregation).
  • PySpark — Experience with Apache Spark for distributed computing and large-scale data processing (see the sketch after this list).
  • Airflow — Experience with Apache Airflow for building and managing ETL pipelines.
  • Docker
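
To make the Python, SQL, and PySpark bullets concrete, here is a minimal PySpark sketch of a window-function transformation of the kind described above. It is illustrative only: the table, columns, and S3 paths are invented, not part of any real pipeline.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Hypothetical bets table; the schema and S3 paths are placeholders.
bets = spark.read.parquet("s3://example-bucket/raw/bets/")

# Window function: rank each player's daily bets by amount, largest first.
w = Window.partitionBy("player_id", "bet_date").orderBy(F.desc("amount"))

# Keep only each player's largest bet of the day.
top_bets = (
    bets.withColumn("rn", F.row_number().over(w))
        .filter(F.col("rn") == 1)
        .drop("rn")
)

top_bets.write.mode("overwrite").parquet("s3://example-bucket/curated/top_bets/")
```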

Cloud Experience:

  • Any of GCP/Azure/AWS, with AWS preferred — Knowledge of key services for data processing and storage (for AWS, e.g. S3, Redshift, EMR, Lambda; see the sketch after this list).
  • Kubernetes — Experience in orchestrating containerized applications and managing cluster services.
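
As a small, purely illustrative example of the AWS side, the sketch below uses boto3 to land a file in S3 and verify the upload. The bucket, prefix, and file names are hypothetical, and AWS credentials are assumed to be configured in the environment.

```python
import boto3

# Assumes AWS credentials are already configured (env vars, profile, or IAM role).
s3 = boto3.client("s3")

# Upload a local extract into the raw zone of a (hypothetical) data lake bucket.
s3.upload_file(
    "daily_bets.parquet",                       # local file (placeholder)
    "example-data-lake",                        # bucket name (placeholder)
    "raw/bets/2024-11-29/daily_bets.parquet",   # object key (placeholder)
)

# List the prefix to confirm the object landed.
resp = s3.list_objects_v2(Bucket="example-data-lake", Prefix="raw/bets/2024-11-29/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```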

Algorithms and Optimization:

  • Spark — Understanding of distributed computing principles, partitioning, data indexing, and query optimization for improved performance (see the sketch after this list).
  • Databases — Experience with indexing, effective storage methods, and data access strategies.
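
A minimal PySpark sketch of the two ideas above: broadcast the small dimension table so the large fact table is never shuffled, and partition the output so downstream queries can prune files. All dataset and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("join-optimization-sketch").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")          # large fact table (placeholder)
currencies = spark.read.parquet("s3://example-bucket/dim/currencies/")  # small dimension (placeholder)

# Broadcast join: ship the small table to every executor instead of
# shuffling the large one across the cluster.
enriched = events.join(F.broadcast(currencies), on="currency_code", how="left")

# Write partitioned by date so downstream readers can skip irrelevant files
# (partition pruning) instead of scanning the whole dataset.
(
    enriched.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/events_enriched/")
)
```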

Desired Skills:

  • Flink — Familiarity with real-time stream data processing (see the sketch after this list).
  • StarRocks — Experience with StarRocks for analytics and fast query processing.
  • Databricks — Proficiency in using Databricks for building and optimizing ETL processes, utilizing Apache Spark for data analytics.
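
For the Flink item, a smallest-possible PyFlink sketch is shown below, substituting an in-memory collection for a real source such as a Kafka topic; the event format and job name are made up.

```python
from pyflink.common import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# In-memory stand-in for a real stream source (e.g. a Kafka topic).
events = env.from_collection(
    ["bet:10", "bet:25", "ping:0", "bet:7"],
    type_info=Types.STRING(),
)

# Keep only bet events and extract the amount.
amounts = (
    events.filter(lambda e: e.startswith("bet:"))
          .map(lambda e: int(e.split(":")[1]), output_type=Types.INT())
)

amounts.print()            # stdout sink, for the sketch only
env.execute("stream-sketch")
```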


Personal Skills:

  • Self-motivated team player;
  • Reliable in day-to-day development work;
  • Keen to do things better;
  • Strong communication and problem-solving skills;
  • Oriented toward product goals.

Interview Stages:

  1. HR Interview (45 minutes) — Initial conversation to discuss your experience, career goals, and cultural fit.
  2. Technical Task (Optional)
  3. Technical Interview (1.5 hours) — In-depth technical interview covering relevant skills.
  4. Final Interview (1 hour) — A comprehensive discussion with the team, focusing on role-specific competencies and alignment with company values.
  5. Job Offer

You will get:

  • Competitive salary and bonuses.
  • Professional growth.
  • Medical insurance.
  • Wellbeing program.
  • English courses.
  • Team of motivated professionals.