22 October 2021

Middle / Senior Data Engineer (PySpark) (vacancy inactive)

remote

Timezone: CET (+/- 3 hours)

Proxify is a Swedish IT company experiencing intense growth. We match remote IT professionals with companies in Sweden and abroad. What sets us apart is that we make sure the remote workers we present are the very best in their field, and that we get it right the first time, every time!

We are growing fast and are currently looking for a Middle / Senior Data Engineer.

We are building a team of the best Data Engineers, and several Data Engineering projects are starting soon. We are looking for Middle / Senior PySpark Data Engineers to join, on a remote basis, the friendly Engineering team of our client. You will work within one of their squads, dealing with data ingestion and storage processes, a customer ML analytics platform, and insights and visualization dashboards.

They are looking for individuals who are passionate about technology, sensitive to client needs, and who collaborate well in a team environment.

Requirements:

  • You have 5+ years of experience with Python, building robust and resilient data pipelines using Apache Spark;
  • You have 3+ years of experience with Cassandra, PostgreSQL, Elasticsearch, S3, and HDFS as the main data stores;
  • Pair programming, testing, and continuous integration and deployment as their daily development practices;
  • Kafka and SQS as distributed messaging systems;
  • GitHub as the platform for the Git repository, issue tracking, continuous integration, and package registry;
  • Docker, Nomad, Terraform, Consul, and Vault in the operational stack;
  • Spark and PySpark for transforming multi-terabyte data sets into valuable insights (see the sketch after this list);
  • Upper-intermediate English level.
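
To give candidates a concrete flavour of the pipeline work described above, here is a minimal PySpark sketch; the bucket, paths, and column names are hypothetical, chosen purely for illustration and not taken from the client's systems:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical pipeline: aggregate raw events from S3 into daily
# per-customer insights. Bucket, paths, and column names are invented
# for illustration only.
spark = SparkSession.builder.appName("daily-insights").getOrCreate()

# Read raw events from object storage (illustrative path).
events = spark.read.parquet("s3a://example-bucket/raw/events/")

daily = (
    events
    .withColumn("day", F.to_date("event_timestamp"))
    .groupBy("day", "customer_id")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("session_id").alias("sessions"),
    )
)

# Partitioning by day keeps downstream dashboard queries cheap.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://example-bucket/insights/daily/"
)
```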

Skills:

  • You must have PySpark functional programming experience (illustrated after this list);
  • You must be able to produce and transform data sets and move them into production;
  • You must be able to manage massive amounts of data;
  • You must have experience working with advanced analytics or data science systems;
  • You have Microservices architecture and testing practices experience;
  • You understand distributed application architecture;
  • You have experience with using CI/CD tools;
  • You feel comfortable in an Agile/Lean environment;
  • You can communicate well and are willing to learn, listen and share knowledge with your teammates;
  • Nice-to-have skills: Scala.
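
As a rough illustration of the functional-programming style mentioned at the top of this list, the sketch below composes small, pure DataFrame transformations via DataFrame.transform (available since Spark 3.0); all column names and data are invented for the example:

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

# Pure transformation functions: each takes a DataFrame and returns a
# new one, so each can be unit-tested in isolation. Column names are
# invented for illustration.
def only_completed(df: DataFrame) -> DataFrame:
    return df.filter(F.col("status") == "completed")

def with_total(df: DataFrame) -> DataFrame:
    return df.withColumn("total", F.col("price") * F.col("quantity"))

spark = SparkSession.builder.appName("transform-demo").getOrCreate()
orders = spark.createDataFrame(
    [("a1", "completed", 9.99, 2), ("a2", "pending", 5.00, 1)],
    ["order_id", "status", "price", "quantity"],
)

# DataFrame.transform chains the functions declaratively, keeping the
# pipeline readable and each step independently testable.
result = orders.transform(only_completed).transform(with_total)
result.show()
```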

Responsibilities:

  • You will face challenging problems and will need to find new solutions for them;
  • You’ll help give back to the open-source community by contributing to the tools they use, as well as playing a role in releasing new ones for the benefit of everybody!
  • Pair programming, peer review.

What we offer

  • 💻100% remote work (work from where you want);
  • 💪We pay for overtime (over 8 hours);
  • 👌🏻The ability to switch to another project;
  • 💵Competitive compensation and performance-based increases;
  • 🧘🏻‍♂️Very flexible working schedule;
  • 🚀Opportunities for professional development and personal growth;
  • 🐕If you’re based in Kyiv and want to work from our office in the city center, you can bring your little friend.
