Since 1993, EPAM has been helping global leaders design, develop, and deliver software that changes the world.
November 23, 2021

Senior Big Data Engineer

Lviv, Ivano-Frankivsk, Lutsk, Rivne, Ternopil, Uzhhorod, Chernivtsi

Our Customer is a multinational retail corporation that operates a chain of hypermarkets and grocery stores. As the world’s largest grocer, they provide convenient access to affordable food and other products to people around the world. And they do that in ways that help create economic opportunity, advance long-term environmental sustainability, and strengthen local communities. Our customer is the world’s largest company by revenue according to the 2019 Fortune Global 500 list, and regularly tops the Global Powers of Retailing ranking compiled by Deloitte.

We are looking for an exceptional Senior Software Engineer to help build and run the world-class data platform that powers streaming data generated by a fast-moving landscape. This team places an emphasis on producing robust, scalable, self-service solutions that enable Engineers and Analysts to perform complex streaming transformations across a wide array of use cases to power insights, decisions, and machine learning.
This role is responsible for using industry-best technologies and practices to enable analysis, intelligence, and processing of some of the largest datasets in near real time. As a Senior Software Engineer, you will be counted on to have a deep understanding of technology internals in order to tune and troubleshoot individual jobs, as well as a high-level understanding of the landscape to drive value-adding features to the platform. This role will take on the following responsibilities:


  • Collaborate with Product Owners and Team Leads to identify, design, and implement new features to support the growing real time data needs
  • Assist and mentor Junior Engineers in troubleshooting and tuning high-volume, distributed applications, primarily on Spark
  • Identify cases where we diverge from industry best practices and suggest or implement remediation
  • Evangelize and practice an extremely high standard of code quality, system reliability, and performance to ensure SLAs are met for uptime, data freshness, data correctness, and quality
  • Display a sense of ownership over assigned work, requiring minimal direction and driving it to completion in a sometimes fuzzy and uncharted environment
  • Focus on enabling developers and analysts through self-service and automated tooling, rather than handling manual requests and acting as a gatekeeper


Requirements:

  • Experience running, using, and troubleshooting industry-standard data technologies such as Spark, HDFS, Cassandra, and Kafka
  • Deep development experience, ideally in Scala, but we are open to other backgrounds if you’re willing to learn the languages we use
  • Proficient scripting skills, e.g. Bash, Python, or Ruby
  • Experience processing large amounts of structured and unstructured data in streaming and batch
  • Experience with cloud infrastructure. We use Azure specifically, but any other cloud experience should work as well
  • A focus on automation and providing leverage-based solutions to enable sustainable and scalable growth in an ever-changing ecosystem
  • Experience building and maintaining a centralized platform or services consumed by other teams is ideal but not necessary
  • A passion for Operational Excellence and SRE/DevOps mindset, including an eye for monitoring, alerting, self-healing, and automation
  • Experience in an Agile environment, able to manage scope and iterate quickly to consistently deliver value to the customer


We offer:

  • Competitive compensation depending on experience and skills
  • Individual career path
  • Unlimited access to LinkedIn learning solutions
  • Social package: medical insurance, sports
  • Paid sick leave and regular vacations
  • English classes with native speakers (certified English teachers)
  • Flexible work hours