— 2+ years of Scala development for web or Big Data solutions
— Knowledge of SQL and NoSQL databases (e.g., MySQL/PostgreSQL, Cassandra/MongoDB)
— Knowledge and understanding of modern code practices and patterns
— Knowledge of either Play Framework or Apache Spark
— Experience with Docker
— Experience with Kubernetes
— Experience working with cloud-based infrastructures (AWS, Google Cloud, Microsoft Azure)
— Experience working with Linux systems (e.g., Ubuntu)
— Experience working with Agile methodologies
— A strong CS background
— Intermediate (B1) or higher level of English
— Experience with any of the following technologies: Cats, Scalaz, Akka (including Akka Streams, Akka Persistence, etc.)
— Experience writing CI/CD pipelines with Jenkins, GitLab CI/CD, etc.
— Knowledge of the Hadoop ecosystem
— Leadership within a team of developers
— Interesting projects
— Competitive salary
— Opportunities for professional development
— Friendly team
— Gorgeous corporate events
— Paid vacation, holidays, and sick leaves
— Free English classes with a native speaker
— Car and bicycle parking
— All IT-native perks (flexible hours, tea, coffee, discounts at the Metallist gym)
— Cozy office (5 min from Zakhysnykiv Ukrainy or 10 min from Sportyvna metro station)
— Participate in the development of reactive Scala-based applications
— Participate in the development of ETL pipelines using Apache Spark as the main framework
— Estimate projects according to customer preferences and requirements
— Perform code review
— Communicate with the customer to deliver the best possible product
— Take ownership of the features you commit to delivering
We offer a choice between two projects, so you can join the one that interests you more.
If you choose the first project, you will work on an actively developed cloud-based Big Data project aimed at automating the data processing pipeline. The customer is a leader in their domain with more than 80 years of experience. The project is in the MVP stage and will be released to production soon.
You will use the following technologies: Play Framework, Apache Spark, Slick, PostgreSQL, Apache Impala, and Docker. Your role will focus on adding new features to the Play Framework-based service and maintaining it.
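To give a rough sense of this kind of work, a new feature on a Play Framework-based service typically starts as a small JSON endpoint like the sketch below. This is purely illustrative: the controller name, route parameter, and response fields are hypothetical, not taken from the actual project.

```scala
import javax.inject.Inject
import play.api.mvc.{AbstractController, ControllerComponents}
import play.api.libs.json.Json

// Hypothetical endpoint: report the status of a data-processing run as JSON.
class PipelineController @Inject()(cc: ControllerComponents)
    extends AbstractController(cc) {

  // GET /pipelines/:runId/status (route definition would live in conf/routes)
  def status(runId: String) = Action {
    Ok(Json.obj("runId" -> runId, "state" -> "RUNNING"))
  }
}
```

In a real service, the controller would delegate to an injected service layer (backed here by Slick and PostgreSQL) rather than return a hard-coded state.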
If you choose the second project, you will join a mature financial project aimed at simplifying personal finance management for investments and analytics. This project is also in the MVP stage.
Here you will use the following technologies: Play Framework, Apache Spark, MySQL, and Slick. As a Scala engineer, you will write ETL pipelines that retrieve financial data from data providers and load it into internal storage, and you will also help write and optimize the APIs for accessing this data.
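The ETL work described above can be sketched as a small Spark job: extract provider data, transform it into a clean shape, and load it into the internal MySQL store. Everything specific in this example is an assumption for illustration only: the input path, column names, JDBC URL, and table name are hypothetical.

```scala
import org.apache.spark.sql.{SparkSession, SaveMode}
import org.apache.spark.sql.functions._

object QuoteEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("quote-etl")
      .master("local[*]") // assumption: local run; a real job would use a cluster master
      .getOrCreate()
    import spark.implicits._

    // Extract: hypothetical JSON drop from a data provider.
    val raw = spark.read.json("s3a://provider-bucket/quotes/")

    // Transform: normalize types and deduplicate before loading.
    val quotes = raw
      .select(
        $"symbol",
        $"price".cast("decimal(18,4)").as("price"),
        to_date($"ts").as("trade_date"))
      .dropDuplicates("symbol", "trade_date")

    // Load: append into the internal MySQL store via JDBC (hypothetical DSN and table).
    quotes.write
      .format("jdbc")
      .option("url", "jdbc:mysql://internal-db:3306/finance")
      .option("dbtable", "daily_quotes")
      .mode(SaveMode.Append)
      .save()

    spark.stop()
  }
}
```

The same shape applies whichever provider feeds the pipeline; only the extract and transform steps change per source.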