Qualifications:
— A degree in computer science or a related field
— 3+ years of DevOps experience
— Scripting skills, e.g., PowerShell, Bash, Python
— Experience with cloud technologies and services (Google Cloud, Microsoft Azure, AWS) is a must
— 2+ years of experience with distributed data processing systems (e.g., Hadoop, Spark)
— Experience with Docker and at least one orchestration tool (Kubernetes preferred)
— Experience with graph databases is desirable
— Good command of semantic search engines (Elasticsearch, Apache Solr) is a plus
General Requirements:
— Work in a collaborative team in an Agile environment focused on the full software life cycle
— Build new products from the ground up with high performance, scalability, and reliability
— Experience and discipline to write efficient, maintainable, robust, well-tested, and documented code
— Experience with enterprise data processing software
— Understanding of and experience with high-availability, high-performance, multi-site, and hybrid systems
We Offer:
— On-the-job training and opportunities for professional growth
— Flexible working hours
— 20 paid vacation days
— English lessons
— The opportunity to work with one of the major US providers
Responsibilities:
— Take part in environment architecture planning
— Deploy and support analytical cloud solutions
— Improve the security and performance of the infrastructure
— Automate continuous delivery and other processes on the project
— Consult the development team on infrastructure-related questions
The project is a good fit for those eager to take on the challenge of building a flexible, scalable platform that enriches user insights through data discovery, comprehensive data analysis, and knowledge management across a broad range of resources. If you are passionate about intelligent algorithms, big data processing, and high-performance programming, this project is the right choice for you.