Our project is to build the company's data platform, where all of its analytical data will be consolidated. This is a great opportunity to take part in launching and operating a large Kubernetes/Spark/S3 cluster and one of the most interesting BI practices in Eastern Europe.
Responsibilities
- Support and develop the data platform (Kubernetes, Spark, Kafka, Airflow, AWS S3).
- Support a team of data engineers and analysts.
Skills
- Understanding of the advantages of GitOps/IaC over manual operations.
- At least 4 years of DevOps experience.
- Kubernetes, Kustomize, Helm, ArgoCD.
- Docker, BuildKit.
- TeamCity, Azure DevOps or similar CI/CD tool.
- Experience with at least one popular programming language, such as Java, Kotlin, Python, Go, or Scala.
- OAuth, OpenID Connect, Keycloak.
Will be a plus
- Kerberos, Active Directory, SSSD.
- Apache Spark.
- Security in K8s, Hashicorp Vault.
- Ansible (or a similar tool).
- Argo Workflows, Argo Events, Argo Rollouts.
- TeamCity, Kotlin DSL.
- AWS, Azure.
Technologies that we use
- Kubernetes, Kustomize, Helm, ArgoCD, Cilium, Longhorn, Prometheus, Grafana.
- TeamCity, Kotlin DSL.
We offer
- Opportunity to work on a large-scale project from scratch.
- Remote-friendly: we are not tied to an office.
- Health insurance.
- Reimbursement for sports clubs and foreign-language courses.
- Internal training (IT and beyond).