- 3+ years of experience as a DevOps engineer
- Extensive experience as a Linux system administrator supporting enterprise computing platforms and systems
- Knowledge of container orchestration frameworks (Kubernetes, ECS/Fargate, Docker Swarm, Mesos, etc.)
- Strong coding and troubleshooting experience in any of the following languages: shell scripting, Python, TypeScript, Java, or Go
- Extensive experience with build tools: Maven, Gradle, Ant
- Admin-level experience with tools like Git, Artifactory, and Nexus
- Working experience implementing, operating, and optimizing CI/CD pipelines (Jenkins, CircleCI, etc.)
- Prior experience developing and maintaining monitoring and reporting frameworks that produce artifacts supporting security and compliance needs
- Ability to work with APIs and plugins to integrate security tools like WhiteSource, Xray, or Black Duck (Synopsys) into established CI pipelines
- Good communication skills in English (verbal, written)
Would be a plus:
- Experience with Terraform and Ansible
- Experience with Cloud platforms
- Working knowledge of Hadoop and its components (Spark, Hive) / understanding of distributed computation concepts
About the project:
One of the biggest companies in the Hadoop world provides the industry’s only converged data platform, integrating the power of Hadoop and Spark with global event streaming, real-time database capabilities, and enterprise storage. We are looking for talented engineers to help expand the platform’s functionality and make it the platform of choice for operational and analytic big data use cases.