Datagrok is a US-based product startup developing a next-generation web-based integrated data analytics platform that provides a unified experience for data access, data augmentation, exploratory data analysis, advanced visualizations, scientific computations, machine learning, security, governance, and collaboration.
April 15, 2025

Sr. DevOps Engineer / Data Architect

Remote

Location: Remote (minimum 4-hour EST overlap required)

Type: Full-time

Start Date: ASAP

About Us

We are building a browser-based data analytics platform with computational capabilities for biopharma.

We are looking for a Senior DevOps Engineer to own infrastructure design, automate deployments, and optimize performance at scale. This is a DevOps-first role, but if you have experience with systems architecture, high-performance or scientific computing, or data engineering, that’s a huge plus.

What you’ll do

DevOps & infra engineering (80% focus)

  • Design, automate, and maintain scalable cloud & on-prem infrastructure (AWS, GCP, Kubernetes)
  • Build and optimize CI/CD pipelines (GitHub Actions, Jenkins)
  • Enhance system observability and monitoring (see the sketch after this list)
  • Ensure security, performance, and fault tolerance across all environments
  • Simplify deployment and infrastructure management to keep maintenance minimal
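
To give a flavor of the observability work, here is a minimal sketch of a custom Prometheus exporter written with the prometheus_client Python library. The metric names, the scrape port, and the simulated deployment loop are hypothetical, not part of our actual stack; they only illustrate the kind of instrumentation this role owns.

```python
# Minimal Prometheus exporter sketch (hypothetical metric names and port).
# Exposes a /metrics endpoint that a Prometheus server could scrape.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical metrics for a deployment-health exporter.
DEPLOY_DURATION = Gauge(
    "deploy_duration_seconds", "Duration of the last deployment in seconds"
)
FAILED_DEPLOYS = Counter(
    "deploy_failures_total", "Total number of failed deployments"
)

def record_deploy(duration_seconds: float, succeeded: bool) -> None:
    """Record the outcome of a single deployment run."""
    DEPLOY_DURATION.set(duration_seconds)
    if not succeeded:
        FAILED_DEPLOYS.inc()

if __name__ == "__main__":
    start_http_server(9100)  # port chosen arbitrarily for the sketch
    while True:
        # Simulated deployments so the endpoint has data to expose.
        record_deploy(
            duration_seconds=random.uniform(30, 120),
            succeeded=random.random() > 0.1,
        )
        time.sleep(15)
```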

Performance optimization & systems engineering (15% focus)

  • Architect and prototype an OLAP solution that syncs with arbitrary external databases
  • Architect the caching solution for scientific computations (sketched after this list)
  • Optimize database performance and backend services
  • Design and refine cloud-based data processing architectures for analytics pipelines
  • Work with backend teams to ensure smooth integrations between applications & infrastructure
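
As a rough illustration of the caching problem above: for deterministic scientific computations, results can be keyed by a content hash of the function and its inputs, so repeated calls with identical parameters skip recomputation. The sketch below uses an in-memory dict and a JSON-based key scheme purely as assumptions; the real backing store and key design are part of what you would architect.

```python
# Content-addressed cache sketch for deterministic computations.
# The in-memory dict stands in for a real backing store (e.g. Redis or S3).
import hashlib
import json
from typing import Any, Callable, Dict

_CACHE: Dict[str, Any] = {}

def _cache_key(func_name: str, params: Dict[str, Any]) -> str:
    """Derive a stable key from the function name and its parameters."""
    payload = json.dumps({"func": func_name, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def cached_compute(func: Callable[..., Any], **params: Any) -> Any:
    """Return a cached result if present, otherwise compute and store it."""
    key = _cache_key(func.__name__, params)
    if key not in _CACHE:
        _CACHE[key] = func(**params)
    return _CACHE[key]

# Example: an expensive, deterministic computation.
def simulate(steps: int, dt: float) -> float:
    return sum(dt * i for i in range(steps))

first = cached_compute(simulate, steps=1_000_000, dt=0.01)   # computed
second = cached_compute(simulate, steps=1_000_000, dt=0.01)  # served from cache
assert first == second
```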

Data infrastructure (5% bonus, not required)

  • Integrate and optimize data warehouses (ClickHouse, Redshift, Snowflake)
  • Optimize query performance and storage for large-scale analytics (see the sketch below)
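
Purely as an illustration of the warehouse-tuning work, the sketch below uses the clickhouse-driver Python client to create a table whose partitioning and sort key match a typical time-ranged, per-experiment analytics query. The host, table schema, and column names are hypothetical.

```python
# Hypothetical ClickHouse table tuned for time-series analytics queries.
# Requires the clickhouse-driver package and a reachable ClickHouse server.
from clickhouse_driver import Client

client = Client(host="localhost")  # placeholder host

# Partition by month and order by (experiment_id, ts) so typical
# per-experiment, time-ranged queries touch as few parts as possible.
client.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        ts            DateTime,
        experiment_id UInt64,
        metric        LowCardinality(String),
        value         Float64
    )
    ENGINE = MergeTree
    PARTITION BY toYYYYMM(ts)
    ORDER BY (experiment_id, ts)
""")

# A query aligned with the sort key can skip most data granules.
rows = client.execute(
    "SELECT metric, avg(value) "
    "FROM measurements "
    "WHERE experiment_id = %(id)s AND ts >= now() - INTERVAL 7 DAY "
    "GROUP BY metric",
    {"id": 42},
)
```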

Some tech you will be working with

AWS (ECS, EKS), CloudFormation, Terraform, Jenkins, GitHub Actions, Prometheus, CloudWatch, Docker, Kubernetes, ArgoCD, Python, Bash

What we’re looking for

We don’t expect you to check every box, but you should be comfortable solving DevOps and software engineering problems at scale.

Location: Remote (GMT to GMT+3), with a minimum 4-hour EST overlap required

Must-have skills

  • 5+ years of experience in DevOps, infrastructure, or backend engineering
  • 3+ years managing complex deployment and development environments
  • 3+ years with cloud environments (AWS primary, GCP a bonus)
  • 3+ years of experience with CI/CD pipelines
  • Strong Docker experience; Kubernetes is a must
  • Infrastructure as Code (IaC): Terraform + CloudFormation
  • Proficiency in backend scripting/automation (Python, Go, Rust, or TypeScript)
  • Observability & monitoring: any of Prometheus, Grafana
  • Experience with complex enterprise deployments
  • Experience in rapid-growth environments, with a track record of scaling infrastructure
  • Security best practices for cloud & on-prem deployments
  • Strong communication skills. Experience working with diverse remote teams. Resilience under pressure.

Bonus points (for broader impact)

  • SaaS and startup experience
  • Experience leading teams
  • Experience with data analytics software
  • Data engineering & warehouse performance tuning (ClickHouse, Snowflake, Redshift)
  • MLOps or AI model deployment experience
  • High-performance computing (HPC)

Why join us?

  • Work on high-impact problems in cloud infrastructure, automation, and performance optimization
  • Own and shape DevOps strategy in a fast-growing company
  • Remote-friendly, high-trust culture with minimal bureaucracy

If you’re excited about building fast, scalable, and reliable infrastructure while leveraging your software engineering skills (and maybe some data engineering too), let’s talk!
