Grid Dynamics is a leading provider of technology consulting, engineering and data science services for Fortune 500 corporations undergoing digital transformation. We serve some of the largest US retail, e-commerce, tech and financial services companies, delivering our solutions using open source, cloud-based technologies.
May 14, 2020

Senior Big Data Developer (this vacancy is no longer active)

Kyiv, Kharkiv, Lviv

Grid Dynamics is an engineering services company known for transformative, mission-critical cloud solutions in the retail, finance and technology sectors. We have architected some of the busiest e-commerce services on the Internet and have never had an outage during peak season. Founded in 2006 and headquartered in San Ramon, California, with offices throughout the US and Eastern Europe, we focus on big data analytics, scalable omnichannel services, DevOps, and cloud enablement.

By joining Grid Dynamics, you get a tremendous opportunity to work on cutting-edge Big Data projects that handle dozens of petabytes of data under extremely high load, with millions of events per second. This work takes data processing to the next level, helping set and achieve ever more ambitious targets in science, business, healthcare and environmental protection.

Our projects cover business domains such as e-commerce, digital advertising, finance and banking, manufacturing, and product development.

The technology stack may vary from project to project, but in most cases we work with the following set, which helps our engineers grow in all of these areas (a brief illustration of the batch and streaming items follows the list):

1. Python
2. Java Core
3. Scala
4. SQL
5. NoSQL (Cassandra / Redis / MongoDB / HBase, etc.)
6. Algorithms
7. Design patterns
8. Parallel Distributed Processing / Multithreading / Concurrency / CAP Theorem, etc.
9. Data Processing approaches
9.1. Batch Processing
9.2. Stream Processing
10. Big Data platforms, frameworks and services
10.1. Hadoop (HDFS / YARN / MapReduce / Hive / Pig / Parquet / Avro, etc.)
10.2. Spark and Spark Streaming
10.3. Kafka / Beam / Flink / Ignite / NiFi / StreamSets, etc.
11. Analytical databases (Yandex ClickHouse / Druid / Vertica / Impala, etc.)
12. Workflow Schedulers (Airflow / Oozie / Azkaban / Taverna, etc.)
13. Tools for Data Visualization and Reporting (Tableau / QlikView / Domo, etc.)
14. Cloud Services
14.1. Google Cloud (GCP) data services (BigQuery / Cloud Bigtable / Cloud Storage / Cloud SQL / Cloud Spanner / Cloud Datastore / Cloud Pub/Sub / Cloud Dataflow / App Engine / Compute Engine / TensorFlow / Stackdriver, etc.)
14.2. AWS (Kinesis / Redshift / Lambda / Athena, etc.)
14.3. Azure (Databricks / Data Lake Storage / Stream Analytics / Data Lake Analytics / SQL Data Warehouse, etc.)
15. Use of Data Science models (optional)
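To make the batch and stream processing items (9 and 10) above concrete, here is a minimal sketch of a typical pipeline in Scala, using Spark's DataFrame and Structured Streaming APIs. It assumes Spark 2.4+ with the spark-sql-kafka-0-10 connector on the classpath; the HDFS paths, Kafka broker, topic name, and event schema are hypothetical placeholders rather than details of any concrete project.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EventPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("event-pipeline").getOrCreate()
        import spark.implicits._

        // Batch: aggregate one day of Parquet events into per-user counts
        // (hypothetical HDFS paths).
        spark.read.parquet("hdfs:///events/dt=2020-05-14")
          .groupBy($"userId")
          .agg(count("*").as("events"))
          .write.mode("overwrite").parquet("hdfs:///reports/daily_counts")

        // Streaming: the same aggregation over a live Kafka topic, with
        // updated counts emitted to the console as micro-batches arrive
        // (hypothetical broker and topic).
        val stream = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(get_json_object($"value".cast("string"), "$.userId").as("userId"))
          .groupBy($"userId")
          .agg(count("*").as("events"))

        stream.writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }

The point of the sketch is that the same aggregation logic carries over from a daily batch job to a live stream; only the sources and sinks change.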

Common requirements, regardless of project:

— Readiness for long-term onsite assignments (6+ months) is a big plus.
— Good understanding of distributed and high-load system architectures.
— Upper-Intermediate English.
— Knowledge of network and OS (*nix!) bottlenecks and optimizations.
— Extensive experience with production troubleshooting and monitoring.
— Ability to work with vague requirements.
— Good communication skills.

We offer

— Opportunity to work on bleeding-edge projects
— Work with a highly motivated and dedicated team
— Competitive salary
— Flexible schedule
— Benefits program
— Social package — medical insurance, sports
— Corporate social events
— Professional development opportunities
— Opportunity for long business trips to the US and the possibility of relocation