We’re NCube, a place where you can work on cool projects with cool people. NCube creates teams of developers and creative problem-solvers who are passionate about designing great software. As a team, we work on different projects but love to relax, share ideas, and make memories together.
July 9, 2019

Senior Big Data Engineer

Kyiv

Requirements

We are looking for a candidate who will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow within our Data Science team. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.

The product is an enterprise-grade, trading-floor-ready technology suite that allows market actors to perceive, categorize, and act upon transactions involving digital assets.

We aspire to be the premier source of timely, high-fidelity, actionable data in the blockchain trading world.

We operate and monitor nodes on the network in or near the world’s financial centers, often receiving messages indicating a buyer’s (or seller’s) intent before they are received by any other party. We survey, label, and reveal key landmarks on the blockchain landscape, including whether a wallet address belongs to an exchange (or an ICO-affiliated entity or high-frequency trader), whether a node is engaged in mining activity, or whether a wallet has recently ceased activity. We locate and classify large movements of tokens, in many cases catching movements from treasury wallets to clearing wallets while the funds are still internal to a firm, whole minutes before the tokens are sent from a clearing wallet to an external party or exchange.

Technology stack: React, Golang, Python, C++, Rust.

Team size: 10 developers in Kyiv.

Must have:

5+ years of experience in a Data Engineer role

Experience with big data tools: Hadoop, Spark, Kafka, Elasticsearch, Kibana, etc.

Experience with PostgreSQL

Experience with AWS cloud services: EC2, EMR, RDS, Redshift

Experience with stream-processing systems: Storm, Spark-Streaming, etc.

Experience with object-oriented/functional scripting languages: Python, Scala, Go, etc.

Advanced SQL knowledge, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases and the Parquet and Avro file formats

Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets

Experience with blockchain technology is considered a bonus

Upper-intermediate English

Responsibilities

Develop and maintain optimal data pipeline architecture.

Maintain deployment and scaling of EMR clusters.

Build and manage data collection services.

Develop the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

Create data tools that help analytics and data science team members build and optimize innovative insights into the blockchain ecosystem.

About the project

Dream Team

Modern technologies and management practices

Business trips to the USA

Flexible working schedule, paid vacation and sick leave

Medical insurance

Parties every month

Great office location near the Maidan Nezalezhnosti metro station
