Adraba is a professional software development powerhouse. We work on a variety of projects using the latest technologies. Our main goal is to put together a talented and committed team, and we do our best to create a space where every team member can not only apply their skills but also gain valuable experience.
July 9, 2019

Big Data Engineer (vacancy is no longer active)

Kiev

Required skills

— Software engineering mindset and ability to write elegant, maintainable code.
— Analytical mindset to understand business needs and turn them into engineering solutions.
— Experience balancing complexity and simplicity in pipeline design.
— Expertise in building data pipelines over large, complex data sets using Spark, Hadoop, or other open-source frameworks.
— Expertise in one or more programming languages: Python (an advantage), Ruby, Java, Go.
— Strong SQL skills and experience working with various databases and query engines (Spark SQL / DataFrames, AWS Athena, Google BigQuery); a brief illustrative sketch follows this list.
— Experience with data workflow management tools: Airflow, Luigi, etc.
— Experience with AWS cloud services: EC2, ECS, EMR, Athena, S3 (or their Google Cloud equivalents).
— Excellent communication skills to collaborate with cross-functional partners and independently drive projects and decisions.
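
To make the Spark SQL / DataFrame requirement concrete, here is a minimal, purely illustrative PySpark sketch. It is not part of the original posting; the input path, column names (`user_id`, `timestamp`), and output location are all hypothetical.

```python
# Illustrative sketch of a typical DataFrame pipeline step;
# all file paths and column names here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Load a hypothetical event log; in practice this might live on S3
# (e.g. an s3:// path) and be queried via Spark SQL or AWS Athena.
events = spark.read.json("events.json")

# Aggregate events per user per day -- a common pipeline transformation.
daily_counts = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("user_id", "day")
    .agg(F.count("*").alias("event_count"))
)

# Persist the aggregated result for downstream consumers.
daily_counts.write.mode("overwrite").parquet("daily_counts/")
spark.stop()
```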

Would be a plus

— Experience with Docker, Kubernetes / AWS Fargate, Git, and web crawling.

We offer

— Long-term project with attractive compensation.
— Brand-new office in the center of Kiev (Pecherska metro station) with a young environment and talented colleagues.
— PE (private entrepreneur) registration handled by the company’s accountant, with monthly payments to the PE bank account.
— 20 working days of annual paid vacation.
— English classes inside the office.
— Opportunity to grow as a professional.
— Interesting corporate events and presents.

Responsibilities

— Build highly scalable data pipelines and data sets.
— Enhance our data architecture to balance scale and performance.
— Derive the required information from data across the web.
— Build, optimize and automate data processes to support business requirements.
— Work closely with the CTO to make sure our infrastructure supports the ongoing growth of data.
— Design and implement the infrastructure required to run data pipelines at scale on AWS / Google Cloud (see the sketch after this list).
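
Since the responsibilities revolve around building and automating pipelines, a minimal Airflow DAG sketch may help picture the day-to-day work. This is not part of the original posting; it assumes Airflow 2.x, and the DAG name, task names, and the callables they run are hypothetical.

```python
# A minimal, hypothetical Airflow DAG: two placeholder tasks wired into
# a daily extract -> transform pipeline. Assumes Airflow 2.x import paths.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw data, e.g. from S3 or a crawl output.
    print("extracting")

def transform():
    # Placeholder: clean and aggregate, e.g. a Spark job like the one above.
    print("transforming")

with DAG(
    dag_id="example_daily_pipeline",  # hypothetical name
    start_date=datetime(2019, 7, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run extraction before transformation each day.
    extract_task >> transform_task
```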

About the project

In this role, you will tackle the challenge of collecting huge amounts of data from across the web, processing it, and analyzing it.
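
As a rough illustration of that collect-process-analyze loop, here is a tiny standard-library sketch. It is not from the posting: the endpoint URL, the JSON payload, and the analysis step are all hypothetical stand-ins for a real crawler and pipeline.

```python
# Purely illustrative collect -> process -> analyze loop;
# the URL and parsing logic are hypothetical.
import json
from urllib.request import urlopen

def collect(url: str) -> bytes:
    # Fetch raw data from the web; a real crawler would do this at scale,
    # with retries, rate limiting, and a queue of URLs.
    with urlopen(url) as resp:
        return resp.read()

def process(raw: bytes) -> dict:
    # Normalize the raw payload into a structured record.
    return json.loads(raw)

def analyze(record: dict) -> None:
    # A real pipeline would aggregate records downstream;
    # here we just inspect the structure.
    print(sorted(record.keys()))

if __name__ == "__main__":
    raw = collect("https://httpbin.org/json")  # hypothetical endpoint
    analyze(process(raw))
```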