— Hands-on experience with ETL and/or near-real-time Big Data processing/analytics
— Hands-on experience with Spark and Spark Streaming
— Hands-on experience with Amazon Web Services (EMR, EC2, S3, Lambda, Athena, Redshift, RDS, DynamoDB, Kinesis, etc.) or their open-source analogues
— Strong knowledge of software architecture and design principles and patterns
— 3+ years of experience with Big Data
— Intermediate+ English
— Strong self-initiative and a result-oriented mindset
— Independent and research-focused, with a drive to find new solutions and improve existing ones
— DevOps experience, including CloudFormation
We offer a competitive salary and flexible hours.
We are currently UNIT.City residents, so you can enjoy all of its benefits: a free gym, a cosy office, and free or discounted tickets to conferences and master classes.
— Develop the architecture for a data-driven product
— Build new high-load data processing services
— Collaborate with teams and product units to analyze and adapt business and data requirements
Currently we are processing up to 3 GB of symbol rate updates and up to 150 MB (~220k) of trade events, and the input data volume will only grow.
We compute aggregates over trading activity going back to 2015, with per-minute source granularity, on a minute/hour/day basis. So far this has produced terabytes of data stored in S3 and 300 GB of compressed store files in HBase.
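To illustrate the kind of time-bucketed aggregation described above, here is a minimal pure-Python sketch (not the production Spark pipeline; the trade-event shape and field names are hypothetical assumptions for illustration only):

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical trade events: (unix_timestamp_seconds, symbol, volume).
trades = [
    (1620000000, "BTC-USD", 2),
    (1620000030, "BTC-USD", 3),  # same minute as the first event
    (1620000075, "ETH-USD", 5),  # falls into the next minute
]

def bucket(ts: int, granularity: str) -> str:
    """Truncate a unix timestamp (UTC) to a minute/hour/day bucket key."""
    fmt = {
        "minute": "%Y-%m-%d %H:%M",
        "hour": "%Y-%m-%d %H:00",
        "day": "%Y-%m-%d",
    }[granularity]
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime(fmt)

def aggregate(events, granularity):
    """Sum traded volume per (symbol, time bucket)."""
    totals = defaultdict(int)
    for ts, symbol, volume in events:
        totals[(symbol, bucket(ts, granularity))] += volume
    return dict(totals)
```

In the real pipeline the same grouping would be expressed as a windowed aggregation in Spark Streaming, with results persisted to S3 and HBase rather than held in memory.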
There is a lot of functionality still to be implemented and a number of architecture challenges to be solved; the product is in an intense development stage.