Qualifications:
— Degree(s) in computer science or a similar field
— Experience in Agile and Test Driven Development (TDD) processes
— 3+ years of experience in distributed data processing systems (Hadoop, Spark, others)
— Basic knowledge of ETL processes (framework and principles)
— Experience in designing and developing real-time analytics
— At least two years' experience developing complex, scalable, production-quality software in Python
— Familiarity with Artificial Intelligence, Machine Learning concepts
— Experience with graph databases is desirable (Neo4J, Apache Tinkerpop, JanusGraph, Amazon Neptune, etc.)
General Requirements:
— Work in a collaborative team in an Agile environment focused on the full life cycle of software
— Build new products from the ground up, with high performance, scalability, and reliability
— Experience and discipline to create efficient, maintainable, robust, well-tested, documented code
— Experience in enterprise data processing software
— Strong interest in semantic technologies
— 3+ years' experience with enterprise Python
— English — upper-intermediate
— Background in Semantic Web technologies
— Natural Language Processing competence
— Good command of semantic search engines (Elasticsearch, Apache Solr)
— Background in other development languages (Java, Scala, C++, JavaScript, LISP)
— Experience in the security domain
Benefits:
— On-the-job training and opportunities for professional growth
— Flexible working hours
— 20 paid vacation days
— English lessons
— Opportunity to work for a major US provider
Responsibilities:
— Design and develop various backend components, from data ingestion to data analytics
— Use big data technologies such as Apache Hadoop and Apache Spark for distributed real-time analytics
— Deploy and tune cloud solutions (Google Cloud, Azure, AWS)
— Design and develop UI components
The selected candidate will contribute to the development of an innovative data-centric solution empowering data analysts, domain experts, and business users to influence business and operations by investigating trends and patterns across industries and multiple data sources.
The project is a good fit for those eager to take on the challenge of building a flexible, scalable platform that enriches user insight through data discovery, comprehensive data analysis, and knowledge management across a broad range of resources. If you are passionate about intelligent algorithms, big data processing, and high-performance programming, this project is the right choice for you.