PURPOSE OF THE JOB
You enjoy the creative process of developing software and using the latest, greatest
technology. You have a strong education and several years of experience as a Data Engineer working with large data sets, using tools like Spark, Cassandra, Kafka, or Scala.
Still, you feel something is missing: you are not part of a story that truly excites you... you want to make a lasting impact.
We develop the brains to store and use renewable energy. We are, of course, passionate about being part of the clean energy wave, and we have created self-learning software that knows when it is most economical to store renewable energy and when to release it again.
Part of our mission is to help reduce the world’s carbon footprint, but at the same time we have created a commercially successful business. We already serve hundreds of organizations, including many Fortune 500 companies. Our customers are great organizations that are responsible in their energy usage... and they gain economic benefits at the same time.
We have a great team of software developers; some of us are building a scalable and secure cloud, others enjoy programming devices, and our data scientists design and develop machine-learning models to operate the world’s largest energy storage network.
We look forward to meeting you and learning whether there is an opportunity for you to join our team!
MAIN TASKS AND RESPONSIBILITIES
Develop data pipelines to ingest, clean, and transform large data sets that power our artificial intelligence and machine learning platform
Build real-time and batch processing frameworks that accelerate the pace of algorithmic improvement
While doing so, you help:
Create virtual power plants composed of distributed energy storage that help stabilize the electricity grid
Scale the business 10x from thousands to tens of thousands of industrial devices
Build an industry-leading real-time technology stack
EDUCATION, SKILLS AND EXPERIENCE
You are well organized, care about the details, and take pride in delivering on your commitments.
Familiarity with one or more of the following: Apache Spark, Apache Flink, Kafka, or the Hadoop stack
Experience working with data scientists to design and develop data pipelines for ML models.
You can demonstrate advanced proficiency in Python (required) and some familiarity with the Python scientific stack (NumPy, scikit-learn, etc.). Knowledge of C++, Java, or Scala is a plus
You know how to design and write APIs or RESTful endpoints
You are familiar with the concepts of designing, developing, and deploying microservices.
You have used SQL and NoSQL datastores (e.g., DynamoDB, Hive, MySQL, Presto, PostgreSQL, MongoDB)
Bonus: experience with time-series databases (Cassandra, Prometheus, InfluxDB, etc.)
You are passionate about high performance computing
You have hands-on experience with Linux- and Docker-based environments
You know the Agile methodology and all of its ceremonies and have worked in an Agile team... but most importantly, you believe in the principles behind it
At least a Bachelor's degree (Master's preferred)
A minimum of 3 years of relevant software development experience
Professional interest in energy and clean technologies
WHAT WE OFFER
Friendly and highly professional teams
Modern office facilities (kitchens, gym, yoga, playroom, coffee and tea points, etc.)
Regular (twice a year) performance reviews
Fully paid English classes with certified teachers and native speakers
Internal and external training
Premium health insurance (medication, massage, in-office doctor, etc.)
20 working days of annual paid vacation
Christmas and other state holidays
Corporate events (parties, sports competitions, "What? Where? When?" quiz teams, etc.)
Incentives (marriage, childbirth gifts)
And much more!
Please send us your CV, or contact us with any questions!