Svitla Systems is a multinational software company headquartered in Silicon Valley, with business and development offices throughout the US, Mexico, and Europe. Svitla is an outspoken advocate of workplace flexibility, an individual approach to our teammates’ professional and personal growth, and a family-like environment. Since 2003 we have served a wide range of customers, from innovative start-ups in California to large corporations like Ingenico, AstraZeneca, and Ancestry. At Svitla, developers work with clients directly, building lasting and successful partnerships. Our global mission is to build a business that contributes to the well-being of other communities and makes a lasting difference in the world. Join us!
Svitla Systems Inc. is looking for a Senior Big Data Engineer for a full-time position (40 hours per week) in Ukraine. Our client is developing an innovative Decision Intelligence platform that is changing how enterprises use AI to transform their decision-making. The design is based on several patented methods built around the concepts of opportunity and cognitive decision-making. It is currently being deployed at several large enterprises across retail, financial, manufacturing, and automotive/OEM customers to solve unique and interesting business problems. These deployments are extensive and highly complex, with multi-year support for ongoing enhancements. We seek highly motivated, enthusiastic, and energetic individuals to join this fast-growing team — an excellent opportunity for critical thinkers to work with some of the most advanced technologies and develop business acumen.
Requirements:
— Master’s Degree or Ph.D. in Computer Science, Engineering, Information Technology (IT), or a related field;
— Experience and desire to work in a fast-paced, collaborative, global delivery environment;
— Deep expertise in cloud-based platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP);
— Deep understanding of big data technologies and big data security models;
— Prior software development experience with data-driven product companies using Spark, SQL, Python, Scala, or advanced analytical tools is a plus;
— Experience with the data collection process and evaluation methods for Machine Learning;
— Proficient in designing efficient and robust ETL/ELT workflows, schedulers, and event-based triggers;
— Database experience, including knowledge of storage internals of SQL and NoSQL services;
— Experience with development methodologies and tools such as Agile, CI/CD, Git, and Jenkins;
— Performance tuning and experience building scalable data platforms;
— Experience writing technical specifications and design sketches from both a business and technology standpoint;
— Ability to break down large requests into sub-tasks and provide higher-level status updates;
— Values the team, leveraging teammates’ opinions and expertise to deliver quality, and able to influence cross-functional teams without formal authority;
— Experience using OKRs to drive outcomes;
— Excellent written and verbal communication skills.
Responsibilities:
— Design, build, and deploy massively scalable, automated data ingestion and processing pipelines to store, analyze, and derive insight from structured and unstructured data arriving from our clients and internal systems in multiple cloud environments;
— Drive technology discussions and analyze the current landscape for gaps in addressing data, platform, and business needs;
— Review existing architecture and capabilities from a technical perspective, challenging the status quo to provide thought leadership and best practices that enhance our instance strategy, technical governance, core data, integrations, and the overall technical health of our data and machine learning pipeline frameworks;
— Interface with customers and across the organization in structuring and implementing solutions, becoming a trusted advisor to clients, senior executives, and internal data science, product, and engineering teams;
— Develop CI/CD processes for the Engineering and Data Science groups;
— Develop QA/QC suites to process, cleanse, and verify the integrity of incoming data, including data anomaly detection;
— Undertake project-specific proof-of-concept activities to validate technical feasibility;
— Coordinate with onshore/offshore teams on tasks to be done, and mentor junior members of the team;
— Perform design and code reviews for team members and develop test plans;
— Assist DevOps in configuring, operating, and maintaining our infrastructure environment.
We offer:
— Competitive compensation plan that takes skills and experience into account.
— Annual performance appraisals.
— The option to choose your workspace: fully remote, or a combination of your home and one of our development offices.
— Flexible working hours and an adjustable work/life balance.
— Projects that use advanced, cutting-edge technologies.
— Vacation time, sick leave, national holidays, and supplementary family days off.
— Comprehensive medical insurance, including dental services, massages, and sports activities.
— Support for a healthy lifestyle, including reimbursement for running events.
— Maternity leave policy.
— A personal loan budget available to long-term employees.
— Partial reimbursement for conferences, courses, and English classes.
— Free meetups, webinars, and conferences organized by Svitla.
— Birthday presents for employees and New Year gifts for children.
— Fun summer and winter corporate parties and memorable anniversary presents.