Seeking Alpha is the world’s largest investing community, powered by the wisdom and diversity of crowdsourcing. Millions of passionate investors connect daily to discover and share new investing ideas, discuss the latest news, debate the merits of stocks, and make informed investment decisions.
March 24, 2023

Senior Data Engineer

Kyiv, remote

We are

Seeking Alpha helps over 200K users invest better and reach their financial goals faster. Our product is always evolving, and we’re looking for a talented and experienced Data Engineer to join us.

A Senior Data Engineer is responsible for designing, building, and maintaining the infrastructure needed to analyze large data sets. This individual should be an expert in data management, ETL (extract, transform, load) processes, and data warehousing, and should have experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Beyond technical skills, a Senior Data Engineer needs strong communication and collaboration abilities: they will work closely with the rest of the data and analytics team, as well as other stakeholders, to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with the overall business goals and objectives.

We love working together to solve complex problems as much as we love hanging out together!

Why Seeking Alpha is the best place for you

  • We have an awesome product. Our cutting-edge investing tools help over 200,000 paying subscribers exceed their financial goals
  • We work hard to make our users happy and help them build wealth to achieve their life goals
  • We care about work-life balance: we work mostly from home, we provide lots of vacation days, and we insist that you enjoy them


What you will do:

  • Work closely with data scientists, analysts, and other stakeholders to identify and prioritize data engineering projects and to ensure that the data infrastructure is aligned with business goals and objectives
  • Design, build, and maintain optimal data pipeline architecture for extracting, transforming, and loading data from a wide variety of sources, including external APIs, data streams, and data stores
  • Continuously monitor and optimize the performance and reliability of the data infrastructure, and identify and implement solutions to improve scalability, efficiency, and security
  • Stay up to date with the latest trends and developments in data engineering, and leverage this knowledge to identify opportunities for improvement and innovation within the organization
  • Solve challenging problems in a fast-paced and evolving environment while maintaining uncompromising quality
  • Implement data privacy and security requirements to ensure solutions comply with security standards and frameworks
  • Enhance the team’s DevOps capabilities


Requirements:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
  • 2+ years of proven experience developing large-scale software in an object-oriented or functional language
  • 5+ years of professional experience in data engineering, focusing on building and maintaining data pipelines and data warehouses
  • Strong experience with Spark, Scala, and Python, including the ability to write high-performance, maintainable code
  • Experience with AWS services, including EC2, S3, Athena, Lambda, and EMR
  • Familiarity with data warehousing concepts and technologies, such as columnar storage, data lakes, and SQL
  • Experience with data pipeline orchestration and scheduling using tools such as Airflow
  • Strong problem-solving skills and the ability to work independently as well as part of a team
  • Advanced English is a must
  • A team player with excellent collaboration skills

Nice to Have:

  • Expertise with Vertica or Redshift, including experience with query optimization and performance tuning
  • Experience with machine learning and/or data science projects
  • Knowledge of data governance and security best practices, including data privacy regulations such as GDPR and CCPA
  • Knowledge of Spark internals (tuning, query optimization)