At Lyft, our mission is to improve people’s lives with the world’s best transportation. To do this, we start with our own community by creating an open, inclusive, and diverse organization.
Here at Lyft, high-quality, reliable metrics play an important role in decision-making. These metrics provide insight into the effectiveness of our product launches and features and let us track progress against our goals and mission.
As a software engineer, you will be part of the team building the next generation of the metrics platform to serve reliable, high-quality metrics. You will design and build the backend APIs for creating and serving source-of-truth metrics metadata across Lyft to downstream applications such as A/B testing and reporting; you will own the data pipelines that power Lyft's topline and domain metrics; and you will build user-facing frontend features that let users manage their metrics. Your efforts will ensure the metrics used across Lyft are high quality and that people can access business and user insights to make data-informed decisions.
— Work with teams to gather requirements and align on the design of the metrics platform features
— Create and own the roadmap for the metrics platform projects in partnership with PMs and Data Scientists
— Design and evolve data models to handle different metrics use cases
— Design and create relational and NoSQL databases for persisting data objects
— Design and implement the metrics platform's backend services, providing APIs for metrics metadata operations and various downstream use cases
— Own the data pipelines for metrics computation and A/B testing statistics; build and maintain these pipelines at scale
— Proficiency in at least one programming language such as Python, Java, C++, or Go
— Write well-crafted, well-tested, readable, maintainable code
— Experience with SQL and relational databases (Postgres, MySQL, Oracle, or similar)
— Experience with Kubernetes (k8s), Envoy, Kafka, and/or AWS is a plus
— Experience with the Hadoop ecosystem or similar (MapReduce, YARN, HDFS, Hive, Spark, Presto, Pig, HBase, Parquet)
— Experience with workflow management tools (Airflow or similar)