Software Engineer, Data

Los Angeles, CA, United States • $150k - $190k



501+ people


406 Broadway # 369
Santa Monica, CA, 90401-2314, US

Tech Stack

  • Swift
  • Dropwizard
  • RxJava/RxKotlin
  • AWS
  • Terraform
  • Flink
  • Kafka
  • Redis
  • RabbitMQ
  • Kotlin

Role Description

Bird is all about micro-mobility: the kind of short distance travel that gets you from your lunch meeting back to the office, or makes exploring a new city with your friends fun and easy. But when you’re ready to ride, how does Bird ensure that there is a scooter near you ready to go?

The answer is data.

To place a Bird in the right place at the right time, Bird needs to understand all the nuances of consumer transit, from weekly seasonality to points of interest, daily commuter paths, and unexpected weather. Bird needs to capture and pool all that information into a final score and decision, all day, every day.

Bird is operating in over 100 cities around the world, serving hundreds of thousands of vehicles, tens of thousands of chargers, and millions of riders. With this scale and operational complexity, we need to amplify the impact of our decision makers by predicting when and where to place birds, when to dispatch teams to pick up and charge or repair vehicles, and how we should rebalance our fleet to meet the demand in a city.

We also closely partner with cities to ensure Bird is serving their communities. We provide data and insights so cities can better understand the nature of traffic in their city, as well as the success of their mobility program. Because safety is Bird’s top priority, we also use data to enforce speed limits set by cities, plan “nests” (where freshly charged Birds go) to accommodate city regulations, and inform riders of local rules.

Engineers on our Data Platform team are responsible for data integration and processing across a variety of databases and services used for online and offline features. We work in the world of stream-table duality, unifying events and state in Flink on top of an event bus implemented in Kafka. We build data pipelines that drive real-time business decisions, both by machine learning models from our Data Science team and by local operations teams all over the globe.
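The stream-table duality mentioned above can be sketched in a few lines: replaying a stream of events (a changelog) and keeping the latest value per key yields a table of current state. This is a minimal illustration in Python, not Bird's actual pipeline; the event shape and names are hypothetical, and in production this folding happens inside Flink jobs reading from Kafka topics.

```python
# Stream-table duality in miniature: a stream of keyed events, when folded,
# materializes into a table of current state per key. The event shape
# (scooter_id, battery_pct) is illustrative, not Bird's real schema.

def materialize(events):
    """Fold a stream of (scooter_id, battery_pct) events into a state table."""
    table = {}
    for scooter_id, battery_pct in events:
        table[scooter_id] = battery_pct  # the latest event for a key wins
    return table

events = [
    ("bird-1", 90),
    ("bird-2", 75),
    ("bird-1", 42),  # a later reading overwrites bird-1's earlier state
]

print(materialize(events))  # {'bird-1': 42, 'bird-2': 75}
```

The dual direction also holds: observing every update to the table reproduces the stream, which is why a changelog topic in Kafka can back a stateful Flink operator.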

Here’s what you’ll be doing in your first:

  • Week: You’ll work closely with our product and service teams to learn our development practices as you engineer solutions to problems impacting our customers. You will learn trunk-based development, Phabricator, Jenkins, and Kotlin.
  • Month: You’ll be creating new data pipelines. You'll instrument services to create new datasets, and write stream and batch processors to build data-powered products.
  • Quarter: You'll be building reusable components that empower product teams to rapidly iterate on features that make the world a little bit smaller.


Technologies You'll Use

Kafka, Flink, Presto, Druid, Avro, Spark, Kotlin, Python, AWS, Airflow


Requirements

  • Computer Science degree or equivalent experience required
  • Batch and/or stream data processing experience preferred
  • Large-scale distributed systems experience is awesome


As cities across the globe grow, so does our need for greater mobility. We’re facing more pollution and congestion than ever before. The near-term solution is partnering with cities to rebalance our existing streets and improve the way we get around—we need to remove cars from the road and make our cities more livable.

Company Culture

We're an ambitious, smart, and open-minded group. Our team members are passionate about our mission, and eager to complete their work at the highest level. The office itself is up-tempo and supportive, because we care about each other. People first, people.

Interested in this role?
Skip straight to final-round interviews by applying through Triplebyte.