Lightup Data Inc.
Lightup enables data-driven businesses to achieve their full potential. Modern businesses deal with unprecedented volumes of real-time data, operating in environments that are constantly changing both internally and externally. To get the most out of data-driven operations, businesses need to continuously measure their performance at fine granularity across the entire pipeline. Lightup lets businesses continuously track the vitals of their data-driven operations at the finest granularity, catch problems before their customers do, and fix them before they impact the business.
The founding team brings together a unique combination of experience in big data stream processing, large-scale distributed systems, and rigorous statistical signal processing - crucial for building the Lightup platform.
We have over three decades of industry experience spanning multiple game-changing startups and industry stalwarts like Google, GlobalFoundries, and VMware.
Funded by top-tier investors Andreessen Horowitz (a16z) and Spectrum 28 (S28 Capital).
- Founding engineering team reporting to the VP of Engineering
- Two-week iterations, with planning weekly on Mondays
- Two status review meetings per week (Wed/Fri)
- Demo and retrospective at the end of each release
- Asana to track release content and backlog
- Features/tasks developed after discussion with the founding team; freedom to determine design and implementation
- Process large amounts of streaming data (Kafka, PubSub, Kinesis) and data at rest (S3/BigQuery/Redshift/Postgres)
- Build core algorithmic, statistical and data science solutions to find/handle issues on large streaming datasets
- Understand/infer data schemas where they aren't available, and dynamically handle changing data schemas
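To give a flavor of the schema work above, here is a minimal sketch of inferring a schema from a stream of JSON records and widening it as new fields or conflicting types appear. The field names, type tags, and the "degrade to mixed" merge rule are illustrative assumptions, not Lightup's actual implementation.

```python
import json

def infer_type(value):
    """Map a JSON value to a coarse type tag."""
    if isinstance(value, bool):
        return "bool"
    if isinstance(value, int):
        return "int"
    if isinstance(value, float):
        return "float"
    if isinstance(value, str):
        return "string"
    if value is None:
        return "null"
    return "object"

def merge_schema(schema, record):
    """Widen the running schema with one record: new fields are
    added; a field seen with conflicting types degrades to 'mixed'."""
    for field, value in record.items():
        t = infer_type(value)
        if field not in schema:
            schema[field] = t
        elif schema[field] != t:
            schema[field] = "mixed"
    return schema

# Example: the schema drifts as records arrive on the stream.
stream = ['{"id": 1, "ts": "2020-01-01"}',
          '{"id": "abc", "ts": "2020-01-02", "lat": 37.4}']
schema = {}
for line in stream:
    merge_schema(schema, json.loads(line))
# schema == {"id": "mixed", "ts": "string", "lat": "float"}
```

A production version would also handle nested objects, arrays, and per-field statistics, but the widening-merge idea is the core of tolerating schema drift on a live stream.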
You would architect and build our analytics monitoring engine, which runs complex algorithms and statistics on data. This engine will need to handle enormous amounts of streaming data as well as data at rest. In addition, it will be able to extract or infer schemas from all types of data.
You would come up with algorithms to detect and infer interesting insights in large data streams and data stores. These algorithms need to run efficiently at large scale and deliver performant results.
You would architect and implement the scalable backend, which must handle a large number of API calls at low latency while using your understanding of database internals to reduce load on the databases. In addition, you'd define the API as well as implement it, working closely with our CPO and UI/UX engineer.
You would build out the AWS integration pipeline all the way from build to test to deploy. This will give you the broadest view of the product while developing a very portable skill set: Bazel, Jenkins, Docker, Kubernetes, Envoy, AWS/AWS Lambda, Beanstalk, etc.
Here is what we value as a company:
- Customers first
- Transparency
- Honesty - bidirectional feedback
- Bias for action - results matter
- Have fun while getting stuff done
We have a 401(k) plan with a company match
If you have to take mass transit to work, we'll subsidize your train pass.