- San Francisco, CA, United States
- Apache Spark
What you are going to do:
- Be part of the team that is building the next generation of ETL software
- Come up with novel ideas on how to make complex technology user-friendly, and then turn those ideas into software
- Improve the scalability, performance, and robustness of our data processing systems
What we want to see in you:
- You love data engineering
- You build robust and scalable data systems three times as fast as other developers
- Coding in Java is second nature to you
- You have a passion for improving data analytics
- You’re living in or willing to work in San Francisco
- You’re excited to work in a scrappy environment
- You’re down to earth and fun to be around. This is an absolute must!
Big plus if you have the following:
- Experience with Cascading, Docker, and AWS
- Knowledge of the ins and outs of current big data frameworks like Hadoop, Spark, or Flink - not an absolute requirement, since you’re a quick learner!
- Startup experience
Etleap was born out of frustration with how much time data wrangling takes away from the actual analysis. As engineers, we were tired of spending countless hours building and maintaining data pipelines and warehouses on behalf of our analytics teams. That is why we're creating an intuitive ETL tool that enables data analysts themselves to integrate data from any source.
The product ingests data from sources such as databases, log files, Salesforce, Marketo, and other cloud tools; lets users set up transformations; and builds the data warehouse tables.
We value engineers who can get things done, and who think creatively from first principles to build our product in new ways. Additionally, all our engineers take mentorship seriously, and are always open to discussing technical decisions, product ideas, or plans for future infrastructure. We have a very strict no-asshole hiring policy.