Data Platform Engineer

Remote, San Francisco, CA, United States, Silicon Valley, CA, United States • $135k - $190k

VoiceOps


Role Locations

  • Remote
  • San Francisco, CA, United States
  • Silicon Valley, CA, United States

Compensation

$135k - $190k

Employees

11 - 25 people

Address

680 8th St Unit 202
San Francisco, CA, 94103-4902, US

Tech Stack

  • Ruby on Rails
  • PostgreSQL
  • AWS
  • Python
  • React
  • Go
  • Rust

Role Description

About the Role

VoiceOps is looking for an accomplished, enthusiastic, and driven engineer with experience building data processing and storage systems. Our ideal candidates have architected and deployed systems that support multiple small engineering teams with distinct needs, and they enjoy a large degree of autonomy and ownership over a company's data infrastructure.

Responsibilities

  • Design and develop data pipelines, ETL, storage solutions, and workflows that are optimized for speed, fault-tolerance, and scalability
  • Work with Application, Machine Learning, and Site Reliability/DevOps engineers to create systems that support their varied data needs while allowing for independent manipulation and iteration of data
  • Define robust data schemas for the rapid intake and processing of customer data with diverse structures
  • Support product-focused engineering teams with data infrastructure, APIs, and scalable deployments
  • Architect and author internal libraries for use by fellow engineers
  • Help create data analytics tools for software telemetry and business intelligence purposes
  • Cultivate a better understanding of data handling best practices across engineering teams
  • Collaborate on security efforts for customer data

Qualifications

  • 4+ years of experience writing code in one of: Ruby, Go, Python, Scala, Elixir, Java, or similar languages at a SaaS company
  • Strong understanding of relational and non-relational databases such as PostgreSQL, Elasticsearch, and Redis
  • Highly proficient at organizing and modeling data to support varied use cases
  • Experience creating and deploying container-based software
  • Familiarity with asynchronous data processing patterns with an added focus on monitoring and logging
  • Prior experience working with AWS or a similar cloud provider
  • A BS/MS in computer science or related field of study, or equivalent experience
  • Ability to communicate ideas to technical and non-technical colleagues

Beneficial Experience

  • Experience designing, building, and maintaining highly distributed or event-driven systems
  • Experience supporting Machine Learning engineers with data preparation, validation, annotation, and model evaluation
  • Previous work with workflow management and/or task scheduling systems
  • Prior use of Terraform/Ansible/Infrastructure as Code tools

About you

  • You have strong opinions about technology and the facts to back them up
  • You welcome healthy but respectful debate
  • You know the differences between: data warehouses and data lakes, schema-on-read and schema-on-write, relational and non-relational databases, batch and stream processing
  • The thought of code sitting undeployed for more than a week sends shivers up your spine
  • You want to be the go-to subject matter expert for data-related questions
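
To illustrate one of the distinctions above, here is a minimal, hypothetical Python sketch (not VoiceOps code; all names are illustrative) of schema-on-write versus schema-on-read: the first validates and shapes each record at ingest time, while the second stores raw payloads and applies a schema only at query time.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")


def ingest_schema_on_write(raw: str, store: list) -> None:
    """Schema-on-write: validate and shape the record at ingest,
    so every stored row already conforms to a fixed schema."""
    record = json.loads(raw)
    row = {
        "call_id": str(record["call_id"]),  # required; raises KeyError if absent
        "duration_s": float(record.get("duration_s", 0.0)),
    }
    store.append(row)
    log.info("stored validated row %s", row["call_id"])


def query_schema_on_read(raw_store: list) -> list:
    """Schema-on-read: payloads were stored untouched; a schema is
    applied only now, when the data is queried."""
    rows = []
    for raw in raw_store:
        record = json.loads(raw)
        rows.append({
            "call_id": str(record.get("call_id", "unknown")),
            "duration_s": float(record.get("duration_s", 0.0)),
        })
    return rows
```

The trade-off in the sketch: schema-on-write rejects malformed records early but requires a schema migration to change the shape of stored data, while schema-on-read tolerates varied payloads at the cost of deferring validation to every query.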

About VoiceOps

VoiceOps uses AI to improve call center rep performance with world-class coaching.

Our average customer makes tens of thousands of calls per week. In a world without VoiceOps, they have literally no idea what their sales reps are doing on the phone. It's a total (and scary) black box.

By applying ML and a great UI to this problem, we put all the data call center leadership needs about customer conversations at their fingertips, so they can coach their reps more effectively and efficiently.

The technical problem is interesting, and gets more interesting as we grow. Our core challenge is how to take billions of audio recordings (and messy, unstructured human conversations) and make sense of that data in a way that is a) accurate, b) cost-efficient, and c) highly scalable. The corresponding product problem is how to take well-structured data and make it actionable for the end user.

Call center recordings are one of the richest/largest untapped datasets in the world (literally, billions of calls stored in AWS buckets that no one is touching right now). We're going to be the best in the world at structuring that data and putting it to use to make businesses work better.

Company Culture

Every problem is our problem

We do not look to external sources for why we didn't hit a deadline, meet an objective, etc. We hold ourselves responsible.

Intellectually honest and curious

We challenge each other's ideas daily, on product strategy and beyond. At lunch we are more likely to talk philosophy than TV (though we do some of that too).

Emotionally open and vulnerable

We talk about the ups and the downs of our lives outside of work, and strive to have high-authenticity interactions with each other.

Ambitious

We work hard to be an enduring company that paves the way for all sorts of applications which require structuring verbal conversational data at scale.

High quality bar, and always rising

We redo our work again and again until it's great. This can be frustrating in the moment, but it is ultimately rewarding when we build things that are greater than we imagined they would be.

Interested in this role?
Skip straight to final-round interviews by applying through Triplebyte.