We built Triplebyte for machine learning engineers - here's what we learned.

By Triplebyte on Sep 23, 2019


We’re delighted to announce that we’ve just launched our brand-new machine learning track. We’ll now be helping machine learning engineers find jobs in the same way that we’ve already helped generalist, front-end, and mobile engineers.

ML is an exciting field. Machine learning, in one form or another, is driving many of the most innovative startups in Silicon Valley. On our platform alone, we have companies building autonomous vehicle fleets (Zoox), reducing food waste by predicting produce demand (Afresh), powering AI-backed materials science research (Citrine), and matching patients with mental health providers (Lyra). Nor is machine learning limited to cutting-edge products: even basic AI assistants like automatic style matching can save hundreds of hours that would otherwise be spent on tedious tasks.

Companies see the value in ML, and they're building out their ML teams at a breakneck pace. It’s not just startups, either; established names like Apple have already posted roles for our ML track. Capable ML engineers are rare, and they are hot commodities - companies both large and small are eager to hire as many as they can get. We’re no exception! Triplebyte couldn't exist the way it does today without our ML team. As much as we want to make hiring more pleasant for everyone involved, we couldn’t do any of the things we do if companies didn’t trust our algorithm to match them with people they want to hire.

We’ve been hiring ML engineers ourselves for years, but we also reached out to numerous companies to ask them what they look for in their own hiring processes. Our new interview is based on a blend of the two, tweaked for consistency and to pull as much information out of two hours as possible. Once the ML track has been out for a few months, we'll be back with more comprehensive data - but, for now, we thought we'd share the big takeaways from our research into ML hiring. Here's what you need to know if you're in the market for a machine learning job:


First and most importantly, raw coding skills matter. This was the common theme we heard over and over again as we spoke to companies hiring for ML roles, and it makes sense. These companies want something built, not just theorized, and most of them are not working on problems that require fundamentally new model architectures. Even our own algorithms that match engineers with companies are not in themselves revolutionary. We use relatively small tweaks on standard approaches, albeit with lots of fine-tuning by our resident experts (who were kind enough to provide many of the suggested reading links below).

Contrary to many engineers’ fears, companies would, in general, rather have someone who can code but lacks a deep academic background than a Ph.D. who can’t actually spin up a model. As a result, our new ML interview begins (like our other interviews) with a coding challenge. If you’re in the process of pitching yourself on the ML job market, emphasize what you can build - not just what you know. If you’re new to the field and haven’t built anything yet, try it out! Build a basic neural network or a random forest classifier, then try training it on one of the many open data sets available.
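For the newcomers in that last camp, here’s a minimal sketch of what such a starter project might look like. It assumes scikit-learn and its built-in iris dataset, neither of which this post specifically recommends - any library and open data set will do:

```python
# A minimal starter project: train a random forest classifier on a small open
# dataset and check how well it generalizes to held-out data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small, well-known open dataset (150 flower measurements, 3 classes).
X, y = load_iris(return_X_y=True)

# Hold out a quarter of the data so generalization can be measured honestly.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```

Even a toy project like this gives you something concrete to talk about in an interview: why you held out a test set, how you picked the model, and what you’d try next.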

Second, experience with real-world production concerns counts for a lot. ML expertise is hard to evaluate unless you’re an expert in it yourself, which many people hiring ML engineers are not. It’s especially hard to detect problems with a system that’s already deployed, because machine learning is famously opaque. As a result, companies tend to lean on experience as a proxy for this knowledge of production issues, because they (probably rightly) don’t feel they can evaluate it directly with enough precision. This means that switching into ML from another field depends, even more than in other engineering disciplines, on getting your foot in the door the first time. Personal projects can help, but they need to be substantial - many of the issues that come up in production ML systems don’t become obvious until they’ve scaled up or have been running for a while.
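One concrete example of those production issues is silent data drift: the live traffic a model sees gradually stops resembling the data it was trained on, and accuracy degrades without any error being thrown. Here’s a minimal sketch of the kind of check teams run for this, assuming you log incoming feature values; the libraries (NumPy, SciPy), the synthetic values, and the threshold are illustrative, not something this post prescribes:

```python
# A minimal sketch of a production drift check: compare the distribution of a
# feature in live traffic against its distribution at training time, using a
# two-sample Kolmogorov-Smirnov test. All values here are synthetic.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values, live_values, alpha=0.01):
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train_values = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model trained on
live_values = rng.normal(loc=0.4, scale=1.0, size=1000)   # what production now sees

print(feature_has_drifted(train_values, live_values))  # True: the input has shifted
```

Nothing in this check is sophisticated - the hard part in practice is knowing it needs to exist, which is exactly the kind of knowledge companies use experience as a proxy for.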

We don’t like to focus on experience, as we’ve written about numerous times before. Experience is a meaningful proxy, but it is not particularly precise and only exacerbates existing divides in the industry. Our ML interview is every bit as background-blind as the interviews for our other tracks are: if you know your stuff, we’ll help you find a job even if you don’t have any experience at all. That said, we agree with many employers that knowledge of production concerns matters a lot, and we feature those concerns heavily both in our new interview and in our own hiring process. If you’d like to read more about validating and troubleshooting systems in production, check out this article on cross-validation, this article on hyperparameter tuning, or this webinar on running k-nearest-neighbors in production.
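Those three topics fit together nicely in one small example. Here’s a minimal sketch - again using scikit-learn and one of its built-in open datasets, which the post itself doesn’t prescribe - that tunes the main hyperparameter of a k-nearest-neighbors classifier with cross-validation:

```python
# A minimal sketch combining cross-validation, hyperparameter tuning, and
# k-nearest-neighbors: GridSearchCV tries each candidate k with 5-fold
# cross-validation and keeps the value that generalizes best.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# KNN is distance-based, so features should be on comparable scales.
pipeline = make_pipeline(StandardScaler(), KNeighborsClassifier())
param_grid = {"kneighborsclassifier__n_neighbors": [1, 3, 5, 9, 15]}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("Best k:", search.best_params_)
print(f"Cross-validated accuracy: {search.best_score_:.3f}")
```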

Finally, don’t forget data analysis skills. Being a data scientist doesn’t make you an ML engineer, but any good ML engineer should have some basic knowledge of data science. This goes hand in hand with the previous point, because many production problems are detectable through subtle statistical signatures. A great example of this sort of sanity checking comes from political modeler Nate Silver of FiveThirtyEight. He could easily have taken incoming polls at face value to build a model, but a modest amount of statistical analysis showed that the incoming data was subtly skewed by pollsters’ reluctance to deviate from the crowd, and his models had to be adjusted to compensate.
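The signature behind that particular example is simple enough to sketch: if polls of a given sample size cluster together more tightly than random sampling error alone would allow, the pollsters are probably herding toward the consensus. The numbers below are made up purely for illustration:

```python
# A minimal sketch of a herding check: polls of a given sample size should show
# at least as much spread as sampling error predicts. If the observed spread is
# much smaller, the results are suspiciously clustered. Numbers are illustrative.
import numpy as np

poll_shares = np.array([0.51, 0.52, 0.51, 0.52, 0.51, 0.52, 0.51])  # reported support
sample_size = 800  # respondents per poll

p = poll_shares.mean()
expected_std = np.sqrt(p * (1 - p) / sample_size)  # spread from sampling error alone
observed_std = poll_shares.std(ddof=1)             # spread actually observed

print(f"Expected spread from sampling error: {expected_std:.4f}")
print(f"Observed spread across polls:        {observed_std:.4f}")
if observed_std < 0.5 * expected_std:
    print("Polls are suspiciously clustered - possible herding.")
```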

Companies (rightly) care about these statistical skills and the common sense to know when to apply them, and our interview contains a data analysis section to test them. If you’ve ever caught something of this sort in a production environment (whether professionally or in a hobbyist project), it’s a good idea to play it up. If you haven’t, it’s a great idea to do some reading on basic data science even outside of ML.


Whether you’re a veteran looking for a streamlined job-search process or a newcomer trying to see how you stack up against the industry, we’d love to have you try out our new ML quiz - it costs you nothing but a few minutes of your time.

If you’re new to Triplebyte and you’d like to know more about our process, check out our main home page. The tl;dr is that we use vetted ML models to make your job hunt more rigorous and data-driven. If you do well on our interview, we’ll help you with a hassle-free job hunt that fast-tracks you right to final onsites with top companies and exciting startups (including all the companies discussed earlier in this article). If you don’t, that’s okay - we’ll give you personalized, actionable feedback about where you did well and where you can improve. The process is completely free for engineers no matter what - companies pay us because we make their hiring process more effective.

Our ML track is brand new, and we’re sure there are ways we can improve it. We’ll no doubt be making tweaks to it in the coming weeks and months as we gather more data. (In fact, we have some exciting ideas in the wings for our other interviews as well - more to come on this later.) If you think there’s something we can do better, or something you loved and think we should keep, we’d love to hear your feedback. You can reach us at ml-feedback@triplebyte.com.

