MLOps Engineer


Mission Lane


Role Locations

  • Remote
  • San Francisco, CA, United States
  • Richmond, VA, United States

Employees

251 - 500 people

Address

1504 Belleville St
Richmond, VA, 23230, US

Tech Stack

  • JavaScript
  • TypeScript
  • Node.js
  • React
  • React Native
  • Kotlin
  • Java
  • PostgreSQL
  • Kubernetes
  • GCP

Role Description

At Mission Lane, we are establishing a Machine Learning Operations Engineering team to support our organization's growth and excellence in FinTech machine learning. Our MLOps Engineers design, build, and maintain the machine learning deployment infrastructure that supports our Data Science team, an exciting responsibility that opens the opportunity to support the delivery of machine learning models at Mission Lane. We are looking for highly motivated, high-energy self-starters and problem solvers with a positive attitude, a strong work ethic, and solid interpersonal and communication skills.

Our MLOps engineers are developing a state-of-the-art platform on GCP where Data Scientists, Data Engineers, Machine Learning Engineers, and Software Engineers can experiment with, build, and deploy ML models at scale. As an MLOps Engineer, you will be part of a DevOps team responsible for deploying and operating the core services that bring software engineering best practices into the domain of data science: we aim to incorporate data and model versioning, collaboration, monitoring, and more in an intuitive way that allows our users to excel within the field.

As an ML Engineer on the MLOps team, you will be a key stakeholder and own the responsibility of developing our platform backend on GCP to deliver world-class practices within data engineering and machine learning modeling.

We use Bento.ml, Airflow, [fill in others], in combination with many other GCP services, to support a highly adaptive, fast-growing FinTech organization with a strong culture of technology in a start-up environment.
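
To give a concrete (and purely illustrative) sense of that stack, below is a minimal sketch of a BentoML prediction service of the kind the platform is built to host. It assumes BentoML 1.x and a scikit-learn model already saved to the BentoML model store; the model tag, service name, and predict endpoint are hypothetical placeholders, not an existing Mission Lane service.

    import bentoml
    from bentoml.io import NumpyNdarray

    # Hypothetical model tag -- assumes a scikit-learn model was saved earlier,
    # e.g. bentoml.sklearn.save_model("credit_risk_model", trained_model).
    runner = bentoml.sklearn.get("credit_risk_model:latest").to_runner()

    svc = bentoml.Service("credit_risk_service", runners=[runner])

    @svc.api(input=NumpyNdarray(), output=NumpyNdarray())
    async def predict(features):
        # Delegate scoring to the runner, which BentoML can scale independently.
        return await runner.predict.async_run(features)

A service like this can be containerized with the bentoml containerize command and deployed to Kubernetes on GCP, which is roughly the deployment path the responsibilities below describe.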

Expected Role Responsibilities:

  • Setting up CI/CD, build processes, etc.
  • Managing the production deployment process
  • Driving development & adoption of self-service capabilities & automation
  • Onboarding new ETL tools, applications, and processes to the platform (see the pipeline sketch after this list)
  • Assisting with monitoring pipelines for failures and diagnosing/resolving problems
  • Creating documentation & conducting training sessions on DevOps processes
  • Setting up build agents with appropriate binaries, configurations, and proxy settings
  • Handling firewall requests & configuration settings
  • Experience with every GCP framework and service is not necessary, but a clear understanding of distributed computing and machine learning is, as well as knowing how to architect solutions on GCP overall
  • Partner with architecture and other senior leads to address the data needs of our rapidly-growing business
  • Designing and developing machine learning and deep learning systems
  • Study and transform data science prototypes
  • Partner with data scientists, sales, marketing, operation, and product teams to build and deploy machine learning models that unlock growth
  • Build custom integrations between cloud-based systems using APIs
  • Write complex and efficient queries to transform raw data sources into easily accessible models by coding across several languages such as Java, Python, Kotlin, and SQL
  • Architect, build and launch new data models that provide intuitive analytics to the team
  • Build data expertise and own data quality for the pipelines you create
  • Understanding of data structures, data modeling and software architecture
  • Experience running machine learning tests and experiments
  • Research and implement appropriate ML algorithms and tools
  • Track record of developing machine learning applications according to requirements
  • Train and retrain systems when necessary
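
As a rough, hedged illustration of the pipeline onboarding and monitoring responsibilities above, here is a minimal Airflow 2.x DAG skeleton of the kind a newly onboarded ETL might start from. The DAG id, task names, and callables (extract_transactions, load_to_warehouse) are hypothetical placeholders rather than an existing Mission Lane pipeline.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical extract/load steps -- the names are placeholders for illustration.
    def extract_transactions(**context):
        print("pull raw records from the source system")

    def load_to_warehouse(**context):
        print("write transformed records to the warehouse")

    default_args = {
        "owner": "mlops",
        "retries": 2,                         # automatic retries before a task is marked failed
        "retry_delay": timedelta(minutes=5),
    }

    with DAG(
        dag_id="example_etl_onboarding",      # illustrative DAG id
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        catchup=False,
        default_args=default_args,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
        load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

        extract >> load

Retries, alerting, and failure callbacks on DAGs like this are where the monitoring and diagnosis responsibilities listed above show up in day-to-day work.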

Skill requirements include:

  • Excellent communication skills
  • Ability to work in a team
  • Outstanding analytical and problem-solving skills
  • BSc in Computer Science, Mathematics or similar field; Master’s degree is a plus
  • Three or more years of relevant software engineering experience (Kotlin, Python, Java, and R [Others?]) in a data-focused role
  • Proven experience as a Machine Learning Engineer or similar role
  • Proven track record using GCP and knowledge of GCP tools related to machine learning and big data, such as [fill in desired tools and technologies]
  • Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
  • Experience in designing and building highly scalable and reliable data pipelines using Big Data (Airflow, Python, Snowflake).
  • Experience with K8s, EMR, Bigtable, … [Others?]
  • Ability to extend existing ML libraries and frameworks
  • Experience in building and deploying machine learning models
  • Passion for creating data infrastructure technologies from scratch using the right tools for the job
  • Experience in solving complex data processing and storage challenges through scalable, fault-tolerant architecture
  • Passionate about data engineering, analytics, distributed systems and solving complex data problems.
  • Prior experience shipping scalable data solutions in the cloud (AWS, Azure, GCP) and database technologies such as Snowflake, Redshift, SQL/NoSQL, and/or columnar databases
  • Experience providing debugging & troubleshooting assistance

Desired abilities include:

  • Deep knowledge of math, probability, statistics and algorithms
  • BS degree in a related field, MS preferred
  • Knowledge of the Snowflake platform

About Mission Lane

Mission Lane is a next-generation financial technology company built for the people. We’re real humans on a mission to make a real impact, with inclusive products, thoughtful customer service, and a commitment to transparency (no surprise fees, no hidden agendas). The best part? With over one million members already on board, we’re just getting started.

Company Culture

Our employees have grit. We're fast-paced and all in. We are highly collaborative, and you will have the autonomy and scope to get things done.

Interested in this role?
Skip straight to final-round interviews by applying through Triplebyte.