Senior Full Stack Engineer

Silicon Valley, CA, United States

Datatron Technologies


Role Location

  • Silicon Valley, CA, United States

Employees

11 - 25 people

Address

5150 El Camino Real Ste C20
Los Altos, CA, 94022-1542, US

Tech Stack

  • Python
  • React
  • Containerization
  • TensorFlow
  • Scikit-Learn
  • Scala

Role Description

Datatron’s full stack engineers are responsible for building the systems and tools that make our teams productive and the technology stack that powers the applications our customers use every day. We believe standing up a healthy service should be fast, standardized, and intuitive. We can ship code to our customers continuously. We’re empowered to use tools and technologies that provide the Datatron community with the best possible experience.

As an engineer on our team, there’s no limit to the impact you can have on the business. All of our engineering teams are responsible for deploying and supporting their own services, and because of this they look to us for advice, guidance, and stability. We invest heavily in infrastructure because we know that engineers are happiest when they’re shipping code.

We believe in picking the right tools for the job, whether that means evaluating third-party vendors or building something in house. We aren't dogmatic about technologies, and we adapt our systems based on the needs of the organization. Currently you'll find us writing Python and Scala and integrating our services with Amazon Web Services, Jenkins, Splunk, and Graphite, to name a few.
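For illustration only, here is a minimal sketch of what integrating a Python service with Graphite can look like, using Carbon's plaintext protocol. This is not Datatron's actual code; the host name and metric path are hypothetical placeholders.

    import socket
    import time

    def send_metric(path: str, value: float,
                    host: str = "graphite.internal",  # hypothetical Carbon host
                    port: int = 2003) -> None:
        # Push one data point to Graphite via Carbon's plaintext protocol:
        # "<metric path> <value> <unix timestamp>\n" sent over TCP.
        message = f"{path} {value} {int(time.time())}\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(message.encode("utf-8"))

    if __name__ == "__main__":
        # Hypothetical metric name, purely for illustration.
        send_metric("datatron.api.request_latency_ms", 42.0)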

Your Responsibilities

  • Developing and maintaining the platform that runs all of Datatron's services
  • Writing and maintaining cloud automation software and internal tools that help developers deploy, run, and monitor individual Datatron services
  • Championing best practices for building scalable and reliable services
  • Conducting root cause analysis on production issues with other engineers
  • Responding to production incidents and determining how to prevent them in the future
  • Contributing ideas on how we can continuously improve our systems and processes
  • Leading development of architecture and standards for a business metrics warehouse
  • Developing and maintaining ETL infrastructure and processes (a minimal sketch follows this list)
  • Implementing systems for tracking data quality and consistency
  • Working closely with data scientists, engineers, and analysts to design and maintain scalable data models and pipelines
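To give a flavor of the ETL work described above, here is a minimal, illustrative extract-transform-load step in Python. It is not Datatron's actual pipeline; the file names and fields are hypothetical placeholders.

    import csv
    from datetime import datetime, timezone

    def extract(path: str) -> list[dict]:
        # Read raw rows from a hypothetical source export.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows: list[dict]) -> list[dict]:
        cleaned = []
        for row in rows:
            # Basic data-quality check: skip rows missing the key metric.
            if not row.get("revenue"):
                continue
            cleaned.append({
                "customer_id": row["customer_id"].strip(),
                "revenue": float(row["revenue"]),
                "loaded_at": datetime.now(timezone.utc).isoformat(),
            })
        return cleaned

    def load(rows: list[dict], path: str) -> None:
        # Write the cleaned rows to a warehouse-style output file.
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["customer_id", "revenue", "loaded_at"])
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        load(transform(extract("raw_events.csv")), "daily_revenue.csv")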

Requirements

  • 5+ years of software experience (startup experience is a big bonus)
  • Extensive software engineering experience with an object-oriented or scripting language (Python, Java/Scala, Ruby, Perl)
  • Experience architecting data systems from scratch

Optional

  • Extensive professional experience with a distributed column-store architecture (Redshift, Vertica, Greenplum, Teradata)
  • Proven track record of leading projects through design, development, release, and maintenance phases
  • Ability to work with varied forms of data infrastructure, including RDBMS (PostgreSQL, MySQL), NoSQL (MongoDB, DynamoDB, Redis), MapReduce (Hadoop, Hive, HBase, Pig), and logging/messaging systems (Kafka, Scribe, Flume, Kinesis, SQS)

About You

  • You love to code, and you've worked with multiple programming languages.
  • You love to build tools that enable a whole organization to rapidly produce software products and services.
  • You have an insatiable craving for making applications more consistent and reliable over time.
  • You believe you can automate everything, and you can identify opportunities to remove manual processes.
  • You understand scalable web architectures and have implemented a few.
  • You enjoy working in a collaborative environment, and you're committed to driving projects to completion independently and creatively.
  • You're a great communicator and can advocate for your proposals while also empathizing with your teammates' goals and priorities.
  • You graciously help others who look to you for feedback and guidance.
  • You think ahead and build for the future.

Our ideal candidate possesses some of the following:

  • Experience with UNIX systems administration, including solid scripting skills in Shell, Python, Scala, or Java
  • Knowledge of configuration management systems such as Puppet, Chef, Salt, or Ansible
  • Experience building and running RESTful web services on the AWS platform (see the sketch after this list)
  • Contributions to open source projects
  • A passion for sustainability and/or big data
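As a sketch of the kind of RESTful service mentioned above, here is a minimal Flask application. It is not Datatron's actual API; the routes and payloads are hypothetical, and in production such a service would typically run behind AWS infrastructure (for example, a load balancer or API Gateway) rather than Flask's built-in development server.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health", methods=["GET"])
    def health():
        # Simple liveness check for monitoring and load balancers.
        return jsonify(status="ok"), 200

    @app.route("/models/<model_id>/score", methods=["POST"])
    def score(model_id: str):
        # Placeholder response; a real service would invoke the deployed model.
        return jsonify(model_id=model_id, score=0.0), 200

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)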

About Datatron Technologies

Datatron speeds up AI life-cycle model management in today's machine-learning paradigm by orders of magnitude. We provide ML model deployment, scoring, monitoring, and governance to enterprise clients in the financial, healthcare, and telecommunications sectors.

Company Culture

We are looking for talented individuals who are eager to learn, want to interact with customers, and want the chance to make a big impact.

Interested in this role?
Skip straight to final-round interviews by applying through Triplebyte.