Role Location
- Remote
Tech Stack
- Python
- SQL
- Spark
- PyTorch
- Scikit-Learn
- Kubernetes
- Docker
- AWS
- Azure Cloud
- Google Cloud Platform
- Terraform
Role Description
The Data Science/Data Analytics team is looking for talented Natural Language Processing (NLP) scientists and engineers to help build our vision. You'll work closely with our Data Scientists on projects to analyze and observe world-scale datasets, write code that scales to produce never-before-seen insights, and construct APIs to deliver our product vision.
This position is remote, but you should be comfortable working on New York time.
Responsibilities
- Develop and deploy state-of-the-art Natural Language Processing capabilities
- Build and maintain distributed machine learning pipelines
- Analyze and propose technical solutions to invent, enable, and enhance our product offerings
- Be responsible for automating, testing, and deploying your work
- Collaborate with fellow engineers and data scientists across the organization
About You
- You are experienced in building world-scale products from scratch in a venture-backed, fast-paced startup
- You hold a BS, MS, or PhD in Computer Science, Data Science, or have equivalent experience
- You have 3-10 years of real-world professional experience writing scalable NLP software
- You have experience developing capabilities to extract insights from diverse and incomplete language sets
- You have a track record of ownership and delivery of projects with major organizational impact
- You care deeply about engineering excellence, clean code, and knowledge-sharing
- You have strong written and verbal communication skills
Nice to have, but not required
- Experience with Python machine learning toolsets (Scikit-learn, NumPy, Pandas, Dedupe)
- Experience with container technologies like Docker and Kubernetes
- Working knowledge of cloud services like AWS, Azure, or GCP
Technologies we love
- Languages: Python, Go, Java
- Tools: Docker, Git, Kubernetes, Swagger/OpenAPI, AWS
- Datastores: Elasticsearch, Postgres, Redshift, Neo4j
- Frameworks: BERT, LSTMs, CRFs, LDA
About Altana AI
At Altana, we've built the world's most comprehensive knowledge graph of the global supply chain. This data asset, comprising billions of records, covers more than 40% of global trade at the transaction level, nearly 400 million companies, supply chain movements, illicit activity, national trade regulations, and more.
On top of this knowledge graph, our Altana Atlas platform brings together data fusion, expert rules on supply chain risk and compliance, and artificial intelligence to help illuminate global business networks, analyze risk, and build supply chain resilience.
Our team is looking for talented leaders and individual contributors to join our mission and deliver our vision to the world.