Impact Observatory

11 - 25 employees
< 10 engineers
$500k - $1m funding
Pre-Series A

Track environmental trends, threats, and the impact of actions to reduce biodiversity loss and to maintain the clean food and water that healthy ecosystems provide. Measure the carbon stored in vast landscapes, providing key data to support the creation of new national parks, and show the value of the wetlands that protect coastal and upstream communities from disasters. Monitor the health and impact of farming, settlements, and resource extraction across an entire country or regional watershed.


Why join us?

  • Impact Observatory (IO) is a mission-driven technology company bringing artificial intelligence and machine learning algorithms and data to sustainability and environmental risk analysis for governments, civil society, and green markets.

  • Impact Observatory (IO) built the UN Biodiversity Lab's data explorer to give policymakers the tools they need to meet their commitments in the fight against climate change (https://unbiodiversitylab.org/). IO is committed to continuing to put great science into the hands of decision makers.

  • Impact Observatory (IO) produced the first-ever 10m Land Use/Land Cover map, released in the summer of 2021 (https://www.arcgis.com/home/item.html?id=d6642f8a4f6d4685a24ae2dc0c73d4ac). We intend to build on that success to produce more and different maps.


Engineering at Impact Observatory

Engineering team and processes

The internal engineering team reports directly to our Head of Engineering, while an external contract team has historically focused on our primary web application and geospatial platform (with guidance from internal engineering). We use Kanban to organize our work.

Technical challenges

Our engineering team works closely with our product, design, and science teams to fulfill company objectives. Our internal team focuses on data pipelines to ingest training data and to manage model training and deployment. We develop and maintain additional data pipelines that run various analyses, using our data products and others as input. For example, to create our 10m 2020 global LULC map, we processed ~450,000 Sentinel-2 scenes (~500 TB of data).
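
As a rough illustration of the scene-discovery step in such a pipeline, the sketch below searches a public Sentinel-2 STAC API for one year of low-cloud scenes over an area of interest. It assumes the pystac-client library; the Earth Search endpoint and the bounding box are illustrative, not a description of IO's actual setup.

  from pystac_client import Client

  # Open a public STAC API that indexes Sentinel-2 L2A scenes.
  # (Endpoint shown for illustration; any STAC-compliant catalog works the same way.)
  catalog = Client.open("https://earth-search.aws.element84.com/v1")

  # Search for scenes over an area of interest for the 2020 map year,
  # filtering out the cloudiest acquisitions.
  search = catalog.search(
      collections=["sentinel-2-l2a"],
      bbox=[-77.2, 38.8, -76.9, 39.0],          # illustrative AOI (lon, lat)
      datetime="2020-01-01/2020-12-31",
      query={"eo:cloud_cover": {"lt": 20}},
  )

  items = list(search.items())
  print(f"Found {len(items)} scenes")

  # Each item carries hrefs to the underlying imagery assets, which a pipeline
  # would hand off for download, model training, or inference.
  for item in items[:3]:
      print(item.id, list(item.assets))

Scaling this same query pattern to hundreds of thousands of scenes is where much of the pipeline work lies: parallel retrieval, retries, and cataloging the results.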

Projects you might work on
  • Create a data pipeline to pull training data from a satellite provider, then store and index it in our catalog.

  • Create a data pipeline to deploy a trained model over a (relatively) small area of interest or the entire globe.

  • Modify our web application to easily switch between multiple assets in a STAC catalog.

  • Spatially join vector and raster data to calculate zonal statistics over user-specified regions of interest (a minimal sketch follows this list).
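
The last project above can be sketched in a few lines, assuming the geopandas and rasterstats libraries; the file names and the carbon-density raster are hypothetical stand-ins for IO's actual data products.

  import geopandas as gpd
  from rasterstats import zonal_stats

  # User-specified regions of interest (e.g. watershed boundaries) as vector data.
  regions = gpd.read_file("regions_of_interest.geojson")

  # rasterstats does not reproject, so the vectors are assumed to already share
  # the raster's coordinate reference system.
  stats = zonal_stats(
      regions,                     # vector geometries (GeoDataFrame)
      "carbon_density_10m.tif",    # raster to summarize (hypothetical file)
      stats=["mean", "sum", "count"],
      nodata=-9999,
  )

  # Attach the results back onto the GeoDataFrame for reporting or export.
  regions["carbon_mean"] = [s["mean"] for s in stats]
  regions["carbon_sum"] = [s["sum"] for s in stats]
  print(regions[["carbon_mean", "carbon_sum"]].head())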

Tech stack
Python
JavaScript
PostgreSQL
Azure Cloud
Terraform
