Google recently announced Noncommercial Tiers for Google Earth Engine. This is a big change that affects all non-commercial users of the GEE platform. Before the change, if you qualified for non-commercial use, you could use Earth Engine without any restrictions or fees. With the introduction of non-commercial tiers, you now have monthly limits on how much compute you can use for free. Once you exceed the allocated monthly quota, your account enters a Restricted mode that slows down computations triggered through your account.
If you are a non-commercial user of Earth Engine, you now need to monitor and manage your quota usage to ensure you comply with these limits. This post outlines the concepts and tools you can use for quota monitoring.
The post covers the following topics:
Understanding Earth Engine Compute Unit (EECU)
Monitoring Quota using Google Cloud Console
Monitoring Quota using Python (using Google Colab)
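As a preview of the Python approach, here is a minimal sketch of how you could query your project's compute usage from Cloud Monitoring using the google-cloud-monitoring client. The project ID is a placeholder, and the metric name earthengine.googleapis.com/project/cpu/usage_time is my assumption of the relevant Earth Engine usage metric; verify it against Metrics Explorer for your project before relying on the numbers.

```python
import time
from google.cloud import monitoring_v3

# Placeholder Cloud project associated with your Earth Engine account
project_name = 'projects/my-ee-project'

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval({
    'start_time': {'seconds': now - 30 * 24 * 3600},  # last 30 days
    'end_time': {'seconds': now},
})

# Assumed metric for EECU usage; confirm the exact name in Metrics Explorer
metric = 'earthengine.googleapis.com/project/cpu/usage_time'

results = client.list_time_series(request={
    'name': project_name,
    'filter': f'metric.type = "{metric}"',
    'interval': interval,
    'view': monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
})

# Sum the reported EECU-seconds and convert to EECU-hours
total_seconds = sum(
    point.value.double_value
    for series in results
    for point in series.points)
print(f'EECU-hours used in the last 30 days: {total_seconds / 3600:.2f}')
```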
In this post, you will learn how to work with the Open Buildings 2.5D Temporal data and download it for many useful downstream applications, such as Visibility Analysis, Population Modeling, and 3D Visualization.
Google has two important large-scale AI-derived open building datasets:
Open Buildings V3: A dataset of 2D building footprint polygons extracted from high-resolution satellite imagery, covering Africa, South Asia, South-East Asia, Latin America and the Caribbean.
Open Buildings 2.5D Temporal V1: This is a newer dataset that aims to extract useful building attributes such as year of construction and building height. Since this data is derived from openly available medium-resolution Sentinel-2 imagery, it has temporal coverage from 2016-2023. A deep learning model was trained to predict building heights from Sentinel-2 images, so we also get height information for each year.
We will cover a Google Earth Engine workflow to process this data to make it usable in a GIS environment and extract a high-resolution Digital Surface Model (DSM). We will also see how to combine the Open Buildings V3 polygon building footprints with the Open Buildings 2.5D Temporal V1 data to create and extract yearly polygon datasets containing building heights that can be used in a GIS environment.
Open Buildings 2.5D Temporal Data Combined with Open Buildings V3 Polygons and Visualized in QGIS
The post is divided into the following sections.
Part 1: Extracting Building Height Raster and High-Resolution DSM
Part 2: Extracting Building Footprints with Heights
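To give a flavour of Part 2, here is a minimal sketch of how the two datasets could be combined with the Python API. The area of interest is a placeholder, and the band name, confidence threshold, and 4m sampling scale are assumptions based on the Earth Engine catalog entries; the full workflow in the post differs in the details.

```python
import ee
ee.Initialize()

# Placeholder area of interest
aoi = ee.Geometry.Rectangle([77.55, 12.95, 77.60, 13.00])

# Open Buildings 2.5D Temporal V1 rasters (asset ID from the Earth Engine catalog)
temporal = ee.ImageCollection('GOOGLE/Research/open-buildings-temporal/v1')

# Mosaic the 2023 inference results and keep the building height band
heights_2023 = (temporal
    .filterDate('2023-01-01', '2024-01-01')
    .filterBounds(aoi)
    .mosaic()
    .select('building_height'))

# Open Buildings V3 polygon footprints, filtered by detection confidence
footprints = (ee.FeatureCollection('GOOGLE/Research/open-buildings/v3/polygons')
    .filterBounds(aoi)
    .filter(ee.Filter.gte('confidence', 0.7)))

# Attach the mean predicted height to each footprint
footprints_with_height = heights_2023.reduceRegions(
    collection=footprints,
    reducer=ee.Reducer.mean().setOutputs(['height']),
    scale=4)  # assumed effective resolution of the temporal rasters

task = ee.batch.Export.table.toDrive(
    collection=footprints_with_height,
    description='building_footprints_with_heights_2023',
    fileFormat='SHP')
task.start()
```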
When exporting large rasters from Google Earth Engine, it is recommended that you split your exports into several smaller tiles. In this post, I will share best practices for creating tiled exports in your target projection that can be mosaicked together without any pixel gaps or overlaps. The key concept is the use of crsTransform to ensure that each individual tile is on the same pixel grid.
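For illustration, here is a hedged sketch of what such a tiled export could look like in the Python API. The source image, projection, grid origin, and tile geometries are placeholders; the important part is that every export uses the same crsTransform, so all tiles share one pixel grid.

```python
import ee
ee.Initialize()

# Placeholder source image and target projection
image = ee.Image('USGS/SRTMGL1_003')
crs = 'EPSG:32643'
scale = 30

# A shared pixel grid: [xScale, xShear, xTranslation, yShear, yScale, yTranslation]
# Anchoring every tile to the same origin keeps all exports on one grid.
transform = [scale, 0, 500000, 0, -scale, 2000000]

# Placeholder tile geometries; in practice these would come from a grid FeatureCollection
tiles = [
    ee.Geometry.Rectangle([76.5, 12.5, 77.0, 13.0]),
    ee.Geometry.Rectangle([77.0, 12.5, 77.5, 13.0]),
]

for i, tile in enumerate(tiles):
    task = ee.batch.Export.image.toDrive(
        image=image.clip(tile),
        description=f'dem_tile_{i}',
        region=tile,
        crs=crs,
        crsTransform=transform,  # same transform for every tile: no gaps or overlaps
        maxPixels=1e10)
    task.start()
```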
A temporally consistent global multi-class time-series classification dataset is critical to understand and quantify long-term changes. Until now, the choices were limited to lower-resolution datasets such as MODIS Landcover (2000-present) at 500m resolution or ESA CCI (1992-present) at 300m resolution. We now have a new dataset, GLC_FCS30D, that provides a high-resolution landcover time-series derived from the Landsat archive (1984-2022) at 30m resolution with 35 classes. This is a very valuable dataset for studying landscape dynamics at high resolution and the first of its kind to be made available in the public domain. The source dataset was released on Zenodo and can be downloaded as GeoTIFF files. This data is also available in the Google Earth Engine Community Catalog and can be used within GEE directly. In this post, I want to share some technical details and scripts to help you analyze this data using Google Earth Engine. You will learn:
How to access and pre-process the GLC_FCS30D dataset.
How to visualize and compare landcover changes between 1985 and 2022.
How to calculate landcover statistics and export a CSV with areas of each class for the entire time series over multiple regions.
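As a preview of the statistics calculation, here is a minimal sketch using the Python API. The asset path follows the GEE Community Catalog convention and the band-to-year mapping (b23 for 2022) is an assumption; check both against the catalog page before using.

```python
import ee
ee.Initialize()

# Asset path as published in the GEE Community Catalog (verify before use)
annual = ee.ImageCollection('projects/sat-io/open-datasets/GLC-FCS30D/annual')

# Placeholder region of interest
region = ee.Geometry.Rectangle([77.0, 12.5, 77.5, 13.0])

# Mosaic the tiles; each band of the mosaic holds one year of the series
mosaic = annual.mosaic()
landcover = mosaic.select('b23')  # assumption: b23 corresponds to 2022

# Area of each landcover class (in square meters) within the region
areas = ee.Image.pixelArea().addBands(landcover).reduceRegion(
    reducer=ee.Reducer.sum().group(groupField=1, groupName='classification'),
    geometry=region,
    scale=30,
    maxPixels=1e10)
print(areas.getInfo())
```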
In this post, we will learn how to build a regression model in Google Earth Engine and use it to estimate total aboveground biomass using openly available Earth Observation datasets.
NASA’s Global Ecosystem Dynamics Investigation (GEDI) mission collects LIDAR measurements along ground transects at 30m spatial resolution, sampled at 60m intervals. The GEDI waveform captures the vertical distribution of vegetation structure and is used to derive estimates of Aboveground Biomass Density (AGBD) at each sample. These sample estimates of AGBD are useful, but since they are point measurements, you cannot directly use them to calculate total aboveground biomass for a region. We can use other satellite datasets together with the GEDI samples to build a regression model that maps and quantifies biomass across a region.
Regression Workflow
This article shows how we can build and run the entire workflow in Earth Engine, from pre-processing to building a regression model to running the predictions. You will also learn some advanced techniques and best practices, such as:
How to fuse datasets of different resolutions by using setDefaultProjection() and reduceResolution() to align them to a common grid.
How to sample pixels from rasters with sparse data efficiently and precisely by leveraging the image mask using stratifiedSample().
How to split your workflow into separate steps and use Exports to avoid 'user memory limit exceeded' or 'computation timed out' errors.
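To make this concrete, here is a hedged sketch in the Python API of the resolution alignment, sampling, and regression steps. The predictor stack and GEDI AGBD mosaic are placeholder assets, and the projections, scales, and sample sizes are illustrative; the actual workflow in the post differs in the details.

```python
import ee
ee.Initialize()

# Placeholder assets: a stack of predictor bands (e.g. a Sentinel-2 composite)
# and a mosaic of GEDI AGBD values; replace with your own images.
predictors = ee.Image('users/example/predictor_stack')
gedi = ee.Image('users/example/gedi_agbd_mosaic').rename('agbd')
region = ee.Geometry.Rectangle([77.0, 12.5, 77.5, 13.0])

# Bring the sparse GEDI samples onto a coarser common grid
gedi_100m = (gedi
    .setDefaultProjection(crs='EPSG:32643', scale=25)
    .reduceResolution(reducer=ee.Reducer.mean(), maxPixels=1024)
    .reproject(crs='EPSG:32643', scale=100))

# Use the GEDI mask as a class band so stratifiedSample() only draws
# points where valid AGBD values exist.
stacked = predictors.addBands(gedi_100m).addBands(
    gedi_100m.mask().rename('valid'))

samples = stacked.stratifiedSample(
    numPoints=0,
    classBand='valid',
    region=region,
    scale=100,
    classValues=[0, 1],
    classPoints=[0, 1000],  # sample only where GEDI data exists
    tileScale=16)

# Random Forest in regression mode, trained on the sampled pixels
regressor = (ee.Classifier.smileRandomForest(numberOfTrees=50)
    .setOutputMode('REGRESSION')
    .train(
        features=samples,
        classProperty='agbd',
        inputProperties=predictors.bandNames()))

predicted = predictors.classify(regressor).rename('agbd_predicted')
```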
Dynamic World is a new landcover product developed by Google and the World Resources Institute (WRI). It is a unique dataset designed to make it easy for users to develop locally relevant landcover classifications. Unlike other landcover products, which try to classify each pixel into a single class, the Dynamic World (DW) model gives you the probability of the pixel belonging to each of 9 different landcover classes. The full dataset contains the DW class probabilities for every Sentinel-2 scene since 2015 with less than 35% cloud cover. It is also updated continuously with detections from new Sentinel-2 scenes as soon as they are available. This makes DW ideal for change detection and monitoring applications.
A key fact about this dataset is that Dynamic World is not a ready-to-use landcover product. Users are expected to fine-tune the output of DW with local knowledge into a final landcover product. Since DW provides per-pixel probabilities generated by a Fully Convolutional Neural Network (FCNN) model, many of the difficult problems encountered in classifying remotely sensed imagery are already addressed, allowing users to refine the output with a relatively simple model (such as Random Forest) and a small amount of local training data.
A good mental model for Dynamic World is to not think of it as a landcover product, but as a dataset that provides 9 additional bands of landcover-related information for each Sentinel-2 image, which can be refined to build a locally relevant classification or change detection model.
As seen in the mangrove classification example, using the Dynamic World probability bands as input to a supervised classification model can help you generate a more accurate landcover map in less time. It also eliminates the need for post-processing the results.
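As an illustration of this idea, here is a minimal sketch in the Python API that uses the mean Dynamic World probabilities as inputs to a Random Forest classifier. The region and training points are placeholders you would replace with your own local data.

```python
import ee
ee.Initialize()

# Placeholder region and locally collected training points with a 'landcover' property
region = ee.Geometry.Rectangle([36.7, -1.4, 36.9, -1.2])
training_points = ee.FeatureCollection('users/example/local_training_points')

probability_bands = [
    'water', 'trees', 'grass', 'flooded_vegetation', 'crops',
    'shrub_and_scrub', 'built', 'bare', 'snow_and_ice']

# Mean Dynamic World class probabilities over the region for 2023
dw_composite = (ee.ImageCollection('GOOGLE/DYNAMICWORLD/V1')
    .filterDate('2023-01-01', '2024-01-01')
    .filterBounds(region)
    .select(probability_bands)
    .mean())

# Sample the probability bands at the training locations
training = dw_composite.sampleRegions(
    collection=training_points, properties=['landcover'], scale=10)

# A simple Random Forest refines the probabilities into local classes
classifier = ee.Classifier.smileRandomForest(50).train(
    features=training,
    classProperty='landcover',
    inputProperties=probability_bands)

classified = dw_composite.classify(classifier)
```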
To test this concept and explore the potential of this new dataset in developing locally relevant landcover maps, I partnered with Google and WRI to develop a training workshop and host a 5-day “Mapathon” with participants from diverse backgrounds. The event was a mix of hands-on workshops and hackathon-style group projects that used Dynamic World for real-world applications.
The workshop was hosted by the Regional Centre for Mapping of Resources for Development (RCMRD) in Nairobi, Kenya. You can read more about the event in this article. Elise Mazur from WRI and I also gave a talk about our experience at Geo for Good 2023.
In this post, I want to share more technical details about the workshop materials and code for projects for those who may want to use Dynamic World for their own applications.
Extracting building footprints from high-resolution imagery is a challenging task. Fortunately, we now have access to ready-to-use building footprint datasets extracted using state-of-the-art ML techniques. Google’s Open Buildings project has mapped and extracted 1.8 billion buildings in Africa, South Asia, South-East Asia, Latin America and the Caribbean. Microsoft’s Global ML Building Footprints project has made over 1.24 billion building footprints available from most regions of the world.
Update: VIDA has released the most comprehensive buildings dataset by combining both the Google and Microsoft building footprint datasets. This dataset is available in Google Earth Engine via the GEE Community Catalog.
Given the availability of these datasets, we can now analyze them to create derivative products. In this post, we will learn how to access these datasets and compute the aggregate count of buildings within a regular grid using Google Earth Engine. We will then export the grid as a shapefile and create a building density map in QGIS.
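For reference, here is a simple (if not the most scalable) sketch in the Python API of counting Open Buildings V3 footprints per grid cell. The region, grid size, and projection are placeholders; the post may use a different, more efficient aggregation approach.

```python
import ee
ee.Initialize()

# Placeholder region of interest
region = ee.Geometry.Rectangle([36.7, -1.4, 36.9, -1.2])

buildings = (ee.FeatureCollection('GOOGLE/Research/open-buildings/v3/polygons')
    .filterBounds(region))

# Regular 1 km grid covering the region
grid = region.coveringGrid(proj='EPSG:3857', scale=1000)

# Count the buildings intersecting each grid cell
def count_buildings(cell):
    cell = ee.Feature(cell)
    count = buildings.filterBounds(cell.geometry()).size()
    return cell.set('building_count', count)

grid_with_counts = grid.map(count_buildings)

# Export the grid with counts for styling as a density map in QGIS
task = ee.batch.Export.table.toDrive(
    collection=grid_with_counts,
    description='building_density_grid',
    fileFormat='SHP')
task.start()
```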
An important concept in spatial statistics is pixel weights. When calculating pixel statistics within a polygon, partial pixel overlaps are treated differently by different packages, and you need to understand this to evaluate the accuracy of your results. Consider the following image. What is the correct answer?
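Earth Engine itself offers both behaviours: weighted reducers scale each pixel's contribution by its overlap fraction with the region, while unweighted reducers count every pixel whose center falls inside the region equally. The small sketch below, using a placeholder polygon over SRTM elevation, shows how the two choices can give different answers.

```python
import ee
ee.Initialize()

# SRTM elevation and a placeholder polygon that partially overlaps several pixels
image = ee.Image('USGS/SRTMGL1_003')
polygon = ee.Geometry.Rectangle([77.0001, 12.5001, 77.0009, 12.5009])

# Weighted (default): partially covered pixels contribute in proportion to overlap
weighted = image.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=polygon, scale=30)

# Unweighted: every pixel whose center falls inside the polygon counts equally
unweighted = image.reduceRegion(
    reducer=ee.Reducer.mean().unweighted(), geometry=polygon, scale=30)

print('Weighted mean:', weighted.getInfo())
print('Unweighted mean:', unweighted.getInfo())
```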
In this article, I will outline a method for extracting shorelines from satellite images in Google Earth Engine. This method is scalable and automatically extracts the coastline as a vector polyline. The full code link is available at the end of the post.
UPDATE: The post now includes tidal-phase filtering using HYCOM Data.
The method involves the following steps:
Create a Cloud-free Composite Image from images collected during the same tidal phase
Extract All Waterbodies
Remove Inland Water and Small Islands
Convert Raster to Vector
Simplify and Extract Coastline
Video Demonstration of the Script
We will go through the details of each step and review the Google Earth Engine API code required to achieve the results.
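To give a sense of the overall flow before diving into the details, here is a condensed sketch in the Python API. The area of interest, water index, and threshold are placeholders, and the tidal-phase filtering, inland-water removal, and final polyline conversion from the post are omitted for brevity.

```python
import ee
ee.Initialize()

# Placeholder coastal area of interest
aoi = ee.Geometry.Rectangle([72.7, 18.8, 73.2, 19.3])

# Step 1: cloud-free composite (tidal-phase filtering via HYCOM is omitted here)
composite = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
    .filterBounds(aoi)
    .filterDate('2023-01-01', '2024-01-01')
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
    .median())

# Step 2: extract water using an MNDWI threshold (index and threshold are illustrative)
mndwi = composite.normalizedDifference(['B3', 'B11']).rename('mndwi')
water = mndwi.gt(0.1)

# Step 4: vectorize the water mask
vectors = water.selfMask().reduceToVectors(
    geometry=aoi, scale=30, geometryType='polygon', maxPixels=1e10)

# Steps 3 and 5: keep the largest waterbody (the sea) and simplify its outline
def add_area(f):
    f = ee.Feature(f)
    return f.set('area', f.geometry().area(maxError=10))

sea = ee.Feature(vectors.map(add_area).sort('area', False).first())
coastline_polygon = sea.geometry().simplify(maxError=30)
# Converting this polygon outline to a clean polyline and removing small
# islands is covered in the full workflow.
```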
If you are like me, you have a lot of assets uploaded to Earth Engine. As you upload more and more assets, managing this data becomes quite a cumbersome task. Earth Engine provides a handy Command-Line Tool that helps with asset management. While the command-line tool is very useful, it falls short when it comes to bulk data management tasks.
What if you want to rename an ImageCollection? You will need to manually move each child image to a new collection. If you want to delete assets matching certain keywords, you’ll need to write a custom shell script. If you are running low on your asset quota and want to delete large assets, there is no direct way to list them. Fortunately, the Earth Engine Python Client API comes with a handy ee.data module that we can leverage to write custom scripts. In this post, I will cover the following use cases with full Python scripts that can be used by anyone to manage their assets:
How to get a list of all your assets (including folders/sub-folders/collections)
How to share all assets in a folder
How to find the quota consumed by each asset and find large assets
How to rename ImageCollections
How to delete ImageCollections
The post explains each use-case with code snippets. If you want to just grab the scripts, they are linked at the end of the post.
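As a taste of what the ee.data module makes possible, here is a minimal sketch that recursively lists every asset under a folder, including sub-folders and collections. The parent path is a placeholder; a traversal like this forms the basis for the sharing, quota, rename, and delete scripts.

```python
import ee
ee.Initialize()

def list_assets_recursive(parent):
    """Recursively collect all assets under a folder or collection."""
    assets = []
    response = ee.data.listAssets({'parent': parent})
    for asset in response.get('assets', []):
        assets.append(asset)
        # Descend into folders and image collections to find their children
        if asset['type'] in ('FOLDER', 'IMAGE_COLLECTION'):
            assets.extend(list_assets_recursive(asset['name']))
    return assets

# Placeholder asset root; replace with your own folder path
root = 'projects/earthengine-legacy/assets/users/username'
for asset in list_assets_recursive(root):
    print(asset['type'], asset['name'])
```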