Dynamic World is a new landcover product developed by Google and the World Resources Institute (WRI). It is a unique dataset designed to make it easy for users to develop locally relevant landcover classifications. Unlike other landcover products, which classify each pixel into a single class, the Dynamic World (DW) model gives you the probability of the pixel belonging to each of 9 different landcover classes. The full dataset contains the DW class probabilities for every Sentinel-2 scene since 2015 with <35% cloud cover. It is updated continuously with detections from new Sentinel-2 scenes as soon as they become available. This makes DW ideal for change detection and monitoring applications.
A key fact about this dataset is that Dynamic World is not a ready-to-use landcover product. Users are expected to fine-tune the output of DW with local knowledge into a final landcover product. Since DW provides per-pixel probabilities generated by a Fully Convolutional Neural Network (FCNN) model, many of the difficult problems encountered in classifying remotely sensed imagery are already addressed, allowing users to refine the output with a relatively simple model (such as Random Forest) and a small amount of local training data.
A good mental model for Dynamic World is to think of it not as a landcover product, but as a dataset that provides 9 additional bands of landcover-related information for each Sentinel-2 image, which can be refined to build a locally relevant classification or change detection model.
As seen in the mangrove classification example, using the Dynamic World probability bands as input to a supervised classification model can help you generate a more accurate landcover map in less time. It also eliminates the need for post-processing the results.
To test this concept and explore the potential of this new dataset for developing locally relevant landcover maps, I partnered with Google and WRI to develop a training workshop and host a 5-day “Mapathon” with participants from diverse backgrounds. The event mixed hands-on workshops with hackathon-style group projects that used Dynamic World for real-world applications.
The workshop was hosted by the Regional Centre for Mapping of Resources for Development (RCMRD) in Nairobi, Kenya. You can read more about the event in this article. Elise Mazur from WRI and I also gave a talk about our experience at Geo for Good 2023.
In this post, I want to share more technical details about the workshop materials and the project code for those who may want to use Dynamic World for their own applications.
Extracting building footprints from high-resolution imagery is a challenging task. Fortunately, we now have access to ready-to-use building footprint datasets extracted using state-of-the-art ML techniques. Google’s Open Buildings project has mapped and extracted 1.8 billion buildings in Africa, South Asia, South-East Asia, Latin America and the Caribbean. Microsoft’s Global ML Building Footprints project has made over 1.24 billion building footprints available for most regions of the world.
Given the availability of these datasets, we can now analyze them to create derivative products. In this post, we will learn how to access these datasets and compute the aggregate count of buildings within a regular grid using Google Earth Engine. We will then export the grid as a shapefile and create a building density map in QGIS.
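The post does the aggregation in Earth Engine, but the core idea of binning building footprints into a regular grid can be sketched in plain Python. The coordinates, grid origin and cell size below are made up for illustration:

```python
from collections import Counter

def building_density(centroids, origin, cell_size):
    """Count building centroids falling in each cell of a regular grid.

    centroids: list of (x, y) coordinates (e.g. in a projected CRS, metres)
    origin: (x0, y0) of the grid's lower-left corner
    cell_size: width/height of each square cell
    Returns a Counter mapping (col, row) -> building count.
    """
    x0, y0 = origin
    counts = Counter()
    for x, y in centroids:
        col = int((x - x0) // cell_size)
        row = int((y - y0) // cell_size)
        counts[(col, row)] += 1
    return counts

# Hypothetical centroids in a 2 km x 2 km area, with 1 km grid cells
buildings = [(150, 200), (320, 900), (1400, 300), (1600, 1700), (1750, 1800)]
density = building_density(buildings, origin=(0, 0), cell_size=1000)
print(density)  # 2 buildings in cell (0,0), 1 in (1,0), 2 in (1,1)
```

In Earth Engine the equivalent aggregation runs server-side over the full dataset; the exported per-cell counts are what drive the density styling in QGIS.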
An important concept in spatial statistics is pixel weights. When calculating pixel statistics within a polygon, partial pixel overlaps are treated differently by different packages, and you need to understand this to evaluate the accuracy of your results. Consider the following image. What is the correct answer?
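A minimal sketch of why this matters, with made-up numbers: a weighted approach scales each pixel's contribution by the fraction of its area inside the polygon, while an "all touched" approach counts every intersecting pixel fully, so the two can disagree:

```python
def zonal_mean(values, weights):
    """Weighted mean of pixel values, where each weight is the fraction
    of the pixel's area covered by the polygon (0.0 to 1.0)."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# A polygon fully covers one pixel (value 10) and half-covers another (value 20).
pixels = [10, 20]

# Fractional-weight approach: the half-covered pixel contributes half as much.
print(zonal_mean(pixels, [1.0, 0.5]))   # (10*1.0 + 20*0.5) / 1.5 = 13.33...

# "All touched" approach: every intersecting pixel counts fully.
print(zonal_mean(pixels, [1.0, 1.0]))   # (10 + 20) / 2 = 15.0
```

Neither answer is wrong per se; they reflect different definitions of which pixels belong to the polygon, which is why results differ between packages.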
This was a long-awaited community event that took place after a break of 3 years. The conference was attended by over 200 people from across the world. I got a chance to participate and meet many QGIS community members in person.
In this post, I hope to share some of my insights and resources from the conference.
In this article, I will outline a method for extracting shorelines from satellite images in Google Earth Engine. The method is scalable and automatically extracts the coastline as a vector polyline. The link to the full code is at the end of the post.
The method involves the following steps:
Create a Cloud-free Composite Image
Extract All Waterbodies
Remove Inland Water and Small Islands
Convert Raster to Vector
Simplify and Extract Coastline
We will go through the details of each step and review the Google Earth Engine API code required to achieve the results.
Modern versions of QGIS come with a handy command-line utility called qgis_process. It allows you to access and run any Processing Tool, Script or Model from the Processing Toolbox in a terminal. This is very useful for automation since it doesn’t require you to open QGIS or manually click buttons. You can run the algorithms in headless mode and even schedule them to run at specific times.
This post covers the following topics:
How to launch the qgis_process command on Windows, Mac and Linux
How to find the parameters and values for each algorithm and build your command
An example showing how to do a spatial join on the command line using the Join Attributes by Location algorithm
An example showing how to run a model on the command line to automate a complex workflow
Want to follow along? You can download the data package containing all the datasets used in this post. Before running each command, make sure to replace the paths in the commands with the paths on your computer.
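As a sketch of the automation idea, here is how a qgis_process invocation of the Join Attributes by Location algorithm could be assembled and launched from a Python script. The file paths are placeholders (replace them with files from the data package), the PREDICATE value is an assumption shown for illustration, and actually running the command requires qgis_process to be on your PATH:

```python
import subprocess

# Placeholder paths -- replace with the paths on your computer.
input_layer = 'shapes.shp'
join_layer = 'points.shp'
output = 'joined.gpkg'

# qgis_process takes the algorithm id followed by PARAM=value pairs.
cmd = [
    'qgis_process', 'run', 'native:joinattributesbylocation', '--',
    f'INPUT={input_layer}',
    f'JOIN={join_layer}',
    'PREDICATE=0',          # assumed: 0 = intersects
    f'OUTPUT={output}',
]
print(' '.join(cmd))
# subprocess.run(cmd, check=True)  # uncomment once qgis_process is on PATH
```

Wrapping the call in Python (or a shell script) is what makes scheduling the workflow with cron or Task Scheduler straightforward.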
If you are like me, you have a lot of assets uploaded to Earth Engine. As you upload more and more assets, managing this data becomes quite a cumbersome task. Earth Engine provides a handy Command-Line Tool that helps with asset management. While the command-line tool is very useful, it falls short when it comes to bulk data management tasks.
What if you want to rename an ImageCollection? You will need to manually move each child image to a new collection. If you want to delete assets matching certain keywords, you’ll need to write a custom shell script. If you are running low on your asset quota and want to delete large assets, there is no direct way to list large assets. Fortunately, the Earth Engine Python Client API comes with a handy ee.data module that we can leverage to write custom scripts. In this post, I will cover the following use cases with full Python scripts that can be used by anyone to manage their assets:
How to get a list of all your assets (including folders/sub-folders/collections)
How to find the quota consumed by each asset and find large assets
How to rename ImageCollections
The post explains each use-case with code snippets. If you want to just grab the scripts, they are linked at the end of the post.
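As an illustration of the recursive traversal such scripts rely on, here is a sketch where the listing function is injected so it can run without credentials. In real use it would wrap ee.data.listAssets; the asset tree below is made up:

```python
def list_assets_recursive(parent, list_fn):
    """Recursively list all assets under a parent folder.

    list_fn(parent) should return a list of asset dicts with 'id' and
    'type' keys. In real use this would wrap the Earth Engine API, e.g.
    lambda p: ee.data.listAssets({'parent': p}).get('assets', []).
    """
    result = []
    for asset in list_fn(parent):
        result.append(asset)
        # Descend into containers to find nested assets
        if asset['type'] in ('FOLDER', 'IMAGE_COLLECTION'):
            result.extend(list_assets_recursive(asset['id'], list_fn))
    return result

# A fake asset tree standing in for ee.data.listAssets responses
tree = {
    'projects/demo/assets': [
        {'id': 'projects/demo/assets/folder1', 'type': 'FOLDER'},
        {'id': 'projects/demo/assets/image1', 'type': 'IMAGE'},
    ],
    'projects/demo/assets/folder1': [
        {'id': 'projects/demo/assets/folder1/image2', 'type': 'IMAGE'},
    ],
}
assets = list_assets_recursive('projects/demo/assets',
                               lambda p: tree.get(p, []))
print(len(assets))  # 3 assets found across the folder hierarchy
```

Once you have the flat list, tasks like finding large assets or deleting by keyword become simple filters over it.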
Multi-criteria Overlay Analysis is the process of allocating land for a specific objective based on a variety of attributes that the selected areas should possess. Although this is a common GIS operation, it is best performed in raster space.
This post outlines the typical workflow to take source vector data, transform it into appropriate rasters, re-classify them and perform mathematical operations to do a weighted suitability analysis. The post uses the command-line utilities provided by the open-source GDAL library. If you want to do such analysis in QGIS, please check my tutorial Multi Criteria Overlay Analysis (QGIS3).
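The weighted overlay at the end of that workflow is simple raster arithmetic; in the post it is done with GDAL's gdal_calc.py utility, but the computation itself can be sketched in plain Python. The rasters and weights below are made up:

```python
def weighted_overlay(rasters, weights):
    """Combine re-classified suitability rasters (same shape, scores 0-1)
    into a single weighted suitability raster."""
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [
        [sum(w * r[i][j] for r, w in zip(rasters, weights))
         for j in range(cols)]
        for i in range(rows)
    ]

# Two hypothetical 2x2 criteria rasters, already re-classified to 0-1 scores
crime_score = [[1.0, 0.5], [0.0, 1.0]]
distance_score = [[0.0, 1.0], [1.0, 1.0]]

# Criteria weighted 60/40; higher output = more suitable
suitability = weighted_overlay([crime_score, distance_score],
                               weights=[0.6, 0.4])
print(suitability)
```

With gdal_calc.py the same arithmetic is expressed as a formula over input bands, e.g. a weighted sum of the re-classified rasters.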
We will work with crime and infrastructure data for the city of London and find suitable areas to build new parking facilities that can help reduce bicycle thefts. Our analysis will apply the following 3 criteria. The proposed parking must be
Matplotlib has functionality to create animations and can be used to build dynamic visualizations. In this post, I will explain the concepts and techniques for creating animated charts using Python and Matplotlib.
I find this technique very helpful for creating animations that show how certain algorithms work. This post also contains Python implementations of two common geometry simplification algorithms, which will be used to create animations showing each step of the algorithm. Since both implementations use a recursive function, the technique shown in this post can be extended to visualize other recursive functions using Matplotlib. You will learn how to create animated plots like the one below.
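As a taste of the kind of recursive algorithm being animated, here is a minimal sketch of the Ramer-Douglas-Peucker simplification algorithm, one common geometry simplification method. The sample polyline is made up; in the post each recursive call would also emit a frame for the animation:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x, y), (x1, y1), (x2, y2) = p, a, b
    numerator = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1))
    return numerator / math.dist(a, b)

def douglas_peucker(points, tolerance):
    """Recursively simplify a polyline: keep the farthest point from the
    approximating segment if it exceeds the tolerance, then recurse on
    both halves."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    index, max_dist = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], start, end)
        if d > max_dist:
            index, max_dist = i, d
    if max_dist > tolerance:
        left = douglas_peucker(points[:index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right  # drop duplicated split point
    return [start, end]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, tolerance=1.0))
# → [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

To animate it with Matplotlib, each recursive call can append the current candidate segment to a list of frames that a FuncAnimation callback then draws one by one.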
Many applications require replacing missing pixels in an image with a value interpolated from its temporal neighbours. This gap-filling technique is used in several applications, including:
Replacing Cloudy Pixels: You may want to fill gaps in an image with the best-estimated value from the cloud-free pixels before and after.
Estimating Intermediate Values: You can use this technique to compute an image for a time-step with no observations. For example, if you have population rasters for 2 different years, you can compute a population raster for an intermediate year using pixel-wise linear interpolation.
Preparing Data for Regression: All of your independent variables may not be available at the same temporal resolution. You can harmonize various datasets by generating interpolated rasters at uniform, fixed time-steps.
Google Earth Engine can be used effectively for gap-filling time-series datasets. While the logic for linear interpolation is fairly straightforward, preparing the data for it in GEE can be quite challenging. It involves the use of Joins, Masks and Advanced Filters. This post explains the steps with code snippets and builds a fully functioning script that can be applied to any time-series data.
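The per-pixel arithmetic behind the interpolation can be sketched in plain Python, with None standing in for masked pixels; the hard part in GEE is assembling the before/after images with joins and filters, not this calculation. The images and timestamps below are made up:

```python
def interpolate_pixel(before_val, before_t, after_val, after_t, t):
    """Linearly interpolate a pixel value at time t from its temporal
    neighbours: value = before + (after - before) * (t - t1) / (t2 - t1)."""
    ratio = (t - before_t) / (after_t - before_t)
    return before_val + (after_val - before_val) * ratio

def gap_fill(image, before, after, before_t, after_t, t):
    """Fill masked (None) pixels in `image` by interpolating between the
    co-located pixels of the `before` and `after` images."""
    return [
        [pix if pix is not None
         else interpolate_pixel(b, before_t, a, after_t, t)
         for pix, b, a in zip(img_row, b_row, a_row)]
        for img_row, b_row, a_row in zip(image, before, after)
    ]

# A 2x2 image at t=15 with one cloudy (masked) pixel
image  = [[0.31, None], [0.28, 0.30]]
before = [[0.30, 0.20], [0.25, 0.28]]   # observed at t=10
after  = [[0.32, 0.40], [0.30, 0.32]]   # observed at t=20
filled = gap_fill(image, before, after, 10, 20, 15)
print(filled)  # the masked pixel becomes (0.20 + 0.40) / 2 = 0.30
```

Because t=15 sits exactly halfway between the neighbours, the filled value is their average; for other time-steps the ratio weights the neighbours accordingly.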
You can also watch my video at Geo for Good 2022 Conference where I covered the same topic.