Welcome to #30DaysOfQGIS! We are launching our Advanced QGIS course on YouTube and have designed this challenge to help you master QGIS! Spend 30 minutes each day for the next 30 days to level up your QGIS skills. This course is the result of my 15+ years of experience using QGIS for large-scale spatial analysis and automating workflows. I am really excited to share this content with you – completely free.

We will be posting short videos every day, covering the full course material step by step. The material is full of tips, tricks and challenges that will make your learning journey fun and rewarding! All you have to do is show up every day and spend half an hour watching the videos and practicing the exercises. Ready for #30DaysOfQGIS? Read on for the details.

Continue reading

When automating GIS workflows, one often needs to automate the creation of cartographic outputs. The QGIS Model Designer allows you to build a workflow by combining multiple Processing algorithms. QGIS now includes several algorithms under the Cartography category that let you integrate the map creation process into your model. In this post, we will explore the Export print layout as image (or Export print layout as PDF) algorithm to automate the creation of a fire map. We will build a model that will automatically:

  • Download the latest shapefile of active fires from FIRMS.
  • Extract fires intersecting the continental US.
  • Style the layer using a pre-configured QGIS style file.
  • Render a pre-configured Print Layout.

Whenever the model is run, it will output a map such as the one shown below.
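The same export step can also be scripted with PyQGIS, which is handy when the model itself is driven from a script. This is a minimal sketch, assuming a QGIS project containing a print layout named 'Fire Map'; the layout name, DPI value, and output path are placeholders, and the parameter names follow the Export print layout as image algorithm in recent QGIS versions:

```python
# Run from the QGIS Python console, with a project loaded that contains
# a print layout named 'Fire Map' (placeholder name).
import processing

result = processing.run(
    'native:printlayouttoimage',
    {
        'LAYOUT': 'Fire Map',           # name of the pre-configured print layout
        'DPI': 150,                     # output resolution
        'OUTPUT': '/tmp/fire_map.png',  # placeholder output path
    },
)
print(result['OUTPUT'])
```

Because the algorithm is a regular Processing algorithm, the same step can be dropped into the Model Designer as the final node of the workflow.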

Continue reading

When working with raster datasets of different projections and resolutions, it is often desirable to reproject them to the same projection and align them to the same pixel grid. In this post, we will explore recently introduced options in the open-source GDAL utility gdalwarp that make this process much simpler and more efficient. In particular, we will explore the -r sum (Resample with Sum), -r average (Resample with Average) and -tap (Target Aligned Pixels) options.

We will take the following 3 raster datasets and clip, resample and align them to a common pixel grid.

  • LandScan Global: A high-quality global population grid that is available at 1km resolution in the geographic CRS WGS84 Lat/Lon (EPSG:4326).
  • GHS Population Grid: A 100m resolution global population dataset that is distributed in the World Mollweide Equal Area Projection (ESRI:54009).
  • NLCD Tree Canopy Cover: A 30m resolution gridded dataset with percent canopy estimates of tree cover in the NAD83 CONUS Albers Projection (EPSG:5070).

As you can see, we have datasets with widely varying pixel sizes and projections. If we want to compare them with each other, we must first harmonize them onto a unified pixel grid. We will learn how to reproject, resample, and align these to the NAD83 California Albers Projection (EPSG:3311) at 1km resolution.
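As a sketch of what the harmonization command might look like for the population grid (filenames here are placeholders, and the sum resampling method requires GDAL 3.1 or newer):

```shell
# Reproject LandScan to NAD83 California Albers at 1km resolution.
# '-r sum' preserves population totals when aggregating pixels, and
# '-tap' snaps the output extent to the 1km target grid so all outputs
# align. Input/output filenames are placeholders.
gdalwarp -t_srs EPSG:3311 -tr 1000 1000 -tap \
  -r sum -ot Float64 \
  landscan_global.tif landscan_ca_1km.tif
```

For the tree canopy layer, -r average is the appropriate choice instead: a percentage should be averaged rather than summed when aggregating 30m pixels to 1km.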

Continue reading

GDAL supports Virtual File Systems, which allow the GDAL library and command-line tools to work with files on network storage. This is critical in the era of Cloud-Native Geospatial, where it is becoming standard practice to access and share geospatial data via cloud storage services. In this post, we will see how to use GDAL command-line tools to read and write data on Google Cloud Storage (GCS) using the /vsigs file system handler. We will focus exclusively on Google Cloud Storage in this post – but the same concepts apply when using other cloud services such as AWS S3 or Azure Blob Storage. Similarly, other GDAL-based tools – such as rasterio – can access data on GCS using the same configuration options shown here.

The post covers the following topics:

  • Reading Files from Public GCS Buckets
  • Creating Private GCS Buckets and Uploading Data
  • Configuring Authentication and Reading Data from Private GCS Buckets
  • Writing Data to Private GCS Buckets
  • Using Environment Variables

Note 1: This post assumes familiarity with the GDAL command-line tools and assumes you have installed GDAL on your machine. You will find detailed instructions for installation in our course material for Mastering GDAL Tools.

Note 2: The code snippets are split over multiple lines for readability using the Windows line continuation character ^. If you are running these on Mac/Linux, replace it with \ instead.
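As a taste of the /vsigs workflow, here is a sketch with placeholder bucket and file names (GS_NO_SIGN_REQUEST requires a reasonably recent GDAL version):

```shell
# Read a raster directly from a public GCS bucket without credentials.
# Bucket and file names are placeholders.
gdalinfo --config GS_NO_SIGN_REQUEST YES /vsigs/my-public-bucket/dem.tif

# Read from a private bucket by authenticating with a service-account
# key file downloaded from the Google Cloud console.
gdalinfo --config GOOGLE_APPLICATION_CREDENTIALS key.json \
  /vsigs/my-private-bucket/dem.tif
```

The same configuration options can be set as environment variables instead of --config flags, which is convenient when running many commands against the same bucket.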

Continue reading

I was recently asked to deliver a session on Earth Science for kids. My daughter goes to an after-school science program at Max Science where they teach science with unique and fun hands-on experiments. They wanted to do an interactive session to introduce the kids to Earth Science and asked if I could deliver a guest talk for their kids in Grades 1 to 4. I loved the idea and developed a module titled The Science of Satellites to introduce the magic of remote sensing to primary school kids. The session ended up being a lot of fun, for the kids and me. In this post, I want to go through the materials and my experience teaching this session.

All the content developed for this session – including high-resolution graphics – is available freely for download. Scroll to the bottom to find the download link.

The 1.5 hour session was split into 3 parts:

  • Part 1: Guess the Place – A game to guess the place from satellite images
  • Part 2: The Science of Satellites – Learning what satellites do and how you can build and launch a satellite
  • Part 3: Your Name from Space – An activity where kids create their names from letters seen in satellite images

Continue reading

In this post, we will learn how to build a regression model in Google Earth Engine and use it to estimate total above ground biomass using openly available Earth Observation datasets.

NASA’s Global Ecosystem Dynamics Investigation (GEDI) mission collects LIDAR measurements along ground transects, with samples at 30m spatial resolution spaced at 60m intervals. The GEDI waveform captures the vertical distribution of vegetation structure and is used to derive estimates of Aboveground Biomass Density (AGBD) at each sample. These sample estimates of AGBD are useful, but since they are point measurements, you cannot directly use them to calculate total aboveground biomass for a region. We can use other satellite datasets to build a regression model from the GEDI samples to map and quantify the biomass in a region.

Regression Workflow

This article shows how we can build and run the entire workflow in Earth Engine – from pre-processing to building a regression model to running the predictions. You will also learn some advanced techniques and best practices such as:

  • How to fuse datasets of different resolutions by using setDefaultProjection() and reduceResolution() to align them to a common grid.
  • How to sample pixels from rasters with sparse data efficiently and precisely by leveraging the image mask using stratifiedSample().
  • How to split your workflow into separate steps and use Exports to avoid user memory limit exceeded or computation timed-out errors.
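The first two techniques can be sketched in the Earth Engine Code Editor as follows. The dataset ID is the GEDI L4A monthly raster collection from the Earth Engine catalog, but the region, scales, and sampling parameters below are illustrative placeholders, not the exact values from the post:

```javascript
// Placeholder region of interest.
var region = ee.Geometry.Rectangle([77.0, 12.0, 78.0, 13.0]);

// Sparse GEDI AGBD samples rasterized into monthly images.
var agbd = ee.ImageCollection('LARSE/GEDI/GEDI04_A_002_MONTHLY')
    .filterBounds(region)
    .mosaic()
    .select('agbd');

// Aggregate the fine-resolution GEDI samples to a coarser predictor grid
// using setDefaultProjection() + reduceResolution().
var agbd100 = agbd
    .setDefaultProjection({crs: 'EPSG:4326', scale: 30})
    .reduceResolution({reducer: ee.Reducer.mean(), bestEffort: true})
    .reproject({crs: 'EPSG:4326', scale: 100});

// Use the image mask as a class band so stratifiedSample() picks only
// valid (unmasked) pixels from the sparse raster.
var valid = agbd100.mask().toInt().rename('valid');
var samples = agbd100.addBands(valid).updateMask(valid).stratifiedSample({
  numPoints: 1000,
  classBand: 'valid',
  region: region,
  scale: 100,
  geometries: true
});
```

The resulting FeatureCollection can then be joined with predictor bands and exported as an intermediate asset, which is the pattern used to avoid memory and timeout errors.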
Continue reading

ISRO recently released the full archive of medium and low-resolution Earth Observation datasets to the public. This includes the imagery from the LISS-IV camera aboard the ResourceSat-2 and ResourceSat-2A satellites. This is currently the highest spatial resolution imagery available in the public domain for India. In this post, I want to cover the steps required to download the imagery and apply the pre-processing required to make this data ready for analysis – specifically, how to programmatically convert the DN values to TOA Reflectance. We will use modern Python libraries such as XArray, rioxarray, and dask – which allow us to work seamlessly with large datasets and use all the available compute power on your machine.
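The core DN-to-reflectance conversion is simple enough to sketch with NumPy alone. The calibration constants below (Lmin, Lmax, ESUN, the DN range) are placeholder values for illustration only – the real values must come from the product metadata supplied with each scene:

```python
import numpy as np

# DN -> TOA reflectance sketch. All calibration values passed in here are
# placeholders; read the actual Lmin/Lmax/ESUN and sun elevation from the
# metadata files distributed with the scene.
def dn_to_toa_reflectance(dn, lmax, lmin, dn_max, esun, sun_elev_deg, d=1.0):
    """Convert digital numbers to top-of-atmosphere reflectance."""
    # Linear calibration from DN to at-sensor spectral radiance.
    radiance = lmin + (lmax - lmin) * (dn.astype("float64") / dn_max)
    # TOA reflectance: rho = pi * L * d^2 / (ESUN * cos(solar zenith)),
    # where d is the Earth-Sun distance in astronomical units.
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * d**2 / (esun * np.cos(sun_zenith))

dn = np.array([[0, 512, 1023]])
rho = dn_to_toa_reflectance(dn, lmax=52.0, lmin=0.0, dn_max=1023,
                            esun=1557.0, sun_elev_deg=60.0)
```

With rioxarray, the same function can be applied lazily to a full scene by calling it on the DataArray's values, letting dask handle the chunked computation.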

Continue reading

Dynamic World is a new landcover product developed by Google and the World Resources Institute (WRI). It is a unique dataset designed to make it easy for users to develop locally relevant landcover classifications. Unlike other landcover products, which try to classify each pixel into a single class, the Dynamic World (DW) model gives you the probability of the pixel belonging to each of 9 different landcover classes. The full dataset contains the DW class probabilities for every Sentinel-2 scene since 2015 having <35% cloud-cover. It is also updated continuously with detections from new Sentinel-2 scenes as soon as they are available. This makes DW ideal for change detection and monitoring applications.

A key fact about this dataset is that Dynamic World is not a ready-to-use landcover product. Users are expected to fine-tune the output of DW with local knowledge into a final landcover product. Since DW provides per-pixel probabilities generated by a Fully Convolutional Neural Network (FCNN) model, many of the difficult problems encountered in classifying remotely sensed imagery are already addressed, allowing users to refine the output with a relatively simple model (such as Random Forest) and a small amount of local training data.

A good mental model for Dynamic World is to think of it not as a landcover product, but as a dataset that provides 9 additional bands of landcover-related information for each Sentinel-2 image – information that can be refined to build a locally relevant classification or change detection model.

As seen in the mangrove classification example, using the Dynamic World probability bands as input to a supervised classification model can help you generate a more accurate landcover map in less time. It also eliminates the need for post-processing the results.
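That idea can be sketched in Earth Engine JavaScript: use the mean DW probability bands over a period as the predictor stack for a Random Forest classifier. The region, date range, and the labeledPoints training collection below are placeholders you would supply:

```javascript
// Placeholders: a region geometry and a labeled training FeatureCollection
// with an integer 'class' property.
var region = ee.Geometry.Rectangle([77.0, 12.0, 78.0, 13.0]);
var labeledPoints = ee.FeatureCollection('users/example/training_points');

// Mean class probabilities over the year form a 9-band predictor stack.
var probabilityBands = ['water', 'trees', 'grass', 'flooded_vegetation',
    'crops', 'shrub_and_scrub', 'built', 'bare', 'snow_and_ice'];
var dw = ee.ImageCollection('GOOGLE/DYNAMICWORLD/V1')
    .filterDate('2023-01-01', '2024-01-01')
    .filterBounds(region)
    .select(probabilityBands)
    .mean();

// Train a simple Random Forest on the probability bands and classify.
var training = dw.sampleRegions({
  collection: labeledPoints, properties: ['class'], scale: 10});
var classifier = ee.Classifier.smileRandomForest(50)
    .train(training, 'class', probabilityBands);
var classified = dw.classify(classifier);
```

Because the probability bands already encode what the FCNN learned, even a small training set tends to go a long way with this setup.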

To test this concept and explore the potential of this new dataset in developing locally relevant landcover maps – I partnered with Google and WRI to develop a training workshop and host a 5-day “Mapathon” with participants of diverse backgrounds. The event was a mix of hands-on workshop along with hackathon-style group projects to use Dynamic World for a real-world application.

The workshop was hosted by the Regional Centre for Mapping of Resources for Development (RCMRD) in Nairobi, Kenya. You can read more about the event in this article. Elise Mazur from WRI and I also gave a talk about our experience at Geo for Good 2023.

In this post, I want to share more technical details about the workshop materials and code for projects for those who may want to use Dynamic World for their own applications.

Continue reading

Extracting building footprints from high-resolution imagery is a challenging task. Fortunately, we now have access to ready-to-use building footprint datasets extracted using state-of-the-art ML techniques. Google’s Open Buildings project has mapped and extracted 1.8 billion buildings in Africa, South Asia, South-East Asia, Latin America and the Caribbean. Microsoft’s Global ML Building Footprints project has made over 1.24 billion building footprints available from most regions of the world.

Update: VIDA has released the most comprehensive buildings dataset by combining both the Google and Microsoft building footprint datasets. This dataset is available in Google Earth Engine via the GEE Community Catalog.

Given the availability of these datasets, we can now analyze them to create derivative products. In this post, we will learn how to access these datasets and compute the aggregate count of buildings within a regular grid using Google Earth Engine. We will then export the grid as a shapefile and create a building density map in QGIS.
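The aggregation step can be sketched in Earth Engine JavaScript as follows. The region geometry and cell size are placeholders; the dataset ID refers to the Open Buildings v3 polygons collection in the Earth Engine catalog:

```javascript
// Placeholder region of interest.
var region = ee.Geometry.Rectangle([36.6, -1.4, 37.0, -1.1]);

// Building footprint polygons within the region.
var buildings = ee.FeatureCollection('GOOGLE/Research/open-buildings/v3/polygons')
    .filterBounds(region);

// Create a regular 1km grid covering the region and count the buildings
// intersecting each cell.
var grid = region.coveringGrid('EPSG:3857', 1000);
var counts = grid.map(function(cell) {
  var n = buildings.filterBounds(cell.geometry()).size();
  return cell.set('count', n);
});

// Export the grid with counts as a shapefile for styling in QGIS.
Export.table.toDrive({
  collection: counts,
  description: 'building_counts_grid',
  fileFormat: 'SHP'
});
```

Once downloaded, the 'count' attribute can drive a graduated symbology in QGIS to produce the building density map.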

Continue reading