K-Means Clustering is a popular algorithm for automatically grouping points into natural clusters. QGIS comes with a Processing Toolbox algorithm, ‘K-means clustering’, that can take a vector layer and group its features into N clusters. A limitation of this algorithm is that you have no control over how many points end up in each cluster. Many applications require you to segment your data layer into equal-sized clusters, or into clusters with a minimum number of points. Here are some examples where you may need this:

  • When planning an FTTH (Fiber-to-the-Home) network, you may want to divide a neighborhood into clusters of at least 250 houses for the placement of each node.
  • Dividing a sales territory or customers equally among sales teams, so that customers in the same region are assigned to the same team.

There is a variation of the K-means algorithm called Constrained K-Means Clustering that uses graph theory to find optimal clusters with a user-supplied minimum number of points per cluster. Stanislaw Adaszewski has a nice Python implementation of this algorithm, which I have adapted into a Processing Toolbox algorithm for QGIS.
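As a point of reference, the built-in (unconstrained) algorithm can be run from the QGIS Python console via the Processing framework. Below is a minimal sketch; the layer name is a placeholder, and note that this algorithm does not enforce any cluster-size constraint.

```python
# Minimal sketch: run the built-in 'K-means clustering' algorithm from the
# QGIS Python console. The layer name 'points' is a placeholder; this
# algorithm does NOT enforce a minimum or equal cluster size.
import processing
from qgis.core import QgsProject

layer = QgsProject.instance().mapLayersByName('points')[0]

result = processing.run('native:kmeansclustering', {
    'INPUT': layer,
    'CLUSTERS': 5,               # desired number of clusters
    'FIELD_NAME': 'CLUSTER_ID',  # attribute that will hold the cluster id
    'OUTPUT': 'memory:clustered'
})
QgsProject.instance().addMapLayer(result['OUTPUT'])
```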

Continue reading

Rainfall is arguably the most frequently measured hydro-meteorological variable. It is a required input for many hydrological applications such as runoff computation, flood forecasting, and the engineering design of structures. However, rainfall data in its raw form contains many gaps and inconsistent values. It is therefore important to rigorously validate rain-gauge observations before incorporating them into any analysis.

The World Bank’s National Hydrology Project (NHP) prescribes a set of primary and secondary validation methods in its Manual of Rainfall Data Validation.
Of particular interest to me are the spatial methods aimed at identifying suspect values by comparison with neighboring stations. This spatial homogeneity test requires complex spatial and statistical data processing that can be quite challenging. I got an opportunity to work on a project that required automating the entire process of identifying and testing suspect stations. I ended up implementing it in QGIS using just Expressions and the Processing Modeler. The whole solution required no custom code and was easily usable by an analyst in the QGIS environment. In this post, I will explain the details of the test and show you how you can use similar techniques for your own analysis.
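To give a flavour of what the spatial homogeneity test does, here is a small, self-contained Python sketch of the underlying idea: estimate each station's value from its neighbours (inverse-distance weighting is one common choice) and flag large departures. The data and thresholds below are made up for illustration; the actual workflow in the post is built entirely with QGIS Expressions and the Processing Modeler, and the precise test criteria come from the NHP manual.

```python
# Illustrative sketch of a spatial homogeneity check with made-up data.
# Each station's rainfall is estimated from its neighbours using inverse
# distance weighting, and large departures are flagged as suspect.
import math

# station id -> (x, y, observed rainfall in mm) -- all values are invented
stations = {
    'A': (0, 0, 42.0),
    'B': (5, 1, 40.0),
    'C': (2, 6, 120.0),   # deliberately inconsistent value
    'D': (7, 7, 38.0),
}

SEARCH_RADIUS = 10.0   # only neighbours within this distance are used
MAX_DIFFERENCE = 30.0  # flag if |observed - estimated| exceeds this

def idw_estimate(target, neighbours):
    """Inverse-distance-weighted rainfall estimate at the target station."""
    x0, y0, _ = target
    numerator = denominator = 0.0
    for x, y, value in neighbours:
        d = math.hypot(x - x0, y - y0)
        if 0 < d <= SEARCH_RADIUS:
            weight = 1.0 / (d * d)
            numerator += weight * value
            denominator += weight
    return numerator / denominator if denominator else None

for station_id, record in stations.items():
    others = [v for k, v in stations.items() if k != station_id]
    estimate = idw_estimate(record, others)
    if estimate is not None and abs(record[2] - estimate) > MAX_DIFFERENCE:
        print(f'Station {station_id}: observed {record[2]}, '
              f'estimated {estimate:.1f} -> suspect')
```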

This workflow was presented as a live session on QGIS Open Day. You can watch the recording to understand the concepts and implementation.

Continue reading

When working with raster data, you may sometimes need to deal with data gaps. These could be the result of sensor malfunction, processing errors, or data corruption. Below is an example of a data gap (i.e. no-data values) in aerial imagery.

Source Image: © Commission for Lands (COLA); Revolutionary Government of Zanzibar (RGoZ), downloaded from OpenAerialMap. (Note: the data gap was simulated using a Python script and is not part of the original dataset.)
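The full fix is covered in the post; as a quick reference, one common way to interpolate small gaps is GDAL's 'Fill NoData' tool, which is exposed in the Processing Toolbox and can be scripted from the QGIS Python console. The file paths below are placeholders, and for multi-band imagery the tool works on one band at a time.

```python
# Minimal sketch: interpolate no-data gaps in a raster with GDAL's
# 'Fill NoData' algorithm. Paths are placeholders; DISTANCE is the maximum
# search distance (in pixels) used to find values around each gap.
import processing

processing.run('gdal:fillnodata', {
    'INPUT': '/path/to/imagery_with_gaps.tif',
    'BAND': 1,          # run once per band for multi-band imagery
    'DISTANCE': 10,
    'ITERATIONS': 0,    # optional smoothing iterations after filling
    'OUTPUT': '/path/to/imagery_filled.tif'
})
```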
Continue reading

In a previous post, I showed how to use the aggregate function to find neighbor polygons in QGIS. Using aggregate functions on the same layer allows us to easily perform geoprocessing operations between features of a layer. This is very useful for many analyses that would typically require writing custom Python scripts.

Here I demonstrate another powerful function, array_foreach, that lets you iterate over other features in QGIS expressions, enabling even more powerful analysis with just a single expression.
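If you want to get a feel for how array_foreach works before using it on a layer, you can evaluate a simple, layer-independent expression from the QGIS Python console, as in the sketch below.

```python
# Minimal sketch: try out array_foreach() from the QGIS Python console.
# The expression is layer-independent; it multiplies every element of an
# array by 10 using the @element variable.
from qgis.core import (QgsExpression, QgsExpressionContext,
                       QgsExpressionContextUtils)

expression = QgsExpression('array_foreach(array(1, 2, 3), @element * 10)')

context = QgsExpressionContext()
context.appendScope(QgsExpressionContextUtils.globalScope())

print(expression.evaluate(context))  # expected output: [10, 20, 30]
```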

Continue reading

Google Earth Engine (GEE) is a powerful cloud-based system for analysing massive amounts of remote sensing data. One area where Google Earth Engine shines is its ability to calculate time series of values extracted from a deep stack of imagery. While GEE is great at crunching numbers, it has limited cartographic capabilities. That’s where QGIS comes in. Using the Google Earth Engine Plugin for QGIS and Python, you can combine the computing power of GEE with the cartographic capabilities of QGIS. In this post, I will show how to write PyQGIS code to programmatically fetch time-series data and render a map template to create an animated map like the one below.
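To sketch the data-fetching part, the snippet below uses the earthengine-api (ee) Python package, which the plugin relies on, to pull a small NDVI time series at a point. The dataset, band, dates and location are illustrative assumptions, not necessarily the ones used in the post.

```python
# Illustrative sketch: fetch a small NDVI time series from Google Earth Engine
# using the 'ee' Python package. Dataset, dates and location are assumptions.
import ee

ee.Initialize()  # assumes you have already authenticated with Earth Engine

point = ee.Geometry.Point([77.59, 12.97])  # hypothetical location
collection = (ee.ImageCollection('MODIS/061/MOD13Q1')
              .filterDate('2023-01-01', '2023-12-31')
              .select('NDVI'))

def sample(image):
    # Mean NDVI around the point for each image, kept as a server-side Feature
    value = image.reduceRegion(
        reducer=ee.Reducer.mean(), geometry=point, scale=250).get('NDVI')
    return ee.Feature(None, {
        'date': image.date().format('YYYY-MM-dd'),
        'ndvi': value})

time_series = ee.FeatureCollection(collection.map(sample))

# Bring the (small) result to the client side for plotting or rendering in QGIS
for feature in time_series.getInfo()['features']:
    print(feature['properties'])
```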

Continue reading

Generating pseudo-random data is important for many aspects of research work. QGIS provides many methods for generating random points to facilitate this.

Recently, I ran into a problem where I wanted to generate random points inside a polygon, but with a particular spatial distribution. I wanted to generate a dataset showing employee home locations for a company. Given a city boundary and the location of the office, I wanted a point layer showing where the employees lived. A simple ‘Random points within Polygon’ algorithm would not work here, since employee home locations would not be uniformly distributed within the city.
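One way to approach this (not necessarily the method used in the post) is rejection sampling: draw uniform candidates inside the polygon and accept them with a probability that decays with distance from the office. The layer name, office coordinates, decay scale and point count below are all made-up values.

```python
# Illustrative sketch: rejection sampling of points inside a polygon with a
# density that decays with distance from an 'office' location. The layer
# name, office coordinates, decay scale and point count are made up.
import math
import random
from qgis.core import (QgsProject, QgsVectorLayer, QgsFeature,
                       QgsGeometry, QgsPointXY)

city = QgsProject.instance().mapLayersByName('city_boundary')[0]
boundary = next(city.getFeatures()).geometry()
office = QgsPointXY(500000, 1400000)  # hypothetical office, in layer CRS units
decay = 5000.0                        # larger value = points spread out more

extent = boundary.boundingBox()
accepted = []
while len(accepted) < 500:
    x = random.uniform(extent.xMinimum(), extent.xMaximum())
    y = random.uniform(extent.yMinimum(), extent.yMaximum())
    candidate = QgsPointXY(x, y)
    if not boundary.contains(candidate):
        continue
    # Keep the candidate with a probability that falls off with distance
    distance = math.hypot(x - office.x(), y - office.y())
    if random.random() < math.exp(-distance / decay):
        accepted.append(candidate)

# Load the accepted points into a temporary memory layer
homes = QgsVectorLayer('Point?crs=' + city.crs().authid(),
                       'employee_homes', 'memory')
features = []
for point in accepted:
    feature = QgsFeature()
    feature.setGeometry(QgsGeometry.fromPointXY(point))
    features.append(feature)
homes.dataProvider().addFeatures(features)
homes.updateExtents()
QgsProject.instance().addMapLayer(homes)
```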

Continue reading

Table joins are a way to join two separate layers based on a common attribute value. QGIS has a Join Attributes By Field Value algorithm that lets you do table joins. A limitation of this algorithm is that the field values must match exactly; if the values differ even slightly, the join will fail. Often you need to join two layers from different sources whose values are similar but do not match exactly. Fortunately, QGIS now has built-in fuzzy string matching functions that can be used, along with the aggregate function, to do a table join based on fuzzy matches.
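As a quick way to get familiar with these functions, you can evaluate them directly from the QGIS Python console before building the full join expression; the sample strings below are made up.

```python
# Minimal sketch: try out a built-in fuzzy matching function (levenshtein)
# from the QGIS Python console. The sample strings are made up.
from qgis.core import QgsExpression

pairs = [('St. Marys Road', 'St Marys Rd'),
         ('MG Road', 'Mahatma Gandhi Road')]

for left, right in pairs:
    expression = QgsExpression("levenshtein('{}', '{}')".format(left, right))
    print(left, '<->', right, ': edit distance =', expression.evaluate())
```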

Continue reading