Time series analysis is one of the most common operations in remote sensing. It helps us understand and model seasonal patterns as well as monitor land cover changes. Earth Engine is uniquely suited for extracting dense time series over long periods of time.

In this post, I will go through different methods and approaches for time series extraction. While there are plenty of examples available that show how to extract a time series for a single location, unique challenges come up when you need a time series for many locations spanning a large area. I will explain those challenges and present code samples to solve them.

The ultimate goal for this exercise is to extract NDVI time series from Sentinel-2 data over 1 year for 100 farm locations spanning an entire state in India.
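The full workflow is covered in the post, but a minimal sketch of the general pattern with the Earth Engine Python API looks like the snippet below. The `farm_locations` asset path, date range, and cloud threshold are placeholders; the key idea is to map a sampling function over the filtered image collection and export the flattened results as a table rather than fetching them interactively.

```python
import ee

ee.Initialize()

# Hypothetical asset with ~100 farm locations; replace with your own table.
farms = ee.FeatureCollection('users/your_username/farm_locations')

# One year of Sentinel-2 surface reflectance imagery over the farms.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
      .filterDate('2019-01-01', '2020-01-01')
      .filterBounds(farms)
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30)))

def add_ndvi(image):
    ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi')
    return image.addBands(ndvi)

def sample_image(image):
    # Mean NDVI around each farm point at 10 m resolution.
    stats = image.select('ndvi').reduceRegions(
        collection=farms,
        reducer=ee.Reducer.mean(),
        scale=10)
    # Tag each result with the image date so the table can be pivoted later.
    return stats.map(lambda f: f.set('date', image.date().format('YYYY-MM-dd')))

time_series = s2.map(add_ndvi).map(sample_image).flatten()

# Export as a CSV; the result is too large to fetch with getInfo().
task = ee.batch.Export.table.toDrive(
    collection=time_series,
    description='ndvi_time_series',
    fileFormat='CSV')
task.start()
```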

Continue reading

In a previous post, I showed how to use the aggregate function to find neighboring polygons in QGIS. Using aggregate functions on the same layer allows us to easily do geoprocessing operations between features of a layer. This is very useful for many analyses that would typically require writing custom Python scripts.

Here I demonstrate another powerful function, array_foreach, which lets you iterate over other features in QGIS expressions, enabling even more powerful analysis with just a single expression.
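As a rough illustration of the idea (not the exact expression from the post), the sketch below evaluates a hypothetical expression from the QGIS Python console. The expression collects the geometries of a layer with aggregate() and then uses array_foreach to compute the distance from the current feature to each of them; the layer name 'my_layer' is an assumption for demonstration.

```python
# Run in the QGIS Python console.
from qgis.core import (QgsExpression, QgsExpressionContext,
                       QgsExpressionContextUtils)
from qgis.utils import iface

layer = iface.activeLayer()

# Hypothetical expression: for every feature in 'my_layer', compute its
# distance to the current feature, returning an array of distances.
# Note: if the active layer is 'my_layer', the array includes a 0 for itself.
expression_text = """
array_foreach(
    aggregate('my_layer', 'array_agg', $geometry),
    distance($geometry, @element)
)
"""

expr = QgsExpression(expression_text)
context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(layer))

for feature in layer.getFeatures():
    context.setFeature(feature)
    distances = expr.evaluate(context)
    print(feature.id(), distances)
```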

Continue reading

Google Earth Engine (GEE) is a powerful cloud-based system for analysing massive amounts of remote sensing data. One area where Google Earth Engine shines is the ability to calculate time series of values extracted from a deep stack of imagery. While GEE is great at crunching numbers, it has limited cartographic capabilities. That’s where QGIS comes in. Using the Google Earth Engine Plugin for QGIS and Python, you can combine the computing power of GEE with the cartographic capabilities of QGIS. In this post, I will show how to write PyQGIS code to programmatically fetch time-series data and render a map template to create animated maps like the one below.
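A rough sketch of the approach is below. The MODIS NDVI dataset, year, and the layout named 'animation' are stand-ins, and in practice you would also need to wait for the Earth Engine tiles to finish rendering before exporting each frame; the frames can then be stitched into a GIF with any external tool.

```python
# Run from the QGIS Python console with the Google Earth Engine plugin installed.
import ee
from ee_plugin import Map
from qgis.core import QgsProject, QgsLayoutExporter

ee.Initialize()

# Monthly NDVI composites for one year (dataset and dates are illustrative).
collection = ee.ImageCollection('MODIS/061/MOD13Q1').select('NDVI')
vis_params = {'min': 0, 'max': 9000, 'palette': ['white', 'green']}

project = QgsProject.instance()
layout = project.layoutManager().layoutByName('animation')  # a pre-built print layout

for month in range(1, 13):
    start = ee.Date.fromYMD(2020, month, 1)
    image = collection.filterDate(start, start.advance(1, 'month')).mean()
    # Adding a layer with the same name each iteration stacks layers;
    # in practice you may want to remove the previous layer first.
    Map.addLayer(image, vis_params, 'NDVI')
    # Render the layout to one PNG frame per month.
    exporter = QgsLayoutExporter(layout)
    exporter.exportToImage(
        f'frame_{month:02d}.png',
        QgsLayoutExporter.ImageExportSettings())
```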

Continue reading

In a previous post, I showed how to use the Uber Movement Travel Times data to create isochrones. In this post, we will explore another use case of this dataset. Say you are concerned about loss of productivity due to your employees’ long commute times and wonder whether a change in office hours might help them get to the office faster. A similar analysis can be done to see whether a change in office location would result in better or worse commutes.

Here’s the hypothetical scenario: “Given the location of an office and the locations of employees’ homes, determine their current commute times for office timings of 9am-11am and 5pm-6pm. If the office timings were changed to the non-peak timings of 7am-8am and 3pm-4pm, what would be the time savings?”
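A back-of-the-envelope version of the morning-commute comparison can be sketched with pandas, assuming the hourly aggregate CSV from Uber Movement (with columns sourceid, dstid, hod and mean_travel_time in seconds) and a hypothetical table mapping each employee to the Uber zone of their home.

```python
import pandas as pd

# Hourly travel-time aggregates downloaded from Uber Movement.
travel_times = pd.read_csv('travel_times_by_hour.csv')

# Hypothetical table of employees with the Uber zone id of their home.
employees = pd.read_csv('employee_home_zones.csv')  # columns: employee_id, home_zone
OFFICE_ZONE = 142  # illustrative zone id for the office location

def mean_commute(hours):
    # Average home-to-office travel time over the given hours of day, per zone.
    subset = travel_times[
        (travel_times['dstid'] == OFFICE_ZONE) &
        (travel_times['hod'].isin(hours))]
    return subset.groupby('sourceid')['mean_travel_time'].mean()

peak = mean_commute(hours=[9, 10])   # current 9am-11am arrival window
offpeak = mean_commute(hours=[7])    # proposed 7am-8am arrival window

result = employees.copy()
result['peak_minutes'] = result['home_zone'].map(peak) / 60
result['offpeak_minutes'] = result['home_zone'].map(offpeak) / 60
result['savings_minutes'] = result['peak_minutes'] - result['offpeak_minutes']
print(result[['savings_minutes']].describe())
```

The evening commute can be compared the same way by swapping the source and destination filters and using the 5pm-6pm and 3pm-4pm hours.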

Continue reading

Generating pseudo-random data is important for many aspects of research work. QGIS provides many methods for generating random points to facilitate this.

Recently, I ran into a problem where I wanted to generate random points inside a polygon, but I wanted the points to follow a certain distribution. I wanted to generate a dataset showing employee home locations for a company. Given a city boundary and the location of the office, I wanted a point layer showing where the employees lived. A simple ‘Random points within Polygon’ algorithm would not work here, since the actual distribution of homes would not be uniform within the city.
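The post works through a QGIS-based approach; as a conceptual illustration of the underlying idea, here is a small rejection-sampling sketch in Python using shapely. The city boundary, office location and the exponential distance-decay are purely illustrative assumptions.

```python
import math
import random
from shapely.geometry import Point, Polygon

# Illustrative city boundary and office location; replace with real geometries.
city = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
office = Point(3, 4)

def random_home_locations(n, decay=0.5):
    """Rejection sampling: accept a candidate point with probability that
    decays exponentially with its distance from the office, so homes
    cluster around the office rather than spreading uniformly."""
    minx, miny, maxx, maxy = city.bounds
    points = []
    while len(points) < n:
        candidate = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if not city.contains(candidate):
            continue
        if random.random() < math.exp(-decay * candidate.distance(office)):
            points.append(candidate)
    return points

homes = random_home_locations(100)
```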

Continue reading

Table joins are a way to join two separate layers based on a common attribute value. QGIS has a Join Attributes By Field Value algorithm that allows you to do table joins. A limitation of this algorithm is that the field values must match exactly; if the values differ even slightly, the join will fail. There are many cases where you are trying to join two layers from different sources that contain values which are similar but do not match exactly. Fortunately, QGIS now has built-in fuzzy string matching functions that can be used, along with the aggregate function, to do table joins based on fuzzy matches.
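QGIS exposes fuzzy matching through expression functions such as levenshtein() and soundex(). As a language-agnostic illustration of the same fuzzy-join idea, here is a small Python sketch using difflib; the district names and attribute values are made up for demonstration.

```python
from difflib import get_close_matches

# Two hypothetical attribute tables where district names do not match exactly.
population = {'Bengaluru Urban': 9621551, 'Mysuru': 3001127, 'Belagavi': 4779661}
rainfall = {'Bangalore Urban': 831, 'Mysore': 798, 'Belgaum': 808}

joined = {}
for district, rain in rainfall.items():
    # Find the closest spelling in the other table; the cutoff guards against
    # joining completely unrelated names.
    matches = get_close_matches(district, population.keys(), n=1, cutoff=0.6)
    if matches:
        joined[district] = {'rainfall_mm': rain, 'population': population[matches[0]]}

print(joined)
```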

Continue reading