Extracting Time Series using Google Earth Engine

Time series analysis is one of the most common operations in remote sensing. It helps in understanding and modeling seasonal patterns, as well as in monitoring land cover changes. Earth Engine is uniquely suited to extracting dense time series over long periods of time.

In this post, I will go through different methods and approaches for time series extraction. While there are plenty of examples showing how to extract a time series for a single location, unique challenges come up when you need a time series for many locations spanning a large area. I will explain those challenges and present code samples to solve them.

The ultimate goal of this exercise is to extract an NDVI time series from Sentinel-2 data over 1 year for 100 farm locations spanning an entire state in India.

Prefer a video-based guide?

Check out this series of videos that gives you step-by-step instructions and explains the process of extracting an NDVI time series in Earth Engine using MODIS data.

Preparing the data

Farm Locations

To make the analysis relevant, we need a dataset of farm locations spanning a large area. If you have your own data, you can upload the shapefile to Earth Engine and use it. For the purpose of this post, I generated 100 random points. I wanted those random points to meet certain criteria: they should fall on farmland growing a certain crop. We can use the GFSAD1000: Cropland Extent 1km Crop Dominance, Global Food-Support Analysis Data from the Earth Engine Data Catalog to select all pixels that are farmland growing wheat and rice. The stratifiedSample() method then lets us generate sample points within those pixels, approximating a dataset of 100 farm locations.

var gaul = ee.FeatureCollection("FAO/GAUL/2015/level1");
var gfsad = ee.Image("USGS/GFSAD1000_V0");

// Select 'landcover' band with pixel values 1
// which represent Rice and Wheat Rainfed crops
var wheatrice = gfsad.select('landcover').eq(1)

// Uttar Pradesh is a large state in the Indo-Gangetic Plain with
// a large agricultural output.
// We use the Global Administrative Unit Layers (GAUL) dataset
// to get the state boundary
var uttarpradesh = gaul.filter(ee.Filter.eq('ADM1_NAME', 'Uttar Pradesh'))

// The wheatrice image contains 1 and 0 pixels. We want to generate points
// only in the pixels that are 1 (representing crop areas).
// selfMask() masks the pixels with 0 value.
var points = wheatrice.selfMask().stratifiedSample(
    {numPoints: 100, region: uttarpradesh, geometries: true})

// We need a unique id for each point. We take the feature id and set it as
// a property so we can refer to each point easily.
var points = points.map(function(feature) {
  return ee.Feature(feature.geometry(), {'id': feature.id()})
})

// Show the state polygon with a blue outline
var outline = ee.Image().byte().paint({
  featureCollection: uttarpradesh,
  color: 1,
  width: 3
});
Map.addLayer(outline, {palette: ['blue']}, 'AOI')

// Show the farm locations in green
Map.addLayer(points, {color: 'green'}, 'Farm Locations')

Sentinel-2 Image Collection

We will use atmospherically corrected Sentinel-2 Surface Reflectance data. To use it in our analysis, we should filter the collection to images overlapping the farm locations and falling within the time range. It is also important to apply cloud masking to remove cloudy pixels from the analysis. This part is fairly straightforward: we map functions to mask clouds and add an NDVI band, then filter the collection down to a date range and location.

// Function to mask cloud and snow pixels
function maskCloudAndShadows(image) {
  var cloudProb = image.select('MSK_CLDPRB');
  var snowProb = image.select('MSK_SNWPRB');
  var cloud = cloudProb.lt(5);
  var snow = snowProb.lt(5);
  var scl = image.select('SCL');
  var shadow = scl.eq(3); // 3 = cloud shadow
  var cirrus = scl.eq(10); // 10 = cirrus
  // Keep pixels with cloud and snow probability less than 5%
  // that are neither cloud shadow nor cirrus
  var mask = cloud.and(snow).and(cirrus.neq(1)).and(shadow.neq(1));
  return image.updateMask(mask);
}

// Function to add an NDVI band
function addNDVI(image) {
  var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi')
  return image.addBands([ndvi])
}

var startDate = '2019-01-01'
var endDate = '2019-12-31'

// Use Sentinel-2 L2A data - which has better cloud masking
var collection = ee.ImageCollection('COPERNICUS/S2_SR')
    .filterDate(startDate, endDate)
    .map(maskCloudAndShadows)
    .map(addNDVI)
    .filter(ee.Filter.bounds(points))

// View the median composite
var vizParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 2000}
Map.addLayer(collection.median(), vizParams, 'collection')
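For reference, normalizedDifference(['B8', 'B4']) computes the standard NDVI formula (NIR - red) / (NIR + red) for every pixel. Outside Earth Engine, the per-pixel arithmetic is just this (a plain-JavaScript illustration, not part of the Earth Engine API):

```javascript
// Plain-JavaScript illustration of the arithmetic that
// normalizedDifference(['B8', 'B4']) performs per pixel.
// nir and red are reflectance values from bands B8 and B4.
function ndvi(nir, red) {
  return (nir - red) / (nir + red);
}
```

Dense, healthy vegetation has high NIR and low red reflectance, so NDVI approaches 1; bare soil and water give values near or below 0.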

Get Time Series for a Single Location

At this point, our collection has images spanning a full year. If we wanted to extract NDVI values at any location for the full year, it is quite easy. We can use the built-in charting functions to chart the NDVI value over time.

Our points collection has 100 points. We call .first() to get the first point from the collection and create a chart using the ui.Chart.image.series() function. Once you print a chart, you can click the button next to it to open the chart in a new tab, with an option to download the data as a CSV.

var testPoint = ee.Feature(points.first())
//Map.centerObject(testPoint, 10)
var chart = ui.Chart.image.series({
    imageCollection: collection.select('ndvi'),
    region: testPoint.geometry()
    }).setOptions({
      interpolateNulls: true,
      lineWidth: 1,
      pointSize: 3,
      title: 'NDVI over Time at a Single Location',
      vAxis: {title: 'NDVI'},
      hAxis: {title: 'Date', format: 'YYYY-MMM', gridlines: {count: 12}}
    })
print(chart)

This is a nice NDVI time-series chart showing the dual-cropping practice common in India.

Exporting Time Series for a Single Location/Region

If you want a time series over a polygon, the above technique still works. But if the region is large and your time series is long, you may still run into 'Computation timed out' errors. In that case, we can export the results as a CSV. We can use the reduceRegion() function to get the NDVI value from an image. Since we want to do that for all images in the collection, we need to map() a function over the collection.

var filteredCollection = collection.select('ndvi')
var timeSeries = ee.FeatureCollection(filteredCollection.map(function(image) {
  var stats = image.reduceRegion({
    reducer: ee.Reducer.mean(),
    geometry: testPoint.geometry(),
    scale: 10,
    maxPixels: 1e10
  })
  // reduceRegion doesn't return any output if the image doesn't intersect
  // with the point or if the image is masked out due to cloud
  // If there was no ndvi value found, we set the ndvi to a NoData value -9999
  var ndvi = ee.List([stats.get('ndvi'), -9999])
    .reduce(ee.Reducer.firstNonNull())

  // Create a feature with null geometry and NDVI value and date as properties
  var f = ee.Feature(null, {'ndvi': ndvi,
    'date': ee.Date(image.get('system:time_start')).format('YYYY-MM-dd')})
  return f
}))

// Check the results
print(timeSeries.first())

// Export to CSV
Export.table.toDrive({
    collection: timeSeries,
    description: 'Single_Location_NDVI_time_series',
    folder: 'earthengine',
    fileNamePrefix: 'ndvi_time_series_single',
    fileFormat: 'CSV'
})
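The ee.List([...]).reduce(ee.Reducer.firstNonNull()) pattern simply picks the first non-null value in the list, so a missing NDVI falls back to -9999. A plain-JavaScript sketch of that fallback logic (a hypothetical helper for illustration, not an Earth Engine function):

```javascript
// Mimics ee.Reducer.firstNonNull() over a list: return the first
// value that is not null/undefined, e.g. [null, -9999] -> -9999.
function firstNonNull(values) {
  for (var i = 0; i < values.length; i++) {
    if (values[i] !== null && values[i] !== undefined) {
      return values[i];
    }
  }
  return null;
}
```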

Getting Time Series for Multiple Locations

While you can chart or export a time series for a single location as shown above, things get more complex when you want to do the same for many locations. Continuing with the charting method, you may think of using the ui.Chart.image.seriesByRegion() function to get a chart for all 100 points over the year. But you will start hitting the limit of what can be done in Earth Engine's 'interactive' mode.

var chart = ui.Chart.image.seriesByRegion({
    imageCollection: collection.select('ndvi'),
    regions: points,
    reducer: ee.Reducer.mean()
})
// This doesn't work as the result is too large to print
print(chart)

This is understandable. Earth Engine limits the execution time in the interactive mode to 5 minutes, and times out if your computation takes longer. In such cases, the recommendation is to switch to the 'batch' mode, which has a lot more resources and can run computations for a long time. You use the batch mode by calling any of the Export functions.

The method to export a time series is explained well in this tutorial. The code has a clever way of organizing the results of reduceRegions() into a table that can be exported. That code works when your points do not span a large area; if you try the same approach for this example, you will run into problems.

Problem 1: Handling Masked Pixels

As we have masked cloudy pixels in the source images, those pixels will return null values, resulting in data gaps. And since our area spans multiple images, for any given point the majority of images will not intersect it and will also return null. We can fix this by assigning a NoData value (such as -9999) to missing values in the time series. Specifically, we use the ee.Reducer.firstNonNull() reducer to programmatically assign -9999 to any output containing a null value. Below is the modified code, which generates a table with each point id as a row and the NDVI value from each image as a column.

var triplets = collection.map(function(image) {
  return image.select('ndvi').reduceRegions({
    collection: points,
    reducer: ee.Reducer.first().setOutputs(['ndvi']),
    scale: 10,
  })// reduceRegion doesn't return any output if the image doesn't intersect
    // with the point or if the image is masked out due to cloud
    // If there was no ndvi value found, we set the ndvi to a NoData value -9999
    .map(function(feature) {
      var ndvi = ee.List([feature.get('ndvi'), -9999])
        .reduce(ee.Reducer.firstNonNull())
      return feature.set({'ndvi': ndvi, 'imageID': image.id()})
    })
  }).flatten();

var format = function(table, rowId, colId) {
  var rows = table.distinct(rowId);
  var joined = ee.Join.saveAll('matches').apply({
    primary: rows,
    secondary: table,
    condition: ee.Filter.equals({
      leftField: rowId,
      rightField: rowId
    })
  });
  return joined.map(function(row) {
      var values = ee.List(row.get('matches'))
        .map(function(feature) {
          feature = ee.Feature(feature);
          return [feature.get(colId), feature.get('ndvi')];
        });
      return row.select([rowId]).set(ee.Dictionary(values.flatten()));
  });
};

var sentinelResults = format(triplets, 'id', 'imageID');
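The format() function above is essentially a pivot: it joins all features sharing a point id and spreads their (imageID, ndvi) pairs into columns of a single row. Outside Earth Engine, the same reshaping can be sketched in plain JavaScript (with hypothetical sample data; real Sentinel-2 image IDs are longer):

```javascript
// Pivot a list of {id, imageID, ndvi} triplets into one row per id,
// with one column per imageID (mirrors the Earth Engine join above).
function pivot(triplets) {
  var rows = {};
  triplets.forEach(function(t) {
    if (!rows[t.id]) rows[t.id] = {id: t.id};
    rows[t.id][t.imageID] = t.ndvi;
  });
  return Object.keys(rows).map(function(k) { return rows[k]; });
}
```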

Problem 2: Granule Overlaps

The second problem is specific to Sentinel-2 data and how individual images are produced from the raw data. If you are working with any other dataset (Landsat, MODIS, etc.), skip this step and export the collection generated in the previous step.

Sentinel-2 data is distributed as granules, also called tiles, which are 100x100 km ortho-images. As you can see in the map below, there is an overlap between neighboring granules, so the same raw pixel can be present in up to 4 tiles. And since each granule is processed independently, the output pixel values can differ slightly.

If we exported the table generated in the previous step, we would see multiple NDVI values for the same day, which may or may not be identical. For our time series to be consistent, we need to harmonize these overlapping pixels. A simple solution is to take all NDVI values for the same day (generated from the same raw pixels) and assign the maximum of those values to that day. This results in a clean output with one NDVI value per point per day.

The following code finds all images of the same day and creates a single output for the day with the maximum of all NDVI values.

// There are multiple image granules for the same date processed from the same orbit
// Granules overlap with each other and since they are processed independently
// the pixel values can differ slightly. So the same pixel can have different NDVI 
// values for the same date from overlapping granules.
// So to simplify the output, we can merge observations for each day
// And take the max ndvi value from overlapping observations
var merge = function(table, rowId) {
  return table.map(function(feature) {
    var id = feature.get(rowId)
    var allKeys = feature.toDictionary().keys().remove(rowId)
    // The first 8 characters of the image ID are the date (YYYYMMDD)
    var substrKeys = ee.List(allKeys.map(function(val) {
        return ee.String(val).slice(0, 8)
    }))
    var uniqueKeys = substrKeys.distinct()
    var pairs = uniqueKeys.map(function(key) {
      var matches = feature.toDictionary().select(
          allKeys.filter(ee.Filter.stringContains('item', key))).values()
      var val = matches.reduce(ee.Reducer.max())
      return [key, val]
    })
    return feature.select([rowId]).set(ee.Dictionary(pairs.flatten()))
  })
}
var sentinelMerged = merge(sentinelResults, 'id');
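The merge step keys on the first 8 characters of each column name, which for Sentinel-2 image IDs is the acquisition date (e.g. 20190101...), and keeps the maximum across columns sharing a date. The same logic in plain JavaScript (hypothetical short column names for illustration):

```javascript
// Merge columns whose names share the same first-8-character date key,
// keeping the maximum value (mirrors the Earth Engine merge() above).
function mergeByDay(row, rowId) {
  var merged = {};
  merged[rowId] = row[rowId];
  Object.keys(row).forEach(function(key) {
    if (key === rowId) return;
    var day = key.slice(0, 8);  // e.g. '20190101T1_A' -> '20190101'
    if (!(day in merged) || row[key] > merged[day]) {
      merged[day] = row[key];
    }
  });
  return merged;
}
```

If you prefer dates formatted as YYYY-MM-dd instead of YYYYMMDD, the key could be rewritten as key.slice(0, 4) + '-' + key.slice(4, 6) + '-' + key.slice(6, 8) before it is set.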

Exporting the Time Series

The collection now contains the formatted output and can be exported as a CSV file. Running the code below will create an export task. Click Run, confirm the parameters, and start the task. Once the export task finishes, the CSV file will be in your Google Drive.

Export.table.toDrive({
    collection: sentinelMerged,
    description: 'Multiple_Locations_NDVI_time_series',
    folder: 'earthengine',
    fileNamePrefix: 'ndvi_time_series_multiple',
    fileFormat: 'CSV'
})

You can see the full script at https://code.earthengine.google.com/ad1bd99e9f377b165deef40f0c71cc9e

Here is the resulting CSV file.

Hope you found the post useful and got some inspiration to apply it to your own problem. Do leave a comment if you have ideas to improve the code.

If you are new to Earth Engine and want to master it, check out my course End-to-End Google Earth Engine.


Comments

  1. Hi, thanks a lot for your work, it helps me a lot in writing my bachelor thesis.

    I need help with the date format:
    When I export NDVI for multiple points, the date format doesn’t suit me. Is there any way to format them? Instead of 20190101, I would need 2019-01-01 (yyyy-mm-dd).

    I’ve tried to figure it out myself, but I’m a beginner and can’t figure it out.

    Thanks again, have a great day!

    • “Cannot load script. Script “NDVI_CODE” does not exist in repository “users/abdoelhmdi/NDVI_CODE”.

      You need to set up script sharing or check the url is correct.

  2. Thank you very much for your excellent coding. Sir, how can we implement it for a specific study area? What should the ee.FeatureCollection and ee.Image data be? Can you please suggest, sir?

  3. Thank you so much Ujaval, absolutely great post!
    I wanted to ask you how I can use the average and not the maximum of the “doubled” daily values?

    If I change:
    “var val = matches.reduce(ee.Reducer.max())”
    “var val = matches.reduce(ee.Reducer.mean())”, the value is calculated using -9999 (at least I think, because I get minus values).

    my code: https://code.earthengine.google.com/29a00cd6f090b258d73a3a99e39ff72d
    Thanks a lot again, hats off to you for your work.

    • Thank you. You can try the following to remove the -9999 before calculating the mean

      var val = matches.removeAll(-9999).reduce(ee.Reducer.mean())

      Let me know if that worked.

      • Thanks for answer. Now I tried it and I get the error: Invalid argument specified for ee.List(): -9999.

        I tried replacing the original code:

        In the original code, var ndvi = ee.List([feature.get('ndvi'), -9999]), I replaced the -9999 value with null, but then it seems to miss some days in the export.

        Any ideas to solve the problem? Thanks a lot!

      • My bad. The removeAll() function takes a list, so the correct code is below

        var val = matches.removeAll([-9999]).reduce(ee.Reducer.mean())
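[Editor's note: the fix above works because removeAll() drops the NoData entries before the mean reducer runs. The same logic in plain JavaScript, with a hypothetical helper name for illustration:]

```javascript
// Drop the NoData sentinel before averaging, mirroring
// matches.removeAll([-9999]).reduce(ee.Reducer.mean()).
// Returns noData if no valid values remain.
function meanIgnoringNoData(values, noData) {
  var valid = values.filter(function(v) { return v !== noData; });
  if (valid.length === 0) return noData;
  return valid.reduce(function(a, b) { return a + b; }, 0) / valid.length;
}
```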


  5. This is a great tutorial and thank you very much for the thorough step by step guide. I am getting an error while exporting the data and am unable to interpret it. I will be grateful if you please help.

    Error: Error in map(ID=2020_01_01_00000000000000000113): Number.format: Parameter ‘number’ is required.

  6. Thanks for sharing this tutorial it has been super helpful! Rather than generate points within GEE I uploaded my points of interest from a csv file. In my csv file I have a point id column. Is it possible in line 18 to specify the value GEE recognizes as the id to the column in my csv file rather than the feature id? In other words instead of :

    {'id': feature.id()}

    Can I do this?

    {'id': "point id from my csv file"}

    Hopefully this question makes sense and I appreciate any advice folks can offer.

  7. Great work Ujaval ji, I am one of your fans. I just want to know: if I have more than 100 farms in shapefile or KML format, how can I extract them to CSV? Second question: if I have the lat/long of 100 farms in Excel form and I want to extract NDVI values for all of them, how can I do that?

  8. What if I want to extract multiple indices (NDVI, NDRE, EVI) and plot them as well, each one having its own curve?

  9. Thanks dear Ujaval, I am one of your fans and followers on YouTube and LinkedIn. I want to ask how we can extract a MODIS NDVI time series from 2008 to 2019 for a large training sample (e.g. 45,000 samples).
