Extracting Time Series using Google Earth Engine

Time series analysis is one of the most common operations in remote sensing. It helps in understanding and modeling seasonal patterns, as well as in monitoring land cover changes. Earth Engine is uniquely suited to extracting dense time series over long periods of time.

In this post, I will go through different methods and approaches for time series extraction. While there are plenty of examples available that show how to extract a time series for a single location, unique challenges come up when you need a time series for many locations spanning a large area. I will explain those challenges and present code samples to solve them.

The ultimate goal for this exercise is to extract NDVI time series from Sentinel-2 data over 1 year for 100 farm locations spanning an entire state in India.

Prefer a video-based guide?

Check out this series of videos that gives you step-by-step instructions and explains the process of extracting an NDVI time series in Earth Engine using MODIS data.

Preparing the data

Farm Locations

To make the analysis relevant, we need a dataset of farm locations spanning a large area. If you have your own data, you can upload the shapefile to Earth Engine and use it. For the purpose of this post, I generated 100 random points. I wanted those random points to meet certain criteria – for example, they should fall on farmland growing a certain crop. We can use the GFSAD1000: Cropland Extent 1km Crop Dominance (Global Food-Support Analysis Data) dataset from the Earth Engine Data Catalog to select all pixels that are farmland growing wheat and rice. The stratifiedSample() method then lets us generate sample points within those pixels – approximating a dataset of 100 farm locations.

var gaul = ee.FeatureCollection("FAO/GAUL/2015/level1");
var gfsad = ee.Image("USGS/GFSAD1000_V0");
// Select the 'landcover' band with pixel value 1
// which represents Rice and Wheat Rainfed crops
var wheatrice = gfsad.select('landcover').eq(1)
// Uttar Pradesh is a large state in the Indo-Gangetic Plain with
// a large agricultural output.
// We use the Global Administrative Unit Layers (GAUL) dataset to get the state boundary
var uttarpradesh = gaul.filter(ee.Filter.eq('ADM1_NAME', 'Uttar Pradesh'))
// The wheatrice image contains 1 and 0 pixels. We want to generate points
// only in the pixels that are 1 (representing crop areas)
// selfMask() masks the pixels with 0 value.
var points = wheatrice.selfMask().stratifiedSample(
  {numPoints: 100, region: uttarpradesh, geometries: true})
// We need a unique id for each point. We take the feature id and set it as
// a property so we can refer to each point easily
var points = points.map(function(feature) {
  return ee.Feature(feature.geometry(), {'id': feature.id()})
})
// Show the state polygon with a blue outline
var outline = ee.Image().byte().paint({
  featureCollection: uttarpradesh,
  color: 1,
  width: 3
});
Map.addLayer(outline, {palette: ['blue']}, 'AOI')
// Show the farm locations in green
Map.addLayer(points, {color: 'green'}, 'Farm Locations')

Sentinel-2 Image Collection

We will use atmospherically corrected Sentinel-2 Surface Reflectance data. To use this in our analysis, we should filter the collection to images overlapping with the farm locations and falling within the time range. It is also important to apply cloud masking to remove cloudy pixels from the analysis. This part is fairly straightforward: map functions over the collection to mask clouds and add an NDVI band, then filter it down to the date range and locations.

// Function to mask cloud and snow pixels
function maskCloudAndShadows(image) {
  var cloudProb = image.select('MSK_CLDPRB');
  var snowProb = image.select('MSK_SNWPRB');
  var cloud = cloudProb.lt(5);
  var snow = snowProb.lt(5);
  var scl = image.select('SCL');
  var shadow = scl.eq(3); // 3 = cloud shadow
  var cirrus = scl.eq(10); // 10 = cirrus
  // Keep pixels with cloud and snow probability less than 5%
  // that are not classified as cloud shadow or cirrus
  var mask = cloud.and(snow).and(shadow.neq(1)).and(cirrus.neq(1));
  return image.updateMask(mask);
}

// Function to add an NDVI band
function addNDVI(image) {
  var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi')
  return image.addBands([ndvi])
}

var startDate = '2019-01-01'
var endDate = '2019-12-31'

// Use Sentinel-2 L2A data - which has better cloud masking
var collection = ee.ImageCollection('COPERNICUS/S2_SR')
    .filterDate(startDate, endDate)
    .filterBounds(points)
    .map(maskCloudAndShadows)
    .map(addNDVI)

// View the median composite
var vizParams = {bands: ['B4', 'B3', 'B2'], min: 0, max: 2000}
Map.addLayer(collection.median(), vizParams, 'collection')

Get Time Series for a Single Location

At this point, our collection has images spanning a full year. Extracting NDVI values at any single location for the full year is quite easy. We can use the built-in charting functions to chart the NDVI value over time.

Our points collection has 100 points. We call .first() to get the first point from the collection and create a chart using the ui.Chart.image.series() function. Once you print a chart, you can click the button next to it to open the chart in a new tab, where you get an option to download the data as a CSV.

var testPoint = ee.Feature(points.first())
//Map.centerObject(testPoint, 10)
var chart = ui.Chart.image.series({
    imageCollection: collection.select('ndvi'),
    region: testPoint.geometry(),
    reducer: ee.Reducer.mean(),
    scale: 10
}).setOptions({
      interpolateNulls: true,
      lineWidth: 1,
      pointSize: 3,
      title: 'NDVI over Time at a Single Location',
      vAxis: {title: 'NDVI'},
      hAxis: {title: 'Date', format: 'YYYY-MMM', gridlines: {count: 12}}
});
print(chart);

This is a nice NDVI time-series chart showing the dual-cropping practice common in India.

Exporting Time Series for A Single Location/Region

If you want a time series over a polygon, the above technique still works. But if the region is large and your time series is long, you may still run into 'Computation Timed Out' errors. In that case, we can Export the results as a CSV. We can use the reduceRegion() function to get the NDVI value from an image. Since we want to do that for all images in the collection, we need to map() a function over the collection.

var filteredCollection = collection.select('ndvi')
var timeSeries = ee.FeatureCollection(filteredCollection.map(function(image) {
  var stats = image.reduceRegion({
    reducer: ee.Reducer.mean(),
    geometry: testPoint.geometry(),
    scale: 10,
    maxPixels: 1e10
  });
  // reduceRegion doesn't return any output if the image doesn't intersect
  // with the point or if the image is masked out due to cloud
  // If there was no ndvi value found, we set the ndvi to a NoData value -9999
  var ndvi = ee.List([stats.get('ndvi'), -9999])
    .reduce(ee.Reducer.firstNonNull())

  // Create a feature with null geometry and NDVI value and date as properties
  var f = ee.Feature(null, {'ndvi': ndvi,
    'date': ee.Date(image.get('system:time_start')).format('YYYY-MM-dd')})
  return f
}))

// Check the results
print(timeSeries.first())

// Export to CSV
Export.table.toDrive({
    collection: timeSeries,
    description: 'Single_Location_NDVI_time_series',
    folder: 'earthengine',
    fileNamePrefix: 'ndvi_time_series_single',
    fileFormat: 'CSV'
})

Getting Time Series for Multiple Locations

While you can chart or export a time series for a single location as shown above, things start getting a bit more complex when you want to do the same for many locations. Continuing the charting method above, you may think of using the ui.Chart.image.seriesByRegion() function to get a chart for all 100 points over the year. But you will start hitting the limit of what can be done in Earth Engine's 'interactive' mode.

var chart = ui.Chart.image.seriesByRegion({
    imageCollection: collection.select('ndvi'),
    regions: points,
    reducer: ee.Reducer.mean(),
    scale: 10
})
// This doesn't work as the result is too large to print
print(chart)

This is understandable. Earth Engine limits execution time in the interactive mode to 5 minutes and times out if your computation takes longer. In such cases, the recommendation is to switch to the 'batch' mode, which has a lot more resources and can run a computation for a long time. The way to use the batch mode is through any of the Export functions.

The method to export a time series is explained well in this tutorial. The code has a clever way of organizing the results of reduceRegions() into a table that can be exported. That code works when your points do not span a large area. If you tried using that approach for this example, you would run into problems.

Problem 1: Handling Masked Pixels

As we have masked cloudy pixels in the source images, those pixels will return null values, resulting in data gaps. And since our area spans multiple images, for any given point the majority of the images will not intersect the point and will also return a null value. We can fix this by assigning a NoData value (such as -9999) to any missing value in the time series. Specifically, we use the ee.Reducer.firstNonNull() function to programmatically assign -9999 to any output containing a null value. Below is the modified code that applies this fix while extracting the NDVI value for every point from every image.

var triplets = collection.map(function(image) {
  return image.select('ndvi').reduceRegions({
    collection: points,
    reducer: ee.Reducer.mean().setOutputs(['ndvi']),
    scale: 10
  })
  // reduceRegions doesn't return any output if the image doesn't intersect
  // with the point or if the image is masked out due to cloud
  // If there was no ndvi value found, we set the ndvi to a NoData value -9999
  .map(function(feature) {
    var ndvi = ee.List([feature.get('ndvi'), -9999])
      .reduce(ee.Reducer.firstNonNull())
    return feature.set({'ndvi': ndvi, 'imageID': image.id()})
  })
}).flatten();
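Outside Earth Engine, the firstNonNull fallback used above can be sketched in a few lines of plain JavaScript (the sample values below are made up for illustration):

```javascript
// Return the first element of the list that is not null/undefined,
// mirroring what ee.Reducer.firstNonNull() does on the server.
function firstNonNull(values) {
  for (var i = 0; i < values.length; i++) {
    if (values[i] !== null && values[i] !== undefined) {
      return values[i];
    }
  }
  return null;
}

firstNonNull([0.42, -9999]);  // → 0.42 (the pixel had a valid NDVI)
firstNonNull([null, -9999]);  // → -9999 (masked or non-intersecting pixel)
```

Because -9999 is the last element of the list, it is only picked when the reduceRegions output is missing.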

The triplets variable contains a 'tall' table with 1 row per date per farm. This format is suitable for further processing in a GIS or statistical analysis. If you require such an output, we can go ahead, set a 'date' property, and export this table.

// The result is a 'tall' table. We can further process it to 
// extract the date from the imageID property.
var tripletsWithDate = triplets.map(function(f) {
  var imageID = f.get('imageID');
  var date = ee.String(imageID).slice(0,8);
  return f.set('date', date)
})

// For a cleaner table, we can also filter out
// null values, remove duplicates and sort the table
// before exporting.
var tripletsFiltered = tripletsWithDate
  .filter(ee.Filter.neq('ndvi', -9999))
  .distinct(['id', 'date'])
  .sort('id')

// We can export this tall table.
// Specify the columns that we want to export
Export.table.toDrive({
    collection: tripletsFiltered,
    description: 'Multiple_Locations_NDVI_time_series_Tall',
    folder: 'earthengine',
    fileNamePrefix: 'ndvi_time_series_multiple_tall',
    fileFormat: 'CSV',
    selectors: ['id', 'date', 'ndvi']
})

Some applications will require a 'wide' table with 1 row per farm containing all observations. We can write a format() function that turns the triplets into a wide table.

var format = function(table, rowId, colId) {
  var rows = table.distinct(rowId);
  var joined = ee.Join.saveAll('matches').apply({
    primary: rows,
    secondary: table,
    condition: ee.Filter.equals({
      leftField: rowId,
      rightField: rowId
    })
  });
  return joined.map(function(row) {
      var values = ee.List(row.get('matches'))
        .map(function(feature) {
          feature = ee.Feature(feature);
          return [feature.get(colId), feature.get('ndvi')];
        });
      return row.select([rowId]).set(ee.Dictionary(values.flatten()));
  });
};

var sentinelResults = format(triplets, 'id', 'imageID');
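The join-based pivot above is easier to follow with a plain-JavaScript sketch of the same tall-to-wide reshaping; the ids, image IDs and NDVI values below are made up for illustration:

```javascript
// Group tall rows by point id and spread each (imageID, ndvi) pair
// into the columns of a single wide row per point - the same reshaping
// that format() performs with a join on the server.
function pivotTallToWide(rows) {
  var wide = {};
  rows.forEach(function(row) {
    if (!wide[row.id]) {
      wide[row.id] = {id: row.id};
    }
    // Each image's NDVI value becomes a column named after the image ID.
    wide[row.id][row.imageID] = row.ndvi;
  });
  return Object.keys(wide).map(function(id) { return wide[id]; });
}

// Two points observed in two images each.
var tall = [
  {id: '0', imageID: '20190101T050101_A', ndvi: 0.41},
  {id: '0', imageID: '20190111T050101_B', ndvi: 0.55},
  {id: '1', imageID: '20190101T050101_A', ndvi: -9999},
  {id: '1', imageID: '20190111T050101_B', ndvi: 0.62}
];
var wide = pivotTallToWide(tall);
// wide[0] → {id: '0', '20190101T050101_A': 0.41, '20190111T050101_B': 0.55}
```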

Problem 2: Granule Overlaps

The second problem is specific to Sentinel-2 data and how individual images are produced from the raw data. If you are working with any other dataset (Landsat, MODIS, etc.), skip this step and Export the collection generated in the previous step.

The Sentinel-2 data is distributed as granules, also called tiles, which are 100×100 km² ortho-images. As you can see in the map below, there is an overlap between neighbouring granules, so the same raw pixel can be present in up to 4 tiles. And since each granule is processed independently, the output pixel values can differ slightly.

If we exported the table generated in the previous step, we will see multiple NDVI values for the same day which may or may not be the same. For our time series to be consistent, we need to harmonize these overlapping pixels. When exporting the tall table, we used the distinct() function which picked the first of the duplicate values. A better solution is to take all NDVI values for the same day (generated from the same raw pixels) and assign the maximum of all values to that day. This results in a clean output with 1 NDVI value per point per day.

The following code finds all images of the same day and creates a single output for the day with the maximum of all NDVI values.

// There are multiple image granules for the same date processed from the same orbit
// Granules overlap with each other and since they are processed independently
// the pixel values can differ slightly. So the same pixel can have different NDVI 
// values for the same date from overlapping granules.
// So to simplify the output, we can merge observations for each day
// And take the max ndvi value from overlapping observations
var merge = function(table, rowId) {
  return table.map(function(feature) {
    var id = feature.get(rowId)
    var allKeys = feature.toDictionary().keys().remove(rowId)
    var substrKeys = ee.List(allKeys.map(function(val) {
        return ee.String(val).slice(0,8)
    }))
    var uniqueKeys = substrKeys.distinct()
    var pairs = uniqueKeys.map(function(key) {
      var matches = feature.toDictionary().select(
        allKeys.filter(ee.Filter.stringContains('item', key))).values()
      var val = matches.reduce(ee.Reducer.max())
      return [key, val]
    })
    return feature.select([rowId]).set(ee.Dictionary(pairs.flatten()))
  })
}
var sentinelMerged = merge(sentinelResults, 'id');
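The day-wise merge can likewise be sketched in plain JavaScript; the column names and NDVI values below are made up for illustration:

```javascript
// Column names begin with the acquisition date (YYYYMMDD), so we group
// the columns of a wide row by their first 8 characters and keep the
// maximum NDVI among same-day duplicates from overlapping granules.
function mergeSameDay(row) {
  var merged = {id: row.id};
  Object.keys(row).forEach(function(key) {
    if (key === 'id') return;
    var day = key.slice(0, 8);
    if (merged[day] === undefined || row[key] > merged[day]) {
      merged[day] = row[key];
    }
  });
  return merged;
}

// The same point seen by two overlapping granules on 2019-01-01.
var row = {id: '0', '20190101T050101_A': 0.41, '20190101T050101_B': 0.44};
var merged = mergeSameDay(row);
// merged → {id: '0', '20190101': 0.44}
```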

Exporting the Time Series

The collection now contains formatted output. It can be exported as a CSV file. Running the code below will create an Export task. Click Run, confirm the parameters and start the task. Once the export task finishes, you will have the CSV file in your Google Drive.

Export.table.toDrive({
    collection: sentinelMerged,
    description: 'Multiple_Locations_NDVI_time_series_Wide',
    folder: 'earthengine',
    fileNamePrefix: 'ndvi_time_series_multiple_wide',
    fileFormat: 'CSV'
})

You can see the full script at https://code.earthengine.google.co.in/24c0b6b1a8004a6cd7f43924dcd5cb05

Here is the resulting CSV file.

Hope you found the post useful and got some inspiration to apply it to your own problem. Do leave a comment if you have ideas to improve the code.

If you are new to Earth Engine and want to master it, check out my course End-to-End Google Earth Engine.


Leave a Comment

  1. Hi Ujaval !! Thank you so much for your post. That’s gold to me 🙂

    I have a question: I try to adapt your code to create a .csv table for each band (b2, b3, b4, …)
    When you give the solution for the overlap granules it’s possible that it works for each band too ?
    this part in your code makes me think in the NDVI threshold: // return ee.String(val).slice(0,8)} //


    • Hi Eva – ee.String(val).slice(0,8) is for extracting the date (YYYYMMDD) that is part of the name of the image. So it should work regardless of which band you are extracting.

      To get b1, b2 etc, change the ‘ndvi’ values when you are creating the triplets with ‘b1’, ‘b2’ etc.

  2. Hi Ujaval, thank you very much for this great post ! That’s what I’ve been looking for for a long time.

    I’m doing a similar time series exercice and I have two questions regarding the overlapping granules .

    – Why would you take the maximum of all pixel values to make one single output ? Why not the mean of the pixel values ? Would you say one is more appropriate than the other ?

    – Would you have a simpler code to create a single output for the day with the maximum of all pixel values from an image collection ? I’m not using NDVI nor the code of the probleme 1 because I’m working on water reflectance.

    Thank you in advance for your response.


    • Hi Axelle,

      Glad you found this useful.

      Regarding taking the maximum value, it depends on your application. Mean/Median is fine too.

      For the daily max value – just calling max() on the collection will give you a composite with the maximum value at each pixel from all images. You will need a collection that is filtered by the day. You can use the calendarRange() function to filter the collection. Tell me more about the exact use case and maybe I can give you a code snippet.

      • Hi Ujaval,

        Thank you for your quick reply !

        My goal is to find a correlation between the water quality and the surface reflectance of Sentinel-2 imagery.
        For that, I extracted the pixel value around a point located in the middle of a river with a buffer of 15 meters and I created a time series chart for all the Sentinel-2 bands.

        Here is a part of my code :

        var roi = ee.Geometry.Point(5.068344, 50.492951).buffer(15);

        var s2Col = ee.ImageCollection('COPERNICUS/S2_SR')
        .filterDate(startDate, endDate)
        .select(['B1','B2','B3','B4','B5','B6','B7','B8','B9','B8A','B11','B12'])

        print('Surface reflectance by band', ui.Chart.image.series(s2Col, roi, ee.Reducer.mean()).setChartType('LineChart'));

        I hope that was clear enough.


    • Hi Asmae – The reason you have gaps is because you are showing only a single image on the map. When you iterate over the collection and combine bands of each image in a single image, you get a giant image where the bands are B1, B2, B3, …., B1_1, B2_1, B3_1, …. , B1_2, B2_2, B3_2 .. , when you display your image, you are asking to display only B3, B2, B1 which are from the first image. Since you have applied cloud masking there will be gaps.

      I haven’t seen this type of classification using each image from a time series. Usually you do a composite from a time-range, i.e. a growing season and create composites from that time range and use them for classification. Here’s a code snippet on how to do it.

      // Creating Sentinel-2 based seasonal data to detect phenological differences
      var seasons = ee.List.sequence(1, 12, 3).map(function(month) {
      month = ee.Number(month)

      var collection = s2
      .filterDate('2015-01-01', '2018-12-31')
      .filter(ee.Filter.calendarRange(month, month.add(2), 'month'))
      // Pre-filter to get less cloudy granules.
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
      // Removing cloudy images
      // Adding indices

      var reducer = ee.Reducer.median()

      var composite = collection.reduce(reducer)
      return composite
      })

      var season_composites = ee.ImageCollection.fromImages(seasons).toBands()
      var composite = composite.addBands(season_composites)

  3. Thank you for your reply,

    I’ve seen your suggestion, it’s interesting, but then I realized that I had mis-explained my objective.

    I wanted to make a map to differentiate between crop types in a single year (from the beginning of the agricultural year to its end) using NDVI time series, so the crop would be characterized by its spectral-temporal profile. That’s why I’ve made a stacked image using each NDVI band from the time series, and that stack would be the input for the classification.
    Do you think that idea stands?

    The NDVI bands were also clipped using AOI named domaine

    I have made some modifications and applied 3 classifications: RF, CART and SVM, but still I couldn’t get a good classification. Only the CART classification has the frame of the AOI domaine, but SVM and RF are cut in half. I realized that comes from bands in the input that don’t cover the AOI. Maybe I should remove them, but I don’t know how to do so, or is there another solution?

    The new code with rectifications

    So thankful ,,

    • Since each image doesn’t cover your AOI – you can’t use it to sample training points. To use the spatio-temporal profile of NDVI, you can use monthly or seasonal composites (not individual images) and add them as bands. I realize this is a common question and there is no good example in the user guide. So I will cover this in another blog post soon.

  4. Hi Ujaval! thank you SO much!

    This code is really helpful, it’s all I’ve been looking for haha. Really appreciate your time for explaining it.

  5. Hi Ujaval, really useful post and clearly explained, thanks a lot!
    I do have a question: I pretty new with GEE. I would like to apply this code for a shapefile containing multiple polygons.
    The ‘id’ I would like to use for my crop fields is contained in the feature collection under features properties in the field ‘Name’

    Could you help me understanding how to adapt line 21 in order to make the code work?

    var points = points.map(function(feature) {
    return ee.Feature(feature.geometry(), {‘id’: feature.id()})

    I hope my question is clear enough.
    Thank you some much again for this post, I could learn a lot!

  6. Hi ujaval, thank you very much! Your code is very helpful!
    I have a quick question for you, maskS2clouds removes cloud pixels from images, is there no way to remove the entire image that contains those cloud pixels.
    On my application, I want to generate NDVI time series for polygons that I import on google earth engine and I want to delete each image that contains clouds, basically, I want to keep on my time series only the NDVI values of the images or there are no clouds.

    • That is quite simple. You can filter out images that have more than a certain percentage of cloud cover.

      collection.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))

      If you want images with zero clouds, you will end up with very few images, but give it a try

      collection.filter(ee.Filter.eq('CLOUDY_PIXEL_PERCENTAGE', 0))

    • You can use GAUL which is available in the catalog. But the international boundaries won’t be correct.

      var gaul = ee.FeatureCollection("FAO/GAUL_SIMPLIFIED_500m/2015/level2");

  7. Thanks for posting this Ujaval! Your final output is exactly what I’m hoping to generate. However, I modified the code for a Landsat EVI series, and I’m running into a couple of issues. First, in line 67 I’m getting an error that says “EVI is not defined in this scope”, even though it is a band in the image collection. When I comment out the .set command in that line the code runs and I can generate the feature collection.

    Next, when attempting to generate the table to export, I get an error that says “Dictionary: Element at position 0 is not a string.” Any idea why I’m getting these errors?

    Code is available here: https://code.earthengine.google.com/863f733d4d87702e11b3e9ce3cc140f0

    I’m a novice GEE user, so any help you can offer is much appreciated.

      • That did the trick! Thanks for taking the time to look at what was ultimately a very basic mistake on my end.

      • Hi Ujaval,
        Thanks for the fixed code — I was also looking for something like this!
        On running the code (including the commented sections) I get a “Collection query aborted after accumulating over 5000 elements” error. Could you point me towards why this happens, and whether is prevents the eventual export to CSV? How would I fix it?

  8. Hi Ujaval, many thanks. You helped me a lot. But I have another question:

    I’m trying to retrieve all bands’ values of my sample points, and these points are divided into two classes(0 and 1). So I used ui.Chart.image.regions to meet my need, but it warned:
    Error generating chart: Data column(s) for axis #0 cannot be of type string.
    Here is my link: https://code.earthengine.google.com/5218af79c0450c0f2e3f0cf41ffa8f46?accept_repo=users%2Fgorelick%2FEE102
    Do you have any suggestions on how to solve the problem?

  9. Hello Ujaval


    Thanks for this tutorial, it is really helpful.

    I am trying to run this task on MODIS NDVI product.

    Whenever I choose the feature collection given in the mentioned presentation, it works fine.

    But when I insert my own feature collection which is a Grid file then the code works fine in the console but it is unable to export into CSV.
    It is giving an error ‘Error: Error in map(ID=2018_04_07_00000000000000000023): Dictionary: Element at position 0 is not a string.’

    Here is the link to my code & asset:-



  10. Hello Ujaval,

    I am trying to map flood events using google earth engine and I am supposed to use the expression below to mask water and make a time series for water area coverage. How can I include this in the code to get it running? The link code is also below:

    var indices = ee.Image.cat(

    image.expression("(b(4) - b(3)) / (b(4) + b(3))").rename("ndvi"),

    image.expression("(b(2) - b(4)) / (b(2) + b(4))").rename("ndwi"),

    image.expression("(b(2) - b(5)) / (b(2) + b(5))").rename("mndwi"));

    var water_mask = indices.expression('b("ndvi") < 0 && b("mndwi") > 0.5').rename("water_mask");

    Below is the link code:


    Thanks alot


  11. Hello Ujaval

    Thanks for this tutorial, it is really helpful.

    I have a question about the date.

    I modified the date range to two years, but it doesn’t work; it still calculates the NDVI for only one year.

    I’m a novice GEE user, so any help you can offer is much appreciated.

  12. I want to use specific points, so I uploaded my map coordinates (Longitude, Latitude) CSV file and imported it as a table. I tried to change the points variable in your code. I have just about 100 specific points and a date range of one month. It doesn’t work, as the task never finishes. I want to know how to use my own specific points with your code. I tried defining ‘raw_points’ to use instead of ‘points’ in your code. Here is my code (some variables are not used): https://code.earthengine.google.com/cf890f86f22e5332e189b49285c72e49

  13. Hello sir,
    Sir ,i am trying to develop real time land usage monitoring tool using satellite data and artificial intelligence.
    How can i start with it.
    I am unable to understand where to start from.

  14. Hi Ujaval: Thank you so much for this detailed instruction, it has made my time series to R so much easier!

    I was modifying the code to using Landsat SR–everything goes well except the final “granule overlap” part. I didn’t change anything of the code, other than replaced sentinel by landsat. But, Instead of getting what you have (74 properties from different date for each feature), I only got 2 properties for each feature, “id” and “LC08_124”, which seems like an aggregated value.

    I am wondering if you have any suggestions if the last part were to applied to Landsat images? Is there change I might need to be aware of ?


      • Hi Ujaval: I have been modifying your script for my own study for a while, and just realize one question–I am wondering if I can bother you on this:

        You are using points to extract NDVI in this tutorial. If I am using polygons instead for extracting time series data, do you think I need to add anything to indicate I want a mean VI value of each polygon? or it doesn’t matter?

        When I compare to Nick Clinton’s tutorial that you linked on this page, it seems I replaced the ee.Reducer.mean() with reducer: ee.Reducer.first().setOutputs(['Vegetation Index']), in order to make all the null values 0.

        My modified script is attached (everything works well):

        Thanks so much for this post, and the new blog post on CHIRPS that you shared!

      • Ujaval: Thanks for checking this! I changed to ee.Reducer.mean() and it seems the problem is solved. All the best, Lucy

  15. Hi Ujaval, thanks for sharing the tips on how to get a time-series.

    In my work, I aim to get a time-series for a few environmental variables that I have chosen. What I did was to get images within a year, and then applied a mean, to get the annual mean value per pixel. However, after I applied reduceRegions on the image, the mean value did not appear in the output. I have tried across the chose variables and I get the same results. Could it be due to a scaling issue or is there an issue I am not aware of?

    Link: https://code.earthengine.google.com/81f6d0123ccbd135de419711341fbdfa

    PS: Sorry if the script is a bit dense, I’m pretty new to Earth Engine and I wasn’t too comfortable with mapping functions across collections

  16. Hi ujaval,

    Very nice tutorial 🙂

    I would like to know if it is possible to export the tiff images from all the timestamps

    Thank you for your help

  17. Hello Sir,

    Thank you so much for the tutorial. I have a question related to averaging each polygon in a feature collection.

    I am working on yield prediction using machine learning techniques in GEE. For that, I have multiple polygons representing the crop fields. I want to get the mean vegetation indices for each polygon in the feature collection. When I applied the mean function, I got a single mean value for all features. Secondly, when I divided the available features into training and testing data, pixel values are taken instead of the polygon.

    I would be most grateful if you could give me any suggestions.

  18. Hello Sir,

    Thank you so much for the tutorial. I have a question related to averaging each polygon in a feature collection.

    I am working on yield prediction using machine learning techniques in GEE. For that, I have multiple polygons representing the crop fields. I want to get the mean vegetation indices for each polygon in the feature collection. When I applied the mean function, I got a single mean value for all features. Secondly, when I divided the available features into training and testing data, pixel values are taken instead of the polygon.

    I would be most grateful if you could give me any suggestions.

  19. Hi Ujaval!

    Really useful article it helps me a lot for the phenology analysis and identify the planting and growing seasons to make the composite. I would like to ask you another problem I am facing now.

    I would like to get n random points within a FeatureCollection that contains many polygons. So if I have a FeatureCollection with 2 polygons and I want 20 random points per polygon, I would like to have 40 random points in total, 20 for the first polygon and 20 for the second one.

    I have tried ee.FeatureCollection.randomPoints, which works very well when the FeatureCollection only has one polygon, but when I add an extra polygon it fails and returns only as many points as there are polygons, not 20 points for region1 and 20 for region2. Any suggestion about it? And/or how to map this function over each element of the FeatureCollection? Below is the code I have tried. Thank you so much in advance for your help!

    //This works without problem 🙂

    var region1= ee.Geometry.Rectangle(-0.289, 39.165,-0.297 ,39.170);

    // Create 20 random points in the region.
    var randomPoints = ee.FeatureCollection.randomPoints(region1,20);

    // response after run randomPoints size = 20

    // Add to map
    Map.addLayer(randomPoints, {}, 'random points');

    // add extra polygon and create featurecollection Fails,

    var region2 = ee.Geometry.Rectangle(-119.224, 34.669, -99.536, 50.064);

    var poly = ee.FeatureCollection([region1, region2])

    var NewrandomPoints = ee.FeatureCollection.randomPoints(poly,20);

    // response after run NewrandomPoints size = 2. 🙁

    Thanks a lot!

  20. Hi Ujaval.
    Great tutorial! I’m writing it in Python to see how it works using the Python API. So far, very well.

    Just a question… Is there a way to “re-arrange” the final table? My goal is to have a table with these columns: Feature_ID (since I’m working with several points), Date (the date of the image), and NDVI value.

    As in this tutorial, I need to extract NDVI values from different points in a Feature Collection and export a table that has also the date of the images from the Time Series.

    Thank you in advance!

    PS: I’ll be glad to share the Python version of the tutorial once I’m done!

    • Hi. If you want to export the table with just 3 columns, you can export the triplets collection. All the work after that is to merge those rows into a single row per feature.

      Please do share a github link to your Python implementation. Will be happy to link from this post.
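      In case it helps, here is a minimal sketch of such an export. It assumes a triplets FeatureCollection with id, date and ndvi properties, as constructed in the tutorial; adjust the names to match your own script.

      ```javascript
      // Sketch: export only the three wanted columns of the 'triplets'
      // FeatureCollection by listing them in the 'selectors' parameter.
      Export.table.toDrive({
        collection: triplets,
        description: 'ndvi_time_series',
        fileFormat: 'CSV',
        selectors: ['id', 'date', 'ndvi']
      });
      ```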

  21. Hi Ujaval, this post was really helpful to me. Thanks for sharing this.

    I am working on a similar project that requires exporting time-series NDVI values from multiple locations. I am using Python to get the NDVI values and dates from a single location inside a for loop and storing the values in an array. However, I am running into an issue where the NDVI array is not the same size as the date array, i.e., NDVI with 85 samples from one image, but 149 date values for the same image.
    Do you know if there is a way to extract the pair [NDVI, date] at once to export it as a table?

    Thank you so much for your help.

    • This is likely because some images are masked at that location. In your loop, try calling ndvi.unmask(-9999), which will replace masked values with -9999. See if that helps.
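      As a sketch (assuming image is a Sentinel-2 image inside your loop; the variable names here are placeholders):

      ```javascript
      // Compute NDVI from the NIR (B8) and Red (B4) bands, then fill
      // masked pixels with a sentinel value so that every image yields
      // a value and the NDVI and date arrays stay the same length.
      var ndvi = image.normalizedDifference(['B8', 'B4']).rename('ndvi');
      var ndviFilled = ndvi.unmask(-9999);
      ```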

    Hi Ujaval, thanks a lot for the informative code. I have a code that plots the Sentinel-1 VH backscatter time series for a specific coordinate point. I want to extend the code to a large area and detect negative changes in the time series (e.g. a change of -4 dB).

    Step 1: Check the time series of each pixel and compare it with the average of the past 5 years.
    Step 2: If the backscatter for a particular pixel decreases by, say, 4 dB, then classify the change into class 1, else class 2. Can you please help me with the code?

  23. Greatttt. Thanks! I will have a look at the video and code and will get back to you if needed 🙂

  24. Hi Ujaval! Thanks for this tutorial. I’m trying to follow it using a CSV file with GPS location data as my ‘points’, and got stuck when trying to get the time series for a single location, since I get an error that says: Error generating chart: No properties to chart.
    Do you know what may be causing this?
    I did finish running the code and obtained a table that looks very strange: points are not organized by ‘id’ (I guess that has to do with how I formatted my table?), and most of the values I obtained are 9999. Is that normal?

    Here’s the link to my code
    And asset:

    Thank you for any help you can provide!

  25. Hi Ujaval,

    Thank you for this very helpful tutorial! I have been trying to extract time series data from multiple polygons for the last few months, and this has been the most informative post I have seen so far. I am new to Google Earth Engine, so to learn how to do this I have adapted your script, changing points to polygons (and the satellite data to the MODIS Burned Area product, which I would like to extract from each polygon on a monthly basis). I am able to download the CSV file very quickly, but the formatting makes it unreadable. Would you kindly look at this modified script and offer any suggestions on exporting the data? I don’t need a formatted table, but I would need the polygon ID, median burn date, and date columns.

    Here is a link to the script: https://code.earthengine.google.co.in/29b6419ddab93c295c5a10905aa6e371

    Thank you,

      • Hi Ujaval,

        Thanks for your quick response! I did try that earlier, and it created a table that is close, but with some errors that I’m not sure how to control for (linked in this stackoverflow post). Thus, I tried to achieve the step before the formatting just to see where my error was, but that was also somewhat messy (for instance, not all rows have a geoID). I wonder why this error is occurring, or if there’s another way to approach having a clean table for each burn date and month?

        Many thanks,

  26. Hello Ujaval,

    I am trying to adapt your excellent code to look at NDVI values for several catchments in the Arctic. I used a shapefile generated in GIS and imported it to GEE. It has an attribute called “id”, the same way your points do, but when I run the script, the resulting CSV file has a column called system:index instead of the id number.

    I am sure this is an easy solution, but do you know what could be causing this?

    Many thanks,


  27. Hi, Ujaval.

    Thank you so much for the informative tutorial and code! I enjoy learning GEE thanks to you.

    Based on your code, I am trying to export the time series of NO2 change from multiple polygons with Sentinel-5 Precursor data. Though I could export the necessary data without errors, the result contained extra columns filled with “-9999”, and the date format became unreadable, like “20181231T235559_20190107T012845_0 (first feature)”.

    Would you kindly take a look at the code below and give suggestions to eliminate the unnecessary columns with “-9999” and simplify the id to something like “20190101”?

    Thank you very much in advance for your help.

    Best regards,

    • Hi Akira,

      Here’s how to trim your imageIDs to the dates

      The columns with -9999 are required; otherwise you will not get a CSV output with columns for all days. If there are 100 features and 100 days, your output table needs to have 100 rows and 100 columns. You need to put some value where there is no reading. You can replace -9999 with 0 or 'NULL' or any other value, but you can't skip it.

      • Hi, Ujaval,

        Thank you very much for your kind help with the code, and for teaching me about the columns of the CSV output! I’m excited to see the trimmed imageIDs, as that is exactly what I wanted.

        May I ask for a further suggestion: all the obtained NO2 data resulted in no reading (shown as -9999), even when I extended the filter dates to increase the number of days. Previously, I could see some values, though not many, in the CSV, such as “6.60E-05”.

        I tried searching for solutions and changed the code many times, but could not solve it. I’d really appreciate it if you could kindly give suggestions for this issue.

        Thank you very much again.

        Best regards,

      • The NO2 concentration values are very small and get rounded to 0. Multiply the image by a large number such as 1e6 and you will see the values.
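        For example (a sketch assuming image is an image from the Sentinel-5P NO2 collection; adjust the names to your script):

        ```javascript
        // S5P NO2 column densities are on the order of 1e-4 mol/m^2 and
        // can get rounded to 0 in the exported table; scaling by a large
        // factor before exporting preserves the values.
        var no2 = image.select('NO2_column_number_density');
        var no2Scaled = no2.multiply(1e6);
        ```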

      • Hi, Ujaval,

        Thank you very much for your suggestions!! I will try that way.

        Many thanks,

  28. Hi ujaval,

    I computed some vegetation indices (NDVI, ARVI, …) using the Sentinel-2 collection, and I also computed monthly precipitation and temperature using ERA5. I printed separate charts for the indices and the meteorological variables. Is it possible to put all of them on the same graph in Google Earth Engine? I had no problem putting precipitation and temperature on the same graph, but I can’t add the indices. I also want to see the correlation between them.

    Thank you,

  29. Hi Ujaval! Thank you very much for your tutorials! I am wondering if you have any tutorial explaining how to calculate the monthly mean MODIS NDVI value. So far, I have this code: https://code.earthengine.google.com/bce168010dc9a8c1af3b67eab4bec617?accept_repo=users%2Fujavalgandhi%2FEnd-to-End-Projects, but I don’t know how to add a “YYYYMM” imageId in order to download the FeatureCollection using the same code you presented here.

    • Hi Andrea – your assets are not shared (they need to be set to ‘anyone can read’), so I could not run your code. But from what I gather, you need to make the image id into a YYYYMM format.

      Change the line

      feature.set('date', image.id())

      to

      feature.set('date', ee.Date.parse('YYYY_MM_dd', image.id()).format('YYYYMM'))

  30. Hi Ujaval,

    This is awesome and will certainly help with my MSc project!

    Do you know how I could change the script so that, instead of computing an NDVI for each farm point, I can find the NDVI for each pixel in a Sentinel-2 tile? Essentially, I would like to produce a similar CSV, but with a pixel id and its coordinates.


  31. Hi Ujaval,

    Thanks for this post, it will be very useful for my MSc project. Do you know how I could tweak the code so that I can extract an NDVI for each pixel rather than a given point? Essentially, I’d like to produce a CSV file with NDVI values for each time step, containing a pixel id, the value, the time of the image, and the coordinates of the pixel. I’d like to do that for every pixel in an S2 image that covers my AOI.

    Many thanks!

  32. Hi Ujaval Sir! Thanks for sharing the code!

    I am trying to extract a long-term NDVI from Landsat 5 for a similar set of polygons, but I am having trouble with cloud filtering. How can I modify the code to get the desired results?

    Thanks for the help!

  33. Hello Ujaval, your contribution is extraordinary and very useful.
    I have a query and would greatly appreciate it if you have any examples.
    I want to extract a time series of multiple points (NDVI) in such a way that the monthly minimum, mean, and maximum values can be seen.
    I have achieved this with my code:
    However, I still cannot solve the challenge of obtaining monthly data. Also, I need to add a column that shows my “Parcela_ID”, which comes from my point database, so that I can identify which specific point each record in the time series refers to.
    Any support will be of great help.

  34. That was a great help in understanding. I have been trying to develop a code that classifies land cover classes (with special interest in hyacinth), then calculates the area of the hyacinth class, and then computes biomass based on that area. I am a little confused about the code to use, and some assistance would be highly appreciated. The code I’ve been working on is here. The script needs to iterate through the years and months.


    • You can do it if you have a few points using ui.Chart.image.seriesByRegion()

      If you have more points and a large time period, you can’t do it that way. You will have to export the data as CSV and chart it using Excel/R/Python.
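      For reference, a minimal sketch of the charting approach. It assumes an ndviCollection ImageCollection with an 'ndvi' band and a small points FeatureCollection with an 'id' property; both names are placeholders for whatever your script defines.

      ```javascript
      // Chart one mean-NDVI series per point; practical only for a few
      // points, since the chart must hold every series in memory.
      var chart = ui.Chart.image.seriesByRegion({
        imageCollection: ndviCollection,
        regions: points,
        reducer: ee.Reducer.mean(),
        band: 'ndvi',
        scale: 10,
        seriesProperty: 'id'
      });
      print(chart);
      ```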

  35. Hi Ujaval,

    Thank you very much for this well-written guided project on extracting NDVI time series using GEE.

    I am generating an NDVI time series for masked land cover classes using the Copernicus Land Cover 2019 global product in Google Earth Engine.

    I have masked the vegetation land cover class and computed an NDVI time series for this masked class at the pixel level. The code runs, but in the chart I have multiple observations for each date.

    Is there a way I could reduce the NDVI values for a single date into one mean value, to get a less noisy NDVI time-series chart? The red line is a trend line, just for visualizing the NDVI trend over time. A code snippet on how to apply this would be appreciated.

    The link to the code is here;


  36. Hi, thank you very much for the time series code, which was essential for my research work. How can we implement it for our AOI, and how can we pinpoint the NDVI of a specific location?
