Classification of Imagery in ESA SNAP

Introduction to Tutorial

In the last session, you learned how to import UAV imagery, in .dim format, into ESA SNAP and conduct basic operations. In this tutorial, we will show how you can use inbuilt functions in ESA SNAP to conduct unsupervised classification of your imagery, as well as how to conduct a process called “spectral unmixing” using the spectral library you created in session 3.

Tutorial Steps

Part One – Unsupervised Classification

Classification is a powerful tool that allows us to group objects within our image based on shared characteristics. In remote sensing, those shared characteristics are the reflectances associated with the spectral bands in our data.

Classification can be supervised or unsupervised. In supervised classification, we provide our classification model with the spectral information required to group objects.

If we do not have access to the precise spectral information needed to relate a pixel to its exact object identity, or if we are interested in how an unsupervised analysis compares with a supervised classification technique, we can use unsupervised classification. This is conducted using machine learning techniques.

Over the last decade, the availability of machine learning has grown rapidly. Even with modest computing power, and working with small datasets, we can use open source tools to apply machine learning methods to our data. In this part, we are going to use an unsupervised classification algorithm called k-means clustering. K-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster.
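As an aside, the clustering loop itself is straightforward. The sketch below is illustrative only (it is not SNAP's implementation, and the pixel data are randomly generated); it applies k-means to per-pixel band reflectances with the same settings we will use in SNAP (5 clusters, 30 iterations):

```python
import numpy as np

def kmeans(pixels, k=5, iterations=30, seed=0):
    """Cluster pixels (n_pixels x n_bands) into k spectral classes."""
    rng = np.random.default_rng(seed)
    # Initialise cluster centres from k randomly chosen pixels
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each pixel to its nearest centre (Euclidean distance in band space)
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

# Toy example: 200 "pixels", each with 5 band reflectances
pixels = np.random.default_rng(1).random((200, 5))
labels, centres = kmeans(pixels, k=5, iterations=30)
```

The output per pixel is simply a class index, which is what SNAP displays in the Class Indices band later in this part.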

  1. From the product window from the last session, select the RGB image of the survey.
  2. As classification techniques are computationally demanding, they can take some time to process. For demonstration purposes, we’re going to take a spatial subset of our data to be processed.
  3. Select Raster, then Subset.
  4. Set X scene start as 500, and X scene end as 2750. Set Y scene start as 1500, and Y scene end as 3200.
  5. Ensure that Band Subset includes all bands.
  6. Click OK.

    • Question: What is the process to generate an RGB image in ESA SNAP? Conduct this to generate an RGB image of your spatial subset.
      Answer: Select Open RGB Image Window. In the window that appears, select the bands that match the red, green and blue portions of the EM spectrum, e.g. bands 4, 3 and 2, respectively, when using Sentinel-2 data.
  7. With the RGB of the subset selected, select Raster, Classification, Unsupervised Classification, and K-Means Clustering.
  8. Set the number of clusters to 5, iterations to 30, and select all source bands. Select Run.
  9. Inspect the output by selecting 2024_Tyninghame_Survey_kmeans.dim in the Product Explorer and, under Bands, double-clicking Class Indices.

Part Two – Spectral Unmixing

Recall that in session 3 of this online tutorial, you created a file named Tyninghame_Spectral_Lib_SNAP.csv.

This file contains the output of the ground spectroscopy data collected at Tyninghame, split into several feature groups.

This ground spectroscopy data is going to be used to classify all of the pixels in our image into one of these feature classes (or more technically, highlight the abundance of each class within each pixel) using a technique known as spectral unmixing.

Spectral unmixing is the procedure by which the measured spectrum of a mixed pixel is decomposed into a collection of constituent spectra, or endmembers, and a set of corresponding fractions, or abundances, that indicate the proportion of each endmember present in the pixel.
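In other words, each pixel spectrum is modelled as a linear combination of the endmember spectra, weighted by their abundances. The sketch below is illustrative, with made-up endmember spectra; note that SNAP's Fully Constrained LSU additionally enforces that abundances are non-negative and sum to one, whereas this demo uses plain least squares:

```python
import numpy as np

# Linear mixing model: pixel = E @ abundances (+ noise),
# where each column of E is one endmember spectrum.

# Hypothetical endmember spectra: 4 bands x 3 feature classes
E = np.array([[0.10, 0.60, 0.25],
              [0.15, 0.55, 0.30],
              [0.40, 0.50, 0.20],
              [0.70, 0.45, 0.15]])

true_abund = np.array([0.5, 0.3, 0.2])   # known mixture, for the demo
pixel = E @ true_abund                   # simulated mixed-pixel spectrum

# Recover the fractional abundances by least squares
abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Because the simulated pixel is noise-free, the recovered abundances match the true mixture; on real data the constrained solver keeps the fractions physically meaningful.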

  1. First, let’s add some spatial data to the image. When collecting field spectra with the SVC HR-1024i, it will also provide you with GPS information. This helps with classification, as it directly matches your pixel to your sampling location. This information is found in the Ground_spectroscopy_survey.txt file in your workshop folder.
  2. Pins can be added by accessing View in the main toolbar, then Tool Windows, then Pin Manager.
  3. A new Pin Manager window will appear, usually at the bottom of your screen. Ensuring that the reprojected RGB window is selected, select Import Pins from this window, and in Placemark files, select Flat Text Format. Navigate to Ground_spectroscopy_survey.txt, then Open. The pins will now display in your active window.
  4. In the Main Toolbar, select Optical, then select Spectral Unmixing.
  5. Ensure that the source product is set to the reprojected image, 2024_Tyninghame_Survey_reprojected.
  6. If you press the Help button, then Help again, a new window will appear that describes the algorithm used by SNAP during the unmixing. Note the end section, which states the format the endmembers must be in.
  7. In the Endmembers window, click + to add an endmember file. Navigate to, and select, Tyninghame_Spectral_Lib_SNAP.csv. As the number of features cannot exceed the number of bands minus one, and as we wish to reduce computation time, we’re going to remove RG and Driftwood from the list.
  8. Choose Fully Constrained LSU to output fractional abundances.
  9. Highlight all the spectral source bands in the parameters window.
  10. Select Run. A new product will appear in your product window.
  11. A fully constrained LSU model will provide a number between 0 and 1 for each pixel for each feature class. A value of 1 suggests that that pixel’s reflectance can be 100% attributed to the respective feature class. Double-click the DrySand_mean_abundance band.

    • Question: What is the process to generate a mask? Conduct this to generate a mask for the DrySand_mean_abundance output, highlighting only pixels where more than 90% of their reflectance can be attributed to the DrySand feature class.
      Answer: In the main toolbar, select View, then Tool Windows, then Mask Manager. Click the f(x) symbol. In the Expression window, type DrySand_mean_abundance > 0.9, and press OK.
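For reference, the mask expression is simply a per-pixel threshold on the abundance band. The same operation on a toy abundance array (values invented for illustration) looks like:

```python
import numpy as np

# Toy 2x2 abundance band, values in [0, 1]; in SNAP this would be
# the DrySand_mean_abundance raster.
abundance = np.array([[0.95, 0.40],
                      [0.91, 0.10]])

# Equivalent of the Mask Manager expression "DrySand_mean_abundance > 0.9":
# True where the pixel passes the 90% abundance threshold.
mask = abundance > 0.9
```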

Part Three – Downsampling

Another advantage of using field spectroscopy and high-resolution UAV multispectral surveys is the ease by which you can downsample your imagery to simulate the pixel footprint of your satellite of interest.

This is particularly useful with the MAIA camera, since its spectral response already replicates that of Sentinel-2.

  1. Select the 2024_Tyninghame_Survey_reprojected product.
  2. In the Main Toolbar, select Raster, then Geometric, then Resampling.
  3. In the Resampling Parameters tab, select By pixel resolution.

    • Question: The pixel resolution you select should match that of the coarsest-resolution band you use. For this example, we will only be using Bands 3, 4, 5 and 8. What is the pixel spatial resolution of these bands on Sentinel-2?
      Answer: 10 m.
  4. Choose Mean as the downsampling method.
  5. Change the save file name suffix from _resampled to the spatial resolution you’ve given, i.e. _[spatial_resolution].
  6. A new product is created and displayed in the Product Explorer.
  7. Right click, and select Open RGB image. The resulting output is how it would look if this was a Sentinel-2 image.
  8. Press the Tile Evenly button to tile your outputs evenly.
  9. You can conduct exactly the same operations on this product as you have done before. Try it by generating an NDVI output.
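The Mean resampling in step 4, and the NDVI in step 9, can both be sketched numerically. The example below uses toy arrays, not the survey data: it block-averages a band by a factor of two to simulate a coarser pixel footprint, then computes NDVI as (NIR − Red) / (NIR + Red):

```python
import numpy as np

# Toy single-band image, 4x4 pixels; block-average by a factor of 2,
# which is what Mean resampling does when coarsening the pixel size.
band = np.arange(16, dtype=float).reshape(4, 4)
factor = 2
coarse = band.reshape(4 // factor, factor, 4 // factor, factor).mean(axis=(1, 3))

# NDVI from hypothetical resampled red and near-infrared bands
red = np.array([[0.10, 0.12]])
nir = np.array([[0.50, 0.40]])
ndvi = (nir - red) / (nir + red)
```

Each coarse pixel is the mean of a 2x2 block of fine pixels, exactly as each 10 m Sentinel-2-like pixel averages the UAV pixels falling within its footprint.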

Wrapup

You have now been introduced to the ESA SNAP application, and used it to conduct an unsupervised classification of a UAV multispectral image, as well as conduct spectral unmixing using ground acquired spectral information to determine fractional abundances.

The last section covered downsampling and drawing an equivalence with Sentinel-2 imagery. The tools acquired here can be applied to any aerial or satellite imagery that can be imported into ESA SNAP.