How do different sensors perform across the electromagnetic spectrum? This question bears practical importance when we want to combine data acquired by different sensors. I thought it would be interesting and fun to do a simulation of how different common sensors see the same feature.
We could in principle do this using subsets of images of the same region captured by different sensors, but it is actually easier to compare them using a given spectral signature, the reflectance (or emittance) of a certain material as a function of wavelength.
I therefore went to the ASTER spectral library and downloaded several datasets corresponding to different spectral signatures. In the following example, we use that of common lawn grass:

How do Landsat 7 ETM+, Landsat 8 OLI and Sentinel 2A MSI “see” this grass? To answer this question, we need to know the shape of the actual relative sensor responses as a function of wavelength. These are technical data that can be found in the documentation for the sensors, and can for example be downloaded from this site.
The spectral signature of grass and the relative spectral responses of the instruments are expressed slightly differently in these files: micrometers vs. nanometers for wavelength, different sampling intervals, etc. Therefore a bit of fiddling with the data was necessary to put everything into a comparable form. I did all that using an eclectic mixture of free software tools (as usual!). In this case, I mainly used Perl Data Language and Generic Mapping Tools to interpolate the one-dimensional files onto a regular wavelength interval (I used sample1d for that). Everything was neatly glued together with Perl scripts, and the charts were created with gnuplot.
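(For the curious, here is a rough sketch of that resampling step in Python with NumPy rather than PDL and GMT. The file names, column layouts and the 1 nm grid are assumptions made only for illustration; the original processing used sample1d with Perl glue.)

    import numpy as np

    def load_spectrum(path, micrometers=False):
        # Read a two-column (wavelength, value) ASCII file; return wavelengths in nanometers
        wl, val = np.loadtxt(path, unpack=True)
        if micrometers:
            wl = wl * 1000.0              # micrometers -> nanometers
        order = np.argsort(wl)            # some library files list wavelengths in descending order
        return wl[order], val[order]

    # Common regular grid over the visible and near-infrared, 1 nm step
    grid = np.arange(400.0, 1001.0, 1.0)

    # Hypothetical input files: the ASTER grass signature (in micrometers) and one
    # relative spectral response per band (in nanometers)
    wl_grass, refl_grass = load_spectrum("grass.txt", micrometers=True)
    wl_rsr, rsr = load_spectrum("oli_nir_rsr.txt")

    # Linear interpolation onto the common grid (the role sample1d played originally)
    refl_on_grid = np.interp(grid, wl_grass, refl_grass)
    rsr_on_grid = np.interp(grid, wl_rsr, rsr, left=0.0, right=0.0)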
We can see that the responses of these three instruments in the visible and near-infrared parts of the spectrum are somewhat different:



Or, if we make a somewhat crowded packing of all three in one figure:

I deliberately omitted bands 5, 6 and 7 of Sentinel 2, which capture the vegetation ramp (the red edge), to avoid clutter. Also, since the Landsat sensors do not have these bands, there would be nothing to compare. We see that although the bands are judiciously designed to capture vegetation, all three sensors capture slightly different versions of the same feature.
To make the comparison, I derived the average reflectance of lawn grass, as expressed in the spectral signature, weighted by the relative spectral response of each sensor in each band. I then calculated this weighted reflectance for each band and, for fun, also calculated the Normalized Difference Vegetation Index (NDVI); a small sketch of the calculation follows the table:
Average band reflectance (percent) for grass, weighted by each sensor's relative spectral response:

    Band    ETM+     OLI      MSI
    Blue    3.87     3.93     3.30
    Green   7.86     8.88     8.39
    Red     4.51     4.67     4.16
    NIR     38.54    34.03    32.56
    NDVI    0.79     0.76     0.77
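For each band, the weighted average is simply sum(reflectance × response) / sum(response) over the wavelengths where the response is non-zero, and NDVI is (NIR − Red)/(NIR + Red); with the ETM+ numbers above, (38.54 − 4.51)/(38.54 + 4.51) ≈ 0.79. A minimal Python sketch of this step, continuing the resampling sketch above (the array names are again only illustrative):

    import numpy as np

    def band_average(reflectance, rsr):
        # Band reflectance weighted by the relative spectral response; both arrays
        # must already be sampled on the same wavelength grid
        return np.sum(reflectance * rsr) / np.sum(rsr)

    def ndvi(red, nir):
        # Normalized Difference Vegetation Index from band-averaged reflectances
        return (nir - red) / (nir + red)

    # Example with the (hypothetical) resampled arrays from the previous sketch:
    # red_avg = band_average(refl_on_grid, rsr_red_on_grid)
    # nir_avg = band_average(refl_on_grid, rsr_nir_on_grid)
    # print(ndvi(red_avg, nir_avg))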
We see that the values are very similar and differ only in a negligible way. The differences noted above would possibly disappear in a real case under the uncertainty introduced by atmospheric effects and their correction. This is good news, but we have to remember that we chose a very simple spectral signature to study (that of grass). We cannot rule out that a more complicated signature could fare worse, introducing artifacts and discrepancies in the results. In fact, small but significant differences in spectral responses have been found that may require due attention in critical applications (see, for example, the paper by Mandanici and Bitelli referenced below).
Finally, on a whim of informality, I allowed myself to do some “magicks” and create fake pixels out of these curves to show what the sensors would display in an image. I did this by rescaling the reflectances to the range 0–255, letting either the green or the infrared channel take the value 255 for natural color (R,G,B mapped to RGB) and false color (IR,R,G mapped to RGB), respectively, while scaling the other two channels proportionally:
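(A minimal sketch of that rescaling, assuming a purely proportional stretch that pins the brightest channel at 255; the ETM+ numbers from the table are used as example input.)

    import numpy as np

    def fake_pixel(channels):
        # Scale an (R, G, B)-ordered triple so the largest value becomes 255
        # and the other channels keep their proportions; return 8-bit values
        channels = np.asarray(channels, dtype=float)
        return np.round(255.0 * channels / channels.max()).astype(np.uint8)

    # ETM+ band-averaged reflectances (percent) from the table above
    blue, green, red, nir = 3.87, 7.86, 4.51, 38.54

    natural = fake_pixel([red, green, blue])       # green ends up at 255
    false_color = fake_pixel([nir, red, green])    # infrared ends up at 255
    print(natural, false_color)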


Very similar tones, although not quite identical!
Thanks for reading!
References
Mandanici, E.; Bitelli, G. Preliminary Comparison of Sentinel-2 and Landsat 8 Imagery for a Combined Use. Remote Sens. 2016, 8, 1014.
Very interesting blog post!
Thanks! It was fun to write and do the number crunching.
Very nice investigation, well done! There are important differences in the SRF of different sensors, and this is naively ignored in almost all RS studies… One thing that seems uneven in your ‘data crunching’: you say that you take the weighted integrated reflectance. Fair enough. But looking at the graphs and, for instance, the IR band, the OLI IR bandwidth is considerably smaller than the ETM+ IR bandwidth. That would lead to a correspondingly smaller IR value if you take the integrated reflectance. Do you mean perhaps that you took the weighted average of the reflectance, or am I missing something here?
Thank you for your comment, Dimitris. You understood correctly. The idea was to integrate the reflectance weighted by the sensor sensitivities under the sensor sensitivity curves. In practice, this meant taking the average of the product of the reflectance and the relative sensitivities of the sensors over the interval where the sensitivities are non-zero. It is fairly easy to do with a bit of programming. I realized the text wasn’t clear. It has now been changed for the better, I hope. Best, Hernán.
Great! This little exercise reflects what I teach (mostly focusing on NDVI) in the lecture related to the spectral aspects of data integration. I would like to use your figures, if possible.
Zoltán
Thank you. Yes, of course. Everything in this blog is CC-BY-SA. You can use and share, under the condition that you give credit to this site and share under the same conditions. Good luck!