Overshoot and collapse

On 19 August 2014, just a few weeks ago, we “celebrated” this year’s Earth Overshoot Day, the day by which we have already consumed the resources generated and to be generated by the Earth during the whole year. This week, a new report was released showing further evidence that our global society has been progressing through the dreaded scenario 1 in “The Limits to Growth” report from 1972 [1], known as “business as usual” but better described as “overshoot and collapse”. We are living in interesting times. We should feel blessed …

Mid August. We will spend the coming four months eroding soils, overusing fertilizers, and accumulating stocks of pollution at rates much higher than the planet can recycle through its natural dynamics, thus harming its capacity to regenerate, transform and sustain life. This is equivalent to having spent our monthly salary by day 20, borrowing for the remaining 10 days. In the global case the loan will have to be paid by our children and grandchildren, not only in money, but in far more important assets such as lower quality of food and water, degraded health and an unpleasant Nature, amidst a potential social and environmental disarray of unheard-of magnitude. If you have been through revolutions, uprisings, or any other form of local social collapse, you can easily imagine that similar events at global scale won’t be amusing.

The Earth Overshoot Day is a measure of our abuse of the planet, calculated by the Global Footprint Network. As there are more and more people on Earth, using more and more resources, the Earth Overshoot Day has fallen earlier and earlier over time:


Plot of Earth Overshoot Day over time. Note that calculations might have a non-negligible error bound. In any case, the trend is not flattering, and clearly shows our inability to adapt our living to the conditions of the Earth.

A recent report by G.M. Turner at the Melbourne Sustainable Society Institute, University of Melbourne, Australia [2] is the latest comprehensive view of a process that has been previously shown by many other studies (see a long list of publications here): that the evolution path of our global society vastly disregards all criteria of sustainable development, using the available renewable resources at rates higher than their natural regeneration (e.g. fisheries, forests), using non-renewable resources at rates higher than the development of any substitute (e.g. soils, oil, gas), and releasing pollutants at rates higher than Nature can reabsorb and transform them (e.g. carbon dioxide, methane, PCB). The “business-as-usual” scenario, created as a realistic representation of a society like ours, obsessed with greed, consumption and the illusion of infinite growth, describes in its many variants an unsustainable path that invariably leads to an uncomfortable future, characterized by a much lower quality of life, drastic reduction in life expectancy and food availability, and an unpleasant Nature:


Screen capture from the World3 simulator (http://world3simulator.org/) showing the projections for scenario 1 in “The Limits to Growth”. Note the dates on the horizontal axis. We are not far from the projected collapse.

Alternative futures
In an effort to imagine a less traumatic future, we can finish these thoughts by noting that not everything is gloom-and-doom news yet, that we can still do something to avoid a global environmental catastrophe, although we are achingly running out of time. The very substantial changes we need require rethinking almost every aspect of our economy, social structures and relation to the environment, learning to live under Nature’s conditions. The challenges ahead are enormous. Some countries and regions have already made pertinent and useful changes, but the great majority of the world still lags behind.

Increased pressure on governments and politicians is necessary to force them to stop babbling and act, enforcing international climate and environmental pacts. However, the best place to start is right where we are, becoming ourselves agents of change [3], at our own personal and local scales of action, by choosing to consume less, finding new uses of what we already have, and recycling what cannot be reused, and by exercising as much as possible the “power of the purse”, forcing businesses to either adapt or go bust. Unfortunately, we should not forget that collapse may still be the most likely scenario, and perhaps it is reasonable to start considering how to face it (some suggestions might be found here [4]).


[1] Meadows, D., Randers, J., & Meadows, D. (2004). Limits to growth: the 30-year update. Chelsea Green Publishing.

[2] Turner, G. (2014) ‘Is Global Collapse Imminent?’, MSSI Research Paper No. 4, Melbourne Sustainable Society Institute, The University of Melbourne.

[3] Robertson, M. (2014) Sustainability. Principles and practice. Routledge.

[4] Maher, T. M., & Baum, S. D. (2013). Adaptation to and recovery from global catastrophe. Sustainability, 5(4), 1461-1479.

Mapping snowlines on temperate glaciers: the wonders of optimal thresholding

In this post I describe an interesting glaciological application of Otsu’s optimal thresholding, a method in digital image processing with which we can segment a gray-level image into two classes by finding the best threshold level between them. I stumbled upon the method a few years ago when working on a study in which I needed an objective and reproducible method for segmenting satellite images of glaciers into accumulation and ablation areas. After much trial and error, and in spite of its simplicity, Otsu’s optimal thresholding outperformed all other classification methods I tried, including sophisticated approaches such as ISODATA and spectral unmixing.

For the uninitiated, the problem is as follows: on a glacier, the snow line is the limit between snow (above) and ice (below). It is a dynamic feature whose altitude changes according to the weather and seasons. On temperate glaciers, those where the ice is everywhere at the pressure melting point, conventional glaciological wisdom tells us that the snow line at the end of the ablation season (usually the summer) can be considered a good approximation to the equilibrium line altitude, that is, the altitude at which the surface mass balance is zero. This approximation is not valid for subpolar glaciers because superimposed ice develops immediately below the firn field, effectively extending the accumulation area without appreciable optical contrast. The relevance of this question lies in the fact that glaciologists are usually interested in the relation between climate and glacier mass balance, and therefore mapping snowline altitudes at the end of the summer always has an intrinsic value. At least conceptually, the solution seems simple: find a cloud-free image taken at a suitable time of the year (end of summer, minimal seasonal snow cover) and just map the snow line on it. See, for example, the following subset of a satellite image of the Southern Patagonia Icefield:


South Patagonian Icefield, Viedma and Rio Túnel glaciers. Landsat 5 TM, 1997-03-17, path/row 231/094. The image is approx. 21 km across.

On Glaciar Viedma, the large one roughly at the center of the image, we can clearly see the obvious differences between the ice on the ablation area, to the south, and the snow and firn on the accumulation area, to the north. We probably feel at once that we could just draw a line along the limit and solve the problem. Reality is nevertheless more complicated because the limit is gradual, and placing a boundary will at some point become an arbitrary choice. If we had several interpreters working on this kind of problem we could in principle calculate some sort of average line, but having many humans paid to do this work is not usually within the budget of the typical glaciological project! This approach is also impractical if we need to map snow line altitudes over a large number of glaciers. Obviously, satellite images are suitable for this kind of task, and the problem of finding a methodology for mapping glaciers with them has challenged the ingenuity of many glaciologists for decades (see a very long list of articles about doing this with visible and near infrared images here). An important aspect is that for achieving some sort of consistency, and for covering large areas in a reasonable time, automatic methods which are also objective and reproducible are mandatory. In practice the matter is not trivial, but I found that Otsu’s optimal thresholding is an extremely suitable method to solve this problem and think it can be advantageously used for mapping a large number of glaciers, regionally or even globally.

As noted above, Otsu’s method reduces a gray-level image to a binary one by finding an optimal threshold level that maximizes a separability function. The original idea was published by Nobuyuki Otsu in 1979 (1), in what became a seminal paper in the field of digital image processing. The procedure can be found in exquisite detail in the original paper as well as in many books, and all around the internet. The recipe I follow here is that given by Gonzalez and Woods in their book “Digital image processing”, page 747 (2). I also give the Perl code that I used for all steps (using PDL, the Perl Data Language).

As an example, I here use a subset of a MODIS image (band 2, near infrared) of Glaciar Viedma, part of the study mentioned above (3). We will need a gray-level image of a glacier where the surroundings have been suitably clipped out using a glacier outline from an inventory. This is necessary in order to find the threshold level between ice and snow/firn, and not that between glacier and rock, which is usually more marked in pixel levels:


Glaciar Viedma. Subset of band 2 of an average of 7 MODIS images.

I chose Glaciar Viedma because its tongue is covered by volcanic ash and debris and it is a challenge for image classification. Ideally, a glacier histogram on this spectral band should look bimodal: one mode for ice, and the other for snow. Instead, we here have an additional mode for the debris, at a lower level than the ice:

Glaciar Viedma. Image histogram.


The arrow shows the position of the optimal threshold (anticipating the results!). We see that Otsu’s method performed reasonably well even in this difficult case. Simpler glaciers give even better results.

The procedure we followed to find this was:

0. We begin by loading the image into a piddle, a PDL matrix. To do this, we first convert the file from the original GeoTIFF format to NetCDF, using GDAL, and call it image.grd. We then read the file directly into PDL using the PDL::NetCDF module:
# read the file
my $image = PDL::NetCDF->new("image.grd",{REVERSE_DIMS => 1});
# load the z dimension (the data) into the variable
my $pdl_image = $image->get('z');
# convert eventual NaN to BAD values
$pdl_image->inplace->setnantobad;
# clump the two dimensions of the image to one long vector
my $pdl_clump = $pdl_image->clump(2);
# compute the histogram using hist
my ($vb,$nb) = hist $pdl_clump;
# and convolve a bit (using conv1d) to smooth out the pesky noisy histogram
$nb .= conv1d $nb, ones(11);

We then start with Otsu’s method proper:

1. compute the normalized histogram of the input image (in the normalized histogram, the frequency of every level is expressed as an estimate of the probability):
my $bnh = $nb/sumover($nb);

2. compute cumulative sums of the histograms (I here use a subroutine called “cumhisto”, which you can find in the appendix. However, PDL has a method for that called “cumusumover()”. Thanks Chris Marshall for calling my attention to that!)
my $bcsum = cumhisto($bnh);

3. compute the cumulative means (that is, for each level, the mean of the histogram up to that level)
my $bcmea = cumhisto($vb*$bnh);

4. compute the global intensity mean:
my $bintm = sumover($vb*$bnh);

5. compute the between class variance, \sigma^{2}_B , using:
\sigma^{2}_B(k) = \frac{(m_G P_1(k) - m(k))^{2}}{P_1(k) (1 - P_1(k))}

where m_G is the global mean, P_1(k) is the probability that a pixel belongs to class 1 (levels up to k), and m(k) is the cumulative mean up to level k.
my $bsigmasq = (($bintm*$bcsum)-$bcmea)**2/($bcsum*(1-$bcsum));

6. and then we obtain the optimum threshold as the value k^{*} that maximizes \sigma^{2}_B(k) :
my $botsuthr = $vb->index(maximum_ind($bsigmasq));

7. if we want, we can also compute the separability measure \theta^{*} , using:
\theta(k) = \frac{\sigma^{2}_B(k)}{\sigma^{2}_G}

where \sigma^{2}_G is the global variance:
my $bglobvar = sumover(($vb - $bintm)**2 * $bnh);

and then, the separability at the optimum threshold is:
my $btheta = max($bsigmasq) / $bglobvar;
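For readers who prefer Python over Perl, the whole recipe above can be condensed into a few lines of NumPy. This is just a sketch of the same steps 1-7, not the code used in the study; the function and variable names are my own:

```python
import numpy as np

def otsu_threshold(levels, counts):
    """Otsu's optimal threshold from a histogram.

    levels: gray-level values (bin centers); counts: their frequencies.
    Returns the threshold level and the separability measure theta.
    """
    p = counts / counts.sum()            # 1. normalized histogram
    P1 = np.cumsum(p)                    # 2. cumulative sums
    m = np.cumsum(levels * p)            # 3. cumulative means
    mG = np.sum(levels * p)              # 4. global intensity mean
    # 5. between-class variance; the histogram tails give 0/0, so silence warnings
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mG * P1 - m) ** 2 / (P1 * (1.0 - P1))
    k = np.nanargmax(sigma_b2)           # 6. level maximizing sigma_b2
    sigma_g2 = np.sum((levels - mG) ** 2 * p)
    theta = sigma_b2[k] / sigma_g2       # 7. separability measure
    return levels[k], theta
```

Given the same smoothed histogram, this should reproduce the threshold found with the PDL steps.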

Once we have the value of the threshold, we can simply generate a binary image by setting values below and above the threshold to 0 and 1, respectively. Our segmented glacier will then look like this:

Glaciar Viedma. Segmented into accumulation and ablation areas using Otsu's optimal thresholding method.


The method is computationally very efficient and can be made to run very fast. It is extremely effective with very few caveats. Obviously, images should be cloud free. The only problems I experienced were caused by shadows, or faulty outlines that left some rock or a sufficiently large nunatak inside the boundary. Also, computing AAR (accumulation_area / total_glacier_area) using different bands gives slightly different results, which is not a problem of the method, but a manifestation of the difficulty of the problem.
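Incidentally, once the glacier is segmented, the AAR mentioned above is just a pixel count. A minimal sketch (my own, assuming the segmented image holds 1 over the accumulation area, 0 over the ablation area, and NaN outside the glacier outline):

```python
import numpy as np

def aar(segmented):
    """Accumulation-area ratio from a segmented glacier image.

    segmented: 2-D array with 1 over the accumulation area, 0 over
    the ablation area, and NaN outside the glacier outline.
    """
    inside = ~np.isnan(segmented)              # pixels within the outline
    return np.nansum(segmented) / inside.sum()  # accumulation / total glacier
```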

All in all, Otsu’s optimal thresholding appears to me as an old gem that can be conveniently implemented on larger scales. For example, it would not be unthinkable to use the GLIMS or Randolph global glacier inventories, together with end-of-summer images over glaciers around the world, to systematically map AAR at a global scale on a routine basis. This could be done using the procedure described here at a fairly reasonable computational cost. Cool!


(1) Otsu, N (1979). “Threshold selection method from gray-level histograms”. IEEE Trans. Syst. Man Cybern. 9.1, 62–66. issn: 0018-9472.

(2) Gonzalez, R.C. and R.E. Woods (2008). “Digital image processing”. Pearson/Prentice Hall, 954 pp.

(3) De Angelis, H. “Hypsometry and sensitivity of the mass balance to changes in equilibrium-line altitude: the case of the Southern Patagonia Icefield.” Journal of Glaciology 60, no. 219 (2014): 14-28.


Cumhisto, a subroutine in PDL for calculating the cumulative histogram:

sub cumhisto {
    my $lqs = pdl(@_);
    my $cum = pdl(@_);
    my $h = 0;
    while ($h < nelem($lqs)-1) {
        my $dummy = zeroes(nelem($cum));
        $dummy($h+1:-1) .= $lqs(0:-($h+2));
        $cum += $dummy;
        $h++;
    }
    return $cum;
}


Tell Me How?

Hernán De Angelis:

This is so powerful, so true.

Originally posted on The Quiet Sky:

Tell me how not to be pliant like a willow
When winds of deceit and subterfuge blow?

To stand correct and straight amidst
Overwhelming corruption and hate?

Tell me how to fight war and strife
With a white glove and no knife?

To cry a little more now and then
On seeing killing, riots and mayhem?

Tell me how to tolerate and bear
The mindless destruction of Life everywhere?

How to feel a wrenching pain
As hunters shoot down for profit and gain?

Tell me how not to turn my face away
From gnawing poverty and hunger every day?

Tell me how to experience the same
Of those tortured and maimed?

Tell me how to be a lotus in the lagoon
Despite muddy waters, in full bloom?

Tell me how to revere and hold sublime
All Life in its variety Divine?

Tell me how, O Sage, to pray,
Bringing Compassion…


Precautionary principle

* originally published in my old blog “Tales of a Finite Planet”, on 2014-06-24

I had been pondering writing about the precautionary principle for a while. I hesitated because of the vastness and intricacy of the matter, the apparently endless ramifications, and the somewhat controversial legal sides (see for yourself!). I finally decided to go ahead, while keeping these notes close to the simple and powerful core idea.

The roots of the precautionary principle are often traced back to the preamble of the 1987 UN Montreal Protocol on the ozone layer and, particularly, to the Rio Declaration on Environment and Development, made at the 1992 Earth Summit in Rio de Janeiro. There, Principle 15 reads:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

The definition used by Schellnhuber and Wenzel, in “Earth system analysis: integrating science for sustainability” (p. 209), “securing of error-tolerant operating by judiciously staying away from the currently presumed civilizatory and ecological load limits”, is also illuminating because it emphasizes the links to a system dynamics approach, as well as the concept that the inherent limits in our knowledge give rise to uncertainties in forecasts and analyses. This applies specifically, for example, to the perceived safety of pollutants, insecticides, etc., and calls for extreme caution in their use.

There is, however, nothing truly revolutionary in these words, as the basic idea is as old as humanity itself and can be found, in subtle ways, in the wisdom and common sense of many cultures (or grandmothers, for that matter). The point was brilliantly made by Nassim Nicholas Taleb in his book “Antifragile”, and goes basically like this: would you ingest a bunch of randomly chosen pills on the premise that there is no evidence that they will do any harm to you? Well, no. The same applies to the environment!

The idea is closely connected with the concept of burden of proof. It means that those claiming that there is no harm in using a certain product or method are the ones who have to provide the evidence that it is safe, not the opposite. With respect to the global environment, the lesson is that we cannot happily assume that we can just do anything on a massive scale without harming the planet, particularly if these assumptions are based on a limited number of isolated and possibly lousy experiments (or none at all). Compare this with the vast experimental record of natural evolution. In her millions of years of trial and error, Nature has come up with far better solutions, compatible with the rest of the environment, than anything humans can do. This point was also raised by Taleb, with a cute probabilistic interpretation (see here).

In recent decades we have witnessed a particularly successful application of the precautionary principle, namely the ban on chlorofluorocarbons (CFC) and the ensuing recovery of the ozone layer. We now have at hand another excellent example with the European Commission’s recent ban on neonicotinoids, which has conceivably put a stop to the colony collapse disorder in the Po river valley, Italy, without observable negative impact on crop yields. The precautionary principle is also very relevant to the question of greenhouse gases and climate change, where we have an opportunity to act now, by drastically reducing emissions of greenhouse gases, and hopefully call it another case of successful application in the future.

The precautionary principle has been criticized for being vague and loose and, if taken to the extreme, an excuse for inaction. Some of these criticisms are valid, but they are directed at particular applications and interpretations rather than at the principle itself, and do not invalidate it. Note that one of the most powerful aspects of the precautionary principle is how it is inextricably linked to ethics, guiding us to think about how our actions and decisions impact Nature and our fellow humans, both now and in the future. Experience has shown that, when used as guidance, in an open-minded and reasonable way, the precautionary principle makes a good starting point for any environmental discussion, and a solid ethical and practical ground for the legislation and practice of sustainable development. Let’s not forget it.

Triple bottom line

* originally published in my old blog “Tales of a Finite Planet”, on 2014-05-22

The pictures below show a sparse shanty town in Dock Sud, Avellaneda, a municipality just south of Buenos Aires, Argentina, in August 2006. The “water” course is Riachuelo, the channelized lower stretches of Matanza River, and a serious contender for the most polluted river on the planet.


Feral dogs, precarious houses and dumping sites for demolition rubble are visible. Less obvious are the heavy pollution of the river, the inhuman living conditions of the people and their unstable and often precarious income sources. Health problems due to pollution, criminality and low education levels are rampant. Note that the reason some trees do not have leaves in the pictures is not pollution, but the southern winter.

The river used to be a recreational space until the early 1930’s (told directly by one of my grandmothers), when people sought in its waters relief from the hot summers. By the 1940’s the river was already smelling bad from the chemical wastes, poured directly without treatment, from leather and other industries settled along its banks. I have never seen the river in any other state than this, and recall seeing bubbles coming up to the viscous black surface as early as the beginning of the 1980’s. I spent a great deal of the 1990’s commuting on trains passing over bridges above its waters and daydreaming on how the river could be cleaned and the area restored. The situation has not improved and today is at least as bad as ever. Plans to clean the river and restore its surroundings have been around, on and off, for over 30 years, but never has anything substantial been achieved.

I bring up this example as one of the clearest cases I know of catastrophic failure of all aspects of the triple bottom line: a sane natural environment, a sane economy, and a sane social environment. Decades of carelessness, low prioritization, incompetence and corruption have done their job in Riachuelo. The results are obvious. It seems it is about time to change the strategy.

Working with MODIS L1B from scratch. 4. topographic and illumination correction

* originally published in my old blog “Tales of Ice and Stone”, on 2014-03-05

In the last post of this series we will review the necessary steps for performing topographic and illumination correction of the MODIS images. Topographic and illumination corrections are important when we need to obtain the apparent surface reflectance as an end product. This is necessary when we want to identify materials on the Earth’s surface by deriving empirical spectral signatures, or to compare images taken at different times with different Sun and satellite positions and angles. By applying the corrections described here, we transform the satellite-derived reflectance into values that are (almost) directly comparable with those we would obtain with a field spectrometer. I wrote “almost” because we are not yet considering atmospheric correction, and also because of two further assumptions we will make below. Also note that topographic correction just compensates for differences in measured reflectance due to terrain, as if the surfaces were lying flat and measured with a nadir-looking instrument, but cannot recreate values for areas behind obstructions.

We will need the calibrated images (radiance top-of-atmosphere, TOA, see previous posts); information about the sun elevation (Sun Zenith) and direction (Sun Azimuth), both from the MOD03 metadata file; information on the solar irradiance on each band, from each MOD02 hdf file; and two auxiliary grids, covering exactly the same region as the images, and containing the slope and aspect maps of the region of interest. The last two grids are derivable from a digital elevation model using standard GIS procedures (see, for example, GRASS).

The basic relation we will consider here is:

\rho_i = \frac{\pi L_i^{TOA}}{E_i \cos\beta \cos\gamma}

where \rho_i is the corrected band reflectance, L_i^{TOA} is the radiance at top-of-atmosphere, E_i is the band-wise solar irradiance, \gamma is the sensor viewing angle, or sensor zenith, and \cos\beta is derived as follows:

\cos\beta = \cos\theta_n \cos\theta_s + \sin\theta_n \sin\theta_s \cos(\phi_n - \phi_s)

where \theta_n is the terrain slope, \theta_s is the solar zenith, \phi_n is the terrain aspect, and \phi_s is the solar azimuth.

For the sake of this example, we assume that the materials on the surface are Lambertian, that is, perfectly diffuse isotropic reflectors, spreading the energy homogeneously in all directions. Admittedly, this is not true for many important natural surfaces, but it works fairly well as a first approximation, and allows us to compensate for illumination effects by simply dividing the observed radiance by the cosine of the sensor zenith angle. Note that more realistic assumptions about the surface materials invariably involve very complicated additional calculations which, more often than not, rapidly push the problem to the edge of tractability. Here we will take the pragmatic approach of proposing a “good-enough” solution.
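Under the Lambertian assumption, Eq. 1 and the expression for \cos\beta above translate into a few lines of array arithmetic. Here is a Python/NumPy sketch of my own (not the grdmath script used in the study); the function and argument names are mine, and all angle grids are assumed to be in radians:

```python
import numpy as np

def surface_reflectance(L_toa, E_band, slope, aspect,
                        sun_zenith, sun_azimuth, sensor_zenith):
    """Topographic and illumination correction, Lambertian assumption.

    L_toa: TOA radiance grid; E_band: band-wise solar irradiance;
    all angle grids in radians. Implements Eq. 1 in the text.
    """
    # cos(beta): local solar incidence angle on the tilted surface
    cos_beta = (np.cos(slope) * np.cos(sun_zenith)
                + np.sin(slope) * np.sin(sun_zenith)
                * np.cos(aspect - sun_azimuth))
    rho = np.pi * L_toa / (E_band * cos_beta * np.cos(sensor_zenith))
    # self-shadowed pixels (cos_beta <= 0) are undefined; mask them
    return np.where(cos_beta > 0, rho, np.nan)
```

As a sanity check, flat terrain with the Sun at zenith and a nadir-looking sensor reduces the formula to \rho = \pi L / E.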

In this post we will use the following MODIS band 2 image of the Southern Patagonia Icefield as an example:


The image covers approximately 395 km in the North-South direction, and 119 km in the East-West direction.

We will need to extract the information on solar zenith and azimuth, as well as sensor zenith from the MOD03 files. Note that, as MODIS images cover a large geographical area, these angles cannot be considered constant over a scene, as we will sometimes assume for Landsat, for example. Therefore it is important to extract these values pixel-wise, as grids from the metadata file. We begin by reading the image and figuring out what is offered there:

read_sds_attributes -sds='SolarZenith,SolarAzimuth,SensorZenith' MOD03....

and we get:

SDS : SolarZenith
Attribute        Data Type           Value
units               STRING    degrees
valid_range         INT16     0, 18000
_FillValue          INT16     -32767
scale_factor        FLOAT64   0.010000
SDS : SolarAzimuth
Attribute        Data Type           Value
units               STRING    degrees
valid_range         INT16     -18000, 18000
_FillValue          INT16     -32767
scale_factor        FLOAT64   0.010000
SDS : SensorZenith
Attribute        Data Type           Value
units               STRING    degrees
valid_range         INT16     0, 18000
_FillValue          INT16     -32767
scale_factor        FLOAT64   0.010000

Note that values range from 0 to 18000 and, as the scale value of 0.01 indicates, are expressed in hundredths of degrees. This is important to consider when performing the calculations.
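As a small illustrative sketch (in Python here, with the fill value and scale factor taken from the attribute dump above; the function name is mine), the unscaling to radians, which the correction formulas need, would look like this:

```python
import numpy as np

FILL = -32767   # _FillValue from the attribute dump
SCALE = 0.01    # scale_factor: hundredths of degrees -> degrees

def angles_to_radians(raw):
    """Unscale a raw MOD03 angle grid to degrees and convert to
    radians, masking fill values as NaN."""
    degrees = np.where(raw == FILL, np.nan, raw * SCALE)
    return np.deg2rad(degrees)
```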

We then proceed to extract these grids by executing the usual swath2grid command, as explained in previous posts. Images with sensor and sun angles extracted from MOD03 are always 1 km pixel size, and will need to be resampled for processing with MOD02QM or MOD02HM. Here we have, from left to right: Solar Azimuth, Solar Zenith and Sensor Zenith for our region:


Note how values vary across the pictures from low (dark) to high (bright).

We then continue by deriving the terrain angles from a digital elevation model. We use an elevation model compiled using data from the Shuttle Radar Topography Mission (SRTM). The typical no-data “holes” in SRTM data were filled by creating a new grid through interpolation of the original values (as described in my recent paper “Hypsometry and sensitivity of the mass balance to changes in equilibrium-line altitude: the case of the Southern Patagonia Icefield“). The elevation, and the derived aspect and slope grids, look like this:


Before finishing, we must obtain the solar irradiance at each band (E_i in Eq. 1). These values can be exported using gdalinfo:

gdalinfo MOD02*

What we get is something like:

Solar Irradiance on RSB Detectors over pi=511.46, 511.46, 511.46, 511.46, 511.46, 511.46, 511.46, 511.588, 511.62, 511.588, 511.588, 511.588, 511.556, 511.556, 511.556, 511.492, 511.492, 511.556, 511.588, 511.588, 511.588, 511.62, 511.62, 511.62, 511.62, 511.62, 511.556, 511.492, 511.46, 511.365, 511.301, 511.206, 511.11, 510.983, 510.824, 510.378, 509.614, 509.614, 509.614, 509.614, 315.763, 315.763, 315.763, 315.763, 315.763, 315.763, 315.763, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.763, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.763, 315.763, 315.763, 315.732, 315.732, ...

which is a list of the solar irradiance values for each of the 40 detectors, divided by pi (that is, the inverse of the \pi / E_i factor on the right hand side of Eq. 1). Now, in our simple example we will restrict ourselves to a simple average of the values for each band, but note that rigorous calibration requires detailed correction for each detector. I do this by parsing the “global attributes” text file with Perl and then averaging sequentially over 40 values using the Perl Data Language.
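The averaging itself is trivial; here is a sketch in Python of the same idea (names are mine; it assumes the per-detector values have already been parsed from the attribute dump into a flat list, 40 detectors per band):

```python
import numpy as np

def band_irradiance(values, detectors_per_band=40):
    """Average the per-detector 'E over pi' values into one value
    per band. values: the flat list parsed from the attribute dump."""
    v = np.asarray(values, dtype=float)
    return v.reshape(-1, detectors_per_band).mean(axis=1)
```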

We are now ready to combine these grids. As described in the previous post, I use the NetCDF calculator grdmath, which comes with GMT. The resulting topographically corrected surface reflectance image looks like this:


Note how the image looks “flat” now, in comparison with the uncorrected image. This is a 32-bit float image, with reflectance values from 0 to 1. Errors in the procedures and values might cause some pixels to have values larger than 1 or smaller than 0. These cases need to be analyzed with care. Steep slopes and ridges are particularly difficult subjects, especially when covered with snow, as in my example. Always have a second look at your data and procedures!

The procedures that we have reviewed here are appropriate to make a quick and dirty topographic and illumination correction of MODIS images. There are, nevertheless, a number of delicate topics that we have not covered and that must be seriously studied in detail when performing rigorous correction. First, as stated above, we have neglected to correct for atmospheric effects; second, we have blindly assumed a Lambertian surface, and third, we have neglected the individual response of each of the 40 detectors in MODIS, averaging their registered solar irradiance. We also did not quantify errors in reflectance. So, in conclusion, although our results look fairly good to the eye and the pixel values seem very reasonable, remember that these procedures are limited.

All in all, I hope readers of these posts will find them useful and will find themselves motivated to play around with MODIS and maybe bring their analysis to more rigorous levels. Good luck!

Working with MODIS L1B from scratch. 3. calibration, conversion to radiance at top-of-atmosphere

* originally published in my old blog Tales of Ice and Stone, on 2014-02-20

In the third part of this series we will calibrate the images to convert the pixel values to spectral radiance.

The spectral radiance is a physical quantity expressing the amount of energy being radiated (in our case, back to space), and is measured in Watts (Joules per second, energy flux) per square meter (area) per micrometer (wavelength) per steradian (solid angle).

The values that we will obtain here will not yet be corrected for illumination, topographic or atmospheric effects, and therefore correspond to what is measured by the satellite at the altitude of its orbit, as opposed to values on the ground that you can measure with a field spectrometer. The values we will obtain here are thus called “radiance at top-of-atmosphere”, usually abbreviated “TOA”.

The conversion is based on a linear relationship between data counts (pixel values) and the radiance measured by the sensor (for the gory details have a look at the manuals in NASA’s MODIS Characterization Support Team). The basic relationship is:

radiance = (pixel_value - offset) * scale

and has to be applied band-wise. We begin by extracting the scales and offsets for bands 3-7, that is, the 500 m bands (recall that we use the bands resampled to 1 km). We use the program read_sds_attributes that comes with the MODAT suite.

read_sds_attributes -sds='EV_500_Aggr1km_RefSB' MOD021KM.A2013187.1035.005.2013187194950.hdf

and we get:

SDS : EV_500_Aggr1km_RefSB
Attribute                 Data Type  Value
long_name                 STRING     Earth View 500M Aggregated 1km Reflective Solar Bands Scaled
units                     STRING     none
valid_range               UINT16     0, 32767
_FillValue                UINT16     65535
band_names                STRING     3,4,5,6,7
radiance_scales           FLOAT32    0.035233, 0.024804, 0.005951, 0.002677, 0.000823
radiance_offsets          FLOAT32    -0.000000, -0.000000, -0.000000, -0.000000, -0.000000
radiance_units            STRING     Watts/m^2/micrometer/steradian

reflectance_scales        FLOAT32    0.000055, 0.000043, 0.000041, 0.000036, 0.000030
reflectance_offsets       FLOAT32    -0.000000, -0.000000, -0.000000, -0.000000, -0.000000
reflectance_units         STRING     none
corrected_counts_scales   FLOAT32    0.124973, 0.124973, 0.124973, 0.124973, 0.124973
corrected_counts_offsets  FLOAT32    -0.000000, -0.000000, -0.000000, -0.000000, -0.000000
corrected_counts_units    STRING     counts
Processing done !

The output lists the calibration scales (radiance_scales) and offsets (radiance_offsets) for each of the five resampled bands. The same is then done for bands 1 and 2, the 250 m bands:

read_sds_attributes -sds='EV_250_Aggr1km_RefSB' MOD021KM.A2013187.1035.005.2013187194950.hdf

SDS : EV_250_Aggr1km_RefSB
Attribute                 Data Type  Value
long_name                 STRING     Earth View 250M Aggregated 1km Reflective Solar Bands Scaled
units                     STRING     none
valid_range               UINT16     0, 32767
_FillValue                UINT16     65535
band_names                STRING     1,2
radiance_scales           FLOAT32    0.026587, 0.009931
radiance_offsets          FLOAT32    -0.000000, -0.000000
radiance_units            STRING     Watts/m^2/micrometer/steradian

reflectance_scales        FLOAT32    0.000054, 0.000033
reflectance_offsets       FLOAT32    -0.000000, -0.000000
reflectance_units         STRING     none
corrected_counts_scales   FLOAT32    0.124973, 0.124973
corrected_counts_offsets  FLOAT32    -0.000000, -0.000000
corrected_counts_units    STRING     counts
Processing done !
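With these attributes in hand, the band-wise linear conversion is straightforward. As a worked example (the pixel count of 10000 is an arbitrary value of my own, the scale and offset are the band 1 values reported above):

```python
scale = 0.026587   # radiance_scales for band 1, from read_sds_attributes
offset = 0.0       # radiance_offsets for band 1 (reported as -0.000000)

pixel_value = 10000  # an arbitrary example count
radiance = (pixel_value - offset) * scale
print(radiance)  # ~265.87 Watts/m^2/micrometer/steradian
```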

We can then proceed to extract these values and recalculate the images. I find it convenient to do this via Perl scripts: execute the read_sds_attributes command, scan the output text file, and extract the values using regular expressions. I then perform the calculations by converting the GeoTIFFs to NetCDF grids and calling the grdmath tool that comes with the Generic Mapping Tools:

grdmath file_raw.grd offset SUB scale MUL = file_corrected.grd

* Note: grdmath is a Reverse Polish Notation grid calculator.

and lastly call the gdal_translate utility program from GDAL to produce a new GeoTIFF:

gdal_translate file.grd file.tif
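The same scan-and-apply logic can of course be written in languages other than Perl. A hedged Python sketch (the abridged attribute text, the regular expression, and the function names are my own assumptions, modeled on the read_sds_attributes output shown above):

```python
import re

# Abridged sample of the attribute dump produced by read_sds_attributes
attr_text = """
radiance_scales FLOAT32 0.026587, 0.009931
radiance_offsets FLOAT32 -0.000000, -0.000000
"""

def parse_values(name, text):
    """Extract the comma-separated FLOAT32 values of a named attribute."""
    m = re.search(rf"{name}\s+FLOAT32\s+([-\d.,\s]+)", text)
    return [float(v) for v in m.group(1).split(",")]

scales = parse_values("radiance_scales", attr_text)
offsets = parse_values("radiance_offsets", attr_text)

def to_radiance(pixel_value, band_index):
    """Band-wise linear calibration: (DN - offset) * scale."""
    return (pixel_value - offsets[band_index]) * scales[band_index]

rad = to_radiance(10000, 0)  # example count through the band 1 calibration
```

In a real script, to_radiance would be applied to whole pixel arrays rather than single values, which is exactly what the grdmath step above does grid-wise.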

Since the calibration is a linear conversion, the results look pretty much the same as before on the screen. The only difference is that our pixel values are now geophysical quantities, as seen in the following example:


The images depict the uncorrected pixel counts (left) and radiance at top-of-atmosphere (right) for the same region, the southernmost portion of the Southern Patagonia Icefield, as seen in MODIS band 2 (841 – 876 nm). The mouse pointer is located over (roughly) the same place over the ablation area of Tyndall Glacier. The pixel values can be read on the lowermost part of the image.

We have here reviewed how to derive TOA radiance values from uncorrected pixel counts in MODIS L1B images. We did not discuss how to evaluate errors, or how to take into account uncertainties inherent to the sensors; maybe we can cover that in a future post. In the next and last post, we will see how to perform illumination and topographic correction to achieve a product that is suitable for multitemporal comparison and analysis.