Urban automobility, a dead paradigm that we refuse to abandon.

Traffic jam in São Paulo, Brazil. Source: WikiCommons. Photo: Mario Roberto Duran Ortiz Mariordo.

A visit to any of the world’s great cities will convince anyone that cars are no longer useful for travelling around dense urban areas. Yet we seem to insist on our wheeled, smoking addiction against all evidence, building and buying more cars and adapting our cities to them, decreasing our overall quality of life in the process. Why?

The automobile has been an icon of modern industrial society for about a century. Almost every TV commercial shows us shiny, large new cars speeding across empty roads in wide-open natural spaces, or through some old and elegant (and curiously empty!) European city. These cars are often driven by handsome, affluent middle-aged men accompanied by gorgeous, sexy women. Yet this is not how reality will look for most of us after we buy that car! This says a lot, and not only about the gender perspective of these commercials. As powerful status symbols, larger and more sophisticated cars are usually seen as proxies for our personal worth and success, at least according to the hedonistic values of our material society.

We seem to hold the religious belief that the personal car gives us freedom and well-being, and the speed and flexible mobility deemed necessary for a modern urban life. Yet this is far from the truth. Traffic congestion and its attendant visual, noise and chemical pollution are a major source of avoidable health issues, such as stress, respiratory and skin problems [1]. Another interesting argument, recently put forward by G. Backhaus [2], states that the rise of atmospheric CO2 concentrations and its concomitant chain of dangerous planetary changes (global warming, ocean acidification, sea-level rise, disruption of agricultural cycles, etc.) can be seen as just a symptom of the prevalent worldview, one in which massive fossil-fuel burning is central and of which car addiction is just a part.

Getting rid of this is not easy. We can see it a bit more clearly from the systems thinking perspective: over several decades we have built a massive system for urban traffic and a massive industry to sustain it (if the car industry were a country, it would have the world’s 6th largest GDP [3]). We have become addicted to its functionality and supposed benefits, such as employment and regional development. Some cities in the world are literally built for the car, with the idea that people will work downtown while living in far-off suburbs and making their purchases in far-off malls. We have invested astronomical amounts of effort and money in this global idea, and some people have become astonishingly wealthy in the process. This system is almost self-sustaining, at a cost that we are not paying right now (environmental disruption at planetary scale). It is not surprising, then, that we refuse to see its disadvantages, and many amongst us are quick to demean alternative forms of urban mobility with the excuse of the potential cost involved: an almost archetypical sunk-cost fallacy.

Cars still have reasons to exist, even in urban areas, as taxis and emergency vehicles, and particularly in sparsely populated areas where public transport might be inadequate. For the dense inner cores of cities, however, electrically driven public transportation is a far superior alternative [*]. A cleverly and densely laid out system of tramways, subways and trolleybuses can effectively and cleanly handle the necessary mobility of millions of people, not to mention that people can be encouraged to walk or cycle. Taking cars off the streets frees up space for living and meeting, which has enormous positive social side-effects, because people get more opportunities to meet and knit the social mesh, something that is often lost in modern megacities. We have some good examples at hand: Vauban in Freiburg (Germany), Pontevedra (Spain) or Hydra (Greece), where parts of these cities have been closed to car traffic and have since been reclaimed by people as living, playing and meeting space, positively contributing to local social well-being and democracy.

Living in cities does not need to be torture, but in many places around the world people still insist on making it one. The situation seems scarily similar to that of the ancient inhabitants of Easter Island, or of the Norse in Greenland: eroding the basic tenets of our existence to mindlessly serve some bogus religious purpose that is neither uplifting nor necessary. In the face of the coming mega-urbanity of the 21st century, every move to make cities livable and human is useful. Time to change our frame of mind. Urban automobility is dead. We had better bury it and move on; the effort will be worthwhile.

[1] Armah, Frederick A.; Yawson, David O.; Pappoe, Alex A. N. M. 2010. “A Systems Dynamics Approach to Explore Traffic Congestion and Air Pollution Link in the City of Accra, Ghana.” Sustainability 2, no. 1: 252-265. http://www.mdpi.com/2071-1050/2/1/252

[2] Backhaus, Gary. 2009. “Automobility: Global Warming as Symptomatology.” Sustainability 1, no. 2: 187-208. http://www.mdpi.com/2071-1050/1/2/187

[3] De Vries, Bert JM. Sustainability science. Cambridge University Press, 2012.

Overshoot and collapse

On 19 August 2014, just a few weeks ago, we “celebrated” this year’s Earth Overshoot Day, the day by which we have already consumed the resources generated and to be generated by the Earth during the whole year. This week, a new report was released showing further evidence that our global society has been progressing through the dreaded scenario 1 of “The Limits to Growth” report from 1972 [1], known as “business as usual” but better described as “overshoot and collapse”. We are living in interesting times. We should feel blessed …

Overshoot
Mid-August. We will spend the coming four months eroding soils, overusing fertilizers, and accumulating stocks of pollution at rates much higher than the planet can recycle through its natural dynamics, thus harming its capacity to regenerate, transform and sustain life. This is equivalent to having spent our monthly salary by day 20 and borrowing for the remaining 10 days. In the global case, the loan will have to be paid by our children and grandchildren, not only in money but in far more important assets, such as lower quality of food and water, degraded health and an unpleasant Nature, amidst a potential social and environmental disarray of unheard-of magnitude. If you have been through revolutions, uprisings, or any other form of local social collapse, you can easily imagine that similar events at global scale won’t be amusing.

The Earth Overshoot Day is a measure of our abuse of the planet, calculated by the Global Footprint Network. As there are more and more people on Earth, using more and more resources, the Earth Overshoot Day has fallen earlier and earlier over time:

ScientiaPlusConscientia_overshoot

Plot of Earth Overshoot Day over time. Note that calculations might have a non-negligible error bound. In any case, the trend is not flattering, and clearly shows our inability to adapt our living to the conditions of the Earth.

Collapse
A recent report by G.M. Turner at the Melbourne Sustainable Society Institute, University of Melbourne, Australia [2] is the latest comprehensive view of a process that has been previously shown by many other studies (see a long list of publications here): that the evolution path of our global society vastly disregards all criteria of sustainable development, using the available renewable resources at rates higher than their natural regeneration (e.g. fisheries, forests), using non-renewable resources at rates higher than the development of any substitute (e.g. soils, oil, gas), and releasing pollutants at rates higher than Nature can reabsorb and transform them (e.g. carbon dioxide, methane, PCB). The “business-as-usual” scenario, created as a realistic representation of a society like ours, obsessed with greed, consumption and the illusion of infinite growth, describes in its many variants an unsustainable path that invariably leads to an uncomfortable future, characterized by a much lower quality of life, drastic reduction in life expectancy and food availability, and an unpleasant Nature:

world3simulator_scenario1_annotated

Screen capture from the World3 simulator (http://world3simulator.org/) showing the projections for scenario 1 in “The Limits to Growth”. Note the dates on the horizontal axis. We are not far from the projected collapse.

Alternative futures
In an effort to paint a less traumatic picture of the future, we can finish these thoughts by noting that not everything is gloom and doom yet: we can still do something to avoid a global environmental catastrophe, although we are achingly running out of time. The very substantial changes we need require rethinking almost every aspect of our economy, our social structures and our relation to the environment, and learning to live under Nature’s conditions. The challenges ahead are enormous. Some countries and regions have already made pertinent and useful changes, but the great majority of the world still lags behind.

Increased pressure on governments and politicians is necessary to force them to stop babbling and act, enforcing international climate and environmental pacts. However, the best place to start is right where we are, by becoming agents of change ourselves [3], at our own personal and local scales of action: choosing to consume less, finding new uses for what we already have, recycling what cannot be reused, and exercising as much as possible the “power of the purse”, forcing businesses to either adapt or go bust. Unfortunately, we should not forget that collapse may still be the most likely scenario, and perhaps it is reasonable to start considering how to face it (some suggestions can be found here [4]).

References:

[1] Meadows, D., Randers, J., & Meadows, D. (2004). Limits to growth: the 30-year update. Chelsea Green Publishing.

[2] Turner, G. (2014) ‘Is Global Collapse Imminent?’, MSSI Research Paper No. 4, Melbourne Sustainable Society Institute, The University of Melbourne.

[3] Robertson, M. (2014) Sustainability. Principles and practice. Routledge.

[4] Maher, T. M., & Baum, S. D. (2013). Adaptation to and recovery from global catastrophe. Sustainability, 5(4), 1461-1479.

Mapping snowlines on temperate glaciers: the wonders of optimal thresholding

In this post I describe an interesting glaciological application of Otsu’s optimal thresholding, a digital image processing method with which we can segment a gray-level image into two or more classes by finding the best threshold level between them. I stumbled upon the method a few years ago while working on a study in which I needed an objective and reproducible way of segmenting satellite images of glaciers into accumulation and ablation areas. After much trial and error, and in spite of its simplicity, Otsu’s optimal thresholding outperformed all the other classification methods I tried, including sophisticated approaches such as ISODATA and spectral unmixing.

For the uninitiated, the problem is as follows: on a glacier, the snow line is the limit between snow (above) and ice (below). It is a dynamic feature whose altitude changes with the weather and the seasons. On temperate glaciers, those where the ice is everywhere at the pressure melting point, conventional glaciological wisdom tells us that the snow line at the end of the ablation season (usually the summer) can be considered a good approximation to the equilibrium line altitude, that is, the altitude at which the surface mass balance is zero. This approximation is not valid for subpolar glaciers, because superimposed ice develops directly below the firn field, effectively extending the accumulation area without appreciable optical contrast. The relevance of this question lies in the fact that glaciologists are usually interested in the relation between climate and glacier mass balance, and therefore mapping snowline altitudes at the end of the summer always has an intrinsic value. At least conceptually, the solution seems simple: find a cloud-free image taken at a suitable time of the year (end of summer, minimal seasonal snow cover) and just map the snow line on it. See, for example, the following subset of a satellite image of the Southern Patagonia Icefield:

South Patagonian Icefield, Viedma and Rio Túnel glaciers. Landsat 5 TM, 1997-03-17, path/row 231/094. The image is approx. 21 km across.

On Glaciar Viedma, the large one roughly at the center of the image, we can clearly see the obvious differences between the ice of the ablation area, to the south, and the snow and firn of the accumulation area, to the north. We probably feel at once that we could just draw a line along the limit and solve the problem. Reality is nevertheless more complicated, because the limit is gradual and placing a boundary will at some point become an arbitrary choice. If we had several interpreters working on this kind of problem we could in principle calculate some sort of average line, but having many humans paid to do this work is not usually within the budget of the typical glaciological project! This approach is also impractical if we need to map snow line altitudes over a large number of glaciers. Obviously, satellite images are suitable for this kind of task, and the problem of finding a methodology for mapping glaciers with them has challenged the ingenuity of many glaciologists for decades (see a very long list of articles about doing this with visible and near-infrared images here). An important aspect is that, in order to achieve some sort of consistency and to cover large areas in a reasonable time, automatic methods which are also objective and reproducible are mandatory. In practice the matter is not trivial, but I found that Otsu’s optimal thresholding is an extremely suitable method for this problem, and I think it can be advantageously used for mapping a large number of glaciers, regionally or even globally.

As noted above, Otsu’s method reduces a gray-level image to a binary one by finding the threshold level that maximizes a separability function. The original idea was published by Nobuyuki Otsu in 1979 (1), in what became a seminal paper in the field of digital image processing. The procedure is described in exquisite detail in the original paper, as well as in many books and all around the internet. The recipe I follow here is the one given by Gonzalez and Woods in their book “Digital image processing”, page 747 (2). I also give the code I used for all steps, written in PDL, the Perl Data Language.

As an example, I use here a subset of a MODIS image (band 2, near infrared) of Glaciar Viedma, part of the study mentioned above (3). We will need a gray-level image of a glacier where the surroundings have been suitably clipped out using a glacier outline from an inventory. This is important and necessary in order to find the threshold level between ice and snow/firn, and not the one between glacier and rock, which is usually more pronounced in pixel values:

fig_02_viedma_modis-b2

Glaciar Viedma. Subset of band 2 of an average of 7 MODIS images.
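
The clipping itself can be done, for instance, with GDAL. This is only a minimal sketch, assuming the outline is available as a shapefile; all file names are hypothetical, and the result still needs the GeoTIFF-to-NetCDF conversion described in step 0 below:

# clip the image to the glacier outline; pixels outside become NaN (later BAD in PDL)
gdalwarp -ot Float32 -cutline viedma_outline.shp -crop_to_cutline -dstnodata nan \
         modis_b2.tif viedma_clipped.tif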

I chose Glaciar Viedma because its tongue is covered by volcanic ash and debris, which makes it a challenge for image classification. Ideally, the histogram of a glacier in this spectral band should look bimodal: one mode for ice and the other for snow. Here, instead, we have an additional mode for the debris, at a lower level than the ice:

Glaciar Viedma. Image histogram.

The arrow shows the position of the optimal threshold (anticipating the results!). We see that Otsu’s method performed reasonably well even in this difficult case. Simpler glaciers give even better results.

The procedure we followed to find this was:

0. We begin by loading the image into a piddle, a PDL matrix. To do this, we first convert the file from the original GeoTIFF format to NetCDF using GDAL, and call it image.grd. We then read the file directly into PDL using the PDL::NetCDF module:
use PDL;
use PDL::NetCDF;

# read the file
my $image = PDL::NetCDF->new("image.grd", {REVERSE_DIMS => 1});
# load the z dimension (the data) into a piddle
my $pdl_image = $image->get('z');
# convert eventual NaN to BAD values
$pdl_image->inplace->setnantobad;
# clump the two dimensions of the image into one long vector
my $pdl_clump = $pdl_image->clump(2);
# compute the histogram using hist
my ($vb, $nb) = hist $pdl_clump;
# and convolve a bit (using conv1d) to smooth out the pesky noisy histogram
$nb .= conv1d $nb, ones(11);

We then start with Otsu’s method proper:

1. compute the normalized histogram of the input image (in the normalized histogram, the frequency of every level is expressed as an estimate of the probability):
my $bnh = $nb/sumover($nb);

2. compute the cumulative sums of the histogram (I use here a subroutine called “cumhisto”, which you can find in the appendix; PDL, however, has a built-in method for this called cumusumover(). Thanks to Chris Marshall for calling my attention to that!):
my $bcsum = cumhisto($bnh);
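
For reference, the built-in function mentioned above can be used as a drop-in alternative to the appendix subroutine:

# alternative to the line above, using PDL's built-in cumulative sum
my $bcsum = cumusumover($bnh);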

3. compute the cumulative means (that is, for each level, the mean of the histogram up to that level):
my $bcmea = cumhisto($vb*$bnh);

4. compute the global intensity mean:
my $bintm = sumover($vb*$bnh);

5. compute the between-class variance, \sigma^{2}_B(k) , using:
\sigma^{2}_B(k) = \frac{(m_G P_1(k) - m(k))^{2}}{P_1(k) (1 - P_1(k))}

where m_G is the global intensity mean (step 4), P_1(k) is the probability of class 1, i.e. the cumulative sum of the normalized histogram up to level k (step 2), and m(k) is the cumulative mean up to level k (step 3).
my $bsigmasq = (($bintm*$bcsum)-$bcmea)**2/($bcsum*(1-$bcsum));

6. and then we obtain the optimal threshold as the value k^{*} that maximizes \sigma^{2}_B(k) :
my $botsuthr = $vb->index(maximum_ind($bsigmasq));

7. if we want, we can also compute the separability measure \theta^{*} , using:
\theta(k) = \frac{\sigma^{2}_B(k)}{\sigma^{2}_G}

where \sigma^{2}_G is the global variance:
my $bglobvar = sumover(($vb - $bintm)**2 * $bnh);

and then the separability at the optimal threshold is:
my $btheta = $bsigmasq->index(maximum_ind($bsigmasq)) / $bglobvar;

Once we have the value of the threshold, we can simply generate a binary image by setting values below and above the threshold to 0 and 1, respectively. Our segmented glacier will then look like this:

Glaciar Viedma. Segmented into accumulation and ablation areas using Otsu’s optimal thresholding method.
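
In PDL, the thresholding itself is a one-liner. A minimal sketch, reusing the variables defined above:

# 1 where the pixel is at or above the threshold (snow/firn), 0 where it is below (ice);
# pixels outside the glacier outline remain BAD
my $binary = ($pdl_image >= $botsuthr);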

The method is computationally very efficient and can be made to run very fast. It is extremely effective, with very few caveats. Obviously, images should be cloud-free. The only problems I experienced were caused by shadows, or by faulty outlines that left some rock or a sufficiently large nunatak inside the boundary. Also, computing the AAR (the accumulation-area ratio, accumulation area / total glacier area) using different bands gives slightly different results, which is not a problem of the method but a manifestation of the difficulty of the problem.
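
As a hedged sketch of that last calculation, the AAR can be obtained directly from the binary image of the previous step, since PDL’s reducing functions skip BAD values:

# accumulation pixels / glacier pixels (sum skips BAD values, ngood counts the non-BAD pixels)
my $aar = $binary->sum / $binary->ngood;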

All in all, Otsu’s optimal thresholding appears to me as an old gem that can be conveniently implemented on larger scales. For example, it would not be unthinkable to use the GLIMS or Randolph global glacier inventories, together with end-of-summer images over glaciers around the world, to systematically map AAR at a global scale on a routine basis. This could be done using the procedure described here at a fairly reasonable computational cost. Cool!

References

(1) Otsu, N (1979). “Threshold selection method from gray-level histograms”. IEEE Trans. Syst. Man Cybern. 9.1, 62–66. issn: 0018-9472.

(2) Gonzalez, R.C. and R.E. Woods (2008). “Digital image processing”. Pearson/Prentice Hall, 954 pp.

(3) De Angelis, H. “Hypsometry and sensitivity of the mass balance to changes in equilibrium-line altitude: the case of the Southern Patagonia Icefield.” Journal of Glaciology 60, no. 219 (2014): 14-28.

Appendix

Cumhisto, a subroutine in PDL for calculating the cumulative histogram:

use PDL;

sub cumhisto {
    # cumulative sum of a histogram: element i of the output is the sum of
    # input elements 0..i (equivalent to PDL's built-in cumusumover)
    my $lqs = pdl(@_);
    my $cum = $lqs->copy;
    my $h = 0;
    while ($h < nelem($lqs) - 1) {
        # shift the input right by $h+1 positions and accumulate
        my $dummy = zeroes(nelem($cum));
        $dummy->slice(($h + 1) . ':-1') .= $lqs->slice('0:-' . ($h + 2));
        $cum += $dummy;
        $h++;
    }
    return $cum;
}

Tell Me How?

Hernán De Angelis:

This is so powerful, so true.

Originally posted on The Quiet Sky:

Tell me how not to be pliant like a willow
When winds of deceit and subterfuge blow?

To stand correct and straight amidst
Overwhelming corruption and hate?

Tell me how to fight war and strife
With a white glove and no knife?

To cry a little more now and then
On seeing killing, riots and mayhem?

Tell me how to tolerate and bear
The mindless destruction of Life everywhere?

How to feel a wrenching pain
As hunters shoot down for profit and gain?

Tell me how not to turn my face away
From gnawing poverty and hunger every day?

Tell me how to experience the same
Of those tortured and maimed?

Tell me how to be a lotus in the lagoon
Despite muddy waters, in full bloom?

Tell me how to revere and hold sublime
All Life in its variety Divine?

Tell me how, O Sage, to pray,
Bringing Compassion…

Precautionary principle

* originally published in my old blog “Tales of a Finite Planet”, on 2014-06-24

I have been pondering writing about the precautionary principle for a while. I hesitated because of the vastness and intricacy of the matter, the apparently endless ramifications, and the somewhat controversial legal sides (see for yourself!). I finally decided to go ahead, while keeping these notes close to the simple and powerful core idea.

The roots of the precautionary principle are often traced back to the preamble of the 1987 UN Montreal Protocol on the ozone layer and, particularly, to the Rio Declaration on Environment and Development, made at the 1992 Summit in Rio de Janeiro. Its 15th principle reads:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

The definition used by Schellnhuber and Wenzel in “Earth system analysis: integrating science for sustainability” (p. 209), “securing of error-tolerant operating by judiciously staying away from the currently presumed civilizatory and ecological load limits”, is also illuminating, because it emphasizes the links to a system dynamics approach, as well as the idea that the inherent limits of our knowledge give rise to uncertainties in forecasts and analyses. This applies specifically, for example, to the perceived safety of pollutants, insecticides, etc., and calls for extreme caution in their use.

There is, however, nothing truly revolutionary in these words, as the basic idea is as old as humanity itself and can be found, in subtle ways, in the wisdom and common sense of many cultures (or grandmothers, for that matter). The point was brilliantly made by Nassim Nicholas Taleb in his book “Antifragile”, and goes basically like this: do you ingest a bunch of randomly chosen pills on the premise that there is no evidence that they will do you any harm? Well, no. The same applies to the environment!

The idea is closely connected with the concept of burden of proof: those claiming that there is no harm in using a certain product or method are the ones who have to provide the evidence that it is safe, not the other way around. With respect to the global environment, the lesson is that we cannot happily assume that we can do just anything on a massive scale without harming the planet, particularly if that assumption is based on a limited number of isolated and possibly lousy experiments (or none at all). Compare this with the vast experimental record of natural evolution: in her millions of years of trial and error, Nature has come up with far better solutions, compatible with the rest of the environment, than anything humans can devise. This point was also raised by Taleb, with a cute probabilistic interpretation (see here).

In recent decades we have witnessed a particularly successful application of the precautionary principle, namely the ban on chlorofluorocarbons (CFCs) and the ensuing recovery of the ozone layer. We now have another excellent example at hand with the European Commission’s recent ban on neonicotinoids, which has conceivably put a stop to colony collapse disorder in the Po river valley, Italy, without any observable negative impact on crop yields. The precautionary principle is also very relevant to the question of greenhouse gases and climate change, where we have an opportunity to act now, by drastically reducing emissions of greenhouse gases, and hopefully call it another case of successful application in the future.

The precautionary principle has been criticized for being vague and loose and, if taken to the extreme, an excuse for inaction. Some of these criticisms are valid, but they are directed at particular applications and interpretations rather than at the principle itself, and they do not invalidate it. Note that one of the most powerful aspects of the precautionary principle is how inextricably it is linked to ethics, guiding us to think about how our actions and decisions impact Nature and our fellow humans, both now and in the future. Experience has shown that, when used as guidance in an open-minded and reasonable way, the precautionary principle makes a good starting point for any environmental discussion, and a solid ethical and practical ground for the legislation and practice of sustainable development. Let’s not forget it.

Triple bottom line

* originally published in my old blog “Tales of a Finite Planet”, on 2014-05-22

The pictures below show a sparse shanty town in Dock Sud, Avellaneda, a municipality just south of Buenos Aires, Argentina, in August 2006. The “water” course is the Riachuelo, the channelized lower stretch of the Matanza River, and a serious contender for the title of most polluted river on the planet.

20140523_triple-baseline_red

Feral dogs, precarious houses and dumping sites for demolition rubble are visible. Less obvious are the heavy pollution of the river, the inhuman living conditions of the people and their unstable and often precarious income sources. Health problems due to pollution, criminality and low education levels are rampant. Note that the reason some trees do not have leaves in the pictures is not pollution, but the southern winter.

The river used to be a recreational space until the early 1930s (as told directly by one of my grandmothers), when people sought relief from the hot summers in its waters. By the 1940s the river was already smelling badly of the chemical waste poured, directly and without treatment, by the leather and other industries settled along its banks. I have never seen the river in any state other than this, and recall seeing bubbles coming up to the viscous black surface as early as the early 1980s. I spent a great deal of the 1990s commuting on trains passing over bridges above its waters, daydreaming about how the river could be cleaned and the area restored. The situation has not improved and today is at least as bad as ever. Plans to clean the river and restore its surroundings have been around, on and off, for over 30 years, but nothing substantial has ever been achieved.

I bring up this example as one of the clearest cases I know of catastrophic failure in all aspects of the triple bottom line: a sane natural environment, a sane economy and a sane social environment. Decades of carelessness, low prioritization, incompetence and corruption have done their job on the Riachuelo. The results are obvious. It seems that it is about time to change the strategy.

Working with MODIS L1B from scratch. 4. topographic and illumination correction

* originally published in my old blog “Tales of Ice and Stone”, on 2014-03-05

In this last post of the series we will review the steps necessary to perform topographic and illumination correction of MODIS images. Topographic and illumination corrections are important when we need to obtain the apparent surface reflectance as an end product. This is necessary when we want to identify materials on the Earth’s surface by deriving empirical spectral signatures, or to compare images taken at different times with different Sun and satellite positions and angles. By applying the corrections described here, we transform the satellite-derived reflectance into values that are (almost) directly comparable with those we would obtain with a field spectrometer. I wrote “almost” because we are not yet considering atmospheric correction, and also because of two further assumptions we will make below. Also note that topographic correction only compensates for differences in measured reflectance due to terrain, as if the surfaces were lying flat and measured with a nadir-looking instrument; it cannot recreate values for areas behind obstructions.

We will need: the calibrated images (top-of-atmosphere radiance, TOA, see the previous posts); information about the Sun elevation (Sun Zenith) and direction (Sun Azimuth), both from the MOD03 metadata file; the solar irradiance of each band, from each MOD02 hdf file; and two auxiliary grids, covering exactly the same region as the images and containing the slope and aspect maps of the region of interest. The last two grids can be derived from a digital elevation model using standard GIS procedures (see, for example, GRASS).
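
As a hedged sketch of that last step, in GRASS the two terrain grids can be derived from the elevation raster in a single call (raster names here are hypothetical). Keep in mind that GRASS reports aspect counterclockwise from east, so the result may need converting to the azimuth convention of the MOD03 angle grids:

# derive slope and aspect (in degrees) from the elevation model
r.slope.aspect elevation=dem slope=slope aspect=aspect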

The basic relation we will consider here is:

\rho_i = \frac{\pi L_i^{TOA}}{E_i cos\beta cos\gamma}

where \rho_i is the corrected band reflectance, L_i^{TOA} is the radiance at top-of-atmosphere, E_i is the bandwise solar irradiance, \gamma is the sensor viewing angle, or sensor zenith, and cos \beta is derived as follows:

cos \beta = cos \theta_n cos \theta_s + sin \theta_n sin \theta_s cos(\phi_n - \phi_s)

where \theta_n is the terrain slope, \theta_s is the solar zenith, \phi_n is the terrain aspect, and \phi_s is the solar azimuth.

For the sake of this example, we assume that the materials on the surface are Lambertian, that is, perfectly diffuse isotropic reflectors that spread the incoming energy homogeneously in all directions. Admittedly, this is not true for many important natural surfaces, but it works fairly well as a first approximation and allows us to compensate for illumination effects by simply dividing the observed radiance by the cosine of the sensor zenith angle. Consider that more realistic assumptions about the surface materials invariably involve very complicated additional calculations which, more often than not, rapidly push the problem to the edge of tractability. Here we will take the pragmatic approach of proposing a “good-enough” solution.

In this post we will use the following MODIS band 2 image of the Southern Patagonia Icefield as an example:

hps_0_radTOA_35

The image covers approximately 395 km in the North-South direction, and 119 km in the East-West direction.

We will need to extract the information on solar zenith and azimuth, as well as the sensor zenith, from the MOD03 files. Note that, as MODIS images cover a large geographical area, these angles cannot be considered constant over a scene, as we sometimes assume for Landsat, for example. It is therefore important to extract these values pixel-wise, as grids, from the metadata file. We begin by reading the file and figuring out what is on offer there:

read_sds_attributes -sds='SolarZenith,SolarAzimuth,SensorZenith' MOD03....

and we get:

======================================================================
SDS : SolarZenith
Attribute        Data Type           Value
-----------------+-----------+---------------------------------------
units               STRING    degrees
valid_range         INT16     0, 18000
_FillValue          INT16     -32767
scale_factor        FLOAT64   0.010000
======================================================================
SDS : SolarAzimuth
Attribute        Data Type           Value
-----------------+-----------+---------------------------------------
units               STRING    degrees
valid_range         INT16     -18000, 18000
_FillValue          INT16     -32767
scale_factor        FLOAT64   0.010000
======================================================================
SDS : SensorZenith
Attribute        Data Type           Value
-----------------+-----------+---------------------------------------
units               STRING    degrees
valid_range         INT16     0, 18000
_FillValue          INT16     -32767
scale_factor        FLOAT64   0.010000

Note that the values are stored as integers (0 to 18000 for the zenith angles, -18000 to 18000 for the azimuth) and, as the scale factor of 0.01 indicates, are expressed in hundredths of a degree. This is important to keep in mind when performing the calculations.

We then proceed to extract these grids by executing the usual swath2grid command, as explained in previous posts. The images with sensor and Sun angles extracted from MOD03 always have a 1 km pixel size, and will need to be resampled for processing with MOD02QM or MOD02HM. Here we have, from left to right, the solar azimuth, solar zenith and sensor zenith for our region:

hps_angles

Note how values vary across the pictures from low (dark) to high (bright).
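
If the 250 m (MOD02QM) or 500 m (MOD02HM) bands are used, the 1 km angle grids can be resampled, for instance, with GDAL. A hedged sketch with hypothetical file names, assuming a metric working projection:

# resample the 1 km solar zenith grid to 250 m with bilinear interpolation
gdalwarp -tr 250 250 -r bilinear SolarZenith_1km.tif SolarZenith_250m.tif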

We then continue by deriving the terrain angles from a digital elevation model. We use an elevation model compiled from Shuttle Radar Topography Mission (SRTM) data. The typical no-data “holes” in SRTM data were filled by creating a new grid through interpolation of the original values (as described in my recent paper “Hypsometry and sensitivity of the mass balance to changes in equilibrium-line altitude: the case of the Southern Patagonia Icefield“). The elevation grid and the derived aspect and slope grids look like this:

hps_terrain

Before finishing, we must obtain the solar irradiance for each band (E_i in Eq. 1). These values can be exported using gdalinfo:

gdalinfo MOD02*

What we get is something like:

Solar Irradiance on RSB Detectors over pi=511.46, 511.46, 511.46, 511.46, 511.46, 511.46, 511.46, 511.588, 511.62, 511.588, 511.588, 511.588, 511.556, 511.556, 511.556, 511.492, 511.492, 511.556, 511.588, 511.588, 511.588, 511.62, 511.62, 511.62, 511.62, 511.62, 511.556, 511.492, 511.46, 511.365, 511.301, 511.206, 511.11, 510.983, 510.824, 510.378, 509.614, 509.614, 509.614, 509.614, 315.763, 315.763, 315.763, 315.763, 315.763, 315.763, 315.763, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.763, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.795, 315.763, 315.763, 315.763, 315.732, 315.732, ...

which is a list of the solar irradiance values, divided by pi, for each of the 40 detectors of each band (that is, the reciprocal of the \pi / E_i factor appearing in Eq. 1). In our simple example we will restrict ourselves to a simple average of the values for each band, but note that rigorous calibration requires a detailed correction for each detector. I do this by parsing the “global attributes” text file with Perl and then averaging sequentially over the 40 values of each band using the Perl Data Language.
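
A minimal sketch of that parsing and averaging step, under the assumption that the comma-separated listing above has been saved to a plain text file containing only the numbers (the file name and the 40-detectors-per-band layout are assumptions based on the listing):

use PDL;

# read the comma-separated "Solar Irradiance on RSB Detectors over pi" values
open my $fh, '<', 'irradiance_over_pi.txt' or die $!;
my @vals = map { chomp; split /\s*,\s*/ } <$fh>;
close $fh;

# 40 detector values per band; average over the detectors of each band
my $e_over_pi = pdl(@vals);
$e_over_pi->reshape(40, scalar(@vals) / 40);
my $e_band = $e_over_pi->average;    # one E_i / pi value per band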

We are now ready to combine these grids. As described in the previous post, I use the NetCDF calculator grdmath, which comes with GMT; a sketch of the calls is given below. The resulting topographically corrected surface reflectance image looks like this:

hps_9_topocorr_35

Note how “flat” the image looks now, in comparison with the uncorrected one. This is a 32-bit floating-point image, with reflectance values from 0 to 1. Errors in the procedures and input values might cause some pixels to have values larger than 1 or smaller than 0; these cases need to be analyzed with care. Steep slopes and ridges are particularly difficult subjects, especially when covered with snow, as in my example. Always have a second look at your data and procedures!
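
For reference, here is a hedged sketch of how the grdmath calls for the combination might look. All grid names are hypothetical placeholders, the MOD03 angle grids are first rescaled from hundredths of degrees to degrees (scale factor 0.01, see above), and the irradiance-over-pi value is the band average obtained earlier:

# rescale the MOD03 angle grids to degrees
grdmath SolarZenith.grd 0.01 MUL = szen.grd
grdmath SolarAzimuth.grd 0.01 MUL = sazi.grd
grdmath SensorZenith.grd 0.01 MUL = vzen.grd

# cos(beta) = cos(slope) cos(szen) + sin(slope) sin(szen) cos(aspect - sazi)
grdmath slope.grd COSD szen.grd COSD MUL slope.grd SIND szen.grd SIND MUL \
        aspect.grd sazi.grd SUB COSD MUL ADD = cosbeta.grd

# rho = L_TOA / ( (E/pi) cos(beta) cos(gamma) ), i.e. Eq. 1 rewritten with E/pi
E2PI=315.78   # band-2 average of the irradiance-over-pi values listed above (assumed ordering)
grdmath radTOA_b2.grd $E2PI DIV cosbeta.grd DIV vzen.grd COSD DIV = reflectance_b2.grd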

The procedures we have reviewed here are appropriate for a quick-and-dirty topographic and illumination correction of MODIS images. There are, nevertheless, a number of delicate topics that we have not covered and that must be studied seriously and in detail when performing a rigorous correction. First, as stated above, we have neglected to correct for atmospheric effects; second, we have blindly assumed a Lambertian surface; and third, we have neglected the individual response of each of the 40 detectors in MODIS, averaging their registered solar irradiance. We also did not quantify the errors in reflectance. So, in conclusion, although our results look fairly good to the eye and the pixel values seem very reasonable, remember that these procedures are limited.

All in all, I hope readers of these posts will find them useful and will feel motivated to play around with MODIS, and maybe bring their analysis to more rigorous levels. Good luck!