Working with Landsat 8, a tutorial. 2: creating simple composites

In this second post of the Landsat 8 series, we have a look at how to create simple color composites with uncorrected data, a common and useful task that is often misunderstood by beginners. For this purpose, it is necessary to understand two concepts: how digital sensors register information and how color images are constructed.

The first key concept is that sensors on board satellites or airplanes perform radiometric measurements in specific ranges of wavelengths of the electromagnetic spectrum. Sensor elements capture a portion of the outgoing (from Earth) radiation in a given spectral window, which is then converted to digital numbers, stored and, together with the set of neighboring measurements, coded as regular image grids. Images acquired in specific windows of the spectrum are called ‘bands’. The spectral ranges in which they were acquired, measured in nanometers or micrometers, together with their nominal pixel size, are key properties of the sensors. Each band is then delivered as a separate image file. For Landsat 8 we have:

Designation Range (µm) Pixel size (m) Denomination/Use
Band 1 0.43 - 0.45 30 Coastal aerosol
Band 2 0.45 - 0.51 30 Blue
Band 3 0.53 - 0.59 30 Green
Band 4 0.64 - 0.67 30 Red
Band 5 0.85 - 0.88 30 Near Infrared (NIR)
Band 6 1.57 - 1.65 30 Short wave infrared (SWIR) 1
Band 7 2.11 - 2.29 30 Short wave infrared (SWIR) 2
Band 8 0.50 - 0.68 15 Panchromatic
Band 9 1.36 - 1.38 30 Cirrus
Band 10 10.60 - 11.19 100 * (30) Thermal Infrared (TIRS) 1
Band 11 11.50 - 12.51 100 * (30) Thermal Infrared (TIRS) 2

The second key concept is that color images can be formed by projecting three separate gray-level (black and white) images through three separate color channels: blue, green, red (in increasing order of wavelength). This is what television sets and projectors do. If the three images are exactly the same, the result will be another gray-level image, but if the images differ, the differences will show up as colored areas, depending on the relative magnitude of the levels at a given point.

In the specific case of satellite images, we can create a ‘true-color’ image by combining the three images acquired in the blue, green and red parts of the visible spectrum, sending each through the corresponding blue, green and red channel. For example, to create a true-color composite of the Bay of Naples, we put the image collected in the red part of the visible spectrum (band 4 in Landsat 8) in the red channel, the green image (band 3) in the green channel, and the blue image (band 2) in the blue channel. We can do this from the command line by simply using the GDAL tool gdal_merge.py:

> gdal_merge.py -separate -o naples_true_composite.tif LC81890322013210LGN01_B4.TIF LC81890322013210LGN01_B3.TIF LC81890322013210LGN01_B2.TIF
0...10...20...30...40...50...60...70...80...90...100 - done.

Our test image will look like this:

Bay of Naples, Italy

Landsat scenes are about 185×185 km in size. We might not need such a large area, in which case we can clip out a subset. For example, to extract the Vesuvius area from the command line, we can use the GDAL tool gdal_translate:

> gdal_translate -projwin 431000 4539000 471000 4489000 naples_true_composite.tif vesuvius.tif
Input file size is 7761, 7901
Computed -srcwin 2317 1460 1333 1667 from projected window.
0...10...20...30...40...50...60...70...80...90...100 - done.

where, with the option -projwin, we specify the window in projected coordinates: ulx (upper left x), uly (upper left y), lrx (lower right x), lry (lower right y). Note that we need to pick the corner coordinates ourselves, which can be done in any viewer, such as QGIS.

Our image now looks like this:

Vesuvius area, Campania, Italy.

which is an area of roughly 40×50 km around Vesuvius, matching the -projwin window above. Note the northern tip of Capri in the southwest corner.

Alternatively, we might create a ‘false color composite’ by using a different combination of bands in different channels. An extremely useful combination, particularly for vegetation studies, is to use the near infrared band (band 5), which is not visible to the human eye, in the red channel, the red image (band 4) in the green channel, and the green image (band 3) in the blue channel (remember that channels are usually specified in the order RGB). Using the same procedure as above, we do:

> gdal_merge.py -separate -o naples_false_composite.tif LC81890322013210LGN01_B5.TIF LC81890322013210LGN01_B4.TIF LC81890322013210LGN01_B3.TIF
0...10...20...30...40...50...60...70...80...90...100 - done.

and we get:

Bay of Naples, Italy, infrared false color composite.

Note the strong red color of the vegetation, as it is much more reflective in the infrared than in the visible wavelengths.

As a final point, we can check the properties of the Vesuvius image with gdalinfo:

> gdalinfo vesuvius.tif
Driver: GTiff/GeoTIFF
Files: vesuvius.tif
Size is 1333, 1667
Coordinate System is:
PROJCS["WGS 84 / UTM zone 33N",
GEOGCS["WGS 84",
DATUM["WGS_1984",
SPHEROID["WGS 84",6378137,298.257223563,
AUTHORITY["EPSG","7030"]],
AUTHORITY["EPSG","6326"]],
PRIMEM["Greenwich",0],
UNIT["degree",0.0174532925199433],
AUTHORITY["EPSG","4326"]],
PROJECTION["Transverse_Mercator"],
PARAMETER["latitude_of_origin",0],
PARAMETER["central_meridian",15],
PARAMETER["scale_factor",0.9996],
PARAMETER["false_easting",500000],
PARAMETER["false_northing",0],
UNIT["metre",1,
AUTHORITY["EPSG","9001"]],
AUTHORITY["EPSG","32633"]]
Origin = (430995.000000000000000,4539015.000000000000000)
Pixel Size = (30.000000000000000,-30.000000000000000)
Metadata:
AREA_OR_POINT=Area
Image Structure Metadata:
INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left ( 430995.000, 4539015.000) ( 14d10'46.25"E, 40d59'57.85"N)
Lower Left ( 430995.000, 4489005.000) ( 14d11' 6.14"E, 40d32'56.15"N)
Upper Right ( 470985.000, 4539015.000) ( 14d39'17.97"E, 41d 0' 6.51"N)
Lower Right ( 470985.000, 4489005.000) ( 14d39'26.33"E, 40d33' 4.67"N)
Center ( 450990.000, 4514010.000) ( 14d25' 9.19"E, 40d46'32.18"N)
Band 1 Block=1333x1 Type=UInt16, ColorInterp=Gray
Band 2 Block=1333x1 Type=UInt16, ColorInterp=Undefined
Band 3 Block=1333x1 Type=UInt16, ColorInterp=Undefined

We see that the image has a completely defined spatial reference system (SRS), with datum, coordinates referred to the WGS84 ellipsoid, and a cartographic projection: Universal Transverse Mercator (UTM) zone 33 north. We also see from the last lines that this image is coded in 16-bit unsigned integers. In the next post, we will see how to convert these digital numbers into geophysical measurements of radiance.

Working with Landsat 8, a tutorial. 1: getting the data

This and the following posts in this series are excerpts from a Landsat tutorial I wrote for a GIS course that I recently taught. Although the procedures are simple, they are far from obvious for many potential users, and I thought I might as well post the tutorial here, where it can benefit many more people. In this first part, as I did with the MODIS tutorial, we focus on selecting and downloading the data, and leave processing and correction for later posts.

The Landsat program, a partnership between NASA and USGS, has imaged the Earth’s surface with purpose-built instruments since 1972 (see http://landsat.gsfc.nasa.gov/), offering a unique glimpse into more than 40 years of global environmental change. Landsat 8, launched on 11 February 2013, is the latest addition to this successful program and includes many interesting refinements over its predecessors, like 12-bit radiometric quantization (coded into 16-bit images) and improved signal-to-noise ratio. Perhaps the most interesting feature of the Landsat program is that all data are free to use for everyone.

We start by opening the USGS Global Visualization viewer (Glovis):

The Glovis start page.

On the left side you will see a small map of the world that you can use to navigate to your spot of interest. Landsat images can also be located using a system known as WRS-2, in which the locations are specified in terms of two coordinates: path and row. All scenes with the same path and row depict the same region of the world. It is convenient to become familiar with the path/row coordinates of your area of interest.

You can choose to see several scenes at the same time, or just one, by choosing the appropriate scale in the resolution tab. You can specify a date and a maximum cloud cover. Note that cloud cover is estimated automatically and might not be accurate. Always check visually. By clicking on ‘Prev Scene’ or ‘Next Scene’ you move back or forward in time through the image database. By default Glovis opens in the ‘Landsat’ collection, which includes all Landsat satellites. If you need only Landsat 8 images, just select that specific data set in the ‘Collection’ tab.

In this tutorial we will use an image of the beautiful Bay of Naples, Italy, hotspot of history and summer home to a few Roman emperors, corresponding to path/row 189/32. We see that the Landsat 8 image of 29 July 2013 is completely cloud free:

Glovis showing the cloud-free Landsat 8 scene of 29 July 2013 over the Bay of Naples (path/row 189/32).

Once we locate the image we desire, we check that it is available for download by looking for a red ‘Downloadable’ sign at the upper left corner. If it is there, add the scene to your list by pressing the ‘Add’ button (below, left). If it is not available for download you may still add it, but instead of downloading you will request the image for processing, and it will become available later, normally after a day or two.

We then proceed to the download page by selecting the ‘Send to cart’ button. This can seem confusing, because the data are said to be free. What we do in reality is purchase the images at a cost of zero dollars (the site is also used for other products that one has to pay for, e.g. ASTER). After sending to cart, another tab opens with the EarthExplorer site. Here you will need to register or, if you already have an account, sign in. You are then guided to a list with your selected data. If you choose ‘Bulk download’ you will get an immense dataset, not all of which will be useful. I advise clicking the download button to the right, the one with an icon depicting a green arrow over a hard drive, and then selecting the last option, ‘Level 1 GeoTIFF Data Product’. As you can see, file sizes can be very large, and will be even larger on your computer because these are compressed files. Click ‘Download’:

The EarthExplorer download options.

You will get a big compressed ‘tar.gz’ file, in our case LC81890322013210LGN01.tar.gz. It takes a while to download. If you are on Windows, you can extract this format using a program such as 7-Zip. On Linux, any archive manager will do, or just execute the command:

tar -xvf LC81890322013210LGN01.tar.gz

the output will look like this:

LC81890322013210LGN01_B1.TIF
LC81890322013210LGN01_B2.TIF
LC81890322013210LGN01_B3.TIF
LC81890322013210LGN01_B4.TIF
LC81890322013210LGN01_B5.TIF
LC81890322013210LGN01_B6.TIF
LC81890322013210LGN01_B7.TIF
LC81890322013210LGN01_B8.TIF
LC81890322013210LGN01_B9.TIF
LC81890322013210LGN01_B10.TIF
LC81890322013210LGN01_B11.TIF
LC81890322013210LGN01_BQA.TIF
LC81890322013210LGN01_MTL.txt

Here we have all the individual band image files (*.TIF), the quality control image (*_BQA.TIF), and the metadata text file, with extension *_MTL.txt.
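
The MTL file is a plain text file of KEY = VALUE pairs organized in GROUP blocks, and it contains the calibration coefficients we will need later for the radiometric conversion. A quick way to peek at a couple of fields from the command line (field names assumed here from typical Landsat 8 metadata) is:

> grep -E "SUN_ELEVATION|CLOUD_COVER" LC81890322013210LGN01_MTL.txt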

The details of the individual bands can be found here. Pay special attention to the fact that Landsat 8 band numbers do not correspond to those of the previous satellites, which can be confusing. A useful comparison with Landsat 7 can be found here.

In the next post, we will go through the steps to convert these separate files into color composites and transform them to top-of-atmosphere radiance values. Happy downloading!

Urban automobility, a dead paradigm that we refuse to abandon.

Traffic jam in São Paulo, Brazil. Source: WikiCommons. Photo: Mario Roberto Duran Ortiz Mariordo.

A visit to any of the world’s great cities will convince anyone that cars are no longer useful for travelling around dense urban areas. Yet we seem to persist in our wheeled, smoking addiction against all evidence, building and buying more cars and adapting our cities to them, decreasing our overall quality of life in the process. Why?

The automobile has been an icon of modern industrial society for about a century. Almost every TV commercial will show us shiny large new cars speeding across empty roads in wide open natural spaces, or through some old and elegant (and curiously empty!) European city. These cars are often driven by handsome, affluent middle-aged men accompanied by gorgeous, sexy women. Yet this is not how reality will look for most of us after we buy that car! This says a lot, and not only about the gender perspective of these commercials. As powerful status symbols, larger and more sophisticated cars are usually seen as proxies for our personal worth and success, at least according to the hedonistic values of our material society.

We seem to hold the religious belief that the personal car gives us freedom and well-being, and the speed and flexible mobility deemed necessary for a modern urban life. Yet this is far from the truth. Traffic congestion and its attendant visual, noise and chemical pollution are a major source of avoidable health issues, such as stress, respiratory and skin problems [1]. Another interesting argument, recently put forward by G. Backhaus [2], states that the rise of atmospheric CO2 concentrations and its concomitant chain of dangerous planetary changes (global warming, ocean acidification, sea-level rise, disruption of agricultural cycles, etc.) can be seen as just a symptom of the prevalent worldview, one in which massive fossil-fuel burning is central, and of which car addiction is just a part.

Getting rid of this is not easy. We can see it a bit more clearly from the systems thinking perspective: we have over several decades built a massive system for urban traffic and a massive industry to sustain it (if the car industry were a country, it would have the world’s 6th largest GDP [3]). We have become addicted to its functionality and supposed benefits, such as employment and regional development. Some cities in the world are, in effect, built for the car, with the idea that people will work downtown while living in far-off suburbs and shopping in far-off malls. We have invested astronomical amounts of effort and money in this global idea, and some people have become astonishingly wealthy in the process. This system is almost self-sustaining, at a cost that we are not paying right now (environmental disruption at planetary scale, soon upon us). It is not surprising, then, that we refuse to see its disadvantages, and many amongst us are often quick to demean alternative forms of urban mobility, with the excuse of the potential cost involved: an almost archetypical sunk-cost fallacy.

Cars still have reasons to exist, even in urban areas, as taxis and emergency vehicles, but particularly in sparsely populated areas where public transport might be inadequate. For the dense inner cores of cities, the use of electrically driven public transportation is a far superior alternative [*]. A cleverly and densely laid system of tramways, subways and trolleybuses can effectively and cleanly deal with the necessary mobility of millions of people, not to mention that people can be encouraged to walk or cycle. Taking cars off the streets liberates space for living and meeting, which leads to enormous positive social side-effects because people start having more opportunities to meet and knit the social mesh, something that is often lost in modern megacities. We have some good examples of this at hand: Vauban, Freiburg (Germany), Pontevedra (Spain) or Hydra (Greece), where parts of these cities have been closed to car traffic and have since been reclaimed by people as living, playing and meeting space, positively contributing to local social well-being and democracy.

Living in cities does not need to be a torture, but in many places around the world we still insist on making it one. The situation seems scarily similar to that of the ancient inhabitants of Easter Island, or the Norse in Greenland: eroding the basic tenets of our existence to mindlessly serve some bogus religious purpose that is neither uplifting nor necessary. In the face of the coming mega-urbanity of the 21st century, every move to make cities livable and human is useful. Time to change our frame of mind. Urban automobility is dead. We had better bury it and move on; the effort will be worthwhile.

[1] Armah, Frederick A.; Yawson, David O.; Pappoe, Alex A. N. M. 2010. “A Systems Dynamics Approach to Explore Traffic Congestion and Air Pollution Link in the City of Accra, Ghana.” Sustainability 2, no. 1: 252-265. http://www.mdpi.com/2071-1050/2/1/252

[2] Backhaus, Gary. 2009. “Automobility: Global Warming as Symptomatology.” Sustainability 1, no. 2: 187-208. http://www.mdpi.com/2071-1050/1/2/187

[3] De Vries, Bert JM. Sustainability science. Cambridge University Press, 2012.

Overshoot and collapse

On 19 August 2014, just a few weeks ago, we “celebrated” this year’s Earth Overshoot Day, the day by which we have already consumed the resources that the Earth generates during the whole year. This week, a new report was released showing further evidence that our global society has been progressing through the dreaded scenario 1 in “The Limits to Growth” report from 1972 [1], known as “business as usual” but better described as “overshoot and collapse”. We are living in interesting times. We should feel blessed …

Overshoot
Mid August. We will spend the coming four months eroding soils, overusing fertilizers, and accumulating stocks of pollution at rates much higher than the planet can recycle through its natural dynamics, thus harming its capacity to regenerate, transform and sustain life. This is equivalent to having spent our monthly salary by day 20, borrowing for the remaining 10 days. In the global case the loan will have to be paid by our children and grandchildren, not only in money, but in far more important assets such as lower quality of food and water, degraded health and an unpleasant Nature, amidst a potential social and environmental disarray of unheard of magnitude. If you have been through revolutions, uprisings, or any other form of local social collapse, you can easily imagine that similar events at global scale won’t be amusing.

The Earth Overshoot Day is a measure of our abuse of the planet, calculated by the Global Footprint Network. As there are more and more people on Earth, using more and more resources, the Earth Overshoot Day has fallen earlier and earlier over time:

Plot of Earth Overshoot Day over time. Note that calculations might have a non-negligible error bound. In any case, the trend is not flattering, and clearly shows our inability to adapt our living to the conditions of the Earth.
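
For reference, the calculation behind the date is simple; as described by the Global Footprint Network, it is essentially

\text{Earth Overshoot Day} = \frac{\text{global biocapacity}}{\text{humanity's ecological footprint}} \times 365

with biocapacity and footprint expressed in the same units (global hectares).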

Collapse
A recent report by G.M. Turner at the Melbourne Sustainable Society Institute, University of Melbourne, Australia [2] is the latest comprehensive view of a process that has been previously shown by many other studies (see a long list of publications here): that the evolution path of our global society vastly disregards all criteria of sustainable development, using the available renewable resources at rates higher than their natural regeneration (e.g. fisheries, forests), using non-renewable resources at rates higher than the development of any substitute (e.g. soils, oil, gas), and releasing pollutants at rates higher than Nature can reabsorb and transform them (e.g. carbon dioxide, methane, PCB). The “business-as-usual” scenario, created as a realistic representation of a society like ours, obsessed with greed, consumption and the illusion of infinite growth, describes in its many variants an unsustainable path that invariably leads to an uncomfortable future, characterized by a much lower quality of life, drastic reduction in life expectancy and food availability, and an unpleasant Nature:

Screen capture from the World3 simulator (http://world3simulator.org/) showing the projections for scenario 1 in “The Limits to Growth”. Note the dates on the horizontal axis. We are not far from the projected collapse.

Alternative futures
In an effort to offer the image of a less traumatic future, we can finish these thoughts by noting that not everything is gloom-and-doom news yet, and that we can still do something to avoid a global environmental catastrophe, although we are achingly running out of time. The very substantial changes we need require rethinking almost every aspect of our economy, social structures and relation to the environment, learning to live under Nature’s conditions. The challenges ahead are enormous. Some countries and regions have already made pertinent and useful changes, but the great majority of the world still lags behind.

Increased pressure on governments and politicians is necessary to force them to stop babbling and act, enforcing international climate and environmental pacts. However, the best place to start is right where we are, becoming ourselves agents of change [3], at our own personal and local scales of action, by choosing to consume less, finding new uses of what we already have, and recycling what cannot be reused, and by exercising as much as possible the “power of the purse”, forcing businesses to either adapt or go bust. Unfortunately, we should not forget that collapse may still be the most likely scenario, and perhaps it is reasonable to start considering how to face it (some suggestions might be found here [4]).

References:

[1] Meadows, D., Randers, J., & Meadows, D. (2004). Limits to growth: the 30-year update. Chelsea Green Publishing.

[2] Turner, G. (2014) ‘Is Global Collapse Imminent?’, MSSI Research Paper No. 4, Melbourne Sustainable Society Institute, The University of Melbourne.

[3] Robertson, M. (2014) Sustainability. Principles and practice. Routledge.

[4] Maher, T. M., & Baum, S. D. (2013). Adaptation to and recovery from global catastrophe. Sustainability, 5(4), 1461-1479.

Mapping snowlines on temperate glaciers: the wonders of optimal thresholding

In this post I describe an interesting glaciological application of Otsu’s optimal thresholding, a method in digital image processing with which we can segment a gray-level image into two or more classes by finding the best threshold levels between them. I stumbled upon the method a few years ago when working on a study in which I needed an objective and reproducible method for segmenting satellite images of glaciers into accumulation and ablation areas. After much trial and error, and in spite of its simplicity, Otsu’s optimal thresholding outperformed all the other classification methods I tried, including sophisticated approaches such as ISODATA and spectral unmixing.

For the uninitiated, the problem is as follows: on a glacier, the snow line is the limit between snow (above) and ice (below). It is a dynamic feature whose altitude changes with the weather and seasons. On temperate glaciers, those where the ice is everywhere at the pressure melting point, conventional glaciological wisdom tells us that the snow line at the end of the ablation season (usually the summer) can be considered a good approximation to the equilibrium line altitude, that is, the altitude at which the surface mass balance is zero. This approximation is not valid for subpolar glaciers, because superimposed ice develops immediately below the firn field, effectively extending the accumulation area without appreciable optical contrast. The relevance of this question lies in the fact that glaciologists are usually interested in the relation between climate and glacier mass balance, and mapping snowline altitudes at the end of the summer therefore always has an intrinsic value. At least conceptually, the solution seems simple: find a cloud-free image taken at a suitable time of the year (end of summer, minimal seasonal snow cover) and just map the snow line on it. See, for example, the following subset of a satellite image of the Southern Patagonia Icefield:

South Patagonian Icefield, Viedma and Rio Túnel glaciers. Landsat 5 TM, 1997-03-17, path/row 231/094. The image is approx. 21 km across.

On Glaciar Viedma, the large one roughly at the center of the image, we can clearly see the obvious differences between the ice on the ablation area, to the south, and the snow and firn on the accumulation area, to the north. We probably feel at once that we could just draw a line along the limit and solve the problem. Reality is nevertheless more complicated, because the limit is gradual and placing a boundary will at some point become an arbitrary choice. If we had several interpreters working on this kind of problem we could in principle calculate some sort of average line, but paying many humans to do this work is not usually within the budget of the typical glaciological project! This approach is also impractical if we need to map snow line altitudes over a large number of glaciers. Obviously, satellite images are suitable for this kind of task, and the problem of finding a methodology for mapping glaciers with them has challenged the ingenuity of many glaciologists for decades (see a very long list of articles about doing this with visible and near infrared images here). An important aspect is that, to achieve some sort of consistency and to cover large areas in a reasonable time, automatic methods that are also objective and reproducible are mandatory. In practice the matter is not trivial, but I found that Otsu’s optimal thresholding is an extremely suitable method for this problem, and I think it can be advantageously used for mapping a large number of glaciers, regionally or even globally.

As noted above, Otsu’s method reduces a gray-level image to a binary one by finding an optimal threshold level that maximizes a separability function. The original idea was published by Nobuyuki Otsu in 1979 (1), in what became a seminal paper in the field of digital image processing. The procedure can be found in exquisite detail in the original paper, as well as in many books and all around the internet. The recipe I follow here is the one given by Gonzalez and Woods in their book “Digital image processing”, page 747 (2). I also give the Perl code I used to calculate each step (written in PDL, the Perl Data Language).

As an example, I here use a subset of a MODIS image (band 2, near infrared) of Glaciar Viedma, part of the study mentioned above (3). We will need a gray-level image of a glacier where the surroundings have been suitably clipped out using a glacier outline from an inventory (one way to do this clipping is sketched after the figure below). This is important and necessary in order to find the threshold level between ice and snow/firn, and not the one between glacier and rock, which is usually more marked in pixel values:

Glaciar Viedma. Subset of band 2 of an average of 7 MODIS images.
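
The clipping itself was done beforehand and is not shown here; one possible way to do it is with GDAL’s gdalwarp and its cutline option, sketched here with hypothetical file names:

> gdalwarp -cutline glacier_outline.shp -crop_to_cutline -dstnodata -9999 modis_b2_average.tif viedma_b2_clipped.tif

The nodata value chosen here is arbitrary; whatever marks the surroundings must later be converted to NaN or flagged as BAD so that it does not contaminate the histogram.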

I chose Glaciar Viedma because its tongue is covered by volcanic ash and debris and it is a challenge for image classification. Ideally, a glacier histogram on this spectral band should look bimodal: one mode for ice, and the other for snow. Instead, we here have an additional mode for the debris, at a lower level than the ice:

Glaciar Viedma. Image histogram.

The arrow shows the position of the optimal threshold (anticipating the results!). We see that Otsu’s method performed reasonably well even in this difficult case. Simpler glaciers give even better results.

The procedure we followed to find this was:

0. We begin by loading the image into a piddle, a PDL matrix. To do this, we first convert the file from the original GeoTIFF format to NetCDF using GDAL and call it image.grd. We then read the file directly into PDL using the PDL::NetCDF module:
use PDL;
use PDL::NetCDF;
# read the file
my $image = PDL::NetCDF->new("image.grd", {REVERSE_DIMS => 1});
# load the z variable (the data) into a piddle
my $pdl_image = $image->get('z');
# convert eventual NaN to BAD values
$pdl_image->inplace->setnantobad;
# clump the two dimensions of the image into one long vector
my $pdl_clump = $pdl_image->clump(2);
# compute the histogram using hist
my ($vb, $nb) = hist $pdl_clump;
# and convolve a bit (using conv1d) to smooth out the pesky noisy histogram
$nb = conv1d($nb, ones(11));
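
For reference, the GeoTIFF to NetCDF conversion mentioned in step 0 can be done with gdal_translate. A sketch with a hypothetical input file name, assuming GDAL was built with the GMT (NetCDF grid) driver, which stores the data in a variable named ‘z’ as expected by the get('z') call above:

> gdal_translate -of GMT viedma_b2_clipped.tif image.grd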

We then start with Otsu’s method proper:

1. compute the normalized histogram of the input image (in the normalized histogram, the frequency of every level is expressed as an estimate of the probability):
my $bnh = $nb/sumover($nb);

2. compute the cumulative sums of the normalized histogram (I here use a subroutine called “cumhisto”, which you can find in the appendix; however, PDL has a built-in method for that called “cumusumover()”. Thanks Chris Marshall for calling my attention to that!)
my $bcsum = cumhisto($bnh);

3. compute the cumulative means (that is, for each level k, the cumulative sum of level times probability up to that level):
my $bcmea = cumhisto($vb*$bnh);

4. compute the global intensity mean:
my $bintm = sumover($vb*$bnh);

5. compute the between class variance, \sigma^{2}_B , using:
\sigma^{2}_B(k) = \frac{(m_G P_1(k) - m(k))^{2}}{P_1(k) (1 - P_1(k))}

where m_G is the global mean, P_1(k) is the probability of class 1 (the cumulative sum of the normalized histogram up to level k), and m(k) is the k-th cumulative mean.
my $bsigmasq = (($bintm*$bcsum)-$bcmea)**2/($bcsum*(1-$bcsum));

6. and then we obtain the optimum threshold as the value k^{*} that maximizes \sigma^{2}_B(k):
my $botsuthr = $vb->index(maximum_ind($bsigmasq));

7. if we want, we can also compute the separability measure \theta^{*} , using:
\theta(k) = \frac{\sigma^{2}_B(k)}{\sigma^{2}_G}

where \sigma^{2}_G is the global variance:
my $bglobvar = sumover(($vb - $bintm)**2 * $bnh);

and then the separability at the optimum threshold is:
my $rthetab = $bsigmasq->index(maximum_ind($bsigmasq)) / $bglobvar;

Once we have the value of the threshold, we can simply generate a binary image by setting values below and above the threshold to 0 and 1, respectively. Our segmented glacier will then look like this:

Glaciar Viedma. Segmented into accumulation and ablation areas using Otsu’s optimal thresholding method.
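
For completeness, here is a minimal PDL sketch of this last step, reusing the names from the code above ($pdl_image and the threshold $botsuthr) and assuming that pixels outside the glacier outline are flagged as BAD; it also computes the AAR (accumulation_area / total_glacier_area) discussed below:
# classify: 1 = snow/firn (accumulation), 0 = ice (ablation); BAD pixels stay BAD
my $snow = ($pdl_image >= $botsuthr);
# accumulation area ratio: accumulation pixels over all good (glacier) pixels
my $aar = $snow->sum / $snow->ngood;
print "AAR = $aar\n";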

The method is computationally very efficient and can be made to run very fast. It is extremely effective with very few caveats. Obviously, images should be cloud free. The only problems I experienced were caused by shadows, or faulty outlines that left some rock or a sufficiently large nunatak inside the boundary. Also, computing AAR (accumulation_area / total_glacier_area) using different bands gives slightly different results, which is not a problem of the method, but a manifestation of the difficulty of the problem.

All in all, Otsu’s optimal thresholding appears to me as an old gem that can be conveniently implemented on larger scales. For example, it would not be unthinkable to use the GLIMS or Randolph global glacier inventories, together with end-of-summer images over glaciers around the world, to systematically map AAR at a global scale on a routine basis. This could be done using the procedure described here at a fairly reasonable computational cost. Cool!

References

(1) Otsu, N (1979). “Threshold selection method from gray-level histograms”. IEEE Trans. Syst. Man Cybern. 9.1, 62–66. issn: 0018-9472.

(2) Gonzalez, R.C. and R.E. Woods (2008). “Digital image processing”. Pearson/Prentice Hall, 954 pp.

(3) De Angelis, H. “Hypsometry and sensitivity of the mass balance to changes in equilibrium-line altitude: the case of the Southern Patagonia Icefield.” Journal of Glaciology 60, no. 219 (2014): 14-28.

Appendix

Cumhisto, a subroutine in PDL for calculating the cumulative histogram:

use PDL;
use PDL::NiceSlice;

sub cumhisto {
    # cumulative sum of a 1-D piddle: cum[i] = sum of the input up to index i
    my $lqs = shift;
    my $cum = $lqs->copy;
    my $h = 0;
    while ($h < nelem($lqs) - 1) {
        # shift the input right by $h+1 positions and accumulate
        my $dummy = zeroes(nelem($cum));
        $dummy($h+1:-1) .= $lqs(0:-($h+2));
        $cum += $dummy;
        $h++;
    }
    return $cum;
}
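
As mentioned in step 2, PDL’s built-in cumusumover() does the same job; a quick sanity check (a sketch, assuming the code above has been loaded):
my $x = pdl(1, 2, 3, 4);
print cumhisto($x), "\n";        # prints [1 3 6 10]
print $x->cumusumover, "\n";     # prints [1 3 6 10]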

Tell Me How?

Hernán De Angelis:

This is so powerful, so true.

Originally posted on The Quiet Sky:

Tell me how not to be pliant like a willow
When winds of deceit and subterfuge blow?

To stand correct and straight amidst
Overwhelming corruption and hate?

Tell me how to fight war and strife
With a white glove and no knife?

To cry a little more now and then
On seeing killing, riots and mayhem?

Tell me how to tolerate and bear
The mindless destruction of Life everywhere?

How to feel a wrenching pain
As hunters shoot down for profit and gain?

Tell me how not to turn my face away
From gnawing poverty and hunger every day?

Tell me how to experience the same
Of those tortured and maimed?

Tell me how to be a lotus in the lagoon
Despite muddy waters, in full bloom?

Tell me how to revere and hold sublime
All Life in its variety Divine?

Tell me how, O Sage, to pray,
Bringing Compassion…


Precautionary principle

* originally published in my old blog “Tales of a Finite Planet”, on 2014-06-24

I have been pondering writing about the precautionary principle for a while. I hesitated because of the vastness and intricacy of the matter, the apparently endless ramifications, and the somewhat controversial legal sides (see for yourself!). I finally decided to go ahead, while keeping these notes close to the simple and powerful core idea.

The roots of the precautionary principle are often traced back to the preamble of the 1987 UN Montreal Protocol on the ozone layer and, particularly, to the Rio Declaration on Environment and Development, made at the 1992 Summit in Rio de Janeiro, whose Principle 15 reads:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

The definition used by Schellnhuber and Wenzel in “Earth system analysis: integrating science for sustainability” (p. 209), “securing of error-tolerant operating by judiciously staying away from the currently presumed civilizatory and ecological load limits”, is also illuminating because it emphasizes the links to a system dynamics approach, as well as the concept that the inherent limits of our knowledge give rise to uncertainties in forecasts and analyses. This applies specifically, for example, to the perceived safety of pollutants, insecticides, etc., and calls for extreme caution in their use.

There is, however, nothing truly revolutionary in these words, as the basic idea is as old as humanity itself and can be found, in subtle ways, in the wisdom and common sense of many cultures (or grandmothers, for that matter). The point was brilliantly made by Nassim Nicholas Taleb in his book “Antifragile”, and goes basically like this: do you ingest a bunch of randomly chosen pills on the premise that there is no evidence that they will do any harm to you? Well, no. The same applies to the environment!

The idea is closely connected with the concept of burden of proof. It means that those claiming that there is no harm in using a certain product or method are the ones who have to provide the evidence that it is safe, and not the other way around. With respect to the global environment, the lesson is that we cannot happily assume that we can just do anything on a massive scale without harming the planet, particularly if these assumptions are based on a limited number of isolated and possibly lousy experiments (or none at all). Compare this with the vast experimental record of natural evolution. In her millions of years of trial and error, Nature has come up with far better solutions, compatible with the rest of the environment, than anything humans can devise. This point was also raised by Taleb, with a cute probabilistic interpretation (see here).

In recent decades we have witnessed a particularly successful application of the precautionary principle, namely the ban on chlorofluorocarbons (CFC) and the ensuing recovery of the ozone layer. We now have at hand another excellent example with the European Commission’s recent ban on neonicotinoids, which has conceivably put a stop to colony collapse disorder in the Po river valley, Italy, without observable negative impact on crop yields. The precautionary principle is also very relevant to the question of greenhouse gases and climate change, where we have an opportunity to act now, by drastically reducing emissions, and hopefully call it another case of successful application in the future.

The precautionary principle has been criticized for being vague and loose and, if taken to the extreme, an excuse for inaction. Some of these criticisms are valid, but they are directed at particular applications and interpretations rather than at the principle itself, and do not invalidate it. Note that one of the most powerful aspects of the precautionary principle is how it is inextricably linked to ethics, guiding us to think about how our actions and decisions impact Nature and our fellow humans, both now and in the future. Experience has shown that, when used as guidance in an open-minded and reasonable way, the precautionary principle makes a good starting point for any environmental discussion, and a solid ethical and practical ground for the legislation and practice of sustainable development. Let’s not forget it.