In this post I describe an interesting glaciological application of Otsu’s optimal thresholding, a method in digital image processing with which we can segment a gray-level image into two classes by finding the optimal threshold level between them. I stumbled upon the method a few years ago while working on a study in which I needed an objective and reproducible way of segmenting satellite images of glaciers into accumulation and ablation areas. After much trial and error, and in spite of its simplicity, Otsu’s optimal thresholding outperformed all the other classification methods I tried, including sophisticated approaches such as ISODATA and spectral unmixing.

For the uninitiated, the problem is as follows: on a glacier, the snow line is the limit between snow (above) and ice (below). It is a dynamic feature whose altitude changes with the weather and the seasons. On temperate glaciers, those where the ice is everywhere at the pressure melting point, conventional glaciological wisdom tells us that the snow line at the end of the ablation season (usually the summer) can be considered a good approximation to the equilibrium line altitude, that is, the altitude at which the surface mass balance is zero. This approximation is not valid for subpolar glaciers, because superimposed ice develops below the firn field, effectively extending the accumulation area without any appreciable optical contrast. The relevance of this question lies in the fact that glaciologists are usually interested in the relation between climate and glacier mass balance, and therefore mapping snow line altitudes at the end of the summer always has intrinsic value. At least conceptually, the solution seems simple: find a cloud-free image taken at a suitable time of the year (end of summer, minimal seasonal snow cover) and just map the snow line on it. See, for example, the following subset of a satellite image of the Southern Patagonia Icefield:

On Glaciar Viedma, the large glacier roughly at the center of the image, we can clearly see the difference between the ice in the ablation area, to the south, and the snow and firn in the accumulation area, to the north. We may feel at once that we could simply draw a line along the limit and solve the problem. Reality is nevertheless more complicated: the transition is gradual, and placing a boundary at some point becomes an arbitrary choice. If we had several interpreters working on this kind of problem we could in principle calculate some sort of average line, but paying many people to do this work is not usually within the budget of the typical glaciological project! That approach is also impractical if we need to map snow line altitudes over a large number of glaciers. Satellite images are obviously suitable for this kind of task, and the problem of finding a methodology for mapping glaciers with them has challenged the ingenuity of many glaciologists for decades (see a very long list of articles about doing this with visible and near-infrared images here).

An important aspect is that, to achieve some consistency and to cover large areas in a reasonable time, automatic methods that are also objective and reproducible are mandatory. In practice the matter is not trivial, but I found Otsu’s optimal thresholding to be an extremely suitable method for this problem, and I think it can be used to advantage for mapping large numbers of glaciers, regionally or even globally. As noted above, Otsu’s method reduces a gray-level image to a binary one by finding the optimal threshold level that maximizes a separability function. The original idea was published by Nobuyuki Otsu in 1979 (1), in what became a seminal paper in the field of digital image processing. The procedure is described in exquisite detail in the original paper, as well as in many books and all around the internet.

The recipe I follow here is the one given by Gonzalez and Woods in their book “Digital image processing”, page 747 (2). I also give the code that I used to compute all the steps, written in PDL, the Perl Data Language. As an example, I use here a subset of a MODIS image (band 2, near infrared) of Glaciar Viedma, part of the study mentioned above (3). We need a gray-level image of a glacier in which the surroundings have been suitably clipped out using a glacier outline from an inventory. This is necessary in order to find the threshold level between ice and snow/firn, and not the one between glacier and rock, which is usually more marked in pixel levels:
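The clipping step matters because out-of-glacier pixels would dominate the histogram. As an illustration, here is a hedged Python sketch of the idea (the toy image and mask values are invented for the example; in practice the mask comes from rasterizing an inventory outline):

```python
# Toy 2D image and glacier mask (1 = inside the outline, 0 = outside).
image = [
    [50, 52, 200, 210],
    [48, 120, 180, 205],
    [45, 115, 175, 198],
]
mask = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 1, 1, 0],
]

# Keep only in-glacier pixel values; the histogram (and hence the threshold)
# then reflects ice vs. snow/firn rather than glacier vs. rock.
glacier_pixels = [v for row, mrow in zip(image, mask)
                  for v, m in zip(row, mrow) if m]
print(sorted(glacier_pixels))  # → [52, 115, 120, 175, 180, 200, 205]
```

In PDL the same effect is obtained by marking out-of-glacier pixels as BAD so they are excluded from the histogram.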

I chose Glaciar Viedma because its tongue is covered by volcanic ash and debris, which makes it a challenge for image classification. Ideally, the histogram of a glacier in this spectral band should look bimodal: one mode for ice and the other for snow. Here, instead, we have an additional mode for the debris, at a lower level than the ice:

The arrow shows the position of the optimal threshold (anticipating the results!). We see that Otsu’s method performed reasonably well even in this difficult case; simpler glaciers give even better results. The procedure we followed to find it was:

0. we begin by loading the image into a piddle, a PDL matrix. To do this, we first convert the file from the original GeoTIFF format to NetCDF using GDAL, and call it image.grd. We then read the file directly into PDL using the PDL::NetCDF module:

    # read the file
    my $image = PDL::NetCDF->new("image.grd", {REVERSE_DIMS => 1});

    # load the z dimension (the data) into the variable
    my $pdl_image = $image->get('z');

    # convert eventual NaN to BAD values
    $pdl_image .= $pdl_image->inplace->setnantobad;

    # clump the two dimensions of the image into one long vector
    my $pdl_clump = $pdl_image->clump(2);

    # compute the histogram using hist
    my ($vb, $nb) = hist $pdl_clump;

    # convolve a bit (using conv1d) to smooth out those pesky noisy images
    $nb .= conv1d $nb, ones(11);

1. compute the normalized histogram of the input image (in the normalized histogram, the frequency of every level is expressed as an estimate of its probability):

    my $bnh = $nb / sumover($nb);

2. compute the cumulative sums of the normalized histogram (I use here a subroutine called “cumhisto”, which you can find in the appendix; PDL also has a built-in method for this, “cumusumover()”. Thanks Chris Marshall for calling my attention to that!):

    my $bcsum = cumhisto($bnh);

3. compute the cumulative means (that is, for each level, the mean of the histogram up to that level):

    my $bcmea = cumhisto($vb * $bnh);

4. compute the global intensity mean:

    my $bintm = sumover($vb * $bnh);

5. compute the between-class variance, $\sigma^{2}_B$, using:

$\sigma^{2}_B(k) = \frac{(m_G P_1(k) - m(k))^{2}}{P_1(k) (1 - P_1(k))}$

where $m_G$ is the global mean, $P_1(k)$ is the probability that a pixel belongs to class 1 (that is, has a level up to $k$), and $m(k)$ is the cumulative mean up to level $k$:

    my $bsigmasq = (($bintm * $bcsum) - $bcmea)**2 / ($bcsum * (1 - $bcsum));

6. and then we obtain the optimum threshold as the value $k^{*}$ that maximizes $\sigma^{2}_B(k)$:

    my $botsuthr = $vb->index(maximum_ind($bsigmasq));

7. if we want, we can also compute the separability measure $\theta^{*}$, using:

$\theta(k) = \frac{\sigma^{2}_B(k)}{\sigma^{2}_G}$

where $\sigma^{2}_G$ is the global variance:
    my $bglobvar = sumover(($vb - $bintm)**2 * $bnh);

and then, the separability is:
    my $thetab = max($bsigmasq) / $bglobvar;

Once we have the value of the threshold, we can simply generate a binary image by setting values below and above the threshold to 0 and 1, respectively. Our segmented glacier will then look like this:

The method is computationally very efficient and can be made to run very fast. It is also extremely effective, with very few caveats. Obviously, images should be cloud free. The only problems I experienced were caused by shadows, or by faulty outlines that left some rock or a sufficiently large nunatak inside the glacier boundary. Also, computing the AAR (accumulation area divided by total glacier area) from different spectral bands gives slightly different results; this is not a shortcoming of the method, but a manifestation of the intrinsic difficulty of the problem.

All in all, Otsu’s optimal thresholding appears to me as an old gem that can be conveniently applied at larger scales. For example, it would not be unthinkable to use the GLIMS or Randolph global glacier inventories, together with end-of-summer images of glaciers around the world, to systematically map AAR at a global scale on a routine basis. This could be done with the procedure described here at a fairly reasonable computational cost. Cool!

References

(1) Otsu, N. (1979). “A threshold selection method from gray-level histograms”. IEEE Transactions on Systems, Man, and Cybernetics 9(1), 62–66.

(2) Gonzalez, R.C. and Woods, R.E. (2008). “Digital image processing”. Pearson/Prentice Hall, 954 pp.

(3) De Angelis, H. (2014). “Hypsometry and sensitivity of the mass balance to changes in equilibrium-line altitude: the case of the Southern Patagonia Icefield”. Journal of Glaciology 60(219), 14–28.

Appendix

Cumhisto, a subroutine in PDL for calculating the cumulative histogram:

    sub cumhisto {
        my $lqs = pdl(@_);
        my $cum = pdl(@_);
        my $h = 0;
        while ($h < nelem($lqs) - 1) {
            my $dummy = zeroes(nelem($cum));
            $dummy($h+1:-1) .= $lqs(0:-($h+2));
            $cum += $dummy;
            $h++;
        }
        return $cum;
    }
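For readers outside the Perl world: the running sums that cumhisto (or PDL’s cumusumover) computes are a one-liner in most languages. A Python sketch, using only the standard library (the function name is mine):

```python
from itertools import accumulate

def cumulative_histogram(hist):
    """Running sums of a histogram: element k holds the sum of hist[0..k]."""
    return list(accumulate(hist))

print(cumulative_histogram([1, 4, 9, 4, 1]))  # → [1, 5, 14, 18, 19]
```

Applied to the normalized histogram this gives $P_1(k)$, and applied to level × probability it gives the cumulative means $m(k)$ used in steps 2 and 3 above.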