# Hillshading, useful fun with digital elevation models

Hillshading is the art and science of making a graphical representation of an illuminated landscape. It has a long history that includes names such as Leonardo da Vinci (with the chiaroscuro technique) and many famous and not-so-famous Swiss and Japanese cartographers (have a look at some examples of mastery in manual hillshading by Eduard Imhof). A very complete account can be found in the excellent and very enjoyable paper “Hill shading and the reflectance map” by Berthold Horn [1].

Shading has an intuitive appeal because it conveys information on shape in a straightforward manner. This is best envisaged as the difference in perception a person experiences on a cloudy versus a sunny day. On a cloudy day light is diffuse and casts no shadows, giving an impression of flatness, whereas on a sunny day shadows allow us to form an immediate mental model of the shapes and enable better estimation of distances. This is very relevant in many practical situations, in particular “… when the interpreter’s time is limited, as in aviation, for users that are not trained cartographers, and for small scale maps, where contours degenerate into messy tangles of lines.” [1]

Most importantly, the techniques involved in computing shaded maps are fundamental for applying topographic correction to satellite images. They also become important when it is relevant to convey information about topography as context to something else, but an exact quantitative specification is neither necessary nor desirable, as in tourist maps or as background information in geoscientific studies, as shown below.

Palaeo-ice-stream onsets in south-central Baffin Island, Amadjuak Lake region. Section of geological map of Canada (GSC1860A*; Wheeler et al., 1997) draped over shaded relief with superimposed landforms. Map 1860A is produced, compiled and published by the Geological Survey of Canada (GSC). This section is reproduced here according to the permissions granted by the GSC for non-commercial applications. Modified from De Angelis and Kleman (2008) [2].

Every modern GIS package provides a convenient way to compile a shaded relief map, often including very involved corrections. We here focus on the fundamental aspects, leaving refinements aside, and will confine ourselves to the important and sufficiently general case of Lambertian surfaces, using as an example the DEM we compiled in previous posts of the region around Vesuvius volcano, in Italy.

The central concept here is that the irradiance, the amount of light falling upon a surface element, depends on the position of the light source relative to that element. The radiance, the amount of light reflected by the surface element, depends on the irradiance but also on the reflectance properties of the surface. As mentioned, we focus here on Lambertian surfaces, which reflect diffusely and equally in all directions, in which case we can safely assume that radiance is proportional to irradiance. For natural surfaces illuminated by the Sun, finding the irradiance, or solar incidence, reduces to an application of the spherical law of cosines:

$\cos \gamma_{i} = \cos \theta_{p} \cos \theta_{z} + \sin \theta_{p} \sin \theta_{z} \cos (\phi_{a} - \phi_{o})$

where $\gamma_{i}$ is the solar incidence, $\theta_{p}$ is the slope of the surface, $\theta_{z}$ is the zenith angle, or the angle between the Sun position and the vertical, $\phi_{a}$ is the Sun azimuth from the North, and $\phi_{o}$ is the surface aspect, that is, the direction in which the surface is sloping.
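Before turning to grids, we can sanity-check the formula numerically. The following numpy sketch (the helper function and its argument names are ours, not from any GIS package) evaluates the law of cosines for a couple of illustrative angle combinations:

```python
import numpy as np

# Spherical law of cosines for solar incidence; all angles in degrees.
# This helper and its argument names are illustrative, not from GMT.
def cos_incidence(slope_deg, aspect_deg, sun_azimuth_deg, sun_zenith_deg):
    sl, asp = np.radians(slope_deg), np.radians(aspect_deg)
    az, zen = np.radians(sun_azimuth_deg), np.radians(sun_zenith_deg)
    return np.cos(sl) * np.cos(zen) + np.sin(sl) * np.sin(zen) * np.cos(az - asp)

# A horizontal surface (slope 0): cos(incidence) reduces to cos(zenith).
print(cos_incidence(0.0, 0.0, 315.0, 45.0))     # ~0.707
# A 45-degree slope facing the Sun is fully illuminated.
print(cos_incidence(45.0, 315.0, 315.0, 45.0))  # ~1.0
```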

We begin by calculating the slope and aspect from the DEM:

1. slope:

$\theta_{p} = \arctan \left( \sqrt{ \left(\frac{dz}{dx}\right)^{2} + \left(\frac{dz}{dy}\right)^{2} } \right)$

we can easily apply this manipulation using grdmath, the GMT reverse polish notation calculator, on our DEM grid, here denoted dem.grd:

grdmath dem.grd DDX dem.grd DDY R2 SQRT ATAN R2D = slope.grd

Note that the function R2 returns the sum of the squares of the two preceding operands, and R2D converts radians to degrees.
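For readers without GMT at hand, the same slope computation can be sketched in numpy (the grid spacings and axis ordering below are our own assumptions for a toy grid):

```python
import numpy as np

# Slope from a DEM: arctan of the gradient magnitude, in degrees.
# dx, dy are grid spacings in the same units as elevation (e.g. meters).
def slope_deg(dem, dx=30.0, dy=30.0):
    dzdy, dzdx = np.gradient(dem, dy, dx)  # np.gradient returns axis 0 (y) first
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# A plane rising 1 m per meter in x slopes at 45 degrees everywhere.
dem = np.tile(np.arange(5, dtype=float), (5, 1))
print(slope_deg(dem, dx=1.0, dy=1.0)[2, 2])  # ~45
```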

2. aspect:

$\phi_{o} = \arctan \left( \frac{dz/dx}{dz/dy} \right)$

and similarly:

grdmath dem.grd DDX dem.grd DDY ATAN2 R2D 180 ADD = aspect.grd

Note that we used the function ATAN2, which accounts for the signs of the numerator and denominator and thus returns angles in the correct quadrant. We add 180 degrees to get angles from 0° to 360°, instead of the range −180° to +180°.
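An equivalent numpy sketch of the aspect computation (again, the axis conventions here are our own assumptions, so treat it as illustrative rather than a replica of GMT's internals):

```python
import numpy as np

# Aspect via arctan2, mirroring the ATAN2 ... 180 ADD pipeline: the
# two-argument arctangent picks the correct quadrant, and adding
# 180 degrees shifts the result into the range 0-360.
def aspect_deg(dem, dx=30.0, dy=30.0):
    dzdy, dzdx = np.gradient(dem, dy, dx)
    return np.degrees(np.arctan2(dzdx, dzdy)) + 180.0

dem = np.tile(np.arange(5, dtype=float), (5, 1))  # surface rising toward +x
print(aspect_deg(dem, dx=1.0, dy=1.0)[2, 2])      # ~270
```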

Our partial results look like this:

Slope (left) and aspect (right) of the Vesuvius DEM.

We can now turn to computing the cosine of the solar incidence. For the Sun azimuth and elevation we choose 315° and 45°, that is, illumination from the northwest at 45° above the horizon (so the zenith angle is also 45°), following the cartographic convention for shaded maps.

We do:

grdmath slope.grd D2R COS 45 D2R COS MUL slope.grd D2R SIN 45 D2R SIN MUL 315 D2R aspect.grd D2R SUB COS MUL ADD = cosgammai_315.grd

(this can also be accomplished using GMT’s grdgradient, which also has a function for specifying the properties of the reflective surface)
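The whole pipeline can also be replicated end to end in numpy on a toy DEM; this is only a sketch under the same assumed conventions, not a substitute for grdgradient:

```python
import numpy as np

# Sun position: azimuth 315 deg, elevation 45 deg (zenith angle 45 deg).
az, zen = np.radians(315.0), np.radians(45.0)

dem = np.add.outer(np.arange(6.0), np.arange(6.0))  # toy DEM, a tilted plane
dzdy, dzdx = np.gradient(dem, 1.0, 1.0)
slope = np.arctan(np.hypot(dzdx, dzdy))
aspect = np.arctan2(dzdx, dzdy) + np.pi

# Spherical law of cosines, term by term as in the grdmath line above.
cos_gamma = (np.cos(slope) * np.cos(zen)
             + np.sin(slope) * np.sin(zen) * np.cos(az - aspect))
print(cos_gamma.min(), cos_gamma.max())  # cosines stay within [-1, 1]
```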

Our results look like this:

Presto! This is an image upon which we could drape a photograph or satellite image, or even draw the contours. This is also a fundamental piece of information needed for topographical correction of images, a topic of a previous post.

[1] Horn, B. K. (1981). Hill shading and the reflectance map. Proceedings of the IEEE, 69(1), 14-47.

[2] De Angelis, H. & Kleman, J. (2008). Palaeo-ice-stream onsets: examples from the north-eastern Laurentide Ice Sheet. Earth Surface Processes and Landforms, 33(4), 560-572.

# A reflection on learning in technology dominated fields

This post is a short reflection on some aspects of the influence of technology on learning, and is based on my experience as a teacher of remote sensing and GIS in higher-education settings. I nevertheless think that many aspects of this reflection apply to other fields as well.

A comfortable and amusing side of modernity is that we have at our disposal a plethora of fancy software that will, at a mouse click, create beautiful results without our having to bother about the boring details. This is particularly evident in computing, where we can rapidly get results from data, and more strikingly in visualization, where shiny colorful plots can be created in a second. This is all good and fine, but there are dangerous side effects.

The heavy price we pay for this handiness is that we very often forget, or even neglect to adequately learn, the essential ideas behind the methods and, in extreme cases, even their purpose. In this way, we unnecessarily expose ourselves to the risk of ignoring the potentials and limitations of our tools. I think this is most clearly evident when it comes to visualization, a complex field in which many software packages today enable us to create shockingly colorful charts in the blink of an eye. With this sort of double-edged sword, the tools invite us to produce results rapidly, thereby opening the door to forgetting fundamental aspects of visualization such as scientific purpose and human perception.

Conceptual model of scientific visualization, with the three outstanding dimensions and their caveats.

Another lurking trap is that we usually become attached to specific software tools, particularly of the GUI variety, and might end up associating a given procedure with a sequence of clicks on virtual buttons, instead of with ideas and equations that can be implemented in any computer language, or even with paper and pencil (albeit not recommended!).

As a teacher of remote sensing and GIS I have seen this unfortunate situation in many higher-education settings, where, for a sizable share of students, learning amounts to absorbing some loose basics and then jumping to clicking through programs to produce results. This is not learning, at least not for me. The seriousness of the problem is manifested by the typical question ‘ok, now, how do I do this or that?‘, meaning: ‘which button do I click on now?‘ I fear that this approach is then carried over to other settings, industrial or academic, as the now experts, pressed to produce rapid results but acquainted only with performing series of clicks in a particular software package, become vulnerable to changes in software packages, tool configurations and new problems.

I believe that all this is avoidable. Focusing on the essential core of principles in a discipline can help people construct their knowledge, upon which long-term academic excellence and professional expertise can be built. In this sense, I agree with the late mathematician Richard Hamming that developing an attitude of deriving results from principles is essential. Learning the principles behind the methods is, as much as learning to learn, a cornerstone of lifelong learning. I believe that stressing this approach is a way of helping our students become more adaptable and resilient in a changing world.

# Getting and preparing SRTM digital elevation data

The Shuttle Radar Topography Mission (SRTM) is a tremendous resource because of its high accuracy and its consistency as a “snapshot” of Earth’s surface, acquired over just 10 days during February 2000. It is also freely available to anyone! SRTM was a specially designed mission of the Endeavour space shuttle, with a specially built sensor antenna, and aimed to map the Earth’s land topography using radar interferometry. As a result, during the 10 days of the mission, the land surface topography between latitudes 60°N and 56°S, or about 80% of Earth’s land surface, was mapped with a final cell size of 1 arc-second (about 30 m) and absolute elevation errors under 10 m. More details about the mission can be found in the paper by Farr and collaborators [1] (which is open access).

SRTM data became publicly available relatively soon after they were processed. Three versions exist: version 1, the original raw sensor data; version 2, a processed version that still contains data voids (arising from weak radar returns or other anomalies caused by, for example, deep valleys); and version 3, the void-filled version, compiled with the help of the ASTER DEM. For many years, though, only data over the United States were available as V3 at the maximum resolution of 1 arc-second, whereas data for the rest of the world were offered as V2 at 3 arc-seconds, or about 90 m pixel size. Fortunately, a recent happy decision by the US government has finally made all data available at full resolution for the whole globe. The release is gradual and is expected to be finished by next year.

Luckily, for our Landsat 8 example, the Bay of Naples is already available at 1 arc-second as of November 2014. We then proceed to download and prepare the data. We start by opening the Earth Explorer site (http://earthexplorer.usgs.gov/) and choosing the area of interest, either with the path/row or, as shown below, by specifying a coordinate area with the map in the menu to the left:

We then choose the SRTM dataset in the menu to the left, under the tab ‘Data Sets’, by navigating as follows: Digital Elevation -> SRTM -> SRTM 1 Arc-Second Global:

Clicking on “Results” will show us the 1° x 1° tiles into which SRTM data are compiled. We simply download them, using the icon with a green arrow next to each item in the results list. I here chose to get the four westernmost tiles:

n40_e013_1arc_v3.tif
n40_e014_1arc_v3.tif
n41_e013_1arc_v3.tif
n41_e014_1arc_v3.tif

These are GeoTIFF files. It is also possible to get the raw-format grids, “hgt” files, but these require somewhat more involved processing, so we proceed here with the GeoTIFFs. The names of the files refer to the coordinates of the lower-left corner of each tile. Each file contains 3601 x 3601 elements, in 16-bit integer data type, with missing values signaled as -32767. The file coordinates are given as longitude and latitude referred to the WGS84 ellipsoid, with elevation expressed in meters above mean sea level (EGM96 geoid). As is tradition in this blog, we use the free and open-source tools GDAL and Generic Mapping Tools (GMT) from the command line:

> gdalinfo n40_e013_1arc_v3.tif
Driver: GTiff/GeoTIFF
Files: n40_e013_1arc_v3.tif
Size is 3601, 3601
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0],
    UNIT["degree",0.0174532925199433],
    AUTHORITY["EPSG","4326"]]
Origin = (12.999861111111111,41.000138888888884)
Pixel Size = (0.000277777777778,-0.000277777777778)
Metadata:
  AREA_OR_POINT=Point
  DTED_CompilationDate=0002
  DTED_DataEdition=02
  DTED_DigitizingSystem=SRTM
  DTED_HorizontalAccuracy=0009
  DTED_HorizontalDatum=WGS84
  DTED_MaintenanceDate=0000
  DTED_MaintenanceDescription=0000
  DTED_MatchMergeDate=0000
  DTED_MatchMergeVersion=A
  DTED_NimaDesignator=DTED2
  DTED_OriginLatitude=0400000N
  DTED_OriginLongitude=0130000E
  DTED_Producer=USCNIMA
  DTED_RelHorizontalAccuracy=NA
  DTED_RelVerticalAccuracy=0011
  DTED_SecurityCode_DSI=U
  DTED_SecurityCode_UHL=U
  DTED_UniqueRef_DSI=F02 011
  DTED_UniqueRef_UHL=F02 011
  DTED_VerticalAccuracy_ACC=0008
  DTED_VerticalAccuracy_UHL=0008
  DTED_VerticalDatum=E96
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (  12.9998611,  41.0001389) ( 12d59'59.50"E, 41d 0' 0.50"N)
Lower Left  (  12.9998611,  39.9998611) ( 12d59'59.50"E, 39d59'59.50"N)
Upper Right (  14.0001389,  41.0001389) ( 14d 0' 0.50"E, 41d 0' 0.50"N)
Lower Right (  14.0001389,  39.9998611) ( 14d 0' 0.50"E, 39d59'59.50"N)
Center      (  13.5000000,  40.5000000) ( 13d30' 0.00"E, 40d30' 0.00"N)
Band 1 Block=3601x1 Type=Int16, ColorInterp=Gray
  NoData Value=-32767
  Unit Type: m

We now proceed to compile a mosaic with our four files:

> gdal_merge.py -o bayofnaples.tif -n -32767 *.tif
0...10...20...30...40...50...60...70...80...90...100 - done.

Note that we have specified the no-data value, with the option ‘-n’. We now have a file with the following characteristics:

> gdalinfo bayofnaples.tif
Driver: GTiff/GeoTIFF
Files: bayofnaples.tif
Size is 7201, 7201
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0],
    UNIT["degree",0.0174532925199433],
    AUTHORITY["EPSG","4326"]]
Origin = (12.999861111111111,42.000138888888884)
Pixel Size = (0.000277777777778,-0.000277777777778)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (  12.9998611,  42.0001389) ( 12d59'59.50"E, 42d 0' 0.50"N)
Lower Left  (  12.9998611,  39.9998611) ( 12d59'59.50"E, 39d59'59.50"N)
Upper Right (  15.0001389,  42.0001389) ( 15d 0' 0.50"E, 42d 0' 0.50"N)
Lower Right (  15.0001389,  39.9998611) ( 15d 0' 0.50"E, 39d59'59.50"N)
Center      (  14.0000000,  41.0000000) ( 14d 0' 0.00"E, 41d 0' 0.00"N)
Band 1 Block=7201x1 Type=Int16, ColorInterp=Gray

Since we plan to use this elevation model with our image of the Vesuvius volcano (this one), we clip the DEM to the same area as the image, reprojecting it at the same time to the UTM zone 33N projection and taking care to specify a 30 m pixel size, so that there is a one-to-one correspondence between the cells of the image and those of the DEM. We accomplish this using gdalwarp:

> gdalwarp -t_srs 'EPSG:32633' -te 430995 4489005 470985 4539015 -tr 30 30 bayofnaples.tif bayofnaples_UTM33N.tif
Creating output file that is 1333P x 1667L.
Processing input file bayofnaples.tif.
0...10...20...30...40...50...60...70...80...90...100 - done.

Our new file has the following properties:

> gdalinfo bayofnaples_UTM33N.tif
Driver: GTiff/GeoTIFF
Files: bayofnaples_UTM33N.tif
Size is 1333, 1667
Coordinate System is:
PROJCS["WGS 84 / UTM zone 33N",
    GEOGCS["WGS 84",
        DATUM["WGS_1984",
            SPHEROID["WGS 84",6378137,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433],
        AUTHORITY["EPSG","4326"]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",15],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]],
    AUTHORITY["EPSG","32633"]]
Origin = (430995.000000000000000,4539015.000000000000000)
Pixel Size = (30.000000000000000,-30.000000000000000)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (  430995.000, 4539015.000) ( 14d10'46.25"E, 40d59'57.85"N)
Lower Left  (  430995.000, 4489005.000) ( 14d11' 6.14"E, 40d32'56.15"N)
Upper Right (  470985.000, 4539015.000) ( 14d39'17.97"E, 41d 0' 6.51"N)
Lower Right (  470985.000, 4489005.000) ( 14d39'26.33"E, 40d33' 4.67"N)
Center      (  450990.000, 4514010.000) ( 14d25' 9.19"E, 40d46'32.18"N)
Band 1 Block=1333x3 Type=Int16, ColorInterp=Gray

Finally, we can display a nice color plot using GMT. We begin by translating our GeoTIFF to a NetCDF grid:

> gdal_translate -of NetCDF bayofnaples_UTM33N.tif bayofnaples_UTM33N.grd
Input file size is 1333, 1667
0...10...20...30...40...50...60...70...80...90...100 - done.

We then clip the values below 0, setting them to zero, to facilitate colour coding:

> grdclip bayofnaples_UTM33N.grd -Gnew.grd -Sb0/0

The new grid, new.grd, has the following properties:

> grdinfo new.grd
new.grd: Title: GDAL Band Number 1
new.grd: Command: grdclip bayofnaples_UTM33N.grd -Gnew.grd -Sb0/0
new.grd: Remark:
new.grd: Gridline node registration used [Cartesian grid]
new.grd: Grid file format: nf = GMT netCDF format (32-bit float), COARDS, CF-1.5
new.grd: x_min: 431010 x_max: 470970 x_inc: 30 name: x coordinate of projection [m] nx: 1333
new.grd: y_min: 4489020 y_max: 4539000 y_inc: 30 name: y coordinate of projection [m] ny: 1667
new.grd: z_min: 0 z_max: 1442 name: GDAL Band Number 1
new.grd: scale_factor: 1 add_offset: 0
new.grd: format: netCDF-4 chunk_size: 134,129 shuffle: on deflation_level: 3

We then proceed to make a nice color table, based on the master table dem2, using makecpt:

> makecpt -Cdem2 -T0/1500/10 > color.cpt

For visualization, it is nicer to indicate that the lowest values, at sea level, should be blue. We do this by opening a text editor and adding a first line that encompasses the values from 0 to 0.1, in blue (0/0/255 stands for 0 red, 0 green, 255 blue):

0    0/0/255              0.1  0/0/255
0.1  5.2267/105.17/63.16  10   5.2267/105.17/63.16
10   15.68/121.5/47.48    20   15.68/121.5/47.48

We can then plot the grid using grdimage:

> grdimage new.grd -JX15 -Rnew.grd -Ccolor.cpt -X1 -Y1 -P > plot.ps

and export the PostScript file as a raster, clipping the original page to the area of the map:

> ps2raster -A plot.ps

Digital elevation model covering the same area as the Vesuvius image, with the same pixel size.

Presto! In the next post we will compute grids of aspect and slope from this DEM.

[1] Farr, T. G. et al. (2007). The shuttle radar topography mission. Reviews of Geophysics, 45(2).

# Working with Landsat 8, a tutorial. 3. conversion to radiance and brightness temperature at top-of-atmosphere

In this third post of the Landsat 8 series we will implement, step by step, the procedures explained in the Landsat web site for converting the digital numbers in the satellite images to geophysical measurements of spectral radiance, as measured at the sensor and without atmospheric correction, called ‘top-of-atmosphere’ or simply TOA.

The images that we download as TIF files are coded as 16-bit unsigned integer images. These values are referred to as digital numbers. For some simple purposes, such as measuring the extent of features identifiable by eye, these digital numbers are perfectly sufficient. However, when we need to assess the geophysical properties of the terrain or study land cover change over time, we might find it convenient to convert these raw digital numbers to their original values of radiance, mostly to be able to compute reflectance and compare surfaces on a solid geophysical basis.

Converting to radiance is extremely easy because the images are coded using a linear relationship between radiance measurement and digital numbers. For all bands, we retrieve these values by calculating:

$L_{\gamma} = M_{L} Qcal + A_{L}$

where $L_{\gamma}$ is the spectral, or band-wise, TOA radiance, $M_{L}$ is a multiplicative term, the slope of the line in the linear relationship, $Qcal$ is the digital number, and $A_{L}$ is the additive linear term, the value at the origin. This will give us measurements in:

$\frac{W}{m^{2} ~ srad ~ \mu m}$

or Watts per square meter (surface) per steradian (solid angle) per micrometer (wavelength), which is the actual radiometric information that sensors collect.

We can accomplish this for each band by searching the metainformation text file *_MTL.txt and extracting, for each band, the values of RADIANCE_MULT_BAND_X and RADIANCE_ADD_BAND_X, corresponding to the parameters $M_{L}$ and $A_{L}$ in the above equation, respectively. I perform these operations via a Perl script, extracting the parameters using regular expressions, storing them in variables ($RADMULBAND and $RADADDBAND below), and then using grdmath, the powerful Generic Mapping Tools netCDF calculator, together with GDAL tools:

gdal_translate -of NetCDF $band.TIF $band.grd
grdmath $band.grd $RADMULBAND MUL $RADADDBAND ADD = $band-radTOA.grd
gdal_translate -ot Float32 $band-radTOA.grd $band-radTOA.tif

In this snippet, I first convert the band file to NetCDF using gdal_translate, then perform the operation with grdmath, and finally export to GeoTIFF using gdal_translate again. The result for band 8 (the panchromatic band), in the area around Vesuvius volcano, looks like this:
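In numpy the same rescaling is a one-liner; the gain and offset below are made-up stand-ins for the RADIANCE_MULT_BAND_X / RADIANCE_ADD_BAND_X values read from the *_MTL.txt file:

```python
import numpy as np

RADIANCE_MULT = 1.2e-2   # hypothetical M_L (gain), not a real MTL value
RADIANCE_ADD = -60.0     # hypothetical A_L (offset), not a real MTL value

# Digital numbers come as 16-bit unsigned integers; cast before scaling.
dn = np.array([[0, 10000], [30000, 65535]], dtype=np.uint16)
radiance = RADIANCE_MULT * dn.astype(np.float32) + RADIANCE_ADD
print(radiance.dtype)    # float32, like the exported GeoTIFF
```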

Subset of a Landsat 8, band 8 (panchromatic) image, depicting radiance at top-of-atmosphere of the Vesuvius area, on 29 July 2013.

The image looks very similar to the uncorrected one because we applied just a linear transformation, so the relationships between nearby pixels have not changed. However, the image is now in 32-bit floating-point format and takes twice the space in memory.

For the particular case of the thermal bands (bands 10 and 11), we can proceed to calculate the brightness temperature, again at TOA, without correcting for the atmospheric distortions, by applying:

$T = \frac{K_{2}}{\ln \left( \frac{K_{1}}{L_{\gamma}} + 1 \right)}$

where $K_{1}$ and $K_{2}$ are constants called K1_CONSTANT_BAND_X and K2_CONSTANT_BAND_X, respectively, in the metainformation file. As before, we extract these using a regular expression within a script and proceed to generate a corrected TIF using:

grdmath $K1 $band-radTOA.grd DIV 1 ADD LOG INV $K2 MUL = $band-temp.grd
gdal_translate -ot Float32 $band-temp.grd $band-temp.tif

The results are in kelvin (K), which we can easily transform to Celsius by subtracting 273.15. The brightness temperature in band 10, together with the difference between bands 10 and 11, looks like this:
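A quick numerical illustration of the formula (the K1 and K2 values below are plausible magnitudes for band 10, used here only as stand-ins for the constants read from the metadata file):

```python
import numpy as np

K1, K2 = 774.89, 1321.08   # hypothetical K1/K2 stand-ins for band 10

def brightness_temp_kelvin(radiance):
    # T = K2 / ln(K1 / L + 1)
    return K2 / np.log(K1 / radiance + 1.0)

L = 10.0                   # hypothetical TOA radiance, W m-2 sr-1 um-1
t = brightness_temp_kelvin(L)
print(t, t - 273.15)       # kelvin, then Celsius
```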

Brightness temperatures (°C) calculated for the Vesuvius area on 29 July 2013, at satellite pass time, using band 10 (left), and difference with temperature in band 11 (right).

As you see, the difference is very small, with a mean of -1.79 °C, and outliers on the top of the volcano and in some urban areas. Note how the northern slopes are slightly colder than the southern ones. Note also that these values are uncorrected for atmospheric effects and might therefore depart from the real temperatures measured at the surface at the same time. The above figures were drawn using R, which can also be used to perform the calculations described in this page.

Finally, the Landsat user pages also point to a conversion by which one can calculate reflectance. The procedure is the same as above: a linear transformation of the digital numbers using two parameters (REFLECTANCE_MULT_BAND_X, REFLECTANCE_ADD_BAND_X). Landsat 8 data come at the L1T processing level, meaning that they have already been corrected for topographic effects. According to the Landsat website this is done using a combination of SRTM, the ASTER DEM and other sources. These corrections work fairly well in most cases, but might not be entirely correct in polar areas, for example, where the existing DEMs might be too coarse.

# Working with Landsat 8, a tutorial. 2. creating simple composites

In this second post of the Landsat 8 series, we have a look at how to create simple color composites with uncorrected data, a common and useful task that is often misunderstood by beginners. For this purpose, it is necessary to understand two concepts: how digital sensors register information and how color images are constructed.

The first key concept is that sensors on board satellites or airplanes perform radiometric measurements in specific wavelength ranges of the electromagnetic spectrum. Sensor elements capture a portion of the outgoing (from Earth) radiation in a given spectral window, which is then converted to digital numbers, stored and, together with the set of neighboring measurements, coded as regular image grids. Images acquired in specific windows of the spectrum are called ‘bands’. The spectral ranges in which they were acquired, measured in nanometers or micrometers, together with their nominal pixel size, are key properties of the sensors. Each band is then a separate image file. For Landsat 8 we have:

| Designation | Range (µm) | Pixel size (m) | Denomination/Use |
|---|---|---|---|
| Band 1 | 0.43 - 0.45 | 30 | Coastal aerosol |
| Band 2 | 0.45 - 0.51 | 30 | Blue |
| Band 3 | 0.53 - 0.59 | 30 | Green |
| Band 4 | 0.64 - 0.67 | 30 | Red |
| Band 5 | 0.85 - 0.88 | 30 | Near Infrared (NIR) |
| Band 6 | 1.57 - 1.65 | 30 | Short wave infrared (SWIR) 1 |
| Band 7 | 2.11 - 2.29 | 30 | Short wave infrared (SWIR) 2 |
| Band 8 | 0.50 - 0.68 | 15 | Panchromatic |
| Band 9 | 1.36 - 1.38 | 30 | Cirrus |
| Band 10 | 10.60 - 11.19 | 100 * (30) | Thermal Infrared (TIRS) 1 |
| Band 11 | 11.50 - 12.51 | 100 * (30) | Thermal Infrared (TIRS) 2 |

The second key concept is that color images can be formed by projecting three separate gray-level (black and white) images through three separate color channels: blue, green, red (in increasing order of wavelength). This is what television sets and projectors do. If the three images are exactly the same, the result will be another gray-level image, but if these images contain differences, these will become manifest as colored areas, depending on the relative magnitude of the levels at a given point.
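The channel-mixing idea is easy to demonstrate with three small synthetic arrays (pure numpy, nothing Landsat-specific):

```python
import numpy as np

# Three gray-level images projected through the red, green and blue channels.
red = np.full((2, 2), 0.8)
green = np.full((2, 2), 0.8)
blue = np.full((2, 2), 0.8)
blue[0, 0] = 0.1   # one pixel reflects little blue light

rgb = np.dstack([red, green, blue])  # shape (rows, cols, 3), order RGB
print(rgb.shape)                     # (2, 2, 3)
# Where the channels are equal the pixel stays gray; where they differ,
# color appears (here the altered pixel turns yellowish).
```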

In the specific case of satellite images, we can create a ‘true-color’ image by combining the three images acquired in the blue, green and red parts of the visible part of the spectrum, through the blue, green and red channels of the image. For example, to create a true color composite of the Bay of Naples, we put the image collected in the red part of the visible spectrum (band 4 in Landsat 8) in the red channel, the green image (band 3) on the green channel, and the blue image (band 2) on the blue channel. We can do this from the command line by simply using the GDAL tool gdal_merge.py:

> gdal_merge.py -separate -o naples_true_composite.tif LC81890322013210LGN01_B4.TIF LC81890322013210LGN01_B3.TIF LC81890322013210LGN01_B2.TIF
0...10...20...30...40...50...60...70...80...90...100 - done.
Our test image will look like this:

Bay of Naples, Italy

Landsat images are about 185×185 km in size. We might not need such a big area, in which case we can clip out a subset. To achieve this from the command line, for example to extract the Vesuvius area, we can use the GDAL tool gdal_translate:

> gdal_translate -projwin 431000 4539000 471000 4489000 naples_true_composite.tif vesuvius.tif
Input file size is 7761, 7901
Computed -srcwin 2317 1460 1333 1667 from projected window.
0...10...20...30...40...50...60...70...80...90...100 - done.

where, using the option -projwin, we introduced the window in projected coordinates: ulx (upper left x), uly (upper left y), lrx (lower right x), lry (lower right y). Note that we need to pick the coordinates of the corners, which we can do in any viewer, such as QGIS.
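The “-srcwin … from projected window” message is just arithmetic: pixel offsets and sizes derive from the window corners, the image origin and the pixel size. The origin below is hypothetical (read the real one with gdalinfo), chosen here so the numbers come out like those in the output above:

```python
# Hypothetical upper-left origin of the source image, for illustration only.
origin_x, origin_y = 361490.0, 4582800.0
pixel = 30.0

ulx, uly, lrx, lry = 431000, 4539000, 471000, 4489000  # the -projwin corners
xoff = round((ulx - origin_x) / pixel)   # columns from the left edge
yoff = round((origin_y - uly) / pixel)   # rows from the top edge
xsize = round((lrx - ulx) / pixel)
ysize = round((uly - lry) / pixel)
print(xoff, yoff, xsize, ysize)          # 2317 1460 1333 1667
```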

Our image now looks like this:

Vesuvius area, Campania, Italy.

which is an approximately 40 × 50 km area around Vesuvius. Note the northern tip of Capri in the southwest corner.

Alternatively, we might create a ‘false color composite’ by using a different combination of bands in the channels. An extremely useful combination, particularly for vegetation studies, is to use the near-infrared band (band 5), which is not visible to the human eye, in the red channel, the red image in the green channel, and the green image in the blue channel (remember that channels are usually specified in the order RGB). Using the same procedure as above, we do:

> gdal_merge.py -separate -o naples_false_composite.tif LC81890322013210LGN01_B5.TIF LC81890322013210LGN01_B4.TIF LC81890322013210LGN01_B3.TIF
0...10...20...30...40...50...60...70...80...90...100 - done.

and we get:

Bay of Naples, Italy, infrared false color composite.

Note the strong red color of the vegetation, as it is much more reflective in the infrared than in the visible wavelengths.

As a final point, we can check the properties of the Vesuvius image with gdalinfo:

> gdalinfo vesuvius.tif
Driver: GTiff/GeoTIFF
Files: vesuvius.tif
Size is 1333, 1667
Coordinate System is:
PROJCS["WGS 84 / UTM zone 33N",
    GEOGCS["WGS 84",
        DATUM["WGS_1984",
            SPHEROID["WGS 84",6378137,298.257223563,
                AUTHORITY["EPSG","7030"]],
            AUTHORITY["EPSG","6326"]],
        PRIMEM["Greenwich",0],
        UNIT["degree",0.0174532925199433],
        AUTHORITY["EPSG","4326"]],
    PROJECTION["Transverse_Mercator"],
    PARAMETER["latitude_of_origin",0],
    PARAMETER["central_meridian",15],
    PARAMETER["scale_factor",0.9996],
    PARAMETER["false_easting",500000],
    PARAMETER["false_northing",0],
    UNIT["metre",1,
        AUTHORITY["EPSG","9001"]],
    AUTHORITY["EPSG","32633"]]
Origin = (430995.000000000000000,4539015.000000000000000)
Pixel Size = (30.000000000000000,-30.000000000000000)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=PIXEL
Corner Coordinates:
Upper Left  (  430995.000, 4539015.000) ( 14d10'46.25"E, 40d59'57.85"N)
Lower Left  (  430995.000, 4489005.000) ( 14d11' 6.14"E, 40d32'56.15"N)
Upper Right (  470985.000, 4539015.000) ( 14d39'17.97"E, 41d 0' 6.51"N)
Lower Right (  470985.000, 4489005.000) ( 14d39'26.33"E, 40d33' 4.67"N)
Center      (  450990.000, 4514010.000) ( 14d25' 9.19"E, 40d46'32.18"N)
Band 1 Block=1333x1 Type=UInt16, ColorInterp=Gray
Band 2 Block=1333x1 Type=UInt16, ColorInterp=Undefined
Band 3 Block=1333x1 Type=UInt16, ColorInterp=Undefined

We see that the image has a completely defined spatial reference system (SRS), with datum and coordinates referred to the WGS84 ellipsoid, and a cartographic projection: Universal Transverse Mercator (UTM), zone 33 north. We also see from the last lines that this image is coded as 16-bit unsigned integers. In the next post, we will see how to convert these digital numbers into geophysical measurements of radiance.

# Working with Landsat 8, a tutorial. 1: getting the data

This and the following posts in this series are excerpts from a Landsat tutorial I wrote for a GIS course that I recently taught. Although the procedures are simple, they are far from obvious for many potential users and I thought that I could well put the tutorial here where it can be beneficial to many more people. In this first part, as I did with the MODIS tutorial, we focus on selecting and downloading the data, and leave processing and correction for later posts.

The Landsat program, a partnership between NASA and USGS, has imaged the Earth surface with purpose-built instruments since 1972 (see http://landsat.gsfc.nasa.gov/), offering a unique glimpse into more than 40 years of global environmental change. Landsat 8, launched on 11 February 2013, is the latest addition to this successful program and includes many interesting refinements over its predecessors, like 12-bit radiometric quantization (coded into 16-bit images) and improved signal-to-noise ratio. Perhaps the most interesting feature of the Landsat program is that all data are free to use for everyone.

We start by opening the USGS Global Visualization viewer (Glovis):

On the left side you will see a tiny map of the world, which you can use to navigate to your spot of interest. Landsat images can also be located using a system known as WRS-2, in which locations are specified in terms of two coordinates: path and row. All scenes with the same path and row depict the same region of the world. It is convenient to become familiar with the path/row coordinates for your area of interest.

You can choose to see several scenes at the same time, or just one, by choosing the appropriate scale in the resolution tab. You can also specify a date and a maximum cloud cover. Note that cloud cover is estimated automatically and might not be accurate: always check visually. By clicking on 'Prev scene' or 'Next scene' you go back or forward in time through the image database. By default Glovis opens in the 'Landsat' collection, which includes all Landsat satellites. If you need only Landsat 8 images, just select that specific data set on the 'Collection' tab.

In this tutorial we will use an image of the beautiful Bay of Naples, Italy, hotspot of history and summer home to a few Roman emperors, corresponding to path/row 189/32. We see that the Landsat 8 image of 29 July 2013 is completely cloud free:

You will get a big compressed 'tar.gz' file, in our case LC81890322013210LGN01.tar.gz. It takes a while to download. On Windows, you can access this format using the program 7-zip. On Linux, any archive manager will do, or just execute the command:

tar -xvf LC81890322013210LGN01.tar.gz

The output will look like this:

LC81890322013210LGN01_B1.TIF
LC81890322013210LGN01_B2.TIF
LC81890322013210LGN01_B3.TIF
LC81890322013210LGN01_B4.TIF
LC81890322013210LGN01_B5.TIF
LC81890322013210LGN01_B6.TIF
LC81890322013210LGN01_B7.TIF
LC81890322013210LGN01_B8.TIF
LC81890322013210LGN01_B9.TIF
LC81890322013210LGN01_B10.TIF
LC81890322013210LGN01_B11.TIF
LC81890322013210LGN01_BQA.TIF
LC81890322013210LGN01_MTL.txt

Here we have all the individual band image files (*.TIF), the quality control image (*_BQA.TIF), and the metadata text file, with extension *_MTL.txt.
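Each filename starts with the scene identifier, which itself encodes the acquisition: satellite, path, row and date. As a sketch, here is how the pre-Collection scene ID layout (LXSPPPRRRYYYYDDDGSIVV) can be unpacked with plain Python; the field positions are my reading of that naming scheme:

```python
from datetime import date, timedelta

def parse_scene_id(scene_id):
    """Split a pre-Collection Landsat scene ID (LXSPPPRRRYYYYDDDGSIVV)."""
    year = int(scene_id[9:13])
    doy = int(scene_id[13:16])  # day of year
    return {
        "sensor": scene_id[1],          # C = combined OLI/TIRS
        "satellite": int(scene_id[2]),  # 8 = Landsat 8
        "path": int(scene_id[3:6]),
        "row": int(scene_id[6:9]),
        "date": date(year, 1, 1) + timedelta(days=doy - 1),
        "station": scene_id[16:19],     # receiving ground station code
        "version": scene_id[19:21],
    }

info = parse_scene_id("LC81890322013210LGN01")
print(info["path"], info["row"], info["date"])  # 189 32 2013-07-29
```

Note that day-of-year 210 of 2013 is indeed 29 July, the acquisition date of our Bay of Naples scene.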

The details of the individual bands can be found here. Note that Landsat 8 band numbers do not correspond to those of the previous satellites, which can be confusing. A useful comparison with Landsat 7 can be found here.
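The *_MTL.txt metadata file mentioned above is a plain-text file of GROUP blocks with `KEY = VALUE` lines, so it can be read without any special library. A minimal parsing sketch (the keys in the sample follow the MTL naming scheme, but the values are illustrative, not copied from this scene):

```python
def parse_mtl(text):
    """Parse 'KEY = VALUE' lines of a Landsat MTL metadata file into a dict."""
    metadata = {}
    for line in text.splitlines():
        if "=" not in line:
            continue
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip().strip('"')
        if key in ("GROUP", "END_GROUP"):  # skip structural markers
            continue
        metadata[key] = value
    return metadata

# Illustrative sample; values are made up for demonstration:
sample = '''GROUP = L1_METADATA_FILE
  SPACECRAFT_ID = "LANDSAT_8"
  SUN_ELEVATION = 62.1
END_GROUP = L1_METADATA_FILE'''

meta = parse_mtl(sample)
print(meta["SPACECRAFT_ID"], float(meta["SUN_ELEVATION"]))
```

We will need values from this file (gains, offsets, sun elevation) when converting digital numbers to radiance in the next post.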

In the next post, we will go through the steps to convert these separate files into colour composites, and transform them to radiance values at top-of-atmosphere. Happy downloading!

Traffic jam in São Paulo, Brazil. Source: WikiCommons. Photo: Mario Roberto Duran Ortiz Mariordo.

A visit to any of the world's great cities will convince anyone that cars are no longer a useful way to travel around dense urban areas. Yet we seem to insist on our wheeled, smoking addiction against all evidence, building and buying more cars and adapting our cities to them, decreasing our overall quality of life in the process. Why?

The automobile has been an icon of modern industrial society for about a century. Almost every TV commercial will show us shiny large new cars speeding across empty roads in wide open natural spaces, or through some old and elegant (and curiously empty!) European city. These cars will often be driven by handsome, affluent middle-aged men accompanied by gorgeous, sexy women. Yet this is not how reality looks for most of us after we buy that car! This says a lot, and not only about the gender perspective of these commercials. As powerful status symbols, larger and more sophisticated cars are usually seen as proxies for our personal worth and success, at least according to the hedonistic values of our material society.

We seem to hold the religious belief that the personal car gives us freedom and well-being, and the speed and flexible mobility deemed necessary for a modern urban life. Yet this is far from the truth. Traffic congestion and its attendant visual, noise and chemical pollution are a major source of avoidable health issues, such as stress, respiratory and skin problems [1]. Another interesting argument, recently put forward by G. Backhaus [2], states that the rise of atmospheric CO2 concentrations and its concomitant chain of dangerous planetary changes (global warming, ocean acidification, sea-level rise, disruption of agricultural cycles, etc.) can be seen as just a symptom of the prevalent worldview, one in which massive fossil-fuel burning is central, and of which car addiction is just a part.

Getting rid of this is not easy. We can see it a bit more clearly from the systems thinking perspective: over several decades we have built a massive system for urban traffic and a massive industry to sustain it (if the car industry were a country, it would have the world's 6th largest GDP [3]). We have now become addicted to its functionality and supposed benefits, such as employment and regional development. Some cities in the world are literally built for the car, with the idea that people will work downtown while living in far-off suburbs and making their purchases in far-off malls. We have invested astronomical amounts of effort and money in this global idea, and some people have become astonishingly wealthy in the process. This system is almost self-sustaining, at a cost we are not paying right now (environmental disruption at planetary scale, soon upon us). It is not surprising, then, that we refuse to see its disadvantages, and many amongst us are often quick to demean alternative forms of urban mobility, with the excuse of the potential cost involved: an almost archetypical sunk-cost fallacy.

Cars still have reasons to exist, even in urban areas, as taxis and emergency vehicles, and particularly in sparsely populated areas where public transport might be inadequate. For the dense inner cores of cities, however, electrically driven public transportation is a far superior alternative [*]. A cleverly and densely laid system of tramways, subways and trolleybuses can effectively and cleanly deal with the necessary mobility of millions of people, not to mention that people can be encouraged to walk or cycle. Taking cars off the streets liberates space for living and meeting, which leads to enormous positive social side-effects, because people start having more opportunities to meet and knit the social mesh, something that is often lost in modern megacities. We have some good examples of this at hand: Vauban in Freiburg (Germany), Pontevedra (Spain) or Hydra (Greece), where parts of these cities have been closed to car traffic and have then been reclaimed by people as living, playing and meeting space, positively contributing to local social well-being and democracy.

Living in cities does not need to be a torture, but in many places around the world people still insist on making it one. The situation seems scarily similar to that of the ancient inhabitants of Easter Island, or the Norsemen in Greenland: eroding the basic tenets of our existence to mindlessly serve some bogus religious purpose that is neither uplifting nor necessary. In the face of the coming mega-urbanity of the 21st century, every move to make cities livable and human is useful. It is time to change our frame of mind. Urban automobility is dead. We had better bury it and move on; the effort will be worthwhile.

[1] Armah, Frederick A.; Yawson, David O.; Pappoe, Alex A. N. M. 2010. “A Systems Dynamics Approach to Explore Traffic Congestion and Air Pollution Link in the City of Accra, Ghana.” Sustainability 2, no. 1: 252-265. http://www.mdpi.com/2071-1050/2/1/252

[2] Backhaus, Gary. 2009. “Automobility: Global Warming as Symptomatology.” Sustainability 1, no. 2: 187-208. http://www.mdpi.com/2071-1050/1/2/187

[3] De Vries, Bert J. M. 2012. Sustainability Science. Cambridge University Press.