6 MICROWAVE AND LIDAR SENSING
6.1 INTRODUCTION
The word radar originated as an acronym for radio detection and ranging. As its
name implies, radar was developed as a means of using radio waves to detect the
presence of objects and to determine their distance and sometimes their angular
position. The process entails transmitting short bursts, or pulses, of microwave
energy in the direction of interest and recording the strength and origin of
“echoes” or “reflections” received from objects within the system’s field of view.
Radar systems may or may not produce images, and they may be ground
based or mounted in aircraft or spacecraft. A common form of nonimaging radar
is the type used to measure vehicle speeds. These systems are termed Doppler
radar systems because they utilize Doppler frequency shifts in the transmitted and
returned signals to determine an object’s velocity. Doppler frequency shifts are a
function of the relative velocities of a sensing system and a reflector. For example,
we perceive Doppler shifts in sound waves as a change in pitch, as in the case of
a passing car horn or train whistle. The Doppler shift principle is often used in
analyzing the data generated from imaging radar systems.
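The Doppler relationship described above can be sketched with a short calculation. For a monostatic radar, the two-way Doppler frequency shift is fd = 2vr/λ, where vr is the radial (line-of-sight) velocity; the vehicle speed and wavelength below are illustrative values, not tied to any particular system.

```python
def doppler_shift(radial_velocity_m_s, wavelength_m):
    """Two-way Doppler shift (Hz) for a monostatic radar.

    The factor 2 arises because both the outgoing and returning
    waves are shifted by the target's radial motion.
    """
    return 2.0 * radial_velocity_m_s / wavelength_m

# A car approaching at 30 m/sec (about 108 km/hr), observed with an
# illustrative 1.25-cm (Ka-band) traffic radar:
print(doppler_shift(30.0, 0.0125))  # 4800.0 Hz
```

A receiver that can resolve this few-kilohertz shift can therefore measure vehicle speed directly, without forming an image.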
Another common form of radar is the plan position indicator (PPI) system.
These systems have a circular display screen on which a radial sweep indicates
the position of radar “echoes.” Essentially, a PPI radar provides a continuously
updated plan-view map of objects surrounding its rotating antenna. These sys-
tems are common in weather forecasting, air traffic control, and navigation appli-
cations. However, PPI systems are not appropriate for most remote sensing
applications because they have rather poor spatial resolution.
Airborne and spaceborne radar remote sensing systems are collectively
referred to as imaging radar. Imaging radar systems employ an antenna fixed
below the aircraft (or spacecraft) and pointed to the side. Thus, such systems
were originally termed side-looking radar (SLR), or side-looking airborne radar
(SLAR) in the case of airborne systems. Modern imaging radar systems use
advanced data processing methods (described in Section 6.4) and are referred to
as synthetic aperture radar (SAR) systems. Regardless of the terminology used to
identify them, imaging radar systems produce continuous strips of imagery
depicting extensive ground areas that parallel the platform’s flight line.
Imaging radar was first developed for military reconnaissance purposes in the
late 1940s. It became an ideal military reconnaissance system, not only because
it affords nearly an all-weather operating capability, but also because it is an
active, day-or-night imaging system. The military genesis of imaging radar has
had two general impacts on its subsequent application to civilian remote sensing
uses. First, there was a time lag between military development, declassification,
and civilian application. Less obvious, but nonetheless important, is the fact that
military radar systems were developed to look at military targets. Terrain features
that “cluttered” radar imagery and masked objects of military importance were
naturally not of interest in original system designs. However, with military declas-
sification and improvement in nonmilitary capabilities, imaging radar has evolved
into a powerful tool for acquiring natural resource data.
6.2 RADAR DEVELOPMENT

In the years since the first edition of this book was published, the science and
technology of radar remote sensing have undergone dramatic progress. Today, at
least eight major satellite radar systems provide imagery worldwide, day and
night, to monitor the planet’s oceans, land, and ice caps. These data support
operations ranging from
agricultural forecasting to natural disaster response. Scientists have learned a great
deal about the ways that radar signals interact with different environments and are
using this understanding to develop new applications of radar technology.
Some of the earliest broad-scale applications of radar remote sensing occur-
red in regions where persistent cloud cover limits the acquisition of imagery
in the optical portion of the spectrum. The first such large-scale project for
mapping terrain with side-looking airborne radar was a complete survey of the
Darien province of Panama. This survey was undertaken in 1967 and resulted in
images used to produce a mosaic of a 20,000-km² ground area. Prior to that time,
this region had never been photographed or mapped in its entirety because of
persistent (nearly perpetual) cloud cover. The success of the Panama radar map-
ping project led to the application of radar remote sensing throughout the world.
In 1971, a radar survey was begun in Venezuela that resulted in the mapping
of nearly 500,000 km² of land. This project improved the accuracy with which
the boundaries between Venezuela and its neighboring countries were located.
It also permitted a systematic inventory and mapping of the country’s water
resources, including the discovery of the previously unknown sources of several
major rivers. Likewise, improved geologic maps of the country were produced.
Also beginning in 1971 was Project Radam (standing for Radar of the Amazon),
a reconnaissance survey of the Amazon and the adjacent Brazilian northeast. At
that time, this was the largest radar mapping project ever undertaken. By the end
of 1976, more than 160 radar mosaic sheets covering an area in excess of
8,500,000 km² had been completed. Scientists used these radar mosaics as base
maps in a host of studies, including geologic analysis, timber inventory, transpor-
tation route location, and mineral exploration. Large deposits of important miner-
als were discovered after intensive analysis was made of newly discovered features
shown by radar. Mapping of previously uncharted volcanic cones, and even large
rivers, resulted from this project. In such remote and cloud-covered areas of the
world, radar imagery is a prime source of inventory information about potential
mineral resources, forestry and range resources, water supplies, transportation
routes, and sites suitable for agriculture. Such information is essential to planning
sustainable development in such ecologically sensitive areas.
Along with the humid tropics, the Arctic and Antarctic regions present some of
the most challenging conditions for optical remote sensing. At high latitudes,
where cloud cover is frequent and the sun is low in the sky or absent for much of
the year, imaging radar is used to track the extent and movement of sea ice to
ensure the safety of shipping lanes. On the circumpolar land masses, radar is used
to measure the velocity of glaciers and to map surface features in areas that may
only rarely, if ever, be visited by humans. Figure 6.1 shows the first-ever compre-
hensive image mosaic of Antarctica, produced in 1997 by the Radarsat Antarctic
Mapping Program (RAMP; Jezek, 2003). In this image, the seemingly featureless
expanse of Antarctica turns out to be far more variable than previously realized,
Figure 6.1 Radarsat mosaic of the entire continent of Antarctica, derived from over 3150
individual radar images. (Courtesy Canadian Space Agency.)
with extensive areas of higher and lower radar reflectivity associated with differing
conditions of snow deposition and surface melt; the presence of features such as
crevasses, snow dune fields, and ice shelves; and varying roughness of the sub-
glacial terrain. In addition to this image mosaic, Radarsat imagery has been pro-
cessed using the technique of radar interferometry (Section 6.9) to produce a
digital elevation model of the whole continent, while repeated image acquisitions
in subsequent years by Radarsat-2 have enabled measurement of the velocity of ice
flow likewise across the entirety of the continent.
Numerous other applications of radar imagery have been demonstrated in the
areas of geologic mapping, mineral exploration, flood inundation mapping, for-
estry, agriculture, and urban and regional planning. Over the oceans, radar ima-
gery has been used extensively to determine wind, wave, and ice conditions and
to map the extent and movement of oil spills. Finally, the sophisticated data pro-
cessing methods used in radar interferometry are now being widely employed for
topographic mapping and for measuring changes in topography caused by factors
ranging from the movements of magma within volcanoes to the gradual sub-
sidence of the surface in areas of groundwater extraction.
Radar remote sensing from space began with the launch of Seasat in 1978 and
continued with the Shuttle Imaging Radar (SIR) and Soviet Cosmos experiments in
the 1980s. The “operational” era of spaceborne radar remote sensing began in the
early 1990s with the Almaz-1, ERS-1, JERS-1, and Radarsat-1 systems launched by
the former Soviet Union, European Space Agency, Japan, and Canada, respectively.
In the new century, radar satellites have proliferated, culminating with the develop-
ment of multi-satellite “constellations” such as Germany’s TerraSAR-X/TanDEM-X
pair, Italy’s COSMO-SkyMed suite of satellites, and other multi-satellite missions
planned by Canadian and European space agencies. The field of spaceborne radar
remote sensing thus continues to be characterized by rapid technological advances,
an expanding range of sources of data, and a high level of international participation.
6.3 IMAGING RADAR SYSTEM OPERATION

The basic operating principle of an imaging radar system is shown in Figure 6.2.
Microwave energy is transmitted from an antenna in very short bursts or
pulses. These high-energy pulses are emitted over a time period on the order of
microseconds (10⁻⁶ sec). In Figure 6.2, the propagation of one pulse is shown by
indicating the wavefront locations at successive increments of time. Beginning
with the solid lines (labeled 1 through 10), the transmitted pulse moves radially
outward from the aircraft in a constrained (or narrow) beam. Shortly after
time 6, the pulse reaches the house, and a reflected wave (dashed line) is shown
beginning at time 7. At time 12, this return signal reaches the antenna and
is registered at that time on the antenna response graph (Figure 6.2b). At time 9,
the transmitted wavefront is reflected off the tree, and this “echo” reaches the
antenna at time 17. Because the tree is less reflective of radar waves than
the house, a weaker response is recorded in Figure 6.2b.
Figure 6.2 Operating principle of imaging radar: (a) propagation of one radar
pulse (indicating the wavefront location at time intervals 1–17); (b) resulting
antenna return.
(Note that the factor 2 enters into the equation because the time is measured
for the pulse to travel the distance both to and from the target, or twice the
range.) This principle of determining distance by electronically measuring the
transmission–echo time is central to imaging radar systems.
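The factor-of-2 relationship noted above corresponds to the slant-range equation SR = ct/2, with c the speed of light and t the round-trip transmit-to-echo time. A minimal sketch follows; the 1-μsec time unit applied to the numbered wavefronts of Figure 6.2 is an illustrative assumption, not a value from the figure.

```python
C = 3e8  # speed of light, m/sec

def slant_range(echo_time_s):
    """Slant range from the round-trip echo time; the division by 2
    converts the out-and-back path into a one-way distance."""
    return C * echo_time_s / 2.0

# In Figure 6.2, the house's echo returns after 12 time units and the
# tree's after 17; assuming an illustrative 1-microsecond time unit:
print(round(slant_range(12e-6), 1))  # 1800.0 m
print(round(slant_range(17e-6), 1))  # 2550.0 m
```

Because the tree’s echo arrives later, the system places it farther from the antenna than the house, which is exactly how range is mapped on the image.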
One manner in which radar images are created is illustrated in Figure 6.3. As
the aircraft advances, the antenna (1) is continuously repositioned along the flight
Figure 6.3 Imaging radar system operation. (Adapted from Lewis, 1976.)
Figure 6.4 TerraSAR-X image of the Three Gorges Dam on the Yangtze River, China (scale 1:170,000). Imagery
acquired in October at 1 m to 5 m spatial resolution. (Copyright: DLR e.V. 2009, Distribution Airbus DS/Infoterra
GmbH.)
Yangtze River in China. The dam is 181 m high and spans a length of 2300 m. In
terms of installed capacity, it is the largest electrical generating station in the
world, and it also serves to reduce the risk of flooding on the lower Yangtze. In
this image, the river appears as a dark band running from left to right (west to
east) across the center of the image. Its predominantly dark tone is due to the fact
that the smooth surface of the water acts as a specular reflector, producing very
little radar backscatter and a correspondingly low amplitude of the returning
radar signal. The dam itself, and other surrounding structures, appear relatively
bright due to their geometric configuration and the materials used in their con-
struction; likewise, ships and barges on the river reflect strongly back toward
the radar antenna and appear as bright points or rectangles. The land surface
north and south of the river appears quite rough, because of the irregular moun-
tainous terrain. As will be noted in Section 6.5 and elsewhere in this chapter, the
side-looking geometry of radar images tends to emphasize topography by giving
slopes facing toward the sensor a bright appearance while slopes facing away are
darker, with especially prominent shadows in steep terrain. In Figure 6.4, the pro-
minent rightward-pointing shape of the hills and mountains is an indicator that
the TerraSAR-X satellite was viewing this area from the right (east) when the
image was acquired.
Figure 6.5 Seasat radar image of Appalachian Mountains of Pennsylvania, L band, midsummer. Scale 1:575,000.
(Courtesy NASA/JPL/Caltech.)
apparent. In Figure 6.5, the signals from the radar system in the spacecraft were
transmitted toward the bottom of the page, and the signals received by the radar
system are those reflected back toward the top of the page. Note that, as men-
tioned above, the topographic slopes of the linear hills and valleys associated with
the folded sedimentary rocks that face the spacecraft return strong signals,
whereas the flatter areas and slopes facing away from the spacecraft return
weaker signals. Note also that, as with the Yangtze River in Figure 6.4, the river
seen at upper left of Figure 6.5 is dark toned because of specular reflection away
from the sensor. (Section 6.8 describes earth surface feature characteristics influ-
encing radar returns in some detail.)
Figure 6.6 illustrates the nomenclature typically used to describe the geo-
metry of radar data collection. A radar system’s look angle is the angle from
nadir to a point of interest on the ground. The complement of the look angle is
called the depression angle. The incident angle is the angle between the incident
radar beam at the ground and the normal to the earth’s surface at the point of
incidence. (The incident angle is often referred to using the grammatically ques-
tionable term “incidence angle” in the literature. The two terms are synonymous.)
In the case of airborne imaging over flat terrain, the incident angle is approxi-
mately equal to the look angle. In radar imaging from space, the incident angle
is slightly greater than the look angle due to earth curvature. The local incident
angle is the angle between the incident radar beam at the ground and the normal
to the ground surface at the point of incidence. The incident angle and the local
incident angle are the same only in the case of level terrain.
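The airborne and spaceborne cases described above can be sketched numerically. For a spherical Earth, the law of sines in the Earth-center/platform/target triangle gives sin(incident) = (1 + h/Re)·sin(look); this relation, the mean Earth radius, and the altitudes below are our illustrative assumptions, not values from the text.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius (assumed value)

def incident_angle_deg(look_angle_deg, platform_altitude_km):
    """Incident angle over a spherical Earth, from the law of sines in the
    Earth-center / platform / target triangle:
        sin(incident) = (1 + h / Re) * sin(look)
    For aircraft altitudes h << Re this reduces to incident ~ look angle.
    """
    s = (1.0 + platform_altitude_km / EARTH_RADIUS_KM) * math.sin(
        math.radians(look_angle_deg))
    return math.degrees(math.asin(s))

# Airborne case (10 km): the incident angle is essentially the look angle.
print(round(incident_angle_deg(30.0, 10.0), 2))
# Spaceborne case (800 km): the incident angle is a few degrees larger.
print(round(incident_angle_deg(30.0, 800.0), 2))
```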
The ground resolution cell size of an imaging radar system is controlled by
two independent sensing system parameters: pulse length and antenna beamwidth.
The pulse length of the radar signal is determined by the length of time that the
antenna emits its burst of energy. As can be seen in Figure 6.7, the signal pulse
length dictates the spatial resolution in the direction of energy propagation. This
direction is referred to as the range direction. The width of the antenna beam
determines the resolution cell size in the flight, or azimuth, direction. We con-
sider each of these elements controlling radar spatial resolution separately.
Range Resolution
For a radar system to image separately two ground features that are close to each
other in the range direction, it is necessary for the reflected signals from all parts
of the two objects to be received separately by the antenna. Any overlap in time
between the signals from two objects will cause their images to be blurred toge-
ther. This concept is illustrated in Figure 6.7. Here a pulse of length PL (deter-
mined by the duration of the pulse transmission) has been transmitted toward
buildings A and B. Note that the slant-range distance (the direct sensor-to-target
distance) between the buildings is less than PL/2. Because of this, the pulse has
had time to travel to B and have its echo return to A while the end of the pulse at
A continues to be reflected. Consequently, the two signals are overlapped and will
be imaged as one large object extending from building A to building B. If the
slant-range distance between A and B were anything greater than PL/2, the two
signals would be received separately, resulting in two separate image responses.
Thus, the slant-range resolution of an imaging radar system is independent of the
distance from the aircraft and is equal to half the transmitted pulse length.
Although the slant-range resolution of an imaging radar system does not
change with distance from the aircraft, the corresponding ground-range resolution
does. As shown in Figure 6.8, the ground resolution in the range direction varies
inversely with the cosine of the depression angle. This means that the ground-
range resolution becomes smaller with increases in the slant-range distance.
Accounting for the depression angle effect, the ground resolution in the range
direction Rr is found from

    Rr = cτ / (2 cos θd)                                    (6.2)

where τ is the pulse duration, c is the speed of light, and θd is the depression angle.
EXAMPLE 6.1
A given imaging radar system transmits pulses over a duration of 0.1 μsec. Find the range
resolution of the system at a depression angle of 45°.
Solution
From Eq. 6.2

    Rr = (3 × 10⁸ m/sec × 0.1 × 10⁻⁶ sec) / (2 × 0.707) = 21 m
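Eq. 6.2 and Example 6.1 translate directly to a few lines of code (the function name is ours):

```python
import math

C = 3e8  # speed of light, m/sec

def ground_range_resolution(pulse_duration_s, depression_angle_deg):
    """Ground-range resolution, Rr = c * tau / (2 cos(depression angle)),
    per Eq. 6.2."""
    return C * pulse_duration_s / (
        2.0 * math.cos(math.radians(depression_angle_deg)))

# Example 6.1: a 0.1-microsecond pulse at a 45-degree depression angle.
print(round(ground_range_resolution(0.1e-6, 45.0), 1))  # 21.2 m
```

Evaluating the same pulse at a steeper depression angle (say 60°) gives a coarser ground-range resolution, consistent with the 1/cos θd dependence.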
Azimuth Resolution
Early imaging radar systems were significantly limited in their resolution in the
azimuth dimension. With the development of a set of advanced data processing
techniques known as synthetic aperture processing, it became possible to greatly
improve the azimuth resolution of these systems. We begin this discussion of azi-
muth resolution by considering the inherent resolution of a simple radar without
synthetic aperture processing, and then move on to discuss the great improve-
ment in resolution for synthetic aperture radar systems.
As shown in Figure 6.9, the resolution of an imaging radar system in the azi-
muth direction, Ra, is determined by the angular beamwidth β of the antenna and
the slant range SR. As the antenna beam “fans out” with increasing distance from
the aircraft, the azimuth resolution deteriorates. Objects at points A and B in
Figure 6.9 would be resolved (imaged separately) at SR₁ but not at SR₂. That is,
at distance SR₁, A and B result in separate return signals. At distance SR₂, A and
B would be in the beam simultaneously and would not be resolved.
Azimuth resolution Ra is given by

    Ra = SR × β                                             (6.3)
EXAMPLE 6.2
A given imaging radar system has a 1.8-mrad antenna beamwidth. Determine the azimuth
resolution of the system at slant ranges of 6 and 12 km.
Solution
From Eq. 6.3

    Ra(6 km) = 6 × 10³ m × 1.8 × 10⁻³ = 10.8 m

and

    Ra(12 km) = 12 × 10³ m × 1.8 × 10⁻³ = 21.6 m
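Eq. 6.3 and Example 6.2 can likewise be checked in code (function name ours):

```python
def azimuth_resolution(slant_range_m, beamwidth_rad):
    """Real-aperture azimuth resolution, Ra = SR * beta, per Eq. 6.3."""
    return slant_range_m * beamwidth_rad

# Example 6.2: a 1.8-mrad beamwidth at 6-km and 12-km slant ranges.
print(round(azimuth_resolution(6e3, 1.8e-3), 1))   # 10.8 m
print(round(azimuth_resolution(12e3, 1.8e-3), 1))  # 21.6 m
```

Doubling the slant range doubles Ra, which is exactly the far-range degradation that synthetic aperture processing is designed to overcome.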
6.4 SYNTHETIC APERTURE RADAR

The deficiencies of brute-force operation are overcome in synthetic aperture radar
(SAR) systems. These systems employ a short physical antenna, but through
modified data recording and processing techniques, they synthesize the effect of a
very long antenna. The result of this mode of operation is a very narrow effective
antenna beamwidth, even at far ranges, without requiring a physically long
antenna or a short operating wavelength.
At the detailed level, the operation of SAR systems is quite complex. Suffice it
to say here that these systems operate on the principle of using the sensor motion
along track to transform a single physically short antenna into an array of such
antennas that can be linked together mathematically as part of the data recording
and processing procedures (Elachi, 1987). This concept is shown in Figure 6.10.
The “real” antenna is shown in several successive positions along the flight line.
These successive positions are treated mathematically (or electronically) as if they
were simply successive elements of a single, long synthetic antenna. Points on the
ground at near range are viewed by proportionately fewer antenna elements than
those at far range, meaning effective antenna length increases with range. This
Figure 6.10 Concept of an array of real antenna positions forming a synthetic aperture.
Figure 6.12 Variation with distance of spatial resolution of real aperture (a) versus synthetic aperture
(b) imaging radar systems.
antennas receive signals reflected from the surface. The bistatic configuration
shown in 6.13b is particularly useful for SAR interferometry (Section 6.9), a tech-
nique in which radar images are acquired over a given area from two slightly
different angles. Figure 6.13c shows a variant of bistatic radar in which the pas-
sive (receiving-only) antenna is located on the opposite side of the area being
imaged, so it records radar signals that are forward-scattered rather than
backscattered. The Shuttle Radar Topography Mission (Section 6.19) and the
TerraSAR-X/TanDEM-X satellite constellation (Section 6.16) are examples of sys-
tems that have used bistatic SAR imaging.
SAR systems can also be divided into unfocused and focused systems. Again
the details of these systems are beyond our immediate concern. The interesting
point about these systems is that the theoretical resolution of unfocused systems
is a function of wavelength and range, not antenna length. The theoretical
resolution of a focused system is a function of antenna length, regardless of range
or wavelength. More particularly, the resolution of a focused synthetic aperture
system is approximately one-half the actual antenna length. That is, the shorter
the antenna, the finer the resolution. In theory, the resolution for a 1-m antenna
would be about 0.5 m, whether the system is operated from an aircraft or a space-
craft! Radar system design is replete with trade-offs among operating range, reso-
lution, wavelength, antenna size, and overall system complexity. (For additional
technical information on SAR systems, see Raney, 1998.)
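The contrast between real-aperture and focused SAR azimuth resolution can be sketched numerically. The beamwidth approximation β ≈ λ/L is a commonly used diffraction relation rather than a formula from this text, and the antenna length, wavelength, and slant range below are illustrative assumptions.

```python
def real_aperture_azimuth_res(slant_range_m, wavelength_m, antenna_length_m):
    """Real-aperture resolution: beamwidth is roughly wavelength / antenna
    length, so resolution degrades linearly with slant range."""
    return slant_range_m * wavelength_m / antenna_length_m

def focused_sar_azimuth_res(antenna_length_m):
    """Focused SAR resolution: about half the physical antenna length,
    independent of range and wavelength."""
    return antenna_length_m / 2.0

# A 1-m antenna at L band (23.5 cm) from a 700-km spaceborne slant range:
print(real_aperture_azimuth_res(700e3, 0.235, 1.0))  # ~164 km -- unusable
print(focused_sar_azimuth_res(1.0))                  # 0.5 m
```

The comparison makes the trade-off vivid: without aperture synthesis, a spaceborne real-aperture system would need an impossibly long antenna to achieve useful azimuth resolution.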
6.5 GEOMETRIC CHARACTERISTICS OF RADAR IMAGERY

The geometry of radar imagery is fundamentally different from that of both pho-
tography and scanner imagery. This difference arises because radar is a
distance- rather than an angle-measuring system. The influences this has on image
geometry are many and varied. Here we limit our discussion to treatment of the
following geometric elements of radar image acquisition and interpretation: scale
distortion, relief displacement, and parallax.
Figure 6.14 Slant-range versus ground-range image format. (Adapted from Lewis, 1976.)
GR can be derived from slant range SR and flying height H′, under the assumption
of flat terrain. From Figure 6.14 it can be seen that

    SR² = H′² + GR²

so

    GR = √(SR² − H′²)                                       (6.5)
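Eq. 6.5 translates directly to code; the slant range and flying height below are illustrative values:

```python
import math

def ground_range(slant_range_m, flying_height_m):
    """Ground range over flat terrain, GR = sqrt(SR^2 - H'^2), per Eq. 6.5."""
    return math.sqrt(slant_range_m**2 - flying_height_m**2)

# Illustrative values: a 10-km slant range from 6 km above the terrain.
print(ground_range(10e3, 6e3))  # 8000.0 m
```

This conversion is what allows a slant-range image to be resampled into the ground-range format of Figure 6.14.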
Relief Displacement
Figure 6.16 Relief displacement and shadowing in radar images. (a) Relationship between range
and incident angle. (b) Relationship between terrain slope and wavefront of incident radiation.
(c) Appearance of resulting image, showing brightness and geometric characteristics. Note differences
in severity of relief displacement, ranging from layover (pyramid 1) to slight foreshortening (pyramid 4).
(d) Increased length of shadows at flat incident angles. Note absence of shadows at pyramid 1, and
lengthy shadow at pyramid 4.
Figure 6.17 ERS-1 radar image, C band, Vancouver Island, British Columbia, midsummer. Scale 1:625,000. (ESA,
Courtesy Canada Centre for Remote Sensing.)
and thus the right side will still be illuminated by the radar pulse. This illumina-
tion, however, will be very slight, and the resulting return signals will be weak,
causing this slope to appear relatively dark on the image. In the case of pyramid
4, located at far range, the slope of its right side is steeper than the incident angle
of the radar pulse. Thus, it will receive no illumination at all and will appear com-
pletely black. In fact, this pyramid will cast a radar shadow across an area extend-
ing further in the far-range direction (Figure 6.16d). The length of this shadowed
area increases with the range distance, because of the corresponding flattening of
the incident angle.
In summary, we can see that there is a trade-off between relief displacement
and shadowing. Radar images acquired at steep incident angles will have severe
foreshortening and layover, but little shadowing. Images acquired at flatter inci-
dent angles will have less relief displacement, but more of the imagery will be
obscured by shadows. The effects of shadowing in radar images are discussed
further in Section 6.8.
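The lengthening of shadows toward far range can be sketched with simple trigonometry. For a tall object on level terrain, the ground-projected shadow length is approximately h·tan(θ), where h is the object height and θ the incident angle; this first-order relation and the numbers below are illustrative, not taken from the figures.

```python
import math

def shadow_length(object_height_m, incident_angle_deg):
    """Approximate ground length of the radar shadow cast behind an object
    of the given height on level terrain; shadows lengthen as the incident
    angle flattens (grows) toward far range."""
    return object_height_m * math.tan(math.radians(incident_angle_deg))

# A 100-m hill at a steep (30 deg) versus a flat (60 deg) incident angle:
print(round(shadow_length(100.0, 30.0), 1))  # 57.7 m
print(round(shadow_length(100.0, 60.0), 1))  # 173.2 m
```

Tripling of the shadow footprint between these two geometries is the quantitative face of the layover-versus-shadowing trade-off just described.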
Parallax
When an object is imaged from two different flight lines, differential relief
displacements cause image parallax on radar images. This allows images to be
viewed stereoscopically. Stereo radar images can be obtained by acquiring data
from flight lines that view the terrain feature from opposite sides (Figure 6.18a).
However, because the radar sidelighting will be reversed on the two images in
the stereopair, stereoscopic viewing is somewhat difficult using this technique.
Figure 6.18 Flight orientations to produce parallax on radar images:
(a) opposite-side configuration; (b) same-side configuration.
Accordingly, stereo radar imagery is often acquired from two flight lines at the
same altitude on the same side of the terrain feature. In this case, the direction
of illumination and the sidelighting effects will be similar on both images
(Figure 6.18b). It is also possible to acquire stereo radar imagery in the same-side
configuration by using different flying heights on the same flight line and, there-
fore, varying the antenna look angle.
Figure 6.19 shows a stereopair of spaceborne radar images of volcanic terrain
in Chile that were acquired from two laterally offset flight lines at the same alti-
tude on the same side of the volcano. This resulted in the two different look
angles (45° and 54°) that were used for data collection. Although the stereo con-
vergence angle is relatively small (9°), the stereo perception of the imagery is
excellent because of the ruggedness of the terrain. The volcano near the bottom
of the figure is Michinmahuida volcano; it rises 2400 m above the surrounding
terrain. The snow-covered slopes of this volcano appear dark toned, because of
absorption of the radar signal by the snow.
Figure 6.19 Shuttle imaging radar stereopair (SIR-B), Michinmahuida Volcano, Chiloe
Province, Chile. Scale 1:350,000. The data for this stereopair were collected at two incident
angles from the same altitude with same-side illumination. (Courtesy NASA/JPL/Caltech.)
6.6 TRANSMISSION CHARACTERISTICS OF RADAR SIGNALS
The two primary factors influencing the transmission characteristics of the sig-
nals from any given radar system are the wavelength and the polarization of the
energy pulse used. Table 6.1 lists the common wavelength bands used in pulse
transmission. The letter codes for the various bands (e.g., K, X, L) were originally
selected arbitrarily to ensure military security during the early stages of radar
development. They have continued in use as a matter of convenience, and various
authorities designate the various bands in slightly different wavelength ranges.
Naturally, the wavelength of a radar signal determines the extent to which it is
attenuated and/or dispersed by the atmosphere. Serious atmospheric effects on
radar signals are confined to the shorter operating wavelengths (less than about
4 cm). Even at these wavelengths, under most operating conditions the atmo-
sphere only slightly attenuates the signal. As one would anticipate, attenuation
generally increases as operating wavelength decreases, and the influence of clouds
and rain is variable. Whereas radar signals are relatively unaffected by clouds,
echoes from heavy precipitation can be considerable. The echo from a single
raindrop is proportional to the quantity D⁶/λ⁴, where D is the drop diameter
and λ is the wavelength. With the use of short wavelengths, radar reflection from water droplets
is substantial enough to be used in PPI systems to distinguish regions of
precipitation.

TABLE 6.1  Radar Band Designations

Band    Wavelength (cm)    Frequency (MHz)
Ka      0.75–1.1           40,000–26,500
K       1.1–1.67           26,500–18,000
Ku      1.67–2.4           18,000–12,500
X       2.4–3.75           12,500–8000
C       3.75–7.5           8000–4000
S       7.5–15             4000–2000
L       15–30              2000–1000
P       30–100             1000–300

For example, rain and clouds can affect radar signal returns when the radar
wavelength is 2 cm or less. At the same time, the effect of rain is minimal with
wavelengths of operation greater than 4 cm. With K- and X-band radar, rain may
attenuate or scatter radar signals significantly.
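The wavelength and frequency limits of each band in Table 6.1 are related by c = λf, which makes the table easy to check (the function name below is ours):

```python
C_CM_PER_SEC = 3e10  # speed of light in cm/sec

def frequency_mhz(wavelength_cm):
    """Convert a radar wavelength in cm to frequency in MHz via f = c / lambda."""
    return C_CM_PER_SEC / wavelength_cm / 1e6

# The X-band limits from Table 6.1 (2.4 and 3.75 cm):
print(round(frequency_mhz(2.4)))   # 12500
print(round(frequency_mhz(3.75)))  # 8000
```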
Figure 6.20 illustrates an unusual shadow effect created by severe rainfall
activity and demonstrates the wavelength dependence of radar systems. In this
X-band airborne radar image, the bright “cloudlike” features are due to back-
scatter from rainfall. The dark regions “behind” these features (especially well
seen at lower right) can be created by one of two mechanisms. One explanation is
that they are “hidden” because the rain completely attenuates the incident energy,
preventing illumination of the ground surface. Alternatively, a portion of the
Figure 6.20 X-band airborne radar image acquired near Woodstock, New Brunswick,
Canada, illustrating an unusual shadow effect created by severe rainfall activity and radar
signal attenuation. (Courtesy Agriculture and Agri-Food Canada, Fredericton, NB, Canada.)
energy penetrates the rain during radar signal transmission but is completely
attenuated after backscatter. This is termed two-way attenuation. In either case,
much less energy is returned to the receiving antenna. When the signal is not
completely attenuated, some backscatter is received (upper left).
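The D⁶/λ⁴ dependence noted earlier in this section explains why rain is so prominent in this X-band image yet nearly transparent at longer wavelengths. A sketch with an illustrative drop size and band-center wavelengths (our assumed values):

```python
def relative_drop_echo(drop_diameter_mm, wavelength_cm):
    """Relative single-drop echo power, proportional to D^6 / lambda^4;
    units cancel when ratios are taken, so values are arbitrary units."""
    return drop_diameter_mm**6 / wavelength_cm**4

# Echo-strength ratio for a 2-mm drop at X band (3 cm) versus L band (23.5 cm):
ratio = relative_drop_echo(2.0, 3.0) / relative_drop_echo(2.0, 23.5)
print(round(ratio))  # 3765 -- rain is far more visible at short wavelengths
```

A roughly 3700-fold difference in single-drop echo power is why shadowing effects like those in Figure 6.20 are an X-band phenomenon, while L-band systems image through rain essentially unaffected.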
Another effect of rainfall on radar images is that rain occurring at the time of
data acquisition can change the physical and dielectric properties of the surface
soil and vegetation, thus affecting backscatter. The effects on backscatter from
moisture in or on the soil, plants, and other surface features will be discussed in
Section 6.8.
Irrespective of wavelength, radar signals can be transmitted and/or received
in different modes of polarization. The polarization of an electromagnetic wave
describes the geometric plane in which its electrical field is oscillating. With mul-
tipolarization radar systems, the signal can be filtered in such a way that its
electrical wave vibrations are restricted to a single plane perpendicular to the
direction of wave propagation. (Unpolarized energy vibrates in all directions per-
pendicular to that of propagation.) Typically, radar signals are transmitted in a
plane of polarization that is either parallel to the antenna axis (horizontal polar-
ization, H) or perpendicular to that axis (vertical polarization, V), as shown in
Figure 6.21. Likewise, the radar antenna may be set to receive only signals with a
specified polarization. This results in four typical polarization combinations (HH,
VV, HV, and VH), where the first letter indicates the transmitted polarization and
the second indicates the received polarization. The HH and VV cases are referred
to as like-polarized or co-polarized signals, while HV and VH are referred to as
being cross-polarized.
Many objects will respond to polarized incident radiation by reflecting signals
that are partially polarized and partially depolarized. In this case, the degree of
polarization is defined as the power of the polarized component divided by the
total reflected power. Measuring the polarization properties of an object’s reflec-
tance pattern is referred to as polarimetry. Clearly, polarimetry requires the ability
to measure reflected energy at multiple polarizations. One common configuration
for polarimetric radars is to have two channels, one operating in H mode and one
Figure 6.21 Polarization of radar signals. (a) Vertically polarized
wave. (b) Horizontally polarized wave.
in V. When the first channel transmits a pulse, both channels are used to collect
returning signals, resulting in HH and HV measurements. In the next instant, the
second channel transmits, and the two record the ensuing VH and VV returns.
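The degree of polarization defined above reduces to a one-line calculation when a return is expressed in Stokes parameters, where S0 is the total power and S1, S2, S3 describe the polarized component. This Stokes formulation is one common convention, and the numeric values below are purely illustrative:

```python
import math

def degree_of_polarization(s0, s1, s2, s3):
    """Power of the polarized component, sqrt(S1^2 + S2^2 + S3^2),
    divided by the total reflected power S0."""
    return math.sqrt(s1**2 + s2**2 + s3**2) / s0

# A fully polarized wave (horizontal linear): S = [1, 1, 0, 0]
print(degree_of_polarization(1.0, 1.0, 0.0, 0.0))  # → 1.0
# Completely unpolarized energy: S = [1, 0, 0, 0]
print(degree_of_polarization(1.0, 0.0, 0.0, 0.0))  # → 0.0
# A partially depolarized return (hypothetical values)
print(round(degree_of_polarization(1.0, 0.5, 0.3, 0.0), 3))  # → 0.583
```

A polarimetric radar estimates the Stokes parameters from its measured channel powers and cross products; here they are simply given.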
Any system that measures more than one polarization (e.g., HH and HV) is
referred to as a “polarization diversity” or multipolarization radar. A system that
measures all four of these orthogonal polarization states (HH, HV, VH, and VV) is
referred to as being “fully polarimetric” or quadrature polarized. This is a particu-
larly useful design, because it provides more insight into the physics of the inter-
action between the radar signals and the surface being imaged. For example, it
may be possible to distinguish single-bounce from double-bounce scattering, to
differentiate surface and volume scattering, and to measure various physical
properties of the surface.
Quadrature polarization also provides sufficient information to calculate the
theoretical backscatter properties of the surface in any plane of polarization, not
just the H and V planes. In fact, the H and V polarizations are actually only two
special cases (albeit the most commonly used in remote sensing) of a much more
complex set of possible polarization states. In theory, any polarization state can
be described using two parameters, ellipticity (χ) and orientation or inclination
angle (ψ). Ellipticity refers to the tendency of a wave to rotate its plane of
polarization around the axis in which it is propagating. It ranges from −45° to +45°.
Waves with an ellipticity of 0° are said to be linearly polarized, those with positive
ellipticity are said to be left-handed, and those with negative ellipticity are said to
be right-handed. In the extreme cases, polarizations of +45° and −45° are referred
to as left-circular and right-circular, respectively.
The orientation angle of a wave’s polarization can range from 0° to 180°.
Technically, a horizontally polarized wave is one with an ellipticity of 0° and an
orientation of either 0° or 180°. A vertically polarized wave has an ellipticity of 0°
and an orientation of 90°.
When a quadrature-polarized radar system collects HH, HV, VH, and VV
measurements, it is possible to use matrix transformations to compute the theoretical
response that would be expected from the surface when transmitting in any
combination of (χT, ψT) and receiving in any combination of (χR, ψR).
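This matrix transformation, often called polarization synthesis, can be sketched numerically. Under one common convention, a state with ellipticity chi and orientation psi maps to a Jones vector, and the power for transmit state t and receive state r is |rᵀSt|², where S is the 2×2 complex scattering matrix. Conventions differ between texts (conjugation of the receive vector, backscatter vs. forward-scatter alignment), so treat this as an illustrative sketch rather than a definitive implementation:

```python
import math

def jones(chi_deg, psi_deg):
    """Jones vector for ellipticity chi and orientation psi (degrees):
    the ellipse [cos(chi), i*sin(chi)] rotated by psi."""
    chi, psi = math.radians(chi_deg), math.radians(psi_deg)
    ex, ey = math.cos(chi), 1j * math.sin(chi)
    return (math.cos(psi) * ex - math.sin(psi) * ey,
            math.sin(psi) * ex + math.cos(psi) * ey)

def synthesized_power(S, tx, rx):
    """|r^T S t|^2 for S = [[S_HH, S_HV], [S_VH, S_VV]] and
    (chi, psi) tuples tx, rx for the transmit and receive states."""
    t, r = jones(*tx), jones(*rx)
    sx = S[0][0] * t[0] + S[0][1] * t[1]
    sy = S[1][0] * t[0] + S[1][1] * t[1]
    return abs(r[0] * sx + r[1] * sy) ** 2

# Odd-bounce (sphere- or trihedral-like) scatterer: co-pol only
S = [[1, 0], [0, 1]]
H, V, LC = (0, 0), (0, 90), (45, 0)     # horizontal, vertical, left-circular
print(synthesized_power(S, H, H))            # HH → 1.0
print(round(synthesized_power(S, H, V), 12)) # HV → 0.0
print(round(synthesized_power(S, LC, LC), 12))  # same-sense circular → 0.0
```

The last line reproduces a well-known result: an odd-bounce scatterer returns no same-sense circular power, which is why circular polarization is useful for separating odd- from even-bounce scattering.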
Because various objects modify the polarization of the energy they reflect to
varying degrees, the mode of signal polarization influences how the objects look
on the resulting imagery. We illustrate this in Section 6.8. For further information
on radar polarization, see Boerner et al. (1998).
Note that, as shown in Figure 6.21, radar signals can be represented as sinusoi-
dal waves that cycle through a repeating pattern of peaks and troughs. The phase of
a wave refers to its position along this cycle—whether it is at a peak, at a trough, or
somewhere in between—at a given instant in time. The measurement of radar sig-
nal phase is an essential part of radar interferometry (discussed in Section 6.9). In
addition, the phase of radar waves plays a role in the phenomenon known as radar
image speckle, which is discussed in the next section of this chapter.
6.7 OTHER RADAR IMAGE CHARACTERISTICS 413
For long wavelengths (P band) at high altitude (greater than 500 km), the
ionosphere can have significant effects on the transmission of radar signals.
These effects occur in at least two ways. First, passage through the ionosphere
can result in a propagation delay, which leads to errors in the measurement of the
slant range. Second, there is a phenomenon known as Faraday rotation, whereby
the plane of polarization is rotated somewhat, in proportion to both the amount
of ionospheric activity and the strength of the planet's magnetic field. These factors can cause
significant problems for polarimetric, long wavelength, orbital SAR systems. (For
further information on these effects, see Curlander and McDonough, 1991.)
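One commonly used model of Faraday rotation in the SAR literature takes the observed matrix to be the true scattering matrix pre- and post-multiplied by the same one-way rotation, M = R(Ω)·S·R(Ω). The scattering-matrix values below are hypothetical; this is a sketch under that assumed model, illustrating why Faraday rotation makes a reciprocal scatterer (S_HV = S_VH) appear non-reciprocal:

```python
import math

def faraday_rotated(S, omega_deg):
    """Observed matrix M = R(omega) S R(omega): the one-way Faraday
    rotation omega acts on both the transmit and receive paths."""
    w = math.radians(omega_deg)
    c, s = math.cos(w), math.sin(w)
    R = [[c, s], [-s, c]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return matmul(matmul(R, S), R)

# Hypothetical reciprocal scatterer (S_HV == S_VH) ...
S = [[1.0, 0.2], [0.2, 0.6]]
M = faraday_rotated(S, 10.0)
# ... whose observed cross-pol terms no longer agree after a 10 deg
# two-way Faraday rotation:
print(round(M[0][1], 4), round(M[1][0], 4))
```

The asymmetry between the observed HV and VH terms is what polarimetric calibration schemes exploit to estimate and remove Faraday rotation from long-wavelength orbital SAR data.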
Other characteristics that affect the appearance of radar images are radar image
speckle and radar image range brightness variation. These factors are described
below.
All radar images contain some degree of speckle, a seemingly random pattern
of brighter and darker pixels in the image. Radar pulses are transmitted coher-
ently, such that the transmitted waves are oscillating in phase with one another.
However, Figure 6.22 shows that the waves backscattered from within a single
ground resolution cell (or pixel) on the earth’s surface will travel slightly different
distances from the antenna to the surface and back. This difference in distance
means that the returning waves from within a single pixel may be in phase or
out of phase by varying degrees when received by the sensor. Where the returning
waves are in phase with one another, the intensity of the resulting combined sig-
nal will be amplified by constructive interference. At the opposite extreme, where
returning waves from within a single pixel are at completely opposite phases
(that is, when one wave is at the peak of its cycle and another is at the trough),
they will tend to cancel each other out, reducing the intensity of the combined
signal (this is known as destructive interference). Constructive and destructive
interference produce a seemingly random pattern of brighter and darker pixels in
radar images, giving them the distinctly grainy appearance known as speckle.
For example, part 3 of Figure 6.22 shows a grid of 24 pixels representing a
ground area with a uniformly dark linear feature (perhaps a smooth level road)
crossing a uniformly lighter-toned background. Because of the effect of speckle,
the resulting radar image will show pseudo-random variations in the apparent
backscatter from every pixel in the image. This makes it more difficult to identify
and differentiate features within the imagery. Speckle is often described
imprecisely as "random noise," but it is important to realize that the seemingly
random pattern is in fact deterministic: If the same area were imaged again
under identical conditions, the same speckle pattern would result.
Figure 6.23 An example of multilook processing and its effect on image speckle: (a) one look;
(b) four looks; (c) 16 looks. Airborne X-band SAR image. Note that speckle decreases as
the number of looks increases. These images were specially processed such that the image
resolution is the same for all three parts of the figure; 16 times as much data were required to
produce the image in (c) as the image in (a). (From American Society for Photogrammetry and
Remote Sensing, 1998. Images copyright © John Wiley & Sons, Inc.)
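The interference mechanism behind speckle, and the improvement from multilook processing shown in Figure 6.23, can be simulated by coherently summing equal-amplitude scatterers with random phases within each resolution cell. Averaging L independent looks reduces the intensity coefficient of variation by roughly 1/√L. All parameter choices below are illustrative:

```python
import cmath, math, random, statistics

random.seed(0)

def speckle_intensity(n_scatterers=32):
    """One resolution cell: coherent sum of equal-amplitude scatterers
    with uniformly random phases, detected as intensity |sum|^2."""
    total = sum(cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
                for _ in range(n_scatterers))
    return abs(total) ** 2

def multilook(n_looks):
    """Average n_looks independent intensity samples for one pixel."""
    return sum(speckle_intensity() for _ in range(n_looks)) / n_looks

def coeff_of_variation(samples):
    return statistics.pstdev(samples) / statistics.fmean(samples)

one_look = [multilook(1) for _ in range(2000)]
sixteen_looks = [multilook(16) for _ in range(2000)]

# Fully developed single-look speckle has a coefficient of variation
# near 1; averaging 16 looks cuts it by roughly 1/sqrt(16) = 0.25.
print(round(coeff_of_variation(one_look), 2))
print(round(coeff_of_variation(sixteen_looks), 2))
```

This is exactly the trade-off in the figure: each extra look suppresses speckle but requires more data (or a coarser resolution) for the same coverage.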
6.8 RADAR IMAGE INTERPRETATION 417
Figure 6.24 Airborne SAR images, Hualalai volcano, Hawaii: (a) without compensation for
range brightness falloff; (b) with compensation for range brightness falloff. The difference in look
angle from near range (top of the image) to far range (bottom of the image) is about 14°. (Courtesy
NASA/JPL/Caltech.)
objects. These factors are many, varied, and complex. Although several theoretical
models have been developed to describe how various objects reflect radar energy,
most practical knowledge on the subject has been derived from empirical observa-
tion. It has been found that the primary factors influencing objects’ return signal
intensity are their geometric and electrical characteristics; these are described below.
The effects of radar signal polarization are illustrated, and radar wave interactions
with soil, vegetation, water and ice, and urban areas are also described.
Geometric Characteristics
Again, one of the most readily apparent features of radar imagery is its side-lighted
character. This arises through variations in the relative sensor/terrain geometry for
differing terrain orientations, as was illustrated in Figure 6.16. Variations in local
incident angle result in relatively high returns from slopes facing the sensor and
relatively low returns, or no returns, from slopes facing away from the sensor.
In Figure 6.25, the return-strength-versus-time graph has been positioned
over the terrain such that the signals can be correlated with the feature that pro-
duced them. Above the graph is the corresponding image line, in which the signal
strength has been converted schematically to brightness values. The response
from this radar pulse initially shows a high return from the slope facing the sen-
sor. This is followed by a duration of no return signal from areas blocked from
illumination by the radar wave. This radar shadow is completely black and sharply
defined, unlike shadows in photography, which are weakly illuminated by energy
scattered by the atmosphere. Note that radar shadows can be seen in several
radar images in this chapter. Following the shadow, a relatively weak response is
recorded from the terrain that is not oriented toward the sensor.
Figure 6.25 Effect of sensor/terrain geometry on radar imagery. (Adapted from Lewis, 1976.)
Radar backscatter and shadow areas are affected by different surface proper-
ties over a range of local incident angles. As a generalization, for local incident
angles of 0° to 30°, radar backscatter is dominated by topographic slope. For
angles of 30° to 70°, surface roughness dominates. For angles greater than 70°,
radar shadows dominate the image.
Figure 6.26 illustrates radar reflection from surfaces of varying roughness and
geometry. The Rayleigh criterion states that surfaces can be considered “rough”
and act as diffuse reflectors (Figure 6.26a) if the root-mean-square (rms) height of
the surface variations exceeds one-eighth of the wavelength of sensing (λ/8)
divided by the cosine of the local incident angle (Sabins, 1997). Such surfaces scatter
incident energy in all directions and return a significant portion of the incident
energy to the radar antenna. Surfaces are considered “smooth” by the Rayleigh
criterion and act as specular reflectors (Figure 6.26b) when their rms height
variation is less than approximately λ/8 divided by the cosine of the local incident
angle. Such surfaces reflect most of the energy away from the sensor, resulting in
a very low return signal.
The Rayleigh criterion does not consider that there can be a category of sur-
face relief that is intermediate between definitely rough and definitely smooth sur-
faces. A modified Rayleigh criterion is used to typify such situations. This criterion
considers rough surfaces to be those where the rms height is greater than λ/4.4
divided by the cosine of the local incident angle and smooth when the rms height
variation is less than λ/25 divided by the cosine of the local incident angle
(Sabins, 1997). Intermediate values are considered to have intermediate
roughnesses. Table 6.2 lists the surface height variations that can be considered
smooth, intermediate, and rough for several radar bands for local incident angles
of 20°, 45°, and 70°. (Values for other wavelength bands and/or incident angles
can be calculated from the information given above.)
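The modified Rayleigh criterion translates directly into two thresholds per wavelength and incident angle. The sketch below reproduces the kind of values tabulated in Table 6.2, using the band wavelengths given in the text:

```python
import math

def roughness_class(rms_height_cm, wavelength_cm, incident_angle_deg):
    """Modified Rayleigh criterion (Sabins, 1997): a surface is smooth
    when its rms height variation is below lambda / (25 cos theta) and
    rough when it is above lambda / (4.4 cos theta); in between it is
    of intermediate roughness."""
    cos_t = math.cos(math.radians(incident_angle_deg))
    smooth_limit = wavelength_cm / (25 * cos_t)
    rough_limit = wavelength_cm / (4.4 * cos_t)
    if rms_height_cm < smooth_limit:
        return "smooth"
    if rms_height_cm > rough_limit:
        return "rough"
    return "intermediate"

# X band (3.2 cm) at a 45 deg local incident angle:
cos45 = math.cos(math.radians(45))
print(round(3.2 / (25 * cos45), 3))    # smooth below → 0.181 cm
print(round(3.2 / (4.4 * cos45), 3))   # rough above → 1.029 cm
print(roughness_class(0.5, 3.2, 45))   # → intermediate
```

The same function reproduces any other entry by substituting the Ka-band (0.86 cm) or L-band (23.5 cm) wavelength and a 20° or 70° incident angle.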
Figure 6.27 graphically illustrates how the amount of diffuse versus specular
reflection for a given surface roughness varies with wavelength, and Table 6.3
describes how rough various surfaces appear to radar pulses of various wavelengths
using the modified Rayleigh criterion described above. It should be noted that some
features, such as cornfields, might appear rough when seen in both the visible and
the microwave portion of the spectrum. Other surfaces, such as roadways, may be
diffuse reflectors in the visible region but specular reflectors of microwave energy. In
general, radar images manifest many more specular surfaces than do photographs.
The shape and orientation of objects must be considered as well as their sur-
face roughness when evaluating radar returns. A particularly bright response
results from a corner reflector, as illustrated in Figure 6.26c. In this case, adjacent
smooth surfaces cause a double reflection that yields a very high return. Because
corner reflectors generally cover only small areas of the scene, they typically
appear as bright spots on the image.
Electrical Characteristics
The electrical characteristics of terrain features work closely with their geometric
characteristics to determine the intensity of radar returns. One measure of an
Figure 6.27 X-band and L-band radar reflection from surfaces of varying
roughness. (Modified from diagram by Environmental Research Institute of
Michigan.)
[Table 6.2 fragment: root-mean-square surface height variation (cm) for the Ka band (λ = 0.86 cm), X band (λ = 3.2 cm), and L band (λ = 23.5 cm).]
themselves. Because plants have large surface areas and often have a high moist-
ure content, they are particularly good reflectors of radar energy. Plant canopies
with their varying complex dielectric constants and their microrelief often dom-
inate the texture of radar image tones.
It should be noted that the dielectric constant of vegetation changes with
atmospheric conditions. Clouds limit incident radiation on the earth’s surface,
changing the water content of the surface vegetation. In particular, clouds
decrease or stop vegetation transpiration, which in turn changes the water poten-
tial, dielectric constant, and radar backscatter of vegetation.
Metal objects also give high returns, and metal vehicles, bridges, silos, railroad
tracks, and poles usually appear as bright features on radar images. Figure 6.28
shows an X-SAR (X-band) radar image of Hong Kong and surrounding areas in
southeastern China. (The X-SAR system is discussed in Section 6.12.) The image
was acquired on October 4, 1994. Hong Kong is among the world’s busiest sea-
ports, and numerous ships appear as small, bright features in the imagery. The very
high backscatter from these ships is partly due to their metallic composition and
partly due to corner reflections from structures on the ships, as well as corner
reflections involving the water surface and the sides of the ships. Urban areas in this
image also show very high backscatter, because of the presence of large buildings
that likewise act as corner reflectors. The sea surface acts as a specular reflector and
appears dark. Note also the effects of layover (relief displacement) in the steep,
mountainous terrain on Hong Kong island at the right side of the image.
Some particularly bright features in Figure 6.28, such as ships and large
buildings, show anomalous cross-shaped patterns radiating outward from the
central points. These artifacts occur when the backscatter from an object (gen-
erally a metallic corner reflector) is high enough to exceed the dynamic range of
the radar system, saturating the antenna’s electronics. These “side-lobe” patterns
are frequently observed whenever large, angular metallic objects such as bridges,
ships, and offshore oil rigs are imaged against a backdrop of dark, smooth water.
Figure 6.28 X-SAR radar image of Hong Kong, China. Scale 1:170,000. (Courtesy DLR and
NASA/JPL/Caltech.)
Effect of Polarization
Figure 6.29 illustrates the effect of signal polarization on radar images. The figure
shows a pair of L-band radar images from SIR-C (Section 6.12), covering part of
the island of Sumatra in Indonesia. Most of the area shown here consists of rela-
tively undisturbed tropical rainforest, interspersed with large tracts that have
been cleared for palm oil plantations. The images were acquired in October 1994.
Newly cleared areas (where the rainforest cover was removed within five years
before the image acquisition) appear as bright polygons in the HH-polarized
image. Older clearings where planted palm trees are growing are not easily dis-
tinguished in the HH image, but appear much darker in the HV image. In the
lower right corner of the area, a chain of lakes located in coastal marshes appears
dark in both images due to specular reflection. In general, the choice of polariza-
tion(s) used in acquiring radar imagery will depend on the type of landscape fea-
tures being studied. The most detailed information about surface materials would
be provided by fully polarimetric (“quad-pol”) radar systems offering all four
polarization bands. However, radar system design involves trade-offs among the
spatial resolution, extent of coverage, and number of polarizations available.
Where fully polarimetric data are not available, dual-polarization systems
Figure 6.29 SIR-C radar images of Sumatra, Indonesia. (a) L-HH. (b) L-HV. (Courtesy NASA/
JPL/Caltech.)
including both a like- and a cross-polarized band will often yield a great deal of
information about surface features.
Plate 24 shows a pair of polarimetric L-band radar images acquired by NASA’s
UAVSAR, a radar system mounted on an uninhabited aerial vehicle (UAV; see
Section 1.8). In these images, the HH polarization is shown in red, the HV polarization
in green, and the VV polarization in blue. In (a), Iceland’s Hofsjokull glacier appears
in green and magenta along the upper portion of the image. These colors indicate
different scattering mechanisms that, in turn, can be used to draw inferences about
surface conditions on the glacier. The green hue results from depolarized scattering
(including relatively greater HV scattering) in the ablation zone, where the surface
is rougher and melt is occurring. At higher elevations, the glacier’s surface is
smoother and the radar response is dominated by HH and VV backscatter.
Surrounding Hofsjokull, many familiar glacial landforms can be seen, including mor-
aines, outwash plains, braided streams, ice-margin lakes, and other features.
Plate 24b shows a UAVSAR image of the area surrounding the Bahia de San
Lorenzo, an arm of the Gulf of Fonseca extending into Honduras. The bay is sur-
rounded by an intricate network of natural drainage channels flowing through
mangrove swamps. In 1999, the mangroves in this region were designated as a
“wetland of international importance” under the Ramsar Convention. Prominent
hills at the lower left and upper right appear bright green, due to the increased
cross-polarized response from trees growing on slopes that are too steep to farm.
At lower elevations, agricultural fields occur on drier land and aquaculture ponds
intermingle with the mangrove swamps. Green patches within the swamps (with
relatively higher HV backscatter) support more large woody biomass while the
red, magenta, and blue areas are generally cleared, with the relative proportions
of HH and VV backscatter determined by the surface roughness, the type of vege-
tation present, and the radar incident angle.
Soil Response
Because the dielectric constant for water is at least 10 times that for dry soil, the pre-
sence of water in the top few centimeters of bare (unvegetated) soil can be detected
in radar imagery. Soil moisture and surface wetness conditions become particularly
apparent at longer wavelengths. Soil moisture normally limits the penetration of
radar waves to depths of a few centimeters. However, signal penetrations of several
meters have been observed under extremely dry soil conditions with L-band radar.
Figure 6.30 shows a comparison of Landsat TM (a) and spaceborne synthetic
aperture radar (b) imagery of the Sahara Desert near Safsaf Oasis in southern Egypt.
This area is largely covered by a thin layer of windblown sand, which obscures many
of the underlying bedrock and drainage features. Field studies in the area have shown
that L-band (23 cm) radar signals can penetrate up to 2 m through this sand, provid-
ing imagery of subsurface geologic features. The dark, braided patterns in (b) repre-
sent former drainage channels from an ancient river valley, now filled with sand
more than 2 m deep. While some of these channels are believed to date back tens of
millions of years, others most likely formed during intervals within the past half-
million years when the region experienced a wetter climate. Archaeologists working
in this area have found stone tools used by early humans more than 100,000 years
ago. Other features visible in the radar imagery primarily relate to bedrock structures,
which include sedimentary rocks, gneisses, and other rock types. Very few of these
features are visible in the Landsat imagery, due to the obscuring sand cover.
Vegetation Response
Figure 6.30 Sahara Desert near Safsaf Oasis, southern Egypt: (a) Landsat TM image; (b) SIR-C image, L band, HH
polarization, 45° incident angle. North is to the upper left. Scale 1:170,000. (Courtesy NASA/JPL/Caltech.)
limbs, etc.). In turn, the vegetation canopy is underlain by soil that may cause
surface scattering of the energy that penetrates the vegetation canopy. When the
radar wavelengths approximate the mean size of plant components, volume scat-
tering is strong, and if the plant canopy is dense, there will be strong backscatter
from the vegetation. In general, shorter wavelengths (2 to 6 cm) are best for
sensing crop canopies (corn, soybeans, wheat, etc.) and tree leaves. At these wave-
lengths, volume scattering predominates and surface scattering from the under-
lying soil is minimal. Longer wavelengths (10 to 30 cm) are best for sensing tree
trunks and limbs.
In addition to plant size and radar wavelength, many other factors affect
radar backscatter from vegetation. Recall that vegetation with a high moisture
content returns more energy than dry vegetation. Also, more energy is returned
from crops having their rows aligned in the azimuth direction than from those
aligned in the range direction of radar sensing.
Figure 6.31 shows a pair of L-band radar images of an agricultural area
located near Winnipeg, Manitoba. The images were acquired from NASA's UAVSAR
(mentioned previously in the discussion of Plate 24) on June 17 (6.31a) and July
17 (6.31b). The light-toned features are agricultural fields with higher soil moist-
ure and/or crops with a higher moisture content than the darker-toned areas. Cir-
cular features represent center-pivot irrigation systems. The numerous differences
in radar brightness between corresponding fields in 6.31a and 6.31b are primarily
due to changes in plant growth and soil moisture during the one-month interval
between image acquisitions. Healthy plants have a high water content, and thus a
high dielectric constant, which in turn increases the reflectivity of the crop sur-
face. That is, leaves with a high moisture content reflect radar waves more
strongly than dry leaves, bare soil, or other features. Likewise, the vertical struc-
ture of a crop canopy increases the radar backscatter relative to the specular
reflections that would typify a bare, smooth field.
Figure 6.31 Radar imagery of agricultural crops near Winnipeg, Manitoba, from an L-band radar
system operated on an uninhabited aerial vehicle (UAV). (a) June 17. (b) July 17. (Courtesy NASA/JPL/
Caltech.)
The lighter-toned linear feature wandering through the images in Figure 6.31
from upper left to lower right is a watercourse whose banks are lined with trees,
shrubs, and other riparian vegetation. The brightness of this feature is due both to
the increased roughness of the vegetation present and to its increased moisture
content. Often, vegetated areas that are flooded or are adjacent to standing water
can cause a corner reflector effect. Each stalk of vegetation forms a right angle with
the calm water. Combined, these can produce a bright radar return, which is a use-
ful indicator of water standing beneath a vegetation canopy (see also Figure 6.34).
Figure 6.32 illustrates the effect of wavelength on the appearance of airborne
SAR images. Here, the scene is imaged with three different wavelengths. Most
crop types reflect differently in all three wavelength bands, with generally lighter
tones in the C band and darker tones in the P band. Many crop types in this
image could be identified by comparing the relative amounts of backscatter in the
three different bands.
Figure 6.33 shows a C-band (a) and an L-band (b) image of an area in north-
ern Wisconsin that is mostly forested, containing many lakes. Because of specular
reflection from their smooth surfaces, the lakes appear dark throughout both ima-
ges. A tornado scar can be seen as a dark-toned linear feature running through
Figure 6.32 Airborne SAR images of an agricultural area in the Netherlands: (a) C band (3.75–7.5 cm);
(b) L band (15–30 cm); (c) P band (30–100 cm). HH polarization. (From American Society for
Photogrammetry and Remote Sensing, 1998. Images copyright © John Wiley & Sons, Inc.)
Figure 6.33 SIR-C images of a forested area in northern Wisconsin: (a) C-band image; (b) L-band
image. Scale 1:150,000. Note the dark-toned lakes throughout the image and the tornado scar that is
visible only in the L-band image. (Courtesy NASA/JPL/Caltech and UW-Madison Environmental
Remote Sensing Center.)
the center of Figure 6.33b, from upper left to lower right. The tornado occurred
10 years before the date of this image. It destroyed many buildings and felled
most of the trees in its path. After the tornado damage, timber salvage operations
removed most of the fallen trees, and young trees were established. At the time of
acquisition of these spaceborne radar images, the young growing trees in the area
of the tornado scar appear rough enough in the C-band (6-cm) image that they
blend in with the larger trees in the surrounding forested area. In the L-band
(24-cm) image, they appear smoother than the surrounding forested area and the
tornado scar can be seen as a dark-toned linear feature.
Incident angle also has a significant effect on radar backscatter from vegeta-
tion. Figure 6.34 shows spaceborne SAR images of a forested area in northern
Florida and further illustrates the effect of radar imaging at multiple incident
angles on the interpretability of radar images. The terrain is flat, with a mean ele-
vation of 45 m. Sandy soils overlie weathering limestone; the lakes are sinkhole
lakes. Various land cover types can be identified in Figure 6.34b by their tone, tex-
ture, and shape. Water bodies (W) have a dark tone and smooth texture. Clear-cut
areas (C) have a dark tone with a faint mottled texture and rectangular to angular
Figure 6.34 SIR-B images, northern Florida, L band (scale 1:190,000): (a) 58° incident
angle, October 9; (b) 45° incident angle, October 10; (c) 28° incident angle, October 11.
C = clear-cut area; F = pine forest; P = powerline right-of-way; R = road; S = cypress-
tupelo swamp; W = open water. (Courtesy Department of Forestry and Natural Resources,
Purdue University, and NASA/JPL/Caltech.)
shapes. The powerline right-of-way (P) and roads (R) appear as dark-toned,
narrow, linear swaths. Pine forest (F), which covers the majority of this image,
has a medium tone with a mottled texture. Cypress-tupelo swamps (S), which
consist mainly of deciduous species, have a light tone and a mottled texture.
However, the relative tones of the forested areas vary considerably with incident
angle. For example, the cypress-tupelo swamp areas are dark toned at an incident
angle of 58° and cannot be visually distinguished from the pine forest. These
same swamps are somewhat lighter toned than the pine forest at an incident angle
of 45° and much lighter toned than the pine forest at an incident angle of 28°. The
very high radar return from these swamps on the 28° image is believed to be
caused by specular reflection from the standing water in these areas acting
in combination with reflection from the tree trunks, resulting in a complex corner
reflector effect (Hoffer, Mueller, and Lozano-Garcia, 1985). This effect is more
pronounced at an incident angle of 28° than at larger incident angles because the
penetration of radar waves through the forest canopy is greater at the smaller angle.
Smooth water surfaces act as specular reflectors of radar waves and yield no
returns to the antenna, but rough water surfaces return radar signals of varying
strengths. Experiments conducted with the Seasat radar system (L-band system
with look angles of 20° to 26°, as described later in Section 6.11) showed that
waves with a wavelength greater than 100 m could be detected when wave heights
were greater than about 1 m and surface wind speeds exceeded about 2 m/sec (Fu
and Holt, 1982). It was also found that waves moving in the range direction (mov-
ing toward or away from the radar system) could be detected more readily than
waves moving in the azimuth direction.
Radar images from space have revealed interesting patterns that have been
shown to correlate with ocean bottom configurations. Figure 6.35a is a space-
borne SAR image of the English Channel near the Strait of Dover. Here, the chan-
nel is characterized by tidal variations of up to 7 m and reversing tidal currents
with velocities at times over 1.5 m/sec. Also, there are extensive sand bars on both
sides of the strait and along the coasts of France and England. The sand bars in
the channel are long, narrow ridges from 10 to 30 m in depth, with some shal-
lower than 5 m. Together with the high volume of ship traffic, these sand bars
make navigation in the channel hazardous. By comparing this image with Figure
6.35b, it can be seen that the surface patterns on the radar image follow closely
the sand bar patterns present in the area. Tidal currents at the time of image
acquisition were 0.5 to 1.0 m/sec, generally in a northeast-to-southwest direction.
The more prominent patterns are visible over bars 20 m or less in depth.
Radar backscatter from ice is dependent on the dielectric properties and spa-
tial distribution of the ice. In addition, such factors as ice age, surface roughness,
internal geometry, temperature, and snow cover also affect radar backscatter.
Figure 6.35 English Channel near the Strait of Dover: (a) Seasat-1 SAR image,
L band, midsummer; (b) map showing ocean bottom contours in meters.
(Courtesy NASA/JPL/Caltech.)
X- and C-band radar systems have proven useful in determining ice types and, by
inference, ice thickness. L-band radar is useful for showing the total extent of ice,
but it is often not capable of discriminating ice type and thickness.
As illustrated in Figure 6.36, urban areas typically appear light toned in radar
images because of their many corner reflectors.
Figure 6.37, an airborne SAR image of Sun City, Arizona, illustrates the effect
of urban building orientation on radar reflection. The “corner reflection” from
buildings located on the part of the circular street system where the wide faces of
the houses (front and rear) face the direction from which the radar waves have
Figure 6.36 Airborne SAR image, Las Vegas, NV, X band, HH polarization. North is to the top; the look direction
is from the right. Scale 1:250,000. (From American Society for Photogrammetry and Remote Sensing, 1998. Images
copyright © John Wiley & Sons, Inc.)
Figure 6.37 Airborne SAR image, Sun City, AZ, X band. The look direction is from the top of the image. Scale 1:28,000.
(From American Society for Photogrammetry and Remote Sensing, 1998. Images copyright © John Wiley & Sons, Inc.)
originated provides the strongest radar returns. At right angles to this direction,
there is again a relatively strong radar return where the sides of the houses face
the direction from which the radar waves have originated. This effect is some-
times called the cardinal effect, a term that has survived from the early days of
radar remote sensing. At that time, it was noted that reflections from urban areas,
often laid out according to the cardinal directions of a compass, caused sig-
nificantly larger returns when the linear features were illuminated at an angle
orthogonal to their orientation, hence the name cardinal effect (Raney, 1998).
Other earth surface features also respond with a similar effect. For example, the
orientation of row crops affects their response, as described earlier in this section,
and the orientation of ocean waves also strongly affects their response.
Summary
High returns are received from metal objects and from urban and other built-up areas (resulting from corner reflections). Surfaces acting as diffuse reflectors return a weak to moderate signal and
may often have considerable image texture. Low returns are received from sur-
faces acting as specular reflectors, such as smooth water, pavements, and playas
(dry lakebeds). No return is received from radar “shadow” areas.
Figure 6.39 Radar image and radar interferograms: (a) SIR-C radar image, Mt. Etna, Italy; (b) raw
interferogram; (c) flattened interferogram showing elevation ranges. Parts (b) and (c) are black-
and-white reproductions of color interferograms. (Courtesy NASA/JPL/Caltech and EROS Data Center.)
6.9 INTERFEROMETRIC RADAR 437
Figure 6.40 C- and L-band interferometric radar images, northwestern Wisconsin. (a) C-band, magnitude. (b) C-band, phase. (c) L-band, magnitude. (d) L-band, phase. (Author-prepared figure.)
Differential interferometry can detect surface change to within a fraction of the radar system’s wavelength, often to less than 1 cm. With a single
pair of images, surface changes are measured only as line-of-sight displacements,
meaning that only the degree to which a point moved toward or away from the
radar look direction can be measured. If two sets of interferometric image pairs
are available from different look directions, such as from the ascending and
descending segments of a satellite’s orbit, the two-dimensional movement of the
surface can be derived.
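Combining ascending and descending measurements amounts to solving a small linear system. The sketch below, a minimal example in Python with NumPy, assumes a simplified geometry: a 34° incidence angle for both passes, near-polar orbits so north–south motion is neglected, and hypothetical displacement values and sign conventions.

```python
import numpy as np

# Hypothetical line-of-sight (LOS) displacements, in meters, from an
# ascending-pass and a descending-pass interferogram (positive = toward radar).
d_asc, d_desc = 0.05, -0.02

# LOS unit-vector components (east, up) for each pass. A 34-deg incidence
# angle is assumed; neglecting north-south motion leaves a 2x2 system in the
# east and up displacement components. Sign conventions here are assumed.
theta = np.deg2rad(34.0)
A = np.array([
    [-np.sin(theta), np.cos(theta)],  # ascending pass (assumed look geometry)
    [ np.sin(theta), np.cos(theta)],  # descending pass (assumed look geometry)
])

d_los = np.array([d_asc, d_desc])
d_east, d_up = np.linalg.solve(A, d_los)  # east and vertical components
```

With only one look direction, any motion perpendicular to the LOS would be invisible; the second look direction is what makes the system solvable.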
This approach works best for changes that affect large areas in a spatially
correlated manner, such as the entire surface of a glacier moving downhill, as
opposed to changes that occur in a spatially disjointed manner, such as the
growth of trees in a forest. Plate 25 provides three examples of applications of
differential interferometry. In Plate 25a this technique is used to assess surface
deformation during the Tōhoku earthquake that struck Japan on March 11,
2011. This magnitude 9.0 earthquake and the tsunami that followed it caused
extensive destruction in the northeastern coastal region of the main Japanese
island of Honshu. The radar interferograms shown in this plate were produced
using data from Japan’s ALOS PALSAR satellite radar system (Section 6.14),
acquired both before and after the earthquake. As shown in the scale bar at bot-
tom, each color cycle from one cyan “fringe” to the next represents approxi-
mately 12 cm of surface movement in the radar’s line-of-sight direction
(additional movement may have occurred perpendicular to this axis). Scientists
at the Japan Aerospace Exploration Agency (JAXA) have analyzed these data and
estimated that up to 4 m of surface displacement occurred in the most affected
parts of this area.
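The 12-cm figure follows directly from the radar wavelength: one full fringe corresponds to half a wavelength of line-of-sight motion, because the round-trip path changes by twice the surface displacement. A quick check, assuming PALSAR's 23.6-cm L-band wavelength (the fringe count below is hypothetical):

```python
# One interferometric fringe = half the wavelength of line-of-sight motion,
# since the two-way path changes by twice the surface displacement.
wavelength_cm = 23.6                 # ALOS PALSAR L band (assumed value)
fringe_cm = wavelength_cm / 2.0      # about 11.8 cm, i.e., roughly 12 cm

fringes_counted = 34                 # hypothetical count across an interferogram
los_displacement_m = fringes_counted * fringe_cm / 100.0   # roughly 4 m
```

Counting on the order of 34 fringes would thus correspond to the roughly 4 m of line-of-sight displacement estimated for the most affected areas.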
Plate 25b shows an example of the use of radar interferometry for monitoring
slower changes in topography, in this case the continuing ground uplift caused
by magma accumulation below a volcano in the central Oregon Cascade Range.
Scientists from the USGS Cascades Volcano Observatory, in connection with
other agencies, have confirmed the slow uplift of a broad area centered about
5 km west of South Sister volcano. The radar interferogram shown in Plate 25b
was produced using radar data from the European Space Agency’s ERS satellites.
In this repeat-pass interferogram, each full color band from blue to red repre-
sents about 2.8 cm of ground movement in the direction of the radar satellite.
(No information is available for the uncolored areas, where forest vegetation,
or other factors, hinders the acquisition of useful radar data.) The four con-
centric bands show that the ground surface moved toward the satellite by as
much as 10 cm between August 1996 and October 2000. Surface uplift caused
by magma accumulation at depth can be a precursor to volcanic activity at the
earth’s surface.
A third example of differential interferometry is illustrated in Plate 25c. This
differential interferogram, derived from ERS-1 and ERS-2 images, shows surface
displacement between April 1992 and December 1997 in the area around Las
Vegas, Nevada. For most of the past century, pumping of groundwater from an
underground aquifer for domestic and commercial consumption has caused land
subsidence at a rate of several centimeters per year in Las Vegas, with significant
damage to the city’s infrastructure. In recent years, artificial recharge of ground-
water has been used in an attempt to reduce subsidence. Analysis of the interfero-
metric radar imagery in combination with geologic maps shows that the spatial
extent of subsidence is controlled by geologic structures (faults, indicated by
white lines in Plate 25c) and sediment composition (clay thickness). The
maximum detected subsidence during the 1992 to 1997 period was 19 cm. Other
potential applications of differential radar interferometry include monitoring the
movement of glaciers and ice sheets, measuring displacement across faults after
earthquakes, and detecting land subsidence due to oil extraction, mining, and
other activities.
6.10 RADAR REMOTE SENSING FROM SPACE 441
The earliest civilian (non-classified) spaceborne imaging radar missions were the
experimental spaceborne systems Seasat-1 (1978) and three Shuttle Imaging
Radar systems (SIR-A, SIR-B, and SIR-C) that orbited for short periods between
1981 and 1994. The SIR-C antenna was employed again in February 2000 for the
Shuttle Radar Topography Mission (SRTM), a brief but highly productive opera-
tional program to map global topography using radar interferometry.
In the 1990s the first true “operational” (non-experimental) radar remote sen-
sing satellites were developed. During the four-year period from 1991 to 1995,
radar satellites were launched by the former Soviet Union, the European Space
Agency, and the national space agencies of Japan and Canada. Since then, the
number of radar satellite systems has increased dramatically, culminating in a
shift toward multi-satellite “constellations” to provide rapid global coverage and
tandem single-pass interferometry. These multi-satellite missions include Ger-
many’s TerraSAR-X/TanDEM-X pair, the Italian constellation of four COSMO-
SkyMed satellites, the European Space Agency’s forthcoming pair of Sentinel-1
satellites, and the planned Radarsat Constellation Mission.
The advantages of spaceborne radar systems are obvious. Because radar is an
active sensor that can gather data both day and night, radar images may be
acquired during both the south-to-north (ascending) and north-to-south (descend-
ing) portions of a satellite’s orbit. This is in contrast to electro-optical remote
sensing systems that normally only acquire imagery on the daylight portion of
each orbit. Spaceborne radar imagery can also be collected where clouds, haze,
and other atmospheric conditions would prevent the acquisition of electro-optical
imagery. Thus, spaceborne radar systems are ideally suited to applications that require imagery to be acquired dependably, wherever and whenever it is needed.
In general, radar images acquired at small incident angles (less than 30°)
emphasize variations in surface slope, although geometric distortions due to lay-
over and foreshortening in mountainous regions can be severe. Images with large
incident angles have reduced geometric distortion and emphasize variations in
surface roughness, although radar shadows increase.
A limitation in the use of airborne radar imagery is the large change in incident
angle across the image swath. In these circumstances, it is often difficult to distin-
guish differences in backscatter caused by variations in incident angle from those
actually related to the structure and composition of the surface materials present in
an image. Spaceborne radar images overcome this problem because they have only
small variations in incident angle. This makes their interpretation less difficult.
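The contrast can be made concrete with simple look geometry. The sketch below uses a flat-earth approximation (which slightly overstates spaceborne angles but preserves the comparison); the altitudes and ground ranges are assumed for illustration.

```python
import math

def incidence_deg(altitude_km, ground_range_km):
    """Incident angle for a target offset ground_range_km from nadir,
    in a flat-earth approximation (earth curvature ignored)."""
    return math.degrees(math.atan(ground_range_km / altitude_km))

# Airborne: 10-km altitude, a 20-km swath beginning 10 km to the side of nadir.
air_span = incidence_deg(10, 30) - incidence_deg(10, 10)        # ~27 degrees

# Spaceborne: 800-km altitude, the same 20-km swath offset 300 km from nadir.
space_span = incidence_deg(800, 320) - incidence_deg(800, 300)  # ~1 degree
```

Across the same 20-km swath, the airborne geometry spans tens of degrees of incident angle, while the spaceborne geometry spans only about a degree, which is why backscatter variations across a spaceborne image can more safely be attributed to the surface itself.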
Beginning with SIR-C in 1994, most spaceborne radar systems have made use
of advanced beam steering techniques to collect data in three broad categories of
imaging modes. The basic configuration, often called Stripmap mode, represents
the imaging process that has been discussed throughout the previous sections of
this chapter. For wide-area coverage, the ScanSAR imaging mode involves
electronically steering the radar beam back and forth in the range direction. In effect,
the beam illuminates two or more adjacent swaths in alternation, with the far-range
side of the first swath being contiguous with the near-range side of the second swath,
and so on. The multiple swaths are then processed to form a single, wide image. The
disadvantage of ScanSAR mode is that the spatial resolution is reduced.
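The resolution penalty arises because the synthetic aperture time is divided among the sub-swaths. A back-of-the-envelope sketch (all values assumed; burst-timing overhead ignored):

```python
# In burst-mode ScanSAR, each sub-swath is illuminated only 1/N of the time,
# so the synthetic aperture, and hence the azimuth resolution, coarsens by
# roughly a factor of N. Values below are assumed for illustration.
n_subswaths = 4
stripmap_azimuth_res_m = 25.0
subswath_width_km = 100.0

scansar_azimuth_res_m = n_subswaths * stripmap_azimuth_res_m   # ~100 m
total_swath_km = n_subswaths * subswath_width_km               # ~400 km
```

The exchange is explicit: roughly four times the swath for roughly one-quarter the azimuth resolution.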
An additional imaging configuration, referred to as Spotlight, involves steering
the radar beam in azimuth rather than in range, in order to dwell on a given site
for a longer period of time. As the satellite approaches the target area, the beam is
directed slightly forward of the angle at which it is normally transmitted; then,
while the satellite moves past, the beam swings back to continue covering the tar-
get area. Through an extension of the synthetic aperture principle, this Spotlight
mode allows a finer resolution to be achieved by acquiring more “looks” over the
target area from a longer segment of the orbit path. This increase in resolution
comes at the expense of continuous coverage because while the antenna is focus-
ing on the target area it is missing the opportunity to image other portions of the
ground swath. Many of the latest high-resolution radar satellites employ Spotlight
mode to achieve resolutions on the order of 1 to 3 m.
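The resolution gain follows from the synthetic aperture relation: azimuth resolution is roughly the wavelength divided by twice the angular span over which the target is observed. In Stripmap mode that span is fixed by the antenna beamwidth, which yields the familiar limit of half the antenna length; Spotlight steering widens the span. A sketch with assumed X-band values:

```python
import math

wavelength_m = 0.031       # X band (assumed)
antenna_length_m = 4.8     # assumed physical antenna length

# Stripmap: angular span ~ wavelength / antenna_length, so the azimuth
# resolution limit is about half the antenna length.
stripmap_res_m = antenna_length_m / 2.0              # 2.4 m

# Spotlight: beam steering extends the observed angular span beyond the
# natural beamwidth; a total span of 0.5 deg is assumed here.
span_rad = math.radians(0.5)
spotlight_res_m = wavelength_m / (2.0 * span_rad)    # ~1.8 m
```

Dwelling longer on the target (a wider angular span) buys finer azimuth resolution, at the cost of the swath coverage forgone while the beam is steered.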
The SIR-C radar mission provided the first tests of both ScanSAR and Spotlight modes from space. Figure 6.41 shows the first ScanSAR image from space, acquired over the Weddell Sea off Antarctica on October 5, 1994. The dimensions of the image are 240 km by 320 km. The upper left half of the image shows an area of the open sea, with a uniform gray tone. First-year seasonal pack ice (0.5 to 0.8 m thick) occupies the lower right corner of the image. In between these areas, in the lower left and center of the image, there are two large oceanic circulation features, or eddies, each approximately 50 km wide and rotating in a clockwise direction. Very dark areas within and adjacent to the eddies are newly formed ice, whose smooth surface acts as a specular reflector. This type of spaceborne ScanSAR imagery is an important resource for applications such as tracking the extent of sea ice and the movements of icebergs over large areas, as an aid to shipping and navigation in high-latitude oceans.

Figure 6.41 SIR-C ScanSAR image of the Weddell Sea off Antarctica, October 1994. Scale 1:2,500,000. (Courtesy NASA/JPL/Caltech.)

6.11 SEASAT-1 AND THE SHUTTLE IMAGING RADAR MISSIONS
Seasat-1 was the first of a proposed series of satellites oriented toward oceano-
graphic research. The Seasat-1 satellite was launched on June 27, 1978, into an
800-km near-polar orbit. Approximately 95% of the earth’s oceans were to be
covered by the system. Unfortunately, a prime power system failure 99 days after launch limited the image data produced by the satellite.
Seasat-1 employed a spaceborne L-band (23.5-cm) SAR system with HH
polarization. It was designed to generate imagery across a 100-km swath with a
look angle of 20° to 26° and four-look 25 m resolution in both range and azimuth.
Table 6.4 summarizes these characteristics (as well as those of the SIR systems).
Although the primary rationale for placing the imaging radar system on board
Seasat-1 was its potential for monitoring the global surface wave field and polar
sea ice conditions, the resultant images of the oceans revealed a much wider spec-
trum of oceanic and atmospheric phenomena, including internal waves, current
TABLE 6.4 Characteristics of Seasat-1 and the Shuttle Imaging Radar (SIR) Systems

                         Seasat-1         SIR-A            SIR-B              SIR-C/X-SAR
Launch date              June 1978        November 1981    October 1984       April 1994 and October 1994
Length of mission        99 days          3 days           8 days             10 days
Nominal altitude, km     800              260              225                225
Wavelength band          L band           L band           L band             C and L bands (SIR-C); X band (X-SAR)
Polarization             HH               HH               HH                 HH, HV, VV, VH (X band HH only)
Look angle               20–26° (fixed)   47–53° (fixed)   15–60° (variable)  15–60° (variable)
Swath width, km          100              40               10–60              15–90
Azimuth resolution, m    25               40               25                 25
Range resolution, m      25               40               15–45              15–45
SIR-A
The SIR-A experiments were conducted from the Space Shuttle during November
1981. This was the second flight of the Space Shuttle, and the first scientific pay-
load ever flown on the Shuttle. The SIR-A system possessed many of the same
characteristics as the radar system onboard Seasat-1, most notably its long-
wavelength L-band (23.5-cm) antenna with HH polarization. The principal dif-
ferences between these two were SIR-A’s larger look angle (47° to 53°), narrower
swath, and slightly coarser resolution (40 m). SIR-A obtained imagery over 10
million km2 of the earth’s surface and acquired radar images of many tropical,
arid, and mountainous regions for the first time.
Figure 6.42 SIR-A image of eastern China, L band. Scale 1:530,000. (Courtesy NASA/JPL/Caltech.)
Figure 6.42 is a SIR-A image showing villages, roads, and cultivated fields in
eastern China. Each of the hundreds of white spots on this image is a village. The
agricultural crops common in this area are winter wheat, kaoliang, corn, and
millet. The dark linear and winding features with white lines on each side are riv-
ers and drainageways located between levees.
SIR-B
The SIR-B experiments were conducted from the Space Shuttle during October
1984. Again, an L-band system with HH polarization was used. The principal dif-
ference between the SIR-A and SIR-B radar systems is that SIR-B was equipped
with an antenna that could be tilted mechanically to beam its radar signals
toward the earth at varying look angles (ranging from 15° to 60°). This provided
the opportunity for scientific studies aimed at assessing the effect of various inci-
dent angles on radar returns. In addition, it provided the opportunity for the
acquisition of stereo radar images.
Figure 6.43 shows SIR-B images of Mt. Shasta, a 4300-m-high volcano in
northern California. These images illustrate the effect of incident angle on eleva-
tion displacement. In Figure 6.43a, having an incident angle of 60°, the peak of
the volcano is imaged near its center. In Figure 6.43b, having an incident angle of
30°, the peak is imaged near the top of the figure (the look direction was from top
to bottom in this figure). Several light-toned tongues of lava can be seen on the
flanks of this stratovolcano. The surface of the young lava flow seen at upper left in this radar image consists of unvegetated, angular chunks of basalt 1/3 to 1 m in
size that present a very rough surface to the L-band radar waves. The other,
somewhat darker toned lava flows on the flanks of Mt. Shasta are older, have
weathered more, and are more vegetated.
SIR-C
SIR-C missions were conducted in April and October 1994. SIR-C was designed
to explore multiple-wavelength radar sensing from space. Onboard SIR-C were
L-band (23.5-cm) and C-band (5.8-cm) systems from NASA and an X-band (3.1-cm)
system (known as X-SAR) from a consortium of the German Space Agency (DARA)
and the Italian Space Agency (ASI). The multiple wavelength bands available
allowed scientists to view the earth in up to three wavelength bands, either individu-
ally or in combination. SIR-C look angles were variable in one-degree increments
from 15° to 60°, and four polarizations were available (HH, HV, VV, and VH).
The scientific emphasis in selecting sites for SIR-C image acquisition was on
studying five basic themes—oceans, ecosystems, hydrology, geology, and rain and
clouds. Ocean characteristics studied included large surface and internal waves,
wind motion at the ocean surface, and ocean current motion, as well as sea ice
characteristics and distribution. Ecosystem characteristics studied included land
use, vegetation type and extent, and the effects of fires, flooding, and clear cutting.
Figure 6.43 SIR-B images, Mt. Shasta, CA, L band, mid-fall (scale 1:240,000): (a) 60° incident angle, (b) 30° incident angle. Note the severe layover of the mountain top in (b). (Courtesy NASA/JPL/Caltech.)
Hydrologic studies focused on water and wetland conditions, soil moisture pat-
terns, and snow and glacier cover. Geologic applications included mapping geolo-
gic structures (including those buried under dry sand), studying soil erosion,
transportation and deposition, and monitoring active volcanoes. Also under study
was the attenuation of radar signals at X-band and C-band wavelengths by rain
and clouds.
The multi-wavelength, multi-polarization design of SIR-C provides many
more options for viewing and analyzing the imagery. For visual interpretation,
three different wavelength–polarization combinations are used to produce a com-
posite image, with one wavelength/polarization combination displayed as blue,
one as green, and one as red. If the different wavelength–polarization images
show backscatter from different features with different intensities, then these
features are displayed with different colors. This can be seen in Plate 26, which
shows a volcano-dominated landscape in central Africa; parts of Rwanda,
Uganda, and the Democratic Republic of the Congo (formerly Zaire) are each pre-
sent in this image. In this image, C-band data with HH polarization are displayed
as blue, C-band data with HV polarization are displayed as green, and L-band
data with HV polarization are displayed as red. The volcano at top center is Karisimbi, 4500 m high. The green band on the lower slopes of Karisimbi volcano, to the right of its peak, is an area of bamboo forest, one of the world’s few
remaining natural habitats for mountain gorillas. Just right of the center of the
image is Nyiragongo volcano, an active volcano 3465 m high. The lower portion
of the image is dominated by Nyamuragira volcano, 3053 m high, and the many
lava flows (purple in this image) that issue from its flanks.
Plate 27 shows SIR-C imagery of a portion of Yellowstone National Park in
Wyoming. Yellowstone was the world’s first national park and is known for its
geological features, including geysers and hot springs. The park and the sur-
rounding region also provide habitat for populations of grizzly bears, elk, and
bison. In 1988, massive forest fires burned across some 3200 km2 within the park.
The intensity of the burn varied widely, leaving a complex mosaic of heavily
burned, lightly burned, and unburned areas that will continue to dominate the
park’s landscape for decades. The effects of these fires can be clearly seen in the
L-band, HV-polarized SIR-C image (a), acquired on October 2, 1994. Unburned lodgepole pine forest returned a strong response at this band and polarization and thus appears relatively bright in this image. Areas of increasing burn intensity have proportionately less above-ground forest biomass present; these areas produced less L-band HV backscatter and appear darker. Yellowstone Lake, near the bottom of the image, appears black due to specular reflection and negligible backscatter.
Plate 27b shows a map of above-ground forest biomass, derived from the
SIR-C data shown in (a) and from field measurements by the Yellowstone
National Biological Survey. Colors in the map indicate the amount of biomass,
ranging from brown (less than 4 tons per hectare) to dark green (nonburned for-
est with a biomass of greater than 35 tons per hectare). Rivers and lakes are
shown in blue. The ability of long-wavelength and cross-polarized radar systems
to estimate forest biomass may provide a valuable tool for natural resource man-
agers and scientists, whether in the wake of natural disasters such as fires and
windstorms, or in the course of routine forest inventory operations.
Unlike the subsequent Shuttle Radar Topography Mission (Section 6.19),
which collected data over the majority of the earth’s land surface, SIR-C data were
only collected over specific areas of research interest. Furthermore, due to the
experimental nature of the SIR-C system, not all of the data from the two missions
were processed to the full-resolution “precision” level. Those images that were pro-
cessed (prior to the termination of the program in mid-2005) are now archived at
the USGS EROS Data Center, and can be ordered online through EarthExplorer.
6.12 ALMAZ-1
The Soviet Union (just prior to its dissolution) became the first country to operate
an earth-orbiting radar system on a commercial basis with the launch of Almaz-1
on March 31, 1991. Almaz-1 returned to earth on October 17, 1992, after operat-
ing for about 18 months. Other Almaz missions were planned prior to the dissolu-
tion of the Soviet Union but were subsequently canceled.
The primary sensor on board Almaz-1 was a SAR system operating in the
S-band spectral region (10 cm wavelength) with HH polarization. The look angle
for the system could be varied by rolling the satellite. The look angle range of 20° to
70° was divided into a standard range of 32° to 50° and two experimental ranges of
20° to 32° and 50° to 70°. The effective spatial resolution varied from 10 to 30 m,
depending on the range and azimuth of the area imaged. The data swaths were
approximately 350 km wide. Onboard tape recorders were used to record all data
until they were transmitted in digital form to a ground receiving station.
The European Space Agency (ESA) launched its first remote sensing satellite,
ERS-1, on July 17, 1991, and its successor ERS-2 on April 21, 1995. Both had a
projected life span of at least three years; ERS-1 was retired from service on
March 10, 2000, and ERS-2 ended its mission on September 5, 2011. The char-
acteristics of both systems were essentially the same. They were positioned in
sun-synchronous orbits at an inclination of 98.5° and a nominal altitude of
785 km. During the 1995 to 2000 period, a particular focus of the two satellites
was tandem operation for repeat-pass radar interferometry.
ERS-1 and ERS-2 carried three principal sensors: (1) a C-band active micro-
wave instrumentation (AMI) module, (2) a Ku-band radar altimeter, and (3) an
along-track scanning radiometer. We limit this discussion to the AMI, which con-
sisted of a C-band, VV-polarized SAR system plus a non-imaging microwave scat-
terometer. The ERS SAR had a fixed (and relatively steep) look angle of 23° and a
four-look resolution of approximately 30 m. The choice of wavelength, polariza-
tion, and look angle were selected primarily to support imaging of the oceans,
although ERS-1 and -2 images have also been used for many land applications.
Figure 6.17 (described previously) is an example of an ERS-1 image. Plates
25b and 25c, described in Section 6.9, illustrate the use of ERS-1 and ERS-2 in
tandem for differential radar interferometry.
6.13 ERS, ENVISAT, AND SENTINEL-1 449
On March 1, 2002, ESA launched Envisat. This large satellite platform carries
a number of instruments, including the ocean monitoring system MERIS
(Section 5.18), and an advanced imaging radar system. Following the launch,
Envisat was maneuvered into an orbit matching that of ERS-2, just 30 min ahead
of ERS-2 and covering the same ground track to within 1 km. Envisat operated
for 10 years before contact was lost with the satellite on April 8, 2012.
Envisat’s SAR instrument, the Advanced Synthetic Aperture Radar (ASAR)
system, represented a substantial advance over that of its ERS predecessors. Like
the ERS-1 and -2 SARs, Envisat’s ASAR operated in the C band. However, ASAR
had multiple imaging modes offering various combinations of swath width, reso-
lution, look angle, and polarization. In its normal Image Mode, ASAR generated
four-look high resolution (30 m) images at either HH or VV polarization, with
swath widths ranging from 58 to 109 km and look angles ranging from 14° to 45°.
Other ASAR modes were based on the ScanSAR technique discussed in
Section 6.10. These provided coarser-resolution (150 to 1000 m) imagery over a
405-km swath with either HH or VV polarization. The final ASAR mode, Alternat-
ing Polarization Mode, represented a modified ScanSAR technique that alternated between polarizations over a single swath, rather than alternating between near- and far-range swaths. This mode provided 30-m-resolution dual-polarization imagery, with one of three polarization combinations (HH and VV, VV and VH, or HH
and HV). Thus, Envisat was the first operational radar satellite to offer multiple
polarization radar imagery.
Figure 6.44 is an Envisat ASAR image showing an oil slick in the Atlantic
Ocean off the coast of Spain. On November 13, 2002, the oil tanker Prestige was
located off the west coast of Spain when its hull ruptured. The tanker had been
damaged in a storm and eventually sank on November 19. The image in Figure
6.44 was acquired on November 17, four days after the initial incident, and shows
the damaged ship as a bright point (inset) at the head of an extensive oil slick. Over
3.8 million liters of fuel oil spilled from the tanker during this incident. Oil films
have a dampening effect on waves, and the smoother, oil-coated water acts as more
of a specular reflector than the surrounding water, thus appearing darker.
Just as Envisat represented an improvement over the earlier ERS satellite
radars, the ESA’s Sentinel-1A and -1B satellites will build upon the ERS/Envisat
heritage while incorporating several advances. The first Sentinel-1 satellite was
launched on April 3, 2014, and the second is planned for launch in 2016. Working
together, these two imaging radar satellites will provide C-band, single- or dual-
polarized imagery with coverage of mid- to high-latitudes every one to three days,
with data delivery over the internet within an hour of image acquisition. This
rapid delivery of data is designed to support operations such as ship tracking and
sea ice monitoring, detection and mapping of oil spills, response to natural dis-
asters such as floods and wildfires, and other time-critical applications.
The imaging modes for the Sentinel-1 SAR systems include a strip-map mode
with 5-m resolution and 80-km swath; an extra-wide swath mode offering 40-m
resolution data over a 400-km swath; and a single-pass interferometric mode
Figure 6.44 Envisat ASAR image showing an oil slick in the Atlantic Ocean off the coast of
Spain. (Courtesy ESA.)
Japan’s earlier L-band radar satellite system, JERS-1. The ALOS PALSAR had a
cross-track pointing capability over a range of incident angles from 8° to 60°. In
its fine resolution mode, PALSAR collected either single- or dual-polarization
imagery. In its ScanSAR mode, PALSAR covered a large area with a resolution of
100 m. Finally, PALSAR also had a fully polarimetric mode, in which it collected
imagery in all four linear polarizations (HH, HV, VH, and VV).
Figure 6.45 shows a PALSAR image of Nagoya, a major Japanese port city.
The image was acquired on April 21, 2006. Numerous structures associated with
the port facilities can be seen in the image, along with bridges, ships, and water-
ways. The city’s airport, Chubu Centrair International Airport, is located on an
artificial island at the left side of the image. Plate 25a, discussed previously, provides an example of the use of ALOS PALSAR data for differential interferometry.
Following on the successes of ALOS, Japan’s national space agency launched
ALOS-2 in May 2014. The radar system on ALOS-2, PALSAR-2, shares many of
the characteristics of its predecessor, but with improved spatial resolution and
the ability to image on either the left or right side of its orbit track.
Table 6.5 lists the characteristics of ALOS-2’s radar imaging modes. Cur-
rently, ALOS-2 is the only radar satellite operating at the relatively long L-band
wavelength range (all other current and near-future proposed imaging radar satel-
lites operate at the shorter C- or X-bands). With its predecessors JERS-1 and
ALOS-1 also having employed this longer wavelength, Japan's space agency JAXA appears to
be filling a critical niche not met by any other spaceborne radar systems since the
brief Seasat-1 and SIR missions of 1978–1994.
Figure 6.45 ALOS PALSAR image of Nagoya, Japan, April. Scale 1:315,000. Black-and-white
copy of a color multipolarization L-band image. (Courtesy NASDA.)
6.15 RADARSAT
Radarsat-1, launched on November 28, 1995, was the first Canadian remote sen-
sing satellite. It was developed by the Canadian Space Agency in cooperation with
the United States, provincial governments, and the private sector. Canada was
responsible for the design, control, and operations of the overall system, while
NASA provided the launch services. Radarsat-1 long outlived its expected lifetime
of five years, finally ceasing operations on March 29, 2013. Among its major
accomplishments was the production of the first complete, high-resolution map
of Antarctica, as discussed at the beginning of this chapter.
The Radarsat-1 SAR was a C-band (5.6-cm) system with HH polarization. In
contrast to the ERS and JERS-1 systems that preceded it, Radarsat-1’s SAR could
be operated in a variety of beam selection modes providing various swath widths,
resolutions, and look angles. Virtually all subsequent spaceborne radars have
adopted this flexible approach.
Radarsat-1 was followed by Radarsat-2, launched on December 14, 2007. It
too employs a C-band system, but offers HH, VV, HV, and VH polarization
options. It has beam selection modes similar to Radarsat-1 but with an increased
number of modes available. These modes provide swath widths from 10 to
500 km, look angles from 10° to 60°, resolutions varying from 1 to 100 m, and
number of looks varying from 1 to 10.
The orbit for Radarsat-2 is sun synchronous and at an altitude of 798 km and
inclination of 98.6°. The orbit period is 100.7 min and the repeat cycle is 24 days.
The radar antenna can be operated in either right-looking or left-looking config-
urations, providing one-day repeat coverage over the high Arctic and approxi-
mately three-day repeat coverage at midlatitudes.
Table 6.6 and Figure 6.46 summarize the modes in which the system
operates.
Figure 6.46 Radarsat-2 beam modes. All beam modes are available as right- or left-looking relative to the subsatellite track; extended beams offer low incidence angles.

Rather than following Radarsat-2 with another single spacecraft, the Canadian Space Agency is developing plans for the Radarsat Constellation Mission, consisting of at least three identical systems to be launched beginning in 2018, with the possibility of adding another three satellites in the future. Like Radarsat-1 and -2, the satellites in the Radarsat Constellation Mission would operate at the C-band, in
a variety of geometric and polarimetric modes. One new operating mode will be a
“low noise” configuration for detecting oil slicks and flat sea ice on the ocean sur-
face, both of which can act as specular reflectors that return only very weak signals
Figure 6.47 Radarsat-1 image showing flooding of the Red River, Manitoba, Canada, May 1996. Standard beam
mode, incident angle 30° to 37°. Scale 1:135,000. (Canadian Space Agency 1996. Received by the Canada Centre
for Remote Sensing. Processed and distributed by RADARSAT International. Enhanced and interpreted by the
Canada Centre for Remote Sensing.)
to the radar sensor. With three to six identical satellites flying in formation, revisit times will be short, providing daily coverage opportunities over much of the globe, independent of cloud cover or time of day. The short interval between successive satellite passes will also facilitate repeat-pass interferometry by reducing the time lag between acquisitions and, correspondingly, the risk of temporal decorrelation.
The primary applications for which Radarsat-1 and -2 were designed, and
which are guiding the design of the Radarsat Constellation Mission, include ice
reconnaissance, coastal surveillance, land cover mapping, and agricultural and
forestry monitoring. The near-real-time monitoring of sea ice is important for
reducing the navigational risks of Arctic ships. Other uses include disaster mon-
itoring (e.g., oil spill detection, landslide identification, flood monitoring), snow
distribution mapping, wave forecasting, ship surveillance in offshore economic
zones, and measurement of soil moisture. The different operating modes of the
system allow both broad monitoring programs to be conducted as well as more
detailed investigations using the fine resolution mode.
Figure 6.47 shows flooding of the Red River, Manitoba, Canada, in May 1996.
The broad, dark area from lower right to the top of the image is smooth, standing
water. The lighter-toned areas to the left and upper right are higher, nonflooded,
ground. Where standing water is present under trees or bushes, corner reflection
takes place and the area appears very light toned. This is especially evident near
the Red River. (See also Figure 6.33 for an example of this effect.) The town of
Morris can be identified as a light-toned rectangle in the flooded area. The town is
protected by a levee and, as a result, has not flooded. Other, smaller areas that
have not flooded (but are surrounded by water) can also be seen in this image.
6.16 TERRASAR-X, TANDEM-X, AND PAZ
On June 15, 2007, the German Aerospace Center launched TerraSAR-X, a new
X-band radar satellite that provides imagery on demand at resolutions as fine as
1 m. A virtually identical twin satellite known as TanDEM-X was launched three
years later, on June 21, 2010. The two systems are operated in tandem, in a
bistatic configuration (Section 6.4) where one system transmits radar signals
and both record the amplitude and phase of the backscattered response, effec-
tively forming a large X-band single-pass interferometric radar. This provides
on-demand topographic mapping capability anywhere on earth at 2 m accuracy,
significantly better than anything previously available from space.
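The height sensitivity of such a single-pass interferometer can be illustrated with the standard height-of-ambiguity relation, the elevation difference that produces one full cycle of interferometric phase. The sketch below is illustrative only; the wavelength, slant range, look angle, and baseline are plausible X-band values, not the actual TanDEM-X mission parameters.

```python
import math

def height_of_ambiguity(wavelength_m, slant_range_m, look_angle_deg, perp_baseline_m):
    """Elevation change producing one full (2*pi) cycle of interferometric
    phase for a single-pass system in which one antenna transmits and
    both receive: h_amb = lambda * R * sin(theta) / B_perp."""
    theta = math.radians(look_angle_deg)
    return wavelength_m * slant_range_m * math.sin(theta) / perp_baseline_m

# Illustrative values: 3.1-cm X-band wavelength, 620-km slant range,
# 35-degree look angle, 250-m perpendicular baseline.
h_amb = height_of_ambiguity(0.031, 620e3, 35.0, 250.0)
print(f"height of ambiguity ~ {h_amb:.0f} m")
```

A larger cross-track baseline shrinks the height of ambiguity, increasing sensitivity to topography at the cost of a more easily decorrelated interferogram.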
For imaging purposes, the two satellites each offer a variety of modes. In the
standard stripmap mode, imagery is collected at approximately 3-m resolution,
over a swath width of 30 km (for single polarization data) or 15 km (dual polar-
ization). Several Spotlight modes provide 1- to 3-m resolution imagery over a
5-km to 10-km swath, again with the choice of single or dual polarization. Finally,
in ScanSAR mode the instruments cover a 100-km-wide swath at a spatial resolu-
tion of 3 m in range by 18 m in azimuth.
In their tandem configuration, TerraSAR-X and TanDEM-X follow orbit
tracks separated by approximately 200 to 500 m. The interferometric data col-
lected in this configuration are being used to develop a globally uniform topo-
graphic data set with 12.5-m by 12.5-m pixel spacing. Elevations in this DEM
have better than 2-m relative and 10-m absolute accuracy. When complete, it will
provide a replacement for the near-global but coarser resolution topographic data
from the Shuttle Radar Topography Mission, discussed in Section 6.19. The
initial data collection for this new global DEM took just over one year. The results
began to be made available in 2014, following a second year of data collection to
improve the accuracy in areas of rough terrain.
A third satellite based on the TerraSAR-X design, and flying in the same orbit,
is planned for launch in 2015. This system, known as PAZ (“peace” in Spanish),
will further extend the opportunities for data collection by this constellation of
X-band SARs, leading to very frequent revisit opportunities and more interfero-
metric image pairs for mapping the dynamic topography of the earth’s surface.
Figure 6.48 shows an example of TerraSAR-X imagery acquired in its High
Resolution Spotlight mode over a copper mine in Chuquicamata, Chile. The
mine, located in the Atacama Desert of northern Chile, is the largest copper mine
in the world by volume. The imagery consists of a mosaic of data from two sepa-
rate orbit tracks, with ground range resolutions ranging from 1.04 m to 1.17 m.
The large whorled pattern at upper left, with relatively dark texture, is the open
pit mine, with an access road spiraling into it. The bright dots within this pattern
are vehicles working in the mine. Other features in this image include buildings,
roads, and tailing piles associated with the mine operations.
Another High Resolution Spotlight mode image from TerraSAR-X is shown in
Figure 6.49. This image covers a portion of Charles de Gaulle Airport, the largest
airport in France, located outside Paris. The architectural design of Terminal 1,
located left of the center of the image, has been compared to the shape of an octopus. As in this example, radar images of airports tend to have high contrast, due to the close proximity of large flat surfaces (which appear dark, due to specular reflection of the radar signal away from the antenna) and of angular structures that act as corner reflectors (which appear very bright).
Figure 6.48 High Resolution Spotlight imagery from TerraSAR-X, over a copper mine in
Chuquicamata, Chile (scale 1:44,000). Full resolution of this image ranges from 1.04 m to
1.17 m. (Copyright: DLR e.V. 2009, Distribution Airbus DS/Infoterra GmbH.)
Figure 6.49 Charles de Gaulle Airport, France, shown in a TerraSAR-X High Resolution Spotlight
image (scale 1:67,000). Full resolution of this image is 2.4 m. (Copyright: DLR e.V. 2009, Distribution
Airbus DS/Infoterra GmbH.)
(Prior to 2014, the SRTM data for areas outside the U.S. were aggregated to 3 arcseconds, about 90 m.)
Figure 6.50 The SRTM extendable mast during prelaunch testing: (a) view of the mast
emerging from the canister in which it is stowed during launch and landing; (b) the mast fully
extended. (Courtesy NASA/JPL/Caltech.)
Figure 6.51 Artist’s rendition of the shuttle in orbit during SRTM, showing the
positions of the main antenna inside the payload bay, the canister, the mast, and the
outboard antennas. (Courtesy NASA/JPL/Caltech.)
Figure 6.52 Perspective view of a DEM of the Los Angeles area, derived from SRTM
interferometric radar data. A Landsat-7 ETM+ image has been draped over the DEM to show
land cover patterns. (Courtesy NASA/JPL/Caltech.)
and a Landsat-7 ETM+ image was draped over the DEM. This view is dominated
by the San Gabriel Mountains, with Santa Monica and the Pacific Ocean in the
lower right and the San Fernando Valley to the left.
Technical problems during the shuttle mission caused 50,000 km² of the targeted land area to be omitted by SRTM. These omitted areas represent less than
0.01% of the land area intended for coverage. All the omitted areas were located
within the United States, where topographic data are already available from other
sources.
As the first large-scale effort to utilize single-pass radar interferometry for
topographic mapping from space, the project has proved to be highly successful.
The resulting topographic data and radar imagery represent a unique and highly
valuable resource for geospatial applications. However, as globally consistent, higher-resolution spaceborne radar topographic data become available, such as the database now being compiled from TerraSAR-X and TanDEM-X interferometry (Section 6.16), the groundbreaking SRTM data set will likely come to be seen as an early stepping stone toward a world in which continually updated topographic data are available at increasingly high resolution across the whole earth. Figure 6.53 compares the 12.5-m resolution, globally consistent digital elevation data set from TerraSAR-X/TanDEM-X to the pre-2014 3-arcsecond (90-m) resolution SRTM elevation data for a site near Las Vegas, NV. Just as the SRTM project represented a dramatic improvement in the consistency and availability of digital elevation data worldwide a decade ago, the high-resolution DEMs currently being produced from TerraSAR-X/TanDEM-X interferometry represent a comparable advance today.
Figure 6.53 Comparison of digital elevation data from (a) the 3-arcsecond (90-m) resolution
SRTM data set to (b) the 12.5-m resolution elevation data currently being collected worldwide by
TerraSAR-X and TanDEM-X. (Courtesy German Aerospace Center – DLR.)
The proliferation of new radar satellites and multi-satellite constellations, and the diversity of operating modes offered by all of these systems, render the task of summarizing their capabilities a complex one; each operating mode represents a particular trade-off of resolution and swath width. However, none of these systems operates at more
than one wavelength band—a fact that testifies to the technical challenges
involved in designing spaceborne multi-wavelength radar systems.
6.21 RADAR ALTIMETRY
The radar systems described in the preceding sections of this chapter all are side-
looking instruments, designed to collect image data on a pixel-by-pixel basis
across a wide spatial area. Another class of radar remote sensing systems, how-
ever, is designed to look directly downward and measure the precise distance
from the radar antenna to the earth’s surface. These sensors are referred to as
radar altimeters, and while they do not normally produce image data per se, the
spatial data they collect on the earth’s oceans, lakes, ice sheets, land surface, and
seafloor are widely used in the geosciences and in practical applications ranging
from water resources management to monitoring and forecasting the behavior of
the El Niño/Southern Oscillation.
The basic principle behind radar altimetry is quite simple: The radar antenna
transmits a microwave pulse downward to the surface and measures the elapsed
time for the returning pulse. This in turn can be used to calculate the distance
from the antenna to the surface. If the position of the antenna is known accu-
rately, the elevation of the surface can be determined with a similar degree of
accuracy. While this appears straightforward, in practice there are several chal-
lenges to the design and operation of radar altimeters.
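The core of the range measurement is simply the two-way travel time scaled by the speed of light; a minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_echo(elapsed_s):
    """One-way antenna-to-surface distance from a two-way travel time."""
    return C * elapsed_s / 2.0

# A pulse returning after about 5.34 ms corresponds to an antenna
# roughly 800 km above the surface.
print(range_from_echo(5.34e-3))
```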
Many applications of radar altimetry require centimeter-scale resolution in
the vertical dimension. To ensure this, the transmitted pulse duration must be
very brief, on the order of nanoseconds. This in turn would require infeasibly high power for the transmitted signal. Radar altimeters (and many imaging
radars as well) circumvent this problem by using sophisticated signal processing
methods, such as a pulse compression approach that involves modulating the sig-
nal (referred to as “chirp”).
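The idea behind pulse compression can be sketched in a few lines: the altimeter transmits a long linear-FM (“chirp”) pulse at modest peak power, and the receiver correlates the returned signal against a replica of that pulse (a matched filter), collapsing the long pulse into a sharp peak at the echo delay. This is a conceptual toy, not a model of any particular instrument; the sample rate, pulse length, and bandwidth are arbitrary.

```python
import cmath

def linear_chirp(n, bandwidth_hz, fs_hz):
    """Complex baseband linear-FM pulse of n samples."""
    k = bandwidth_hz / (n / fs_hz)  # chirp rate, Hz per second
    return [cmath.exp(1j * cmath.pi * k * (i / fs_hz) ** 2) for i in range(n)]

def matched_filter(signal, pulse):
    """Magnitude of the cross-correlation of signal with pulse."""
    m = len(pulse)
    return [abs(sum(signal[lag + i] * pulse[i].conjugate() for i in range(m)))
            for lag in range(len(signal) - m + 1)]

fs, n = 1e6, 128                         # 1-MHz sampling, 128-sample pulse
pulse = linear_chirp(n, 4e5, fs)         # 400-kHz swept bandwidth
echo = [0j] * 300 + pulse + [0j] * 300   # noiseless echo delayed 300 samples
compressed = matched_filter(echo, pulse)
print(compressed.index(max(compressed)))  # peak lands at the echo delay: 300
```

The compressed peak has the energy of the full 128-sample pulse but the time resolution of its bandwidth, which is how altimeters achieve nanosecond-scale effective pulse lengths without nanosecond transmitters.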
During the signal’s transit from the antenna to the surface, it fans outward,
such that for spaceborne radar altimeters the normal diameter of the pulse’s foot-
print on the surface may be 5 to 10 km or more. The shape of the returning signal
recorded by the antenna (referred to as the waveform) is a composite of thou-
sands of individual echoes from scattering nodes within this broad footprint. The
roughness of the surface will affect the waveform, such that as the surface rough-
ness increases, it becomes more challenging to identify a specific elevation for the
altimeter’s footprint. Figure 6.54 illustrates the interaction between a transmitted
pulse from a radar altimeter and an ideal, flat surface. At time t = 1, the pulse has not yet intersected the surface. At t = 2, only the very center of the wavefront is being scattered back, producing a weak but rapidly increasing signal. At t = 3, a larger area is being illuminated, and because this area of illumination is still directly beneath the sensor, it sends back a strong echo. At t = 4 and t = 5, the
Figure 6.54 Schematic diagram of the signal transmitted and received by a radar altimeter.
wavefront is expanding outward away from the center of the altimeter’s footprint,
and the returned signal consequently becomes weaker. This produces the char-
acteristic waveform of a radar altimeter, with a steeply rising leading edge fol-
lowed by a gradually tapering trailing edge. Over a more complex surface with a
rougher texture, the front of the waveform will be less steep, the peak will be less
obvious, and the trailing edge will be noisier (Figure 6.55).
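The waveform just described can be reproduced with a simple flat-earth simulation: the returned power is proportional to the surface area lying between the pulse's leading and trailing edges, weighted by the antenna pattern. The altitude, pulse length, and Gaussian beam scale below are illustrative, not those of any specific sensor.

```python
import math

C = 3.0e8      # speed of light, m/s
H = 800e3      # platform altitude, m (illustrative)
TAU = 3e-9     # compressed pulse duration, s (illustrative)

def footprint_radius(dt):
    """Radius reached on a flat surface dt seconds after the pulse's
    leading edge first touches it (flat-earth approximation)."""
    return math.sqrt(C * dt * H) if dt > 0 else 0.0

def returned_power(dt, beam_scale_m=10e3):
    """Relative power: area between the leading- and trailing-edge rings,
    weighted by a Gaussian antenna pattern."""
    r_out = footprint_radius(dt)
    r_in = footprint_radius(dt - TAU)
    area = math.pi * (r_out ** 2 - r_in ** 2)
    return area * math.exp(-((r_out / beam_scale_m) ** 2))

# Sample the waveform at 1-ns intervals after first surface contact:
wave = [returned_power(i * 1e-9) for i in range(200)]
# Steep leading edge while the growing disk is illuminated, then a
# gradually tapering trailing edge as the annulus moves off-nadir.
```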
Since the early 1990s, spaceborne radar altimeters have been operated from a variety of platforms, including the ERS-1, ERS-2, and Envisat satellites (Section 6.13); the Topex/Poseidon mission (1992–2005) and its successors Jason-1 (2001–present) and OSTM/Jason-2 (2008–present), all three of which are joint efforts of the United States and France; and others. Of particular note, the ESA’s
Figure 6.55 Returning pulses from radar altimeter measurements
over a flatter surface (solid line a) and a rougher surface (dashed
line b).
radar altimetry satellite Cryosat-2 was successfully launched on April 8, 2010. The
Cryosat-2 mission is primarily focused on monitoring the seasonal and long-term
dynamics of the earth’s land ice and sea ice, including the large polar ice sheets
of Greenland and Antarctica, smaller ice caps and glaciers elsewhere, and the sea-
sonally expanding and contracting Arctic and Antarctic sea ice.
Plate 29 shows a global data set of ocean bathymetry, derived from analysis of
spatial variations in the long-term average height of the oceans as measured by
radar altimeters, primarily Topex/Poseidon. While one might assume that because
water flows downhill, the ocean surface must be flat, in reality there are irregula-
rities in “sea level” at both short and long time scales. Sea level temporarily rises
and falls in individual regions due to changes in local atmospheric pressure, wind
strength and direction, and the dynamics of ocean currents. Over long periods,
sea level tends to be higher in some areas than others, because the ocean surface
reflects the presence of ridges, valleys, and abyssal plains on the ocean floor.
(Conceptually, this is similar to the finer-scale representation of local bathymetry in the English Channel as seen in the Seasat-1 SAR imagery in Figure 6.35.)
Again, the radar altimeters do not see through the ocean directly; instead, they
map the surficial expression of broad-scale submarine features deep below the
surface.
6.22 PASSIVE MICROWAVE SENSING
Operating in the same spectral domain as radar, passive microwave systems yield
yet another “look” at the environment—one quite different from that of radar.
Being passive, these systems do not supply their own illumination but rather
sense the naturally available microwave energy within their field of view. They
operate in much the same manner as thermal radiometers and scanners. In fact,
passive microwave sensing principles and sensing instrumentation parallel those
of thermal sensing in many respects. As with thermal sensing, blackbody radia-
tion theory is central to the conceptual understanding of passive microwave sen-
sing. Again as in thermal sensing, passive microwave sensors exist in the form of
both radiometers and scanners. However, passive microwave sensors incorporate
antennas rather than photon detection elements.
Most passive microwave systems operate in the same spectral region as the
shorter wavelength radar (out to 30 cm). As shown in Figure 6.56, passive micro-
wave sensors operate in the low energy tail of the 300 K blackbody radiation
curve typifying terrestrial features. In this spectral region, all objects in the nat-
ural environment emit microwave radiation, albeit faintly. This includes terrain
elements and the atmosphere. In fact, passive microwave signals are generally
composed of a number of source components—some emitted, some reflected, and
some transmitted. Over any given object, a passive microwave signal might
include (1) an emitted component related to the surface temperature and material
attributes of the object, (2) an emitted component coming from the atmosphere,
Figure 6.56 Comparison of spectral regions used for thermal versus passive microwave
sensing.
(3) a surface-reflected component from sunlight and skylight, and (4) a trans-
mitted component having a subsurface origin. In short, the intensity of remotely
sensed passive microwave radiation over any given object is dependent not only
on the object’s temperature and the incident radiation but also on the emittance,
reflectance, and transmittance properties of the object. These properties in turn
are influenced by the object’s surface electrical, chemical, and textural character-
istics, its bulk configuration and shape, and the angle from which it is viewed.
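The faintness of this emission is easy to quantify with Planck's blackbody law: for a 300 K surface, spectral radiance at a representative passive-microwave wavelength (1.55 cm is used here purely as an example) is roughly eleven to twelve orders of magnitude below that near the 10-µm thermal-infrared emission peak.

```python
import math

H_PLANCK = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8           # speed of light, m/s
K_B = 1.380649e-23         # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 * sr * m)."""
    a = 2.0 * H_PLANCK * C ** 2 / wavelength_m ** 5
    b = math.expm1(H_PLANCK * C / (wavelength_m * K_B * temp_k))
    return a / b

ir = planck_radiance(10e-6, 300.0)    # near the 300-K emission peak
mw = planck_radiance(1.55e-2, 300.0)  # representative microwave wavelength
print(f"microwave/IR radiance ratio: {mw / ir:.1e}")
```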
Because of the variety of its possible sources and its extremely weak magni-
tude, the signal obtained from various ground areas is “noisy” compared to that
provided by cameras, scanners, or radars. The interpretation of this signal is thus
much more complex than that of the other sensors discussed. In spite of the diffi-
culties, passive microwave systems are widely used in applications as diverse as
measuring atmospheric temperature profiles, assessing snow water content,
tracking the extent of sea ice, and analyzing subsurface variations in soil, water,
and mineral composition.
Microwave Radiometers
readout and recording. (It should be noted that we have greatly simplified the
operation of a microwave radiometer and that many variations of the design
described here exist.)
Common to all radiometer designs is the trade-off between antenna beam-
width and system sensitivity. Because of the very low levels of radiation available
to be passively sensed in the microwave region, a comparatively large antenna
beamwidth is required to collect enough energy to yield a detectable signal.
Consequently, passive microwave radiometers are characterized by low spatial
resolution.
Profiling microwave radiometers are nonimaging devices that measure micro-
wave emissions in a single track beneath the aircraft or spacecraft. During
daylight operations, photography can be concurrently acquired to provide a visual
frame of reference for the profile data. Normally, the radiometer output is expres-
sed in terms of apparent antenna temperature. That is, the system is calibrated in
terms of the temperature that a blackbody located at the antenna must reach to
radiate the same energy as was actually collected from the ground scene.
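This calibration concept follows from the Nyquist relation for thermal noise, P = kTB: the power collected over the radiometer's bandwidth maps linearly to an apparent antenna temperature. The power and bandwidth below are illustrative numbers only.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def antenna_temperature(power_w, bandwidth_hz):
    """Blackbody temperature at the antenna that would deliver the
    measured noise power over the given bandwidth (P = k * T * B)."""
    return power_w / (K_B * bandwidth_hz)

# Roughly 4e-13 W collected over a 100-MHz channel corresponds to an
# apparent antenna temperature near 290 K.
print(antenna_temperature(4.0e-13, 100e6))
```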
Figure 6.58 Map showing sea ice distribution in the Arctic Ocean, derived from AMSR2 passive microwave radiometer imagery, June 2013. (Courtesy JAXA and Institute of Environmental Physics, University of Bremen.)
Figure 6.58 shows a map of sea ice in the Arctic Ocean, derived from AMSR2
imagery acquired on June 10, 2013. Northern Greenland and the Canadian Arctic
archipelago are located at the lower center and left of this image, with Hudson
Bay in the far lower left corner, while the northern coast of Siberia and its off-
shore islands are at the upper center and upper right. The methods used to derive
sea ice concentration from AMSR2 microwave radiometry data are based on
those described in Spreen et al. (2008) for an earlier generation of imaging micro-
wave radiometer, AMSR-E.
In this map, the brightest tones represent 100% cover of sea ice, while darker
tones represent lower sea ice concentrations. In the Arctic, some ice persists from
year to year in the central part of the basin, while the outer extent of the ice
expands and contracts with the seasons. The extent of ice cover in the Arctic has
been gradually decreasing since 1979, and recent years have seen particularly
striking declines in sea ice in this region. Some climate models suggest that the
Arctic Ocean will become nearly or completely free of multiyear ice later in this
century, while extrapolation of trends from the past three decades suggests that
the disappearance of multiyear ice could happen much earlier. Sea ice plays an
important role in the global climate cycle, modulating transfers of salinity within
the ocean and transfers of heat and water vapor from the ocean to the atmo-
sphere. Thus, efforts to monitor its extent are an essential component of research
programs in global climate variability and climate change.
6.23 BASIC PRINCIPLES OF LIDAR
Lidar (which stands for light detection and ranging), like radar, is an active remote
sensing technique. This technology involves transmitting pulses of laser light
toward the ground and measuring the time of pulse return. The return time for
each pulse back to the sensor is processed to calculate the distances between the
sensor and the various surfaces present on (or above) the ground.
The use of lidar for accurate determination of terrain elevations began in the
late 1970s. Initial systems were profiling devices that obtained elevation data only
directly under the path of an aircraft. These initial laser terrain systems were
complex and not necessarily suited for cost-effective terrain data acquisition over
large areas, so their utilization was limited. (Among the primary limitations was
the fact that neither airborne GPS nor Inertial Measurement Units, or IMUs, were
yet available for accurate georeferencing of the raw laser data.) One of the more
successful early applications of lidar was the determination of accurate water
depths. In this situation the first reflected return records the water surface, closely
followed by a weaker return from the bottom of the water body. The depth of the
water can then be calculated from the differential travel time of the pulse returns
(Figure 6.59).
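The depth calculation follows directly from the differential travel time, remembering that light travels at roughly c/1.33 in water and traverses the depth twice; the 45-ns lag below is an illustrative value.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s
N_WATER = 1.33     # approximate refractive index of water

def water_depth(lag_s):
    """Depth from the time lag between the surface and bottom returns."""
    return (C / N_WATER) * lag_s / 2.0

# A bottom echo arriving 45 ns after the surface echo implies a depth
# of roughly 5 m.
print(water_depth(45e-9))
```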
The advantages of using lidar to supplement or replace traditional photo-
grammetric methodologies for terrain and surface feature mapping stimulated
Figure 6.60 Components of an airborne scanning lidar system. (Courtesy EarthData International
and Spencer B. Gross, Inc.)
In addition to rapid pulsing, modern systems are able to record five or more
returns per pulse, which permits these systems to discriminate not only such fea-
tures as a forest canopy and bare ground but also surfaces in between (such as
the intermediate forest structure and understory). Figure 6.61 illustrates a theore-
tical pulse emitted from a lidar system and traveling along a line of sight through
Figure 6.61 Lidar pulse recording multiple returns as various surfaces of a forest canopy are “hit.”
(Courtesy EarthData International.)
a forest canopy and recording multiple returns as various surfaces are “hit.” In
this case, the first return from a transmitted pulse represents the top of the forest
canopy at a given location. The last return may represent the ground surface, if
there are sufficient gaps in the canopy for portions of the transmitted pulse to
reach the ground and return, or a point above the ground surface if the canopy is
too dense. In urban areas, unobstructed surfaces such as building roofs will pro-
duce a single return, while areas with trees present will produce multiple returns
from the tree canopy and ground surface.
Depending on the surface complexity (e.g., variable vegetation heights, terrain
changes) and other mission parameters, lidar data sets can be remarkably large.
6.24 LIDAR DATA ANALYSIS AND APPLICATIONS
Raw lidar data collected by the sensor are processed using GPS differential cor-
rection and filtered for noise removal, then prepared as a file of X, Y, Z points.
These can be visualized as a point cloud (Section 1.5 and Chapter 3) with the
reflecting point for each returning pulse represented as a point in three-dimensional space. Figure 6.62 shows a lidar point cloud for a stand of needle-leaved evergreen trees in the Sierra Nevada. The overall shape of each individual
tree crown can be discerned within the point cloud. Before further processing or
analysis, these lidar data points are usually classified to isolate different types of
features and surfaces. The simplest classification would categorize points as
“ground” or “non-ground,” but it is preferable to provide more detailed categories
such as ground, buildings, water, low and high vegetation classes, and noise.
Typically, this process involves a mixture of automated classification and manual
editing. Once the lidar points have been classified, they can be used to create a
variety of derived products, ranging from DEMs and other three-dimensional
models to high-resolution contour maps. A distinct advantage of lidar is that all the data are georeferenced from inception, making them inherently compatible with GIS applications.
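As a conceptual illustration of the ground/non-ground step, the toy filter below labels points lying within a tolerance of the lowest elevation in each grid cell as ground. Operational classifiers (for example, progressive TIN densification) are considerably more sophisticated; the cell size and tolerance here are arbitrary.

```python
def classify_ground(points, cell_size=5.0, tol=0.3):
    """Label each (x, y, z) point 'ground' if it lies within tol meters
    of the lowest elevation observed in its grid cell."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        lowest[key] = min(lowest.get(key, z), z)
    labeled = []
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cls = "ground" if z - lowest[key] <= tol else "non-ground"
        labeled.append((x, y, z, cls))
    return labeled

pts = [(1.0, 1.0, 100.0), (2.0, 2.0, 100.2), (3.0, 1.5, 112.0)]
print([cls for *_, cls in classify_ground(pts)])
# ['ground', 'ground', 'non-ground'] -- the 112-m return is a canopy hit
```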
Figure 6.63 shows a small portion of an airborne lidar data set collected over
the coast of West Maui, Hawaii. The lidar data were collected using a Leica Geo-
systems ALS-40 Airborne Laser Scanner from an altitude of 762 m, over a 25°
field of view. The scan rate was 20,000 pulses per second, and the lidar ground
Figure 6.62 Lidar point cloud over a forest in the Sierra Nevada, California. (Courtesy of
Qinghua Guo and Jacob Flanagan, UC Merced.)
points were spaced 2 m apart. The position and orientation of the system were measured in flight using GPS and an IMU, with a stationary GPS receiver located at a
nearby airport for differential correction (see Chapter 1). The resulting elevation
measurements have a vertical accuracy of 16 cm root mean square (RMS) error.
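The survey geometry above implies the swath width directly: for a symmetric field of view over flat terrain, the swath is 2H·tan(FOV/2). A quick check with the quoted parameters:

```python
import math

def swath_width(altitude_m, fov_deg):
    """Ground swath of a scanning lidar with a symmetric field of view,
    assuming flat terrain beneath the aircraft."""
    return 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))

# West Maui survey parameters: 762-m altitude, 25-degree field of view.
print(round(swath_width(762.0, 25.0)))  # ~338-m swath
```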
Many small topographic features can be identified in the lidar data shown in
Figure 6.63. The left (west) side of the lidar swath includes a narrow strip of the
ocean surface, on which individual waves are visible. A narrow section of land
between the shoreline and the Honoapiilani coastal highway shows many small
streets, houses, and trees. Across the highway, lands formerly used for growing
sugar cane extend up-slope, interrupted by narrow, steep-sided gullies. At the
northern end of the image, numerous features associated with a golf course at
Kapalua can be identified. The inset (with dimensions of 1000 m by 2200 m)
shows an airport runway, numerous small structures, and individual trees and
shrubs located in one of the large gullies.
Plate 28 shows an airborne lidar image of the coastline around Reid State
Park in Sagadahoc County, Maine. The lidar data were acquired in May, with a
spacing of 2 m between lidar points on the surface. The data were used to create a
Figure 6.63 Shaded-relief DEM from Leica Geosystems ALS-40 Airborne Laser Scanner data, West
Maui, Hawaii. Scale 1:48,000 (main image), 1:24,000 (inset). (Courtesy NOAA Coastal Services Center.)
DEM, which is represented here in shaded relief and with color-coded elevations.
(See Chapter 1 for a discussion of methods for representing topographic data.)
The nearshore waters of the Gulf of Maine appear blue in this visualization. Low-
lying land areas are represented in green. In this area, many such low-lying areas
are salt marshes, which have a uniform, flat appearance in this DEM, interrupted
only by subtle but distinctly visible networks of tidal channels. A long linear fea-
ture parallel to the shoreline at the right side of the image is a line of dunes along
a barrier spit. A road and causeway can be seen crossing the salt marsh behind
this line of dunes. Other roads, as well as buildings and other features, can be
seen elsewhere in the lidar visualization. At elevations above the tidal marshes,
the DEM has a much rougher appearance, indicative of the heavily forested land-
scape. Like Figure 6.63, Plate 28 shows a digital surface model (DSM) where the
first lidar return from each point is used to represent the top of the canopy in
vegetated areas.
The ability to detect multiple returning pulses from a plant canopy and the
ground surface is one of the most revolutionary aspects of lidar remote sensing.
For topographic mapping applications, this allows the creation of a bare-earth
DEM by using only the last pulse received from each point. On the other hand, it
is also possible to use the multiple returning echoes to characterize the trees,
shrubs, and other vegetation present within the “footprint” of each transmitted
pulse. Plate 1 and Figure 1.21, both discussed previously in Chapter 1, show
examples of the separation of aboveground vegetation from the terrain surface via
analysis of early and late returns in spatially dense lidar data sets. The canopy
height model in Plate 1c is typical of what might be used in forest management
applications to monitor forest growth, estimate the volume of timber in a stand or
the storage of carbon in aboveground woody biomass, and create maps of fea-
tures ranging from wildlife habitat to potential wildfire fuel loadings.
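A canopy height model of the kind shown in Plate 1c is, at its simplest, a per-cell difference between the first-return surface (DSM) and the bare-earth DEM; the grids below are made-up elevations for illustration.

```python
def canopy_height_model(dsm, dem):
    """Per-cell canopy height: first-return surface minus bare-earth
    elevation, clamped at zero to suppress small negative artifacts."""
    return [[max(s - g, 0.0) for s, g in zip(s_row, g_row)]
            for s_row, g_row in zip(dsm, dem)]

dsm = [[215.2, 230.8], [214.9, 228.1]]  # first-return elevations, m
dem = [[214.8, 213.5], [214.6, 213.9]]  # ground elevations, m
chm = canopy_height_model(dsm, dem)
# Heights near zero over open ground, ~14-17 m over the tree crowns.
```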
At fine spatial scales, much research has focused on using horizontally and
vertically detailed canopy height measurements to develop geometric models
of individual tree crown shapes. This can be done using either lidar or multi-
wavelength interferometric radar systems. (Radar interferometry was discussed in
Section 6.9.) Figure 6.64 shows a side view of models of individual trees, along
with remotely sensed point-measurements of the tree crowns (light-toned
spheres) and the ground surface (darker spheres). In Figure 6.64a, a dual-
wavelength airborne interferometric radar system known as GeoSAR was used
to collect imagery at the X-band (3 cm) and P-band (85 cm) wavelengths. The
short-wavelength X-band is sensitive to the top of the canopy, while the long-
wavelength P-band primarily passes through the canopy and is backscattered
from the ground surface. In Figure 6.64b, an airborne lidar sensor was used to
measure the same stand of trees, recording multiple returns from each trans-
mitted pulse. The resulting “point cloud” can then be analyzed to create models of
the structures of individual trees (Andersen, Reutebuch, and Schreuder, 2002).
Collecting lidar data or interferometric radar imagery over the same stand on
multiple dates permits estimation of tree growth rates.
Another current research topic in lidar is full-waveform analysis (Mallet and
Bretar, 2009). Whereas “traditional” lidar systems may record one or more
discrete returns from each transmitted pulse, full-waveform lidars digitize the
continuous returning signal at a uniform sampling frequency. In essence, instead
Figure 6.64 Models of tree structures at the Capitol Forest site, Washington State.
(a) From interferometric radar. (b) From lidar. (Courtesy Robert McGaughey, USDA
Forest Service PNW Research Station.)
of recording the timing of four or five peaks of the returning wave, these systems
record the entire shape of the wave itself. Typically, a full-waveform lidar analysis
involves sampling the shape of the lidar pulse at 1 nanosecond (ns) intervals, with
the signal amplitude at each sampling interval recorded as an 8-bit digital num-
ber. The result of this process is a digital lidar waveform that can be considered
as analogous to the digital spectral signature from a hyperspectral system. By
analyzing this waveform, it is possible to more fully characterize the physical
properties of the medium (e.g., a plant canopy) through which the transmitted
signal passes on its way to the ground surface. Full-waveform lidar data can also
be used for more efficient automated classification of types of lidar point returns.
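A minimal sketch of the digitization just described, using a synthetic two-echo return (one canopy echo, one ground echo) sampled at 1-ns intervals and quantized to 8-bit digital numbers; the pulse shapes, amplitudes, and detection threshold are illustrative assumptions:

```python
import numpy as np

# Synthetic returning signal: two Gaussian echoes (canopy at 40 ns, ground at 100 ns)
t = np.arange(0, 160, 1.0)                      # 1-ns sampling interval
signal = (0.9 * np.exp(-((t - 40) / 4.0) ** 2)  # canopy return
          + 0.6 * np.exp(-((t - 100) / 3.0) ** 2))  # ground return

# Quantize each sample to an 8-bit digital number (0-255)
waveform = np.clip(np.round(signal * 255), 0, 255).astype(np.uint8)

# Simple peak detection: local maxima above a noise threshold
peaks = [i for i in range(1, len(waveform) - 1)
         if waveform[i] > waveform[i - 1]
         and waveform[i] >= waveform[i + 1]
         and waveform[i] > 30]
```

A discrete-return system would report only the two peak times; the full waveform retains the echo widths and shapes, which carry information about canopy structure.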
One example of such a system is NASA’s Experimental Advanced Airborne
Research Lidar (EAARL). When acquired from an altitude of 300 m, EAARL lidar
spots have a 20-cm diameter on the surface. The returning wave from each spot is
digitized at a resolution of 1 ns, corresponding to a vertical precision of 14 cm in
air or 11 cm in water. This system has been used in coastal and marine mapping
surveys, including studies of emergent coastal vegetation communities and coral
reefs (Wright and Brock, 2002).
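The conversion from timing resolution to vertical precision follows from the pulse's two-way travel: Δz = cΔt/(2n), where n is the refractive index of the medium. A sketch using nominal values (n ≈ 1.00 for air, n ≈ 1.33 for water), which give figures close to those quoted for EAARL:

```python
C = 2.998e8  # speed of light in vacuum, m/s

def vertical_precision(dt_s, n):
    """Vertical distance corresponding to one timing interval for a
    round-trip pulse in a medium of refractive index n."""
    return C * dt_s / (2.0 * n)

dz_air = vertical_precision(1e-9, 1.00)    # ~0.15 m per ns in air
dz_water = vertical_precision(1e-9, 1.33)  # ~0.11 m per ns in water
```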
A third area of active research is the use of lidar data for detecting and
quantifying changes in the three-dimensional position, shape, and volume of
objects or surfaces. By acquiring repeat lidar data sets over a target area at
multiple points in time, such changes can be detected and measured
(Figure 6.65).
Figure 6.65 Lidar monitoring of coastal erosion and sand bar migration at the mouth of the
Morse River, Popham Beach, Maine. Shaded relief renderings of DEMs from airborne lidar data.
(a) May 5, 2004. (b) June 6, 2007. (c) May 24, 2010. (Scale 1:12,000). (Data courtesy NOAA
Coastal Services Center.)
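The kind of change detection shown in Figure 6.65 reduces, at its simplest, to differencing co-registered DEMs from the repeat acquisitions. A sketch with small synthetic grids (the elevation values and the 2-m cell size are illustrative, not the Popham Beach data):

```python
import numpy as np

cell_area = 2.0 * 2.0  # m^2 per DEM cell, assuming 2-m post spacing

# Two co-registered DEMs (elevations in meters) from different dates
dem_t1 = np.array([[5.0, 5.2, 4.8],
                   [4.9, 5.1, 4.7],
                   [4.5, 4.6, 4.4]])
dem_t2 = np.array([[4.2, 4.5, 4.6],
                   [4.0, 4.3, 4.5],
                   [4.4, 4.5, 4.4]])

dz = dem_t2 - dem_t1                       # elevation change per cell
erosion = dz[dz < 0].sum() * cell_area     # volume lost where surface dropped (m^3)
deposition = dz[dz > 0].sum() * cell_area  # volume gained elsewhere (m^3)
```

In practice the DEMs must be carefully co-registered first, since horizontal misalignment on sloping terrain masquerades as elevation change.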
Figure 6.66 Lidar intensity image of a portion of the Lewiston/Nez Perce County Airport,
Idaho. The data were acquired by a Leica Geosystems ALS 40 Airborne Laser Scanner at 24,000
pulses per second and 2-m post spacing. (Courtesy of i-TEN Associates.)
The first free-flying satellite lidar system was launched successfully on January
12, 2003, and was decommissioned on August 14, 2010. This system, the Ice,
Cloud, and Land Elevation Satellite (ICESat), was a component of NASA’s Earth
Observing System (Section 5.19). ICESat carried a single instrument, the
Geoscience Laser Altimeter System (GLAS), which included lidars operating in the
near infrared and visible, at 1.064 and 0.532 μm, respectively. GLAS transmitted
40 times per second, at an eye-safe level of intensity, and measured the timing of
the received return pulses to within 1 ns. The laser footprints at ground level
were approximately 70 m in diameter and were spaced 170 m apart along-track.
The primary purpose of ICESat was to collect precise measurements of the
mass balance of polar ice sheets and to study how the earth’s climate affects ice
sheets and sea level. The ICESat lidar was designed to measure very small chan-
ges in ice sheets when data are averaged over large areas. This permits scientists
to determine the contribution of the ice sheets of Greenland and Antarctica to
global sea level change to within 0.1 cm per decade. In addition to measurements
of surface elevation, GLAS was designed to collect vertical profiles of clouds and
aerosols within the atmosphere, using sensitive detectors to record backscatter
from the 0.532-μm-wavelength laser.
Figure 6.67 shows an example of GLAS data acquired on two passes over a
lake in the desert in southern Egypt. Figure 6.67a shows the individual GLAS-
illuminated spots (from a data acquisition on October 26, 2003) superimposed on
a MODIS image taken on the same date. Figure 6.67b shows a cross section along
the GLAS transect. In this cross section, the solid line represents the topography
of the surface as obtained from a Shuttle Radar Topography Mission (SRTM)
DEM, acquired in February 2000, before this lake basin was filled. Thus, this line
represents the bathymetry of the lake, or the lake bottom elevations along the
transect.

Figure 6.67 ICESat GLAS data over southern Egypt. (a) GLAS illuminated spots
superimposed on MODIS imagery, October 26, 2003. (b) Cross section showing two
dates of GLAS measurements (points), along with surface elevations from SRTM DEM
data. (From Chipman and Lillesand, 2007; courtesy Taylor & Francis, International
Journal of Remote Sensing, http://www.tandf.co.uk/journals.)

The dark points in Figure 6.67b represent the GLAS elevation measurements from
October 26, 2003. Where those points fall on land, they correspond
closely to the ground surface elevation obtained from the SRTM DEM. Where
they extend across the lake, they show the water level on that date. For compar-
ison, the gray points in Figure 6.67b are GLAS elevation measurements from
October 31, 2005. During the intervening years, the GLAS measurements indicate
that the lake level dropped by 5.2 m (Chipman and Lillesand, 2007). When
combined with measurements of the change in the lake's surface area, these
elevation data also permit calculation of changes in lake volume.
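The volume change can be approximated as a trapezoidal prism, ΔV ≈ Δh(A₁ + A₂)/2, where A₁ and A₂ are the lake's surface areas on the two dates. Only the 5.2-m drop below comes from the text; the areas are hypothetical placeholders:

```python
def lake_volume_change(dh_m, area1_m2, area2_m2):
    """Approximate volume change from a water-level change dh and the
    lake surface areas on the two dates (trapezoidal approximation)."""
    return dh_m * (area1_m2 + area2_m2) / 2.0

# 5.2-m level drop (from the GLAS measurements); areas are hypothetical
dv = lake_volume_change(-5.2, 120e6, 80e6)  # volume change in m^3
```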
A follow-on mission, ICESat-2, is currently planned for launch in 2016. This
system will continue its predecessor’s studies of polar and montane ice sheets and
sea ice and will also contribute to studies of global vegetation biomass. It will
employ a visible (green) laser with a wavelength of 0.532 mm, but while ICESat-1
had a pulse rate of 40 transmitted pulses per second, ICESat-2 will have a rate of
approximately 10,000 pulses per second, closer to the rate of an airborne lidar.
The laser will also be split into six beams, grouped into three pairs each 3.3 km
apart. Along the track, the measurement spacing will be 70 cm, thanks to the high
pulse rate. This configuration should allow greatly improved spatial detail in ICE-
Sat-2’s observations of global ice and ecosystems.
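The along-track spacings quoted for GLAS and ICESat-2 follow directly from the ratio of orbital ground speed to pulse rate. A sketch assuming a nominal ground-track speed of about 6.9 km/s:

```python
def along_track_spacing(ground_speed_m_s, pulse_rate_hz):
    """Distance between successive laser footprints along the ground track."""
    return ground_speed_m_s / pulse_rate_hz

v = 6.9e3  # nominal ground-track speed in m/s (approximate assumption)
glas_spacing = along_track_spacing(v, 40)        # ~170 m, as for GLAS
icesat2_spacing = along_track_spacing(v, 10_000) # ~0.7 m, as for ICESat-2
```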
As we have seen in this chapter, radar and lidar remote sensing are rapidly
expanding the frontiers of remote sensing. With applications as diverse as mea-
suring landscape subsidence from groundwater extraction (via differential radar
interferometry) to modeling the shapes of individual tree crowns in lidar point
clouds, these systems are providing entirely new capabilities that were not even
conceived of in previous decades. While these advances are dramatic enough in
their own right, the field of remote sensing is also benefiting from the synergistic
interactions of multiple sensors. By bringing together information from multiple
sources, based on diverse principles and technologies—be they high-resolution
commercial satellite systems, space-based interferometric radar, or airborne lidar
systems—remote sensing is helping us improve our understanding of the earth as
a system and our own “position” in the environment. In the following two chap-
ters, we will discuss many different ways to analyze and interpret the information
coming from these diverse sensors. Chapter 7 will focus on the use of digital
image processing techniques, while Chapter 8 will examine the wide array of
applications that have been developed using remotely sensed imagery.