20 Sensors for Missions
Luis Mejias, John Lai, and Troy Bruggemann
Contents
20.1 Introduction
20.2 Navigation Sensors
20.2.1 Electro-Optical (EO)
20.2.2 Radio-Wave Sensors
20.3 Applications
20.3.1 Collision Avoidance
20.3.2 Remote Sensing: Power Line Inspection and Vegetation Management
References
Abstract
An onboard payload may be seen in most instances as the raison d'être
for a UAV: it defines the vehicle's capabilities, usability, and hence market value.
Large and medium UAV payloads differ significantly in size and computing
capability from those of small UAVs. The latter have stringent size, weight, and
power requirements, typically referred to as SWaP, while the former exhibit a
seemingly endless appetite for computing capability. The tendency for these larger
UAVs (Global Hawk, Hunter, Fire Scout, etc.) is to increase payload density and
hence processing capability. An example of this approach is the Northrop Grumman
MQ-8 Fire Scout helicopter, which has a modular payload architecture that
incorporates off-the-shelf components. Regardless of UAV size and capability,
advances in the miniaturization of electronics are enabling the replacement of
multiprocessing, power-hungry general-purpose processors with more integrated
and compact electronics (e.g., FPGAs).
20.1 Introduction
There are two main categories of payloads onboard a UAV: those that allow the
vehicle to navigate and those that allow the vehicle to perform its main task. The
distinction between them is sometimes very subtle. In some cases, a payload intended
to perform a task can be used for navigation purposes (e.g., cameras), and navigation
payloads may in some instances be integrated with other sensors to perform a
specific task (e.g., GPS/INS with LIDAR). This section describes common
sensors found in many typical payloads onboard UAVs.
At the core of most UAV guidance and navigation systems, one can find Global
Navigation Satellite Systems (GNSS), including the Global Positioning System
(GPS), and Inertial Navigation Systems (INS). Their complementary nature has long
been recognized, and as a result, GPS and INS are the preferred sensor pair for
the majority of autopilot systems.
The integration of GPS and INS is without doubt the area in which researchers
have spent the most effort, proposing approaches such as uncoupled integration,
loosely coupled integration, tightly coupled integration, and deeply coupled
integration (Grewal et al. 2007). GPS and INS are not the only two sensors used
for navigation. They can be complemented with altimeters (laser-based, barometric,
etc.) to enhance the estimation of the vehicle state. Additionally, infrared attitude
sensors are typically found in micro UAVs. Recently, Cao et al. (2010) presented a
survey of UAV autopilot alternatives and typical sensor combinations.
At the heart of the integration scheme lies an estimator (usually a form of Kalman
filter) which estimates position, velocity, attitude, GPS errors, and inertial sensor
errors. Due to their complementary nature, GPS and INS are often the preferred
core sensor suite. However, researchers have also investigated the integration of
other sensor combinations, such as GPS with computer vision (Dusha et al. 2011;
Wein et al. 2011; Dusha and Mejias 2012) and INS with computer vision (Merz
et al. 2006). Additionally, factors such as the trade-off between cost and accuracy
of INS sensors, and the susceptibility of GPS to spoofing and jamming, have also
contributed to increased interest in alternative sensor combinations.
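To make the integration scheme concrete, the following is a minimal sketch of a loosely coupled filter in one dimension: the INS propagates position and velocity from accelerometer readings at high rate, while low-rate GPS position fixes correct the state and estimate the accelerometer bias. All numerical values (noise variances, rates, bias) are invented for illustration; a practical filter works in three dimensions with attitude and GPS error states.

```python
import numpy as np

# Minimal 1-D loosely coupled GPS/INS sketch (illustrative values only).
# State x = [position, velocity, accelerometer bias]: the INS propagates
# position and velocity from accelerometer readings at high rate, while
# 1 Hz GPS position fixes correct the state and estimate the sensor bias.

dt = 0.01                                  # INS update period (100 Hz)
F = np.array([[1.0, dt, -0.5 * dt**2],     # bias enters the state through
              [0.0, 1.0, -dt],             # the integrated acceleration
              [0.0, 0.0, 1.0]])
B = np.array([0.5 * dt**2, dt, 0.0])       # input: measured acceleration
H = np.array([[1.0, 0.0, 0.0]])            # GPS observes position only

Q = np.diag([1e-6, 1e-4, 1e-8])            # process noise (tuning values)
R = np.array([[4.0]])                      # GPS position variance (2 m sigma)

x, P = np.zeros(3), np.eye(3)
rng = np.random.default_rng(0)
true_bias = 0.05                           # m/s^2, stationary vehicle

for k in range(5000):
    accel = true_bias + rng.normal(0.0, 0.02)   # true acceleration is zero
    x = F @ x + B * accel                       # INS propagation
    P = F @ P @ F.T + Q
    if k % 100 == 0:                            # GPS correction at 1 Hz
        z = rng.normal(0.0, 2.0)                # position fix (truth = 0)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P

print(f"estimated accelerometer bias: {x[2]:.3f} m/s^2 (truth {true_bias})")
```

Because position fixes arrive while the model integrates the biased acceleration, the bias state becomes observable and converges toward the true value, which is precisely the benefit of fusing the two complementary sensors.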
While alternative integration schemes using other sensors are an attractive option,
the low cost and the future navigation availability and integrity that Space-Based
Augmentation Systems (SBAS) such as WAAS, EGNOS, GAGAN, and MSAS will
provide cannot be ignored. Submeter accuracy for civilian users will also become
possible with the commissioning of Galileo and Compass and the modernization of
GLONASS. This in turn will encourage interoperability and the possibility of a
triple-frequency civilian-band GPS over the next decades.
For more details on camera models and calibration theory, refer to
Forsyth and Ponce (2002) and Szeliski (2011).
20.2.1.2 Infrared
An infrared (IR) camera is a device that detects and converts light in the same way
as a common visible-spectrum camera, but is sensitive to light at longer wavelengths.
IR cameras form an image using infrared radiation at wavelengths from
5,000 nm (5 µm) to 14,000 nm (14 µm) and are used to convey a
measure of the thermal radiation of bodies. The intensity of each pixel can be converted
for use in temperature measurement, where the brightest parts of the image (colored
white) represent the warmest temperatures. Intermediate temperatures are shown as
reds and yellows, and the coolest parts are shown in blue. Often, IR images are
accompanied by a scale next to a false-color image to relate colors to temperatures.
The resolution of IR cameras (up to a maximum of 640 × 480 pixels) is considerably
lower than that of optical cameras. Furthermore, the price of IR cameras
is considerably higher than that of their visible-spectrum counterparts.
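The intensity-to-temperature conversion and false-color mapping described above can be sketched as follows. The linear calibration gain and offset are invented placeholders; real radiometric cameras ship with per-unit calibration data from the manufacturer.

```python
import numpy as np

def ir_to_false_color(raw, gain=0.04, offset=-273.15):
    """Map raw IR sensor counts to a simple blue-to-white false-color image.

    Assumes a linear radiometric calibration T = gain * counts + offset;
    the gain/offset values here are illustrative placeholders only.
    """
    temp_c = gain * raw.astype(np.float64) + offset        # per-pixel temperature
    t = (temp_c - temp_c.min()) / (np.ptp(temp_c) + 1e-9)  # normalize to 0..1
    # Piecewise ramp: blue (coolest) -> red/yellow (intermediate) -> white (hottest)
    r = np.clip(3 * t - 0.5, 0, 1)
    g = np.clip(3 * t - 1.5, 0, 1)
    b = np.clip(1 - 3 * t, 0, 1) + np.clip(3 * t - 2.5, 0, 1)
    return np.dstack([r, g, b])                            # H x W x 3 in [0, 1]

frame = np.random.randint(6500, 7500, size=(480, 640))     # synthetic sensor counts
rgb = ir_to_false_color(frame)
print(rgb.shape)                                           # (480, 640, 3)
```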
Infrared cameras can be categorized into two main groups:
require good knowledge of what specific properties are to be measured and how
they relate to the actual measurement made by the sensor. For example, a
single pixel position in an image will have a set of brightness (or intensity) levels
for each wavelength (spectral band). Different materials under examination by
a hyperspectral sensor will often exhibit different intensity-versus-wavelength
relationships. Hence, with some prior knowledge, hyperspectral image data can be
useful for identifying the type or composition of materials.
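One common way to exploit such prior knowledge is the spectral angle mapper, which compares the shape of a pixel's spectrum against a library of reference spectra. The following is a minimal sketch; the three-material library and band count are invented for illustration.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum.

    Smaller angles indicate more similar intensity-vs-wavelength shapes;
    the measure is insensitive to overall brightness (illumination).
    """
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference library: spectra over 50 bands for three materials
bands = 50
library = {
    "vegetation": np.linspace(0.1, 0.8, bands),
    "soil":       np.linspace(0.4, 0.5, bands),
    "water":      np.linspace(0.3, 0.05, bands),
}

# A noisy observed spectrum resembling vegetation
pixel = np.linspace(0.12, 0.75, bands) + np.random.normal(0, 0.01, bands)
best = min(library, key=lambda m: spectral_angle(pixel, library[m]))
print("best match:", best)   # expected: vegetation
```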
Cost and complexity are the two major drawbacks of this technology. Storage is
another limiting factor given the multidimensional nature of hyperspectral datasets.
The availability of graphics processing units (GPUs) as powerful parallel processing
hardware could bring this technology a step closer to widespread adoption.
However, further research efforts are needed to create analytical techniques and
algorithms to unleash the full potential of hyperspectral imaging.
However, LIDAR shares many of the SWaP obstacles that are faced by radar
technology. Hence, LIDARs are not often found on micro/small UAVs, although they
have been flown on light general aviation aircraft (Ara 2011). Recent advances
have been made, an example of which is the Riegl LMS-Q160 LIDAR, which weighs
less than 15 kg and draws around 60 W (Riegl 2011), paving the way for the use of
LIDARs on smaller UAVs in the future.
The pace of innovation seems to be accelerating. Perhaps this is a result of
the challenging economy, but without doubt, the miniaturization of electronics and
the availability of small, power-efficient, and computationally capable hardware will
facilitate the adoption of UAVs in many civilian contexts.
20.3 Applications
conservative size, weight, and power (SWaP) budgets. Furthermore, sensors supporting
high-speed IEEE 1394 and IEEE 802.3-2008 (Gigabit Ethernet) communication
interfaces are readily accessible as commercial off-the-shelf (COTS) products
and can easily facilitate real-time capture and transfer of image data at high
resolutions. Currently, the market offers four preferred standards for transferring
digital video signals from the camera to the image-processing computer or workstation:
FireWire (IEEE 1394), USB 2.0, Gigabit Ethernet, and Camera Link. Although
not covered in this section, analog interfaces have constituted the backbone of the
imaging industry as it is known today, and they are still present in the analog video
links of many UAV payloads.
The information provided by EO sensors can be exploited for purposes other
than the detection and localization of targets in the image plane. Relative bearing
information derived from target location on the image plane can be used to evaluate
the risk of collision (constant relative bearings correspond to high risk, whereas
large variations in relative bearings correspond to low risk) (Angelov et al. 2008).
Furthermore, range information useful for control purposes can be derived from a
single sensor when combined with appropriate aircraft maneuvers (Karlsson and
Gustafsson 2005).
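A minimal sketch of the constant-bearing idea follows, assuming a calibrated pinhole camera: the target's image-plane column is converted to a relative bearing, and a near-constant bearing across frames is flagged as high collision risk. The camera parameters, track values, and risk threshold are all illustrative.

```python
import math

def relative_bearing(u, cx, focal_px):
    """Horizontal bearing (radians) to a target at image column u.

    Assumes a calibrated pinhole camera: cx is the principal point column
    and focal_px is the focal length expressed in pixels.
    """
    return math.atan2(u - cx, focal_px)

# Target image positions over several frames (synthetic values)
track_u = [612.0, 611.4, 612.3, 611.8, 612.1]
bearings = [relative_bearing(u, cx=640.0, focal_px=800.0) for u in track_u]

# A near-constant bearing over time suggests a collision course
bearing_spread = max(bearings) - min(bearings)
RISK_THRESHOLD = math.radians(0.2)          # illustrative threshold only
print("collision risk:",
      "HIGH" if bearing_spread < RISK_THRESHOLD else "LOW")
```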
An EO sensing approach offers many benefits, and studies have suggested that
EO-based sense-and-avoid systems hold the greatest potential for gaining regulatory
approval (Karhoff et al. 2006). However, there are also many challenges associated
with an EO sensing approach. One of the most significant challenges lies in
the unpredictable and constantly changing nature of the airborne environment. In
particular, for visible light spectrum EO sensors, detection algorithms must be able
to handle a variety of image backgrounds (from blue-sky to cloud and ground
clutter), lighting conditions, and possibly image artifacts (e.g., lens flare).
Another factor to contend with when using an EO sensing approach is the
presence of image jitter noise. Image jitter is caused by movement of the camera
sensor and hence is exacerbated in an airborne environment where the platform
is in constant motion and subject to unpredictable aerodynamic disturbances. For
detection algorithms that exploit target motion dynamics in the image plane,
image jitter represents an undesirable noise component that can have a significant
impact on performance (Utt et al. 2004; Lai et al. 2011a). Various jitter compensation
techniques based on aircraft state information and image features have
been proposed to reduce the effects of image jitter, although it cannot be completely
eliminated.
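As an illustration of one such image-feature-based technique, the sketch below estimates frame-to-frame translational jitter with phase correlation (assuming OpenCV is available) and removes it by warping. Rotation and residual motion are not addressed, and the sign convention of the measured shift should be verified against the OpenCV version in use.

```python
import numpy as np
import cv2  # OpenCV, assumed available

# Synthetic demonstration: a frame and a translationally jittered copy
prev = np.random.rand(240, 320).astype(np.float32)   # previous frame
cur = np.roll(prev, shift=(3, -2), axis=(0, 1))      # simulated jitter

# Phase correlation recovers the dominant (dx, dy) shift between frames
(dx, dy), response = cv2.phaseCorrelate(prev, cur)
print(f"measured shift: ({dx:.2f}, {dy:.2f}), confidence {response:.2f}")

# Compensate by translating the current frame back by the measured shift
# (verify the sign convention for your OpenCV version)
M = np.float32([[1, 0, -dx], [0, 1, -dy]])
stabilized = cv2.warpAffine(cur, M, (cur.shape[1], cur.shape[0]))
```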
Finally, it can also be a challenge to achieve real-time processing of EO
sensor image data, given the computationally intensive nature of image-processing
algorithms. However, this is gradually being overcome with the development of
specialized parallel processing hardware (such as graphics processing units (GPU),
field-programmable gate arrays (FPGA), and dedicated digital signal processors
(DSP)). Several attempts have been made to characterize the performance
of this specialized parallel processing hardware (Chase et al. 2008; Thomas et al.
2009), but its utility is highly dependent on the application and its degree of
parallelization.
Over the past decade, government, university, and private research groups
have demonstrated EO-based sense-and-avoid technologies of varying levels of
sophistication (Lai et al. 2012). One of the most mature EO-based sense-and-
avoid technology programs has been conducted by a partnership between Defense
Research Associates, Inc. (DRA), the Air Force Research Laboratory (AFRL),
and Aeronautical Systems Center (ASC) (Utt et al. 2004, 2005). Over a number
of development phases, detection and tracking of noncooperative general aviation
aircraft at ranges of approximately 7 nautical miles has been demonstrated onboard
an Aerostar UAV. The program ultimately aims to provide a combined cooperative
and noncooperative collision avoidance capability for seamless integration onto
Department of Defense platforms.
Concurrently, the Australian Research Centre for Aerospace Automation
(ARCAA) has been undertaking a program to develop a cost-effective vision-
based sense-and-avoid system for civilian UAV applications (Lai et al. 2011a,b).
Extensive air-to-air target image data as well as nontarget data has been collected
to characterize the detection range and false alarm performance of the proposed
detection subsystem. Furthermore, closed-loop flight trials have been conducted,
demonstrating the ability of the prototype system to automatically detect intruder
aircraft and command the host aircraft autopilot to perform avoidance maneuvers.
These development programs and other studies over the past decade have substantially
advanced the knowledge and understanding of trade-offs between EO sensor
parameters (such as camera field of view) and system performance measures (such
as detection range, detection probability, and false alarm rate). For example, many
studies have shown that, in general, increasing the field of view reduces detection
range and vice versa (Geyer et al. 2009; Lai et al. 2012).
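This trade-off can be illustrated with a simple small-angle model: each pixel subtends FOV/N radians, so a target of fixed size spans fewer pixels, and is therefore detected later, as the field of view widens. The target size and the pixel-count detection threshold below are illustrative assumptions, not figures from the cited studies.

```python
import math

def detection_range_m(fov_deg, n_pixels, target_size_m, min_pixels=3):
    """Approximate range at which a target spans `min_pixels` pixels.

    Small-angle model: each pixel subtends IFOV = FOV / n_pixels, so a
    target of physical size s is resolved by min_pixels pixels out to
    roughly R = s / (min_pixels * IFOV).
    """
    ifov_rad = math.radians(fov_deg) / n_pixels
    return target_size_m / (min_pixels * ifov_rad)

# Wingspan of a light aircraft (~10 m) on a 1,024-pixel-wide sensor
for fov in (30, 60, 90):
    r_km = detection_range_m(fov, 1024, 10.0) / 1000
    print(f"FOV {fov:3d} deg -> detection range ~{r_km:.1f} km")
```

Running this prints roughly 6.5, 3.3, and 2.2 km for 30, 60, and 90 degree fields of view, showing the inverse relationship between coverage and detection range.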
However, the knowledge so far gained is far from complete, and more research
and experimentation is required to determine the design trade-offs that will allow
an EO-based sense-and-avoid system to demonstrate an equivalent level of safety
to human pilot see-and-avoid. Figure 20.1 presents two examples of images taken
from an onboard collision avoidance system.
the pilot should follow to avoid a collision. These advisories may be corrective or
preventive, depending on whether or not a deviation in course is suggested. For full
technical details and capabilities of TCAS, readers are referred to Harman (1989)
and FAA (2000).
TCAS can only be fully effective as a means of collision avoidance if it
is carried by all aircraft in a given airspace, as one might expect with any
cooperative collision avoidance system.
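The core of TCAS-style threat detection can be illustrated with the "tau" criterion, the slant range divided by the closure rate. The sketch below uses representative threshold values only; it is not the certified TCAS II logic, which varies thresholds with altitude and adds range and altitude-difference tests.

```python
def tau_seconds(slant_range_m, closure_rate_mps):
    """Time to closest approach ("tau"): slant range over closure rate."""
    if closure_rate_mps <= 0.0:
        return float("inf")            # diverging traffic poses no threat
    return slant_range_m / closure_rate_mps

def advisory(slant_range_m, closure_rate_mps, ta_tau=40.0, ra_tau=25.0):
    """Classify a threat from range and closure rate.

    The tau thresholds are representative only; certified TCAS II logic
    varies them with altitude and adds range and altitude-difference tests.
    """
    tau = tau_seconds(slant_range_m, closure_rate_mps)
    if tau <= ra_tau:
        return "RESOLUTION ADVISORY"   # suggests an avoidance maneuver
    if tau <= ta_tau:
        return "TRAFFIC ADVISORY"      # alerts the crew to the traffic
    return "CLEAR"

print(advisory(9000.0, 120.0))         # 75 s to CPA -> CLEAR
print(advisory(4000.0, 120.0))         # ~33 s -> TRAFFIC ADVISORY
print(advisory(2000.0, 120.0))         # ~17 s -> RESOLUTION ADVISORY
```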
aircraft would have the capability to share messages including 3D position, speed,
heading, time, and intent. This information is highly valuable in the design of
collision avoidance systems. However, ADS-B is often criticized for its verbosity,
which is not always necessary for collision avoidance: the high volume of
information exchanged limits the number of aircraft that can use the system at a
given time due to channel bandwidth constraints. Nonetheless, the affordability of
ADS-B will continue to motivate research and development into its application for
UAVs (Lai et al. 2009).
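As an illustration of how shared ADS-B state vectors can feed collision avoidance logic, the following sketch computes the time and distance of closest approach by straight-line extrapolation in a common local frame. Intent information is ignored, and all numerical values are invented.

```python
import numpy as np

def closest_point_of_approach(p_own, v_own, p_intruder, v_intruder):
    """Time (s) and distance (m) of closest approach from shared states.

    Positions and velocities are in a common local frame, as could be
    derived from ADS-B position/velocity broadcasts. Straight-line
    extrapolation is assumed; broadcast intent is not used here.
    """
    r = p_intruder - p_own                    # relative position
    v = v_intruder - v_own                    # relative velocity
    vv = float(np.dot(v, v))
    t_cpa = 0.0 if vv < 1e-9 else max(0.0, -float(np.dot(r, v)) / vv)
    d_cpa = float(np.linalg.norm(r + v * t_cpa))
    return t_cpa, d_cpa

own_p = np.array([0.0, 0.0, 1000.0]);      own_v = np.array([60.0, 0.0, 0.0])
intr_p = np.array([8000.0, 3000.0, 1000.0]); intr_v = np.array([-50.0, -20.0, 0.0])
t, d = closest_point_of_approach(own_p, own_v, intr_p, intr_v)
print(f"CPA in {t:.0f} s at {d:.0f} m separation")   # ~75 s, ~1,520 m
```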
There are ADS-B systems already on the market for under US$3,000 (Kunzi
2009; NavWorx 2011). A detailed description of ADS-B systems, including their
operational capabilities, can be found in RTCA (2002).
20.3.2 Remote Sensing: Power Line Inspection and Vegetation Management
Power line infrastructure is a large and costly asset to manage, costing many millions
of dollars per year. Inspecting power lines for failures (or worn and broken parts) and
vegetation encroachment is a significant task, particularly for electricity companies
such as Ergon Energy, which manages one of the largest networks in the world,
consisting of over 150,000 km of power lines covering an area of 120,000 km² of
Queensland, Australia (Li et al. 2012).
Traditionally, power line inspection is carried out by crews visiting
the power lines by ground vehicle. However, this is time consuming, and terrain
or access limitations can make the task difficult. Unpiloted aircraft are an
ideal technology for streamlining the inspection task carried out by ground crews.
UAVs can achieve similar results to ground inspection but with greater efficiency,
and in many cases they can gain visibility of power lines in difficult locations that
are inaccessible by ground vehicle.
Alternatively, inspecting power lines with a manned aerial inspection aircraft
involves precise flying at low altitude above the infrastructure to be inspected. This
is a potentially hazardous task for a human pilot, owing to fatigue from sitting in the
aircraft for long periods and the intense concentration and effort required
to ensure that the infrastructure under inspection remains within the sensor swath or
field of view (usually of a LIDAR and a high-resolution camera) (Li et al. 2012). A
LIDAR and camera combination provides excellent sources of data with complementary
characteristics. However, LIDAR data files are large because of the volume of
data captured, often on the order of hundreds of gigabytes or more. The processing
of LIDAR data captured by UAV is an important task that currently must be done
after the flight mission.
A future avenue is to perform LIDAR processing onboard the aerial vehicle.
Processing may be done by a number of techniques, and one of the key
tasks is to distinguish wanted features from unwanted features in the data.
For example, trees, buildings, and other obstacles that are not near the power
line infrastructure are unwanted. Information such as the precise height and location
of power lines, and their displacement from potentially hazardous objects such as
vegetation (which may start bushfires), is among the wanted data to be
extracted.
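A minimal sketch of this filtering step follows: given two surveyed points on a conductor (assumed known from asset data), LIDAR returns are reduced to a corridor around the line, and points inside an illustrative clearance threshold are flagged as potential vegetation hazards. The cloud, corridor width, and clearance limit are all invented values.

```python
import numpy as np

def distance_to_line(points, a, b):
    """Perpendicular distance of each 3-D point to the line through a and b.

    `points` is an (N, 3) array (e.g., a LIDAR return cloud); a and b are
    two surveyed points on the conductor, assumed known from asset data.
    """
    d = (b - a) / np.linalg.norm(b - a)     # unit direction of the line
    rel = points - a
    proj = rel @ d                          # along-line coordinate
    perp = rel - np.outer(proj, d)          # perpendicular component
    return np.linalg.norm(perp, axis=1)

# Synthetic cloud: returns scattered around a 100 m conductor span at 12 m height
a, b = np.array([0.0, 0.0, 12.0]), np.array([100.0, 0.0, 12.0])
cloud = np.random.uniform([0, -10, 0], [100, 10, 15], size=(5000, 3))

dist = distance_to_line(cloud, a, b)
CORRIDOR_M, CLEARANCE_M = 8.0, 3.0          # illustrative thresholds
near = cloud[dist < CORRIDOR_M]             # keep only corridor returns
hazard = near[distance_to_line(near, a, b) < CLEARANCE_M]
print(f"{len(near)} points in corridor, {len(hazard)} within clearance limit")
```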
LIDAR and images (IR or visible spectrum) may be combined to provide a
comprehensive analysis of the area under examination. Figure 20.2 shows a typical
infrared image taken from an aircraft. Vegetation and other objects are clearly
distinguishable, which facilitates the identification of potentially dangerous vegetation
in the vicinity of power lines. Image data can be combined in a multilayered
processing approach with other sources of information, as depicted in Fig. 20.3.
Here, an image (top left) has been used to detect power lines, creating a region
of interest around the lines; the same image, after georeferencing, is then merged
with LIDAR data (top right) to improve feature detection rates. A typical LIDAR
data representation is shown in the middle, and an example representation of the
merged data at the bottom.
Because the task requires tracking of linear infrastructure from the air (not
merely en route waypoint-to-waypoint navigation), specialized UAV control systems
may be required (Bruggemann and Ford 2011; Bruggemann et al. 2011; Li et al.
2012). Without a human pilot to control the aircraft, stabilizing and controlling
the platform so that the assets to be inspected fall within the field of view
of the LIDAR or EO sensor becomes more challenging. The control system must
also maintain steady altitude and speed to satisfy LIDAR/camera scan/frame rate and
swath/field-of-view requirements. Accurate spatial knowledge of the power line
infrastructure to be tracked is also required, and the UAV's own flight management,
navigation, and sensing systems must be capable of the task.
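A minimal sketch of the guidance element of such a control system follows: the cross-track error to a surveyed line segment is computed and fed to a simple PD roll command. The gains, limits, and scenario values are illustrative only and are not taken from the cited work.

```python
import math

def cross_track_error(pos, wp_a, wp_b):
    """Signed lateral distance (m) from the line wp_a -> wp_b (2-D east/north)."""
    dx, dy = wp_b[0] - wp_a[0], wp_b[1] - wp_a[1]
    length = math.hypot(dx, dy)
    # 2-D cross product of the line direction with the position offset
    return ((pos[0] - wp_a[0]) * dy - (pos[1] - wp_a[1]) * dx) / length

def roll_command(xte, xte_rate, kp=0.05, kd=0.1, limit=math.radians(25)):
    """Simple PD lateral guidance: roll toward the line, limited for safety."""
    cmd = -(kp * xte + kd * xte_rate)
    return max(-limit, min(limit, cmd))

# Aircraft 6 m off a surveyed power line segment, drifting further at 0.5 m/s
line_a, line_b = (0.0, 0.0), (1000.0, 0.0)
xte = cross_track_error((250.0, -6.0), line_a, line_b)
cmd = roll_command(xte, xte_rate=0.5)
print(f"cross-track error {xte:.1f} m, roll command {math.degrees(cmd):.1f} deg")
```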
Fig. 20.3 Example of merging image and LIDAR data (Source data courtesy of Ergon Energy
and the Cooperative Research Centre for Spatial Information (CRCSI))
Intuitively, this means the UAV must be of reasonably large size, for a
number of reasons. Firstly, unless the sensor payload can compensate for unwanted
dynamic motion, stable flight over the power lines is required. This rules out many
small to medium UAVs that are sensitive to wind and turbulence at low altitudes.
Secondly, LIDAR/camera sensor payloads able to capture fine details (such as tree
branches and leaves, wires, and bolts) are quite large (on the order of 80 kg and of
considerable volume), which also rules out many smaller UAV platforms. Thirdly,
the use in civilian airspace of a large UAV weighing 100 kg or more presents its own
technical, legal, and regulatory challenges. In the future, smaller and lighter
LIDAR/camera sensor payloads with the resolutions required for power line
inspection could help facilitate the use of smaller UAV platforms for this task.
This application illustrates the important coupling between the UAV platform and
the sensor payload.
Conclusion
This chapter has outlined some of the most common payloads for civilian
applications, and two examples of the experimental use of sensors for civilian
missions, collision avoidance and remote sensing, were presented. Collision
avoidance represents one of the critical capabilities UAVs need in order to
routinely perform missions in shared airspace. In a civilian context, remote
sensing is considered the field that can benefit the most from UAV
technology.
However, the establishment of UAVs in the civilian arena will be dictated by
how regulators define and establish requirements and guidelines for UAVs.
This will similarly affect payload manufacturers, UAV manufacturers, and UAV
service providers.
Overall, the UAV sector increasingly demands sensors that can provide
more onboard autonomy, less unnecessary weight, and more ISR capabilities
without an exponential increase in cost. Applications such as ISR will
require or emphasize sensors such as cameras and signal intelligence receivers,
robust radio relays, etc. Commercial payloads will emphasize very high data rate
communications and sensors such as visible-light and multispectral (or hyperspectral)
cameras.
In the near future, enduring and persistent UAVs flying for days, months, or
years at high altitude will require a completely new approach to the design and
manufacture of payloads. These new payloads will form new communications
and sensing infrastructures. UAVs flying in the upper atmosphere will behave
much like satellites, but with the clear advantage that if something goes wrong,
the vehicle can be commanded to land while a replacement is sent to continue
the mission. This capability would be very difficult to replicate with traditional
satellites; hence the gain in using UAVs for missions traditionally assigned to
satellites.