Article
A Ground-Based Near Infrared Camera Array System
for UAV Auto-Landing in GPS-Denied Environment
Tao Yang 1, *, Guangpo Li 1 , Jing Li 2 , Yanning Zhang 1 , Xiaoqiang Zhang 1 , Zhuoyue Zhang 1
and Zhi Li 1
1 Shaanxi Provincial Key Laboratory of Speech and Image Information Processing, School of Computer
Science, Northwestern Polytechnical University, Xi’an 710129, China; liguangponwpu@gmail.com (G.L.);
ynzhangnwpu@gmail.com (Y.Z.); vantasy@mail.nwpu.edu.cn (X.Z.); zhangzzy@mail.nwpu.edu.cn (Z.Z.);
zLeewack@163.com (Z.L.)
2 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China;
jinglixd@mail.xidian.edu.cn
* Correspondence: tyang@nwpu.edu.cn; Tel.: +86-29-8843-1533

Academic Editors: Vincenzo Spagnolo and Dragan Indjin


Received: 17 June 2016; Accepted: 25 August 2016; Published: 30 August 2016
Abstract: This paper proposes a novel infrared camera array guidance system capable of tracking a fixed-wing unmanned air vehicle (UAV) and providing its real-time position and speed during the landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and a near infrared laser lamp; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with a fixed-wing UAV demonstrate that our infrared camera array system can guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

Keywords: UAV; automatic landing; infrared camera array; vision-based navigation

1. Introduction
Unmanned air vehicles (UAVs) have become increasingly prevalent in recent years. However, one of the most difficult challenges for both manned and unmanned aircraft is safe landing. A significant number of accidents happen during the landing phase due to pilot inexperience or sudden weather changes, so automatic landing systems are required to land UAVs safely [1]. Developing autonomous landing systems has been a hot topic in current UAV research, and it is challenging because of the high requirements on reliability and accuracy.
Several UAV control methods have been developed, such as PID control [2], backstepping [3–5], H∞ control [6], sliding mode control [7], fuzzy control [8], and model-based fault-tolerant control [9]. The traditional onboard navigation systems for landing mainly include the Inertial Navigation System (INS), the Global Positioning System (GPS), combined INS/GPS navigation, the Global Navigation Satellite System (GNSS) and so on [10]. One of the most commonly used methods is GPS/INS integrated navigation, but GPS signals are easily blocked and provide low height accuracy [11], and INS tends to drift because its errors accumulate over time [12].
As described above, the height measurement from GPS is usually inaccurate, which can easily cause a crash, so additional sensors such as a radar altimeter may be needed. Most importantly, GPS signals may not always be available, so automatic landing may be impossible in many remote regions or GPS-denied environments. In such cases, the advantages of vision-based automatic landing methods become particularly important.

Sensors 2016, 16, 1393; doi:10.3390/s16091393 www.mdpi.com/journal/sensors



In recent years, new measurement systems centered on visual sensors have been applied more and more widely and have been extended to UAV automatic landing [10]. Guo et al. [12] proposed
a vision-aided landing navigation system based on fixed waveband guidance illuminant using a
single camera. Ming et al. [13] adopted a vision-aided INS method to implement the UAV auto
landing navigation system. Vladimir et al. [14] presented a robust real-time line tracking algorithm for
fixed-wing aircraft landing. Abu-Jbara [15] and Cao et al. [16] studied airport runway detection in natural scenes, while Sereewattana et al. [17] provided a method to find the runway by adding four marks of different colors. Zhuang et al. [18] used the two edge lines on both sides of the main runway and the front edge line of the airport to estimate the attitude and position parameters. Li et al. [19] extracted three runway lines in the image using the Hough transform to estimate the pose of the UAV.
Barber et al. [20] used a visual marker to estimate the roll and pitch for flight control. Huh et al. [11]
proposed a vision-based automatic landing method which used a monotone hemispherical airbag as
a marker. Lange et al. [21] adopted an adaptive thresholding technique to detect a landing pad for
the multirotor UAV automatic landing. Miller et al. [22] proposed a navigation algorithm based on image registration. Daquan et al. [23] estimated the aircraft's pose using an extended Kalman filter (EKF).
Most of the papers mentioned above are based on a downward-looking camera that recognizes the airport runway, artificial markers or natural markers in the image. However, stabilizing controllers based on a monocular camera are subject to drift over time [24], the field of view can be temporarily occluded, and the illumination conditions might change drastically within a distance of a few meters [25]. Moreover, the extraction of symbols such as the runway and the performance of image-matching algorithms are greatly influenced by the imaging conditions, and both may fail in complicated situations such as rain, fog and night [10]. Compared with onboard navigators, a ground-based system possesses stronger computation resources and enlarges the search field of view. Wang et al. [26] used a ground USB camera to track a square marker patched on a micro-aircraft. Martinez et al. [27] designed a trinocular system, composed of three FireWire cameras fixed on the ground, to estimate the vehicle's position and orientation by tracking color landmarks on the UAV. Researchers at Chiba University [28] designed a ground-based stereo vision system to estimate the three-dimensional position of a quadrotor; the continuously adaptive mean shift algorithm was used to track the color-based object. Abbeel et al. [29] achieved autonomous aerobatic flights of an instrumented helicopter using two Point Grey cameras with known positions and orientations on the ground; the accuracy of the obtained estimates was about 25 cm at about 40 m from the cameras.
A similar automatic landing system is OPATS (Object Position and Tracking System), developed by RUAG for the Swiss Air Force in 1999 [30]. However, our system differs from OPATS in several aspects. First of all, the underlying theories are different. OPATS is a laser-based automatic UAV landing system that continuously measures the dynamic position of the object of interest using a single tripod-mounted laser sensor, while the proposed landing system is based on stereo vision using an infrared camera array. Their localization principles are therefore different: OPATS relies on the infrared laser beam reflected by a retroreflector on the UAV, while our infrared camera array system is based on binocular positioning. Secondly, the adopted equipment is different. The ground equipment of OPATS mainly includes a standalone laser sensor, an electronics unit and a battery, whereas the ground equipment of our landing system mainly includes the infrared laser lamps, the infrared camera array, the camera lenses and the optical filters. The equipment fixed on the UAV is a passive optical retroreflector for OPATS and a near infrared laser lamp for our landing system. Most importantly, OPATS can only guide one UAV at a time, while the proposed landing system covers a wide field of regard and has the ability to guide several UAVs landing at the same time; this problem can be described as multi-object tracking, which becomes increasingly important as UAVs develop. A pan-tilt unit (PTU) is employed to actuate the vision system in [31]. Although that system can be used under all weather conditions and around the clock, the limited baseline results in a short detection distance. To achieve long-range detection and cover a wide field of regard, a newly developed system is described in [32], which is the work most similar to ours: two separate PTUs, each integrated with a visible light camera, are mounted on both sides of the runway instead of the previous stereo vision system, and the UAV can be detected at around 600 m. However, the detection results are unsatisfactory when the background becomes cluttered.
In order to land UAVs safely in GPS-denied environments, a novel ground-based infrared camera array system is proposed in this paper, as shown in Figure 1. Direct detection of the UAV airframe is limited in range, so a near infrared laser lamp is fixed on the nose to mark the position of the UAV, which simplifies the problem and improves the robustness of the system. Two infrared cameras are located on the two sides of the runway to capture images of the flying aircraft, which are processed by the navigation computer. After processing, the real-time position and speed of the UAV are sent to the UAV control center, and the detection results, tracking trajectory and three-dimensional localization results are displayed in real time. The infrared camera array system, cooperating with the laser lamp, can effectively suppress light interference, and it can be employed around the clock under all weather conditions.

Figure 1. Infrared camera array system (the UAV with the nose-mounted infrared laser lamp, the two infrared cameras beside the runway, the navigation computer with its data transmission link to the autopilot, and a control-point placement example with spacings of 40 m, 70 m, 100 m and 160 m).

In this paper, we present the design and construction of hardware and software components to
accomplish autonomous accurate landing of the UAV. Our work mainly makes three contributions:
• We propose a novel cooperative optical imaging method based on an infrared camera array and a near infrared laser lamp, which greatly improves the detection range of the UAV.
• We design a wide baseline camera array calibration method. The presented method achieves high-precision calibration and localization results.
• We develop a robust detection and tracking method for the near infrared laser marker.
The proposed system is verified using a middle-sized fixed-wing UAV. The experiments demonstrate that the detection range is greatly improved, to more than 1000 m, and that a high localization accuracy is achieved. The system has also been validated in a GPS-denied environment, in which the UAV was guided to land safely.

The rest of the paper is organized as follows. In Section 2, we describe the design and methodology of our landing system. Section 3 presents the experimental results, and Section 4 concludes the paper.

2. Landing System
This paper focuses on vision-based landing navigation. Figure 2 shows the system framework, and the complete experimental setup is shown in Figure 3.

Figure 2. The framework of the infrared camera array system (three modules: the large scene calibration module, with chessboard-based intrinsic calibration and total-station-based external calibration; the multi-camera detection, localization and tracking module, with epipolar-constrained collaborative detection, stereo localization and data association; and the storage, display and transmission module, with data storage, 3D display and serial transmission).

Figure 3. Ground-based landing system (the two infrared cameras, the navigation computer, the landing area, and the UAV with the nose-mounted infrared laser lamp).

We adopt a vision-based method, and obtaining clear images of the target at long distance is our first task. In our system, the light source carried on the nose marks the position of the UAV.
After many tests and analyses, we finally choose the near infrared laser lamp and infrared camera
array to form the optical imaging module.
The accuracy of the camera array parameters directly determines the localization accuracy, while the accuracy of traditional binocular calibration methods declines seriously at long distances. In this paper, a calibration method for the infrared camera array is proposed, which achieves a high calibration accuracy.
When the UAV is landing from a long distance, the laser marker fixed on the UAV appears as a small target in the image. Besides, the target may be influenced by strong sunlight, signal noise and other uncertain factors in the actual situation. In this paper, these problems are discussed in detail and solved effectively.

2.1. Optical Imaging Module


For vision-based UAV automatic landing systems, one of the basic steps is to construct the optical
imaging module. A good optical imaging module will make the target in the image prominent, simplify
the algorithm, guarantee the stability of the system, etc. The main components of our optical imaging
module are near infrared laser lamp, infrared camera array and optical filter.
Direct detection of the UAV is usually not robust, especially at long distance, where it is greatly affected by the background; thus the detection range is usually limited. To improve the detection range, we carefully design the vision system by introducing a light source, which plays an important role in the optical imaging module: it directly affects the quality of the image and hence the performance of the system. In our system, the light source is carried on the nose to mark the position of the UAV. The function of the light source is to obtain a clear image with high contrast. One required characteristic of the selected light source is that its imaging should be insensitive to visible light. Taking wind disturbances into consideration, the light source should also be robust to viewing-angle changes. We have conducted many comparative tests of different light sources, and finally chose the near infrared laser lamp. The near infrared laser lamp parameters are shown in Table 1. The illumination distance of the near infrared laser lamp is more than 1000 m, which guarantees the detection range. The wavelength of the near infrared laser lamp is 808 ± 5 nm. Its weight is only 470 g, which makes it suitable to be fixed on the UAV.

Table 1. The near infrared laser lamp parameters.

Near Infrared Laser Lamp Specification Parameter


wavelength 808 ± 5 nm
illumination distance more than 1000 m
weight 470 g
working voltage DC12V ± 10%
maximum power consumption 25 W
operating temperature 0 °C∼50 °C

Our ground-based system is mainly designed for fixed-wing UAVs which have a high flight speed
and landing height. The function of the camera is to capture images of the near infrared laser lamp
fixed on the UAV. So the camera needs to have a high sampling rate for the dynamic target, and should
have enough resolution so that the target can still be clearly acquired at a long distance. Considering
these, we finally choose the infrared camera of Point Grey GS3-U3-41C6NIR-C with 2048 × 2048 pixels.
In order to ensure the camera resolution and spatial location accuracy, we select the camera lens of
Myutron HF3514V with focal length of 35 mm. The cameras are fixed on each side of the runway as
shown in Figure 3, which has a wide baseline. The camera parameters are shown in Table 2 and the
camera lens parameters are shown in Table 3. The maximum frame rate of the camera is 2048 × 2048 at
90 fps, which meets the need of the proposed system. The maximum magnification of the camera lens
is 0.3× and its TV distortion is −0.027%.

Table 2. The camera parameters.

Camera Specification Parameter


sensor CMOSIS CMV4000-3E12
maximum resolution 2048 × 2048
maximum frame rate 2048 × 2048 at 90 fps
interface USB 3.0
maximum power consumption 4.5 W
operating temperature −20 °C∼50 °C

Table 3. The camera lens parameters.

Camera Lens Specification Parameter


focal length 35 mm
F No. 1.4
range of WD 110 mm–∞
maximum magnification 0.3×
TV distortion −0.027%
filter pitch M46P = 0.75
maximum compatible sensor 1.1

In order to increase the anti-interference ability of the optical imaging module, a Near-IR interference bandpass filter is adopted. The center wavelength of the optical filter is 808 nm, at which the signal attenuation is small. The filter is fixed in front of the camera lens, so the camera is only sensitive
to the wavelength of 808 nm. The emission wavelength of the near infrared laser lamp fixed on the
UAV is 808 nm. The filter is a component of the infrared camera array. The cooperation of the infrared
camera array and the light source could guarantee distinct imaging of the near infrared laser lamp
and effectively avoid interferences of complicated background. Thus the robustness and the detection
range of the system are both greatly improved. As shown in Figure 4, we can see that the filter could
get rid of almost every interferential signal successfully. The optical filter parameters are shown in
Table 4.

Table 4. The Near-IR interference bandpass filter parameters.

Filter Specification Parameter


useful range 798∼820 nm
FWHM 35 nm
tolerance ±5 nm
peak transmission ≥85%

Figure 4. The ability to resist interference in complex environments (650 m). (a) Without optical filter; (b) without optical filter (after segmentation); (c) with optical filter; (d) with optical filter (after segmentation).



2.2. Large Scale Outdoor Camera Array Calibration


The process of calibration is to estimate the intrinsic parameters and extrinsic parameters of the
cameras in the array system.
To get high localization accuracy, precise camera parameters are needed. Classical camera
calibration methods include Weng’s [33], Zhang’s [34], etc. For those traditional camera calibration
methods, the reference points or lines must be distributed rationally in space or in the calibration image, which is easy to arrange indoors or when the field of view is not large. To obtain precise camera parameters in a large-scale outdoor scene, a new camera array calibration method is presented.
A chessboard pattern is adopted to obtain the intrinsic parameters. As described before, the two infrared cameras are located on both sides of the airport runway to enlarge the baseline, which contributes to promoting the localization accuracy. However, it brings difficulties to the calibration of the external parameters. Thus a novel external parameter calibration method based on an electronic total station is presented here. The parameters of the electronic total station used in our system are shown in Table 5. The measuring distance of the electronic total station can reach 2000 m, its ranging accuracy is ±(2 mm + 2 ppm), and its angle measurement accuracy is ±2″. The electronic total station therefore ensures a high measurement accuracy. To get precise external parameters, ten near infrared laser lamps are also placed on both sides of the runway as control points, as shown in Figure 1, where one placement example of the distances between the near infrared laser lamps is given. Six of them are located close to the ground, and four of them are located about 2.0 m above the ground. The external parameter calibration method mainly includes the following steps:

Table 5. The electronic total station parameters.

Electronic Total Station Specification Parameter

field of view 1°30′
measuring distance 2000 m
ranging accuracy ±(2 mm + 2 ppm)
angle measurement accuracy ±2″
ranging time 1.2 s
operating temperature −20 °C∼50 °C

(1) Establish a world coordinate system.
(2) Measure the precise world coordinates of the control points using the electronic total station.
(3) Extract the projections of the control points from the two calibration images.
(4) Accurately estimate the spot center coordinates of the control points using bilinear interpolation.
(5) Obtain the initial external parameters by the DLT (Direct Linear Transform) algorithm [35].
(6) Generate the final calibration results by the LM (Levenberg-Marquardt) algorithm [35] (a code sketch of steps 5 and 6 follows this list).
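As a concrete illustration of steps 5 and 6, the sketch below estimates a 3 × 4 projection matrix for one camera from the measured control points with a plain DLT, and then refines it by minimizing the re-projection error with a Levenberg-Marquardt-type solver. This is only a minimal sketch of the idea: the function names are ours, the control-point and pixel values are placeholders, and the actual system additionally separates the known intrinsic parameters from the extrinsic ones.

```python
import numpy as np
from scipy.optimize import least_squares  # LM-type refinement (step 6)

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >= 6 non-coplanar correspondences (step 5)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        rows.append([0, 0, 0, 0] + [-x for x in Xh] + [v * x for x in Xh])
        rows.append(Xh + [0, 0, 0, 0] + [-u * x for x in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)              # null vector of the DLT system, up to scale

def reprojection_residuals(p_flat, world_pts, image_pts):
    """Pixel residuals of all control points for a candidate projection matrix."""
    P = p_flat.reshape(3, 4)
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]        # normalize homogeneous coordinates
    return (proj - image_pts).ravel()

# Placeholder control points: six lamps near the ground and four about 2 m high
# (world coordinates in meters), with illustrative detected spot centers (pixels).
world_pts = np.array([[40, -7.5, 0.3], [110, -7.5, 0.3], [210, -7.5, 0.3],
                      [40, 7.5, 0.3], [110, 7.5, 0.3], [210, 7.5, 0.3],
                      [70, -7.5, 2.0], [160, -7.5, 2.0],
                      [70, 7.5, 2.0], [160, 7.5, 2.0]], float)
image_pts = np.array([[820, 1210], [930, 1180], [1040, 1150], [1230, 1205],
                      [1290, 1175], [1350, 1145], [880, 1050], [1000, 1020],
                      [1260, 1045], [1320, 1015]], float)

P0 = dlt_projection_matrix(world_pts, image_pts)                     # step 5
res = least_squares(reprojection_residuals, P0.ravel(),
                    args=(world_pts, image_pts), method="lm")        # step 6
P_refined = res.x.reshape(3, 4)
```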

Centroids need to be extracted after the region is determined in step 4. In order to improve the accuracy and stability, the bilinear interpolation method is used before calculating the coordinate of the spot center:

$$
\begin{cases}
g(i+x,\, j) = g(i,j) + x\,[\,g(i+1,j) - g(i,j)\,] \\
g(i,\, j+y) = g(i,j) + y\,[\,g(i,j+1) - g(i,j)\,] \\
g(i+x,\, j+y) = x\,[\,g(i+1,j) - g(i,j)\,] + y\,[\,g(i,j+1) - g(i,j)\,] \\
\qquad\qquad\qquad +\; xy\,[\,g(i+1,j+1) + g(i,j) - g(i+1,j) - g(i,j+1)\,] + g(i,j)
\end{cases}
\tag{1}
$$
where $g(i, j)$ is the gray value of the point $(i, j)$ and $x, y \in (0, 1)$. Then the subpixel coordinate of the spot center, $(x_c, y_c)$, is calculated by:
$$
x_c = \sum_{i=x_b}^{x_e} x_i \, w(x_i, y_i) \Big/ \sum_{i=x_b}^{x_e} w(x_i, y_i), \qquad
y_c = \sum_{i=y_b}^{y_e} y_i \, w(x_i, y_i) \Big/ \sum_{i=y_b}^{y_e} w(x_i, y_i),
\tag{2}
$$

and

$$
\begin{cases}
w(x_i, y_i) = g(x_i, y_i), & g(x_i, y_i) \ge T_0 \\
w(x_i, y_i) = 0, & g(x_i, y_i) < T_0 \\
x_b = x_0 - T_1, \quad x_e = x_0 + T_1 \\
y_b = y_0 - T_1, \quad y_e = y_0 + T_1
\end{cases}
\tag{3}
$$

where $x_i$ and $y_i$ are the horizontal and vertical coordinates of the point $(x_i, y_i)$, and $T_0$ and $T_1$ are thresholds.
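The following sketch implements the gray-weighted centroid of Equations (2) and (3) around an initial integer peak; the neighborhood is upsampled with `scipy.ndimage.zoom` using `order=1`, which interpolates bilinearly and stands in for Equation (1). The function name, default thresholds and the toy image are ours and purely illustrative.

```python
import numpy as np
from scipy import ndimage

def subpixel_spot_center(img, x0, y0, T0=50, T1=5, upsample=4):
    """Gray-weighted centroid (Eqs. 2-3) of a laser spot around the integer
    estimate (x0, y0), after bilinear upsampling of the local window (Eq. 1)."""
    win = img[y0 - T1:y0 + T1 + 1, x0 - T1:x0 + T1 + 1].astype(float)
    up = ndimage.zoom(win, upsample, order=1)         # bilinear refinement
    ys, xs = np.mgrid[0:up.shape[0], 0:up.shape[1]]
    w = np.where(up >= T0, up, 0.0)                   # w(x_i, y_i) from Eq. (3)
    if w.sum() == 0:
        return float(x0), float(y0)                   # nothing above threshold
    xc = (xs * w).sum() / w.sum()                     # Eq. (2)
    yc = (ys * w).sum() / w.sum()
    # Map the centroid from the upsampled grid back to image coordinates.
    sx = (win.shape[1] - 1) / (up.shape[1] - 1)
    sy = (win.shape[0] - 1) / (up.shape[0] - 1)
    return x0 - T1 + xc * sx, y0 - T1 + yc * sy

# Toy usage: a bright 5x6-pixel spot whose true center is near (120.5, 80.0).
img = np.zeros((200, 200))
img[78:83, 118:124] = 200
print(subpixel_spot_center(img, 120, 80))
```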

2.3. Target Detection, Localization and Tracking


Target Detection: Because of the obvious grayscale difference between the target and the background, we directly acquire the foreground image of the candidate targets after a simple morphological pre-processing, and then foreground clustering is carried out to obtain the coordinates of the candidate targets in the image. If the pixel distance $f_{pd}(p_i, p_j)$ between two foreground pixels is less than the foreground clustering window $J$, they are clustered into the same class $x_i$ $(i \ge 0)$. We regard the image centroid coordinate of each cluster as the coordinate of the candidate target in the image. The pixel distance is defined as:

$$
f_{pd}(p_i, p_j) = \sqrt{(p_i^x - p_j^x)^2 + (p_i^y - p_j^y)^2},
\tag{4}
$$

where $p_i$ and $p_j$ are image pixels, and $(p_i^x, p_i^y)$ and $(p_j^x, p_j^y)$ are the pixel coordinates of $p_i$ and $p_j$, respectively.
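A minimal sketch of this clustering step is given below: foreground pixels whose mutual distance (Equation (4)) is below the window J are grouped with a small union-find pass, and each cluster is reduced to its image centroid. The function name and the value of J are illustrative; in the real system the input is the thresholded foreground of the filtered infrared image, which contains only a handful of pixels.

```python
import numpy as np

def cluster_candidates(foreground_mask, J=5.0):
    """Group foreground pixels whose pairwise distance (Eq. 4) is below J and
    return the image centroid of each cluster as a candidate target."""
    pts = np.argwhere(foreground_mask)            # (row, col) of foreground pixels
    parent = list(range(len(pts)))

    def find(i):                                  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < J:     # f_pd(p_i, p_j) < J
                parent[find(i)] = find(j)

    clusters = {}
    for i, p in enumerate(pts):
        clusters.setdefault(find(i), []).append(p)
    # Centroid of each cluster, returned as (x, y) = (col, row) image coordinates.
    return [tuple(np.mean(c, axis=0)[::-1]) for c in clusters.values()]

# Usage: a toy binary foreground with two spots.
mask = np.zeros((100, 100), bool)
mask[20:23, 30:33] = True
mask[70:72, 80:82] = True
print(cluster_candidates(mask))   # two centroids, roughly (31, 21) and (80.5, 70.5)
```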
To determine the corresponding relationship of the candidate targets and remove the false targets,
epipolar geometry constraints between the two cameras are used. Epipolar geometry between the
two cameras refers to the inherent projective geometry between the views. It only depends on the
camera intrinsic parameters and the relative pose of the two cameras. Thus, after the target is detected in the two cameras independently, the epipolar constraints between the cameras can be used to obtain data association results. In this way, the corresponding relationships of the candidate targets are confirmed and some of the false targets are removed.
Define $I_1 = \{x_1^1, x_2^1, \ldots, x_m^1\}$ and $I_2 = \{x_1^2, x_2^2, \ldots, x_n^2\}$ as the detection results of the first and second camera. The task of data association is to find the correspondence between $x_i^1$ and $x_j^2$. The distance measurement is the symmetric transfer error between $x_i^1$ $(i = 1, 2, \ldots, m)$ and $x_j^2$ $(j = 1, 2, \ldots, n)$, defined as:

$$
d(x_i^1, x_j^2) = d(x_i^1, F^T x_j^2) + d(x_j^2, F x_i^1),
\tag{5}
$$

where $F$ is the fundamental matrix between the two cameras. The matching matrix between the two images is:

$$
D = \begin{bmatrix}
d(x_1^1, x_1^2) & d(x_1^1, x_2^2) & \cdots & d(x_1^1, x_n^2) \\
d(x_2^1, x_1^2) & d(x_2^1, x_2^2) & \cdots & d(x_2^1, x_n^2) \\
\vdots & \vdots & \ddots & \vdots \\
d(x_m^1, x_1^2) & d(x_m^1, x_2^2) & \cdots & d(x_m^1, x_n^2)
\end{bmatrix}.
\tag{6}
$$
The global optimal matching result is obtained by solving the matching matrix D using the Hungarian algorithm [36], and it is taken as the final detection result.
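A compact sketch of this association step is given below: the cost matrix of Equation (6) is filled with the symmetric transfer error of Equation (5), where each term is read as the distance from a point to the corresponding epipolar line, and the assignment is solved with `scipy.optimize.linear_sum_assignment` (a Hungarian-type solver). The fundamental matrix F and the gating threshold are assumed to come from the calibration; all names here are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-type solver

def point_line_distance(pt, line):
    """Distance from a homogeneous image point to the line ax + by + c = 0."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b)

def associate_detections(pts1, pts2, F, gate=3.0):
    """Match detections of the two views: fill the matrix of Eq. (6) with the
    symmetric transfer error of Eq. (5) and solve the assignment."""
    h1 = [np.append(p, 1.0) for p in pts1]
    h2 = [np.append(p, 1.0) for p in pts2]
    D = np.zeros((len(h1), len(h2)))
    for i, x1 in enumerate(h1):
        for j, x2 in enumerate(h2):
            D[i, j] = (point_line_distance(x1, F.T @ x2)   # d(x^1, F^T x^2)
                       + point_line_distance(x2, F @ x1))  # d(x^2, F x^1)
    rows, cols = linear_sum_assignment(D)
    # Pairs whose cost exceeds the gate are treated as false targets.
    return [(i, j) for i, j in zip(rows, cols) if D[i, j] < gate]
```

In the full system the accepted pairs are passed on to the stereo localization step, while unmatched detections are discarded as false targets.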
Target Localization: Suppose the world coordinate of the laser marker is X, the two camera parameter matrices are P and P′, respectively, and the image coordinates of the laser marker in the two detection images are x and x′. Due to measurement error, no point satisfies the equations x ≅ PX and x′ ≅ P′X exactly, and the image points do not satisfy the epipolar constraint x′ᵀFx = 0.
A projectively invariant binocular location method that minimizes the re-projection error is presented here. The method finds the exact solution that satisfies the epipolar constraint while minimizing the re-projection error. Since the whole process only involves the projection of space points and distances between 2D image points, the method is projectively invariant, which means that the solution is independent of the specific projective space.
In the corresponding images of the two cameras, the observation points are x and x′, respectively. Suppose that the points near x and x′ that exactly satisfy the epipolar constraint are $\hat{x}$ and $\hat{x}'$. The maximum likelihood estimate minimizes the following objective function:

$$
C(\hat{x}, \hat{x}') = d(x, \hat{x}) + d(x', \hat{x}'),
\tag{7}
$$

subject to $\hat{x}'^T F \hat{x} = 0$, where $d(\cdot, \cdot)$ is the Euclidean distance between image points.
We first obtain an initial solution by the DLT algorithm. Supposing $x \cong PX$ and $x' \cong P'X$, the two relations $x \times (PX) = 0$ and $x' \times (P'X) = 0$ follow from the homogeneous equality. Expanding these equations, we get:

$$
\begin{cases}
x_1 (p^{3T} X) - (p^{1T} X) = 0 \\
y_1 (p^{3T} X) - (p^{2T} X) = 0 \\
x_1 (p^{2T} X) - y_1 (p^{1T} X) = 0 \\
x_2 (p'^{3T} X) - (p'^{1T} X) = 0 \\
y_2 (p'^{3T} X) - (p'^{2T} X) = 0 \\
x_2 (p'^{2T} X) - y_2 (p'^{1T} X) = 0
\end{cases}
\tag{8}
$$

where $p^{iT}$ is the $i$-th row of the matrix P and $p'^{jT}$ is the $j$-th row of the matrix P′. The homogeneous coordinates are $x = (x_1, y_1, 1)^T$ and $x' = (x_2, y_2, 1)^T$. The system of linear equations in X can be written as $AX = 0$. Although each image point contributes three equations, only two of them are linearly independent, so each image point provides two equations about X; the third equation is usually omitted when solving for X. Thus A can be written as:
 
$$
A = \begin{bmatrix}
x_1 p^{3T} - p^{1T} \\
y_1 p^{3T} - p^{2T} \\
x_2 p'^{3T} - p'^{1T} \\
y_2 p'^{3T} - p'^{2T}
\end{bmatrix}.
\tag{9}
$$

Since X is a homogeneous coordinate, it has only three degrees of freedom up to scale. The linear system $AX = 0$ contains four equations, so it is over-determined. To get an approximate solution of X, the equation set $AX = 0$ can be turned into the following optimization problem:

$$
\min_{X} \| A X \|,
\tag{10}
$$

subject to $\| X \| = 1$.

After the initial value $X_0$ of X is obtained from the above formula, the LM algorithm is used for iterative optimization to yield the final localization result.
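The sketch below builds the 4 × 4 matrix A of Equation (9) and solves the constrained problem of Equation (10) through the SVD: the right singular vector with the smallest singular value minimizes ||AX|| subject to ||X|| = 1. The subsequent LM refinement of the re-projection error could reuse `scipy.optimize.least_squares` in the same way as in the earlier calibration sketch. The camera matrices and the point in the usage section are assumed toy values, not the calibrated ones.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Initial 3D point from Eqs. (9)-(10): x1, x2 are pixel coordinates in the
    two views, P1 and P2 the 3x4 projection matrices."""
    A = np.vstack([x1[0] * P1[2] - P1[0],      # x1 * p^{3T} - p^{1T}
                   x1[1] * P1[2] - P1[1],      # y1 * p^{3T} - p^{2T}
                   x2[0] * P2[2] - P2[0],      # x2 * p'^{3T} - p'^{1T}
                   x2[1] * P2[2] - P2[1]])     # y2 * p'^{3T} - p'^{2T}
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                 # minimizes ||AX|| with ||X|| = 1
    return X[:3] / X[3]                        # inhomogeneous world point

def project(P, Xw):
    xh = P @ np.append(Xw, 1.0)
    return xh[:2] / xh[2]

# Usage with two toy cameras 20 m apart looking down the runway (assumed values).
K = np.array([[7000, 0, 1024], [0, 7000, 1024], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), [[10], [0], [0]]])    # left camera
P2 = K @ np.hstack([np.eye(3), [[-10], [0], [0]]])   # right camera
Xw = np.array([5.0, -2.0, 300.0])                    # a point 300 m away
print(triangulate_dlt(P1, P2, project(P1, Xw), project(P2, Xw)))  # ~[5, -2, 300]
```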
Target Tracking: The Euclidean distance is used as the distance measurement in 3D space. Define the historical target tracking results $T_i^t$ $(i = 1, 2, \ldots, p)$ and the current localization results $X_j^{t+1}$ $(j = 1, 2, \ldots, q)$; the distance between them is computed by:

$$
d(T_i^t, X_j^{t+1}) = \sqrt{(x_i^t - x_j^{t+1})^2 + (y_i^t - y_j^{t+1})^2 + (z_i^t - z_j^{t+1})^2},
\tag{11}
$$

where $(x_i^t, y_i^t, z_i^t)$ and $(x_j^{t+1}, y_j^{t+1}, z_j^{t+1})$ are the space coordinates of $T_i^t$ and $X_j^{t+1}$. The matching matrix between them is then:

$$
D_t^{t+1} = \begin{bmatrix}
d(T_1^t, X_1^{t+1}) & d(T_1^t, X_2^{t+1}) & \cdots & d(T_1^t, X_q^{t+1}) \\
d(T_2^t, X_1^{t+1}) & d(T_2^t, X_2^{t+1}) & \cdots & d(T_2^t, X_q^{t+1}) \\
\vdots & \vdots & \ddots & \vdots \\
d(T_p^t, X_1^{t+1}) & d(T_p^t, X_2^{t+1}) & \cdots & d(T_p^t, X_q^{t+1})
\end{bmatrix}.
\tag{12}
$$

The Hungarian algorithm is used to obtain the target tracking results from $D_t^{t+1}$.
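A short sketch of the tracking update: the matrix of Equation (12) is filled with the 3D Euclidean distances of Equation (11), the Hungarian assignment is solved as in the image-plane association above, and a gate rejects pairs that are too far apart (implausible jumps given the frame rate and landing speed). The gate value and the names are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def update_tracks(tracks, detections, gate=5.0):
    """Associate current 3D localizations with existing tracks (Eqs. 11-12).

    tracks:     last 3D positions T_i^t, shape (p, 3)
    detections: current 3D localizations X_j^{t+1}, shape (q, 3)
    Returns (matched pairs, unmatched track indices, unmatched detection indices)."""
    T = np.asarray(tracks, float)
    X = np.asarray(detections, float)
    D = np.linalg.norm(T[:, None, :] - X[None, :, :], axis=2)   # Eq. (12)
    rows, cols = linear_sum_assignment(D)
    matched = [(i, j) for i, j in zip(rows, cols) if D[i, j] < gate]
    unmatched_tracks = set(range(len(T))) - {i for i, _ in matched}
    unmatched_dets = set(range(len(X))) - {j for _, j in matched}
    return matched, unmatched_tracks, unmatched_dets

# Usage: one real track plus a spurious detection far off the trajectory.
print(update_tracks([[420.0, -0.3, 8.2]],
                    [[418.6, -0.3, 8.1], [350.0, 40.0, 2.0]]))
```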

3. Experiments and Discussion


3.1. Optical Imaging Experiments
We have compared several different kinds of light sources, such as a strong-light flashlight, a high intensity discharge lamp, and a halogen lamp. Because those light sources are sensitive to visible-light interference and have a short irradiation range, we finally chose the near infrared laser lamp. In this section, we present the comparison results of the near infrared laser lamp and the strong-light flashlight.
We first compare the imaging quality at different distances. In this experiment, the
near infrared laser lamp and the flashlight are placed in the same position at different distances.
The experimental distance is from 80 m to 650 m. Figure 5 shows that the light spots of the strong light
flashlight in the images are hard to find after 400 m, while the light spots of the near infrared laser
lamp can still be seen clearly at 650 m.

Figure 5. The comparison of the imaging at different distances (80 m, 150 m, 250 m, 400 m and 650 m). (a) Near infrared laser lamp; (b) strong light flashlight.

The system needs a certain fault tolerance to angle changes. In this experiment, we first place the light sources at the same position with the same orientation, and then adjust the horizontal
rotation angle of the light source. We did this experiment at 150 m. As shown in Figure 6, the near
infrared laser lamp can be detected robustly from 0 degrees to 45 degrees, while the strong light
flashlight cannot be seen clearly when the angle is greater than 10 degrees.

Figure 6. The comparison of the imaging from different angles at a distance of 150 m. (a) The near infrared laser lamp: from 0 degrees to 45 degrees; (b) the strong light flashlight: from 0 degrees to 10 degrees.

From the above experiments, we can see that the near infrared laser lamp greatly meets the needs of the landing system. In cooperation with the infrared camera array and optical filter, a robust optical imaging system with a long detection range is established.

3.2. Infrared Camera Array Calibration Experiments


The precision of the camera array parameters directly determines the localization accuracy. To verify this, five reference points are selected near the center line of the runway to simulate the UAV position. Their space coordinates are measured by the electronic total station as the ground truth. Then laser markers are placed at the positions of the reference points, and their space coordinates are calculated based on the calibration results. In fact, this experiment can also be considered as a localization accuracy verification of the UAV on the ground.
The experimental results are shown in Table 6. It is obvious that the errors of the X elements are much larger than those of the Y and Z elements, while the errors of Y and Z remain below a limited threshold. The measurement errors in the X axis gradually decrease from far to near, and the precision attains centimeter level within 200 m. Limited by the length of the runway, the maximum experimental distance is about 400 m. In this experiment, the accuracy of the Y axis and the Z axis is controlled within the centimeter level. More importantly, we can see that a high localization accuracy is attained in the last 200 m, which greatly meets the needs of the UAV automatic landing system.
A practical experiment based on control points has also been conducted to verify the calibration results, as shown in Figure 7. The calibration images taken by the two infrared cameras are shown in Figure 7a,b. As described previously, the positions of the control points are measured by the electronic total station. The red circles in Figure 7c,d are the real positions of the control points, marked by our detection algorithm. The world coordinates of the control points are re-projected to image coordinates based on the calibration parameters and marked by yellow crosses, as shown in Figure 7c,d. We can see that the red circles and yellow crosses basically coincide, which effectively demonstrates the calibration accuracy of the intrinsic and external parameters.

Table 6. The calibration accuracy analysis.

Serial Reference Points (m) Localization Results (m) Errors (m)

Number X Y Z X′ Y′ Z′ ∆X ∆Y ∆Z
1 64.363 0.012 2.072 64.274811 0.012149 2.060729 −0.088188 0.000149 −0.011271
2 102.898 −0.068 2.185 102.961128 −0.066252 2.164158 0.063126 0.001748 −0.020842
3 198.018 −0.141 2.615 197.970352 −0.113468 2.613832 −0.047653 0.027532 −0.001168
4 303.228 −0.324 3.049 300.387817 −0.283089 2.991707 −2.840179 0.040911 −0.057293
5 395.121 −0.567 3.427 396.371521 −0.573597 3.456094 1.250519 −0.006597 0.029094

Figure 7. Verification of calibration results. (a) Left camera; (b) right camera; (c) results of the left camera; (d) results of the right camera. Red circles in (c) and (d) are the real positions of the control points; the yellow crosses are calculated by re-projection based on the calibration parameters.

3.3. Target Detection and Localization Experiments


In order to improve the stability and robustness of the ground-based system, the landing system needs to be able to remove false targets effectively. The removal of false targets is mainly reflected in three aspects: in the multi-camera collaborative detection based on the epipolar constraint, false targets can be removed by the symmetric transfer error; in the multi-camera stereo vision localization, false targets can be removed by the space motion track constraints of the UAV; and in target tracking, false targets can be removed by analyzing the motion directions and velocities of the candidate targets. In this way, the target can be detected correctly.
Figures 8 and 9 show the detection experiments under sunlight; even with the smear effect in Figure 9, the targets are detected correctly in both cases.

Figure 8. The detection results under sunlight.

Figure 9. The detection results under sunlight with smear effect.

The detection accuracy of the near infrared laser lamp fixed on the UAV directly determines the accuracy of the target spatial localization. In this part, we analyze the effect of the detection error on the localization accuracy. In this simulation experiment, we assume that the camera parameters are already known. During the landing phase, the point is projected to the image through the camera matrix, and zero-mean Gaussian random noise is added to the projected point; we then compute the localization result and analyze the localization accuracy. Table 7 gives the average errors of 1000 simulation runs with different standard deviations of Gaussian noise.
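This kind of simulation can be reproduced with a short Monte Carlo loop such as the one below: a point on the approach path is projected through the two camera matrices, zero-mean Gaussian noise of a chosen standard deviation is added to the pixel coordinates, the noisy observations are triangulated (here with OpenCV's `cv2.triangulatePoints`), and the absolute errors are averaged over 1000 runs. The camera matrices are the same toy ones as in the earlier triangulation sketch, not the calibrated ones used for Table 7, so the numbers only illustrate the trend.

```python
import numpy as np
import cv2

def localization_error(P1, P2, Xw, sigma_px, runs=1000, seed=0):
    """Mean absolute error per axis when Gaussian pixel noise of std sigma_px
    is added to the projections of the world point Xw."""
    rng = np.random.default_rng(seed)
    Xh = np.append(Xw, 1.0)
    x1 = (P1 @ Xh)[:2] / (P1 @ Xh)[2]
    x2 = (P2 @ Xh)[:2] / (P2 @ Xh)[2]
    errs = []
    for _ in range(runs):
        n1 = (x1 + rng.normal(0, sigma_px, 2)).reshape(2, 1)
        n2 = (x2 + rng.normal(0, sigma_px, 2)).reshape(2, 1)
        Xest = cv2.triangulatePoints(P1, P2, n1, n2)       # 4x1 homogeneous
        Xest = (Xest[:3] / Xest[3]).ravel()
        errs.append(np.abs(Xest - Xw))
    return np.mean(errs, axis=0)

# Toy cameras with a 20 m baseline (assumed values); the depth axis here plays
# the role of the along-runway X axis of Table 7.
K = np.array([[7000, 0, 1024], [0, 7000, 1024], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), [[10], [0], [0]]])
P2 = K @ np.hstack([np.eye(3), [[-10], [0], [0]]])
for dist in (50, 100, 200, 400):
    print(dist, localization_error(P1, P2, np.array([0.0, 2.0, float(dist)]), 0.5))
```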
In Table 7, the standard deviation of the Gaussian noise is set to 0.1 pixels, 0.2 pixels and 0.5 pixels in turn. The errors in all three axes decrease as the distance decreases. The error in the X axis is the largest, while the errors in the Y and Z axes remain small. When the standard deviation of the Gaussian noise is set to 0.5 pixels, the errors in the three axes are the largest; however, in the last 100 m, the error in the X axis is less than 0.5 m and the errors in the Y and Z axes are within the centimeter level. We can therefore see that when the target detection error is less than 0.5 pixels, the localization accuracy meets the requirements of the landing system.
We have successfully performed extensive automatic landing experiments in several places. In order to enlarge the field of view, we usually choose a runway whose width is more than 15 m. The field of view is usually determined by the width and length of the runway. As described previously, the two infrared cameras are located on the two sides of the runway to capture images of the flying aircraft. One basic principle is that the common field of view should cover the landing area of the UAV. In the actual experiment, the landing area is usually known in advance, so it is easy to make the infrared camera array cover the landing area. To ensure the accuracy, the imaging of the landing area should be close to the image center, especially for the last 200 m. The detection range changes with the baseline: with a baseline of 20 m, the minimum detection range is about 25 m and the maximum detection range is over 1000 m. The UAV takes off and cruises at an altitude of 200 m using the DGPS system. Once the UAV is detected and the error is acceptable, the UAV is guided to land by our ground-based vision system.

Table 7. The effect of the detection error on localization accuracy.

Pixel Errors 0.5 Pixels 0.2 Pixels 0.1 Pixels


Distance (m) X (m) Y (m) Z (m) X (m) Y (m) Z (m) X (m) Y (m) Z (m)
10 0.0905 0.0092 0.0091 0.0367 0.0037 0.0037 0.0180 0.0018 0.0018
20 0.1173 0.0105 0.0104 0.0479 0.0042 0.0042 0.0225 0.0021 0.0021
30 0.1430 0.0113 0.0123 0.0550 0.0047 0.0049 0.0285 0.0024 0.0024
40 0.1783 0.0128 0.0137 0.0714 0.0051 0.0056 0.0339 0.0026 0.0027
50 0.2078 0.0139 0.0163 0.0810 0.0056 0.0060 0.0393 0.0027 0.0031
100 0.4099 0.0198 0.0288 0.1709 0.0080 0.0120 0.0839 0.0039 0.0062
150 0.7081 0.0248 0.0503 0.2846 0.0102 0.0205 0.1418 0.0053 0.0101
200 1.1051 0.0305 0.0790 0.4307 0.0129 0.0302 0.2099 0.0065 0.0155
300 2.0075 0.0422 0.1501 0.7960 0.0167 0.0591 0.3989 0.0087 0.0294
400 3.1424 0.0549 0.2423 1.3165 0.0217 0.1013 0.6523 0.0105 0.0497

3.4. Real-Time Automatic Landing Experiments


It is important to ensure the safety and reliability of the UAV automatic landing system; therefore, verification of the real-time localization accuracy is necessary. Thus, we compared the localization accuracy with DGPS measurements. The DGPS data are produced by a SPAN-CPT module, whose parameters are shown in Table 8. SPAN-CPT is a compact, single-enclosure GNSS + INS receiver with a variety of positioning modes to ensure accuracy. The IMU components within the SPAN-CPT enclosure comprise Fiber Optic Gyros (FOG) and Micro Electromechanical System (MEMS) accelerometers. The tight coupling of the GNSS and IMU measurements delivers the most satellite observations and the most accurate, continuous solution possible. In our experiments, we choose the RT-2 module, whose horizontal position accuracy is 1 cm + 1 ppm. During the landing process, the localization results of the ground-based system are uploaded to the UAV control center through a wireless data link, and the received data and the current DGPS measurement results are saved at the same time by the UAV control center, whose maximum data update rate is 200 Hz. By analyzing the stored localization data after the UAV lands, the localization accuracy can be verified.
Airborne DGPS measurement data are usually defined in the geographic coordinate system, while the vision measurement data are defined in the ground world coordinate system. To analyze the errors between them, a conversion between the DGPS coordinates and the world coordinates is necessary. To ensure the accuracy of the coordinate conversion, three points are selected: one is the origin of the world coordinate system, and the other two are far along the runway (e.g., 200 m). Their coordinate information, such as longitude, latitude and altitude, is measured by the DGPS module, and the world coordinates of the three points are measured by the electronic total station. The direction of the runway can be obtained from the two points far along the runway; combined with the origin coordinate, the conversion relationship between the two coordinate systems can finally be determined, as sketched below.
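A minimal sketch of this conversion chain, assuming standard WGS-84 geodetic coordinates: the DGPS latitude/longitude/altitude is converted to ECEF, then to a local East-North-Up frame at the world origin, and finally rotated about the vertical axis so that the X axis points along the runway direction obtained from a far point. All numeric inputs are placeholders, and the function names are ours.

```python
import numpy as np

A, F = 6378137.0, 1 / 298.257223563          # WGS-84 semi-major axis, flattening
E2 = F * (2 - F)

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1 - E2) + h) * np.sin(lat)])

def enu_rotation(lat0_deg, lon0_deg):
    lat0, lon0 = np.radians(lat0_deg), np.radians(lon0_deg)
    return np.array([[-np.sin(lon0), np.cos(lon0), 0],
                     [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],
                     [np.cos(lat0) * np.cos(lon0), np.cos(lat0) * np.sin(lon0), np.sin(lat0)]])

def dgps_to_runway(lla, origin_lla, far_point_lla):
    """Convert a DGPS fix to the runway world frame (X along the runway, Z up)."""
    R_enu = enu_rotation(*origin_lla[:2])
    o_ecef = geodetic_to_ecef(*origin_lla)
    to_enu = lambda p: R_enu @ (geodetic_to_ecef(*p) - o_ecef)
    runway_dir = to_enu(far_point_lla)
    yaw = np.arctan2(runway_dir[1], runway_dir[0])     # runway heading in the ENU frame
    R_yaw = np.array([[np.cos(yaw), np.sin(yaw), 0],   # rotate ENU so X follows the runway
                      [-np.sin(yaw), np.cos(yaw), 0],
                      [0, 0, 1]])
    return R_yaw @ to_enu(lla)

# Placeholder coordinates (deg, deg, m): world origin, a point ~200 m down the
# runway, and one DGPS fix of the UAV during the approach.
origin = (34.0330, 108.7600, 450.0)
far_pt = (34.0330, 108.7622, 450.3)
uav_fix = (34.0331, 108.7650, 470.0)
print(dgps_to_runway(uav_fix, origin, far_pt))   # -> [x_along, y_cross, z_up]
```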

Table 8. The SPAN-CPT parameters.

The SPAN-CPT Specification Parameter


horizontal position accuracy (RT-2 module) 1 cm + 1 ppm
horizontal position accuracy (single point) 1.2 m
heading accuracy 0.03°
bias (gyroscope) ±20 °/h
bias stability (gyroscope) ±1 °/h
bias (accelerometer) ±50 mg
bias stability (accelerometer) ±0.75 mg
speed accuracy 0.02 m/s
weight 2.28 kg

We compare the localization results with DGPS in Figure 10a. We can see that the detection range is over 1000 m and that the data generated by our system coincide well with the DGPS data. The accuracy in the Z axis is the most important factor for real UAV automatic landing. In contrast, the accuracy in the X axis has the least influence on automatic landing because of the long runway, while the limited width of the runway also requires a high localization accuracy in the Y axis. In this experiment, we take the DGPS data as the ground truth, and the absolute errors in X, Y and Z are evaluated as shown in Figure 10e–g. The errors of the X elements are the largest compared with the Y and Z elements; however, the errors of the X and Y elements gradually decrease during the landing phase. In the last 200 m, the location errors in the X and Y coordinates are reduced to below 5 m and 0.5 m, respectively, both of which represent a high accuracy, and in the last 100 m the localization results in the X and Y coordinates are nearly the same as the DGPS results. To avoid a crash, a high-precision estimate of altitude must be guaranteed. We have achieved an impressive localization result in the Z axis, in which the error is less than 0.22 m during the whole landing process. The measurement precision over the whole landing process completely meets the requirements of the UAV control center.
Figure 11 shows one of the landing trajectories generated by our system, together with several landing poses of the UAV. Under the control of our system, the pose of the UAV remained steady during the whole descent. When the UAV was controlled by our ground-based system, the GPS jammer was turned on; thus, in this experiment, the UAV was controlled from 820 m away in the GPS-denied environment and landed successfully, and we can see that the trajectory is smooth and complete.
Figure 10. Comparison of DGPS and vision data. (a) The UAV trajectories of vision-based method and DGPS method; (b–d) The location results in X, Y and Z
coordinates. (e–g) The location errors in X, Y and Z coordinates.

Figure 11. Landing trajectory from about 800 m away (height plotted against the along-runway and cross-runway coordinates).



4. Conclusions
This paper described a novel infrared camera array guidance system for UAV automatic landing in GPS-denied environments. It overcomes the shortcomings of the traditional GPS-based method, whose signals are easily blocked. After an optical imaging system is designed, a high-precision calibration method for large scenes based on an electronic total station is provided. The feasibility and accuracy have been verified through real-time flight experiments without GPS, and the results show that the control distance of our system is over 1000 m and that a high landing accuracy is achieved.

Supplementary Materials: The following are available online at http://www.mdpi.com/1424-8220/16/9/1393/s1.


Acknowledgments: This work is supported by the ShenZhen Science and Technology Foundation
(JCYJ20160229172932237), National Natural Science Foundation of China (No. 61672429, No. 61502364,
No. 61272288, No. 61231016), Northwestern Polytechnical University (NPU) New AoXiang Star
(No. G2015KY0301), Fundamental Research Funds for the Central Universities (No. 3102015AX007), NPU
New People and Direction (No. 13GH014604). The authors would like to thank Bin Xiao, SiBing Wang, Rui Yu,
XiWen Wang, LingYan Ran, Ting Chen and Tao Zhuo who supplied help on the algorithm design and experiments.
Author Contributions: Tao Yang and Guangpo Li designed the algorithm and wrote the source code and the
manuscript together; Jing Li and Yanning Zhang made contributions to algorithm design, paper written and
modification; Xiaoqiang Zhang, Zhuoyue Zhang and Zhi Li supplied help on experiments and paper revision.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Shaker, M.; Smith, M.N.; Yue, S.; Duckett, T. Vision-based landing of a simulated unmanned aerial vehicle
with fast reinforcement learning. In Proceedings of the 2010 International Conference on Emerging Security
Technologies (EST), Canterbury, UK, 6–7 September 2010; pp. 183–188.
2. Erginer, B.; Altug, E. Modeling and PD control of a quadrotor VTOL vehicle. In Proceedings of the 2007 IEEE
Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 894–899.
3. Ahmed, B.; Pota, H.R. Backstepping-based landing control of a RUAV using tether incorporating flapping
correction dynamics. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA,
11–13 June 2008; pp. 2728–2733.
4. Gavilan, F.; Acosta, J.; Vazquez, R. Control of the longitudinal flight dynamics of an UAV using adaptive
backstepping. IFAC Proc. Vol. 2011, 44, 1892–1897.
5. Yoon, S.; Kim, Y.; Park, S. Constrained adaptive backstepping controller design for aircraft landing in wind
disturbance and actuator stuck. Int. J. Aeronaut. Space Sci. 2012, 13, 74–89.
6. Ferreira, H.C.; Baptista, R.S.; Ishihara, J.Y.; Borges, G.A. Disturbance rejection in a fixed wing UAV using
nonlinear H∞ state feedback. In Proceedings of the 9th IEEE International Conference on Control and
Automation (ICCA), Santiago, Chile, 19–21 December 2011; pp. 386–391.
7. Rao, D.V.; Go, T.H. Automatic landing system design using sliding mode control. Aerosp. Sci. Technol. 2014,
32, 180–187.
8. Olivares-Méndez, M.A.; Mondragón, I.F.; Campoy, P.; Martínez, C. Fuzzy controller for uav-landing task
using 3d-position visual estimation. In Proceedings of the 2010 IEEE International Conference on Fuzzy
Systems (FUZZ), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
9. Liao, F.; Wang, J.L.; Poh, E.K.; Li, D. Fault-tolerant robust automatic landing control design. J. Guid.
Control Dyn. 2005, 28, 854–871.
10. Gui, Y.; Guo, P.; Zhang, H.; Lei, Z.; Zhou, X.; Du, J.; Yu, Q. Airborne vision-based navigation method for uav
accuracy landing using infrared lamps. J. Intell. Robot. Syst. 2013, 72, 197–218.
11. Huh, S.; Shim, D.H. A vision-based automatic landing method for fixed-wing UAVs. J. Intell. Robot. Syst.
2010, 57, 217–231.
12. Guo, P.; Li, X.; Gui, Y.; Zhou, X.; Zhang, H.; Zhang, X. Airborne vision-aided landing navigation system for
fixed-wing UAV. In Proceedings of the 12th International Conference on Signal Processing (ICSP), Hangzhou,
China, 26–30 October 2014; pp. 1215–1220.
13. Ming, C.; Xiu-Xia, S.; Song, X.; Xi, L. Vision aided INS for UAV auto landing navigation using SR-UKF based
on two-view homography. In Proceedings of the 2014 IEEE Chinese Guidance, Navigation and Control
Conference (CGNCC), Yantai, China, 8–10 August 2014; pp. 518–522.
14. Vladimir, T.; Jeon, D.; Kim, D.H.; Chang, C.H.; Kim, J. Experimental feasibility analysis of roi-based
hough transform for real-time line tracking in auto-landing of UAV. In Proceedings of the 15th IEEE
International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing
Workshops (ISORCW), Shenzhen, China, 11 April 2012; pp. 130–135.
15. Abu-Jbara, K.; Alheadary, W.; Sundaramorthi, G.; Claudel, C. A robust vision-based runway detection and
tracking algorithm for automatic UAV landing. In Proceedings of the 2015 International Conference on
Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 1148–1157.
16. Cao, Y.; Ding, M.; Zhuang, L.; Cao, Y.; Shen, S.; Wang, B. Vision-based guidance, navigation and control for
Unmanned Aerial Vehicle landing. In Proceedings of the 9th IEEE International Bhurban Conference on
Applied Sciences & Technology (IBCAST), Islamabad, Pakistan, 9–12 January 2012; pp. 87–91.
17. Sereewattana, M.; Ruchanurucks, M.; Thainimit, S.; Kongkaew, S.; Siddhichai, S.; Hasegawa, S. Color marker
detection with various imaging conditions and occlusion for UAV automatic landing control. In Proceedings
of the 2015 Asian Conference on IEEE Defence Technology (ACDT), Hua Hin, Thailand, 23–25 April 2015;
pp. 138–142.
18. Zhuang, L.; Han, Y.; Fan, Y.; Cao, Y.; Wang, B.; Zhang, Q. Method of pose estimation for UAV landing.
Chin. Opt. Lett. 2012, 10, S20401.
19. Hong, L.; Haoyu, Z.; Jiaxiong, P. Application of cubic spline in navigation for aircraft landing. J. HuaZhong
Univ. Sci. Technol. (Nat. Sci. Ed.) 2006, 34, 22.
20. Barber, B.; McLain, T.; Edwards, B. Vision-based landing of fixed-wing miniature air vehicles. J. Aerosp.
Comp. Inf. Commun. 2009, 6, 207–226.
21. Lange, S.; Sunderhauf, N.; Protzel, P. A vision based onboard approach for landing and position control of an
autonomous multirotor UAV in GPS-denied environments. In Proceedings of the International Conference
on Advanced Robotics, Munich, Germany, 22-26 June 2009; pp. 1–6.
22. Miller, A.; Shah, M.; Harper, D. Landing a UAV on a runway using image registration. In Proceedings
of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008;
pp. 182–187.
23. Daquan, T.; Hongyue, Z. Vision based navigation algorithm for autonomic landing of UAV without heading
& attitude sensors. In Proceedings of the Third International IEEE Conference on Signal-Image Technologies
and Internet-Based System, Shanghai, China, 16–19 December 2007; pp. 972–978.
24. Zingg, S.; Scaramuzza, D.; Weiss, S.; Siegwart, R. MAV navigation through indoor corridors using optical
flow. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA),
Anchorage, AK, USA, 3–8 May 2010; pp. 3361–3368.
25. Gautam, A.; Sujit, P.; Saripalli, S. A survey of autonomous landing techniques for UAVs. In Proceedings of the
2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014;
pp. 1210–1218.
26. Wang, W.; Song, G.; Nonami, K.; Hirata, M.; Miyazawa, O. Autonomous control for micro-flying robot and
small wireless helicopter X.R.B. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent
Robots and Systems, Beijing, China, 9–15 October 2006; pp. 2906–2911.
27. Martínez, C.; Campoy, P.; Mondragón, I.; Olivares Mendez, M.A. Trinocular ground system to control UAVs.
In Proceedings of the 2009 IEEE-RSJ International Conference on Intelligent Robots and Systems, St. Louis,
MO, USA, 11–15 October 2009; pp. 3361–3367.
28. Pebrianti, D.; Kendoul, F.; Azrad, S.; Wang, W.; Nonami, K. Autonomous hovering and landing of a
quad-rotor micro aerial vehicle by means of on ground stereo vision system. J. Syst. Des. Dynam. 2010,
4, 269–284.
29. Abbeel, P.; Coates, A.; Ng, A.Y. Autonomous helicopter aerobatics through apprenticeship learning. Int. J.
Robot. Res. 2010, doi:10.1177/0278364910371999.
30. OPATS Laser based landing aid for Unmanned Aerial Vehicles, RUAG—Aviation Products, 2016.
Available online: http://www.ruag.com/aviation (accessed on 27 July 2016).
31. Kong, W.; Zhang, D.; Wang, X.; Xian, Z.; Zhang, J. Autonomous landing of an UAV with a ground-based
actuated infrared stereo vision system. In Proceedings of the 2013 IEEE/RSJ International Conference on
Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2963–2970.
32. Kong, W.; Zhou, D.; Zhang, Y.; Zhang, D.; Wang, X.; Zhao, B.; Yan, C.; Shen, L.; Zhang, J. A ground-based
optical system for autonomous landing of a fixed wing UAV. In Proceedings of the 2014 IEEE/RSJ
International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014;
pp. 4797–4804.
33. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation.
IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
34. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000,
22, 1330–1334.
35. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press:
Cambridge, UK, 2003.
36. Edmonds, J. Paths, trees, and flowers. Can. J. Math. 1965, 17, 449–467.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
