Article
A Ground-Based Near Infrared Camera Array System
for UAV Auto-Landing in GPS-Denied Environment
Tao Yang 1,*, Guangpo Li 1, Jing Li 2, Yanning Zhang 1, Xiaoqiang Zhang 1, Zhuoyue Zhang 1 and Zhi Li 1
1 Shaanxi Provincial Key Laboratory of Speech and Image Information Processing, School of Computer
Science, Northwestern Polytechnical University, Xi’an 710129, China; liguangponwpu@gmail.com (G.L.);
ynzhangnwpu@gmail.com (Y.Z.); vantasy@mail.nwpu.edu.cn (X.Z.); zhangzzy@mail.nwpu.edu.cn (Z.Z.);
zLeewack@163.com (Z.L.)
2 School of Telecommunications Engineering, Xidian University, Xi’an 710071, China;
jinglixd@mail.xidian.edu.cn
* Correspondence: tyang@nwpu.edu.cn; Tel.: +86-29-8843-1533
1. Introduction
Unmanned aerial vehicles (UAVs) have become increasingly prevalent in recent years. However, one of
the most difficult challenges for both manned and unmanned aircraft is safe landing. A significant
number of accidents happen during the landing phase due to pilot inexperience or sudden changes
in the weather, so automatic landing systems are required to land UAVs safely [1]. Developing
autonomous landing systems has become a hot topic in current UAV research, and it remains
challenging because of the high requirements on reliability and accuracy.
Several methods for UAV control have been developed, such as PID control [2],
backstepping [3–5], H∞ control [6], sliding mode control [7], fuzzy control [8], and model-based
fault-tolerant control [9]. The traditional onboard navigation systems used for landing mainly
include the Inertial Navigation System (INS), the Global Positioning System (GPS), INS/GPS
integrated navigation and the Global Navigation Satellite System (GNSS) [10]. GPS/INS integrated
navigation is one of the most commonly used methods, but GPS signals are easily blocked and
provide low height accuracy [11], and INS tends to drift because its errors accumulate over time [12].
As described above, the height measurement from GPS is usually inaccurate, which can easily cause
a crash, so other sensors such as a radar altimeter may be needed. More importantly, GPS signals are
not always available, so automatic landing may be impossible in many remote regions or GPS-denied
environments. In such cases, the advantages of vision-based automatic landing methods become
particularly important.
In recent years, new measurement systems which take visual sensors as cores have been applied
more and more widely and expanded to the UAV automatic landing [10]. Guo et al. [12] proposed
a vision-aided landing navigation system based on fixed waveband guidance illuminant using a
single camera. Ming et al. [13] adopted a vision-aided INS method to implement the UAV auto
landing navigation system. Vladimir et al. [14] presented a robust real-time line tracking algorithm for
fixed-wing aircraft landing. Abu-Jbara [15] and Cao et al. [16] studied airport runways in natural
scenes, while Sereewattana et al. [17] provided a method to find the runway by adding four marks of
different colors. Zhuang et al. [18] used the two edge lines on both sides of the main runway and the
front edge line of the airport to estimate the attitude and position parameters. Li et al. [19] extracted
three runway lines from the image using the Hough transform to estimate the pose of the UAV.
Barber et al. [20] used a visual marker to estimate the roll and pitch for flight control. Huh et al. [11]
proposed a vision-based automatic landing method which used a monotone hemispherical airbag as
a marker. Lange et al. [21] adopted an adaptive thresholding technique to detect a landing pad for
the multirotor UAV automatic landing. Miller et al. [22] proposed a navigation algorithm based on
image registration. Daquan et al. [23] estimated the aircraft's pose using an extended Kalman filter (EKF).
Most of the papers mentioned above rely on a downward-looking camera to recognize the airport
runway, artificial markers or natural markers in the image. However, stabilizing controllers based on
a monocular camera are subject to drift over time [24]; the field of view can be temporarily occluded,
and the illumination conditions might change drastically within a distance of a few meters [25].
Moreover, both the extraction of symbols such as the runway and the performance of image-matching
algorithms are greatly influenced by the imaging conditions, and they may fail in complicated
situations such as rain, fog and night [10]. Compared with onboard navigators,
a ground-based system possesses stronger computation resources and enlarges the search field
the ground-based system possesses stronger computation resources and enlarges the search field
of view. Wang et al. [26] used a ground USB-camera to track a square marker patched on the
micro-aircraft. Martinez et al. [27] designed a trinocular system, which is composed of three FireWire
cameras fixed on the ground, to estimate the vehicle’s position and orientation by tracking color
landmarks on the UAV. Researchers at Chiba University [28] designed a ground-based stereo vision
system to estimate the three-dimensional position of a quadrotor, in which the continuously adaptive
mean shift algorithm was used to track the color-based target. Abbeel et al. [29] achieved autonomous
aerobatic flights of an instrumented helicopter using two Point Grey cameras with known positions
and orientations on the ground. The accuracy of the estimates obtained was about 25 cm at about 40 m
distance from the cameras.
A similar automatic landing system is OPATS (Object Position and Tracking System), which was
developed by RUAG for the Swiss Air Force in 1999 [30]. However, our system differs from OPATS
in several aspects. First of all, the underlying theories are different. OPATS is a laser-based automatic
UAV landing system that continuously measures the dynamic position of the object of interest using
a single tripod-mounted laser sensor, whereas the proposed landing system is based on stereo vision
using an infrared camera array. Their localization principles therefore differ: OPATS localizes the UAV
from the infrared laser beam reflected by a retroreflector on the UAV, while our infrared camera array
system is based on binocular positioning. Secondly, the adopted equipment is different. The ground
equipment of OPATS mainly includes a standalone laser sensor, an electronics unit and a battery,
whereas the ground equipment of our landing system mainly includes the infrared laser lamp, the
infrared camera array, the camera lenses and the optical filters. OPATS requires a passive optical
retroreflector fixed on the UAV, while our landing system requires a near infrared laser lamp fixed on
the UAV. Most importantly, OPATS can only guide one UAV landing at a time, while the proposed
landing system covers a wide field of regard and has the ability to guide several UAVs landing at the
same time. This problem can be described as “multi-object tracking”, which becomes increasingly
important as UAVs develop. A pan-tilt unit (PTU) is employed to actuate the vision system in [31].
Although it can be used under all weather conditions and around the clock, the limited baseline
results in a short detection distance. To achieve long-range detection and cover a wide field of
regard, a newly developed system was designed in [32], which is the work most similar to ours.
Two separate PTUs, each integrated with a visible light camera, are mounted on both sides of the
runway instead of the previous stereo vision setup, and the system is able to detect the UAV at around
600 m. However, the detection results are unsatisfactory when the background becomes cluttered.
In order to land UAVs safely in GPS-denied environments, a novel ground-based infrared
camera array system is proposed in this paper, as shown in Figure 1. Direct detection of the UAV
airframe is limited in range, so a near infrared laser lamp is fixed on the nose to indicate
the position of the UAV, which simplifies the problem and improves the robustness of the system.
Two infrared cameras are located on the two sides of the runway to capture images of the flying
aircraft, which are processed by the navigation computer. After processing, the real-time position and
speed of the UAV are sent to the UAV control center, and the detection results, tracking trajectory and
three-dimensional localization results are displayed in real time. The infrared camera array,
cooperating with the laser lamp, can effectively suppress light interference, and the system can be
employed around the clock under all weather conditions.
Figure 1. Overview of the ground-based landing system: infrared cameras on both sides of the runway (placement example at 40 m, 70 m, 100 m and 160 m), a laser marker on the UAV, a navigation computer producing detection results and tracking trajectories, and data transmission links to the autopilot.
In this paper, we present the design and construction of hardware and software components to
accomplish autonomous accurate landing of the UAV. Our work mainly makes three contributions:
• We propose a novel cooperative optical imaging method based on an infrared camera array and a
near infrared laser lamp, which greatly improves the detection range of the UAV.
• We design a wide-baseline camera array calibration method that achieves high-precision
calibration and localization results.
• We develop a robust detection and tracking method for near infrared laser marker.
The proposed system is verified using a middle-sized fixed-wing UAV. The experiments
demonstrate that the detection range is greatly improved, exceeding 1000 m, and that a high
localization accuracy is achieved. The system has also been validated in a GPS-denied environment,
in which the UAV was guided to land safely.
The rest of the paper is organized as follows. Section 2 describes the design and methodology of our
landing system. Section 3 presents the experimental results, and Section 4 concludes the paper.
2. Landing System
This paper focuses on vision-based landing navigation. Figure 2 shows the system framework, and
the complete experimental setup is shown in Figure 3.
Figure 2. System framework (camera parameters, image data and target trajectory).
Figure 3. Complete experimental setup: infrared cameras on both sides of the landing area and the UAV.
We adopt a vision-based method, so obtaining clear images of the target at long distance is our
first task. In our system, a light source is carried on the nose to indicate the position of the UAV.
After many tests and analyses, we finally choose the near infrared laser lamp and infrared camera
array to form the optical imaging module.
The accuracy of the camera array parameters directly determines the localization accuracy, while
the accuracy of traditional binocular calibration methods declines severely at long distances.
In this paper, a calibration method for the infrared camera array is proposed, which achieves a high
calibration accuracy.
When the UAV is landing from a long distance, the laser marker fixed on the UAV appears as a
small target. Besides, in practice the target may be affected by strong sunlight, signal noise and other
uncertain factors. In this paper, these problems are discussed in detail and solved effectively.
Our ground-based system is mainly designed for fixed-wing UAVs which have a high flight speed
and landing height. The function of the camera is to capture images of the near infrared laser lamp
fixed on the UAV. So the camera needs to have a high sampling rate for the dynamic target, and should
have enough resolution so that the target can still be clearly acquired at a long distance. Considering
these requirements, we choose the Point Grey GS3-U3-41C6NIR-C infrared camera with 2048 × 2048 pixels.
In order to ensure the camera resolution and spatial localization accuracy, we select the Myutron
HF3514V camera lens with a focal length of 35 mm. The cameras are fixed on each side of the runway,
as shown in Figure 3, which provides a wide baseline. The camera parameters are shown in Table 2 and
the camera lens parameters are shown in Table 3. The camera supports its maximum resolution of
2048 × 2048 at 90 fps, which meets the needs of the proposed system. The maximum magnification of
the camera lens is 0.3× and its TV distortion is −0.027%.
In order to increase the anti-interference ability of the optical imaging module, a near-infrared
interference bandpass filter is adopted. The center wavelength of the optical filter is 808 nm, at which
the signal attenuation is small. The filter is fixed in front of the camera lens, so the camera is only
sensitive to wavelengths around 808 nm. The emission wavelength of the near infrared laser lamp
fixed on the UAV is also 808 nm. The filter is a component of the infrared camera array. The cooperation
of the infrared camera array and the light source guarantees distinct imaging of the near infrared laser
lamp and effectively avoids interference from complicated backgrounds. Thus the robustness and the
detection range of the system are both greatly improved. As shown in Figure 4, the filter removes
almost all interfering signals. The optical filter parameters are shown in Table 4.
Figure 4. Imaging with and without the optical filter: (a) without optical filter; (b) without optical filter (after segmentation); (c) with optical filter; (d) with optical filter (after segmentation). Without the filter, interference appears alongside the target; with the filter, only the target remains.
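To illustrate the kind of segmentation shown in Figure 4, the sketch below extracts bright candidate spots from a band-pass-filtered frame with a global threshold and connected-component labeling. It is not the paper's exact detection procedure; the function name, threshold, minimum area and the use of SciPy labeling are assumptions.

```python
import numpy as np
from scipy import ndimage

def detect_candidate_spots(gray, thresh=200, min_area=2):
    """Segment bright candidate spots in a band-pass-filtered near infrared frame.

    With the 808 nm filter in place, the laser marker should appear as one of very
    few bright blobs, so a global threshold followed by connected-component
    labeling is usually enough to obtain candidate regions.
    """
    mask = gray >= thresh                                  # binary segmentation
    labels, num = ndimage.label(mask)                      # connected components
    if num == 0:
        return []
    idx = list(range(1, num + 1))
    centers = ndimage.center_of_mass(mask, labels, idx)    # (row, col) per blob
    areas = ndimage.sum(mask, labels, idx)
    # keep sufficiently large blobs, returned as (x, y) pixel coordinates
    return [(cx, cy) for (cy, cx), a in zip(centers, areas) if a >= min_area]
```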
After the candidate region is determined in step 4, its centroid needs to be extracted. In order
to improve the accuracy and stability, the bilinear interpolation method is used before calculating the
coordinates of the spot center:
$$
\begin{aligned}
g(i+x, j) &= g(i,j) + x\,[\,g(i+1,j) - g(i,j)\,] \\
g(i, j+y) &= g(i,j) + y\,[\,g(i,j+1) - g(i,j)\,] \\
g(i+x, j+y) &= x\,[\,g(i+1,j) - g(i,j)\,] + y\,[\,g(i,j+1) - g(i,j)\,] \\
&\quad + xy\,[\,g(i+1,j+1) + g(i,j) - g(i+1,j) - g(i,j+1)\,] + g(i,j),
\end{aligned} \tag{1}
$$
where $g(i, j)$ is the gray value of the point $(i, j)$ and $x, y \in (0, 1)$. Then the subpixel coordinates of
the spot center $(x_c, y_c)$ are calculated by:
$$
x_c = \sum_{i=x_b}^{x_e} x_i \cdot w(x_i, y_i) \Big/ \sum_{i=x_b}^{x_e} w(x_i, y_i)
\quad \text{and} \quad
y_c = \sum_{i=y_b}^{y_e} y_i \cdot w(x_i, y_i) \Big/ \sum_{i=y_b}^{y_e} w(x_i, y_i), \tag{2}
$$
$$
w(x_i, y_i) = \begin{cases} g(x_i, y_i), & g(x_i, y_i) \ge T_0 \\ 0, & g(x_i, y_i) < T_0 \end{cases},
\qquad
\begin{cases} x_b = x_0 - T_1 \\ x_e = x_0 + T_1 \\ y_b = y_0 - T_1 \\ y_e = y_0 + T_1 \end{cases}, \tag{3}
$$
where $x_i$ and $y_i$ are the horizontal and vertical coordinates of the point $(x_i, y_i)$, $(x_0, y_0)$ is the
coarse spot center, and $T_0$ and $T_1$ are the thresholds.
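A minimal sketch of the weighted-centroid computation in Equations (2) and (3) is given below, assuming a NumPy grayscale image indexed as gray[y, x] and interpreting the sums as running over the whole window; the function name and the default values of T0 and T1 are illustrative, and the bilinear interpolation of Equation (1) is omitted for brevity.

```python
import numpy as np

def spot_center_subpixel(gray, x0, y0, T0=30, T1=5):
    """Subpixel spot center by thresholded weighted centroid (Equations (2)-(3)).

    (x0, y0) : integer coordinates of the coarse spot center.
    T0       : gray-level threshold below which a pixel receives zero weight.
    T1       : half-size of the square window around (x0, y0).
    """
    xb, xe = x0 - T1, x0 + T1
    yb, ye = y0 - T1, y0 + T1
    window = gray[yb:ye + 1, xb:xe + 1].astype(np.float64)

    # w(x, y) = g(x, y) if g(x, y) >= T0, else 0
    w = np.where(window >= T0, window, 0.0)
    if w.sum() == 0:
        return float(x0), float(y0)            # no bright pixels: keep coarse center

    ys, xs = np.mgrid[yb:ye + 1, xb:xe + 1]    # pixel coordinate grids of the window
    xc = (xs * w).sum() / w.sum()
    yc = (ys * w).sum() / w.sum()
    return xc, yc
```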
where $p_i$ and $p_j$ are image pixels, and $(p_i^x, p_i^y)$ and $(p_j^x, p_j^y)$ are the pixel coordinates of $p_i$ and $p_j$, respectively.
To determine the corresponding relationship between the candidate targets and to remove false targets,
epipolar geometry constraints between the two cameras are used. The epipolar geometry between
two cameras refers to the inherent projective geometry between the views; it depends only on the
camera intrinsic parameters and the relative pose of the two cameras. Thus, after the target is detected
in the two cameras independently, the epipolar geometry constraints between the cameras can be used
to obtain data association results. In this way, the corresponding relationship between the candidate
targets is confirmed and some of the false targets are removed.
Define $I_1 = \{x_1^1, x_2^1, \dots, x_m^1\}$ and $I_2 = \{x_1^2, x_2^2, \dots, x_n^2\}$ as the detection results of the first and
second camera. The task of the data association is to find the corresponding relationship between $x_i^1$
and $x_j^2$. The distance measurement is obtained by the symmetric transfer error between $x_i^1\ (i = 1, 2, \dots, m)$
and $x_j^2\ (j = 1, 2, \dots, n)$, which can be defined as:
$$
d(x_i^1, x_j^2) = d(x_i^1, F^{T} x_j^2)^2 + d(x_j^2, F x_i^1)^2, \tag{5}
$$
where $F$ is the fundamental matrix between the two cameras and $d(x, l)$ on the right-hand side
denotes the distance from the image point $x$ to the epipolar line $l$. The matching matrix between the
two images is:
$$
D = \begin{bmatrix}
d(x_1^1, x_1^2) & d(x_1^1, x_2^2) & \cdots & d(x_1^1, x_n^2) \\
d(x_2^1, x_1^2) & d(x_2^1, x_2^2) & \cdots & d(x_2^1, x_n^2) \\
\vdots & \vdots & \ddots & \vdots \\
d(x_m^1, x_1^2) & d(x_m^1, x_2^2) & \cdots & d(x_m^1, x_n^2)
\end{bmatrix}. \tag{6}
$$
The globally optimal matching result, which is taken as the final detection result, is obtained by
solving the matching matrix $D$ with the Hungarian algorithm [36].
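The following sketch illustrates this data association step: detections from the two cameras are paired by a symmetric epipolar (transfer) error and the assignment is solved with the Hungarian algorithm, using scipy.optimize.linear_sum_assignment as the solver. The x2ᵀF x1 = 0 convention, the function names and the gating threshold are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment   # Hungarian algorithm

def point_to_line_dist(pt, line):
    """Perpendicular distance from homogeneous point pt to line (a, b, c)."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c * pt[2]) / np.hypot(a, b)

def associate_detections(pts1, pts2, F, max_cost=5.0):
    """Match detections between the two cameras using epipolar constraints.

    pts1 : (m, 2) pixel coordinates detected in camera 1.
    pts2 : (n, 2) pixel coordinates detected in camera 2.
    F    : 3x3 fundamental matrix, with the convention x2^T F x1 = 0.
    Returns the list of (i, j) matches whose cost stays below max_cost.
    """
    h1 = np.c_[np.asarray(pts1, float), np.ones(len(pts1))]
    h2 = np.c_[np.asarray(pts2, float), np.ones(len(pts2))]
    D = np.zeros((len(h1), len(h2)))
    for i, x1 in enumerate(h1):
        for j, x2 in enumerate(h2):
            # symmetric epipolar error: distance of each point to the
            # epipolar line induced by the other point
            D[i, j] = (point_to_line_dist(x2, F @ x1) ** 2 +
                       point_to_line_dist(x1, F.T @ x2) ** 2)
    rows, cols = linear_sum_assignment(D)           # globally optimal assignment
    return [(i, j) for i, j in zip(rows, cols) if D[i, j] < max_cost]
```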
Target Localization: Suppose the world coordinate of the laser marker is $X$, the two camera
parameter matrices are $P$ and $P'$, respectively, and the image coordinates of the laser marker in the
two detection images are $x$ and $x'$. Due to measurement error, no point satisfies the equations
$x \cong PX$ and $x' \cong P'X$ strictly, and the image points do not satisfy the epipolar geometry
constraint $x'^{T} F x = 0$.
A projectively invariant binocular localization method that minimizes the re-projection error is
presented here. The method finds the solution that satisfies the epipolar geometry constraint while
minimizing the re-projection error. Since the whole process only involves the projection of space
points and distances between 2D image points, the method is projectively invariant, which means
that the solution is independent of the specific projective space.
In the corresponding images of the two cameras, the observation points are $x$ and $x'$, respectively.
Suppose that the points near $x$ and $x'$ that precisely satisfy the epipolar geometry constraint are
$\hat{x}$ and $\hat{x}'$. The maximum likelihood estimate minimizes the following objective function:
$$
\min_{\hat{x}, \hat{x}'} \; d(x, \hat{x})^2 + d(x', \hat{x}')^2, \tag{7}
$$
subject to $\hat{x}'^{T} F \hat{x} = 0$, where $d(\ast, \ast)$ is the Euclidean distance between image points.
We first obtain an initial value of $X$ by the DLT algorithm. Supposing $x \cong PX$ and $x' \cong P'X$,
two equations $x \times PX = 0$ and $x' \times P'X = 0$ are obtained from the homogeneous relation.
Expanding the equations, we get:
$$
\begin{cases}
x_1 (p^{3T} X) - (p^{1T} X) = 0 \\
y_1 (p^{3T} X) - (p^{2T} X) = 0 \\
x_1 (p^{2T} X) - y_1 (p^{1T} X) = 0 \\
x_2 (p'^{3T} X) - (p'^{1T} X) = 0 \\
y_2 (p'^{3T} X) - (p'^{2T} X) = 0 \\
x_2 (p'^{2T} X) - y_2 (p'^{1T} X) = 0
\end{cases}, \tag{8}
$$
where $p^{iT}$ is the $i$-th row of the matrix $P$ and $p'^{jT}$ is the $j$-th row of the matrix $P'$.
The homogeneous coordinates are $x = (x_1, y_1, 1)^T$ and $x' = (x_2, y_2, 1)^T$. The linear equations in $X$
can be written as $AX = 0$. Although each set of points corresponds to three equations, only two of
them are linearly independent, so each set of points provides two equations about $X$ and the third
equation is usually omitted when solving for $X$. Thus $A$ can be written as:
$$
A = \begin{bmatrix}
x_1 p^{3T} - p^{1T} \\
y_1 p^{3T} - p^{2T} \\
x_2 p'^{3T} - p'^{1T} \\
y_2 p'^{3T} - p'^{2T}
\end{bmatrix}. \tag{9}
$$
Then $X$ is solved from:
$$
\min_{X} \| AX \|, \tag{10}
$$
subject to $\| X \| = 1$.
After the initial value $X_0$ of $X$ is obtained from the above formula, the LM algorithm is used for
iterative optimization to yield the final localization result.
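A compact sketch of this localization step is shown below: the DLT system of Equations (9) and (10) is solved with an SVD to obtain the initial point X0, which is then refined by Levenberg–Marquardt on the re-projection error. SciPy's least_squares with method='lm' stands in for the LM optimizer, and the function names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate_dlt(P1, P2, x1, x2):
    """Initial estimate X0 from the linear system of Equations (9)-(10)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    # min ||AX|| subject to ||X|| = 1  ->  smallest right singular vector of A
    X = np.linalg.svd(A)[2][-1]
    return X / X[3]                                  # homogeneous (X, Y, Z, 1)

def reprojection_residuals(Xw, P1, P2, x1, x2):
    """Pixel residuals of the candidate 3-D point in both images."""
    Xh = np.append(Xw, 1.0)
    p1, p2 = P1 @ Xh, P2 @ Xh
    return np.hstack([p1[:2] / p1[2] - x1, p2[:2] / p2[2] - x2])

def localize_target(P1, P2, x1, x2):
    """DLT initialization followed by LM refinement of the re-projection error."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    X0 = triangulate_dlt(P1, P2, x1, x2)[:3]
    sol = least_squares(reprojection_residuals, X0,
                        args=(P1, P2, x1, x2), method='lm')
    return sol.x                                     # refined (X, Y, Z)
```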
Target Tracking: The Euclidean distance is used as the distance measurement in 3D space. Define the
historical target tracking results $T_i^t\ (i = 1, 2, \dots, p)$ and the current localization results
$X_j^{t+1}\ (j = 1, 2, \dots, q)$; the distance between them is computed by:
$$
d(T_i^t, X_j^{t+1}) = \sqrt{(x_i^t - x_j^{t+1})^2 + (y_i^t - y_j^{t+1})^2 + (z_i^t - z_j^{t+1})^2}, \tag{11}
$$
where $(x_i^t, y_i^t, z_i^t)$ and $(x_j^{t+1}, y_j^{t+1}, z_j^{t+1})$ are the space coordinates of $T_i^t$ and $X_j^{t+1}$.
Thus the matching matrix between them is computed by:
$$
D_t^{t+1} = \begin{bmatrix}
d(T_1^t, X_1^{t+1}) & \cdots & d(T_1^t, X_q^{t+1}) \\
\vdots & \ddots & \vdots \\
d(T_p^t, X_1^{t+1}) & \cdots & d(T_p^t, X_q^{t+1})
\end{bmatrix}. \tag{12}
$$
The Hungarian algorithm is then used to obtain the target tracking results from $D_t^{t+1}$.
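The tracking update can be sketched in the same way as the detection-level association: build the p × q Euclidean distance matrix of Equations (11) and (12) and solve it with the Hungarian algorithm. The gating threshold and function name below are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(tracks_xyz, detections_xyz, gate=10.0):
    """Associate existing 3-D tracks with current localization results.

    tracks_xyz     : (p, 3) last known positions T_i^t of the tracked targets.
    detections_xyz : (q, 3) current localization results X_j^{t+1}.
    gate           : maximum allowed 3-D distance (m) for a valid match.
    """
    T = np.asarray(tracks_xyz, float)
    X = np.asarray(detections_xyz, float)
    D = np.linalg.norm(T[:, None, :] - X[None, :, :], axis=2)   # p x q matrix
    rows, cols = linear_sum_assignment(D)                       # Hungarian algorithm
    return [(i, j) for i, j in zip(rows, cols) if D[i, j] < gate]
```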
The system needs to have a certain fault tolerance to changes in the angle of the light source.
In this experiment, we first place the light sources at the same position with the same orientation,
and then adjust the horizontal rotation angle of the light source. The experiment was conducted at
150 m. As shown in Figure 6, the near infrared laser lamp can be detected robustly from 0 degrees to
45 degrees, while the strong light flashlight cannot be seen clearly when the angle is greater than
10 degrees.
Figure 6. The comparison of the imaging from different angles at the distance of 150 m. (a) The near
infrared laser lamp: from 0 degrees to 45 degrees; (b) The strong light flashlight: from 0 degrees to
10 degrees.
From the above experiments, we can see that the near infrared laser lamp fully meets the needs of the
landing system. In cooperation with the infrared camera array and optical filter, a robust optical
imaging system with a long detection range is established.
Figure 7. Verification of calibration results ((a) left camera; (b) right camera; (c) results of the left camera; (d) results of the right camera; the red circles in (c,d) are the real positions of the control points, and the yellow crosses are the positions calculated by re-projection based on the calibration parameters).
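The verification in Figure 7 amounts to re-projecting the surveyed control points through the calibrated camera matrices and comparing them with their detected image positions. A minimal sketch of that check is given below; the 3 × 4 projection-matrix representation and the function name are assumptions.

```python
import numpy as np

def reprojection_errors(P, world_pts, image_pts):
    """Per-point re-projection error of surveyed control points.

    P          : 3x4 camera projection matrix from the wide-baseline calibration.
    world_pts  : (k, 3) control-point coordinates measured by the total station.
    image_pts  : (k, 2) detected pixel positions of the same control points.
    """
    Xh = np.c_[np.asarray(world_pts, float), np.ones(len(world_pts))]
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]                # perspective division
    return np.linalg.norm(proj - np.asarray(image_pts, float), axis=1)
```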
The detection accuracy of the near infrared laser lamp fixed on the UAV directly determines the
accuracy of the target spatial localization. In this part, we analyze the effect of the detection error
on the localization accuracy. In this simulation experiment, we assume that the camera parameters
are already known. During the landing phase, the point is projected to the image through the camera
matrix, and zero-mean Gaussian random noise is added to the projected point. We then compute the
localization result and analyze the localization accuracy. Table 7 gives the average errors over 1000
simulation runs for different standard deviations of the Gaussian noise.
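A sketch of the simulation loop described above is given here: the true point is projected through the known camera matrices, zero-mean Gaussian pixel noise is added, the point is re-localized (here with the localize_target sketch from the previous section), and the absolute errors are averaged over the trials. The function and parameter names are illustrative.

```python
import numpy as np

def simulate_localization_error(P1, P2, X_true, sigma, trials=1000, seed=0):
    """Mean absolute localization error per axis under Gaussian pixel noise."""
    rng = np.random.default_rng(seed)
    Xt = np.asarray(X_true, float)
    Xh = np.append(Xt, 1.0)
    x1 = P1 @ Xh; x1 = x1[:2] / x1[2]                # noise-free projections
    x2 = P2 @ Xh; x2 = x2[:2] / x2[2]
    errors = []
    for _ in range(trials):
        n1 = x1 + rng.normal(0.0, sigma, 2)          # noisy image measurements
        n2 = x2 + rng.normal(0.0, sigma, 2)
        X_est = localize_target(P1, P2, n1, n2)      # DLT + LM sketch from above
        errors.append(np.abs(X_est - Xt))
    return np.mean(errors, axis=0)                   # average |error| in X, Y, Z
```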
In Table 7, the standard deviation of the Gaussian noise is set to 0.1 pixels, 0.2 pixels and 0.5 pixels
in turn. The errors in the three axes decrease as the distance decreases. The error in the X axis is
the largest, while the errors in the Y and Z axes remain highly accurate. When the standard deviation of
the Gaussian noise is set to 0.5 pixels, the errors in the three axes are the largest. However, over the last
100 m, the error in the X axis is less than 0.5 m and the errors in the Y and Z axes are within the
centimeter level. We can see that when the target detection error is less than 0.5 pixels, the localization
accuracy meets the requirements of the landing system.
We have successfully performed extensive automatic landing experiments in several places.
In order to enlarge the field of view, we usually choose a runway whose width is more than 15 m.
The field of view is usually determined by the width and length of the runway. As described previously,
the two infrared cameras are located on the two sides of the runway to capture images of the flying
aircraft. One of the basic principles is that the common field of view should cover the landing area
of the UAV. In the actual experiments, the landing area is usually known in advance, so it is easy to
make the infrared camera array cover the landing area. To ensure the accuracy, the imaging of the
landing area should be close to the image center, especially for the last 200 m. The detection range
changes with the baseline: with a baseline of 20 m, the minimum detection range is about 25 m and
the maximum detection range is over 1000 m. The UAV takes off and cruises at an altitude of 200 m
using the DGPS system. Once the UAV is detected and the error is acceptable, the UAV is guided to
land by our ground-based vision system.
We compare the localization results with DGPS in Figure 10a. We can see that the detection range
is over 1000 m and that the data generated by our system coincide with the DGPS data. The accuracy
in the Z axis is the most important factor for real UAV automatic landing. On the contrary, the accuracy
in the X axis has the smallest influence on automatic landing because of the long runway, while the
limited width of the runway still demands a high localization accuracy in the Y axis. In this experiment,
we take the data from DGPS as the ground truth, and the absolute errors in X, Y and Z are evaluated as
shown in Figure 10e–g. The errors in the X element are the largest compared with the Y and Z elements.
However, the errors in the X and Y elements gradually decrease during the landing phase. Over the
last 200 m, the location errors in the X and Y coordinates are reduced to below 5 m and 0.5 m,
respectively, both of which represent a high accuracy. Over the last 100 m, the localization results in
the X and Y coordinates are nearly the same as DGPS. To avoid a crash, a high-precision estimate of the
altitude must be guaranteed. We achieve an impressive localization result in the Z axis, in which the
error is less than 0.22 m during the whole landing process. The measurement precision throughout the
landing process completely meets the requirements of the UAV control center.
Figure 11 shows one of the landing trajectories generated by our system, together with several landing
poses of the UAV. Under the control of our system, the pose of the UAV remained steady during the
whole descent. When the UAV was controlled by our ground-based system, the GPS jammer was
turned on. Thus, in this experiment, the UAV was controlled from 820 m in a GPS-denied environment
and landed successfully; we can see that the trajectory is smooth and complete.
Figure 10. Comparison of DGPS and vision data. (a) The UAV trajectories of the vision-based method and the DGPS method; (b–d) the location results in the X, Y and Z coordinates; (e–g) the location errors in the X, Y and Z coordinates.
Figure 11. UAV landing trajectory generated by the system (Z: height (m); Y: vertical to the runway (m); horizontal axis: distance along the runway, 0–900 m).
4. Conclusions
This paper described a novel infrared camera array guidance system for UAV automatic landing
in GPS-denied environments. It overcomes the shortcomings of the traditional GPS-based method,
whose signals are easily blocked. After the optical imaging system is designed, a high-precision
calibration method for large scenes based on an electronic total station is provided. The feasibility and
accuracy have been verified through real-time flight experiments without GPS, and the results show
that the control distance of our system is over 1000 m and that a high landing accuracy is achieved.
References
1. Shaker, M.; Smith, M.N.; Yue, S.; Duckett, T. Vision-based landing of a simulated unmanned aerial vehicle
with fast reinforcement learning. In Proceedings of the 2010 International Conference on Emerging Security
Technologies (EST), Canterbury, UK, 6–7 September 2010; pp. 183–188.
2. Erginer, B.; Altug, E. Modeling and PD control of a quadrotor VTOL vehicle. In Proceedings of the 2007 IEEE
Intelligent Vehicles Symposium, Istanbul, Turkey, 13–15 June 2007; pp. 894–899.
3. Ahmed, B.; Pota, H.R. Backstepping-based landing control of a RUAV using tether incorporating flapping
correction dynamics. In Proceedings of the 2008 American Control Conference, Seattle, WA, USA,
11–13 June 2008; pp. 2728–2733.
4. Gavilan, F.; Acosta, J.; Vazquez, R. Control of the longitudinal flight dynamics of an UAV using adaptive
backstepping. IFAC Proc. Vol. 2011, 44, 1892–1897.
5. Yoon, S.; Kim, Y.; Park, S. Constrained adaptive backstepping controller design for aircraft landing in wind
disturbance and actuator stuck. Int. J. Aeronaut. Space Sci. 2012, 13, 74–89.
6. Ferreira, H.C.; Baptista, R.S.; Ishihara, J.Y.; Borges, G.A. Disturbance rejection in a fixed wing UAV using
nonlinear H∞ state feedback. In Proceedings of the 9th IEEE International Conference on Control and
Automation (ICCA), Santiago, Chile, 19–21 December 2011; pp. 386–391.
7. Rao, D.V.; Go, T.H. Automatic landing system design using sliding mode control. Aerosp. Sci. Technol. 2014,
32, 180–187.
8. Olivares-Méndez, M.A.; Mondragón, I.F.; Campoy, P.; Martínez, C. Fuzzy controller for UAV-landing task
using 3D-position visual estimation. In Proceedings of the 2010 IEEE International Conference on Fuzzy
Systems (FUZZ), Barcelona, Spain, 18–23 July 2010; pp. 1–8.
9. Liao, F.; Wang, J.L.; Poh, E.K.; Li, D. Fault-tolerant robust automatic landing control design. J. Guid.
Control Dyn. 2005, 28, 854–871.
10. Gui, Y.; Guo, P.; Zhang, H.; Lei, Z.; Zhou, X.; Du, J.; Yu, Q. Airborne vision-based navigation method for UAV
accuracy landing using infrared lamps. J. Intell. Robot. Syst. 2013, 72, 197–218.
11. Huh, S.; Shim, D.H. A vision-based automatic landing method for fixed-wing UAVs. J. Intell. Robot. Syst.
2010, 57, 217–231.
12. Guo, P.; Li, X.; Gui, Y.; Zhou, X.; Zhang, H.; Zhang, X. Airborne vision-aided landing navigation system for
fixed-wing UAV. In Proceedings of the 12th International Conference on Signal Processing (ICSP), Hangzhou,
China, 26–30 October 2014; pp. 1215–1220.
13. Ming, C.; Xiu-Xia, S.; Song, X.; Xi, L. Vision aided INS for UAV auto landing navigation using SR-UKF based
on two-view homography. In Proceedings of the 2014 IEEE Chinese Guidance, Navigation and Control
Conference (CGNCC), Yantai, China, 8–10 August 2014; pp. 518–522.
14. Vladimir, T.; Jeon, D.; Kim, D.H.; Chang, C.H.; Kim, J. Experimental feasibility analysis of ROI-based
Hough transform for real-time line tracking in auto-landing of UAV. In Proceedings of the 15th IEEE
International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing
Workshops (ISORCW), Shenzhen, China, 11 April 2012; pp. 130–135.
15. Abu-Jbara, K.; Alheadary, W.; Sundaramorthi, G.; Claudel, C. A robust vision-based runway detection and
tracking algorithm for automatic UAV landing. In Proceedings of the 2015 International Conference on
Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 1148–1157.
16. Cao, Y.; Ding, M.; Zhuang, L.; Cao, Y.; Shen, S.; Wang, B. Vision-based guidance, navigation and control for
Unmanned Aerial Vehicle landing. In Proceedings of the 9th IEEE International Bhurban Conference on
Applied Sciences & Technology (IBCAST), Islamabad, Pakistan, 9–12 January 2012; pp. 87–91.
17. Sereewattana, M.; Ruchanurucks, M.; Thainimit, S.; Kongkaew, S.; Siddhichai, S.; Hasegawa, S. Color marker
detection with various imaging conditions and occlusion for UAV automatic landing control. In Proceedings
of the 2015 Asian Conference on IEEE Defence Technology (ACDT), Hua Hin, Thailand, 23–25 April 2015;
pp. 138–142.
18. Zhuang, L.; Han, Y.; Fan, Y.; Cao, Y.; Wang, B.; Zhang, Q. Method of pose estimation for UAV landing.
Chin. Opt. Lett. 2012, 10, S20401.
19. Hong, L.; Haoyu, Z.; Jiaxiong, P. Application of cubic spline in navigation for aircraft landing. J. HuaZhong
Univ. Sci. Technol. (Nat. Sci. Ed.) 2006, 34, 22.
20. Barber, B.; McLain, T.; Edwards, B. Vision-based landing of fixed-wing miniature air vehicles. J. Aerosp.
Comp. Inf. Commun. 2009, 6, 207–226.
21. Lange, S.; Sunderhauf, N.; Protzel, P. A vision based onboard approach for landing and position control of an
autonomous multirotor UAV in GPS-denied environments. In Proceedings of the International Conference
on Advanced Robotics, Munich, Germany, 22–26 June 2009; pp. 1–6.
22. Miller, A.; Shah, M.; Harper, D. Landing a UAV on a runway using image registration. In Proceedings
of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008;
pp. 182–187.
23. Daquan, T.; Hongyue, Z. Vision based navigation algorithm for autonomic landing of UAV without heading
& attitude sensors. In Proceedings of the Third International IEEE Conference on Signal-Image Technologies
and Internet-Based System, Shanghai, China, 16–19 December 2007; pp. 972–978.
24. Zingg, S.; Scaramuzza, D.; Weiss, S.; Siegwart, R. MAV navigation through indoor corridors using optical
flow. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA),
Anchorage, AK, USA, 3–8 May 2010; pp. 3361–3368.
25. Gautam, A.; Sujit, P.; Saripalli, S. A survey of autonomous landing techniques for UAVs. In Proceedings of the
2014 International Conference on Unmanned Aircraft Systems (ICUAS), Orlando, FL, USA, 27–30 May 2014;
pp. 1210–1218.
26. Wang, W.; Song, G.; Nonami, K.; Hirata, M.; Miyazawa, O. Autonomous control for micro-flying robot and
small wireless helicopter X.R.B. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent
Robots and Systems, Beijing, China, 9–15 October 2006; pp. 2906–2911.
27. Martínez, C.; Campoy, P.; Mondragón, I.; Olivares Mendez, M.A. Trinocular ground system to control UAVs.
In Proceedings of the 2009 IEEE-RSJ International Conference on Intelligent Robots and Systems, St. Louis,
MO, USA, 11–15 October 2009; pp. 3361–3367.
28. Pebrianti, D.; Kendoul, F.; Azrad, S.; Wang, W.; Nonami, K. Autonomous hovering and landing of a
quad-rotor micro aerial vehicle by means of on ground stereo vision system. J. Syst. Des. Dynam. 2010,
4, 269–284.
29. Abbeel, P.; Coates, A.; Ng, A.Y. Autonomous helicopter aerobatics through apprenticeship learning. Int. J.
Robot. Res. 2010, doi:10.1177/0278364910371999.
30. OPATS Laser based landing aid for Unmanned Aerial Vehicles, RUAG—Aviation Products, 2016.
Available online: http://www.ruag.com/aviation (accessed on 27 July 2016).
31. Kong, W.; Zhang, D.; Wang, X.; Xian, Z.; Zhang, J. Autonomous landing of an UAV with a ground-based
actuated infrared stereo vision system. In Proceedings of the 2013 IEEE/RSJ International Conference on
Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 2963–2970.
32. Kong, W.; Zhou, D.; Zhang, Y.; Zhang, D.; Wang, X.; Zhao, B.; Yan, C.; Shen, L.; Zhang, J. A ground-based
optical system for autonomous landing of a fixed wing UAV. In Proceedings of the 2014 IEEE/RSJ
International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014;
pp. 4797–4804.
33. Weng, J.; Cohen, P.; Herniou, M. Camera calibration with distortion models and accuracy evaluation.
IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 965–980.
34. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000,
22, 1330–1334.
35. Hartley, R.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press:
Cambridge, UK, 2003.
36. Edmonds, J. Paths, trees, and flowers. Can. J. Math. 1965, 17, 449–467.
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).