Article
Smart Search System of Autonomous Flight UAVs for
Disaster Rescue
Donggeun Oh 1 and Junghee Han 2, *
Abstract: UAVs (Unmanned Aerial Vehicles) have been developed and adopted for various fields including military, IT, agriculture, construction, and so on. In particular, UAVs are being heavily used in the field of disaster relief as they become smaller and more intelligent. Searching for a person at a disaster site can be difficult if the mobile communication network is unavailable or if the person is in a GPS shadow area. Recently, the search for survivors using unmanned aerial vehicles has been studied, but several problems remain because the search relies mainly on images taken with cameras (including thermal imaging cameras). For example, it is difficult to distinguish a distressed person from a long distance, especially in the presence of cover. Considering these challenges, we propose an autonomous UAV smart search system that can complete search-and-tracking missions for distressed persons without interference, even in disaster areas where communication with base stations is likely to be lost. To achieve this goal, we first make UAVs perform autonomous flight, locating and approaching the distressed people without the help of the ground control server (GCS). Second, to locate a survivor accurately, we developed a genetic-based localization algorithm that detects changes in the signal strength between the distressed person and the drones inside the search system. Specifically, we modeled our target platform with a genetic algorithm and re-defined the genetic algorithm, customizing it to the disaster site's environment for tracking accuracy. Finally, we verified the proposed search system at several real-world sites and found that it successfully located targets with autonomous flight.
Keywords: UAV (Unmanned Aerial Vehicles); disaster; localization; smart search; autonomous flight
On the other hand, if a drone is used to detect radio waves and search for the location of a survivor, the time required for the initial search can be drastically reduced, and a large area can be searched quickly. In addition to mobile communication radio waves, by detecting radio waves from WiFi and portable RF terminals, location search becomes possible even in non-cellular areas or in GPS shadow areas.
From this point of view, the main purpose of the system proposed in this paper is
to expedite the search operation in the disaster areas outside the mobile communication
network where distressed persons cannot request rescue. The speed of rescue is directly
related to the lives of the survivors, and a quick search operation can minimize cost. To
achieve this goal, we develop a smart search system consisting of autonomously flying UAVs, a GCS, a smart search algorithm, and communication protocols. This system allows UAVs to locate and approach people in distress and perform autonomous flight without direct control by the ground control server.
The main contributions of the proposed paper can be summarized as follows:
• Autonomous UAV Search: the smart search system enables UAVs to perform the
autonomous search process to locate and approach the distressed people without the
help of the ground control server (GCS). When a UAV takes off, the first predicted
position is not accurate and can be very far from the actual survivor’s position. As the
drone flies, it accumulates RSSI (Received Signal Strength Indicator) and ToA (Time
of Arrival) data from survivors and the UAV gradually modifies its flight direction
towards the survivor, resulting in a more accurate estimate of the location.
• Quick and Smart Tracking Algorithm: we present a smart search system based on a genetic algorithm that detects changes in the signal strength between the distressed person and the drones inside the search system. The proposed smart search system is customized to the disaster site environment to improve tracking accuracy. Specifically, by combining RSSI and ToA data in consideration of the flight environment, it is possible to effectively filter out noise factors and obtain a more accurate distance estimation.
• Real-World Case Studies: we performed the flight search test in two real-world test
cases to verify the performance of the proposed survivor location tracking system.
We operated fixed-wing drones in sites of about 4 km × 4 km and 1 km × 1 km to search for a survivor, and we compared the estimated position of the survivor with the actual position.
The rest of the paper is organized as follows: Section 2 introduces positioning and
tracking systems and discusses problems and limitations of current disaster rescue plat-
forms. Section 3 presents an architecture of the overall search platform and explains the
detailed algorithm and implementation of the proposed smart search. In Section 4, we
describe the experimental procedure and analyze the results for the performance evaluation
of the proposed approach. Finally, Section 5 wraps up this paper with a discussion.
2. Related Works
2.1. Autonomous Path Planning
For completing an autonomous flight mission to a destination, planning with obstacle
avoidance is the most basic and important process. There have been many research
studies for efficient and effective path planning [4,5]. Several ML (Machine Learning) based
approaches have been proposed for path planning. DQN (Deep Q Networks) [6–8] is
one of the well known and widely used ML-based path planning algorithms. DQN is
basically categorized as a reinforcement learning method [9–11], which learns how to make
the best decision in the future through the process of performing an action and receiving
a reward. Based on DQN, many researchers have developed various applications and
extensions [12,13]. In addition, for realistic driving scenarios which require continuous
actions, not discrete ones, DDPG (Deep Deterministic Policy Gradient) algorithm [14]
adapts the ideas underlying the success of DQN to the continuous action domain. Many
extended studies of DDPG have been developed for various applications [15–18].
The main limitation of the above methods is that these algorithms do not perform well in new environments that differ significantly from the a priori trained domain. Compared to these conventional methods, agents in Value Iteration Networks (VIN)
algorithms can learn a plan to reach a goal, even in a new environment [19–21]. Although
VIN is also categorized as a reinforcement learning-based method such as DQN and DDPG,
VIN has additionally embedded separate explicit reactive planning modules to express the
policy. However, traditional VIN-based algorithms cannot cover a wide area. Thus, in this
paper, we adopted a hierarchical VIN [22] method for path planning to cover a large target
area such as our test fields (4 km × 4 km and 1 km × 1 km).
Control System) for data sharing. Based on the input data, the GCS and UAVs estimate the
location of the survivor via the proposed Smart Search Algorithm. Specifically, the search
algorithm estimates locations, updates the estimated locations of the survivor, and reduces
the error between the estimated locations and the actual location. The GCS also uploads information such as the trajectories of the UAVs and the estimated survivor's location to its web server for further sharing with mobile applications.
In addition, the GCS can store these data in cloud storage because their volume is too large to keep in local storage. All relevant information can be exchanged between UAVs via direct communication without going through a web server, or shared with a web or smart-device application via the server.
On the other hand, UAVs update their waypoints based on the estimated location of
a survivor and fly toward the potential location. Figure 2 illustrates how the estimated
position information indicated by the dots in this figure is continuously updated as a
UAV approaches the target (The plain dots are waypoints for flight and the solid dots are
temporary or local goals in flight trajectory towards the waypoints, which are generated by
a trajectory builder inside UAVs).
Figure 3. Communication Packets (a) to a Smart Search module and (b) from a Smart Search Module.
Using this information, each UAV and GCS can launch its own Smart Search module
to estimate a survivor's location. For example, in the two test cases in this paper, the UAV and GCS exchange packets, and each of them calculates the survivor's location for itself. Note that the Smart Search module is embedded in each UAV and GCS.
The packets shown in Figure 3b are transmitted from a Smart Search module to a UAV or GCS agent. With this packet, the Smart Search process transfers the estimated location of the survivor, the ID indicating that the packet came from the Smart Search process, the size of the payload, and the value bits indicating whether the estimated location is valid or invalid. The latitude and longitude of the estimated location are represented in HEX (hexadecimal). Multi-byte data are likewise represented MSB (most significant byte) first.
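For concreteness, the following is a minimal sketch of how such a packet could be packed and unpacked. The field widths, their ordering, and the 1e-7-degree integer scaling of latitude/longitude are illustrative assumptions; the paper specifies only the field contents, the HEX encoding, and the MSB-first (big-endian) byte order:

```python
import struct

def pack_location(module_id: int, valid: bool, lat: float, lon: float) -> bytes:
    """Pack an estimated location into an MSB-first (big-endian) packet.

    Assumed layout: 1-byte module ID, 2-byte payload size, 1-byte validity
    flag, then latitude/longitude scaled to 1e-7 degrees as signed 32-bit
    integers (a common convention, not specified in the paper).
    """
    payload = struct.pack(">ii", int(lat * 1e7), int(lon * 1e7))
    header = struct.pack(">BHB", module_id, len(payload), 1 if valid else 0)
    return header + payload

def unpack_location(packet: bytes):
    """Reverse of pack_location: returns (module_id, valid, lat, lon)."""
    module_id, size, valid = struct.unpack(">BHB", packet[:4])
    lat_i, lon_i = struct.unpack(">ii", packet[4:4 + size])
    return module_id, bool(valid), lat_i / 1e7, lon_i / 1e7
```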
The communication procedure for searching a survivor is summarized as follows:
1. Each drone takes off and flies in a random direction until it catches signals from a
survivor. We call this stage a random flight.
2. Once a radio signal with the survivor’s radio module is detected, the UAV shares and
exchanges the information with other UAVs and GCS over the mesh network using
OLSR routing protocols.
3. Each agent in UAVs and/or GCS launches a Smart Search Module and transfers the
shared data about a survivor to the Smart Search Module.
4. The Smart Search module estimates the survivor's location, if possible. If the obtained data are not sufficient for localization, it sends back the packet with the invalid tag set.
5. If the Smart Search module sends valid location information of a survivor, then the
GCS and UAVs switch their stage to a search mission for the survivor and generate
an evasive path to the estimated survivor’s location.
6. UAVs autonomously fly through the waypoints toward the estimated location while avoiding obstacles.
7. As a UAV gets closer to the survivor, it obtains more accurate signal information. It re-sends the information about the survivor to the Smart Search Algorithm and updates the survivor's location (then go to Step 3).
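For illustration, the seven steps above can be organized as a simple control loop on board each UAV. In the following Python sketch, every method on the uav object (detect_survivor_signal, share_over_mesh, smart_search, and so on) is a hypothetical placeholder, not the system's actual API:

```python
from enum import Enum, auto

class Stage(Enum):
    RANDOM_FLIGHT = auto()   # Step 1: no survivor signal detected yet
    SEARCH_MISSION = auto()  # Steps 5-7: fly toward the estimated location

def search_loop(uav):
    stage = Stage.RANDOM_FLIGHT
    while not uav.mission_complete():
        beacon = uav.detect_survivor_signal()         # Step 2
        if beacon is not None:
            uav.share_over_mesh(beacon)               # Step 2: OLSR mesh
        estimate = uav.smart_search(uav.collected_beacons())  # Steps 3-4
        if estimate is not None and estimate.valid:   # Step 5
            stage = Stage.SEARCH_MISSION
            uav.set_waypoint(estimate.location)       # evasive path to target
        if stage is Stage.SEARCH_MISSION:
            uav.fly_to_waypoint_avoiding_obstacles()  # Step 6
        else:
            uav.fly_random_direction()                # Step 1: random flight
        # Step 7: the loop repeats, refining the estimate with fresh signals
```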
In this system, the GCS can check the drone's latitude and longitude, altitude and inclination, wind direction, and wind speed during flight. It also links with Google Earth to determine the drone's flight path and to view the drone's location and the estimated location of the victim on the map. Figure 4 shows a screenshot of our GCS during the smart search mission of the UAVs. Thus, the GCS, under human control, can guide UAVs towards more accurate and faster tracking by considering the flight environment.
Note that the UAV of the proposed system is still able to estimate the location of the
survivor by itself based on its own data even if the UAV is isolated and fails to communicate
with other UAVs and the GCS. Even if communication fails, it can reach the survivor by itself. However, the tracking procedure might be delayed or less accurate, because the genetic-based algorithm computes the survivor's location more reliably with more data about the survivor.
Figure 4. GCS (Ground Control System): the screenshot during the smart search flight.
d = 10^((TxPower − RSSI) / (10 × n))    (1)

In Equation (1), d is the distance, TxPower is the transmission power, and n is the path-loss exponent, an environment-dependent constant in the real world. The value of n varies depending on the environment, but it is usually considered to be 2 [39].
ToA, like RSSI, can be used to estimate the distance between drones and survivors. ToA is the travel time of the radio signal from the transmitter to the receiver. The radio wave arrival time is estimated from the difference between the transmission time at the signal source and the reception time. Therefore, it is possible to estimate the distance between the drone and the survivor by multiplying the radio wave arrival time by the radio wave's propagation speed. Based on the distances between the drone and the distressed person estimated using RSSI and ToA, it is possible to roughly find the position of the survivor using triangulation.
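As a concrete illustration (not the authors' code), both conversions can be written in a few lines. The RSSI-based distance follows Equation (1), and the ToA-based distance is the arrival time multiplied by the propagation speed, taken here as the speed of light:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s, nominal radio propagation speed

def distance_from_rssi(rssi_dbm: float, tx_power_dbm: float, n: float = 2.0) -> float:
    """Log-distance path-loss model as in Equation (1); n is usually ~2 [39]."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def distance_from_toa(arrival_time_s: float) -> float:
    """Distance = one-way travel time x propagation speed."""
    return arrival_time_s * SPEED_OF_LIGHT
```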
Triangulation is the most commonly used method for estimating the position of an
object in two dimensions. At least three reference points are required to use triangulation.
In this work, the position of the drone at the moment it receives data from the survivor becomes a reference point. If the received signal strength were ideal, the circles drawn around each of the three reference points, with radii equal to the distances converted from the received RSSI value or ToA, would intersect at a single point. That intersection point can then be taken as the estimated position of the survivor, as shown in Figure 5.
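With ideal measurements, the survivor's position could be computed directly from the circle equations. For reference, the least-squares baseline below linearizes the three (or more) circle equations by subtracting the first from the rest; this is only a baseline illustration, since the paper handles the non-ideal intersection area with the genetic algorithm described next:

```python
import numpy as np

def trilaterate(points, distances):
    """Least-squares 2-D trilateration from >= 3 reference points.

    Subtracting the first circle equation from the others yields a linear
    system A x = b, solved here with numpy's least squares.
    """
    p = np.asarray(points, dtype=float)     # shape (k, 2), k >= 3
    d = np.asarray(distances, dtype=float)  # estimated radii from RSSI/ToA
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # estimated (x, y) of the survivor
```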
d_n^i = α × d_n^i(RSSI) + (1 − α) × d_n^i(ToA)    (2)

where α ∈ [0, 1], and d_n^i(RSSI) and d_n^i(ToA) are the distances calculated from RSSI and ToA, respectively. The procedure and notations of the proposed GA-based localization algorithm are illustrated in Algorithm 1 and Table 2. In the following, we explain the details of each function in this pseudocode.
Table 2. Notations used in the proposed GA-based localization algorithm.

Notations         Definitions
l_n^i             the nth location of the ith UAV, = (x_n^i, y_n^i, z_n^i)
d_n^i             the distance from a survivor to the UAV, calculated from RSSI and ToA
b_n^i             a beacon composed of (l_n^i, d_n^i)
Samples           survivor locations randomly extracted from the intersection area
c                 a chromosome corresponding to a survivor's potential location
dist(c, l_n^i)    the distance between the chromosome and a UAV's location
• Collect Beacons(): The Smart Search module in each UAV gathers its own beacons and also beacons from other UAVs, denoted b_n^i = (l_n^i, d_n^i). Among these collected beacons, there are two different ways to select three beacons:
– three adjacent beacons from all UAVs (Figure 6a)
– three beacons from all UAVs separated by a certain time interval (Figure 6b)
If the centers of the resulting circles are close to each other, the intersection area may be too large, and the estimated positions cannot be narrowed down. Thus, the proposed genetic-based algorithm uses the second method to select beacons so that the selected circles keep a proper distance from one another and form a well-defined intersection area.
• Obtain Samples(): The Smart Search module draws three circles, each corresponding to one of the three beacons selected in the previous step. As shown in Figure 5, the three circles might create an intersection area instead of an exact single location point, due to the distance errors in the RSSI- and ToA-based calculations. Therefore, the location should be estimated based on the intersection area, not a point of intersection. The algorithm then randomly extracts a certain number of sample points from the intersection area and puts them into Samples. Note that, if the intersection area is far too large or too small, we skip the genetic process and wait for the next beacons.
• Form Population(): The algorithm now selects a set of chromosomes c from Samples using a fitness function. The fitness value of c is defined as follows:
Fitness(c) = 1 / Σ_{b_n^i ∈ SelectedBeacons} (dist(c, l_n^i) − d_n^i)^2    (3)
A fitness value for each chromosome is calculated from the distance between the chromosome and the drone and the radius of each circle. In this work, we design the fitness function so that the closer the distance between the drone and the chromosome is to the distance estimated from the RSSI and ToA signals, the higher the fitness. The selected chromosomes form the population of a generation.
• Crossover() and Mutation(): The population then undergoes inter-chromosome crossover and mutation, each applied with a certain probability within a generation. The following equations describe the crossover (Equations (4) and (5)) and mutation (Equation (6)) operations of the proposed algorithm in Algorithm 1:

c_a = (x_b, y_b, z_b) × (1 − α) + (x_a, y_a, z_a) × α    (4)
c_b = (x_a, y_a, z_a) × (1 − α) + (x_b, y_b, z_b) × α    (5)
c_a = (x_a, y_a, z_a) ± (γ_x, γ_y, γ_z)    (6)

where γ_x, γ_y, and γ_z are random numbers in the range [0, minimum radius of the three circles]. Such randomized mutation and crossover operations are performed to avoid local optimal solutions.
• Termination(): The offspring produced by the above mutation and crossover operations are put into the population, and the two worst-fitted chromosomes are removed from it. This process corresponds to one generation. If the best fitness value in the population is greater than a threshold, or the number of generations exceeds the maximum generation threshold, the process terminates; otherwise, the above process is repeated. A minimal end-to-end sketch of this genetic loop is given after this list.
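Putting the above functions together, the following Python sketch illustrates one plausible reading of the genetic loop, using the parameter values listed later in Table 3. The Beacon structure, the helper names, and the epsilon guard in the fitness function are our assumptions for illustration, not the authors' implementation:

```python
import random
from math import dist  # Euclidean distance between two points
from collections import namedtuple

Beacon = namedtuple("Beacon", "location distance")  # b_n^i = (l_n^i, d_n^i)

POP_SIZE, P_CROSS, P_MUT, MAX_GEN = 300, 0.9, 0.1, 300  # values from Table 3

def fitness(c, beacons):
    """Equation (3): inverse squared error between the chromosome-to-UAV
    distances and the RSSI/ToA-estimated distances."""
    err = sum((dist(c, b.location) - b.distance) ** 2 for b in beacons)
    return 1.0 / (err + 1e-9)  # epsilon avoids division by zero (assumption)

def crossover(ca, cb, alpha):
    """Equations (4) and (5): blend two parent locations."""
    child_a = tuple(xb * (1 - alpha) + xa * alpha for xa, xb in zip(ca, cb))
    child_b = tuple(xa * (1 - alpha) + xb * alpha for xa, xb in zip(ca, cb))
    return child_a, child_b

def mutate(c, min_radius):
    """Equation (6): perturb each coordinate by a random offset bounded by
    the minimum radius of the three circles."""
    return tuple(x + random.uniform(-min_radius, min_radius) for x in c)

def ga_localize(samples, beacons, min_radius, threshold):
    # Form Population(): keep the best-fitted sample points.
    population = sorted(samples, key=lambda c: fitness(c, beacons),
                        reverse=True)[:POP_SIZE]
    for _ in range(MAX_GEN):
        ca, cb = random.sample(population, 2)
        if random.random() < P_CROSS:
            ca, cb = crossover(ca, cb, random.random())
        if random.random() < P_MUT:
            ca = mutate(ca, min_radius)
        # One generation: add offspring, drop the two worst-fitted chromosomes.
        population.extend([ca, cb])
        population.sort(key=lambda c: fitness(c, beacons), reverse=True)
        population = population[:-2]
        if fitness(population[0], beacons) > threshold:
            break  # Termination(): best fitness exceeds the threshold
    return population[0]  # best estimate of the survivor's location
```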
the survivor's location. If the potential location of a survivor is identified, even if it is not yet accurate, the UAV updates its waypoint for flight. If the UAV arrives at the identified location, it ends its mission and notifies the GCS. If the survivor is still far from the UAV's current location, the UAV flies towards the identified location and keeps collecting beacons. Based on the updated beacons, it re-estimates the potential location of the survivor and updates its waypoint again until it reaches the survivor.
4. Case Studies
This section presents results and analysis of the case studies conducted in the real
world. In particular, we estimate the position of the survivor based on the data communi-
cated with the survivor while operating the fixed-wing and rotary-wing drones, and we
analyze the estimation error by comparing the estimated position with the actual position
of the survivor.
Figure 8. Two flight test case environments for verification of the proposed survivor location
tracking system.
The UAV model used in both test cases is the Sky Observer [40], and the GCS runs on
an Intel i5 CPU core with the Windows 10 operating system. The parameter values used in
the proposed genetic localization algorithm are presented in Table 3.
Table 3. Parameter values used in the proposed genetic localization algorithm.

Parameters                   Values
Population size              300
Crossover probability        0.9
Mutation probability         0.1
Threshold of MaxGeneration   300
Figure 9. Signal-based distance estimates versus actual distance between drones and survivors.
This observation about the RSSI-based estimation is intuitive because the shorter the distance, the smaller the RSSI signal noise. To explain the ToA results above, we examined the experimental logs in more detail. Note that we converted ToA signals into distance using the following baseline equation:

d = c × (t_RTT − t_overhead) / 2

where c is the propagation speed of the radio wave, t_RTT is the measured round-trip time, and t_overhead is the system overhead. When calculating the ToA, there may be additional processing time consumed by the node (e.g., running the operating system and/or processing network traffic). To account for this system overhead, we reduce the total round-trip time by the system overhead.
Figure 10 shows the real distance between a drone and a survivor (y-axis) for each obtained ToA value (x-axis). Each dot corresponds to one measurement. These graphs show that, even when the ToA values are the same, the corresponding distances do not converge but vary. This implies that the error of the ToA-based estimated distance can be large regardless of how far a drone is from the survivor. To ameliorate this problem, we used linear regression to obtain a "realistic" value of the ToA signal speed with minimal error in each test field. The fitted value is shown as a first-order straight line in each graph of Figure 10. Note that, even with the linear-regression value, the ToA-based estimation error is not negligible. Similarly, to minimize the error of the RSSI-based distance estimation, this paper determines the value of the variable n in Equation (1) based on the measured data in each test case field.
Figure 10. Actual distance between drones and survivors for each ToA value.
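For reference, this first-order fit can be obtained with ordinary least squares. Below is a minimal numpy sketch under the assumption that arrays of logged ToA values and ground-truth distances are available; the function name is ours:

```python
import numpy as np

def calibrate_toa(toa_values, true_distances):
    """Fit distance = slope * ToA + intercept by least squares.

    The slope plays the role of the 'realistic' ToA signal speed for a
    given test field; the intercept absorbs constant system overhead.
    """
    slope, intercept = np.polyfit(np.asarray(toa_values, dtype=float),
                                  np.asarray(true_distances, dtype=float),
                                  deg=1)
    return slope, intercept

# Usage: d_est = slope * toa + intercept for a new ToA measurement.
```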
Overall, we observe that errors of RSSI and ToA signals are related to the distance
between a drone and a survivor. In particular, we discovered the tendency that RSSI-based
distance measure is more accurate than the ToA-based measure near a survivor, while the ToA-based measure shows better accuracy when a drone is far from a survivor. Thus, to increase the
accuracy of the location estimation system, this paper adopts a “hybrid” of RSSI- and
ToA-based methods depending on the estimated distance between a drone and a survivor.
We separate the estimated distance values into two categories, above and below 1000 m, as shown in Figure 11. Basically, when the estimated distance was more than 1000 m, the RSSI value had a very large error and was difficult to use as meaningful data. Therefore, distance estimates were made based on the filtered ToA values only when the distance is more than 1000 m. On the other hand, when the estimated distance is less than 1000 m, we gradually increase the reflection ratio of RSSI, because we observe that the error of the RSSI value decreases near a survivor. Note that we also exclude abnormal ToA-based signals from the distance estimation procedure when the filtered ToA value is smaller than the system overhead.
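The paper's exact blending schedule below 1000 m is not spelled out; the sketch below shows one plausible implementation of the described policy, in which the RSSI weight grows linearly as the estimated distance shrinks (the linear schedule and the function name are our assumptions):

```python
def hybrid_distance(d_rssi, d_toa, toa_filtered, overhead):
    """Blend RSSI- and ToA-based distance estimates (assumed schedule)."""
    if toa_filtered is not None and toa_filtered < overhead:
        d_toa = None  # abnormal ToA: filtered value below system overhead
    rough = d_toa if d_toa is not None else d_rssi
    if rough is None:
        return None  # no usable signal this round
    if rough >= 1000.0:
        return d_toa  # far away: RSSI too noisy, use filtered ToA only
    if d_toa is None:
        return d_rssi
    alpha = 1.0 - rough / 1000.0  # RSSI weight grows from 0 to 1 near 0 m
    return alpha * d_rssi + (1.0 - alpha) * d_toa
```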
Using the hybrid method of RSSI and ToA shown in Figure 11, we track the location of a survivor in both Case #1 and Case #2. The results in Figure 12 compare the estimated distance from the above filtering process with the actual distance. For both Case #1 and Case #2, we found that the localization errors were significantly reduced compared to the RSSI-only and ToA-only results in Figure 9.
Figure 12. Estimated distance and actual distance between drones and survivors by the hybrid
localization method.
to the survivor is correct, and thus the drone can approach near the survivor. Even if the first estimated location is very far from the actual survivor's location, we can see that the estimated location increasingly heads towards the survivor as the drone accumulates RSSI and ToA data from the survivor while it flies.
For the second case, shown in the right picture of Figure 13, a drone takes off in the center of the picture and flies in a circle. A survivor is located in the right part of the picture, as shown by a red dot. The initially estimated position is at the bottom right of the picture, which is obviously not an accurate location. However, as the circling flight progressed, the sequence of estimated positions got closer and closer to the actual position of the survivor. In this example flight, the final estimate is about 80 m away from the actual survivor's location.
Just like the above two example flights, we tested the proposed Smart Search Program 100 times, with the results shown in Figure 14. These results show that, in most cases, the localization error was about 40∼50 m. Note that the target area is very large, about 4 km × 4 km in Case #1 and 1 km × 1 km in Case #2, and we are using a high-speed fixed-wing plane, not a slow quad-copter. Since a fixed-wing plane cannot stop at the same position and has to loiter in a large circle, we think the accuracy of estimation is limited in our study. To compensate for this limitation, the system can be used not only with high-speed fixed-wing drones, but also with low-speed rotary-wing drones such as quad-copters. Once the location of the survivor is tracked down to a small range, a more accurate location can be estimated using a low-speed rotary-wing drone.
Figure 14. Frequency of the minimum error between the distressed person's location and the estimated location.
5. Conclusions
This paper proposed a smart search system consisting of autonomously flying UAVs, a GCS, a smart search algorithm, and communication protocols. This system enables UAVs to perform autonomous flights while locating and approaching distressed people, even without direct control by the ground control server (GCS). When a UAV takes off, the first predicted
position is not accurate and can be very far from the actual survivor’s position. As the drone
flies, it accumulates RSSI and ToA data from survivors. As a result, the UAV gradually
modifies its flight direction towards the survivor, resulting in a more accurate estimate of
the location. For accurate localization, we also presented a genetic-based search algorithm, which detects changes in the signal strength between the distressed person and the drones inside the search system. The proposed smart search system is customized to the disaster site environment to
improve tracking accuracy. Specifically, by combining RSSI and ToA data in consideration
of the flight environment, it is possible to effectively filter out noise factors and obtain
more accurate distance estimation. Finally, we verified the whole proposed system in two real-world test fields, 4 km × 4 km and 1 km × 1 km, respectively, and found that it tracked down the survivor with errors of about 20∼50 m over these areas.
Author Contributions: J.H. and D.O. conceived and designed the experiments; D.O. performed the
experiments; J.H. analyzed the data; and J.H. and D.O. wrote the paper. All authors have read and
agreed to the published version of the manuscript.
Funding: This research was funded by the National Research Foundation of Korea (NRF Award Number: NRF-2018R1A1A3A04077512).
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Al-Naji, A.; Perera, A.G.; Mohammed, S.L.; Chahl, J. Life Signs Detector Using a Drone in Disaster Zones. Remote Sens. 2019, 11, 2441.
2. Zwęgliński, T. The Use of Drones in Disaster Aerial Needs Reconnaissance and Damage Assessment—Three-Dimensional Modeling and Orthophoto Map Study. Sustainability 2020, 12, 6080. [CrossRef]
3. Kyrkou, C.; Theocharides, T. Deep-Learning-Based Aerial Image Classification for Emergency Response Applications Using
Unmanned Aerial Vehicles. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Workshops (CVPRW), Long Beach, CA, USA, 16–20 June 2019; pp. 517–525.
4. Kim, D.W. Path Planning Algorithms of Mobile Robot. J. Korean Inst. Commun. Sci. 2016, 33, 80–85.
5. Xin, J.; Zhao, H.; Liu, D.; Li, M. Application of deep reinforcement learning in mobile robot path planning. In Proceedings of the
2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017; pp. 7112–7116.
6. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.;
Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533. [CrossRef] [PubMed]
7. Zhou, S.; Liu, X.; Xu, Y.; Guo, J. A Deep Q-network (DQN) Based Path Planning Method for Mobile Robots. In Proceedings of the
2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 11–13 August 2018; pp. 366–371.
[CrossRef]
8. Simao, L.B. Deep Q-Learning. Available online: https://github.com/lucasbsimao/DQN-simVSSS (accessed on 20 August 2019).
9. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2011.
10. Duan, Y.; Chen, X.; Houthooft, R.; Schulman, J.; Abbeel, P. Benchmarking deep reinforcement learning for continuous control.
In International Conference on Machine Learning; JMLR: New York, NY, USA, 2016; Volume 48.
11. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Graves, A.; Antonoglou, I.; Wierstra, D.; Riedmiller, M. Playing Atari with deep reinforcement learning. In Proceedings of the NIPS Deep Learning Workshop, Lake Tahoe, CA, USA, 9 December 2013.
12. Han, X.; Wang, J.; Xue, J.; Zhang, Q. Intelligent decision-making for three-dimensional dynamic obstacle avoidance of UAV based
on deep reinforcement learning. In Proceedings of the 11th WCSP, Xi’an, China, 23–25 October 2019.
13. Kjell, K. Deep Reinforcement Learning as Control Method for Autonomous UAV. Master’s Thesis, Polytechnic University of
Catalonia, Barcelona, Spain, 2018.
14. Lillicrap, T.P.; Hunt, J.J.; Pritzel, A.; Heess, N.; Erez, T.; Tassa, Y.; Silver, D.; Wierstra, D. Continuous control with deep reinforcement learning. arXiv 2015, arXiv:1509.02971.
15. Kong, W.; Zhou, D.; Yang, Z.; Zhao, Y.; Zhang, K. UAV Autonomous Aerial Combat Maneuver Strategy Generation with
Observation Error Based on State-Adversarial Deep Deterministic Policy Gradient and Inverse Reinforcement Learning. Electronics
2020, 9, 1121. [CrossRef]
16. Gupta, A.; Khwaja, A.S.; Anpalagan, A.; Guan, L.; Venkatesh, B. Policy-Gradient and Actor-Critic Based State Representation
Learning for Safe Driving of Autonomous Vehicles. Sensors 2020, 20, 5991. [CrossRef] [PubMed]
17. Qi, H.; Hu, Z.; Huang, H.; Wen, X.; Lu, Z. Energy Efficient 3D UAV Control for Persistent Communication Service and Fairness: A
Deep Reinforcement Learning Approach. IEEE Access 2020, 36, 53172–53184. [CrossRef]
18. Hu, Z.; Wan, K.; Gao, X.; Zhai, Y.; Wang, Q. Deep Reinforcement Learning Approach with Multiple Experience Pools for UAV
Autonomous Motion Planning in Complex Unknown Environments. Sensors 2020, 20, 1890.
19. Tamar, A.; Wu, Y.; Thomas, G.; Levine, S.; Abbeel, P. Value iteration networks. arXiv 2016, arXiv:1602.02867.
20. Sykora, Q.; Ren, M.; Urtasun, R. Multi-Agent Routing Value Iteration Network. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020.
21. Niu, S.; Chen, S.; Guo, H.; Targonski, C.; Smith, M.C.; Kovačević, J. Generalized Value Iteration Networks: Life Beyond Lattices. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
22. Oh, D.; Han, J. Fisheye-Based Smart Control System for Autonomous UAV Operation. Sensors 2020, 20, 7321. [CrossRef] [PubMed]
23. Niculescu, D.; Nath, B. Ad hoc positioning system (APS). In Proceedings of the Global Telecommunications Conference, San
Antonio, TX, USA, 25–29 November 2001; Volume 5, pp. 2926–2931.
24. Horiba, M.; Okamoto, E.; Shinohara, T.; Matsumura, K. An Accurate Indoor-Localization Scheme with NLOS Detection and
Elimination Exploiting Stochastic Characteristics. IEICE Trans. Commun. 2015, 98, 1758–1767. [CrossRef]
25. Kim, K.W.; Kwon, J.; Lee, C.G.; Han, J. Accurate Indoor Location Tracking Exploiting Ultrasonic Reflections. IEEE Sensors J. 2016,
16, 9075–9088. [CrossRef]
26. Mathias, A.; Leonardi, M.; Galati, G. An efficient multilateration algorithm. In Proceedings of the 2008 Tyrrhenian Interna-
tional Workshop on Digital Communications-Enhanced Surveillance of Aircraft and Vehicles, Capri, Italy, 3–5 September 2008.
[CrossRef]
27. Leonardi, M.; Mathias, A.; Galati, G. Two efficient localization algorithms for multilateration. Int. J. Microw. Wirel. Technol. 2009,
1, 223–229. [CrossRef]
28. Li, Q.; Li, R.; Ji, K.; Dai, W. Kalman Filter and Its Application. In Proceedings of the 2015 8th International Conference on
Intelligent Networks and Intelligent Systems (ICINIS), Tianjin, China, 1–3 November 2015; pp. 74–77. [CrossRef]
29. Lazik, P.; Rajagopal, N.; Shih, O.; Sinopoli, B.; Rowe, A. ALPS: A Bluetooth and ultra-sound platform for mapping and
localization. In Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems, Seoul, Korea, 1–4 November
2015; pp. 73–84.
30. Xiao, Z.; Wen, H.; Markham, A.; Trigoni, N.; Blunsom, P.; Frolik, J. Non-line-of-sight identification and mitigation using received signal strength. IEEE Trans. Wirel. Commun. 2015, 14, 1689–1702. [CrossRef]
31. Sichitiu, M.L.; Ramadurai, V. Localization of wireless sensor networks with a mobile beacon. In Proceedings of the 2004 IEEE
International Conference on Mobile Ad-hoc and Sensor Systems, Fort Lauderdale, FL, USA, 25–27 October 2004; pp. 174–183.
32. Sun, G.; Guo, W. Comparison of distributed localization algorithms for sensor network with a mobile beacon. In Proceedings of the 2004 IEEE International Conference on Networking, Sensing and Control, Taipei, Taiwan, 21–23 March 2004; Volume 1, pp. 536–540.
33. Ssu, K.; Ou, C.; Jiau, H.C. Localization with mobile anchor points in wireless sensor networks. IEEE Trans. Veh. Technol. 2005, 54, 1187–1197. [CrossRef]
34. Yu, G.; Yu, F.; Feng, L. A three-dimensional localization algorithm using a mobile anchor node under wireless channel.
In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational
Intelligence), Padua, Italy, 1–8 June 2008; pp. 477–483.
35. Han, J.; Han, J. Building a disaster rescue platform with utilizing device-to-device communication between smart devices. Int. J.
Distrib. Sens. Netw. 2018, 14. [CrossRef]
36. Shenoy, N.; Hamilton, J.; Kwasinski, A.; Xiong, K. An improved IEEE 802.11 CSMA/CA medium access mechanism through the
introduction of random short delays. In Proceedings of the 2015 13th International Symposium on Modeling and Optimization in
Mobile, Ad Hoc, and Wireless Networks (WiOpt), Mumbai, India, 25–29 May 2015.
37. Jacquet, P.; Muhlethaler, P.; Clausen, T.; Laouiti, A.; Qayyum, A.; Viennot, L. Optimized link state routing protocol for ad hoc
networks. In Proceedings of the IEEE International Multi Topic Conference, IEEE INMIC 2001, Technology for the 21st Century,
Lahore, Pakistan, 28–30 December 2001; pp. 62–68.
38. Rango, F.D.; Fotino, M.; Marano, S. EE-OLSR: Energy Efficient OLSR routing protocol for Mobile ad-hoc Networks. In Proceedings
of the MILCOM 2008—2008 IEEE Military Communications Conference, San Diego, CA, USA, 16–19 November 2008; pp. 1–7.
39. Benkic, K.; Malajner, M.; Planinsic, P.; Cucej, Z. Using RSSI value for distance estimation in wireless sensor networks based on ZigBee. In Proceedings of the 2008 15th International Conference on Systems, Signals and Image Processing, Bratislava, Slovakia, 25–28 June 2008; pp. 303–306.
40. Sky Observer. Available online: https://diydrones.com/profiles/blogs/introducing-the-sky-observer-skylark-uav-from-zeta (accessed on 20 August 2018).