Review
A Review on Unmanned Aerial Vehicle Remote Sensing:
Platforms, Sensors, Data Processing Methods, and Applications
Zhengxin Zhang 1 and Lixue Zhu 2,3, *
1 College of Information Science and Technology, Zhongkai University of Agriculture and Engineering,
Guangzhou 510225, China; [email protected]
2 Guangdong Laboratory for Lingnan Modern Agriculture, Guangzhou 510642, China
3 School of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering,
Guangzhou 510225, China
* Correspondence: [email protected]
Abstract: In recent years, UAV remote sensing has gradually attracted the attention of scientific
researchers and industry, due to its broad application prospects. It has been widely used in agriculture,
forestry, mining, and other industries. UAVs can be flexibly equipped with various sensors, such
as optical, infrared, and LIDAR sensors, and have become an essential remote sensing observation
platform. Based on UAV remote sensing, researchers can obtain many high-resolution images, with
each pixel representing a centimeter or a millimeter on the ground. The purpose of this paper is to investigate the current applications
of UAV remote sensing, as well as the aircraft platforms, data types, and elements used in each
application category, and the data processing methods; and to study the advantages and limitations of
current UAV remote sensing applications, as well as promising directions that still lack applications.
By reviewing the papers published in this field in recent years, we found that the current
application research of UAV remote sensing can be classified into four categories according
to the application field: (1) Precision agriculture, including crop disease observation, crop yield
estimation, and crop environmental observation; (2) Forestry remote sensing, including forest disease
identification, forest disaster observation, etc.; (3) Remote sensing of power systems; (4) Artificial
facilities and the natural environment. We found that in the papers published in recent years, image
data (RGB, multi-spectral, hyper-spectral) processing mainly used neural network methods; in crop
disease monitoring, multi-spectral data are the most studied type of data; for LIDAR data, current
applications still lack an end-to-end neural network processing method; this review examines UAV
platforms, sensors, and data processing methods, and according to the development process of certain
application fields and current implementation limitations, some predictions are made about possible
future development directions.

Keywords: UAV; remote sensing; land applications; UAV imagery

Citation: Zhang, Z.; Zhu, L. A Review on Unmanned Aerial Vehicle Remote Sensing: Platforms, Sensors, Data Processing Methods, and Applications. Drones 2023, 7, 398. https://doi.org/10.3390/drones7060398

Academic Editor: Pablo Rodríguez-Gonzálvez
1. Introduction
From the 1980s onward, remote sensing research was mainly based on satellite
data. Due to the cost of satellite launches, there were only a few remote sensing satellites
available for a long time, and most satellite images required high costs to obtain limited
data, except for a few satellites such as the Landsat series that were partially free. This also
affected the direction of remote sensing research. During this period, many remote sensing
index methods based on ground target spectral characteristics mainly used free Landsat
satellite data. Other satellite data were less used, due to their high purchase costs.
Besides the high cost and lack of supply, remote sensing satellite data acquisition is also
constrained by several factors that affect the observation ability and direction of research:
1. The observation ability of a remote sensing satellite is determined by its cameras. A
satellite can only carry one or two cameras as sensors, and these cameras cannot be
replaced once the satellite has been launched. Therefore, the observation performance
of a satellite cannot be improved in its lifetime;
2. Remote sensing satellites can only observe targets when flying over the adjacent area
above the target and along the satellite’s orbit, which limits the ability to observe
targets from a specific angle;
3. Optical remote sensing satellites use visible and infrared light reflected by observation
targets as a medium, such as panchromatic, colored, multi-spectral, and hyper-spectral
remote sensing satellites. For these satellites, the target illumination conditions se-
riously affect the observation quality. Effective remote sensing imagery data can only be
obtained when the satellite is flying over the observation target and when the
target has good illumination conditions;
4. For optical remote sensing satellites, meteorological conditions, such as cloud cover,
can also affect the observation result, which limits the selection of remote sensing
images for research;
5. The resolution of remote sensing imagery data is limited by the distance between the
satellite and the target. Since remote sensing satellites are far from ground targets,
their image resolution is relatively low.
These constraints not only limit the scope of remote sensing research but also affect
research directions. For instance, land cover/land use is an important aspect of remote
sensing research. However, the research object of land cover/land use is limited by the
spatial resolution of remote sensing image data. The current panchromatic cameras carried
by remote sensing satellites have a resolution of 31 cm/pixel, which can only identify the
type, location, and outline information of ground targets that are 3 m [28] or larger in size,
such as buildings, roads, trees, ships, cars, etc. Ground objects with smaller sized aerial
projections, such as people, animals, bicycles, etc., cannot be distinguished from the images,
due to the relatively large pixel size. Similarly, change detection, which compares different
information in images taken of the same target in two or more periods, is another example.
Since the data used in many research articles are images taken by the same remote sensing
satellite at different times along its orbit and at the same spatial location, the observation
angles and spatial resolution of these images are similar, making them suitable for pixel-
by-pixel information comparison methods. Hence, change detection has become a key
direction in remote sensing research since the 1980s.
In the past decade, the emergence of multi-rotor unmanned aerial vehicles (UAV) has
gradually changed the above-mentioned limitations in remote sensing research. This type
of unmanned aircraft is pilotless, consumes no fuel, and does not require maintenance of
turboshaft engines. These multi-copters are equipped with cheap but reliable brushless
motors, which only require a small amount of electricity per flight. Users can schedule the
entire flight process of a multi-copter, from takeoff to landing, and edit flight parameters
such as passing points, flight speed, acceleration, and climbing rate. Compared to human-
crewed aircraft such as helicopters and small fixed-wing aircraft, multi-rotor drones are
more stable and reliable, and have several advantages for remote sensing applications.
First, multi-copter drones can carry a variety of sensors flexibly, according to the
requirements of the task. Second, the UAV's observation angle and target observation time
can be selected flexibly.
Figure 3. UAV platforms: (a) Multi-rotor UAV, (b) Fixed-wing UAV, (c) Unmanned Helicopter,
(d) VTOL UAV.
VTOL UAVs can take off and land in rotor mode and fly in fixed-wing mode, providing the
advantages of easy control during takeoff and landing and energy-saving during flight.
Figure 4. Sensors carried by UAVs: (a) RGB Camera, (b) Multi-spectral Camera, (c) Hyper-spectral
Camera, (d) LIDAR.
Imagery sensors capture images of the observation targets and can be further classified
into several types. RGB cameras capture images in the visible spectrum and are commonly
used for vegetation mapping, land use classification, and environmental monitoring. Multi-
spectral/hyper-spectral cameras capture images in multiple spectral bands, enabling the
identification of specific features such as vegetation species, water quality, and mineral
distribution. Thermal imagers capture infrared radiation emitted by the targets, making it
possible to identify temperature differences and detect heat anomalies. These sensors can
provide high-quality imagery data for various remote sensing applications.
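As a minimal illustration of how multi-spectral imagery is typically exploited, the sketch below computes the normalized difference vegetation index, NDVI = (NIR − Red)/(NIR + Red) [9], from two co-registered band arrays; the array names and toy values are illustrative assumptions, not data from any reviewed study.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), computed pixel-wise on co-registered reflectance bands."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)  # guard against division by zero
    return out

# Toy 2 x 2 example: vegetated pixels reflect strongly in NIR, giving NDVI close to 1.
nir_band = np.array([[0.60, 0.55], [0.10, 0.50]])
red_band = np.array([[0.05, 0.08], [0.09, 0.06]])
print(ndvi(nir_band, red_band))
```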
In addition to imagery sensors, multi-rotor UAVs can also carry three-dimensional
information sensors. These sensors are relatively new and have been developed in recent
years with the advancement of simultaneous localization and mapping (SLAM) technology.
LIDAR sensors use laser beams to measure the distance between the UAV and the target,
enabling the creation of high-precision three-dimensional maps. Millimeter wave radar
sensors use electromagnetic waves to measure the distance and velocity of the targets,
making them suitable for applications that require long-range and all-weather sensing.
Multi-camera arrays capture images from different angles, allowing the creation of 3D
models of the observation targets. These sensors can provide rich spatial information,
enabling the analysis of terrain elevation, structure, and volume.
cameras are heavier and require gimbals for installation, necessitating a UAV with sufficient
size and load capacity to accommodate them. For example, Liu et al. [42] utilized the SONY
A7R camera, which provides multiple lens options, including zoom and fixed focus lenses, to
produce a high-precision digital elevation model (DEM) in their research.
Table 1. Parameters of UAV multi-spectral cameras and several satellite multi-spectral sensors.
Thematic Mapper Plus. 7 Landsat 8–9 Operational Land Imager. 8 IKONOS Multispectral Sensor. 9 QuickBird Multispectral Sensor. 10 WorldView-4 Multispectral Sensor. 11 Sentinel-2A
Multispectral Sensor of Sentinel-2. 12 Sentinel-2A has 3 red-edge spectral bands, with a spatial resolution of 20 m/pixel.
Hyper-spectral and multi-spectral cameras are both imaging devices that can capture
data across multiple wavelengths of light. However, there are some key differences be-
tween these two types of camera. Multi-spectral cameras typically capture data across a few
discrete wavelength bands, while hyper-spectral cameras capture data across many more
(often hundreds) of narrow and contiguous wavelength bands. Moreover, multi-spectral
cameras generally have a higher spatial resolution than hyper-spectral cameras. Addi-
tionally, hyper-spectral cameras are typically more expensive than multi-spectral cameras.
Table 2 provides a summary of several hyper-spectral cameras and their features that
were utilized in the papers we reviewed.
The data produced by hyper-spectral cameras are not only useful for investigating
the reflected spectral intensity of green plants but also for analyzing the chemical proper-
ties of ground targets. Hyper-spectral data can provide information about the chemical
composition and water content of soil [48], as well as the chemical composition of ground
minerals [49,50]. This is because hyper-spectral cameras can capture data across many
narrow and contiguous wavelength bands, allowing for detailed analysis of the unique
spectral signatures of different materials. The chemical composition and water content of
soil can be determined based on the unique spectral characteristics of certain chemical com-
pounds or water molecules, while the chemical composition of minerals and artifacts can
be identified based on their distinctive spectral features. As such, hyper-spectral cameras
are highly versatile tools that can be utilized for a broad range of applications in various
fields, including agriculture, geology, and archaeology.
2.2.3. LIDAR
LIDAR, an acronym for “laser imaging, detection, and ranging”, is a remote sensing
technology that has become increasingly popular in recent years, due to its ability to
generate precise and highly accurate 3D images of the Earth’s surface. LIDAR systems
mounted on UAVs are capable of collecting data for a wide range of applications, including
surveying [51,52], environmental monitoring [53], and infrastructure inspection [54–56].
One of the key advantages of using LIDAR in UAV remote sensing is its ability to
provide highly accurate and detailed elevation data. By measuring the time it takes for
laser pulses to bounce off the ground and return to the sensor, LIDAR can create a high-
resolution digital elevation model (DEM) of the terrain. This data can be used to create
detailed 3D maps of the landscape, which are useful for a variety of applications, such as
flood modeling, land use planning, and urban design.
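A minimal sketch of this principle is shown below, assuming a round-trip time measurement per pulse and georeferenced ground returns: range is half the round-trip time multiplied by the speed of light, and returns can then be gridded into a coarse elevation model by keeping the lowest elevation per cell. The grid size and variable names are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_time_s: np.ndarray) -> np.ndarray:
    """Slant range from a time-of-flight measurement: half the round trip at the speed of light."""
    return 0.5 * C * round_trip_time_s

def grid_dem(x, y, z, cell=1.0):
    """Very coarse DEM: minimum elevation of the returns falling in each grid cell."""
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    dem = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for cx, cy, cz in zip(ix, iy, z):
        if np.isnan(dem[cy, cx]) or cz < dem[cy, cx]:
            dem[cy, cx] = cz
    return dem

# A 667 ns round trip corresponds to roughly 100 m of range.
print(pulse_range(np.array([667e-9])))

# Grid a small synthetic point cloud into a 5 m DEM.
rng = np.random.default_rng(0)
pts_x, pts_y = rng.random(500) * 50, rng.random(500) * 50
pts_z = 120.0 + rng.random(500) * 2.0
print(grid_dem(pts_x, pts_y, pts_z, cell=5.0).shape)
```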
Another benefit of using LIDAR in UAV remote sensing is its ability to penetrate
vegetation cover to some extent, allowing for the creation of detailed 3D models of forests
and other vegetation types. Multiple-return LIDAR can record the return times of several
echoes reflected from a single emitted pulse. By exploiting this
feature, information on the canopy structure in a forest can be obtained by measuring the
different return times. This data can be used for ecosystem monitoring, wildlife habitat
assessment, and other environmental applications.
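A short sketch of this idea, under the simplifying assumption that first returns approximate the canopy surface and last (ground) returns approximate the terrain: a canopy height model (CHM) is then the per-cell difference between the two surfaces. The elevation values below are placeholders.

```python
import numpy as np

def canopy_height_model(first_return_surface: np.ndarray, last_return_ground: np.ndarray) -> np.ndarray:
    """CHM = surface model from first returns minus terrain model from last/ground returns."""
    chm = first_return_surface - last_return_ground
    return np.clip(chm, 0.0, None)  # negative values are noise, clamp to zero

dsm = np.array([[212.4, 215.0], [203.1, 204.8]])  # canopy-top elevations (m)
dtm = np.array([[200.0, 200.5], [201.0, 201.2]])  # ground elevations (m)
print(canopy_height_model(dsm, dtm))
```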
LIDAR technology has evolved significantly in recent years with the emergence of
solid-state LIDAR technology, which uses an array of stationary lasers and photodetectors
to scan the target area. Solid-state LIDAR technology offers several advantages over
mechanical scanning LIDAR, which uses a rotating mirror or prism to scan a laser beam
across the target area. Solid-state LIDAR is typically more compact and lightweight, making
it well suited for use on UAVs.
sufficient object textural features. This has led to the development of object-based image
analysis (OBIA) methods for land use/land cover.
OBIA uses a super-pixel segmentation method to segment the image and then applies
a classifier method to classify the spectral features of the segmented blocks and identify
the type of ground targets. In recent years, neural network methods, especially the full
convolution neural network (FCN) [59] method, have become the mainstream methods of
land use and land cover research. Semantic segmentation [23,60,61] and instance segmen-
tation [24,62,63] neural network methods can extract the type, location, and spatial range
information of ground targets end-to-end from remote sensing images.
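A minimal OBIA-style sketch is given below, assuming a small labelled multi-band image: the image is over-segmented into super-pixels with SLIC, each segment is described by its mean spectrum, and a conventional classifier assigns a class per segment. The image, labels, and parameters are synthetic placeholders (scikit-image ≥ 0.19 is assumed for the channel_axis argument).

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
image = rng.random((64, 64, 4))                     # synthetic 4-band image (e.g. R, G, B, NIR)
pixel_labels = (image[..., 3] > 0.5).astype(int)    # fake per-pixel ground truth

# 1. Super-pixel segmentation (channel_axis=-1 treats the last axis as spectral bands).
segments = slic(image, n_segments=200, compactness=10, channel_axis=-1)

# 2. One feature vector per segment: the mean value of each band.
seg_ids = np.unique(segments)
features = np.array([image[segments == s].mean(axis=0) for s in seg_ids])
labels = np.array([np.round(pixel_labels[segments == s].mean()) for s in seg_ids])

# 3. Classify segments and map the predictions back to a label raster.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
predicted = clf.predict(features)
classified = np.zeros_like(segments)
for s, p in zip(seg_ids, predicted):
    classified[segments == s] = p
print("segments:", len(seg_ids), "agreement on training segments:", (predicted == labels).mean())
```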
The emergence of unmanned aerial vehicle (UAV) remote sensing has produced a
new generation of data for land cover/land use research. The image sensors carried by
UAVs can acquire images with decimeter-level, centimeter-level, or even millimeter-level
resolution, allowing information extraction for small ground objects that were previously
difficult to study, such as people on the street, cars, animals, and plants, to become a new
research interest.
Researchers have proposed various methods to address these challenges. For instance,
PEMCNet [64], an encoder–decoder neural network method proposed by Zhao et al.,
achieved good classification results for LIDAR data taken by UAVs, with a high accuracy
for ground objects such as buildings, shrubs, and trees. Harvey et al. [65] proposed a
terrain matching system based on the Xception [66] network model, which uses a pre-
trained neural network to determine the position of the aircraft without relying on inertial
measurement units (IMUs) and global navigation satellite systems (GNSS). Additionally,
Zhuang et al. [67] proposed a method based on neural networks to match remote sens-
ing images of the same location taken from different perspectives and resolutions, called
multiscale block attention (MSBA). By segmenting and combining the target image and
calculating the loss function separately for the local area of the image, the authors realized
a matching method for complex building targets photographed from different angles.
developed a change detection method, which was tested on the Oil Pipes Construction
Dataset (OPCD) and successfully detected construction traces from multiple pictures taken
by UAV at different times in the same area and space. Hastaouglu et al. [71] monitored
three-dimensional displacement in a garbage dump using aerial image data and the SfM-
MVS method [41] to generate a three-dimensional model. Lucieer et al. [72] proposed a
method for reconstructing a three-dimensional model of landslides in mountainous areas
based on unmanned aerial vehicle multi-view images using the SfM-MVS method. The
measured horizontal accuracy was 7 cm, and the vertical accuracy was 6 cm. Li et al. [73]
monitored the deformation of the slope section of large water conservancy projects using
UAV aerial photography and achieved a measurement error of less than 3 mm, which was
significantly higher than traditional aerial photography methods. Han et al. [74] proposed
a method of using UAVs to monitor road construction, which was applied to an extended
road construction site and accurately identified changed ground areas with an accuracy
of 84.5∼85%. Huang et al. [75] developed a semantic detection method for changes in
construction sites, based on a 3D point cloud data model generated from images obtained
through UAV aerial photography.
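Several of the studies above reduce change or displacement detection to differencing two co-registered elevation models produced at different dates. A minimal sketch of that step is shown below; the threshold and elevation arrays are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

def elevation_change(dem_t1: np.ndarray, dem_t2: np.ndarray, threshold: float = 0.05):
    """Per-cell elevation difference between two co-registered DEMs and a mask of 'changed' cells."""
    diff = dem_t2 - dem_t1
    changed = np.abs(diff) > threshold   # e.g. 5 cm vertical change treated as significant
    return diff, changed

dem_before = np.array([[100.00, 100.10], [100.20, 100.30]])
dem_after  = np.array([[100.00, 100.02], [100.35, 100.30]])
diff, mask = elevation_change(dem_before, dem_after)
print(diff)
print(mask)
```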
scanning a forest area with airborne LIDAR to obtain three-dimensional point cloud data.
Local minima were extracted from the point cloud data as candidate points, with some
of these candidates representing the ground between trees in the forest area. The DTM
generated by the method had high consistency with the ALS-based reference, with an RMSE
of 2.1 m.
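A rough sketch of the local-minima idea described above, under simplifying assumptions: returns are binned into grid cells, the lowest return per cell is kept as a candidate ground point, and the candidates are interpolated into a DTM. The synthetic point cloud and cell size are placeholders.

```python
import numpy as np
from scipy.interpolate import griddata

def ground_candidates(points: np.ndarray, cell: float = 2.0) -> np.ndarray:
    """points: (N, 3) array of x, y, z returns; returns the lowest return in every grid cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    best = {}
    for i in range(len(z)):                      # keep the index of the lowest return per cell
        key = (ix[i], iy[i])
        if key not in best or z[i] < z[best[key]]:
            best[key] = i
    return points[np.fromiter(best.values(), dtype=int)]

rng = np.random.default_rng(0)
pts = rng.random((5000, 3)) * np.array([100.0, 100.0, 20.0])   # synthetic forest point cloud
ground_pts = ground_candidates(pts)

# Interpolate the candidate ground points onto a regular grid to obtain the DTM.
gx, gy = np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0))
dtm = griddata(ground_pts[:, :2], ground_pts[:, 2], (gx, gy), method="linear")
print(dtm.shape)
```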
UAV. They combined RGB and grey-level co-occurrence matrix (GLCM) features and used
a random forest classifier to identify the crown area in the image. Taylor-Zavala et al. [99]
investigated the correlation between the biochemical characteristics of plant cells and their
spectral characteristics by comparing the multi-spectral data collected by UAV with the
biochemical characteristics of sampled plant leaves.
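A sketch of the colour-plus-texture feature idea used for crown mapping is given below, assuming scikit-image ≥ 0.19 (where the functions are named graycomatrix/graycoprops): each image tile is described by its mean RGB values and GLCM contrast and homogeneity, and a random forest separates crown from non-crown tiles. Tiles and labels here are synthetic placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def tile_features(tile_rgb: np.ndarray) -> np.ndarray:
    """Mean RGB values plus GLCM contrast and homogeneity of the grey-level tile."""
    grey = tile_rgb.mean(axis=2).astype(np.uint8)
    glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    return np.array([
        *tile_rgb.reshape(-1, 3).mean(axis=0),
        graycoprops(glcm, "contrast")[0, 0],
        graycoprops(glcm, "homogeneity")[0, 0],
    ])

rng = np.random.default_rng(0)
tiles = rng.integers(0, 255, size=(40, 32, 32, 3), dtype=np.uint8)  # synthetic 32x32 RGB tiles
labels = rng.integers(0, 2, size=40)                                # 1 = crown, 0 = background

X = np.array([tile_features(t) for t in tiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```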
Hu et al. [86] proposed a neural network method based on UAV remote sensing
RGB image data to identify diseased pine trees in a forest. The spatial resolution of the
dataset sampled by the authors was 5 cm/pixel, and the UAV’s flight height was 380 m.
The authors tested several classifiers on the sampled data, and the proposed deep neural
network achieved the highest recall rate (91.3%). Wu et al. [108] proposed a method to
identify pine wilt based on UAV remote sensing RGB image data. The author divided the
disease course of pine wilt into early stage and late stage, plus healthy pine, giving three
categories. In the experiment, the recognition accuracy of the neural network method for
late-stage pine wilt (73.9∼77.2%) was much higher than that for early-stage pine wilt
(46.5∼50.8%). Xia et al. [109] studied the shape of pine blight based
on camera RGB data taken by a UAV platform and combined with a ground survey. Their
research showed that from the RGB images taken with the ordinary SLR camera, using
current neural network tools, the accuracy of detecting pine blight could reach 80.6%, while
the recall rate could reach 83.1%. Li et al. [110] proposed a pine wilt detection method that
can be used with edge computing hardware and that can be placed on UAVs. This method
is based on remote sensing RGB data from a UAV and uses the YOLOv4 [111] tiny neural
network model to detect pine trees infected with pine wilt. Ren et al. [84] proposed
a neural network method for detecting pine wilt from UAV remote sensing RGB image
data. The spatial resolution of the RGB image sampled by the authors was 5 cm/pixel. The
authors considered diseased pine trees as positive samples and other red ground targets
(such as red cars, red roofs, and red floors) as negative samples. All samples were annotated
with bounding boxes. The recall rate of this method was 86.6%, and the accuracy rate was 79.8%.
Sun et al. [112] proposed an object-oriented method for detecting pine wilt using UAV
remote sensing RGB data.
Yu et al. [85] proposed a neural network method for identifying pine wilt based on
UAV remote sensing multi-spectral data. The UAV flew at a height of 100 m, and the spatial
resolution of multi-spectral images was 12 cm/pixel. The authors divided the course of
pine wilt disease into three stages: early stage, middle stage, and late stage. In addition,
healthy pines and other broad-leaved trees were added as two kinds of comparison samples,
which made a total of five types of sample. Based on the classified multi-spectral data, the
authors found that the correct recognition rate was nearly 50% in the early stage of pine
wilt and more than 70% in the middle stage of pine wilt. Yu et al. [113] also proposed a
method to detect early pine wilt disease based on UAV hyper-spectral image data. The
authors ran a UAV at a flight height of 120 m above ground, using a Resonon Pika L hyper-
spectral camera for sampling data. The spatial resolution of the image was 44 cm/pixel,
and the LR1601-IRIS LIDAR system was used to collect LIDAR point cloud data. The
authors classified UAV hyper-spectral forest image data using a 3D convolution neural
network, and the comprehensive accuracy rate of the five types of ground target (pine
wilt at early stage, middle stage and late stage, healthy pine, and other broad-leaved trees)
reached 88.11%. Yu et al. [114] also proposed a method to identify pine wilt based on
UAV hyper-spectral and LIDAR data. The UAV flew at a height of 70 m, and the spatial
resolution of hyper-spectral images was 25.6 cm/pixel. Using a random forest classifier,
the authors recognized five types of ground target (pine wilt at early stage, middle stage and
late stage, healthy pine, and other broad-leaved trees). Under the condition of using only
hyper-spectral data, the classification accuracy was 66.86%; under the condition of using
only LIDAR data, the accuracy was 45.56%; with the combination of the two data sources,
the accuracy reached 73.96%. Li et al. [115] proposed a pine wilt recognition method based on
UAV remote sensing hyper-spectral images. Using a Headwall Nano-Hyperspec as the
instrument, 8 data blocks were obtained, each with a size of 4600 × 700 pixels and with a spatial
resolution of 11 cm/pixel. When tested on these different data blocks, the accuracy rate of
this method was from 84% to 99.8%, and the recall rate was from 88.3% to 99.9%.
For other types of trees, Dash et al. [116] simulated the changes of leaf reflectance
spectrum characteristics caused by forest disease outbreaks through small-scale experi-
ments. This result also proved the feasibility of monitoring forest diseases through UAV
multi-spectral remote sensing. Sandino et al. [117] proposed a method for detecting fungal
infection of trees in forests based on UAV remote sensing hyper-spectral image data. The
authors took images of paperbark tea trees from a forest environment under the condition
of partial myrtle rust infection using a Headwall Nano-Hyperspec hyper-spectral camera.
The image marked each pixel according to five types: health, infected, background, soil,
and protruding tree stump. By training the XGBoost [97] classifier, the authors obtained
a comprehensive accuracy of 97.35% on the validation data. Nasi et al. [118] proposed a
method for detecting the damage of spruce from beetle disease in a forest based on the
UAV remote sensing hyper-spectral data. Gobbi et al. [119] experimented with a method of
identifying the degraded state of forests using UAV remote sensing RGB image data. The
proposed method generates a 3D point cloud and canopy height model (CHM) through
the SfM-MVS method based on UAV remote sensing RGB images. The method measures
forest degradation from three dimensions: forest structural integrity, structural complexity,
and forest middle vegetation density. The authors stated that the SfM-MVS method is an
appropriate tool for building and evaluating forest degradation models. Coletta et al. [120]
proposed a method for identifying eucalyptus infection with ceratocystis wilt disease based
on UAV remote sensing RGB image data. Xiao et al. [121] proposed a method for detecting
apple tree fire blight based on UAV remote sensing multi-spectral data.
Year Authors Study Area UAV Type Sensor Species Disease Method Type
2020 Hu et al. [86] Anhui, China Multi-rotor RGB Pine Pine Wilt Neural Network
2021 Wu et al. [108] Qingkou, Fujian, China Multi-rotor RGB Pine Pine Wilt Neural Network
2021 Xia et al. [109] Qingdao, Shandong, China Fixed-wing RGB Pine Pine Wilt Neural Network
2021 Li et al. [110] Tai'an, Shandong, China Multi-rotor RGB Pine Pine Wilt Neural Network
2022 Ren et al. [84] Yichang, Hubei, China Multi-rotor RGB Pine Pine Wilt Neural Network
2022 Sun et al. [112] Dayu, Jiangxi, China Multi-rotor RGB Pine Pine Wilt OBIA
2021 Yu et al. [85] Yiwu, Zhejiang, China Multi-rotor Multi-spectral Pine Pine Wilt Neural Network
2021 Yu et al. [113] Fushun, Liaoning, China Multi-rotor Hyper-spectral Pine Pine Wilt Neural Network
2021 Yu et al. [114] Yiwu, Zhejiang, China Multi-rotor Hyper-spectral & LiDAR Pine Pine Wilt Random Forest
2022 Li et al. [115] Yantai, Shandong, China Multi-rotor Hyper-spectral Pine Pine Wilt Neural Network
2022 Coletta et al. [120] Fixed-wing RGB Eucalyptus Ceratocystis Wilt Ensemble Method
2022 Xiao et al. [121] Biglerville, Pennsylvania, USA Multi-rotor Multi-spectral Apple Apple Fire Blight Vegetation Index
Figure 10. Symptoms of huanglongbing (HLB), also known as citrus greening disease.
Figure 11 shows symptoms of grape disease. Kerkech et al. [128] proposed a neural net-
work method for detecting esca disease plants from grape fields based on UAV RGB image
data. By using several energy difference-related vegetation indexes (ExR [129], ExG [130],
and ExGR [131]) of diseased and normal plants reflecting light at different wavebands
as key features, the authors achieved a recognition accuracy of 95.80%. Kerkech et al. [83]
proposed a neural network method for grape disease detection based on UAV remote
sensing multi-spectral data. The authors constructed a DSM model of grape fields based on
UAV remote sensing images and combined the infrared and visible spectra of image data to
detect diseased plants in grape fields. The recognition accuracy of this method was 93.72%.
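For reference, the colour-difference indexes mentioned above (ExR [129], ExG [130], ExGR [131]) are usually computed from normalized chromatic coordinates; the sketch below uses common definitions (ExG = 2g − r − b, ExR = 1.4r − g, ExGR = ExG − ExR), which may differ slightly from the exact formulations in the cited papers. The pixel values are placeholders.

```python
import numpy as np

def colour_indexes(rgb: np.ndarray):
    """ExG, ExR and ExGR from an H x W x 3 RGB image, using normalized chromatic coordinates."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0
    r, g, b = np.moveaxis(rgb / total, 2, 0)
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg, exr, exg - exr

img = np.array([[[40, 120, 30], [90, 60, 50]]], dtype=np.uint8)  # one vegetated, one soil-like pixel
exg, exr, exgr = colour_indexes(img)
print(exg.round(3), exgr.round(3))
```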
Figure 12 shows symptoms of wheat yellow rust disease. Su et al. [132] proposed a
method for identifying wheat yellow rust based on UAV remote sensing multi-spectral
data. In the experiment, the authors found that when the spatial resolution of the image
reached 1∼1.5 cm/pixel, the UAV remote sensing method could provide enough spectral
information to distinguish wheat yellow rust. The recognition accuracy of the random forest
classifier was 89.2%. The authors found that RVI [10], NDVI [9], and OSAVI [15] were the three
most effective vegetation index methods to distinguish healthy wheat from wheat yellow
rust plants. Zhang et al. [133] proposed a neural network method for detection of wheat
yellow rust plants based on UAV remote sensing hyper-spectral image data. The method’s
recognition accuracy rate was 85%. Zhang et al. [134] proposed a neural network method
for detecting wheat yellow rust based on UAV remote sensing multi-spectral image data.
This method improved the network performance by adding an irregular encoding module
(IEM), irregular decoding module (IDM), and channel weighting module (CCRM) to the
U-Net [23] structure. Compared with U-Net [23], this proposed network achieved a higher
recognition accuracy on multi-spectral image data of wheat yellow rust. Huang et al. [135]
proposed a wheat helminthosporium leaf blotch disease identification method based on
UAV remote sensing RGB image data.
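The three indexes highlighted by Su et al. [132] have simple closed forms; the sketch below computes them from NIR and red reflectance bands, using RVI = NIR/Red [10], NDVI = (NIR − Red)/(NIR + Red) [9], and one common OSAVI formulation, (NIR − Red)/(NIR + Red + 0.16) [15]. The band values are illustrative assumptions.

```python
import numpy as np

def rust_indexes(nir: np.ndarray, red: np.ndarray):
    """RVI, NDVI and OSAVI (one common formulation) from reflectance bands in [0, 1]."""
    nir, red = nir.astype(float), red.astype(float)
    rvi = nir / np.clip(red, 1e-6, None)
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)
    osavi = (nir - red) / (nir + red + 0.16)
    return rvi, ndvi, osavi

nir = np.array([0.55, 0.40])   # healthy vs stressed canopy (illustrative values)
red = np.array([0.05, 0.12])
print(rust_indexes(nir, red))
```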
Kharim et al. [136] proposed a method to predict the invasion degree of bacterial
leaf blight (BLB), bacterial panicle blight (BPB), and stem borer (SB) in rice fields based on
UAV remote sensing RGB image data. The authors named their method the IPCA-RGB
vegetation index (IPCA_RGB) and used it to determine the chlorophyll content in rice plant leaves,
which is positively correlated with the degree of damage to rice plants from BLB, BPB,
and SB. Stewart et al. [137] proposed a quantitative method of maize northern leaf blight
(NLB) based on UAV remote sensing RGB data. The authors conducted experiments
based on the validation data set of the MASK-RCNN [24] neural network model, and
the obtained intersection over union (IoU) was 73%. Shegoma et al. [138] established a
fall armyworm (FAW) infestation dataset based on RGB images collected by UAVs. Based
on this dataset, the authors investigated four neural networks: VGG16 and VGG19 [139],
InceptionV3 [140], and MobileNetV2 [141]. Through transfer learning, these four networks
achieved high accuracy on this dataset.
Ye et al. [142] studied the multi-spectral characteristics of banana wilt disease plants
in two fields in Guangxi and Hainan Province, China. They found that the chlorophyll
content of banana leaves and plant surfaces decreases significantly with the development
of fusarium wilt, and that the red-edge bands in the multi-spectral data are highly
sensitive to this change in chlorophyll. Their method is based on several vegetation indexes
(green chlorophyll index (CIgreen) and red-edge chlorophyll index (CIRE) [126], NDVI [9],
NDRE [125]) of UAV aerial multi-spectral images. The authors obtained an 80% accuracy
with a binary regression classifier. Tetila et al. [143] proposed a recognition method for
soybean leaf disease. The method is based on RGB images taken by a UAV, using the SLIC
segmentation method and a neural network classifier. In the experiment, image data with a
spatial resolution of 0.85 mm/pixel were acquired at a height of 2 m above the field, and the
accuracy rate was 99.04%. Ha et al. [144] proposed a neural network method for detecting
radish wilt disease based on field RGB data photographed by a UAV.
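The chlorophyll-sensitive indexes used in the banana wilt study above also have simple forms; the sketch below uses common definitions (CIgreen = NIR/Green − 1, CIRE = NIR/RedEdge − 1 [126], NDRE = (NIR − RedEdge)/(NIR + RedEdge) [125]), with placeholder band values, and may differ in detail from the cited formulations.

```python
import numpy as np

def chlorophyll_indexes(nir, green, red_edge):
    """Chlorophyll-sensitive indexes commonly used for wilt mapping from multi-spectral bands."""
    nir, green, red_edge = (np.asarray(b, dtype=float) for b in (nir, green, red_edge))
    ci_green = nir / np.clip(green, 1e-6, None) - 1.0
    ci_re = nir / np.clip(red_edge, 1e-6, None) - 1.0
    ndre = (nir - red_edge) / np.clip(nir + red_edge, 1e-6, None)
    return ci_green, ci_re, ndre

# Two example canopies: healthy (strong NIR) vs diseased (weaker NIR, relatively higher red edge).
print(chlorophyll_indexes([0.50, 0.35], [0.08, 0.10], [0.20, 0.22]))
```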
the authors verified that both methods can provide reliable crop height estimates, and the
second method is better. Wei et al. [153] verified the high accuracy of an existing neural
network model for identifying rice panicles from UAV remote sensing RGB images. In
the experiment, the authors used the YOLOv4 [111] neural network model to detect rice
panicles, with an average accuracy of 98.84%. The accuracy of identifying rice panicles was
95.42% for slightly infected rice, 94.35% for moderately infected rice, and 93.36% for severely
infected rice. Cao et al. [154] compared the differences between RGB and multi-spectral data
obtained based on UAV aerial photography on the green phenotype of wheat. In compara-
tive experiments, the multi-spectral index, including red edge and infrared information,
was more effective than the color index with only visible light for wheat phenotypic clas-
sification. Zhao et al. [155] proposed a wheat spike detection method based on UAV
remote sensing RGB images and a neural network. The authors' method was
based on the YOLOv5 [156] network model and improved the weighted post-processing
of the network's predictions. During experimental verification, the proposed
method achieved a significant improvement on the recognition accuracy of the original
YOLOv5 [156] structure network, with an overall accuracy of 94.1%, and the processing
speed was consistent with YOLOv5 [156] under the condition of processing different resolu-
tion images. Wang et al. [157] proposed a method for estimating the chlorophyll content of
winter wheat based on remote sensing multi-spectral image data from UAVs. Based on the
remote sensing image data of winter wheat taken by a multi-spectral camera, the authors
tested 26 different remote sensing indexes combined with four estimation methods, such as
random forest regression. After experimental comparison, an RF-SVR-sigmoid model, a
combination of RF variable selection and an SVR algorithm based on the sigmoid kernel,
achieved a good accuracy when estimating the canopy soil plant analysis development
(SPAD) values of wheat. Nazeri et al. [158] proposed a neural network detection method for outliers of 3D
point cloud signals based on UAV LIDAR data. Based on f-score, this method successfully
removed different levels of outliers in a crop LIDAR 3D point cloud. This method was
proven effective in experiments on sorghum and maize plants. Chen et al. [159] proposed
a method to detect agricultural crop rows in UAV images. The accuracy rate of detecting
corn planting lines with UAV remote sensing RGB images was higher than 95.45%. When
identifying wheat planting lines, the recall rate was more than 96%, and the accuracy rate
was more than 89%. Wang et al. [160] proposed a method that uses the fluorescence ratio
index (FRI) and fluorescence difference index (FDI) to estimate the number of rice flowers
per unit area from hyper-spectral image data. Through this index system, rice yield was estimated
by taking multi-spectral images to detect the early stage of rice flowering. Traore et al. [161]
proposed a neural network method based on multi-spectral remote sensing data to detect
equivalent water thickness (EWT). Ndlovu et al. [162] proposed a random forest regression
method based on multi-spectral remote sensing image data to estimate corn water content.
Padua et al. [163] proposed an OBIA method for the classification of vineyard objects,
which included four categories: soil, shadow, other vegetation, and grape plants. In the
classification stage of the method, the author experimented with three different classifiers
(support vector machine (SVM), random forest (RF), and artificial neural network (ANN)),
and the ANN achieved the highest accuracy.
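As one example from this group of studies, the RF-plus-SVR pipeline described for wheat chlorophyll (SPAD) estimation [157] can be sketched as follows: a random forest ranks candidate vegetation indexes by importance, the top-ranked indexes are kept, and an SVR with a sigmoid kernel is fitted on them. All data, index counts, and hyperparameters below are synthetic placeholders, not the authors' dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.random((120, 26))                                            # 26 candidate vegetation indexes per sample
y = 3.0 * X[:, 0] - 2.0 * X[:, 5] + 0.1 * rng.standard_normal(120)   # synthetic SPAD-like target

# 1. Rank indexes with random forest feature importances and keep the best ones.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]

# 2. Fit a support vector regressor with a sigmoid kernel on the selected indexes.
svr = SVR(kernel="sigmoid", C=10.0, gamma="scale").fit(X[:, top], y)
print("selected indexes:", top, " R^2 on training data:", round(svr.score(X[:, top], y), 3))
```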
With many power line towers being located in remote mountains and highlands, it is labor-intensive and inefficient work to
check and maintain the towers one by one. The operation becomes extremely dangerous in
winter or the rainy season. UAV remote sensing has greatly reduced the difficulty of power
line and tower inspection. Table 5 lists part of the reviewed remote sensing articles related to
power line towers and compares the data research on drone platforms, sensors, observation
objects, and method purpose. Figure 13 shows the organization of the applications for power
lines, towers, and accessories.
Figure 13. UAV remote sensing research of power lines and accessories.
The inspection objects in the research papers we reviewed mainly included transmis-
sion lines, towers, insulators, and other accessories.
Zhang et al. [164] proposed a power line detection method based on UAV remote sensing
RGB image data. In the study, the authors also disclosed two datasets: a power line dataset
of urban scenes, and a power line dataset of mountain scenes, which were UAV remote
sensing power line datasets in urban and outdoor environments. The method proposed by
the authors is based on an improvement of the VGG16 neural network model [139], and the
f-score value was 91.4% on the proposed dataset. Pastucha et al. [165] proposed a power line
detection method based on UAV remote sensing RGB image data. The method was validated
on two open-source datasets and reached accuracies of 98.96% and 92.16%, respectively.
Chen et al. [54] proposed a method of detecting transmission lines based on UAV
remote sensing LIDAR data. The LIDAR equipment carried by the UAV obtained a laser
spot density of 35 points/m2 due to low-altitude flights, which is better than the detection
density that can be obtained by manned aircraft. With this method, the detection accuracy
rate was 96.5%, the recall rate was 94.8%, and f-score was 95.6%. Tan et al. [166] proposed
a method to detect transmission lines based on UAV remote sensing LIDAR data. The
authors established four datasets to verify the performance of the method. The LIDAR
point sampling density ranged from 215 points/m2 to 685 points/m2 , which is significantly
higher than the previous datasets. The accuracy of the method was higher than 97.6% on
four different datasets, and the recall rate was higher than 98.8%.
Zhang et al. [167] proposed a power line automatic measurement method based on
epipolar constraints. First, the authors acquired the spatial position of the power lines.
Second, semi-patch matching based on an epipolar constraint dense matching method was
applied to automatically extract dense point clouds within the power line corridor. Then,
obstacles were automatically detected by calculating the spatial distance between a power
line and the ground, which was represented by a 3D point cloud. This method generated 3D
point cloud data based on UAV RGB data through the SfM-MVS method, with a detection
accuracy of 93.2%, which is equivalent to manual measurement. The above results show that
this method could replace manual measurement. Zhou et al. [168] proposed an automatic
power line inspection system based on binocular vision. This method used binocular
cameras to identify the direction of the power line and then calculate its own course.
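The clearance check at the core of these obstacle detection methods can be written compactly: for each reconstructed power line point, compare its height with the ground (or vegetation) surface beneath it and flag points whose vertical distance falls below a safety threshold. The grid, points, and threshold below are illustrative assumptions.

```python
import numpy as np

def clearance_violations(line_pts: np.ndarray, ground_dem: np.ndarray,
                         origin=(0.0, 0.0), cell=1.0, min_clearance=6.0):
    """line_pts: (N, 3) x, y, z power line points; ground_dem: elevation grid indexed [row, col]."""
    col = ((line_pts[:, 0] - origin[0]) / cell).astype(int)
    row = ((line_pts[:, 1] - origin[1]) / cell).astype(int)
    clearance = line_pts[:, 2] - ground_dem[row, col]
    return clearance, clearance < min_clearance

dem = np.full((100, 100), 50.0)                              # flat terrain at 50 m elevation
line = np.array([[10.0, 10.0, 62.0], [20.0, 10.0, 54.0]])    # second point sags to 4 m clearance
clr, flagged = clearance_violations(line, dem)
print(clr, flagged)
```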
RGB images. The authors collected and sorted the data set “CCIN detection” of power line
tower insulators and trained MTI-YOLO, a neural network model that can be deployed on
edge computing equipment, to detect insulators in UAV images. Prates et al. [174] proposed
an insulator defect identification method for power line towers based on UAV RGB image
data. The authors constructed an insulator image dataset with more than 2500 pictures.
The accuracy of the method proposed by the authors was 92% in identifying insulator types
and 85% in detecting defective insulators. Wang et al. [175] proposed a neural network
method to detect defects of circuit insulators. The detection accuracy was 98.38%, and
12.8 pictures could be processed per second. Wen et al. [176] proposed a neural network
detection method for the insulator defects of power line towers based on UAV RGB images.
The authors used a two-level neural network to identify the defects in the insulators of a
power line tower. The first-level R-CNN [171] network detects the position of the insulator
from the remote sensing image, and the second-level coder–decoder network accurately
identifies the defect position. In the authors’ experiment, the recognition accuracy was
88.7%. Chen et al. [177] proposed a method to generate wire insulator image data based on
UAV RGB images. To address the low spatial resolution and limited amount of training
data in insulator images of power line towers obtained from UAV RGB imagery, the
authors proposed a method to generate high-resolution and realistic insulator detection
images with a conditional GAN [178] to expand the training data. Liu et al. [179] proposed
a method for autonomous inspection of power line towers using UAVs.
Figure 16 shows a shock absorber on power lines. Bao et al. [180] proposed an
improved neural network model based on the YOLOv4 [111] network to identify shock
absorbers of transmission lines on power towers. After a period of experimentation,
Bao et al. [181] put forward an improved neural network to identify transmission line
shock absorbers based on the YOLOv5 [156] network model and using the dataset they had
collected before.
Figure 17. UAV remote sensing research on artificial facilities and natural environments.
remote sensing method based on multiple data sources. Based on various data sources,
such as aerial remote sensing images, satellite remote sensing images, and drone remote
sensing images, the authors conducted a remote sensing survey of pre-modern land use
information in the Irwan Valley of Iraq. The targets included canals, karez, orbits, and
field systems.
UAV remote sensing is also used to detect larger vehicles, such as cars and ships. Am-
mour et al. [193] proposed a neural network method for detecting cars in UAV RGB images.
The authors divided the images into multiple blocks using the mean shift method, extracted
regional image features using a VGG network, and then identified whether a certain area
contained car targets using a linear SVM classifier. Li et al. [194] proposed a method for es-
timating the ground speed of multi-vehicles from UAV video. Zhang et al. [195] proposed
a neural network method CFF-SDN for detecting ship targets based on UAV RGB images.
The network model proposed by the authors has three branches that respectively examine
large, medium, and small targets, which can adapt to ship detection tasks of different sizes
in different sea areas and can detect overlapping targets.
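A stripped-down sketch of the region-features-plus-linear-SVM idea used for car detection [193] is given below, with a pretrained CNN backbone standing in for the VGG feature extractor; the mean shift segmentation step is omitted, the patches and labels are random placeholders, and the torchvision weight enum assumes torchvision ≥ 0.13 (weights=None is used here to keep the sketch runnable offline).

```python
import numpy as np
import torch
import torchvision
from torchvision import transforms
from sklearn.svm import LinearSVC

# Pretrained VGG16 would normally be used as a fixed feature extractor:
# torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.IMAGENET1K_V1)
vgg = torchvision.models.vgg16(weights=None)
vgg.eval()
backbone = torch.nn.Sequential(vgg.features, vgg.avgpool, torch.nn.Flatten())

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def region_features(patches):
    """CNN features for a list of H x W x 3 uint8 image regions (e.g. from a segmentation step)."""
    with torch.no_grad():
        batch = torch.stack([preprocess(p) for p in patches])
        return backbone(batch).numpy()

# Train a linear SVM on labelled regions (car / not car); patches and labels are placeholders.
train_patches = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(8)]
train_labels = [0, 1] * 4
svm = LinearSVC().fit(region_features(train_patches), train_labels)
print(svm.predict(region_features(train_patches[:2])))
```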
5. Discussion
According to the different application types of drone remote sensing, comparing
some of the reviewed papers can reveal the commonalities and differences in tasks, data
collection, data processing, and other stages of these studies.
In recent years, there have been several reviews [29–31] on UAV remote sensing.
Osco et al. [32] focused on the deep learning methods applied in UAV remote sensing.
Aasen et al. [33] focused on the data processing of hyper-spectral UAV remote sensing.
Guimaraes et al. [34] and Torresan et al. [35] focused on the application of UAV remote
sensing in forestry. Maes et al. [36] and Tsouros et al. [37] focused on applications in precision
agriculture. Jafarbiglu et al. [40] reviewed UAV remote sensing of nut crops.
In this review, we mainly reviewed research papers published in the past three years
on all application fields and data processing methods for UAV remote sensing. Our goal
was to grasp the current status of the hardware, software, and data processing methods
used in UAV remote sensing research, as well as the main application directions, in order
to analyze the future development direction of this research field.
6. Conclusions
Through this review of UAV-related papers in recent years, the authors verified that
UAV technology has been widely used in remote sensing applications such as precision
agriculture, forestry, power transmission lines, buildings, artificial objects, and natural
environments, and has shown its unique advantages. Compared with satellite remote
sensing, UAV remote sensing can provide higher resolution image data, which makes
the accuracy of crop type identification, agricultural plant disease monitoring, and crop
information extraction significantly better than when using satellite remote sensing data.
UAV LIDAR data can produce accurate elevation information for power transmission
lines, buildings, and artificial objects, which provides better results when detecting and
identifying the attributes of these targets and demonstrates that UAV remote sensing can
be used in accurate ground object identification and detection.
There are still many advantages and characteristics of UAV technology that have not
been applied in remote sensing. Considering the classification of sensors that drones can
carry, optical image data have been studied the most. With the improvement of spatial
resolution of these data, more detailed information about large targets could be extracted,
such as fungal infections on crop surfaces, or information such as the position and speed
of smaller targets. In terms of 3D stereoscopic data, multi-view stereoscopic imaging has
had more research and applications compared to LIDAR data, due to low equipment
requirements, low costs, and simple initial data processing. However, in remote sensing
tasks for buildings, bridges, iron towers, and other targets, research based on LIDAR data
will continue to be the main focus, due to its outstanding accuracy advantages.
We can find other research opportunities if we look at the current lack of usage and
processing of certain types of data from drones. The flight data of drones, such as GNSS,
flight altitude, speed, and gyroscope data during the flight, were rarely used in the research
we reviewed. The main reason for this is that the current mainstream UAV sensors lack a
data channel to link with the drone’s flight controller, so the flight controller’s data cannot
be synchronously saved to the UAV sensors.
The GNSS information of drones is crucial for accurately measuring the coordinates of
ground targets from an aerial perspective. Due to the fact that drones can achieve accurate
positioning with a horizontal error of less than 1 cm through RTK, the absolute GNSS
coordinates of ground targets can be obtained not only from the relative position of ground
targets and the drone but also from the GNSS coordinates of the drone itself; the remaining
error then mainly depends on the relative position error measured from the drone's image
and video data.
The flight altitude of drones plays a crucial role in determining the spatial resolution
of the image sensors carried and for measuring the elevation information of ground targets.
However, in the papers we reviewed, most drones flew at a fixed altitude when collecting
data. This flight method is suitable for observing targets on flat ground. For targets that
require elevation information, rich multi-view images can be established by remote sensing
the targets at different altitudes using drones, and three-dimensional information of the
targets can be reconstructed through the SfM-MVS method.
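The dependence of spatial resolution on flight altitude follows the standard ground sampling distance relation, GSD = altitude × pixel pitch / focal length; a small worked example is given below, with camera parameters that are illustrative rather than those of any specific study.

```python
def ground_sampling_distance(altitude_m: float, pixel_pitch_um: float, focal_length_mm: float) -> float:
    """GSD in cm/pixel: flight altitude times pixel pitch divided by focal length, with unit conversions."""
    return (altitude_m * pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# Example: a 4.5 um pixel pitch and a 35 mm lens flown at 100 m give about 1.3 cm/pixel.
print(round(ground_sampling_distance(100.0, 4.5, 35.0), 2))
```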
In the currently reviewed drone remote sensing articles, neither image nor video data
were synchronized with gyroscope data. However, in recent years, in the newly published
articles on SLAM, the use of high-precision gyroscopes has made relatively considerable
progress in accuracy in 3D imaging and 3D reconstruction. A drone flight controller’s
gyroscope system has advanced sensors and complete physical damping and sensor noise
elimination methods. Therefore, it could be possible that some indoor SLAM methods
could be migrated to drone platforms, to utilize the gyroscope data.
The above is a prediction of the future development direction of drone remote sensing
from different data sources and processing perspectives. In addition, drones are excellent
data acquisition and observation platforms for performing change detection tasks, due
to their ability to program flight routes and remote sensing positions and observe the
determined flight routes multiple times. Using drones, not only can observation targets be
observed multiple times from the same angle, but also unrestricted temporal resolutions
can be achieved. Therefore, we believe that change detection based on drones should
experience significant development in the next few years.
Author Contributions: Conceptualization, L.Z.; Literature review, Z.Z. and L.Z.; writing, Z.Z. and L.Z.;
editing, Z.Z. All authors have read and agreed to the published version of the manuscript.
Funding: This work was funded by the Laboratory of Lingnan Modern Agriculture Project (Grant
No. NZ2021038).
Data Availability Statement: Data available on request from the authors.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
HLB huanglongbing
IEM irregular encoding module
IDM irregular decoding module
CCRM channel weighting module
AHSB all hyper-spectral bands
TIR thermal infrared
RMSE root-mean-square error
NDVI normalized difference vegetation index
NDWI normalized difference water index
EVI enhanced vegetation index
LAI leaf area index
NDRE normalized difference red edge index
SAVI soil adjusted vegetation index
MSAVI improved soil adjusted vegetation index
CI chlorophyll index
FRI fluorescence ratio index
FDI fluorescence difference index
EWT equivalent water thickness
DSPC digital surface point cloud
DTPC digital terrain point cloud
PLDU power line dataset of urban scene
PLDM power line dataset of mountain scene
SVM support vector machine
RF random forest
ANN artificial neural network
SVR support vector regression
PLAMEC power line automatic measurement method based on epipolar constraints
SPMEC semi patch matching based on epipolar constraints
References
1. Simonett, D.S. Future and Present Needs of Remote Sensing in Geography; Technical Report; 1966. Available online: https:
//ntrs.nasa.gov/citations/19670031579 (accessed on 23 May 2023).
2. Hudson, R.; Hudson, J.W. The military applications of remote sensing by infrared. Proc. IEEE 1975, 63, 104–128. [CrossRef]
3. Badgley, P.C. Current Status of NASA’s Natural Resources Program. Exploring Unknown. 1960; p. 226. Available online:
https://ntrs.nasa.gov/citations/19670031597 (accessed on 23 May 2023).
4. Roads, B.O.P. Remote Sensing Applications to Highway Engineering. Public Roads 1968, 35, 28.
5. Taylor, J.I.; Stingelin, R.W. Infrared imaging for water resources studies. J. Hydraul. Div. 1969, 95, 175–190. [CrossRef]
6. Roy, D.P.; Wulder, M.A.; Loveland, T.R.; Woodcock, C.E.; Allen, R.G.; Anderson, M.C.; Helder, D.; Irons, J.R.; Johnson, D.M.;
Kennedy, R.; et al. Landsat-8: Science and product vision for terrestrial global change research. Remote Sens. Environ. 2014,
145, 154–172. [CrossRef]
7. Chevrel, M.; Courtois, M.; Weill, G. The SPOT satellite remote sensing mission. Photogramm. Eng. Remote Sens. 1981, 47, 1163–1171.
8. Dial, G.; Bowen, H.; Gerlach, F.; Grodecki, J.; Oleszczuk, R. IKONOS satellite, imagery, and products. Remote Sens. Environ. 2003,
88, 23–36. [CrossRef]
9. Rouse, J.W., Jr.; Haas, R.H.; Schell, J.; Deering, D. Monitoring the Vernal Advancement and Retrogradation (Green Wave Effect) of Natural
Vegetation; Technical Report; 1973. Available online: https://ntrs.nasa.gov/citations/19740022555 (accessed on 23 May 2023).
10. Jordan, C.F. Derivation of leaf-area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [CrossRef]
11. Huete, A.; Didan, K.; Miura, T.; Rodriguez, E.P.; Gao, X.; Ferreira, L.G. Overview of the radiometric and biophysical performance
of the MODIS vegetation indices. Remote Sens. Environ. 2002, 83, 195–213. [CrossRef]
12. Gao, B.C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens.
Environ. 1996, 58, 257–266. [CrossRef]
13. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ.
1994, 48, 119–126. [CrossRef]
14. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [CrossRef]
15. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107.
[CrossRef]
16. Blaschke, T.; Lang, S.; Lorup, E.; Strobl, J.; Zeil, P. Object-oriented image processing in an integrated GIS/remote sensing
environment and perspectives for environmental applications. Environ. Inf. Plan. Politics Public 2000, 2, 555–570.
17. Blaschke, T.; Strobl, J. What’s wrong with pixels? Some recent developments interfacing remote sensing and GIS. Z. Geoinforma-
tionssysteme 2001, 12–17. Available online: https://www.researchgate.net/publication/216266284_What’s_wrong_with_pixels_
Some_recent_developments_interfacing_remote_sensing_and_GIS (accessed on 23 May 2023).
18. Schiewe, J. Segmentation of high-resolution remotely sensed data-concepts, applications and problems. Int. Arch. Photogramm.
Remote Sens. Spat. Inf. Sci. 2002, 34, 380–385.
19. Hay, G.J.; Blaschke, T.; Marceau, D.J.; Bouchard, A. A comparison of three image-object methods for the multiscale analysis of
landscape structure. ISPRS J. Photogramm. Remote Sens. 2003, 57, 327–345. [CrossRef]
20. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote
sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258. [CrossRef]
21. Blaschke, T.; Burnett, C.; Pekkarinen, A. New contextual approaches using image segmentation for objectbased classification.
In Remote Sensing Image Analysis: Including the Spatial Domain; De Meer, F., de Jong, S., Eds.; 2004. Available online: https:
//courses.washington.edu/cfr530/GIS200106012.pdf (accessed on 23 May 2023).
22. Zhan, Q.; Molenaar, M.; Tempfli, K.; Shi, W. Quality assessment for geo-spatial objects derived from remotely sensed data. Int. J.
Remote Sens. 2005, 26, 2953–2974. [CrossRef]
23. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of
the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich,
Germany, 5–9 October 2015; Proceedings, Part III 18; Springer: Cham, Switzerland, 2015; pp. 234–241.
24. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE international Conference on Computer Vision,
Venice, Italy, 22–29 October 2017; pp. 2961–2969.
25. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
26. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional
nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [CrossRef]
27. Chu, X.; Zheng, A.; Zhang, X.; Sun, J. Detection in crowded scenes: One proposal, multiple predictions. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12214–12223.
28. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [CrossRef]
29. Yao, H.; Qin, R.; Chen, X. Unmanned aerial vehicle for remote sensing applications—A review. Remote Sens. 2019, 11, 1443.
[CrossRef]
30. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm.
Remote Sens. 2014, 92, 79–97. [CrossRef]
31. Alvarez-Vanhard, E.; Corpetti, T.; Houet, T. UAV & satellite synergies for optical remote sensing applications: A literature review.
Sci. Remote Sens. 2021, 3, 100019.
32. Osco, L.P.; Junior, J.M.; Ramos, A.P.M.; de Castro Jorge, L.A.; Fatholahi, S.N.; de Andrade Silva, J.; Matsubara, E.T.; Pistori, H.;
Gonçalves, W.N.; Li, J. A review on deep learning in UAV remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2021, 102, 102456.
[CrossRef]
33. Aasen, H.; Honkavaara, E.; Lucieer, A.; Zarco-Tejada, P.J. Quantitative remote sensing at ultra-high resolution with UAV
spectroscopy: A review of sensor technology, measurement procedures, and data correction workflows. Remote Sens. 2018,
10, 1091. [CrossRef]
34. Guimarães, N.; Pádua, L.; Marques, P.; Silva, N.; Peres, E.; Sousa, J.J. Forestry remote sensing from unmanned aerial vehicles: A
review focusing on the data, processing and potentialities. Remote Sens. 2020, 12, 1046. [CrossRef]
35. Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L.
Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [CrossRef]
36. Maes, W.H.; Steppe, K. Perspectives for remote sensing with unmanned aerial vehicles in precision agriculture. Trends Plant Sci.
2019, 24, 152–164. [CrossRef]
37. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A review on UAV-based applications for precision agriculture. Information 2019, 10, 349.
[CrossRef]
38. Olson, D.; Anderson, J. Review on unmanned aerial vehicles, remote sensors, imagery processing, and their applications in
agriculture. Agron. J. 2021, 113, 971–992. [CrossRef]
39. Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of remote sensing in precision agriculture: A review. Remote Sens. 2020, 12, 3136.
[CrossRef]
40. Jafarbiglu, H.; Pourreza, A. A comprehensive review of remote sensing platforms, sensors, and applications in nut crops. Comput.
Electron. Agric. 2022, 197, 106844. [CrossRef]
41. Carrivick, J.L.; Smith, M.W.; Quincey, D.J. Structure from Motion in the Geosciences; John Wiley & Sons: Hoboken, NJ, USA, 2016.
42. Liu, Y.; Zheng, X.; Ai, G.; Zhang, Y.; Zuo, Y. Generating a high-precision true digital orthophoto map based on UAV images.
ISPRS Int. J. Geo-Inf. 2018, 7, 333. [CrossRef]
43. Watson, D.J. Comparative physiological studies on the growth of field crops: I. Variation in net assimilation rate and leaf area
between species and varieties, and within and between years. Ann. Bot. 1947, 11, 41–76. [CrossRef]
44. Seager, S.; Turner, E.L.; Schafer, J.; Ford, E.B. Vegetation’s red edge: A possible spectroscopic biosignature of extraterrestrial plants.
Astrobiology 2005, 5, 372–390. [CrossRef]
45. Delegido, J.; Verrelst, J.; Meza, C.; Rivera, J.; Alonso, L.; Moreno, J. A red-edge spectral index for remote sensing estimation of
green LAI over agroecosystems. Eur. J. Agron. 2013, 46, 42–52. [CrossRef]
46. Lin, S.; Li, J.; Liu, Q.; Li, L.; Zhao, J.; Yu, W. Evaluating the effectiveness of using vegetation indices based on red-edge reflectance
from Sentinel-2 to estimate gross primary productivity. Remote Sens. 2019, 11, 1303. [CrossRef]
47. Imran, H.A.; Gianelle, D.; Rocchini, D.; Dalponte, M.; Martín, M.P.; Sakowska, K.; Wohlfahrt, G.; Vescovo, L. VIS-NIR, red-edge
and NIR-shoulder based normalized vegetation indices response to co-varying leaf and canopy structural traits in heterogeneous
grasslands. Remote Sens. 2020, 12, 2254. [CrossRef]
48. Datta, D.; Paul, M.; Murshed, M.; Teng, S.W.; Schmidtke, L. Soil Moisture, Organic Carbon, and Nitrogen Content Prediction with
Hyperspectral Data Using Regression Models. Sensors 2022, 22, 7998. [CrossRef]
49. Jackisch, R.; Madriz, Y.; Zimmermann, R.; Pirttijärvi, M.; Saartenoja, A.; Heincke, B.H.; Salmirinne, H.; Kujasalo, J.P.; Andreani, L.;
Gloaguen, R. Drone-borne hyperspectral and magnetic data integration: Otanmäki Fe-Ti-V deposit in Finland. Remote Sens. 2019,
11, 2084. [CrossRef]
50. Thiele, S.T.; Bnoulkacem, Z.; Lorenz, S.; Bordenave, A.; Menegoni, N.; Madriz, Y.; Dujoncquoy, E.; Gloaguen, R.; Kenter, J.
Mineralogical mapping with accurately corrected shortwave infrared hyperspectral data acquired obliquely from UAVs. Remote
Sens. 2021, 14, 5. [CrossRef]
51. Krause, S.; Sanders, T.G.; Mund, J.P.; Greve, K. UAV-based photogrammetric tree height measurement for intensive forest
monitoring. Remote Sens. 2019, 11, 758. [CrossRef]
52. Yu, J.W.; Yoon, Y.W.; Baek, W.K.; Jung, H.S. Forest Vertical Structure Mapping Using Two-Seasonal Optic Images and LiDAR
DSM Acquired from UAV Platform through Random Forest, XGBoost, and Support Vector Machine Approaches. Remote Sens.
2021, 13, 4282. [CrossRef]
53. Zhang, H.; Bauters, M.; Boeckx, P.; Van Oost, K. Mapping canopy heights in dense tropical forests using low-cost UAV-derived
photogrammetric point clouds and machine learning approaches. Remote Sens. 2021, 13, 3777. [CrossRef]
54. Chen, C.; Yang, B.; Song, S.; Peng, X.; Huang, R. Automatic clearance anomaly detection for transmission line corridors utilizing
UAV-Borne LIDAR data. Remote Sens. 2018, 10, 613. [CrossRef]
55. Zhang, R.; Yang, B.; Xiao, W.; Liang, F.; Liu, Y.; Wang, Z. Automatic extraction of high-voltage power transmission objects from
UAV lidar point clouds. Remote Sens. 2019, 11, 2600. [CrossRef]
56. Alshawabkeh, Y.; Baik, A.; Fallatah, A. As-Textured As-Built BIM Using Sensor Fusion, Zee Ain Historical Village as a Case Study.
Remote Sens. 2021, 13, 5135. [CrossRef]
57. Short, N.M. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing; National Aeronautics and Space Administration,
Scientific and Technical Information Branch: Washington, DC, USA, 1982; Volume 1078.
58. Schowengerdt, R.A. Soft classification and spatial-spectral mixing. In Proceedings of the International Workshop on Soft
Computing in Remote Sensing Data Analysis, Milan, Italy, 4–5 December 1995; pp. 4–5.
59. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
60. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation.
IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [CrossRef]
61. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image
segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018;
pp. 801–818.
62. Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; Li, L. Solo: Segmenting objects by locations. In Proceedings of the Computer Vision–ECCV
2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XVIII 16; Springer: Cham, Switzerland,
2020; pp. 649–665.
63. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International
Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9157–9166.
64. Zhao, G.; Zhang, W.; Peng, Y.; Wu, H.; Wang, Z.; Cheng, L. PEMCNet: An Efficient Multi-Scale Point Feature Fusion Network for
3D LiDAR Point Cloud Classification. Remote Sens. 2021, 13, 4312. [CrossRef]
65. Harvey, W.; Rainwater, C.; Cothren, J. Direct Aerial Visual Geolocalization Using Deep Neural Networks. Remote Sens. 2021,
13, 4017. [CrossRef]
66. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258.
67. Zhuang, J.; Dai, M.; Chen, X.; Zheng, E. A Faster and More Effective Cross-View Matching Method of UAV and Satellite Images
for UAV Geolocalization. Remote Sens. 2021, 13, 3979. [CrossRef]
68. Chen, B.; Chen, Z.; Deng, L.; Duan, Y.; Zhou, J. Building change detection with RGB-D map generated from UAV images.
Neurocomputing 2016, 208, 350–364. [CrossRef]
69. Cook, K.L. An evaluation of the effectiveness of low-cost UAVs and structure from motion for geomorphic change detection.
Geomorphology 2017, 278, 195–208. [CrossRef]
70. Mesquita, D.B.; dos Santos, R.F.; Macharet, D.G.; Campos, M.F.; Nascimento, E.R. Fully convolutional siamese autoencoder for
change detection in UAV aerial images. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1455–1459. [CrossRef]
71. Hastaoğlu, K.Ö.; Gül, Y.; Poyraz, F.; Kara, B.C. Monitoring 3D areal displacements by a new methodology and software using
UAV photogrammetry. Int. J. Appl. Earth Obs. Geoinf. 2019, 83, 101916. [CrossRef]
72. Lucieer, A.; Jong, S.M.d.; Turner, D. Mapping landslide displacements using Structure from Motion (SfM) and image correlation
of multi-temporal UAV photography. Prog. Phys. Geogr. 2014, 38, 97–116. [CrossRef]
73. Li, M.; Cheng, D.; Yang, X.; Luo, G.; Liu, N.; Meng, C.; Peng, Q. High precision slope deformation monitoring by UAV with
industrial photogrammetry. IOP Conf. Ser. Earth Environ. Sci. 2021, 636, 012015. [CrossRef]
74. Han, D.; Lee, S.B.; Song, M.; Cho, J.S. Change detection in unmanned aerial vehicle images for progress monitoring of road
construction. Buildings 2021, 11, 150. [CrossRef]
75. Huang, R.; Xu, Y.; Hoegner, L.; Stilla, U. Semantics-aided 3D change detection on construction sites using UAV-based photogram-
metric point clouds. Autom. Constr. 2022, 134, 104057. [CrossRef]
76. Sanz-Ablanedo, E.; Chandler, J.H.; Rodríguez-Pérez, J.R.; Ordóñez, C. Accuracy of unmanned aerial vehicle (UAV) and SfM
photogrammetry survey as a function of the number and location of ground control points used. Remote Sens. 2018, 10, 1606.
[CrossRef]
77. Rebelo, C.; Nascimento, J. Measurement of Soil Tillage Using UAV High-Resolution 3D Data. Remote Sens. 2021, 13, 4336.
[CrossRef]
78. Almeida, A.; Gonçalves, F.; Silva, G.; Mendonça, A.; Gonzaga, M.; Silva, J.; Souza, R.; Leite, I.; Neves, K.; Boeno, M.; et al.
Individual Tree Detection and Qualitative Inventory of a Eucalyptus sp. Stand Using UAV Photogrammetry Data. Remote Sens.
2021, 13, 3655. [CrossRef]
79. Hartwig, M.E.; Ribeiro, L.P. Gully evolution assessment from structure-from-motion, southeastern Brazil. Environ. Earth Sci.
2021, 80, 548. [CrossRef]
80. Abdulridha, J.; Ampatzidis, Y.; Qureshi, J.; Roberts, P. Laboratory and UAV-based identification and classification of tomato
yellow leaf curl, bacterial spot, and target spot diseases in tomato utilizing hyperspectral imaging and machine learning. Remote
Sens. 2020, 12, 2732. [CrossRef]
81. Ampatzidis, Y.; Partel, V. UAV-based high throughput phenotyping in citrus utilizing multispectral imaging and artificial
intelligence. Remote Sens. 2019, 11, 410. [CrossRef]
82. Moriya, É.A.S.; Imai, N.N.; Tommaselli, A.M.G.; Berveglieri, A.; Santos, G.H.; Soares, M.A.; Marino, M.; Reis, T.T. Detection
and mapping of trees infected with citrus gummosis using UAV hyperspectral data. Comput. Electron. Agric. 2021, 188, 106298.
[CrossRef]
83. Kerkech, M.; Hafiane, A.; Canals, R. VddNet: Vine disease detection network based on multispectral images and depth map.
Remote Sens. 2020, 12, 3305. [CrossRef]
84. Ren, D.; Peng, Y.; Sun, H.; Yu, M.; Yu, J.; Liu, Z. A Global Multi-Scale Channel Adaptation Network for Pine Wilt Disease Tree
Detection on UAV Imagery by Circle Sampling. Drones 2022, 6, 353. [CrossRef]
85. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. Early detection of pine wilt disease using deep learning algorithms and
UAV-based multispectral imagery. For. Ecol. Manag. 2021, 497, 119493. [CrossRef]
86. Hu, G.; Yin, C.; Wan, M.; Zhang, Y.; Fang, Y. Recognition of diseased Pinus trees in UAV images using deep learning and
AdaBoost classifier. Biosyst. Eng. 2020, 194, 138–151. [CrossRef]
87. Micieli, M.; Botter, G.; Mendicino, G.; Senatore, A. UAV Thermal Images for Water Presence Detection in a Mediterranean
Headwater Catchment. Remote Sens. 2021, 14, 108. [CrossRef]
88. Lubczonek, J.; Kazimierski, W.; Zaniewicz, G.; Lacka, M. Methodology for combining data acquired by unmanned surface and
aerial vehicles to create digital bathymetric models in shallow and ultra-shallow waters. Remote Sens. 2021, 14, 105. [CrossRef]
89. Christie, A.I.; Colefax, A.P.; Cagnazzi, D. Feasibility of Using Small UAVs to Derive Morphometric Measurements of Australian
Snubfin (Orcaella heinsohni) and Humpback (Sousa sahulensis) Dolphins. Remote Sens. 2021, 14, 21. [CrossRef]
90. Lu, Z.; Gong, H.; Jin, Q.; Hu, Q.; Wang, S. A transmission tower tilt state assessment approach based on dense point cloud from
UAV-based LiDAR. Remote Sens. 2022, 14, 408. [CrossRef]
91. Ganz, S.; Käber, Y.; Adler, P. Measuring tree height with remote sensing—A comparison of photogrammetric and LiDAR data
with different field measurements. Forests 2019, 10, 694. [CrossRef]
92. Fakhri, S.A.; Latifi, H. A Consumer Grade UAV-Based Framework to Estimate Structural Attributes of Coppice and High Oak
Forest Stands in Semi-Arid Regions. Remote Sens. 2021, 13, 4367. [CrossRef]
93. Meyer, F.; Beucher, S. Morphological segmentation. J. Vis. Commun. Image Represent. 1990, 1, 21–46. [CrossRef]
94. Pu, Y.; Xu, D.; Wang, H.; An, D.; Xu, X. Extracting Canopy Closure by the CHM-Based and SHP-Based Methods with a
Hemispherical FOV from UAV-LiDAR Data in a Poplar Plantation. Remote Sens. 2021, 13, 3837. [CrossRef]
95. Mo, J.; Lan, Y.; Yang, D.; Wen, F.; Qiu, H.; Chen, X.; Deng, X. Deep Learning-Based Instance Segmentation Method of Litchi
Canopy from UAV-Acquired Images. Remote Sens. 2021, 13, 3919. [CrossRef]
96. Reder, S.; Mund, J.P.; Albert, N.; Waßermann, L.; Miranda, L. Detection of Windthrown Tree Stems on UAV-Orthomosaics Using
U-Net Convolutional Networks. Remote Sens. 2021, 14, 75. [CrossRef]
97. Chen, T.; Guestrin, C. Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM Sigkdd International Conference
on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
98. Guo, X.; Liu, Q.; Sharma, R.P.; Chen, Q.; Ye, Q.; Tang, S.; Fu, L. Tree Recognition on the Plantation Using UAV Images with
Ultrahigh Spatial Resolution in a Complex Environment. Remote Sens. 2021, 13, 4122. [CrossRef]
99. Taylor-Zavala, R.; Ramírez-Rodríguez, O.; de Armas-Ricard, M.; Sanhueza, H.; Higueras-Fredes, F.; Mattar, C. Quantifying
Biochemical Traits over the Patagonian Sub-Antarctic Forests and Their Relation to Multispectral Vegetation Indices. Remote Sens.
2021, 13, 4232. [CrossRef]
100. Li, X.; Gao, H.; Zhang, M.; Zhang, S.; Gao, Z.; Liu, J.; Sun, S.; Hu, T.; Sun, L. Prediction of Forest Fire Spread Rate Using UAV
Images and an LSTM Model Considering the Interaction between Fire and Wind. Remote Sens. 2021, 13, 4325. [CrossRef]
101. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef] [PubMed]
102. Hu, J.; Niu, H.; Carrasco, J.; Lennox, B.; Arvin, F. Fault-tolerant cooperative navigation of networked UAV swarms for forest fire
monitoring. Aerosp. Sci. Technol. 2022, 123, 107494. [CrossRef]
103. Namburu, A.; Selvaraj, P.; Mohan, S.; Ragavanantham, S.; Eldin, E.T. Forest Fire Identification in UAV Imagery Using X-MobileNet.
Electronics 2023, 12, 733. [CrossRef]
104. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient
convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
105. Beltrán-Marcos, D.; Suárez-Seoane, S.; Fernández-Guisuraga, J.M.; Fernández-García, V.; Marcos, E.; Calvo, L. Relevance of UAV
and sentinel-2 data fusion for estimating topsoil organic carbon after forest fire. Geoderma 2023, 430, 116290. [CrossRef]
106. Rutherford, T.; Webster, J. Distribution of pine wilt disease with respect to temperature in North America, Japan, and Europe.
Can. J. For. Res. 1987, 17, 1050–1059. [CrossRef]
107. Hunt, D. Pine wilt disease: A worldwide threat to forest ecosystems. Nematology 2009, 11, 315–316. [CrossRef]
108. Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-
throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021, 486, 118986.
[CrossRef]
109. Xia, L.; Zhang, R.; Chen, L.; Li, L.; Yi, T.; Wen, Y.; Ding, C.; Xie, C. Evaluation of Deep Learning Segmentation Models for
Detection of Pine Wilt Disease in Unmanned Aerial Vehicle Images. Remote Sens. 2021, 13, 3594. [CrossRef]
110. Li, F.; Liu, Z.; Shen, W.; Wang, Y.; Wang, Y.; Ge, C.; Sun, F.; Lan, P. A remote sensing and airborne edge-computing based detection
system for pine wilt disease. IEEE Access 2021, 9, 66346–66360. [CrossRef]
111. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. Yolov4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
112. Sun, Z.; Wang, Y.; Pan, L.; Xie, Y.; Zhang, B.; Liang, R.; Sun, Y. Pine wilt disease detection in high-resolution UAV images using
object-oriented classification. J. For. Res. 2022, 33, 1377–1389. [CrossRef]
113. Yu, R.; Luo, Y.; Li, H.; Yang, L.; Huang, H.; Yu, L.; Ren, L. Three-Dimensional Convolutional Neural Network Model for Early
Detection of Pine Wilt Disease Using UAV-Based Hyperspectral Images. Remote Sens. 2021, 13, 4065. [CrossRef]
114. Yu, R.; Luo, Y.; Zhou, Q.; Zhang, X.; Wu, D.; Ren, L. A machine learning algorithm to detect pine wilt disease using UAV-based
hyperspectral imagery and LiDAR data at the tree level. Int. J. Appl. Earth Obs. Geoinf. 2021, 101, 102363. [CrossRef]
115. Li, J.; Wang, X.; Zhao, H.; Hu, X.; Zhong, Y. Detecting pine wilt disease at the pixel level from high spatial and spectral resolution
UAV-borne imagery in complex forest landscapes using deep one-class classification. Int. J. Appl. Earth Obs. Geoinf. 2022,
112, 102947. [CrossRef]
116. Dash, J.P.; Watt, M.S.; Pearse, G.D.; Heaphy, M.; Dungey, H.S. Assessing very high resolution UAV imagery for monitoring forest
health during a simulated disease outbreak. ISPRS J. Photogramm. Remote Sens. 2017, 131, 1–14. [CrossRef]
117. Sandino, J.; Pegg, G.; Gonzalez, F.; Smith, G. Aerial mapping of forests affected by pathogens using UAVs, hyperspectral sensors,
and artificial intelligence. Sensors 2018, 18, 944. [CrossRef]
118. Näsi, R.; Honkavaara, E.; Blomqvist, M.; Lyytikäinen-Saarenmaa, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Holopainen, M. Remote
sensing of bark beetle damage in urban forests at individual tree level using a novel hyperspectral camera from UAV and aircraft.
Urban For. Urban Green. 2018, 30, 72–83. [CrossRef]
119. Gobbi, B.; Van Rompaey, A.; Gasparri, N.I.; Vanacker, V. Forest degradation in the Dry Chaco: A detection based on 3D canopy
reconstruction from UAV-SfM techniques. For. Ecol. Manag. 2022, 526, 120554. [CrossRef]
120. Coletta, L.F.; de Almeida, D.C.; Souza, J.R.; Manzione, R.L. Novelty detection in UAV images to identify emerging threats in
eucalyptus crops. Comput. Electron. Agric. 2022, 196, 106901. [CrossRef]
121. Xiao, D.; Pan, Y.; Feng, J.; Yin, J.; Liu, Y.; He, L. Remote sensing detection algorithm for apple fire blight based on UAV
multispectral image. Comput. Electron. Agric. 2022, 199, 107137. [CrossRef]
122. Singh, P.; Pandey, P.C.; Petropoulos, G.P.; Pavlides, A.; Srivastava, P.K.; Koutsias, N.; Deng, K.A.K.; Bao, Y. Hyperspectral
remote sensing in precision agriculture: Present status, challenges, and future trends. In Hyperspectral Remote Sensing; Elsevier:
Amsterdam, The Netherlands, 2020; pp. 121–146.
123. Fuglie, K. The growing role of the private sector in agricultural research and development world-wide. Glob. Food Secur. 2016,
10, 29–38. [CrossRef]
124. Chang, A.; Yeom, J.; Jung, J.; Landivar, J. Comparison of canopy shape and vegetation indices of citrus trees derived from UAV
multispectral images for characterization of citrus greening disease. Remote Sens. 2020, 12, 4122. [CrossRef]
125. Barnes, E.; Clarke, T.; Richards, S.; Colaizzi, P.; Haberland, J.; Kostrzewski, M.; Waller, P.; Choi, C.; Riley, E.; Thompson, T.;
et al. Coincident detection of crop water stress, nitrogen status and canopy density using ground based multispectral data. In
Proceedings of the Fifth International Conference on Precision Agriculture, Bloomington, MN, USA, 16–19 July 2000; Volume
1619, p. 6.
126. Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30. Available online: https://www.researchgate.net/publication/43256762_Coincident_detection_of_crop_water_stress_nitrogen_status_and_canopy_density_using_ground_based_multispectral_data (accessed on 23 May 2023). [CrossRef]
127. Deng, X.; Zhu, Z.; Yang, J.; Zheng, Z.; Huang, Z.; Yin, X.; Wei, S.; Lan, Y. Detection of citrus huanglongbing based on multi-input
neural network model of UAV hyperspectral remote sensing. Remote Sens. 2020, 12, 2678. [CrossRef]
128. Kerkech, M.; Hafiane, A.; Canals, R. Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases
detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243. [CrossRef]
129. Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. In Proceedings of
the Precision Agriculture and Biological Quality, Boston, MA, USA, 3–4 November 1999; Volume 3543, pp. 327–335.
130. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue,
and lighting conditions. Trans. ASAE 1995, 38, 259–269. [CrossRef]
131. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric.
2008, 63, 282–293. [CrossRef]
132. Su, J.; Liu, C.; Coombes, M.; Hu, X.; Wang, C.; Xu, X.; Li, Q.; Guo, L.; Chen, W.H. Wheat yellow rust monitoring by learning from
multispectral UAV aerial imagery. Comput. Electron. Agric. 2018, 155, 157–166. [CrossRef]
133. Zhang, X.; Han, L.; Dong, Y.; Shi, Y.; Huang, W.; Han, L.; González-Moreno, P.; Ma, H.; Ye, H.; Sobeih, T. A deep learning-based
approach for automated yellow rust disease detection from high-resolution hyperspectral UAV images. Remote Sens. 2019,
11, 1554. [CrossRef]
134. Zhang, T.; Xu, Z.; Su, J.; Yang, Z.; Liu, C.; Chen, W.H.; Li, J. Ir-UNet: Irregular Segmentation U-Shape Network for Wheat Yellow
Rust Detection by UAV Multispectral Imagery. Remote Sens. 2021, 13, 3892. [CrossRef]
135. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Zhang, L.; Wen, S.; Zhang, H.; Zhang, Y.; Deng, Y. Detection of helminthosporium leaf
blotch disease based on UAV imagery. Appl. Sci. 2019, 9, 558. [CrossRef]
136. Kharim, M.N.A.; Wayayok, A.; Abdullah, A.F.; Shariff, A.R.M.; Husin, E.M.; Mahadi, M.R. Predictive zoning of pest and disease
infestations in rice field based on UAV aerial imagery. Egypt. J. Remote Sens. Space Sci. 2022, 25, 831–840.
137. Stewart, E.L.; Wiesner-Hanks, T.; Kaczmar, N.; DeChant, C.; Wu, H.; Lipson, H.; Nelson, R.J.; Gore, M.A. Quantitative phenotyping
of Northern Leaf Blight in UAV images using deep learning. Remote Sens. 2019, 11, 2209. [CrossRef]
138. Ishengoma, F.S.; Rai, I.A.; Said, R.N. Identification of maize leaves infected by fall armyworms using UAV-based imagery and
convolutional neural networks. Comput. Electron. Agric. 2021, 184, 106124. [CrossRef]
139. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
140. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
141. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018;
pp. 4510–4520.
142. Ye, H.; Huang, W.; Huang, S.; Cui, B.; Dong, Y.; Guo, A.; Ren, Y.; Jin, Y. Recognition of banana fusarium wilt based on UAV
remote sensing. Remote Sens. 2020, 12, 938. [CrossRef]
143. Tetila, E.C.; Machado, B.B.; Menezes, G.K.; Oliveira, A.d.S.; Alvarez, M.; Amorim, W.P.; Belete, N.A.D.S.; Da Silva, G.G.; Pistori, H.
Automatic recognition of soybean leaf diseases using UAV images and deep convolutional neural networks. IEEE Geosci. Remote
Sens. Lett. 2019, 17, 903–907. [CrossRef]
144. Ha, J.G.; Moon, H.; Kwak, J.T.; Hassan, S.I.; Dang, M.; Lee, O.N.; Park, H.Y. Deep convolutional neural network for classifying
Fusarium wilt of radish from unmanned aerial vehicles. J. Appl. Remote Sens. 2017, 11, 042621. [CrossRef]
145. Lu, F.; Sun, Y.; Hou, F. Using UAV visible images to estimate the soil moisture of steppe. Water 2020, 12, 2334. [CrossRef]
146. Ge, X.; Ding, J.; Jin, X.; Wang, J.; Chen, X.; Li, X.; Liu, J.; Xie, B. Estimating agricultural soil moisture content through UAV-based
hyperspectral images in the arid region. Remote Sens. 2021, 13, 1562. [CrossRef]
147. Bertalan, L.; Holb, I.; Pataki, A.; Szabó, G.; Szalóki, A.K.; Szabó, S. UAV-based multispectral and thermal cameras to predict soil
water content–A machine learning approach. Comput. Electron. Agric. 2022, 200, 107262. [CrossRef]
148. Awad, M.; Khanna, R. Support vector regression. In Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers; Apress: Berkeley, CA, USA, 2015; pp. 67–80. Available online: https://www.researchgate.net/publication/277299933_Efficient_Learning_Machines_Theories_Concepts_and_Applications_for_Engineers_and_System_Designers (accessed on 23 May 2023).
149. Zhang, Y.; Han, W.; Zhang, H.; Niu, X.; Shao, G. Evaluating soil moisture content under maize coverage using UAV multimodal
data by machine learning algorithms. J. Hydrol. 2023, 129086. [CrossRef]
150. Zhang, X.; Yuan, Y.; Zhu, Z.; Ma, Q.; Yu, H.; Li, M.; Ma, J.; Yi, S.; He, X.; Sun, Y. Predicting the Distribution of Oxytropis
ochrocephala Bunge in the Source Region of the Yellow River (China) Based on UAV Sampling Data and Species Distribution
Model. Remote Sens. 2021, 13, 5129. [CrossRef]
151. Lan, Y.; Huang, K.; Yang, C.; Lei, L.; Ye, J.; Zhang, J.; Zeng, W.; Zhang, Y.; Deng, J. Real-Time Identification of Rice Weeds by UAV
Low-Altitude Remote Sensing Based on Improved Semantic Segmentation Model. Remote Sens. 2021, 13, 4370. [CrossRef]
152. Lu, W.; Okayama, T.; Komatsuzaki, M. Rice Height Monitoring between Different Estimation Models Using UAV Photogrammetry
and Multispectral Technology. Remote Sens. 2021, 14, 78. [CrossRef]
153. Wei, L.; Luo, Y.; Xu, L.; Zhang, Q.; Cai, Q.; Shen, M. Deep Convolutional Neural Network for Rice Density Prescription Map at
Ripening Stage Using Unmanned Aerial Vehicle-Based Remotely Sensed Images. Remote Sens. 2021, 14, 46. [CrossRef]
154. Cao, X.; Liu, Y.; Yu, R.; Han, D.; Su, B. A Comparison of UAV RGB and Multispectral Imaging in Phenotyping for Stay Green of
Wheat Population. Remote Sens. 2021, 13, 5173. [CrossRef]
155. Zhao, J.; Zhang, X.; Yan, J.; Qiu, X.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W. A wheat spike detection method in UAV images based on
improved YOLOv5. Remote Sens. 2021, 13, 3095. [CrossRef]
156. Jocher, G.; Stoken, A.; Borovec, J.; Christopher, S.; Laughing, L.C. ultralytics/yolov5: v4.0 - nn.SiLU() activations, Weights & Biases logging, PyTorch Hub integration. Zenodo 2021. [CrossRef]
157. Wang, J.; Zhou, Q.; Shang, J.; Liu, C.; Zhuang, T.; Ding, J.; Xian, Y.; Zhao, L.; Wang, W.; Zhou, G.; et al. UAV- and Machine
Learning-Based Retrieval of Wheat SPAD Values at the Overwintering Stage for Variety Screening. Remote Sens. 2021, 13, 5166.
[CrossRef]
158. Nazeri, B.; Crawford, M. Detection of Outliers in LiDAR Data Acquired by Multiple Platforms over Sorghum and Maize. Remote
Sens. 2021, 13, 4445. [CrossRef]
159. Chen, P.; Ma, X.; Wang, F.; Li, J. A New Method for Crop Row Detection Using Unmanned Aerial Vehicle Images. Remote Sens.
2021, 13, 3526. [CrossRef]
160. Wang, F.; Yao, X.; Xie, L.; Zheng, J.; Xu, T. Rice Yield Estimation Based on Vegetation Index and Florescence Spectral Information
from UAV Hyperspectral Remote Sensing. Remote Sens. 2021, 13, 3390. [CrossRef]
161. Traore, A.; Ata-Ul-Karim, S.T.; Duan, A.; Soothar, M.K.; Traore, S.; Zhao, B. Predicting Equivalent Water Thickness in Wheat
Using UAV Mounted Multispectral Sensor through Deep Learning Techniques. Remote Sens. 2021, 13, 4476. [CrossRef]
162. Ndlovu, H.S.; Odindi, J.; Sibanda, M.; Mutanga, O.; Clulow, A.; Chimonyo, V.G.; Mabhaudhi, T. A comparative estimation of
maize leaf water content using machine learning techniques and unmanned aerial vehicle (UAV)-based proximal and remotely
sensed data. Remote Sens. 2021, 13, 4091. [CrossRef]
163. Pádua, L.; Matese, A.; Di Gennaro, S.F.; Morais, R.; Peres, E.; Sousa, J.J. Vineyard classification using OBIA on UAV-based RGB
and multispectral data: A case study in different wine regions. Comput. Electron. Agric. 2022, 196, 106905. [CrossRef]
164. Zhang, H.; Yang, W.; Yu, H.; Zhang, H.; Xia, G.S. Detecting power lines in UAV images with convolutional features and structured
constraints. Remote Sens. 2019, 11, 1342. [CrossRef]
165. Pastucha, E.; Puniach, E.; Ścisłowicz, A.; Ćwiąkała, P.; Niewiem, W.; Wiącek, P. 3D reconstruction of power lines using UAV images to monitor corridor clearance. Remote Sens. 2020, 12, 3698. [CrossRef]
166. Tan, J.; Zhao, H.; Yang, R.; Liu, H.; Li, S.; Liu, J. An entropy-weighting method for efficient power-line feature evaluation and
extraction from lidar point clouds. Remote Sens. 2021, 13, 3446. [CrossRef]
167. Zhang, Y.; Yuan, X.; Li, W.; Chen, S. Automatic power line inspection using UAV images. Remote Sens. 2017, 9, 824. [CrossRef]
168. Zhou, Y.; Xu, C.; Dai, Y.; Feng, X.; Ma, Y.; Li, Q. Dual-view stereovision-guided automatic inspection system for overhead
transmission line corridor. Remote Sens. 2022, 14, 4095. [CrossRef]
169. Ortega, S.; Trujillo, A.; Santana, J.M.; Suárez, J.P.; Santana, J. Characterization and modeling of power line corridor elements from
LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2019, 152, 24–33. [CrossRef]
170. Zhao, Z.; Zhen, Z.; Zhang, L.; Qi, Y.; Kong, Y.; Zhang, K. Insulator detection method in inspection image based on improved
faster R-CNN. Energies 2019, 12, 1204. [CrossRef]
171. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014;
pp. 580–587.
172. Ma, Y.; Li, Q.; Chu, L.; Zhou, Y.; Xu, C. Real-time detection and spatial localization of insulators for UAV inspection based on
binocular stereo vision. Remote Sens. 2021, 13, 230. [CrossRef]
173. Liu, C.; Wu, Y.; Liu, J.; Han, J. MTI-YOLO: A light-weight and real-time deep neural network for insulator detection in complex
aerial images. Energies 2021, 14, 1426. [CrossRef]
174. Prates, R.M.; Cruz, R.; Marotta, A.P.; Ramos, R.P.; Simas Filho, E.F.; Cardoso, J.S. Insulator visual non-conformity detection in
overhead power distribution lines using deep learning. Comput. Electr. Eng. 2019, 78, 343–355. [CrossRef]
175. Wang, S.; Liu, Y.; Qing, Y.; Wang, C.; Lan, T.; Yao, R. Detection of insulator defects with improved ResNeSt and region proposal
network. IEEE Access 2020, 8, 184841–184850. [CrossRef]
176. Wen, Q.; Luo, Z.; Chen, R.; Yang, Y.; Li, G. Deep learning approaches on defect detection in high resolution aerial images of
insulators. Sensors 2021, 21, 1033. [CrossRef]
177. Chen, W.; Li, Y.; Zhao, Z. InsulatorGAN: A Transmission Line Insulator Detection Model Using Multi-Granularity Conditional
Generative Adversarial Nets for UAV Inspection. Remote Sens. 2021, 13, 3971. [CrossRef]
178. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
179. Liu, Z.; Miao, X.; Xie, Z.; Jiang, H.; Chen, J. Power Tower Inspection Simultaneous Localization and Mapping: A Monocular
Semantic Positioning Approach for UAV Transmission Tower Inspection. Sensors 2022, 22, 7360. [CrossRef]
180. Bao, W.; Ren, Y.; Wang, N.; Hu, G.; Yang, X. Detection of Abnormal Vibration Dampers on Transmission Lines in UAV Remote
Sensing Images with PMA-YOLO. Remote Sens. 2021, 13, 4134. [CrossRef]
181. Bao, W.; Du, X.; Wang, N.; Yuan, M.; Yang, X. A Defect Detection Method Based on BC-YOLO for Transmission Line Components
in UAV Remote Sensing Images. Remote Sens. 2022, 14, 5176. [CrossRef]
182. Nex, F.; Duarte, D.; Steenbeek, A.; Kerle, N. Towards real-time building damage mapping with low-cost UAV solutions. Remote
Sens. 2019, 11, 287. [CrossRef]
183. Yeom, J.; Han, Y.; Chang, A.; Jung, J. Hurricane building damage assessment using post-disaster UAV data. In Proceedings of the
IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019;
pp. 9867–9870.
184. Li, W.; Sun, K.; Xu, C. Automatic 3D Building Change Detection Using UAV Images. In Proceedings of the
IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019;
pp. 1574–1577.
185. Wu, H.; Nie, G.; Fan, X. Classification of Building Structure Types Using UAV Optical Images. In Proceedings of the IGARSS
2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020;
pp. 1193–1196.
186. Zheng, L.; Ai, P.; Wu, Y. Building recognition of UAV remote sensing images by deep learning. In Proceedings of the IGARSS
2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020;
pp. 1185–1188.
187. Li, X.; Yang, J.; Li, Z.; Yang, F.; Chen, Y.; Ren, J.; Duan, Y. Building Damage Detection for Extreme Earthquake Disaster Area
Location from Post-Event Uav Images Using Improved SSD. In Proceedings of the IGARSS 2022—2022 IEEE International
Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2674–2677.
188. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of
the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings,
Part I 14; Springer: Cham, Switzerland, 2016; pp. 21–37.
189. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference
on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
190. Shi, X.; Huang, H.; Pu, C.; Yang, Y.; Xue, J. CSA-UNet: Channel-Spatial Attention-Based Encoder–Decoder Network for Rural
Blue-Roofed Building Extraction from UAV Imagery. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [CrossRef]
191. He, H.; Yu, J.; Cheng, P.; Wang, Y.; Zhu, Y.; Lin, T.; Dai, G. Automatic, Multiview, Coplanar Extraction for CityGML Building
Model Texture Mapping. Remote Sens. 2021, 14, 50. [CrossRef]
192. Laugier, E.J.; Casana, J. Integrating Satellite, UAV, and Ground-Based Remote Sensing in Archaeology: An Exploration of
Pre-Modern Land Use in Northeastern Iraq. Remote Sens. 2021, 13, 5119. [CrossRef]
193. Ammour, N.; Alhichri, H.; Bazi, Y.; Benjdira, B.; Alajlan, N.; Zuair, M. Deep learning approach for car detection in UAV imagery.
Remote Sens. 2017, 9, 312. [CrossRef]
194. Li, J.; Chen, S.; Zhang, F.; Li, E.; Yang, T.; Lu, Z. An adaptive framework for multi-vehicle ground speed estimation in airborne
videos. Remote Sens. 2019, 11, 1241. [CrossRef]
195. Zhang, Y.; Guo, L.; Wang, Z.; Yu, Y.; Liu, X.; Xu, F. Intelligent ship detection in remote sensing images based on multi-layer
convolutional feature fusion. Remote Sens. 2020, 12, 3316. [CrossRef]
196. Lubczonek, J.; Wlodarczyk-Sielicka, M.; Lacka, M.; Zaniewicz, G. Methodology for Developing a Combined Bathymetric and
Topographic Surface Model Using Interpolation and Geodata Reduction Techniques. Remote Sens. 2021, 13, 4427. [CrossRef]
197. Ioli, F.; Bianchi, A.; Cina, A.; De Michele, C.; Maschio, P.; Passoni, D.; Pinto, L. Mid-Term Monitoring of Glacier’s Variations with
UAVs: The Example of the Belvedere Glacier. Remote Sens. 2021, 14, 28. [CrossRef]
198. Nardin, W.; Taddia, Y.; Quitadamo, M.; Vona, I.; Corbau, C.; Franchi, G.; Staver, L.W.; Pellegrinelli, A. Seasonality and
Characterization Mapping of Restored Tidal Marsh by NDVI Imageries Coupling UAVs and Multispectral Camera. Remote Sens.
2021, 13, 4207. [CrossRef]
199. Kim, M.; Chung, O.S.; Lee, J.K. A Manual for Monitoring Wild Boars (Sus scrofa) Using Thermal Infrared Cameras Mounted on
an Unmanned Aerial Vehicle (UAV). Remote Sens. 2021, 13, 4141. [CrossRef]
200. Rančić, K.; Blagojević, B.; Bezdan, A.; Ivošević, B.; Tubić, B.; Vranešević, M.; Pejak, B.; Crnojević, V.; Marko, O. Animal Detection
and Counting from UAV Images Using Convolutional Neural Networks. Drones 2023, 7, 179. [CrossRef]
201. Ge, S.; Gu, H.; Su, W.; Praks, J.; Antropov, O. Improved semisupervised UNet deep learning model for forest height mapping with
satellite SAR and optical data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5776–5787. [CrossRef]
202. Zhang, B.; Ye, H.; Lu, W.; Huang, W.; Wu, B.; Hao, Z.; Sun, H. A spatiotemporal change detection method for monitoring pine
wilt disease in a complex landscape using high-resolution remote sensing imagery. Remote Sens. 2021, 13, 2083. [CrossRef]
203. Barrile, V.; Simonetti, S.; Citroni, R.; Fotia, A.; Bilotta, G. Experimenting Agriculture 4.0 with Sensors: A Data Fusion Approach
between Remote Sensing, UAVs and Self-Driving Tractors. Sensors 2022, 22, 7910. [CrossRef] [PubMed]
204. Zheng, Q.; Huang, W.; Cui, X.; Shi, Y.; Liu, L. New spectral index for detecting wheat yellow rust using Sentinel-2 multispectral
imagery. Sensors 2018, 18, 868. [CrossRef] [PubMed]
205. Bohnenkamp, D.; Behmann, J.; Mahlein, A.K. In-field detection of yellow rust in wheat on the ground canopy and UAV scale.
Remote Sens. 2019, 11, 2495. [CrossRef]
206. Saeed, Z.; Yousaf, M.H.; Ahmed, R.; Velastin, S.A.; Viriri, S. On-Board Small-Scale Object Detection for Unmanned Aerial Vehicles
(UAVs). Drones 2023, 7, 310. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.