Agronomy 12 00365 v2
Article
Plant Disease Recognition Model Based on Improved YOLOv5
Zhaoyi Chen 1 , Ruhui Wu 2 , Yiyan Lin 1 , Chuyu Li 1 , Siyu Chen 1 , Zhineng Yuan 2 , Shiwei Chen 2
and Xiangjun Zou 1,3, *
Abstract: To accurately recognize plant diseases under complex natural conditions, an improved
plant disease-recognition model based on the original YOLOv5 network model was established. First,
a new InvolutionBottleneck module was used to reduce the numbers of parameters and calculations,
and to capture long-distance information in the space. Second, an SE module was added to improve
the sensitivity of the model to channel features. Finally, the loss function ‘Generalized Intersection
over Union’ was changed to ‘Efficient Intersection over Union’ to address the former’s degeneration
into ‘Intersection over Union’. These proposed methods were used to improve the target recognition
effect of the network model. In the experimental phase, to verify the effectiveness of the model,
sample images were randomly selected from the constructed rubber tree disease database to form
training and test sets. The test results showed that the mean average precision of the improved
YOLOv5 network reached 70%, which is 5.4% higher than that of the original YOLOv5 network. The
precision values of this model for powdery mildew and anthracnose detection were 86.5% and 86.8%,
respectively. The overall detection performance of the improved YOLOv5 network was significantly better than those of the original YOLOv5 and the YOLOX_nano network models. The improved model accurately identified plant diseases under natural conditions, and it provides a technical reference for the prevention and control of plant diseases.

Keywords: plant disease recognition; YOLOv5; InvolutionBottleneck; SE module; EIOU

Citation: Chen, Z.; Wu, R.; Lin, Y.; Li, C.; Chen, S.; Yuan, Z.; Chen, S.; Zou, X. Plant Disease Recognition Model Based on Improved YOLOv5. Agronomy 2022, 12, 365. https://doi.org/10.3390/agronomy12020365
detection methods based on visible light and near-infrared spectroscopic digital images
have been widely used. Near-infrared spectroscopic and hyperspectral images contain
continuous spectral information and provide information on the spatial distributions
of plant diseases. Consequently, they have become the preferred technologies of many
researchers [10–13]. However, the equipment for acquiring spectral images is expensive
and difficult to carry; therefore, this technology cannot be widely applied. The acquisition
of visible light images is relatively simple and can be achieved using various ordinary
electronic devices, such as digital cameras and smart phones, which greatly reduces the
challenges of visible light image-recognition research [14,15].
Because of the need for real-time monitoring and sharing of crop growth information,
visible light image recognition has been successfully applied to the field of plant disease
detection in recent years [16–20]. A variety of traditional image-processing methods have
been applied. First, the images are segmented, then the characteristics of plant diseases are
extracted and, finally, the diseases are classified. Shrivastava et al. [21] proposed an image-
based rice plant disease classification approach using color features only, and it successfully
classifies rice plant diseases using a support vector machine classifier. Alajas et al. [22]
used a hybrid linear discriminant analysis and a decision tree to predict the percentage of
damaged leaf surface on diseased grapevines, with an accuracy of 97.79%. Kianat et al. [23]
proposed a hybrid framework based on feature fusion and selection techniques to classify
cucumber diseases. They first used the probability distribution-based entropy approach
to reduce the extracted features, and then, they used the Manhattan distance-controlled
entropy technique to select strong features. Mary et al. [24] used the merits of both the
Gabor filter and the 2D log Gabor filter to construct an enhanced Gabor filter to extract
features from the images of the diseased plant, and then, they used the k-nearest neighbor
classifier to classify banana leaf diseases. Sugiarti et al. [25] combined grey-level co-occurrence matrix feature extraction with naive Bayes classification to greatly improve the classification accuracy of apple diseases. Mukhopadhyay et al. [26] proposed
a novel method based on image-processing technology, and they used the non-dominated
sorting genetic algorithm to detect the disease area on tea leaves, with an average accuracy
of 83%. However, visible light image recognition based on traditional image-processing technologies requires manual preprocessing of images and hand-crafted extraction of disease features. The feature information is limited to shallow learning, and the generalization ability on new data sets needs to be improved.
In contrast, deep learning methods are gradually being applied to agricultural research because they can automatically learn deep feature information from images, and their speed and accuracy levels are greater than those of traditional algorithms [27–30]. Deep learning has also been applied to the detection of plant diseases from visible light images. Abbas et al. [31] proposed a deep learning-based method for tomato plant disease
detection that utilizes the conditional generative adversarial network to generate synthetic
images of tomato plant leaves. Xiang et al. [32] established a lightweight convolutional
neural network-based network model with channel shuffle operation and multiple-size
modules that achieved accuracy levels of 90.6% and 97.9% on a plant disease severity
and PlantVillage datasets, respectively. Tan et al. [33] compared the recognition effects
of deep learning networks and machine learning algorithms on tomato leaf diseases and
found that the metrics of the tested deep learning networks are all better than those of
the measured machine learning algorithms, with the ResNet34 network obtaining the
best results. Atila et al. [34] used the EfficientNet deep learning model to detect plant leaf
disease, and it was superior to other state-of-the-art deep learning models in terms of
accuracy. Mishra et al. [35] developed a sine-cosine algorithm-based rider neural network
and found that the detection performance of the classifier improved, achieving an accuracy
of 95.6%. In summary, applying deep learning to plant disease detection has achieved good
results.
As a result of climatic factors, rubber trees may suffer from a variety of pests and
diseases, most typically powdery mildew and anthracnose, during the tender leaf stage.
where A, B ⊆ S ⊆ R^n represent two arbitrary boxes; C ⊆ S ⊆ R^n represents the smallest convex box enclosing both A and B; and IOU = |A ∩ B| / |A ∪ B|.
When the input network predicts image features, the optimal target frame is filtered
by combining the loss function GIOU and the non-maximum suppression algorithm.
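For axis-aligned boxes, the IOU and GIOU quantities described above can be sketched as follows (pure Python; the `(x1, y1, x2, y2)` corner format is an assumption, as the paper does not fix one):

```python
def iou_giou(a, b):
    """Compute IOU and GIOU for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle of A and B
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union

    # C: smallest convex (enclosing) box of A and B
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    area_c = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (area_c - union) / area_c
    return iou, giou
```

Note that when one box fully contains the other, C coincides with the larger box, the correction term vanishes, and GIOU degenerates to IOU; this is exactly the weakness the EIOU loss of Section 2.2.3 addresses.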
where C represents the number of channels, φ represents the generation function of the convolution kernel and ψ_{i,j} represents the index of the pixel set on which the kernel at (i, j) is conditioned. The convolution kernel H_{i,j,·,·,g} ∈ R^{K×K} (g = 1, 2, ..., G) is customized for the pixel X_{i,j} ∈ R^C located at the corresponding coordinates (i, j), but it is shared across the channels of a group. G represents the number of groups sharing the same convolution kernel, and the size K of the convolution kernel depends on the size of the input feature map.
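The involution operation [39] can be sketched in NumPy as follows: a K × K kernel is generated from each pixel's own feature vector and shared by all channels of a group. The linear kernel-generation map `w_gen` is a stand-in for the learned reduction layers of the original paper and is purely illustrative:

```python
import numpy as np

def involution(x, w_gen, K=3, G=1):
    """Involution: per-pixel kernels generated from the input, shared per channel group.

    x:      input feature map, shape (C, H, W)
    w_gen:  illustrative kernel-generation weights, shape (G * K * K, C)
    Returns a feature map of shape (C, H, W).
    """
    C, H, W = x.shape
    pad = K // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    cg = C // G  # channels per group
    for i in range(H):
        for j in range(W):
            # Generate one K x K kernel per group from this pixel's feature vector
            kernels = (w_gen @ x[:, i, j]).reshape(G, K, K)
            patch = xp[:, i:i + K, j:j + K]  # (C, K, K) neighbourhood
            for g in range(G):
                # The same kernel is applied to every channel of group g
                out[g * cg:(g + 1) * cg, i, j] = np.sum(
                    patch[g * cg:(g + 1) * cg] * kernels[g], axis=(1, 2))
    return out
```

With G = 1 all channels share a single spatially varying kernel, which is how the InvolutionBottleneck reduces the numbers of parameters and calculations relative to ordinary convolution while capturing long-distance spatial information.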
Figure 1. SE modules with different structures. (a) SE module with Inception structure; (b) SE module with Residual structure.
The SE module is a calculation block that can be built on the transformation between the input feature vector X and the output feature map U, and the transformation relationship is shown in Equation (4):

u_c = v_c ∗ X = Σ_{s=1}^{C′} v_c^s ∗ x^s   (4)

where ∗ represents convolution, v_c = [v_c^1, v_c^2, ..., v_c^{C′}], X = [x^1, x^2, ..., x^{C′}] and u_c ∈ R^{H×W}. v_c^s represents a 2D spatial kernel, which denotes a single channel of v_c that acts on the corresponding channel of X.

In this paper, the SE module was added to the last layer of the Backbone, allowing it to merge the image features of powdery mildew and anthracnose in a weighted manner, thereby improving the network performance at a small cost.
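A minimal NumPy sketch of this squeeze-and-excitation recalibration [40] may help: channels are pooled, passed through a small bottleneck, and re-weighted. The weight matrices `w1` and `w2` stand in for the module's two learned fully connected layers and the reduction ratio; both are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(u, w1, w2):
    """Squeeze-and-excitation: recalibrate the channels of u, shape (C, H, W).

    w1: (C // r, C) squeeze FC weights; w2: (C, C // r) excitation FC weights.
    """
    z = u.mean(axis=(1, 2))                       # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # excitation: FC -> ReLU -> FC -> sigmoid
    return u * s[:, None, None]                   # scale each channel by its weight
```

The per-channel weights s let the network emphasize the channels most informative for powdery mildew and anthracnose features, at the cost of only two small matrix products.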
2.2.3. Loss Function Design
The loss function was changed from GIOU to EIOU [41]. The GIOU function was proposed on the basis of the IOU function. It solves the problem of the IOU not being able to reflect how the two boxes intersect. However, if the anchor and target boxes are part of a containment relationship, then GIOU will still degenerate into IOU. Therefore, we changed the loss function GIOU to EIOU. EIOU was obtained on the basis of complete-IOU loss (CIOU), and it not only takes into account the central point distance and the aspect ratio, but also the true discrepancies in the target and anchor boxes' widths and heights. The EIOU loss function directly minimizes these discrepancies and accelerates model convergence. The EIOU loss function is shown in Equation (5):

L_EIOU = L_IOU + L_dis + L_asp = 1 − IOU + ρ²(b, b^gt)/C² + ρ²(w, w^gt)/C_w² + ρ²(h, h^gt)/C_h²   (5)

where C_w and C_h represent the width and height, respectively, of the smallest enclosing box covering the two boxes; b and b^gt represent the central points of the predicted and target boxes, respectively; ρ represents the Euclidean distance; and C represents the diagonal length of the smallest enclosing box covering the two boxes. The loss function EIOU is divided into three parts: the IOU loss L_IOU, the distance loss L_dis and the aspect loss L_asp.

Combined with the InvolutionBottleneck and the SE modules, the whole improved YOLOv5 network model framework is constructed, as shown in Figure 2.
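Equation (5) can be evaluated directly for a pair of boxes. A pure-Python sketch (the `(x1, y1, x2, y2)` corner format is an assumption; the three terms follow Equation (5)):

```python
def eiou_loss(pred, gt):
    """EIOU loss of Equation (5) for boxes given as (x1, y1, x2, y2)."""
    # IOU term
    iw = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    ih = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = iw * ih
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter)

    # Smallest enclosing box: width Cw, height Ch, squared diagonal C^2
    cw = max(pred[2], gt[2]) - min(pred[0], gt[0])
    ch = max(pred[3], gt[3]) - min(pred[1], gt[1])
    c2 = cw ** 2 + ch ** 2

    # Squared discrepancies of centres, widths and heights
    bx, by = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    gx, gy = (gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2
    rho2_center = (bx - gx) ** 2 + (by - gy) ** 2
    rho2_w = ((pred[2] - pred[0]) - (gt[2] - gt[0])) ** 2
    rho2_h = ((pred[3] - pred[1]) - (gt[3] - gt[1])) ** 2

    # L_EIOU = L_IOU + L_dis + L_asp
    return (1 - iou) + rho2_center / c2 + rho2_w / cw ** 2 + rho2_h / ch ** 2
```

Unlike GIOU, the width and height terms stay non-zero whenever the predicted box's shape differs from the target's, even when one box contains the other, which is what drives the faster convergence noted above.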
Figure 2. The improved YOLOv5 network model structure.

3. Materials and Methods
3.1. Experimental Materials
The images of rubber tree diseases were collected from a rubber plantation in Shengli State Farm, Maoming City, China. It is located at 22°6′ N, 110°80′ E, with an altitude of 34–69 m, an average annual precipitation of 1698.1 mm and an annual average temperature of 19.9–26.5 °C. The high humidity and warm climate are conducive to widespread epidemics of powdery mildew and anthracnose. To ensure the representativeness of the image set, the images were collected under natural light conditions. A Sony ILCE-7m3 digital camera was used to photograph powdery mildew and anthracnose on rubber leaves at different angles, with an image resolution of 6000 × 4000 pixels. There were 2375 images in the rubber tree disease database, including 1203 powdery mildew images and 1172 anthracnose images, which were used for the training and testing of disease recognition models. We identified these two diseases with the guidance of plant protection experts. Images of these rubber tree diseases are shown in Figure 3.
Figure 3. Rubber tree disease images. (a) Powdery mildew image; (b) Anthracnose image.
3.2. Data Preprocessing
Before the images were inputted into the improved YOLOv5 network model, the mosaic data enhancement method was used to expand the image set. The images were spliced using several methods, such as random scaling, random cropping and random arrangement, which not only expanded the image set, but also improved the detection of small targets. In addition, before training the model, adaptive scaling and filling operations were performed on the images of rubber tree diseases, and the input image size was normalized to 640 × 640 pixels. The preprocessing results are shown in Figure 4.
Figure 4. Image preprocessing result.
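The adaptive scaling and filling step can be sketched as a letterbox operation (NumPy, nearest-neighbour resize; the grey padding value 114 follows common YOLOv5 practice and is an assumption here):

```python
import numpy as np

def letterbox(img, size=640, pad_value=114):
    """Resize an (H, W, 3) image into a size x size canvas, preserving aspect ratio."""
    h, w = img.shape[:2]
    scale = min(size / h, size / w)
    nh, nw = int(round(h * scale)), int(round(w * scale))

    # Nearest-neighbour resize via index maps (no external dependencies)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]

    # Centre the resized image on a grey canvas; the borders are the "filling"
    canvas = np.full((size, size, 3), pad_value, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas
```

For the 6000 × 4000 camera images used here, the longer side is scaled to 640 and the shorter side is padded, so no disease region is distorted by anisotropic resizing.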
Figure 5. Test flow chart.

4. Results and Analysis
4.1. Convergence Results of the Network Model
The training and verification sets were inputted into the network for training. After
80 batches of training, the loss function value curves of the training and verification sets
were determined (Figure 6), and they included the detection frame loss, the detection object
loss and the classification loss.
To evaluate the detection performance of the improved YOLOv5 network, it was crucial to use appropriate evaluation metrics for each problem. The precision, recall, average precision and mean average precision were used as the evaluation metrics, and they were respectively defined as follows:

P = TP / (TP + FP)   (6)

R = TP / (TP + FN)   (7)

AP_i = ∫₀¹ P(R) dR   (8)

mAP = (1/N) Σ_{i=1}^{N} AP_i   (9)
where TP represents the number of positive samples that are correctly detected, FP represents the number of negative samples that are falsely detected and FN represents the number of positive samples that are not detected.
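Equations (6)–(9) translate directly into code (a pure-Python sketch; the rectangle-rule approximation of the P(R) integral in Equation (8) is an illustrative choice, as the paper does not state its interpolation scheme):

```python
def precision_recall(tp, fp, fn):
    """Equations (6) and (7): precision and recall from detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(recalls, precisions):
    """Equation (8): area under the P(R) curve, recalls sorted ascending."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += p * (r - prev_r)  # rectangle rule over each recall step
        prev_r = r
    return ap

def mean_average_precision(aps):
    """Equation (9): mean of the per-class average precisions."""
    return sum(aps) / len(aps)
```

With the two disease classes of this paper, N = 2 and mAP is simply the mean of the powdery mildew and anthracnose APs.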
In total, 200 powdery mildew images and 200 anthracnose images were randomly selected as the test set and inputted into the improved YOLOv5 network for testing. The test results were compared with those of the original YOLOv5 and the YOLOX_nano networks. The comparison results are shown in Figure 7.
Figure 7. Performance comparison of all the network models. (a) Powdery mildew recognition results; (b) Anthracnose recognition results; (c) Mean average precision; (d) Processing times per photo.
As shown in Figure 7, the detection performance of the improved YOLOv5 network was better than those of the original YOLOv5 and YOLOX_nano networks for both tested rubber tree diseases. Compared with the original YOLOv5 network, the precision of powdery mildew detection increased by 8.7% and the average precision increased by 1%; however, recall decreased by 1.5%. The average precision of anthracnose detection increased by 9.2% and recall increased by 9.3%; however, precision decreased by 5.2%. Overall, the mean average precision increased by 5.4%. Compared with the YOLOX_nano network, the precision of powdery mildew detection increased by 3.7% and the average precision increased by 0.3%; however, recall decreased by 2%. The average precision of anthracnose detection increased by 4.4% and recall increased by 3.8%; however, precision decreased by 4.4%. Overall, the mean average precision increased by 1.4%. The improved YOLOv5 network achieved 86.5% and 86.8% precision levels for the detection of powdery mildew and anthracnose, respectively. In summary, the improved YOLOv5 network's performance was greatly enhanced compared with those of the original YOLOv5 and YOLOX_nano networks; consequently, it more accurately locates and identifies rubber tree diseases.
Figure 8. Comparison of the recognition effects of all the network models. (a–c) Powdery mildew recognition effects of the (a) original YOLOv5; (b) YOLOX_nano; and (c) improved YOLOv5 network models; (d–f) Anthracnose recognition effects of the (d) original YOLOv5; (e) YOLOX_nano; and (f) improved YOLOv5 network models.
As shown in Figure 8, compared with the other networks, the improved network significantly improved the detection of powdery mildew, including on obscured diseased leaves. Additionally, the recognition effect of the YOLOX_nano network for powdery mildew is better than that of the original YOLOv5 network. For the detection of anthracnose, the recognition effects of the three networks were similar, with all three effectively detecting anthracnose. Therefore, the effectiveness of the improved network for diseased leaf detection is generally better than those of the original YOLOv5 and the YOLOX_nano networks.
5. Conclusions
The detection and location of plant diseases in the natural environment are of great
significance to plant disease control. In this paper, a rubber tree disease recognition model
based on the improved YOLOv5 network was established. We replaced the Bottleneck
module with the InvolutionBottleneck module to achieve channel sharing within the group
and reduce the number of network parameters. In addition, the SE module was added to
the last layer of the Backbone for feature fusion, which improved network performance at
a small cost. Finally, the loss function was changed from GIOU to EIOU to accelerate the
convergence of the network model. According to the experimental results, the following
conclusions can be drawn:
(1) The model performance verification experiment showed that the rubber tree disease
recognition model based on the improved YOLOv5 network achieved 86.5% precision
for powdery mildew detection and 86.8% precision for anthracnose detection. In gen-
eral, the mean average precision reached 70%, which is an increase of 5.4% compared
with the original YOLOv5 network. Therefore, the improved YOLOv5 network more
accurately identified and classified rubber tree diseases, and it provides a technical
reference for the prevention and control of rubber tree diseases.
(2) A comparison of the detection results showed that the performance of the improved
YOLOv5 network was generally better than those of the original YOLOv5 and the
YOLOX_nano networks, especially in the detection of powdery mildew, where the problem of missed detections of obscured diseased leaves was alleviated.
Although the improved YOLOv5 network, as applied to rubber tree disease detection,
achieved good results, the detection accuracy still needs to be improved. In future research,
the network model structure will be further optimized to improve the network performance
of the rubber tree disease recognition model.
Author Contributions: Conceptualization, Z.C. and X.Z.; methodology, Z.C.; software, Z.C.; valida-
tion, Z.C.; formal analysis, Z.C. and X.Z.; investigation, Z.C., X.Z., R.W., Y.L., C.L. and S.C. (Siyu
Chen); resources, Z.C., X.Z., R.W., Z.Y. and S.C. (Shiwei Chen); data curation, Z.C.; writing—original
draft preparation, Z.C.; writing—review and editing, Z.C. and X.Z.; visualization, Z.C.; supervision,
X.Z.; project administration, X.Z. All authors have read and agreed to the published version of
the manuscript.
Funding: The paper is funded by the No. 03 Special Project and the 5G Project of Jiangxi Province
under Grant 20212ABC03A27 and the Key-area Research and Development Program of Guangdong
Province under Grant 2019B020223003.
Data Availability Statement: The data presented in this study are available on request from the cor-
responding author. The data are not publicly available due to the privacy policy of the organization.
Acknowledgments: The authors would like to thank the anonymous reviewers for their critical
comments and suggestions for improving the manuscript.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Lin, G.; Tang, Y.; Zou, X.; Xiong, J.; Fang, Y. Color-, depth-, and shape-based 3D fruit detection. Precis. Agric. 2020, 21, 1–17.
[CrossRef]
2. Joshi, R.C.; Kaushik, M.; Dutta, M.K.; Srivastava, A.; Choudhary, N. VirLeafNet: Automatic analysis and viral disease diagnosis
using deep-learning in Vigna mungo plant. Ecol. Inform. 2021, 61, 101197. [CrossRef]
3. Buja, I.; Sabella, E.; Monteduro, A.G.; Chiriacò, M.S.; De Bellis, L.; Luvisi, A.; Maruccio, G. Advances in Plant Disease Detection
and Monitoring: From Traditional Assays to In-Field Diagnostics. Sensors 2021, 21, 2129. [CrossRef] [PubMed]
4. Liu, S.; Liu, D.; Srivastava, G.; Połap, D.; Woźniak, M. Overview and methods of correlation filter algorithms in object tracking.
Complex Intell. Syst. 2020, 7, 1895–1917. [CrossRef]
5. Tang, Y.; Chen, M.; Wang, C.; Luo, L.; Li, J.; Lian, G.; Zou, X. Recognition and localization methods for vision-based fruit picking
robots: A review. Front. Plant Sci. 2020, 11, 510. [CrossRef]
6. Li, J.; Tang, Y.; Zou, X.; Lin, G.; Wang, H. Detection of fruit-bearing branches and localization of litchi clusters for vision-based
harvesting robots. IEEE Access 2020, 8, 117746–117758. [CrossRef]
7. Wu, F.; Duan, J.; Chen, S.; Ye, Y.; Ai, P.; Yang, Z. Multi-Target Recognition of Bananas and Automatic Positioning for the
Inflorescence Axis Cutting Point. Front. Plant Sci. 2021, 12, 705021. [CrossRef]
8. Wang, C.; Tang, Y.; Zou, X.; Luo, L.; Chen, X. Recognition and matching of clustered mature litchi fruits using binocular
charge-coupled device (CCD) color cameras. Sensors 2017, 17, 2564. [CrossRef]
9. Luo, L.; Liu, W.; Lu, Q.; Wang, J.; Wen, W.; Yan, D.; Tang, Y. Grape Berry Detection and Size Measurement Based on Edge Image
Processing and Geometric morphology. Machines 2021, 9, 233. [CrossRef]
10. Gui, J.; Fei, J.; Wu, Z.; Fu, X.; Diakite, A. Grading method of soybean mosaic disease based on hyperspectral imaging technology.
Inf. Process. Agric. 2021, 8, 380–385. [CrossRef]
11. Luo, L.; Chang, Q.; Wang, Q.; Huang, Y. Identification and Severity Monitoring of Maize Dwarf Mosaic Virus Infection Based on
Hyperspectral Measurements. Remote Sens. 2021, 13, 4560. [CrossRef]
12. Appeltans, S.; Pieters, J.G.; Mouazen, A.M. Detection of leek white tip disease under field conditions using hyperspectral proximal
sensing and supervised machine learning. Comput. Electron. Agric. 2021, 190, 106453. [CrossRef]
13. Fazari, A.; Pellicer-Valero, O.J.; Gómez-Sanchıs, J.; Bernardi, B.; Cubero, S.; Benalia, S.; Zimbalatti, G.; Blasco, J. Application of
deep convolutional neural networks for the detection of anthracnose in olives using VIS/NIR hyperspectral images. Comput.
Electron. Agric. 2021, 187, 106252. [CrossRef]
14. Shi, Y.; Huang, W.; Luo, J.; Huang, L.; Zhou, X. Detection and discrimination of pests and diseases in winter wheat based on
spectral indices and kernel discriminant analysis. Comput. Electron. Agric. 2017, 141, 171–180. [CrossRef]
15. Phadikar, S.; Sil, J.; Das, A.K. Rice diseases classification using feature selection and rule generation techniques. Comput. Electron.
Agric. 2013, 90, 76–85. [CrossRef]
16. Ahmed, N.; Asif, H.M.S.; Saleem, G. Leaf Image-based Plant Disease Identification using Color and Texture Features. arXiv 2021,
arXiv:2102.04515.
17. Singh, S.; Gupta, S.; Tanta, A.; Gupta, R. Extraction of Multiple Diseases in Apple Leaf Using Machine Learning. Int. J. Image
Graph. 2021, 2140009. [CrossRef]
18. Gadade, H.D.; Kirange, D.K. Machine Learning Based Identification of Tomato Leaf Diseases at Various Stages of Development.
In Proceedings of the 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode,
India, 8–10 April 2021.
19. Almadhor, A.; Rauf, H.; Lali, M.; Damaševičius, R.; Alouffi, B.; Alharbi, A. AI-Driven Framework for Recognition of Guava
Plant Diseases through Machine Learning from DSLR Camera Sensor Based High Resolution Imagery. Sensors 2021, 21, 3830.
[CrossRef]
20. Kundu, N.; Rani, G.; Dhaka, V.S.; Gupta, K.; Nayak, S.C.; Verma, S.; Ijaz, M.F.; Woźniak, M. IoT and Interpretable Machine
Learning Based Framework for Disease Prediction in Pearl Millet. Sensors 2021, 21, 5386. [CrossRef]
21. Shrivastava, V.K.; Pradhan, M.K. Rice plant disease classification using color features: A machine learning paradigm. J. Plant
Pathol. 2020, 103, 17–26. [CrossRef]
22. Alajas, O.J.; Concepcion, R.; Dadios, E.; Sybingco, E.; Mendigoria, C.H.; Aquino, H. Prediction of Grape Leaf Black Rot Damaged
Surface Percentage Using Hybrid Linear Discriminant Analysis and Decision Tree. In Proceedings of the 2021 International
Conference on Intelligent Technologies (CONIT), Hubli, India, 25–27 June 2021.
23. Kianat, J.; Khan, M.A.; Sharif, M.; Akram, T.; Rehman, A.; Saba, T. A joint framework of feature reduction and robust feature
selection for cucumber leaf diseases recognition. Optik 2021, 240, 166566. [CrossRef]
24. Mary, N.A.B.; Singh, A.R.; Athisayamani, S. Classification of Banana Leaf Diseases Using Enhanced Gabor Feature Descriptor. In
Inventive Communication and Computational Technologies; Springer: Berlin/Heidelberg, Germany, 2020; pp. 229–242.
25. Sugiarti, Y.; Supriyatna, A.; Carolina, I.; Amin, R.; Yani, A. Model Naïve Bayes Classifiers For Detection Apple Diseases. In
Proceedings of the 2021 9th International Conference on Cyber and IT Service Management (CITSM), Bengkulu, Indonesia,
22–23 September 2021.
26. Mukhopadhyay, S.; Paul, M.; Pal, R.; De, D. Tea leaf disease detection using multi-objective image segmentation. Multimed. Tools
Appl. 2021, 80, 753–771. [CrossRef]
27. Chen, M.; Tang, Y.; Zou, X.; Huang, K.; Huang, Z.; Zhou, H.; Wang, C.; Lian, G. Three-dimensional perception of orchard banana
central stock enhanced by adaptive multi-vision technology. Comput. Electron. Agric. 2020, 174, 105508. [CrossRef]
28. Li, Q.; Jia, W.; Sun, M.; Hou, S.; Zheng, Y. A novel green apple segmentation algorithm based on ensemble U-Net under complex
orchard environment. Comput. Electron. Agric. 2021, 180, 105900. [CrossRef]
29. Cao, X.; Yan, H.; Huang, Z.; Ai, S.; Xu, Y.; Fu, R.; Zou, X. A Multi-Objective Particle Swarm Optimization for Trajectory Planning
of Fruit Picking Manipulator. Agronomy 2021, 11, 2286. [CrossRef]
30. Anagnostis, A.; Tagarakis, A.C.; Asiminari, G.; Papageorgiou, E.; Kateris, D.; Moshou, D.; Bochtis, D. A deep learning approach
for anthracnose infected trees classification in walnut orchards. Comput. Electron. Agric. 2021, 182, 105998. [CrossRef]
31. Abbas, A.; Jain, S.; Gour, M.; Vankudothu, S. Tomato plant disease detection using transfer learning with C-GAN synthetic
images. Comput. Electron. Agric. 2021, 187, 106279. [CrossRef]
32. Xiang, S.; Liang, Q.; Sun, W.; Zhang, D.; Wang, Y. L-CSMS: Novel lightweight network for plant disease severity recognition. J.
Plant Dis. Prot. 2021, 128, 557–569. [CrossRef]
33. Tan, L.; Lu, J.; Jiang, H. Tomato Leaf Diseases Classification Based on Leaf Images: A Comparison between Classical Machine
Learning and Deep Learning Methods. AgriEngineering 2021, 3, 542–558. [CrossRef]
34. Atila, Ü.; Uçar, M.; Akyol, K.; Uçar, E. Plant leaf disease classification using EfficientNet deep learning model. Ecol. Inform. 2021,
61, 101182. [CrossRef]
35. Mishra, M.; Choudhury, P.; Pati, B. Modified ride-NN optimizer for the IoT based plant disease detection. J. Ambient Intell.
Humaniz. Comput. 2021, 12, 691–703. [CrossRef]
36. Liu, X.; Li, B.; Cai, J.; Zheng, X.; Feng, Y.; Huang, G. Colletotrichum species causing anthracnose of rubber trees in China. Sci. Rep.
2018, 8, 10435. [CrossRef] [PubMed]
37. Wu, H.; Pan, Y.; Di, R.; He, Q.; Rajaofera, M.J.N.; Liu, W.; Zheng, F.; Miao, W. Molecular identification of the powdery mildew
fungus infecting rubber trees in China. Forest Pathol. 2019, 49, e12519. [CrossRef]
38. Jocher, G.; Stoken, A.; Borovec, J.; Christopher, S.T.; Laughing, L.C. Ultralytics/yolov5: V4.0-nn.SILU() Activations, Weights &
Biases Logging, Pytorch Hub Integration. Zenodo 2021. [CrossRef]
39. Li, D.; Hu, J.; Wang, C.; Li, X.; She, Q.; Zhu, L.; Zhang, T.; Chen, Q. Involution: Inverting the inherence of convolution for visual
recognition. arXiv 2021, arXiv:2103.06255.
40. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141.
41. Zhang, Y.-F.; Ren, W.; Zhang, Z.; Jia, Z.; Wang, L.; Tan, T. Focal and Efficient IOU Loss for Accurate Bounding Box Regression.
arXiv 2021, arXiv:2101.08158.