Article
A Computer Vision-Based Automatic System for Egg Grading
and Defect Detection
Xiao Yang, Ramesh Bahadur Bist, Sachin Subedi and Lilong Chai *
Simple Summary: Egg defects such as cracks, dirty spots on the eggshell, and blood spots inside
the egg can decrease the quality and market value of table eggs. To address this issue, an automatic
method based on computer vision technology was developed for grading eggs and determining
defects in a cage-free facility. A two-stage model was developed based on RTMDet and random
forest networks for predicting egg category and weight in this study. Results show that the best
classification accuracy reached 94–96%.
Abstract: Defective eggs diminish the value of laying hen production, particularly in cage-free sys-
tems with a higher incidence of floor eggs. To enhance quality, machine vision and image processing
have facilitated the development of automated grading and defect detection systems. Additionally,
egg measurement systems utilize weight-sorting for optimal market value. However, few studies
have integrated deep learning and machine vision techniques for combined egg classification and
weighing. To address this gap, a two-stage model was developed based on real-time multitask
detection (RTMDet) and random forest networks to predict egg category and weight. The model
uses convolutional neural network (CNN) and regression techniques to perform joint egg
classification and weighing. RTMDet was used to sort and extract egg features for classification, and
a Random Forest algorithm was used to predict egg weight based on the extracted features (major
axis and minor axis). The results of the study showed that the best achieved accuracy was 94.8% and
best R2 was 96.0%. In addition, the model can be used to automatically exclude non-standard-size
eggs and eggs with exterior issues (e.g., calcium deposit, stains, and cracks). This detector is among
the first models that perform the joint function of egg-sorting and weighing eggs, and is capable of
classifying them into five categories (intact, crack, bloody, floor, and non-standard) and measuring
them up to jumbo size. By implementing the findings of this study, the poultry industry can reduce
costs and increase productivity, ultimately leading to better-quality products for consumers.

Citation: Yang, X.; Bist, R.B.; Subedi, S.; Chai, L. A Computer Vision-Based Automatic System for Egg Grading and Defect Detection. Animals 2023, 13, 2354. https://doi.org/10.3390/ani13142354

Keywords: laying hen production; egg quality; defect detection; egg weight; deep learning
grade eggs automatically has the potential to improve the efficiency and quality
of the egg production process, leading to higher-quality eggs for consumers and increased
market value for producers.
Egg weight is another important aspect of egg quality associated with the egg grade
and market value [7]. The manual measurement of eggs on a digital scale is a time-
consuming and tedious process. To improve the efficiency of the egg weighing process,
automated egg measurement systems have been developed. Payam et al. (2011) used the
ANFIS model to predict egg weight from the number of egg pixels, achieving an
R-squared (R2) of 0.98 [8], which is more efficient and accurate than manual methods.
Jeerapa et al. (2017), using the Support Vector Machine (SVM) technique to predict the weight of
brown chicken eggs from a single egg image, yielded a correlation coefficient of 0.99 [9].
Raoufat et al. (2010) built a computer vision system to measure egg weights using artificial
neural networks (ANN); their algorithms showed a high accuracy (R2 = 0.96) [10].
Previous works in this area primarily focused on using computer vision techniques
such as convolutional neural networks (CNNs) and image classification algorithms for
egg classification [11,12]. These methods have shown promising results in classifying eggs
based on their size, shape, and color. However, few studies have combined deep learning
and machine learning regression techniques for joint egg classification and weighing,
especially studies including floor eggs collected from cage-free poultry farms, an important
category among real-world egg types, which range from floor eggs to commercial eggs. This
can be useful for producers who want to ensure consistent quality across all types of eggs
and consumers who want to purchase high-quality eggs. Another reason for this is that the
egg industry is shifting from caged to cage-free housing [13–16]. Therefore, introducing floor eggs is
beneficial for application in the cage-free egg industry.
In this study, an automatic system was developed at the University of Georgia,
aiming to fill this gap by integrating deep learning and supervised machine learning
technologies to perform joint egg classification and weighing. The system uses an updated
and powerful CNN, called real-time multitask detection (RTMDet), to extract egg features
for classification [17], and a classic Random Forest (RF) algorithm to regress egg-weight data
based on the extracted features [18]. The objectives of this study were as follows: (1) develop an
egg classifier to sort eggs through their size and surface; (2) build a regressor to predict egg
weights through their geometrical attributes; (3) combine egg-sorting and the measuring of
egg weights into one two-stage model; (4) test the model with standard eggs and second
eggs. This two-stage model is expected to result in improved accuracy and efficiency
compared to existing methods.
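In outline, the two-stage flow described above amounts to: classify the egg and extract its axes, then regress weight from those axes. The following glue-code sketch illustrates that flow; all names and interfaces here are hypothetical stand-ins, not the authors' code:

```python
def grade_egg(image, detector, regressor):
    """Sketch of a two-stage egg grading flow with hypothetical interfaces.

    Stage 1: a detector (standing in for RTMDet) returns the egg category
    plus the major and minor axis lengths extracted from the image.
    Stage 2: a regressor (standing in for the Random Forest model) predicts
    the weight from those two geometric features.
    """
    category, major_axis, minor_axis = detector(image)
    weight = regressor([[major_axis, minor_axis]])[0]
    return category, weight
```

Any classifier returning a (category, major axis, minor axis) triple and any regressor with a list-in, list-out interface would slot into this structure.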
Figure 2. The classification of cage-free eggs and visualization of standard egg sizes (g).
2.2. Egg Samples Acquisition System
An egg sample collection system was constructed to collect images and weights of different classes of eggs at the Department of Poultry Science at the University of Georgia (UGA), USA. Figure 3 demonstrates the egg sample acquisition setup, including the camera, tripod, egg base, computer, and digital scale. Details are shown in Table 1. The system is designed to accurately collect and record data on the different classes of eggs. The camera, which is mounted on a tripod, takes images of the eggs placed on the designated egg base. The digital scale measures the weight of the eggs, and the computer stores the data.
Figure 3. The egg samples' acquisition system for classifying eggs (a) and weighing eggs (b): (1) camera; (2) tripod; (3) egg base; (4) computer; (5) digital scale.
Figure 5. The structure of egg classification based on RTMDet architecture.

2.4.1. Large-Kernel Depth-Wise Convolution Approach
Large-kernel depth-wise convolutions involve the use of more extensive filters in depth-wise convolutional layers within a convolutional neural network (CNN) [22]. The purpose of using these larger kernels is to gain a better understanding of the contextual information contained in the input data and enhance the representation power of the model. Depth-wise convolutions are frequently utilized in CNNs to reduce computational complexity and boost efficiency. Nevertheless, they have limitations in capturing significant scale context and spatial information. With the use of large-kernel depth-wise convolutions, this constraint can be overcome. The advantages of using large-kernel depth-wise convolutions include improved model ability when applied to real-world objects, a more comprehensive capturing of the data and their surroundings, and enhanced accuracy on benchmark datasets. In the context of egg classification, this approach allows for a more comprehensive analysis of various parameters, including egg size, eggshell type, and other spatial characteristics. Furthermore, large-kernel depth-wise convolutions allow for a reduction in the number of parameters and computation, while still delivering a similar performance to models with more parameters.

2.4.2. Soft Labels
In deep learning, soft labels refer to the use of continuous, rather than binary, values as target outputs. The purpose of using soft labels is to provide the model with additional information and to encourage smoothness in the model predictions [19,23]. By employing soft labels, the model can generate predictions that provide more subtlety and precision in the classification task. Instead of solely assigning eggs to specific classes with binary labels, the soft labels enable the model to express varying degrees of confidence or probabilities for each class. This allows for a more detailed understanding of the eggs' characteristics and their association with different classes. In addition, the use of soft labels can result in more robust models because the model is able to discover correlations between the input data and the desired outputs, even if the relationship is not obvious. In our study, soft labels are applied in problems with multi-class classification or multi-label classification (i.e., unclean eggs, standard eggs, and non-standard eggs), where the model must predict the presence of multiple target classes [24,25]. In addition, on the basis of simplified optimal transport assignment (SimOTA), an advanced cost function calculation for soft labels was presented to reduce training loss, and its loss function is described below.

f(C) = α1 × f(C_cls) + α2 × f(C_reg)    (1)

where f(C) is the loss function, f(C_cls) is the classification loss, f(C_reg) is the regression loss, and the two coefficients, α1 and α2, were set empirically.

f(C_cls) = CE(P, Y_soft) × (Y_soft − P)²    (2)

where CE(P, Y_soft) represents the cross-entropy (CE) loss between the predicted probabilities (P) and the soft labels (Y_soft).

f(C_reg) = −log(IoU)    (3)

where −log(IoU) means the negative logarithm of the intersection over union (IoU).
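As a concrete illustration, the three cost terms can be combined in a few lines of NumPy. This is a sketch rather than the authors' implementation: the exact cross-entropy variant and the empirical values of α1 and α2 are not given in the text, so ordinary binary cross-entropy and illustrative weights are assumed.

```python
import numpy as np

def soft_label_cost(p, y_soft, iou, a1=1.0, a2=3.0):
    """Combined soft-label cost in the spirit of Eqs. (1)-(3).

    p: predicted class probabilities; y_soft: soft (continuous) targets;
    iou: intersection over union of the predicted and true boxes.
    a1 and a2 are illustrative weights; the paper only states that the two
    coefficients were set empirically.
    """
    eps = 1e-9
    # Eq. (2): binary cross-entropy against the soft labels, down-weighted
    # where the prediction already matches the target.
    ce = -(y_soft * np.log(p + eps) + (1.0 - y_soft) * np.log(1.0 - p + eps))
    f_cls = np.sum(ce * (y_soft - p) ** 2)
    # Eq. (3): regression cost as the negative logarithm of the IoU.
    f_reg = -np.log(iou + eps)
    # Eq. (1): weighted sum of the classification and regression costs.
    return a1 * f_cls + a2 * f_reg
```

The cost vanishes when the predicted probabilities equal the soft targets and the boxes overlap perfectly, and grows as either term degrades.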
2.5. Egg Weight Prediction Method
Predicting egg weight through computer vision leads to several challenges that must be addressed. One of the challenges is the accuracy of measurements of the egg's dimensions, such as the major and minor axes. This is due to the difficulty of obtaining high-quality images or accurately identifying and measuring the egg in the image. Another obstacle is the diversity in the shapes and sizes of eggs (small–jumbo), which requires the implementation of complex machine learning algorithms that can account for various factors, including eggshell color, shape, size, and birth date, that may affect egg weight.
Random Forest Regression is utilized for egg-weight prediction due to its ability to handle complex, non-linear relationships between features and target variables using an ensemble learning method that combines predictions from multiple decision trees, which are trained on randomly selected subsets of the data. This combination reduces variance and enhances the overall accuracy of the model. Furthermore, Random Forest can handle missing or incomplete data and perform effectively when there is a combination of continuous and categorical variables [18,26]. Lastly, feature importance scores are provided by Random Forest, which helps determine the most significant factors that contribute to egg weight prediction. The structure of RF is shown below (Figure 6) [27].
Figure 6. Random forest algorithm.
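A minimal sketch of such a regressor using scikit-learn's RandomForestRegressor follows. The axis lengths and weights below are synthetic stand-ins for the paper's measured data, with weight simulated from the ellipsoid volume implied by the two axes plus noise:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured dataset (values invented for
# illustration): major/minor axis lengths in mm, weight in g simulated
# from the implied ellipsoid volume plus measurement noise.
major = rng.uniform(50, 70, 500)
minor = rng.uniform(38, 50, 500)
weight = 0.55e-3 * major * minor ** 2 + rng.normal(0.0, 1.0, 500)

X = np.column_stack([major, minor])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, weight)

r2 = rf.score(X, weight)                  # coefficient of determination (R^2)
importances = rf.feature_importances_     # relative contribution of each axis
```

The `feature_importances_` attribute provides the importance scores mentioned above, indicating which geometric feature contributes most to the weight prediction.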
Figure 8. The processes of calculating egg parameters: (a) original image; (b) binary image; (c) geometric image.
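The binary-image-to-geometric-parameters step of Figure 8 can be approximated from image moments (a PCA of the foreground pixel coordinates). The paper does not specify its exact implementation, so this is one plausible sketch:

```python
import numpy as np

def egg_axes(mask):
    """Estimate major/minor axis lengths (in pixels) of an egg from a
    binary mask, via the second central moments of the foreground pixels.

    One plausible implementation of the Figure 8 step; the paper's exact
    method is not given. For an ideal filled ellipse, 4 * sqrt(eigenvalue)
    of the coordinate covariance equals the full axis length.
    """
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)                       # center the shape
    cov = np.cov(pts.T)                           # 2x2 covariance of coordinates
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return 4.0 * np.sqrt(evals[0]), 4.0 * np.sqrt(evals[1])
```

The two returned lengths are the inputs the Random Forest regressor would consume, after conversion from pixels to millimeters with a camera calibration factor.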
2.7. Performance Evaluation
In this research, a dataset was created using 2100 egg images, which were then
randomly divided into training and testing sets with a ratio of 4:1. To better analyze
and compare performance across egg classes, the confusion matrix was created to derive
standard parameters in classification tasks [29]. The confusion matrix is a two-dimensional
table that summarizes the RTMDet model's performance by comparing the predicted and
actual class labels. Each row of the matrix represents occurrences in a predicted class, while
each column represents instances in an actual class. The elements of the confusion matrix
represent the number of cases identified correctly versus incorrectly. The four elements
of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN)
are used to calculate evaluation metrics such as precision, recall, F1-score, and average
precision (AP) for egg grading in deep learning [30,31]. To further explore the performance
of Random Forest, coefficient of determination (R2 ) is utilized to evaluate the goodness of
fit of the regression model.
precision = TP / (TP + FP)    (4)

recall = TP / (TP + FN)    (5)

F1-score = 2 × (precision × recall) / (precision + recall)    (6)

AP = ∫₀¹ p(r) dr    (7)

R2 = 1 − SSres / SStot    (8)

where SSres represents the residual sum of squares and SStot means the total sum of squares.
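The four confusion-matrix elements and Equations (4)–(6) translate directly into code. A small sketch, assuming the convention stated above of rows as predicted classes and columns as actual classes:

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall and F1 per class from a confusion matrix (Eqs. 4-6).

    cm[i, j] counts samples predicted as class i whose true class is j,
    matching the rows-as-predictions convention described in the text.
    """
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=1) - tp      # predicted as this class, but true class differs
    fn = cm.sum(axis=0) - tp      # true class is this one, but predicted otherwise
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Average precision (Eq. 7) additionally requires the full precision-recall curve over detection score thresholds, so it is not computable from a single confusion matrix.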
3. Results
3.1. CNN Model Comparison
Four individual experiments (RTMDet-s, RTMDet-m, RTMDet-l and RTMDet-x) were
conducted to discover the optimal classifier for egg-sorting. All experiments were trained
for 300 epochs using Python 3.7, the PyTorch deep learning library, and hardware
with an NVIDIA (16 GB) graphics card. A summary of the model comparison is listed
below (Table 2). In terms of accuracy, RTMDet-x reached an accuracy of 94.80%, which was
better than any other comparison model. Correspondingly, the training loss and validation
loss values of RTMDet-x were also the smallest among all the tested models, because lower
loss values mean smaller errors in neural networks. In terms of floating-point operations
per second (FLOPS), RTMDet-s, with fewer parameters, has the lowest FLOPS of the
compared methods, which means it requires less computational time to perform a
forward or backward pass in a neural network, and therefore has broader potential
application in robots with limited computational resources [32]. In addition, RTMDet-x
also outperformed every other comparison model in [email protected] and [email protected], owing to
the additional parameters available for classification. Figure 9
shows the detailed comparison results of the model indicators for different deep learning
classifiers. These findings demonstrated that RTMDet-x achieved the best performance in
terms of egg classification.
Model Accuracy (%) [email protected] (%) [email protected] (%) Params (M) FLOPS (G) Training Loss
RTMDet-s 67.8 55.8 52.3 8.89 14.8 0.30
RTMDet-m 75.6 62.6 60.1 24.71 39.27 0.23
RTMDet-l 86.1 72.1 64.8 52.3 80.23 0.21
RTMDet-x 94.8 79.2 69.1 94.86 141.67 0.12
Figure 9. Model comparison: (a) accuracy, (b) [email protected], (c) [email protected] and (d) training loss.
Figure 10. Confusion matrix of classifiers for different types of eggs ((a–d) represent RTMDet-s, RTMDet-m, RTMDet-l and RTMDet-x, respectively).
The prediction results are shown in the confusion matrix, where the gradually
changing shade of blue represents the accuracy of true predictions (cells filled with deeper
blue have more accurate predictions). The number in each cell represents the results of the
models [33]. The average true scores (along the diagonal line from the top-left corner of
the matrix to the bottom-right corner) of RTMDet-x are the highest among the whole
confusion matrix of classifiers, which indicates that RTMDet-x has a better true prediction
rate. The scores off the diagonal (false scores) represent the instances where the predicted
class does not match the true class. The average false scores of RTMDet-s are higher than
those of other classifiers, which means its performance could be improved. In terms of
type error, no type error was observed in the classes of bloody eggs and floor eggs. The
reason for this is their distinctive characteristics; for example, bloody eggs have clear blood
spots and only floor eggs have a litter background. However, when classifiers detect
standard, non-standard, and cracked eggs, some errors exist due to the
similarities within the minor axis and major axis, and the difficulties in detecting
microcracks and cracks located on the bottom or sides not shown by the camera [34].
However, the results were still acceptable because there are not many non-standard eggs
or cracked eggs on commercial poultry farms (varying between 1 and 5% of the total) [35].
In general, the RTMDet-x classifier is the best experimental classifier with the highest
accuracy. In addition, to visualize how RTMDet-x classifies eggs and extracts feature
maps, heatmap and gradient-weighted class-activation mappings were outputted (Figure
11). To understand the model’s decision-making process and identify important regions
in the input images, the gradient-weighted class activation mapping (Grad-CAM)
technique was utilized [36]. Grad-CAM produces a heatmap that highlights the regions
contributing significantly to the model’s predictions. By extracting the feature map from
the last convolutional layer of the input egg image, a Grad-CAM heatmap is created. The
feature map channels are then weighted using a class gradient computed with respect to
the feature map. This weighting process emphasizes regions that strongly influence the
model’s predictions. Experimental findings demonstrate the CNN-based model’s ability
to effectively extract features from areas with blood spots and broken parts, even when
the defects are minor. This showcases the model’s capacity to accurately identify egg
abnormalities and make precise predictions.
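The Grad-CAM weighting described here (channel weights from globally averaged gradients, a weighted sum of feature maps, then ReLU and normalization) can be sketched as follows. The feature maps and gradients are taken as given; in practice they come from a forward and backward pass through the CNN:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from a conv layer's activations and class gradients.

    feature_maps: (C, H, W) activations of the last convolutional layer.
    gradients: (C, H, W) gradients of the class score w.r.t. those activations.
    Both are passed in directly here; obtaining them from a real model
    requires framework-specific hooks and is omitted from this sketch.
    """
    weights = gradients.mean(axis=(1, 2))              # channel importance
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # keep positive influence only
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam
```

The resulting (H, W) map is what gets upsampled and overlaid on the input image to highlight regions such as blood spots or cracked shell areas.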
Figure 11. Visualization of CNN: (a) original image, (b) heatmap and (c) gradient-weighted map.

3.3. Results of Weighing Eggs
To further test the model under egg scales ranging from small to jumbo, 100 pictures were randomly selected from each category to test the robustness and precision of the regressor. The results are shown in Figure 13. The error bar at the top of each stacked bar graph represents the standard error of each class, and the height of the green bar represents the absolute error between the real and predicted weights. From the graph, we can see that the height of the error bar for small, medium and jumbo eggs is higher than that for large and extra-large eggs, which indicates that the regressor has a better prediction performance for large and extra-large eggs. This may be because the large and extra-large eggs have medium values according to the regression model; in a large dataset, the relationship between the predictor variables and the response variables is more complex, resulting in the risk of overfitting and more prohibitive computational costs. However, data in the medium range of values may be less affected by measurement error or other types of noise than very small or very large values [39,40]. This can help to improve the accuracy of the regressor predictions. In addition, for some types of data, preprocessing can be simplified for medium values. For example, scaling or normalization may not be as critical for medium values as it is for very small or very large values. In addition, medium values may be complex enough to require a more sophisticated model, but not so complex that the model becomes difficult to interpret. This can help strike a balance between model performance and interpretability.
Figure 13. Egg weight prediction from small to jumbo.
4. Discussions

4.1. Discussion of Egg Classification Accuracy
In this study, five classes of eggs were investigated to build a classifier to sort eggs. For floor and bloody eggs, there is no misunderstanding in the classification of them and other classes. This is due to the clear features of floor and bloody eggs [41]. For floor eggs, the eggs are laid in the litter, so, in computer vision, the white eggs are surrounded by brown litter, which is a unique feature compared to other egg classes. This improves the egg classifier's accuracy when sorting them. As for bloody eggs, because of the red spots that appear on white eggshells, there is a clear indicator that the CNN model can use to extract feature maps, and the egg classifier also has a high sorting accuracy. More false classifications are obtained for standard, non-standard and cracked eggs. This is because the classifier uses the minor and major axes to differentiate egg size, and non-standard eggs have more abnormal shapes, such as being too long or too round, which means there might be unusual minor and major axes that the classifier misunderstands [5]. In addition, cracked eggs are also not easy for the classifier to detect. This is due to the limitations of camera angles. In this study, we only use the front view of eggs for egg classification tasks. Therefore, some cracks on the back or side of the eggshell might be ignored, and cracked eggs will be classified as other types of eggs.
To further discuss the performance of the classifier, we compare our study with various other pieces of research. Table 3 shows the results of some studies conducted on the classification of eggs using computer vision and compares these with the results obtained in the present study. Pyiyadumkol et al. (2017) developed a sorting system based on the machine vision technique to identify cracks in unwashed eggs [42]. The egg images were captured under atmospheric and vacuum pressure. The cracks were detected using the difference between images taken under atmospheric and vacuum conditions. A combination of machine vision methods and the support vector
taken under and the support vector
atmospheric and machine
vacuum(SVM) classifier
conditions. A
combination of machine vision methods and the support vector machine (SVM) classifier
was presented in Wu et al. (2017) to detect intact and cracked eggs [43]. Guanjun et al.
(2019) introduced a machine vision-based method for cracked egg detection [44]. A
negative Laplacian of Gaussian (LoG) operator, hysteresis thresholding method, and a
local fitting image index were used to identify crack regions. Amin et al. (2020) proposed
Animals 2023, 13, 2354 15 of 19
was presented in Wu et al. (2017) to detect intact and cracked eggs [43]. Guanjun et al. (2019)
introduced a machine vision-based method for cracked egg detection [44]. A negative
Laplacian of Gaussian (LoG) operator, hysteresis thresholding method, and a local fitting
image index were used to identify crack regions. Amin et al. (2020) proposed a CNN model
using hierarchical architecture to classify unwashed egg images based on three classes,
namely intact, bloody, and broken [45]. In our study, we introduced more classes, floor
and non-standard eggs, to cover all the normal egg categories while maintaining a high
level of accuracy through the use of the large-kernel depth-wide convolution approach and
soft labels, and cooperation with other optimizations such as anchor-free object detection
and deformable convolutional networks, which further improve accuracy and efficiency in
multi-classification tasks.
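The pressure-differencing approach of Priyadumkol et al. [42], cited above, can be illustrated with a minimal, stdlib-only sketch: subtracting the vacuum capture from the atmospheric capture and thresholding the residual highlights pixels that change when the shell deforms under vacuum. The synthetic frames and the 30-grey-level threshold below are hypothetical illustrations, not the calibration used in the original system.

```python
def crack_candidates(img_atm, img_vac, thresh=30):
    """Boolean mask of pixels whose brightness shifts by more than
    `thresh` between the atmospheric and vacuum captures; cracked
    regions deform under vacuum, so they stand out in the difference."""
    return [
        [abs(a - v) > thresh for a, v in zip(row_a, row_v)]
        for row_a, row_v in zip(img_atm, img_vac)
    ]

# Tiny synthetic 8-bit grayscale frames: one crack pixel darkens under vacuum.
atm = [[200, 200, 200],
       [200, 200, 200]]
vac = [[200, 120, 200],   # crack pixel deforms and darkens
       [200, 200, 200]]
print(crack_candidates(atm, vac))
# → [[False, True, False], [False, False, False]]
```

A real implementation would first register the two frames, since any misalignment between captures also produces large residuals.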
4.3. Discussion of Jointly Performing Egg-Sorting and Weighting Functions
In our study, we combine the egg classification and weighting tasks in one two-stage model. The approach is to train two distinct models, one for classification and one for regression, and then combine their predictions at the time of inference. First, a classification model is trained to predict each input's egg class label. Then, using the predicted class labels to filter the inputs, a regression model is trained on only the filtered inputs. The egg classification model sorts the eggs while the corresponding regression model predicts their weight at the same time (Figure 14). The overall performance of the two-stage model is good, but other factors restrict its application, including potential errors in filtering and increased complexity. The classification model is used to filter the regression model's inputs; if its predictions are inaccurate, it may erroneously exclude inputs that the regression model could have used, reducing the accuracy of the final prediction. In addition, the two-stage approach requires training two distinct models and additional processing steps at inference time to combine the predictions, which makes the overall architecture more complicated and increases the required computational resources.
Figure 14. The egg has been classified as 'Standard' and its predicted weight is 66.7 g.
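The classify-then-weigh flow described above can be sketched as follows. The class labels mirror the paper, but the stand-in models are illustrative placeholders: an axis-ratio rule stands in for the trained RTMDet classifier and an ellipsoid-volume heuristic stands in for the random forest regressor, with a hypothetical ratio band and coefficient.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EggFeatures:
    major_mm: float   # major axis extracted by the detector
    minor_mm: float   # minor axis extracted by the detector

def classify(egg: EggFeatures) -> str:
    """Stand-in classifier: flag unusually long or round eggs as
    non-standard. The 1.15-1.45 axis-ratio band is hypothetical."""
    ratio = egg.major_mm / egg.minor_mm
    return "standard" if 1.15 <= ratio <= 1.45 else "non-standard"

def predict_weight(egg: EggFeatures) -> float:
    """Stand-in regressor: ellipsoid-volume heuristic w = k * a * b^2,
    with k chosen arbitrarily; the paper trains a random forest instead."""
    k = 5.2e-4
    return k * egg.major_mm * egg.minor_mm ** 2

def sort_and_weigh(egg: EggFeatures) -> tuple[str, Optional[float]]:
    """Stage 1 filters; stage 2 weighs only eggs that pass the filter."""
    label = classify(egg)
    weight = predict_weight(egg) if label == "standard" else None
    return label, weight

label, weight = sort_and_weigh(EggFeatures(major_mm=57.0, minor_mm=44.0))
print(label, round(weight, 1))   # → standard 57.4
```

The gating in `sort_and_weigh` is also where the filtering error discussed above enters: a misclassified standard egg never reaches the regressor, so no weight is produced for it.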
4.4. Future Studies
Despite the high performance of this research in sorting egg quality based on egg surface and weight, several further steps could help the model be applied in real-world situations: (a) using emerging nonvolatile memory (NVM) to reduce the memory footprint and latency [49], which is crucial for mobile applications; (b) extending the model to egg datasets with more diversity (other egg colors, multiple eggs per image, and other species) to match the deployment environment; (c) using a 360-degree camera to prevent misidentification of cracked and bloody eggs; (d) optimizing the sorting and weighing process to reduce the time required to complete the task without sacrificing accuracy; and (e) enhancing the accuracy of egg segmentation by leveraging the segment-anything model [50].
5. Conclusions
In this study, a two-stage model was developed based on RTMDet and random forest
networks to predict egg category and weight. The results show that the best classification
accuracy was 94.8% and the best R2 of the weight regression model was 96.0%. The model can be installed
on the egg-collecting robot to sort eggs in advance and collect our target eggs specifically.
In addition, the model can be used to automatically pick out non-standard size eggs and
eggs with surface defects (blood-stained or broken). Furthermore, 1000 egg pictures were
utilized to test the detector’s performance for different egg types and egg weight scales.
The results showed that the detector has a better classification performance for standard
and non-standard size eggs, and large (55–60 g) and extra-large (60–65 g) egg weights
led to more reliable predictions. This detector is one of the first models that performs
the joint function of egg sorting and weighting. By implementing the findings of this
study, the poultry industry can reduce costs and increase productivity, ultimately leading
to better-quality products for consumers.
Author Contributions: Methodology, X.Y. and L.C.; validation, X.Y.; formal analysis, X.Y.; investi-
gation, X.Y., R.B.B., S.S. and L.C.; resources, L.C.; writing—original draft, X.Y. and L.C.; funding
acquisition, L.C. All authors have read and agreed to the published version of the manuscript.
Funding: The study was sponsored by the USDA-NIFA AFRI (2023-68008-39853), Egg Industry
Center; Georgia Research Alliance (Venture Fund); Oracle America (Oracle for Research Grant, CPQ-
2060433); University of Georgia (UGA) CAES Dean’s Office Research Fund; UGA Rural Engagement
Seed Grant & UGA Global Engagement fund.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data will be available per reasonable request.
Conflicts of Interest: All authors declare no conflict of interest.
References
1. Nematinia, E.; Abdanan Mehdizadeh, S. Assessment of Egg Freshness by Prediction of Haugh Unit and Albumen PH Using an
Artificial Neural Network. Food Meas. 2018, 12, 1449–1459. [CrossRef]
2. Patel, V.C.; McClendon, R.W.; Goodrum, J.W. Crack Detection in Eggs Using Computer Vision and Neural Networks. AI Appl.
1994, 8, 21–31.
3. Patel, V.C.; Mcclendon, R.W.; Goodrum, J.W. Color Computer Vision and Artificial Neural Networks for the Detection of Defects
in Poultry Eggs. In Artificial Intelligence for Biology and Agriculture; Panigrahi, S., Ting, K.C., Eds.; Springer: Dordrecht, The
Netherlands, 1998; pp. 163–176. ISBN 978-94-011-5048-4.
4. Omid, M.; Soltani, M.; Dehrouyeh, M.H.; Mohtasebi, S.S.; Ahmadi, H. An Expert Egg Grading System Based on Machine Vision
and Artificial Intelligence Techniques. J. Food Eng. 2013, 118, 70–77. [CrossRef]
5. Turkoglu, M. Defective Egg Detection Based on Deep Features and Bidirectional Long-Short-Term-Memory. Comput. Electron.
Agric. 2021, 185, 106152. [CrossRef]
6. Bist, R.B.; Subedi, S.; Chai, L.; Yang, X. Ammonia Emissions, Impacts, and Mitigation Strategies for Poultry Production: A Critical
Review. J. Environ. Manag. 2023, 328, 116919. [CrossRef]
7. Sanlier, N.; Üstün, D. Egg Consumption and Health Effects: A Narrative Review. J. Food Sci. 2021, 86, 4250–4261. [CrossRef]
8. Javadikia, P.; Dehrouyeh, M.H.; Naderloo, L.; Rabbani, H.; Lorestani, A.N. Measuring the Weight of Egg with Image Pro-
cessing and ANFIS Model. In Proceedings of the Swarm, Evolutionary, and Memetic Computing, Andhra Pradesh, India,
19–21 December 2011; Panigrahi, B.K., Suganthan, P.N., Das, S., Satapathy, S.C., Eds.; Springer: Berlin/Heidelberg, Germany,
2011; pp. 407–416.
9. Thipakorn, J.; Waranusast, R.; Riyamongkol, P. Egg Weight Prediction and Egg Size Classification Using Image Processing and
Machine Learning. In Proceedings of the 2017 14th International Conference on Electrical Engineering/Electronics, Computer,
Telecommunications and Information Technology (ECTI-CON), Phuket, Thailand, 27–30 June 2017; pp. 477–480.
10. Asadi, V.; Raoufat, M.H. Egg Weight Estimation by Machine Vision and Neural Network Techniques (a Case Study Fresh Egg).
Int. J. Nat. Eng. Sci. 2010, 4, 1–4.
11. Dong, S.; Wang, P.; Abbas, K. A Survey on Deep Learning and Its Applications. Comput. Sci. Rev. 2021, 40, 100379. [CrossRef]
12. Apostolidis, E.; Adamantidou, E.; Metsai, A.I.; Mezaris, V.; Patras, I. Video Summarization Using Deep Neural Networks: A
Survey. Proc. IEEE 2021, 109, 1838–1863. [CrossRef]
13. Berkhoff, J.; Alvarado-Gilis, C.; Pablo Keim, J.; Antonio Alcalde, J.; Vargas-Bello-Perez, E.; Gandarillas, M. Consumer Preferences
and Sensory Characteristics of Eggs from Family Farms. Poult. Sci. 2020, 99, 6239–6246. [CrossRef]
14. Hansstein, F. Profiling the Egg Consumer: Attitudes, Perceptions and Behaviours. In Improving the Safety and Quality of Eggs and
Egg Products, Vol 1: Egg Chemistry, Production and Consumption; Nys, Y., Bain, M., VanImmerseel, F., Eds.; Woodhead Publ Ltd.:
Cambridge, UK, 2011; pp. 39–61. ISBN 978-0-85709-391-2.
15. Chai, L.; Zhao, Y.; Xin, H.; Richardson, B. Heat Treatment for Disinfecting Egg Transport Tools. Appl. Eng. Agric. 2022, 38, 343–350.
[CrossRef]
16. Lusk, J.L. Consumer Preferences for Cage-Free Eggs and Impacts of Retailer Pledges. Agribusiness 2019, 35, 129–148. [CrossRef]
17. Lyu, C.; Zhang, W.; Huang, H.; Zhou, Y.; Wang, Y.; Liu, Y.; Zhang, S.; Chen, K. RTMDet: An Empirical Study of Designing
Real-Time Object Detectors. arXiv 2022, arXiv:2212.07784.
18. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [CrossRef]
19. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Floor Eggs with Machine Vision in Cage-Free Hen Houses. Poult. Sci. 2023, 102, 102637.
[CrossRef] [PubMed]
20. Zhang, Y.; Li, M.; Ma, X.; Wu, X.; Wang, Y. High-Precision Wheat Head Detection Model Based on One-Stage Network and GAN
Model. Front. Plant Sci. 2022, 13, 787852. [CrossRef]
21. Nazari, Z.; Kang, D.; Asharif, M.R.; Sung, Y.; Ogawa, S. A New Hierarchical Clustering Algorithm. In Proceedings of the 2015
International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Okinawa, Japan, 28–30 November 2015;
IEEE: New York, NY, USA, 2015; pp. 148–152.
22. Zhang, D.; Zhou, F. Self-Supervised Image Denoising for Real-World Images With Context-Aware Transformer. IEEE Access 2023,
11, 14340–14349. [CrossRef]
23. Ma, X.; Karimpour, A.; Wu, Y.-J. Statistical Evaluation of Data Requirement for Ramp Metering Performance Assessment. Transp.
Res. Part A Policy Pract. 2020, 141, 248–261. [CrossRef]
24. Wang, F.; Zhu, L.; Li, J.; Chen, H.; Zhang, H. Unsupervised Soft-Label Feature Selection. Knowl.-Based Syst. 2021, 219, 106847.
[CrossRef]
25. Wang, W.; Wang, Z.; Wang, M.; Li, H.; Wang, Z. Importance Filtered Soft Label-Based Deep Adaptation Network. Knowl.-Based
Syst. 2023, 265, 110397. [CrossRef]
26. Riley, P.C.; Deshpande, S.V.; Ince, B.S.; Hauck, B.C.; O’Donnell, K.P.; Dereje, R.; Harden, C.S.; McHugh, V.M.; Wade, M.M. Random
Forest and Long Short-Term Memory Based Machine Learning Models for Classification of Ion Mobility Spectrometry Spectra. In
Proceedings of the Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) Sensing XXII, Online, 12–16 April 2021;
Volume 11749, pp. 179–187.
27. Khan, M.Y.; Qayoom, A.; Nizami, M.; Siddiqui, M.S.; Wasi, S.; Syed, K.-U.-R.R. Automated Prediction of Good Dictionary
EXamples (GDEX): A Comprehensive Experiment with Distant Supervision, Machine Learning, and Word Embedding-Based
Deep Learning Techniques. Complexity 2021, 2021, 2553199. [CrossRef]
28. Chieregato, M.; Frangiamore, F.; Morassi, M.; Baresi, C.; Nici, S.; Bassetti, C.; Bnà, C.; Galelli, M. A Hybrid Machine Learning/Deep
Learning COVID-19 Severity Predictive Model from CT Images and Clinical Data. Sci. Rep. 2022, 12, 4329. [CrossRef] [PubMed]
29. Wu, H.; Zhu, Z.; Du, X. System Reliability Analysis with Autocorrelated Kriging Predictions. J. Mech. Des. 2020, 142, 101702.
[CrossRef]
30. Yang, X.; Chai, L.; Bist, R.B.; Subedi, S.; Wu, Z. A Deep Learning Model for Detecting Cage-Free Hens on the Litter Floor. Animals
2022, 12, 1983. [CrossRef] [PubMed]
31. Subedi, S.; Bist, R.; Yang, X.; Chai, L. Tracking Pecking Behaviors and Damages of Cage-Free Laying Hens with Machine Vision
Technologies. Comput. Electron. Agric. 2023, 204, 107545. [CrossRef]
32. Jeyakumar, P.; Tharanitaran, N.M.; Malar, E.; Muthuchidambaranathan, P. Beamforming Design with Fully Connected Analog
Beamformer Using Deep Learning. Int. J. Commun. Syst. 2022, 35, e5109. [CrossRef]
33. Li, J.; Sun, H.; Li, J. Beyond Confusion Matrix: Learning from Multiple Annotators with Awareness of Instance Features. Mach.
Learn. 2023, 112, 1053–1075. [CrossRef]
34. Bist, R.B.; Subedi, S.; Chai, L.; Regmi, P.; Ritz, C.W.; Kim, W.K.; Yang, X. Effects of Perching on Poultry Welfare and Production: A
Review. Poultry 2023, 2, 134–157. [CrossRef]
35. Khabisi, M.; Salahi, A.; Mousavi, S. The Influence of Egg Shell Crack Types on Hatchability and Chick Quality. Turk. J. Vet. Anim.
Sci. 2012, 36, 289–295. [CrossRef]
36. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations From Deep Networks
via Gradient-Based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy,
22–29 October 2017; pp. 618–626.
37. Gogo, J.A.; Atitwa, B.E.; Gitonga, C.N.; Mugo, D.M. Modelling Conditions of Storing Quality Commercial Eggs. Heliyon 2021, 7, e07868.
[CrossRef]
38. Kim, T.H.; Kim, J.H.; Kim, J.Y.; Oh, S.E. Egg Freshness Prediction Model Using Real-Time Cold Chain Storage Condition Based on
Transfer Learning. Foods 2022, 11, 3082. [CrossRef] [PubMed]
39. Li, W.; Qian, X.; Ji, J. Noise-Tolerant Deep Learning for Histopathological Image Segmentation. In Proceedings of the 2017 24th
IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; IEEE: New York, NY, USA,
2017; pp. 3075–3079.
40. Radlak, K.; Malinski, L.; Smolka, B. Deep Learning for Impulsive Noise Removal in Color Digital Images. In Proceedings of the
Real-Time Image Processing and Deep Learning 2019, Baltimore, MD, USA, 15–16 April 2019; Kehtarnavaz, N., Carlsohn, M.F.,
Eds.; SPIE: Bellingham, WA, USA, 2019; Volume 10996, p. 1099608.
41. Bist, R.B.; Yang, X.; Subedi, S.; Chai, L. Mislaying Behavior Detection in Cage-Free Hens with Deep Learning Technologies. Poult.
Sci. 2023, 102, 102729. [CrossRef]
42. Priyadumkol, J.; Kittichaikarn, C.; Thainimit, S. Crack Detection on Unwashed Eggs Using Image Processing. J. Food Eng. 2017,
209, 76–82. [CrossRef]
43. Wu, L.; Wang, Q.; Jie, D.; Wang, S.; Zhu, Z.; Xiong, L. Detection of Crack Eggs by Image Processing and Soft-Margin Support
Vector Machine. J. Comput. Methods Sci. Eng. 2018, 18, 21–31. [CrossRef]
44. Guanjun, B.; Mimi, J.; Yi, X.; Shibo, C.; Qinghua, Y. Cracked Egg Recognition Based on Machine Vision. Comput. Electron. Agric.
2019, 158, 159–166. [CrossRef]
45. Nasiri, A.; Omid, M.; Taheri-Garavand, A. An Automatic Sorting System for Unwashed Eggs Using Deep Learning. J. Food Eng.
2020, 283, 110036. [CrossRef]
46. Cen, Y.; Ying, Y.; Rao, X. Egg Weight Detection on Machine Vision System. Proc. SPIE–Int. Soc. Opt. Eng. 2006, 6381, 337–346.
[CrossRef]
47. Alikhanov, D.; Penchev, S.; Georgieva, T.; Moldajanov, A.; Shynybaj, Z.; Daskalov, P. Indirect Method for Egg Weight Measurement
Using Image Processing. Int. J. Emerg. Technol. Adv. Eng. 2015, 5, 30–34.
48. Akkoyun, F.; Ozcelik, A.; Arpaci, I.; Erçetin, A.; Gucluer, S. A Multi-Flow Production Line for Sorting of Eggs Using Image
Processing. Sensors 2023, 23, 117. [CrossRef]
49. Wen, F.; Qin, M.; Gratz, P.; Reddy, N. Software Hint-Driven Data Management for Hybrid Memory in Mobile Systems. ACM
Trans. Embed. Comput. Syst. 2022, 21, 1–8. [CrossRef]
50. Yang, X.; Dai, H.; Wu, Z.; Bist, R.; Subedi, S.; Sun, J.; Lu, G.; Li, C.; Liu, T.; Chai, L. SAM for Poultry Science. arXiv 2023,
arXiv:2305.10254.