Article
Automatic Defect Detection of Jet Engine Turbine and
Compressor Blade Surface Coatings Using a Deep
Learning-Based Algorithm
Md Hasib Zubayer 1, Chaoqun Zhang 1,2,*, Wen Liu 3,*, Yafei Wang 1 and Haque Md Imdadul 4
1 School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
2 School of Materials, Sun Yat-sen University, and Southern Marine Science and Engineering Guangdong
Laboratory (Zhuhai), Shenzhen 518107, China
3 AI Technology Innovation Group, School of Economics and Management, Communication University
of China, Beijing 100024, China
4 School of Mechanical Engineering, Shenyang Aerospace University, Shenyang 110136, China
* Correspondence: [email protected] or [email protected] (C.Z.);
[email protected] (W.L.)
Abstract: The application of additive manufacturing (AM) in the aerospace industry has led to the
production of very complex parts like jet engine components, including turbine and compressor
blades, that are difficult to manufacture using any other conventional manufacturing process but can
be manufactured using the AM process. However, defects like nicks, surface irregularities, and edge
imperfections can arise during the production process, potentially affecting the operational integrity
and safety of jet engines. Aiming at the problems of poor accuracy and below-standard efficiency in
existing methodologies, this study introduces a deep learning approach using the You Only Look
Once version 8 (YOLOv8) algorithm to detect surface, nick, and edge defects on jet engine turbine and
compressor blades. The proposed method achieves high accuracy and speed, making it a practical
solution for detecting surface defects in AM turbine and compressor blade specimens, particularly
in the context of quality control and surface treatment processes in AM. The experimental findings
confirmed that, in comparison to earlier automatic defect recognition procedures, the YOLOv8 model
effectively detected nicks, edge defects, and surface defects in the turbine and compressor blade
dataset, attaining an elevated level of accuracy in defect detection, reaching up to 99.5% in just 280 s.

Keywords: additive manufacturing; deep learning; gas turbine and compressor blades; defect
detection; image processing
like the US Air Force, invested massively due to its distinctive attributes, including dimin-
ished material waste, facilitation of lightweight designs, decreased reliance on assembly
via component consolidation, and the capacity to manufacture intricately complex compo-
nents. These advantages collectively result in reduced fuel consumption and cost savings,
a consequence of streamlined certification processes [5].
Traditional defect detection methods, such as visual inspection, are labor-intensive,
time-consuming, and subjective; they require skilled personnel to perform the inspection and may
not always provide precise results. In addition, non-destructive testing (NDT) tech-
niques [6], optical methods (OM) [7], and engine borescope inspection [8], though precise,
are significantly costly due to the necessity of not only certified tools but also engineers
with specialized licenses [9]. In the recent past, image-based analytic tools, such as deep
machine learning (DML), have shown considerable promise and extensive potential in im-
age recognition tasks, including object detection, which are applicable in industry to enhance
quality control in production systems and have yielded significant outcomes in the quality
assurance of surface coating defect detection.
Several studies have delved into the application of deep learning for failure detection
across various manufacturing settings, particularly focusing on the outer layers of additive
manufacturing (AM) parts, including turbine and compressor blades for jet engines [10–12].
However, the majority of these investigations have predominantly utilized traditional
convolutional neural networks (CNNs), which are not optimal for spotting subtle and
asymmetrical flaws. In the Malta et al. [13] study, a CNN algorithm was trained to detect
surface defects on automotive engine components. The results demonstrated
the model’s ability to accurately identify specific components in live video monitoring.
Aust et al. [14] devised a technique for identifying edge defects in high-pressure compressor
blades utilizing small datasets, employing traditional computer vision methods, followed
by defect feature point calculation and clustering via the DBSCAN algorithm, although
this method is primarily limited to edge defect detection.
In a separate study by Matthias et al. [15], a multi-scale technique spanning various length
scales was presented, utilizing the extended finite element method (XFEM) for analyzing
surface defects in three-dimensional space. To validate the model, its application was
focused on the quadrature area of a jet turbine and compressor blade, resulting in effective
computation and analysis of crack growth. In a research study by Yoon et al. [16], an
analysis was conducted on the defects found in gas turbine blades within a cogeneration
plant. Scanning electron microscopy (SEM) was implemented to obtain photographs,
revealing that cracks were initiated as a result of concentrated stresses around preexisting
defects. Liu et al. [17] analyzed the FEM-based deep vibration analysis method in the gas
turbine rotor blades dataset to detect potential locations of dynamic crack propagation.
He et al. [18] proposed an improved cascade Mask R-CNN deep learning-based
methodology aimed at achieving accurate edge failure detection in turbine blades. In the
field of nick damage detection in rotating 3D blade-like structural elements, Buezas et al. [19]
employed genetic algorithms (GAs) to implement multi-layer detection. Advancements
in continuum robotics have enabled researchers, as mentioned in references [20,21], to
design snake-like robots capable of autonomously exploring the intricate inner spaces of gas
turbines. Morini et al. [22] conducted a specialized study on compressor fouling, addressing
associated issues and potential solutions. Furthermore, in order to optimize the blade
defect analysis process, Li et al. [23] focused on utilizing measurement parameters, while
Zhou et al. [24] emphasized the establishment of mapping and fault index relationships.
In the context of object detection using the YOLO network, Redmon et al. [25] ap-
proached the issue of defect detection as a regression problem. The method involves direct
regression of the target’s bounding box from varied locations within the input dataset,
leading to a substantial increase in detection speed while maintaining precision. Concur-
rently, Hui et al. [26] developed an advanced model leveraging the YOLOv4 framework to
detect cracks in jet engine blades. This enhancement incorporated an attention mechanism
within the architecture network to augment background differentiation and enhanced multi-
scale feature fusion through the application of bilinear interpolation, thereby significantly
elevating detection capabilities.

In the pursuit of achieving 100% accuracy within a short detection time, intensive efforts
have been devoted to various deep learning-based defect detection algorithms, and the
YOLOv8 model has been distinguished as the preeminent version, surpassing its predecessors
in efficacy as corroborated by a multitude of studies referenced in the literature
reviews [10,27,28]. The principal aim was the pragmatic application of the YOLOv8
architecture, which, despite its baseline proficiency, required bespoke alterations to
accommodate the unique defect typologies inherent to AM specimens. Concomitantly, the model
underwent further customization to proficiently identify defects that manifest
post-production, including but not limited to surface coatings, nicks, and edge deformities,
particularly in the context of AM-fabricated components such as aero-engine compressor
and turbine blades.

In this research, the innovative application of YOLOv8, one of the most widely used
state-of-the-art object detection frameworks, was explored within the context of aero-engine
turbine and compressor blade defect detection, significantly enhancing inspection
speed, accuracy, and reliability over traditional methods. The proposed methodology
seeks to enhance the detection of surface, nick, and edge defects on jet engine turbine and
compressor blades, as demonstrated in Figure 1. In pursuit of this objective, the study is
designed to achieve the following aims:
1. To better understand the relationship between additive manufacturing, surface defect
detection in jet engine turbine and compressor blades, and deep learning, we conduct a
comprehensive comparative analytical review of deep learning architectures proposed by
previous researchers for image analysis of gas engine turbine and compressor blades.
2. To implement a deep learning-based YOLOv8 algorithm for identifying surface flaws
on the turbine and compressor blades of jet engines that have been produced using
conventional and AM methods.
3. To assess the execution of the proposed approaches in detecting defects, a dataset
consisting of turbine and compressor blade images is employed in this research investigation.

Figure 1. Surface, nick, and edge defects observed in the manufacturing of turbine blades using AM.
2. Methodology
In this research, a novel deep learning methodology was developed for identifying
surface defects in jet engine turbine and compressor blades, utilizing the advanced YOLOv8
algorithm. The approach is distinguished by the incorporation of both conventionally
manufactured and additively manufactured blades, enhancing the dataset’s diversity and
accuracy. This study marks the first application of YOLOv8 for defect detection in turbine
blades, demonstrating the model's adaptability and effectiveness in a new, high-stakes
domain. The novelty of this work lies in the targeted adaptation and optimization of
YOLOv8 for aerospace component surface inspection, a critical area previously unexplored
by this technology. The contributions include a bespoke dataset tailored for turbine blade
defects and significant enhancements to the model's performance in this specific context.
This research advances the field of aerospace manufacturing and maintenance, offering
a pioneering approach to defect detection that promises to improve quality assurance
and operational safety in jet engines. The methodology section involved three main
components: dataset preparation, deep learning architecture and algorithm selection,
and training process and evaluation. A schematic research flowchart is illustrated below
in Figure 2.

Figure 2. Schematic experimental flowchart of the research approach.
2.1. Dataset Preparation
For this research, an openly accessible compilation of multiple datasets (refer to Table 1)
containing 302 augmented images derived from 151 original images of jet engine turbine
and compressor blade surfaces featuring various surface defects was utilized. To evaluate
the efficacy of the proposed YOLOv8 model, a dataset comprising turbine blade images
from diverse origins was employed to assess the model's proficiency in achieving 100%
accuracy in detecting nick, edge, and surface defects while utilizing a minimal dataset
within the shortest possible detection time frame. The dataset was prepared by taking
high-resolution images of the turbine and compressor blades and manually annotating the
surface defects by creating a bounding box surrounding each defect area using a software
tool named Roboflow (https://roboflow.com/auto-label (accessed on 14 April 2024)). The
annotated images were then split into train, validation, and test sets. The training dataset is
focused on training the required deep learning experimental model, while the validation
set is used to select the best architecture based on model performance. The images used for
the dataset are of the three defect types found in turbine and compressor blades and of
non-defective turbine and compressor blades, as illustrated in Figure 3. In particular, inside
the dataset, the orange square labels reflect the areas of nicks, red denotes surface defects,
and the purple squares signify edge defects.
Table 1. Dataset source.

Image Acquisition | Study | Reference
16 | Engine blade LLP status from engine data information. | [29]
12 | Compressor blade damage assessment algorithm dataset. | [30]
18 | A survey on the aero-engine blade processing techniques by AM. | [31]
14 | Additive design and manufacturing of jet engine components. | [32]
15 | Gas turbine blade fault detection by XCT-ray computed tomography model. | [33]
44 | Evaluating risk assessment of engine blades visual inspection. | [34,35]
16 | The Blade Runners Investigation. | [36]
16 | Investigation of failure mechanism in a turbofan blade. | [37]
Figure 3. Sample dataset images for the deep learning training. (a) Nick marks as orange, (b) red
denotes the area of surface defects, and (c) purple squares signify edge defects.
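Bounding-box annotations exported for YOLO-family models, for example from Roboflow, are typically stored as one plain-text label file per image, where each line holds a class index followed by a normalized box. The Python sketch below only illustrates that convention; the class ordering, folder layout, and file name are assumptions for the example rather than the authors' exact export settings.

```python
from pathlib import Path

# Hypothetical class order for this sketch; the dataset's own data.yaml
# defines the real class-index mapping.
CLASS_NAMES = {0: "nick", 1: "surface_defect", 2: "edge_defect"}

def read_yolo_labels(label_path: Path):
    """Parse one YOLO-format label file: each line is
    'class_id x_center y_center width height', with coordinates
    normalized to the 0-1 range relative to the image size."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue
        cls, xc, yc, w, h = line.split()
        boxes.append((CLASS_NAMES.get(int(cls), int(cls)),
                      float(xc), float(yc), float(w), float(h)))
    return boxes

# Example: inspect the labels of one annotated blade image (path is illustrative).
for box in read_yolo_labels(Path("dataset/train/labels/blade_001.txt")):
    print(box)
```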
of these percentages is based on ensuring that the model has sufficient data for effective
training while also providing sufficient unseen data for validation and testing, meeting
both the model’s needs and the project’s specific requirements. The larger training set (67%)
eases the model’s learning development by exposing it to a diverse range of turbine and
compressor blade defects, for example, aiding in defect recognition and feature extraction.
The validation subset (18%) plays a crucial role in adjusting the model’s hyperparameters
and preventing overfitting by assessing its performance on unseen data during training.
It helps in optimizing the model performance by providing feedback on how well the
model is generalizing from the training data. Lastly, the testing dataset (15%) remains
entirely separate from both the training and validation sets, allowing an independent
evaluation of the model's generalization to new, unseen data, thereby gauging its real-
world performance.
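As a rough illustration of the 67/18/15 partitioning described above, the following sketch shuffles the annotated images and copies them into train, validation, and test folders. The directory names and file extension are assumptions for the example; the actual split in this study was produced during dataset preparation (e.g., via Roboflow), and matching label files would be copied in the same way.

```python
import random
import shutil
from pathlib import Path

def split_dataset(image_dir: str, out_dir: str, seed: int = 0):
    """Shuffle the annotated images and copy them into train/valid/test
    folders using the approximate 67/18/15 split described in the text."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)

    n = len(images)
    n_train = int(0.67 * n)
    n_val = int(0.18 * n)
    splits = {
        "train": images[:n_train],
        "valid": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],   # remaining ~15%
    }
    for split, files in splits.items():
        dest = Path(out_dir) / split / "images"
        dest.mkdir(parents=True, exist_ok=True)
        for img in files:
            shutil.copy(img, dest / img.name)   # labels copied analogously
    return {k: len(v) for k, v in splits.items()}

print(split_dataset("all_images", "dataset"))
```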
in Table 2. The accuracy of the proposed approach was systematically evaluated using
globally recognized established metrics pertinent to computer vision deep learning image
processing, specifically within the context of object detection models [44]. In evaluating the
performance of the YOLOv8 model on both training and validation sets, several key metrics
are employed to provide comprehensive insights into the model’s precision (P), recall (R),
and overall efficacy. These metrics include F1_curve, PR_curve, P_curve, and R_curve, as
illustrated in Figure 5, which are standard metrics used to evaluate object detection models.
During this phase, adjustments to hyperparameters like the number of epochs or model
fine-tuning may be undertaken to optimize performance. The choice of learning rate, set
at 0.001, was crucial for balancing the speed of convergence and the stability of training.
This process is replicated until the model accomplishes satisfactory performance on the
validation set. Fine-tuning encompasses meticulous adjustments to hyperparameters and
potential architectural modifications. These adjustments can encompass variables such as
batch size and epoch count in the model’s layers. Following successful fine-tuning and
the attainment of satisfactory performance on the validation dataset, the model undergoes
comprehensive testing to determine its ultimate efficacy. An essential aspect of this phase
involves a comparative analysis of testing results against the model’s performance on the
validation dataset, ensuring overfitting is mitigated.
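A training run along these lines can be launched with the Ultralytics YOLOv8 Python API, as sketched below. Only the 640 × 640 input size and the 0.001 learning rate come from the text; the model variant, epoch count, batch size, and dataset configuration file name are placeholder assumptions.

```python
from ultralytics import YOLO

# Placeholder assumptions: model size, epochs, batch size, and data YAML name.
model = YOLO("yolov8s.pt")          # pretrained YOLOv8 checkpoint
results = model.train(
    data="blade_defects.yaml",      # dataset config listing train/val paths and classes
    imgsz=640,                      # image size used in this study
    lr0=0.001,                      # initial learning rate, as stated in the text
    epochs=100,                     # illustrative epoch count
    batch=16,                       # illustrative batch size
)
```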
Figure 5. Metrics for training and validation sets. (a) F1_curve; (b) PR_curve; (c) P_curve; (d) R_curve.

In the case of surface defect analysis of the jet engine compressor and turbine blade,
the training process would involve feeding a dataset of annotated images of turbine and
compressor blades with both defective and non-defective surfaces into the YOLOv8 archi-
tecture. The model would adjust its weights during training in response to the training
data, and the process would be repeated for a fixed number of epochs. The process of
weight adjustment, a cornerstone of the YOLOv8 learning algorithm, plays a paramount
role in the architecture's ability to incrementally refine its predictive accuracy. This intri-
cate process is undergirded by the principles of backpropagation and sophisticated opti-
mization algorithms, facilitating a methodical adjustment of weights in accordance with
the gradient of the loss function. During the model's training phase, a forward propaga-
tion mechanism is employed to generate predictions based on input data, followed by the
computation of loss using a predefined function that quantifies the deviation between
these predictions and the actual labels. This recursive process is designed to minimize
loss, thereby progressively ameliorating the model's predictive performance across suc-
cessive epochs.

Figure 5a showcases the F1-score, a critical metric derived as the harmonic mean
of precision and recall, plotted against varying confidence thresholds for distinct defect
classifications: edge defect, nick, and surface defect. This plotting elucidates the model's
performance spectrum across diverse defect types alongside a consolidated assessment of
its overall efficacy across all classes. In addition, the precision–recall curve, as shown in
Figure 5b, elucidates the model's precision against recall, devoid of the direct influence
of confidence thresholds. The precision–confidence (Figure 5c) and recall–confidence
curves (Figure 5d) elaborate on how the model's precision and recall metrics evolve with
fluctuating confidence levels, showcasing a sophisticated balance between these metrics, as
highlighted by an aggregate precision score of 1 with a standard deviation of 0.458.
This intricate analysis underscores the model's commendable performance, yet it
also illuminates significant optimization avenues, especially in bolstering the model's
consistency across varying defect classifications and confidence thresholds. After training,
the model would be evaluated on a separate validation set of annotated images to determine
its performance and identify areas for improvement. This might involve fine-tuning the
model or modifying the architecture by changing hyperparameters.
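The forward-propagation, loss-computation, and backpropagation cycle described above is handled internally by the YOLOv8 trainer; the minimal PyTorch sketch below only illustrates the principle with a stand-in classifier and dummy tensors, not the actual detection loss or architecture.

```python
import torch
from torch import nn

# Stand-in model and data; in the real pipeline the Ultralytics trainer
# performs this loop internally with YOLOv8's detection loss.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3))
loss_fn = nn.CrossEntropyLoss()                 # predefined loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

images = torch.randn(8, 3, 64, 64)              # dummy annotated batch
labels = torch.randint(0, 3, (8,))              # nick / surface / edge ids

for epoch in range(5):                          # fixed number of epochs
    preds = model(images)                       # forward propagation
    loss = loss_fn(preds, labels)               # deviation from the actual labels
    optimizer.zero_grad()
    loss.backward()                             # backpropagation of gradients
    optimizer.step()                            # methodical weight adjustment
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```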
Following the attainment of satisfactory metrics on the validation set, it will be tested
on a separate testing set of annotated images to evaluate its final performance. The testing
results would be compared to the performance on the validation dataset to verify that
the model has not overfitted the validation set and to obtain an estimate of the model’s
generalization performance. In the assessment of object detection performance, Intersection
over Union (IoU) serves as a critical metric by comparing the ground truth bounding box with
the predicted one, as articulated in Equation (1). Within the domain of defect detection,
IoU assumes the role of quantifying the congruence between two bounding boxes, thereby
evaluating the concurrence of ground truth and prediction regions in defect detection tasks.
Precision (P), denoting the accuracy of predictions emanating from the model’s per-
formance, and recall (R), reflecting the model’s ability to detect all possible positive cases
among top-priority predictions, are mathematically represented by Equations (2) and (3).
These metrics are pivotal in gauging the model’s precision-recall trade-off and its effective-
ness in capturing relevant instances. Average precision (AP) is utilized as the calculating
metric for turbine and compressor blade defect detection, while the mean average preci-
sion (mAP) and mAP@0.5:0.95 [45] are used to assess the model's overall performance,
analyzed through Equations (4)–(6). Here, a true positive (TP) is a correctly detected defect,
a false positive (FP) means a defect was reported in a non-defect area (a false detection),
and a false negative (FN) means a ground-truth defect present in the dataset was missed
or its exact defect type was not identified.
Precision (P) = True Positive (TP) / (True Positive (TP) + False Positive (FP))    (4)

Recall (R) = True Positive (TP) / (True Positive (TP) + False Negative (FN))    (5)

mAP@0.5:0.95 = (mAP0.50 + mAP0.55 + ... + mAP0.95) / N    (6)
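The sketch below restates Equations (4)–(6), together with the IoU overlap measure referenced in Equation (1), as plain Python functions; the box format and the example numbers are illustrative only.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2):
    overlap area divided by union area."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision(tp, fp):
    return tp / (tp + fp)            # Equation (4)

def recall(tp, fn):
    return tp / (tp + fn)            # Equation (5)

def map_50_95(map_values):
    """Equation (6): mean of mAP computed at IoU thresholds
    0.50, 0.55, ..., 0.95 (N = 10 thresholds)."""
    return sum(map_values) / len(map_values)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # ~0.143
print(precision(tp=95, fp=5), recall(tp=95, fn=5))
```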
Figure 6. YOLOv8 performance evaluation. (a) Confusion matrix; (b) Performance results.
In the training dataset results presented in the top row of Figure 6b, an assortment of
metrics provides a detailed account of the model’s proficiency through the training phase.
The train/box_loss metric, indicative of the model’s accuracy in bounding box predictions,
shows a declining trend, suggesting enhanced capability in localizing objects within the
which is consistent with the increased difficulty in maintaining precision across more
stringent IOU thresholds. The bottom row, delineating the validation losses such as
val/box_loss, val/cls_loss, and val/dfl_loss, offers insights into the model's generalizabil-
ity. These metrics, expectedly higher than their training counterparts, should ideally par-
allel the downward trends observed in training.
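Loss curves such as those summarized in Figure 6b can be reproduced from the results file that the Ultralytics trainer typically writes to the run directory; the path and exact column names in this sketch are assumptions that may vary between library versions.

```python
import pandas as pd
import matplotlib.pyplot as plt

# The Ultralytics trainer usually writes a results.csv into the run folder.
df = pd.read_csv("runs/detect/train/results.csv")
df.columns = [c.strip() for c in df.columns]          # headers may carry spaces

for col in ["train/box_loss", "val/box_loss"]:
    if col in df.columns:
        plt.plot(df["epoch"], df[col], label=col)

plt.xlabel("epoch")
plt.ylabel("bounding-box loss")
plt.legend()
plt.title("Box loss on training and validation sets")
plt.savefig("box_loss_curves.png")
```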
The accurate identification of samples among the total samples defines accuracy,
while precision represents the percentage of accurately identified faulty surfaces within a
set of samples flagged as positive. Additionally, the recall denotes the percentage of accu-
rate predictions in relation to the total validation predictions. Incorporating both precision
and quantity of accurate predictions, the F1-score reaches a maximum value of 1. Notably,
the confusion matrix demonstrates a 100% detection accuracy for nick, edge, and surface
defects (see Figure 6a). Some faults in turbine and compressor blades were picked up by
the YOLOv8 algorithm, as depicted in Figure 7 below.

Figure 7. Surface defects detected by the YOLOv8 algorithm.
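Detections such as those shown in Figure 7 can be generated by running the trained weights on a blade image; the weight and image paths in this sketch are hypothetical.

```python
from ultralytics import YOLO

# Hypothetical paths: trained weights from the run directory and one test image.
model = YOLO("runs/detect/train/weights/best.pt")
results = model.predict("dataset/test/images/blade_017.jpg", imgsz=640, conf=0.25)

for box in results[0].boxes:               # one result object per input image
    cls_name = model.names[int(box.cls)]   # e.g., nick, surface defect, edge defect
    print(cls_name, float(box.conf), box.xyxy.tolist())
```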
Validation determines the YOLOv8 model's performance using multiple datasets,
where it identifies surface, edge, and nick flaws with an average value of 0.9, derived from
the turbine and compressor blade training model. By testing the algorithm on a validation
set, one can evaluate its potential for standardization and make the required modifications
to enhance its performance. Table 3 provides a detailed analysis of the algorithm's test,
where the mean average precision (mAP) was 0.99, and the accuracy values for surface,
nick, and edge defects were up to 0.91, 0.91, and 1, respectively. All of the detected images
had 640 × 640 pixels.
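Validation on a held-out split can be scripted with the Ultralytics API as sketched below; the weights path and the attribute names for the reported metrics are assumptions that may differ slightly across library versions.

```python
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")           # hypothetical weights path
metrics = model.val(data="blade_defects.yaml", imgsz=640)   # evaluate on the validation split

# Box-detection metrics reported by recent Ultralytics releases.
print("mAP@0.5     :", metrics.box.map50)
print("mAP@0.5:0.95:", metrics.box.map)
print("per-class mAP:", metrics.box.maps)
```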
Overall, the YOLOv8 gas turbine and compressor blade defect detection algorithm
performed excellently, identifying metal AM turbine and compressor blade defects such
as nicks, edge defects, and surface defects as a novel approach. To extend
its applicability to additional surface defect types, a similar methodology can be employed.
This involves annotating new defect types in the dataset and subsequently training the
YOLOv8 model to enhance its capabilities for identifying and classifying these new types
of defects.
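One way to realize such an extension, under the assumption that the newly annotated classes are added to a revised dataset configuration, is to fine-tune the existing weights rather than train from scratch; all names and values below are illustrative.

```python
from ultralytics import YOLO

# Hypothetical: start from the blade-defect weights and fine-tune on a dataset
# whose YAML lists the original classes plus the newly annotated defect types.
model = YOLO("runs/detect/train/weights/best.pt")
model.train(data="blade_defects_extended.yaml", epochs=50, imgsz=640, lr0=0.0005)
```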
In evaluating the most effective method for detecting aero-engine blade defects, it
is crucial to assess the efficacy, accuracy, and applicability of NDT and optical
methods in comparison to the proposed deep learning-based YOLOv8 algorithms. While
non-destructive testing and optical methods are traditional and reliable, they each have
limitations either in the type of defects they can detect or in their operational efficiency. The
deep learning-based YOLOv8 algorithm, on the other hand, offers a comprehensive solution
that can continuously evolve and adapt, providing high accuracy and real-time detection
capabilities. Therefore, for the specific case of inspecting surface coating issues, edges,
and nicks in aero-engine blades, the YOLOv8 deep learning approach is generally more
advantageous, especially in scenarios where high throughput and precision are required.
This study found that the proposed deep learning strategy based on the YOLOv8
method surpassed existing methodologies in accuracy and reliability. However, the method
did have a few limitations, like the small size of the dataset, which may limit the model’s
applicability in other contexts. When it comes to machine learning, the quality and amount
of data used for training the model are crucial to its success. Overfitting, where the model
becomes overly particular to the examples in the dataset and fails to generalize adequately
to new, unseen examples, can occur when the dataset is small.
The computational complexity of the YOLOv8 architecture was another shortcom-
ing of the proposed solution. The computational complexity of an algorithm describes
how much time and memory the program needs to execute. Real-time detection is crucial
for ensuring safety and preventing faults from producing catastrophic failures when it
comes to identifying surface defects on jet engine turbine and compressor blades.
4. Conclusions
In deep learning, numerous layers of artificial neural networks are used to learn elabo-
rate representations of data. When applied to tasks like image classification, object identi-
fication, and natural language processing, deep learning has proven to be remarkably effec-
tive. In this research, a YOLOv8-based deep learning approach was devised for identifying
flaws in the surface of jet engine turbine and compressor blades. The great precision and
speed attained by the suggested technology make it a viable option for quality control in
the aerospace jet engine industry. The findings show that deep learning-based methods can
be utilized to detect surface defects in jet engine turbine and compressor blades produced
by using additive manufacturing with high accuracy and efficiency.
The experimental findings suggested that the YOLOv8 model adequately detected
nick, edge, and surface defects in the turbine and compressor blade dataset, attaining an
optimized defect detection precision of up to 99.5% within just 280 s in comparison to
former automatic defect identification techniques. The suggested approach can immensely
minimize the cost and time required for defect identification when contrasted to other
existing methods. In addition, the proposed method can be executed in real time, enabling
instantaneous problem detection and rectification all through the production cycle.
In the future, researchers may look at generalizing the model to detect flaws in dif-
ferent types of components and expanding the dataset to cover a broader spectrum of
defects. In order to discover and rectify flaws in the manufacturing process in real time, the
proposed technology can be incorporated into an automated flaw detection system. Despite
stringent controls, manual annotation of defects introduces a possibility of human error,
potentially affecting the uniformity and dependability of the training inputs. Additionally,
the reliance on high-resolution imagery raises concerns about the model’s performance
degradation when deployed with lower-quality images commonly encountered in practical
settings. To further boost the accuracy and acceleration of the model, the suggested method
Author Contributions: Conceptualization, supervision, funding acquisition, review and editing, C.Z.;
Methodology, validation, software development and formal analysis, M.H.Z.; Review and editing,
Y.W.; Review and editing, validation, funding acquisition, W.L.; Image acquisition, re-validation,
H.M.I. All authors have read and agreed to the published version of the manuscript.
Funding: The authors are grateful for the financial support from the Innovation Group Project of
Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (Grant No. 311021013).
This work was also supported by funds from the State Key Laboratory of Clean and Efficient
Turbomachinery Power Equipment (Grant No. 16ZR1417100), the State Key Laboratory of Long-Life
High Temperature Materials (Grant No. DTCC28EE190933), the National Natural Science Foundation
of China (Grant No. 51605287), the Natural Science Foundation of Shanghai (Grant No. 16ZR1417100),
the Fundamental Research Funds for the Central Universities (Grant No. E3E40808), and the National
Natural Science Foundation of China (Grant No. 71932009).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: This study did not involve human participants, or human data.
However, the research employed datasets consisting of publicly available, de-identified images
of jet engine turbine and compressor blades for the purpose of developing and validating a deep
learning model.
Data Availability Statement: Data available upon request.
Acknowledgments: The authors would like to acknowledge Yawei He, School of Media and Com-
munication, Shanghai Jiao Tong University, for YOLO experimental technology support for the open
access database to conduct this research article.
Conflicts of Interest: The authors whose names are mentioned certify that they have NO affilia-
tions with or involvement in any organization or entity with any financial interests or professional
relationships and affiliations in the subject matter discussed in this manuscript.
References
1. Uriondo, A.; Esperon-Miguez, M.; Perinpanayagam, S. The present and future of additive manufacturing in the aerospace sector:
A review of important aspects. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2015, 229, 2132–2147. [CrossRef]
2. Simpson, T.W. Book Review: Additive Manufacturing for the Aerospace Industry. Am. Inst. Aeronaut. Astronaut. 2020, 58,
1901–1902. [CrossRef]
3. Clijsters, S.; Craeghs, T.; Buls, S.; Kempen, K.; Kruth, J.P. In situ quality control of the selective laser melting process using a
high-speed, real-time melt pool monitoring system. Int. J. Adv. Manuf. Technol. 2014, 75, 1089–1101. [CrossRef]
4. Bourell, D.L.; Leu, M.C.; Rosen, D.W. Roadmap for Additive Manufacturing: Identifying the Future of Freeform Processing; The
University of Texas at Austin: Austin, TX, USA, 2009; pp. 11–15.
5. Milewski, J.O.; Milewski, J.O. Additive Manufacturing Metal, the Art of the Possible; Springer: Berlin/Heidelberg, Germany, 2017.
6. Brown, M.; Wright, D.; M’Saoubi, R.; McGourlay, J.; Wallis, M.; Mantle, A. Destructive and non-destructive testing methods for
characterization and detection of machining-induced white layer: A review paper. CIRP J. Manuf. Sci. Technol. 2018, 23, 39–53.
[CrossRef]
7. Xie, L.; Lian, Y.; Du, F.; Wang, Y.; Lu, Z. Optical methods of laser ultrasonic testing technology in the industrial and engineering
applications: A review. Opt. Laser Technol. 2024, 176, 110876. [CrossRef]
8. Shang, H.; Sun, C.; Liu, J.; Chen, X.; Yan, R. Deep learning-based borescope image processing for aero-engine blade in-situ
damage detection. Aerosp. Sci. Technol. 2022, 123, 107473. [CrossRef]
9. Kim, Y.H.; Lee, J. Videoscope-based inspection of turbofan engine blades using convolutional neural networks and image
processing. Struct. Health Monit. 2019, 18, 2020–2039. [CrossRef]
10. Li, X.; Wang, W.; Sun, L.; Hu, B.; Zhu, L.; Zhang, J. Deep learning-based defects detection of certain aero-engine blades and vanes
with DDSC-YOLOv5s. Sci. Rep. 2022, 12, 13067. [CrossRef]
11. Shen, Z.; Wan, X.; Ye, F.; Guan, X.; Liu, S. Deep learning based framework for automatic damage detection in aircraft engine
borescope inspection. In Proceedings of the 2019 International Conference on Computing, Networking and Communications
(ICNC), Honolulu, HI, USA, 18–21 February 2019.
12. Yixuan, L.; Dongbo, W.; Jiawei, L.; Hui, W. Aeroengine Blade Surface Defect Detection System Based on Improved Faster RCNN.
Int. J. Intell. Syst. 2023, 2023, 1992415. [CrossRef]
13. Malta, A.; Mendes, M.; Farinha, T. Augmented reality maintenance assistant using yolov5. Appl. Sci. 2021, 11, 4758. [CrossRef]
14. Aust, J.; Shankland, S.; Pons, D.; Mukundan, R.; Mitrovic, A. Automated defect detection and decision-support in gas turbine
blade inspection. Aerospace 2021, 8, 30. [CrossRef]
15. Holl, M.; Rogge, T.; Loehnert, S.; Wriggers, P.; Rolfes, R. 3D multiscale crack propagation using the XFEM applied to a gas turbine
blade. Comput. Mech. 2014, 53, 173–188. [CrossRef]
16. Yoon, W.N.; Kang, M.S.; Jung, N.K.; Kim, J.S.; Choi, B. Failure analysis of the defect-induced blade damage of a compressor in the
gas turbine of a cogeneration plant. Int. J. Precis. Eng. Manuf. 2012, 13, 717–722. [CrossRef]
17. Liu, B.; Tang, L.; Liu, T.; Liu, Z.; Xu, K. Blade health monitoring of gas turbine using online crack detection. In Proceedings of the
2017 Prognostics and System Health Management Conference (PHM-Harbin), Harbin, China, 9–12 July 2017.
18. Zhang, H.; Chen, J.; Yang, D. Fibre misalignment and breakage in 3D printing of continuous carbon fibre reinforced thermoplastic
composites. Addit. Manuf. 2021, 38, 101775. [CrossRef]
19. Buezas, F.S.; Rosales, M.B.; Filipich, C. Damage detection with genetic algorithms taking into account a crack contact model. Eng.
Fract. Mech. 2011, 78, 695–712. [CrossRef]
20. Wang, Y.; Ju, F.; Cao, Y.; Yun, Y.; Bai, D.; Chen, B. An aero-engine inspection continuum robot with tactile sensor based on EIT
for exploration and navigation in unknown environment. In Proceedings of the 2019 IEEE/ASME International Conference on
Advanced Intelligent Mechatronics (AIM), Hong Kong, China, 8–12 July 2019.
21. Dong, X.; Axinte, D.; Palmer, D.; Cobos, S.; Raffles, M.; Rabani, A. Development of a slender continuum robotic system for
on-wing inspection/repair of gas turbine engines. Robot. Comput. Manuf. 2017, 44, 218–229. [CrossRef]
22. Morini, M.; Pinelli, M.; Spina, P.R.; Venturini, M. Numerical analysis of the effects of nonuniform surface roughness on compressor
stage performance. J. Eng. Gas Turbines Power 2011, 133, 072402. [CrossRef]
23. Li, Y.G. Gas turbine performance and health status estimation using adaptive gas path analysis. J. Eng. Gas Turbines Power 2010,
132, 041701. [CrossRef]
24. Zhou, D.; Wei, T.; Huang, D.; Li, Y.; Zhang, H. A gas path fault diagnostic model of gas turbines based on changes of blade
profiles. Eng. Fail. Anal. 2020, 109, 104377. [CrossRef]
25. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
26. Hui, T.; Xu, Y.; Jarhinbek, R. Detail texture detection based on Yolov4-tiny combined with attention mechanism and bicubic
interpolation. IET Image Process. 2021, 15, 2736–2748. [CrossRef]
27. Hussain, M.J.M. YOLO-v1 to YOLO-v8, the rise of YOLO and its complementary nature toward digital manufacturing and
industrial defect detection. Machines 2023, 11, 677. [CrossRef]
28. Li, S.; Yu, J.; Wang, H. Damages detection of aeroengine blades via deep learning algorithms. IEEE Trans. Instrum. Meas. 2023, 72,
1–11. [CrossRef]
29. Zaretsky, E.V.; Litt, J.S.; Hendricks, R.C.; Soditus, S. Determination of turbine blade life from engine field data. J. Propuls. Power
2012, 28, 1156–1167. [CrossRef]
30. Zhang, X.; Li, W.; Liou, F. Damage detection and reconstruction algorithm in repairing compressor blade by direct metal
deposition. Int. J. Adv. Manuf. Technol. 2018, 95, 2393–2404. [CrossRef]
31. Sinha, A.; Swain, B.; Behera, A.; Mallick, P.; Samal, S.K.; Vishwanatha, H.; Behera, A. A review on the processing of aero-turbine
blade using 3D print techniques. J. Manuf. Mater. Process. 2022, 6, 16. [CrossRef]
32. Han, P. Additive design and manufacturing of jet engine parts. Engineering 2017, 3, 648–652. [CrossRef]
33. Błachnio, J.; Chalimoniuk, M.; Kułaszka, A.; Borowczyk, H.; Zasada, D. Exemplification of detecting gas turbine blade structure
defects using the x-ray computed tomography method. Aerospace 2021, 8, 119. [CrossRef]
34. Aust, J.; Pons, D. Methodology for evaluating risk of visual inspection tasks of aircraft engine blades. Aerospace 2021, 8, 117.
[CrossRef]
35. Aust, J.; Pons, D. Assessment of aircraft engine blade inspection performance using attribute agreement analysis. Safety 2022,
8, 23. [CrossRef]
36. Kellner, T. The Blade Runners: This Factory Is 3D Printing Turbine Parts for the World’s Largest Jet Engine. 2018. Available
online: https://www.ge.com/additive/stories/cameri-factory-3d-printing-turbine-parts-worlds-largest-jet-engine (accessed
on 20 March 2018).
37. Mishra, R.; Thomas, J.; Srinivasan, K.; Nandi, V.; Bhatt, R. Investigation of HP turbine blade failure in a military turbofan engine.
Int. J. Turbo Jet-Engines 2017, 34, 23–31. [CrossRef]
38. Wang, C.Y.; Liao, H.Y.M.; Wu, Y.H.; Chen, P.Y.; Hsieh, J.W.; Yeh, I.H. CSPNet: A new backbone that can enhance learning
capability of CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle,
WA, USA, 14–19 June 2020.
39. Bolya, D.; Zhou, C.; Xiao, F.; Lee, Y.J. Yolact: Real-time instance segmentation. In Proceedings of the IEEE/CVF International
Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019.
40. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings
of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
41. Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
42. Cao, Y.; Chen, K.; Loy, C.C.; Lin, D. Prime sample attention in object detection. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020.
43. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object
detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada,
17–24 June 2023.
44. Everingham, M.; Van, L.; Williams, C.K.; Winn, J.; Zisserman, A. The pascal visual object classes (voc) challenge. Int. J. Comput.
Vis. 2010, 88, 303–338. [CrossRef]
45. Zhao, W.; Huang, H.; Li, D.; Chen, F.; Cheng, W. Pointer defect detection based on transfer learning and improved cascade-RCNN.
Sensors 2020, 20, 4939. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.