An improved fault diagnosis approach for FDM process with acoustic emission
Bo Wu, Huazhong University of Science and Technology
Yan Wang, Georgia Institute of Technology
Abstract
∗ Corresponding author
Email address: [email protected] (Yan Wang)
1. Introduction
product quality assurance. Sensor-based fault diagnostics and prognostics have
been widely applied in other manufacturing equipment, where machine conditions
are monitored by analyzing signals acquired by sensors and extracting features to
identify machine states. The future states can also be predicted from statistical
machine learning models.
Limited research has been done on monitoring the FDM process and machine health. Recently, acoustic emission (AE) has been used to monitor the condition of the FDM process [7, 8, 9, 10]. In other related efforts, optical cameras [11, 12] and infrared cameras [13] have been applied to monitor the extrusion process non-intrusively, whereas a fiber Bragg grating sensor [14] was applied intrusively.
The AE sensor has several advantages and has been applied in traditional manufacturing processes [15, 16, 17, 18, 19]. First, the AE sensor is sensitive to mechanical system dynamics caused by changes of friction, force, vibration, or structural defects, so the signal contains rich information about machine states and their changes. Second, its implementation is simple: the source of the signals can be the machines themselves, and no external stimulation is needed. Third, an AE monitoring system can be made non-intrusive, so no modification of the original equipment is required.
To identify machine states from AE signals, the waveform signals are first processed by signal decomposition methods, such as wavelet analysis [20], empirical mode decomposition [21], and variational mode decomposition [22]. Features are then extracted from the processed signals, and mappings between the extracted features and the patterns to be recognized are established. These mappings are then used to identify machine states when new signals arrive. The major challenges of this pattern recognition process for real-time applications are the large volume of data, caused by the high sampling rate of the sensors, and the high dimensionality of the feature space formed by the various kinds of information extracted from the original signals, both of which lead to a high computational load. The sensitivity and robustness of identification also depend on which features are selected when the dimension of the feature space is reduced.
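To make the generic pipeline concrete, the sketch below decomposes a waveform with PyWavelets and computes simple per-sub-band statistics; the wavelet family, decomposition level, and statistical features are illustrative choices and not those of the cited studies.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_features(signal, wavelet="db4", level=3):
    """Decompose a waveform and collect simple statistics of each sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats += [np.mean(np.abs(band)),  # average magnitude of the sub-band
                  np.std(band),           # spread of the sub-band
                  np.sum(band ** 2)]      # sub-band energy
    return np.asarray(feats)

# Hypothetical 0.1 s segment sampled at 1 MHz
rng = np.random.default_rng(0)
print(wavelet_features(rng.normal(size=100_000)).shape)
```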
In our previous work on AE-based FDM machine monitoring, the original AE waveform signals were simplified as AE hits, which significantly reduces the amount of data to be processed. It was shown that state classification with a support vector machine based on a single time-domain feature is effective [7], and that considering multiple time-domain features further increases detection sensitivity. A hidden Markov model built on signals further reduced by principal component analysis (PCA) can improve efficiency [9].
In this paper, both time- and frequency-domain features of the AE hits are used for state identification. The inclusion of frequency-domain information in the feature vector prevents information loss and provides a more comprehensive and accurate approach for state recognition. With the further increased dimension of the feature space, more accurate classification approaches become helpful. Here, linear discriminant analysis (LDA) [23] is applied to customize the operator for dimensionality reduction. The customization is based on the nature of the AE hits, so that the sensitivity of feature detection is maximized. After the dimension of the features is reduced, the clustering by fast search and find of density peaks (CFSFDP) approach [24] is used for classification.
Standard linear dimension reduction techniques such as PCA reduce the dimension of the feature space with a generic criterion of sample variance. Their use is not optimal when the ultimate goal is classification, where the differentiation power between classes after the data size reduction is what matters. In contrast, in the LDA method, the linear transformation operator for dimension reduction is customized to maximize the differentiation power between classes based on a particular set of data. This approach can provide the optimum projection directions for classification. With the optimum transformation operator, data processing and feature identification based on LDA can be more accurate than those based on traditional PCA.
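To make the distinction concrete, the following sketch on synthetic, hypothetical data contrasts the variance-driven PCA projection with the class-separation-driven LDA projection using their scikit-learn implementations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Two elongated classes that overlap along the high-variance direction
X0 = rng.normal([0.0, 0.0], [5.0, 0.3], size=(200, 2))
X1 = rng.normal([0.0, 1.0], [5.0, 0.3], size=(200, 2))
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 200)

z_pca = PCA(n_components=1).fit_transform(X)                            # keeps the high-variance axis
z_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)  # keeps the class-separating axis
# The classes remain mixed along z_pca but are well separated along z_lda.
```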
In fault diagnosis for the FDM process, conventional supervised classification methods, such as the hidden semi-Markov model [9] and support vector machines [7], have been applied to identify machine states. However, the system models need to be trained on previously determined features and states. Supervised classification methods are therefore limited for real-time system identification when knowledge about the system states is incomplete, because the premise of applying them is that all machine states are known a priori. In addition, the choices of training data sets and training methods affect the final identification results. As an alternative, unsupervised clustering methods have been used to recognize and classify machine states in manufacturing [25, 26, 27].
In unsupervised clustering analysis, cluster centers typically need to be determined first; feature point regrouping and cluster updates are then performed iteratively. Different machine states can thus be identified by analyzing clusters without a supervised training process. In this paper, a density-based clustering method, the CFSFDP method, is used. Unlike distance-based clustering methods such as hierarchical and partitioning algorithms (e.g., k-means), density-based clustering methods do not group data and update the clusters iteratively, which can significantly reduce computational cost in real-time applications. More importantly, density-based clustering algorithms, such as the recently developed CFSFDP method, do not necessarily group data into spherical clusters. Clusters with more generic topology can be generated, so some inherent nonlinear relationships among the data within a cluster can be preserved.
In summary, the efficiency of feature extraction as well as the effectiveness of dimensionality reduction for state classification in existing research needs to be improved. In this work, the CFSFDP method is applied to fault diagnosis of the FDM process for the first time, where the centers of clusters are efficiently identified. Combined with the LDA-based dimension reduction and classification, the overall capability for real-time fault diagnosis is improved.
The rest of the paper is organized as follows. In Section 2, the architecture of the proposed fault diagnosis approach is introduced, including feature extraction, hybrid feature space construction and reduction, and unsupervised clustering. Experimental results for the FDM process are analyzed in Section 3 to demonstrate the performance of the proposed approach, with comparisons to other approaches. Section 4 concludes the paper.
2. Methodology
The basic flow chart of the improved AE sensor-based fault diagnosis system is shown in Figure 1. The AE sensor is used to collect the vibration signal of the extruder. The acquired AE signals contain the information of the different machine states. Time- and frequency-domain features of the AE signal are then extracted to form a hybrid feature vector, and the high-dimensional hybrid feature space is constructed to avoid information loss. The LDA method is applied to reduce the dimension of the hybrid feature space and thus the computational cost in the following classification step. Then the unsupervised density-based clustering technique is used to identify the reduced hybrid features. Finally, the machine states of the extruder in the FDM process are recognized and classified for fault diagnosis.

[Figure 1: Flow chart of the proposed fault diagnosis system: AE sensor signal, feature extraction, hybrid feature space construction, feature space reduction, clustering, state recognition, and fault diagnosis.]

[Figure 2: Illustration of an AE hit u(t): waveform amplitude, detection threshold, counts, and the duration between times t1 and t2.]
The AE signals from the AE sensor are first processed and AE hits are counted.
An AE hit u(t) is illustrated in Fig. 2, where time-domain features including
amplitude, count, and duration are shown. Other typical features such as root
mean square (RMS), peak frequency (P-Freq), absolute energy (ABS-Energy),
and signal strength are also used [7, 8, 9].
Specifically, the amplitude is the peak voltage of the wave within an AE hit, which is defined as
\begin{equation}
AE_A = 20 \log\left(\frac{U_{max}}{U_{ref}}\right), \tag{1}
\end{equation}
where $U_{max}$ is the peak voltage, $U_{ref}$ is the reference voltage, and $AE_A$ is expressed on a decibel (dB) scale.

Counts are the numbers of pulses that surpass a predefined threshold. Duration is the elapsed time from the first count to the last one within an AE hit, which is described as
\begin{equation}
AE_D = t_2 - t_1, \tag{2}
\end{equation}
where $t_2$ is the time of the last count and $t_1$ is the time of the first count.

P-Freq is the frequency in the power spectrum with the maximum magnitude. Signal strength is defined as
\begin{equation}
AE_{str} = \int_{t_1}^{t_2} |u(t)|\, dt. \tag{3}
\end{equation}
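As a minimal sketch (not the authors' implementation), the features in Eqs. (1)-(3) can be computed from a digitized hit as follows; the sampling rate, the threshold, and the 1 µV reference voltage are assumed values.

```python
import numpy as np

def ae_hit_features(u, fs, threshold, u_ref=1e-6):
    """Basic AE-hit features from a sampled waveform u (volts) at rate fs (Hz)."""
    above = np.abs(u) > threshold
    idx = np.flatnonzero(above)
    if idx.size == 0:
        return None                                          # no hit detected
    t1, t2 = idx[0] / fs, idx[-1] / fs                       # first/last threshold crossing
    amplitude_db = 20 * np.log10(np.max(np.abs(u)) / u_ref)  # Eq. (1)
    counts = np.count_nonzero(np.diff(above.astype(int)) == 1)  # rising threshold crossings
    duration = t2 - t1                                       # Eq. (2)
    strength = np.sum(np.abs(u)) / fs                        # Eq. (3), rectangular rule
    rms = np.sqrt(np.mean(u ** 2))
    spectrum = np.abs(np.fft.rfft(u))
    peak_freq = np.fft.rfftfreq(u.size, 1.0 / fs)[np.argmax(spectrum)]   # P-Freq
    return dict(amplitude_db=amplitude_db, counts=counts, duration=duration,
                strength=strength, rms=rms, peak_freq=peak_freq)
```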
2.2. Hybrid feature space construction and reduction
The means and standard deviations of the AE-hit features in each segment form the hybrid feature vectors
\begin{equation}
\mathbf{f}_m = [m_c, m_d, m_a, m_r, m_s, m_e, m_{pf}], \qquad
\mathbf{f}_{std} = [std_c, std_d, std_a, std_r, std_s, std_e, std_{pf}]. \tag{6}
\end{equation}
LDA seeks a linear transformation operator $W$ that maximizes the criterion
\begin{equation}
J(W) = \mathrm{tr}\!\left(\frac{W^T S_b W}{W^T S_w W}\right), \tag{7}
\end{equation}
where
\begin{equation}
S_w = \sum_{i=1}^{c} \sum_{x_j \in C_i} (x_j - m_i)(x_j - m_i)^T \tag{8}
\end{equation}
is the within-class scatter matrix, each data vector $x_j$ belongs to one of the $c$ classes $C_i$ ($i = 1, 2, \cdots, c$), and $m_i$ is the centroid of the $i$-th class. Furthermore,
\begin{equation}
S_b = \sum_{i=1}^{c} p_i (m_i - m)(m_i - m)^T \tag{9}
\end{equation}
is the between-class scatter matrix, with $p_i$ the number of feature vectors in the $i$-th class and $m$ the global centroid of all feature vectors.

LDA finds the optimum transformation operator $W$ such that $S_b$ is maximized and $S_w$ is minimized [23, 28]. Using the linear transformation operator $W$, the original $D$-dimensional feature space is reduced to a $d$-dimensional one.
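A compact sketch of the reduction in Eqs. (7)-(9) is given below; the variable names, the small regularization term, and the use of a generalized eigensolver are illustrative choices rather than the authors' code.

```python
import numpy as np
from scipy.linalg import eigh

def lda_reduce(X, y, d=2):
    """Project D-dimensional features X (n x D) with labels y onto d LDA directions."""
    n, D = X.shape
    m = X.mean(axis=0)                         # global centroid
    S_w = np.zeros((D, D))                     # within-class scatter, Eq. (8)
    S_b = np.zeros((D, D))                     # between-class scatter, Eq. (9)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_w += (Xc - mc).T @ (Xc - mc)
        diff = (mc - m).reshape(-1, 1)
        S_b += Xc.shape[0] * diff @ diff.T
    # Generalized eigenproblem S_b w = lambda S_w w; keep the d largest eigenvalues
    eigvals, eigvecs = eigh(S_b, S_w + 1e-9 * np.eye(D))
    W = eigvecs[:, np.argsort(eigvals)[::-1][:d]]
    return X @ W, W
```

In practice, `LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)` from scikit-learn gives an equivalent projection.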
2.3. Unsupervised clustering
In the CFSFDP method [24], the local density of the $i$-th feature point is defined as
\begin{equation}
\rho_i = \sum_{j} \chi(d_{ij} - d_c), \tag{10}
\end{equation}
where $\chi(x) = 1$ if $x < 0$ and $\chi(x) = 0$ otherwise is the indicator function, $d_{ij}$ is the distance between the $i$-th and $j$-th data points, and $d_c$ is a cutoff distance, which is usually set manually at the beginning. Based on Eq. (10), the local density associated with a point is the number of points whose distance to it is smaller than the cutoff.

The minimum distance $\delta_i$ between the $i$-th point and any other point with a higher local density is obtained by
\begin{equation}
\delta_i = \min_{j:\, \rho_j > \rho_i} d_{ij}. \tag{11}
\end{equation}
For the point with the highest local density, the distance is instead taken as $\delta_i = \max_{j:\, \rho_j < \rho_i} d_{ij}$. The cluster centers are selected as those points that have both a large local density and a large minimum distance.
In order to identify the cluster centers with large minimum distances and large local densities, all feature points can be plotted on a two-dimensional decision graph with coordinates $(\rho_i, \delta_i)$. The feature points with high local density $\rho_i$ and large minimum distance $\delta_i$, located in the top right corner of the graph, are selected as the cluster centers. Each remaining feature point is then assigned to the same cluster as its nearest neighbor with a higher local density.
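The core CFSFDP computations (ρ, δ, center selection, and assignment) are sketched below; the cutoff distance and the ρ·δ ranking used to pick centers are simple placeholders for the manual decision-graph selection described above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def cfsfdp(X, d_c, n_centers):
    """Cluster rows of X by fast search and find of density peaks."""
    d = squareform(pdist(X))                      # pairwise distances d_ij
    rho = np.sum(d < d_c, axis=1) - 1             # Eq. (10): neighbors within the cutoff
    order = np.argsort(rho)[::-1]                 # indices by decreasing density
    n = len(X)
    delta = np.zeros(n)
    nearest_denser = np.zeros(n, dtype=int)
    delta[order[0]] = d[order[0]].max()           # densest point gets the largest distance
    for k in range(1, n):
        i = order[k]
        denser = order[:k]                        # points with higher (or equal) density
        j = denser[np.argmin(d[i, denser])]
        delta[i], nearest_denser[i] = d[i, j], j  # Eq. (11)
    # rho * delta ranking as a stand-in for picking the top-right corner of the decision graph
    centers = np.argsort(rho * delta)[::-1][:n_centers]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_centers)
    for i in order:                               # assign in order of decreasing density
        if labels[i] == -1:
            labels[i] = labels[nearest_denser[i]]
    return labels, rho, delta
```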
[Experimental setup figure: FDM machine with filament and extruder, AE sensor with preamplifier, host PC, and tablet PC.]
Table 2: Number of recorded AE hits in each machine state.
Machine state | Number of AE hits
Based on the recorded AE hits, the time- and frequency-domain features listed in Eq. (6) are obtained. In order to reduce the computational cost in fault diagnosis, segmental analysis is applied. A total of 100 segments, each with a time interval of 0.1 second, were obtained from the original AE hits for each state, as shown in Table 2. The numbers of AE hits within each segment for the different machine states are approximately 60, 59, 67, 61, and 46, respectively. Then, the means and standard deviations of the eight features in each segment are calculated. The means and standard deviations of each segment for the different machine states are shown in Fig. 4. Segments No. 1-100 correspond to the normal-extruding state. Segments No. 101-200, No. 201-300, No. 301-400, and No. 401-500 correspond to the material loading/unloading, run-out-of-material, blocked, and semi-blocked states, respectively. By analyzing the AE hits in each segment instead of the raw signals, the computational cost of feature extraction can be reduced significantly.
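A minimal sketch of this segmental aggregation is shown below; the 0.1 s segment length follows the description above, while the input layout (hit timestamps plus a hit-by-feature matrix) is an assumption.

```python
import numpy as np

def segment_statistics(hit_times, hit_features, segment_len=0.1):
    """Group AE hits into fixed-length segments and compute the per-segment
    means and standard deviations of each feature, as in Eq. (6)."""
    hit_times = np.asarray(hit_times)
    hit_features = np.asarray(hit_features)          # shape: (n_hits, n_features)
    seg_ids = (hit_times // segment_len).astype(int)
    rows = []
    for s in np.unique(seg_ids):
        F = hit_features[seg_ids == s]
        rows.append(np.concatenate([F.mean(axis=0), F.std(axis=0)]))
    return np.vstack(rows)                           # shape: (n_segments, 2 * n_features)
```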
Figure 4: Features of interest in the different machine states: (a) count, (b) duration, (c) amplitude, (d) RMS, (e) signal strength, (f) ABS energy, (g) Freq-C, (h) P-Freq.
[Decision graph: minimum distance δ versus local density ρ for the feature points, used to select the cluster centers.]
#289, #315, and #483, which are indeed samples collected when the extruder was in the five respective states of normal, semi-blocked, blocked, run-out-of-material, and loading/unloading. They are denoted by a circle, a rectangle, a diamond, an up triangle, and a down triangle in Figure 6, respectively. The five distinctive machine states are therefore identified via the selection of cluster centers in the CFSFDP method. Each remaining feature point can then be assigned to the cluster of its nearest neighbor with a higher local density.

Based on the five selected cluster centers, the remaining feature points were assigned to the corresponding classes as shown in Figure 7. Feature points that are correctly classified into the five respective machine states are plotted as black circles, red rectangles, green diamonds, blue up triangles, and cyan down triangles. There are still some misclassified feature points, plotted as magenta left triangles, which lie mainly in the regions where the normal, semi-blocked, and blocked states overlap.
The ability to identify the normal-extruding, semi-blocked, and blocked
Table 3: Classification accuracy for five cluster centers using different dimensions of the feature space.
Feature space dimension | normal-extruding | semi-blocked | blocked | run-out-of-material | loading/unloading | All states
Among the different dimensions, a reduction to two dimensions has the best performance. Increasing the dimension does not improve the performance. This is mainly because the LDA dimension reduction method is able to optimize the projection directions according to the nature of the data sets. The results also indicate that there is a certain level of correlation between these features.
Another sensitivity analysis is to check the results when the number of clusters or states is four instead of five. Because the normal-extruding and semi-blocked states are the most similar ones, with the only difference being the extrusion temperature, these two states are merged into one. The clustering results with four states and different feature space dimensions are shown in Table 4. With only four cluster centers identified, the ability to correctly classify the blocked and run-out-of-material states is enhanced when using the four- and six-dimensional feature spaces. The run-out-of-material state is identified most accurately in the two-dimensional feature space.

It is seen that increasing the dimension of the feature space does not help to improve classification accuracy if there are five states to be identified. It could help if the semi-blocked state is no longer of interest. Considering the computational cost involved in classification, the optimum choice of the reduced dimension is considered to be two. The two-dimensional reduced feature space is
Table 4: Classification accuracy for four cluster centers using different dimensions of the feature space.
Feature space dimension | normal-extruding & semi-blocked | blocked | run-out-of-material | loading/unloading | All states
3.5. Evaluation
Table 5: Classification accuracy and corresponding F1 score (in parentheses) using unsupervised methods.

Unsupervised methods | normal-extruding | semi-blocked | blocked | run-out-of-material | loading/unloading
CFSFDP | 99.0% (82.16%) | 68.0% (79.53%) | 88.0% (92.15%) | 98.0% (98.49%) | 98.0% (98.99%)
K-means clustering [8] | 99.0% (82.16%) | 63.0% (72.00%) | 88.0% (91.19%) | 94.0% (96.91%) | 97.0% (98.48%)
PHA clustering [29] | 100.0% (57.64%) | 0.0% (n/a) | 88.0% (92.15%) | 66.0% (79.52%) | 95.0% (97.44%)
The CFSFDP method also has the highest F1 scores for all states. Specifically, an F1 score of 79.53% for the semi-blocked state is achieved using the CFSFDP. Thus, the CFSFDP method is shown to be effective in identifying the machine states from the AE hits, while at the same time being computationally more efficient than distance-based clustering.
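For reference, the per-state accuracies and F1 scores reported in Table 5 can be computed as below; this is a generic sketch in which the per-state "classification accuracy" is interpreted as recall, which is an assumption rather than the authors' stated definition.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def per_state_scores(y_true, y_pred, states):
    """Per-state recall and F1 score for a clustering/classification result."""
    cm = confusion_matrix(y_true, y_pred, labels=states)
    recall = np.diag(cm) / cm.sum(axis=1)            # fraction of each state correctly labeled
    f1 = f1_score(y_true, y_pred, labels=states, average=None)
    return {s: (r, f) for s, r, f in zip(states, recall, f1)}
```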
From Table 6, it can be found that the LDA-based method has higher classification accuracies for all five machine states than the other traditional feature reduction methods. The traditional feature reduction methods cannot reliably recognize and classify the semi-blocked, blocked, and run-out-of-material states; their highest classification accuracies are only 56.0%, 60.0%, and 73.0%, respectively. Among all methods, the proposed LDA-based approach has the highest F1 scores for all states. Its F1 scores are 72.00%, 91.19%, and 96.91% for the semi-blocked, blocked, and run-out-of-material states, respectively, which are higher than those of most of the other methods.
With classification as the purpose, the LDA method helps find a customized linear projection operator based on the data so that the classes of feature points can be differentiated easily after reduction. In contrast, the other reduction methods do not keep classification as the goal. For instance, NCA intends to keep the neighbor topology unchanged, LPP finds a transformation such that distances between neighbors are preserved, NPE preserves local manifold structures, PCA maintains the global statistical information of the data, whereas sparse filtering seeks to maximize the dispersal of data after projection. In those methods, the ease of differentiation between clusters after dimension reduction is not taken into account. Compared to other popular linear dimensionality reduction methods, the LDA used in this work performs better in processing the features of interest in order to differentiate the extruder states, which provides better classification results.
Figure 8: Reduced feature spaces obtained with different feature reduction methods: (a) NCA, (b) LPP, (c) NPE, (d) PCA, (e) sparse filtering.
Table 6: Classification accuracy and F1 score (in parentheses) under different feature reduction methods.

Feature reduction | normal-extruding | semi-blocked | blocked | run-out-of-material | loading/unloading
LDA [23] | 99.0% (82.16%) | 63.0% (72.00%) | 88.0% (91.19%) | 94.0% (96.91%) | 97.0% (98.48%)
NCA [30] | 68.0% (60.18%) | 43.0% (38.57%) | 49.0% (57.65%) | 64.0% (71.51%) | 99.0% (98.02%)
LPP [31] | 72.0% (66.36%) | 56.0% (48.70%) | 60.0% (72.73%) | 73.0% (76.44%) | 95.0% (96.45%)
NPE [32] | 64.0% (62.14%) | 47.0% (42.34%) | 34.0% (39.08%) | 49.0% (49.75%) | 99.0% (98.51%)
PCA [33] | 64.0% (62.14%) | 48.0% (43.44%) | 34.0% (39.08%) | 50.0% (50.51%) | 99.0% (98.51%)
Sparse filtering [34] | 68.0% (53.54%) | 35.0% (34.31%) | 49.0% (60.49%) | 53.0% (65.84%) | 86.0% (78.54%)
Widely used supervised classification methods, including the hidden Markov model (HMM) [35], support vector machines (SVM) [36], the genetic algorithm-based back propagation neural network (BPNN-GA) model [37], and the probabilistic neural network (PNN) model [38], are also applied to classify the reduced features from the five different machine states.
To select representative feature points that evenly cover the space, the Kennard and Stone algorithm [39, 40] was applied to choose training and testing data sets for the different machine states. Here, 250 training samples were chosen from the original 500 samples. The system models were trained and updated with the extracted features. The final classification accuracies and F1 scores are shown in Table 7.
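A brief sketch of the Kennard-Stone selection under its usual max-min distance formulation is shown below; it is a generic implementation, not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kennard_stone(X, n_select):
    """Select n_select rows of X that evenly cover the feature space."""
    d = cdist(X, X)                                             # pairwise Euclidean distances
    selected = list(np.unravel_index(np.argmax(d), d.shape))    # two mutually farthest points
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_select:
        # distance from each remaining point to its nearest already-selected point
        min_d = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(min_d))))   # max-min criterion
    return np.array(selected)

# Hypothetical usage: pick 250 training samples out of 500 feature vectors
# train_idx = kennard_stone(features, 250)
```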
Similar to the proposed CFSFDP method, the supervised classification methods also recognize and classify the normal-extruding, run-out-of-material, and loading/unloading states with relatively high classification accuracy. At the same time, the semi-blocked and blocked states still cannot be well identified from the reduced features. Generally, the training results of the supervised classification methods are better than the testing results. For example, the testing accuracies for the semi-blocked state using the HMM, SVM, BPNN-GA, and PNN are 67.5%, 77.5%, 65.0%, and 67.5%, respectively, which are worse than
Table 7: Classification accuracy and F1 score (in parentheses) using supervised methods.

Supervised methods | Type | normal-extruding | semi-blocked | blocked | run-out-of-material | loading/unloading
HMM [35] | Training | 91.18% (80.52%) | 80.00% (83.48%) | 90.38% (93.07%) | 97.10% (97.81%) | 100.00% (100.00%)
HMM [35] | Testing | 100.00% (88.00%) | 67.50% (80.60%) | 87.50% (92.31%) | 100.00% (100.00%) | 100.00% (100.00%)
HMM [35] | Total | 97.00% (85.46%) | 75.00% (82.42%) | 89.00% (92.71%) | 98.00% (98.49%) | 100.00% (100.00%)
SVM [36] | Training | 82.35% (73.68%) | 85.00% (79.69%) | 73.08% (83.52%) | 97.10% (97.10%) | 91.43% (95.52%)
SVM [36] | Testing | 87.88% (87.22%) | 77.50% (68.13%) | 72.92% (83.33%) | 100.00% (100.00%) | 100.00% (100.00%)
SVM [36] | Total | 86.00% (82.30%) | 82.00% (74.89%) | 73.00% (83.43%) | 98.00% (98.00%) | 97.00% (98.48%)
BPNN-GA [37] | Training | 97.06% (91.67%) | 91.67% (92.44%) | 94.23% (96.08%) | 98.55% (99.27%) | 100.00% (100.00%)
BPNN-GA [37] | Testing | 100.00% (91.03%) | 65.00% (72.22%) | 87.50% (92.31%) | 100.00% (100.00%) | 100.00% (100.00%)
BPNN-GA [37] | Total | 99.00% (91.24%) | 81.00% (84.82%) | 91.00% (94.30%) | 99.00% (99.50%) | 100.00% (100.00%)
PNN [38] | Training | 100.00% (94.44%) | 93.33% (95.73%) | 96.15% (97.09%) | 100.00% (100.00%) | 100.00% (100.00%)
PNN [38] | Testing | 100.00% (91.03%) | 67.50% (75.00%) | 87.50% (92.31%) | 100.00% (100.00%) | 100.00% (100.00%)
PNN [38] | Total | 100.00% (92.17%) | 83.00% (87.83%) | 92.00% (94.85%) | 100.00% (100.00%) | 100.00% (100.00%)
the respective training results of 80.0%, 85.0%, 91.67%, and 93.33%. The classification results using the proposed CFSFDP method are slightly better than the testing results of some of the supervised methods. For example, the classification accuracy of the blocked state using the CFSFDP is about 88.0%, whereas the testing results using the HMM, SVM, BPNN-GA, and PNN are 87.50%, 72.92%, 87.5%, and 87.5%, respectively. The F1 scores using the proposed unsupervised method are also better than those of some of the supervised methods. Specifically, an F1 score of 79.53% is achieved for the semi-blocked state, which is higher than those of the SVM, BPNN-GA, and PNN methods.
Generally, supervised classification methods have a better system identification ability than the unsupervised methods. Nevertheless, the model training process incurs higher computational costs: the training times of the BPNN-GA, PNN, HMM, and SVM are 2.88 sec, 1.37 sec, 2.9 sec, and 1.93 sec, respectively. Thus, trade-offs have to be made in real-time application scenarios with constraints on computational time. In addition, the choices of training data sets and training methods affect the final identification results. The proposed unsupervised approach can identify different machine states without a model training procedure, which reduces the computational burden when processing high-dimensional, large-volume data sets for process monitoring and fault diagnosis.

Table 8: Classification accuracy and F1 score without considering the semi-blocked state, using different dimensions of the feature space.
Feature space dimension | normal-extruding | blocked | run-out-of-material | loading/unloading
loss with customized projection operators. Nevertheless, the lack of independent features means that there is not enough information in the original features.

A potential approach to resolve this is to increase the number of independent features so that the dimension of the feature space is effectively increased and the identifiability is improved. Possible solutions include using additional AE sensors so that spatial distribution information is available for diagnostics. Other sensing modes, such as optical and thermal sensors, can also be added so that sensor fusion can be applied.
4. Conclusions
sensors, will be investigated in future work. Further, trade-offs between efficiency and accuracy also need to be made in process monitoring systems. High-dimensional feature information provides good differentiation power, but processing such large data sets is computationally demanding for processors embedded on board for in-situ monitoring. Reducing the feature space improves efficiency, yet the accuracy is compromised. Therefore, further research on how to make good decisions on these trade-offs is needed.
Acknowledgements
The work was supported in part by the China Scholarship Council (Scholarship No. 201606160048), the National Natural Science Foundation of China (No. 51175208), and the U.S. National Science Foundation (CMMI 1547102). The authors also thank Dr. Haixi Wu for his help with the experimental data.
References
[2] Y. Jin, Y. Wan, B. Zhang, Z. Liu, Modeling of the chemical finishing process
for polylactic acid parts in fused deposition modeling and investigation of its
tensile properties, Journal of Materials Processing Technology 240 (2017)
233–239.
[3] W.-c. Lee, C.-c. Wei, S.-C. Chung, Development of a hybrid rapid prototyping system using low-cost fused deposition modeling and five-axis machining, Journal of Materials Processing Technology 214 (11) (2014) 2366–2374.
[6] E. Pei, R. Ian Campbell, D. de Beer, Entry-level RP machines: how well can they cope with geometric complexity?, Assembly Automation 31 (2) (2011) 153–160.
[7] H. Wu, Y. Wang, Z. Yu, In situ monitoring of FDM machine condition via acoustic emission, The International Journal of Advanced Manufacturing Technology 84 (5-8) (2016) 1483–1495.
[8] H. Wu, Z. Yu, Y. Wang, A new approach for online monitoring of additive manufacturing based on acoustic emission, in: ASME 2016 11th International Manufacturing Science and Engineering Conference, American Society of Mechanical Engineers, 2016, p. V003T08A013.
[9] H. Wu, Z. Yu, Y. Wang, Real-time FDM machine condition monitoring and diagnosis based on acoustic emission and hidden semi-Markov model, The International Journal of Advanced Manufacturing Technology 90 (5-8) (2017) 2027–2036.
[10] I. T. Cummings, M. E. Bax, I. J. Fuller, A. J. Wachtor, J. D. Bernardin,
A framework for additive manufacturing process monitoring & control, in:
Topics in Modal Analysis & Testing, Volume 10, Springer, 2017, pp. 137–
146.
[11] F. Baumann, D. Roller, Vision based error detection for 3D printing processes, in: MATEC Web of Conferences, Vol. 59, EDP Sciences, 2016.
[15] S. Liang, D. Dornfeld, Tool wear detection using time series analysis of acoustic emission, J. Eng. Ind. (Trans. ASME) 111 (3) (1989) 199–205.
[17] S. Subramaniam, N. S, D. A. S, Acoustic emission-based monitoring approach for friction stir welding of aluminum alloy AA6063-T6 with different tool pin profiles, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 227 (3) (2013) 407–416.
[18] B. Wang, Z. Liu, Acoustic emission signal analysis during chip formation process in high speed machining of 7050-T7451 aluminum alloy and Inconel 718 superalloy, Journal of Manufacturing Processes 27 (2017) 114–125.
[19] X. Li, A brief review: acoustic emission method for tool wear monitoring during turning, International Journal of Machine Tools and Manufacture 42 (2) (2002) 157–165.
[21] R. Li, D. He, Rotational machine health monitoring and fault detection using EMD-based acoustic emission feature quantification, IEEE Transactions on Instrumentation and Measurement 61 (4) (2012) 990–1001.
[22] Q. Xiao, J. Li, Z. Bai, J. Sun, N. Zhou, Z. Zeng, A small leak detection method based on VMD adaptive de-noising and ambiguity correlation classification intended for natural gas pipelines, Sensors 16 (12) (2016) 2116.
[24] A. Rodriguez, A. Laio, Clustering by fast search and find of density peaks,
Science 344 (6191) (2014) 1492–1496.
[25] A. Malhi, R. X. Gao, PCA-based feature selection scheme for machine defect classification, IEEE Transactions on Instrumentation and Measurement 53 (6) (2004) 1517–1525.
[32] X. He, D. Cai, S. Yan, H.-J. Zhang, Neighborhood preserving embedding, in: Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV 2005), Vol. 2, IEEE, 2005, pp. 1208–1213.
[37] J. Liu, Y. Hu, B. Wu, C. Jin, A hybrid health condition monitoring method
in milling operations, The International Journal of Advanced Manufacturing
Technology 92 (5-8) (2017) 2069–2080.