Article
Simultaneous Burr and Cut Interruption Detection during
Laser Cutting with Neural Networks
Benedikt Adelmann * and Ralf Hellmann

Applied Laser and Photonics Group, Faculty of Engineering, University of Applied Sciences Aschaffenburg,
Wuerzburger Straße 45, 63739 Aschaffenburg, Germany; [email protected]
* Correspondence: [email protected]; Tel.: +49-6022-81-3693

Abstract: In this contribution, we compare basic neural networks with convolutional neural networks
for cut failure classification during fiber laser cutting. The experiments are performed by cutting
thin electrical sheets with a 500 W single-mode fiber laser while taking coaxial camera images for
the classification. The quality is grouped in the categories good cut, cuts with burr formation and
cut interruptions. Indeed, our results reveal that both cut failures can be detected with one system.
Independent of the neural network design and size, a minimum classification accuracy of 92.8% is
achieved, which could be increased to 95.8% with more complex networks. Thus, convolutional
neural networks reveal a slight performance advantage over basic neural networks, which is
accompanied by a higher calculation time that nevertheless remains below 2 ms. In a separate
examination, cut interruptions can be detected with much higher accuracy than burr
formation. Overall, the results reveal the possibility of detecting burr formations and cut interruptions
during laser cutting simultaneously with high accuracy, as is desirable for industrial applications.


Keywords: laser cutting; quality monitoring; artificial neural network; burr formation; cut interruption; fiber laser

Academic Editor: Bernhard Wilhelm Roth
Received: 8 June 2021; Accepted: 25 August 2021; Published: 30 August 2021

1. Introduction

Laser cutting of thin metal sheets using fiber or disk lasers is now a customary process in the metal industry. The key advantages of laser cutting are high productivity and flexibility, good edge quality and the option for easy process automation. Especially for highly automated unmanned machines, seamlessly combined in line with bending, separation or welding machines, a permanently high cut quality is essential to avoid material waste, downtime or damage to subsequent machine steps in mechanized process chains. As a consequence, besides optimizing the cutting machine in order to reduce the influence of disturbance variables, cut quality monitoring is also of utmost interest.

The most common and disruptive quality defects are cut interruptions and burr formation [1]. To obtain high-quality cuts, process parameters such as laser power, feed rate, gas pressure, working distance of the nozzle and focus position, respectively, are to be set appropriately. Imprecise process parameters and typical disturbance values like thermal lenses, unclean optics, damaged gas nozzles, gas pressure fluctuations and variations of material properties may lead to poor cut quality and, thus, nonconforming products. To ensure a high quality, an online quality monitoring system which can detect multiple defects would be the best choice in order to respond quickly and reduce downtime, material waste or cost-intensive rework. Until now, most reviewed sensor systems for monitoring laser cutting focus only on one single fault.

For detecting burr formation during laser cutting, different approaches using cameras, photodiodes or acoustic emission were investigated. In [2,3], burr formation, roughness and striation angle during laser cutting with a 6 kW CO2 laser are determined by using a NIR camera sampling at 40 Hz. By using two cameras in [4], laser cutting with a CO2 laser is monitored by observing the spark trajectories underneath the sheet and
melt bath geometries and correlate this to the burr formation or overburning defects. A
novel approach is used in [5], employing a convolutional neural network to calculate burr
formation from camera images with a high accuracy of 92%. By evaluating the thermal
radiation of the process zone with photodiodes [6], the burr height during fiber laser cutting
can be measured from the standard deviation of a filtered photodiode signal. Results by
using photodiode-based sensors integrated in the cutting head [7] showed that the mean
photodiode’s current increases with lower cut qualities, while similar experiments revealed
increasing mean photodiode currents at lower cut surface roughness [8]. An acoustic
approach was investigated by monitoring the acoustic emission during laser cutting and
deducing burr formation by evaluating the acoustic bursts [9].
Also for cut interruption detection, most approaches are based on photodiode signals
or camera images. Photodiode-based methods for cut interruption detection are signal
threshold-based [10], done by the comparison of different photodiodes [11] or based on
cross-correlations [12]. However, all those methods have the disadvantage of requiring
thresholds that vary with the sheet thickness or laser parameters. In addition, an adaptation
to other materials or sheet thicknesses requires a large engineering effort to define
respective threshold values by extensive investigations. To avoid this problem, [13] uses
a convolutional neural network to calculate cut interruptions from camera images during
fiber laser cutting of different sheet thicknesses with an accuracy of 99.9%. Another
approach is performed by using a regression model based on polynomial logistics [14] to
calculate the interruptions from laser machine parameters only.
This literature review reveals that for both burr formation monitoring and cut interruption
detection, individual detection schemes have previously been reported, but a combined and
simultaneous detection for both failure patterns has not been reported so far. In addition,
many of the previous studies applied CO2 lasers, which are often replaced nowadays by
modern fiber or disk lasers, for which, in turn, fewer reports are available. To detect both
failures with the same system, we chose the evaluation of camera images with neural
networks, as they are able to achieve a high accuracy in detecting both cut failures [5,13].
The use of neural networks, especially for convolutional neural networks (CNN), has
been demonstrated for various image classification purposes, such as face recognition and
object detection [15,16], in medicine for cancer detection [17] and electroencephalogram
(EEG) evaluations [18] or in geology for earthquake detection [19]. For failure analyses in
technical processes, neural networks have also been successfully used for, e.g., concrete
crack detection [20], road crack detection [21] or wafer error detection [22].
In addition, detecting different failure types with the same system has been successfully
proven with neural networks, such as detecting various wood veneer surface defects [23]
or different welding defects [24] during laser welding.
The objective of this publication is to detect both burr formation and cut interruptions
during single-mode laser cutting of electrical sheets from camera images with neural
networks. The advantages of our system are, firstly, easy adaption to industrial cutting
heads, which often already have a camera interface. Secondly, images are taken coaxially
to the laser beam and are therefore independent of the laser cut direction. Thirdly, due
to the use of a learning system the engineering effort is low when the system has to be
adapted to other materials or sheet thicknesses. Two different neural network types are
used, namely a basic neural network and a convolutional neural network. The basic neural
network is faster and can detect bright or dark zones but is less able to extract abstractions
of 2D features and needs a lot of parameters when the networks get more complex. On
the other hand, convolutional neural networks are much better in learning and extracting
abstractions of 2D features and usually need fewer parameters. However, they require a
higher calculation effort due to many multiplications in the convolution layers [25,26].
The cutting of electrical sheets is chosen because it is an established process in the
production and prototyping of electric motors and transformers [27–30], i.e., it is a relevant
and contributing process step to foster e-mobility. In order to reduce the electrical losses
caused by eddy currents, the rotor is assembled of a stack of thin electrical sheets with
electrical isolation layers in between the sheets. The sheet thickness typically varies between
0.35 mm and 0.5 mm, with the eddy currents being lower for thinner sheets. As a result, for
an electric motor, a large number of sheets with high quality requirements are necessary.
Burr formations in particular result in gaps between sheets, or the burr can pierce the electrical
isolation layer and connect the sheets electrically, both of which reduce the performance of
motors drastically. Therefore, quality monitoring during laser cutting is of great interest
for industrial applications.

2. Experimental
2.1. Laser System and Cutting Setup
In this study, a continuous wave 500 W single-mode fiber laser (IPG Photonics, Burbach, Germany) is used to perform the experiments. The laser system is equipped with linear stages (X, Y) for positioning the workpiece (Aerotech, Pittsburgh, PA, USA), and a fine cutting head (Precitec, Gaggenau, Germany) is attached to a third linear drive (Z). The assist gas, nitrogen with a purity greater than 99.999%, flows coaxially to the laser beam. The gas nozzle has a diameter of 0.8 mm and its distance to the workpiece is positioned by a capacitive closed-loop control of the z-linear drive. The emitting wavelength of the laser is specified to be 1070 nm in conjunction with a beam propagation factor of M² < 1.1. The raw beam diameter of 7.25 mm is focused by a lens with a focal length of 50 mm. The corresponding Rayleigh length is calculated to be 70 µm and the focus diameter to be 10 µm, respectively.
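As a plausibility check (our own worked calculation, not part of the original text), both values follow from the standard Gaussian beam relations using the stated wavelength, focal length, raw beam diameter and beam quality:

```latex
d_f = \frac{4 \lambda f M^2}{\pi D}
    = \frac{4 \cdot 1.07\,\mu\mathrm{m} \cdot 50\,\mathrm{mm} \cdot 1.1}{\pi \cdot 7.25\,\mathrm{mm}}
    \approx 10\,\mu\mathrm{m},
\qquad
z_R = \frac{\pi \, (d_f/2)^2}{M^2 \lambda} \approx 70\,\mu\mathrm{m}
```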
The design of the cutting head with the high-speed camera and a photo of the laser
system are illustrated in Figure 1. The dashed lines depict the primary laser radiation
from the fiber laser, which is collimated by a collimator and reflected by a dichroic mirror
downwards to the processing zone. There the laser radiation is focused by the processing
lens through the protective glass onto the work piece, which is placed on the XY stages.
The process radiation from the sheet radiates omnidirectionally (dash-dotted line), thus partly through the nozzle and protective glass, and is collimated by the processing lens upwards. The process radiation passes the dichroic mirror and is focused by a lens onto the high-speed camera. The focus of the camera is set to the bottom side of the sheet in order to have a sharp view of possible burr formations.

Figure 1. Optical setup of the cutting head (left) and image of the system (right).

2.2. Laser Cutting


The laser cuts are performed in electrical sheets of the type M270 (according to
EN 10106 this denotes a loss of 2.7 W/kg during reversal of magnetism at 50 Hz and
1.5 T) with a sheet thickness of 0.35 mm. This sheet thickness is chosen because it fits
well to the laser focus properties e.g., Rayleigh length and it is one of the most often
used sheet thicknesses for electrical motors and transformers, because it provides a good
compromise between low eddy currents and high productivity. Stacks of thicker sheets
are faster to produce because fewer sheets are required per stack, but with increasing sheet
thickness the unwanted eddy currents also increase. Thinner sheets require a higher
production effort per stack and are more difficult to cut because they are very flexible and
warp under the gas pressure and thermal influence. In these experiments, only one sheet
thickness is used, but please note that in previous publications with similar systems an
adaptation of the results to other sheet thicknesses was possible with only minor additional
expenses [5,13].
As ad-hoc pre-experiments reveal, the parameter combination of a good quality cut is
a laser power of 500 W, a feed rate of 400 mm/s and a laser focus position on the bottom
side of the metal sheet. The gas nozzle has a diameter of 0.8 mm and is placed 0.5 mm above
the sheet surface, and the gas pressure is 7 bar. For the experimental design, the parameters
are varied to intentionally enforce cut failures. Burr formations are caused by reduced gas flow
into the cut kerf due to a higher nozzle-to-sheet distance, lower gas pressure, an excessive
power-to-feed-rate ratio or damaged nozzles. Cut interruptions are enforced by too high
feed rates or too low laser power.
In the experimental design, 39 cuts with different laser parameters are performed
for training the neural network and 22 cuts are performed for testing, with the cuts being
evenly distributed to the three cut categories (good cut, cuts with burr formation and cut
interruptions). A table of all cuts with laser machine parameters, category and use can be
found in Appendix A. The cuts are straight lines including the acceleration
and deceleration paths of the linear stages. Exemplifying images of the sheets from all three
cut categories, taken by optical microscope after the cutting process, are shown in Figure 2.
Firstly, for a good quality cut, both the top and bottom side of the cut kerf are characterized by
clear edges without damage. Secondly, for a cut with burr, the top side is similar to the
good quality cut; however, on the bottom side, drops of burr are clearly visible.
Thirdly, the images of the cut interruption reveal a molten line on the sheet top side and
only a slightly discolored stripe on the bottom side with both sides of the sheet not being
separated.


Figure 2. Images of the top and bottom side of laser cuts with and without cut errors taken with an optical microscope after laser cutting.

2.3. Camera and Image Acquisition

For image acquisition during laser cutting, we used a high-speed camera (Fastcam AX50, Photron, Tokyo, Japan) with a maximum frame rate of 170,000 frames per second. The maximum resolution is 1024 × 1024 pixels, with a square pixel size of 20 × 20 µm² in combination with a Bayer CFA color matrix. For process image acquisition, videos of the laser cutting process are grabbed with a frame rate of 10 kilo frames per second, an exposure time of 2 µs and a resolution of 128 × 64 pixels. Even at this high frame rate, no oversampling occurs and consecutive images are not similar, because the relevant underlying melt flow dynamics are characterized by high melt flow velocities in the range of 10 m/s [31] and therefore vary at estimated frequencies between 100 kHz and 300 kHz [32]. Please note that, due to the lack of external illumination in the cutting head, the brightness in the images is caused by the thermal radiation of the process zone.

Two exemplifying images of each cut category are shown in Figure 3, with the cut direction always upwards. The orientation of the images is always the same because the straight lines are cut in the same direction. For complex cuts, images with the same orientation can be transformed from variously oriented images by rotation based on the movement direction of the drives. In these images, brightness is caused by the thermal radiation of the hot melt. Good cuts are characterized by a bright circle at the position of the laser focus and, below this, two tapered stripes indicating the flowing melt at the side walls of the cut kerf, because in the middle the melt bath is blown out first. The cuts with burr are similar to the good quality cuts, but the tapered stripes are formed differently. The cut interruptions are very different from the other categories and are characterized by larger bright areas and a more elliptical shape with no tapered stripes.

Figure 3. Examples of camera images of the three cut categories (good cut, cut with burr, cut interruption) taken during laser cutting with the high-speed camera.

From the 39 laser cuts, the experimental design delivers the same number of training videos with overall 52 thousand training images, while from the 22 testing cuts 34 thousand test images are provided. It is worth mentioning that a size of several tens of thousands of training images is typical for training neural networks [33]. For both training and testing, the images are almost evenly distributed over the three categories, with cut interruptions being slightly underrepresented. The reason for this underrepresentation is that cut interruptions only occur at high feed rates, i.e., images from acceleration and deceleration paths can be used only partially and, in turn, fewer images per video can be captured.

2.4. Computer Hardware and Neural Network Design

For learning and evaluating the neural networks, a computer with an Intel Core i7-8700 processor with a 3.2 GHz clock rate in combination with 16 GB DDR4 RAM was used. All calculations are performed with the CPU rather than the GPU to show that the machine learning steps can also be run on standard computers, which are usually integrated with laser cutting machines. The software used was TensorFlow version 2.0.0 in combination with Keras version 2.2.4 (software available from: https://tensorflow.org (accessed on 24 March 2021)).

In most publications about image classification with neural networks, the images have major differences. In contrast, in the images captured in our experiments, the object to analyze always has the same size, orientation and illumination conditions, which should simplify the classification when compared to classifying common, moving items like vehicles or animals [34,35]. Furthermore, our images have a rectangular shape with 128 × 64 pixels, while most classification algorithms are optimized for square image sizes, mostly with a resolution of 224 × 224 pixels, like MobileNet, SqueezeNet or AlexNet [36,37]. Because an enlargement of the image size slows the system drastically, two self-designed and completely different neural networks are used, with many elements being adapted from other, often used neural networks. The first network, as shown in Figure 4, is a basic network without convolution and consists only of image flattening followed by two fully connected layers with N nodes and ReLU activation. To classify the three different cut categories, a fully connected layer with 3 nodes and softmax activation completes the network. The second network is a convolutional neural network with four convolution blocks followed by the same three fully connected layers as in the basic network. Each block consists of a convolution layer with a kernel size of 3 × 3 and M filters, whose output is added to the input of the block. Such bypasses are most common in, e.g., MobileNet [36]. To reduce the number of parameters, a max pooling layer with a common pool size of 2 × 2 is used [26]. In contrast to neural networks often used in the literature, we use a constant instead of an increasing filter number for subsequent convolution layers, and we use normal convolutions rather than separable or pointwise convolutions. Because every block halves the image size in 2 dimensions, after 4 blocks the image size is 8 × 4 × M. The fully connected layers after the flattening have the same number of nodes as the number of parameters delivered by the flattened layer. The model optimizer used is Adam, which according to [38], together with SGD (Stochastic Gradient Descent), provides superior optimization results. Furthermore, we use the loss function “categorical crossentropy” to enable categorical outputs (one hot encoding), and the metric “accuracy”.

Figure 4. Design of the two neural networks.
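To make the two designs concrete, the following is a minimal Keras sketch of how they could be expressed, assuming 128 × 64 pixel images with three color channels; the initial 1 × 1 convolution is our own addition to make the first bypass addition shape-compatible and is not described above, and the function names are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_basic_net(n_nodes, input_shape=(128, 64, 3)):
    # Basic network: flattening, two fully connected ReLU layers with N nodes,
    # and a 3-node softmax layer for the three cut categories.
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Flatten()(inputs)
    x = layers.Dense(n_nodes, activation="relu")(x)
    x = layers.Dense(n_nodes, activation="relu")(x)
    outputs = layers.Dense(3, activation="softmax")(x)
    return models.Model(inputs, outputs)

def build_conv_net(m_filters, input_shape=(128, 64, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    # Assumption: a 1 x 1 convolution projects the input to M channels so that
    # the bypass addition of the first block is shape-compatible.
    x = layers.Conv2D(m_filters, 1, padding="same")(inputs)
    for _ in range(4):
        y = layers.Conv2D(m_filters, 3, padding="same", activation="relu")(x)
        x = layers.Add()([x, y])                      # bypass around the convolution
        x = layers.MaxPooling2D(pool_size=(2, 2))(x)  # halves both image dimensions
    x = layers.Flatten()(x)                           # 8 x 4 x M values after four blocks
    n_fc = (input_shape[0] // 16) * (input_shape[1] // 16) * m_filters
    x = layers.Dense(n_fc, activation="relu")(x)
    x = layers.Dense(n_fc, activation="relu")(x)
    outputs = layers.Dense(3, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_conv_net(m_filters=16)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```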

2.5. Methodology

The methodology of our experiments is shown in the workflow diagram in Figure 5. In a first step, the laser cuts are performed and, during cutting, videos are taken from the process zone with the high-speed camera; some of these images have been shown in Figure 3. After cutting, the cut kerfs are analyzed with an optical microscope and categorized manually as to whether a good cut, burr formation or a cut interruption occurred (examples of these images are shown in Figure 2). Based on this classification, the videos taken during laser cutting are labeled with the corresponding class. In case the cut quality changes within one cut, the video is divided, so the quality is constant within a video. Then the videos are separated into training videos and test videos, so the images for testing are not from videos used for training. From the training videos, the single frames are extracted, and with these images the neural network is trained. Furthermore, the single frames are extracted from the test videos, and the resulting images are used to test the trained neural network.

(Figure content: laser cutting & video acquisition → sheet analysis → training videos / test videos → training images / test images → training / testing of the neural network.)

Figure 5. Workflow diagram.
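The paper does not name the tool used to split the labeled videos into single frames; a minimal sketch of this step using OpenCV (our own choice) could look as follows, where every frame inherits the label of its video and the file name and numeric label are purely illustrative.

```python
import cv2          # assumption: OpenCV for video decoding; any frame grabber works
import numpy as np

def extract_frames(video_path, label, size=(64, 128)):
    """Return all frames of one labeled cutting video together with per-frame labels."""
    frames, labels = [], []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # cv2.resize expects (width, height); this yields arrays of shape (128, 64, 3)
        frame = cv2.resize(frame, size)
        frames.append(frame.astype("float32") / 255.0)   # scale pixel values to [0, 1]
        labels.append(label)
    cap.release()
    return np.asarray(frames), np.asarray(labels)

# Example with an illustrative file name and label encoding (1 = burr formation):
# x_burr, y_burr = extract_frames("cut_07_burr.avi", label=1)
```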
3. Results

3.1. Training Behaviour

The different neural networks are trained on the training dataset and the performance is calculated on the test dataset. Exemplarily, the training behavior of the convolutional neural network with 16 filters in each convolution is shown in Figure 6. Apparently, the training accuracy rises continuously with the training epochs, reaching 99% after 10 epochs and 99.5% after 20 epochs, respectively. On the other hand, the test accuracy reaches 94% after three epochs and fluctuates around this level with further training, which is a typical behavior for neural network training [39]. Even further training, above 20 epochs, results only in a fluctuation of the accuracy rather than a continuous increase. To reduce the deviation of the test results for comparisons between different networks, the mean of the test results between 10 and 20 epochs is used.

Figure 6. Training accuracy and test accuracy of a convolutional neural network with 16 filters.
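The averaging over epochs 10 to 20 can be read directly from the Keras training history; a minimal sketch, in which the dataset array names and the batch size are placeholders rather than values from the paper:

```python
# Train for 20 epochs and average the test accuracy over epochs 10-20 to smooth
# the epoch-to-epoch fluctuation visible in Figure 6; labels are one-hot encoded.
history = model.fit(train_images, train_labels,
                    validation_data=(test_images, test_labels),
                    epochs=20, batch_size=64)
test_acc = history.history["val_accuracy"]            # one entry per epoch
mean_acc = sum(test_acc[9:20]) / len(test_acc[9:20])
print(f"Mean test accuracy, epochs 10-20: {mean_acc:.3f}")
```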

3.2. Basic Neural Network

To determine the performance of the basic neural networks, those with node numbers N between 5 and 1000 are trained on the training dataset and tested on the test dataset. The mean test accuracy between 10 and 20 training epochs and the required calculation time per image are shown in Figure 7. It is obvious that the accuracy for a very small network with only five nodes is quite high, being 92.8%, and the calculation time of 0.1 ms per image is very fast. With an increasing number of nodes, the accuracy increases to a maximum of 95.2% at 1000 nodes, which is accompanied by a higher calculation time of 0.32 ms. Parallel to the calculation time, the number of trainable parameters also increases with the number of nodes, starting from 122 thousand parameters for five nodes and reaching 25 million parameters at 1000 nodes. A further increase of the parameters is not considered to be useful, because the training dataset consists of 420 million pixels (number of images × pixels per image), so the neural network would tend to overfit the training dataset rather than developing generalized features. Generally, with the basic neural network, accuracies of 94% (mean) are achievable.
Figure 7. Accuracy of the basic neural network as a function of nodes per fully connected layer.
3.3. Convolutional Neural Network

Under the same conditions as the basic neural network, the convolutional neural network is also trained and tested. The results of the accuracy and calculation time for filter numbers between 4 and 64 are depicted in Figure 8. The accuracy of the neural network is quite high for all filter numbers and fluctuates between 94.6% and 95.8% with no clear trend. In addition, the accuracy also varies for the same network when it is calculated several times. However, the calculation time increases clearly with the number of filters, from 0.36 ms per image to 1.77 ms. The number of trainable parameters starts at 34 thousand for four filters and increases to 8.4 million for 64 filters (details on how to calculate the number of parameters are described in [25]). On the mean, the convolutional neural network is able to classify about 95% of the images correctly.
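The calculation time per image can be measured in the same spirit; a minimal sketch, in which the random stand-in data, warm-up call and batching are our own simplifications:

```python
import time
import numpy as np

dummy = np.random.rand(1000, 128, 64, 3).astype("float32")  # stand-in for test images
model.predict(dummy[:10])                      # warm-up so graph building is not timed
start = time.perf_counter()
model.predict(dummy, batch_size=1)             # batch size 1 mimics frame-by-frame online use
per_image_ms = (time.perf_counter() - start) / len(dummy) * 1e3
print(f"{per_image_ms:.2f} ms per image")
```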
network is able to classify about 95% of the image correctly.
Figure 8. Test accuracy for the convolutional neural network as a function of the number of filters.

3.4. Comparison between Cut Failures

Since literature is available for both burr detection and cut interruption detection during laser cutting, which varies strongly in accuracy, the performance of our neural networks in detecting one cut failure at a time is also determined. Therefore, the accuracy in classifying good cuts and cuts with burr, as well as good cuts and cut interruptions, is calculated separately. For this investigation, the convolutional neural network with 16 filters is chosen, because it provides high accuracy at a comparably moderate calculation time. The results of the separated classification are shown in Figure 9. It is obvious that the detection of cut interruptions is very reliable, with the accuracy being 99.5%, as compared to 93.1% when detecting burr formation. The reason for this can also be seen in Figure 3, where good cuts are much more similar to cuts with burr, while cut interruptions look very different from both of the other failure classes. Both values individually agree with the literature values, which are 99.9% for the cut interruptions [13] and 92% for the burr detection [5], yet for burr detection in the literature a more complex burr definition is chosen. This shows that cut interruptions are much easier to detect from camera images compared to burr formations.
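One way to compute such a pairwise accuracy is to restrict the test set to the two classes in question; the following sketch assumes integer-encoded labels and is our own illustration of the evaluation, not the authors' code:

```python
import numpy as np

def pairwise_accuracy(y_true, y_pred, class_a, class_b):
    """Accuracy over all test images whose true label is class_a or class_b."""
    mask = np.isin(y_true, [class_a, class_b])
    return float(np.mean(y_true[mask] == y_pred[mask]))

# Illustrative label encoding: 0 = good cut, 1 = burr, 2 = interruption
# pairwise_accuracy(y_true, y_pred, 0, 1)   # good cut vs. burr         (about 0.931)
# pairwise_accuracy(y_true, y_pred, 0, 2)   # good cut vs. interruption (about 0.995)
```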


Figure 9. Comparison of the test accuracy between interruptions and burr formations.

3.5. Error Analysis

For the error analysis, single misclassified images and the distribution of misclassifications are analyzed. For the temporal distribution, a video of a cut with different cut qualities is produced. The measured quality obtained by the optical microscope and the prediction of the convolutional neural network with 16 filters are shown in Figure 10. Misclassifications are indicated by red dots that are not placed on the blue line, and it can clearly be seen which misclassifications occur more often than others. The most frequent misclassifications are cuts predicted as burr. Interruptions are rarely misclassified, and other images are seldom misclassified as interruptions, which accompanies the results in Section 3.4. The distribution of the misclassifications reveals no concentration on a specific sector, but minor accumulations of several misclassifications are observed. In addition, some areas without any misclassification or with only single misclassifications can be found. These results reveal that misclassifications do not occur all at once or at a special event but are widely distributed.

Figure 10. Measured image class and prediction by the neural network.
To analyze the misclassified images, two exemplified images from a good cut classified as cut with burr are shown in Figure 11. In contrast to the images in Figure 3, where the bright area is followed by two tapered stripes, in Figure 11 these stripes are hardly observed. However, these following stripes are important for the classification, because in this area the burr is generated. Therefore, in the case of missing stripes, the classification between cuts with and without burr is difficult and thus characterized by many misclassifications.

Figure 11. Two examples of cuts misclassified as burr.
4. Discussion

With the classification into three different categories during laser cutting, good cuts can be distinguished from cuts with burr and cut interruptions. The convolutional neural network has, depending on the number of filters, a better classification accuracy by about 1% when compared to the basic networks. The maximum accuracy for the basic neural networks (1000 nodes) is also lower, being 95.2% as compared to a 95.8% accuracy of the convolutional neural network with 16 filters. Nevertheless, the difference between both neural network types is small, which can be explained by the objects in the images always having the same size, orientation and brightness, which is not usually the case for many other classification tasks [34,35]. As a consequence, the basic neural network can classify the images by bright or dark zones and does not necessarily require learning and extracting abstractions of 2D features, which is the main advantage of convolutional neural networks [25,26].

For the required accuracy, the size of the cut failure has to be considered. Because the accuracy is below 100%, a post-processing algorithm is necessary which should report an error only when a certain amount of failures occurs within a sequence of images. To detect geometrically long failures, which can occur, e.g., by unclean optics, our classification system is adequate. Very short failures, like single burr drops on a cutting edge, are probably not detectable with our system. It is remarkable for the results with both neural networks, however, that at least 92.8% accuracy (cf. Figure 7) can be achieved
with any network configuration independent from network type, number of nodes or
filters. This means that about 93% of the images are easy to classify because they differ
strongly between the categories. Furthermore, about 2% of the images can be classified by
more complex neural networks (cf. Sections 3.2 and 3.3). About 5% of the images, mostly
between the categories good cuts and cuts with burr formation, are very difficult to classify
because the images are quite similar (cf. Figure 3). For an industrial application, it has to
be further considered whether the intentionally enforced cut failures are representative for
typical industrial cut failures, e.g., as a result of unclean optics, which are not reproducible
in scientific studies.
The main advantage of the basic neural network is the much lower computation time
between 0.1 ms and 0.32 ms, while the convolutional neural network requires 0.36 ms to
1.7 ms, respectively. For typical available industrial cameras having maximum frame rates
in the range of 1000 Hz, a calculation time for the classification of about 1 ms is sufficient,
which is fulfilled by all basic and most of our convolutional neural networks. A similar
frame rate was also used by [5] when detecting burr formations during laser cutting. With
maximum cutting speeds of modern laser machines in the range of 1000 mm/s, a local
resolution of 1 mm is still achieved, which can clearly be considered adequate for industrial
use.
Following this fundamental and comparative analysis, future investigations have to
address field trials of the proposed sensor system and classification scheme in industrial
cutting processes. Within such industrial environments additional error sources may
appear and further reduce the cut quality, such as damaged gas nozzles or partially unclean
optics which in turn are difficult to reproduce under laboratory conditions. The images
from these error sources can be added to the training data and improve the detection rate
of the classification system. To improve the detection rate it is also possible to classify not a
single image but a series of 3 to 10 subsequent images, which reduces the influence of a
single misleading image.
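Such a post-processing step could, for instance, be a sliding majority vote over the per-image predictions; the window length and label names below are illustrative assumptions, not values from the paper:

```python
from collections import Counter, deque

def majority_filter(predictions, window=5):
    """Smooth a stream of per-image class labels by majority vote over the last frames."""
    recent = deque(maxlen=window)
    smoothed = []
    for label in predictions:
        recent.append(label)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# A single spurious "burr" prediction inside a run of good cuts is suppressed:
print(majority_filter(["cut", "cut", "burr", "cut", "cut"]))  # -> all "cut" after filtering
```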

5. Conclusions
Overall, with our neural network approach, two cut failures during laser cutting can
be detected simultaneously by evaluating camera images with artificial neural networks.
With different neural network designs up to 95.8% classification accuracy can be achieved.
Generally, convolutional neural networks have only minor classification advantages of
about 1% over basic neural networks, while the basic neural networks are considerably
faster in calculation. The detection rate of cut interruptions is remarkably higher than that of
burr formation, because the images of cut interruptions differ more strongly
from the good cuts than the images with burr formation do. In general, the detection
rate is high enough to advance industrial applications.

Author Contributions: Conceptualization, B.A.; methodology, B.A.; software, B.A.; validation, B.A.,
investigation, B.A.; writing—original draft preparation, B.A. and R.H.; project administration, R.H.,
All authors have read and agreed to the published version of the manuscript.
Funding: This project was funded by the Bavarian Research Foundation.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A
Table of all performed laser cuts with machine parameters, cut category and use for
training and test.

Nr.  Laser power (W)  Feed rate (mm/s)  Nozzle distance (mm)  Focus position (mm)  Category  Use
1 500 600 0.5 −1.25 Cut Training
2 300 400 0.5 −1.25 Cut Training
3 500 500 0.5 −1.25 Cut Training
4 500 300 0.5 −1.25 Cut Training
5 300 600 0.5 −1.25 Interruption Test
6 200 500 1,0 −1.75 Interruption Training
7 500 500 0.8 −1.55 Burr Training
8 500 600 0.5 −1.25 Cut Test
9 250 500 0.5 −1.25 Interruption Test
10 500 400 0.5 −1.25 Cut Test
11 500 600 0.5 −1.25 Interruption Training
12 500 500 1,0 −1.75 Burr Training
13 500 500 0.5 −1.25 Cut Test
14 500 300 0.5 −1.25 Cut Training
15 500 200 0.5 −1.25 Cut Test
16 500 500 0.5 −1.25 Cut Training
17 500 500 0.5 −1.25 Interruption Test
18 400 500 0.9 −1.65 Burr Training
19 500 500 0.8 −1.55 Burr Training
20 200 500 1,0 −1.75 Interruption Training
21 500 300 0.5 −1.45 Burr Training
22 150 500 0.5 −1.25 Interruption Test
23 500 400 0.5 −1.25 Cut Training
24 500 500 0.5 −1.25 Cut Test
25 500 400 0.5 −1.25 Training
26 400 500 0.8 −1.55 Burr Training
27 500 500 0.8 −1.55 Burr Test
28 150 500 0.5 −1.25 Interruption Training
29 200 500 1,0 −1.75 Interruption Test
30 400 500 0.9 −1.65 Burr Training
31 300 600 0.5 −1.25 Interruption Training
32 500 500 1,0 −1.75 Burr Training
33 500 300 0.5 −1.25 Cut Test
34 400 500 1,0 −1.75 Burr Test
35 500 500 0.5 −1.25 Interruption Training
36 500 400 0.5 −1.25 Cut Training
37 150 500 0.5 −1.25 Interruption Training
38 400 500 0.5 −1.45 Burr Training
39 500 600 0.5 −1.25 Interruption Training
40 400 400 0.5 −1.25 Cut Training
41 500 600 0.5 −1.25 Cut Training
42 500 600 0.5 −1.25 Interruption Test
43 400 500 0.8 −1.55 Burr Test
44 400 500 0.5 −1.45 Burr Test
45 400 400 0.5 −1.25 Cut Test
46 500 500 0.5 −1.25 Cut Training
47 500 200 0.5 −1.25 Cut Training
48 300 400 0.5 −1.25 Interruption Training
49 400 500 0.5 −1.45 Burr Training
50 500 400 0.5 −1.25 Cut Test
51 500 500 0.8 −1.55 Burr Training
52 400 500 0.9 −1.65 Burr Training
53 400 500 0.9 −1.65 Burr Test
54 500 300 0.5 −1.45 Burr Training
55 300 400 0.5 −1.25 Cut Training
56 500 300 0.5 −1.45 Burr Test
57 250 500 0.5 −1.25 Interruption Training
58 300 400 0.5 −1.25 Cut Training
59 300 400 0.5 −1.25 Interruption Training
60 300 600 0.5 −1.25 Interruption Training
61 300 400 0.5 −1.25 Interruption Test

References
1. Kratky, A.; Schuöcker, D.; Liedl, G. Processing with kW fibre lasers: Advantages and limits. In Proceedings of the XVII
International Symposium on Gas Flow, Chemical Lasers, and High-Power Lasers, Lisboa, Portugal, 15–19 September 2008;
p. 71311X.
2. Sichani, E.F.; de Keuster, J.; Kruth, J.; Duflou, J. Real-time monitoring, control and optimization of CO2 laser cutting of mild steel
plates. In Proceedings of the 37th International MATADOR Conference, Manchester, UK, 25–27 July 2012; pp. 177–181.
3. Sichani, E.F.; de Keuster, J.; Kruth, J.-P.; Duflou, J.R. Monitoring and adaptive control of CO2 laser flame cutting. Phys. Procedia
2010, 5, 483–492. [CrossRef]
4. Wen, P.; Zhang, Y.; Chen, W. Quality detection and control during laser cutting progress with coaxial visual monitoring. J. Laser
Appl. 2012, 24, 032006. [CrossRef]
5. Franceschetti, L.; Pacher, M.; Tanelli, M.; Strada, S.C.; Previtali, B.; Savaresi, S.M. Dross attachment estimation in the laser-cutting
process via Convolutional Neural Networks (CNN). In Proceedings of the 2020 28th Mediterranean Conference on Control and
Automation (MED), Saint-Raphaël, France, 16–18 September 2020; pp. 850–855.
6. Schleier, M.; Adelmann, B.; Neumeier, B.; Hellmann, R. Burr formation detector for fiber laser cutting based on a photodiode
sensor system. Opt. Laser Technol. 2017, 96, 13–17. [CrossRef]
7. Garcia, S.M.; Ramos, J.; Arrizubieta, J.I.; Figueras, J. Analysis of Photodiode Monitoring in Laser Cutting. Appl. Sci. 2020, 10, 6556.
[CrossRef]
8. Levichev, N.; Rodrigues, G.C.; Duflou, J.R. Real-time monitoring of fiber laser cutting of thick plates by means of photodiodes.
Procedia CIRP 2020, 94, 499–504. [CrossRef]
9. Kek, T.; Grum, J. Use of AE monitoring in laser cutting and resistance spot welding. In Proceedings of the EWGAE 2010,
Vienna, Austria, 8–10 September 2010; pp. 1–7.
10. Adelmann, B.; Schleier, M.; Neumeier, B.; Hellmann, R. Photodiode-based cutting interruption sensor for near-infrared lasers.
Appl. Opt. 2016, 55, 1772–1778. [CrossRef]
11. Adelmann, B.; Schleier, M.; Neumeier, B.; Wilmann, E.; Hellmann, R. Optical Cutting Interruption Sensor for Fiber Lasers. Appl.
Sci. 2015, 5, 544–554. [CrossRef]
12. Schleier, M.; Adelmann, B.; Esen, C.; Hellmann, R. Cross-Correlation-Based Algorithm for Monitoring Laser Cutting with
High-Power Fiber Lasers. IEEE Sens. J. 2017, 18, 1585–1590. [CrossRef]
13. Adelmann, B.; Schleier, M.; Hellmann, R. Laser Cut Interruption Detection from Small Images by Using Convolutional Neural
Network. Sensors 2021, 21, 655. [CrossRef]
14. Tatzel, L.; León, F.P. Prediction of Cutting Interruptions for Laser Cutting Using Logistic Regression. In Proceedings of the Lasers
in Manufacturing Conference 2019, Munich, Germany, 24 July 2019; pp. 1–7.
15. Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016,
187, 27–48. [CrossRef]
16. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput.
Intell. Neurosci. 2018, 2018, 7068349. [CrossRef] [PubMed]
17. Ting, F.F.; Tan, Y.J.; Sim, K.S. Convolutional neural network improvement for breast cancer classification. Expert Syst. Appl. 2018,
120, 103–115. [CrossRef]
18. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adeli, H. Deep convolutional neural network for the automated detection and
diagnosis of seizure using EEG signals. Comput. Biol. Med. 2018, 100, 270–278. [CrossRef] [PubMed]
19. Perol, T.; Gharbi, M.; Denolle, M. Convolutional neural network for earthquake detection and location. Sci. Adv. 2018, 4, e1700578.
[CrossRef]
20. Dung, C.V.; Anh, L.D. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019,
99, 52–58. [CrossRef]
21. Zhang, L.; Yang, F.; Zhang, Y.D.; Zhu, Y.J. Road crack detection using deep convolutional neural network. In Proceedings of the
2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3708–3712.
22. Nakazawa, T.; Kulkarni, D. Wafer Map Defect Pattern Classification and Image Retrieval Using Convolutional Neural Network.
IEEE Trans. Semicond. Manuf. 2018, 31, 309–314. [CrossRef]
23. Urbonas, A.; Raudonis, V.; Maskeliūnas, R.; Damaševičius, R. Automated Identification of Wood Veneer Surface Defects Using
Faster Region-Based Convolutional Neural Network with Data Augmentation and Transfer Learning. Appl. Sci. 2019, 9, 4898.
[CrossRef]
24. Khumaidi, A.; Yuniarno, E.M.; Purnomo, M.H. Welding defect classification based on convolution neural network (CNN) and
Gaussian kernel. In Proceedings of the 2017 International Seminar on Intelligent Technology and Its Applications (ISITIA),
Surabaya, Indonesia, 28–29 August 2017; pp. 261–265. [CrossRef]
25. Alom, M.Z.; Taha, T.M.; Yakopcic, C.; Westberg, S.; Sidike, P.; Nasrin, M.S.; van Esesn, B.C.; Awwal, A.A.S.; Asari, V.K.
The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. March 2018. Available online:
http://arxiv.org/pdf/1803.01164v2 (accessed on 26 August 2021).
26. O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. November 2015. Available online: http://arxiv.org/pdf/1511.08458v2 (accessed on 26 August 2021).
27. Emura, M.; Landgraf, F.J.G.; Ross, W.; Barreta, J.R. The influence of cutting technique on the magnetic properties of electrical
steels. J. Magn. Magn. Mater. 2002, 254, 358–360. [CrossRef]
28. Schoppa, A.; Schneider, J.; Roth, J.-O. Influence of the cutting process on the magnetic properties of non-oriented electrical steels.
J. Magn. Magn. Mater. 2000, 215, 100–102. [CrossRef]
29. Adelmann, B.; Hellmann, R. Process optimization of laser fusion cutting of multilayer stacks of electrical sheets. Int. J. Adv. Manuf.
Technol. 2013, 68, 2693–2701. [CrossRef]
30. Adelmann, B.; Lutz, C.; Hellmann, R. Investigation on shear and tensile strength of laser welded electrical sheet stacks. In
Proceedings of the 2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), Munich, Germany,
20–24 August 2018; pp. 565–569.
31. Arntz, D.; Petring, D.; Stoyanov, S.; Quiring, N.; Poprawe, R. Quantitative study of melt flow dynamics inside laser cutting kerfs
by in-situ high-speed video-diagnostics. Procedia CIRP 2018, 74, 640–644. [CrossRef]
32. Tenner, F.; Klämpfl, F.; Schmidt, M. How fast is fast enough in the monitoring and control of laser welding? In Proceedings of
the Lasers in Manufacturing Conference, Munich, Germany, 22–25 June 2015.
33. Keshari, R.; Vatsa, M.; Singh, R.; Noore, A. Learning Structure and Strength of CNN Filters for Small Sample Size Training. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June
2018.
34. Chen, G.; Han, T.X.; He, Z.; Kays, R.; Forrester, T. Deep convolutional neural network based species recognition for wild animal
monitoring. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October
2014; pp. 858–862.
35. Dong, Z.; Wu, Y.; Pei, M.; Jia, Y. Vehicle Type Classification Using a Semisupervised Convolutional Neural Network. IEEE Trans.
Intell. Transp. Syst. 2015, 16, 2247–2256. [CrossRef]
36. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. July
2017. Available online: http://arxiv.org/pdf/1707.01083v2 (accessed on 26 August 2021).
37. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-Level Accuracy with 50x Fewer
Parameters and <0.5MB Model Size. February 2016. Available online: http://arxiv.org/pdf/1602.07360v4 (accessed on 26 August
2021).
38. Bello, I.; Zoph, B.; Vasudevan, V.; Le, Q.V. Neural Optimizer Search with Reinforcement Learning. September 2017. Available
online: http://arxiv.org/pdf/1709.07417v2 (accessed on 26 August 2021).
39. An, S.; Lee, M.; Park, S.; Yang, H.; So, J. An Ensemble of Simple Convolutional Neural Network Models for MNIST Digit
Recognition. arXiv 2020, arXiv:2008.10400.
