A Parametric Study of 3D Printed Polymer Gears
https://doi.org/10.1007/s00170-020-05270-5
ORIGINAL ARTICLE
Received: 12 September 2019 / Accepted: 30 March 2020 / Published online: 28 April 2020
© The Author(s) 2020
Abstract
The selection of printing parameters for 3D printing can dramatically affect the dynamic performance of components such as polymer spur gears. In this paper, the performance of 3D printed gears has been optimised using a machine learning process: a genetic algorithm (GA)-based artificial neural network (ANN) multi-parameter regression model was created. Four print parameters were considered in the 3D printing process: printing temperature, printing speed, printing bed temperature and infill percentage. The parameter settings were generated by the Sobol sequence. Moreover, a sensitivity analysis was carried out, and leave-one-out cross validation was applied to the GA-based ANN, which showed relatively accurate performance in the prediction and optimisation of 3D printed gear performance. The wear performance of the 3D printed gears increased by a factor of 3 after the optimised parameter setting was applied during their manufacture.
relations of each multi-parameter. The process is shown in Fig. 1. (In the MATLAB implementation, the Sobol output X is mapped onto the parameters, e.g. Input(:,1) = Temperature' for the first input column and the 4th row X(4,:) of X for the infill values.) Table 1 shows the results of the tests via the different Sobol sequence settings.
Table 1 Input parameters generated by the Sobol sequence and output from test rig
Testing number Printing temp (°C) Printing speed (mm/s) Bed temperature (°C) Infill percentage (%) Test result, gear fatigue time (hours)
1 230 25 30 20 0.04
2 253 50 50 50 20
3 264 38 60 35 11.11
4 241 63 40 65 30
5 247 44 55 28 1.94
6 269 69 35 58 24.69
7 258 31 45 43 9.32
8 236 56 65 73 21.03
9 238 41 43 61 15.57
10 261 66 63 31 10.1
11 272 28 53 76 30.18
12 250 53 33 46 20.6
13 244 34 68 54 10.12
14 267 59 48 24 6.66
15 255 47 38 69 12.9
16 233 72 58 39 0.36
17 234 48 64 44 12.77
18 257 73 44 74 36.8
19 268 36 34 29 1.65
20 245 61 54 59 16.66
21 251 30 49 37 2.88
22 274 55 69 67 20.16
23 262 42 59 22 2.67
24 240 67 39 52 10.32
25 237 33 51 71 12.24
26 260 58 31 41 1.96
27 271 45 41 56 7.28
28 248 70 61 26 0.06
29 243 39 36 78 21.24
30 265 64 56 48 27.78
31 254 27 66 63 25.71
32 231 52 46 33 0.39
33 232 38 54 55 25.2
34 255 63 34 25 11.38
35 266 26 44 70 8.4
36 243 51 64 40 1.76
37 249 32 39 62 5.16
38 271 57 59 32 4.17
39 260 45 69 77 34.49
40 238 70 49 47 15.67
41 241 29 62 28 0.07
42 263 54 42 58 14.79
43 274 41 32 43 3.06
44 252 66 52 73 30.45
45 246 48 47 21 0.04
46 269 73 67 51 12.77
47 257 35 57 36 16.38
48 235 60 37 66 32.77
49 234 37 41 79 25
Int J Adv Manuf Technol (2020) 107:4481–4492
50 256 62 61 49 25.41
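The Sobol-sequence sampling above can be sketched in code. A full Sobol generator needs direction-number tables, so the sketch below uses a Halton sequence instead, another low-discrepancy sequence that illustrates the same idea of spreading settings evenly over the parameter space; the parameter ranges are inferred from Table 1 and the function names are illustrative:

```python
def radical_inverse(n: int, base: int) -> float:
    """Map the integer index n to a low-discrepancy point in [0, 1)."""
    inv, frac = 0.0, 1.0 / base
    while n > 0:
        inv += (n % base) * frac
        n //= base
        frac /= base
    return inv

# One prime base per print parameter; ranges inferred from Table 1.
PARAMS = [
    ("printing_temp_C", 2, (230.0, 275.0)),
    ("printing_speed_mm_s", 3, (25.0, 75.0)),
    ("bed_temp_C", 5, (30.0, 70.0)),
    ("infill_pct", 7, (20.0, 80.0)),
]

def generate_settings(n_samples: int):
    """Return n_samples print-parameter settings spread evenly over the ranges."""
    settings = []
    for i in range(1, n_samples + 1):
        row = {name: lo + radical_inverse(i, base) * (hi - lo)
               for name, base, (lo, hi) in PARAMS}
        settings.append(row)
    return settings
```

Using a different prime base per dimension keeps the 4-tuples from aligning on a grid, so each new sample fills a gap left by the previous ones, much as the Sobol settings in Table 1 do.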
logical thinking approaches [26]. ANN is an appropriate method for solving incomplete associative memory, defect characteristic pattern recognition and automatic learning [27]. There are three main reasons why ANN was selected for this research: first, the ANN is significantly computationally cheaper than other methods [28]; secondly, ANN has a strong fault-tolerant ability that minimises the uncertainty during the experiments; thirdly, ANN is adept at addressing multi-parameter regression problems, which are hard to solve with purely numerical methods [29]. The back-propagation (BP) training algorithm is the most frequently used ANN training method [30].

2.3.1 Back-propagation networks

The detailed stages of the BP training method are the following: (1) The sample data for training are input to the network. (2) The data move forward from the input stage through each hidden layer to the output stage, where the output data are generated. (3) The difference between the target data and the output data is compared and, if the differences are larger than expected, they are transferred back to the hidden layer. (4) The weight of each neuron is adjusted based on the deviation via the steepest descent method, i.e. the minimum (maximum) value of the loss function is sought along the gradient descent (ascent) direction, and the deviation is transmitted back to the input layer. (5) The values proceed forward again and, with repeated iteration, the error constantly diminishes. (6) The training process is over when the gap between the target value and the output value is smaller than the expected value.

Figure 4 shows the structure of the ANN model. The ANN model in this paper was implemented with the MATLAB Neural Network Toolbox. Moreover, a loop was fitted in the model to select the optimal number of hidden neurons from 1 to 20; the results show that 5 hidden neurons provide the least error. The ANN model in this paper is therefore composed of 4 input layer nodes, 5 hidden layer nodes and 1 output layer node. The initial parameters of the ANN, such as the connection weights between the input, hidden and output layers and the threshold values of the hidden and output layers, have a large influence on the predictive performance. Due to the small number of training data, the best validation performance could occur at epoch 1, as shown in Fig. 5.

2.4 Genetic algorithm

For traditional ANN predictive models without a combined optimisation algorithm, the initial parameters are determined randomly, which is inefficient and prone to convergence to local optima, slow convergence, overtraining and subjectivity in determining the model parameters [31]. The genetic algorithm (GA) is able to optimise the initial parameters of machine learning models to increase the estimation accuracy and accelerate the convergence of ANN models [32, 33]. The GA is a parallel random search optimisation algorithm that simulates the genetic mechanisms of natural selection and biological evolution, and it can conduct an efficient heuristic search with parallel computing [34]. It introduces the biological evolutionary principle of 'survival of the fittest' into the coded population formed by the optimisation parameters and chooses individuals according to their fitness function: through the operations of selection, crossover and mutation, individuals with high fitness values are retained and individuals with low fitness are eliminated [35]. The new generation inherits the information of the previous generation and is superior to it. This iteration is repeated until the predetermined stopping criterion is met [36].

The basic operations of the GA are the following:

2.4.1 Selection operation

The selection operation refers to the selection of individuals from the old generation into the new generation [37]. The probability that an individual is selected from the old generation into the new generation is related to the fitness value of the individual: the better the fitness value, the higher the probability of being selected [38].

2.4.2 Crossover operation

The crossover operation refers to the selection of two individuals from the old generation to produce new individuals by randomly exchanging and combining the chromosomal locations of the two old individuals [39].

2.4.3 Mutation operation

The mutation operation refers to the selection of an individual from the old generation and the choice of a point in its chromosome to mutate, producing a new individual. The basic process of the GA is shown in Fig. 6.
[Figure panels (a)–(c): train/validation/test/best/goal traces plotted against epoch, and an error histogram of instances versus errors = targets − outputs]
Fig. 5 Performance validation of ANN
Fig. 7 Performance result fitted with test data (panels a–c)
2.5 Leave-one-out cross validation

Leave-one-out cross validation is a method which evaluates the performance of a machine learning algorithm, in this case the ANN. It can increase the prediction accuracy by increasing the number of training data points to N − 1 and decreasing the number of test data points to 1. Hence, leave-one-out cross validation eliminates the randomness of dividing instances into training and testing sets. By changing the ratio of training to testing data of the ANN, the training algorithm is maximised to provide a better understanding of the model and a clearer pattern of the Sobol sequence [40]. Due to the small amount of data, it is workable to maximise the number of training data in this way.

The equation of Garson's algorithm is shown in Eq. 1; the results of the sensitivity analysis using Garson's algorithm are shown in Fig. 9.

R_ij = [ Σ_{j=1}^{L} ( W_ij W_jk / Σ_{r=1}^{N} W_rj ) ] / [ Σ_{i=1}^{N} Σ_{j=1}^{L} ( W_ij W_jk / Σ_{r=1}^{N} W_rj ) ]    (1)

where R_ij is the relative importance of the input parameters; W_ij and W_jk are the connection weights of the input–hidden layer and the hidden–output layer; i = 1, 2, …, N; k = 1, 2, …, M (N and M are the numbers of input and output parameters, and L is the number of hidden nodes).
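Equation 1 can be sketched in code as follows. The weight matrices and their values are illustrative, and absolute values of the weights are taken, as is common in implementations of Garson's algorithm:

```python
def garson_importance(w_ih, w_ho):
    """Relative importance of each input per Garson's algorithm (Eq. 1).

    w_ih[i][j]: weight from input i to hidden node j;
    w_ho[j]: weight from hidden node j to the single output.
    """
    n_inputs, n_hidden = len(w_ih), len(w_ho)
    # contribution of input i through hidden node j, normalised per hidden node
    contrib = [[abs(w_ih[i][j]) * abs(w_ho[j]) /
                sum(abs(w_ih[r][j]) for r in range(n_inputs))
                for j in range(n_hidden)]
               for i in range(n_inputs)]
    totals = [sum(row) for row in contrib]
    grand = sum(totals)
    return [t / grand for t in totals]
```

For the 4-5-1 network in this paper, w_ih would be 4×5 and w_ho would have 5 entries; the returned list sums to 1 and ranks the four print parameters by influence, as in Fig. 9.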
Based on the established machine learning models, the sensitivity analysis of the input parameters is conducted by adopting Garson's algorithm, proposed by Garson in 1991 [41, 42] and later modified by Goh, for determining the relative importance of the input parameters to the output parameter [39, 41, 43].

Figure 7 shows the performance of each model fitted with the original test data. Figure 7a shows the linear fit between the ANN model and the test data, giving a Pearson product-moment correlation coefficient of 0.85326 and an R-squared of 0.728, a high correlation with the original test data [44, 45]. However, to optimise the performance, it is possible
to increase the accuracy of the prediction model. Hence, the GA-based ANN (Fig. 7b) has been applied to the model, which yields closer agreement between the measured and predicted values of gear fatigue time. R² increases from 0.728 to 0.8 when the GA is applied; moreover, Pearson's r increases by nearly 5%. This result can be explained by the fact that the accuracy of the proposed ANN-based predictive model was increased by GA optimisation. Furthermore, the initial target was to achieve an R-squared value greater than 0.9; hence, the GA-based ANN provides a relatively satisfactory result. However, optimisation and prediction accuracy can be further increased by applying leave-one-out cross validation.

Figure 7c shows the model with both GA and leave-one-out cross validation applied: Pearson's r and R² dramatically increase from 0.83 to 0.97 and from 0.728 to 0.956, respectively. Hence, the model with leave-one-out cross validation applied was selected as the final model for further analysis.

Figure 8 shows the result of the optimisation performed by the GA, which is used to optimise the ratio between ω and δ in order to improve the accuracy of the ANN. In the GA optimisation process, 200 iterations were selected, balancing computational time against convergence towards an optimised solution; each iteration has a population of 50. The solid line represents the average error corresponding to the test data from the wear test rig, and the dotted line represents the best fit. The average error decreased from around 23 to 10%, and the best fit improved from 10 to less than 5%. Hence, applying the GA increases the efficiency and accuracy of the ANN regression model.

Figure 9 shows the comparison of the predictions of each model against the tests performed on the wear test rig. Evaluated with metrics such as Pearson's r and R-squared, leave-one-out cross validation applied to the GA-based ANN model provides better accuracy than either a conventional ANN model or the GA-based ANN model alone. Hence, leave-one-out cross validation applied to the GA-based ANN model provides an efficient and relatively accurate model for predicting the performance of 3D printed nylon spur gears with different manufacturing parameters.

The model reveals (Fig. 10) that printing temperature contributes around 22% of the weighting to the performance of a printed gear. Printing speed has around a 23% influence on the performance. Bed temperature contributes an 8.6% influence on the final result, showing a reduced importance compared with the rest of the parameters. Hence, by using Garson's algorithm, it is possible to identify that the most influential parameter for gear performance is infill percentage. Conceptually, this result makes sense, as increasing the infill percentage increases the rigidity of the gear under load.

Fig. 12 Five tests using the optimisation setting for 3D printed gears

In order to explore the power of the model in predicting optimal gear performance and outputting the required 3D printer parameters, a simulation was carried out. Figure 8 shows the simulation of 14,256 combinations of different