

Development of Methodology for Finite

Element Simulation of Overhead Guard


Impact Test

Axel Hallén
Jacob Hjorth

Linköping University | IEI - Department of Management and Engineering


Master’s Thesis, 30hp | Mechanical Engineering - Applied Mechanics
Spring 2022 | LIU-IEI-TEK-A–22/04290–SE
Copyright
The publishers will keep this document online on the Internet – or its possible replacement – for
a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to
download, or to print out single copies for his/her own use and to use it unchanged for non-commercial
research and educational purposes. Subsequent transfers of copyright cannot revoke this permission.
All other uses of the document are conditional upon the consent of the copyright owner.
The publisher has taken technical and administrative measures to assure authenticity, security and
accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work
is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures
for publication and for assurance of document integrity, please refer to its www home page:
https://ep.liu.se/.

© 2022, Axel Hallén & Jacob Hjorth

Attribution-NonCommercial (CC BY-NC)

Acknowledgements
This master’s thesis is the final part of our education at the Division of Solid Mechanics at Linköping
University, and has been carried out at Toyota Material Handling Manufacturing Sweden AB in
Mjölby. Firstly, we would like to thank Toyota Material Handling Manufacturing Sweden AB for
giving us the opportunity to conduct such an interesting and multifaceted master’s thesis. Specif-
ically, we want to thank our supervisor Maria Nygren for her guidance and support during our
work. We would also like to thank Andreas Ludvigsson for his help with CAD, as well as Peter
Hahne for conducting the overhead guard impact test. In addition, we want to thank the
rest of the people at Toyota for their helpful and welcoming attitude.

We would also like to extend our gratitude to Assoc. Prof. Mattias Calmunger and the Division
of Engineering Materials at Linköping University for their support and for letting us use their
equipment for material testing. Furthermore, a big thank you goes out to our academic supervisor
Prof. Jonas Stålhand, and our examiner Prof. Anders Klarbring for their advice and guidance.

Last but not least, we wish to express our sincere gratitude towards Linköping University for the
excellent education we have received, which has equipped us with the necessary tools to carry out
this master’s thesis.

Axel Hallén
Jacob Hjorth
Linköping, May 2022

Abstract
Forklifts that are capable of lifting heavy loads and reaching high lift heights are required by
standards to have an overhead guard to protect the operator from falling objects. The same standards
specify a standardized procedure for testing the strength of these overhead guards. The test involves
dropping ten 45 kg wooden cubes and a heavy timber load onto the overhead guard. These destructive
tests are time-consuming and expensive, and the purpose of this master's thesis is to develop a
methodology for simulating this kind of test using the finite element method with a large-displacement,
explicit scheme in the solver RADIOSS by Altair. This was achieved by first designing, constructing,
and testing a physical prototype of an overhead guard to serve as a reference against which a finite
element methodology could be validated. The work also included tensile testing of the overhead guard
material, both to obtain material data for the same type of material as the prototype and to determine
Johnson-Cook material parameters, which are hard to come by in the literature. Next, a basic finite
element model was created, which showed a very large discrepancy compared to the physical test
results. An extensive investigation into aspects of finite element modeling and material modeling was
undertaken, resulting in a final model that overestimated the displacements by only about 40 %. The
remaining inaccuracy is believed to stem mostly from inadequate strain-rate sensitivity data, caused
by limitations in the resources available for material testing.

Keywords: Explicit Finite Element Analysis, Forklift Overhead Guard, RADIOSS, Drop Test,
Strain-Rate Sensitivity, Johnson-Cook, Plasticity

Contents
1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Theory 3
2.1 Overhead Guard Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Tensile Test Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.3 Large Deformation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Explicit Nonlinear Finite Element Analysis . . . . . . . . . . . . . . . . . . . . . . 9
2.4.1 True Stress/Strain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4.2 Load Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4.3 Time-Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4.4 Mass Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.5 Damping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.6 Energy Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Material Constitutive Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5.1 Linear-Elastic-Perfect-Plastic Model . . . . . . . . . . . . . . . . . . . . . . 16
2.5.2 Johnson-Cook Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.6 FE Modeling Aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.1 Discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.2 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6.3 Welds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.7 FE-modeling of Contact Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3 Method 27
3.1 Prototype Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Material Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.1 Testpiece design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.2 Conventional Tensile Testing . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.3 Tensile Testing at Varying Strain-Rates . . . . . . . . . . . . . . . . . . . . 30
3.2.4 Post-Processing of Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4 Basic FE-modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.1 Geometry Simplification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.2 Meshing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.3 Element formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.4 Material Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.5 Boundary Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.4.6 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.4.7 Analysis Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5 Advanced FE-modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5.1 Meshing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5.2 Element-formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5.3 Material Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.5.4 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5.5 Welds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5.6 Contact Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5.7 Load Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5.8 Damping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5.9 Simulation Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5.10 Time Step Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.5.11 Extra Material Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.6 Validation of Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4 Results 41
4.1 Drop Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 Tensile Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.1 Conventional Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Tests at Varying Strain rate . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.3 Basic FE-modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4 Advanced FE-modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4.1 Mesh Sensitivity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.4.2 Element Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.4.3 Time Step Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.4 Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.5 Contact Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4.6 Load Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4.7 Material Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.5 Final Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.6 Extra Material Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

5 Discussion 55
5.1 Drop Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.2 Tensile Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.3 Basic FE-Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.4 Advanced FE-Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.5 Final Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.6 Extra Material Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.7 Further Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

6 Conclusions 61

Appendices 64

List of Figures
1 Dynamic test permissible deformation according to ISO [1] . . . . . . . . . . . . . 4
2 Principal geometry of a tensile test piece . . . . . . . . . . . . . . . . . . . . . . . . 5
3 Schematic representation of the difference between engineering measures and true
measures. Note that the true stress-strain curve is not necessarily bi-linear . . . . . 10
4 Characteristic length of 2D elements . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5 General behaviour of fundamental energy measures in a dynamic analysis . . . . . 13
6 Schematic representation of the difference between isotropic and kinematic harden-
ing modes for a von Mises yield surface sketched in the π-plane where the coordinate
axes represent the principal stresses . . . . . . . . . . . . . . . . . . . . . . . . . . 14
7 Schematic representation of the plastic modulus Ep . . . . . . . . . . . . . . . . . . 15
8 The general effect of strain-rate on the tensile properties of metals . . . . . . . . . 16
9 Schematic representation of a linear-elastic-perfect-plastic model. σy is the yield stress 16
10 Element experiencing shear-locking . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
11 Element deformed in an hourglass mode . . . . . . . . . . . . . . . . . . . . . . . . 20
12 Examples of common MPC’s . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
13 Point mass suspended in a spring and the concept of a penalty spring . . . . . . . 23
14 Common types of contact interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . 24
15 The main geometry of the prototype showing its components and how it is assembled 27
16 The geometry of the test piece designed in this thesis work . . . . . . . . . . . . . 29
17 Typical tensile response when the material exhibits an upper and a lower yield point 31
18 The rig that the overhead guard was mounted on during testing . . . . . . . . . . . 32
19 Measurement points on the overhead guard . . . . . . . . . . . . . . . . . . . . . . 32
20 The impact points used in the dynamic test . . . . . . . . . . . . . . . . . . . . . . 33
21 The impact points used in the experiments . . . . . . . . . . . . . . . . . . . . . . 33
22 Boundary conditions of the FE-model . . . . . . . . . . . . . . . . . . . . . . . . . 35
23 The connection of the frame to the legs, highlighted in red . . . . . . . . . . . . . . 35
24 Deformation of overhead guard seen post testing . . . . . . . . . . . . . . . . . . . 42
25 Picture showing one of the deformed screws that connected the frame and legs . . 43
26 Picture showing the frame having been bent upwards . . . . . . . . . . . . . . . . . 43
27 The prototype overhead guard post testing . . . . . . . . . . . . . . . . . . . . . . 43
28 Stress-strain curves from conventional testing . . . . . . . . . . . . . . . . . . . . . 44
29 Stress-strain curves from conventional testing in true measures . . . . . . . . . . . 45
30 Power law curve fit on hardening behaviour of specimen 1 . . . . . . . . . . . . . . 45
31 The fractured test pieces, specimens 1-5, which were tested conventionally . . . . . 46
32 Stress-strain response at varying strain-rates . . . . . . . . . . . . . . . . . . . . . . 46
33 Data used to determine the parameter C . . . . . . . . . . . . . . . . . . . . . . . . 47
34 The fractured test pieces, specimens 6-14, which were tested at varying strain rates 47
35 The FE-model showing the behaviour of the frame having been bent upwards when
RBE2 & beam was used . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
36 Comparison between material models 2, and 3 . . . . . . . . . . . . . . . . . . . . . 52
37 Displacement in the y-direction over the course of the simulation with the final model 53
38 Model energies during simulation with the final model . . . . . . . . . . . . . . . . 54

List of Tables
1 Friction coefficient for different material interfaces . . . . . . . . . . . . . . . . . . 26
2 Testing rates during the varying-strain rate testing . . . . . . . . . . . . . . . . . . 30
3 Material parameters for the extra material model . . . . . . . . . . . . . . . . . . . 40
4 Displacement results for dynamic test after all ten cubes . . . . . . . . . . . . . . . 41
5 Displacement results of impact drop test. Total deflection is the sum of the deflection
from the dynamic and impact test . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6 Mechanical properties obtained from conventional tests . . . . . . . . . . . . . . . . 44
7 Plastic hardening properties from conventional tests . . . . . . . . . . . . . . . . . 45
8 Mechanical properties obtained from testing at varying strain-rates . . . . . . . . . 47
9 Displacement error results from the basic FE-model . . . . . . . . . . . . . . . . . 48
10 Displacement results from the basic FE-model . . . . . . . . . . . . . . . . . . . . . 48
11 Mesh sensitivity analysis on the shell model. Changes relate to the previous value,
e.g. Coarse to Normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
12 Mesh sensitivity analysis on the solid model. Changes relate to the previous value 49
13 Investigation of through-thickness integration points. Changes relate to the previous
value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
14 Investigation of fully integrated shell elements. Changes relate to the previous value 49
15 Investigation of fully integrated solid elements. Changes relate to the previous value 49
16 Results from mass-scaling on shell model where changes relate to the previous value 50
17 Results from mass-scaling on solid model where changes relate to the previous value 50
18 Results from investigation of connections in the shell model . . . . . . . . . . . . . 50
19 Results from investigation of connections in the solid model . . . . . . . . . . . . . 50
20 Results from investigation of contact definition . . . . . . . . . . . . . . . . . . . . 51
21 Results from investigation of friction coefficient . . . . . . . . . . . . . . . . . . . . 51
22 Results from investigation of radii on cubes in the shell model . . . . . . . . . . . . 51
23 Results from investigation of radii on timber in the solid model . . . . . . . . . . . 51
24 Results from investigation of material modeling . . . . . . . . . . . . . . . . . . . . 52
25 Displacement error results from the Final FE-model . . . . . . . . . . . . . . . . . 53
26 Displacement results from the Final FE-model . . . . . . . . . . . . . . . . . . . . 53
27 Displacement error results from the extra material model . . . . . . . . . . . . . . 54

Nomenclature

Abbreviations

ISO      International Organization for Standardization
ANSI     American National Standards Institute
FE       Finite element
CFL      Courant-Friedrichs-Lewy
FOPS     Falling object protection structure
ROPS     Roll over protection structure
FEM      Finite element method
MPC      Multi-point constraint
ADYREL   Automatic dynamical relaxation
KEREL    Kinetic energy relaxation
CAE      Computer Aided Engineering

Symbols

L0        Original gauge length
S0        Original cross-sectional area
Lc        Parallel length/Characteristic length
k         Coefficient of proportionality/Equivalent nodal stiffness/Spring stiffness
Lt        Total length
X/Xi      Material coordinates
x/xi      Spatial coordinates
t         Time/Thickness
D/Dij     Rate of deformation tensor
v/vi      Velocity
L/Lij     Velocity gradient tensor
F/Fij     Deformation gradient tensor/Force vector
Ω/Ωij     Spin tensor
l         Length of body in deformed state
L         Length of body in undeformed state/Length
σ/σij     Cauchy stress
∆t        Time-step
σ̇∇ij      Jaumann stress-rate
σ̇rij      Stress-rate due to rigid body rotational velocity
ϵtrue     True strain
ϵeng      Engineering strain
σtrue     True stress
σeng      Engineering stress
Epot      Potential energy
m         Mass
g         Gravitational acceleration
h         Height
Ekinetic  Kinetic energy
E         Energy/Young's modulus
∆te       Elemental time-step
∆tn       Nodal time-step
c         Speed of sound/Gap constraint function
Ve        Element volume
Ae        Element area
ρ         Density
ν         Poisson's ratio
K         Bulk modulus
λ         Lamé's first modulus
µ         Lamé's second modulus/Coefficient of friction
M         Mass matrix
u         Displacement vector
C         Damping matrix
K         Stiffness matrix
β         Relaxation factor
T         Period
σ         Stress
ϵ         Strain/Penalty spring stiffness
σf        Flow stress
σy        Yield strength
A         Johnson-Cook yield stress/Contact area of element
B         Johnson-Cook hardening modulus
n         Johnson-Cook hardening exponent
C         Johnson-Cook strain-rate sensitivity parameter
ϵp        Effective plastic strain
ϵ̇∗        Johnson-Cook ratio of equivalent plastic strain rate to a reference plastic strain rate
ϵ̇0        Reference strain-rate
T∗        Homologous temperature
m         Johnson-Cook thermal softening exponent
σs        Stress at a given strain-rate
σu        Ultimate tensile strength
ϵu        Strain at ultimate tensile strength
R         Curvature radius
u         Displacement
Π         Total potential energy
S         Stiffness of contact segment
Ff        Friction force
FN        Normal force
ϵfailure  Strain at failure
σplastic  Plastic stress
1 Introduction

1.1 Background
Toyota Material Handling is a world-leading company in forklifts and logistics solutions, and
this master's thesis has been carried out at their site in Mjölby, Sweden. A forklift is a powerful machine
that can lift heavy loads up to high lift heights, an operation associated with safety risks that need
to be managed, especially when a human is operating the forklift. One such risk is the possibility
of loads falling down on the vehicle, either from pallet shelves or from the forks. Such accidents
can cause serious injury or death. All forklifts with a lift height capacity exceeding 1.8 meters
must therefore be equipped with an overhead guard to prevent accidents. The design and strength
of such overhead guards are regulated by the standards SS-ISO 6055:2004 [1] and ANSI/ITSDF
B56.1-2020 [2]. These standards contain both geometrical design constraints and a standardized
procedure for testing.

The test procedure is as follows: Ten wooden cubes with side length 300 mm and 45 kg mass are
to be dropped in a defined sequence from a height of 1.5 m above the overhead guard. The first
cube is dropped directly above the operator’s head position. The next nine cubes are dropped in
a circle of 300 mm radius around the first drop point. Following this first test, there are limits
on permanent deformation of the overhead guard that must be fulfilled. Moreover, there can be
no fractures in the structure. Minor crack initiations are tolerated, however. The overhead guard
is then subjected to a second drop test where a lumber load is dropped from a height causing an
impact energy defined by the forklift’s capacity and lift height. After the test is finished, perma-
nent deformation of the overhead guard is measured and verified against permissible values in the
standards.
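As a rough check of the energies implied by the test procedure above, the free-fall relations v = √(2gh) and E = mgh can be sketched as follows. This is elementary mechanics applied to the standard's numbers, not part of the standards' own text:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def impact_velocity(height_m: float) -> float:
    """Free-fall impact speed from rest: v = sqrt(2*g*h)."""
    return math.sqrt(2.0 * G * height_m)

def impact_energy(mass_kg: float, height_m: float) -> float:
    """Potential energy converted to kinetic energy at impact: E = m*g*h."""
    return mass_kg * G * height_m

# Dynamic-test cube from the standards: 45 kg dropped from 1.5 m
v = impact_velocity(1.5)       # about 5.4 m/s at impact
E = impact_energy(45.0, 1.5)   # about 662 J per drop
```

These per-drop figures put the later FE load modeling in context: each cube strikes the guard at roughly 5.4 m/s carrying roughly 0.66 kJ.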

These drop tests are performed experimentally and are expensive, dangerous, and time-consuming.
It is therefore desirable to replace these physical tests with numerical simulations. The standards
allow for virtual testing as a means of validating a forklift's safety, provided that the numerical
simulations have been sufficiently validated on similar roof designs and reflect experimental results
in an accurate and satisfying manner. To replace the physical experiments with numerical
simulations, the following must be done. First, a prototype overhead guard similar to existing
solutions is designed and produced. Second, the prototype is tested physically according to the
standards, together with accompanying material testing. Third, a finite element model showing
satisfactory agreement with the experiments is created. The software used is CATIA and
HyperWorks with the RADIOSS explicit solver.

1.2 Objective
The goal of this master’s thesis is to develop a methodology for finite element simulation of drop
tests on overhead guards and to validate this method for one representative overhead guard that
is designed by the thesis students.

The problem formulation for this work reads: How can a finite element model of an overhead guard
be made to ensure satisfactory agreement between the numerical simulation and the results from
physical drop tests? The aim is to replace the latter, thereby reducing costs and saving time
while also eliminating high-risk experiments.

1.3 Scope
The scope of this thesis work is limited to replacing norm tests of forklift overhead guards in early
stages of the development. Different design solutions can be compared and evaluated before a final
design is chosen. Physical tests are still required on the final solution. To completely replace phys-
ical overhead guard testing, the numerical simulation method needs to be proven highly accurate
for all possible overhead guard designs and materials. This was considered beyond the scope of a

master’s thesis.

Furthermore, in an effort to reduce modeling complexity, it was decided to only design and inves-
tigate overhead guards of sufficient strength that would avoid failure or fractures in the physical
experiments. Modeling damage and failure is difficult and would have taken an unjustifiable
amount of time given the goal of this master's thesis.

Another limitation to this thesis work was that only one model of an overhead guard was investi-
gated when developing the method. It was not deemed feasible to develop a method using several
different overhead guard models within the span of one academic semester.

Regarding choice of materials, the scope of the thesis was limited to developing a methodology with
constitutive modeling of metals and excluding other material groups such as plastics, composites
and ceramics. This was done partly because the thesis students had knowledge about constitutive
modeling of metals prior to starting the thesis work, but also because Toyota Material Handling
mainly use metals for the overhead guards on their forklift models and it was thus seen as most
valuable to focus the methodology development on metals. Plastics and composite materials were
not of interest for this thesis work since Toyota Material Handling do not use such materials when
designing structural parts of overhead guards. Ceramics in the form of hardened glass are used in
some overhead guard models but such designs were not investigated since the authors had limited
knowledge about modeling such materials and also because of their more limited use in overhead
guard models.

Moreover, it was decided to limit finite element modeling to using the available options in the
RADIOSS solver. Developing finite element routines in Fortran could have been desirable, but this
was considered beyond the scope of the thesis work since it was deemed that the RADIOSS solver
is sophisticated enough to achieve the goal of the thesis.

Lastly, concerning material testing, the work was limited to the types of machines available at
Toyota’s facility in Mjölby and Linköping University’s laboratory. This meant limiting this work
to conventional quasi-static tensile testing and excluding high strain-rate testing methods such as
tensile tests with servo-hydraulic machinery or Taylor impact tests.

2 Theory
2.1 Overhead Guard Standard
The standards SS-ISO 6055:2004 and ANSI/ITSDF B56.1-2020 specify the requirements for over-
head guards both regarding geometrical dimensions and safety. Henceforth, in this chapter, SS-ISO
6055:2004 and ANSI/ITSDF B56.1-2020 will be referred to as the ISO standard and the ANSI
standard, respectively. The two standards are similar but differ slightly in demands regarding
displacements after testing.

The foremost requirements on geometrical dimensions are:

• The overhead guard must be designed in such a way as not to interfere with visibility, as
detailed in ISO 13564-1.
• The overhead guard must not have openings in the top that exceed 150 mm in either of the
two dimensions, width or length.
• Regarding trucks with a seated operator, the vertical clearance, between the seat index point
as specified in ISO 5353 and the underside of the part of the overhead guard below which the
operator’s head is located in normal operating conditions, shall not be less than 903 mm.
• Regarding trucks with a standing operator, the vertical clearance, between the platform on
which the operator stands and the underside of the part of the overhead guard below which
the operator’s head is located in normal operating conditions, shall not be less than 1880
mm.

The testing laid out in the standards consists of two parts: a dynamic test followed by an impact
drop test. The standards specify the dynamic test as follows:

• The test object must have a mass of 45 kg and have a square face with a side length of
300 mm hitting the overhead guard. The side that hits the guard must be of oak wood or
a material of a similar density and be at least 50 mm thick. The test object’s corners and
edges need to have a radius of 10 mm.

• The test object is to be positioned above the overhead guard, at a height of 1.5 m and with
the striking face roughly parallel to the top of the overhead guard. The test object is to be
dropped 10 consecutive times. In the first drop, the test object shall impact with its centre
directly above the seat index point of the operator, according to ISO 5353, or above the
centre of the standing position, for models with a standing operator. The other 9 drops must
be performed in a clockwise direction on equidistantly spaced points on a circle of 300 mm
radius, around the first impact point. The first of the 9 drops shall impact on a point at the
front of the overhead guard.
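The drop pattern described above can be sketched numerically. The choice of coordinate frame, the direction taken as the "front", and the angular convention for "clockwise" are assumptions made here for illustration; the standards only prescribe the radius, the spacing, and the starting point:

```python
import math

def drop_points(radius_m: float = 0.3, n_circle: int = 9):
    """Return the 10 impact points (x, y) in metres: the centre drop
    followed by n_circle equidistant points on the circle, traversed
    clockwise starting from the assumed front direction (+y)."""
    points = [(0.0, 0.0)]  # first drop: directly above the operator position
    for i in range(n_circle):
        # decreasing angle = clockwise in a standard right-handed x-y frame
        angle = math.pi / 2 - 2.0 * math.pi * i / n_circle
        points.append((radius_m * math.cos(angle), radius_m * math.sin(angle)))
    return points

pts = drop_points()  # 10 points; the 9 circle points are spaced 40 degrees apart
```

Such a list of coordinates is convenient when positioning the cube in successive simulation runs, one per drop.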

After the dynamic test, there are limits on permanent deformation that must be fulfilled. The
ISO standard tolerates a maximum deformation of 20 mm while the ANSI standard tolerates a
maximum deformation of 19 mm. This limit is placed on all points situated on the impact circle,
see Fig. 1. In addition to the limit on permanent deformation, there may not be any visible
fractures in the structure. Minor crack initiations are tolerated.
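The pass criteria above can be summarized in a small helper. The deformation values in the example are hypothetical, and the fracture check is reduced to a boolean flag since judging "minor crack initiations" is an inspection call:

```python
# Permanent-deformation limits after the dynamic test, per the two standards [mm]
LIMIT_MM = {"ISO": 20.0, "ANSI": 19.0}

def dynamic_test_passes(deformations_mm, standard: str = "ISO",
                        visible_fracture: bool = False) -> bool:
    """Check measured permanent deformations at points on the impact circle
    against the standard's limit; any visible fracture fails the test
    outright (minor crack initiations are tolerated)."""
    if visible_fracture:
        return False
    return all(d <= LIMIT_MM[standard] for d in deformations_mm)

# Hypothetical measurements at three points on the impact circle [mm]
ok = dynamic_test_passes([12.4, 15.1, 18.9], standard="ISO")
```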

Figure 1: Dynamic test permissible deformation according to ISO [1]

The impact drop test is specified in the standards as follows:

• The load must consist of 50 mm x 100 mm construction grade timber boards 3600 mm long,
and the complete timber must not be wider than 1000 mm. The timber boards are to be
placed with the 100 mm dimension of the cross section horizontal. The timber must be held
together with at least three metal bands, with one roughly in the centre and the two others
not further than 900 mm from each end.
• The load must have a minimum mass according to the tables in Appendix A.
• The load is to be centered above the overhead guard as detailed in the figure in Appendix
B. The flat surface in the 1000 mm dimension of the load shall impact the overhead guard
in this position.
• The timber load is to be dropped in free fall from a horizontal position and from a height to
produce the required impact energy, as specified in the tables in Appendix A.
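Since the timber load is dropped in free fall, the required drop height follows directly from E = mgh. The numbers below are purely illustrative; the actual impact energy and minimum load mass come from the tables in Appendix A of the standards:

```python
G = 9.81  # gravitational acceleration [m/s^2]

def drop_height(impact_energy_J: float, load_mass_kg: float) -> float:
    """Free-fall drop height producing the required impact energy:
    E = m*g*h  =>  h = E / (m*g)."""
    return impact_energy_J / (load_mass_kg * G)

# Illustrative values only -- not taken from the standards' tables
h = drop_height(impact_energy_J=20000.0, load_mass_kg=700.0)  # a few metres
```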

Following the impact drop test, permanent deformation of the overhead guard is checked against
the standard. For models with a standing operator, the minimum permissible clearance of any
part of the guard is 1600 mm according to ISO, see figure in Appendix B. The same demand
in the ANSI standard is 1625 mm. For models with a seated operator, the minimum permissible
clearance according to the ISO standard is 250 mm, measured from the upper surface of the steering
wheel, see figure in Appendix B. The same demand in the ANSI standard is 765 mm, where this
is measured from the seat index point.

2.2 Tensile Test Standard


Material testing in the form of tensile tests has been performed in this work, following the standard
SS-EN ISO 6892-1:2016 [3], which details tensile testing of metallic materials at room temperature.
This section therefore seeks to provide an understanding of the most fundamental parts of the
standard.

First, the standard details the shape and dimensions of test pieces as:

• The test pieces can have a cross-section that is circular, square, rectangular, annular or, in
special circumstances, some other uniform cross-section.
• It is preferable to use test pieces that have a proportional relationship between original gauge
length, Lo , and original cross-sectional area, So , according to Eq. (1). Test pieces that satisfy
this relationship are called proportional test pieces. See also Fig. 2.
• The original gauge length must not be less than 15 mm, and using a gauge length of less
than 20 mm increases measurement uncertainty post-testing.
• The test pieces must have a minimum transition radius between the gripped ends and the
parallel length of 12 mm, for proportional, non-cylindrical test pieces with a thickness equal
to or greater than 3 mm.

• The test piece’s parallel length, Lc, must be at least equal to Lo + 1.5√So, for proportional, non-cylindrical test pieces with a thickness equal to or greater than 3 mm.

Lo = k√So,    (1)

where k is a coefficient of proportionality, which is normally set to 5.65. If the cross-sectional area of the test piece is too small for the expression in Eq. (1) to be fulfilled with k as 5.65, then a higher value of 11.3 for k may be used.
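As a concrete illustration, the relationship in Eq. (1) can be sketched in Python (the function name and the fallback logic for small cross-sections are our own, not prescribed by the standard):

```python
import math

def gauge_length(S0, L_min=15.0, k=5.65):
    """Proportional original gauge length Lo = k*sqrt(So) per Eq. (1).

    Falls back to k = 11.3 when k = 5.65 would give a gauge length
    below the 15 mm minimum required by the standard."""
    Lo = k * math.sqrt(S0)
    if Lo < L_min:
        Lo = 11.3 * math.sqrt(S0)
    return Lo

# A 10 mm x 3 mm rectangular cross-section: So = 30 mm^2
Lo = gauge_length(30.0)
```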

Figure 2: Principal geometry of a tensile test piece

Here, Lt denotes the total length of the test piece.

When determining the original cross-sectional area of the test pieces, calipers can be used to mea-
sure the width and thickness dimensions. The standard recommends that a minimum of three
cross-sections be calculated for each test piece and that the original cross-sectional area, So, then be taken as the average of these measurements.

The standard specifies two methods for controlling the testing rate, method A which is based on
strain-rate and method B which is based on stress rate. Method A seeks to minimize the variation
in test rates during moments when strain-rate sensitive material parameters are determined and
also to minimize measurement uncertainty. Only method A will be discussed further.

Method A contains two different schemes for controlling the strain-rate. There is method A1
which is a closed loop system which controls the strain-rate directly based on feedback from an
extensometer. Method A2 is an open loop system in which an estimated strain-rate over the parallel
length is controlled. To achieve accurate measurements, it is recommended that the strain-rate is
kept at 0.00025 s−1 with a relative tolerance of ±20 % up to and including the determination of yield
strength. After yield parameters have been determined, the test is paused and the extensometer is
normally removed if one uses method A1 to prevent it from being damaged from sudden ruptures
that can occur at the end of the tensile test. It is recommended that the strain-rate is changed to
0.0067 s−1 with a relative tolerance of ±20 % when the test is resumed and carried on up until fracture. It is also acceptable to use a strain-rate of either 0.00025 s−1 or 0.002 s−1, both with a relative
tolerance of ±20 %. Regarding the two different control schemes, if using method A1 and the
extensometer is removed after yielding, an open loop control system is used for the remainder of
the test.

2.3 Large Deformation Theory


Working with and analyzing geometrically nonlinear problems, i.e. problems where large displace-
ments and rotations are present, necessitates a more sophisticated formulation of deformations and
stresses, compared to the small deformation case. This means an extension from the assumption
of infinitesimal displacements and rotations, and requires stepping into the realm of continuum
mechanics.

Before going further with the discussion, let us attempt a description of the applicability of the
assumption of infinitesimal displacements and rotations; the central pillars on which small defor-
mation theory rests. The main issue limiting its applicability is not necessarily the magnitude of
strains but rather rotations. If rotations are small and of the same magnitude as strains, and if
these are around 1 % or less, then linear strain measures are applicable. It should be noted that if
rotations are small enough to be neglected, then the linear strain measure can be appropriate up
to strains of about 10 %. To make it clearer, consider a general deformable body undergoing large
rigid-body rotations. Here, the small deformation theory is invalidated since the elements in the
linear strain tensor become non-zero although no deformation is present. [4]
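This limitation is easy to demonstrate numerically. The following sketch (our own illustration, using NumPy) subjects a body to a pure 30° rigid-body rotation and compares the linear strain tensor with the nonlinear Green-Lagrange strain tensor, which correctly vanishes:

```python
import numpy as np

theta = np.radians(30.0)
# Deformation gradient for a pure rigid-body rotation: F = R
F = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Linear (small) strain: eps = 1/2 (H + H^T), with H = F - I
H = F - np.eye(2)
eps_linear = 0.5 * (H + H.T)

# Green-Lagrange strain: E = 1/2 (F^T F - I)
E_GL = 0.5 * (F.T @ F - np.eye(2))

# The linear measure reports spurious strain; Green-Lagrange does not
spurious = np.abs(eps_linear).max()
exact = np.abs(E_GL).max()
```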

Now that the applicability and limitations of small deformation theory is better understood, let
us expand our knowledge into large deformation theory, and let us do so by considering the phe-
nomenon of elastoplasticity. When undertaking incremental analysis of inelastic behaviour, three
kinematic conditions are encountered.

• Small displacements and small strain conditions: With these kinematics, small deformation
theory is employed in that infinitesimal displacements and rotations are assumed. The only
non-linearity exists within the material formulation, and as long as the material is loaded
elastically, the solution is identical with what would be obtained using the case of linear
elasticity. [5]

• Large displacements and rotation but small strains: In this case, the total Lagrangian refer-
ence frame is effectively used. This kinematic assumption allows large displacements, large
rotations, as well as large strains. If small strains are assumed however, one can use the
material models from the previous case of solely material nonlinearity with only a small
amount of extra work. This is achieved by substituting in the second Piola-Kirchhoff stress
tensor and the Green-Lagrange strain tensor in place of the small displacement engineering
measures of stress, and strain, respectively. [5]
• Large displacements and strains: With these kinematic assumptions, it is possible to use
either a total or updated Lagrangian reference frame. Here, the constitutive formulations are
fairly more complex, although they are direct extensions from the two previous cases. [5]

Reaching the third case above from the small deformation theory of plasticity can be done in several ways. The case of large displacements and strains is what is implemented in
RADIOSS, with a co-rotational, updated Lagrangian formulation and through the use of Cauchy
stresses, and the rate-of-deformation tensor. For this reason, only this formulation will be pre-
sented. If the reader is unsure about the following concepts from continuum mechanics, and the
various different tensors, the authors recommend consulting [6].

Before moving on, it should be clarified what is meant by a co-rotational updated Lagrangian
formulation. Recalling continuum mechanics, one can employ two different reference frames, fun-
damentally. These are called Lagrangian and Eulerian reference frames, and in general, problems in
solid mechanics are analyzed using a Lagrangian reference frame, and problems in fluid mechanics by using an Eulerian reference frame. The basic idea is that a Lagrangian formulation follows the
path of individual material particles, whereas an Eulerian formulation studies a fixed region that
material particles travel through. Leaving Eulerian formulations aside, there are two variants of the
Lagrangian formulation, called total and updated Lagrangian formulations, respectively. A total
Lagrangian formulation tracks a material particle by relating its current configuration to a fixed
initial reference frame. An updated Lagrangian formulation on the other hand, tracks a material
particle by relating its position in the current configuration to the previous current configuration,
which is kept fixed over the course of a time step and then ‘updated’ at the end of it. This means
that the reference frame moves with the material particle. With all this clarified, the discussion is
ready for an introduction of the co-rotational formulation.

The co-rotational formulation is a further development of the updated Lagrangian formulation,


and can be seen as a variant of it. In this formulation, the tracking of rigid-body movements is
separated from the tracking of deformations. Rigid-body translations and rotations are measured
with respect to a fixed, initial reference frame. Deformations on the other hand, are related to each
finite element’s individual coordinate system, whose position and orientation is obtained from the
rigid-body motion of the initial reference frame. In this way, the elements’ coordinate systems are
‘co-rotating’ with the body, hence giving the formulation its name.

Moving on, recall the concept of material coordinates, described by the position vector X, and spatial coordinates, described by the position vector x, defined as

x = x (X, t) . (2)
The strain-rate is then obtained from the spatial derivatives of the velocity

Dij = ½(∂vi/∂xj + ∂vj/∂xi),    (3)
where vi and xi are the components of the velocity and the spatial position vector, respectively.
We can also express the strain-rate in tensor form
D = ½(L + Lᵀ),    (4)
where L is the velocity gradient tensor. Its components are
Lij = ∂vi/∂xj.    (5)
The velocity vi is that of a material particle and it is simply the time derivative of said material
particle’s position, i.e. the rate of change of spatial position
vi = ∂xi/∂t.    (6)
The velocity difference between two material particles is given by:
dvi = (∂vi/∂xj) dxj = Lij dxj = Lij Fjk dXk,    (7)
where F is the deformation gradient tensor. Expressed in tensor form, the equation reads

dv = Ldx = LFdX. (8)


Now, it is possible to write the expression for the velocity difference directly according to

dv = ∂/∂t (FdX) = ḞdX,    (9)

where

Ḟ = ∂F/∂t.    (10)

As a result, one obtains

L = ḞF−1 . (11)
The velocity gradient tensor can be decomposed into two components: the rate-of-deformation
tensor, and the rate-of-rotation, or, spin tensor, denoted D and Ω, respectively, giving

L = D + Ω. (12)
Both of these are rate quantities, and as such, it is possible to treat the spin tensor as a vector. This
enables the decomposition of L into the symmetric strain-rate tensor and the anti-symmetric spin
tensor. This is analogous to small deformation theory, where the then infinitesimal displacement
gradient is split up into infinitesimal strain and infinitesimal rotation. Let us now express the rate
of deformation tensor and the spin tensor in terms of the deformation gradient tensor, respectively
as
D = ½(ḞF⁻¹ + F⁻ᵀḞᵀ),    (13)

and

Ω = ½(ḞF⁻¹ − F⁻ᵀḞᵀ).    (14)
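Eqs. (11)–(14) can be verified numerically for a simple case. The sketch below (our own illustration, using NumPy) takes a simple shear deformation and checks the decomposition L = D + Ω:

```python
import numpy as np

gamma = 0.3   # current shear at time t
F    = np.array([[1.0, gamma], [0.0, 1.0]])  # simple shear
Fdot = np.array([[0.0, 1.0],   [0.0, 0.0]])  # gamma increasing at unit rate

L = Fdot @ np.linalg.inv(F)          # velocity gradient, Eq. (11)
D = 0.5 * (L + L.T)                  # rate of deformation, Eq. (13)
W = 0.5 * (L - L.T)                  # spin tensor, Eq. (14)

# The decomposition L = D + W holds identically, Eq. (12)
residual = np.abs(L - (D + W)).max()
```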
The rate of deformation gives information about the current situation, but it does not provide a
measure of the total deformation of a body. In the general case, Eq. (12) is not integrable analytically except for one-dimensional conditions, whereby the true strain is obtained

ϵ = ln(l/L),    (15)
where l and L denote the length of a body in the deformed and undeformed state, respectively.
Note that the use of L here is completely separate from the bold font L used previously. As men-
tioned before, the rate of deformation tensor has a limitation, hence, an algorithm using only this
measure will lack the ability to produce results of total deformation. To combat this, the algorithm
must be supplemented with an additional measure. The rate of deformation is thus transformed
into another strain-rate. Unfortunately, it is not clear from the RADIOSS theory manual which
other strain-rate measure that is and how this transformation is done.

Moving on to how RADIOSS handles kinetics, the Cauchy stresses are calculated in an incremental
fashion by employing the stress-rate in an explicit time integration scheme as
σij(t + ∆t) = σij(t) + σ̇ij^ν ∆t,    (16)

where ∆t is the size of a time-step. Note the superscript on the stress-rate term: it is not simply the rate form of the Cauchy stress, but instead the Jaumann stress-rate, also known as the Jaumann
objective stress-rate. The problem with the Cauchy stresses is that they are associated with
spatial directions in the current configuration, and consequently, the Cauchy stress-rate components
become nonzero even in the sole presence of rigid-body rotations. This is obviously a problem since
from a material point of view, no deformation has taken place. The Cauchy stress-rate is a function
of both element average rigid-body rotation, and strain-rate, i.e.
σ̇ij = σ̇ij^ν + σ̇ij^r,    (17)

where σ̇ij^r is the stress-rate due to rigid-body rotational velocity. This means that the Jaumann stress-rate is expressed as

σ̇ij^ν = σ̇ij − σ̇ij^r.    (18)
The Jaumann objective stress-rate is used in the constitutive stress-strain law since it is a measure
that follows the rigid-body rotation of the material. This gives an objective stress-strain law that is
independent of reference frame. It is obvious from Eq. (18) that to obtain the Jaumann stress-rate, a correction for stress rotation must be done. The calculation of the stress-rate due to rigid-body
rotational velocity is done using the spin tensor defined by Eq. (14) according to
σ̇ij^r = σik Ωkj + σjk Ωki.    (19)
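The need for this rotation correction can be illustrated with a small NumPy sketch (our own example, not from the RADIOSS manual): a fixed uniaxial stress state is rigidly rotated, and its spatial components change even though nothing has happened to the material.

```python
import numpy as np

# Uniaxial Cauchy stress of 100 MPa along x in the initial frame
sigma0 = np.array([[100.0, 0.0], [0.0, 0.0]])

# Rotate the body rigidly by 45 degrees: sigma' = R sigma R^T
th = np.radians(45.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
sigma_rot = R @ sigma0 @ R.T

# The spatial components change (shear appears) even though the material
# is still in the same uniaxial stress state -- this is why the plain
# Cauchy stress-rate is not objective and the correction of Eqs. (17)-(19)
# is required.
```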

2.4 Explicit Nonlinear Finite Element Analysis


To do a transient analysis when setting up a finite element (FE) analysis, one has to choose be-
tween using an implicit time integration scheme or an explicit one. In an explicit scheme, the state
at one time increment forward is calculated directly from the current state and can be thought
of as an extrapolation. In contrast, for an implicit scheme, the state one time increment forward
is calculated both from the current and the future state, resulting in a larger computational cost
per time increment but allowing for larger time increments compared to an explicit scheme. One
fundamental difference between the two methods is that while implicit schemes are unconditionally
stable, explicit schemes are conditionally stable. This means that in an explicit analysis, the size
of the time increment or time step needs to be kept smaller than a critical value lest the scheme
become unstable and produce inaccurate results.

Due to the difference in accuracy, complexity in setting up, and computational cost, there are
situations where one scheme is favourable over the other. Broadly speaking, the inherent complexity
of the physics that is to be simulated as well as the speed of it dictate whether it is preferable
to use an implicit scheme or an explicit one. In static loading, simulating phenomena with lower
levels of non-linearity such as elasticity and plasticity up to about linear buckling, an implicit
scheme is favourable since it generally can achieve sufficient accuracy and is associated with a
lower computational cost. When moving on to quasi-static loading, an implicit scheme is often
preferred as long as the non-linearities in the underlying physics are reasonably moderate; if, however,
simulating damage or sudden rupture, an explicit scheme might be favourable since an implicit
one might not be able to produce accurate enough results. As an analysis becomes more and
more dynamic, it becomes impractical to use implicit schemes since they become more and more
costly, and instead, explicit schemes are used almost exclusively. It becomes natural therefore to
use explicit analysis for impact analyses such as in this thesis work.

2.4.1 True Stress/Strain


When dealing with large deformations while considering phenomena such as plasticity and strain-
hardening, it becomes necessary to use true measures of stress and strain rather than the con-
ventional engineering measures. The true measures take into account cross-sectional area change
that occurs with deformation. Engineering measures and true measures are approximately the
same in the elastic region of a stress-strain curve, but the true measures begin to diverge from the
engineering ones at the onset of plastic flow, i.e. the true measures become higher. This can be
seen in Fig. 3. Also note in said figure that the divergence between the two measures is accelerated
after necking occurs. True strain is calculated from engineering strain according to

ϵtrue = ln(1 + ϵeng ), (20)

and true stress is calculated from engineering stress as

σtrue = σeng (1 + ϵeng ). (21)
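A minimal Python helper implementing Eqs. (20) and (21) (our own sketch; the function name and example values are illustrative):

```python
import math

def true_measures(eps_eng, sigma_eng):
    """Convert engineering strain/stress to true measures, Eqs. (20)-(21).
    Valid only up to the onset of necking."""
    eps_true = math.log(1.0 + eps_eng)
    sigma_true = sigma_eng * (1.0 + eps_eng)
    return eps_true, sigma_true

# 10 % engineering strain at 400 MPa engineering stress
eps_t, sig_t = true_measures(0.10, 400.0)
```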

Figure 3: Schematic representation of the difference between engineering measures and true
measures. Note that the true stress-strain curve is not necessarily bi-linear

2.4.2 Load Modeling


To save on computational cost when modeling impact from falling objects, one should realize that
it is not necessary to simulate the time the falling object spends in free fall. Instead, it can be
dropped right above what it is supposed to impact, with a velocity corresponding to the velocity it
would reach due to acceleration by gravity. To find this velocity, the potential energy of the object
must first be calculated as

E pot = mgh, (22)


which is assuming that the object is dropped from rest. Here, m is the mass of the object, g is the
acceleration due to gravity, and h is the height the object is dropped from, which is relative to the
height at impact. Next, the kinetic energy of a body is found using elementary physics as

E kinetic = mv²/2,    (23)
where v is the velocity of the body. Next, by equating the potential energy from Eq. (22) with
the kinetic energy from Eq. (23), such that E pot = E kinetic = E, the object’s velocity at impact is
found as
v = √(2E/m).    (24)
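Combining Eqs. (22)–(24) reduces to v = √(2gh), which can be sketched as follows (our own helper; the drop height value is illustrative):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def impact_velocity(height):
    """Initial velocity replacing free fall from `height` metres,
    per Eqs. (22)-(24): v = sqrt(2*g*h), independent of mass."""
    return math.sqrt(2.0 * G * height)

# A drop from 1.5 m gives roughly 5.4 m/s at impact
v = impact_velocity(1.5)
```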
In a further effort to keep computational cost down, when modeling falling objects in drop tests, it
is common to model the objects as rigid bodies, since it is often the deformation of the body being
impacted that is of interest, and not that of the impacting body. This is similar to simulations of
vehicle crash tests, e.g. frontal or side impact of a car with a wall or a pillar, respectively. There
it is commonplace to model walls and pillars as rigid bodies, to reduce computational cost and
modeling complexity.

2.4.3 Time-Step
The time-step is critical to achieving an accurate explicit FE-analysis. In 1928 Courant, Friedrichs,
and Lewy described the CFL condition [7, 8], which is used to obtain the critical time-step. If the time-step is larger than the critical time-step, the explicit scheme produces inaccurate results. The
CFL condition states that the distance a dilational stress wave propagates in a time-step must be
less than the distance between two grid points [9]. For a finite element mesh, this means that the
time step needs to be small enough to prevent the wave from passing through more than one node
in a time-step. RADIOSS has two options for calculating the time step using the CFL condition: the elemental time step and the nodal time step, which are given respectively as
∆te ≤ Lc/c,    (25)

and

∆tn = √(2m/k),    (26)
where ∆t is the time step, Lc is the element’s characteristic length, m is the nodal mass, k is the
equivalent nodal stiffness and c is the wave propagation speed, which will be addressed shortly.

The characteristic length is the shortest distance a dilational stress wave can take to cross an
element, and it differs first of all depending on whether the element is 1D, 2D, or 3D. In the simple
case of a 1D bar element, the characteristic length is the length of the element. In the 2D case,
the characteristic length for different element types is shown in Fig. 4. It can be observed that
for warped 4-node shell elements, the characteristic length is not simply the shortest size but
instead the ratio between the element area and the longest diagonal. Moving on to 3D, for 4-
noded tetrahedrons the characteristic length is simply the minimum height, while for 8-noded
solid elements the characteristic length is defined as
Lc = Ve/Ae,max,    (27)
where Ve is the element volume, and Ae,max is the maximum area of any of the element’s sides.

Figure 4: Characteristic length of 2D elements

A dilational stress wave travels with the speed of sound through a material, hence the wave
propagation speed c is the same as the speed of sound, which is a material parameter. Assuming
linear elasticity, the expression for the speed of sound through a 1D element is given by
c1D = √(E/ρ),    (28)
where E is Young’s modulus and ρ is the density. In the case of 2D shell elements with the
assumption of plane stress, the speed of sound is given by
c2D = √(E/((1 − ν²)ρ)),    (29)
where ν is Poisson’s ratio. For a 3D element, the wave propagation speed can be expressed in
several ways by utilizing the relationship between the elastic moduli as

c3D = √(E(1 − ν)/((1 + ν)(1 − 2ν)ρ)) = √(K/ρ + 4µ/(3ρ)) = √((λ + 2µ)/ρ),    (30)
where K is the bulk modulus. Additionally, λ and µ are the Lamé moduli. The elemental time
step is normally smaller than or equal to the nodal time step. However, for an element with poor
aspect ratio, the elemental time step can be larger than the nodal time step.
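As a rough numerical illustration of Eqs. (25), (28), and (29) (our own sketch with typical steel properties; not taken from RADIOSS):

```python
import math

def sound_speed_1d(E, rho):
    """1D dilatational wave speed, Eq. (28)."""
    return math.sqrt(E / rho)

def sound_speed_2d(E, rho, nu):
    """Plane-stress shell wave speed, Eq. (29)."""
    return math.sqrt(E / ((1.0 - nu**2) * rho))

def elemental_time_step(Lc, c):
    """Critical elemental time step from the CFL condition, Eq. (25)."""
    return Lc / c

# Typical steel: E = 210 GPa, rho = 7850 kg/m^3, nu = 0.3
c = sound_speed_2d(210e9, 7850.0, 0.3)   # about 5400 m/s
dt = elemental_time_step(5e-3, c)        # 5 mm shell element, under 1 us
```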

Since the global time step in a model is dependent on the smallest element size, it is recommended
to have a uniform mesh. Having just a single element of smaller size than the rest will greatly
lower the critical time step and, therefore, significantly increase computational cost.

2.4.4 Mass Scaling


As seen in Eq. (26), the critical nodal time-step depends on mass, so by adding non-physical mass
to nodes, the time-step can be increased. This procedure is called mass scaling and is an easy way
to significantly increase the time-step, and thus speed up the computation. Mass scaling has some
drawbacks, however. The kinetic energy will increase since mass is added to the system, which can
affect the solution. It is therefore recommended to not increase the mass in a FE-model by more
than 2 % during the simulation [10]. Even if this criterion is fulfilled, it is still important to make sure that no significant amount of mass is added in crucial, i.e. dynamic, areas of the model. For example, if mass scaling adds mass to the falling object during a drop test, the impact energy will increase and thus affect the results.
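The nodal time-step formula in Eq. (26) can be inverted to estimate how much non-physical mass a target time-step would require. The following is our own illustrative sketch with made-up nodal values; real solvers such as RADIOSS apply this per node automatically:

```python
def added_mass_for_target_dt(k_nodal, m_nodal, dt_target):
    """Non-physical mass needed so that the nodal time step of Eq. (26),
    dt = sqrt(2*m/k), reaches dt_target. Returns 0 if none is needed.
    Note that the required mass grows with the square of dt_target."""
    m_required = k_nodal * dt_target**2 / 2.0
    return max(0.0, m_required - m_nodal)

# Example node: k = 1e8 N/m, m = 0.05 kg (natural dt ~ 3.16e-5 s).
# Raising the time step by about 0.6 % costs about 1 % extra mass:
dm = added_mass_for_target_dt(1e8, 0.05, 3.18e-5)
ratio = dm / 0.05   # compare against the ~2 % model-level guideline
```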

2.4.5 Damping
When solving a dynamic problem using an explicit finite element scheme, unwanted vibrations
often occur in the model. To achieve a quasi-static solution, these vibrations need to be dealt
with. Three different ways of achieving a quasi-static solution will be presented here.

Firstly, one can avoid vibrations altogether if the load is applied at a sufficiently low speed. This
approach cannot always be used since the rate of loading can greatly affect the solution, especially
if the material behaviour is dependent on the strain-rate. [4]

If vibrations are unavoidable in the model, damping can be added to dissipate the vibrations in-
between the loads and at the end of the simulation. Kinetic relaxation is a numerical damping
scheme that sets the nodal velocities to zero each time the kinetic energy reaches its maximum.
This method is very aggressive so it can damp the vibrations quickly, but it has a drawback: the
stress, strain and strain-rate tensors will not be in equilibrium if the maximum kinetic energy is
reached due to elastic vibration in a solid and shell part. This will lead to incorrect results. [11]

A less aggressive form of damping is dynamic relaxation. Dynamic relaxation damps the vibrations
by introducing a diagonal damping matrix to the dynamic equation

Mü + Cu̇ + Ku = F. (31)


The diagonal damping is proportional to the mass matrix according to

C = (2β/T) M,    (32)
where β is the relaxation factor and T is the period to be damped. RADIOSS can automatically
calculate the period to be damped. [4]

2.4.6 Energy Verification


When performing FE-analyses, it is good practice to inspect different measures of energies in the
FE-model to verify that an analysis was computed correctly. If numerical problems occur in an analysis, they can sometimes be spotted as inconsistencies in the model energies. Verifying a FE-model by inspecting energies is especially important in dynamic analyses.

Figure 5 shows the general behaviour of fundamental energies in a FE-model. This diagram
originates from this thesis work and comes from a FE-analysis of two cubes being dropped on the
overhead guard where damping was present. The first thing to verify when inspecting such a graph
in a dynamic analysis is that kinetic energy and internal energy mirror each other. Here,
kinetic energy gets introduced in the model by the first cube which has an initial velocity. When
said cube strikes the structure, its kinetic energy gets converted into internal energy in the struc-
ture, mainly in the form of strain energy. The second thing to verify is that the total energy in the
model does not increase except when energy is added to the model by the nature of the analysis,
as seen in Fig. 5 at 0.15 seconds, when a second cube is dropped on the overhead guard. The
introduction of the second cube causes an increase in kinetic energy as before, which gets converted
into internal energy in the same fashion as before. From the same figure, it is seen that the total en-
ergy decreases slightly at first, which coincides with an increase in contact energy which represents
friction between structural parts among other things. Later, the total energy decreases sharply at
time 0.05 seconds, which is the time when damping is activated. Given that the model contains
contact and damping, the decrease in total energy is expected, but in general one should be wary of
the total energy decreasing unless there are mechanisms for energy dissipation present in the model.

Figure 5: General behaviour of fundamental energy measures in a dynamic analysis

Numerical instabilities relating to e.g. contact definitions can manifest themselves as inconsisten-
cies in the energy balance in the form of sharp unexpected increases in energy, which are easy to
spot. Thus, by verifying proper behaviour of the model energies in a FE-analysis, one can decrease
the risk of using improper analysis results. A detrimental phenomenon related to the performance
of finite elements is hourglassing which should be kept at a minimum to ensure valid results from
a FE-analysis. Hourglassing should be inspected for this reason and this can be done by looking
at hourglass energy during an analysis. If the hourglass energy is kept below 10 % of the internal
energy, [10, 12], it can be considered acceptable and the analysis results need not be discarded.
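The checks described above can be summarized in a small helper (our own heuristic sketch; the 10 % hourglass threshold follows [10, 12]):

```python
def energy_checks(internal, hourglass, total_start, total_end):
    """Simple pass/fail checks on model energies, following the
    guidelines discussed above (our own heuristic thresholds)."""
    return {
        # hourglass energy should stay below 10 % of internal energy
        "hourglass_ok": hourglass <= 0.10 * internal,
        # total energy should not increase over the analysis
        # (a decrease is acceptable when dissipation is present)
        "total_ok": total_end <= total_start,
    }

# Example values from a healthy analysis
result = energy_checks(internal=1000.0, hourglass=40.0,
                       total_start=1200.0, total_end=1150.0)
```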

2.5 Material Constitutive Modeling


When modeling materials that undergo hardening, which is the case with most metals, one must
begin by understanding that the conventional yield criteria of, e.g. von Mises and Tresca, are only
applicable at initial yielding. Past the point of initial yielding, the yield surfaces representing the
aforementioned criteria in three dimensions, i.e. a circular and a hexagonal cylinder, respectively,
change as a result of hardening obtained by plastic deformation.

Figure 6 seeks to display the fundamental difference between isotropic and kinematic hardening in the presence of an arbitrary stress tensor. By looking at this figure one sees that in a model
of isotropic hardening, the yield surface expands equally in all directions during plastic flow. This
means that after hardening, the material’s tensile and compressive strengths are increased equally.
In a model of kinematic hardening however, the yield surface retains its original size and instead
translates in the direction of the stress. This has the effect that the material’s tensile strength is
increased while at the same time its compressive strength is lowered, i.e. anisotropy is introduced.
This phenomenon is referred to as the Bauschinger effect. Note that the opposite effect is obtained
if the loading is compressive. [13]

(a) Isotropic hardening (b) Kinematic hardening

Figure 6: Schematic representation of the difference between isotropic and kinematic hardening
modes for a von Mises yield surface sketched in the π-plane where the coordinate axes represent
the principal stresses

Mixed hardening models exist, in which the yield surface both expands (isotropic) and translates
(kinematic) during plastic flow. Such a model was first introduced by P. Hodge in 1957 [14] and
is reached by combining the general flow rules for isotropic and kinematic hardening, respectively.
This first mixed hardening model assumed that a plastic strain increment can be linearly decom-
posed into one part describing isotropic hardening and one part describing kinematic hardening.
The ratio between isotropic and kinematic hardening is then determined by a simple factor, typi-
cally denoted M, ranging from 0 to 1, with values closer to zero yielding a model where kinematic
hardening is dominant and vice versa for values closer to 1. This factor M is determined experi-
mentally. [13]

An example of a mixed hardening model which is widely used in analyses of falling object protection
structures (FOPS), roll over protection structures (ROPS), and crashes is the Cowper-Symonds
model proposed in 1957 [15]. Another material model that is often used in the aforementioned
types of analyses is the Johnson-Cook model [16], which is a purely isotropically hardening model.
[17, 18]

The parameters one uses when modeling hardening and plastic flow typically come from experiments on the tensile or cyclic stress-strain behaviour of the material in question. Since plastic
flow is assumed to give no volume change, for isotropic materials it is sufficient to use plasticity
data from uniaxial experiments when modeling three-dimensional plasticity. The basic information
needed from e.g. a monotonic tensile test is a parameter known as the plastic modulus or the hard-
ening modulus. It is defined as the slope of the stress-plastic strain curve and is sometimes denoted
Ep . An explanation of what the plastic modulus is and how one obtains it from a stress-strain
diagram is given in Fig. 7.

Figure 7: Schematic representation of the plastic modulus Ep
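Numerically, the plastic modulus is simply the slope of the stress versus plastic-strain curve, e.g. taken as a secant between two measured points (our own sketch with illustrative values):

```python
def plastic_modulus(stress_a, eps_p_a, stress_b, eps_p_b):
    """Plastic (hardening) modulus Ep: slope of the stress vs. plastic
    strain curve between two points (a secant approximation)."""
    return (stress_b - stress_a) / (eps_p_b - eps_p_a)

# Two points on a stress / plastic-strain curve (illustrative values):
# 350 MPa at zero plastic strain, 450 MPa at 5 % plastic strain
Ep = plastic_modulus(350.0, 0.0, 450.0, 0.05)
```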

Some metallic materials can display strain-rate sensitivity, i.e. a change in the material’s properties with changing strain-rate. Whether strain-rate sensitivity is likely to manifest itself depends on the type of material, the strain-rate it is subjected to, and the homologous temperature (ratio between
temperature and melting temperature, both in absolute measures). Some materials do not exhibit
strain-rate sensitivity at room temperature but do so at elevated temperatures. Moreover, some
materials only start experiencing strain-rate sensitivity at high strain-rates and the degree of sen-
sitivity can in general increase with increasing strain-rate. In conclusion, strain-rate sensitivity is
case dependent. [13]

By looking at Fig. 8, one can see that the strain-rate effect makes the material stronger. It is also
evident from said figure that increasing the strain-rate does not affect Young’s modulus but rather
the plastic region of the stress-strain diagram in general. During plastic flow, there exists a balance
between two counteracting effects: the rate of strain hardening, and the rate of dynamic recovery.
From a crystallographic point of view, plastic flow is the movement of dislocations along slip planes
in a crystal structure. The two aforementioned rates determine a steady-state dislocation density
in the material and this balance is affected both by temperature and strain-rate. At higher tem-
peratures, the rate of dynamic recovery increases, giving a lower steady-state dislocation density
and subsequently lower stress, i.e. thermal softening. This generally gives the opposite behaviour
from that of increased strain-rate seen in Fig. 8, and in addition, higher homologous temperature
tends to decrease Young’s modulus. Returning to the balance between rate of strain hardening
and rate of dynamic recovery, a higher strain-rate increases both strain hardening and dynamic
recovery, but the increase in the latter is offset by the former and the net effect is an increase
in the steady-state dislocation density and thus a higher stress level. [13]

Figure 8: The general effect of strain-rate on the tensile properties of metals

2.5.1 Linear-Elastic-Perfect-Plastic Model


A simple constitutive model is the linear-elastic-perfect-plastic model, the fundamental behaviour
of which is shown in Fig. 9.

Figure 9: Schematic representation of a linear-elastic-perfect-plastic model. σy is the yield stress

In the elastic region, the behaviour of the material is governed by Hooke’s law according to

σ = Eϵ, (33)
where σ denotes stress and ϵ denotes strain. When the stress level in the material reaches the yield
strength, σy , it begins to flow plastically without experiencing any strain-hardening (Ep = 0). The
flow rule for the model is thus simply given in accordance with

σf = σy . (34)
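The bilinear behaviour of Eqs. (33) and (34) can be sketched as a small function. The material values below are illustrative assumptions, not data from this thesis:

```python
def lepp_stress(strain: float, E: float, sigma_y: float) -> float:
    """Uniaxial stress for a linear-elastic-perfect-plastic material.

    Hooke's law, Eq. (33), applies up to the yield strain sigma_y / E;
    beyond that the material flows at constant stress sigma_y (Ep = 0),
    i.e. the flow rule of Eq. (34).
    """
    return min(E * strain, sigma_y)

# Illustrative values for a mild steel: E = 210 GPa, sigma_y = 355 MPa
E, sigma_y = 210e9, 355e6
print(lepp_stress(0.001, E, sigma_y))  # elastic branch: E * strain
print(lepp_stress(0.05, E, sigma_y))   # plastic branch: capped at sigma_y
```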

2.5.2 Johnson-Cook Model
The Johnson-Cook Model was introduced in 1983 and is a constitutive model for materials sub-
jected to large strains, high strain-rates, and/or high temperatures. It is an empirical model that
has independent terms for three phenomena: strain hardening, strain-rate hardening, and thermal
softening [16]. The Johnson-Cook model is an isotropically hardening model which in RADIOSS
is implemented together with the von Mises yield criterion, where the flow stress is given by

σf = [A + Bϵp^n][1 + C ln(ϵ̇∗)][1 − (T∗)^m], (35)


where A is the initial yield stress, B is the hardening modulus, n is the hardening exponent, ϵp
is the effective plastic strain, C is the strain-rate sensitivity parameter, T ∗ is the homologous
temperature, and m is the thermal softening exponent. ϵ̇∗ is given by
ϵ̇∗ = ϵ̇p /ϵ̇0 , (36)
where ϵ̇0 is the reference strain-rate, and ϵ̇p is the effective plastic strain-rate [16]. Using Voigt
notation, the effective plastic strain-rate is calculated in RADIOSS as the maximum of the six
strain-rate components as [4]

ϵ̇p = max (ϵ̇x , ϵ̇y , ϵ̇z , 2ϵ̇xy , 2ϵ̇yz , 2ϵ̇xz ) . (37)


The authors realize that this explanation of the Johnson-Cook model's implementation in RADIOSS
would ideally be more exhaustive; however, this is hindered by the lack of available information.
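As an illustration, Eqs. (35) and (36) can be evaluated directly. The parameter values below are assumed for illustration only and are not material data from this thesis:

```python
import math

def jc_flow_stress(eps_p, eps_dot_p, A, B, n, C, eps_dot_0, T_star=0.0, m=1.0):
    """Johnson-Cook flow stress, Eq. (35):
    sigma_f = [A + B*eps_p^n][1 + C*ln(eps_dot*)][1 - (T*)^m],
    with the normalized strain-rate eps_dot* = eps_dot_p / eps_dot_0, Eq. (36).
    """
    eps_dot_star = eps_dot_p / eps_dot_0
    return (A + B * eps_p**n) * (1.0 + C * math.log(eps_dot_star)) * (1.0 - T_star**m)

# Assumed parameters: A = 350 MPa, B = 275 MPa, n = 0.36, C = 0.022,
# reference strain-rate 1e-3 1/s, room temperature (T* = 0)
sigma = jc_flow_stress(eps_p=0.1, eps_dot_p=1.0, A=350e6, B=275e6,
                       n=0.36, C=0.022, eps_dot_0=1e-3)
print(f"flow stress at 10% plastic strain, 1/s: {sigma / 1e6:.1f} MPa")
```

At the reference strain-rate the logarithm vanishes and the expression reduces to Eq. (38), which is a quick sanity check on an implementation.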

Since the model isolates the effects of strain hardening, strain-rate hardening, and thermal soften-
ing, the material parameters can be obtained separately from three different material tests. Firstly
a tensile test at a reference strain-rate (normally 10−3 s−1 ), and at room temperature is used to
obtain A, B, and n. A can be obtained by finding the yield point in the stress-strain curve. At
reference strain-rate and room temperature, Eq. (35) is simplified to [19]

σf = A + Bϵp^n . (38)
Rewriting Eq. (38), and taking the common logarithm of both sides gives

log(σf − A) = log B + n log ϵp . (39)


The hardening exponent n and the hardening modulus B can now be extracted from a stress-strain
curve where the elastic part has been removed. n is represented by the slope of the curve and log B
is obtained where the curve intercepts the vertical axis. [19]
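The log-log fit of Eq. (39) can be sketched as a small least-squares routine. The synthetic data below are generated from assumed parameter values purely to demonstrate that the fit recovers them:

```python
import math

def fit_B_n(plastic_strain, true_stress, A):
    """Least-squares fit of Eq. (39): log(sigma_f - A) = log B + n log eps_p.

    Takes the plastic part of a quasi-static stress-strain curve (elastic
    part removed) and returns (B, n): n is the slope, log B the intercept.
    """
    xs = [math.log10(e) for e in plastic_strain]
    ys = [math.log10(s - A) for s in true_stress]
    N = len(xs)
    mx, my = sum(xs) / N, sum(ys) / N
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    B = 10 ** (my - n * mx)
    return B, n

# Synthetic check: data generated from assumed A = 350 MPa, B = 275 MPa, n = 0.36
A_true, B_true, n_true = 350e6, 275e6, 0.36
eps = [0.01, 0.02, 0.05, 0.1, 0.2]
sig = [A_true + B_true * e**n_true for e in eps]
print(fit_B_n(eps, sig, A_true))  # recovers approximately (275e6, 0.36)
```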

Next, the strain-rate sensitivity parameter C can be obtained from stress-strain curves of tensile
tests performed at varying strain-rates at room temperature. A point is taken from each of these
stress-strain curves at a constant plastic strain. Room temperature and constant strain simplify
Eq. (35) to

σf = σs [1 + C ln(ϵ̇∗ )], (40)


where σs is the stress at the reference strain-rate [20]. Equation (40) can be rewritten as

σf /σs − 1 = C ln(ϵ̇p /ϵ̇0 ). (41)
The slope of the curve obtained from Eq. (41) is the strain-rate sensitivity parameter C. Since the
overhead guard in this project is only subjected to room temperature conditions, the determination
of m will not be discussed.
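The slope fit implied by Eq. (41) can also be sketched in a few lines. Again, the data below are synthetic, generated from an assumed C value to verify the fit:

```python
import math

def fit_C(strain_rates, stresses, sigma_s, eps_dot_0):
    """Least-squares slope of Eq. (41): sigma_f/sigma_s - 1 = C ln(eps_dot_p/eps_dot_0).

    The stresses are read off at one fixed plastic strain from tensile tests
    run at several strain-rates; sigma_s is the stress at the reference rate.
    The line passes through the origin, so C = sum(x*y) / sum(x*x).
    """
    xs = [math.log(r / eps_dot_0) for r in strain_rates]
    ys = [s / sigma_s - 1.0 for s in stresses]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic check with assumed C = 0.022 and reference rate 1e-3 1/s
sigma_s, C_true, eps0 = 500e6, 0.022, 1e-3
rates = [1e-2, 1e-1, 1.0, 1e1, 1e2]
stresses = [sigma_s * (1 + C_true * math.log(r / eps0)) for r in rates]
print(fit_C(rates, stresses, sigma_s, eps0))  # recovers approximately 0.022
```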

In RADIOSS, the Johnson-Cook model offers two ways of defining the material parameters. The
user can either enter the standard Johnson-Cook material parameters mentioned above, or use a
simplified input. In the simplified input, instead of entering all the Johnson-Cook parameters, the
user only enters Young’s modulus E, density ρ, Poisson’s ratio ν, yield stress σy , ultimate tensile
strength σu , and strain at ultimate tensile strength ϵu . From this input, RADIOSS then automat-
ically calculates the Johnson-Cook parameters, excluding the strain-rate sensitivity parameter C.
[11]

2.6 FE Modeling Aspects


2.6.1 Discretization
When deciding between using solid elements or shell elements in a FE-analysis it is vital to first
examine the geometry at hand. Consider a general solid body with curvature radius R, length L,
and thickness t. The following approximate criteria then serve as a general guide to when shells
and solid elements are suitable, respectively. [21]

• L/t > 20 and R/t > 20: Use Kirchhoff or Mindlin shell theory. Thin shell assumptions are
valid.

• 10 < L/t < 20 and 10 < R/t < 20: Use Mindlin shell theory. Assumption of moderately thick
shell is valid.

• 4 < L/t < 10 and 4 < R/t < 10: Use Mindlin theory. Thick shell.

• L/t < 4 or R/t < 4: Use solid elements.
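The criteria above can be condensed into a small helper. This is a rough guide only, following the ranges from [21]; the cutoff values are those listed above:

```python
def element_recommendation(L, R, t):
    """Rough element-class guide from the L/t and R/t slenderness criteria.

    Both ratios must exceed a threshold for a shell assumption to hold,
    so the more restrictive (smaller) ratio decides the class.
    """
    s = min(L / t, R / t)
    if s > 20:
        return "thin shell (Kirchhoff or Mindlin)"
    if s > 10:
        return "moderately thick shell (Mindlin)"
    if s > 4:
        return "thick shell (Mindlin)"
    return "solid elements"

# Dimensions in mm, purely illustrative
print(element_recommendation(L=1000.0, R=500.0, t=10.0))  # thin shell
print(element_recommendation(L=100.0, R=30.0, t=10.0))    # solid elements
```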

It should be noted that it is of course possible to utilize solid elements even for geometries beyond
the range detailed above. When dealing with structures where one dimension is significantly
smaller than the other two, however, it is wise to investigate the possibility of employing shell
elements, since this can mean a dramatic saving of computational time compared to running the
same analysis with solid elements. Depending on which case is chosen from the list above, the
strategy for meshing differs.

• Thin shells: A 2D mesh is created at the midsurface of the body.


• Thick shells: A 3D mesh is created with the assumption of constant stress normal to the
working direction at each element.
• Solids: A 3D mesh is created without any assumption of constant normal stress.

When choosing between solid and shell elements, one should take care not to decide solely based
on the geometry, but also to take into account the underlying physics being simulated and what
results are of interest. For example, if the stress and strain in the normal direction are likely to vary
significantly through the thickness of a structure, then shell elements are not suitable, since they
cannot provide a stress or strain profile through the thickness but assume that it is
more or less constant. In such a case, one must instead use either thick shell elements or solid
elements. In conclusion, the choice of element class is based upon the following: the geometry and
physics inherent to the analysis, the degree of accuracy required of the results, and the availability
of computational resources.

Regarding meshing, there are several measures of quality, and it is good practice to consider these
when discretizing a problem for a FE-analysis. First off, the growth ratio is a measure of how much
the element size increases from one finite element to its immediate neighbouring element. If the
growth ratio is not restricted, one risks an overly rapid change in the discretization, and as a
consequence, vital information about the underlying physics being simulated might be missed. It
is the authors' understanding that it is desirable to keep the growth ratio in a mesh below 1.2, i.e.
that the element size changes no more than 20 % from one element to the next. The default growth
ratio criterion used in HyperWorks is 1.1.

Aspect ratio is a fundamental assessment of element quality and needs to be considered in any
serious attempt at FE-analysis. It is a measure of the relationship between the longest and the
shortest edges of an element. When using hexahedral solid elements, one should strive towards
creating elements with an aspect ratio close to 1 for optimal element performance. If the aspect
ratio is equal to 1 in the case of a hexahedral, it means that it is shaped like a perfect cube. It has
been suggested that elements with aspect ratios 1 < Aspect ratio < 3 be considered acceptable,
those with 3 < Aspect ratio < 10 be treated carefully, and that those with Aspect ratio > 10 are
of concern [22]. The default quality criterion in HyperWorks for aspect ratio is < 5.
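The aspect-ratio bands suggested in [22] can be written down directly. This is a simplified sketch that measures the ratio from edge lengths; real quality checkers work on element node coordinates:

```python
def aspect_ratio_verdict(edge_lengths):
    """Classify a hexahedral element by its longest/shortest edge ratio,
    using the acceptability bands suggested in [22].
    """
    ar = max(edge_lengths) / min(edge_lengths)
    if ar <= 3:
        return ar, "acceptable"
    if ar <= 10:
        return ar, "treat carefully"
    return ar, "of concern"

# A 10 x 10 x 4 mm brick (illustrative): aspect ratio 2.5, acceptable
print(aspect_ratio_verdict([10.0, 10.0, 4.0]))
```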

Angle idealization is a measure of how much the interior angles in an element differ from the ideal
case. There are 3 angles at each corner of an element so a hexahedral solid element has 24 such
angle measures in total. The ideal case regarding this type of element is a cube which corresponds
to all internal angles being 90°. If the internal angles deviate too much from the ideal case, then
the element can produce spurious results. The default quality criterion in HyperWorks is 45° angle
deviation, meaning minimum angles of 45° and maximum angles of 135°, which is in line with [23].

Another important measure is the element Jacobian. It gives a description of an element’s volume
distortion from an ideal shape. The measure is the determinant of the Jacobian matrix, where the
matrix contains information about the volume, shape, and orientation of an element. The Jacobian
matrix defines the mapping of vertices from an ideal element to the real element. For instance,
a dramatically distorted element will have a negative element Jacobian, and many FE-softwares
have a built-in stopping criterion that detects this and terminates the analysis. It has been reported
that a high-quality criterion is to demand that no more than 5 % of all element Jacobians be below
a value of 0.7 [12]. This is closely in line with the default criterion in HyperWorks, which is to
consider elements whose Jacobian falls below 0.7 as failed.

There exist additional measures of element quality and mesh quality, e.g. warpage and skewness.
However, the aim of this chapter is not to provide an exhaustive description of all available
measures, but rather a few fundamental ones which the authors have considered when undertaking
meshing in this thesis work. To learn more about FE-discretization and the theory behind different
element formulations, the reader is referred to [5] or [24].

Since tetrahedral elements are understood to be unphysically stiff, especially in large deforma-
tion bending, it is preferable to use hexahedral elements instead. When using hexahedral solid
elements in an analysis where bending is present, two common problems can arise; shear-locking
and hourglassing. These phenomena can cause highly inaccurate FE-results, and to understand
why, one first needs to understand the nature of the finite element method (FEM); calculations
are performed discretely, i.e. at specific locations, and values in between these are interpolated
using the specific finite element’s shape function. This can potentially cause problems, especially if
the employed elements are not suited to capturing the underlying physics one wishes to simulate.
Before explaining the aforementioned phenomena of shear-locking and hourglassing, it is important
to note that these can only arise under certain circumstances.

First off, shear-locking can arise in first-order, fully integrated elements. The linear nature of the
first-order elements means that they are unable to accurately capture the curvature present in the
real material under the influence of bending, and so a non-physical shear stress is introduced. This
causes the elements to become artificially stiff, thereby leading to inaccurate results. Fig. 10 shows
the principle behind shear-locking. The element is black in its undeformed state and purple in its
deformed state, and the green crosses represent the element’s Gauss points. In the deformed state,
it is evident that the angle indicated in Fig. 10 is not equal to 90°, and this is the cause of the
non-physical shear stress. [4, 25]

Figure 10: Element experiencing shear-locking

To prevent shear-locking, one can discretize the problem more finely, i.e. higher mesh-density.
This means that the linear first-order elements are able to more accurately capture the underlying
physics. Another approach to mitigate the problem of shear-locking is to use second-order element
formulations so that the curvature in the material can be accurately represented without the need
of a finer discretization. Both of these solutions lead to an increased computational cost however,
and so it might be preferable to take a third approach to the problem. This third approach consists
of reducing the number of Gauss points in the finite elements, yielding under-integrated elements
that cannot experience shear-locking. This has a drawback however, which leads to a discussion
about the phenomenon of hourglassing. [4, 25]

Hourglassing can occur in first-order, and more rarely in second-order, under-integrated elements
(one Gauss point) and is a problem that needs to be prevented or at least kept under some acceptable
magnitude so as to not compromise the quality of results obtained in a FE-analysis. The phe-
nomenon gets its name from the fact that when it occurs, elements deform into hourglass-looking
shapes. It should be noted that hourglassing can occur in both solid and shell elements, although
not in triangular shells nor in tetrahedral solids. There are 12 distinct hourglass modes for solid
elements and 5 such modes for shell elements [4]. Hourglass modes represent non-physical, zero-
energy modes of deformation that produce zero strain and stress. Fig. 11 shows an element that
has deformed in an hourglass mode, the parts in the sketch have the same meaning as in Fig. 10.
Study the two dotted lines emanating from the Gauss point; due to there only being one Gauss
point, they are unable to change length as well as the angle in between them when the element
deforms. This is what can cause the element to deform in a manner that produces zero strain and
zero stress. [4, 26]

Figure 11: Element deformed in an hourglass mode

The obvious way to prevent hourglassing during a FE-analysis is to employ fully integrated
elements. This does however reintroduce the risk of shear-locking as was discussed previously. It
is of course possible to minimize the possibility of hourglassing as well as shear-locking, through
the use of second-order elements, which unfortunately leads to increased computational cost. To
mitigate the risk of hourglassing without suffering the full cost of employing second-order elements,
there exist algorithms for hourglass control. One should bear in mind that using such algorithms
also leads to an increased computational cost, albeit less so than using fully integrated elements or
second-order elements. Another approach to minimizing the risk of hourglassing is to use a finer
discretization, i.e. a refined mesh. It is also possible to reduce the risk of hourglassing by taking
care when defining loads in a FE-model; pressure loads which affect a larger region are preferable
to point loads since loading a single node is more likely to induce hourglass modes. Lastly, if one
employs under-integrated first-order elements without using algorithms for hourglass control such
that the risk still remains, one should at least monitor hourglass energy as was discussed previously
under the section concerning energy verification. If data such as e.g. stress and strain in a localized
region of the FE-model is of interest, it is a good idea to visually inspect the mesh in said region
to ensure that hourglassing is not present as this risks invalidating the results. [4, 26]

2.6.2 Connections
Using multi-point constraints (MPC) is a way of decreasing modeling complexity and reducing com-
putational cost. Commonly, MPC’s are used to model connections between components, loads, and
boundary conditions. MPC’s allow one to define a behaviour between multiple nodes at the same
time. They work by imposing constraints on desired degrees of freedom of the nodes of interest.
The two most commonly used types of MPC's are kinematic couplings and distributed
couplings, known in HyperWorks as RBE2's and RBE3's, respectively. Both of these types work
by defining independent and dependent nodes, but in different ways.

The main difference between RBE2’s and RBE3’s is that the former defines one independent node,
also known as master node, and the rest as dependent nodes, also known as slave nodes. Inversely,
RBE3’s define one dependent node and several independent nodes. This is shown clearly in Fig. 12
where master nodes are highlighted in yellow and the connections to slave nodes are visualized in
blue. The RBE2 coupling adds infinite stiffness to the constrained nodes while the RBE3 coupling
gives a distributed connection, thus not affecting the stiffness of the model.

(a) RBE2, kinematic coupling (b) RBE3, distributed coupling

Figure 12: Examples of common MPC’s

A RBE2 element induces stiffness in the affected nodes because the motion of the
master node controls the motion of all the slave nodes. If all six degrees of freedom are constrained
in the definition of the kinematic coupling, then no slave node can exhibit any movement whatsoever
relative to the master node, and consequently to any other slave node. If, on the other hand,
as an example, all degrees of freedom except for z-translation are constrained, then if the master
node would translate a distance h in the z-direction, then consequently, all slave nodes would also
translate a distance h in the z-direction. In effect, this behaviour turns the region of the RBE2
coupling into a rigid-body, thus adding stiffness.

As mentioned previously, RBE3's work in the opposite way to RBE2's, whereby several master nodes
govern the behaviour of a single slave node. The distributed coupling does this by giving the dependent
node a movement that is a weighted average of the independent nodes’ movement. Hence there is
no relationship dictating the relative motion between the independent nodes, and consequently no
additional stiffness is provided.
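The weighted-average behaviour of a distributed coupling can be sketched for the translational case. This is a deliberate simplification: a real RBE3 also distributes rotations and moments, which are ignored here:

```python
def rbe3_translation(independent_motions, weights):
    """Translation of the dependent node of a distributed (RBE3-type)
    coupling: the weighted average of the independent nodes' translations.
    Rotational degrees of freedom are ignored in this sketch.
    """
    total = sum(weights)
    return [sum(w * u[i] for u, w in zip(independent_motions, weights)) / total
            for i in range(3)]

# Three independent nodes with equal weights, each displaced in z
moves = [[0.0, 0.0, 1.0], [0.0, 0.0, 2.0], [0.0, 0.0, 3.0]]
print(rbe3_translation(moves, [1.0, 1.0, 1.0]))  # [0.0, 0.0, 2.0]
```

Because the dependent node merely averages the independent motions, no constraint is imposed between the independent nodes themselves, which is why no stiffness is added.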

2.6.3 Welds
When modeling shell components that are welded together, several approaches can be used, here,
two approaches will be presented. Before going forward, note that the shell models in this thesis
were created by extracting midsurfaces from the components. Firstly, instead of actually modeling
the welds, one can extend the midsurfaces until they meet each other where they are connected.
This is done so that the areas where components are welded can share nodes. This simple approach
adds mass and stiffness to the model because of the extended surfaces; however, this is not always
a problem, since the added mass and stiffness can represent that of the welds. Another problem
with this method is that it moves the moment point between components to the wrong location.

The second approach to modeling shell components that are welded together is to use what is
known as a tied interface or a tie-contact. Using a tied interface, one can make a rigid connection
between a set of secondary/slave nodes to a main/master surface. With this type of connection,
it becomes easy to tie together two meshes of different sizes, instead of having to refine one or
the other so that the nodes in the interface between them can connect in even pairs. The contact
between the two surfaces then functions such that the slave nodes are hindered from moving or
sliding on the master surface. This type of contact can be used to connect a solid mesh with a
solid mesh, a shell mesh with a shell mesh, or to connect a solid mesh with a shell mesh. Using
this method, the midsurfaces do not need to be extended, and therefore the moment point will not
be moved. This method still has a drawback, however: it does not add any mass or stiffness that
could represent the mass or stiffness of the welds.

To more accurately model the welds, one needs to actually model them as solid components. When
modeling solid components that are welded together, a natural option for modeling the welds would
be to make them as solid components as well, mesh them with a solid mesh and let it align with
the meshes of the two adjoined components. This strategy can however pose difficulties to the
meshing of the components if the welds are situated in a difficult geometry, and especially if one
wishes to employ hexahedral elements. What is meant by a difficult geometry is one of great detail
and of small dimensions relative to element size. Because this strategy aims for the alignment of
the two component meshes with that of the weld, it can prove challenging to produce a uniform
mapped mesh. An alternative strategy would be to use tie-contact to connect the welds with the
components. It then works by defining the weld’s nodes on each respective side as slave entities
and then defining the immediate mesh on each respective component as master surfaces. This way,
the functionality of the weld is achieved in that the two components are connected solely through
it, and the added stiffness of the weld is also accounted for.

2.7 FE-modeling of Contact Conditions


Contact between bodies is a particularly difficult class of problems to analyze. Contact problems
are highly nonlinear in nature, and range from frictionless contact with small displacements to
definitions with friction under large displacements and large strains in plastic conditions. Even if
the contact conditions are the same, the solution of the nonlinear problem can be much more
demanding in certain cases. When contact problems are taken into account, the nonlinearity of a
FE-analysis no longer consists solely of the geometrical and material nonlinearities discussed
previously.

The three most common contact algorithms employed in FE-softwares are the penalty stiffness
method, Lagrange multipliers, and the distributed parameter method. Both the penalty stiffness
method and the method of Lagrange multipliers are implemented in RADIOSS. The penalty stiff-
ness method is the most commonly used method and the only one that was considered in this

22
thesis, and therefore the coming discussion is limited to the penalty stiffness method. [10]

To get a conceptual understanding of the penalty method in contact problems, consider the simple
contact problem of a mass m, suspended in a spring with stiffness k, and under the action of
gravity g. The displacement u of the mass is constrained by the presence of a rigid surface, see
Fig. 13.

Figure 13: Point mass suspended in a spring and the concept of a penalty spring

For the spring-mass system, its energy can be formulated as


Π(u) = ½ku² − mgu. (42)
The mass is not allowed to penetrate the rigid surface, hence, a constraint on the displacement u
must be formulated. Let us therefore express this fact as

c(u) = h − u ≥ 0, (43)
where, for c > 0, there is a gap between the mass and the rigid surface. For c = 0 there is no gap.

Now, the rigid surface is replaced by a penalty spring with stiffness ϵ, see Fig. 13. This means
that the energy of the spring-mass-penalty spring system can be written as
Π(u) = ½ku² − mgu + ½ϵ(c(u))². (44)
In this way, it can be concluded that in the penalty method, the contacting surfaces are treated
as stiff springs that generate a resistive force whose magnitude is a function of penetration depth
and spring stiffness, or penalty stiffness. It can also be concluded that this conceptual problem
has two limiting cases:

• ϵ → ∞ ⇒ u − h → 0 in the case of contact. This means that with the use of a very large
penalty stiffness, Eq. (43) is fulfilled, thus preventing penetration.
• ϵ → 0 representing the solution to the unconstrained problem, i.e. inactive contact condition.
Intuitively, a low penalty stiffness will result in large penetration depth.
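The two limiting cases can be verified from the stationary point of Eq. (44). Setting dΠ/du = 0 gives u = (mg + ϵh)/(k + ϵ); a short sketch (the numeric values are assumptions chosen so that contact occurs, i.e. mg/k > h, and the penalty spring is taken as always active, as in this conceptual illustration):

```python
def penalty_displacement(k, m, g, h, eps):
    """Stationary point of Eq. (44), Pi(u) = 1/2 k u^2 - m g u + 1/2 eps (h - u)^2:
    d(Pi)/du = k u - m g - eps (h - u) = 0  =>  u = (m g + eps h) / (k + eps).
    """
    return (m * g + eps * h) / (k + eps)

k, m, g, h = 1000.0, 10.0, 9.81, 0.05  # unconstrained sag m*g/k = 0.0981 > h
print(penalty_displacement(k, m, g, h, eps=0.0))   # 0.0981: unconstrained solution
print(penalty_displacement(k, m, g, h, eps=1e12))  # approaches h: penetration vanishes
```

With a finite penalty stiffness the mass settles slightly below h, i.e. a small penetration remains; this is exactly the trade-off between accuracy and robustness mentioned above.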

In general, the penalty method is a simpler and less accurate method to deal with contact prob-
lems than the method of Lagrange multipliers. In the limiting case of infinite penalty stiffness, the
solution in terms of preventing penetration and calculating appropriate reaction forces will be the
correct solution as obtained by the method of Lagrange multipliers [27]. The benefit with using
the penalty method is its simplicity, and the fact that small penetrations can be accepted increases
the performance of the contact algorithm.

Leaving the penalty stiffness method behind, in general, when two bodies come into contact with
each other, a force is transmitted across their common interface. If friction is taken into account
there will be an additional force, and hence there will be both a normal force and a tangential
force in the contact interface, giving rise to a contact pressure and a shear stress, respectively. A
problem with contacting bodies is that they require a sophisticated algorithm due to the discon-
tinuity of boundary conditions. When the two bodies come close enough to each other so that
contact occurs, the boundary conditions of zero interpenetration must be activated analogous to
how a light switch switches on and off. When there are multiple bodies that have the possibility
of coming into contact with each other, then the FE-software one uses must be able to detect
collisions and toggle on contact conditions. If the software is unable to do so or if the user forgets
to provide it with the necessary information for doing so, then bodies would simply pass through
each other, which is extremely unphysical.

The concept of master and slave was alluded to in the previous section but a further explanation is
in order. The concept is widely used in technology and is meant to describe a relationship between
two entities or processes where the behaviour of one or several such slave entities is controlled
by the behaviour of one master entity. In the context of contact mechanics, this means that the
deformation of the slave body is governed by the deformation of the master body.

When defining contact conditions, two common types of interfaces are nodes to surface, and surface
to surface, see Fig. 14. Consider the two general bodies discretized with quadrilateral meshes,
where the red body is the slave and the blue body is the master. In the nodes to surface type of
interface, the surfaces in between the slave’s nodes are not part of the contact definition, and as
such it is possible for said surfaces to penetrate into the master’s body. Since it is only the nodes
of the slave body that are part of the contact interface, then penetrations will not be taken into
account unless they occur at the site of the nodes. In the other type of interface, the surface to
surface type, all parts of the two bodies are a part of the contact definition, and penetrations are
less likely to occur. This gives a better contact definition in that a higher accuracy is achieved,
although the contact algorithm becomes more expensive.

(a) Nodes to surface (b) Surface to Surface

Figure 14: Common types of contact interfaces

Some general aspects to consider when modeling contact conditions where one body is treated as
a master and the other as a slave are for example bodies’ relative mesh density, and in the case of
the penalty stiffness method, how to model the stiffness of the penalty springs. Regarding meshes,
it is advisable to let the body with the coarser mesh be the master body to avoid penetration
problems. Additionally, if one body is markedly larger than the other, it is also suitable to let
the larger body be the master. When it comes to stiffness, it is generally wise to let the stiffer
body be the master. Lastly, for rigid bodies, sharp corners should be avoided since they can pose
a challenge to the convergence of the contact algorithm. In such cases it is better to smoothen
out the corners with fillets while also ensuring a sufficiently fine spatial discretization to accurately
capture the new smoothened corners. [10, 28]

In RADIOSS, the determination of the penalty stiffness differs depending on if the interfacing
contact segments consist of shell elements or of solid brick elements. The determination is also
dependent on if the user wishes the contact definition to calculate the penalty stiffness based on
the stiffness of both contacting bodies or just one of them. More on that shortly, but first, let us
clarify the difference between shell elements and solid elements. For shell elements, the stiffness
of a contact segment is decided by the elements’ Young’s modulus as well as the shell thickness.
For solid brick elements, the stiffness is calculated from the bulk modulus and the geometry of the
element as

S = KA²/V, (45)
where A is the area of the element on the side where the contact is occurring, and V is the element’s
volume. For homogeneous isotropic materials, the bulk modulus K can be defined using Young’s
modulus and Poisson’s ratio according to
K = E/(3(1 − 2ν)). (46)
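Equations (45) and (46) chain together directly. A minimal sketch with assumed steel properties and an assumed 10 mm cubic brick, in SI units:

```python
def bulk_modulus(E, nu):
    """Eq. (46): K = E / (3 (1 - 2 nu)) for a homogeneous isotropic material."""
    return E / (3.0 * (1.0 - 2.0 * nu))

def brick_contact_stiffness(E, nu, A, V):
    """Eq. (45): S = K A^2 / V for a solid brick contact segment, where A is
    the area of the contacting face and V the element volume."""
    return bulk_modulus(E, nu) * A**2 / V

# Illustrative: steel (E = 210 GPa, nu = 0.3), 10 mm cube, 10 x 10 mm contact face
E, nu = 210e9, 0.3
A, V = (10e-3) ** 2, (10e-3) ** 3
print(f"K = {bulk_modulus(E, nu):.3e} Pa")
print(f"S = {brick_contact_stiffness(E, nu, A, V):.3e} N/m")
```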
When defining a contact condition in RADIOSS which uses the penalty stiffness method, the user
has a few options on how to define the interface or penalty stiffness. The default option is to base
the penalty stiffness solely on the stiffness of the main body, i.e. the master body. One should be
wary of the risk that this definition poses to the critical time step of the elements of the slave body
in the case of a more or less rigid-body as master. This is because the slave body who is less stiff
than the master body, then risks getting additional stiffness in the interface region because of the
contact definition. The higher stiffness then causes the critical time-step to decrease because the
speed of sound of the material is increased, according to Eq. (28). Another alternative is to let
the penalty stiffness be calculated solely based on the minimum of the stiffnesses of the slave body
and master body. Yet another alternative is to let the penalty stiffness be the average stiffness of
the two bodies, which again could potentially decrease the critical time step of the softer body.
Alternatively, it is also possible to calculate the penalty stiffness as the stiffnesses of both bodies
in series. [4]
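The time-step sensitivity mentioned above can be illustrated with the one-dimensional form of the critical time step, dt = L/c with wave speed c = √(E/ρ). This is a simplification in the spirit of Eq. (28), which lies outside this excerpt; the element size and material data are assumed:

```python
import math

def critical_time_step(L_char, E, rho):
    """One-dimensional critical time step dt = L / c, with wave speed
    c = sqrt(E / rho). A stiffer contact interface raises the effective
    stiffness E and therefore shrinks dt.
    """
    return L_char / math.sqrt(E / rho)

rho, L_char = 7850.0, 5e-3  # steel density, 5 mm element (illustrative)
dt_base = critical_time_step(L_char, 210e9, rho)
dt_stiff = critical_time_step(L_char, 2 * 210e9, rho)  # doubled effective stiffness
print(dt_base, dt_stiff)  # dt drops by a factor sqrt(2)
```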

An alternative approach to the strategy of a master and a slave body is to let both bodies be both
slaves and masters, which is known as symmetric master-slave contact. In the usual definition,
penetrations are only discovered when it is the slave body that is encroaching on the territory
of the master body and not the other way around. This can as discussed lead to problems with
penetration by the master body into the slave body not being discovered immediately. Using a
symmetric master-slave definition however, this problem can be alleviated, and hence higher ac-
curacy achieved. The contact algorithm works in much the same way as before except that the
subroutine which investigates the slave nodes for penetration is employed a second time to check
the master nodes as well. This makes the treatment of the two contacting bodies symmetric, hence
the name. This additional accuracy in the contact algorithm comes with the price of roughly dou-
bled computational cost compared to the usual master-slave contact definition. The higher cost
mainly arises as a fact of the extra processes in the subroutine checking for penetration. [29]

Regarding friction, a commonly used model is that of Coulomb friction wherein the frictional
behaviour of two surfaces is described by a coefficient of friction µ, and the frictional force is
governed by

Ff ≤ µFN , (47)
where FN is the normal force that each surface exerts on the other. In this model, one can
conceptualize the two surfaces as having rough profiles of valleys and peaks. The two surfaces
then stick to each other as one surface’s peaks attach to the other’s valleys, again, conceptually
speaking. The surfaces will then not slide, or slip relative to each other unless the shear stress
becomes high enough, whereafter they will again stick to each other after slipping. Suitable values
for a few material pairings of interest in this thesis are presented in Tab. 1. The values are taken
from [27].

Table 1: Friction coefficients for different material interfaces

Materials interfacing    Friction coefficient µ
metal-wood               0.3-0.65
steel-steel              0.2-0.8
wood-steel               0.5-1.2
wood-wood                0.4-1.0
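The stick/slip decision implied by Eq. (47) can be sketched as follows. The normal-force value is an arbitrary assumption; the friction coefficient is the lower bound for steel-steel from Tab. 1:

```python
import math

def coulomb_friction(F_tangential, F_normal, mu):
    """Coulomb model, Eq. (47): the surfaces stick while |F_t| <= mu * F_N;
    otherwise they slip and the friction force is capped at mu * F_N,
    opposing the applied tangential load.
    """
    limit = mu * F_normal
    if abs(F_tangential) <= limit:
        return "stick", F_tangential
    return "slip", math.copysign(limit, F_tangential)

# Steel on steel, mu = 0.2, 1 kN normal force (illustrative)
print(coulomb_friction(150.0, 1000.0, 0.2))  # ('stick', 150.0)
print(coulomb_friction(350.0, 1000.0, 0.2))  # ('slip', 200.0)
```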

3 Method
3.1 Prototype Design
The prototype of an overhead guard designed in this thesis work is shown in Fig. 15, which also
shows the different components of the prototype, the names used in the report when referring to
them, and how the prototype is assembled with screws and shear pins. Note that the fasteners seen
in section view B can be used either to mount the overhead guard to a test rig or to a forklift
chassis. The left-hand side of the prototype as seen in Fig. 15 is the side facing forward when the
overhead guard is mounted on a forklift, i.e. the operator sits or stands underneath the parts called
ribs and sees the legs/posts in front of him/her when looking in the driving direction.

Figure 15: The main geometry of the prototype showing its components and how it is assembled

In addition to screws and shear pins, the assembly of the overhead guard consists of several welds.
The components called leg and leg plate are welded together using a total of six welds which are
placed intermittently. Moreover, the components known as frame and L-beam are fully welded
together in their contacting regions. Additionally, the four parts called ribs are fully welded on
each end to the L-beam and the frame, respectively. The frame consists of two equal parts welded
together at the outermost edge of the overhead guard. Lastly, the L-beam in the middle of the
overhead guard is made of two rectangular plates that are welded together using four intermittent
welds. When designing the welds, a balance was required between making the welds as long as
possible for strength and stiffness, and avoiding excessive heat input during welding.
Too much heat during the welding process could lead to deformations in the components due to
high thermal expansion.

To design the prototype, the CAD-program CATIA V5 was used and the internal design routines
at Toyota Material Handling were followed in the design process. The design of the prototype
was based on the existing design of the overhead guard for Toyota’s truck model SPE120 with
1.2 tonnes lifting capacity. The existing design was modified by changing dimensions and certain
geometric features to create a unique design. When modifying the design, all changes were made
in the direction of increasing strength as it was deemed necessary to avoid a design which would

experience fractures during the impact drop test. Considering damage and fracture during the
drop test was considered beyond the scope of this thesis work. The design of the overhead guard
was also limited to investigating only metallic materials. This was done for two reasons; firstly, the
majority of Toyota Material Handling’s products are made with steel, so it was deemed desirable
to focus the methodology development on metallic materials. Secondly, the thesis students wanted
to limit the design to metallic materials since they had knowledge and experience of constitutive
modeling of metals but lacked the same knowledge about other material groups such as ceramics,
which could have been interesting to investigate since some of Toyota’s models are designed
with glass components in the overhead guard.

In addition to considering the strength of the overhead guard, an important factor was manu-
facturability. Due to the limited time available for the thesis work, it was desirable to get the
prototype produced and delivered as quickly as possible. The cost of material and processing was
also considered to avoid designing an unnecessarily expensive prototype. Due to these considera-
tions, all components of the overhead guard were made from sheet metal to allow for them to be
manufactured in-house at Toyota Material Handling’s facility in Mjölby. An exception to this was
the legs/posts which were too thick for in-house production and needed to be manufactured by
an external contractor. In addition to this, the two parts making up the frame were formed using
sheet metal bending to achieve the radii at the ends of the guard. This operation was also done
by an external contractor. All components were cut out from sheet metal with the use of laser
cutting. All the components are made of the same material, a construction steel with a yield stress
of 355 MPa called SS2134-01 or EN 10 025-S355JR+N. The ribs were 6 mm thick, the leg plates
and the two parts making up the L-beam were 10 mm thick, the frame was 12 mm thick and the
legs were 20 mm thick. Originally, the prototype was designed to have different legs/posts than
the ones seen in Fig. 15, but problems arose since they would have too long a delivery time from
the external contractor and so a last-minute change was made to the design of the prototype. This
change consisted of changing the legs/posts to the same design as in the SPE120 model since that
made it possible to get them delivered in time.

3.2 Material Testing


In an effort to perform accurate material constitutive modeling, it was decided at the start of the
thesis work that material testing should be undertaken. Considering the available resources for
material testing at both Toyota Material Handling in Mjölby and Linköping University and after
an investigation into the Johnson-Cook material model, it was decided to perform tensile testing
in the form of both conventional tests to obtain elastic and plastic material parameters, as well as
tensile tests at varying strain rates to obtain material parameters for the Johnson-Cook model.
The original idea was to create test pieces from the same batches of sheet metal as the components
of the prototype were manufactured from so as to get accurate material parameters unique for each
component. Since the prototype was manufactured from a total of 4 different material batches of
varying thicknesses, the plan was to perform conventional tensile testing on at least 3 different
specimens from each material batch. This proved difficult to realize, however: due to a misunderstanding
during the ordering of the test pieces, they were delivered with the wrong thicknesses.
Instead of receiving test pieces with 3 mm thickness as planned, test pieces with 6, 10, 12, and
20 mm thickness were delivered, which was much too thick. Problems also arose when the test
pieces were to be milled down to the correct thickness, and an adaption to this new situation had
to be made. A new order for test pieces was placed and these were manufactured through the use
of laser cutting directly on a piece of sheet metal of 3 mm thickness. These new test pieces were
made of the same material as the components of the prototype but not from the same batches.
This material was as previously mentioned SS2134-01, also denoted EN 10 025-S355JR+N.

3.2.1 Test Piece Design
When designing the test pieces, the ISO-standard mentioned previously in the theory chapter was
followed to create a proportional test piece which follows the relationship between gauge length
and cross-sectional area laid out in Eq. (1), with k as 5.65. Consideration was taken to the speci-
fications of the tensile test machine Instron 5582 which was to be used; specifically, its maximum
force output was rated at 100 kN. In order to not risk the integrity of the machine, some level
of margin was established so as to avoid using the machine at its full capacity. It was decided to
not utilize a force output higher than 50 kN. Since it was known from company internal material
data sheets that the material could be expected to have a yield stress around 355 MPa and an
ultimate tensile strength upwards of 600 MPa, the test pieces were designed with a cross-sectional
area such that a stress-level of 800 MPa could be achieved in the machine. This meant that the
cross-sectional area would have to be 60 mm², and with that a force level of 48 kN would be needed
from the machine, so the safety margin was respected. Because of considerations taken to
manufacturing aspects such as machinability and material wastage, it was decided to design the
test pieces with a thickness of 3 mm. Given this thickness, the parallel length section of the test
pieces was designed with a width of 20 mm to create the aforementioned cross-sectional area.

As mentioned above, it was desirable to create a proportional test piece, and given the aforementioned
cross-sectional area, the gauge length was calculated at 44 mm. From the gauge length, the
parallel length was calculated according to the relationship between the two explained in the theory section.
Also discussed in the theory section is the demand placed on the transition radius between gripped
ends and parallel length and this was followed. The tensile testing standard outlines that the
gripped ends of a test piece shall be designed with a width at least 33 % higher than that in the
parallel length section. Given this, the gripped ends were designed with a square geometry of
side length 30 mm so as to create sufficient area for the clamping in the tensile test machine. The
resulting test piece is presented in Fig. 16.

Figure 16: The geometry of the test piece designed in this thesis work
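The sizing logic described above can be reproduced in a few lines. A small sketch, using the values from the text (the variable names are our own):

```python
import math

# Check of the test piece dimensions: gauge length from the proportionality
# rule L0 = k * sqrt(S0) with k = 5.65 (ISO 6892-1), and the machine force
# needed to reach the 800 MPa target stress level.
k = 5.65            # proportionality factor from Eq. (1)
width = 20.0        # mm, width of the parallel-length section
thickness = 3.0     # mm, sheet thickness
max_stress = 800e6  # Pa, target stress level

area = width * thickness                   # cross-sectional area, 60 mm^2
gauge_length = k * math.sqrt(area)         # ~44 mm
required_force = max_stress * area * 1e-6  # N (area converted to m^2)

print(round(gauge_length), "mm")       # 44
print(required_force / 1e3, "kN")      # 48.0, below the 50 kN margin
```

This confirms both the 44 mm gauge length and that the 48 kN peak force stays within the self-imposed 50 kN limit of the Instron 5582.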

3.2.2 Conventional Tensile Testing


The first step was to visually inspect the tensile specimens and look for signs of damage or visible
notches. If the specimens were satisfactory, their cross sectional area along the parallel length
was measured using calipers. This was done at three different places along the parallel length
on each specimen and the average cross-sectional area of these three measurements was used in
the computer program controlling the tensile test machine. The tensile test machine used in this
thesis work was Instron 5582 along with a hydraulic method of clamping. The specimens were
clamped on their gripped ends with a pressure of 200 bar. After the specimens had been clamped,
an extensometer was attached to the specimen. To achieve a strain rate of 2.5E-4 s−1 , a velocity of
0.9 mm/min was prescribed to the machine. Before starting the tests, the measurements of force

and strain from the extensometer were set to zero. Since the extensometer was only to be utilized
in the elastic region, a pause criterion was defined at 2 % strain since this strain level would be
achieved some distance into the plastic region.

After the aforementioned preparations, the machine was started. Once the machine paused after
having reached the pause criterion, the extensometer was carefully removed and the test was
resumed and carried on until failure. After failure, the two fractured pieces of the specimen were
removed from the machine and the fractured ends were pieced together carefully and the cross-
sectional area in the fracture region was estimated by using calipers. The parallel length after
fracture was also measured. This procedure was repeated 5 times for 5 specimens to reduce the
impact of statistical variance. These specimens are named specimen 1-5 in the report.

3.2.3 Tensile Testing at Varying Strain-Rates


When doing tensile tests at varying strain rates, the preparations before starting the tests were
the same as described previously for the conventional tensile testing, except that no extensometer
was used since it would risk being damaged at the high strain rates. A total of nine specimens
were tested at different strain rates equally spaced on a log-scale, with the exception of the ninth
specimen which was tested at the machine’s maximum crosshead speed, see Tab. 2.

Table 2: Testing rates during the varying-strain rate testing

Specimen   Strain rate [s−1]   Crosshead speed [mm/min]
6          1.00E-4             0.36
7          2.68E-4             0.97
8          7.20E-4             2.59
9          1.93E-3             6.95
10         5.18E-3             18.6
11         1.39E-2             50.0
12         3.73E-2             134
13         1.00E-1             360
14         1.39E-1             500

3.2.4 Post-Processing of Test Data


After the tensile testing, the raw stress-strain data was post-processed to identify the material
properties needed in this thesis. The raw data was fairly clean, but the extensometer being
removed at 2 % strain caused some nonphysical data points, so these were removed manually.
Next, the elastic material parameters, namely Young’s modulus E, and the yield stress σy , were
extracted from the conventional tensile tests. The stress-strain curves did not have a single yield
point but instead clearly exhibited an upper and a lower yield point, see Fig. 17. Therefore, to get a
conservative estimate, the lower yield point was chosen. When the yield point had been extracted,
Young’s modulus was determined according to the tensile test standard ISO 6892-1:2016 [3], i.e.
using linear regression on the data points between 10 % and 40 % of the yield stress, where the
slope of the linear regression represent Young’s modulus. After that, the ultimate tensile strength
σu and the strain at ultimate tensile strength ϵu , were chosen as the stress and strain values where
the maximum stress was achieved. The strain at failure ϵf ailure , was taken as the strain value at
the last data point.
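The modulus-fit procedure described above can be sketched as follows. Synthetic linear-elastic data stands in for the measured curves, and the parameter values are illustrative (close to those used in this work):

```python
import numpy as np

# Sketch of the ISO 6892-1 modulus fit: linear regression on the data points
# between 10 % and 40 % of the yield stress; the slope is Young's modulus.

E_true = 180.8e9                       # Pa, used to generate synthetic data
sigma_y = 390.29e6                     # Pa, lower yield stress
strain = np.linspace(0.0, 0.002, 200)
stress = E_true * strain               # idealized linear-elastic response

mask = (stress >= 0.10 * sigma_y) & (stress <= 0.40 * sigma_y)
slope, _ = np.polyfit(strain[mask], stress[mask], 1)
print(slope / 1e9, "GPa")  # ~180.8
```

With real test data, the same masking and regression would be applied to the recorded force-extensometer signal after conversion to stress and strain.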

Figure 17: Typical tensile response when the material exhibits an upper and a lower yield point

Furthermore, the Johnson-Cook plastic parameters, namely hardening modulus B and hardening
exponent n, were also extracted from the conventional tensile tests. To calculate these parameters,
true measures were needed, and since the raw data from the tensile test machine was in the form
of engineering stress and engineering strain, it was converted to true measures using Eq. (21) and
Eq. (20). First, the data points between the yield point and the onset of necking were extracted from the stress-
strain curves. Next, these data points were converted from stress and strain to plastic stress, and
plastic strain by subtracting the elastic stress, and the elastic strain, respectively. This simplified
the Johnson-Cook law to

σ_plastic = B ϵ_p^n . (48)


After that, the hardening modulus B, and the hardening exponent n, were determined using a
nonlinear regression on the plastic stress-plastic strain data points. As stated in the theory chap-
ter, the original idea for calculating the hardening modulus B, and the hardening exponent n was
to first rewrite the Johnson-Cook equation to Eq. (39), and thereafter perform a linear regression
on it. However, while doing this it was discovered that the curve had three distinct linear parts,
and the linear regression could not achieve a good correlation to the tensile data. Therefore the
nonlinear regression as described above was used instead.
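The nonlinear regression for the hardening law in Eq. (48) can be sketched without a library optimizer: for each candidate n, the best B follows in closed form from linear least squares, so a simple grid search over n suffices. Synthetic data generated from the fitted thesis values (B = 597.9 MPa, n = 0.6545) stands in for the measured curves:

```python
import numpy as np

# Fit sigma_plastic = B * eps_p^n by grid search over n; for each n the
# least-squares B is B = (sigma . x) / (x . x) with x = eps_p^n.

eps_p = np.linspace(0.005, 0.15, 100)
sigma_p = 597.9 * eps_p**0.6545       # MPa, synthetic "measured" data

best = None
for n in np.arange(0.3, 1.0, 0.0005):
    x = eps_p**n
    B = np.dot(sigma_p, x) / np.dot(x, x)
    sse = np.sum((sigma_p - B * x) ** 2)
    if best is None or sse < best[0]:
        best = (sse, B, n)

_, B_fit, n_fit = best
print(round(B_fit, 1), round(n_fit, 4))  # recovers ~597.9 and ~0.6545
```

A general-purpose nonlinear least-squares routine would serve equally well here; the grid search is used only to keep the sketch self-contained.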

Both elastic and plastic material parameters determined from the conventional tensile tests were
extracted from each of the five specimens, whereafter the mean of each material parameter was
calculated.

Lastly, the strain-rate sensitivity parameter C was determined using data from the tensile tests
at varying strain-rate. As before, this data was first converted from engineering measures to true
measures. Next, the strain was converted to plastic strain by subtracting the elastic strain. One
point from each of the stress-strain curves at a constant plastic strain of 12 % was extracted. The
strain-rate and stress at these points were then plotted using Eq. (41), where the reference strain rate
ϵ̇0 was the strain rate of the conventional tensile tests, namely 2.5E-4 s−1 , and the reference stress
σs was the stress at 12 % plastic strain in the conventional tensile tests. Next, a linear regression
was performed on these points, where the slope of the linear regression represented the strain rate
sensitivity parameter C.
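The fit for C can be sketched as follows, assuming Eq. (41) has the standard Johnson-Cook rate form σ/σ_s = 1 + C ln(ϵ̇/ϵ̇0). Synthetic points generated with C = 0.0103 replace the measured stresses at 12 % plastic strain; the reference stress value is an assumed placeholder:

```python
import numpy as np

# Linear regression for the strain-rate sensitivity parameter C: plot
# sigma/sigma_s against ln(rate/rate0); the slope is C.

rate0 = 2.5e-4                        # reference strain rate [1/s]
rates = np.array([1.0e-4, 2.68e-4, 7.2e-4, 1.93e-3, 5.18e-3,
                  1.39e-2, 3.73e-2, 1.0e-1, 1.39e-1])
sigma_s = 520.0                       # MPa, assumed reference stress
sigma = sigma_s * (1 + 0.0103 * np.log(rates / rate0))  # synthetic data

C_fit, _ = np.polyfit(np.log(rates / rate0), sigma / sigma_s, 1)
print(round(C_fit, 4))  # ~0.0103
```

With the real test data, scatter around the fitted line gives a visual check of how well the logarithmic rate term describes the material.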

3.3 Experiments
First, a rig, see Fig. 18, was placed on six 15 mm thick rubber pads to represent the wheels on a
forklift. The fixture was weighted with 1494 kg in the front and 650 kg in the rear to represent
the weight of a real forklift.

Figure 18: The rig that the overhead guard was mounted on during testing

Next, the overhead guard was mounted on the rig with four M16 bolts with a fastening moment
of 189 Nm and four shear pins on each side. During the mounting process, a weight of 100 kg
was added in the outermost edge of the overhead guard to avoid play in the mounting area during
the experiment. After the overhead guard was mounted on the rig, 22 measurement points were
marked as seen in Fig. 19, and the planned impact points for the wooden cube in the dynamic
test were marked on top of the overhead guard according to Fig. 20. The initial distance from the
green weight to the measurement points was measured using a laser measuring device, see Fig. 21,
with four measurements being taken at each measurement point to ensure accuracy.

(a) Picture of the overhead guard with measurement points marked
(b) Measurement points on the overhead guard as seen from below

Figure 19: Measurement points on the overhead guard

Figure 20: The impact points used in the dynamic test

Figure 21: The laser measuring device used to measure the distance to the measurement points

Next, the wooden cube was attached to the forks of a forklift along with a pneumatic release mech-
anism and was thereafter raised to a height of 1.5 meters directly above the first impact point. To
prevent the cube from damaging nearby equipment or personnel, it was attached to the forks with
a safety rope so that after impact it could not bounce away too far. When everybody was at a
safe distance from the test rig, the cube was released. This was repeated on all impact points in
the correct order, see Fig. 20. After the cube had been dropped ten times, the distance from the
green weight to the measurement points was measured in the same manner as before.

Following the completion of the dynamic test, the timber load was prepared. Since the overhead
guard was designed for a forklift with a lift capacity of 1.2 tonnes, the timber load needed to
have a minimum mass of 340 kg, according to the table in Appendix A. The timber load used in
the experiment had a weight of 343 kg, and therefore had to be dropped from a height of 1605
mm above the overhead guard to achieve the impact energy stated in the table in Appendix A, i.e.

5400 J. Furthermore, the timber load was 3600 mm long, 660 mm wide and 340 mm high. Using
the same release mechanism as for the wooden cube, the timber load was attached to the forks of a
forklift and raised 1605 mm above the centre-line of the overhead guard, see Fig. 20. Using a line
laser as a guide, the timber load was manually adjusted until it was approximately at right angles
with the overhead guard, and then the timber load was released. After the timber load had been
removed from the overhead guard, the distance between the green weight and the measurement
points was measured as before.
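The drop height quoted above can be checked directly from the required impact energy. A small sketch, assuming g = 9.81 m/s² and that the impact energy equals the potential energy m·g·h:

```python
import math

# Required drop height for the timber load: E = m*g*h  =>  h = E/(m*g),
# and the free-fall impact velocity v = sqrt(2*g*h).

g = 9.81          # m/s^2
energy = 5400.0   # J, required impact energy from the standard
mass = 343.0      # kg, timber load used in the experiment

h = energy / (mass * g)   # ~1.605 m
v = math.sqrt(2 * g * h)  # impact velocity at the end of free fall

print(round(h * 1000), "mm")  # 1605
print(round(v, 2), "m/s")     # ~5.61
```

This reproduces the 1605 mm drop height used in the experiment.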

Lastly, the overhead guard was disassembled and all parts were visually inspected to find fractures,
cracks and areas of large deformation.

3.4 Basic FE-modeling


The FE-modeling in this thesis work started off by creating a model as simple as possible yet involving
everything necessary to simulate the experimental drop test of the overhead guard. This
modeling was aided by preexisting methodology documents internal to Toyota Material Handling
and the thesis students’ prior knowledge and intuition about reasonable simplifications and mod-
eling approaches.

3.4.1 Geometry Simplification


The work began by importing the CAD-model of the prototype into HyperWorks and simplifying
the geometry by removing small features such as chamfers, fillets and small holes. This was done
to ease meshing later on. Next, the midsurface of each component was extracted to create a shell
model. Midsurfaces of components that were supposed to be welded together were extended to
close the gaps between them.

3.4.2 Meshing
The midsurface of each component was meshed by a surface mesh consisting of primarily 8 mm
quadrilateral elements, with sporadic triangular elements needed to obtain a uniform mesh of high
quality. In the regions where components of the prototype were welded together, it was ensured
that both meshes were connected by sharing nodes.

3.4.3 Element formulations


The element formulation chosen for the shell mesh was the QEPH formulation which is an under-
integrated, four-node element with physical hourglass stabilization for general use. It was decided
to use five through thickness integration points.

3.4.4 Material Modeling


For simplicity, the first material model used was a linear-elastic-perfectly-plastic model with density
ρ=7850 kg/m³, Young’s modulus E=210 GPa, and yield strength σy =355 MPa. It was obvious
after the completion of the first analysis that regions of the overhead guard experienced strains well
into the plastic region. The maximum strain in the model was 23.5 % and this material model was
thus deemed insufficient as strain-hardening would likely have to be considered. It was therefore
decided to use the Johnson-Cook material model using a simplified input consisting of ultimate
strength and strain at ultimate strength instead of the conventional hardening parameters. For
ultimate tensile strength, σu =490 MPa was used along with strain at ultimate tensile strength,
ϵu =0.24. Since this modeling was done prior to the tensile testing, these parameters were taken
from a document internal to Toyota Material Handling.

3.4.5 Boundary Conditions


Regarding boundary conditions, the FE-model was fixed in place at the M16 holes on the lower
part of the legs, see Fig. 15. This was done by prescribing a fixed displacement of 0 in all six
degrees of freedom for all nodes in said holes by using a RBE2 MPC, see Fig. 22.

Figure 22: Boundary conditions of the FE-model

3.4.6 Connections
The legs were connected to the frame at the four holes for shear pins, see view A in Fig. 15. This
was done by fixing all nodes in said holes on the legs in relation to all nodes in the corresponding
holes on the frame by using a RBE2 MPC, see Fig. 23. This meant that the nodes on one
component completely followed the motion of the nodes on the other component.

Figure 23: The connection of the frame to the legs, highlighted in red

3.4.7 Analysis Setup


To model the loads in the drop test, ten cubes and one timber load were modeled with the same
dimensions as those used in the physical test. Although the overhead guard standard states that
the cube should have 10 mm radii on all the edges, the cubes were modeled without these to ease
meshing. The loads were placed 10 mm directly above their respective impact points, see
Fig. 20, and prescribed an initial velocity to match the one developed during free fall from their
respective drop height, in accordance with the discussion in the theory chapter. The cubes were
hence prescribed an initial velocity of 5.425 m/s, and the timber load was prescribed an initial
velocity of 5.636 m/s. The loads were modeled as rigid bodies since the deformation of these was
not of interest and deformable bodies would lead to higher computational cost. The cubes were
meshed with 20 mm large quadrilateral elements, making a uniform mesh. The timber load was
meshed with 50 mm large quadrilateral elements.
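The prescribed initial velocity of the cubes follows from the free-fall relation discussed in the theory chapter. A minimal check, assuming g = 9.81 m/s²:

```python
import math

# Initial velocity for a load dropped from height h: v = sqrt(2*g*h).
g = 9.81                          # m/s^2
v_cube = math.sqrt(2 * g * 1.5)   # cube dropped from 1.5 m
print(round(v_cube, 3), "m/s")    # 5.425
```

The timber-load velocity follows analogously from its drop height.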

Due to the nature of the FE-analysis, the loads striking the structure induced vibrations which
would not cease on their own. Although the material has a small amount of damping in and of itself

due to the element formulation, this effect is not strong enough to bring the structure to rest within
a reasonable time-frame. Therefore, additional damping was introduced in between all the loads
and after the last load to stop the vibrations so that every load would hit a motionless structure,
like in the physical test. The damping that was used was automatic dynamic relaxation, called
ADYREL in RADIOSS.

To capture the impact behaviour of the loads, contact definitions were introduced. As a start, a
general surface-to-surface definition utilizing the penalty stiffness method, called TYPE24, was used,
with a unique contact definition for each load. The load was set as the master and the structure was
set as the slave. This meant that there was nothing defining self-contact and contact in between
components of the structure. This simple contact definition was chosen to save computational cost.

To keep computational cost down, each cube was given a 0.2 second simulation window. This
gave the cube enough time to impact the structure and allowed the structure to be damped before
the next load, while avoiding unnecessary downtime. At the start of each window, the respective
cube had an initial velocity and the previous cube was stopped to avoid unnecessary computational
cost. The first half of the 0.2
seconds window consisted of the cube striking the structure and the second half of the window
consisted of damping. After the ten cubes had been dropped, the timber load was given a slightly
higher simulation time of 0.3 seconds since it was expected that it would be in contact with the
structure for longer. The first 0.2 seconds of this window consisted of the timber load striking the
structure and the last 0.1 seconds was made up of damping.

In the basic FE-model, no effort was made towards time-step control, i.e. no mass-scaling was
used, and the time-step was chosen automatically by RADIOSS depending on the smallest distance
between nodes using Eq. (26).
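The order of magnitude of the automatic time-step can be estimated with the common rule of thumb dt ≤ L/c, used here as a stand-in for Eq. (26) (which is not reproduced in this chunk), where L is the smallest characteristic element length and c the elastic wave speed:

```python
import math

# Rough stable-time-step estimate for the explicit analysis: the elastic
# wave speed c = sqrt(E/rho) for the steel, and dt = L/c for the 8 mm mesh.

E = 210e9     # Pa, Young's modulus of the steel
rho = 7850.0  # kg/m^3, density
L = 8e-3      # m, shell element size

c = math.sqrt(E / rho)  # ~5172 m/s
dt = L / c              # ~1.5e-6 s
print(round(c), "m/s,", dt, "s")
```

This microsecond-scale step explains why explicit simulations of the multi-second drop sequence are computationally expensive.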

3.5 Advanced FE-modeling


Following the insights, and understanding for the problem at hand that was gained from the basic
FE-model, more advanced modeling was undertaken to improve the results. Here, it was decided
to create a solid model as well as to further develop the basic shell model. The reason for going
into solid modeling was that it became obvious that results of satisfactory accuracy would not be
achieved using a shell element model. It was decided to keep investigating a shell element model
because of the dramatically lower computational cost compared to a solid model. This was seen
as useful when testing various kinds of settings and modeling approaches, as there would then not
be as much waiting time until their effects could be studied.

3.5.1 Meshing
When undertaking meshing of both the solid model and when further developing the shell model,
the elements were inspected thoroughly, and checked against the quality criteria presented in the
previous theory chapter. In the event of elements failing to meet any of these criteria, they were
fixed to the best possible extent using tools for element optimization in HyperWorks. In the cre-
ation of the solid mesh, care was taken to keep the number of pentahedral elements to a minimum
to ensure results of high accuracy.

The first strategy when creating the mesh for the solid model was to align the meshes of welded
components directly to the meshes of the welds, which was alluded to in the theory chapter about
welds. This way of creating a uniform, high-quality hexahedral mesh proved tedious and difficult
however since it required the geometrical entities to be split up into several smaller, and more
regularly shaped domains for the meshing tools in HyperWorks to function. To combat this issue,
the alternative approach alluded to previously of using tied-interfaces was tried, which proved suc-
cessful. This way, weld nodes could be connected to the immediate surfaces of components around
them, thus negating the need for complicated treatment of the components’ geometries in order to
get their meshes to align with the welds. The approach of using tied interfaces not only meant

that meshing complexity was severely reduced but also that higher quality meshes on the welded
components could be achieved.

To make sure that the discretization was not a limiting factor for the simulated results, a mesh
sensitivity analysis was undertaken for both the solid model and the shell model. The sensitivity
studies were performed in the conventional manner, where the mesh density was doubled for each
level, and the change in resulting displacement was checked along with the change in computational
cost in the form of simulation time.

3.5.2 Element-formulation
The merit of using first-order elements was discussed extensively in previous chapters. Although it
is uncommon to employ second-order elements in explicit dynamic analyses, it was deemed worth-
while to investigate any potential improvement to the simulated results that might be achieved
through their usage. This proved difficult to realize however; the use of second-order shell elements
was rendered impossible by the fact that RADIOSS had no such element formulation implemented.
Similarly, second-order hexahedral elements could not be tested since RADIOSS did not support
their use together with the implemented contact definitions. The effort to investigate the effect of
second-order element formulations was therefore abandoned.

For the shell model, the effect of having a varying number of through-thickness integration points
in the element formulation was investigated. This was done to see if results of higher accuracy
could be achieved with a higher number of points, to make sure that the number of points was not
a limiting factor. Only odd numbers of through-thickness integration points were tested to ensure
a symmetrical distribution about the elements’ midsurfaces.

Also discussed at length previously in the thesis are the merits and drawbacks of using fully integrated
elements. Under-integrated elements were used in the basic FE-model, and are commonplace in
explicit FE-analysis, but the effect of full integration was still seen as worth investigating. Analyses
with fully-integrated elements were therefore performed with both the shell model and the solid model.

3.5.3 Material Modeling


It was decided to focus on the Johnson-Cook material model since it is a widely used model and
because it can account for strain-rate sensitivity, which was hypothesized as being of great impor-
tance. It was described previously that RADIOSS gives the user the option of specifying material
parameters related to strain hardening either using the classic input or a simplified input. For this
reason, it was decided to test both of these options, which was done using material parameters
obtained from the material testing.

The Johnson-Cook model was also tested with the inclusion of the strain-rate sensitivity parameter
obtained from the material testing. This was done to see how strain-rate hardening would affect
the results.

1 Simplified input with original material data:
E=210 GPa, ρ=7850 kg/m³, ν=0.3, A=355 MPa, σu=490 MPa, ϵu=0.24
2 Simplified input with new material data:
E=180.8 GPa, ρ=7850 kg/m³, ν=0.3, A=390.29 MPa, σu=478.15 MPa, ϵu=0.1674
3 Classic input with new material data:
E=180.8 GPa, ρ=7850 kg/m³, ν=0.3, A=390.29 MPa, B=597.90 MPa, n=0.6545
4 Classic input with new material data, including strain-rate sensitivity:
E=180.8 GPa, ρ=7850 kg/m³, ν=0.3, A=390.29 MPa, B=597.90 MPa, n=0.6545, C=0.0103, ϵ̇0=2.5E-4 s−1
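As a sanity check, the flow stress implied by parameter set 4 can be evaluated with the Johnson-Cook hardening law. This sketch assumes the usual multiplicative form of the rate term, σ = (A + Bϵ_p^n)(1 + C ln(ϵ̇/ϵ̇0)):

```python
import math

# Johnson-Cook flow stress for parameter set 4, evaluated at 12 % plastic
# strain and the reference strain rate (so the rate factor equals 1).

A, B, n = 390.29, 597.90, 0.6545   # MPa, MPa, -
C, rate0 = 0.0103, 2.5e-4          # -, 1/s

def jc_stress(eps_p, rate):
    return (A + B * eps_p**n) * (1 + C * math.log(rate / rate0))

print(round(jc_stress(0.12, rate0), 1), "MPa")  # ~540 MPa
```

At higher strain rates the logarithmic term raises the flow stress slightly, which is the hardening effect the advanced model is meant to capture.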

3.5.4 Connections
Regarding the holes with shear pins where the legs were connected to the frame, two different
modeling approaches compared to the basic model were investigated. The first approach was
close to the modeling in the basic model, with RBE2 MPC’s in the shear pin holes as before,
but adding beam elements in between the two components leg and frame. That is, two RBE2
definitions in each hole, with a beam element connecting them. This beam element was given a
circular cross-section of the same dimension as the shear pin. This test was done for the shell model.

The other approach meant a less simplified modeling of the shear pins, with them being modeled
as solids according to the correct geometry as seen in Fig. 15 in view A. The M8 screws along with
the washers were excluded however, even if their holes were included in the modeling. Instead,
RBE2 definitions were utilized in the small holes by having one common RBE2 definition for each
of the eight through-holes without any beam elements.

3.5.5 Welds
Regarding the modeling of the welds in the solid model, it was previously discussed how the original
modeling approach caused problems. The welds were therefore modeled with tie-contacts.
For the shell model it was also decided to test using tie-contact to connect the components without
extending the midsurfaces.

3.5.6 Contact Modeling


The basic FE-model did not include contact between structural parts of the overhead guard but
only contact between the loads and the structure. It was therefore tested what effect it would have
to account for contact between components of the overhead guard as well. This was first done
using the same contact definition as previously, i.e. the TYPE24 keyword.

It proved difficult to simulate contact between the components of the overhead guard as the
TYPE24 contact definition did not handle initial penetrations well when using the shell model,
and it created numerical problems. For this reason, it was decided to test the TYPE25 contact
definition instead since it was supposed to handle initial penetration better, and in general be an
improved version of the TYPE24 keyword.

Regarding friction, after investigation into relevant literature, it was found that the coefficients of
friction that had been used in the contact definitions in the basic model were unsuitable. Instead,
different coefficients of friction for both the contact between the overhead guard and the loads,
and the components of the overhead guard were tested, and these were taken from Tab. 1. Based
on the range of values from Tab. 1, it was decided to use the lowest values, the middle values, and
the highest values as:
• Steel-Steel pairing: 0.2, 0.5, and 0.8
• Wood-steel pairing: 0.5, 0.85, and 1.2

3.5.7 Load Modeling


In the basic FE-model, the impact cube was modeled without any radii along its edges for sim-
plicity, despite the fact that the cube used in the physical drop test had a radius of 10 mm along
all its edges, as dictated by the test standard. In the advanced FE-modeling therefore, the cube
load was modeled with these radii to closer resemble the physical experiment.

Regarding the modeling of the timber load, it was first modeled in the same way as in the basic
model, i.e. as a rectangular box with the same dimensions as in the physical test. Problems arose
however, when the contact definition was changed from the previous TYPE24 to TYPE25. When
the timber load would strike the solid model, some elements in close proximity to the impact region
would sometimes get distorted. To combat this issue, radii of 10 mm were added to the timber
load along all its edges, since sharp corners can pose numerical difficulties to contact algorithms.

3.5.8 Damping
After closer investigation of the displacement behaviour in the basic model, it was discovered
that there was an unnecessarily long time between cube impacts. Following the impact of a cube,
damping was activated to return the overhead guard to rest, and it was clear that there was
potential for reducing the damping time, since damping was prescribed even after the structure
had returned to rest. For this reason, to reduce computational cost, the damping time was reduced.

Different from the basic model which used the ADYREL type of damping, it was seen as desirable
to test another type of damping called kinetic energy relaxation (KEREL). KEREL is a more ag-
gressive form of damping, and it was therefore believed that using it might allow for the simulation
time to be reduced. This proved difficult to implement, however, since using KEREL caused severe
numerical instabilities that made the simulation terminate prematurely.

3.5.9 Simulation Time


For similar reasons as to why damping was investigated, different simulation times were tested,
where events were given both longer and shorter times to happen. This was done to find how much
simulation time would be needed to obtain stable results, and to make sure that the simulations
were not run for unnecessarily long times, which would be inefficient from the point of view of
computational cost.

3.5.10 Time Step Control


To reduce computational cost, mass scaling was investigated for both the shell model, and the
solid model. It was found in the literature that it is recommended to keep mass scaling below 2
% to ensure accuracy. The mass scaling was done such that a desired time-step was specified in
the engine files of the simulation, and the program then automatically added sufficient mass to
nodes in each element with a critical nodal time-step less than the specified value. This was done
systematically by starting from the minimum time-step necessitated by the mesh and increasing it
by an increment of 20 % and running a simulation containing only the overhead guard, whereafter
the output log was checked after about 1 minute to see how large the mass error was. The reason
for only including the overhead guard in this was that mass-scaling could only occur on deformable
bodies and not on the rigid loads. The mass error is simply the percentage increase of the model's
mass. If the mass error was below 1.5 %, the simulation was terminated and the
process repeated. This was iterated until the mass error got close to 1.5 %, whereafter the desired
time-step was changed in smaller increments until an error of 1.5 % was achieved.

The reason for restricting the mass error to 1.5 %, and not 2 % was to have some level of margin in
the event that further mass-scaling would be necessitated later in the simulation of the drop test
in case elements would deform significantly due to the impact of the loads. Severe compressive
deformation of an element could lead to its characteristic length being decreased, hence the need
for additional mass scaling.
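As a sketch of this procedure, the loop below mimics the iterative search. Note that `run_short_simulation` is a hypothetical stand-in for launching a short run and reading the mass error from the output log, and `critical_time_step` is the standard one-dimensional stability estimate Δt ≤ L_c/√(E/ρ), which also illustrates why a compressed characteristic length forces additional mass scaling.

```python
import math

def critical_time_step(char_length_m, E_pa=206e9, rho=7850.0):
    """1D explicit stability estimate: dt <= L_c / c, wave speed c = sqrt(E/rho).
    Severe compression shrinks L_c, which lowers the stable time step."""
    return char_length_m / math.sqrt(E_pa / rho)

def find_target_time_step(base_dt, run_short_simulation,
                          max_mass_error=1.5, coarse=1.20, fine=1.05):
    """Raise the requested time step until the reported mass error (%) approaches
    max_mass_error. run_short_simulation(dt) stands in for running a short
    simulation of the overhead guard and reading the mass error from the log."""
    dt = base_dt
    # Coarse phase: +20 % per step while the error stays below the limit
    while run_short_simulation(dt * coarse) < max_mass_error:
        dt *= coarse
    # Fine phase: smaller increments to land close to the limit
    while run_short_simulation(dt * fine) < max_mass_error:
        dt *= fine
    return dt
```

The 20 % factor mirrors the coarse increment described above; the exact size of the smaller increments is not stated in the text, so the 5 % fine factor is an assumption.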

3.5.11 Extra Material Model


Late in the thesis work, limitations of the previously mentioned material models were better
understood. These limitations are discussed at length later in the report. After searching for
relevant literature, a paper was discovered [30] which had tested an S355 steel similar to the one
in this thesis, using a drop-dart machine setup with a falling mass of 20 kg dropped from a height
of 2 m, hence achieving a velocity of about 6.2 m/s, which is similar to the situation of the physical
drop test in this thesis. It was thus decided to test the parameters from the aforementioned paper
to see how the results might be improved by using strain-rate sensitivity data obtained at strain-
rates close to the levels experienced by the overhead guard during the physical drop test. The
material parameters that were used in this test are given by Tab. 3.

Table 3: Material parameters for the extra material model

E [GPa]  ρ [kg/m3]  ν [-]  A [MPa]  B [MPa]  n [-]  C [-]  ϵ̇0 [s−1]
206      7850       0.3    457      417      0.50   0.074  1

The reader might note that the yield stress A is considerably higher than in the previously
mentioned material models. This is because a reference strain rate ϵ̇0 of 1 s−1 was used in that
paper, and the authors adjusted the yield stress accordingly. When using the Johnson-Cook model,
many take the reference strain-rate to be 1 s−1; although this is not ideal, it is a common
misunderstanding in the literature [31].
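To make the role of the reference strain-rate concrete, the sketch below evaluates the standard two-term Johnson-Cook flow stress, σ = (A + B·εₚⁿ)(1 + C·ln(ε̇/ε̇₀)), with the thermal softening term omitted. The function name and the chosen plastic strain of 5 % are illustrative only; the parameter sets are the ones given above and in Tab. 3.

```python
import math

def jc_flow_stress(eps_p, eps_rate, A, B, n, C, eps_rate_0):
    """Johnson-Cook flow stress in MPa, thermal softening term omitted:
    sigma = (A + B*eps_p**n) * (1 + C*ln(eps_rate/eps_rate_0))."""
    static_part = A + B * eps_p ** n
    rate_part = 1.0 + C * math.log(eps_rate / eps_rate_0)
    return static_part * rate_part

# Parameters fitted from the tensile tests in this thesis (eps_rate_0 = 2.5E-4 1/s)
thesis = dict(A=390.29, B=597.90, n=0.6545, C=0.0103, eps_rate_0=2.5e-4)
# Parameters from the drop-dart paper [30] (eps_rate_0 = 1 1/s, hence the higher A)
paper = dict(A=457.0, B=417.0, n=0.50, C=0.074, eps_rate_0=1.0)

# Flow stress at 5 % plastic strain and a drop-test-like strain rate of 260 1/s
sigma_thesis = jc_flow_stress(0.05, 260.0, **thesis)
sigma_paper = jc_flow_stress(0.05, 260.0, **paper)
```

At this strain rate the paper's parameter set predicts a noticeably higher flow stress than the tensile-test fit, which is consistent with the stiffer (smaller-displacement) response reported for the extra material model in Tab. 27.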

3.6 Validation of Simulation


To validate the FE-model, a comparison with the results from the experiments had to be done.
Since the ANSI and ISO overhead guard standards place requirements on the maximum deflection
after the dynamic test and the impact drop test, these values were the most important comparison
between the FE-model and the experiments. That is, the maximum deflections after the dynamic
test and after the impact drop test from the FE-model were compared to the equivalent
measurements from the experiments using a percentage error.

The method developed in this thesis is intended to work for all kinds of overhead guards, so the
general deformation behaviour of the overhead guard is also important. Therefore, the deflection
in all of the measurement points both after the dynamic drop and after the impact drop test, was
compared between the FE-model and the experiments using a mean absolute percentage error.
Furthermore, the FE-model and the results from the experiments were compared visually to see
how similar their behaviours were, especially looking at areas of large deformation and behaviour
around the bolted connections.
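The two comparison measures can be stated compactly as below; the deflection values in the example are made up for illustration and are not the thesis measurements.

```python
def percentage_error(simulated, measured):
    """Signed percentage error of a single value, e.g. the maximum deflection."""
    return 100.0 * (simulated - measured) / measured

def mean_absolute_percentage_error(simulated, measured):
    """Mean absolute percentage error (MAPE) over all measurement points."""
    terms = [abs((s - m) / m) for s, m in zip(simulated, measured)]
    return 100.0 * sum(terms) / len(terms)

# Illustrative deflections in mm (negative = downwards), not the thesis data
sim = [-13.5, -14.1, -15.6]
exp = [-13.0, -12.0, -13.0]
err_max = percentage_error(min(sim), min(exp))   # compare largest deflections
mape = mean_absolute_percentage_error(sim, exp)
```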

4 Results
4.1 Drop Test
Table 4 shows that the top of the overhead guard was not perfectly parallel to the floor because
of the way it was mounted, using a 100 kg weight at the outermost edge to avoid play in the
mounting area. Furthermore, the deflection was smallest close to the legs and largest at the
outermost edge. One can also see, by looking at the middle measurement points (points 7 to 12),
that the deformation on the ribs was larger than on the frame. Also note that since no deflection
was larger than 19 mm, this prototype overhead guard passed the dynamic test according to both
the ISO and the ANSI standards.

Table 4: Displacement results for dynamic test after all ten cubes

Point Pre test [mm] Post test [mm] Deflection [mm]


1 1867 1866 -1
2 1867 1864 -3
3 1866 1864 -2
4 1867 1864 -3
5 1867 1864 -1
6 1868 1867 -1
7 1862 1856 -6
8 1862 1853 -9
9 1862 1852 -10
10 1862 1853 -9
11 1862 1855 -8
12 1863 1857 -6
13 1857 1846 -11
14 1856 1844 -12
15 1857 1845 -12
16 1857 1846 -11
17 1858 1846 -12
18 1858 1849 -9
19 1868 1867 -1
20 1868 1867 -1
21 1867 1865 -2
22 1867 1865 -2

Table 5 shows that the deformation followed the same pattern as after the dynamic test but with
greater magnitude. The smallest deflection was found near the legs, and the largest deflection at
the outermost edge. Note that had the overhead guard been mounted on a Toyota SPE 1.2 ton
forklift, the prototype would have passed the impact test in both the ANSI and the ISO standards.

Table 5: Displacement results of impact drop test. Total deflection is the sum of the deflection
from the dynamic and impact test

Point Pre test [mm] Post test [mm] Deflection [mm] Total Deflection [mm]
1 1866 1853 -13 -14
2 1864 1852 -12 -15
3 1864 1851 -13 -15
4 1864 1852 -12 -15
5 1866 1853 -13 -14
6 1867 1853 -14 -15
7 1856 1823 -33 -39
8 1853 1820 -33 -42
9 1852 1818 -34 -44
10 1853 1819 -34 -43
11 1855 1821 -34 -42
12 1857 1822 -35 -41
13 1846 1794 -52 -63
14 1844 1785 -59 -71
15 1845 1786 -59 -71
16 1846 1786 -60 -71
17 1846 1786 -60 -72
18 1849 1794 -55 -64
19 1867 1862 -5 -6
20 1867 1861 -6 -7
21 1865 1849 -16 -18
22 1865 1848 -17 -19

Next, when inspecting the overhead guard after the experiments, it was found that there was no
significant deformation of the legs, except in the thin part highlighted in Fig. 24a. Fig. 24b
shows a picture of the deformation in this area after the experiments. It is clear where the
deformation starts. The angle α in Fig. 24b was 2.4° on the right side and 2.2° on the left side.

(a) Leg with the deformed area highlighted
(b) Picture showing the deformation of the leg after the experiment

Figure 24: Deformation of overhead guard seen post testing

Furthermore, the outermost screws connecting the legs to the frame assembly were significantly
deformed after the impact drop test, see Fig. 25, while the shear pins in the same bolt connections
showed no significant deformation. Fig. 26 shows that the part of the frame in contact with the
legs had bent upwards due to a hinge-like behaviour around the shear pin. Lastly, during the
visual inspection of the overhead guard after the experiments, no fractures or cracks could be
found on the overhead guard.

Figure 25: Picture showing one of the deformed screws that connected the frame and legs

Figure 26: Picture showing the frame having been bent upwards

The prototype overhead guard in its deformed state after the drop tests is seen in Fig. 27.

Figure 27: The prototype overhead guard post testing

4.2 Tensile Tests
4.2.1 Conventional Tests
Figure 28 together with Tab. 6 shows the results from the conventional tensile testing in engineering
measures and the obtained standard mechanical properties. Although not clear from Fig. 28, the
material clearly exhibited an upper and a lower yield limit; hence the values given for the yield
limit σy in Tab. 6 are for the lower yield limit.

Figure 28: Stress-strain curves from conventional testing

Table 6: Mechanical properties obtained from conventional tests

Specimen   σy [MPa]   E [GPa]   σu [MPa]   ϵu [-]   ϵfailure [-]


1 392.71 218.35 481.03 0.1634 0.3158
2 382.58 175.18 469.28 0.1702 0.3116
3 389.61 158.98 478.25 0.1720 0.3256
4 391.02 185.89 478.90 0.1640 0.3241
5 395.55 165.61 483.27 0.1674 0.3180
Mean 390.29 180.80 478.15 0.1674 0.3190

Figure 29 shows the results from the conventional testing in true measures. Figure 30 shows
the hardening behaviour of specimen 1 and the best obtained curve fit to it. Table 7 shows the
obtained hardening parameters for all 5 specimens as well as a measure of the conformity of the
model to the experimental data in the form of R-squared values. Lastly, Fig. 31 shows specimens
1-5 post-testing; note that the specimens show clear signs of necking and display ductile fractures.

Figure 29: Stress-strain curves from conventional testing in true measures

Figure 30: Power law curve fit on hardening behaviour of specimen 1

Table 7: Plastic hardening properties from conventional tests

Specimen Hardening modulus B [MPa] Hardening exponent n [-] R-squared [-]


1 581.72 0.6386 0.9926
2 579.89 0.6481 0.9918
3 616.76 0.6681 0.9904
4 591.13 0.6526 0.9915
5 613.98 0.6653 0.9904
Mean 597.90 0.6545 0.9913

Figure 31: The fractured test pieces, specimens 1-5, which were tested conventionally
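The conversion from engineering to true measures (Fig. 29) and a power-law fit of the kind shown in Fig. 30 can be sketched as follows. Fitting (σ − A) = B·εₚⁿ by linear regression in log space is an assumed implementation detail, and the synthetic data below merely reuse the mean parameters of Tab. 6 and Tab. 7 for illustration.

```python
import numpy as np

def to_true_measures(eng_strain, eng_stress):
    """Engineering -> true stress/strain (valid only up to necking)."""
    return np.log(1.0 + eng_strain), eng_stress * (1.0 + eng_strain)

def fit_power_law_hardening(plastic_strain, true_stress, A):
    """Fit (sigma - A) = B * eps_p**n via linear regression in log space:
    ln(sigma - A) = ln(B) + n * ln(eps_p). Returns (B, n)."""
    n, ln_B = np.polyfit(np.log(plastic_strain), np.log(true_stress - A), 1)
    return float(np.exp(ln_B)), float(n)

# Synthetic hardening data generated with the mean parameters of Tab. 7
A = 390.29                                   # lower yield limit [MPa]
eps_p = np.linspace(0.01, 0.15, 50)          # plastic strain range before necking
sigma = A + 597.90 * eps_p ** 0.6545         # ideal power-law response
B_fit, n_fit = fit_power_law_hardening(eps_p, sigma, A)
```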

4.2.2 Tests at Varying Strain Rate


Figure 32 together with Tab. 8 shows the results of testing at varying strain rates. Figure 32 dis-
plays the stress-strain behaviour in the plastic region up until shortly before necking was initiated.
Table 8 contains the obtained standard mechanical properties, excluding Young’s modulus. Note
that specimens 9 and 14 deviated from the overall pattern of hardening with increasing strain-rate.

Figure 32: Stress-strain response at varying strain-rates

Table 8: Mechanical properties obtained from testing at varying strain-rates

Specimen   σy [MPa]   σu [MPa]   ϵu [-]   ϵfailure [-]


6 394.21 475.36 0.1739 0.3209
7 395.35 478.32 0.1758 0.3172
8 401.28 485.47 0.1769 0.3227
9 400.96 483.49 0.1614 0.3043
10 408.54 491.48 0.1632 0.3012
11 412.14 495.83 0.1592 0.2905
12 418.84 501.09 0.1586 0.3035
13 435.00 507.02 0.1608 0.3103
14 430.00 501.19 0.1714 0.3092
Mean 410.70 491.03 0.1668 0.3089

Figure 33 shows the data used to determine the strain rate sensitivity parameter C for the Johnson-
Cook model, and the curve fit resulting from using linear regression. The value of C was found to
be 0.0103, where the R-squared value of the linear fit was 0.9501.

Figure 33: Data used to determine the parameter C
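A minimal sketch of such a regression is given below, assuming the standard Johnson-Cook rate form σ/σ_ref = 1 + C·ln(ε̇/ε̇₀) and a least-squares fit forced through the origin (the ratio is 1 at the reference rate by definition); whether the thesis fit was constrained this way is an assumption. The data points are synthetic, not the measured values behind Fig. 33.

```python
import math

def fit_rate_sensitivity(strain_rates, stress_ratios, eps_rate_0):
    """Least-squares slope C of (stress_ratio - 1) versus ln(rate/rate0),
    forced through the origin. stress_ratios are dynamic flow stress divided
    by the flow stress at the reference rate, at the same plastic strain."""
    x = [math.log(r / eps_rate_0) for r in strain_rates]
    y = [s - 1.0 for s in stress_ratios]
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Synthetic data generated with C = 0.0103 and eps_rate_0 = 2.5E-4 1/s
rate0 = 2.5e-4
rates = [1e-3, 1e-2, 0.1, 0.139]
ratios = [1.0 + 0.0103 * math.log(r / rate0) for r in rates]
C = fit_rate_sensitivity(rates, ratios, rate0)
```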

Lastly, Fig. 34 shows specimens 6-14 post testing. Observe that the specimens show clear signs of
necking, and display ductile fractures.

Figure 34: The fractured test pieces, specimens 6-14, which were tested at varying strain rates

4.3 Basic FE-modeling
Table 9 shows the resulting displacement errors compared to the physical drop test when using the
basic FE-model. Additionally, Tab. 10 shows the displacements of all measurement points when
using the basic FE-model. Again, the errors are compared to the physical drop test. Note that the
basic FE-model grossly overpredicted the maximum displacements both after the cubes and after
the timber.
Table 9: Displacement error results from the basic FE-model

Error of mean deflection  Error of max deflection  Error of mean deflection  Error of max deflection
after cubes [%]           after cubes [%]          after timber [%]          after timber [%]
259                       473                      70.9                      141

Table 10: Displacement results from the basic FE-model

Point Deflection after cubes [mm] Error [%] Deflection after timber [mm] Error [%]
1 -1.03 3.77 -13.5 -2.87
2 -3.79 26.3 -14.1 -5.47
3 -6.07 203 -15.6 4.05
4 -6.25 108 -15.7 5.10
5 -3.87 287 -14.2 1.58
6 -0.968 -3.14 -13.4 -10.6
7 -25.9 332 -76.4 96.1
8 -37.7 319 -91.8 118
9 -39.9 299 -93.8 113
10 -38.6 329 -92.4 114
11 -37.3 367 -90.6 115
12 -26.9 349 -76.2 85.9
13 -55.6 405 -149 136
14 -66.4 454 -173 144
15 -67.2 460 -173 144
16 -68.0 518 -173 144
17 -68.7 473 -173 141
18 -58.6 551 -149 133
19 -0.492 -50.7 -7.35 22.6
20 -0.394 -60.5 -7.09 1.37
21 -1.17 -41.2 -17.0 -5.30
22 -1.12 -43.6 -16.8 -11.3

4.4 Advanced FE-modeling


Observe that investigations undertaken in the advanced FE-modeling were performed mostly on
the shell model due to its much shorter simulation time. In certain cases, it was deemed that it would
be inappropriate to extrapolate conclusions from a shell model onto a solid model, and in those
cases, investigations were performed on the solid model instead. Moreover, when investigating
numerical FE-aspects rather than the improvement of the results towards the physical drop test,
only one cube drop was simulated in an effort to save time.

4.4.1 Mesh Sensitivity Analysis


Table 11 and Tab. 12 show the results from mesh sensitivity analyses on the shell model, and on
the solid model, respectively. Note that the changes are absolute values. It should also be noted
from Tab. 11 and Tab. 12 that computational cost clearly increased for both the shell model
and the solid model, and that the increase is significant for both levels of mesh refinement. Also
evident from said tables is that the displacements in the solid model did not change as much when
going from the normal mesh to the fine one as they did in the shell model.

Table 11: Mesh sensitivity analysis on the shell model. Changes relate to the previous value, e.g.
Coarse to Normal

Mesh    No. of elements     Target element  Change in mean    Change in max     Increase in
level   in overhead guard   size [mm]       displacement [%]  displacement [%]  computational cost [%]
Coarse  7146                11.3            -                 -                 -
Normal  14724               8.00            75.5              28.5              58.9
Fine    28702               5.66            36.1              11.6              127

Table 12: Mesh sensitivity analysis on the solid model. Changes relate to the previous value

Mesh    No. of elements     Target element  Change in mean    Change in max     Increase in
level   in overhead guard   size [mm]       displacement [%]  displacement [%]  computational cost [%]
Coarse  93722               5.04            -                 -                 -
Normal  203752              4.00            20.4              1.62              95.2
Fine    396488              3.17            4.93              0.15              110

4.4.2 Element Formulation


Table 13 shows the results from investigating the effect of varying the number of through-thickness
integration points. Note that the changes are absolute values. From Tab. 13 it should be noted
that the increase in computational cost with an increased number of through-thickness integration
points is relatively low compared to the change in displacement.

Table 13: Investigation of through-thickness integration points. Changes relate to the previous
value

Points  Change in mean    Change in max     Increase in
        displacement [%]  displacement [%]  computational cost [%]
1       -                 -                 -
3       99.5              98.7              7.8
5       11.0              3.73              1.4
9       3.82              1.06              3.5

Table 14 and Tab. 15 show the effects of using under integrated or fully integrated shell elements,
and solid elements, respectively. Examining Tab. 14, and Tab. 15, the reader should realize that
the increase in computational cost was low relative to the change in displacements for the shell
model, while the increase in computational cost was significant in relation to the small change in
displacements for the solid model.

Table 14: Investigation of fully integrated shell elements. Changes relate to the previous value

Integration  Change in mean    Change in max     Increase in
points       displacement [%]  displacement [%]  computational cost [%]
1            -                 -                 -
2x2          24.4              10.3              43.1

Table 15: Investigation of fully integrated solid elements. Changes relate to the previous value

Integration  Change in mean    Change in max     Increase in
points       displacement [%]  displacement [%]  computational cost [%]
1            -                 -                 -
2x2x2        2.48              1.80              119

4.4.3 Time Step Control
Table 16 and Tab. 17 show the results of utilizing mass-scaling on the shell model, and the
solid model, respectively. Note that the changes are absolute values, and that the decrease in
computational cost was significantly higher than the change in displacements for both models.

Table 16: Results from mass-scaling on shell model where changes relate to the previous value

Mass     Critical       Change in mean    Change in max     Decrease in
scaling  time step [s]  displacement [%]  displacement [%]  computational cost [%]
No       2.06E-7        -                 -                 -
Yes      8.00E-7        3.18              4.05              73.0

Table 17: Results from mass-scaling on solid model where changes relate to the previous value

Mass     Critical       Change in mean    Change in max     Decrease in
scaling  time step [s]  displacement [%]  displacement [%]  computational cost [%]
No       1.03E-7        -                 -                 -
Yes      3.50E-7        4.79              1.06              70.2

4.4.4 Connections
Table 18 and Tab. 19 display the errors compared to the physical drop test when taking different
approaches to modeling connections in the shell model, and the solid model, respectively. Figure
35 shows that when using the RBE2 & beam approach, the FE-model mimics the behaviour of
the physical prototype.

Table 18: Results from investigation of connections in the shell model

Connection type  Mean error after  Error of max deflection  Mean error after  Error of max deflection
                 cubes [%]         after cubes [%]          timber [%]        after timber [%]
RBE2             102               170                      76.7              113
RBE2 & Beam      89.8              138                      68.5              114

Figure 35: The FE-model showing the behaviour of the frame having been bent upwards when
RBE2 & beam was used

Table 19: Results from investigation of connections in the solid model

Connection type  Mean error after  Error of max deflection  Mean error after  Error of max deflection
                 cubes [%]         after cubes [%]          timber [%]        after timber [%]
RBE2 & beam      45.0              47.2                     52.6              92.0
Solid shear pin  42.6              41.4                     42.7              69.4

4.4.5 Contact Modeling
Table 20 shows the error compared to the experimental results when using different contact def-
initions. Table 21 displays the comparative error when using different friction coefficients for the
contact between the loads, and the overhead guard.

Table 20: Results from investigation of contact definition

Contact     Contact between parts  Mean error after  Error of max deflection  Mean error after  Error of max deflection
definition  of overhead guard      cubes [%]         after cubes [%]          timber [%]        after timber [%]
TYPE24      No                     120               209                      67.9              119
TYPE25      No                     126               219                      70.2              119
TYPE25      Yes                    89.8              138                      68.5              114

In the method, it was outlined that the idea was to test varying friction coefficients both for the
friction between loads and the overhead guard, as well as in between the structural parts of the
overhead guard. This turned out to be difficult to realize, however, as the higher friction
coefficients between the structural parts caused problems with damping the overhead guard. As such, the
results presented in Tab. 21 are for the friction between loads and the overhead guard, where the
friction coefficient between the parts of the overhead guard was kept at 0.2 in all analyses.

Table 21: Results from investigation of friction coefficient

Friction     Mean error after  Error of max deflection  Mean error after  Error of max deflection
coefficient  cubes [%]         after cubes [%]          timber [%]        after timber [%]
0.2          89.8              138                      68.5              114
0.5          69.7              95.8                     55.1              89.7
0.85         71.8              99.7                     55.5              102
1.2          74.2              101                      55.2              106

4.4.6 Load Modeling


Table 22 and Tab. 23 show the error compared to the experiment when modeling the cubes and
the timber load with 10 mm radii, in the shell model and the solid model, respectively.

Table 22: Results from investigation of radii on cubes in the shell model

Radii  Mean error after  Error of max deflection
       cubes [%]         after cubes [%]
No     92.2              145
Yes    89.8              138

Table 23: Results from investigation of radii on timber in the solid model

Radii  Mean error after  Error of max deflection
       timber [%]        after timber [%]
No     52.6              92.1
Yes    55.8              73.7

4.4.7 Material Modeling


Table 24 displays the error compared to the physical drop test for the material model setups detailed
in the method chapter. Note that all the models predicted too large displacements. Figure 36 shows
how well material models 2, and 3 compare to the stress-strain response of specimen 1 from the
conventional tensile testing.

Table 24: Results from investigation of material modeling

Material Mean error Error of max deflection Mean error after Error of max deflection
model after cubes [%] after cubes [%] timber [%] after timber [%]
1 218 376 77.9 150
2 131 224 75.3 132
3 143 247 78.0 138
4 89.8 138 68.5 114

Figure 36: Comparison between material models 2, and 3

4.5 Final Model


Based on the findings from all the analyses above, a final model was constructed with the following
settings:

• Solid model with the normal mesh level
• Under-integrated first-order elements
• ≤ 2 % mass scaling with a critical time-step of 3.5E-7 s
• Solid shear pins for connections
• TYPE25 contact keyword including contact between parts of the overhead guard
• Friction coefficient of 0.2 between structural parts
• Friction coefficient of 0.5 between loads and overhead guard
• 10 mm radii on cubes, and no radii on timber
• Material model no. 4

These settings were chosen after careful consideration of aspects such as accuracy, computational
cost, and modeling reasonableness. The reasoning behind this is discussed at length in the next
chapter. The results from using these settings are shown in Tab. 25 and Tab. 26, in a fashion
analogous to the results from the basic FE-model. Said tables show that the errors improved
dramatically compared to using the basic FE-model.

Table 25: Displacement error results from the Final FE-model

Error of mean deflection  Error of max deflection  Error of mean deflection  Error of max deflection
after cubes [%]           after cubes [%]          after timber [%]          after timber [%]
42.6                      41.4                     42.7                      69.4

Table 26: Displacement results from the Final FE-model

Point Deflection after cubes [mm] Error [%] Deflection after timber [mm] Error [%]
1 -0.715 -28.4 -9.32 -33.3
2 -2.47 -17.5 -10.5 -29.8
3 -4.20 110. -11.7 -21.4
4 -4.04 34.7 -11.6 -22.3
5 -2.41 141 -10.4 -25.4
6 -0.730 -26.9 -9.25 -38.3
7 -5.49 -8.43 -53.7 37.8
8 -9.04 0.452 -61.8 47.2
9 -12.0 20.7 -64.5 46.6
10 -12.1 34.9 -64.4 49.9
11 -10.2 28.5 -62.1 47.9
12 -7.27 21.1 -53.9 31.7
13 -11.1 1.10 -104. 65.3
14 -14.2 18.7 -121. 71.6
15 -15.4 29.1 -121. 71.1
16 -16.3 49.0 -121. 71.2
17 -16.9 41.3 -122. 69.4
18 -14.9 65.8 -104. 63.1
19 -0.356 -64.3 -5.74 -4.30
20 -0.380 -61.9 -5.78 -17.4
21 -0.669 -66.5 -11.6 -35.0
22 -0.689 -65.5 -11.5 -39.4

Figure 37 presents the displacements of the 22 measurement points over the course of the simula-
tion. Note that the overhead guard was in a state of rest before each impact, and that there are
essentially no unnecessarily long periods of inactivity. The simulation took roughly 20 hours to
finish, and this was a significant reduction from about 70 hours when no mass-scaling was used.
These calculations were performed using a setup of 32 cores.

Figure 37: Displacement in the y-direction over the course of the simulation with the final model

Figure 38 shows the behaviour of the fundamental model energies during the course of the
simulation. Note that the energies' behaviours are consistent with the desired behaviours, as
detailed in the theory chapter. In particular, the total energy only increases when a new load is
introduced, and energy is only removed during contact and/or damping. Moreover, when looking at
Fig. 38c, excluding the effect of external work due to contact and damping, the total energy stays
constant except when a new load is introduced.

(a) Cube drops

(b) Timber drop (c) Total energy

Figure 38: Model energies during simulation with the final model

4.6 Extra Material Model


Table 27 shows the results of testing the extra material model. Note that this is the only result
where too small displacements were predicted.

Table 27: Displacement error results from the extra material model

Error of mean deflection  Error of max deflection  Error of mean deflection  Error of max deflection
after cubes [%]           after cubes [%]          after timber [%]          after timber [%]
43.1                      -24.9                    36.2                      43.2

5 Discussion
5.1 Drop Test
The first thing to realize regarding the physical drop test is that it is associated with a certain
degree of randomness, making the results not entirely replicable. This means that no matter how
good an FE-model one creates, there will always be a slight discrepancy between it and the
physical case. To clarify, this randomness arises for various reasons. One such reason is how the
impacting cubes and timber loads are positioned and oriented before their release. The loads are
positioned manually with the help of a laser measuring device and lifted into place with a forklift,
where they are suspended from its forks via a release mechanism. The crudeness of this method
causes the loads to rotate slightly from the desired orientation of being at right angles with the
side of the overhead guard. Additionally, due to the nature of physical testing, there is an
acceptable margin of error in the test procedure; for example, should the drop height deviate by
a few millimetres, it would still be considered acceptable.

Moreover, the impact behaviour of the loads is slightly chaotic, i.e. minor differences in drop
conditions and in the state of the overhead guard will yield large differences in the outcome. As
an example, post impact, the loads might bounce up and strike the overhead guard once or twice
more. In another instance, the load might bounce off to the side of the overhead guard and get
swung back, thus striking the overhead guard from the side. Understandably, this randomness
makes it nearly impossible to fully capture the behaviour of the physical tests in an FE-simulation.
Arguably, this randomness leads to a discrepancy in behaviours; however, since any possible
secondary or tertiary impacts have an energy level much lower than the first strike, they should
not lead to major differences in plastic deformation.

Regarding the rig setup, it was seen during the experiment, and when viewing high-speed footage
of it, that the rig was not completely rigid. Instead, it shook back and forth with high frequency,
primarily in the x-direction. Furthermore, the rig was placed on rubber pads to approximate
wheels, and it is possible that these absorbed some energy during testing through elastic
deformation in the y-direction.

5.2 Tensile Tests


The main drawback with the material testing in this thesis was the highest achievable strain-rate
during testing. The highest strain-rate that could be tested was 0.139 s−1; meanwhile, the
FE-simulations showed that the overhead guard experienced strain-rates upwards of 260 s−1.
This is more than 1000 times higher than during the tensile tests, which is far from ideal.
The Johnson-Cook model is meant to extrapolate strain-rate sensitivity data to a certain extent,
however, extrapolating as far as 1000 times higher surely causes inaccurate results. Moreover,
as mentioned in the theory chapter, the degree of strain-rate sensitivity varies nonlinearly with
strain-rate, meaning that the linear extrapolation becomes even more inaccurate. The strain-rate
sensitivity’s dependence on strain-rate is often wrongfully neglected and can be a major cause of
inaccuracy [32]. From studying relevant literature testing strain-rate sensitivity of similar materials
at higher strain-rates, it was found that the value of the parameter C was higher in those cases
[30, 33].

Concerning the values of Young's modulus and yield stress that were determined by the
conventional tensile testing, they both differ from what was expected. First, the mean yield stress
of 390 MPa is almost 10 % higher than the material's specified yield strength of 355 MPa. This
is, however, not necessarily a cause for concern, since suppliers of materials face
demands on the lowest permissible strength of the material. In effect this means that when deliv-
ering a steel with a yield strength of 355 MPa, the supplier is allowed to use a steel with higher
strength than specified but not lower. Specimen 1 showed a slightly higher than expected, albeit
reasonable, Young's modulus, whereas the other four specimens displayed lower values of Young's
modulus than what is expected from a common steel, typically somewhere in the region of
190-210 GPa.

A possible reason for the unexpectedly low values of Young’s modulus is the extensometer which
was used. The extensometer had seen much use prior to the testing for this thesis, and as such
it is conceivable that it was a source of inaccuracy when determining Young's modulus for the
different specimens. Regarding the obtained yield stresses, their determination is not so dependent
on the accuracy of the extensometer, and therefore the authors have more confidence in their
correctness. Moreover, the obtained strain hardening parameters B and n were deemed to be
reasonable given that they are in line with the findings in [30].

5.3 Basic FE-Modeling


The simulation conducted with the basic FE-model yielded displacements several times larger than
the results from the physical drop test, as seen in Tab. 9 and Tab. 10. Interestingly, the largest
errors were found in the displacements after the ten cubes; the errors after the timber load,
although large, were still much lower. The large errors should come as no surprise given that the
basic model lacked several important features.
The most obvious drawback of the model was its use of shell elements, even though all parts of
the overhead guard except the ribs were unsuitable for a shell formulation. This is due to the
geometry of the parts: the ratio of their thicknesses to their other dimensions was much too
large, as discussed in the theory chapter. Another issue with the basic model
was the way that midsurfaces were extended into each other, thereby moving the moment point
between the parts to a nonphysical location, as mentioned in the theory chapter.

Furthermore, the basic model did not account for strain-rate sensitivity in its material model, which
was later found to be a phenomenon of utmost importance. Additionally, the data of mechanical
properties that were used in the basic model proved to be a cause of inaccuracy, given the results
from the material testing. Regarding contact definitions, the basic model did not take into account
contact between the parts of the overhead guard, which has to be seen as a flaw. Intuitively, in the
physical reality, the parts of the overhead guard do come into contact with each other and friction
does play a role.

Lastly, looking at the results of the mesh-sensitivity analysis performed on the advanced shell
model, it was clear that the mesh employed in the basic model was insufficiently fine. To clarify,
the mesh used for the basic model is the same mesh that was denoted as the normal mesh in the
mesh-sensitivity analysis, and the results from using it could obviously not have converged.

5.4 Advanced FE-Modeling


Firstly, regarding the mesh sensitivity analysis, it was clear from Tab. 11 and Tab. 12 that
neither the shell mesh nor the solid mesh results could be said to have converged. For the solid
mesh, however, the results were close to converging, and one has to consider the dramatic 110 %
increase in computational cost which accompanies the minor 5 % change in mean displacement.
Moreover, when the 110 % increase in cost is put in relation to the change in maximum
displacement of barely 0.15 %, the cost-to-benefit ratio of employing the fine mesh must be
considered. Doing so, the authors deem that this consideration favours the normal mesh.

Looking at Tab. 13, unsurprisingly, the use of a single through-thickness integration point yields
wildly unphysical behaviour, since the element formulation degenerates into a membrane. Moreover,
it is clear that when using a shell mesh, one should not shy away from a high number of
integration points out of fear of increasing computational cost: the results demonstrate that a
significant improvement in accuracy can be achieved with only a minor increase in computational cost.

Moving on to the investigation of fully integrated elements, the difference between shell elements
and solid elements is striking. Tab. 14 shows that a significant improvement in accuracy could
be achieved along with a reasonable increase in computational cost. Looking at Tab. 15, however,
the same cannot be said for solid elements, where a dramatic increase in computational cost was
needed to improve the accuracy of the results only slightly. The conclusion from all this has to be
that when using shell elements, it is worth employing fully integrated elements. Doing so, one
has to be aware that the risk of shear locking is introduced, and this then has to be managed
somehow. When it comes to solid elements, it is instead recommended to use under-integrated
elements, since the saving in computational cost from avoiding full integration far outweighs the
minor loss in accuracy.

Onwards to the topic of mass-scaling, where the investigation clearly shows the merit of time-step
control. Table 16 and Tab. 17 show that mass-scaling achieved a dramatic decrease in compu-
tational cost for both the shell model and the solid model. In both cases, any objections over
the changes in displacements are clearly outweighed by the saving in computational cost. Here, the
mass scaling was limited to 2 %, and it is paramount to understand the risk that accompanies the
possibility of such dramatic reductions in computational cost. As mentioned in the theory chapter,
when using mass-scaling, one should take care to ensure that no significant amount of mass is
added to highly dynamic regions of the model.
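The mechanism behind mass-scaling can be sketched from the CFL condition: the critical time step scales with the square root of the density, so mass is added to the smallest elements until they meet a target time step. A minimal one-dimensional sketch, where the element length, material values, and target step are illustrative assumptions:

```python
import math

def critical_dt(length, E, rho):
    """1D CFL estimate: dt_crit = L / c, with wave speed c = sqrt(E/rho)."""
    return length / math.sqrt(E / rho)

def mass_scale_factor(length, E, rho, dt_target):
    """Density factor needed for the element to be stable at dt_target.

    Since dt_crit = L * sqrt(rho/E), we need rho_new/rho = (dt_target/dt_crit)^2.
    Elements already stable at dt_target need no scaling (factor 1.0).
    """
    dt = critical_dt(length, E, rho)
    return max(1.0, (dt_target / dt) ** 2)

E, rho = 210e9, 7850.0           # generic steel values
dt = critical_dt(1e-3, E, rho)   # a 1 mm element: roughly 2e-7 s
factor = mass_scale_factor(1e-3, E, rho, 2.0 * dt)  # doubling dt quadruples mass
```

The quadratic relation is exactly why mass-scaling is so effective, and also why it is risky: a modest increase in the allowed time step requires a disproportionately large mass addition in the smallest elements.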

When it comes to the connections between the frame and the legs, upgrading the RBE2 definitions
of the basic model to also include a beam element connecting two separate RBE2 definitions on
the leg and frame, respectively, proved to be beneficial. Fig. 35 shows the resulting displacement
after the impact of the timber load, and comparing it to Fig. 26, it is clear that the FE-model
was able to capture the same behaviour as was seen with the physical prototype after testing. In
addition to this, Tab. 18 also shows that modeling the connections with RBE2 MPCs and beam
elements gave more accurate results when looking at the results of the physical drop test than the
previous modeling approach. Interestingly, the improvement in accuracy was higher when looking
at the results after the cube drops than when looking at the results after the timber drop. Using
the solid FE-model, another approach that was investigated was to model the shear pins with
solid elements, and as the results in Tab. 19 show, this turned out to give improved accuracy,
primarily for the timber drop. The improvement in results is believed to
arise from the fact that with this approach, more mass and stiffness is added to a critical region
of the model, which naturally should lead to lower displacements. This improvement in accuracy
has to be placed in relation to the increased modeling complexity however. The geometry of the
shear pins along with the plate they were attached to meant increased difficulty when meshing,
and some elements were created that were smaller than the elements in the surrounding meshes.

Regarding the investigation into contact definitions, it is evident from Tab. 20 that the switch
from the TYPE24 keyword to the TYPE25 keyword did not cause any major differences. This
was expected since the TYPE25 contact definition was described as a further development of the
TYPE24 contact definition. The TYPE25 contact definition proved to give slightly worse results
than the previous TYPE24 contact definition when contact between parts of the overhead guard
was omitted. Taking contact between structural parts into account however, the TYPE25 keyword
proved to give results of significantly higher accuracy when looking at the displacements after the
cube drops. Interestingly, however, the same improvement was not seen in the displacements after
the timber load. This would seem to indicate that in the presence of such a heavy load, the
friction between structural parts becomes almost negligible.

Furthermore, in the study of contact modeling, the friction coefficient was varied, and it showed
that the results were not very sensitive as long as the friction coefficient was kept in the interval
between 0.5 and 1.2. Table 21 indicates that a major improvement in accuracy was achieved when
going from the previous value of 0.2 to 0.5. Since a friction coefficient of 0.5 proved to give the
best results, the takeaway was to use this value in the final analysis. Regarding the failure to
simulate higher friction between the structural parts of the overhead guard, the authors are unsure
of the cause, but consider it likely that the damping scheme struggled because the higher friction
coefficient introduced a more discontinuous stick-slip behaviour. Unfortunately, due to time
limitations, investigating this further was not possible.

Now, about the modeling of the loads, Tab. 22 and Tab. 23 clearly show that the addition of 10
mm radii to the sides of the cubes and the timber generally proved beneficial in terms of simulation
accuracy. For the cubes, adding radii unequivocally improved accuracy; however, for the timber
load, although an improvement was seen in regards to the error of max displacement, the opposite
effect was observed when looking at the mean error. In the case of the cubes, adding these radii
made them more similar to the physical cubes used in the experiment on the overhead guard pro-
totype. In the case of the timber load however, these radii are somewhat nonphysical in that the
real-life timber load did not have such pronounced fillets on its edges, even if its edges had some
form of radius. On the other hand, modeling the timber load with exactly 90° corners is also somewhat
nonphysical given how such corners are likely to be treated numerically in a contact algorithm. It
was previously mentioned in the theory chapter around FE-modeling of contact conditions that
sharp corners should be avoided since they can cause serious problems in the numerical treatment
of contact mechanics.

The reason for adding the radii to the timber in the first place was that it was sometimes observed
when simulating the impact of the timber load in the solid model that elements in close vicinity
to where the timber’s edges impacted, would become distorted. The reason for this was likely the
high contact pressures, both in the normal, and the tangential direction that were observed for
those elements at the time of impact. When examining the same region of the model, after the
timber load had been modified to include 10 mm radii, only minor distortions of elements were
observed, and the contact pressures were substantially lower. It should be said that this phe-
nomenon is believed to have a mostly local effect, and not cause any major difference in the global
displacement behaviour of the model. The addition of radii to the timber is not unproblematic,
however: since the timber in the physical experiment did not have these radii, the simulated load
case differs slightly in that the impact area is made smaller. In addition, the impact from the
timber becomes more concentrated further out in the x-direction of the overhead guard, causing
a more severe load case. Given all this, the authors believe that an improvement in numerical
aspects surrounding contact mechanics is gained from adding radii to the loads due to the inherent
difficulty of contact algorithms to handle sharp corners. However, for the reasons listed above, it
was decided to only use radii on the cubes, and to model the timber load without radii, as possible
improvements in the contact algorithm would be offset by the unphysicality of the load case.

To the topic of material modeling, examining Tab. 24 one sees that model no. 1 sticks out, and
that it predicted much too large displacements compared to the others. One also notes that the
material models predict more similar displacements after the timber load than after the cubes.
The major difference between material model 1 and models 2 and 3 is that
model 1 had a lower yield strength, and a higher Young’s modulus. Apart from that, its prediction
of strain hardening was quite similar to those of models 2 and 3. Material models 2 and 3 show
that no major difference was achieved when calculating the strain-hardening parameters manually
compared to letting RADIOSS decide them automatically. However, when one looks at Fig. 36,
it becomes clear that Tab. 24 falsely indicates that the simplified input of model 2 would be
superior to what was calculated manually. Said figure shows that model 2 overpredicts the strain
hardening at the beginning of the curve, thereby leading to smaller displacements and causing the
misleading results. It is evident from Fig. 36 that material model 3 constituted a much better
representation of the tensile test results. Lastly, it is clear from the results of model 4 that taking
strain-rate sensitivity into account made a major difference to the accuracy of the results. This
proves that the effect of the material’s strain-rate sensitivity cannot be ignored when undertaking
simulations of the sort presented in this thesis.

Lastly, regarding material modeling, this type of large deformation elastoplasticity analysis is
sensitive to differences in material parameters, for example yield strength. It is therefore important
to ensure the validity of material parameters. One has to understand that differences between
material batches might be an unavoidable source of error since it might not be feasible to directly
test the material of every overhead guard.

5.5 Final Model
As a start, it is clear when comparing the results in Tab. 25 to those of the basic FE-model in
Tab. 9 that a tremendous increase in accuracy was achieved with the final model. One also notes
that although the error in mean displacement is similar after the cubes and after the timber
drop, the error in max displacement is smaller after the cubes than after the timber drop. It is
worth noting that the mean error can be somewhat misleading, since all measurement points carry
the same weight in its calculation. This means that at points where the difference in displacements
is small in magnitude, such as points 19-22, the percentage error still becomes fairly high, skewing
the mean error somewhat.
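The skew described above can be illustrated with hypothetical numbers (the displacement values below are invented for illustration, not measurements from the drop test):

```python
def mean_pct_error(simulated, measured):
    """Unweighted mean of per-point percentage errors."""
    errors = [abs(s - m) / abs(m) * 100.0 for s, m in zip(simulated, measured)]
    return sum(errors) / len(errors), errors

# Three large-displacement points matched within a millimetre, plus one
# small-magnitude point (analogous to points 19-22) missed by the same 1 mm.
measured  = [50.0, 40.0, 30.0, 2.0]   # mm, hypothetical
simulated = [51.0, 41.0, 31.0, 3.0]   # mm, hypothetical

mean_err, errors = mean_pct_error(simulated, measured)
# The 1 mm miss at the 2 mm point alone contributes 50 %, pulling the
# mean far above the 2-3 % errors at the large points.
```

A magnitude-weighted error measure would avoid this effect, at the cost of hiding genuinely poor predictions at the small-displacement points.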

When looking at Fig. 37, it is evident that the final FE-model displays a satisfactory displacement
behaviour in terms of damping and simulation time. The sharp dips in the curves occur at the times
of impact from the cubes between 0 and 1.45 seconds, and at 1.6 seconds for the timber load. It is
clear that the damping was sufficient to bring the overhead guard to rest before each new impact,
which is necessary to ensure good results. The reader also notes that the first cube drop was given
more time than the subsequent drops, and this is because the damping scheme had a difficult time
damping the structure after the first impact, and it therefore proved necessary to provide more
simulation time to ensure that the structure would be brought to rest.

Now, analyzing the model energies shown in Fig. 38, the simulation is verified from an energy
standpoint: there are no large spikes or dips in energy, or other strange nonphysical behaviours.
It is also worth noting that the hourglass energy is practically absent, remaining well below the
highest acceptable value of 10 % of the internal energy.
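A sanity check of this kind can be sketched as follows; the energy histories are invented numbers, and the 10 % threshold is the rule of thumb cited above:

```python
def hourglass_acceptable(hourglass, internal, limit=0.10):
    """True if hourglass energy stays below `limit` times the internal
    energy at every sampled state (states with zero internal energy are
    skipped to avoid division by zero)."""
    return all(hg <= limit * ie
               for hg, ie in zip(hourglass, internal) if ie > 0.0)

# Hypothetical energy histories [J] sampled over the simulation
internal  = [0.0, 1200.0, 5400.0, 5400.0]
hourglass = [0.0,   10.0,   45.0,   48.0]

ok = hourglass_acceptable(hourglass, internal)  # well under the 10 % limit
```

In practice the same check would be run on the energy output curves of the solver rather than on hand-typed lists.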

It is the authors' belief that the improvement seen with the final FE-model is mainly due to the
following: it employed a solid hexahedral mesh, it accounted for strain-rate sensitivity, and it
used a higher yield strength after the previous one was found to be too low. Moreover, there was a
careful review of aspects relating to explicit FE-analysis, which provided a good trade-off between
numerical accuracy, and computational efficiency. Additionally, great care was taken in regards to
the spatial discretization of the problem.

Regarding the merits of the final model, it should be clarified that although it is a substantial
improvement from the previous case, it still provides a fairly conservative estimate of the physical
drop test. The model is better at predicting displacement results after the cubes than it is at
predicting the displacements after the timber load. This is good since the majority of overhead
guards that fail this kind of physical test do so because of too large displacements after the
cubes, as the demands placed on displacement after the timber load are much more lax. Lastly,
to improve the model's accuracy, it is believed that the only major hurdle is acquiring the strain-
rate sensitivity parameter C from a more representative strain-rate range, as is discussed extensively
throughout this chapter.

5.6 Extra Material Model


Looking at the results of the extra material model in Tab. 27, our concerns regarding the limitations
of the strain-rate sensitivity data proved justified. When using a value for the parameter C in the
Johnson-Cook model that had been obtained from testing in a more appropriate range of strain-
rates, the effect was so large as to produce too small displacements for the first time in all of
the analyses. The takeaway from this extra material model is that the dependence of strain-rate
sensitivity on strain-rate cannot be omitted, as doing so leads to major inaccuracies. However, one
should not draw too firm conclusions from the specific results of this material model on the problem at
hand since it is not certain that the authors of that paper tested exactly the same material as in
this thesis. It should also be noted that it is possible that the test setup in said paper resulted
in higher strain-rates than what occurs in this thesis. This is difficult to discern since that paper
never stated explicitly what strain-rates were tested.

5.7 Further Work
As mentioned previously, the rig was not completely rigid during the physical drop test, which
means that it could be worth looking into how to capture this in the modeling of the boundary
conditions. This could perhaps be done by modeling the whole rig or forklift body depending
on the situation. The authors believe that this might provide a minor increase to the modeling
accuracy, but not any major difference to the present methodology, except if the overhead guard
to rig mounting cannot be assumed to be completely rigid.

The limitations of the tensile testing conducted in this thesis have been made apparent. The authors
therefore recommend that tensile testing be performed at much higher strain-rates, which would
require servo hydraulic tensile testing machines capable of achieving test rates of the same magni-
tude as those experienced by the prototype in the physical drop test, i.e. strain-rates upwards of
260 s⁻¹. It is believed that obtaining the strain-rate sensitivity parameter C from such tests would
increase the accuracy of the developed FE-methodology dramatically. Furthermore, having such
material data would undoubtedly prove useful should Toyota Material Handling want to develop
FE-methodology for simulating other high strain-rate situations in the future.

Moreover, concerning the Johnson-Cook material model, given that strain-rate sensitivity varies
with strain-rate, one might want to investigate the use of the modified Johnson-Cook model, also
called the Huh-Kang model. This model fits a 2nd degree polynomial to the strain-rate sensitiv-
ity data by introducing a second strain-rate sensitivity parameter. This means that the model
is able to more accurately capture the strain-rate sensitivity over a wide range of strain-rates.
The Huh-Kang model is not yet implemented in RADIOSS, however, the FE-software LS-DYNA
supports it. The superiority of the Huh-Kang model over the conventional Johnson-Cook model
in characterizing strain-rate sensitivity over a large strain-rate range is supported by [31].
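The difference between the two rate-sensitivity forms can be sketched as below; the coefficient values are illustrative assumptions, not fitted parameters:

```python
import math

def rate_factor_jc(eps_rate, C, eps_rate_ref=1.0):
    """Johnson-Cook: linear in the logarithm of the strain-rate ratio."""
    return 1.0 + C * math.log(eps_rate / eps_rate_ref)

def rate_factor_huh_kang(eps_rate, C1, C2, eps_rate_ref=1.0):
    """Huh-Kang: adds a quadratic log term through a second parameter C2,
    letting the apparent sensitivity grow with strain-rate."""
    x = math.log(eps_rate / eps_rate_ref)
    return 1.0 + C1 * x + C2 * x ** 2

# With C2 > 0 the two forms agree at low rates but diverge at high rates,
# which is what fitting over a wide strain-rate range requires.
f_jc = rate_factor_jc(260.0, C=0.01)
f_hk = rate_factor_huh_kang(260.0, C1=0.01, C2=0.002)
```

This makes explicit why a single C fitted at quasi-static rates underestimates the hardening at 260 s⁻¹ when the true sensitivity increases with strain-rate.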

The work in this thesis has been heavily focused on constitutive modeling of metals, but the au-
thors want to stress that the rest of the methodology is still applicable to overhead guards of other
materials. In such cases, one would just need to supplement the methodology with appropriate
material models for the case at hand.

To fully validate the methodology proposed by this thesis work, it is desirable to test it on different
designs of overhead guards.

6 Conclusions
The authors conclude that the developed methodology is sound, and can be used as a powerful
tool throughout the entire design process. The methodology can be used to create FE-models
that simulate overhead guard impact tests to exclude bad designs early on in the design process,
and also at a later stage to refine and evaluate minor changes to a design. This should make
it possible to avoid building and testing excessive amounts of physical prototypes, and instead
reserve physical testing for the very last stage of a proposed design. This would of course mean a
dramatic reduction of lead time and cost in the development of new overhead guard designs. To
use this methodology to simulate a final norm test, however, improvements in modeling accuracy
and validation on several types of overhead guard designs are needed.

It was found that the matters of greatest importance to the merits of this methodology were:
material modeling that specifically took strain-rate sensitivity into account, employing a solid
hexahedral mesh, and thoroughly reviewing FE-aspects.

In this thesis, there has been a heavy focus on aspects relating to explicit FE-analysis. This has
been aimed at providing some of the fundamental knowledge needed to undertake these kinds of
analyses. It might seem daunting at first to get into explicit FE-analysis of large deformations, but
it is at the same time highly rewarding. It provides opportunities to simulate very complicated
mechanics, and serves as one of the most powerful tools in the CAE-toolbox.

One has to be aware that due to the inherent randomness of the experimental procedure, there
will always be some small, unavoidable level of inaccuracy.

As a final remark, throughout this thesis, many modeling aspects have been covered, and we, the
authors, believe that the only major remaining obstacle towards achieving a model of high
accuracy is to acquire strain-rate sensitivity data from higher, more representative strain-rate
ranges.

References
[1] Swedish Standards Institute. SS-ISO 6055:2004. SIS, edition 2.
[2] American National Standards Institute. ANSI/ITSDF B56.1-2020. ANSI.

[3] Swedish Standards Institute. SS-EN ISO 6892-1:2016. SIS, edition 2.


[4] Altair RADIOSS. RADIOSS Theory Manual. Altair Engineering, Inc, 2021.
[5] K. J. Bathe. Finite Element Procedures. Prentice Hall, Upper Saddle River, New Jersey
07458, 1996.

[6] A. J. M. Spencer. Continuum Mechanics. Dover Publications, Inc, Mineola, New York, 2004.
[7] R. Courant, K. Friedrichs, and H. Lewy. Über die partiellen Differenzengleichungen der
mathematischen Physik. Mathematische Annalen, Vol. 100, pp. 32-74, 1928. Accessed through
the German digital archive Digizeitschriften 2022-03-18;
http://www.digizeitschriften.de/dms/img/?PID=GDZPPN002272628&physid=phys36#navi.
[8] R. Courant, K. Friedrichs, and H. Lewy. On the Partial Difference Equations of
Mathematical Physics. Translated by Phyllis Fox 1956, AEC Computing Facility, Courant
Institute of Mathematical Sciences, New York University, 1956. Accessed through Internet
Archive 2022-03-18; https://archive.org/details/onpartialdiffere00cour/page/n1/mode/2up.

[9] G. Cocchetti, M. Pagani, and U. Perego. Selective Mass Scaling and Critical Time-Step
Estimate for Explicit Dynamics Analyses with Solid-Shell Elements. Computers &
Structures Vol.127, pp. 39-52, 2013.
[10] Altair University. Crash Analysis with RADIOSS, a Study Guide. Altair Engineering, Inc,
2015.

[11] Altair RADIOSS. RADIOSS Reference Guide. Altair Engineering, Inc, 2021.
[12] T. A. Burkhart, D. M. Andrews, and C. E. Dunning. Finite Element Modeling Mesh Quality,
Energy Balance and Validation Methods: A Review with Recommendations Associated with
the Modeling of Bone Tissue. Journal of Biomechanics Vol. 46, pp. 1477-1488, 2013.

[13] D. C. Stouffer and L. T. Dame. Inelastic Deformation of Metals: Models, Mechanical
Properties, and Metallurgy. John Wiley & Sons, Inc, 1996.
[14] P. Hodge. Discussion of the Prager Hardening Law. Journal of Applied Mechanics, Vol. 23,
pp. 482-484, 1957.

[15] G. R. Cowper and P. S. Symonds. Strain-Hardening and Strain-Rate Effects in the Impact
Loading of Cantilever Beams. Brown University, Division of Applied Mechanics, Technical
Report No. 28, 1957.
[16] G. R. Johnson and W. H. Cook. A Constitutive Model and Data for Metals Subjected to
Large Strains, High Strain Rates and High Temperatures. In Proceedings of the Seventh
International Symposium on Ballistics, pp. 541-547, Hague, Netherlands, 1983.

[17] A. Das. High Strain Rate Deformation Behaviour of Automotive Grade Steels. PhD thesis,
Indian Institute of Technology, Kharagpur, Department of Metallurgical and Materials
Engineering, 2019.
[18] G. Kokot and W. Ogierman. The Numerical Simulation of FOPS and ROPS Tests using
LS-DYNA. Mechanika, Vol. 25, No. 5, pp. 383-390, 2019.
[19] D. Rajput et al. Evaluation of Johnson-Cook Material Model Parameters of AA6063-T6.
International Research Journal of Engineering and Technology, Vol. 7, No. 5, 2020.

[20] C. Lakshmana Rao, V. Narayanamurthy, and K.R.Y. Simha. Applied Impact Mechanics.
John Wiley & Sons, Inc, 2017.

[21] HyperWorks. RADIOSS User Guide. Altair Engineering, Inc, 2017.


[22] C. Felippa. FEM Modeling: Mesh, Loads and BCs, chapter 7. University of Colorado,
pp.1-19, 2012.
[23] D. I. Holmes. Generalized Method of Decomposing Solid Geometry into Hexahedron Finite
Elements. In Proceedings of the Fourth International Meshing Roundtable, pp. 141-152,
Albuquerque, New Mexico, 1995.
[24] O.C. Zienkiewicz, R.L. Taylor, and J.Z. Zhu. The Finite Element Method: Its Basis and
Fundamentals. Elsevier Butterworth-Heinemann, 2005.

[25] DYNAmore GmbH. Locking in a Quadrilateral Element. LS-DYNA Support. Accessed
online 2022-04-17;
https://www.dynasupport.com/tutorial/element-locking/locking-in-a-quadrilateral-element.
[26] DYNAmore GmbH. Hourglass. LS-DYNA Support. Accessed online 2022-04-17;
https://www.dynasupport.com/howtos/element/hourglass.

[27] P. Wriggers. Computational Contact Mechanics. Springer-Verlag, Berlin Heidelberg, 2006.


[28] M. Bäker. How to Get Meaningful and Correct Results from Your Finite Element Model.
Institut für Werkstoffe, Technische Universität Braunschweig, 2018.
[29] DYNAmore GmbH. Contact Types. LS-DYNA Support. Accessed online 2022-05-02;
https://www.dynasupport.com/tutorial/contact-modeling-in-ls-dyna/contact-types.
[30] M. Avalle, G. Belingardi, and M. Gamarino. An Inverse Method for the Identification of
Strain-Rate Sensitivity Parameters of Sheet Steels. In Proceedings of the Eighth
International Conference on Structures under Shock and Impact, pp. 13-22, Crete, Greece,
2004.

[31] L. Schwer. Optional Strain-Rate Forms for the Johnson-Cook Constitutive Model and the
Role of the Parameter Epsilon 0. 6th European LS-DYNA Users’ Conference, Gothenburg,
Sweden, 2007.
[32] P. Larour. Strain Rate Sensitivity of Automotive Sheet Steels: Influence of Plastic Strain,
Strain Rate, Temperature, Microstructure, Bake-Hardening and Pre-Strain. PhD thesis,
Rhenish-Westphalian Technical University Aachen, Faculty of Georesources and Materials
Engineering, 2010.
[33] E. Cadoni et al. Tensile and Compressive Behaviour of S355 Mild Steel in a Wide Range of
Strain Rates. The European Physical Journal Special Topics, Vol. 227, pp.29-43, 2018.

Appendices
Appendix A

Overhead Guard Impact test Loads According to ISO [1]

Truck rated capacity [kg]   Impact test energy [J]   Minimum mass of test load [kg]   Drop height [m]
Under 1000                  3600                     340                              1.08
1000 to 1500                5400                     340                              1.62
1501 to 2500                10800                    680                              1.62
2501 to 3500                21760                    1360                             1.63
3501 to 6500                32640                    1360                             2.45
6501 to 10000               43520                    1360                             3.27
Over 10000                  48960                    1360                             3.67
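The tabulated values are mutually consistent with the drop-energy relation E = mgh, which can be checked quickly:

```python
def impact_energy(mass_kg, height_m, g=9.81):
    """Potential energy released by the drop: E = m * g * h."""
    return mass_kg * g * height_m

# Checking a few rows: the tabulated energies match m*g*h to within
# the rounding of the standard.
impact_energy(340, 1.62)    # close to the tabulated 5400 J
impact_energy(1360, 1.63)   # close to the tabulated 21760 J
impact_energy(1360, 3.67)   # close to the tabulated 48960 J
```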

Overhead Guard Impact test Loads According to ANSI

Truck rated capacity [kg]   Impact test energy [J]   Minimum mass of test load [kg]   Drop height [m]
1360 and under              5400                     340                              1.62
1361 to 2270                10800                    680                              1.62
2271 to 3630                21760                    1360                             1.63
3631 to 6350                32640                    1360                             2.45
6351 to 11300               43520                    1360                             3.27
11301 and over              48960                    1360                             3.67

Appendix B

Impact drop test setup according to ISO [1]

[Figures: (a) Standing operator, (b) Seated operator]

Impact test permissible deformation according to ISO [1]

