
ORNL/TM-12695

Instrumentation and Controls Division

DETECTION AND LOCATION OF MECHANICAL SYSTEM DEGRADATION


BY USING DETECTOR SIGNAL NOISE DATA*

B. Damiano
E. D. Blakeman
L. D. Phillips

Date Published–June 1994

*Research sponsored by the Laboratory Directed Research Program of Oak Ridge National
Laboratory.

Prepared by
OAK RIDGE NATIONAL LABORATORY
Oak Ridge, Tennessee 37831-6285
managed by
MARTIN MARIETTA ENERGY SYSTEMS, INC.
for the
U.S. DEPARTMENT OF ENERGY
under contract DE-AC05-84OR21400

CONTENTS

LIST OF FIGURES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

LIST OF TABLES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .vii

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 OVERVIEW OF THE DIAGNOSTIC METHOD . . . . . . . . . . . . . . . . . . . . . 2
1.2 PROGRAM FOR DEVELOPING AND DEMONSTRATING
THE DIAGNOSTIC METHOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 OVERVIEW OF REPORT. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2. PRECURSORY INFORMATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1 VIBRATION MODEL OF THE MOTOR-PUMP UNIT . . . . . . . . . . . . . . . . 6
2.1.1 Mathematical Model Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Calculation of Resonance Frequencies and Mode Shapes . . . . . . . . . . . . 10
2.1.3 Calculation of the Temporal Response to Initial
Conditions or Applied Forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10
2.2 APPLICATION OF NEURAL NETWORKS AS CLASSIFIERS/
INTERPOLATORS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11
2.2.1 Backpropagation Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12

3. PHASE I: APPLYING THE DIAGNOSTIC METHOD
   TO COMPUTER-SIMULATED DATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1 FORMATION OF THE TRAINING SETS USED DURING PHASE I. . . . 15
3.2 CREATION AND TRAINING OF THE NEURAL NETWORK . . . . . . . . . . 18
3.3 RESULTS OBTAINED FROM APPLYING THE DIAGNOSTIC
METHOD TO COMPUTER-SIMULATED DATA . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3.1 Prediction of Spring Rates From Calculated Eigenvalue
and Eigenvector Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20
3.3.2 Prediction of Spring Rates from Calculated Vibration Spectra . . . . . . . . . 20
3.4 DISCUSSION OF RESULTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .25

4. PHASE II: APPLYING THE DIAGNOSTIC METHOD TO


MEASURED DATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
4.1 THE BENCH-TOP TEST UNIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .28
4.1.1 Description of the Bench-Top Test Unit . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.1.2 Description of the Data-Acquisition System . . . . . . . . . . . . . . . . . . . . . . 31
4.2 MATHEMATICAL MODEL OF THE BENCH-TOP TEST UNIT . . . . . . . . 31
4.2.1 Model Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.2.2 Measured Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33
4.3 FORMATION OF THE TRAINING SETS AND TRAINING OF THE
NEURAL NETWORK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .33

4.4 RESULTS OBTAINED FROM APPLYING THE DIAGNOSTIC
METHOD TO MEASURED DATA... . . . . . . . . . . . . . . . . . . . . . . . . . . .35
4.5 DISCUSSION OF RESULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5. SUMMARY AND CONCLUSIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39


5.1 RESULTS AND CONCLUSIONS BASED ON THE COMPUTER
SIMULATION RESULTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .39
5.2 RESULTS AND CONCLUSIONS BASED ON THE APPLICATION OF
THE DIAGNOSTIC METHOD TO THE BENCH-TOP TEST UNIT . . . . . . 40

6. REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42

Appendix A. FREQUENCY SPECTRUM DECOMPOSITION TECHNIQUE . . . . . 45

Appendix B. PHASE I MODE SHAPES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .49

Appendix C. DETAILED DRAWINGS OF THE BENCH-TOP TEST UNIT . . . . . . 57

Appendix D. MEASURED RESONANCE FREQUENCIES AND


MODE-SHAPE COMPONENTS FROM THE
BENCH-TOP TEST UNIT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .63

LIST OF FIGURES

Figure Page
1.1 Block diagram of the method for interpreting detector signal noise data . . . . . . 4

2.1 Three-beam pump model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.2 Generalized backpropagation neural network with L hidden layers . . . . . . . . . . 13

2.3 Neural network processing element number j in hidden layer i . . . . . . . . . . . . . 13

3.1 Comparison of the known and estimated spring rates for the values
    of Kmm used in the training set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.2 Comparison of the known and estimated spring rates for the values
    of Kmp used in the training set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

3.3 Comparison of the known and estimated spring rates for the values
    of EI45 used in the training set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

3.4 Comparison of the known and estimated spring rates for the values
    of Kmm between those used in the training set . . . . . . . . . . . . . . . . . . . . . . . 22

3.5 Comparison of the known and estimated spring rates for the values
    of Kmp between those used in the training set . . . . . . . . . . . . . . . . . . . . . . . 23

3.6 Comparison of the known and estimated spring rates for the values
    of EI45 between those used in the training set . . . . . . . . . . . . . . . . . . . . . . . 23

3.7 Effect of the number of training set members on neural network


estimation accuracy for the 3 training set types used during Phase I . . . . . . . . . 24

4.1 The three-beam model of the bench-top test unit . . . . . . . . . . . . . . . . . . . . . . . 29

4.2 Effect of internal pressure on the spring rate of the Firestone 1M1A air spring . . 30

4.3 Comparison of the measured and calculated values of resonant frequency . . . . 34

4.4 A comparison of the known and estimated spring rate values for spring rates
contained in the training set.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35

4.5 A comparison of the known and estimated spring rate values made
by using measured data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .36

4.6 Estimated value of Kmm calculated while holding the value of Kmp fixed . . . . . . . 37
LIST OF TABLES

Table Page

2.1 Pump model lumped parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3.1 Spring rate values used in the Phase I training sets . . . . . . . . . . . . . . . . . . . . . . 17

3.2 Training sets used in the Phase I portion of the investigation . . . . . . . . . . . . . . 18

3.3 Neural network parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.4 Comparison of known spring rates with spring rates estimated


from simulated spectral data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26

4.1 Bench-top test unit model lumped parameters . . . . . . . . . . . . . . . . . . . . . . . . . 32

4.2 Resonant frequencies of the bench-top test unit model . . . . . . . . . . . . . . . . . . 33

ABSTRACT

This report describes the investigation of a diagnostic method for detecting and
locating the source of structural degradation in mechanical systems. The goal of this
investigation was to determine whether the diagnostic method could be practically and
successfully applied to detect and locate structural changes in a mechanical system. The
diagnostic method uses a mathematical model of the mechanical system to define
relationships between system parameters, such as spring rates and damping rates, and
measurable spectral features, such as natural frequencies and mode shapes. These
model-defined relationships are incorporated into a neural network, which is used to relate
measured spectral features to system parameters. The diagnosis of the system’s condition
is performed by presenting the neural network with measured spectral features and
comparing the system parameters estimated by the neural network to previously estimated
values. Changes in the estimated system parameters indicate the location and severity of
degradation in the mechanical system.
The investigation involved applying the method by using computer-simulated data and
data collected from a bench-top mechanical system. The effects of neural network
training set size and composition on the accuracy of the model parameter estimates were
investigated by using computer-simulated data. The measured data were used to
demonstrate that the method can be applied to estimate the parameters of a “real”
mechanical system.
The results show that this diagnostic method can be applied to successfully locate and
estimate the magnitude of structural changes in a mechanical system. The average error
in the estimated spring rate values of the bench-top mechanical system was approximately
5 to 10%. This degree of accuracy is sufficient to permit the use of this method for
detecting and locating structural degradation in mechanical systems. It is also shown that
the neural network training sets required for this level of estimate accuracy are not
impractically large and that natural frequency and mode shape information is sufficient
to provide the system parameter estimates.

1. INTRODUCTION

This report describes the investigation of a new diagnostic method for detecting and
locating the source of structural degradation in mechanical systems. The goal of this
investigation was to determine whether the diagnostic method could be practically and
successfully applied to detect and locate structural changes in a mechanical system. The
diagnostic method uses a mathematical model of the mechanical system to define
relationships between system parameters, such as spring rates and damping rates, and
measurable spectral features, such as the characteristics of resonance peaks. These
model-defined relationships are incorporated into a neural network that is used to relate
measured spectral features to system parameters. The diagnosis of the system’s condition
is performed by presenting the neural network with measured spectral features and
comparing the system parameters estimated by the neural network to previously estimated
values. Changes in the estimated system parameters indicate the location and severity of
degradation in the mechanical system.
This method has a significant advantage over current monitoring methods (which
track specific spectral characteristics such as resonance frequencies or the total energy in a
frequency interval). Current monitoring methods can detect mechanical degradation but
provide little indication of the magnitude or location of the degradation. The main
advantage of this new method is that the signature interpretation is based on mathematical
model results, allowing a direct association between spectral changes and structural
degradation. This approach removes much of the subjectivity commonly associated with
signature interpretation.
Spectral analysis of detector signal noise data is a proven technique for monitoring
the condition of mechanical systems [1]. This technique is well established for monitoring
rotating machinery and, more recently, has been applied to monitor the condition of
nuclear reactor internal components [1-4]. The main activities comprising a monitoring
program involve data collection and storage, data analysis, and interpretation of analysis
results. Of these activities, interpretation remains the least developed, mainly because of
the difficulty in automating the intuitive processes involved in interpreting spectral data.
Interpretation of spectral data is typically based on the characteristics of resonance
peaks. Resonance peak characteristics are frequency, width, amplitude, and skewness (a
measure of asymmetry). A crucial step in interpreting spectra is identifying the cause of
each resonance peak. Resonance peaks are normally caused by either internally generated
or externally imposed driving forces or by an amplified system response to external driving
forces caused by a structural resonance. The source of most peaks caused by internal or
external driving forces can usually be identified on the basis of known operating conditions
such as rotation speed or can be derived from knowledge or measurement of the external
forces. Peaks due to structural resonances can often be identified on the basis of results
obtained from measurements made during special tests such as impact testing or
coastdown measurements. Additional measured information such as the relative phases
between detector signals may be available and can be useful in determining the mode
shape corresponding to some structural resonances. When structural degradation occurs,
the features of the resonance peaks corresponding to structural resonances change. Thus,
changes in resonance peaks indicate that mechanical components have experienced
degradation.


Mathematical simulations of mechanical systems have been used to determine the


cause of changes in peaks corresponding to structural resonances [5-8]. This approach
involves developing a mathematical model describing the dynamic behavior of the
mechanical system and adjusting the model parameters to obtain agreement between the
calculated and measured spectral features. If the measured spectral features change, the
model parameters are readjusted to give agreement between the calculated spectral
features and the new measurements. The model parameter adjustments indicate where
degradation may have occurred in the mechanical system.
The main shortcoming of previous applications of mathematical modeling techniques
to interpret spectral data is the lack of an easily implemented, systematic method to adjust
the model parameters to obtain agreement between calculated and measured spectral
features. This tedious process has been performed manually, involving considerable time
and effort and also requiring significant expertise on the part of those adjusting the model
parameters. The method investigated by this study uses a neural network to adjust the
model parameters, resulting in a systematic and, from the perspective of the user,
relatively simple process for matching model parameters to measured spectral features.

1.1 OVERVIEW OF THE DIAGNOSTIC METHOD

The diagnostic method combines a mathematical model of the monitored system,
which relates system parameters to measurable spectral phenomena; a technique to extract
the significant features from the frequency spectra; and a neural network to match the
extracted spectral features to the corresponding system parameters. The technique
comprises five steps.

1. Develop a mathematical model describing the vibrations and dynamics of the


monitored mechanical system.

2. Design a neural network that will be trained to simulate the model.

3. Use the mathematical model to form a training set for the neural network. The
training set will consist of calculated model responses (i.e., spectral features) as known
inputs and the corresponding spring and damping constants (i.e., system or model
parameters) as known outputs. Thus, if the relationship between the spectral features
and the system parameters is single valued, the neural network will perform the
mathematical inverse of the model.

4. Train the neural network by iteratively adjusting the neural network connection
weights to obtain optimum agreement between the neural network and the
mathematical model.

5. Use the trained neural network to estimate the system parameters corresponding to a
measured set of spectral features.

Only the most general discussion about model development will be made here
because a unique model must be developed for each application. The modeling technique
is independent of the monitoring method. Thus, for some applications, relatively coarse

lumped-parameter approximations may be suitable; for others, detailed models employing


sophisticated modeling techniques such as finite element methods may be needed. The
only requirement placed on the mathematical model by the monitoring method is that the
model must simulate the significant and measurable effects caused by the changing of
system parameters. The parameter most likely to change in mechanical systems is stiffness
or damping, and the significant and measurable effects of either of these changes will be
the characteristics of the resonance frequencies and mode shapes.
The supervised learning mode of neural network training used in this investigation
involves adjusting internal neural network parameters until satisfactory agreement is
obtained between the sets of known input and output parameters in the training set. For
the neural network to relate spectral features to system parameters, the training set will
use spectral features calculated by the mathematical model as neural network input and
will use the corresponding model input parameters as neural network output. After
training, the neural network will effectively contain all of the significant information
available from the model and will, in effect, perform the mathematical inverse of the
model.
The implementation of the trained neural network for interpreting vibration
signatures is summarized in Fig. 1.1. Sensor signals are conditioned and then transformed
into frequency spectra by a fast Fourier transform (FFT) algorithm. The spectral data
are decomposed, yielding frequency peaks and mode shape components. These spectral
features are used as input to the neural network. The neural network outputs are
estimates of the original system parameters. Comparison of the latest estimated system
parameters with previously estimated values indicates whether degradation has occurred
and the severity of any degradation. The specific model parameter that experiences a
change indicates the location of the degradation because each model parameter represents
a specific system component.
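
The pipeline of Fig. 1.1 can be summarized in a short sketch. The code below is a minimal illustration under assumed interfaces: the peak picking stands in for the frequency spectrum decomposition technique of Appendix A, and `network` and `baseline_params` are hypothetical placeholders for the trained neural network and the previously estimated parameter values; this is not the report's actual implementation.

```python
import numpy as np

def interpret_signals(time_records, dt, network, baseline_params):
    """Hypothetical sketch of the Fig. 1.1 pipeline: conditioned sensor
    signals -> FFT -> spectral features -> neural network -> system
    parameter estimates -> comparison with previous estimates."""
    # Transform each conditioned sensor signal into a frequency spectrum.
    spectra = [np.fft.rfft(x) for x in time_records]
    freqs = np.fft.rfftfreq(len(time_records[0]), d=dt)

    # Decompose the spectra into resonance frequencies and mode shape
    # components (placeholder: keep the four largest peaks of sensor 0;
    # the report uses the decomposition technique of Appendix A).
    mags = np.abs(spectra[0])
    peak_idx = np.sort(np.argsort(mags)[-4:])
    resonances = freqs[peak_idx]
    mode_shapes = np.array([[np.abs(s[i]) for s in spectra] for i in peak_idx])
    mode_shapes /= mode_shapes.max(axis=1, keepdims=True)

    # Present the spectral features to the trained network to estimate the
    # system parameters (spring rates, damping rates, ...).
    features = np.concatenate([resonances, mode_shapes.ravel()])
    estimates = network(features)

    # Changes relative to earlier estimates indicate the location and
    # severity of any degradation.
    return estimates, estimates - baseline_params
```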

1.2 PROGRAM FOR DEVELOPING AND DEMONSTRATING


THE DIAGNOSTIC METHOD

The program for investigating the monitoring method consisted of two phases:
Phase I involved software tool development and concept demonstration by applying the
diagnostic method to computer-simulated data. Phase II applied the software tools
developed during Phase I to measured data acquired from a bench-top test unit.

Phase I

The Phase I activities concentrated on developing software tools and on applying


these tools to computer-simulated data. An outline of the steps comprising Phase I is
given below:

I.1. Complete the development of the mathematical model of a motor-driven centrifugal


pump that had been used in a previous Oak Ridge National Laboratory (ORNL)
project. This model was modified to include calculation of resonance frequencies,
Q values, and mode shapes. The code output is either the time response of each
mass point (when calculating the response to forcing functions) or the frequency,
Q value, and mode shape of each natural mode.

Fig. 1.1. Block diagram of the method for interpreting detector signal noise data
(FFT = fast Fourier transform).

I.2. Form a training set for the neural network. A training set will be formed by using
the model developed in Step I.1.

I.3. Create and train the neural network.

I.4. Test the ability of the neural network to reproduce model results by providing the
neural network with spectral features calculated by using known model parameters
and then comparing these values to those estimated by the trained neural network.

I.5. Test the diagnostic method by using computer-simulated data. This test was done by
calculating the response of the model to an initial mass-point deflection for known
model parameters, transforming the response into the frequency domain,
decomposing the resulting frequency spectrum to obtain the appropriate spectral
features, and then providing these to the neural network to estimate the model
parameters. Comparison of the known and estimated model parameters was used to
evaluate the effectiveness of the diagnostic method.

Phase II

The Phase II activities concentrated on the creation and instrumentation of the


bench-top test unit and the application of the diagnostic method to measured data. An
outline of the steps comprising Phase II is given below:

II.1. Select and purchase the equipment required to construct and instrument the
bench-top test unit. The bench-top test unit system was constructed so the
mathematical model developed in Phase I could be easily adapted to model the
behavior of the bench-top unit.

II.2. Construct, instrument, and test the bench-top unit.

II.3. Write, test, and debug a code to interface the output from the data acquisition and
spectral analysis code to the frequency spectrum decomposition code.

II.4. Form an input data set describing the bench-top unit for the mathematical model
developed in Step I.1.

II.5. Form a training set by using the input data set describing the bench-top unit. Train
the neural network.

II.6. Form a frequency spectrum, decompose the spectrum, and apply the results of the
frequency spectrum decomposition to the neural network by using data collected
from the bench-top unit. Introduce stiffness or damping changes to the bench-top
system, and observe the ability of the system to detect the effect of the changes and
diagnose the location and magnitude of the change. Step II.6 will be used to
evaluate the overall feasibility of the diagnostic method.

1.3 OVERVIEW OF REPORT

For completeness, Chapter 2 of this report presents background information on


mechanical modeling, resonant frequencies, and neural networks for readers unfamiliar
with these subject areas. Chapter 3 describes the analytical portion of the Phase I
investigation. Chapter 3 includes descriptions of the development of the software tools
and the creation and training of the neural network and also presents and discusses the
results obtained from applying the diagnostic method to computer-simulated data.
Chapter 4 describes the experimental portion of the Phase II investigation. Chapter 4
includes descriptions of the bench-top test unit and its instrumentation, the mathematical
model of the bench-top test unit, and the creation and training of the neural network and
also presents and discusses the results obtained from applying the diagnostic method to
measured data. Chapter 5 summarizes the results of the investigation and draws
conclusions concerning the feasibility of the diagnostic method on the basis of those
results.

2. PRECURSORY INFORMATION
This chapter presents general background information in the fields of mechanical
modeling, resonance frequency and mode shape calculation, and neural networks.
Although this information is not required to fully describe the investigation of the
monitoring technique, it may be needed to obtain a thorough understanding of the
investigation. The subjects discussed are the mathematical model of the mechanical
system used in this investigation, a description of the frequency spectrum decomposition
technique, and the application of neural networks as classifiers or interpolators.

2.1 VIBRATION MODEL OF THE MOTOR-PUMP UNIT

This section describes the mathematical model of the motor-pump unit used in this
investigation. This model is used to calculate resonance frequencies and mode shapes and
can also be used to calculate the system response to initial conditions or applied forces.

2.1.1 Mathematical Model Description

A lumped parameter model consisting of three interconnected beams, modeled as a


series of mass points connected by massless beam sections, represents the motor-driven
centrifugal pump (Fig. 2.1). The first beam represents the motor shaft and its attached
mass, the second beam represents the pump shaft and its attached mass, and the third
beam represents the machine base plate. Different geometry and material properties may
be used for each beam section. The motor and pump shafts are connected by a coupling,
which is represented by a linear spring and a trunnion spring. The motor and pump shafts
are connected to the base plate (i.e., to their casings) by the bearings, which are
represented by linear springs and proportional dampers. Two linear spring/proportional
damper pairs are used to represent the flexible mounts. The attachment of the flexible
mounts to the bench is assumed to be fixed.
The number and the location of the mass points were selected to represent the
system’s mass distribution and deflection and to allow for the application of external
forces. The motor and pump shafts contain the majority of the mass points because these
beams model relatively flexible components that experience significant bending and are
subjected to a variety of external forces. The steel base plate is approximated as a rigid
body because it is stiff relative to the motor shaft, pump shaft, and flexible mounts. Two
mass points are used to represent the base-plate mass.
The components represented by each mass point are listed in Table 2.1.
Concentrated masses exist at both ends of the motor shaft, at the coupling end of the
pump shaft, and between each bearing pair. These mass-point locations allow the
application of forces generated by rotating imbalances, coupling misalignments, and
flow-induced forces on the impeller. Mass points at the bearing locations allow forces due
to bearing defects to be applied to the motor and pump shafts. Portions of the motor or
pump shaft mass are included in each of these mass points. The base-plate mass points
include the base-plate mass and the mass of the motor and pump cases. The motor and
pump cases are assumed to be rigidly attached to the base plate because of the stiffness
of these connections.

The mass matrix is a diagonal matrix containing the mass values of the eleven mass
points. The elements of the stiffness matrix are calculated by using the method of
influence coefficients [9]. The relationship between the mass-point forces and the
deflections can be represented by the matrix equation,

d = IC f , (2.2)

where

d = the deflection vector,
IC = the influence coefficient matrix, and
f = the applied force vector.

An element of the influence coefficient matrix, IC_i,j, is defined as the deflection at
mass point i caused by a unit force applied to mass point j. Thus, the elements of the jth
column of the influence coefficient matrix are the mass-point location deflections caused
by a unit force applied at the location of mass point j. The deflections of the model due
to forces applied perpendicular to the beam axis at the mass-point locations are calculated
by determining the beam reactions, using the reactions and applied forces to form the
moment equations [9], analytically integrating the moment equations twice to obtain beam
deflection equations, and finally applying boundary and matching conditions to determine
the integration constants. Repeating this calculation for unit forces applied at each
mass-point location forms the columns of the influence coefficient matrix. Equation (2.2)
can be solved for the applied force vector by premultiplying by the inverse of the
influence coefficient matrix,

f = IC^-1 d = K d , (2.3)

where

IC^-1 = K is the stiffness matrix.
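
As an illustration of this construction, the following NumPy sketch forms IC column by column and inverts it to obtain K. The `deflection_due_to_unit_force` routine and the three-spring example are assumptions made for demonstration, not the beam-deflection calculation used in the report.

```python
import numpy as np

def stiffness_from_influence(deflection_due_to_unit_force, n_points):
    """Form IC column by column (Eq. 2.2) and invert it to obtain K (Eq. 2.3).

    deflection_due_to_unit_force(j) must return the deflections at all mass
    points caused by a unit force at mass point j (user-supplied beam
    calculation)."""
    IC = np.column_stack([deflection_due_to_unit_force(j) for j in range(n_points)])
    return np.linalg.inv(IC)          # K = IC^-1

# Illustrative example: a uniform three-spring chain whose flexibility
# (influence coefficient) matrix is known in closed form; this is not the
# pump model of the report.
k = 1.0e5                              # N/m, assumed spring rate
flex = np.array([[1, 1, 1],
                 [1, 2, 2],
                 [1, 2, 3]]) / k       # influence coefficients
K = stiffness_from_influence(lambda j: flex[:, j], 3)
```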

After the mass and stiffness matrices are formed, the undamped equation of motion,

M x'' + K x = 0 , (2.4)

can be used to find the undamped natural frequencies (eigenvalues) and mode shapes
(eigenvectors). A coordinate transformation that uncouples the equations can be formed
by using the modal matrix P, whose columns are the eigenvectors of Eq. (2.4). The modal
matrix and its transpose are orthogonal with respect to the mass and stiffness matrices
because of the orthogonality of the eigenvectors. Using the modal matrix to perform a
coordinate transformation results in uncoupling the equations of motion,

(2.7)

The modal damping ratios can be determined from examining the rate of decay of
each vibration mode during the system’s response to an initial deflection. The modal
matrix is then used to transform the damping matrix into the original coordinates.

Once the damping matrix is obtained, all the matrix elements of Eq. (2.1) are known and
the mathematical model of the pump is complete.
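
With M and K in hand, the undamped natural frequencies and mode shapes follow from the generalized eigenproblem K p = w^2 M p. The sketch below uses scipy.linalg.eigh as an assumed stand-in for the eigen-solution procedure described in Sect. 2.1.2; the routine itself is generic and not tied to the pump model.

```python
import numpy as np
from scipy.linalg import eigh

def modes(M, K):
    """Solve K p = w^2 M p for an undamped lumped-parameter model.

    Returns the natural frequencies in Hz and the modal matrix P whose
    columns are the mass-normalized mode shapes (eigenvectors)."""
    w2, P = eigh(K, M)                             # generalized eigenproblem
    freqs_hz = np.sqrt(np.maximum(w2, 0.0)) / (2.0 * np.pi)
    return freqs_hz, P

# Because the eigenvectors are orthogonal with respect to M and K,
# P.T @ M @ P and P.T @ K @ P are (numerically) diagonal, which is what
# uncouples the equations of motion in the modal coordinates.
```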

2.1.2 Calculation of Resonance Frequencies and Mode Shapes

(2.9)

2.1.3 Calculation of the Temporal Response to Initial Conditions or Applied Forces

Simulation of the response of the motor-pump unit to initial conditions or to applied


forces was performed by using the Advanced Continuous Simulation Language (ACSL)
computer code [10]. The ACSL code was used to calculate the temporal response at the
mass-point locations by numerical integration of Eq. (2.9). The initial section of the
program forms the mass, stiffness, and damping matrices; combines these to form matrix A
of Eq. (2.9); and sets the initial acceleration, velocity, and position of each mass point.
The dynamic section of the program calculates the acceleration, velocity, and position of
each mass point for each time increment. The output from the simulation, which consists
of the time, acceleration, velocity, and the position of selected mass points, is stored in
ASCII text and binary plotting files.
The calculated output for any mass point simulates measured acceleration data
assuming the dynamics of the accelerometer are such that they do not significantly affect
the measurement. This assumption is valid for frequencies below 80% of the
accelerometer resonance frequency. For example, the calculated acceleration of mass
point 10 simulates the signal that would be obtained from an accelerometer mounted on
the motor casing. By processing the calculated acceleration through an FFT, a frequency
spectrum can be obtained for each mass point. If the simulation corresponds to free
vibration, the frequency spectra can be used to determine the resonance frequencies and
mode shape components for the corresponding mass points. Resonance frequencies
correspond to the peaks in the frequency spectrum, and the relative magnitude of each
mode shape component may be obtained from the appropriate Fourier-transformed
displacement component. The relative angle between eigenvector components can be
obtained by comparing the angles from the appropriate Fourier-transformed displacement
components. This information is extracted from the frequency spectra by the frequency
spectrum decomposition technique. This technique is described in Appendix A, and a
previous application to the analysis of nuclear power plant noise is also discussed [11,12].
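
The following sketch illustrates the same free-vibration-to-spectrum procedure on an assumed two-mass example: scipy.integrate.solve_ivp stands in for the ACSL code, and a simple peak pick stands in for the full frequency spectrum decomposition technique of Appendix A. The mass, stiffness, and damping values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed two-mass example (not the pump model).
M = np.diag([1.0, 2.0])
K = np.array([[3.0e4, -1.0e4], [-1.0e4, 1.0e4]])
C = 1.0e-4 * K                                     # light proportional damping

def rhs(t, z):
    """First-order form z = [x, xdot] of the equations of motion."""
    x, v = z[:2], z[2:]
    a = np.linalg.solve(M, -C @ v - K @ x)
    return np.concatenate([v, a])

# Free vibration from an initial deflection of mass point 1.
dt, tmax = 1.0e-3, 4.0
t_eval = np.arange(0.0, tmax, dt)
sol = solve_ivp(rhs, (0.0, tmax), [0.01, 0.0, 0.0, 0.0], t_eval=t_eval, max_step=dt)

# FFT each mass-point displacement; spectral peaks give the resonance
# frequencies, and the amplitudes at the peaks give mode shape components.
spectra = np.fft.rfft(sol.y[:2], axis=1)
freqs = np.fft.rfftfreq(len(t_eval), d=dt)
peak = np.argmax(np.abs(spectra[0, 1:])) + 1        # dominant resonance (skip DC)
print("resonance ~ %.2f Hz" % freqs[peak])
print("mode shape component ratio:", np.abs(spectra[1, peak]) / np.abs(spectra[0, peak]))
```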

2.2 APPLICATION OF NEURAL NETWORKS AS CLASSIFIERS/


INTERPOLATORS

A neural network is a mathematical algorithm modeled loosely on the concept of its


biological counterpart, the brain. Just as the brain contains a vast number of
interconnected neurons, a neural network comprises interconnected neuronlike
“processing elements.” Each processing element is a simple processor which, like its
biological counterpart, receives signals from neighboring elements and outputs a response
based on the weighted sum of those inputs. In most neural network configurations, the
processing elements are arranged in connected layers, and the number of processing
elements per layer may range from a few to thousands.
A distinguishing feature of a neural network is that it is adaptable and must undergo
a training phase prior to application. During training, the values of coefficients (i.e.,
weights) that control the relative effects of processing elements are iteratively adjusted to
produce a more desirable response. This adjustment is analogous to the biological
learning process in which the interconnection strengths between neurons are strengthened
or weakened, depending on relative activation rates. In general, both artificial and
biological neural systems are governed by Hebb’s Law [13], which states that when two
neurons consistently and simultaneously have a high output, the interconnection strength
between the neurons is increased.
Numerous neural network algorithms exist, and it is beyond the scope of this paper to
discuss or even summarize them. However, in general, they can be grouped into two
broad categories, depending on whether they are trained from a combination of input and
output data (supervised) or from input data only (unsupervised). Two of the better
known unsupervised neural networks are the Kohonen network [14] and the Carpenter/
Grossberg Adaptive Resonance Theory (ART) network [15]. These algorithms organize
input data into categories on the basis of a measure of similarity and, thus, perform cluster
analysis much like well-known classical algorithms such as K-Means [16].
Supervised networks, on the other hand, produce output values rather than clusters of
data. Thus, supervised network training requires a set of data that includes both input and
corresponding output values. The network is trained to reproduce each output value for
the corresponding input value; thus, when it is presented with an unknown input, the
correct output results. The training is typically accomplished by adjusting the weights of
the processing element interconnections by an amount proportional to the error (i.e., the
difference between the correct output and the output actually produced by the network).
If the error is large, then large adjustments are made.
Two main applications of supervised neural networks exist: pattern recognition and
function approximation. In the first application, a network is trained to classify input
vectors (patterns). Outputs are constrained to discrete values, each of which indicates
assignment to a particular class. In the second application, also known as system
identification, a neural network is trained to duplicate the functional process of a physical
system or mathematical model. Outputs in this case may be continuous rather than
discrete. The interest in neural networks for this latter purpose stems largely from the
Stone-Weierstrass Theorem which, when generalized to neural networks, states that even
with only one hidden layer (see later discussion of backpropagation), a neural network can
approximate any continuous function [17]. As a result, a theoretical basis exists to conclude
that a neural network can be trained to duplicate the mathematical details of an arbitrarily
complex mathematical system. An advantage is that the functional response of a complex
system may be duplicated with no requirement for knowledge of the internal details of the
system. A conventional competing approach is the use of polynomial curve fitting.
However, some research has indicated that neural networks can be orders of magnitude
more accurate than polynomial models, which tend to oscillate and may require an
impractically large number of coefficients [18].

2.2.1 Backpropagation Network

The backpropagation model is the most widely used supervised neural network model
and is used for the system identification in this work. This network is characterized by
three or more layers of processing elements comprising an input buffer layer, at least one
“hidden” layer, and an output layer. A conceptual example depicting L hidden layers is
shown in Fig. 2.2. Figure 2.3 shows a single processing element (element j in layer l)

(2.10)

(2.11)

The nonlinear transfer function is usually a hyperbolic tangent defined by

f(x) = (e^x - e^-x) / (e^x + e^-x) , (2.12)

or the sigmoid function defined by

f(x) = 1 / (1 + e^-x) .

A backpropagation network is trained by modifying the interconnection weights to


minimize a global error function given by

Fig. 2.2. Generalized backpropagation neural network with L hidden layers.

Fig. 2.3. Neural network processing element number j in hidden layer i.


(2.15)

(2.16)

(2.17)

(2.18)

The constant A (0 < A < 1) in Eq. (2.15) is known as the “Learning Coefficient.” It has
been heuristically shown that if A is decreased during the learning phase, convergence is
optimized. From Eqs. (2.10) and (2.11), we observe that data flow is in the feedforward
direction from input to output. However, Eqs. (2.17) and (2.18) show that error
information required to adjust the interconnection weights is propagated in a backward
direction from output to input. Hence, the name “backpropagation” has been adopted.
One issue that has attracted concern regarding gradient descent methods is that of
local minima. That is, how can one be certain that the minimum error obtained is indeed
a global minimum? It has been shown that for a local minimum to exist, all the weights
must be such that any change in these weights will increase the error [19]. The indication is
that the probability of a local minimum decreases for a specific data set as the number of
weights is increased. Thus, if a local minimum is suspected, the addition of nodes to the
hidden layer should eliminate the local minimum. A further indication is that local minima
present more of a theoretical problem than one that actually occurs in practice.
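
A compact NumPy sketch of a one-hidden-layer backpropagation network with a hyperbolic-tangent transfer function and gradient-descent weight updates scaled by a learning coefficient is given below. The layer sizes, initialization, and learning coefficient are illustrative assumptions; this sketch does not reproduce the NeuralWorks implementation used later in this report.

```python
import numpy as np

class Backprop:
    """One-hidden-layer backpropagation network (tanh hidden layer)."""

    def __init__(self, n_in, n_hidden, n_out, learning_coefficient=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lc = learning_coefficient

    def forward(self, x):
        self.h = np.tanh(self.W1 @ x + self.b1)      # hidden-layer outputs
        return self.W2 @ self.h + self.b2            # linear output layer

    def train_step(self, x, target):
        y = self.forward(x)
        err_out = y - target                          # output-layer error
        # Backpropagate the error from the output layer to the hidden layer.
        err_hid = (self.W2.T @ err_out) * (1.0 - self.h ** 2)
        # Gradient-descent weight updates scaled by the learning coefficient.
        self.W2 -= self.lc * np.outer(err_out, self.h)
        self.b2 -= self.lc * err_out
        self.W1 -= self.lc * np.outer(err_hid, x)
        self.b1 -= self.lc * err_hid
        return 0.5 * float(err_out @ err_out)         # contribution to global error
```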

3. PHASE I: APPLYING THE DIAGNOSTIC METHOD


TO COMPUTER-SIMULATED DATA

This chapter describes the development of software tools and the application of the
diagnostic method to computer-simulated data. The computer simulation was intended to
address the following questions:

1. What effect does the training set composition and size have on the accuracy of the
model parameter estimates made by the neural network?

2. Is the formation of the training set or the neural network training so computationally
intensive that the diagnostic method is impractical?

3. Are eigenvalues and eigenvector components a practical choice for forming the neural
network training set, and is the information contained in these values sufficient for
the diagnostic method to accurately predict model parameters?

4. Can information equivalent to the eigenvalues and eigenvector components used to


form the training set be easily extracted from vibration spectra?

5. Can the trained neural network solve the “inverse problem,” that is, can the neural
network be used to accurately estimate the model parameters that correspond to a
given set of eigenvalues and eigenvector components (resonance frequencies and
mode shape components)?

With the use of computer-simulated data, a direct comparison of the estimated and known
model parameters can be used to evaluate the accuracy of the neural network
interpolation. The use of computer-simulated data also allows an examination of the
effect of the frequency spectrum decomposition on diagnosis accuracy.
The formation of the neural network training sets is described in Sect. 3.1. The
application of the NeuralWorks Professional II+ code to perform the neural network
calculations is described in Sect. 3.2. The application of the diagnostic method to
computer-simulated data is described in Sect. 3.3, and a discussion of the results is
presented in Sect. 3.4.

3.1 FORMATION OF THE TRAINING SETS USED DURING PHASE I

The members of the training sets used during Phase I were selected so the effects of
the spacing between the members in the training set and the effects of the number and
type of model output values on neural network prediction accuracy could be examined.
The number of model output values determines the number of nodes in the neural
network input and hidden layers. The spacing between the model input values affects the
neural network prediction accuracy because closer input value spacings result in neural
network interpolation over a narrower range during the recall phase. Because each
training set used in this investigation spans the same range of input and output values, the
number of members contained in the training set increases as the spacing between
members decreases. The effect of the number of spring rates n used in generating the
training set and of the number of variations i_j of spring rate j on the number of cases N
contained in each training set is given by

N = i_1 x i_2 x ... x i_n . (3.1)
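
The enumeration implied by Eq. (3.1) can be written directly; in the sketch below, `model_features` is a hypothetical routine returning the resonance frequencies and mode shape components calculated by the model for one spring-rate combination, and the spring-rate grids are illustrative values.

```python
import itertools
import numpy as np

def build_training_set(spring_rate_values, model_features):
    """Enumerate every combination of the candidate spring rates.

    spring_rate_values: list of arrays, one per spring rate, giving the i_j
    candidate values for spring rate j.  The number of cases is the product
    of the i_j, as in Eq. (3.1)."""
    inputs, outputs = [], []
    for combo in itertools.product(*spring_rate_values):
        inputs.append(model_features(combo))   # spectral features (NN input)
        outputs.append(combo)                  # spring rates (NN output)
    return np.array(inputs), np.array(outputs)

# Example: three spring rates with 5 candidate values each.
# Number of cases from Eq. (3.1): 5 x 5 x 5 = 125.
grids = [np.linspace(0.5, 1.5, 5) * 1.0e5 for _ in range(3)]
```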

Mode A   Mode A is characterized by relatively large mode shape components
         corresponding to mass points 5, 6, and 8. The mode shape components
         corresponding to mass points 5 and 6 are in-phase, and the mode shape
         components corresponding to mass points 5 and 8 are out-of-phase. The
         mode shape components corresponding to mass points 10 and 11 have a
         value of virtually zero.

Mode C   Mode C is characterized by relatively large mode shape components
         corresponding to mass points 5, 6, and 8. The mode shape components
         corresponding to mass points 5 and 6 are in-phase, and the mode shape
         components corresponding to mass points 5 and 8 are also in-phase. The
         mode shape components corresponding to mass points 10 and 11 have a
         value of virtually zero.

The members of the training sets used during the Phase I portion of this investigation
are summarized in Table 3.2. The training sets differ in the number of spring rate
combinations contained in the training sets and in the number and type of neural network
input parameters. Training sets 1, 2, and 3 used only the resonance frequencies of modes
A, C, E, and F (presented in that order) as input. Training sets 4, 5, and 6 use the
resonance frequencies and normalized mode shape components corresponding to mass
points 5, 8, 10, and 11 for modes A, C, E, and F as input. Training sets 7, 8, and 9 use
the resonance frequencies and normalized mode shape components of mass points 5, 8,
10, and 11 corresponding to the 9 lowest resonance frequencies as input. Training
sets 1 through 6 order the neural network input parameters on the basis of mode and,
thus, require some preprocessing of the input parameters. This preprocessing removes
the mode-identification task from the neural network and results in a
relatively small neural network. Training sets 7, 8, and 9 contain resonance frequency and
mode shape information corresponding to the 9 lowest resonance frequencies. This
approach removes the need to preprocess the input parameters but results in larger neural
networks.

3.2 CREATION AND TRAINING OF THE NEURAL NETWORK

All neural network development in this work was performed by using the
NeuralWorks Professional II/PLUS code [20] distributed by NeuralWare, Inc., of Pittsburgh.
This package offers an extensive selection of neural network paradigms. However, as was
discussed in Sect. 2.2, the backpropagation algorithm appeared the most suitable for our
application. We found, after some experimentation, that only one hidden layer was
required in each network representation and little could be gained by increasing this
number. The number of neural network inputs is determined by the number of resonance
frequencies and the number of mode shape components for each resonance frequency.
Thus, for training sets 1, 2, and 3, which use
four resonance frequencies and no mode shapes, only four inputs were used. For sets 4,
5, and 6, which use four resonance frequencies and four mode shapes (per frequency),
20 inputs were required. Similarly, for sets 7, 8, and 9, which have nine frequencies and
four mode shapes, 45 inputs were required. Thus, the complexity of the neural networks
used in this work varied from a 4- to a 45-dimensional input.
No technique is universally accepted for determining the ideal number of nodes to
include in the hidden layer of a backpropagation network. However, we found, through
trial and error, that a suitable number of hidden layer nodes was ~50% of the input
dimension (provided that number remained greater than the number of output nodes). A
larger number only complicated the network and did not improve either convergence rate
or the final accuracy; a smaller number in some cases degraded the final results. On this
basis, the number of hidden nodes for the three groupings 1-3, 4-6, and 7-9 were 3, 10,
and 20, respectively.
Additional factors that must be considered in the development of a backpropagation
network are the nonlinear transfer function used and the variation of the learning rule
incorporated. For our work, the hyperbolic tangent gave better results than the sigmoid
function and was used in all cases. Network learning was achieved by using the
cumulative delta rule, a version of the gradient descent rule discussed in Sect. 2.2. The
weight changes (delta weights) are averaged over a specified number of training set
member presentations to the network before the weights are actually updated. The
number of results accumulated before the update is referred to as the “epoch.” Thus, for
the standard gradient descent approach in which the weights are updated after each
presentation, the epoch is 1. The advantage of an epoch > 1 is that oscillatory results
from individual input-desired output pairs are smoothed out. The disadvantages are that
more presentations are required for each weight update, and if the epoch is too large, the
weight changes may average to a very small value, resulting in slow convergence. For the
cases we examined, the epoch size appeared to have a marginal effect on the ultimate
convergence of the network. In general, we found that an epoch size of several
percent of the training set size resulted in a short training time.
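
The cumulative delta rule described above amounts to averaging the per-presentation weight changes over an epoch before applying them. The sketch below shows the idea on a single linear layer; this is an illustrative simplification, not the NeuralWorks implementation, and an epoch size of 1 reduces to standard per-presentation gradient descent.

```python
import numpy as np

def cumulative_delta_fit(X, T, epoch_size, lc=0.05, n_sweeps=200, seed=0):
    """Cumulative delta rule on a single linear layer (illustration only):
    accumulate the weight changes over `epoch_size` presentations and then
    apply their average."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(T.shape[1], X.shape[1]))
    for _ in range(n_sweeps):
        delta, count = np.zeros_like(W), 0
        for x, t in zip(X, T):
            err = W @ x - t
            delta += -lc * np.outer(err, x)      # accumulate the weight change
            count += 1
            if count == epoch_size:              # apply the averaged change
                W += delta / count
                delta, count = np.zeros_like(W), 0
        if count:                                # leftover partial epoch
            W += delta / count
    return W
```
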
Table 3.3 summarizes the above discussion describing network parameters for the
nine training sets considered. Table 3.3 shows also the number of iterations that were
used for training each network. Errors are discussed in the following section.

3.3 RESULTS OBTAINED FROM APPLYING THE DIAGNOSTIC METHOD


TO COMPUTER SIMULATED DATA

This section presents the results obtained from applying the diagnostic method to
computer-simulated data. The use of the diagnostic method to estimate model spring
rates from calculated eigenvalue and eigenvector components is presented first, followed
by the results obtained from applying the method to computer-simulated vibration spectra.

3.3.1 Prediction of Spring Rates From Calculated Eigenvalue and


Eigenvector Information

The ability of the neural network to reproduce the training set output, given the
training set input, is shown in Figs. 3.1, 3.2, and 3.3. The difference between the known
and predicted output is a measure of the effectiveness of the neural network training.
These figures show results obtained by applying a neural network trained by using training
set 6; this particular neural network/training set combination was selected because it
produced the most accurate results. In these figures, examples of perfect agreement
between the known and estimated values fall on the diagonal line. For the neural
network reproducing the spring rates used in training set 6, the average absolute error and
standard deviation of the three estimated spring rates are 3.6 and 3.1%, respectively.
The capability of the neural network to generalize over the training set (i.e., to
interpolate between the spring rate values used in the training set) is shown in Figs. 3.4,
3.5, and 3.6. A test set was formed by using calculated results for spring rates between
those used in the training set. The overall absolute error is 3.2% and the standard
deviation is 2.7%.
The effect of the training set size on the accuracy of the neural network estimate of
spring rates not included in the training set is shown in Fig. 3.7. These results show that
for each of the training set types, the accuracy of the neural network spring rate estimate
improves as the number of training set members increases. This improvement occurs
because the greater number of training set members reduces the range over which the
neural network must interpolate to estimate the spring rate values. The rate of
improvement decreases as the number of training set members increases.

3.3.2 Prediction of Spring Rates from Calculated Vibration Spectra

The ACSL computer code was used to calculate the model mass-point deflections
caused by an initial deflection of mass point 1. The calculated deflection data were
Fourier transformed into the frequency domain, and the neural network input data
(i.e., resonant frequencies and mode shape amplitudes) were manually extracted from the
frequency spectra. Comparison of the known and estimated spring rates indicates how
well the technique can be applied to spectral data.

Fig. 3.7. Effect of the number of training set members on neural network
estimation accuracy for the 3 training set types used during Phase I.
The response of the system to a 1-in. (2.54-cm) initial displacement of mass point 1
was calculated for a simulation time span of 4.5 s, with the mass-point displacements
recorded at time intervals of 0.001 s. The results of this calculation were used to form a
4096-line Fourier transform with a frequency increment of ~0.25 Hz and
a maximum frequency of 250 Hz. A fourth-order Butterworth filter with a 200-Hz cutoff
frequency was applied to the time-domain data before the displacements of mass points 5,
8, 10, and 11 were transformed into the frequency domain.
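
A sketch of the signal-processing steps just described, with scipy standing in for the tools actually used: a fourth-order, 200-Hz Butterworth low-pass filter applied to the 0.001-s-sampled records, followed by a 4096-point FFT, which gives a frequency increment of 1000/4096 ≈ 0.24 Hz.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1000.0                                     # 0.001-s sampling interval
b, a = butter(4, 200.0, btype="low", fs=fs)     # fourth-order, 200-Hz cutoff

def spectrum(displacement):
    """Low-pass filter a mass-point displacement record and transform it."""
    filtered = lfilter(b, a, displacement)
    n = 4096                                    # ~0.25-Hz frequency increment
    spec = np.fft.rfft(filtered[:n])
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec                          # amplitude and phase vs frequency
```
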
The amplitude and phase of the spectra at each resonance frequency were used to
form the mode shape components used as neural network input.

3.4 DISCUSSION OF RESULTS

The results from the computer simulation show that the trained neural network can
be used to estimate model parameters to within 3%, given appropriate eigenvalue/
eigenvector (resonance frequency and mode shape) information. Thus, the neural
network can be used to solve the “inverse problem,” provided the following conditions are
met:

1. The relationship between the neural network input (the eigenvalue/eigenvector


information) and the neural network output (the model parameters) must be single
valued.

2. The values of the parameters used as neural network input must be significantly
affected by changes in the neural network output parameters.

If the relationship between the neural network input and output is not single valued, the
model parameter estimates will be inaccurate. Training sets 1, 2, and 3 show this effect.
These training sets use only resonance frequencies as neural network input. Because
some of the spring rate combinations in the training set have virtually identical resonance
frequency values (the mode shape component values are different), one-to-one
correspondence between neural network input and output does not exist. The results
shown in Fig. 3.7 show that the model parameter estimates made by using neural networks
trained with training sets 1, 2, or 3 are less accurate than those made by using neural
networks trained with training sets 4 through 9. The relationship between the neural
network input and output is single valued in training sets 4 through 9 because these
training sets include mode shape components.

Figure 3.7 shows also the effect of the training set member spacing on the accuracy of
the model parameter estimates. As the number of members in the training set increased,
the spacing between members decreased because the range of spring rates used to form
the training sets remained constant. As the training set member spacing decreased, the
range over which the neural network must interpolate to estimate model parameters also
decreases, thus accounting for the improvement in the accuracy of the model parameter
estimates as the number of members in the training set is increased.
The results clearly show that the selection of eigenvalues and eigenvectors to form
the training sets was appropriate. The calculation of these quantities is relatively simple
and computationally efficient, especially when compared to extracting similar information
from a calculated time response. Finally, we showed also that equivalent information can
be extracted from a frequency spectrum and used to obtain an estimate of the model
parameters. Thus, taken together, these results show that the neural network can be
trained to solve the inverse problem in which the model spring rates are determined from
spectral features such as resonance frequencies and mode shape components. Because
these same spectral features can be extracted from measured signals, we anticipated that
similar results would be obtained from the hardware demonstration.

4. PHASE II: APPLYING THE DIAGNOSTIC


METHOD TO MEASURED DATA

The bench-top test unit and the data acquisition system are described in this chapter.
This equipment was used during the Phase II portion of the investigation to demonstrate
the use of the diagnostic method on a relatively simple mechanical system. In addition to
demonstrating the method, the Phase II work provides an indication of the effects of
modeling and measurement errors on the accuracy of the model parameter estimates.
Appendix C contains the detailed drawings of the bench-top test unit.

4.1 THE BENCH-TOP TEST UNIT

Descriptions of the bench-top test unit and the data-acquisition system are given in
this section.

4.1.1 Description of the Bench-Top Test Unit

The bench-top test unit consists of three main structural members: the frame base,
the top beam, and the test beam (Fig. 4.1). A motor-driven assembly consisting of an
electric motor, coupling, shaft, and bearings is mounted on the test beam. The
motor-driven assembly corresponds to the motor, coupling, and pump of the mathematical
model. The test beam corresponds to the base plate of the mathematical model. The
structural members of the bench-top test unit were designed to deflect only in the vertical
plane. This constraint is imposed to force consistency between the deflections of the
bench-top test unit and the calculated deflections of the mathematical model. The main
purpose of the frame base and the top beam is to constrain the motion of the test beam.
No motion constraint was applied to the motor/coupling/shaft assembly, because the motor
was not used and the motor/coupling/shaft was excited only in the vertical direction during
this investigation.
The frame base is constructed of 4-in. (10.16-cm) x 5.4 steel channel, 72 in. (1.8 m)
in length. At each end of the base are two horizontal stabilizing members and an 18-in.
(46-cm) vertical upright, each constructed of 4-in. (10.16-cm) x 5.4 steel channel and
welded to the horizontal base. Each upright has two 0.390-in.- (0.99-cm-) wide vertical
slots, 7.12 in. (18.1 cm) in length, to provide an adjustable attachment for the top beam.
The frame base is placed on a 0.125-in.- (0.32-cm-) thick neoprene rubber mat to provide
some isolation from the bench top. The frame base is bolted to the bench top through
the horizontal stabilizing members. The top beam is constructed of 4-in. (10.16-cm) x 5.4
steel channel and is bolted to the uprights at each end.
The test beam is constructed of 7-in. (17.8-cm) x 9.80 steel channel and is
constrained to move only in the vertical plane by four rollers, two on each end of the
beam. These rollers contact the faced outside surface of each upright and also act to
prevent twisting of the test beam. The test beam is mounted to the base and to the top
beam by four Firestone Model 1M1A air springs that correspond to the mounting springs
of the mathematical model.

The dependence of the spring rate on air pressure of the 1M1A air springs at the
design height of 2.5 in. (6.35 cm) is shown in Fig. 4.2. These spring rates may also be
calculated as a function of the internal pressure.

Fig. 4.2. Effect of internal pressure on the spring rate of the Firestone 1M1A air spring.
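
The report's pressure-to-spring-rate relation is not reproduced above; as a stand-in, the Fig. 4.2 curve can be tabulated and interpolated. The pressure and spring-rate values in the sketch below are placeholders, not numbers read from the figure or the manufacturer's data.

```python
import numpy as np

# Placeholder tabulation of the Fig. 4.2 curve (pressure in psig, spring
# rate in lb/in.); these numbers are illustrative only and must be replaced
# with values read from the manufacturer's curve at the 2.5-in. design height.
pressure_psig = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
spring_rate = np.array([150.0, 260.0, 370.0, 480.0, 590.0])

def rate_from_pressure(p_psig):
    """Linearly interpolate the air-spring rate at a given internal pressure."""
    return np.interp(p_psig, pressure_psig, spring_rate)
```
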
4.1.2 Description of the Data-Acquisition System

The data-acquisition system consisted of an IBM-compatible 486 personal computer


equipped with a 16-bit, multiple-channel digital data-acquisition board (AT-MIO-16X from
National Instruments Corporation). The LabVIEW data-acquisition package, also from
National Instruments Corporation [22], was used as the software driver. All data were
low-pass filtered by using a Rockland 852 active filter prior to digitization.

4.2 MATHEMATICAL MODEL OF THE BENCH-TOP TEST UNIT

The three-beam pump model described in Sect. 2.1 was modified to simulate the
vibration of the bench-top test unit. The modifications were as follows:

1. Four mass points were used to represent the mass distribution of the test beam. To
properly describe the uniform mass distribution of the test beam, the mass points
were located at each end of the beam and at one third of the test beam length from
each end.

2. The individual beam-bending stiffnesses between each mass point were eliminated
because the test beam is uniformly stiff along its length.

3. The mounting springs were moved to the ends of the test beam to more closely model
the attachment of the test beam to the frame base.

A diagram of the bench-top test unit model is shown in Fig. 4.1. Table 4.1 lists the
components represented by each lumped parameter in the model and lists also the
parameter values that describe the bench-top test unit. The mass matrix, the stiffness
matrix, and the undamped eigenvalues and eigenvectors were obtained by performing the
calculations described in Sect. 2.1.1.
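
For readers who wish to reproduce this type of calculation, the short sketch below solves
the undamped generalized eigenvalue problem K*phi = omega^2*M*phi for a small
lumped-parameter model by using SciPy. The 3 x 3 mass and stiffness matrices are
illustrative placeholders, not the Table 4.1 values for the bench-top test unit.

    # Sketch of the undamped eigenvalue/eigenvector calculation described in
    # Sect. 2.1.1.  The 3-DOF mass and stiffness matrices are illustrative
    # placeholders, not the Table 4.1 values for the bench-top test unit.
    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([12.0, 3.0, 1.5])             # lumped masses
    K = np.array([[ 900.0, -300.0,    0.0],   # lumped spring rates
                  [-300.0,  500.0, -200.0],
                  [   0.0, -200.0,  200.0]])

    # Solve K*phi = lambda*M*phi; the eigenvalues are the squared circular
    # resonance frequencies of the undamped system.
    eigvals, eigvecs = eigh(K, M)
    freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)

    for i, f in enumerate(freqs_hz):
        # Normalize each mode shape to its largest component for readability.
        shape = eigvecs[:, i] / np.max(np.abs(eigvecs[:, i]))
        print(f"mode {i + 1}: {f:6.2f} Hz, shape = {np.round(shape, 3)}")
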

The bearing radial spring rates were calculated by using Eq. (4.3).
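
Equation (4.3) is not reproduced above; the sketch below shows only the generic Hertzian
point-contact reasoning on which simple bearing-stiffness estimates of the kind given in
ref. 23 are based (radial deflection growing as load to the 2/3 power), and the load and
deflection values are hypothetical.

    # Generic ball-bearing radial stiffness sketch (hypothetical numbers; this
    # shows the Hertzian point-contact idea behind simple estimates such as
    # ref. 23, not a reproduction of Eq. (4.3)).
    #
    # For point contact the radial deflection grows as load**(2/3),
    #     delta = c * F**(2/3),
    # so the tangent stiffness is
    #     k = dF/d(delta) = 1.5 * F / delta.

    def ball_bearing_radial_stiffness(radial_load_lb, deflection_in):
        """Tangent radial stiffness (lb/in) from a load/deflection pair."""
        return 1.5 * radial_load_lb / deflection_in

    # Example: a 50-lb radial load producing 0.0002 in. of deflection.
    print(ball_bearing_radial_stiffness(50.0, 2.0e-4))   # -> 375000.0 lb/in
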

4.2.2 Measured Data

4.3 FORMATION OF THE TRAINING SETS AND TRAINING OF THE NEURAL NETWORK

Fig. 4.3. Comparison of the measured and calculated values of resonant frequency.
Fig. 4.4. Comparison of the known and estimated spring rate values for
spring rates contained in the training set.

4.4 RESULTS OBTAINED FROM APPLYING THE DIAGNOSTIC METHOD TO MEASURED DATA
Fig. 4.5. Comparison of the known and estimated spring rate values made
by using measured data.

4.5 DISCUSSION OF RESULTS

The demonstration clearly shows the ability of the diagnostic method to estimate the
mounting spring rate values to within 5 to 10%. Thus, it can be concluded from this
demonstration that a successful application of the monitoring method to detect, locate,
and estimate the magnitude of structural changes can be expected for mechanical systems
in which the relationship between the measured output (neural network input) and the
monitored system parameters (neural network output) is single valued and in which
significant changes in the monitored system parameters produce significant changes in
the measured output.
The three main sources of error in the model parameter estimates are modeling
errors, neural network errors, and measurement errors. The results from the computer
simulation indicate that the neural network errors are on the order of 2 to 3%.
The model error, as indicated by the comparison of the measured and calculated values shown
in Fig. 4.3, is believed to be approximately 5%. The measurement errors are also believed to be on
the order of 5%. These errors are attributed to leaky pressure regulators that allowed the
air spring pressure to vary during each measurement, a sticking pressure gauge on one air
spring, and the unit-to-unit variability that would cause errors in the air spring
pressure-to-spring rate conversion. Considering the task performed by the neural network,
it is expected that both modeling and measurement errors would have a significant effect
on the accuracy of the method’s parameter estimation. Either type of error would result
in a mismatch between the measured quantities and the training set input values, causing
poor parameter value estimation. Thus, it appears that both the modeling and the
measurements need to be carefully performed to achieve optimum parameter estimates
with the diagnostic method.

5. SUMMARY AND CONCLUSIONS

Computer simulation results and a demonstration using a bench-top test unit were
used to determine that the diagnostic method using frequency spectra to estimate
structural parameters can be successfully applied to detect and locate structural changes in
a mechanical system. In particular, it was shown that a neural network trained by using
eigenvalues and eigenvector components calculated by a mathematical model can be used
to estimate the structural condition of the mechanical system by using measurements of
resonance frequencies and mode shape components extracted from vibration spectra. We
concluded that the diagnostic method should be capable of being successfully applied to
monitor the structural condition of a mechanical system with the following characteristics:

1. The relationship between the measured parameters (neural network input) and the
monitored parameter (neural network output) must be single valued if the neural
network is to train properly.

2. Changes in the monitored parameters must have a significant effect on the values of
the measurable parameters.

A discussion of specific results and conclusions based on the computer simulation results
and on the demonstration using the bench-top test unit is presented next.

5.1 RESULTS AND CONCLUSIONS BASED ON THE COMPUTER SIMULATION RESULTS

The computer simulation addressed the following questions:

1. What effect do the training set composition and size have on the accuracy of the
model parameter estimates made by the neural network?

2. Is the formation of the training set or the neural network training so computationally
intensive that the diagnostic method is impractical?

3. Are eigenvalues and eigenvector components a practical choice for forming the neural
network training set, and is the information contained in these values sufficient for
the diagnostic method to accurately predict model parameters?

4. Can information equivalent to the eigenvalues and eigenvector components used to
form the training set be easily extracted from vibration spectra?

5. Can the trained neural network solve the “inverse problem,” that is, can the neural
network be used to accurately estimate the model parameters that correspond to a
given set of eigenvalues/eigenvectors (resonance frequencies and mode shapes)?
Thus, most of the important considerations concerning the applicability of the monitoring
method are addressed by the computer simulation portion of the investigation.
The simulation results show that the accuracy of the neural network parameter
estimation depends heavily on the composition and the “spacing” of the members in the
training set. If the training set composition is such that the relationship between the
neural network input and output is not single valued, the resulting model parameter
estimation is poor. An example of this behavior is the relatively high error associated with
training sets 1, 2, and 3 (Fig. 3.7). Because these training sets contained only resonance
frequencies as input, in some cases more than one combination of spring rates resulted in
nearly identical resonance frequency values. Including mode shape components in training
sets 4 through 9 avoids this problem, producing better parameter estimates, as shown by
the lower error values obtained by using these training sets.
Figure 3.7 shows also that the spacing between members in the training set affects the
accuracy of the estimated spring rates. As the number of training set members increases,
the spacing between adjacent members decreases because the parameter range remains
constant for all training sets. Thus, the interpolation between training set members
performed by the neural network when estimating spring rate values occurs over a smaller
interval as the training set member spacing decreases, resulting in more accurate spring
rate estimates.
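
The spacing effect can be illustrated with a toy version of the inverse problem. In the
sketch below a small feedforward network (scikit-learn is used purely for convenience;
the original work used the NeuralWorks backpropagation network) is trained on a coarse
and then a fine grid of simulated frequency and mode-shape inputs, and its interpolation
error between grid points is compared. The two-spring "model" is hypothetical.

    # Toy illustration of the training-set spacing effect: the same network is
    # trained on a coarse and a fine grid of simulated
    # (frequency, mode-shape) -> spring-rate pairs, and the interpolation error
    # midway between grid points is compared.  The two-spring "model" below is
    # hypothetical, and scikit-learn stands in for the original network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def features(k1, k2):
        """Stand-in vibration model: two spring rates -> two resonance
        frequencies and one mode-shape component."""
        return [np.sqrt(k1 + 0.3 * k2), np.sqrt(0.3 * k1 + k2), k1 / (k1 + k2)]

    def mean_error(n_grid):
        grid = np.linspace(100.0, 300.0, n_grid)
        X = np.array([features(a, b) for a in grid for b in grid])
        y = np.array([[a, b] for a in grid for b in grid])
        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=8000,
                           random_state=0).fit(X, y)
        # Test points fall midway between adjacent training-set members.
        mid = 0.5 * (grid[:-1] + grid[1:])
        Xt = np.array([features(a, b) for a in mid for b in mid])
        yt = np.array([[a, b] for a in mid for b in mid])
        return np.mean(np.abs(net.predict(Xt) - yt) / yt) * 100.0

    for n in (4, 8):   # coarse vs. fine training-set spacing
        print(f"{n} x {n} training grid: mean spring-rate error = {mean_error(n):.1f}%")
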
The amount of computation required to form the training set and train the neural
network was not prohibitively large in the applications of the diagnostic method used in
this work. Although the applicability of this statement is obviously limited by the relatively
simple models and the small number of parameters adjusted in this work, no reason is
apparent to expect prohibitively large calculations for significantly larger models. Thus,
this question remains open at this time but does not appear to pose a great threat to the
practical application of the diagnostic method.
The results obtained from the ACSL-calculated spectra clearly show that, as expected,
information equivalent to the calculated eigenvalues and eigenvector components can be
obtained from vibration spectra. These results also show that using calculated eigenvalues
and eigenvectors to form the neural network training set is an effective and
computationally efficient choice. However, one drawback to using eigenvector
components in the training set is that the number of required sensors equals the number
of eigenvector components contained in the training set. Thus, it may be impossible in
some applications to obtain all of the measurements required to produce the necessary
eigenvector (mode-shape) components.
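
As a simplified illustration of extracting this information from measured spectra (the
report itself uses the spectrum-fitting technique of Appendix A), the sketch below picks
resonance peaks from two synthetic sensor spectra and uses the peak-amplitude ratio
between sensors as a crude relative mode-shape component.

    # Simplified sketch of pulling resonance frequencies and relative
    # mode-shape components out of vibration spectra.  The report uses the
    # fitting technique of Appendix A; this peak-picking shortcut is only an
    # illustration, and the two-sensor signals are synthetic.
    import numpy as np
    from scipy.signal import welch, find_peaks

    fs = 1000.0                              # sample rate, Hz
    t = np.arange(0.0, 20.0, 1.0 / fs)
    rng = np.random.default_rng(1)

    # Two sensors see the same two resonances (12 and 31 Hz) with different
    # amplitudes; the amplitude ratio between sensors approximates the mode shape.
    x1 = np.sin(2 * np.pi * 12 * t) + 0.4 * np.sin(2 * np.pi * 31 * t) \
         + 0.2 * rng.standard_normal(t.size)
    x2 = 0.5 * np.sin(2 * np.pi * 12 * t) - 0.8 * np.sin(2 * np.pi * 31 * t) \
         + 0.2 * rng.standard_normal(t.size)

    f, p1 = welch(x1, fs=fs, nperseg=4096)
    _, p2 = welch(x2, fs=fs, nperseg=4096)

    peaks, _ = find_peaks(p1, height=0.1 * p1.max())
    for i in peaks:
        ratio = np.sqrt(p2[i] / p1[i])   # unsigned mode-shape component
        print(f"resonance near {f[i]:5.1f} Hz, |sensor 2 / sensor 1| = {ratio:4.2f}")
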
Finally, the results from the computer simulation, taken together, show that the
trained neural network can accurately (to within 3%) solve the inverse problem of
determining model parameters from the resonance frequencies and mode-shape
components. The results of the computer simulation indicate that the diagnostic method
should be applicable to real mechanical systems that have a single-valued relationship
between the neural network input and output and whose monitored parameters have a
significant effect on the measurable quantities.

5.2 RESULTS AND CONCLUSIONS BASED ON THE APPLICATION OF THE DIAGNOSTIC METHOD TO THE
BENCH-TOP TEST UNIT

The application of the diagnostic method to the bench-top test unit was intended
primarily as a demonstration of the method on a simple mechanical system. In addition to
demonstrating the method, an indication of the effect of modeling and measurement
errors on the method’s accuracy was obtained.
The demonstration clearly shows the capability of the diagnostic method to estimate
(to within 5 to 10%) values of the mounting spring rates. Thus, we concluded from the
demonstration that the diagnostic method can be used to detect, locate, and estimate the
magnitude of structural changes in mechanical systems that have a single-valued
relationship between neural network input and output and whose monitored parameters
significantly affect the measured parameters.
The effect of modeling and measurement errors on the method’s accuracy, although
not specifically investigated, is indicated by the results. The three main error sources are
modeling errors, neural network errors, and measurement errors. From the results shown
in Chap. 3 and in Fig. 4.4, the neural network errors are known to be on the order of 2%.
The model error, indicated by the comparison of calculated and measured values shown in
Fig. 4.3, is estimated to be on the order of 5%. The measurement errors, although not
quantified, are believed to be relatively large, on the order of 5%. These errors were due to
difficulty with the pressure regulators (which continuously bled air, changing the air spring
pressure), stickiness in one of the pressure gauges, and the unavoidable unit-to-unit
variability that introduced errors into the pressure-to-spring rate equation. Both modeling
and measurement errors will cause a mismatch between the measurements and the neural
network input values contained in the training set, resulting in poor system parameter
estimates. Thus, both the system modeling and the measurements must be performed with
a considerable level of care if the method is to provide accurate parameter estimates.
Note that this sensitivity to modeling error does not necessarily mean that
complicated mathematical models are always needed. The required model complexity will
depend on the dynamic characteristics that need to be measured to detect changes in the
monitored system parameters. If parameters such as the mounting spring rates need to be
monitored, as was done in this work, the rigid-body modes supply all the information
needed to detect and locate changes in these spring rates. These modes can be accurately
modeled by using a relatively simple model, as shown by the results in Chap. 4. If, on the
other hand, system parameters that affect higher-order modes, such as the shaft flexural
stiffness, need to be monitored, a more complex model would be needed.
Finally, it should be pointed out that, although the diagnostic method has been
presented only in connection with vibration signature interpretation, this is really a specific
application of a more general methodology. The general methodology can be summarized
in three steps: (1) form a training set from mathematical simulation results, (2) train a
neural network to estimate model input parameters from model output values, and (3) use
the neural network to monitor the system (simulation model) parameters over time, thus
detecting and identifying the source of changes in measured values. This methodology has
a range of application beyond vibration signature analysis. The technique should be
applicable to monitoring parameters in any system or process in which the relationship
between the measured output and the monitored parameters is single valued, in which the
measured output is sufficiently sensitive to the parameters being monitored, and which can
be accurately modeled. Thus, the results presented in this report, in addition to showing
that the diagnostic method can be applied to detect and locate the source of changes in
vibration signatures, also serve as a successful demonstration of the more general
methodology.
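
A hypothetical skeleton of the three-step methodology is sketched below. The function
names, the nearest-neighbor stand-in for the neural network, and the drift tolerance are
placeholders introduced only for illustration and are not part of the report.

    # Hypothetical skeleton of the three-step general methodology: (1) form the
    # training set from simulation runs, (2) train an inverse-mapping
    # estimator, and (3) use it to track the monitored parameters over time.
    # All names and values are placeholders.
    import numpy as np

    def form_training_set(simulate, parameter_grid):
        """Step 1: run the mathematical model over a grid of parameter values."""
        X = np.array([simulate(p) for p in parameter_grid])   # model outputs
        y = np.array(parameter_grid)                          # model inputs
        return X, y

    def train_estimator(X, y):
        """Step 2: any interpolator works in principle; nearest neighbor
        stands in here for the backpropagation network used in the report."""
        def estimate(measurement):
            i = np.argmin(np.linalg.norm(X - measurement, axis=1))
            return y[i]
        return estimate

    def monitor(estimate, measurements, baseline, tolerance=0.10):
        """Step 3: flag measurements whose estimated parameters drift from the
        baseline values by more than the fractional tolerance."""
        for m in measurements:
            p = estimate(m)
            if np.any(np.abs(p - baseline) / baseline > tolerance):
                print("parameter change detected:", np.round(p, 1))

    # Toy usage with a hypothetical two-parameter model.
    sim = lambda p: np.array([np.sqrt(p[0]), np.sqrt(p[0] + p[1])])
    grid = [(a, b) for a in range(100, 301, 25) for b in range(100, 301, 25)]
    X, y = form_training_set(sim, grid)
    monitor(train_estimator(X, y),
            [sim((200, 200)), sim((200, 140))],
            baseline=np.array([200.0, 200.0]))
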

6. REFERENCES

1. J. A. Thie, Power Reactor Noise, American Nuclear Society, LaGrange Park, Ill., 1981.

2. J. S. Mitchell, An Introduction to Machinery Analysis and Monitoring, PennWell
Publishing, Tulsa, Okla., 1981.

3. F. J. Sweeney, Utility Guidelines for Reactor Noise Analysis, EPRI NP-4970, Electric
Power Research Institute, Palo Alto, Calif., 1987.

4. B. Damiano and R. C. Kryter, Current Applications of Vibration Monitoring and
Neutron Noise Analysis: Detection and Analysis of Structural Degradation of Reactor
Vessel Internals from Operational Aging, NUREG/CR-5479, ORNL/TM-11398, Oak
Ridge National Laboratory, 1990.

5. F. J. Sweeney and D. N. Fry, “Thermal Shield Support Degradation in Pressurized
Water Reactors,” Flow-Induced Vibration–1986, PVP-Vol. 104, American Society of
Mechanical Engineers, New York, 1986.

6. B. T. Lubin, R. Longo, and T. Hammel, “Analysis of Internals Vibration Monitoring
and Loose Part Monitoring System Data Related to the St. Lucie Thermal Shield
Failure,” Prog. Nucl. Energy 21, 117-26 (1987).

7. R. Sunder and D. Wach, “Reactor Diagnosis Using Vibration and Noise Analysis in
PWRs,” in Operational Safety of Nuclear Power Plants I, International Atomic Energy
Agency, Vienna (1984).

8. C. Puyal et al., “Solution to Thimble Vibrations of French and Belgian Reactors
Using Accelerometers and Neutron Noise,” 19th Informal Meeting on Reactor Noise,
Rome, June 4-6, 1986.

9. W. T. Thomson, Theory of Vibration with Applications, Prentice Hall, Englewood
Cliffs, N.J., 1972.

10. Advanced Continuous Simulation Language Reference Manual, Ed. 10.0, Mitchell &
Gauthier Associates, Inc., Concord, Mass., 1991.

11. R. T. Wood, “A Neutron Noise Diagnostic Methodology for Pressurized Water
Reactors,” Ph.D. dissertation, The University of Tennessee, Knoxville, 1990.

12. R. T. Wood and R. B. Perez, “Modeling and Analysis of Neutron Noise from an
Ex-Core Detector at a Pressurized Water Reactor,” SMORN VI, A Symposium on
Nuclear Reactor Surveillance and Diagnostics, Gatlinburg, Tenn., May 19-24, 1991.

13. D. O. Hebb, The Organization of Behavior, A Neuropsychological Theory, Wiley,
New York, 1949.

14. T. Kohonen, Self-Organization and Associative Memory, 3rd ed., Springer-Verlag,
Berlin, 1989.

15. G. A. Carpenter and S. Grossberg, “A Massively Parallel Architecture for a
Self-Organizing Neural Pattern Recognition Machine,” Comput. Vision Graphics
Image Process. 37 (1987).

16. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley,
New York, 1973.

17. K. S. Narendra and K. Parthasarathy, “Identification and Control of Dynamical
Systems Using Neural Networks,” IEEE Trans. Neural Networks 1(1) (March 1990).

18. A. Lapedes and R. Farber, Nonlinear Signal Processing Using Neural Networks:
Prediction and System Modeling, LA-UR-87-2662, Los Alamos National Laboratory,
June 1987.

19. M. Smith, Neural Networks for Statistical Modeling, Van Nostrand Reinhold,
New York, 1993.

20. NeuralWorks Professional II/PLUS and NeuralWorks Explorer Reference Guide,
NeuralWare, Inc., Pittsburgh, 1991.

21. Firestone Airstroke Actuators, Airmount Isolators Engineering Manual and Design
Guide, Firestone Industrial Products Company, Noblesville, Ind.

22. LabVIEW Users Manual, National Instruments Corp., Austin, Tex., 1993.

23. E. P. Gargiulo, “A Simple Way to Estimate Bearing Stiffness,” Mach. Des., 107-10
(July 24, 1980).
Appendix A

FREQUENCY SPECTRUM DECOMPOSITION TECHNIQUE



A model describing the frequency spectrum resulting from the vibration of a
mechanical system was developed at Oak Ridge National Laboratory and successfully
applied to detector noise data obtained from a nuclear power plant (ref. 1). This model
characterizes peaks in the noise spectra in terms of frequency, width, amplitude, and
skewness. As a result, a frequency spectrum can be decomposed into parameters
representing the mechanical motion of system components.
The response of a detector to vibrations depends on the interaction of the vibratory
motion with the detection process. For the derivation of the detector spectrum model,
the vibration amplitudes of the system are convolved with window (or detector field of
view) functions, which describe the physics of the detection process, to represent the
detector response to each motion. Contributions from each resonance motion to the
detector signal are summed to give the total detector response. An expression for the
power spectral density (PSD) of the measured signal is formed by using the Fourier
transform of the detector response. The detector PSD is given by Eq. (A.1).
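
Because Eq. (A.1) is not reproduced here, the sketch below shows only a generic
decomposable spectrum model of the kind described: a sum of resonance peaks, each
characterized by frequency, width, and amplitude, plus a background term (the skewness
parameter is omitted for brevity). The model is fitted by least squares to a synthetic
spectrum; none of the numbers come from the report.

    # Generic sketch of a decomposable detector PSD model: a sum of resonance
    # peaks (frequency, width, amplitude) plus a background term, fitted by
    # least squares.  This is not Eq. (A.1) from the report (skewness is
    # omitted), and all numbers are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(f, f0, width, amp):
        """Single resonance peak centered at f0 with half-width `width`."""
        return amp * width**2 / ((f - f0)**2 + width**2)

    def psd_model(f, f1, w1, a1, f2, w2, a2, bkg):
        """Two-peak spectrum plus a flat background."""
        return lorentzian(f, f1, w1, a1) + lorentzian(f, f2, w2, a2) + bkg

    # Synthetic "measured" PSD with two resonances and multiplicative noise.
    f = np.linspace(0.0, 50.0, 500)
    true = psd_model(f, 12.0, 0.8, 5.0, 31.0, 1.5, 2.0, 0.1)
    measured = true * (1.0 + 0.05 * np.random.default_rng(2).standard_normal(f.size))

    # Least-squares fit; the initial guess plays the role of the a priori
    # parameter estimates carried forward from the previous monitoring cycle.
    p0 = [11.0, 1.0, 4.0, 30.0, 1.0, 1.5, 0.2]
    popt, _ = curve_fit(psd_model, f, measured, p0=p0)
    print(np.round(popt, 2))   # fitted frequencies, widths, amplitudes, background
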

Other peaks not initially present may appear during the monitoring cycle. The effect
of these submerged peaks can be accounted for by adding fitting peaks to represent them.
Because it is important to compare fitted parameters from comparable models, the fitting
algorithm can be configured to fit the model with a varying number of peaks during each
measurement to provide a set of resonance parameters that can be used should
unanticipated submerged peaks appear or visible peaks disappear.
The detector spectrum model has been implemented as user-supplied functions and
derivative subroutines in a generalized least-squares fitting code. The background term
can be formed from a low-frequency dynamic feedback model (if available) or can be user
supplied. The background parameter included in the detector spectrum model fit
represents the integral magnitude of the feedback dynamics contribution to the spectra. If
no such information is available about the process dynamics, then the background can be
addressed by fitting an additional, nonphysical peak to the low-frequency portion of the
spectrum.
The values of the resonance parameters for each fit following the first can be used as
initial parameter estimates for the subsequent fit. In this way, insight into the evolution of
the spectra gained at each application of parameter estimation is available as an a priori
input to the next fit. This insight can prove valuable in the cases where component
vibrations become closely coupled during a monitoring cycle.

REFERENCE

1. R. T. Wood and R. B. Perez, “Modeling and Analysis of Neutron Noise from an
Ex-Core Detector at a Pressurized Water Reactor,” SMORN VI, A Symposium on
Nuclear Reactor Surveillance and Diagnostics, Gatlinburg, Tenn., May 19-24, 1991.
Appendix B

PHASE I MODE SHAPES



Appendix C

DETAILED DRAWINGS OF THE BENCH-TOP TEST UNIT

Appendix D

MEASURED RESONANCE FREQUENCIES AND MODE-SHAPE COMPONENTS
FROM THE BENCH-TOP TEST UNIT
