Nonlinearity in Structural Dynamics Chapter 04


Chapter 4

The Hilbert transform—a practical approach

4.1 Introduction
The Hilbert transform is a mathematical tool which allows one to investigate
the causality, stability and linearity of passive systems. In this chapter its main
application will be to the detection and identification of nonlinearity. The theory
can be derived by two independent approaches: the first, which is the subject of
this chapter, relies on the decomposition of a function into odd and even parts and
the behaviour of this decomposition under Fourier transformation. The second
method is more revealing but more complicated, relying as it does on complex
analysis; discussion of this is postponed until the next chapter.
The Hilbert transform is an integral transform of the same family as
the Fourier transform; the difference is in the kernel function. The complex
exponential e^{iωt} is replaced by the function -1/(iπ(Ω - ω)), so if the Hilbert
transform operator is denoted by H, its action on functions¹ is given by²

    H{G(ω)} = G̃(ω) = -(1/iπ) PV ∫_{-∞}^{∞} dΩ G(Ω)/(Ω - ω)    (4.1)

where PV denotes the Cauchy principal value of the integral, which is needed as
the integrand is singular, i.e. has a pole at Ω = ω. To maintain simplicity of
notation, the PV will be omitted in the following discussions, as it will be clear
from the integrands which expressions need it. The tilde is used to denote the
transformed function.

¹ In this chapter and the following, the functions of interest will generally be denoted g(t) and G(ω)
to indicate that the objects are not necessarily from linear or nonlinear systems. Where it is important
to make a distinction, h(t) and H(ω) will be used for linear systems and λ(t) and Λ(ω) will be used
for nonlinear.
² This differs from the original transform defined by Hilbert and used by mathematicians, by the
introduction of a prefactor 1/i = -i. It will become clear later why the additional constant is useful.


The Hilbert transform and Fourier transform also differ in their
interpretation. The Fourier transform is considered to map functions of time
to functions of frequency and vice versa. In contrast, the Hilbert transform is
understood to map functions of time or frequency into the same domain, i.e.

    H{G(ω)} = G̃(ω)    (4.2)
    H{g(t)} = g̃(t).    (4.3)
The Hilbert transform has long been the subject of study by mathematicians;
a nice pedagogical study can be found in [204]. In recent times it has been adopted
as a useful tool in signal processing, communication theory and linear dynamic
testing. A number of relevant references are [24, 43, 49, 65, 81, 89, 105, 116,
126, 130, 151, 210, 211, 247, 255]. The current chapter is intended as a survey
of the Hilbert transform's recent use in the testing and identification of nonlinear
structures.

4.2 Basis of the method


4.2.1 A relationship between real and imaginary parts of the FRF
The discussion begins with a function of time g(t) which has the property that
g(t) = 0 when t < 0. By a slight abuse of terminology, such functions will be
referred to henceforth as causal.
Given any function g(t), there is a decomposition

    g(t) = g_even(t) + g_odd(t) = (1/2)(g(t) + g(-t)) + (1/2)(g(t) - g(-t))    (4.4)

as depicted in figure 4.1. If, in addition, g(t) is causal, it follows that

    g_even(t) = { g(|t|)/2,  t > 0
                { g(|t|)/2,  t < 0    (4.5)

and

    g_odd(t) = {  g(|t|)/2,  t > 0
               { -g(|t|)/2,  t < 0.    (4.6)

That this is only true for causal functions is shown by the simple
counterexample in figure 4.2. It follows immediately from equations (4.5) and
(4.6) that

    g_even(t) = g_odd(t)·ε(t)    (4.7)
    g_odd(t) = g_even(t)·ε(t)    (4.8)

where ε(t) is the signum function, defined by³

    ε(t) = {  1,  t > 0
           {  0,  t = 0
           { -1,  t < 0.    (4.9)
3 The notation sgn(t) is often used.
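These identities are easy to check numerically. The following sketch (an illustration of the identities, not code from the book; it assumes NumPy is available) builds a causal exponential on a symmetric time grid and verifies equations (4.4) and (4.7)-(4.9) away from t = 0:

```python
import numpy as np

# Symmetric time grid including t = 0, and a causal test function g(t)
t = np.linspace(-5.0, 5.0, 2001)
g = np.where(t >= 0.0, np.exp(-t), 0.0)

# Even/odd decomposition, equation (4.4); g(-t) is g reversed on this grid
g_even = 0.5 * (g + g[::-1])
g_odd = 0.5 * (g - g[::-1])

# Signum function, equation (4.9)
eps = np.sign(t)

# Equations (4.7) and (4.8) hold pointwise for t != 0
mask = t != 0.0
ok_44 = np.allclose(g, g_even + g_odd)
ok_47 = np.allclose(g_even[mask], g_odd[mask] * eps[mask])
ok_48 = np.allclose(g_odd[mask], g_even[mask] * eps[mask])
```

For a non-causal input, e.g. np.exp(-np.abs(t)), ok_47 and ok_48 become False, which is the counterexample of figure 4.2 in miniature.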


Figure 4.1. Decomposition of a causal function into odd and even parts.

Assuming that the Fourier transform of g(t) is defined, it is straightforward
to show that

    Re G(ω) = F{g_even(t)}    (4.10)

and

    i Im G(ω) = F{g_odd(t)}.    (4.11)

Substituting equations (4.7) and (4.8) into these expressions yields

    Re G(ω) = F{g_odd(t)·ε(t)}    (4.12)
    i Im G(ω) = F{g_even(t)·ε(t)}.    (4.13)

Now, noting that multiplication of functions in the time domain corresponds
to convolution in the frequency domain, and that F{ε(t)} = -i/(πω) (see
appendix D), equations (4.12) and (4.13) become

    Re G(ω) = i Im G(ω) * (-i/(πω))    (4.14)
    i Im G(ω) = Re G(ω) * (-i/(πω))    (4.15)

using the standard definition of convolution,

    X(ω) * Y(ω) = ∫_{-∞}^{∞} dΩ X(Ω) Y(ω - Ω).    (4.16)

Figure 4.2. Counterexample decomposition for a non-causal function.

Equations (4.14) and (4.15) can be brought into the final forms

    Re G(ω) = -(1/π) ∫_{-∞}^{∞} dΩ Im G(Ω)/(Ω - ω)    (4.17)
    Im G(ω) = +(1/π) ∫_{-∞}^{∞} dΩ Re G(Ω)/(Ω - ω).    (4.18)

It follows from these expressions that the real and imaginary parts of
a function G(ω), the Fourier transform of a causal function g(t), are not
independent. Given one quantity, the other is uniquely specified. (Recall that
these integrals are principal value integrals.) Equations (4.17) and (4.18) can
be combined into a single complex expression by forming G(ω) = Re G(ω) +
i Im G(ω); the result is

    G(ω) = -(1/iπ) ∫_{-∞}^{∞} dΩ G(Ω)/(Ω - ω).    (4.19)

Now, applying the definition of the Hilbert transform in equation (4.1) yields

    G(ω) = G̃(ω) = H{G(ω)}.    (4.20)

So G(ω), the Fourier transform of a causal g(t), is invariant under the
Hilbert transform, and Re G(ω) and Im G(ω) are said to form a Hilbert transform
pair. Now, recall from chapter 1 that the impulse response function h(t) of a
linear system is causal; this implies that the Fourier transform of h(t)—the FRF
H(ω)—is invariant under Hilbert transformation. It is this property which will be
exploited in later sections in order to detect nonlinearity, as FRFs from nonlinear
systems are not guaranteed to have this property.
Further simplifications to these formulae follow from a consideration of the
parity (odd or even) of the functions Re G(ω) and Im G(ω). In fact, Re G(ω) is
even,

    Re G(-ω) = ∫_{-∞}^{∞} dt g(t) cos(-ωt) = ∫_{-∞}^{∞} dt g(t) cos(ωt) = Re G(ω)    (4.21)

and Im G(ω) is odd, or conjugate-even,

    Im G(-ω) = -∫_{-∞}^{∞} dt g(t) sin(-ωt) = ∫_{-∞}^{∞} dt g(t) sin(ωt)
             = -Im G(ω) = Im Ḡ(ω)    (4.22)

where the overline denotes complex conjugation.

Using the parity of Im G(ω), equation (4.17) can be rewritten:

    Re G(ω) = -(1/π) ∫_{-∞}^{∞} dΩ Im G(Ω)/(Ω - ω)
            = -(1/π) [ ∫_{-∞}^{0} dΩ Im G(Ω)/(Ω - ω) + ∫_{0}^{∞} dΩ Im G(Ω)/(Ω - ω) ]
            = -(1/π) [ ∫_{0}^{∞} dΩ Im G(-Ω)/(-Ω - ω) + ∫_{0}^{∞} dΩ Im G(Ω)/(Ω - ω) ]
            = -(1/π) [ ∫_{0}^{∞} dΩ Im G(Ω)/(Ω + ω) + ∫_{0}^{∞} dΩ Im G(Ω)/(Ω - ω) ]
            = -(2/π) ∫_{0}^{∞} dΩ Ω Im G(Ω)/(Ω² - ω²)    (4.23)

and similarly

    Im G(ω) = (2ω/π) ∫_{0}^{∞} dΩ Re G(Ω)/(Ω² - ω²).    (4.24)

These equations are often referred to as the Kramers–Kronig relations [154].
The advantage of these forms over (4.17) and (4.18) is simply that the range of
integration is halved and one of the infinite limits is removed.

4.2.2 A relationship between modulus and phase


Suppose G(ω), the Fourier transform of causal g(t), is expressed in terms of gain
and phase:

    G(ω) = |G(ω)| e^{iφ(ω)}    (4.25)

where

    |G(ω)| = √((Re G(ω))² + (Im G(ω))²)    (4.26)

and

    φ(ω) = tan⁻¹(Im G(ω)/Re G(ω)).    (4.27)

Taking the natural logarithm of (4.25) yields⁴

    log G(ω) = log |G(ω)| + iφ(ω).    (4.28)

Unfortunately, log |G(ω)| and φ(ω), as they stand, do not form a Hilbert
transform pair. However, it can be shown that the function (log G(ω) -
log G(0))/ω is invariant under the transform and so the functions (log |G(ω)| -
log |G(0)|)/ω and (φ(ω) - φ(0))/ω do form such a pair. If, in addition, the
minimum phase condition φ(0) = 0 is assumed, the Hilbert transform relations
can be written:

    log |G(ω)| - log |G(0)| = -(2ω²/π) ∫_{0}^{∞} dΩ φ(Ω)/(Ω(Ω² - ω²))    (4.29)
    φ(ω) = (2ω/π) ∫_{0}^{∞} dΩ (log |G(Ω)| - log |G(0)|)/(Ω² - ω²).    (4.30)

The effort involved in deriving these equations rigorously is not justified as
they shall play no further part in the development; they are included mainly for
completeness. They are of some interest as they allow the derivation of FRF
phase from FRF modulus information, which is available if one has some means
of obtaining auto-power spectra, as

    |H(ω)| = √(S_yy(ω)/S_xx(ω)).    (4.31)

4.3 Computation
Before proceeding to applications of the Hilbert transform, some discussion of
how to compute the transform is needed. Analytical methods are not generally
applicable; nonlinear systems will provide the focus of the following discussion
and closed forms for the FRFs of nonlinear systems are not usually available.
Approximate FRFs, e.g. from harmonic balance (see chapter 3), lead to integrals
(4.1) which cannot be evaluated in closed form. It is therefore assumed that a
vector of sampled FRF values G(ω_i), i = 1, …, N, will constitute the available
data, and numerical methods will be applied. For simplicity, equal spacing Δω
of the data will be assumed. A number of methods for computing the transform
are discussed in this section.

⁴ Assuming the principal sheet for the log function.

Figure 4.3. Integration mesh for direct Hilbert transform evaluation.

4.3.1 The direct method

This, the most direct approach, seeks to estimate the frequency-domain integrals
(4.17) and (4.18). In practice, the Kramers–Kronig relations (4.23) and (4.24)
are used, as the range of integration is simplified. Converting these expressions to
discrete sums yields

    Re G̃(ω_i) = -(2/π) Δω Σ_{j=1}^{N} ω_j Im G(ω_j)/(ω_j² - ω_i²)    (4.32)
    Im G̃(ω_i) = (2ω_i/π) Δω Σ_{j=1}^{N} Re G(ω_j)/(ω_j² - ω_i²)    (4.33)

and some means of avoiding the singularity at ω_j = ω_i is needed. This
approximation is the well-known rectangle rule. It can be lifted in accuracy
to the trapezium rule with very little effort. The rectangular sub-areas should
be summed as in figure 4.3, with half-width rectangles at the ends of the range.
The singularity is avoided by taking a double-width step. The effect of the latter
strategy can be ignored if Δω is appropriately small.
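A minimal sketch of the direct method, applied to a simulated SDOF receptance FRF (illustrative code, assuming NumPy; the parameter values are arbitrary):

```python
import numpy as np

# Simulated SDOF receptance FRF: G(w) = 1/(k - m w^2 + i c w)
m, c, k = 1.0, 20.0, 1.0e4           # natural frequency 100 rad/s, 10% damping
dw = 0.5
W = dw * np.arange(1, 4001)          # equally spaced grid, 0.5 to 2000 rad/s
G = 1.0 / (k - m * W**2 + 1j * c * W)

def hilbert_direct(W, G, i, dw):
    """Equations (4.32)-(4.33): rectangle-rule Kramers-Kronig sums,
    skipping the singular point w_j = w_i (the 'double-width step')."""
    mask = np.arange(len(W)) != i
    Wm, wi = W[mask], W[i]
    re = -(2.0 / np.pi) * dw * np.sum(Wm * G.imag[mask] / (Wm**2 - wi**2))
    im = (2.0 * wi / np.pi) * dw * np.sum(G.real[mask] / (Wm**2 - wi**2))
    return re + 1j * im

# Overlay check at a few in-band frequencies (50, 100, 150 rad/s)
idx = [99, 199, 299]
G_tilde = np.array([hilbert_direct(W, G, i, dw) for i in idx])
err = np.max(np.abs(G_tilde - G[idx])) / np.max(np.abs(G))
```

For this linear system the transform overlays the FRF to within a small fraction of the peak magnitude, which is the behaviour shown in figure 4.4.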

Figure 4.4. Hilbert transform of a simulated SDOF linear system showing perfect overlay.

Figure 4.4 shows a linear system FRF with the Hilbert transform
superimposed. Almost perfect overlay is obtained. However, there is an important
assumption implicit in this calculation, i.e. that ω_1 = 0 and that ω_N can be
substituted for the infinite upper limit of the integral with impunity. If the integrals
from 0 to ω_1 or ω_N to infinity in (4.23) and (4.24) are non-zero, the estimated
Hilbert transform is subject to truncation errors. Figure 4.5 shows the effect of
truncation on the Hilbert transform of a zoomed linear system FRF.

Figure 4.5. Hilbert transform of a simulated SDOF linear system showing truncation
problems.

4.3.2 Correction methods for truncated data

There are essentially five methods of correcting Hilbert transforms for truncation
errors; they will now be described in order of complexity.

4.3.2.1 Conversion to receptance


This correction is only applicable to data with ! 1 = 0, commonly referred to
as baseband data. The principle is very simple; as the high-frequency decay
of receptance FRF data is faster (O(! 2 )) than mobility or accelerance data
(O(! 1 ) and O(1) respectively), the high-frequency truncation error for the latter
forms of the FRF is reduced by initially converting them to receptance, carrying
out the Hilbert transform, and converting them back. The relations between the
forms are
HI (!) = i!HM (!) = !2HR (!): (4.34)
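In code the conversions are one-liners; the helper below is a hypothetical illustration (assuming NumPy arrays with ω > 0) of the relations (4.34):

```python
import numpy as np

def to_receptance(G, W, frf_type):
    """Convert mobility or accelerance data to receptance using
    H_I(w) = i w H_M(w) = -w^2 H_R(w); W must exclude w = 0."""
    if frf_type == "receptance":
        return G
    if frf_type == "mobility":
        return G / (1j * W)
    if frf_type == "accelerance":
        return -G / W**2
    raise ValueError("unknown FRF type")

# Round-trip check on a simulated SDOF system
m, c, k = 1.0, 20.0, 1.0e4
W = np.linspace(1.0, 500.0, 1000)
H_R = 1.0 / (k - m * W**2 + 1j * c * W)   # receptance
H_M = 1j * W * H_R                        # mobility
H_I = -W**2 * H_R                         # accelerance
ok = (np.allclose(to_receptance(H_M, W, "mobility"), H_R)
      and np.allclose(to_receptance(H_I, W, "accelerance"), H_R))
```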

4.3.2.2 The Fei correction term


This approach was developed by Fei [91] for baseband data and is based on the
asymptotic behaviour of the FRFs of linear systems. The form of the correction
term is entirely dependent on the FRF type: receptance, mobility or accelerance.
As each of the correction terms is similar in principle, only the term for mobility
will be described.
The general form of the mobility function for a linear system with
proportional damping is

    H_M(ω) = Σ_{k=1}^{N} iωA_k/(ω_k² - ω² + 2iζ_k ω_k ω)    (4.35)

where A_k is the complex modal amplitude of the kth mode, ω_k is the undamped
natural frequency of the kth mode and ζ_k is its viscous damping ratio. By
assuming that the damping is small and that the truncation frequency, ω_max, is
much higher than the natural frequency of the highest mode, equation (4.35) can
be reduced to (for ω > ω_max)

    H_M(ω) = -i Σ_{k=1}^{N} A_k/ω    (4.36)

which is an approximation to the 'out-of-band' FRF. This term is purely imaginary
and thus provides a correction for the real part of the Hilbert transform via
equation (4.32). No correction term is applied to the imaginary part as the error
is assumed to be small under the specified conditions.
The actual correction is the integral in equation (4.1) over the interval
(ω_max, ∞). Hence the correction term, denoted C_R(ω), for the real part of the
Hilbert transform is

    C_R(ω) = -(2/π) ∫_{ω_max}^{∞} dΩ Ω Im(G(Ω))/(Ω² - ω²)
           = (2/π) Σ_{k=1}^{N} A_k ∫_{ω_max}^{∞} dΩ/(Ω² - ω²)    (4.37)

which, after a little algebra [91], leads to

    C_R(ω) = -(ω_max Im(G(ω_max))/(πω)) ln((ω_max + ω)/(ω_max - ω)).    (4.38)
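The correction is cheap to apply since it only needs the measured FRF value at the truncation frequency. The sketch below (illustrative, assuming NumPy; a single-mode mobility FRF with arbitrary parameters) adds the term (4.38) to a truncated Kramers–Kronig sum and compares against the exact real part:

```python
import numpy as np

# Single-mode mobility FRF, equation (4.35): A = 1, wn = 100 rad/s, zeta = 0.1
A, wn, zeta = 1.0, 100.0, 0.1
def H_M(w):
    return 1j * w * A / (wn**2 - w**2 + 2j * zeta * wn * w)

wmax = 300.0                           # truncation frequency
dw = 0.05
W = dw / 2 + dw * np.arange(int(wmax / dw))   # baseband grid on (0, wmax)
G = H_M(W)

w0 = W[1599]                           # evaluation frequency, about 80 rad/s
mask = np.abs(W - w0) > dw / 2         # principal-value skip
re_trunc = -(2.0 / np.pi) * dw * np.sum(W[mask] * G.imag[mask]
                                        / (W[mask]**2 - w0**2))

# Fei correction term, equation (4.38), using the measured value at wmax
C_R = -(wmax * H_M(wmax).imag / (np.pi * w0)) * np.log((wmax + w0) / (wmax - w0))

re_true = H_M(w0).real
err_raw = abs(re_trunc - re_true)      # truncated estimate
err_cor = abs(re_trunc + C_R - re_true)  # corrected estimate
```

The corrected estimate removes most of the truncation error even though ω_max here is only three times the natural frequency.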
4.3.2.3 The Haoui correction term
The second correction term, which again caters specifically for baseband data,
is based on a different approach. The term was developed by Haoui [130], and
unlike the Fei correction has a simple expression independent of the type of FRF
data used. The correction for the real part of the Hilbert transform is

    C_R(ω) = -(2/π) ∫_{ω_max}^{∞} dΩ Ω Im(G(Ω))/(Ω² - ω²).    (4.39)

The analysis proceeds by assuming a Taylor expansion for G(ω) about ω_max
and expanding the term (1 - ω²/Ω²)⁻¹ using the binomial theorem. If it is
assumed that ω_max is not close to a resonance, so that the slope dG(ω)/dω (and
higher derivatives) can be neglected, a straightforward calculation yields

    C_R(ω) = C_R(0) - (Im(G(ω_max))/π) [ω²/ω_max² + ω⁴/(2ω_max⁴) + ⋯]    (4.40)

where C_R(0) is estimated from

    C_R(0) = Re(G(0)) + (2/π) ∫_{0+}^{ω_max} dΩ Im(G(Ω))/Ω.    (4.41)

Using the same approach, the correction term for the imaginary part, denoted
by C_I(ω), can be obtained:

    C_I(ω) = (2/π) Re(G(ω_max)) [ω/ω_max + ω³/(3ω_max³) + ω⁵/(5ω_max⁵) + ⋯].    (4.42)
4.3.2.4 The Simon correction method
This method of correction was proposed by Simon [229]; it allows for truncation
at a low frequency, ω_min, and a high frequency, ω_max. It is therefore suitable for
use with zoomed data. This facility makes the method the most versatile so far.
As before, it is based on the behaviour of the linear FRF, say equation (4.35)
for mobility data. Splitting the Hilbert transform over three frequency ranges,
(0, ω_min), (ω_min, ω_max) and (ω_max, ∞), the truncation errors on the real part of
the Hilbert transform, B_R(ω) at low frequency and the now familiar C_R(ω) at
high frequency, can be written as

    B_R(ω) = -(2/π) ∫_{0}^{ω_min} dΩ Ω Im(G(Ω))/(Ω² - ω²)    (4.43)

and

    C_R(ω) = -(2/π) ∫_{ω_max}^{∞} dΩ Ω Im(G(Ω))/(Ω² - ω²).    (4.44)

If the damping can be assumed to be small, then rewriting equations (4.43)
and (4.44) using the mobility form (4.35) yields

    B_R(ω) = (2/π) ∫_{0}^{ω_min} dΩ Σ_{k=1}^{N} Ω²A_k/((Ω² - ω_k²)(Ω² - ω²))    (4.45)

and

    C_R(ω) = (2/π) ∫_{ω_max}^{∞} dΩ Σ_{k=1}^{N} Ω²A_k/((Ω² - ω_k²)(Ω² - ω²)).    (4.46)

Evaluating these integrals gives

    B_R(ω) + C_R(ω) = Σ_{k=1}^{N} (A_k/(π(ω_k² - ω²)))
        × { ω_k ln[(ω_max + ω_k)(ω_k - ω_min)/((ω_max - ω_k)(ω_k + ω_min))]
            + ω ln[(ω + ω_min)(ω_max - ω)/((ω - ω_min)(ω_max + ω))] }.    (4.47)

The values of the modal parameters A_k and ω_k are obtained from an initial
modal analysis.
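The closed form (4.47) can be checked against direct quadrature of (4.45) and (4.46). The sketch below (illustrative, assuming NumPy; a single mode with A = 1, ω_n = 100 rad/s and a zoom band of 50-300 rad/s) does this for one in-band frequency:

```python
import numpy as np

A, wn = 1.0, 100.0                     # modal amplitude and natural frequency
wmin, wmax, w0 = 50.0, 300.0, 80.0     # zoom band and evaluation frequency

# Closed-form total truncation error, equation (4.47), single mode
closed = (A / (np.pi * (wn**2 - w0**2))) * (
    wn * np.log((wmax + wn) * (wn - wmin) / ((wmax - wn) * (wn + wmin)))
    + w0 * np.log((w0 + wmin) * (wmax - w0) / ((w0 - wmin) * (wmax + w0))))

# Direct quadrature of (4.45) and (4.46) with the undamped mobility form
def f(W):
    return W**2 * A / ((W**2 - wn**2) * (W**2 - w0**2))

dlo = 0.01
lo = dlo / 2 + dlo * np.arange(int(wmin / dlo))           # midpoints on (0, wmin)
dhi = 0.5
hi = wmax + dhi / 2 + dhi * np.arange(int(2.0e5 / dhi))   # midpoints on (wmax, ~2e5)
numeric = (2.0 / np.pi) * (np.sum(f(lo)) * dlo + np.sum(f(hi)) * dhi)

rel_err = abs(closed - numeric) / abs(closed)
```

The small residual is the quadrature's own truncation at the finite upper limit of the tail integral.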

4.3.2.5 The Ahmed correction term


This is the most complex correction term theoretically, but also the most versatile.
It is applicable to zoomed data and, like the Simon correction term, assumes that
the FRF takes the linear form away from resonance. The form of the correction
depends on the FRF type; to illustrate the theory the mobility form (4.35) will be
assumed. The form (4.35) gives real and imaginary parts:

    Re H_M(ω) = Σ_{k=1}^{N} 2A_k ζ_k ω_k ω²/((ω_k² - ω²)² + 4ζ_k²ω_k²ω²)    (4.48)
    Im H_M(ω) = Σ_{k=1}^{N} A_k ω(ω_k² - ω²)/((ω_k² - ω²)² + 4ζ_k²ω_k²ω²).    (4.49)

So, assuming that the damping can be neglected away from resonant regions,

    Re H_M(ω) = Σ_{k=1}^{N} 2A_k ζ_k ω_k ω²/(ω_k² - ω²)²    (4.50)
    Im H_M(ω) = Σ_{k=1}^{N} A_k ω/(ω_k² - ω²).    (4.51)


Figure 4.6. Frequency grid for the Ahmed correction term.

Suppose mode i is the lowest mode in the measured region, with resonant
frequency ω_ri, and therefore has the greatest effect on the low-frequency
truncation error B_R(ω); the relevant part of Im H_M can be decomposed:

    Im H_M^m(ω) = Σ_{k=1}^{i-1} A_k ω/(ω_k² - ω²) + A_i ω/(ω_i² - ω²)    (4.52)

where the superscript m indicates that this is the mass asymptote of the FRF. In
the lower part of the frequency range ω/ω_k is small and the first term can be
expanded:

    Σ_{k=1}^{i-1} A_k ω/(ω_k² - ω²) = Σ_{k=1}^{i-1} (A_k ω/ω_k²)[1 + (ω/ω_k)² + ⋯] = O(ω/ω_k)    (4.53)

and neglected, so

    Im H_M^m(ω) ≈ A_i ω/(ω_i² - ω²).    (4.54)

Now, Ahmed estimates the unknown coefficient A_i by curve-fitting function
(4.54) to the data in the range ω_a to ω_b, where ω_a > ω_min and ω_b < ω_ri
(figure 4.6). (An appropriate least-squares algorithm can be found in [7].) The
low-frequency correction to the Hilbert transform is then found by substituting
(4.54) into the appropriate Kramers–Kronig relation, so

    B_R(ω) = (2/π) ∫_{0}^{ω_min} dΩ Ω²A_i/((Ω² - ω_i²)(Ω² - ω²))    (4.55)

and this can be evaluated using partial fractions:

    B_R(ω) = (A_i/(π(ω_i² - ω²))) [ ω_i ln((ω_i - ω_min)/(ω_i + ω_min))
             + ω ln((ω + ω_min)/(ω - ω_min)) ].    (4.56)

The high-frequency correction term depends on the stiffness asymptote of the
FRF,

    Im H_M^s(ω) = Σ_{k=j+1}^{N} A_k ω/(ω_k² - ω²) + A_j ω/(ω_j² - ω²)    (4.57)

where mode j is the highest mode in the measured region, which is assumed to
contribute most to the high-frequency truncation error C_R(ω). In the higher part
of the frequency range ω_k/ω is small and the first term can now be expanded:

    Σ_{k=j+1}^{N} A_k ω/(ω_k² - ω²) = -Σ_{k=j+1}^{N} (A_k/ω)[1 + (ω_k/ω)² + ⋯] = O(ω_k/ω)    (4.58)

and neglected, so

    Im H_M^s(ω) ≈ A_j ω/(ω_j² - ω²)    (4.59)

and A_j is estimated by fitting the function (4.59) to the data over the range ω_c to
ω_d (figure 4.6). The high-frequency correction term is obtained by substituting
(4.59) into the Kramers–Kronig relation:

    C_R(ω) = (2/π) ∫_{ω_max}^{∞} dΩ Ω²A_j/((Ω² - ω_j²)(Ω² - ω²))    (4.60)

and this integral can also be evaluated by partial fractions:

    C_R(ω) = (A_j/(π(ω² - ω_j²))) [ ω_j ln((ω_max - ω_j)/(ω_max + ω_j))
             + ω ln((ω_max + ω)/(ω_max - ω)) ].    (4.61)

Note that in this particular case, Ahmed's correction term is simply a reduced
form of the Simon correction term (4.47). This is not the case for the correction
to the imaginary part. This depends on the asymptotic behaviour of the real part
of H_M(ω) (4.50). The mass asymptote for the real part takes the form

    Re H_M^m(ω) = Σ_{k=1}^{i-1} 2A_k ζ_k ω_k ω²/(ω_k² - ω²)² + 2A_i ζ_i ω_i ω²/(ω_i² - ω²)².    (4.62)

As before, the sum term can be neglected where ω/ω_k is small, so

    Re H_M^m(ω) ≈ 2A_i ζ_i ω_i ω²/(ω_i² - ω²)² = a_i ω²/(ω_i² - ω²)²    (4.63)

and the a_i coefficient is estimated as before by curve-fitting.
The correction term for the imaginary part of the Hilbert transform is,
therefore,

    B_I(ω) = (2ω/π) ∫_{0}^{ω_min} dΩ Ω²a_i/((Ω² - ω_i²)²(Ω² - ω²)).    (4.64)

Evaluation of this expression is a little more involved, but leads to

    B_I(ω) = (2a_i/π) { i_1(ω) ln((ω + ω_min)/(ω - ω_min))
             + i_2(ω) ln((ω_i + ω_min)/(ω_i - ω_min))
             + i_3(ω)·2ω_min/(ω_i² - ω_min²) }    (4.65)

where

    i_1(ω) = ω/(2(ω_i⁴ - ω⁴));  i_2(ω) = 1/(4ω_i(ω_i² - ω²));  i_3(ω) = 1/(4(ω_i² - ω²)).    (4.66)
Finally, to evaluate the high-frequency correction to the imaginary part of the
Hilbert transform, the stiffness asymptote of the real part is needed. The starting
point is

    Re H_M^s(ω) = Σ_{k=j+1}^{N} β_k ω²/(ω_k² - ω²)² + β_j ω²/(ω_j² - ω²)²    (4.67)

where β_k = 2A_k ζ_k ω_k. Expanding the first term yields

    Σ_{k=j+1}^{N} β_k [1 + (ω_k/ω)² + ⋯] ≈ Σ_{k=j+1}^{N} β_k    (4.68)

as ω_k/ω is considered to be small. The final form for the asymptote is

    Re H_M^s(ω) ≈ (b_1 + b_2ω² + b_3ω⁴)/(ω_j² - ω²)²    (4.69)

where the coefficients

    b_1 = ω_j⁴ Σ_{k=j+1}^{N} β_k;  b_2 = β_j - 2ω_j² Σ_{k=j+1}^{N} β_k;  b_3 = Σ_{k=j+1}^{N} β_k    (4.70)

are once again obtained by curve-fitting.
are once again obtained by curve-fitting.
The high-frequency correction is obtained by substituting (4.69) into the
Kramers–Kronig integral. The calculation is a little involved and yields

    C_I(ω) = (2ω/π) { j_1(ω) ln((ω_max - ω)/(ω_max + ω))
             + j_2(ω) ln((ω_max - ω_j)/(ω_max + ω_j))
             + j_3(ω)·2ω_max/(ω_max² - ω_j²) }    (4.71)

where

    j_1(ω) = (b_3ω(2ω_j² + ω²) + b_2ω - b_1ω_j²)/(2(ω_j² - ω²));
    j_2(ω) = (b_1ω - b_2 + 3b_3ω_j²)/(4(ω_j² - ω²));
    j_3(ω) = (b_1ω + b_2 + b_3ω_j²)/(4(ω_j² + ω²)).    (4.72)

Note that these results only apply to mobility FRFs; substantially different
correction terms are needed for the other FRF forms. However, they are derived
by the same procedure as the one described here.
Although the Ahmed correction procedure is rather more complex than the
others, it produces excellent results. Figure 4.7 shows the Hilbert transform
in figure 4.5 recomputed using the Ahmed correction terms; an almost perfect
overlay is obtained.

4.3.2.6 Summary
None of the correction methods can claim to be faultless; truncation near to a
resonance will always give poor results. Considerable care is needed to obtain
satisfactory results. The conversion to receptance, Fei and Haoui techniques are
only suitable for use with baseband data, and the Simon and Ahmed corrections
require a priori curve-fitting. The next sections and the next chapter outline
approaches to the Hilbert transform which do not require correction terms and
in some cases overcome the problems.
Note also that the accelerance FRF tends to a constant non-zero value as
ω → ∞. As a consequence the Hilbert transform will always suffer from
truncation problems, no matter how high ω_max is taken. The discussion of this
problem requires complex analysis and is postponed until the next chapter.

4.3.3 Fourier method 1


This method relies on the fact that the Hilbert transform is actually a convolution
of functions and can therefore be factored into Fourier operations. Consider the
basic Hilbert transform,

    H{G(ω)} = G̃(ω) = -(1/iπ) ∫_{-∞}^{∞} dΩ G(Ω)/(Ω - ω).    (4.73)

Recalling the definition of the convolution product *,

    f_1(t) * f_2(t) = ∫_{-∞}^{∞} dτ f_1(τ) f_2(t - τ)    (4.74)

Figure 4.7. Hilbert transform with Ahmed’s correction of zoomed linear data.

it is clear that

    G̃(ω) = G(ω) * (-i/(πω)).    (4.75)

Now, a basic theorem of Fourier transforms states that

    F{f_1(t) f_2(t)} = F{f_1(t)} * F{f_2(t)}.    (4.76)

It therefore follows from (4.75) that

    F⁻¹{G̃(ω)} = F⁻¹{G(ω)}·F⁻¹{-i/(πω)} = g(t)ε(t)    (4.77)

where ε(t) is the signum function defined in (4.9). (F{ε(t)} = -i/(πω) is proved
in appendix D.)
It immediately follows from (4.77) that

    G̃(ω) = F∘ε̂∘F⁻¹{G(ω)}    (4.78)

where the operator ε̂ represents multiplication by ε(t), i.e. ε̂{g(t)} = g(t)ε(t),
and composition is denoted by ∘, i.e. (f_1∘f_2)(t) = f_1(f_2(t)). In terms of
operators,

    H = F∘ε̂∘F⁻¹    (4.79)

and the Hilbert transform can therefore be implemented in terms of the Fourier
transform by the three-step procedure:

(1) Take the inverse Fourier transform of G(ω). This yields the time-domain
    g(t).
(2) Multiply g(t) by the signum function ε(t).
(3) Take the Fourier transform of the product g(t)ε(t). This yields the required
    Hilbert transform G̃(ω).

In practice these operations will be carried out on sampled data, so the
discrete Fourier transform (DFT) or fast Fourier transform (FFT) will be used. In
the latter case, the number of points should usually be 2^N for some N.
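The three-step procedure maps directly onto FFT calls. The sketch below (illustrative, not the book's code; it assumes NumPy and uses a sampled SDOF impulse response) shows the invariance of a causal system's FRF under the procedure, and the loss of invariance when causality is artificially broken:

```python
import numpy as np

# Sampled causal SDOF impulse response on a periodic grid of N points;
# indices N/2..N-1 play the role of negative time and are left at zero
N, dt = 1024, 0.01
wn, zeta = 10.0, 0.05
wd = wn * np.sqrt(1.0 - zeta**2)
t = np.arange(N) * dt
h = np.zeros(N)
h[: N // 2] = np.exp(-zeta * wn * t[: N // 2]) * np.sin(wd * t[: N // 2]) / wd

# Discrete signum over the periodic time axis (epsilon(0) = 0)
eps = np.ones(N)
eps[0] = 0.0
eps[N // 2:] = -1.0

def hilbert_fft(G):
    """Steps (1)-(3): inverse FFT, multiply by the signum, FFT back."""
    return np.fft.fft(np.fft.ifft(G) * eps)

G = np.fft.fft(h)
invariant = np.allclose(hilbert_fft(G), G)        # causal: overlay is exact

G_nc = np.fft.fft(np.roll(h, -5))                 # shifted: no longer causal
broken = not np.allclose(hilbert_fft(G_nc), G_nc)
```

The second check is the detection principle in miniature: any departure from causality in the underlying response shows up as a mismatch between the FRF and its Hilbert transform.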
The advantage of this method over the direct method described in the
previous section is its speed (if the FFT is used). A comparison was made in
[170]. (The calculations were made on a computer which was extremely slow by
present standards. As a consequence, only ratios of the times have any meaning.)

    Number of points    256        512
    Direct method       6.0 min    24.1 min
    Fourier method 1    1.0 min    2.0 min

The disadvantages of the method arise from the corrections needed. Both
result from the use of the FFT, an operation based on a finite data set.
The first problem arises because the FFT forces periodicity onto the
data outside the measured range, so the function ε(t), which should look like
figure 4.8(a), is represented by the square-wave function sq(t) of figure 4.8(b).
This means that the function G(ω) is effectively convolved with the function
-i cot(πω) = F{sq(t)} instead of the desired -i/(πω). (See [260] for the
appropriate theory.) The effective convolving function is shown in figure 4.9(b).
Figure 4.8. Effect of the discrete Fourier transform on the signum function.

As ω → 0, -i cot(πω) → -i/(πω), so for low frequencies or high
sampling rates the error in the convolution is small. If these conditions are not
met, a correction should be made. The solution is simply to compute the discrete
inverse DFT of the function -i/(πω) and multiply by that in the time domain
in place of ε(t). The problem is that -i/(πω) is singular at ω = 0. A naive
approach to the problem is to zero the singular point and take the discrete form⁵
of -i/(πω):

    U_k = {  0,                k = 1
          { -i/(π(k - 1)),     2 ≤ k ≤ N/2
          {  i/(π(N + 1 - k)), N/2 + 1 ≤ k ≤ N.    (4.80)

The corresponding time function, often called a Hilbert window, is shown
in figure 4.10 (only points t > 0 are shown). It is clear that this is a poor
representation of ε(t). The low-frequency component of the signal between

⁵ There are numerous ways of coding the data for an FFT; expression (4.80) follows the conventions
of [209].

Figure 4.9. Desired Hilbert window and periodic form from the discrete FFT.

-Δω/2 and Δω/2 has been discarded. This can be alleviated by transferring
energy to the neighbouring lines and adopting the definition

    U_k = {  0,                k = 1
          { -3i/(2π),          k = 2
          { -i/(π(k - 1)),     3 ≤ k ≤ N/2
          {  i/(π(N + 1 - k)), N/2 + 1 ≤ k ≤ N - 1
          {  3i/(2π),          k = N.    (4.81)

Figure 4.10. Naive discrete Hilbert window.

The Hilbert window corresponding to this definition is shown in figure 4.11.
There is a noticeable improvement.
The next problem is that of circular convolution. The ideal convolution is shown
in figure 4.12. The actual convolution implemented using the FFT is depicted
in figure 4.13. The error occurs because the function G(ω) should vanish in
region B but does not, because of the imposed periodicity. The solution is
straightforward. The sampled function G(ω), defined at N points, is extended
to a 2N-point function by translating region B by N points and padding with zeros.
The corresponding Hilbert window is computed from the 2N-point discretization
of -i/(πω). The resulting calculation is illustrated in figure 4.14.
Finally, the problem of truncation should be raised. The Fourier method can
only be used with baseband data. In practice, G(ω) will only be available for
positive ω; the negative-frequency part needed for the inverse Fourier transform
is obtained by using the known symmetry properties of FRFs which follow
from the reality of the impulse response, namely Re G(-ω) = Re G(ω) and
Im G(-ω) = -Im G(ω). If one naively completes the FRF of zoomed data by
these reflections, the result is as shown in figure 4.15(b), instead of the desired
figure 4.15(a). This leads to errors in the convolution. One way of overcoming
this problem is to pad the FRF with zeros from ω = 0 to ω = ω_min. This is

Figure 4.11. Corrected discrete Hilbert window.

inefficient if the zoom range is small or at high frequency, and will clearly lead to
errors if low-frequency modes have been discarded.
Of the correction methods described in section 4.3.2, the only one applicable
is conversion to receptance, and it should be stressed that this is only effective for
correcting the high-frequency error. However, as previously discussed, the data
should always be baseband in any case.
In summary then, the modified Fourier method 1 proceeds as follows.

(1) Convert the measured ½N-point positive-frequency FRF G(ω) to an N-point
    positive-frequency FRF by translation, reflection and padding.
(2) Complete the FRF by generating the negative-frequency component. The
    real part is reflected about ω = 0; the imaginary part is reflected with a sign
    inversion. The result is a 2N-point function.
(3) Take the inverse Fourier transform of the discretized -i/(πω) on 2N points.
    This yields the Hilbert window hi(t).
(4) Take the inverse Fourier transform of the 2N-point FRF. This yields the
    impulse response g(t).
(5) Form the product g(t)·hi(t).
(6) Take the Fourier transform of the product. This yields the desired Hilbert
    transform G̃(ω).
Figure 4.12. Ideal convolution for the Hilbert transform.

4.3.4 Fourier method 2

Fourier method 1 was discussed as it was the first Hilbert transform method to
exploit Fourier transformation. However, it is rather complicated to implement,
and the method discussed in this section is to be preferred in practice.

Figure 4.13. The problem of circular convolution.

The implementation of this method is very similar to that of Fourier method 1;
however, the theoretical basis is rather different. This method is based on the
properties of analytic⁶ signals and is attributed to Bendat [24]. Given a time
signal g(t), the corresponding analytic signal, a(t), is given by⁷

    a(t) = g(t) - g̃(t) = g(t) - H{g(t)}.    (4.82)

Taking the Fourier transform of this equation yields

    A(ω) = G(ω) - F∘H{g(t)} = G(ω) - F∘H∘F⁻¹{G(ω)}.    (4.83)

Now, recall that the Hilbert transform factors into Fourier operations. The
decomposition depends on whether the operator acts on time- or frequency-
domain functions. The appropriate factorization in the frequency domain is given
by (4.79). Essentially the same derivation applies in the time domain and the
result is

    H = -F⁻¹∘ε̂∘F.    (4.84)

⁶ This terminology is a little unfortunate, as the word analytic will have two different meanings in
this book. The first meaning is given by equation (4.82). The second meaning relates to the pole-zero
structure of complex functions—a function is analytic in a given region of the complex plane if it has
no poles in that region. (Alternatively, the function has a convergent Taylor series.) The appropriate
meaning will always be clear from the context.
⁷ This definition differs from the convention

    a(t) = g(t) + ig̃(t).

The reason is that the conventional definition of the Hilbert transform of a time signal omits the
imaginary i, and reverses the sign to give a true convolution, i.e.

    H{g(t)} = g̃(t) = (1/π) ∫_{-∞}^{∞} dτ g(τ)/(t - τ).

Modifying the definition of the analytic signal avoids the unpleasant need to have different Hilbert
transforms for different signal domains.
Figure 4.14. Solution to the circular convolution problem using translation and
zero-padding.

Substituting this expression into (4.83) yields

    A(ω) = G(ω) + ε̂{G(ω)} = G(ω)[1 + ε(ω)]    (4.85)

so

    A(ω) = { 2G(ω),  ω > 0
           {  G(ω),  ω = 0
           {     0,  ω < 0.    (4.86)

Thus, the spectrum of an analytic signal depends only on the spectrum of the real
part. This fact is the basis of the method.
Any function of frequency has a trivial decomposition

    G(ω) = Re G(ω) + i Im G(ω).    (4.87)

However, if G(ω) has a causal inverse Fourier transform, then i Im G(ω) =
H{Re G(ω)} by (4.18). Therefore

    G(ω) = Re G(ω) + H{Re G(ω)}    (4.88)

so G(ω) is analytic, provided that ω is considered to be a time-like variable. If the
Fourier transform (not the inverse transform) is applied,

    G*(Γ) = F{G(ω)} = ∫_{-∞}^{∞} dω e^{-iΓω} G(ω)    (4.89)

Figure 4.15. Convolution problem for zoomed data: (a) true data (with zoom range indicated); (b) effective data; (c) convolving function.

the result is

             { 2G_R(Ω),   Ω > 0
    G(Ω) =   { G_R(Ω),    Ω = 0        (4.90)
             { 0,         Ω < 0

where

    G_R(Ω) = F{Re G(ω)}    (4.91)

so the Fourier transform of the FRF is completely specified by the Fourier transform of the real part⁸. This fact provides a means of computing the FRF imaginary part from the real part. In principle, three steps are required:

(1) Take the Fourier transform F of the FRF real part Re G(ω), i.e. form G_R(Ω).
(2) Form the transform G(Ω) using (4.90).
(3) Take the inverse Fourier transform F⁻¹ of G(Ω). The result is G(ω); the desired Hilbert transform H{Re G(ω)} has been obtained as the imaginary part.

A trivial modification of this argument—exchanging Im G and Re G—leads to a means of computing H{Im G(ω)}.
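The three steps are easily sketched numerically. The fragment below (all parameter values are illustrative, not taken from the case study) applies them to a simulated linear SDOF receptance; note that with NumPy's e^(−iωt) forward convention, the roles of the forward and inverse FFTs in steps (1) and (3) are swapped:

```python
import numpy as np

# Sketch of Fourier method 2 for a simulated linear SDOF receptance
# (illustrative m, c, k values).  For a causal system the Hilbert
# transform of the real part should reproduce the imaginary part.
m, c, k = 1.0, 20.0, 1.0e4
N = 4096
w = 2 * np.pi * np.fft.fftfreq(N, d=0.001)     # double-sided frequency axis
G = 1.0 / (-m * w**2 + 1j * c * w + k)         # receptance FRF

# (1) transform the real part to get the pseudo-spectrum G_R(Omega)
GR = np.fft.ifft(G.real)
# (2) double the positive-Omega part, zero the negative part (4.90);
#     the Omega = 0 line is left untouched
Omega = np.fft.fftfreq(N)
Gfull = np.where(Omega > 0, 2.0, 0.0) * GR
Gfull[0] = GR[0]
# (3) transform back; the imaginary part is the Hilbert transform of Re G
HReG = np.fft.fft(Gfull).imag

# causal FRF => the transform acts as the identity: H{Re G} = Im G
assert np.max(np.abs(HReG - G.imag)) < 1e-3 * np.max(np.abs(G.imag))
```

Because the simulated FRF is causal, the computed transform reproduces Im G(ω); for the FRF of a nonlinear system the two curves would differ.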
One advantage of the method is its speed: the timings are essentially those of Fourier method 1. Also, because the FFT is applied to a spectrum, which has
already been obtained by FFT and is periodic, there are no leakage effects. The
method is subject to the same truncation problems that afflict all the methods and
the only applicable correction is conversion to receptance. The implementation
of the method is now illustrated by a case study [142].

4.3.5 Case study of the application of Fourier method 2

The structure used to obtain the experimental data was a composite scale-model aircraft wing used for wind tunnel tests. The wing was secured at its root to a
rigid support, effectively producing a cantilever boundary condition. Excitation
of the wing was via an electrodynamic exciter attached to the wing via a push
rod (stinger) and a force transducer. The excitation was a band-limited random
signal in the range 0–512 Hz. The response of the wing was measured using
lightweight accelerometers. (Note that random excitation is not optimal for
nonlinear structures—this will be discussed later. This study is intended to show
how the Hilbert transform is computed, and one can only validate the method on
a linear structure.)
Figure 4.16 shows the accelerance FRF measured by the experiment. At least
seven modes are visible. For information, the resonance at 76 Hz was identified
as first wing bending, that at 215 Hz was identified as first wing torsion.
⁸ Note that

    Re G(ω) = ∫_{−∞}^{∞} dt cos(ωt) g(t) = ∫_{−∞}^{∞} dt e^{−iωt} g_even(t)

so

    G_R(Ω) = ∫_{−∞}^{∞} dω e^{−iωΩ} ∫_{−∞}^{∞} dt e^{−iωt} g_even(t)
           = ∫_{−∞}^{∞} dt g_even(t) ∫_{−∞}^{∞} dω e^{−iω(Ω+t)} = 2π ∫_{−∞}^{∞} dt g_even(t) δ(Ω + t)
           = 2π g_even(−Ω).

So G_R is essentially the even component of the original time signal. This fact does not help with the development of the algorithm. However, it does show that the terminology ‘pseudo spectrum’ for G_R, which is sometimes used, is probably inappropriate.

Figure 4.16. A typical experimental cross-accelerance FRF measured from a scaled wing
model.

The first step in the procedure is to correct for truncation; the FRF is converted to receptance by dividing by −ω² (avoiding the division at ω = 0). The result is shown in figure 4.17. To further reduce truncation errors, the FRF was extended to 2N points by padding with zeroes (figure 4.18).
The next stage was the completion of the FRF, i.e. the conversion to a
double-sided form. The negative frequency parts were obtained by assuming even
symmetry for the real part and odd symmetry for the imaginary part. The double-
sided signals are given in figure 4.19.
The function G_R(Ω) was formed by Fourier transforming the real part of the FRF (figure 4.20(a)). This was converted to G(Ω) by zeroing the negative-Ω component and doubling the positive-Ω part. The Ω = 0 line was left untouched. Taking the inverse Fourier transform then gave the transform of Re G as the imaginary part.
The function G_I(Ω) was formed by Fourier transforming the imaginary part of the FRF (figure 4.20(b)). This was also converted to the full G(Ω) as before. Taking the inverse FFT gave the transform of Im G as the real part.
Both the real and imaginary parts of the Hilbert transform have now been
obtained. The next stage was simply to convert back to the accelerance form.
In order to evaluate the results, the Hilbert transform is shown overlaid on the original FRF in figure 4.21; the two curves should match. Both the Bode magnitude and Nyquist plots are given. The somewhat poor quality of the Nyquist comparison is due to the limited frequency resolution.

Figure 4.17. Receptance FRF converted from accelerance FRF in figure 4.16.

Figure 4.18. Receptance FRF padded with zeroes to 2f_max.


The method clearly produces an excellent Hilbert transform and indicates,
for the excitation used, that the system is nominally linear.
Having established methods of computing the transform, it is now finally
time to show how the method allows the detection and identification of
nonlinearity.

Figure 4.19. (a) Double-sided (even function) real part of the FRF of figure 4.18. (b)
Double-sided (odd function) imaginary part of the FRF of figure 4.18.

4.4 Detection of nonlinearity

The basis of the Hilbert transform as a nonlinearity detection method is equation (4.20), which asserts that the Hilbert transform acts as the identity on functions G(ω) which have causal inverse Fourier transforms, i.e.

    G(ω) = H{G(ω)}  ⟺  F⁻¹{G(ω)} = g(t) = 0, ∀ t < 0.    (4.92)

Figure 4.20. (a) Pseudo-spectrum from the Fourier transform of the curve in figure 4.19(a). (b) Pseudo-spectrum from the Fourier transform of the curve in figure 4.19(b).

The inverse Fourier transform of a linear system FRF H(ω) is the system impulse response h(t), which is always zero for negative times by the principle of causality (see chapter 1). This means that the FRF H(ω) is invariant under the Hilbert transform. There is no compelling reason why this condition should hold for the FRF of a nonlinear system.

Figure 4.21. Overlay of the experimental (——) and Hilbert transformed (– – –) data in (a) Bode plot, (b) Nyquist plot.


Consider the FRF of a generic nonlinear system, G(ω). It is impossible to show that F⁻¹{G(ω)} = g(t) will

(1) be real and
(2) be causal.

In practice, reality is imposed because the one-sided FRF is often converted to a double-sided FRF by imposing evenness and oddness conditions on the real and imaginary parts respectively. This forces a real g(t). This, in turn, means that
the usual consequence of nonlinearity is non-causality of the ‘impulse response’
function, i.e. the inverse Fourier transform of the FRF. This does not mean that the
system is non-causal in the physical sense; cause must always precede effect. It
simply means that the inverse Fourier transform of a nonlinear system FRF must
not be interpreted as an impulse response. The specification and calculation of
nonlinear system impulse responses is more complicated and will be discussed in
a later chapter. The fact that g(t) ≠ 0 for negative t is often referred to as artificial non-causality.
As a result, the Hilbert transform will not act as the identity on G(ω): G(ω) ≠ H{G(ω)}. It is possible to see this directly using the factorization (4.79) of H,

    H{G(ω)} = F{ε(t)g(t)}.    (4.93)

If g(t) is causal, ε(t)g(t) = g(t) and H is the identity. If not, ε(t)g(t) ≠ g(t) and H ≠ Id. The argument is summarized diagrammatically in figure 4.22.
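The factorization can also be checked numerically. In the sketch below, the function name hilbert_frf and the test signal are invented for illustration, with ε(t) realized as the signum function on the FFT grid:

```python
import numpy as np

# Numerical check of the factorization (4.93): H{G} = F{eps(t) g(t)}.
# For a causal g the transform returns G; breaking causality (here by
# an artificial time advance) makes H{G} != G.
N = 1024
n = np.arange(N)
g = np.exp(-0.05 * n) * np.sin(0.3 * n)        # a causal 'impulse response'

def hilbert_frf(g):
    # sgn(t) on the FFT grid: first half positive times, second half negative
    eps = np.where((n > 0) & (n < N // 2), 1.0, -1.0)
    eps[0] = 1.0                                # t = 0 line (g(0) = 0 here)
    return np.fft.fft(eps * g)

G = np.fft.fft(g)
assert np.allclose(hilbert_frf(g), G)           # causal: H acts as the identity

g_nc = np.roll(g, -30)                          # non-causal 'impulse response'
Gnc = np.fft.fft(g_nc)
assert not np.allclose(hilbert_frf(g_nc), Gnc)  # H is no longer the identity
```

Rolling the ‘impulse response’ backwards in time stands in for the artificial non-causality that a genuine nonlinear-system FRF would produce.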
The question arises: If H is not the identity, what is its effect on nonlinear
system FRFs? Consider the hardening Duffing oscillator,

    mÿ + cẏ + ky + k₃y³ = x(t),   k₃ > 0.    (4.94)

Suppose an FRF is obtained from this system with x(t) a low-amplitude signal (the appropriate form for x(t), i.e. whether stepped-sine or random etc., is discussed later). At low levels of excitation, the linear term dominates and the
FRF is essentially that of the underlying linear system. In that case, the Hilbert
transform will overlay the original FRF. If the level of excitation is increased,
the Hilbert transform will start to depart from the original FRF; however because
the operator H is continuous, the main features of the FRF—resonances etc—are
retained but in a distorted form. Figure 4.23 shows the FRF of a Duffing oscillator and the corresponding Hilbert transform; the level of excitation is set so that the Hilbert transform shows only mild distortion.
A number of points are worth noting about figure 4.23. First, it is sometimes
helpful to display the FRF and transform in different formats as each conveys
different information: the Bode plot and Nyquist plot are given here. The figure
also shows that the Hilbert transform is a sensitive indicator of nonlinearity.
The FRF shows no discernible differences from the linear form, so using FRF
distortion as a diagnostic fails in this case. The Hilbert transform, however, clearly
shows the effect of the nonlinearity, particularly in the Nyquist plot. Finally,
experience shows that the form of the distortion is actually characteristic of the
type of nonlinearity, so the Hilbert transform can help in identifying the system.
In the case of the hardening cubic stiffness, the following observations apply.
In the Bode plot the peak of the Hilbert transform curve appears at a higher
frequency than in the FRF. The peak magnitude of the Hilbert transform is higher.

Figure 4.22. Demonstration of artificial non-causality for a nonlinear system.

In the Nyquist plot, the characteristic circle is rotated clockwise and elongated
into a more elliptical form. Figure 4.24 shows the FRF and transform in a more
extreme case where the FRF actually shows a jump bifurcation. The rotation and
elongation of the Nyquist plot are much more pronounced.
The characteristic distortions for a number of common nonlinearities are
summarized next (in all cases the FRFs are obtained using sine excitation).

4.4.1 Hardening cubic stiffness


The equation of motion of the typical SDOF system is given in (4.94). The FRF
and Hilbert transform in the two main formats are given in figure 4.23. The FRF
is given by the dashed line and the transform by the solid line.
In the Bode plot the peak of the Hilbert transform curve appears at a higher
frequency than in the FRF. The peak magnitude of the Hilbert transform is higher.
In the Nyquist plot, the characteristic circle is rotated clockwise and elongated
into a more elliptical form.

Figure 4.22. (Continued)

4.4.2 Softening cubic stiffness


The equation of motion is
    mÿ + cẏ + ky + k₃y³ = x(t),   k₃ < 0.    (4.95)
The FRF and Hilbert transform are given in figure 4.25. In the Bode plot the
peak of the Hilbert transform curve appears at a lower frequency than in the FRF.
The peak magnitude of the Hilbert transform is higher. In the Nyquist plot, the
characteristic circle is rotated anti-clockwise and elongated into a more elliptical
form.

4.4.3 Quadratic damping


The equation of motion is
    mÿ + cẏ + c₂ẏ|ẏ| + ky = x(t),   c₂ > 0.    (4.96)
The FRF and Hilbert transform are given in figure 4.26. In the Bode plot the
peak of the Hilbert transform curve stays at the same frequency as in the FRF, but

Figure 4.23. Hilbert transform of a hardening cubic spring FRF at a low sine excitation
level.

increases in magnitude. In the Nyquist plot, the characteristic circle is elongated into an ellipse along the imaginary axis.

Figure 4.24. Hilbert transform of a hardening cubic spring FRF at a high sine excitation
level.

4.4.4 Coulomb friction


The equation of motion is

    mÿ + cẏ + F_C sgn(ẏ) + ky = x(t),   F_C > 0.    (4.97)

The FRF and Hilbert transform are given in figure 4.27. In the Bode plot

Figure 4.25. Hilbert transform of a softening cubic spring FRF at a high sine excitation
level.

the peak of the Hilbert transform curve stays at the same frequency as in the
FRF, but decreases in magnitude. In the Nyquist plot, the characteristic circle is
compressed into an ellipse along the imaginary axis.
Note that in the case of Coulomb friction, the nonlinearity is only visible if
the level of excitation is low. Figure 4.28 shows the FRF and transform at a high
level of excitation where the system is essentially linear.

Figure 4.26. Hilbert transform of a velocity-squared damping FRF.

4.5 Choice of excitation


As discussed in the first two chapters, there are essentially four types of excitation
which can be used to produce a FRF: impulse, stepped-sine, chirp and random.
Figure 2.17 shows the resulting FRFs. The question arises as to which of the FRFs
generates the inverse Fourier transform with the most marked non-causality; this
will be the optimal excitation for use with the Hilbert transform.
Roughly speaking, the FRFs with the most marked distortion will transform

Figure 4.27. Hilbert transform of a Coulomb friction system FRF at a low sine excitation
level.

to the most non-causal time functions. Recalling the discussion of chapter 2, the
most distorted FRFs are obtained from stepped-sine excitation and, in fact, it will
be proved later that such FRFs for nonlinear systems will generically show Hilbert
transform distortions. (The proof requires the use of the Volterra series and is
therefore postponed until chapter 8 where the appropriate theory is introduced.)

Figure 4.28. Hilbert transform of a Coulomb friction system FRF at a high sine excitation
level.

This form of excitation is therefore recommended. The main disadvantage is its time-consuming nature.
At the other end of the spectrum is random excitation. As discussed in
chapter 2, random excitation has the effect of producing a FRF which appears to
be linearized about the operating level. For example, as the level of excitation
is increased for a hardening cubic system, the resonant frequency increases,

but the characteristic linear Lorentzian shape appears to be retained. In fact,


Volterra series techniques (chapter 8) provide a compelling argument that random
excitation FRFs do change their form for nonlinear systems, but they still do not
show Hilbert transform distortions. Random excitation should not, therefore,
be used if the Hilbert transform is to be used as a diagnostic for detecting
nonlinearity.
The impulse and chirp excitations are intermediate between these two
extremes. They can be used if the test conditions dictate accordingly. Both
methods have the advantage of giving broadband coverage at reasonable speed.

4.6 Indicator functions


The Hilbert transform operations described earlier give a diagnosis of nonlinearity
with a little qualitative information available to those with appropriate experience.
There has in the past been some effort at making the method quantitative. The
FREEVIB approach discussed later actually provides an estimate of the stiffness
or damping functions under certain conditions. There are also a number of
less ambitious attempts which are usually based on computing some statistic or
indicator function which sheds light on the type or extent of nonlinearity. Some
of the more easily computable or interpretable are discussed in the following.

4.6.1 NPR: non-causal power ratio


This statistic was introduced in [141]. It does not make direct use of the Hilbert
transform, but it is appropriate to discuss it here as it exploits the artificial non-
causality of nonlinear system ‘impulse responses’. The method relies on the
decomposition
    g(t) = F⁻¹{G(ω)} = g_n(t) + g_c(t)    (4.98)
where g_c(t) is the causal part, defined by

              { g(t),   t ≥ 0
    g_c(t) =  { 0,      t < 0        (4.99)

and g_n(t) is the non-causal part

              { 0,      t ≥ 0
    g_n(t) =  { g(t),   t < 0.       (4.100)

The non-causal power ratio (NPR) is then defined as the ratio of the non-causal power P_n to the total system power P as encoded in the FRF

    NPR = P_n / P = [ ∫_{−∞}^{0} dt |g_n(t)|² ] / [ ∫_{−∞}^{∞} dt |g(t)|² ].    (4.101)
Figure 4.29. Non-causal power ratio plots for various SDOF nonlinear systems: k₃y³ (k₃ > 0); c₂ẏ|ẏ|; k₃y³ (k₃ < 0); F_C sgn(ẏ).

By Parseval’s theorem, this also has the representation

    NPR = P_n / P = [ ∫_{−∞}^{0} dt |g_n(t)|² ] / [ (1/2π) ∫_{−∞}^{∞} dω |G(ω)|² ].    (4.102)

This index is readily computed using an inverse FFT.
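A minimal NumPy sketch of the index follows (the m, c, k values are illustrative, and an artificial time advance stands in for genuine nonlinear distortion):

```python
import numpy as np

# Sketch of the non-causal power ratio (4.101) computed with an inverse
# FFT.  A causal linear FRF gives NPR ~ 0; an artificial time advance
# creates the non-causal power that the index detects.
m, c, k = 1.0, 20.0, 1.0e4
N = 4096
w = 2 * np.pi * np.fft.fftfreq(N, d=0.001)
G = 1.0 / (-m * w**2 + 1j * c * w + k)

def npr(G):
    g = np.fft.ifft(G)                   # 'impulse response' g(t)
    gn = g[len(g) // 2:]                 # second half of the record holds t < 0
    return np.sum(np.abs(gn)**2) / np.sum(np.abs(g)**2)

print(npr(G))                            # ~0: causal g(t) from a linear FRF
print(npr(G * np.exp(1j * w * 0.05)))    # > 0: artificially non-causal g(t)
```

Multiplying the FRF by e^(iω·0.05) advances the ‘impulse response’ by 0.05 s, pushing part of it into negative time and so raising the NPR.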


The NPR is, of course, a function of excitation amplitude (the form of the
excitation being dictated by the considerations of the previous section). Kim
and Park [141] compute this function for a number of common nonlinearities:
hardening and softening cubic springs and quadratic and Coulomb damping. It
is argued that the functions are characteristic of the nonlinearity as shown in figure 4.29; the cubic nonlinearities show NPRs which increase quickly with
amplitude as expected. The NPR for quadratic damping shows a much more
gentle increase, and the Coulomb friction function decreases with amplitude—
again in agreement with intuition. The function certainly gives an indication of
nonlinearity, but claims that it can suggest the type are probably rather optimistic.
The method is not restricted to SDOF systems. A case study is presented in
[141] and it is suggested that computing the NPRs for all elements of the FRF
matrix can yield information about the probable location of the nonlinearity.

4.6.2 Corehence
This measure of nonlinearity, based on the Hilbert transform, was introduced in
[213] as an adjunct to the coherence function described in chapter 2. The basis of
the theory is the operator of linearity P, defined by⁹

    G̃(ω) = P(ω)G(ω).    (4.103)

The operator is the identity, P(ω) = 1 ∀ω, if the system is linear (i.e. G(ω) has a causal inverse Fourier transform). Deviations of P from unity indicate nonlinearity. Note that P is a function of the level of excitation. As in the case of the coherence γ² (chapter 2), it is useful to have a normalized form for the operator; this is termed the corehence, denoted here by κ²(ω). The definition is¹⁰

    κ²(ω) = |E{G̃(ω)G(ω)*}|² / ( E{|G̃(ω)|²} E{|G(ω)|²} ).    (4.104)

There appears to be one major advantage of corehence over coherence. Given a coherence which departs from unity, it is impossible to determine whether the departure is the result of nonlinearity or measurement noise. It is claimed in [213] that this is not the case for corehence: it responds only to nonlinearity. It is also stated that a coherence of unity does not imply that the system is linear; however, a rather unlikely type of nonlinearity is needed to create this condition. It is suggested that the corehence is more sensitive than the coherence.

4.6.3 Spectral moments


Consider a generic time signal x(t); this has a representation

    x(t) = (1/2π) ∫_{−∞}^{∞} dω e^{iωt} X(ω)    (4.105)

where X(ω) is the spectrum. It follows that, if x(t) is n-times differentiable,

    dⁿx/dtⁿ = (iⁿ/2π) ∫_{−∞}^{∞} dω ωⁿ e^{iωt} X(ω)    (4.106)
⁹ There are actually a number of P operators, each associated with a different FRF estimator, i.e. H₁, H₂ etc. The results in the text are for the estimator H₁(ω) = S_yx(ω)/S_xx(ω).
¹⁰ The actual definition in [213] is

    κ²(ω) = |G̃(ω)G(ω)*|² / ( |G̃(ω)|² |G(ω)|² ).

However, the expectation operators are implied; if the G(ω) and G̃(ω) are themselves expectations, expression (4.104) collapses to unity. There is, therefore, an implicit assumption that the form of excitation must be random, as it is in the case of the coherence. Now, it is stated above that the Hilbert transform of an FRF obtained from random excitation does not show distortions. This does not affect the utility of the corehence, as that statement only applies to the expectation of the FRF, i.e. the FRF after averaging. Because E{G̃G*} ≠ E{G̃}E{G*}, the corehence departs from unity for nonlinear systems.

so

    dⁿx/dtⁿ |_{t=0} = (iⁿ/2π) ∫_{−∞}^{∞} dω ωⁿ X(ω) = (iⁿ/2π) M⁽ⁿ⁾    (4.107)

where M⁽ⁿ⁾ denotes the nth moment integral of X(ω), or the nth spectral moment, ∫_{−∞}^{∞} dω ωⁿ X(ω). Now it follows from the Taylor series

    x(t) = Σ_{n=0}^{∞} (tⁿ/n!) dⁿx/dtⁿ |_{t=0} = (1/2π) Σ_{n=0}^{∞} ((it)ⁿ/n!) M⁽ⁿ⁾    (4.108)

that the function x(t) is specified completely by the set of spectral moments. As a result, X(ω) is also specified by this set of numbers. The moments offer a means of characterizing the shape of the FRF or the corresponding Hilbert transform
in terms of a small set of parameters. Consider the analogy with statistical
theory: there, the mean and standard deviation (first- and second-order moments)
of a probability distribution establish the gross features of the curve. The third-
and fourth-order moments describe more subtle features—the skewness and the
‘peakiness’ (kurtosis). The latter features are considered to be measures of the
distortion from the ideal Gaussian form. The zeroth moment is also informative;
this is the energy or area under the curve.
Assuming that the moments are estimated for a single resonance between ω_min and ω_max, the spectral moments of an FRF G(ω) are

    M_G⁽ⁿ⁾ = ∫_{ω_min}^{ω_max} dω ωⁿ G(ω).    (4.109)

Note that they are complex and, in general, depend on the limits; for consistency, the half-power points are usually taken. The moments are approximated in practice by

    M_G⁽ⁿ⁾ ≈ Σ_{ω_k = ω_min}^{ω_max} ω_kⁿ G(ω_k) Δω    (4.110)

where Δω is the spectral line spacing.
So-called Hilbert transform describers—HTDs—are then computed from

    HTD⁽ⁿ⁾ = 100 ( M_G̃⁽ⁿ⁾ − M_G⁽ⁿ⁾ ) / M_G⁽ⁿ⁾    (4.111)

and these are simply the percentage differences between the Hilbert transform moments and the original FRF moments.
In practice, only the lowest-order moments have been investigated; in the
terminology of [145], they are
    real energy ratio (RER) = Re HTD⁽⁰⁾
    imaginary energy ratio (IER) = Im HTD⁽⁰⁾
    real frequency ratio (RFR) = Re HTD⁽¹⁾.

Figure 4.30. The variation in Hilbert transform describers (HTDs) for various SDOF
nonlinear systems.

They are supplemented by the

    imaginary amplitude ratio (IAR) = Im[ 100 (N_G̃ − N_G)/N_G ]

where

    N_G = ∫_{ω_min}^{ω_max} dω G(ω)²    (4.112)

(which is essentially the centroid of the FRF about the ω-axis).
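The moment and describer calculations are straightforward to sketch for sampled FRFs. In the fragment below (grid and parameters are illustrative), a linear FRF is used, so G̃ = G and every describer vanishes:

```python
import numpy as np

# Sketch of the discrete moments (4.110) and describers (4.111).
# G and its Hilbert transform Gt are assumed available over a single
# resonance; for this linear FRF, Gt = G and all the HTDs are zero.
m, c, k = 1.0, 20.0, 1.0e4
w = np.linspace(80.0, 120.0, 401)              # rad/s, around the resonance
G = 1.0 / (-m * w**2 + 1j * c * w + k)
Gt = G.copy()                                  # linear system: H{G} = G

def moment(F, w, n):
    # discrete approximation (4.110) with spectral line spacing dw
    dw = w[1] - w[0]
    return np.sum(w**n * F) * dw

def htd(G, Gt, w, n):
    return 100.0 * (moment(Gt, w, n) - moment(G, w, n)) / moment(G, w, n)

RER = htd(G, Gt, w, 0).real     # real energy ratio
IER = htd(G, Gt, w, 0).imag     # imaginary energy ratio
RFR = htd(G, Gt, w, 1).real     # real frequency ratio
assert abs(RER) < 1e-9 and abs(IER) < 1e-9 and abs(RFR) < 1e-9
```

With a genuinely distorted G̃ from a nonlinear system, the same functions yield the nonzero describers plotted in figure 4.30.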

Figure 4.30 shows the plots of the HTD statistics as a function of applied
force for several common nonlinearities. The parameters appear to separate
stiffness and damping nonlinearities very effectively. Stiffness nonlinearity is
identified from the changes in the RFR and IAR, while damping nonlinearity
is indicated by changes in the energy statistics without change in the other
describers. Note that the describers tend to zero at low forcing for the polynomial
nonlinearities as expected; G̃ → G in this region. For the discontinuous
nonlinearities, clearance and friction, the describers tend to zero at high forcing
as the behaviour near the discontinuities becomes less significant. The describers
therefore indicate the level of forcing at which the FRF of the underlying linear
system can be extracted.

4.7 Measurement of apparent damping


It is well known that the accurate estimation of damping for lightly damped and/or nonlinear structures presents a difficult problem. In the first case, traditional methods of curve-fitting to FRFs break down due to low resolution of the peaks. In the second case, the damping ratio ζ = c/(2√(km)) is not constant, whether the nonlinearity is in stiffness or damping (as a result, the term apparent damping ratio is used). However, it transpires that there is an effective procedure based on the Hilbert transform [245], which has actually been implemented on several commercial FRF analysers. The application to light damping is discussed in [4, 5].
Investigations of nonlinear systems are presented in [187, 188].
The basis of the method is the analytic signal. Consider the function e^{(−σ+iω)t} with σ > 0. It is shown in appendix C that there are relations between the real and imaginary parts:

    H{e^{−σt} sin(ωt)} = −i e^{−σt} cos(ωt)    (4.113)

and

    H{e^{−σt} cos(ωt)} = i e^{−σt} sin(ωt)    (4.114)

provided σ is small. These relations therefore apply to the impulse response of a linear system provided the damping ratio ζ is small (overall constant factors have no effect):

    h(t) = (1/(mω_d)) e^{−ζω_n t} sin(ω_d t),   t > 0    (4.115)

which can be interpreted as the real part of an analytic signal,

    a_h(t) = h(t) + h̃(t) = (1/(mω_d)) e^{−ζω_n t} sin(ω_d t) − i (1/(mω_d)) e^{−ζω_n t} cos(ω_d t)
           = −(i/(mω_d)) e^{(−ζω_n + iω_d)t}.    (4.116)

Now, the magnitude of this analytic signal is given by

    |a_h(t)| = √(h² − h̃²) = (1/(mω_d)) e^{−ζω_n t}    (4.117)

and this is revealed as the envelope of the impulse response (see section 3.12)¹¹. Taking the natural logarithm of this expression yields

    log|a_h(t)| = −ζω_n t − log(mω_d)    (4.118)

and this provides a new time-domain algorithm for estimating the damping of a system, given the linear system FRF H(ω):

(1) Take the inverse Fourier transform of H(ω) to get the impulse response h(t).
(2) Take the Hilbert transform of h(t) and form the analytic impulse response a_h(t) as in (4.116).
(3) Plot the log magnitude of a_h(t) against time; the gradient (extracted by a linear regression) is −σ = −ζω_n.
(4) If ω_d is measured, ω_n = √(ζ²ω_n² + ω_d²) = √(σ² + ω_d²) and ζ = σ/ω_n.
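The four steps can be sketched in NumPy for a simulated linear FRF (illustrative m, c, k, so ζω_n = c/2m = 2; the DC spectral line is zeroed here purely as a practical detail, so that the small constant offset does not mask the decaying tail of the envelope):

```python
import numpy as np

# Envelope-based damping estimate, steps (1)-(4), for a linear SDOF
# system.  The analytic impulse response is built by zeroing the
# negative-frequency half of the spectrum.
m, c, k = 1.0, 4.0, 1.0e4            # zeta*wn = c/(2m) = 2
N, dt = 8192, 0.001
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)
H = 1.0 / (-m * w**2 + 1j * c * w + k)

# (1) impulse response by inverse FFT
h = np.fft.ifft(H).real
# (2) analytic impulse response a_h(t)
spec = np.fft.fft(h)
spec[0] = 0.0                        # drop the DC line (practical detail)
spec[1:N // 2] *= 2.0
spec[N // 2 + 1:] = 0.0
ah = np.fft.ifft(spec)
# (3) regress log|a_h| on time over a clean stretch of the decay;
#     the gradient is -zeta*wn
t = np.arange(N) * dt
sl = slice(200, 1500)
grad, _ = np.polyfit(t[sl], np.log(np.abs(ah[sl])), 1)
zeta_wn = -grad
print(zeta_wn)                       # close to c/(2m) = 2
```

Step (4) would then combine ζω_n with a measured ω_d to separate ζ and ω_n.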

There are no real subtleties involved in applying the method to a nonlinear system. The only critical factor is the choice of excitation. It can be shown that random excitation properly represents the apparent damping (in the sense that the FRF S_yx/S_xx correctly represents the amount of power dissipated), so this is the appropriate excitation. Note that curve-fitting to the FRF would also characterize the damping; this method is of interest because it extends to light damping, is more insensitive to noise and also because it makes neat use of the Hilbert transform.
To illustrate the procedure, random excitation FRFs were obtained for the
Duffing oscillator system

    ÿ + 5ẏ + 10⁴y + 10⁹y³ = x(t)    (4.119)

at low and high levels of excitation. Figure 4.31 shows the corresponding log
envelopes. Extremely clear results are obtained in both cases. In contrast, the
corresponding FRFs with curve-fits are shown in figure 4.32. The high excitation
FRF is significantly noisier.
¹¹ Note that using the conventional definition of the analytic signal and Hilbert transform given in the earlier footnote, equation (4.116) is modified to

    a_h(t) = h(t) + i h̃(t) = (1/(mω_d)) e^{−ζω_n t} sin(ω_d t) − i (1/(mω_d)) e^{−ζω_n t} cos(ω_d t) = −(i/(mω_d)) e^{(−ζω_n + iω_d)t}

and equation (4.117) becomes

    |a_h(t)| = √(h² + h̃²) = (1/(mω_d)) e^{−ζω_n t}

and the argument then proceeds unchanged.

Figure 4.31. Impulse response and envelope function for a nonlinear system under random
excitation: (a) low level; (b) high level.

An experimental example for an impacting cantilever beam (figure 4.33) also shows the utility of the method. Figure 4.34 shows the FRF, impulse response
and log envelope for the low excitation case where the system does not impact.
Figure 4.35 shows the corresponding plots for the high-excitation contacting
case—note that the FRF is considerably noisier. If the initial, linear, portions of
the log envelope curves are used for regression, the resulting natural frequencies
and damping ratios are given in figure 4.36.
The apparent variation in damping ratio ζ is due to the fact that the definition ζ = c/(2√(km)) depends on the nonlinear stiffness. The corresponding value of c should be constant (by linearization arguments presented in chapter 2).

4.8 Identification of nonlinear systems


The method described in this section is the result of a programme of research by
Feldman [92, 93, 94]. It provides a means of obtaining the stiffness and damping

Figure 4.32. Result of curve-fitting FRFs for data in figure 4.31.



Figure 4.33. Nonlinear (impacting) cantilever beam test rig.

characteristics of SDOF systems. There are essentially two approaches: one based on free vibration (FREEVIB) and one on forced vibration (FORCEVIB). They will be discussed separately. Note that Feldman uses the traditional definition of the analytic signal and time-domain Hilbert transform throughout his analysis.

Figure 4.34. Data from the nonlinear beam in non-impacting condition: (a) measured
FRF; (b) calculated impulse response; (c) calculated envelope.

Figure 4.35. Data from the nonlinear beam impacting condition: (a) measured FRF;
(b) calculated impulse response; (c) calculated envelope.

Figure 4.36. Results of the estimated natural frequency and apparent damping ratio for the
impacting cantilever: (a) linear regime; (b) nonlinear regime.

4.8.1 FREEVIB
Consider an SDOF nonlinear system under free vibration:

    ÿ + h(ẏ)ẏ + ω₀²(y)y = 0.    (4.120)

The object of the exercise of identification is to use measured data, say y(t), and deduce the forms of the nonlinear damping function h(ẏ) and nonlinear stiffness k(y) = ω₀²(y).

The method is based on the analytic signal defined in (4.82),

    Y(t) = y(t) + ỹ(t)    (4.121)

and uses the magnitude and phase representation

    Y(t) = A(t) e^{iψ(t)}    (4.122)

where A(t) is the instantaneous magnitude or envelope, and ψ(t) is the instantaneous phase. Both are real functions, so

    y(t) = A(t) cos(ψ(t)),   ỹ(t) = iA(t) sin(ψ(t))    (4.123)

and

    A(t) = √( y(t)² − ỹ(t)² )    (4.124)

    ψ(t) = tan⁻¹[ ỹ(t) / (i y(t)) ].    (4.125)
So both envelope and phase are available as functions of time if y(t) is known and ỹ(t) can be computed. The derivatives can also be computed, either directly or using the relations

    Ȧ(t) = [ y(t)ẏ(t) − ỹ(t)(dỹ/dt) ] / √( y(t)² − ỹ(t)² ) = A(t) Re[ Ẏ(t)/Y(t) ]    (4.126)

    ω(t) = ψ̇(t) = −i[ y(t)(dỹ/dt) − ẏ(t)ỹ(t) ] / ( y(t)² − ỹ(t)² ) = Im[ Ẏ(t)/Y(t) ]    (4.127)

where ω(t) is the instantaneous frequency, again a real signal. The last two equations can be used to generate the first two derivatives of the analytic signal

    Ẏ(t) = Y(t)[ Ȧ(t)/A(t) + iω(t) ]    (4.128)

    Ÿ(t) = Y(t)[ Ä(t)/A(t) − ω(t)² + 2i Ȧ(t)ω(t)/A(t) + iω̇(t) ].    (4.129)
Now, consider the equation of motion (4.120), with h(ẏ(t)) = h(t) and ω₀²(y(t)) = ω₀²(t) considered purely as functions of time (there is a slight abuse of notation here). Because the functions h and ω₀² will generally be low-order polynomials of the envelope A, they will have a lowpass characteristic. If the resonant frequency of the system is high, y(t) will, roughly speaking, have a highpass characteristic. This means that h and y can be considered as non-overlapping signals (see appendix C), as can ω₀² and y. If the Hilbert transform

is taken of (4.120), it will pass through the functions h and ω₀². Further, the transform commutes with differentiation (appendix C again), so

    d²ỹ/dt² + h(t) dỹ/dt + ω₀²(t) ỹ = 0.    (4.130)

Adding (4.120) and (4.130) yields a differential equation for the analytic signal Y, i.e.

    Ÿ + h(t)Ẏ + ω₀²(t)Y = 0    (4.131)

or, in the quasi-linear form,

    Ÿ + h(A)Ẏ + ω₀²(A)Y = 0.    (4.132)

Now, the derivatives Ÿ and Ẏ are known functions of A and ω by (4.128) and (4.129). Substitution yields

    Y[ Ä/A − ω² + ω₀² + h Ȧ/A + i( 2Ȧω/A + ω̇ + hω ) ] = 0.    (4.133)

Separating out the real and imaginary parts gives

    h(t) = −2 Ȧ/A − ω̇/ω    (4.134)

    ω₀²(t) = ω² − Ä/A − h Ȧ/A    (4.135)

or

    ω₀²(t) = ω² − Ä/A + 2 Ȧ²/A² + Ȧω̇/(Aω)    (4.136)

and these are the basic equations of the theory.
On to practical matters. Suppose the free vibration is induced by an impulse; the subsequent response of the system will take the form of a decay. y(t) can be measured and ỹ(t) can then be computed¹². This means that A(t) and ω(t) are available by using (4.124) and (4.125) and numerically differentiating ψ(t).
Now, consider how the damping function is obtained. h(t) is known from
(4.134). As A(t) is monotonically decreasing (energy is being dissipated), the
inverse function t(A) is single-valued and can be obtained from the graph of
A(t) against time (figure 4.37). The value of h(A) is simply the value of h(t)
at $t(A)$ (figure 4.38). Similarly, the stiffness function is obtained via the sequence $A \to t(A) \to \omega_0^2(t(A)) = \omega_0^2(A)$. The inverse of the latter mapping, $A(\omega)$, is sometimes referred to as the backbone curve of the system. (For fairly simple systems like the Duffing oscillator, the backbone curves can be calculated [41].)
¹² As in the frequency-domain case, there are a number of methods of computing $\tilde{y}$; the decomposition $H = F^{-1} \circ \mathrm{sgn} \circ F$ provides one.
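The frequency-domain route of the footnote can be sketched directly (the $-i\,\mathrm{sgn}(\omega)$ multiplier and overall sign convention are assumed here): transform, flip the spectrum sign-wise, invert. The quadrature of $\cos\omega t$ should come out as $\sin\omega t$.

```python
# Quadrature signal via the FFT: multiply the spectrum by -i*sgn(omega)
# and invert (sign convention assumed).
import numpy as np

def quadrature(y):
    Yf = np.fft.fft(y)
    f = np.fft.fftfreq(len(y))                 # signed frequency bins
    return np.fft.ifft(-1j * np.sign(f) * Yf).real

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
y = np.cos(2 * np.pi * 50.0 * t)
err = np.max(np.abs(quadrature(y) - np.sin(2 * np.pi * 50.0 * t)))
print(err)
```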
Identification of nonlinear systems 183
Figure 4.37. Envelope used in Feldman’s method.
Figure 4.38. Damping curve for Feldman’s method.
Once $h(A)$ and $\omega_0^2(A)$ are known, the damper and spring characteristics $f_d(A)$ and $f_s(A)$ can be obtained trivially:

$$f_d(A) = \omega(A)\,A\,h(A) \qquad (4.137)$$

$$f_s(A) = A\,\omega_0^2(A). \qquad (4.138)$$
Note that as there are no assumptions on the forms of $f_d$ and $f_s$, the method is truly non-parametric. However, once the graphs $A \to f_d$ etc. have been obtained, linear least-squares methods (as described in chapter 6) suffice to estimate parameters.
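For instance, if the estimated stiffness characteristic is believed to be cubic, $f_s(A) = kA + k_3A^3$, the problem is linear in $(k, k_3)$ and a one-line least-squares fit suffices. A sketch on synthetic, noise-free data (all values hypothetical):

```python
# Linear least-squares fit of a cubic stiffness characteristic f_s(A).
import numpy as np

k_true, k3_true = 1.0e4, 5.0e4
A = np.linspace(0.05, 0.5, 40)            # envelope amplitudes
fs_A = k_true * A + k3_true * A**3        # 'measured' characteristic

Phi = np.column_stack([A, A**3])          # regressors: model linear in (k, k3)
k_est, k3_est = np.linalg.lstsq(Phi, fs_A, rcond=None)[0]
print(k_est, k3_est)
```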
The method can be readily illustrated using data from numerical
simulation¹³. The first system is a Duffing oscillator with equation of motion
$$\ddot{y} + 10\dot{y} + 10^4 y + 5\times 10^4 y^3 = 0 \qquad (4.139)$$
13 The results for figures 4.39–4.41 were obtained by Dr Michael Feldman—the authors are very
grateful for permission to use them.
Figure 4.39. Identification of cubic stiffness system: (a) impulse response; (b)
envelope; (c) backbone curve; (d) damping curve; (e) stiffness characteristic; (f ) damping
characteristic.
Figure 4.39. (Continued)
and initial condition $\dot{y}(0) = 200$. Figure 4.39(a) shows the decaying displacement
and the envelope computed via equation (4.124). Figure 4.39(b) shows the
corresponding instantaneous frequency obtained from (4.127). The backbone and
damping curve are given in figures 4.39(c) and (d) respectively. As expected
for a stiffening system, the natural frequency increases with the amplitude of
excitation. Apart from a high-frequency modulation, the damping curve shows
constant behaviour. Using equations (4.137) and (4.138), the stiffness and damping curves can be obtained and these are shown in figures 4.39(e) and (f).
The second example shows the utility of the method for non-parametric
system identification. The system has a stiffness deadband; the equation of motion is

$$\ddot{y} + 5\dot{y} + f_s(y) = 0 \qquad (4.140)$$
where

$$f_s(y) = \begin{cases} 10^4(y - 0.1), & y > 0.1 \\ 0, & |y| \le 0.1 \\ 10^4(y + 0.1), & y < -0.1 \end{cases} \qquad (4.141)$$
and the motion began with $\dot{y}(0) = 200$ once more. The sequence of figures
4.40(a)–(f ) show the results of the analysis. The backbone curve (figure 4.40(c))
shows the expected result that the natural frequency is only sensitive to the
nonlinearity for low levels of excitation. The stiffness curve (figure 4.40(e))
shows the size of the deadband quite clearly. (This is useful information: if the clearance is specified, the parameter estimation problem becomes linear and
Figure 4.40. Identification of backlash system: (a) impulse response; (b) envelope; (c) backbone curve; (d) damping curve; (e) stiffness characteristic; (f) damping characteristic.
Figure 4.40. (Continued)
simple methods suffice to estimate the stiffness function.) Note that because the
initial displacement did not decay away completely, there are gaps in the stiffness
and damping functions at low amplitude.
The final example shows a damping nonlinearity. The system has equation
of motion
$$\ddot{y} + 300\,\mathrm{sgn}(\dot{y}) + 10^4 y = 0 \qquad (4.142)$$
so Coulomb friction is present. The decay began with the same initial conditions as before and the resulting analysis is shown in figures 4.41(a)–(f). Note the
characteristic linear decay envelope for this type of nonlinear system as shown
in figure 4.41(a). In this case, the backbone (figure 4.41(c)) shows no variation of
natural frequency with amplitude as expected. The coefficient of friction can be
read directly from the damping function (figure 4.41(f )).
Further examples of nonlinear systems can be found in [93, 95]. A practical
application to a nonlinear ocean mooring system is discussed in [120].
All of these examples have viscous damping models. It is a simple matter to modify the theory for structural (hysteretic) damping; the equation of motion for
the analytic signal becomes

$$\ddot{Y} + \omega_0^2(A)\left[1 + i\,\delta(A)\right]Y = 0 \qquad (4.143)$$

where $\delta(A)$ is the loss factor or logarithmic decrement. The basic equations are

$$\omega_0^2(t) = \omega^2 - \frac{\ddot{A}}{A} \qquad (4.144)$$
Figure 4.41. Identification of Coulomb friction system: (a) impulse response; (b)
envelope; (c) backbone curve; (d) damping curve; (e) stiffness characteristic; (f ) damping
characteristic.
Figure 4.41. (Continued)
and

$$\delta(t) = -\frac{2\dot{A}\omega}{A\omega_0^2} - \frac{\dot{\omega}}{\omega_0^2}. \qquad (4.145)$$
The method described here is only truly suitable for monocomponent signals,
i.e. those with a single dominant frequency. The extension to two-component
signals is discussed in [96].
4.8.2 FORCEVIB
The analysis for the forced vibration case is very similar to FREEVIB; the
presence of the excitation complicates matters very little. Under all the same
assumptions as before, the quasi-linear equation of motion for the analytic signal
can be obtained:

$$\ddot{Y} + h(A)\dot{Y} + \omega_0^2(A)Y = \frac{X}{m}. \qquad (4.146)$$
Carrying out the same procedures as before which lead to equations (4.134) and (4.135) yields

$$h(t) = \frac{\beta(t)}{m\omega} - 2\frac{\dot{A}}{A} - \frac{\dot{\omega}}{\omega} \qquad (4.147)$$

and

$$\omega_0^2(t) = \frac{\alpha(t)}{m} + \omega^2 - \frac{\beta(t)\dot{A}}{mA\omega} - \frac{\ddot{A}}{A} + 2\frac{\dot{A}^2}{A^2} + \frac{\dot{A}\dot{\omega}}{A\omega} \qquad (4.148)$$
where $\alpha(t)$ and $\beta(t)$ are, respectively, the real and imaginary parts of the input/output ratio $X/Y$, i.e.

$$\frac{X(t)}{Y(t)} = \alpha(t) + i\beta(t) = \frac{x(t)y(t) + \tilde{x}(t)\tilde{y}(t)}{y(t)^2 + \tilde{y}(t)^2} + i\,\frac{\tilde{x}(t)y(t) - x(t)\tilde{y}(t)}{y(t)^2 + \tilde{y}(t)^2} \qquad (4.149)$$

where $x(t)$ is the real part of $X(t)$, i.e. the original physical excitation.
Implementation of this method is complicated by the fact that an estimate of
the mass m is needed. This problem is discussed in detail in [94].
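Equation (4.149) is easy to verify on a steady-state harmonic record: for a linear system $m\ddot{y} + c\dot{y} + ky = x$ driven at frequency $\omega$, the ratio $X/Y$ should settle to $\alpha = m(\omega_0^2 - \omega^2)$ and $\beta = c\omega$. A sketch with hypothetical parameter values:

```python
# Check of the input/output ratio (4.149) on a steady-state harmonic signal.
import numpy as np
from scipy.signal import hilbert

m, c, k = 1.0, 2.0, 1.0e4
w0sq = k / m
w = 2 * np.pi * 10.0                       # driving frequency (rad/s)
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)          # exactly 20 periods of 10 Hz

x = np.cos(w * t)                          # excitation
Hw = 1.0 / (m * (w0sq - w**2) + 1j * c * w)
y = (Hw * np.exp(1j * w * t)).real         # steady-state response

xt, yt = hilbert(x).imag, hilbert(y).imag  # quadrature signals
denom = y**2 + yt**2
alpha = (x * y + xt * yt) / denom          # real part of X/Y, per (4.149)
beta = (xt * y - x * yt) / denom           # imaginary part of X/Y, per (4.149)

a_est, b_est = np.median(alpha), np.median(beta)
print(a_est, b_est)   # approximately m*(w0^2 - w^2) and c*w
```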
4.9 Principal component analysis (PCA)
This is a classical method of multivariate statistics and its theory and use are
documented in any textbook from that field (e.g. [224]). Only the briefest
description will be given here. Given a set of $p$-dimensional vectors $\{x\} = (x_1,\ldots,x_p)$, the principal component algorithm seeks to project the data, by a linear transformation, into a new $p$-dimensional set of Cartesian coordinates $(z_1, z_2, \ldots, z_p)$. The new coordinates have the following property: $z_1$ is the linear combination of the original $x_i$ with maximal variance, $z_2$ is the linear combination which explains most of the remaining variance and so on. It should be clear that, if the $p$ coordinates are actually a linear combination of $q < p$ variables, the first $q$ principal components will completely characterize the data and the remaining $p - q$ will be zero. In practice, due to measurement uncertainty, the principal components will all be nonzero and the user should select the number of significant components for retention.
Calculation is as follows: given data $\{x\}_i = (x_{1i}, x_{2i}, \ldots, x_{pi})$, $i = 1, \ldots, N$, form the covariance matrix $[\Sigma]$ (see appendix A—here the factor $1/(N-1)$ is irrelevant)

$$[\Sigma] = \sum_{i=1}^{N} (\{x\}_i - \{\bar{x}\})(\{x\}_i - \{\bar{x}\})^{\mathrm{T}} \qquad (4.150)$$
(where $\{\bar{x}\}$ is the vector of means of the $x$ data) and decompose so

$$[\Sigma] = [A][\Lambda][A]^{\mathrm{T}} \qquad (4.151)$$

where $[\Lambda]$ is diagonal. (Singular value decomposition can be used for this step [209].) The transformation to principal components is then

$$\{z\}_i = [A]^{\mathrm{T}}(\{x\}_i - \{\bar{x}\}). \qquad (4.152)$$
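A minimal numerical sketch of (4.150)–(4.152) follows, with an eigendecomposition standing in for the SVD mentioned in the text; the data are synthetic, with the third coordinate an exact linear combination of the first two, so the third principal value should vanish:

```python
# PCA via equations (4.150)-(4.152) on synthetic 3-D data that really
# lives on a 2-D plane (x3 = x1 + x2).
import numpy as np

rng = np.random.default_rng(0)
x12 = rng.normal(size=(500, 2))
X = np.column_stack([x12, x12.sum(axis=1)])   # N=500 vectors, p=3

xbar = X.mean(axis=0)
Xc = X - xbar
Sigma = Xc.T @ Xc                    # covariance matrix, equation (4.150)
lam, A = np.linalg.eigh(Sigma)       # [Sigma] = [A][Lambda][A]^T, (4.151)
lam, A = lam[::-1], A[:, ::-1]       # sort by decreasing variance
Z = Xc @ A                           # principal components, (4.152)
print(lam / lam.sum())               # variance fractions; the last is ~0
```

The columns of `Z` are uncorrelated by construction, and only the first two carry any variance, so the data could be retained in two dimensions without loss.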
Considered as a means of dimension reduction then, PCA works by discarding those linear combinations of the data which contribute least to the overall variance or range of the data set. Another way of looking at the transformation is to consider it as a means of identifying correlations or
redundancy in data. The transformation to principal components results in uncorrelated vectors and thus eliminates the redundancy.

Figure 4.42. FRF H1 for symmetric 2DOF linear system.
The first applications of the method in dynamics date back to the early 1980s.
One of the first references is by Moore [191]. The first applications in modal
testing or structural dynamics are due to Leuridan [163, 164]. In both cases, the
object of the exercise was model reduction.
Consider a structure instrumented with $p$ sensors, say measuring displacement. At each time instant $t$, the instrumentation returns a vector of measurements $\{y(t)\} = (y_1(t), \ldots, y_p(t))$. Because of the dynamical interactions between the coordinates there will be some correlation and hence redundancy; using PCA this redundancy can potentially be eliminated, leaving a lower-dimensional vector of ‘pseudo-sensor’ measurements which are linear
combinations of the original, yet still encode all the dynamics. This was the idea of Leuridan.

Figure 4.43. FRF H2 for symmetric 2DOF linear system.
In terms of sampled data, there would be $N$ samples of $\{y(t)\}$ taken at regular intervals $\Delta t$. These will be denoted $\{y(t_i)\}$, $i = 1, \ldots, N$. The signals observed from structures are usually zero-mean, so the covariance matrix for the system is

$$[\Sigma] = \sum_{i=1}^{N} \{y(t_i)\}\{y(t_i)\}^{\mathrm{T}}. \qquad (4.153)$$
It is not particularly illuminating to look at the principal time signals.
Visualization is much simpler in the frequency domain. The passage from time
to frequency is accomplished using the multi-dimensional version of Parseval’s
Theorem.

Figure 4.44. Principal FRF P H1 for symmetric 2DOF linear system.

For simplicity consider the continuous-time analogue of (4.153)

$$[\Sigma] = \int_{-\infty}^{\infty} \mathrm{d}t\, \{y(t)\}\{y(t)\}^{\mathrm{T}}. \qquad (4.154)$$
Taking Fourier transforms gives

$$[\Sigma] = \int_{-\infty}^{\infty} \mathrm{d}t \left[\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}\omega_1\, e^{i\omega_1 t}\{Y(\omega_1)\}\right] \left[\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}\omega_2\, e^{-i\omega_2 t}\{\overline{Y}(\omega_2)\}^{\mathrm{T}}\right] \qquad (4.155)$$
Figure 4.45. Principal FRF P H2 for symmetric 2DOF linear system.
where the reality of the time signals has been used. Rearranging yields

$$[\Sigma] = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}\omega_1 \int_{-\infty}^{\infty} \mathrm{d}\omega_2\, \{Y(\omega_1)\}\{\overline{Y}(\omega_2)\}^{\mathrm{T}} \left[\frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}t\, e^{i(\omega_1 - \omega_2)t}\right]. \qquad (4.156)$$

Now, using the integral representation of the $\delta$-function from appendix D, one finds

$$[\Sigma] = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}\omega_1 \int_{-\infty}^{\infty} \mathrm{d}\omega_2\, \{Y(\omega_1)\}\{\overline{Y}(\omega_2)\}^{\mathrm{T}}\, \delta(\omega_1 - \omega_2) \qquad (4.157)$$
Figure 4.46. Corrected principal FRF P H1 for symmetric 2DOF linear system.
and the projection property of $\delta(\omega)$ (again—appendix D) gives the final result

$$[\Sigma] = \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}\omega_1\, \{Y(\omega_1)\}\{\overline{Y}(\omega_1)\}^{\mathrm{T}} \qquad (4.158)$$
and the transformation which decorrelates the time signals also decorrelates the spectra. (In (4.158) the overline refers to the complex conjugate and not the mean. In order to avoid confusion with complex quantities, the mean will be expressed in the rest of this section using the expectation operator, i.e. $\bar{x} = E[x]$.)
Now suppose the system is excited at a single point with a white excitation so
Figure 4.47. Corrected principal FRF P H2 for symmetric 2DOF linear system.
that $X(\omega) = P$. This defines a vector of FRFs $\{H(\omega)\} = \{Y(\omega)\}/P$. Because

$$[\Sigma] = P^2\, \frac{1}{2\pi}\int_{-\infty}^{\infty} \mathrm{d}\omega\, \{H(\omega)\}\{\overline{H}(\omega)\}^{\mathrm{T}} \qquad (4.159)$$
the same principal component transformation as before also decorrelates the
FRFs. (A similar result occurs for systems excited by sinusoidal excitation.) This
offers the possibility of defining principal FRFs.
At this point it is useful to look at a concrete example. Consider the 2DOF
linear system,

$$m\ddot{y}_1 + c\dot{y}_1 + 2ky_1 - ky_2 = X\sin(\omega t) \qquad (4.160)$$

$$m\ddot{y}_2 + c\dot{y}_2 + 2ky_2 - ky_1 = 0. \qquad (4.161)$$
Figure 4.48. Principal FRFs for asymmetric 2DOF linear system.
Figure 4.49. FRF $\Lambda_1$ for symmetric 2DOF nonlinear system at low, medium and high excitation.
This defines a vector of FRFs $(H_1(\omega), H_2(\omega)) = (Y_1(\omega)/X, Y_2(\omega)/X)$. The FRFs $H_1$ and $H_2$ are shown in figures 4.42 and 4.43.
If the principal FRFs $PH_1(\omega)$ and $PH_2(\omega)$ are computed by the PCA procedure of (4.150)–(4.152) using the discrete version of (4.159)

$$[\Sigma] = \sum_{i=1}^{N/2} \{H(\omega_i)\}\{\overline{H}(\omega_i)\}^{\mathrm{T}} \qquad (4.162)$$
Figure 4.50. FRF $\Lambda_2$ for symmetric 2DOF nonlinear system at low, medium and high excitation.
Figure 4.51. Principal FRF $P\Lambda_1$ for symmetric 2DOF nonlinear system at low, medium and high excitation.
the results are as shown in figures 4.44 and 4.45. The decomposition appears to have almost produced a transformation to modal coordinates; both FRFs are only mildly distorted versions of SDOF FRFs. In fact, in this case, the distortions are simple to explain.
The previous argument showed that the principal component transformation
for time data also decorrelated the FRF vector. However, this proof used integrals

Figure 4.52. Principal FRF P 2 for symmetric 2DOF nonlinear system at low medium
and high excitation.
with infinite ranges. In practice, the covariance matrices are computed using
finite summations. In the time-domain case, this presents no serious problems
in applying (4.153) as long as the records are long enough that the means of the
signals approximate to zero. However, in the frequency domain, the FRFs are
not zero-mean due to the finite frequency range. This means that the covariance
matrix in (4.162) is inappropriate to decorrelate the FRF vector. The remedy is
$$[\Sigma] = \sum_{i=1}^{N/2} \left(\{H(\omega_i)\} - E[\{H(\omega_i)\}]\right)\left(\{\overline{H}(\omega_i)\} - E[\{\overline{H}(\omega_i)\}]\right)^{\mathrm{T}}. \qquad (4.163)$$
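The corrected prescription can be sketched end-to-end for the linear system (4.160) and (4.161): compute the two FRFs, form the mean-subtracted covariance of (4.163) and check that the principal directions approximate the modal vectors $(1, \pm 1)/\sqrt{2}$. Parameter values here are hypothetical:

```python
# Principal FRFs for the symmetric 2DOF system via the mean-subtracted
# covariance (4.163); hypothetical parameter values.
import numpy as np

m, c, k = 1.0, 2.0, 1.0e4
M, C = np.diag([m, m]), np.diag([c, c])
K = np.array([[2 * k, -k], [-k, 2 * k]])

w = 2 * np.pi * np.linspace(0.5, 60.0, 2048)
H = np.array([np.linalg.solve(K - wi**2 * M + 1j * wi * C, [1.0, 0.0])
              for wi in w])                  # rows hold (H1(w), H2(w))

Hc = H - H.mean(axis=0)                      # subtract the mean, as in (4.163)
Sigma = Hc.T @ Hc.conj()                     # 2x2 Hermitian covariance
lam, A = np.linalg.eigh(Sigma)
lam, A = lam[::-1], A[:, ::-1]               # decreasing variance
PH = Hc @ A.conj()                           # principal FRFs

mode1 = np.array([1.0, 1.0]) / np.sqrt(2.0)
align = max(abs(mode1 @ A[:, 0]), abs(mode1 @ A[:, 1]))
print(align)   # close to 1: a principal direction matches a mode shape
```

For this symmetric, unit-mass system the principal FRFs in `PH` come out close to SDOF modal FRFs, in line with figures 4.46 and 4.47; as the text notes, this decoupling is not generic.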
Using this prescription gives the principal FRFs shown in figures 4.46 and
4.47. This time the principal component transformation has produced modal
FRFs. Unfortunately, this situation is not generic. It is the result here of
considering a system with a high degree of symmetry; also the mass matrix is
unity and this appears to be critical. Figure 4.48 shows the principal FRFs for
a system identical to (4.160) and (4.161) except that the two equations have
different mass values—the decoupling property has been lost even though the
modal transformation can still achieve this. However, throughout the development
of the PCA method it was hoped that the principal FRFs would generally exhibit
some simplification.
In terms of nonlinear systems, the aim of PCA (or, as it is sometimes called, the Karhunen–Loève expansion [257]) is to localize the nonlinearity in a subset of the responses. By way of illustration, consider the system in (4.160) and (4.161) supplemented by a cubic stiffness nonlinearity connecting the two
Figure 4.53. Principal FRF $P\Lambda_1$ for symmetric 2DOF nonlinear system with Hilbert transform.
masses

$$m\ddot{y}_1 + c\dot{y}_1 + 2ky_1 - ky_2 + k_3(y_1 - y_2)^3 = X\sin(\omega t) \qquad (4.164)$$

$$m\ddot{y}_2 + c\dot{y}_2 + 2ky_2 - ky_1 + k_3(y_2 - y_1)^3 = 0. \qquad (4.165)$$
The FRFs for the system at a number of different levels of excitation are
given in figures 4.49 and 4.50. The distortion is only shown on the second mode
as this is the only nonlinear mode (as discussed in section 3.1). When the principal
FRFs are computed (figures 4.51 and 4.52), only the second principal FRF shows
the distortion characteristic of nonlinearity. Again one should not overemphasize
these results due to the high symmetry of the system.
Figure 4.54. Principal FRF $P\Lambda_2$ for symmetric 2DOF nonlinear system with Hilbert transform.
The reason for the presence of this section in this chapter is that any test for nonlinearity can be applied to the principal FRFs, including, of course, the Hilbert transform. This has been studied in the past by Ahmed [7] amongst others. Figures 4.53 and 4.54 show the result of applying the Hilbert transform to the principal FRFs for the system discussed earlier. As one might expect, the nonlinearity is only flagged for the second mode.
With that brief return to the Hilbert transform the chapter is concluded.
The Hilbert transform has been seen to be a robust and sensitive indicator of
nonlinearity. It is a little surprising that it has not yet been adopted by suppliers
of commercial FRF analysers. The next chapter continues the Hilbert transform
theme by considering an approach to the analysis which uses complex function
theory.