Smoothing Irregular Data Using Polynomial Filters
Pieter Reyneke∗ , Norman Morrison† , Derrick Kourie‡ and Corné de Ridder§
∗ Department Radar and Imaging Systems, Denel Dynamics, Irene, 0157, South Africa
† Department of Electrical Engineering, University of Cape Town, Cape Town, 7700, South Africa
‡ Fastar Research Group, University of Pretoria, Pretoria, 0002, South Africa
§ School of Computing, University of South Africa, Pretoria, 0002, South Africa
Email: [email protected]
Abstract—To make provision for irregularly occurring updates, a “variable-step” polynomial filter is derived which improves the smoothing and coasting capabilities of the 1-step predictor and the current-estimate polynomial filters. Similar to the original filters, the variable-step polynomial filter comprises an auto-initialising expanding memory filter which later switches to a fading memory filter. Results are compared by running the proposed and original filter versions in parallel.
Keywords—Radar tracking filters, Polynomial approximation,
Smoothing, Extrapolation, State estimation
I. INTRODUCTION
The 1-step predictor and current-estimate filters introduced
by Morrison in [1] and [2] both recursively calculate a least
squares approximation of a process model, determining the
state of a polynomial curve fitted over noisy observations. In
the present context, recursive refers to adding one observation
at a time. These filters are subject to the constraint that
observations should be evenly spaced. However, in natural
environments, measurements cannot always be assigned to an
integer-type time batch without losing accuracy. The reasons for this are threefold:
• High-fidelity time stamps are usually measured during real-time process control as floating point values.
• Floating point time values more accurately position updates in time, leading to less ambiguity caused by natural variations (related, for example, to temperature, target clutter, occlusion, etc.).
• Detector or algorithm design may result in non-deterministic jumps in the update intervals, or in uneven even/odd time-interval symmetry. Examples where variable update intervals can be expected include nodding algorithms, zig-zag sweep detectors, linear sensor integration and asynchronous mode changing.
For the above-mentioned reasons, this research proposes a variable-step extension to the polynomial filters derived in [1] and [2].
A. Background — Polynomial filters
Recursive polynomial filters calculate either a least squares
solution (the expanding memory polynomial (EMP)) or a
weighted least squares solution with the weight (θ ∈ (0, 1))
fading the previous state estimate, Z(t) = θZ(t−δ)+Γ(1−θ)
(the fading memory polynomial (FMP)). For the EMP, Γ(n)
(see Section IV-A), based on orthonormal discrete Legendre
polynomials, is used to update a least squares fit. In the case of
FMP, Γ(θ) (see Section IV-B), based on orthonormal discrete
Laguerre polynomials, is used to update a weighted least squares fit with an update weight of (1 − θ). This is done to
realise a recursive autoregressive state update in each case, as
derived in [2] — i.e. one sample at a time is added to the
current state vector.
Morrison [1] distinguishes between a current-estimate and a 1-step predictor. Equations 1 and 2 show the computation of the predicted state, Z*n, and the error term en for both the current-estimate filter and the 1-step predictor filter:

Z*n = Φ(1)Zn−1,    ... (predict state Z*n)    (1)
en = yn − z*0n,    ... (calculate error term en)    (2)
However, the formula for updating differs for the two respective cases, as shown in Equations 3 and 4. Furthermore, in each case below, either Γi(n) or Γi(θ) needs to be used in the place of Γi to do the update for the EMP or FMP respectively; see Sections IV-A and IV-B.
The current-estimate filter: Use either Γi(n) or Γi(θ) and do the update:

Zn = Z*n + Γi en    (3)

The 1-step predictor filter: Use either Γi(n) or Γi(θ) and do the update:

Φ(−1)Zn+1,n = Z*n + Γi en    (4)
This last step, Equation 4, written out for the 2nd-order, 1-step predictor update, renders:

z*2,n+1,n = z*2,n,n−1 + Γ2 en
z*1,n+1,n = z*1,n,n−1 + 2z*2,n+1,n + Γ1 en
z*0,n+1,n = z*0,n,n−1 + z*1,n+1,n − z*2,n+1,n + Γ0 en
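For concreteness, the predict/error/update cycle of Equations 1 to 3 can be written in a few lines of Matlab. The values below (Phi, Gamma, Z and y) are illustrative placeholders, not taken from [1] or [2]; Section IV gives the actual gain expressions.

  % Minimal sketch of the current-estimate cycle (Equations 1-3)
  % for a 1st-degree (constant-velocity) model with made-up values.
  Phi   = [1 1; 0 1];         % Phi(1): one-step STM for a 1st-degree model
  Gamma = [0.7; 0.3];         % example gain vector (see Section IV for real gains)
  Z     = [5; 1];             % previous estimate [z0; z1]
  y     = 6.4;                % new observation
  Zpred = Phi * Z;            % Equation 1: predict state Z*n
  e     = y - Zpred(1);       % Equation 2: error term en
  Z     = Zpred + Gamma * e;  % Equation 3: current-estimate update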
The remainder of this paper is laid out as follows. Section
II provides the classical linear tracking differential equation.
In Section III the motivation for a change in state transition
matrix is presented. A formal explanation of the extension
of the current-estimate filter to a variable-step polynomial
filter can be found in Section IV. Section V reports on
results obtained from trial runs on simulated polynomial data
thereby verifying the prediction capability of a variable-step
implementation. Section VI presents the smoothing results of
very noisy and irregular real data. Finally, results obtained during missile testing are provided in Section VII, before concluding the paper in Section VIII.
NOMENCLATURE

In what follows, the following notation is used: t represents time in seconds; τ represents the constant expected update period (or system batch time) in seconds; δ represents delta-time in seconds; η represents time in τ’s; and ζ represents delta-time in τ’s. Additionally, the term order is used to refer to a non-negative integer value attributed to a fit, a differential equation, a process model or a filter model; and the term degree is used to refer to a value attributed to a polynomial.
II. THE CLASSICAL DIFFERENTIAL EQUATION

To explain polynomial filters, consider the derivation of the classical 2nd-order linear differential equation (DE), where it is assumed that the 3rd derivative is 0:

ṙ = dr/dt,   r̈ = dṙ/dt,   dr̈/dt = 0    (5)

If r is defined as r = [r1, r2, r3]ᵀ = [r, ṙ, r̈]ᵀ, the DE becomes:

ṙ1 = r2,   ṙ2 = r3,   ṙ3 = 0    (6)

This may be written in matrix form as follows:

ṙ = Ar    (7)

where:

      | 0  1  0 |
A =   | 0  0  1 |    (8)
      | 0  0  0 |
Note that r = Z, which is the nth-order state vector. Ψ(t) = e^(At) is a solution for the linear DE system in Equation 7. The system has infinitely many solutions, depending on initial conditions. Note that Ψ(δ) can be used as a state transition matrix (STM).
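As a quick sanity check, Matlab’s expm function (referenced in [4]) reproduces the upper-triangular structure of Ψ(δ) for the nilpotent A of Equation 8; the sketch below uses an arbitrary δ.

  % Verify Psi(delta) = expm(A*delta) for the 2nd-order model of Equation 8.
  A     = [0 1 0; 0 0 1; 0 0 0];   % Equation 8
  delta = 0.5;                     % arbitrary example time step in seconds
  Psi   = expm(A * delta);         % matrix exponential, see [4]
  % Because A is nilpotent the series terminates:
  % Psi = [1 delta delta^2/2; 0 1 delta; 0 0 1], the 3x3 case of Equation 11.
  disp(Psi)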
III. TWO STATE TRANSITION MATRICES TO CHOOSE FROM
An STM is used to shift a DE system’s state vector from
time t by an amount δ, a real number, to time t+δ via a simple
multiplication. This is done without recalculating the state (Z).
Thus, using the STM one can shift the state as follows: Z ∗ (t+
δ) = Ψ(δ)Z(t). Note, the asterisk indicates that the newly
calculated state is an estimate at t + δ. The asterisk is omitted
at t since t is where the last fit was calculated.
As the STM of the polynomial one can either choose the Padé expanded STM, Ψ(δ), or the commonly used STM for polynomial filtering, Φ(δ) (see [1]). Both can be used directly for prediction of states at any time, with or without denormalisation. For example in our case, as will be seen, Z*η could be predicted from Zη−ζ directly before updating the state, where η = t/τ with τ a real positive number, and ζ = δ/τ.
The linear DE has the following discrete solution [3]:

r(t + δ) = Ψ(δ)r(t)    (9)

Note that [3] compares many algorithms for computing the matrix exponential. (See the expm function in [4].) The Padé expanded STM is:

[Ψ(δ)]i,j = δ^(j−i)/(j − i)!   {∀ i, j | 0 ≤ i ≤ j ≤ m},   0 elsewhere    (10)

where m = order + 1 is the dimension of the square matrix Ψ.
This transition matrix (exactly the same as the one well known from least squares) can be written out as follows:

         | 1  δ  δ²/2!  · · ·  δ^(m−1)/(m−1)!   δ^m/m!          |
         | 0  1  δ      · · ·  δ^(m−2)/(m−2)!   δ^(m−1)/(m−1)!  |
Ψ(δ) =   | 0  0  1      · · ·  δ^(m−3)/(m−3)!   δ^(m−2)/(m−2)!  |    (11)
         | :  :  :        .    :                 :               |
         | 0  0  0      · · ·  1                 δ               |
         | 0  0  0      · · ·  0                 1               |
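A direct Matlab construction of Ψ(δ) from Equation 10 is equally simple; the sketch below (with the hypothetical helper name make_psi) builds the matrix entry by entry.

  function Psi = make_psi(delta, m)
  % Build the Pade-expanded STM of Equation 10: entries delta^(j-i)/(j-i)!
  % for 0 <= i <= j <= m, zero elsewhere (illustrative sketch only).
  Psi = zeros(m + 1);
  for i = 0:m
      for j = i:m
          Psi(i + 1, j + 1) = delta^(j - i) / factorial(j - i);
      end
  end
  end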
The commonly used STM for polynomial filtering, defined in [1], is represented as follows:

Φi,j(δ) = binom(j, i) δ^(j−i)   {∀ i, j | 0 ≤ i ≤ j ≤ m},   0 elsewhere    (12)

        = [j!/((j − i)! i!)] δ^(j−i)    (13)

where m = order + 1 is the dimension of the square matrix Φ.
This original transition matrix, in Equation 13, can be written out as follows:

         | 1  δ   δ²  · · ·  δ^(m−1)                        δ^m                     |
         | 0  1  2δ   · · ·  [(m−1)!/((m−2)!1!)] δ^(m−2)    [m!/((m−1)!1!)] δ^(m−1) |
Φ(δ) =   | 0  0   1   · · ·  [(m−1)!/((m−3)!2!)] δ^(m−3)    [m!/((m−2)!2!)] δ^(m−2) |    (14)
         | :  :   :     .    :                               :                      |
         | 0  0   0   · · ·  1                               mδ                     |
         | 0  0   0   · · ·  0                               1                      |
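The conventional STM of Equation 13 can be built the same way; the sketch below leans on Matlab’s nchoosek for the binomial coefficient (the helper name make_phi is, again, only illustrative).

  function Phi = make_phi(delta, m)
  % Build the conventional polynomial-filter STM of Equation 13:
  % entries nchoosek(j,i)*delta^(j-i) for 0 <= i <= j <= m (sketch only).
  Phi = zeros(m + 1);
  for i = 0:m
      for j = i:m
          Phi(i + 1, j + 1) = nchoosek(j, i) * delta^(j - i);
      end
  end
  end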
The following discrete system results from the commonly
used STM:
r(t + δ) = Denorm(Φ(ζ))r(t)
(15)
where Denorm is the denormalisation function described in
[1] and in Section IV, Equation 19.
We prefer to utilise and extend the first STM option, presented in Equation 10 and written out in Equation 11, because Ψ(δ) can easily be differentiated by simply using one less row and column of Ψ(δ), e.g. in Matlab: z_dot = Psi(1:N-1,1:N-1) * z(2:N);. The same holds for higher-order derivatives. This characteristic is also applicable to the multi-dimensional case.
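Spelled out, the derivative of a predicted state follows from the sub-matrix directly; the fragment below assumes a column state vector z and the make_psi helper sketched earlier (both names are assumptions, not the authors’ code).

  % Predicted state and its first derivative using one less row and column
  % of Psi (sketch for a 3rd-order model, N = 4 state elements).
  N      = 4;
  z      = [10; 2; 0.5; 0.1];          % example state [z0; z1; z2; z3] at time t
  delta  = 0.25;                       % prediction interval in seconds
  Psi    = make_psi(delta, N - 1);     % N x N STM
  z_pred = Psi * z;                    % full predicted state at t + delta
  z_dot  = Psi(1:N-1, 1:N-1) * z(2:N); % predicted derivative state at t + delta

The same sub-matrix trick repeats for higher derivatives by trimming a further row and column each time.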
A. A note on multi-dimensional use
There are essentially two ways to expand the system to
three or more dimensions: either replicate the observation and
state vectors in the row direction (i.e. make it wider); or block
expand the matrix with zeros. Commonly, state estimation
methods block expand the matrix with zeros. In our case, only
column replication of r in Equation 15 was performed — r
becomes a 3-D vector. For the 2nd-order case:

            | x  y  z |
r(= Z) :=   | ẋ  ẏ  ż |    (16)
            | ẍ  ÿ  z̈ |
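Because the STM multiplies the state from the left, the 3-D extension of Equation 16 needs no block expansion; a single matrix product predicts all three axes at once, as in this short sketch (values are made up).

  % 2nd-order, 3-D state as in Equation 16: columns are x, y and z.
  Z3  = [  0     0     0   ;          % x,   y,   z
          10    -3     1   ;          % xd,  yd,  zd
           0.2   0    -9.81];         % xdd, ydd, zdd
  Psi = make_psi(0.5, 2);             % hypothetical helper from the sketch above
  Z3p = Psi * Z3;                     % predicted 3-D state, still 3 x 3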
B. Altering the original EMP and FMP to use the preferred STM form, i.e. Ψ, to simplify estimation and prediction
The original expanding memory polynomial (EMP) and fading memory polynomial (FMP) filters as derived in Chapters
9 and 13 of [2], can be adapted to utilise the STM in Equation
10.
ΓΨ is defined in terms of Γ(= ΓΦ ) for the ith -degree
polynomial set for both the EMP and FMP filters, as follows:
ΓΨ (j, i) = Γ(j, i) × (i − j)!
where j ∈ [0, i].
As an example, the change for the 4th-degree ΓΨ is as presented:

ΓΨ(4, 4) = αΨ = α × 0!
ΓΨ(3, 4) = βΨ = β × 1!
ΓΨ(2, 4) = γΨ = γ × 2!
ΓΨ(1, 4) = δΨ = δ × 3!
ΓΨ(0, 4) = εΨ = ε × 4!
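In code, this rescaling is a single element-wise product; the sketch below assumes the gain vector is stored in state order, [α; β; γ; ...], so that element k is scaled by k!.

  function gain_psi = gamma_phi_to_psi(gain_phi)
  % Rescale a gain vector from the Phi form to the Psi form (sketch).
  % gain_phi is assumed ordered like the state vector, [alpha; beta; ...],
  % so element k (k = 0..degree) picks up a factor k!, matching
  % Gamma_Psi(j, i) = Gamma(j, i) * (i - j)!.
  k        = (0:numel(gain_phi) - 1)';
  gain_psi = gain_phi(:) .* factorial(k);
  end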
IV. SOLUTION: EXTENDING THE CURRENT-ESTIMATE POLYNOMIAL FILTER TO A VARIABLE-STEP FILTER

In this section, the variable-step filter version is derived from the current-estimate filter (see Chapter 12 of [1]). This filter was then implemented in Matlab and verified on various scenarios.

Firstly, η is defined as the normalised time — a real number measured as a multiple of the expected update period (τ) from the start time of a track (t0):

η = (t − t0)/τ    (17)

Secondly, ζ = δ/τ defines the normalised delta-time, a real number, so that

η + ζ = (t + δ − t0)/τ    (18)

Note, n = 0, 1, 2, . . ., the update- or batch number, as originally defined in [1], stays unchanged and is defined as the number of observations added to the EMP filter.

The correct time can be recovered by denormalising as follows:

δ = ζ × τ    (19)
t = η × τ + t0    (20)
t + δ = (η + ζ) × τ + t0    (21)

When using the variable time-step EMP and FMP filter versions, certain normalisations must be performed in order to make them parameter identical, i.e. “hot-pluggable” with the standard 1-step predictor and current-estimate filter versions described in [1] and [2]. The two filters are normalised/scaled respectively as follows:

• EMP: see Equation 17.
• FMP: in the case of the FMP the filter parameter θ needs to be normalised to ensure that the amount of fading per update time (τ) is the same as the original. This is done by calculating the effective theta (θeff) as follows: θeff = θ^ζ.

As previously discussed, in all cases normalised time η(t) starts at 0 at the track start time (t0) and is scaled (normalised) by the constant expected update period τ, i.e. η(t) = (t − t0)/τ (see Equation 17). Denormalisation differs slightly from merely substituting Equation 19. If one wishes to get the Φ-equivalent and original state vector, one needs to multiply the respective Ψij entries by ζ^(j−i)(j − i)!/i! and the state elements by ζ^i respectively. This was added in the test code in order to validate equivalence to the original.

This concludes the derivation. Using normalised time, the prediction/extrapolation/estimation formula up to batch time η becomes Z*η = Ψ(ζ)Zη−ζ, based on the previous fit done at batch time η − ζ, with η, ζ ∈ R.

A. The Expanding Memory Polynomial Filter

The update workflow for adding an observation, yn, to the EMP filter is as follows:

η = η + ζ,  n = n + 1    ... (n and η are updated)
Z*η = Ψ(ζ)Zη−ζ,    ... (predict by ζ)
eη = yn − z*0η,    ... (calculate the error)
Zη = Z*η + Γn eη,    ... (do the update)

The state vector update for the EMP filter — after prediction using Ψ up to the current normalised time η — now becomes (here x^(k) denotes the factorial polynomial x(x − 1)· · ·(x − k + 1)):

• 0th-degree:
  z0η = z*0η + α eη,     α = 1/(n + 1)

• 1st-degree:
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  β = 6/(n + 2)^(2)
  α = 2(2n + 1)/(n + 2)^(2)

• 2nd-degree:
  z2η = z*2η + (2!)γ eη
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  γ = 30/(n + 3)^(3)
  β = 18(2n + 1)/(n + 3)^(3)
  α = 3(3n² + 3n + 2)/(n + 3)^(3)

• 3rd-degree:
  z3η = z*3η + (3!)δ eη
  z2η = z*2η + (2!)γ eη
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  δ = 140/(n + 4)^(4)
  γ = 120(2n + 1)/(n + 4)^(4)
  β = 20(6n² + 6n + 5)/(n + 4)^(4)
  α = 8(2n³ + 3n² + 7n + 3)/(n + 4)^(4)

• 4th-degree:
  z4η = z*4η + (4!)ε eη
  z3η = z*3η + (3!)δ eη
  z2η = z*2η + (2!)γ eη
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  ε = 630/(n + 5)^(5)
  δ = 700(2n + 1)/(n + 5)^(5)
  γ = 1050(n² + n + 1)/(n + 5)^(5)
  β = 25(12n³ + 18n² + 46n + 20)/(n + 5)^(5)
  α = 5(5n⁴ + 10n³ + 55n² + 50n + 24)/(n + 5)^(5)
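To make the workflow above concrete, the following Matlab fragment sketches a single 2nd-degree, variable-step EMP update. The function make_psi is the hypothetical helper sketched in Section III, and the names and structure are illustrative rather than the authors’ reference code.

  function [Z, n, eta] = emp2_update(Z, n, eta, y, zeta)
  % One variable-step, 2nd-degree EMP cycle (sketch): advance counters,
  % predict by zeta, form the error and apply the Section IV-A gains.
  n     = n + 1;  eta = eta + zeta;            % n and eta are updated
  Zs    = make_psi(zeta, 2) * Z;               % predict by zeta
  e     = y - Zs(1);                           % error against the observation
  den   = (n + 3)*(n + 2)*(n + 1);             % (n+3)^(3), the factorial polynomial
  alpha = 3*(3*n^2 + 3*n + 2) / den;
  beta  = 18*(2*n + 1) / den;
  gamma = 30 / den;
  Z     = Zs + [alpha; beta; factorial(2)*gamma] * e;   % do the update
  end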
B. The Fading Memory Polynomial Filter

The update workflow for adding an observation, yn, to the FMP filter is as follows:

θ = θ^ζ    ... (θ is updated)
η = η + ζ    ... (η is updated)
Z*η = Ψ(ζ)Zη−ζ,    ... (predict by ζ)
eη = yn − z*0η,    ... (calculate the error)
Zη = Z*η + Γθ eη,    ... (do the update)

Similar to the update for the EMP filter, after prediction using Ψ, the details of the state vector update for the FMP filter are as follows:

• 0th-degree:
  z0η = z*0η + α eη,     α = 1 − θ

• 1st-degree:
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  β = (1 − θ)²
  α = 1 − θ²

• 2nd-degree:
  z2η = z*2η + (2!)γ eη
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  γ = (1/2)(1 − θ)³
  β = (3/2)(1 − θ)²(1 + θ)
  α = 1 − θ³

• 3rd-degree:
  z3η = z*3η + (3!)δ eη
  z2η = z*2η + (2!)γ eη
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  δ = (1/6)(1 − θ)⁴
  γ = (1 − θ)³(1 + θ)
  β = (1/6)(1 − θ)²(11 + 14θ + 11θ²)
  α = 1 − θ⁴

• 4th-degree:
  z4η = z*4η + (4!)ε eη
  z3η = z*3η + (3!)δ eη
  z2η = z*2η + (2!)γ eη
  z1η = z*1η + β eη
  z0η = z*0η + α eη
  ε = (1/24)(1 − θ)⁵
  δ = (5/12)(1 − θ)⁴(1 + θ)
  γ = (5/24)(1 − θ)³(7 + 10θ + 7θ²)
  β = (5/12)(1 − θ)²(5 + 7θ + 7θ² + 5θ³)
  α = 1 − θ⁵
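Analogously, a single 2nd-degree, variable-step FMP update could look like the following sketch. It again uses the hypothetical make_psi helper, and it interprets the “θ is updated” step as computing the effective θ (θ^ζ) for the current interval only.

  function [Z, eta] = fmp2_update(Z, eta, y, zeta, theta)
  % One variable-step, 2nd-degree FMP cycle (sketch): fade by theta^zeta,
  % predict by zeta, then apply the Section IV-B gains.
  th    = theta^zeta;                          % effective theta for this step
  eta   = eta + zeta;                          % eta is updated
  Zs    = make_psi(zeta, 2) * Z;               % predict by zeta
  e     = y - Zs(1);                           % error against the observation
  alpha = 1 - th^3;
  beta  = (3/2)*(1 - th)^2*(1 + th);
  gamma = (1/2)*(1 - th)^3;
  Z     = Zs + [alpha; beta; factorial(2)*gamma] * e;   % do the update
  end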
V. TESTING

The Matlab code for an implementation of the EMP and FMP filters is extremely simple and available from the corresponding author.
Testing was done on the following scenarios.
A. The polynomial filter in a noisy environment
Here the original and modified versions of the polynomial filters are compared using known simulated polynomials of different orders. The simulation parameters were PD = 0.5, in the presence of σ = 50.0 additive noise, for an update period of τ = 1 s. The resulting graphs when approximating a 4th-degree polynomial with a 3rd-order filter model are shown in Figure 1. The filter parameter θ was set to 0.95.
Note that when choosing a constant update rate, the variable-step and original polynomial filters render near-identical state vector estimates. Thus, when the derived variable-step filter was compared to both original filters, the results were Σ(Zvstep − Zoriginal)² < 10⁻¹⁰ over the total track time of 100 s (τ = 1 s). During the comparison to the 1-step predictor, a “one batch” prediction was necessary on the variable-step filter, i.e. Z*η+1 = Ψ(1)Zη, before the mentioned sum-of-squares subtraction could be done.
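The comparison itself reduces to a few lines of Matlab; the sketch below uses Z_vstep and Z_orig as stand-ins for the two filters’ state histories over the same track (hypothetical names, dummy values).

  % Consistency check between the variable-step and the original filter
  % (sketch): with a constant update rate the accumulated squared
  % difference over the whole track should be negligible.
  Z_vstep = zeros(4, 100);                       % placeholder state history
  Z_orig  = Z_vstep;                             % identical by construction here
  sq_diff = sum(sum((Z_vstep - Z_orig).^2));     % total squared difference
  assert(sq_diff < 1e-10, 'variable-step and original filters diverge');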
Fig. 1. 4th-degree polynomial function: (a) position and position-errors; (b) 4th-degree state vector. (Panel annotations: Pd = 0.5, Ps = 0, Ns = 50, Sjitter = 0.3.)
VI. RESULTS FROM SMOOTHING VERY NOISY FLIGHT DATA THAT VERIFY SMOOTHING AND PREDICTION CAPABILITY
Figure 2 displays the smoothing achieved on “detected area in pixels”, an irregularly detected feature extracted from noisy flight data. The data-set parameters were as follows: PD ≈ 0.25, in the presence of non-linearly changing additive noise σ as a function of range, and an irregular update period of around τ = 0.01 s.
Fig. 2. 3rd-degree polynomial filter on a very noisy image feature, detected area: (a) position and position-errors; (b) 3rd-degree state vector.
VII. RESULTS FROM SMOOTHING A FLIGHT PROFILE IN 3-D THAT VERIFY SMOOTHING AND PREDICTION CAPABILITY DURING A MISSILE ON MISSILE ENCOUNTER
Fig. 3. Max 3rd-degree polynomial compared to Least-Squares with a history of 100 in a 3-D Missile on Missile encounter: (a) velocity, acceleration and their errors; (b) 3rd-degree polynomial filter (in black) compared to the LS(100) (red). (Panel annotations: FitOrder = 2; Nfit = 250.)
VIII. CONCLUSION
This paper improved on the fading- and expanding memory polynomial filters devised in [2]. The expanding memory polynomial filter adheres to the Cramér-Rao lower bound (CRLB¹) on variance, and is thus CRLB consistent [2]. The fading memory polynomial filter is not CRLB consistent.

The modified polynomial filters described in this paper possess all the characteristics of the originally devised polynomial filters [2]. They also have the following improved capabilities:

• The time-interval of the modified (variable-step) polynomial filters does not have to be evenly spaced.
• Missed detections do not have to be cycled (updated) in the modified polynomial filters.
• Denormalisation is only needed if the modified polynomial filter should be hot-pluggable with the original (see Section IV).
• The state transition matrix differs slightly from the polynomial filters originally devised in [2]. These differences are proposed to make predictions, and derivatives of predictions, simpler (see Section III). The ith-order derivative can now be obtained with the following sub-matrix formula: dⁱZ(t)/dtⁱ = Ψ(1:N−i, 1:N−i) × Z(t)(1+i:N).

Note that the variance reduction factor (VRF) derivation done in [2] and refined in [1] remains unchanged, and that these references remain the authoritative sources on the following aspects of polynomial filtering:

• Switching from EMP to FMP upon reaching equal VRF.
• Quick settling, automatic degree iteration.
• Outlier exclusion.

Polynomial filters are simple, elegant and fast. It is feasible to implement banks of these filters in firmware with type (either EMP or FMP), θ and order as the tunable parameters. These parameters can all be changed on the fly.

Future work includes doing the reverse of the recursive expanding polynomial filter by discarding one sample at a time, thereby providing a recursive version of a fixed-memory-length expanding memory polynomial filter.

¹The Cramér-Rao lower bound is a limit to the variance that can be attained by a linear unbiased estimator of a parameter of a distribution. See [5], [6] for fundamentals, [7] for a well-explained application and [1] for the proof of CRLB consistency, specifically for polynomial filters.

REFERENCES

[1] N. Morrison, Filter Engineering — A Practical Approach: The Gauss-Newton and Polynomial Filters. To be published, 2011.
[2] ——, Introduction to Sequential Smoothing and Prediction. McGraw-Hill, New York, 1969.
[3] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed. Johns Hopkins University Press, 1989.
[4] Mathworks, “Matrix exponential function — expm(),” Online: http://www.mathworks.com/access/helpdesk/help/techdoc/ref/expm.html, 2009.
[5] H. Cramér, Mathematical Methods of Statistics, 1st ed. Princeton Univ. Press, 1946.
[6] C. R. Rao, “Information and the accuracy attainable in the estimation of statistical parameters,” Bull. Calcutta Math. Soc., vol. 37, pp. 81–89, 1946.
[7] Z. Long, N. Ruixin, and P. Varshney, “A sensor selection approach for target tracking in sensor networks with quantized measurements,” ICASSP 2008, March 2008.