Ordinal pattern dependence as a multivariate dependence measure
Annika Betken^a, Herold Dehling^b, Ines Nüßgen^c, Alexander Schnurr^{d,∗}

arXiv:2012.02445v2 [math.ST] 26 Aug 2021

a Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, 7500 AE Enschede, The Netherlands
b Faculty of Mathematics, Ruhr-University Bochum, 44780 Bochum, Germany
c Department of Mathematics, Siegen University, 57072 Siegen, Germany
d Department of Mathematics, Siegen University, 57072 Siegen, Germany
Abstract
In this article, we show that the recently introduced ordinal pattern dependence fits into the axiomatic framework of
general multivariate dependence measures, i.e., measures of dependence between two multivariate random objects.
Furthermore, we consider multivariate generalizations of established univariate dependence measures like Kendall’s
τ, Spearman’s ρ and Pearson’s correlation coefficient. Among these, only multivariate Kendall’s τ proves to take the
dynamical dependence of random vectors stemming from multidimensional time series into account. Consequently,
the article focuses on a comparison of ordinal pattern dependence and multivariate Kendall’s τ in this context. To
this end, limit theorems for multivariate Kendall’s τ are established under the assumption of near-epoch dependent
data-generating time series. We analyze how ordinal pattern dependence compares to multivariate Kendall’s τ and
Pearson’s correlation coefficient on theoretical grounds. Additionally, a simulation study illustrates differences in the
kind of dependencies that are revealed by multivariate Kendall’s τ and ordinal pattern dependence.
Keywords: concordance ordering, limit theorems, multivariate dependence, ordinal pattern, ordinal pattern
dependence, time series
2020 MSC: Primary 62H12, Secondary 62F12
1. Introduction
Recently, various attempts have been made to generalize classical dependence measures for one-dimensional
random variables (like Pearson’s correlation coefficient, Kendall’s τ, Spearman’s ρ) to a multivariate framework. The
aim of these is to describe the degree of dependence between two random vectors with a single number. This has to be
separated from the branch of research where the dependence within one vector is described by a single number (see
[13, 14] and the references therein).
Roughly speaking, one can distinguish the following two approaches: (I) In a first step, the main properties displayed by classical dependence measures between two random variables are extracted. In a second step, multivariate analogues of the dependence measures are defined which satisfy canonical generalizations of these properties in a multivariate framework. However, a canonical interpretation of these measures is often not at hand. (II) Given two time series, one wants to describe their co-movement.
Along these lines, the definition of ordinal pattern dependence (see [15]) follows the latter approach. Originally, the notion of ordinal pattern dependence disregarded axiomatic systems; it is naturally interpreted as the degree of co-monotonic behavior of two time series. Against the background of this approach, limit theorems have been proved in the time series setting (see [16] for the short-range dependent case and [11] for the long-range dependent case).
Both approaches to defining multivariate dependence measures have proved to be useful, but so far they have been analyzed separately. In the present paper, we close the gap between the two. To this end, we recall the definition of ordinal pattern dependence in the subsequent section and show that it is a multivariate dependence measure according to the definition introduced in [7]. In Section 3, we establish consistency and asymptotic normality for estimators of ordinal pattern dependence in the framework of i.i.d. random vectors. Section 4 deals with multivariate extensions of well-established univariate dependence measures. It turns out that multivariate Kendall's τ is the only one among these that captures the dynamical dependence between random vectors. Starting from approach (I), we prove limit theorems for an estimator of multivariate Kendall's τ in the time series context. In the last section, the different measures are compared from a theoretical point of view as well as by simulation studies.

∗ Corresponding author. Email address: [email protected]

Preprint submitted to Journal of Multivariate Analysis, August 27, 2021
2. Ordinal pattern dependence as a measure of multivariate dependence
If (X_i, Y_i), i ≥ 1, denotes a stationary, bivariate process, we define, for any integers i, h ≥ 1, the random vectors of consecutive observations

X_i^{(h)} := (X_i, . . . , X_{i+h}),   Y_i^{(h)} := (Y_i, . . . , Y_{i+h}).
The goal of this paper is to consider the concept of ordinal pattern dependence as a multivariate measure of dependence between the random vectors X_i^{(h)} and Y_i^{(h)} stemming from a stationary, bivariate process (X_i, Y_i), i ≥ 1, with continuous marginal distributions, and to compare it to established measures of dependence. Note that, by stationarity of the underlying process, the joint distribution of the vector (X_i^{(h)}, Y_i^{(h)}) does not depend on i. We will thus use the symbol (X^{(h)}, Y^{(h)}) for a generic random vector with the same joint distribution as any of the (X_i^{(h)}, Y_i^{(h)}), and we write X^{(h)} = (X_1, . . . , X_{1+h}), Y^{(h)} = (Y_1, . . . , Y_{1+h}). Furthermore, note that it is common to count the number of increments h rather than the length of the vector, since ordinal patterns can be calculated by exclusively considering the increments of the time series. Moreover, here and in the following, we consider vectors as column vectors. However, for the sake of readability and notational convenience, we omit the symbol ⊤ indicating the transpose of vectors.
2.1. Ordinal pattern dependence
For h ∈ N, let S_h denote the set of permutations of {0, . . . , h}, which we write as (h + 1)-tuples containing each of the numbers 0, . . . , h exactly once. The ordinal pattern of order h refers to the permutation

Π(x_0, . . . , x_h) = (π_0, . . . , π_h) ∈ S_h

which satisfies x_{π_0} ≥ · · · ≥ x_{π_h} (see [3, 4]). In the present paper, we only consider continuous marginals. Allowing for non-continuous marginals would require the additional restriction π_{j−1} > π_j if x_{π_{j−1}} = x_{π_j} for j ∈ {1, . . . , h} (see [17]).
Definition 1. We define the ordinal pattern dependence between two random vectors X^{(h)} = (X_1, . . . , X_{h+1}) and Y^{(h)} = (Y_1, . . . , Y_{h+1}) by

OPD_h(X^{(h)}, Y^{(h)}) = [Pr(Π(X^{(h)}) = Π(Y^{(h)})) − Σ_{π∈S_h} Pr(Π(X^{(h)}) = π) Pr(Π(Y^{(h)}) = π)] / [1 − Σ_{π∈S_h} Pr(Π(X^{(h)}) = π) Pr(Π(Y^{(h)}) = π)].   (1)
This definition of ordinal pattern dependence only takes positive dependence into account. Negative dependence
can be included by analyzing the co-movement of X and −Y = (−Yi )i∈N . Typically, one is interested in measuring
either positive or negative dependence. If one wants to consider both dependencies at the same time, a consideration
of the quantity
OPDh (X, Y)+ − OPDh (X, −Y)+ ,
where a+ := max{a, 0} for every a ∈ R, seems natural. In order to keep things less technical, we only consider the
simpler measure (1). For recent developments in the theory of ordinal patterns, see [2] and [9], and for a related
approach to analyze dependence between dynamical systems, see [6].
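For concreteness, the quantities in (1) can be estimated from observed sample paths by replacing the pattern probabilities with relative frequencies over all windows of length h + 1. The following Python sketch is our own illustration (the function names are ours, not from the paper):

```python
import numpy as np
from collections import Counter

def ordinal_pattern(window):
    # Permutation (pi_0, ..., pi_h) with x_{pi_0} >= ... >= x_{pi_h}:
    # indices sorted by decreasing value (no ties for continuous data).
    return tuple(np.argsort(-np.asarray(window), kind="stable"))

def empirical_opd(x, y, h=2):
    """Plug-in estimate of OPD_h: pattern probabilities in (1) are replaced
    by relative frequencies over all windows of length h + 1."""
    n = len(x) - h
    patterns_x = [ordinal_pattern(x[i:i + h + 1]) for i in range(n)]
    patterns_y = [ordinal_pattern(y[i:i + h + 1]) for i in range(n)]
    freq_x, freq_y = Counter(patterns_x), Counter(patterns_y)
    # Relative frequency of coinciding patterns.
    q_match = sum(p == q for p, q in zip(patterns_x, patterns_y)) / n
    # Estimate of the sum over pi of Pr(Pi(X) = pi) Pr(Pi(Y) = pi).
    q_indep = sum(freq_x[p] * freq_y.get(p, 0) for p in freq_x) / n**2
    return (q_match - q_indep) / (1.0 - q_indep)

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2000))   # a random walk
noise = 0.1 * rng.standard_normal(2000)
print(empirical_opd(x, x + noise, h=2))    # large positive value for co-moving series
```

For independent series, q_match and q_indep estimate the same quantity, so the estimate fluctuates around zero.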
2.2. Axiomatic definition of multivariate dependence measures
With the following definition, [7] establish an axiomatic theory for multivariate dependence measures between
n-dimensional random vectors. This has been strongly inspired by the axiomatic framework of [14], who follow a
copula-based approach to define and analyze multivariate dependence measures within one vector.
Definition 2. Let L0 denote the space of random vectors with values in Rn on the common probability space (Ω, F , Pr).
We call a function µ : L0 × L0 −→ R an n-dimensional measure of dependence if
1. it takes values in [−1, 1];
2. it is invariant with respect to simultaneous permutations of the components within two random vectors X and
Y;
3. it is invariant with respect to monotonically increasing transformations of the components of the two random
vectors X and Y;
4. it is zero for two independent random vectors X and Y;
5. it respects concordance ordering, i.e., for two pairs of random vectors (X, Y) and (X∗, Y∗), it holds that

(X, Y) ≼_C (X∗, Y∗)  ⇒  µ(X, Y) ≤ µ(X∗, Y∗).

Here, ≼_C denotes concordance ordering, i.e.,

(X, Y) ≼_C (X∗, Y∗)  if and only if  F_{(X,Y)} ≤ F_{(X∗,Y∗)} and F̄_{(X,Y)} ≤ F̄_{(X∗,Y∗)},

where ≤ is meant pointwise, F denotes the joint distribution function, and F̄ denotes the survival function.
Theorem 1. The ordinal pattern dependence OPD_h is an (h + 1)-dimensional measure of dependence.

The proof, which is a bit involved and makes use of multivariate distribution functions and survival functions, has been postponed to Section 6.
3. Limit Theorems for Ordinal Pattern Dependence of i.i.d. Vectors
In Section 5, we compare ordinal pattern dependence to other concepts of multivariate dependence. These have
been introduced and used for sequences of independent random vectors. In contrast to this, the definition of ordinal
pattern dependence applies to random vectors stemming from multivariate time series. Nonetheless, ordinal pattern
dependence can as well be applied to independent random vectors. Limit theorems that provide the asymptotic
distribution of ordinal pattern dependence in this setting have not yet been established. We close this gap by the
following considerations:
Let (X_i, Y_i), i ≥ 1, be independent copies of (X, Y), and define

q̂_{X,π,n} := (1/n) Σ_{i=1}^n 1{Π(X_i) = π},   q̂_{Y,π,n} := (1/n) Σ_{i=1}^n 1{Π(Y_i) = π},   q̂_{(X,Y),n} := (1/n) Σ_{i=1}^n 1{Π(X_i) = Π(Y_i)},

as well as the corresponding probabilities

q_{X,π} := Pr(Π(X) = π),   q_{Y,π} := Pr(Π(Y) = π),   q_{(X,Y)} := Pr(Π(X) = Π(Y)).

According to the law of large numbers, q̂_{X,π,n}, q̂_{Y,π,n}, and q̂_{(X,Y),n} are strongly consistent estimators of these probabilities.
Proposition 1. Let (X_i, Y_i), i ≥ 1, be independent copies of (X, Y). Then, as n → ∞,

q̂_{X,π,n} → q_{X,π},   q̂_{Y,π,n} → q_{Y,π},   q̂_{(X,Y),n} → q_{(X,Y)}

almost surely.
The following theorem establishes asymptotic normality of ordinal pattern dependence of i.i.d. random vectors. For this, we introduce the notation

q̂_{X,n} := (q̂_{X,π,n})_{π∈S_h},   q̂_{Y,n} := (q̂_{Y,π,n})_{π∈S_h},   q_X := (q_{X,π})_{π∈S_h},   q_Y := (q_{Y,π})_{π∈S_h}.
Theorem 2. Let (X_i, Y_i), i ≥ 1, be independent copies of (X, Y). Then, as n → ∞,

√n ( [q̂_{(X,Y),n} − Σ_{π∈S_h} q̂_{X,π,n} q̂_{Y,π,n}] / [1 − Σ_{π∈S_h} q̂_{X,π,n} q̂_{Y,π,n}] − [q_{(X,Y)} − Σ_{π∈S_h} q_{X,π} q_{Y,π}] / [1 − Σ_{π∈S_h} q_{X,π} q_{Y,π}] ) →_D N(0, σ²),

where the limit variance σ² is given by

σ² = ∇f(q_{(X,Y)}, q_X, q_Y) Σ (∇f(q_{(X,Y)}, q_X, q_Y))^⊤.

Here, the matrix Σ is defined as in Proposition 2 (see below), and ∇f is the gradient of the function f : R × R^{(h+1)!} × R^{(h+1)!} → R defined by

f(u, v, w) = (u − v^⊤ w) / (1 − v^⊤ w).
The proof of Theorem 2 is based on the following proposition, which establishes the joint asymptotic normality of q̂_{X,π,n}, q̂_{Y,π,n}, and q̂_{(X,Y),n}.
Proposition 2. Under the same assumptions as in Theorem 2, we have

√n (q̂_{(X,Y),n} − q_{(X,Y)}, q̂_{X,n} − q_X, q̂_{Y,n} − q_Y)^⊤ →_D N(0, Σ),

where Σ is the symmetric (2(h+1)! + 1) × (2(h+1)! + 1) matrix

Σ = ( Σ_{11}  Σ_{12}  Σ_{13} ; Σ_{21}  Σ_{22}  Σ_{23} ; Σ_{31}  Σ_{32}  Σ_{33} )

with

Σ_{11} = q_{(X,Y)}(1 − q_{(X,Y)}) ∈ R,
Σ_{12} = (σ_π(1, 2))_{π∈S_h} ∈ R^{1×(h+1)!},
Σ_{13} = (σ_π(1, 3))_{π∈S_h} ∈ R^{1×(h+1)!},
Σ_{22} = (σ_{π,π′}(2, 2))_{π,π′∈S_h} ∈ R^{(h+1)!×(h+1)!},
Σ_{23} = (σ_{π,π′}(2, 3))_{π,π′∈S_h} ∈ R^{(h+1)!×(h+1)!},
Σ_{33} = (σ_{π,π′}(3, 3))_{π,π′∈S_h} ∈ R^{(h+1)!×(h+1)!},

where

σ_π(1, 2) = Pr(Π(X) = Π(Y) = π) − q_{(X,Y)} q_{X,π},
σ_π(1, 3) = Pr(Π(X) = Π(Y) = π) − q_{(X,Y)} q_{Y,π},
σ_{π,π′}(2, 2) = −q_{X,π} q_{X,π′} if π ≠ π′, and q_{X,π}(1 − q_{X,π}) if π = π′,
σ_{π,π′}(2, 3) = Pr(Π(X) = π, Π(Y) = π′) − q_{X,π} q_{Y,π′},
σ_{π,π′}(3, 3) = −q_{Y,π} q_{Y,π′} if π ≠ π′, and q_{Y,π}(1 − q_{Y,π}) if π = π′.

Due to the symmetry of Σ, the remaining blocks are defined by Σ_{21} = Σ_{12}^⊤, Σ_{31} = Σ_{13}^⊤, Σ_{32} = Σ_{23}^⊤.
Proof. The proof follows directly from the multivariate central limit theorem applied to the partial sums of the (2(h+1)! + 1)-dimensional i.i.d. random vectors

ξ_i = (1{Π(X_i) = Π(Y_i)}, (1{Π(X_i) = π})_{π∈S_h}, (1{Π(Y_i) = π})_{π∈S_h}).

The limit covariance matrix is the covariance matrix of ξ_1, which is given by the formulae stated in the formulation of this proposition.
Proof of Theorem 2. We apply the delta method to the function f, defined in the formulation of the theorem, together with the multivariate CLT established in Proposition 2. In this way, we obtain

√n ( f(q̂_{(X,Y),n}, q̂_{X,n}, q̂_{Y,n}) − f(q_{(X,Y)}, q_X, q_Y) ) →_D N(0, ∇f(q_{(X,Y)}, q_X, q_Y) · Σ · (∇f(q_{(X,Y)}, q_X, q_Y))^⊤).

This proves the statement of the theorem.
4. Ordinal pattern dependence in contrast to multivariate Kendall’s τ
In this article, we are explicitly studying the dependence between random vectors stemming from stationary time
series. In this regard, the main drawback of univariate dependence measures is that these do not incorporate crossdependencies which characterize the dynamical dependence between two random vectors. Univariate dependence
measures focus on the dependence between Xi and Yi , i.e., on the dependence at the same point in time. In contrast,
ordinal pattern dependence captures the dynamics of time series.
In the following, we study two multivariate generalizations of univariate dependence measures, namely the multivariate extension of Pearson’s correlation coefficient, established in [12], and multivariate Kendall’s τ as introduced
in [7].
Definition 3. For two (h + 1)-dimensional random vectors X, Y ∈ L² with invertible covariance matrices Σ_X and Σ_Y and cross-covariance matrix Σ_{X,Y}, we define Pearson's correlation coefficient by

ρ(X, Y) := tr(Σ_{X,Y}) / tr((Σ_X Σ_Y)^{1/2}),

where A^{1/2} is the principal square root of the matrix A, such that A^{1/2} A^{1/2} = A.

For the multivariate generalization of Pearson's correlation coefficient, we obtain

ρ(X, Y) = tr(Σ_{X,Y}) / tr((Σ_X Σ_Y)^{1/2}) = [Cov(X_1, Y_1) + · · · + Cov(X_{1+h}, Y_{1+h})] / tr((Σ_X Σ_Y)^{1/2}).
As a result, the cross-correlations have no impact on the value of Pearson’s correlation coefficient. Therefore, the
multivariate Pearson’s correlation coefficient does not seem to be appropriate for our approach. The same holds true
for generalizations of Spearman’s ρ due to the close relationship between these concepts. We hence focus on the
multivariate generalization of Kendall’s τ:
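This insensitivity to cross-covariances is easy to check numerically. The sketch below is our own illustration; it uses the identity tr((Σ_X Σ_Y)^{1/2}) = tr((Σ_X^{1/2} Σ_Y Σ_X^{1/2})^{1/2}), valid for symmetric positive definite Σ_X, so that only square roots of symmetric matrices are needed:

```python
import numpy as np

def sqrtm_psd(a):
    # Principal square root of a symmetric positive semi-definite matrix.
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.T

def pearson_mv(sigma_x, sigma_y, sigma_xy):
    # rho(X, Y) = tr(Sigma_XY) / tr((Sigma_X Sigma_Y)^{1/2}); the denominator
    # is computed via tr((Sigma_X^{1/2} Sigma_Y Sigma_X^{1/2})^{1/2}).
    rx = sqrtm_psd(sigma_x)
    denom = np.trace(sqrtm_psd(rx @ sigma_y @ rx))
    return np.trace(sigma_xy) / denom

sx = np.array([[1.0, 0.3], [0.3, 1.0]])
sy = np.array([[1.0, -0.2], [-0.2, 1.0]])
# Two cross-covariance matrices with the same diagonal but very different
# off-diagonal (lagged) cross-covariances: rho does not distinguish them.
sxy1 = np.array([[0.5, 0.0], [0.0, 0.5]])
sxy2 = np.array([[0.5, 0.9], [-0.9, 0.5]])
print(pearson_mv(sx, sy, sxy1) == pearson_mv(sx, sy, sxy2))  # True
```

Only the trace of Σ_{X,Y}, i.e., the contemporaneous covariances, enters the value.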
4.1. Multivariate Kendall’s τ
The definition of multivariate Kendall’s τ that we consider in this section is taken from [7]. In that paper, the
authors investigated the dependence between two multivariate random vectors. Therefore, for our purposes, it is
appropriate to use it in the time series context. For a multivariate generalization of Kendall’s τ within one random
vector see [13].
Definition 4. For two (h + 1)-dimensional random vectors X, Y, we define Kendall's τ by

τ(X, Y) := Corr(1{X ≤ X̃}, 1{Y ≤ Ỹ}),

where (X̃, Ỹ) is an independent copy of (X, Y).
The following lemma establishes a representation of multivariate Kendall’s τ for Gaussian processes in terms of
the probabilities pX and pY that enter in our definition of ordinal pattern dependence.
Lemma 1. Let (X_i, Y_i), i ≥ 1, denote a stationary mean-zero Gaussian process and let X^{(h)} = (X_1, . . . , X_{1+h}) and Y^{(h)} = (Y_1, . . . , Y_{1+h}). Then, we have

τ(X^{(h)}, Y^{(h)}) = [Pr(X_1 ≤ 0, . . . , X_{1+h} ≤ 0, Y_1 ≤ 0, . . . , Y_{1+h} ≤ 0) − p̃_{X^{(h)}} p̃_{Y^{(h)}}] / √(p̃_{X^{(h)}}(1 − p̃_{X^{(h)}}) p̃_{Y^{(h)}}(1 − p̃_{Y^{(h)}})),

where p̃_{X^{(h)}} = Pr(X_1 ≤ 0, . . . , X_{1+h} ≤ 0) and p̃_{Y^{(h)}} = Pr(Y_1 ≤ 0, . . . , Y_{1+h} ≤ 0).
Proof. Let (X̃^{(h)}, Ỹ^{(h)}) be an independent copy of (X^{(h)}, Y^{(h)}) with X̃^{(h)} = (X̃_1, . . . , X̃_{1+h}) and Ỹ^{(h)} = (Ỹ_1, . . . , Ỹ_{1+h}). It then holds that

τ(X^{(h)}, Y^{(h)}) = Corr(1{X^{(h)} ≤ X̃^{(h)}}, 1{Y^{(h)} ≤ Ỹ^{(h)}}) = Corr(1{X^{(h)} − X̃^{(h)} ≤ 0}, 1{Y^{(h)} − Ỹ^{(h)} ≤ 0})
= [Pr(X_1 − X̃_1 ≤ 0, . . . , X_{1+h} − X̃_{1+h} ≤ 0, Y_1 − Ỹ_1 ≤ 0, . . . , Y_{1+h} − Ỹ_{1+h} ≤ 0) − p_{X^{(h)}} p_{Y^{(h)}}] / √(p_{X^{(h)}}(1 − p_{X^{(h)}}) p_{Y^{(h)}}(1 − p_{Y^{(h)}}))

with p_{X^{(h)}} = Pr(X_1 − X̃_1 ≤ 0, . . . , X_{1+h} − X̃_{1+h} ≤ 0) and p_{Y^{(h)}} = Pr(Y_1 − Ỹ_1 ≤ 0, . . . , Y_{1+h} − Ỹ_{1+h} ≤ 0). Note that for independent centered Gaussian processes,

(X^{(h)} − X̃^{(h)}, Y^{(h)} − Ỹ^{(h)}) =_D √2 (X^{(h)}, Y^{(h)}).

This explicitly implies that the cross-correlations within (X^{(h)} − X̃^{(h)}, Y^{(h)} − Ỹ^{(h)}) equal those within (X^{(h)}, Y^{(h)}). Therefore, we have p_{X^{(h)}} = p̃_{X^{(h)}}, p_{Y^{(h)}} = p̃_{Y^{(h)}}, and

[Pr(X_1 − X̃_1 ≤ 0, . . . , Y_{1+h} − Ỹ_{1+h} ≤ 0) − p_{X^{(h)}} p_{Y^{(h)}}] / √(p_{X^{(h)}}(1 − p_{X^{(h)}}) p_{Y^{(h)}}(1 − p_{Y^{(h)}}))
= [Pr(X_1 ≤ 0, . . . , X_{1+h} ≤ 0, Y_1 ≤ 0, . . . , Y_{1+h} ≤ 0) − p̃_{X^{(h)}} p̃_{Y^{(h)}}] / √(p̃_{X^{(h)}}(1 − p̃_{X^{(h)}}) p̃_{Y^{(h)}}(1 − p̃_{Y^{(h)}})).
Although for h ≥ 2 we cannot derive an analytic expression for
Pr (X1 ≤ 0, . . . , X1+h ≤ 0) , Pr (Y1 ≤ 0, . . . , Y1+h ≤ 0)
or
Pr (X1 ≤ 0, . . . , X1+h ≤ 0, Y1 ≤ 0, . . . , Y1+h ≤ 0) ,
we know that these orthant probabilities of a multivariate Gaussian distribution are determined by the entries of the
correlation matrices and by the entries of the cross-correlation matrix of X(h) and Y(h) . In contrast to multivariate
Pearson’s correlation coefficient, multivariate Kendall’s τ constitutes a multivariate dependence measure that takes
the dynamical dependence of data stemming from time series into account.
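Although these orthant probabilities admit no closed form for h ≥ 2, the representation of Lemma 1 is straightforward to evaluate by Monte Carlo simulation. The following sketch is our own illustration; the covariance structure chosen at the bottom is an arbitrary example:

```python
import numpy as np

def kendall_tau_gaussian(cov, h, n_mc=100_000, seed=0):
    """Monte Carlo evaluation of Lemma 1: tau(X^(h), Y^(h)) from the orthant
    probabilities of the 2(h+1)-dimensional centered Gaussian (X^(h), Y^(h)).
    `cov` is the covariance matrix of (X_1, ..., X_{1+h}, Y_1, ..., Y_{1+h})."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(2 * (h + 1)), cov, size=n_mc)
    neg_x = np.all(z[:, :h + 1] <= 0, axis=1)   # event {X_1<=0, ..., X_{1+h}<=0}
    neg_y = np.all(z[:, h + 1:] <= 0, axis=1)   # event {Y_1<=0, ..., Y_{1+h}<=0}
    p_x, p_y = neg_x.mean(), neg_y.mean()
    p_xy = (neg_x & neg_y).mean()
    return (p_xy - p_x * p_y) / np.sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))

# Example with h = 1: strong cross-correlation between X and Y.
h = 1
block = np.array([[1.0, 0.5], [0.5, 1.0]])   # within-series covariance
cross = 0.9 * block                          # cross-covariance between X and Y
cov = np.block([[block, cross], [cross, block]])
print(kendall_tau_gaussian(cov, h))          # clearly positive
```

Setting the cross-covariance block to zero makes X^{(h)} and Y^{(h)} independent, and the Monte Carlo estimate fluctuates around zero.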
4.2. Estimation of multivariate Kendall’s τ
[7] consider an estimator for multivariate Kendall's τ based on independent vectors (X_i, Y_i), 1 ≤ i ≤ n. In our setup, we will define an empirical version of Kendall's τ based on the dependent vectors (X_i^{(h)}, Y_i^{(h)}), 1 ≤ i ≤ n. For this, we will follow the ideas of [5], who considered estimation of the classical univariate Kendall's τ for bivariate time series under some mild dependence condition.
Given an independent copy (X̃^{(h)}, Ỹ^{(h)}) of the vector (X^{(h)}, Y^{(h)}), we have

τ(X^{(h)}, Y^{(h)}) = [p_{(X^{(h)},Y^{(h)})} − p_{X^{(h)}} p_{Y^{(h)}}] / √(p_{X^{(h)}}(1 − p_{X^{(h)}}) p_{Y^{(h)}}(1 − p_{Y^{(h)}})) = ψ(p_{X^{(h)}}, p_{Y^{(h)}}, p_{(X^{(h)},Y^{(h)})}),

where p_{(X^{(h)},Y^{(h)})} := Pr(X^{(h)} ≤ X̃^{(h)}, Y^{(h)} ≤ Ỹ^{(h)}), p_{X^{(h)}} := Pr(X^{(h)} ≤ X̃^{(h)}), p_{Y^{(h)}} := Pr(Y^{(h)} ≤ Ỹ^{(h)}), and where ψ : R³ → R is defined by

ψ(x, y, z) := (z − xy) / √(x(1 − x) y(1 − y)).   (2)
The probabilities p_{X^{(h)}}, p_{Y^{(h)}}, and p_{(X^{(h)},Y^{(h)})} can be estimated by their sample analogues defined by

p̂_{X^{(h)},n} := [1/(n(n − 1))] Σ_{1≤i≠j≤n} 1{X_i^{(h)} ≤ X_j^{(h)}},   p̂_{Y^{(h)},n} := [1/(n(n − 1))] Σ_{1≤i≠j≤n} 1{Y_i^{(h)} ≤ Y_j^{(h)}},
p̂_{(X^{(h)},Y^{(h)}),n} := [1/(n(n − 1))] Σ_{1≤i≠j≤n} 1{X_i^{(h)} ≤ X_j^{(h)}, Y_i^{(h)} ≤ Y_j^{(h)}},

where X_i^{(h)} = (X_i, . . . , X_{i+h}) and Y_i^{(h)} = (Y_i, . . . , Y_{i+h}). The plug-in estimator for Kendall's τ is then given by

τ̂_n(X^{(h)}, Y^{(h)}) := ψ(p̂_{X^{(h)},n}, p̂_{Y^{(h)},n}, p̂_{(X^{(h)},Y^{(h)}),n}).
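These sample analogues can be computed directly with vectorized pairwise comparisons. The sketch below is our own illustration (for long series the O(n²) comparison matrices should be replaced by a blockwise computation):

```python
import numpy as np

def embed(z, h):
    # Rows are the vectors Z_i^{(h)} = (Z_i, ..., Z_{i+h}).
    return np.lib.stride_tricks.sliding_window_view(np.asarray(z), h + 1)

def kendall_tau_hat(x, y, h):
    """Plug-in estimator psi(p_X, p_Y, p_XY) over all ordered pairs i != j."""
    xe, ye = embed(x, h), embed(y, h)
    n = xe.shape[0]
    # leq_x[i, j] = 1 iff X_i^{(h)} <= X_j^{(h)} componentwise.
    leq_x = np.all(xe[:, None, :] <= xe[None, :, :], axis=2)
    leq_y = np.all(ye[:, None, :] <= ye[None, :, :], axis=2)
    off = ~np.eye(n, dtype=bool)          # exclude the diagonal i == j
    m = n * (n - 1)
    p_x = leq_x[off].sum() / m
    p_y = leq_y[off].sum() / m
    p_xy = (leq_x & leq_y)[off].sum() / m
    return (p_xy - p_x * p_y) / np.sqrt(p_x * (1 - p_x) * p_y * (1 - p_y))

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(300))
print(round(kendall_tau_hat(x, x, h=2), 6))  # 1.0: identical series are perfectly concordant
```

For identical series, every pair that is concordant for X is concordant for Y, so the estimator equals 1; for y = −x it is negative.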
In what follows, we will derive the joint limit distribution of the random vector (p̂_{X^{(h)},n}, p̂_{Y^{(h)},n}, p̂_{(X^{(h)},Y^{(h)}),n}) and the limit distribution of τ̂_n(X^{(h)}, Y^{(h)}) by the delta method. For this, observe that p̂_{X^{(h)},n}, p̂_{Y^{(h)},n}, and p̂_{(X^{(h)},Y^{(h)}),n} are U-statistics with symmetric kernels

f((x, y), (x′, y′)) = (1/2)(1{x ≤ x′} + 1{x ≥ x′}),   g((x, y), (x′, y′)) = (1/2)(1{y ≤ y′} + 1{y ≥ y′}),
h((x, y), (x′, y′)) = (1/2)(1{x ≤ x′, y ≤ y′} + 1{x ≥ x′, y ≥ y′}).

Note that the underlying random vectors (X_i^{(h)}, Y_i^{(h)}), i ≥ 1, are dependent, so that standard U-statistics theory for independent data does not apply. However, we can apply an ergodic theorem for U-statistics established in [1].
Theorem 3. Assume that (Xi , Yi ), i ≥ 1, is a stationary ergodic process, and that (X(h) , Y(h) ) has a continuous
distribution. Then, as n → ∞, we obtain almost surely
p̂_{X^{(h)},n} → p_{X^{(h)}},   p̂_{Y^{(h)},n} → p_{Y^{(h)}},   p̂_{(X^{(h)},Y^{(h)}),n} → p_{(X^{(h)},Y^{(h)})}.
Proof. We apply Theorem U from [1]. The kernels f, g, and h are almost everywhere continuous and thus condition
(ii) of Theorem U holds.
In order to establish asymptotic normality of these estimators, we have to make some assumptions ensuring short-range dependence of the underlying process. We will use the concept of near-epoch dependence in probability introduced in [5]. This concept is a variation of the usual L²-near-epoch dependence and does not require any moment assumptions.
Definition 5. (i) Given two sub-σ-fields A, B ⊂ F, we define the absolute regularity coefficient

β(A, B) = sup Σ_{i,j} |Pr(A_i ∩ B_j) − Pr(A_i) Pr(B_j)|,

where the supremum is taken over all integers m, n ≥ 1, all partitions A_1, . . . , A_m ∈ A, and all partitions B_1, . . . , B_n ∈ B of the sample space Ω.

(ii) For a stationary stochastic process Z_i, i ∈ Z, we define the absolute regularity coefficients

β_k := β(F_{−∞}^0, F_k^∞),

where F_k^l denotes the σ-field generated by the random variables Z_k, . . . , Z_l. The process (Z_i)_{i∈Z} is called absolutely regular if lim_{k→∞} β_k = 0.

(iii) An R^d-valued stochastic process X_i, i ≥ 1, is called near-epoch dependent in probability (in short, P-NED) on the stationary process Z_i, i ∈ Z, if (X_i, Z_i), i ≥ 1, is a stationary process, and if there exist a sequence (a_k)_{k≥0} of approximating constants with lim_{k→∞} a_k = 0, a sequence of functions f_k : R^{2k+1} → R^d, and a nonincreasing function Φ : (0, ∞) → (0, ∞) such that

Pr(|X_0 − f_k(Z_{−k}, . . . , Z_k)| ≥ ε) ≤ a_k Φ(ε).
Proposition 3. Let (X_i, Y_i), i ≥ 1, be a stationary process that is P-NED on an absolutely regular process Z_k, k ∈ Z, and assume that

a_k Φ(k^{−6}) = O(k^{−6(2+δ)/δ})   and   Σ_{k=1}^∞ k β_k^{δ/(2+δ)} < ∞

for some δ > 0. Moreover, assume that Y^{(h)} − X^{(h)} has a bounded density. Then, the following approximations hold:

√n (p̂_{X^{(h)},n} − p_{X^{(h)}}) = (2/√n) Σ_{i=1}^n f_1(X_i^{(h)}, Y_i^{(h)}) + o_P(1),
√n (p̂_{Y^{(h)},n} − p_{Y^{(h)}}) = (2/√n) Σ_{i=1}^n g_1(X_i^{(h)}, Y_i^{(h)}) + o_P(1),
√n (p̂_{(X^{(h)},Y^{(h)}),n} − p_{(X^{(h)},Y^{(h)})}) = (2/√n) Σ_{i=1}^n h_1(X_i^{(h)}, Y_i^{(h)}) + o_P(1),

where the functions f_1, g_1, and h_1 are the first-order terms in the Hoeffding decomposition of the kernels f, g, and h, respectively.
Remark 1. The first-order term of the Hoeffding decomposition of f is given by

f_1(x, y) = E[f((x, y), (X̃^{(h)}, Ỹ^{(h)}))] − p_{X^{(h)}} = (1/2)[Pr(X^{(h)} ≤ x) + Pr(X^{(h)} ≥ x)] − Pr(X^{(h)} ≤ X̃^{(h)})
= (1/2)[F(x) + F̄(x)] − Pr(X^{(h)} ≤ X̃^{(h)}),

where F(x) := Pr(X^{(h)} ≤ x) and F̄(x) := Pr(X^{(h)} ≥ x). Similarly, we get

g_1(x, y) = (1/2)[G(y) + Ḡ(y)] − Pr(Y^{(h)} ≤ Ỹ^{(h)}),
h_1(x, y) = (1/2)[H(x, y) + H̄(x, y)] − Pr(X^{(h)} ≤ X̃^{(h)}, Y^{(h)} ≤ Ỹ^{(h)}),

where G, Ḡ, H, and H̄ are defined analogously to F and F̄.
Proof of Proposition 3. This follows from Lemma D.6 of [5] noting that the variation condition is satisfied because
the distribution of Y(h) − X(h) has a bounded density.
Theorem 4. Under the same assumptions as in Proposition 3, we have

√n (p̂_{X^{(h)},n} − p_{X^{(h)}}, p̂_{Y^{(h)},n} − p_{Y^{(h)}}, p̂_{(X^{(h)},Y^{(h)}),n} − p_{(X^{(h)},Y^{(h)})})^⊤ →_D N(0, Σ),
where Σ ∈ R^{3×3} is the limit covariance matrix whose diagonal and off-diagonal entries are given by

σ_{11} = Var(f_1(X_1^{(h)}, Y_1^{(h)})) + 2 Σ_{i=2}^∞ Cov(f_1(X_1^{(h)}, Y_1^{(h)}), f_1(X_i^{(h)}, Y_i^{(h)})),

σ_{12} = Cov(f_1(X_1^{(h)}, Y_1^{(h)}), g_1(X_1^{(h)}, Y_1^{(h)})) + Σ_{i=2}^∞ Cov(f_1(X_1^{(h)}, Y_1^{(h)}), g_1(X_i^{(h)}, Y_i^{(h)})) + Σ_{i=2}^∞ Cov(f_1(X_i^{(h)}, Y_i^{(h)}), g_1(X_1^{(h)}, Y_1^{(h)})),

with the remaining entries defined analogously in terms of f_1, g_1, and h_1.
Proof. By the multivariate central limit theorem for partial sums of NED processes, we obtain

S_n := (1/√n) Σ_{i=1}^n (f_1(X_i^{(h)}, Y_i^{(h)}), g_1(X_i^{(h)}, Y_i^{(h)}), h_1(X_i^{(h)}, Y_i^{(h)})) →_D N(0, Σ);

see, e.g., [18]. Now, the statement of the theorem follows from Proposition 3 together with an application of Slutsky's lemma.
Theorem 5. Under the assumptions of Proposition 3, the estimator τ̂_n(X^{(h)}, Y^{(h)}) of Kendall's τ(X^{(h)}, Y^{(h)}) is consistent and asymptotically normal. More precisely, we obtain

√n (τ̂_n(X^{(h)}, Y^{(h)}) − τ(X^{(h)}, Y^{(h)})) →_D N(0, ∇ψ Σ (∇ψ)^⊤),

where ψ is defined by (2) and Σ is defined as in Theorem 4.

Proof. This follows from the previous theorem, together with the delta method applied to the function ψ.
5. Ordinal Pattern Dependence in Contrast to Other Dependence Measures
For independent vectors (Xi , Yi ), 1 ≤ i ≤ n, all dependence measures considered in the previous sections make
sense. Yet, for measuring dependence between two time series, only ordinal pattern dependence and Kendall’s τ
seem to be reasonable choices of dependence measures. In this section, we point out what kind of dependencies
are measured by ordinal pattern dependence and how ordinal pattern dependence compares to classical dependence
measures such as Pearson’s correlation coefficient and multivariate Kendall’s τ.
5.1. The case h = 1
Axiom (4) in Definition 2 ensures that a multivariate dependence measure takes the value zero if the respective
vectors are independent. In this regard, a natural question that arises when studying the dependence between two
random vectors is whether the considered dependence measure may also differentiate between independent vectors
and uncorrelated, but dependent, random vectors. In this section, we provide an answer to this question by giving
examples of marginally uncorrelated Gaussian random vectors with non-vanishing ordinal pattern dependence. For
this purpose, we initially characterize ordinal pattern dependence of order 1 for Gaussian random vectors.
Proposition 4. Let X = (X_1, X_2) and Y = (Y_1, Y_2) be two Gaussian random vectors satisfying E(X_1) = E(X_2), E(Y_1) = E(Y_2) and Var(X_2 − X_1) ≠ 0 ≠ Var(Y_2 − Y_1). Then, it holds that

OPD_1(X, Y) = τ(X_2 − X_1, Y_2 − Y_1) = (2/π) arcsin Corr(X_2 − X_1, Y_2 − Y_1).   (3)

Proof. By definition,

OPD_1(X, Y) = [Pr(Π(X_1, X_2) = Π(Y_1, Y_2)) − Σ_{π∈S_1} Pr(Π(X_1, X_2) = π) Pr(Π(Y_1, Y_2) = π)] / [1 − Σ_{π∈S_1} Pr(Π(X_1, X_2) = π) Pr(Π(Y_1, Y_2) = π)].

Since X_2 − X_1 and Y_2 − Y_1 are both Gaussian random variables with mean zero and non-zero variance, we obtain

Pr(X_1 < X_2) = Pr(X_2 < X_1) = Pr(Y_1 < Y_2) = Pr(Y_2 < Y_1) = 1/2,

and thus Pr(Π(X_1, X_2) = π) = Pr(Π(Y_1, Y_2) = π) = 1/2 for any π ∈ S_1. Hence, we obtain

OPD_1(X, Y) = 2 Pr(Π(X_1, X_2) = Π(Y_1, Y_2)) − 1.

Moreover, it holds that

Pr(X_1 < X_2, Y_1 < Y_2) = Pr(X_2 − X_1 > 0, Y_2 − Y_1 > 0) = Pr(X_1 − X_2 > 0, Y_1 − Y_2 > 0) = Pr(X_1 > X_2, Y_1 > Y_2),

and hence

OPD_1(X, Y) = 4 Pr(X_1 − X_2 > 0, Y_1 − Y_2 > 0) − 1.

Applying Lemma 1 to the differences X_2 − X_1 and Y_2 − Y_1, we find τ(X_2 − X_1, Y_2 − Y_1) = 4 Pr(X_2 − X_1 < 0, Y_2 − Y_1 < 0) − 1, and thus OPD_1(X, Y) = τ(X_2 − X_1, Y_2 − Y_1).

Finally, using the orthant probability formula for Gaussian random variables, we obtain

Pr(X_2 − X_1 < 0, Y_2 − Y_1 < 0) = 1/4 + (1/(2π)) arcsin Corr(X_2 − X_1, Y_2 − Y_1),

and thus OPD_1(X, Y) = (2/π) arcsin Corr(X_2 − X_1, Y_2 − Y_1).
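The arcsine identity of Proposition 4 can be verified by a quick Monte Carlo experiment (our own sketch; the increment correlation 0.6 below is an arbitrary choice). For h = 1, the patterns of (X_1, X_2) and (Y_1, Y_2) coincide if and only if the increments X_2 − X_1 and Y_2 − Y_1 have the same sign:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000
# Build (X1, X2, Y1, Y2) so that the increments DX = X2 - X1 and
# DY = Y2 - Y1 are standard Gaussians with correlation r = 0.6.
r = 0.6
x1, y1 = rng.standard_normal(n), rng.standard_normal(n)
dx = rng.standard_normal(n)
dy = r * dx + np.sqrt(1 - r**2) * rng.standard_normal(n)
x2, y2 = x1 + dx, y1 + dy

# OPD_1 = 2 Pr(Pi(X1, X2) = Pi(Y1, Y2)) - 1; length-2 patterns match
# iff the increments have the same sign.
opd1_hat = 2 * np.mean((dx > 0) == (dy > 0)) - 1
opd1_theory = (2 / np.pi) * np.arcsin(r)
print(opd1_hat, opd1_theory)
```

Both numbers agree up to Monte Carlo error of order 1/√n.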
In the following we provide an example of marginally uncorrelated Gaussian random vectors with non-vanishing
ordinal pattern dependence of order 1.
Definition 6. A stationary bivariate Gaussian process W_i = (X_i, Y_i) is called an AR(1) process if there exist a matrix A ∈ R^{2×2} and an i.i.d. N(0, I_2)-distributed Gaussian process ξ_i = (ε_i, η_i) such that the AR(1) equation

W_i = A W_{i−1} + ξ_i   (4)

is satisfied.

Remark 2. Given a matrix A ∈ R^{2×2}, a stationary Gaussian AR(1) process exists if and only if all eigenvalues of A are strictly less than 1 in absolute value.
We will now consider the special example where the AR(1) matrix is given by

A = ( a  b ; b  −a ),   (5)

where a² + b² < 1. Thus, the AR(1) equation for W_i = (X_i, Y_i) takes the form

X_i = a X_{i−1} + b Y_{i−1} + ε_i,   Y_i = b X_{i−1} − a Y_{i−1} + η_i.
In the next lemma, we state some properties of the processes (Xi )i≥1 and (Yi )i≥1 , and we give an explicit formula for
their ordinal pattern dependence of order 1.
Lemma 2. Consider the stationary bivariate Gaussian AR(1) process W_i = (X_i, Y_i), i ≥ 1, satisfying (4) with matrix A given by (5) such that a² + b² < 1. Then, it holds that

Cov(X_1, Y_1) = 0,   Var(X_1) = Var(Y_1) = 1/(1 − a² − b²),

OPD_1(X^{(1)}, Y^{(1)}) = (2/π) arcsin(−b/√(1 − a²)).   (6)
Proof. The eigenvalues of A are λ_{1,2} = ±√(a² + b²), and thus (4) has a unique stationary solution. Since the AR(1) equation defines a Markov chain with state space R², the joint distribution of W_i = (X_i, Y_i) is uniquely characterized by the distributional fixed-point equation

W =_D A W + ξ,

where ξ = (ε, η) has a bivariate normal distribution with mean zero and covariance matrix I_2, and where ξ is independent of W. We will now show that W ∼ N(0, σ² I_2) satisfies this equation with

σ² = 1/(1 − a² − b²).

In order to prove this, we need to calculate the distribution of A W + ξ. Since the distribution is Gaussian, it suffices to calculate the variances and the covariance. We obtain

Cov(aX + bY + ε, bX − aY + η) = abσ² − abσ² = 0,
Var(aX + bY + ε) = a²σ² + b²σ² + 1 = (a² + b²)/(1 − a² − b²) + 1 = 1/(1 − a² − b²) = σ²,
Var(bX − aY + η) = b²σ² + a²σ² + 1 = (a² + b²)/(1 − a² − b²) + 1 = 1/(1 − a² − b²) = σ²,

which shows that A W + ξ has indeed the same distribution as W.

In order to determine the OPD_1 of the two processes, we need to calculate the correlation of the differences. The covariance of the increments is given by

Cov(X_2 − X_1, Y_2 − Y_1) = Cov((a − 1)X_1 + bY_1 + ε_2, bX_1 + (−a − 1)Y_1 + η_2) = b(a − 1)σ² − b(a + 1)σ² = −2bσ² = −2b/(1 − a² − b²),

and the variances of the increments are given by

Var(X_2 − X_1) = (a − 1)²σ² + b²σ² + 1 = [(a − 1)² + b²]/(1 − a² − b²) + 1 = 2(1 − a)/(1 − a² − b²),
Var(Y_2 − Y_1) = b²σ² + (a + 1)²σ² + 1 = [b² + (a + 1)²]/(1 − a² − b²) + 1 = 2(a + 1)/(1 − a² − b²).

Thus, we obtain the following formula for the correlation of the increments:

Corr(X_2 − X_1, Y_2 − Y_1) = −2b/√(4(1 − a)(a + 1)) = −b/√(1 − a²).

Using the identity OPD_1((X_1, X_2), (Y_1, Y_2)) = (2/π) arcsin Corr(X_2 − X_1, Y_2 − Y_1), we finally obtain (6).
Remark 3. (i) The special choice of the AR(1) matrix A made in (5) ensures that the two processes (X_i)_{i≥1} and (Y_i)_{i≥1} have identical marginals, and that X_i and Y_i are independent for each fixed i. In fact, one can show that the latter two properties only hold if A is either of the form (5) or of the form

A = ( a  b ; −b  a ).   (7)

In this case, using similar calculations as above, one obtains OPD_1((X_1, X_2), (Y_1, Y_2)) = 0.

(ii) Lemma 2 provides an example of a Gaussian process for which Pearson's correlation of X_i and Y_i equals 0, i.e., the one-dimensional marginals are independent. However, the processes (X_i)_{i≥1} and (Y_i)_{i≥1} are not independent, as can be seen from the identity for OPD_1(X^{(1)}, Y^{(1)}).
We illustrate our results by simulating a bivariate AR(1) process

W_i := (X_i, Y_i),   i ∈ {1, . . . , 500},

with W_i = A W_{i−1} + ξ_i, where

A := ( a  b ; b  −a ),   ξ_i := (ε_i, η_i),   (8)

ξ_i being a multivariate Gaussian random vector with covariance matrix Σ_ξ = I_2 (with I_2 denoting the identity matrix). We choose a² + b² < 1, but close to 1, in order to obtain Cov(X_i, Y_i) = 0 but high ordinal pattern dependence. For the simulations summarized by the boxplots in Fig. 1, we chose a = 0.7 and b = −0.7. Clearly, the median of the boxplots that are based on the values of Pearson's correlation coefficient approaches zero, while the median of the boxplots that are based on the values of ordinal pattern dependence seems to converge to a value between 0.75 and 1.
Fig. 1: Boxplots of Pearson's correlation coefficient and ordinal pattern dependence of order h = 1 based on 5000 repetitions of a bivariate AR(1)-process (X_i, Y_i), i ∈ {1, . . . , n}, satisfying (8) with a = 0.7 and b = −0.7, for time series lengths n ∈ {100, 300, 500}.
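The simulation setting of Fig. 1 can be reproduced along the following lines (our own sketch; a longer path and a burn-in period are used here to obtain stable estimates):

```python
import numpy as np

def simulate_ar1(a, b, n, rng, burn_in=1000):
    # W_i = A W_{i-1} + xi_i with A = [[a, b], [b, -a]] and xi_i ~ N(0, I_2),
    # cf. equation (8); the first `burn_in` observations are discarded.
    w = np.zeros(2)
    path = np.empty((burn_in + n, 2))
    for i in range(burn_in + n):
        w = np.array([a * w[0] + b * w[1], b * w[0] - a * w[1]]) + rng.standard_normal(2)
        path[i] = w
    return path[burn_in:, 0], path[burn_in:, 1]

def empirical_opd1(x, y):
    # For h = 1 the patterns coincide iff the increments have the same sign,
    # so OPD_1 reduces to 2 Pr(match) - 1 (cf. the proof of Proposition 4).
    return 2 * np.mean((np.diff(x) > 0) == (np.diff(y) > 0)) - 1

rng = np.random.default_rng(3)
a, b = 0.7, -0.7
x, y = simulate_ar1(a, b, 20_000, rng)
print(np.corrcoef(x, y)[0, 1])  # population correlation is 0 (Lemma 2)
print(empirical_opd1(x, y))     # population value (2/pi) arcsin(0.7/sqrt(0.51)), about 0.87
```

The sample correlation fluctuates around zero, while the empirical ordinal pattern dependence stays close to the value given by (6).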
Fig. 2 depicts one sample path of each of the time series X_i, i ∈ {1, . . . , 500}, and Y_i, i ∈ {1, . . . , 500}, the corresponding increment processes, as well as scatterplots of the original observations and of their increments. The scatterplots clearly underline the uncorrelatedness of the original observations and the high positive correlation of their increments.
5.2. The case h = 2
Let us recall that for the computation of the ordinal pattern dependence of order h = 1, the crucial quantity
is Corr(X2 − X1 , Y2 − Y1 ) since, according to Proposition 4, OPD1 (X(1) , Y(1) ) is just a monotone transformation of
this correlation. It is, therefore, natural to wonder whether it is possible to construct a stationary, bivariate process
(Xi , Yi )i≥1 with OPD1 (X(1) , Y(1) ) = 0, but OPD2 (X(2) , Y(2) ) , 0.
The AR(1)-process in Lemma 2 does not fulfill these conditions, since the restriction

Corr(X2 − X1 , Y2 − Y1 ) = −b/√(1 − a²) = 0

implies b = 0. As a result, we obtain a process Wi = (Xi , Yi ) = (aXi−1 + εi , −aYi−1 + ηi ) that does not incorporate
any dynamical dependence between the processes Xi , i ≥ 1, and Yi , i ≥ 1. The only dependence in this model exists
within each component. Yet, this does not have an impact on ordinal pattern dependence.
Following Remark 3, the choice of the matrix A in (7) yields Corr(Xi , Yi ) = 0 for i ∈ {1, 2}, and OPD1 (X(1) , Y(1) ) =
0. This raises the question whether this special construction of an AR(1)-process fulfills OPD2 (X(2) , Y(2) ) ≠ 0.
Fig. 2: Sample paths of Xi , i ∈ {1, . . . , 500}, and Yi , i ∈ {1, . . . , 500}, their increments at lag 1, as well as corresponding scatterplots based on a
bivariate AR(1)-process (Xi , Yi ), i ∈ {1, . . . , n}, satisfying (8) with a = 0.7 and b = −0.7.
Lemma 3. Consider the stationary bivariate Gaussian AR(1)-process Wi = (Xi , Yi ), i ≥ 1, satisfying (4) with matrix
A given by (7), and where a² + b² < 1. Then, it holds that

Cov(Xi , Yi ) = 0,  Var(Xi ) = Var(Yi ) = σ² = 1/(1 − a² − b²),
OPD1 ((Xi , Xi+1 ), (Yi , Yi+1 )) = 0,  Corr(X2 − X1 , Y3 − Y2 ) = −b,  Corr(X3 − X2 , Y2 − Y1 ) = b.
Proof. The first three identities can be shown as in Lemma 2. Thus, it remains to show the latter two. It holds that
Var(X2 − X1 ) = (a − 1)²σ² + b²σ² + 1 = ((a − 1)² + b² + 1 − a² − b²) / (1 − a² − b²) = 2(1 − a)σ².

Analogously, we obtain

Var(Y3 − Y2 ) = 2(1 − a)σ².
Furthermore, it holds that
Cov(Y3 − Y2 , X2 − X1 ) = E(Y3 X2 ) − E(Y2 X2 ) − E(Y3 X1 ) + E(Y2 X1 ) = 2b(a − 1)σ²,

since E(Y3 X2 ) = E(Y2 X1 ) = −bσ², E(Y2 X2 ) = 0, and E(Y3 X1 ) = −2abσ².
Altogether, we arrive at

Corr(Y3 − Y2 , X2 − X1 ) = (2ab − 2b)σ² / (2(1 − a)σ²) = −b.
Corr(Y2 − Y1 , X3 − X2 ) = b is derived by similar calculations.
Lemma 3 provides an example of a bivariate process (Xi , Yi )i≥1 for which Corr(Xi , Yi ) = 0 and OPD1 (X(1) , Y(1) ) =
0, but where the processes (Xi )i≥1 and (Yi )i≥1 are nevertheless dependent. The fact that the increments X2 − X1 and
Y3 − Y2 are dependent leads us to conjecture that OPD2 (X(2) , Y(2) ) ≠ 0, but we do not have a proof. In order to
compute OPD2 (X(2) , Y(2) ), we have to calculate
Pr (Π(X1 , X2 , X3 ) = π, Π(Y1 , Y2 , Y3 ) = π)
for any π ∈ S2 . This requires computations of orthant probabilities for 4-dimensional Gaussian vectors, e.g. for
π = (0, 1, 2) we obtain
Pr (Π(X1 , X2 , X3 ) = π, Π(Y1 , Y2 , Y3 ) = π) = Pr (X1 ≤ X2 ≤ X3 , Y1 ≤ Y2 ≤ Y3 )
= Pr (X2 − X1 ≥ 0, X3 − X2 ≥ 0, Y2 − Y1 ≥ 0, Y3 − Y2 ≥ 0) .
To the best of our knowledge, no explicit formulas for these probabilities are known.
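Although no closed formulas are available, such orthant probabilities are easy to approximate by Monte Carlo. A minimal Python sketch follows; the 4×4 covariance matrix below is arbitrary and purely illustrative:

```python
import numpy as np

def gaussian_orthant_probability(cov, n_samples=200_000, seed=0):
    """Monte Carlo estimate of Pr(Z_1 >= 0, ..., Z_d >= 0) for a
    mean-zero Gaussian vector Z with covariance matrix `cov`."""
    rng = np.random.default_rng(seed)
    d = cov.shape[0]
    Z = rng.multivariate_normal(np.zeros(d), cov, size=n_samples)
    return np.mean(np.all(Z >= 0, axis=1))

# Sanity check: for four independent components the probability is (1/2)^4.
p_indep = gaussian_orthant_probability(np.eye(4))

# An (arbitrary, illustrative) covariance of (X2-X1, X3-X2, Y2-Y1, Y3-Y2):
cov = np.array([[ 1.0, -0.5,  0.3,  0.0],
                [-0.5,  1.0,  0.0,  0.3],
                [ 0.3,  0.0,  1.0, -0.5],
                [ 0.0,  0.3, -0.5,  1.0]])
p = gaussian_orthant_probability(cov)
```

For d = 4 this crude estimator is already accurate to a few decimal places; for higher accuracy, specialized quadrature methods for multivariate normal probabilities would be preferable.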
In what follows, we present an example of a bivariate AR(2)-process (Xi , Yi )i≥1 for which Corr(Xi , Yi ) = 0 and
OPD1 (X(1) , Y(1) ) = 0, but where the processes (Xi )i≥1 and (Yi )i≥1 are dependent. For this example, we show by means
of a Monte Carlo simulation that OPD2 (X(2) , Y(2) ) ≠ 0.
Example 1. Let Wi = (Xi , Yi ), i ≥ 1, be a bivariate AR(2)-process defined by Wi = AWi−2 + ξi for i ≥ 3, with

A := ( a   b
       b  −a )   and   ξi := (εi , ηi ),   (9)

where ξi , i ≥ 1, are bivariate Gaussian random vectors with covariance matrix Σξ = I2 (with I2 denoting the identity matrix), and
X1 := ε1 , Y1 := η1 , X2 := ε2 , Y2 := η2 . Moreover, we assume that σ² := Var(X1 ) = Var(Y1 ). By definition it holds that
Cov(X1 , Y1 ) = Cov(X2 , Y2 ) = 0. Moreover, we have OPD1 (X(1) , Y(1) ) = 0 since

Cov(X3 − X2 , Y3 − Y2 ) = E[(aX1 + bY1 + ε3 − X2 )(bX1 − aY1 + η3 − Y2 )] = abσ² − baσ² = 0.
This construction of AR(2)-processes can be extended to AR(h)-processes for h ∈ N: if one wants to obtain OPDh (X(h) , Y(h) ) ≠ 0
but OPDi (X(i) , Y(i) ) = 0 for i = 1, . . . , h − 1, as well as Corr(X1 , Y1 ) = 0, one uses h independent AR(1)-processes and couples
them via

Xj = A (X(1)j−h , X(2)j−h ).
To illustrate the strong connection between a large correlation of the increments at lag 1 and OPD2 (X(2) , Y(2) ), we
consider estimated values of OPD2 (X(2) , Y(2) ) based on simulations of a bivariate AR(2)-process that satisfies the
above assumptions; see Fig. 3. The corresponding boxplots clearly indicate that, as the length of the time series
increases, the ordinal pattern dependence of order h = 1 approaches zero, while the ordinal pattern dependence of
order h = 2 converges to a value between 0.1 and 0.25.
Fig. 3: Boxplots of ordinal pattern dependence of order h = 1 and h = 2 based on 5000 repetitions of a bivariate AR(2)-process (Xi , Yi ),
i ∈ {1, . . . , n}, satisfying (9) with a = 0.01 and b = 0.98.
5.3. Ordinal pattern dependence in contrast to multivariate Kendall’s τ
Recall that for Gaussian random vectors X(1) = (X1 , X2 ) and Y(1) = (Y1 , Y2 ) satisfying E(X1 ) = E(X2 ), E(Y1 ) =
E(Y2 ), and Var(X2 − X1 ) ≠ 0 ≠ Var(Y2 − Y1 ), it holds that

OPD1 (X(1) , Y(1) ) = τ(X̃1 , Ỹ1 ),

where X̃1 := X2 − X1 and Ỹ1 := Y2 − Y1 ; see Proposition 4.
In general, the following proposition establishes a relation between ordinal pattern dependence and multivariate
Kendall’s τ for Gaussian processes:
Proposition 5. Let (Xi , Yi ), i ≥ 1, be a bivariate, mean zero stationary Gaussian process. Then, it holds that

OPDh (X(h) , Y(h) ) = 2 τ(X̃1 , . . . , X̃h , Ỹ1 , . . . , Ỹh ) √( p̃X̃(h) (1 − p̃X̃(h) ) p̃Ỹ(h) (1 − p̃Ỹ(h) ) ) / ( 1 − Σπ∈Sh Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) )
  + Σπ∈Sh \Th [ Pr(Π(X(h) ) = Π(Y(h) ) = π) − Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) ] / ( 1 − Σπ∈Sh Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) ),

where Th := {(h, . . . , 0), (0, . . . , h)}, X̃i = Xi+1 − Xi , Ỹi = Yi+1 − Yi ,

p̃X̃(h) = Pr(Π(X1 , . . . , X1+h ) = (0, . . . , h)) = Pr(X̃1 ≤ 0, . . . , X̃h ≤ 0),
p̃Ỹ(h) = Pr(Π(Y1 , . . . , Y1+h ) = (0, . . . , h)) = Pr(Ỹ1 ≤ 0, . . . , Ỹh ≤ 0),

and

OPDh (X(h) , Y(h) ) = Σπ∈Sh τ(X1+π1 − X1+π0 , . . . , Y1+πh − Y1+πh−1 ) √( pX(h) ,π (1 − pX(h) ,π ) pY(h) ,π (1 − pY(h) ,π ) ) / ( 1 − Σπ∈Sh pX(h) ,π pY(h) ,π ),

with

pX(h) ,π = Pr(Π(X1 , . . . , X1+h ) = π),  pY(h) ,π = Pr(Π(Y1 , . . . , Y1+h ) = π).
Remark 4. It is a characteristic of Gaussian observations (Xi ), i ≥ 1, and (Yi ), i ≥ 1, that the distribution of

(X1+π1 − X1+π0 , . . . , X1+πh − X1+πh−1 , Y1+π1 − Y1+π0 , . . . , Y1+πh − Y1+πh−1 )

is uniquely determined by the autocovariances and cross-covariances of X(h) and Y(h) . For this reason, it is possible
to express all the dependencies in the vector above by the two-dimensional marginals of a multivariate Gaussian
distribution. However, since no closed expression for the orthant probabilities of a multivariate Gaussian vector with
more than three components is available, it is not possible to give a closed form for ordinal pattern dependence in terms
of Kendall's τ.
The proof of Proposition 5 has been postponed to Section 6. In order to illustrate the relation of multivariate
Kendall’s τ and ordinal pattern dependence, characterized through Proposition 5, we consider the case h = 1 in the
following example:
Example 2. Let X = (X1 , X2 ) and Y = (Y1 , Y2 ) be Gaussian random vectors as in Proposition 4. Recall that

OPD1 (X, Y) = τ(X2 − X1 , Y2 − Y1 ) = (2/π) arcsin(Corr(X2 − X1 , Y2 − Y1 )).

Moreover, if X and Y have standard normal marginal distributions, it holds that

Corr(X2 − X1 , Y2 − Y1 ) = ( 2E(X1 Y1 ) − E(X1 Y2 ) − E(Y1 X2 ) ) / ( 2 − 2E(X1 X2 ) ).

In general, we know that τ(X, Y) = (2/π) arcsin(Corr(X, Y)) for a Gaussian random vector (X, Y), and hence
Corr(X, Y) = sin( (π/2) τ(X, Y) ). As a result, we obtain

OPD1 (X, Y) = (2/π) arcsin( ( 2 sin((π/2) τ(X1 , Y1 )) − sin((π/2) τ(X1 , Y2 )) − sin((π/2) τ(X2 , Y1 )) ) / ( 2 − 2 sin((π/2) τ(X1 , X2 )) ) ).

Therefore, the ordinal pattern dependence of order 1 is determined by τ(X1 , Y1 ), τ(X1 , Y2 ), τ(X2 , Y1 ), and τ(X1 , X2 ).
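The identity from Example 2 can be verified numerically. In the following Python sketch, the moments E(X1X2), E(X1Y1), E(X1Y2), E(X2Y1) are chosen arbitrarily for illustration (assuming, as the displayed formula does, standard normal marginals and equal autocovariances in both components); both routes must give the same value, since sin((π/2)τ(U, V)) recovers the correlation exactly in the Gaussian case:

```python
import numpy as np

# Illustrative moments (assumed values, not from the paper), with
# E(X1X2) = E(Y1Y2) and E(X1Y1) = E(X2Y2) by stationarity.
e_x1x2 = 0.4
e_x1y1 = 0.5
e_x1y2, e_x2y1 = 0.2, 0.3

# Direct route: correlation of the increments, then (2/pi) arcsin.
corr_incr = (2 * e_x1y1 - e_x1y2 - e_x2y1) / (2 - 2 * e_x1x2)
opd1_direct = (2 / np.pi) * np.arcsin(corr_incr)

# Route via pairwise Kendall's tau: for a bivariate Gaussian pair,
# tau = (2/pi) arcsin(rho), hence rho = sin((pi/2) tau).
tau = lambda rho: (2 / np.pi) * np.arcsin(rho)
t11, t12, t21, tXX = tau(e_x1y1), tau(e_x1y2), tau(e_x2y1), tau(e_x1x2)
num = (2 * np.sin(np.pi / 2 * t11)
       - np.sin(np.pi / 2 * t12)
       - np.sin(np.pi / 2 * t21))
opd1_via_tau = (2 / np.pi) * np.arcsin(num / (2 - 2 * np.sin(np.pi / 2 * tXX)))
```

Up to floating-point error, `opd1_direct` and `opd1_via_tau` agree for any admissible choice of the moments.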
5.4. Simulation Study
In this section, we compare the estimators of ordinal pattern dependence and multivariate Kendall's τ based on
the vectors Xi(h) , Yi(h) , generated by bivariate processes (Xi , Yi ), i ≥ 1, in a simulation study. Fig. 4 corresponds to the
case h = 2. The following situations are considered:
1. We simulate (Xi ), i ≥ 1, and (Yi ), i ≥ 1, as two independent AR(1) time series with
Xi = ρXi−1 + εi , Yi = ρYi−1 + ηi ,
where |ρ| < 1, and (εi ), i ≥ 1, and (ηi ), i ≥ 1, are two independent sequences of random variables, both i.i.d.
standard normally distributed. In this case, the sequences (Xi ), i ≥ 1, and (Yi ), i ≥ 1, are generated by the
function arima.sim in R. For the simulations depicted in Fig. 4, we chose ρ = 0.5. As expected, the values
of both dependence measures vary around 0. Moreover, the boxplots become narrower as the sample size
increases, confirming the consistency of the estimators. The boxplots corresponding to the estimates of Kendall's τ
are wider, which indicates a faster convergence of the estimators for ordinal pattern dependence. For a systematic
simulation study see Table A.1 in the appendix.
2. We simulate (Xi ), i ≥ 1, and (Yi ), i ≥ 1, as sequences of independent, multivariate normal random vectors with
values in R3 and a joint normal distribution with expectation 0 and covariance matrix
    ( 1 0 0 ρ ρ ρ
      0 1 0 ρ ρ ρ
Σ =   0 0 1 ρ ρ ρ    .   (10)
      ρ ρ ρ 1 0 0
      ρ ρ ρ 0 1 0
      ρ ρ ρ 0 0 1 )
The R6 -valued random vectors (Xi , Yi ), i ≥ 1, are generated by the function rmvnorm in R. For the simulations
depicted in Fig. 4, we chose ρ = 0.2. Note that the values of both dependence measures deviate from 0,
thereby indicating a correlation between the two processes (Xi ), i ≥ 1, and (Yi ), i ≥ 1. In fact, the boxplots
of both estimators look very similar so that the rates of convergence seem to be comparable. For a systematic
simulation study see Table A.2 in the appendix.
3. We simulate (Xi ), i ≥ 1, as an AR(1) time series, while (Yi ), i ≥ 1, corresponds to (Xi ), i ≥ 1, shifted by one
time point. More precisely, we simulate
Xi = ρXi−1 + εi ,
where |ρ| < 1, and (εi ), i ≥ 1, is an i.i.d. standard normally distributed sequence of random variables, and we
define Yi = Xi+1 . For the simulations depicted in Fig. 4, we chose ρ = 0.5.
It is intriguing that ordinal pattern dependence does not detect the high correlation of the time series. The
theoretical value of ordinal pattern dependence for h = 1 in this case is given by

OPD1 (X(1) , Y(1) ) = (2/π) arcsin(Corr(X2 − X1 , Y2 − Y1 )) = (2/π) arcsin(Corr(X2 − X1 , X3 − X2 )).

Routine calculations yield the following formula for the ordinal pattern dependence of order 1 between an
AR(1)-process and the same process shifted by one time point:

OPD1 (X(1) , Y(1) ) = (2/π) arcsin( (ρ − 1)/2 ).
As a result, OPD1 (X(1) , Y(1) ) = −0.297, −0.161, −0.032 for ρ = 0.1, 0.5, 0.9, respectively. These values coincide with the
results of the corresponding systematic simulation study; see Table A.3 in the appendix.
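These values can be reproduced directly from the formula above:

```python
import numpy as np

# Theoretical OPD of order 1 between an AR(1)-process and its shift by one
# time point, using Corr(X2 - X1, X3 - X2) = (rho - 1)/2 for an AR(1)-process.
def opd1_shifted_ar1(rho):
    return (2 / np.pi) * np.arcsin((rho - 1) / 2)

values = {rho: round(opd1_shifted_ar1(rho), 3) for rho in (0.1, 0.5, 0.9)}
print(values)  # {0.1: -0.297, 0.5: -0.161, 0.9: -0.032}
```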
In [16, p. 713], an approach is presented to address the insensitivity of ordinal pattern dependence to
time shifts. The authors introduce time-shifted ordinal pattern dependence in order to investigate and compare
time series that are known to exhibit similar behavior within a certain time deviation. Such time series arise,
for example, in the context of hydrology, if discharge data of a river is considered at two different locations.
For a real-world data analysis see [10]. Using this approach, a (time-shifted) ordinal pattern dependence of 1 is
detected, since all patterns coincide if we re-shift the second time series.
Fig. 4: Boxplots of multivariate Kendall's τ and ordinal pattern dependence of order h = 2 based on 5000 repetitions of a bivariate process (Xi , Yi ), i ∈ {1, . . . , n}, corresponding
to the situations (1)-(3): (1) independent AR(1) time series, (2) multivariate normal random vectors, (3) shifted AR(1) time series.
6. Proofs
Proof of Theorem 1. The first four axioms in Definition 2 are easily verified; the fifth one is more involved. We show
that the fifth axiom is fulfilled for OPDh with h = 2. For h = 1, the difficulties in the proof are not revealed, while for
h > 2 the proof works analogously, but is notationally more complicated. Due to stationarity, it is enough to focus on
the first three components of X, Y, X∗ , and Y∗ , i.e., without loss of generality we consider
X = (X1 , X2 , X3 ), Y = (Y1 , Y2 , Y3 ), X∗ = (X1∗ , X2∗ , X3∗ ), Y∗ = (Y1∗ , Y2∗ , Y3∗ ).
Moreover, we can restrict our considerations to Pr (Π (X) = Π (Y)) (the remaining summands of OPDh (X, Y) only
relate to the distribution of X and Y separately). By Axiom 2 (invariance under permutation) it is, furthermore,
enough to consider the monotone increasing pattern, that is, Π(X1 , X2 , X3 ) = (3, 2, 1). Let

X =D X∗ ,  Y =D Y∗ ,  and  (X, Y) ≼C (X∗ , Y∗ ).   (11)
It is a well-known fact that (11) implies

(XI , YI ) ≼C (X∗I , Y∗I )

for all subvectors of variables with indices in I ⊆ {1, . . . , 6}, i.e., removing dimensions does not influence which of the two scenarios has the larger dependence measure; see [8]. We will make extensive use of this fact in what follows. Moreover,
recall that we assume that all considered marginal distribution functions are continuous.
Defining

Pr x1 ,y1 (A) := Pr(A | X1 = x1 , Y1 = y1 ),

considering Π(X1 , X2 , X3 ) = (3, 2, 1), and using disintegration twice yields

Pr(Π(X1 , X2 , X3 ) = Π(Y1 , Y2 , Y3 )) = Pr(X1 ≤ X2 ≤ X3 , Y1 ≤ Y2 ≤ Y3 )
  = Pr({X1 ≤ X2 } ∩ {X2 ≤ X3 } ∩ {Y1 ≤ Y2 } ∩ {Y2 ≤ Y3 })
  = ∫R² Pr x1 ,y1 (x1 ≤ X2 ≤ X3 , y1 ≤ Y2 ≤ Y3 ) dP(X1 ,Y1 ) (x1 , y1 )
  = ∫R² Pr x1 ,y1 (X2 ≤ X3 , Y2 ≤ Y3 | x1 ≤ X2 , y1 ≤ Y2 ) · Pr x1 ,y1 (X2 ≥ x1 , Y2 ≥ y1 ) dP(X1 ,Y1 ) (x1 , y1 )
  = ∫R² ∫[x1 ,∞[×[y1 ,∞[ Pr x1 ,y1 (X2 ≤ X3 , Y2 ≤ Y3 | X2 = x2 , Y2 = y2 ) dP x1 ,y1 (X2 ,Y2 ) (x2 , y2 ) Pr x1 ,y1 (X2 ≥ x1 , Y2 ≥ y1 ) dP(X1 ,Y1 ) (x1 , y1 ).

Since F̄(X3 ,Y3 ) (x2 , y2 ) = Pr x1 ,y1 (X2 ≤ X3 , Y2 ≤ Y3 | X2 = x2 , Y2 = y2 ), it follows that

Pr(Π(X1 , X2 , X3 ) = Π(Y1 , Y2 , Y3 )) = ∫R² ∫[x1 ,∞[×[y1 ,∞[ F̄(X3 ,Y3 ) (x2 , y2 ) dP x1 ,y1 (X2 ,Y2 ) (x2 , y2 ) F̄(X2 ,Y2 ) (x1 , y1 ) dP(X1 ,Y1 ) (x1 , y1 ).
Due to (11), we deduce that

∫ 1[a,∞[×[b,∞[ (x, y) dP(X j ,Y j ) (x, y) ≤ ∫ 1[a,∞[×[b,∞[ (x, y) dP(X∗j ,Y∗j ) (x, y)

for a, b ∈ R and j ∈ {1, 2, 3}. Since survival functions can be approximated by sums of indicator functions, the
bounded convergence theorem yields

∫[x1 ,∞[×[y1 ,∞[ F̄(X3∗ ,Y3∗ ) (x2 , y2 ) dP x1 ,y1 (X2 ,Y2 ) (x2 , y2 ) ≤ ∫[x1 ,∞[×[y1 ,∞[ F̄(X3∗ ,Y3∗ ) (x2 , y2 ) dP x1 ,y1 (X2∗ ,Y2∗ ) (x2 , y2 ).
Moreover, up to scaling, the function

H(x1 , y1 ) := ∫[x1 ,∞[×[y1 ,∞[ F̄(X3∗ ,Y3∗ ) (x2 , y2 ) dP x1 ,y1 (X2∗ ,Y2∗ ) (x2 , y2 )

can be considered as a survival function. Hence, we can use an approximation by sums of indicator functions for both
H and F̄(X2∗ ,Y2∗ ) . Most notably, the product of two functions having this property is of the same type, i.e., it can as well be
approximated by sums of indicator functions.
Thus, we finally arrive at

Pr(Π(X1 , X2 , X3 ) = Π(Y1 , Y2 , Y3 ) = (3, 2, 1))
  ≤ ∫R² ∫[x1 ,∞[×[y1 ,∞[ F̄(X3∗ ,Y3∗ ) (x2 , y2 ) dP x1 ,y1 (X2∗ ,Y2∗ ) (x2 , y2 ) F̄(X2∗ ,Y2∗ ) (x1 , y1 ) dP(X1∗ ,Y1∗ ) (x1 , y1 )
  = Pr(X1∗ ≤ X2∗ ≤ X3∗ , Y1∗ ≤ Y2∗ ≤ Y3∗ ) = Pr(Π(X1∗ , X2∗ , X3∗ ) = Π(Y1∗ , Y2∗ , Y3∗ ) = (3, 2, 1)).
Proof of Proposition 5. First, note that

OPDh (X(h) , Y(h) ) = Σπ∈Th [ Pr(Π(X(h) ) = Π(Y(h) ) = π) − Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) ] / ( 1 − Σπ∈Sh Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) )
  + Σπ∈Sh \Th [ Pr(Π(X(h) ) = Π(Y(h) ) = π) − Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) ] / ( 1 − Σπ∈Sh Pr(Π(X(h) ) = π) Pr(Π(Y(h) ) = π) ).

Focusing on the pattern π = (0, . . . , h) in the first summand yields

Pr(Π(X(h) ) = Π(Y(h) ) = (0, . . . , h)) = Pr(X1 ≥ . . . ≥ X1+h , Y1 ≥ . . . ≥ Y1+h )
  = Pr(X̃1 ≤ 0, . . . , X̃h ≤ 0, Ỹ1 ≤ 0, . . . , Ỹh ≤ 0)
  = τ(X̃1 , . . . , X̃h , Ỹ1 , . . . , Ỹh ) √( p̃X̃(h) (1 − p̃X̃(h) ) p̃Ỹ(h) (1 − p̃Ỹ(h) ) ) + p̃X̃(h) p̃Ỹ(h) .

Due to the symmetry of the multivariate normal distribution, we have (X(h) , Y(h) ) =D (−X(h) , −Y(h) ). Therefore, it follows
that

Pr(Π(X(h) ) = Π(Y(h) ) = (0, . . . , h)) = Pr(Π(X(h) ) = Π(Y(h) ) = (h, . . . , 0)).

Now let π = (π0 , . . . , πh ) be a permutation of 0, . . . , h. If Π(X1 , . . . , X1+h ) = π, it holds that X1+π0 ≥ X1+π1 ≥ . . . ≥ X1+πh .
As a result, ordinal pattern dependence can be expressed by the following formula:

OPDh (X(h) , Y(h) ) = Σπ∈Sh [ Pr(Π(X1 , . . . , X1+h ) = Π(Y1 , . . . , Y1+h ) = π) − pX(h) ,π pY(h) ,π ] / ( 1 − Σπ∈Sh pX(h) ,π pY(h) ,π )
  = Σπ∈Sh [ Pr(X1+π0 ≥ X1+π1 ≥ . . . ≥ X1+πh , Y1+π0 ≥ Y1+π1 ≥ . . . ≥ Y1+πh ) − pX(h) ,π pY(h) ,π ] / ( 1 − Σπ∈Sh pX(h) ,π pY(h) ,π )
  = Σπ∈Sh [ Pr(X1+π1 − X1+π0 ≤ 0, . . . , X1+πh − X1+πh−1 ≤ 0, Y1+π1 − Y1+π0 ≤ 0, . . . , Y1+πh − Y1+πh−1 ≤ 0) − pX(h) ,π pY(h) ,π ] / ( 1 − Σπ∈Sh pX(h) ,π pY(h) ,π )
  = Σπ∈Sh τ(X1+π1 − X1+π0 , . . . , Y1+πh − Y1+πh−1 ) √( pX(h) ,π (1 − pX(h) ,π ) pY(h) ,π (1 − pY(h) ,π ) ) / ( 1 − Σπ∈Sh pX(h) ,π pY(h) ,π ).
7. Conclusion and Outlook
We have shown that ordinal pattern dependence is a multivariate measure of dependence in an axiomatic sense.
When applied to bivariate time series, it can be interpreted as a value describing the co-movement of the two component time series in equal moving windows. In contrast to other dependence measures, it has thus been developed
against a time series background. To make ordinal pattern dependence comparable to other multivariate dependence
measures, we adapted the definitions of the latter to the time series approach. We found that univariate dependence measures do not carry enough information for an analysis of dependencies between two random vectors. They
do not take any dynamical dependence into account, given for example by the cross-correlations of the considered
random vectors. The same holds true for the multivariate extension of Pearson’s ρ. Hence, both measures are inappropriate in a time series context. For Gaussian observations, there is a close relationship between ordinal pattern
dependence and the multivariate version of Kendall’s τ. We proved that for h = 1 ordinal pattern dependence of two
random vectors can be represented as Kendall’s τ of the corresponding increment vectors. The provided simulations
show that the values of Kendall’s τ and ordinal pattern dependence of the same data set differ in concrete situations.
This emphasizes that the two measures operate on different levels, i.e., one operates on the level of the original process, the other on the level of increments. Moreover, for this reason ordinal pattern dependence is insensitive with
respect to time shifts: it only relies on the dependence between the corresponding increments. However, an extension
of ordinal pattern dependence that is sensitive to dependence shifted in time is given in [16]. The authors of that article
introduced time shifted ordinal pattern dependence that allows for time shifts between the moving windows of the two
time series. Using this approach it is possible to detect a co-movement of the two time series that does not happen
in the same moving window. Furthermore, it is an interesting topic for further research to compare ordinal pattern
dependence to copula-based dependence measures. Originally, copulas were introduced to focus on the dependence
within a multivariate random vector without taking the marginal distributions into account. If all univariate margins
are continuous, the multivariate generalizations of Kendall’s τ and multivariate Spearman’s ρ in [7] only depend on
the underlying copula. As these two measures are defined to measure dependence between two random vectors, this
approach is a promising starting point to extend these ideas to a time series background. An investigation of ordinal
pattern dependence with respect to the framework of copulas as well as a comparison to multivariate Spearman’s ρ
is ongoing research, but beyond the scope of the present article. Since ordinal pattern dependence admits a canonical
interpretation, and since limit theorems in the short- and long-range dependent frameworks are at hand, we suggest
using this measure to complement classical time series analysis with an ordinal point of view.
Acknowledgments
We would like to thank two anonymous referees whose comments have helped to improve the presentation of
our results. This research was supported in part by the German Research Foundation (DFG) through Collaborative
Research Center SFB 823 Statistical Modelling of Nonlinear Dynamic Processes and the project Ordinal-Pattern-Dependence: Grenzwertsätze und Strukturbrüche im langzeitabhängigen Fall mit Anwendungen in Hydrologie, Medizin und Finanzmathematik (SCHN 1231/3-2).
References
[1] J. Aaronson, R. Burton, H. Dehling, D. Gilat, T. Hill, B. Weiss, Strong laws for L- and U-statistics, Transactions of the American Mathematical
Society 348 (1996) 2845 – 2866.
[2] C. Bandt, Order patterns, their variation and change points in financial time series and Brownian motion, Statistical Papers 61 (2020) 1565 –
1588.
[3] C. Bandt, B. Pompe, Permutation entropy: A natural complexity measure for time series, Physical Review Letters 88 (2002) 174102.
[4] C. Bandt, F. Shiha, Order patterns in time series, Journal of Time Series Analysis 28 (2007) 646 – 665.
[5] H. Dehling, D. Vogel, M. Wendler, D. Wied, Testing for changes in Kendall’s tau, Econometric Theory 33 (2017) 1352 – 1386.
[6] I. Echegoyen, V. Vera-Ávila, R. Sevilla-Escoboza, J. H. Martinez, J. M. Buldu, Ordinal synchronization: Using ordinal patterns to capture
interdependencies between time series, Chaos, Solitons & Fractals 119 (2019) 8 – 18.
[7] O. Grothe, J. Schnieders, J. Segers, Measuring association and dependence between random vectors, Journal of Multivariate Analysis 123
(2014) 96 – 110.
[8] H. Joe, Multivariate concordance, Journal of Multivariate Analysis 35 (1990) 12 – 30.
[9] M. Mohr, F. Wilhelm, M. Hartwig, R. Möller, K. Keller, New approaches in ordinal pattern representations for multivariate time series,
in: Proceedings of the 33rd Florida Artificial Intelligence Research Society Conference (FLAIRS-33), 2020, pp. 124 – 129.
[10] I. Nüßgen, Ordinal pattern analysis: Limit theorems for multivariate long-range dependent Gaussian time series and a comparison to multivariate dependence measures, Ph.D. thesis, University of Siegen, 2021.
[11] I. Nüßgen, A. Schnurr, Ordinal pattern dependence in the context of long-range dependence, Entropy 23 (2021) 670.
[12] G. Puccetti, Measuring linear correlation between random vectors, Available at SSRN: https://ssrn.com/abstract=3116066 or
http://dx.doi.org/10.2139/ssrn.3116066 (September 18, 2019).
[13] J.-F. Quessy, M. Saı̈d, A.-C. Favre, Multivariate Kendall’s tau for change-point detection in copulas, Canadian Journal of Statistics 41 (2013)
65–82.
[14] F. Schmid, R. Schmidt, T. Blumentritt, S. Gaißer, M. Ruppert, Copula-based measures of multivariate association, in: P. Jaworski, F. Durante,
W. K. Härdle, T. Rychlik (Eds.), Copula theory and its applications, lecture notes in statistics 198, Springer Berlin Heidelberg, Berlin,
Heidelberg, 2010, pp. 209–236.
[15] A. Schnurr, An ordinal pattern approach to detect and to model leverage effects and dependence structures between financial time series,
Statistical Papers 55 (2014) 919 – 931.
[16] A. Schnurr, H. Dehling, Testing for structural breaks via ordinal pattern dependence, Journal of the American Statistical Association 112
(2017) 706 – 720.
[17] M. Sinn, K. Keller, Estimation of ordinal pattern probabilities in Gaussian processes with stationary increments, Computational Statistics &
Data Analysis 55 (2011) 1781 – 1790.
[18] J. M. Wooldridge, H. White, Some invariance principles and central limit theorems for dependent heterogeneous processes, Econometric
Theory 4 (1988) 210 – 230.
Appendix A. Simulation study
In this section, we empirically compare multivariate Kendall’s τ and ordinal pattern dependence by simulation studies for different
settings.
Table A.1: We generate Xi , 1 ≤ i ≤ n, and Yi , 1 ≤ i ≤ n, as two independent AR(1) time series with Xi = ρXi−1 + εi , Yi = ρYi−1 + ηi , where |ρ| < 1, and εi , 1 ≤ i ≤ n, and ηi ,
1 ≤ i ≤ n, are two independent sequences of random variables, both i.i.d. standard normally distributed.

                     |           ρ = 0.1            |           ρ = 0.5            |           ρ = 0.9
method       n    h  | mean (sd)      median (IQR)  | mean (sd)      median (IQR)  | mean (sd)      median (IQR)
Kendall's τ  100  1  | 0.007 (0.062)  0.004 (0.086) | 0.012 (0.087)  0.011 (0.121) | 0.018 (0.168)  0.017 (0.238)
Kendall's τ  300  1  | 0.003 (0.035)  0.003 (0.048) | 0.004 (0.051)  0.004 (0.068) | 0.008 (0.111)  0.007 (0.153)
Kendall's τ  500  1  | 0.002 (0.027)  0.002 (0.036) | 0.003 (0.040)  0.002 (0.053) | 0.006 (0.088)  0.004 (0.119)
opd          100  1  | 0.009 (0.106)  0.003 (0.139) | 0.011 (0.101)  0.016 (0.137) | 0.008 (0.098)  0.003 (0.137)
opd          300  1  | 0.003 (0.062)  0.002 (0.085) | 0.004 (0.059)  0.005 (0.083) | 0.003 (0.058)  0.004 (0.075)
opd          500  1  | 0.002 (0.048)  0.004 (0.062) | 0.003 (0.046)  0.003 (0.063) | 0.001 (0.044)  -0.000 (0.061)
Kendall's τ  100  2  | 0.008 (0.053)  0.002 (0.070) | 0.015 (0.083)  0.011 (0.114) | 0.031 (0.170)  0.027 (0.239)
Kendall's τ  300  2  | 0.003 (0.030)  0.002 (0.040) | 0.005 (0.048)  0.004 (0.065) | 0.013 (0.111)  0.011 (0.154)
Kendall's τ  500  2  | 0.002 (0.023)  0.001 (0.031) | 0.003 (0.038)  0.002 (0.050) | 0.009 (0.089)  0.007 (0.120)
opd          100  2  | 0.003 (0.055)  -0.000 (0.073) | 0.004 (0.055)  0.001 (0.077) | 0.004 (0.056)  0.002 (0.075)
opd          300  2  | 0.001 (0.033)  -0.000 (0.044) | 0.002 (0.032)  0.001 (0.044) | 0.001 (0.033)  0.001 (0.044)
opd          500  2  | 0.000 (0.025)  0.000 (0.034) | 0.001 (0.025)  0.001 (0.034) | 0.000 (0.025)  -0.001 (0.034)
Kendall's τ  100  3  | 0.006 (0.043)  -0.002 (0.053) | 0.015 (0.076)  0.007 (0.103) | 0.040 (0.169)  0.031 (0.239)
Kendall's τ  300  3  | 0.002 (0.024)  -0.000 (0.032) | 0.005 (0.044)  0.002 (0.059) | 0.016 (0.111)  0.014 (0.154)
Kendall's τ  500  3  | 0.001 (0.019)  0.000 (0.024) | 0.003 (0.035)  0.001 (0.047) | 0.012 (0.089)  0.009 (0.120)
opd          100  3  | 0.001 (0.027)  -0.001 (0.035) | 0.001 (0.027)  -0.002 (0.037) | 0.002 (0.031)  -0.001 (0.041)
opd          300  3  | 0.001 (0.015)  -0.001 (0.021) | 0.001 (0.016)  -0.000 (0.021) | 0.001 (0.018)  -0.001 (0.024)
opd          500  3  | -0.000 (0.012)  -0.001 (0.016) | 0.000 (0.013)  -0.000 (0.017) | 0.000 (0.014)  -0.000 (0.019)
Table A.2: We simulate (Xi ), i ≥ 1, and (Yi ), i ≥ 1, as sequences of independent, multivariate normal random vectors with values in R3 and a joint normal distribution with
expectation 0 and covariance matrix (10).

                     |           ρ = 0.1            |           ρ = 0.2            |           ρ = 0.3
method       n    h  | mean (sd)      median (IQR)  | mean (sd)      median (IQR)  | mean (sd)      median (IQR)
Kendall's τ  100  1  | 0.045 (0.034)  0.045 (0.047) | 0.091 (0.037)  0.091 (0.050) | 0.141 (0.039)  0.141 (0.053)
Kendall's τ  300  1  | 0.045 (0.020)  0.045 (0.026) | 0.091 (0.021)  0.091 (0.028) | 0.142 (0.022)  0.142 (0.030)
Kendall's τ  500  1  | 0.045 (0.015)  0.044 (0.021) | 0.091 (0.016)  0.091 (0.023) | 0.142 (0.017)  0.142 (0.023)
opd          100  1  | 0.066 (0.064)  0.066 (0.081) | 0.129 (0.063)  0.130 (0.086) | 0.196 (0.066)  0.198 (0.087)
opd          300  1  | 0.065 (0.036)  0.064 (0.048) | 0.128 (0.037)  0.129 (0.051) | 0.195 (0.038)  0.195 (0.053)
opd          500  1  | 0.065 (0.029)  0.065 (0.038) | 0.128 (0.028)  0.128 (0.039) | 0.195 (0.030)  0.194 (0.040)
Kendall's τ  100  2  | 0.031 (0.030)  0.029 (0.041) | 0.063 (0.033)  0.062 (0.045) | 0.100 (0.036)  0.100 (0.050)
Kendall's τ  300  2  | 0.030 (0.017)  0.029 (0.023) | 0.062 (0.019)  0.062 (0.026) | 0.100 (0.021)  0.100 (0.029)
Kendall's τ  500  2  | 0.030 (0.013)  0.030 (0.018) | 0.063 (0.015)  0.062 (0.019) | 0.100 (0.016)  0.100 (0.022)
opd          100  2  | 0.031 (0.034)  0.031 (0.047) | 0.065 (0.037)  0.063 (0.051) | 0.101 (0.040)  0.099 (0.054)
opd          300  2  | 0.030 (0.020)  0.030 (0.027) | 0.064 (0.022)  0.063 (0.029) | 0.102 (0.024)  0.101 (0.031)
opd          500  2  | 0.030 (0.016)  0.030 (0.021) | 0.064 (0.017)  0.063 (0.023) | 0.102 (0.018)  0.102 (0.025)
Kendall's τ  100  3  | 0.019 (0.024)  0.017 (0.032) | 0.041 (0.029)  0.039 (0.038) | 0.067 (0.033)  0.064 (0.043)
Kendall's τ  300  3  | 0.019 (0.014)  0.018 (0.019) | 0.042 (0.017)  0.041 (0.022) | 0.068 (0.019)  0.067 (0.026)
Kendall's τ  500  3  | 0.019 (0.011)  0.018 (0.015) | 0.041 (0.013)  0.041 (0.018) | 0.069 (0.015)  0.068 (0.020)
opd          100  3  | 0.011 (0.017)  0.010 (0.023) | 0.024 (0.020)  0.023 (0.028) | 0.041 (0.023)  0.040 (0.032)
opd          300  3  | 0.011 (0.010)  0.011 (0.014) | 0.024 (0.012)  0.024 (0.015) | 0.041 (0.013)  0.040 (0.018)
opd          500  3  | 0.011 (0.008)  0.011 (0.010) | 0.024 (0.009)  0.024 (0.012) | 0.041 (0.010)  0.041 (0.014)
Table A.3: We generate Xi , 1 ≤ i ≤ n, from an AR(1) time series with parameter ρ, while Yi , 1 ≤ i ≤ n, corresponds to Xi , 1 ≤ i ≤ n, shifted by one time point, i.e., Yi = Xi+1 .

                     |           ρ = 0.1             |           ρ = 0.5             |           ρ = 0.9
method       n    h  | mean (sd)       median (IQR)  | mean (sd)       median (IQR)  | mean (sd)       median (IQR)
Kendall's τ  100  1  | 0.354 (0.059)   0.354 (0.080) | 0.509 (0.058)   0.512 (0.077) | 0.738 (0.055)   0.744 (0.072)
Kendall's τ  300  1  | 0.360 (0.034)   0.361 (0.046) | 0.521 (0.033)   0.522 (0.044) | 0.777 (0.030)   0.779 (0.039)
Kendall's τ  500  1  | 0.361 (0.026)   0.361 (0.036) | 0.524 (0.025)   0.525 (0.033) | 0.784 (0.023)   0.786 (0.031)
opd          100  1  | -0.282 (0.083)  -0.279 (0.112) | -0.152 (0.090)  -0.157 (0.118) | -0.026 (0.097)  -0.023 (0.131)
opd          300  1  | -0.292 (0.049)  -0.292 (0.066) | -0.159 (0.053)  -0.159 (0.071) | -0.030 (0.057)  -0.028 (0.074)
opd          500  1  | -0.295 (0.038)  -0.295 (0.050) | -0.158 (0.041)  -0.157 (0.054) | -0.030 (0.044)  -0.029 (0.060)
Kendall's τ  100  2  | 0.437 (0.070)   0.438 (0.095) | 0.575 (0.065)   0.579 (0.086) | 0.783 (0.055)   0.791 (0.071)
Kendall's τ  300  2  | 0.450 (0.038)   0.450 (0.052) | 0.591 (0.036)   0.593 (0.049) | 0.816 (0.027)   0.817 (0.036)
Kendall's τ  500  2  | 0.453 (0.030)   0.454 (0.041) | 0.594 (0.027)   0.595 (0.037) | 0.823 (0.021)   0.824 (0.028)
opd          100  2  | -0.088 (0.037)  -0.088 (0.049) | -0.030 (0.044)  -0.031 (0.060) | 0.050 (0.052)   0.049 (0.070)
opd          300  2  | -0.088 (0.021)  -0.088 (0.028) | -0.028 (0.025)  -0.029 (0.033) | 0.052 (0.030)   0.051 (0.041)
opd          500  2  | -0.087 (0.016)  -0.087 (0.022) | -0.028 (0.019)  -0.028 (0.025) | 0.052 (0.024)   0.052 (0.032)
Kendall's τ  100  3  | 0.462 (0.083)   0.465 (0.113) | 0.602 (0.075)   0.609 (0.101) | 0.804 (0.058)   0.814 (0.073)
Kendall's τ  300  3  | 0.485 (0.047)   0.485 (0.062) | 0.624 (0.040)   0.625 (0.053) | 0.837 (0.027)   0.840 (0.035)
Kendall's τ  500  3  | 0.489 (0.035)   0.489 (0.047) | 0.629 (0.030)   0.630 (0.040) | 0.844 (0.020)   0.845 (0.026)
opd          100  3  | -0.029 (0.016)  -0.030 (0.022) | -0.008 (0.024)  -0.010 (0.033) | 0.040 (0.038)   0.037 (0.050)
opd          300  3  | -0.025 (0.010)  -0.025 (0.013) | -0.003 (0.014)  -0.004 (0.019) | 0.045 (0.022)   0.045 (0.030)
opd          500  3  | -0.024 (0.007)  -0.024 (0.010) | -0.002 (0.011)  -0.002 (0.015) | 0.046 (0.017)   0.046 (0.023)