arXiv:2010.10169v1 [math.PR] 20 Oct 2020
MULTIVARIATE TEMPERED STABLE RANDOM FIELDS
D. KREMER AND H.-P. SCHEFFLER
Abstract. Multivariate tempered stable independently scattered random measures (ISRMs) are constructed and their corresponding space of integrable functions is characterized in terms of a quasi-norm utilizing the so-called Rosinski measure of a tempered stable law. In the special case of exponentially tempered ISRMs, operator-fractional tempered stable random fields are presented by a moving-average and a harmonizable representation, respectively.
Dedicated to the late Mark M. Meerschaert.
1. Introduction
Multivariate tempered stable distributions are obtained by modifying the Lévy measure of a
multivariate stable law by a certain tempering function q(r, θ), see (2.1) below. This results
in tempering the large jumps of the corresponding Lévy process. See [13, 21] for the basic
theory of tempered stable laws. A prominent subclass of tempered stable laws is obtained
by using exponential tempering $q(r, \theta) = q_\lambda(r) = e^{-\lambda r}$ for some $\lambda > 0$. These exponentially tempered stable laws have what are called semi-heavy tails, meaning that the tails decay like a power law in an intermediate regime and then decay exponentially fast near infinity.
They have various applications, see e.g. [1, 2, 5, 15, 18, 19].
Symmetric α-stable random measures M(dx) are used to construct a large class of stable
processes in terms of stochastic integrals via
$$X(t) = \int f_t(x)\, M(dx)$$
for appropriate kernel functions ft . Prominent examples are fractional stable processes given
by either a moving-average or a harmonizable representation, see [22] and the literature cited
there. By modifying the kernel function of fractional stable processes, so-called tempered
fractional stable motions are obtained, see [6]. The purpose of this paper is to construct
fractional tempered stable fields and processes and to study their basic properties. In doing
so, we are not modifying the kernel function, but utilizing so-called tempered stable random
Date: October 21, 2020.
2010 Mathematics Subject Classification. 60E07; 60G52; 60G57; 60H05.
Key words and phrases. Tempered stable distributions; Independently scattered random measures; Stochastic integrals; Tangent fields.
measures. That is, the random noise $M(dx)$ gets tempered, whereas the kernel functions remain unchanged.
Using the recently developed general theory of multivariate independently scattered random
measures (abbreviated by ISRM) and their corresponding stochastic integrals (see [11]), in
Section 2 we introduce multivariate tempered stable ISRMs and characterize their space of integrable functions utilizing a quasi-norm based on the so-called Rosinski measure of the
underlying tempered stable law. The most prominent example is given by the exponentially
tempered ISRM $M_\lambda$, see Example 2.7 below. Using those we then give examples of operator-fractional tempered stable fields, using ideas and methods of [10, 12]. It turns out that the range of possible parameters of those fields is larger than for their stable counterparts.
In Section 4, we analyze what happens if the exponential tempering parameter λ tends to
zero. It turns out that in this case $M_\lambda$ (and the constructed random fields) converge in distribution to their stable counterpart $M_0$. Moreover, it is shown that operator-fractional
tempered stable random fields given by the moving-average representation behave locally
like operator-fractional stable random fields, using the notion of localisability.
2. Preliminary Results
We start with some notation. Write min(a, b) = a ∧ b and max(a, b) = a ∨ b, respectively.
Let $\|\cdot\|$ be the Euclidean norm on $\mathbb{R}^d$ with inner product $\langle \cdot, \cdot \rangle$. Denote the set of all linear
operators on Rd by L(Rd ) with I = Id being the identity operator on Rd , represented as d × d
matrices in each case. Then, by a little abuse of notation, we also write k·k for the operator
norm on L(Rd ) (which is induced by the vector space norm k·k). Denote the transposed
vector of x by xt and the adjoint matrix of D by D ∗ , respectively. Finally, all occurring
random vectors are defined on an underlying probability space (Ω, A, P) and we equip Rd
with its Borel σ-algebra $\mathcal{B}(\mathbb{R}^d)$ (accordingly for $L(\mathbb{R}^d)$).
2.1. Tempered stable distributions and random measures. Consider µ to be a tempered α-stable distribution (abbreviated by T αS) on Rd in the sense of Definition 2.1 in [21],
where the stability index 0 < α < 2 is fixed throughout the paper. That means that µ is
infinitely-divisible without Gaussian part and that its Lévy measure φ in polar coordinates
is of the form
(2.1) $\phi(dr, d\theta) = r^{-\alpha-1} q(r, \theta)\, dr\, \sigma(d\theta)$.
Here σ is a finite measure on S d−1 := {x ∈ Rd : kxk = 1} and q : (0, ∞) × S d−1 → (0, ∞) is
a Borel function such that, for every θ ∈ S d−1 , we have:
(i) $q(\cdot, \theta)$ is completely monotone, i.e. $(-1)^n \frac{\partial^n}{\partial r^n} q(r, \theta) > 0$ for all $r > 0$ and $n \in \mathbb{N}$.
(ii) limr→∞ q(r, θ) = 0.
We call $q$ the tempering function of the T αS distribution $\mu$. It allows us to define a measure $R$
on Rd , which is often referred to as the Rosinski measure. The details can be found in [21].
Actually, it is uniquely determined by the relation
(2.2) $\phi(A) = \int_{\mathbb{R}^d} \int_0^\infty \mathbf{1}_A(rx)\, r^{-\alpha-1} e^{-r}\, dr\, R(dx), \quad A \in \mathcal{B}(\mathbb{R}^d)$.
It fulfills
$$R(\{0\}) = 0, \qquad \int_{\mathbb{R}^d} (\|x\|^2 \wedge \|x\|^\alpha)\, R(dx) < \infty,$$
which implies that $\int (1 \wedge \|x\|^2)\, R(dx) < \infty$. So $R$ is a Lévy measure, too. Also recall the
Lévy-Khintchine formula (see Theorem 3.1.11 in [17]) to verify that $\mu \sim [a, 0, \phi]$, where the vector $a \in \mathbb{R}^d$ is a shift. In other words, the characteristic function of $\mu$ is given by $\widehat{\mu}(u) = \exp(\psi(u))$, where
(2.3) $\psi(u) = i\langle a, u\rangle + \int_{\mathbb{R}^d} \Big( e^{i\langle u, x\rangle} - 1 - \frac{i\langle u, x\rangle}{1 + \|x\|^2} \Big)\, \phi(dx), \quad u \in \mathbb{R}^d$
is the corresponding log-characteristic function (abbreviated by LCF).
Remark 2.1.
(i) If $q$ additionally fulfills $\lim_{r \downarrow 0} q(r, \theta) = 1$ for every $\theta \in S^{d-1}$, then $\mu$ is called a proper T αS distribution in the terminology of [21]. Theorem 2.3 in [21] states that $\mu$ is proper if and only if $\int \|x\|^\alpha\, R(dx) < \infty$.
(ii) It is usual to assume that µ is full, i.e. not concentrated on any hyperplane of Rd .
Using Proposition 3.1.20 in [17] together with (2.2) it follows that µ is full if and
only if $R$ is not concentrated on any $(d-1)$-dimensional subspace of $\mathbb{R}^d$. In this case,
following the notation in [17], we say that R is M-full.
(iii) The function ψ can sometimes be represented more explicitly, cf. [13] for the case
d = 1. Moreover, Theorem 2.9 in [21] presents integral representations for ψ in terms
of the measure R. However, they are based on a different function within the integral
in (2.3), which leads to a different shift vector. See Remark 8.4 in [23] for details.
(iv) The previous distinction is negligible if $\mu$ is symmetric, because in this case $\phi$ is also symmetric and $a = 0$, which allows to rewrite (2.3) as
(2.4) $\psi(u) = \int_{\mathbb{R}^d} (\cos\langle u, x\rangle - 1)\, \phi(dx), \quad u \in \mathbb{R}^d$.
In what follows we want to use µ in order to construct a new class of T αS random measures.
For this purpose let (S, Σ, ν) be a σ-finite measure space and S := {A ∈ Σ : ν(A) < ∞}.
Note that S is a so-called δ-ring whose generated σ-algebra equals Σ (see [11]). Recall that
a mapping $M : \mathcal{S} \to \{X : \Omega \to \mathbb{R}^d \mid X \text{ random vector}\}$ is called an $\mathbb{R}^d$-valued (independently scattered) random measure on $\mathcal{S}$ if the following conditions hold true:
(RM1 ) For every finite choice A1 , ..., Ak of disjoint sets in S the random vectors
M(A1 ), ..., M(Ak ) are stochastically independent.
(RM2 ) For every sequence $(A_n) \subset \mathcal{S}$ of disjoint sets fulfilling $\bigcup_{n=1}^\infty A_n \in \mathcal{S}$ we have that
$$M\Big(\bigcup_{n=1}^\infty A_n\Big) = \sum_{n=1}^\infty M(A_n) \quad \text{almost surely (a.s.).}$$
Using Example 3.7 in [11] we get the following result.
Theorem 2.2. Recall that µ ∼ [a, 0, φ], let S and ν be as before. Then there exists an
Rd -valued ISRM M on S such that M(A) has an infinitely-divisible distribution given by
M(A) ∼ [ν(A) · a, 0, ν(A) · φ] for every A ∈ S.
In view of Proposition 2.5 below it is reasonable to call M a tempered (α-)stable ISRM,
which is generated by µ. Also note that M, being infinitely-divisible, is closely related to its
so-called control measure λM on Σ, cf. Theorem 3.2 in [11]. In particular, λM equals ν up to
a positive constant.
2.2. Integration theory. Usually, the main benefit of ISRMs results from the consideration
of corresponding stochastic integrals. More precisely, for f : S → L(Rd ), we want to define
integrals of the form
$$I_M(f) := I(f) := \int_S f(s)\, M(ds),$$
leading to Rd -valued random vectors again. Since this integral is achieved by a stochastic
limit, conditions for the existence of I(f ) have been studied in [11] (see [20] for the case
d = 1). We omit the details. Instead we focus on the main results, adapted to our setting.
Therefore, denote the set of all functions f : S → L(Rd ) for which I(f ) is well-defined by
I(M). Then Proposition 4.5 in [11] and Theorem 2.3 in [12] yield the following.
Theorem 2.3. Let $M$ be as in Theorem 2.2 and define $U : L(\mathbb{R}^d) \to \mathbb{R}^d$ by
$$U(D) := Da + \int_{\mathbb{R}^d} \Big( \frac{Dx}{1 + \|Dx\|^2} - \frac{Dx}{1 + \|x\|^2} \Big)\, \phi(dx), \quad D \in L(\mathbb{R}^d).$$
Then, for f : S → L(Rd ) measurable, the following statements are equivalent:
(i) f ∈ I(M).
(ii) $J_1(f) + J_2(f) < \infty$ holds true, where
(2.5) $J_1(f) := \int_S \|U(f(s))\|\, \nu(ds) \quad \text{and} \quad J_2(f) := \int_S \int_{\mathbb{R}^d} (1 \wedge \|f(s)x\|^2)\, \phi(dx)\, \nu(ds)$.
(iii) For every $u \in \mathbb{R}^d$ we have that $\int |\psi(f(s)^* u)|\, \nu(ds) < \infty$ and the mapping
$$\mathbb{R}^d \ni u \mapsto \int_S \int_{\mathbb{R}^d} (\cos\langle f(s)^* u, x\rangle - 1)\, \phi(dx)\, \nu(ds)$$
is continuous.
In each case it follows that $I(f)$ is infinitely-divisible. More precisely, $I(f) \sim [a^f, 0, \phi^f]$, where $a^f := \int U(f(s))\, \nu(ds)$ and
(2.6) $\phi^f(A) := (\nu \otimes \phi)(\{(s, x) \in S \times \mathbb{R}^d : f(s)x \in A \setminus \{0\}\}), \quad A \in \mathcal{B}(\mathbb{R}^d)$.
Hence, the LCF of $I(f)$ is given by $\mathbb{R}^d \ni u \mapsto \int \psi(f(s)^* u)\, \nu(ds)$.
The notion of a stochastic integral is first of all justified by the fact that $I(M)$ is a real vector space and that the mapping $I(M) \ni f \mapsto I(f)$ is linear. Moreover, for $f, f_1, f_2, \ldots \in I(M)$, we have that $I(f_n) \to I(f)$ in probability if and only if
(2.7) $\forall u \in \mathbb{R}^d : \quad \int_S \psi((f_n(s) - f(s))^* u)\, \nu(ds) \to 0 \quad (\text{as } n \to \infty)$.
This and further properties can be found in Proposition 4.3 and Theorem 4.4 in [11]. In
addition, we identify elements in I(M) that are identical ν-a.e. because this implies that the
resulting integrals are equal a.s. and vice versa.
Next we want to understand J2 (f ) (recall (2.5)) in terms of the Rosinski measure R, which
is inspired by Lemma 2.10 in [21].
Lemma 2.4. We have that
$$J_2(f) < \infty \quad \Leftrightarrow \quad H(f) := \int_S \int_{\mathbb{R}^d} (\|f(s)x\|^\alpha \wedge \|f(s)x\|^2)\, R(dx)\, \nu(ds) < \infty.$$
Proof. Recall that $0 < \alpha < 2$ is fixed and consider for $z > 0$ the incomplete gamma type functions
$$\gamma(2-\alpha, z) := \int_0^z u^{(2-\alpha)-1} e^{-u}\, du = \int_0^z u^{1-\alpha} e^{-u}\, du \quad \text{and} \quad \Gamma(-\alpha, z) := \int_z^\infty u^{-\alpha-1} e^{-u}\, du,$$
respectively. For instance, see Proposition 12 in [9] together with [24] in order to verify that the following asymptotics hold true: As $z \to 0$ we have that
$$z^{\alpha-2} \gamma(2-\alpha, z) \to (2-\alpha)^{-1} \quad \text{and} \quad z^\alpha \Gamma(-\alpha, z) \to \alpha^{-1}.$$
On the other hand, as $z \to \infty$,
$$\gamma(2-\alpha, z) \to \Gamma(2-\alpha) = \int_0^\infty u^{1-\alpha} e^{-u}\, du \quad \text{and} \quad z^{1+\alpha} e^z\, \Gamma(-\alpha, z) \to 1.$$
If we define $g(z) := z^{-2} \gamma(2-\alpha, z) + \Gamma(-\alpha, z)$, it follows that
(2.8) $\lim_{z \to 0} z^\alpha g(z) = (2-\alpha)^{-1} + \alpha^{-1} > 0,$
(2.9) $\lim_{z \to \infty} z^2 g(z) = \lim_{z \to \infty} \big( \gamma(2-\alpha, z) + z^{1+\alpha} e^z\, \Gamma(-\alpha, z) \cdot z^{1-\alpha} e^{-z} \big) = \Gamma(2-\alpha) > 0.$
Combine (2.8)-(2.9) and use the fact that $z \mapsto g(z)/(z^{-\alpha} \wedge z^{-2})$ is continuous as well as strictly positive on $(0, \infty)$ to obtain constants $C_1, C_2 > 0$ fulfilling
(2.10) $\forall z > 0 : \quad C_1 (z^{-\alpha} \wedge z^{-2}) \le g(z) \le C_2 (z^{-\alpha} \wedge z^{-2})$.
On the other hand, (2.2) yields that
$$J_2(f) = \int_S \int_{\mathbb{R}^d} \int_0^\infty (1 \wedge \|r f(s)x\|^2)\, r^{-\alpha-1} e^{-r}\, dr\, R(dx)\, \nu(ds).$$
Now fix $(s, x) \in S \times \mathbb{R}^d$ such that $f(s)x \ne 0$ and let $z = \|f(s)x\|^{-1}$ to observe that
$$\int_0^\infty (1 \wedge \|r f(s)x\|^2)\, r^{-\alpha-1} e^{-r}\, dr = z^{-2} \int_0^z r^{1-\alpha} e^{-r}\, dr + \int_z^\infty r^{-\alpha-1} e^{-r}\, dr = g(z).$$
Then the assertion follows from (2.10).
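A quick numerical sanity check of the asymptotics (2.8)-(2.9) and the resulting two-sided bound (2.10) may be instructive. The following sketch evaluates both incomplete gamma integrals directly by quadrature; the concrete value of $\alpha$ and the use of NumPy/SciPy are our own illustrative assumptions, not part of the proof.

```python
# Numerical sanity check of (2.8)-(2.10) for
# g(z) = z^{-2} gamma(2-alpha, z) + Gamma(-alpha, z).
import numpy as np
from scipy.integrate import quad

alpha = 1.2  # any fixed stability index in (0, 2); illustrative choice

def lower_gamma(z):  # gamma(2-alpha, z) = int_0^z u^{1-alpha} e^{-u} du
    return quad(lambda u: u**(1 - alpha) * np.exp(-u), 0.0, z)[0]

def upper_gamma(z):  # Gamma(-alpha, z) = int_z^inf u^{-alpha-1} e^{-u} du
    mid = max(z, 1.0)  # split to keep the adaptive quadrature accurate
    head = quad(lambda u: u**(-alpha - 1) * np.exp(-u), z, mid)[0]
    tail = quad(lambda u: u**(-alpha - 1) * np.exp(-u), mid, np.inf)[0]
    return head + tail

def g(z):
    return z**(-2) * lower_gamma(z) + upper_gamma(z)

# (2.8): z^alpha g(z) -> 1/(2-alpha) + 1/alpha as z -> 0
assert abs(1e-4**alpha * g(1e-4) / (1/(2 - alpha) + 1/alpha) - 1) < 1e-2
# (2.9): z^2 g(z) -> Gamma(2-alpha) as z -> infinity
from math import gamma
assert abs(50.0**2 * g(50.0) / gamma(2 - alpha) - 1) < 1e-2
# (2.10): g(z)/(z^{-alpha} ^ z^{-2}) stays between two positive constants
ratios = [g(z) / min(z**-alpha, z**-2) for z in np.logspace(-3, 2, 40)]
assert 0 < min(ratios) <= max(ratios) < np.inf
```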
The function $H(f)$, which we just introduced, is similar to the approach in [10] and will be useful again later. Before that, we still have to explain in which sense $M$ and the corresponding integrals have a tempered stable behavior. In this context, note that $f = \mathbf{1}_A I$ belongs to
I(M) for every A ∈ S.
Proposition 2.5. Consider $f, f_1, \ldots, f_k \in I(M)$. Then the random vector $(I(f_1), \ldots, I(f_k))^t$ has a T αS distribution. Moreover, the Rosinski measure of $I(f)$ is given by
(2.11) $R^f(A) := (\nu \otimes R)(\{(s, x) \in S \times \mathbb{R}^d : f(s)x \in A \setminus \{0\}\}), \quad A \in \mathcal{B}(\mathbb{R}^d)$,
which means that
(2.12) $\phi^f(A) = \int_{\mathbb{R}^d} \int_0^\infty \mathbf{1}_A(ry)\, r^{-\alpha-1} e^{-r}\, dr\, R^f(dy)$.
Proof. We first assume that $k = 1$. Recall that $I(f)$ has no Gaussian part and that its Lévy measure $\phi^f$ is given by (2.6). Using (2.2) together with the definition of $R^f$ from (2.11) it follows for every $A \in \mathcal{B}(\mathbb{R}^d)$ that
$$\phi^f(A) = \int_S \int_{\mathbb{R}^d} \int_0^\infty \mathbf{1}_{A \setminus \{0\}}(r f(s)x)\, r^{-\alpha-1} e^{-r}\, dr\, R(dx)\, \nu(ds) = \int_{\mathbb{R}^d} \int_0^\infty \mathbf{1}_A(ry)\, r^{-\alpha-1} e^{-r}\, dr\, R^f(dy),$$
i.e. (2.12) holds true. It remains to show that
(2.13) $\int (\|y\|^\alpha \wedge \|y\|^2)\, R^f(dy) < \infty$,
because in this case and based on (2.12), Theorem 2.3 in [21] states that $\phi^f$ really is the Lévy measure of a T αS distribution on $\mathbb{R}^d$. Actually, by (2.11) and in view of Lemma 2.4, condition (2.13) is equivalent to the finiteness of $J_2(f)$. And since $f \in I(M)$ implies that $J_2(f) < \infty$ due to Theorem 2.3, this completes the proof for $k = 1$. In the general case
Corollary 4.9 in [11] states that $(I(f_1), \ldots, I(f_k))^t$ is infinitely-divisible, while a combination of Example 3.7 and Theorem 4.4 in [11] shows that its LCF is given by
(2.14) $\mathbb{R}^{k \cdot d} \ni (u_1, \ldots, u_k)^t \mapsto \int_S \psi\Big( \sum_{j=1}^k f_j(s)^* u_j \Big)\, \nu(ds)$.
Define $\psi'((u_1, \ldots, u_k)^t) := \psi(\sum_{j=1}^k u_j)$. Then one can verify that $\psi'$ is the LCF of a distribution $\mu'$ on $\mathbb{R}^{k \cdot d}$, which is also T αS. Since the corresponding notation is somewhat involved, the details are left to the reader. Anyway, it follows from Theorem 2.2 that there exists an $\mathbb{R}^{k \cdot d}$-valued ISRM, which is generated by $\mu'$ and which we denote by $M'$. Moreover, define the block diagonal matrices $f'(s) := f_1(s) \oplus \cdots \oplus f_k(s) \in L(\mathbb{R}^{k \cdot d})$ for every $s \in S$. Then, using (2.14), it is easy to check that $f' \in I(M')$ and that
$$(I(f_1), \ldots, I(f_k))^t \overset{d}{=} \int_S f'(s)\, M'(ds),$$
where $\overset{d}{=}$ means equality in distribution. Hence, the argument for $(I(f_1), \ldots, I(f_k))^t$ having a T αS distribution reduces to the case $k = 1$ from above.
If the T αS distributed generator $\mu$ is proper, things become more practicable. For this purpose let $L^1(\nu) := \{g : S \to \mathbb{R} \mid g \text{ is measurable and } \int |g(s)|\, \nu(ds) < \infty\}$ and recall Remark 2.1 (i).
Corollary 2.6. Assume that $\mu$ is proper. Then we have:
(i) $\|f\|^\alpha \in L^1(\nu)$ implies that $J_2(f) < \infty$ and, provided that $f \in I(M)$, also that the T αS random vector $I(f)$ is proper.
(ii) Moreover, if $\int_{\|x\| \ge 1} \|x\|^2\, R(dx) < \infty$, a sufficient condition for $J_2(f) < \infty$ is given by $(\|f\|^\alpha \wedge \|f\|^2) \in L^1(\nu)$.
R
Proof. The fact that µ is proper yields
kxkα R(dx) < ∞. Recall from Lemma 2.4 that
R
α
J2 (f ) < ∞ if and only if H(f ) = (kyk ∧ kyk2) Rf (dy) < ∞, where H(f ) is obviously
bounded by
Z
Rd
α
f
kyk R (dy) =
Z Z
S
Rd
α
kf (s)xk R(dx) ν(ds) ≤
Z
S
α
kf (s)k ν(ds)
Z
Rd
kxkα R(dx).
Hence, (i) follows from Proposition 2.5, since we have that $I(f)$ is proper if and only if $\int \|y\|^\alpha\, R^f(dy) < \infty$. In a similar way we obtain (ii) due to the following computation:
$$H(f) = \int_S \int_{\mathbb{R}^d} (\|f(s)x\|^\alpha \wedge \|f(s)x\|^2)\, R(dx)\, \nu(ds) \le \int_S (\|f(s)\|^\alpha \wedge \|f(s)\|^2)\, \nu(ds) \int_{\mathbb{R}^d} (\|x\|^\alpha \vee \|x\|^2)\, R(dx)$$
(2.15) $\quad = \int_S (\|f(s)\|^\alpha \wedge \|f(s)\|^2)\, \nu(ds) \Big( \int_{\|x\| \le 1} \|x\|^\alpha\, R(dx) + \int_{\|x\| \ge 1} \|x\|^2\, R(dx) \Big)$.
Note that the finiteness of $\int_{\|x\| \ge 1} \|x\|^2\, R(dx)$ is equivalent to the finiteness of $\int_{\|x\| \ge 1} \|x\|^2\, \phi(dx)$ or $\int_{\mathbb{R}^d} \|x\|^2\, \mu(dx)$, respectively (see Proposition 2.7 in [21]).
kxk≥1
2.3. The full and proper-symmetric case. From now on we only consider the case that the generator $\mu$ is full as well as proper and symmetric. In view of Remark 2.1 this implies that $f \in I(M) \Leftrightarrow J_2(f) < \infty$, together with $I(f) \sim [0, 0, \phi^f]$. In particular, the conditions of Corollary 2.6 (i) are fulfilled. The following example fits into this setting and also illustrates part (ii) of Corollary 2.6.
Example 2.7. Fix $\lambda > 0$ and explicitly consider the proper tempering function $q(r, \theta) = q_\lambda(r) := e^{-\lambda r}$ in (2.1). Then, if $\sigma$ is symmetric, $\phi = \phi_\lambda$ and $\mu_\lambda \sim [0, 0, \phi_\lambda]$ are also symmetric. Moreover, using the construction from (2.3)-(2.5) in [21], we observe that the corresponding Rosinski measure $R_\lambda$ is concentrated on the set $\lambda \cdot S^{d-1}$, where it equals the image measure $\lambda(\sigma)$, defined by $\lambda(\sigma)(A) := \sigma(\lambda^{-1} \cdot A)$ (cf. Example 1 in [21]). It follows that
$$\int_{\|x\| \ge 1} \|x\|^2\, R_\lambda(dx) = \int_{S^{d-1}} \|\lambda\theta\|^2\, \sigma(d\theta) = \lambda^2 \sigma(S^{d-1}) < \infty.$$
Thus, in view of Corollary 2.6 (ii), $(\|f\|^\alpha \wedge \|f\|^2) \in L^1(\nu)$ implies that $f \in I(M_\lambda)$, where $M_\lambda$ denotes the ISRM that is generated by $\mu_\lambda$. In a similar way it follows from Lemma 2.4 that $I(M_\lambda)$ does not depend on $\lambda > 0$.
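As a small illustration of this example, the integral $\int (\|Dx\|^\alpha \wedge \|Dx\|^2)\, R_\lambda(dx)$ (which reappears as $h(D)$ in Section 2.3) reduces to an average over the sphere of radius $\lambda$, since $R_\lambda$ sits on $\lambda \cdot S^{d-1}$. The sketch below evaluates it by Monte Carlo for a uniform probability measure $\sigma$ on $S^{d-1}$; the uniformity of $\sigma$ and all parameter values are our own illustrative assumptions.

```python
# Monte Carlo sketch of h(D) = int (||Dx||^alpha ^ ||Dx||^2) R_lambda(dx)
# in the exponentially tempered case, with sigma the uniform probability
# measure on S^{d-1} (an assumption made here for illustration only).
import numpy as np

rng = np.random.default_rng(0)
d, alpha, lam = 3, 1.5, 2.0

def h(D, n=100_000):
    theta = rng.standard_normal((n, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # uniform on S^{d-1}
    norms = np.linalg.norm((lam * theta) @ D.T, axis=1)    # ||D(lam * theta)||
    return np.mean(np.minimum(norms**alpha, norms**2))

# For D = rho * I the integrand is constant: h interpolates between
# (rho*lam)^2 scaling (small rho) and (rho*lam)^alpha scaling (large rho).
D = np.eye(d)
assert abs(h(1e-3 * D) / (1e-3 * lam)**2 - 1) < 1e-6
assert abs(h(1e3 * D) / (1e3 * lam)**alpha - 1) < 1e-6
```

This makes the cut-off behind Lemma 2.4 concrete: the quadratic regime controls small values of $\|Dx\|$, the $\alpha$-power regime the large ones.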
We remark that the previous example will be very important later. Now consider $f, f_1, f_2, \ldots \in I(M)$ and observe that $I(f_n) - I(f) = I(f_n - f) \sim [0, 0, \phi^{f_n - f}]$ in the present case. Hence, as an alternative to criterion (2.7), we have that $I(f_n) \to I(f)$ in probability if and only if the convergence $\phi^{f_n - f} \to 0$ holds true in the space $\mathbb{M}$, that is the space of all measures on $\mathbb{R}^d$ that assign finite measure to sets bounded away from zero. That means $\phi^{f_n - f}(A) \to 0$ for all $A \in \mathcal{B}(\mathbb{R}^d)$ bounded away from zero. See Theorem 3.1.16 in [17] for details. Lemma 2.1 in [11] shows that this is also equivalent to $J_2(f_n - f) \to 0$. On the other hand, relation (2.10) from the proof of Lemma 2.4 revealed that
(2.16) $C_1 H(f) \le J_2(f) \le C_2 H(f)$
holds true for all measurable $f$. Hence, as $n \to \infty$ we have that
(2.17) $I(f_n) \to I(f)$ in probability $\quad \Leftrightarrow \quad H(f_n - f) \to 0$.
Similarly to (2.13) in [10] this motivates to introduce
$$h(D) := \int_{\mathbb{R}^d} (\|Dx\|^\alpha \wedge \|Dx\|^2)\, R(dx), \quad D \in L(\mathbb{R}^d)$$
together with
$$H(f, \delta) := \int_S h(\delta^{-1} f(s))\, \nu(ds) \in [0, \infty], \quad \delta > 0.$$
Recall that $R$ is M-full due to Remark 2.1. Then we obtain the following observation.
Lemma 2.8. Under the previous assumptions there exists a constant $K > 0$ such that $(\|D\|^\alpha \wedge \|D\|^2) \le K\, h(D)$ holds true for every $D \in L(\mathbb{R}^d)$.
Proof. Fix $1 \le i, j \le d$. Denote the corresponding entry of the matrix $D$ by $D_{i,j}$ and let $e_j$ be the $j$-th unit vector in $\mathbb{R}^d$. Since $R$ is M-full, we have that $R(\{z e_j : z \ne 0\}) > 0$. In particular, using that $R$ is a Lévy measure, there exists some $0 < c_j \le 1$ such that $F_j := \{z e_j : |z| \ge \sqrt{c_j}\}$ fulfills $0 < R(F_j) < \infty$. In view of $c_j^{1/\alpha} \le \sqrt{c_j}$ it follows that
$$|D_{i,j}|^\alpha \wedge D_{i,j}^2 \le \|D e_j\|^\alpha \wedge \|D e_j\|^2 = (c_j R(F_j))^{-1} \int \big( \|c_j^{1/\alpha} D e_j\|^\alpha \wedge \|\sqrt{c_j}\, D e_j\|^2 \big)\, \mathbf{1}_{F_j}(x)\, R(dx)$$
$$\le (c_j R(F_j))^{-1} \int (\|Dx\|^\alpha \wedge \|Dx\|^2)\, \mathbf{1}_{F_j}(x)\, R(dx) \le (c_j R(F_j))^{-1} h(D).$$
Note that $i$ as well as $j$ were arbitrary and that all norms on $L(\mathbb{R}^d)$ are equivalent. Hence, up to a positive constant, $K$ can be chosen as $\max\{(c_j R(F_j))^{-1} : 1 \le j \le d\}$.
We call a mapping $\|\cdot\|_M : I(M) \to [0, \infty)$ a quasi-norm (on $I(M)$) if it has the usual properties of a norm, except a possible weakening of the triangle inequality. That is, there exists some $A \ge 1$ such that
(2.18) $\forall f_1, f_2 \in I(M) : \quad \|f_1 + f_2\|_M \le A(\|f_1\|_M + \|f_2\|_M)$
holds true. Recall that we identify elements in $I(M)$ that are identical $\nu$-a.e.
Theorem 2.9.
(a) The following identities hold true:
I(M) = {f : S → L(Rd ) | f is measurable and H(f, δ) < ∞ for all δ > 0}
= {f : S → L(Rd ) | f is measurable and H(f, δ) < ∞ for some δ > 0}.
(b) $\|f\|_M := \inf\{\delta > 0 : H(f, \delta) \le 1\}$ defines a quasi-norm on $I(M)$. Moreover, for every $f \in I(M)$ and $\delta > 0$, we have
(2.19) $\Big( \frac{\|f\|_M}{\delta} \Big)^\alpha \wedge \Big( \frac{\|f\|_M}{\delta} \Big)^2 \le H(f, \delta) \le \Big( \frac{\|f\|_M}{\delta} \Big)^\alpha \vee \Big( \frac{\|f\|_M}{\delta} \Big)^2$.
(c) There exists some $L_1 \ge 1$ such that, for every $f \in I(M)$ and $\delta > 0$, we have
(2.20) $P(\|I(f)\| \ge \delta) \le L_1 H(f, \delta)$.
Additionally, if $0 < p < \alpha$, there exists a constant $L_2 = L_2(p) > 0$ such that
(2.21) $E(\|I(f)\|^p) \le L_2 \|f\|_M^p$.
(d) The vector space $I(M)$ is complete with respect to $\|\cdot\|_M$ and, for $f, f_1, \ldots \in I(M)$, we have the characterization
$$I(f_n) \to I(f) \text{ in probability} \quad \Leftrightarrow \quad \|f_n - f\|_M \to 0.$$
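Since $H(f, \cdot)$ is decreasing, the quasi-norm in part (b) can be computed by bisection once $H(f, \delta)$ is available. The sketch below does this in the exponentially tempered setting of Example 2.7, for a uniform probability measure $\sigma$ on $S^{d-1}$ and the toy integrand $f = \mathbf{1}_{[0,1]} \cdot I$, where $H(f, \delta) = (\lambda/\delta)^\alpha \wedge (\lambda/\delta)^2$ in closed form and hence $\|f\|_M = \lambda$; all concrete choices are illustrative assumptions on our part.

```python
# Bisection sketch of ||f||_M = inf{delta > 0 : H(f, delta) <= 1} from
# Theorem 2.9 (b). Exponentially tempered setting, sigma uniform (assumption),
# f(s) = 1_{[0,1]}(s) * I, so H(f, delta) = min((lam/delta)^alpha, (lam/delta)^2).
alpha, lam = 1.5, 0.7

def H(delta):
    z = lam / delta
    return min(z**alpha, z**2)

def quasi_norm(lo=1e-8, hi=1e8, iters=200):
    # H(f, .) is decreasing in delta, so bisect on the level set {H <= 1};
    # geometric midpoints handle the multiplicative scale of (0, inf).
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        lo, hi = (mid, hi) if H(mid) > 1 else (lo, mid)
    return hi

# in this toy case H(f, lam) = 1, so the quasi-norm equals lam exactly
assert abs(quasi_norm() - lam) < 1e-6
```

The same bisection applies whenever $H(f, \delta)$ can be evaluated, e.g. by the Monte Carlo approximation of $h$ from Example 2.7.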
Proof of Theorem 2.9. Note that $H(f) = H(f, 1)$ and verify that part (a) follows from Lemma 2.4 after a verification of
(2.22) $\forall \delta > 0 : \quad (\delta^{-\alpha} \wedge \delta^{-2}) H(f, 1) \le H(f, \delta) = H(\delta^{-1} f, 1) \le (\delta^{-\alpha} \vee \delta^{-2}) H(f, 1)$.
We now prove part (b). Obviously, $f = 0$ implies that $H(f, \delta) = 0$ for all $\delta > 0$, which yields $\|f\|_M = 0$. Conversely, if $\|f\|_M = 0$, we derive from (2.22) that $H(f, 1) = 0$, i.e. $h(f(s)) = 0$ for almost every $s \in S$. In view of Lemma 2.8 we obtain for those $s$ that $f(s) = 0$. Also note that we have $H(\rho f, \delta) = H(f, |\rho|^{-1} \delta)$ for all $\rho \ne 0$ and $\delta > 0$. Hence, the homogeneity property $\|\rho f\|_M = |\rho|\, \|f\|_M$ ($\rho \in \mathbb{R}$) holds true by definition of $\|\cdot\|_M$. It remains to prove (2.18). For this purpose recall that $\|u_1 + u_2\|^2 \le 2(\|u_1\|^2 + \|u_2\|^2)$ ($u_1, u_2 \in \mathbb{R}^d$) and that the mapping $[0, \infty) \ni z \mapsto (1 \wedge z)$ is sub-additive. Using (2.16) we compute for all $\delta > 0$ and $f_1, f_2 \in I(M)$ that
$$H(f_1 + f_2, \delta) \le C_1^{-1} J_2(\delta^{-1}(f_1 + f_2)) \le 2 C_1^{-1} \big( J_2(\delta^{-1} f_1) + J_2(\delta^{-1} f_2) \big) \le (2 C_2 / C_1) \big( H(f_1, \delta) + H(f_2, \delta) \big).$$
Now choose $A \ge 1$ such that $H(f, A\delta) \le (4 C_2 / C_1)^{-1} H(f, \delta)$ holds true for every $\delta > 0$ and $f$ as before, which is possible due to (2.22). Since $H(f, \cdot)$ is decreasing, it follows for arbitrary $\varepsilon > 0$ that
$$H(f_1 + f_2, A(\|f_1\|_M + \|f_2\|_M + \varepsilon)) \le (2 C_2 / C_1) \big( H(f_1, A(\|f_1\|_M + \|f_2\|_M + \varepsilon)) + H(f_2, A(\|f_1\|_M + \|f_2\|_M + \varepsilon)) \big)$$
$$\le (2 C_2 / C_1) \big( H(f_1, A(\|f_1\|_M + \varepsilon)) + H(f_2, A(\|f_2\|_M + \varepsilon)) \big) \le 2^{-1} \big( H(f_1, \|f_1\|_M + \varepsilon) + H(f_2, \|f_2\|_M + \varepsilon) \big) \le 1,$$
where in the last step we used the definition of $\|\cdot\|_M$. Letting $\varepsilon \to 0$, the same argument shows (2.18) and hence that $\|\cdot\|_M$ is a quasi-norm. Finally, (2.19) is obvious in the case $\|f\|_M = 0$. Otherwise we observe that $H(f, \|f\|_M) = 1$ by continuity of $\delta \mapsto H(f, \delta)$. Then,
similarly as before, we can write $H(f, \delta) = H\big( (\delta/\|f\|_M)^{-1} f, \|f\|_M \big)$ such that (2.19) follows from (2.22).
For the proof of part (c) recall that $|1 - \exp(z)| \le |z|$ for any $z \in \mathbb{C}$ with $\operatorname{Re} z \le 0$ and that the LCF of $I(f)$ is given by (2.14) for $k = 1$. Combine this with Lemma 2.7 in [10] to obtain a constant $C > 0$ such that, for every $\delta > 0$, the relation
(2.23) $P(\|I(f)\| \ge \delta) \le C \delta^d \int_{\|u\|_\infty \le \delta^{-1}} \int_S |\psi(f(s)^* u)|\, \nu(ds)\, du$
holds true. Here $\|\cdot\|_\infty$ denotes the maximum norm on $\mathbb{R}^d$. At the same time there exists some $C_0 \ge 1$ fulfilling $\|\cdot\| \le C_0 \|\cdot\|_\infty$. Also recall (2.4) and, for every $z \in \mathbb{R}$, that $|\cos(z) - 1| \le 2(1 \wedge z^2)$. From the Cauchy-Schwarz inequality it follows for every $\|u\|_\infty \le \delta^{-1}$ that
(2.24) $\int_S |\psi(f(s)^* u)|\, \nu(ds) \le \int_S \int_{\mathbb{R}^d} |\cos\langle f(s)^* u, x\rangle - 1|\, \phi(dx)\, \nu(ds) \le 2 \int_S \int_{\mathbb{R}^d} (1 \wedge \|u\|^2 \|f(s)x\|^2)\, \phi(dx)\, \nu(ds) \le 2 C_0^2 \int_S \int_{\mathbb{R}^d} (1 \wedge \|\delta^{-1} f(s)x\|^2)\, \phi(dx)\, \nu(ds) = 2 C_0^2\, J_2(\delta^{-1} f)$.
Let $L_1 := 2 C_0^2\, C\, C_2$ to verify that (2.16) together with (2.23) implies (2.20). For the second statement of part (c) fix $0 < p < \alpha$. Then it is well-known that
$$E(\|I(f)\|^p) = p \int_0^\infty \delta^{p-1}\, P(\|I(f)\| \ge \delta)\, d\delta,$$
where (2.19)-(2.20) provide the inequality
$$P(\|I(f)\| \ge \delta) \le \mathbf{1}_{[0, \|f\|_M]}(\delta) + L_1 (\|f\|_M / \delta)^\alpha\, \mathbf{1}_{(\|f\|_M, \infty)}(\delta), \quad \delta > 0.$$
Now, as in the proof of Theorem 2.8 in [10], we observe that (2.21) holds true for a constant $L_2 > 0$. The details are left to the reader. In order to show that $I(M)$ is complete, let $(f_n) \subset I(M)$ be a Cauchy sequence with respect to $\|\cdot\|_M$. Then we have to find some $f \in I(M)$ such that $\|f_n - f\|_M \to 0$ as $n \to \infty$, or at least along a suitable subsequence, since $(f_n)$ is Cauchy and since $\|\cdot\|_M$ fulfills (2.18). Hence, without loss of generality and throughout following the idea of the proof of Theorem 5.2.1 in [7], it can be assumed that
(2.25) $\|f_m - f_n\|_M \le 2^{-N}$ for all $m, n \ge N$.
Recall the definition of $H(\cdot, 1)$ and define the sets $A_j := \{s \in S : h(f_{j+1}(s) - f_j(s)) > j^{-4}\}$ for all $j \in \mathbb{N}$. Using (2.19) and (2.25), it follows that
$$j^{-4}\, \nu(A_j) \le H(f_{j+1} - f_j, 1) \le 2^{-\alpha j}, \quad j \in \mathbb{N},$$
which implies that $\sum_{j=1}^\infty \nu(A_j) < \infty$. Thus $B := \limsup_{j \to \infty} A_j$ is a $\nu$-null set. Let $f(s) := 0$ for every $s \in B$. Conversely, for $s \in B^c = S \setminus B$, there exists some $N(s) \in \mathbb{N}$ fulfilling $s \notin A_j$ for all $j \ge N(s)$. Furthermore, Lemma 2.8 allows us to assume that $N(s)$ is chosen large
enough such that we have $\|f_{j+1}(s) - f_j(s)\| \le 1$ for every $j \ge N(s)$. Actually, if we use Lemma 2.8 again, this implies for all $s \in B^c$ and $m, n \ge N(s)$ that
$$\|f_m(s) - f_n(s)\| \le \sum_{j=N(s)}^\infty \sqrt{\|f_{j+1}(s) - f_j(s)\|^2} = \sum_{j=N(s)}^\infty \sqrt{\|f_{j+1}(s) - f_j(s)\|^\alpha \wedge \|f_{j+1}(s) - f_j(s)\|^2} \le \sqrt{K} \sum_{j=N(s)}^\infty \sqrt{h(f_{j+1}(s) - f_j(s))} \le \sqrt{K} \sum_{j=N(s)}^\infty j^{-2}.$$
It follows that $\|f_m(s) - f_n(s)\| \to 0$ as $m \wedge n \to \infty$, which shows that $(f_n(s))$ is Cauchy with respect to the operator norm $\|\cdot\|$. Denote the corresponding limit by $f(s)$ and observe that $f : S \to L(\mathbb{R}^d)$ is measurable with $f_n(s) \to f(s)$ $\nu$-almost everywhere as $n \to \infty$. Moreover, (2.18) and (2.25) imply for every $n \in \mathbb{N}$ that $\|f_n\|_M \le A(\|f_1\|_M + 2^{-1})$. Also note that $D \mapsto h(D)$ is continuous by the dominated convergence theorem. Hence, Fatou's Lemma and (2.19) yield that
$$H(f, 1) = \int_S \liminf_{n \to \infty} h(f_n(s))\, \nu(ds) \le \liminf_{n \to \infty} H(f_n, 1) \le \liminf_{n \to \infty} (\|f_n\|_M^\alpha \vee \|f_n\|_M^2) < \infty,$$
i.e. $f$ belongs to $I(M)$ due to part (a). Similarly, it follows from (2.25) for every $n \in \mathbb{N}$ that
$$H(f_n - f, 1) \le \liminf_{m \to \infty} (\|f_n - f_m\|_M^\alpha \vee \|f_n - f_m\|_M^2) \le 2^{-\alpha n}.$$
In view of (2.19) this shows that $\|f_n - f\|_M \to 0$ as $n \to \infty$. The additional statement of part (d) is just a consequence of (2.17) and (2.19).
Remark 2.10. Since I(f ) has a T αS distribution it is well-known that E(kI(f )kp ) is finite
for all 0 < p < α. However, this can also be true for p ≥ α, depending on the behavior of
Rf (see Proposition 2.7 in [21]). But in this case we would need sharper estimates in order
to perform an appropriate proof of (2.21).
We finish this section with a useful observation, which, for D = f (s), we implicitly encountered within the foregoing proofs. More precisely, recall (2.2) and (2.24) together with the
proofs of Lemma 2.4 and Corollary 2.6 (particularly (2.15)) as well as Example 2.7.
Corollary 2.11. In the situation of Example 2.7 we denote the LCF of µλ by ψλ . Then
there exists a constant T > 0 such that
$$|\psi_\lambda(D^* u)| \le T (1 + \|u\|^2)(\|D\|^\alpha \wedge \|D\|^2)$$
holds true for every D ∈ L(Rd ) and u ∈ Rd .
3. Examples of multivariate tempered stable random fields:
Moving-average and harmonizable representation
Throughout this section fix λ > 0 and, for the rest of this paper, we explicitly consider the
n-dimensional Lebesgue measure ν(ds) = ds on (S, Σ) = (Rn , B(Rn )). Then, until further
notice (in Section 3.2), let µλ and Mλ be as in Example 2.7, where σ is a finite, symmetric
measure on S d−1 . In addition, we assume that σ is M-full, since this ensures that the
generator µλ is full.
Example 3.1. Consider the case n = 1 and let X(t) := Mλ ([0, t]) for every t ≥ 0. If
we recall (RM1 )-(RM2 ) it is easy to see that the resulting Rd -valued stochastic process
X = {X(t) : t ≥ 0} is a Lévy process. And since X(1) has a T αS distribution it follows that
X is a T αS Lévy process in the sense of [21].
Note that there are interesting results in [21] concerning the so-called short time and long
time behavior of T αS Lévy processes, respectively. However, in what follows, we treat the
general case n ≥ 1. More precisely and in order to take advantage of our previous efforts, we
will investigate Rd -valued random fields of the form X = {X(t) : t ∈ Rn }, where X(t) = I(ft )
for every t ∈ Rn and a suitable integrand (or kernel function) ft ∈ I(Mλ ). In this context,
note that we benefit from Proposition 2.5 in the following sense, where 0 < α < 2 is still
fixed.
Definition 3.2. A random field X = {X(t) : t ∈ Rn } is called tempered (α-)stable (abbreviated as a T αS random field ) if all finite-dimensional distributions are T αS.
3.1. Moving-average representation. Our multivariate approach needs some preparation. Fix some matrix $E \in Q(\mathbb{R}^n)$, where
$$Q(\mathbb{R}^n) := \{E \in L(\mathbb{R}^n) : \text{all real parts of the eigenvalues of } E \text{ are strictly positive}\}.$$
Recall that the matrix exponential of $E$ is given by
$$\exp(E) = \sum_{j=0}^\infty \frac{E^j}{j!} \quad \text{with} \quad c^E := \exp((\log c) E), \quad c > 0.$$
For instance, see Proposition 2.2.2 in [17] for basic properties of the matrix exponential that we will use in the sequel. This allows us to define so-called generalized polar coordinates with respect to $E$. More precisely, Lemma 6.1.5 in [17] states that there exists a norm $\|\cdot\|_E$ on $\mathbb{R}^n$ such that for $S_E^{n-1} := \{x \in \mathbb{R}^n : \|x\|_E = 1\}$ the mapping $\Psi : (0, \infty) \times S_E^{n-1} \to \mathbb{R}^n \setminus \{0\}$, $\Psi(c, \theta) = c^E \theta$ is a homeomorphism. In other words, every $x \ne 0$ can be uniquely written as $x = \tau(x)^E\, l(x)$ for some radial part $\tau(x) > 0$ and some direction $l(x) \in S_E^{n-1}$. In this context, a function $\varphi : \mathbb{R}^n \to \mathbb{C}$ is called $E$-homogeneous if $\varphi(c^E x) = c\, \varphi(x)$ for all $c > 0$ and $x \ne 0$. On the other hand, letting $\beta > 0$, the authors in [4] say that a function $\varphi : \mathbb{R}^n \to [0, \infty)$ is $(\beta, E)$-admissible if the following conditions hold true:
(i) $\varphi(x) > 0$ for all $x \ne 0$.
(ii) $\varphi$ is continuous and for any $0 < A < B$ there exists a constant $C > 0$ such that, for $A \le \|y\| \le B$,
(3.1) $\tau(x) \le 1 \quad \Rightarrow \quad |\varphi(x + y) - \varphi(y)| \le C \tau(x)^\beta$.
Here, $\tau(x) = \tau_E(x)$ is the radial part of $x$ in terms of the generalized polar coordinates with respect to $E$.
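The generalized polar coordinates $(\tau(x), l(x))$ can be computed numerically: since $c \mapsto \|c^{-E} x\|_E$ is strictly decreasing, $\tau(x)$ is the unique root of $\|c^{-E} x\|_E = 1$. The sketch below makes the simplifying assumption (ours, not the paper's) that $E$ is diagonal with strictly positive diagonal, in which case the Euclidean norm already has this monotonicity property and may play the role of $\|\cdot\|_E$.

```python
# Generalized polar coordinates x = tau(x)^E l(x), sketched for diagonal E
# with positive diagonal; then c -> ||c^{-E} x|| (Euclidean) is strictly
# decreasing, so tau(x) can be found by bisection. Illustrative sketch only.
import math

E = [0.5, 1.5]                      # eigenvalues of E = diag(0.5, 1.5)

def c_pow_E(c, x):                  # c^E x = exp((log c)E) x, diagonal case
    return [c**a * xi for a, xi in zip(E, x)]

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

def polar(x, lo=1e-12, hi=1e12, iters=200):
    # bisection for tau(x): the unique c > 0 with ||c^{-E} x|| = 1
    for _ in range(iters):
        mid = math.sqrt(lo * hi)    # geometric midpoint on (0, inf)
        lo, hi = (mid, hi) if norm(c_pow_E(1/mid, x)) > 1 else (lo, mid)
    tau = hi
    return tau, c_pow_E(1/tau, x)   # radial part and direction on S_E^{n-1}

# round trip: x = c^E theta should recover tau(x) = c and l(x) = theta
theta = [1/math.sqrt(2), 1/math.sqrt(2)]
tau, l = polar(c_pow_E(7.0, theta))
assert abs(tau - 7.0) < 1e-6
assert all(abs(a - b) < 1e-6 for a, b in zip(l, theta))
```

For a general $E \in Q(\mathbb{R}^n)$ one would replace the Euclidean norm by the norm from Lemma 6.1.5 in [17] and evaluate $c^{-E}$ via a matrix exponential routine.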
Now we are able to state the first main result of this section, where the two last-mentioned
properties are actually combined. See Theorem 2.11 and Corollary 2.12 in [4] for useful
examples of such functions. Also recall that I = Id is the identity operator on Rd .
Theorem 3.3. Let $E$ be as before and denote its trace by $q$. Assume that $\varphi : \mathbb{R}^n \to [0, \infty)$ is an $E$-homogeneous, $(\beta, E)$-admissible function for some $\beta > 0$. Moreover, consider a $d \times d$-matrix $D \in Q(\mathbb{R}^d)$ such that the maximum of the real parts of the eigenvalues of $D$, denoted by $H$, fulfills
(3.2) $H < \beta + q \Big( \frac{1}{\alpha} - \frac{1}{2} \Big)$.
Then the random field $X = \{X(t) : t \in \mathbb{R}^n\}$, defined by
$$X(t) := \int_{\mathbb{R}^n} \Big[ \varphi(t - s)^{D - \frac{q}{\alpha} I} - \varphi(-s)^{D - \frac{q}{\alpha} I} \Big]\, M_\lambda(ds),$$
exists. Furthermore:
(a) $X$ is a T αS random field in the sense of Definition 3.2.
(b) $X$ is stochastically continuous.
(c) $X$ has stationary increments. That is, for any $h \in \mathbb{R}^n$,
$$\{X(t + h) - X(h) : t \in \mathbb{R}^n\} \overset{fdd}{=} \{X(t) : t \in \mathbb{R}^n\},$$
where $\overset{fdd}{=}$ denotes equality of all finite-dimensional distributions.
(d) If $\frac{q}{\alpha}$ is not an eigenvalue of $D$, then $X(t)$ is full for every $t \ne 0$.
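The kernel inside the stochastic integral involves matrix powers $\varphi(\cdot)^{D - \frac{q}{\alpha} I}$, which can be evaluated via the matrix exponential, $c^M = \exp((\log c) M)$. The following sketch evaluates the moving-average kernel with purely illustrative choices of $E = I_n$ (so $q = n$), $\varphi(x) = \|x\|$, $D$ and $\alpha$; these are our own assumptions, and condition (3.2) is not checked here.

```python
# Sketch of the moving-average kernel
#   f_t(s) = phi(t-s)^{D-(q/alpha)I} - phi(-s)^{D-(q/alpha)I}
# from Theorem 3.3, for E = I_n, phi(x) = ||x|| (E-homogeneous). Illustrative.
import numpy as np
from scipy.linalg import expm

n, d, alpha = 1, 2, 1.3
q = n                                   # trace of E = I_n
D = np.array([[0.4, 0.1], [0.0, 0.5]])  # eigenvalues 0.4, 0.5, so H = 0.5
A = D - (q / alpha) * np.eye(d)

def phi(x):
    return np.linalg.norm(x)

def mat_pow(c, M):                      # c^M = exp((log c) M) for c > 0
    return expm(np.log(c) * M)

def f_t(t, s):
    return mat_pow(phi(t - s), A) - mat_pow(phi(-s), A)

# sanity: f_0 vanishes identically, and c^M c^{-M} = I
s = np.array([2.0])
assert np.allclose(f_t(np.zeros(n), s), 0.0)
assert np.allclose(mat_pow(3.0, A) @ mat_pow(1/3.0, A), np.eye(d))
```

Combined with a discretization of $M_\lambda$, such kernel evaluations are the building block for simulating the field $X$.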
Proof. Note that $X(0) = 0$ a.s. Then fix $t \in \mathbb{R}^n \setminus \{0\}$ and observe that $\varphi(x) = 0 \Leftrightarrow x = 0$. It follows that
(3.3) $f_t(s) := \varphi(t - s)^{D - \frac{q}{\alpha} I} - \varphi(-s)^{D - \frac{q}{\alpha} I}, \quad s \in \mathbb{R}^n$
is well-defined for every $s \notin \{0, t\}$. However, the cases $s \in \{0, t\}$ are negligible since, up to a constant, the $n$-dimensional Lebesgue measure equals the control measure of $M_\lambda$ in the present case. Then it suffices to verify that $(\|f_t(s)\|^\alpha \wedge \|f_t(s)\|^2) \in L^1(ds)$ due to Example 2.7. For this purpose denote the generalized polar coordinates with respect to $E$ by $(\tau(\cdot), l(\cdot))$ and let $\tau(0) = 0$. Recall the proof of Theorem 2.5 in [14] and that, if $H < \beta$ holds true instead of (3.2), it has been shown there for some suitably chosen $\gamma > 0$ that
$$\|f_t(s)\|^\alpha = \|f_t(s)\|^\alpha\, \mathbf{1}_{\{\tau(s) \le \gamma\}} + \|f_t(s)\|^\alpha\, \mathbf{1}_{\{\tau(s) > \gamma\}} \in L^1(ds).$$
Actually, for the verification of $\|f_t(s)\|^\alpha\, \mathbf{1}_{\{\tau(s) \le \gamma\}} \in L^1(ds)$ the quoted proof only makes use of the assumptions on $\varphi$ together with $D \in Q(\mathbb{R}^d)$. Conversely, it uses $H < \beta$ in order to establish that $\|f_t(s)\|^\alpha\, \mathbf{1}_{\{\tau(s) > \gamma\}} \in L^1(ds)$. Now it is easy to verify that condition (3.2) allows us to slightly modify the corresponding estimates from the proof of Theorem 2.5 in [14] and hence to obtain that $\|f_t(s)\|^2\, \mathbf{1}_{\{\tau(s) > \gamma\}} \in L^1(ds)$ holds true. The details are left to the reader. Anyway, in view of
(3.4) $(\|f_t(s)\|^\alpha \wedge \|f_t(s)\|^2) \le \|f_t(s)\|^\alpha\, \mathbf{1}_{\{\tau(s) \le \gamma\}} + \|f_t(s)\|^2\, \mathbf{1}_{\{\tau(s) > \gamma\}}, \quad s \in \mathbb{R}^n$,
this proves the existence of X. Then part (a) follows from Proposition 2.5. Now let us prove
part (b). For this purpose fix t ∈ Rn and observe that, due to (2.17), we have to show
that H(ft+h − ft ) → 0 as h → 0. Hence, if we recall Example 2.7 together with (2.15), the
claimed stochastic continuity would follow if
Z
kft+h (s) − ft (s)kα ∧ kft+h (s) − ft (s)k2 ds → 0
Rn
holds true as h → 0. Thus by (3.3), a change of variables and (3.4), it suffices to prove that
Z
(3.5)
kfh (s)kα 1{τ (s)≤γ} + kfh (s)k2 1{τ (s)>γ} ds → 0 (h → 0),
Rn
actually leading back to the above-mentioned modification and its corresponding estimates.
We omit the details, which, using the dominated convergence theorem, can be found in the
proof of Theorem 2.5 in [14] again and which eventually give (3.5). Moreover, by linearity of
the stochastic integral and (2.14), the proof of part (c) merely reduces to the foregoing change
of variables. Finally, fix t ≠ 0 and recall from Remark 2.1 in [14] that f_t(s) is invertible for s in a subset of Rn with positive Lebesgue measure provided that q/α is not an eigenvalue of
D. Then the assertion of part (d) follows from Proposition 2.6 (a) in [12].
Remark 3.4. Assume that (3.2) is fulfilled with, in particular, H < β. Then the foregoing proof showed that the function $\|f_t(s)\|^\alpha$ from (3.3) belongs to L¹(ds). In this case Corollary 2.6 (a) implies that X(t) even has a proper TαS distribution.
3.2. Harmonizable representation. Although Theorem 3.3 already provides a large class of multivariate tempered stable random fields, we want to present a further class of this type in the sequel. The resulting random fields use a so-called harmonizable representation, which is also popular in the context of classical α-stable settings, that is, without the use of tempering; see e.g. [4, 14, 22]. It is also worth mentioning that, by investigating the scalar-valued α-stable moving-average and harmonizable representations from [4], Proposition 6.1 in [3] already established different path properties between both
representations. Note that Remark 3.6 as well as Corollary 3.7 below will emphasize these differences.
In what follows we have to use kernel functions taking values in L(Cd ), that is the space
of d × d-matrices with entries from the complex numbers C. Moreover, this also requires
the use of a complex-valued tempered stable ISRM. Essentially, we do so by modifying the
underlying generator. More precisely, let σ̃ be a symmetric and finite as well as M-full measure on S 2d−1 = {x ∈ R2d : kxk = 1}. For λ > 0 as before consider the tempering function
q(r, θ) = qλ(r) = e^{−λr} again, but now based on the polar coordinates in R2d. According to (2.1) this allows us to define a Lévy measure φ̃λ on R2d. Let µ̃λ ∼ [0, 0, φ̃λ] with LCF ψ̃λ be
the generator of an R2d-valued ISRM in the sense of Theorem 2.2 (still with ν(ds) = ds on (S, Σ) = (Rn, B(Rn))), denoted by M̃λ. Note that M̃λ can be naturally identified with a Cd-valued ISRM, which we denote by MCλ hereinafter and which, similarly as before, can be used as an integrator in a corresponding
integration theory. We refer the reader to [11] for details. In any case, Proposition 4.11 in
[11] states for a measurable function g : Rn → L(Cd ) that the Rd -valued random vector
(3.6)   $\mathrm{Re}\int_{\mathbb{R}^n} g(s)\, M^{\mathbb{C}}_\lambda(ds)$

is well-defined (as a stochastic limit) if and only if

(3.7)   $\tilde{g} := \begin{pmatrix} \mathrm{Re}\, g & -\mathrm{Im}\, g \\ 0 & 0 \end{pmatrix} \in I(\widetilde{M}_\lambda),$
where g̃ ∈ L(R2d ). Then we can state the second main result of this section.
Theorem 3.5. Let E ∈ Q(Rn), denote its trace by q again and the smallest real part of its eigenvalues by a. Assume ϕ : Rn → [0, ∞) to be a continuous and E∗-homogeneous function
fulfilling ϕ(x) > 0 for all x ∈ Rn \ {0}. Also consider D ∈ L(Rd ), where h and H denote the
minimum and the maximum of the real parts of the eigenvalues of D, respectively. Moreover,
let the assumption
(3.8)   $q\Big(\frac{1}{2} - \frac{1}{\alpha}\Big) < h \le H < a$
hold true. Then the random field Y = {Y (t) : t ∈ Rn }, defined by
(3.9)   $Y(t) := \mathrm{Re}\int_{\mathbb{R}^n} \big(e^{i\langle t,s\rangle} - 1\big)\, \varphi(s)^{-D-\frac{q}{\alpha}I}\, M^{\mathbb{C}}_\lambda(ds),$
exists in the sense of (3.6). Furthermore:
(a) Y is tempered stable.
(b) Y is stochastically continuous.
(c) Y(t) is full for every t ≠ 0.
Proof. Denote the generalized polar coordinates with respect to $E^t$ by (τ₁(·), l₁(·)) and observe that Lemma 4.2 in [14], although stated for D ∈ Q(Rd), remains true for D ∈ L(Rd)
due to (2.2) in [12]. Then, using the assumption H < a, it follows exactly as in the proof of Theorem 2.6 in [14] that $\|s\|^\alpha \|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^\alpha 1_{\{\tau_1(s)<1\}} \in L^1(ds)$. Conversely, assuming that h > 0 (which is nothing else than D ∈ Q(Rd)), the aforementioned proof in [14] showed that $\|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^\alpha 1_{\{\tau_1(s)\ge 1\}} \in L^1(ds)$. Actually, by a slight modification it turns out that we just need the assumption $h > q(\frac{1}{2} - \frac{1}{\alpha})$ from (3.8) in order to establish that $\|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^2 1_{\{\tau_1(s)\ge 1\}} \in L^1(ds)$ instead. If we combine our findings it follows that the function

(3.10)   $\Psi(s) := \|s\|^\alpha \|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^\alpha 1_{\{\tau_1(s)<1\}} + \|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^2 1_{\{\tau_1(s)\ge 1\}}, \quad s \in \mathbb{R}^n,$
belongs to L1 (ds), where the case s = 0 is negligible again. Now fix t ∈ Rn . Then, in view
of (3.7) and Euler's formula, we define

(3.11)   $\tilde{g}_t(s) := \begin{pmatrix} (\cos\langle t,s\rangle - 1)\, \varphi(s)^{-D-\frac{q}{\alpha}I} & -\sin\langle t,s\rangle\, \varphi(s)^{-D-\frac{q}{\alpha}I} \\ 0 & 0 \end{pmatrix}, \quad s \in \mathbb{R}^n,$

and observe by equivalence of all norms on L(R2d) that there exists some K₀ > 0 fulfilling

(3.12)   $\|\tilde{g}_t(s)\| \le K_0 \big( |\cos\langle t,s\rangle - 1| + |\sin\langle t,s\rangle| \big) \|\varphi(s)^{-D-\frac{q}{\alpha}I}\|, \quad s \in \mathbb{R}^n.$
Recall the idea from (3.4) and the inequality $(a+b)^\alpha \le 2(a^\alpha + b^\alpha)$. Then we obtain another constant K₁ > 0 such that, for every s ∈ Rn, $\|\tilde{g}_t(s)\|^\alpha \wedge \|\tilde{g}_t(s)\|^2$ is bounded by

$K_1 \big( |\cos\langle t,s\rangle - 1|^\alpha + |\sin\langle t,s\rangle|^\alpha \big) \|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^\alpha 1_{\{\tau_1(s)<1\}} + \|\varphi(s)^{-D-\frac{q}{\alpha}I}\|^2 1_{\{\tau_1(s)\ge 1\}}.$

Finally, using the Cauchy-Schwarz inequality together with some routine estimates for sin(·) and cos(·), we verify that

(3.13)   $\|\tilde{g}_t(s)\|^\alpha \wedge \|\tilde{g}_t(s)\|^2 \le K_2 (1 + \|t\|^\alpha)\, \Psi(s), \quad s \in \mathbb{R}^n$
holds true for some constant K2 > 0 and due to (3.10). In view of (3.7) and Example 2.7
(which remains true accordingly) this proves the existence of Y. We now prove part (a) and
consider k ∈ N as well as t1 , ..., tk ∈ Rn arbitrary. It follows from Remark 4.13 in [11] that
the random vector (Y (t1 ), ..., Y (tk ))t is infinitely-divisible and that its LCF is given by
(3.14)   $u \mapsto \int_{\mathbb{R}^n} \tilde{\psi}_\lambda\bigg( \Big( \sum_{j=1}^{k} (\cos\langle t_j, s\rangle - 1)\, \varphi(s)^{-D^*-\frac{q}{\alpha}I} u_j, \; -\sum_{j=1}^{k} \sin\langle t_j, s\rangle\, \varphi(s)^{-D^*-\frac{q}{\alpha}I} u_j \Big)^{t} \bigg)\, ds,$
where u = (u1, ..., uk)^t ∈ R^{k·d}. Recall the specific form of g̃_t from (3.11) and observe that (Y(t1), ..., Y(tk))^t can be regarded as a projection of the R^{k·2d}-valued random vector (Z(t1), ..., Z(tk))^t, where $Z(t) := \int \tilde{g}_t \, d\widetilde{M}_\lambda$ for every t ∈ Rn. More precisely, for every w = (w1, ..., wk)^t ∈ R^{k·2d} with w_j = (u_j, v_j)^t (for j = 1, ..., k) we can rewrite (3.14) as

$\int_{\mathbb{R}^n} \tilde{\psi}_\lambda\Big( \sum_{j=1}^{k} \tilde{g}_{t_j}(s)^* w_j \Big)\, ds.$
In any case, (Y (t1 ), ..., Y (tk ))t inherits the T αS-property from (Z(t1 ), ..., Z(tk ))t due to
Proposition 2.5. For the proof of part (b) we fix t ∈ Rn and have to show that Y (t+h)−Y (t)
converges in distribution to zero as h → 0. For this purpose define
$A(t,s) := \begin{pmatrix} \cos\langle t,s\rangle\, I_d & \sin\langle t,s\rangle\, I_d \\ -\sin\langle t,s\rangle\, I_d & \cos\langle t,s\rangle\, I_d \end{pmatrix} \in L(\mathbb{R}^{2d})$
and recall from Proposition 4.3 in [11] that the integral in (3.6) is also linear (as a function
of g). Then, according to (3.14) and similarly as in the proof of Proposition 5.2 in [12], it can be computed that the LCF of Y(t+h) − Y(t) is given by $\mathbb{R}^d \ni u \mapsto \int_{\mathbb{R}^n} \tilde{\psi}_\lambda(\zeta(t,h,s,u))\, ds$, where

(3.15)   $\zeta(t,h,s,u) := A(t,s)^* \Big( (\cos\langle h,s\rangle - 1)\, \varphi(s)^{-D^*-\frac{q}{\alpha}I} u, \; -\sin\langle h,s\rangle\, \varphi(s)^{-D^*-\frac{q}{\alpha}I} u \Big)^{t} = (\tilde{g}_h(s) A(t,s))^* (u,v)^t$
and the last-mentioned identity holds true for every v ∈ Rd . For fixed u ∈ Rd and due to
Lévy’s continuity theorem it remains to show the convergence
(3.16)   $\int_{\mathbb{R}^n} \tilde{\psi}_\lambda(\zeta(t,h,s,u))\, ds \to 0 \quad (h \to 0).$
Observe for every s ∈ Rn that we have ψ̃λ (ζ(t, h, s, u)) → 0 by continuity of ψ̃λ . At the same
time it is obvious that Corollary 2.11 remains true accordingly for ψ̃λ . Hence, if we combine
Corollary 2.11 with (3.15) (say for v = 0), it follows that
$|\tilde{\psi}_\lambda(\zeta(t,h,s,u))| \le T(1+\|u\|^2)\big( \|\tilde{g}_h(s)A(t,s)\|^\alpha \wedge \|\tilde{g}_h(s)A(t,s)\|^2 \big) \le T(1+\|u\|^2)\big( \|\tilde{g}_h(s)\|^\alpha \wedge \|\tilde{g}_h(s)\|^2 \big).$
Note that, in the second step, we used the sub-multiplicativity of the operator norm together with the fact that ‖A(t, s)‖ = 1. Then the dominated convergence theorem gives (3.16) due
to (3.13). Finally, the argument for part (c) is mostly presented in the proof of Proposition
5.2 (c) in [12] and therefore left to the reader.
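As a small aside, the identity ‖A(t, s)‖ = 1 used in the last estimate is simply the orthogonality of the rotation-type block matrix A(t, s). The following sketch checks this numerically; the dimension d and the vectors t, s are hypothetical values chosen only for illustration:

```python
import numpy as np

def A(t, s, d):
    """Block rotation matrix A(t, s) in L(R^{2d}) from the proof above."""
    c, sn = np.cos(np.dot(t, s)), np.sin(np.dot(t, s))
    I = np.eye(d)
    return np.block([[c * I, sn * I], [-sn * I, c * I]])

d = 3
rng = np.random.default_rng(0)
t, s = rng.normal(size=2), rng.normal(size=2)  # hypothetical t, s in R^n with n = 2
M = A(t, s, d)

# A(t, s)^* A(t, s) = I_{2d}, i.e. A(t, s) is orthogonal ...
assert np.allclose(M.T @ M, np.eye(2 * d))
# ... so its operator (spectral) norm equals 1, as used in the proof.
assert abs(np.linalg.norm(M, 2) - 1.0) < 1e-12
```

In particular ‖A(t, s)w‖ = ‖w‖ for every w ∈ R^{2d}, which is exactly what the second step of the estimate requires.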
Remark 3.6. Besides the fact that we demand ϕ to be E∗-homogeneous, we can drop condition (3.1) compared to Theorem 3.3. In addition, we get without further assumptions that Y(t) is full for every t ≠ 0. Let us also emphasize that we did not require that D belongs to Q(Rd), which will be explained in more detail in Section 4.
The following observation indicates another difference between both representations.
Corollary 3.7. Let the assumptions of Theorem 3.5 be fulfilled. Additionally, assume that
the measure σ̃ is invariant under all operators A ∈ T (2d), i.e. A(σ̃) = σ̃, where
$T(2d) := \left\{ \begin{pmatrix} (\cos\beta)\, I_d & (\sin\beta)\, I_d \\ -(\sin\beta)\, I_d & (\cos\beta)\, I_d \end{pmatrix} : \beta \in [0, 2\pi) \right\}.$
Then Y has stationary increments.
Proof of Corollary 3.7. Fix t, h ∈ Rn and observe that, by swapping the roles of t and h in the proof of Theorem 3.5, the LCF of Y(t + h) − Y(h) is given by $\mathbb{R}^d \ni u \mapsto \int_{\mathbb{R}^n} \tilde{\psi}_\lambda(\zeta(h,t,s,u))\, ds$. Moreover, since q(r, θ) = qλ(r) = e^{−λr} does not depend on θ, it is easy to compute that the given assumption implies that

$\forall A \in T(2d) \ \forall w \in \mathbb{R}^{2d}: \quad \tilde{\psi}_\lambda(A^* w) = \tilde{\psi}_\lambda(w).$
Hence, since A(0, s) = I_{2d}, we observe for every u ∈ Rd that $\int \tilde{\psi}_\lambda(\zeta(h,t,s,u))\, ds$ equals $\int \tilde{\psi}_\lambda(\zeta(0,t,s,u))\, ds$, which is nothing else than the LCF of Y(t). Finally, in view of (3.14),
the fdd-extension is obvious and therefore left to the reader. This shows that Y has stationary
increments.
3.3. The special cases d = 1 and n = 1. Certainly, our multivariate approach is very
general and its notation therefore rather involved. For many applications it is often enough
to consider the cases d = 1 or n = 1 (or even d = n = 1). For those readers we briefly record some useful observations and examples:
• Recall that we denote the greatest real part of the eigenvalues of D by H, which, in case d = 1, is just a real number. Then H = h is referred to as the so-called Hurst index in the literature. Sometimes this Hurst index can help to determine possible scaling properties (see Section 4 below) as well as sample path properties of the corresponding random fields.
• Consider the situation from Example 2.7 again, particularly for d = 1 and α ≠ 1. In this case we have σ = c(ε₋₁ + ε₁) for some c > 0, where ε_x denotes the point measure in x. Then it follows from Remark 2.8 in [13] that ψλ can be computed as

$\psi_\lambda(u) = -2c\, \Gamma(-\alpha)\big[ \lambda^\alpha - \mathrm{Re}\,(\lambda + iu)^\alpha \big], \quad u \in \mathbb{R}.$
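For d = 1 this closed form makes several structural properties easy to check numerically. The following sketch (with hypothetical values of α, λ, u and ρ, chosen only for illustration) verifies that ψλ is real-valued, non-positive, and satisfies the rescaling identity ψλ(ρu) = ρ^α ψ_{λ/ρ}(u) of Lemma 4.4 below:

```python
from math import gamma

def psi(u, lam, alpha, c=1.0):
    """Closed-form LCF of the exponentially tempered stable law for d = 1."""
    return -2 * c * gamma(-alpha) * (lam**alpha - ((lam + 1j * u) ** alpha).real)

alpha, lam, u, rho = 1.5, 0.7, 2.3, 4.0
# psi_lambda is real with psi_lambda(0) = 0 and psi_lambda(u) <= 0:
assert abs(psi(0.0, lam, alpha)) < 1e-12
assert psi(u, lam, alpha) < 0
# Rescaling identity (cf. Lemma 4.4): psi_lambda(rho u) = rho^alpha psi_{lambda/rho}(u).
lhs = psi(rho * u, lam, alpha)
rhs = rho**alpha * psi(u, lam / rho, alpha)
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

Note that Python's `math.gamma` handles the negative non-integer argument −α directly, which is why the restriction α ≠ 1 (and α ∉ ℕ) matters here as well.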
• On the other hand, in case n = 1, it appears natural to consider the function ϕ(·) = |·|, which is 1-homogeneous and (1, 1)-admissible. Then for the existence of the moving-average representation X = {X(t) : t ∈ R} from Theorem 3.3 it only remains to ensure that 0 < H < 1/α + 1/2 holds true. Moreover, X(t) is full for every t ≠ 0 provided that H ≠ 1/α. Conversely, in the context of Theorem 3.5, condition (3.8) is equivalent to 1/2 − 1/α < h ≤ H < 1 in this case.
4. The role of the tempering parameter
Recall the definitions from Section 3, particularly those of Mλ, M̃λ, MCλ and the respective
generators. In either case we call λ > 0 the tempering parameter. This section will essentially
deal with the behavior of the random fields from Section 3 as λ → 0. For this purpose we
first have to define further generators and corresponding random measures that deal with
the case λ = 0 in a reasonable way.
Remark 4.1. Roughly speaking, for q ≡ 1 in (2.1) we get back the class of α-stable distributions on Rd, although this q is no longer a valid tempering function. Anyway, let µ₀ ∼ [0, 0, φ₀], where φ₀(dr, dθ) := r^{−α−1} dr σ(dθ), and note that µ₀ is symmetric again.
Similarly to Theorem 2.2 there exists an Rd -valued random measure, which is generated by
µ0 and which we denote by M0 . We call M0 a symmetric α-stable (abbreviated by SαS)
ISRM. Note that M0 corresponds to Example 2.4 in [12]. There it has been shown that a
measurable function f : S → L(Rd ) is integrable with respect to M0 (i.e. f ∈ I(M0 )) if and
only if
$\int_S \int_{S^{d-1}} \|f(s)\theta\|^\alpha\, \sigma(d\theta)\, \nu(ds) = \int_{\mathbb{R}^n} \int_{S^{d-1}} \|f(s)\theta\|^\alpha\, \sigma(d\theta)\, ds < \infty.$
In view of Lemma 2.4 and Example 2.7 it easily follows that I(M0 ) ⊂ I(Mλ ) = I(M1 ) for
every λ > 0. Finally, in a similar way as before we obtain an R2d-valued SαS random measure M̃₀ (generated by µ̃₀ with LCF ψ̃₀) and a corresponding Cd-valued one, denoted by MC0. We omit the details.
4.1. A note on different kinds of tempering. Let us emphasize that a main advantage
of our approach is based on the fact that we use T αS distributions as a generator for the
underlying random measures. By doing so we combine both the heavy-tailed behavior of
classical α-stable distributions and Gaussian trends. Hence, concerning the corresponding
stochastic integrals, the observation of Remark 4.1 above is not surprising. Namely we
enlarge the class of possible kernel functions when using T αS generators instead of α-stable
ones. Let us illustrate this effect by means of the kernel functions from Section 3.
For this purpose note that the following statement is due to Theorem 2.5 and Theorem 2.6
in [14], respectively, except for a negligible difference. That is, the authors in [14] assume σ
(and σ̃) to be uniformly distributed on S d−1 (and S 2d−1 ), at least up to a constant, i.e. the
LCF of µ0 is given by ψ0 (u) = −ρkukα for some ρ > 0.
Proposition 4.2.
(a) Assume that the conditions of Theorem 3.3 are fulfilled with H < β instead of (3.2). Then the random field X0 = {X0(t) : t ∈ Rn} exists, where

$X_0(t) := \int_{\mathbb{R}^n} \Big[ \varphi(t-s)^{D-\frac{q}{\alpha}I} - \varphi(-s)^{D-\frac{q}{\alpha}I} \Big]\, M_0(ds), \quad t \in \mathbb{R}^n.$
(b) Assume that the conditions of Theorem 3.5 are fulfilled for some D ∈ Q(Rd) (i.e. (3.8) reduces to H < a). Then the random field Y0 = {Y0(t) : t ∈ Rn} exists, where

$Y_0(t) := \mathrm{Re} \int_{\mathbb{R}^n} \big( e^{i\langle t,s\rangle} - 1 \big)\, \varphi(s)^{-D-\frac{q}{\alpha}I}\, M^{\mathbb{C}}_0(ds), \quad t \in \mathbb{R}^n.$
In both cases, X0 and Y0 are SαS random fields, respectively.
Here the notion of SαS random fields is analogous to Definition 3.2. Moreover, as expected,
the assumptions under which X0 and Y0 exist are more restrictive compared to their natural
counterparts from Section 3 and we lose the T αS property. However, the remaining properties from Theorem 3.3 and Theorem 3.5 still apply to X0 and Y0 accordingly.
Actually, the random fields from Section 3 have a true tempered stable character that arises from Proposition 2.5. Let us be more specific about what we mean by the word true: Recall the main results from [6] and note that there a different kind of tempering is considered, which extends the univariate idea from [16]. That is, the authors in [6] somehow add a tempering function to the actual kernel function, say f(·), while they abstain from the use of TαS random measures as integrators. This has two consequences. On the one hand, it follows that the behavior of f(s) is restrained for large values of s. However, on the other hand, the resulting stochastic integrals have no TαS distribution.
Also note that, in [6], the aforementioned approach has been performed both for a moving-average representation and for a harmonizable representation. We focus on the first one. Then, taking into account the different notation in [6], it is easy to verify that we obtain the following observation as a by-product of Theorem 3.1 in [6] (namely for B = (1/α)I). The details are left to the reader.
Example 4.3. Fix λ > 0 and let ϕ : Rn → [0, ∞) be an E-homogeneous function for some
E ∈ Q(Rn ), where q denotes the trace of E again. Also consider D ∈ Q(Rd ) arbitrary. Then
the random field X̂ = {X̂(t) : t ∈ Rn }, defined by
$\hat{X}(t) := \int_{\mathbb{R}^n} \Big[ e^{-\lambda\varphi(t-s)}\, \varphi(t-s)^{D-\frac{q}{\alpha}I} - e^{-\lambda\varphi(-s)}\, \varphi(-s)^{D-\frac{q}{\alpha}I} \Big]\, M_0(ds),$

exists, where M0 is as above. Moreover, X̂ is an SαS random field.
Recall from the proof of Theorem 2.5 in [14] that, in the context of Proposition 4.2 (a),
condition H < β is needed in order to control the behavior of ft (s) from (3.3) for large
values of s. Hence, as already announced before, this condition disappears when tempering in the sense of [6]. For the same reason we observe in Example 4.3 that ϕ no longer has to be admissible.
4.2. Scaling properties and tangent fields. In what follows we will show that the random
fields from Section 3 have an intimate relation to those from Proposition 4.2.
Lemma 4.4. We have that

$\forall \rho, \lambda > 0 \ \forall u \in \mathbb{R}^d: \quad \psi_\lambda(\rho u) = \rho^\alpha\, \psi_{\lambda/\rho}(u).$

Similarly, the statement holds true for ψ̃λ.
Proof. In view of µλ ∼ [0, 0, φλ], where φλ is symmetric, we obtain that

$\psi_\lambda(\rho u) = \int_{\mathbb{R}^d} (\cos\langle \rho u, x\rangle - 1)\, \phi_\lambda(dx) = \int_{\mathbb{R}^d} (\cos\langle u, x\rangle - 1)\, (\rho\phi_\lambda)(dx).$
It remains to show that $(\rho\phi_\lambda) = \rho^\alpha \cdot \phi_{\lambda/\rho}$. Fix A ∈ B(Rd). Then, due to (2.1) and by a change of variables, we compute that

$(\rho\phi_\lambda)(A) = \int_{S^{d-1}} \int_0^\infty 1_{\rho^{-1}A}(r\theta)\, r^{-\alpha-1} e^{-\lambda r}\, dr\, \sigma(d\theta) = \int_{S^{d-1}} \int_0^\infty 1_A(\rho r\theta)\, r^{-\alpha-1} e^{-\lambda r}\, dr\, \sigma(d\theta) = \rho^\alpha \int_{S^{d-1}} \int_0^\infty 1_A(r\theta)\, r^{-\alpha-1} e^{-\lambda r/\rho}\, dr\, \sigma(d\theta) = \rho^\alpha \cdot \phi_{\lambda/\rho}(A).$
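The change of variables above can also be checked numerically in the simplest case d = 1 with σ concentrated on a single direction, so that (ρφλ)(A) reduces to a one-dimensional integral. The sketch below uses hypothetical values of α, λ, ρ and a hypothetical interval A = [a, b]; the Simpson rule is only a generic quadrature device, not part of the paper:

```python
import math

alpha, lam, rho = 0.8, 1.2, 3.0
a, b = 0.5, 4.0  # hypothetical set A = [a, b] in a fixed direction theta

def dens(r, l):
    """Radial tempered intensity r^{-alpha-1} e^{-l r} from (2.1)."""
    return r ** (-alpha - 1) * math.exp(-l * r)

def simpson(f, lo, hi, n=4000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2) * f(lo + k * h) for k in range(1, n))
    return s * h / 3

# (rho phi_lambda)(A) integrates the intensity over rho^{-1} A (push-forward under x -> rho x) ...
lhs = simpson(lambda r: dens(r, lam), a / rho, b / rho)
# ... and equals rho^alpha phi_{lambda/rho}(A): the same set, tempering parameter lambda/rho.
rhs = rho**alpha * simpson(lambda r: dens(r, lam / rho), a, b)
assert abs(lhs - rhs) < 1e-7 * lhs
```

The agreement is exact up to quadrature error, mirroring the substitution r ↦ r/ρ in the proof.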
In order to be more specific we denote by Xλ = {Xλ (t) : t ∈ Rn } and Yλ = {Yλ (t) : t ∈ Rn }
the moving-average and harmonizable representation corresponding to Mλ and MCλ attained
in Section 3, respectively. Then the next result states scaling properties for Xλ and Yλ and
is therefore of independent interest.
Proposition 4.5.
(a) Under the assumptions of Theorem 3.3 we have that

(4.1)   $\forall c > 0: \quad \{X_\lambda(c^E t) : t \in \mathbb{R}^n\} \overset{fdd}{=} \{c^D X_{c^{q/\alpha}\lambda}(t) : t \in \mathbb{R}^n\}.$

(b) Conversely, under the assumptions of Theorem 3.5 we have that

(4.2)   $\forall c > 0: \quad \{Y_\lambda(c^E t) : t \in \mathbb{R}^n\} \overset{fdd}{=} \{c^D Y_{c^{-q/\alpha}\lambda}(t) : t \in \mathbb{R}^n\}.$
Proof. For the proof of part (a) recall ft (·) from (3.3) and observe for (almost) all s ∈ Rn
that, by E-homogeneity of ϕ, we have
(4.3)   $f_{c^E t}(s) = c^{D - \frac{q}{\alpha}I} f_t(c^{-E} s).$
Now fix k ∈ N and t1 , ..., tk ∈ Rn . Then, due to (2.14), (4.3), Lemma 4.4 and by a change of
variables, we compute that the LCF of (Xλ (cE t1 ), ..., Xλ (cE tk ))t is given by
(4.4)   $\mathbb{R}^{k\cdot d} \ni (u_1, ..., u_k) \mapsto \int_{\mathbb{R}^n} \psi_\lambda\Big( \sum_{j=1}^{k} f_{c^E t_j}(s)^* u_j \Big)\, ds = \int_{\mathbb{R}^n} \psi_\lambda\Big( \sum_{j=1}^{k} f_{t_j}(c^{-E}s)^* c^{-\frac{q}{\alpha}} c^{D^*} u_j \Big)\, ds$

$= c^{-q} \int_{\mathbb{R}^n} \psi_{c^{q/\alpha}\lambda}\Big( \sum_{j=1}^{k} f_{t_j}(c^{-E}s)^* c^{D^*} u_j \Big)\, ds = \int_{\mathbb{R}^n} \psi_{c^{q/\alpha}\lambda}\Big( \sum_{j=1}^{k} f_{t_j}(s)^* c^{D^*} u_j \Big)\, ds,$
which, again by (2.14), is just the LCF of $(c^D X_{c^{q/\alpha}\lambda}(t_1), ..., c^D X_{c^{q/\alpha}\lambda}(t_k))^t$. This gives (4.1).
Concerning part (b) recall the representation of Y(t) = Yλ(t) from (3.9). Using the E∗-homogeneity of ϕ (which, in general, is different from the function ϕ in part (a)) we obtain that
(4.5)   $(\cos\langle c^E t, s\rangle - 1)\, \varphi(s)^{-D^*-\frac{q}{\alpha}I} = c^{\frac{q}{\alpha}}\, (\cos\langle t, c^{E^*}s\rangle - 1)\, \varphi(c^{E^*}s)^{-D^*-\frac{q}{\alpha}I}\, c^{D^*}$

and, similarly, that

(4.6)   $\sin\langle c^E t, s\rangle\, \varphi(s)^{-D^*-\frac{q}{\alpha}I} = c^{\frac{q}{\alpha}}\, \sin\langle t, c^{E^*}s\rangle\, \varphi(c^{E^*}s)^{-D^*-\frac{q}{\alpha}I}\, c^{D^*}.$
Note that the trace of E ∗ equals q again. Hence, based on (3.14) and (4.5)-(4.6), we can
proceed as in the proof of part (a) in order to verify the accuracy of (4.2). The details are
left to the reader.
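To make the scaling mechanism concrete, consider the scalar case d = n = 1 with E = 1, D = H and ϕ = |·| (so q = 1). Then the key identity (4.3) reads f_{ct}(s) = c^{H−1/α} f_t(s/c), which the following sketch verifies numerically for hypothetical values of α, H, t, s and c:

```python
alpha, H = 1.7, 0.4  # hypothetical parameters with 0 < H < 1/alpha + 1/2
e = H - 1 / alpha

def f(t, s):
    """Scalar moving-average kernel f_t(s) = |t - s|^{H - 1/alpha} - |-s|^{H - 1/alpha}."""
    return abs(t - s) ** e - abs(s) ** e

t, s, c = 1.3, -0.8, 2.5
# Identity (4.3) with E = 1, D = H, q = 1:  f_{ct}(s) = c^{H - 1/alpha} f_t(s / c).
lhs = f(c * t, s)
rhs = c**e * f(t, s / c)
assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(lhs))
```

Combined with Lemma 4.4 this is exactly what drives the computation (4.4) and hence the scaling law (4.1).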
Remark 4.6. Observe that there is a striking difference in the scaling behavior between the
moving-average and the harmonizable representation of T αS random fields considered in
Section 3.
We need another auxiliary result, which is of independent interest again. For this purpose recall Proposition 4.2 and that the conditions mentioned there (and quoted below) always imply the existence of the associated TαS representations from Section 3. Here, $\overset{fdd}{\Longrightarrow}$ means convergence of all finite-dimensional distributions.

Lemma 4.7.
(a) Assume that the conditions of Theorem 3.3 are fulfilled with H < β instead of (3.2). Then, as λ → 0, we have that $X_\lambda \overset{fdd}{\Longrightarrow} X_0$.
(b) Assume that the conditions of Theorem 3.5 are fulfilled for some D ∈ Q(Rd). Then, as λ → 0, we have that $Y_\lambda \overset{fdd}{\Longrightarrow} Y_0$.
Proof. We only prove part (a) since the proof of part (b) is very similar and, moreover, mostly
covered by the ideas in the proof of Theorem 4.10 below. Fix k ∈ N as well as t1 , ..., tk ∈ Rn
and u1 , ..., uk ∈ Rd . Then, in view of (4.4) (for c = 1), the assertion would follow from
(4.7)   $\int_{\mathbb{R}^n} \psi_\lambda\Big( \sum_{j=1}^{k} f_{t_j}(s)^* u_j \Big)\, ds \to \int_{\mathbb{R}^n} \psi_0\Big( \sum_{j=1}^{k} f_{t_j}(s)^* u_j \Big)\, ds \quad (\lambda \to 0)$
due to Lévy’s continuity theorem. Using (2.1) and (2.4) we first observe for every λ > 0 and
u ∈ Rd that
(4.8)   $|\psi_\lambda(u)| = -\psi_\lambda(u) = \int_{S^{d-1}} \int_0^\infty (1 - \cos\langle u, r\theta\rangle)\, r^{-\alpha-1} e^{-\lambda r}\, dr\, \sigma(d\theta) \le \int_{S^{d-1}} \int_0^\infty (1 - \cos\langle u, r\theta\rangle)\, r^{-\alpha-1}\, dr\, \sigma(d\theta) = -\psi_0(u) = |\psi_0(u)|.$
In particular, the dominated convergence theorem implies for every u ∈ Rd that we have
ψλ(u) → ψ0(u) as λ → 0. At the same time it is well-known that we have $|\psi_0(u)| \le \rho\|u\|^\alpha$,
where ρ > 0 is a constant (see the proof of Example 2.4 in [12]), and that the inequality
1 − cos(a + b) ≤ 2(2 − cos(a) − cos(b)) holds true (see Proposition 1.3.4 in [17]). Combine
this with (4.8) to verify the accuracy of
(4.9)   $\Big| \psi_\lambda\Big( \sum_{j=1}^{k} f_{t_j}(s)^* u_j \Big) \Big| \le 2^{k-1} \sum_{j=1}^{k} \big| \psi_0(f_{t_j}(s)^* u_j) \big| \le 2^{k-1} \rho \sum_{j=1}^{k} \|u_j\|^\alpha\, \|f_{t_j}(s)\|^\alpha.$
In addition and under the present assumptions, Theorem 2.5 in [14] states that the function
on the right-hand side of (4.9) belongs to L1 (ds). Using the dominated convergence theorem
again this gives (4.7).
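In the case d = 1 both the domination (4.8) and the convergence ψλ → ψ0 can be observed directly from the closed form of ψλ stated in Section 3.3. Its λ → 0 limit evaluates, for the two-point σ considered there, to ψ0(u) = 2cΓ(−α)cos(πα/2)|u|^α; this standard computation is not spelled out in the text, so the formula below should be read as a derived assumption. A numerical sketch with hypothetical values of α, c and u:

```python
from math import gamma, cos, pi

alpha, c, u = 1.3, 1.0, 1.8  # hypothetical values, alpha in (0, 2), alpha != 1

def psi_lam(u, lam):
    """Tempered LCF for d = 1 (closed form from Section 3.3)."""
    return -2 * c * gamma(-alpha) * (lam**alpha - ((lam + 1j * u) ** alpha).real)

def psi_0(u):
    """Stable limit psi_0(u) = 2 c Gamma(-alpha) cos(pi alpha / 2) |u|^alpha (derived)."""
    return 2 * c * gamma(-alpha) * cos(pi * alpha / 2) * abs(u) ** alpha

# Domination as in (4.8): |psi_lambda(u)| <= |psi_0(u)| for every lambda > 0 ...
for lam in (2.0, 1.0, 0.1, 1e-3):
    assert abs(psi_lam(u, lam)) <= abs(psi_0(u)) + 1e-12
# ... and pointwise convergence psi_lambda(u) -> psi_0(u) as lambda -> 0.
assert abs(psi_lam(u, 1e-8) - psi_0(u)) < 1e-6 * abs(psi_0(u))
```

This is exactly the dominated-convergence setup used in the proof, with (4.8) providing the λ-uniform integrable bound.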
The foregoing observations can be used to go a step further, leading to the notion of so-called
tangent fields. Note that, in the univariate case, the idea goes back to [8]. The following
definition turns out to be a multivariate extension of it and is essentially due to [10].
Definition 4.8. Fix x ∈ Rn and let X = {X(t) : t ∈ Rn } as well as X′ = {X ′ (t) : t ∈ Rn }
be Rd -valued random fields. Then, for E ∈ Q(Rn ) and D ∈ Q(Rd ), we say that X is
(E, D)-localisable at x with local form/tangent field X′ if we have that
(4.10)   $\{c^{-D}(X(x + c^E t) - X(x)) : t \in \mathbb{R}^n\} \overset{fdd}{\Longrightarrow} \{X'(t) : t \in \mathbb{R}^n\} \quad (c \to 0).$
Let us emphasize that, in general, the tangent field X′ as well as the corresponding operators
E and D in (4.10) do depend on the point x at which we have localisability. However, this
is not the case in our setting as the following main result demonstrates.
Theorem 4.9. Assume that the conditions of Theorem 3.3 are fulfilled with H < β instead
of (3.2). Then we have for every λ > 0 and x ∈ Rn that the moving-average representation
Xλ is (E, D)-localisable at x with tangent field X′ = X0 (not depending on x).
Proof. Fix k ∈ N and consider t1 , ..., tk ∈ Rn . Then, using the fact that Xλ has stationary
increments together with (4.1), it is easy to check that we have
$\forall c > 0: \quad \{c^{-D}(X_\lambda(x + c^E t) - X_\lambda(x)) : t \in \mathbb{R}^n\} \overset{fdd}{=} \{X_{c^{q/\alpha}\lambda}(t) : t \in \mathbb{R}^n\}.$
Thus the assertion follows from Lemma 4.7.
Recall (4.10) and observe for all x, t ∈ Rn that x + c^E t contracts to x as c → 0, since E ∈ Q(Rn) and due to Lemma 4.2 in [14]. Hence, it is crucial to emphasize that the convergence in (4.11) below is meant as c → ∞, which is strikingly different. Accordingly, we no longer use the term tangent field hereinafter.
Theorem 4.10. Assume that the conditions of Theorem 3.5 are fulfilled for some D ∈
Q(Rd ). Then we have for every λ > 0 and x ∈ Rn that
(4.11)   $\{c^{-D}(Y_\lambda(x + c^E t) - Y_\lambda(x)) : t \in \mathbb{R}^n\} \overset{fdd}{\Longrightarrow} Y_0 \quad (c \to \infty).$
Proof. Let us recall the proof of Theorem 3.5, particularly (3.14) and (3.15). Then, for
t1 , ..., tk ∈ Rn as before, we compute that the LCF of (c−D (Yλ (x+cE tj )−Yλ(x)) : 1 ≤ j ≤ k)t
is given by
$\mathbb{R}^{k\cdot d} \ni (u_1, ..., u_k)^t \mapsto \int_{\mathbb{R}^n} \tilde{\psi}_\lambda\Big( \sum_{j=1}^{k} \zeta(x, c^E t_j, s, c^{-D^*} u_j) \Big)\, ds.$
At the same time, by definition of A(x, s) and ζ, we can use (4.5)-(4.6) together with
Lemma 4.4 to obtain that
$\int_{\mathbb{R}^n} \tilde{\psi}_\lambda\Big( \sum_{j=1}^{k} \zeta(x, c^E t_j, s, c^{-D^*} u_j) \Big)\, ds = \int_{\mathbb{R}^n} \tilde{\psi}_\lambda\Big( c^{\frac{q}{\alpha}} \sum_{j=1}^{k} \zeta(c^{-E}x, t_j, c^{E^*}s, u_j) \Big)\, ds$

$= c^q \int_{\mathbb{R}^n} \tilde{\psi}_{c^{-q/\alpha}\lambda}\Big( \sum_{j=1}^{k} \zeta(c^{-E}x, t_j, c^{E^*}s, u_j) \Big)\, ds = \int_{\mathbb{R}^n} \tilde{\psi}_{c^{-q/\alpha}\lambda}\Big( \sum_{j=1}^{k} \zeta(c^{-E}x, t_j, s, u_j) \Big)\, ds,$
where in the last step we performed a change of variables. Fix u1 , ..., uk ∈ Rd . Then, by
Lévy’s continuity theorem and in view of A(0, s) = I2d , it obviously remains to establish that
(4.12)   $\int_{\mathbb{R}^n} \tilde{\psi}_{c^{-q/\alpha}\lambda}\Big( \sum_{j=1}^{k} \zeta(c^{-E}x, t_j, s, u_j) \Big)\, ds \to \int_{\mathbb{R}^n} \tilde{\psi}_0\Big( \sum_{j=1}^{k} \zeta(0, t_j, s, u_j) \Big)\, ds \quad (c \to \infty).$
First observe for every w ∈ R2d and as λ → 0 that, as in the proof of Lemma 4.7, we have
ψ̃λ (w) → ψ̃0 (w). That is, µ̃λ converges to µ̃0 weakly due to Lévy’s continuity theorem again.
Then Lemma 3.1.10 in [17] states that the convergence ψ̃λ → ψ̃0 holds even uniformly on
compact subsets of R2d . Hence, since c−E x → 0 as c → ∞ (see Lemma 4.2 in [14]), it is easy
to check for every s ∈ Rn that we have
$\tilde{\psi}_{c^{-q/\alpha}\lambda}\Big( \sum_{j=1}^{k} \zeta(c^{-E}x, t_j, s, u_j) \Big) \to \tilde{\psi}_0\Big( \sum_{j=1}^{k} \zeta(0, t_j, s, u_j) \Big) \quad (c \to \infty).$
Use essentially the same argument as in (4.9) to verify that there exists a constant L > 0
(only depending on k) such that, for every c > 0 and s ∈ Rn, we have

(4.13)   $\Big| \tilde{\psi}_{c^{-q/\alpha}\lambda}\Big( \sum_{j=1}^{k} \zeta(c^{-E}x, t_j, s, u_j) \Big) \Big| \le L \sum_{j=1}^{k} \|\zeta(c^{-E}x, t_j, s, u_j)\|^\alpha \le L \sum_{j=1}^{k} \|A(c^{-E}x, s)\|^\alpha\, \|\tilde{g}_{t_j}(s)\|^\alpha\, \|u_j\|^\alpha = L \sum_{j=1}^{k} \|\tilde{g}_{t_j}(s)\|^\alpha\, \|u_j\|^\alpha,$

where in the second step we benefited from (3.15) together with the sub-multiplicativity of the operator norm. Also note that we used $\|A(c^{-E}x, s)\| = 1$ again. Anyway and based on
(3.12), it was just the outcome of Theorem 2.6 in [14] that the function in (4.13) belongs to
L1 (ds). Eventually, the dominated convergence theorem gives (4.12).
Remark 4.11. Observe that the proof of Theorem 4.10 is simpler if we additionally assume
that Y has stationary increments as in Corollary 3.7. In fact, by (4.2) and Lemma 4.7 we
get in this particular case that
$\{c^{-D}(Y_\lambda(x + c^E t) - Y_\lambda(x)) : t \in \mathbb{R}^n\} \overset{fdd}{=} \{c^{-D} Y_\lambda(c^E t) : t \in \mathbb{R}^n\} \overset{fdd}{=} \{Y_{c^{-q/\alpha}\lambda}(t) : t \in \mathbb{R}^n\} \overset{fdd}{\Longrightarrow} \{Y_0(t) : t \in \mathbb{R}^n\}$

as c → ∞.
References
[1] M. S. Alrawashdeh, J. F. Kelly, M. M. Meerschaert, and H.-P. Scheffler. Applications of inverse tempered
stable subordinators. Computers & Mathematics with Applications, 73(6):892–905, 2017.
[2] B. Baeumer and M. M. Meerschaert. Tempered stable Lévy motion and transient super-diffusion. Journal of Computational and Applied Mathematics, 233(10):2438–2448, 2010.
[3] H. Biermé and C. Lacaux. Hölder regularity for operator scaling stable random fields. Stochastic Processes and their Applications, 119(7):2222–2248, 2009.
[4] H. Biermé, M. M. Meerschaert, and H.-P. Scheffler. Operator scaling stable random fields. Stochastic
Processes and their Applications, 117(3):312–332, 2007.
[5] A. Chakrabarty and M. M. Meerschaert. Tempered stable laws as random walk limits. Statistics & Probability Letters, 81(8):989–997, 2011.
[6] G. Didier, S. Kanamori, and F. Sabzikar. On multivariate fractional random fields: tempering and
operator-stable laws. arXiv preprint arXiv:2002.09612, 2020.
[7] R. M. Dudley. Real analysis and probability, volume 74. Cambridge University Press, 2002.
[8] K. J. Falconer. Tangent fields and the local structure of random fields. Journal of Theoretical Probability,
15(3):731–750, 2002.
[9] G. Jameson. The incomplete gamma functions. The Mathematical Gazette, 100(548):298–306, 2016.
[10] D. Kremer and H.-P. Scheffler. Multi operator-stable random measures and fields. Stochastic Models,
35(4):429–468, 2019.
[11] D. Kremer and H.-P. Scheffler. Multivariate stochastic integrals with respect to independently scattered
random measures on δ-rings. Publicationes Mathematicae Debrecen, 95(1-2):39–66, 2019.
[12] D. Kremer and H.-P. Scheffler. Operator-stable and operator-self-similar random fields. Stochastic Processes and their Applications, 129(10):4082–4107, 2019.
[13] U. Küchler and S. Tappe. Tempered stable distributions and processes. Stochastic Processes and their
Applications, 123(12):4256–4293, 2013.
[14] Y. Li and Y. Xiao. Multivariate operator-self-similar random fields. Stochastic Processes and their Applications, 121(6):1178–1200, 2011.
[15] M. M. Meerschaert, P. Roy, and Q. Shao. Parameter estimation for tempered power law distributions. Communications in Statistics – Theory and Methods, 2009.
[16] M. M. Meerschaert and F. Sabzikar. Tempered fractional stable motion. Journal of Theoretical Probability, 29(2):681–706, 2016.
[17] M. M. Meerschaert and H.-P. Scheffler. Limit distributions for sums of independent random vectors:
Heavy tails in theory and practice, volume 321. John Wiley & Sons, 2001.
[18] M. M. Meerschaert and A. Sikorskii. Stochastic models for fractional calculus, volume 43. Walter de
Gruyter, 2011.
[19] M. M. Meerschaert, Y. Zhang, and B. Baeumer. Tempered anomalous diffusion in heterogeneous systems. Geophysical Research Letters, 35(17), 2008.
[20] B. S. Rajput and J. Rosinski. Spectral representations of infinitely divisible processes. Probability Theory
and Related Fields, 82(3):451–487, 1989.
[21] J. Rosiński. Tempering stable processes. Stochastic Processes and their Applications, 117(6):677–707, 2007.
[22] G. Samoradnitsky and M. S. Taqqu. Stable non-Gaussian random processes: stochastic models with infinite variance, volume 1. CRC Press, 1994.
[23] K.-I. Sato. Lévy processes and infinitely divisible distributions. Cambridge University Press, 1999.
[24] N. Temme. Uniform asymptotic expansions of the incomplete gamma functions and the incomplete beta
function. Mathematics of Computation, 29(132):1109–1114, 1975.
Dustin Kremer, Department Mathematik, Universität Siegen, 57068 Siegen, Germany
Email address:
[email protected]
Hans-Peter Scheffler, Department Mathematik, Universität Siegen, 57068 Siegen, Germany
Email address:
[email protected]