arXiv:1304.6671v1 [math.CA] 24 Apr 2013
Products of Beta distributed random variables
Charles F. Dunkl
Abstract. This is an expository note on useful expressions for the density
function of a product of independent random variables where each variable
has a Beta distribution.
1. Introduction
This is a brief exposition of some techniques to construct density functions with moment sequences of the form $\prod_{j=1}^{m}\frac{(u_j)_n}{(v_j)_n}$, where $(a)_n$ denotes the Pochhammer symbol $\frac{\Gamma(a+n)}{\Gamma(a)}$. Such a density $f(x)$ can be expressed as a certain Meijer $G$-function, that is, a sum of generalized hypergeometric series, and as a power series in $(1-x)$ whose coefficients can be calculated by a recurrence. The former expression is pertinent for numerical computations for $x$ near zero, while the latter is useful for $x$ near $1$.
All the random variables considered here take values in $[0,1]$, and density functions are determined by their moments: for a random variable $X$ we have $P[X<a]=\int_0^a f(x)\,dx$ for $0\le a\le 1$, and the expected value $E(X^n)=\int_0^1 x^n f(x)\,dx$ is the $n$th moment. The basic building block is the Beta distribution ($\alpha,\beta>0$)
$$h(\alpha,\beta;x)=\frac{1}{B(\alpha,\beta)}\,x^{\alpha-1}(1-x)^{\beta-1},\tag{1.1}$$
where $B(\alpha,\beta):=\frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$; then
$$\int_0^1 x^n h(\alpha,\beta;x)\,dx=\frac{(\alpha)_n}{(\alpha+\beta)_n},\quad n=0,1,2,\ldots,$$
thus $\left\{\frac{(u)_n}{(v)_n}\right\}$ is a moment sequence if $0<u<v$ (with $\alpha=u$, $\beta=v-u$). The moments of the product of independent random variables are the products of the respective moments; that is, suppose the densities of (independent) $X$ and $Y$ are $f,g$ respectively, and define
$$f*g\,(x)=\int_x^1 f(t)\,g\!\left(\frac{x}{t}\right)\frac{dt}{t};\tag{1.2}$$
2000 Mathematics Subject Classification. Primary 33C20, 33C60; Secondary 62E15.
Key words and phrases. Beta function, moment sequences.
then $f*g$ is a density, $P[XY<a]=\int_0^a f*g\,(x)\,dx$ for $0\le a\le 1$, and
$$\int_0^1 x^n\,(f*g\,(x))\,dx=\int_0^1 x^n f(x)\,dx\int_0^1 y^n g(y)\,dy.$$
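As a quick numerical illustration (not part of the original note), the multiplicativity of moments can be checked by Monte Carlo sampling with Python's standard library; the names `us`, `vs` and the helper `poch` are ours.

```python
import random

def poch(a, n):
    """Pochhammer symbol (a)_n = a (a+1) ... (a+n-1)."""
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def product_moment(us, vs, n):
    """The target moment prod_j (u_j)_n / (v_j)_n."""
    r = 1.0
    for u, v in zip(us, vs):
        r *= poch(u, n) / poch(v, n)
    return r

random.seed(1)
us, vs = [0.5, 1.0], [1.5, 2.0]
N = 200_000
# X_j ~ Beta(u_j, v_j - u_j) has n-th moment (u_j)_n / (v_j)_n, so the
# product X_1 X_2 should have moments (u_1)_n (u_2)_n / ((v_1)_n (v_2)_n).
samples = [random.betavariate(us[0], vs[0] - us[0]) *
           random.betavariate(us[1], vs[1] - us[1]) for _ in range(N)]
for n in (1, 2, 3):
    mc = sum(s ** n for s in samples) / N
    print(n, mc, product_moment(us, vs, n))
```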
These are the main results: suppose the parameters $u_1,\ldots,u_m$ and $v_1,\ldots,v_m$ satisfy $v_i>u_i>0$ for each $i$; then there is a unique density function $f$ with the moment sequence $\prod_{j=1}^m\frac{(u_j)_n}{(v_j)_n}$;

(1) if also $u_i-u_j\notin\mathbb{Z}$ for each $i\ne j$, then for $0\le x<1$
$$f(x)=\left(\prod_{k=1}^m\frac{\Gamma(v_k)}{\Gamma(u_k)}\right)\sum_{i=1}^m\frac{x^{u_i-1}}{\Gamma(v_i-u_i)}\prod_{j=1,\,j\ne i}^m\frac{\Gamma(u_j-u_i)}{\Gamma(v_j-u_i)}\times\sum_{n=0}^\infty\prod_{k=1}^m\frac{(u_i-v_k+1)_n}{(u_i-u_k+1)_n}\,x^n;\tag{1.3}$$

(2) for $\delta:=\sum_{i=1}^m(v_i-u_i)$ there is an $(m+1)$-term recurrence for the coefficients $\{c_n\}$ such that
$$f(x)=\frac{1}{\Gamma(\delta)}\prod_{i=1}^m\frac{\Gamma(v_i)}{\Gamma(u_i)}\,(1-x)^{\delta-1}\left\{1+\sum_{n=1}^\infty c_n(1-x)^n\right\},\quad 0<x\le 1.\tag{1.4}$$
The use of the inverse Mellin transform to derive the series expansion in (1.3) is
sketched in Section 2. The differential equation initial value problem for the density
is described in Section 3, and the recurrence for (1.4) is derived in Section 4.
The examples in Section 5 include the relatively straightforward situation m =
2 and the density of the determinant of a random 4 × 4 positive-definite matrix of
trace one, where m = 3.
2. The inverse Mellin transform
The Mellin transform of the density $f$ is defined by
$$Mf(p)=\int_0^1 x^{p-1}f(x)\,dx.$$
This is an analytic function in $\{p:\operatorname{Re}p>0\}$ and agrees with the meromorphic function
$$p\mapsto\prod_{j=1}^m\frac{\Gamma(v_j)\,\Gamma(u_j+p-1)}{\Gamma(u_j)\,\Gamma(v_j+p-1)}$$
at $p=1,2,3,\ldots$; thus the two functions coincide in the half-plane by Carlson's theorem. The inverse Mellin transform is
$$f(x)=\frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}Mf(p)\,x^{-p}\,dp,$$
for $\sigma>0$; it turns out the integral can be evaluated by residues (it is of Mellin-Barnes type). For each $j$ and each $n=0,1,2,\ldots$ there is a pole of $Mf(p)$ at
$p=1-n-u_j$; the hypothesis $u_i-u_j\notin\mathbb{Z}$ for each $i\ne j$ implies that each pole is simple. The residue at $p=1-n-u_k$ equals
$$x^{u_k-1+n}\prod_{j=1}^m\frac{\Gamma(v_j)}{\Gamma(u_j)\,\Gamma(v_j-u_k-n)}\prod_{i\ne k}\Gamma(u_i-u_k-n)\times\lim_{p\to 1-n-u_k}(p-1+n+u_k)\,\Gamma(u_k+p-1).$$
To simplify this we use
$$\Gamma(a-n)=\frac{\Gamma(a)}{(a-n)_n}=(-1)^n\frac{\Gamma(a)}{(1-a)_n},$$
$$\lim_{p\to-p_0}(p+p_0)\,\Gamma(p_0+p-n)=\lim_{p\to-p_0}(p+p_0)\,\frac{\Gamma(p+p_0+1)}{(p+p_0-n)_{n+1}}=\frac{\Gamma(1)}{(-n)_n}=\frac{(-1)^n}{n!}.$$
Thus
$$f(x)=\left(\prod_{i=1}^m\frac{\Gamma(v_i)}{\Gamma(u_i)}\right)\sum_{k=1}^m\frac{x^{u_k-1}}{\Gamma(v_k-u_k)}\prod_{j\ne k}\frac{\Gamma(u_j-u_k)}{\Gamma(v_j-u_k)}\sum_{n=0}^\infty\prod_{i=1}^m\frac{(1+u_k-v_i)_n}{(1+u_k-u_i)_n}\,x^n\tag{2.1}$$
(note $(1+u_k-u_k)_n=n!$); in fact this is a Meijer $G$-function (see [4, 16.17.2]).
3. The differential equation
The equation is of Mellin-Barnes type: let $\partial_x:=\frac{d}{dx}$, $D:=x\partial_x$, and define the differential operator
$$T(u,v)=-x\prod_{j=1}^m(D+2-v_j)+\prod_{j=1}^m(D+1-u_j).$$
The highest order term is $(1-x)\,x^m\partial_x^m$, and the equation has regular singular points at $x=0$ and $x=1$. We find
$$T(u,v)\,x^c\sum_{n=0}^\infty c_n x^n=x^c\sum_{n=0}^\infty c_n\left(-\prod_{j=1}^m(n+c+2-v_j)\,x^{n+1}+\prod_{j=1}^m(n+c+1-u_j)\,x^n\right)$$
$$=x^c\sum_{n=1}^\infty x^n\left(c_n\prod_{j=1}^m(n+c+1-u_j)-c_{n-1}\prod_{j=1}^m(n+c+1-v_j)\right)+x^c\,c_0\prod_{j=1}^m(c+1-u_j).$$
The solutions of the indicial equation are $c=u_i-1$, $1\le i\le m$. Assume $u_i-u_j\notin\mathbb{Z}$ for $i\ne j$. Let $c=u_1-1$; then one obtains a solution of $T(u,v)f(x)=0$ by solving the recurrence
$$c_n=\prod_{j=1}^m\frac{u_1-v_j+n}{u_1-u_j+n}\,c_{n-1}=\frac{\prod_{j=1}^m(u_1-v_j+1)_n}{n!\prod_{j=2}^m(u_1-u_j+1)_n}\,c_0.$$
Thus the solutions of $T(u,v)f=0$ are linear combinations of
$$f_1(x):=x^{u_1-1}\,{}_mF_{m-1}\!\left(\begin{matrix}u_1-v_1+1,\ldots,u_1-v_m+1\\u_1-u_2+1,\ldots,u_1-u_m+1\end{matrix};x\right),$$
$$f_i(x):=x^{u_i-1}\sum_{n=0}^\infty\prod_{j=1}^m\frac{(u_i-v_j+1)_n}{(u_i-u_j+1)_n}\,x^n,$$
for $1\le i\le m$ (note the factor $(u_i-u_i+1)_n=n!$).
Lemma 1. Suppose $g$ is differentiable on $(0,1]$, $g^{(j)}(1)=0$ for $0\le j\le k$, and $h(x):=(D+s)\,g(x)$; then $h^{(j)}(1)=0$ for $0\le j\le k-1$. Furthermore, if $k\ge 0$ then for $n\ge 0$
$$\int_0^1 x^n h(x)\,dx=(s-n-1)\int_0^1 x^n g(x)\,dx.$$

Proof. By induction $\partial_x^j D=x\partial_x^{j+1}+j\partial_x^j$ for $j\ge 0$. Hence $h^{(j)}(1)=g^{(j+1)}(1)+(j+s)\,g^{(j)}(1)$. Next
$$\int_0^1 x^n h(x)\,dx=s\int_0^1 x^n g(x)\,dx+\int_0^1 x^{n+1}g'(x)\,dx=s\int_0^1 x^n g(x)\,dx+g(1)-(n+1)\int_0^1 x^n g(x)\,dx,$$
and $g(1)=0$ by hypothesis.
This is the fundamental initial value system:
$$T(u,v)\,f(x)=0,\qquad f^{(j)}(1)=0,\ 0\le j\le m-1.\tag{3.1}$$

Proposition 1. Suppose $f$ is a solution defined on $(0,1]$ of (3.1); then for $n\ge 0$
$$\int_0^1 x^n f(x)\,dx=\prod_{i=1}^m\frac{(u_i)_n}{(v_i)_n}\int_0^1 f(x)\,dx.$$
Proof. For $0\le j\le m$ let $h_j=\prod_{i=1}^j(D+1-u_i)\,f$; thus $h_{j+1}=(D+1-u_{j+1})\,h_j$, and by the Lemma $h_j^{(k)}(1)=0$ for $0\le k\le m-1-j$. Also $\int_0^1 x^n h_{j+1}(x)\,dx=-(n+u_{j+1})\int_0^1 x^n h_j(x)\,dx$ for $0\le j\le m-1$. By induction $\int_0^1 x^n h_m(x)\,dx=(-1)^m\prod_{i=1}^m(n+u_i)\int_0^1 x^n f(x)\,dx$.

Similarly $\int_0^1 x^n\,x\prod_{i=1}^m(D+2-v_i)\,f(x)\,dx=(-1)^m\prod_{i=1}^m(n+v_i)\int_0^1 x^{n+1}f(x)\,dx$. Thus the integral $0=\int_0^1 x^n\,T(u,v)f(x)\,dx$ implies the recurrence
$$\int_0^1 x^{n+1}f(x)\,dx=\prod_{i=1}^m\frac{u_i+n}{v_i+n}\int_0^1 x^n f(x)\,dx.$$
Induction completes the proof.
Observe that the coefficients $\{\gamma_i\}$ of the solution $\sum_{i=1}^m\gamma_i f_i(x)$ of the system are not explicit here, but they are found in the inverse Mellin transform expression.
4. The behavior near x = 1 and the recurrence
First we establish the form of the density f (x) in terms of powers of (1 − x).
Lemma 2. For $\alpha,\beta,\gamma>0$ and $0<x\le 1$
$$\int_x^1 t^{\alpha-1}(1-t)^{\beta-1}\left(1-\frac{x}{t}\right)^{\gamma-1}dt=B(\beta,\gamma)\,(1-x)^{\beta+\gamma-1}\,{}_2F_1\!\left(\begin{matrix}\gamma-\alpha,\ \beta\\\beta+\gamma\end{matrix};1-x\right).$$
Proof. Change the variable of integration, $t=1-s+sx$; then the integral becomes
$$(1-x)^{\beta+\gamma-1}\int_0^1(1-s(1-x))^{\alpha-\gamma}s^{\beta-1}(1-s)^{\gamma-1}\,ds=(1-x)^{\beta+\gamma-1}\,\frac{\Gamma(\beta)\Gamma(\gamma)}{\Gamma(\beta+\gamma)}\,{}_2F_1\!\left(\begin{matrix}\gamma-\alpha,\ \beta\\\beta+\gamma\end{matrix};1-x\right).$$
This is a standard formula, see [3, (9.1.4), p. 239], and is valid in $0<x\le 1$ (where $|1-x|<1$).
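Lemma 2 is easy to test numerically; the following sketch (ours, with arbitrary sample parameters) compares both sides using Simpson's rule and the defining series of ${}_2F_1$.

```python
from math import gamma

def hyp2f1(a, b, c, z, terms=300):
    """Gauss 2F1 by its defining series; fine for |z| well below 1."""
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return s

def lemma2_lhs(al, be, ga, x, steps=4000):
    h = (1.0 - x) / steps
    def ig(t):
        return t ** (al - 1) * (1 - t) ** (be - 1) * (1 - x / t) ** (ga - 1)
    s = ig(x) + ig(1.0)
    for i in range(1, steps):
        s += ig(x + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def lemma2_rhs(al, be, ga, x):
    B = gamma(be) * gamma(ga) / gamma(be + ga)
    return B * (1 - x) ** (be + ga - 1) * hyp2f1(ga - al, be, be + ga, 1 - x)

print(lemma2_lhs(1.3, 2.2, 1.7, 0.4), lemma2_rhs(1.3, 2.2, 1.7, 0.4))
```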
Set $\delta:=\sum_{i=1}^m(v_i-u_i)$.
Proposition 2. There exists a sequence $\{c_n\}$ such that
$$f(x)=\frac{1}{\Gamma(\delta)}\prod_{i=1}^m\frac{\Gamma(v_i)}{\Gamma(u_i)}\,(1-x)^{\delta-1}\left\{1+\sum_{n=1}^\infty c_n(1-x)^n\right\}.$$
Proof. Argue by induction. For $m=1$ we have (see (1.1))
$$f(x)=\frac{\Gamma(v_1)}{\Gamma(u_1)\Gamma(v_1-u_1)}\,x^{u_1-1}(1-x)^{v_1-u_1-1}=\frac{\Gamma(v_1)}{\Gamma(u_1)\Gamma(v_1-u_1)}\,(1-x)^{v_1-u_1-1}\left\{1+\sum_{n=1}^\infty\frac{(1-u_1)_n}{n!}(1-x)^n\right\}.$$
Assume the statement is proven for some $m\ge 1$; then $g=f*h(u_{m+1},v_{m+1}-u_{m+1};\cdot)$ has the moments $\prod_{i=1}^{m+1}\frac{(u_i)_n}{(v_i)_n}$. The convolution integral (see (1.2)) is a sum of terms
$$C_n\int_x^1(1-t)^{\delta+n-1}\left(\frac{x}{t}\right)^{u_{m+1}-1}\left(1-\frac{x}{t}\right)^{v_{m+1}-u_{m+1}-1}\frac{dt}{t}$$
$$=C_n\,x^{u_{m+1}-1}(1-x)^{\delta+n+v_{m+1}-u_{m+1}-1}\,\frac{\Gamma(\delta+n)\,\Gamma(v_{m+1}-u_{m+1})}{\Gamma(\delta+n+v_{m+1}-u_{m+1})}\,{}_2F_1\!\left(\begin{matrix}\delta+n,\ v_{m+1}-1\\\delta+v_{m+1}-u_{m+1}+n\end{matrix};1-x\right)$$
by Lemma 2; and $x^{u_{m+1}-1}=1+\sum_{j=1}^\infty\frac{(1-u_{m+1})_j}{j!}(1-x)^j$. Thus the lowest power of $(1-x)$ appearing in $g$ is $\delta+v_{m+1}-u_{m+1}-1$, which occurs for $n=0$. By the inductive hypothesis
$$C_0=\frac{1}{\Gamma(\delta)}\prod_{i=1}^m\frac{\Gamma(v_i)}{\Gamma(u_i)}\,\frac{\Gamma(v_{m+1})}{\Gamma(u_{m+1})\,\Gamma(v_{m+1}-u_{m+1})},$$
and so the coefficient of $(1-x)^{\delta+v_{m+1}-u_{m+1}-1}$ in $g$ is
$$C_0\,\frac{\Gamma(\delta)\,\Gamma(v_{m+1}-u_{m+1})}{\Gamma(\delta+v_{m+1}-u_{m+1})}=\frac{1}{\Gamma(\delta+v_{m+1}-u_{m+1})}\prod_{i=1}^{m+1}\frac{\Gamma(v_i)}{\Gamma(u_i)};$$
this completes the induction.
For the next step we need to express $T(u,v)$ in the form $x^m(1-x)\partial_x^m+\sum_{j=0}^{m-1}x^j(a_j-b_j x)\,\partial_x^j$. Recall the elementary symmetric polynomials in the variables $\{z_1,\ldots,z_m\}$, given by the generating function
$$\prod_{j=1}^m(q+z_j)=\sum_{n=0}^m e_n(z)\,q^{m-n},$$
so $e_0(z)=1$, $e_1(z)=z_1+z_2+\cdots+z_m$, $e_2(z)=z_1z_2+\cdots+z_{m-1}z_m$, and $e_m(z)=z_1z_2\cdots z_m$.
Thus
$$\prod_{j=1}^m(D+1-u_j)=\sum_{j=0}^m e_{m-j}(1-u)\,(x\partial_x)^j=\sum_{j=0}^m a_j x^j\partial_x^j.$$
Let $(x\partial_x)^k=\sum_{i=0}^k A_{k,i}\,x^i\partial_x^i$; then $x\partial_x(x\partial_x)^k=\sum_{i=0}^k A_{k,i}\left(i\,x^i\partial_x^i+x^{i+1}\partial_x^{i+1}\right)$, so $A_{k+1,i}=A_{k,i-1}+iA_{k,i}$. This recurrence has the boundary values $A_{0,0}=1$, $A_{1,0}=0$, $A_{1,1}=1$. The solution consists of the Stirling numbers of the second kind, denoted $S(k,i)$ (see [4, 26.8.22]).
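The recurrence for the coefficients $A_{k,i}$ is easy to run mechanically; the sketch below (ours) generates them and reproduces the Stirling numbers of the second kind, e.g. $S(4,2)=7$.

```python
def xdx_coeffs(k):
    """A[k][i] with (x d/dx)^k = sum_i A[k][i] x^i (d/dx)^i.

    Uses A_{k+1,i} = A_{k,i-1} + i A_{k,i}; the entries are the Stirling
    numbers of the second kind S(k, i)."""
    A = [1]  # k = 0: the identity operator
    for _ in range(k):
        B = [0] * (len(A) + 1)
        for i, a in enumerate(A):
            B[i] += i * a       # x d/dx hitting x^i (d/dx)^i: the i x^i (d/dx)^i part
            B[i + 1] += a       # ...and the x^{i+1} (d/dx)^{i+1} part
        A = B
    return A

print(xdx_coeffs(4))  # [0, 1, 7, 6, 1]
```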
Thus
$$\sum_{j=0}^m a_j x^j\partial_x^j=\sum_{j=0}^m e_{m-j}(1-u)\sum_{i=0}^j S(j,i)\,x^i\partial_x^i=\sum_{j=0}^m\left(\sum_{i=j}^m S(i,j)\,e_{m-i}(1-u)\right)x^j\partial_x^j,$$
$$a_j=\sum_{i=j}^m S(i,j)\,e_{m-i}(1-u),\quad 0\le j\le m.$$
In particular $a_m=1$, $a_{m-1}=\binom{m}{2}+e_1(1-u)$, and $a_0=e_m(1-u)=\prod_{j=1}^m(1-u_j)$. Similarly
$$\prod_{j=1}^m(D+2-v_j)=\sum_{j=0}^m e_{m-j}(2-v)\,(x\partial_x)^j,\qquad\sum_{j=0}^m b_j x^j\partial_x^j=\sum_{j=0}^m\left(\sum_{i=j}^m S(i,j)\,e_{m-i}(2-v)\right)x^j\partial_x^j,$$
$$b_j=\sum_{i=j}^m S(i,j)\,e_{m-i}(2-v),\quad 0\le j\le m.$$
The differential equation leads to recurrence relations for the coefficients $\{c_n\}$. Convert the differential operator $T(u,v)$ to the coordinate $t=1-x$; set $\partial_t:=\frac{d}{dt}$ (so that $\partial_t=-\partial_x$). Write (expanding $x^j=(1-t)^j$ with the binomial theorem)
$$T(u,v)=-x\sum_{j=0}^m b_j x^j\partial_x^j+\sum_{j=0}^m a_j x^j\partial_x^j=-\sum_{j=0}^m\sum_{i=0}^{j+1}\binom{j+1}{i}(-t)^i b_j(-\partial_t)^j+\sum_{j=0}^m\sum_{i=0}^j\binom{j}{i}(-t)^i a_j(-\partial_t)^j$$
$$=\sum_{k=-m}^1(-1)^k\sum_{j=\max(0,-k)}^m\left\{\binom{j}{j+k}a_j-\binom{j+1}{j+k}b_j\right\}t^{k+j}\partial_t^j.$$
The highest order term ($k=1$) is $\sum_{j=0}^m b_j t^{j+1}\partial_t^j$. The term with $k=0$ is $\sum_{j=0}^m\left(a_j-(j+1)\,b_j\right)t^j\partial_t^j$. The two bottom terms ($k=-m,\,1-m$) are
$$(-1)^m(a_m-b_m)\,\partial_t^m=0,\qquad(-1)^{m-1}(a_{m-1}-b_{m-1}-t\partial_t)\,\partial_t^{m-1}=(-1)^{m-1}\left(\sum_{i=1}^m(v_i-u_i)-m-t\partial_t\right)\partial_t^{m-1};$$
the remaining terms ($-m<-k<0$) are
$$\sum_{j=k}^m(-1)^k\left\{\binom{j}{j-k}a_j-\binom{j+1}{j-k}b_j\right\}t^{j-k}\partial_t^{j-k}\,\partial_t^k.$$
Apply $T(u,v)$ to $t^\gamma$ (with the aim of finding a solution $t^c\sum_{n=0}^\infty c_n t^n$ of $T(u,v)f=0$); note $\partial_t^j t^\gamma=(-1)^j(-\gamma)_j\,t^{\gamma-j}$. Then
$$T(u,v)\,t^\gamma=\sum_{k=-1}^{m-1}R_k(\gamma)\,t^{\gamma-k},$$
$$R_{-1}(\gamma)=\sum_{j=0}^m b_j(-1)^j(-\gamma)_j,\qquad R_{m-1}(\gamma)=(-\gamma)_{m-1}\left(\sum_{i=1}^m(v_i-u_i)-\gamma-1\right)=(-\gamma)_{m-1}(\delta-1-\gamma),$$
and
$$R_k(\gamma)=(-\gamma)_k\,R_k'(\gamma),\qquad R_k'(\gamma)=\sum_{j=k}^m\left\{\binom{j}{j-k}a_j-\binom{j+1}{j-k}b_j\right\}(-1)^{j-k}(-\gamma+k)_{j-k},\quad 0\le k<m.$$
These sums can be considerably simplified (and the Stirling numbers are not needed). Introduce the difference operator
$$\nabla g(c)=g(c)-g(c-1).$$
This has a convenient action: for $j\ge 0$ and arbitrary $k$,
$$\nabla(k-c)_j=(k-c)_j-(k+1-c)_j=\{k-c-(k+j-c)\}\,(k+1-c)_{j-1}=-j\,(k+1-c)_{j-1},$$
$$\nabla^k(-c)_j=(-1)^k\frac{j!}{(j-k)!}\,(-c+k)_{j-k},\quad k\le j.$$
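Both difference identities can be verified exactly in rational arithmetic; this small check (ours) uses Python's `fractions` module.

```python
from fractions import Fraction
from math import factorial

def poch(a, n):
    """Pochhammer symbol (a)_n in exact arithmetic."""
    r = Fraction(1)
    for k in range(n):
        r *= a + k
    return r

def nabla(g, c, k=1):
    """k-th backward difference in c: nabla^k g(c)."""
    if k == 0:
        return g(c)
    return nabla(g, c, k - 1) - nabla(g, c - 1, k - 1)

c = Fraction(7, 3)
j, k = 5, 3
lhs = nabla(lambda t: poch(-t, j), c, k)
rhs = (-1) ** k * Fraction(factorial(j), factorial(j - k)) * poch(-c + k, j - k)
print(lhs == rhs)
```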
Define the polynomials
$$p(c)=\prod_{i=1}^m(c+1-u_i),\qquad q(c)=\prod_{i=1}^m(c+2-v_i),$$
$$q_1(c)=(1+c\nabla)\,q(c)=(1+c)\,q(c)-c\,q(c-1).$$
Proposition 3. $R_{-1}(\gamma)=q(\gamma)$, and for $0\le k\le m-1$
$$R_k'(\gamma)=\frac{1}{k!}\nabla^k p(\gamma)-\frac{1}{(k+1)!}\nabla^k q_1(\gamma).$$

Proof. By construction $\sum_{j=0}^m a_j t^j\partial_t^j\,t^\gamma=p(\gamma)\,t^\gamma$ and $\sum_{j=0}^m b_j t^j\partial_t^j\,t^\gamma=q(\gamma)\,t^\gamma$. Apply $\frac{1}{k!}\nabla^k$ to both sides ($\nabla$ acts on the variable $\gamma$) of
$$p(\gamma)=\sum_{j=0}^m(-1)^j a_j(-\gamma)_j$$
to obtain
$$\frac{1}{k!}\nabla^k p(\gamma)=\frac{1}{k!}\sum_{j=k}^m(-1)^{j-k}a_j\,\frac{j!}{(j-k)!}\,(-\gamma+k)_{j-k}=\sum_{j=k}^m(-1)^{j-k}\binom{j}{j-k}a_j\,(-\gamma+k)_{j-k}.$$
Also
$$q_1(\gamma)=(1+\gamma\nabla)\,q(\gamma)=(1+\gamma\nabla)\sum_{j=0}^m(-1)^j b_j(-\gamma)_j=\sum_{j=0}^m(-1)^j b_j\left\{(-\gamma)_j-j\gamma(1-\gamma)_{j-1}\right\}=\sum_{j=0}^m(-1)^j b_j(1+j)(-\gamma)_j.$$
Apply $\frac{1}{(k+1)!}\nabla^k$ to both sides to obtain
$$\frac{1}{(k+1)!}\nabla^k q_1(\gamma)=\frac{1}{(k+1)!}\sum_{j=k}^m(-1)^{j-k}b_j(1+j)\,\frac{j!}{(j-k)!}\,(-\gamma+k)_{j-k}=\sum_{j=k}^m(-1)^{j-k}\binom{j+1}{j-k}b_j\,(-\gamma+k)_{j-k}.$$
This completes the proof.
Hence
$$T(u,v)\,t^c\sum_{n=0}^\infty c_n t^n=t^c\sum_{n=0}^\infty c_n\sum_{k=-1}^{m-1}R_k(n+c)\,t^{n-k}=t^c\sum_{n=1-m}^\infty t^n\sum_{k=-1}^{m-1}R_k(n+k+c)\,c_{n+k}.$$
The recurrence for the coefficients (with $c_i=0$ for $i<0$) is
$$R_{m-1}(n+c)\,c_n=-\sum_{k=1}^{\min(m,n)}R_{m-1-k}(n-k+c)\,c_{n-k},$$
that is,
$$(-n-c)_{m-1}(\delta-1-n-c)\,c_n=-\sum_{k=1}^{\min(m-1,n)}(-n+k-c)_{m-1-k}\,R_{m-1-k}'(n-k+c)\,c_{n-k}-q(n-m+c)\,c_{n-m}.$$
At $n=0$ the equation is $(-c)_{m-1}(\delta-1-c)\,c_0=0$. Let $c=0$; then for $0\le n\le m-2$ the equations are
$$(-n)_{m-1}(\delta-1-n)\,c_n=-\sum_{k=1}^n(-n+k)_{m-1-k}\,R_{m-1-k}'(n-k)\,c_{n-k},$$
but $(-n+k)_{m-1-k}=0$ for $m-1-k>n-k$ (and $0\le k\le n$); thus the coefficients $c_0,c_1,\ldots,c_{m-2}$ are arbitrary, providing $m-1$ linearly independent solutions of $T(u,v)f=0$.
The recurrence can be rewritten as
$$c_{m-1}=\frac{-1}{(m-1)!\,(\delta-m)}\sum_{k=1}^{m-1}(-1)^k(m-1-k)!\,R_{m-1-k}'(m-1-k)\,c_{m-1-k},$$
$$c_n=\frac{-1}{\delta-1-n}\left\{\sum_{k=1}^{m-1}\frac{1}{(-n)_k}\,R_{m-1-k}'(n-k)\,c_{n-k}+\frac{1}{(-n)_{m-1}}\,q(n-m)\,c_{n-m}\right\},\quad n\ge m.$$
Assume that $\delta\notin\mathbb{Z}$ to avoid poles. But these are different from the desired solution, which has $c=\delta-1$, as was shown in Proposition 2. The recurrence behaves better in this case. Indeed
$$c_n=\frac{1}{n}\sum_{k=1}^{\min(m-1,n)}\frac{(-n+k-\delta+1)_{m-1-k}}{(-n-\delta+1)_{m-1}}\,R_{m-1-k}'(n-k+\delta-1)\,c_{n-k}+\frac{1}{n\,(-n-\delta+1)_{m-1}}\,q(n-m+\delta-1)\,c_{n-m},$$
which simplifies to
$$c_n=\frac{1}{n}\sum_{k=1}^{\min(m-1,n)}\frac{1}{(-n-\delta+1)_k}\,R_{m-1-k}'(n-k+\delta-1)\,c_{n-k}+\frac{1}{n\,(-n-\delta+1)_{m-1}}\,q(n-m+\delta-1)\,c_{n-m}.\tag{4.1}$$
The term with $c_{n-m}$ occurs only for $n\ge m$. The denominator factors are of the form $(\delta+n-1)(\delta+n-2)\cdots(\delta+n-k)$. If $n\ge m$ then the smallest factor is $\delta+n-m>\delta>0$; otherwise the smallest factor is $\delta$ (for $k=n$). Hence this solution is well-defined for any $\delta>0$.
Theorem 1. Suppose $u_1,\ldots,u_m,v_1,\ldots,v_m$ satisfy $v_i>u_i>0$ for each $i$; then there is a density function $f$ on $[0,1]$ with moment sequence $\prod_{i=1}^m\frac{(u_i)_n}{(v_i)_n}$ and
$$f(x)=\frac{1}{\Gamma(\delta)}\prod_{i=1}^m\frac{\Gamma(v_i)}{\Gamma(u_i)}\,(1-x)^{\delta-1}\left\{1+\sum_{n=1}^\infty c_n(1-x)^n\right\},$$
where the coefficients $\{c_n\}$ are obtained with the recurrence (4.1) using $c_0=1$, and $\delta=\sum_{i=1}^m(v_i-u_i)$.
Proof. The density exists because it is the distribution of the random variable $\prod_{i=1}^m X_i$, where the $X_i$'s are jointly independent and the moments of $X_i$ are $\frac{(u_i)_n}{(v_i)_n}$ for each $i$. By Proposition 2, $f$ has the series expansion given in the statement. Let $g(x)$ be the function given in the statement and suppose for now that $\delta>m$; then $g$ is a solution of the differential system (3.1) (because of the factor $(1-x)^{\delta-1}$). By Proposition 1, $g$ has the same moments as $Cf$ for some constant $C$. By Proposition 2, $f$ and $g$ have the same leading coefficient in their series expansions. Hence $f=g$. The coefficients $c_n$ are analytic in the parameters for the range $\delta>m$. Each moment $\int_0^1 x^n f(x)\,dx$ is similarly analytic, and so the formula is valid for all $\delta>0$ by analytic continuation from the range $\delta>m$.
The coefficients occurring in the recurrence (4.1) are expressions in the parameters $u,v$ which can be straightforwardly computed, especially with computer symbolic algebra.
5. Examples
5.1. Density for m = 2. For the easy case m = 2 we can directly find the
density function, in a slightly different form.
Given u1 , u2 , v1 , v2 > 0 and δ = v1 + v2 − u1 − u2 > 0 set
g (u, v; x) =
Γ (v1 ) Γ (v2 )
Γ (u1 ) Γ (u2 ) Γ (δ)
× xu2 −1 (1 − x)
δ−1
2 F1
v2 − u2 , v1 − u2
;1− x
δ
then
Z
0
1
xn g (u, v; x) dx =
(u1 )n (u2 )n
, n = 0, 1, 2, . . . .
(v1 )n (v2 )n
Proof. Consider
$$\int_0^1 x^{n+u_1-1}(1-x)^{\delta-1}\,{}_2F_1\!\left(\begin{matrix}v_2-u_2,\ v_1-u_2\\\delta\end{matrix};1-x\right)dx=\sum_{m=0}^\infty\frac{(v_2-u_2)_m(v_1-u_2)_m}{m!\,(\delta)_m}\,\frac{\Gamma(n+u_1)\,\Gamma(\delta+m)}{\Gamma(u_1+\delta+m+n)}$$
$$=\frac{\Gamma(u_1)(u_1)_n\,\Gamma(\delta)}{\Gamma(n+u_1+\delta)}\sum_{m=0}^\infty\frac{(v_2-u_2)_m(v_1-u_2)_m}{m!\,(n+u_1+\delta)_m}=\frac{\Gamma(u_1)(u_1)_n\,\Gamma(\delta)}{\Gamma(n+u_1+\delta)}\,\frac{\Gamma(n+u_1+\delta)\,\Gamma(n+u_1+\delta-v_2-v_1+2u_2)}{\Gamma(n+u_1+\delta-v_2+u_2)\,\Gamma(n+u_1+\delta-v_1+u_2)}$$
$$=\frac{\Gamma(u_1)\Gamma(u_2)\,\Gamma(v_2+v_1-u_1-u_2)}{\Gamma(v_2)\Gamma(v_1)}\,\frac{(u_1)_n(u_2)_n}{(v_2)_n(v_1)_n},$$
for each $n$.
By using standard transformations we can explain the other formulation for $g(u,v;x)$ near $x=0$. From [3, p. 249, (9.5.7)],
$$F\!\left(\begin{matrix}\alpha,\ \delta\\\gamma\end{matrix};1-x\right)=\frac{\Gamma(\gamma-\alpha-\delta)\,\Gamma(\gamma)}{\Gamma(\gamma-\alpha)\,\Gamma(\gamma-\delta)}\,F\!\left(\begin{matrix}\alpha,\ \delta\\1+\alpha+\delta-\gamma\end{matrix};x\right)+\frac{\Gamma(\alpha+\delta-\gamma)\,\Gamma(\gamma)}{\Gamma(\alpha)\,\Gamma(\delta)}\,x^{\gamma-\alpha-\delta}\,F\!\left(\begin{matrix}\gamma-\alpha,\ \gamma-\delta\\1+\gamma-\alpha-\delta\end{matrix};x\right).$$
Applied to $g(u,v;x)$ (provided $u_1-u_2\notin\mathbb{Z}$) this yields
$$g(u,v;x)=\frac{\Gamma(v_2)\Gamma(v_1)\,\Gamma(u_2-u_1)}{\Gamma(u_1)\Gamma(u_2)\,\Gamma(v_2-u_1)\,\Gamma(v_1-u_1)}\,x^{u_1-1}(1-x)^{\delta-1}\,{}_2F_1\!\left(\begin{matrix}v_2-u_2,\ v_1-u_2\\1+u_1-u_2\end{matrix};x\right)$$
$$+\frac{\Gamma(v_2)\Gamma(v_1)\,\Gamma(u_1-u_2)}{\Gamma(u_1)\Gamma(u_2)\,\Gamma(v_2-u_2)\,\Gamma(v_1-u_2)}\,x^{u_2-1}(1-x)^{\delta-1}\,{}_2F_1\!\left(\begin{matrix}v_2-u_1,\ v_1-u_1\\1+u_2-u_1\end{matrix};x\right).\tag{5.1}$$
This is quite similar to the general formula (2.1), and the following standard transformation explains the difference:
$${}_2F_1\!\left(\begin{matrix}a,\ b\\c\end{matrix};x\right)=(1-x)^{c-a-b}\,{}_2F_1\!\left(\begin{matrix}c-a,\ c-b\\c\end{matrix};x\right).\tag{5.2}$$
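Transformation (5.2) (Euler's transformation) is again easy to confirm numerically with the series definition of ${}_2F_1$; the parameters below are arbitrary.

```python
def hyp2f1(a, b, c, z, terms=400):
    s, term = 0.0, 1.0
    for n in range(terms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return s

a, b, c, z = 0.3, 1.1, 2.4, 0.5
lhs = hyp2f1(a, b, c, z)
rhs = (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z)
print(lhs, rhs)
```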
If $u_1-u_2\in\mathbb{Z}$ then there are terms in $\log x$. The relevant formula can be found in [3, p. 257, (9.7.5)]. Suppose $u_2=u_1+n$ with $n=0,1,2,\ldots$, and $\delta=v_2+v_1-2u_1-n$; then
$$g(u,v;x)=\frac{\Gamma(v_2)\Gamma(v_1)}{\Gamma(u_1)\,\Gamma(u_1+n)\,\Gamma(v_2-u_1)\,\Gamma(v_1-u_1)}\,(1-x)^{\delta-1}x^{u_1-1}\sum_{k=0}^{n-1}\frac{(n-k-1)!}{k!}\,(v_2-u_1-n)_k(v_1-u_1-n)_k\,(-x)^k$$
$$+\frac{\Gamma(v_2)\Gamma(v_1)}{\Gamma(u_1)\,\Gamma(u_1+n)\,\Gamma(v_2-u_1-n)\,\Gamma(v_1-u_1-n)}\,(1-x)^{\delta-1}(-1)^n x^{u_1+n-1}(-\log x)\,\frac{1}{n!}\,{}_2F_1\!\left(\begin{matrix}v_2-u_1,\ v_1-u_1\\n+1\end{matrix};x\right)$$
$$+\frac{(-1)^n\,\Gamma(v_2)\Gamma(v_1)}{\Gamma(u_1)\,\Gamma(u_1+n)\,\Gamma(v_2-u_1-n)\,\Gamma(v_1-u_1-n)}\,x^{u_1+n-1}(1-x)^{\delta-1}$$
$$\times\sum_{k=0}^\infty\frac{(v_2-u_1)_k(v_1-u_1)_k}{k!\,(n+k)!}\left\{\psi(k+1)+\psi(n+k+1)-\psi(v_2-u_1+k)-\psi(v_1-u_1+k)\right\}x^k.$$
If $u_1-u_2\notin\mathbb{Z}$ then near $x=0$ the density is $\sim C_0x^{u_1-1}+C_1x^{u_2-1}$, but if $u_2=u_1+n$ then the density is $\sim C_0x^{u_1-1}+C_1x^{u_1+n-1}(-\log x)$.
5.2. Example: parametrized family with m = 3. Consider the determinant of a random $4\times 4$ state, that is, a random (with the Hilbert-Schmidt metric) positive-definite matrix with trace one. The moments can be directly computed for the real and complex cases and incorporated into a family of variables with a parameter. Here the variable is $256$ times the determinant (to make the range $[0,1]$), and $\alpha=\frac12$ for $\mathbb{R}$, $\alpha=1$ for $\mathbb{C}$, and $\alpha=2$ for $\mathbb{H}$ (the quaternions). This example is one of the motivations for the preparation of this exposition. The problem occurred in Slater's study of the determinant of a partially transposed state in its role as separability criterion [5].

The moment sequence is
$$\frac{(1)_n\,(\alpha+1)_n\,(2\alpha+1)_n}{\left(3\alpha+\frac54\right)_n\left(3\alpha+\frac32\right)_n\left(3\alpha+\frac74\right)_n},\quad n=0,1,2,\ldots;$$
thus $\delta=6\alpha+\frac32$. For generic $\alpha$ the density is
$$\frac{3(12\alpha+1)(6\alpha+1)(4\alpha+1)}{64\alpha^2}\,{}_3F_2\!\left(\begin{matrix}\frac34-3\alpha,\ \frac12-3\alpha,\ \frac14-3\alpha\\1-\alpha,\ 1-2\alpha\end{matrix};x\right)$$
$$-\frac{\Gamma\!\left(6\alpha+\frac52\right)\Gamma\!\left(3\alpha+\frac32\right)2^{10\alpha}}{4\alpha\sin(\pi\alpha)\,\Gamma(\alpha+1)\,\Gamma(8\alpha+1)}\,x^\alpha\,{}_3F_2\!\left(\begin{matrix}\frac34-2\alpha,\ \frac12-2\alpha,\ \frac14-2\alpha\\1-\alpha,\ 1+\alpha\end{matrix};x\right)$$
$$+\frac{(2\alpha+1)^2\,\pi^3\,\Gamma\!\left(3\alpha+\frac52\right)\Gamma\!\left(6\alpha+\frac52\right)2^{-8\alpha}}{48\sin(\pi\alpha)\sin(2\pi\alpha)\,\Gamma\!\left(2\alpha+\frac12\right)\Gamma\!\left(\alpha+\frac32\right)^3\Gamma(\alpha+1)^4}\,x^{2\alpha}\,{}_3F_2\!\left(\begin{matrix}\frac34-\alpha,\ \frac12-\alpha,\ \frac14-\alpha\\1+\alpha,\ 1+2\alpha\end{matrix};x\right).$$
For numeric computation at $\alpha=\frac12,1,2$ one can employ interpolation techniques; for example
$$f(\alpha_0;x)=\frac{2}{3}\left(f(\alpha_0+h;x)+f(\alpha_0-h;x)\right)-\frac{1}{6}\left(f(\alpha_0+2h;x)+f(\alpha_0-2h;x)\right)+\frac{1}{6}\frac{\partial^4}{\partial\alpha^4}f(\alpha_0+\xi h;x)\,h^4,$$
where $f(\alpha;x)$ denotes the density for specific $\alpha$ and the last term is the error (for some $\xi\in(-2,2)$); thus the perturbed densities can be computed by the general formula.
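The four-point rule above is a standard Richardson-type combination with $O(h^4)$ error; here is a quick check on a smooth stand-in function (ours; the actual densities are not needed for this).

```python
from math import exp

def four_point(f, a0, h):
    """(2/3)(f(a0+h)+f(a0-h)) - (1/6)(f(a0+2h)+f(a0-2h)) = f(a0) + O(h^4)."""
    return (2.0 / 3.0) * (f(a0 + h) + f(a0 - h)) \
         - (1.0 / 6.0) * (f(a0 + 2 * h) + f(a0 - 2 * h))

for h in (0.1, 0.05, 0.025):
    err = abs(four_point(exp, 0.0, h) - 1.0)
    print(h, err)  # errors shrink by about a factor of 16 per halving of h
```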
5.3. Example: the recurrence for m = 4. Given $u_1,\ldots,u_4$, $v_1,\ldots,v_4$, define $p(c)=\prod_{i=1}^4(c+1-u_i)$, $q(c)=\prod_{i=1}^4(c+2-v_i)$, $q_1(c)=(1+c)\,q(c)-c\,q(c-1)$, $\delta=\sum_{i=1}^4(v_i-u_i)$,
$$R_0'(\gamma)=p(\gamma)-q_1(\gamma),\qquad R_1'(\gamma)=\nabla p(\gamma)-\frac12\nabla q_1(\gamma),\qquad R_2'(\gamma)=\frac12\nabla^2p(\gamma)-\frac16\nabla^2q_1(\gamma);$$
then set $c_0=1$ and (specializing (4.1), using $(-n-\delta+1)_k=(-1)^k(\delta+n-1)\cdots(\delta+n-k)$)
$$c_1=-\frac{1}{\delta}\,R_2'(\delta-1)\,c_0,$$
$$c_2=-\frac{1}{2(\delta+1)}\,R_2'(\delta)\,c_1+\frac{1}{2\delta(\delta+1)}\,R_1'(\delta-1)\,c_0,$$
$$c_3=-\frac{1}{3(\delta+2)}\,R_2'(\delta+1)\,c_2+\frac{1}{3(\delta+1)(\delta+2)}\,R_1'(\delta)\,c_1-\frac{1}{3\delta(\delta+1)(\delta+2)}\,R_0'(\delta-1)\,c_0,$$
$$c_n=-\frac{1}{n(\delta+n-1)}\,R_2'(n+\delta-2)\,c_{n-1}+\frac{1}{n(\delta+n-2)_2}\,R_1'(n+\delta-3)\,c_{n-2}-\frac{1}{n(\delta+n-3)_3}\,R_0'(n+\delta-4)\,c_{n-3}-\frac{1}{n(\delta+n-3)_3}\,q(n+\delta-5)\,c_{n-4},$$
for $n\ge 4$.
5.4. Example: a Macdonald-Mehta-Selberg integral. Let $S$ be the $3$-dimensional unit sphere $\left\{x\in\mathbb{R}^4:\sum_{i=1}^4x_i^2=1\right\}$ with normalized surface measure $d\omega$. Consider $\prod_{1\le i<j\le 4}(x_i-x_j)^2$ as a random variable (that is, evaluated at a $d\omega$-random point). Interestingly, the maximum value $\frac{1}{108}$ is achieved at the $24$ points with (permutations of the) coordinates $\pm\frac16\sqrt{9\pm3\sqrt6}$, which is the zero-set of the rescaled Hermite polynomial $H_4\!\left(\sqrt6\,t\right)$. The Macdonald-Mehta-Selberg integral (see [2, p. 319]) implies (for $\kappa\ge 0$)
$$\int_S\prod_{1\le i<j\le 4}|x_i-x_j|^{2\kappa}\,d\omega(x)=\frac{1}{2^{6\kappa}}\,\frac{\Gamma(1+2\kappa)\,\Gamma(1+3\kappa)\,\Gamma(1+4\kappa)}{\Gamma(2+6\kappa)\,\Gamma(1+\kappa)^3}.$$
For integer values $\kappa=n$ the Gamma functions simplify to Pochhammer symbols; then by use of formulas like $(1)_{4n}=4^{4n}\left(\frac14\right)_n\left(\frac12\right)_n\left(\frac34\right)_n(1)_n$ the value becomes
$$\mu_n=\frac{1}{108^n}\,\frac{\left(\frac14\right)_n\left(\frac12\right)_n\left(\frac34\right)_n}{\left(\frac56\right)_n(1)_n\left(\frac76\right)_n}.$$
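This moment sequence can be spot-checked by direct Monte Carlo integration over the sphere (our sketch; the sample size and seed are arbitrary).

```python
import random
from math import sqrt

def poch(a, n):
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def d_moment(n):
    """Predicted n-th moment of D = 108 prod_{i<j}(x_i-x_j)^2, i.e. 108^n mu_n."""
    num = poch(0.25, n) * poch(0.5, n) * poch(0.75, n)
    den = poch(5.0 / 6.0, n) * poch(1.0, n) * poch(7.0 / 6.0, n)
    return num / den

random.seed(7)
N = 100_000
acc1 = acc2 = 0.0
for _ in range(N):
    g = [random.gauss(0.0, 1.0) for _ in range(4)]
    r = sqrt(sum(t * t for t in g))
    x = [t / r for t in g]                 # uniform point on the unit sphere S^3
    d = 108.0
    for i in range(4):
        for j in range(i + 1, 4):
            d *= (x[i] - x[j]) ** 2
    acc1 += d
    acc2 += d * d
print(acc1 / N, d_moment(1))   # both near 27/280 = 0.0964...
print(acc2 / N, d_moment(2))
```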
Let $f_D$ denote the density function of $D=108\prod_{1\le i<j\le 4}(x_i-x_j)^2$ (by the general results the range of $D$ is $[0,1]$). Applying Theorem 1 we find
$$f_D(x)=\frac{\sqrt2}{3\pi}\,(1-x)^{1/2}\left\{1+\frac{221}{216}(1-x)+\frac{156697}{155520}(1-x)^2+\frac{232223093}{235146240}(1-x)^3+\cdots\right\}.$$
By formula (1.3),
$$f_D(x)=\gamma_1x^{-3/4}\,{}_3F_2\!\left(\begin{matrix}\frac5{12},\ \frac14,\ \frac1{12}\\\frac34,\ \frac12\end{matrix};x\right)+\gamma_2x^{-1/2}\,{}_3F_2\!\left(\begin{matrix}\frac23,\ \frac12,\ \frac13\\\frac34,\ \frac54\end{matrix};x\right)+\gamma_3x^{-1/4}\,{}_3F_2\!\left(\begin{matrix}\frac34,\ \frac7{12},\ \frac{11}{12}\\\frac32,\ \frac54\end{matrix};x\right),$$
where
$$\gamma_1=\frac{\pi}{3\,\Gamma\!\left(\frac34\right)^2\Gamma\!\left(\frac7{12}\right)\Gamma\!\left(\frac{11}{12}\right)},\qquad\gamma_2=-\frac{2\sqrt3}{3\pi},\qquad\gamma_3=\frac{1}{3\pi^3}\,\Gamma\!\left(\frac34\right)^2\Gamma\!\left(\frac7{12}\right)\Gamma\!\left(\frac{11}{12}\right).$$
It is straightforward to derive a series for the cumulative distribution function $F_D(x)=\int_0^xf_D(t)\,dt$. Figures 1 and 2 are graphs of $f_D$ and $F_D$ respectively (of course there is a vertical asymptote for $f_D$). For the computations we used terms up to the eighth power, with the series in $x$ for $0<x\le 0.55$ and the $(1-x)$ series for $0.55<x\le 1$. For a better view there is a graph of $F_D(x)$ for $0\le x\le 0.04$ in Fig. 3 and of $1-F_D(x)$ for $0.4\le x\le 1$ in Fig. 4.
References
[1] W. N. Bailey, Generalized Hypergeometric Series, Cambridge University Press, 1935.
[2] C. F. Dunkl and Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and its Applications 81, Cambridge University Press, Cambridge, 2001.
[3] N. N. Lebedev, Special Functions and their Applications, transl. by R. A. Silverman, Dover
Publications, New York, 1972.
[4] F. Olver, D. Lozier, R. Boisvert, C. Clark, eds., NIST Handbook of Mathematical Functions, Cambridge University Press, 2010; Digital Library of Mathematical Functions,
http://dlmf.nist.gov.
[5] P. B. Slater, Bures and Hilbert-Schmidt 2 × 2 determinantal moments, J. Phys. A: Math.
Theor. 45 (2012), 455303, arXiv:1207.1297v2, 4 Oct. 2012.
Dept. of Mathematics, PO Box 400137, University of Virginia, Charlottesville VA
22904-4137
E-mail address: [email protected]
URL: http://people.virginia.edu/~cfd5z/home.html
[Figure 1. Density of D, partial view]

[Figure 2. Cumulative distribution function of D]

[Figure 3. Part of cumulative distribution of D]

[Figure 4. Part of complementary cumulative distribution of D]