THE GAUSSIAN INTEGRAL

KEITH CONRAD

Let
$$ I = \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx, \qquad J = \int_0^{\infty} e^{-x^2}\,dx, \qquad\text{and}\qquad K = \int_{-\infty}^{\infty} e^{-\pi x^2}\,dx. $$
These positive numbers are related: $J = I/(2\sqrt{2})$ and $K = I/\sqrt{2\pi}$.

Theorem. With notation as above, $I = \sqrt{2\pi}$, or equivalently $J = \sqrt{\pi}/2$, or equivalently $K = 1$.

We will give multiple proofs of this. (Other lists of proofs are in [4] and [9].) It is subtle since $e^{-\frac{1}{2}x^2}$ has no simple antiderivative. For comparison, $\int_0^{\infty} xe^{-\frac{1}{2}x^2}\,dx$ can be computed with the antiderivative $-e^{-\frac{1}{2}x^2}$ and equals 1.

1. First Proof: Polar coordinates


The most widely known proof, due to Poisson [9, p. 3], expresses $J^2$ as a double integral and then uses polar coordinates. To start, write $J^2$ as an iterated integral using single-variable calculus:
$$ J^2 = J\int_0^{\infty} e^{-y^2}\,dy = \int_0^{\infty} Je^{-y^2}\,dy = \int_0^{\infty}\left(\int_0^{\infty} e^{-x^2}\,dx\right)e^{-y^2}\,dy = \int_0^{\infty}\int_0^{\infty} e^{-(x^2+y^2)}\,dx\,dy. $$
View this as a double integral over the first quadrant. To compute it with polar coordinates, write the first quadrant as $\{(r,\theta) : r \ge 0 \text{ and } 0 \le \theta \le \pi/2\}$. Writing $x^2+y^2$ as $r^2$ and $dx\,dy$ as $r\,dr\,d\theta$,
$$ J^2 = \int_0^{\pi/2}\int_0^{\infty} e^{-r^2} r\,dr\,d\theta = \int_0^{\infty} re^{-r^2}\,dr \cdot \int_0^{\pi/2} d\theta = \left(-\frac{1}{2}e^{-r^2}\Big|_0^{\infty}\right)\cdot\frac{\pi}{2} = \frac{1}{2}\cdot\frac{\pi}{2} = \frac{\pi}{4}. $$
Since $J > 0$, $J = \sqrt{\pi}/2$.¹ It is argued in [1] that this method can't be used on any other integral.

2. Second Proof: Another change of variables


Our next proof uses another change of variables to compute $J^2$. As before,
$$ J^2 = \int_0^{\infty}\left(\int_0^{\infty} e^{-(x^2+y^2)}\,dx\right)dy. $$

¹For a visualization of this calculation as a volume, in terms of $\int_{-\infty}^{\infty} e^{-x^2}\,dx$ instead of $J$, see https://www.youtube.com/watch?v=cy8r7WSuT1I. We'll do a volume calculation for $I^2$ in Section 5.

Instead of using polar coordinates, set $x = yt$ in the inner integral ($y$ is fixed). Then $dx = y\,dt$ and
$$ (2.1) \qquad J^2 = \int_0^{\infty}\left(\int_0^{\infty} e^{-y^2(t^2+1)}y\,dt\right)dy = \int_0^{\infty}\left(\int_0^{\infty} ye^{-y^2(t^2+1)}\,dy\right)dt, $$
where the interchange of integrals is justified by Fubini's theorem for improper Riemann integrals. (The appendix gives an approach using Fubini's theorem for Riemann integrals on rectangles.) Since $\int_0^{\infty} ye^{-ay^2}\,dy = \frac{1}{2a}$ for $a > 0$, we have
$$ J^2 = \int_0^{\infty} \frac{dt}{2(t^2+1)} = \frac{1}{2}\cdot\frac{\pi}{2} = \frac{\pi}{4}, $$
so $J = \sqrt{\pi}/2$. This proof is due to Laplace [7, pp. 94–96] and historically precedes the widely used technique of the previous proof. We will see in Section 9 what Laplace's first proof was.
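As a sanity check (an illustration added here, not part of Laplace's argument), both the inner-integral formula and the remaining $t$-integral can be verified numerically:

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The inner-integral formula ∫_0^∞ y e^{-ay^2} dy = 1/(2a), checked at a = 2
# (truncated at y = 10, where the tail is negligible).
inner = midpoint(lambda y: y * math.exp(-2.0 * y * y), 0.0, 10.0, 20000)

# J^2 = ∫_0^∞ dt/(2(t^2+1)): integrate to t = 1000 and add the exact tail,
# which is (π/2 - arctan 1000)/2.
c = 1000.0
Jsq = midpoint(lambda t: 0.5 / (t * t + 1.0), 0.0, c, 200000) + (math.pi / 2 - math.atan(c)) / 2

print(inner, 0.25)
print(Jsq, math.pi / 4)
```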

3. Third Proof: Differentiating under the integral sign


For $t > 0$, set
$$ A(t) = \left(\int_0^t e^{-x^2}\,dx\right)^2. $$
The integral we want to calculate is $A(\infty) = J^2$; we will find $A(\infty)$ and then take a square root.
Differentiating $A(t)$ with respect to $t$ and using the Fundamental Theorem of Calculus,
$$ A'(t) = 2\int_0^t e^{-x^2}\,dx \cdot e^{-t^2} = 2e^{-t^2}\int_0^t e^{-x^2}\,dx. $$
Let $x = ty$, so
$$ A'(t) = 2e^{-t^2}\int_0^1 te^{-t^2y^2}\,dy = \int_0^1 2te^{-(1+y^2)t^2}\,dy. $$
The function under the integral sign is easily antidifferentiated with respect to $t$:
$$ A'(t) = -\int_0^1 \frac{\partial}{\partial t}\frac{e^{-(1+y^2)t^2}}{1+y^2}\,dy = -\frac{d}{dt}\int_0^1 \frac{e^{-(1+y^2)t^2}}{1+y^2}\,dy. $$
Letting
$$ B(t) = \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx, $$
we have $A'(t) = -B'(t)$ for all $t > 0$, so there is a constant $C$ such that
$$ (3.1) \qquad A(t) = -B(t) + C $$
for all $t > 0$. To find $C$, we let $t \to 0^+$ in (3.1). The left side tends to $\left(\int_0^0 e^{-x^2}\,dx\right)^2 = 0$ while the right side tends to $-\int_0^1 dx/(1+x^2) + C = -\pi/4 + C$. Thus $C = \pi/4$, so (3.1) becomes
$$ \left(\int_0^t e^{-x^2}\,dx\right)^2 = \frac{\pi}{4} - \int_0^1 \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx. $$
Letting $t \to \infty$ in this equation, we obtain $J^2 = \pi/4$, so $J = \sqrt{\pi}/2$.
A comparison of this proof with the first proof is in [20].
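The identity $A(t) + B(t) = \pi/4$ behind this proof can be checked numerically at sample points; the following Python sketch is our own illustration, with arbitrary step counts:

```python
import math

def midpoint(f, a, b, n=20000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def A(t):
    return midpoint(lambda x: math.exp(-x * x), 0.0, t) ** 2

def B(t):
    return midpoint(lambda x: math.exp(-t * t * (1 + x * x)) / (1 + x * x), 0.0, 1.0)

# (3.1) with C = π/4 says A(t) + B(t) = π/4 for every t > 0.
vals = [A(t) + B(t) for t in (0.5, 1.0, 3.0)]
print(vals, math.pi / 4)
```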

4. Fourth Proof: Another differentiation under the integral sign


Here is a second approach to finding $J$ by differentiation under the integral sign. I heard about it from Michael Rozman [14], who modified an idea on math.stackexchange [22], and in a slightly less elegant form it appeared much earlier in [18].
For $t \in \mathbf{R}$, set
$$ F(t) = \int_0^{\infty} \frac{e^{-t^2(1+x^2)}}{1+x^2}\,dx. $$
Then $F(0) = \int_0^{\infty} dx/(1+x^2) = \pi/2$ and $F(\infty) = 0$. Differentiating under the integral sign,
$$ F'(t) = -2t\int_0^{\infty} e^{-t^2(1+x^2)}\,dx = -2te^{-t^2}\int_0^{\infty} e^{-(tx)^2}\,dx. $$
Make the substitution $y = tx$, with $dy = t\,dx$, so
$$ F'(t) = -2e^{-t^2}\int_0^{\infty} e^{-y^2}\,dy = -2Je^{-t^2}. $$
For $b > 0$, integrate both sides from 0 to $b$ and use the Fundamental Theorem of Calculus:
$$ \int_0^b F'(t)\,dt = -2J\int_0^b e^{-t^2}\,dt \implies F(b) - F(0) = -2J\int_0^b e^{-t^2}\,dt. $$
Letting $b \to \infty$ in the last equation,
$$ 0 - \frac{\pi}{2} = -2J^2 \implies J^2 = \frac{\pi}{4} \implies J = \frac{\sqrt{\pi}}{2}. $$
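The integrated form $F(b) - F(0) = -2J\int_0^b e^{-t^2}\,dt$ can be tested numerically; this sketch is our own illustration, with an arbitrary truncation of $F$ at $x = 10^4$:

```python
import math

def midpoint(f, a, b, n):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def F(t):
    # F(t) = ∫_0^∞ e^{-t^2(1+x^2)}/(1+x^2) dx, truncated at x = 10^4
    # (the omitted tail of 1/(1+x^2) is about 10^-4).
    return midpoint(lambda x: math.exp(-t * t * (1 + x * x)) / (1 + x * x), 0.0, 1e4, 500000)

J = math.sqrt(math.pi) / 2
b = 1.0
lhs = F(b) - F(0.0)
rhs = -2 * J * midpoint(lambda t: math.exp(-t * t), 0.0, b, 20000)
print(lhs, rhs)  # the two sides agree closely
```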

5. Fifth Proof: A volume integral


Our next proof is due to T. P. Jameson [5] and it was rediscovered by A. L. Delgado [3]. Revolve the curve $z = e^{-\frac{1}{2}x^2}$ in the $xz$-plane around the $z$-axis to produce the "bell surface" $z = e^{-\frac{1}{2}(x^2+y^2)}$. See below, where the $z$-axis is vertical and passes through the top point, the $x$-axis lies just under the surface through the point 0 in front, and the $y$-axis lies just under the surface through the point 0 on the left. We will compute the volume $V$ below the surface and above the $xy$-plane in two ways.
First we compute $V$ by horizontal slices, which are discs: $V = \int_0^1 A(z)\,dz$ where $A(z)$ is the area of the disc formed by slicing the surface at height $z$. Writing the radius of the disc at height $z$ as $r(z)$, $A(z) = \pi r(z)^2$. To compute $r(z)$, the surface cuts the $xz$-plane at a pair of points $(x, e^{-\frac{1}{2}x^2})$ where the height is $z$, so $e^{-\frac{1}{2}x^2} = z$. Thus $x^2 = -2\ln z$. Since $x$ is the distance of these points from the $z$-axis, $r(z)^2 = x^2 = -2\ln z$, so $A(z) = \pi r(z)^2 = -2\pi\ln z$. Therefore
$$ V = \int_0^1 -2\pi\ln z\,dz = -2\pi(z\ln z - z)\Big|_0^1 = -2\pi\left(-1 - \lim_{z\to 0^+} z\ln z\right). $$
By L'Hospital's rule, $\lim_{z\to 0^+} z\ln z = 0$, so $V = 2\pi$. (A calculation of $V$ by shells is in [11].)

Next we compute the volume by vertical slices in planes $x = \text{constant}$. Vertical slices are scaled bell curves: look at the black contour lines in the picture. The equation of the bell curve along the top of the vertical slice with $x$-coordinate $x$ is $z = e^{-\frac{1}{2}(x^2+y^2)}$, where $y$ varies and $x$ is fixed. Then $V = \int_{-\infty}^{\infty} A(x)\,dx$, where $A(x)$ is the area of the $x$-slice:
$$ A(x) = \int_{-\infty}^{\infty} e^{-\frac{1}{2}(x^2+y^2)}\,dy = e^{-\frac{1}{2}x^2}\int_{-\infty}^{\infty} e^{-\frac{1}{2}y^2}\,dy = e^{-\frac{1}{2}x^2} I. $$
Thus
$$ V = \int_{-\infty}^{\infty} A(x)\,dx = \int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2} I\,dx = I\int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx = I^2. $$
Comparing the two formulas for $V$, we have $2\pi = I^2$, so $I = \sqrt{2\pi}$.
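Both volume computations are easy to reproduce numerically. This sketch is our own illustration; the truncation at $\pm 10$ and the step counts are arbitrary:

```python
import math

def midpoint(f, a, b, n=200000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Horizontal slices: V = ∫_0^1 -2π ln z dz. The singularity at z = 0 is integrable,
# and the midpoint rule never samples the endpoint itself.
V_slices = midpoint(lambda z: -2 * math.pi * math.log(z), 0.0, 1.0)

# Vertical slices: V = I^2 with I = ∫_{-∞}^{∞} e^{-x^2/2} dx, truncated at ±10.
I = midpoint(lambda x: math.exp(-x * x / 2), -10.0, 10.0)
V_vertical = I * I

print(V_slices, V_vertical, 2 * math.pi)
```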

6. Sixth Proof: The Γ-function


For any integer $n \ge 0$, we have $n! = \int_0^{\infty} t^n e^{-t}\,dt$. For $x > 0$ we define
$$ \Gamma(x) = \int_0^{\infty} t^x e^{-t}\,\frac{dt}{t}, $$
so $\Gamma(n) = (n-1)!$ when $n \ge 1$. Using integration by parts, $\Gamma(x+1) = x\Gamma(x)$. One of the basic properties of the $\Gamma$-function [15, pp. 193–194] is
$$ (6.1) \qquad \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)} = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt. $$
Set $x = y = 1/2$:
$$ \Gamma\left(\frac{1}{2}\right)^2 = \int_0^1 \frac{dt}{\sqrt{t(1-t)}}. $$
Note
$$ \Gamma\left(\frac{1}{2}\right) = \int_0^{\infty} \sqrt{t}\,e^{-t}\,\frac{dt}{t} = \int_0^{\infty} \frac{e^{-t}}{\sqrt{t}}\,dt = \int_0^{\infty} \frac{e^{-x^2}}{x}\,2x\,dx = 2\int_0^{\infty} e^{-x^2}\,dx = 2J, $$
so $4J^2 = \int_0^1 dt/\sqrt{t(1-t)}$. With the substitution $t = \sin^2\theta$,
$$ 4J^2 = \int_0^{\pi/2} \frac{2\sin\theta\cos\theta\,d\theta}{\sin\theta\cos\theta} = 2\cdot\frac{\pi}{2} = \pi, $$
so $J = \sqrt{\pi}/2$. Equivalently, $\Gamma(1/2) = \sqrt{\pi}$. Any method that proves $\Gamma(1/2) = \sqrt{\pi}$ is also a method that calculates $\int_0^{\infty} e^{-x^2}\,dx$.
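The standard library's $\Gamma$-function makes this proof easy to check numerically; the sketch below is our own illustration (the singular integrand is handled crudely, relying on the midpoint rule never sampling the endpoints):

```python
import math

def midpoint(f, a, b, n=200000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Γ(1/2) = √π; math.gamma is the standard-library Γ-function.
g_half = math.gamma(0.5)

# 4J^2 = ∫_0^1 dt/√(t(1-t)); the endpoint singularities are integrable and
# the midpoint rule never samples t = 0 or t = 1.
four_Jsq = midpoint(lambda t: 1.0 / math.sqrt(t * (1.0 - t)), 0.0, 1.0)

print(g_half ** 2, four_Jsq, math.pi)  # all three ≈ π
```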

7. Seventh Proof: Asymptotic estimates



We will show $J = \sqrt{\pi}/2$ by a technique whose steps are based on [16, p. 371].
For $x \ge 0$, power series expansions show $1 + x \le e^x \le 1/(1-x)$ (the upper bound for $0 \le x < 1$). Reciprocating and replacing $x$ with $x^2$, we get
$$ (7.1) \qquad 1 - x^2 \le e^{-x^2} \le \frac{1}{1+x^2} $$
for all $x \in \mathbf{R}$.
For any positive integer $n$, raise the terms in (7.1) to the $n$th power and integrate from 0 to 1:
$$ \int_0^1 (1-x^2)^n\,dx \le \int_0^1 e^{-nx^2}\,dx \le \int_0^1 \frac{dx}{(1+x^2)^n}. $$
Using the changes of variables $x = \sin\theta$ on the left, $x = y/\sqrt{n}$ in the middle, and $x = \tan\theta$ on the right,
$$ (7.2) \qquad \int_0^{\pi/2} (\cos\theta)^{2n+1}\,d\theta \le \frac{1}{\sqrt{n}}\int_0^{\sqrt{n}} e^{-y^2}\,dy \le \int_0^{\pi/4} (\cos\theta)^{2n-2}\,d\theta < \int_0^{\pi/2} (\cos\theta)^{2n-2}\,d\theta. $$
Set $I_k = \int_0^{\pi/2} (\cos\theta)^k\,d\theta$, so $I_0 = \pi/2$, $I_1 = 1$, and (7.2) implies
$$ (7.3) \qquad \sqrt{n}\,I_{2n+1} \le \int_0^{\sqrt{n}} e^{-y^2}\,dy < \sqrt{n}\,I_{2n-2}. $$
We will show that as $k \to \infty$, $kI_k^2 \to \pi/2$. Then
$$ \sqrt{n}\,I_{2n+1} = \frac{\sqrt{n}}{\sqrt{2n+1}}\sqrt{2n+1}\,I_{2n+1} \to \frac{1}{\sqrt{2}}\sqrt{\frac{\pi}{2}} = \frac{\sqrt{\pi}}{2} $$
and
$$ \sqrt{n}\,I_{2n-2} = \frac{\sqrt{n}}{\sqrt{2n-2}}\sqrt{2n-2}\,I_{2n-2} \to \frac{1}{\sqrt{2}}\sqrt{\frac{\pi}{2}} = \frac{\sqrt{\pi}}{2}, $$
so by (7.3), $\int_0^{\sqrt{n}} e^{-y^2}\,dy \to \sqrt{\pi}/2$. Thus $J = \sqrt{\pi}/2$.

To show $kI_k^2 \to \pi/2$, first we compute several values of $I_k$ explicitly by a recursion. Using integration by parts,
$$ I_k = \int_0^{\pi/2} (\cos\theta)^k\,d\theta = \int_0^{\pi/2} (\cos\theta)^{k-1}\cos\theta\,d\theta = (k-1)(I_{k-2} - I_k), $$
so
$$ (7.4) \qquad I_k = \frac{k-1}{k}I_{k-2}. $$
Using (7.4) and the initial values $I_0 = \pi/2$ and $I_1 = 1$, the first few values of $I_k$ are computed and listed in Table 1.

 k   I_k             k   I_k
 0   π/2             1   1
 2   (1/2)(π/2)      3   2/3
 4   (3/8)(π/2)      5   8/15
 6   (15/48)(π/2)    7   48/105
Table 1.

From Table 1 we see that
$$ (7.5) \qquad I_{2n}I_{2n+1} = \frac{1}{2n+1}\cdot\frac{\pi}{2} $$
for $0 \le n \le 3$, and this can be proved for all $n$ by induction using (7.4). Since $0 \le \cos\theta \le 1$ for $\theta \in [0,\pi/2]$, we have $I_k \le I_{k-1} \le I_{k-2} = \frac{k}{k-1}I_k$ by (7.4), so $I_{k-1} \sim I_k$ as $k \to \infty$. Therefore (7.5) implies
$$ I_{2n}^2 \sim \frac{1}{2n}\cdot\frac{\pi}{2} \implies (2n)I_{2n}^2 \to \frac{\pi}{2} $$
as $n \to \infty$. Then
$$ (2n+1)I_{2n+1}^2 \sim (2n)I_{2n}^2 \to \frac{\pi}{2} $$
as $n \to \infty$, so $kI_k^2 \to \pi/2$ as $k \to \infty$. This completes our proof that $J = \sqrt{\pi}/2$.
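The recursion (7.4) and the limit $kI_k^2 \to \pi/2$ can be checked numerically; this sketch is our own illustration, and the index 20000 is an arbitrary stopping point:

```python
import math

# I_k = ∫_0^{π/2} cos^k θ dθ via the recursion (7.4): I_k = ((k-1)/k) I_{k-2},
# with I_0 = π/2 and I_1 = 1.
I = [math.pi / 2, 1.0]
for k in range(2, 20001):
    I.append((k - 1) / k * I[k - 2])

# The product identity (7.5): I_{2n} I_{2n+1} = (1/(2n+1))(π/2), exact up to rounding.
product_check = 101 * I[100] * I[101]

# k I_k^2 tends to π/2, with error on the order of 1/k.
limit_check = 20000 * I[20000] ** 2

print(product_check, limit_check, math.pi / 2)
```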
Remark 7.1. This proof is closely related to the sixth proof using the $\Gamma$-function. Indeed, by (6.1)
$$ \frac{\Gamma(\frac{k+1}{2})\Gamma(\frac{1}{2})}{\Gamma(\frac{k+1}{2}+\frac{1}{2})} = \int_0^1 t^{(k+1)/2-1}(1-t)^{1/2-1}\,dt, $$
and with the change of variables $t = (\cos\theta)^2$ for $0 \le \theta \le \pi/2$, the integral on the right is equal to $2\int_0^{\pi/2}(\cos\theta)^k\,d\theta = 2I_k$, so (7.5) is the same as
$$ I_{2n}I_{2n+1} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})}{2\Gamma(\frac{2n+2}{2})}\cdot\frac{\Gamma(\frac{2n+2}{2})\Gamma(\frac{1}{2})}{2\Gamma(\frac{2n+3}{2})} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})^2}{4\Gamma(\frac{2n+1}{2}+1)} = \frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{1}{2})^2}{4\cdot\frac{2n+1}{2}\Gamma(\frac{2n+1}{2})} = \frac{\Gamma(\frac{1}{2})^2}{2(2n+1)}. $$

By (7.5), $\pi = \Gamma(1/2)^2$. We saw in the sixth proof that $\Gamma(1/2) = \sqrt{\pi}$ if and only if $J = \sqrt{\pi}/2$.

8. Eighth Proof: Stirling’s Formula


Besides the integral formula $\int_{-\infty}^{\infty} e^{-\frac{1}{2}x^2}\,dx = \sqrt{2\pi}$ that we have been discussing, another place in mathematics where $\sqrt{2\pi}$ appears is in Stirling's formula:
$$ n! \sim \frac{n^n}{e^n}\sqrt{2\pi n} \quad\text{as } n \to \infty. $$
In 1730 De Moivre proved $n! \sim C(n^n/e^n)\sqrt{n}$ for some positive number $C$ without being able to determine $C$. Stirling soon thereafter showed $C = \sqrt{2\pi}$ and wound up having the whole formula named after him. We will show that determining that the constant $C$ in Stirling's formula is $\sqrt{2\pi}$ is equivalent to showing that $J = \sqrt{\pi}/2$ (or, equivalently, that $I = \sqrt{2\pi}$).
Applying (7.4) repeatedly,
$$ I_{2n} = \frac{2n-1}{2n}I_{2n-2} = \frac{(2n-1)(2n-3)}{(2n)(2n-2)}I_{2n-4} = \cdots = \frac{(2n-1)(2n-3)(2n-5)\cdots(5)(3)(1)}{(2n)(2n-2)(2n-4)\cdots(6)(4)(2)}I_0. $$
Inserting $(2n-2)(2n-4)(2n-6)\cdots(6)(4)(2)$ in the top and bottom,
$$ I_{2n} = \frac{(2n-1)(2n-2)(2n-3)(2n-4)(2n-5)\cdots(6)(5)(4)(3)(2)(1)}{(2n)\bigl((2n-2)(2n-4)\cdots(6)(4)(2)\bigr)^2}\cdot\frac{\pi}{2} = \frac{(2n-1)!}{2n\,(2^{n-1}(n-1)!)^2}\cdot\frac{\pi}{2}. $$
Applying De Moivre's asymptotic formula $n! \sim C(n/e)^n\sqrt{n}$,
$$ I_{2n} \sim \frac{C((2n-1)/e)^{2n-1}\sqrt{2n-1}}{2n\bigl(2^{n-1}C((n-1)/e)^{n-1}\sqrt{n-1}\bigr)^2}\cdot\frac{\pi}{2} = \frac{(2n-1)^{2n}\frac{1}{2n-1}\sqrt{2n-1}}{2n\cdot 2^{2(n-1)}\,Ce\,(n-1)^{2n}\frac{1}{(n-1)^2}(n-1)}\cdot\frac{\pi}{2} $$
as $n \to \infty$. For any $a \in \mathbf{R}$, $(1+a/n)^n \to e^a$ as $n \to \infty$, so $(n+a)^n \sim e^a n^n$. Substituting this into the above formula with $a = -1$ and $n$ replaced by $2n$,
$$ (8.1) \qquad I_{2n} \sim \frac{e^{-1}(2n)^{2n}\frac{1}{\sqrt{2n}}}{2n\cdot 2^{2(n-1)}\,Ce\,(e^{-1}n^n)^2\frac{1}{n^2}\,n}\cdot\frac{\pi}{2} = \frac{\pi}{C\sqrt{2n}}. $$
Since $I_{k-1} \sim I_k$, the outer terms in (7.3) are both asymptotic to $\sqrt{n}\,I_{2n} \sim \pi/(C\sqrt{2})$ by (8.1). Therefore
$$ \int_0^{\sqrt{n}} e^{-y^2}\,dy \to \frac{\pi}{C\sqrt{2}} $$
as $n \to \infty$, so $J = \pi/(C\sqrt{2})$. Therefore $C = \sqrt{2\pi}$ if and only if $J = \sqrt{\pi}/2$.
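De Moivre's constant $C$ can be estimated numerically; the sketch below is our own illustration, computing in logarithms (via `lgamma`) so the factorials don't overflow:

```python
import math

# De Moivre: n! ~ C (n/e)^n √n for some constant C. Estimate C at increasing n
# and compare with Stirling's value √(2π).
def C_est(n):
    log_fact = math.lgamma(n + 1)                              # log(n!)
    log_demoivre = n * (math.log(n) - 1) + 0.5 * math.log(n)   # log((n/e)^n √n)
    return math.exp(log_fact - log_demoivre)

target = math.sqrt(2 * math.pi)
print(C_est(10), C_est(1000), target)  # estimates approach 2.5066...
```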

9. Ninth Proof: The original proof



The original proof that $J = \sqrt{\pi}/2$ is due to Laplace [8] in 1774. (An English translation of Laplace's article is mentioned in the bibliographic citation for [8], with preliminary comments on that article in [17].) He wanted to compute
$$ (9.1) \qquad \int_0^1 \frac{dx}{\sqrt{-\log x}}. $$
Setting $y = \sqrt{-\log x}$, this integral is $2\int_0^{\infty} e^{-y^2}\,dy = 2J$, so we expect (9.1) to be $\sqrt{\pi}$.
Laplace's starting point for evaluating (9.1) was a formula of Euler:
$$ (9.2) \qquad \int_0^1 \frac{x^r\,dx}{\sqrt{1-x^{2s}}}\int_0^1 \frac{x^{s+r}\,dx}{\sqrt{1-x^{2s}}} = \frac{1}{s(r+1)}\cdot\frac{\pi}{2} $$
for positive $r$ and $s$. (Laplace himself said this formula held "whatever be" $r$ or $s$, but if $s < 0$ then the number under the square root is negative.) Accepting (9.2), let $r \to 0$ in it to get
$$ (9.3) \qquad \int_0^1 \frac{dx}{\sqrt{1-x^{2s}}}\int_0^1 \frac{x^s\,dx}{\sqrt{1-x^{2s}}} = \frac{1}{s}\cdot\frac{\pi}{2}. $$
Now let $s \to 0$ in (9.3). Then $1 - x^{2s} \sim -2s\log x$ by L'Hopital's rule, so (9.3) becomes
$$ \left(\int_0^1 \frac{dx}{\sqrt{-\log x}}\right)^2 = \pi. $$
Thus (9.1) is $\sqrt{\pi}$.
Euler's formula (9.2) looks mysterious, but we have met it before. In the formula let $x^s = \cos\theta$ with $0 \le \theta \le \pi/2$. Then $x = (\cos\theta)^{1/s}$, and after some calculations (9.2) turns into
$$ (9.4) \qquad \int_0^{\pi/2} (\cos\theta)^{(r+1)/s-1}\,d\theta \int_0^{\pi/2} (\cos\theta)^{(r+1)/s}\,d\theta = \frac{1}{(r+1)/s}\cdot\frac{\pi}{2}. $$
We used the integral $I_k = \int_0^{\pi/2}(\cos\theta)^k\,d\theta$ before when $k$ is a nonnegative integer. This notation makes sense when $k$ is any positive real number, and then (9.4) assumes the form $I_{\alpha}I_{\alpha+1} = \frac{1}{\alpha+1}\cdot\frac{\pi}{2}$ for $\alpha = (r+1)/s - 1$, which is (7.5) with a possibly nonintegral index. Letting $r = 0$ and $s = 1/(2n+1)$ in (9.4) recovers (7.5). Letting $s \to 0$ in (9.3) corresponds to letting $n \to \infty$ in (7.5), so the proof in Section 7 is in essence a more detailed version of Laplace's 1774 argument.
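Euler's formula (9.2) can be spot-checked numerically; this sketch is our own illustration, with sample values $r = 0.5$, $s = 1.5$ chosen arbitrarily:

```python
import math

def midpoint(f, a, b, n=400000):
    # Composite midpoint rule on [a, b]; the singularity at x = 1 is integrable
    # and the rule never samples the endpoint.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Euler's formula (9.2):
# ∫_0^1 x^r/√(1-x^{2s}) dx · ∫_0^1 x^{s+r}/√(1-x^{2s}) dx = (1/(s(r+1)))·(π/2).
r, s = 0.5, 1.5
first = midpoint(lambda x: x ** r / math.sqrt(1.0 - x ** (2 * s)), 0.0, 1.0)
second = midpoint(lambda x: x ** (s + r) / math.sqrt(1.0 - x ** (2 * s)), 0.0, 1.0)
print(first * second, math.pi / (2 * s * (r + 1)))
```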

10. Tenth Proof: Residue theorem


We will calculate $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx$ using contour integrals and the residue theorem. However, we can't just integrate $e^{-z^2/2}$, as this function has no poles. For a long time nobody knew how to handle this integral using contour integration. For instance, in 1914 Watson [19, p. 79] wrote "Cauchy's theorem cannot be employed to evaluate all definite integrals; thus $\int_0^{\infty} e^{-x^2}\,dx$ has not been evaluated except by other methods." In the 1940s several contour integral solutions were published using awkward contours such as parallelograms [10], [12, Sect. 5] (see [2, Exer. 9, p. 113] for a recent appearance). Our approach will follow Kneser [6, p. 121] (see also [13, pp. 413–414] or [21]), using a rectangular contour and the function
$$ \frac{e^{-z^2/2}}{1+e^{-\sqrt{\pi}(1+i)z}}. $$

This function comes out of nowhere, so our first task is to motivate its introduction. We seek a meromorphic function $f(z)$ to integrate around the rectangular contour $\gamma_R$ in the figure below, with vertices at $-R$, $R$, $R+ib$, and $-R+ib$, where $b$ will be fixed and we let $R \to \infty$.

Suppose $f(z) \to 0$ along the right and left sides of $\gamma_R$ uniformly as $R \to \infty$. Then by applying the residue theorem and letting $R \to \infty$, we would obtain (if the integrals converge)
$$ \int_{-\infty}^{\infty} f(x)\,dx + \int_{\infty}^{-\infty} f(x+ib)\,dx = 2\pi i\sum_a \operatorname{Res}_{z=a} f(z), $$
where the sum is over the poles of $f(z)$ with imaginary part between 0 and $b$. This is equivalent to
$$ \int_{-\infty}^{\infty} \bigl(f(x) - f(x+ib)\bigr)\,dx = 2\pi i\sum_a \operatorname{Res}_{z=a} f(z). $$
Therefore we want $f(z)$ to satisfy
$$ (10.1) \qquad f(z) - f(z+ib) = e^{-z^2/2}, $$
where $f(z)$ and $b$ need to be determined.
Let's try $f(z) = e^{-z^2/2}/d(z)$, for an unknown denominator $d(z)$ whose zeros are poles of $f(z)$. We want $f(z)$ to satisfy
$$ (10.2) \qquad f(z) - f(z+\tau) = e^{-z^2/2} $$
for some $\tau$ (which will not be purely imaginary, so (10.1) doesn't quite work, but (10.1) is only motivation). Substituting $e^{-z^2/2}/d(z)$ for $f(z)$ in (10.2) gives us
$$ (10.3) \qquad e^{-z^2/2}\left(\frac{1}{d(z)} - \frac{e^{-\tau z-\tau^2/2}}{d(z+\tau)}\right) = e^{-z^2/2}. $$
Suppose $d(z+\tau) = d(z)$. Then (10.3) implies
$$ d(z) = 1 - e^{-\tau z-\tau^2/2}, $$
and with this definition of $d(z)$, $e^{-z^2/2}/d(z)$ satisfies (10.2) if and only if $e^{\tau^2} = 1$, or equivalently $\tau^2 \in 2\pi i\mathbf{Z}$. The simplest nonzero solution is $\tau = \sqrt{\pi}(1+i)$. From now on this is the value of $\tau$, so $e^{-\tau^2/2} = e^{-i\pi} = -1$ and we set
$$ f(z) = \frac{e^{-z^2/2}}{d(z)} = \frac{e^{-z^2/2}}{1+e^{-\tau z}}, $$
which is Kneser's function mentioned earlier. This function satisfies (10.2) and we henceforth ignore the motivation (10.1). Poles of $f(z)$ are at odd integral multiples of $\tau/2$.
We will integrate this f (z) around the rectangular contour γR below, whose height is Im(τ ).

The poles of $f(z)$ nearest the origin are plotted in the figure; they lie along the line $y = x$. The only pole of $f(z)$ inside $\gamma_R$ (for $R > \sqrt{\pi}/2$) is at $\tau/2$, so by the residue theorem
$$ \int_{\gamma_R} f(z)\,dz = 2\pi i\operatorname{Res}_{z=\tau/2} f(z) = 2\pi i\,\frac{e^{-\tau^2/8}}{(-\tau)e^{-\tau^2/2}} = \frac{2\pi ie^{3\tau^2/8}}{-\sqrt{\pi}(1+i)} = \sqrt{2\pi}. $$
As $R \to \infty$, the value of $|f(z)|$ tends to 0 uniformly along the left and right sides of $\gamma_R$, so
$$ \sqrt{2\pi} = \int_{-\infty}^{\infty} f(x)\,dx + \int_{\infty+i\sqrt{\pi}}^{-\infty+i\sqrt{\pi}} f(z)\,dz = \int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty}^{\infty} f(x+i\sqrt{\pi})\,dx. $$
In the second integral, write $i\sqrt{\pi}$ as $\tau - \sqrt{\pi}$ and use (real) translation invariance of $dx$ to obtain
$$ \sqrt{2\pi} = \int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty}^{\infty} f(x+\tau)\,dx = \int_{-\infty}^{\infty} \bigl(f(x)-f(x+\tau)\bigr)\,dx = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx \quad\text{by (10.2)}. $$
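The difference equation (10.2), which is the engine of this proof, can be checked at sample real points with complex arithmetic; the sketch is our own illustration:

```python
import cmath
import math

# Kneser's function f(z) = e^{-z^2/2}/(1 + e^{-τz}) with τ = √π(1+i) satisfies the
# difference equation (10.2): f(z) - f(z+τ) = e^{-z^2/2}.
tau = math.sqrt(math.pi) * (1 + 1j)

def f(z):
    return cmath.exp(-z * z / 2) / (1 + cmath.exp(-tau * z))

points = (-2.0, 0.3, 1.7)
diffs = [f(x) - f(x + tau) for x in points]
expected = [math.exp(-x * x / 2) for x in points]
print(diffs)      # real parts match e^{-x^2/2}; imaginary parts vanish
print(expected)
```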

11. Eleventh Proof: Fourier transforms


For a continuous function $f\colon \mathbf{R} \to \mathbf{C}$ that is rapidly decreasing at $\pm\infty$, its Fourier transform is the function $\mathcal{F}f\colon \mathbf{R} \to \mathbf{C}$ defined by
$$ (11.1) \qquad (\mathcal{F}f)(y) = \int_{-\infty}^{\infty} f(x)e^{-ixy}\,dx. $$
For example, $(\mathcal{F}f)(0) = \int_{-\infty}^{\infty} f(x)\,dx$.
Here are three properties of the Fourier transform.
• If $f$ is differentiable, then after using differentiation under the integral sign on the Fourier transform of $f$ we obtain
$$ (\mathcal{F}f)'(y) = \int_{-\infty}^{\infty} -ixf(x)e^{-ixy}\,dx = -i(\mathcal{F}(xf(x)))(y). $$

• Using integration by parts on the Fourier transform of $f$, with $u = f(x)$ and $dv = e^{-ixy}\,dx$, we obtain
$$ (\mathcal{F}(f'))(y) = iy(\mathcal{F}f)(y). $$
• If we apply the Fourier transform twice then we recover the original function up to interior and exterior scaling:
$$ (11.2) \qquad (\mathcal{F}^2 f)(x) = 2\pi f(-x). $$
The $2\pi$ is admittedly a nonobvious scaling factor here, and the proof of (11.2) is nontrivial. We'll show the appearance of $2\pi$ in (11.2) is equivalent to the evaluation of $I$ as $\sqrt{2\pi}$.
Fixing $a > 0$, set $f(x) = e^{-ax^2}$, so
$$ f'(x) = -2axf(x). $$
Applying the Fourier transform to both sides of this equation implies $iy(\mathcal{F}f)(y) = -2a\frac{1}{-i}(\mathcal{F}f)'(y)$, which simplifies to $(\mathcal{F}f)'(y) = -\frac{1}{2a}y(\mathcal{F}f)(y)$. The general solution of $g'(y) = -\frac{1}{2a}yg(y)$ is $g(y) = Ce^{-y^2/(4a)}$, so
$$ f(x) = e^{-ax^2} \implies (\mathcal{F}f)(y) = Ce^{-y^2/(4a)} $$
for some constant $C$. We have $1/(4a) = a$ when $a = 1/2$, so set $a = 1/2$: if $f(x) = e^{-x^2/2}$ then
$$ (11.3) \qquad (\mathcal{F}f)(y) = Ce^{-y^2/2} = Cf(y). $$
Setting $y = 0$ in (11.3), the left side is $(\mathcal{F}f)(0) = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = I$, so $I = Cf(0) = C$.
Applying the Fourier transform to both sides of (11.3) with $C = I$ and using (11.2), we get $2\pi f(-x) = I(\mathcal{F}f)(x) = I^2f(x)$. At $x = 0$ this becomes $2\pi = I^2$, so $I = \sqrt{2\pi}$ since $I > 0$. That is the Gaussian integral calculation. If we didn't know that the constant on the right side of (11.2) is $2\pi$, whatever its value is would wind up being $I^2$, so saying $2\pi$ appears on the right side of (11.2) is equivalent to saying $I = \sqrt{2\pi}$.
There are other ways to define the Fourier transform besides (11.1), such as
$$ \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)e^{-ixy}\,dx \quad\text{or}\quad \int_{-\infty}^{\infty} f(x)e^{-2\pi ixy}\,dx. $$
These transforms have properties similar to the transform as defined in (11.1), so they can be used in its place to compute the Gaussian integral. Let's see how such a proof looks using the second alternative definition, which we'll write as
$$ (\widetilde{\mathcal{F}}f)(y) = \int_{-\infty}^{\infty} f(x)e^{-2\pi ixy}\,dx. $$
For this Fourier transform, the analogues of the three properties above for $\mathcal{F}$ are
• $(\widetilde{\mathcal{F}}f)'(y) = -2\pi i(\widetilde{\mathcal{F}}(xf(x)))(y)$.
• $(\widetilde{\mathcal{F}}(f'))(y) = 2\pi iy(\widetilde{\mathcal{F}}f)(y)$.
• $(\widetilde{\mathcal{F}}^2 f)(x) = f(-x)$.
The last property for $\widetilde{\mathcal{F}}$ looks nicer than that for $\mathcal{F}$, since there is no overall $2\pi$-factor on the right side (it has been hidden in the definition of $\widetilde{\mathcal{F}}$). On the other hand, the first two properties for $\widetilde{\mathcal{F}}$ have overall factors of $2\pi$ on the right side while the first two properties of $\mathcal{F}$ do not. You can't escape a role for $\pi$ or $2\pi$ somewhere in every possible definition of a Fourier transform.
Now let's run through the proof again with $\widetilde{\mathcal{F}}$ in place of $\mathcal{F}$. For $a > 0$, set $f(x) = e^{-ax^2}$. Applying $\widetilde{\mathcal{F}}$ to both sides of the equation $f'(x) = -2axf(x)$, $2\pi iy(\widetilde{\mathcal{F}}f)(y) = -2a\frac{1}{-2\pi i}(\widetilde{\mathcal{F}}f)'(y)$, and that is equivalent to $(\widetilde{\mathcal{F}}f)'(y) = -\frac{2\pi^2}{a}y(\widetilde{\mathcal{F}}f)(y)$. Solutions of $g'(y) = -\frac{2\pi^2}{a}yg(y)$ all look like $g(y) = Ce^{-(\pi^2/a)y^2}$, so
$$ f(x) = e^{-ax^2} \implies (\widetilde{\mathcal{F}}f)(y) = Ce^{-(\pi^2/a)y^2} $$
for a constant $C$. We want $\pi^2/a = \pi$ so that $e^{-(\pi^2/a)y^2} = e^{-\pi y^2} = f(y)$, which occurs for $a = \pi$. Thus when $f(x) = e^{-\pi x^2}$ we have
$$ (11.4) \qquad (\widetilde{\mathcal{F}}f)(y) = Ce^{-\pi y^2} = Cf(y). $$
When $y = 0$ in (11.4), this becomes $\int_{-\infty}^{\infty} e^{-\pi x^2}\,dx = C$, so $C = K$: see the top of the first page for the definition of $K$ as the integral of $e^{-\pi x^2}$ over $\mathbf{R}$.
Applying $\widetilde{\mathcal{F}}$ to both sides of (11.4) with $C = K$ and using $(\widetilde{\mathcal{F}}^2 f)(x) = f(-x)$, we get $f(-x) = K(\widetilde{\mathcal{F}}f)(x) = K^2f(x)$. At $x = 0$ this becomes $1 = K^2$, so $K = 1$ since $K > 0$. That $K = 1$, or in more explicit form $\int_{-\infty}^{\infty} e^{-\pi x^2}\,dx = 1$, is equivalent to the evaluation of the Gaussian integral $I$ via the change of variables $y = \sqrt{2\pi}\,x$ in the integral for $K$.
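The key transform identity $(\mathcal{F}f)(y) = \sqrt{2\pi}\,e^{-y^2/2}$ for $f(x) = e^{-x^2/2}$, i.e. (11.3) with $C = I$, can be checked by discretizing (11.1); this sketch is our own illustration, with arbitrary truncation and step count:

```python
import cmath
import math

def fourier(f, y, L=10.0, n=20000):
    # (Ff)(y) = ∫ f(x) e^{-ixy} dx, truncated to [-L, L], midpoint rule.
    h = 2 * L / n
    total = 0j
    for i in range(n):
        x = -L + (i + 0.5) * h
        total += f(x) * cmath.exp(-1j * x * y)
    return total * h

f = lambda x: math.exp(-x * x / 2)
for y in (0.0, 1.0, 2.5):
    print(abs(fourier(f, y)), math.sqrt(2 * math.pi) * math.exp(-y * y / 2))
```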

Appendix A. Redoing Section 2 without improper integrals in Fubini’s theorem


In this appendix we will work out the calculation of the Gaussian integral in Section 2 without relying on Fubini's theorem for improper integrals. The key equation is (2.1), which we recall:
$$ \int_0^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy = \int_0^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dy\right)dt. $$
The calculation in Section 2 that the iterated integral on the right is $\pi/4$ does not need Fubini's theorem in any form. It is going from the iterated integral on the left to $\pi/4$ that used Fubini's theorem for improper integrals. The next theorem could be used as a substitute, and its proof will only use Fubini's theorem for integrals on rectangles.

Theorem A.1. For $b > 1$ and $c > 1$,
$$ \int_0^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy = \frac{\pi}{4} + O\left(\frac{1}{b}\right) + O\left(\frac{1}{\sqrt{c}}\right). $$
Having $b \to \infty$ and $c \to \infty$ in Theorem A.1 makes the right side $\pi/4$ without changing the left side.
Lemma A.2.
(1) For all $x \in \mathbf{R}$, $e^{-x^2} \le \dfrac{1}{x^2+1}$.
(2) For $a > 0$, $\displaystyle\int_0^{\infty} \frac{dx}{a^2x^2+1} = \frac{\pi}{2a}$.
(3) For $a > 0$ and $c > 0$, $\displaystyle\int_c^{\infty} \frac{dx}{a^2x^2+1} = \frac{1}{a}\left(\frac{\pi}{2} - \arctan(ac)\right)$.
(4) For $a > 0$ and $c > 0$, $\displaystyle\int_c^{\infty} \frac{dx}{a^2x^2+1} < \frac{1}{a^2c}$.
(5) For $a > 0$, $\dfrac{\pi}{2} - \arctan a < \dfrac{1}{a}$.

Proof. The proofs of (1), (2), and (3) are left to the reader. To prove (4), replace $1 + a^2t^2$ by the smaller value $a^2t^2$. To prove (5), write the difference as $\int_a^{\infty} dx/(x^2+1)$ and then bound $1/(x^2+1)$ above by $1/x^2$. □
Now we prove Theorem A.1.

Proof. Step 1. For $b > 1$ and $c > 1$, we'll show the improper integral can be truncated to an integral over $[0,b]\times[0,c]$ plus error terms:
$$ \int_0^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy = \int_0^{b}\left(\int_0^{c} ye^{-(t^2+1)y^2}\,dt\right)dy + O\left(\frac{1}{\sqrt{c}}\right) + O\left(\frac{1}{b}\right). $$
Subtract the integral on the right from the integral on the left and split the outer integral $\int_0^{\infty}$ into $\int_0^{b} + \int_b^{\infty}$:
$$ \int_0^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy - \int_0^{b}\left(\int_0^{c} ye^{-(t^2+1)y^2}\,dt\right)dy = \int_0^{b}\left(\int_c^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy + \int_b^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy. $$
On the right side, we will show the first iterated integral is $O(1/\sqrt{c})$ and the second iterated integral is $O(1/b)$. The second iterated integral is simpler:
$$
\begin{aligned}
\int_b^{\infty}\left(\int_0^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy
&= \int_b^{\infty}\left(\int_0^{\infty} e^{-(yt)^2}\,dt\right)ye^{-y^2}\,dy \\
&\le \int_b^{\infty}\left(\int_0^{\infty} \frac{dt}{y^2t^2+1}\right)ye^{-y^2}\,dy && \text{by Lemma A.2(1)} \\
&= \int_b^{\infty} \frac{\pi}{2y}\,ye^{-y^2}\,dy && \text{by Lemma A.2(2)} \\
&= \frac{\pi}{2}\int_b^{\infty} e^{-y^2}\,dy \\
&\le \frac{\pi}{2}\int_b^{\infty} \frac{dy}{y^2+1} && \text{by Lemma A.2(1)} \\
&\le \frac{\pi}{2b} && \text{since } \frac{1}{y^2+1} < \frac{1}{y^2},
\end{aligned}
$$
and this is $O(1/b)$. Returning to the first iterated integral,
$$
\begin{aligned}
\int_0^{b}\left(\int_c^{\infty} ye^{-(t^2+1)y^2}\,dt\right)dy
&= \int_0^{b}\left(\int_c^{\infty} e^{-(yt)^2}\,dt\right)ye^{-y^2}\,dy \\
&\le \int_0^{b}\left(\int_c^{\infty} \frac{dt}{y^2t^2+1}\right)ye^{-y^2}\,dy && \text{by Lemma A.2(1)} \\
&= \int_0^{1}\left(\int_c^{\infty} \frac{dt}{y^2t^2+1}\right)ye^{-y^2}\,dy + \int_1^{b}\left(\int_c^{\infty} \frac{dt}{y^2t^2+1}\right)ye^{-y^2}\,dy \\
&\le \int_0^{1}\left(\int_c^{\infty} \frac{dt}{y^2t^2+1}\right)ye^{-y^2}\,dy + \int_1^{b} \frac{1}{y^2c}\,ye^{-y^2}\,dy && \text{by Lemma A.2(4)} \\
&= \int_0^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)e^{-y^2}\,dy + \frac{1}{c}\int_1^{b} \frac{dy}{ye^{y^2}} && \text{by Lemma A.2(3)} \\
&\le \int_0^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)dy + \frac{1}{c}\int_1^{\infty} \frac{dy}{ye^{y^2}}.
\end{aligned}
$$
The last term is $O(1/c)$. We will show the first term is $O(1/\sqrt{c})$ by carefully splitting up $\int_0^1$.
For $0 < \varepsilon < 1$,
$$ \int_0^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)dy = \int_0^{\varepsilon}\left(\frac{\pi}{2} - \arctan(yc)\right)dy + \int_{\varepsilon}^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)dy. $$
Both integrals are positive, and the first one is less than $(\pi/2)\varepsilon$. The integrand of the second integral is less than $1/(yc)$ by Lemma A.2(5), so
$$ \int_{\varepsilon}^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)dy < \int_{\varepsilon}^{1} \frac{dy}{yc} < \frac{1-\varepsilon}{\varepsilon c} < \frac{1}{\varepsilon c}. $$
Therefore
$$ 0 < \int_0^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)dy < \frac{\pi}{2}\varepsilon + \frac{1}{\varepsilon c} $$
for each $\varepsilon$ in $(0,1)$. Use $\varepsilon = 1/\sqrt{c}$ to get
$$ 0 < \int_0^{1}\left(\frac{\pi}{2} - \arctan(yc)\right)dy < \frac{\pi}{2\sqrt{c}} + \frac{1}{\sqrt{c}} = O\left(\frac{1}{\sqrt{c}}\right). $$
That proves the first iterated integral is $O(1/\sqrt{c}) + O(1/c) = O(1/\sqrt{c})$ as $c \to \infty$.
Step 2. For $b > 0$ and $c > 0$, we will show
$$ \int_0^{b}\left(\int_0^{c} ye^{-(t^2+1)y^2}\,dt\right)dy = \frac{\pi}{4} + O\left(\frac{1}{e^{b^2}}\right) + O\left(\frac{1}{c}\right). $$
By Fubini's theorem for continuous functions on a rectangle in $\mathbf{R}^2$,
$$ \int_0^{b}\left(\int_0^{c} ye^{-(t^2+1)y^2}\,dt\right)dy = \int_0^{c}\left(\int_0^{b} ye^{-(t^2+1)y^2}\,dy\right)dt. $$
For the inner integral on the right, the formula $\int_0^{b} ye^{-ay^2}\,dy = 1/(2a) - 1/(2ae^{ab^2})$ for $a > 0$ tells us
$$ \int_0^{b} ye^{-(t^2+1)y^2}\,dy = \frac{1}{2(t^2+1)} - \frac{1}{2(t^2+1)e^{(t^2+1)b^2}}, $$
so
$$ (A.1) \qquad \int_0^{c}\left(\int_0^{b} ye^{-(t^2+1)y^2}\,dy\right)dt = \frac{1}{2}\int_0^{c} \frac{dt}{t^2+1} - \frac{1}{2}\int_0^{c} \frac{dt}{(t^2+1)e^{(t^2+1)b^2}} = \frac{1}{2}\arctan(c) - \frac{1}{2}\int_0^{c} \frac{dt}{(t^2+1)e^{(t^2+1)b^2}}. $$
Let's estimate these last two terms. Since
$$ \arctan(c) = \int_0^{\infty} \frac{dt}{t^2+1} - \int_c^{\infty} \frac{dt}{t^2+1} = \frac{\pi}{2} + O\left(\int_c^{\infty} \frac{dt}{t^2}\right) = \frac{\pi}{2} + O\left(\frac{1}{c}\right) $$
and
$$ \int_0^{c} \frac{dt}{(t^2+1)e^{(t^2+1)b^2}} \le \frac{1}{e^{b^2}}\int_0^{c} \frac{dt}{t^2+1} \le \frac{1}{e^{b^2}}\int_0^{\infty} \frac{dt}{t^2+1} = O\left(\frac{1}{e^{b^2}}\right), $$
feeding these error estimates into (A.1) finishes Step 2. □
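Theorem A.1's error behavior can be observed numerically; this sketch is our own illustration, using the closed-form inner integral from Step 2 and arbitrary sample values of $b$ and $c$:

```python
import math

def midpoint(f, a, b, n=100000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def inner(t, b):
    # ∫_0^b y e^{-(t^2+1)y^2} dy in closed form, as in Step 2.
    a = t * t + 1.0
    return 1.0 / (2 * a) - math.exp(-a * b * b) / (2 * a)

def truncated(b, c):
    # ∫_0^c ( ∫_0^b y e^{-(t^2+1)y^2} dy ) dt over the rectangle [0,b]×[0,c]
    return midpoint(lambda t: inner(t, b), 0.0, c)

err_small = abs(truncated(2.0, 10.0) - math.pi / 4)
err_large = abs(truncated(4.0, 100.0) - math.pi / 4)
print(err_small, err_large)  # the error shrinks as b and c grow
```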

References
[1] D. Bell, “Poisson’s remarkable calculation – a method or a trick?” Elem. Math. 65 (2010), 29–36.
[2] C. A. Berenstein and R. Gay, Complex Variables, Springer-Verlag, New York, 1991.
[3] A. L. Delgado, “A Calculation of $\int_0^{\infty} e^{-x^2}\,dx$,” The College Math. J. 34 (2003), 321–323.
[4] H. Iwasawa, “Gaussian Integral Puzzle,” Math. Intelligencer 31 (2009), 38–41.
[5] T. P. Jameson, “The Probability Integral by Volume of Revolution,” Mathematical Gazette 78 (1994), 339–340.
[6] H. Kneser, Funktionentheorie, Vandenhoeck and Ruprecht, 1958.
[7] P. S. Laplace, Théorie Analytique des Probabilités, Courcier, 1812.
[8] P. S. Laplace, “Mémoire sur la probabilité des causes par les évènemens,” Oeuvres Complètes 8, 27–65. (English
trans. by S. Stigler as “Memoir on the Probability of Causes of Events,” Statistical Science 1 (1986), 364–378.)
[9] P. M. Lee, http://www.york.ac.uk/depts/maths/histstat/normal history.pdf.
[10] L. Mirsky, “The Probability Integral,” Math. Gazette 33 (1949), 279. URL http://www.jstor.org/stable/3611303.
[11] C. P. Nicholas and R. C. Yates, “The Probability Integral,” Amer. Math. Monthly 57 (1950), 412–413.
[12] G. Polya, “Remarks on Computing the Probability Integral in One and Two Dimensions,” pp. 63–78 in Berkeley
Symp. on Math. Statist. and Prob., Univ. California Press, 1949.
[13] R. Remmert, Theory of Complex Functions, Springer-Verlag, 1991.
[14] M. Rozman, “Evaluate Gaussian integral using differentiation under the integral sign,” Course notes for Physics
2400 (UConn), Spring 2016.
[15] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill, 1976.
[16] M. Spivak, Calculus, W. A. Benjamin, 1967.
[17] S. Stigler, “Laplace’s 1774 Memoir on Inverse Probability,” Statistical Science 1 (1986), 359–363.
[18] J. van Yzeren, “Moivre’s and Fresnel’s Integrals by Simple Integration,” Amer. Math. Monthly 86 (1979),
690–693.
[19] G. N. Watson, Complex Integration and Cauchy’s Theorem, Cambridge Univ. Press, Cambridge, 1914.
[20] http://gowers.wordpress.com/2007/10/04/when-are-two-proofs-essentially-the-same/#comment-239.
[21] http://math.stackexchange.com/questions/34767/int-infty-infty-e-x2-dx-with-complex-analysis.
[22] http://math.stackexchange.com/questions/390850/integrating-int-infty-0-e-x2-dx-using-feynmans-parametrization-trick
