322-2175 Lab Book Draft06
20 March 2018
(Changes to Lab #5 on MTF)
Contents
Preface ix
1 Introduction 1
1.1 Introduction to Laboratory Experiments and Reports . . . . . . . . . 1
1.1.1 Laboratory Etiquette . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Laboratory Notebook . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Laboratory Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3.1 Grammar and Syntax . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Setting Up Optical Experiments . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Spatial Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.5 Measurements and Error in Experiments . . . . . . . . . . . . . . . . 9
1.5.1 The Certainty of Uncertainty . . . . . . . . . . . . . . . . . . 9
1.5.2 “Accuracy” vs. “Precision” . . . . . . . . . . . . . . . . . . . 9
1.5.3 Significant Figures and Round-Off Error . . . . . . . . . . . . 9
1.5.4 Reading Vernier Scales . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Propagation of Uncertainty/Error . . . . . . . . . . . . . . . . . . . . 11
1.6.1 Error of a Summation or Difference . . . . . . . . . . . . . . . 13
1.6.2 Error of a Product or Ratio . . . . . . . . . . . . . . . . . . . 14
1.6.3 Value of a Measurement Raised to a Power . . . . . . . . . . . 16
1.6.4 Problems: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7 Series Expansions that You Should Know . . . . . . . . . . . . . . . . 17
1.7.1 Taylor’s Series: . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7.2 Maclaurin’s Series: . . . . . . . . . . . . . . . . . . . . . . . . 17
1.7.3 Binomial Expansion: . . . . . . . . . . . . . . . . . . . . . . . 19
1.7.4 Geometric Series . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7.5 Finite Geometric Series: . . . . . . . . . . . . . . . . . . . . . 21
1.7.6 Exponential Series . . . . . . . . . . . . . . . . . . . . . . . . 22
1.7.7 Complex Exponential . . . . . . . . . . . . . . . . . . . . . . . 23
6.4.1 Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.4.2 Procedure(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Chapter 1
Introduction
1. Title;
2. Labeled axes with units;
3. Data points plotted must be shown with symbols and no connecting lines (the
size of the data point symbol should be chosen wisely);
4. Fits to data and/or theoretical curves should use lines with no symbols;
5. Computed coefficients of any fits of curves to data must be displayed on the
plot;
6. Legends should be used if plotting more than one data set on a single graph.
You may use the graphing capabilities in Microsoft Excel™, but recognize that
its default format is not appropriate for most graphs; for example, its default
background is "gray," and I strongly urge you to use a white background in your
graphs, if for no other reason than it makes the lines easier to see.
Discuss your results and how they relate to theory. You need to compute the percent
difference between the measured and theoretical values. Consider experimental
errors and try to determine the most significant sources of possible experimental
error, rather than just listing all possible error sources.
Diagrams of setups are particularly useful in lab reports. These can be very simple
— they do not have to be artistic. Include any relevant dimensions that appear in
the equations used in the description.
All required material in your lab writeup should be typed on a word processor,
both for ease of submission and for archiving. This includes equations, figures, data
tables, and answers to any questions. Handwritten pages will not be accepted. Include
captions with figures and data tables. Number your equations, data tables, and figures
and refer to them by number, e.g., “As demonstrated by Eq. 1, . . . ”, “As shown in
Fig. 2, . . . ”, etc. Equations are typically numbered in parentheses located flush with
the right margin, while figures and tables are numbered in their captions. The lists
of equations, figures, and tables are numbered separately.
The general idea of the lab reports this quarter is to follow the format of real
research papers as closely as possible. If you would like some examples of what such
papers really look like, go to the library or the CIS Reading Room and look through a
couple of imaging science journals, e.g. Optical Engineering, the Journal of the Optical
Society of America (JOSA), or the Journal of Imaging Science and Technology.
Summary:
Finally, summarize your findings and comment on your success (or lack thereof) in
performing the experiment.
1.3.2 Equations
You will need to include equations and subscripts in your lab writeups, so you need
to have the means to do so. Subscript fonts are available in most word processors
and equation fonts in many. For example, Microsoft Word™ includes a rudimentary
"Equation Editor," and add-on software (such as MathType™ from Design Science) also
is available. You might consider investing in a scientific word processor that includes
equation, graphing, and curve-fitting features – many are available, and the time
you are likely to save over the course of your college career will easily outweigh the
cost and learning time.
As already mentioned, number any equations used in your lab writeup. Note that
the symbol “*” that is commonly used for multiplication in some disciplines is more
often used to refer to the mathematical operation of convolution in imaging. I suggest
that you use either the symbol "×" or "·" to denote multiplication in lab reports, and
NOT the letter “x.” The necessary symbols are available in most symbol fonts and
you should be using them in all of your documents and slide presentations.
The circular "eye" features are "Airy" or "Fresnel" patterns created by light
scattered by dust spots in the optical path as it combines with unscattered light.
(adapted from http://thestandardmodel.blogspot.com/2013/07/size-distribution-of-sedimentary-haze.html)
The modulation is removed from the beam in the following manner. The laser
beam appears as a source located at infinity, meaning that the phase of the light from
the laser is (at least approximately) constant across the small area of the laser beam.
The laser beam from the source at infinite distance is focused onto the small pinhole,
which means that light scattered by dust located closer to the pinhole is "out of focus."
Since less scattered light gets through the pinhole, the quality of the transmitted
beam is significantly improved.
D0 = pinhole diameter
λ0 = wavelength of laser (632.8 nm for HeNe)
f0 = focal length of microscope objective
d0 = diameter of laser beam
NA = numerical aperture of objective
Microscope objective lenses are commonly specified by the resulting angular
magnification (e.g., 40X) in a standard setup (typically with a tube length of 160 mm);
the focal length of a microscope objective is not commonly specified, but the approximate
focal lengths of common objectives are shown in the table:
The spatial filter “kits” we have available include three microscope objectives
(10X, 20X, and 40X) and three pinholes (diameters of 5 μm, 10 μm, and 25 μm). The
objective with the longest focal length and therefore the smallest power (10X for
the kit) will generate a diffraction pattern with the largest scale and so should be
combined with the largest-diameter pinhole. In short, the best combinations likely
are:
The smallest-power objective with the largest pinhole will be easiest to adjust,
but will give the smallest-diameter beam.
1. With the magnetic pinhole mount removed, adjust the laser and microscope
objective so that the diverging beam illuminates the subject. Care should be
taken to ensure that the beam passes through the lens cell and is closely
aligned to the optical axis. This will reduce aberrations, improve the brightness
of illumination, and ease alignment of the pinhole.
3. Adjust the z axis (lens focusing knob) until the pinhole is substantially outside
the lens focus. This will produce a large area light spot on the pinhole plane
and ease location of the pinhole. The amount of light passing through the pinhole
will be small in this configuration, so a 3" × 5" white card should be placed a foot or
so in front of the filter. Adjust the x and y axes in a raster sweep until a spot
of light is seen on the white card.
4. Adjust the z-axis knob until the lens is more nearly focused on the pinhole. The
spot image will brighten and grow in size as this is done. It will be necessary
to readjust the x- and y-axis adjustments as the pinhole comes into focus. The
adjustments become more critical as the focus is approached. Try to keep the
spot in sight on the card at all times. As the focus is reached, the sharply
defined Fresnel zone patterns will become fuzzy and disappear, and the card or
subject will become illuminated with a uniformly clean light.
Pinhole spatial filter: the laser beam is brought to a focus by the microscope objective
and the pinhole diameter D0 is chosen to “match” the size of the diffracted spot of
light.
1. The MOST significant digit is the leftmost nonzero digit in the number
2. If there is no decimal point, then the LEAST significant digit is the rightmost
nonzero digit
3. If there is a decimal point, then the LEAST significant digit is the rightmost
digit, even if it is a zero.
4. All digits between the most and least significant digit are “significant digits.”
One of the things that drives me nuts most quickly is the tendency of students to
include all of the digits that came out of their calculator or spreadsheet. This is not
only misleading, but factually incorrect. In the result of a measurement, the number
of significant digits reported should be one more than the scale of the analog measuring
device directly provides. In other words, you should be able to read an analog scale with
more precision than given by the scale markings; if the scale is labeled in millimeters,
you should be able to estimate the measurement to about a tenth of a millimeter. Retain
this extra digit and include an estimate of the error, e.g., for a measurement of a length
s, you might report the measurement as s = 10.3 mm ± 0.2 mm.
Four readings of a scale. The first two scales do not have a vernier. The red index
of the first example on the left is nearly lined up with the value of 40 and the
uncertainty is probably of the order ±0.1 unit. In the second example, the red index
is between 40 and 41, and the measurement is approximately 40.3 ± 0.2 units. In
the third example, with a vernier, the index is between 40 and 41 and the sixth line
of the vernier scale lines up with one line on the stationary scale; the reading is
40.6 ± 0.1. In the last example, the index is between 41 and 42 and the vernier index
lines up with the 3 on the stationary scale, so the reading is 41.3 ± 0.1.
It is generally assumed that the mean value of the quantity z (sometimes labeled z̄)
to be determined is the same function of the mean values of the measured quantities:
\[ \bar{z} = f[\bar{x}, \bar{y}, \cdots] \]
The nth calculation of z from the nth set of measurements is:
\[ z_n = f[x_n, y_n, \cdots] \]
and the variance of z is defined as:
\[ \sigma_z^2 = \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} \left(z_n - \bar{z}\right)^2 \]
which (clearly) has units of the square of the calculated quantity z. The difference
of the nth calculation of z from the mean z̄ may be calculated from the differences of
the individual measurements from their means:
\[ z_n - \bar{z} \cong (x_n - \bar{x})\cdot\frac{\partial z}{\partial x} + (y_n - \bar{y})\cdot\frac{\partial z}{\partial y} + \cdots \]
Thus the variance in the calculation is:
\[ \sigma_z^2 \cong \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} \left[ (x_n - \bar{x})\cdot\frac{\partial z}{\partial x} + (y_n - \bar{y})\cdot\frac{\partial z}{\partial y} + \cdots \right]^2 \]
\[ = \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} (x_n - \bar{x})^2 \left(\frac{\partial z}{\partial x}\right)^2 + \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} (y_n - \bar{y})^2 \left(\frac{\partial z}{\partial y}\right)^2 + \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} 2\,(x_n - \bar{x})(y_n - \bar{y}) \left(\frac{\partial z}{\partial x}\right)\left(\frac{\partial z}{\partial y}\right) + \cdots \]
\[ \Longrightarrow \sigma_z^2 \cong \sigma_x^2 \left(\frac{\partial z}{\partial x}\right)^2 + \sigma_y^2 \left(\frac{\partial z}{\partial y}\right)^2 + 2\sigma_{xy}^2 \left(\frac{\partial z}{\partial x}\right)\left(\frac{\partial z}{\partial y}\right) + \cdots \]
The error in the calculation of z is the standard deviation \( \sigma_z = \sqrt{\sigma_z^2} \) and has the
same units as z.
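As a quick numerical illustration (a sketch, not part of the original derivation), the first-order propagation formula for uncorrelated measurements can be evaluated with finite-difference derivatives; the helper name `propagated_sigma` and the numbers below are illustrative assumptions:

```python
import math

def propagated_sigma(f, means, sigmas, h=1e-6):
    # First-order propagation for uncorrelated errors:
    # sigma_z^2 = sum_i (df/dx_i)^2 * sigma_i^2, derivatives by central difference
    var = 0.0
    for i, (m, s) in enumerate(zip(means, sigmas)):
        up = list(means); up[i] = m + h
        dn = list(means); dn[i] = m - h
        dfdx = (f(up) - f(dn)) / (2 * h)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

# perimeter z = 2x + 2y with sigma_x = sigma_y = 1 mm
sigma_perimeter = propagated_sigma(lambda v: 2 * v[0] + 2 * v[1],
                                   [100.0, 1000.0], [1.0, 1.0])
```

The same function can then be reused for the product, ratio, and power-law cases treated below.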
Now consider the specific cases that are faced in calculations made from data in
experiments.
1.6.1 Error of a Summation or Difference

Example:
For example, consider the calculation of the perimeter of a quadrilateral shape with
sides x and y from measurements of the two sides, say x = 100 mm ± 1 mm and
y = 1000 mm ± 10 mm (as in the later examples); the perimeter is:
\[ z = 2x + 2y = 2200\,\mathrm{mm} \]
\[ \sigma_z = \sqrt{2^2\sigma_x^2 + 2^2\sigma_y^2} = \sqrt{4\cdot(\pm 1\,\mathrm{mm})^2 + 4\cdot(\pm 10\,\mathrm{mm})^2} = \sqrt{404}\,\mathrm{mm} \cong 20.1\,\mathrm{mm} \]
\[ z = 2200\,\mathrm{mm} \pm 20.1\,\mathrm{mm} \]
What if the error in the measurement of y is the same as that in the measurement of
x? Then the standard deviation is:
\[ \sigma_z = \sqrt{4\cdot(\pm 1\,\mathrm{mm})^2 + 4\cdot(\pm 1\,\mathrm{mm})^2} = 2\sqrt{2}\,\mathrm{mm} \cong 2.83\,\mathrm{mm} \]
\[ z = 2200\,\mathrm{mm} \pm 2.83\,\mathrm{mm} \]
If the perimeter is instead calculated from measurements of the four sides individually, each
with standard deviation of 1 mm:
\[ z = x_1 + x_2 + y_1 + y_2, \qquad \sigma_z = \sqrt{4\cdot(1\,\mathrm{mm})^2} = 2\,\mathrm{mm} \]
1.6.2 Error of a Product or Ratio

If the computation is the scaled product of the two measurements x and y, e.g.:
\[ z = \pm a \cdot x \cdot y \Longrightarrow \frac{\partial z}{\partial x} = \pm a\,y, \qquad \frac{\partial z}{\partial y} = \pm a\,x \]
so that:
\[ \sigma_z^2 = \sigma_x^2\,(a y)^2 + \sigma_y^2\,(a x)^2 + 2\sigma_{xy}^2\,(a y)(a x) \Longrightarrow \frac{\sigma_z^2}{z^2} = \frac{\sigma_x^2}{x^2} + \frac{\sigma_y^2}{y^2} + 2\,\frac{\sigma_{xy}^2}{xy} \]
If the computation is instead the scaled ratio of the two measurements, the derivatives are:
\[ z = \pm a \cdot \frac{x}{y} \Longrightarrow \frac{\partial z}{\partial x} = \pm\frac{a}{y}, \qquad \frac{\partial z}{\partial y} = \mp\frac{a x}{y^2} \]
\[ \sigma_z^2 = \sigma_x^2\left(\pm\frac{a}{y}\right)^2 + \sigma_y^2\left(\mp\frac{a x}{y^2}\right)^2 + 2\sigma_{xy}^2\left(\pm\frac{a}{y}\right)\left(\mp\frac{a x}{y^2}\right) \]
\[ = \sigma_x^2\,\frac{a^2}{y^2} + \sigma_y^2\,\frac{a^2 x^2}{y^4} - 2\sigma_{xy}^2\,\frac{a^2 x}{y^3} \]
\[ = \frac{\sigma_x^2}{x^2}\left(\frac{a^2 x^2}{y^2}\right) + \frac{\sigma_y^2}{y^2}\left(\frac{a^2 x^2}{y^2}\right) - 2\,\frac{\sigma_{xy}^2}{xy}\left(\frac{a^2 x^2}{y^2}\right) = \frac{\sigma_x^2}{x^2}\,z^2 + \frac{\sigma_y^2}{y^2}\,z^2 - 2\,\frac{\sigma_{xy}^2}{xy}\,z^2 \]
\[ \Longrightarrow \frac{\sigma_z^2}{z^2} = \frac{\sigma_x^2}{x^2} + \frac{\sigma_y^2}{y^2} - 2\,\frac{\sigma_{xy}^2}{xy} \]
In both cases, if the measurement errors are uncorrelated (σxy = 0), the squares of the
relative errors add.
Example:
Now consider the area of the quadrilateral used in the last example, where x =
100 mm ± 1 mm and y = 1000 mm ± 10 mm. The calculated area is:
\[ z = x \cdot y = 10^5\,\mathrm{mm}^2 \]
and (assuming uncorrelated errors) the relative error is:
\[ \frac{\sigma_z}{z} = \sqrt{\left(\frac{1\,\mathrm{mm}}{100\,\mathrm{mm}}\right)^2 + \left(\frac{10\,\mathrm{mm}}{1000\,\mathrm{mm}}\right)^2} = \sqrt{2}\cdot 10^{-2} \Longrightarrow \sigma_z \cong 1414\,\mathrm{mm}^2 \]
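A short numerical check of the product rule for uncorrelated errors (the values repeat the example above; the variable names are illustrative):

```python
import math

# relative error of a product z = x * y with uncorrelated errors
x, sx = 100.0, 1.0     # side and its standard deviation, mm
y, sy = 1000.0, 10.0   # mm
z = x * y
rel_error = math.sqrt((sx / x) ** 2 + (sy / y) ** 2)   # sigma_z / z
sigma_z = z * rel_error
```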
1.6.3 Value of a Measurement Raised to a Power

If the calculation is a measurement raised to a (positive or negative) power b and
scaled by a constant a:
\[ z = a\,x^{\pm b} \]
The derivative of the calculation with respect to the measurement is:
\[ \frac{\partial z}{\partial x} = \pm ab\cdot x^{\pm b - 1} = \pm\frac{bz}{x} \]
and the standard deviation is related by:
\[ \frac{\sigma_z}{z} = b\,\frac{\sigma_x}{x} \]
Example:
Consider the error in the area of a circle based on an inaccurate measurement of its
diameter, say x = 100 mm, σx = 5 mm. The area is
\[ z = \pi\left(\frac{x}{2}\right)^2 = \frac{\pi}{4}\cdot(100\,\mathrm{mm})^2 = 2500\pi\,\mathrm{mm}^2 \cong 7854.0\,\mathrm{mm}^2 \]
which implies that a = π/4 and b = 2. The standard deviation is:
\[ \sigma_z = z\cdot b\cdot\frac{\sigma_x}{x} = 2500\pi\,\mathrm{mm}^2\cdot 2\cdot\frac{5\,\mathrm{mm}}{100\,\mathrm{mm}} = 250\pi\,\mathrm{mm}^2 \cong 785.40\,\mathrm{mm}^2 \]
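The same numbers can be checked in a couple of lines (a sketch of the power-law rule, using the values from the example):

```python
import math

# error in the area of a circle from an uncertain diameter measurement
x, sigma_x = 100.0, 5.0        # diameter and its standard deviation, mm
z = math.pi * (x / 2) ** 2     # area, mm^2
b = 2                          # power to which the measurement is raised
sigma_z = z * b * sigma_x / x  # sigma_z / z = b * sigma_x / x
```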
1.6.4 Problems:

1.7 Series Expansions that You Should Know

1.7.1 Taylor's Series:
\[ f[x + x_0] = \frac{x^0}{0!}\,f[x_0] + \frac{x^1}{1!}\left.\frac{df}{dx}\right|_{x=x_0} + \frac{x^2}{2!}\left.\frac{d^2f}{dx^2}\right|_{x=x_0} + \frac{x^3}{3!}\left.\frac{d^3f}{dx^3}\right|_{x=x_0} + \cdots \]
\[ = \sum_{n=0}^{\infty} \frac{x^n}{n!}\left.\frac{d^nf}{dx^n}\right|_{x=x_0} \]
\[ = f[x_0] + x\cdot f'[x_0] + \frac{x^2}{2}\cdot f''[x_0] + \frac{x^3}{6}\cdot f'''[x_0] + \cdots = \sum_{n=0}^{\infty}\left(\frac{x^n}{n!}\cdot f^{(n)}[x_0]\right) \]

1.7.2 Maclaurin's Series:

The Maclaurin series is the Taylor series for the reference location x0 = 0, so the
derivatives are evaluated at the origin:
\[ f[x] = \frac{x^0}{0!}\,f[0] + \frac{x^1}{1!}\left.\frac{df}{dx}\right|_{x=0} + \frac{x^2}{2!}\left.\frac{d^2f}{dx^2}\right|_{x=0} + \frac{x^3}{3!}\left.\frac{d^3f}{dx^3}\right|_{x=0} + \cdots \]
\[ = \sum_{n=0}^{\infty} \frac{x^n}{n!}\left.\frac{d^nf}{dx^n}\right|_{x=0} \]
\[ = f[0] + x\cdot f'[0] + \frac{x^2}{2}\cdot f''[0] + \frac{x^3}{6}\cdot f'''[0] + \cdots = \sum_{n=0}^{\infty}\left(\frac{x^n}{n!}\cdot f^{(n)}[0]\right) \]
\[ \cos[\theta] = \frac{\theta^0}{0!}\cos[0] + \frac{\theta^1}{1!}(-\sin[0]) + \frac{\theta^2}{2!}(-\cos[0]) + \frac{\theta^3}{3!}(\sin[0]) + \frac{\theta^4}{4!}\cos[0] + \cdots \]
\[ = 1 + 0 - \frac{\theta^2}{2!} - 0 + \frac{\theta^4}{4!} + 0 - \frac{\theta^6}{6!} + \cdots = 1 - \frac{\theta^2}{2!} + \frac{\theta^4}{4!} - \frac{\theta^6}{6!} + \cdots \]
The fact that the cosine includes only even powers of the argument θ in the Maclaurin
series means that the cosine must be an even function; it is symmetric with respect
to the origin of coordinates and may be "reversed" without affecting the values of the
function at each argument:
\[ (-\theta)^{2n} = (+\theta)^{2n} \Longrightarrow \cos[-\theta] = \cos[+\theta] \]
\[ \sin[\theta] = \frac{\theta^0}{0!}\sin[0] + \frac{\theta^1}{1!}(\cos[0]) + \frac{\theta^2}{2!}(-\sin[0]) + \frac{\theta^3}{3!}(-\cos[0]) + \frac{\theta^4}{4!}\sin[0] + \cdots \]
\[ = 0 + \theta - 0 - \frac{\theta^3}{3!} + 0 + \frac{\theta^5}{5!} - 0 - \frac{\theta^7}{7!} + \cdots = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + \cdots \]
The sine includes only odd powers of the argument θ and therefore is an odd function;
it is antisymmetric with respect to the origin of coordinates and is negated upon
"reversal" about the origin:
\[ (-\theta)^{2n+1} = -(+\theta)^{2n+1} \Longrightarrow \sin[-\theta] = -\sin[+\theta] \]
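These Maclaurin series are easy to verify numerically; a sketch with truncated partial sums (the helper names are illustrative):

```python
import math

def maclaurin_cos(theta, terms=10):
    # cos[theta] = sum of (-1)^n * theta^(2n) / (2n)!  (even powers only)
    return sum((-1) ** n * theta ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

def maclaurin_sin(theta, terms=10):
    # sin[theta] = sum of (-1)^n * theta^(2n+1) / (2n+1)!  (odd powers only)
    return sum((-1) ** n * theta ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))
```

With ten terms the truncation error is far below double precision for small arguments, and the parity relations cos[−θ] = cos[+θ] and sin[−θ] = −sin[+θ] hold term by term.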
1.7.3 Binomial Expansion:

\[ (1 + x)^n = \frac{1}{0!}x^0 + \frac{n}{1!}x^1 + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \cdots + \frac{n!}{(n-r)!\,r!}x^r + \cdots \]
\[ = 1 + nx + \frac{n(n-1)}{2}x^2 + \frac{n(n-1)(n-2)}{6}x^3 + \cdots \]
\[ \equiv \binom{n}{0} + \binom{n}{1}x + \binom{n}{2}x^2 + \binom{n}{3}x^3 + \cdots + \binom{n}{r}x^r + \cdots = \sum_{r=0}^{\infty}\binom{n}{r}x^r \]
where
\[ \binom{n}{r} \equiv \frac{n!}{(n-r)!\,r!} \quad \text{and} \quad 0! \equiv 1 \]
Example:
\[ (1 - x)^n = 1\cdot(-x)^0 + n\cdot(-x)^1 + \frac{n(n-1)}{2}(-x)^2 + \frac{n(n-1)(n-2)}{6}(-x)^3 + \cdots \]
\[ = 1 - nx + \frac{n(n-1)}{2!}x^2 - \frac{n(n-1)(n-2)}{3!}x^3 + \cdots \]
Example:
\[ \sqrt{1 + x} = (1 + x)^{\frac{1}{2}} = 1 + \frac{1}{2}x + \frac{\left(\frac{1}{2}\right)\left(-\frac{1}{2}\right)}{2}x^2 + \frac{\left(\frac{1}{2}\right)\left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right)}{6}x^3 + \cdots \]
\[ = 1 + \frac{1}{2}x - \frac{1}{8}x^2 + \frac{1}{16}x^3 + \cdots \]
Example:
\[ \sqrt[3]{1 + x} = (1 + x)^{\frac{1}{3}} = 1 + \frac{1}{3}x + \frac{\left(\frac{1}{3}\right)\left(-\frac{2}{3}\right)}{2}x^2 + \frac{\left(\frac{1}{3}\right)\left(-\frac{2}{3}\right)\left(-\frac{5}{3}\right)}{6}x^3 + \cdots \]
\[ = 1 + \frac{1}{3}x - \frac{1}{9}x^2 + \frac{5}{81}x^3 + \cdots \]
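The usefulness of the truncated binomial series is easy to confirm numerically; this sketch compares the four-term approximations for √(1 + x) and the analogous cube-root series ∛(1 + x) against exact values:

```python
# truncated binomial series for (1+x)^(1/2) and (1+x)^(1/3), kept through x^3
x = 0.1
sqrt_series = 1 + x / 2 - x ** 2 / 8 + x ** 3 / 16
cbrt_series = 1 + x / 3 - x ** 2 / 9 + 5 * x ** 3 / 81
```

For |x| = 0.1 the truncation error is already below 10⁻⁵, which is why these expansions are so useful for quick estimates.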
1.7.4 Geometric Series

If the variable t satisfies the condition |t| < 1, then the series may be written in the
simple form
\[ \sum_{n=0}^{\infty} t^n = \frac{1}{1-t} \quad \text{if } |t| < 1 \]
This gives a useful method for approximating quantities that may be written in the
form (1 − t)⁻¹.
Example:
\[ \frac{1}{0.9} = \frac{1}{1 - 0.1} = 1 + 0.1 + 0.01 + 0.001 + \cdots = 1.11111\cdots \]
so the first-order approximation to (0.9)⁻¹ is the sum of the "zero-order" and "first-order"
terms:
\[ (0.9)^{-1} \approx 1.1 \]
Example:
\[ 4 = \frac{1}{0.25} = \frac{1}{1 - 0.75} = 1 + (0.75) + (0.75)^2 + (0.75)^3 + (0.75)^4 + (0.75)^5 + \cdots \]
Example:
\[ \sum_{n=0}^{\infty} \sin^n[\theta] = 1 + \sin[\theta] + \sin^2[\theta] + \cdots = \frac{1}{1 - \sin[\theta]} \]
\[ \Longrightarrow \sum_{n=0}^{\infty} \sin^n[0] = 1 + \sin[0] + \sin^2[0] + \cdots = \frac{1}{1 - \sin[0]} = \frac{1}{1 - 0} = 1 \]
\[ \Longrightarrow \sum_{n=0}^{\infty} \sin^n\left[\frac{\pi}{6}\right] = 1 + \frac{1}{2} + \frac{1}{4} + \cdots = \frac{1}{1 - \sin\left[\frac{\pi}{6}\right]} = \frac{1}{1 - \frac{1}{2}} = 2 \]
\[ \Longrightarrow \sum_{n=0}^{\infty} \sin^n\left[\frac{\pi}{2}\right] = 1 + 1 + 1^2 + \cdots = \frac{1}{1 - \sin\left[\frac{\pi}{2}\right]} = \frac{1}{1 - 1} \to \infty \]
1.7.5 Finite Geometric Series:

Now consider the geometric series just discussed, truncated after N + 1 terms:
\[ \sum_{n=0}^{N} t^n = 1 + t + t^2 + t^3 + \cdots + t^N = \sum_{n=0}^{\infty} t^n - \sum_{n=N+1}^{\infty} t^n \]
Substituting p ≡ n − (N + 1) in the second sum:
\[ \sum_{n=0}^{N} t^n = \sum_{n=0}^{\infty} t^n - \sum_{p=0}^{\infty} t^{p+N+1} = \sum_{n=0}^{\infty} t^n - t^{N+1}\cdot\sum_{p=0}^{\infty} t^p = \frac{1}{1-t} - t^{N+1}\cdot\left(\frac{1}{1-t}\right) \]
\[ \Longrightarrow \sum_{n=0}^{N} t^n = \frac{1 - t^{N+1}}{1-t} \]
Examples:
\[ \sum_{n=0}^{4} (0.1)^n = 1 + 0.1 + 0.01 + 0.001 + 0.0001 = 1.1111 \]
\[ \sum_{n=0}^{3} (0.75)^n = \frac{1 - (0.75)^4}{1 - (0.75)} = 2.734375 \]
\[ \sum_{n=0}^{3} (0.75)^n = 1 + 0.75 + (0.75)^2 + (0.75)^3 = 1 + 0.75 + 0.5625 + 0.421875 = 2.734375 \]
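The closed form can be checked against direct summation in a few lines (a sketch; the function name is illustrative):

```python
def geometric_sum(t, N):
    # finite geometric series: sum_{n=0}^{N} t^n = (1 - t^(N+1)) / (1 - t), t != 1
    return (1 - t ** (N + 1)) / (1 - t)

direct_sum = sum(0.75 ** n for n in range(4))  # terms n = 0 ... N = 3
```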
1.7.6 Exponential Series

\[ \exp[u] = e^u = \sum_{n=0}^{\infty} \frac{u^n}{n!} = \frac{1}{0!} + \frac{u}{1!} + \frac{u^2}{2!} + \frac{u^3}{3!} + \cdots = 1 + u + \frac{1}{2}u^2 + \frac{1}{6}u^3 + \cdots \]
1.7.7 Complex Exponential

\[ \exp[+i\theta] = e^{+i\theta} = \sum_{n=0}^{\infty} \frac{(i\theta)^n}{n!} = \frac{1}{0!} + \frac{(i\theta)}{1!} + \frac{(i\theta)^2}{2!} + \frac{(i\theta)^3}{3!} + \frac{(i\theta)^4}{4!} + \cdots \]
\[ = 1 + i\theta - \frac{1}{2}\theta^2 - \frac{i}{6}\theta^3 + \frac{1}{24}\theta^4 + \frac{i}{120}\theta^5 - \cdots \]
\[ = \left(1 - \frac{\theta^2}{2} + \frac{\theta^4}{24} - \cdots\right) + i\cdot\left(\theta - \frac{\theta^3}{6} + \frac{\theta^5}{120} - \cdots\right) = \cos[\theta] + i\sin[\theta] \]
The real and imaginary parts are the Maclaurin series for the cosine and sine:
\[ \cos[\theta] = 1 - \frac{\theta^2}{2} + \frac{\theta^4}{24} - \cdots \Longrightarrow \lim_{\theta\to 0}\{\cos[\theta]\} = 1 \]
\[ \sin[\theta] = \theta - \frac{\theta^3}{6} + \frac{\theta^5}{120} - \cdots \Longrightarrow \sin[\theta] \cong \theta \;\text{for small}\; \theta \]
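Euler's relation and the truncated real and imaginary series can be verified directly (a sketch; the angle value is arbitrary):

```python
import cmath
import math

theta = 0.7
euler = cmath.exp(1j * theta)   # exp[+i theta] = cos[theta] + i sin[theta]

# truncated Maclaurin series for the real and imaginary parts
series_real = 1 - theta ** 2 / 2 + theta ** 4 / 24 - theta ** 6 / 720
series_imag = theta - theta ** 3 / 6 + theta ** 5 / 120 - theta ** 7 / 5040
```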
Chapter 2

Lab #1: Pinhole Camera

2.1 Abstract:
The objective of this experiment is to measure the practical limit to the resolution of
images obtained with a pinhole camera. In the ray model of light propagation, the
resolution of a pinhole camera may be calculated by simple geometry as discussed
below and shown in the figure: the smaller the pinhole diameter d0 = ab, the smaller
the recorded “image” of a point emitter and therefore the better the image resolution.
This means that two point emitters can be close together at the object and still
produce images that may be distinguished at the image plane. Put another way,
the two sources can be "resolved" from the image. However, as the diameter d0 of
the pinhole is reduced to a sufficiently small size, the wave model of light eventually
"kicks in" and must be used to describe the resolution.
In this experiment, you will observe the transition in the behavior of light from
rays to waves of electromagnetic energy as you attempt to improve the resolution of
your pinhole camera. Lab write-ups are due one week after you take your data.
2.2 Theory:
To obtain the best resolution, we want the distance pq to be small compared to the
height of the image, which is determined by the equation for transverse magnification:
\[ M_T = -\frac{z_2}{z_1} \Longrightarrow h' = h\cdot\left(-\frac{z_2}{z_1}\right) \]
A simple metric of the angular resolution ∆θ is the ratio of the image height h′ to the
projected diameter \(\overline{pq}\) of the pinhole:
\[ \Delta\theta = \frac{h'}{\overline{pq}} \]
which means that a smaller diameter d0 for the pinhole translates to better resolution
∆θ.
(Figure omitted: plot of image diameter D0 [mm] versus pinhole diameter d0 [mm].)
Approximate spatial resolution of the pinhole camera as a function of pinhole diameter d0
for λ0 = 550 nm, z1 = 39 mm, and z2 = 18 mm. The red dashed line shows the
diameter D0 of the image as a function of the diameter d0 of the pinhole for the
wave model of light (diffraction). The blue dashed line is the same graph for the
ray model, where the diameters are proportional. The black line is the sum of the
two curves and shows an obvious "minimum," which may be viewed as determining
a type of "optimum" pinhole diameter d0.
You will attempt to measure data in the lab leading to this curve and compare
to the above expression. If you can measure both the decrease in D0 for large values
of the pinhole diameter d0 and the increase in D0 for small values of the pinhole
diameter, you will have successfully shown that both ray and wave properties are
exhibited by the light imaged by your pinhole camera.
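The location of the "optimum" diameter can be estimated numerically. The exact expression from the text is not reproduced in this sketch; it assumes a common model (an assumption, not necessarily the text's formula) with a geometric term proportional to d0 plus an Airy diffraction term proportional to 1/d0:

```python
import math

lam = 550e-6          # wavelength, mm
z1, z2 = 39.0, 18.0   # object and image distances, mm

def image_diameter(d0):
    ray = d0 * (z1 + z2) / z1             # geometric (ray-model) image of the pinhole
    diffraction = 2.44 * lam * z2 / d0    # Airy-disk diameter (wave model)
    return ray + diffraction

# scan candidate pinhole diameters for the minimum image diameter
candidates = [i * 1e-4 for i in range(100, 5000)]   # 0.01 mm to 0.5 mm
d_min = min(candidates, key=image_diameter)
```

Under this assumed model the minimum falls near d0 ≈ 0.13 mm, consistent in scale with the figure above; setting the derivative to zero gives the same value analytically.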
2.3 Experimental Setup

2.3.2 Procedure
1. Mount the pinhole on the camera and obtain an image of the target illuminated
by the fiber optic lamp.
2. Set up the CCD video camera and light source. You will need to find some
way to place your sample targets above the end of the fiber optic (a distance of
about 5 mm should work well). The distance from the target to the pinhole on
the camera should be something like 40 mm − 50 mm. Optical posts from the
main optics lab can be used for this purpose, if no other equipment is available
in your lab.
3. Use one of the millimeter grids as a target and find a pinhole that gives a
reasonable image. You may need to illuminate the grid with the desk lamp
rather than the fiber-fed light source in order to get a more useful image and/or
you may need to put a white piece of paper behind the target. If you don’t have
one of the millimeter grids, you can make a series of tick marks 1 mm apart on
a white piece of paper for your target. This will work just as well. Whichever
method you use, align the tick marks either horizontally or vertically, capture
an image, and then import it into image processing software. Do a line scan
across the image. The distance between the dips that represent the millimeter
ticks will tell you how to convert pixel spacing to millimeters.
4. Measure and record the distances z1 and z2 used in your setup. If you change
these later on, make sure you record those values as well, and you’ll also have
to repeat step 3 above in that case.
5. Before taking any data, calculate where you expect the minimum in D0 to be,
based on a wavelength of 550 nm, which is the middle of the visible region of
the spectrum. (In other words, take the derivative of D0 with respect to d0 with
the object and image distances z1 and z2 , set it equal to zero and solve for dmin ,
the value of d0 that gives the minimum value of D0 ). Make sure that at least a
couple of the pinhole masks that you use have diameters larger than dmin and
a couple of them have diameters smaller than dmin , so that you will be able to
characterize the function completely.
6. Place a slit on the sample holder. Illuminate it and observe the slit on the
monitor. Move the slit around until you achieve the longest image that you
can, and align the slit either horizontally or vertically. Capture the image and
import into the software, and then do a line scan across the slit. Record the
full width at half maximum (FWHM) of the slit image. That is, the distance
across the image from the half-maximum point on one side of the peak to the
half-maximum point on the other side, measured in pixels. You can convert this
to millimeters using your results from step 3.
7. With a piece of aluminum foil, make two small holes separated by a small
distance, e.g., in the range of 12 mm − 2 mm apart, to use as your next target.
You can measure the distance between the holes with a ruler, and the diameter
of the holes using the hand-held microscope in the optics kit. Capture an image of
the holes with the pinhole camera. Measure the FWHM for each of the two holes
as well as the distance between them on the image, and convert to millimeters.
Make sure that the distance measured from the image is consistent with the
measurement you made with the ruler using the camera scale calibration in
step 3.
8. Repeat parts 6 and 7 with all of the pinhole masks available for the camera.
With the smaller pinholes, there will obviously be less light making it to the
detector, and your measurements will become more challenging. Do the best
you can to get as many data points as possible. Save one or two scan graphs
to illustrate the experiment in your report. You don’t have to save any of the
images unless you want to.
9. Plot the peak width D0 (in mm) versus the pinhole diameter, d0 (in mm), for
the slit and pinhole cases. Plot the expected curve along with your data.
10. If time permits, consider the following. Since the diffraction formula for the
image width D0 depends on wavelength, try using the blue (λ0 ≈ 450 nm) and
red (λ0 ≈ 650 nm) filters in the optics kit to see whether the width of the image
depends on the color (i.e., wavelength) of the light from the object. Use the
smallest pinhole or slit that you can for this exercise, so that the diffraction
effects will be as large as possible.
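The pixel-to-millimeter calibration of step 3 and the FWHM measurement of step 6 can be sketched in a few lines; `profile` is a hypothetical 1-D line scan and the 25-pixel tick spacing is an assumed calibration value:

```python
def fwhm_pixels(profile):
    # full width at half maximum: span of samples at or above half the peak
    peak = max(profile)
    above = [i for i, v in enumerate(profile) if v >= peak / 2]
    return above[-1] - above[0]

mm_per_pixel = 1.0 / 25.0   # millimeter tick marks measured 25 pixels apart

profile = [0, 1, 2, 8, 10, 9, 3, 1, 0]   # hypothetical scan across the slit image
width_mm = fwhm_pixels(profile) * mm_per_pixel
```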
Chapter 3

Lab #2: Fresnel and Fraunhofer Diffraction
We are able to determine locations and magnifications of images by using the model
of light as a “ray” in geometrical optics. However, the concept of light as a “wave” is
even more fundamental to imaging, particularly because of its manifestation in the
observation of “diffraction,” which is a “spreading” of light energy during propagation
that results in the fundamental limitation on the performance of every optical imaging
system. “Interference” and “diffraction” may be interpreted as the same phenomenon,
differing only in the number of sources involved (interference =⇒ few sources, say 2
- 10; diffraction =⇒ many sources, up to an infinite number, though possibly over a
very small area).
3.1 Theory:
This lab will investigate the diffraction patterns generated from apertures of differ-
ent shapes and observed at different distances. As we have mentioned, the physical
process that results in observable diffraction patterns is identical to that responsi-
ble for interference. In the latter case, we generally speak of the intensity patterns
generated by light after passing through a few apertures whose size(s) generally are
smaller than the distance between them. The term diffraction usually is applied to
the process either for a single large aperture, or (equivalently) a large number of small
(usually infinitesimal) contiguous apertures.
In studies of both interference and diffraction, the patterns are most obvious if
the illumination is coherent, which means that the phase of the sinusoidal electric
fields is rigidly deterministic. In other words, knowledge of the phase of the field
at some point in space and/or time determines the phase at other points in space
and/or time. Coherence has two flavors: spatial and temporal. For spatially coherent
light, the phase difference ∆φ ≡ φ1 − φ2 of the electric field measured at the same
time at two points in space separated by a vector distance ∆r remains constant for
all times and for all such points in space. If the phase difference measured at the
SAME location at two different times separated by ∆t ≡ t1 − t2 is
the same for all points in space, the light is temporally coherent. Light from a laser
may be considered to be BOTH spatially and temporally coherent. The properties
of coherent light allow phase differences of light that has traveled different paths to
be made visible, since the phase difference is constant with time. In interference,
the effect often results in a sinusoidal fringe pattern in space. In diffraction, the
phase difference of light from different points in the same large source can be seen
as a similar pattern of dark and bright fringes, though not (usually) with sinusoidal
spacing.
Observed diffraction patterns from the same object usually look very different at
different distances to the observation plane. If viewed very close to the aperture (in
the Rayleigh-Sommerfeld diffraction region), then Huygens’ principle says that the
amplitude of the electric field is the summation (integral) of the spherical wavefronts
generated by each point in the aperture. The resulting amplitude pattern may be
quite complicated to evaluate. If observed somewhat farther from the aperture, the
spherical wavefronts may be accurately approximated by paraboloidal wavefronts.
The approximation applies in the near field, or the Fresnel diffraction region. If
viewed at a large distance compared to the extent of the object, the light from different
locations in the aperture may be accurately modeled as plane waves with different
wavefront tilts. This occurs in the Fraunhofer diffraction region.
where "∗" is the symbol indicating the operation of convolution and h[x, y; λ0, z1] is
the "impulse response of light propagation" for wavelength λ0 and axial distance z1.
The impulse response for light propagation in the Fresnel approximation is a constant-magnitude
function with a paraboloidal phase, i.e., the phase is a function of the
square of the radial distance, forming a paraboloidal shape. The convolution operation
involves a translation of the reversed input function in the integration coordinates
[α, β], followed by multiplication by the quadratic-phase factor
\[ \exp\left[+i\pi\,\frac{\alpha^2 + \beta^2}{\lambda_0 z_1}\right] \]
and then evaluation of the area for each value of the output coordinates [x, y]. The
calculation is "complicated" and computationally intensive, but may be implemented
in the frequency domain via a Fourier transform.
In the frequency domain, the transfer function of light propagation is:
\[ H[\xi, \eta; \lambda_0, z_1] = \mathcal{F}_2\{h[x, y; \lambda_0, z_1]\} \]
\[ = \frac{1}{i\lambda_0 z_1} \exp\left[+i\,2\pi\,\frac{z_1}{\lambda_0}\right] \cdot \mathcal{F}_2\left\{\exp\left[+i\pi\,\frac{x^2 + y^2}{\lambda_0 z_1}\right]\right\} \]
\[ = \frac{1}{i\lambda_0 z_1} \exp\left[+i\,2\pi\,\frac{z_1}{\lambda_0}\right] \cdot \mathcal{F}_1\left\{\exp\left[+i\pi\,\frac{x^2}{\lambda_0 z_1}\right]\right\} \cdot \mathcal{F}_1\left\{\exp\left[+i\pi\,\frac{y^2}{\lambda_0 z_1}\right]\right\} \]
\[ = \frac{1}{i\lambda_0 z_1} \exp\left[+i\,2\pi\,\frac{z_1}{\lambda_0}\right] \cdot \sqrt{\lambda_0 z_1}\,\exp\left[+i\,\frac{\pi}{4}\right] \exp\left[-i\pi\lambda_0 z_1 \xi^2\right] \cdot \sqrt{\lambda_0 z_1}\,\exp\left[+i\,\frac{\pi}{4}\right] \exp\left[-i\pi\lambda_0 z_1 \eta^2\right] \]
\[ = \frac{1}{i\lambda_0 z_1} \exp\left[+i\,2\pi\,\frac{z_1}{\lambda_0}\right] \cdot i\lambda_0 z_1 \cdot \exp\left[-i\pi\lambda_0 z_1 \left(\xi^2 + \eta^2\right)\right] \]
\[ = \exp\left[+i\,2\pi\,\frac{z_1}{\lambda_0}\right] \cdot \exp\left[-i\pi\lambda_0 z_1 \left(\xi^2 + \eta^2\right)\right] \]
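Since both factors in the final expression have unit magnitude, the Fresnel transfer function is an all-pass (pure phase) filter; a small numerical sketch (the wavelength and distance values are assumptions for illustration):

```python
import cmath
import math

lam, z1 = 632.8e-6, 500.0   # HeNe wavelength and an assumed propagation distance, mm

def H(xi, eta):
    # Fresnel transfer function: constant phase times a quadratic phase in (xi, eta)
    constant = cmath.exp(1j * 2 * math.pi * z1 / lam)
    quadratic = cmath.exp(-1j * math.pi * lam * z1 * (xi ** 2 + eta ** 2))
    return constant * quadratic
```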
1-D model of Fresnel diffraction of coherent light: (a) two rectangular apertures that
differ in width; (b) the irradiances of the diffraction patterns observed in the Fresnel
diffraction region.
In words, the input function f [x, y] is transformed into the equivalent function F [ξ, η],
where the coordinates ξ, η are spatial frequencies measured in cycles per unit length,
e.g., cycles per mm. In optical propagation, the end result is a function of the original
2-D coordinates [x, y], which means that the coordinates [ξ, η] are “mapped” back to
the space domain via a scaling factor. Since the coordinates of the transform have
dimensions of (length)−1 and the coordinates of the diffracted light have dimensions
of length, the scale factor applied to ξ and η must have dimensions of (length)2 . It
is easy to show that the scaling factor is the product of the two length parameters
available in the problem: the wavelength λ0 and the propagation distance z1 . The
pattern of diffracted light in the Fraunhofer diffraction region is:
\[ g[x, y] \propto \mathcal{F}_2\{f[x, y]\}\Big|_{\lambda_0 z_1 \xi \to x,\; \lambda_0 z_1 \eta \to y} \equiv \int\!\!\int_{-\infty}^{+\infty} f[\alpha, \beta]\, \exp\left[-i\,2\pi\left(\frac{x}{\lambda_0 z_1}\,\alpha + \frac{y}{\lambda_0 z_1}\,\beta\right)\right] d\alpha\, d\beta \]
Example:
Consider Fraunhofer diffraction of a simple 2-D rectangular object:
$$f[x,y]=\mathrm{RECT}\!\left[\frac{x}{a_0},\frac{y}{b_0}\right]\equiv
\begin{cases}
1 & \text{if } |x|<\dfrac{a_0}{2} \text{ and } |y|<\dfrac{b_0}{2}\\[6pt]
\dfrac{1}{2} & \text{if } \left(|x|=\dfrac{a_0}{2} \text{ and } |y|<\dfrac{b_0}{2}\right) \text{ or } \left(|x|<\dfrac{a_0}{2} \text{ and } |y|=\dfrac{b_0}{2}\right) \quad\text{(edges)}\\[6pt]
\dfrac{1}{4} & \text{if } |x|=\dfrac{a_0}{2} \text{ and } |y|=\dfrac{b_0}{2} \quad\text{(corners)}\\[6pt]
0 & \text{if } |x|>\dfrac{a_0}{2} \text{ or } |y|>\dfrac{b_0}{2} \quad\text{(outside)}
\end{cases}$$
\begin{align*}
F\!\left[\frac{x}{\lambda_0 z},\frac{y}{\lambda_0 z}\right] &= \int_{\beta=-\frac{b_0}{2}}^{\beta=+\frac{b_0}{2}}\int_{\alpha=-\frac{a_0}{2}}^{\alpha=+\frac{a_0}{2}} \exp\!\left[-i\,2\pi\frac{x\alpha}{\lambda_0 z}\right]\exp\!\left[-i\,2\pi\frac{y\beta}{\lambda_0 z}\right]d\alpha\,d\beta\\
&= \int_{\alpha=-\frac{a_0}{2}}^{\alpha=+\frac{a_0}{2}}\exp\!\left[-i\,2\pi\left(\frac{x}{\lambda_0 z}\right)\alpha\right]d\alpha\cdot\int_{\beta=-\frac{b_0}{2}}^{\beta=+\frac{b_0}{2}}\exp\!\left[-i\,2\pi\left(\frac{y}{\lambda_0 z}\right)\beta\right]d\beta\\
&= \left.\frac{\exp\!\left[-i\,2\pi\frac{x}{\lambda_0 z}\,\alpha\right]}{-i\,2\pi\frac{x}{\lambda_0 z}}\right|_{\alpha=-\frac{a_0}{2}}^{\alpha=+\frac{a_0}{2}}\cdot\left.\frac{\exp\!\left[-i\,2\pi\frac{y}{\lambda_0 z}\,\beta\right]}{-i\,2\pi\frac{y}{\lambda_0 z}}\right|_{\beta=-\frac{b_0}{2}}^{\beta=+\frac{b_0}{2}}\\
&= \frac{\exp\!\left[-i\pi\frac{a_0 x}{\lambda_0 z}\right]-\exp\!\left[+i\pi\frac{a_0 x}{\lambda_0 z}\right]}{-i\,2\pi\frac{x}{\lambda_0 z}}\cdot\frac{\exp\!\left[-i\pi\frac{b_0 y}{\lambda_0 z}\right]-\exp\!\left[+i\pi\frac{b_0 y}{\lambda_0 z}\right]}{-i\,2\pi\frac{y}{\lambda_0 z}}\\
&= |a_0|\left(\frac{\sin\!\left[\pi\frac{a_0 x}{\lambda_0 z}\right]}{\pi\frac{a_0 x}{\lambda_0 z}}\right)\cdot|b_0|\left(\frac{\sin\!\left[\pi\frac{b_0 y}{\lambda_0 z}\right]}{\pi\frac{b_0 y}{\lambda_0 z}}\right)\\
&\equiv |a_0 b_0|\cdot\mathrm{SINC}\!\left[\frac{x}{\left(\frac{\lambda_0 z}{a_0}\right)},\frac{y}{\left(\frac{\lambda_0 z}{b_0}\right)}\right]
\end{align*}

$$g[x,y]\propto (a_0 b_0)^2\cdot\left|\mathrm{SINC}\!\left[\frac{x}{\left(\frac{\lambda_0 z}{a_0}\right)},\frac{y}{\left(\frac{\lambda_0 z}{b_0}\right)}\right]\right|^2$$

where the 2-D "SINC" function is defined as the orthogonal product of two 1-D SINC functions:

$$\mathrm{SINC}[x,y]\equiv\mathrm{SINC}[x]\cdot\mathrm{SINC}[y]\equiv\frac{\sin[\pi x]}{\pi x}\cdot\frac{\sin[\pi y]}{\pi y}$$
which has the pattern shown in the figure.
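This result is easy to check numerically; a hedged sketch in Python/NumPy (the aperture and geometry values below are arbitrary). Note that np.sinc is the normalized sinc, sin(πx)/(πx), which is exactly the SINC defined here:

```python
import numpy as np

def fraunhofer_rect(x, y, a0, b0, wavelength, z):
    """Fraunhofer irradiance of an a0-by-b0 rectangular aperture:
    (a0*b0)^2 * SINC^2[x/(lambda0*z/a0), y/(lambda0*z/b0)]."""
    return (a0 * b0) ** 2 * (np.sinc(a0 * x / (wavelength * z))
                             * np.sinc(b0 * y / (wavelength * z))) ** 2
```

The first zeros along x fall at x = ±λ0 z/a0, so a narrower aperture (smaller a0) produces a wider diffraction pattern.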
“Imaging” of coherent point sources in the Fraunhofer diffraction region: (a) light
from a single on-axis point source propagates to the aperture in the Fraunhofer
diffraction region and then propagates again to the observation plane in the “second”
Fraunhofer diffraction region; (b) the light generated by two point sources in the Fraunhofer region overlaps to demonstrate the concept of resolution.
1. propagation from the input object f [x, y] to the Fraunhofer diffraction region over the distance z1,

2. multiplication by the aperture (pupil) function, and

3. a second propagation over the distance z2 into the Fraunhofer diffraction region (determined from the aperture).
To eliminate an awkward notation, we will substitute the notation p [x, y] for the
magnitude of the pupil function |t [x, y]|. In this example, we assume that the pupil
has no phase component, so that Φt [x, y] = 0, though solution of the more general
case is straightforward. The 2-D input function f [x, y; z = 0] is illuminated by a
unit amplitude monochromatic plane wave with wavelength λ0 . The light propagates
into the Fraunhofer diffraction region at a distance z1 , where the resulting amplitude
pattern is:
$$E[x,y;z_1]=\frac{E_0}{i\lambda_0 z_1}\exp\!\left[+2\pi i\frac{z_1}{\lambda_0}\right]\exp\!\left[+i\pi\frac{x^2+y^2}{\lambda_0 z_1}\right]F\!\left[\frac{x}{\lambda_0 z_1},\frac{y}{\lambda_0 z_1}\right]$$
This pattern illuminates the 2-D aperture function p [x, y] and then propagates the
distance z2 into the Fraunhofer diffraction region (determined by the support of p).
A second application produces the amplitude at the observation plane:
\begin{align*}
E[x,y;z_1+z_2] &= E_0\,\frac{1}{i\lambda_0 z_1}\,e^{+2\pi i\frac{z_1}{\lambda_0}}\,e^{+i\pi\frac{x^2+y^2}{\lambda_0 z_1}}\cdot\frac{1}{i\lambda_0 z_2}\,e^{+2\pi i\frac{z_2}{\lambda_0}}\,e^{+i\pi\frac{x^2+y^2}{\lambda_0 z_2}}\\
&\qquad\cdot\left.\mathcal{F}_2\left\{F\!\left[\frac{x}{\lambda_0 z_1},\frac{y}{\lambda_0 z_1}\right]\cdot p[x,y]\right\}\right|_{\xi=\frac{x}{\lambda_0 z_2},\ \eta=\frac{y}{\lambda_0 z_2}}\\
&= E_0\left(-\frac{1}{\lambda_0^2 z_1 z_2}\right)e^{+2\pi i\frac{z_1+z_2}{\lambda_0}}\,e^{+i\pi\frac{x^2+y^2}{\lambda_0}\left(\frac{1}{z_1}+\frac{1}{z_2}\right)}\\
&\qquad\cdot(\lambda_0 z_1)^2\left.\bigl(f[-\lambda_0 z_1\xi,-\lambda_0 z_1\eta]\ast P[\xi,\eta]\bigr)\right|_{\xi=\frac{x}{\lambda_0 z_2},\ \eta=\frac{y}{\lambda_0 z_2}}\\
&= E_0\left(-\frac{z_1}{z_2}\right)e^{+2\pi i\frac{z_1+z_2}{\lambda_0}}\,e^{+i\pi\frac{x^2+y^2}{\lambda_0}\left(\frac{1}{z_1}+\frac{1}{z_2}\right)}\left(f\!\left[-\frac{z_1}{z_2}x,-\frac{z_1}{z_2}y\right]\ast P\!\left[\frac{x}{\lambda_0 z_2},\frac{y}{\lambda_0 z_2}\right]\right)\\
&= \frac{E_0}{M_T}\,e^{+2\pi i\frac{z_1+z_2}{\lambda_0}}\,e^{+i\pi\frac{x^2+y^2}{\lambda_0}\left(\frac{1}{z_1}+\frac{1}{z_2}\right)}\left(f\!\left[\frac{x}{M_T},\frac{y}{M_T}\right]\ast P\!\left[\frac{x}{\lambda_0 z_2},\frac{y}{\lambda_0 z_2}\right]\right)
\end{align*}
where the theorems of the Fourier transform and the definition of the transverse magnification from geometrical optics, $M_T = -z_2/z_1$, have been used. Note that the propagation distances z1 and z2 must both be positive in Fraunhofer diffraction, which requires that MT < 0; the image is therefore "reversed."
The irradiance of the image is proportional to the squared magnitude of the amplitude:

$$|E[x,y;z_1+z_2]|^2=\left|\frac{E_0}{M_T}\right|^2\left|f\!\left[\frac{x}{M_T},\frac{y}{M_T}\right]\ast P\!\left[\frac{x}{\lambda_0 z_2},\frac{y}{\lambda_0 z_2}\right]\right|^2$$
In words, the output amplitude created by this imaging "system" is the product of some constants, a quadratic-phase function of [x, y], and the convolution of the input amplitude scaled by the transverse magnification with a scaled replica of the spectrum of the aperture function, P[x/(λ0 z2), y/(λ0 z2)]. Since the output is the result of a convolution, we identify this spectrum as the impulse response of a shift-invariant convolution that is composed of two shift-variant Fourier transforms and multiplication by a quadratic-phase factor of [x, y]. This system does not satisfy the strict conditions for shift invariance because of the leading quadratic-phase factor and the fact that the input to the convolution is a scaled and reversed replica of the input to the system. That said, these details are often ignored to allow the process to be considered shift invariant. We will revisit this conceptual imaging system after considering the mathematical models for optical elements.
3.2 Equipment:
1. He:Ne laser;
3. pinhole spatial filter, including a microscope objective to expand the beam and a pinhole aperture to "clean up" the beam. Note that a higher-power objective produces a larger beam over a shorter distance, but requires a smaller pinhole. The instructions for using the spatial filter are included in the section "Setting Up Optical Experiments" in the "Introduction" chapter of this lab manual.
6. aluminum foil, needles, and razor blades to make your own objects for diffrac-
tion;
3.3 Procedure:
1. Set up the laser on the optical bench; make sure you have positioned it at a
height that allows you to use the other carriers that hold the object and lenses.
Also make sure that the laser is parallel to the rail at a fixed height. It may be
convenient to mark the location of the laser spot on the wall to assist positioning
of the spatial filter.
2. Add the spatial filter to the setup on the bench with only the microscope objective (without the pinhole). Adjust the x-y position of the filter (using the x-y position on the carrier for coarse adjustment and the x-y micrometers on the spatial filter for fine adjustment) to locate the center of the expanded laser beam at the same location as the unexpanded laser spot.
Adjust the distance between the objective and the pinhole by turning the mi-
crometer for the z-axis. While doing this, watch the variation in the “bright-
ness” and “shape” of the pattern on an index card. Slowly move the x, y, and
z micrometers to obtain the “brightest” and “cleanest” beam. This likely will
take a while the first few times you do this, but be assured that your skills will
improve with practice.
4. Insert the converging lens from the Metrologic kit into the laser output to bring
the beam to a focus; mount the pinhole from the Metrologic kit and place it
at the focus of the lens (as best you can); record the diffraction pattern of the
circle with the digital camera; this is the Airy disk pattern.
5. Set up the experimental bench as in the figure with the observing screen close to the aperture (propagation distance z1 ≈ 300 mm or so) to examine the results in the Fresnel diffraction region. The first positive lens should have a short focal length to ensure that the cone angle of the expanded beam is large. Measure and record the distances and lens parameters. A number of apertures are available for use, including single and multiple slits of different spacings, single and multiple circular apertures, needles (both tips and eyes), razor blades, etc. In addition, aluminum foil and needles are available for making your own apertures.
(a) For the first object, use a sharp edge, such as a single-edge razor blade.
Record the Fresnel diffraction pattern with the digital camera (without a
lens) “close” to the location of the edge object.
(b) Without changing the propagation distance, place a second sharp edge at
approximately 90◦ from the first and record the pattern.
(c) Move the camera farther back and repeat the same measurements for the
single edge and for two edges at approximately 90◦ .
(d) Repeat for the camera as far from the object as you can feasibly do given
the constraints of the laboratory.
(e) Make a single slit or a square aperture, using either razor blades or aluminum foil. Repeat the same measurements used for the edges. Note
the form and size of the diffraction pattern by sketching how its “lightness”
varies with position and note the sizes and locations of any features. For
a slit or circular aperture, you should note light and dark regions in the
pattern; measure the positions of some maxima and minima (at least 5).
Use the data to derive a scale of the pattern. Sketch the pattern noting
the scale.
(f) Repeat the previous step with a “wider” slit or aperture. Note the differ-
ence in the results.
(g) Vary the distance between the screen and the diffracting object. Repeat
measurements. What is the relation between the change in distance and
the change in scale of the pattern? Repeat for 5 different distances where
the character of the pattern remains the same.
6. For a slit or aperture (square or round), observe the diffraction pattern at a large
distance from the aperture (several feet away for a small aperture, a propor-
tionally larger distance for a larger aperture) to examine Fraunhofer diffraction.
You may “fold” the pattern with one or two mirrors or you may use a lens to
“image” the pattern, i.e., to bring the image of the pattern created “a long
distance away” much closer to the object. Whichever method you use, be sure
to use the same setup for all measurements.
(a) For an aperture of a known fixed (small) size, increase the distance to
the observation plane as much as you can. Estimate the location of the
transition between the Fresnel and Fraunhofer diffraction regions (this will
certainly be ill-defined and “fuzzy”). Record and justify your measure-
ment.
(b) Add another lens to the system as shown below to “bring infinity closer”
(f) Now overlay a periodic structure (grid) with a circular aperture and observe
the pattern. The overlaying of the two slides produces the product of the two patterns (also called the modulation of one pattern by the other). The Fraunhofer diffraction pattern will be the convolution of the diffraction patterns of the two individual slides.
(g) Examine the image and diffraction pattern of the transparency Albert
(Metrologic slide #24). Note the features of the diffraction pattern and
relate them to the features of the transparency.
(h) Examine the pattern generated by a Fresnel Zone Plate (Metrologic slide
#13) at different distances. The FZP is a circular grating whose spacing
decreases with increasing distance from the center. Sketch a side view of
the FZP and indicate the diffraction angle for light incident at different
distances from the center of symmetry. You might also overlap another
transparency (such as a circular aperture) and the FZP and record the
result. I guarantee that this result will not resemble that of part d.
Metrologic slide #21: “chirped grating” whose spatial frequency changes with
position.
(i) If time permits, you can also find the diffraction patterns of other objects,
such as the tip and/or the eye of the needle.
3.4 Questions
1. This experiment demonstrates that interaction of light with an obstruction will
spread the light. For example, consider Fresnel diffraction of two identical
small circular apertures that are separated by a distance d. How will diffraction
affect the ability to distinguish the two sources? Comment on the result as the separation d is made smaller.
2. The Fresnel Zone Plate (Metrologic slide #13) may be viewed as a circularly
symmetric grating with variable period that decreases in proportion to the radial
distance from the center. It is possible to use the FZP as an imaging element
(i.e., as a lens). Use the model of diffraction from a constant-period grating
to describe how the FZP may be used to “focus” light in an optical imaging
system. This may be useful for wavelengths (such as X rays) where imaging
lenses do not exist.
Chapter 4

Lab #3: Computer-Generated Holography
4.1 Theory:
In this lab, you will create “holographic” images of synthetic planar objects that
you create. These “computer-generated holograms” — “CGH” is the common “TLA”
(“three-letter acronym”) — are mappings of the complex amplitude (magnitude AND
phase) of light reflected from or transmitted by synthetic objects. The wavefronts
that would emerge from point sources in the object are “reconstructed” by the light
diffracted by the hologram. This technology is very useful in optical testing, where the
desired wavefront that would be produced by an optical element may be simulated by
a CGH and then used to test the fabricated optic (this is what was done — incorrectly
— for the primary mirror of the Hubble Space Telescope in the 1980s).
Holography is based on the diffraction of light by an object, which may be
viewed as a result of Huygens’ principle that light propagates by creating spheri-
cal “wavelets”. A hologram is the record of the interference pattern generated by the
wavefront from an object and from a known source (the reference). The principle of
holography was first described by Dennis Gabor in the 1940s, but did not become
practical until lasers that produced (nearly) monochromatic light became available in
the 1960s. The modifications to the holographic process that were first implemented
by Leith and Upatnieks vastly improved the process and allowed 3-D objects to be
reconstructed easily by laser illumination.
In the early days of holography, when holograms were made "for real" on an optical bench, much research was devoted to improving the spatial resolution of the photochemical emulsions used to record them. Spatial resolutions of up to 3000 cycles per mm were obtained for silver halide emulsions and up to 5000 cycles per mm for photopolymer materials. With the rapid decline of chemical photography, these materials are no longer readily available.
The description of a hologram is based on the theory of optical diffraction, which
is specified in three regions that differ in their distance from the source (object). The
three classes of diffraction region are:
47
The term $(i\lambda_0)^{-1}$ is a phase term that is required for normalization; the factor $z_1^{-1}$ is the approximation of the inverse-square law in the Fresnel diffraction region; the factor $\exp\!\left[+i\,2\pi\frac{z_1}{\lambda_0}\right]$ is a constant phase term due to the propagation from the source plane to the observation plane; and the last (and most significant) term $\exp\!\left[+i\pi\frac{x^2+y^2}{\lambda_0 z_1}\right]$ is a correction to the phase due to the off-axis location of the observation point. If the object function is a set of point sources (with the same wavelength λ0) located in the plane specified by z = 0, the "image" of the diffracted light is the superposition of replicas of this impulse response, one for each source point. We can specify the object distribution by a 2-D function f [x, y; z = 0, λ0], and the resulting superposition g [x, y; z1, λ0] at the observation plane may be written as a convolution of the source function with this impulse response.
Computer-generated holograms may be made that operate in the near or far field.
The amplitude in the near field is the convolution of the object and a quadratic-phase
impulse response, while that in the far field is proportional to the Fourier transform of
the object amplitude. The goal of CGH therefore is to figure out the complex-valued
amplitude (magnitude AND phase) at the object plane that generates the desired
complex amplitude at the observation plane for the specific diffraction approximation.
Once the desired object complex amplitude is determined, the only “trick” in CGH
is to figure out a way to encode the complex values in a medium that can only render
real values (in passing, we should note that it is possible to generate phase values in
a medium by changing the optical thickness of an emulsion, but this is beyond the
scope of this project).
The primary goal of this laboratory is to make holograms in the Fraunhofer dif-
fraction region (far field), so that the diffracted amplitude is the Fourier transform of
the object. In this case, the input pattern is easy to calculate from the desired object.
The far-field assumption also means that the object is planar (two-dimensional) and
has no depth. It is possible to encode holograms with depth; a method is briefly de-
scribed later, but since it requires photographic reduction in scale, it is not required
here.
The rendering method we shall use to make the hologram is constrained even
further than the previous comment that the pattern must be real valued. We shall
approximate the complex-valued Fourier transform of the object using only a bitonal
rendering, i.e., transparent apertures with unit transmittance on an opaque back-
ground. The method is known as “detour phase,” because the positions of the trans-
parent apertures ensure that the light takes different paths to reach the observation
plane, and therefore has different arrival times, and therefore different phases, there.
The radiation from these sources is observed at the plane defined by z = z1 , which
is assumed to be sufficiently distant so that the spherical waves are suitably approx-
imated by plane waves. In other words, the description of Fraunhofer diffraction is
appropriate. In such a case, the observed amplitude s [x, y; z = z1 ] is proportional to
the Fourier transform of the source function evaluated at suitably scaled coordinates:
The measurable quantity at the observation plane is the irradiance |s [x, y; z1 ]|2 , which
is proportional to:
\begin{align*}
|s[x,y;z_1]|^2 &\propto \left(1+a_0\,e^{+i\phi_0}\exp\!\left[-i\,2\pi\left(\frac{x_0}{\lambda_0 z_1}\right)x\right]\right)\cdot\left(1+a_0\,e^{-i\phi_0}\exp\!\left[+i\,2\pi\left(\frac{x_0}{\lambda_0 z_1}\right)x\right]\right)\\
&= \left(1+a_0^2\right)+2a_0\cos\!\left[2\pi\left(\frac{x_0}{\lambda_0 z_1}\right)x-\phi_0\right]\\
&= \left(1+a_0^2\right)\left(1+\frac{2a_0}{1+a_0^2}\cos\!\left[2\pi\left(\frac{x_0}{\lambda_0 z_1}\right)x-\phi_0\right]\right)\\
&\equiv \left(1+a_0^2\right)\left(1+\frac{2a_0}{1+a_0^2}\cos\left[2\pi\xi_0 x-\phi_0\right]\right)
\end{align*}

where $\xi_0\equiv\dfrac{x_0}{\lambda_0 z_1}$ is the spatial frequency of the sinusoidal irradiance pattern observed
at the plane z = z1 . This sinusoidal “grating” oscillates over the range of amplitudes
$(1+a_0^2)\pm 2a_0$, and therefore its modulation is:
\begin{align*}
m_0 &\equiv \frac{\left(|s[x,y;z_1]|^2\right)_{\max}-\left(|s[x,y;z_1]|^2\right)_{\min}}{\left(|s[x,y;z_1]|^2\right)_{\max}+\left(|s[x,y;z_1]|^2\right)_{\min}}\\
&= \frac{\left[(1+a_0^2)+2a_0\right]-\left[(1+a_0^2)-2a_0\right]}{\left[(1+a_0^2)+2a_0\right]+\left[(1+a_0^2)-2a_0\right]}=\frac{2a_0}{1+a_0^2}
\end{align*}
The parameters of this sinusoidal grating (its orientation, period, phase at the origin, and modulation) are determined by the physical
properties of the point sources (orientation, separation, relative phase, and relative
amplitude). A recording of the irradiance pattern (as on a photographic emulsion)
preserves the evidence of these properties, but it would be desirable to find a method
for “reconstructing” images of the sources from this recording. In other words, we
would like to find an “inverse filter” for the process of generating the sinusoidal grat-
ing.
To find the “inverse filter” for the process, it is necessary to model the photo-
graphic process mathematically, including the photographic storage of the pattern.
We all know that photographic emulsions record “negative” images of the irradiance
distribution, which is more accurately called the scaled "complement" in this context, as the measurable parameter is the optical "transmittance" t, which lies in the interval 0 ≤ t ≤ 1 and thus is nonnegative. The recorded transmittance t is small
where the irradiance |s [x, y; z1 ]|2 is large. If the recording is “linear”, then the trans-
mittance is doubled if the irradiance is halved. We also assume that the emulsion is
“thin” and records only the interference pattern produced in the plane of the emul-
sion. “Thick” holograms record interference patterns in three dimensions and their
description requires further details.
Our goal is to process the emulsion to produce a “hologram” whose optical trans-
mittance is a faithful rendition of the complement of the sinusoidal irradiance, in-
cluding the period, orientation, initial phase, and modulation. In our simple model,
we will assume that the hologram is processed so that exposure to the "average" irradiance (i.e., the "bias" of the grating) produces a transmittance t [x, y; z1] = 1/2, while the modulation of the transmittance t [x, y; z1] is identical to that of the original irradiance |s [x, y; z1]|². The appropriate mapping function u[|s [x, y; z1]|²] is the complement of the incident irradiance after scaling to half-unit bias:

\begin{align*}
t[x,y;z_1] &= u\!\left[|s[x,y;z_1]|^2\right] \propto \left(|s|^2\right)_{\max}-|s[x,y;z_1]|^2\\
&= \frac{1}{2}\left(1-m_0\cos\!\left[2\pi\left(\frac{x_0}{\lambda_0 z_1}\right)x-\phi_0\right]\right)1[y]\\
&\Longrightarrow \frac{1}{2}\left(1-\cos\left[2\pi\xi_0 x-\phi_0\right]\right)1[y]\quad\text{for } a_0=1\\
&\Longrightarrow \left(\frac{1}{2}-a_0\cos\left[2\pi\xi_0 x-\phi_0\right]\right)1[y]\quad\text{for } a_0\simeq 0
\end{align*}

where $\xi_0\equiv\dfrac{x_0}{\lambda_0 z_1}$. This recorded pattern t [x, y; z1] is the hologram. In this case, it is a
simple sinusoidal pattern along the x-axis and constant along y. Note also how simple
this pattern was to calculate — this is the basis for the idea of computer-generated
holography.
A side comment about “reality” is essential here. In real life, it is no simple
task to develop the emulsion so that the transmittance is proportional to the scaled
complement of the irradiance. Nonlinear deviations from this ideal behavior that
complicate the analysis can easily “creep into” the processing. We will consider
the qualitative effects of the nonlinearities after analyzing the ideal result. Also,
this example assumes that the holographic recording is infinitely large. Though this
assumption seems to be unrealistic, it is no more so than the assumption of Fraunhofer
diffraction itself, which is only valid within some region near the optical axis.
Now, replace the processed hologram in its original position and reilluminate it by
light from the on-axis source. The illumination is a plane wave that travels down the
z-axis. The action of the hologram is to modulate the source illumination to produce
an amplitude pattern proportional to t [x, y, z1 ]. The hologram is “reconstructed”
by propagating this amplitude distribution to an observation plane “downstream”
from the hologram. Consider that the propagation distance to the observation plane
is z2 , so that the amplitude is observed at the plane defined by z = z1 + z2 . If
z2 is sufficiently large so that the Fraunhofer diffraction model is appropriate, then
the output amplitude pattern is again proportional to the Fourier transform of the
recorded transmittance pattern t [x, y; z1 ]:
\begin{align*}
&\propto \frac{1}{2}\,\delta\!\left[\frac{x}{\lambda_0 z_2},\frac{y}{\lambda_0 z_2}\right]\\
&\quad-\frac{1}{2}\,\frac{a_0}{1+a_0^2}\,\delta\!\left[\frac{1}{\lambda_0 z_2}\left(x+\frac{z_2}{z_1}x_0\right),\frac{y}{\lambda_0 z_2}\right]\cdot e^{-i\phi_0}\\
&\quad-\frac{1}{2}\,\frac{a_0}{1+a_0^2}\,\delta\!\left[\frac{1}{\lambda_0 z_2}\left(x-\frac{z_2}{z_1}x_0\right),\frac{y}{\lambda_0 z_2}\right]\cdot e^{+i\phi_0}\\
&= \frac{(\lambda_0 z_2)^2}{2}\left(\delta[x,y]-\frac{a_0}{1+a_0^2}\left(\delta\!\left[x+\frac{z_2}{z_1}x_0,y\right]e^{-i\phi_0}+\delta\!\left[x-\frac{z_2}{z_1}x_0,y\right]e^{+i\phi_0}\right)\right)\\
&\propto \delta[x,y]-\frac{a_0}{1+a_0^2}\left(\delta\!\left[x+\frac{z_2}{z_1}x_0,y\right]e^{-i\phi_0}+\delta\!\left[x-\frac{z_2}{z_1}x_0,y\right]e^{+i\phi_0}\right)
\end{align*}
where the scaling property of the Dirac delta function has been applied. The output
irradiance is proportional to the squared magnitude of this pattern, which squares
the weighting factors of the Dirac delta functions:
\begin{align*}
|g[x,y;z_1+z_2]|^2 &\propto \delta[x,y]+\left(\frac{a_0}{1+a_0^2}\right)^2\left(\delta\!\left[x+\frac{z_2}{z_1}x_0,y\right]+\delta\!\left[x-\frac{z_2}{z_1}x_0,y\right]\right)\\
&= \delta[x,y]+\left(\frac{m_0}{2}\right)^2\left(\delta\!\left[x+\frac{z_2}{z_1}x_0,y\right]+\delta\!\left[x-\frac{z_2}{z_1}x_0,y\right]\right)
\end{align*}
where the real-valued character of a0 has been used. In words, this output "image" is a set of three Dirac delta functions: one "on axis" with "power" proportional to unity, and two separated from the origin by $x=\pm\frac{z_2}{z_1}x_0$ with "power" proportional to $\left(\frac{a_0}{1+a_0^2}\right)^2=\left(\frac{m_0}{2}\right)^2$. The "reconstructed" image is a "magnified" replica of the autocorrelation of the source function that has been scaled in both position and amplitude. The intensities of the off-axis Dirac delta functions are scaled by 1/4 if a0 = 1 and by approximately a0² if a0 ≃ 0.
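The three reconstructed orders can be demonstrated with a 1-D simulation: build the sinusoidal transmittance and take its Fourier transform. A sketch in Python/NumPy (the array size, grating frequency, and a0 below are arbitrary choices):

```python
import numpy as np

N = 256
x = np.arange(N)
cycles = 16                      # grating frequency: 16 cycles across the array
a0, phi0 = 0.5, 0.0
m0 = 2 * a0 / (1 + a0 ** 2)      # modulation of the grating

# hologram transmittance: half-unit bias minus the modulated cosine
t = 0.5 * (1 - m0 * np.cos(2 * np.pi * cycles * x / N - phi0))

# Fraunhofer reconstruction ~ Fourier transform of the transmittance;
# expect the on-axis order at k = 0 and the two images at k = +/-16
G = np.fft.fft(t) / N
```

The ratio of first-order to zero-order amplitude is m0/2, consistent with the (m0/2)² "power" weighting derived above.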
4.2 Equipment:
1. Computer running IDL (or other software tool that includes a random number
generator and the discrete Fourier transform);
3. He:Ne Laser, optical bench, and optical components to reconstruct the image.
4.3 Procedure:
This procedure was adapted (i.e., stolen) from that used by Dr. William J. Dallas
of the University of Arizona (http://www.radiology.arizona.edu/dallas/CGH.htm). It
computes a “detour-phase” computer-generated hologram (CGH) by representing the
complex amplitude of the Fourier transform of the object at one location in a cell
made from multiple bitonal pixels. Put another way, each cell provides a quantized
approximation of the complex-valued Fourier transform, which then may be placed
in a beam of coherent light that propagates to the Fraunhofer diffraction region to
evaluate the Fourier transform of the Fourier transform. The phase change in the
light is generated by changing the distance that the light must travel to arrive at
the observation point, which then adds to light that traveled other paths and may
“add” constructively or “cancel” destructively. In this way, the light distribution of
the original object may be reconstructed (at least approximately).
Principle of detour phase: (a) the transparent apertures are equally spaced so that
the optical path length through the aperture is the same for all paths, which means
that the light arrives at corresponding locations with the same optical phase; (b)
three of the transparent apertures have been displaced, so the optical path lengths
through those apertures to the same observation locations are longer, thus changing
the optical phase.
The method used here was originally implemented by Adolf Lohmann in the 1960s,
and is particularly attractive because it is both simple to implement and yet also
produces fairly good images of simple objects. Other variations of this method have
also been implemented, as described later.
In the Lohmann hologram, an approximation of the complex-valued Fourier trans-
form is displayed as a real-valued array of “bitonal” apertures. The pixels in the
pattern can take on one of two values for the transmittance: “0” (opaque) or “1”
(transparent). The algorithm encodes the Fourier transform of the object pattern,
and thus assumes that the hologram is reconstructed by propagating the light to the
Fraunhofer diffraction region, where diverging spherical wavefronts propagating from
a point object may be approximated by plane waves. The object is assumed to be
planar (2-D) because the light is assumed to propagate a large distance from the
hologram (into the Fraunhofer diffraction region), so that the wavefronts are assumed
to be planar. The differences in the time required for light to travel from points at different depths in a 3-D object may be modeled if the light diffracts into the Fresnel diffraction region.
Consider the Argand diagram of the complex amplitude of a sample after normalizing the magnitude, and the corresponding Lohmann cell, where the magnitude |F| ≅ 0.65 and Φ{F} ≅ +π/6. The quantized values in an 8 × 8 cell are |F|quantized = 5/8 = 0.625 (opening a 5-pixel aperture) and Φquantized{F} = +π/4:
Lohmann Type-I hologram cell (left) and Argand diagram showing the 65 available states in an 8 × 8 cell, with a sample of normalized magnitude ≅ 0.65 and phase angle ≅ +π/6. The complex amplitude is assigned to the nearest available quantization level, forming the cell on the right.
where f [n, m] is the bitonal array containing the alphabetic character. The end
result is the object function g [n, m]. The indices [n, m] determine the spatial
position in the array via the relationships:
x = n · ∆x
y = m · ∆y
where ∆x, ∆y are the intervals between samples along the two axes. For our
purposes here, we may just think of ∆x = ∆y = 1 unit. An example for a 1-D
function (a 3-bar chart modeled after the USAF resolution target) is shown in part (c) of the figure:
1-D simulation of detour-phase CGH: (a) bitonal object f [x] consisting of sets of three "bars"; (b) random phase selected from a uniform distribution over the interval −π ≤ φ < +π; (c) Re{f [x]} with random phase; (d) Im{f [x]} with random phase.
4. Compute either the discrete Fourier transform (DFT) or its "fast" relative (the FFT) of the 2-D "centered" complex-valued N × N array g [n, m] with the random phase; this produces a complex-valued array of two indices in the "frequency domain," e.g., G [k, ℓ], where k and ℓ are the indices for the "spatial frequencies" of the sinusoidal components of g [n, m]. In other words, the spatial frequencies of the samples are:

$$\xi = k\cdot\Delta\xi = k\cdot\frac{1}{N\cdot\Delta x}$$
$$\eta = \ell\cdot\Delta\eta = \ell\cdot\frac{1}{N\cdot\Delta y}$$

If ∆x = ∆y = 1 unit of length, then ξ = k/N and η = ℓ/N "cycles per unit length."
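In NumPy, this index-to-frequency mapping is what np.fft.fftfreq provides (a sketch; N and ∆x are arbitrary here):

```python
import numpy as np

N, dx = 64, 1.0
xi = np.fft.fftfreq(N, d=dx)        # frequencies k/(N*dx) in FFT storage order
xi_centered = np.fft.fftshift(xi)   # reordered so the index runs -N/2 ... N/2-1
# with dx = 1, the frequencies are k/N "cycles per unit length"
```

Note that the raw FFT stores the positive frequencies first and the negative frequencies second; fftshift produces the "centered" ordering used in the equations here.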
\begin{align*}
G[k,\ell] &= \sum_{m=-\frac{N}{2}}^{\frac{N}{2}-1}\ \sum_{n=-\frac{N}{2}}^{\frac{N}{2}-1} g[n,m]\cdot\exp\!\left[-2\pi i\,\frac{(nk+m\ell)}{N}\right]\\
&= \sum_{m=-\frac{N}{2}}^{\frac{N}{2}-1}\ \sum_{n=-\frac{N}{2}}^{\frac{N}{2}-1}\left(g_{\mathrm{real}}[n,m]\cos\!\left[2\pi\frac{(nk+m\ell)}{N}\right]+g_{\mathrm{imag}}[n,m]\sin\!\left[2\pi\frac{(nk+m\ell)}{N}\right]\right)\\
&\quad+i\cdot\sum_{m=-\frac{N}{2}}^{\frac{N}{2}-1}\ \sum_{n=-\frac{N}{2}}^{\frac{N}{2}-1}\left(g_{\mathrm{imag}}[n,m]\cos\!\left[2\pi\frac{(nk+m\ell)}{N}\right]-g_{\mathrm{real}}[n,m]\sin\!\left[2\pi\frac{(nk+m\ell)}{N}\right]\right)
\end{align*}

so that

\begin{align*}
\mathrm{Re}\{G[k,\ell]\} &= \sum_{m=-\frac{N}{2}}^{\frac{N}{2}-1}\ \sum_{n=-\frac{N}{2}}^{\frac{N}{2}-1}\left(g_{\mathrm{real}}[n,m]\cos\!\left[2\pi\frac{(nk+m\ell)}{N}\right]+g_{\mathrm{imag}}[n,m]\sin\!\left[2\pi\frac{(nk+m\ell)}{N}\right]\right)\\
\mathrm{Im}\{G[k,\ell]\} &= \sum_{m=-\frac{N}{2}}^{\frac{N}{2}-1}\ \sum_{n=-\frac{N}{2}}^{\frac{N}{2}-1}\left(g_{\mathrm{imag}}[n,m]\cos\!\left[2\pi\frac{(nk+m\ell)}{N}\right]-g_{\mathrm{real}}[n,m]\sin\!\left[2\pi\frac{(nk+m\ell)}{N}\right]\right)
\end{align*}
For further information, see Fourier Methods in Imaging, §15.5.2, pp. 536-538.
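The double-sum definition can be checked directly against a library FFT; for the centered indices used above, the FFT of the ifftshifted array, followed by fftshift, reproduces the direct sum. A sketch in Python/NumPy, practical only for small N:

```python
import numpy as np

def centered_dft2(g):
    """Directly evaluate G[k,l] = sum_n sum_m g[n,m] exp(-2*pi*i*(n*k + m*l)/N)
    with n, m, k, l all running over -N/2 ... N/2-1 (g is stored with n along
    axis 0 and m along axis 1, index -N/2 first)."""
    N = g.shape[0]
    idx = np.arange(-N // 2, N // 2)
    G = np.zeros((N, N), dtype=complex)
    for ki, k in enumerate(idx):
        for li, l in enumerate(idx):
            # phase[i, j] = exp(-2*pi*i*(idx[i]*k + idx[j]*l)/N)
            phase = np.exp(-2j * np.pi * np.add.outer(idx * k, idx * l) / N)
            G[ki, li] = np.sum(g * phase)
    return G
```

For even N, centered_dft2(g) agrees with np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g))), which is the efficient way to evaluate the centered DFT in practice.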
5. Compute the magnitude and phase of the 2-D FFT G [k, ℓ] of the array g [n, m]:

$$|G[k,\ell]|=\sqrt{\left(\mathrm{Re}\{G[k,\ell]\}\right)^2+\left(\mathrm{Im}\{G[k,\ell]\}\right)^2}$$
$$\Phi\{G[k,\ell]\}=\tan^{-1}\!\left[\frac{\mathrm{Im}\{G[k,\ell]\}}{\mathrm{Re}\{G[k,\ell]\}}\right],\quad\text{where }-\pi\le\Phi<+\pi$$

The magnitude and phase of the DFT of the 1-D array shown before are displayed in the figure:

FFT G [k] for the 1-D object g [n] = f [n] · exp [+iφ [n]], where φ [n] is the random phase at the pixel indexed by n: (a) magnitude |G [k]| ≥ 0; (b) phase −π ≤ Φ{G [k]} < +π.
6. Normalize the magnitude at each pixel by dividing by the maximum of all the magnitudes:

$$0\le\frac{|G[k,\ell]|}{|G|_{\max}}\le 1$$
7. Select a cell size for the hologram; this is the size of the bitonal cell that will be used to approximate the complex amplitude (magnitude and phase) of each pixel in the Fourier transform array G [k, ℓ]. I suggest trying 8 × 8 cells, based on the discussion of the array size presented earlier. Note that this enlarges your array by a factor of 8 in each dimension (which should not be a problem).
8. Quantize the normalized magnitudes so that the largest value is the linear dimension of your cell array; i.e., if you are using an 8 × 8 cell, multiply the normalized magnitude |G [k, ℓ]|/|G|max by 8 and then round to the nearest whole number. One way to do this is to evaluate the greatest integer of the scaled factor plus 1/2:

$$\left\lfloor 8\cdot\frac{|G[k,\ell]|}{|G|_{\max}}+\frac{1}{2}\right\rfloor$$
9. You might want to evaluate and examine the histogram of the magnitudes, which should follow a "Rayleigh" distribution of the form:

$$p\left[|G[k]|\right]\propto|G[k]|\cdot\exp\left[-|G[k]|^2\right]$$

(plot of the Rayleigh distribution: likelihood vs. magnitude)
The large values of the magnitude still occur infrequently, but are much more
common than if the random phase had not been included.
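Generating the histogram is straightforward; a sketch in Python/NumPy (the object, fill fraction, and bin count below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = (rng.random((N, N)) < 0.2).astype(float)    # a bitonal "object"
phi = rng.uniform(-np.pi, np.pi, size=(N, N))   # uniform random phase
G = np.fft.fft2(f * np.exp(1j * phi))

mags = np.abs(G).ravel() / np.abs(G).max()      # normalized magnitudes
hist, edges = np.histogram(mags, bins=16, range=(0.0, 1.0))
```

Without the random phase, nearly all of the energy piles up near zero frequency and the histogram is badly skewed; the random phase spreads the energy so that the magnitudes approximate the Rayleigh form.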
10. Quantize the phase at each pixel by dividing the calculated angle by 2π radians, multiplying by the linear dimension of your cell, and rounding to the nearest whole number; this will produce phase numbers in the interval [−4, +3]:

$$\left\lfloor 8\cdot\frac{\Phi\{G[k,\ell]\}}{2\pi}+\frac{1}{2}\right\rfloor$$
11. Now for the trickiest part: we need to make a bitonal 8 × 8 cell for each pixel in the 64 × 64 DFT array, displaying its quantized magnitude and quantized phase. The apertures take the form of transparent holes on an opaque background in each cell of size 8 × 8 pixels. With this cell size, the normalized magnitude is quantized to the interval 0 ≤ |Gnormalized [k, ℓ]| ≤ 8 and the phase is in the interval [−4, +3]. The vertical "length" of the aperture in each cell is the quantized magnitude of that pixel in G [k, ℓ], while the horizontal "position" of the aperture is the quantized phase of the pixel in G [k, ℓ]. A 2-D example of the cells that might be obtained is shown in the figure:
4.3 PROCEDURE: 61
Example of the cell patterns that might be obtained for a particular CG hologram.
The result for the same 1-D example (3-bar target) is shown in the figure, where the "heights" of the cells encode the magnitude and the "positions" (left-right) encode the phase.
Output of CGH after rendering in cells: (a) showing cells with length proportional to
the magnitude of each sample of F [k]; (b) after lengths of cells have been quantized
to integer values between 0 and 8; (c) magnified view, showing that the apertures are
located at different phases.
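One way to render step 11 in code is sketched below; the array names mag_q and phase_q (holding the quantized magnitudes in 0..8 and quantized phases in −4..+3) are assumptions:

```python
import numpy as np

def render_type1(mag_q, phase_q, cell=8):
    """Lohmann Type-I cells: in each cell, open a 1-pixel-wide aperture whose
    height is the quantized magnitude, at the column set by the quantized phase."""
    n, m = mag_q.shape
    holo = np.zeros((n * cell, m * cell), dtype=np.uint8)
    for k in range(n):
        for l in range(m):
            h = int(mag_q[k, l])                  # aperture height, 0..cell
            if h == 0:
                continue
            col = int(phase_q[k, l]) + cell // 2  # map [-4, +3] -> [0, 7]
            r0 = k * cell + (cell - h) // 2       # center the slot vertically
            holo[r0:r0 + h, l * cell + col] = 1
    return holo
```

Applied to a 64 × 64 pair of quantized arrays, this returns the 512 × 512 bitonal hologram.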
12. While you are at it, I suggest also adding the option to increase the width of the
cells to three pixels, thus making a Lohmann Type-III hologram. This is quite
simple — the slot passing the light is replicated in the columns to either side.
The resulting hologram passes more light to the reconstruction, but the average
phase of that cell is still maintained (a description is presented somewhat later
in this chapter). In the cases where the quantized phase is −π or +3π/4, one of the two additional columns must "wrap around" to the other side of the cell.
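The Type-III variant, including the wrap-around at the cell edges, can be sketched the same way (array names are assumptions, as above):

```python
import numpy as np

def render_type3(mag_q, phase_q, cell=8):
    """Lohmann Type-III cells: replicate the 1-pixel slot into the columns on
    either side within the same cell, wrapping at the cell edges."""
    n, m = mag_q.shape
    holo = np.zeros((n * cell, m * cell), dtype=np.uint8)
    for k in range(n):
        for l in range(m):
            h = int(mag_q[k, l])
            if h == 0:
                continue
            c0 = int(phase_q[k, l]) + cell // 2
            r0 = k * cell + (cell - h) // 2
            for dc in (-1, 0, 1):                 # the slot plus both neighbors
                holo[r0:r0 + h, l * cell + (c0 + dc) % cell] = 1
    return holo
```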
13. Add the capability to subtract the quantization error of the just-quantized sample from the complex amplitude of the next sample before quantizing. This step is quite simple and can dramatically improve the fidelity of the reconstruction.
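One simple raster-scan version of this error-diffusion step is sketched below; the choice of quantizer levels and of scan order are assumptions:

```python
import numpy as np

def quantize_with_diffusion(G, levels=8):
    """Quantize each complex sample (magnitude to `levels` steps of the global
    maximum, phase to `levels` bins) and carry the complex quantization error
    of each sample into the next sample of a raster scan."""
    gmax = np.abs(G).max()
    flat = G.ravel().astype(complex)
    out = np.empty_like(flat)
    err = 0.0 + 0.0j
    for i, g in enumerate(flat):
        g = g + err                                   # diffuse the previous error
        mq = np.floor(levels * abs(g) / gmax + 0.5) / levels * gmax
        pq = 2 * np.pi * np.floor(levels * np.angle(g) / (2 * np.pi) + 0.5) / levels
        q = mq * np.exp(1j * pq)
        err = g - q                                   # error carried forward
        out[i] = q
    return out.reshape(G.shape)
```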
14. After creating the bitonal CGH, you can check on its success by simulating the optical reconstruction in the Fraunhofer diffraction region, which merely requires evaluating the squared magnitude of the DFT of the hologram array.
15. Print out the bitonal array with an electrophotographic laser or inkjet printer
on overhead transparency film at a small size. If the printer is capable of 600
dpi, try printing at this scale and perhaps at 300 dpi.
16. Display the hologram by illuminating with a He:Ne laser and view in the Fraun-
hofer diffraction region (i.e., far “downstream” from the hologram). The use
of a lens to move the Fraunhofer region closer to the object is recommended,
and then you may use a negative lens just after the Fourier plane to enlarge the
pattern.
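The simulated reconstruction of step 14 amounts to one more DFT; a minimal sketch with a hypothetical single-slot hologram:

```python
import numpy as np

# hypothetical 4 x 4 array of samples rendered in 8 x 8 cells -> 32 x 32 hologram
holo = np.zeros((32, 32))
holo[12:20, 16] = 1.0                     # one open 8-pixel slot

# irradiance in the Fraunhofer region is the squared magnitude of the DFT
recon = np.abs(np.fft.fftshift(np.fft.fft2(holo))) ** 2
```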
This algorithm for evaluating the Fourier transform appears often in various imag-
ing applications, and is particularly important in optics because optical propagation
may be modeled as convolution with an appropriately scaled quadratic-phase func-
tion.
The process of increasing the width of the cell to pass more light is easy to understand
by example. We will start with the complex amplitude already used to model the
Lohmann Type-I CGH: the aperture in the cell in the original recipe was one pixel
wide placed at the quantized phase location. Lohmann also developed a variation with
apertures that are three pixels wide by including the lines in the apertures on either
side. Consider the Argand diagram of a sample after normalizing the magnitude and
the corresponding Lohmann cell where the magnitude |F| ≅ 0.65 and Φ{F} ≅ π/6.
The sampled values in an 8 × 8 cell are |F|quantized = 5/8 = 0.625 (opening a 5-cell aperture) and Φquantized{F} = +π/4:
The cells on either side could be opened, which maintains the same quantized phase of +π/4, but increases the transmitted light by a factor of 3:
Argand diagram of the same sample for a Lohmann Type-III hologram, showing that
the apertures to either side are opened, thus transmitting more light, while
maintaining the same quantized phase.
If the sample phase is near the edge of the cell, then the aperture is “split” left
and right. Consider a sample with quantized magnitude |F|quantized = 7/8 and phase
Φquantized{F} = −π; the Lohmann Type-I hologram is:
Argand diagram and Lohmann Type-Ib cell for a sample with phase at the edge of
the cell.
and the additional cells are on the “same” side and the “opposite” side of the cell:
Argand diagram and Lohmann Type-III cell for the same sample with phase at the
edge of the cell, showing that the aperture is “split” to either side of the cell. The
quantized phase is unchanged.
4.3.4 Example:
In 2008, Juliet Bernstein used this recipe to make a 512 × 512 bitonal hologram of
a 64 × 64 bitonal letter “A” (obviously using 8 × 8 cells) with the additive random
phase. The hologram was electrophotographically printed onto a transparency at 300
dpi; inkjet printing might work better, since the inkjet transparency is not fused by
heat, which induces random variations in thickness of the transparency, which in turn
produces random variations in the phase of the transmitted light.
Lohmann Type-III bitonal hologram of “A” by Juliet Bernstein in 512 × 512 array.
The random phase ensures that the average magnitude is larger and the 3-pixel-wide
apertures transmit more light.
The replica images in the reconstruction are due to the 8 × 8 cell. The inverted replicas are due to the fact that the hologram is real valued, so the magnitude of its Fourier transform is symmetric (even).
4.4 Questions
1. Explain the number of replicas in the reconstructed image of the hologram.
2. Consider the limitations of the laser printer. The rectangular apertures of the
hologram are approximated by “spots” of toner that are fused onto the trans-
parency film by heat. As the scale of the rendered pattern is reduced, the
spreading out of the toner spots ensures that the desired rectangular patterns
will not be printed. What does this mean for the reconstruction of the holo-
gram? A sketch of the effect of toner spread on the rendered pattern may be
helpful.
3. Your reconstruction is probably pretty noisy. What are the possible mechanisms
that generate the noise?
4.5 Optional Bonus — Yet Other Variants

The Lee and Burckhardt holograms are other variants on the Lohmann methods that are discussed in the book Optical Holography by Collier, Burckhardt, and Lin (listed in the references and in §23.3 in Fourier Methods in Imaging). The Lee method used a two-pixel-wide aperture over four phase levels separated by π/2 radians. Different quantized magnitudes were rendered in each aperture to obtain more quantization levels in complex amplitude. The Burckhardt method did the same thing with a two-pixel-wide aperture over three phase levels separated by 2π/3 radians.
Feel free to try these methods too; once you have the rendering set up, it is easily
adapted to the other methods.
This method for generating holograms of 3-dimensional arrays of point sources was described by Dale Nassar in Circuit Cellar in 1990 (back when we had chemical-emulsion photography!). The article is referenced below with a link to a pdf copy on the course website.
The basis for holography of objects in the near field is the quadratic-phase impulse
response. The hologram of each point source is a Fresnel zone plate. As we saw in
the lab, the zone plate acts as a lens with both positive and negative focal lengths.
Lenses with different focal lengths have different “chirp rates.” The hologram of a set
of point sources is really the summation of a set of Fresnel zone plates.
Fresnel zone plate, which is the thresholded interference pattern of the emission from a point source with a reference plane wave.
The increase in spatial frequency of the zone plate with increasing radius is the
reason for the ultimate limit in the capability of the Fresnel hologram; you can only
sample up to some maximum frequency without aliasing.
The pattern computed for the hologram must then be rendered on some medium
that may be inserted into the laser beam (Nassar used a pen plotter). You have
the capability of printing the pattern using a laser printer, with the caveat that the
scale of the rendered pattern must be small enough so that the diffraction angle
is sufficiently large to see the reconstruction. This is the reason for photographic reduction of the printed hologram. The extra step makes the process harder to do with current technology (an irony that does not escape notice!).
Note that the output irradiance is the squared magnitude of the output amplitude, so that the magnitude is the square root of the desired squared magnitude if using a gray-scale object.
4.7 References:
R.L. Easton, Jr., §23 in Fourier Methods in Imaging, John Wiley & Sons, 2010.
Reynolds, G.O., J.B. DeVelis, G.B. Parrent, and B.J. Thompson, The New Phys-
ical Optics Notebook, SPIE, 1989.
Iizuka, K., Engineering Optics, Springer-Verlag, 1984.
Dallas, W.J., "Computer-Generated Holograms", §6 in The Computer in Optical Research, B.R. Frieden, ed., Springer-Verlag, 1980.
Collier, Robert J., C.B. Burckhardt, and L.H. Lin, Optical Holography, Academic Press, 1971.
B.R. Brown and A.W. Lohmann, Appl. Opt. 5, 967, 1966.
A.W. Lohmann and D.P. Paris, Appl. Opt. 6, 1739, 1967.
R.L. Easton, Jr., R. Eschbach, and R. Nagarajan, Error diffusion in cell-oriented
holograms to compensate for printing constraints, Journal of Modern Optics 43(6),
1219-1236, 1996.
Chapter 5

5.1 Theory:
The laboratory on Fraunhofer diffraction showed that the amplitude after light has propagated sufficiently far from a 2-D input distribution f[x, y] is proportional to the 2-D Fourier transform mapped back to the space domain by the scale factor λ0 z1:

g[x, y] ∝ F[ξ, η]|ξ→x/(λ0 z1), η→y/(λ0 z1)

where the propagation distance z1 satisfies the accepted condition for Fraunhofer diffraction, such as the rigorous constraint that z1 greatly exceed π(x² + y²)max/λ0 evaluated over the input aperture.
72 CHAPTER 5 LAB #4: FOURIER OPTICS
You have also seen that this large distance may be brought "closer" by adding a lens after the input function to produce a practical system:
Besides using the “compact” Fourier transform system in this lab, you will also
add a second lens to compute the “Fourier transform of the Fourier transform.” The
most obvious way to do this is to place the second lens after the Fourier transform
plane, as shown:
4-f optical “correlator” composed of two cascaded 2-f Fourier transformers. The
frequency domain is accessible at the midpoint plane.
In words, the second lens is located one focal length away from the Fourier transform
plane and the output is observed one focal length away from the lens. For an obvious
reason, this is called a “4f” imaging system. If the Fourier transform of the desired
impulse response (psf) of a filter is placed at the midplane, then the system is some-
times called a “4f optical convolver” because it may evaluate the convolution of the
input with the impulse response. By rotating the object about the optical axis by
180◦ , the correlation of the two functions may be evaluated, which is the reason for
the other common name of “4f optical correlator.”
It is easy to trace a ray from an “arrow” located at the object plane parallel to the
axis. The “image” of this ray will be inverted (“upside down”), which indicates that
the “Fourier transform of the Fourier transform” is a reversed replica of the function:
F2{F2{f[x, y]}} = F2{F[x/(λ0 f), y/(λ0 f)]} ∝ f[−x, −y]
Demonstration that the output of the 4f-system is a reversed replica f [−x, −y] of
the input function f [x, y] .
This imaging system makes the Fourier transform F [ξ, η] of the input function
f [x, y] “accessible” where it can be modified. For example, a function of the form
H1(r) = CYL(r/d0)
placed at the Fourier transform plane will pass the light near the optical axis and
block light away from the optical axis — it acts as the transfer function of a lowpass
filter. The complement of this function of the form
H2(r) = 1 − CYL(r/d0)
would block the light near the axis (sinusoidal components with small spatial frequen-
cies) and pass those with large frequencies — it is a highpass filter.
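These two transfer functions translate directly into array masks; a sketch (the grid size and diameter are arbitrary choices):

```python
import numpy as np

def cyl_mask(shape, d0):
    """CYL(r/d0): a transparent disk of diameter d0 pixels, centered in the array."""
    yy, xx = np.indices(shape)
    r = np.hypot(xx - shape[1] // 2, yy - shape[0] // 2)
    return (r < d0 / 2).astype(float)

lowpass = cyl_mask((256, 256), 32.0)     # H1: passes frequencies near the axis
highpass = 1.0 - lowpass                 # H2: complement, blocks the axis
```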
The 2-f Fourier transforming system theoretically produces the “exact” Fourier
transform (magnitude AND phase) at the output, subject to the quality of the beam
and the lenses used. However, the fact that sensors measure the squared magnitude
means that the phase often is not important. A simpler system that does produce the
correct squared magnitude is shown in the next figure. The first lens forms the Fourier
transform of the input object f [x, y] at the image of the point source, while the second
lens with focal length f2 is located to image the object via the imaging equation. The
spatial filter at the midplane removes some of the frequency components from the
object spectrum, thus changing the spectrum of the filtered image.
Fourier filtering system that does not produce accurate phase. The first lens forms
the Fourier transform at the image of the point source (at the “midplane”), where
the transfer function of the magnitude filter is placed. The second lens generates the
second Fourier transform
5.2 Equipment:
1. He:Ne laser
2. microscope objective to expand the beam; larger power gives larger beam in
shorter distance;
3. pinhole aperture to “clean up” the beam;
4. slit (Metrologic slide #15);
5.3 Procedure:
1. Set up the experimental bench to see Fraunhofer diffraction with a lens, so that
the output is the Fourier transform of the input. Then add a second lens L2 to
compute the Fourier transform of the Fourier transform.
2. Make a pinhole aperture with a needle and a white index card. This pinhole
will be used to position lens L2 . Place the pinhole at the Fourier transform
plane made by the first lens.
3. Place lens L2 one focal length from the Fourier transform plane and a mirror one
focal length behind the lens. Look at the image of the pinhole on the rear side
of the index card while moving lens L2 along the optical axis. The correct
location of lens L2 occurs where the image of the pinhole is smallest. Remove
the pinhole without moving the holder for the pinhole; this is the location of
the Fourier transform plane.
Alignment procedure
4. Replace the mirror by a viewing screen and insert a white light source as shown:
4-f Optical Imaging System, with Fourier plane between the two lenses.
The image of the transparency should be in focus. If not, then you need to readjust the focus.
5. Set up the simplified Fourier filtering system shown above and repeated below.
6. Observe the images of the following Metrologic slides: #10 (medium grid), #13
(concentric circles with variable widths; this is a Fresnel zone plate); #19 (fan
pattern).
7. Put Metrologic slide #10 (medium grid) at the input plane. The slides #3
(circular aperture), #15 (narrow slit), and a square aperture from #16 will be
used as filters placed at the Fourier transform plane. You also may want to use
a small pinhole as a filter; pierce a piece of aluminum foil with a needle and
place at the Fourier transform plane.
8. Use the same setup, but replace the input with Metrologic slide #7 (concentric wide circles).
9. Use slide #22 (simulation of cloud-chamber photograph) as the object and slide
#23 (transparent bar with obstruction at center) as the filter. Position the filter
so that the transparent bar is perpendicular to the lines in the input.
10. Use slide #25 as the input and #23 as the filter.
(a) Orient the filter bar in the vertical direction; observe the output.
(b) Orient the filter bar in the horizontal direction; observe the output.
11. Use slide #24 (halftone image of Albert Einstein) as the input and a variable-
diameter iris or slides #17 and #18 (circular apertures) as the filter. Experiment
with the filter.
5.4 References:
R.L. Easton, Jr., Fourier Methods in Imaging, John Wiley & Sons, 2010.
E. Hecht, Optics, Chapters 10 and 11.
Chapter 6
Historically, the assessment of optical system performance has been fraught with
difficulties and misunderstandings. There is a tendency (particularly by people in
industrial management) to seek a single metric that describes the action of an optical
system; such a single number is often called “resolution,” but it does not come close to
describing the performance of the system. A more appropriate metric is the modulation transfer function (MTF), which is a real-valued function that describes how well the modulation of each sinusoidal frequency component is transmitted from the input object to the output image for a system acting in "natural" ("incoherent") light. This function describes the action of the lens system.
For imaging systems with circularly symmetric apertures, the MTF is circularly
symmetric as well and is usually presented as a 1-D function of spatial frequency
measured along a radial axis. If the aperture is rectangular or nonsymmetric (such
as a rectangular pupil or a “multiple-aperture system”), then it may be necessary to
present multiple graphs plotted along different axes.
6.1 Theory

Consider a biased sinusoidal input s[x] = A0 + A1 · cos[2πξ0 x] with bias A0 and amplitude A1, so that its extreme values are:

smax = A0 + A1
smin = A0 − A1

The modulation of s[x] is the ratio of the oscillating amplitude to the constant part:

ms[ξ0] = [(A0 + A1) − (A0 − A1)] / [(A0 + A1) + (A0 − A1)] = A1/A0
In any situation where the minimum amplitude is 0, then A0 = A1 and the modulation
ms = 1. In the trivial case where the maximum and minimum values are identical
(the biased “sinusoid” is a constant value), then A1 = 0 and ms = 0; the function
has “no modulation.”
If the imaging system satisfies the linearity criterion, then the output generated from a sinusoidal input at frequency ξ0 must be a sinusoid with that same frequency, and the form of the output might be:

g[x] = B0 + B1 · cos[2πξ0 x]

where B0 and B1 are determined by the action of the system. Therefore, the modulation of each frequency of the output g[x] has the same form:

mg[ξ0] = [(B0 + B1) − (B0 − B1)] / [(B0 + B1) + (B0 − B1)] = B1/B0
The "modulation transfer" at the specified spatial frequency ξ0 is the ratio of the output and input modulations:

MT[ξ0] = mg[ξ0] / ms[ξ0]
If the imaging system is nonlinear, then “new” spatial frequencies are generated
in the output signal g [x], which means that it is not possible to evaluate the ratio
to evaluate the modulation transfer. For example, if the system is a "square-law" detector for the biased sinusoidal input signal s[x], the output signal is:

g[x] = (s[x])² = (A0² + A1²/2) + 2A0A1 · cos[2πξ0 x] + (A1²/2) · cos[2π(2ξ0)x]

whose spectrum includes five Dirac delta functions, at the frequencies ξ = 0, ±ξ0, and ±2·ξ0. This means that it is not possible to evaluate the ratio of the modulation at the "new" output frequencies ±2·ξ0, because the corresponding input frequency did not exist.
The range of possible values of the modulation is 0 ≤ ms ≤ +1, which may appear
to suggest (though incorrectly) that the range of possible values of the “modulation
transfer” is the same. However, this is not strictly correct, as we know that it is
possible for the system to “negate” a particular spatial frequency (where “negate” is
used in the sense “multiply by −1”). For example, a digital image processing system
might evaluate a “Laplacian” sharpener (the difference between the original image
and a blurred replica). We know that averaging (“blurring”) a function over a width
larger than a period can “invert” the sinusoid because of the negative lobes of the
SINC function. In such a case, the MTF is negative because the algebraic sign of the
output modulation is negative compared to the input modulation.
Note that the action of the system may also “translate” the input by inducing
a constant phase shift φ0 in the sinusoid. The change in initial phase of the output
compared to the input at each spatial frequency ξ determines the “phase transfer
function,” which may be denoted by the pseudoacronym ΦTF:
ΦTF[ξ] = (φ0[ξ])g / (φ0[ξ])s
We can also evaluate the modulation of the single-frequency biased sinusoid in the
frequency domain; the spectrum is easy to evaluate:
F1{A0 · 1[x] + A1 · cos[2πξ0 x]} = A0 · δ[ξ − 0] + (A1/2) · δ[ξ + ξ0] + (A1/2) · δ[ξ − ξ0]

= A0 · δ[ξ] + (A1/2) · (δ[ξ + ξ0] + δ[ξ − ξ0])
84 CHAPTER 6 LAB #5: MODULATION TRANSFER FUNCTION
This means that the modulation is the scaled ratio of the spectrum evaluated at the frequency in question and at the origin:

ms[ξ0] = 2 · |F[ξ0]| / |F[0]|
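A numerical check of this frequency-domain formula (the sampling and frequency choices are arbitrary):

```python
import numpy as np

N = 64
x = np.arange(N) / N
s = 1.0 + 0.5 * np.cos(2 * np.pi * 4 * x)   # bias A0 = 1, amplitude A1 = 0.5
F = np.fft.fft(s)

# modulation from the spectrum: twice the ratio of the component at the
# frequency of interest (bin 4) to the component at the origin (bin 0)
m = 2.0 * np.abs(F[4]) / np.abs(F[0])       # = A1/A0 = 0.5
```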
Of course, we know that we can decompose any input function f[x, y] into its sinusoidal components with different spatial frequencies via the Fourier transform. The amplitude of a specific spatial frequency component of a 2-D function is:

∫∫ f[x, y] · exp[−i · 2π · (ξ0 x + η0 y)] dx dy = F[ξ0, η0]
If the bias is sufficiently large so that the input function is nonnegative (as in realistic
objects to be imaged), then the modulation of the input function at this frequency is:
m[ξ0, η0] = 2 · |F[ξ0, η0]| / |F[0, 0]|
The modulation transfer function does not include phase effects and may be cas-
caded. In other words, if the system consists of several stages, each with its own
MTF, then the MTFs of each stage may be multiplied to evaluate the MTF of the
entire system.
Recall that a linear and shift-invariant imaging system may be modeled mathematically as the convolution of the input function with an impulse response. In coherent light, the system is linear in amplitude:

g[x, y] = f[x, y] ∗ h[x, y] =⇒ G[ξ, η] = F[ξ, η] · H[ξ, η]

where H[ξ, η] is called the Optical Transfer Function (OTF). The transfer function
H is generally complex valued, which means that the filter affects both the magnitude
and phase of the spectrum of the input. It often is convenient to write the OTF as magnitude and phase:

H[ξ, η] = |H[ξ, η]| · exp[i · Φ{H[ξ, η]}]

so the corresponding imaging equation relates the "intensity" of the input and of the output, where the incoherent impulse response h is the squared magnitude of the impulse response in coherent light:

h[x, y] ≡ |h[x, y]|²
This allows us to normalize the transfer function by dividing the amplitude at each point by that at the origin, so the result could theoretically have values between ±1:

−1 ≤ H[ξ, η] / (H[ξ, η])max ≤ +1
This ratio is the modulation transfer of the imaging system; it specifies the multi-
plicative factor applied to the amplitude at each spatial frequency by the action of
the system:
H[ξ, η] / (H[ξ, η])max ≡ MT[ξ, η]
If the system optics are circularly symmetric, then so is the transfer function:

H[ξ, η] → H[√(ξ² + η²)] = H(ρ) =⇒ MT[ξ, η] → MT(ρ)

The radial plot of the modulation transfer for each radial frequency is the Modulation Transfer Function (MTF).
Consider first a simple lens with a circular pupil and no manufacturing imperfections,
so that its behavior is due to diffraction only. Its pupil function is a cylinder:
p(r) = CYL(r/d0) ≡
  0    if r > d0/2
  1/2  if r = d0/2
  1    if r < d0/2
so the impulse response in the Fraunhofer diffraction region (at the focus of the lens)
in monochromatic (“coherent”) light with wavelength λ0 is a scaled replica of the 2-D
Fourier transform of the pupil, which is a “sombrero” or “Besinc” (“Bessel SINC”)
function:
h[x, y; z2, λ0] → h(r; z2, λ0) = π(d0/2)² · SOMB(d0 ρ)|ρ→r/(λ0 z2)

= (πd0²/4) · [2 · J1(π d0 r/(λ0 z2))] / [π d0 r/(λ0 z2)]
where the last step follows because the circular pupil is (by definition!) symmetric.
The transfer function in polychromatic light with dominant wavelength λ0 is a scaled
replica of the autocorrelation of the pupil function:
The shape of the transfer function is the profile of the autocorrelation of the circle,
which may be shown to be (Goodman, Introduction to Fourier Optics, Eq. 6-32)
CYL(r) ★ CYL(r) = (2/π) · [cos⁻¹(r) − r · √(1 − r²)] · CYL(r/2)
[Plot: profile of the autocorrelation of the cylinder function vs. r]
If the cylinder function has diameter d0, the formula for the normalized autocorrelation is:

[CYL(r/d0) ★ CYL(r/d0)] / (πd0²/4) = (2/π) · [cos⁻¹(r/d0) − (r/d0) · √(1 − (r/d0)²)] · CYL(r/(2d0))

where the denominator πd0²/4 is the area of the pupil.
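This profile is easy to evaluate numerically; a sketch, with the radial frequency normalized to an assumed cutoff rho_c:

```python
import numpy as np

def mtf_diffraction_limited(rho, rho_c):
    """Autocorrelation-of-CYL profile: the MTF of an aberration-free circular
    pupil, equal to 1 at zero frequency and falling to 0 at the cutoff rho_c."""
    u = np.clip(np.asarray(rho, dtype=float) / rho_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u * u))
```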
6.2 MTF vs SFR

Sinusoidal targets with different spatial frequencies are available (such as from Applied
Image, Inc., which is a local firm near the corner of East Main Street and Culver
Road in Rochester — https://www.appliedimage.com). It is possible to measure the
modulations of the images of the different targets to obtain samples of the MTF at
different spatial frequencies. Applied Image also makes slant-edge and random-noise
targets that may be used to measure MTF. We will consider these later.
To reduce the requirement for a wide dynamic range, it is feasible to measure the response from a "line source" (rather than from a point source) and then convert the result to the form of the MTF. The line source that is "narrow" along the x-axis and "long" along y has the form:

f[x, y] = δ[x] · 1[y]
The image created by a system with impulse response (or point-spread function = "psf") is the convolution of the input and the impulse response, which may be evaluated by integrating the psf along one dimension to obtain the line-spread function:

l[x] = ∫ h[x, y] dy
6.3 MEASURING THE MTF/SFR 89
In words, the line-spread function is the spatial integral of the impulse response along
one dimension. The line-spread function is an integral and therefore reduces the
dynamic range from point to point. To illustrate, consider the impulse response of a
diffraction-limited optical system, so that there are no significant aberrations. The
psf is the squared magnitude of the “sombrero” (besinc) function. For a diffraction-
limited system with pupil diameter d0 and focal length f0 , the psf has the circularly
symmetric form:
h(r) = |π(d0/2)² · [2 · J1(π d0 r/(λ0 f0))] / [π d0 r/(λ0 f0)]|²

= |π(d0/2)² · SOMB(d0 r/(λ0 f0))|²
References:
http://www.edmundoptics.com/technical-resources-center/optics/modulation-transfer-function
http://www.imatest.com/docs/sharpness/
http://www.optikos.com/wp-content/uploads/2013/11/How-to-Measure-MTF.pdf
http://www.normankoren.com/Tutorials/MTF.html
http://www.cis.rit.edu/~rlepci/IMGS322/Fischer_MTF.pdf
6.4.1 Equipment
1. CCD camera (Nikon D50) and 18mm-55mm, f/3.5-5.6 “zoom” lens
2. Optics Kit
3. He:Ne Laser
6.4.2 Procedure(s)
1. SFR of desktop scanner:
(a) Follow Peter Burns' procedure for a desktop scanner using his test targets.
2. SFR of digital camera:
(a) Use a "tilted" razor blade as the knife edge target for the Nikon D50 digital camera.
(b) Measure the SFR when the camera is “sharply” focused
(c) Measure the SFR for several examples of defocus — you should start with
VERY SMALL changes in focus to see if you can distinguish the two curves.
(d) Look for the evidence of the defocus in the resulting SFR curves.
Chapter 7
7.1 Abstract:
The objective of this experiment is to determine how the parameters of the imaging system (aperture setting = "f/stop", focal length, object distance/image magnification) affect the "resolution" of the image and the range of object distances that appear to be "in focus" for a lens — this is what we call the "depth of field" or "focus range." An introduction to the second topic for photographers is available at the website:
http://www.apogeephoto.com/know-your-camera-depth-of-field-experiments/
7.2 Theory:
You already know the relation between object distance z1 , image distance z2 , lens
focal length f, and transverse magnification MT :
1/z1 + 1/z2 = 1/f

MT = −z2/z1
This indicates that only a single object distance z1 will appear to be “in focus”
for a specific image distance z2 . However, the deterioration in “sharpness” with
variation in object distance may be gradual or rapid. This experiment is intended to
demonstrate how this variation is affected by system parameters. In the notes, we
see that the depth of field ∆z is related to the f/# and the image magnification by
the approximate result:

Δz ≈ cd · λ0 · (f/#)² / MT²

where cd is a constant that depends on the assumptions made for the focus criterion.
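The approximate relation is simple to evaluate; in this sketch the He:Ne wavelength and cd = 1 are assumed defaults:

```python
def depth_of_field(f_number, MT, wavelength=632.8e-9, c_d=1.0):
    """Approximate focus range: c_d * lambda0 * (f/#)^2 / MT^2.
    c_d is an order-unity constant set by the chosen focus criterion."""
    return c_d * wavelength * f_number ** 2 / MT ** 2
```

Note that halving |MT| quadruples the focus range, as does doubling the f/# twice over.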
94CHAPTER 7 LAB #6: SPATIAL RESOLUTION AND DEPTH OF FIELD
2. Optics Kit
3. He:Ne Laser
2. Use a fine needle to make two round holes in the aluminum foil as close together
as you can get. The “roundness” of the holes is improved by “spinning” the
needle as you puncture the foil.
3. Mount the foil on a carrier and illuminate from the back by an expanded laser
beam (to reduce the brightness).
4. Move the camera on the rail to be as far from the holes as you can get — the goal here is to just "barely" resolve the two holes when using the laser. Take images.
5. Without changing anything else in the setup, use the white-light optical fiber
illuminator as the light source. Take images again. Compare the two results.
Which illumination source produces images that are “better resolved?” You
might want to display “line scans” across the images in an external program
(e.g., ImageJ) to test the resolution.
6. Repeat the experiment without changing the setup except defocus the lens “a
bit.”
7.3 EXPERIMENTAL SETUP 95
Figure 7.1:
2. You will want to check the depth of field for different aperture stops (f/#) AND
for different object distances (different transverse magnifications). Estimate the
depth of field ∆z in each case from the images you collect. Test these values
of ∆z against the equation and plot the results to see whether the equation
“makes sense.”
Chapter 8
8.1 Theory
Labs on optical imaging systems using the concept of light as a “ray” in geometrical
optics were able to determine the locations of images and their magnifications. How-
ever, the concept of light as a “wave” also is fundamental to imaging, particularly in
its manifestation in “diffraction”, which is the fundamental limitation on the action
of an optical imaging system. “Interference” and “diffraction” may be interpreted as
the same phenomenon, differing only in the number of sources involved (interference
=⇒ few sources, say 2 - 10; diffraction =⇒ many sources, up to an infinite num-
ber). In this lab, the two sources are obtained by dividing the wave emitted by a
single source by introducing two apertures into the system; this is called “division of
wavefront”. In the next lab, we will divide the light by introducing a beamsplitter to
create “division-of-amplitude interferometry”.
and the travelling-wave analogue for two plane waves propagating along the z axis:
A0 · cos[k1 z − ω1 t] + A0 · cos[k2 z − ω2 t] = 2A0 · cos[kmod z − ωmod t] · cos[kavg z − ωavg t]

kmod = (k1 − k2)/2,  ωmod = (ω1 − ω2)/2,  kavg = (k1 + k2)/2,  ωavg = (ω1 + ω2)/2
98CHAPTER 8 LAB #7: DIVISION-OF-WAVEFRONT INTERFERENCE
If the dispersion is normal, the resulting wave is the product of a slow traveling wave with velocity

vmod = ωmod / kmod

and a rapid traveling wave with velocity

vavg = ωavg / kavg
Thus far, we have described traveling waves directed along one axis (usually z).
Of course, the equations can be generalized easily to model waves traveling in any
direction. Instead of a scalar angular wavenumber k, we can define the 3-D wavevector with components along the three Cartesian axes:

k = [kx, ky, kz]

which points in the direction of travel of the wave. The length of the wavevector is proportional to λ0⁻¹:

|k| = √(kx² + ky² + kz²) = 2π/λ0

The equation for a traveling wave in 3-D space becomes:

f[r, t] = A0 · cos[k · r − ω0 t]
A 2-D or 3-D wavefront can exhibit a periodic variation in the phase φ[r, t] = k · r − ω0 t, even if ω1 = ω2 = ω0 → λ1 = λ2 = λ0. If light from a single source is divided into two sections by introducing two apertures into the system, Huygens' principle indicates that the light through the two apertures will "spread" and recombine. When viewed at a single location, the two "beams" of light with the same wavelength recombine with different wavevectors that have the same length |k1| = |k2| = |k|; the Cartesian sums of the squared components of the two wavevectors are the same because λ1 = λ2.
Light of the same wavelength (and same optical frequency) is coherent, which means that it can combine constructively or destructively; the amplitude will "cancel" at locations where the phase difference is an odd multiple of π radians. Two plane waves of the same optical frequency ω0, one traveling in direction k1 and one in direction k2, are:

f1[x, y, z, t] = A0 · cos[k1 · r − ω0 t]
f2[x, y, z, t] = A0 · cos[k2 · r − ω0 t]
If k1 = [0, ky , kz ] and k2 = [0, −ky , kz ], then the wavevectors differ only in the sign of
the y-component and have the same length:
|k1| = |k2| = 2π/λ0 =⇒ λ1 = λ2 ≡ λ0.
The components of the wavevectors may be specified in terms of the angle θ relative
to the z-axis:

kz = |k| · cos [θ] = (2π/λ0) · cos [θ]
ky = |k| · sin [θ] = (2π/λ0) · sin [θ].
The superposition of the electric fields is:

f1 + f2 = A0 · cos [ky y + kz z − ω0 t] + A0 · cos [−ky y + kz z − ω0 t]
= 2A0 · cos [ky y] · cos [kz z − ω0 t]
Note that there is no time dependence in the first term: this is a time-invariant
pattern. Also recall that the measured quantity is the intensity of the pattern, which
is the time average of the squared magnitude. The time-invariant term thus becomes
the visible pattern:
|f1 [x, y, z, t] + f2 [x, y, z, t]|² ∝ 4A0² · cos² [(2πy/λ0) · sin [θ]]
= 4A0² · (1/2) · (1 + cos [(4πy/λ0) · sin [θ]])
= 2A0² · (1 + cos [2πy/D0]), where D0 ≡ λ0/(2 · sin [θ])
where the identity cos² [β] = (1/2) · (1 + cos [2β]) has been used. The intensity pattern has
a cosine form (maximum at the center) and a period D0 proportional to λ0 and inversely
proportional to sin [θ]. If θ is small, the period D0 of the pattern is long. If the
distance L0 to the observation plane is large, then sin [θ] ≅ d0/(2L0) and the period of the
pattern is approximately:

D0 ≅ (λ0 · L0)/d0
which leads to an easily remembered mnemonic expression with the two transverse
distances and the two longitudinal distances appearing on different sides of the
equation:

D0 · d0 ≅ L0 · λ0
Remember that D0 is the period of the “irradiance” fringe calculated from the squared
magnitude of the amplitude.
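As a numerical sketch of this mnemonic, the fringe period D0 can be predicted for a hypothetical configuration; the He:Ne wavelength is real, but the values of d0 and L0 below are illustrative, not prescribed.

```python
# Predict the fringe period D0 from D0 * d0 = L0 * lambda0.
lambda0 = 632.8e-9  # He:Ne laser wavelength, m
d0 = 0.5e-3         # aperture separation, m (hypothetical)
L0 = 2.0            # distance to the observation plane, m (hypothetical)

D0 = lambda0 * L0 / d0  # period of the irradiance fringes, m
print(f"fringe period D0 = {D0 * 1e3:.2f} mm")
```

With these numbers the fringes come out roughly 2.5 mm apart, easily visible by eye.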
The second part of the laboratory uses a single slit and a mirror to produce
interference from the slit and its “image.” In this case, the “imaging” of the slit
reduces the requirement on the matching of the phase of light that illuminates the
pair of apertures in the original setup.
8.2 Equipment:
Setup for observing interference fringes from two apertures separated by the distance
d0 observed after propagating the distance L0 to produce intensity interference
fringes with period D0 .
1. He:Ne Laser;
6. aluminum foil, single-edged razor blades, and needles, to make objects for diffraction;
8.3 Procedure:
1. Use a piece of perfboard as the object. Perfboard has regularly spaced holes
(often separated by 1/10 inch) of approximately equal size. Use the black tape to
cover all but one hole in the perfboard and record the image at the observation
plane (either photographically or by sketch).
2. Open up an adjacent hole to create two holes separated by d0 = 1/10 inch; record
the image.
3. Record images created by two holes separated by a larger interval d2 = 0.2 inch,
and by many holes. Measure the distances L0 , D0 , and d0 to find the approxi-
mate wavelength λ0 of the laser (actual λ0 = 632.8 nm).
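The reduction in step 3 can be sketched in a few lines; `estimate_wavelength` is a hypothetical helper, and the sample measurements (D0 = 1.0 mm, d0 = 0.1 inch, L0 = 4 m) are stand-ins for your own data.

```python
# Invert D0 * d0 = L0 * lambda0 to estimate the laser wavelength from the
# measured fringe period D0, hole separation d0, and distance L0.

def estimate_wavelength(D0_m, d0_m, L0_m):
    """Return lambda0 in meters; all inputs in meters."""
    return D0_m * d0_m / L0_m

inch = 0.0254  # meters per inch

# Hypothetical sample measurements:
lambda0 = estimate_wavelength(D0_m=1.0e-3, d0_m=0.1 * inch, L0_m=4.0)
print(f"estimated lambda0 = {lambda0 * 1e9:.0f} nm")  # compare to 632.8 nm
```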
5. Lloyd’s Mirror
(a) Make a single slit and a double slit in this way: tape the edges of a piece
of aluminum foil to a microscope slide. Cut the slits with a razor blade
so that they are separated by 0.5 mm or less. Make one slit longer than
the other (say, 5 mm or so) so that you can switch from one slit to two
easily. Hold the single slit close to one eye and look at a white light source.
Estimate the angular width of the central maximum by making marks on
a piece of paper that is behind the light source to give the scale (measure
the distances to get the angular size). Then measure the slit width using
a magnifying glass or optical comparator.
(b) Look at a white light source (e.g., the sky or a frosted light bulb) through
the double slit. Describe what you see.
(c) Now make a Lloyd’s mirror by taking a second microscope slide and holding
it as shown in the figure (you might also try the same configuration
except with the aluminum foil slit on the opposite side from your eye). You
can stick the slides together with duct tape, modeling clay, or putty. Adjust
the “mirror” to get as narrow a separation between the slit and its “image”
as possible, say 0.5 mm. Bring the assembly to one eye and focus on a
source at a “large” distance away. Look for a few (3 or 4) black “streaks”
parallel to the slit; these are the “zeros” of the interference pattern due to
destructive interference of the light directly from the source and the light reflected
from the horizontal microscope slide. The light from these two sources is always
coherent (regardless of the source!) but out of phase by 180° due to the
phase change on reflection.
Chapter 9

Lab #8: Division-of-Amplitude Interference

9.1 Theory:
9.1.1 Division-of-Wavefront Interferometry
In the last lab, you saw that coherent (single-wavelength) light from point sources at
two different locations could be combined after having traveled along two different
paths. The recombined light exhibited sinusoidal fringes whose spatial frequency
depended on the difference in angle of the light beams when recombined. The regular
variation in relative phase of the light beams resulted in constructive interference
(when the relative phase difference is 2πn, where n is an integer) and destructive
interference where the relative phase difference is 2πn + π. The centers of the “bright”
fringes occur where the optical paths differ by an integer multiple of λ0 .
The coherence length ℓcoherence is the difference in distance travelled by two beams of light such
that they generate detectable interference when recombined. The coherence time tcoherence is
the time period that elapses between the passage of these two points that can “just”
interfere:

ℓcoherence = c · tcoherence
The coherence time (obviously) has dimensions of time (units of seconds). Light emitted
by a laser spans a very narrow range of wavelengths, and may be approximated
as emitting a single wavelength λ0 . The temporal “bandwidth” of a light source is
the range of emitted temporal frequencies:
Δν = νmax − νmin
= c/λmin − c/λmax = c · (1/λmin − 1/λmax)
= c · ((λmax − λmin)/(λmax · λmin))
= (c · Δλ)/(λmax · λmin)  [cycles/second = Hz]
The reciprocal of the temporal bandwidth has dimensions of time and is the coherence
time.
tcoherence = 1/Δν [s] → ∞ for laser

ℓcoherence = c/Δν [m] = (λmax · λmin)/Δλ → ∞ for laser
These results demonstrate that light from a laser can be delayed by a very long
time and still produce interference when recombined with “undelayed” light. Equivalently,
laser light may be sent down a very long path and then recombined with light
from the source and still produce interference (as in a Michelson interferometer).
Question:
Compute the coherence time and coherence length for white light, which may be
modeled as containing “equal amounts” of light with all frequencies from the blue
(λ ≅ 400 nm) to the red (λ ≅ 700 nm). You will find these to be very small, and thus
it is MUCH easier to create fringes with a laser than with white light. The coherence
length of the source may be increased by filtering out many of the wavelengths, which
also reduces much of the available intensity!
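A quick numerical sketch of the bandwidth and coherence formulas above, applied to white light spanning 400 nm to 700 nm (the variable names are illustrative):

```python
# Coherence time and length for white light from the formulas above.
c = 3.0e8  # speed of light, m/s

lam_min, lam_max = 400e-9, 700e-9                         # blue and red limits, m
delta_nu = c * (lam_max - lam_min) / (lam_max * lam_min)  # bandwidth, Hz
t_coh = 1.0 / delta_nu                                    # coherence time, s
ell_coh = c * t_coh                                       # coherence length, m

print(f"bandwidth        = {delta_nu:.3e} Hz")
print(f"coherence time   = {t_coh:.3e} s")
print(f"coherence length = {ell_coh * 1e6:.2f} micrometers")
```

The coherence length comes out below one micrometer, which is why white-light fringes are so much harder to obtain than laser fringes.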
If the translatable mirror is displaced so that the two beams travel different distances
from the beamsplitter to the mirror (say ℓ1 and ℓ2), then the total optical paths
of the two beams are 2·ℓ1 and 2·ℓ2. The total optical path difference is:

OPD = 2·ℓ1 − 2·ℓ2 = 2·Δℓ [m]
Note that the path length is changed by TWICE the translation distance of the mirror.
The OPD may be scaled to be measured in number of wavelengths of difference simply
by dividing by the laser wavelength:
OPD = (2·Δℓ)/λ0 [wavelengths]
We know that each wavelength corresponds to 2π radians of phase, so the optical
phase difference is the OPD multiplied by 2π radians per wavelength:
OΦD = 2π [radians/wavelength] · (2·Δℓ)/λ0 [wavelengths]
= 2π · (2·Δℓ)/λ0 [radians]
If the optical phase difference is an integer multiple of 2π (equivalent to saying that
the optical path difference is an integer multiple of λ0 ), then the light will combine
“in phase” and the interference is constructive; if the optical phase difference is an
odd-integer multiple of π, then the light combines “out of phase” to destructively
interfere.
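These relations can be packaged into a short classification function; `interference` is a hypothetical helper that converts a mirror translation Δℓ into the OPD and optical phase difference and labels the result (He:Ne wavelength assumed):

```python
import math

def interference(delta_ell, lam0=632.8e-9):
    """Classify the on-axis fringe for a mirror translation delta_ell (m)."""
    opd = 2.0 * delta_ell               # path changes by TWICE the translation
    phase = 2.0 * math.pi * opd / lam0  # optical phase difference, radians
    frac = (opd / lam0) % 1.0           # fractional number of wavelengths
    if math.isclose(frac, 0.0, abs_tol=1e-9) or math.isclose(frac, 1.0, abs_tol=1e-9):
        kind = "constructive"           # OPD is an integer multiple of lam0
    elif math.isclose(frac, 0.5, abs_tol=1e-9):
        kind = "destructive"            # OPD is an odd multiple of lam0 / 2
    else:
        kind = "intermediate"
    return opd, phase, kind

# Translating the mirror by lam0/2 adds one full wavelength of OPD:
print(interference(316.4e-9)[2])  # constructive
# Translating by lam0/4 adds half a wavelength of OPD:
print(interference(158.2e-9)[2])  # destructive
```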
The Michelson interferometer pictured above uses a collimated laser source (such a
system is more properly called a Twyman-Green interferometer); the two beams are
positioned so that all points of light are recombined with their exact duplicates in
the other path, except for (possibly) a time delay if the optical paths are different. If the optical
phase difference of the two beams is 2πn radians, where n is an integer, then the light
at all points recombines in phase and the field should be uniformly “bright”. If the
optical phase difference is (2n + 1) π radians, then light at all points recombines “out
of phase” and the field should be uniformly “dark”.
If the collimated light in one path is “tilted” relative to the other before recombining,
then the optical phase difference of the recombined beams varies linearly across
the field in exactly the same fashion as the Young’s two-slit experiment described
in the handout for the last laboratory. In other words, the two beams travel with
different wavevectors k1 and k2 that produce the linear variation in optical phase dif-
ference. With the Michelson, we have the additional “degree of freedom” that allows
us to change the optical path difference of the light “at the center”. As shown in the
figure, if the optical path difference at the center of the observation screen is 0, then
the two beams combine in phase at that point. If one path is lengthened so that the
optical phase difference at the center is π radians, then the light combines “out of
phase” to produce a dark fringe at the center.
Michelson interferometer with collimated light so that one beam is tilted relative to
the other. The optical phase difference varies linearly across the field, producing
linear fringes just like Young’s two-slit interference.
The strict definition of a Michelson interferometer assumes that the light is an expanding
spherical wave from the source, which may be modeled by deleting the collimating
lens. If the optical path difference is zero (same path length in both arms of the
interferometer), then the light recombines in phase at all points to produce a uniformly
bright field, as shown on the left. However, in general the optical path lengths differ
so that the light from the two sources combine at the center with a different phase
difference, as shown on the right.
Thus the optical path difference varies more quickly at the edges of the observation
plane than at the center, so the fringe period varies with position. In the figure, the
light waves recombine “in phase” at the center of the observation plane, but the
period of the fringes decreases with increasing distance from the center.
9.2 Equipment:
1. Ealing-Beck Michelson Interferometer (only two are available)
2. He:Ne laser
4. White-light source
5. Saran wrap
6. soldering iron
9.3 Procedure:
1. (Do ahead of the lab, if possible) Consider the use of the Michelson interferometer
with a point source of light (expanding spherical waves) where one mirror is
“tilted” and the path lengths are equal. Redraw the optical configuration of the
Michelson interferometer to show the separation of the “effective” point sources
due to the optical path difference in the two arms. The spherical wavefronts
generated by the images of the point source in the two arms interfere to produce
the fringes. Explain the shape of the fringes seen at the output.
2. (Do ahead of the lab, if possible) Repeat for the case of a point source where the
two mirrors are “aligned” so that the beams are not tilted. Again, the spherical
wavefronts generated by the images of the point source in the two arms interfere
to produce the fringes. Explain the shape of the fringes seen at the output.
3. In the first part of the lab, use a lens to expand the laser beam into a spherical
wave. The mirrors generate images of the source of the spherical wave, and the
resulting spherical wavefronts superpose. The squared magnitude of the spatial
modulation of the superposition is the visible fringe pattern. The drawings
below give an idea of the form of the fringes that will be visible for various
configurations of the two mirrors when used with spherical waves.
4. Once you have obtained circular fringes that are approximately centered, move
the translatable mirror to increase or decrease the optical path difference (be
sure that you know which!). Note the direction that the fringes move; in other
words, do they appear from or disappear into the center? Explain.
5. Note that fringes also are produced at the input end of the interferometer. How
do they differ from those at the output end?
6. Place Polaroid filters in each path and note the effect on the interference pattern
as the relative polarizations of the beams are changed. Also try the same
experiment with λ/4 plates, λ/2 plates, and the circular polarizer.
7. Place a piece of plastic wrap in one arm of the interferometer. Describe and
explain its effect.
8. Place a spherical lens (best if the focal length f is very long) in the beam at the
input end and note the effect on the fringes. Repeat with the lens in one arm.
9. Put a source of heat in or under one of the arms of the interferometer; your
hand will work (if you are warmblooded!), but a more intense source such as a
soldering iron works better. Note the effect.
10. The wavefronts in the second part of the lab should be approximately planar;
use a second lens to generate collimated light. See if you can generate a pattern
that is all white or all black, in other words, the pattern is a single bright or
dark fringe. When you have generated the dark fringe, try the experiment with
the heat source in one arm again.
9.4 Questions:
1. (Already mentioned in the text) Compute the coherence time and coherence
length for white light, which may be modeled as containing “equal amounts” of
light with all frequencies from the blue (λ ≅ 400 nm) to the red (λ ≅ 700 nm).
2. (Already mentioned in the text) Redraw the optical configuration of the Michel-
son to show the separation of the effective point sources due to the path-length
difference in the two arms of the interferometer.
3. Explain the direction of motion of the circular fringes when the path length
is changed, i.e., which direction do the circular fringes move if the OPL is
increased? What if the OPL is decreased?
4. When the Michelson is used with collimated light, explain how a single dark
fringe can be obtained. Where did the light intensity go?
5. Explain what happens when a piece of glass (or other material) is placed in one
arm.
9.5 References:
Optics, E. Hecht, sec 9.4
Chapter 10

Lab #9: Polarization
10.1 Background:
This lab introduces the concept of polarization of light. As we have said in class
(and as is obvious from the name), electromagnetic radiation requires two traveling-
wave vector components to propagate: the electric field, often specified by E, and
the magnetic field B. The two components are orthogonal (perpendicular) to each
other, and mutually orthogonal to the direction of travel, which is often specified by
the vector quantity s (the Poynting vector):
s ≡ E × B = det [ x̂ ŷ ẑ ; Ex Ey Ez ; Bx By Bz ]

where x̂, ŷ, and ẑ are the unit vectors directed along the respective Cartesian axes,
det [ ] represents the evaluation of the determinant of the 3 × 3 matrix (whose rows
are written here separated by semicolons), and (Ex, Ey, Ez) and (Bx, By, Bz) are the
Cartesian components of the two fields. You may recall that the “cross product” is
ONLY defined for 3-D vectors.
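The determinant form can be checked numerically; `cross` is a hypothetical helper that expands the determinant into the familiar component formula, and the field amplitudes are arbitrary illustrative numbers:

```python
# 3-D cross product, i.e., the cofactor expansion of the determinant above.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

E = (2.0, 0.0, 0.0)  # E along x-hat (amplitude 2, illustrative)
B = (0.0, 3.0, 0.0)  # B along y-hat (amplitude 3, illustrative)
print(cross(E, B))   # (0.0, 0.0, 6.0): s points along +z
```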
For example, suppose the electric field E is oriented along the x-direction with amplitude
Ex (so that E ∥ x̂, where ∥ indicates “is parallel to”); the electric field is
E = x̂ · Ex + ŷ · 0 + ẑ · 0. Consider also that the magnetic field B ⊥ E is oriented
along the y-direction (B ∥ ŷ, B = x̂ · 0 + ŷ · By + ẑ · 0); then the Poynting vector is

s = E × B = ẑ · (Ex · By)

Thus the electromagnetic wave propagates in the direction of the positive z-axis.
10.1.2 Polarization
In vacuum (sometimes called free space), the electric and magnetic fields propagate
with the same phase, e.g., if the electric field is a traveling wave with phase k0 z −
ω0 t + φ0, then the two component fields are:

E [z, t] = x̂ · E0 cos [k0 z − ω0 t + φ0]
B [z, t] = ŷ · B0 cos [k0 z − ω0 t + φ0]
For waves in vacuum, the electric and magnetic fields are “in phase” and the wave
travels in the direction specified by ŝ = Ê × B̂.
Polarized light comes in different flavors: plane (or linear), elliptical, and circular.
In the first lab on oscillations, we introduced some of the types of polarized light when
we added oscillations in the x- and y-directions (then called the real and imaginary
parts) with different amplitudes and initial phases. The most commonly referenced
type of light is plane polarized, where the electric field vector E points in the same
direction for different points in the wave. Plane-polarized waves with E oriented along
the x- or y-axis are easy to visualize; just construct a traveling wave oriented along
that direction. Plane-polarized waves at an arbitrary angle θ may be constructed
by adding x- and y-components with the same frequency and phase and different
amplitudes, e.g.,

E [z, t] = (x̂ · Ex + ŷ · Ey) cos [k0 z − ω0 t + φ0]
When the two component electric fields are “in phase” (so that the arguments of
the two cosines are equal for all [z, t]), then the angle of polarization, specified by θ,
is obtained by a formula analogous to that for the phase of a complex number:
θ = tan⁻¹ [(Ey · cos [k0 z − ω0 t + φ0]) / (Ex · cos [k0 z − ω0 t + φ0])] = tan⁻¹ [Ey/Ex]
as shown in Figure 2.
In words, the angle of the electric vector is a linear function of z and of t; the angle
of the electric vector changes with time and space.
You may want to reexamine or redo that experiment. A similar configuration
where the amplitudes in the x- and y-directions are not equal is called elliptically
polarized.
Circularly polarized light may be generated by delaying the phase of one component
of plane-polarized light by π/2 radians, or 1/4 of a period. The device for
introducing the phase delay is called a quarter-wave plate. We can also construct a
half-wave plate such that the output wave is:
E [z, t] = E0 · (x̂ · cos [k0 z − ω0 t + φ0] + ŷ · cos [k0 z − ω0 t + φ0 ± π])
= E0 · (x̂ · cos [k0 z − ω0 t + φ0] + ŷ · (− cos [k0 z − ω0 t + φ0]))
= E0 · (x̂ − ŷ) · cos [k0 z − ω0 t + φ0]

θ = θ [z, t] = tan⁻¹ [(−E0 · cos [k0 z − ω0 t + φ0]) / (E0 · cos [k0 z − ω0 t + φ0])] = tan⁻¹ [−1] = −π/4
Circularly polarized light comes in two varieties: right-handed (RHCP) and left-handed
(LHCP). There are two conventions for the nomenclature:
1. Angular Momentum Convention (my preference): Point the thumb of the right
(left) hand in the direction of propagation. If the fingers point in the direction
of rotation of the E-vector, then the light is RHCP (LHCP).
2. Optics (also called the “screwy”) Convention: The path traveled by the E-
vector of RHCP light is the same path described by a right-hand screw. Of
course, the natural laws defined by Murphy ensure that the two conventions are
opposite: RHCP light by the angular momentum convention is LHCP by the
screw convention.
10.2 Equipment:
1. Set of Oriel Polarizers, including linear and circular polarizers, quarter- and
half-wave plates
2. He:Ne laser
10.3 Procedure:
This lab consists of several sections:
Your lab kit includes linear and circular polarizers, and quarter-wave and half-
wave plates.
(a) Orient two polarizers in orthogonal directions and look at the transmitted
light.
(b) Add a third polarizer AFTER the first two so that it is oriented at an
angle of approximately π/4 radians (45◦ ) and note the result.
(c) Add a third polarizer BETWEEN the first two so that it is oriented at an
angle of approximately π/4 radians and note the result.
2. Malus’ Law (Figure 5) This experiment uses laser sources, and a few words
of warning are necessary: NEVER LOOK DIRECTLY AT A LASER
SOURCE. THE INTENSITY IN THE BEAM IS VERY CONCENTRATED
AND CAN DAMAGE YOUR RETINA PERMANENTLY.
(a) Mount a screen behind polarizer P1 and test the radiation from the laser
to see if it is plane polarized and, if so, at what angle.
(b) Measure the “baseline” intensity of the source using the CCD camera (with
lens). You may have to attenuate the light source with a piece of paper
and/or stop down the aperture of the lens.
(c) Insert one polarizer in front of the detector and measure the intensity
relative to the original unfiltered light. How much light does one polarizer
let through? Next, increase the intensity of the light until you nearly get
a saturated image on the CCD.
(d) Add a second polarizer to the path and orient it first to maximize and
then to minimize the intensity of the transmitted light. Measure both
intensities.
(e) Next we will try to confirm Malus’ Law. Two polarizers must again be
used: one to create linearly polarized light, and one to “test” for the
polarization. The second polarizer often is called an analyzer; if possible,
use one of the rotatable polarizers on the optical mounts. Set up the
polarizers and adjust the light intensity so that when they are aligned, the
image on the camera is again nearly saturated.
(f) Measure the transmitted light for different relative angles (say, every 10◦
or so) by rotating the second polarizer. “Grab” an image of the source
through the two polarizers and determine the average pixel value for your
light source at each angle. Plot the data graphically as a function of angle
and compare to Malus’ law:

I [θ] = I0 · cos² [θ]

where θ is the angle between the polarizer and analyzer transmission axes.
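For comparison with your measured pixel values, the expected analyzer transmission can be tabulated directly from Malus’ law; I0 = 200 and the 10° steps below are illustrative choices, not prescribed values:

```python
import math

def malus(I0, theta_deg):
    """Transmitted intensity through an analyzer at angle theta (Malus' law)."""
    return I0 * math.cos(math.radians(theta_deg)) ** 2

I0 = 200.0  # hypothetical average pixel value with the polarizers aligned
for theta in range(0, 100, 10):
    print(f"{theta:3d} deg -> expected intensity {malus(I0, theta):6.1f}")
```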
(a) Look at the reflection of light from a glossy surface (such as reflections of
the ceiling lights from a waxed floor or the smooth top of a desk) through
the polarizer. Rotate the polarizer to find the direction of greatest transmission.
This indicates the direction of linear polarization that is transmitted
by the polarizer. If possible, document the reflection at different
angles by a photograph with a digital camera through the polarizer.
(b) Use a setup shown so that the laser beam reflects from a glass prism; the
prism just serves as a piece of glass with a flat surface and not for dispersion
or internal reflection. Place one of the linear polarizers between the laser
and the first mirror. Focus your attention on the intensity of the beam
reflected from the front surface of the prism — you probably will need to
darken the room.
(c) Rotate the prism on the paper sheet with angular markings to vary the
incident angle θi. At each θi, rotate the polarizer to minimize the transmitted
intensity. At a particular θi, the intensity of the transmitted beam
should be essentially zero. This angle θi of minimum reflection is Brewster’s
angle, where the electric-field vector parallel to the plane of incidence
is not reflected. Measure Brewster’s angle at least 4 times and average to
get your final result.
(d) Remove the polarizer from the incident beam and use it to examine the
state of polarization of the reflected light.
(a) Construct a circular polarizer with the components available and test it
by laying it on a shiny surface (a coin works well). Shine a light source
on the arrangement from above, and test the behavior as you rotate the
linear polarizer. Make sure you can see a variation in intensity of the light
reflected by the coin. You should be able to see one clear minimum in
intensity between 0 and 90◦ .
(b) Orient your CCD camera so you can grab images of the coin at different
orientation angles. Measure the intensity at several (∼
= 7) different angles
from 0 to 90 degrees, including one near the minimum intensity.
6. Polarization by Scattering (Figure 7) (if outside sky is clear and blue) (Yeah,
right, isn’t this in Rochester?)
(a) Examine scattered light from the blue sky for linear polarization. Look at
several angles measured relative to the sun.
(b) Determine the direction where the light is most completely polarized. This
knowledge is useful to determine the direction of polarization of any linear
polarizer.
(c) Test skylight for circular polarization.
10.4 Analysis:
In your writeups, be sure to include the following items:
1. Plot the expected curve for Malus’ Law together with your experimental data.
2. State your final result for Brewster’s angle with its σ/√N uncertainty.
3. Graph your results for the brightness of the coin as a function of the orientation
angle of the linear polarizer. Explain why the minimum occurs where it does.
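Item 2 can be sketched as a mean with a standard error; the four angle readings below are hypothetical sample data (for glass with n ≅ 1.5, Brewster’s angle is tan⁻¹[1.5] ≅ 56.3°):

```python
import math

angles = [56.5, 57.2, 56.0, 56.9]  # hypothetical repeated measurements, degrees

N = len(angles)
mean = sum(angles) / N
# sample standard deviation (divide by N - 1)
sigma = math.sqrt(sum((a - mean) ** 2 for a in angles) / (N - 1))
std_err = sigma / math.sqrt(N)  # the sigma / sqrt(N) uncertainty

print(f"Brewster angle = {mean:.2f} +/- {std_err:.2f} degrees")
```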
10.5 Questions:
1. Consider why sunglasses used while driving are usually made with polarized
lenses. Determine the direction of polarization of the filters in sunglasses. The
procedures to determine the direction of polarization of light reflected from a
glossy surface or scattered by molecules are helpful.
3. Consider the circular polarizer you constructed. If the angle of the plane
(linear) polarizer is not correct, what will be the character of the emerging
light?
Chapter 11

Lab #10: Dispersion
11.1 Sources:
Physical Optics, Robert W. Wood, Optical Society, 1988 (reprint of book originally
published in 1911)
Waves, Frank Crawford, Berkeley Physics Series, McGraw-Hill, 1968.
The Nature of Light, V. Ronchi, Harvard University Press, 1970.
Rainbows, Haloes, and Glories, Robert Greenler, Cambridge University Press,
1980.
Sunsets, Twilights, and Evening Skies, Aden and Marjorie Meinel, Cambridge
University Press, 1983.
Light and Color in the Outdoors, M.G.J. Minnaert, Springer-Verlag, 1993.
11.2 Rationale:
Many optical imaging systems used for scientific purposes (e.g., in remote sensing or
astronomy) generate images from light collected in relatively narrow bands of wavelengths.
Three common choices exist for “splitting” the light into its constituent
wavelengths. The simplest is by inserting “bandpass” filters into the light path that
pass light in only narrow selected bands (you will use such filters in part of this lab).
The second method uses an optical element formed from a periodic pattern of transparent
and opaque regions, called a diffraction “grating,” which forces different wavelengths
to travel along paths at angles proportional to the wavelength. The third method
“disperses” the light, but uses the physical mechanism of refraction and the inherent
property that different wavelengths travel at different speeds in glass. This means
that the angle of refraction from Snell’s law varies with wavelength.
The physics of diffractive dispersion and of refractive dispersion are very different
and also work in “opposite directions,” so to speak (blue light refracts at a larger
angle, red light diffracts at a larger angle). That said, both are used in instruments
for measuring the spectrum of light and you need to understand them both.
Figure 11.1: Cosine grating: the transmittance varies from 1 (“white”) to 0 (“black”)
with period X0 and corresponding spatial frequency ξ 0 .
11.3 Theory:
where λ0 is the wavelength of light and z1 is the propagation distance from the object
to the observation plane in the Fraunhofer diffraction region.
where:
x1 ≡ ξ 0 z1 · λ0 ∝ λ0
If the observation distance z1 is large (as required for Fraunhofer diffraction), then
the angle is proportional to the off-axis distance x1:

θ1 ≅ x1/z1 = ξ0 · λ0 ∝ λ0
In words, the observed pattern from the sinusoidal diffraction grating yields three
“spots” of light, one located at the origin with relative irradiance of 4 units and two
to either side of the origin at distances ±x1 = ±ξ0 z1 · λ0, or at angles ±θ1 ∝ λ0. This
discussion shows that the diffracted distance is proportional to the spatial frequency
of the grating (smaller spacings =⇒ larger frequencies =⇒ larger diffracted angles)
AND MORE IMPORTANTLY (for our purposes) to wavelength (longer λ =⇒
larger diffracted angles). This shows that RED light diffracts at LARGER angles
than BLUE light and that the relationship between the diffracted angle and the
wavelength is proportional (i.e., it is LINEAR with wavelength).
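A sketch of this red-versus-blue contrast, using the exact first-order grating relation sin [θ1] = ξ0 · λ0 (of which the expression above is the small-angle approximation); the grating frequency ξ0 = 500 cycles/mm is a hypothetical value:

```python
import math

xi0 = 500e3  # grating spatial frequency, cycles per meter (500 cycles/mm)

def diffracted_angle_deg(lam):
    """First-order diffracted angle (degrees) for wavelength lam (m)."""
    return math.degrees(math.asin(xi0 * lam))

for name, lam in (("blue", 450e-9), ("red", 650e-9)):
    print(f"{name}: {diffracted_angle_deg(lam):.1f} deg")
```

Red lands at a noticeably larger angle than blue, and for small angles the ratio of the two angles is close to the ratio of the wavelengths.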
The refractive index n of a medium is the ratio of the velocity of light in vacuum to
the phase velocity v in the medium:

n = c/v

where c ≅ 3 × 10⁸ m/s. The “phase velocity” v in the medium is the ratio of the angular
frequency ω and the “wavenumber” k, which is equal to the product of the wavelength
λ and the temporal frequency ν:

v = ω/k = λ · ν
By combining the two equations, we obtain an expression for the refractive index in
terms of the vacuum velocity c, the angular frequency ω, and the wavenumber k.
n = c · (k/ω)
Snell’s law relates the indices of refraction to the angles of light rays in two media.
If the angles of the incident and refracted rays are θ1 and θ2, respectively (measured
from the vector normal to the surface), then Snell’s law is:

n1 · sin [θ1] = n2 · sin [θ2]

Snell’s law may be rearranged to evaluate the refracted angle θ2 in terms of the
incident angle θ1 and the two refractive indices:

θ2 = sin⁻¹ [(n1/n2) · sin [θ1]]
Output angle θ2 as a function of input angle θ1 for constant (nondispersive) refractive
indices: (blue) “rare-to-dense” refraction with n1 = 1 and n2 = 1.5, note that
θ2 < θ1; (red) “dense-to-rare” refraction with n1 = 1.5 and n2 = 1, note that θ2 > θ1;
the linear relationship θ2 = θ1 is shown as a dashed black line.
Refractive Index n vs. Wavelength λ for several media. Note that n decreases with
increasing λ, which means that the velocity of light in the medium increases with
increasing λ. Note the steep slopes of the refractive index for ultraviolet light; there
is an absorption “resonance” at ultraviolet frequencies.
Consider the common “rare-to-dense” refraction where light in air (n1 = 1.0) is incident
upon glass at the angle θ1 = 45°. Since the refractive index n2 of the glass decreases with
increasing wavelength, the refracted angle θ2 also varies with the wavelength:

θ2 = sin⁻¹ [(1/n2) · sin [π/4]]
Output angle θ2 as a function of refractive index n2 for θ1 = 45° and n1 = 1.0. Note
that the refracted angle decreases with increasing values of n2.
This means that the deviation angle for RED light is SMALLER than that for BLUE
light, and that the relationship between angle and wavelength is NONLINEAR. Note
carefully that this is the opposite of the situation for diffractive optics, where red
light is diffracted at larger angles than blue light.
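The trend in the figure can be reproduced directly from the rearranged Snell’s law; `refracted_angle_deg` is a hypothetical helper and the sample indices are illustrative:

```python
import math

def refracted_angle_deg(theta1_deg, n1, n2):
    """theta2 = asin((n1 / n2) * sin(theta1)), angles in degrees."""
    return math.degrees(math.asin((n1 / n2) * math.sin(math.radians(theta1_deg))))

# Rare-to-dense refraction at theta1 = 45 deg: theta2 falls as n2 rises,
# so blue light (which sees the larger n2) is bent more than red light.
for n2 in (1.3, 1.5, 1.7):
    print(f"n2 = {n2}: theta2 = {refracted_angle_deg(45.0, 1.0, n2):.1f} deg")
```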
White light can be dispersed into its constituent frequencies (or equivalently, into
its wavelength spectrum) by utilizing this differential velocity of light in materials
combined with Snell’s law. The intensity of the dispersed light can be measured
and plotted as a spectrum, which is a valuable tool for determining the chemical
constituents of the light source or an intervening medium. In fact, spectral imaging
(forming images of objects in narrow wavebands of light) has become a hot topic
in imaging because of advances in dispersing materials, computer technologies, and
processing algorithms.
The best-known method for dispersing light is based on differential refraction
within a block of glass with flat sides fabricated at an angle (usually 60◦ but other
angles are common). Of course, this was the scheme used by Newton to perform
the first spectral “analysis” of white light (spreading it into its constituent colors) in
1671. Note that Newton also “synthesized” white light by recombining the colored
components. Newton was the first to investigate the spectral components of white and
colored light, and in this way was arguably the first to perform a Fourier analysis.
Newton did make a famous error in this study by assuming that the effect of the
dispersion was proportional to the refractive index of the glass. In other words,
he assumed that the dispersion is a property of the index and does not differ with
material.
Consider a prism with apex angle α shown in the Figure. A ray of white light is
incident on the glass at angle θ1.
where the angle of incidence is θ1, the apex angle of the prism is α, and the index
of refraction is n for the specific wavelength. This is presented only for its impressive
complexity, not because we need to use it. A graph of δ vs. incident angle θ1 for a fixed
index n and some apex angle α is shown in the next figure. The graph demonstrates
that an angle exists where the deviation of the prism is a minimum, which can be
called the “minimum deviation angle” and signified by δmin . The light deviated by
this angle passes symmetrically through the prism, i.e., the angle of incidence to
the prism and the angle of “departure” are equal. Note that the angle of minimum
deviation depends on the refractive index n, and thus the minimum deviation angle
for different wavelengths will be different.
Graph of the deviation angle δ vs. angle of incidence θ1 for n = 1.5 and apex angle
α = 60◦ . Note that δ exhibits a minimum at some angle θmin ≈ 48◦ .
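The curve in the figure can be reproduced numerically. The sketch below (in Python; the function name and the scanning grid are our own choices, not part of the lab) traces each ray through the two refractions and locates the minimum deviation:

```python
import numpy as np

def prism_deviation(theta1_deg, n=1.5, alpha_deg=60.0):
    """Total deviation angle delta (degrees) of a ray through a prism."""
    theta1 = np.radians(theta1_deg)
    alpha = np.radians(alpha_deg)
    theta2 = np.arcsin(np.sin(theta1) / n)          # Snell's law at the input face
    theta4 = np.arcsin(n * np.sin(alpha - theta2))  # Snell's law at the exit face
    return np.degrees(theta1 + theta4 - alpha)

# Scan incident angles and locate the minimum deviation numerically.
angles = np.linspace(30.0, 80.0, 5001)
delta = prism_deviation(angles)
i = np.argmin(delta)
print(f"delta_min ~ {delta[i]:.2f} deg at theta1 ~ {angles[i]:.2f} deg")
```

For n = 1.5 and α = 60◦ this reproduces δmin ≈ 37.2◦ at θ1 ≈ 48.6◦, consistent with the figure.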
For a given prism with apex angle α and minimum deviation angle δ min , the
equation may be rewritten to specify the index of refraction n in terms of δ min and α:
n = sin [(δ min + α) / 2] / sin [α / 2]
By measuring the minimum deviation angle δ min for different wavelengths λ, you
can evaluate n [λ], i.e., the index of refraction as a function of wavelength. This
determines the dispersion of the glass used in the prism.
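As a quick check of this relation, a short calculation (the numbers here are hypothetical; substitute your own measured δ min and α):

```python
import math

def index_from_min_deviation(delta_min_deg, alpha_deg):
    """Refractive index from the minimum deviation angle and apex angle."""
    delta_min = math.radians(delta_min_deg)
    alpha = math.radians(alpha_deg)
    return math.sin((delta_min + alpha) / 2.0) / math.sin(alpha / 2.0)

# Example: an equiangular (60 deg) prism with a measured delta_min of 37.2 deg.
n = index_from_min_deviation(37.2, 60.0)
print(f"n = {n:.3f}")   # close to 1.5, typical of crown glass
```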
White light also may be dispersed by a different physical means known as diffrac-
tion. Though the mathematical description of diffraction will be considered later, its
properties are introduced in this laboratory.
11.4 Equipment:
To do the experiments below, you will need:
2. Equiangular (or approximately so) prism made of glass slides that can be filled
with water or mineral oil (Use small pieces of the essential tool for all laborato-
ries — duct tape, which is officially sanctioned by the Department of Homeland
Security!).
3. Prism table with scale in degrees, to measure the change in the angle of the
prism. If unavailable, a sheet of paper marked with angles can be used, but this
is much inferior.
4. He:Ne laser
7. Aluminum foil (to make slits to reduce the spreading of the light passed into
the prism).
8. Diffraction grating(s)
11.5 Procedure:
Set the prism on the rotary table and arrange the laser so you can point the laser
beam through the prism easily, as shown in Figure 5. DO NOT LOOK DIRECTLY
AT THE LASER BEAM. The prism should be able to rotate through a large range
in the angle of incidence on the input face. Use a book with a white piece of paper
taped to it or a piece of white paper taped to the wall as a screen. Make sure before
taking any data that the laser spot remains on your screen as you rotate the prism.
1. Measure the distance from the center of the rotary table to the screen by taking
a number of independent measurements (3 to 5). Each member of the group
should make independent measurements of the length and the uncertainty with-
out revealing the results to the partners. The measurements should be averaged.
What is the standard deviation σ of the result? Use σ/√N as an estimate of the
uncertainty in your distance measure (note that the calculated uncertainty is
expected to decrease as the number N of measurements is increased).
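A minimal sketch of the arithmetic in step 1, with made-up readings standing in for your group's measurements:

```python
import statistics

# Hypothetical distance measurements (cm), one per group member;
# replace with your own data.
readings = [152.1, 151.8, 152.4, 151.9, 152.2]

N = len(readings)
mean = statistics.mean(readings)
sigma = statistics.stdev(readings)     # sample standard deviation
uncertainty = sigma / N ** 0.5         # sigma / sqrt(N)

print(f"distance = {mean:.2f} +/- {uncertainty:.2f} cm (sigma = {sigma:.2f} cm)")
```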
2. Mark the position of the laser spot on the screen in the case where no prism is
present. This will give you the position of the undeviated beam (position “a”
in the drawing).
3. Measure the apex angle of the glass prism supplied. You can draw lines par-
allel to the faces of the prism, extend these lines with a ruler, and then use a
protractor to make the measurement.
4. Place the prism in the center of the rotary table or angle sheet. Find the position
where the reflected beam travels directly back to the laser source; this is the
location where the face of the prism is perpendicular to the laser beam. Rotate
the prism until you see a spot of laser light that has been refracted by the prism.
Mark the position of the laser spot for different incident angles for as large a
range of angles and of spot positions as you can measure. Take at least 8 data
points, recording the value of the incident angle (as judged by the position of
the rotary table) and the distance from the undeviated spot position.
5. Convert the distances from the previous step to deviation angles δ [θ1 ]. This
can be done by noting that:
ab / ac = tan [δ]
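For example (the distances below are hypothetical; use your own measured ab and ac):

```python
import math

# Hypothetical example: the deviated spot lies ab = 18.5 cm from the
# undeviated position, on a screen ac = 50.0 cm from the prism.
ab = 18.5
ac = 50.0

delta_deg = math.degrees(math.atan(ab / ac))
print(f"delta = {delta_deg:.2f} deg")
```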
6. Hold the water prism up to your eye and look at "things" (lamps and such)
with it. You should see colored edges around white lamps or objects; this is
called chromatic aberration in lenses. Cover a small white light source with a
gelatin filter and look at the light through the water prism; a purple filter works
well, as it transmits red and blue light while blocking green. (Crawford, Waves,
#4.12, p.217)
7. Now remove your glass prism and fill the hollow prism made from the microscope
slides with the distilled water provided. Repeat steps 1 through 5. Obviously,
you cannot mount this prism vertically, so this is a more subjective experiment.
But try to estimate the difference in the deviation between the water and glass
prisms.
8. Repeat steps 1 through 5 after refilling the microscope-slide prism with a second
liquid, mineral oil.
9. Go back to the glass prism and use an optical fiber light source and a slit instead
of the laser. Put the fiber source at least a couple of feet away from the prism.
Make a slit that is a few millimeters wide and about 10 mm long. Measure the
angle of incidence as best you can, and then measure the deviation δ for three
different colors: red, green (or yellow), and blue. If you have trouble, you may
want to consider holding filters from the optics kit in front of the slit to cut out
the unwanted colors. In any case, you will probably have to turn the lights off
and shield your screen from the ambient light to be able to see the slit on your
screen.
10. Now, replace the prism with a diffraction grating and describe the spectrum
of the white light. Note particularly the differences compared to the spectrum
generated by the prism. Sketch approximate paths of light emerging from the
grating, and carefully note the differences in path for red and blue light. If
available, compare the spectrum obtained from diffraction gratings with differ-
ent rulings (spacings).
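Unlike the prism, the grating disperses light according to the standard grating equation d sin θ = mλ (normal incidence), so longer (red) wavelengths are diffracted through larger angles. A sketch assuming a hypothetical 600 line/mm ruling:

```python
import math

def grating_angle_deg(wavelength_nm, lines_per_mm, order=1):
    """Diffraction angle from d*sin(theta) = m*lambda (normal incidence)."""
    d_nm = 1e6 / lines_per_mm            # slit spacing in nm
    s = order * wavelength_nm / d_nm
    if abs(s) > 1.0:
        return None                      # this diffraction order does not exist
    return math.degrees(math.asin(s))

# Assumed 600 line/mm grating: red diffracts MORE than blue,
# opposite to the prism, where blue is deviated more.
for name, lam in [("blue", 450), ("green", 550), ("red", 700)]:
    print(f"{name}: {grating_angle_deg(lam, 600):.1f} deg")
```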
11.6 Analysis:
In your lab write-up, be sure to complete the following:
1. For each of the three prisms, graph the deviation angle δ as a function of the
incident angle on the input face of the prism, θ1 . Find the minimum deviation
angle, δ min and estimate the uncertainty in this quantity.
2. Using the equation, calculate the index of refraction for the glass, water, and
mineral oil used, based on your values for δmin and α in each case.
3. From your measurements, derive the index of refraction for red (λ ≈ 700 nm),
green (λ ≈ 550 nm), and blue (λ ≈ 450 nm) light. Do your data show a decrease
in n with increasing wavelength? The laser is also red (λ ≈ 632.8 nm). How
well does your value of n agree between the laser measurements and the red
filter measurement?
11.7 Questions:
1. Imagine that a source emits a pulse of light 1 μs long that is composed of
equal amplitudes of all wavelengths. The light is incident on a block of glass
that is 20 km thick and whose refractive index is as shown above.
(a) What is the physical length of the pulse [m] when the light enters the glass?
(b) What is the time required for red light and for blue light to traverse the
glass?
(c) What is the physical length of the pulse emerging from the glass?
(d) Describe the “color” of the emerging pulse.
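As a guide to the arithmetic in question 1 (not the answer: the indices below are assumed placeholders; read the actual values for red and blue from the dispersion plot):

```python
c = 2.99792458e8      # speed of light in vacuum, m/s
L = 20e3              # glass thickness, m
tau = 1e-6            # pulse duration, s

# ASSUMED indices for the ends of the visible band; substitute the
# values read from the dispersion plot in the text.
n_red, n_blue = 1.51, 1.53

length_vacuum = c * tau           # physical length of the pulse entering
t_red = L * n_red / c             # traversal time for red light
t_blue = L * n_blue / c           # traversal time for blue light (slower)
stretch = c * (t_blue - t_red)    # extra length gained inside the glass

print(f"pulse length entering: {length_vacuum:.1f} m")
print(f"t_red = {t_red * 1e6:.2f} us, t_blue = {t_blue * 1e6:.2f} us")
print(f"pulse stretched by about {stretch:.1f} m (red exits first)")
```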
2. If the dispersion of the glass is as shown in the plot of refractive index, and if
the light is incident on the equiangular prism at the angle shown, determine
and sketch approximate paths followed by red light (λ = 600 nm) and by blue
light (λ = 400 nm) and find the approximate difference in emerging angle (the
“dispersion angle”, often denoted by δ). Sketch the path of the light from
incidence through the glass to the emergence. Note carefully which color is
deviated more.
3. Sketch the path followed by the light if it is incident from within a block of the
same type of glass onto a “prism-shaped” hole.
4. Describe the relative values of the velocity of the average wave and the mod-
ulation wave if the material has the dispersion curve shown. What if the
dispersion curve is reversed (i.e., if nred > nblue )?
5. Now that you’ve finished this lab on dispersion, reconsider the question of the
formation process of rainbows. Some answers given at the beginning of the
quarter speculated that rainbows were caused by diffracted or by scattered
sunlight. Recall (either from experience or research) and sketch the relative
locations of the rainbow, the viewer, and the sun. On your sketch, show the
path followed by the different colors from the sun to the eye. In other words,
explain the sequence of colors in the rainbow.
BONUS QUESTION: