A strong loophole-free test of local realism
Lynden K. Shalm,1 Evan Meyer-Scott,2 Bradley G. Christensen,3 Peter Bierhorst,1 Michael A. Wayne,3, 4 Martin J.
Stevens,1 Thomas Gerrits,1 Scott Glancy,1 Deny R. Hamel,5 Michael S. Allman,1 Kevin J. Coakley,1 Shellee D.
Dyer,1 Carson Hodge,1 Adriana E. Lita,1 Varun B. Verma,1 Camilla Lambrocco,1 Edward Tortorici,1 Alan L.
Migdall,4, 6 Yanbao Zhang,2 Daniel R. Kumor,3 William H. Farr,7 Francesco Marsili,7 Matthew D. Shaw,7
Jeffrey A. Stern,7 Carlos Abellán,8 Waldimar Amaya,8 Valerio Pruneri,8, 9 Thomas Jennewein,2, 10 Morgan W.
Mitchell,8, 9 Paul G. Kwiat,3 Joshua C. Bienfang,4, 6 Richard P. Mirin,1 Emanuel Knill,1 and Sae Woo Nam1
arXiv:1511.03189v2 [quant-ph] 6 Sep 2016
1 National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
2 Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, 200 University Ave West, Waterloo, Ontario, Canada, N2L 3G1
3 Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
4 National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899, USA
5 Département de Physique et d’Astronomie, Université de Moncton, Moncton, New Brunswick E1A 3E9, Canada
6 Joint Quantum Institute, National Institute of Standards and Technology and University of Maryland, 100 Bureau Drive, Gaithersburg, Maryland 20899, USA
7 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109
8 ICFO – Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain
9 ICREA – Institució Catalana de Recerca i Estudis Avançats, 08015 Barcelona, Spain
10 Quantum Information Science Program, Canadian Institute for Advanced Research, Toronto, ON, Canada
(Dated: September 7, 2016)
We present a loophole-free violation of local realism using entangled photon pairs. We ensure that
all relevant events in our Bell test are spacelike separated by placing the parties far enough apart and
by using fast random number generators and high-speed polarization measurements. A high-quality
polarization-entangled source of photons, combined with high-efficiency, low-noise, single-photon
detectors, allows us to make measurements without requiring any fair-sampling assumptions. Using
a hypothesis test, we compute p-values as small as 5.9 × 10⁻⁹ for our Bell violation while maintaining
the spacelike separation of our events. We estimate the degree to which a local realistic system could
predict our measurement choices. Accounting for this predictability, our smallest adjusted p-value
is 2.3 × 10⁻⁷. We therefore reject the hypothesis that local realism governs our experiment.
But if [a hidden variable theory] is local it will not
agree with quantum mechanics, and if it agrees with
quantum mechanics it will not be local. This is what the
theorem says. –John Stewart Bell [1]
Quantum mechanics at its heart is a statistical theory.
It cannot with certainty predict the outcome of all single
events, but instead it predicts probabilities of outcomes.
This probabilistic nature of quantum theory is at odds
with the determinism inherent in Newtonian physics
and relativity, where outcomes can be exactly predicted
given sufficient knowledge of a system. Einstein and
others felt that quantum mechanics was incomplete.
Perhaps quantum systems are controlled by variables,
possibly hidden from us [2], that determine the outcomes of measurements. If we had direct access to these
hidden variables, then the outcomes of all measurements
performed on quantum systems could be predicted with
certainty. De Broglie’s 1927 pilot-wave theory was a
first attempt at formulating a hidden variable theory of
quantum physics [3]; it was completed in 1952 by David
Bohm [4, 5]. While the pilot-wave theory can reproduce
all of the predictions of quantum mechanics, it has the
curious feature that hidden variables in one location can
instantly change values because of events happening in
distant locations. This seemingly violates the locality
principle from relativity, which says that objects cannot
signal one another faster than the speed of light. In
1935 the nonlocal feature of quantum systems was
popularized by Einstein, Podolsky, and Rosen [6], and is
something Einstein later referred to as “spooky actions
at a distance”[7]. But in 1964 John Bell showed that it
is impossible to construct a hidden variable theory that
obeys locality and simultaneously reproduces all of the
predictions of quantum mechanics [8]. Bell’s theorem
fundamentally changed our understanding of quantum
theory and today stands as a cornerstone of modern
quantum information science.
Bell’s theorem does not prove the validity of quantum
mechanics, but it does allow us to test the hypothesis
that nature is governed by local realism. The principle
of realism says that any system has pre-existing values
for all possible measurements of the system. In local realistic theories, these pre-existing values depend only on
events in the past lightcone of the system. Local hidden-variable theories obey this principle of local realism. Local realism places constraints on the behavior of systems
of multiple particles—constraints that do not apply to
entangled quantum particles. This leads to different predictions that can be tested in an experiment known as a
Bell test. In a typical two-party Bell test, a source generates particles and sends them to two distant parties,
Alice and Bob. Alice and Bob independently and randomly choose properties of their individual particles to
measure. Later, they compare the results of their measurements. Local realism constrains the joint probability
distribution of their choices and measurements. The basis of a Bell test is an inequality that is obeyed by local
realistic probability distributions but can be violated by
the probability distributions of certain entangled quantum particles [8]. A few years after Bell derived his inequality, new forms were introduced by Clauser, Horne,
Shimony and Holt [9], and Clauser and Horne [10] that
are simpler to experimentally test.
In a series of landmark experiments, Freedman and
Clauser [11] and Aspect, Grangier, Dalibard, and Roger
[12–14] demonstrated experimental violations of Bell inequalities using pairs of polarization-entangled photons
generated by an atomic cascade. However, due to technological constraints, these Bell tests and those that followed (see [15] for a review) were forced to make additional assumptions to show local realism was incompatible with their experimental results. A significant violation of Bell’s inequality implies that either local realism is
false or that one or more of the assumptions made about
the experiment are not true; thus every assumption in
an experiment opens a “loophole.” No experiment can
be absolutely free of all loopholes, but in [16] a minimal set of assumptions is described that an experiment
must make to be considered “loophole free.” Here we report a significant, loophole-free experimental violation of local realism, in the sense defined in [16], using entangled photon pairs. In our experiment the only assumptions that remain are those that
can never—even in principle—be removed. We present
physical arguments and evidence that these remaining
assumptions are either true or untestable.
Bell’s proof requires that the measurement choice at
Alice cannot influence the outcome at Bob (and vice versa). If a signal traveling from Alice cannot reach Bob
in the time between Alice’s choice and the completion
of Bob’s measurement, then there is no way for a local
hidden variable constrained by special relativity at Alice to change Bob’s outcomes. In this case we say that
Alice and Bob are spacelike separated from one another.
If an experiment does not have this spacelike separation,
then an assumption must be made that local hidden variables cannot signal one another, leading to the “locality”
loophole.
Another requirement in a Bell test is that Alice and
Bob must be free to make random measurement choices
that are physically independent of one another and of
any properties of the particles. If this is not true, then a
hidden variable could predict the chosen settings in advance and use that information to produce measurement
outcomes that violate a Bell inequality. Not fulfilling
this requirement opens the “freedom-of-choice” loophole.
While this loophole can never in principle be closed, the
set of hidden variable models that are able to predict
the choices can be constrained using spacelike separation. In particular, in experiments that use processes
such as cascade emission or parametric downconversion
to create entangled particles, spacelike separation of the
measurement choices from the creation event eliminates
the possibility that the particles, or any other signal emanating from the creation event, influence the settings. To
satisfy this condition, Alice and Bob must choose measurement settings based on fast random events that occur
in the short time before a signal traveling at the speed of
light from the entangled-photon creation would be able to
reach them. But it is fundamentally impossible to conclusively prove that Alice’s and Bob’s random number
generators are independent without making additional
assumptions, since their backward lightcones necessarily
intersect. Instead, it is possible to justify the assumption of measurement independence through a detailed
characterization of the physical properties of the random
number generators (such as the examination described in
[17, 18]).
In any experiment, imperfections could lead to loss,
and not all particles will be detected. To violate a Bell
inequality in an experiment with two parties, each free
to choose between two settings, Eberhard showed that at
least 2/3 of the particles must be detected [19] if nonmaximally entangled states are used. If the loss exceeds this
threshold, then one may observe a violation by discarding
events in which at least one party does not detect a particle. This is valid under the assumption that particles
were lost in an unbiased manner. However, relying on
this assumption opens the “detector” or “fair-sampling”
loophole. While the locality and fair-sampling loopholes
have been closed individually in different systems [20–24],
it has only recently been possible to close all loopholes simultaneously using nitrogen vacancy centers in diamonds
[25], and now with entangled photons in our experiment
and in the work reported in [26]. These three experiments
also address the freedom-of-choice loophole by spacelike
separation.
Fundamentally a Bell inequality is a constraint on
probabilities that are estimated from random data. Determining whether a data set shows violation is a statistical hypothesis-testing problem. It is critical that the statistical analysis does not introduce unnecessary assumptions that create loopholes. A Bell test is divided into a
series of trials. In our experiment, during each trial Alice
and Bob randomly choose between one of two measurement settings (denoted {a, a′ } for Alice and {b, b′ } for
Bob) and record either a “+” if they observe any detection events or a “0” otherwise. Alice and Bob must define
when a trial is happening using only locally available information, otherwise additional loopholes are introduced.
At the end of the experiment Alice and Bob compare the
FIG. 1. Schematic of the entangled photon source. A pulsed 775 nm-wavelength Ti:Sapphire picosecond mode-locked laser
running at 79.3 MHz repetition rate is used as both a clock and a pump in our setup. A fast photodiode (FPD) and divider circuit
are used to generate the synchronization signal that is distributed to Alice and Bob. A polarization-maintaining single-mode
fiber (SMF) then acts as a spatial filter for the pump. After exiting the SMF, a polarizer and half-wave plate (HWP) set the
pump polarization. To generate entanglement, a periodically poled potassium titanyl phosphate (PPKTP) crystal designed for
Type-II phasematching is placed in a polarization-based Mach-Zehnder interferometer formed using a series of HWPs and three
beam displacers (BD). At BD1 the pump beam is split in two paths (1 and 2): the horizontal (H) component of polarization
of the pump translates laterally in the x direction while the vertical (V) component of polarization passes straight through.
Tilting BD1 sets the phase, φ, of the interferometer to 0. After BD1 the pump state is cos(16°) |H₁⟩ + sin(16°) |V₂⟩. To
address the polarization of the paths individually, semi-circular waveplates are used. A HWP in path 2 rotates the polarization
of the pump from vertical (V) to horizontal (H). A second HWP at 0° is inserted into path 1 to keep the path lengths of the
interferometer balanced. The pump is focused at two spots in the crystal, and photon pairs at a wavelength of 1550 nm are
generated in either path 1 or 2 through the process of spontaneous parametric downconversion. After the crystal, BD2 walks
the V-polarized signal photons down in the y direction (V1a and V2a) while the H-polarized idler photons pass straight through (H1b and H2b). The x–y view shows the resulting locations of the four beam paths. HWPs at 45° correct the polarization while HWPs at 0° provide temporal compensation. BD3 then completes the interferometer by recombining paths 1 and 2 for
the signal and idler photons. The two downconversion processes interfere with one another, creating the entangled state in Eq.
(2). A high-purity silicon wafer with an anti-reflection coating is used to filter out the remaining pump light. The idler (signal)
photons are coupled into an SMF and sent to Alice (Bob).
results they obtained on a trial-by-trial basis.
Our Bell test uses a version of the Clauser-Horne inequality [10, 19, 27] where, according to local realism,
P(++ | ab) ≤ P(+0 | ab′) + P(0+ | a′b) + P(++ | a′b′).    (1)

The terms P(++ | ab) and P(++ | a′b′) correspond to
the probability that both Alice and Bob record detection
events (++) when they choose the measurement settings
ab or a′b′, respectively. Similarly, the terms P(+0 | ab′) and P(0+ | a′b) are the probabilities that only Alice or Bob records an event for settings ab′ and a′b, respectively.
A local realistic model can saturate this inequality; however, the probability distributions of entangled quantum
particles can violate it.
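In code, a test of Eq. (1) amounts to estimating the four conditional probabilities from trial counts and comparing them. The sketch below uses hypothetical counts (not data from this experiment) purely to illustrate the bookkeeping:

```python
# Illustration of the Clauser-Horne test in Eq. (1) with hypothetical counts.
# N[s] is the number of trials with setting pair s; C[(outcome, s)] counts
# the corresponding joint outcomes.
N = {"ab": 100000, "ab'": 100000, "a'b": 100000, "a'b'": 100000}
C = {("++", "ab"): 7200, ("+0", "ab'"): 350,
     ("0+", "a'b"): 340, ("++", "a'b'"): 180}

# Estimate each conditional probability by its relative frequency.
P = {key: C[key] / N[key[1]] for key in C}

lhs = P[("++", "ab")]
rhs = P[("+0", "ab'")] + P[("0+", "a'b")] + P[("++", "a'b'")]

# Local realism requires lhs <= rhs; entangled photons can exceed the bound.
print(f"P(++|ab) = {lhs:.4f}, local-realistic bound = {rhs:.4f}")
```

With these made-up counts the left side far exceeds the bound, which is the signature of a violation; a real analysis must also quantify the statistical significance, as discussed next.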
To quantify our Bell violation we construct a hypothesis test based on the inequality in Eq. (1). The null
hypothesis we test is that the measured probability distributions in our experiment are constrained by local realism. Our evidence against this null hypothesis of local
realism is quantified in a p-value that we compute from
our measured data using a test statistic. Our test statistic takes all of the measured data from Alice’s and Bob’s
trials and summarizes them into a single number (see the
Supplemental Material for further details). The p-value
is then the maximum probability that our experiment,
if it is governed by local realism, could have produced a
value of the test statistic that is at least as large as the
observed value [28]. Smaller p-values can be interpreted
as stronger evidence against this hypothesis. These p-values can also be used as certificates for cryptographic
applications, such as random number generation, that
rely on a Bell test [24, 29]. We use a martingale binomial
technique from [27] for computing the p-value that makes
no assumptions about the distribution of events and does
not require that the data be independent and identically
distributed [30] as long as appropriate stopping criteria
are determined in advance.
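As a deliberately simplified illustration of how a binomial tail yields a p-value: if local realism bounds the per-trial probability of a “winning” outcome by q, the chance of seeing at least k wins in n trials is bounded by a binomial tail. The actual analysis uses the martingale technique of [27], which establishes a bound of this form without the i.i.d. assumption; all numbers below are hypothetical:

```python
from math import comb

def binomial_tail(n, k, q):
    """P(X >= k) for X ~ Binomial(n, q): the chance that n trials whose
    per-trial win probability is q produce at least k wins."""
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: 200 trials, 120 of them 'wins' for the Bell
# inequality, with a local-realistic win bound of q = 1/2.
p_value = binomial_tail(200, 120, 0.5)
print(f"p-value bound: {p_value:.3g}")
```

A smaller tail probability corresponds to stronger evidence against the local-realism null hypothesis, provided the stopping rule (n) was fixed in advance.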
In our experiment, the source creates polarization-entangled pairs of photons and distributes them to Alice
and Bob, located in distant labs. At the source location a
mode-locked Ti:Sapphire laser running at a repetition rate of approximately 79.3 MHz produces picosecond pulses centered at a wavelength of 775 nm, as shown in Fig. 1.
These laser pulses pump an apodized periodically poled
potassium titanyl phosphate (PPKTP) crystal to produce photon pairs at a wavelength of 1550 nm via the
process of spontaneous parametric downconversion [31].
FIG. 2. Receiver station setup for Alice and Bob. A photon arrives from the source. Two half-wave plates (HWP), a
quarter-wave plate (QWP), a Pockels cell (PC), and two plate
polarizers together act to measure the polarization state of the
incoming photon. The polarization projection is determined
by a random bit from XORing the outputs of two random
number generators (RNG1 and RNG2) with pre-determined
pseudorandom bits (RNG3). If the random bit is “0”, corresponding to measurement setting a (b) for Alice (Bob), the
Pockels cell remains off. If the random bit is “1”, corresponding to measurement setting a′ (b′ ) for Alice (Bob), then a
voltage is applied to the Pockels cell that rotates the polarization of the photons using a fast electro-optic effect. The
two plate polarizers have a combined contrast ratio > 7000 : 1.
The photons are coupled back into a single-mode fiber (SMF)
and detected using a superconducting nanowire single-photon
detector (SNSPD). The signal is amplified and sent to a time-tagging unit where the arrival time of the event is recorded.
The time tagger also records the measurement setting, the
synchronization signal, and a one pulse-per-second signal from
a global positioning system (GPS). The pulse-per-second signal provides an external time reference that helps align the
time tags Alice and Bob record. A 10 MHz oscillator synchronizes the internal clocks on Alice’s and Bob’s time taggers.
The synchronization pulse from the source is used to trigger
the measurement basis choice.
The downconversion system was designed using the tools
available in [32]. The PPKTP crystal is embedded in the
middle of a polarization-based Mach-Zehnder interferometer that enables high-quality polarization-entangled
states to be generated [33]. Rotating the polarization analyzer angles at Alice and Bob, we measure the visibility
of coincidence detections for a maximally entangled state
to be 0.999 ± 0.001 in the horizontal/vertical polarization basis and 0.996 ± 0.001 in the diagonal/antidiagonal
polarization basis (see [34] for information about the reported uncertainties). The entangled photons are then
coupled into separate single-mode optical fibers with one
photon sent to Alice and the other to Bob. Alice, Bob,
and the source are positioned at the vertices of a nearly
right-angle triangle. Due to constraints in the building
layout, the photons travel to Alice and Bob in fiber optic cables that are not positioned along their direct lines
of sight. While the photons are in flight toward Alice
and Bob, their random number generators each choose
a measurement setting. Each choice is completed before
information about the entangled state, generated at the
PPKTP crystal, could possibly reach the random number
generators. When the photons arrive at Alice and Bob,
they are launched into free space, and each photon passes
through a Pockels cell and polarizer that perform the
polarization measurement chosen by the random number generators as shown in Fig. 2. After the polarizer,
the photons are coupled back into a single-mode fiber
and sent to superconducting nanowire single-photon detectors, each with a detection efficiency of 91 ± 2 % [35].
The detector signal is then amplified and sent to a time
tagger where the arrival time is recorded. We assume
the measurement outcome is fixed when it is recorded by
the time tagger, which happens before information about
the other party’s setting choice could possibly arrive, as
shown in Fig. 3(b).
Alice and Bob have system detection efficiencies of
74.7 ± 0.3 % and 75.6 ± 0.3 %, respectively. We measure this system efficiency using the method outlined by
Klyshko [36]. Background counts from blackbody radiation and room lights reduce our observed violation
of the Bell inequality. Every time a background count
is observed it counts as a detection event for only one
party. These background counts increase the heralding
efficiency required to close the detector loophole above
2/3 [19]. To reduce the number of background counts,
the only detection events considered are those that occur within a window of approximately 625 ps at Alice
and 781 ps at Bob, centered around the expected arrival
times of photons from the source. The probability of
observing a background count during a single window is
8.9 × 10⁻⁷ for Alice and 3.2 × 10⁻⁷ for Bob, while the probability that a single pump pulse downconverts into a photon pair is ≈ 5 × 10⁻⁴. These background counts
in our system raise the efficiency needed to violate a Bell
inequality from 2/3 to 72.5 %. Given our system detection efficiencies, our entangled photon production rates,
entanglement visibility, and the number of background
counts, we numerically determine the entangled state and
measurement settings for Alice and Bob that should give
the largest Bell violation for our setup. The optimal state
is not maximally entangled [19] and is given by:
|ψ⟩ = 0.961 |H_A H_B⟩ + 0.276 |V_A V_B⟩,    (2)
where H (V ) denotes horizontal (vertical) polarization,
and A and B correspond to Alice’s and Bob’s photons,
respectively. From the simulation we also determine that
Alice’s optimal polarization measurement angles, relative
to a vertical polarizer, are {a = 4.2°, a′ = −25.9°} while Bob’s are {b = −4.2°, b′ = 25.9°}.
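The predicted violation for this state and these angles can be checked directly. The sketch below assumes ideal, lossless detection and models a “+” outcome as projection onto |θ⟩ = cos θ |V⟩ + sin θ |H⟩ with θ measured from vertical; this convention is our assumption, chosen for illustration, and it ignores the loss and background that the experimental optimization accounts for:

```python
from math import radians, sin, cos

# Ideal quantum predictions for the state in Eq. (2) at the quoted angles.
c1, c2 = 0.961, 0.276                  # amplitudes of |H_A H_B> and |V_A V_B>
a, ap = radians(4.2), radians(-25.9)   # Alice's settings a, a'
b, bp = radians(-4.2), radians(25.9)   # Bob's settings b, b'

def p_pp(ta, tb):
    """P(++): both photons pass polarizers at angles ta (Alice), tb (Bob)."""
    return (c1 * sin(ta) * sin(tb) + c2 * cos(ta) * cos(tb)) ** 2

def p_single(t):
    """One party's detection probability (same form for Alice and Bob,
    by the symmetry of the state in Eq. (2))."""
    return (c1 * sin(t)) ** 2 + (c2 * cos(t)) ** 2

lhs = p_pp(a, b)                       # P(++ | ab)
rhs = (p_single(a) - p_pp(a, bp)) \
    + (p_single(b) - p_pp(ap, b)) \
    + p_pp(ap, bp)                     # right side of Eq. (1)
print(f"P(++|ab) = {lhs:.4f}, local-realistic bound = {rhs:.4f}")
```

Under these idealized assumptions the left side of Eq. (1) exceeds the local-realistic bound by roughly an order of magnitude, illustrating why a nonmaximally entangled state with small analyzer angles is advantageous.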
Synchronization signals enable Alice and Bob to define
trials based only on local information. The synchronization signal runs at a frequency of 99.1 kHz, allowing Alice
and Bob to perform 99,100 trials/s (79.3 MHz/800). This
trial frequency is limited by the rate the Pockels cells can
be stably driven. When the Pockels cells are triggered
FIG. 3. Minkowski diagrams for the spacetime events related
to Alice (A) and the source (S) and Bob (B) and the source
(a), and Alice and Bob (b). All lightcones are shaded blue.
Due to the geometry of Alice, Bob, and the source, more than
one spacetime diagram is required. In (a) the random number generators (RNGs) at Alice and Bob must finish picking
a setting outside the lightcone of the birth of an entangled
photon pair. A total of 15 pump pulses have a chance of
downconverting into an entangled pair of photons each time
the Pockels cells are on. The events related to the first pulse
are not spacelike separated, because Alice’s RNG does not
finish picking a setting before information about the properties of the photon pair can arrive; pulses 2 through 11 are
spacelike separated. As shown in (b), pulses 12 through 15
are not spacelike separated as the measurement is finished
by Alice and Bob after information about the other party’s
measurement setting could have arrived. In our experiment
the events related to pulse 6 are the furthest outside of all
relevant lightcones.
they stay on for ≈ 200 ns. This is more than 15 times
longer than the 12.6 ns pulse-to-pulse separation of the
pump laser. Therefore photons generated by the source
can arrive in one of 15 slots while both Alice’s and Bob’s
Pockels cells are on. Since the majority of the photon
pulses arriving in these 15 slots satisfy the spacelike separation constraints, it is possible to aggregate multiple
adjacent pulses to increase the event rate and statistical

FIG. 4. (a) The positions of Alice (A), Bob (B), and the source (S) in the building where the experiment was carried out. The insets show a magnified (×2) view of Alice’s and Bob’s locations. The white dots are the locations of the random number generators (RNGs). The larger circle at each location has a radius of 1 m and corresponds to our uncertainty in the spatial position measurements. Alice, Bob, and the source can be located anywhere within the green shaded regions and still have their events be spacelike separated. Boundaries are plotted for aggregates of one, three, five, and seven pulses. Each boundary is computed by keeping the chronology of events fixed, but allowing the distance between the three parties to vary independently. In (b) the p-value of each of the individual 15 pulses is shown. Overlaid on the plot are the aggregate pulse combinations used in the contours in (a). The statistical significance of our Bell violation does not appear to depend on the spacelike separation of events. For reference and comparison purposes only, the corresponding number of standard deviations for a given p-value (for a one-sided normal distribution) are shown.

significance of the Bell violation. However, including
too many pulses will cause one or more of the spacelike
separation constraints to be violated. Because the probability per pulse of generating an entangled photon pair
is so low, given that one photon has already arrived, the
chance of getting a second event in the same Pockels cell
FIG. 5. The p-value for different numbers of aggregate pulses as a function of the excess predictability, ε, in Alice’s and Bob’s measurement settings. Larger levels of predictability correspond to a weakening of the assumption that the settings choices are physically independent of the photon properties Alice and Bob measure. As in Fig. 4(b), the equivalent confidence levels, corresponding to the number of standard deviations of a one-sided normal distribution, are shown for reference.
window is negligible (< 1 %).
Alice and Bob each have three different sources of random bits that they XOR together to produce their random measurement decisions (for more information see
the Supplemental Material). The first source is based
on measuring optical phase diffusion in a gain-switched
laser that is driven above and below the lasing threshold.
A new bit is produced every 5 ns by comparing adjacent
laser pulses [17]. Each bit is then XORed with all past
bits that have been produced (for more details see the
Supplemental Material). The second source is based on
sampling the amplitude of an optical pulse at the single-photon level in a short temporal interval. This source
produces a bit on demand and is triggered by the synchronization signal. Finally, Alice and Bob each have
a different predetermined pseudorandom source that is
composed of various popular culture movies and TV
shows, as well as the digits of π, XORed together. Suppose that a local realistic system, with the goal of producing a violation of the Bell inequality, were able to manipulate the properties of the photons emitted by the
entanglement source before each trial. Provided that the
randomness sources correctly extract their bits from the
underlying processes of phase diffusion, optical amplitude
sampling, and the production of cultural artifacts (such
as the movie Back to the Future), this powerful local realistic system would be required to predict the outcomes
of all of these processes well in advance of the beginning
of each trial to achieve its goal. Such a model would
have elements of superdeterminism—the fundamentally
untestable idea that all events in the universe are preordained.
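The XOR combination has the useful property that the combined bit is unpredictable as long as at least one of the input bits is. A toy sketch, with made-up bit streams standing in for the three sources:

```python
import operator
from functools import reduce

def combine(*bits):
    """XOR of simultaneous bits from independent sources. The result is
    unbiased and unpredictable if ANY single input source is."""
    return reduce(operator.xor, bits)

# Hypothetical bit streams standing in for the three sources described above:
phase_diffusion = [1, 0, 0, 1, 1, 0]   # laser phase-diffusion RNG
photon_sampling = [0, 0, 1, 1, 0, 1]   # optical-amplitude-sampling RNG
pseudorandom    = [1, 1, 1, 0, 0, 0]   # movies, TV shows, digits of pi

settings = [combine(x, y, z) for x, y, z in
            zip(phase_diffusion, photon_sampling, pseudorandom)]
print(settings)
```

An adversary would thus need advance knowledge of all three underlying processes, not just one, to predict a setting choice.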
Over the course of two days we took a total of 6 data
runs with differing configurations of the experimental
setup [37]. Here we report the results from the final
dataset that recorded data for 30 minutes (see the Supplemental Material for descriptions and results from all
datasets). This is the dataset where the experiment was
most stable and best aligned; small changes in coupling
efficiency and the stability of the Pockels cells can lead to
large changes in the observed violation. The events corresponding to the sixth pulse out of the 15 possible pulses
per trial are the farthest outside all the relevant lightcones. Thus we say these events are the most spacelike
separated. To increase our data rate we aggregate multiple pulses centered around pulse number 6. We consider
different Bell tests using a single pulse (number 6), three
pulses (pulses 5, 6, and 7), five pulses (pulses 4 through
8), and seven pulses (pulses 3 through 9). The joint measurement outcomes and corresponding p-values for these
combinations are shown in Table I. For a single pulse we
measure a p-value = 2.5 × 10⁻³, for three pulses a p-value = 2.4 × 10⁻⁶, for five pulses a p-value = 5.8 × 10⁻⁹, and for seven pulses a p-value = 2.0 × 10⁻⁷, corresponding to
a strong violation of local realism.
If, trial-by-trial, a conspiratorial hidden variable (or
attacker in cryptographic scenarios) has some measure of
control over or knowledge about the settings choices at
Alice and Bob, then it could manipulate the outcomes to produce a violation of a Bell inequality. Even if we
weaken our assumption that Alice’s and Bob’s setting
choices are physically independent from the source, we
can still compute valid p-values against the hypothesis
of local realism. We characterize the lack of physical
independence with predictability of our random number
generators. The “predictability,” P, of a random number
generator is the probability with which an adversary or
local realistic system could guess a given setting choice.
We use the parameter ε, the “excess predictability,” to place an upper bound on the actual predictability of our random number generators:

P ≤ (1/2)(1 + ε).    (3)
In principle, it is impossible to measure predictability
through statistical tests of the random numbers, because
they can be made to appear random, unbiased, and independent even if the excess predictability during each trial
is nonzero. Extreme examples that could cause nonzero
excess predictability include superdeterminism or a powerful and devious adversary with access to the devices,
but subtle technical issues can never be entirely ruled
out. Greater levels of excess predictability lead to lower
statistical confidence in a rejection of local realism. In
Fig. 5 we show how different levels of excess predictability change the statistical significance of our results [38]
(see Supplemental Material for more details). We can
make estimates of the excess predictability in our system. From additional measurements, we observe a bias
of (1.08 ± 0.07) × 10⁻⁴ in the settings reaching the XOR
from the laser diffusion random source, which includes
synchronization electronics as well as the random number
generator. If this bias is the only source of predictability
in our system, this level of bias would correspond to an
excess predictability of approximately 2 × 10⁻⁴. To be
conservative we use an excess predictability bound that
is fifteen times larger, ε_p = 3 × 10⁻³ (see Supplemental
Material for more details). If our experiment had excess
predictability equal to ε_p our p-values would be increased
to 5.9 × 10⁻³, 2.4 × 10⁻⁵, 2.3 × 10⁻⁷, and 9.2 × 10⁻⁶ for
one, three, five, and seven pulses, respectively [38]. Combining the output of this random number generator with
the others should lead to lower bias levels and a lower
excess predictability, but even under the the paranoid
situation where a nearly superdeterministic local realistic system has complete knowledge of the bits from the
other random number sources, the adjusted p-values still
provide a rejection of local realism with high statistical
significance.
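As a small illustration of the arithmetic above (our own sketch, not the experiment's analysis code), the measured settings bias maps onto the excess-predictability parameter of Eq. (3) as follows; the conversion assumes the measured bias is the only source of predictability:

```python
# Sketch: relate a measured settings bias to the excess predictability
# epsilon of Eq. (3). The bias value is the one quoted in the text.
bias = 1.08e-4                   # measured bias of settings reaching the XOR
P = 0.5 + bias                   # best single-trial guessing probability
epsilon = 2 * P - 1              # saturating the bound P <= (1 + epsilon)/2
print(epsilon)                   # ~2.2e-4, consistent with the ~2e-4 quoted
epsilon_p = 3e-3                 # the conservative bound used in the analysis
assert epsilon_p > 10 * epsilon  # roughly fifteen times larger
```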
Satisfying the spacetime separation constraints in Fig.
3 requires precise measurements of the locations of Alice,
Bob, and the source as well as the timing of all events.
Using a combination of position measurements from a
global positioning system (GPS) receiver and site surveying, we determine the locations of Alice, Bob, and
the source with an uncertainty of < 1 m. This uncertainty is set by the physical size of the cryostat used to
house our detectors and the uncertainty in the GPS coordinates. There are four events that must be spacelike
separated: Alice’s and Bob’s measurement choices must
be fixed before any signal emanating from the photon
creation event could arrive at their locations, and Alice
and Bob must finish their measurements before information from the other party’s measurement choice could
reach them. Due to the slight asymmetry in the locations of Alice, Bob, and the source, the time difference
between Bob finishing his measurement and information
possibly arriving about Alice’s measurement choice is always shorter than the corresponding time differences for the other three
events, as shown in Fig. 3(b). This time difference serves
as a kind of margin; our system can tolerate timing errors
as large as this margin and still have all events remain
spacelike separated. For one, three, five, and seven aggregate pulses this corresponds to a margin of 63.5 ± 3.7
ns, 50.9 ± 3.7 ns, 38.3 ± 3.7 ns, and 25.7 ± 3.7 ns, respectively, as shown in Table I. The uncertainty in these
timing measurements is dominated by the 1 m positional
uncertainty (see Supplemental Material for further details on the timing measurements).
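The margins in Table I fall off linearly with the number of aggregated pulses. The following consistency check uses a step of about 6.3 ns per additional pulse, which is our inference from the tabulated values rather than a number quoted in the text:

```python
# Consistency check on the Table I timing margins: each additional
# aggregated pulse appears to cost about 6.3 ns of margin. The step
# size is inferred from the tabulated values, not quoted in the paper.
margins_ns = {1: 63.5, 3: 50.9, 5: 38.3, 7: 25.7}  # each +/- 3.7 ns
STEP_NS = 6.3
for n, margin in margins_ns.items():
    predicted = 63.5 - STEP_NS * (n - 1)
    assert abs(margin - predicted) < 0.05
```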
A way to visualize and further quantify the spacelike separation of events is to compute how far Alice,
Bob, and the source could move from their measured positions and still be guaranteed to satisfy the locality constraints, assuming that the chronology of all events remains fixed. In Fig. 4(a) the locations of Alice, Bob, and the source are surrounded by shaded green regions. As long
as each party remains anywhere inside the boundaries
of these regions their events are guaranteed to be spacelike separated. There are specific configurations where
all three parties can be outside the boundaries and still
be spacelike separated, but here we consider the most
conspiratorial case where all parties can collude with one
another. The boundaries are overlaid on architectural
drawings of the building in which the experiment was
performed. Four different boundaries are plotted, corresponding to the Bell test performed with one, three, five,
and seven aggregate pulses. Minimizing over the path
of each boundary line, the minimum distance that Alice,
Bob, and the source are located from their respective
boundaries is 9.2 m, 7.3 m, 5.4 m, and 3.5 m for aggregates of one pulse, three pulses, five pulses, and seven
pulses, respectively. For these pulse configurations we
would have had to place our source and detection systems physically in different rooms (or even move outside
of the building) to compromise our spacelike separation.
Aggregating more than seven pulses leads to boundaries
that are less than three meters away from our measured
positions. In these cases we are not able to make strong
claims about the spacelike separation of our events.
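A rough bound ties these boundary distances to the timing margins in Table I: moving a party a distance d can shift the relevant light-travel times by up to d/c per leg, and with two affected legs the margin caps the displacement near c · margin/2. This simplified check is ours, not the paper's full boundary minimization, which accounts for the actual geometry:

```python
# Sanity check (a simplification of the boundary computation): the minimum
# boundary distance for each pulse configuration should not exceed
# c * margin / 2, since moving a party by d can shift two relevant
# light-travel legs by up to d/c each.
C_M_PER_NS = 0.299792458  # speed of light in m/ns
table = {1: (63.5, 9.2), 3: (50.9, 7.3), 5: (38.3, 5.4), 7: (25.7, 3.5)}
for n, (margin_ns, min_dist_m) in table.items():
    cap_m = C_M_PER_NS * margin_ns / 2
    # the detailed room geometry tightens the boundary below this cap
    assert min_dist_m <= cap_m
```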
Finally, as shown in Fig. 4(b), we can compute the 15
p-values, one for each of the time slots in which we consider that photons from the source can arrive during a trial. Photons arriving in slots 2 through 11 are spacelike separated, while
photons in slots 12 through 15 are not: photons arriving in these later slots are measured after information
from the other party’s random number generator could
arrive, as shown in Fig. 3(b). Spacelike
separation appears to have no discernible effect on the statistical significance of the violation. However, we do see large slot-to-slot fluctuation in the calculated p-values. We suspect
that this is due to instability in the applied voltage when
the Pockels cell is turned on. In this case photons receive slightly different polarization rotations depending
on which slot they arrive in, leading to non-ideal measurement settings at Alice and Bob. It is because of this
slot-to-slot variation that the aggregate of seven pulses
has a computed p-value larger than the five-pulse case.
Fixing this instability and using more sophisticated hypothesis-test techniques [39–41] will enable us to robustly
increase the statistical significance of our violation for the
seven-pulse case.
The experiment reported here is a commissioning run
of the Bell test machine we eventually plan to use to certify randomness. The ability to include multiple pulses in
our Bell test highlights the flexibility of our system. Our
Bell test machine is capable of high event rates, making
it well suited for generating random numbers required
by cryptographic applications [29]. Future work will focus on incorporating our Bell test machine as an additional source of real-time randomness into the National
Institute of Standards and Technology’s public random
number beacon (https://beacon.nist.gov).

Aggregate pulses  N(++ | ab)  Nstop   Total trials   P-value       Adjusted p-value  Timing margin (ns)  Minimum distance (m)
1                 1257        2376    175,654,992    2.5 × 10^-3   5.9 × 10^-3       63.5 ± 3.7          9.2
3                 3800        7211    175,744,824    2.4 × 10^-6   2.4 × 10^-5       50.9 ± 3.7          7.3
5                 6378        12127   177,358,351    5.9 × 10^-9   2.3 × 10^-7       38.3 ± 3.7          5.4
7                 8820        16979   177,797,650    2.0 × 10^-7   9.2 × 10^-6       25.7 ± 3.7          3.5

TABLE I. P-value results for different numbers of aggregate pulses. Here N(++ | ab) refers to the number of times Alice and
Bob both detect a photon with settings a and b, respectively. Before analyzing the data a stopping criterion, Nstop, was chosen.
This stopping criterion is the total number of events considered that have the settings and outcomes specified by the terms
in Eq. (1), Nstop = N(++ | ab) + N(+0 | ab′) + N(0+ | a′b) + N(++ | a′b′). After this number of events the p-value is computed
and the remaining trials discarded. Such a pre-determined stopping criterion is necessary for the hypothesis test we use (see
Supplemental Material for more details). The total trials include all trials up to the stopping criterion regardless of whether
a photon is detected. The adjusted p-value accounts for the excess predictability we estimate from measurements of one of
our random number generators. As discussed in the text, the time difference between Bob finishing his measurement and the
earliest time at which information about Alice’s measurement choice could arrive at Bob sets the margin of timing error that
can be tolerated while keeping all events spacelike separated. We also give the minimum distance between
each party and its boundary line (shown in Fig. 4(a)) that guarantees satisfaction of the spacelike separation constraints. In
the Supplemental Material the frequencies of each combination of settings choices for five aggregate pulses are reported.
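The pre-determined stopping rule described in the Table I caption can be sketched as follows. This is a hypothetical helper for illustration, not the experiment's analysis code: trials are read in order, and once the count of events of the types entering Eq. (1) reaches Nstop, the p-value is computed from the trials so far and all later trials are discarded.

```python
# Hypothetical sketch of the fixed stopping rule from the Table I caption.
# A trial is an (outcome, settings) pair; only the four event types entering
# Eq. (1) count toward N_stop, but every trial up to the stop is kept.
RELEVANT = {("++", "ab"), ("+0", "ab'"), ("0+", "a'b"), ("++", "a'b'")}

def truncate_at_nstop(trials, n_stop):
    """Return the prefix of trials ending at the n_stop-th relevant event."""
    kept, count = [], 0
    for outcome, settings in trials:
        kept.append((outcome, settings))
        if (outcome, settings) in RELEVANT:
            count += 1
            if count == n_stop:
                break
    return kept

# Toy usage: the third relevant event ends the run; the no-detection ("00")
# trial still counts toward the total number of trials.
toy = [("++", "ab"), ("00", "ab"), ("+0", "ab'"), ("0+", "a'b"), ("++", "ab")]
print(len(truncate_at_nstop(toy, n_stop=3)))  # 4
```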
It has been 51 years since John Bell formulated
his test of local realism. In that time his inequality
has shaped our understanding of entanglement and
quantum correlations, led to the quantum information
revolution, and transformed the study of quantum
foundations. Until recently, it was not possible
to carry out a complete and statistically significant
loophole-free Bell test.
Using advances in random
number generation, photon source development, and
high-efficiency single-photon detectors, we are able to
observe a strong violation of a Bell inequality that is
loophole free, meaning that we only need to make a
minimal set of assumptions. These assumptions are that
our measurements of locations and times of events are
reliable, that Alice’s and Bob’s measurement outcomes
are fixed at the time taggers, and that during any given
trial the random number generators at Alice and Bob are
physically independent of each other and the properties
of the photons being measured. It is impossible, even
in principle, to eliminate a form of these assumptions
in any Bell test. Under these assumptions, if a hidden
variable theory is local it does not agree with our results,
and if it agrees with our results then it is not local.
We thank Todd Harvey for assistance with optical
fiber installation, Norman Sanford for the use of lab
space, Kevin Silverman, Aephraim M. Steinberg, Rupert Ursin, Marissa Giustina, Stephen Jordan, Dietrich
Leibfried, and Paul Lett for helpful discussions, Nik
Luhrs and Kristina Meier for helping with the electronics, Andrew Novick for help with the GPS measurements, Joseph Chapman and Malhar Jere for designing the cultural pseudorandom numbers, and Stephen
Jordan, Paul Lett, and Dietrich Leibfried for constructive comments on the manuscript. We thank Conrad Turner Bierhorst for waiting patiently for the computation of p-values. We dedicate this paper to the
memory of our coauthor, colleague, and friend, Jeffrey
Stern. We acknowledge support for this project provided by: DARPA (LKS, MSA, AEL, SDD, MJS, VBV,
TG, RPM, SWN, WHF, FM, MDS, JAS) and the NIST
Quantum Information Program (LKS, MSA, AEL, SDD,
MJS, VBV, TG, SG, PB, JCB, AM, RPM, EK, SWN);
NSF grant No. PHY 12-05870 and MURI Center for
Photonic Quantum Information Systems (ARO/ARDA
Program DAAD19-03-1-0199), the DARPA InPho program
and the Office of Naval Research MURI on Fundamental Research on Wavelength-Agile High-Rate Quantum Key Distribution (QKD) in a Marine Environment, award #N00014-13-0627 (BGC, MAW, DRK,
PGK); NSERC, CIFAR and Industry Canada (EMS,
YZ, TJ); NASA (FM, MDS, WHF, JAS); European Research Council project AQUMET, FET Proactive project
QUIC, Spanish MINECO project MAGO (Ref. FIS2011-23520) and EPEC (FIS2014-62181-EXP), Catalan 2014-SGR-1295, the European Regional Development Fund
(FEDER) grant TEC2013-46168-R, and Fundacio Privada CELLEX (MWM, CA, WA, VP); New Brunswick
Innovation Foundation (DRH). Part of the research was
carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the
National Aeronautics and Space Administration. This
work includes contributions of the National Institute of
Standards and Technology, which are not subject to U.S.
copyright.
[1] J. S. Bell, Epistemological Letters, 2 (1975).
[2] P. Holland, Found. Phys. 35, 177 (2005), arXiv:quant-ph/0401017.
[3] L. de Broglie, J. Phys. Radium 8, 225 (1927).
[4] D. Bohm, Phys. Rev. 85, 166 (1952).
[5] D. Bohm, Phys. Rev. 85, 180 (1952).
[6] A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
[7] A. Einstein, M. Born, and H. Born, The Born-Einstein Letters: the Correspondence between Max & Hedwig Born and Albert Einstein 1916/1955, 1st ed. (The MacMillan Press Ltd, London and Basingstoke, 1971).
[8] J. S. Bell, Physics 1, 195 (1964).
[9] J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Phys. Rev. Lett. 23, 880 (1969).
[10] J. F. Clauser and M. A. Horne, Phys. Rev. D 10, 526 (1974).
[11] S. J. Freedman and J. F. Clauser, Phys. Rev. Lett. 28, 938 (1972).
[12] A. Aspect, P. Grangier, and G. Roger, Phys. Rev. Lett. 47, 460 (1981).
[13] A. Aspect, P. Grangier, and G. Roger, Phys. Rev. Lett. 49, 91 (1982).
[14] A. Aspect, J. Dalibard, and G. Roger, Phys. Rev. Lett. 49, 1804 (1982).
[15] M. Genovese, Phys. Rep. 413, 319 (2005), arXiv:quant-ph/0701071.
[16] J.-Å. Larsson, J. Phys. A 47, 424003 (2014), arXiv:1407.0363.
[17] C. Abellan, W. Amaya, D. Mitrani, V. Pruneri, and M. W. Mitchell, ArXiv e-prints (2015), arXiv:1506.02712 [quant-ph].
[18] M. W. Mitchell, C. Abellan, and W. Amaya, Phys. Rev. A 91, 012314 (2015).
[19] P. H. Eberhard, Phys. Rev. A 47, R747 (1993).
[20] G. Weihs, T. Jennewein, C. Simon, H. Weinfurter, and A. Zeilinger, Phys. Rev. Lett. 81, 5039 (1998), arXiv:quant-ph/9810080.
[21] M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, Nature 409, 791 (2001).
[22] T. Scheidl, R. Ursin, J. Kofler, S. Ramelow, X.-S. Ma, T. Herbst, L. Ratschbacher, A. Fedrizzi, N. K. Langford, T. Jennewein, and A. Zeilinger, Proc. Nat. Acad. Sci. USA 107, 19708 (2010), arXiv:0811.3129.
[23] M. Giustina, A. Mech, S. Ramelow, B. Wittmann, J. Kofler, J. Beyer, A. Lita, B. Calkins, T. Gerrits, S. W. Nam, R. Ursin, and A. Zeilinger, Nature 497, 227 (2013), arXiv:1212.0533.
[24] B. G. Christensen, K. T. McCusker, J. B. Altepeter, B. Calkins, T. Gerrits, A. E. Lita, A. Miller, L. K. Shalm, Y. Zhang, S. W. Nam, N. Brunner, C. C. W. Lim, N. Gisin, and P. G. Kwiat, Phys. Rev. Lett. 111, 130406 (2013), arXiv:1306.5772.
[25] B. Hensen, H. Bernien, A. E. Dreau, A. Reiserer, N. Kalb,
M. S. Blok, J. Ruitenberg, R. F. L. Vermeulen, R. N.
Schouten, C. Abellan, W. Amaya, V. Pruneri, M. W.
Mitchell, M. Markham, D. J. Twitchen, D. Elkouss,
S. Wehner, T. H. Taminiau, and R. Hanson, Nature
526, 682 (2015).
[26] M. Giustina, M. A. M. Versteegh, S. Wengerowsky,
J. Handsteiner, A. Hochrainer, K. Phelan, F. Steinlechner, J. Kofler, J.-A. Larsson, C. Abellan, W. Amaya,
V. Pruneri, M. W. Mitchell, J. Beyer, T. Gerrits, A. E.
Lita, L. K. Shalm, S. W. Nam, T. Scheidl, R. Ursin,
B. Wittmann, and A. Zeilinger, ArXiv e-prints (2015),
arXiv:1511.03190 [quant-ph].
[27] P. Bierhorst, J. Phys. A 48, 195302 (2015).
[28] J. Shao, Mathematical Statistics, Springer Texts in Statistics (Springer, New York, 1998); see 2nd ed., pp. 126-127.
[29] S. Pironio, A. Acı́n, S. Massar, A. B. de la Giroday,
D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes,
L. Luo, T. A. Manning, and C. Monroe, Nature 464,
1021 (2010), arXiv:0911.3427.
[30] R. D. Gill, in Mathematical Statistics and Applications:
Festschrift for Constance van Eeden, Vol. 42, edited by
M. Moore, S. Froda, and C. Léger (Institute of Mathematical Statistics. Beachwood, Ohio, 2003) pp. 133–154,
arXiv:quant-ph/0110137.
[31] P. B. Dixon, D. Rosenberg, V. Stelmakh, M. E. Grein,
R. S. Bennink, E. A. Dauler, A. J. Kerman, R. J. Molnar,
and F. N. C. Wong, Phys. Rev. A 90, 043804 (2014).
[32] L. K. Shalm, K. Garay, J. Palfree, A. L. Migdall,
A. U’Ren, and S. W. Nam, “Spontaneous parametric
downconversion calculator,” http://www.spdcalc.org.
[33] P. G. Evans, R. S. Bennink, W. P. Grice, T. S. Humble,
and J. Schaake, Phys. Rev. Lett. 105, 253601 (2010).
[34] All uncertainties U and error bars correspond to an estimated standard deviation, σ, and a coverage factor k = 1
as U = kσ.
[35] F. Marsili, V. B. Verma, J. A. Stern, S. Harrington, A. E.
Lita, T. Gerrits, I. Vayshenker, B. Baek, M. D. Shaw,
R. P. Mirin, and S. W. Nam, Nature Photonics 7, 210
(2013), arXiv:1209.5774.
[36] D. N. Klyshko, Sov. J. Quantum Electron. 10, 1112
(1980).
[37] Raw data from each experimental run will be made available online.
[38] P. Bierhorst, (2013), arXiv:1312.2999.
[39] Y. Zhang, S. Glancy, and E. Knill, Phys. Rev. A 84,
062118 (2011), arXiv:1108.2468.
[40] Y. Zhang, S. Glancy, and E. Knill, Phys. Rev. A 88,
052119 (2013), arXiv:1303.7464.
[41] E. Knill, S. Glancy, S. W. Nam, K. Coakley, and
Y. Zhang, Phys. Rev. A 91, 032105 (2015).