Papers by Robert Spekkens
Physical Review, Aug 10, 2023
Interference phenomena are often claimed to resist classical explanation. However, such claims are undermined by the fact that the specific aspects of the phenomenology upon which they are based can in fact be reproduced in a noncontextual ontological model [Catani et al., arXiv:2111.13727]. This raises the question of what other aspects of the phenomenology of interference do in fact resist classical explanation. We answer this question by demonstrating that the most basic quantum wave-particle duality relation, which expresses the precise tradeoff between path distinguishability and fringe visibility, cannot be reproduced in any noncontextual model. We do this by showing that it is a specific type of uncertainty relation and then leveraging a recent result establishing that noncontextuality restricts the functional form of this uncertainty relation [Catani et al., Phys. Rev. Lett. 129, 240401 (2022)]. Finally, we discuss what sorts of interferometric experiment can demonstrate contextuality via the wave-particle duality relation.
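For reference, the duality relation at issue is standardly expressed (in standard notation, not quoted from the paper) as a tradeoff between the path distinguishability D and the fringe visibility V of a two-path interferometer:

```latex
D^2 + V^2 \le 1,
```

with equality attained for pure states. It is this precise functional form, rather than the mere existence of a tradeoff, that is shown to be inexplicable in any noncontextual model.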
arXiv (Cornell University), Jul 24, 2022
New Journal of Physics, Dec 5, 2012
Physical Review, Aug 12, 2019
The common wisdom in the field of quantum information theory is that when a system is initially correlated with its environment, the map describing its evolution may fail to be completely positive. If true, this would have practical and foundational significance. We here demonstrate, however, that the common wisdom is mistaken. We trace the error to the standard proposal for how the evolution map ought to be defined. We summarize this standard proposal and then show that it sometimes fails to define a linear map, or any map at all. Further, we show that these pathologies persist even in completely classical examples. Drawing inspiration from the framework of classical causal models, we argue that the correct definition of the evolution map is obtained by considering a counterfactual scenario wherein the system is reprepared independently of any systems in its causal past while the rest of the circuit remains the same, yielding a map that is always completely positive. In a post-mortem on the standard proposal, we highlight two distinct mistakes that retrospectively become evident in its application to completely classical examples: (i) the types of constraints to which it appealed are constraints on what one can infer about the final state of a system based on its initial state, where such inferences are based not just on the cause-effect relation between them (which defines the correct evolution map) but also on the common cause of the two; (ii) in a (retrospectively unnecessary) attempt to introduce variability in the input state, it inadvertently introduced variability in the inference map itself, and then tried to fit the input-output pairs associated to these different maps with a single map.
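A minimal classical toy example (a construction for illustration here, not drawn from the paper) makes the pathology concrete: when the system is perfectly correlated with its environment, locally varying the system can leave its marginal unchanged while changing the output, so no map from input marginals to output distributions exists.

```python
import numpy as np

# System S and environment E are bits, perfectly correlated and uniform:
# P(S=E) = 1, P(S=0) = P(S=1) = 1/2.  Dynamics: S' = S XOR E.
# We "prepare" S by flipping it with probability q just before the dynamics.

def run(q):
    p_in = np.zeros(2)   # marginal of S after the flip (the "input state")
    p_out = np.zeros(2)  # distribution of S' after the dynamics
    for s in (0, 1):          # joint support: S = E = s, each with weight 1/2
        for flip in (0, 1):
            w = 0.5 * (q if flip else 1 - q)
            s_prepared = s ^ flip
            p_in[s_prepared] += w
            p_out[s_prepared ^ s] += w   # S' = S_prepared XOR E, with E = s
    return p_in, p_out

for q in (0.0, 0.3, 1.0):
    p_in, p_out = run(q)
    print(f"q={q}: input marginal {p_in}, output {p_out}")

# Every q yields the same input marginal (1/2, 1/2) but a different output
# distribution (1-q, q): the input marginal does not determine the output,
# so the standard proposal fails to define any map at all in this case.
```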
arXiv (Cornell University), Jul 14, 2021
Directed acyclic graphs (DAGs) with hidden variables are often used to characterize causal relations between variables in a system. When some variables are unobserved, DAGs imply a notoriously complicated set of constraints on the distribution of observed variables. In this work, we present entropic inequality constraints that are implied by e-separation relations in hidden-variable DAGs with discrete observed variables. The constraints can intuitively be understood to follow from the fact that the capacity of variables along a causal pathway to convey information is restricted by their entropy; e.g., in the extreme case, a variable with entropy 0 can convey no information. We show how these constraints can be used to learn about the true causal model from an observed data distribution. In addition, we propose a measure of causal influence called the minimal mediary entropy, and demonstrate that it can augment traditional measures such as the average causal effect.
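As an illustration of the type of constraint at play (a standard data-processing consequence, not a quotation from the paper): if every causal pathway from X to Y passes through a mediary M, so that X → M → Y forms a Markov chain, then

```latex
I(X : Y) \;\le\; I(X : M) \;\le\; H(M),
```

so the mutual information between X and Y is bounded by the entropy of the mediary, and vanishes when H(M) = 0.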
Understanding the causal influences that hold among the parts of a system is critical both to explaining that system's natural behaviour and to controlling it through targeted interventions. In a quantum world, understanding causal relations is equally important, but the set of possibilities is far richer. The two basic ways in which a pair of time-ordered quantum systems may be causally related are by a cause-effect mechanism or by a common cause acting on both. Here, we show that it is possible to have a coherent mixture of these two possibilities. We realize such a nonclassical causal relation in a quantum optics experiment and derive a set of criteria for witnessing the coherence based on a quantum version of Berkson's paradox.
Nature Communications, Feb 17, 2023
PRX Quantum, Nov 3, 2021
Bell's theorem is typically understood as the proof that quantum theory is incompatible with local hidden-variable models. More generally, we can see the violation of a Bell inequality as witnessing the impossibility of explaining quantum correlations with classical causal models. The violation of a Bell inequality, however, does not exclude classical models where some level of measurement dependence is allowed, that is, where the choice made by observers can be correlated with the source generating the systems to be measured. Here, we show that the level of measurement dependence can be quantitatively upper bounded if we arrange the Bell test within a network. Furthermore, we also prove that these results can be adapted in order to derive nonlinear Bell inequalities for a large class of causal networks and to identify quantumly realizable correlations that violate them.
arXiv (Cornell University), Mar 15, 2019
Bulletin of the American Physical Society, Mar 13, 2017
Quantum state and measurement tomography are standard analysis methods which find the quantum states and measurement operators that explain a set of experimental data. Once the quantum description of an experiment is found, it is often used to draw conclusions about the experiment, or to make predictions about future ones. However, these techniques cannot be used to identify possible deviations from quantum theory, as they assume the correctness of quantum mechanics. Here, we develop a quantum-free tomography technique that finds the generalized probabilistic theory (GPT) that best fits our data. This GPT tomography technique is able to characterize the dimension and shape of the GPT state and effect spaces in an experiment, providing a predictive theory explaining the specific preparation and measurement procedures performed. We demonstrate our technique with an experiment manipulating the polarization degree of freedom of single photons. The GPT state and effect spaces we construct closely resemble the corresponding spaces for a qubit, and we place small upper bounds on the maximum amount our experiment may deviate from quantum theory.
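A minimal sketch of the core numerical idea, under the assumption (consistent with the general GPT-tomography approach, though the paper's actual fitting procedure is more elaborate) that the data form a table of outcome frequencies indexed by preparations and effects; the function name and interface are hypothetical:

```python
import numpy as np

def gpt_fit(D, k):
    """Rank-k fit of a probability table D[i, j] = p(effect j | preparation i).

    A GPT of dimension k explains the table as D = S @ E.T, with one GPT
    state vector per preparation (rows of S) and one GPT effect vector per
    measurement outcome (rows of E).  A truncated SVD gives the best rank-k
    approximation in least squares; the factorization is unique only up to
    a linear gauge S -> S @ G, E -> E @ np.linalg.inv(G).T.
    """
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    S = U[:, :k] * s[:k]    # realized GPT states
    E = Vt[:k, :].T         # realized GPT effects
    residual = np.linalg.norm(D - S @ E.T)
    return S, E, residual
```

Scanning the residual as a function of k estimates the GPT dimension; the convex hulls of the rows of S and E then estimate the shapes of the realized state and effect spaces.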
arXiv (Cornell University), Apr 20, 2020
A standard approach to quantifying resources is to determine which operations on the resources are freely available, and to deduce the partial order over resources that is induced by the relation of convertibility under the free operations. If the resource of interest is the nonclassicality of the correlations embodied in a quantum state, i.e., entanglement, then the common assumption is that the appropriate choice of free operations is Local Operations and Classical Communication (LOCC). We here advocate for the study of a different choice of free operations, namely, Local Operations and Shared Randomness (LOSR), and demonstrate its utility in understanding the interplay between the entanglement of states and the nonlocality of the correlations in Bell experiments. Specifically, we show that the LOSR paradigm (i) provides a resolution of the anomalies of nonlocality, wherein partially entangled states exhibit more nonlocality than maximally entangled states, (ii) entails new notions of genuine multipartite entanglement and nonlocality that are free of the pathological features of the conventional notions, and (iii) makes possible a resource-theoretic account of the self-testing of entangled states which generalizes and simplifies prior results. Along the way, we derive some fundamental results concerning the necessary and sufficient conditions for convertibility between pure entangled states under LOSR and highlight some of their consequences, such as the impossibility of catalysis for bipartite pure states. The resource-theoretic perspective also clarifies why it is neither surprising nor problematic that there are mixed entangled states which do not violate any Bell inequality. Our results motivate the study of LOSR-entanglement as a new branch of entanglement theory.
arXiv (Cornell University), May 14, 2020
It is useful to have a criterion for when the predictions of an operational theory should be considered classically explainable. Here we take the criterion to be that the theory admits of a generalized-noncontextual ontological model. Existing works on generalized noncontextuality have focused on experimental scenarios having a simple structure, typically, prepare-measure scenarios. Here, we formally extend the framework of ontological models as well as the principle of generalized noncontextuality to arbitrary compositional scenarios. We leverage this process-theoretic framework to prove that, under some reasonable assumptions (e.g., tomographic locality), every generalized-noncontextual ontological model of an operational theory has a surprisingly rigid and simple mathematical structure; in short, it corresponds to a frame representation which is not overcomplete. One consequence of this theorem is that the largest number of ontic states possible in any such model is given by the dimension of the associated generalized probabilistic theory. This constraint is useful for generating noncontextuality no-go theorems as well as techniques for experimentally certifying contextuality. Along the way, we extend known results concerning the equivalence of different notions of classicality from prepare-measure scenarios to arbitrary compositional scenarios. Specifically, we prove a correspondence between the following three notions of classical explainability of an operational theory: (i) admitting of a noncontextual ontological model, (ii) admitting of a positive quasiprobability representation, and (iii) being simplex-embeddable.
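For orientation, the basic form of an ontological model in the prepare-measure case (standard background, not specific to this paper): each preparation P is assigned a distribution μ_P over ontic states λ and each measurement M a response function ξ, such that

```latex
p(k \mid M, P) \;=\; \sum_{\lambda} \xi(k \mid M, \lambda)\, \mu_P(\lambda),
```

and generalized noncontextuality demands that operationally equivalent procedures receive identical representations; the paper extends this framework to arbitrary compositional scenarios.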
Physical Review, Jun 5, 2018
Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.
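A minimal sketch of the kind of linear-program feasibility test such an algorithm enables, using the standard fact that response functions in an ontological model can be taken deterministic; the interface and names are hypothetical, and measurement equivalences are omitted for brevity:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def admits_noncontextual_model(data, prep_equivs=()):
    """Test whether a data table p(k|M,P) admits a noncontextual model.

    data: array of shape (nP, nM, nK) with data[P, M, k] = p(k | M, P).
    prep_equivs: pairs (w1, w2) of weight vectors over preparations such
        that the mixtures sum_P w1[P] P and sum_P w2[P] P are operationally
        equivalent; noncontextuality requires they get the same distribution
        over ontic states.

    Ontic states are taken to be deterministic outcome assignments
    kappa: M -> k, so the search for a model is an LP feasibility problem.
    (Brute-force enumeration; exponential in nM, fine for small scenarios.)
    """
    nP, nM, nK = data.shape
    kappas = list(itertools.product(range(nK), repeat=nM))
    nL = len(kappas)
    n_vars = nP * nL          # unknowns: mu[P, kappa] >= 0
    A_eq, b_eq = [], []

    # Reproduce the data: sum_kappa [kappa[M] == k] mu[P, kappa] = p(k|M,P).
    for P in range(nP):
        for M in range(nM):
            for k in range(nK):
                row = np.zeros(n_vars)
                for j, kap in enumerate(kappas):
                    if kap[M] == k:
                        row[P * nL + j] = 1.0
                A_eq.append(row)
                b_eq.append(data[P, M, k])

    # Preparation equivalences: equal mixtures of the mu's, ontic-state-wise.
    for w1, w2 in prep_equivs:
        for j in range(nL):
            row = np.zeros(n_vars)
            for P in range(nP):
                row[P * nL + j] = w1[P] - w2[P]
            A_eq.append(row)
            b_eq.append(0.0)

    res = linprog(c=np.zeros(n_vars), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n_vars, method="highs")
    return res.success
```

The facets of the polytope of feasible data tables are then the noncontextuality inequalities themselves.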
Physical Review X, Feb 2, 2018
Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
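For context (a standard benchmark, not a result of the paper): the quantum-optimal success probability for discriminating two equiprobable pure states with overlap c = |⟨ψ₀|ψ₁⟩|² is the Helstrom bound,

```latex
P_{\mathrm{succ}} \;=\; \tfrac{1}{2}\left(1 + \sqrt{1 - c}\right),
```

and the noncontextuality inequalities of the paper impose a smaller maximum on any noncontextual model, so that observing a success probability above the noncontextual bound witnesses contextuality.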
Nature Physics, Mar 23, 2015
The problem of using observed correlations to infer causal relations is relevant to a wide variety of scientific disciplines. Yet given correlations between just two classical variables, it is impossible to determine whether they arose from a causal influence of one on the other or a common cause influencing both, unless one can implement a randomized intervention. We here consider the problem of causal inference for quantum variables. We introduce causal tomography, which unifies and generalizes conventional quantum tomography schemes to provide a complete solution to the causal inference problem using a quantum analogue of a randomized trial. We furthermore show that, in contrast to the classical case, observed quantum correlations alone can sometimes provide a solution. We implement a quantum-optical experiment that allows us to control the causal relation between two optical modes, and two measurement schemes, one with and one without randomization, that extract this relation from the observed correlations. Our results show that entanglement and coherence, known to be central to quantum information processing, also provide a quantum advantage for causal inference.
arXiv (Cornell University), Oct 13, 2022
Electronic Proceedings in Theoretical Computer Science, 2021
Causal models with unobserved variables impose nontrivial constraints on the distributions over the observed variables. When a common cause of two variables is unobserved, it is impossible to uncover the causal relation between them without making additional assumptions about the model. In this work, we consider causal models with a promise that unobserved variables have known cardinalities. We derive inequality constraints implied by d-separation in such models. Moreover, we explore the possibility of leveraging this result to study causal influence in models that involve quantum systems.