Quantum computing becomes viable when a quantum state can be preserved from environmentally-induced error. If quantum bits (qubits) are sufficiently reliable, errors are sparse and quantum error correction (QEC) is capable of identifying and correcting them. Adding more qubits improves the preservation by guaranteeing that increasingly larger clusters of errors will not cause logical failure - a key requirement for large-scale systems. Using QEC to extend the qubit lifetime remains one of the outstanding experimental challenges in quantum computing. Here, we report the protection of classical states from environmental bit-flip errors and demonstrate the suppression of these errors with increasing system size. We use a linear array of nine qubits, which is a natural precursor of the two-dimensional surface code QEC scheme, and track errors as they occur by repeatedly performing projective quantum non-demolition (QND) parity measurements. Relative to a single physical qubit, we reduce the ...
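At the decoding level, the nine-qubit bit-flip experiment behaves like a classical repetition code: repeated parity checks localize bit flips, and a logical error occurs only when enough physical qubits fail together. The Monte Carlo sketch below is a toy classical model (independent flips, simple majority-vote decoding rather than the experiment's syndrome-history decoder, and illustrative parameters) showing the qualitative suppression of logical errors with code length:

```python
import random

def logical_error_rate(n_qubits, p_flip, trials=200_000, seed=1):
    """Estimate the logical bit-flip rate of a length-n repetition code,
    assuming independent physical flips and majority-vote decoding.
    Toy model only; not the experiment's actual decoder or parameters."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p_flip for _ in range(n_qubits))
        if 2 * flips > n_qubits:  # majority of qubits flipped: decoding fails
            failures += 1
    return failures / trials

# Logical error rates drop sharply from 1 to 5 to 9 qubits at p_flip = 0.05.
rates = [logical_error_rate(n, 0.05) for n in (1, 5, 9)]
```

Under this model, each increase in code length suppresses the logical error rate by roughly another factor related to p_flip, which is the qualitative scaling the experiment demonstrates.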
State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three.
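The input-count arithmetic above can be made concrete. The functions in this sketch count only raw |A〉 inputs per distillation round; they deliberately ignore the surface code circuit-layout costs which, per the abstract, usually dominate the true overhead:

```python
def inputs_15_to_1(k):
    """Raw |A> inputs to produce k improved states using k independent
    15-to-1 distillation rounds (one level of distillation)."""
    return 15 * k

def inputs_block_code(k):
    """Raw |A> inputs to produce k improved states using the 3k + 8
    block code protocol."""
    return 3 * k + 8

# The raw input-count ratio approaches 15/3 = 5 for large k, yet the
# full space-time overhead gain is typically below a factor of three,
# and sometimes absent, once layout costs (not modeled here) count.
ratio_at_k8 = inputs_15_to_1(8) / inputs_block_code(8)  # 120 / 32 = 3.75
```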
Topological error correction--a novel method to actively correct errors based on cluster states with topological properties--has the highest order of tolerable error rates known to date (10^{-2}). Moreover, the scheme requires only nearest-neighbour interactions, making it particularly suitable for most physical systems. Here we report the first experimental demonstration of topological error correction with an 8-qubit optical cluster state. In the experiment, ...
Quantum computing becomes viable when a quantum state can be protected from environment-induced error. If quantum bits (qubits) are sufficiently reliable, errors are sparse and quantum error correction (QEC) is capable of identifying and correcting them. Adding more qubits improves the preservation of states by guaranteeing that increasingly larger clusters of errors will not cause logical failure - a key requirement for large-scale systems. Using QEC to extend the qubit lifetime remains one of the outstanding experimental challenges in quantum computing. Here we report the protection of classical states from environmental bit-flip errors and demonstrate the suppression of these errors with increasing system size. We use a linear array of nine qubits, which is a natural step towards the two-dimensional surface code QEC scheme, and track errors as they occur by repeatedly performing projective quantum non-demolition parity measurements. Relative to a single physical qubit, we reduce th...
The surface code cannot be used when qubits vanish during computation; instead, a variant known as the topological cluster state is necessary. It has a gate error threshold of 0.75% and requires only nearest-neighbor interactions on a 2D array of qubits. Previous work on loss tolerance using this code considered only qubits vanishing during measurement. We begin by also including qubit loss during two-qubit gates and initialization, and then additionally consider interaction errors that occur when neighbors attempt to entangle with a qubit that isn't there. In doing so, we show that even our best case scenario requires a loss rate below 1% in order to avoid considerable space-time overhead.
Superconducting qubits, while promising for scalability and long coherence times, contain more than two energy levels, and therefore are susceptible to errors generated by the leakage of population outside of the computational subspace. Such leakage errors constitute a prominent roadblock towards fault-tolerant quantum computing (FTQC) with superconducting qubits. FTQC using topological codes is based on sequential measurements of multiqubit stabilizer operators. Here, we first propose a leakage-resilient procedure to perform repetitive measurements of multiqubit stabilizer operators, and then use this scheme as an ingredient to develop a leakage-resilient approach for surface code quantum error correction with superconducting circuits. Our protocol is based on swap operations between data and ancilla qubits at the end of every cycle, requiring read-out and reset operations on every physical qubit in the system, and thereby preventing persistent leakage errors from occurring.
Accurate methods of assessing the performance of quantum gates are extremely important. Quantum process tomography and randomized benchmarking are the currently favored methods. Quantum process tomography gives detailed information, but significant approximations must be made to reduce this information to a form that quantum error correction simulations can use. Randomized benchmarking typically outputs just a single number, the fidelity, giving no information on the structure of errors during the gate. Neither method is optimized to assess gate performance within an error detection circuit, where gates will actually be used in a large-scale quantum computer. Specifically, the important issues of error composition and error propagation lie outside the scope of both methods. We present a fast, simple, and scalable method of obtaining exactly the information required to perform effective quantum error correction from the output of continuously running error detection circuits, enabling accurate prediction of large-scale behavior.
The fragile nature of quantum information limits our ability to construct large quantities of quantum bits suitable for quantum computing. An important goal, therefore, is to minimize the amount of resources required to implement quantum algorithms, many of which are serial in nature and leave large numbers of qubits idle much of the time unless compression techniques are used. Furthermore, quantum error-correcting codes, which are required to reduce the effects of noise, introduce additional resource overhead. We consider a strategy for quantum circuit optimization based on topological deformation in the surface code, one of the best performing and most practical quantum error-correcting codes. Specifically, we examine the problem of minimizing computation time on a two-dimensional qubit lattice of arbitrary but fixed dimension, and propose two algorithms for doing so.
Topological quantum error correction codes are known to be able, in principle, to tolerate arbitrary local errors given sufficient qubits. This includes errors involving many local qubits that could potentially arise from unwanted many-qubit interactions. In this work, we quantify the level of tolerance, numerically studying the effects of many-qubit errors on the performance of the surface code. We find that if increasingly large area errors are at least moderately exponentially suppressed, arbitrarily reliable quantum computation can still be achieved with practical overhead. We furthermore quantify the effect of non-local two-qubit errors, which would be expected in arrays of qubits coupled by the Coulomb interaction, magnetic dipole interaction, or similar polynomially decaying interactions. Surprisingly, we find that very modest quadratic suppression of such errors with increasing qubit separation is sufficient to permit quantum computation with practical overhead.
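As a rough illustration of the two-qubit error finding, suppose correlated errors between qubits at separation r occur with probability p(r) = p0 / r^s. The result is that s = 2 already suffices, while a coupling such as the magnetic dipole interaction falls off even faster. The toy comparison below uses arbitrary illustrative values of p0 and the separations, not figures from the study:

```python
def pair_error(p0, r, s):
    """Toy model: correlated two-qubit error probability at separation r,
    polynomially suppressed with exponent s. Illustrative only."""
    return p0 / r ** s

# A dipole-like 1/r^3 decay sits at or below the quadratic 1/r^2
# suppression found sufficient, at every separation r >= 1 in this model.
meets_requirement = all(
    pair_error(1e-3, r, 3) <= pair_error(1e-3, r, 2) for r in range(1, 21)
)
```

Note that a bare Coulomb 1/r decay would not meet the quadratic requirement in this toy model; what matters in practice is how fast the induced error probability, not the coupling itself, falls with separation.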
Papers by Austin Fowler