Logical Noise Bias in Magic State Injection

Abstract

Fault-tolerant architectures aim to reduce the noise of a quantum computation. Despite such architectures being well studied, a detailed understanding of how noise is transformed in a fault-tolerant primitive such as magic state injection is currently lacking. We use numerical simulations of logical process tomography on a fault-tolerant gadget that implements a logical $T = Z(\pi/4)$ gate using magic state injection, to understand how noise characteristics at the physical level are transformed into noise characteristics at the logical level. We show how, in this gadget, a significant phase ($Z$) bias can arise in the logical noise, even with unbiased noise at the physical level. While the magic state injection gadget intrinsically induces biased noise, with extant phase bias being further amplified at the logical level, we identify noisy error correction circuits as a key limiting factor on the magnitude of this logical noise bias in the circuits studied. Our approach provides a framework for assessing the detailed noise characteristics, as well as the overall performance, of fault-tolerant logical primitives.


Summary

This paper investigates a critical, yet often overlooked, aspect of fault-tolerant quantum computing: how physical-level noise characteristics transform into logical-level noise, specifically within the ubiquitous primitive of magic state injection for non-Clifford gates. It challenges the implicit assumption that logical noise simply mirrors physical noise, demonstrating a complex transformation process.

The key innovation is the demonstration that magic state injection (MSI) intrinsically introduces a logical Z-bias into the T-gate operation, even when the underlying physical noise is unbiased (e.g., depolarizing). Furthermore, if the physical noise already exhibits a Z-bias, MSI significantly amplifies this bias, showing a quadratic relationship between physical and logical Z-bias. Crucially, even when physical noise is strongly X- or Y-biased, the logical noise still acquires a weak, persistent Z-bias, highlighting the inherent noise-transforming nature of the MSI gadget itself. A significant finding is the dual role of noisy error correction (EC) circuits: while essential for fault tolerance, they are identified as a primary limiting factor on the magnitude of this logical Z-bias, especially counteracting the intrinsic bias when physical noise is X-biased. These insights are derived from novel numerical simulations performing logical process tomography on a fault-tolerant T-gate gadget encoded in the Steane code, allowing for a detailed component-by-component analysis of noise propagation.
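The amplification mechanism can be sketched with a toy scaling model (our own simplification for intuition, not the paper's circuit-level simulation): suppose a physical Z error on the injected magic state propagates directly to a logical Z error, while logical X and Y errors require two physical faults because single ones are caught by error correction. Under that hypothetical scaling:

```python
def pauli_bias(p_x, p_y, p_z):
    """Noise bias: relative likelihood of Z errors over X/Y errors."""
    return p_z / (p_x + p_y)

def toy_logical_probs(p_x, p_y, p_z, c1=1.0, c2=1.0):
    """Toy model with hypothetical circuit constants c1, c2 (set to 1 here):
    logical Z inherits the physical Z rate at first order, while logical
    X/Y errors require two physical faults and enter at second order."""
    l_xy = c2 * (p_x + p_y) ** 2
    return l_xy / 2, l_xy / 2, c1 * p_z

# Unbiased depolarizing noise at the physical level:
p = 1e-3
l_x, l_y, l_z = toy_logical_probs(p, p, p)
print(pauli_bias(p, p, p))        # physical bias: 0.5 (unbiased)
print(pauli_bias(l_x, l_y, l_z))  # toy logical bias: ~250
```

In this toy picture an unbiased physical channel already yields a strongly Z-biased logical channel, and the bias grows as the physical rate falls; it is meant only to make the first-order-versus-second-order intuition concrete.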

This work builds upon foundational concepts of fault-tolerant quantum computing, which seeks to mitigate errors in quantum computations through error correction codes and schemes to achieve universal gate sets. Central to this is the use of the Steane code, a well-studied 7-qubit quantum error-correcting code, and the magic state injection protocol, which enables the implementation of non-Clifford gates like the T-gate necessary for universal quantum computation. The study employs sophisticated Pauli noise models, including the concept of ‘noise bias,’ which quantifies the preferential occurrence of certain error types (e.g., Z errors over X or Y errors). This builds on prior research demonstrating that tailored codes can exploit such biases to improve fault-tolerance thresholds. The methodology leverages logical process tomography, an extension of physical process tomography, to characterize the logical operation, along with established fault-tolerant state preparation and measurement techniques, to rigorously analyze noise propagation within a full fault-tolerant primitive.
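To make the characterization target concrete, here is a minimal numpy sketch (illustrative only, not the paper's simulation stack) of the object process tomography estimates: the Pauli transfer matrix of a single-qubit channel, shown for a Z-biased Pauli channel.

```python
import numpy as np

I2, X = np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

def pauli_channel(p_x, p_y, p_z):
    """Kraus operators of a single-qubit Pauli channel."""
    p_i = 1 - p_x - p_y - p_z
    return [np.sqrt(p) * P for p, P in zip([p_i, p_x, p_y, p_z], paulis)]

def transfer_matrix(kraus):
    """Pauli transfer matrix R_ij = Tr[P_i Lambda(P_j)] / 2 -- the matrix
    of Pauli-in / Pauli-out expectation values that tomography estimates."""
    def apply(rho):
        return sum(K @ rho @ K.conj().T for K in kraus)
    return np.array([[np.trace(Pi @ apply(Pj)).real / 2 for Pj in paulis]
                     for Pi in paulis])

# For a Z-biased channel the PTM is diagonal, and the decay of the
# X and Y components relative to Z reveals which errors dominate:
# diag = (1, 1-2(p_y+p_z), 1-2(p_x+p_z), 1-2(p_x+p_y)).
R = transfer_matrix(pauli_channel(1e-3, 1e-3, 1e-1))
print(np.round(np.diag(R), 3))  # diag is approximately (1, 0.798, 0.798, 0.996)
```

Logical process tomography applies the same idea at the encoded level, with logical Pauli preparations and measurements in place of physical ones.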

Magic is a property of quantum states that enables universal fault-tolerant quantum computing using simple sets of gate operations. Understanding the mechanisms by which magic is created or destroyed is, therefore, a crucial step towards efficient and practical fault-tolerant computation. We observe that a random stabilizer code subject to coherent errors exhibits a phase transition in magic, which we characterize through analytic, numeric and experimental probes. Below a critical error rate, stabilizer syndrome measurements remove the accumulated magic in the circuit, effectively protecting against coherent errors; above the critical error rate syndrome measurements concentrate magic. A better understanding of such rich behavior in the resource theory of magic could shed more light on origins of quantum speedup and pave pathways for more efficient magic state generation.
The design and optimization of a large-scale fault-tolerant quantum computer architecture relies extensively on numerical simulations to assess the performance of each component of the architecture. The simulation of fault-tolerant gadgets, which are typically implemented by Clifford circuits, is done by sampling circuit faults and propagating them through the circuit to check that they do not corrupt the logical data. One may have to repeat this fault propagation trillions of times to extract an accurate estimate of the performance of a fault-tolerant gadget. For some specific circuits, such as the standard syndrome extraction circuit for surface codes, we can exploit the natural graph structure of the set of faults to perform a simulation without fault propagation. We propose a simulation algorithm for all Clifford circuits that does not require fault propagation and instead exploits the mathematical structure of the spacetime code of the circuit. Our algorithm, which we name adjoint-based code (ABC) simulation, relies on the fact that propagation forward is the adjoint of propagation backward in the sense of Proposition 3 from [14]. We use this result to replace the propagation of trillions of fault-configurations by the backward propagation of a small number of Pauli operators which can be precomputed once and for all.
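The fault-propagation step that such simulators repeat trillions of times is simple to state: a sampled Pauli fault is conjugated through each Clifford gate in turn. In the symplectic (x, z) bit representation of Pauli operators, conjugation by a CNOT reduces to two XOR updates (a standard textbook rule, sketched here for illustration; this is not the ABC algorithm itself):

```python
def cnot_propagate(pauli, c, t):
    """Conjugate an n-qubit Pauli, given as a list of (x, z) bit pairs,
    by CNOT(control=c, target=t): X on the control copies onto the
    target, Z on the target copies onto the control."""
    x = [xb for xb, _ in pauli]
    z = [zb for _, zb in pauli]
    x[t] ^= x[c]
    z[c] ^= z[t]
    return list(zip(x, z))

# X on the control spreads forward:  X(x)I -> X(x)X
assert cnot_propagate([(1, 0), (0, 0)], 0, 1) == [(1, 0), (1, 0)]
# Z on the target spreads backward:  I(x)Z -> Z(x)Z
assert cnot_propagate([(0, 0), (0, 1)], 0, 1) == [(0, 1), (0, 1)]
# Z on the control and X on the target pass through unchanged.
assert cnot_propagate([(0, 1), (1, 0)], 0, 1) == [(0, 1), (1, 0)]
```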
Fault-tolerant quantum error correction provides a strategy to protect information processed by a quantum computer against noise which would otherwise corrupt the data. A fault-tolerant universal quantum computer must implement a universal gate set on the logical level in order to perform arbitrary calculations to in principle unlimited precision. We characterize the recent demonstration of a fault-tolerant universal gate set in a trapped-ion quantum computer [Postler et al. Nature 605.7911 (2022)] and identify aspects to improve the design of experimental setups to reach an advantage of logical over physical qubit operation. We show that various criteria to assess the break-even point for fault-tolerant quantum operations are within reach for the ion trap quantum computing architecture under consideration. We analyze the influence of crosstalk in entangling gates for logical state preparation circuits. These circuits can be designed to respect fault tolerance for specific microscopic noise models. We find that an experimentally-informed depolarizing noise model captures the essential noise dynamics of the fault-tolerant experiment, and crosstalk is negligible in the currently accessible regime of physical error rates. For deterministic Pauli state preparation, we provide a fault-tolerant unitary logical qubit initialization circuit, which can be realized without in-sequence measurement and feed-forward of classical information. We show that non-deterministic state preparation schemes for logical Pauli and magic states perform with higher logical fidelity over their deterministic counterparts for the current and anticipated future regime of physical error rates.
Our results offer guidance on improvements of physical qubit operations and validate the experimentally-informed noise model as a tool to predict logical failure rates in quantum computing architectures based on trapped ions.
Contemporary methods for benchmarking noisy quantum processors typically measure average error rates or process infidelities. However, thresholds for fault-tolerant quantum error correction are given in terms of worst-case error rates -- defined via the diamond norm -- which can differ from average error rates by orders of magnitude. One method for resolving this discrepancy is to randomize the physical implementation of quantum gates, using techniques like randomized compiling (RC). In this work, we use gate set tomography to perform precision characterization of a set of two-qubit logic gates to study RC on a superconducting quantum processor. We find that, under RC, gate errors are accurately described by a stochastic Pauli noise model without coherent errors, and that spatially-correlated coherent errors and non-Markovian errors are strongly suppressed. We further show that the average and worst-case error rates are equal for randomly compiled gates, and measure a maximum worst-case error of 0.0197(3) for our gate set. Our results show that randomized benchmarks are a viable route to both verifying that a quantum processor's error rates are below a fault-tolerance threshold, and to bounding the failure rates of near-term algorithms, if -- and only if -- gates are implemented via randomization methods which tailor noise.
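How randomization tailors coherent errors into stochastic Pauli noise can be seen in a few lines of numpy (a single-qubit illustration, not the gate-set-tomography analysis above): Pauli twirling keeps only the diagonal of a channel's Pauli transfer matrix, converting a coherent Z over-rotation into a pure dephasing channel.

```python
import numpy as np

I2, X = np.eye(2), np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

def ptm(U):
    """Pauli transfer matrix of a unitary channel: Tr[P_i U P_j U^dag]/2."""
    return np.array([[np.trace(Pi @ U @ Pj @ U.conj().T).real / 2
                      for Pj in paulis] for Pi in paulis])

theta = 0.1  # coherent Z over-rotation angle
U_err = np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * Z

# Averaging the channel over random Pauli frames cancels the
# off-diagonal PTM entries, leaving a stochastic Pauli channel.
R_twirled = np.diag(np.diag(ptm(U_err)))

# Recover Pauli probabilities from the diagonal eigenvalues:
# lam_b = sum_a p_a * (+1 if P_a commutes with P_b else -1).
M = np.array([[1, 1, 1, 1], [1, 1, -1, -1], [1, -1, 1, -1], [1, -1, -1, 1]])
p = M @ np.diag(R_twirled) / 4  # (p_I, p_X, p_Y, p_Z)
# p_Z equals sin^2(theta/2); p_X and p_Y vanish.
```

The same mechanism underlies randomized compiling; the full multi-qubit story additionally involves correlated and non-Markovian errors, as discussed above.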
The promise of quantum computing with imperfect qubits relies on the ability of a quantum computing system to scale cheaply through error correction and fault-tolerance. While fault-tolerance requires relatively mild assumptions about the nature of qubit errors, the overhead associated with coherent and non-Markovian errors can be orders of magnitude larger than the overhead associated with purely stochastic Markovian errors. One proposal to address this challenge is to randomize the circuits of interest, shaping the errors to be stochastic Pauli errors but leaving the aggregate computation unaffected. The randomization technique can also suppress couplings to slow degrees of freedom associated with non-Markovian evolution. Here we demonstrate the implementation of Pauli-frame randomization in a superconducting circuit system, exploiting a flexible programming and control infrastructure to achieve this with low effort. We use high-accuracy gate-set tomography to characterize in detail the properties of the circuit error, with and without the randomization procedure, which allows us to make rigorous statements about Markovianity as well as the nature of the observed errors. We demonstrate that randomization suppresses signatures of non-Markovian evolution to statistically insignificant levels, from a Markovian model violation ranging from $43\sigma$ to $1987\sigma$, down to violations between $0.3\sigma$ and $2.7\sigma$ under randomization. Moreover, we demonstrate that, under randomization, the experimental errors are well described by a Pauli error model, with model violations that are similarly insignificant (between $0.8\sigma$ and $2.7\sigma$).
Importantly, all these improvements in the model accuracy were obtained without degradation to fidelity, and with some improvements to error rates as quantified by the diamond norm.
Historically, noise in superconducting circuits has been considered an obstacle to be removed. A large fraction of the research effort in designing superconducting circuits has focused on noise reduction, with great success, as coherence times have increased by four orders of magnitude in the past two decades. However, noise and dissipation can never be fully eliminated, and further, a rapidly growing body of theoretical and experimental work has shown that carefully tuned noise, in the form of engineered dissipation, can be a profoundly useful tool in designing and operating quantum circuits. In this article, I review important applications of engineered dissipation, including state generation, state stabilization, and autonomous quantum error correction, where engineered dissipation can mitigate the effect of intrinsic noise, reducing logical error rates in quantum information processing. Further, I provide a pedagogical review of the basic noise processes in superconducting qubits (photon loss and phase noise), and argue that any dissipative mechanism which can correct photon loss errors is very likely to automatically suppress dephasing. I also discuss applications for quantum simulation, and possible future research directions.
In previous work, we proposed a method for leveraging efficient classical simulation algorithms to aid in the analysis of large-scale fault tolerant circuits implemented on hypothetical quantum information processors. Here, we extend those results by numerically studying the efficacy of this proposal as a tool for understanding the performance of an error-correction gadget implemented with fault models derived from physical simulations. Our approach is to approximate the arbitrary error maps that arise from realistic physical models with errors that are amenable to a particular classical simulation algorithm in an "honest" way; that is, such that we do not underestimate the faults introduced by our physical models. In all cases, our approximations provide an "honest representation" of the performance of the circuit composed of the original errors. This numerical evidence supports the use of our method as a way to understand the feasibility of an implementation of quantum information processing given a characterization of the underlying physical processes in experimentally accessible examples.
Noise-based logic is a practically deterministic logic scheme inspired by the randomness of neural spikes and uses a system of uncorrelated stochastic processes and their superposition to represent the logic state. We briefly discuss various questions such as (i) What does practical determinism mean? (ii) Is noise-based logic a Turing machine? (iii) Is there hope to beat (the dreams of) quantum computation by a classical physical noise-based processor, and what are the minimum hardware requirements for that? Finally, (iv) we address the problem of random number generators and show that the common belief that quantum number generators are superior to classical (thermal) noise-based generators is nothing but a myth.
Quantum processors can already execute tasks beyond the reach of classical simulation, albeit for artificial problems. At this point, it is essential to design error metrics that test the experimental accuracy of quantum algorithms with potential for a practical quantum advantage. The distinction between coherent errors and incoherent errors is crucial, as they often involve different error suppression tools. The first class encompasses miscalibrations of control signals and crosstalk, while the latter is usually related to stochastic events and unwanted interactions with the environment. We introduce the incoherent infidelity as a measure of incoherent errors and present a scalable method for measuring it. This method is applicable to generic quantum evolutions subjected to time-dependent Markovian noise. Moreover, it provides an error quantifier for the target circuit, rather than an error averaged over many circuits or quantum gates. The estimation of the incoherent infidelity is suitable to assess circuits with sufficiently low error rates, regardless of the circuit size, which is a natural requirement to run useful computations.
To run large-scale algorithms on a quantum computer, error-correcting codes must be able to perform a fundamental set of operations, called logic gates, while isolating the encoded information from noise [Harper2019, Ryan-Anderson2021, Egan2021fault, Chen2022calibrated, Sundaresan2022matching, ryananderson2022implementing, Postler2022demonstration, GoogleAI2023]. We can complete a universal set of logic gates by producing special resources called magic states [Bravyi2005universal, Maier2013magic, Chamberland2022building]. It is therefore important to produce high-fidelity magic states to conduct algorithms while introducing a minimal amount of noise to the computation. Here, we propose and implement a scheme to prepare a magic state on a superconducting qubit array using error correction. We find that our scheme produces better magic states than those we can prepare using the individual qubits of the device. This demonstrates a fundamental principle of fault-tolerant quantum computing [Shor96], namely, that we can use error correction to improve the quality of logic gates with noisy qubits. Additionally, we show we can increase the yield of magic states using adaptive circuits, where circuit elements are changed depending on the outcome of mid-circuit measurements. This demonstrates an essential capability we will need for many error-correction subroutines. Our prototype will be invaluable in the future as it can reduce the number of physical qubits needed to produce high-fidelity magic states in large-scale quantum-computing architectures.
Complete characterization of the errors that occur in using sets of logic gates is critical to developing the technology of fault-tolerant quantum computing, but current tomography methods are either slow or include unchecked assumptions. This study presents a self-consistent method for process tomography that is both fast and flexible. The technique complements the broad suite of existing characterization tools, and may potentially allow for pulse optimization to further increase gate fidelities.
Magic state distillation is a resource intensive subroutine that consumes noisy input states to produce high-fidelity resource states that are used to perform logical operations in practical quantum-computing architectures. The resource cost of magic state distillation can be reduced by improving the fidelity of the raw input states. To this end, we propose an initialization protocol that offers a quadratic improvement in the error rate of the input magic states in architectures with biased noise. This is achieved by preparing an error-detecting code which detects the dominant errors that occur during state preparation. We obtain this advantage by exploiting the native gate operations of an underlying qubit architecture that experiences biases in its noise profile. We perform simulations to analyze the performance of our protocol with the XZZX surface code. Even at modest physical parameters with a two-qubit gate error rate of $0.7\%$ and total probability of dominant errors in the gate $O(10^3)$ larger compared to that of non-dominant errors, we find that our preparation scheme delivers magic states with logical error rate $O(10^{-8})$ after a single round of the standard 15-to-1 distillation protocol; two orders of magnitude lower than using conventional state preparation. Our approach therefore promises considerable savings in overheads with near-term technology.
Dissipative cat-qubits are a promising architecture for quantum processors due to their built-in quantum error correction. By leveraging two-photon stabilization, they achieve an exponentially suppressed bit-flip error rate as the distance in phase-space between their basis states increases, incurring only a linear increase in phase-flip rate. This property substantially reduces the number of qubits required for fault-tolerant quantum computation. Here, we implement a squeezing deformation of the cat qubit basis states, further extending the bit-flip time while minimally affecting the phase-flip rate. We demonstrate a steep reduction in the bit-flip error rate with increasing mean photon number, characterized by a scaling exponent $\gamma=4.3$, rising by a factor of 74 per added photon. Specifically, we measure bit-flip times of 22 seconds for a phase-flip time of 1.3 $\mu$s in a squeezed cat qubit with an average photon number $\bar{n}=4.1$, a 160-fold improvement in bit-flip time compared to a standard cat. Moreover, we demonstrate a two-fold reduction in $Z$-gate infidelity, with an estimated phase-flip probability of $\epsilon_X = 0.085$ and a bit-flip probability of $\epsilon_Z = 2.65 \cdot 10^{-9}$ which confirms the gate bias-preserving property. This simple yet effective technique enhances cat qubit performances without requiring design modification, moving multi-cat architectures closer to fault-tolerant quantum computation.
At the early stage of fault-tolerant quantum computing, it is envisioned that the gate synthesis of a general unitary gate into universal gate sets yields error whose magnitude is comparable with the noise inherent in the gates themselves. While it is known that the use of probabilistic synthesis already suppresses such coherent errors quadratically, there is no clear understanding on its remnant error, which hinders us from designing a holistic error countermeasure that is effectively combined with error suppression and mitigation. In this work, we propose that, by exploiting the fact that synthesis error can be characterized completely and efficiently, we can craft the remnant error of probabilistic synthesis such that the error profile satisfies desirable properties. We prove for the case of single-qubit unitary synthesis that, there is a guaranteed way to perform probabilistic synthesis such that we can craft the remnant error to be described by Pauli and depolarizing errors, while the conventional twirling cannot be applied in principle. Furthermore, we show a numerical evidence for the synthesis of Pauli rotations based on Clifford+T formalism that, we can craft the remnant error so that it can be eliminated up to cubic order by combining logical measurement and feedback operations. As a result, Pauli rotation gates can be implemented with T counts of $\log_2(1/\varepsilon)$ on average up to accuracy of $\varepsilon=10^{-9}$, which can be applied to early fault-tolerant quantum computation beyond classical tractability. Our work opens a novel avenue in quantum circuit design and architecture that orchestrates error countermeasures.
Routing plays an important role in programming noisy, intermediate-scale quantum (NISQ) devices, where limited connectivity in the register is overcome by swapping quantum information between locations. However, routing a quantum state using noisy gates introduces non-trivial noise dynamics, and deciding on an optimal route to minimize accumulated error requires estimates of the expected state fidelity. Here we validate a model for state-dependent routing dynamics in a NISQ processor based on correlated binary noise. We develop a composable, state-dependent noise model for CNOT and SWAP operations that can be characterized efficiently using pair-wise experimental measurements, and we compare model predictions with tomographic state reconstructions recovered from a quantum device. These results capture the state-dependent routing dynamics that are needed to guide routing decisions for near-real time operation of NISQ devices.
We report on the fault-tolerant operation of logical qubits on a neutral atom quantum computer, with logical performance surpassing physical performance for multiple circuits including Bell states (12x error reduction), random circuits (15x), and a prototype Anderson Impurity Model ground state solver for materials science applications (up to 6x, non-fault-tolerantly). The logical qubits are implemented via the [[4, 2, 2]] code (C4). Our work constitutes the first complete realization of the benchmarking protocol proposed by Gottesman 2016 [1] demonstrating results consistent with fault-tolerance. In light of recent advances on applying concatenated C4/C6 detection codes to achieve error correction with high code rates and thresholds, our work can be regarded as a building block towards a practical scheme for fault tolerant quantum computation. Our demonstration of a materials science application with logical qubits particularly demonstrates the immediate value of these techniques on current experiments.
An arbitrarily reliable quantum computer can be efficiently constructed from noisy components using a recursive simulation procedure, provided that those components fail with probability less than the fault-tolerance threshold. Recent estimates of the threshold are near some experimentally achieved gate fidelities. However, the landscape of threshold estimates includes pseudothresholds, threshold estimates based on a subset of components and a low level of recursion. In this paper, we observe that pseudothresholds are a generic phenomenon in fault-tolerant computation. We define pseudothresholds and present classical and quantum fault-tolerant circuits exhibiting pseudothresholds that differ by a factor of 4 from fault-tolerance thresholds for typical relationships between component failure rates. We develop tools for visualizing how reliability is influenced by recursive simulation in order to determine the asymptotic threshold. Finally, we conjecture that refinements of these methods may establish upper bounds on the fault-tolerance threshold for particular codes and noise models.
Fast, reliable logical operations are essential for the realization of useful quantum computers, as they are required to implement practical quantum algorithms at large scale. By redundantly encoding logical qubits into many physical qubits and using syndrome measurements to detect and subsequently correct errors, one can achieve very low logical error rates. However, for most practical quantum error correcting (QEC) codes such as the surface code, it is generally believed that due to syndrome extraction errors, multiple extraction rounds -- on the order of the code distance d -- are required for fault-tolerant computation. Here, we show that contrary to this common belief, fault-tolerant logical operations can be performed with constant time overhead for a broad class of QEC codes, including the surface code with magic state inputs and feed-forward operations, to achieve "algorithmic fault tolerance". Through the combination of transversal operations and novel strategies for correlated decoding, despite only having access to partial syndrome information, we prove that the deviation from the ideal measurement result distribution can be made exponentially small in the code distance. We supplement this proof with circuit-level simulations in a range of relevant settings, demonstrating the fault tolerance and competitive performance of our approach. Our work sheds new light on the theory of fault tolerance, potentially reducing the space-time cost of practical fault-tolerant quantum computation by orders of magnitude.
We propose a family of error-detecting stabilizer codes with an encoding rate of $1/3$ that permit a transversal implementation of the gate $T=\exp(-i\pi Z/8)$ on all logical qubits. These codes are used to construct protocols for distilling high-quality ``magic'' states $T\left|+\right\rangle$ by Clifford group gates and Pauli measurements. The distillation overhead scales as $O(\log^{\gamma}(1/\epsilon))$, where $\epsilon$ is the output accuracy and $\gamma=\log_{2}(3)\approx 1.6$. To construct the desired family of codes, we introduce the notion of a triorthogonal matrix, a binary matrix in which any pair and any triple of rows have even overlap. Any triorthogonal matrix gives rise to a stabilizer code with a transversal $T$ gate on all logical qubits, possibly augmented by Clifford gates. A powerful numerical method for generating triorthogonal matrices is proposed. Our techniques lead to a twofold overhead reduction for distilling magic states with accuracy $\epsilon\sim 10^{-12}$ compared with previously known protocols.
Quantum process tomography is a necessary tool for verifying quantum gates and diagnosing faults in architectures and gate design. We show that the standard approach of process tomography is grossly inaccurate in the case where the states and measurement operators used to interrogate the system are generated by gates that have some systematic error, a situation all but unavoidable in any practical setting. These errors in tomography cannot be fully corrected through oversampling or by performing a larger set of experiments. We present an alternative method for tomography to reconstruct an entire library of gates in a self-consistent manner. The essential ingredient is to define a likelihood function that assumes nothing about the gates used for preparation and measurement. In order to make the resulting optimization tractable we linearize about the target, a reasonable approximation when benchmarking a quantum computer as opposed to probing a black-box function.
In the quest to completely describe entanglement in the general case of a finite number of parties sharing a physical system of finite-dimensional Hilbert space, an entanglement magnitude is introduced for its pure and mixed states: robustness. It corresponds to the minimal amount of mixing with locally prepared states which washes out all entanglement. It quantifies in a sense the endurance of entanglement against noise and jamming. Its properties are studied comprehensively. Analytical expressions for the robustness are given for pure states of two-party systems, and analytical bounds for mixed states of two-party systems. Specific results are obtained mainly for the qubit-qubit system (qubit denotes quantum bit). As by-products local pseudomixtures are generalized, a lower bound for the relative volume of separable states is deduced, and arguments for considering convexity a necessary condition of any entanglement measure are put forward.
With bichromatic fields it is possible to deterministically produce entangled states of trapped ions. In this paper we present a unified analysis of this process for both weak and strong fields, for slow and fast gates. Simple expressions for the fidelity of creating maximally entangled states of two or an arbitrary number of ions under non-ideal conditions are derived and discussed.
A new type of uncertainty relation is presented, concerning the information-bearing properties of a discrete quantum system. A natural link is then revealed between basic quantum theory and the linear error correcting codes of classical information theory. A subset of the known codes is described, having properties which are important for error correction in quantum communication. It is shown that a pair of states which are, in a certain sense, "macroscopically different," can form a superposition in which the interference phase between the two parts is measurable. This provides a highly stabilized "Schrödinger cat" state.
We develop a procedure for distilling magic states used in universal quantum computing that requires substantially fewer initial resources than prior schemes. Our distillation circuit is based on a family of concatenated quantum codes that possess a transversal Hadamard operation, enabling each of these codes to distill the eigenstate of the Hadamard operator. A crucial result of this design is that low-fidelity magic states can be consumed to purify other high-fidelity magic states to even higher fidelity, which we call multilevel distillation. When distilling in the asymptotic regime of infidelity $\epsilon\rightarrow 0$ for each input magic state, the number of input magic states consumed on average to yield an output state with infidelity $O(\epsilon^{2^{r}})$ approaches $2^{r}+1$, which comes close to saturating the conjectured bound in another investigation [Bravyi and Haah, Phys. Rev. A 86, 052329 (2012)]. We show numerically that there exist multilevel protocols such that the average number of magic states consumed to distill from error rate $\epsilon_{\mathrm{in}}=0.01$ to $\epsilon_{\mathrm{out}}$ in the range $10^{-5}$--$10^{-40}$ is about $14\log_{10}(1/\epsilon_{\mathrm{out}})-40$; the efficiency of multilevel distillation dominates all other reported protocols when distilling Hadamard magic states from initial infidelity 0.01 to any final infidelity below $10^{-7}$. These methods are an important advance for magic-state distillation circuits in high-performance quantum computing and provide insight into the limitations of nearly resource-optimal quantum error correction.
Transversal gates play an important role in the theory of fault-tolerant quantum computation due to their simplicity and robustness to noise. By definition, transversal operators do not couple physical subsystems within the same code block. Consequently, such operators do not spread errors within code blocks and are, therefore, fault tolerant. Nonetheless, other methods of ensuring fault tolerance are required, as it is invariably the case that some encoded gates cannot be implemented transversally. This observation has led to a long-standing conjecture that transversal encoded gate sets cannot be universal. Here we show that the ability of a quantum code to detect an arbitrary error on any single physical subsystem is incompatible with the existence of a universal, transversal encoded gate set for the code.
We consider a model of quantum computation in which the set of elementary operations is limited to Clifford unitaries, the creation of the state $|0\rangle$, and measurement in the computational basis. In addition, we allow the creation of a one-qubit ancilla in a mixed state $\rho$, which should be regarded as a parameter of the model. Our goal is to determine for which $\rho$ universal quantum computation (UQC) can be efficiently simulated. To answer this question, we construct purification protocols that consume several copies of $\rho$ and produce a single output qubit with higher polarization. The protocols allow one to increase the polarization only along certain "magic" directions. If the polarization of $\rho$ along a magic direction exceeds a threshold value (about 65%), the purification asymptotically yields a pure state, which we call a magic state. We show that the Clifford group operations combined with magic states preparation are sufficient for UQC. The connection of our results with the Gottesman-Knill theorem is discussed.
The seven-qubit quantum error-correcting code originally proposed by Steane is one of the best known quantum codes. The Steane code has a desirable property that most basic operations can be performed easily in a fault-tolerant manner. A major obstacle to fault-tolerant quantum computation with the Steane code is fault-tolerant preparation of encoded states, which requires large computational resources. Here we propose efficient state preparation methods for zero and magic states encoded with the Steane code, where the zero state is one of the computational basis states and the magic state allows us to achieve universality in fault-tolerant quantum computation. The methods minimize resource overheads for the fault-tolerant state preparation and therefore reduce necessary resources for quantum computation with the Steane code. Thus, the present results will open a new possibility for efficient fault-tolerant quantum computation.
Quantum computers are poised to radically outperform their classical counterparts by manipulating coherent quantum systems. A realistic quantum computer will experience errors due to the environment and imperfect control. When these errors are even partially coherent, they present a major obstacle to achieving robust computation. Here, we propose a method for introducing independent random single-qubit gates into the logical circuit in such a way that the effective logical circuit remains unchanged. We prove that this randomization tailors the noise into stochastic Pauli errors, leading to dramatic reductions in worst-case and cumulative error rates, while introducing little or no experimental overhead. Moreover we prove that our technique is robust to variation in the errors over the gate sets and numerically illustrate the dramatic reductions in worst-case error that are achievable. Given such tailored noise, gates with significantly lower fidelity are sufficient to achieve fault-tolerant quantum computation, and, importantly, the worst case error rate of the tailored noise can be directly and efficiently measured through randomized benchmarking experiments. Remarkably, our method enables the realization of fault-tolerant quantum computation under the error rates observed in recent experiments.
The standard approach to fault-tolerant quantum computation is to store information in a quantum error correction code, such as the surface code, and process information using a strategy that can be summarized as distill then synthesize. In the distill step, one performs several rounds of distillation to create high-fidelity logical qubits in a magic state. Each such magic state provides one good $T$ gate. In the synthesize step, one seeks the optimal decomposition of an algorithm into a sequence of many $T$ gates interleaved with Clifford gates. This gate-synthesis problem is well understood for multiqubit gates that do not use any Hadamards. We present an in-depth analysis of a unified framework that realizes one round of distillation and multiqubit gate synthesis in a single step. We call these synthillation protocols, and show they lead to a large reduction in resource overheads. This is because synthillation can implement a general class of circuits using the same number of $T$ states as gate synthesis, yet with the benefit of quadratic error suppression. This general class includes all circuits primarily dominated by control-control-$Z$ gates, such as adders and modular exponentiation routines used in Shor's algorithm. Therefore, synthillation removes the need for a costly round of magic state distillation. We also present several additional results on the multiqubit gate-synthesis problem. We provide an efficient algorithm for synthesizing unitaries with the same worst-case resource scaling as optimal solutions. For the special case of synthesizing controlled unitaries, our techniques are not just efficient but exactly optimal. We observe that the gate-synthesis cost, measured by $T$ count, is often strictly subadditive. Numerous explicit applications of our techniques are also presented.
Quantum information processors promise fast algorithms for problems inaccessible to classical computers. But since qubits are noisy and error-prone, they will depend on fault-tolerant quantum error correction (FTQEC) to compute reliably. Quantum error correction can protect against general noise if -- and only if -- the error in each physical qubit operation is smaller than a certain threshold. The threshold for general errors is quantified by their diamond norm. Until now, qubits have been assessed primarily by randomized benchmarking, which reports a different "error rate" that is not sensitive to all errors, and cannot be compared directly to diamond norm thresholds. Here we use gate set tomography (GST) to completely characterize operations on a trapped-Yb$^+$-ion qubit and demonstrate with very high ($>95\%$) confidence that they satisfy a rigorous threshold for FTQEC (diamond norm $\leq6.7\times10^{-4}$).
Noise rates in quantum computing experiments have dropped dramatically, but reliable qubits remain precious. Fault-tolerance schemes with minimal qubit overhead are therefore essential. We introduce fault-tolerant error-correction procedures that use only two extra qubits. The procedures are based on adding "flags" to catch the faults that can lead to correlated errors on the data. They work for various distance-three codes. In particular, our scheme allows one to test the ⟦5,1,3⟧ code, the smallest error-correcting code, using only seven qubits total. Our techniques also apply to the ⟦7,1,3⟧ and ⟦15,7,3⟧ Hamming codes, thus allowing us to protect seven encoded qubits on a device with only 17 physical qubits.
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli $Z$ errors occur more frequently than $X$ or $Y$ errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of $Y$ instead of $Z$ around the faces, as this doubles the number of useful syndrome bits associated with the dominant $Z$ errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
As quantum circuits increase in size, it is critical to establish scalable multiqubit fidelity metrics. Here we investigate, for the first time, three-qubit randomized benchmarking (RB) on a quantum device consisting of three fixed-frequency transmon qubits with pairwise microwave-activated interactions (cross-resonance). We measure a three-qubit error per Clifford of 0.106 for all-to-all gate connectivity and 0.207 for linear gate connectivity. Furthermore, by introducing mixed dimensionality simultaneous RB---simultaneous one- and two-qubit RB---we show that the three-qubit errors can be predicted from the one- and two-qubit errors. However, by introducing certain coherent errors to the gates, we can increase the three-qubit error to 0.302, an increase that is not predicted by a proportionate increase in the one- and two-qubit errors from simultaneous RB. This demonstrates the importance of multiqubit metrics, such as three-qubit RB, on evaluating overall device performance.
We compare the performance of quantum error correcting codes when memory errors are unitary with the more familiar case of dephasing noise. For a wide range of codes, we analytically compute the effective logical channel that results when the error correction steps are performed noiselessly. Our examples include the entire family of repetition codes, the five-qubit, Steane, Shor, and surface codes. When errors are measured in terms of the diamond norm, we find that the error correction is typically much more effective for unitary errors than for dephasing. We observe this behavior for a wide range of codes after a single level of encoding, and in the thresholds of concatenated codes using hard decoders. We show that this holds with great generality by proving a bound on the performance of any stabilizer code when the noise at the physical level is unitary. By comparing the diamond norm error $D_{\diamond}'$ of the logical qubit with the same quantity at the physical level $D_{\diamond}$, we show that $D_{\diamond}'\le c D_{\diamond}^{d}$ where $d$ is the distance of the code and $c$ is a constant that depends on the code but not on the error. This bound compares very favorably to the performance of error correction for dephasing noise and other Pauli channels, where an error correcting code of odd distance $d$ will exhibit a scaling $D_{\diamond}'\sim D_{\diamond}^{(d+1)/2}$.
Typical studies of quantum error correction assume probabilistic Pauli noise, largely because it is relatively easy to analyze and simulate. Consequently, the effective logical noise due to physically realistic coherent errors is relatively unknown. Here, we prove that encoding a system in a stabilizer code and measuring error syndromes decoheres errors, that is, causes coherent errors to converge toward probabilistic Pauli errors, even when no recovery operations are applied. Two practical consequences are that the error rate in a logical circuit is well quantified by the average gate fidelity at the logical level and that essentially optimal recovery operators can be determined by independently optimizing the logical fidelity of the effective noise per syndrome.
The quest of demonstrating beneficial quantum error correction in near-term noisy quantum processors can benefit enormously from a low-resource optimization of fault-tolerant schemes, which are specially designed for a particular platform considering both state-of-the-art technological capabilities and main sources of noise. In this work, we show that flag-qubit-based fault-tolerant techniques for active error detection and correction, as well as for encoding of logical qubits, can be leveraged in current designs of trapped-ion quantum processors to achieve this break-even point of beneficial quantum error correction. Our improved description of the relevant sources of noise, together with detailed schedules for the implementation of these flag-based protocols, provide one of the most complete microscopic characterizations of a fault-tolerant quantum processor to date. By extensive numerical simulations, we provide a comparative study of flag- and cat-based approaches to quantum error correction, and show that the superior performance of the former can become a landmark in the success of near-term quantum computing with noisy trapped-ion devices.
Randomized benchmarking and variants thereof, which we collectively call RB+, are widely used to characterize the performance of quantum computers because they are simple, scalable, and robust to state-preparation and measurement errors. However, experimental implementations of RB+ allocate resources suboptimally and make ad-hoc assumptions that undermine the reliability of the data analysis. In this paper, we propose a simple modification of RB+ which rigorously eliminates a nuisance parameter and simplifies the experimental design. We then show that, with this modification and specific experimental choices, RB+ efficiently provides estimates of error rates with multiplicative precision. Finally, we provide a simplified rigorous method for obtaining credible regions for parameters of interest and a heuristic approximation for these intervals that performs well in currently relevant regimes.
The distillation of magic states is an often-cited technique for enabling universal quantum computing once the error probability for a special subset of gates has been made negligible by other means. We present a routine for magic-state distillation that reduces the required overhead for a range of parameters of practical interest. Each iteration of the routine uses a four-qubit error-detecting code to distill the $+1$ eigenstate of the Hadamard gate at a cost of ten input states per two improved output states. Use of this routine in combination with the $15$-to-$1$ distillation routine described by Bravyi and Kitaev allows for further improvements in overhead.
Noise in quantum computing is countered with quantum error correction. Achieving optimal performance will require tailoring codes and decoding algorithms to account for features of realistic noise, such as the common situation where the noise is biased towards dephasing. Here we introduce an efficient high-threshold decoder for a noise-tailored surface code based on minimum-weight perfect matching. The decoder exploits the symmetries of its syndrome under the action of biased noise and generalizes to the fault-tolerant regime where measurements are unreliable. Using this decoder, we obtain fault-tolerant thresholds in excess of 6% for a phenomenological noise model in the limit where dephasing dominates. These gains persist even for modest noise biases: we find a threshold of ∼5% in an experimentally relevant regime where dephasing errors occur at a rate 100 times greater than bit-flip errors.
Magic states are eigenstates of non-Pauli operators. One way of suppressing errors present in magic states is to perform parity measurements in their non-Pauli eigenbasis and postselect on even parity. Here we develop new protocols based on non-Pauli parity checking, where the measurements are implemented with the aid of pre-distilled multiqubit resource states. This leads to a two step process: pre-distillation of multiqubit resource states, followed by implementation of the parity check. These protocols can prepare single-qubit magic states that enable direct injection of single-qubit axial rotations without subsequent gate-synthesis and its associated overhead. We show our protocols are more efficient than all previous comparable protocols with quadratic error reduction, including the protocols of Bravyi and Haah.
The development of a framework for quantifying "non-stabiliserness" of quantum operations is motivated by the magic state model of fault-tolerant quantum computation, and by the need to estimate classical simulation cost for noisy intermediate-scale quantum (NISQ) devices. The robustness of magic was recently proposed as a well-behaved magic monotone for multi-qubit states and quantifies the simulation overhead of circuits composed of Clifford+T gates, or circuits using other gates from the Clifford hierarchy. Here we present a general theory of the "non-stabiliserness" of quantum operations rather than states, which are useful for classical simulation of more general circuits. We introduce two magic monotones, called channel robustness and magic capacity, which are well-defined for general n-qubit channels and treat all stabiliser-preserving CPTP maps as free operations. We present two complementary Monte Carlo-type classical simulation algorithms with sample complexity given by these quantities and provide examples of channels where the complexity of our algorithms is exponentially better than previous known simulators. We present additional techniques that ease the difficulty of calculating our monotones for special classes of channels.
One of the most cited books in physics of all time, Quantum Computation and Quantum Information remains the best textbook in this exciting field of science. This 10th anniversary edition includes an introduction from the authors setting the work in context. This comprehensive textbook describes such remarkable effects as fast quantum algorithms, quantum teleportation, quantum cryptography and quantum error-correction. Quantum mechanics and computer science are introduced before moving on to describe what a quantum computer is, how it can be used to solve problems faster than 'classical' computers and its real-world implementation. It concludes with an in-depth treatment of quantum information. Containing a wealth of figures and exercises, this well-known textbook is ideal for courses on the subject, and will interest beginning graduate students and researchers in physics, computer science, mathematics, and electrical engineering.
Treating stabilizer operations as free, we establish lower bounds on the number of resource states, also known as magic states, needed to perform various quantum computing tasks. Our bounds apply to adaptive computations using measurements with an arbitrary number of stabilizer ancillas. We consider (1) resource state conversion, (2) single-qubit unitary synthesis, and (3) computational subroutines including the quantum adder and the multiply-controlled $Z$ gate. To prove our resource conversion bounds we introduce two new monotones, the stabilizer nullity and the dyadic monotone, and make use of the already-known stabilizer extent. We consider conversions that borrow resource states, known as catalyst states, and return them at the end of the algorithm. We show that catalysis is necessary for many conversions and introduce new catalytic conversions, some of which are optimal. By finding a canonical form for post-selected stabilizer computations, we show that approximating a single-qubit unitary to within diamond-norm precision $\varepsilon$ requires at least $\frac{1}{7}\log_{2}(1/\varepsilon)-\frac{4}{3}$ $T$-states on average. This is the first lower bound that applies to synthesis protocols using fall-back, mixing techniques, and where the number of ancillas used can depend on $\varepsilon$. Up to multiplicative factors, we optimally lower bound the number of $T$ or CCZ states needed to implement the ubiquitous modular adder and multiply-controlled-$Z$ operations. When the probability of Pauli measurement outcomes is 1/2, some of our bounds become tight to within a small additive constant.
Although qubit coherence times and gate fidelities are continuously improving, logical encoding is essential to achieve fault tolerance in quantum computing. In most encoding schemes, correcting or tracking errors throughout the computation is necessary to implement a universal gate set without adding significant delays in the processor. Here, we realize a classical control architecture for the fast extraction of errors based on multiple cycles of stabilizer measurements and subsequent correction. We demonstrate its application on a minimal bit-flip code with five transmon qubits, showing that real-time decoding and correction based on multiple stabilizers is superior in both speed and fidelity to repeated correction based on individual cycles. Furthermore, the encoded qubit can be rapidly measured, thus enabling conditional operations that rely on feed forward, such as logical gates. This co-processing of classical and quantum information will be crucial in running a logical circuit at its full speed to outpace error accumulation.
Magic state distillation is one of the leading candidates for implementing universal fault-tolerant logical gates. However, the distillation circuits themselves are not fault-tolerant, so there is additional cost to first implement encoded Clifford gates with negligible error. In this paper we present a scheme to fault-tolerantly and directly prepare magic states using flag qubits. One of these schemes uses a single extra ancilla, even with noisy Clifford gates. We compare the physical qubit and gate cost of this scheme to the magic state distillation protocol of Meier, Eastin, and Knill, which is efficient and uses a small stabilizer circuit. In some regimes, we show that the overhead can be improved by several orders of magnitude.
Quantum computers promise to solve certain problems more efficiently than their digital counterparts. A major challenge towards practically useful quantum computing is characterizing and reducing the various errors that accumulate during an algorithm running on large-scale processors. Current characterization techniques are unable to adequately account for the exponentially large set of potential errors, including cross-talk and other correlated noise sources. Here we develop cycle benchmarking, a rigorous and practically scalable protocol for characterizing local and global errors across multi-qubit quantum processors. We experimentally demonstrate its practicality by quantifying such errors in non-entangling and entangling operations on an ion-trap quantum computer with up to 10 qubits, with total process fidelities for multi-qubit entangling gates ranging from 99.6(1)% for 2 qubits to 86(2)% for 10 qubits. Furthermore, cycle benchmarking data validates that the error rate per single-qubit gate and per two-qubit coupling does not increase with increasing system size.
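Like other randomized-benchmarking-style protocols, cycle benchmarking ultimately extracts error rates from exponential decays of measured expectation values with sequence length. The sketch below shows only that generic decay-fitting step on synthetic, noise-free data (the function and data are ours, not the authors' protocol): a log-linear least-squares fit of y = A·p^m.

```python
import math

def fit_exponential_decay(ms, ys):
    """Fit y = A * p**m by least squares on log(y) = log(A) + m*log(p).
    Returns (A, p). Assumes all ys are positive."""
    n = len(ms)
    mean_m = sum(ms) / n
    mean_ly = sum(math.log(y) for y in ys) / n
    sxx = sum((m - mean_m) ** 2 for m in ms)
    sxy = sum((m - mean_m) * (math.log(y) - mean_ly) for m, y in zip(ms, ys))
    slope = sxy / sxx
    intercept = mean_ly - slope * mean_m
    return math.exp(intercept), math.exp(slope)

# Synthetic, noise-free decay with A = 0.9 and per-cycle fidelity p = 0.95;
# the fit recovers both parameters exactly (up to float error).
ms = [1, 2, 4, 8, 16]
ys = [0.9 * 0.95 ** m for m in ms]
A, p = fit_exponential_decay(ms, ys)
```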
Error correction is essential to quantum information processing, but the demanding performance requirements for useful error correction make it a difficult proposition. This study shows that incorporating prior knowledge of physical error models can dramatically improve the efficacy of quantum error correction, in a small code. This progress significantly expands the range in which quantum error correction can be usefully applied, facilitating interesting experiments that use accurate device models to protect quantum memories.
We give a new algorithm for computing the robustness of magic, a measure of the utility of quantum states as a computational resource. Our work is motivated by the magic state model of fault-tolerant quantum computation. In this model, all unitaries belong to the Clifford group. Non-Clifford operations are effected by injecting non-stabiliser states, which are referred to as magic states in this context. The robustness of magic measures the complexity of simulating such a circuit using a classical Monte Carlo algorithm. It is closely related to the degree of negativity that slows down Monte Carlo simulations through the infamous sign problem.
Surprisingly, the robustness of magic is sub-multiplicative. This implies that the classical simulation overhead scales subexponentially with the number of injected magic states, better than a naive analysis would suggest. However, determining the robustness of n copies of a magic state is difficult, as its definition involves a convex optimisation problem in a 4^n-dimensional space. In this paper, we make use of inherent symmetries to reduce the problem to n dimensions. The total run-time of our algorithm, while still exponential in n, is super-polynomially faster than previously published methods. We provide a computer implementation and give the robustness of up to 10 copies of the most commonly used magic states. Guided by the exact results, we find a finite hierarchy of approximate solutions where each level can be evaluated in polynomial time and yields rigorous upper bounds to the robustness.
Technically, we use symmetries of the stabiliser polytope to connect the robustness of magic to the geometry of a low-dimensional convex polytope generated by certain signed quantum weight enumerators. As a by-product, we characterise the automorphism group of the stabiliser polytope, and, more generally, of projections onto complex projective 3-designs.
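For a single qubit, the stabiliser polytope is the octahedron spanned by the six Pauli eigenstates, and the L1-minimisation defining the robustness of magic collapses to a closed form. A minimal sketch of that special case (this is an illustration of the quantity being optimised, not the authors' n-copy algorithm):

```python
def single_qubit_robustness(rx, ry, rz):
    """Robustness of magic of a single-qubit state with Bloch vector
    (rx, ry, rz): the minimal L1 norm of a quasiprobability
    decomposition over the six pure stabilizer states.  Inside the
    stabilizer octahedron (|rx| + |ry| + |rz| <= 1) the state is a
    mixture of stabilizer states and the robustness is 1; outside,
    it equals the L1 norm of the Bloch vector."""
    return max(1.0, abs(rx) + abs(ry) + abs(rz))

# The H-type magic state has Bloch vector (1, 0, 1) / sqrt(2),
# giving the well-known robustness value sqrt(2).
s = 2 ** -0.5
r_h = single_qubit_robustness(s, 0.0, s)
```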
Noise asymmetry in the quantum operations with cat-qubits is exploited for efficient fault-tolerant quantum error correction.
The surface code is one of the most successful approaches to topological quantum error-correction. It boasts the smallest known syndrome extraction circuits and correspondingly largest thresholds. Defect-based logical encodings of a new variety called twists have made it possible to implement the full Clifford group without state distillation. Here we investigate a patch-based encoding involving a modified twist. In our modified formulation, the resulting codes, called triangle codes for the shape of their planar layout, have only weight-four checks and relatively simple syndrome extraction circuits that maintain a high, near surface-code-level threshold. They also use 25% fewer physical qubits per logical qubit than the surface code. Moreover, benefiting from the twist, we can implement all Clifford gates by lattice surgery without the need for state distillation. By a surgical transformation to the surface code, we also develop a scheme of doing all Clifford gates on surface code patches in an atypical planar layout, though with less qubit efficiency than the triangle code. Finally, we remark that logical qubits encoded in triangle codes are naturally amenable to logical tomography, and the smallest triangle code can demonstrate high-pseudothreshold fault-tolerance to depolarizing noise using just 13 physical qubits.
The sensitivity afforded by quantum sensors is limited by decoherence. Quantum error correction (QEC) can enhance sensitivity by suppressing decoherence, but it has a side effect: it biases a sensor's output in realistic settings. If unaccounted for, this bias can systematically reduce a sensor's performance in experiment, and also give misleading values for the minimum detectable signal in theory. We analyze this effect in the experimentally motivated setting of continuous-time QEC, showing both how one can remedy it, and how incorrect results can arise when one does not.
Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code, the XZZX code, offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
Neutral-atom arrays have recently emerged as a promising platform for quantum information processing. One important remaining roadblock for the large-scale application of these systems is the ability to perform error-corrected quantum operations. To entangle the qubits in these systems, atoms are typically excited to Rydberg states, which could decay or give rise to various correlated errors that cannot be addressed directly through traditional methods of fault-tolerant quantum computation. In this work, we provide the first complete characterization of these sources of error in a neutral-atom quantum computer and propose hardware-efficient, fault-tolerant quantum computation schemes that mitigate them. Notably, we develop a novel and distinctly efficient method to address the most important errors associated with the decay of atomic qubits to states outside of the computational subspace. These advances allow us to significantly reduce the resource cost for fault-tolerant quantum computation compared to existing, general-purpose schemes. Our protocols can be implemented in the near term using state-of-the-art neutral-atom platforms with qubits encoded in both alkali and alkaline-earth atoms.
Correcting errors in real time is essential for reliable large-scale quantum computations. Realizing this high-level function requires a system capable of several low-level primitives, including single-qubit and two-qubit operations, midcircuit measurements of subsets of qubits, real-time processing of measurement outcomes, and the ability to condition subsequent gate operations on those measurements. In this work, we use a 10-qubit quantum charge-coupled device trapped-ion quantum computer to encode a single logical qubit using the [[7,1,3]] color code, first proposed by Steane [Phys. Rev. Lett. 77, 793 (1996)]. The logical qubit is initialized into the eigenstates of three mutually unbiased bases using an encoding circuit, and we measure an average logical state preparation and measurement (SPAM) error of 1.7(2)×10⁻³, compared to the average physical SPAM error 2.4(4)×10⁻³ of our qubits. We then perform multiple syndrome measurements on the encoded qubit, using a real-time decoder to determine any necessary corrections that are done either as software updates to the Pauli frame or as physically applied gates. Moreover, these procedures are done repeatedly while maintaining coherence, demonstrating a dynamically protected logical qubit memory. Additionally, we demonstrate non-Clifford qubit operations by encoding a T̄|+⟩_L magic state with an error rate below the threshold required for magic state distillation.
Finally, we present system-level simulations that allow us to identify key hardware upgrades that may enable the system to reach the pseudothreshold.
Fault-tolerant quantum computing will require accurate estimates of the resource overhead under different error correction strategies, but standard metrics such as gate fidelity and diamond distance have been shown to be poor predictors of logical performance. We present a scalable experimental approach based on Pauli error reconstruction to predict the performance of concatenated codes. Numerical evidence demonstrates that our method significantly outperforms predictions based on standard error metrics for various error models, even with limited data. We illustrate how this method assists in the selection of error correction schemes.