Quantum computers will require encoding of quantum information to protect them from noise. Fault-tolerant quantum computing architectures illustrate how this might be done but have not yet shown a conclusive practical advantage. Here we demonstrate that a small but useful error detecting code improves the fidelity of the fault-tolerant gates implemented in the code space as compared to the fidelity of physically equivalent gates implemented on physical qubits. By running a randomized benchmarking protocol in the logical code space of the [[4,2,2]] code, we observe an order of magnitude improvement in the infidelity of the gates, with the two-qubit infidelity dropping from 5.8(2)% to 0.60(3)%. Our results are consistent with fault-tolerance theory and conclusively demonstrate the benefit of carrying out computation in a code space that can detect errors. Although the fault-tolerant gates offer an impressive improvement in fidelity, the computation as a whole is not below the fault-tolerance threshold because of noise associated with state preparation and measurement on this device.
Noise mechanisms in quantum systems can be broadly characterized as either coherent (i.e., unitary) or incoherent. For a given fixed average error rate, coherent noise mechanisms will generally lead to a larger worst-case error than incoherent noise. We show that the coherence of a noise source can be quantified by the unitarity, which we relate to the change in purity averaged over input pure states. We then show that the unitarity can be estimated using a protocol based on randomized benchmarking that is efficient and robust to state-preparation and measurement errors. We also show that the unitarity provides a lower bound on the optimal achievable gate infidelity under a given noisy process.
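As background for how the unitarity quantifies the coherence of a channel (this is a property of the channel itself, not the randomized-benchmarking estimation protocol described in the abstract), the sketch below computes it for a single-qubit channel directly from its Pauli transfer matrix, assuming a unital, trace-preserving map so that $u = \mathrm{Tr}[E_u^{T} E_u]/(d^2-1)$ for the traceless block $E_u$. The example channel is made up for illustration.

```python
# Minimal sketch: unitarity of a single-qubit channel from its Pauli transfer
# matrix, assuming a unital, trace-preserving channel so that
# u = Tr[E_u^T E_u] / (d^2 - 1), with E_u the 3x3 block on the X, Y, Z components.
import numpy as np

paulis = [np.eye(2),
          np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

def ptm(kraus_ops):
    """Pauli transfer matrix R_ij = Tr[P_i E(P_j)] / 2 of a channel given by Kraus operators."""
    R = np.zeros((4, 4))
    for i, Pi in enumerate(paulis):
        for j, Pj in enumerate(paulis):
            out = sum(K @ Pj @ K.conj().T for K in kraus_ops)
            R[i, j] = np.real(np.trace(Pi @ out)) / 2
    return R

def unitarity(kraus_ops, d=2):
    """Unitarity of a unital channel: mean squared singular value of the unital block."""
    E_u = ptm(kraus_ops)[1:, 1:]
    return np.trace(E_u.T @ E_u) / (d**2 - 1)

# Example: a small coherent Z rotation mixed with a little dephasing (made-up numbers).
theta, p = 0.05, 0.01
Rz = np.array([[np.exp(-1j * theta / 2), 0], [0, np.exp(1j * theta / 2)]])
kraus = [np.sqrt(1 - p) * Rz, np.sqrt(p) * paulis[3] @ Rz]
print(unitarity(kraus))  # 1.0 for purely coherent noise; dephasing pulls it below 1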
We show that nonexponential fidelity decays in randomized benchmarking experiments on quantum-dot qubits are consistent with numerical simulations that incorporate low-frequency noise and correspond to a control fidelity that varies slowly with time. By expanding standard randomized benchmarking analysis to this experimental regime, we find that such nonexponential decays are better modeled by multiple exponential decay rates, leading to an instantaneous control fidelity for isotopically purified silicon metal-oxide-semiconductor quantum-dot qubits which is 98.9% when the low-frequency noise causes large detuning but can be as high as 99.9% when the qubit is driven on resonance and system calibrations are favorable. These advances in qubit characterization and validation methods underpin the considerable prospects for silicon as a qubit platform for fault-tolerant quantum computation.
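The multi-exponential modelling can be illustrated with a toy fit (this is not the authors' analysis; every number below is synthetic): survival probabilities generated from a mixture of two decay rates are fit with both a single-exponential and a two-exponential model, and the residuals show the latter capturing the nonexponential shape.

```python
# Toy illustration: fitting randomized-benchmarking survival data with a single
# exponential A*p**m + B versus a two-rate model A1*p1**m + A2*p2**m + B,
# mimicking a control fidelity that drifts slowly between runs. Synthetic data only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
m = np.arange(1, 200, 5)                                   # sequence lengths
truth = 0.125 * 0.995**m + 0.125 * 0.97**m + 0.5           # mixture of two decay rates
data = truth + rng.normal(0, 0.005, size=m.shape)          # add shot noise

def single(m, A, p, B):
    return A * p**m + B

def double(m, A1, p1, A2, p2, B):
    return A1 * p1**m + A2 * p2**m + B

popt1, _ = curve_fit(single, m, data, p0=[0.5, 0.98, 0.5])
popt2, _ = curve_fit(double, m, data, p0=[0.25, 0.99, 0.25, 0.95, 0.5], maxfev=10000)

for name, model, popt in [("1-exp", single, popt1), ("2-exp", double, popt2)]:
    resid = data - model(m, *popt)
    print(name, "sum of squared residuals:", np.sum(resid**2))
```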
The fidelity of laser-driven quantum logic operations on trapped ion qubits tends to be lower than that of microwave-driven logic operations due to the difficulty of stabilizing the driving fields at the ion location. Through stabilization of the driving optical fields and use of composite pulse sequences, we demonstrate high-fidelity single-qubit gates for the hyperfine qubit of a $^{171}\text{Yb}^{+}$ ion trapped in a microfabricated surface-electrode ion trap. Gate error is characterized using a randomized benchmarking protocol, and an average error per randomized Clifford group gate of $3.6(3)\times 10^{-4}$ is measured. We also report experimental realization of palindromic pulse sequences that scale efficiently in sequence length.
Randomized benchmarking and variants thereof, which we collectively call RB+, are widely used to characterize the performance of quantum computers because they are simple, scalable, and robust to state-preparation and measurement errors. However, experimental implementations of RB+ allocate resources suboptimally and make ad-hoc assumptions that undermine the reliability of the data analysis. In this paper, we propose a simple modification of RB+ which rigorously eliminates a nuisance parameter and simplifies the experimental design. We then show that, with this modification and specific experimental choices, RB+ efficiently provides estimates of error rates with multiplicative precision. Finally, we provide a simplified rigorous method for obtaining credible regions for parameters of interest and a heuristic approximation for these intervals that performs well in currently relevant regimes.
Randomized benchmarking (RB) is an important protocol for robustly characterizing the error rates of quantum gates. The technique is typically applied to the Clifford gates since they form a group that satisfies a convenient technical condition of forming a unitary 2-design, in addition to having a tight connection to fault-tolerant quantum computing and an efficient classical simulation. In order to achieve universal quantum computing one must add at least one additional gate such as the T gate (also known as the $\pi$/8 gate). Here we propose and analyze a simple variation of the standard interleaved RB protocol that can accurately estimate the average fidelity of the T gate while retaining the many advantages of a unitary 2-design and the fidelity guarantees that such a design delivers, as well as the efficient classical simulation property of the Clifford group. Our work complements prior methods that have succeeded in estimating T gate fidelities, but only by relaxing the 2-design constraint and using a more complicated data analysis.
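For context, the standard interleaved randomized benchmarking estimator that this protocol builds on fits the reference and interleaved decays to exponentials and infers the interleaved gate's error from the ratio of the decay parameters; the abstract's variation adapts this machinery to the non-Clifford T gate.

```latex
% Standard interleaved RB estimator (background only, not the modified T-gate protocol):
F_{\mathrm{ref}}(m)  = A\,p_{\mathrm{ref}}^{\,m} + B, \qquad
F_{\mathrm{int}}(m)  = A'\,p_{\mathrm{int}}^{\,m} + B', \qquad
r_{\mathrm{gate}} \approx \frac{d-1}{d}\left(1 - \frac{p_{\mathrm{int}}}{p_{\mathrm{ref}}}\right).
```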
As quantum computers approach the fault-tolerance threshold, diagnosing and characterizing the noise on large-scale quantum devices is increasingly important. One of the most important classes of noise channels is the class of Pauli channels, for reasons of both theoretical tractability and experimental relevance. Here we present a practical algorithm for estimating the $s$ nonzero Pauli error rates in an $s$-sparse, $n$-qubit Pauli noise channel, or more generally the $s$ largest Pauli error rates. The algorithm comes with rigorous recovery guarantees and uses only $O(n^2)$ measurements, $O(sn^2)$ classical processing time, and Clifford quantum circuits. We experimentally validate a heuristic version of the algorithm that uses simplified Clifford circuits on data from an IBM 14-qubit superconducting device and our open-source implementation. These data show that accurate and precise estimation of the probability of arbitrary-weight Pauli errors is possible even when the signal is 2 orders of magnitude below the measurement noise floor.
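The relation the algorithm exploits is that a Pauli channel's error rates and its Pauli eigenvalues (fidelities) are connected by a Walsh-Hadamard-type transform. The sketch below demonstrates this for a single qubit with made-up error rates; it is background only, not the paper's sparse-recovery algorithm.

```python
# Background sketch: for a Pauli channel, Pauli eigenvalues and error rates are
# related by a Walsh-Hadamard-type transform. Shown for n = 1 with made-up rates.
import numpy as np

def commutes(a, b):
    """0 if single-qubit Paulis a, b commute, 1 if they anticommute (labels 0..3 = I,X,Y,Z)."""
    return 0 if a == 0 or b == 0 or a == b else 1

labels = ['I', 'X', 'Y', 'Z']
rates = {'I': 0.97, 'X': 0.02, 'Y': 0.0, 'Z': 0.01}   # made-up error rates

# forward transform: eigenvalue lambda_a = sum_b p_b * (-1)^<a,b>
eigs = {labels[a]: sum(rates[labels[b]] * (-1)**commutes(a, b) for b in range(4))
        for a in range(4)}

# inverse transform recovers the rates: p_b = 4^{-n} * sum_a lambda_a * (-1)^<a,b>
recovered = {labels[b]: sum(eigs[labels[a]] * (-1)**commutes(a, b) for a in range(4)) / 4
             for b in range(4)}
print(eigs)
print(recovered)   # matches `rates` up to floating-point error
```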
As the size of quantum devices continues to grow, the development of scalable methods to characterise and diagnose noise is becoming an increasingly important problem. Recent methods have shown how to efficiently estimate Hamiltonians in principle, but they are poorly conditioned and can only characterize the system up to a scalar factor, making them difficult to use in practice. In this work we present a Bayesian methodology, called Bayesian Hamiltonian Learning (BHL), that addresses both of these issues by making use of any or all of the following: well-characterised experimental control of Hamiltonian couplings, the preparation of multiple states, and the availability of any prior information for the Hamiltonian. Importantly, BHL can be used online as an adaptive measurement protocol, updating estimates and their corresponding uncertainties as experimental data become available. In addition, we show that multiple input states and control fields enable BHL to reconstruct Hamiltonians that are neither generic nor spatially local. We demonstrate the scalability and accuracy of our method with numerical simulations on up to 100 qubits. These practical results are complemented by several theoretical contributions. We prove that a $k$-body Hamiltonian $H$ whose correlation matrix has a spectral gap $\Delta$ can be estimated to precision $\varepsilon$ with only $\tilde{O}\bigl(n^{3k}/(\varepsilon \Delta)^{3/2}\bigr)$ measurements. We use two subroutines that may be of independent interest: first, an algorithm to approximate a steady state of $H$ starting from an arbitrary input that converges factorially in the number of samples; and second, an algorithm to estimate the expectation values of $m$ Pauli operators with weight $\le k$ to precision $\epsilon$ using only $O(\epsilon^{-2} 3^k \log m)$ measurements, which quadratically improves a recent result by Cotler and Wilczek.
Complete characterization of the errors that occur in using sets of logic gates is critical to developing the technology of fault-tolerant quantum computing, but current tomography methods are either slow or include unchecked assumptions. This study presents a self-consistent method for process tomography that is both fast and flexible. The technique complements the broad suite of existing characterization tools, and may potentially allow for pulse optimization to further increase gate fidelities.
Building error-corrected quantum computers relies crucially on measuring and modeling noise on candidate devices. In particular, optimal error correction requires knowing the noise that occurs in the device as it executes the circuits required for error correction. As devices increase in size, we will become more reliant on efficient models of this noise. However, such models must still retain the information required to optimize the algorithms used for error correction. Here, we propose a method of extracting detailed information of the noise in a device running syndrome extraction circuits. We introduce and execute an experiment on a superconducting device using 39 of its qubits in a surface code doing repeated rounds of syndrome extraction but omitting the midcircuit measurement and reset. We show how to extract from the 20 data qubits the information needed to build noise models of various sophistication in the form of graphical models. These models give efficient descriptions of noise in large-scale devices and are designed to illuminate the effectiveness of error correction against correlated noise. Our estimates are furthermore precise: we learn a consistent global distribution where all one- and two-qubit error rates are known to a relative error of 0.1%. By extrapolating our experimentally learned noise models toward lower error rates, we demonstrate that accurate correlated noise models are increasingly important for successfully predicting subthreshold behavior in quantum error-correction experiments.
The performance requirements for fault-tolerant quantum computing are very stringent. Qubits must be manipulated, coupled, and measured with error rates well below 1%. For semiconductor implementations, silicon quantum dot spin qubits have demonstrated average single-qubit Clifford gate error rates that approach this threshold, notably with error rates of 0.14% in isotopically enriched $^{28}$Si/SiGe devices. This gate performance, together with high-fidelity two-qubit gates and measurements, is only known to meet the threshold for fault-tolerant quantum computing in some architectures when assuming that the noise is incoherent, and still lower error rates are needed to reduce overhead. Here we experimentally show that pulse engineering techniques, widely used in magnetic resonance, improve average Clifford gate error rates for silicon quantum dot spin qubits to 0.043%, a factor of 3 improvement on previous best results for silicon quantum dot devices. By including tomographically complete measurements in randomised benchmarking, we infer a higher-order feature of the noise called the unitarity, which measures the coherence of noise. This in turn allows us to theoretically predict that average gate error rates as low as 0.026% may be achievable with further pulse improvements. These fidelities are ultimately limited by Markovian noise, which we attribute to charge noise emanating from the silicon device structure itself, or the environment.
We propose a framework for the systematic and quantitative generalization of Bell's theorem using causal networks. We first consider the multi-objective optimization problem of matching observed data while minimizing the causal effect of nonlocal variables and prove an inequality for the optimal region that both strengthens and generalizes Bell's theorem. To solve the optimization problem (rather than simply bound it), we develop a novel genetic algorithm that treats causal networks as individuals. By applying our algorithm to a photonic Bell experiment, we demonstrate the trade-off between the quantitative relaxation of one or more local causality assumptions and the ability of data to match quantum correlations.
To push gate performance to levels beyond the thresholds for quantum error correction, it is important to characterize the error sources occurring on quantum gates. However, the characterization of non-Markovian error poses a challenge to current quantum process tomography techniques. Fast Bayesian Tomography (FBT) is a self-consistent gate set tomography protocol that can be bootstrapped from earlier characterization knowledge and updated in real time with arbitrary gate sequences. Here we demonstrate how FBT allows for the characterization of key non-Markovian error processes. We introduce two experimental protocols for FBT to diagnose the non-Markovian behavior of two-qubit systems on silicon quantum dots. To increase the efficiency and scalability of the experiment-analysis loop, we develop an online FBT software stack. To reduce experiment cost and analysis time, we also introduce a native readout method and a warm boot strategy. Our results demonstrate that FBT is a useful tool for probing non-Markovian errors that can be detrimental to the ultimate realization of fault-tolerant operation of quantum computers.
Benchmarking and characterising quantum states and logic gates is essential in the development of devices for quantum computing. We introduce a Bayesian approach to self-consistent process tomography, called fast Bayesian tomography (FBT), and experimentally demonstrate its performance in characterising a two-qubit gate set on a silicon-based spin qubit device. FBT is built on an adaptive self-consistent linearisation that is robust to model approximation errors. Our method offers several advantages over other self-consistent tomographic methods. Most notably, FBT can leverage prior information from randomised benchmarking (or other characterisation measurements), and can be performed in real time, providing continuously updated estimates of full process matrices while data is acquired.
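A generic illustration of the kind of online Gaussian updating that underlies such real-time estimates is sketched below; this is a textbook linear-Gaussian (Kalman-style) update with placeholder dimensions and noise levels, not the actual FBT linearisation of process matrices.

```python
# Generic sequential Bayesian (Gaussian) update for a linear measurement model
# y = x . theta + noise, illustrating "continuously updated estimate with
# uncertainty". NOT the FBT linearisation itself; all quantities are placeholders.
import numpy as np

rng = np.random.default_rng(1)
dim, sigma = 4, 0.05                      # parameter dimension, measurement noise
theta_true = rng.normal(size=dim)         # hidden "process parameters"

mean = np.zeros(dim)                      # Gaussian prior N(mean, cov)
cov = np.eye(dim)

for _ in range(200):                      # each loop = one new experimental record
    x = rng.normal(size=dim)              # design vector for this measurement
    y = x @ theta_true + rng.normal(0, sigma)

    # conjugate Gaussian update (rank-one Kalman-style update)
    s = x @ cov @ x + sigma**2
    k = cov @ x / s
    mean = mean + k * (y - x @ mean)
    cov = cov - np.outer(k, x @ cov)

print("estimate:", np.round(mean, 3))
print("truth:   ", np.round(theta_true, 3))
```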
Characterising the performance of noisy quantum circuits is central to the production of prototype quantum computers and can enable improved quantum error correction that exploits noise biases identified in a quantum device. We describe an implementation of averaged circuit eigenvalue sampling (ACES), a general framework for the scalable noise characterisation of quantum circuits. ACES is capable of simultaneously estimating the Pauli error probabilities of all gates in a Clifford circuit, and captures averaged spatial correlations between gates implemented simultaneously in the circuit. By rigorously analysing the performance of ACES experiments, we derive a figure of merit for their expected performance, allowing us to optimise ACES experimental designs and improve the precision to which we estimate noise given fixed experimental resources. Since the syndrome extraction circuits of quantum error correcting codes are representative components of a fault-tolerant architecture, we demonstrate the scalability and performance of our ACES protocol through circuit-level numerical simulations of the entire noise characterisation procedure for the syndrome extraction circuit of a distance 25 surface code with over 1000 qubits. Our results indicate that detailed noise characterisation methods are scalable to near-term quantum devices. We release our code in the form of the Julia package AveragedCircuitEigenvalueSampling.jl.
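The core linear-algebra step behind ACES-style reconstruction can be sketched as follows: circuit eigenvalues are (ideally) products of the per-gate Pauli eigenvalues appearing in the circuit, so their logarithms obey a linear system that can be solved by least squares. The design matrix, eigenvalues, and noise below are synthetic placeholders, not the package's actual experimental designs.

```python
# Toy sketch of the log-linear inversion at the heart of ACES-style estimation.
# Everything here is synthetic; it is not the AveragedCircuitEigenvalueSampling.jl design.
import numpy as np

rng = np.random.default_rng(2)
n_gate_eigs, n_circuits = 6, 40

gate_eigs = 1 - 0.02 * rng.random(n_gate_eigs)            # true per-gate Pauli eigenvalues
A = rng.integers(0, 3, size=(n_circuits, n_gate_eigs))    # how often each eigenvalue appears per circuit

circuit_eigs = np.exp(A @ np.log(gate_eigs))              # ideal circuit eigenvalues
measured = circuit_eigs * (1 + rng.normal(0, 1e-3, n_circuits))  # add estimation noise

log_est, *_ = np.linalg.lstsq(A, np.log(measured), rcond=None)   # least-squares inversion
print("estimated:", np.round(np.exp(log_est), 5))
print("true:     ", np.round(gate_eigs, 5))
```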
Fault-tolerant architectures aim to reduce the noise of a quantum computation. Despite such architectures being well studied, a detailed understanding of how noise is transformed in a fault-tolerant primitive such as magic state injection is currently lacking. We use numerical simulations of logical process tomography on a fault-tolerant gadget that implements a logical $T = Z(\pi/8)$ gate using magic state injection to understand how noise characteristics at the physical level are transformed into noise characteristics at the logical level. We show how, in this gadget, a significant phase ($Z$) bias can arise in the logical noise, even with unbiased noise at the physical level. While the magic state injection gadget intrinsically induces biased noise, with extant phase bias being further amplified at the logical level, we identify noisy error correction circuits as a key limiting factor on the magnitude of this logical noise bias. Our approach provides a framework for assessing the detailed noise characteristics, as well as the overall performance, of fault-tolerant logical primitives.