Recently, there has been much interest in a new kind of ``unspeakable'' quantum information that stands to regular quantum information in the same way that a direction in space or a moment in time stands to a classical bit string: the former can only be encoded using particular degrees of freedom while the latter are indifferent to the physical nature of the information carriers. The problem of correlating distant reference frames, of which aligning Cartesian axes and synchronizing clocks are important instances, is an example of a task that requires the exchange of unspeakable information and for which it is interesting to determine the fundamental quantum limit of efficiency. There have also been many investigations into the information theory that is appropriate for parties that lack reference frames or that lack correlation between their reference frames, restrictions that result in global and local superselection rules. In the presence of these, quantum unspeakable information becomes a new kind of resource that can be manipulated, depleted, quantified, etc. Methods have also been developed to contend with these restrictions using relational encodings, particularly in the context of computation, cryptography, communication, and the manipulation of entanglement. This paper reviews the role of reference frames and superselection rules in the theory of quantum-information processing.
We obtain sufficient conditions for the efficient simulation of a continuous variable quantum algorithm or process on a classical computer. The resulting theorem is an extension of the Gottesman-Knill theorem to continuous variable quantum information. For a collection of harmonic oscillators, any quantum process that begins with unentangled Gaussian states, performs only transformations generated by Hamiltonians that are quadratic in the canonical operators, and involves only measurements of canonical operators (including finite losses) and suitable operations conditioned on these measurements can be simulated efficiently on a classical computer.
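
To make the mechanism behind this result concrete, here is a minimal sketch (my own illustration, not code from the paper): a Gaussian state of $n$ modes is fully described by a $2n$-vector of means and a $2n \times 2n$ covariance matrix, and quadratic Hamiltonians act on these by symplectic matrices, so tracking the state costs only polynomial time and memory.

```python
import numpy as np

# Track a Gaussian state of n modes by its mean vector r and covariance
# matrix V (quadrature ordering x1, p1, x2, p2, ...). A quadratic
# Hamiltonian acts as a symplectic matrix S: r -> S r, V -> S V S^T.

def symplectic_form(n):
    return np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def beam_splitter(n, i, j, theta):
    """Symplectic matrix mixing modes i and j."""
    S = np.eye(2 * n)
    c, s = np.cos(theta), np.sin(theta)
    for k in range(2):                      # same rotation on x and p
        a, b = 2 * i + k, 2 * j + k
        S[a, a], S[a, b], S[b, a], S[b, b] = c, s, -s, c
    return S

def squeezer(n, i, r):
    """Symplectic matrix of single-mode squeezing on mode i."""
    S = np.eye(2 * n)
    S[2 * i, 2 * i], S[2 * i + 1, 2 * i + 1] = np.exp(-r), np.exp(r)
    return S

n = 3
r_vec, V = np.zeros(2 * n), 0.5 * np.eye(2 * n)   # vacuum state (hbar = 1)
for S in [squeezer(n, 0, 1.0), beam_splitter(n, 0, 1, np.pi / 4)]:
    # every valid Gaussian operation preserves the symplectic form
    assert np.allclose(S @ symplectic_form(n) @ S.T, symplectic_form(n))
    r_vec, V = S @ r_vec, S @ V @ S.T
print(np.round(V, 3))
```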
We produce and holographically measure entangled qudits encoded in transverse spatial modes of single photons. With the novel use of a quantum state tomography method that only requires two-state superpositions, we achieve the most complete characterization of entangled qutrits to date. Ideally, entangled qutrits provide better security than qubits in quantum bit commitment: we model the sensitivity of this to mixture and show experimentally and theoretically that qutrits with even a small amount of decoherence cannot offer increased security over qubits.
We analyse the quantum walk in higher spatial dimensions and compare classical and quantum spreading as a function of time. Tensor products of Hadamard transformations and the discrete Fourier transform arise as natural extensions of the 'quantum coin toss' in the one-dimensional walk simulation, and other illustrative transformations are also investigated. We find that entanglement between the dimensions serves to reduce the rate of spread of the quantum walk. The classical limit is obtained by introducing a random phase variable.
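
As an illustration of the setup (assumptions mine: a periodic lattice, a symmetric initial coin state, and the Hadamard-product coin named in the abstract), the following sketch runs a two-dimensional discrete-time walk and compares its ballistic spread with the diffusive spread of the corresponding classical random walk.

```python
import numpy as np

# 2D discrete-time quantum walk with the four-dimensional coin H (x) H,
# shifting +/-1 in x and y according to the two coin bits.
L, steps = 129, 40
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
coin = np.kron(H, H)

psi = np.zeros((4, L, L), dtype=complex)
psi[:, L // 2, L // 2] = 0.5               # symmetric initial coin state

for _ in range(steps):
    psi = np.tensordot(coin, psi, axes=(1, 0))        # coin toss
    for c in range(4):
        dx = +1 if c & 2 else -1                      # first coin bit -> x
        dy = +1 if c & 1 else -1                      # second coin bit -> y
        psi[c] = np.roll(np.roll(psi[c], dx, axis=0), dy, axis=1)

prob = (np.abs(psi) ** 2).sum(axis=0)
xs = np.arange(L) - L // 2
var = (prob * (xs[:, None] ** 2 + xs[None, :] ** 2)).sum()
print(f"quantum rms spread after {steps} steps: {np.sqrt(var):.2f}")
print(f"classical rms spread (random walk):     {np.sqrt(2 * steps):.2f}")
```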
Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code, the XZZX code, offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli $Z$ errors occur more frequently than $X$ or $Y$ errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of $Y$ instead of $Z$ around the faces, as this doubles the number of useful syndrome bits associated with the dominant $Z$ errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
Unwanted interaction between a quantum system and its fluctuating environment leads to decoherence and is the primary obstacle to establishing a scalable quantum information processing architecture. Strategies such as environmental and materials engineering, quantum error correction and dynamical decoupling can mitigate decoherence, but generally increase experimental complexity. Here we improve coherence in a qubit using real-time Hamiltonian parameter estimation. Using a rapidly converging Bayesian approach, we precisely measure the splitting in a singlet-triplet spin qubit faster than the surrounding nuclear bath fluctuates. We continuously adjust qubit control parameters based on this information, thereby improving the inhomogeneously broadened coherence time ($T_2^*$) from tens of nanoseconds to >2 μs. Because the technique demonstrated here is compatible with arbitrary qubit operations, it is a natural complement to quantum error correction and can be used to improve the performance of a wide variety of qubits in both metrological and quantum information processing applications.
We show that higher-dimensional versions of qubits, or qudits, can be encoded into spin systems and into harmonic oscillators, yielding important advantages for quantum computation. Whereas qubit-based quantum computation is adequate for analyses of quantum vs classical computation, in practice qubits are often realized in higher-dimensional systems by truncating all but two levels, thereby reducing the size of the precious Hilbert space. We develop natural qudit gates for universal quantum computation, and exploit the entire accessible Hilbert space. Mathematically, we give representations of the generalized Pauli group for qudits in coupled spin systems and harmonic oscillators, and include analyses of the qubit and the infinite-dimensional limits.
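
The generalized Pauli group mentioned here is easy to write down explicitly. The sketch below (illustrative, not the paper's spin or oscillator construction) builds the qudit shift and phase operators and checks their defining relations.

```python
import numpy as np

# Generalized Pauli (Weyl-Heisenberg) operators for a qudit of dimension d:
# X|j> = |j+1 mod d>,  Z|j> = w^j |j>,  with w = exp(2*pi*i/d).
d = 5
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)          # cyclic shift
Z = np.diag(w ** np.arange(d))             # phase operator

# Defining commutation relation of the generalized Pauli group: ZX = w XZ.
assert np.allclose(Z @ X, w * (X @ Z))
# Both operators have order d.
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))
print(f"qudit Pauli relations verified for d = {d}")
```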
We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.
We present the theory of how to achieve phase measurements with the minimum possible variance in ways that are readily implementable with current experimental techniques. Measurements whose statistics have high-frequency fringes, such as those obtained from maximally path-entangled $(|N,0\rangle+|0,N\rangle)/\sqrt{2}$ (``NOON'') states, have commensurately high information yield (as quantified by the Fisher information). However, this information is also highly ambiguous because it does not distinguish between phases at the same point on different fringes. We provide schemes to eliminate this phase ambiguity in a highly efficient way, providing phase estimates with uncertainty that is within a small constant factor of the Heisenberg limit, the minimum allowed by the laws of quantum mechanics. These techniques apply to NOON state and multipass interferometry, as well as phase measurements in quantum computing. We have reported the experimental implementation of some of these schemes with multipass interferometry elsewhere. Here, we present the theoretical foundation and also present some additional experimental results. There are three key innovations to the theory in this paper. First, we examine the intrinsic phase properties of the sequence of states (in multiple time modes) via the equivalent two-mode state. Second, we identify the key feature of the equivalent state that enables the optimal scaling of the intrinsic phase uncertainty to be obtained. This enables us to identify appropriate combinations of states to use. The remaining difficulty is that the ideal phase measurements to achieve this intrinsic phase uncertainty are often not physically realizable. The third innovation is to solve this problem by using realizable measurements that closely approximate the optimal measurements, enabling the optimal scaling to be preserved. We consider both adaptive and nonadaptive measurement schemes.
We present a method for estimating the probabilities of outcomes of a quantum circuit using Monte Carlo sampling techniques applied to a quasiprobability representation. Our estimate converges to the true quantum probability at a rate determined by the total negativity in the circuit, using a measure of negativity based on the 1-norm of the quasiprobability. If the negativity grows at most polynomially in the size of the circuit, our estimator converges efficiently. These results highlight the role of negativity as a measure of nonclassical resources in quantum computation.
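
A toy example of the sampling technique (my simplification to a single quasiprobability vector rather than a full circuit) shows how the 1-norm enters: sampling from the rescaled distribution $|q|/\|q\|_1$ with sign weights gives an unbiased estimator whose variance grows with the negativity.

```python
import numpy as np

# To estimate p = sum_k q_k f_k, where q is a quasiprobability (entries may
# be negative, summing to 1), sample k from |q| / ||q||_1 and average the
# weighted values sign(q_k) * ||q||_1 * f_k. The estimator is unbiased, and
# its variance grows with the 1-norm ||q||_1, the negativity measure that
# governs the convergence rate.
rng = np.random.default_rng(7)
q = np.array([0.7, 0.5, -0.2])          # quasiprobability, sums to 1
f = np.array([1.0, 0.0, 1.0])           # indicator of the outcome event
exact = q @ f                           # true probability: 0.5

norm = np.abs(q).sum()                  # ||q||_1 = 1.4 (negativity present)
samples = rng.choice(len(q), size=200_000, p=np.abs(q) / norm)
estimates = np.sign(q[samples]) * norm * f[samples]
stderr = estimates.std() / np.sqrt(len(estimates))
print(f"estimate {estimates.mean():.4f} +/- {stderr:.4f}  (exact {exact})")
```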
We introduce the quantum quincunx, which physically demonstrates the quantum walk and is analogous to Galton's quincunx for demonstrating the random walk. In contradistinction to the theoretical studies of quantum walks over orthogonal lattice states, we introduce quantum walks over nonorthogonal lattice states (specifically, coherent states on a circle) to demonstrate that the key features of a quantum walk are observable albeit for strict parameter ranges. A quantum quincunx may be realized with current cavity quantum electrodynamics capabilities, and precise control over decoherence in such experiments allows a remarkable decrease in the position noise, or spread, with increasing decoherence.
Although universal continuous-variable quantum computation cannot be achieved via linear optics (including squeezing), homodyne detection, and feed-forward, inclusion of ideal photon-counting measurements overcomes this obstacle. These measurements are sometimes described by arrays of beam splitters to distribute the photons across several modes. We show that such a scheme cannot be used to implement ideal photon counting and that such measurements necessarily involve nonlinear evolution. However, this requirement of nonlinearity can be moved ``off-line,'' thereby permitting universal continuous-variable quantum computation with linear optics.
Ground states of spin lattices can serve as a resource for measurement-based quantum computation. Ideally, the ability to perform quantum gates via measurements on such states would be insensitive to small variations in the Hamiltonian. Here, we describe a class of symmetry-protected topological orders in one-dimensional systems, any one of which ensures the perfect operation of the identity gate. As a result, measurement-based quantum gates can be a robust property of an entire phase in a quantum spin lattice, when protected by an appropriate symmetry.
A goal of the emerging field of quantum control is to develop methods for quantum technologies to function robustly in the presence of noise. Central issues are the fundamental limitations on the available information about quantum systems and the disturbance they suffer in the process of measurement. In the context of a simple quantum control scenario--the stabilization of non-orthogonal states of a qubit against dephasing--we experimentally explore the use of weak measurements in feedback control. We find that, despite the intrinsic difficulty of implementing them, weak measurements allow us to control the qubit better in practice than is even theoretically possible without them. Our work shows that these more general quantum measurements can play an important role for feedback control of quantum systems.
How would the world appear to us if its ontology were that of classical mechanics but every agent faced a restriction on how much they could come to know about the classical state? We show that in most respects it would appear to us as quantum. The statistical theory of classical mechanics, which specifies how probability distributions over phase space evolve under Hamiltonian evolution and under measurements, is typically called Liouville mechanics, so the theory we explore here is Liouville mechanics with an epistemic restriction. The particular epistemic restriction we posit as our foundational postulate specifies two constraints. The first constraint is a classical analog of Heisenberg's uncertainty principle: the second-order moments of position and momentum defined by the phase-space distribution that characterizes an agent's knowledge are required to satisfy the same constraints as are satisfied by the moments of position and momentum observables for a quantum state. The second constraint is that the distribution should have maximal entropy for the given moments. Starting from this postulate, we derive the allowed preparations, measurements, and transformations and demonstrate that they are isomorphic to those allowed in Gaussian quantum mechanics and generate the same experimental statistics. We argue that this reconstruction of Gaussian quantum mechanics constitutes additional evidence in favor of a research program wherein quantum states are interpreted as states of incomplete knowledge and that the phenomena that do not arise in Gaussian quantum mechanics provide the best clues for how one might reconstruct the full quantum theory.
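
The first constraint can be stated compactly: for one degree of freedom, the covariance matrix $V$ of the agent's phase-space distribution must satisfy $V + (i/2)\Omega \geq 0$ with $\hbar = 1$, exactly the condition obeyed by the moments of a quantum state. A small numerical check (my own illustration):

```python
import numpy as np

# Moment constraint for one degree of freedom: V + (i/2) Omega must be
# positive semidefinite, where Omega is the symplectic form. This is the
# classical analog of the Heisenberg uncertainty relation used above.
Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])

def epistemically_allowed(V):
    return np.linalg.eigvalsh(V + 0.5j * Omega).min() >= -1e-12

print(epistemically_allowed(np.diag([0.5, 0.5])))   # saturates: allowed
print(epistemically_allowed(np.diag([0.1, 0.1])))   # too sharp: forbidden
```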
We employ a high quantum efficiency photon number counter to determine the photon number distribution of the output field from a parametric down-converter. The raw photocount data directly demonstrates that the source is nonclassical by 40 standard deviations, and correcting for the quantum efficiency yields a direct observation of oscillations in the photon number distribution.
Measuring the polarization of a single photon typically results in its destruction. We propose, demonstrate, and completely characterize a quantum nondemolition (QND) scheme for realizing such a measurement nondestructively. This scheme uses only linear optics and photodetection of ancillary modes to induce a strong nonlinearity at the single-photon level, nondeterministically. We vary this QND measurement continuously into the weak regime and use it to perform a nondestructive test of complementarity in quantum mechanics. Our scheme realizes the most advanced general measurement of a qubit to date: it is nondestructive, can be made in any basis, and with arbitrary strength.
The surface code, with a simple modification, exhibits ultra-high error correction thresholds when the noise is biased towards dephasing. Here, we identify features of the surface code responsible for these ultra-high thresholds. We provide strong evidence that the threshold error rate of the surface code tracks the hashing bound exactly for all biases, and show how to exploit these features to achieve significant improvement in logical failure rate. First, we consider the infinite bias limit, meaning pure dephasing. We prove that the error threshold of the modified surface code for pure dephasing noise is $50\%$, i.e., the point at which all qubits are fully dephased, and that this threshold can be achieved by a polynomial-time decoding algorithm. We demonstrate that the sub-threshold behavior of the code depends critically on the precise shape and boundary conditions of the code. That is, for rectangular surface codes with standard rough/smooth open boundaries, it is controlled by the parameter $g=\gcd(j,k)$, where $j$ and $k$ are dimensions of the surface code lattice. We demonstrate a significant improvement in logical failure rate with pure dephasing for co-prime codes that have $g=1$, and closely related rotated codes, which have a modified boundary. The effect is dramatic: the same logical failure rate achievable with a square surface code and $n$ physical qubits can be obtained with a co-prime or rotated surface code using only $O(\sqrt{n})$ physical qubits. Finally, we use approximate maximum likelihood decoding to demonstrate that this improvement persists for a general Pauli noise biased towards dephasing. In particular, comparing with a square surface code, we observe a significant improvement in logical failure rate against biased noise using a rotated surface code with approximately half the number of physical qubits.
Quantum computation can proceed solely through single-qubit measurements on an appropriate quantum state, such as the ground state of an interacting many-body system. We investigate a simple spin-lattice system based on the cluster-state model, and by using nonlocal correlation functions that quantify the fidelity of quantum gates performed between distant qubits, we demonstrate that it possesses a quantum (zero-temperature) phase transition between a disordered phase and an ordered ``cluster phase'' in which it is possible to perform a universal set of quantum gates.
We introduce methods for clock synchronization that make use of the adiabatic exchange of nondegenerate two-level quantum systems: ticking qubits. Schemes involving the exchange of N independent qubits with frequency $\omega$ give a synchronization accuracy that scales as $(\omega\sqrt{N})^{-1}$, i.e., as the standard quantum limit. We introduce a protocol that makes use of N coherent exchanges of a single qubit at frequency $\omega$, leading to an accuracy that scales as $(\omega N)^{-1}\log N$. This protocol beats the standard quantum limit without the use of entanglement, and we argue that this scaling is the fundamental limit for clock synchronization allowed by quantum mechanics. We analyse the performance of these protocols when used with a lossy channel.
Bipartite entanglement may be reduced if there are restrictions on allowed local operations. We introduce the concept of a generalized superselection rule to describe such restrictions, and quantify the entanglement constrained by it. We show that ensemble quantum information processing, where elements in the ensemble are not individually addressable, is subject to the superselection rule associated with the symmetric group (the group of permutations of elements). We prove that even for an ensemble comprising many pairs of qubits, each pair described by a pure Bell state, the entanglement per element constrained by this superselection rule goes to zero for a large number of elements.
We investigate quantum many-body systems where all low-energy states are entangled. As a tool for quantifying such systems, we introduce the concept of the entanglement gap, which is the difference in energy between the ground-state energy and the minimum energy that a separable (unentangled) state may attain. If the energy of the system lies within the entanglement gap, the state of the system is guaranteed to be entangled. We find Hamiltonians that have the largest possible entanglement gap; for a system consisting of two interacting spin-$1/2$ subsystems, the Heisenberg antiferromagnet is one such example. We also introduce a related concept, the entanglement-gap temperature: the temperature below which the thermal state is certainly entangled, as witnessed by its energy. We give an example of a bipartite Hamiltonian with an arbitrarily high entanglement-gap temperature for fixed total energy range. For bipartite spin lattices we prove a theorem demonstrating that the entanglement gap necessarily decreases as the coordination number is increased. We investigate frustrated lattices and quantum phase transitions as physical phenomena that affect the entanglement gap.
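
For the two-spin example named here, the numbers can be verified directly: the singlet energy of $H = J\,\mathbf{S}_1\cdot\mathbf{S}_2$ is $-3J/4$, while no product state can do better than $-J/4$, giving an entanglement gap of $J/2$. A small numerical check (my own sketch, with a random search standing in for the exact separable minimization):

```python
import numpy as np

rng = np.random.default_rng(0)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

J = 1.0
H = (J / 4) * sum(np.kron(s, s) for s in (sx, sy, sz))   # H = J * S1.S2
e_ground = np.linalg.eigvalsh(H)[0]                      # singlet: -3J/4

def bloch_state(v):
    """Qubit pure state with Bloch vector v / |v|."""
    v = v / np.linalg.norm(v)
    theta, phi = np.arccos(v[2]), np.arctan2(v[1], v[0])
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

# Random search over product states; the true separable minimum is -J/4.
e_sep = min(
    np.real(np.conj(psi) @ H @ psi)
    for _ in range(20_000)
    for psi in [np.kron(bloch_state(rng.normal(size=3)),
                        bloch_state(rng.normal(size=3)))]
)
print(f"ground {e_ground:.3f}, best separable {e_sep:.3f}, "
      f"entanglement gap ~ {e_sep - e_ground:.3f}")
```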
Noise in quantum computing is countered with quantum error correction. Achieving optimal performance will require tailoring codes and decoding algorithms to account for features of realistic noise, such as the common situation where the noise is biased towards dephasing. Here we introduce an efficient high-threshold decoder for a noise-tailored surface code based on minimum-weight perfect matching. The decoder exploits the symmetries of its syndrome under the action of biased noise and generalizes to the fault-tolerant regime where measurements are unreliable. Using this decoder, we obtain fault-tolerant thresholds in excess of 6% for a phenomenological noise model in the limit where dephasing dominates. These gains persist even for modest noise biases: we find a threshold of ∼5% in an experimentally relevant regime where dephasing errors occur at a rate 100 times greater than bit-flip errors.
A fault-tolerant quantum processor may be configured using stationary qubits interacting only with their nearest neighbours, but at the cost of significant overheads in physical qubits per logical qubit. Such overheads could be reduced by coherently transporting qubits across the chip, allowing connectivity beyond immediate neighbours. Here we demonstrate high-fidelity coherent transport of an electron spin qubit between quantum dots in isotopically-enriched silicon. We observe qubit precession in the inter-site tunnelling regime and assess the impact of qubit transport using Ramsey interferometry and quantum state tomography techniques. We report a polarization transfer fidelity of 99.97% and an average coherent transfer fidelity of 99.4%. Our results provide key elements for high-fidelity, on-chip quantum information distribution, as long envisaged, reinforcing the scaling prospects of silicon-based spin qubits.
We discuss the characterization and properties of quantum nondemolition (QND) measurements on qubit systems. We introduce figures of merit which can be applied to systems of any Hilbert space dimension, thus providing universal criteria for characterizing QND measurements. The controlled-NOT gate and an optical implementation are examined as examples of QND devices for qubits. We also consider the QND measurement of weak values.
Typical quantum communication schemes are such that to achieve perfect decoding the receiver must share a reference frame (RF) with the sender. Indeed, if the receiver only possesses a bounded-size quantum token of the sender's RF, then the decoding is imperfect, and we can describe this effect as a noisy quantum channel. We seek here to characterize the performance of such schemes, or equivalently, to determine the effective decoherence induced by having a bounded-size RF. We assume that the token is prepared in a special state that has particularly nice group-theoretic properties and that is near-optimal for transmitting information about the sender's frame. We present a decoding operation, which can be proven to be near-optimal in this case, and we demonstrate that there are two distinct ways of implementing it (corresponding to two distinct Kraus decompositions). In one, the receiver measures the orientation of the RF token and reorients the system appropriately. In the other, the receiver extracts the encoded information from the virtual subsystems that describe the relational degrees of freedom of the system and token. Finally, we provide explicit characterizations of these decoding schemes when the system is a single qubit and for three standard kinds of RF: a phase reference, a Cartesian frame (representing an orthogonal triad of spatial directions), and a reference direction (representing a single spatial direction).
We investigate the degradation of reference frames (RFs), treated as dynamical quantum systems, and quantify their longevity as a resource for performing tasks in quantum information processing. We adopt an operational measure of an RF's longevity, namely, the number of measurements that can be made against it with a certain error tolerance. We investigate two distinct types of RF: a reference direction, realized by a spin-$j$ system, and a phase reference, realized by an oscillator mode with bounded energy. For both cases, we show that our measure of longevity increases quadratically with the size of the reference system and is therefore non-additive. For instance, the number of measurements for which a directional RF consisting of $N$ parallel spins can be put to use scales as $N^2$. Our results quantify the extent to which microscopic or mesoscopic RFs may be used for repeated, high-precision measurements, without needing to be reset—a question that is important for some implementations of quantum computing. We illustrate our results using the proposed single-spin measurement scheme of magnetic resonance force microscopy.
We investigate schemes for Hamiltonian parameter estimation of a two-level system using repeated measurements in a fixed basis. The simplest (Fourier based) schemes yield an estimate with a mean square error (MSE) that decreases at best as a power law $\sim N^{-2}$ in the number of measurements $N$. By contrast, we present numerical simulations indicating that an adaptive Bayesian algorithm, where the time between measurements can be adjusted based on prior measurement results, yields an MSE which appears to scale close to $\exp(-0.3 N)$. That is, measurements in a single fixed basis are sufficient to achieve exponential scaling in $N$.
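
A minimal sketch of such an adaptive Bayesian update (my illustrative choices of likelihood, grid prior, and time rule, not the paper's exact protocol): each shot has outcome probability $\cos^2(\omega t/2)$, and the waiting time is chosen inversely proportional to the current posterior width.

```python
import numpy as np

rng = np.random.default_rng(1)
omega_true = 0.7364                        # unknown frequency to estimate
grid = np.linspace(0.0, 1.0, 2001)         # grid prior over omega
post = np.full_like(grid, 1.0 / len(grid))

for _ in range(40):
    var = max(post @ grid**2 - (post @ grid) ** 2, 0.0)
    t = min(1.0 / max(np.sqrt(var), 1e-6), 100.0)   # adaptive waiting time
    outcome = rng.random() < np.cos(omega_true * t / 2) ** 2
    like = np.cos(grid * t / 2) ** 2                # single fixed basis
    post *= like if outcome else 1 - like
    post /= post.sum()

est = post @ grid
print(f"estimate {est:.5f}, true {omega_true}, error {abs(est - omega_true):.1e}")
```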
Measurements in quantum mechanics cannot perfectly distinguish all states and necessarily disturb the measured system. We present and analyze a proposal to demonstrate fundamental limits on quantum control of a single qubit arising from these properties of quantum measurements. We consider a qubit prepared in one of two nonorthogonal states and subsequently subjected to dephasing noise. The task is to use measurement and feedback control to attempt to correct the state of the qubit. We demonstrate that projective measurements are not optimal for this task, and that there exists a nonprojective measurement with an optimum measurement strength which achieves the best trade-off between gaining information about the system and disturbing it through measurement backaction. We study the performance of a quantum control scheme that makes use of this weak measurement followed by feedback control, and demonstrate that it realizes the optimal recovery from noise for this system. We contrast this approach with various classically inspired control schemes.
We present a simple quantum many-body system---a two-dimensional lattice of qubits with a Hamiltonian composed of nearest-neighbor two-body interactions---such that the ground state is a universal resource for quantum computation using single-qubit measurements. This ground state approximates a cluster state that is encoded into a larger number of physical qubits. The Hamiltonian we use is motivated by the projected entangled pair states, which provide a transparent mechanism to produce such approximate encoded cluster states on square or other lattice structures (as well as a variety of other quantum states) as the ground state. We show that the error in this approximation takes the form of independent errors on bonds occurring with a fixed probability. The energy gap of such a system, which in part determines its usefulness for quantum computation, is shown to be independent of the size of the lattice. In addition, we show that the scaling of this energy gap in terms of the coupling constants of the Hamiltonian is directly determined by the lattice geometry. As a result, the approximate encoded cluster state obtained on a hexagonal lattice (a resource that is also universal for quantum computation) can be shown to have a larger energy gap than one on a square lattice with an equivalent Hamiltonian.
We show that correlations inconsistent with any locally causal description can be a generic feature of measurements on entangled quantum states. Specifically, spatially separated parties who perform local measurements on a maximally entangled state using randomly chosen measurement bases can, with significant probability, generate nonclassical correlations that violate a Bell inequality. For n parties using a Greenberger-Horne-Zeilinger state, this probability of violation rapidly tends to unity as the number of parties increases. We also show that, even with both a randomly chosen two-qubit pure state and randomly chosen measurement bases, a violation can be found about 10% of the time. Among other applications, our work provides a feasible alternative for the demonstration of Bell inequality violation without a shared reference frame.
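
The two-qubit case is easy to check numerically. The sketch below (my own illustration) samples random measurement directions for a singlet, for which $E(\mathbf{a},\mathbf{b}) = -\mathbf{a}\cdot\mathbf{b}$, and counts how often some CHSH combination exceeds the local bound of 2.

```python
import numpy as np

rng = np.random.default_rng(42)

def rand_dirs(n):
    """n uniformly random unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

trials = 100_000
a1, a2, b1, b2 = (rand_dirs(trials) for _ in range(4))
E = lambda a, b: -(a * b).sum(axis=1)      # singlet correlator E(a,b) = -a.b

terms = np.stack([E(a1, b1), E(a1, b2), E(a2, b1), E(a2, b2)])
signs = np.array([[1, 1, 1, -1], [1, 1, -1, 1], [1, -1, 1, 1], [-1, 1, 1, 1]])
S = np.abs(signs @ terms).max(axis=0)      # best CHSH combination per trial
print(f"fraction of trials violating a CHSH inequality: {(S > 2).mean():.3f}")
```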
We show that quantum information can be encoded into entangled states of multiple indistinguishable particles in such a way that any inertial observer can prepare, manipulate, or measure the encoded state independent of their Lorentz reference frame. Such relativistically invariant quantum information is free of the difficulties associated with encoding into spin or other degrees of freedom in a relativistic context.
We show that a universal set of gates for quantum computation with optics can be quantum teleported through the use of EPR entangled states, homodyne detection, and linear optics and squeezing operations conditioned on measurement outcomes. This scheme may be used for fault-tolerant quantum computation in any optical scheme (qubit or continuous-variable). The teleportation of nondeterministic nonlinear gates employed in linear optics quantum computation is discussed.
We derive, and experimentally demonstrate, an interferometric scheme for unambiguous phase estimation with precision scaling at the Heisenberg limit that does not require adaptive measurements. That is, with no prior knowledge of the phase, we can obtain an estimate of the phase with a standard deviation that is only a small constant factor larger than the minimum physically allowed value. Our scheme resolves the phase ambiguity that exists when multiple passes through a phase shift, or NOON states, are used to obtain improved phase resolution. Like a recently introduced adaptive technique [Higgins et al 2007 Nature 450 393], our experiment uses multiple applications of the phase shift on single photons. By not requiring adaptive measurements, but rather using a predetermined measurement sequence, the present scheme is both conceptually simpler and significantly easier to implement. Additionally, we demonstrate a simplified adaptive scheme that also surpasses the standard quantum limit for single passes.
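
A sketch of the nonadaptive idea (my illustrative Bayesian reconstruction, not the exact estimator used in the experiment): photons pass $p$ times through the phase shift with predetermined reference settings, and the fixed sequence $p = 2^K, \ldots, 2, 1$ resolves the fringe ambiguity while the high-$p$ data supplies the precision.

```python
import numpy as np

rng = np.random.default_rng(3)
phi_true = 2.171                           # unknown phase in [0, 2*pi)
grid = np.linspace(0, 2 * np.pi, 8192, endpoint=False)
post = np.full_like(grid, 1.0 / len(grid))

K, reps = 6, 12
for p in [2 ** k for k in range(K, -1, -1)]:        # passes: 64, 32, ..., 1
    for theta in np.pi * np.arange(reps) / reps:    # predetermined settings
        prob = np.cos((p * phi_true + theta) / 2) ** 2
        outcome = rng.random() < prob
        like = np.cos((p * grid + theta) / 2) ** 2  # fringe of period 2pi/p
        post *= like if outcome else 1 - like
        post /= post.sum()

est = grid[np.argmax(post)]
print(f"estimate {est:.4f}, true {phi_true}, error {abs(est - phi_true):.1e}")
```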
We address the question of whether symmetry-protected topological (SPT) order can persist at nonzero temperature, with a focus on understanding the thermal stability of several models studied in the theory of quantum computation. We present three results in this direction. First, we prove that nontrivial SPT order protected by a global onsite symmetry cannot persist at nonzero temperature, demonstrating that several quantum computational structures protected by such onsite symmetries are not thermally stable. Second, we prove that the three-dimensional (3D) cluster-state model used in the formulation of topological measurement-based quantum computation possesses a nontrivial SPT-ordered thermal phase when protected by a generalized (1-form) symmetry. The SPT order in this model is detected by long-range localizable entanglement in the thermal state, which compares with related results characterizing SPT order at zero temperature in spin chains using localizable entanglement as an order parameter. Our third result is to demonstrate that the high error tolerance of this 3D cluster-state model for quantum computation, even without a protecting symmetry, can be understood as an application of quantum error correction to effectively enforce a 1-form symmetry.
We consider a one-dimensional spin chain for which the ground state is the cluster state, capable of functioning as a quantum computational wire when subjected to local adaptive measurements of individual qubits, and investigate the robustness of this property to local and coupled (Ising-type) perturbations. We investigate the ground state both by identifying suitable correlation functions as order parameters and numerically using a variational method based on matrix product states. We find that the model retains an infinite localizable entanglement length for Ising and local fields up to a quantum phase transition, but that the resulting entangled state is not simply characterized by a Pauli correction based on the measurement results.
We identify a broad class of physical processes in an optical quantum circuit that can be efficiently simulated on a classical computer: this class includes unitary transformations, amplification, noise, and measurements. This simulatability result places powerful constraints on the capability to realize exponential quantum speedups, as well as on inducing an optical nonlinear transformation via linear optics, photodetection-based measurement, and classical feedforward of measurement results, on optimal cloning, and on a wide range of other processes.
Vast numbers of qubits will be needed for large-scale quantum computing because of the overheads associated with error correction. We present a scheme for low-overhead fault-tolerant quantum computation based on quantum low-density parity-check (LDPC) codes, where long-range interactions enable many logical qubits to be encoded with a modest number of physical qubits. In our approach, logic gates operate via logical Pauli measurements that preserve both the protection of the LDPC codes and the low overheads in terms of the required number of additional qubits. Compared with surface codes with the same code distance, we estimate order-of-magnitude improvements in the overheads for processing around 100 logical qubits using this approach. Given the high thresholds demonstrated by LDPC codes, our estimates suggest that fault-tolerant quantum computation at this scale may be achievable with a few thousand physical qubits at comparable error rates to what is needed for current approaches.
We show that nonexponential fidelity decays in randomized benchmarking experiments on quantum-dot qubits are consistent with numerical simulations that incorporate low-frequency noise and correspond to a control fidelity that varies slowly with time. By expanding standard randomized benchmarking analysis to this experimental regime, we find that such nonexponential decays are better modeled by multiple exponential decay rates, leading to an instantaneous control fidelity for isotopically purified silicon metal-oxide-semiconductor quantum-dot qubits which is 98.9% when the low-frequency noise causes large detuning but can be as high as 99.9% when the qubit is driven on resonance and system calibrations are favorable. These advances in qubit characterization and validation methods underpin the considerable prospects for silicon as a qubit platform for fault-tolerant quantum computation.
We consider the process of changing reference frames in the case where the reference frames are quantum systems. We find that, as part of this process, decoherence is necessarily induced on any quantum system described relative to these frames. We explore this process with examples involving reference frames for phase and orientation. Quantifying the effect of changing quantum reference frames provides a theoretical description for this process in quantum experiments, and serves as a first step in developing a relativity principle for theories in which all objects including reference frames are necessarily quantum.
We introduce the entangled coherent-state representation, which provides a powerful technique for efficiently and elegantly describing and analyzing quantum optics sources and detectors while respecting the photon-number superselection rule that is satisfied by all known quantum optics experiments. We apply the entangled coherent-state representation to elucidate and resolve the long-standing puzzles of the coherence of a laser output field, interference between two number states, and dichotomous interpretations of quantum teleportation of coherent states.
We show that private shared reference frames can be used to perform private quantum and private classical communication over a public quantum channel. Such frames constitute a type of private shared correlation, distinct from private classical keys or shared entanglement, useful for cryptography. We present optimally efficient schemes for private quantum and classical communication given a finite number of qubits transmitted over an insecure channel and given a private shared Cartesian frame and/or a private shared reference ordering of the qubits. We show that in this context, it is useful to introduce the concept of a decoherence-full subsystem, wherein every state is mapped to the completely mixed state under the action of the decoherence.
The two-dimensional cluster state, a universal resource for measurement-based quantum computation, is also the gapped ground state of a short-ranged Hamiltonian. Here, we examine the effect of perturbations to this Hamiltonian. We prove that, provided the perturbation is sufficiently small and respects a certain symmetry, the perturbed ground state remains a universal resource. We do this by characterising the operation of an adaptive measurement protocol throughout a suitable symmetry-protected quantum phase, relying on generic properties of the phase rather than any analytic control over the ground state.
Single-spin measurements on the ground state of an interacting spin lattice can be used to perform a quantum computation. We show how such measurements can mimic renormalization group transformations and remove the short-ranged variations of the state that can reduce the fidelity of a computation. This suggests that the quantum computational ability of a spin lattice could be a robust property of a quantum phase. We illustrate our idea with the ground state of a rotationally invariant spin-1 chain, which can serve as a quantum computational wire not only at the Affleck-Kennedy-Lieb-Tasaki point, but within the Haldane phase.
Contextuality is a key characteristic that separates quantum from classical phenomena and an important tool in understanding the potential advantage of quantum computation. However, when assessing the quantum resources available for quantum information processing, there is no formalism to determine whether a set of states can exhibit contextuality and whether such proofs of contextuality indicate anything about the resourcefulness of that set. Introducing a well-motivated notion of what it means for a set of states to be contextual, we establish a relationship between contextuality and antidistinguishability of sets of states. We go beyond the traditional notions of contextuality and antidistinguishability and treat both properties as resources, demonstrating that the degree of contextuality within a set of states has a direct connection to its level of antidistinguishability. If a set of states is contextual, then it must be weakly antidistinguishable and vice versa. However, maximal contextuality emerges as a stronger property than traditional antidistinguishability.
Fault-tolerant quantum computation demands significant resources: large numbers of physical qubits must be checked for errors repeatedly to protect quantum data as logic gates are implemented in the presence of noise. We demonstrate that an approach based on the color code can lead to considerable reductions in the resource overheads compared with conventional methods, while remaining compatible with a two-dimensional layout. We propose a lattice surgery scheme that exploits the rich structure of the color-code phase to perform arbitrary pairs of commuting logical Pauli measurements in parallel while keeping the space cost low. Compared to lattice surgery schemes based on the surface code with the same code distance, and assuming the same amount of time is needed to complete a round of syndrome measurements, our approach yields about a $3\times$ improvement in the space-time overhead, obtained from a combination of a $1.5\times$ improvement in spatial overhead together with a $2\times$ speedup due to the parallelization of commuting logical measurements. Even when taking into account the color code's lower error threshold using current decoders, the overhead is reduced by 10% at a physical error rate of $10^{-3}$ and by 50% at $10^{-4}$.
We identify regimes where post-selection can be used scalably in quantum error correction (QEC) to improve performance. We use statistical mechanical models to analytically quantify the performance and thresholds of post-selected QEC, with a focus on the surface code. Based on the non-equilibrium magnetization of these models, we identify a simple heuristic technique for post-selection that does not require a decoder. Along with performance gains, this heuristic allows us to derive analytic expressions for post-selected conditional logical thresholds and abort thresholds of surface codes. We find that such post-selected QEC is characterised by four distinct thermodynamic phases, and detail the implications of this phase space for practical, scalable quantum computation.
Fault-tolerant implementation of non-Clifford gates is a major challenge for achieving universal fault-tolerant quantum computing with quantum error-correcting codes. Magic state distillation is the most well-studied method for this but requires significant resources. Hence, it is crucial to tailor and optimize magic state distillation for specific codes from both logical- and physical-level perspectives. In this work, we perform such optimization for two-dimensional color codes, which are promising due to their higher encoding rates compared to surface codes, transversal implementation of Clifford gates, and efficient lattice surgery. We propose two distillation schemes based on the 15-to-1 distillation circuit and lattice surgery, which differ in their methods for handling faulty rotations. Our first scheme uses faulty T-measurement, offering resource efficiency when the target infidelity is above a certain threshold ($\sim 35p^3$ for physical error rate $p$). To achieve lower infidelities while maintaining resource efficiency, our second scheme exploits a distillation-free fault-tolerant magic state preparation protocol, achieving significantly lower infidelities (e.g., $\sim 10^{-19}$ for $p = 10^{-4}$) than the first scheme. Notably, our schemes outperform the best existing magic state distillation methods for color codes by up to about two orders of magnitude in resource costs for a given achievable target infidelity.
Achieving high-fidelity entangling operations between qubits consistently is essential for the performance of multi-qubit systems. Solid-state platforms are particularly exposed to errors arising from materials-induced variability between qubits, which leads to performance inconsistencies. Here we study the errors in a spin qubit processor, tying them to their physical origins. We use this knowledge to demonstrate consistent and repeatable operation with above 99% fidelity of two-qubit gates in the technologically important silicon metal-oxide-semiconductor quantum dot platform. Analysis of the physical errors and fidelities in multiple devices over extended periods allows us to ensure that we capture the variation and the most common error types. Physical error sources include the slow nuclear and electrical noise on single qubits and contextual noise that depends on the applied control sequence. Furthermore, we investigate the impact of qubit design, feedback systems and robust gate design to inform the design of future scalable, high-fidelity control strategies. Our results highlight both the capabilities and challenges for the scaling-up of silicon spin-based qubits into full-scale quantum processors.
Quantum error correcting codes protect quantum information, allowing for large quantum computations provided that physical error rates are sufficiently low. We combine post-selection with surface code error correction through the use of a parameterized family of exclusive decoders, which are able to abort on decoding instances that are deemed too difficult. We develop new numerical sampling methods to quantify logical failure rates with exclusive decoders as well as the trade-off in terms of the amount of post-selection required. For the most discriminating of exclusive decoders, we demonstrate a threshold of $50\%$ under depolarizing noise for the surface code (or $32(1)\%$ for the fault-tolerant case with phenomenological measurement errors), and up to a quadratic improvement in logical failure rates below threshold. Furthermore, surprisingly, with a modest exclusion criterion, we identify a regime at low error rates where the exclusion rate decays with code distance, providing a pathway for scalable and time-efficient quantum computing with post-selection. We apply our exclusive decoder to the 15-to-1 magic state distillation protocol, and report a $75\%$ reduction in the number of physical qubits required, and a $60\%$ reduction in the total spacetime volume required, including accounting for repetitions required for post-selection. We also consider other applications, as an error mitigation technique, and in concatenated schemes. Our work highlights the importance of post-selection as a powerful tool in quantum error correction.
Two-dimensional color codes are a promising candidate for fault-tolerant quantum computing, as they have high encoding rates, transversal implementation of logical Clifford gates, and high feasibility of magic state constructions. However, decoding color codes presents a significant challenge due to their structure, where elementary errors violate three checks instead of just two (a key feature exploited in surface code decoding), and the complexity of syndrome extraction is greater. We introduce an efficient color-code decoder that tackles these issues by combining two matching decoders for each color, generalized to handle circuit-level noise by employing detector error models. We provide comprehensive analyses of the decoder, covering its threshold and sub-threshold scaling both for bit-flip noise with ideal measurements and for circuit-level noise. Our simulations reveal that this decoding strategy nearly reaches the best possible scaling of the logical failure rate ($p_\mathrm{fail} \sim p^{d/2}$) for both noise models, where $p$ is the noise strength, in the regime of interest for fault-tolerant quantum computing. While its noise thresholds are comparable with those of other matching-based decoders for color codes (8.2% for bit-flip noise and 0.46% for circuit-level noise), the scaling of logical failure rates below threshold significantly outperforms the best matching-based decoders.
Fault-tolerant architectures aim to reduce the noise of a quantum computation. Despite such architectures being well studied, a detailed understanding of how noise is transformed in a fault-tolerant primitive such as magic state injection is currently lacking. We use numerical simulations of logical process tomography on a fault-tolerant gadget that implements a logical $T = Z(\pi/8)$ gate using magic state injection to understand how noise characteristics at the physical level are transformed into noise characteristics at the logical level. We show how, in this gadget, a significant phase ($Z$) bias can arise in the logical noise, even with unbiased noise at the physical level. While the magic state injection gadget intrinsically induces biased noise, with existing phase bias being further amplified at the logical level, we identify noisy error correction circuits as a key limiting factor on the magnitude of this logical noise bias. Our approach provides a framework for assessing the detailed noise characteristics, as well as the overall performance, of fault-tolerant logical primitives.
Storing quantum information in a quantum error correction code can protect it from errors, but the ability to transform the stored quantum information in a fault-tolerant way is equally important. Logical Pauli group operators can be implemented on Calderbank-Shor-Steane (CSS) codes, a commonly-studied category of codes, by applying a series of physical Pauli X and Z gates. Logical operators of this form are fault-tolerant because each qubit is acted upon by at most one gate, limiting the spread of errors, and are referred to as transversal logical operators. Identifying transversal logical operators outside the Pauli group is less well understood. Pauli operators are the first level of the Clifford hierarchy, which is deeply connected to fault-tolerance and universality. In this work, we study transversal logical operators composed of single- and multi-qubit diagonal Clifford hierarchy gates. We demonstrate algorithms, more general or of lower computational complexity than previous methods, for identifying all transversal diagonal logical operators on a CSS code. We also show a method for constructing CSS codes that have a desired diagonal logical Clifford hierarchy operator implemented using single-qubit phase gates. Our methods rely on representing operators composed of diagonal Clifford hierarchy gates as diagonal XP operators, and this technique may have broader applications.
Current hardware for quantum computing suffers from high levels of noise, and so achieving practical fault-tolerant quantum computing will require powerful and efficient methods to correct errors in quantum circuits. Here, we explore the role and effectiveness of using noise tailoring techniques to improve the performance of error-correcting codes. Noise tailoring methods such as randomized compiling (RC) convert complex coherent noise processes into effective stochastic noise. While it is known that this can be leveraged to design efficient diagnostic tools, we explore its impact on the performance of error-correcting codes. Of particular interest is the important class of coherent errors, arising from control errors, where RC has the maximum effect, converting these into purely stochastic errors. For these errors, we show that RC delivers an improvement in the performance of the concatenated Steane code by several orders of magnitude. We also show that below a threshold rotation angle, the gains in logical fidelity can be arbitrarily magnified by increasing the size of the code. These results suggest that using randomized compiling can lead to a significant reduction in the resource overhead required to achieve fault tolerance.
A fault-tolerant quantum computer will be supported by a classical decoding system interfacing with quantum hardware to perform quantum error correction. It is important that the decoder can keep pace with the quantum clock speed, within the limitations on communication that are imposed by the physical architecture. To this end, we propose a local `pre-decoder', which makes greedy corrections to reduce the amount of syndrome data sent to a standard matching decoder. We study these classical overheads for the surface code under a phenomenological phase-flip noise model with imperfect measurements. We find substantial improvements in the runtime of the global decoder and in the communication bandwidth by using the pre-decoder. For instance, to achieve a logical failure probability of $f = 10^{-15}$ using qubits with physical error rate $p = 10^{-3}$ and a distance $d=22$ code, we find that the bandwidth cost is reduced by a factor of $1000$, and the time taken by a matching decoder is sped up by a factor of $200$. To achieve this target failure probability, the pre-decoding approach requires a $50\%$ increase in the qubit count compared with the optimal decoder.
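The greedy pre-decoding idea can be illustrated in a one-dimensional toy model; this is a sketch under simplifying assumptions, not the scheme's exact two-dimensional matching geometry.

# Toy sketch of greedy local pre-decoding on a 1D slice (phase-flip defects
# at integer sites): annihilate adjacent defect pairs locally and forward
# only the residual syndrome to the global matching decoder.
def pre_decode(defect_sites):
    defects = sorted(defect_sites)
    residual, local_corrections = [], []
    i = 0
    while i < len(defects):
        if i + 1 < len(defects) and defects[i + 1] - defects[i] == 1:
            local_corrections.append((defects[i], defects[i + 1]))  # fix locally
            i += 2
        else:
            residual.append(defects[i])  # isolated: defer to the global decoder
            i += 1
    return residual, local_corrections

# Example: defects at sites 2, 3 are paired locally; site 7 is forwarded.
print(pre_decode([3, 2, 7]))  # -> ([7], [(2, 3)])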
Achieving high-fidelity entangling operations between qubits consistently is essential for the performance of multi-qubit systems and is a crucial factor in achieving fault-tolerant quantum processors. Solid-state platforms are particularly exposed to errors due to materials-induced variability between qubits, which leads to performance inconsistencies. Here we study the errors in a spin qubit processor, tying them to their physical origins. We leverage this knowledge to demonstrate consistent and repeatable operation of two-qubit gates with fidelity above 99% in the technologically important silicon metal-oxide-semiconductor (SiMOS) quantum dot platform. We undertake a detailed study of these operations by analysing the physical errors and fidelities in multiple devices over numerous trials and extended periods, ensuring that we capture the variation and the most common error types. Physical error sources include the slow nuclear and electrical noise on single qubits and contextual noise. The identification of the noise sources can be used to maintain performance within tolerance as well as to inform future device fabrication. Furthermore, we investigate the impact of qubit design, feedback systems, and robust gates on implementing scalable, high-fidelity control strategies. Using three different characterization methods, we measure entangling gate fidelities ranging from 96.8% to 99.8%. Our analysis tools identify the causes of qubit degradation and offer ways to understand their physical mechanisms. These results highlight both the capabilities and challenges for the scaling up of silicon spin-based qubits into full-scale quantum processors.
To push gate performance to levels beyond the thresholds for quantum error correction, it is important to characterize the error sources affecting quantum gates. However, the characterization of non-Markovian errors poses a challenge to current quantum process tomography techniques. Fast Bayesian Tomography (FBT) is a self-consistent gate set tomography protocol that can be bootstrapped from earlier characterization knowledge and updated in real time with arbitrary gate sequences. Here we demonstrate how FBT allows for the characterization of key non-Markovian error processes. We introduce two experimental protocols for FBT to diagnose the non-Markovian behavior of two-qubit systems on silicon quantum dots. To increase the efficiency and scalability of the experiment-analysis loop, we develop an online FBT software stack. To reduce experiment cost and analysis time, we also introduce a native readout method and a warm boot strategy. Our results demonstrate that FBT is a useful tool for probing non-Markovian errors that can be detrimental to the ultimate realization of fault-tolerant operation in quantum computing.
The encoding of qubits in semiconductor spin carriers has been recognised as a promising approach to a commercial quantum computer that can be lithographically produced and integrated at scale. However, the operation of the large number of qubits required for advantageous quantum applications will produce a thermal load exceeding the available cooling power of cryostats at millikelvin temperatures. As the scale-up accelerates, it becomes imperative to establish fault-tolerant operation above 1 kelvin, where the cooling power is orders of magnitude higher. Here, we tune up and operate spin qubits in silicon above 1 kelvin, with fidelities in the range required for fault-tolerant operation at such temperatures. We design an algorithmic initialisation protocol to prepare a pure two-qubit state even when the thermal energy is substantially above the qubit energies, and incorporate radio-frequency readout to achieve fidelities up to 99.34 per cent for both readout and initialisation. Importantly, we demonstrate a single-qubit Clifford gate fidelity of 99.85 per cent, and a two-qubit gate fidelity of 98.92 per cent. These advances overcome the fundamental limitation that the thermal energy must be well below the qubit energies for high-fidelity operation to be possible, surmounting a major obstacle in the pathway to scalable and fault-tolerant quantum computation.
For certain restricted computational tasks, quantum mechanics provides a provable advantage over any possible classical implementation. Several of these results have been proven using the framework of measurement-based quantum computation (MBQC), where nonlocality and more generally contextuality have been identified as necessary resources for certain quantum computations. Here, we consider the computational power of MBQC in more detail by refining its resource requirements, both on the allowed operations and the number of accessible qubits. More precisely, we identify which Boolean functions can be computed in non-adaptive MBQC, with local operations contained within a finite level in the Clifford hierarchy. Moreover, for non-adaptive MBQC restricted to certain subtheories such as stabiliser MBQC, we compute the minimal number of qubits required to compute a given Boolean function. Our results point towards hierarchies of resources that more sharply characterise the power of MBQC beyond the binary of contextuality vs non-contextuality.
Fault-tolerant quantum computing will require accurate estimates of the resource overhead under different error correction strategies, but standard metrics such as gate fidelity and diamond distance have been shown to be poor predictors of logical performance. We present a scalable experimental approach based on Pauli error reconstruction to predict the performance of concatenated codes. Numerical evidence demonstrates that our method significantly outperforms predictions based on standard error metrics for various error models, even with limited data. We illustrate how this method assists in the selection of error correction schemes.
We propose an extension to the Pauli stabiliser formalism that includes fractional $2\pi/N$ rotations around the $Z$ axis for some integer $N$. The resulting generalised stabiliser formalism - denoted the XP stabiliser formalism - allows for a wider range of states and codespaces to be represented. We describe the states which arise in the formalism, and demonstrate an equivalence between XP stabiliser states and 'weighted hypergraph states' - a generalisation of both hypergraph and weighted graph states. Given an arbitrary set of XP operators, we present algorithms for determining the codespace and logical operators for an XP code. Finally, we consider whether measurements of XP operators on XP codes can be classically simulated.
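For concreteness, here is a small numerical sketch of such generalised operators, assuming the conventions $\omega = e^{i\pi/N}$, $P = \mathrm{diag}(1, \omega^2)$, and $XP_N(p|\mathbf{x}|\mathbf{z}) = \omega^p \prod_i X^{x_i} P^{z_i}$; these should be checked against the formalism's own definitions.

# Numerical sketch of an XP operator XP_N(p | x | z) on len(x) qubits,
# under the conventions stated above (assumed, not taken from the paper).
import numpy as np
from functools import reduce

def xp_operator(N, p, x, z):
    omega = np.exp(1j * np.pi / N)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    I = np.eye(2, dtype=complex)
    def site(xi, zi):
        P = np.diag([1, omega ** (2 * zi)])  # phase operator diag(1, omega^2)^zi
        return (X if xi else I) @ P
    return omega ** p * reduce(np.kron, [site(xi, zi) for xi, zi in zip(x, z)])

# Sanity check: at precision N = 2 the formalism reduces to the Pauli group,
# e.g. XP_2(1 | 1 | 1) = i * X @ Z, which equals Pauli Y.
print(np.round(xp_operator(2, 1, [1], [1]), 3))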
Coupling qubits to a superconducting resonator provides a mechanism to enable long-distance entangling operations in a quantum computer based on spins in semiconducting materials. Here, we demonstrate a controllable spin-photon coupling based on a longitudinal interaction between a spin qubit and a resonator. We show that coupling a singlet-triplet qubit to a high-impedance superconducting resonator can produce the desired longitudinal coupling when the qubit is driven near the resonator's frequency. We measure the energy splitting of the qubit as a function of the drive amplitude and frequency of a microwave signal applied near the resonator antinode, revealing pronounced effects close to the resonator frequency due to longitudinal coupling. By tuning the amplitude of the drive, we reach a regime with longitudinal coupling exceeding 1 MHz. This mechanism for qubit-resonator coupling represents a stepping stone towards producing high-fidelity two-qubit gates mediated by a superconducting resonator.
We present two classical algorithms for the simulation of universal quantum circuits on $n$ qubits constructed from $c$ instances of Clifford gates and $t$ arbitrary-angle $Z$-rotation gates such as $T$ gates. Our algorithms complement each other by performing best in different parameter regimes. The Estimate algorithm produces an additive-precision estimate of the Born-rule probability of a chosen measurement outcome, with the only source of run-time inefficiency being a linear dependence on the stabilizer extent (which scales approximately as $1.17^t$ for $T$ gates). Our algorithm is state of the art for this task: as an example, in approximately 13 h (on a standard desktop computer), we estimate the Born-rule probability to within an additive error of 0.03 for a 50-qubit, 60 non-Clifford gate quantum circuit with more than 2000 Clifford gates. Our second algorithm, Compute, calculates the probability of a chosen measurement outcome to machine precision with run time $O(2^{t-r} t)$, where $r$ is an efficiently computable, circuit-specific quantity. With high probability, $r$ is very close to $\min\{t, n-w\}$ for random circuits with many Clifford gates, where $w$ is the number of measured qubits. Compute can be effective in surprisingly challenging parameter regimes, e.g., we can randomly sample Clifford+T circuits with $n=55$, $w=5$, $c=10^5$, and $t=80$ $T$ gates, and then compute the Born-rule probability with a run time consistently less than 10 min using a single core of a standard desktop computer. We provide a C+Python implementation of our algorithms and benchmark them using random circuits, the hidden-shift algorithm, and the quantum approximate optimization algorithm.
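A back-of-envelope reading of the quoted Compute instance, using only the abstract's own numbers and ignoring polynomial factors, shows why it is tractable:

# With n = 55 and w = 5 measured qubits, r is typically close to
# min(t, n - w) = 50, so the dominant cost scales like 2^(t - r).
t, n, w = 80, 55, 5
r = min(t, n - w)  # r ~ 50 with high probability, per the abstract
print(f"2^(t - r) = 2^{t - r} ~ {2 ** (t - r):.2e} basis states")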
Vast numbers of qubits will be needed for large-scale quantum computing because of the overheads associated with error correction. We present a scheme for low-overhead fault-tolerant quantum computation based on quantum low-density parity-check (LDPC) codes, where long-range interactions enable many logical qubits to be encoded with a modest number of physical qubits. In our approach, logic gates operate via logical Pauli measurements that preserve both the protection of the LDPC codes and the low overheads in terms of the required number of additional qubits. Compared with surface codes with the same code distance, we estimate order-of-magnitude improvements in the overheads for processing around 100 logical qubits using this approach. Given the high thresholds demonstrated by LDPC codes, our estimates suggest that fault-tolerant quantum computation at this scale may be achievable with a few thousand physical qubits at comparable error rates to what is needed for current approaches.
Complete characterization of the errors that occur in using sets of logic gates is critical to developing the technology of fault-tolerant quantum computing, but current tomography methods are either slow or include unchecked assumptions. This study presents a self-consistent method for process tomography that is both fast and flexible. The technique complements the broad suite of existing characterization tools, and may potentially allow for pulse optimization to further increase gate fidelities.
The robust generation and manipulation of entangled multiphoton states on chip has an essential role in quantum computation and communication. Lattice topology has emerged as a means of protecting photonic states from disorder, but entanglement across different topologies remains unexplored. We report biphoton entanglement between topologically distinct spatial modes in a bipartite array of silicon waveguides. The results highlight topology as an additional degree of freedom for entanglement and open avenues for investigating information teleportation between trivial and topological modes.
The quantum logic gates used in the design of a quantum computer should be both universal, meaning arbitrary quantum computations can be performed, and fault-tolerant, meaning the gates keep errors from cascading out of control. A number of no-go theorems constrain the ways in which a set of fault-tolerant logic gates can be universal. These theorems are very restrictive, and conventional wisdom holds that a universal fault-tolerant logic gate set cannot be implemented natively, requiring us to use costly distillation procedures for quantum computation. Here, we present a general framework for universal fault-tolerant logic with stabilizer codes, together with a no-go theorem that reveals the very broad conditions constraining such gate sets. Our theorem applies to a wide range of stabilizer code families, including concatenated codes and conventional topological stabilizer codes such as the surface code. The broad applicability of our no-go theorem provides a new perspective on how the constraints on universal fault-tolerant gate sets can be overcome. In particular, we show how nonunitary implementations of logic gates provide a general approach to circumvent the no-go theorem, and we present a rich landscape of constructions for logic gate sets that are both universal and fault-tolerant. That is, rather than restricting what is possible, our no-go theorem provides a signpost to guide us to new, efficient architectures for fault-tolerant quantum computing.
Fault-tolerant quantum computation demands significant resources: large numbers of physical qubits must be checked for errors repeatedly to protect quantum data as logic gates are implemented in the presence of noise. We demonstrate that an approach based on the color code can lead to considerable reductions in the resource overheads compared with conventional methods, while remaining compatible with a two-dimensional layout. We propose a lattice surgery scheme that exploits the rich structure of the color-code phase to perform arbitrary pairs of commuting logical Pauli measurements in parallel while keeping the space cost low. Compared to lattice surgery schemes based on the surface code with the same code distance, our approach yields about a $3\times$ improvement in the space-time overhead, obtained from a combination of a $1.5\times$ improvement in spatial overhead together with a $2\times$ speedup due to the parallelisation of commuting logical measurements. Even when taking into account the color code's lower error threshold using current decoders, the overhead is reduced by 10\% at a physical error rate of $10^{-3}$ and by 50\% at $10^{-4}$.
The manipulation of topologically-ordered phases of matter to encode and process quantum information forms the cornerstone of many approaches to fault-tolerant quantum computing. Here we demonstrate that fault-tolerant logical operations in these approaches can be interpreted as instances of anyon condensation. We present a constructive theory for anyon condensation and, in tandem, illustrate our theory explicitly using the color-code model. We show that different condensation processes are associated with a general class of domain walls, which can exist in both space- and time-like directions. This class includes semi-transparent domain walls that condense certain subsets of anyons. We use our theory to classify topological objects and design novel fault-tolerant logic gates for the color code. As a final example, we also argue that dynamical `Floquet codes' can be viewed as a series of condensation operations. We propose a general construction for realising planar dynamically driven codes based on condensation operations on the color code. We use our construction to introduce a new Calderbank-Shor-Steane-type Floquet code that we call the Floquet color code.
Benchmarking and characterising quantum states and logic gates is essential in the development of devices for quantum computing. We introduce a Bayesian approach to self-consistent process tomography, called fast Bayesian tomography (FBT), and experimentally demonstrate its performance in characterising a two-qubit gate set on a silicon-based spin qubit device. FBT is built on an adaptive self-consistent linearisation that is robust to model approximation errors. Our method offers several advantages over other self-consistent tomographic methods. Most notably, FBT can leverage prior information from randomised benchmarking (or other characterisation measurements), and can be performed in real time, providing continuously updated estimates of full process matrices while data is acquired.
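The kind of update such a linearised estimator performs can be sketched generically; the following is a textbook linear-Gaussian Bayesian update offered as an illustration of the idea, not FBT's actual implementation.

# Generic linear-Gaussian Bayesian update (illustrative only): prior
# N(mu, Sigma) over vectorised gate-error parameters theta; one measurement
# is modelled as y = a . theta + noise with variance r.
import numpy as np

def bayes_update(mu, Sigma, a, y, r):
    s = a @ Sigma @ a + r            # predictive variance of the outcome
    k = Sigma @ a / s                # gain vector
    mu_post = mu + k * (y - a @ mu)  # shift the mean towards the observation
    Sigma_post = Sigma - np.outer(k, a @ Sigma)  # shrink the uncertainty
    return mu_post, Sigma_post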
A fault-tolerant quantum processor may be configured using stationary qubits interacting only with their nearest neighbours, but at the cost of significant overheads in physical qubits per logical qubit. Such overheads could be reduced by coherently transporting qubits across the chip, allowing connectivity beyond immediate neighbours. Here we demonstrate high-fidelity coherent transport of an electron spin qubit between quantum dots in isotopically-enriched silicon. We observe qubit precession in the inter-site tunnelling regime and assess the impact of qubit transport using Ramsey interferometry and quantum state tomography techniques. We report a polarization transfer fidelity of 99.97% and an average coherent transfer fidelity of 99.4%. Our results provide key elements for high-fidelity, on-chip quantum information distribution, as long envisaged, reinforcing the scaling prospects of silicon-based spin qubits.
Electron spins in semiconductor quantum dots have been intensively studied for implementing quantum computation, and high-fidelity single- and two-qubit operations have recently been achieved. Quantum teleportation is a three-qubit protocol exploiting quantum entanglement, and it serves as an essential primitive for more sophisticated quantum algorithms. Here, we demonstrate a scheme for quantum teleportation based on direct Bell measurement for a single electron spin qubit in a triple quantum dot, utilizing the Pauli exclusion principle to create and detect maximally entangled states. The single-spin polarization is teleported from the input qubit to the output qubit with a fidelity of 0.91. We find this fidelity is primarily limited by singlet-triplet mixing, which can be improved by optimizing the device parameters. Our results may be extended to quantum algorithms with a larger number of semiconductor spin qubits.
Performing large calculations with a quantum computer will likely require a fault-tolerant architecture based on quantum error-correcting codes. The challenge is to design practical quantum error-correcting codes that perform well against realistic noise using modest resources. Here we show that a variant of the surface code, the XZZX code, offers remarkable performance for fault-tolerant quantum computation. The error threshold of this code matches what can be achieved with random codes (hashing) for every single-qubit Pauli noise channel; it is the first explicit code shown to have this universal property. We present numerical evidence that the threshold even exceeds this hashing bound for an experimentally relevant range of noise parameters. Focusing on the common situation where qubit dephasing is the dominant noise, we show that this code has a practical, high-performance decoder and surpasses all previously known thresholds in the realistic setting where syndrome measurements are unreliable. We go on to demonstrate the favourable sub-threshold resource scaling that can be obtained by specialising a code to exploit structure in the noise. We show that it is possible to maintain all of these advantages when we perform fault-tolerant quantum computation.
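For reference, the code's name reflects its plaquette stabilisers: each face supports a check of the form $S_f = X \otimes Z \otimes Z \otimes X$ (in one qubit-ordering convention, assumed here for illustration). Conjugating by Hadamards on one sublattice, $(\mathbb{1} \otimes H \otimes H \otimes \mathbb{1})\, S_f \,(\mathbb{1} \otimes H \otimes H \otimes \mathbb{1}) = X^{\otimes 4}$, recovers a familiar CSS surface-code check, which is why the XZZX code is locally Clifford-equivalent to the standard surface code.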
We analyze a readout scheme for Majorana qubits based on dispersive coupling to a resonator. We consider two variants of Majorana qubits: the Majorana transmon and the Majorana box qubit. In both cases, the qubit-resonator interaction can produce sizeable dispersive shifts in the megahertz range for reasonable system parameters, allowing for submicrosecond readout with high fidelity. For Majorana transmons, the light-matter interaction used for readout manifestly conserves Majorana parity, which leads to a notion of quantum nondemolition (QND) readout that is stronger than for conventional charge qubits. In contrast, Majorana box qubits only recover an approximately QND readout mechanism in the dispersive limit where the resonator detuning is large. We also compare dispersive readout to longitudinal readout for the Majorana box qubit. We show that the latter gives faster and higher fidelity readout for reasonable parameters, while having the additional advantage of being manifestly QND, and so may prove to be a better readout mechanism for these systems.
Symmetry and topology can provide for self-correcting quantum codes in a 3D topologically ordered spin lattice, a key insight for building quantum memories that correct their own errors without active … Symmetry and topology can provide for self-correcting quantum codes in a 3D topologically ordered spin lattice, a key insight for building quantum memories that correct their own errors without active control.
Universal quantum computing by braiding defects in topological stabilizer codes of any dimension is proven to be impossible. Notwithstanding this no-go theorem, it is shown how braiding defects can yield … Universal quantum computing by braiding defects in topological stabilizer codes of any dimension is proven to be impossible. Notwithstanding this no-go theorem, it is shown how braiding defects can yield all Clifford gates in three or more dimensions, and that universal quantum computing in three-dimensional surface codes is possible by supplementing braiding with adaptive gates.
Noise in quantum computing is countered with quantum error correction. Achieving optimal performance will require tailoring codes and decoding algorithms to account for features of realistic noise, such as the common situation where the noise is biased towards dephasing. Here we introduce an efficient high-threshold decoder for a noise-tailored surface code based on minimum-weight perfect matching. The decoder exploits the symmetries of its syndrome under the action of biased noise and generalizes to the fault-tolerant regime where measurements are unreliable. Using this decoder, we obtain fault-tolerant thresholds in excess of 6% for a phenomenological noise model in the limit where dephasing dominates. These gains persist even for modest noise biases: we find a threshold of ∼5% in an experimentally relevant regime where dephasing errors occur at a rate 100 times greater than bit-flip errors.
Investigating the classical simulability of quantum circuits provides a promising avenue towards understanding the computational power of quantum systems. Whether a class of quantum circuits can be efficiently simulated with a probabilistic classical computer, or is provably hard to simulate, depends quite critically on the precise notion of "classical simulation" and in particular on the required accuracy. We argue that a notion of classical simulation, which we call epsilon-simulation, captures the essence of possessing "equivalent computational power" as the quantum system it simulates: It is statistically impossible to distinguish an agent with access to an epsilon-simulator from one possessing the simulated quantum system. We relate epsilon-simulation to various alternative notions of simulation predominantly focusing on a simulator we call a poly-box. A poly-box outputs 1/poly precision additive estimates of Born probabilities and marginals. This notion of simulation has gained prominence through a number of recent simulability results. Accepting some plausible computational theoretic assumptions, we show that epsilon-simulation is strictly stronger than a poly-box by showing that IQP circuits and unconditioned magic-state injected Clifford circuits are both hard to epsilon-simulate and yet admit a poly-box. In contrast, we also show that these two notions are equivalent under an additional assumption on the sparsity of the output distribution (poly-sparsity).
The surface code, with a simple modification, exhibits ultra-high error correction thresholds when the noise is biased towards dephasing. Here, we identify features of the surface code responsible for these ultra-high thresholds. We provide strong evidence that the threshold error rate of the surface code tracks the hashing bound exactly for all biases, and show how to exploit these features to achieve significant improvement in logical failure rate. First, we consider the infinite bias limit, meaning pure dephasing. We prove that the error threshold of the modified surface code for pure dephasing noise is $50\%$, i.e., that all qubits are fully dephased, and this threshold can be achieved by a polynomial time decoding algorithm. We demonstrate that the sub-threshold behavior of the code depends critically on the precise shape and boundary conditions of the code. That is, for rectangular surface codes with standard rough/smooth open boundaries, it is controlled by the parameter $g=\gcd(j,k)$, where $j$ and $k$ are dimensions of the surface code lattice. We demonstrate a significant improvement in logical failure rate with pure dephasing for co-prime codes that have $g=1$, and closely-related rotated codes, which have a modified boundary. The effect is dramatic: the same logical failure rate achievable with a square surface code and $n$ physical qubits can be obtained with a co-prime or rotated surface code using only $O(\sqrt{n})$ physical qubits. Finally, we use approximate maximum likelihood decoding to demonstrate that this improvement persists for a general Pauli noise biased towards dephasing. In particular, comparing with a square surface code, we observe a significant improvement in logical failure rate against biased noise using a rotated surface code with approximately half the number of physical qubits.
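The hashing-bound comparison referenced here can be reproduced numerically. The sketch below assumes the usual bias convention, $\eta = p_z/(p_x + p_y)$ with $p_x = p_y$ and total error rate $p$, so that $\eta = 1/2$ is depolarising noise and $\eta \to \infty$ is pure dephasing; the threshold is the error rate at which the hashing rate $1 - H(1-p, p_x, p_y, p_z)$ hits zero.

# Hashing-bound threshold for Z-biased Pauli noise: the largest p at which
# 1 - H(1-p, p_x, p_y, p_z) is still non-negative, under the convention
# eta = p_z / (p_x + p_y) with p_x = p_y (assumed above).
import math

def hashing_threshold(eta, tol=1e-10):
    def entropy(p):  # Shannon entropy (bits) of the Pauli error distribution
        pz = p * eta / (eta + 1)
        px = py = p / (2 * (eta + 1))
        return -sum(q * math.log2(q) for q in (1 - p, px, py, pz) if q > 0)
    lo, hi = 0.0, 0.5
    while hi - lo > tol:  # bisect: entropy increases with p on this interval
        mid = (lo + hi) / 2
        if entropy(mid) < 1:
            lo = mid
        else:
            hi = mid
    return hi

for eta in (0.5, 10, 100, 1e6):  # 0.5 = depolarising; large eta -> pure dephasing
    print(f"eta = {eta:g}: hashing threshold ~ {hashing_threshold(eta):.4f}")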
Topologically ordered materials may serve as a platform for new quantum technologies, such as fault-tolerant quantum computers. To fulfil this promise, efficient and general methods are needed to discover and classify new topological phases of matter. We demonstrate that deep neural networks (DNNs) augmented with external memory can use the density profiles formed in quantum walks to efficiently identify properties of a topological phase as well as phase transitions. On a trial topologically ordered model, our method's accuracy in identifying topological phases reaches 97.4%, and is shown to be robust to noise in the data. Furthermore, we demonstrate that our trained DNN is able to identify topological phases of a perturbed model and to predict the corresponding shift of topological phase transitions without learning any information about the perturbations in advance. These results demonstrate that our approach is generally applicable and may be used to identify a variety of quantum topological materials.
The Heisenberg exchange interaction between neighboring quantum dots allows precise voltage control over spin dynamics, due to the ability to precisely control the overlap of orbital wavefunctions by gate electrodes. This allows the study of fundamental electronic phenomena and finds applications in quantum information processing. Although spin-based quantum circuits based on short-range exchange interactions are possible, the development of scalable, longer-range coupling schemes constitutes a critical challenge within the spin-qubit community. Approaches based on capacitive coupling and cavity-mediated interactions effectively couple spin qubits to the charge degree of freedom, making them susceptible to electrically-induced decoherence. The alternative is to extend the range of the Heisenberg exchange interaction by means of a quantum mediator. Here, we show that a multielectron quantum dot with 50-100 electrons serves as an excellent mediator, preserving the speed and coherence of the resulting spin-spin coupling while providing several functionalities of practical importance. These include speed (mediated two-qubit rates up to several gigahertz), distance (of order a micrometer), voltage control, the possibility of sweet-spot operation (reducing susceptibility to charge noise), and reversal of the interaction sign (useful for dynamical decoupling from noise).
Braiding defects in topological stabiliser codes has been widely studied as a promising approach to fault-tolerant quantum computing. We present a no-go theorem that places very strong limitations on the potential of such schemes for universal fault-tolerant quantum computing in any spatial dimension. In particular, we show that, for the natural encoding of quantum information in defects in topological stabiliser codes, the set of logical operators implementable by braiding defects is contained in the Clifford group. Indeed, we show that this remains true even when supplemented with locality-preserving logical operators.
Contextuality - the obstruction to describing quantum mechanics in a classical statistical way - has been proposed as a resource that powers quantum computing. The measurement-based model provides a concrete manifestation of contextuality as a computational resource, as follows. If local measurements on a multi-qubit state can be used to evaluate non-linear boolean functions with only linear control processing, then this computation constitutes a proof of strong contextuality - the possible local measurement outcomes cannot all be pre-assigned. However, this connection is restricted to the special case when the local measured systems are qubits, which have unusual properties from the perspective of contextuality. A single qubit cannot allow for a proof of contextuality, unlike higher-dimensional systems, and multiple qubits can allow for state-independent contextuality with only Pauli observables, again unlike higher-dimensional generalisations. Here we identify precisely that strong non-locality is necessary in a qudit measurement-based computation that evaluates high-degree polynomial functions with only linear control. We introduce the concept of local universality, which places a bound on the space of output functions accessible under the constraint of single-qudit measurements. Thus, the partition of a physical system into subsystems plays a crucial role for the increase in computational power. A prominent feature of our setting is that the enabling resources for qubit and qudit measurement-based computations are of the same underlying nature, avoiding the pathologies associated with qubit contextuality.