
Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

Mohan Sarovar and Kevin C Young

Published 31 December 2013 © IOP Publishing and Deutsche Physikalische Gesellschaft
Focus on Coherent Control of Complex Quantum Systems. Citation: Mohan Sarovar and Kevin C Young 2013 New J. Phys. 15 125032. DOI: 10.1088/1367-2630/15/12/125032


Abstract

While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to 'Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)', which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Adiabatic quantum computing (AQC) is an alternative to the conventional circuit model for quantum computing that possesses some distinct advantages. It maintains a many-body quantum system in its ground state while the Hamiltonian of the many-body system is morphed from a simple, typically non-interacting, form to a complex, connected form. The ground state of the final, complex Hamiltonian is designed to encode the solution to the problem being solved (e.g. a satisfiability problem whose constraints are enforced by the Hamiltonian [1, 2]). During this evolution, energy relaxation and dephasing in the eigenbasis do not corrupt the computation and these are two reasons why AQC is believed to have some robustness to environmental fluctuations and noise. However, environmental fluctuations are typically local in space and for a general many-body system such local fluctuations may not only result in energy relaxation and dephasing in the eigenbasis. Thus, the problem of evaluating the robustness of AQC is fundamentally linked to understanding the non-equilibrium dynamics of a many-body open quantum system.

It was recognized in  [3, 4] that one can gain some protection against external fluctuations (errors) by encoding the system state and AQC evolution in a quantum error correcting or detecting code. The properties of stabilizer codes [5] allow one to suppress errors without affecting the adiabatic evolution by energetically penalizing them [3] or inhibiting their action by dynamical decoupling (DD) [4, 6]. Both of these mechanisms are effective for error suppression but perform no error correction. We refer to the suppression methods in [3] as energy gap protection (EGP) and the technique in [4] as DD.

As in the conventional circuit model of quantum computing, error suppression techniques alone are insufficient for achieving fault-tolerant AQC. To have a fault-tolerant construction one requires a method for actively reducing the entropy of the encoded system such as error correction. However, it is a challenge to introduce error correction into AQC, which proceeds by Hamiltonian, continuous-time evolution. In contrast, circuit-model quantum computation has the advantage of being described by a sequence of discrete unitary gates within which error correction mechanisms (e.g. syndrome measurement, correction rotations) can be embedded.

In this work, we view an encoded AQC evolution as a time-dependent, many-body open quantum system, and the task of error correction as entropy reduction of this system. We ask the question: If the entropy reduction is performed by cooling local degrees of freedom (the notion of locality will be made more explicit below), can error correction be achieved in the AQC model of quantum computation? We find that error correction by local cooling can be achieved with a slightly modified AQC model, but at a heavy price.

In the companion paper to this work [7] we present general constructions and arguments for the unification of EGP and DD, for the limitations of error suppression in AQC, and a framework for error correction by local cooling in AQC. In this work, we focus on the derivation of generalized master equations for describing the evolution of an encoded AQC system in the presence of environmental fluctuations. These equations allow one to rigorously simulate encoded AQC evolution in the presence of decoherence and cooling, and obtain bounds on performance. In addition, the conditions required for error correction by local cooling in AQC are naturally revealed in the process of deriving the generalized master equations below.

We emphasize that the error suppression and correction we are considering in this work are relevant for errors due to the coupling to uncontrolled degrees of freedom. We do not consider errors that result from diabatic transitions caused by evolution that is not slow enough (see [7] for a complete discussion of the various failure modes in AQC). Such diabatic errors can be prevented by choosing conservative adiabatic speeds, by engineering the final Hamiltonian to increase energy gaps [8] or by using prior knowledge of energy gaps to adapt the interpolation speed [9–11], although it is not known how to systematically do the latter in general large-scale problems.

Finally, we remark that in order to derive the dynamical equations used in this work we heavily exploit the structure of error correction codes. This is an illustration of a more general point: while understanding the dynamics of many-body open quantum systems is extremely difficult in general, error correction code structure (or in other words, symmetries relevant to the system–environment coupling) can be exploited to make this problem more tractable. This highlights the utility of using quantum coding concepts to describe many-body quantum systems.

The remainder of the paper is organized as follows. Section 2 presents the framework of stabilizer-code-encoded AQC and sets notation. Section 3 derives a master equation capable of capturing the error suppression and correction dynamics of encoded AQC. Section 4 manipulates this master equation to formalize the effects of error suppression by EGP and DD and to quantify the performance of both; in particular, it shows how both methods for error suppression are unified under the same dynamical picture. Section 5 begins with a discussion of approaches to error correction in AQC and points out some challenges; we then set out the Hamiltonian structure required for error correction by local cooling and follow this with a derivation of the non-equilibrium dynamics of encoded, cooled AQC. Section 6 explores thermal stability, a property required for long-term operation of an error-corrected AQC system, and posits a possible definition of fault tolerance in AQC. Finally, in section 7 we conclude with a summary of the results and commentary on the prospects for error correction and fault tolerance in AQC.

2. Encoded adiabatic quantum computing

A general description of a quantum many-body system undergoing adiabatic evolution while coupled to an environment is given by the Hamiltonian $H(t) = H_{\mathrm{AQC}}(t) + \sum _{j=1}^{n_{\mathrm {e}}} E_j\otimes B_j + {H_{\mathrm { B}}}$ . Here HAQC acts on the many-body system and its time dependence implements the adiabatic evolution. In the following we will specialize to the many-body system most relevant to AQC: a collection of n (arbitrarily coupled) qubits. HB is the bath Hamiltonian, Bj is a bath operator and Ej is a single-qubit Pauli error operator (the index j runs over the set of errors, not over the qubits; one qubit can have multiple error operators).

The system–bath interaction terms cause transitions from the adiabatically evolving ground state, and can result in a failed computation. The first step in protecting against these transitions is to encode the system in an error correcting or detecting stabilizer code [5]. The code chosen must be one for which each Ej in the system–bath interaction anti-commutes with at least one of the stabilizer generators. This encoding will enlarge the system's Hilbert space by a factor of 2Ng, where Ng is the number of stabilizer generators of the code. The physical operators, σx,σy,σz, in HAQC are replaced by the code's logical operators, $\bar X, \bar Y, \bar Z$ , and the encoded Hamiltonian then becomes

$\bar{H}(t) = \bar{H}_{\mathrm{AQC}}(t) + H_{\mathrm{C}}(t) + \sum_{j=1}^{N_{\mathrm{e}}} E_j\otimes B_j + H_{\mathrm{B}}.\qquad(1)$
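As a concrete illustration of the detectability condition above (our toy example; the paper does not fix a particular code), consider the 3-qubit bit-flip code, whose stabilizer generators Z₁Z₂ and Z₂Z₃ each anti-commute with every single-qubit bit-flip error:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def anticommutes(A, B):
    return np.allclose(A @ B + B @ A, 0)

# stabilizer generators of the 3-qubit bit-flip code
stabilizers = [kron(Z, Z, I2), kron(I2, Z, Z)]

# single-qubit bit-flip errors E_j
errors = [kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# detectability: every E_j anti-commutes with at least one S_m
assert all(any(anticommutes(E, S) for S in stabilizers) for E in errors)
```

Phase-flip (Z) errors commute with both generators and are thus undetectable by this particular code; suppressing a full single-qubit Pauli error set requires a larger code.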

We have assumed that the system–bath interaction remains qualitatively the same after the encoding, but is extended to Ne > ne terms to correspond to the larger system size. We have also added a control Hamiltonian HC(t), which only acts on the system Hilbert space, and is required for error suppression and correction. The control Hamiltonian can take two forms. EGP [3] chooses a static control Hamiltonian that is a sum of stabilizer generators ${H_{\mathrm { C}}}^{\mathrm { EGP}}(t) = -\alpha \sum _{m=1}^{N_{\mathrm {g}}} S_m$ , with α > 0. States in the codespace are then eigenstates of HC with eigenvalue −αNg, but any state outside the codespace is subject to an energy penalty. For DD-based control, we sequentially apply the generators of the stabilizer group as unitary operators. In this case, the time-dependent control Hamiltonian is most easily written implicitly as the equivalent unitary that it generates

$\mathcal{U}_{\mathrm{C}}^{\mathrm{DD}}(t) = \prod_{k=1}^{K(t)} S_{n_k}.\qquad(2)$

The stabilizers are applied in a particular order, given by the vector n, and $K(t)\in \mathbb {Z}$ counts the number of pulses applied up to time t, so that at time t the last operator applied to the system was $S_{n_{K(t)}}$ . These two control Hamiltonians can be viewed as two extremes of a family of general control Hamiltonians in which the stabilizer generators are applied time-dependently; see [7] for a complete discussion. In the following we shall see that the DD control Hamiltonian is useful for error suppression, while the EGP control Hamiltonian is suitable for both error suppression and error correction.
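The energy penalty mechanism can be made concrete with a small sketch (a toy example assuming the 3-qubit bit-flip code and α = 1, which are our choices, not the paper's): a state hit by an error Ej sits 2αwj above the codespace energy, where wj is the number of generators that anti-commute with Ej.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

alpha = 1.0
S = [kron(Z, Z, I2), kron(I2, Z, Z)]   # stabilizer generators
H_C = -alpha * (S[0] + S[1])           # EGP penalty Hamiltonian

psi = np.zeros(8)
psi[0] = 1.0                           # |000>, a codespace state
E0 = psi @ H_C @ psi                   # codespace energy: -alpha * Ng = -2

err1 = kron(X, I2, I2) @ psi           # X on qubit 1: anti-commutes with one generator (w = 1)
err2 = kron(I2, X, I2) @ psi           # X on qubit 2: anti-commutes with both generators (w = 2)

# the energy penalty for an error E_j is 2 * alpha * w_j
assert np.isclose(err1 @ H_C @ err1 - E0, 2 * alpha)
assert np.isclose(err2 @ H_C @ err2 - E0, 4 * alpha)
```

The differing penalties (2α versus 4α) for the two error locations illustrate why wj appears in the suppression rates derived in section 4.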

For the following discussion it will be useful to define several frames of reference. To do this, first define $\mathcal {U}_n(t_1,t_2) = {\mathrm { exp}}_+(-\frac {\mathrm {i}}{\hbar }\int _{t_1}^{t_2}\mathrm {d}s\, H_n(s))$ , for n∈{B,C}, and $\mathcal {U}_{\mathrm { AQC}}(t_1,t_2) = {\mathrm { exp}}_+(-\frac {\mathrm {i}}{\hbar }\int _{t_1}^{t_2}\mathrm {d}s\, \bar {H}_{\mathrm { AQC}}(s))$ (exp+ denotes the positively time-ordered exponential). For convenience, $\mathcal {U}_n(t) \equiv \mathcal {U}_n(t,0)~\forall n$ . Note that because of the commutation properties of the encoded and control Hamiltonians all of these unitaries commute with each other. The following notation is used for an operator A in the interaction frame with respect to the control and bath: $\skew3\tilde {A}(t) \equiv \mathcal {U}^\dagger _{\mathrm { C}}(t)\mathcal {U}^\dagger _{\mathrm { B}}(t) A~ \mathcal {U}_{\mathrm { C}}(t)\mathcal {U}_{\mathrm { B}}(t)$ , which is typically called the toggling frame. The evolution of states in this frame is generated by the toggling frame Hamiltonian: $\tilde {H}(t) \equiv {\mathcal {U}_{\mathrm { C}}}^\dagger (t){\mathcal {U}_{\mathrm { B}}}^\dagger (t)\left (\bar H(t) -{H_{\mathrm { C}}} -{H_{\mathrm { B}}} \right ){\mathcal {U}_{\mathrm { B}}}(t){\mathcal {U}_{\mathrm { C}}}(t)$ .

It is shown in [7] that the encoded AQC Hamiltonian in the toggling frame looks very similar for the two control scenarios, DD and EGP. Specifically,

$\tilde{H}(t) = \bar{H}_{\mathrm{AQC}}(t) + \sum_{j=1}^{N_{\mathrm{e}}} \tilde{E}_j(t)\otimes\tilde{B}_j(t).\qquad(3)$

The form of the error operators in the toggling frame for the two control scenarios is [7]

$\tilde{E}_j^{\mathrm{DD}}(t) = (-1)^{p(t)}\,E_j,\qquad(4)$

$\tilde{E}_j^{\mathrm{EGP}}(t) = E_j\,\exp\left(\frac{2\mathrm{i}\alpha t}{\hbar}\sum_{\{S_m,E_j\}=0} S_m\right),\qquad(5)$

where p(t) = 0 if $[E_j,{\mathcal {U}_{\mathrm { C}}}^{\mathrm { DD}}(t)]=0$ and p(t) = 1 if $\{E_j,{\mathcal {U}_{\mathrm { C}}}^{\mathrm { DD}}(t)\}=0$ . p(t) encodes whether the accumulated DD pulse sequence commutes or anti-commutes with the error Ej (note that since $\mathcal {U}_{\mathrm { C}}^{\mathrm { DD}}$ is always a member of the stabilizer group, Ej must either commute or anti-commute with it at all times). An effective DD cycle is one that causes $(-1)^{p(t)}$ to rapidly alternate between +1 and −1, so that the system–environment coupling is modulated by a rapidly oscillating function of t. Similarly, the sum in the exponential of equation (5) is taken over all stabilizer generators Sm that anti-commute with the error operator Ej. The modulation of the error operator in the EGP case is operator-valued, but the action on states in the codespace is very similar for both control scenarios, a fact we will exploit below.
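The DD sign flips can be checked directly in a small toy example (again our illustration, using the 3-qubit bit-flip code): conjugating an anti-commuting error by the accumulated pulse sequence reproduces the alternating $(-1)^{p(t)}$ factor.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

S1 = kron(Z, Z, I2)    # stabilizer pulse
E = kron(X, I2, I2)    # error anti-commuting with S1

Uc = np.eye(8)
for k in range(1, 6):
    Uc = Uc @ S1                       # one more DD pulse: U_C = S1^k
    toggled = Uc.conj().T @ E @ Uc     # toggling-frame error operator
    # (-1)^{p(t)} E_j with p(t) = k mod 2
    assert np.allclose(toggled, (-1)**k * E)
```

Each pulse flips the sign of the toggling-frame error operator, which is exactly the rapid ±1 modulation that suppresses the system–environment coupling.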

3. Dynamical master equation for encoded adiabatic quantum computing

In this section we will formalize the dynamics of error suppression and correction in encoded AQC by deriving a master equation describing effective encoded adiabatic evolution when the qubits are coupled to uncontrolled degrees of freedom. By employing fewer approximations than in the derivation of the conventional Lindblad master equation this reduced dynamics is able to capture the modification of system–environment coupling, and hence decoherence, by controls such as DD or energy penalty terms.

We begin with the full Hamiltonian in equation (1), and assume that the system–environment coupling is weak compared to the other terms in this Hamiltonian, and that the encoding has been chosen such that each Ej is a detectable error. Define an interaction frame with respect to $\bar {H}_{\mathrm { AQC}}(t)$ , HC(t) and HB as: $\skew3\breve{A}(t) \equiv \mathcal {U}^\dagger (t,0) ~A~ \mathcal {U}(t,0)$ , where

$\mathcal{U}(t,0) = {\mathrm{exp}}_+\left(-\frac{\mathrm{i}}{\hbar}\int_0^t \mathrm{d}s\left[\bar{H}_{\mathrm{AQC}}(s) + H_{\mathrm{C}}(s) + H_{\mathrm{B}}\right]\right).\qquad(6)$

A particularly important property of these Hamiltonians, which we will utilize later, is that they all commute. That is, $[\bar {H}_{\mathrm { AQC}}(s), H_{\mathrm { C}}(s')]=0 ~~ \forall\;s,s'$ , because $\bar {H}_{\mathrm { AQC}}$ only contains logical operators and HC only contains stabilizer terms [5]. And HB, which acts on a different factor of the Hilbert space, obviously commutes with the other two terms. This property implies that the interaction frame transformation factors into $\mathcal {U}(t,0) = \prod _{n={\mathrm { AQC,C,B}}} \mathcal {U}_n(t,0)$ .

Let ϱ be the combined density matrix of system and environment, i.e. a normalized trace-class operator in $\mathcal {H}_{\mathrm { sys}}\otimes \mathcal {H}_{\mathrm { env}}$ . By substituting a formal solution into the von-Neumann equation we get the following dynamical equation for the combined density matrix in the interaction frame [12]

$\frac{\mathrm{d}\breve{\varrho}(t)}{\mathrm{d}t} = -\frac{\mathrm{i}}{\hbar}\left[\breve{H}_{\mathrm{I}}(t),\breve{\varrho}(0)\right] - \frac{1}{\hbar^2}\int_0^t \mathrm{d}s\left[\breve{H}_{\mathrm{I}}(t),\left[\breve{H}_{\mathrm{I}}(s),\breve{\varrho}(s)\right]\right],\qquad(7)$

where $H_{\mathrm {I}} = \sum _j E_j \otimes B_j$ is the interaction Hamiltonian. We will assume that the weak system–environment coupling does not perturb the environment from its equilibrium state at timescales that we resolve, and hence $\breve {\varrho }(s) \approx \breve {\rho }(s) \otimes \sigma _{\mathrm { eq}}$ , a tensor product of the system density matrix $\breve {\rho }(s) \equiv {\mathrm {tr}}_{\mathrm { env}} \{\breve {\varrho }(s)\}$ and the environmental equilibrium density matrix. This allows the derivation of a time-convolution master equation for the system density matrix [12]

$\frac{\mathrm{d}\breve{\rho}(t)}{\mathrm{d}t} = -\frac{1}{\hbar^2}\sum_{k,j}\int_0^t \mathrm{d}s\left(C_{kj}(t,s)\left[\breve{E}_k(t),\breve{E}_j(s)\breve{\rho}(s)\right] + \mathrm{h.c.}\right),\qquad(8)$

with $C_{kj}(t,s) \equiv {\mathrm {tr}}_{\mathrm { env}}\{\breve {B}_k(t)\breve {B}_j(s)\sigma _{\mathrm { eq}}\}$ being the quantum correlation function of the environment. To obtain this expression we have assumed that ${\mathrm {tr}}_{\mathrm { env}}\{ \breve B_j(t) \sigma _{\mathrm { eq}}\} = 0 ~\forall j$ —i.e. the average interaction force on the bath equilibrium state is zero. We further assume that the environment is stationary, implying that this correlation function is only dependent on the time difference τ = t − s. This simplifies the master equation to

$\frac{\mathrm{d}\breve{\rho}(t)}{\mathrm{d}t} = -\frac{1}{\hbar^2}\sum_{k,j}\int_0^t \mathrm{d}\tau\left(C_{kj}(\tau)\left[\breve{E}_k(t),\breve{E}_j(t-\tau)\breve{\rho}(t-\tau)\right] + \mathrm{h.c.}\right).\qquad(9)$

The final approximation we make is sometimes referred to as the first Markov approximation [13] and replaces $\breve {\rho }(t-\tau )$ with $\breve {\rho }(t)$ in the integrals above. This amounts to assuming that the change in the system state (in the interaction frame, and therefore due to the weak system–environment coupling) is negligible on the timescale set by the decay of the environment correlation function. Therefore this formalism is valid for fast-relaxing or weakly coupled environments. In the following we will restrict our analysis to uncorrelated environments for the system qubits, that is Ckj(τ) = δkjCj(τ). The analysis that follows can be generalized to correlated environments but we will not do so here.

We rewrite this resulting master equation in an interaction frame with respect to the control Hamiltonian only (the toggling frame): $\skew3\tilde {A}(t) = \mathcal {U}^\dagger _{\mathrm { C}}(t,0) ~A~\mathcal {U}_{\mathrm { C}}(t,0)$ , for $A \in \mathcal {H}_{\mathrm { sys}}$ . The transformation required to move into this frame is particularly easy in this case because, as noted above, the stabilizer properties result in a factoring of the full interaction frame transformation unitary (equation (6)). In the toggling frame

$\frac{\mathrm{d}\tilde{\rho}(t)}{\mathrm{d}t} = -\frac{\mathrm{i}}{\hbar}\left[\bar{H}_{\mathrm{AQC}}(t),\tilde{\rho}(t)\right] - \frac{1}{\hbar^2}\sum_j\int_0^t \mathrm{d}\tau\left(C_j(\tau)\left[\tilde{E}_j(t),\tilde{\Xi}_j(t,\tau)\tilde{\rho}(t)\right] + \mathrm{h.c.}\right),\qquad(10)$

where $\tilde {\Xi }_j(t,\tau ) \equiv \mathcal {U}_{\mathrm { AQC}}(t,t-\tau )~\tilde {E}_j(t-\tau )~\mathcal {U}^\dagger _{\mathrm { AQC}}(t,t-\tau )$ .

We will mostly work with this time-local master equation with time-dependent dissipation kernels [12] in what follows. However, it is possible to also make the second Markov approximation [13] here and set the upper limit of the integrals above to ∞. This typically results in an equation of motion with no t-dependence in the incoherent transition rates. Physically, this approximation implies that the bath correlation functions decay so quickly that the time dependence of the error operators (in the interaction frame) is not resolved by the integrals. Finally, the most drastic approximation replaces the correlation function with a delta function in time, which results in a temperature-independent master equation. We will not make this last approximation in this work since it results in evolution where the control Hamiltonian cannot influence the dissipation and decoherence operators directly, which is counter to any error suppression scheme.

4. Error suppression in adiabatic quantum computing

In order to understand the effects of error suppression on encoded AQC dynamics we begin by quantifying the population preserved in the codespace (the no-error subspace). We define $\mathbf {P}=\frac {1}{2^{N_{\mathrm {g}}}} \prod _{m=1}^{N_{\mathrm {g}}}(\mathbf {I} + S_m)$ as the projector onto the codespace, P0(t) as the codespace population at time t and Q = I − P. Then the change in the codespace population is $\frac {{\mathrm { d}}P_0}{{\mathrm { d}}t} = {\mathrm {tr}}\{\mathbf {P}\frac {{\mathrm { d}}\tilde {\rho }}{{\mathrm { d}}t}\mathbf {P}\}$ . To evaluate this quantity, we will first insert identities in the form P + Q around $\tilde {\rho }(t)$ , resulting in

Equation (11)

where we have used the identities PQ = 0, $\mathbf {P} \tilde {E}_j \mathbf {P} = 0 ~\forall j$ and $\mathbf {P} \tilde {\Xi }_j(t, \tau ) \mathbf {P} = 0 ~ \forall j$ . The first of these is by definition and the others follow from the properties of the Hamiltonian and error operators—i.e. $\bar {H}_{\mathrm { AQC}}(s)$ and HC(s) cannot move states between the subspaces projected onto by P and Q, and Ej applied to any state in P results in a state in Q. The term ${\mathrm {tr}}\{ \mathbf {P}\tilde {E}_j(t) \tilde {\Xi }_j(t,\tau )\mathbf {Q}\tilde {\rho }(t)\mathbf {P}\}$ and its conjugate also evaluate to zero although it is slightly more involved to see why. The reason is that $\tilde {E}_j(t) \tilde {\Xi }_j(t,\tau ) = \mathcal {U}^\dagger _{\mathrm { C}}(t,0)E_j\mathcal {U}_{\mathrm { C}}(t,0)\mathcal {U}_{\mathrm { AQC}}(t,t-\tau ) \mathcal {U}^\dagger _{\mathrm { C}}(t-\tau ,0)E_j\mathcal {U}_{\mathrm { C}}(t-\tau ,0)~\mathcal {U}^\dagger _{\mathrm { AQC}}(t,t-\tau )$ contains two applications of Ej interleaved with unitary evolution that does not connect different stabilizer syndrome subspaces and hence cannot connect P and Q subspaces. Hence, this master equation simplifies to

$\frac{\mathrm{d}P_0}{\mathrm{d}t} = -\frac{1}{\hbar^2}\sum_j\int_0^t \mathrm{d}\tau\left(C_j(\tau)\left[{\mathrm{tr}}\{\tilde{E}_j(t)\tilde{\Xi}_j(t,\tau)\mathbf{P}\tilde{\rho}(t)\mathbf{P}\} - {\mathrm{tr}}\{\tilde{E}_j(t)\mathbf{P}\tilde{\Xi}_j(t,\tau)\mathbf{Q}\tilde{\rho}(t)\mathbf{Q}\}\right] + \mathrm{c.c.}\right).$
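The projector identities invoked above can be verified numerically; this sketch (again assuming the 3-qubit bit-flip code, our choice for illustration) constructs P and checks that detectable errors map the codespace into its complement:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

S = [kron(Z, Z, I2), kron(I2, Z, Z)]
Ng = len(S)

# codespace projector P = 2^{-Ng} prod_m (I + S_m)
P = np.eye(8)
for Sm in S:
    P = P @ (np.eye(8) + Sm)
P /= 2**Ng
Q = np.eye(8) - P

assert np.allclose(P @ P, P)           # P is a projector
assert np.isclose(np.trace(P), 2.0)    # two logical basis states
assert np.allclose(P @ Q, 0)           # PQ = 0
for E in [kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]:
    assert np.allclose(P @ E @ P, 0)   # P E_j P = 0 for detectable errors
```

The last assertion is the statement that a detectable error applied to any codespace state lands entirely in the Q subspace.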

At this point we employ a critical property of the control Hamiltonian: that it modulates the system–environment interaction. Using the expressions for toggling frame error operators in equation (4) and equation (5) allows us to simplify the equation of motion for codespace population to

$\frac{\mathrm{d}P_0}{\mathrm{d}t} = -\frac{1}{\hbar^2}\sum_j\int_0^t \mathrm{d}\tau\left(C_j(\tau)\left[m_j(t,\tau)\,{\mathrm{tr}}\{E_j\hat{\Xi}_j(t,\tau)\mathbf{P}\tilde{\rho}(t)\mathbf{P}\} - m_j^*(t,\tau)\,{\mathrm{tr}}\{E_j\mathbf{P}\hat{\Xi}_j(t,\tau)\mathbf{Q}\tilde{\rho}(t)\mathbf{Q}\}\right] + \mathrm{c.c.}\right),\qquad(12)$

where $\hat {\Xi }_j(t,\tau) \equiv \mathcal {U}_{\mathrm { AQC}}(t,t-\tau )E_j~\mathcal {U}^\dagger _{\mathrm { AQC}}(t,t-\tau )$ and mj(t,τ) is a modulation function that results from the control. It captures the control influence on the dissipation and decoherence. For the two control scenarios of EGP and DD, the modulation functions take the form

$m_j^{\mathrm{EGP}}(\tau) = \mathrm{e}^{-\mathrm{i}\omega_j\tau},\qquad \omega_j \equiv 2\alpha w_j/\hbar,\qquad(13)$

$m_j^{\mathrm{DD}}(t,\tau) = (-1)^{p(t)+p(t-\tau)},\qquad(14)$

where wj is the number of stabilizer terms in the EGP penalty Hamiltonian that anti-commute with error Ej. p(t) is the DD coefficient defined above. To write the modulation function for EGP we have exploited the property

$\exp\left(\frac{2\mathrm{i}\alpha t}{\hbar}\sum_{\{S_m,E_j\}=0} S_m\right)\mathbf{P} = \mathrm{e}^{\mathrm{i}\omega_j t}\,\mathbf{P},\qquad(15)$

which follows from the fact that all states in the codespace are eigenvalue +1 eigenstates of the stabilizers. These modulation functions are analogous to the filter functions derived for describing DD for pure dephasing dynamics [14].

The modulation functions given in equations (13) and (14) display some degree of asymmetry between the EGP and DD error suppression techniques because while mDD depends on times t and τ, mEGP only depends on time τ. This is only because we have restricted ourselves to the case of constant, uniform energy penalty α. As detailed in [7], a more general formulation of EGP would allow for α to be time dependent ${H_{\mathrm { C}}}^{{\mathrm { EGP}}}(t) = - \sum _{m=1}^{N_{\mathrm {g}}} \alpha _m(t) S_m$ . In this case,

$m_j^{\mathrm{EGP}}(t,\tau) = \mathrm{e}^{\frac{2\mathrm{i}}{\hbar}\left[\chi(t)-\chi(t-\tau)\right]},\qquad(16)$

with $\chi (t) \equiv -\sum _{\{S_m,E_j\}=0} \int _0^{t}\mathrm {d}s~ \alpha _m(s)$ . Comparing this with equation (14), we see that in this more general formulation the similarity between DD and EGP is even more evident; they both modulate the dissipation kernels defining the leakage from the codespace (DD with a square pulse and EGP with a smooth oscillating function).

Since the correlation function Cj(τ) decays with τ, the values of the integrands in equation (12) at small values of τ are the most important. If we assume that HAQC varies slowly in time, we can approximate

$\mathcal{U}_{\mathrm{AQC}}(t,t-\tau) \approx \mathrm{e}^{-\frac{\mathrm{i}}{\hbar}\tau\bar{H}_{\mathrm{AQC}}(t)},\qquad(17)$

and hence approximate $\hat {\Xi }_j(t,\tau ) \approx \Xi_j (t,\tau ) \equiv \mathrm {e}^{-\frac {\mathrm {i}}{\hbar }\tau \bar H_{\mathrm { AQC}}(t)} E_j~\mathrm {e}^{\frac {\mathrm {i}}{\hbar }\tau \bar H_{\mathrm { AQC}}(t)}$ in equation (12). Thus the final form of the population master equation is

$\frac{\mathrm{d}P_0}{\mathrm{d}t} = -\frac{1}{\hbar^2}\sum_j\int_0^t \mathrm{d}\tau\left(C_j(\tau)\left[m_j(t,\tau)\,{\mathrm{tr}}\{E_j\Xi_j(t,\tau)\mathbf{P}\tilde{\rho}(t)\mathbf{P}\} - m_j^*(t,\tau)\,{\mathrm{tr}}\{E_j\mathbf{P}\Xi_j(t,\tau)\mathbf{Q}\tilde{\rho}(t)\mathbf{Q}\}\right] + \mathrm{c.c.}\right).\qquad(18)$

4.1. HAQC = 0

To see the effects of the control Hamiltonian even more clearly, we consider this dynamical equation in the absence of the adiabatic evolution (i.e. when HAQC = 0). Then the traces in this equation simplify further since ${\mathrm {tr}}\{E_j \Xi _j(t,\tau ) \mathbf {P}\tilde {\rho }(t)\mathbf {P}\} \rightarrow {\mathrm {tr}}\{\mathbf {P}\tilde {\rho }(t)\mathbf {P}\}$ and ${\mathrm {tr}}\{E_j \mathbf {P} \Xi _j(t,\tau ) \mathbf {Q} \tilde {\rho }(t) \mathbf {Q}\} \rightarrow {\mathrm {tr}}\{\mathbf {Q}_1\tilde {\rho }(t)\mathbf {Q}_1\}$ , where Q1 is a projector onto the subspace of Q that contains states one error away from the codespace. This simplification allows the derivation of a classical master/rate equation for the codespace population

$\frac{\mathrm{d}P_0(t)}{\mathrm{d}t} = -\sum_j r_j^-(t)\,P_0(t) + \sum_j r_j^+(t)\,P_1(t),\qquad(19)$

with

$r_j^-(t) = \frac{2}{\hbar^2}\,\mathrm{Re}\int_0^t \mathrm{d}\tau\, C_j(\tau)\,m_j(t,\tau),\qquad r_j^+(t) = \frac{2}{\hbar^2}\,\mathrm{Re}\int_0^t \mathrm{d}\tau\, C_j(\tau)\,m_j^*(t,\tau),\qquad(20)$

and P1(t) is the population in the one-error subspace at time t. The rates $r_j^\pm(t)$ quantify the leakage into and out of the codespace per unit time. In the absence of a control Hamiltonian these rates are simply proportional to ${\mathrm { Re}}\{\int _0^t \mathrm {d}\tau\, C_j(\tau )\}$ , a property of the environmental fluctuations alone. However, the control Hamiltonian, in the case of DD and EGP, has the effect of modulating this integral by an oscillating function and hence decreasing its modulus if the rate of oscillation is large enough.

At this point we pause to point out an important difference between error suppression by DD and by EGP. The fact that the DD modulation functions, $m_j^{\mathrm{DD}}(t,\tau)$, are real implies that $r_j^+ = r_j^-$ always, regardless of the model of the bath. This highlights a fundamental difference between DD and EGP: EGP imposes a real energy difference between the stabilizer syndrome subspaces and hence bath-induced transition rates between them obey detailed balance (for a bath near thermal equilibrium). In contrast, DD does not impose a real energy gradient, and thus all stabilizer syndrome subspaces remain energetically degenerate. Transitions between stabilizer syndrome subspaces therefore do not require energy exchange with the bath, and the associated transition rates are not Boltzmann weighted. However, note that both techniques, DD and EGP, suppress transition rates as a result of modulating the integrands in equation (20). The difference between these techniques will become more important when we consider error correction in the next section.

4.1.1. Example: classical stochastic noise model

To illustrate the error suppression, consider a classical approximation of the environment (e.g. the Kubo–Anderson stochastic model), in which case the correlation function is purely real, and fix it to be exponentially decaying: $C(\tau)\propto \mathrm{e}^{-\gamma\tau}$ , where γ is the inverse correlation time. If in addition the noise amplitude is Gaussian distributed, this describes an Ornstein–Uhlenbeck stochastic process. Consider the case of EGP, where the modulation function is sinusoidal, in which case,

$r_j^\pm(t) \propto \frac{\gamma + \mathrm{e}^{-\gamma t}\left(\omega_j\sin\omega_j t - \gamma\cos\omega_j t\right)}{\gamma^2 + \omega_j^2},\qquad(21)$

where ωj = 2αwj/ℏ. Note that this classical model of the bath will not capture relaxation and temperature effects correctly. This is the reason that temperature does not appear in the rates above and that $r_j^+ = r_j^-$ . However, we consider it here because of the simple form of the resulting correlation function, which in turn allows us to transparently illustrate the mechanisms of error suppression. A more complete calculation with a quantum correlation function that incorporates temperature effects, and thus results in unequal upward and downward rates, is presented in appendix A.

In the expression in equation (21), the consequences of adding the energy penalty terms are summarized by the presence of the factor ωj. This factor increases with the energy penalty (α) and with the number of stabilizer generators that anti-commute with the error (wj). It has two effects: (i) its presence in the denominator decreases the overall rate of population leakage and (ii) it increases the oscillation frequency of the sinusoidal functions in the numerator, thus decreasing the magnitude of integrals of $r_j^\pm(t)$ as long as this oscillation frequency is large. Therefore, this calculation explicitly shows how the control Hamiltonian decreases population leakage from the codespace.
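This suppression can be checked numerically. The sketch below (an illustration in units where ℏ = γ = 1, with overall prefactors dropped; the parameter values are ours, not the paper's) evaluates the rate integral $\mathrm{Re}\int_0^t \mathrm{e}^{-\gamma\tau}\mathrm{e}^{\mathrm{i}\omega_j\tau}\,\mathrm{d}\tau$ with and without an energy penalty:

```python
import numpy as np

def rate(omega, gamma=1.0, t=10.0, n=200_000):
    """Re ∫_0^t dτ e^{-γτ} e^{iωτ}, evaluated by the trapezoid rule."""
    tau = np.linspace(0.0, t, n)
    f = np.exp(-gamma * tau) * np.cos(omega * tau)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau)))

r_bare = rate(omega=0.0)     # no energy penalty: -> 1/γ for t >> 1/γ
r_egp = rate(omega=20.0)     # penalty frequency ω_j = 2 α w_j / ħ = 20 γ

assert np.isclose(r_bare, 1.0, atol=1e-3)
assert r_egp < 0.01 * r_bare   # modulation suppresses leakage roughly (γ/ω_j)²-fold
```

For ωj = 20γ the leakage rate drops by a factor of roughly 400, in line with the γ²+ωj² denominator of equation (21).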

For use in later sections, and to connect to previous results on error suppression [3], we also consider the population transfer rates in equation (20) under the second Markov approximation, which sets the upper limit of the rate integrals to infinity (t → ∞). Under this further approximation the rates become related to the Fourier (for EGP) or Walsh–Hadamard (for DD) transforms of the bath correlation function. For example, for EGP $r_j^\pm = \frac {2}{\hbar ^2} \mathbb{Re}\, {{\sf C}}_j( \pm \omega _j)$ under the second Markov approximation, where ${{\sf C}}_j(\omega ) \equiv \int _0^\infty C_j(\tau ) \mathrm {e}^{\mathrm {i}\omega \tau } \mathrm {d}\tau $ is the (one-sided) Fourier transform of Cj(τ). Assuming a harmonic thermal bath, and using symmetries of Cj(τ), this rate can also be written as [15]

$r_j^\pm = \frac{2}{\hbar^2}\,\mathcal{J}_j(\pm\omega_j)\left[{\sf n}(\pm\omega_j)+1\right],\qquad(22)$

where ${\sf n}(\omega ) = 1/(\mathrm {e}^{\beta \hbar \omega }-1)$ is the average occupation number (according to the Bose–Einstein distribution) and $\mathcal {J}_j(\omega )$ is the spectral density of the bath (with the symmetry $\mathcal {J}(-\omega )=-\mathcal {J}(\omega )$ ).

This expression for the rates in the second Markov approximation makes it clear that for large energy penalties the rates are largely determined by the cut-off (or regularization) behavior of the bath spectral density. That is, as the energy penalty α increases, the population transfer rates become proportional to the spectral density $\mathcal {J}_j(\omega _j)$ at higher values of ωj. Realistic spectral densities decay at frequencies above some cut-off. If the cut-off behavior is Lorentzian, as in the Kubo–Anderson model, then the rates only decay as $\propto \frac {1}{\omega _j^2}$ for large ωj, while if the spectral density regularization is exponential, the rates decay as $\propto \mathrm{e}^{-\omega_j}$ for large ωj. Therefore knowledge of the high-energy behavior of the bath spectral density is critical in assessing the effectiveness of error suppression using EGP or DD in many-qubit systems. This issue was also noted by Jordan et al in [3]. If we require the rates to be suppressed exponentially in nl, the number of logical qubits, then ωj (and consequently α or wj) must scale exponentially in nl for a bath with Lorentzian regularization, but only linearly for a bath with exponential regularization. These are very different requirements, with the former being much more demanding.
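The contrast between the two regularizations can be made concrete with toy rate functions (illustrative functional forms with γ = ωc = 1, chosen by us for this sketch): a Lorentzian tail, as produced by the exponential correlation function above, versus an exponential cut-off.

```python
import numpy as np

gamma, omega_c = 1.0, 1.0

def r_lorentzian(w):
    # Re ∫_0^∞ e^{-γτ} e^{iωτ} dτ = γ/(γ² + ω²): Lorentzian tail ∝ 1/ω²
    return gamma / (gamma**2 + w**2)

def r_exponential(w):
    # rate for a bath with an exponential high-frequency cut-off
    return np.exp(-w / omega_c)

w = 20.0
# doubling the penalty frequency: the Lorentzian-tail rate falls only ~4x ...
assert 3.9 < r_lorentzian(w) / r_lorentzian(2 * w) < 4.1
# ... while the exponentially regularized rate falls by ~e^{-20}
assert r_exponential(2 * w) / r_exponential(w) < 1e-8
```

This is why a Lorentzian-regularized bath demands exponentially growing penalties while an exponentially regularized one only demands linearly growing penalties.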

4.2. HAQC ≠ 0

The classical rate equation in equation (19) describing subspace population changes is only possible when HAQC = 0. When this is not the case, the states in a single subspace, e.g. the codespace, have different transition rates, as opposed to equation (19) where all states in P have the same transition rate to Q1. Therefore a rate equation for subspace populations is no longer possible. Despite this, the effect of the control terms (EGP or DD) is still to suppress transitions between subspaces arising from the environmental coupling. To see this, consider the instantaneous ground state population (in the toggling frame): $P_{\psi _0} = {\left \langle {\psi _0(t)}\right \vert } \tilde \rho (t) {\left \vert {\psi _0(t)}\right \rangle }$ . The rate of change of this population is given by
$\frac{\mathrm{d}P_{\psi_0}}{\mathrm{d}t} = \left\langle\dot{\psi}_0(t)\right\vert\tilde{\rho}(t)\left\vert\psi_0(t)\right\rangle + \left\langle\psi_0(t)\right\vert\tilde{\rho}(t)\left\vert\dot{\psi}_0(t)\right\rangle + \left\langle\psi_0(t)\right\vert\frac{\mathrm{d}\tilde{\rho}(t)}{\mathrm{d}t}\left\vert\psi_0(t)\right\rangle.$

The first two terms represent coherent deformations of the ground state due to the adiabatic evolution. These might lead to changes in the ground state population through diabatic transitions. But since these cannot be suppressed with our encoding, we ignore them and instead only consider the change in ground state population due to the last term, which induces incoherent transitions. Using equation (10) this term evaluates to

$-\frac{1}{\hbar^2}\sum_j\int_0^t \mathrm{d}\tau\left(C_j(\tau)\left\langle\psi_0(t)\right\vert\left[\tilde{E}_j(t),\tilde{\Xi}_j(t,\tau)\tilde{\rho}(t)\right]\left\vert\psi_0(t)\right\rangle + \mathrm{c.c.}\right).\qquad(23)$

Now consider the case where $\tilde {\rho }(t) ={\left \vert {\psi _0(t)}\right \rangle }{\left \langle {\psi _0(t)}\right \vert }$ . That is, calculate the rate of change in ground state population when the system begins in the ground state. This simplifies to
$\frac{\mathrm{d}P_{\psi_0}}{\mathrm{d}t}\bigg\vert_{\mathrm{incoh.}} = -\frac{1}{\hbar^2}\sum_j\int_0^t \mathrm{d}\tau\left(C_j(\tau)\,m_j(t,\tau)\left\langle\psi_j(t)\right\vert \mathrm{e}^{-\frac{\mathrm{i}}{\hbar}\bar{H}_{\mathrm{AQC}}(t)\tau}\left\vert\psi_j(t)\right\rangle + \mathrm{c.c.}\right),$

where ${ \vert {\psi _j(t)} \rangle } = E_j{ \vert {\psi _0(t)} \rangle }$ , and we have set the ground state energy to be zero without loss of generality ($\bar {H}_{\mathrm { AQC}}(t) {\vert {\psi _0(t)} \rangle }=0$ ). Furthermore, we have approximated $\hat {\Xi }(t,\tau ) \approx \Xi (t,\tau )$ as before. This expression quantifies the rate of population leakage from the ground state as a result of incoherent transitions. The matrix element ${ \langle {\psi _j(t)} \vert } \mathrm {e}^{-\mathrm {i} \bar {H}_{\mathrm { AQC}}(t)\tau }{ \vert {\psi _j(t)} \rangle }$ cannot be simplified in general: the error state ${\vert {\psi _j(t)}\rangle }$ is not necessarily an eigenstate of $\bar {H}_{\mathrm { AQC}}(t)$ since $[\bar {H}_{\mathrm { AQC}}, E_j] \neq 0$ . This problem, that error states are not eigenstates of the adiabatic Hamiltonian, is a major issue for error correction in AQC, and we shall return to it in the next section. However, regardless of the value of this matrix element, the above expression confirms that the mechanism of error suppression in the presence of the adiabatic interpolation is the same as when $\bar {H}_{\mathrm { AQC}}=0$ ; i.e. the modulation functions add oscillatory components to the dissipation kernels, and hence the integrals defining the rate of population leakage can be suppressed as long as this rate of oscillation is large enough (the correlation function will decay quickly above a cut-off frequency for any physical model of the bath, and the oscillation frequency, ωj, should be larger than this cut-off frequency).

5. Error correction in adiabatic quantum computing

There is no established approach for error correction in AQC. The most obvious approach is to freeze the adiabatic evolution at regular intervals, measure the stabilizer generators and then apply a correction if necessary before recommencing adiabatic evolution. This approach, which we shall refer to as the freeze–measure–correct approach, resembles circuit-model error correction but is fraught with practical difficulties. For example, the multi-body measurements necessary for error correction will likely be implemented non-adiabatically and therefore could disturb the ground state population. More importantly, any leakage outside the codespace between error correction cycles becomes uncorrectable due to the issue identified above: error states are not necessarily eigenstates of the adiabatic Hamiltonian. To make this issue more explicit, we summarize the reasoning presented in [7], where an example evolution was considered: unperturbed evolution of the system until time τ, at which point there is a correctable error Ej, followed by unperturbed evolution again until an error correction cycle. The optimal error correction operation is the application of Ej again after decoding. Thus the overall evolution is: ${\left \vert {\psi (t)}\right \rangle } = E_j \mathcal {U}_{\mathrm { AQC}}(t,\tau ) E_j \mathcal {U}_{\mathrm { AQC}}(\tau ,0) {\left \vert {\psi _0(0)}\right \rangle }$ . Note that we consider only evolution in $\mathcal {H}_{\mathrm { sys}}$ since this is sufficient, and we ignore evolution by the control Hamiltonian since it is inconsequential for the following argument. Since the evolution until τ is unperturbed and adiabatic, this is equivalent to: ${\left \vert {\psi (t)}\right \rangle } = E_j \mathcal {U}_{\mathrm { AQC}}(t,\tau ) E_j {\left \vert {\psi _0(\tau )}\right \rangle }$ , where ${\left \vert {\psi _0(\tau )}\right \rangle }$ is the ground state of the adiabatic Hamiltonian at time τ (and in the codespace). 
To simplify this further we want to commute the Ej past the AQC unitary. However, the commutation relation between $\mathcal {U}_{\mathrm { AQC}}$ and Ej is non-trivial. We first decompose the encoded AQC Hamiltonian into terms that commute and anti-commute with Ej: $\skew3\bar{H}_{\mathrm { AQC}}(t) = \skew3\bar{H}^+_j(t) + \skew3\bar{H}^-_j(t)$ with $E_j \skew3\bar{H}^{\pm }_j(t) \mp \skew3\bar{H}^\pm _j(t) E_j = 0$ . With this decomposition, $\skew3\bar{H}_{\mathrm { AQC}}(t) E_j = E_j (\skew3\bar{H}^+_j(t) - \skew3\bar{H}^-_j(t))~\forall\;t$ . Using this to commute the error past the AQC evolution results in: ${\left \vert {\psi (t)}\right \rangle } = \overline {\mathcal {U}}_{\mathrm { AQC}} {\left \vert {\psi _0(\tau )}\right \rangle }$ , where

Equation (24)

Therefore, even after the correction has been applied at time t, we do not recover the correct state, ${\left \vert {\psi _0(t)}\right \rangle }$ . In effect, this is because the error state $E_j {\left \vert {\psi _0(\tau )}\right \rangle }$ is not an eigenstate of $\skew3\bar{H}_{\mathrm { AQC}}$ , and therefore once a state is promoted into an error subspace $\skew3\bar{H}_{\mathrm { AQC}}$ coherently mixes it with other states in that error subspace (but not between subspaces), which results in a faulty correction. Another way to interpret this problem is to see that the correctable, low-weight error Ej is quickly 'dressed' by the adiabatic Hamiltonian into a high-weight, uncorrectable error3. This is analogous to an error during the implementation of a non-transversal gate in the circuit model. This problem of leaked populations being uncorrectable implies that the error correction cycles in the freeze–measure–correct approach must be extremely frequent.
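The commutation step underlying this dressing can be sketched explicitly. The following is a reconstruction from the decomposition of $\skew3\bar{H}_{\mathrm{AQC}}$ into commuting and anti-commuting parts, not a quotation of equation (24):

```latex
% Trotterize U_AQC(t,tau) into infinitesimal steps and use E_j^2 = 1
% together with E_j Hbar_AQC E_j = Hbar^+_j - Hbar^-_j on each step:
\begin{align}
E_j\,\mathcal{U}_{\mathrm{AQC}}(t,\tau)\,E_j
  &= \mathcal{T}\exp\!\Big(-\tfrac{\mathrm{i}}{\hbar}\int_{\tau}^{t}\mathrm{d}s\;
     E_j\,\bar{H}_{\mathrm{AQC}}(s)\,E_j\Big) \nonumber \\
  &= \mathcal{T}\exp\!\Big(-\tfrac{\mathrm{i}}{\hbar}\int_{\tau}^{t}\mathrm{d}s\;
     \big[\bar{H}^{+}_{j}(s)-\bar{H}^{-}_{j}(s)\big]\Big).
\end{align}
```

This effective propagator differs from $\mathcal{U}_{\mathrm{AQC}}(t,\tau)$ by the sign of every term that anti-commutes with Ej, which is the sense in which the low-weight error is dressed into a high-weight one.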

In this work we construct an alternative to the freeze–measure–correct approach to error correction in AQC. It operates continuously in time, does not require freezing the adiabatic evolution, and relies on cooling. It is known that cooling is analogous to error correction since both are entropy reduction methods; e.g. [16–18]. However, the cooling dynamics have to be engineered so that the correct degrees of freedom (erroneous excitations in the case of AQC) are damped. The cooling of local degrees of freedom is the most experimentally practical and we restrict our attention to such local cooling4. In the case of AQC with qubits, local cooling refers to cooling individual qubits independently. In the following we formulate the structure necessary for implementing error correction during AQC with local cooling, and the effective dynamics of encoded AQC with such cooling.

5.1. Structure required for error correction

Two specific features are required for error correction by local cooling to be successful. The first is a stabilizer encoding with energy penalties on errors enforced by EGP. DD is insufficient for error correction by cooling because it does not impose real energy penalties. Cooling preferentially biases the system toward low-energy states, and therefore energy penalties on error subspaces are necessary for the codespace to be preferentially populated by cooling. Therefore, in the following, we will assume that the control Hamiltonian implements EGP and $m_j(t,\tau ) = m^{\mathrm {EGP}}_j(t,\tau )$ always.

The second feature required for error correction by cooling is related to the above observation that leakage from the codespace is irrecoverable due to the coherent mixing of states in the error subspaces by the adiabatic Hamiltonian. In order to circumvent this problem we must modify the construction of the encoded adiabatic Hamiltonian $\skew3\bar{H}_{\mathrm { AQC}}$ . To understand how to make this modification it is useful to present another interpretation of why the problem exists. The essence of the problem is that local perturbations (by single qubit Pauli errors) of the many-body ground state of $\skew3\bar{H}_{\mathrm { AQC}}+H_{\mathrm { C}}$ quickly become non-local excitations of the many-body system due to the couplings induced by the Hamiltonian. Therefore, the subsequent cooling of local degrees of freedom (single qubits) cannot destroy this delocalized excitation. This is not an issue in quantum memories created from degenerate ground states of stabilizer Hamiltonians (e.g. the abelian toric code) because in these cases local perturbations create excitations that remain localized and therefore can be subsequently quenched by local cooling [18]. Given this perspective it is clear that a way to fix this problem is to modify the logical AQC Hamiltonian $\skew3\bar{H}_{\mathrm { AQC}}$ so that local perturbations create localized excitations, i.e. the correctable error states remain eigenstates of the encoded Hamiltonian. This can, in fact, always be done because of the freedom in the choice of logical operators in stabilizer codes; a logical operator can be multiplied by any linear combination of stabilizer generators to create an equivalent logical operator. We will refer to these modified logical Hamiltonians as protected Hamiltonians and denote them as $\skew3\bar{H}^\lambda _{\mathrm { AQC}}$ .

5.2. Protected Hamiltonians for adiabatic quantum computing

In order to specify the algebraic properties of the protected Hamiltonians we must first introduce some notation. The system–bath interaction, $\sum _{j=1}^{N_{\mathrm {e}}} E_j\otimes B_j$ , defines the elementary errors in the system as Ej. We specialize to the case where the physical system–environment interaction contains single qubit error terms, i.e. Ej in equation (1) acts non-trivially only on one qubit and $E_j^2 = \mathbf {1}$ . This is a physically realistic assumption since the system–environment interaction is likely low weight. However, the code we use to encode the system could correct more than one error, e.g. could have distance d ⩾ 5. In this case, the system is recoverable even after multiple errors.

A general sequence of elementary errors, $\prod _{i=1}^l E_{j_i}$ with 1 ⩽ ji ⩽ Ne, can build up to a correctable or uncorrectable error. By the error correction conditions [21], each correctable error is identified by a unique anti-commutation pattern with the stabilizer generators of the code, the syndrome pattern. Motivated by this, we denote the Pauli operator associated with a sequence of elementary errors as Eν, where the label ν is a binary vector indicating which stabilizer generators anti-commute with the error. Explicitly, ν = (ν1,ν2,...,νg), where g is the number of generators in the code (an [[n,k,d]] stabilizer code has g = n − k generators), and EνSm = (− 1)νmSmEν, where Sm is a stabilizer generator5. The elementary errors Ej, all of which are assumed to be correctable by the employed code, each have a syndrome pattern that we denote ν(j). Therefore the elementary errors could alternatively be written as Eν(j). Finally, we also define projectors onto the $2^k$-dimensional syndrome subspaces as Qν = EνPEν. Note that $\sum _{{\boldsymbol {\nu }}} \mathbf {Q}_{\boldsymbol {\nu }}= \mathbf {1}$ is a resolution of identity, where Q0 = P is included in the sum. See figure 1 for a graphical representation of syndrome subspaces and some of the above definitions in encoded Hilbert space.
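The syndrome pattern ν(j) of an elementary error can be computed directly from anti-commutation with the generators. The sketch below does this for the bit-flip repetition code of figure 1(a), using the standard symplectic (x|z) binary representation of Paulis (the variable names and encoding convention are our own illustration):

```python
# Two Paulis with symplectic representations (xa|za) and (xb|zb)
# anti-commute iff the symplectic product xa.zb + za.xb is odd.
def symplectic_product(a, b):
    (xa, za), (xb, zb) = a, b
    return (sum(x * z for x, z in zip(xa, zb))
            + sum(z * x for z, x in zip(za, xb))) % 2

def syndrome(error, generators):
    # nu_m = 1 iff the error anti-commutes with generator S_m
    return tuple(symplectic_product(error, g) for g in generators)

# (x|z) encodings on 3 qubits for the bit-flip code of figure 1(a)
ZZI = ((0, 0, 0), (1, 1, 0))
IZZ = ((0, 0, 0), (0, 1, 1))
X1  = ((1, 0, 0), (0, 0, 0))
X2  = ((0, 1, 0), (0, 0, 0))
X3  = ((0, 0, 1), (0, 0, 0))

gens = [ZZI, IZZ]
for name, e in [("X1", X1), ("X2", X2), ("X3", X3)]:
    print(name, syndrome(e, gens))
# Three distinct non-zero syndrome patterns: (1,0), (1,1), (0,1)
```

Each elementary bit-flip lands in its own syndrome subspace, matching the labeling of figure 1(a).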

Figure 1.

Figure 1. The structure of encoded Hilbert space and transitions between syndrome subspaces induced by incoherent single qubit (elementary) errors. An [[n,k,d]] quantum code encodes k logical qubits in n physical qubits and corrects at least ⌊(d − 1)/2⌋ Pauli errors. The two examples shown here are (a) the repetition bit-flip code with stabilizer generators ZZI and IZZ (not a full quantum code since phase errors are not corrected), and (b) the [[5,1,3]] code with stabilizer generators IXZZX,XIXZZ,ZXIXZ and ZZXIX. In each example the circles represent $2^k$-dimensional syndrome subspaces of the encoded Hilbert space. These subspaces are labeled by a syndrome pattern ν = ν1ν2...νn−k, a vector whose binary entries denote whether each one of the n − k stabilizer generators commutes (νi = 0) or anti-commutes (νi = 1) with the error Eν that takes a state in the codespace to that syndrome subspace. ν = 0 represents the codespace. Both these examples are perfect, non-degenerate codes so each syndrome subspace corresponds to a unique correctable error. Black, green and orange lines correspond to the transitions between subspaces induced by correctable single qubit σx, σz and σy errors, respectively. The red lines correspond to transitions induced by uncorrectable errors. We have labeled the black edges with the corresponding transition-inducing errors for the bit-flip code but have omitted these labels for clarity for the larger [[5,1,3]] code. The rate equation dynamics derived in the main text describes a Markov random walk between these subspaces because the elementary errors Ej move states incoherently between the syndrome subspaces. Beginning in the codespace, the overall state is guaranteed to be correctable as long as no red line is crossed during the random walk.


Note that for perfect codes [5] the number of syndrome patterns, $2^{n-k}$ , exactly equals the number of correctable errors, whereas for imperfect codes the number of syndrome patterns ν can exceed the number of correctable errors. Therefore one must be careful when tracking error sequences using the syndrome labeling. While each correctable error sequence produces a unique syndrome pattern, an uncorrectable error sequence can produce a syndrome pattern that is the same as (or, for imperfect codes, different from) the syndrome pattern of a correctable error sequence. Starting from the codespace, the syndrome pattern uniquely labels an error sequence as long as it is a correctable error, but as we build up more and more errors we must be careful to track when a composite error becomes uncorrectable. For example, given two correctable errors Eν and Eμ, the concatenated error EνEμ = Eν⊕μ (⊕ denotes binary addition) could be another correctable error or an uncorrectable error. The syndrome label ν⊕μ by itself does not tell us which one it is in all cases. Finally, we note that while the distance of the quantum code is a useful proxy for identifying correctable states, it is not always sufficient. That is, for a non-degenerate code the correctable errors are the ones with weight ⩽⌊(d − 1)/2⌋, where d is the distance of the code. However, degenerate codes can correct errors that have weight greater than ⌊(d − 1)/2⌋. Therefore, to be general and capture degenerate as well as imperfect quantum codes, we will simply refer to an error sequence $\prod _{i=1}^l E_{j_i}$ with 1 ⩽ ji ⩽ Ne as being a correctable or uncorrectable error. Either way, it can be associated with a syndrome pattern ν, and when there is no ambiguity we will label it as Eν.6
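The ambiguity of composite syndrome labels already shows up in the bit-flip code of figure 1(a). The following minimal sketch (syndromes taken from that example) shows a two-error sequence whose syndrome matches a correctable single error:

```python
# Syndrome of a composite error = XOR of the component syndromes; the XOR
# alone cannot certify correctability. Bit-flip code of figure 1(a):
# nu(X1) = (1,0), nu(X2) = (1,1), nu(X3) = (0,1).
def xor(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

nu = {"X1": (1, 0), "X2": (1, 1), "X3": (0, 1)}

# The two-error sequence X1.X2 lands in the same syndrome subspace as the
# single correctable error X3 ...
assert xor(nu["X1"], nu["X2"]) == nu["X3"]
# ... yet 'correcting' X1.X2 by applying X3 leaves the logical operator
# X1.X2.X3 = XXX, so the composite error was uncorrectable even though its
# syndrome pattern matched a correctable one.
print("nu(X1 X2) =", xor(nu["X1"], nu["X2"]), "= nu(X3)")
```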

Moreover, given this notation we can label a (time-dependent) complete basis in the Hilbert space of the encoded system as ${\left \vert {\epsilon _n(t), {\boldsymbol {\nu }}}\right \rangle }$ with $\skew3\bar{H}^\lambda _{\mathrm { AQC}}(t) {\left \vert {\epsilon _n(t), \boldsymbol {0}}\right \rangle } = \epsilon _n(t) {\left \vert {\epsilon _n(t), \boldsymbol {0}}\right \rangle }$ ; i.e. the quantum numbers $\epsilon _n$ label the eigenvalues of $\skew3\bar{H}^\lambda _{\mathrm { AQC}}(t)$ for a state in the codespace. Furthermore, $\sum _{n, {\boldsymbol {\nu }}} {\left \vert {\epsilon _n(t), {\boldsymbol {\nu }}}\right \rangle }{\left \langle {\epsilon _n(t), {\boldsymbol {\nu }}}\right \vert } = \mathbf {1}$ , where the sum over ν is over all $2^{n-k}$ (correctable and uncorrectable) syndrome patterns.

Given this notation, the fundamental property of a protected implementation of the logical Hamiltonian is that all correctable errors take eigenstates in the codespace to eigenstates in error subspaces. That is, for a correctable error Eν,

Equation (25)

This condition stipulates that the erred (but correctable) state is an eigenstate of the protected Hamiltonian. In principle, $\epsilon _{n,{\boldsymbol {\nu }}}$ can have an arbitrary dependence on n and ν, but for practical constructions based on exploiting the stabilizer structure (see [7]) this energy factorizes as $\epsilon _{n,{\boldsymbol {\nu }}}(t) = \epsilon _n(t)\lambda _{\boldsymbol {\nu }}$ . The first factor is the same as the energy in the codespace, while $\lambda _{\boldsymbol {\nu }} \in \mathbb{R}$ (λν ≠ 0) is a deformation factor that modifies the energy of the corresponding state in the error space. Demanding this property yields the fundamental property of protected logical Hamiltonians: for all correctable errors

Equation (26)

This can be viewed as a deformed commutator between the logical Hamiltonian and the correctable errors (but only when operating on the codespace). Note that although $\skew3\bar{H}^\lambda _{\mathrm { AQC}}$ is time dependent λν is time independent; its only dependence is on the error syndrome.
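Spelled out, the eigenstate condition and the deformed commutator take the following form. This is a reconstruction consistent with the factorization $\epsilon_{n,\boldsymbol{\nu}}(t) = \epsilon_n(t)\lambda_{\boldsymbol{\nu}}$ above and with the explicit relation for concatenated errors given in section 5.3, not a verbatim quotation of equations (25) and (26):

```latex
% A correctable error E_nu applied to a codespace eigenstate remains an
% eigenstate of the protected Hamiltonian (cf. equation (25)):
\begin{equation}
  \bar{H}^{\lambda}_{\mathrm{AQC}}(t)\, E_{\boldsymbol{\nu}}
  \left|\epsilon_n(t), \boldsymbol{0}\right\rangle
  = \epsilon_n(t)\,\lambda_{\boldsymbol{\nu}}\, E_{\boldsymbol{\nu}}
  \left|\epsilon_n(t), \boldsymbol{0}\right\rangle ,
\end{equation}
% which is equivalent to the deformed commutator on the codespace
% (cf. equation (26)):
\begin{equation}
  \bar{H}^{\lambda}_{\mathrm{AQC}}(t)\, E_{\boldsymbol{\nu}}\, \mathbf{P}
  \;-\; \lambda_{\boldsymbol{\nu}}\, E_{\boldsymbol{\nu}}\,
    \bar{H}^{\lambda}_{\mathrm{AQC}}(t)\, \mathbf{P} \;=\; 0 .
\end{equation}
```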

Although equation (26) is a property that we are demanding of protected Hamiltonians, we can also show that it is always possible to construct a logical AQC Hamiltonian that satisfies it. A constructive algorithm is given in [7] and results in protected Hamiltonians of the following form:

Equation (27)

where $\skew3\bar{H}_{\mathrm { AQC}}$ is a conventional encoded AQC Hamiltonian using arbitrary logical operators of the code (note that $\mathbf {P}\skew3\bar{H}_{\mathrm { AQC}} \mathbf {P} = \skew3\bar{H}_{\mathrm { AQC}} \mathbf {P}$ , since the logical Hamiltonian does not connect different stabilizer subspaces, and hence $\skew3\bar{H}^{\lambda }_{\mathrm { AQC}}$ is Hermitian). Although this protected Hamiltonian is typically a very high-weight operator, it is possible to decrease its weight by utilizing the structure of the code [7]. However, the weight of the protected Hamiltonian can never be decreased below d since it must contain logical operators of the code. A distinguished protected Hamiltonian, which has the property that λν = 1 ∀ ν, will be important in the analysis below; we refer to it as the canonical protected Hamiltonian and denote it by $\skew3\bar{H}^{\mathrm {p}}_{\mathrm { AQC}}$ .
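The deformed commutator can be verified numerically for a small example. The sketch below uses a construction of the schematic form $\skew3\bar{H}^\lambda_{\mathrm{AQC}} = \sum_{\boldsymbol{\nu}} \lambda_{\boldsymbol{\nu}} E_{\boldsymbol{\nu}} \skew3\bar{H}_{\mathrm{AQC}} \mathbf{P} E_{\boldsymbol{\nu}}$ (our reading of the construction in [7]) for the 3-qubit bit-flip code of figure 1(a), with an assumed logical Hamiltonian and arbitrarily chosen deformation factors; it is an illustration, not the paper's construction verbatim:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron(*ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

# Bit-flip code: stabilizers ZZI, IZZ; codespace = span{|000>, |111>}
S1, S2 = kron(Z, Z, I2), kron(I2, Z, Z)
P = (np.eye(8) + S1) / 2 @ (np.eye(8) + S2) / 2   # codespace projector

# An (assumed) encoded adiabatic Hamiltonian built from logical operators
Xbar, Zbar = kron(X, X, X), kron(Z, Z, Z)
Hbar = 0.7 * Xbar + 0.3 * Zbar

# Correctable errors keyed by syndrome nu, with deformation factors lambda
errors = {(0, 0): np.eye(8), (1, 0): kron(X, I2, I2),
          (1, 1): kron(I2, X, I2), (0, 1): kron(I2, I2, X)}
lam = {(0, 0): 1.0, (1, 0): 0.5, (1, 1): 0.8, (0, 1): 1.3}

# Protected Hamiltonian: H_lam = sum_nu lam_nu * E_nu Hbar P E_nu
Hlam = sum(lam[nu] * E @ Hbar @ P @ E for nu, E in errors.items())

# Check the deformed commutator  H_lam E_nu P = lam_nu E_nu Hbar P
for nu, E in errors.items():
    assert np.allclose(Hlam @ E @ P, lam[nu] * E @ Hbar @ P)
print("deformed commutator holds for all correctable errors")
```

In particular, a codespace eigenstate with energy ε is mapped by Eν to an eigenstate of the protected Hamiltonian with energy λν ε, i.e. the local error stays a localized excitation.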

5.3. Dynamics under protected adiabatic quantum computing Hamiltonians

Before introducing the cooling necessary for error correction we will first derive equations describing non-equilibrium dynamics under a protected Hamiltonian implementation of AQC. The dynamics of the encoded AQC evolution is best described as dynamics of populations in the syndrome subspaces described above. Defining $P_{{\boldsymbol {\nu }}} = {\mathrm {tr}}(\mathbf {Q}_{{\boldsymbol {\nu }}} \tilde {\rho }(t))$ as the population of the syndrome subspace labeled by ν, equation (10) can be used to derive the following evolution:

Equation (28)

As above we want to use the properties of the control and logical Hamiltonians to simplify the traces in this equation. To begin, we exploit the expression for the elementary error operators in the toggling frame given by equation (5). Then, using the approximation $\hat {\Xi }(t,\tau ) \approx \Xi (t,\tau ) \equiv \mathrm {e}^{-\frac {\mathrm {i}}{\hbar }\tau \bar H^\lambda _{\mathrm { AQC}}(t)} E_j~\mathrm {e}^{\frac {\mathrm {i}}{\hbar }\tau \bar H^\lambda _{\mathrm { AQC}}(t)}$ , the first trace becomes

Equation (29)

where

Equation (30)

This factor is analogous to the factor wj in section 4; however, in this case the frequency depends on the entire error sequence and not just on Ej.
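As a hedged illustration of such a sequence-dependent factor: assuming the EGP penalty assigns an energy 2α per violated stabilizer generator, the factor for a transition Ej out of syndrome subspace ν would be the change in the number of violated generators. This assumed form is consistent with the energy-cost discussion following equation (33), but it is not a quoted definition of equation (30):

```python
# varpi(j, nu) modeled as |nu(j) XOR nu| - |nu|: the change in the number
# of violated stabilizer generators when elementary error E_j acts on a
# state in syndrome subspace nu (an assumption for illustration).
def varpi(nu_j, nu):
    merged = tuple((a + b) % 2 for a, b in zip(nu_j, nu))
    return sum(merged) - sum(nu)

# Bit-flip code example: elementary error X1 has nu(X1) = (1, 0).
print(varpi((1, 0), (0, 0)))   # out of the codespace: +1 (costs energy)
print(varpi((1, 0), (1, 0)))   # back into the codespace: -1 (releases energy)
print(varpi((1, 0), (0, 1)))   # sideways move between error subspaces
```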

To simplify this trace further we first utilize the fundamental property of the protected Hamiltonian, equation (26), to establish a similar relation for elementary errors applied to states in error subspaces

Equation (31)

which holds as long as the concatenated error EjEν = Eν(j)⊕ν is a correctable error. Here λν(j)⊕ν is the deformation factor for the concatenated error; i.e. $\skew3\bar{H}^\lambda _{\mathrm { AQC}}(t) E_{{\boldsymbol {\nu }}(j) \oplus {\boldsymbol {\nu }}} \mathbf {P} - \lambda _{{\boldsymbol {\nu }}(j)\oplus {\boldsymbol {\nu }}} E_{{\boldsymbol {\nu }}(j) \oplus {\boldsymbol {\nu }}} \skew3\bar{H}^\lambda _{\mathrm { AQC}}(t)\mathbf {P} = 0$ .

This identity simplifies the first trace in equation (28), as long as EjEν is also a correctable error, to

Equation (32)

Similarly, using the above properties, we can simplify the second trace in equation (28) to

Equation (33)

when Eν is a correctable error.

Equation (32) is proportional to the gain in population in syndrome subspace ν as a result of transitions from syndrome subspace ν(j)⊕ν. Similarly, equation (33) is proportional to the total loss in population from syndrome subspace ν to the neighboring subspace ν(j)⊕ν. The exponential factor $\pm \left [2\alpha \varpi (j,{\boldsymbol {\nu }}) + \epsilon _n(t) (\lambda _{{\boldsymbol {\nu }}} - \lambda _{{\boldsymbol {\nu }}(j)\oplus {\boldsymbol {\nu }}})\right ]$ represents the gain or loss in energy as a result of the transition. There are two contributions to this energy. The first comes from the energy difference between the syndrome subspaces projected onto by Qν and Qν(j)⊕ν. This energy difference is enforced by the EGP Hamiltonian and is equal to 2αϖ(j,ν). The second contribution comes from the deformation of the energy landscape by the protected logical Hamiltonian, and is given by $\epsilon _n(t)(\lambda _{{\boldsymbol {\nu }}} - \lambda _{{\boldsymbol {\nu }}(j)\oplus {\boldsymbol {\nu }}})$ .

These traces almost describe the net population flow between syndrome subspaces; the remaining issue is that each state in a syndrome subspace has a different rate of transition to neighboring syndrome subspaces (i.e. the exponent in equation (33) is n dependent). We can avoid this complication by utilizing the canonical protected Hamiltonian, in which case λν = 1 ∀ correctable ν, and the two traces above become

Equation (34)

By putting these simplified traces together, the evolution of the syndrome subspace populations, for syndromes that represent correctable errors, follows a classical master/rate equation:

Equation (35)

for ν correctable, with time-dependent rates

Equation (36)

Equation (37)

Therefore, by utilizing the localizing properties of the canonical protected Hamiltonian we can see that the master equation (28) represents dynamics between syndrome subspace populations. The dynamics resembles a Markov chain random walk with each state in the chain being represented by a syndrome pattern label.
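This random walk can be simulated directly. The toy sketch below uses the syndrome graph of the bit-flip code (figure 1(a)): from the codespace, three elementary errors excite the system; from each error subspace, one elementary error returns to the codespace while the other two produce uncorrectable composites, counted as leakage. The rates (detailed-balance excitation suppressed by the penalty 2α, energy-neutral leakage) and all numerical values are illustrative choices, not parameters from the paper:

```python
import math

def simulate(alpha_over_kT, t_max=50.0, dt=0.001, r0=0.01):
    """Euler-integrate the syndrome-subspace rate equation; return the
    population remaining in the correctable subspace at t_max."""
    r_up = r0 * math.exp(-2.0 * alpha_over_kT)  # excitation (penalty 2*alpha)
    r_down = r0                                  # relaxation to codespace
    r_leak = r0                                  # energy-neutral uncorrectable exits
    p0, perr = 1.0, [0.0, 0.0, 0.0]              # codespace + 3 error subspaces
    for _ in range(int(t_max / dt)):
        dp0 = -3 * r_up * p0 + r_down * sum(perr)
        dperr = [r_up * p0 - (r_down + 2 * r_leak) * p for p in perr]
        p0 += dp0 * dt
        perr = [p + d * dt for p, d in zip(perr, dperr)]
    return p0 + sum(perr)

# Larger energy penalties suppress leakage out of the correctable subspace.
for a in (0.0, 1.0, 2.0):
    print(f"alpha/kT = {a}: P_corr(t_max) = {simulate(a):.4f}")
```

Even this toy model reproduces the qualitative behavior discussed in section 5.3.1: the correctable population still decays, but the decay rate falls rapidly with the penalty α.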

However, until now we have only accounted for transitions between subspaces caused by correctable errors, i.e. transitions along the non-red edges in figure 1. We cannot track the system accurately once it crosses a red edge in figure 1 because the syndrome subspaces no longer necessarily uniquely identify states that are uncorrectable. However, we can represent dynamics across this correctable–uncorrectable boundary as a leakage from the correctable subspace of states. To do so, we can sum over all rates of departure from the correctable subspace, which requires evaluation of $ {\mathrm {tr}} [ \mathbf {Q}_{{\boldsymbol {\nu }}} \skew3\tilde {E}_j(t) \tilde {\Xi }_j(t,\tau ) \tilde {\rho }(t) \mathbf {Q}_{{\boldsymbol {\nu }}} ]$ when EjEν is an uncorrectable error and Eν is a correctable error. To evaluate this trace we derive the property

Equation (38)

where the second option is only possible if the code employed is an imperfect code. The second equality follows from considering PEμEjEνP; if ν(j)⊕ν is an uncorrectable syndrome pattern (which can only be the case if the code is imperfect) then it is distinct from any μ in the sum and therefore PEμEjEνP = 0. On the other hand, if ν(j)⊕ν is a correctable syndrome pattern there exists a μ = ν(j)⊕ν in the sum, and therefore PEμEjEνP = δμ,ν(j)⊕νP. In this case then, $\skew3\bar{H}^{\mathrm {p}}_{\mathrm { AQC}}(t) E_j \mathbf {Q}_{\boldsymbol {\nu }} = E_{{\boldsymbol {\nu }}(j)\oplus {\boldsymbol {\nu }}} \skew3\bar{H}_{\mathrm { AQC}}(t) \mathbf {P} E_{{\boldsymbol {\nu }}} = E_j E_{\boldsymbol {\nu }} \skew3\bar{H}_{\mathrm { AQC}}(t) E_{\boldsymbol {\nu }} \mathbf {Q}_{\boldsymbol {\nu }} = E_j \skew3\bar{H}^*_{{\boldsymbol {\nu }}}(t)\mathbf {Q}_{\boldsymbol {\nu }}$ , where we have defined

Equation (39)

with $\skew3\bar{H}^\pm _{{\mathrm { AQC}},{\boldsymbol {\nu }}}(t)$ being the terms in $\skew3\bar{H}_{\mathrm { AQC}}(t)$ that commute/anti-commute with Eν. $\skew3\bar{H}^*_{\boldsymbol {\nu }}$ is still a logical Hamiltonian in the sense that it only contains logical operators, but it is not the same as $\skew3\bar{H}^{\mathrm {p}}_{\mathrm { AQC}}$ . It encapsulates the logical error incurred as a result of concatenating Ej and Eν.

Focusing on the first case in equation (38) for now, where ν(j)⊕ν is a correctable syndrome pattern, we have

Equation (40)

where in the second line we have used equation (38) and in the third we have expanded the projector as $\mathbf {Q}_{\boldsymbol {\nu }} = \sum _n {\left \vert {\epsilon _n(t), {\boldsymbol {\nu }}}\right \rangle }{\left \langle {\epsilon _n(t), {\boldsymbol {\nu }}}\right \vert }$ , and used the fact that both $\skew3\bar{H}^{\mathrm {p}}_{\mathrm { AQC}}$ and $\skew3\bar{H}^*$ only contain logical operators. The matrix element $ {\left \langle {\epsilon _n(t) , {\boldsymbol {\nu }}}\right \vert } \mathrm {e}^{-\frac {\mathrm {i}}{\hbar } \skew3\bar{H}^*(t)\tau } {\left \vert {\epsilon _m(t), {\boldsymbol {\nu }}}\right \rangle } $ is in general going to be non-zero and carry a dependence on n,m and τ, and to simplify this further we need to make some approximations.

The first approximation we make is $\mathrm {e}^{-\frac {\mathrm {i}}{\hbar }\tau [2\alpha \varpi (j,{\boldsymbol {\nu }}) -\epsilon _m(t)]} \approx \mathrm {e}^{-\frac {\mathrm {i}}{\hbar }\tau [2\alpha \varpi (j,{\boldsymbol {\nu }}) -\bar {\epsilon }(t)]}$ where $\bar {\epsilon }(t) = \frac {1}{2^k}\sum _{m=1}^{2^k} \epsilon _m(t)$ is the mean energy of the states in a syndrome subspace7. This is a reasonable approximation because in the regime of good error correction/suppression the EGP energy penalty will be greater than the energy spread within a syndrome subspace, i.e. $\alpha \gg \epsilon _m(t)~\forall~m,t$ , and hence we can replace the m dependent value with the average since the oscillation frequency will be mostly determined by the first term 2αϖ(j,ν).

The second approximation is that the matrix element ${\left \langle {\epsilon _m(t), {\boldsymbol {\nu }}}\right \vert } \tilde {\rho }(t) {\left \vert {\epsilon _n(t), {\boldsymbol {\nu }}}\right \rangle }$ is only non-zero when n = m since we take the adiabatic interpolation to be slow enough to avoid diabatic errors. Hence there is no coherence between logical eigenstates in a syndrome space. Finally, we assume that for small τ (because of the decaying correlation function Cj(τ), the values of the above trace for small values of τ are most important), ${\left \langle {\epsilon _m(t), {\boldsymbol {\nu }}}\right \vert } \mathrm {e}^{-\frac {\mathrm {i}}{\hbar } \skew3\bar{H}^*(t)\tau } {\left \vert {\epsilon _m(t), {\boldsymbol {\nu }}}\right \rangle } \approx 1$ . That is, the rotation by the error unitary will be negligible for the τ values of relevance. This is obviously a crude approximation (but one that improves in quality as the correlation time of the environment decreases), and we comment on ways to refine it below.

Using these approximations, we can estimate the above trace, when EjEν is an uncorrectable error, as

Equation (41)

It can be shown using the same approximations that this same expression holds if the second case of equation (38) is true. Thus, we now have a complete rate model for the error dynamics

Equation (42)

for ν correctable, with R+ν,j(t) as above and

Equation (43)

It is important that the first sum in equation (42) is over all elementary errors such that EjEν is correctable, while the second sum is over all elementary errors. This means that we are tracking only the population within the correctable subspace of states—population that leaks out into the uncorrectable subspace is not accounted for. Consequently, the net population in the correctable subspace, $P_{\mathrm { corr}} \equiv \sum _{{\boldsymbol {\nu }}~{\mathrm { correctable}}} P_{\boldsymbol {\nu }}$ , will decrease over time. This correctable population lower bounds the probability that the AQC computation has not failed (due to environment-induced failure modes). This is especially useful for examining the effect of error correction by local cooling which attempts to keep the net error weight small and enhance this success probability.

We note that most of the approximations used above to simplify the expressions for rates across the correctable–uncorrectable boundary can be avoided at the expense of tracking the leakage rate of each state across this boundary. That is, assuming only the absence of diabatic errors, we can write equation (40) as

where $h_m(\tau , t) \equiv {\left \langle {\epsilon _m(t) , {\boldsymbol {\nu }}}\right \vert } \mathrm {e}^{-\frac {\mathrm {i}}{\hbar } \skew3\bar{H}^*(t)\tau } {\left \vert {\epsilon _m(t), {\boldsymbol {\nu }}}\right \rangle }$ . Thus each state in the ν syndrome subspace has a different rate of leakage to the uncorrectable ν(j)⊕ν subspace. We simply have to keep track of these different rates across this boundary and sum them up separately in order to improve the model. We do not explicitly do that here since the rate equation above is accurate enough for our purposes.

To summarize this section, we have been able to derive a description of encoded AQC evolution that completely decouples the adiabatic evolution and the environmentally induced error evolution. This was possible because of the structure of the encoded Hilbert space and properties of stabilizer codes that ensure that logical evolution and evolution due to errors are orthogonal. We note that the above Markov chain random walk description of error-induced evolution requires several ingredients: (i) an almost Markov description of the environment; (ii) the imposition of energy penalties for erroneous states by an EGP control Hamiltonian; and (iii) a protected implementation of the logical Hamiltonian that localizes correctable errors.

We draw attention to previous works on continuous-time error correction that described the optimal tracking of errors as a Markov chain random walk on syndrome subspaces [2224]. However, in contrast to the present work, these formulations were for quantum memories and hence did not consider logical Hamiltonians, and further, did not explicitly consider physical system–environment models for the error dynamics.

5.3.1. Example

To illustrate the utility of the rate equation derived above we simulate dynamics under this equation for the example of a single qubit encoded using the Steane [[7,1,3]] code [25]. The system–bath interaction we consider couples σx and σz of each qubit to an environment, and thus induces both bit-flip and phase errors, explicitly

Equation (44)

where $B^{(i)}_{x/z}$ are bath operators. Under this system–bath coupling, the Steane code is capable of correcting all weight-one errors and most weight-two errors. It is also a perfect code in the sense that every one of the 64 syndrome patterns identifies a weight-one or weight-two error. The syndrome subspaces and connections induced by correctable and uncorrectable errors for this code and system–bath model are shown in figure 2. We simulate equation (42) for this code with some typical bath parameters and the results are shown in figure 3. We see from this graph that when α/kBT > 2 the decay of population in the correctable subspace is heavily suppressed. If we extrapolate to long times the decay is still exponential, but the decay constant is decreased substantially by the error suppression terms in the Hamiltonian (those proportional to α).
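The claim that all 64 syndrome patterns are reached by products of at most two elementary errors can be checked with a short GF(2) computation, using the generator supports of figure 2 (the representation below, tracking only generator supports rather than full Paulis, is our own sketch):

```python
from itertools import combinations

# Steane [[7,1,3]] generators from figure 2, as supports (0-indexed qubits).
# An X_i error trips the Z-type generators containing position i; a Z_i
# error trips the X-type generators containing position i.
z_supports = [{0, 2, 4, 6}, {1, 2, 5, 6}, {3, 4, 5, 6}]  # S1, S2, S3
x_supports = [{0, 2, 4, 6}, {1, 2, 5, 6}, {3, 4, 5, 6}]  # S4, S5, S6

def syndrome(error):
    kind, qubit = error  # ('X', i) or ('Z', i)
    zbits = [int(qubit in s) if kind == 'X' else 0 for s in z_supports]
    xbits = [int(qubit in s) if kind == 'Z' else 0 for s in x_supports]
    return tuple(zbits + xbits)

def xor(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

elementary = [(k, i) for k in 'XZ' for i in range(7)]
patterns = {(0,) * 6}                                  # identity
patterns |= {syndrome(e) for e in elementary}          # one elementary error
patterns |= {xor(syndrome(a), syndrome(b))             # two elementary errors
             for a, b in combinations(elementary, 2)}
print(len(patterns))   # all 2^6 = 64 syndrome patterns are covered
```

The weight-one X (or Z) errors alone cover the seven non-zero patterns on the Z-type (or X-type) generators, and mixed X_iZ_j pairs cover the remaining 49 combinations, giving 1 + 7 + 7 + 49 = 64.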

Figure 2.

Figure 2. The structure of encoded Hilbert space for the Steane [[7,1,3]] code and transitions between syndrome subspaces induced by elementary σx and σz errors. The six stabilizer generators for this code are: S1 = ZIZIZIZ,S2 = IZZIIZZ, S3 =  IIIZZZZ, S4 = XIXIXIX, S5 = IXXIIXX, S6 = IIIXXXX and the syndrome subspaces (circles) are labeled by the decimal equivalent of their binary syndrome pattern ν. The central circle with ν = 0 is the codespace. Black lines (green lines) indicate transitions induced by correctable σx errors (σz errors). The red lines indicate transitions induced by uncorrectable errors.


Figure 3. Population in the correctable subspace as a function of time and $\frac {\alpha }{k_{\mathrm { B}} T}$ . The average time-independent energy of the logical subspace is taken to be $\bar {\epsilon }(t)=50\,\mathrm {kHz}~\forall~t$ . The bath has an Ohmic spectral density with Lorentz–Drude cut-off (see appendix A for details), and parameters: ER = ℏ(0.1 MHz), γ = 3 MHz and $T = \frac {\hbar }{k_{\mathrm { B}}} (1\,{\mathrm { MHz}})$ . The transition rates in the rate equation (42) are taken in their second Markov approximation and so are time-independent. We confirmed that the population in the correctable subspace has only a weak dependence on the bath parameters that are not varied, as long as the Markovianity condition ER/ℏ ≪ γ is met.


5.4. Adding local cooling for error correction

Now we examine the effect of adding local cooling of individual qubits in order to implement a correction mechanism that preferentially populates lower-energy states (the codespace is the lowest-energy subspace by construction). The cooling is modeled as a strong local coupling of all elementary error operators to a reservoir at low temperature. We will use the term 'reservoir' to refer to the low-temperature environment and 'bath' to refer to the uncontrollable environmental degrees of freedom at higher temperature. Thus we add a new interaction and free Hamiltonian to equation (1) of the form $H_{\mathrm { cool}} = \sum _{j=1}^{N_{\mathrm {e}}} E_j\otimes F_j + H_{\mathrm { R}}$ , where Fj are operators in the Hilbert space of the cold reservoir and HR is the free Hamiltonian of the reservoir. This reservoir could physically be a harmonic environment that can be cooled more effectively than other environmental degrees of freedom, or ancillary qubits that are actively optically pumped to a low-temperature state [18, 20]. We will not specify the reservoir details here but simply assume that it is at thermal equilibrium. In this case, one can average over the reservoir degrees of freedom, just as we averaged over the bath degrees of freedom in the previous subsections, to obtain a rate equation for error path populations in the presence of coupling to both environments

Equation (45)

for ν correctable, with R±j(t) being the same time-dependent rates as in equations (36) and (43), and the other time-dependent rates (resulting from the coupling to the cold reservoir) being

Equation (46)

where $D_{j}(\tau ) \equiv {\mathrm {tr}}_{\mathrm { env}}\{\breve {F}_j(\tau )\breve {F}_j(0)\sigma ^{{\mathrm { res}}}_{\mathrm { eq}}\}$ is the quantum correlation function of the reservoir degrees of freedom.

To clarify the effects of cooling, we simplify these rates by taking the second Markov approximation of the bath and reservoir dynamics, and assume $\bar \epsilon (t) \approx \bar \epsilon ~\forall~t$ . This results in time-independent rates of population transfer and the rate equation

Equation (47)

for ν correctable, with rates

Equation (48)

where ${\sf m}(\omega ) = 1/(\mathrm {e}^{\beta _{\mathrm {R}} \hbar \omega }-1)$ is the Bose–Einstein distribution at the temperature of the cold reservoir and $\mathcal {K}(\omega )$ is the spectral density of the reservoir degrees of freedom (${\sf n}(\omega )$ and $\mathcal {J}(\omega )$ are equivalent quantities for the bath and were defined in section 4.1). For simplicity we assume that this spectral density and distribution of modes is the same for all j. Also,

Equation (49)

As before, this dynamical model tracks population in the correctable subspace and treats any leakage outside this subspace as irrecoverable. Thus it lower bounds the probability of successful computation.

For effective cooling we require two conditions: $\mathcal {K}(\omega ) \gg \mathcal {J}(\omega ) ~\forall~ \omega $ and ${\sf m}(\omega ) \ll {\sf n}(\omega ) ~ \forall~ \omega $ . The first stipulates that the reservoir is coupled more strongly to the system than the noisy bath (the coupling is still within the weak-coupling regime required for the Born and Markov approximations utilized in deriving the model), and the second stipulates that the reservoir temperature is lower than the bath temperature. Under these conditions, the rates in equation (48) can be approximated by

This describes coupling to an effective reservoir at a slightly higher temperature than the original cooled reservoir. The average thermal occupation of the new effective reservoir is given by the term in the square brackets above

Equation (50)

and is clearly only perturbatively (in $\mathcal {J}/\mathcal {K}$ ) higher than the original thermal occupation of the cold reservoir, ${\sf m}(\omega )$ . This average occupation also defines a possibly energy-dependent effective temperature of the effective reservoir

Equation (51)

Therefore, if the temperature of the cooled reservoir can be kept well below the energetic barriers imposed by the EGP, then the population deviation from the codespace can be suppressed even in the presence of the perturbing bath-induced errors, i.e. $T^{\mathrm { eff}}(\omega _{\boldsymbol {0},j}) \ll \hbar \omega _{\boldsymbol {0},j}/k_{\mathrm { B}} ~\forall j ~\Rightarrow {\sf n}^{\mathrm { eff}}(\omega _{\boldsymbol {0},j}) \approx 0 ~\forall j~\Rightarrow \gamma (-\omega _{\boldsymbol {0},j}) \approx 0 ~\forall j$ . This is exactly error correction by cooling; we have constructed an effective reservoir that couples to the appropriate degrees of freedom to quench excitations that represent errors.
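
This quenching condition is easy to check numerically. The sketch below makes two assumptions of ours that are not spelled out here: that equation (51) defines T_eff by inverting the Bose–Einstein form, and a schematic effective occupation n_eff = m + (J/K)·n standing in for the bracketed term of equation (50); all parameter values are illustrative.

```python
import math

def bose_einstein(omega, T, hbar=1.0, kB=1.0):
    """Thermal occupation n(omega) = 1/(exp(hbar*omega/(kB*T)) - 1)."""
    return 1.0 / math.expm1(hbar * omega / (kB * T))

def effective_temperature(omega, n_eff, hbar=1.0, kB=1.0):
    """Temperature at which a mode of frequency omega would have
    occupation n_eff (our reading of equation (51))."""
    return hbar * omega / (kB * math.log(1.0 + 1.0 / n_eff))

omega = 2.0                      # EGP gap 2*alpha/hbar (arbitrary units)
m = bose_einstein(omega, 0.2)    # cold-reservoir occupation
n = bose_einstein(omega, 1.0)    # hot-bath occupation
J_over_K = 0.05                  # weak bath relative to cold reservoir
n_eff = m + J_over_K * n         # schematic form of equation (50)

T_eff = effective_temperature(omega, n_eff)
print(n_eff, T_eff)              # occupation stays small; T_eff below the gap
```

The round trip bose_einstein → effective_temperature is an exact inverse, and the weak bath admixture only modestly raises the effective temperature above that of the cold reservoir.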

6. Conditions for effective error correction and a notion of fault-tolerance in adiabatic quantum computing

The dynamical equation derived above enables the identification of conditions required for effective error correction by local cooling in AQC. The first condition is of course that we require a protected Hamiltonian that permits the restoration of population that has leaked from the codespace (or equivalently, keeps excitations induced by local perturbations localized in space). However, the local nature of the cooling imposes another stringent requirement on successful error correction and long-term error-free operation. This is evident from examining the rates of population arriving at and leaving a syndrome subspace. For the codespace (or a correctable subspace close to the codespace) we want the rate of population leaving as a result of further errors to be smaller than the rate of population returning. This is a minimal condition: if it were not satisfied, population initialized in the codespace would rapidly leak out and become uncorrectable. That is, we require γ(ων,j) > γ(− ων,j) if w(EjEν) > w(Eν), where w(Eν) is the weight of the error operator Eν. A consequence of the Kubo–Martin–Schwinger condition for a bath at thermal equilibrium [12] is that these transition rates satisfy detailed balance

Equation (52)

$\gamma(-\omega_{\nu,j}) = \mathrm{e}^{-\beta\hbar\omega_{\nu,j}}\,\gamma(\omega_{\nu,j})$

Therefore γ(ων,j) > γ(− ων,j) if ων,j = 2αϖ(j,ν)/ℏ > 0. But as we see from equation (30), ϖ(j,ν) has no dependence on the weight of the errors and can be positive or negative. The issue is that the energetic cost of an error is dictated by its syndrome pattern (how many stabilizers the error anti-commutes with) rather than by the Pauli weight of the error. This means that it is possible for a high-weight error to have a lower energy than a low-weight error, which is incompatible with error correction by local cooling, since cooling is constructed to drive population toward low-energy states. For example, the steady state of equation (47) has Boltzmann distributed populations across syndrome subspaces, and this distribution is favorable only if the lowest-energy syndrome subspaces also correspond to the lowest-weight error subspaces, so that population concentrates in states close to the codespace and the probability of a correct decoding is maximized.
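
The Boltzmann steady state claimed here follows from detailed balance alone, which the following sketch verifies for a toy three-subspace rate equation with Metropolis-type rates (an illustrative rate choice that satisfies the detailed balance ratio; the energies are arbitrary).

```python
import numpy as np

beta = 1.0
E = np.array([0.0, 1.5, 2.0])    # illustrative syndrome-subspace energies
n = len(E)

# Metropolis-type rates: rate(b->a)/rate(a->b) = exp(-beta*(E[a]-E[b])),
# the detailed balance ratio implied by the KMS condition.
W = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        if a != b:
            W[a, b] = np.exp(-beta * max(E[a] - E[b], 0.0))
for b in range(n):
    W[b, b] = -W[:, b].sum()     # columns sum to zero: probability conserved

# Relax dP/dt = W @ P to its steady state by forward-Euler integration.
P = np.ones(n) / n
for _ in range(50000):
    P = P + 0.01 * (W @ P)

boltzmann = np.exp(-beta * E) / np.exp(-beta * E).sum()
print(P, boltzmann)              # the two agree
```

Whatever the microscopic rates, detailed balance pins the stationary populations to the subspace energies, not to the error weights; this is exactly why an unfavorable energy landscape defeats local cooling.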

However, structuring the energies of syndrome subspaces such that they are ordered by the weight of the error (or distance from the codespace in terms of number of errors) is a challenging problem. This can be done by designing the EGP Hamiltonian accordingly, but at the cost of adding an exponential number of energy penalty terms, most of which are high weight; see appendix B. In fact, we note that this condition of having a structured energy landscape of error states is exactly the condition required to have a self-correcting quantum memory [26–29]. A self-correcting, or resilient, quantum memory is imagined to consist of a lattice of finite-dimensional quantum systems whose ground state is degenerate with a finite energy gap to excitations. The ground state degeneracy is stable to weak, local perturbations and these degenerate states are where the quantum information is stored. The additional key property of self-correcting quantum memories is that they possess thermal stability, meaning that if the temperature is below a threshold temperature, Tc, excitations from a given ground state created by local perturbations can only result in transitions to one of the other degenerate ground states in a time exponential in the system size [29–31]. This is beneficial since any useful computation or storage is expected to be sub-exponential in system size. It is believed that a route to constructing a self-correcting quantum memory is to have a structured energy landscape such that each additional error on a state incurs an energy penalty [28]. There are two primary challenges to this approach. The first is that it is currently unknown exactly how the energy penalties should scale with the system size in order to have a thermally stable quantum memory [32].
A general condition for thermal stability is complicated by entropic considerations, which reveal that the population of a syndrome subspace depends not only on its energetic penalty but also on how many error paths lead to the subspace [28]. The second challenge is that we have only a few examples of many-body systems whose energy landscape can be structured suitably while at the same time satisfying physical restrictions such as locality, low-weight Hamiltonian terms and embedding in three or fewer spatial dimensions. In fact, an efficient local Hamiltonian construction for such a memory in fewer than four spatial dimensions is a significant open problem in quantum information [26–28, 33, 34]. Therefore, we expect that structuring the EGP control Hamiltonian to efficiently implement an energy penalty on erroneous states that depends monotonically on the distance from the codespace (in terms of number of elementary errors) will be a difficult task.

We emphasize that this is a problem that is orthogonal to the problem of constructing a protected Hamiltonian. The latter, which is constructed by manipulating $\skew3\bar{H}_{\mathrm { AQC}}$ , is required to prevent coherent delocalization of excitations by the logical Hamiltonian implementing the AQC. The structured energy landscape we refer to in this section, which is constructed by manipulating HC, is desirable to prevent environmental/thermal processes from taking the system too far from the codespace through a sequence of errors.

This overlap of the conditions required for a self-correcting quantum memory and a long-term stable AQC prompts us to consider conditions for fault-tolerant AQC. At this stage there is no constructive notion of what it means for an adiabatic quantum computation to proceed in a fault-tolerant manner. However, by drawing an analogy to the case of self-correcting quantum memories it may be possible to posit a limited definition of fault tolerance in AQC. In the time-continuous, autonomous models we are considering, there is no 'active error correction' implemented by way of gates and measurements, and so the notion of fault-tolerant computing must necessarily be modified from standard circuit model notions stemming from the threshold theorem [21]. In this spirit, a self-correcting quantum memory can be thought of as possessing two phases: a stable phase when T < Tc, in which quantum information can be reliably stored in the ground states for a time that scales exponentially with system size, and an unstable phase when T > Tc, where thermal fluctuations will corrupt quantum information stored in the degenerate ground space. Such a characterization is analogous to a fault-tolerant quantum computer, which possesses a stable operating phase when errors are below threshold and an unstable phase above threshold [35]. This suggests that we define an environmental-fault-tolerant AQC implementation as one capable of executing AQC evolution and error correction simultaneously, while also possessing thermal stability in the sense that crossing the boundary from correctable to uncorrectable states through environmentally induced processes takes a time that scales exponentially with encoded system size. Note that this definition of fault tolerance is limited, since it does not capture the system's susceptibility to failure modes that are not induced by the system–bath coupling, such as diabatic errors and failure to implement the correct final Hamiltonian.
The dynamical model developed in this paper can be used to formulate sufficient conditions for such fault-tolerant AQC. The analysis in section 5 demonstrates how to implement error correction by local cooling. The remaining step, of stipulating conditions on HC that produce a favorable energy landscape, is treated in appendix B, where we use a simplified dynamical model to numerically explore the effect of various energy landscapes on thermal stability.

Finally, we note that the monotonic energy landscape requirement comes from the fact that our method for entropy reduction is the restrictive local cooling model. That is, a local cooling operation can only act on information collected from a local neighborhood of the whole system (one qubit in the case above), whereas the optimal decoding and correction operation must correlate and compare all the stabilizer measurements across the whole lattice of qubits. Therefore, it may be possible to replace the structured landscape requirement with a continuous-time physical implementation of a near-optimal decoder that acts upon a large fraction of the stabilizer syndrome values. It is unclear what such a physical implementation would look like but recent constructions of physical realizations (as opposed to realizations on a digital processor) of decoders for classical codes may suggest a route forward [36].

7. Discussion

We have analyzed the dynamics of stabilizer code encoded AQC in the presence of a weakly coupled perturbing environment. We constructed an open system model describing error dynamics for encoded AQC that enabled the unification of previously proposed techniques for error suppression, EGP and DD, under the same dynamical framework. The model elucidates the mechanisms behind error suppression for both techniques and allows the calculation of rates of leakage from the codespace. We note that our dynamical model is applicable to any situation where stabilizer encoding is used to protect evolving ground states. Therefore, it could also be useful for modeling the effects of noise on encoded quantum simulation.

Then we extended the model to encompass error correction in encoded AQC by local cooling. The steps taken in deriving this dynamical model clarify the essential physical properties of the system and environment required for the validity of popular Markovian rate models of errors and perturbations used in quantum computing. In particular, we identified several requirements for error correction to be successful in the adiabatic model of quantum computation:

  • (i)  
    The stabilizer code structure should be imposed by EGP as opposed to DD. The latter does not impose real energy penalties on error states and is therefore not compatible with cooling-based error correction.
  • (ii)  
    A choice of logical operators in the encoded adiabatic Hamiltonian, $\skew3\bar{H}_{\mathrm { AQC}}$ , is needed that ensures that local excitations remain localized in space and energy. That is, the eigenstates of the logical Hamiltonian in the codespace should remain eigenstates after the occurrence of an error that promotes the state into an error syndrome subspace. These Hamiltonians, termed protected Hamiltonians, can be constructed using the freedom in defining the logical operators of a stabilizer code.
  • (iii)  
    Error correction can be implemented by coupling local systems (qubits) to a thermalizing cold reservoir with temperature less than the EGP penalty. This reservoir could be implemented by selectively coupling to multi-level ancilla systems and optically pumping them [18].

Finally, we also considered the conditions necessary for long-term, stable ('fault-tolerant') operation of an AQC with error correction implemented through local cooling.

Unfortunately these requirements for error correction and long-term stable operation are extremely demanding experimentally. All the requirements identified above, except for the ability to couple to a cold reservoir, can only be implemented by increasing the weight of the Hamiltonian of the encoded system. Therefore, although we have identified the requirements for performing error correction within an adiabatic model of quantum computation, these requirements are likely too stringent to be practical, at least in the near-term.

Acknowledgments

We acknowledge useful discussions with Sandia's AQUARIUS Architecture team, especially with Robin Blume-Kohout, Anand Ganti and Andrew Landahl. MS also acknowledges useful discussions with David Poulin on the topic of self-correcting quantum memories. This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

Appendix A.: Full rate calculation for Ohmic spectral density

The example environment considered in the main text is a classical bath with exponentially decaying correlations. Here we generalize this to a true quantum environment and explicitly demonstrate that the controlled suppression of population leakage from the codespace holds in this case too. Consider a damped harmonic environment with an Ohmic spectral density with Lorentz–Drude regularization

Equation (A.1)

where ER is the reorganization energy, which quantifies the total system–environment coupling strength, and γ is the inverse of the environment correlation timescale. The quantum correlation function for an environment with such a spectral density is

Equation (A.2)

where β = 1/kBT is the inverse temperature and $\nu _\kappa \equiv \frac {2\pi \kappa }{\beta \hbar }$ are the Matsubara frequencies. For moderate to high temperatures the terms in the summand decay quickly and it is customary to truncate the sum at finite κ. Assuming that the error suppression technique is EGP and computing the rates in the population master/rate equation (equation (19)) yields

Equation (A.3)

where ωj = 2αwj/ℏ, $a_0^\pm = \left [\gamma \cos (\frac {\beta \hbar \gamma }{2}) \pm \omega _j \sin (\frac {\beta \hbar \gamma }{2})\right ]$ . As in the case of a classical model of the environment, the suppression of these transition rates is achieved by two mechanisms: (i) the suppression term 2αwj in the denominator decreases the overall rate of population leakage; (ii) the same term increases the oscillation frequency of the sinusoidal functions in the numerator, thus decreasing the magnitude of integrals of r±j(t).

We plot these rates for some sample parameters in figure A.1.


Figure A.1. Time-dependent transition rates between codespace and error space for a bath with Ohmic spectral density. Solid lines are rj+(t) and dashed lines are rj(t). The error weight is assumed to be wj = 1, and hence ωj = 2α/ℏ. The bath parameters used are γ = 3 MHz, ER = ℏ(0.1 MHz), T = (ℏ/kB)(1 MHz). Three values of energy penalty are shown: α/ℏ = 1 MHz (blue), α/ℏ = 2 MHz (red), α/ℏ = 4 MHz (green). The inset shows $\frac {2}{\hbar ^2} \mathbb {Re} {{\sf C}}_j(\omega )$ , where ${{\sf C}}$ is the one-sided Fourier transform of C(t). In a second Markov approximation, this is the quantity that determines the rates, and the colored lines (the color-to-α mapping is the same as in the main figure) indicate the points at which it is sampled. At long times, r±j(t) approaches $\frac {2}{\hbar ^2} \mathbb{Re} {{\sf C}}_j(\omega _j)$ . This intuitively explains the suppression of r±: at this temperature the Ohmic spectrum dictates that $\mathbb{Re} {{\sf C}}_j(\omega )$ decays quickly when ω ≳ 2 MHz and is negligible when $\omega \lesssim -5\,{\mathrm { MHz}}$ . Note that when ωj > γ, r±j(t) can become negative at short times. This is a well-known problem with time-dependent Redfield rates: the theory is invalid at very short times and the transient behavior can lead to negative rates. There are various solutions to this problem, e.g. [37], but we do not pursue them here since the long-time behavior is of most interest to us and these transient effects are negligible in that regime.


Appendix B.: Simplified thermal stability analysis

In the main text we reduced the open system dynamics of an encoded AQC with a protected logical Hamiltonian to a Markov random walk description with time-independent rates when all Markov approximations of the environment are made. Such a description is also valid for lattice-based quantum memories encoded using a stabilizer code Hamiltonian (e.g. the abelian toric code [38]). This description can be used to analyze the properties the quantum code must have (which translate to properties of the energy penalty enforcing Hamiltonian, $H_{\mathrm{C}}^{\mathrm{EGP}}$) for the type of thermal stability needed for long-term operation of a quantum memory or AQC. We define a thermally stable system in this context as one in which the time taken for a sequence of physical errors (stemming from a system–bath coupling) to build to an uncorrectable logical error scales exponentially in the 'system size'. For quantum memories this system size is often defined as the number of physical qubits (e.g. the number of qubits in a two-dimensional lattice encoding k qubits in its ground state). Alternatively, if one is building a system for computation from nl single qubits, each encoded in a concatenated code, then it is desirable for the system to remain correctable for a time exponential in the number of logical qubits, nl, since any efficient computation will execute for a time less than this. Hence, in such systems thermal stability means that the time for which population remains in the correctable subspace scales exponentially with the number of logical qubits. The question we address in this appendix is: what scaling of the energy penalty Hamiltonian with nl is required for this type of thermal stability?

As discussed in the main text, this type of thermal stability is intimately related to the type of energy landscape the errors/excitations experience. The most straightforward way to do such a thermal stability analysis is to simulate error dynamics, using for example a Markov chain Monte Carlo algorithm, with physically motivated rates calculated from the derived expressions. In this appendix, we take an alternate route and use a key simplification to relate this problem to a one-dimensional (1D) Markov chain hitting time problem whose analytical solution can be derived easily.

Consider a non-degenerate code whose syndrome subspace population dynamics are described by equation (42), and furthermore, consider the full Markov limit where the rates of population transfer become time-independent and dependent only on the energy difference between the syndrome subspaces. To map the dynamics to a discrete-time Markov random walk we choose a time discretization Δt and rewrite the dynamics as

Equation (B.1)

for ν correctable. Pν[n] ≡ Pν(nΔt), and the coefficients are probabilities defined as

Equation (B.2)

Recall that ων,j is the frequency difference between the syndrome subspaces connected by the particular transition. The spectral density and temperature in the above expression can be properties of the environment, or renormalized versions that result from coupling to a cold reservoir (e.g. equation (50)). This discretization of the dynamics makes the following analysis easier and we shall see below that the particular choice of Δt does not affect the results.

In figure B.1 we draw a time-discretized version of the encoded state space diagrams in the main text (e.g. figure 2). Here, all states with a certain error weight are grouped together in columns. There are $\binom{N_{\mathrm{e}}}{w}$ states in column w, and all states in column w have w transitions back to column w − 1 and (Ne − w) transitions forward to column w + 1. Crossing the red line with any transition results in an uncorrectable state. Since we are focusing on non-degenerate codes the red transitions actually lead back to the drawn syndrome subspaces. However, in the following we will treat any transition across the correctable–uncorrectable boundary as leakage and not track this population.
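
This column bookkeeping can be checked directly; here Ne = 14 corresponds to the σx and σz errors on each of the seven qubits of the Steane example.

```python
from math import comb

Ne = 14                                  # elementary errors (2 per qubit x 7)

for w in range(5):
    states = comb(Ne, w)                 # states in column w
    # Each column-w state has (Ne - w) forward transitions and each
    # column-(w+1) state has (w + 1) backward transitions, so edges
    # counted from either side of the w <-> w+1 boundary must match:
    assert states * (Ne - w) == comb(Ne, w + 1) * (w + 1)
    print(w, states, Ne - w, w)
```

The matching edge counts follow from the identity $\binom{N_{\mathrm{e}}}{w}(N_{\mathrm{e}}-w) = \binom{N_{\mathrm{e}}}{w+1}(w+1)$; the rapid growth of $\binom{N_{\mathrm{e}}}{w}$ with w is the entropic factor that reappears in the hitting-time expression (B.6).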


Figure B.1. A redrawing of the structure of encoded Hilbert space for an arbitrary non-degenerate code with distance d and number of correctable errors $n_{\mathrm {c}} = \lfloor \frac {d-1}{2} \rfloor $ . The syndrome subspaces are organized into columns with column w containing all syndrome subspaces with errors of weight w associated with their states. Arrows indicate transitions between states and transitions across the red line create logical errors.


At this point we utilize a critical simplification and equate all transition probabilities from column w to w + 1, denoting this probability by πw→(w+1). Similarly, all transition probabilities from column w to w − 1 are taken to be the same and denoted by πw→(w−1). This simplification can result from a choice of EGP Hamiltonian that enforces an energy penalty on each state that depends only on the weight of the error assigned to the syndrome value of that state; i.e. a Hamiltonian of the form

Equation (B.3)

where α > 0 and δ(w) is a scaling factor for states with weight w errors. We also define Δw ≡ δ(w) − δ(w − 1). It is clear that this Hamiltonian is extremely high weight and therefore not practical. However, the required simplification can also result from a replacement of all transition probabilities from column w to w + 1 by their maximum; i.e. $\pi _{w\rightarrow (w+1)} = \max \{\pi _{{\boldsymbol {\nu }},j} | \mathbf {w}(E_\nu )=w ~{\mathrm { and}}~ \mathbf {w}(E_j E_\nu )=w+1\}$ . Then π(w+1)→w is the probability of the corresponding backward transition and Δw is the energy difference between the states that are connected by this transition. In this case the following can be viewed as a worst-case analysis where πw→(w+1) is arrived at by choosing the maximum transition probability.

Under the above simplification the Markov chain in figure B.1 satisfies conditions for strong lumpability [39] and we can group all states in a given column into one and recover an equivalent 1D Markov chain, as in figure B.2. Any transition across the correctable–uncorrectable boundary is treated as a transition into an absorbing state, Ω. The forward and backward transition rates in this 1D Markov chain are explicitly

Equation (B.4)

with $\pi _{w\rightarrow (w+1)} = \Delta t \frac {2\mathcal {J}(-\alpha \Delta _{w+1})}{\hbar }[{\sf n}(-\alpha \Delta _{w+1})+1]$ and $\pi _{w\rightarrow (w-1)} = \Delta t \frac {2\mathcal {J}(\alpha \Delta _w)}{\hbar }[{\sf n}(\alpha \Delta _w)+1]$ . From the detailed balance condition $\frac {\pi _{w\rightarrow (w-1)}}{\pi _{(w-1)\rightarrow w}} = \mathrm {e}^{\beta \alpha \Delta _w}$ , we know that $\frac {q_w}{p_{w-1}} = \mathrm {e}^{\beta \alpha \Delta _w}$ .
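
The quoted detailed balance ratio follows from these rate forms alone once the spectral density is extended as an odd function of frequency, since $\mathcal{J}$ then cancels in the ratio. The sketch below checks this numerically for an Ohmic form with Lorentz–Drude cutoff; we use units with ħ = kB = 1, illustrative parameters, and drop the Δt and lumping multiplicities because they also cancel in the ratio.

```python
import math

def n_bose(x, beta):
    """Bose occupation 1/(e^{beta*x} - 1); valid for x < 0 as well."""
    return 1.0 / math.expm1(beta * x)

def J_ohmic(x, gamma=5.0):
    """Odd Ohmic spectral density with Lorentz-Drude cutoff (schematic;
    units hbar = kB = 1, so x is both an energy and a frequency)."""
    return x * gamma / (x * x + gamma * gamma)

def pi_down(x, beta):
    """Relaxation rate across an energy barrier x > 0."""
    return 2.0 * J_ohmic(x) * (n_bose(x, beta) + 1.0)

def pi_up(x, beta):
    """Excitation rate across the same barrier (negative-frequency form)."""
    return 2.0 * J_ohmic(-x) * (n_bose(-x, beta) + 1.0)

beta, x = 0.7, 1.3                  # illustrative temperature and barrier
print(pi_down(x, beta) / pi_up(x, beta), math.exp(beta * x))
```

Both rates come out positive (the two sign flips cancel) and their ratio is exactly $\mathrm{e}^{\beta x}$, independent of the spectral density.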


Figure B.2. A 1D Markov chain representation of figure B.1, obtained when the states lying in each column can be lumped. pw and qw are transition probabilities and rw = 1 − pw − qw. Note that q0 = qΩ = 0.


Figure B.2 describes a 1D Markov chain with one reflecting and one absorbing boundary condition. We are ultimately interested in the mean hitting time to the absorbing state (or equivalently the mean absorption time) of a random walk that is initialized at the w = 0 state. This can be easily calculated using a first step analysis [40], and results in

Equation (B.5)

where tabs is the stochastic absorption time. Note that there is no real Δt dependence in this expression since pw∝Δt, and hence the result is not dependent on the choice of discretization time. Rewriting this expression in terms of the ratios $\frac {q_w}{p_{w-1}}$ , we get

Equation (B.6)

where $\tilde p_w = p_w/\Delta t$ . This quantity seems to grow exponentially with each positive energy barrier (Δw > 0) as expected, but also contains combinatorial factors (in $\tilde p_w$ ) that encode the entropic contribution to the average hitting time.
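
The first-step analysis behind this expression can be reproduced by solving the linear system for the mean absorption time of the chain in figure B.2. The sketch below uses the uniform-barrier, per-edge rates quoted after equation (B.7), together with our assumption that the lumped probabilities take the form $p_w = (N_{\mathrm{e}} - w)\pi_\uparrow$ and $q_w = w\,\pi_\downarrow$; the prefactor j and all parameter values are illustrative.

```python
import numpy as np
from math import exp

def mean_hitting_time(Ne, nc, lam, j=1.0, dt=1e-3):
    """Mean absorption time of the lumped 1D chain of figure B.2 via
    first-step analysis: eta_w = dt + p_w*eta_{w+1} + q_w*eta_{w-1}
    + r_w*eta_w, with eta_{nc+1} = 0 at the absorbing state."""
    pi_up = dt * j * exp(-lam) / (1.0 - exp(-lam))  # per-edge excitation
    pi_down = pi_up * exp(lam)       # detailed balance: q_w/p_{w-1} = e^lam
    n = nc + 1                       # unknowns eta_0 .. eta_nc
    A = np.zeros((n, n))
    b = np.full(n, dt)
    for w in range(n):
        p = (Ne - w) * pi_up         # forward probability p_w (assumed form)
        q = w * pi_down              # backward probability q_w
        A[w, w] = p + q              # from (1 - r_w) with r_w = 1 - p - q
        if w + 1 < n:
            A[w, w + 1] = -p
        if w >= 1:
            A[w, w - 1] = -q
    return np.linalg.solve(A, b)[0]  # eta_0: walk started in the codespace

print(mean_hitting_time(14, 3, 1.0), mean_hitting_time(14, 3, 3.0))
```

The result is independent of Δt (both A and b scale linearly with it, so it cancels) and grows rapidly with the barrier λ = βαΔ̄, the behavior explored in figure B.3.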

Figure B.3 shows the results of numerical solutions of equation (B.6) for a system encoded in a concatenated [[7,1,3]] Steane code with elementary errors σx and σz on each qubit—i.e. the system–environment interaction is as in equation (44) of the main text (see the figure caption for bath parameters). Recall that at concatenation level t, the number of physical qubits per logical qubit is $7^t$ and the distance of the code is $3^t$. Here we consider t = 4 and assume all errors up to weight $n_{\mathrm {c}} = \lfloor \frac {3^4-1}{2} \rfloor $ can be corrected. Figure B.3 shows how the average hitting time scales with different energy landscapes imposed by the choice of energy barriers. All energy barrier steps are taken to be the same for simplicity—i.e. $\Delta _w = \bar {\Delta } ~ \forall~ w$—and we consider how $\bar {\Delta } \rightarrow \bar {\Delta }(n_{\mathrm {l}})$ must scale with the number of logical qubits nl (each logical qubit is encoded in the concatenated Steane code) to achieve various scalings of the average hitting time with nl. In (a) the cost of each additional error scales logarithmically in nl, while in (b) and (c) this cost scales as a square root and linearly, respectively.


Figure B.3. Log of the mean hitting time versus number of logical qubits (nl) for three types of scaling for the energetic barrier Δw: (a) logarithmic scaling, (b) square root scaling and (c) linear scaling with nl. The penalty energy scale is set by α/ℏ = 3 MHz. The encoding for each logical qubit is the Steane code to four levels of concatenation, and the three curves in each graph indicate behavior for three different bath temperatures. The bath was chosen to have Ohmic spectral density with Lorentz–Drude regularization (equation (A.1)), with parameters: ER = ℏ(0.1 MHz), γ = 200 MHz (the regularization rate γ is chosen to be very large to be consistent with the second Markov approximation of the rates). (a) $\bar \Delta (n_{\mathrm {l}})=\log n_{\mathrm {l}}$ , (b) $\bar \Delta (n_{\mathrm {l}})=\sqrt {n_{\mathrm {l}}}$ , (c) $\bar \Delta (n_{\mathrm {l}})=n_{\mathrm {l}}$ .


Two observations to make from figure B.3 are:

  • (i)  
    As the number of (logical and physical) qubits is increased, log of the average hitting time scales in the same manner as the scaling of $\bar {\Delta }(n_{\mathrm {l}})$ . Hence to achieve exponential scaling of average hitting time—i.e. exponential scaling of the failure time for a computation or memory—we require the energy cost of each additional error to scale linearly in system size for this concatenated code approach.
  • (ii)  
    This scaling of log(η0) as $\bar {\Delta }(n_{\mathrm {l}})$ is only recovered once the energy barrier is above a certain level, as evidenced by the initially flat trend of log(η0) when kBT/ℏ = 1.0 and 1.5 MHz in figures B.3(a) and (b). This behavior is seen more clearly when we examine the average hitting time as a function of the number of logical qubits and the energy barrier scale α, as in figure B.4 (plotted only for $\bar {\Delta }(n_{\mathrm {l}})=\log (n_{\mathrm {l}})$ ). This plot shows that the scaling of log(η0) with nl turns over from a flat line to one that behaves as $\bar {\Delta }(n_{\mathrm {l}})$ at some critical value of α (about 1 MHz in this case). That is, when α is below this critical value the hitting time remains constant, even if $\bar {\Delta }$ scales with nl.


Figure B.4. Log of the mean hitting time as a function of α and nl (for $\bar {\Delta } = \log (n_{\mathrm {l}})$ ). The bath temperature is kBT/ℏ = 0.5 MHz and all other parameters are as in figure B.3. The similarity of the scaling of log hitting time and $\bar {\Delta }$ (both with nl) is only present after α ≳ kBT.


To understand these two interesting aspects of the average hitting time we further analyze the analytical expression for η0. We begin with the form of equation (B.6) when $\Delta _w = \bar {\Delta }$ is independent of w:

Equation (B.7)

where $\lambda \equiv \beta \alpha \bar {\Delta }$ , and we have used the fact that in this case of independent energy barrier steps, $\tilde {p}_w = \binom{N_{\mathrm {e}}}{w}(N_{\mathrm {e}}-w)\,\pi _{\bar {\Delta }}$ , with $\pi _{\bar {\Delta }} = j_{\bar \Delta }\, \frac {\mathrm {e}^{-\lambda }}{1-\mathrm {e}^{-\lambda }}$ and $j_{\bar \Delta } \equiv \frac {2 \mathcal {J}(\alpha \bar {\Delta }/\hbar )}{\hbar }$ . Now, we will lower bound this quantity by replacing n + k in the summand by its maximum value, nc, and use

Equation (B.8)

when Ne ≫ nc. To see the validity of this bound note that for a concatenated [[n,k,d]] code $N_{\mathrm {e}} \propto n^t$ and $n_{\mathrm {c}} = \lfloor \frac {d^t-1}{2} \rfloor $ , and since n > d by the quantum Singleton bound [21], Ne ≫ nc for t ⩾ 2. For instance, the [[7,1,3]] Steane code concatenated to t = 4 levels yields $N_{\mathrm {e}} \propto 7^4 = 2401$ and nc = 40. Using this inequality and the expression for $\pi _{\bar {\Delta }}$ above, we get the lower bound

Equation (B.9)

Consider this bound in the regime valid for the simulations in figure B.3, $\lambda = \frac {\alpha \bar {\Delta }}{k_{\mathrm {B}} T} \gg 1$ , i.e. when each energy barrier step is larger than the thermal energy. In this regime the log of equation (B.9) is well approximated by

Equation (B.10)

where H(p) ≡ − plogp − (1 − p)log(1 − p) is the binary entropy function (the log is base 2) and we have used the approximation $\log \binom{n}{k} \approx n H(k/n)$ , valid when n ≫ 1 and k ≫ 1. For the concatenated-code example considered in figure B.3, $\lambda = \beta \alpha \bar {\Delta }(n_{\mathrm {l}})$ and $N_{\mathrm {e}} = 2(7^4)\,n_{\mathrm {l}}$ , and substituting these values,

Equation (B.11)

This expression explains both of the observations listed above if we examine its behavior with respect to nl. We see that for small nl the second, entropic factor dominates unless $\bar {\Delta }(n_{\mathrm {l}})$ is linear in nl. This is the reason for the small, almost constant hitting time at small nl on some curves in figures B.3(a) and (b). However, as nl grows the second factor becomes logarithmic in nl, since $x H(1/x) \rightarrow \log (x)$ as $x \rightarrow \infty $ , and hence the first factor will dominate as long as $\bar {\Delta }(n_{\mathrm {l}})$ grows at least logarithmically in nl. Precisely where this turnover happens is dictated by β, α and the parameters of the code. This shows that the scaling of the energy barrier step with the number of logical qubits eventually dictates the hitting time behavior, $\eta _0(n_{\mathrm {l}}) \sim \mathrm {e}^{\bar {\Delta }(n_{\mathrm {l}})}$ , as long as this scaling is logarithmic or faster, and λ ≫ 1. This suggests that thermal stability, defined as exponential scaling of the hitting time with nl, is achievable only when the energy barrier steps $\bar {\Delta }$ grow linearly in nl, a demanding requirement.
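The two approximations invoked in this argument, $\log \binom{n}{k} \approx n H(k/n)$ and the growth of $x H(1/x)$ as $\log (x)$, are easy to check numerically. The following sketch (written for this discussion, not from the paper) does so with base-2 logs:

```python
from math import comb, log2

def H(p):
    """Binary entropy with base-2 logs; H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Stirling-type approximation log2 C(n, k) ~ n H(k/n); the relative
# error shrinks as n and k grow.
for n in (100, 1000, 10000):
    k = 3 * n // 10
    print(n, log2(comb(n, k)) / (n * H(k / n)))

# The entropic factor x H(1/x) grows like log2(x) for large x (the
# ratio tends to 1, up to a slowly vanishing additive correction).
for x in (1e2, 1e4, 1e8):
    print(x, x * H(1 / x) / log2(x))
```

Both ratios approach 1, the first from below and the second from above, consistent with the turnover argument above.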

One can follow a similar line of analysis to show the following scaling properties of η0: (i) η0 is doubly exponential in the concatenation level t; and (ii) in the λ ≪ 1 regime, the hitting time decays with nl even if $\bar {\Delta }(n_{\mathrm {l}})$ scales linearly in nl.
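The qualitative dependence of the hitting time on λ and nc can also be seen in a toy birth–death chain over error weight w. The sketch below is an illustration written for this appendix, not a solution of equation (B.6): it ignores the binomial multiplicities $\tilde {p}_w$ and simply assigns every upward (error) step the suppressed rate $j\,\mathrm {e}^{-\lambda }/(1-\mathrm {e}^{-\lambda })$ and every downward (correction) step a rate larger by the detailed-balance factor $\mathrm {e}^{\lambda }$:

```python
import math

def toy_hitting_time(n_c, lam, j=1.0):
    """Mean first-passage time from w = 0 to w = n_c + 1 for a uniform
    birth-death chain in error weight w (a toy model, not equation (B.6)).
    Upward rate: j * exp(-lam) / (1 - exp(-lam)); the downward rate is
    larger by the detailed-balance factor exp(lam)."""
    up = j * math.exp(-lam) / (1.0 - math.exp(-lam))
    down = up * math.exp(lam)
    # h = mean time to first reach w + 1 starting from w, built up by the
    # standard recursion h(0) = 1/up, h(w) = 1/up + (down/up) * h(w - 1).
    h, total = 0.0, 0.0
    for w in range(n_c + 1):
        h = 1.0 / up + (down / up) * h
        total += h
    return total

# For lam >> 1 the total time grows roughly as exp(lam * n_c), so raising
# either the barrier scale (lam) or the correctable weight (n_c) helps:
print(toy_hitting_time(10, 3.0) / toy_hitting_time(9, 3.0))
```

In this toy model each extra correctable error multiplies the hitting time by roughly $\mathrm {e}^{\lambda }$, mirroring the exponential dependence on $\lambda n_{\mathrm {c}}$ found in the analytical bound.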

In section 4.1.1 we pointed out that the rate of leakage from the codespace can be exponentially suppressed if the spectral density of the bath decays exponentially and the EGP energy penalty can be scaled linearly with nl (see also [3]). We seem to have arrived at the same requirement on energy penalties in this analysis of thermal stability. However, the key difference here is that the exponential scaling is achieved with no strong assumption on the bath spectral density. The dependence of equation (B.11) on the spectral density at $\bar \Delta $ is logarithmic, and therefore it will not dominate the behavior unless the spectral density increases exponentially with energy, which is unphysical. Therefore, the addition of the protected Hamiltonian and the concatenated error correction mechanism allows exponential (in nl) stability of AQC operation independent of the behavior of the spectral density of the environment.

Finally, we note that several approximations entered the above analysis, which is the reason this is a simplified analysis of thermal stability. Most importantly, we did not consider physical locality and weight restrictions on the EGP Hamiltonian and possible energy penalties. It would be interesting to extend this analysis to include physical restrictions and the structure of degenerate codes.

Footnotes

  • This expression is strictly only valid for ωj = 2αwj/ℏ ≠ 0, otherwise it diverges. The issue is that the second Markov limit, $t \rightarrow \infty $ , and the ωj → 0 limit have to be taken carefully when both are required. A careful analysis [12] reveals that when ωj → 0, these rates become $r^{\pm }_j = \frac {2 k_{\mathrm { B}} T}{\hbar ^2} \frac {\partial \mathcal {J}(\omega )}{\partial \omega } |_{\omega =0}$ .

  • The weight of a multi-qubit Pauli operator is the number of non-identity terms in the tensor product.

  • Non-local cooling of arbitrary degrees of freedom of a many-body system is a powerful resource that enables efficient error correction and state preparation [19, 20] but is physically unrealistic.

  • In the following, a 'no error' is considered to be in the set of correctable errors with Eν=0 = 1.

  • For degenerate codes it is more correct to refer to Eν as the 'correction operation' since ν labels a syndrome subspace and multiple correctable errors can map to the same syndrome space for a degenerate code. But, for simplicity, we refer to Eν as an error with this subtlety implied for degenerate codes.

  • Note that since we are using the canonical protected Hamiltonian, the deformation factor λν = 1 and hence there is no ν dependence on this average energy.

  • We note that a critical part of the definition of a self-correcting quantum memory, the degenerate ground space, is unnecessary in the case of a self-correcting AQC implementation.

  • We note that although this analogy is informative, there is a critical missing step in it: fault-tolerant quantum computing captures the fact that the computation is resilient to faulty implementations of the gates used for error correction as long as the error rates are below the threshold. As far as we are aware there is no analysis of the stability of self-correcting quantum memories when the non-unitary dynamics that implement cooling and equilibration to the operating temperature, T < Tc, contain small errors.
