Abstract
Gradient Ascent Pulse Engineering (GRAPE) is a popular technique in quantum optimal control, and can be combined with automatic differentiation (AD) to facilitate on-the-fly evaluation of cost-function gradients. We illustrate that the convenience of AD comes at a significant memory cost due to the cumulative storage of a large number of states and propagators. For quantum systems of increasing Hilbert space size, this imposes a significant bottleneck. We revisit the strategy of hard-coding gradients in a scheme that fully avoids propagator storage and significantly reduces memory requirements. Separately, we present improvements to numerical state propagation to enhance runtime performance. We benchmark runtime and memory usage and compare this approach to AD-based implementations, with a focus on pushing towards larger Hilbert space sizes. The results confirm that the AD-free approach facilitates the application of optimal control for large quantum systems which would otherwise be difficult to tackle.
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
The rapidly evolving field of quantum information processing has generated much interest in quantum optimal control for tasks such as state preparation [1, 2], error correction [3, 4] and realization of logical gates [5–8]. This methodology has been implemented in various quantum systems [9], such as nuclear magnetic resonance [10–15], trapped ions [16, 17], nitrogen-vacancy centers in diamonds [18–24], Bose–Einstein condensation [25–27] and neutral atoms [28, 29].
One of the prominent techniques for implementing quantum optimal control is Gradient Ascent Pulse Engineering (GRAPE). The central goal of optimal control is to adjust control parameters in such a way that the infidelity of the desired state transfer or gate operation is minimized. In GRAPE, the minimization is based on gradient descent and thus generally requires the evaluation of gradients. Automatic differentiation (AD) [30] is a convenient tool that has also made an impact on quantum optimal control with GRAPE [31–34]. Internally, AD builds a computational graph and applies the chain rule to automatically compute the gradient of a given function.
However, the convenience of AD comes at the cost of memory required for storing the full computational graph. The use of semi-automatic differentiation (semi-AD) [35] partially reduces the memory overhead compared to full automatic differentiation (full-AD). This reduction can still be insufficient to tackle optimal control for quantum systems of rapidly growing size [36, 37]. This motivates us to revisit the original GRAPE scheme [12] which is based on hard-coded gradients (HG), and has the smallest possible memory cost. We perform a scaling analysis of memory usage and runtime, and thereby develop a decision tree guiding the optimal choice of strategy among HG, semi-AD and full-AD. We exemplify the scaling behavior with benchmarks for concrete optimization tasks.
The outline of our paper is as follows. Section 2 briefly reviews GRAPE and introduces necessary notation. In section 3, we illustrate derivation and memory-efficient implementation of the gradients for the most commonly used cost function contributions. Our numerical implementation of HG further improves the efficiency of state propagation. In section 4, we analyze and compare the scaling of runtime and memory usage scaling for HG, AD and semi-AD, and present results from concrete benchmark studies. In section 5, we formulate a decision tree that facilitates the choice of optimal numerical strategy for GRAPE-based quantum optimal control. We summarize our findings and present our conclusions in section 6.
2. Brief sketch of GRAPE
We briefly sketch the basics of GRAPE [12], mainly to establish the notation used in subsequent sections.
Consider a system described by the Hamiltonian H(t) = Hs + a(t) hc .
Here, Hs denotes the static system Hamiltonian and hc is an operator coupling the classical control field a(t) to the system 1 . Quantum optimal control aims to adjust a(t) such that a desired unitary gate or state transfer is performed within a given time interval 0 ≤ t ≤ T. To facilitate optimization, the total control time T is divided into N intervals of duration Δt = T/N, and the control field is specified by its amplitudes at the discrete times tn = nΔt (n = 1, 2,...,N). The discretization interval Δt is chosen such that control amplitudes are approximately constant within each Δt. This treatment closely mirrors the actual drive output of an arbitrary waveform generator, where Δt can be chosen to align with the available resolution. We denote the time-discretized values by an := a(tn )
and group these to form the vector a = (a1, a2,...,aN ). Under the influence of this piecewise-constant control field, the system's quantum state ∣ψn 〉:=∣ψ(tn )〉 evolves by a single time step according to ∣ψn 〉 = Un ∣ψn−1 〉.
Here, Un is the short-time propagator Un = exp(−iHn Δt),
where Hn = Hs + an hc . Optimization of the control field proceeds via minimization of a cost function C. Typically, C includes the state-transfer or gate infidelity, along with additional cost contributions employed to moderate pulse characteristics such as maximal power or bandwidth. To this end, a composite cost function C(a) = Σν αν Cν is constructed, where αν are empirically chosen weight factors and Cν are individual cost contributions. GRAPE utilizes the method of steepest descent to minimize C(a) by updating the control amplitudes according to the cost function gradient, an(j+1) = an(j) − ηj ∂C/∂an .
Here, j enumerates steepest-descent iterations and ηj > 0 is the learning rate governing the step size.
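To make the update rule concrete, the following self-contained sketch runs a few steepest-descent iterations for a single-qubit state transfer. It is not the paper's implementation: the qubit example, pulse parameters, and the finite-difference gradient are illustrative assumptions (the paper uses analytical or automatic gradients).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative single-qubit example (assumed parameters, not from the paper):
# steer |0> -> |1> with a piecewise-constant sigma_x drive, updating the
# amplitudes by steepest descent, a^(j+1) = a^(j) - eta * dC/da.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H_s, h_c = 0.5 * sz, sx                     # static and control Hamiltonians
psi0 = np.array([1, 0], dtype=complex)      # initial state |0>
phi_T = np.array([0, 1], dtype=complex)     # target state |1>
N, dt, eta = 10, 0.2, 0.3                   # time steps, step size, learning rate

def infidelity(a):
    """C1 = 1 - |<phi_T|psi_N>|^2 under piecewise-constant controls a."""
    psi = psi0
    for a_n in a:
        psi = expm(-1j * (H_s + a_n * h_c) * dt) @ psi
    return 1.0 - abs(np.vdot(phi_T, psi)) ** 2

a = 0.5 * np.ones(N)                        # initial guess for the pulse
history = [infidelity(a)]
for _ in range(100):                        # steepest-descent iterations
    grad = np.empty(N)
    for n in range(N):                      # finite-difference gradient, used
        da = np.zeros(N)                    # here for brevity; the paper uses
        da[n] = 1e-6                        # hard-coded or AD gradients
        grad[n] = (infidelity(a + da) - infidelity(a - da)) / 2e-6
    a = a - eta * grad
    history.append(infidelity(a))
```

Running the loop drives the infidelity down from its initial value; the fixed learning rate stands in for the (generally adaptive) step-size control of practical optimizers.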
3. Calculation of gradients
A critical ingredient for GRAPE is the computation of cost function gradients. One popular option is to leverage automatic differentiation for this purpose, as implemented in Tensorflow [38] and Autograd [39]. A clear advantage of this strategy is the ease with which complicated cost function gradients are evaluated automatically at runtime. This convenience, however, is bought at the cost of significant memory consumption, which becomes prohibitive when optimal control is applied to large quantum systems. Therefore, we are motivated to revisit hard-coded gradients in a computationally efficient scheme for key cost contributions.
3.1. Analytical gradients
In this subsection, we discuss the analytical gradients of two key cost contributions. Expressions for gradients of other cost contributions are given in appendix A.
In state-transfer problems, a central goal of quantum optimal control is to minimize the state-transfer infidelity given by C1 = 1 − ∣〈ϕT ∣ψN 〉∣2 , where ∣ϕT 〉 is the target state and ∣ψN 〉 is the state realized at the final time tN . The gradient of C1 takes the form ∂C1 /∂an = −2 Re[〈ψN ∣ϕT 〉〈ϕn ∣(∂Un /∂an )∣ψn−1 〉],
where the state ∣ϕn 〉 obeys the recursive relation ∣ϕn−1 〉 = Un† ∣ϕn 〉 with ∣ϕN 〉 = ∣ϕT 〉.
This can be interpreted as backward propagation.
In addition to state transfer, unitary gates are another operation of interest. Gate optimization can be achieved by minimizing the gate infidelity C1g = 1 − ∣Tr(UT† UR )∣2 /d2 . Here, UT is the target unitary gate and UR = UN ⋯ U1 is the actually realized gate. For a given orthonormal basis (h enumerates the basis states), the gradient of C1g can be written as
a sum over basis states. The state entering this sum is obtained as follows: apply the target unitary UT to the basis state ∣h〉 and backward propagate the result to time tn ,
Equations (7) and (9) provide the analytical expressions needed for gradient descent. In the next two subsections, we will discuss how to numerically evaluate these expressions in a memory-efficient way.
3.2. Numerical implementation of hard-coded gradient
The calculation of gradients in equations (7) and (9) requires numerical evaluation of expressions of the form Un ∣Ψ〉 and (∂Un /∂an )∣Ψ〉. Here, the short-time propagator Un is given by a matrix exponential eA with A = − iHn Δt.
Numerically evaluating eA ∣Ψ〉 is not without challenge. As pointed out by Moler and Van Loan [40], several methods exist, yet none of them is truly optimal. Three methods ranked highly in reference [40] are based on solving ordinary differential equations (ODEs), matrix diagonalization, and scaling and squaring (combined with series expansion). ODE solving, in this context, tends to be most expensive in runtime [35]. We mainly focus on scaling and squaring, but comment on situations where matrix diagonalization is preferable in section 5.
Scaling and squaring [41] is based on the identity eA = (eA/s )s , where the right-hand-side matrix exponential involves the matrix B = A/s, whose norm is reduced compared to ∥A∥. This generally allows for a lower truncation order m of the exponential series 2 , eB ≈ Σk=0m Bk /k!. Typically, this m is smaller than the one required for the original eA . Our goal is to approximate the state vector eA ∣Ψ〉 ≈ (Σk=0m Bk /k!)s ∣Ψ〉, obtained by applying the truncated series to the vector s times in succession.
According to the last expression, approximation of eA ∣Ψ〉 can thus proceed by repeated application of B = A/s to a vector, without ever invoking matrix-matrix multiplication. This boosts runtime performance from O(d3) to O(d2).
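The matrix-vector-only evaluation can be sketched as follows; the fixed pair (m, s) is an illustrative placeholder, whereas the scheme described below selects it from an error bound.

```python
import numpy as np
from scipy.linalg import expm

def expm_vec(A, psi, s=8, m=12):
    """Approximate e^A |psi> = (e^{A/s})^s |psi> using only matrix-vector
    products: a truncated Taylor series of e^{A/s} is applied to the
    vector s times. The fixed (m, s) here is an illustrative placeholder."""
    B = A / s
    for _ in range(s):
        term = psi
        acc = psi.copy()
        for k in range(1, m + 1):
            term = B @ term / k          # builds B^k |psi> / k! iteratively
            acc = acc + term
        psi = acc
    return psi

# Sanity check against the dense matrix exponential for A = -i H dt.
rng = np.random.default_rng(1)
d = 50
W = rng.normal(size=(d, d))
H = (W + W.T) / 2                        # random Hermitian generator
A = -1j * H * 0.05
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0
err = np.linalg.norm(expm_vec(A, psi) - expm(A) @ psi)
```

Note that the loop only ever multiplies B into a vector, which is what yields the Θ(d2) (or Θ(κ) for sparse storage) cost per multiplication.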
To proceed with the evaluation of eA ∣Ψ〉, the truncation order m and scaling factor s are determined in such a way that the error remains below the error tolerance τ,
Here, 〚•〛 denotes the floating point representation of •. Since this error εA,∣Ψ〉 is difficult and costly to evaluate, we instead derive an upper bound EA (m, s) [see appendix C] and resort to the inequality
For a given A, this inequality generally has multiple solutions for m and s. As shown in appendix C, the evaluation requires ms matrix-vector multiplications, which are observed to dominate the runtime. Since systematic minimization of ms would itself require significant runtime, we instead (1) select the solutions to equation (15) with the smallest s, and (2) among these pairs, choose the one with the smallest m.
Thus, calculating eA ∣Ψ〉 requires μ = ms matrix-vector multiplications, evaluated at the selected pair (m, s). As shown in appendix D for two concrete examples, our method for evaluating eA ∣Ψ〉 can achieve better runtime performance and accuracy than Scipy's expm_multiply function [42].
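For reference, the Scipy baseline can be exercised as follows; the generator and step size are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import expm_multiply

# Scipy's expm_multiply also computes e^A |psi> without forming e^A;
# here we check it against a dense matrix exponential on a small example.
rng = np.random.default_rng(0)
d = 200
W = rng.normal(size=(d, d))
H = (W + W.T) / 2                       # Hermitian generator
A = -1j * H * 0.01                      # A = -i H dt
psi = np.zeros(d, dtype=complex)
psi[0] = 1.0
err = np.linalg.norm(expm_multiply(A, psi) - expm(A) @ psi)
```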
Having addressed the challenges associated with numerically evaluating eA ∣Ψ〉, we now turn to the calculation of (∂Un /∂an )∣Ψ〉, which is crucial for computing cost function gradients as in equations (7) and (9). We base this evaluation on the approximation
where Bn = − iHn Δt/s. We proceed by using the auxiliary matrix method, based on the identity [43, 44] that the exponential of the block matrix [[Bn , ∂Bn /∂an ], [0, Bn ]] equals [[eBn , ∂eBn /∂an ], [0, eBn ]]. This auxiliary matrix is a 2 × 2 matrix of operators, each acting on our Hilbert space with dimension d. Once a basis is chosen, it can be represented as a 2d × 2d matrix of complex numbers. Numerical evaluation of the left hand side of the identity (18) gives a 2d dimensional vector. The first d entries of this vector correspond to (∂eBn /∂an )∣Ψ〉, which enters the approximation of (∂Un /∂an )∣Ψ〉.
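A minimal sketch of the auxiliary matrix method follows, using a dense expm of the 2d × 2d matrix for clarity (the text instead applies the truncated series to the stacked vector); the random test matrices are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def dU_vec(H_s, h_c, a_n, dt, psi):
    """(d/da_n) e^{-i(H_s + a_n h_c) dt} |psi> via the auxiliary matrix:
    for M = [[A, E], [0, A]] with A = -i(H_s + a_n h_c) dt and E = dA/da_n,
    the top-right block of e^M is dU_n/da_n, so applying e^M to the stacked
    vector (0, psi) leaves (dU_n/da_n)|psi> in the first d entries."""
    d = len(psi)
    A = -1j * (H_s + a_n * h_c) * dt
    E = -1j * h_c * dt                       # dA/da_n
    M = np.block([[A, E], [np.zeros((d, d)), A]])
    v = np.concatenate([np.zeros(d, dtype=complex), psi])
    return (expm(M) @ v)[:d]

# Finite-difference check on a random example.
rng = np.random.default_rng(0)
d = 6
def rand_herm():
    W = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (W + W.conj().T) / 2
H_s, h_c = rand_herm(), rand_herm()
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
h = 1e-6
fd = (expm(-1j * (H_s + (0.3 + h) * h_c) * 0.1) @ psi
      - expm(-1j * (H_s + (0.3 - h) * h_c) * 0.1) @ psi) / (2 * h)
err = np.linalg.norm(dU_vec(H_s, h_c, 0.3, 0.1, psi) - fd)
```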
3.3. Efficient evaluation of hard-coded gradients
The analytical gradient expressions, such as in equation (7), are generally complicated and involve a large number of contributing quantities. Depending on the grouping and ordering of operations, these quantities may need to be stored simultaneously or computed repeatedly, with critical implications for runtime and memory usage. The scheme originally proposed by Khaneja et al [12] strikes a favorable compromise that avoids the proliferation of stored intermediate state vectors, as illustrated in figure 1. More specifically, the final quantum state ∣ψN 〉 is obtained by forward propagating ∣ψ0 〉. Then, backward propagation of ∣ψN 〉 and ∣ϕT 〉 is carried out to calculate the matrix element needed for the n-th component of the gradient, thus building up the gradient in descending order of n. For both propagation directions, only the current intermediate states are stored, leading to a memory cost of Θ(1). 3
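The forward-backward scheme can be sketched as follows. This is a simplified illustration, not the paper's code: dense matrix exponentials stand in for the series evaluation, and the gradient formula is the one derived above for C1; the random test system is an assumed example.

```python
import numpy as np
from scipy.linalg import expm

def grad_C1(a, H_s, h_c, psi0, phi_T, dt):
    """Gradient of C1 = 1 - |<phi_T|psi_N>|^2 via forward-backward
    propagation: a forward pass for |psi_N>, then a backward pass keeping
    only the current |psi_n> and |phi_n>, so state storage is Theta(1) in N.
    (dU_n/da_n)|psi_{n-1}> comes from the auxiliary-matrix exponential."""
    d, N = len(psi0), len(a)
    U = lambda a_n: expm(-1j * (H_s + a_n * h_c) * dt)

    psi = psi0
    for a_n in a:                            # forward pass |psi_0> -> |psi_N>
        psi = U(a_n) @ psi
    z = np.vdot(phi_T, psi)                  # overlap <phi_T|psi_N>

    grad = np.zeros(N)
    phi = phi_T
    for n in reversed(range(N)):             # build gradient in descending order
        Un = U(a[n])
        psi = Un.conj().T @ psi              # |psi_{n-1}>, using unitarity
        A = -1j * (H_s + a[n] * h_c) * dt
        M = np.block([[A, -1j * h_c * dt], [np.zeros((d, d)), A]])
        dU_psi = (expm(M) @ np.concatenate([np.zeros(d, dtype=complex),
                                            psi]))[:d]
        grad[n] = -2 * np.real(np.conj(z) * np.vdot(phi, dU_psi))
        phi = Un.conj().T @ phi              # |phi_{n-1}> = U_n^dag |phi_n>
    return grad

# Finite-difference check on a random example.
rng = np.random.default_rng(1)
d = 4
W = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H_s = (W + W.conj().T) / 2
V = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
h_c = (V + V.conj().T) / 2
psi0 = np.zeros(d, dtype=complex); psi0[0] = 1.0
phi_T = np.zeros(d, dtype=complex); phi_T[-1] = 1.0
a, dt = rng.normal(size=5), 0.1

def C1(a):
    psi = psi0
    for a_n in a:
        psi = expm(-1j * (H_s + a_n * h_c) * dt) @ psi
    return 1.0 - abs(np.vdot(phi_T, psi)) ** 2

fd = np.array([(C1(a + e) - C1(a - e)) / 2e-6 for e in 1e-6 * np.eye(5)])
err = np.linalg.norm(grad_C1(a, H_s, h_c, psi0, phi_T, dt) - fd)
```

Note that the backward pass recomputes each short-time propagator instead of storing it, trading a second sweep of propagator evaluations for constant state storage.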
4. Scaling analysis and benchmarking
We consider the following three methods to compute gradients within GRAPE: evaluation of hard-coded gradients, full automatic differentiation [32], and semi-automatic differentiation [35]. To determine the method most appropriate for a given problem, we inspect the memory usage and runtime scaling of these three approaches. Following this analysis, we present benchmark studies of specific state-transfer and gate-operation problems to illustrate the scaling behavior.
4.1. Scaling analysis of runtime and memory usage
Runtime and memory usage can pose significant challenges when performing optimization tasks for quantum systems with large Hilbert space dimension (d) or for time evolution requiring a significant number of time steps (N). In this subsection, we analyze the scaling of memory usage and runtime with respect to both N and d, focusing on the cost contribution C1 for state-transfer and unitary-gate optimization. Our findings can be generalized to other cost contributions beyond C1, as elaborated in appendix A.
4.1.1. Evaluation of hard-coded gradients (HG) using forward-backward propagation
We start by considering state-transfer problems. The memory usage for evaluating is mainly determined by the required storage of quantum states and of the Hamiltonian. In forward-backward propagation, the number of states and instances of Hamiltonian matrices stored at any given time is of the order of unity. Each individual state vector of dimension d consumes memory ∼Θ(d). Memory usage for the Hamiltonian depends on the employed storage scheme. In the 'worst' case of dense matrix storage, the memory required for the Hamiltonian is Θ(d2). For sparse matrix storage, memory usage generally depends on sparsity and is determined by the number of stored Hamiltonian matrix elements (here denoted by κ). For example, for a diagonal or tridiagonal matrix, memory usage is κ ∼ Θ(d); for a linear chain of qubits with nearest-neighbor σz coupling, it is . Combining the scaling relations above gives an overall asymptotic memory usage of Θ(d + κ).
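As a small illustration of κ for sparse storage (the tridiagonal matrix below is an assumed example, not one of the paper's benchmark Hamiltonians):

```python
import numpy as np
from scipy.sparse import diags

# A CSR matrix stores only its nonzero entries, so kappa = H.nnz grows as
# Theta(d) for a tridiagonal Hamiltonian, versus Theta(d^2) for dense storage.
d = 1000
H = diags([np.ones(d - 1), 2 * np.ones(d), np.ones(d - 1)],
          offsets=[-1, 0, 1], format="csr")
kappa = H.nnz                            # 3d - 2 stored elements
```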
We observe that the runtime of the scheme is dominated by the required Θ(N) evaluations of propagator-state products and their derivative counterparts. Numerically approximating a single propagator-state product requires μ matrix-vector multiplications [see equation (16)], where the runtime of one instance of matrix-vector multiplication scales as Θ(κ). The implicit dependence of μ = μ(d) on d is specific to each particular system and can be difficult to obtain analytically. Numerically, we find one scaling for a driven transmon coupled to a cavity and another for a linear chain of coupled qubits; see appendix E for more details. Once combined, the above individual scaling relations lead to an overall runtime asymptotic behavior Θ(N μ κ). For example, in the case of the linear chain of coupled qubits, runtime scales as .
We next turn to gate-optimization problems, where the scaling behavior of memory usage and runtime is as follows. We calculate the gradient by sequentially applying forward-backward propagation to each of the d basis states. The memory usage is Θ(d + κ), the same as for state transfer. The runtime scaling is Θ(N μ κ d), which is d times larger than the runtime scaling for state transfer. (Another way of calculating the gradient consists of parallelized forward-backward propagation of all basis states. The concrete scaling behavior then depends on the parallelization scheme; that discussion is beyond the scope of this paper.)
4.1.2. Full automatic differentiation (full-AD)
Reverse-mode automatic differentiation extracts gradients via a forward pass and a reverse pass through the computational graph. In the forward pass, the cost function C is computed; the reverse pass then accumulates the gradient via the chain rule.
The memory usage for evaluating is determined by the requirement that all intermediate numerical results must be stored as part of the forward pass and act as input to the reverse pass. Specifically, the forward pass evolves the initial state vector to the final state vector in N time steps. The total number of vectors that have to be stored along the way is Θ(N μ), since computation of a single short-time propagator-state product via scaling and squaring necessitates μ iterations. In addition, considering the memory usage for the Hamiltonian, the full scaling is Θ(N μ d + κ).
The runtime for evaluating combines forward and reverse pass. It is dominated by the evaluation of Θ(N) propagator-state products, each of which has the runtime scaling Θ(μ κ), leading to an overall runtime scaling of Θ(N μ κ).
For gate-optimization problems, the gradient is evaluated by evolving not only a single initial state but a whole set of d basis vectors. Evaluating Nd propagator-state products requires Θ(N μ d2 + κ) memory, where the number of stored Hamiltonian matrix elements κ takes into account storage of the Hamiltonian. Since κ has the worst-case scaling Θ(d2) in the case of a dense Hamiltonian matrix, the overall scaling is Θ(N μ d2). Assuming that the evolution of individual basis vectors is not parallelized, runtime also grows by a factor of d, so that the runtime scaling for gate optimization is Θ(N μ κ d).
4.1.3. Semi-automatic differentiation (semi-AD)
For gradients of cost contributions that are tedious to derive analytically, semi-automatic differentiation [35] offers an alternative approach in which some derivatives are hard-coded and others treated by automatic differentiation. Compared to full-AD, semi-AD has the benefit that the memory usage scaling is independent of μ while maintaining the same runtime scaling.
4.1.4. Comparison
The results of this scaling analysis are summarized in table 1. For full-AD, memory usage grows linearly with μ and the number of time steps N. By contrast, memory usage for HG does not increase with μ and N, but remains constant. This advantage is not bought at the cost of worse runtime scaling. Therefore, HG is preferable over full-AD in the optimal control of large quantum systems where memory usage would otherwise be prohibitively large. Semi-AD presents an interesting compromise, viable whenever the memory bottleneck is not too severe. We note that the runtime scaling Θ(N μ κ d) of HG via scaling and squaring can in principle exceed the scaling O(Nd3) for matrix exponentiation based on matrix diagonalization (see appendix F). In this case, the latter is the better option.
We illustrate the discussed scaling by benchmark studies for concrete optimization problems in the following section.
4.2. Benchmark studies
Our first benchmark example studies the following state-transfer problem. We consider a setup in which a driven flux-tunable transmon is capacitively coupled to a superconducting cavity, and perform a state transfer from the cavity vacuum state to the 20-photon Fock state. The Hamiltonian in the frame co-rotating with the transmon drive is
Here, the drive is both longitudinal and transverse; c and b are the annihilation operators for cavity photons and transmon excitations, respectively. α, g and Δ denote the anharmonicity, the interaction strength and the difference between the transmon and cavity frequencies.
Optimization proceeds by minimizing the infidelity and limiting the occupation of higher transmon levels by including the cost contribution [equation (A.1)]. The latter has the additional benefit of enabling truncation to only a few transmon levels. We measure the runtime and peak memory usage of full-AD 4 , semi-AD and HG for a single optimization iteration. The runtime is measured with the Temci package [45], and the peak memory usage is monitored with the memory_profiler package [46]. The optimization is performed on an AMD Ryzen Threadripper 3970X 32-core CPU with 3.7 GHz frequency.
Benchmark results for the state-transfer optimization are shown in figure 2. We consider a single time step (N = 1) and measure the runtime versus d. The results in figure 2(a) confirm that full-AD, semi-AD and HG have the same scaling of runtime per time step with respect to d (table 1). The observed runtime scaling is consistent with κ ∼ Θ(d2) due to dense matrix storage for H1, and (see appendix E). The scaling of the required memory resources is illustrated in figure 2(b): memory usage scales with dimension d as Θ(d2) for full-AD, semi-AD and HG, consistent with κ ∼ Θ(d2) being the dominant contribution. As anticipated, the scaling of memory with N differs between full-AD, semi-AD and HG [figure 2(c)]. For full-AD and semi-AD, the scaling is linear in N, while it is independent of N for HG. The slope for full-AD is much larger than that for semi-AD, since the former has an extra dependence on μ (table 1), which is a constant ≈100 in this benchmark. The results confirm the favorable memory efficiency of HG and semi-AD compared to full-AD in state-transfer problems with a large number of time steps.
We next turn to an example for optimizing a unitary gate. This benchmark is performed for three driven transmons with nearest neighbor coupling. The target gate consists of simultaneous Hadamard operations on each of the three transmons. For concreteness, we consider the case of transmons with the same frequency, driven resonantly. The full Hamiltonian in the rotating frame of the drive is
The goal of the optimization is to minimize the infidelity . The setup for this benchmark is the same as in the previous example.
Figure 3 records the benchmarks based on a single time step (N = 1). The observed runtime scaling with dimension d is for full-AD, semi-AD and HG, see figure 3(a). This power law arises from Θ(μ κ d) for κ ∼ Θ(d2) and [see appendix E]. The memory usage scaling with d is illustrated in figure 3(b), confirming that full-AD has an unfavorable extra dependence compared to HG and semi-AD (table 1). The memory usage scaling with N is as expected [figure 3(c)]: for semi-AD, the scaling is linear in N; for HG, it is independent of N. (The memory usage of full-AD in this case exceeds our resources, and thus is not shown.) Throughout our benchmark, this linear dependence on N causes semi-AD to consume about 10 times more memory than HG. This underlines that HG is preferable whenever memory constraints become a limitation in optimization problems with large Hilbert space dimension and large number of time steps. We emphasize that this improvement is achieved without worsening the runtime scaling.
5. Choosing among different optimization strategies
There are several considerations which determine whether full-AD, HG or semi-AD is the optimal strategy for a given optimization problem; see the decision tree in figure 4. Whenever the Hilbert space dimension d and the number of time steps N are sufficiently small, memory usage may not be a bottleneck. In this case, full-AD may be preferable, since it eliminates the need for hard-coded gradients. If the memory cost of full-AD is not affordable but analytical gradients are tedious to compute, semi-AD may be a viable compromise, though still less memory efficient than HG. In all other cases, HG is the method of choice, since it can handle optimization with larger N and d than full-AD and semi-AD.
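The logic above can be condensed into a small helper, an illustrative paraphrase of the decision tree; the two boolean inputs are problem-specific judgments to be estimated from the scalings in table 1.

```python
def choose_strategy(full_ad_memory_affordable: bool,
                    gradients_tedious_to_derive: bool) -> str:
    """Illustrative paraphrase of the decision tree (figure 4).

    Both flags are problem-specific judgments, estimated from the
    memory and runtime scalings in table 1."""
    if full_ad_memory_affordable:
        return "full-AD"          # convenience wins when memory allows
    if gradients_tedious_to_derive:
        return "semi-AD"          # compromise: partial hard-coding
    return "HG"                   # smallest memory footprint

print(choose_strategy(False, False))  # → HG
```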
The choice between implementing HG via scaling and squaring or via matrix diagonalization should be informed by whether the Hamiltonian is sparse, i.e. whether κ scales better than Θ(d2). If it does, scaling and squaring is preferable and leads to an overall memory usage ∼Θ(d + κ), which is better than the memory scaling O(d2) of matrix diagonalization. By contrast, for Hamiltonians with κ ∼ Θ(d2), the two methods are equivalent in memory usage scaling, and we are free to select the one with the better runtime for a specific optimization task. While the runtime for optimization using matrix diagonalization scales as O(d3) [appendix F], the runtime for optimization based on scaling and squaring differs between state-transfer and gate optimization: (1) for state transfer, the runtime scales as Θ(μ d2); hence, whenever μ scales better than Θ(d), scaling and squaring wins over matrix diagonalization. (2) For gate optimization, the runtime scales as Θ(μ d3); hence matrix diagonalization is preferable.
6. Conclusions
Motivated by the memory bottleneck problem in quantum optimal control of large systems, we have revisited and compared the memory usage and runtime scaling of multiple flavors of GRAPE: hard-coded gradients (HG), full automatic differentiation (full-AD) and semi-automatic differentiation (semi-AD). These three methods are equivalent in runtime scaling but differ characteristically in scaling of memory usage. The ranking among them is as follows, going from the most to least efficient: HG, semi-AD and full-AD. Thus HG is the preferred option when facing a memory bottleneck problem.
We have illustrated these scalings by benchmark studies of concrete state-transfer and gate-operation optimizations. As part of this, we have presented improvements in the implementation of scaling and squaring, used to evaluate propagator-state products numerically. In the examples studied, we observed a speedup compared to Scipy's expm_multiply function. The decision tree shown in figure 4 summarizes the choice of appropriate numerical strategy for a given optimization problem.
Acknowledgments
We thank Thomas Propson for fruitful discussions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. We are grateful for the following open-source packages used in our work: matplotlib [47], numpy [48], scipy [49], sympy [50], scqubits [51], and qoc [31].
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Appendix A.: Analytical gradients
In the main text, we exclusively focused on minimizing state-transfer or gate infidelity. In this appendix, we provide the analytical gradients for other cost contributions of interest.
The cost contribution
penalizes the integrated expectation value of the Hermitian operator Ω. The gradient of this cost contribution can be written as
with
The vector ∣Φn 〉 obeys the recursion relation
Due to this relation, the expression in equation (A.2) can be evaluated with a forward-backward scheme analogous to the one discussed in section 3.3.
The cost contribution C3 sums up the infidelities from all time steps (as opposed to recording only the final-state infidelity, as in C1 ). This steers the system towards its target state more rapidly, thus helping to reduce the duration of the state transfer. Similar to equation (A.2), the gradient of C3 is:
with
The vector obeys the recursive relation
This allows us to use a forward-backward scheme to evaluate the gradient expression in equation (A.5).
For gate operations, the cost contribution analogous to is given by , where describes the evolution from time step 1 to . We obtain the gradient
with
Here, forms an arbitrary orthonormal basis and h = 1, 2,...,d enumerates the basis states. The vector obeys the recursion relation
again enabling the evaluation of the gradient expression with a forward-backward propagation scheme.
Appendix B.: Scaling analysis
For the cost contributions discussed in appendix A, we now analyze the memory usage and runtime scaling of full-AD, semi-AD and HG.
B.1. Scaling analysis: hard-coded gradient
Thanks to the similarities in the expressions and the existence of recursion relations, forward-backward schemes akin to figure 1 can also be applied to C2 , C3 and C3g . As a result, we find that the scaling analysis in section 4.1 leads to the same memory usage and runtime requirements as for C1 and C1g .
B.2. Scaling analysis: full-automatic differentiation
Despite the specific differences among the expressions of the cost contributions C2 , C3 and C3g , we find that the scaling of runtime and memory usage is again set by the need to store intermediate vectors and repeatedly perform matrix-vector multiplications. Consequently, the scaling remains the same as for the cost contributions discussed in section 4.1.
In summary, the scaling for the state-transfer and gate-operation cost contributions matches those shown in table 1. For the scaling of semi-AD, see [35].
Table 1. Memory usage and runtime scaling with respect to Hilbert space dimension d, number of time steps N, the number of stored Hamiltonian matrix elements κ = κ(d) and the number of matrix-vector multiplications required for evaluating propagator-state products μ = μ(d). (The scaling of μ and κ depends on the specific optimization task.) The table contrasts scaling for state transfer and gate operation for hard-coded gradients (HG), full automatic differentiation (full-AD) and semi-automatic differentiation (semi-AD).
| | Runtime (state transfer) | Memory usage (state transfer) | Runtime (gate operation) | Memory usage (gate operation) |
| --- | --- | --- | --- | --- |
| Hard-coded gradient (HG) | Θ(N μ κ) | Θ(d + κ) | Θ(N μ κ d) | Θ(d + κ) |
| Full automatic differentiation (full-AD) | Θ(N μ κ) | Θ(N μ d + κ) | Θ(N μ κ d) | Θ(N μ d2) |
| Semi-automatic differentiation (semi-AD) | Θ(N μ κ) | Θ(Nd + κ) | Θ(N μ κ d) | Θ(Nd2) |
Appendix C.: Determining the upper bound EA (m, s)
The approximation of eA ∣Ψ〉 in section 3.2 required the determination of an upper bound EA (m, s) for the error εA,∣Ψ〉(m, s) [equation (14)]. In this appendix, we show how to obtain such a bound.
Considering the expression for the error [equation (14)], we note that the denominator is simply 1. The error εA,∣Ψ〉 is bounded by the sum of a truncation and a rounding contribution, εA,∣Ψ〉 ≤ εt + εr .
The error εt arises from the truncation of the Taylor series, and the rounding error εr is due to finite-precision arithmetic. We proceed to derive upper bounds for these two errors separately.
C.1. Upper bound for the truncation error εt
We rewrite the truncation error as
where denotes the matrix-valued error of the truncated exponential series. Employing the triangle inequality and submultiplicativity [52], we find the bound
Here the second line utilizes the fact that the state is normalized and eB is unitary.
This upper bound must be evaluated numerically to determine m and s. However, computing the matrix 2-norm is inconvenient since it requires the use of an eigensolver. To mitigate this, we switch to the matrix 1-norm by exploiting 5
Monotonicity of Rm leads us to
C.2. Upper bound for the rounding error εr
The rounding error εr originates from the finite precision of the floating-point representation of real numbers as well as the finite accuracy of basic arithmetic operations. The floating-point representation 〚ξ〛 of the real number ξ can be written as 〚ξ〛 = ξ(1 + δ), where δ = δ(ξ) is the relative deviation [52]. Its magnitude is always smaller than the machine precision, i.e. ∣δ∣ < u. Similarly, the complex number z obeys 〚z〛 = z(1 + δ)
with ∣δ∣ ≤ u. Arithmetic operations such as addition and multiplication with two complex numbers z1, z2 incur a relative deviation δ = δ(z1, z2) such that
where [52]. The relative deviations shown in equations (C.6) and (C.7) generally depend on both numbers involved as well as on the operation in question. In the following, we distinguish different δ by a subscript ν.
Using equations (C.6) and (C.7), we derive an upper bound for the rounding error in the evaluation of the dot product, which is the basic arithmetic operation in the evaluation according to equation (13).
C.2.1. Upper bound for the rounding error associated with evaluating the dot product
The rounding error incurred when evaluating the dot product is given by ∣Σd − 〚Σd 〛∣ with Σd = w T z. Here, w T is a row of B = −iHΔt/s. We illustrate how to bound the rounding error for d = 1, 2, and give the general result subsequently. Based on equations (C.6) and (C.7), we have
In the last step, we have used [52], and defined . For d = 2, the upper bound of the error is
The general upper bound for arbitrary d is given by:
Since w^T is proportional to a row of the Hamiltonian matrix, sparsity of H implies sparsity of w^T. Vanishing entries in w^T are special because they do not incur rounding errors. Thus, one can obtain a stronger upper bound
where σ is the number of non-zero elements in w^T.
C.2.2. Upper bound for the rounding error of
The vector is evaluated recursively via
where j = 1, 2, ⋯, m and b0 is the vector representation of ∣Ψ〉 in a choice of orthonormal basis. Before evaluating the upper bound for the error , we first derive an upper bound for the error of a vector component . We illustrate how to bound this error for m = 0, 1, and then give the general result.
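As an illustration of the recursion underlying this appendix, the truncated Taylor evaluation of e^B b0 can be sketched in a few lines; the matrix, step size and truncation order m below are hypothetical placeholders rather than values selected by the error bound derived here.

```python
import numpy as np
from scipy.linalg import expm

def taylor_apply(B, b0, m):
    """Approximate e^B b0 by the truncated Taylor series sum_{j=0}^m B^j b0 / j!,
    accumulated via the recursion b_j = B b_{j-1} / j."""
    bj = b0.copy()
    acc = b0.copy()
    for j in range(1, m + 1):
        bj = B @ bj / j          # b_j = B b_{j-1} / j
        acc = acc + bj           # running partial sum of the series
    return acc

# Check against a dense matrix exponential on a small random example
rng = np.random.default_rng(1)
H = rng.standard_normal((6, 6))
H = (H + H.T) / 2                # Hermitian test Hamiltonian (hypothetical data)
B = -1j * 0.05 * H               # B = -i H Δt / s with Δt/s = 0.05 (illustrative)
psi = rng.standard_normal(6) + 0j
psi /= np.linalg.norm(psi)       # normalized state vector

err = np.linalg.norm(expm(B) @ psi - taylor_apply(B, psi, m=10))
assert err < 1e-10
```

Note that only matrix-vector products appear; the matrix exponential itself is never formed, which is what makes the scheme compatible with sparse storage of H.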
For the case m = 0, we have the bound
We then obtain the upper bound of the error for the example of m = 1:
where σi is the number of non-zero elements in the ith row of the matrix B. Here, the third line arises from the division error between a complex number and a real number, i.e. 〚z/ξ〛 = (z/ξ)(1 + δ), ∣δ∣ < u [52]. The general upper bound for arbitrary m is given by:
where ∣B∣, ∣b0∣ denote the matrix and the vector with elements ∣Bil ∣, , respectively. Defining
and using equation (C.15), we obtain
with . Here, β∣∣b0∣∣2 is the desired upper bound for the rounding error of .
C.2.3. Upper bound for the error εr
The vector is evaluated by the recursive relation.
where c0 = b0. We illustrate how to bound the error for s = 1, 2, and give the general result subsequently.
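The full scaling-and-squaring evaluation, combining the truncated Taylor recursion with s repeated applications, might be sketched as follows; the matrix as well as the values of m and s are illustrative choices, not those produced by the error bounds of this appendix.

```python
import numpy as np
from scipy.linalg import expm

def taylor_apply(B, v, m):
    """Truncated Taylor series T_m(B) v via the recursion b_j = B b_{j-1} / j."""
    bj = v.copy()
    acc = v.copy()
    for j in range(1, m + 1):
        bj = B @ bj / j
        acc = acc + bj
    return acc

def expm_state(A, psi, m, s):
    """Approximate e^A psi by s repeated applications of T_m(A/s):
    c_j = T_m(A/s) c_{j-1} with c_0 = psi."""
    B = A / s
    c = psi
    for _ in range(s):
        c = taylor_apply(B, c, m)
    return c

rng = np.random.default_rng(2)
H = rng.standard_normal((8, 8))
H = (H + H.T) / 2                 # Hermitian test Hamiltonian (hypothetical data)
A = -1j * 1.0 * H                 # A = -i H Δt with Δt = 1 (illustrative)
psi = rng.standard_normal(8) + 0j
psi /= np.linalg.norm(psi)

err = np.linalg.norm(expm(A) @ psi - expm_state(A, psi, m=15, s=8))
assert err < 1e-10
```

Scaling by s keeps the norm of B small so that the truncated series converges rapidly; the repeated application replaces the repeated squaring used in the matrix-valued variant of the algorithm.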
For s = 1, we have
For s = 2, we obtain
where the last step is enabled by equation (C.19). Finally, we bound the matrix norm in the second term by
Since equation (C.17) holds for any complex vector b0, the error in equation (C.20) is further bounded by:
Generalizing this strategy further, we obtain the expression for the upper bound of εr :
Combining the upper bounds for εr and εt , we return to equation (C.1) and obtain
where , and .
Appendix D.: Numerical comparison between scaling and squaring implementations
Scaling and squaring is used to evaluate the propagator-state product e^{−iHΔt}∣Ψ〉. We compare the relative error and runtime of our implementation of scaling and squaring against SciPy's implementation, expm_multiply [42]. The relative errors of both implementations follow the general expression in equation (14), but differ in the specific choice of m and s. As a proxy for the exact propagator-state product, we evaluate the Taylor expansion of the matrix exponential with 400-digit precision using SymPy; the result of the product converges to 30 digits after the decimal point. The runtime is measured with the help of the Temci package [45].
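For orientation, a minimal version of the SciPy baseline call might look as follows. Here we check against scipy.linalg.expm in double precision rather than the 400-digit SymPy reference used for the figures, and the sparse Hamiltonian is random placeholder data rather than equations (20) or (21).

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(3)
d = 200
H = sprandom(d, d, density=0.02, random_state=rng, format="csr")
H = (H + H.T) / 2                      # sparse symmetric stand-in Hamiltonian
A = -1j * 0.1 * H                      # A = -i H Δt with Δt = 0.1 (illustrative)

psi = rng.standard_normal(d) + 0j
psi /= np.linalg.norm(psi)

approx = expm_multiply(A, psi)         # SciPy's scaling-and-squaring baseline
reference = expm(A.toarray()) @ psi    # dense double-precision reference

rel_err = np.linalg.norm(approx - reference) / np.linalg.norm(reference)
assert rel_err < 1e-9
```

This double-precision check cannot resolve errors near machine precision, which is why the paper's comparison relies on the extended-precision SymPy reference instead.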
As the basis for this comparison, we use the example Hamiltonians (20) and (21) and implement sparse storage for them. The relative errors are shown in figures D1(a) and (c) as a function of the matrix norm. In both examples, the relative errors of our implementation are bounded by the error tolerance τ, while the relative errors of SciPy's implementation tend to exceed the tolerance for large norms. There are two reasons for this tendency. First, rounding errors are not considered in the derivation of the upper bound for the errors [42]; second, the truncation of e^B at order m [equation (12)] is merely based on a heuristic criterion.
Even though the relative errors of SciPy's implementation are larger, it requires a smaller number of matrix-vector multiplications, denoted by μ. This is illustrated in figures D1(b) and (d): the ratio of μ between our implementation and SciPy's implementation is always larger than 1. Although SciPy's implementation spends less time on matrix-vector multiplications, its total runtime is larger than that of our implementation, see figures D1(b) and (d). This is due to the larger runtime overhead of SciPy's implementation for determining (m, s).
In conclusion, our implementation shows better numerical stability and shorter runtime than SciPy's implementation in our numerical experiments.
Appendix E.: The scaling of μ
The asymptotic behavior of runtime, discussed in the main text, is governed by the scaling of μ (16) with dimension d of the Hilbert space. The dependence μ(d) is specific to the matrix A to be exponentiated (which is proportional to the Hamiltonian). This prevents us from stating a general scaling relation. We thus turn to a discussion of several concrete examples.
We first consider the example of the transmon-cavity system, see equation (20), where we increase d via the cavity dimension dc while leaving the transmon cutoff fixed. The corresponding Hamiltonian H1, written in the bare product basis, then has a constant maximum number of non-zero elements in each row. (That is, the parameter (C.16) is independent of d.) Numerically, we find in this case that μ scales linearly with , and this linearity holds for a wide range of τ, see figure E1(a). In this example, two ingredients lead to the scaling : (1) d is a constant multiple of dc, and (2) the cavity ladder operators in H1 have one-norm scaling . Thus, we obtain the overall scaling , see figure E1(e). In the second example, we consider the system of three coupled transmons, see equation (21). The Hamiltonian H2 is written in the harmonic-oscillator basis and has a constant , which leads to [figure E1(b)]. The one-norm of the operators in H2 scales quadratically with each transmon dimension. Since we increase d by enlarging the dimensions of all three transmons simultaneously, we obtain . This scaling results in , see figure E1(f).
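The stated one-norm scaling of the cavity ladder operators can be checked directly. The snippet below constructs a truncated annihilation operator in the standard Fock basis and verifies ∥a∥₁ = √(dc − 1), i.e. Θ(√dc); this is a generic property of the ladder operator, independent of the specific Hamiltonian.

```python
import numpy as np

def annihilation(dc):
    # Cavity annihilation operator a in the Fock basis, truncated at dimension dc:
    # a|n> = sqrt(n)|n-1>, i.e. entries sqrt(n) on the first superdiagonal
    return np.diag(np.sqrt(np.arange(1, dc)), k=1)

for dc in (16, 64, 256, 1024):
    a = annihilation(dc)
    norm1 = np.abs(a).sum(axis=0).max()   # matrix 1-norm = max abs column sum
    # The largest column sum is the single entry sqrt(dc - 1)
    assert np.isclose(norm1, np.sqrt(dc - 1))
```

Since the one-norm of −iHΔt enters the choice of (m, s) and hence μ, this square-root growth is what propagates into the overall runtime scaling quoted above.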
For other Hamiltonian matrices where depends on d, the scaling may still hold. In the following, we discuss two concrete examples: one where linearity holds, and another where it breaks. We first consider a driven system of Nq qubits with nearest-neighbor σz coupling. In the rotating frame, the system is described by
Each qubit is controlled via drive fields and . For this example, the numerical results show that scaling still holds, see figure E1(c). Since we increase d by enlarging Nq , we obtain . This scaling results in , see figure E1(g). For the second example, we consider two capacitively coupled fluxonium qubits with Hamiltonian
Here, ECi, EJi, ELi denote the charging, Josephson and inductive energies of the two fluxonium qubits, and g is the interaction strength. φ and n are the phase and charge-number operators, defined in the usual way. We represent the Hamiltonian in the harmonic-oscillator basis. Figure E1(d) shows that the scaling of μ is superlinear in for τ = 10−8. We thus settle for a numerical determination of the scaling relation between μ and d for τ = 10−8, obtained by increasing the dimensional cutoff of both fluxonium qubits simultaneously. The observed scaling is μ = Θ(d^0.65), see figure E1(h).
Appendix F.: Alternative numerical implementation of HG scheme and the corresponding scaling analysis
The numerical implementation of the hard-coded gradient scheme requires evaluation of the short-time propagator and its derivative. Under certain conditions, it turns out to be advantageous to numerically evaluate the short-time propagator by employing matrix diagonalization. In this appendix, we first discuss the method used to compute the propagator derivative and then analyze the scaling of the HG scheme.
F.1. Evaluation of propagator derivative
The operator −iHΔt is anti-Hermitian, and hence can be diagonalized and exponentiated according to
Here, S is unitary and D is diagonal. As a result, the propagator derivative can be written as
We apply the inverse unitary transformation to (F.3) and obtain
Based on , equation (F.4) can be rewritten as
with . Here, ◦ denotes the Hadamard product (element-wise multiplication). To obtain dD/da and dS/da, we take the derivative of equation (F.1) and repeat the same steps as for equations (F.4) and (F.5), leading to
where . Since the diagonal elements of are zero, it follows that
For off-diagonal elements of the operator in equation (F.6), we obtain
where we define
Inserting equations (F.7) and (F.8) into equation (F.5) gives
where we use E ◦ G = 0.
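A compact numerical sketch of this diagonalization-based evaluation is given below. It uses the standard divided-difference form of the derivative of a matrix exponential, which combines the diagonal and off-diagonal cases treated above into a single Hadamard-product expression; the parametrization H = H0 + a·Hc with a single control amplitude a is an assumption for illustration, and the matrices are random placeholder data.

```python
import numpy as np

def propagator_and_derivative(H0, Hc, a, dt):
    """U = exp(-i dt (H0 + a Hc)) and dU/da via matrix diagonalization,
    using the divided-difference form of the exponential derivative."""
    H = H0 + a * Hc
    evals, S = np.linalg.eigh(H)          # H = S diag(evals) S†
    d = -1j * dt * evals                  # diagonal of D in U = S e^D S†
    ed = np.exp(d)
    U = (S * ed) @ S.conj().T

    # Gamma_kl = (e^{d_k} - e^{d_l}) / (d_k - d_l), with e^{d_k} on the diagonal
    diff = d[:, None] - d[None, :]
    safe = np.where(np.isclose(diff, 0.0), 1.0, diff)
    Gamma = np.where(np.isclose(diff, 0.0),
                     ed[:, None],
                     (ed[:, None] - ed[None, :]) / safe)

    M = S.conj().T @ (-1j * dt * Hc) @ S  # derivative of -i dt H in the eigenbasis
    dU = S @ (Gamma * M) @ S.conj().T     # Hadamard product, then transform back
    return U, dU

# Check against a central finite difference on random Hermitian matrices
rng = np.random.default_rng(4)
n = 5

def rand_herm(n):
    X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (X + X.conj().T) / 2

H0, Hc = rand_herm(n), rand_herm(n)
a, dt, eps = 0.3, 0.1, 1e-6
U, dU = propagator_and_derivative(H0, Hc, a, dt)
Up, _ = propagator_and_derivative(H0, Hc, a + eps, dt)
Um, _ = propagator_and_derivative(H0, Hc, a - eps, dt)
assert np.linalg.norm(dU - (Up - Um) / (2 * eps)) < 1e-6
```

The Hadamard product costs only O(d^2), so the O(d^3) cost of the scheme is dominated by the diagonalization and the basis-change matrix multiplications, consistent with the scaling analysis below.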
F.2. Scaling analysis
The gradients of the contributions to the cost function (for the case of state transfer) are evaluated by the forward-backward propagation scheme. In this scheme, the storage required for states does not scale with the number of intermediate time steps. The matrices that must be stored are the system and control Hamiltonians as well as the dense matrices S, E and . Thus, the total memory usage scales as O(d^2). The runtime of the scheme is dominated by O(N) evaluations of the propagator and its derivative. Employing matrix diagonalization for the propagator evaluation requires a runtime scaling as O(d^3) [54]. Calculation of the propagator derivative involves matrix-matrix multiplications and Hadamard products with runtime scaling O(d^3) and O(d^2), respectively. Thus, the overall runtime of the HG scheme scales as O(Nd^3).
The gradients of the contributions to the gate-operation cost function are evaluated in an analogous manner, resulting in the same runtime and memory scaling.
Footnotes
- 1
For simplicity, we consider one control channel with real-valued a(t). Generalizations to multiple control channels and complex a are straightforward.
- 2
While Chebyshev expansion has faster convergence than Taylor expansion, their numerical implementations have the same runtime scaling. Here, we focus on Taylor expansion due to its simplicity.
- 3
Here, f(x) = Θ(g(x)) is the usual Bachmann–Landau notation, indicating that f(x) is asymptotically bounded above and below by constant multiples of g(x).
- 4
AD is implemented by the package Autograd [39] in this benchmark. This package does not support sparse linear algebra, so we implement dense matrix storage for the Hamiltonian in both full-AD and HG for fair comparison.
- 5