Optimal control of large quantum systems: assessing memory and runtime performance of GRAPE

Gradient Ascent Pulse Engineering (GRAPE) is a popular technique in quantum optimal control, and can be combined with automatic differentiation (AD) to facilitate on-the-fly evaluation of cost-function gradients. We illustrate that the convenience of AD comes at a significant memory cost due to the cumulative storage of a large number of states and propagators. For quantum systems of increasing Hilbert space size, this imposes a significant bottleneck. We revisit the strategy of hard-coding gradients in a scheme that fully avoids propagator storage and significantly reduces memory requirements. Separately, we present improvements to numerical state propagation to enhance runtime performance. We benchmark runtime and memory usage and compare this approach to AD-based implementations, with a focus on pushing towards larger Hilbert space sizes. The results confirm that the AD-free approach facilitates the application of optimal control for large quantum systems which would otherwise be difficult to tackle.

One of the prominent techniques for implementing quantum optimal control is Gradient Ascent Pulse Engineering (GRAPE). The central goal of optimal control is to adjust control parameters in such a way that the infidelity of the desired state transfer or gate operation is minimized. In GRAPE, the minimization is based on gradient descent and thus generally requires the evaluation of gradients. Automatic differentiation (AD) [30] is a convenient tool that has also made an impact on quantum optimal control with GRAPE [31][32][33]. Internally, AD builds a computational graph and applies the chain rule to automatically compute the gradient of a given function.
However, the convenience of AD comes at the cost of memory required for storing the full computational graph. The use of semi-automatic differentiation (semi-AD) [34] partially reduces the memory overhead compared to full automatic differentiation (full-AD). This reduction can still be insufficient to tackle optimal control for quantum systems of rapidly growing size [35,36]. This motivates us to revisit the original GRAPE scheme [11], which is based on hard-coded gradients (HG) and has the smallest possible memory cost. We perform a scaling analysis of memory usage and runtime, and thereby develop a decision tree guiding the optimal choice of strategy among HG, semi-AD and full-AD. We exemplify the scaling behavior with benchmarks for concrete optimization tasks.
The outline of our paper is as follows. Section II briefly reviews GRAPE and introduces necessary notation. In Sec. III, we illustrate the derivation and memory-efficient implementation of the gradients for the most commonly used cost functions.

II. REVIEW OF GRAPE

We briefly sketch the basics of GRAPE [11], mainly to establish the notation used in subsequent sections.
Consider a system described by the Hamiltonian

H(t) = H_s + a(t) h_c .

Here, H_s denotes the static system Hamiltonian and h_c is an operator coupling the classical control field a(t) to the system. Quantum optimal control aims to adjust a(t) such that a desired unitary gate or state transfer is performed within a given time interval 0 ≤ t ≤ T. To facilitate optimization, the total control time T is divided into N intervals of duration ∆t = T/N, and the control field is specified by the amplitudes at the discrete times t_n = n∆t (n = 1, 2, ..., N). The discretization interval ∆t is chosen such that control amplitudes are approximately constant within each ∆t. We denote the time-discretized values by a_n := a(t_n) and group these to form the vector a ∈ R^N. Under the influence of this piecewise-constant control field, the system's quantum state |ψ_n⟩ := |ψ(t_n)⟩ evolves by a single time step according to

|ψ_n⟩ = U_n |ψ_{n−1}⟩ .

Here, U_n is the short-time propagator

U_n = e^{−i H_n ∆t} , where H_n = H_s + a_n h_c .

Optimization of the control field proceeds via minimization of a cost function C. Typically, C includes the state-transfer or gate infidelity, along with additional cost contributions employed to moderate pulse characteristics such as maximal power or bandwidth. To this end, a composite cost function is constructed, C(a) = Σ_ν α_ν C_ν(a), where α_ν are empirically chosen weight factors and C_ν are individual cost contributions. GRAPE utilizes the method of steepest descent to minimize C(a) by updating the control amplitudes according to the cost function gradient:

a^{(j+1)} = a^{(j)} − η_j ∇C(a^{(j)}) .

Here, j enumerates steepest-descent iterations and η_j > 0 is the learning rate governing the step size.
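As an illustration, the piecewise-constant propagation and steepest-descent update can be sketched as follows. This is a minimal toy example: the two-level H_s, h_c, all parameter values, and the finite-difference gradient (standing in for the analytical or AD gradients discussed below) are our own placeholder choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level system; all matrices and parameters are illustrative.
H_s = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)  # static Hamiltonian
h_c = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # control operator
N, dt = 10, 0.3                                           # time steps, step size
psi0 = np.array([1.0, 0.0], dtype=complex)                # initial state
phi_T = np.array([0.0, 1.0], dtype=complex)               # target state

def cost(a):
    """State-transfer infidelity 1 - |<phi_T|psi_N>|^2 under controls a."""
    psi = psi0
    for a_n in a:
        psi = expm(-1j * (H_s + a_n * h_c) * dt) @ psi    # psi_n = U_n psi_{n-1}
    return 1.0 - abs(np.vdot(phi_T, psi)) ** 2

# Steepest descent a^(j+1) = a^(j) - eta * grad C, here with a crude
# finite-difference gradient for illustration only.
a = 0.1 * np.ones(N)
eta, eps = 0.05, 1e-6
for j in range(100):
    grad = np.array([(cost(a + eps * e) - cost(a - eps * e)) / (2 * eps)
                     for e in np.eye(N)])
    a = a - eta * grad
print(cost(a))   # infidelity after descent, reduced from its initial value
```

The finite-difference gradient used here requires 2N cost evaluations per iteration; the point of the methods reviewed in Sec. III is precisely to obtain this gradient far more cheaply.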

III. CALCULATION OF GRADIENTS
A critical ingredient for GRAPE is the computation of cost function gradients. One popular option is to leverage automatic differentiation for this purpose, as implemented in Tensorflow [37] and Autograd [38]. A clear advantage of this strategy is the ease with which complicated cost function gradients are evaluated automatically at runtime. This convenience, however, is bought at the cost of significant memory consumption, which can be prohibitive when optimal control is applied to large quantum systems. We are therefore motivated to revisit hard-coded gradients in a computationally efficient scheme for key cost contributions.

A. Analytical gradients
We provide a discussion of the analytical gradients of two key cost contributions in this subsection. Expressions for gradients of other cost contributions are given in App. A.
In state-transfer problems, a central goal of quantum optimal control is to minimize the state-transfer infidelity given by

C_1^st = 1 − |⟨φ_T|ψ_N⟩|² ,

where |φ_T⟩ is the target state and |ψ_N⟩ is the state realized at the final time t_N. The gradient of C_1^st takes the form

∂C_1^st/∂a_n = −2 Re[ ⟨ψ_N|φ_T⟩ ⟨φ_n| (∂U_n/∂a_n) |ψ_{n−1}⟩ ] ,

where the state |φ_n⟩ obeys the recursive relation

|φ_{n−1}⟩ = U_n^† |φ_n⟩ , with |φ_N⟩ = |φ_T⟩ .

This can be interpreted as backward propagation.
In addition to state transfer, unitary gates are another operation of interest. Gate optimization can be achieved by minimizing the gate infidelity

C_1^g = 1 − (1/d²) |Tr(U_T^† U_R)|² .

Here, U_T is the target unitary gate and U_R = U_N ··· U_1 is the actually realized gate. For a given orthonormal basis {|ψ_0^h⟩}_h (h enumerates the basis states), the gradient of C_1^g can be written as

∂C_1^g/∂a_n = −(2/d²) Re[ Tr(U_R^† U_T) Σ_h ⟨φ_n^h| (∂U_n/∂a_n) |ψ_{n−1}^h⟩ ] ,

where |ψ_n^h⟩ = U_n |ψ_{n−1}^h⟩. The state |φ_n^h⟩ is obtained as follows: apply the target unitary U_T to the basis state |ψ_0^h⟩ and backward propagate the result to time t_n,

|φ_n^h⟩ = U_{n+1}^† ··· U_N^† U_T |ψ_0^h⟩ .

Equations (7) and (9) provide the analytical expressions needed for gradient descent. In the next two subsections, we discuss how to numerically evaluate these expressions in a memory-efficient way.

B. Numerical implementation of hard-coded gradient
The calculation of gradients in Eqs. (7) and (9) requires numerical evaluation of expressions of the form U_n|Ψ⟩ and (∂U_n/∂a_n)|Ψ⟩. Here, the short-time propagator U_n is given by the matrix exponential e^A with A = −i H_n ∆t.
Numerically evaluating e^A|Ψ⟩ is not without challenge. As pointed out by Moler and Van Loan [39], several methods exist, yet none of them is truly optimal. Three methods ranked highly in Ref. [39] are based on solving ordinary differential equations, matrix diagonalization, and scaling and squaring (combined with series expansion). ODE solving, in this context, tends to be the most expensive in runtime [34]. We mainly focus on scaling and squaring, but comment on situations where matrix diagonalization is preferable in Sec. V.
Scaling and squaring [40] is based on the identity

e^A = (e^B)^s , with B = A/s ,

where the matrix exponential on the right-hand side now involves the matrix B = A/s, whose norm is reduced compared to ||A||. This generally allows for a lower truncation order m of the exponential series,

e_m^B = Σ_{q=0}^{m} B^q / q! ,

where m is typically smaller than the order required for the original e^A. Our goal is to approximate the state vector

e^A |Ψ⟩ ≈ (e_m^B)^s |Ψ⟩ = e_m^B ( e_m^B ( ··· e_m^B |Ψ⟩ )) .

According to the last expression, the approximation of e^A|Ψ⟩ can proceed by repeated application of B = A/s to a vector, without ever invoking matrix-matrix multiplication. This boosts runtime performance from O(d³) to O(d²).
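A minimal sketch of this matrix-vector-only evaluation follows. For simplicity, the scaling factor s and truncation order m are fixed by hand (the paper instead chooses them adaptively from an error bound, cf. Eq. (14)); the test matrix is an arbitrary anti-Hermitian placeholder.

```python
import numpy as np
from scipy.linalg import expm

def expm_multiply_taylor(A, psi, s=8, m=20):
    """Approximate e^A |psi> as (e_m^{A/s})^s |psi> using only
    matrix-vector products (mu = s*m of them in total)."""
    B = A / s
    for _ in range(s):                 # apply e_m^B a total of s times
        term = psi.copy()
        out = psi.copy()
        for q in range(1, m + 1):      # truncated Taylor series e_m^B |psi>
            term = (B @ term) / q      # term = B^q |psi> / q!
            out = out + term
        psi = out
    return psi

# Quick check against dense matrix exponentiation on a random example.
rng = np.random.default_rng(1)
H = rng.normal(size=(50, 50))
H = H + H.T                            # Hermitian toy "Hamiltonian"
A = -1j * 0.3 * H                      # anti-Hermitian, so e^A is unitary
psi = np.zeros(50, dtype=complex)
psi[0] = 1.0
approx = expm_multiply_taylor(A, psi)
print(np.linalg.norm(approx - expm(A) @ psi))   # small residual
```

Each call costs s·m matrix-vector products of cost Θ(κ) for a sparse matrix, which is the origin of the factor µ = µ(d) in the runtime analysis of Sec. IV.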
To proceed with the evaluation of e^A|Ψ⟩, the truncation order m and scaling factor s are determined in such a way that the error remains below the error tolerance τ,

ε_{A,|Ψ⟩}(m, s) = || fl[(e_m^B)^s |Ψ⟩] − e^A |Ψ⟩ || ≤ τ .

Here, fl[·] denotes the floating-point representation of the bracketed expression.

Having addressed the challenges associated with numerically evaluating e^A|Ψ⟩, we now turn to the calculation of (∂U_n/∂a_n)|Ψ⟩, which is crucial for computing cost function gradients as in Eqs. (7) and (9). We base this evaluation on the approximation

(∂U_n/∂a_n)|Ψ⟩ ≈ [∂(e_m^{B_n})^s/∂a_n] |Ψ⟩ ,

where B_n = −iH_n∆t/s. We proceed by using the auxiliary matrix method based on the identity [42,43]

exp(A_n) = [[ e^{B_n} , ∂e^{B_n}/∂a_n ], [ 0 , e^{B_n} ]] .

This relates [∂(e_m^{B_n})^s/∂a_n]|Ψ⟩ to the exponential of the auxiliary matrix A_n acting on a vector. Here, A_n is defined as

A_n = [[ B_n , ∂B_n/∂a_n ], [ 0 , B_n ]] ,

which is a 2 × 2 matrix of operators, each acting on our Hilbert space of dimension d. Once a basis is chosen, A_n can be represented as a 2d × 2d matrix of complex numbers.
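The auxiliary-matrix identity can be sketched as follows, here with s = 1 and a dense expm for brevity (the paper applies the same identity inside the scaling-and-squaring scheme). Applying exp(A_n) to the stacked vector (0, Ψ) returns ((∂U_n/∂a_n)|Ψ⟩, e^{B_n}|Ψ⟩) in its two halves. H_s, h_c and all parameters below are toy placeholders.

```python
import numpy as np
from scipy.linalg import expm

d = 8
rng = np.random.default_rng(2)
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H_s = (M + M.conj().T) / 2                      # random Hermitian static part
h_c = np.diag(np.arange(d, dtype=float)).astype(complex)  # toy control operator
a_n, dt = 0.3, 0.05

B = -1j * (H_s + a_n * h_c) * dt                # B_n (with s = 1)
dB = -1j * h_c * dt                             # dB_n/da_n
A_aux = np.block([[B, dB],                      # 2d x 2d auxiliary matrix A_n
                  [np.zeros((d, d)), B]])

psi = np.zeros(d, dtype=complex)
psi[0] = 1.0
w = expm(A_aux) @ np.concatenate([np.zeros(d, dtype=complex), psi])
dU_psi = w[:d]                                  # upper half: (dU_n/da_n)|psi>

# Finite-difference check of the propagator derivative.
eps = 1e-6
fd = (expm(-1j * (H_s + (a_n + eps) * h_c) * dt) @ psi
      - expm(-1j * (H_s + (a_n - eps) * h_c) * dt) @ psi) / (2 * eps)
print(np.max(np.abs(dU_psi - fd)))              # small difference
```

Note that only the action on a vector is ever needed, so in the full scheme the 2d-dimensional exponential is likewise evaluated with matrix-vector products only.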

C. Efficient evaluation of hard-coded gradients
The analytical gradient expressions, such as ∇C_1^st in Eq. (7), are generally complicated and involve a large number of contributing quantities. Depending on the grouping and ordering of operations, these quantities may need to be stored simultaneously or computed repeatedly, with critical implications for runtime and memory usage. The scheme originally proposed by Khaneja et al. [11] strikes a favorable compromise that avoids the proliferation of stored intermediate state vectors. More specifically, the final quantum state |ψ_N⟩ is obtained by forward propagating |ψ_0⟩. Then, backward propagation of |ψ_N⟩ and |φ_T⟩ is carried out to calculate the matrix element ⟨φ_n|(∂U_n/∂a_n)|ψ_{n−1}⟩ needed for the n-th component of the gradient, thus building up the gradient in descending order. For both propagation directions, only the current intermediate states are stored, leading to a memory cost of Θ(1).
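The forward-backward scheme can be sketched as follows. Only two state vectors live in memory at any time; each U_n is re-generated from a_n during the backward pass rather than stored. For this toy check, ∂U_n/∂a_n is taken by finite differences (the paper instead uses the auxiliary-matrix method), and H_s, h_c and all parameters are placeholders; the resulting gradient is verified against finite differences of the cost itself.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
H_s = np.array([[1.0, 0.2], [0.2, -1.0]], dtype=complex)
h_c = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
N, dt = 20, 0.05
a = rng.normal(size=N)
psi0 = np.array([1.0, 0.0], dtype=complex)
phi_T = np.array([0.0, 1.0], dtype=complex)

def U_of(a_n):
    return expm(-1j * (H_s + a_n * h_c) * dt)

def hg_gradient(a):
    """Gradient of C = 1 - |<phi_T|psi_N>|^2 with Theta(1) state storage."""
    psi = psi0
    for a_n in a:                        # forward pass: one state in memory
        psi = U_of(a_n) @ psi
    overlap = np.vdot(psi, phi_T)        # <psi_N|phi_T>
    phi, grad, eps = phi_T, np.zeros(N), 1e-6
    for n in reversed(range(N)):         # backward pass, descending order
        U = U_of(a[n])
        psi = U.conj().T @ psi           # psi_{n-1} = U_n^dagger psi_n
        dU = (U_of(a[n] + eps) - U_of(a[n] - eps)) / (2 * eps)
        grad[n] = -2.0 * np.real(overlap * np.vdot(phi, dU @ psi))
        phi = U.conj().T @ phi           # phi_{n-1} = U_n^dagger phi_n
    return grad

def cost(a):
    psi = psi0
    for a_n in a:
        psi = U_of(a_n) @ psi
    return 1.0 - abs(np.vdot(phi_T, psi)) ** 2

eps = 1e-6
fd = np.array([(cost(a + eps * e) - cost(a - eps * e)) / (2 * eps)
               for e in np.eye(N)])
print(np.max(np.abs(hg_gradient(a) - fd)))   # small difference
```

Backward propagation of ψ relies on the unitarity of U_n, so ψ_{n−1} is recovered exactly as U_n^†ψ_n without re-running the forward pass.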

IV. SCALING ANALYSIS AND BENCHMARKING
We consider the following three methods to compute gradients within GRAPE: evaluation of hard-coded gradients, full automatic differentiation [31], and semi-automatic differentiation [34]. To determine the method most appropriate for a given problem, we inspect the memory usage and runtime scaling of these three approaches. Following this analysis, we present benchmark studies of specific state-transfer and gate-operation problems to illustrate the scaling behavior.

A. Scaling analysis of runtime and memory usage
Runtime and memory usage can pose significant challenges when performing optimization tasks for quantum systems with large Hilbert space dimension d or for time evolution requiring a significant number of time steps N. In this subsection, we analyze the scaling of memory usage and runtime with respect to both N and d, focusing on the cost contribution C_1 for state-transfer and unitary-gate optimization. Our findings can be generalized to other cost contributions beyond C_1, as elaborated in App. A.
a. Evaluation of hard-coded gradients (HG) using forward-backward propagation. We start by considering state-transfer problems. The memory usage for evaluating ∇C_1^st is mainly determined by the required storage of quantum states and the Hamiltonian. In forward-backward propagation, the number of states and instances of Hamiltonian matrices stored at any given time is of the order of unity. Each individual state vector of dimension d consumes memory ∼ Θ(d). Memory usage for the Hamiltonian depends on the employed storage scheme. In the "worst" case of dense matrix storage, the memory required for the Hamiltonian is Θ(d²). For sparse matrix storage, memory usage generally depends on sparsity and is determined by the number of stored Hamiltonian matrix elements (here denoted as κ). For example, for a diagonal or tridiagonal matrix, memory usage is κ ∼ Θ(d); for a linear chain of qubits with nearest-neighbor σ_z coupling, it is Θ(d log₂ d). Combining the scaling relations above gives an overall asymptotic memory usage of Θ(d + κ).
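The role of κ can be made concrete with a small storage experiment; the tridiagonal matrix below is a generic placeholder, not one of the paper's Hamiltonians.

```python
import numpy as np
from scipy.sparse import diags

# A tridiagonal "Hamiltonian" kept in sparse (CSR) form stores
# kappa = 3d - 2 = Theta(d) elements, versus Theta(d^2) for dense storage.
for d in (100, 1000, 10000):
    H = diags([np.ones(d - 1), 2.0 * np.ones(d), np.ones(d - 1)],
              offsets=[-1, 0, 1], format="csr")
    print(d, H.nnz, d * d)   # nnz grows linearly; dense entry count quadratically
```

A matrix-vector product with such a matrix touches each stored element once, which is why the runtime of one multiplication scales as Θ(κ) in the analysis below.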
We observe that the runtime of the scheme is dominated by the required Θ(N) evaluations of propagator-state products and their derivative counterparts. Numerically approximating a single propagator-state product requires µ matrix-vector multiplications [see Eq. (16)], where the runtime of one matrix-vector multiplication scales as Θ(κ). The implicit dependence of µ = µ(d) on d is specific to each particular system and can be difficult to obtain analytically. Numerically, we find for a driven transmon coupled to a cavity that µ ∼ Θ(√d); for a linear chain of coupled qubits, the scaling is Θ(log₂ d), see App. E for more details. Once combined, the above individual scaling relations lead to an overall runtime asymptotic behavior of Θ(Nµκ). For example, in the case of the linear chain of coupled qubits, the runtime scales as Θ(N d (log₂ d)²).

We next turn to gate-optimization problems, where the scaling behavior of memory usage and runtime is as follows. We calculate ∇C_1^g by sequentially applying forward-backward propagation to each of the d basis states. The memory usage is Θ(d + κ), the same as for ∇C_1^st. The runtime scaling is Θ(Nµκd), which is d times larger than the runtime scaling for ∇C_1^st.

b. Full automatic differentiation (full-AD). Reverse-mode automatic differentiation extracts gradients by a forward pass and a reverse pass through the computational graph. The forward pass computes the cost function C, and the reverse pass accumulates the gradient via the chain rule.
The memory usage for evaluating ∇C_1^st is determined by the requirement that all intermediate numerical results must be stored as part of the forward pass and act as input to the reverse pass. Specifically, the forward pass evolves the initial state vector to the final state vector in N time steps. The total number of vectors that have to be stored along the way is Θ(Nµ), since computation of a single short-time propagator-state product via scaling and squaring necessitates µ iterations. In addition, considering the memory usage for the Hamiltonian, the full scaling is Θ(Nµd + κ).
The runtime for evaluating ∇C_1^st combines the forward and reverse pass. It is dominated by the evaluation of Θ(N) propagator-state products, each of which has the runtime scaling Θ(µκ), leading to an overall runtime scaling of Θ(Nµκ).
For gate-optimization problems, C_1^g is evaluated by evolving not only a single initial state but a whole set of d basis vectors. Accordingly, memory usage increases by a factor of d relative to state transfer, resulting in the overall memory scaling Θ(Nµd²). Assuming that the evolution of individual basis vectors is not parallelized, runtime also grows by a factor of d, so that the runtime scaling for ∇C_1^g is Θ(Nµκd).

c. Semi-automatic differentiation (semi-AD). For gradients of cost contributions that are tedious to derive analytically, semi-automatic differentiation [34] offers an alternative approach in which some derivatives are hard-coded and others are treated by automatic differentiation. Compared to full-AD, semi-AD has the benefit that the memory usage scaling is independent of µ while maintaining the same runtime scaling.

d. Comparison
The results of this scaling analysis are summarized in Tab. I. For full-AD, memory usage grows linearly with µ and with the number of time steps N. By contrast, memory usage for HG does not increase with µ and N, but remains constant. This advantage is not bought at the cost of worse runtime scaling. Therefore, HG is preferable over full-AD in the optimal control of large quantum systems, where memory usage would otherwise be prohibitively large.
Semi-AD presents an interesting compromise, viable whenever the memory bottleneck is not too severe.
We illustrate the discussed scaling by benchmark studies for concrete optimization problems in the following section.

B. Benchmark studies
Our first benchmark example studies the following state-transfer problem. Consider a setup in which a driven flux-tunable transmon is capacitively coupled to a superconducting cavity, and perform a state transfer from the cavity vacuum state to the 20-photon Fock state. The Hamiltonian in the frame co-rotating with the transmon drive is given in Eq. (20). Here, the drive is both longitudinal and transverse; c and b are the annihilation operators for cavity photons and transmon excitations, respectively. α, g and ∆ denote the anharmonicity, the interaction strength, and the detuning between transmon and cavity frequencies. Optimization proceeds by minimizing the infidelity C_1^st and limiting the occupation of higher transmon levels by including the cost contribution C_2^st [Eq. (A1)]. The latter has the additional benefit of enabling truncation to only a few transmon levels. We measure runtime and peak memory usage of full-AD and HG for a single optimization iteration. The runtime is measured with the package Temci [44], and the peak memory usage is monitored with the memory_profiler package [45]. The optimization is performed on an AMD Ryzen Threadripper 3970X 32-core CPU at 3.7 GHz.
Benchmark results for the state-transfer optimization are shown in Fig. 2. We consider a single time step (N = 1) and measure the runtime versus d. The results in Fig. 2(a) confirm that full-AD and HG have the same scaling of runtime per time step with respect to d (Tab. I). The observed runtime scaling is consistent with Θ(µκ) for µ ∼ Θ(√d) and κ ∼ Θ(d²) (see App. E). The scaling of required memory resources is illustrated in Fig. 2(b): memory usage scales with dimension d as Θ(d²) for both full-AD and HG, consistent with κ ∼ Θ(d²) being the dominant contribution. As anticipated, the memory usage scaling with respect to N differs between full-AD and HG [Fig. 2(c)]: for full-AD, the scaling is linear in N; for HG, it is independent of N. This confirms the favorable memory efficiency of HG compared to full-AD in optimization problems with a large number of time steps. (In this benchmark, AD is implemented with the package Autograd [38]. This package does not support sparse linear algebra, so we implement dense matrix storage for the Hamiltonian in both full-AD and HG for a fair comparison.)
We next turn to an example of optimizing a unitary gate. This benchmark is performed for three driven transmons with nearest-neighbor coupling. The target gate consists of simultaneous Hadamard operations on each of the three transmons. For concreteness, we consider the case of transmons with the same frequency, driven resonantly. The full Hamiltonian in the rotating frame of the drive is given in Eq. (21). The goal of the optimization is to minimize the infidelity C_2^g. The setup for this benchmark is the same as in the previous example.
Figure 3 shows the benchmark results for the gate optimization with a single time step (N = 1). The observed runtime scaling with dimension d is the same for both full-AD and HG, see Fig. 3(a); this power law arises from Θ(µκd) with κ ∼ Θ(d²) and µ ∼ Θ(d^{2/3}) (App. E). The memory usage of full-AD exhibits an additional Θ(Nµ) dependence compared to HG (Tab. I). Throughout our benchmarks, this causes full-AD to consume about 20 times more memory than HG. This underlines that HG is preferable whenever memory constraints become a limitation in optimization problems with large Hilbert space dimension. We emphasize that this improvement is achieved without worsening the runtime scaling. (We note that the runtime scaling Θ(Nµκd) of HG via scaling and squaring can in principle exceed the scaling O(Nd³) of matrix exponentiation based on matrix diagonalization, see App. F; in this case, the latter is the better option.)

V. CHOOSING AMONG DIFFERENT OPTIMIZATION STRATEGIES
There are several considerations which determine whether full-AD, HG or semi-AD is the optimal strategy for a given optimization problem; see the decision tree in Fig. 4. Whenever the Hilbert space dimension d and the number of time steps N are sufficiently small, memory usage may not be a bottleneck. In this case, full-AD may be preferable, since it eliminates the need for hard-coded gradients. If the memory cost of full-AD is not affordable but analytical gradients are tedious to compute, semi-AD may be a viable compromise, though it is still less memory efficient than HG. In all other cases, HG is the method of choice: it can handle optimization with larger N and d than full-AD and semi-AD.
The choice of implementing HG via scaling and squaring or via matrix diagonalization should be informed by whether the Hamiltonian is sparse and κ scales better than Θ(d²). In that case, scaling and squaring is preferable and leads to an overall memory usage ∼ Θ(d + κ), which is better than the memory scaling O(d²) of matrix diagonalization. By contrast, for Hamiltonians with κ ∼ Θ(d²), the two methods are equivalent in memory usage scaling, and we are free to select the one with better runtime for a specific optimization task. While the runtime for optimization using matrix diagonalization scales as O(d³) [App. F], the runtime for optimization based on scaling and squaring differs between state-transfer and gate optimizations: (1) for state transfer, the runtime scales as Θ(µd²); hence, whenever µ scales better than Θ(d), scaling and squaring wins over matrix diagonalization.

VI. CONCLUSIONS
Motivated by the memory bottleneck in quantum optimal control of large systems, we have revisited and compared the memory usage and runtime scaling of multiple flavors of GRAPE: hard-coded gradients (HG), full automatic differentiation (full-AD) and semi-automatic differentiation (semi-AD). These three methods are equivalent in runtime scaling but differ characteristically in the scaling of memory usage. The ranking among them, from most to least memory efficient, is: HG, semi-AD, full-AD. Thus, HG is the preferred option when facing a memory bottleneck.
We have illustrated the scaling of HG and full-AD by benchmark studies for concrete state-transfer and gate-operation optimizations, respectively. As part of this, we have presented improvements in the implementation of scaling and squaring, used to numerically evaluate propagator-state products. In the example studied, we observed a speedup compared to Scipy's expm_multiply function. The decision tree shown in Fig. 4 summarizes the choice of appropriate numerical strategy for a given optimization problem.
Here, U_{n,1} = U_n ··· U_1 describes the propagation from time step 1 to n, and we obtain the gradient in terms of these propagators. Here, {|ψ_0^h⟩} forms an arbitrary orthonormal basis and h = 1, 2, ..., d enumerates the basis states. The vector |Φ_{T,n}^h⟩ obeys the recursion relation (A10), again enabling the evaluation of the gradient expression with a forward-backward propagation scheme.

Appendix B: Scaling analysis
For the cost contributions discussed in App. A, we now analyze the memory usage and runtime scaling of full-AD, semi-AD and HG.
a. Scaling analysis: hard-coded gradients. Thanks to the similarities in the expressions and the existence of recursion relations, forward-backward schemes akin to Fig. 1 can also be applied to ∇C_2^st, ∇C_3^st and ∇C_3^g. As a result, we find that the scaling analysis in Sec. IV A leads to the same memory usage and runtime requirements as for ∇C_1^st and ∇C_2^g.

b. Scaling analysis: full automatic differentiation. Despite the specific differences among the expressions for the cost contributions C_2^st, C_3^st and C_3^g, we find that the scaling of runtime and memory usage is again set by the need to store intermediate vectors and repeatedly perform matrix-vector multiplications. Consequently, the scaling remains the same as for the cost contributions discussed in Sec. IV A. In summary, the scaling for the state-transfer and gate-operation cost contributions matches that shown in Table I. For the scaling of semi-AD, see Ref. [34].

Appendix C: Error bound for scaling and squaring

The approximation of e^A|Ψ⟩ in Sec. III B required the determination of an upper bound E_A(m, s) for the error ε_{A,|Ψ⟩}(m, s) [Eq. (14)]. In this appendix, we show how to obtain such a bound.
Considering the expression for the error [Eq. (14)], we note that the denominator is simply 1. The error ε_{A,|Ψ⟩} is bounded as follows:

ε_{A,|Ψ⟩} ≤ ε_t + ε_r .

The error ε_t arises from the truncation of the Taylor series, and the rounding error ε_r is due to finite-precision arithmetic. We proceed to derive upper bounds for these two errors separately.

Upper bound for the truncation error ε_t
We rewrite the truncation error in terms of R_m(B) = Σ_{q=m+1}^∞ B^q/q!, the matrix-valued remainder of the truncated exponential series. Employing the triangle inequality and submultiplicativity [51], we find a bound whose second step utilizes the fact that the state is normalized and e^B is unitary. This upper bound must be evaluated numerically to determine m and s. However, computing the matrix 2-norm is inconvenient, since it requires the use of an eigensolver. To mitigate this, we switch to the matrix 1-norm; monotonicity of R_m then leads to the bound stated in Eq. (C5).

Upper bound for the rounding error ε_r
The rounding error ε_r originates from the finite precision of the floating-point representation of real numbers as well as the finite accuracy of basic arithmetic operations. The floating-point representation fl(ξ) of a real number ξ can be written as fl(ξ) = ξ(1 + δ), where δ = δ(ξ) is the relative deviation [51]. Its magnitude is always smaller than the machine precision, i.e., |δ| < u. Similarly, a complex number z obeys fl(z) = z(1 + δ) with |δ| ≤ u. Arithmetic operations such as addition and multiplication of two numbers z₁, z₂ incur a relative deviation δ = δ(z₁, z₂) bounded by ũ := 2√2 u/(1 − 2u) [51]. The relative deviations in Eqs. (C6) and (C7) generally depend on both numbers involved as well as on the operation in question. In the following, we distinguish different δ by a subscript ν.
Using Eqs. (C6) and (C7), we derive an upper bound for the rounding error in the evaluation of the dot product, which is the basic arithmetic operation in evaluating e_m^B|Ψ⟩ according to Eq. (13).
a. Upper bound for the rounding error associated with evaluating the dot product. The rounding error incurred when evaluating the dot product can be bounded as follows. Here, w^T is a row of B = −iH∆t/s. We illustrate how to bound the rounding error for d = 1, 2, and give the general result subsequently. Based on Eqs. (C6) and (C7), we obtain the bound for d = 1; in the last step, we have used [51] the quantity γ_n := nu/(1 − nu). The bound for d = 2 follows analogously, as does the general upper bound for arbitrary d. Since w^T is proportional to a row of the Hamiltonian matrix, sparsity of H implies sparsity of w^T. Vanishing entries in w^T are special because they do not incur rounding errors. Thus, one can obtain a stronger upper bound in which d is replaced by σ, the number of non-zero elements in w^T.
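The standard dot-product bound of this γ-type can be checked numerically; the example below is our own illustration (not from the paper), using single precision so that the rounding error is visible against a double-precision reference.

```python
import numpy as np

# Check of the bound |fl(w.x) - w.x| <= gamma_d * sum_i |w_i x_i|,
# with gamma_d = d*u/(1 - d*u) and u the unit roundoff of the working
# precision (here: float32). Vectors are arbitrary random placeholders.
rng = np.random.default_rng(3)
d = 1000
w = rng.normal(size=d).astype(np.float32)
x = rng.normal(size=d).astype(np.float32)

u = np.finfo(np.float32).eps / 2            # unit roundoff of float32
gamma_d = d * u / (1 - d * u)
computed = np.dot(w, x)                     # evaluated in single precision
exact = np.dot(w.astype(np.float64), x.astype(np.float64))
bound = gamma_d * np.sum(np.abs(w.astype(np.float64) * x.astype(np.float64)))
print(abs(float(computed) - exact) <= bound)   # the bound holds
```

In practice the observed error is far below the bound, since worst-case rounding in every operation is unlikely; the bound is what enters the a priori choice of (m, s).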
b. Upper bound for the rounding error of e_m^B|Ψ⟩. The vector e_m^B|Ψ⟩ is evaluated recursively. We illustrate how to bound the accumulated error for m = 0, 1, and then give the general result.
For the case m = 0, the bound follows directly from the dot-product result. We then obtain the upper bound of the error for the example of m = 1, where σ_i is the number of non-zero elements in the i-th row of the matrix B. Here, the third line arises from the division error between a complex number and a real number, i.e., fl(z/ξ) = (z/ξ)(1 + δ) with |δ| < u [51]. The general upper bound for arbitrary m follows analogously. Here, β||b_0||₂ is the desired upper bound for the rounding error of e_m^B|Ψ⟩.
c. Upper bound for the error ε_r. The vector (e_m^B)^s|b_0⟩ is evaluated by the recursive relation, where the last step is enabled by Eq. (C19). Finally, we bound the matrix norm in the second term. Since Eq. (C17) holds for any complex vector b_0, the error in Eq. (C20) can be bounded further. Generalizing this strategy, we obtain the expression for the upper bound of ε_r. Combining the upper bounds for ε_r and ε_t, we return to Eq. (C1) and obtain the total error bound E_A(m, s).

Appendix D: Numerical comparison between scaling and squaring implementations

Scaling and squaring is used to evaluate the propagator-state product e^{−iH∆t}|Ψ⟩. We compare the relative error and runtime of our implementation of scaling and squaring with Scipy's implementation, expm_multiply [41]. The relative errors of both implementations follow the general expression in Eq. (14), but differ in the specific choice of m and s. As a proxy for the exact propagator-state product, we apply a Taylor expansion to the matrix exponential with 400-digit precision using Sympy; the result of the product converges to 30 digits after the decimal point. The runtime is measured with the help of the Temci package [44].
As the basis for this comparison, we use the example Hamiltonians (20) and (21) and implement sparse storage for them. The relative errors are shown in Figs. 5(a) and (c) as a function of the matrix norm ||−iH∆t||₂.
In both examples, the relative errors of our implementation are bounded by the error tolerance τ, while the relative errors of Scipy's implementation tend to exceed the tolerance for large norms. There are two reasons for this tendency. First, rounding errors are not considered in the derivation of the upper bound for the errors [41]; second, the truncation of e_m^B [Eq. (12)] is merely based on a heuristic criterion.
Even though the relative errors of Scipy's implementation are larger, it requires a smaller number µ of matrix-vector multiplications. This is illustrated in Figs. 5(b) and (d): the ratio of µ between our implementation and Scipy's is always larger than 1. Although Scipy's implementation spends less time on matrix-vector multiplications, its total runtime is larger than that of our implementation, see Figs. 5(b) and (d). This is due to the larger runtime overhead of Scipy's implementation for determining (m, s).

In conclusion, our implementation shows better numerical stability and runtime than Scipy's implementation in our numerical experiments.

Appendix E: The scaling of µ
The asymptotic behavior of the runtime, discussed in the main text, is governed by the scaling of µ [Eq. (16)] with the dimension d of the Hilbert space. The dependence µ(d) is specific to the matrix A to be exponentiated (which is proportional to the Hamiltonian). This, together with the complexity of the minimization constraint [Eq. (15)], prevents us from stating a general scaling relation. We thus turn to a discussion of several concrete examples. We first consider the example of the transmon-cavity system, see Eq. (20), where we increase d via the cavity dimension d_c while leaving the transmon cutoff fixed. The corresponding Hamiltonian H₁, written in the bare product basis, then has a constant maximum number of non-zero elements in each row. (I.e., the parameter σ of Eq. (C16) is independent of d.) Numerically, we find in this case that µ scales linearly with ||A||₁ [Fig. 6(a)], which results in µ ∼ Θ(√d), see Fig. 6(e). In the second example, we consider the system of three coupled transmons, see Eq. (21). The Hamiltonian H₂ is written in the harmonic oscillator basis and has a constant σ, which leads to µ ∼ Θ(||A||₁) [Fig. 6(b)]. This scaling results in µ ∼ Θ(d^{2/3}), see Fig. 6(f).
For other Hamiltonian matrices where σ depends on d, the scaling µ ∼ Θ(||A||₁) may still hold. In the following, we show two concrete examples: one where this linearity holds, and another where it breaks down. We first consider a driven system of N_q qubits with nearest-neighbor σ_z coupling, described in the rotating frame; each qubit is controlled via drive fields a_x^{(n)} and a_y^{(n)}. For this example, the numerical results show that the scaling µ ∼ Θ(||A||₁) still holds, see Fig. 6(c). Since we increase d by enlarging N_q, we obtain ||A||₁ ∼ Θ(N_q) ∼ Θ(log₂ d). This results in µ ∼ Θ(log₂ d), see Fig. 6(g). For the second example, we consider two capacitively coupled fluxonium qubits. Here, E_{Ci}, E_{Ji}, E_{Li} denote the charging, Josephson and inductive energies of the two fluxonium qubits, and g is the interaction strength. ϕ and n are the phase and charge-number operators, defined in the usual way. We represent the Hamiltonian in the harmonic oscillator basis. Fig. 6(d) shows that the scaling of µ is superlinear in ||A||₁ for τ = 10⁻⁸. We thus settle for a numerical determination of the scaling relation between µ and d for τ = 10⁻⁸ by increasing the dimensional cutoff for both fluxonium qubits simultaneously. The observed scaling is µ ∼ Θ(d^{0.65}), see Fig. 6(h).
In the HG scheme, propagators are evaluated by employing matrix diagonalization. In this appendix, we first discuss the method used to compute the propagator derivative and then analyze the scaling of HG.

Evaluation of propagator derivative
The operator A = −iH∆t is anti-Hermitian, and hence can be diagonalized and exponentiated according to

A = S D S†,   e^A = S e^D S†.

Here, S is unitary and D is diagonal. As a result, the propagator derivative with respect to a control amplitude u_k can be written as

∂(e^A)/∂u_k = S (E ∘ G) S†,   G = S† (∂A/∂u_k) S,

with E_ij = (e^{D_ii} − e^{D_jj})/(D_ii − D_jj) for D_ii ≠ D_jj and E_ij = e^{D_ii} otherwise, where ∘ denotes the Hadamard (element-wise) product.
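As a sanity-check sketch (not the paper's code; the function name and single-control signature are our own choices), the diagonalization-based propagator derivative takes only a few lines of numpy:

```python
import numpy as np

def propagator_and_derivative(H, dH, dt):
    """Return U = exp(-i H dt) and dU/du via eigendecomposition of H.

    H is the (Hermitian) Hamiltonian at the current control value and
    dH its derivative with respect to the control amplitude u.
    """
    evals, S = np.linalg.eigh(H)                 # H = S diag(evals) S^dagger
    lam = -1j * evals * dt                       # eigenvalues of A = -i H dt
    expl = np.exp(lam)
    U = (S * expl) @ S.conj().T                  # S e^D S^dagger

    # Divided-difference (Loewner) matrix E of exp over the eigenvalues:
    # E_ij = (e^{lam_i} - e^{lam_j}) / (lam_i - lam_j), E_ii = e^{lam_i}
    dl = lam[:, None] - lam[None, :]
    degenerate = np.isclose(np.abs(dl), 0.0)
    num = expl[:, None] - expl[None, :]
    E = np.where(degenerate, expl[:, None], num / np.where(degenerate, 1.0, dl))

    G = S.conj().T @ (-1j * dt * dH) @ S         # dA/du rotated to eigenbasis
    dU = S @ (E * G) @ S.conj().T                # Hadamard product E ∘ G
    return U, dU
```

The derivative can be validated against a finite difference of the exact propagator, which also confirms that the formula correctly handles non-commuting H and dH.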

Scaling analysis
The gradients of the contributions to the cost function (for the case of state transfer) are evaluated by the forward-backward propagation scheme. In this scheme, the storage required for states does not scale with the number of intermediate time steps. The matrices that must be stored are the system and control Hamiltonians as well as the dense matrices S, E, and G. Thus, the total memory usage scales as O(d^2). The runtime of the scheme is dominated by O(N) evaluations of the propagator and its derivative. Employing matrix diagonalization for the propagator evaluation requires a runtime scaling as O(d^3) [53]. Calculation of the propagator derivative involves a matrix-matrix multiplication and a Hadamard product, with runtimes scaling as O(d^3) and O(d^2), respectively. Thus, the overall runtime of the HG scheme scales as O(N d^3).
The gradients of the contributions to the gate-operation cost function are evaluated in an analogous manner, resulting in the same runtime and memory scaling.
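The constant-storage bookkeeping of the forward-backward scheme can be sketched as follows. This is our own illustrative single-control state-transfer example: only two state vectors are alive at any time, propagators are recomputed rather than stored, and for brevity the propagator derivative is approximated by a finite difference instead of the exact diagonalization formula:

```python
import numpy as np

def grape_gradient(H0, Hc, amps, dt, psi0, psi_tgt):
    """Gradient of |<psi_tgt| U_N ... U_1 |psi0>|^2 w.r.t. control amplitudes.

    Memory is O(1) in the number of time steps: only the current state and
    costate vectors are kept; propagators are rebuilt during the sweeps.
    """
    def step(psi, u, sign=1):
        # apply exp(-i (H0 + u Hc) dt) (sign=+1) or its adjoint (sign=-1)
        w, V = np.linalg.eigh(H0 + u * Hc)
        return (V * np.exp(sign * -1j * w * dt)) @ V.conj().T @ psi

    N = len(amps)
    psi = psi0.copy()
    for u in amps:                               # forward sweep, no storage
        psi = step(psi, u)
    overlap = np.vdot(psi_tgt, psi)

    chi = psi_tgt.copy()                         # backward sweep
    grad = np.zeros(N)
    for k in reversed(range(N)):
        psi = step(psi, amps[k], sign=-1)        # back-propagate to psi_(k-1)
        eps = 1e-7                               # finite-difference stand-in
        dUpsi = (step(psi, amps[k] + eps) - step(psi, amps[k] - eps)) / (2 * eps)
        grad[k] = 2 * np.real(np.conj(overlap) * np.vdot(chi, dUpsi))
        chi = step(chi, amps[k], sign=-1)        # back-propagate costate
    return grad
```
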

Figure 2. Benchmark results for Fock state generation in a system consisting of a cavity coupled to a transmon. (a) Runtime and (b) memory usage per evolution time step versus Hilbert space dimension d. The dimension is varied by increasing the cavity dimension while fixing the transmon dimension to 6. (c) Memory usage as a function of the number of time steps for d = 600. In all panels, data points are results from benchmarking and dashed lines are power-law fits. (Results are obtained for ∆/2π = 3 GHz, g/2π = 0.1 GHz, α/2π = −0.225 GHz, and constant controls a_z/2π = a_x/2π = 0.1 GHz.)

Figure 4. Decision trees for selecting the optimal numerical strategy. (a) Choosing among three numerical methods for evaluating gradients: full automatic differentiation, semi-automatic differentiation, and hard-coded analytical gradients. (Here, we assume µ ∼ Θ(d); for different scalings see Tab. I.) (b) Determining the appropriate strategy for evaluating propagator-state products.
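The qualitative logic of panel (a) can be mimicked in a few lines of Python. Note that the memory formulas and thresholds below are illustrative placeholders of our own, not the paper's quantitative criteria:

```python
def choose_gradient_method(d, n_steps, mem_budget_bytes, bytes_per_cplx=16):
    """Pick the most convenient gradient method whose peak memory fits.

    Illustrative rule of thumb only: full-AD stores O(N) dense propagators,
    semi-AD stores O(N) states plus O(d^2) dense matrices, HG stores O(d^2).
    """
    dense = bytes_per_cplx * d * d
    full_ad = n_steps * dense                        # full computational graph
    semi_ad = n_steps * bytes_per_cplx * d + dense   # states + dense matrices
    hg = dense                                       # O(1) states
    if full_ad <= mem_budget_bytes:
        return "full-AD"
    if semi_ad <= mem_budget_bytes:
        return "semi-AD"
    if hg <= mem_budget_bytes:
        return "hard-coded gradients"
    return "infeasible at this dimension"
```

For example, with a 1 GB budget a small system can afford full-AD, while a large Hilbert space with many time steps is pushed to HG.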

µ ∼ Θ(d^{2/3}) [see App. E]. The memory usage scaling with d is illustrated in Fig. 3(b), confirming that full-AD has an unfavorable extra memory cost.

Appendix C: Determining the upper bound E_A(m, s)

and b_0 is a vector representation of |Ψ⟩ under a choice of orthonormal basis. Before evaluating the upper bound for the error ||e^{B/m} b_0 − ẽ^{B/m} b_0||_2 (where the tilde denotes the truncated approximation of the matrix exponential), we first derive an upper bound for the error of a single vector component, |(e^{B/m} b_0)(i) − (ẽ^{B/m} b_0)(i)|, Eq. (C15), where |B|, |b_0| denote the matrix and the vector with elements |B_il|, |b_0^(i)|, respectively. Defining

σ := max_i σ_i,   (C16)

and using Eq. (C15), we obtain a bound on ||e^{B/m} b_0 − ẽ^{B/m} b_0||_2.

where c_0 = b_0. We illustrate how to bound the error ε_r = ||(e^{B/m})^s b_0 − (ẽ^{B/m})^s b_0||_2 (with ẽ^{B/m} the truncated approximation of e^{B/m}) for s = 1, 2, and give the general result subsequently. For s = 1, we have

||e^{B/m} b_0 − ẽ^{B/m} b_0||_2 < β ||b_0||_2 = β.   (C17)–(C19)

For s = 2, we obtain

||(e^{B/m})^2 b_0 − (ẽ^{B/m})^2 b_0||_2 ≤ ||(e^{B/m})^2 b_0 − e^{B/m} ẽ^{B/m} b_0||_2 + ||e^{B/m} ẽ^{B/m} b_0 − (ẽ^{B/m})^2 b_0||_2 < ||e^{B/m} ẽ^{B/m} b_0 − (ẽ^{B/m})^2 b_0||_2 + β ||e^{B/m}||_2,   (C20)

Figure 5. Comparison between scaling and squaring as implemented here and Scipy's expm_multiply, considering relative error and runtime. (a), (c) Relative error versus matrix norm. (b), (d) Ratios of runtime and µ (i.e., the number of matrix-vector multiplications) between our and Scipy's implementations, as functions of the matrix norm. The matrix norm ||−iH∆t||_2 is varied by increasing ∆t ∈ [1, 10]. In all panels, colors encode different error tolerances τ. The results of the first and second rows are obtained from the cavity-transmon and three-coupled-transmon Hamiltonians, respectively. The system parameters of these two Hamiltonians are the same as those given in the captions of Figs. 2 and 3, respectively.

Table I. Memory usage and runtime scaling with respect to Hilbert space dimension d, number of time steps N, stored Hamiltonian matrix elements κ = κ(d), and the number of matrix-vector multiplications µ = µ(d) required for evaluating propagator-state products. (The scaling of µ and κ depends on the specific optimization task.) The table contrasts the scaling for state transfer and gate operation for hard-coded gradients (HG), full automatic differentiation (full-AD), and semi-automatic differentiation (semi-AD).
records the benchmark based on a single time step