
Optimal control of large quantum systems: assessing memory and runtime performance of GRAPE


Published 8 February 2024 © 2024 The Author(s). Published by IOP Publishing Ltd
Citation: Yunwei Lu et al 2024 J. Phys. Commun. 8 025002. DOI: 10.1088/2399-6528/ad22e5


Abstract

Gradient Ascent Pulse Engineering (GRAPE) is a popular technique in quantum optimal control, and can be combined with automatic differentiation (AD) to facilitate on-the-fly evaluation of cost-function gradients. We illustrate that the convenience of AD comes at a significant memory cost due to the cumulative storage of a large number of states and propagators. For quantum systems of increasing Hilbert space size, this imposes a significant bottleneck. We revisit the strategy of hard-coding gradients in a scheme that fully avoids propagator storage and significantly reduces memory requirements. Separately, we present improvements to numerical state propagation to enhance runtime performance. We benchmark runtime and memory usage and compare this approach to AD-based implementations, with a focus on pushing towards larger Hilbert space sizes. The results confirm that the AD-free approach facilitates the application of optimal control for large quantum systems which would otherwise be difficult to tackle.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The rapidly evolving field of quantum information processing has generated much interest in quantum optimal control for tasks such as state preparation [1, 2], error correction [3, 4] and realization of logical gates [5–8]. This methodology has been implemented in various quantum systems [9], such as nuclear magnetic resonance [10–15], trapped ions [16, 17], nitrogen-vacancy centers in diamond [18–24], Bose–Einstein condensates [25–27] and neutral atoms [28, 29].

One of the prominent techniques for implementing quantum optimal control is Gradient Ascent Pulse Engineering (GRAPE). The central goal of optimal control is to adjust control parameters in such a way that the infidelity of the desired state transfer or gate operation is minimized. In GRAPE, the minimization is based on gradient descent and thus generally requires the evaluation of gradients. Automatic differentiation (AD) [30] is a convenient tool that has also made an impact on quantum optimal control with GRAPE [31–34]. Internally, AD builds a computational graph and applies the chain rule to automatically compute the gradient of a given function.

However, the convenience of AD comes at the cost of memory required for storing the full computational graph. The use of semi-automatic differentiation (semi-AD) [35] partially reduces the memory overhead compared to full automatic differentiation (full-AD). This reduction can still be insufficient to tackle optimal control for quantum systems of rapidly growing size [36, 37]. This motivates us to revisit the original GRAPE scheme [12] which is based on hard-coded gradients (HG), and has the smallest possible memory cost. We perform a scaling analysis of memory usage and runtime, and thereby develop a decision tree guiding the optimal choice of strategy among HG, semi-AD and full-AD. We exemplify the scaling behavior with benchmarks for concrete optimization tasks.

The outline of our paper is as follows. Section 2 briefly reviews GRAPE and introduces necessary notation. In section 3, we illustrate the derivation and memory-efficient implementation of the gradients for the most commonly used cost function contributions. Our numerical implementation of HG further improves the efficiency of state propagation. In section 4, we analyze and compare the scaling of runtime and memory usage for HG, AD and semi-AD, and present results from concrete benchmark studies. In section 5, we formulate a decision tree that facilitates the choice of optimal numerical strategy for GRAPE-based quantum optimal control. We summarize our findings and present our conclusions in section 6.

2. Brief sketch of GRAPE

We briefly sketch the basics of GRAPE [12], mainly to establish the notation used in subsequent sections.

Consider a system described by the Hamiltonian

Equation (1)

Here, Hs denotes the static system Hamiltonian and hc is an operator coupling the classical control field a(t) to the system (see footnote 1). Quantum optimal control aims to adjust a(t) such that a desired unitary gate or state transfer is performed within a given time interval 0 ≤ t ≤ T. To facilitate optimization, the total control time T is divided into N intervals of duration Δt = T/N, and the control field is specified by the amplitudes at the discrete times tn = nΔt (n = 1, 2,...,N). The discretization interval Δt is chosen such that control amplitudes are approximately constant within each Δt. This treatment closely models the actual drive output of an arbitrary waveform generator, where Δt can be chosen to align with the available resolution. We denote the time-discretized values by

Equation (2)

and group these to form the vector $a\in {{\mathbb{R}}}^{N}$. Under the influence of this piecewise-constant control field, the system's quantum state ∣ψn 〉:=∣ψ(tn )〉 evolves by a single time step according to

Equation (3)

Here, Un is the short-time propagator

Equation (4)

where Hn = Hs + an hc . Optimization of the control field proceeds via minimization of a cost function C. Typically, C includes the state-transfer or gate infidelity, along with additional cost contributions employed to moderate pulse characteristics such as maximal power or bandwidth. To this end a composite cost function is constructed, $C(a)={\sum }_{\nu }{\alpha }_{\nu }{C}_{\nu }\left(a\right)$, where αν are empirically chosen weight factors and Cν are individual cost contributions. GRAPE utilizes the method of steepest descent to minimize C(a) by updating the control amplitudes according to the cost function gradient:

Equation (5)

Here, j enumerates steepest-descent iterations and ηj > 0 is the learning rate governing the step size.
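As a concrete illustration, the update in equation (5) amounts to a plain steepest-descent loop. The sketch below assumes a user-supplied callable cost_and_gradient (a hypothetical name) that returns C(a) and ∇C(a), computed by one of the schemes discussed in section 3.

```python
import numpy as np

def grape_descent(cost_and_gradient, a0, learning_rate=1e-3, num_iterations=500):
    """Minimal sketch of the steepest-descent update of equation (5).
    `cost_and_gradient(a)` is a hypothetical user-supplied callable returning
    the composite cost C(a) and its gradient with respect to the N amplitudes."""
    a = np.asarray(a0, dtype=float).copy()
    for j in range(num_iterations):
        cost, grad = cost_and_gradient(a)
        a = a - learning_rate * grad    # a^(j+1) = a^(j) - eta_j * grad C(a^(j))
    return a
```

In practice the fixed learning rate would often be replaced by a line search or a quasi-Newton update, but the gradient evaluation analyzed below is the same in either case.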

3. Calculation of gradients

A critical ingredient for GRAPE is the computation of cost function gradients. One popular option is to leverage automatic differentiation for this purpose, as implemented in Tensorflow [38] and Autograd [39]. A clear advantage of this strategy is the ease with which complicated cost function gradients are evaluated automatically at runtime. This convenience, however, is bought at the cost of significant memory consumption, which can be prohibitive when optimal control is applied to large quantum systems. Therefore, we are motivated to revisit hard-coded gradients in a computationally efficient scheme for key cost contributions.

3.1. Analytical gradients

We provide a discussion of the analytical gradients of two key cost contributions in this subsection. Expressions for gradients of other cost contributions are shown in appendix A.

In state-transfer problems, a central goal of quantum optimal control is to minimize the state-transfer infidelity given by

Equation (6)

where ∣ϕT 〉 is the target state and ∣ψN 〉 is a state realized at final time tN . The gradient of ${C}_{1}^{\mathrm{st}}$ takes the form

Equation (7)

where the state ∣ϕn 〉 obeys the recursive relation

Equation (8)

This can be interpreted as backward propagation.

In addition to state transfer, unitary gates are another operation of interest. Gate optimization can be achieved by minimizing the gate infidelity ${C}_{1}^{g}=1-{\left|\mathrm{tr}\left({U}_{T}^{\dagger }{U}_{R}\right)/d\right|}^{2}$. Here, UT is the target unitary gate and UR = UN ⋯ U1 is the actually realized gate. For a given orthonormal basis $\{| {\psi }_{0}^{h}\rangle \}{}_{h}$ (h enumerates the basis states), the gradient of ${C}_{1}^{g}$ can be written as

Equation (9)

where $| {\psi }_{n}^{h}\rangle ={U}_{n}\cdots {U}_{1}| {\psi }_{0}^{h}\rangle $. The state $| {\phi }_{n}^{h}\rangle $ is obtained as follows: apply the target unitary UT to the basis state $| {\psi }_{0}^{h}\rangle $ and backward propagate the result to time tn,

Equation (10)

Equations (7) and (9) provide the analytical expressions needed for gradient descent. In the next two subsections, we will discuss how to numerically evaluate these expressions in a memory-efficient way.

3.2. Numerical implementation of hard-coded gradient

The calculation of gradients in equations (7) and (9) requires numerical evaluation of expressions of the form Un ∣Ψ〉 and $\tfrac{\partial {U}_{n}}{\partial {a}_{n}}| {\rm{\Psi }}\rangle $. Here, the short-time propagator Un is given by a matrix exponential eA and A = − iHn Δt.

Numerically evaluating eA ∣Ψ〉 is not without challenge. As pointed out by Moler and Van Loan [40], several methods exist, yet none of them is truly optimal. Three methods ranked highly in reference [40] are based on solving ordinary differential equations, matrix diagonalization and scaling and squaring (combined with series expansion). ODE solving, in this context, tends to be the most expensive in runtime [35]. We mainly focus on scaling and squaring but comment on situations where matrix diagonalization is preferable in section 5.

Scaling and squaring [41] is based on the identity

Equation (11)

where the right-hand side matrix exponential now involves the matrix B = A/s, whose norm is reduced compared to $\left|\left|A\right|\right|$. This generally allows for a lower truncation order m of the exponential series (see footnote 2),

Equation (12)

Typically, m is smaller than the one required for the original eA . Our goal is to approximate the state vector ${e}^{A}| {\rm{\Psi }}\rangle \approx {\left({e}_{m}^{B}\right)}^{s}| {\rm{\Psi }}\rangle $ where

Equation (13)

According to the last expression, approximation of eA ∣Ψ〉 can thus proceed by repeated application of B = A/s to a vector, without ever invoking matrix-matrix multiplication. This reduces the runtime scaling from O(d3) to O(d2).
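As a minimal sketch of equations (11)–(13), the following routine applies the truncated exponential ${\left({e}_{m}^{A/s}\right)}^{s}$ to a vector using only matrix-vector products; the truncation order m and scaling factor s are assumed to be supplied (their selection is discussed next).

```python
import numpy as np

def expm_taylor_apply(A, psi, m, s):
    """Approximate e^A |psi> as (e_m^{A/s})^s |psi> [equations (11)-(13)]
    using only matrix-vector products; m and s are assumed given."""
    B = A / s
    result = np.asarray(psi, dtype=complex)
    for _ in range(s):                 # s squaring steps
        term = result
        acc = result.copy()
        for q in range(1, m + 1):      # truncated Taylor series of order m
            term = B @ term / q
            acc = acc + term
        result = acc
    return result
```

Each outer iteration performs m matrix-vector multiplications, so the full evaluation costs ms such products, in line with equation (16).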

To proceed with the evaluation of eA ∣Ψ〉, the truncation order m and scaling factor s are determined in such a way that the error remains below the error tolerance τ,

Equation (14)

Here, 〚•〛 denotes the floating point representation of •. Since this error εA,∣Ψ〉 is difficult and costly to evaluate, we instead derive an upper bound EA (m, s) [see appendix C] and resort to the inequality

Equation (15)

For a given A, this inequality generally has multiple solutions for m and s. As shown in appendix C, the evaluation requires ms matrix-vector multiplications, which are observed to dominate the runtime. Since systematic minimization of ms requires significant runtime itself, we instead: (1) select solutions to equation (15) with the smallest s which is denoted by ${\mathbb{s}}$, and (2) among these pairs, choose the one with smallest m (which we denote by ${\mathbb{m}}$).

Thus, calculating eA ∣Ψ〉 requires

Equation (16)

matrix-vector multiplications. As shown in appendix D for two concrete examples, our method for evaluating eA ∣Ψ〉 can achieve better runtime performance and accuracy than Scipy's expm_multiply function [42].
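A sketch of the two-stage selection of (m, s) described above might look as follows; it assumes a helper error_bound(A, m, s) implementing the upper bound EA(m, s) of appendix C (the helper name and search limits are ours).

```python
def choose_m_s(A, error_bound, tol, m_max=60, s_max=1024):
    """Return the pair (m, s) with the smallest scaling factor s for which some
    truncation order m <= m_max satisfies E_A(m, s) <= tol [equation (15)], and
    among those the smallest m.  `error_bound` is a hypothetical helper that
    evaluates the bound derived in appendix C."""
    for s in range(1, s_max + 1):
        for m in range(1, m_max + 1):
            if error_bound(A, m, s) <= tol:
                return m, s
    raise RuntimeError("no (m, s) within the search range satisfies the tolerance")
```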

Having addressed the challenges associated with numerically evaluating eA ∣Ψ〉, we now turn to the calculation of $\tfrac{\partial {U}_{n}}{\partial {a}_{n}}| {\rm{\Psi }}\rangle $, which is crucial for computing cost function gradients as in equations (7) and (9). We base this evaluation on the approximation

Equation (17)

where Bn = − iHn Δt/s. We proceed by using the auxiliary matrix method based on the identity [43, 44]

Equation (18)

Here, ${{ \mathcal A }}_{n}$ is defined as

Equation (19)

which is a 2 × 2 matrix of operators each acting on our Hilbert space with dimension d. Once a basis is chosen, ${{ \mathcal A }}_{n}$ can be represented as a 2d × 2d matrix of complex numbers. Numerical evaluation of the left hand side of the identity (18) gives a 2d dimensional vector. The first d entries of the vector correspond to $\partial {\left({e}_{m}^{{B}_{n}}\right)}^{s}/\partial {a}_{n}| {\rm{\Psi }}\rangle $, which is an approximation of $\tfrac{\partial {U}_{n}}{\partial {a}_{n}}| {\rm{\Psi }}\rangle $.
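The auxiliary-matrix evaluation can be sketched as follows: embed A = −iHnΔt and its derivative with respect to an in a 2d × 2d block matrix, propagate the padded vector (0, Ψ)ᵀ with the same matrix-free routine as above, and keep the first d entries. The block layout below is the standard Van Loan construction and is assumed to match the definition of ${{ \mathcal A }}_{n}$ in equation (19).

```python
import numpy as np

def propagator_derivative_apply(H, h_c, dt, psi, m, s):
    """Sketch of the auxiliary-matrix method [equations (18)-(19)].
    With A = -i*H*dt and dA/da = -i*h_c*dt (since H_n = H_s + a_n*h_c),
    the first d entries of exp([[A, dA], [0, A]]) applied to (0, psi)
    give (dU/da)|psi>; the last d entries give U|psi> as a by-product."""
    d = psi.shape[0]
    A = -1j * H * dt
    dA = -1j * h_c * dt
    aux = np.block([[A, dA],
                    [np.zeros((d, d), dtype=complex), A]])
    padded = np.concatenate([np.zeros(d, dtype=complex), psi])
    out = expm_taylor_apply(aux, padded, m, s)   # helper sketched above
    return out[:d]
```

Since the 2d × 2d block matrix can have a larger norm than A itself, the error bound would in practice be re-evaluated for the enlarged matrix when choosing m and s.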

3.3. Efficient evaluation of hard-coded gradients

The analytical gradient expressions, such as ${\rm{\nabla }}{C}_{1}^{{st}}$ in equation (7), are generally complicated and involve a large number of contributing quantities. Depending on the grouping and ordering of operations, these quantities may need to be stored simultaneously or computed repeatedly, with critical implications for runtime and memory usage. The scheme originally proposed by Khaneja et al [12] strikes a favorable compromise that avoids the proliferation of stored intermediate state vectors, as illustrated in figure 1. More specifically, the final quantum state ∣ψN 〉 is obtained by forward propagating ∣ψ0〉. Then, backward propagation of ∣ψN 〉 and ∣ϕT 〉 is carried out to calculate the matrix element $\langle {\phi }_{n}| \tfrac{\partial {U}_{n}}{\partial {a}_{n}}| {\psi }_{n-1}\rangle $ needed for the n-th component of the gradient, thus building up the gradient in descending order. For both propagation directions, only the current intermediate states are stored, leading to a memory cost of Θ(1) (see footnote 3).
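A sketch of this forward-backward scheme for the state-transfer gradient, reusing the two hypothetical helpers above, could look as follows; the −2 Re[⋯] prefactor is the standard gradient of the infidelity C1^st = 1 − |⟨φT|ψN⟩|² and is assumed here to match equation (7).

```python
import numpy as np

def state_transfer_gradient(H_s, h_c, amps, dt, psi0, phi_T, m, s):
    """Forward-backward evaluation of the gradient of C1^st (figure 1), storing
    only the current states.  Uses the hypothetical helpers expm_taylor_apply
    and propagator_derivative_apply sketched above."""
    N = len(amps)
    # forward pass: |psi_0> -> |psi_N>
    psi = np.asarray(psi0, dtype=complex)
    for n in range(N):
        psi = expm_taylor_apply(-1j * (H_s + amps[n] * h_c) * dt, psi, m, s)
    overlap = np.vdot(phi_T, psi)                        # <phi_T|psi_N>

    # backward pass: propagate |psi_N> and |phi_T> with U_n^dagger and
    # accumulate the gradient components in descending order of n
    grad = np.zeros(N)
    phi = np.asarray(phi_T, dtype=complex)
    for n in reversed(range(N)):
        H_n = H_s + amps[n] * h_c
        psi = expm_taylor_apply(1j * H_n * dt, psi, m, s)     # |psi_{n-1}>
        dU_psi = propagator_derivative_apply(H_n, h_c, dt, psi, m, s)
        grad[n] = -2.0 * np.real(np.conj(overlap) * np.vdot(phi, dU_psi))
        phi = expm_taylor_apply(1j * H_n * dt, phi, m, s)     # |phi_{n-1}>
    return grad
```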

4. Scaling analysis and benchmarking

We consider the following three methods to compute gradients within GRAPE: evaluation of hard-coded gradients, full automatic differentiation [32], and semi-automatic differentiation [35]. To determine the method most appropriate for a given problem, we inspect the memory usage and runtime scaling of these three approaches. Following this analysis, we present benchmark studies of specific state-transfer and gate-operation problems to illustrate the scaling behavior.

4.1. Scaling analysis of runtime and memory usage

Runtime and memory usage can pose significant challenges when performing optimization tasks for quantum systems with large Hilbert space dimension (d) or for time evolution requiring a significant number of time steps (N). In this subsection, we analyze the scaling of memory usage and runtime with respect to both N and d, focusing on the cost contribution C1 for state-transfer and unitary-gate optimization. Our findings can be generalized to other cost contributions beyond C1, as elaborated in appendix A.

4.1.1. Evaluation of hard-coded gradients (HG) using forward-backward propagation

We start by considering state-transfer problems. The memory usage for evaluating ${\rm{\nabla }}{C}_{1}^{{st}}$ is mainly determined by the required storage of quantum states and the Hamiltonian. In forward-backward propagation, the number of states and instances of Hamiltonian matrices stored at any given time is of the order of unity. Each individual state vector of dimension d consumes memory ∼Θ(d). Memory usage for the Hamiltonian depends on the employed storage scheme. In the 'worst' case of dense matrix storage, the memory required for the Hamiltonian is Θ(d2). For sparse matrix storage, memory usage generally depends on sparsity and is determined by the number of stored Hamiltonian matrix elements (here denoted as κ). For example, for a diagonal or tridiagonal matrix, memory usage is κ ∼ Θ(d); for a linear chain of qubits with nearest-neighbor σz coupling, it is ${\rm{\Theta }}(d\,{\mathrm{log}}_{2}d)$. Combining the scaling relations above gives an overall asymptotic memory usage of Θ(d + κ).
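The quoted Θ(d log2 d) sparsity for a qubit chain is easy to check numerically. The sketch below builds an illustrative sparse chain Hamiltonian (a σx drive on every qubit plus nearest-neighbor σz σz coupling; the precise drive terms are not essential for the count) and prints the number of stored nonzero elements.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix

sz = csr_matrix(np.diag([1.0, -1.0]))
sx = csr_matrix(np.array([[0.0, 1.0], [1.0, 0.0]]))

def chain_hamiltonian(num_qubits, g=0.1, a_x=0.1):
    """Illustrative qubit-chain Hamiltonian with local sigma_x drives and
    nearest-neighbor sigma_z sigma_z coupling, stored as a sparse matrix."""
    def embed(op, site):
        factors = [identity(2, format="csr")] * num_qubits
        factors[site] = op
        out = factors[0]
        for f in factors[1:]:
            out = kron(out, f, format="csr")
        return out
    H = sum(a_x * embed(sx, i) for i in range(num_qubits))
    H = H + sum(g * embed(sz, i) @ embed(sz, i + 1) for i in range(num_qubits - 1))
    return H.tocsr()

for nq in (4, 6, 8, 10):
    H, d = chain_hamiltonian(nq), 2 ** nq
    print(nq, H.nnz, int(d * np.log2(d)))   # kappa tracks d*log2(d)
```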

We observe that runtime of the scheme is dominated by the required Θ(N) evaluations of propagator-state products and their derivative counterparts. Numerically approximating a single propagator-state product requires μ matrix-vector multiplications [see equation (16)], where runtime of one instance of matrix-vector multiplication scales as Θ(κ). The implicit dependence of μ = μ(d) on d is specific to each particular system and can be difficult to obtain analytically. Numerically, we find for a driven transmon coupled to a cavity that $\mu \sim {\rm{\Theta }}(\sqrt{d});$ for a linear chain of coupled qubits, the scaling is ${\rm{\Theta }}({\mathrm{log}}_{2}d)$, see appendix E for more details. Once combined, the above individual scaling relations lead to an overall runtime asymptotic behavior Θ(N μ κ). For example, in the case of the linear chain of coupled qubits, runtime scales as ${\rm{\Theta }}({Nd}\,{\left({\mathrm{log}}_{2}d\right)}^{2})$.

We next turn to gate-optimization problems, where the scaling behavior of memory usage and runtime is as follows. We calculate ${\rm{\nabla }}{C}_{1}^{g}$ by sequentially applying forward-backward propagation to each of the d basis states. The memory usage is Θ(d + κ), the same as for ${\rm{\nabla }}{C}_{1}^{{st}}$. The runtime scaling is Θ(N μ κ d), which is d times larger than the runtime scaling for ${\rm{\nabla }}{C}_{1}^{{st}}$. (Another way of calculating ${\rm{\nabla }}{C}_{1}^{g}$ consists of parallelized forward-backward propagation of all basis states. The concrete scaling behavior then depends on the parallelization scheme; that discussion is beyond the scope of this paper.)

4.1.2. Full automatic differentiation (full-AD)

Reverse-mode automatic differentiation extracts gradients by a forward pass and a reverse pass through the computational graph. In the forward pass the cost function C is computed, and the reverse pass accumulates the gradient via the chain rule.

The memory usage for evaluating ${\rm{\nabla }}{C}_{1}^{{st}}$ is determined by the requirement that all intermediate numerical results must be stored as part of the forward pass and act as input to the reverse pass. Specifically, the forward pass evolves the initial state vector to the final state vector in N time steps. The total number of vectors that have to be stored along the way is Θ(N μ), since computation of a single short-time propagator-state product via scaling and squaring necessitates μ iterations. In addition, considering the memory usage for the Hamiltonian, the full scaling is Θ(N μ d + κ).

The runtime for evaluating ${\rm{\nabla }}{C}_{1}^{{st}}$ combines forward and reverse pass. It is dominated by the evaluation of Θ(N) propagator-state products, each of which has the runtime scaling Θ(μ κ), leading to an overall runtime scaling of Θ(N μ κ).

For gate-optimization problems, ${C}_{1}^{g}$ is evaluated by evolving not only a single initial state but a whole set of d basis vectors. Evaluating the Nd propagator-state products requires Θ(N μ d2 + κ) memory, where the number of stored Hamiltonian matrix elements κ accounts for storage of the Hamiltonian. Since κ has the worst-case scaling Θ(d2) in the case of a dense Hamiltonian matrix, the overall scaling is Θ(N μ d2). Assuming that the evolution of individual basis vectors is not parallelized, the runtime also grows by a factor of d, so that the runtime scaling for ${\rm{\nabla }}{C}_{1}^{g}$ is Θ(N μ κ d).

4.1.3. Semi-automatic differentiation (semi-AD)

For gradients of cost contributions that are tedious to derive analytically, semi-automatic differentiation [35] offers an alternative approach in which some derivatives are hard-coded and others treated by automatic differentiation. Compared to full-AD, semi-AD has the benefit that the memory usage scaling is independent of μ while maintaining the same runtime scaling.

4.1.4. Comparison

The results of this scaling analysis are summarized in table 1. For full-AD, memory usage grows linearly with respect to μ and the number of time steps N. By contrast, memory usage for HG does not increase with μ and N, but remains constant. This advantage is not bought at the cost of worse runtime scaling. Therefore, HG is preferable over full-AD in the optimal control of large quantum systems where memory usage would otherwise be prohibitively large. Semi-AD presents an interesting compromise, viable whenever the memory bottleneck is not too severe. We note that the runtime scaling Θ(N μ κ d) of HG via scaling and squaring can in principle exceed the scaling O(Nd3) for matrix exponentiation based on matrix diagonalization (see appendix F). In this case, the latter is the better option.

We illustrate the discussed scaling by benchmark studies for concrete optimization problems in the following section.

4.2. Benchmark studies

Our first benchmark example studies the following state-transfer problem. Consider a setup in which a driven flux-tunable transmon is capacitively coupled to a superconducting cavity, and perform a state transfer from the cavity vacuum state to the 20-photon Fock state. The Hamiltonian in the frame co-rotating with the transmon drive is

Equation (20)

Here, the drive is both longitudinal and transverse; c and b are the annihilation operators for cavity photons and transmon excitations, respectively. α, g and Δ denote the anharmonicity, the interaction strength and the difference between the transmon and cavity frequencies.

Optimization proceeds by minimizing the infidelity ${C}_{1}^{{st}}$ and limiting the occupation of higher transmon levels by including the cost contribution ${C}_{2}^{{st}}$ [equation (A.1)]. The latter has the additional benefit of enabling the truncation to only a few transmon levels. We measure runtime and peak memory usage of full-AD (see footnote 4), semi-AD and HG for a single optimization iteration. The runtime is measured with the package Temci [45] and the peak memory usage is monitored with the memory_profiler package [46]. The optimization is performed on an AMD Ryzen Threadripper 3970X 32-core CPU with 3.7 GHz frequency.

Benchmark results for the state-transfer optimization are shown in figure 2. We consider a single time step (N = 1) and measure the runtime versus d. The results in figure 2(a) confirm that full-AD, semi-AD and HG have the same scaling of runtime per time step with respect to d (table 1). The observed runtime scaling ${\rm{\Theta }}({d}^{\tfrac{5}{2}})$ is consistent with κ ∼ Θ(d2) due to dense matrix storage for H1, and $\mu \sim {\rm{\Theta }}({d}^{\tfrac{1}{2}})$ (see appendix E). The scaling of the required memory resources is illustrated in figure 2(b): memory usage scales with dimension d as Θ(d2) for full-AD, semi-AD and HG, consistent with κ ∼ Θ(d2) being the dominant contribution. As anticipated, the scaling of memory with N differs between full-AD, semi-AD and HG [figure 2(c)]. For full-AD and semi-AD, the scaling is linear with N, while it is independent of N for HG. The slope of full-AD is much larger than that of semi-AD, since the former has an extra dependence on μ (table 1), which is a constant ≈100 in this benchmark. The results confirm the favorable memory efficiency of HG and semi-AD compared to full-AD in state-transfer problems with a large number of time steps.

Figure 1.

Figure 1. The forward-backward propagation scheme to compute the components of the gradient in equation (7). (a) Forward propagate the initial state by UN ⋯ U1 to obtain the realized state ∣ψN 〉 and calculate the overlap 〈ψN ∣ϕT 〉, where ∣ϕT 〉 is the target state. (b) Backward propagate ∣ψN 〉 and ∣ϕT 〉 by the propagators ${U}_{N}^{\dagger },{U}_{N-1}^{\dagger },\cdots ,{U}_{1}^{\dagger }$ iteratively, and compute $\langle {\phi }_{n}| \tfrac{\partial {U}_{n}}{\partial {a}_{n}}| {\psi }_{n-1}\rangle $ at each step.

Figure 2.

Figure 2. Benchmark results for Fock state generation in a system consisting of a cavity coupled to a transmon. (a) Runtime per iteration and (b) memory usage versus Hilbert space dimension d. The dimension is varied by increasing cavity dimension while fixing the transmon dimension to 6. (c) Memory usage as a function of the number of time steps for d = 240. All plots are in log-log scale. Dashed lines are power-law fits. (Parameters: Δ/2π = 3 GHz, g/2π = 0.1 GHz, α/2π = − 0.225 GHz and constant controls az /2π = ax /2π = 0.1 GHz.).


We next turn to an example for optimizing a unitary gate. This benchmark is performed for three driven transmons with nearest neighbor coupling. The target gate consists of simultaneous Hadamard operations on each of the three transmons. For concreteness, we consider the case of transmons with the same frequency, driven resonantly. The full Hamiltonian in the rotating frame of the drive is

Equation (21)

The goal of the optimization is to minimize the infidelity ${C}_{2}^{g}$. The setup for this benchmark is the same as in the previous example.

Figure 3 records the benchmarks based on a single time step (N = 1). The observed runtime scaling with dimension d is ${\rm{\Theta }}({d}^{\tfrac{11}{3}})$ for full-AD, semi-AD and HG, see figure 3(a). This power law arises from Θ(μ κ d) for κ ∼ Θ(d2) and $\mu \sim {\rm{\Theta }}({d}^{\tfrac{2}{3}})$ [see appendix E]. The memory usage scaling with d is illustrated in figure 3(b), confirming that full-AD has an unfavorable extra $\mu \sim {\rm{\Theta }}({d}^{\tfrac{2}{3}})$ dependence compared to HG and semi-AD (table 1). The memory usage scaling with N is as expected [figure 3(c)]: for semi-AD, the scaling is linear with N; for HG, it is independent of N. (The memory usage of full-AD in this case exceeds our resources, and thus is not shown.) Throughout our benchmark, this linear dependence on N causes semi-AD to consume about 10 times more memory than HG. This underlines that HG is preferable whenever memory constraints become a limitation in optimization problems with large Hilbert space dimension and a large number of time steps. We emphasize that this improvement is achieved without worsening the runtime scaling.

Figure 3.

Figure 3. Benchmark results for unitary gate optimization in a system consisting of three coupled transmons. (a) Runtime per iteration and (b) memory usage versus Hilbert space dimension d. The total Hilbert space dimension d is increased by enlarging the cutoff of each transmon from 10 to 14 simultaneously. (c) Memory usage as a function of the number of time steps for d = 123 = 1728. Full-AD (not shown) would require more memory than our resources (estimated: 1TB). All plots are in log-log scale. Dashed lines are power-law fits. (Parameters: g/2π = 0.1 GHz, α/2π = − 0.225 GHz and a(ν)/2π = 0.1 GHz.).


5. Choosing among different optimization strategies

There are several considerations which determine whether full-AD, HG or semi-AD is the optimal strategy for a given optimization problem, see the decision tree in figure 4. Whenever the Hilbert space dimension d and the number of time steps N are sufficiently small, memory usage may not be a bottleneck. In this case, full-AD may be preferable, since it eliminates the need for hard-coded gradients. If the memory cost of full-AD is not affordable but analytical gradients are tedious to compute, semi-AD may be a viable compromise, though still less memory efficient than HG. In all other cases, HG is the method of choice, since it can handle optimization with larger N and d compared to full-AD and semi-AD.

Figure 4.

Figure 4. Decision trees for selecting the optimal numerical strategy. (a) Choosing among three numerical methods for evaluating gradients: full automatic differentiation, semi-automatic differentiation, and hard-coded analytical gradients. (Here, we assume μ ∼ Θ(d); for different scaling see table 1.) (b) Determining the appropriate strategy for evaluating propagator-state products.


The choice of implementing HG via scaling and squaring or matrix diagonalization should be informed by whether the Hamiltonian is sparse and κ scales better than Θ(d2). In the latter case, scaling and squaring is preferable and leads to an overall memory usage ∼Θ(d + κ), which is better than the memory scaling O(d2) of matrix diagonalization. By contrast, for Hamiltonians with κ ∼ Θ(d2), these two methods are equivalent in memory usage scaling and we are free to select the one with better runtime for a specific optimization task. While the runtime for optimization using matrix diagonalization scales as O(d3) [appendix F], the runtime for optimization based on scaling and squaring differs between state-transfer and gate optimizations: (1) for state transfer, the runtime scales as Θ(μ d2); hence, whenever μ scales better than Θ(d), scaling and squaring wins over matrix diagonalization. (2) for gate optimization, the runtime scales as Θ(μ d3); hence matrix diagonalization is preferable.
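The considerations of this section can be condensed into a small helper. The sketch below is only a heuristic proxy: it compares concrete values of κ, μ and d where the text compares asymptotic scalings.

```python
def choose_propagator_strategy(d, kappa, mu, gate_optimization=False):
    """Heuristic sketch of figure 4(b): prefer scaling and squaring when the
    Hamiltonian is sparse (kappa well below d^2); for dense Hamiltonians use
    matrix diagonalization for gate optimization, and scaling and squaring for
    state transfer only if mu grows slower than d."""
    if kappa < d ** 2:                      # sparse Hamiltonian
        return "scaling and squaring"
    if gate_optimization:
        return "matrix diagonalization"
    return "scaling and squaring" if mu < d else "matrix diagonalization"
```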

6. Conclusions

Motivated by the memory bottleneck problem in quantum optimal control of large systems, we have revisited and compared the memory usage and runtime scaling of multiple flavors of GRAPE: hard-coded gradients (HG), full automatic differentiation (full-AD) and semi-automatic differentiation (semi-AD). These three methods are equivalent in runtime scaling but differ characteristically in scaling of memory usage. The ranking among them is as follows, going from the most to least efficient: HG, semi-AD and full-AD. Thus HG is the preferred option when facing a memory bottleneck problem.

We have illustrated the scaling of HG and full-AD by benchmark studies for concrete state-transfer and gate-operation optimizations, respectively. As part of this, we have presented improvements in the implementation of scaling and squaring, used to evaluate propagator-state products numerically. In the examples studied, we observed a speedup compared to Scipy's expm_multiply function. The decision tree shown in figure 4 summarizes the choice of appropriate numerical strategy for a given optimization problem.

Acknowledgments

We thank Thomas Propson for fruitful discussions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. We are grateful for the following open-source packages used in our work: matplotlib [47], numpy [48], scipy [49], sympy [50], scqubits [51], and qoc [31].

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).

Appendix A.: Analytical gradients

In the main text, we exclusively focused on minimizing state-transfer or gate infidelity. In this appendix, we provide the analytical gradients for other cost contributions of interest.

The cost contribution

Equation (A.1)

penalizes the integrated expectation value of the Hermitian operator Ω. The gradient of this cost contribution can be written as

Equation (A.2)

with

Equation (A.3)

The vector ∣Φn 〉 obeys the recursion relation

Equation (A.4)

Due to this relation, the expression in equation (A.2) can be evaluated with a forward-backward scheme analogous to the one discussed in section 3.3.

The cost contribution ${C}_{3}^{{st}}=1-\tfrac{1}{N}{\sum }_{n^{\prime} =1}^{N}{\left|\left\langle {\phi }_{T}| {\psi }_{n^{\prime} }\right\rangle \right|}^{2}$ sums up infidelities from all time steps (as opposed to recording the final state infidelity only, as in ${C}_{1}^{{st}}$). This steers the system towards its target state more rapidly, thus helping to reduce the duration of state-transfer. Similar to equation (A.2), the gradient of ${C}_{3}^{{st}}$ is:

Equation (A.5)

with

Equation (A.6)

The vector obeys the recursive relation

Equation (A.7)

This allows us to use a forward-backward scheme to evaluate the gradient expression in equation (A.5).

For gate operations, the cost contribution analogous to ${C}_{3}^{{st}}$ is given by ${C}_{3}^{g}=1-{\sum }_{n^{\prime} =1}^{N}{\left|\mathrm{tr}\left({U}_{T}^{\dagger }{U}_{n^{\prime} ,1}\right)\right|}^{2}/{{Nd}}^{2}$, where ${U}_{n^{\prime} ,1}={U}_{n^{\prime} }\cdots {U}_{1}$ describes the evolution from time step 1 to $n^{\prime} $. We obtain the gradient

Equation (A.8)

with

Equation (A.9)

Here, $\{| {\psi }_{0}^{h}\rangle \}$ forms an arbitrary orthonormal basis and h = 1, 2,...,d enumerates the basis states. The vector $| {{\rm{\Phi }}}_{T,n}^{h}\rangle $ obeys the recursion relation

Equation (A.10)

again enabling the evaluation of the gradient expression with a forward-backward propagation scheme.

Appendix B.: Scaling analysis

For the cost contributions discussed in appendix A, we now analyze the memory usage and runtime scaling of full-AD, semi-AD and HG.

B.1. Scaling analysis: hard-coded gradient

Thanks to the similarities in the expressions and the existence of recursion relations, forward-backward schemes akin to figure 1 can also be applied to ${\rm{\nabla }}{C}_{2}^{{st}}$, ${\rm{\nabla }}{C}_{3}^{{st}}$ and ${\rm{\nabla }}{C}_{3}^{g}$. As a result, we find that the scaling analysis in section 4.1 leads to the same memory usage and runtime requirements as for ${\rm{\nabla }}{C}_{1}^{{st}}$ and ${\rm{\nabla }}{C}_{2}^{g}$.

B.2. Scaling analysis: full-automatic differentiation

Despite the specific differences among the expressions of cost contributions ${C}_{2}^{{st}}$, ${C}_{3}^{{st}}$ and ${C}_{3}^{g}$, we find that the scaling of runtime and memory usage is again set by the need to store intermediate vectors and repeatedly perform matrix-vector multiplications. Consequently, the scaling remains the same as for the cost contributions discussed in section 4.1.

In summary, the scalings for the state-transfer and gate-operation cost contributions match those shown in table 1. For the scaling of semi-AD, see [35].

Table 1. Memory usage and runtime scaling with respect to Hilbert space dimension d, number of time steps N, the number of stored Hamiltonian matrix elements κ = κ(d) and the number of matrix-vector multiplications required for evaluating propagator-state products μ = μ(d). (The scaling of μ and κ depends on the specific optimization task.) The table contrasts scaling for state transfer and gate operation for hard-coded gradients (HG), full automatic differentiation (full-AD) and semi-automatic differentiation (semi-AD).

Method | State transfer: Runtime | State transfer: Memory usage | Gate operation: Runtime | Gate operation: Memory usage
Hard-coded gradient | Θ(N μ κ) | Θ(d + κ) | Θ(N μ κ d) | Θ(d + κ)
Full automatic differentiation | Θ(N μ κ) | Θ(N μ d + κ) | Θ(N μ κ d) | Θ(N μ d2)
Semi-automatic differentiation | Θ(N μ κ) | Θ(Nd + κ) | Θ(N μ κ d) | Θ(Nd2)

Appendix C.: Determining the upper bound EA (m, s)

The approximation of eA ∣Ψ〉 in section 3.2 required the determination of an upper bound EA (m, s) for the error εA,∣Ψ〉(m, s) [equation (14)]. In this appendix, we show how to obtain such a bound.

Considering the expression for the error [equation (14)], we note that the denominator is simply 1. The error εA,∣Ψ〉 is bounded as follows:

Equation (C.1)

The error εt arises from the truncation of the Taylor series, and the rounding error εr is due to finite-precision arithmetic. We proceed to derive upper bounds for these two errors separately.

C.1. Upper bound for the truncation error εt

We rewrite the truncation error as

Equation (C.2)

where ${R}_{m}(B)={\sum }_{q\,=\,m\,+\,1}^{\infty }{B}^{q}/q!$ denotes the matrix-valued error of the truncated exponential series. Employing the triangle inequality and submultiplicativity [52], we find the bound

Equation (C.3)

Here the second line utilizes the fact that the state is normalized and eB is unitary.

This upper bound must be evaluated numerically to determine m and s. However, computing the matrix 2-norm is inconvenient since it requires the use of an eigensolver. To mitigate this, we switch to the matrix 1-norm by exploiting the inequality (see footnote 5)

Equation (C.4)

Monotonicity of Rm leads us to

Equation (C.5)

C.2. Upper bound for the rounding error εr

The rounding error εr originates from the finite precision of the floating-point representation of real numbers as well as the finite accuracy of basic arithmetic operations. The floating-point representation 〚ξ〛 for the real number ξ can be written as 〚ξ〛 = ξ(1 + δ) where δ = δ(ξ) is the relative deviation [52]. Its magnitude is always smaller than machine precision, i.e. ∣δ∣ < u. Similarly, the complex number z obeys

Equation (C.6)

with ∣δ∣ ≤ u. Arithmetic operations such as addition and multiplication with two complex numbers z1, z2 incur a relative deviation δ = δ(z1, z2) such that

Equation (C.7)

where $u^{\prime} := 2\sqrt{2}u/(1-2u)$ [52]. The relative deviations shown in equations (C.6) and (C.7) generally depend on both numbers involved as well as on the operation in question. In the following, we will distinguish different δ by the subscript ν.

Using the equations (C.6), (C.7), we derive an upper bound for the rounding error in the evaluation of the dot product which is the basic arithmetic operation in evaluating ${e}_{m}^{B}| {\rm{\Psi }}\rangle $ according to equation (13).

C.2.1. Upper bound for the rounding error associated with evaluating the dot product

The rounding error incurred when evaluating the dot product is given by ∣Σd − 〚Σd 〛∣ with Σd = w T z (${\boldsymbol{w}},{\boldsymbol{z}}\in {{\mathbb{C}}}^{d}$). Here, w T is a row of B = −iHΔt/s. We illustrate how to bound the rounding error for d = 1, 2, and give the general result subsequently. Based on equations (C.6) and (C.7), we have

Equation (C.8)

In the last step, we have used $\left|{\prod }_{\nu =1}^{n}(1+{\delta }_{\nu })-1\right|\lt \tfrac{{nu}^{\prime} }{1-{nu}^{\prime} }$ [52], and defined ${\gamma }_{n}:= \tfrac{{nu}^{\prime} }{1-{nu}^{\prime} }$. For d = 2, the upper bound of the error is

Equation (C.9)

The general upper bound for arbitrary d is given by:

Equation (C.10)

Since w T is proportional to a row of the Hamiltonian matrix, sparsity of H implies sparsity of w T. Vanishing entries in w T are special because they do not incur rounding errors. Thus, one can obtain a stronger upper bound

Equation (C.11)

where σ is the number of non-zero elements in w T.

C.2.2. Upper bound for the rounding error of ${e}_{m}^{B}| {\rm{\Psi }}\rangle $

The vector ${e}_{m}^{B}| {\rm{\Psi }}\rangle $ is evaluated recursively via

Equation (C.12)

where j = 1, 2, ⋯ ,m and b0 is a vector representation of ∣Ψ〉 under a choice of orthonormal basis. Before evaluating the upper bound for the error ${\left|\left|[\kern-2pt[ {e}_{m}^{B}{b}_{0}]\kern-2pt] -{e}_{m}^{B}{b}_{0}\right|\right|}_{2}$, we first derive an upper bound for the error of a vector component $\left|[\kern-2pt[ {e}_{m}^{B}{b}_{0}{]\kern-2pt] }^{(i)}-{\left({e}_{m}^{B}{b}_{0}\right)}^{(i)}\right|$. We illustrate how to bound this error for m = 0, 1, and then give the general result.

For the case m = 0, we have the bound

Equation (C.13)

We then obtain the upper bound of the error for the example of m = 1:

Equation (C.14)

where σi is the number of non-zero elements in the ith row of the matrix B. Here, the third line arises from the division error between a complex number and a real number, i.e. 〚z/ξ〛 = (z/ξ)(1 + δ), ∣δ∣ < u [52]. The general upper bound for arbitrary m is given by:

Equation (C.15)

where ∣B∣, ∣b0∣ denote the matrix and the vector with elements ∣Bil ∣, $| {b}_{0}^{i}| $, respectively. Defining

Equation (C.16)

and using equation (C.15), we obtain

Equation (C.17)

with $\beta := {\sum }_{k=0}^{m}{\gamma }_{k(\sigma ^{\prime} +2)+m\,+\,2}\tfrac{| | B| {| }_{1}^{k}}{k!}$. Here, β∣∣b0∣∣2 is the desired upper bound for the rounding error of ${e}_{m}^{B}| {\rm{\Psi }}\rangle $.

C.2.3. Upper bound for the error εr

The vector ${\left({e}_{m}^{B}\right)}^{s}{b}_{0}$ is evaluated by the recursive relation

Equation (C.18)

where c0 = b0. We illustrate how to bound the error ${\varepsilon }_{r}={\left|\left|[\kern-2pt[ {\left({e}_{m}^{B}\right)}^{s}{b}_{0}]\kern-2pt] -{\left({e}_{m}^{B}\right)}^{s}{b}_{0}\right|\right|}_{2}$ for s = 1, 2, and give the general result subsequently.

For s = 1, we have

Equation (C.19)

For s = 2, we obtain

Equation (C.20)

where the last step is enabled by equation (C.19). Finally, we bound the matrix norm in the second term by

Equation (C.21)

Since equation (C.17) holds for any complex vector b0, the error in equation (C.20) is further bounded by:

Equation (C.22)

Generalizing this strategy further, we obtain the expression for the upper bound of εr :

Equation (C.23)

Combining the upper bounds for εr and εt , we return to equation (C.1) and obtain

where ${\left|\left|B\right|\right|}_{1}={\left|\left|A\right|\right|}_{1}/s$, $\alpha =\alpha ({\left|\left|A\right|\right|}_{1},m,s)$ and $\beta =\beta (\sigma ^{\prime} ,{\left|\left|A\right|\right|}_{1},m,s)$.

Appendix D.: Numerical comparison between scaling and squaring implementations

Scaling and squaring is used to evaluate the propagator-state product e−iHΔt ∣Ψ〉. We compare relative error and runtime between our implementation of scaling and squaring and Scipy's implementation, expm_multiply [42]. The relative errors for our implementation and Scipy's implementation follow the general expression in equation (14), but differ in the specific choice of m and s. As a proxy for the exact propagator-state product, we evaluate the Taylor expansion of the matrix exponential with 400-digit precision using Sympy; the resulting product converges to 30 digits after the decimal point. The runtime is measured with the help of the Temci package [45].
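A stripped-down version of such an accuracy check, using a double-precision dense expm as the reference instead of the 400-digit Sympy expansion used here (and therefore only able to resolve errors well above machine precision), could look as follows.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

def relative_error_expm_multiply(H, psi, dt):
    """Relative error of Scipy's expm_multiply for exp(-i*H*dt)|psi>, measured
    against a dense-matrix reference (a simplified stand-in for the
    high-precision Sympy reference used in this appendix)."""
    A = -1j * np.asarray(H, dtype=complex) * dt
    approx = expm_multiply(csr_matrix(A), np.asarray(psi, dtype=complex))
    reference = expm(A) @ np.asarray(psi, dtype=complex)
    return np.linalg.norm(approx - reference) / np.linalg.norm(reference)
```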

As the basis for this comparison, we use the examples of Hamiltonians (20) and (21) and implement sparse storage for them. The relative errors are shown in figures D1(a) and (c) as a function of the matrix norm ${\left|\left|-{iH}{\rm{\Delta }}t\right|\right|}_{2}$, respectively. In both examples, the relative errors for our implementation are bounded by the error tolerance τ, while the relative errors for Scipy's implementation tend to exceed the tolerance for large norms. There are two reasons for this tendency. First, rounding errors are not considered in the derivation of the upper bound for the errors [42]; second, the truncation of ${e}_{m}^{B}$ [equation (12)] is merely based on a heuristic criterion.

Figure D1.

Figure D1. Comparison between scaling and squaring as implemented here and Scipy's expm_multiply, considering relative error and runtime. (a), (c) Relative error versus matrix norm. (b), (d) Ratios of runtime and μ (i.e. number of matrix-vector multiplications) between our and Scipy's implementations as functions of the matrix norm. The matrix norm ${\left|\left|-{iH}{\rm{\Delta }}t\right|\right|}_{2}$ is varied by increasing Δt ∈ [1, 10]. In all panels, colors encode different error tolerances τ. The results in the first and second rows are obtained for the cavity-transmon and three-coupled-transmon Hamiltonians, respectively. The system parameters of these two Hamiltonians are the same as the ones given in the captions of figures 2 and 3, respectively.


Even though the relative errors of Scipy's implementation are worse, it requires a smaller number of matrix-vector multiplications μ. This is illustrated in figures D1(b) and (d): the ratio of μ between our implementation and Scipy's implementation is always larger than 1. Although Scipy's implementation spends less time on matrix-vector multiplications, its total runtime is larger than that of our implementation, see figures D1(b) and (d). This is due to the larger runtime overhead of Scipy's implementation for determining (m, s).

In conclusion, our implementation shows better numerical stability and runtime than Scipy's implementation in our numerical experiments.

Appendix E.: The scaling of μ

The asymptotic behavior of runtime, discussed in the main text, is governed by the scaling of μ (16) with dimension d of the Hilbert space. The dependence μ(d) is specific to the matrix A to be exponentiated (which is proportional to the Hamiltonian). This prevents us from stating a general scaling relation. We thus turn to a discussion of several concrete examples.

We first consider the example of the transmon-cavity system, see equation (20), where we increase d via the cavity dimension dc while leaving the transmon cutoff fixed. The corresponding Hamiltonian H1, written in the bare product basis, then has a constant maximum number of non-zero elements in each row. (I.e., the parameter $\sigma ^{\prime} $ (C.16) is independent of d.) Numerically, we find in this case that μ scales linearly with ${\left|\left|A\right|\right|}_{1}$ ($\mu \sim {\rm{\Theta }}({\left|\left|A\right|\right|}_{1})$), and this linearity holds for a wide range of τ, see figure E1(a). In this example, two ingredients lead to the scaling ${\left|\left|A\right|\right|}_{1}\sim {\rm{\Theta }}({d}^{\tfrac{1}{2}})$: (1) d is a constant multiple of dc, and (2) the cavity ladder operators in H1, which have one-norm scaling ${\rm{\Theta }}({d}_{c}^{\tfrac{1}{2}})$. Thus, we obtain the overall scaling $\mu \sim {\rm{\Theta }}({d}^{\tfrac{1}{2}})$, see figure E1(e). In the second example, we consider the system of three coupled transmons, see equation (21). The Hamiltonian H2 is written in the harmonic oscillator basis and has a constant $\sigma ^{\prime} $, which leads to $\mu \sim {\rm{\Theta }}({\left|\left|A\right|\right|}_{1})$ [figure E1(b)]. The one-norm of the operator ${\left({b}^{\dagger }b\right)}^{2}$ in H2 scales quadratically with each transmon dimension. Since we increase d by enlarging the dimensions of all three transmons simultaneously, we obtain ${\left|\left|A\right|\right|}_{1}\sim {\rm{\Theta }}({d}^{\tfrac{2}{3}})$. This scaling results in $\mu \sim {\rm{\Theta }}({d}^{\tfrac{2}{3}})$, see figure E1(f).

Figure E1.

Figure E1. Scaling of μ, the number of matrix-vector multiplications. (a)-(d) The scaling of μ with ${\left|\left|A\right|\right|}_{1}$ for Hamiltonians Hj , j = 1, 2, 3, 4, when considering several error tolerances τ. (e)-(h) The scaling of μ with dimension d when considering τ = 10−8 for each Hj , respectively. In all panels, dots are sampled data and solid lines are fits. The parameter settings of H1 and H2 are the same as the ones given in the captions of figures 2 and 3. Parameters of H3 and H4 are as follows: for H3, ECi /2π = 2.5 GHz, EJi /2π = 8.9 GHz, ELi /2π = 0.5 GHz, g/2π = 0.1 GHz and ϕ = 0.33; for H4, ${a}_{x}^{(\nu )}/2\pi ={a}_{y}^{(\nu )}/2\pi =0.5\,\mathrm{GHz}$ and g/2π = 0.1 GHz.


For other Hamiltonian matrices where $\sigma ^{\prime} $ depends on d, the scaling $\mu \sim {\rm{\Theta }}({\left|\left|A\right|\right|}_{1})$ may still hold. In the following, we show two concrete examples: one where linearity holds, and another where it breaks down. We first consider a driven system of Nq qubits with nearest-neighbor σz coupling. In the rotating frame, the system is described by

Each qubit is controlled via drive fields ${a}_{x}^{(n)}$ and ${a}_{y}^{(n)}$. For this example, the numerical results show that the scaling $\mu \sim {\rm{\Theta }}({\left|\left|A\right|\right|}_{1})$ still holds, see figure E1(c). Since we increase d by enlarging Nq , we obtain ${\left|\left|A\right|\right|}_{1}\sim {\rm{\Theta }}({N}_{q})\sim {\rm{\Theta }}({\mathrm{log}}_{2}d)$. This scaling results in $\mu \sim {\rm{\Theta }}({\mathrm{log}}_{2}d)$, see figure E1(g). For the second example, we consider two capacitively coupled fluxonium qubits with Hamiltonian

Equation (E.1)

Here, ECi, EJi, ELi denote the charging, Josephson and inductive energies of the two fluxonium qubits, and g is the interaction strength. φ and n are the phase and charge number operators, defined in the usual way. We represent the Hamiltonian in the harmonic oscillator basis. Figure E1(d) shows that the scaling of μ with ${\left|\left|A\right|\right|}_{1}$ is superlinear for τ = 10−8. We thus settle for a numerical determination of the scaling relation between μ and d for τ = 10−8 by increasing the dimensional cutoff for both fluxonium qubits simultaneously. The observed scaling is μ ∼ Θ(d0.65), see figure E1(h).
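The exponents quoted in this appendix follow from power-law fits on a log-log scale; a minimal version of such a fit is sketched below (the sample numbers in the usage line are made up and not data from the paper).

```python
import numpy as np

def fit_power_law(d_values, mu_values):
    """Least-squares fit of mu ~ c * d^p on a log-log scale (the kind of fit
    shown as lines in figure E1); returns the exponent p and prefactor c."""
    p, log_c = np.polyfit(np.log(d_values), np.log(mu_values), 1)
    return p, np.exp(log_c)

# usage sketch with made-up numbers:
# p, c = fit_power_law([100, 200, 400, 800], [32, 50, 79, 124])
```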

Appendix F.: Alternative numerical implementation of HG scheme and the corresponding scaling analysis

The numerical implementation of the hard-coded gradient scheme requires evaluation of the short-time propagator and its derivative. Under certain conditions, it turns out to be advantageous to numerically evaluate the short-time propagator by employing matrix diagonalization. In this appendix, we first discuss the method to compute the propagator derivative and then analyze the scaling of HG.

F.1. Evaluation of propagator derivative

The operator −iHΔt is anti-Hermitian, and hence can be diagonalized and exponentiated according to

Equation (F.1)

Equation (F.2)

Here, S is unitary and D diagonal. As a result, the propagator derivative can be written as

Equation (F.3)

We apply the inverse unitary transformation to $\tfrac{{{de}}^{A}}{{da}}$ (F.3) and obtain

Equation (F.4)

Based on $\tfrac{{{dS}}^{\dagger }}{{da}}S+{S}^{\dagger }\tfrac{{dS}}{{da}}=0$, equation (F.4) can be rewritten as

Equation (F.5)

with ${E}_{{ij}}:= {e}^{{D}_{{jj}}}-{e}^{{D}_{{ii}}}$. Here, ∘ denotes the Hadamard product (element-wise multiplication). To obtain dD/da and dS/da, we take the derivative of equation (F.1) and repeat the same steps as for equations (F.4) and (F.5), leading to

Equation (F.6)

where $E{{\prime} }_{{ij}}:= {D}_{{jj}}-{D}_{{ii}}$. Since the diagonal elements of $E^{\prime} $ are zero, it follows that

Equation (F.7)

For off-diagonal elements of the operator in equation (F.6), we obtain

Equation (F.8)

where we define

Equation (F.9)

Inserting equations (F.7) and (F.8) into equation (F.5) gives

Equation (F.10)

where we use EG = 0.
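As a sketch of this route, the propagator and its derivative can be assembled from a single diagonalization. The code below uses the standard divided-difference form of the derivative of a matrix exponential, which we take to be equivalent to the E, E′ and G bookkeeping of equations (F.5)–(F.10); dH_da plays the role of the control operator hc.

```python
import numpy as np

def propagator_and_derivative_diag(H, dH_da, dt):
    """Short-time propagator U = e^{-i*H*dt} and its derivative dU/da via
    matrix diagonalization (appendix F); dH_da is the control operator h_c,
    so dA/da = -i*h_c*dt."""
    energies, S = np.linalg.eigh(H)              # H = S diag(energies) S^dagger
    Sdag = S.conj().T
    D = -1j * energies * dt                      # eigenvalues of A = -i*H*dt
    expD = np.exp(D)
    U = S @ np.diag(expD) @ Sdag                 # propagator e^A

    # divided differences (e^{D_jj} - e^{D_ii}) / (D_jj - D_ii); diagonal and
    # numerically degenerate pairs take the limiting value e^{D_ii}
    diff = D[None, :] - D[:, None]
    small = np.abs(diff) < 1e-12
    G = np.where(small, expD[:, None],
                 (expD[None, :] - expD[:, None]) / np.where(small, 1.0, diff))

    dA = -1j * dH_da * dt
    dU = S @ ((Sdag @ dA @ S) * G) @ Sdag        # Hadamard product with G
    return U, dU
```

Each call costs one O(d3) diagonalization plus a few dense matrix products, consistent with the O(Nd3) runtime quoted in the scaling analysis below.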

F.2. Scaling analysis

The gradients of the contributions to the cost function (for the case of state transfer) are evaluated by the forward-backward propagation scheme. In this scheme, the storage required for states does not scale with the number of intermediate time steps. The matrices that must be stored are the system and control Hamiltonians as well as the dense matrices S, E and ${E}^{{\prime} }$. Thus, the total memory usage scales as O(d2). The runtime of the scheme is dominated by O(N) evaluations of the propagator and its derivative. Employing matrix diagonalization for the propagator evaluation requires a runtime scaling ∼O(d3) [54]. Calculation of the propagator derivative involves matrix-matrix multiplication and Hadamard product with runtime scaling O(d3) and O(d2), respectively. Thus, the overall runtime of the HG scheme scales as O(Nd3).

The gradients of the contributions to the gate-operation cost function are evaluated in an analogous manner, resulting in analogous runtime and memory scaling.

Footnotes

  • 1  

    For simplicity, we consider one control channel with real-valued a(t). Generalizations to multiple control channels and complex a are straightforward.

  • 2  

    While Chebyshev expansion has faster convergence than Taylor expansion, their numerical implementations have the same runtime scaling. Here, we focus on Taylor expansion due to its simplicity.

  • 3  

    Here, f(x) = Θ(g(x)) is the usual Bachmann-Landau notation for f(x) bounded above and below by g(x) in the asymptotic sense.

  • 4  

    AD is implemented by the package Autograd [39] in this benchmark. This package does not support sparse linear algebra, so we implement dense matrix storage for the Hamiltonian in both full-AD and HG for fair comparison.

  • 5  

Generally, ${\left|\left|B\right|\right|}_{2}\leqslant \sqrt{{\left|\left|B\right|\right|}_{\infty }{\left|\left|B\right|\right|}_{1}}$ [53] holds true for any B. When B is anti-Hermitian, ${\left|\left|B\right|\right|}_{\infty }={\left|\left|B\right|\right|}_{1}$ leads to the inequality in equation (C.4).
