Quantum Davidson algorithm for excited states

Excited state properties play a pivotal role in various chemical and physical phenomena, such as charge separation and light emission. However, the primary focus of most existing quantum algorithms has been the ground state, as seen in quantum phase estimation and the variational quantum eigensolver (VQE). Although VQE-type methods have been extended to explore excited states, these methods grapple with optimization challenges. In contrast, the quantum Krylov subspace (QKS) method has been introduced to address both ground and excited states, positioning itself as a cost-effective alternative to quantum phase estimation. However, conventional QKS methodologies depend on a subspace pre-generated through real- or imaginary-time evolution. This subspace is inherently expansive and can be plagued by issues such as slow convergence or numerical instabilities, often leading to relatively deep circuits. Our research presents an economical QKS algorithm, which we term the quantum Davidson (QDavidson) algorithm. This innovation hinges on the iterative expansion of the Krylov subspace and the incorporation of a pre-conditioner within the Davidson framework. By using the residues of eigenstates to expand the Krylov subspace, we formulate a compact subspace that aligns closely with the exact solutions. This iterative subspace expansion paves the way for more rapid convergence than other QKS techniques, such as the quantum Lanczos algorithm. Using quantum simulators, we employ the QDavidson algorithm to probe the excited state properties of various systems, spanning from the Heisenberg spin model to real molecules. Compared to existing QKS methods, the QDavidson algorithm not only converges swiftly but also demands a significantly shallower circuit. This efficiency establishes the QDavidson method as a pragmatic tool for elucidating both ground and excited state properties on quantum computing platforms.


I. INTRODUCTION
Computing the ground and excited state properties of intricate many-body systems is a cornerstone of quantum physics and chemistry. Despite its importance, this endeavor demands substantial computational power because the solution space of the full many-body wavefunction grows factorially with system size (represented by the number of electrons and basis functions) [1][2][3]. As a result, classical quantum chemistry techniques have been conceived to bypass the direct formulation of full many-body wavefunctions, including Hartree-Fock (HF), density functional theory (DFT) [4,5], tensor network methods [6,7] that optimize the wavefunction in the form of a matrix product state (MPS), selected configuration interaction (sCI) [8,9] that iteratively expands the configuration interaction (CI) space, and coupled-cluster (CC) theory truncated at finite orders [10]. However, these techniques invariably employ truncations or approximations and are limited to systems of a certain size.
Since the 1980s, quantum computers (QC) that leverage the power of quantum entanglement have been proposed as ideal platforms for simulating quantum mechanics [11,12]. Advancing into the Noisy Intermediate-Scale Quantum (NISQ) era [13], quantum computing has shown its potential with the demonstration of quantum advantages in well-defined tasks [14]. Electronic structure problems, crucial to various scientific disciplines, emerge as one of the most promising and immediate applications of quantum computers [1][2][3][15]. Numerous quantum algorithms have been proposed for calculating the ground state of quantum many-body systems on QC since the advent of quantum phase estimation (QPE) [16,17]. However, the inherently noisy nature of NISQ devices necessitates hybrid quantum-classical algorithms with shallower circuits. To this end, the Variational Quantum Eigensolver (VQE), which leverages the variational principle and classical optimization, was conceived [18]. The VQE scheme deploys a parameterized wavefunction (or ansatz) and the corresponding energy measurement on QCs and then uses classical algorithms for variational energy minimization. The VQE algorithm has been demonstrated on various quantum architectures such as photons [18], superconducting qubits [19,20], and trapped ions [21]. Since then, many algorithms have been proposed to further improve the performance or reduce the quantum resource requirements of VQE [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36].
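The VQE loop described above (parameterized state, energy measurement, classical minimization) can be emulated classically in a few lines. The sketch below is a toy illustration, not the paper's implementation: it uses an assumed one-qubit Hamiltonian Ĥ = Ẑ + 0.5 X̂, a single-parameter Ry ansatz, and SciPy standing in for the classical optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Toy one-qubit Hamiltonian H = Z + 0.5 X (illustrative; not from the paper).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = Z + 0.5 * X

def ansatz(theta):
    # Single-parameter ansatz |psi(theta)> = Ry(theta)|0>.
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def energy(params):
    # Energy "measurement" <psi|H|psi>, evaluated exactly here.
    psi = ansatz(params[0])
    return psi @ H @ psi

# Classical outer loop: minimize the measured energy over the parameters.
res = minimize(energy, x0=[0.1], method="Nelder-Mead")
exact = np.linalg.eigvalsh(H)[0]
print(res.fun, exact)  # variational minimum approaches the exact ground energy
```

Because this ansatz is expressive enough to reach the exact ground state, the variational minimum coincides with the lowest eigenvalue; on hardware, the energy evaluation inside the loop would instead come from repeated circuit measurements.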
Despite the extensive development of quantum algorithms for electronic structure problems, the majority pivots on ground state properties. However, many critical chemical processes, including energy transfer [37], bond dissociation [38], and light emission [39], revolve around electronically excited states. This necessitates the development of quantum algorithms for excited states. One straightforward way is to extend VQE to excited states by introducing certain constraints [40][41][42][43][44][45][46][47][48]. Despite the success and impact of VQE algorithms, they suffer from optimization problems. The optimization process in VQE is challenging due to the high nonlinearity of the energy and stochastic errors [49] and is compounded by the inclusion of multiple excited states.
Alternatively, the other emerging direction for calculating excited states on QC is based on the quantum subspace, showcased by quantum subspace expansion (QSE) [44,[50][51][52][53], non-orthogonal VQE [54], equation-of-motion methods [55,56], and the quantum Krylov subspace (QKS) framework inspired by its classical analogs [57][58][59][60][61][62]. Current QKS methods utilize either real- or imaginary-time [58,59] evolutions to generate the Krylov subspace, which is then used to sample the low-lying spectrum of the Hamiltonian Ĥ. In particular, the quantum Lanczos (QLanczos) algorithm [59], which engages a basis of correlated states generated from imaginary-time propagation [63,64], has been proposed. Even though QKS methods remove the optimization problems of VQE algorithms, current QKS approaches usually require a relatively deep circuit, owing to the Trotter expansion of the real/imaginary time evolution, and a large number of iterations for convergence. Moreover, the Krylov subspace pre-generated from the time evolution is not necessarily compact.
This research introduces the quantum Davidson (QDavidson) algorithm, an economical QKS method that efficiently crafts a compact Krylov subspace. By harnessing the residue and a pre-conditioner, the Davidson algorithm restricts the subspace search to the vicinity of the exact state, guaranteeing brisk convergence [65]. Compared to its classical counterpart, our strategy offloads subspace expansion to the quantum computer, eliminating the explicit construction of full many-body wavefunctions on classical computers and leading to a speedup in the subspace projection. Compared to existing QKS methods, QDavidson's rapid convergence results in a much shallower circuit, making it potentially more noise resilient.

A. Krylov subspace and classical Davidson algorithm
In linear algebra, an order-r Krylov subspace generated by a matrix H and a reference vector b is the linear subspace spanned by the images of b under the first r powers of H [66]. This subspace is denoted as K_r(H, b) ≡ span{b, Hb, H^2 b, ..., H^{r-1} b}. The Krylov subspace has been extensively utilized in numerical algorithms for solving high-dimensional matrix problems, such as the generalized minimal residual (GMRES), Davidson, and quasi-minimal residual (QMR) algorithms [65].
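As a concrete illustration of how a Krylov subspace is used, the following sketch builds an orthonormal basis of K_r(H, b) for an assumed random symmetric matrix standing in for Ĥ, projects H into it (the Rayleigh-Ritz step), and compares the extremal Ritz value against exact diagonalization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 8
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                      # random symmetric stand-in for H
b = rng.standard_normal(n)

# Build an orthonormal basis Q of K_r(H, b) = span{b, Hb, ..., H^{r-1} b}
# (Arnoldi-style: each new power of H is orthogonalized against the basis).
Q = np.zeros((n, r))
Q[:, 0] = b / np.linalg.norm(b)
for j in range(1, r):
    w = H @ Q[:, j - 1]
    w -= Q[:, :j] @ (Q[:, :j].T @ w)   # Gram-Schmidt step
    Q[:, j] = w / np.linalg.norm(w)

# Rayleigh-Ritz: project H into the subspace and diagonalize the small matrix.
theta = np.linalg.eigvalsh(Q.T @ H @ Q)
lam = np.linalg.eigvalsh(H)
print(theta[0], lam[0])                # lowest Ritz value vs. exact eigenvalue
```

Even with r much smaller than n, the extremal Ritz values already approximate the extremal eigenvalues of H, which is why Krylov methods excel at low-lying spectra.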
In quantum chemistry, the iterative Krylov subspace K_r(Ĥ, |ψ⟩) is especially valuable for identifying low-lying states in electronic structure theory, since the number of such states is typically much smaller than the size of the solution space. Various versions of the Davidson algorithm have been developed to enhance convergence by designing efficient pre-conditioners [67][68][69]. Furthermore, the Davidson algorithm has been generalized to use non-orthogonal Krylov spaces [70]. The flowchart of a standard classical Davidson algorithm is presented in Algorithm S1 of the Supplementary Materials (SM). Despite its computational efficiency, the generation of the subspace vectors Ĥ^r |ψ⟩ remains a significant bottleneck of the Davidson algorithm and becomes computationally intensive on classical computers as the dimension of Ĥ grows.
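For reference, a minimal single-root form of the classical Davidson iteration can be sketched as follows. This is a generic textbook variant with the standard diagonal preconditioner, not a reproduction of Algorithm S1; the test matrix is an assumed diagonally dominant random matrix, the regime where Davidson shines.

```python
import numpy as np

def davidson_lowest(H, v0, n_iter=50, tol=1e-8):
    """Minimal single-root Davidson with the standard diagonal preconditioner."""
    d = np.diag(H)
    V = v0[:, None] / np.linalg.norm(v0)
    for _ in range(n_iter):
        # Rayleigh-Ritz in the current subspace.
        theta, s = np.linalg.eigh(V.T @ H @ V)
        E, x = theta[0], V @ s[:, 0]
        r = H @ x - E * x                 # residue of the current best state
        if np.linalg.norm(r) < tol:
            break
        # Diagonal (Davidson) preconditioner: delta_i = r_i / (E - H_ii).
        denom = E - d
        denom[np.abs(denom) < 1e-12] = 1e-12
        delta = r / denom
        # Orthogonalize against the subspace; expand only if independent.
        delta -= V @ (V.T @ delta)
        nrm = np.linalg.norm(delta)
        if nrm < 1e-10:
            break
        V = np.hstack([V, (delta / nrm)[:, None]])
    return E, x

rng = np.random.default_rng(1)
n = 200
H = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
H = (H + H.T) / 2
E, x = davidson_lowest(H, np.eye(n)[:, 0])
print(E, np.linalg.eigvalsh(H)[0])
```

The residue-plus-preconditioner expansion is exactly the step that QDavidson later moves onto the quantum device; the classical bottleneck is the repeated H @ x product as the dimension of H grows.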

B. Quantum Davidson algorithm
In this study, we introduce a quantum counterpart of the Davidson algorithm, termed QDavidson, that harnesses quantum computers to mitigate the scaling challenges of generating the Krylov subspace. Compared to the traditional Krylov method, the QKS scheme encodes arbitrarily complex states on quantum circuits, where the matrix's projection into the subspace is measured. The benefits of QKS methods over VQE-based techniques for computing low-lying excited states include 1) an independent ansatz for each reference state with a streamlined quantum circuit and 2) the elimination of the intricate optimization process. In Reference 59, quantum imaginary time evolution (QITE) is conducted by using the Trotter decomposition of the evolution operator e^{-Ĥτ} and mapping each Trotter step into a unitary. Subsequently, the QLanczos algorithm is introduced in the Krylov space derived from QITE snapshots. However, the QKS arising from sequential time evolution lacks compactness. Here, we develop the QDavidson algorithm to create a compact QKS set, yielding a more efficient and numerically stable realization of the QKS approach.
Instead of pre-generating the subspace through real- or imaginary-time evolution, the QDavidson framework adaptively expands the QKS, keeping the subspace closely aligned with the true eigenspace. Initially, orthogonal states serve as the reference. The HF state |ϕ_0⟩, single-excitation configurations |ϕ_i^a⟩ = â_a† â_i |ϕ_0⟩, and configuration-interaction singles (CIS) states {Σ_{ia} C_{ia} |ϕ_i^a⟩}, which are efficiently computed on classical computers, are natural choices for initial reference states. The set of these initial reference states is denoted as {|ψ_k⟩}. Within the QKS framework, each basis state (subsequently called a Krylov vector) within the Krylov subspace of the j-th Davidson iteration assumes the ansatz |ψ_k^{(j)}⟩ = Û_K^{(j)} |ψ_k⟩ (Eq. 1). Here, multiple reference states (i.e., a linear combination of HF/CIS states) are utilized. The entanglers Û_K^{(j)} arise from the Krylov space expansion and introduce correlations that extend beyond the initial states. The construction of Û_K^{(j)} is detailed later, with Û_K^{(1)} = 1 for the initial iteration.
The general ground and excited states, denoted |Φ_I⟩, can be expressed as linear combinations of the Krylov vectors, |Φ_I⟩ = Σ_K V_K^I |ψ_K⟩. Therefore, the challenge of finding the ground state and low-lying excited states, represented by the equation Ĥ|Ψ⟩ = E|Ψ⟩, can be recast as the generalized eigenvalue problem HV = ESV within the Krylov subspace. Here, H denotes the Hamiltonian matrix within the Krylov subspace spanned by the vectors of Eq. 1, with elements H_{KL} = ⟨ψ_K| Ĥ |ψ_L⟩. Additionally, S represents the overlap matrix among Krylov vectors, with individual elements S_{KL} = ⟨ψ_K|ψ_L⟩. Considering that each Krylov vector may contain distinct entanglers and reference states, the matrix elements of both H and S are measured on quantum computers using an ancillary qubit and the Hadamard test [54,58], as illustrated in Figure 1. Once the elements of H and S are measured, the generalized eigenvalue problem HV = ESV can be trivially solved on classical computers due to the small dimensionality of the Krylov subspace.
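Once H and S are measured, the classical post-processing step is a small generalized eigenvalue problem that SciPy solves directly. The sketch below uses an assumed random Hermitian matrix and a random non-orthogonal basis standing in for the Krylov vectors:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n, k = 50, 6
A = rng.standard_normal((n, n))
Hfull = (A + A.T) / 2

# A small non-orthogonal basis (random here, purely for illustration).
B = rng.standard_normal((n, k))
H = B.T @ Hfull @ B   # subspace Hamiltonian, H_KL = <psi_K|H|psi_L>
S = B.T @ B           # overlap matrix,       S_KL = <psi_K|psi_L>

# Generalized eigenvalue problem H V = E S V, trivial at this size.
E, V = eigh(H, S)

# The Ritz values are variational: they lie inside the true spectrum.
lam = np.linalg.eigvalsh(Hfull)
print(E[0] >= lam[0], E[-1] <= lam[-1])
```

Note that `scipy.linalg.eigh(H, S)` returns eigenvectors normalized so that V.T @ S @ V = I, which is the natural normalization for a non-orthogonal Krylov basis.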
After solving for the approximate excited states in the current subspace, the residues of these approximate states can be computed as |R_I⟩ = (Ĥ − E_I)|Φ_I⟩. The norm of the residue can be measured on quantum computers similarly to the H_{KL} measurement. The classical Davidson algorithm employs the Krylov subspace K_r((Ĥ − E), |Φ_I⟩) to solve the eigenvalue problem. However, it is non-trivial to create the state Ĥ|Ψ_I⟩ on quantum circuits, since Ĥ is non-unitary. As an alternative, within the QDavidson algorithm, a correction vector defined by the imaginary-time propagation |δ_I⟩ = e^{−∆τ(Ĥ−E_I)} |Φ_I⟩ is employed to expand the Krylov subspace. In other words, the QDavidson algorithm uses the subspace K_r(e^{−∆τ(Ĥ−E)}, |Φ_I⟩) to derive the eigenstates; the two subspaces coincide to first order because e^{−∆τ(Ĥ−E)} → 1 − ∆τ(Ĥ−E) when ∆τ → 0. This evolution is subsequently mapped to a unitary operator e^{−iÂ} as proposed in Ref. [59], e^{−∆τ(Ĥ−E_I)} |Φ_I⟩ = n_I e^{−iÂ} |Φ_I⟩, where n_I is the normalization factor and Â = Σ_α a_α P̂_α is a linear combination of Pauli words.
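The claim that the imaginary-time correction vector reduces to the Davidson residue direction as ∆τ → 0 can be checked numerically. The sketch below uses an assumed random symmetric matrix standing in for Ĥ and SciPy's dense matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 8
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                       # random symmetric stand-in for H

phi = rng.standard_normal(n)
phi /= np.linalg.norm(phi)
E = phi @ H @ phi                       # Ritz energy of the approximate state
R = (H - E * np.eye(n)) @ phi           # residue |R> = (H - E)|Phi>

dtau = 1e-4
delta = expm(-dtau * (H - E * np.eye(n))) @ phi   # correction vector

# To first order, delta = phi - dtau * R + O(dtau^2): the component that
# delta adds beyond |Phi> points along the residue direction.
err = np.linalg.norm(delta - (phi - dtau * R))
print(err)
```

The deviation shrinks quadratically with ∆τ, confirming that expanding with |δ_I⟩ is equivalent, to first order, to expanding with the residue itself.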
The coefficients a_α are obtained by solving a linear system associated with the mapping (more details can be found in Appendix A). An alternative method involves directly mapping the preconditioned residue instead. After mapping the residue operator to unitaries, the QDavidson algorithm determines whether the new Krylov vector (or correction vector) is linearly independent of the current subspace by introducing |δ′_I⟩ = |δ_I⟩ − Σ_K ⟨ψ_K|δ_I⟩ |ψ_K⟩, the component of |δ_I⟩ orthogonal to the current subspace. If ||δ′_I⟩| > ϵ, the new Krylov vector |δ_I⟩ is linearly independent of the current subspace and, hence, |δ′_I⟩ should be incorporated into the subspace. As the inclusion of a new Krylov vector into the Krylov space introduces additional correlations, the next iteration will draw the results nearer to the exact solutions. The flowchart of the QDavidson algorithm is summarized in Fig. 2.
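The linear-independence screening amounts to projecting the candidate vector out of the current subspace and comparing the leftover norm against the threshold ϵ. A classical sketch (all names and the test vectors are illustrative):

```python
import numpy as np

def expand_if_independent(V, delta, eps=1e-6):
    """Append the normalized candidate vector to the orthonormal basis V only
    if its component outside span(V) has norm greater than eps."""
    d = delta - V @ (V.T @ delta)       # remove overlap with current subspace
    nrm = np.linalg.norm(d)
    if nrm > eps:
        return np.hstack([V, (d / nrm)[:, None]]), True
    return V, False

rng = np.random.default_rng(4)
V = np.linalg.qr(rng.standard_normal((10, 3)))[0]   # orthonormal 3-dim basis

# A vector already inside span(V) is rejected ...
inside = V @ np.array([0.3, -0.5, 1.0])
V1, added1 = expand_if_independent(V, inside)

# ... while a generic random vector has a large orthogonal component and is kept.
V2, added2 = expand_if_independent(V, rng.standard_normal(10))
print(added1, added2, V2.shape)
```

On the quantum device the overlaps ⟨ψ_K|δ_I⟩ would come from Hadamard-test measurements rather than dot products, but the accept/reject logic is the same.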
Since the size of the Krylov subspace (N_K) is small, solving the generalized eigenvalue problem on classical computers is cheap. The primary complexity of the QDavidson algorithm stems from mapping the residue vectors into unitaries. Hence, its main bottleneck is the formation of S, b, and the solution of the linear system to obtain the unitary generator Â. Earlier research has shown that mapping non-unitary exponential operators into unitaries scales exponentially with the correlation domain D [59]. However, a local approximation can be applied to eliminate the exponential dependence on D [59], leading to polynomial complexity.

III. NUMERICAL EXPERIMENTS
In this section, we present numerical results for the proposed algorithm. To demonstrate the performance of the QDavidson algorithm, we conducted exact quantum simulations of various systems, such as one-dimensional (1D) Heisenberg models and molecular systems, using a noiseless state-vector simulator.

A. One-dimensional Heisenberg models
Both long-range (LR) and short-range (SR) one-dimensional (1D) Heisenberg models, analogous to the models described in Ref. [59], were tested in this work. The short-range models account only for nearest-neighbor interactions between spins, characterized by C_ij (X̂_i X̂_j + Ŷ_i Ŷ_j + Ẑ_i Ẑ_j) terms. In contrast, the long-range models consider pairwise interactions among all spins, encompassing a larger number of terms in the qubit Hamiltonian. Explicitly, the short-range and long-range Hamiltonians take the forms Ĥ_SR = Σ_{i=1}^{N} C_i (X̂_i X̂_{i+1} + Ŷ_i Ŷ_{i+1} + Ẑ_i Ẑ_{i+1}) and Ĥ_LR = Σ_{i<j} C_ij (X̂_i X̂_j + Ŷ_i Ŷ_j + Ẑ_i Ẑ_j), where N denotes the number of spins in the system. For the short-range Hamiltonians, the index i is cyclic; thus, when it reaches N + 1, it reverts to 1. The QDavidson algorithm is state-dependent, necessitating the definition of initial reference states. We initiated the algorithm with the anti-ferromagnetic product state, which corresponds to the alternating |0101⋯⟩ state in the computational basis.
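For concreteness, both Hamiltonians can be assembled classically via Kronecker products. The sketch below assumes a uniform coupling C = 1 (the paper's models carry pair-dependent coefficients C_ij) and checks the exact ground energies of the 4-spin ring and the all-to-all model:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_on(op, site, n):
    """Embed a single-site Pauli operator at `site` in an n-spin register."""
    ops = [I2] * n
    ops[site] = op
    return reduce(np.kron, ops)

def heisenberg(n, pairs, C=1.0):
    """H = sum over pairs (i, j) of C * (X_i X_j + Y_i Y_j + Z_i Z_j).
    A uniform coupling C is assumed here; the paper's models use
    pair-dependent coefficients C_ij."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i, j in pairs:
        for P in (X, Y, Z):
            H += C * pauli_on(P, i, n) @ pauli_on(P, j, n)
    return H

n = 4
sr_pairs = [(i, (i + 1) % n) for i in range(n)]                 # cyclic nearest neighbors
lr_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]  # all pairs
E0_sr = np.linalg.eigvalsh(heisenberg(n, sr_pairs))[0]
E0_lr = np.linalg.eigvalsh(heisenberg(n, lr_pairs))[0]
print(E0_sr, E0_lr)  # -8 and -6 for the uniform 4-spin models
```

These values match the analytic results for the uniform spin-1/2 Heisenberg ring and the all-to-all model, providing the exact reference energies against which iterative algorithms can be benchmarked.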
In order to benchmark the QDavidson algorithm against other QKS methods, a QLanczos algorithm with identical initial parameters was also implemented.
The results of the QDavidson and QLanczos algorithms for the 1D Heisenberg models are shown in Figure 3. For the first four low-lying solutions, the QDavidson algorithm achieved convergence within 4 and 7 iterations for the 4-spin and 6-spin systems, respectively (Figure 3E-H). This performance surpasses the recently proposed QLanczos [59] algorithm (Figure 3A-D), which not only converges more slowly but also displays numerical instability, as the resulting states quickly become linearly dependent. By implementing root convergence criteria and a linear dependency check, the QDavidson algorithm exhibits enhanced numerical stability. The full results for the QLanczos algorithm are provided in the Supplementary Information (SI) (Figure S1). This algorithm was executed until an iteration produced a linearly dependent state. Notably, while QLanczos managed to compute exact energy values for all 4-spin models within 6 iterations (Figure S1A-B), it did not converge for the 6-spin systems, and the final energy estimations showed considerable deviation (Figure S1C-D). As each iteration appends a new entangler after mapping the imaginary-time evolution operator into unitaries, the circuit depth increases monotonically with iterations within the QITE, QLanczos, and QDavidson frameworks. Hence, faster convergence translates to more compact circuits. Compared to QITE algorithms, QLanczos converges faster in finding the ground state [59]. Our QDavidson algorithm substantially improves convergence by employing the residue operator to narrow the subspace search near the exact state, resulting in a significantly reduced circuit depth. Although this improvement may not be evident for smaller systems, it becomes remarkably advantageous for larger systems. To elucidate this, we examined how the accuracy of the lowest-state energy varies with the maximum depth of the resulting circuits for both QLanczos and QDavidson. The results are shown in Figure 5.
QDavidson reproduces the exact solution for the six-spin systems when the subspace comprises circuits with maximum gate depths of approximately 400 and 900 for the short-range and long-range models, respectively. In contrast, QLanczos results are less accurate, and deeper circuits are required to perform the algorithm. For instance, the accuracy attained by QLanczos after 8 iterations (gate depth of 480) for the 6-spin SR model can be achieved by QDavidson using circuits with a peak gate depth of just 120. Given the iterative expansion of the subspace, each QDavidson iteration contains states characterized by circuits of various depths; consequently, tracking only the longest circuit within the subspace is sufficient. The circuit depth was analyzed based on the Pauli words present in the operator Â in the e^{−∆τ(Ĥ−E)} |Ψ_I⟩ → e^{−iÂ} |Ψ_I⟩ expansion. Given the decomposition Â = Σ_i θ_i σ̂_i, a standard unitary evolution circuit can be constructed for e^{−iÂ} (assuming one Trotter step).

B. π-conjugated hydrocarbons
In addition to the 1D Heisenberg models, we also examined our algorithm on chemical systems. As expected, molecular Hamiltonians are notably more complex than the 1D Heisenberg models due to the intricate entanglement between numerous orbitals. To highlight the advantage of the QDavidson algorithm for molecular systems, we studied three molecular systems: ethylene, the cyclopropene cation, and benzene. The Cartesian coordinates of the investigated molecular systems are provided in the Supplementary Information (SI) (see Table S1). An active space representing the π-conjugated system was selected for each system; the detailed active orbitals can be found in the SI (see Figure S2). Specifically, a (2e, 2o) active space was considered for the C2H4 molecule, (2e, 3o) for C3H3+, and (6e, 6o) for C6H6. The Jordan-Wigner transformation [71] was utilized to map the second-quantized Hamiltonian into the qubit Hamiltonian. Consequently, the largest system examined in this study is the benzene molecule, comprising 6 electrons and 12 spin-orbitals in the active space (which corresponds to a 12-qubit Hamiltonian with 407 terms).
Taking into account that the electronic structure problems of molecular systems maintain particle-number and total-spin symmetries, it is beneficial to establish a symmetric pool of Pauli terms for the correction vector |δ_I⟩ mapping. Therefore, a set of all possible spin projection-preserving single and double excitation operators was selected and then transformed into qubit operators employing the Jordan-Wigner scheme [71]. All unique Pauli terms with an odd number of Ŷ operators were included in the pool. This results in 12, 40, and 828 unique Pauli terms for ethylene, the cyclopropene cation, and benzene, respectively.
The performance of the QDavidson algorithm depends on the chosen initial state. Multiple simulations were conducted using varying initial configurations (|ϕ_0⟩, |ϕ_i^a⟩ = â_a† â_i |ϕ_0⟩). In particular, the Hartree-Fock, the first excited singlet, and the first triplet product states were taken into consideration. The QDavidson algorithm converges to exact energy values for the smallest molecular systems within several iterations. For C2H4, starting from the Hartree-Fock product state, the QDavidson algorithm obtained the exact ground state energy (−77.11518 Hartree) and the exact second excited singlet state energy (−76.37541 Hartree) after just one iteration. Similar results were acquired for the C3H3+ system, where the exact ground state (−113.64929 Hartree) and the first three singlet excited states were determined. Table I presents results for other initial states. Intriguingly, the eigenstates identified for the C3H3+ system preserve the particle number and do not collapse onto the neutral-state solutions with three electrons instead of two, even though the latter are lower in energy.
For the larger C6H6 system, the QDavidson algorithm also accurately located the energies of several low-lying states (within chemical accuracy). However, as depicted in Figure 4a-c, many more iterations were needed. This is presumably due to the limited operator pool chosen, which only includes Pauli words from the JW-mapped single and double excitation operators. Consequently, multi-electron systems (with electron counts exceeding 2) might lack sufficient flexibility in the e^{−∆τ(Ĥ−E)} operator mapping. Regardless, the algorithm yielded meaningful results even with this restricted operator pool. For the benzene molecule, the algorithm found the first excited singlet and triplet states to within 10^-2 Hartree after approximately 30 iterations (see Figure 4b, c).
For larger systems, it becomes evident that initiating the algorithm from a single initial state is not the optimal strategy to capture the first few low-lying excited states, as illustrated in Figure 4A-C. Even though the algorithm can locate the states of interest, the accuracy is far from the desired precision of 10^-3 Hartree. Initiating the algorithm with the first excited singlet (Figure 4F) or triplet (Figure 4G) determinants leads to quicker convergence than using the Hartree-Fock initial state; however, the energy convergence remains slow and often deviates from the exact values. In contrast, generalizing the algorithm to multiple initial states results in faster convergence and higher accuracy (see Figure 4H), highlighting the significance of multireference states. Notably, chemically accurate energies for the S0, T1, and T2 states could be obtained after just 11 iterations. Therefore, to efficiently find the low-lying excited states of a large molecular system, it is advised to begin with multiple initial states to expedite energy convergence.

C. Effects of statistical shot-noise
Owing to the finite number of quantum measurements (shots), exact measurements of the Hamiltonian matrix and overlap matrix elements within the Krylov subspace are unattainable. A limited shot count may introduce numerical instabilities into the QDavidson procedure. In general, to estimate the expectation value of a Hamiltonian with respect to a single state |ψ⟩ with precision p, one must perform O(|h_max|^2 M p^-2) measurements, where h_max is the largest coefficient in the Hamiltonian decomposition Ĥ = Σ_i h_i P̂_i and M denotes the number of Hamiltonian terms [18]. Similar conclusions apply when evaluating Hamiltonian matrix elements within the Krylov subspace. Since the matrix elements ⟨Ψ_K| Ĥ |Ψ_L⟩ can be evaluated according to Figure 1, the cost of evaluating the full Hamiltonian matrix is O(|h_max|^2 M p^-2 N_K^2), where N_K is the size of the Krylov subspace. Although this measurement process can be complex, in practice only a small subspace is typically spanned, and the subspace grows linearly with the number of iterations, making the N_K^2 factor relatively insignificant compared to |h_max|^2 M p^-2. Likewise, measuring the overlap matrix incurs a cost of O(p^-2 N_K^2) shots. The procedure for mapping residual vectors to unitaries also requires quantum measurements; the required shot count depends on the size of the operator pool (P) chosen for the mapping. However, it is noteworthy that fewer shots are needed for QDavidson to converge than for the VQE procedure, since the latter's ansatz optimization involves a substantial number of energy evaluations.
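Plugging representative numbers into the scaling estimate makes it concrete. The sketch below is pure arithmetic with assumed placeholder values for h_max and N_K; only M = 407 (the benzene Hamiltonian) and p = 10^-4 come from the text:

```python
# Back-of-the-envelope shot estimate O(|h_max|^2 * M * p^-2) per matrix element.
h_max = 1.0   # largest Pauli coefficient (assumed placeholder)
M = 407       # Hamiltonian terms in the benzene active space (from the text)
p = 1e-4      # target precision from the noise analysis
N_K = 10      # Krylov subspace size (assumed placeholder)

shots_per_element = h_max**2 * M / p**2
shots_total = shots_per_element * N_K**2
print(f"{shots_per_element:.2e} shots per element, {shots_total:.2e} in total")
```

The p^-2 factor dominates: tightening the precision by one order of magnitude costs two orders of magnitude in shots, which is why establishing the loosest precision that keeps the algorithm stable (10^-4 below) matters.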
To illustrate the number of shots required to perform the algorithm on a real-world example, we incorporated shot-noise into the calculations for the C2H4 molecule. The desired precision p was determined after examining the algorithm's noise robustness by introducing random errors into each Hamiltonian matrix element, (off-diagonal) overlap matrix element S_αβ, and b_α (Appendix A). It was found that an evaluation precision of 10^-4 is sufficient to guarantee the algorithm's numerical stability. With this level of precision, energy values of −77.115 ± 0.001 (S0) and −76.375 ± 0.001 (S2) were reproduced. The algorithm, when initiated with a Hartree-Fock reference state, converged after 1 iteration. The estimated shot count per circuit evaluation was set to 10^8, and the total number of shots was found to be ∼10^10. This experiment underscores the QDavidson algorithm's capability to reproduce the low-lying spectra of the evaluated Hamiltonian even in the presence of statistical shot noise.

IV. SUMMARY
In this study, we developed an efficient QKS algorithm, QDavidson, to compute ground and low-lying excited states by harnessing the power of the Krylov subspace and the rapid convergence of the Davidson algorithm. Unlike other QKS methodologies that employ real or imaginary time to pre-generate a subspace, QDavidson uses residues from previously approximated states to expand the subspace and capitalizes on the pre-conditioner to narrow the subspace search near the exact state, ensuring rapid convergence. Our numerical simulations confirm that the QDavidson algorithm surpasses other QKS methods, such as QLanczos, in convergence speed. Since the circuit depth increases with iterations, QDavidson's rapid convergence results in less complex quantum circuits, enhancing its resilience against noise. Future research could delve into advanced pre-conditioners for the QDavidson method [67,69], potentially further trimming the iterative steps and circuit depth. Moreover, simulation accuracy is bound by the chosen basis set. While a large basis set is essential for achieving chemical accuracy, increasing the basis set size is not NISQ-friendly, as it requires a significantly larger number of qubits and deeper circuits [72]. However, by employing a transcorrelated Hamiltonian, one can attain cc-pVTZ-level accuracy even with a minimal basis set [72].

FIG. 1 .
FIG. 1. Schematic diagram of the modified Hadamard test for measuring off-diagonal elements. The circuit measures the expectation value of the operator 2σ̂+ = X̂ + iŶ.

FIG. 2 .
FIG. 2. Flowchart of the QDavidson algorithm. The orange (blue) boxes represent the parts of the algorithm performed on the QPU (CPU).

Hence, the measurement
of ||R_I⟩| is akin to the H_{KL} measurement but with the Hamiltonian Ĥ in Equation 3 replaced by (Ĥ − E_I)^2. The norm indicates the convergence of each eigenstate. If ||R_I⟩| exceeds ϵ (where ϵ denotes the convergence criterion), a new Krylov vector, based on the normalized counterpart of |R_I⟩, should be incorporated into the Davidson algorithm, provided it is linearly independent of the existing Krylov subspace.

FIG. 3 .
FIG. 3. Energy differences of the 1D Heisenberg models as a function of algorithm iteration. The energy difference is defined as E_algorithm − E_exact. Results for different states are represented by different colors: black for the ground state, blue for the first excited state, green for the second excited state, and red for the third excited state. Graphs (a-d) present the results of the QLanczos algorithm, while graphs (e-h) depict the results of the QDavidson algorithm obtained with the same QITE expansion parameters. Graphical representations of each system are provided in the bottom-left corner of each graph. The small circles with numbers symbolize spins, and the lines connecting them denote C_k Ŝ_i·Ŝ_j terms. Different C_k coefficients are illustrated by lines of varying colors and widths.

FIG. 4 .
FIG. 4. Energies and errors of the first four low-lying states (S0, T1, S1, T2) obtained from the QDavidson procedure as a function of algorithm iteration. Results for different states are given in different colors. The results are for the C6H6 molecule initialized with different product states: |000000111111⟩ (A, E); |000010011111⟩ (B, F); |000001011111⟩ (C, G); and a combination of the three product states mentioned above (D, H). Black dashed horizontal lines show the exact energies obtained from direct diagonalization of the corresponding Hamiltonian. The geometry of the molecule is shown in the inset of (E).

FIG. 5 .
FIG. 5. Energy differences of the lowest states for 6-spin 1D Heisenberg models as a function of a maximum circuit depth (A) SR model; (B) LR model.
FIG. S1. Energy differences of the 1D Heisenberg models [59] as a function of QLanczos algorithm iteration. The energy difference is defined as E_algorithm − E_exact. Results for different states are given in different colors: black for the ground state, blue for the first excited state, green for the second excited state, and red for the third excited state. Graphical representations of each system are given in the bottom-left corner of each graph. The small circles with numbers represent spins, while the lines connecting them represent C_k Ŝ_i·Ŝ_j terms. Different C_k coefficients are illustrated with lines of different colors and widths.

TABLE I .
Results of the QDavidson algorithm for the C2H4 and C3H3+ molecules. The initial state and the number of iterations required for convergence are given. Different eigenstates are denoted in parentheses as follows: S0, the ground singlet state; T1, the lowest triplet state; Si, the i-th excited singlet state; Ti, the i-th excited triplet state. Sz denotes the spin projection of the obtained state.