Paper (Open access)

An efficient quantum algorithm for spectral estimation


Published 1 March 2017 © 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Adrian Steffens et al 2017 New J. Phys. 19 033005. DOI 10.1088/1367-2630/aa5e48


Abstract

We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum–classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring far fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.


1. Introduction

Algorithms for the spectral estimation of signals consisting of finite sums of exponentially damped sinusoids have a vast number of practical applications in signal processing. These range from imaging and microscopy [1], radar target identification [2], nuclear magnetic resonance spectroscopy [3], estimation of ultra wide-band channels [4], quantum field tomography [5, 6], power electronics [7], up to the simulation of atomic systems [8]. If the damped frequencies (poles) are known and merely the concomitant coefficients are to be identified, linear methods are readily applicable. In the practically relevant task in which the poles are to be estimated from the data as well, however, one encounters a nonlinear problem, and significantly more sophisticated methods have to be employed.

There are various so-called high resolution spectral estimation techniques that provide precisely such methods: they include matrix pencil methods (MPM) [9], Prony's method [10], MUSIC [11], ESPRIT [12], and atomic norm denoising [13]. These techniques are superior to the discrete Fourier transform (DFT) in instances with damped signals and close frequencies or small observation time $T\gt 0$ [14–16] and are preferred over the Fourier transform in those applications laid out in [15, 7, 8]: the DFT resolution in the frequency domain ${\rm{\Delta }}\omega $ is proportional to $1/T$, which is especially critical for poles that are close to each other. If the poles are sufficiently damped and close, they cannot be resolved by DFT independently of T. Nonlinear least-squares fitting of the poles or considering higher-order derivatives of the Fourier transform is in general relatively imprecise, sensitive to noise, or inefficient. Nonlinear algorithms such as the MPM can still detect poles where DFT fails, but are limited to signals composed of finitely many damped sinusoids.

With regard to quantum algorithms dedicated to tasks of spectral estimation—algorithms to be run on a quantum computer—the celebrated quantum Fourier transform (QFT) [17] provides an exponential speedup over the fastest known classical implementations of the DFT for processing discretized signals of N samples: classical fast Fourier transform algorithms, on the one hand, take ${\rm{\Theta }}(N\mathrm{log}N)$ gates [18], whereas the QFT takes ${\rm{\Theta }}({\mathrm{log}}^{2}N)$ gates to produce a quantum state encoding the Fourier coefficients in its amplitudes. The QFT constitutes a key primitive in various quantum algorithms. In particular, it paved the way for quantum speedups for problems such as prime factoring or order-finding [19]. Regarding spectral estimation, however, the QFT inherits the above-mentioned properties of its classical counterpart.

The aim of this work is to develop a quantum version of a powerful spectral estimation technique, the MPM, providing an analogous quantum speedup from $O(\mathrm{poly}\,N)$ to $O(\mathrm{poly}\ \mathrm{log}N)$ for data given in a suitable format. Hereto, we make use of the fact that establishing eigenvalues and eigenvectors of low-rank matrices—constituting major steps in this algorithm—can be achieved very fast on quantum computers [20]. Given signal data either via the amplitudes of a quantum state or stored in a quantum random access memory (QRAM) [21–23], phase estimation of these matrices can be performed directly. For exponentiating non-sparse operators for phase estimation, we employ quantum principal component analysis (QPCA) [20] and a recently developed oracle-based method [24]. In an additional step, we employ a quantum linear fitting algorithm [25, 26] to determine the summing coefficients and hence all parameters that determine the signal function. In this sense, we can understand our algorithm also as an instance of a nonlinear quantum fitting algorithm, in contrast to linear curve fitting algorithms [25, 26]. Furthermore, our algorithm can also be employed as a sub-routine in a higher-level quantum algorithm that requires spectral estimation as an intermediate step. We expect the developed methods to provide valuable novel primitives to be used in other quantum algorithms as well.

2. The classical matrix pencil algorithm

We start by briefly recapitulating the original (classical) matrix pencil algorithm before turning, in section 3, to a quantum implementation of this algorithm that yields an exponential speedup. MPM [9] comprise a family of efficient signal processing algorithms for spectral estimation and denoising of equidistantly sampled complex-valued functions f of the type

Equation (1): $f(t)={\sum }_{k=1}^{p}{c}_{k}\,{{\rm{e}}}^{{\lambda }_{k}t},$

with the poles ${\lambda }_{k}=-{\alpha }_{k}+{\rm{i}}\,{\beta }_{k}\in {\mathbb{C}}$, damping factors ${\alpha }_{k}\in {{\mathbb{R}}}_{+}$, frequencies ${\beta }_{k}\in {\mathbb{R}}$, and coefficients ${c}_{k}\in {\mathbb{C}}$ for $k=1,\ldots ,p$, where $p\in {\mathbb{N}}$ is the number of poles. The damping results in a broadening of the spectral lines towards Lorentzian curves. Real-valued functions as a special case can be analyzed as well: here, for each $k=1,\ldots ,p$ either ${\lambda }_{k},{c}_{k}\in {\mathbb{R}}$—these terms are non-oscillatory—or there exist ${\lambda }_{k^{\prime} },{c}_{k^{\prime} }$ such that ${\lambda }_{k^{\prime} }={\lambda }_{k}^{* }$ and ${c}_{k^{\prime} }={c}_{k}^{* }$. Clearly, such signals, in which the number of poles p is small and finite, are ubiquitous, or in other instances provide an exceedingly good approximation of the underlying signal.
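For concreteness, a signal of this type can be generated numerically. The following is a minimal sketch with arbitrary example poles and coefficients, chosen in conjugate pairs so that the resulting signal is real:

```python
import numpy as np

# Example parameters (hypothetical values): two conjugate pole pairs,
# lambda_k = -alpha_k + i*beta_k with damping alpha_k >= 0, frequency beta_k
lams = np.array([-0.1 + 2.0j, -0.1 - 2.0j, -0.3 + 5.0j, -0.3 - 5.0j])
cs   = np.array([1.0 + 0.5j, 1.0 - 0.5j, 0.7 - 0.2j, 0.7 + 0.2j])

N, dt = 64, 0.1                      # number of samples, sampling interval
t = dt * np.arange(N)
f = (cs[None, :] * np.exp(np.outer(t, lams))).sum(axis=1)

# The conjugate-pair structure makes the signal real up to rounding errors
assert np.max(np.abs(f.imag)) < 1e-12
```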

Algorithm 1. Matrix pencil algorithm.

Data: Discretized signal with components ${f}_{j}={\sum }_{k=1}^{p}{c}_{k}\,{{\rm{e}}}^{{\lambda }_{k}{\rm{\Delta }}t\cdot j}$, $j=0,\ldots ,N-1$,
${c}_{k},{\lambda }_{k}\in {\mathbb{C}},\ {\mathfrak{Re}}({\lambda }_{k})\leqslant 0$.
Result: Frequencies ${\{{\lambda }_{k}\}}_{k=1}^{p}$ and coefficients ${\{{c}_{k}\}}_{k=1}^{p}$.
begin
Create the Hankel matrices ${F}^{(1)}:= {({f}_{j+k-2})}_{j,k=1}^{N/2}$ and ${F}^{(2)}:= {({f}_{j+k-1})}_{j,k=1}^{N/2}$ from the signal and compute their (truncated) singular value decompositions ${F}^{(i)}={U}^{(i)}{S}^{(i)}{V}^{(i)\dagger },\,i=1,2$.
Solve the generalized eigenvalue problem ${U}^{(1)\dagger }{F}^{(2)}{V}^{(1)}\,{w}_{k}={\mu }_{k}\,{S}^{(1)}\,{w}_{k}$. The p frequencies $\{{\lambda }_{k}\}$ can directly be obtained from the p eigenvalues $\{{\mu }_{k}\}$.
Create the Vandermonde matrix W from the eigenvalues $\{{\mu }_{k}\}$ and invert the linear equation system ${Wc}=({f}_{j})$ to obtain the coefficients $\{{c}_{k}\}$.
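The three steps of Algorithm 1 can be sketched classically in a few lines. This is a minimal illustration with hypothetical example poles and coefficients and no noise, not the quantum algorithm developed below:

```python
import numpy as np

# Hypothetical example signal: p = 2 damped sinusoid components
p, N, dt = 2, 64, 0.1
lams_true = np.array([-0.1 + 2.0j, -0.2 + 3.0j])
cs_true = np.array([1.0, 0.5 + 0.5j])
f = (cs_true * np.exp(np.outer(dt * np.arange(N), lams_true))).sum(axis=1)

# Step 1: Hankel matrices F1 = (f_{j+k-2}), F2 = (f_{j+k-1}), truncated SVD
n = N // 2
idx = np.arange(n)[:, None] + np.arange(n)[None, :]
F1, F2 = f[idx], f[idx + 1]
U, s, Vh = np.linalg.svd(F1)
U1, S1, V1 = U[:, :p], np.diag(s[:p]), Vh[:p].conj().T

# Step 2: reduced eigenvalue problem; eigenvalues mu_k = exp(lambda_k * dt)
mus = np.linalg.eigvals(np.linalg.inv(S1) @ U1.conj().T @ F2 @ V1)
lams = np.log(mus) / dt

# Step 3: overdetermined Vandermonde least squares for the coefficients c_k
W = mus[None, :] ** np.arange(N)[:, None]
cs, *_ = np.linalg.lstsq(W, f, rcond=None)

order, order_true = np.argsort(lams.imag), np.argsort(lams_true.imag)
assert np.allclose(lams[order], lams_true[order_true])
assert np.allclose(cs[order], cs_true[order_true])
```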

The idea of MPM is to determine the poles $\{{\lambda }_{k}\}$ independently from the coefficients $\{{c}_{k}\}$ and compare the discretized signal with its translates. Assume that all ck are nonzero and ${\lambda }_{j}\ne {\lambda }_{k}$ for $j\ne k$. First, sample the function f equidistantly,

Equation (2): ${f}_{j}:= f(j\,{\rm{\Delta }}t)={\sum }_{k=1}^{p}{c}_{k}\,{{\rm{e}}}^{{\lambda }_{k}{\rm{\Delta }}t\cdot j},\quad j=0,\ldots ,N-1,$

with sampling interval ${\rm{\Delta }}t\gt 0$. In general, the higher the number of samples N, the more robust the procedure becomes towards noise and the higher the frequencies that can be reconstructed (Nyquist–Shannon sampling theorem [27])—at the expense of computational effort. For clarity, assume that N is even. From the sampled signal, create the Hankel matrices ${F}^{(1)},{F}^{(2)}\in {{\mathbb{C}}}^{N/2\times N/2}$, defined as

Equation (3): ${F}^{(1)}:= {({f}_{j+k-2})}_{j,k=1}^{N/2}$

and

Equation (4): ${F}^{(2)}:= {({f}_{j+k-1})}_{j,k=1}^{N/2}.$

Note that for complex signals, the matrices ${F}^{(1)}$ and ${F}^{(2)}$ are symmetric but in general not Hermitian. In other implementations, ${F}^{(1)}$ and ${F}^{(2)}$ do not even need to be square. To keep the notation clear, we proceed with square matrices as just defined. Set ${\mu }_{k}:= {{\rm{e}}}^{{\lambda }_{k}{\rm{\Delta }}t}$ for $k=1,\ldots ,p$. It is easy to see that ${F}^{(1)}$ can be factorized as

Equation (5): ${F}^{(1)}=M\,{D}_{c}\,{M}^{T}$

with the Vandermonde matrix $M\in {{\mathbb{C}}}^{N/2\times p}$,

Equation (6): $M:= {({\mu }_{k}^{j-1})}_{j=1,\ldots ,N/2;\ k=1,\ldots ,p},$

and diagonal matrix ${D}_{c}:= \mathrm{diag}(({c}_{k}{)}_{k=1}^{p})\in {{\mathbb{C}}}^{p\times p}$. The matrix ${F}^{(2)}$, on the other hand, can be decomposed as

Equation (7): ${F}^{(2)}=M\,{D}_{c}\,{D}_{\mu }\,{M}^{T},$

with ${D}_{\mu }:= \mathrm{diag}(({\mu }_{k}{)}_{k=1}^{p})\in {{\mathbb{C}}}^{p\times p}$. Note that equations (5) and (7) are neither the eigenvalue nor the singular value decomposition of ${F}^{(1)}$ and ${F}^{(2)}$, respectively; the column vectors of M do not even have to be orthogonal. We can see from these equations that both ${F}^{(1)}$ and ${F}^{(2)}$ have rank p, which will in general also be the case for the linear matrix pencil [28]

Equation (8): ${F}^{(2)}-\gamma \,{F}^{(1)},$

unless $\gamma \in {\mathbb{C}}$ matches an element of the set ${\{{\mu }_{k}\}}_{k=1}^{p}$. Hence, all ${\mu }_{k}$ are solutions of the generalized eigenvalue problem (GEVP)

Equation (9): ${F}^{(2)}\,v=\mu \,{F}^{(1)}\,v$

with $v\in {{\mathbb{C}}}^{N/2}$. The matrix pair $({F}^{(2)},{F}^{(1)})$ is in general regular and accordingly results in $N/2$ generalized eigenvalues [29]—not all of these correspond to a ${\mu }_{k}$. There are different extensions that take care of this issue and increase algorithmic stability (see, e.g., [30]). To make the algorithm accessible to an efficient quantum implementation, we will consider a specific MPM variant, the direct MPM [9]: we make use of the singular value decompositions of ${F}^{(1)}$ and ${F}^{(2)}$, keeping only the nonzero singular values and the corresponding singular vectors,

Equation (10): ${F}^{(i)}={U}^{(i)}{S}^{(i)}{V}^{(i)\dagger },\quad i=1,2,$

with ${S}^{(i)}\in {{\mathbb{C}}}^{p\times p}$ for $i=1,2$. This singular value decomposition of a Hankel matrix of order N × N is the time-critical step of the entire algorithm; it scales as ${\rm{\Theta }}({N}^{2}\mathrm{log}N)$ using state-of-the-art classical algorithms [31, 32]. We apply ${U}^{(1)\dagger }$ from the left and ${V}^{(1)}$ from the right to

Equation (11): ${F}^{(2)}\,v=\mu \,{U}^{(1)}{S}^{(1)}{V}^{(1)\dagger }\,v$

and see that the resulting equivalent GEVP

Equation (12): ${U}^{(1)\dagger }{F}^{(2)}{V}^{(1)}\,w=\mu \,{S}^{(1)}\,w,$

with $w\in {{\mathbb{C}}}^{p}$, yields exactly ${\{{\mu }_{k}\}}_{k=1}^{p}$ as eigenvalues and, via ${\lambda }_{k}=\mathrm{log}({\mu }_{k})/{\rm{\Delta }}t$, the corresponding poles. The eigenvalues can be retrieved in ${\rm{\Theta }}({p}^{3})$ steps using the QZ algorithm [33]. Although in general it can be numerically favorable to solve the GEVP directly [29], ${S}^{(1)}$ is an invertible diagonal matrix and it is in practice sufficient to solve the equivalent ordinary eigenvalue problem

Equation (13): ${({S}^{(1)})}^{-1}{U}^{(1)\dagger }{F}^{(2)}{V}^{(1)}\,w=\mu \,w.$

The coefficients $\{{c}_{k}\}$ are linearly related to the signal and can be obtained by plugging ${\{{\mu }_{k}\}}_{k=1}^{p}$ into an overdetermined Vandermonde equation system,

Equation (14): $W\,c=f,\qquad W:= {({\mu }_{k}^{j-1})}_{j=1,\ldots ,N;\ k=1,\ldots ,p},\quad f:= ({f}_{0},\ldots ,{f}_{N-1}{)}^{T},$

and computing the least squares solution

Equation (15): $c={\mathrm{argmin}}_{\tilde{c}\in {{\mathbb{C}}}^{p}}\parallel W\tilde{c}-f{\parallel }_{2}$

in terms of the 2-norm, $\parallel \cdot {\parallel }_{2}$, e.g. by applying the Moore–Penrose pseudoinverse ${W}^{+}:= {({W}^{\dagger }W)}^{-1}{W}^{\dagger }$ to the signal vector f. Thus, all parameters that determine the signal are reconstructed.
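As a consistency check on the structure used above, the Vandermonde factorizations of the two Hankel matrices (equations (5)–(7)) can be verified numerically. A minimal sketch with hypothetical example values:

```python
import numpy as np

p, N, dt = 2, 16, 0.1
lams = np.array([-0.1 + 2.0j, -0.2 + 3.0j])   # hypothetical example poles
cs = np.array([1.0, 0.5 + 0.5j])
mus = np.exp(lams * dt)
f = (cs * mus ** np.arange(N)[:, None]).sum(axis=1)

n = N // 2
idx = np.arange(n)[:, None] + np.arange(n)[None, :]
F1, F2 = f[idx], f[idx + 1]                   # Hankel matrices (3), (4)

# Vandermonde matrix M and diagonal matrices D_c, D_mu
M = mus[None, :] ** np.arange(n)[:, None]
Dc, Dmu = np.diag(cs), np.diag(mus)

assert np.allclose(F1, M @ Dc @ M.T)          # factorization of F^(1)
assert np.allclose(F2, M @ Dc @ Dmu @ M.T)    # factorization of F^(2)
assert np.linalg.matrix_rank(F1) == p         # both matrices have rank p
```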

3. Quantum implementation

In the following, we describe how to implement an efficient quantum analogue of the MPM.

Algorithm 2. Quantum matrix pencil algorithm.

Data: Discretized signal with components ${f}_{j}={\sum }_{k=1}^{p}{c}_{k}\,{{\rm{e}}}^{{\lambda }_{k}{\rm{\Delta }}t\cdot j}$, $j=0,\ldots ,N-1$,
${c}_{k},{\lambda }_{k}\in {\mathbb{C}},\,{\mathfrak{Re}}({\lambda }_{k})\leqslant 0$ either from QRAM or encoded in a quantum state.
Result: Frequencies ${\{{\lambda }_{k}\}}_{k=1}^{p}$ and coefficients ${\{{c}_{k}\}}_{k=1}^{p}$.
begin
Perform concatenated phase estimations via exponentiating Hermitian matrices ${\widetilde{F}}^{(1)},{\widetilde{F}}^{(2)}$ that contain the matrices ${F}^{(1)}$, ${F}^{(2)}$, respectively, yielding the $p$ biggest singular values and the overlaps $\{\langle {u}_{j}^{(1)}| {u}_{k}^{(2)}\rangle \}$ and $\{\langle {v}_{j}^{(1)}| {v}_{k}^{(2)}\rangle \}$ of the according left and right singular vectors.
Construct the according matrices and solve the eigenvalue problem classically to obtain the poles $\{{\lambda }_{k}\}$.
Build a fitting matrix from the poles and obtain the coefficients $\{{c}_{k}\}$ via quantum linear fitting.

For an efficient quantum algorithm, we assume that the number of poles p is constant and small relative to the number of samples N, which is a natural setting since in practice we are often interested in damped line spectra with few constituents, sampled at high rates for robustness towards noise. The guiding idea is to condense all arrays of size $O(N)$ in equation (13) into arrays of size $O(p)$ by rewriting the first term in equation (12),

as

Equation (16): ${U}^{(1)\dagger }{F}^{(2)}{V}^{(1)}={ \mathcal U }\,{S}^{(2)}\,{{ \mathcal V }}^{\dagger },\qquad {{ \mathcal U }}_{j,k}:= \langle {u}_{j}^{(1)}| {u}_{k}^{(2)}\rangle ,\quad {{ \mathcal V }}_{j,k}:= \langle {v}_{j}^{(1)}| {v}_{k}^{(2)}\rangle ,$

with ${ \mathcal U },{ \mathcal V }\in {{\mathbb{C}}}^{p\times p}$. The singular values $\{{s}_{k}^{(j)}\}$ will be obtained via quantum phase estimation [34, 35], the overlaps $\langle {v}_{k}^{(i)}| {v}_{l}^{(j)}\rangle $ via two concatenated quantum phase estimations. The eigenvalue problem equation (13),

Equation (17): ${({S}^{(1)})}^{-1}\,{ \mathcal U }\,{S}^{(2)}\,{{ \mathcal V }}^{\dagger }\,w=\mu \,w,$

is now determined by $2{p}^{2}$ complex and $2p$ real numbers, and can easily be evaluated classically in ${\rm{\Theta }}({p}^{3})$ operations, yielding the required poles ${\lambda }_{k}=\mathrm{log}({\mu }_{k})/{\rm{\Delta }}t$ for $k=1,\ldots ,p$. Thus, as in other efficient quantum algorithms [36, 37], the classical result is a low-dimensional read-out quantity. Otherwise, the read-out costs would neutralize any performance gain of the algorithm. After that, the poles are used as input for a quantum linear fitting algorithm yielding the coefficients $\{{c}_{k}\}$. In the following, we describe the individual steps of the quantum algorithm in detail. We start by discussing the quantum preparation of the Hankel matrices.
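The compression behind equation (16)—replacing the $N/2$-dimensional product ${U}^{(1)\dagger }{F}^{(2)}{V}^{(1)}$ by a $p\times p$ combination of singular values and singular-vector overlaps—can be checked classically. A sketch with hypothetical example values:

```python
import numpy as np

p, N, dt = 2, 64, 0.1
lams = np.array([-0.1 + 2.0j, -0.2 + 3.0j])   # hypothetical example poles
cs = np.array([1.0, 0.5 + 0.5j])
f = (cs * np.exp(np.outer(dt * np.arange(N), lams))).sum(axis=1)
n = N // 2
idx = np.arange(n)[:, None] + np.arange(n)[None, :]
F1, F2 = f[idx], f[idx + 1]

def tsvd(F, p):
    """Truncated SVD keeping the p dominant singular triplets."""
    U, s, Vh = np.linalg.svd(F)
    return U[:, :p], s[:p], Vh[:p].conj().T

U1, s1, V1 = tsvd(F1, p)
U2, s2, V2 = tsvd(F2, p)

calU = U1.conj().T @ U2        # overlaps <u_j^(1)|u_k^(2)>
calV = V1.conj().T @ V2        # overlaps <v_j^(1)|v_k^(2)>
lhs = U1.conj().T @ F2 @ V1
rhs = calU @ np.diag(s2) @ calV.conj().T
assert np.allclose(lhs, rhs)   # the p x p data reproduce the product exactly
```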

3.1. Accessing the data

In order to realize a quantum speedup, the signal has to be accessible in a fast and coherent way—otherwise, the read-in process alone would be too costly. The data input for the matrix pencil algorithm consists of a time series $({f}_{j}{)}_{j=0}^{N-1}$. We consider two crucially different approaches of data access/availability for the quantum algorithm, with the main focus of this work being on the first approach:

  • (i)  
    The signal is stored in a quantum accessible form such as quantum RAM. In other words, we are provided with access to the operation
    Equation (18): $| j\rangle | 0\rangle \mapsto | j\rangle | {f}_{j}\rangle $
    for $j=0,\ldots ,N-1$, with the signal values encoded in binary form in the second quantum register. In order to create the Hankel matrices ${F}^{(i)}=({f}_{j+k+i-3}{)}_{j,k=1}^{N/2}\in {{\mathbb{C}}}^{N/2\times N/2}$, $i=1,2$, we can perform the following operation with straightforward index manipulations,
    Equation (19): $| j\rangle | k\rangle | 0\rangle | i\rangle \mapsto | j\rangle | k\rangle | {f}_{j+k+i-3}\rangle | i\rangle $
    for $j,k=1,\ldots ,N/2$. The ancilla prepared in $| i\rangle $, $i=1,2$, will be used in an entirely classical manner. This operation can be used to simulate Hankel matrices via the non-sparse matrix simulation methods of [24, 38]. One way to implement signal access in equation (18) is via QRAM [21, 22]. As discussed in [21, 22], the expected number of hardware elements that are activated in a QRAM call is $O(\mathrm{poly}\ \mathrm{log}N)$. For each memory call, the amount of required energy and created decoherence thus scales logarithmically with the memory size. Note that because of their peculiar structure, $(N\times N)$-Hankel matrices require only $O(N)$ elements to be stored. In comparison, a general s-sparse matrix requires storage of $O({Ns})$ elements.
  • (ii)  
    As a second approach, we are given multiple copies of particular quantum state vectors encoding the data in their amplitudes. This approach does not require quantum RAM and operates using the quantum principal component analysis algorithm (QPCA). Importantly, our method then compares to the QFT in the sense that it operates on a given initial state that contains the data to be transformed. The states that are processed by QPCA correspond to positive semidefinite matrices, which is in general not the case for the Hankel matrices ${F}^{(i)}$. Adding a sufficiently scaled unit matrix would enforce positivity, but the resulting matrix would not have the required low rank anymore. It turns out, however, that by employing a new type of extended matrix, we can use QPCA to compute singular value decompositions of indefinite matrices and make it applicable to our algorithm, as is fleshed out in appendix B. The given state vectors have to be of a particular form such as
    Equation (20)
    with ${C}^{(i)}=(\parallel {F}^{(i)}{\parallel }_{2}^{2}+{a}^{(i)2}\parallel {F}^{(i)\dagger }{F}^{(i)}{\parallel }_{2}^{2})$ and a known scaling constant ${a}^{(i)}$ such that ${({a}^{(i)})}^{-1}=O({\max }_{j,k}| {({F}^{(i)\dagger }{F}^{(i)})}_{j,k}| )$, where $\parallel {F}^{(i)}{\parallel }_{2}$ is the Frobenius norm of ${F}^{(i)}$. This state includes in its amplitudes information about the Hankel matrix ${F}^{(i)}$ and ${F}^{(i)\dagger }{F}^{(i)}$. The particular form of $| {\chi }^{(i)}\rangle $ will become clear in the next section. The advantages of the matrix pencil algorithm over the usual Fourier transform come at a price in the quantum algorithm: we require availability of the state vectors $| {\chi }^{(i)}\rangle $ instead of the signal state vector ${\sum }_{j}{f}_{j}| j\rangle $.
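The index arithmetic behind the Hankel-matrix oracle of equation (19) is purely classical; a minimal sketch (the function name is illustrative, not from the paper):

```python
# Classical sketch of the index arithmetic of equation (19): entry (j, k)
# of F^(i) is the sample f_{j+k+i-3} (1-based indices j, k; i = 1, 2).
def hankel_entry(f, j, k, i):
    return f[j + k + i - 3]

f = list(range(8))          # placeholder signal, N = 8
n = len(f) // 2
F1 = [[hankel_entry(f, j, k, 1) for k in range(1, n + 1)] for j in range(1, n + 1)]
F2 = [[hankel_entry(f, j, k, 2) for k in range(1, n + 1)] for j in range(1, n + 1)]

# First rows: F1 starts at f_0, F2 at f_1 (the shifted signal)
assert F1[0] == [0, 1, 2, 3] and F2[0] == [1, 2, 3, 4]
```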

In the next section, we show how the operation in equation (18) or, alternatively, multiple copies of $| {\chi }^{(i)}\rangle $ can be used to efficiently simulate a Hermitian matrix that encodes the eigenvalues and associated eigenvectors of the Hankel matrices.

3.2. Simulating the Hankel matrices

We would like to obtain the singular values and vectors of ${F}^{(1)}$ and ${F}^{(2)}$ with a quantum speedup via phase estimation, which for real signals correspond, up to signs, to their eigenvalues and vectors. Since the procedure is the same for ${F}^{(1)}$ and ${F}^{(2)}$, for clarity we will drop the index in this section and use F for both matrices. Phase estimation requires the repeated application of powers of a unitary operator generated by a Hermitian matrix to find the eigenvalues and eigenvectors of that matrix. Thus, we need to connect both Hankel matrices, generally non-Hermitian, to Hermitian matrices. Depending on the input source discussed in the previous section, this is done in different ways.

Generally, since F is not sparse, we cannot make use of the sparse simulation techniques described in [39]. Although both matrices have low rank $p\ll N$, they will in general not be positive semidefinite, so that QPCA [20] cannot readily be used either. Note that although ${F}^{\dagger }F$ and ${{FF}}^{\dagger }$ are positive semidefinite, provide the correct singular vectors of F, and can be efficiently exponentiated, the phase relations between left and right singular vectors, which are necessary for the matrix pencil algorithm, are not preserved. This insight can be taken as yet another motivation to look for more general efficient methods to exponentiate matrices that exhibit a suitable structure, such as being low-rank, sparse, or having a low tensor rank.

For the oracular setting (i), we construct a Hermitian matrix $\widetilde{F}$ and apply the unitary operator ${{\rm{e}}}^{-{\rm{i}}\widetilde{F}t}$ to an initial quantum state. Hereto, we employ the 'extended matrix'

Equation (21): $\widetilde{F}:= \left(\begin{array}{cc}0 & F\\ {F}^{\dagger } & 0\end{array}\right)\in {{\mathbb{C}}}^{N\times N},$

which is Hermitian by construction. Its eigenvalues correspond to the singular values $\pm {s}_{j},j=1,\ldots ,N/2$, of F and its eigenvectors are proportional to $({u}_{j},\pm {v}_{j})\in {{\mathbb{C}}}^{N}$. Importantly, the phase relations between left and right singular vectors are preserved. Note that an oracle for the extended matrix analogous to equation (18) can easily be constructed from equation (18) itself. The method developed in [24] allows us to exponentiate non-sparse Hermitian matrices in this oracular setting. Following their discussion, each entry provided by equation (19) is mapped to the corresponding entry of a modified swap matrix ${S}_{\widetilde{F}}$, resulting in the matrix

Equation (22): ${S}_{\widetilde{F}}={\sum }_{j,k=1}^{N}{\widetilde{F}}_{j,k}\,| k\rangle \langle j| \otimes | j\rangle \langle k| .$

In [24], it is shown that performing infinitesimal swap operations with ${S}_{\widetilde{F}}$ on an initial state $\rho \otimes \sigma $ with auxiliary state $\rho := \frac{1}{N}{\sum }_{j,k=1}^{N}| j\rangle \langle k| $ is equivalent to just evolving σ in time with the Hamiltonian $\widetilde{F}$ for small ${\rm{\Delta }}t\gt 0$, i.e.

Equation (23): ${\mathrm{tr}}_{1}\{{{\rm{e}}}^{-{\rm{i}}{S}_{\widetilde{F}}{\rm{\Delta }}t}(\rho \otimes \sigma )\,{{\rm{e}}}^{{\rm{i}}{S}_{\widetilde{F}}{\rm{\Delta }}t}\}=\sigma -\frac{{\rm{i}}\,{\rm{\Delta }}t}{N}[\widetilde{F},\sigma ]+O({\rm{\Delta }}{t}^{2}).$

The modified swap matrix ${S}_{\widetilde{F}}$ is one-sparse within a quadratically larger space and can be efficiently exponentiated with the methods in [39–41] with a constant number of oracle calls and run time $\widetilde{O}(\mathrm{log}N)$, where we use the symbol $\widetilde{O}$ to omit polylogarithmic factors. Achieving an accuracy $\varepsilon \gt 0$ for the eigenvalues requires

Equation (24)

steps in the algorithm [24], where $\parallel \widetilde{F}{\parallel }_{\max }$ denotes the maximal absolute element of $\widetilde{F}$. The phase estimation is performed as discussed in [42] to obtain the $1/{\varepsilon }^{2}$ scaling compared to the $1/{\varepsilon }^{3}$ scaling of the original work [20, 24]. Note that in our setting $| {\widetilde{F}}_{j,k}| ={\rm{\Theta }}(1)$ and in particular $\parallel \widetilde{F}{\parallel }_{\max }={\rm{\Theta }}(1)$. The run time is the number of steps multiplied by the run time of the swap matrix simulation, i.e. $\widetilde{O}(\mathrm{log}N/{\varepsilon }^{2})$. In appendix A, we discuss an alternative approach for efficient matrix exponentiation developed in [38], and check its applicability to our algorithm.
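The first-order reduced dynamics generated by the modified swap matrix can be verified numerically in small dimension. The sketch below uses a random Hermitian stand-in for $\widetilde{F}$ and checks that, with the maximally coherent auxiliary state, the effective evolution is generated by $\widetilde{F}/N$ (the $1/N$ factor follows from the uniform entries of ρ):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A + A.conj().T                       # Hermitian stand-in for F-tilde

# Modified swap matrix: S = sum_{j,k} A_{jk} |k><j| (x) |j><k|
S = np.zeros((n * n, n * n), dtype=complex)
for j in range(n):
    for k in range(n):
        S[k * n + j, j * n + k] = A[j, k]
assert np.allclose(S, S.conj().T)        # Hermitian since A is

rho = np.full((n, n), 1 / n, dtype=complex)   # uniform auxiliary state
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
sigma = B @ B.conj().T
sigma /= np.trace(sigma)                 # arbitrary test state

dt = 1e-5
w, V = np.linalg.eigh(S)                 # exponentiate the Hermitian S
U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
joint = U @ np.kron(rho, sigma) @ U.conj().T
reduced = joint.reshape(n, n, n, n).trace(axis1=0, axis2=2)  # trace out system 1

# First-order effect: sigma - (i*dt/n) [A, sigma]
target = sigma - 1j * dt / n * (A @ sigma - sigma @ A)
assert np.linalg.norm(reduced - target) < 1e-7
```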

In setting (ii), where we are given multiple copies of state vectors, we proceed in a different way employing QPCA. The state vector $| \chi \rangle $ can be reduced to a particular quantum density matrix as

Equation (25)

with quantities $C=(\parallel F{\parallel }_{2}^{2}+{a}^{2}\parallel {F}^{\dagger }F{\parallel }_{2}^{2})$ and ${a}^{-1}=O({\max }_{j,k}| {({F}^{\dagger }F)}_{j,k}| )$ as before. In the same way,

Equation (26)

can be prepared from a permuted state vector $| \widetilde{\chi }\rangle $. The matrix

Equation (27)

is positive semidefinite with unit trace by construction, just as required by QPCA. Invoking the singular value decomposition $F={{USV}}^{\dagger }$, its eigenvalues in terms of the singular values of F are given by ${s}_{j}^{2}{({{as}}_{j}\pm 1)}^{2}/(2C)$, and its eigenvectors are $({u}_{j},\pm {v}_{j})\in {{\mathbb{C}}}^{N}$. The matrix Z has twice the rank of F. The application of QPCA then allows resolving its eigenvalues to an accuracy $\varepsilon \gt 0$ using

Equation (28)

copies of $| \chi \rangle $ and $| \widetilde{\chi }\rangle $ [20] for a total run time of again $\widetilde{O}(\mathrm{log}N/{\varepsilon }^{2})$. In appendix B, we provide further details on this method.

Both the oracular and the QPCA setting can be employed in quantum phase estimation to obtain the singular values and associated singular vectors of the Hankel matrices in quantum form. Phase estimation allows the preparation of

Equation (29): ${\sum }_{k}{\beta }_{k}\,| {u}_{k},{v}_{k}\rangle \,| {s}_{k}\rangle ,$

where $F={{USV}}^{\dagger }$ is the singular value decomposition with left and right singular vectors uk and vk, respectively. The associated singular value sk is encoded in a register. The ${\beta }_{k}$ arise from the choice of the initial state. The next section describes concretely how consecutive phase estimation steps are used for the matrix pencil algorithm as a building block to obtain the signal poles and expansion coefficients.
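Both settings rest on the spectral structure of the extended matrix of equation (21): eigenvalues $\pm {s}_{j}$ with eigenvectors proportional to $({u}_{j},\pm {v}_{j})$. This can be checked numerically on a small random non-Hermitian matrix (a minimal classical sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Hermitian 'extended matrix': block off-diagonal embedding of F
Ft = np.block([[np.zeros((n, n)), F],
               [F.conj().T, np.zeros((n, n))]])
assert np.allclose(Ft, Ft.conj().T)

# Its spectrum is {+s_j, -s_j} for the singular values s_j of F
s = np.linalg.svd(F, compute_uv=False)
eigs = np.linalg.eigvalsh(Ft)
assert np.allclose(np.sort(eigs), np.sort(np.concatenate([-s, s])))

# The eigenvector for +s_1 is proportional to (u_1, v_1)
U, s2, Vh = np.linalg.svd(F)
vec = np.concatenate([U[:, 0], Vh[0].conj()]) / np.sqrt(2)
assert np.allclose(Ft @ vec, s2[0] * vec)
```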

3.3. Twofold phase estimation

In this section, we describe how to obtain the singular vector overlaps $\{{{ \mathcal U }}_{j,k}\}$ and $\{{{ \mathcal V }}_{j,k}\}$. Hereto, we perform two concatenated phase estimation procedures to obtain states that encode these overlaps in their amplitudes, which are then essentially determined by tomography. It is important to pay attention to the correct phase relations between the overlaps. Phase estimation is applied to a specific initial state and an additional eigenvalue register. Initial states with large overlap with the eigenstates of $\widetilde{F}$, equation (21), or Z, equation (27), respectively, can be prepared efficiently. For example, ${{FF}}^{\dagger }/\mathrm{tr}({{FF}}^{\dagger })\otimes | 0\rangle \langle 0| $ or ${F}^{\dagger }F/\mathrm{tr}({F}^{\dagger }F)\otimes | 1\rangle \langle 1| $ are suitable initial states and can be prepared from the oracle in equation (18) [20]. For both initial states, the overlap with an eigenvector $| {u}_{k},{v}_{k}\rangle $ is ${\sigma }_{k}^{2}/(2{\sum }_{j}{\sigma }_{j}^{2})$. Alternatively, if we have been given multiple copies of $| \chi \rangle $, we can simply take Z to be the initial state [20].

We append two registers for storing the singular values to the initial state, obtaining $| 0\rangle | 0\rangle | {\psi }_{0}\rangle $ with the notation $| 0\rangle := | 0,\ldots ,0\rangle $, and perform the phase estimation algorithm with ${{\rm{e}}}^{-{\rm{i}}{S}_{{\tilde{F}}^{(2)}}{\rm{\Delta }}t}$ as a unitary operator to obtain a state proportional to

Equation (30)

where for clarity we order the eigenspaces such that positive singular values are put first, i.e. ${s}_{k+p}^{(2)}=-{s}_{k}^{(2)}$, ${u}_{k+p}^{(2)}={u}_{k}^{(2)}$, and ${v}_{k+p}^{(2)}=-{v}_{k}^{(2)}$ for $k=1,\ldots ,p$. To obtain the overlaps of the matrices ${U}^{(1)}$ and ${U}^{(2)}$, the v-part of the eigenvector of ${\widetilde{F}}^{(2)}$ is projected out, yielding

Equation (31): $| {\psi }_{1}\rangle ={\nu }_{1}\,{\sum }_{k=1}^{2p}{g}_{k}\,| 0\rangle \,| {s}_{k}^{(2)}\rangle \,| {u}_{k}^{(2)},0\rangle ,$

with normalization factor ${\nu }_{1}\in {{\mathbb{R}}}_{+}$ and ${\sum }_{k=1}^{2p}| {g}_{k}{| }^{2}=1$. Each singular value ${s}_{k}^{(2)}\in {{\mathbb{R}}}_{+}$ can be determined efficiently from this state with accuracy ${\varepsilon }_{\sigma }$ in a runtime of $\tilde{O}(\mathrm{log}N/{\varepsilon }_{\sigma }^{2})$ (see section 3.2). We also need to determine the amplitudes $\{{g}_{k}\}$, whose contribution has to be divided out of the overlap values. For this, we essentially perform standard tomography of the quantum state in equation (31). The singular-value register states ${\{| {s}_{k}^{(2)}\rangle \}}_{k=1}^{2p}$ are pairwise orthogonal, so that the amplitudes ${\{{g}_{k}\}}_{k=1}^{2p}$ can be efficiently obtained—up to a global complex phase ${{\rm{e}}}^{{\rm{i}}{\vartheta }_{1}}$—via measurements e.g. of the form

Equation (32)

with probabilities

Equation (33)

respectively. Suppose ${g}_{{k}_{1}}$ is known. Then ${g}_{{k}_{2}}$ can easily be obtained from equation (33). Hence, by fixing one global phase ${{\rm{e}}}^{{\rm{i}}{\vartheta }_{1}}$ (e.g. corresponding to ${g}_{1}\mathop{=}\limits^{!}+| {g}_{1}| \,$), all values ${\{{g}_{k}\}}_{k=1}^{2p}$ are unambiguously determined. Requiring an accuracy

Equation (34)

of the probabilities in equation (33) for $k=1,\ldots ,p$, denoting expected value and variance by ${\mathbb{E}}$ and ${\mathbb{V}}$, respectively, and with ${\xi }_{g}$ the reciprocal of the smallest probability, we require $O({\xi }_{g}/{\varepsilon }_{g}^{2})$ measurement repetitions for each amplitude. We have thus established the values

Equation (35)

Next, the state vector $| {\psi }_{1}\rangle $ is used as input for a second phase estimation procedure with ${{\rm{e}}}^{-{\rm{i}}{S}_{{\widetilde{F}}^{(1)}}{\rm{\Delta }}t}$ as unitary operator, yielding

Equation (36)

with normalization factor ${\nu }_{2}\in {{\mathbb{R}}}_{+}$ and ${\sum }_{j,k=1}^{2p}| {h}_{j,k}{| }^{2}=1$. The inner product $\langle {u}_{j}^{(1)},{v}_{j}^{(1)}| {u}_{k}^{(2)},0\rangle $ reduces to $\langle {u}_{j}^{(1)}| {u}_{k}^{(2)}\rangle $ with vectors in ${{\mathbb{C}}}^{N}$. In the same way as above, we determine the singular values $\{{s}_{j}^{(1)}\}$ and the values

Equation (37)

up to ${\varepsilon }_{h}$ with global phase ${{\rm{e}}}^{{\rm{i}}{\vartheta }_{2}}$ with $O({\xi }_{h}/{\varepsilon }_{h}^{2})$ repetitions for each amplitude. Dividing the values in equation (37) by the ones in equation (35), we obtain

Equation (38)

with ${\vartheta }_{{ \mathcal U }}:= {\vartheta }_{2}-{\vartheta }_{1}$, ${\nu }_{{ \mathcal U }}:= {\nu }_{1}/{\nu }_{2}$ and accuracy $\sim {\varepsilon }_{g}+{\varepsilon }_{h}$. The established overlaps

Equation (39)

correspond to the same matrix entry of ${ \mathcal U }$ for $j,k=1,\ldots ,p$ and can be averaged over. This way, the matrix ${ \mathcal U }$ is determined up to a global phase and a normalization factor. Repeating the entire procedure, but with projecting out the u-part,

Equation (40)

yields all overlaps ${\{\langle {v}_{j}^{(1)}| {v}_{k}^{(2)}\rangle \}}_{j,\,k=1}^{p}$, the entries of ${ \mathcal V }$, up to a factor ${\nu }_{{ \mathcal V }}\,{{\rm{e}}}^{{\rm{i}}{\vartheta }_{{ \mathcal V }}}$. Note that

Equation (41)

for $j,k=1,\ldots ,p$ because the v-parts of the ${\widetilde{F}}^{(i)}$ eigenvectors from $k=1,\ldots ,p$ and $k=p+1,\ldots ,2p$ have opposite signs. For real-valued signals and Hermitian ${F}^{(i)}$, we can perform the procedure with ${{\rm{e}}}^{-{\rm{i}}{S}_{{F}^{(i)}}{\rm{\Delta }}t}$ instead of ${{\rm{e}}}^{-{\rm{i}}{S}_{{\widetilde{F}}^{(i)}}{\rm{\Delta }}t}$ and do not need to project the u- and v-parts.

In summary, we have determined the singular values forming matrix ${S}^{(i)}$ to accuracy ${\varepsilon }_{\sigma }$ in time $\widetilde{O}(p/{\varepsilon }_{\sigma }^{2})$. In addition, we have determined the overlaps of the right and left singular vectors of the two Hankel matrices ${F}^{(1)}$ and ${F}^{(2)}$. The required number of repetitions is

Equation (42)

for obtaining the entries of ${ \mathcal U }$ and analogously ${n}_{{ \mathcal V }}$ for obtaining the entries of ${ \mathcal V }$. With

Equation (43)

for the cost of the phase estimation, this leads to a total run time of

Equation (44)

with $\xi := \max \{{\xi }_{g},{\xi }_{h}\}$. The performance scales as $n=O(\mathrm{poly}\ \mathrm{log}\ N)$ for example in the following regime: first, the number of poles is small compared to N, which is a natural regime, as mentioned above; second, regarding ξ, if the overlaps are not too small, $\xi =O(\mathrm{poly}\ \mathrm{log}\ N);$ and third, an error $1/\varepsilon =O(\mathrm{poly}\ \mathrm{log}\ N)$ can be tolerated.

3.4. Solving the small classical problem

Having determined the values via phase estimation, the reconstructed eigenvalue equation (17) now reads

Equation (45)

All (scaled) matrix entries of equation (45) are available classically and we can solve the problem with a classical algorithm [33] running in time $O({p}^{3})$. The errors in the matrix entries are amplified within the entries of the matrix product ${\hat{{ \mathcal F }}}_{j,k}$ by a factor of $\mathrm{poly}\ p$ at worst. Taking the inverse of ${S}^{(1)}$ amounts to inverting its diagonal entries, hence the relative errors of $({S}^{(1)}{)}_{j,j}^{-1}$ are unchanged. These errors remain small only if the effective singular values of ${F}^{(1)}$ (the ones bigger than a suitable threshold ${\theta }_{1}$) are sufficiently far from zero, resulting in a condition number of ${S}^{(1)}$ bounded by ${\max }_{j}({S}_{j,j}^{(1)})/{\theta }_{1}$. ${ \mathcal F }$ as well as the perturbed matrix $\hat{{ \mathcal F }}={ \mathcal F }+{\rm{\Delta }}{ \mathcal F }$ will in general not be normal, but diagonalizable: ${ \mathcal F }=X\,\mathrm{diag}({\lambda }_{j})\,{X}^{-1}$. According to the Bauer–Fike theorem [43], we can order the eigenvalues $\{{\hat{\lambda }}_{j}\}$ of $\hat{{ \mathcal F }}$ such that

Equation (46)

for $j=1,\ldots ,p$, where $\kappa (X):= \parallel X{\parallel }_{2}\parallel {X}^{-1}{\parallel }_{2}$ is the condition number of X, which quantifies how strongly the matrix perturbation is amplified in the perturbation of the eigenvalues. The matrix perturbation contributes linearly, while the condition number of X, which is independent of the perturbation ${\rm{\Delta }}{ \mathcal F }$, is related to the conditioning of the underlying inverse spectral estimation problem. This problem could in principle be ill-conditioned (e.g. for the reconstruction of spectral components that are extremely small or highly damped relative to the others), but we are more concerned with problems that are also of interest in the classical world and hence sufficiently well-conditioned. Note that p, the number of poles, is small by assumption, so that this classical step does not pose a computational bottleneck for the algorithm. For noisy signals, the rank of ${F}^{(i)}$ will in general be larger than p; ${F}^{(i)}$ could even be of full rank. For not too large noise, however, the additional noise components remain small, so that the effective rank is still p. Since only the largest components of ${F}^{(i)}$ are taken into account, this yields a rank-p approximation that is optimal in the Frobenius norm (Eckart–Young theorem [44]) and amounts to an effective noise filtering of the underlying signal.
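The noise-filtering effect of the rank-p truncation can be illustrated with a minimal classical sketch (all signal parameters below are ours, chosen for illustration and not taken from the paper): truncating the SVD of a noisy Hankel matrix to its p largest singular values yields, by the Eckart–Young theorem, the best rank-p approximation in Frobenius norm and moves the matrix closer to its noiseless counterpart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-component damped signal (p = 2); values are illustrative.
N = 64
t = np.arange(N)
poles = np.array([-0.02 + 0.9j, -0.05 + 0.4j])
coeffs = np.array([1.0, 0.7])
f = (coeffs[None, :] * np.exp(np.outer(t, poles))).sum(axis=1)
f_noisy = f + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Hankel matrix of the noisy signal; full rank in general.
L = N // 2
F = np.array([[f_noisy[j + k] for k in range(L)] for j in range(L)])

# Keep only the p largest singular values (Eckart-Young: best rank-p
# approximation in Frobenius norm) -- this filters the noise.
U, s, Vh = np.linalg.svd(F)
p = 2
F_p = U[:, :p] @ np.diag(s[:p]) @ Vh[:p, :]

# Compare distances to the noiseless Hankel matrix.
F_clean = np.array([[f[j + k] for k in range(L)] for j in range(L)])
err_noisy = np.linalg.norm(F - F_clean)
err_truncated = np.linalg.norm(F_p - F_clean)
assert err_truncated < err_noisy
```

The gap between the pth and the (p+1)th singular value also makes the effective rank visible in the noisy case.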

The eigenvalues ${\gamma }_{k}$ of equation (45) are determined only up to a factor ${{\rm{e}}}^{-{\rm{i}}({\varphi }_{{ \mathcal U }}+{\varphi }_{{ \mathcal V }})-\mathrm{log}({\nu }_{{ \mathcal U }}{\nu }_{{ \mathcal V }})}$, which corresponds to a uniform translation of all poles. We can take care of this ambiguity by introducing an additional reference pole ${\lambda }_{\mathrm{ref}}:= 0$ (corresponding to the eigenvalue ${\mu }_{\mathrm{ref}}=1$) that has to be incorporated into the original signal. This can easily be achieved by adding any constant to the original signal vector (its normalizability is not affected). Since ${\mathfrak{Re}}({\lambda }_{k})\leqslant 0$ holds for each k for exponentially damped signals, the eigenvalue ${\gamma }_{\mathrm{ref}}$ corresponding to the reference pole remains identifiable as the one with the largest absolute value $| {\gamma }_{k}| $. Simply dividing all ${\gamma }_{k}$ by ${\gamma }_{\mathrm{ref}}$ (corresponding to the transformation ${\lambda }_{k}{\rm{\Delta }}t\mapsto {\lambda }_{k}{\rm{\Delta }}t+{\rm{i}}({\varphi }_{{ \mathcal U }}+{\varphi }_{{ \mathcal V }})+\mathrm{log}({\nu }_{{ \mathcal U }}{\nu }_{{ \mathcal V }})$ for each k) then yields the correct values $\{{\mu }_{k}\}$ and poles.
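The reference-pole normalization can be sketched classically as follows (the numerical values are illustrative, not from the paper): an unknown common factor on all eigenvalues is removed by dividing by the eigenvalue of largest modulus, which belongs to the reference pole $\lambda_{\mathrm{ref}}=0$.

```python
import numpy as np

# Illustrative reference-pole trick: the eigenvalues gamma_k carry an
# unknown common factor exp(-i(phi_U + phi_V) - log(nu_U nu_V)).
dt = 0.1
poles = np.array([-0.3 + 2.0j, -0.1 - 1.0j])   # lambda_k with Re(lambda_k) <= 0
poles = np.append(poles, 0.0)                  # reference pole lambda_ref = 0
mu = np.exp(poles * dt)                        # true eigenvalues, mu_ref = 1

unknown = np.exp(-1j * 0.7 - 0.2)              # arbitrary common distortion
gamma = unknown * mu

# Re(lambda_k) <= 0 implies |mu_k| <= 1 = |mu_ref|, so the reference
# eigenvalue is the one with the largest modulus.
gamma_ref = gamma[np.argmax(np.abs(gamma))]
mu_rec = gamma / gamma_ref
assert np.allclose(mu_rec, mu)
```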

3.5. Quantum linear fitting

We feed the poles back into the quantum world by using the quantum fitting algorithm described in [25, 26] to obtain the coefficients $\{{c}_{k}\}$ in $O(\mathrm{log}(N)p)$ steps and hence the entire parametrization of the input function. We consider real and imaginary parts of the signal f, the poles ${\lambda }_{k}{\rm{\Delta }}t=: -{\alpha }_{k}+{\rm{i}}\,{\beta }_{k}$ and the coefficients ${c}_{k}={a}_{k}+{\rm{i}}\,{b}_{k}$ separately, and equation (14) becomes

Equation (47)

with

and $\widetilde{N}:= N-1$. The vector 2-norm of the kth column of $\widetilde{W}$ can be established in closed form as

Equation (48)

Hence, $\parallel \widetilde{W}{\parallel }_{2}$ can be computed in time $O(p)$. We will rescale the solution for c such that we can assume $\parallel \widetilde{W}{\parallel }_{2}=1$. For real-valued signals, the norm $\parallel \widetilde{W}{\parallel }_{2}$ can likewise be calculated by combining the norms of the kth and the $(k+p)$th columns. Since each row consists of $2p$ elements, the row norms can be computed in $O(p)$ as well.
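Although equation (48) is not reproduced here, the closed form is a geometric series: with entries $w_{j,k}=\mathrm{e}^{\lambda_k j{\rm{\Delta }}t}$ and $\lambda_k{\rm{\Delta }}t=-\alpha_k+{\rm{i}}\beta_k$, the squared column 2-norm is $\sum_{j=0}^{N-1}\mathrm{e}^{-2\alpha_k j}=(1-\mathrm{e}^{-2\alpha_k N})/(1-\mathrm{e}^{-2\alpha_k})$. This reconstruction (with illustrative values) can be checked numerically:

```python
import numpy as np

# Sketch of the geometric-series closed form behind equation (48);
# the exact expression in the paper is not reproduced, but the column
# 2-norm of W with w_{jk} = exp(lambda_k * j * dt) reduces to a
# geometric sum, computable in O(1) per column.
N = 1000
alpha, beta = 0.013, 2.4                 # illustrative pole: lambda*dt = -alpha + i*beta
j = np.arange(N)
col = np.exp((-alpha + 1j * beta) * j)   # k-th column of W

direct = np.linalg.norm(col) ** 2
closed_form = (1 - np.exp(-2 * alpha * N)) / (1 - np.exp(-2 * alpha))
assert np.isclose(direct, closed_form)
```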

Since $\alpha := ({\alpha }_{k}),\beta := ({\beta }_{k})$ are known, we can construct a quantum oracle, providing quantum access to the matrix entries ${w}_{j,k}(\alpha ,\beta )$,

Equation (49)

The matrix $\widetilde{W}$ can be prepared as a state vector

Equation (50)

following the procedure described in [26] with time $\widetilde{O}(\mathrm{poly}\ \mathrm{log}(N)\,p\,{\xi }_{W}\mathrm{log}(1/\zeta ))$, where ζ is the accuracy of the preparation of $| w\rangle $ and

Equation (51)

Here, we set $\widetilde{O}(g(N)):= O(g(N)\mathrm{poly}\ \mathrm{log}(g(N)))$ for functions g. For the preparation of $| \tilde{f}\rangle $, we require time $\widetilde{O}(\mathrm{poly}\ \mathrm{log}(N)\,{\xi }_{f}\mathrm{log}(1/\zeta ))$ with

Equation (52)

With $| w\rangle $ and $| f\rangle $ prepared, we then can proceed as described in [26, theorems 2 and 3] and obtain with probability greater than 2/3 an estimate $\hat{c}$ in time $\widetilde{O}(\mathrm{poly}\ \mathrm{log}(N)\,{\kappa }_{W}{p}^{3/2}(\sqrt{2p}\,{\xi }_{f}/\varepsilon +{\kappa }_{W}^{2}{\xi }_{f}/{\rm{\Phi }}+{\kappa }_{W}^{6}{(2p)}^{5}{\xi }_{W}/{\varepsilon }^{4}{\rm{\Phi }})/\varepsilon {\rm{\Phi }})$, with 2-norm accuracy ε, condition number ${\kappa }_{W}=\parallel \widetilde{W}{\parallel }_{2}\parallel {\widetilde{W}}^{+}{\parallel }_{2}$, and norm Φ of the projection of $\tilde{f}$ onto the column space of $\widetilde{W}$, which quantifies the fit quality. Importantly, we can estimate the quality of the fit in time $\widetilde{O}(\mathrm{poly}\ \mathrm{log}(N)({\xi }_{f}+{\xi }_{W}{(2p)}^{3}{\kappa }_{W}^{4}/\varepsilon )/\varepsilon )$. Note that sampling $\hat{c}$ is efficient because it comprises $O(p)$ components. Altogether, we have determined the sought-after coefficients and hence all parameters that characterize the signal f in time $O(\mathrm{poly}\ \mathrm{log}\ N)$. This concludes the description of the quantum matrix pencil algorithm.
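For intuition, the classical analogue of this final step is an ordinary linear least-squares problem: once the poles are known, the coefficients follow from a linear fit. A minimal sketch with illustrative parameters, using a classical solver in place of the quantum algorithm of [25, 26]:

```python
import numpy as np

# Classical analogue of the fitting step: recover c from f = W c,
# where W is built from the (already established) poles.
N, dt = 200, 0.05
poles = np.array([-0.2 + 3.0j, -0.4 - 1.5j])   # illustrative lambda_k
c_true = np.array([1.0 + 0.5j, -0.3 + 0.2j])   # illustrative coefficients

j = np.arange(N)
W = np.exp(np.outer(j * dt, poles))            # N x p, columns e^{lambda_k t_j}
f = W @ c_true                                 # noiseless samples

c_hat, *_ = np.linalg.lstsq(W, f, rcond=None)  # least-squares fit
assert np.allclose(c_hat, c_true)
```

As in the quantum case, the accuracy of the recovered coefficients is governed by the conditioning of $\widetilde{W}$.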

4. Summary and discussion

We have developed a quantum implementation of an important algorithm for spectral estimation, the MPM, taking a tool from signal processing to the quantum world and significantly improving upon the required effort. Given the arguable scarcity of quantum algorithms with this feature, progress in this respect seems highly desirable. The quantum MPM is a useful alternative to the quantum Fourier transform (QFT) in many practical applications such as imaging or the simulation of atomic systems, in the same way that the classical MPM and related algorithms are useful alternatives to the classical Fourier transform. This is especially the case for signals with closely spaced damped poles and limited total sampling time. The presented algorithm can be applied to classical data to solve the classical problem at hand.

For a signal given by N equidistant samples, we have made use of the fact that the eigenvalue problem in equation (17) consists of large matrices of size $O(N)$ that can, however, be contracted into manageable matrices of size $O(p)$ via concatenated quantum phase estimations in $O(\mathrm{poly}\ \mathrm{log}N)$. This justifies the use of a quantum version of the MPM as opposed to quantum versions of related algorithms like Prony's method, where the p quantities leading to the corresponding poles are determined in a later step, during the fitting of the coefficients, so that the critical step would already require $O(\mathrm{poly}\,N)$.

The quantum phase estimation was shown to be implementable in two complementary ways: either by retrieving the input signal via quantum oracle calls such as quantum RAM, or by using multiple copies of a state with the signal encoded in its amplitudes for QPCA. The developed extended matrix construction for indefinite matrices significantly expands the set of matrices that can be exponentiated via QPCA. Since QPCA has so far relied solely on positive semidefinite matrices, we expect this to be a useful new primitive for other quantum algorithms as well.

The actual step of determining the poles from an eigenvalue problem of a p × p matrix can be performed classically, since p is assumed to be small. Subsequently, feeding the established poles back into a quantum fitting algorithm allows the coefficients of the signal to be determined efficiently in $\widetilde{O}(\mathrm{poly}\ \mathrm{log}N)$ as well. In this way, we obtain an effective division of labor between classical and quantum algorithms, making an efficient hybrid algorithm possible. Such classical intermediate steps are reminiscent, for example, of quantum error correction, where error syndromes are measured and the quantum state is processed according to the classical measurement results [45].

In order to create an efficient quantum algorithm, it is essential to address certain caveats, which are succinctly listed in Aaronson [46] using the example of the groundbreaking work in [47]: both in the QRAM and in the QPCA setting, the input data can be accessed quickly enough, and the Hankel matrices can be exponentiated efficiently, owing to their sparsity in a quadratically larger space or to fulfilling the QPCA requirements, respectively. For this, it is necessary that the entries of the Hankel matrices and hence the input signal have a similar magnitude ${\rm{\Theta }}(1)$. Furthermore, for twofold phase estimation, as for general phase estimation, we need to be able to prepare initial states that provide sufficiently large overlap with the states we use for further processing. In the QRAM setting as well as in the QPCA setting, one can employ initial states that are closely related to the input signal. Analogously, the overlaps in the matrices ${ \mathcal U }$ and ${ \mathcal V }$ need to be sufficiently large. Reading out the $O(N)$ components of the state vectors would foil the achieved quantum speedup; however, as in [36, 37], the number of necessary output quantities in our algorithm is condensed down to $O(p)$. Each output can be determined in time $\widetilde{O}(\mathrm{poly}\ \mathrm{log}(N))$, provided that the Vandermonde matrix $\widetilde{W}$ built from the established frequencies is sufficiently well-conditioned, analogous to the requirements related to the condition number in the matrix inversion algorithm [47]. Naturally, we are interested in sufficiently well-behaved signals for which a classical MPM algorithm could in principle reconstruct all components, excluding e.g. highly damped or relatively small terms, which again manifest themselves in the conditioning of the matrix inversion. In this respect, the quantum MPM inherits the conditioning properties of its classical analogue.

The outlined procedure is generalizable to arbitrary signal dimensions d, i.e. signals of the type $f({t}_{1},\ldots ,{t}_{d})={\sum }_{{k}_{1},\ldots ,{k}_{d}=1}^{p}{c}_{{k}_{1},\ldots ,{k}_{d}}\,{{\rm{e}}}^{{\lambda }_{{k}_{1}}{t}_{1}+...+{\lambda }_{{k}_{d}}{t}_{d}}$ with $c\in {{{\mathbb{C}}}^{p}}^{d}$, by suitable tensor contractions of the array of signal samples $({f}_{{j}_{1},\ldots ,{j}_{d}}{)}_{\{{j}_{l}\}=0}^{N-1}$ [5], or by fixing all time indices but one and applying the MPM to the remaining vector. This yields the sought-after poles, since they are the same for the different time indices ${t}_{i}$. For time index-dependent poles, one can consider 'enhanced matrices', embeddings of Hankel matrices that correspond to one-dimensional projections of the multidimensional signal within a larger block Hankel matrix, as in [48]. There are many potential applications for this, e.g. in radar imaging and geophysics [49].

Beyond the potential use of reducing the computation time of the MPM in its classical applications or in classical postprocessing of quantum applications, it is also worthwhile to consider the possibilities in a pure quantum setting. These include the examination of quantum systems that feature a discrete set of damped oscillations, such as the vibronic modes of molecules in a condensed-phase environment. Here, in contrast to the usual practice, the data would also have to be taken in a quantum coherent manner, in order to replace quantum RAM or to build a state as in appendix B, and would subsequently be processed by the quantum MPM.

We expect the methods and primitives that we develop and introduce here to be highly useful also when devising other quantum algorithms. This includes the new ideas on the computation of overlaps by suitably concatenating quantum phase estimation procedures and on the efficient exponentiation of a novel type of structured matrices on a quantum computer. We hope that the present work stimulates such further research.

Acknowledgments

AS thanks the German National Academic Foundation (Studienstiftung des deutschen Volkes) and the Fritz Haber Institute of the Max Planck Society for support. SL and PR were supported by ARO and AFOSR. JE thanks the Templeton Foundation, the DFG (CRC 183, EI 519/7-1), the ERC (TAQ), and the EC (RAQUEL, AQuS) for support.

Appendix A.: Alternative non-sparse quantum oracle method

Berry et al present a method to exponentiate matrices with a query complexity sublinear in the sparsity [38]. In this section, we summarize the performance and requirements of this method and its application to the low-rank Hankel matrices of the present work. The number of oracle queries for simulating a matrix such as the Hermitian ${\widetilde{F}}^{(i)}$ in equation (21) is given by

Equation (A1)

where s is the sparsity and ε is the error. The quantity ${{\rm{\Lambda }}}_{\mathrm{tot}}\gt 0$ depends on the norms of the matrix as ${{\rm{\Lambda }}}_{\mathrm{tot}}={\rm{\Lambda }}{{\rm{\Lambda }}}_{1}{{\rm{\Lambda }}}_{\max }$ with the spectral norm ${\rm{\Lambda }}=\parallel {\widetilde{F}}^{(i)}{\parallel }_{\infty }$, the maximum column sum norm ${{\rm{\Lambda }}}_{1}=\parallel {\widetilde{F}}^{(i)}{\parallel }_{1}$, and the maximum matrix element ${{\rm{\Lambda }}}_{\max }=\parallel {\widetilde{F}}^{(i)}{\parallel }_{\max }$. The conditions for this to work are given by ${\rm{\Lambda }}t\geqslant \sqrt{\varepsilon }$,

Equation (A2)

and ${\rm{\Lambda }}\leqslant {{\rm{\Lambda }}}_{1}$.

We confirm that under reasonable assumptions the low-rank non-sparse Hankel matrices under consideration in this work can be simulated with $O(\mathrm{log}N)$ queries. Assume that the signal is reasonably small with not too many zeros. This implies that the matrix ${\widetilde{F}}^{(i)}$ is non-sparse with $s={\rm{\Theta }}(N)$ and the individual elements scale as ${\widetilde{F}}_{{jk}}^{(i)}={\rm{\Theta }}(1)$. If we assume that the signal is generated by a few (in fact, p) components, then the matrix is low rank with rank $2p$. Since $\mathrm{tr}{(({\widetilde{{F}}}^{(i)})}^{2})={\sum }_{j=1}^{2p}{\lambda }_{j}^{2}\leqslant {N}^{2}\parallel {\widetilde{{F}}}^{(i)}{\parallel }_{\max }^{2}$, we have that the significant eigenvalues scale as ${\lambda }_{j}={\rm{\Theta }}(N)$, $j=1,\,...,2p$. These assumptions have the following straightforward implications:

  • (i)  
    The spectral norm (largest eigenvalue) is ${\rm{\Lambda }}={\rm{\Theta }}(N)$,
  • (ii)  
    the induced 1-norm (maximum column sum) is ${{\rm{\Lambda }}}_{1}={\rm{\Theta }}(N)$, and
  • (iii)  
    the maximum element is ${{\rm{\Lambda }}}_{\max }={\rm{\Theta }}(1)$.

Thus, ${{\rm{\Lambda }}}_{\mathrm{tot}}={\rm{\Theta }}({N}^{2})$ and the total number of queries is $O({t}^{3/2}\sqrt{{\rm{\Theta }}({N}^{3})/\varepsilon })$. We need time $t={\rm{\Theta }}(1/N)$ to resolve the eigenvalues ${\lambda }_{j}={\rm{\Theta }}(N)$ via phase estimation. Thus, at an error ε, we need $O(1/\sqrt{\varepsilon })$ queries, which is again efficient.
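The scalings (i)–(iii) and the low rank can be checked numerically for an illustrative two-component signal (our construction; we assume the Hermitian embedding of equation (21) takes the standard block form $\big[\begin{smallmatrix}0 & F \\ F^{\dagger} & 0\end{smallmatrix}\big]$):

```python
import numpy as np

# Numerical check of (i)-(iii) for an illustrative low-rank Hankel matrix
# built from p = 2 undamped components (entries Theta(1)).
def scalings(L, p=2):
    t = np.arange(2 * L - 1)
    poles = 1j * np.array([0.9, 0.4])[:p]          # purely oscillatory, illustrative
    f = np.exp(np.outer(t, poles)).sum(axis=1)
    F = np.array([[f[j + k] for k in range(L)] for j in range(L)])
    # Assumed Hermitian embedding [[0, F], [F^dag, 0]].
    Ft = np.block([[np.zeros((L, L)), F], [F.conj().T, np.zeros((L, L))]])
    Lam = np.linalg.norm(Ft, 2)                    # spectral norm
    Lam1 = np.abs(Ft).sum(axis=0).max()            # maximum column abs-sum
    Lmax = np.abs(Ft).max()                        # maximum element
    return Lam, Lam1, Lmax, np.linalg.matrix_rank(Ft)

small, large = scalings(64), scalings(128)
# Spectral and column-sum norms grow linearly with the matrix size,
# the maximum element stays O(1), and the rank stays 2p.
assert 1.5 < large[0] / small[0] < 2.5
assert 1.5 < large[1] / small[1] < 2.5
assert abs(large[2] - small[2]) < 0.5
assert small[3] == 4 and large[3] == 4
```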

We show that we can satisfy the conditions as follows. Since we have $t={\rm{\Theta }}(1/N)$ already from phase estimation, we can assume that with constant effort $t\geqslant \sqrt{\varepsilon }/{\rm{\Lambda }}={\rm{\Theta }}(\sqrt{\varepsilon }/N)$. Next, by using (i)–(iii) and $s={\rm{\Theta }}(N)$, we have

Equation (A3)

The third criterion ${\rm{\Lambda }}\leqslant {{\rm{\Lambda }}}_{1}$ is satisfied by Gershgorin's theorem, since the eigenvalues are bounded by the maximum sum of the absolute elements in a row/column.

Appendix B.: Matrix exponentiation via QPCA

In this appendix, we present an alternative way to efficiently exponentiate indefinite matrices via QPCA, in order to give more substance to the idea of exponentiating structured matrices while preserving a phase relationship. Exponentiating matrices $F\in {{\mathbb{C}}}^{N/2\times N/2}$ while preserving a phase relationship is key to the above algorithm and is expected to be important in other quantum algorithms as well. The method compares to the QFT in the sense that it operates on a given initial state that contains the data to be transformed in its amplitudes, without querying QRAM. We assume that we have been presented with many copies of the state vector

Equation (B.1)

with $C:= (\parallel F{\parallel }_{2}^{2}+{a}^{2}\parallel {F}^{\dagger }F{\parallel }_{2}^{2})$ and ${a}^{-1}:= O({\max }_{j,k}| {({F}^{\dagger }F)}_{j,k}| )$. The matrix F takes the role of ${F}^{(1)}$ and ${F}^{(2)}$ of the main text, so again the classical index i is suppressed. Note that even though a is exponentially small, the individual amplitudes of this state are of similar size. Reducing the state in terms of the k index leads to

In matrix form, this reduced density matrix is written as

Equation (B.2)

Using the singular value decomposition $F={{USV}}^{\dagger }$, this matrix, which is positive semi-definite by construction, can be written as

Equation (B.3)

In precisely the same way, we are given multiple copies of the state

Equation (B.4)

Again reducing the state in terms of the k index leads to

leading to the matrix

Equation (B.5)

which can be decomposed as

Equation (B.6)

The matrix

Equation (B.7)

still has low rank: just twice the rank of F. Its eigenvectors are $({u}_{j},\pm {v}_{j})\in {{\mathbb{C}}}^{N}$ and its eigenvalues, in terms of the singular values of F, are given by ${s}_{j}^{2}{({{as}}_{j}\pm 1)}^{2}/(2C)$, since

Equation (B.8)

and

Equation (B.9)

This renders standard QPCA [20] readily applicable and allows us to determine the singular spectra of matrices F, even if they are indefinite, by constructing the positive semidefinite matrix Z.
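Since equation (B.7) is not reproduced above, we note that one block form of Z consistent with the stated eigenvectors $(u_j,\pm v_j)$ and eigenvalues $s_j^2(as_j\pm 1)^2/(2C)$ is $Z=(G+aG^2)^2/(2C)$, where $G=\big[\begin{smallmatrix}0 & F \\ F^{\dagger} & 0\end{smallmatrix}\big]$ is the Hermitian embedding of F; this is our reconstruction, and it can be verified numerically:

```python
import numpy as np

# Verify that Z = (G + a G^2)^2 / (2C) has eigenvectors (u_j, +/- v_j)
# with eigenvalues s_j^2 (a s_j +/- 1)^2 / (2C), for a random low-rank F.
# The choices of a and C below are illustrative; the check is
# self-consistent for any positive values.
rng = np.random.default_rng(1)
n, p = 6, 2
F = (rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))) @ \
    (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n)))

U, s, Vh = np.linalg.svd(F)
a = 1.0 / (2 * np.abs(F.conj().T @ F).max())
C = np.linalg.norm(F) ** 2 + a ** 2 * np.linalg.norm(F.conj().T @ F) ** 2

G = np.block([[np.zeros((n, n)), F], [F.conj().T, np.zeros((n, n))]])
M = G + a * G @ G
Z = M @ M / (2 * C)                         # Hermitian and PSD by construction

for j in range(p):                          # check both eigenvector branches
    for sign in (+1, -1):
        w = np.concatenate([U[:, j], sign * Vh[j].conj()])
        lam = s[j] ** 2 * (a * s[j] + sign) ** 2 / (2 * C)
        assert np.allclose(Z @ w, lam * w)
```

The check also confirms that Z is positive semidefinite of rank $2p$, as required for QPCA.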
