Abstract
A universal fault-tolerant quantum computer holds the promise of speeding up computational problems that are otherwise intractable on classical computers; however, for the next decade or so, our access is restricted to noisy intermediate-scale quantum (NISQ) computers and, perhaps, early fault-tolerant (EFT) quantum computers. This motivates the development of many near-term quantum algorithms, including robust amplitude estimation (RAE), a quantum-enhanced algorithm for estimating expectation values. One obstacle to using RAE has been a paucity of ways to incorporate realistic error models into the algorithm. So far, the impact of device noise on RAE has been incorporated into one of its subroutines as an exponential decay model, which is unrealistic for NISQ devices and, perhaps, for EFT devices; this hinders the performance of RAE. Rather than trying to explicitly model realistic noise effects, which may be infeasible, we circumvent this obstacle by tailoring device noise with randomized compiling to generate an effective noise model whose impact on RAE closely resembles that of the exponential decay model. Using noisy simulations, we show that our noise-tailored RAE algorithm regains the improvements in both bias and precision that are expected of RAE. Additionally, on IBM's quantum computer ibmq_belem, our algorithm demonstrates an advantage over the standard estimation technique in reducing bias. Thus, our work extends the feasibility of RAE on NISQ computers, bringing us one step closer to achieving quantum advantage with these devices.

Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
Rapid progress in quantum processors and the paradigm of hybrid quantum–classical algorithms have worked in tandem to spark the age of near-term quantum computing. The two major events that catalysed this noisy intermediate-scale quantum (NISQ) era are the invention of the variational quantum eigensolver (VQE) [1–4] and publicly-available access to IBM's quantum devices through their cloud platform [5]. In addition to VQE for quantum chemistry, with promising experimental demonstrations [6–9], variational quantum algorithms have applications in diverse fields including discrete optimization [10–13], quantum machine learning and generative modelling [14–22], quantum amplitude estimation [23] and quantum error correction [24].
Extensive studies to elucidate the viability of quantum advantage in near-term quantum computing have produced mixed results so far [22, 25–30]. A recent analysis argued that VQE will fail to provide quantum advantage over state-of-the-art classical quantum chemistry algorithms for estimating the ground-state energy of an industry-scale problem Hamiltonian [31]. This drawback of VQE arises because evaluating even a single energy within chemical accuracy requires a large number of samples, which was estimated to take up to days for some realistic molecules. Specifically, the number of samples required for a desired accuracy ε scales as O(1/ε²), leading to huge run-times for VQE. This lesser-acknowledged problem of infeasible runtimes, which was dubbed the 'measurement problem' [31], poses a major roadblock for hybrid algorithms, as well as many non-variational algorithms, towards achieving quantum advantage on devices in the NISQ era and beyond.
In an effort to deal with the measurement problem on NISQ devices, recent works apply techniques of quantum amplification and quantum estimation borrowed from the field of fault-tolerant quantum computing [32–35]. As a first step towards this effort, Wang et al [32] achieve a reduction in the required number of samples for VQE by improving the sample scaling from O(1/ε²) to O(1/ε^{2(1−α)}), where 0 < α ⩽ 1. More generally, these algorithms employ short-depth versions of quantum amplitude estimation [36, 37] to improve scaling and reduce runtimes for VQE, yielding scalings that interpolate between the standard and quantum phase estimation sampling rates of O(1/ε²) and O(1/ε), respectively.
Our work focuses on the near-term implementation of the quantum estimation algorithm of [33]. An essential component of this algorithm is the likelihood function, which is modelled over measurement data as an exponentially-decaying periodic function with a decay parameter f. This 'fidelity' parameter depends on the two-qubit gate fidelity, the number of two-qubit gates D and the number of qubits n, thus incorporating noise effects into the algorithm design and making the algorithm robust. A detailed analysis based on this model derives the total run-time as a function of the target accuracy, the layer fidelity and an additional parameter that incorporates state preparation and measurement errors. This expression highlights the interplay between fidelity (noise) and depth (quantum coherence) towards reducing runtimes and improving the scaling of measurements with respect to accuracy.
In practice, using quantum coherence to improve sampling efficiency requires running deeper quantum circuits, whose efficiency might be curbed by noise [38–41]. Additionally, more complex errors originating from unpredictable sources in NISQ devices, including miscalibration of gates and undesired correlations between qubits, can deviate the likelihood function from the assumed exponential decay model. To this end, two techniques have so far been proposed for improving the performance of amplitude estimation in such noisy scenarios; see figure 1. The first is to experimentally approximate the decay parameter and insert it into the likelihood function [41], and the second is to introduce more decay parameters into the likelihood function [42]. In summary, one either measures the nuisance parameters in a simplified noise model or adds extra nuisance parameters to the likelihood function.
Figure 1. Three techniques for dealing with device noise in quantum amplitude estimation: (a) construct accurate noise models [41], (b) add more parameters to the existing noise model [42], (c) tailor device noise into existing noise model [this work].
Our approach is to tailor the device noise into stochastic noise, thus producing an effective noise model closer to the exponential decay. While figure 1 highlights three different methods to make quantum amplitude estimation feasible for near-term applications, we emphasize that they are not mutually exclusive and some combined procedure might eventually be the best. Using noisy simulations, we show that the measurement data from noise-tailored amplitude amplification circuits come closer to the exponential decay model for the likelihood function. Furthermore, we assess the performance of our technique at the task of estimating expectation values using current quantum processors.
Our paper is organized as follows. We begin in section 2 by briefly reviewing the robust amplitude estimation (RAE) algorithm, and then inspect the impact of both incoherent and coherent errors on the likelihood function. In section 3 we summarize randomized compiling (RC), an efficient noise-tailoring technique for quantum devices, and study its effect on this likelihood function. Finally we present our results in section 4 on the success and limitations of this noise-tailoring technique for RAE using IBM devices, both simulators and real hardware, and conclude our work in section 5.
2. Noise in robust amplitude estimation
Robust amplitude estimation (RAE) is a quantum-enhanced algorithm for estimating the expectation value of a Hermitian operator with higher accuracy and precision as compared to the standard sampling (SS) technique of direct averaging [38]. In this section, we first briefly explain the RAE algorithm, then we introduce mathematical models describing major sources of incoherent and coherent errors in superconducting qubits. Finally, we discuss the impact of these errors on the RAE algorithm.
2.1. RAE algorithm
Given an n-qubit Hermitian operator P with eigenvalues ±1 and a quantum state |ψ⟩, the goal is to estimate the expectation value Π = ⟨ψ|P|ψ⟩. To do this, the RAE algorithm uses measurement data from enhanced sampling circuits [33] and then performs the classical postprocessing of maximum likelihood estimation (MLE) [38]. An enhanced sampling circuit modifies the trivial SS circuit by adding Grover iterates G, used for quantum amplitude amplification, before the measurement of P; see figure 2. For a noisy enhanced sampling circuit with L layers, where each layer resembles G and has a fidelity f, the likelihood of obtaining a measurement bit-string with parity d ∈ {0, 1} is:

$$\mathbb{P}(d\,|\,\Pi; f, L) = \frac{1}{2}\Big[1 + (-1)^{d}\, f^{L} \cos\big((2L+1)\arccos\Pi\big)\Big]. \qquad (2)$$
Figure 2. The structure and components of an n-qubit quantum circuit used in estimating Π. A is a parameterized quantum circuit (PQC), P is the operator to be measured and R0 creates a reflection about the all-zero state.
The layer fidelity parameter f is in turn related to the nuisance parameter λ of the exponential decay noise model as f = e^{−λ}, thus incorporating the impact of noise into the above likelihood function.
The classical processing in RAE utilizes a family of enhanced sampling circuits of increasing L, and consequently of increasing depth, where a higher-depth circuit demonstrates an increase in inference power [33]. After obtaining a measurement outcome d from an enhanced sampling circuit with L layers, a two-dimensional prior distribution p(Π, f) is updated to yield the posterior distribution [38]:

$$p(\Pi, f \,|\, d) = \frac{\mathbb{P}(d\,|\,\Pi; f, L)\, p(\Pi, f)}{\int \mathbb{P}(d\,|\,\Pi; f, L)\, p(\Pi, f)\, \mathrm{d}\Pi\, \mathrm{d}f} \qquad (3)$$
during the inference process. At the end of the inference process, the estimated expectation value is given by the value of Π that maximises the above posterior distribution. Combining likelihood functions for enhanced sampling circuits with different Ls yields a unique estimate with a quantum advantage [43].
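The inference loop described above can be sketched in a few lines of Python. This is a toy illustration rather than the implementation used in our experiments: it assumes the likelihood ℙ(d | Π; f, L) = ½[1 + (−1)^d f^L cos((2L+1) arccos Π)] and replaces the sequential Bayesian update with an equivalent grid search over (Π, f) under a flat prior:

```python
import numpy as np

def likelihood(d, pi, f, L):
    """Probability of obtaining parity d from an L-layer circuit,
    under the exponential-decay noise model with layer fidelity f."""
    return 0.5 * (1 + (-1) ** d * f ** L * np.cos((2 * L + 1) * np.arccos(pi)))

def rae_estimate(data, pi_grid, f_grid):
    """Grid-based maximum-likelihood estimate of Pi from a list of
    (parity, L) outcomes of enhanced sampling circuits."""
    P, F = np.meshgrid(pi_grid, f_grid, indexing="ij")
    log_post = np.zeros_like(P)              # flat prior over (Pi, f)
    for d, L in data:
        log_post += np.log(likelihood(d, P, F, L) + 1e-12)
    i, _ = np.unravel_index(np.argmax(log_post), log_post.shape)
    return pi_grid[i]

# synthetic outcomes at a known expectation value and layer fidelity
rng = np.random.default_rng(7)
pi_true, f_true = 0.6, 0.95
data = [(int(rng.random() > likelihood(0, pi_true, f_true, L)), L)
        for L in (0, 1, 2, 3) for _ in range(500)]
pi_grid = np.linspace(-0.99, 0.99, 199)
f_grid = np.linspace(0.7, 1.0, 31)
print(rae_estimate(data, pi_grid, f_grid))
```

On synthetic data the grid maximum lands close to the true Π, illustrating how circuits with different L jointly pin down the estimate even when f is unknown.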
It was empirically shown that RAE affords not only a way to improve the precision of one's estimate but also a way to mitigate its bias, at least for a two-qubit system [38]. The crucial insight behind this observation is that, as long as the noise model describing the quantum hardware is not far from the noise model assumed by RAE, i.e. the exponential decay model, or has the same effect on the algorithm as this assumed noise model, the effect of the noise in MLE is simply to slow the rate of information gain rather than to bias the estimates. In this work, we push this insight further and endeavour to tailor the real and complicated noise on quantum hardware to produce an effective noisy circuit, so that f in the likelihood function (2) becomes a better approximation to the effect of noise in the estimation process.
2.2. Relevant noise models
For our simulations, we use the model of a noisy superconducting quantum processor and include incoherent errors from amplitude damping and dephasing as well as coherent errors from residual ZZ interactions between qubits; these are the major sources of error for such processors. Although amplitude and phase damping degrade performance, coherent errors are the major performance-limiting factor for quantum algorithms because they can accumulate adversarially with increasing circuit depth. Karamlou et al [44] demonstrate that, for superconducting quantum processors, residual ZZ errors exert a detrimental impact on hybrid algorithms using PQCs.
The combined effect of phase and amplitude damping errors is described by a stochastic error channel. For a single qubit with relaxation time T1, dephasing time T2 and a time-step parameter t_step, this channel is defined using the three Kraus matrices:

$$K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma-\lambda} \end{pmatrix}, \quad K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix}, \quad K_2 = \begin{pmatrix} 0 & 0 \\ 0 & \sqrt{\lambda} \end{pmatrix}, \qquad (4)$$

where γ = 1 − e^{−t_step/T1} and λ = e^{−t_step/T1} − e^{−2t_step/T2}.
where and . Extending this formulation to n qubits, the incoherent error channel transforms a n-qubit density matrix ρ into:
For our simulations, we use the implementation of this error channel provided by IBM's qiskit [45].
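As a sanity check of this channel, the Kraus operators can be built and verified directly in numpy. The parameterization below, with γ = 1 − e^{−t_step/T1} and λ = e^{−t_step/T1} − e^{−2t_step/T2}, matches equation (4) as reconstructed above and should be treated as an illustrative stand-in for the qiskit implementation:

```python
import numpy as np

def damping_kraus(t1, t2, t_step):
    """Kraus operators for combined amplitude and phase damping
    (assumes t2 <= 2*t1, as holds for physical qubits)."""
    gamma = 1 - np.exp(-t_step / t1)                      # amplitude-damping probability
    lam = np.exp(-t_step / t1) - np.exp(-2 * t_step / t2) # dephasing probability
    k0 = np.array([[1, 0], [0, np.sqrt(1 - gamma - lam)]])
    k1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    k2 = np.array([[0, 0], [0, np.sqrt(lam)]])
    return [k0, k1, k2]

def apply_channel(kraus, rho):
    """E(rho) = sum_i K_i rho K_i^dagger."""
    return sum(k @ rho @ k.conj().T for k in kraus)

ks = damping_kraus(t1=100e-6, t2=80e-6, t_step=1e-6)
# completeness: sum_i K_i^dag K_i = identity (trace preservation)
identity = sum(k.conj().T @ k for k in ks)
# the coherence of |+><+| decays at the T2 rate
plus = 0.5 * np.ones((2, 2))
rho_out = apply_channel(ks, plus)
print(np.allclose(identity, np.eye(2)), rho_out[0, 1])
```

The completeness relation guarantees trace preservation, and the off-diagonal element of |+⟩⟨+| decays by exactly e^{−t_step/T2}, as expected for this parameterization.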
The coherent error arising from undesired ZZ interactions between transmon qubits is modelled as a modification of the ideal CNOT gate [44]. These ZZ interactions are produced by small anharmonicities in the qubits. The overall effect of these interactions on a pair of qubits, namely control c and target t, is expressed as:

$$H_{ZZ} = \sum_{j \in \mathcal{N}_c} \nu_{cj}\, Z_c Z_j + \sum_{j \in \mathcal{N}_t} \nu_{tj}\, Z_t Z_j, \qquad (6)$$
where 𝒩_c and 𝒩_t are the sets of qubits coupled to c and t, respectively, with ν_cj and ν_tj being the corresponding ZZ interaction strengths. In the presence of these coherent errors, the expression for CNOT over gate time t_gate, in terms of the native cross-resonance gate [46, 47], is modified as:

$$\mathrm{CNOT} \;\to\; \exp\!\Big[-\mathrm{i}\Big(\tfrac{\pi}{4\,t_{\mathrm{gate}}}\,(I - Z_c)(I - X_t) + \nu_{ct}\, Z_c Z_t + H_{ZZ}\Big)\, t_{\mathrm{gate}}\Big], \qquad (7)$$
where ν_ct is the coupling between the control and target qubits.
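A deliberately simplified toy model shows why such coherent terms matter: if a residual ZZ phase accrues during the gate time, the implemented unitary drifts away from the ideal CNOT. The sketch below (a pure ZZ phase multiplying the ideal gate, with hypothetical coupling strength and gate time) is much cruder than the cross-resonance expression of [44], but captures the qualitative effect:

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def zz_phase(phi):
    """exp(-i*phi*Z(x)Z), diagonal in the computational basis."""
    return np.diag(np.exp(-1j * phi * np.array([1.0, -1.0, -1.0, 1.0])))

def noisy_cnot(nu, t_gate):
    """Toy model: an unwanted ZZ phase of strength nu (Hz) accrues
    over the gate time and multiplies the ideal CNOT."""
    return zz_phase(np.pi * nu * t_gate) @ CNOT

def overlap(u, v):
    """|tr(U^dag V)|/4, a simple closeness measure for 2-qubit unitaries."""
    return abs(np.trace(u.conj().T @ v)) / 4

f_clean = overlap(CNOT, noisy_cnot(0.0, 300e-9))   # no residual coupling
f_noisy = overlap(CNOT, noisy_cnot(50e3, 300e-9))  # hypothetical 50 kHz coupling
print(f_clean, f_noisy)
```

Because the error is unitary, repeating the gate makes the phases add coherently, which is precisely the adversarial accumulation with depth discussed above.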
2.3. Impact of noise on the likelihood function
To understand the effect of incoherent and coherent errors on the RAE algorithm, we investigate how these errors impact the likelihood function (2). To do this, we use the likelihood of getting even parities:

$$\mathbb{P}(0\,|\,\Pi; f, L) = \frac{1}{2}\Big[1 + f^{L} \cos\big((2L+1)\arccos\Pi\big)\Big], \qquad (8)$$

where L = 0 is the SS case.
where L = 0 is the SS case. We perform noisy simulations of enhanced sampling circuits with A being the PQC describing a 4-qubit hydrogen molecule (figure 3) and P being the 4-qubit Pauli operator XXXX. In figure 4(a), we present results from simulations incorporating only amplitude and phase damping, with typical T1 and T2 values for IBM devices, for a fixed Π and increasing L. We observe that the likelihood values obtained from these simulations yield a good fit to the desired functional form (8), as evident from their perfectly overlapping values and . This implies that despite the noise we should be able to get a good estimate of f and the estimation process could be largely salvaged.
Figure 3. A 4-qubit PQC, with parameter θ0, describing the hydrogen molecule in a minimal basis. This PQC can prepare any real linear combination of two particular computational basis states, which includes the ground state of the hydrogen molecule when the Jordan–Wigner transformation with interleaved spin ordering is used.
Figure 4. Impact of incoherent and coherent noise on the likelihood function (8), obtained from noisy simulations of enhanced sampling circuits with A being the 4-qubit PQC (figure 3) and P = XXXX.
On the other hand, adding a moderate level of coherent errors from ZZ interactions drives the simulation results far away from our assumed functional form, with little hope of getting a good estimate of f; see figure 4(b). This observation serves as strong evidence of how coherent errors impact likelihood values and can consequently deteriorate the performance of RAE. From these results we infer that, with the noise model assumed in the derivation of the likelihood function (2), RAE will not always offer the sought-after sampling power.
3. Randomized compiling for RAE
Motivated by our findings on how detrimental coherent errors are to likelihood values, we aim to engineer an effective noise profile whose impact on RAE is similar to that of an exponential decay model. To this end, we employ RC, which efficiently tailors coherent errors into stochastic errors [48]. In this section, we first briefly state the background of RC, followed by an explanation of its basic procedure. Finally, we study the impact of this technique on likelihood values in the presence of different magnitudes of coherent error.
The idea of RC was originally proposed in the context of quantum error correction and fault-tolerant quantum computing, where thresholds for different error-correcting codes are estimated under an assumption of stochastic Pauli noise [49, 50]. These estimates can be far off if coherent errors are present [51], since such errors pile up systematically. In this regard, RC was introduced to construct an effective quantum channel that is described by only stochastic incoherent noise. Recently, RC has been used for improving the algorithmic performance of NISQ devices [52–56]. Thus, besides making RAE feasible for NISQ devices, our work adds to the growing field of research on noise mitigation and noise tailoring within the NISQ era.
3.1. RC procedure
Given an n-qubit 'bare' circuit, RC produces a set of N random circuits that are all logically equivalent to it. Logical equivalence means that all these circuits produce exactly the same wave-function in the noiseless limit. Each such circuit, which we refer to as a 'random duplicate', is constructed by the following procedure:
- (a) Partition the bare circuit into 'easy' and 'hard' cycles, where we define an easy cycle to be a layer of single-qubit gates and a hard cycle to be a layer of two-qubit entangling gates,
- (b) Uniformly sample an element from the n-qubit Pauli group,
- (c) Insert the sampled element after an easy cycle,
- (d) Insert correction gates after a hard cycle to ensure logical equivalence to the bare circuit.
Additionally, we combine all adjacent single-qubit gates to preserve the circuit depth. Using a convenient tensor product notation:

$$C_i = \bigotimes_{j=1}^{n} C_i^{\,j}, \qquad (9)$$
where C_i^j is the single-qubit gate acting on qubit j in an easy cycle C_i, figure 5 shows the construction and structure of a random duplicate for a 2-qubit circuit.
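For a single CNOT hard cycle, steps (b)–(d) amount to sampling a random two-qubit Pauli, inserting it before the gate, and appending the correction Pauli obtained by conjugating through the CNOT (a Clifford gate, so the correction is again a Pauli). A minimal numpy sketch, which checks logical equivalence explicitly but omits the merging of corrections into neighbouring easy cycles:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
PAULIS = [I2, X, Y, Z]
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def twirled_cnot(rng):
    """One random duplicate of a single CNOT hard cycle: a random
    two-qubit Pauli before the gate, plus the correction obtained by
    conjugating that Pauli through the CNOT."""
    a, b = rng.integers(0, 4, size=2)
    p_in = np.kron(PAULIS[a], PAULIS[b])
    p_out = CNOT @ p_in @ CNOT.conj().T   # Clifford conjugation -> Pauli
    return p_out @ CNOT @ p_in

rng = np.random.default_rng(1)
dups = [twirled_cnot(rng) for _ in range(20)]
# every duplicate reproduces the bare CNOT in the noiseless limit
print(all(np.allclose(d, CNOT) for d in dups))
```

Every duplicate reproduces the bare CNOT exactly in the noiseless limit, which is the logical-equivalence property that RC relies on; the duplicates differ only in how hardware noise acts between the inserted Paulis.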
Figure 5. Circuit representations for logically-equivalent circuits before, during and after RC.
In our implementation, the total number of measurements obtained by executing the RC procedure is kept equal to the number of measurements allocated to the bare circuit. By defining a tuple (𝒞, M) for a circuit 𝒞 with M measurements, we can thus formulate RC as the map:

$$(\mathcal{C}, M) \;\mapsto\; \big\{(\mathcal{C}^{(r)}, M/N)\big\}_{r=1}^{N}, \qquad (10)$$
where the set of random duplicates represents a virtual circuit whose measurements are the union of the measurement outcomes from all random duplicates. For the set of twirling gates, we choose the n-qubit Pauli group.
In the noisy setting, these random duplicates are distinguished only by how the noise affects them, since they are all equivalent in the noiseless setting. The intuition is then that, upon running them all on hardware and collecting the results, we get bit-strings that behave as though they came from a circuit with no coherent noise but only stochastic noise.
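Bookkeeping for the virtual circuit is straightforward: split the shot budget of the bare circuit across the N duplicates and take the union of their outcome counts. A small sketch (the function names are ours):

```python
from collections import Counter

def shots_per_duplicate(m_total, n_duplicates):
    """Split the bare circuit's shot budget evenly over the duplicates,
    keeping the total number of measurements unchanged."""
    base, extra = divmod(m_total, n_duplicates)
    return [base + (i < extra) for i in range(n_duplicates)]

def merge_duplicate_counts(counts_per_duplicate):
    """Union of measurement outcomes from all random duplicates,
    treated as one 'virtual circuit' worth of data."""
    total = Counter()
    for counts in counts_per_duplicate:
        total.update(counts)
    return dict(total)

print(shots_per_duplicate(1000, 3))
print(merge_duplicate_counts([{"00": 4, "11": 6}, {"00": 1, "01": 2}]))
```

The merged counts then feed into the same MLE postprocessing as for a bare circuit.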
3.2. Impact of RC on the likelihood function
We study how RC impacts the likelihood of getting even parities (8), obtained from noisy simulations of enhanced sampling circuits (figure 2) with the same A and P as in the previous section. As this likelihood function depends on the exact expectation value Π and the number of layers (i.e. Grover iterates) L, our analysis is performed in the following two ways:
- (a) Fix L = 1 and vary Π by choosing 1000 different values for θ0 in figure 3,
- (b) Fix Π and increase L from 0 to 10.
Furthermore, we generate N = 50 random duplicates for each enhanced sampling circuit.
In figure 6(a), we observe that, even in the presence of moderate ZZ interactions with strengths in the kHz range, RC successfully tailors this coherent error into incoherent error for circuits with one Grover iterate, and consequently brings the likelihood function close to the desired form (8). The tailored circuits fit the function closely and yield a layer fidelity of 0.34, signifying that the resultant depolarizing channel is very noisy. As we increase the circuit depth beyond one Grover iterate, coherent noise piles up and produces no signal, as seen in figure 6(b): the resultant likelihood function is essentially flat and uninformative, with negligible enhanced sampling power. On the other hand, for weak coherent noise, i.e. smaller ZZ strengths, RC is able to consistently tailor the likelihood values towards the desired values for increasing L, and the resultant effective noise is not very destructive. From these observations we hypothesize that RC can improve RAE in the presence of coherent noise as long as that noise is not very strong.
Figure 6. Impact of the RC procedure on the likelihood function (8) obtained from noisy simulations of enhanced sampling circuits, with A being the 4-qubit PQC (figure 3) and P = XXXX.
4. RC-assisted RAE in practice
In this section, we present results on estimating Pauli expectation values using our technique of RC-assisted RAE. This technique calculates maximum likelihood estimates using measurement data from randomly-compiled enhanced sampling circuits, as opposed to the bare enhanced sampling circuits used in the RAE algorithm [38]. As RC tailors coherent error into stochastic error, consequently yielding a likelihood function resembling the desired functional form (2), we expect to obtain more precise and accurate estimates using the RC-assisted RAE algorithm. We compare the performance of three estimation techniques, namely RAE, RC-assisted RAE and direct averaging, for two different PQCs. In addition to the previously-used 4-qubit hydrogen molecule (figure 3), we test these techniques on a 2-qubit low-depth circuit ansatz (LDCA) [57] with 20 CNOTs (figure 7). Our tests are performed on both an IBM simulator and a real device.
Figure 7. A 2-qubit LDCA with fixed parameter values. These values are chosen at random to yield a non-trivial expectation value (not near zero or ±0.5) of the operator P = XX.
Download figure:
Standard image High-resolution imageThe classical postprocessing for MLE uses measurement data from multiple enhanced sampling circuits (figure 2) with increasing number of layers. For a given maximum layer number Lmax, we calculate an estimate by collecting equal number of shots from circuits with and then employing the MLE technique on this data. By fixing the total number of shots to be M, we take measurements from each circuit. This is in contrast to techniques using M measurements of each circuit [40], i.e. a total of shots for each Lmax, or optimizing number of shots for each circuit based on nuisance parameter [41].
For RC-assisted RAE, we also fix the number N of duplicates for increasing layers. An estimate of the expectation value is taken as the average of estimates from B Bayesian runs, or MLEs, where the value of B is judiciously chosen such that this average is converged. Using 50 different measurement datasets, we obtain 50 estimates and then report their mean and root-mean-square error (RMSE), along with the corresponding standard errors. The total time, in units of the duration of A, required for calculating an estimate is the runtime, which accounts for both the number of shots taken from each circuit and the duration of that circuit.
Here nO and nA denote the number of two-qubit gates in R0 and A, respectively. In terms of these quantities, the two-qubit gate count of an enhanced sampling circuit of L layers is (2L + 1) nA + L nO.
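A small helper makes this accounting concrete. It assumes, as our reconstruction of the counting above, that each layer applies A twice (once inverted) plus one R0, so an L-layer circuit contains (2L + 1) nA + L nO two-qubit gates; the runtime is then the per-circuit shot count times the summed circuit durations in units of A:

```python
def shots_per_circuit(m_total, l_max):
    """Equal split of the shot budget over circuits L = 0, ..., Lmax."""
    return m_total // (l_max + 1)

def two_qubit_depth(L, n_a, n_o):
    """Two-qubit gate count of an L-layer enhanced sampling circuit,
    assuming each layer uses A twice (A and its inverse) and R0 once."""
    return (2 * L + 1) * n_a + L * n_o

def runtime_in_units_of_a(m_total, l_max, n_a, n_o):
    """Total runtime: shots per circuit times the summed circuit
    durations, measured in units of the duration of A."""
    shots = shots_per_circuit(m_total, l_max)
    return shots * sum(two_qubit_depth(L, n_a, n_o) / n_a
                       for L in range(l_max + 1))

print(shots_per_circuit(30000, 2))       # equal shots for L = 0, 1, 2
print(two_qubit_depth(2, n_a=10, n_o=4))
```

Because M is held fixed while Lmax grows, any improvement of the RMSE with runtime in our experiments comes from deeper circuits rather than from extra shots.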
4.1. Performance on a noisy simulator
Through simulations of noisy quantum circuits, we study how RC impacts the two features of RAE, namely enhanced sampling for improving the RMSE and error mitigation for improving the bias of estimates. In the presence of amplitude and phase damping noise and a low magnitude of residual ZZ coupling, RC-assisted RAE achieves slightly lower variances and RMSEs than RAE (figure 8(a)), although both techniques have an equivalent scaling of RMSE with runtime (figure 9(a)). In this noise regime, both techniques also have comparable biases, because our choice of B cannot resolve changes in biases at the third decimal point.
Figure 8. Error mitigation properties of RAE and RC-assisted RAE (RC-RAE), as observed by simulating enhanced sampling circuits in the presence of both incoherent and coherent noise. Each point and its error bar represent the mean and standard error of 50 estimates of Π, respectively, with A being the 4-qubit PQC (figure 3). Each estimate is obtained from B Bayesian runs and M shots, with RC using N = 50. The blue line indicates the estimate obtained by direct averaging over M shots from the SS circuit, and the blue band denotes the corresponding standard error.
By increasing the coherent noise to a moderate value, we observe that RC-assisted RAE performs significantly better than RAE. In this regime, the scaling of RMSE with runtime for RC-assisted RAE exceeds that of RAE (figure 9(b)); this signifies that RC helps RAE to improve the RMSE faster. As we have fixed M for increasing Lmax, this improvement in RMSE with increasing runtime is solely due to using more layers, and not due to increasing shots. This feature is also evident from figure 8(b), where the error bars on the low-biased estimates from our RC-assisted RAE decrease with increasing Lmax. Moreover, owing to the inherent ability of RC to mitigate the impact of coherent errors on quantum-algorithmic performance, our algorithm is able to significantly reduce bias as compared to RAE in the presence of moderate coherent noise (figure 8(b)).
Figure 9. Enhanced sampling properties of RAE and RC-assisted RAE (RC-RAE), as observed by simulating enhanced sampling circuits in the presence of both incoherent and coherent noise. Each point and its error bar represent the RMSE of 50 estimates of Π (figure 8) and the corresponding standard error, respectively.
Similar to our previous experiments [38], we notice that the estimates oscillate with Lmax (figure 8). This might be due to the impact of coherent error accumulating in some systematic manner as we include additional layers in the estimation process. These oscillations persist in RC-assisted RAE, possibly because we do not optimize the number of RC duplicates and some residual coherent errors may remain in the circuits. By allocating the same number of shots to SS and RAE, we observe that RC-assisted RAE also beats SS in the presence of low to moderate coherent error by achieving far less bias and standard deviation (figure 8).
4.2. Performance on a real hardware
Having established the importance and limitations of RC for the RAE algorithm in simulations, we now test its efficacy on real devices. We showcase results from one publicly-available IBM device: ibmq_belem. For the 4-qubit circuits, RC-assisted RAE has only a slight advantage over RAE in terms of reducing bias, and no improvement in precision is observed (figure 10(a)). Consequently, RMSEs from both techniques scale similarly with increasing runtime, with RC-assisted RAE yielding slightly smaller RMSEs than RAE (figure 10(b)); this reduction of RMSEs is solely due to the corresponding decrease in biases. As these circuits experience large coherent noise from various sources, including residual ZZ couplings and other kinds of crosstalk between qubits, the stochastic noise model achieved after Pauli twirling renders the likelihood function uninformative. In this scenario, 4-qubit enhanced sampling circuits with high L do not contribute significantly to the inference process as they suffer from extreme noise; thus we use only small values of Lmax in our plots. Our observation from figure 10(a) suggests a major limitation of RC for RAE: when coherent noise is very high, the effective noise model after RC does not assist RAE, and we are better off opting for SS.
Figure 10. Estimation of expectation values using ibmq_belem for a 4-qubit Hydrogen Ansatz (figure 3). Each point and its error bar are obtained by taking statistics over 50 independent estimates; see the captions of figures 8 and 9.
For the 2-qubit circuits, although we see a significant reduction in bias with RC-assisted RAE, no noticeable advantage occurs for the precision of estimates (figure 11(a)). The impact of coherent error sources on these 2-qubit circuits is smaller than that on the 4-qubit circuits; this enables RC to mitigate noise better with an even lower number of duplicates. A possible reason for the larger biases, both with and without RC, of the 4-qubit circuits is that the readout error correction may be less effective for more qubits. These readout errors can introduce additional bias to the estimates, which cannot be tackled by RC. The two-qubit circuits experience a noise regime where RC-assisted RAE beats SS by reducing the bias of estimates quite significantly. Consequently, RMSEs obtained from our technique decrease with runtime and are smaller than those achieved by RAE, as evident from figure 11(b).
Figure 11. Estimation of the expectation value for a 2-qubit LDCA (figure 7) using ibmq_belem. Similar to figure 10, each point and its error bar are obtained by taking statistics over 50 independent estimates. In this experiment, each estimate is obtained from B Bayesian runs and M = 8000 shots, with RC using N = 20.
4.3. Summary of results
In practice, the performance of our RC-assisted RAE algorithm greatly depends on the impact of device noise on the enhanced sampling circuits; consequently, this algorithm does not always deliver better performance than RAE or SS. The effective stochastic noise after RC can be very detrimental when the bare circuit has a high degree of coherent error. This is because, although RC successfully tailors the likelihood function into the desired form (2), the advantage of RC-assisted RAE ultimately depends on the value of f for this tailored function. A low value of f, corresponding to an effective stochastic noise of high strength, reduces the signal used in the estimation process and thus deteriorates the algorithmic performance.
From our experiments using simulators and real devices, we can thus infer the following:
- (a) RC can help improve RAE's performance if coherent errors are not high, where the high, moderate and low coherent error regimes can be algorithm dependent.
- (b) A study of the optimal number of RC duplicates is needed, since this number might impact the bias of estimates.
5. Conclusion
The excitement around near-term quantum computing is not merely academic; practitioners in the industrial world hope that quantum advantage can be scalably turned into a great boon for the world at large. One major roadblock to achieving quantum advantage on NISQ devices is the high cost of estimating expectation values of operators with the desired accuracies. Although RAE can potentially squeeze quantum coherence from deep circuits to improve the precision and bias of estimates, the presence of highly correlated noise and decoherence restricts its applicability to small circuits. In this work, we show that by altering complicated device noise using RC we can make RAE more feasible on NISQ devices.
By running RC-assisted RAE on IBM devices, both simulator and quantum processor, we propose three broad regimes for coherent noise, which is found to be the main culprit in deteriorating algorithmic performance. In the low-noise regime, the performance of our algorithm is not conclusively better than that of RAE (we see disagreements in the third decimal point); this might not be surprising, as there is little coherent noise to be removed, and we suspect that a more careful optimization of RC is needed. But it is clear that both techniques provide a significant advantage over SS. We demonstrate these effects in simulations by modelling coherent errors as residual ZZ couplings between superconducting qubits.
Upon increasing this coupling strength to a moderate magnitude, RC yields an effective stochastic noise channel that is not very destructive. Consequently, our RC-assisted RAE can be used to recover the sampling power of RAE and to further enhance its noise-mitigating property in this regime. As the coherent error increases, RC can still help to reduce the bias, but the tailored likelihood function carries very little signal. The inference procedure then loses its sampling power, and using Grover iterates does not particularly improve the scaling of the precision and RMSE, as observed by running 2-qubit circuits on today's devices.
At the extreme end, where the coherent noise is very high, the reduction in bias after RC is insignificant; thus the noise mitigation property of RAE vanishes as well, and using SS is enough. We find that 4-qubit circuits on NISQ devices fall in this regime. RC-assisted RAE thus performs differently in this high-noise regime as compared to the moderate-noise regime, where the algorithm enhances at least one of the two features of RAE.
Our work highlights the role of noise tailoring in improving the performance of NISQ algorithms that rely on quantum amplitude estimation. From our experiments on practical NISQ computers, we realize that further quantum-control techniques are essential for decreasing crosstalk errors [58, 59] and, consequently, for extending the usefulness of our RC-assisted RAE on these devices. Nevertheless, our technique can possibly make implementations of RAE on early fault-tolerant and fault-tolerant quantum computers perform more closely to the existing performance models [60]. Moreover, the performance of our algorithm can be improved by using an optimal number of random duplicates for each enhanced sampling circuit. A possible future direction is to combine the existing techniques for dealing with device noise (figure 1) in order to recover the ability of RAE to improve precision even in particularly noisy conditions.
Acknowledgments
We acknowledge insightful discussions on RAE and RC with Peter Johnson and Athena Caesura, respectively. We also thank Razieh Annabestani, Jerome Gonthier and George Umbrarescu for reviewing the manuscript. Our experiments are run using the Orquestra™ platform of Zapata Computing.
Data availability statement
The data that support the findings of this study are available upon reasonable request from the authors.
Appendix A: RAE circuits
Here we provide examples of enhanced sampling circuits used in the RAE algorithm; see figures 12, 13 and 14. These circuits are transpiled for ibmq_belem using the qiskit transpiler with optimization level 2.
Figure 12. Transpiled enhanced sampling circuits for a 4-qubit Hydrogen Ansatz (figure 3) and the measurement operator P = XXXX.
Figure 13. Transpiled enhanced sampling circuit with L = 2 layers for a 4-qubit Hydrogen Ansatz (figure 3) and the measurement operator P = XXXX.
Figure 14. Transpiled enhanced sampling circuits for a 2-qubit LDCA (figure 7) and the measurement operator P = XX.
Appendix B: RAE and RC-assisted RAE algorithms
We briefly explain RAE and RC-assisted RAE algorithms using a workflow diagram in figure 15.
Figure 15. Workflow describing how to estimate Π for a given Lmax in one Bayesian run. In practice, the Bayesian update uses an un-normalized version [38] of equation (3) as we just need the 'argmax' operation.