Testing quantum computers with the protocol of quantum state matching

The presence of noise in quantum computers hinders their effective operation. Even though quantum error correction can theoretically remedy this problem, its practical realization is still a challenge. Testing and benchmarking noisy intermediate-scale quantum (NISQ) computers is therefore of high importance. Here, we suggest the application of the so-called quantum state matching protocol for testing purposes. This protocol was originally proposed to determine if an unknown quantum state falls in a prescribed neighborhood of a reference state. We decompose the unitary specific to the protocol and construct the quantum circuit implementing one step of the dynamics for different characteristic parameters of the scheme, and present test results for two different IBM quantum computers. By comparing the experimentally obtained relative frequencies of success to the ideal success probability with a maximum statistical tolerance, we discriminate statistical errors from device-specific ones. For the characterization of noise, we also use the fact that while the output of the ideal protocol is insensitive to the internal phase of the input state, the actual implementation may lead to deviations. For systematically varied inputs, we find that the device with the smaller quantum volume performs better on our tests than the one with the larger quantum volume, while for random inputs they show a more similar performance.


I. INTRODUCTION
The fields of Quantum Computation and Quantum Information have received a huge boost in recent years with the advent of "public" quantum computation. Current devices can be accessed remotely, opening the possibility for the broader public to carry out experiments and to test the devices by running programs. Quantum computers (qcs) can be based on several different physical systems, such as superconducting qubits [1][2][3], trapped ions [4], photonic devices [5] and neutral atoms [6]. Given all these possibilities, questions such as computational efficiency, error correction capability, stability and computational power start to become important matters for future applications.
In order to discriminate among the different technologies, or to decide the optimal domain of applicability of a given quantum computer, one needs to devise "measuring sticks", or benchmarks. In the current so-called Noisy Intermediate-Scale Quantum (NISQ) era [7], the question of what constitutes a suitable benchmark becomes tricky, because we are still dealing with "unfinished" technologies: qcs that contain a lot of errors, do not support efficient error correction, and have low computational power, among other missing traits that classical computers have already overcome [8]. Indeed, an argument has been made about the current field of quantum computer benchmarking, stressing the point that we are still in the exploratory stage [9]. The last few years have brought the arrival of the first quantum benchmarks, the most prominent ones being the Quantum Volume (QV) [10,11] and the Q-score [12]. Yet, the field is only starting and there is still a long road ahead.
Current quantum benchmarks can be divided roughly into two categories (not taking into account benchmarks related to temporal stability [13]): the first is based on randomized circuits such as randomized benchmarking [14][15][16][17], the quantum volume [11], or random circuits with a certain (mirror) structure [18]; the second is based on the successful achievement of certain hallmark protocols, such as the Bernstein-Vazirani algorithm and Grover's search [19][20][21], the Bell test, the matrix inversion procedure or Schrödinger's microscope [22], as well as algorithms used in quantum chemistry [23]. Benchmarks based on randomized circuits in which a probability distribution is sampled are in general a good starting point to test qcs, but they usually average out particular errors [22]. These particular errors might become important, especially if the given quantum computer is used to perform a specific task.
Our work follows the approach of the second category, as it is based on the so-called quantum state matching protocol [24]. In a single step of this protocol a specific entangling operation is applied on a pair of qubits, both prepared in the same initial state. Then one member of the pair is measured and, depending on the result of the measurement, the other member of the pair is post-selected or discarded. This procedure leads to a complex nonlinear transformation on the post-selected qubit. Note that a similar procedure is applied to realize the Schrödinger microscope [22]. The transformation in our case is constructed such that it has two superattractive fixed points [25] (which correspond to orthogonal quantum states), with their respective basins of attraction being separated by a circle on the Bloch sphere. By iterating the protocol one can decide whether a given unknown quantum state falls in one of the basins of attraction, i.e., whether it is in the circle-shaped neighborhood of one of the superattractive states. The radius of the circle can be prescribed; in a given implementation of the scheme it determines the matrix elements of the entangling unitary, and it also affects the probability of success of the protocol.
In this paper, we show how one can employ this protocol to test qcs. In contrast to the nonlinear protocol realizing the Schrödinger microscope [22], where the dynamics does not possess any attractive cycles and thus all initial states evolve chaotically, in our scheme the nonlinear transformation has superattractive fixed points and a tuneable success probability. In this way, the protocol itself can decrease initial noise, while such fluctuations may be enhanced in the case of the Schrödinger microscope [26].
The paper is organised as follows. Section II describes the ideal protocol and introduces the specific entangling unitary that is involved in it. Then, in Sec. III we determine the optimal decomposition of this unitary into a minimum number of programmable quantum gates. In Sec. IV we describe the statistical framework used to test the quantum computers, while in Sec. V we present and analyse the results obtained from real qcs. We compare results obtained by using systematically varied inputs (Sec. V A) and randomly chosen inputs (Sec. V B), as well as results post-processed with readout error mitigation (Sec. V C). In Section VI we conclude and give an outlook on future directions. Appendix A presents the dates of the experiments discussed in the paper.

II. THE IDEAL PROTOCOL
Errors in current NISQ computers lead to deviations from the desired pure output state of a quantum computation. These errors can be systematic ones, which do not necessarily change the purity of the qubit state, or random ones, which lead to mixed outputs. One might wish to be able to decide whether such an output state is "close enough" to a desired pure state.
The quantum state matching protocol was originally proposed for this task [24]: given a pure qubit state as a reference and a circular neighborhood around it, one can design a scheme which transforms the unknown state closer to the reference state if it was originally inside the prescribed circle, or otherwise to a state that is orthogonal to the reference state.
Assuming that the unknown state is at hand in many copies, the scheme can be further iterated, and due to the superattractive nature of the transformation it can, after a few steps, match the unknown state to the reference or its orthogonal pair, which can then be discriminated.
The fast convergence of the above mentioned protocol is due to the nonlinear nature of the underlying quantum state transformation. The nonlinearity arises because one takes two copies of the unknown state |Φ0⟩, then applies a specific entangling unitary (which is determined by the reference state and the radius of tolerance), and then measures one of the qubits. If the measurement result is 0, then the other qubit undergoes a nonlinear transformation compared to its initial state. It has been proven in Ref. [24] that one can think of the entangling unitary as being composed of local rotations, which are determined by the position of the reference state on the Bloch sphere, and a two-qubit unitary, which is determined by the prescribed tolerance. A scheme containing solely this two-qubit unitary realizes quantum state matching to the reference state |0⟩. In this work, we will focus on the implementation of this latter protocol, which we describe in what follows.
Mathematically, a pure state of a two-level system can be written as

|Φ0⟩ = (|0⟩ + z|1⟩)/N0,  (1)

where z ∈ C ∪ {∞} and N0 = √(1 + |z|²) is a normalization factor. |Φ0⟩ can be represented as a point on the Bloch (or the Riemann) sphere or, equivalently, as a point on the complex plane, the two representations being related by the stereographic projection. Let us denote by ε the radius of the circular shaped tolerance region on the complex plane around the origin (representing the quantum state |0⟩). Then, let us take two qubits, both prepared in the same initial state |Φ0⟩, and apply the entangling unitary U given in Eq. (2). It can be easily seen that if one measures the second qubit to be in state |0⟩, then the state of the first qubit corresponds to the complex number

f(z) = z²/ε.

Note that this is a quadratic (nonlinear) transformation of the initial state |Φ0⟩, represented by the complex number z. If one iterates this protocol, then initial states with |z| < ε will converge to |0⟩, while those with |z| > ε will converge to |1⟩, as the complex map f(z) has two (super)attractive fixed points: 0 and ∞ (corresponding to the quantum states |0⟩ and |1⟩, respectively). The two regions of convergence (the so-called Fatou set) are separated by a circle of radius ε (the so-called Julia set) containing points which do not converge, but evolve chaotically [25], [27]. We note that the superattractivity of the quantum states |0⟩ and |1⟩ is advantageous for the protocol, as it ensures the fastest possible convergence; yet, the closer the initial unknown state is to the ε-circle, the more iterations are needed for the initial state to be matched with the reference state (with a given accuracy).
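The convergence structure described above can be illustrated numerically. The sketch below assumes the one-step map takes the form f(z) = z²/ε (which reduces to z² for ε = 1); the value ε = 0.8 and the initial points are arbitrary choices for illustration:

```python
# Iterate the nonlinear map f(z) = z^2 / eps and observe convergence
# to the superattractive fixed points 0 (for |z| < eps) and infinity
# (for |z| > eps); points with |z| = eps stay on the Julia circle.

def f(z: complex, eps: float) -> complex:
    return z * z / eps

def iterate(z0: complex, eps: float, n: int) -> complex:
    z = z0
    for _ in range(n):
        z = f(z, eps)
    return z

eps = 0.8                         # illustrative tolerance radius
inside = iterate(0.5, eps, 5)     # |z0| < eps -> converges to 0
outside = iterate(1.0, eps, 5)    # |z0| > eps -> diverges to infinity
on_circle = iterate(0.8, eps, 5)  # |z0| = eps -> stays on the circle

print(abs(inside), abs(outside), abs(on_circle))
```

Note how few iterations are needed: superattractivity means the distance to the fixed point is roughly squared in every step.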
In a quantum circuit, the iteration of the protocol (and the corresponding complex map f(z)) can be carried out as depicted in Figure 1. In order to carry out n iterations, one needs 2^n qubits, all prepared in the same initial state |Φ0⟩. For the first iteration, one forms 2^(n−1) pairs of these qubits and applies a U gate on every pair. Every subsequent iterational step requires 2 times fewer U gates compared to the previous step, so that in the complete circuit one needs to apply the U gate Σ_{j=1}^{n} 2^(j−1) = 2^n − 1 times. Note that in the original proposal [24], one needs to perform measurements on one member of each pair and then post-select only the successfully transformed qubits in each step. As this so-called "mid-circuit" measurement is not yet implemented in current commercially available quantum computers, one needs to postpone the post-selection step to the end of the quantum circuit, where all qubits are measured at once. Let us also note here that the quantum state matching protocol works in a practically analogous way for noisy inputs, i.e., when the copies of the initial state constitute a statistical mixture. In this case, the protocol can gradually decrease the noise and, with a good approximation, eventually purify the unknown state into the reference state or its orthogonal pair, in a quite similar fashion to the case of pure inputs [24].
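The resource bookkeeping above can be sketched in a few lines (the function name is ours):

```python
# Number of qubits and U gates needed for n iterations of the protocol:
# 2^n identically prepared qubits, and sum_{j=1}^{n} 2^(j-1) = 2^n - 1
# applications of the two-qubit gate U (the number of surviving pairs
# halves in every step, with post-selection deferred to the end).

def resources(n: int) -> tuple[int, int]:
    qubits = 2 ** n
    u_gates = sum(2 ** (j - 1) for j in range(1, n + 1))
    return qubits, u_gates

for n in (1, 2, 3, 4):
    q, g = resources(n)
    print(f"n={n}: qubits={q}, U gates={g}")  # g == 2**n - 1
```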
In what follows, we look for an optimal decomposition of the unitary U into elementary quantum gates, involving the lowest possible number of CNOT gates. We will show that in our case, two CNOT gates suffice. Let us note that in the special case of ε = 1, which corresponds to the transformation f(z) = z², the protocol can also be realized with a single CNOT operation; however, the radius of tolerance is then the highest possible. In order to deviate from this rather trivial case, here we will focus on cases with ε < 1.
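The ε = 1 special case can be checked with a small statevector calculation: a single CNOT followed by post-selection of the target qubit on |0⟩ indeed maps z to z². A sketch, assuming the basis ordering |00⟩, |01⟩, |10⟩, |11⟩ with the first qubit as control (the actual control/target assignment on a device is a choice):

```python
import numpy as np

# Two copies of |Phi0> = (|0> + z|1>)/sqrt(1+|z|^2), one CNOT
# (control = first qubit, target = second qubit), then post-selection
# of the second qubit on |0>: the kept qubit ends up in |0> + z^2 |1>.

z = 0.5 + 0.3j
phi0 = np.array([1.0, z]) / np.sqrt(1 + abs(z) ** 2)
state = np.kron(phi0, phi0)          # amplitudes (1, z, z, z^2)/N^2

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = cnot @ state

# Keep the components where the second qubit is |0>:
# indices 0 (|00>) and 2 (|10>).
amp0, amp1 = state[0], state[2]
print(amp1 / amp0)                   # equals z^2 = 0.16 + 0.3j
```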

III. DECOMPOSITION OF U INTO PROGRAMMABLE GATES
In order to implement the protocol in a quantum computer, we first need to decompose U (see Eq. (2)) into elementary one- and two-qubit gates. It is known that any entangling two-qubit gate in SU(4) can be decomposed into single-qubit gates and at most three CNOTs [28,29]. We will show that in our case, two CNOT gates suffice. In order to find the decomposition, we closely follow the algorithmic-like procedure presented in [30], which we briefly recall here. We note that similar decompositions can be found in Refs. [31], [32], where a further emphasis is made on determining the minimum number of gates needed from a given gate set. Here we do not restrict ourselves to such a set of single-qubit gates, as the actual physically implementable gates differ between qcs. Instead, we accept that a further transpilation step will determine which native gates realize the single-qubit gates in our decomposition.
As derived from Cartan's KAK decomposition by Khaneja et al. [33,34], any matrix U_AB ∈ SU(4) can be decomposed as

U_AB = (A_1 ⊗ B_1) U_ent (A_2 ⊗ B_2),  (4)

where A_l, B_l ∈ SU(2) (l = 1, 2) are local unitaries. The entangling part of U_AB is contained in the matrix

U_ent = exp( i Σ_{j=1}^{3} k_j σ_j ⊗ σ_j ),  (6)

with k = (k_1, k_2, k_3) ∈ R³ and σ_j (j = 1, 2, 3) being the Pauli matrices. The procedure for the decomposition consists of the following steps.
1. Transform the unitary matrix U_AB into the so-called magic basis using the matrix

M = (1/√2) [[1, 0, 0, i], [0, i, 1, 0], [0, i, −1, 0], [1, 0, 0, −i]]

as U = M† U_AB M.

2. Separate U into its real (U_R) and imaginary (U_I) parts. Here we note that U_R and U_I are real matrices (not unitary), which, due to the unitarity of U, possess the properties

U_R U_R^T + U_I U_I^T = 1,   U_R U_I^T = (U_R U_I^T)^T,   U_R^T U_I = (U_R^T U_I)^T.

Consequently, according to a theorem by Eckart and Young [35], one can find a pair of unitary matrices (V_A, X_A) with which a joint diagonalization of the U_R and U_I matrices is possible.
In order to do that, one can first determine an SVD of U_R, namely

U_R = V_A D X_A†,  (9)

where D is a real diagonal matrix that contains the singular values (which are all non-negative) in its diagonal, while V_A, X_A are unitary matrices.
3. Convert the imaginary part U_I using the SVD decomposition of U_R as

U_I → V_A† U_I X_A.  (10)

Note that the proof of the theorem by Eckart and Young in Ref. [35] also shows that the transformed U_I is Hermitian and commutes with D.
4. Diagonalize U_I so that

U_I = P G P†,  (11)

with G being the real diagonal matrix that contains the eigenvalues of U_I, and P being composed of the corresponding eigenvectors of U_I. Since D and U_I commute, D also commutes with P and P†. Thus we can write U as

U = V_A P (D + iG) P† X_A† = Q_L (D + iG) Q_R,

where we have defined Q_L = V_A P and Q_R = P† X_A†.
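Steps 1–4 can be checked numerically for a generic two-qubit unitary. The sketch below uses one common convention for the magic-basis matrix (conventions in the literature differ by permutations and phases, so it may not be the exact M of this paper); the symmetry and commutation properties it verifies follow from unitarity and hold regardless of that convention:

```python
import numpy as np

# Magic-basis matrix (Makhlin-type convention; may differ from the
# paper's M by a permutation and/or phases).
M = np.array([[1, 0, 0, 1j],
              [0, 1j, 1, 0],
              [0, 1j, -1, 0],
              [1, 0, 0, -1j]]) / np.sqrt(2)

rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)            # random 4x4 unitary, stand-in for U_AB

U = M.conj().T @ Q @ M            # step 1: transform to the magic basis
UR, UI = U.real, U.imag           # step 2: real and imaginary parts

# Joint-diagonalizability conditions implied by unitarity of U:
assert np.allclose(UR @ UI.T, (UR @ UI.T).T)         # U_R U_I^T symmetric
assert np.allclose(UR.T @ UI, (UR.T @ UI).T)         # U_R^T U_I symmetric
assert np.allclose(UR @ UR.T + UI @ UI.T, np.eye(4))

# Step 3: SVD of the real part, U_R = V_A D X_A^dagger, then transform U_I.
V, d, Xh = np.linalg.svd(UR)
W = V.T @ UI @ Xh.T               # transformed imaginary part
assert np.allclose(W, W.T)                           # symmetric
assert np.allclose(W @ np.diag(d), np.diag(d) @ W)   # commutes with D
print("magic-basis decomposition checks passed")
```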
(Let us note here that in steps 2 and 3 the roles of the U_R and U_I matrices can be interchanged, i.e., one can first determine the SVD of U_I, then transform U_R, and proceed analogously to diagonalize U.)

5. Transform U back to the original basis. Note that D + iG is a diagonal matrix and, since U is unitary, D + iG is also unitary; therefore the elements in the diagonal of D + iG can be written as e^{iΦ_j} (j = 0, 1, 2, 3). The change to the original basis, M(D + iG)M†, is equivalent to a linear transformation of the phases Φ_j into the coefficients (k_0, k_1, k_2, k_3) [30]. The local unitaries A_l and B_l of Eq. (4) in the original basis are obtained from Q_L and Q_R by transforming the latter back to the original basis with M.

In what follows we present the decomposition of the U operation obtained by the above mentioned procedure. Since U contains many zeros, one can find analytic expressions for all the terms in Eq. (4). Let us introduce the notation ε ≡ cos α (α ∈ [0, π/2)), with which Eq. (2) can be rewritten as a function of α. For a generic value of α (excluding the case α = π/4, which, for simplicity, we do not detail here as it leads to a different decomposition), the singular values of the matrix U_R (see Eq. (9)) can be found analytically; they are functions of the quantity r = √(8 + sin²(2α)). We arrange the singular values in D and, accordingly, the corresponding right singular vectors x_j (j = 1, 2, 3, 4) as the columns of X_A; their entries are determined by two quantities y_1, y_2, with normalization factors N_j = √(1 + y_j²) (j = 1, 2). (Note that one is free to choose the ordering of the singular values in D as long as the corresponding right and left singular vectors are arranged accordingly.) Then, using the matrices V_A and X_A, one finds the form of the transformed U_I from Eq. (10). The eigenvalues of U_I are easily obtained, and thus G can be given (c.f. Eq. (11)). It is easy to see that the matrix P, which is composed of the eigenvectors of U_I, can be given by P = H ⊗ H, where H is the 2 × 2 Hadamard matrix.
Using Eqs. (18) and (23), we can determine the phases Φ_j, from which we can also calculate the phases k_j using Eq. (13). We find that k_0 = k_1 = 0, and only k_2 and k_3 are nonzero.
Up to this point, we have determined the decomposition of U into the entangling part U_ent and the local parts M Q_L M† and M Q_R† M†, as a function of α (or, equivalently, as a function of ε). The local parts can further be decomposed into single-qubit transformations. Since Q_L = V_A P and Q_R = P† X_A†, we can look for the A_1, A_2, B_1, B_2 matrices as products of the decompositions of the respective constituting matrices. In our case, V_A and X_A† are SO(4) matrices of the form of Eq. (19), i.e., they have nonzero elements in a chessboard pattern (there are entries only in the main diagonal and the ±2-diagonals). Such matrices, when transformed back to the original basis, factorize into tensor products of single-qubit unitaries, e.g., M V_A M† = A ⊗ B, where the entries x_i, y_i of the single-qubit factors satisfy |x_i|² + |y_i|² = 1 (similarly for the case of M X_A† M†). The local operations in the decomposition of U can readily be implemented in quantum computers. U_ent, however, cannot be directly realized; thus one needs to look for an optimal decomposition for it in terms of programmable one- and two-qubit quantum gates.
According to Theorem 2 of Ref. [29], U_ent can be realized with two CNOTs (plus some one-qubit gates) if it can be written in the canonical form

exp[ i (h_1 σ_1 ⊗ σ_1 + h_2 σ_2 ⊗ σ_2) ].  (29)

In our case, for a generic value of ε, we find that k_1 = 0 and k_2, k_3 < 0, so that Eq. (6) can be written as

U_ent = exp[ i (k_2 σ_2 ⊗ σ_2 + k_3 σ_3 ⊗ σ_3) ],  (30)

which is different from Eq. (29) in that (i) in the exponent the term with σ_1 ⊗ σ_1 is missing, while σ_3 ⊗ σ_3 is present, and (ii) a priori we do not know whether |k_2| > |k_3| or |k_3| > |k_2|.
Nonetheless, it is possible to transform U_ent to the form of Eq. (29) by performing simultaneous local rotations of the form R_i(µ) ⊗ R_i(µ), where R_i(µ) = exp(−iµσ_i). In this way one can change σ_j ⊗ σ_j to σ_k ⊗ σ_k in the desired way, which can be considered as a swapping of the corresponding components of the vector k. (We note here that this is analogous to saying that the canonical class vector k of U_ent is equivalent to the canonical class vector h of the unitary in Eq. (29) [30], resulting in the fact that the two unitaries have the same entangling power.) The action of the aforementioned transformations can be seen by applying them on the terms of the Taylor expansion of the exponential in Eq. (30) and then using the identities

σ_i σ_j = δ_ij 1 + i Σ_k ε_ijk σ_k,

where ε_ijk is the Levi-Civita symbol.
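This swapping can be verified explicitly. A sketch: since (σ_j ⊗ σ_j)² = 1, the exponential reduces to exp(iθ σ_j ⊗ σ_j) = cos θ 1 + i sin θ σ_j ⊗ σ_j, and conjugation by R_1(π/4) ⊗ R_1(π/4) turns σ_3 ⊗ σ_3 into σ_2 ⊗ σ_2 (the angle θ below is arbitrary):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def exp_sigma(theta: float, s: np.ndarray) -> np.ndarray:
    """exp(i*theta * s (x) s), using (s (x) s)^2 = identity."""
    ss = np.kron(s, s)
    return np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * ss

def R(mu: float, s: np.ndarray) -> np.ndarray:
    """Single-qubit rotation exp(-i*mu*s) for a Pauli matrix s."""
    return np.cos(mu) * I2 - 1j * np.sin(mu) * s

theta = 0.37                      # arbitrary angle for the check
L = np.kron(R(np.pi / 4, s1), R(np.pi / 4, s1))
conjugated = L @ exp_sigma(theta, s3) @ L.conj().T

assert np.allclose(conjugated, exp_sigma(theta, s2))
print("sigma_3 x sigma_3 mapped to sigma_2 x sigma_2")
```

The single-qubit conjugation sends σ_3 to −σ_2, and the two minus signs cancel in the tensor product, realizing the component swap without changing the entangling power.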
The extra local rotations that we use to transform U_ent to the form of Eq. (29) must be compensated in the full decomposition of U. This can be achieved by adding the respective inverse rotations, which will then multiply the local unitaries A_j and B_j. We can thus define new local unitaries Ā_j and B̄_j, and write the final decomposition of U containing only programmable quantum gates (see also Fig. 2). In fact, in the actual implementation we aim to reduce the number of local unitaries, therefore we apply only a single unitary on every qubit before and after the central CNOTs (e.g., only Ā_1 = A_1 w acts on qubit 1 after the second CNOT, where w denotes one of the compensating rotations). We note that the state preparation of the qubits is carried out before the application of U.

IV. STATISTICAL FRAMEWORK FOR POST-SELECTION
We are interested in a quantitative approach for testing real quantum computers. In the NISQ era, the impact of noise and errors in the devices is of special importance. Most of the time, it is not even clear how one can construct a theoretical model that takes into account all of them in a realistic way [36]. Moreover, different quantum computers possess different sources of errors. For example, a great part of the errors in superconducting qcs comes from the readout step [21,[37][38][39]. On the other hand, in quantum computers based on cold atoms, a potential source of noise is the all-to-all qubit interaction [20].
Indeed, in [36] a wide spectrum of errors has been identified, ranging from environmental interactions and qubit interactions to imperfect operations and detection errors. All of them may be present in a quantum experiment in a nontrivial way. They can also vary in a complex manner as a function of time, even in the same device. If one would like to assess the performance of different quantum devices, a quantitative benchmark comes in handy to compare various qcs.
In our approach we will consider two quantities: the success probability and the transformed quantum state after a given number of iterations. We will analyze the performance of the tested quantum computers by using the experimentally obtained relative frequencies to estimate these quantities, which we then compare to their ideal theoretical counterparts.
Here we parameterize the input state with spherical Bloch-sphere coordinates (θ_0, φ_0), which are related to the parameterization used in Eq. (1) by z = e^{iφ_0} tan(θ_0/2). Then the transformed state |Φ_n⟩ after n steps of the protocol reads as

|Φ_n⟩ = cos(θ_n/2)|0⟩ + e^{iφ_n} sin(θ_n/2)|1⟩,  with tan(θ_n/2) = tan^{2^n}(θ_0/2)/ε^{2^n − 1} and φ_n = 2^n φ_0.

One can see that initial states with the same initial θ_0 but different φ_0 angle (i.e., states from a circle at a given latitude) will be mapped to states with the same θ_n, i.e., to another circle at a different latitude. Consequently, if we measure the final kept qubit only in the computational basis, then, in an ideal case, only the value of θ_0 should affect the measurement results. We will utilize this property of the protocol to test the performance of qcs.
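The latitude-preserving structure can be checked numerically, assuming the one-step map f(z) = z²/ε, whose n-fold iterate gives z_n = z_0^{2^n}/ε^{2^n − 1}. A sketch with arbitrary parameter values:

```python
import numpy as np

def f(z: complex, eps: float) -> complex:
    return z * z / eps

eps, n = 0.8, 3
theta0, phi0 = 1.0, 0.7

# Direct iteration of the map on z = e^{i phi} tan(theta/2)
z = np.exp(1j * phi0) * np.tan(theta0 / 2)
for _ in range(n):
    z = f(z, eps)

# Closed form: z_n = z_0^{2^n} / eps^{2^n - 1}
k = 2 ** n
r_closed = np.tan(theta0 / 2) ** k / eps ** (k - 1)
phi_closed = (k * phi0 + np.pi) % (2 * np.pi) - np.pi  # wrapped to (-pi, pi]

assert np.isclose(abs(z), r_closed)
assert np.isclose(np.angle(z), phi_closed)
print("closed form matches direct iteration")
```

Since |z_n| depends only on θ_0 (and ε) while the phase is simply multiplied by 2^n, a φ_0-dependence in computational-basis statistics can only come from device imperfections.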
The ideal (theoretical) success probability p_s^(n) of performing n iterations of the protocol can be expressed analytically [24]; it depends on θ_0 and ε through powers of the form cos^{2^{n+1}−2}(θ_0/2). It is important to note here that p_s^(n) is independent of the initial angle φ_0; thus, in the case of an ideal (noise-free) quantum circuit, p_s^(n) should not vary if we change φ_0. We will use this fact as one of our test tools.
Another aspect we aim at is to be able to distinguish statistical errors from device-specific errors [22]. The first type of errors is independent of the physical system; they are due to the finite number of experiments that can be realized in any quantum system [40]. The second type of errors are those which are specific to the physical system itself (e.g., quantum gate errors and readout errors). From a benchmarking perspective, these latter are the ones we look for; indeed, in [22] it was pointed out that a quantitative benchmark score should be able to separate the two. Since, for a given ε, p_s decreases double-exponentially with each iteration, in what follows, we will focus only on one step of the protocol. Let us denote the relative frequency of success after the first step by p_s^(e) = N/M, where N is the number of favorable outcomes out of M runs. In order to quantify statistical errors, we can introduce the usual sigma notation with the help of the standard deviation σ = √(p_s(1 − p_s)/M). We will mostly use a 3σ tolerance, meaning that more than 99% of the experiments should fall in the interval [p_s − 3σ, p_s + 3σ]. This implies that relative frequencies outside this interval are caused by device errors with high probability. In the next section, we investigate some quantitative properties of two IBM quantum computers using this method.
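The acceptance interval is straightforward to compute; a sketch with illustrative numbers (p_s = 0.5 and M = 10 000 shots are arbitrary choices):

```python
import math

def tolerance_interval(p_s: float, shots: int, n_sigma: float = 3.0):
    """Return the [p_s - n*sigma, p_s + n*sigma] acceptance interval
    for the relative frequency of success, with sigma = sqrt(p(1-p)/M)."""
    sigma = math.sqrt(p_s * (1.0 - p_s) / shots)
    return p_s - n_sigma * sigma, p_s + n_sigma * sigma

low, high = tolerance_interval(0.5, 10_000)
print(low, high)   # approximately (0.485, 0.515); observed frequencies
                   # outside this interval point to device-specific errors
```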

V. IMPLEMENTATION OF THE QUANTUM STATE MATCHING PROTOCOL
For the implementation of one step of the protocol, we constructed the 2-qubit quantum circuit of Fig. 2 on two freely available IBM devices: ibmq_manila and ibmq_lima. The reason for choosing these devices is their significantly different Quantum Volumes (QV) [1]: ibmq_manila has a QV of 32, while ibmq_lima has a QV of 8. The QV is a currently widely used measure of performance of quantum devices. Our purpose is to see if the differences suggested by the value of the QV are also reflected in the results of our tests with the quantum state matching protocol.

A. Systematically varied inputs
First, we investigate how the relative frequency of success p_s^(e) in the different experiments compares to the ideal success probability. In our two-qubit circuit, p_s^(e) can be determined by taking the counts corresponding to measuring the states |00⟩ and |10⟩ at the end of the circuit. We note that in our implementations we always set the first qubit as the one which we want to transform, and the second as the one according to which we post-select; thus, cases where the second qubit is measured to be |1⟩ are not considered to be successful events. Furthermore, we also paid attention to the circuit topology: we always used neighboring qubits and allowed for transpilation optimizations whenever possible.
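This post-processing step can be sketched as follows. Note that the bit ordering of measurement outcomes is backend dependent (Qiskit, for instance, returns bitstrings with qubit 0 rightmost), so the character position of the post-selection qubit in the string is an assumption to be checked per device; the counts dictionary below is hypothetical:

```python
# Relative frequency of success from a counts dictionary: a run is
# successful when the post-selection qubit was measured in |0>.
# `ancilla_pos` is the character position of that qubit in the
# bitstring, which depends on the backend's bit ordering.

def success_frequency(counts: dict[str, int], ancilla_pos: int = 1) -> float:
    total = sum(counts.values())
    good = sum(c for outcome, c in counts.items()
               if outcome[ancilla_pos] == "0")
    return good / total

# Hypothetical counts for a 2-qubit circuit, 1000 shots in total:
counts = {"00": 400, "01": 100, "10": 300, "11": 200}
print(success_frequency(counts))   # (400 + 300) / 1000 = 0.7
```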
In order to test the devices, we ran the circuit for four different cases of ε, systematically varying the input angles (θ_0, φ_0) in every case. The results are shown in Fig. 3.
It can be seen in Fig. 3 that the higher the value of ε, the closer the relative frequency of success p_s^(e) lies to the ideal success probability. For the lower values of ε, p_s^(e) falls outside the shaded blue region representing the statistical tolerance. This feature reveals that device errors, clearly distinguishable from statistical errors, are present in this quantum computer.
The fact that p_s^(e) overestimates p_s for every (θ_0, φ_0) in these two cases of ε suggests that the decay of the qubits from the excited to the ground state, especially the transition |01⟩ → |00⟩, might affect the results, as these processes lead to extra counts for the state |00⟩ even though they do not represent a successful implementation of the protocol.
We note that our results suggest that lower values of ε or a higher number of iterations would result in even larger deviations of p_s^(e) from the theoretical values, indicating that device errors are quite significant in the tested qcs. Yet, we can assess that the technology has shown important improvements compared to previous analyses [41], where most of the time "raw" experiments did not yield successful results.
We can also investigate the performance of the devices by reconstructing the transformed quantum state, i.e., by determining the angle θ_1^(e) from the experimentally obtained relative frequencies as a function of the initial θ_0. One can see that the results of both devices deviate from the ideal value for most inputs. For lower values of θ_0 (θ_0 ≲ π/4), the angle θ_1 is generally overestimated. This artifact may be related to measuring extra counts in |10⟩, resulting from, e.g., a |11⟩ → |10⟩ decay process. We will show in Sec. V C that some part of these errors can be mitigated by applying a readout error mitigation scheme. The remaining deviations may be caused by other processes (such as imperfect state preparation and gate operations), which we do not attempt to model in this work.
A striking difference between the two devices is that, while in Fig. 3 ibmq_manila seems to perform better in terms of the estimated success probability, when estimating the quantum state itself (with the angle θ_1^(e)), the results from ibmq_lima are more regular than those of ibmq_manila. This indicates that ibmq_lima is more stable than ibmq_manila in terms of the overall operations, as well as over time.

B. Randomly chosen φ 0 inputs
As we have mentioned in Sec. IV, the theoretical success probability p_s is independent of φ_0 (for a fixed ε and θ_0). In an actual implementation on a qc, there may be errors at any part of the circuit during its execution, which may lead to a φ_0-dependent p_s^(e). To test this, we ran the circuit with randomly chosen φ_0 inputs; the results are shown in Fig. 5. This figure can be considered as a complement of Fig. 3: some features are similar in the overall picture (e.g., lower values of ε result in weaker performance), but we also find some differences arising from using a random ensemble of inputs. For instance, if one looks at the ε = 0.6 and 0.7 cases, it can be seen that the p_s^(e) values lie closer to the statistical tolerance region than for the systematically varied inputs. This suggests that using a random set of inputs might conceal some errors. Consequently, the results in a NISQ system can be heavily influenced by choosing appropriate sets of parameters for a given quantum circuit: some can reveal an optimistic performance (as suggested by the random inputs and the QV values), or they can reveal device errors (as suggested by our results using systematic inputs and gate sets).

C. Readout error mitigation
One of the problems of NISQ systems is the reduced number of qubits available. In order to implement a universal fault-tolerant quantum computer, one would need about 10^6 physical qubits with low error rates and long coherence times [7]. At the time of writing this paper, the largest IBM quantum device is the ibmq_washington with 127 qubits. It is clear that in the near future fault-tolerant quantum computers will not be available. If one is interested in reliable results from NISQ quantum computers, other frameworks of error reduction must be utilized. It is known that in superconducting qubits a significant amount of errors comes from the detection step [21]. One way to reduce such errors is to use a readout error mitigation procedure [21,42]. In this section, we apply such a scheme to our results.
Without loss of generality, we only provide the analysis for the case of randomly chosen initial values of φ 0 , corresponding to the results shown in Fig. 5.
Here we focus on the impact of readout error mitigation on the experimentally estimated values of θ_1. Figure 6 shows the results without (left column) and with (right column) error mitigation, as a function of the initial angle θ_0; results from ibmq_manila (ibmq_lima) are displayed in the top (bottom) row. The results were obtained from the same data as those in Fig. 5. The mitigation matrices [42] used in the procedure were determined directly by carrying out the full tomography of the two qubits used in the circuit. It can be seen that the readout error mitigation procedure can significantly improve the results, even though the tomography could not be done at the same time as the experiment. Our results are consistent with the assumption that the readout step in these devices is indeed quite erroneous.
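The basic idea of mitigation with such matrices can be sketched as a linear-inversion example. The calibration matrix and distributions below are hypothetical; production schemes typically use constrained least squares rather than a plain inverse, since inversion can produce small negative quasi-probabilities:

```python
import numpy as np

# Columns of A: prepared basis state; rows: measured outcome.
# A[i, j] = P(measure i | prepared j), estimated from calibration runs
# (hypothetical values; each column sums to 1).
A = np.array([[0.92, 0.05, 0.06, 0.01],
              [0.03, 0.90, 0.01, 0.05],
              [0.04, 0.01, 0.89, 0.04],
              [0.01, 0.04, 0.04, 0.90]])

p_ideal = np.array([0.5, 0.0, 0.3, 0.2])   # hypothetical true distribution
p_raw = A @ p_ideal                        # what the noisy readout reports

# Mitigation by solving A p = p_raw (linear inversion)
p_mitigated = np.linalg.solve(A, p_raw)
assert np.allclose(p_mitigated, p_ideal)   # recovers the ideal distribution
print(np.round(p_mitigated, 3))
```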

VI. SUMMARY AND OUTLOOK
We have presented implementations of the quantum state matching protocol on two IBM quantum computers based on superconducting qubits. The most important ingredient of the protocol is a specific two-qubit entangling unitary U . We have determined the optimal decomposition of U with programmable quantum gates, and implemented the quantum circuit corresponding to one step of the protocol for systematically varied and randomly chosen input parameters as well.
We found a qualitative disagreement between the Quantum Volume of the devices and our results, obtained by the implementation of the quantum state matching protocol. We also showed that randomly chosen inputs may change the results and give the impression of a better performance. Further, we have also implemented a readout error mitigation procedure on our results which removed some of the errors, suggesting that detection errors are indeed quite significant in these superconducting qcs.
In our analysis, we determined the relative frequency of success experimentally and compared it to the ideal theoretical success probability with ±3σ statistical tolerances to test how well a quantum circuit is implemented in current qcs. We showed that already in the simplest cases of our protocol, and with this very simple quantity, the tested devices behaved opposite to what one would have expected from their Quantum Volumes [11,43]. This suggests that our test is more demanding than that used for the calculation of the QV. The success probability in our protocol can be tuned to any value, while in the Quantum Volume framework, the "success probability" always includes all the heavy outputs coming from the full probability distribution of the final state.
The most important advantage of our protocol in testing quantum computers is that it is scalable: irrespective of the number of steps, the expected result can be classically calculated, despite the fact that, in a qc realization, the protocol requires 2 n qubits for n steps. In a future work we shall test more iterations of our protocol on larger devices, based on several different physical systems as well (including IonQ or Rigetti).