Abstract
The hybrid quantum–classical learning scheme provides a prominent way to achieve quantum advantages on near-term quantum devices. A concrete example toward this goal is the quantum neural network (QNN), which has been developed to accomplish various supervised learning tasks such as classification and regression. However, two central issues remain obscure when QNNs are exploited for classification tasks. First, a quantum classifier that balances the computational cost, such as the number of measurements, against the learning performance has not been explored. Second, it is unclear whether quantum classifiers can solve certain problems better than their classical counterparts. Here we devise a Grover-search based quantum learning scheme (GBLS) to address these two issues. Notably, most existing QNN-based quantum classifiers can be seamlessly embedded into the proposed scheme. The key insight behind our proposal is reformulating classification tasks as search problems. Numerical simulations show that GBLS achieves performance comparable to other quantum classifiers under various noise settings, while the required number of measurements is dramatically reduced. We further demonstrate a potential quantum advantage of GBLS over classical classifiers in the measure of query complexity. Our work provides guidance for developing advanced quantum classifiers on near-term quantum devices and opens an avenue to explore potential quantum advantages in various classification tasks.
Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
The field of machine learning has achieved remarkable success in computer vision, natural language processing, and data mining [1]. Recently, the physics community has shown increasing interest in using machine learning methods to solve complicated physics problems, e.g. classifying phases of matter and simulating quantum systems [2–4]. Beyond the revolutionary influence of machine learning on the physics world, another rising field that tightly binds machine learning with physics is quantum machine learning, whose goal is to solve specific tasks beyond the reach of classical computers [5].
To better understand how quantum computing facilitates machine learning tasks, it is desirable to devise quantum algorithms that can solve fundamental machine learning problems with quantum advantages [5]. For example, the quantum linear systems algorithm (a.k.a. the HHL algorithm) enables linear systems of equations to be solved with an exponential speedup over classical counterparts [6]. By employing the HHL algorithm as a subroutine, many quantum machine learning algorithms with exponential speedups have been proposed, e.g. quantum principal component analysis [7], quantum singular value decomposition [8], quantum non-negative matrix factorization [9], and quantum regression [10]. However, these algorithms can only be executed on a fault-tolerant quantum computer with access to quantum random access memory [6], which remains a rather distant prospect.
As we approach the noisy intermediate-scale quantum (NISQ) era, it is intriguing to explore whether there exists a quantum algorithm that can not only solve fundamental learning problems with promised quantum advantages but can also be efficiently implemented on near-term quantum devices [11]. One of the most promising candidates is the quantum neural network (QNN), also referred to as a variational quantum algorithm [12–14]. Concretely, a QNN is composed of a variational quantum circuit that prepares quantum states and a classical controller that performs the optimization [13, 15]. Partial evidence supporting this claim is the theoretical result that the probability distribution generated by the variational quantum circuit used in QNN cannot be efficiently simulated by classical computers [16–18]. Driven by the strong expressive power of quantum circuits and the similar working philosophy of QNNs and classical deep neural networks (DNNs), it is natural to explore whether QNNs realized on near-term quantum computers can accomplish certain machine learning tasks with better performance than classical learning algorithms.
A central application of QNN, analogous to DNN, is tackling classification tasks [1]. Many real-world problems fall into this category, e.g. the recognition of hand-written digits, the characterization of different creatures, and the discrimination of quantum states. For binary classification, given a dataset
with N examples and M features in each example, QNN aims to learn a decision rule f θ (⋅) that correctly predicts the label of the given dataset , i.e.
where θ refers to the trainable parameters and is the indicator function that takes the value 1 if the condition z is satisfied and zero otherwise. Recently, QNNs with various quantum circuit architectures and optimization methods have been proposed to accomplish such classification tasks. In particular, references [19–21] devised amplitude-encoding based QNNs to classify the Iris dataset and hand-written digit images; references [22–24] developed kernel-based QNNs to classify synthetic datasets; and reference [25] proposed a convolution-based QNN to tackle quantum state discrimination tasks. When no confusion can arise, we use the term 'quantum classifier' in the rest of the study to denote QNNs used to accomplish classification tasks as defined in equation (2).
Despite the promising heuristic results mentioned above, very few studies have theoretically explored the power of quantum classifiers. A notable theoretical result about quantum classifiers is the trade-off between the computational cost (i.e. the number of measurements) and the training performance indicated by [13]. Denote as the loss function employed in quantum classifiers, where θ (t) refers to the trainable parameters at the tth iteration and is the given dataset with N samples in total. As shown in figure 1, when the batch gradient descent method is employed to optimize the loss function, the updating rule of the trainable parameters follows
where η is the learning rate, refers to the ith batch with and , and B denotes the number of batches. Define
as the utility measure that evaluates the distance between the optimized result and the stationary point in the optimization landscape. The following theorem summarizes the utility bound R1 of quantum classifiers.
Theorem 1 (Modified from theorem 1 of [13]). Quantum classifiers under the depolarization noise setting output after T iterations with the utility bound
where M is the number of measurements to estimate the quantum expectation value, LQ is the circuit depth of variational quantum circuits, p is the rate of the depolarization noise, and B is the number of batches.
The result of theorem 1 indicates that a larger number of batches B ensures a better utility bound R1, at the price of an increased total number of measurements. For example, when B = N, we have for ∀i ∈ [N] and each sample z j is sequentially fed into the variational quantum circuits to acquire that estimates . Once the set is collected, the gradients can be estimated by . Suppose that estimating the derivative of the jth parameter θ j , i.e. , requires M measurements; then the total number of measurements to acquire is NM. Therefore, the estimation of , which includes d parameters, requires NMd measurements. Such a cost becomes unaffordable for large N. However, the trade-off between the utility R1 and the computational efficiency caused by varying the number of batches B has not been considered in previous quantum classifiers, most of which focus only on the setting B = N. How to design a quantum classifier that attains a good utility R1 at a low computational cost is unknown.
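As a concrete sanity check on the counting above, the sketch below tallies the measurements needed for one full-gradient estimate. The function name and the numeric values of N, M, and d are illustrative assumptions, not the paper's experimental settings; only the N·M·d scaling is taken from the text.

```python
def gradient_measurements(num_samples: int, shots: int, num_params: int) -> int:
    """Measurements to estimate the full gradient once: each of the d
    parameters is estimated with M shots for every one of the N samples,
    giving N * M * d measurements in total."""
    return num_samples * shots * num_params

# Illustrative values: N = 200 samples, M = 10 shots, d = 12 parameters.
total = gradient_measurements(200, 10, 12)  # 24000 measurements per gradient estimate
```

The linear growth in N is exactly what makes the full-batch setting unaffordable for large datasets.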
Another theoretical issue toward quantum classifiers is that none of the previous results have explored their potential advantages compared with classical counterparts. This questions the necessity of employing quantum classifiers because no benefit can be offered. Under the above observations, it is highly desirable to develop a quantum classifier that can not only achieve a good utility R1 using a low computational cost, but can also possess certain quantum advantages compared with classical classifiers.
Here we devise a Grover-search based learning scheme (GBLS) to address the above two issues under the NISQ setting. Our proposal has the following advantages. First, GBLS is a flexible and effective learning scheme, which enables the optimization of different quantum classifiers with a varied number of batches B. Note that the choice of the encoding method and the variational ansatz used in GBLS is very flexible, covering a wide range of proposed quantum classifiers [20–24]. Moreover, the Grover-search based machinery is only required in the training process; the prediction of a new input is completed using only the optimized variational quantum circuits, which ensures its efficiency. Second, we prove that the query complexity can be quadratically reduced over classical counterparts in the optimal setting (see theorem 2) when GBLS is applied to specific binary classification tasks. Last, numerical simulation results demonstrate that GBLS accomplishes binary classification tasks well even when system noise and a finite number of quantum measurements are considered (see section 3). Notably, the required number of measurements of GBLS is dramatically smaller than that of other advanced quantum classifiers [22–24] with competitive performance (see table 1). In other words, GBLS is a powerful protocol that allows quantum classifiers to achieve a good utility bound R1 at a low computational cost.
Table 1. The basic information of different quantum classifiers. The notations T, K, M, N, and d refer to the number of epochs, the batch size (i.e. in our simulation K = 4), the number of measurements used to estimate quantum expectation value, the total number of training examples, and the total number of trainable parameters.
Methods | MSE_batch | MSE | BCE | GBLS |
---|---|---|---|---|
Number of batches B | N | N | ||
Number of measurements | O(TMNd) | O(TMNd) |
The central concept in GBLS is reformulating the classification task as a search problem. Although the advantage held by the quantum Grover-search algorithm is evident, how to transform a classification task into a search problem is not obvious; this reformulation is the main technical contribution of this study. Recall that Grover-search [26] identifies the target element i* in a database of size K by iteratively applying a predefined oracle and a diffusion operator to the input state. GBLS, as shown in figure 2, employs a specified variational quantum circuit and a multiple-controlled gate along the Z axis (MCZ) to replace the oracle Uf . In particular, the variational quantum circuit conditionally flips a flag qubit (the black dot highlighted by the pink region in figure 2) depending on the training data. The flag qubit is then employed as a part of the MCZ gate to guide a Grover-like search algorithm to identify the index of the specified example, i.e. the state of the flag qubit, '0' or '1', determines the success probability of identifying the target index. Through optimizing the trainable parameters of the variational quantum circuits, GBLS aims to maximize the success probability of sampling the target index when the corresponding training example is positive; otherwise, GBLS minimizes this success probability. The property inherited from the Grover-search algorithm allows our proposal to achieve an advantage in terms of query complexity when the binary classification task involves a searching constraint (see section 2.3 for details). Besides the computational merit, GBLS is insensitive to noise, as guaranteed by the fact that combining a variational learning approach with Grover-search can preserve a high probability of success in finding the solution under the NISQ setting [27].
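In the optimal setting, the success probability of the underlying Grover iteration can be tracked classically. The sketch below (plain Python, independent of any quantum library) reproduces the textbook amplitude-amplification formula that GBLS inherits; the function name is ours, and K = 4 mirrors the setting used later in the paper's simulations.

```python
import math

def grover_success_prob(K: int, iterations: int) -> float:
    """Probability of sampling the marked index out of K candidates after
    the given number of Grover iterations: sin^2((2*iterations + 1) * gamma),
    where sin(gamma) = 1 / sqrt(K)."""
    gamma = math.asin(1.0 / math.sqrt(K))
    return math.sin((2 * iterations + 1) * gamma) ** 2

# For K = 4, gamma = pi / 6, so a single iteration already boosts the
# success probability from 1/4 to 1.
p0 = grover_success_prob(4, 0)  # 0.25: uniform guess before amplification
p1 = grover_success_prob(4, 1)  # 1.0: after one iteration
```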
2. Grover-search based learning scheme
The outline of this section is as follows. In subsection 2.1, we first elaborate on the implementation details of the proposed GBLS as depicted in figure 2. We then explain how the trained GBLS predicts the label of a new input with O(1) query complexity in subsection 2.2. We last explain how GBLS can solve certain learning problems with potential advantages in subsection 2.3.
2.1. Implementation
In the preprocessing stage, GBLS employs the dataset defined in equation (1) to construct an extended dataset . Compared with the original dataset , the cardinality of each training example in is enlarged to K. For the purpose of applying the Grover-search algorithm to locate the target index i* = K − 1, the construction rule for the kth extended training example for all k ∈ [N] is as follows. The mathematical representation of is
The last pair in corresponds to the kth example of , i.e. . The first K − 1 pairs in are uniformly sampled from a subset of whose labels are all opposite to yk . Note that the construction of this subset is efficient. Since yk ∈ {0, 1}, we can construct two subsets and that contain only the examples of with label '0' and label '1', respectively, where . When yk = 0, the first K − 1 pairs are sampled from ; otherwise, when yk = 1, the first K − 1 pairs are sampled from .
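The construction rule above can be sketched in a few lines. The helper name, the toy dataset, and the fixed random seed are illustrative assumptions; the logic (K − 1 opposite-label pairs followed by the kth example in the target slot i* = K − 1) follows the text.

```python
import random

def build_extended_example(dataset, k, K, rng=None):
    """Build the extended example for index k: K - 1 pairs drawn uniformly
    from the examples whose label is opposite to y_k, followed by
    (x_k, y_k) itself in the last (target) position i* = K - 1."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    x_k, y_k = dataset[k]
    opposite = [pair for pair in dataset if pair[1] != y_k]
    head = [rng.choice(opposite) for _ in range(K - 1)]
    return head + [(x_k, y_k)]

# Toy dataset of four (feature, label) pairs.
data = [([0.1], 0), ([0.2], 0), ([0.7], 1), ([0.9], 1)]
ext = build_extended_example(data, k=2, K=4)
# ext[-1] is ([0.7], 1); every earlier pair carries the opposite label 0.
```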
As aforementioned, different quantum classifiers exploit different methods to encode into the quantum states [12]. For ease of notation, we denote the quantum state corresponding to the kth example as
where h(⋅) is an encoding operation (a possible encoding method is discussed in section 3), and the subscripts 'F' and 'I' refer to the feature register with NF qubits and the index register with NI qubits, respectively.
We now move on to the training procedure of GBLS. Recall that reference [27] points out that combining a variational learning approach with the Grover-search algorithm yields an additional advantage over the conventional Grover algorithm, in that the target solution can be located with a higher success probability. A similar idea is used in GBLS. Namely, the employed variational quantum circuits aim to learn a hyperplane that separates the last pair in from its first K − 1 pairs. Denote , where each layer U( θ l ) contains O(poly(NF )) parameterized single-qubit gates and at most O(poly(NF )) fixed two-qubit gates with identical layouts. In the optimal situation, given the initial state in equation (6), applying to the feature register yields the following target state:
- (a)If the last pair of the input example refers to the label yk = 0, the target state is
- (b)Otherwise, when the last pair of the input example refers to yk = 1, the target state is
We denote (resp. ) as the first qubit of the quantum state in the feature register being (resp. ). As shown in figure 3, once the state is prepared, GBLS iteratively applies the MCZ gate to the index register controlled by the first qubit of the feature register and the index register, uses Udata and to uncompute the feature register, and applies the diffusion operator Uinit to the index register to complete the first cycle. Denote all quantum operations belonging to one cycle as U, i.e.
With a slight abuse of notation, we define with in the rest of the paper. GBLS repeatedly applies U to the initial state except for the last cycle, where the applied unitary operations are replaced by
as highlighted by the brown shadow in figure 4. Following the conventional Grover-search, GBLS queries U and UE O(√K) times in total before taking quantum measurements. This completes the quantum part of GBLS.
We next analyze how the quantum state evolves for the cases yk = 0 and yk = 1, respectively. For the case of yk = 0, applying to the input state in equation (6) transforms it to as described in equation (7). Since the control qubit in the feature register is 0, applying the MCZ gate does not flip the phase of the state. After uncomputing, the resulting state is . The positive phase for all computational bases i ∈ [K − 1] implies that applying the quantum operation does not change the state either, i.e.
In other words, when we measure the index register of the output state, the probability of sampling the computational basis i with i ∈ [K − 1] is uniformly distributed.
For the case of yk = 1, the input state in equation (6) is transformed to after interacting with the unitary , as described in equation (8). With the control qubit in the feature register being 1, the generated quantum state evolves as in the Grover-search algorithm through the iterative application of MCZ, the uncomputation operation , and Uinit. Mathematically, the resulting state after applying MCZ is

where , , , and refers to the computational basis . Analogous to Uf in Grover-search, the trainable and data-driven circuit used above conditionally flips the phase of the state . Next, the uncomputation operation and the diffusion operator Uinit are employed to increase the probability of . Mathematically, the generated state after the first cycle is
where U is defined in equation (9). The probability of sampling i* is increased to sin2 3γ, in accordance with the Grover-search algorithm. This observation leads to the following theorem, whose proof is given in appendix A.
Theorem 2. For GBLS, under the optimal setting, the probability of sampling the outcome i* = K − 1 approaches 1 asymptotically iff the label of the last entry of is yk = 1.
We leverage this particular property of GBLS, namely that the output distribution differs for different input labels as shown in theorem 2, to accomplish the binary classification task. Concisely, the output state of GBLS, i.e. , corresponding to yk = 1 contains the computational basis i = K − 1 with probability close to 1. By contrast, the output state corresponding to yk = 0 contains all computational bases i ∈ [K − 1] with equal probability. Driven by this observation and the mechanism of the Grover-search algorithm, the loss function of GBLS is
where sign(⋅) is the sign function, refers to the measurement operator, is the generated quantum state, and U( θ ) is defined in equation (9) (for clarity, we use the explicit form U( θ ) instead of U). Intuitively, the minimized loss corresponds to the fact that when yk = 1 (yk = 0), the success probability of sampling i* and obtaining '1' ('0') on the first feature qubit is maximized (minimized). GBLS employs a gradient-based method, i.e. the parameter shift rule [22], to optimize θ . Confer appendix B for the optimization details.
We emphasize that GBLS can be used to conduct both linear and nonlinear classification tasks depending on the specified quantum classifier. For example, when GBLS adopts the proposal of [23, 24] to implement Udata and , it is capable of classifying nonlinear data.
2.2. Prediction
Once the training of GBLS has finished, the trained can be directly employed to predict the labels of future instances with O(1) query complexity; the corresponding circuit implementation is shown in figure 5. To achieve this, we devise the following prediction method. Denote the new input as . We first encode into a quantum state with the same encoding method used in the training procedure, i.e. . Applying the trained to yields
where .
Denote the probability of the outcome '1' after measuring the first feature qubit of the state in equation (15) as p1 and let the threshold be 1/2. The new input is identified as label '0' if p1 < 1/2; otherwise, it is given label '1'.
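The threshold rule above amounts to a majority vote over repeated single-qubit measurements. A minimal sketch (the function name and sample bit strings are our own illustration):

```python
def predict_label(measured_bits, threshold=0.5):
    """Decide the label from repeated measurements of the first feature
    qubit: estimate p1 as the empirical frequency of outcome '1' and
    compare it against the threshold 1/2."""
    p1 = sum(measured_bits) / len(measured_bits)
    return 1 if p1 >= threshold else 0

# Four shots each; p1 = 0.25 gives label 0, p1 = 0.75 gives label 1.
label_a = predict_label([0, 0, 1, 0])
label_b = predict_label([1, 1, 0, 1])
```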
2.3. Potential advantage of GBLS
Here we design a binary classification task to explore the potential advantage of GBLS in terms of query complexity. Consider a classification task that requires not only finding a decision rule as in equation (2) but also outputting an index j satisfying a pre-determined black-box function. Note that identifying a target index is a common functionality in database search, e.g. in medical systems, finance, and online shopping. For example, given a medical database, it is natural to expect that the trained classifier can predict whether a patient is ill or healthy based on her/his symptoms, and can also identify a healthy patient with additional properties, e.g. that the gender of the patient is female, which can be modeled by a black-box function.
The mathematical formulation of this classification task is as follows. Given the data in equation (5) and denoting the black box by q(⋅), the task yields
where the function q(⋅) is a Boolean function with the input set . Taking GBLS implemented in the previous subsections as an example, q(⋅) has the following form, ∀j = {0, ..., K − 1}
Furthermore, q(⋅) can be implemented by the MCZ gate, which conditionally flips the phase of the computational basis corresponding to j*:=K − 1 if the state is given in equation (8). In this way, the Grover-like search structure used in GBLS ensures that the probability of sampling j* is maximized. We remark that GBLS can be effectively generalized to implement other forms of q(⋅) by modifying the MCZ gate. When the size of the dataset loaded by GBLS is K, a well-trained GBLS can locate the target index with O(√K) query complexity, guaranteed by the result of theorem 2. However, given access to the well-trained classifier f θ (⋅), both classical algorithms and previous quantum classifiers need at least O(K) queries to find j*. The reduced query complexity of GBLS implies a potential quantum advantage in accomplishing classification tasks.
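The gap between the two query counts can be made concrete with the textbook iteration numbers. The ⌈π√K/4⌉ constant below is the standard Grover count, used here as an illustrative assumption; the paper itself only claims the O(√K) versus O(K) scaling.

```python
import math

def classical_queries(K: int) -> int:
    """Worst case for a linear scan over K candidate indices."""
    return K

def grover_type_queries(K: int) -> int:
    """Textbook Grover iteration count, ceil(pi * sqrt(K) / 4)."""
    return math.ceil(math.pi * math.sqrt(K) / 4)

# The quadratic separation grows quickly with the database size:
# K = 1024 needs 26 Grover-type iterations versus 1024 classical queries.
gap_at_1024 = (grover_type_queries(1024), classical_queries(1024))
```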
3. Numerical experiments
We now apply GBLS to classify a nonlinear synthetic dataset to evaluate its performance. The construction of follows the proposal [23]. Consider a synthetic dataset with N = 200, where , . Let g(⋅) be a specific embedding function with for all i ∈ {0, ..., N − 1}. The label of x i is assigned as yi = 1 if
where V ∈ SU(4) is a unitary operator, is the measurement operator, and the gap Δ is set as 0.2. The label of x i is assigned as yi = 0 if
We illustrate the synthetic dataset in the left panel of figure 6.
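The gap-based labelling rule above can be sketched as follows. The decision boundary value of 1/2 and the helper name are our own illustrative assumptions; only the margin width Δ = 0.2 and the discard-the-gap construction follow the text.

```python
def assign_label(expectation: float, gap: float = 0.2, center: float = 0.5):
    """Gap-separated labelling: a point is labelled 1 (0) when its
    expectation value sits above (below) the boundary by at least gap / 2;
    points inside the margin are discarded, which guarantees the two
    classes are separated by at least `gap`."""
    if expectation >= center + gap / 2:
        return 1
    if expectation <= center - gap / 2:
        return 0
    return None  # inside the gap: not kept in the dataset

sample_labels = [assign_label(v) for v in (0.75, 0.30, 0.55)]  # [1, 0, None]
```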
At the data preprocessing stage, we split the dataset into a training dataset of size Ntrain = 100 and a test dataset of size Ntest = 100. In the training process, we follow the construction rule of GBLS to build the extended training dataset by using . We set K = 4 in the following analysis, where each training example can be encoded into a quantum state using four qubits with NI = NF = 2 (see appendix).
The numerical simulations are implemented in Python in conjunction with the PennyLane, Qiskit, and pyQuil libraries [28–30]. The hyper-parameter settings used in our experiments are as follows. The block UE in figure 4 is employed once for the case K = 4, according to Grover's theorem. The number of layers of the variational quantum circuits is set as L = 2. The number of epochs used in the classical optimization is 20. For comparison, we also apply the quantum kernel classifier proposed by [23, 24] with two different loss functions, i.e. the mean squared error (MSE) loss and the binary cross entropy (BCE) loss, to learn the synthetic dataset . The quantum kernel classifier is selected as the reference because it has achieved state-of-the-art performance in classifying nonlinear data [23].
Ideal setting. We first evaluate the performance of different quantum classifiers under the ideal setting, where the quantum system is noiseless and the number of measurements is infinite. The right panel of figure 6 illustrates the averaged training and test accuracies versus the number of epochs. In particular, our proposal achieves performance comparable to the quantum kernel classifier with the BCE loss, where both the training and test accuracies converge to 99% within 2 epochs. Moreover, these two methods outperform the quantum kernel classifier with the MSE loss (B = N), whose test accuracy only reaches 95% after 10 epochs. The variance of these three quantum classifiers after 10 epochs becomes small, which implies that all of them exhibit stable performance under the ideal setting.
Depolarization noise setting. We next investigate the performance of GBLS and the referenced quantum kernel classifiers under a realistic setting, where quantum system noise is considered and the number of measurements is finite. Specifically, we employ the depolarization channel to model the system noise, i.e. given a quantum state , the quantum depolarization channel acting on this state is defined as

where p is the depolarization rate and πd is the maximally mixed state with . Meanwhile, to explore the trade-off between the computational cost (i.e. the total number of measurements) and the utility R1 indicated by theorem 1, we also compare GBLS with a modified quantum kernel classifier with the MSE loss, which supports the batch gradient descent method with B = N/4 to optimize the parameters (see appendix).
The hyper-parameter settings applied to GBLS and the other quantum classifiers are as follows. The depolarization rate is set as p = 0.05 and p = 0.25, respectively. The number of measurements is set as 10 to approximate the quantum expectation value. The parameter shift rule is used to estimate the analytic gradients [22, 31]. For each classifier, we repeat the numerical simulations five times to collect statistical information. Confer the appendix for details.
The simulation results of GBLS and the referenced quantum classifiers are illustrated in figure 7. Specifically, when p = 0.05, GBLS and the other three referenced quantum classifiers achieve comparable performance after 10 epochs. Moreover, the quantum kernel classifier with the MSE loss (B = N/4) possesses a lower convergence rate and a larger variance than the other three classifiers. When p = 0.25, there exists a relatively large gap in convergence rate between the quantum kernel classifier with the MSE_batch method and the other three quantum classifiers. Such a difference reflects the importance of using GBLS to investigate classification tasks under a varied number of batches. We summarize the averaged training and test accuracies of GBLS and the other quantum classifiers at the last epoch in table 2. Even though measurement error and quantum gate noise are considered, GBLS can still attain stable performance, since its variance is very small (i.e. at most 0.04). This observation suggests the applicability of our proposal on NISQ machines.
Table 2. Performance of different quantum classifiers under the depolarization noise at the 20th epoch. The labels 'MSE_batch', 'MSE', 'BCE', and 'GBLS' have the same meanings as in table 1. The value 'a ± b' means that the averaged accuracy is a and its variance is b.
Methods | MSE_batch | MSE | BCE | GBLS |
---|---|---|---|---|
p = 0.05 (train) | 0.929 ± 0.037 | 0.978 ± 0.013 | 0.956 ± 0.024 | 0.935 ± 0.024 |
p = 0.25 (train) | 0.846 ± 0.072 | 0.936 ± 0.032 | 0.918 ± 0.031 | 0.881 ± 0.025 |
p = 0.05 (test) | 0.943 ± 0.032 | 0.975 ± 0.006 | 0.860 ± 0.089 | 0.945 ± 0.021 |
p = 0.25 (test) | 0.862 ± 0.095 | 0.934 ± 0.009 | 0.791 ± 0.056 | 0.879 ± 0.040 |
We would like to emphasize the main issue considered in this study: whether there exists a quantum classifier that can attain a good utility bound R1 using a small number of measurements. The numerical simulation results of GBLS provide a positive answer. Recall the setting given in table 1 and the results in figure 7. Although the required number of measurements for GBLS is reduced by a factor of K = 4 compared with the quantum classifiers with the BCE loss and the MSE loss (B = N), they achieve comparable performance. This result implies a large separation in computational efficiency between GBLS and previous quantum classifiers with B = N when N is large.
Noise model from real quantum hardware. We further compare the performance of GBLS and the referenced quantum classifiers under a noise model extracted from real quantum hardware, i.e. IBMQ_ourense, provided by the Qiskit and PennyLane Python libraries [28, 29]. Notably, for all classifiers, the gate noise is only imposed on the trainable quantum circuits UL instead of the whole circuit, since the implementation of multi-controlled gates (e.g. CCZ) used in GBLS would introduce a huge amount of noise and destroy the optimization of GBLS (see appendix).
The simulation results are exhibited in figure 8. Specifically, the three classifiers achieve comparable performance. These results indicate the efficacy of GBLS, since its required number of measurements is reduced by a factor of four compared with the other two quantum classifiers.
4. Discussion and conclusion
In this study, we have proposed GBLS for classification. Different from previous proposals, GBLS supports the optimization of a wide range of quantum classifiers with a varied number of batches. This property allows us to explore the trade-off between computational efficiency and the utility bound R1. Moreover, we demonstrate that GBLS possesses a potential advantage in tackling certain classification tasks in the measure of query complexity. Numerical experiments showed that GBLS can achieve performance comparable to other advanced quantum classifiers while using fewer measurements. We believe that our work will provide immediate and practical applications for near-term quantum devices.
Acknowledgments
This work received support from the Australian Research Council (Project FL-170100117) and the Faculty of Engineering and Information Technologies at the University of Sydney (the Engineering and Information Technologies Research Scholarship).
Data availability statement
The data that support the findings of this study are openly available at the following URL/DOI: https://github.com/yuxuan-du/.
Appendix A.: Proof of theorem 2

Proof of theorem 2 .

To prove theorem 2, we separately discuss the situations in which the label of the last entry in is yk = 1 and yk = 0, respectively.
For the case yk = 1. Suppose that the label of the last entry in is yk = 1. Following equation (13), after the first cycle, the generated state of GBLS is

where . This result indicates that the probability of sampling the target index i* is increased from sin2 γ to sin2 3γ, which is the same as in Grover-search.
Then, by induction, as in the proof of Grover-search [32], the generated state of GBLS after applying U to ℓ times yields

Note that GBLS requires that the quantum operation employed at the last cycle is UE as defined in equation (10) instead of U. Mathematically, the generated state is

where the first equality uses equation (A.1), the second equality exploits equation (13) to engineer the feature register, the third equality employs MCZ to flip the phase of the state whose first qubit in the feature register is , and the last equality comes from the application of the diffusion operator to the index register.
The result of equation (A.2) indicates that, under the optimal setting, the probability of sampling i* is close to 1 when , since then sin ((2ℓ + 3)γ) ≈ 1.
For the case yk = 0. We then demonstrate that, when the label of the last entry of the extended training example is yk = 0, even after applying and UE with , the probability of sampling i* remains 1/K. Following equation (11), after the first cycle, the generated state of GBLS is
where . Due to , after applying U to the state , the probability of sampling any index is identical. By induction, applying the corresponding U to the state ℓ times yields
where, given any positive integer ℓ, the probability of sampling is 1/K.
As with the case of yk = 1, at the last cycle, we apply the unitary UE to the state , and the generated state is
where the first equality uses the explicit form of UE and equation (A.3), and the second equality is guaranteed by equation (12) (note that the only difference is replacing with based on the setting yk = 0), and the last equality exploits the explicit form of Uinit.
The result of equation (A.4) reflects that, under the optimal setting, the probability of sampling i* can never be increased when yk = 0. Therefore, we conclude that, under the optimal setting, the probability of sampling the outcome i* approaches 1 asymptotically if and only if the label of the last entry of the extended training example is yk = 1. □
Appendix B.: Variational quantum circuits and the optimizing method
In this section, we first introduce the variational quantum circuits used in GBLS. We then elaborate on the optimization method, i.e. the parameter-shift rule, that is employed to train .
Variational quantum circuits, also called parameterized quantum circuits, are composed of trainable single-qubit gates and two-qubit gates (e.g. CNOT or CZ). As a promising scheme for NISQ devices, variational quantum circuits have been extensively investigated for accomplishing generative and discriminative tasks [15, 20, 33–35] via variational hybrid quantum–classical algorithms [36]. One typical variational quantum circuit is the multiple-layer parameterized quantum circuit (MPQC), where the arrangement of quantum gates in each layer is identical [33]. Denote the operation formed by the lth layer as U( θ l ). The generated quantum state from MPQC yields
where L is the total number of layers. GBLS employs MPQC to construct , i.e.
and the circuit arrangement for the lth layer U( θ l ) is shown in figure B1. When the number of layers is L, the total number of trainable parameters for GBLS is 2NF L.
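As an illustration of this layout, the sketch below builds a toy MPQC state vector with NumPy. The qubit count, layer count, and the exact ordering of gates inside a layer are illustrative assumptions, not the paper's exact circuit; it only demonstrates the layered RY/CZ structure and the 2NF L parameter count.

```python
import numpy as np

NF, L = 3, 2                      # feature qubits and layers (example values)
dim = 2 ** NF

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def kron_all(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

def cz(q1, q2, n):
    # CZ between qubits q1 and q2, written as a diagonal on the full register.
    size = 2 ** n
    diag = np.ones(size)
    for b in range(size):
        if (b >> (n - 1 - q1)) & 1 and (b >> (n - 1 - q2)) & 1:
            diag[b] = -1.0
    return np.diag(diag)

def mpqc_layer(params):
    # Two RY rotations per qubit, interleaved with a chain of CZ gates.
    U = kron_all([ry(t) for t in params[:NF]])
    for q in range(NF - 1):
        U = cz(q, q + 1, NF) @ U
    U = kron_all([ry(t) for t in params[NF:]]) @ U
    return U

theta = np.random.default_rng(0).uniform(0.0, 2 * np.pi, size=(L, 2 * NF))
U = np.eye(dim)
for l in range(L):
    U = mpqc_layer(theta[l]) @ U

state = U[:, 0]                   # MPQC applied to |0...0>
print(theta.size)                 # 2 * NF * L trainable parameters
```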
The updating rule of GBLS at the kth iteration follows
where η is the learning rate and is the kth training example. By expanding the explicit form of given in equation (14), the gradients of can be rewritten as
where yk refers to the label of the last entry in , sign(⋅) is the sign function, Π is the measurement operator, and
GBLS adopts the parameter-shift rule proposed in [22] to obtain the gradient . Concisely, the parameter-shift rule computes each entry of the gradient iteratively. Without loss of generality, here we explain how to compute for j ∈ [2NF L]. Define as
where only the jth parameter is rotated by . Then the mathematical representation of the gradient for the jth entry is
In conjunction with equations (B.2), (B.3) and (B.5), the updating rule of GBLS at the tth iteration for the jth entry is
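The parameter-shift rule itself can be verified on a one-qubit toy circuit. For the illustrative choice Π = Z and a single RY rotation, the expectation Tr(Πρ(θ)) equals cos θ, so the shifted-difference estimate should match the analytic derivative −sin θ. The circuit and observable below are toy assumptions, not the paper's classifier.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])          # toy measurement operator Π
ket0 = np.array([1.0, 0.0])

def expectation(theta):
    # Tr(Π ρ(θ)) for the one-qubit circuit RY(θ)|0>; equals cos θ here.
    psi = ry(theta) @ ket0
    return psi @ Z @ psi

theta = 0.7
shift = np.pi / 2
# Parameter-shift rule: the exact gradient from two shifted evaluations.
grad = 0.5 * (expectation(theta + shift) - expectation(theta - shift))
print(grad, -np.sin(theta))       # the two values agree
```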
Appendix C.: More details of numerical simulations
In this section, we provide more details about the numerical simulations. Specifically, we first explain how to construct the employed synthetic dataset. We then elaborate on the implementation of GBLS and the referenced classifiers, and their hyper-parameter settings. We next analyze the required circuit depth to implement these quantum classifiers. Last, we introduce the construction of the modified dataset used in the MSE_batch method.
The construction of the synthetic dataset. Given the training example for all i ∈ [N − 1], the embedding function that is used to encode x i into the quantum states is formulated as
where is a specified mapping function. The above formulation implies that g( x i ) can be converted to a sequence of quantum operations, whose implementation is illustrated in the upper left panel of figure B2. To simultaneously encode multiple training examples into quantum states, g( x i ) must be implemented in a controlled version, as shown in the upper right panel of figure B2.
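A block-diagonal construction suffices to turn any encoding unitary into its controlled version, which the sketch below illustrates. The single-qubit encoding g and the mapping angle πx are placeholder assumptions chosen only to make the example concrete.

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def g(x):
    # Placeholder encoding g(x): one RY rotation by a mapped angle (assumption).
    return ry(np.pi * x)

def controlled(U):
    # Controlled-U: act on the feature qubit only when the index-register
    # control qubit is |1>, i.e. the block-diagonal matrix diag(I, U).
    d = U.shape[0]
    C = np.eye(2 * d)
    C[d:, d:] = U
    return C

x = 0.3
CU = controlled(g(x))
ctrl0 = np.kron([1.0, 0.0], [1.0, 0.0])   # control |0>: feature left untouched
ctrl1 = np.kron([0.0, 1.0], [1.0, 0.0])   # control |1>: feature receives g(x)
print(CU @ ctrl0)
print(CU @ ctrl1)
```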
The random unitary V ∈ SU(4) used in the numerical simulations is formulated as V = RY (ψ1) ⊗ RY (ψ2), where ψ1 and ψ2 are uniformly sampled from [0, 2π).
The details of GBLS, the referenced classifiers, and the hyper-parameter settings. The implementation of GBLS is shown in the lower panel of figure B2. In particular, the data encoding unitary Udata is composed of a set of controlled-g( x i ) quantum operations. The MPQC introduced in appendix B is employed to build , where each layer U( θ l ) is composed of RY gates and CZ gates and the layer number is L = 2.
The basic components of the referenced quantum classifiers are identical to those used in GBLS. In particular, for all employed quantum kernel classifiers, the implementation of the variational quantum circuits is the same as in GBLS, where the layer number is L = 2 and each layer is composed of RY gates and CZ gates as shown in figure B2. The implementation of the encoding unitary Udata depends on the batch size B. For the quantum kernel classifiers with the BCE loss and the MSE loss (B = N), following equation (C.1), the encoding unitary is
For the quantum kernel classifier with the MSE loss (B = N/4), the implementation of the encoding unitary Udata is the same as in GBLS, as shown in figure B2.
The detailed hyper-parameter settings for GBLS and the referenced classifiers are as follows. The learning rate for GBLS, the quantum kernel classifier with the BCE loss, and the quantum kernel classifiers with the MSE loss (B = N and B = N/4) is identical and set to η = 1.0. Moreover, when we explore the statistical performance of different quantum classifiers under the noise setting, the random seeds are set as , with R being the total number of repetitions.
The analysis of the quantum circuit depth. Here we analyze the required circuit depth to implement the quantum kernel classifiers used in the numerical simulations. As explained in the above subsection, the quantum kernel classifiers with B = N can be efficiently realized, since the data encoding unitary Udata and the variational quantum circuits only involve single- and two-qubit gates. In particular, the circuit depth to construct the unitary Udata in equation (C.2) is 1. Moreover, the circuit depth to construct UL ( θ ) as shown in figure B2 is 4. In total, when the number of batches B equals N, the required depth for the quantum kernel classifier with the BCE or MSE loss is 5.
Compared with the setting B = N, the implementation of the quantum kernel classifier with B = N/4 and of GBLS requires a relatively deep circuit. The main reason is that the fabrication of the data encoding unitary Udata involves multi-controlled qubit gates, as shown in figure B2 (highlighted by the brown region). Specifically, when we decompose the CC–RY gate into single-qubit and two-qubit gates, the required circuit depth is 27. Therefore, following figure B2, the circuit depth to implement Udata is 113. Considering that the circuit depth to implement is 4, the total circuit depth to implement the quantum kernel classifier with B = N/4 is 117. As shown in figure B2, the quantum circuit in GBLS is composed of Udata, , and Uinit. The implementation of Udata and is identical to that of the quantum kernel classifier with B = N/4. Moreover, based on the Grover-search algorithm, the circuit depth to implement Uinit is 15, which includes 4 Hadamard gates and 1 CCZ gate. Therefore, the total circuit depth to implement GBLS is 132.
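The depth totals above follow by simple addition; a quick bookkeeping check, with the individual depths taken directly from the text:

```python
# Circuit-depth figures quoted in the text.
depth_udata = 113    # encoding unitary U_data (dominated by CC-RY gates)
depth_ul = 4         # two-layer MPQC
depth_uinit = 15     # U_init block: 4 Hadamard gates and 1 CCZ gate

depth_kernel = depth_udata + depth_ul                 # B = N/4 kernel classifier
depth_gbls = depth_udata + depth_ul + depth_uinit     # GBLS
print(depth_kernel, depth_gbls)                       # 117 132
```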
We remark that the circuit depth of the quantum kernel classifier with B = N/4 and of GBLS is dominated by the implementation of Udata, which exploits multi-controlled qubit gates to load different training examples in superposition. This observation implies that efficient encoding methods can dramatically reduce the required circuit depth to construct these quantum classifiers. A possible solution is proposed in [37], which constructs a target multi-qubit gate by optimizing a variational quantum circuit consisting of tunable single-qubit gates and fixed two-qubit gates.
The modified training dataset for the MSE_batch method. We note that naively employing the original training dataset to optimize the quantum kernel classifier with the MSE_batch loss is infeasible. Let us illustrate this with a simple example. Suppose the input state is with the batch size 2, where the subscript 'I' ('F') refers to the index (feature) register. When the trainable quantum circuits and the measurement operator are applied to this state, the output corresponds to the averaged prediction of the examples . Such a setting is ill-posed once the labels of the two examples are opposite, e.g. the former is 0 and the latter is 1, since a wrong prediction (the former predicted as 1 and the latter as 0) also leads to the averaged truth label 0.5.
To overcome the above issue, we build a modified dataset instead of to optimize the quantum kernel classifier with the MSE_batch loss. Specifically, we shuffle the given dataset and ensure that, for the modified dataset, the training examples in each batch for ∀i ∈ [B] possess the same label. In doing so, the averaged truth label is either 0 or 1 without any confusion.
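The reshuffling step can be sketched as a small helper that regroups examples so every batch is label-pure; the function name and the (x, y)-pair data layout are hypothetical, not from the paper.

```python
from collections import defaultdict

def make_label_pure_batches(dataset, batch_size):
    """Regroup (x, y) pairs so that every batch holds a single label.

    Hypothetical helper mirroring the modified MSE_batch dataset: the
    averaged truth label of each batch is then unambiguously 0 or 1.
    """
    by_label = defaultdict(list)
    for x, y in dataset:
        by_label[y].append((x, y))
    batches = []
    for label, items in by_label.items():
        for i in range(0, len(items), batch_size):
            batches.append(items[i:i + batch_size])
    return batches

data = [(0.1, 0), (0.9, 1), (0.2, 0), (0.8, 1)]
batches = make_label_pure_batches(data, batch_size=2)
for b in batches:
    print({y for _, y in b})      # each batch contains exactly one label
```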
Appendix D.: The computational complexity of GBLS and the quantum kernel classifier with the BCE loss
We now separately derive the required number of measurements, or equivalently, the computational complexity, for GBLS and the quantum kernel classifier with the BCE loss at each epoch. For both methods, the hyper-parameter settings are assumed to be identical, i.e. the size of the dataset is N, the layer number of MPQC is L, the number of qubits to load data features is NF , the total number of trainable parameters θ is NF L, and the number of measurements applied to estimate the quantum expectation value is M.
We count one query each time the variational quantum circuit used in the quantum classifier takes the encoded data and is measured once by the measurement operator. Following the training mechanism of the quantum classifier, its query complexity amounts to the total number of measurements of the variational quantum circuit needed to acquire the gradients in one epoch.
We now derive the required number of measurements of the quantum kernel classifier with the BCE loss in one epoch. Given the dataset , the BCE loss yields
where yi is the label of the ith example and p(yi ) is the predicted probability of the label yi , or equivalently, the output of the quantum circuit used in the quantum kernel classifier
where , refers to variational quantum circuits defined in equation (B.1), represents the encoded quantum state defined in equation (C.1), and Π is the measurement operator. Following the parameter shift rule, the derivative of BCE loss satisfies
where θ ± is defined in equation (B.4). The above equation implies that, to acquire the gradients of the BCE loss, it is necessary to feed the training examples one by one to the quantum kernel classifier to estimate p(yi ), and then conduct classical post-processing to compute the coefficient . In other words, the number of batches for this quantum classifier can only be B = N. Since the estimation of each of p(yi ), Tr(Πρ( θ +)), and Tr(Πρ( θ −)) is completed by using M measurements, the derivative can be estimated by using 3NM measurements. Considering that there are in total NF L trainable parameters, the total number of measurements at each epoch for the quantum kernel classifier with the BCE loss is 3NMNF L.
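Plugging in concrete numbers makes the 3NMNF L count explicit; all hyper-parameter values below are arbitrary examples.

```python
N, M, NF, L = 128, 1000, 4, 2    # example dataset size, shots, qubits, layers

# Per trainable parameter and per example: estimate p(y_i), Tr(Πρ(θ+)),
# and Tr(Πρ(θ−)), each with M measurements.
per_parameter = 3 * N * M
total_bce = per_parameter * NF * L
print(total_bce)                  # 3 * N * M * NF * L = 3072000
```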
Unlike the quantum kernel classifier with the BCE loss, GBLS uses a simple loss function defined in equation (14), which allows us to efficiently acquire the gradient by leveraging the superposition property. Recall equation (B.6). The gradient of GBLS satisfies
where yk refers to the label of the last pair in the extended training example . The above equation indicates that the gradient for , which contains K training examples in , can be estimated by using 2M measurements, where the first (last) M measurements aim to approximate (). Therefore, the total number of measurements to collect for all possible is 2MB = 2MN/K. Considering that there are in total NF L trainable parameters, the query complexity at each epoch for GBLS is 2NF LMN/K. Note that when K → N, the required number of measurements of GBLS can be dramatically reduced.
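The same bookkeeping for GBLS, with the extended-example size K folded in, shows how the count shrinks as K grows; the hyper-parameter values are arbitrary examples.

```python
N, M, NF, L, K = 128, 1000, 4, 2, 64   # K examples per extended batch

batches = N // K                        # B = N/K extended training examples
per_parameter = 2 * M * batches         # 2M measurements per extended example
total_gbls = per_parameter * NF * L     # 2 * NF * L * M * N / K
print(total_gbls)                       # 32000 measurements per epoch
```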
For ease of understanding, let us illustrate an intuitive example. Define two extended training examples, where the first one includes all positive examples in and one negative example, and the second one includes all negative examples in and one positive example. Since these two extended examples cover the whole dataset , when GBLS uses them to update θ , it completes one epoch. Thanks to the simple form of , the number of measurements to estimate the gradient for the jth entry θ j given these two extended examples is O(1). Considering there are in total O(NF L) trainable parameters, the total number of measurements at each epoch for GBLS is O(LNF ).