Abstract
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum–classical hybrid simulator, in which a 'quantum student' is taught by a 'classical teacher'. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable to designing quantum oracle-based algorithms. As a case study, we chose an oracle decision problem, the Deutsch–Jozsa problem. We showed, by using Monte Carlo simulations, that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in classical machine learning-based methods.
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1. Introduction
Quantum information science has seen explosive growth in recent years, as a more powerful generalization of classical information theory [1]. In particular, quantum computation has received momentum from quantum algorithms that outperform their classical counterparts [2–5]. Thus, the development of quantum algorithms is one of the most important areas of computer science. However, unfortunately, recent research on quantum algorithm design is rather stagnant, compared to other areas in quantum information, as scarcely any new quantum algorithms have been discovered in the past few years [6]. We believe that this is due to the fact that we—the designers—are used to classical logic. Thus we think that quantum algorithm design should turn towards new methodology, different from that of the current approach.
Machine learning is a well-developed branch of artificial intelligence and automatic control. Although 'learning' is often thought of as a uniquely human trait, a machine being given feedback (taught) can improve its performance (learn) in a given task [7, 8]. In the past few decades, there has been a growing interest not only in theoretical studies of but also in a variety of applications of machine learning. Recently, many quantum implementations of machine learning have been introduced to achieve better performance for quantum information processing [9–13]. These works motivate us to look at machine learning as an alternative approach for quantum algorithm design.
Keeping our primary goal in mind, we ask whether a quantum algorithm can be found by the machine that also implements it. On the basis of this idea, we consider a machine which is able to learn quantum algorithms in a real experiment. Such a machine may discover solutions which are difficult for humans to find because of our classical way of thinking. Since we can always simulate a quantum machine on a classical computer (though not always efficiently), we can use such simulations to design quantum algorithms without the need for a programmable quantum computer. This classical machine can thus be regarded as a simulator that learns a quantum algorithm—a so-called learning simulator. The novelty of such a learning simulator lies in its capabilities of 'learning' and 'teaching'. With regard to these abilities, we consider two internal systems: one is a learning system (the 'student', say), and the other is a main-feedback system (the 'teacher', say). While the standard approach is to assume that both student and teacher are quantum machines, here we use a quantum–classical hybrid simulator such that the student is a quantum machine and the teacher a classical machine. Such a hybridization is easier and more economical to realize, provided that algorithms can still be learned in this setting.
In this paper, we employ a learning simulator for quantum algorithm design. The main question of this work is: 'Can our learning simulator help in designing a quantum algorithm?' The answer to this question is affirmative, as it is shown, in Monte Carlo simulations, that our learning simulator can faithfully learn appropriate elements of a quantum algorithm for solving an oracle decision problem, called the Deutsch–Jozsa problem. The algorithms found are equivalent, but not exactly equal, to the original Deutsch–Jozsa algorithm. We also investigate the learning time, as it becomes important in application not only due to the large-scale problems often arising in machine learning but also because, in its learning, our simulator will exhibit the quantum speedup (if any) of an algorithm to be found, as described later. We observe that the learning time is proportional to the square root of the total number of parameters, in contrast to showing the exponential tendency found in classical machine learning. We expect our learning simulator to reflect the quantum speedup of the algorithm found in its learning, possibly in synergy with the finding that the size of the parameter space can be significantly smaller for quantum algorithms than for their classical counterparts [14]. We note that the method presented is aimed at a real experiment, in contrast to the techniques of [15, 16].
2. The basic architecture of the learning simulator
Before discussing the details of the learning simulator, it is important to understand what machine learning is. A typical task of machine learning is to find, in supervised learning, a function relating the inputs x to their targets on the basis of observations, or, in unsupervised learning, to find some hidden structure in the data [7, 8]. The main difference between supervised and unsupervised learning is that in the latter case the target is unknown. Throughout this paper, we consider supervised learning, where the target is known.
We now briefly sketch our method (see also figure 1). To begin, a supervisor defines the problem to be solved and arranges the necessary prerequisites for learning (e.g., the input–target pairs (x, t) and a query function Q produced by a non-trivial device, the so-called oracle). This preliminary information is passed to the learning simulator, which encodes it on its own elements. We note here that one could consider two main tasks in designing a quantum algorithm. The first is to construct a useful form of quantum oracle, and the second is to find the other incorporated quantum operation(s) that maximize the quantum advantages, such as superposition engaging parallelism [17] or entanglement [18]. We focus here on the latter (see footnote 5). Note, however, that it is necessary to define a specific oracle operation (see appendix A).
We then describe the basic elements of the learning simulator in figure 1. The simulator consists of two internal parts. One is the learning system, which is supposed to eventually perform a quantum algorithm, and the other is the feedback system, responsible for teaching the former. The learning system consists of the standard quantum information-processing devices: preparation P for preparing a pure quantum state, operation U for performing a unitary operation, and measurement M. Here, the chosen quantum oracle is involved in U. On the other hand, the feedback system is classical, as this is easier and less expensive to realize in practice. Furthermore, by employing classical feedback, we can use a well-known (classical) learning algorithm whose performance has already been proved to be reliable. Recently, a scheme for machine learning involving quantum feedback has been reported [21], but the usefulness of the quantumness has not been clearly elucidated, even though the reported results are meaningful in some applications. Moreover, it is at present unclear whether any classical feedback is applicable to quantum algorithm design. Consequently, classical feedback is preferred in this work. In this sense, our simulator is a quantum–classical hybrid. The feedback system is equipped with a main-feedback device F, which involves a classical memory and a learning algorithm: the memory records the control parameters of U and the measurement results of M, while the learning algorithm corresponds to a series of rules for updating U.
We illustrate how our simulator performs the learning. Let us start with the set of K input–target pairs communicated from the supervisor,

T = {(x_i, t_i) | i = 1, …, K},

where the targets are given by a function f that transforms the inputs into their targets, t_i = f(x_i) (see footnote 6). The main task of the simulator is to find f. Firstly, an initial state is prepared in P and transformed by U. Then M performs a measurement on the transformed state in a chosen measurement basis. The measurement result is delivered to F and recorded in the classical memory. Note here that the information about the initial state and the measurement basis, encoded in P and M, is also determined by the supervisor before the learning. Finally, F updates U on the basis of the recorded results. Basically, the learning is just the repetition of these three steps. When the learning is completed, we obtain a P–U–M device that implements f by simply removing F. The supervisor then investigates whether the P–U–M device found provides any speedup reducing the overall oracle references, or saves any computational resources for implementing the algorithm [22]. In particular, the supervisor would standardize the identified operations U as an algorithm. Here, we clarify that the input information in T and the measurement results are classical. Nevertheless, the simulator is supposed to exploit quantum effects in learning, because the operations before measurement are all quantum. This assumption is supported by recent theoretical studies that show the improvement of learning efficiency achieved by using quantum superposition [14, 23].
3. Construction of the learning simulator
The general design of the learning simulator depicted in figure 1 works fine for problems such as number factorization. However, in problems requiring a large number of oracle references, the input is the oracle itself and, by definition, it is a (unitary) transformation rather than a string of bits. To allow for input in the form of a unitary matrix, we need to refine our simulator a little (but let us stress that this does not mean that our method loses generality). The refined version, depicted in figure 2, allows the simulator to learn an algorithm of iterative type. The difference between the learning simulators stems directly from the formulation of the problems.
The most important aspect of the refined learning simulator is the decomposition of U. In order to deal with both classical and quantum information, we divide U into three sub-devices, such that
U = U_3 U_2 U_1,    (2)

where U is the total unitary operator and U_j denotes the unitary operator of the jth sub-device. Here, U_1 and U_3 are n-qubit controllable unitary operators, whereas U_2 is the oracle for encoding the input. By 'controllable' we mean here, and throughout the paper, that they can be changed by the feedback.
The controllable unitary operators are generally parameterized as

U_k(a_k) = exp(−i a_k · λ),  k = 1, 3,    (3)

where a_k is a real vector in the (d² − 1)-dimensional Bloch space for d = 2^n, and λ is a vector whose components are SU(d) group generators [24, 25]. The components of a_k can be directly matched to control parameters in some experimental schemes, e.g., beam-splitter and phase-shifter alignments in linear optical systems [26] and radio-frequency (rf) pulse sequences in nuclear magnetic resonance (NMR) systems [27]. In that sense, we call a_k a control-parameter vector. Here, U_2 is determined by the input, as described above. In such a setting, we expect our simulator to learn an optimal set of vectors a_1 and a_3, such that U_1 and U_3 will solve a given problem.
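This parameterization can be sketched concretely for a single qubit (d = 2), where the SU(2) generators can be taken as the Pauli matrices. The following is a minimal illustration in our own notation, not code from this work:

```python
import numpy as np
from scipy.linalg import expm

# SU(2) generators: the Pauli matrices (d = 2, so d^2 - 1 = 3 parameters).
PAULI = np.array([
    [[0, 1], [1, 0]],      # sigma_x
    [[0, -1j], [1j, 0]],   # sigma_y
    [[1, 0], [0, -1]],     # sigma_z
])

def parameterized_unitary(a):
    """U(a) = exp(-i a . lambda) for a real control-parameter vector a."""
    a = np.asarray(a, dtype=float)
    return expm(-1j * np.einsum('k,kij->ij', a, PAULI))

U = parameterized_unitary([0.3, -1.2, 0.7])
assert np.allclose(U @ U.conj().T, np.eye(2))  # any real a gives a unitary
```

A feedback rule therefore only ever has to adjust the real vector a; unitarity of the learned operation is guaranteed by construction.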
Our simulator is actually well suited to learning even iterative algorithms, such as Grover's [5]. We envisage using our simulator as follows: in the first stage, apply U_1 to an input state, then U_2, which is a non-trivial operation (the oracle), and finally U_3, to generate an output state. The feedback system updates U_1 and U_3. Then, after a certain number of iterations which do not lead to any improvements, our simulator goes to the second stage, where the output state is fed back as the input state and U_1–U_2–U_3 is applied again. Therefore, in the second stage, the oracle is referenced twice. If it fails again, it will try to loop three times in the third stage. After some number of stages, there will be enough oracle references to solve the problem. In such a way, our simulator can learn even a quantum algorithm of iterative type (see footnote 7), without adopting any additional sub-devices or altering the structure in a real experiment. Thus, the scalability with respect to the size of the search space is concerned only with the number of control parameters in a_1 and a_3, given by N = 2(d² − 1), where d = 2^n.
Here, we highlight another subsidiary question: how long does it take for our simulator to learn an (almost) deterministic quantum algorithm? Investigating this issue will become increasingly important, especially in applying our simulator to very large-scale problems (i.e., large n). One may object that our simulator runs extremely slowly for large-size problems. On the other hand, however, it is also likely that, in its learning, our simulator enjoys the quantum speedup, if any, of the algorithm to be found. To see this, consider two cases: a classical algorithm and a quantum algorithm which our simulator tries to find, assuming that they are of different complexities in terms of the number of oracle queries. For instance, the quantum one may make a polynomial number of oracle queries, whereas the number of queries for the classical one increases exponentially with the problem size. Regardless of the method of realization, a learning simulator cannot reduce the number of stages below the number of oracle queries in the algorithm to be found. This is reflected in the learning time. In other words, our simulator may show a learning speedup, exploring far fewer stages in the learning of the quantum algorithm, as long as the algorithm to be found exhibits quantum speedup. These competing considerations require us to investigate the learning time as well as the effectiveness of our simulator.
4. Application to the Deutsch–Jozsa problem
As a case study, consider an n-bit oracle decision problem, called the Deutsch–Jozsa (DJ) problem. The problem is deciding whether some binary function f: {0, 1}^n → {0, 1} is constant (f generates the same value, 0 or 1, for every input) or balanced (f generates 0 for exactly half of the inputs, and 1 for the rest of the inputs) [2, 3]. On a deterministic classical Turing machine, 2^(n−1) + 1 queries are required to solve this problem in the worst case. If we use a probabilistic classical algorithm, we can determine the function with a small error, smaller than 1/2^(q−1), by making q queries [28, 29].
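As a side illustration (ours, not part of the original analysis), the error probability of the natural randomized classical strategy — answer 'constant' whenever q distinct uniformly random queries all agree — can be computed exactly and checked against an exponentially shrinking bound:

```python
from math import comb

def err_prob(n, q):
    """P(all q distinct random queries to a balanced f agree).

    A balanced f on N = 2^n inputs has two preimages of size N/2, so all
    q answers agree iff every query lands in the same half:
    P = 2 * C(N/2, q) / C(N, q)."""
    N = 2 ** n
    return 2 * comb(N // 2, q) / comb(N, q)

# The error vanishes exponentially in the number of queries q.
for q in range(1, 6):
    assert err_prob(4, q) <= 2 ** (1 - q)
```

Each ratio (N/2 − i)/(N − i) is at most 1/2, which gives the stated bound 1/2^(q−1).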
On the other hand, the DJ quantum algorithm solves the problem with only a single query [29, 30]. The DJ quantum algorithm runs as follows: first, apply H^⊗n to the input state |0⟩^⊗n, then U_f to evaluate the input function, and finally H^⊗n again to produce an output state |Ψ_out⟩. Here, H is the Hadamard gate, which transforms the qubit states |0⟩ and |1⟩ into the equal superposition states (|0⟩ + |1⟩)/√2 and (|0⟩ − |1⟩)/√2, respectively. U_f is the function-evaluation gate that calculates a given function f. It is defined by its action,

U_f |j⟩ = (−1)^f(j) |j⟩,    (4)

where j ∈ {0, 1}^n labels the binary sequences of the computational basis. Then, the output state is given as

|Ψ_out⟩ = ±|0⟩^⊗n if f ∈ C,  and  |Ψ_out⟩ = Σ_j c_j |j⟩ if f ∈ B,    (5)

where C and B are the sets of constant and balanced functions, respectively, and the components c_j depend on the particular balanced function (excepting that c_(00⋯0) = 0 for every f ∈ B). In the last step, a von Neumann measurement is performed on the output state. The corresponding measurement operator is the projector (|0⟩⟨0|)^⊗n. The other projectors constituting the observable are irrelevant, because we are interested only in the probabilities associated with the first case,

P(00⋯0 | f ∈ C) = |⟨0|^⊗n |Ψ_out⟩|² = 1,    (6)

and the second case,

P(00⋯0 | f ∈ B) = |⟨0|^⊗n |Ψ_out⟩|² = 0.    (7)

Therefore the function can be identified as constant or balanced with only a single oracle query.
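The single-query behaviour described above can be verified numerically. Below is a minimal sketch (our own illustration, using a phase-type oracle as in equation (4)) of the H^⊗n–U_f–H^⊗n circuit:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron_n(M, n):
    """n-fold tensor power of a single-qubit gate."""
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, M)
    return out

def dj_measure_prob(f_values):
    """Probability of outcome |0...0> after H^n . U_f . H^n on |0...0>.

    f_values: length-2^n truth table of f; the oracle acts as
    U_f|j> = (-1)^f(j) |j>."""
    n = int(np.log2(len(f_values)))
    Hn = kron_n(H, n)
    phases = (-1.0) ** np.asarray(f_values)
    state = np.zeros(2 ** n)
    state[0] = 1.0
    out = Hn @ (phases * (Hn @ state))
    return abs(out[0]) ** 2

n = 3
constant = [0] * 2 ** n
balanced = [0] * 2 ** (n - 1) + [1] * 2 ** (n - 1)
assert np.isclose(dj_measure_prob(constant), 1.0)  # verdict 'c', equation (6)
assert np.isclose(dj_measure_prob(balanced), 0.0)  # verdict 'b', equation (7)
```

A single application of the oracle thus separates the two cases deterministically, exactly as equations (6) and (7) state.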
We are now ready to apply our method to the DJ problem. To begin, the supervisor prepares the set of input–target pairs

T = {(f_i, t_i)},  with t_i = 'c' if f_i ∈ C and t_i = 'b' if f_i ∈ B.

The learning simulator is now to find the 'functional' f by adjusting U_1 and U_3. The input functions f_i are encoded in the oracle U_2. Here, we chose the same form of the oracle as in equation (4), i.e., type (ii) (see appendix A). Then P prepares an arbitrary initial state and M performs the measurement on each qubit. Here we introduce a function that maps a measurement result to one of the targets (in our case, 'c' or 'b'). We call this the interpretation function. Note that the interpretation function is also to be learned because, in general, no a priori knowledge of the quantum algorithm to be found is available. For the sake of convenience, we consider a Boolean function that maps the measurement result m = m_1 m_2 ⋯ m_n to 0 (equivalently, 'c') only if m_j = 0 for all j, and otherwise to 1 (equivalently, 'b'). One may generalize the interpretation function to one with a larger target set (of size at most 2^n) if one is interested in other problems that contain more targets [31].
5. The learning algorithm: differential evolution
One of the most important parts of our method is the choice of a learning algorithm. The efficiency and accuracy of machine learning are, in general, heavily influenced by the algorithm chosen. We employ so-called 'differential evolution', as it is known as one of the most efficient optimization methods [32]. We implement the differential evolution as follows. To begin, we prepare N_p sets of the control-parameter vectors {a_1^(m), a_3^(m)} (m = 1, …, N_p), so that we have 2N_p parameter vectors in total. They are chosen initially at random and recorded in the classical memory of F.
[Mutation] Then, mutant vectors v_k^(m) (k = 1, 3) are generated according to

v_k^(m) = a_k^(m1) + W (a_k^(m2) − a_k^(m3)),

where the indices m1, m2, and m3 are randomly chosen for each m. These three vectors are chosen to be different from each other and from a_k^(m); for that, N_p ≥ 4 is necessary. The free parameter W, called a differential weight, is a real constant.
[Crossover] After that, all parameter vectors a_k^(m) are reformulated as trial vectors u_k^(m) by means of the following rule: for each component j,

u_(k,j)^(m) = v_(k,j)^(m) if r_j ≤ C_r,  and  u_(k,j)^(m) = a_(k,j)^(m) otherwise,

where r_j ∈ [0, 1] is a randomly generated number and the crossover rate C_r is another free parameter between 0 and 1.
[Selection] Finally, the trial vectors u_1^(m) and u_3^(m) are taken for the next iteration if they yield a fitness value larger than that obtained from a_1^(m) and a_3^(m); if not, the original vectors are retained. Here the fitness ξ is defined as the average probability of obtaining the correct target over the K input–target pairs,

ξ = (1/K) Σ_(i=1)^K P_i,

where P_i is the probability of obtaining the target t_i for the ith pair, evaluated from the measurement probabilities of equations (6) and (7). While evaluating the fitness values, F records the best fitness ξ_best and its corresponding parameter-vector set.
The above mutation–crossover–selection steps are repeated until ξ_best gets close to 1. In the ideal case, the simulator finds parameter vectors yielding ξ = 1, with the probabilities of equations (6) and (7) equal to 1 and 0, respectively. The parameters found lead to an algorithm equivalent to the original DJ one.
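The whole loop can be sketched end to end for the one-bit DJ problem. This is our own illustration: the population size N_p = 20, weight W = 0.8, and crossover rate C_r = 0.9 are common differential-evolution defaults, not values quoted from this work, and the fitness is the average probability of the correct verdict over the four one-bit functions.

```python
import numpy as np
from scipy.linalg import expm

PAULI = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def unitary(a):
    """Controllable single-qubit unitary U(a) = exp(-i a . sigma)."""
    return expm(-1j * sum(c * s for c, s in zip(a, PAULI)))

# One-bit DJ: two constant and two balanced functions, as phase oracles.
FUNCS = [([0, 0], 'c'), ([1, 1], 'c'), ([0, 1], 'b'), ([1, 0], 'b')]

def fitness(params):
    """Average probability of the correct verdict over all four oracles."""
    U1, U3 = unitary(params[:3]), unitary(params[3:])
    total = 0.0
    for table, target in FUNCS:
        oracle = np.diag([(-1.0) ** v for v in table])
        out = U3 @ oracle @ U1 @ np.array([1.0, 0.0])
        p0 = abs(out[0]) ** 2               # interpret outcome 0 as 'c'
        total += p0 if target == 'c' else 1.0 - p0
    return total / len(FUNCS)

rng = np.random.default_rng(0)
Np, W, Cr, dim = 20, 0.8, 0.9, 6
pop = rng.uniform(-np.pi, np.pi, (Np, dim))
fit = np.array([fitness(p) for p in pop])

for _ in range(300):
    for m in range(Np):
        m1, m2, m3 = rng.choice([i for i in range(Np) if i != m], 3,
                                replace=False)
        v = pop[m1] + W * (pop[m2] - pop[m3])     # mutation
        cross = rng.random(dim) <= Cr
        trial = np.where(cross, v, pop[m])        # crossover
        f_trial = fitness(trial)
        if f_trial > fit[m]:                      # selection
            pop[m], fit[m] = trial, f_trial

assert fit.max() > 0.9  # a near-deterministic one-query algorithm is found
```

The optimum ξ = 1 is attainable in this parameterization (e.g., both unitaries equal to the Hadamard gate up to a global phase), and the population reliably converges toward it.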
6. Numerical analysis
The simulations are done for the n-bit DJ problem, with n increasing from 1 to 5. In the simulations, we take a fixed population size N_p for all n (see footnote 8). The results are given in figure 3(a), where we present the averaged best fitness, sampled over 1000 trials. It is clearly observed that the averaged best fitness approaches 1 as the iteration proceeds. Just one stage is required for all n. This implies that our simulator can faithfully learn a single-query quantum algorithm for the DJ problem. It is also notable that the algorithms found are equivalent to, but not exactly equal to, the original DJ algorithm: the U_1 and U_3 found are always different, but constitute an algorithm solving the DJ problem (see appendix B).
We then present a learning probability P(r), defined as the probability that the learning is completed before or at the rth iteration [33]. Here we assume a halting condition for finding a nearly deterministic algorithm. In figure 3(b), we present P(r) for all n, each averaged over 1000 simulations. We find that P(r) is well fitted by an integrated Gaussian

P(r) ≈ ∫_0^r g(x) dx,

where the probability density g(x) is a Gaussian function with mean r̄ and standard deviation σ. Here, r̄ is the average iteration number and σ is the standard deviation over the simulations; together they characterize how many iterations are sufficient for a given statistical accuracy. Note that we find finite values of r̄ and σ for all n. The probability density, obtained from P(r), is drawn in figure 3(c).
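The integrated-Gaussian fit can be illustrated with a small sketch (the mean and standard deviation below are purely hypothetical, not the values measured in the simulations): if completion iterations are Gaussian-distributed, the empirical P(r) matches the closed form Φ((r − r̄)/σ).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
r_bar, sigma = 200.0, 30.0

# Hypothetical completion iterations for 1000 simulated learning runs.
runs = rng.normal(r_bar, sigma, 1000)

def P_emp(r):
    """Empirical learning probability: fraction of runs done by iteration r."""
    return float(np.mean(runs <= r))

def P_gauss(r):
    """Integrated Gaussian: P(r) = Phi((r - r_bar) / sigma)."""
    return 0.5 * (1.0 + erf((r - r_bar) / (sqrt(2.0) * sigma)))

for r in (150, 200, 250):
    assert abs(P_emp(r) - P_gauss(r)) < 0.05
```

With 1000 samples the empirical curve tracks the integrated Gaussian to within a few percent, which is the sense in which the fits in figure 3(b) should be read.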
We also investigate the learning time. As we already pointed out, the learning time is an intriguing issue, related not only to the applicability of our method to large-scale problems but also to the learning speedup. Regarding the average iteration number r̄ as a learning time, we present a graph of r̄ versus the number of parameters N in figure 3(d). Remarkably, the data are well fitted by a linear function of √N. This means that the learning time is proportional to the square root of the size of the parameter space (see footnote 9). This contrasts with the typical exponential tendency of classical machine learning (see, for example, [34, 35] and the references therein).
7. Summary and remarks
We have presented a method for quantum algorithm design based on machine learning. The simulator that we have used is a quantum–classical hybrid, in which a quantum student is taught by a classical teacher. We discussed how such hybridization is beneficial in terms of usefulness and implementation cost. Our simulator is applicable to designing oracle-based quantum algorithms. We demonstrated, as a case study, that our simulator can faithfully learn a single-query quantum algorithm that solves the DJ problem, even though it was not constrained to a single query. The algorithms found are equivalent, but not exactly equal, to the original DJ algorithm, with fitness approaching 1.
We also investigated the learning time, as this would become increasingly important in application, not only due to the large-scale problems often arising in machine learning but also because, in its learning, our simulator potentially exhibits the quantum speedup, if any, of an algorithm to be found. In the investigation, we observed that the learning time is proportional to the square root of the size of the parameter space, instead of showing the exponential dependence of classical machine learning. This result is very suggestive. We expect our simulator to reflect the quantum speedup of the algorithm found in its learning, possibly in synergy with the finding from [14] that for quantum algorithms, the size of the parameter space can be significantly smaller than that for their classical counterparts: not only does their learning time scale more favorably with the size of the space, but also this size is smaller to begin with.
We hope that the proposed method will help in designing quantum algorithms, and will provide insight into learning speedup by establishing a link between the learning time and the quantum speedup of the algorithms found. However, it remains an open question whether one would observe further improvements in quantum algorithm design on employing quantum feedback rather than classical feedback.
Acknowledgments
JB would like to thank M Żukowski, H J Briegel, and B C Sanders for discussions and comments. We acknowledge the financial support of National Research Foundation of Korea (NRF) grants funded by the Korea government (MEST; No. 2010-0018295 and No. 2010-0015059). JR and MP were supported by the Foundation for Polish Science TEAM project cofinanced by the EU European Regional Development Fund. JR was also supported by NCBiR-CHIST-ERA Project QUASAR. MP was also supported by the UK EP-SRC and ERC grant QOLAPS.
Appendix A.: Quantum oracle operation
As described in the main text, one could consider two different issues in designing a certain type of quantum algorithm. The first is determining a specific form of the quantum oracle operation, and the second is finding the other incorporated operations that maximize the quantum advantages. Although we focused on the latter in the current work, it is also necessary to inquire into which kind of quantum oracle is, practically, best fitted for our learning simulator in figure 2.
Dealing with the quantum oracle is a twofold task: defining an appropriate query function Q and encoding its output q into the oracle operation. The query function Q maps the available inputs of the problem to certain accessible values q. Here we clarify that Q is evaluated classically, and independently of the construction of the oracle operation. The finite input set and the query function Q are determined prior to learning, as mentioned in section 2.
Let us now consider a general process for the oracle operation, in which the query output can be imprinted both in an additional register and in a phase, such that

|j⟩ ⊗ |χ⟩ → e^(iθ_j) |j⟩ ⊗ |χ ⊕ s_j⟩,    (A.1)

where |j⟩ is a computational-basis state and |χ⟩ is a quantum state of the additional register. Here, θ_j and s_j are controllable parameters depending on the query output q. We then determine a specific form of oracle operation by choosing either (i) θ_j = 0, so that q is encoded in the register values, or (ii) s_j = 0, so that q is encoded in the phase. These two types of oracle are equivalent, in the sense that they are independent of the query function Q and can be converted into each other without any alteration of the complexity of the algorithm found [36]. In this work, we considered the latter type of oracle operation, as it is more economical: the query function is encoded into the phase without any additional system.
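The equivalence of the two types can be checked with the standard phase-kickback argument: feeding an ancilla prepared in (|0⟩ − |1⟩)/√2 into a bit-value oracle reproduces the phase oracle on the main register. A small sketch (our own illustration):

```python
import numpy as np

def bit_oracle(f):
    """Type (i): |x>|y> -> |x>|y XOR f(x)> on n+1 qubits (permutation matrix)."""
    N = len(f)
    U = np.zeros((2 * N, 2 * N))
    for x in range(N):
        for y in (0, 1):
            U[2 * x + (y ^ f[x]), 2 * x + y] = 1.0
    return U

def phase_oracle(f):
    """Type (ii): |x> -> (-1)^f(x) |x>, no additional system."""
    return np.diag([(-1.0) ** v for v in f])

f = [0, 1, 1, 0]                   # a balanced 2-bit function
minus = np.array([1.0, -1.0]) / np.sqrt(2)
psi = np.ones(4) / 2.0             # uniform superposition of inputs

# The ancilla |-> turns the bit-value shift into a phase on the main register.
out = bit_oracle(f) @ np.kron(psi, minus)
expected = np.kron(phase_oracle(f) @ psi, minus)
assert np.allclose(out, expected)
```

The ancilla state is left unchanged, which is why the phase form can dispense with the additional system altogether.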
Appendix B.: The variants of the original one-bit Deutsch–Jozsa algorithm
In this appendix, we discuss the original Deutsch–Jozsa algorithm and its variants for the simple case n = 1 [37]. In this case, the learning part of our simulator consists of two single-qubit unitary operations U_k (k = 1, 3) and one oracle operation U_2, as in equation (4). Here it is convenient to rewrite any single-qubit unitary operation as

U(a) = exp(−i a · σ / 2),    (B.1)

where a is a three-dimensional real vector and σ is nothing but the vector of Pauli operators. Here, the rotation angle is given by the Euclidean norm |a|, and n̂ = a/|a| is a normalized vector. All pure states are characterized as points on the surface of a unit sphere, called the 'Bloch sphere', and U(a) rotates a pure state (i.e., a point on the Bloch sphere) by the angle |a| around the axis n̂. Such a geometric description is convenient for describing the unitary processes.
We now turn to the one-bit DJ algorithm H–U_f–H, which consists of three operation steps. Firstly, the first unitary rotates the initial state |0⟩ to a state on the equator of the Bloch sphere, i.e., (|0⟩ + e^(iϕ)|1⟩)/√2, where ϕ is an arbitrary phase factor. The oracle then flips the state to the antipodal side if f is balanced, and leaves it unchanged if f is constant. The last unitary transforms the incoming state to the corresponding output,

|Ψ_out⟩ = |0⟩ if f ∈ C,  and  |Ψ_out⟩ = |1⟩ if f ∈ B (up to a global phase).    (B.2)

Noting that the Hadamard operation is a π-rotation about the axis (x̂ + ẑ)/√2, it is easily checked that the phase ϕ is zero in the original DJ algorithm. On the basis of such a description, we can infer that there are numerous sets {a_1, a_3} leading the initial state to the desired output as in equation (B.2). Thus, many variants of the original DJ algorithm exist, and our simulator indeed finds such pairs U_1 and U_3: the algorithm constructed with them reproduces the outputs of equation (B.2), and is therefore not exactly equal to, but equivalent to, the original one-bit DJ algorithm.
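The claim that the Hadamard gate is a π-rotation about the axis (x̂ + ẑ)/√2 (up to a global phase) can be verified directly:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = (X + Z) / np.sqrt(2)            # Hadamard = (sigma_x + sigma_z)/sqrt(2)

# pi-rotation about n = (x + z)/sqrt(2):  U = exp(-i (pi/2) n . sigma)
n_sigma = (X + Z) / np.sqrt(2)
U = expm(-1j * (np.pi / 2) * n_sigma)

# U equals H up to a global phase (here -i), so |tr(U^dag H)| = 2.
assert np.isclose(abs(np.trace(U.conj().T @ H)), 2.0)
```

Since (n · σ)² = I, the exponential reduces to cos(π/2) I − i sin(π/2) n · σ = −i H, confirming the rotation picture used above.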
Footnotes
- 5
Actually, in algorithm design [19] or logic-mechanism programming [20], the important point is usually how we utilize a given oracle (or a corresponding operation for judging the positive or negative state) with other incorporated logics in order to achieve a speedup of the designed algorithm, rather than how we construct or optimize the oracle itself.
- 6
Here, the input can be encoded either on the state in P or on the control parameters of U. In most cases, encoding on U is appropriate, and this is the choice made for our work, as shown later.
- 7
The procedure is not the most general one. For full generality one would also need to add some quantum memory but, to our knowledge, no existing quantum algorithm actually uses this yet.
- 8
For a large-size classical learning system, a huge number of candidate solutions is usually needed. For example, it is considered appropriate to choose a population size of roughly 5 to 10 times the dimension of the parameter space (see [32]).
- 9
It is worth noting that there is an alternative method, called semidefinite programming, which may be used for the purpose of finding a quantum algorithm. In [19], the authors have considered the problem of finding optimal unitaries given a fixed number of queries. Their algorithm was able to solve the problem in polynomial time (i.e. polynomial in the dimension d).