Strategy for quantum algorithm design assisted by machine learning

We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a "quantum student" is taught by a "classical teacher." In other words, in our method the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable to the design of quantum oracle-based algorithms. As a case study, we chose an oracle decision problem, the Deutsch-Jozsa problem. We showed by Monte-Carlo simulations that our simulator can faithfully learn a quantum algorithm that solves the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, instead of the exponential dependence found in classical machine-learning-based methods.

Quantum information science has seen explosive growth in recent years as a more powerful generalization of classical information theory [1]. In particular, quantum computation has gained its momentum from quantum algorithms that outperform their classical counterparts [2,3,4,5]. Thus, the development of quantum algorithms is one of the most important areas of computer science. Unfortunately, however, recent research on quantum algorithm design has been rather stagnant compared to other areas of quantum information, as new quantum algorithms have scarcely been discovered in the last few years [6]. We believe this is because we, the designers, are accustomed to classical logic. We therefore think that quantum algorithm design should turn towards a new methodology, different from the current approach.
Machine learning is a well-developed branch of artificial intelligence and automatic control. Although "learning" is often thought of as a uniquely human trait, a machine that is given feedback (taught) can improve its performance (learn) at a given task [7,8]. In the last decades, there has been growing interest not only in theoretical studies but also in a variety of applications of machine learning. Recently, many quantum implementations of machine learning have been introduced to achieve better performance in quantum information processing tasks [9,10,11,12,13]. These works motivate us to look at machine learning as an alternative approach to quantum algorithm design.
Keeping our primary goal in mind, we ask whether a quantum algorithm can be designed by the machine that also implements it. Based on this idea, we consider a machine which is able to learn quantum algorithms in a real experiment. Such a machine may discover solutions which are difficult for humans to find because of our classical way of thinking. Since we can always simulate a quantum machine on a classical computer (though not always efficiently), we can use such simulations to design quantum algorithms without the need for a programmable quantum computer. This classical machine can thus be regarded as a simulator that learns a quantum algorithm, a so-called learning simulator. The novelty of such a learning simulator lies in its capabilities of "learning" and "teaching." With regard to these abilities, we consider two internal systems: one is a learning system (the "student," say), and the other is a main feedback system (the "teacher," say). While the standard approach would be to assume that both the student and the teacher are quantum machines, here we use a quantum-classical hybrid simulator in which the student is a quantum machine and the teacher a classical one. Such a hybridization is easier and more economical to realize, provided the desired algorithms can still be learned.
In this paper, we employ a learning simulator for quantum algorithm design. The main question of this work is: "Can our learning simulator learn a quantum algorithm without any pre-programmed knowledge?" The answer is affirmative, as we show by Monte-Carlo simulations that our learning simulator can faithfully learn a quantum algorithm that solves an oracle decision problem, the Deutsch-Jozsa problem. The algorithms found are equivalent, but not exactly equal, to the original Deutsch-Jozsa algorithm. We also investigate the learning time, as it becomes important in applications not only because large-scale problems often arise in machine learning, but also because in its learning our simulator will exhibit the quantum speedup (if any) of the algorithm to be found, as described later. We observe that the learning time is proportional to the square root of the total number of parameters, instead of the exponential tendency found in classical machine learning. We expect that our learning simulator will reflect the quantum speedup of the found algorithm in its learning, possibly in synergy with the finding that the size of the parameter space can be significantly smaller for quantum algorithms than for their classical counterparts [14]. We note that the presented method is aimed at a real experiment, in contrast to the techniques of [15,16].

Basic architecture of the learning simulator
Before discussing the details of the learning simulator, it is important to understand what machine learning is. A typical task of machine learning is either to find a function f(x) = t_x relating the input x to the target t_x based on observations (supervised learning), or to find some hidden structure in the data (unsupervised learning) [7,8].
The main difference between supervised and unsupervised learning is that in the latter case the target t_x is unknown. Throughout this paper, we consider supervised learning, where the target t_x is known. We now describe the basic elements of the learning simulator. The simulator consists of two internal parts. One is the learning-system, which is supposed to eventually perform a quantum algorithm, and the other is the feedback-system, responsible for teaching the former. The learning-system consists of the standard quantum information-processing devices: preparation P to prepare a pure quantum state, operation U to perform a unitary operation, and measurement M. The feedback-system, on the other hand, is classical, as it is easier and less expensive to realize in practice. Furthermore, by employing classical feedback, we can use a well-known (classical) learning algorithm whose performance has already been proven reliable. Recently, a scheme for machine learning involving quantum feedback has been reported [17], but the usefulness of the quantumness has not been clearly elucidated, even though the results are meaningful in some applications. Moreover, it is not yet clear whether any classical feedback is applicable to quantum algorithm design. We therefore prefer classical feedback in this work. In this sense, the simulator is a quantum-classical hybrid. The feedback-system is equipped with a main feedback device F, which involves the classical memory S and the learning algorithm A. S records the control parameters of U and the measurement results of M. A corresponds to a series of rules for updating U. In figure 1, the schematic diagram of the learning simulator is presented.
We briefly illustrate how such a simulator performs the learning. To begin, a supervisor provides a set T of K input-target pairs (x_i, t_{x_i}), where f is the function that transforms the inputs x_i into their targets t_{x_i} = f(x_i). Our goal is to find f. To this end, the supervisor sends T to the simulator, which then starts learning. First, a state |Ψ_in⟩ is prepared in P and transformed to |Ψ_out⟩ by U.
Then M performs a measurement on |Ψ_out⟩. The measurement result is delivered to F, which updates U according to A. Basically, the learning is just the repetition of these three steps. When the learning is completed, we obtain a P-U-M device that implements f by simply removing F. The supervisor can then analyze whether the found P-U-M provides any speedup or saves any computational resources. Here, we clarify that the input information in T and the measurement results are classical. Nevertheless, the simulator is supposed to exploit quantum effects in learning, because the operations before measurement are all quantum. This assumption is supported by recent theoretical studies showing that quantum superposition can improve learning efficiency [14,18].
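The three-step loop (prepare, operate and measure, then classically update) can be sketched in code. The following is a minimal illustration only, not the authors' implementation: the function names (run_learning, update, etc.) and the toy single-qubit task are our own assumptions.

```python
import numpy as np

def run_learning(prepare, operate, measure, update, params, iters=200):
    """Schematic P -> U -> M -> F loop: S is the history list, A is `update`."""
    history = []                            # classical memory S
    for _ in range(iters):
        psi = operate(params) @ prepare()   # P, then U(params)
        outcome = measure(psi)              # M: classical measurement result
        history.append((params, outcome))   # record in S
        params = update(params, outcome)    # A: classical learning rule in F
    return max(history, key=lambda h: h[1])

# Toy instantiation: learn the rotation angle that maximizes P(|1>)
rng = np.random.default_rng(0)
prepare = lambda: np.array([1.0, 0.0])
operate = lambda th: np.array([[np.cos(th), -np.sin(th)],
                               [np.sin(th),  np.cos(th)]])
measure = lambda psi: abs(psi[1])**2

def update(th, outcome):
    cand = th + rng.normal(0.0, 0.3)                 # random proposal
    new = measure(operate(cand) @ prepare())
    return cand if new > outcome else th             # keep improvements only

best_theta, best_p = run_learning(prepare, operate, measure, update, 0.0)
```

Removing the feedback (the `update` call) leaves only the P-U-M chain with the learned parameters, mirroring how the finished device is extracted from the simulator.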

Construction of the learning simulator
The general design of the learning simulator depicted in figure 1 works fine for problems where the input is classical (e.g. number factorization). However, in problems involving an oracle, the input is the oracle itself and, by definition, it is a (unitary) transformation rather than a string of bits. To allow for input in the form of a unitary matrix we need to refine our simulator a little (but let us stress that this does not mean our method loses generality). The refined version depicted in figure 2 allows the simulator to learn any algorithm with oracle references. The difference between the two learning simulators stems directly from the formulation of the problems. The most important aspect of the refined learning simulator is the decomposition of U. In order to deal with both classical and quantum information, we divide U into three sub-devices, such that Û_tot = Û_3 Û_2 Û_1, where Û_tot is the total unitary operator and Û_j (j = 1, 2, 3) denotes the unitary operator of the jth sub-device. Û_2 is the part that encodes the (classical) input x_i as the oracle. Û_1 and Û_3 are n-qubit controllable unitary operators. By "controllable" we here, and throughout the paper, mean that they can be changed by the feedback.
The unitary operators are generally parametrized as Û(p) = exp(i p · G), where p = (p_1, p_2, ..., p_{d²−1})^T is a real vector in the (d²−1)-dimensional Bloch space for d = 2^n, and G = (ĝ_1, ĝ_2, ..., ĝ_{d²−1})^T is a vector whose components are SU(d) group generators [19,20]. In a real experiment, such a unitary operation is realized with control parameters p_j ∈ [−π, π] [21,22]. In this sense, we call p a control-parameter vector. Here p_2(x_i) is determined by the inputs in T. In this setting, we expect our simulator to learn an optimal set {p_1, p_3}, so that Û_1 and Û_3 come to solve a given problem. We emphasize that the design depicted in figure 2 is for general purposes. The simulator is actually well-suited to learn even iterative algorithms, such as Grover's [5]. We envision using our simulator as follows: in the first stage, apply Û_1 to an input state, then Û_2, a non-trivial operation (the oracle), and finally Û_3 to generate an output state. The feedback-system updates Û_1 and Û_3. If, after a certain number of iterations, no improvement is achieved, our simulator moves to the second stage, where the output state is fed back as the input state and Û_1-Û_2-Û_3 is applied again. Therefore, in the second stage, the oracle is referenced twice. If it fails again, it will loop three times in the third stage. After some number of stages, there will be enough oracle references to solve the problem. In this way, our simulator can learn every known quantum algorithm ‡, without adopting any additional sub-devices or altering the structure in a real experiment. Thus, the scalability with the size of the search space is only a matter of the number of control parameters in Û_1 and Û_3.
‡ The procedure is not the most general one. For full generality one would also need to add some quantum memory but, to our knowledge, no existing quantum algorithm actually uses it yet.
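The parametrization Û(p) = exp(i p · G) can be sketched for a single qubit (d = 2), where the SU(2) generators are the Pauli matrices; for larger d one would use the d²−1 generalized Gell-Mann matrices. The function name and generator normalization are our own assumptions.

```python
import numpy as np

# SU(2) generators (Pauli matrices); assumed normalization
G = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def unitary(p):
    """U(p) = exp(i p . G), via diagonalization of the Hermitian generator sum."""
    A = sum(pj * gj for pj, gj in zip(p, G))   # Hermitian matrix p . G
    w, V = np.linalg.eigh(A)                   # A = V diag(w) V^dagger
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T
```

Since the map p → Û(p) is smooth and surjective onto SU(2) (up to a phase), the feedback can explore the whole space of single-qubit operations by varying p alone.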
Here, we highlight another subsidiary question: how long does it take for our simulator to learn an (almost) deterministic quantum algorithm? Investigating this issue will be increasingly important, especially when applying our simulator to very large-scale (i.e. D ≫ 1) problems. On one hand, one may suspect that our simulator runs extremely slowly for large problem sizes. On the other hand, it is also likely that in its learning our simulator enjoys the quantum speedup, if any, of the algorithm to be found. To see this, consider two cases, a classical and a quantum algorithm that our simulator tries to find, assuming they are of different complexities in terms of the number of oracle queries: for instance, the quantum algorithm queries the oracle a polynomial number of times, whereas the classical one queries it an exponential number of times with respect to the problem size. Regardless of its realization method, a learning simulator cannot reduce the number of stages below the number of oracle queries in the algorithm to be found. This is reflected in the learning time. In other words, our simulator may show a learning speedup, exploring far fewer stages when learning a quantum algorithm, as long as the algorithm to be found exhibits a quantum speedup. These competing arguments demand that we investigate the learning time as well as the effectiveness of our simulator.

Application to Deutsch-Jozsa problem
As a case study, consider an n-bit oracle decision problem, the Deutsch-Jozsa (DJ) problem. The problem is to decide whether some binary function x: {0, 1}^n → {0, 1} is constant (x returns the same value, 0 or 1, on every input) or balanced (x returns 0 on exactly half of the inputs and 1 on the rest) [2,3]. On a classical Turing machine, 2^{n−1} + 1 queries are required to solve this problem with certainty. If we use a probabilistic classical algorithm, we can determine whether x is constant or balanced with a small error, less than 2^{−q}, using q queries [23,24].
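The classical worst case can be made concrete by enumeration. The sketch below (our own illustration, not from the paper) shows for n = 2 that after 2^{n−1} queries all returning the same value, both a constant and a balanced function remain consistent with the answers, so at least one more query is needed to decide.

```python
from itertools import product

n = 2
inputs = list(product([0, 1], repeat=n))            # the 2^n inputs
funcs = list(product([0, 1], repeat=len(inputs)))   # all functions x as value tuples
constant = [f for f in funcs if len(set(f)) == 1]
balanced = [f for f in funcs if sum(f) == len(f) // 2]

# Suppose the first 2^(n-1) queried inputs all returned 0: both a constant
# and a balanced function remain consistent, so the answer is still open.
seen = {i: 0 for i in range(2**(n - 1))}
consistent_c = [f for f in constant if all(f[i] == v for i, v in seen.items())]
consistent_b = [f for f in balanced if all(f[i] == v for i, v in seen.items())]
```

Here `consistent_c` contains the all-zero constant function and `consistent_b` contains a balanced function agreeing on the queried half, demonstrating why 2^{n−1} + 1 queries are needed in the worst case.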
On the other hand, the DJ quantum algorithm solves the problem with only a single query [25,24]. The DJ quantum algorithm runs as follows: first, apply Ĥ^{⊗n} to the input state |Ψ_in⟩ = |00⋯0⟩, then Û_x to evaluate the input function, and finally Ĥ^{⊗n} again to produce an output state |Ψ_out⟩. Here, Ĥ is the Hadamard gate, which transforms the qubit states |0⟩ and |1⟩ into the equal-superposition states Ĥ|0⟩ = (|0⟩ + |1⟩)/√2 and Ĥ|1⟩ = (|0⟩ − |1⟩)/√2, respectively. Û_x is the function-evaluation gate that calculates a given function x. It is defined by its action Û_x |k_1 k_2 ⋯ k_n⟩ = e^{iπ x(k_1 k_2 ⋯ k_n)} |k_1 k_2 ⋯ k_n⟩, where k_1 k_2 ⋯ k_n ∈ {0, 1}^n labels the computational basis. The output state is then |Ψ_out⟩ = ±|00⋯0⟩ if x ∈ C, and a superposition of states |z_1 z_2 ⋯ z_n⟩ excluding the all-zero component (i.e. excepting z_j = 0 for all j) if x ∈ B, where C and B are the sets of constant and balanced functions, respectively, and the binary components z_j ∈ {0, 1} (j = 1, 2, ..., n) depend on which of the (2^n choose 2^{n−1}) balanced functions is given. In the last step, a von Neumann measurement is performed on the output state. The corresponding measurement operator is M̂ = |00⋯0⟩⟨00⋯0|. The other projectors constituting the observable are irrelevant, because we are interested only in the probability ⟨Ψ_out|M̂|Ψ_out⟩, which equals 1 in the first case (x ∈ C) and 0 in the second (x ∈ B). Therefore, we can determine with only a single oracle query whether the function x is constant or balanced.
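The single-query behavior is easy to verify numerically. Below is a small numpy sketch of the DJ circuit Ĥ^{⊗n} Û_x Ĥ^{⊗n} for n = 2, checking that the all-zero outcome probability is 1 for constant functions and 0 for balanced ones; the function and variable names are our own.

```python
import numpy as np
from itertools import combinations

n = 2
d = 2**n
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)          # H^{tensor n}

def oracle(x):
    """Phase oracle U_x = diag(e^{i pi x(k)}) over the computational basis."""
    return np.diag([(-1.0)**x[k] for k in range(d)])

def prob_all_zero(x):
    """Probability of the |00...0> outcome after Hn . U_x . Hn |00...0>."""
    psi = np.zeros(d)
    psi[0] = 1.0
    out = Hn @ oracle(x) @ Hn @ psi
    return abs(out[0])**2

# The 2 constant functions and the (2^n choose 2^{n-1}) balanced functions
const_x = [tuple([0] * d), tuple([1] * d)]
bal_x = [tuple(1 if k in s else 0 for k in range(d))
         for s in combinations(range(d), d // 2)]
```

The amplitude of |00⋯0⟩ after the circuit is (1/2^n) Σ_k (−1)^{x(k)}, which is ±1 for constant x and 0 for balanced x, as the code confirms for each oracle.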
We are now ready to apply our method to the DJ problem. To begin, the supervisor prepares the set T of input-target pairs (x_i, t_{x_i}), where t_{x_i} = 'c' if x_i is constant and t_{x_i} = 'b' if x_i is balanced. The learning simulator is now to find the "functional" f by adjusting Û_1 and Û_3. The input functions x_i are encoded in p_2(x_i) of Û_2. Then P prepares an arbitrary initial state |Ψ_in⟩ and M performs a measurement on each qubit. Here we introduce a function that maps a measurement result to one of the targets (in our case, 'c' or 'b'). We call this the interpretation function. Note that the interpretation function is also to be learned because, in general, no a priori knowledge of the quantum algorithm to be found is available. For the sake of convenience, we consider a Boolean function that maps the measurement result z_1 z_2 ⋯ z_n to 0 (equivalently, 'c') only if z_j = 0 for all j = 1, 2, ..., n, and to 1 (equivalently, 'b') otherwise. One may generalize the interpretation function to a function {0, 1}^n → {0, 1}^m if interested in other problems with up to 2^m targets [26].
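A minimal sketch of this Boolean interpretation function (the function name is our own):

```python
def interpret(z):
    """Map a measurement bitstring to a target: all-zero -> 'c', else 'b'."""
    return 'c' if all(bit == 0 for bit in z) else 'b'
```

For example, `interpret((0, 0, 0))` declares the function constant, while any nonzero outcome bit leads to the balanced verdict.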

Learning algorithm of differential evolution
One of the most important parts of our method is choosing a learning algorithm A. The efficiency and accuracy of machine learning are in general heavily influenced by the algorithm chosen. We employ so-called "differential evolution," as it is known to be one of the most efficient optimization methods [27]. We implement differential evolution as follows. To begin, we prepare N_pop sets of control-parameter vectors {p_{1,i}, p_{3,i}} (i = 1, 2, ..., N_pop); thus we have 2N_pop parameter vectors in total. They are chosen initially at random and recorded in S in F. [L.1] Then, 2N_pop mutant vectors ν_{k,i} are generated for Û_k (k = 1, 3) according to ν_{k,i} = p_{k,a} + W(p_{k,b} − p_{k,c}), where p_{k,a}, p_{k,b}, and p_{k,c} are randomly chosen with a, b, c ∈ {1, 2, ..., N_pop}. These three vectors are chosen to be different from each other, for which N_pop ≥ 3 is necessary. The free parameter W, called the differential weight, is a real constant. [L.2] After that, all 2N_pop parameter vectors p_{k,i} = (p_{k,1}, p_{k,2}, ..., p_{k,d²−1})^T_i are reformed into trial vectors τ_{k,i} = (τ_{k,1}, τ_{k,2}, ..., τ_{k,d²−1})^T_i by the rule: for each j, τ_{k,j} = ν_{k,j} if R_j ≤ C_r, and τ_{k,j} = p_{k,j} otherwise, where R_j ∈ [0, 1] is a randomly generated number and the crossover rate C_r is another free parameter between 0 and 1. [L.3] Finally, {τ_{1,i}, τ_{3,i}} are taken for the next iteration if Û_1(τ_{1,i}) and Û_3(τ_{3,i}) yield a larger fitness value than Û_1(p_{1,i}) and Û_3(p_{3,i}). Here the fitness ξ_i is defined so that it attains its maximum value 1 when P_{C,i} = 1 and P_{B,i} = 0, where P_{C,i} and P_{B,i} are the measurement probabilities for the i-th set, given by equations (6) and (7). While evaluating the N_pop fitness values, F records in S the best fitness ξ_best and its corresponding parameter-vector set {p_{1,best}, p_{3,best}}. The above steps [L.1]-[L.3] are repeated until ξ_best gets close to 1. In the ideal case, the simulator finds {p_{1,best}, p_{3,best}} that yields ξ_best = 1 with P_C = 1 and P_B = 0. The found parameters then constitute an algorithm equivalent to the original DJ algorithm.
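Steps [L.1]-[L.3] can be assembled into a compact simulation. The sketch below learns the 1-bit DJ algorithm by differential evolution. It is an illustration under stated assumptions: the concrete fitness ξ = (P_C + (1 − P_B))/2, the hyperparameters W and C_r, and all names are ours, chosen only so that ξ = 1 exactly when P_C = 1 and P_B = 0 as in the text.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def u(p):
    """U(p) = exp(i p . sigma) for one qubit, in closed form."""
    a = np.linalg.norm(p)
    if a < 1e-12:
        return np.eye(2, dtype=complex)
    nv = p / a
    return np.cos(a) * np.eye(2) + 1j * np.sin(a) * (nv[0]*SX + nv[1]*SY + nv[2]*SZ)

# 1-bit DJ phase oracles: diag(e^{i pi x(0)}, e^{i pi x(1)})
CONSTANT = [np.diag([1, 1]).astype(complex), np.diag([-1, -1]).astype(complex)]
BALANCED = [np.diag([1, -1]).astype(complex), np.diag([-1, 1]).astype(complex)]

def fitness(p1, p3):
    """Assumed fitness: want P(|0>) = 1 for constant, 0 for balanced oracles."""
    psi0 = np.array([1, 0], dtype=complex)
    p0 = lambda ux: abs((u(p3) @ ux @ u(p1) @ psi0)[0])**2
    pc = np.mean([p0(ux) for ux in CONSTANT])
    pb = np.mean([p0(ux) for ux in BALANCED])
    return 0.5 * (pc + (1.0 - pb))          # equals 1 iff P_C = 1 and P_B = 0

def learn(npop=10, w=0.7, cr=0.9, iters=600, rng=None):
    rng = rng or np.random.default_rng(1)
    # npop members, each holding the pair (p_1, p_3) of 3-parameter vectors
    pop = rng.uniform(-np.pi, np.pi, size=(npop, 2, 3))
    fit = np.array([fitness(pop[i, 0], pop[i, 1]) for i in range(npop)])
    for _ in range(iters):
        for i in range(npop):
            a, b, c = rng.choice([j for j in range(npop) if j != i], 3,
                                 replace=False)
            nu = pop[a] + w * (pop[b] - pop[c])       # [L.1] mutation
            mask = rng.random(pop[i].shape) < cr      # [L.2] crossover
            trial = np.where(mask, nu, pop[i])
            ft = fitness(trial[0], trial[1])
            if ft > fit[i]:                           # [L.3] selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmax(fit))
    return fit[best], pop[best]
```

Running `learn()` drives the best fitness toward 1; the learned pair need not equal (H, H) but implements an equivalent single-query decision, mirroring the "equivalent but not exactly equal" observation in the text.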

Numerical analysis
The simulations are done for the n-bit DJ problem with n increasing from 1 to 5. In the simulations, we take N_pop = 10 for all n §. The results are given in figure 3(a), where we present the best fitness ξ_best averaged over 1000 trials. We clearly observe that ξ_best approaches 1 as the iterations proceed. The required number of stages is just one for all n. This implies that our simulator can faithfully learn a single-query quantum algorithm for the DJ problem, attaining ξ ≃ 1. It is also notable that the found algorithms are equivalent to, but not exactly equal to, the original DJ algorithm: the found Û_1 and Û_3 always differ, yet constitute an algorithm solving the DJ problem. We then present a learning probability P(r), defined as the probability that the learning is completed at or before the r-th iteration [28]. Here we assume the halting condition ξ_best ≥ 0.99 to find a nearly deterministic algorithm. In figure 3(b), we present P(r) for all n, each averaged over 1000 simulations. We find that P(r) is well fitted by an integrated Gaussian P(r) = ∫_0^r ρ(r') dr', where the probability density ρ(r) is the Gaussian function (1/(√(2π) ∆r)) e^{−(r−r_c)²/(2∆r²)}. Here, r_c is the average iteration number and ∆r is the standard deviation over the simulations, which characterize how many iterations suffice for a statistical accuracy of ξ_best ≥ 0.99. Note that we obtain finite values of r_c and ∆r for all n. The probability density ρ(r), extracted from P(r), is drawn in figure 3(c).
§ For a large classical learning-system, a huge number N_pop of candidate solutions is usually needed; for example, it is appropriate to choose N_pop ≃ 5D ∼ 10D (see reference [27]).
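The integrated-Gaussian fit can be sketched as follows. The halting-iteration data here are hypothetical placeholders (not the paper's data), and r_c, ∆r are simply estimated by the sample mean and standard deviation.

```python
import numpy as np
from math import erf, sqrt

def learning_probability(r, rc, dr):
    """Integrated Gaussian P(r) with density rho ~ N(rc, dr^2).

    The lower integration limit (0 vs -infinity) is immaterial when rc >> dr.
    """
    return 0.5 * (1.0 + erf((r - rc) / (sqrt(2.0) * dr)))

# Hypothetical halting iterations from 10 runs (placeholders only)
halts = np.array([120, 135, 128, 142, 150, 125, 133, 138, 129, 144], dtype=float)
rc_hat = halts.mean()          # estimate of r_c
dr_hat = halts.std(ddof=1)     # estimate of Delta r
```

By construction P(r_c) = 1/2, and P rises from near 0 to near 1 over a window of a few ∆r around r_c, which is the behavior seen in figure 3(b).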
We also investigate the learning time. As already pointed out, the learning time is an intriguing issue, related not only to the applicability of our method to large-scale problems but also to the learning speedup. Regarding r_c as the learning time, we plot r_c versus √D in figure 3(d). Remarkably, the data are well fitted by the linear relation r_c = A√D + B with A ≃ 43 and B ≃ −57. This means that the learning time is proportional to the square root of the size of the parameter space. This behavior contrasts with the typically exponential scaling found in classical machine learning (see, for example, [30,31] and references therein).
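The linear fit in √D can be reproduced schematically with a least-squares fit. The r_c values below are generated from the reported coefficients purely for illustration, and the parameter count D = 2(d² − 1) (both Û_1 and Û_3 carrying d² − 1 parameters each) is our reading of the setup.

```python
import numpy as np

# D = 2(d^2 - 1) with d = 2^n, for n = 1..5 (assumed parameter count)
n = np.arange(1, 6)
D = 2.0 * (4.0**n - 1.0)

# Illustration only: r_c generated from the reported fit r_c = A*sqrt(D) + B
A, B = 43.0, -57.0
rc = A * np.sqrt(D) + B

# A degree-1 least-squares fit in sqrt(D) recovers the coefficients
coeffs = np.polyfit(np.sqrt(D), rc, 1)
```

In practice one would feed the measured r_c values from the Monte-Carlo runs into the same `np.polyfit` call; the sub-linear growth of r_c in D is the point of contrast with exponential classical scaling.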
It is worth noting that there is an alternative method, semidefinite programming, which may be used for the purpose of finding a quantum algorithm. In reference [29], the authors considered the problem of finding optimal unitaries given a fixed number of queries. Their algorithm can solve the problem in polynomial time (i.e. polynomial in the dimension d).

Summary and remarks
We have presented a method for quantum algorithm design based on machine learning. The simulator we have used is a quantum-classical hybrid, where the quantum student is taught by a classical teacher. We discussed why such a hybridization is beneficial in terms of both usefulness and implementation cost. Our simulator is of general purpose and, in principle, every quantum algorithm could be found with it. As a case study, we demonstrated that our simulator can faithfully learn a single-query quantum algorithm that solves the DJ problem, even though it is not restricted to a single query. The found algorithms are equivalent, but not exactly equal, to the original DJ algorithm, with fitness ≃ 1.
We also investigated the learning time, as it becomes increasingly important in applications not only because large-scale problems often arise in machine learning, but also because in its learning our simulator potentially exhibits the quantum speedup, if any, of the algorithm to be found. In this investigation, we observed that the learning time is proportional to the square root of the size of the parameter space, instead of the exponential dependence found in classical machine learning. This result is very suggestive. We expect that our simulator will reflect the quantum speedup of the found algorithm in its learning, possibly in synergy with the finding of reference [14] that for quantum algorithms the size of the parameter space can be significantly smaller than for their classical counterparts: not only does the learning time scale more favorably with the size of the space, but the size itself is smaller to begin with.
We believe that the proposed method will be used to find essential quantum operations or gates that have never been regarded as crucial in quantum algorithms, and that it could boost the nascent field of merging machine learning and quantum information [32,33,34], in particular for quantum algorithm design. Nevertheless, it remains an open question whether further improvements in quantum algorithm design can be observed when employing quantum feedback instead of classical feedback.