Quantum-enhanced deliberation of learning agents using trapped ions

A scheme that successfully employs quantum mechanics in the design of autonomous learning agents has recently been reported in the context of the projective simulation (PS) model for artificial intelligence. In that approach, the key feature of a PS agent, a specific type of memory which is explored via random walks, was shown to be amenable to quantization, allowing for a speed-up. In this work we propose an implementation of such classical and quantum agents in systems of trapped ions. We employ a generic construction by which the classical agents are ‘upgraded’ to their quantum counterparts by a nested process of adding coherent control, and we outline how this construction can be realized in ion traps. Our results provide a flexible modular architecture for the design of PS agents. Furthermore, we present numerical simulations of simple PS agents which analyze the robustness of our proposal under certain noise models.


Introduction
In the past decades, quantum physics has been employed to enhance communication and information processing with significant success, laying the foundation for the now well established fields of quantum computation and quantum information [1][2][3][4][5]. In contrast, the potential of merging the related, but distinct, field of artificial intelligence (AI) with quantum physics is significantly less well-understood. Thus far, advances in this field have been reported mostly for algorithmic approaches to applied AI-related tasks, e.g., (un-) supervised data clustering and process replication, where selected quantum algorithms could be utilized [6][7][8][9][10].
On the other hand, the first result showing that quantum mechanics can also aid in the complementary task of designing autonomous learning agents (a task more closely related to robotics and the embodied cognitive sciences) has only recently been provided in [11]. This work is embedded in the framework of projective simulation (PS) for AI, introduced in [12]. The central component of PS is a specific memory system utilized by the agent. This memory system, called episodic and compositional memory (ECM), provides a platform for simulating future action before real action is taken. The ECM can be described as a stochastic network of so-called clips, which represent prior experiences of the learning agent, whose decision-making process is realized by a stochastic random walk in the clip space. In the agent's design, it is the specific structure of the ECM that is particularly suitable for quantization.
In this work we present a proposal for the experimental implementation of both classical and quantum PS agents in systems of trapped ions. While the classical variants of PS agents can easily be realized in physical systems without requiring quantum control, we show here how certain implementations of classical agents in ion traps can be used to construct quantum PS agents. This is achieved in a generic way through a nested process of adding coherent control.

Figure 1. Projective simulation agent. The projective simulation (PS) model for active learning agents, introduced in [12], describes an embodied agent that interacts with its environment via sensory input (percepts), and acts on the environment using a set of actuators. The sensors and actuators are linked to the episodic and compositional memory (ECM), which relates new perceptual input to the agent's past experience.
As we discuss below, flags allow for the demonstration of a quantum speed-up when incorporated into a very simple agent design, which is readily implementable in current laboratories.
In the next section, we present a more formal treatment of the standard PS model, and show how it can be implemented in an ionic set-up.

Standard PS agent
As noted, in the PS model the ECM is represented as a clip network, that is, a weighted directed graph over the set of vertices (clips) C = {c_1, ..., c_N}, with edge weights given by a column-stochastic transition matrix P = (p_ij), where p_ij denotes the probability of transiting from clip c_j to clip c_i. When presented with a percept s_i, the standard PS initiates a random walk in the clip network, governed by P, and starting from (the clip corresponding to) s_i. The walk is terminated at the first instance an action clip is encountered. This action is then coupled out as a real action. This process can be viewed in terms of probability vectors as follows. Each clip c_i can be represented as a canonical basis vector of an N-dimensional real vector space, that is, c_i = [0, ..., 0, 1, 0, ..., 0]^T, with the unity at the ith position. The state after one random walk transition is

p = P c_i, (1)

which is a probability vector, i.e., a vector with real non-negative entries summing to one, representing a probability distribution over the clip space. This distribution is then sampled from, obtaining some clip c_k, which, if it represents an action, is coupled out. Otherwise the random walk proceeds from c_k. In the spirit of the reinforcement learning paradigm, each round of interaction with the environment is either rewarded or not, and both cases lead to an update of the clip network, by altering the transition probabilities, and/or by altering the clip set itself, which constitutes the learning aspect of the PS agent. For an overview of the standard PS model, including examples of update rules, see [15].

Figure 2. Clip network of the ECM. A stochastic matrix P = (p_ij) (i, j = 1, 2, 3, 4) is assigned to the ECM, which governs the transition probabilities for a random walk in the network. In addition, flags, here indicated on clip c_4, may be introduced, e.g., to relate actions that were recently rewarded to the corresponding percepts.

Footnote 5: Technically, since in the standard PS model the action is coupled out whenever an action clip is hit, the probabilities of transiting from an action clip are undefined. However, we can, for simplicity, assign a unit probability of transiting to itself to each action clip. Thus, action clips are the absorbing states of the underlying Markov chain, although this will not be relevant for our work.

Footnote 6: In the last expression we have equated the representations of percepts and actions within the clip network with the percepts and actions themselves, in a slight abuse of notation. In the following, we will use s_k (a_k) to denote the percept (action) clips when the semantics of the clip matters (e.g., whether it is an action or a percept), and the generic notation c_k when it does not. Formally, there is a distinction between percepts s_j and actions a_j and their internal representations (memories), usually denoted μ(s_j) and μ(a_j), respectively.
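The deliberation of the standard PS agent described above can be sketched in a few lines of code. The following is an illustrative sketch (not the authors' implementation); the clip network, transition matrix, and action set are invented for the example, with action clips made absorbing as in footnote 5.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# 4-clip network: clips 0, 1 are percepts, clips 2, 3 are actions.
# P[i, j] = probability of transiting from clip j to clip i (columns sum to 1).
P = np.array([[0.0, 0.1, 0.0, 0.0],
              [0.2, 0.0, 0.0, 0.0],
              [0.5, 0.4, 1.0, 0.0],
              [0.3, 0.5, 0.0, 1.0]])
action_clips = {2, 3}

def deliberate(percept_clip, P, action_clips, rng):
    """Random walk in clip space until an action clip is encountered."""
    clip = percept_clip
    while clip not in action_clips:
        # one transition, sampled from the column of the current clip
        clip = rng.choice(len(P), p=P[:, clip])
    return clip  # action clips are absorbing (unit self-transition)

action = deliberate(0, P, action_clips, rng)
```

Sampling from the column of the current clip is exactly the probability-vector picture of equation (1): one transition maps c_j to the distribution P c_j, which is then measured.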

Standard PS with trapped ions
We shall now discuss how the random walk initiated in a standard PS agent can be emulated in a quantum system, in particular, using laser pulses on a string of trapped ions. Although a quantum implementation is not strictly required for the classical random walk of the standard PS agent, such a construction is the prerequisite for the fully quantized RPS agent that we discuss in section 4. For the construction of a quantum mechanical analogue of the transition matrix P, we start by promoting the real vector space to a complex Hilbert space, and representing the clips c_i as orthonormal basis states |c_i>. We then construct a unitary U_i such that, for a fixed basis state denoted |0> (this may correspond to some clip state |c_l>, but the particular choice of this fixed state is unimportant), the components of the state U_i|0> with respect to the clip basis encode the transition amplitudes dictated by the transition matrix P, i.e.,

U_i|0> = Σ_j √(p_ji) |c_j>. (2)
We can see that a measurement of the state above in the clip basis recovers the right-hand side of the classical equation (1). However, a single unitary cannot encode all the transitions of P. This can be seen quite simply by noting that the columns of the matrix representation of U_i are required to be orthogonal, while the columns of P may even be identical. In general, one therefore requires N distinct unitaries U_i to represent all transitions of P on an N-dimensional Hilbert space. In other words, the first column of U_i, corresponding to the basis state |0>, determines the transition probabilities from the clip c_i to any other clip in the sense of equation (2). Equation (1) could be recovered even if the amplitudes in equation (2) had arbitrary relative complex phases. These phases are irrelevant in the context of the classical agent, but for the purpose of the extension to the quantum RPS we restrict the entries of the first column of U_i to be real and positive.
Note that, given the set of unitaries {U_i}, i = 1, ..., N, each corresponding to a column of an N-state transition matrix P, one can emulate any classical random walk by iterating the following procedure: measure the quantum register (in the clip basis), reset the register to the state |0>, and apply the U_i corresponding to the prior measurement result. The capacity to generate such unitaries will, in the next section, be used as a primitive to construct coherent quantum walks. Here we first analyze how such unitaries can be realized in an ionic set-up.
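Numerically, a unitary with a prescribed real, positive first column √(p_ji) can be completed in many ways; the sketch below (an assumed construction, not the paper's) uses a Householder reflection, which is sufficient for the classical-walk emulation since only the first column is ever used.

```python
import numpy as np

def probability_unitary(p):
    """Real orthogonal matrix whose first column is sqrt(p).

    Completion via the Householder reflection exchanging e_0 and sqrt(p);
    any other unitary completion would serve equally well here.
    """
    p = np.asarray(p, dtype=float)
    v = np.sqrt(p)                        # real, positive amplitudes
    e0 = np.zeros_like(v)
    e0[0] = 1.0
    w = v - e0
    if np.allclose(w, 0.0):
        return np.eye(len(p))
    w /= np.linalg.norm(w)
    return np.eye(len(p)) - 2.0 * np.outer(w, w)  # maps |0> to sqrt(p)

p = np.array([0.1, 0.2, 0.3, 0.4])        # one column of P, illustrative
U = probability_unitary(p)
```

Measuring U|0> in the clip basis then yields clip c_j with probability p_j, reproducing one classical step in the sense of equation (2).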
To proceed, we wish to encode the clip basis in the internal states of a chain of trapped ions, and the unitaries U_i in the laser pulses driving the transitions between them. We will consider a setup as described, e.g., in [16, 17]. A string of 40Ca+ ions is confined in a quadrupole trap (Paul trap). The ion confinement can be described by harmonic potentials, and the Coulomb repulsion of the ions couples the harmonic oscillators, such that the motion of the ions can be captured in terms of their collective normal modes. For each ion, two Zeeman sublevels, for instance |g> := |S_1/2, -1/2> and |e> := |D_5/2, -1/2>, which can be coupled by a quadrupole transition, are used to represent the computational basis states of a single qubit. In turn, we employ the state space of k qubits as a representation of the clip network. Hence, the PS implementation we propose requires k = ⌈log2(N)⌉ ions for a network of N clips. The required unitaries can be realized with two laser beams [16, 17], one of which is a broad beam, nearly collinear with the ion chain, such that all ions are illuminated. The second laser beam can be focused to address each ion individually. When operated resonantly at the frequency ω corresponding to the transition |g> ↔ |e>, the first laser realizes the collective gate

U_X(θ) = exp(-i (θ/2) Σ_i X_i), (3)

where we use the shorthand notation X_i for the Pauli X operator of the ith qubit. The second laser, on the other hand, is applied off-resonance to provide the single-qubit gate

U_Z^i(θ) = exp(-i (θ/2) Z_i). (4)

The operations of equations (3) and (4) can further be complemented with an entangling gate, such as the Cirac-Zoller [18] or Mølmer-Sørensen [19] gate, to form a universal set of quantum gates, and hence provide the possibility to construct the unitaries U_i in principle. In general, the aim is to determine a sequence of such pulses, parameterized by angles θ_j; the freedom in the choice of parameters allows for all of the operators U_i to be represented by some specific choices of the θ_j.
In particular, the agent is considered to operate based on a fixed internal architecture, so the tuning of the angles should have a simple operational meaning. At every step of the learning process, the agent only updates a set of parameters, here the θ_i, corresponding to the durations of some laser pulses within a fixed sequence.

Central to our construction is a mapping that takes a set of M unitaries {U_j}, j = 1, ..., M, each acting on a Hilbert space H, to a single controlled unitary U of the form

U = Σ_j |j><j| ⊗ U_j,

which acts on H_C ⊗ H, where H_C is an (at least) M-dimensional Hilbert space, and {|j>} is an orthonormal basis of H_C. Practically, this mapping may be understood as a physical procedure of adding quantum control to individual elementary operations [20]. We refer to such mappings and the associated physical processes, which implicitly feature in many quantum algorithms [3, 21], as coherent controlization. As we will discuss in section 4, coherent controlization forms an essential part of the construction of the quantum RPS agent.
As a first instance of its applicability, coherent controlization provides an elegant method to generically assemble and combine probability unitaries. The latter may also be assembled in other, sometimes more efficient ways, and one alternative construction is provided in the appendix. Nonetheless, the construction of the probability unitaries using coherent controlization offers the opportunity to illustrate this method on a simple and useful example.
Before we begin, let us recall the task at hand. For a given probability distribution {p_ij}_i, corresponding to the jth column of the stochastic matrix P, we wish to construct the associated unitary U_j, such that the first column of U_j has the real and positive entries √(p_ij), with i = 1, ..., N.
As the elementary operations that depend on these parameters, we select single-qubit Y rotations U_Y(θ_i), which, for a trapped ion setup, may be realized as in equation (6); in the following we drop the label Y for ease of notation. Any probability unitary U(θ_1, ..., θ_{N-1}) on an N-clip network can then be assembled by a nested scheme of coherent controlization on k qubits, where k is the smallest integer such that N ≤ 2^k. For simplicity, let us assume here that the size of the clip network is such that N = 2^k, i.e., k = log2(N), which can always be achieved by duplicating some clips.
For a two-clip probability distribution {p_1, p_2}, the probability unitary is trivially realized by a single-qubit Y rotation U(θ_1), with cos²(θ_1/2) = p_1. For four clips (k = 2), a rotation U(θ_1) of the first qubit, now with cos²(θ_1/2) = p_1 + p_2, is followed by two controlled Y rotations of the second qubit, conditioned on the state of the first: U(θ_2) is applied when the first qubit is in the state |0>, and U(θ_3) when the first qubit is in the state |1>. The corresponding angles are determined from the renormalized probabilities within the respective subspaces, i.e., cos²(θ_2/2) = p_1/(p_1 + p_2) and cos²(θ_3/2) = p_3/(p_3 + p_4). For larger values of k, the controlization becomes nested, see figure 3; e.g., for k = 3 (N = 8), the lowest level of single-qubit operations, here U(θ_2) and U(θ_3), is followed by controlled operations on a third qubit. Labeling the qubits as I, II, and III, the corresponding probability unitary can be written as a nested sequence of controlled rotations (equation (9a)), in which the controlled two-qubit operations are themselves built as above. As we have argued, coherent controlization allows for the construction of general probability unitaries from basic single-qubit probability unitaries. Despite the simple appearance of the circuits in figure 3, the practical implementation of coherent controlization requires additional attention. In fact, it is generally impossible to decompose a quantum-controlled operation ctrl(U) into individual gates, ctrl(U) = G_1 U G_2, such that the G_i are independent of U, which implies that the gates G_i may not be specified if U is unknown [22, 23]. This seems to suggest that coherent controlization requires computational effort in its implementation. However, for the ionic implementation that we will discuss next, we exploit additional degrees of freedom of the physical setup to perform coherent controlization in a generic way.
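The nested scheme can be checked numerically. The following sketch (an illustrative reconstruction under the stated conventions, not the authors' code) builds the full matrix recursively, fixing each Y-rotation angle from the renormalized probabilities of its subspace, and verifies that the first column carries the amplitudes √(p_i).

```python
import numpy as np

def RY(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def nested_probability_unitary(p):
    """Probability unitary on log2(len(p)) qubits via nested controlization."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    if n == 1:
        return np.array([[1.0]])
    p_left = p[: n // 2].sum()
    theta = 2 * np.arccos(np.sqrt(p_left))      # cos^2(theta/2) = p_left
    top = np.kron(RY(theta), np.eye(n // 2))    # rotation of the first qubit
    # controlled blocks act on the renormalized halves (uniform if weight 0)
    left = p[: n // 2] / p_left if p_left > 0 else np.full(n // 2, 1 / (n // 2))
    right = p[n // 2:] / (1 - p_left) if p_left < 1 else np.full(n // 2, 1 / (n // 2))
    U0 = nested_probability_unitary(left)
    U1 = nested_probability_unitary(right)
    ctrl = np.block([[U0, np.zeros((n // 2, n // 2))],
                     [np.zeros((n // 2, n // 2)), U1]])
    return ctrl @ top

p = np.array([0.1, 0.2, 0.3, 0.15, 0.05, 0.05, 0.1, 0.05])  # k = 3 qubits
U = nested_probability_unitary(p)
```

The recursion mirrors figure 3: the top-level rotation splits the probability weight between the two halves of the register, and each controlled block repeats the construction within its subspace.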

Coherent controlization in trapped ions
We shall now discuss how quantum control can be practically added to unitaries that are realized by laser pulses in a trapped ion setup, based on the scheme introduced in [20]. As an example, we give the explicit pulse decomposition that realizes the two-qubit unitary U(θ_1, θ_2, θ_3), which can be viewed as a special case of equation (9a) in which the remaining angles are set to zero, using two ions labeled I and II, before we explain how this method is generalized to the control of k-qubit unitaries.
To start, we note that the operation [U(θ_1)]_I ⊗ 1_II can be trivially implemented by the pulse sequence of equation (6), and we can thus focus our attention on the remaining, controlled term. Apart from the laser pulses for the elementary operations U(θ_2) and U(θ_3), our scheme for their coherent controlization also consists of a number of additional Y rotations in two-dimensional subspaces of the ionic energy levels other than the one spanned by |g> and |e>, see figure 4. We will use additional superscripts, e.g., U_Y^{#,i}, where the label '#' identifies the detuning frequency, and the subscript i ∈ {I, II} identifies the ion, to distinguish these operations. Furthermore, we make use of one of the common vibrational modes, which we assume has been cooled to the ground state |0>_v, before the following steps are executed.
(i) Cirac-Zoller method [18, 24]: a sequence of appropriately blue-detuned laser pulses is applied to ion I, encoding the state of qubit I in the vibrational mode, i.e., mapping an initial state of the form (α|g>_I + β|e>_I) ⊗ |0>_v ⊗ |ψ>_II to a state in which the amplitudes α and β are carried by the phonon number states of the vibrational mode.

(ii) Hiding: red-detuned laser pulses transfer the population of the levels |g>_II and |e>_II to the auxiliary levels |g'>_II and |e'>_II, conditioned on the state of the vibrational mode, as illustrated in figure 4. Denoting the state |ψ> encoded in the levels |g'>_II and |e'>_II as |ψ'>_II, one branch of the superposition is thus hidden from subsequent pulses on ion II.

(iii) Applying U(θ_3): the pulse sequence that realizes U(θ_3) is applied to ion II, affecting only the unhidden branch.

(iv) Switching: to exchange the primed and unprimed levels, laser pulses which are blue- and red-detuned, respectively, are applied to ion II, see figure 4.

(v) Applying U(θ_2): the pulse sequence that realizes U(θ_2) is applied to ion II, now affecting only the previously hidden branch.

(vi) Switching: the primed and unprimed levels are exchanged again, using the same laser pulses as in step (iv).

(vii) Unhiding: the hiding operations of step (ii) are reversed.

(viii) Return control: finally, U_Y^{CZ}(-π) is applied to ion I, which returns the control from the vibrational mode, and provides the desired state α|g>_I ⊗ U(θ_2)|ψ>_II + β|e>_I ⊗ U(θ_3)|ψ>_II, that is, the unitary U(θ_2) acts on ion II when ion I is in the state |g>_I, while U(θ_3) acts upon the subspace in which the first ion is in the state |e>_I.
If required, the scheme laid out in steps (i)-(viii) may be straightforwardly extended to larger clip spaces by increasing the number of control qubits and vibrational modes used. Each Y rotation in principle requires three individual pulses, see equation (6), but the collective X rotations for the operations U(θ_i) can be subsumed into two single pulses, U_X(π/2) and U_X(-π/2), at the start and at the end of the entire pulse sequence, respectively. We hence find that the overall number of elementary laser pulses necessary to assemble a k-qubit probability unitary grows exponentially with k. Note that an exponential scaling in terms of the number of qubits used is inevitable, as k qubits encode 2^k probabilities, and we must have the freedom to specify each one of these. In terms of the state space of the ECM network (the clip number N), the scaling is linear.
In such a process, (k - 1) vibrational modes of different frequencies are used to generalize steps (i) and (vii), i.e., to condition (k - 1)-qubit operations on the state of the first qubit by transferring the populations accordingly. Next, we give the basics of the classical and quantum RPS agent models, and show how the two components, coherent controlization and probability unitaries, can be utilized to construct these in systems of trapped ions.

Reflecting PS with trapped ions
We now turn to the so-called RPS agent introduced in [11]. The central aim of the RPS is to output actions according to a specific distribution, which we shall specify shortly, and which is updated, indirectly, as the ECM network is modified throughout the learning process. Here, the clip network C is disjoint: it comprises unconnected percept-specific subnetworks with associated stochastic (ergodic and time-reversible) matrices P_k. Depending on which percept is observed, the random walk is executed on the corresponding percept-specific (sub-)network, where it is continued until the Markov chain P_k is (approximately) mixed, that is, until the respective stationary distribution π_{P_k}, which has support over the entire clip space, is (approximately) reached. The agent then samples from the obtained distribution, and iterates the procedure (which requires remixing of the Markov chain) until an action is hit. More specifically, the RPS agent is designed to output (a good approximation of) the tailed distribution, that is, the stationary distribution π_{P_k} truncated such that it has support only over the action space, and re-normalized.
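The target of the RPS deliberation can be written down directly. The sketch below (matrix and action set invented for illustration) computes the stationary distribution of an ergodic column-stochastic P and its tailed, action-restricted version.

```python
import numpy as np

# illustrative ergodic column-stochastic chain over three clips
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.3],
              [0.2, 0.3, 0.4]])
actions = [2]                    # clip 2 is the only action clip

# stationary distribution: the eigenvector of P with eigenvalue 1
vals, vecs = np.linalg.eig(P)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# tailed distribution: truncate to the action space and re-normalize
tail = np.zeros_like(pi)
tail[actions] = pi[actions]
tail /= tail.sum()
```

Sampling from `tail` is what the classical RPS achieves by repeatedly mixing the chain and sampling until an action is hit.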
Despite the differences in the walk termination criteria of the standard PS and RPS models, all the operational elements required for an emulation of a classical RPS agent in an ionic set-up have already been presented in the last section, as the previously described construction enables the emulation of any classical random walk.
In the remainder of this section, we aim to show how the quantum RPS agent, which employs a truly coherent quantum walk (in the sense of [13,14]) to obtain a quadratic speed-up over the classical RPS agent, can be implemented based on the coherent controlization of unitaries as discussed in section 3.3. For notational simplicity, we will from this point on ignore the subscript k indicating the percept the network in question corresponds to, unless it is specifically required.
The central process of the quantum RPS model, the basics of which we present next, is a so-called Szegedy-type quantum random walk, see, e.g., [14], that is performed on the percept-specific ECM (sub-)network. These Szegedy-type quantum random walks are used in the quantum RPS agent in order to output an action distributed according to the tailed stationary distribution with a quadratically reduced number of elementary diffusion steps, as compared to a classical RPS agent.
As the structure of this decision-making process is rather involved, let us briefly sketch it here, before proceeding in more detail. The basic building block of a Szegedy-type walk is the elementary diffusion unitary U_P, which acts on a two-register system, each register of sufficient dimensionality to represent the entire clip network. One application of U_P can be considered the analog of one step of the classical walk governed by the transition matrix P. The Szegedy walk operator W(P), on the other hand, is constructed using four applications of U_P (or its inverse), and some quantum operations which are independent of P. One of the distinct properties of the operator W(P) is that its unique (+1)-eigenstate |π'_P> is a particular coherent encoding of the stationary distribution π_P of the Markov chain. Exploiting this property, and using a modified Kitaev phase estimation algorithm [21], we can construct an approximate reflection operator (ARO), which reflects over the state |π'_P>. The speed-up achieved in the quantum RPS originates, in part, from the efficiency of the construction of the ARO in terms of the number of applications of the diffusion unitary U_P, relative to the mixing time of the Markov chain as specified by P.
The ARO can then be used in search algorithms (e.g., as in [13, 14]), as well as in the decision-making process of the RPS agent, which can be seen as a Grover-type [4] reflection process in the following sense. Upon the system, initialized in the state |π'_P>, one sequentially applies a 'check' operator, which adds a relative phase of (-1) to all basis states corresponding to actions, followed by the ARO, which reflects over the coherent encoding of the stationary distribution. This, as in the Grover algorithm, induces a sequence of rotations in a two-dimensional workspace, which, after a certain number of iterations, guarantees that the system state has a constant overlap with the state encoding the aforementioned tailed distribution. The second component of the quantum speed-up lies in the number of these iterations, which inherits the quadratic improvement that is characteristic of Grover's algorithm. With this in mind, let us now give further details of the building blocks of the quantum RPS.

The Szegedy walk operator
As we have argued previously, a unitary on an N-dimensional Hilbert space is not capable of representing all transitions of an arbitrary Markov chain over a network of N clips. For this reason, the classical random walk for a given transition matrix P that we have described in section 3.1 is realized by, in general, N unitaries U_1, ..., U_N, where U_i is associated with the ith column of P. In the Szegedy-type approach to quantum walks, two copies, H_I and H_II, of the N-dimensional Hilbert space are used to accommodate all the required degrees of freedom. For a time-reversible Markov chain we define the unitary walk operators U_P and V_P via

U_P (|c_i>_I ⊗ |0>_II) = |c_i>_I ⊗ Σ_j √(p_ji) |c_j>_II, V_P = SWAP · U_P · SWAP,

where SWAP exchanges the two registers. In the context of quantum RPS agents, we assume that the underlying ergodic Markov chain is time-reversible, i.e., that it satisfies detailed balance. Although the Szegedy-type walk can be defined even if this is not the case, one would additionally require access to the time-reversed transition matrix P* in such a situation. Here, we present the construction in the most general terms, with the implicit understanding that for the RPS, the unitary V_P can be obtained from U_P by swapping the registers prior to, and after, the application of U_P. With the operators U_P and V_P at hand, we can now proceed with the construction of the Szegedy walk operator W(P), which is implemented by reflecting over the spaces A and B, defined as

A = span{U_P (|c_i>_I ⊗ |0>_II)}, B = span{V_P (|0>_I ⊗ |c_j>_II)}.

The generalized walk operator is then defined as

W(P) = ref(B) ref(A),

where, for X = A, B, ref(X) = 2Π_X - 1 denotes the reflection over the respective subspace, with Π_X the corresponding projector. The two operators ref(A) and ref(B) are constructed from the diffusion operators U_P and V_P, along with reflections over |0>_I and |0>_II, denoted D_{0,I} and D_{0,II}, respectively, as shown in figure 5. The unique (+1)-eigenstate |π'_P> of the Szegedy walk operator W(P), which coherently encodes the stationary distribution π_P on the two registers, is given by

|π'_P> = Σ_i √(π_P(i)) U_P (|c_i>_I ⊗ |0>_II) = Σ_{i,j} √(π_P(i) p_ji) |c_i>_I |c_j>_II.
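The whole construction fits in a short numerical sketch. The conventions below (block-diagonal U_P, Householder completion of each U_i, reflections built from D_0) are a reconstruction consistent with the text, not the authors' code; the chain is chosen symmetric, hence time-reversible with uniform stationary distribution, and we verify that the coherent encoding of π_P is a (+1)-eigenstate of W(P).

```python
import numpy as np

N = 3
P = np.array([[0.5, 0.3, 0.2],     # symmetric => detailed balance holds
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

def first_column_unitary(v):
    """Real orthogonal completion: Householder mapping e_0 to v."""
    e0 = np.zeros_like(v)
    e0[0] = 1.0
    w = v - e0
    if np.allclose(w, 0.0):
        return np.eye(len(v))
    w /= np.linalg.norm(w)
    return np.eye(len(v)) - 2.0 * np.outer(w, w)

# U_P = sum_i |c_i><c_i| (x) U_i, with U_i|0> = sum_j sqrt(p_ji)|c_j>
U_P = np.zeros((N * N, N * N))
for i in range(N):
    U_P[i * N:(i + 1) * N, i * N:(i + 1) * N] = first_column_unitary(np.sqrt(P[:, i]))

SWAP = np.zeros((N * N, N * N))
for i in range(N):
    for j in range(N):
        SWAP[j * N + i, i * N + j] = 1.0
V_P = SWAP @ U_P @ SWAP

D0 = 2.0 * np.outer(np.eye(N)[:, 0], np.eye(N)[:, 0]) - np.eye(N)
refA = U_P @ np.kron(np.eye(N), D0) @ U_P.T     # reflection over A
refB = V_P @ np.kron(D0, np.eye(N)) @ V_P.T     # reflection over B
W = refB @ refA                                  # Szegedy walk operator

# stationary distribution and its coherent two-register encoding
vals, vecs = np.linalg.eig(P)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
pi_prime = U_P @ np.kron(np.sqrt(pi), np.eye(N)[:, 0])
```

For this reversible chain, `pi_prime` lies in both A and B, so both reflections (and hence W) leave it invariant.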

The ARO
The next step in the design of a quantum RPS agent is the construction of the ARO from the walk operator W(P). The ARO is designed to approximate the (ideal) reflection operator

ref(|π'_P>) = 2|π'_P><π'_P| - 1_{I,II}. (17)
With the generalized walk operator W(P) at hand, an approximate reflection over |π'_P> is obtained [14] by implementing the phase detection operator PD(W), a modification of Kitaev's [21] phase estimation algorithm, shown in figure 6. For this task, we add (n + 1) ancilla qubits, where n scales as log(1/δ), and δ = 1 - |λ_2| is the spectral gap of the Markov chain, i.e., λ_2 is the second largest eigenvalue of P. We employ PD(W) and its inverse operation, with an intermediate reflection over the ancilla state |00...0>_Aux. This combination of operations approximates the reflection over |π'_P> from equation (17). An analysis of the fidelity of the reflection, as a function of n, is given in [14]. The crucial feature of this construction is that the ARO operates based on a number of calls to W(P) that scales as Õ(1/√δ), while the number of calls to P needed to prepare the stationary distribution for the classical RPS scales as Õ(1/δ).
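The resource parameter entering this construction is easy to evaluate for a concrete chain. The sketch below (illustrative matrix; the ancilla count is shown only up to the constants hidden in the Õ notation) computes the spectral gap δ and the corresponding ancilla scaling.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],    # reversible (symmetric) chain, illustrative
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# spectral gap delta = 1 - |lambda_2|, with lambda_2 the second largest
# eigenvalue of P in absolute value
eigs = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
delta = 1.0 - eigs[1]

# number of ancillas n ~ log(1/delta), up to constants
n = int(np.ceil(np.log2(1.0 / delta)))
```

For this chain the quadratic separation is between Õ(1/√δ) calls to W(P) for the ARO and Õ(1/δ) classical steps to mix the chain.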

Quantum deliberation
To output a distribution of actions that corresponds to the tail of the stationary distribution, with support only over the (flagged) actions, the agent performs a quantum deliberation process with elements reminiscent of Grover-like steps [4, 14]. In the preparation phase, the agent first initializes the joint system of registers I and II in the state |π'_P> from equation (16). While the preparation of this initial state may be involved in general, in certain cases, including the one presented in the appendix, it becomes straightforward. Subsequently, the agent alternatingly applies the following two operations:

(i) Reflection over the actions, i.e., the 'check' operator that adds a relative phase of (-1) to all basis states corresponding to actions a ∈ A, where A denotes the set of (flagged) actions.
(ii) Approximate reflection over the state |π'_P>. The sequence of operations above will, similarly to Grover's algorithm, increase the amplitude of the action components with respect to the non-action components in the state of the system, while maintaining the relative weights of the action elements. This ensures that the actions are output according to the correct distribution, as explained in [11].
After iterating these steps a number of times that is determined by the relative probability ε = Σ_{i∈A} π_P(i) of the actions within the stationary distribution, the agent samples, that is, measures in the clip basis of register I. If a desired action is found, it is coupled out; otherwise the procedure is repeated [11]. The average number of iterations of the Grover-like steps (i) and (ii) scales as Õ(1/√ε), while the classical RPS agent requires Õ(1/ε) iterations on average.
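The amplitude amplification at the heart of these steps can be illustrated in an idealized, single-register sketch (an assumed simplification: the exact reflection over the coherent encoding stands in for the ARO, and the stationary distribution is invented for the example).

```python
import numpy as np

pi = np.array([0.02, 0.9, 0.03, 0.05])   # stationary distribution, illustrative
actions = [0, 2, 3]                       # flagged action clips
eps = pi[actions].sum()                   # relative action probability

state = np.sqrt(pi)                       # coherent encoding |pi>
ref_pi = 2.0 * np.outer(np.sqrt(pi), np.sqrt(pi)) - np.eye(len(pi))
check = np.eye(len(pi))
check[actions, actions] = -1.0            # (-1) phase on action states

# Grover-like iteration count ~ pi/(4 arcsin(sqrt(eps)))
n_iter = int(np.floor(np.pi / (4.0 * np.arcsin(np.sqrt(eps)))))
for _ in range(max(n_iter, 1)):
    state = ref_pi @ (check @ state)

p_action = (state[actions] ** 2).sum()    # amplified action probability
```

The rotation stays in the two-dimensional span of the action and non-action components, so the relative weights among the actions are preserved, as required for sampling the tailed distribution.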

Reflecting PS implementation for trapped ions
Finally, let us examine the possibility of implementing the decision-making process of a quantum RPS agent in an ion trap. As we have explained, two operators are required: the reflection over the (flagged) actions, and the ARO. The former can be generically achieved, for instance, by applying the detuned pulses corresponding to U(2π) of the coherent controlization step (iv) specifically to those basis states corresponding to (flagged) actions, flipping their sign. The latter, the ARO, is implemented starting from the probability unitaries, by coherent controlization, in conjunction with a few fixed operations: D_{0,I}, D_{0,II}, D_{0,Aux} and H.
Let us briefly describe the individual steps of this procedure. By coherently conditioning the probability unitaries U_i, the operation U_P is obtained, from which the pulse sequence for V_P follows by swapping the registers, which, in practice, corresponds to an exchange of the qubit/ion labels in the pulse sequence for U_P. The associated inverse operators follow immediately by setting θ → -θ. The Hadamard gate H can be implemented up to a phase of (-i), that is, for the jth ion there is a short pulse sequence, with U_X as in equation (3) and U_Z^j given by equation (4), realizing -iH. The superfluous phase (-i) cancels naturally, since the Hadamard gate is used four times for every ancilla in the ARO, twice each for the realization of PD(W) and its inverse, see figure 6. Finally, we make use of coherent controlization once more to construct the phase detection operator PD(W) and its inverse from the walk operator W(P). The possibility of adding control to arbitrary (unknown) unitaries hence provides a modular structure that allows, in principle, for the generic implementation of all operations required for the decision-making of a quantum RPS agent. The modular use of coherent controlization (CC) in the design of the agent can thus be summarized by the following sequence:

U_Y(θ) → CC → U_i → CC → U_P, V_P → W(P) → CC → ARO.
That is, starting from single-qubit Y rotations, parameterized according to the stochastic matrix P, we construct the probability unitaries using coherent controlization. From the probability unitaries we then construct, again by coherent controlization, U_P and V_P, which are used to assemble W(P). Finally, from W(P) we construct the ARO that is central to the quantum deliberation steps, once again employing coherent controlization. As we have argued, all individual operations of the quantum RPS are implementable with current technology. While large network sizes, as well as small values of ε or δ, pose challenges for state-of-the-art ionic implementations of the generic RPS decision-making process, these technological restrictions may be overcome by the continuing development of scalable ion trap arrays. Nonetheless, special cases of the general scheme we have laid out here are well within reach of experimental testing. In the appendix, we present such an example for a quantum RPS agent based on an ECM using two qubits, and we give an explicit pulse decomposition of its entire decision-making process, including an error analysis.
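The claim above that H can be realized up to a phase of (-i) from the X and Z rotations of equations (3) and (4) is easy to verify numerically. The specific sequence below is a standard decomposition with exactly this property; it is a reconstruction for illustration, not necessarily the authors' exact pulse sequence.

```python
import numpy as np

def Rx(t):
    """X rotation, exp(-i t X / 2), cf. equation (3) for a single ion."""
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * X

def Rz(t):
    """Z rotation, exp(-i t Z / 2), cf. equation (4)."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Rx(pi/2) Rz(pi/2) Rx(pi/2) = -i H  (Hadamard up to a phase of (-i))
seq = Rx(np.pi / 2) @ Rz(np.pi / 2) @ Rx(np.pi / 2)
```

Since the phase appears an even number of times per ancilla in the ARO, it indeed drops out, as stated in the text.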

Conclusions
We have presented a modular architecture for the implementation of the deliberation process of PS agents in systems of trapped ions. We have shown, first, how the probability unitaries required for the emulation of classical random walks can be generically constructed using coherent controlization, and, second, how this process allows for the implementation of a quantum RPS agent based on these probability unitaries. A key benefit of this modularity is that any change of the probabilities during the learning process can be dealt with at the level of the implementation of the probability unitaries, while the rest of the construction remains unaltered. The generic construction relies only on elementary single-qubit Y rotations and coherent controlization, which allows for straightforward assembly, as well as straightforward updating, of the probability unitaries.
This is an important advantage, if not a prerequisite, for the realization of a learning agent that is continuously adjusting the probabilities underlying its deliberation process. Having to re-compute the entire sequence of gates which need to be applied to realize the quantum RPS agent for any change of the underlying Markov chain would impose a large computational overhead on the agent, and significantly diminish the advantage in speed that is provided by quantizing the RPS agent.
In addition to the general modular architecture, we have provided numerical simulations of an implementation of simple RPS agents using trapped ions. As our investigation shows, proof-of-principle realizations of these agents are simple enough to be implementable in current experimental setups, while they are sufficiently involved to demonstrate the quadratic speed-up.

Appendix. Rank-one reflecting PS in ion traps
Here, we provide an example for a quantum RPS agent sophisticated enough for the demonstration of a quantum speed-up, whilst being sufficiently simple to allow an immediate implementation in readily available ion trap setups, e.g., as described in [16,17]. The appendix is structured as follows. In section A.1 we first discuss the simplified decision-making process for a quantum RPS agent whose underlying ECM network corresponds to a rank-one Markov chain. To provide context, the role of these simple agents is then illustrated for the invasion game in section A.2. In section A.3, we propose an ion trap implementation of the rank-one quantum RPS agent, for which we supply the explicit overall pulse sequence. We accompany our proposal with an appropriate error model, and corresponding numerical simulations, which are given in the final section A.4.

A.1. Rank-one reflecting PS
A special case of the RPS agents that we have considered in section 4 is obtained by considering the reflective analog of so-called 'two-layered' PS agents, where all transitions are one-step transitions from percepts to actions [11]. Such agents have a very simple structure, yet were shown to be capable of learning to solve non-trivial environmental tasks [15,25]. In the RPS analog of two-layered PS agents [11], the associated Markov chains of each percept-specific clip network are rank-one throughout the entire learning process of the agent. The columns of P are then all identical and equal to the stationary distribution. The spectral gap is given by δ = 1, and the Markov chain mixes in one step. Let us consider the consequences, which amount to radical simplifications, for the construction of the RPS agent.
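The stated properties of rank-one Markov chains are easy to verify numerically; the following snippet (our own illustration) checks one-step mixing and the spectral gap δ = 1 for a small example:

```python
import numpy as np

# Rank-one stochastic matrix: every column equals the stationary distribution pi.
pi = np.array([0.5, 0.3, 0.2])
P = np.tile(pi[:, None], (1, 3))

# The chain mixes in a single step: one application maps any distribution to pi.
q = np.array([1.0, 0.0, 0.0])
print(np.allclose(P @ q, pi))                   # True

# Spectral gap delta = 1: eigenvalue 1 (eigenvector pi), all others are 0.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print(np.allclose(eigvals, [1.0, 0.0, 0.0]))    # True
```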
In the rank-one case, the probability unitaries U_i for a fixed P are all the same, so we may remove the subscript and write only U, keeping in mind the distinction between U and U_P. Moreover, coherent controlization is no longer necessary for the construction of U_P, since U is applied regardless of the state of the control register. As can easily be seen, the reflections ref(A) and ref(B) shown in figure 5 then commute, acting locally on registers II and I, respectively, see figure A1. Similarly, the coherent encoding of the stationary distribution is now given by the product state |π′_P⟩_{I,II} = |π_P⟩_I |π_P⟩_II. When assembling the phase detection operator W_PD and the ARO, see figure 6, the spectral gap δ = 1 implies that (at most) one ancilla qubit is required. Now, note that the walk operator W(P) for rank-one matrices P, as shown in figure A1(a), is Hermitian, and thus the entire circuit shown in figure A1(b) reduces to a single application of the Szegedy walk operator W(P). An exact reflection over |π_P⟩ can hence be performed by applying W(P) = U D_0 U† to either of the registers, see figure A1(c). Without loss of generality we select register I, whose subscript we drop from now on, to perform all the Grover-like steps that output actions according to the tailed stationary distribution. This entails the following steps.

Figure A1. Rank-one reflection operator. For rank-one Markov chains, U_P and V_P are local operations on registers II and I, respectively. The Szegedy walk operator W(P) shown in (a) hence factorizes into two independent applications of U D_0 U†. Since the walk operator further becomes Hermitian, W = W†, the single remaining ancilla is also redundant, the approximate reflection circuit shown in (b) reduces to one application of W(P) as shown in (c), and the reflection becomes exact.
In the preparation stage, the state |π_P⟩ is initialized by one application of U to the state |0⟩. Then, the two operators of the Grover-like process, i.e., the reflection over the actions, ref(A), and the reflection over |π_P⟩, are applied a prescribed number of times determined by ϵ, the relative probability of the actions within the stationary distribution. Subsequently, the agent measures in the clip basis. If the measurement yields an action, it is coupled out; otherwise, the agent iterates this procedure.
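Under illustrative assumptions (a four-clip network in which the last two clips are the flagged actions, and a hypothetical Householder helper `prep_unitary` standing in for U), the preparation and Grover-like reflection steps can be sketched as:

```python
import numpy as np

def prep_unitary(target):
    """A real unitary with U|0> = target (Householder construction);
    stands in for the probability unitary U in this sketch."""
    e0 = np.zeros_like(target)
    e0[0] = 1.0
    v = target - e0
    if np.allclose(v, 0.0):
        return np.eye(len(target))
    v = v / np.linalg.norm(v)
    return np.eye(len(target)) - 2.0 * np.outer(v, v)

# Illustrative 4-clip network: clips 2 and 3 are the (flagged) actions.
pi = np.array([0.6, 0.3, 0.06, 0.04])
eps = pi[2] + pi[3]                     # weight of the actions within pi

U = prep_unitary(np.sqrt(pi))
D0 = np.diag([1.0, -1.0, -1.0, -1.0])   # reflection about |0>
W = U @ D0 @ U.T                        # W(P) = U D_0 U^dag = 2|pi><pi| - 1

ref_A = np.diag([1.0, 1.0, -1.0, -1.0]) # reflection over the actions

# Preparation, then a prescribed number of Grover-like reflection pairs.
k = int(np.round(np.pi / (4 * np.arcsin(np.sqrt(eps))) - 0.5))
state = np.sqrt(pi)                     # U applied to |0>
for _ in range(k):
    state = W @ (ref_A @ state)

p_flagged = float(np.sum(state[2:] ** 2))
print(k, p_flagged)                     # action weight amplified close to 1
```

The sketch confirms the exactness of the rank-one reflection: W built from a single U D_0 U† equals 2|π⟩⟨π| − 1, and O(1/√ϵ) reflection pairs amplify the action weight from ϵ to near unity.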
Before we continue with the ionic implementation of the deliberation process, let us briefly examine an example for a task-the invasion game-for which the agent may employ its capabilities of learning and decision-making.

A.2. The invasion game
As a simple example that can be solved by two-layered agents, let us discuss the invasion game, as considered in [12]. In this game, the agent is tasked with guarding a region of space from an adversary who attempts to enter the region through an array of entrances, see figure A2 . The agent's goal is to prevent the adversary from entering by blocking sites. In every round of the game, the adversary has three possible moves. It may attempt to enter at its current location, or move one door to the left, or one door to the right and attempt to enter through one of these openings. The agent is rewarded if it matches the move, thus blocking the adversary.
To emphasize the learning aspect of the game, we assume that the game starts with the adversary and the agent located at the same entrance, and that, before the adversary moves, it displays some signal indicating which way it intends to move next. Thus, the set of percepts of the agent (the defender) is {↓, ←, →}, which hint at the possible subsequent move of the attacker. The agent itself can also choose to remain where it is, move left, or move right in an attempt to block, corresponding to the three action clips. For the RPS agents discussed previously, this simple game may be represented by associating a three-clip network to each of the percepts. In what follows, we focus only on the network associated to one percept, say '↓', since everything holds for the other subnetworks as well, and we drop the corresponding subscript for ease of notation. For such two-layered settings there is a simple construction relating the probabilities of outputting a particular action to the structure of the underlying percept-specific Markov chain. In particular, the action probabilities π = (π₁, π₂, π₃) are realized by the stochastic matrix in which each column is the vector π. The learning of the agent manifests itself in the relative increase of the probabilities corresponding to rewarded actions; examples of specific update rules can be found, e.g., in [12].
In basic two-layered settings, in both the RPS and the analogous standard PS agent models, an action is coupled out after exactly one diffusion step. In order to illustrate a speed-up in such a scenario, we therefore need to consider some additional structure that increases the learning efficiency of the agent, but induces a longer deliberation time. Such a structure can be provided by percept-specific flags, which correspond to rudimentary emotion tags. Flags can be interpreted as the agent's short-term memory, indicating favored actions. In other words, an absent flag indicates that a particular choice of action, for a given percept, was not rewarded in the previous step, and should be avoided. More precisely, this structure works as follows. Initially, all the actions are flagged. Then, after an action has been coupled out, the flag is removed if the action is not rewarded. If the unflagged action is selected again after encountering the same percept in a consecutive round, the deliberation process is repeated until the deliberation results in a flagged action. In the case that the last remaining flag is removed, which indicates a definite change in the setting of the environment, all flags are reset.
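The flag mechanism can be summarized in a few lines of purely classical Python; the model and names here are our own minimal sketch for a single percept, not the agents' actual implementation:

```python
import random

# Minimal classical sketch of the flag mechanism for a single percept:
# three actions, learned probabilities pi, and boolean flags.
actions = [0, 1, 2]
pi = [0.8, 0.1, 0.1]                  # learned action probabilities
flags = {a: True for a in actions}    # initially, all actions are flagged

def deliberate():
    """Resample from pi until a flagged action is found, i.e., sample
    from the 'tailed' distribution over the flagged actions."""
    while True:
        a = random.choices(actions, weights=pi)[0]
        if flags[a]:
            return a

def update_flags(action, rewarded):
    """Unrewarded actions lose their flag; when the last flag would be
    removed, all flags are reset (the environment must have changed)."""
    if not rewarded:
        flags[action] = False
        if not any(flags.values()):
            for a in actions:
                flags[a] = True

# Example: action 0 stops being rewarded after the environment changes
# its strategy; subsequent deliberations avoid it.
update_flags(0, rewarded=False)
samples = {deliberate() for _ in range(200)}
print(samples)   # only the still-flagged actions 1 and 2 appear
```

Note that when the favored action loses its flag, the rejection loop in `deliberate` becomes expensive classically, which is exactly the regime in which the quantum deliberation offers its quadratic speed-up.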
This structure leads to great improvements in settings where the environment (e.g., the adversary in the invasion game) changes its strategy, for instance by permuting the meaning of the percepts [12]. In this case, if the network is already well-taught, the probability of outputting the correct action once the meaning of the percepts has been altered can be very low. We will be interested in precisely such a setting. Suppose the attacker pursues a

Figure A2. Invasion game. In the invasion game [12] the agent defends a region of space against an adversary that tries to enter through a series of openings. To be rewarded, the agent must prevent the adversary from entering by blocking the passages, which can be achieved if the adversary's signals, '↓', '←', and '→', indicating its next move, are interpreted correctly and the agent mirrors the adversary's moves.

A.4. Numerical simulations
For the numerical simulations that we present in this final section, we consider imprecisions in the laser pulse frequency or duration, resulting in varying angles for the laser pulses, as the primary sources of errors. We model such errors by randomly varying the angles for each pulse in the sequence according to a Gaussian distribution with standard deviation σ that is centered around the correct value.
In the simulations, we specify a pair of values π₁ > 0 and π₂ > 0, such that ϵ = π₁ + π₂ < 1, and initialize the corresponding state vector |π⟩ = U(θ₁, θ₂)|0⟩. A number m is then chosen at random, up to an upper bound m_ϵ, the corresponding Grover-like reflections are applied, and a sample is taken, which corresponds to a measurement in the clip basis. If no flagged action is found, a new number m is generated, and the procedure is iterated until a flagged action has been sampled. For every fixed set of π₁ and π₂ the process is repeated for 10⁴ runs to build up statistics, out of which N₁ (N₂) result in an output of the action clip c₁ (c₂), corresponding to π₁ (π₂). Additionally, the overall number N_U of calls to the operator U until a flagged action is observed is recorded in each run. For N_U, the expected scaling as O(1/√ϵ) is largely independent of the error parameter, as can be seen from figure A3, since this behavior is governed by the structure of the process, in particular the upper bound m_ϵ for the randomly chosen value m. The integer steps by which m_ϵ increases as 1/√ϵ increases also explain the step-like pattern visible in the data of figure A3. That is, in such a Grover-like scheme, the probability of sampling a flagged action grows monotonically with the number of iterations only up to some point, from which on additional applications of the reflections alternately decrease and increase the probability. The average number of repetitions set by the value m_ϵ, which corresponds to a fixed interval of ϵ-values, is hence not optimal for all ϵ within that interval, which can be seen from the slanting of the data points, and their standard deviations, in each of the 'steps' seen in figure A3(a). The errors partially mask this effect, as can be seen in figures A3(b) and (c).
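A simplified version of this simulation, with the Gaussian pulse-angle error model applied to a toy two-qubit probability unitary U(θ₁, θ₂), might look as follows; the cap m_ϵ = ⌈1/√ϵ⌉ and all helper names are our assumptions, not the exact choices of the original simulation:

```python
import numpy as np

def ry(theta):
    """Single-qubit Y rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def u_of(theta1, theta2):
    """Toy two-qubit probability unitary U(theta1, theta2): qubit 1
    splits actions (|1x>) from non-actions, qubit 2 splits c1 from c2."""
    return np.kron(ry(theta1), ry(theta2))

def deliberate(pi1, pi2, sigma, rng):
    """One noisy deliberation run; returns (sampled clip, calls to U).
    Clips 2 and 3 are the flagged actions c1 and c2."""
    eps = pi1 + pi2
    th1 = 2 * np.arcsin(np.sqrt(eps))
    th2 = 2 * np.arccos(np.sqrt(pi1 / eps))
    m_eps = int(np.ceil(1 / np.sqrt(eps)))     # assumed cap on the iteration number
    d0 = np.diag([1.0, -1.0, -1.0, -1.0])      # reflection about |00>
    ref_a = np.diag([1.0, 1.0, -1.0, -1.0])    # reflection over the action clips
    n_u = 0
    while True:
        m = int(rng.integers(0, m_eps + 1))    # random number of Grover-like steps
        # every pulse angle is perturbed by a Gaussian of width sigma
        u = u_of(th1 + rng.normal(0, sigma), th2 + rng.normal(0, sigma))
        state, n_u = u[:, 0], n_u + 1          # preparation U|0>
        for _ in range(m):
            u = u_of(th1 + rng.normal(0, sigma), th2 + rng.normal(0, sigma))
            state = u @ d0 @ u.T @ (ref_a @ state)   # noisy reflection over |pi>
            n_u += 2                           # one call each to U and U^dagger
        probs = state ** 2 / np.sum(state ** 2)
        clip = int(rng.choice(4, p=probs))     # measurement in the clip basis
        if clip >= 2:                          # flagged action found: couple it out
            return clip, n_u

# Demo without errors: the output follows the tailed distribution pi1:pi2 = 4:1.
demo_rng = np.random.default_rng(0)
clips = [deliberate(0.08, 0.02, 0.0, demo_rng)[0] for _ in range(300)]
print(clips.count(2), clips.count(3))
```

Averaging the returned N_U over many runs, and sweeping ϵ and σ, reproduces the kind of statistics collected for figures A3, A5, and A6.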
To illustrate the speed-up of the quantum RPS agent with respect to a classical RPS agent, we directly compare their performance in a simulation without errors, that is, for σ = 0, see figure A4. The classical rank-one RPS agent is emulated here by running the rank-one quantum RPS deliberation process described in this section for m_ϵ = 0, that is, the state U|0⟩ is prepared, and a sample is taken, such that clip c_i is obtained with probability |⟨c_i|U|0⟩|². If no flagged action is obtained, the procedure is repeated. What remains to be confirmed by the simulations is the output of flagged actions according to the tail of the stationary distribution, as predicted in [11]. We address this question in two ways. First, we evaluate the behavior of a few selected illustrative pairs of probabilities π₁ and π₂ for increasing error parameters in figure A5. As a measure for the accuracy of the output, we use the statistical distance D(π̃, Ñ), see equation (A.5), of the output from the tailed stationary distribution. In figure A6 we then compare the relative frequencies N₁/N₂ with which the two flagged actions were obtained to the corresponding ratios π₁/π₂ of the (tailed) stationary distribution, for a broad range of values π₁ and π₂, and for the three error parameters previously used in figure A3.

Figure A3. Average number of calls to U. The results of the numerical simulation for the average number of calls to the probability unitary U until an action clip is hit are shown for error parameters σ = π/100, σ = π/20, and σ = π/10, in (a), (b), and (c), respectively. Each blue dot corresponds to the average over 10⁴ runs for a fixed value ϵ = π₁ + π₂. The vertical gray lines indicate three standard deviations of the mean values (over 100 runs each) in each direction. The solid purple curves show the best fits that are linear in 1/√ϵ, while the dashed red curves show the best fits that are linear in 1/ϵ, and we have confirmed that the former fit the data better than the latter.

Figure A5. Statistical distance to tailed distribution. The statistical distance D(π̃, Ñ), see equation (A.5), of the output from the tailed stationary distribution is plotted against the width σ of the error distribution, for values ϵ = 0.05 (solid) and 0.001 (dots), and ratios π₁/π₂ = 9, 4, and 2 (top to bottom). The dashed horizontal lines indicate the statistical distance to the uniform distribution for each pair {π₁, π₂}, which is approached when the errors dominate the behavior of the agent.

Figure A6. Output according to tailed distribution. The plots in (a)-(c) show the ratios N₁/N₂ of the counts in the numerical simulations in comparison with the corresponding ratios π₁/π₂ according to the (tailed) stationary distribution, for error parameters σ = π/100, σ = π/20, and σ = π/10, respectively. The solid purple lines show the best linear fits, which should match the 45° diagonal, shown as a dashed gray line, for an ideal RPS agent. Each group of data points along a vertical line corresponds to a fixed value of π₁/π₂, but varying ϵ. The data used is in fact the same as that used for figure A3.
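Equation (A.5) is not reproduced in this excerpt; assuming the standard total-variation form of the statistical distance, it can be computed as:

```python
import numpy as np

def statistical_distance(p, q):
    """Statistical (total-variation) distance D(p, q) = (1/2) * sum_i |p_i - q_i|,
    assumed here to be the measure of equation (A.5)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * float(np.sum(np.abs(p - q)))

# Example matching the error-threshold discussion in the text: an ideal output
# (0.9, 0.1) and a degraded one (0.7, 0.3) are at a statistical distance of 20%.
print(statistical_distance([0.9, 0.1], [0.7, 0.3]))   # 0.2
```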
The data shown in figure A5 illustrate that large errors result in an output according to a uniform distribution over the flagged actions. The farther the tailed stationary distribution is from the uniform distribution, the smaller the tolerance for errors. As the stationary distribution is updated throughout the learning process, the errors will thus cause an increasingly strong deviation from the desired output distribution.
To make these statements more meaningful in terms of learning agents, let us consider a specific example. Assume that, for a fixed percept, the tailed stationary distribution is biased towards the action clip c₁, such that an ideal agent outputs this action in 90% of the cases. To reach this goal, such an agent updates the corresponding Markov chain throughout the learning process, until the associated stationary distribution satisfies π₁/π₂ = 9. We may then set an error threshold by assuming that the agent is still considered to succeed if the action c₁ is performed only 70% of the time, i.e., at a statistical distance of 20%. Brief inspection of the topmost solid curve in figure A5 reveals that for ϵ = 0.05 this threshold value corresponds roughly to the largest error, σ = π/10, that we consider in figure A3. This, in turn, suggests a maximal number of m_ϵ = 5 coherent iterations of the reflections in the Grover-like process before a measurement is performed, which translates to 64 individual laser pulses as described in section A.3.
The initial analysis presented in this appendix suggests that our proposal for the implementation of two-layered quantum RPS agents is feasible, and may be readily implemented in a laboratory as a proof-of-principle demonstration of learning agents enhanced by quantum physics.