Time-warping invariant quantum recurrent neural networks via quantum-classical adaptive gating

Adaptive gating plays a key role in temporal data processing via classical recurrent neural networks (RNNs), as it facilitates retention of past information necessary to predict the future, providing a mechanism that preserves invariance to time warping transformations. This paper builds on quantum RNNs (QRNNs), a dynamic model with quantum memory, to introduce a novel class of temporal data processing quantum models that preserve invariance to time-warping transformations of the (classical) input-output sequences. The model, referred to as time warping-invariant QRNN (TWI-QRNN), augments a QRNN with a quantum–classical adaptive gating mechanism that chooses whether to apply a parameterized unitary transformation at each time step as a function of the past samples of the input sequence via a classical recurrent model. The TWI-QRNN model class is derived from first principles, and its capacity to successfully implement time-warping transformations is experimentally demonstrated on examples with classical or quantum dynamics.


Introduction
Sequential data is prevalent in both classical and quantum systems. Indeed, tasks like natural language processing [1,2] or simulating the dynamics of quantum systems [3,4] require computing architectures that are capable of capturing temporal dependencies by retaining only the information about the past that is necessary to predict the future. This calls for adaptive gating mechanisms that can forget in a data-driven manner, so as to selectively memorize and overwrite information [5,6].
For classical processes, it was recently formally proved that gating mechanisms ensure invariance to time warping [7,8]. Consider, for example, monitoring the vitals of patients in intensive care, or the evolution of a quantum system. Time-series data resulting from these processes are not naturally discrete: they are in fact obtained by sampling a continuous signal. However, in practice, the sampling rate cannot always be controlled, and moreover, it may not always be fixed. Time warping transformations capture such dynamically changing sampling rates, as well as basic operations such as inserting a, possibly varying, number of zeros or white spaces between the elements of an input sequence. As such, time warping can model important practical aspects such as imperfect sampling due to jitter or clock drifts, or changes in time scales. It is therefore imperative to develop models that are immune to these transformations and are able to fit the corresponding "time-warped" signals.
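As an illustration at the sample level, a linear warp c(t) = at with a < 1 and 1/a an integer turns a discrete sequence into one in which every sample is repeated 1/a times. A minimal sketch (the function name and the cosine example are ours, not from the paper):

```python
import numpy as np

def warp_linear(x, a):
    """Discrete analogue of the linear time warp c(t) = a*t, a < 1 with
    1/a an integer: hold (repeat) every sample for 1/a time steps."""
    reps = round(1 / a)
    assert abs(reps - 1 / a) < 1e-12, "1/a must be an integer"
    return np.repeat(x, reps)

x = np.cos(np.arange(8))          # original ("unwarped") sequence
x_warped = warp_linear(x, 0.5)    # each sample now persists for 2 steps
```

A model that is robust to this transformation should produce an equally stretched output sequence.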
Conventional recurrent neural networks (RNNs), which do not implement gating mechanisms, are not robust to time-warping transformations, failing even in simple settings in which a few zeros are added between samples [7]. In contrast, classical RNN models with gating, such as long short-term memories (LSTMs) and gated recurrent units (GRUs), can successfully adapt to transformed inputs. This paper aims at investigating quantum models that preserve time warping-invariance for temporal data processing, targeting both classical and quantum dynamics.

Related Work
Reflecting the underlying symmetries of a given data set in the choice of the inductive bias is well known to be of fundamental importance for the success of classical machine learning. Whilst symmetries have always played an imperative role in studying physical systems and understanding the laws of physics, the study of symmetries in data has just recently gained momentum, leading to the blossoming field of geometric deep learning [8]. Symmetries formalize the invariance of objects under some set of operations. For example, the binding energy of a molecule does not change by permuting the order of the atoms, and a picture of a cat still depicts a cat regardless of the position of the cat within the image. Incorporating this prior knowledge into the learning architecture, as a geometric prior, e.g., by adopting a graph [9] or convolutional [10] neural network, has been shown to improve both trainability and generalization performance.
Recently, the quantum machine learning (QML) community has also focused on introducing geometric priors into quantum models [11,12,13]. For example, quantum graph neural networks preserve permutation symmetry, making them suitable for learning quantum tasks with a graph structure [14,15,16,17]; whilst quantum convolutional neural networks [18] preserve translation symmetry. Like their classical counterparts, symmetry-preserving quantum models have the potential not only to reduce sample complexity, but also to mitigate quantum computing-specific issues such as barren plateaus [19,20].
A quantum recurrent neural network (QRNN) architecture that can successfully learn temporal dependencies using a repeat-until-success mechanism was first proposed in [21].
A separate QRNN model was later proposed in [22]. As illustrated in Fig. 1, a QRNN applies the same parametrized quantum circuit (PQC) sequentially over time, whilst retaining historical information in unmeasured "memory" qubits. The model has been shown to be capable of learning sequential data [22], and its capacity was analyzed in [23] in comparison to counterpart classical models with the same number of memory units. In particular, it was shown in [24,25] that quantum models can describe temporal sequences with reduced memory, compared to their classical counterparts. A model with classical memory was introduced in [26] by integrating quantum models within a classical LSTM architecture. The latter model was also leveraged in [27] to define a reservoir computing solution with fixed dynamics. RNNs have also been integrated with quantum dynamical models in order to simulate non-Markovian dynamics in open systems [28].
To the best of our knowledge, quantum models for temporal data processing have not yet been studied from the perspective of symmetry preservation. In this paper, we address this knowledge gap by focusing on models with quantum memory. To this end, we take the QRNN model in [22] as a foundation due to its efficiency, although the general approach presented here could be extended to any quantum model with a recurrent structure.

Contribution
Inspired by the connection between gating and time warping unveiled in [7], in this paper we investigate the problem of preserving symmetries caused by time-warping transformations in quantum dynamic models. The main contributions are as follows.
• We introduce the class of time warping-invariant QRNNs (TWI-QRNNs), which augment QRNNs with a quantum-classical adaptive gating mechanism that chooses whether or not to apply a parameterized unitary transformation at each time step as a function of the past samples of the input sequence via a classical recurrent model.
• The TWI-QRNN model class is derived from first principles via a postulate of invariance to time warping.
• While the TWI-QRNN model class implements deterministic mappings, we also introduce a time warping-invariant stochastic model referred to as time warping-invariant stochastic QRNNs (TWI-SQRNNs).

Quantum Recurrent Neural Networks
Formally, we study the problem of mapping classical sequences x_{1:T}, comprising T samples x_t with t = 1, 2, ..., T, to corresponding classical sequences z_{1:T}, comprising samples z_t with t = 1, 2, ..., T, through a parameterized causal mapping operating over time t. The QRNN model in [22] implements a deterministic mapping between sequence x_{1:T} and sequence z_{1:T} based on a parameterized quantum circuit (PQC) for the case in which both x_t and z_t are real valued. We will specifically focus here on settings in which the target samples z_t are scalar, while the input samples x_t may be arbitrary real vectors. In the following, we review the QRNN model, as well as the corresponding training criterion. Furthermore, we point out a connection between QRNNs and dissipative quantum neural networks (QNNs) [29,30], which will be leveraged in the following section to introduce the proposed model class.

Quantum Recurrent Neural Networks
A QRNN is defined by a parameterized unitary transformation U(x, θ) operating on a register of n = n_A + n_B qubits. The unitary is a function of the real-valued input vector x and of a vector of real-valued model parameters θ. The ansatz, i.e., architecture, assumed in [22] for the unitary U(x, θ) includes an input encoding layer with single-qubit rotations dependent on input x, followed by parameterized processing layers with both single-qubit and two-qubit gates. Note, however, that we allow the dependence of the unitary U(x, θ) on x to be arbitrary. Therefore, the model also accounts for special input-encoding methods such as the repeat-until-success neural architecture from [21] (see also [31]).
As illustrated in Fig. 1, the unitary is applied at each time step by re-initializing the subregister of n_B qubits to the ground state |0⟩. After the application of the unitary to all n qubits, the subregister of n_B qubits is measured. Since the subregister of n_A qubits is not measured, it can be used to propagate information from one time step to the next. Therefore, we will refer to the n_A-qubit subregister as the memory subregister, while the subsystem of n_B qubits is referred to as the output subregister.
To elaborate, the density matrix describing the state of the register of n qubits before the first application of the unitary is given by ρ^{AB}_0 = |0⟩⟨0|_A ⊗ |0⟩⟨0|_B, where the first state is defined on the memory subregister of n_A qubits, and the second on the output subregister of n_B qubits, as indicated by the superscripts A and B. Note that the memory subregister is also initialized to |0⟩. The application of the unitary U(x_1, θ), which encodes the first input sample x_1, yields the output state density ρ^{AB}_1 = U(x_1, θ) ρ^{AB}_0 U(x_1, θ)†. The output subregister, whose reduced state is ρ^B_1 = Tr_A(ρ^{AB}_1), where Tr_X(·) indicates the partial trace with respect to subregister X ∈ {A, B}, is then measured. In the QRNN model, the measurement is repeated many times in order to obtain an estimate of the expectation value ⟨O^B⟩ of an observable O^B. After averaging over the measurement outcomes, the resulting density matrix of the memory subregister is given by ρ^A_1 = Tr_B(ρ^{AB}_1). Note that this description of the state of the memory subregister is independent of the specific observable O^B, and is sufficient to predict the expectations of downstream measurements [22]. Furthermore, we point the reader to Sec. 4, which studies a stochastic variant of the QRNN model in which a single measurement outcome is drawn at each time step.
Generalizing to any time t = 1, ..., T, the memory subregister evolves according to the update rules

ρ^{AB}_t = U(x_t, θ) (ρ^A_{t−1} ⊗ |0⟩⟨0|_B) U(x_t, θ)†, (2)
ρ^A_t = Tr_B(ρ^{AB}_t), (3)

with initialization ρ^A_0 = |0⟩⟨0|_A. Furthermore, the output at each time t is given by a function

z_t = g(⟨O^B⟩_t) (4)

of the expected value of the local observable O^B, where

⟨O^B⟩_t = Tr(O^B ρ^B_t), with ρ^B_t = Tr_A(ρ^{AB}_t). (5)

As mentioned earlier in this section and illustrated in Fig. 1, evaluating the sequence of expected values z_t requires running the circuit multiple times in order to approximately evaluate the expectations in (5) via empirical averages of the measurement outputs at all time steps t = 1, ..., T.
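For concreteness, the recursion (2)-(5) can be simulated classically for one memory qubit and one output qubit. The two-qubit unitary below (a Y rotation on the output qubit sandwiched between two CNOTs) is a toy stand-in for U(x, θ), not the ansatz of [22]:

```python
import numpy as np

def ptrace(rho_ab, keep):
    """Partial trace of a two-qubit density matrix (A = first qubit)."""
    r = rho_ab.reshape(2, 2, 2, 2)                  # indices (a, b, a', b')
    return np.einsum('abcb->ac', r) if keep == 'A' else np.einsum('abad->bd', r)

def Ry(x):
    return np.array([[np.cos(x / 2), -np.sin(x / 2)],
                     [np.sin(x / 2),  np.cos(x / 2)]])

def U(x):
    """Toy stand-in for U(x, theta): CNOT(A->B), input rotation on B, CNOT(B->A)."""
    CNOT_AB = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
    CNOT_BA = np.array([[1., 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
    return CNOT_BA @ np.kron(np.eye(2), Ry(x)) @ CNOT_AB

Z = np.diag([1.0, -1.0])                            # local observable O_B
ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])           # |0><0|

rho_A = ket0.copy()                                 # memory init, rho^A_0
z_out = []
for x_t in [0.3, 1.1, -0.7]:
    rho_AB = U(x_t) @ np.kron(rho_A, ket0) @ U(x_t).conj().T   # eq. (2)
    z_out.append(np.trace(Z @ ptrace(rho_AB, 'B')).real)       # eq. (5)
    rho_A = ptrace(rho_AB, 'A')                                # eq. (3)
```

For this toy unitary, z_1 = cos(x_1) and z_2 = cos(x_1) cos(x_2): the output at time t depends on earlier inputs through the unmeasured memory qubit.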

Training QRNNs
Given a data set of input-output example pairs (x_{1:T}, z̄_{1:T}), training of a QRNN targets the minimization of the training loss. To this end, for each example pair (x_{1:T}, z̄_{1:T}), we define the quadratic training loss [22]

ℓ(z̄_{1:T}, z_{1:T}) = Σ_{t=1}^{T} (z̄_t − z_t)², (6)

where z_t is a function of the model parameter vector θ through (4). The training loss (6) is averaged over all examples in the training set, and it can be minimized via zeroth-order optimization schemes, or via gradient-based methods that leverage the parameter shift rule [32,33].

QRNNs as Dissipative QNNs
In this subsection, we observe a useful connection between QRNNs and dissipative QNNs [29,30], a popular model for PQCs that mimics the operation of classical multi-layer perceptrons. We specifically focus on single-layer dissipative QNNs as defined in [29, Fig. 1]. A (single-layer) dissipative QNN encompasses an input subregister and a number of output subregisters. A new output subregister is introduced at each processing step t, and a parameterized unitary U_t, also referred to as a "perceptron", is applied to the input subregister and to the new output subregister. All output subregisters are finally measured. We note that dissipative QNNs are also related to the quantum collision models used in quantum mechanics for the study of open quantum systems [34].
A dissipative QNN with T = 3 steps is illustrated in Fig. 2. Note that, unlike the dissipative QNN model in [29,30], in Fig. 2 a different input data sample x_t is loaded at each step t, and the unitaries U(x_t, θ) used across steps t share the same parameter vector θ. It can be readily checked that the dissipative QNN in Fig. 2 is equivalent to the QRNN model in Fig. 1 if we identify the input subregister of the dissipative QNN with the QRNN's memory subregister and the output subregisters with the QRNN's output subregister, in the sense that both models provide statistically equivalent outputs.
The main advantage of formulating a QRNN as a dissipative QNN is that the evolution of the system and the outputs given by (2)-(5) can be expressed more directly, without explicitly resorting to partial trace operations and intermediate measurements at each time step t. To this end, define a system, as exemplified in Fig. 2, with n_T = n_A + T n_B qubits, corresponding to one memory subregister of n_A qubits and T output subregisters of n_B qubits. The output subregisters are indexed as t = 1, 2, ..., T as in Fig. 2. Initializing the density state of such a system to the ground state ρ_0 = |0⟩⟨0|, where |0⟩ is a separable ground state across the n_T qubits, the density state evolves as

ρ_t = V_t(x_t, θ) ρ_{t−1} V_t(x_t, θ)†, (7)

where V_t(x_t, θ) is a unitary matrix that applies unitary U(x_t, θ) to the memory subregister (first wire in Fig. 2) and to the t-th output subregister, while identity operators are applied to all other subregisters. Furthermore, the outputs (5) can be written as

⟨O^B⟩_t = Tr(O^B_t ρ_t), (8)

where the local observable O^B_t applies only to the t-th output subregister.
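The claimed equivalence between the two formulations can be verified numerically for a tiny instance with one memory qubit, one-qubit output registers, and T = 2 steps (the two-qubit unitary is again a toy stand-in of our choosing, not the ansatz of [22]):

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
ket0 = np.array([[1.0, 0.0], [0.0, 0.0]])

def Ry(x):
    return np.array([[np.cos(x / 2), -np.sin(x / 2)],
                     [np.sin(x / 2),  np.cos(x / 2)]])

def U(x):  # toy two-qubit "perceptron"
    CNOT_AB = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
    CNOT_BA = np.array([[1., 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
    return CNOT_BA @ np.kron(I2, Ry(x)) @ CNOT_AB

def ptrace(rho_ab, keep):
    r = rho_ab.reshape(2, 2, 2, 2)
    return np.einsum('abcb->ac', r) if keep == 'A' else np.einsum('abad->bd', r)

xs = [0.4, -1.2]

# QRNN formulation: partial-trace recursion, eqs. (2)-(5)
rho_A, z_qrnn = ket0.copy(), []
for x in xs:
    rho_AB = U(x) @ np.kron(rho_A, ket0) @ U(x).conj().T
    z_qrnn.append(np.trace(Z @ ptrace(rho_AB, 'B')).real)
    rho_A = ptrace(rho_AB, 'A')

# Dissipative-QNN formulation: one global state on n_T = 1 + 2 qubits, eqs. (7)-(8)
SWAP = np.array([[1., 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
S12 = np.kron(I2, SWAP)                      # swap output registers 1 and 2
V1 = np.kron(U(xs[0]), I2)                   # U on (memory, output 1)
V2 = S12 @ np.kron(U(xs[1]), I2) @ S12       # U on (memory, output 2)
rho = np.zeros((8, 8)); rho[0, 0] = 1.0      # |000><000|
rho = V2 @ V1 @ rho @ V1.conj().T @ V2.conj().T
z_global = [np.trace(np.kron(I2, np.kron(Z, I2)) @ rho).real,
            np.trace(np.kron(I2, np.kron(I2, Z)) @ rho).real]
```

Both formulations return the same expectation values, since deferring the measurement of an output register does not affect its statistics once no later unitary acts on it.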

Time Warping-Invariant QRNNs
In this section, we introduce the proposed TWI-QRNN model class.We start by defining the concept of time warping-invariance.Then, we derive the TWI-QRNN model, which is finally described, along with the corresponding training problem.

Time Warping-Invariance
Following [7] and [8], in order to define time warping-invariance, we consider a continuous-time formulation in which the time axis is described by a real variable t. Round parentheses will be used to denote dependence on the continuous time t. Furthermore, we will take a(t) to represent either the value of the function a(·) at time t, or the entire function a(·), as will be clear from the context.
To start, we define a time warping operation c(t) as any monotonically increasing function of time. Let us fix some subset C of time warping operations. An example is given by the linear time warping family C given by operations of the form c(t) = at for some range of values a > 0, which correspond to a stretching (a < 1) or shrinking (a > 1) of the time axis. Time warping-invariance is a property of a parameterized model class z(t) = f(x(t), θ) of mappings between an input continuous-time signal x(t) and an output continuous-time signal z(t). Note that the mapping is arbitrary, and not restricted to memoryless functions. Furthermore, a mapping in the class is identified by the model parameter θ.
Model class f(·, θ) is said to be time warping-invariant with respect to the family C if, for any z(t) = f(x(t), θ) produced by the model with some model parameters θ given input x(t), there exist model parameters θ′ that yield the output z(c(t)) = f(x(c(t)), θ′) given input x(c(t)) for any function c(t) in C. In words, the model class can reproduce time-warped versions of its input-output pairs.
It is emphasized that the definition of time warping-invariance given above applies only to continuous-time processes. Therefore, its extension to discrete-time processes, first introduced in [7], requires the use of discrete-time approximations. As we detail in the next subsection, these approximations entail that the notion of time warping-invariance is more loosely defined as compared to the "spatial" invariances studied in quantum geometric machine learning [13]. We may hence refer to the resulting invariance property as a quasi-invariance, as in [7], but we will not make this distinction explicit in the following.

Time Warping-Invariance for QRNNs
The time warping-invariance property defined in the previous subsection can be applied to QRNNs as long as one specifies a suitable continuous-time extension of the defining equalities (2)-(5), or equivalently (7)-(8). Here, we adopt a stronger definition of time warping-invariance based on the formulation (7)-(8), whereby the invariance condition is imposed on the input-output pairs (x_t, ρ_t). Note that, if invariance holds in terms of the density matrices ρ_t, then it also holds, a fortiori, for the expected values (8) producing the output samples z_t. In the rest of this subsection, we provide a derivation of the TWI-QRNN model that differs from the steps followed for classical models [7], [8] and is more suitable for a quantum mechanical description of the problem. A direct extension of the arguments in [7], [8] can be found in the Appendix. The reader only interested in the definition of the model can move on directly to the next subsection.
Let us start from the definition (7) and apply a time-warping operation so that the input signal observed by the QRNN at time t is given by x(c(t)). Denote as ρ(t) a continuous-time version of the density matrix ρ_t obtained with the original input x(t), and as ρ(c(t)) the corresponding density matrix obtained with the time-warped input x(c(t)) under time warping-invariance. The change in density matrix from one discrete-time step t to the next, t + 1, can be related to the derivatives with respect to the warped time axis as

ρ(c(t+1)) − ρ(c(t)) = ∫_{c(t)}^{c(t+1)} (dρ(c)/dc) dc = ∫_{t}^{t+1} (dρ(c(t′))/dc(t′)) (dc(t′)/dt′) dt′, (9)

where we have used the definition of the integral and changed the integration variable from c(t) to t.
To proceed, we make the approximation of considering the time-warping derivative dc(t)/dt to be constant in the interval [t, t + 1], so that it can be taken out of the integral in (9). Furthermore, in order to account for the desired time warping-invariance of the output density matrix, we assume that the continuous-time evolution operator is approximately invariant under time-warping transformations, i.e.,

dρ(c(t))/dc(t) ≈ dρ(t)/dt. (10)

With regards to the approximations put forth in this paragraph, we note that we are only interested in the evolution of the density at the end of the discrete time step from time t to t + 1 as per (7), and hence we can ignore constraints on the underlying infinitesimal physical evolution as long as they are consistent with the discrete-time evolution. Importantly, we emphasize that the approximate equality (10) is not intended to indicate that a meaningful continuous-time physical process ρ(t) exists that satisfies (10) with a strict equality. Rather, the continuous-time quantum process is just used as a motivational starting point for the introduction of the TWI-QRNN model. For clarity, and to acknowledge the approximations made, we will use ρ̃(·) in the rest of the derivation. Inserting (10) into (9) and using the above approximations, we obtain

ρ̃(c(t+1)) ≈ ρ̃(c(t)) + (dc(t)/dt) (ρ̃(t+1) − ρ̃(t)). (11)

In (11), the density ρ̃(t + 1) is given by the output of the QRNN, via (7), with previous state ρ̃(c(t)) ≈ ρ̃(t). Moreover, from the point of view of a discrete QRNN model under time warping, the time c(t) corresponds to discrete time t, the value x(c(t)) to sample x_t, and the density matrix ρ̃(c(t)) to ρ̃_t. Using these definitions, we get the approximate equality

ρ̃_{t+1} ≈ (1 − dc(t)/dt) ρ̃_t + (dc(t)/dt) V_t(x_t, θ) ρ̃_t V_t(x_t, θ)†. (12)

We observe that the right-hand side of (12) is a valid density matrix if the inequality dc(t)/dt ≤ 1 holds. The same limitation applies to the derivation presented in [7] for classical models (see also the Appendix), and it stems from the approximations made in going from discrete to continuous time and back.
In other words, under the condition dc(t)/dt ≤ 1, the density matrix ρ̃_{t+1} in (12) is produced by leaving the previous density matrix ρ̃_t unchanged with probability (1 − dc(t)/dt) and by applying the unitary V_t(x_t, θ) otherwise. The update (12) is the key equation defining TWI-QRNNs, as will be elaborated on in the next subsection.
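A quick numerical check of this reading of (12): averaging many Bernoulli-gated runs, in which a fixed unitary is applied with probability dc(t)/dt and skipped otherwise, reproduces the convex combination on the right-hand side (the Hadamard gate here is an arbitrary stand-in for V_t):

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.25                                               # plays the role of dc(t)/dt
V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # stand-in unitary (Hadamard)
rho = np.array([[1.0, 0.0], [0.0, 0.0]])               # current state

rho_mix = (1 - a) * rho + a * V @ rho @ V.conj().T     # right-hand side of (12)

# Monte Carlo over the Bernoulli gate: apply V with probability a, else hold
runs = [V @ rho @ V.conj().T if rng.random() < a else rho for _ in range(20000)]
rho_mc = np.mean(runs, axis=0)
```

The empirical average `rho_mc` converges to the exact mixture `rho_mix`, which remains a valid density matrix precisely because a ≤ 1.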

TWI-QRNNs
Based on the analysis in the previous subsection, we define the class of TWI-QRNNs using the partial trace formalism (2)-(5) as follows. A TWI-QRNN applies the updates

ρ^{AB}_t = (1 − α_t) (ρ^A_{t−1} ⊗ |0⟩⟨0|_B) + α_t U(x_t, θ) (ρ^A_{t−1} ⊗ |0⟩⟨0|_B) U(x_t, θ)†, (13)

where α_t ∈ [0, 1] is a probability, along with the equalities (3)-(5). Accordingly, when evaluating the output z_t via multiple runs of the circuit, in each run the unitary U(x_t, θ) is applied at each time step with probability α_t and replaced by the identity otherwise. By the analysis in the previous subsection, culminating in (12), the probability α_t should be chosen equal to the time-warping derivative dc(t)/dt. To gain insight on this selection, let us consider the case of a linear time warping family C defined as c(t) = at with a < 1, which corresponds to stretching the inputs and outputs by a factor of 1/a. Accordingly, by the definition of time warping-invariance, if the input x_t is stretched by a factor of 1/a, the model class should be able to reproduce an equally stretched density output sequence ρ_t. Therefore, assuming 1/a to be an integer, if each input sample x_t of the original sequence is kept constant for 1/a consecutive samples, the samples ρ_t of the output sequence should also be constant for consecutive intervals of 1/a samples. The update (13) implements a probabilistic version of this operation, whereby each sample ρ_t is held constant, on average, for 1/α_t consecutive samples. Therefore, setting α_t = a approximately satisfies time warping-invariance with respect to the family C.
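The hold-time argument can be checked by simulation: with the unitary applied independently with probability α_t = dc(t)/dt = a at each step, each state persists for a geometrically distributed number of steps with mean 1/a, matching the stretch factor of the warped input (the variable names below are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
a = 0.2                                 # linear warp c(t) = a*t, so dc/dt = a
T = 200_000

apply_gate = rng.random(T) < a          # b_t = 1: apply the unitary; b_t = 0: hold
mean_hold = T / apply_gate.sum()        # average persistence of each state
```

The empirical mean hold length is close to 1/a = 5, i.e., the output sequence is stretched by the same factor as the input.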
We finally observe that the mechanism described in this subsection, which is based on t − 1 classical binary variables and switches, could be readily replaced by a fully quantum implementation with controlled-U gates and t − 1 controlling ancilla qubits. However, this alternative architecture is rather inefficient for sequences containing a sufficiently large number of time samples.

Adaptive Gating Mechanism
Whilst the motivational derivation above suggests the choice of α_t as the derivative dc(t)/dt, the proposed algorithm practically replaces the unknown derivative in the model (13) with a classical recurrent neural network (RNN) that infers a suitable time-warping probability α_t from the input data sequence in a causal fashion. Specifically, we write the probability α_t as

α_t = σ(ϕ_t), (14)

where σ(a) = (1 + exp(−a))^{−1} denotes the logistic sigmoid, and the quantity ϕ_t is the output of a recurrent model defined by updates of the form

h_t = tanh(W_x x_t + W_h h_{t−1}), ϕ_t = W_{hx} x_t + W_{hh} h_t, (15)

where W = {W_x, W_h, W_{hx}, W_{hh}} denotes the set of learnable parameters. The overall TWI-QRNN model is shown in Fig. 3.
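A scalar version of this gating network can be sketched as follows; the recurrence used here is a plausible minimal form patterned on a vanilla RNN with the four parameters in W, and is an assumption rather than the paper's exact architecture:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gating_probabilities(x_seq, Wx, Wh, Whx, Whh):
    """Causally map inputs x_1, ..., x_T to probabilities alpha_t = sigmoid(phi_t).
    The recurrence below is illustrative, not the paper's exact update."""
    h, alphas = 0.0, []
    for x in x_seq:
        h = np.tanh(Wx * x + Wh * h)        # classical recurrent state
        phi = Whx * x + Whh * h             # pre-sigmoid output
        alphas.append(sigmoid(phi))
    return np.array(alphas)

alphas = gating_probabilities(np.cos(np.arange(10)), 0.5, 0.3, 0.2, 0.4)
```

By construction, every α_t lies in (0, 1) and depends only on the past and current input samples, as required for a causal gating mechanism.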
We emphasize that the classical RNN model is not required if one has prior knowledge of the time-warping function.Furthermore, we intentionally avoid introducing gating mechanisms in the RNN.This choice is motivated by the designated role of the RNN as a mechanism to estimate the time-warping function, with the model memory being managed by the quantum system via the described gating scheme.That said, we note that more complex tasks may call for more sophisticated gating mechanisms across both quantum and classical models.

Training TWI-QRNNs
Let us define as

q(b_t | x_{1:t}, W) = α_t^{b_t} (1 − α_t)^{1−b_t} (17)

the variational distribution of the Bernoulli variable b_t dictating whether or not the current unitary U(x_t, θ) is applied to the memory and output subregisters as per (13). The joint distribution of the variables b_{1:T} is then given as q(b_{1:T} | x_{1:T}, W) = ∏_{t=1}^{T} q(b_t | x_{1:t}, W). Note that the samples b_{1:T} are conditionally independent given the input sequence x_{1:T}, and that the variable b_t depends causally on the input samples x_{1:t}. Training a TWI-QRNN amounts to the optimization of the variational parameters W and of the PQC parameters θ.
To specify the training problem, we define the loss for each example (x_{1:T}, z̄_{1:T}) as the expectation E[ℓ(z̄_{1:T}, z_{1:T})] of the loss function (6), where the average is taken with respect to the random variables b_{1:T}, which, in turn, determine the random outputs z_{1:T}. To estimate this expectation, one can leverage realizations of the outputs z_{1:T} obtained as described in the previous subsection by drawing samples from the variational distribution q(b_{1:T} | x_{1:T}, W).
The resulting training loss is minimized via gradient descent over W. Specifically, the variational parameters W are optimized using the log-derivative trick [35] to estimate the gradient, whilst optimization over θ is carried out using zeroth-order optimization, as for QRNNs.
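The log-derivative (score-function) trick for the Bernoulli gates can be illustrated on a one-variable toy problem, where the estimator E[ℓ(b) ∇_w log q(b|w)] can be compared against the exact gradient (everything below, including the toy loss, is illustrative and not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

def loss(b):                     # toy stand-in for the sequence loss
    return (b - 0.9) ** 2

w = 0.0
alpha = sigmoid(w)               # q(b = 1 | w) = alpha

# Score function of Bern(alpha) with alpha = sigmoid(w):
# d/dw log q(b|w) = b - alpha, since d alpha/dw = alpha * (1 - alpha).
grads = []
for _ in range(50_000):
    b = float(rng.random() < alpha)
    grads.append(loss(b) * (b - alpha))   # log-derivative estimator
grad_est = np.mean(grads)

# Exact gradient of E[loss(b)] = alpha*loss(1) + (1-alpha)*loss(0)
grad_exact = alpha * (1 - alpha) * (loss(1.0) - loss(0.0))
```

The estimator requires only loss evaluations of sampled gate configurations, which is what makes it compatible with losses produced by quantum circuit runs.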

Stochastic Time Warping-Invariant QRNNs
In this section, we introduce a probabilistic counterpart of the QRNN, referred to as stochastic QRNN (SQRNN), which is then extended to the time warping-invariant class TWI-SQRNN.

SQRNNs
An SQRNN is a probabilistic model that maps an input real-valued vector sequence x_{1:T} to a discrete-valued sequence y_{1:T}, with y_t being a string of n_B bits. Equivalently, we can consider the output sample y_t to take one out of 2^{n_B} possible values. Accordingly, without loss of generality, we write y_t ∈ {0, ..., 2^{n_B} − 1}. Unlike the QRNN model introduced in Sec. 2, an SQRNN outputs the sequence y_{1:T} via a single run of the circuit, hence not requiring an empirical average over multiple runs of the circuit. Specifically, as illustrated in Fig. 4, the output y_t is the random outcome of the measurement of the output subregister at time t.
To elaborate, let us fix a projective measurement defined by projection matrices Π^B_y with y ∈ {0, ..., 2^{n_B} − 1}. The projection matrices act on the output subregister. Then, the probability of the outcome y_t at time t given the past and current samples x_{1:t} of the input sequence is given by Born's rule as

p(y_t | x_{1:t}) = Tr(Π^B_{y_t} ρ^{AB}_t), (18)

where Π^B_{y_t} is applied only to the output subregister and the density matrix ρ^{AB}_t is defined as in Sec. 2.
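Born's rule (18) for a single output qubit (n_B = 1) can be sketched as follows; the joint state |++⟩ is an arbitrary example of ours, not one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Projectors Pi_y on one output qubit, y in {0, 1}
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

# Example joint memory-output state rho^{AB}: the pure state |++>
psi = np.ones(4) / 2.0
rho_AB = np.outer(psi, psi)

# Eq. (18): p(y) = Tr[(I_A kron Pi_y) rho^{AB}], projector on the output qubit only
I2 = np.eye(2)
probs = [np.trace(np.kron(I2, Py) @ rho_AB).real for Py in P]

y = rng.choice(2, p=probs)       # single-shot SQRNN output at this time step
```

A single sample drawn from these probabilities plays the role of the SQRNN output y_t, with no averaging over repeated circuit runs.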
Given an example (x_{1:T}, ȳ_{1:T}), the training loss is defined as the cross-entropy loss

ℓ(ȳ_{1:T}) = − Σ_{t=1}^{T} log p(ȳ_t | x_{1:t}). (19)

We observe that this loss is the negative logarithm of the product of conditional marginals. Its minimization may be interpreted as a form of pseudo maximum likelihood [36]. An alternative, but more complex, criterion would be the negative logarithm of the joint distribution of the sequence ȳ_{1:T}, which can be obtained from the joint density ρ_t described in Sec. 2.3. The training problem with loss (19) can be addressed using the same tools mentioned in Sec. 2 for QRNNs.

TWI-SQRNNs
In the previous section, we introduced TWI-QRNNs by imposing time warping-invariance in terms of the density matrices produced by the equivalent dissipative QNN. Since the probabilities (18) have the same form as the expectations (5) defining the outputs of QRNNs, the same reasoning applies to the density matrices ρ^{AB}_t of SQRNNs. Based on this observation, we define TWI-SQRNNs in a manner analogous to TWI-QRNNs. The only caveat is that each output y_t is obtained via a single pass through the circuit up to time t, rather than by carrying out several runs through the circuit. Accordingly, when evaluating the output y_t for time t, a TWI-SQRNN generates a single realization of t − 1 Bernoulli random variables b_1, ..., b_{t−1}, with each random variable b_{t′} ∼ Bern(α_{t′}) being equal to 1 with probability α_{t′} in (14). For the given realization b_1, ..., b_{t−1}, at each time t′ < t, the unitary U(x_{t′}, θ) is applied to the memory and output subregisters if b_{t′} = 1 and an identity is applied otherwise, followed by a measurement of the output register at time t.
Finally, training is carried out in a manner analogous to TWI-QRNNs by defining the training loss as the expectation of the cross-entropy loss (19) with respect to the variational distribution q(b_{1:T} | x_{1:T}, W).

Experiments
In this section, we provide experimental results to elaborate on the performance of TWI-QRNNs, and their stochastic counterpart TWI-SQRNNs, as opposed to conventional (S)QRNNs, in the presence of time-warping distortions of the input sequence. As a classical benchmark, we use an LSTM RNN, which, by the results in [7], also preserves invariance to time warping via its gating mechanisms.

Remembering a Cosine Wave
To start, we consider the task of remembering the past sample of a simple classical time sequence x_t, namely a, possibly time-warped, discrete-time cosine function. Accordingly, we set the target sequence as z̄_t = x_{t−1} for (TWI-)QRNNs and ȳ_t = x_{t−1} for (TWI-)SQRNNs. For the case without time warping, reference [22] demonstrated that the original QRNN model is capable of successfully implementing this task. In contrast, here we study the robustness of the models under study to a time warping of the input sequence. To elaborate, the considered cosine function is given by x_t = cos(t). (20) The "unwarped" sequence in (20) is shown in Fig. 5 (left). We apply linear time warping to obtain "warped" input sequences x_t as discussed in Sec. 3.3. Accordingly, given the unwarped sequence x_t obtained by sampling the original cosine signal cos(t), we repeat each sample for 1/a time instants, where a is such that the ratio 1/a is an integer. This setting accounts for a sampler with memory, as a new sample is observed every 1/a time steps. Examples of warped sequences x_t are shown in Fig. 5 for a = 0.1 (middle) and a = 0.05 (right).
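The warped inputs and the corresponding one-step-delay targets for this task can be generated as follows (the zero padding of the first target is our own convention, not specified in the paper):

```python
import numpy as np

a, T = 0.1, 200
reps = round(1 / a)
x_unwarped = np.cos(np.arange(1, T // reps + 1))   # samples of cos(t)
x = np.repeat(x_unwarped, reps)[:T]                # hold each sample for 1/a steps
targets = np.concatenate(([0.0], x[:-1]))          # target z_bar_t = x_{t-1}
```

Because the warped input only changes every 1/a steps, a model that is invariant to the warp can keep its memory state frozen over the repeated samples.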

Predicting Spin Dynamics
Following [22], we study next the problem of predicting the expected value of an observable for the quantum spin dynamics of a three-qubit system. The density state σ(t) of the three-qubit system evolves in continuous time according to the Lindblad master equation (22) with Hamiltonian (21). The coefficients are set to h_k = 2π and J_k = 0.1π, and the dissipation rate equals c = √0.0002. The initial state is set to σ(0) = (|+⟩⟨+|)^{⊗3}, and the sequence is sampled at the continuous time instants ∆T, 2∆T, ..., 200∆T with ∆T = 1/20, which correspond to the discrete time instants t = 1, 2, ..., T, respectively, with T = 200. The unwarped input x_t is then given by the corresponding expected values of the Z observable for the first of the three qubits.
Non-linear time-warped sequences are generated by setting c(t) = √t, and sampling the solution of the master equation (22) at the continuous time instants √∆T, √(2∆T), ..., √(200∆T), which correspond to the discrete time instants t = 1, 2, ..., T = 200. This setting represents an instance in which the sampling interval varies over time.
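Under c(t) = √t, the gap between consecutive sampling instants shrinks over time, i.e., the effective sampling rate increases; a two-line check (variable names are ours):

```python
import numpy as np

dT = 1 / 20
t_warped = np.sqrt(np.arange(1, 201) * dT)   # instants sqrt(k * dT), k = 1, ..., 200
gaps = np.diff(t_warped)                     # time between consecutive samples
```

The gaps are strictly decreasing, in contrast to the constant interval ∆T of the unwarped sampling.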

Circuit
Throughout this section, we adopt models with n = 6 qubits, consisting of an n_A = 3-qubit memory subregister and an n_B = 3-qubit output subregister. The parameterized unitary U(x, θ) implements the cascade of an encoding unitary U_in(x) and of a problem-dependent evolution unitary U_H(θ). Specifically, in order to encode each input sample x, we use the encoding unitary U_in(x) = I ⊗ R_in(x)^{⊗3}, which applies an identity transformation to the memory subregister and an input-dependent Pauli-Y rotation [22]

R_in(x) = R_y(arccos(x)) = exp(−i arccos(x) Y/2) (23)

to each qubit of the output register. As shown in Fig. 7, the evolution unitary consists of a layer of single-qubit rotation gates, each parameterized by the angles θ as in [22], followed by Hamiltonian dynamics. Thereby, after the rotation layer, the unitary transformation exp(−iH∆t) is applied using the fixed Hamiltonian (24) with ∆t = 0.17. The coefficients a_i and J_ij in (24) are drawn randomly and uniformly as a_i, J_ij ∈ [−1, 1], and they are fixed during training. In contrast, the rotation angles are optimized using the gradient-free optimizer COBYLA [37].
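The encoding rotation (23) has the convenient property that measuring Z on a freshly encoded qubit recovers the input exactly: ⟨0| R_y(arccos x)† Z R_y(arccos x) |0⟩ = cos(arccos x) = x. A small check:

```python
import numpy as np

def R_in(x):
    """Encoding rotation R_y(arccos(x)) of eq. (23), for x in [-1, 1]."""
    th = np.arccos(x)
    return np.array([[np.cos(th / 2), -np.sin(th / 2)],
                     [np.sin(th / 2),  np.cos(th / 2)]])

Z = np.diag([1.0, -1.0])
x = 0.37
psi = R_in(x) @ np.array([1.0, 0.0])   # encode x into the |0> state
z_expect = psi @ Z @ psi               # <Z> = x, recovering the input
```

This makes the encoding compatible with the Z-expectation readout used at the output of the circuit.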
We emphasize that the encoding circuit could be chosen differently, and that the architecture has not been optimized. For example, one may be able to improve the performance of the models by encoding orthogonal polynomials or by using trainable encoding strategies [38]. At the measurement stage, we measure the Z-expectation value of each qubit in the output register and take z_t to be their average multiplied by a real coefficient c for the (TWI-)QRNN.

Gating Mechanism via a Classical RNN
For the classical RNN in the proposed models, we do not use hidden layers, and we directly optimize the gating parameters W = {W_x, W_h, W_hx, W_hh}, which determine the hyperparameter in (15), using gradient descent as discussed in Sec. III-D. The optimization of the circuit and gating parameters is carried out in an iterative manner.
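Equation (15) is not reproduced in this excerpt, but a gate of this kind can be sketched as follows: a scalar hidden state h_t is updated from the input history, and the gate probability α_t is squashed through a sigmoid. The specific update equations below are illustrative assumptions, not the exact form of (15).

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def gate_probabilities(x, Wx, Wh, Whx, Whh):
    """Compute gate probabilities alpha_t from past input samples.

    Illustrative recurrence (assumed form, not the paper's (15)):
        alpha_t = sigmoid(Wx * x_t + Wh * h_{t-1})
        h_t     = tanh(Whx * x_t + Whh * h_{t-1})
    """
    h = 0.0
    alphas = []
    for xt in x:
        alphas.append(sigmoid(Wx * xt + Wh * h))
        h = np.tanh(Whx * xt + Whh * h)
    return np.array(alphas)

x = np.cos(np.arange(1, 51))                      # example input sequence
alpha = gate_probabilities(x, 0.5, -0.3, 0.8, 0.2)
```

Because the sigmoid output always lies in (0, 1), each α_t is a valid probability for the Bernoulli gating variable, and the four scalar weights are exactly the parameter set W optimized by gradient descent.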

Classical Benchmark
As a classical benchmark, we adopt a standard temporal model comprising an LSTM RNN layer followed by a dense layer. To ensure a meaningful comparison with the quantum models, we set the number of trainable parameters in the gating mechanism of the TWI-QRNN to match the number of parameters in the forget gate of the LSTM, namely 4 trainable parameters and bias terms, and the number of trainable parameters in the circuit of the TWI-QRNN to match the number of parameters in the input gate, namely 3 trainable parameters. Note that the LSTM has additional parameters in the output gate and in the dense layer, for a total of 14 parameters. This choice ensures that the number of parameters is the same in the modules that appear in both models, since the TWI-QRNN does not have an output gate or dense layer. The model is trained via gradient descent, using the Adam optimizer over 2000 training epochs with a learning rate of 0.001. We use the mean squared error loss as the optimization objective.
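As a sanity check of the quoted total, assuming a standard single-unit LSTM with scalar input, each of the four gates carries an input weight, a recurrent weight, and a bias, and the dense output layer adds a weight and a bias; this reproduces the 14 parameters stated above. A small sketch of the count:

```python
def lstm_param_count(input_dim: int, units: int) -> int:
    """Parameter count of a standard LSTM layer: four gates, each
    with input weights, recurrent weights, and a bias."""
    return 4 * (units * input_dim + units * units + units)

def dense_param_count(input_dim: int, units: int) -> int:
    """Fully connected layer: weight matrix plus bias."""
    return units * input_dim + units

# Scalar input, single LSTM unit, scalar dense output:
total = lstm_param_count(1, 1) + dense_param_count(1, 1)
```

The per-gate grouping used in the comparison (forget gate vs. gating mechanism, input gate vs. circuit) follows the paper's own accounting.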

Remembering a Cosine Wave
For the first task, remembering the last sample of a possibly time-warped cosine wave, we show the accumulated prediction loss over time in Fig. 8. The training algorithm is run on the first 50 samples (not shown), and the error is accumulated over the subsequent 150 samples. We carry out 5 trials, averaging over the random initialization of the trainable parameters, as well as over the Hamiltonian coefficients in (24).
We consider the case in which the warping parameter a is known a priori, as well as the more challenging case in which a is not known and is learnt via the proposed RNN-based adaptive gating mechanism. For both considered values of a, namely a = 0.1 and a = 0.05, the proposed TWI-QRNN model consistently achieves a significantly lower loss than the standard QRNN. This demonstrates the robustness of TWI-QRNNs to linear time warps. The gap in performance between TWI-QRNNs and QRNNs widens as the time warping becomes more pronounced, i.e., as a decreases from a = 0.1 to a = 0.05. Similar conclusions can be drawn for the stochastic model, as shown in Fig. 9. In fact, the TWI-SQRNN outperforms the SQRNN model for both values of a. In addition, the TWI-QRNN is seen to accrue a lower loss than the classical LSTM benchmark. Overall, for both models, the gating mechanism is seen to be capable of adapting the value of a, achieving a loss comparable to the case in which a is known a priori. In fact, the loss obtained with adaptive gating is lower than that obtained with known parameter a. This suggests that the adaptive gating mechanism, which is further investigated in Appendix A, may compensate for errors in the approximations adopted in Sec. III.

Predicting Spin Dynamics
In a manner similar to the previous experiment, the accumulated quadratic loss for the problem of predicting spin dynamics over time is shown in Fig. 10. We consider again both the ideal case in which the time-warping function c(t) = √t is known a priori, and the more practical case in which the time transformation is not known and is learnt via the RNN-based adaptive gating mechanism.
The loss accumulates more quickly in the earlier time steps, as the warping is more prominent when the continuous time variable t is smaller. However, the proposed model is seen to be capable of resisting even non-linear warping transformations, accruing a much lower prediction loss over time than the conventional QRNN model. This confirms that the gating mechanism can approximate the warping derivative and achieve a low prediction loss, encouraging the use of the proposed model for the prediction of phenomena characterized by quantum dynamics. In a manner similar to the previous experiment, the TWI-QRNN also outperforms the classical LSTM benchmark.

Discussion and Future Work
In this paper, we have studied quantum models that process temporal data. We have shown that postulating invariance to time transformations in the data, i.e., taking invariance to time warping as an axiom, necessarily leads to an adaptive gate-like mechanism in quantum recurrent models. We derived a novel quantum model class from first principles, which was experimentally seen to be capable of resisting time warping transformations. We have also provided examples in which the proposed quantum model class outperforms a classical LSTM RNN benchmark with comparable capacity, in terms of the number of parameters, in the corresponding modules. In this regard, we caution against drawing general conclusions from these results. Current state-of-the-art machine learning models fit large datasets with high-capacity models, which is not plausible on current quantum devices due to issues associated with small numbers of qubits and data loading, as well as trainability challenges due to barren plateaus. However, when the machine learning practitioner is concerned with learning from limited temporal data, the results of this work suggest that the proposed model is the better candidate to tackle the problem. Various future research directions arise. For example, it is an open question how to design an efficient TWI-QRNN model with a fully quantum gating mechanism. This would be particularly useful for running the models on quantum hardware, rather than on simulators, and it is left for future work. Another interesting direction is the investigation of the memory cost of time warping-invariant quantum recurrent models, since quantum models have been shown to describe temporal sequences with reduced memory [24,25]. Given the performance of the proposed model on prediction tasks, we expect it to be advantageous for tasks related to quantum phenomena, with extensions possibly operating directly on quantum data [39].

Following the arguments in [7] and [8], we finally return to a discrete-time model as follows. From the point of view of a discrete QRNN model under time warping, the time c(t) corresponds to the discrete time t, the value x(c(t)) to x_t, and ρ(c(t)) to ρ_t. Therefore, a discrete-time version of (30) can be defined as in (12).

Figure 2 :
Figure 2: The QRNN model in Fig. 1 can be interpreted as a variant of a dissipative QRNN [29, 30], as shown in the figure with T = 3, in which the same parameter vector θ is reused by unitaries U (x, θ), and different data samples x t are loaded at each time t.

Figure 3 :
Figure 3: Illustration of the operation of the TWI-QRNN model.

Figure 4 :
Figure 4: Illustration of the proposed SQRNN model.

Figure 5 :
Figure 5: (left) T = 200 samples x_t = cos(t) obtained at the time instants t = 1, 2, ..., T. For the stochastic models, which produce discrete outputs, the value of the cosine in (20) is discretized into 2^{n_B} levels equally spaced in the interval [−1, 1], with n_B = 3 (see the next subsection for further details on the model architecture).

Figure 6 :
Figure 6: (left) The "unwarped" expected value of the observable for the three-qubit spin dynamics generated by (21); and (right) the corresponding time-warped signal with non-linear transformation c(t) = √ t.

Figure 7 :
Figure 7: The evolution part of the circuit utilized in the numerical simulations.The rotation angles θ act as parameters of the circuit.The Hamiltonian H is fixed and given by (24).

Figure 8 :
Figure 8: Cumulative quadratic loss for the task of remembering a time-warped cosine wave as a function of time for time-warping parameters a = 0.1 (left) and a = 0.05 (right).

Figure 9 :
Figure 9: Cumulative quadratic loss for the task of remembering a time-warped cosine wave as a function of time, for time-warping parameters a = 0.1 (left) and a = 0.05 (right). Unlike Fig. 8, which considers deterministic predictors obtained via expected values of an observable, this figure considers stochastic, one-shot predictors based on a single measurement output (see Sec. 4).

Figure 10 :
Figure 10: Cumulative quadratic loss for the task of predicting spin dynamics as a function of time.
To produce the output z_t at time t, a TWI-QRNN generates a number of realizations of the t−1 Bernoulli random variables b_1, ..., b_{t−1}, with each random variable b_{t′} ∼ Bern(α_{t′}) being equal to 1 with probability α_{t′}. For each realization b_1, ..., b_{t−1}, at each time t′ < t, the unitary U(x_{t′}, θ) is applied to the memory and output subregisters if b_{t′} = 1, and the identity is applied otherwise, followed by a measurement of the output register. The measurement outputs at time t are then averaged to obtain z_t.
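This averaging procedure can be sketched on a single-qubit toy model. Here an arbitrary R_y rotation stands in for the gated unitary U(x_{t′}, θ), and a pure state-vector simulation replaces the memory/output registers; both are illustrative simplifications rather than the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = np.diag([1.0, -1.0])

def ry(theta):
    """Single-qubit Pauli-Y rotation exp(-i*theta*Y/2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def one_shot_output(thetas, alphas):
    """One realization of b_1, ..., b_{t-1}: apply each gated unitary
    with probability alpha_{t'}, identity otherwise, then read out the
    Z expectation of the final state."""
    psi = np.array([1.0, 0.0])            # start in |0>
    for theta, alpha in zip(thetas, alphas):
        if rng.random() < alpha:          # b_{t'} ~ Bern(alpha_{t'})
            psi = ry(theta) @ psi         # unitary applied
        # otherwise: identity applied
    return float(psi @ Z @ psi)

def averaged_output(thetas, alphas, shots=200):
    """Average over realizations of the Bernoulli gates to obtain z_t."""
    return float(np.mean([one_shot_output(thetas, alphas)
                          for _ in range(shots)]))
```

When every α_{t′} equals 1 the procedure is deterministic and all unitaries are applied; when every α_{t′} equals 0 the state never evolves, which matches the limiting behaviour one expects from the gating mechanism.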