
Generating quantum states through spin chain dynamics

Published 13 April 2017 © 2017 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Citation: Alastair Kay 2017 New J. Phys. 19 043019. DOI: 10.1088/1367-2630/aa68f9


Abstract

The spin chain is a theoretical work-horse of the physicist, providing a convenient, tractable model that yields insight into a host of physical phenomena including conduction, frustration, superconductivity, topological phases, localisation, phase transitions, quantum chaos and even string theory. Our ultimate aim, however, is not just to understand the properties of a physical system, but to harness it for our own ends. We therefore study the possibilities for engineering a special class of spin chain, envisaging the potential for this to feed back into the original physical systems. We pay particular attention to the generation of multipartite entangled states such as the W (Dicke) state, superposed over multiple sites of the chain.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Spin chains are one of the simplest models that can exhibit any of a wide variety of properties and, as such, have been instrumental in developing our understanding of those properties. This includes conductivity (the Hubbard model), the transition from conduction to insulation (the Bose–Hubbard model) [1], and high-temperature superconductivity [2]. From localisation within random media [3], through quantum chaos [4], to transport in globally entangled topological systems and the Kitaev chain [5–7], the entire gamut of strongly correlated systems can be studied, and features such as phase transitions [8, 9] elucidated, including the transition between the exact, efficient solubility of gapped systems [10, 11] and the believed intractability/universality of the calculation of ground state/time evolution of a gapless Hamiltonian [12, 13]. These spin chains can even be used as a technical tool to describe properties of more complex systems, as demonstrated by the Onsager solution to the two-dimensional Ising model [14], and even string theory [15, 16]!

The spin chain model arises directly in experiments: solid state [17, 18], optical lattices [19], trapped ions [20], or even photonic systems [21–23] are all capable of realising the spin chain model that we study in this paper. Furthermore, this description in terms of a Hamiltonian (along with some additional control parameters) is often more natural than the gate model that they are attempting to emulate for the purposes of quantum computation. In fact, it is remarkable how little control one has to add in order to create universal quantum computation from a very simple spin chain [24].

Quantum information and quantum computation are the ultimate expression of our understanding of quantum mechanics; instead of merely describing and explaining quantum phenomena, we are trying to understand how we can manipulate quantum systems to an unprecedented level in order to realise the transformations that we desire, whether this is some comparatively simple manifestation of quantum technology such as a Bell test [25], random number generation [26] or quantum key distribution [27], or the complete package of universal quantum computation. Given our history with the spin chain, and its experimental prospects, we should understand how these tasks might be realised in this setting. As already mentioned, with a large enough local Hilbert space dimension [12, 13], universal quantum computation can be realised, just by initialising a suitable initial state and leaving the system to evolve. Alternatively, with the smallest possible Hilbert space, a small amount of control can be added to one end of the chain and, again, universal control can be realised [24]. What are the true limits here? If we only have a spin chain with local dimension 2, and no other control, what evolution can be realised? Once these limits are understood, it is easy to relax the conditions, and add in features that might be easy for a given experiment to implement, while leaving out other features that might be more challenging.

For the past decade, specific tasks within this category have been intensively studied. Perfect state transfer (see, for example, [28–33])—making particular choices of the Hamiltonian parameters such that a single qubit state $| \psi \rangle $ on the first spin at time t = 0 arrives perfectly at the last spin at the state transfer time, t0—is the typical case examined. The same solutions generate entanglement, both bipartite [34] and that required for cluster states [35]. Simple modification of these coupling schemes permits fractional revivals [32, 36–38]—superposing the input state over the two extremal sites of the chain. Meanwhile, modification of the form of the Hamiltonian has demonstrated that other tasks can be achieved, such as the generation of a Greenberger–Horne–Zeilinger state [39].

In this paper, we address the question of what other functions a spin chain can realise (specifically, a nearest-neighbour Hamiltonian in one-dimension that preserves the number of excitations on the chain), moving far away from the small modifications around the central result of perfect state transfer. In response, we show that almost all states comprised of a single excitation (a single $| 1\rangle $ superposed across many sites, while all others are in the $| 0\rangle $ state) and real amplitudes can be deterministically generated by evolving an excitation initially located on a single site, including the important case of the W state of N qubits. We do this by showing that it is sufficient to ensure that the Hamiltonian H1 has eigenvalues which satisfy a particular property, and by fixing one of the eigenvectors.

In section 2, we describe a set of sufficient properties that the Hamiltonian has to satisfy in order to guarantee creation of a target state. In section 3, we then describe a numerical technique that is guaranteed to work to arbitrary accuracy for almost all target states (and we characterise the cases in which it does not work). In section 4, we observe that although the algorithm in section 3 provides a useful existence proof, the corresponding solutions have excessively long times for producing the required states. As such, section 5 constructs some analytic cases that yield optimal state synthesis times, and section 6 uses these as the basis for a perturbative technique to find good solutions—those that produce the target state with high accuracy in the minimum time.

1.1. Setting

In this paper we consider a spin chain comprised of N spins, the Hamiltonian of which is

Equation (1)

$H=\sum_{n=1}^{N}\frac{{B}_{n}}{2}\left({\mathbb{1}}-{Z}_{n}\right)+\sum_{n=1}^{N-1}\frac{{J}_{n}}{2}\left({X}_{n}{X}_{n+1}+{Y}_{n}{Y}_{n+1}\right)$

where Xn, Yn and Zn denote the usual Pauli matrices applied to site n (and ${\mathbb{1}}$ elsewhere). It is excitation preserving,

$\left[H,\sum_{n=1}^{N}{Z}_{n}\right]=0,$

meaning that any one-excitation state, such as $| 1\rangle {| 0\rangle }^{\otimes (N-1)}$, can only evolve into another one-excitation state,

${e}^{-{\rm{i}}{Ht}}| 1\rangle {| 0\rangle }^{\otimes (N-1)}=\sum_{n=1}^{N}{\alpha }_{n}(t)\,{| 0\rangle }^{\otimes (n-1)}| 1\rangle {| 0\rangle }^{\otimes (N-n)},$

where ${\sum }_{n}| {\alpha }_{n}(t){| }^{2}=1$. Indeed, the Hamiltonian when restricted to the first excitation subspace is described as

${H}_{1}=\sum_{n=1}^{N}{B}_{n}| n\rangle \langle n| +\sum_{n=1}^{N-1}{J}_{n}\left(| n\rangle \langle n+1| +| n+1\rangle \langle n| \right),$

where $| n\rangle := {| 0\rangle }^{\otimes (n-1)}| 1\rangle {| 0\rangle }^{\otimes (N-n)}$, yielding

${H}_{1}=\left(\begin{array}{ccccc}{B}_{1} & {J}_{1} & & & \\ {J}_{1} & {B}_{2} & {J}_{2} & & \\ & {J}_{2} & \ddots & \ddots & \\ & & \ddots & \ddots & {J}_{N-1}\\ & & & {J}_{N-1} & {B}_{N}\end{array}\right).$

The matrix H1 is a real, symmetric, tridiagonal matrix where each of the elements can be independently specified, making it ideal for the engineering tasks that we intend to study.
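As a concrete check (ours, not the paper's), one can build the full spin-chain Hamiltonian from Pauli operators and verify that it commutes with the excitation number and that its single-excitation block is exactly the tridiagonal matrix H1; the $\tfrac{1}{2}({\mathbb 1}-Z_n)$ normalisation is an assumption of this sketch, since any other convention differs only by a constant shift.

```python
import numpy as np

# Pauli matrices
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def site_op(P, n, N):
    """P acting on site n (1-indexed), identity elsewhere."""
    out = np.eye(1)
    for m in range(1, N + 1):
        out = np.kron(out, P if m == n else I2)
    return out

N = 4
rng = np.random.default_rng(0)
B = rng.normal(size=N)
J = rng.normal(size=N - 1)

# H = sum_n B_n |1><1|_n + sum_n (J_n/2)(X_n X_{n+1} + Y_n Y_{n+1})
H = sum(B[n] * site_op((I2 - Z) / 2, n + 1, N) for n in range(N))
H = H + sum(J[n] / 2 * (site_op(X, n + 1, N) @ site_op(X, n + 2, N)
                        + site_op(Y, n + 1, N) @ site_op(Y, n + 2, N))
            for n in range(N - 1))

# total excitation number, which should commute with H
num = sum(site_op((I2 - Z) / 2, n + 1, N) for n in range(N))

def ket(n):
    """Single-excitation basis state |n>: the 1 sits at site n."""
    v = np.zeros(2 ** N)
    v[2 ** (N - n)] = 1.0          # site 1 is the most significant qubit
    return v

# restrict H to the single-excitation subspace
H1 = np.array([[np.real(ket(m) @ H @ ket(n)) for n in range(1, N + 1)]
               for m in range(1, N + 1)])
expected = np.diag(B) + np.diag(J, 1) + np.diag(J, -1)
```

The restricted block reproduces the tridiagonal form with Bn on the diagonal and Jn on the off-diagonals, independently of N.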

Our aim is to be able to initialise the spin chain in a separable one-excitation state, $| n\rangle $. Typically, this will be at one end of the chain, say $| 1\rangle $ 1 . We want to find the coupling strengths $\{{J}_{n}\}$ and magnetic fields $\{{B}_{n}\}$ such that the evolution produces

$| {\psi }_{{\rm{T}}}\rangle =\sum_{n=1}^{N}{\alpha }_{n}| n\rangle $

for some particular set of coefficients $\{{\alpha }_{n}\}$ that we specify, perfectly and deterministically, i.e. there will be a time t0 (the 'synthesis time') such that the state of the spin chain is the target state $| {\psi }_{{\rm{T}}}\rangle $. Two states of particular interest that satisfy these properties are W-states (Dicke states) of all, or odd numbered sites:

$| W\rangle =\frac{1}{\sqrt{N}}\sum_{n=1}^{N}| n\rangle ,\qquad | {W}_{\mathrm{odd}}\rangle =\sqrt{\frac{2}{N+1}}\sum_{n=1}^{(N+1)/2}| 2n-1\rangle .$

Note that the second state is only valid as a target state if N is odd, although an even N version can be defined.
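For concreteness, the two target states can be written down in the single-excitation basis (a trivial helper of ours, with $| n\rangle $ represented by the nth standard basis vector):

```python
import numpy as np

def w_state(N):
    """W state over all N sites."""
    return np.ones(N) / np.sqrt(N)

def w_odd(N):
    """W state over the odd-numbered sites; N is assumed odd."""
    v = np.zeros(N)
    v[0::2] = 1.0                  # sites 1, 3, 5, ... (0-indexed 0, 2, 4, ...)
    return v / np.sqrt((N + 1) // 2)

N = 7
W, Wodd = w_state(N), w_odd(N)
```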

A key assumption that we make here is that the target coefficients ${\alpha }_{n}$ are real. This need hardly be considered a limitation—we envisage the use of such spin chains to be in producing specific resource states that might otherwise be challenging to produce accurately and repeatedly. The central resource here is likely to be the entangled nature of the state which is entirely determined by the real amplitudes; complex amplitudes can be generated by local unitaries acting on the state.

1.2. Relevance to other spin chain models

Our study will be far more widely applicable than the initial choice of spin chain, equation (1), might suggest—there are two main classes of spin chain that arise in the literature. The first is the XXZ model,

${H}_{{XXZ}}=\sum_{n=1}^{N}\frac{{\tilde{B}}_{n}}{2}{Z}_{n}+\sum_{n=1}^{N-1}\frac{{\tilde{J}}_{n}}{2}\left({X}_{n}{X}_{n+1}+{Y}_{n}{Y}_{n+1}+{{\rm{\Delta }}}_{n}{Z}_{n}{Z}_{n+1}\right),$

of which the Heisenberg model is the special case ${{\rm{\Delta }}}_{n}=1$. As we are concentrating on the single excitation subspace in this paper, there is a trivial mapping between magnetic fields ${\tilde{B}}_{n}\leftrightarrow {B}_{n}$ and couplings ${\tilde{J}}_{n}\leftrightarrow {J}_{n}$, meaning our results instantly translate. This is equally true of the Hubbard and Bose–Hubbard models. The second class is that of the free-fermion models,

${H}_{\mathrm{ff}}=\sum_{n=1}^{N}{\tilde{B}}_{n}{a}_{n}^{\dagger }{a}_{n}+\sum_{n=1}^{N-1}{\tilde{J}}_{n}\left({a}_{n}^{\dagger }{a}_{n+1}+\gamma \,{a}_{n}^{\dagger }{a}_{n+1}^{\dagger }+{\rm{h}}.{\rm{c}}.\right),$

of which equation (1) is the special case $\gamma =0$ using a standard mapping (the Jordan–Wigner transformation [40]) between Pauli spin operators and the fermionic creation operators ${a}_{n}^{\dagger }$. The key idea here, however, is that the coupling of the fermions in an N-qubit system is described by a $2N\times 2N$ tridiagonal matrix [41]. As soon as we understand how to engineer the properties of H1, we know how to engineer the properties of these systems as well; it is only that the corresponding initial and final states are different, requiring a little more analysis. Moreover, the beauty of these systems is that the evolution of that $2N\times 2N$ matrix conveys everything about the evolution of the entire system, not just a specific subspace (unlike the XXZ, Hubbard and Bose–Hubbard models).

2. Designer states

We aim to find Hamiltonians for which ${{\rm{e}}}^{-{{\rm{i}}{H}}_{1}{t}_{0}}| 1\rangle =| {\psi }_{{\rm{T}}}\rangle $. In almost all cases2 , there is a very simple way that one can attempt to do this—imagine H1 has an eigenvector $| \eta \rangle $ of zero eigenvalue, and all other eigenvalues are half-integer multiples of some factor λ. The evolution after a time ${t}_{0}=2\pi /\lambda $ is

${{\rm{e}}}^{-{{\rm{i}}{H}}_{1}{t}_{0}}=2| \eta \rangle \langle \eta | -{\mathbb{1}},$

because all the eigenvectors have acquired a phase −1 except for $| \eta \rangle $. Consequently, the final state is just

${{\rm{e}}}^{-{{\rm{i}}{H}}_{1}{t}_{0}}| 1\rangle =2\langle \eta | 1\rangle | \eta \rangle -| 1\rangle .$

Thus, by fixing

$| \eta \rangle =\frac{| 1\rangle +| {\psi }_{{\rm{T}}}\rangle }{\sqrt{2(1+{\alpha }_{1})}},$

we have the evolution as desired. We shall denote the components of $| \eta \rangle $ by ${\eta }_{n}=\langle n| \eta \rangle $.
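The argument above is easily checked numerically; the following sketch (ours) builds a Hamiltonian with a prescribed 0-eigenvector $| \eta \rangle \propto | 1\rangle +| {\psi }_{{\rm{T}}}\rangle $, assigns random half-integer multiples of λ to the remaining eigenvalues, and confirms that the evolution for t0 = 2π/λ maps $| 1\rangle $ exactly onto the target.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

# arbitrary real target state with alpha_1 > 0
psiT = rng.normal(size=N)
psiT /= np.linalg.norm(psiT)
if psiT[0] < 0:
    psiT = -psiT
e1 = np.zeros(N); e1[0] = 1.0

# |eta> = (|1> + |psi_T>) / sqrt(2(1 + alpha_1))
eta = e1 + psiT
eta /= np.linalg.norm(eta)

# orthonormal basis containing eta; the rest of the basis is random
Q, _ = np.linalg.qr(np.column_stack([eta, rng.normal(size=(N, N - 1))]))
if Q[:, 0] @ eta < 0:
    Q[:, 0] = -Q[:, 0]             # fix the sign convention of QR

# eigenvalue 0 for eta, half-integer multiples of lam for everything else
lam = 1.0
vals = np.concatenate([[0.0],
                       (np.arange(1, N) - 0.5) * lam * rng.choice([-1, 1], N - 1)])
H1 = (Q * vals) @ Q.T

t0 = 2 * np.pi / lam
U = Q @ np.diag(np.exp(-1j * vals * t0)) @ Q.T
out = U @ e1                       # should equal |psi_T> exactly
```

Note that the non-zero eigenvalues and the remaining eigenvectors are completely arbitrary here, which is precisely the freedom the later sections exploit.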

2.1. Constraints of the technique

Fixing a single eigenvector, and imposing properties on the eigenvalues, immediately guarantees the desired evolution of the single excitation subspace. What are the constraints on the target state for which this can be done? The isolated problem of imposing that a tridiagonal matrix such as H1 has a particular real-valued eigenvector (irrespective of the other eigenvalues) is well understood [42]:

  • the amplitudes of the eigenvector at either end of the chain, ${\eta }_{1}$ and ${\eta }_{N}$, must be non-zero, i.e. ${\eta }_{1}{\eta }_{N}\ne 0$;
  • two consecutive amplitudes cannot both be zero, i.e. if ${\eta }_{n}=0$ for any $n=2,3,\ldots ,N-1$, then ${\eta }_{n-1},{\eta }_{n+1}\ne 0$ 3 .

Hence, this technique immediately rules out the previously studied special cases of perfect state transfer and end-to-end entanglement generation, because these have ${\alpha }_{2}={\alpha }_{3}\,=\,\ldots \,=\,{\alpha }_{N-1}=0$, emphasising the non-uniqueness of our strategy. However, these are the only restrictions—for any other choice of $| {\psi }_{{\rm{T}}}\rangle $, we can always find an arbitrarily good approximation to a matrix with eigenvector $| \eta \rangle $ and the required spectral structure (and by continuity, a perfect solution must exist).

To our knowledge, the task of finding a Hamiltonian with a specific eigenvector and spectrum has not previously been studied, although specifying one or the other is quite common [42, 43]. This task is of independent mathematical interest and as such, we elucidate some of its mathematical properties in appendix A—showing that for specific choices of spectra, sometimes the solution for the Hamiltonian parameters is non-unique, and sometimes no solution exists. However, our task must not be mistaken for that—we are not constrained to using a specific spectrum, only by certain general properties. We only have to show that for any desired $| {\psi }_{{\rm{T}}}\rangle $, and hence $| \eta \rangle $, there exists at least one choice of spectrum for which there is a solution.

3. Arbitrarily accurate solutions

To show that, for any desired $| {\psi }_{{\rm{T}}}\rangle $ satisfying the conditions that ${\alpha }_{N}\ne 0$ and that no two consecutive amplitudes are zero, there exists a spectrum for which H1 can be constructed, we take a technique from [44], where we start with a known Hamiltonian which, in this case will have the correct eigenvector but not spectrum, and find how to perturb the Hamiltonian to correct the spectrum.

We start by considering the eigenvector equation ${H}_{1}| \eta \rangle =0$:

Equation (2)

${J}_{n-1}{\eta }_{n-1}+{B}_{n}{\eta }_{n}+{J}_{n}{\eta }_{n+1}=0,\qquad n=1,\ldots ,N$

(with the convention ${J}_{0}={J}_{N}=0$). We fix ${J}_{1}=1$ and work iteratively. At step n (starting with n = 2), we know ${J}_{n-1}$, allowing us to choose ${J}_{n}={J}_{n-1}$ and hence set ${B}_{n}=-{J}_{n-1}({\eta }_{n-1}+{\eta }_{n+1})/{\eta }_{n}$ if ${\eta }_{n}\ne 0$. Otherwise, we fix ${J}_{n}=-{\eta }_{n-1}{J}_{n-1}/{\eta }_{n+1}$, and choose Bn = 0. At the end of the iteration, all the parameters of H1 are set, and the 0 eigenvector is $| \eta \rangle $. This is precisely the technique for solving inverse eigenmode problems in [42]. We refer to this matrix as Hη, and follow the process:

  • Pick an accuracy parameter ε (smaller than half the smallest gap between eigenvalues in Hη).
  • Truncate the eigenvalues of Hη to the nearest multiple of ε.
  • Shift all the eigenvalues except the 0 value by $\pm \tfrac{1}{2}\varepsilon $. This defines the target spectrum. The choice of ± does not matter, and can be made in order to minimise the change in the eigenvalues, which need never be larger than $\varepsilon /4$. This ensures that the ordering of the eigenvalues is maintained.
  • Take the values $\{\langle 1| {\lambda }_{n}\rangle \}$, where $| {\lambda }_{n}\rangle $ are the eigenvectors of Hη, and use these along with the target spectrum to calculate a new Hamiltonian $\tilde{H}$. This follows a standard technique for inverse eigenvalue problems known as the (inverse) Lanczos algorithm [45], which takes these two sets of parameters as input and returns a tridiagonal matrix with the specified spectrum and values $\{\langle 1| {\lambda }_{n}\rangle \}$.

The output, $\tilde{H}$, is guaranteed to have a spectrum that achieves the desired phases in a time ${t}_{0}=2\pi /\varepsilon $. A solution to this always exists [45]. While the 0 eigenvector is no longer $| \eta \rangle $, but $| {\eta }_{\mathrm{actual}}\rangle $, since $\tilde{H}$ is only a perturbation of Hη, it should not be significantly different.
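The whole pipeline can be sketched end-to-end in a few dozen lines; this is our illustrative NumPy reconstruction (the helper names, the 5-site W-state target, and the choice of accuracy parameter are all ours), not the paper's supporting code.

```python
import numpy as np

def build_H_eta(eta):
    """Tridiagonal matrix with eta as an exact 0-eigenvector (all eta_n != 0)."""
    N = len(eta)
    J = np.ones(N - 1)                     # J_1 = 1, and J_n = J_{n-1} throughout
    B = np.empty(N)
    B[0] = -J[0] * eta[1] / eta[0]
    for n in range(1, N - 1):
        B[n] = -J[n - 1] * (eta[n - 1] + eta[n + 1]) / eta[n]
    B[N - 1] = -J[N - 2] * eta[N - 2] / eta[N - 1]
    return np.diag(B) + np.diag(J, 1) + np.diag(J, -1)

def lanczos_from_data(vals, first):
    """Inverse Lanczos: tridiagonal matrix with spectrum `vals` and
    eigenvector first components proportional to `first` (all non-zero)."""
    n = len(vals)
    A = np.diag(vals)
    Q = np.zeros((n, n)); Q[:, 0] = first / np.linalg.norm(first)
    a, b = np.zeros(n), np.zeros(n - 1)
    for k in range(n):
        w = A @ Q[:, k]
        a[k] = Q[:, k] @ w
        w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)   # full reorthogonalisation
        if k < n - 1:
            b[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / b[k]
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

N = 5                                       # illustrative target: the W state
psiT = np.ones(N) / np.sqrt(N)
e1 = np.zeros(N); e1[0] = 1.0
eta = e1 + psiT; eta /= np.linalg.norm(eta)

Heta = build_H_eta(eta)
vals, vecs = np.linalg.eigh(Heta)
eps = 0.02 * np.min(np.diff(vals))          # accuracy parameter
# move every non-zero eigenvalue to the nearest half-integer multiple of eps
new_vals = np.where(np.abs(vals) < 1e-10, 0.0,
                    eps * (np.round(vals / eps - 0.5) + 0.5))
Ht = lanczos_from_data(new_vals, vecs[0, :])

# evolve |1> for t0 = 2*pi/eps and compare with the target
t0 = 2 * np.pi / eps
w, V = np.linalg.eigh(Ht)
out = V @ (np.exp(-1j * w * t0) * (V.T @ e1))
ov = abs(np.vdot(psiT, out))
```

The assertion threshold below is deliberately loose: the achievable overlap depends on how small ε is chosen relative to the spectral gaps, exactly as in the error analysis that follows.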

By continuity of the spectral properties of the Hamiltonian (as we tend $\varepsilon \to 0$), we infer that a perfect realisation must exist, albeit with arbitrarily long state synthesis time. Thus, as a special case, we can create any state with real, non-zero amplitudes on every site of the chain, including states such as the W state. For example,

with parameter $\varepsilon =0.001687714$ evolves $| 1\rangle \to | \alpha \rangle $ where $\langle \alpha | W\rangle \gt 1-2\times {10}^{-9}$.

How different is the state produced to what we wanted? The state produced by $\tilde{H}$ is ${{\rm{e}}}^{-{\rm{i}}\tilde{H}{t}_{0}}| 1\rangle =| {\psi }_{\mathrm{actual}}\rangle $, and has overlap with the target of

Now, note that a Hamiltonian perturbation V gives, up to normalisation,

$| {\eta }_{\mathrm{actual}}\rangle \approx | \eta \rangle -\sum_{n:{\lambda }_{n}\ne 0}\frac{\langle {\lambda }_{n}| V| \eta \rangle }{{\lambda }_{n}}| {\lambda }_{n}\rangle ,$

where we crudely estimate $| \langle {\lambda }_{n}| V| \eta \rangle | \lesssim \varepsilon $ and $| {\lambda }_{n}| \geqslant {\lambda }_{\min }$ to yield

$\langle \eta | {\eta }_{\mathrm{actual}}\rangle \gtrsim 1-\frac{N{\varepsilon }^{2}}{2{\lambda }_{\min }^{2}}.$
Let us take the typical case in which all the Jn are equal (Jn = 1) and the magnetic fields are zero. The original spectrum is

${\lambda }_{k}=2\cos \left(\frac{\pi k}{N+1}\right),\qquad k=1,\ldots ,N.$

With N odd, there is then a 0 eigenvalue which we will choose to correspond to the 0-value eigenvector that we will tune. Of the other eigenvalues, λ, we have that $| \lambda | \geqslant {\lambda }_{(N-1)/2}={\lambda }_{\min }\sim 1/N$. Meanwhile, the smallest gap, which determines ε, arises at ${\lambda }_{1}-{\lambda }_{2}\sim 1/{N}^{2}$. So, once perturbed, all the Jn are approximately equal, and if we assume 0 magnetic fields, we get $\langle \eta | {\eta }_{\mathrm{actual}}\rangle \sim 1-O(1/N)$, with a t0 scaling as $O({N}^{2})$.
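The two scalings quoted here are easy to confirm numerically; this check is ours, with the constants in the assertions chosen generously:

```python
import numpy as np

# Uniform chain (J_n = 1, B_n = 0): spectrum 2*cos(pi*k/(N+1)), so the
# smallest non-zero |eigenvalue| is ~1/N and the smallest gap is ~1/N^2.
N = 51                                   # odd, so 0 is in the spectrum
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
lam = np.sort(np.linalg.eigvalsh(H))
k = np.arange(1, N + 1)
expected = np.sort(2 * np.cos(np.pi * k / (N + 1)))
lam_min = np.min(np.abs(lam[np.abs(lam) > 1e-9]))
min_gap = np.min(np.diff(lam))
```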

In fact, it is not a priori obvious that a perturbation that only shifts the eigenvalues by no more than $\varepsilon /4$ should satisfy $| \langle {\lambda }_{n}| V| \eta \rangle | \lesssim \varepsilon $. A more rigorous analysis is provided in appendix B that improves the error scaling to $\langle \eta | {\eta }_{\mathrm{actual}}\rangle \sim 1-O(1/{N}^{3})$.

It must be emphasised that we do not propose this algorithm as one that should practically be used; there are a number of shortcomings including that in order for $\varepsilon \to 0$, we require $t\to \infty $. Also, from a practical perspective, perturbations to the Hamiltonian would have to be at the level of $O({\varepsilon }^{2})$ in order to not have too significant an effect, but this is a ridiculous level of accuracy. Instead, the purpose of the algorithm was to show that there is always a solution. It is the focus of the rest of this paper to convey that there are many improvements that can be made such that the state can be created in a time that is independent of the desired accuracy, and at a speed close to the theoretical limits.

4. Speed limits

For a given target state $| {\psi }_{{\rm{T}}}\rangle $, how small can the synthesis time, t0, be made? The shorter the time, the less opportunity there is for noise to build up and overwhelm the device. The crucial issue is the spectral gap—if the smallest eigenvalue gap is Δ, then the minimum value of t0 is $\pi /{\rm{\Delta }}$. Indeed, if that smallest gap arises between a pair of eigenvectors that does not include the 0-eigenvector, ${t}_{0}\geqslant 2\pi /{\rm{\Delta }}$. We consequently want to understand how large Δ can be made, subject to the physically motivated constraint that the maximum coupling strength of the chain is bounded, i.e. ${J}_{n}\leqslant {J}_{\max }$ for all n. In the explicit constructions above, ${\rm{\Delta }}\sim 1/{N}^{2}$, yielding a state synthesis time of $O({N}^{2})$. From the history of perfect state transfer, we know that the uniform coupling chain (on which that construction was based) is far from optimal in terms of transfer time; O(N) is possible. We aim to show that the same is possible for state synthesis. In the abstract, we note that by bounding all coupling strengths ${J}_{n}\in [-{J}_{\max },{J}_{\max }]$, all eigenvalues are constrained in the range $\lambda \in [-2{J}_{\max },2{J}_{\max }]$. With N such gaps, the smallest gap between eigenvalues can be no more than $O(1/N)$, so the synthesis time must be at least of order N.
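The counting argument is elementary but worth making concrete (our sketch, taking zero magnetic fields so that Gershgorin's theorem gives the stated eigenvalue range):

```python
import numpy as np

# With |J_n| <= Jmax and zero fields, all eigenvalues lie in [-2 Jmax, 2 Jmax],
# so among the N eigenvalues some gap is at most 4*Jmax/(N-1) (pigeonhole).
rng = np.random.default_rng(0)
N, Jmax = 40, 1.0
J = rng.uniform(-Jmax, Jmax, size=N - 1)
H = np.diag(J, 1) + np.diag(J, -1)
lam = np.sort(np.linalg.eigvalsh(H))
```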

The challenge is to make a correspondence between the spectral properties of the Hamiltonian H1, which are well characterised for the state synthesis task, and the coupling strengths. Let us assume that H1 is symmetric, meaning ${B}_{n}={B}_{N+1-n}$ and ${J}_{n}^{2}={J}_{N-n}^{2}$, and of odd size $N=2M+1$. Improving the proof technique of [46], we will argue that

Equation (3)

We start by observing that if H1 achieves the state synthesis task, then so does any ${H}_{1}+\gamma {\mathbb{1}}$, because the ${\mathbb{1}}$ only contributes a global phase to the evolution. We resolve this freedom in the magnetic fields by fixing ${B}_{M+1}=0$. Next, observe that the symmetry assumption splits H1 into anti-symmetric and symmetric subspaces with mutually interlacing eigenvalues ${\{{\mu }_{k}\}}_{k=1}^{M}$ and ${\{{\nu }_{k}\}}_{k=1}^{M+1}$ respectively (${\nu }_{k}\lt {\mu }_{k}\lt {\nu }_{k+1}$). All eigenvalues must have an integer spacing, except for a spacing of $\tfrac{1}{2}$ either side of one special eigenvalue. Let us assume this special eigenvalue is ${\mu }_{\tilde{k}}$ (which turns out to be the relevant case, rather than ${\nu }_{\tilde{k}}$). We have that

$\sum_{k=1}^{M+1}{\nu }_{k}-\sum_{k=1}^{M}{\mu }_{k}={\rm{Tr}}({H}_{1}S)={B}_{M+1}=0,$

where $S={\sum }_{n=1}^{N}| n\rangle \langle N+1-n| $. If we use the bounds ${\nu }_{k}\geqslant {\nu }_{1}+2(k-1)-{\delta }_{k\gt \tilde{k}}$ and ${\mu }_{k}\leqslant {\nu }_{k+1}-1+\tfrac{1}{2}{\delta }_{k=\tilde{k}}$, then one readily derives

which is the smallest possible (${M}^{2}-\tfrac{1}{2}$) for the choice ${\nu }_{1}=\tfrac{1}{2}-M$.

Importantly, this construction gives us insight as to how we could realise the optimal solution—by selecting a spectrum $0,\pm 1,\pm 3,\pm 5,\ldots ,\pm (N-2)$, which is very far from the spectrum chosen in section 3. Of course, even for a symmetric target eigenvector, it is not necessary that the Hamiltonian be symmetric, and even our basic premise of fixing a single eigenvector and some basic spectral properties is far from unique, so this construction has limited applicability. We can extend the technique at the cost of removing the possibility of the bound being tight. We start with

$\sum_{n}{\lambda }_{n}^{2}={\rm{Tr}}({H}_{1}^{2})=\sum_{n=1}^{N}{B}_{n}^{2}+2\sum_{n=1}^{N-1}{J}_{n}^{2},$
fixing the Bn through the eigenvector relation, equation (2), so that, under the assumption ${\eta }_{n}\ne 0$,

${B}_{n}=-\frac{{J}_{n-1}{\eta }_{n-1}+{J}_{n}{\eta }_{n+1}}{{\eta }_{n}}.$

By imposing $| {J}_{n}| \leqslant {J}_{\max }$, this reduces to

Having fixed that the special eigenvalue is 0 (we did this implicitly, but we have the freedom to do that thanks to the $\gamma {\mathbb{1}}$ shift), and recognising that all other eigenvalues in $\sum {\lambda }_{n}^{2}$ must be spaced by at least $2\pi /{t}_{0}$, it is clear that the smallest such sum arises from eigenvalues centred on 0, in ± pairs, with the minimum spacing. Thus,

We finally have that

Equation (4)

For example, the W-state requires a time of at least $\pi N/(3\sqrt{2}{J}_{\max })$ in the large N limit.

If we wish to compare with all possible fixed Hamiltonians, or even time-varying excitation-preserving Hamiltonians, subject to the constraint that all coupling strengths are bounded within a range $[-{J}_{\max },{J}_{\max }]$, then we can utilise Lieb–Robinson bounds [47]. These convey that to generate a non-trivial correlation function between two regions separated by a distance L requires at least a time $\sim L$ because there is a finite group velocity for the propagation of correlations. Conventionally, the group velocity in this situation would be evaluated as ${v}_{g}=2{J}_{\max },$ giving an optimal evolution time of $(N-1)/(2{J}_{\max })$. This velocity is borne out by detailed numerical calculations of optimal quantum control in [48] (in the bulk; edge effects can affect finite sized systems), although rigorous calculations of the Lieb–Robinson bound only give ${v}_{g}\leqslant 6{J}_{\max }$ [47]. For instance, if we consider the two operators ${O}_{A}={Z}_{1}$ and OB = ZN, and evaluate

$\sigma (| \psi \rangle )=\langle \psi | {O}_{A}{O}_{B}| \psi \rangle -\langle \psi | {O}_{A}| \psi \rangle \langle \psi | {O}_{B}| \psi \rangle ,$

then at the start of the evolution we have $\sigma (| n\rangle )=0$, while the final state has $\sigma (| \alpha \rangle )=-4{\alpha }_{1}^{2}{\alpha }_{N}^{2}$. Provided ${\alpha }_{1}{\alpha }_{N}$ is not exponentially small, ${t}_{0}\geqslant (N-1)/{v}_{g}$, so the scaling relation is certainly optimal. Note that this Lieb–Robinson time (which we shall take to be ${J}_{\max }{t}_{0}\geqslant (N-1)/2$ later in the paper) is independent of the target state. In fact, this serves to illustrate the crude level of the bound in equation (4)—it is easy to pick target states so that the bound it gives is less strict than the Lieb–Robinson bound. For example, up to normalisation,

with N = 21 has a worse bound if $\alpha \lt 0.9157$.

5. Analytic solutions

We saw in the previous section that our original technique gives a state synthesis time that is far worse than we might hope, scaling as $O({N}^{2})$ instead of O(N). One might hope that the same technique would continue to work when using an initial coupling distribution of ${J}_{n}=\sqrt{n(N-n)}/N$ [29] instead of Jn = 1, which would give an eigenvalue spacing of $1/N$. This can be made to work. For example, starting from ${J}_{n}={(-1)}^{n}\sqrt{n(N-n)}/N$ and ${B}_{n}=-{J}_{n-1}-{J}_{n}$, the eigenvalue gap appears, numerically, to be $O(1/N)$, and so a transfer time O(N) is possible. However, there is necessarily a multiplicative constant overhead to such a scheme (having to choose ε well within the size of the existing energy gap), meaning that state synthesis is, perhaps, an order of magnitude slower than it could theoretically be. In the case of N = 7, for instance, we found a system that produced an output state $| \psi \rangle $ with $\langle W| \psi \rangle =1-2\times {10}^{-6}$, but ${J}_{\max }{t}_{0}=893$, far worse than the ${J}_{\max }{t}_{0}\geqslant 4.51$ suggested by equation (4). We therefore pursue a different technique, starting by producing analytic solutions that have a particular target spectrum and fixed 0-eigenvector, and using these as the input to a perturbative technique to produce useful solutions.

In [49], a set of matrices, which we call the Hahn matrices, was introduced. These are M × M symmetric tridiagonal matrices with diagonal elements

and off-diagonal elements

The Hahn matrices have a spectrum $k(k+2\alpha +1)$ for $k=0,\ldots ,M-1$ [49] and $\alpha \geqslant 0$ 4 . While this spectrum is not the one we desire, the Hahn matrices motivate our new construction of an N × N symmetric tridiagonal matrix ($N=2M+1$) with 0 on the main diagonal and off-diagonal couplings that satisfy

which has a spectrum 0 and $\pm \left(k+\tfrac{2\alpha -1}{2}\right)$ for $k=1,\ldots ,M$. In particular, integer values of α yield a spectrum that is compatible with our spectral condition and give ${t}_{0}=2\pi $. Furthermore, imposing that our new matrix is symmetric requires that ${J}_{M}={J}_{M+1}$. Hence,

which is sufficient to define all the coefficients.

In the case of $\alpha =0$, the spectrum is the one that was used to give the minimum state synthesis time, ${J}_{\max }{t}_{0}=\tfrac{\pi }{2}\sqrt{{M}^{2}-\tfrac{1}{2}}$.

For example, with M = 3 and $\alpha =1$, we start with the Hahn matrix

and subsequently create the size 7 Hamiltonian (in the first excitation subspace)

This has a 0-eigenvector, up to normalisation, of approximately

To see that its spectrum is $0,\pm \tfrac{3}{2},\pm \tfrac{5}{2},\pm \tfrac{7}{2}$, we first observe that for any eigenvector ${\sum }_{n=1}^{7}{\lambda }_{n}| n\rangle $ of eigenvalue λ, there is an eigenvector ${\sum }_{n=1}^{7}{\lambda }_{n}{(-1)}^{n+1}| n\rangle $ with eigenvalue $-\lambda $, so all eigenvalues occur in $\pm \lambda $ pairs, except for 0, which must be there given the odd size of the system. Now we evaluate ${H}_{1}^{2}-\tfrac{9}{4}{\mathbb{1}}$,

and recognise that it splits into two subspaces corresponding to the even and odd matrix elements:

The second of these is our original Hahn matrix. Hence, some of the eigenvalues λ are related to those of the Hahn matrix by ${\lambda }^{2}-\tfrac{9}{4}$, and we know those values to be $0,4,10$. Overall, this imposes that the spectrum must be $0,\pm \tfrac{3}{2},\pm \tfrac{5}{2},\pm \tfrac{7}{2}$ (and this proof strategy is the same for any size of matrix).
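The ± pairing step of this argument holds for any tridiagonal matrix with zero diagonal; a quick numerical confirmation (our sketch, with arbitrary couplings):

```python
import numpy as np

# D H1 D = -H1 with D = diag(1, -1, 1, ...), so eigenvalues come in +-lambda
# pairs, and an odd-sized matrix must then have a 0 eigenvalue.
rng = np.random.default_rng(2)
M = 3
N = 2 * M + 1
J = rng.uniform(0.5, 1.5, size=N - 1)
H1 = np.diag(J, 1) + np.diag(J, -1)       # zero main diagonal
D = np.diag((-1.0) ** np.arange(N))
lam = np.sort(np.linalg.eigvalsh(H1))
```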

6. Perturbative manipulations

If we want a different vector $| \eta \rangle $ to be the 0-eigenvector, we must start from our analytic solution and try to iterate towards an improved solution. It is interesting to observe that for $\alpha =1$, the 0 eigenvector of the previous construction is very close to $| {W}_{\mathrm{odd}}\rangle $—numerically we have created matrices of (odd) size up to 10003, and $\langle {W}_{\mathrm{odd}}| \eta \rangle $ is always at least 0.9995. Equally, this means that the overlap with the W state is approximately $1/\sqrt{2}$. Consequently, it can serve as a crude starting point for numerical schemes—by judiciously changing the signs of the coupling strengths we can guarantee an overlap with any target state of approximately $\left(\sum_{n}| {\alpha }_{2n-1}| \right)\sqrt{2}/\sqrt{N+1}$, which is never too small.

We start with a Hamiltonian ${H}_{1}^{(0)}$ which has magnetic fields ${B}_{n}^{(0)}$ and couplings ${J}_{n}^{(0)}$, which can be used to calculate the characteristic polynomial p(x) of ${H}_{1}^{(0)}$. We aim to find the first order correction to the fields and couplings that steps us towards having the desired spectrum $\{{\lambda }_{n}\}$ (for all eigenvalues except 0) and desired 0-vector $| \eta \rangle $. Let us write $| {G}^{(0)}\rangle ={({B}_{1}^{(0)},{J}_{1}^{(0)},{B}_{2}^{(0)},\ldots {B}_{N}^{(0)})}^{T}$. Then if

$M=\sum_{n=1}^{N}| n\rangle \left({\eta }_{n-1}\langle 2n-2| +{\eta }_{n}\langle 2n-1| +{\eta }_{n+1}\langle 2n| \right)\qquad ({\eta }_{0}={\eta }_{N+1}=0),$

the conditions for $| \eta \rangle $ to be the 0-eigenvector of our new solution are represented as $M| {G}^{(1)}\rangle =0$. Combined with the $N-1$ conditions for getting the eigenvalues (except for the 0-value) correct,

$p({\lambda }_{n})+\langle \underline{{\rm{\nabla }}}p({\lambda }_{n})| \delta G\rangle =0,$

this specifies a linear problem to be solved for the next step, $| \delta G\rangle =| {G}^{(1)}\rangle -| {G}^{(0)}\rangle $. Moreover, the entries of the vector $\underline{{\rm{\nabla }}}p(x)$ are easily evaluated:

$\langle 2n-1| \underline{{\rm{\nabla }}}p(x)\rangle =\frac{\partial p}{\partial {B}_{n}}=-\det \left({[x{\mathbb{1}}-{H}_{1}^{(0)}]}_{n}\right),\qquad \langle 2n| \underline{{\rm{\nabla }}}p(x)\rangle =\frac{\partial p}{\partial {J}_{n}}=-2{J}_{n}^{(0)}\det \left({[{[x{\mathbb{1}}-{H}_{1}^{(0)}]}_{n}]}_{n}\right),$

where ${[R]}_{n}$ denotes a matrix R with its nth row and column removed.
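Because the characteristic polynomial is linear in each diagonal entry, the Bn component of the gradient reduces to a signed minor, $\partial p/\partial {B}_{n}=-\det ({[x{\mathbb{1}}-{H}_{1}]}_{n})$ in the notation above. A finite-difference sanity check of this identity (our sketch, with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5
B = rng.normal(size=N)
J = rng.normal(size=N - 1)

def p(B, J, x):
    """Characteristic polynomial det(x*1 - H1) evaluated at x."""
    H = np.diag(B) + np.diag(J, 1) + np.diag(J, -1)
    return np.linalg.det(x * np.eye(N) - H)

x, n = 0.7, 2
A = x * np.eye(N) - (np.diag(B) + np.diag(J, 1) + np.diag(J, -1))
minor = np.delete(np.delete(A, n, axis=0), n, axis=1)
analytic = -np.linalg.det(minor)            # -det([x*1 - H1]_n)

h = 1e-6
Bp, Bm = B.copy(), B.copy()
Bp[n] += h; Bm[n] -= h
numeric = (p(Bp, J, x) - p(Bm, J, x)) / (2 * h)
```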

This technique works in theory, although in practice the matrices involved are poorly conditioned, meaning that the radius of convergence is too small and anything other than modest system sizes gets trapped too readily in local maxima. Nevertheless, there is significant scope for improvement by, for example, applying appropriate pre-conditioning, and using higher order techniques should it prove desirable to go to system sizes larger than $N\sim 100$. However, we have not explored these options since, as we will argue in section 7, there are practical reasons why it is unlikely to be necessary.

Instead, we have found that sub-optimal techniques, while not providing monotonic convergence, often happen to yield higher quality solutions by not getting trapped in local maxima (or reach a sufficiently accurate point that the above calculation does converge). In particular, the supporting calculations, provided via a Mathematica workbook [50], adopt the following technique:

  • Start with a Hamiltonian H1 (couplings Jn and fields Bn).
  • Change the signs of the couplings to
    This does not change the spectrum of the Hamiltonian, but given that the sign changes in the couplings determine the ordering of the eigenvalues, it may be that the 0-eigenvector is changed.
  • Define a perturbation
    and corresponding perturbed Hamiltonian ${H}_{p}={H}_{1}+\delta V$, with $\delta =\min (1,\varepsilon /\parallel V\parallel )$ for some $\varepsilon \ll 1$. Note that the sign choice of the $\{{J}_{n}\}$ minimised the norm of V, making it as close to being a perturbation as possible.
  • Calculate the eigenvectors $| {\tilde{\lambda }}_{n}\rangle $ of Hp.
  • Calculate a new Hamiltonian with the desired spectrum starting from the eigenvector overlaps $\{\langle 1| {\tilde{\lambda }}_{n}\rangle \}$ by using the (inverse) Lanczos algorithm.

The overall step is isospectral by construction, and should provide a small ($O(\varepsilon )$) improvement in the accuracy of the target eigenvector. Thus, repetition is anticipated to drive us towards a good solution, should one exist, modulo some possible disturbance introduced by the reordering of the eigenvectors due to the sign changes of the coupling strengths.
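The isospectral character of the step is easy to demonstrate; in the sketch below (ours), a small random tridiagonal perturbation stands in for the specific V defined above, since the point being illustrated is only that the Lanczos rebuild restores the target spectrum exactly while the eigenvectors shift by $O(\delta )$.

```python
import numpy as np

def lanczos_from_data(vals, first):
    """Inverse Lanczos: tridiagonal matrix with spectrum `vals` and
    eigenvector first components proportional to `first`."""
    n = len(vals)
    A = np.diag(vals)
    Q = np.zeros((n, n)); Q[:, 0] = first / np.linalg.norm(first)
    a, b = np.zeros(n), np.zeros(n - 1)
    for k in range(n):
        w = A @ Q[:, k]
        a[k] = Q[:, k] @ w
        w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)   # full reorthogonalisation
        if k < n - 1:
            b[k] = np.linalg.norm(w)
            Q[:, k + 1] = w / b[k]
    return np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

rng = np.random.default_rng(5)
N = 7
J = rng.uniform(0.5, 1.5, size=N - 1)
Bf = rng.normal(scale=0.2, size=N)
H1 = np.diag(Bf) + np.diag(J, 1) + np.diag(J, -1)
spectrum = np.linalg.eigvalsh(H1)           # the spectrum we want to keep

# stand-in perturbation (random, tridiagonal, symmetric)
d0 = rng.normal(size=N)
d1 = rng.normal(size=N - 1)
V = np.diag(d0) + np.diag(d1, 1) + np.diag(d1, -1)
eps = 1e-2
delta = min(1.0, eps / np.linalg.norm(V, 2))

w, vecs = np.linalg.eigh(H1 + delta * V)
H_new = lanczos_from_data(spectrum, vecs[0, :])   # rebuild with target spectrum
```

Repeating this step with the paper's choice of V is what drives the first-site overlaps, and hence the 0-eigenvector, towards the target.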

For example, figure 1 depicts the evolution of a 21 qubit system which performs the evolution $| 1\rangle \to | \psi \rangle $ where $\langle W| \psi \rangle \approx 1-2\times {10}^{-15}$. It is clear, however, that our calculations can still be significantly improved—the results for 99 spins, as used in figure 2, demonstrate that random perturbations can easily find improvements in the Hamiltonian over and above those which we realised with the above formulation.

Figure 1. Time evolution of the probability of the excitation being at each site of a 21 qubit chain, approximating the evolution of $| 1\rangle $ to the W-state.

Figure 2. Overlap of the output with the target W state in a chain of 99 spins, comparing the average over 400 instances with the best sample, where each magnetic field and coupling strength is altered by multiplication by a factor chosen uniformly at random from the range $1\pm x$.


With regards to the optimal speed, the 21-qubit example of figure 1 gives ${J}_{\max }{t}_{0}=33.1$. Equation (4) specifies that ${J}_{\max }{t}_{0}\geqslant 14.9;$ there appears to be some margin for improvement within the bounds of the technique presented in this paper, but some proportion of the gap must be attributed to the crude nature of the bound: to saturate it would require every coupling strength to be equal, which cannot happen. Equally, the bound for the symmetric case is ${J}_{\max }{t}_{0}\geqslant 33.0;$ this does not strictly apply because the output is not symmetric, but, as a tight constraint on symmetric systems, it perhaps gives a more realistic indication of the achievable value. Moreover, comparing to the Lieb–Robinson bound for any Hamiltonian, including time-dependent ones, the optimal relation (ignoring any edge effects) would be ${J}_{\max }{t}_{0}\geqslant 10$, which is surprisingly close!

7. Susceptibility to errors

Any real-world implementation of these ideas will naturally experience some variance from the ideal, either in the form of imperfections in the manufacturing process, manifesting as a perturbation to the Hamiltonian, or in the form of noise. Studying these effects is a broad topic, but we provide some preliminary indications about the effects of both error sources.

For imperfections in the manufacturing process, we note that one of the advantages of the fixed Hamiltonian scheme is that we can analyse the performance of a manufactured device in advance of using it, and potentially even make slight adjustments (such as the evolution time) to partially compensate for errors. Indeed, we could manufacture multiple copies of the device and use the best one. Nevertheless, errors will still creep in. We have chosen to numerically study the effect on the final state of randomly altering each coupling strength and magnetic field by up to a fixed percentage. This percentage shift, as opposed to an absolute shift, arises more naturally in some scenarios such as evanescently coupled waveguides [21], where the coupling of two waveguides separated by a distance x has the form ${J}_{0}{{\rm{e}}}^{-\mu x}$, so an absolute error in position $\delta x$ corresponds to a multiplicative error ${{\rm{e}}}^{-\mu \delta x}$. The effects are demonstrated in figure 2 for varying levels of inaccuracy for a chain of 99 spins attempting to create a W-state, constructed according to section 6. The effects are remarkably modest.
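As an indication of how such a numerical study can be set up, here is a minimal sketch (not the code used for figure 2; the arrays `J`, `B`, the time `t` and the target state are assumed to be supplied by the engineering procedure of section 6):

```python
import numpy as np

def chain_hamiltonian(J, B):
    # Single-excitation block: fields B_n on the diagonal,
    # couplings J_n on the off-diagonals.
    return np.diag(B) + np.diag(J, 1) + np.diag(J, -1)

def perturbed_fidelity(J, B, t, target, x=0.0, rng=None):
    # Multiply each coupling and field by a factor drawn uniformly from
    # [1 - x, 1 + x], evolve |1> for time t under the perturbed chain,
    # and return the overlap-squared with the target state.
    rng = np.random.default_rng() if rng is None else rng
    Jp = J * rng.uniform(1 - x, 1 + x, size=len(J))
    Bp = B * rng.uniform(1 - x, 1 + x, size=len(B))
    vals, vecs = np.linalg.eigh(chain_hamiltonian(Jp, Bp))
    psi0 = np.zeros(len(B), dtype=complex)
    psi0[0] = 1.0                      # excitation starts at site 1
    psi = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
    return abs(np.vdot(target, psi)) ** 2
```

Averaging `perturbed_fidelity` over many random draws at each value of x yields the kind of data plotted in figure 2.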

On the other hand, noise is always going to be the greater problem, limiting the practically useful size of a spin chain (just as will be the case for state transfer, although error correction techniques are slowly being understood in that context [51]). Consider dephasing noise as an example: the appearance of a single Z error randomly in the system is not too detrimental to the final state. To see this, consider decomposing the error in terms of the Majorana fermions

The purpose in doing this is that under the action of our Hamiltonian (represented in the single excitation subspace by H1), these fermions evolve independently according to

For an error Zn at time t, we can consider the fidelity as a figure of merit:

The majority of the terms in this sum do not have any effect—typically, the fidelity is only reduced by an amount $1/N$ for each error. Hence, O(N) errors can be tolerated during the evolution, while only reducing the fidelity by a finite amount. For a given chain length, there will certainly be a threshold per-qubit error rate below which the resulting output state is of sufficiently high fidelity. However, the fastest evolution requires a time O(N), and there are N qubits involved meaning that a constant per-qubit error rate introduces $O({N}^{2})$ errors during the evolution. As system sizes scale, it will become impossible to find a practical working window for the noise rate, just as it does for state transfer [51]. This is one motivating factor behind concentrating on only modest sized systems in section 6.
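A back-of-envelope check of this scaling (assuming, purely for illustration, that errors act independently and each costs the typical 1/N of fidelity):

```python
import math

# If each of k independent Z errors typically reduces the fidelity by 1/N,
# then F ~ (1 - 1/N)**k. Taking k = N (i.e. O(N) errors) leaves a constant:
N = 99                      # chain length as in figure 2
F = (1 - 1 / N) ** N
print(F)                    # approaches 1/e ~ 0.368 for large N
```

By contrast, a constant per-qubit rate over the O(N) evolution time gives $k=O({N}^{2})$ errors and $F\sim {(1-1/N)}^{{{cN}}^{2}}\to 0$, matching the argument above.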

8. Conclusions

We have shown that a spin chain can be engineered to deterministically create almost any single excitation state of real amplitudes from its time evolution, vastly extending their utility. Numerically, our outputs give Hamiltonians that yield close to the target state in a time that is within a modest (i.e. O(1)) multiplicative factor of being optimal, and are remarkably robust to manufacturing imperfections. While we require manipulation of both magnetic fields and coupling strengths, all the magnetic fields can be set to 0 simply by replacing the chain by one of length $2N+1$, and instead trying to produce the state ${\sum }_{n}{\alpha }_{n}| 2n-1\rangle $ using a target spectrum whose non-zero eigenvalues occur in $\pm \lambda $ pairs. For example, to generate a 7-qubit W-state, it could be easier to produce a 15-qubit $| {W}_{\mathrm{odd}}\rangle $ state and only manipulate the coupling strengths. The cost is an approximate doubling of the state synthesis time. As described in section 1.2, all our results can readily be applied to local free-fermion models such as the transverse Ising model, or any one-dimensional excitation preserving nearest-neighbour Hamiltonian such as the Heisenberg model.

The assumption that the target state has real coefficients ${\alpha }_{n}$ was central to our derivation. We do not consider this a serious limitation, as the entanglement resource produced by the spin chain is not affected by the ability to manipulate the complex phases—these are a local property of the state. Viewed from an alternative perspective, if a party requests an N qubit state, that implies an ability to do something with those N qubits; one would not request it in order to let it just decohere, unobserved. That ability might not extend to universal computation, the state being required precisely to elevate those abilities to greater computational power, as in the local operations and classical communication paradigm, or measurement-based quantum computation. It might be as simple as the ability to measure the qubits, in which case the implementation of local phases can be incorporated into the choice of measurement basis. If the aim is more than just measurement, the user probably has the ability to implement the local phases themselves. Either way, it does not matter that we only produce a state with real amplitudes. Nevertheless, one method to realise complex amplitudes ${\alpha }_{n}$ is by extending the Hamiltonian model and applying techniques described in [52]. If the couplings Jn produce the target state with amplitudes $| {\alpha }_{n}| $, then replacing each term in the Hamiltonian using

would be sufficient, as this is equivalent to applying a unitary rotation with diagonal elements ${{\rm{e}}}^{{\rm{i}}\mathrm{Arg}({\alpha }_{n})}$ on the first excitation subspace of the Hamiltonian.

Any target state with no consecutive zero amplitudes can be realised. To get consecutive zeros, one could examine the technique that [42] specifies for fixing two eigenvectors of a matrix. While this gives no control over the spectrum, the procedure of section 3 can be applied to get a high accuracy solution, and hence demonstrates that solutions exist. However, this can give no more than two consecutive zeros6 . The challenge is to design systems that produce states with many 0 amplitudes, which is likely to require inordinate control over most of the eigenvectors. This is addressed in [53]. The additional advantage of such tuning is that it should be possible to select a much tighter spectrum, with eigenvalue gaps of $\pi /{t}_{0}$, as compared to the majority being $2\pi /{t}_{0}$ in this work. We anticipate that this would approximately halve the value of ${J}_{\max }{t}_{0}$, substantially closing the gap to the Lieb–Robinson speed limit.

While one might argue that, conceptually, our results are not new—the possibility of universal quantum computation on a spin chain [12, 13] implies that any state can be made—there is a world of difference. Perhaps most damning is that results such as [12] do not give deterministic operation. Instead, there is a very small success probability, vanishing as some power of N, and one has to repeat until success. In that sense, those schemes are not free from user interaction. This is in complete contrast to our scheme, wherein the state is guaranteed to be produced with high accuracy at a particular time, and we have shown that our scheme is within a modest overhead of being the fastest that it could be. Furthermore, the universal Hamiltonian schemes require large local Hilbert spaces with unrealistic Hamiltonians, while our scheme is designed with 'standard' models in mind, which are abstractions of commonly arising interactions. In those schemes, system initialisation, while using a product state, is nevertheless complex in order to programme the necessary commands; the output is in a subspace, and possibly encoded (and production of an encoded version of the target state is entirely different to producing the state itself). Meanwhile, our results create the state itself, and system initialisation is as simple as 'cooling' to the ${| 0\rangle }^{\otimes N}$ state and setting a single spin to $| 1\rangle $. The consequence is a realistic proposition with good experimental prospects, particularly using evanescently coupled waveguides. The basic technology has already been shown to work for perfect state transfer in [21], and the present setting is even more appropriate; for the tasks considered here, the only input of interest is a single excitation, not a superposition of states, so one does not require the additional lengths of more recent experiments [23, 54]. However, the efficacy of such a scheme would have to be compared to other methods such as [22, 55].

Of course, the assumptions made here are not appropriate to all experimental scenarios, but should act as a bound. Relaxing those assumptions and reintroducing some relatively simple-to-implement experiment-dependent controls can only improve the situation, and we now know that such solutions are possible. This might be considered akin to the vast explosion of state transfer schemes (see [33] and references therein), tuned to a variety of different physical implementations and physical effects, after it was demonstrated that perfect transfer as a concept was possible [29].

Acknowledgments

We would like to thank L Banchi and G Coutinho for introductory conversations. This work was supported by EPSRC grant EP/N035097/1.

Appendix A. The mathematical task of imposing a spectrum and eigenvector on a tridiagonal matrix

Problem 1. Given a real, normalised vector

such that ${\eta }_{1}{\eta }_{N}\ne 0$ and no two consecutive values ${\eta }_{n}$ and ${\eta }_{n+1}$ are both zero, and a set of distinct real numbers ${\rm{\Lambda }}={\{{\lambda }_{n}\}}_{n=1}^{N}$, find a real, symmetric, tridiagonal matrix H1 with eigenvalues Λ such that ${H}_{1}| \eta \rangle =\eta | \eta \rangle $ ($\eta \in {\rm{\Lambda }}$).

To our knowledge, the construction of tridiagonal matrices with a specific spectrum and a specific eigenvector has not been studied, although the independent questions of inverse eigenvalue [43] and inverse eigenmode [42] problems have been examined. As such, we are interested in categorising when solutions to problem 1 exist, and how to find them.
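To make the object of problem 1 concrete, a candidate solution $({B}_{n},{J}_{n})$ is easily checked numerically. A minimal sketch in Python/NumPy (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def solves_problem_1(B, J, spectrum, eta, tol=1e-9):
    # H1 is real, symmetric and tridiagonal: fields on the diagonal,
    # couplings on the off-diagonals.
    H1 = np.diag(B) + np.diag(J, 1) + np.diag(J, -1)
    # Condition 1: H1 has the prescribed spectrum.
    spec_ok = np.allclose(np.sort(np.linalg.eigvalsh(H1)),
                          np.sort(spectrum), atol=tol)
    # Condition 2: |eta> is an eigenvector (its eigenvalue is then
    # automatically a member of the spectrum).
    Heta = H1 @ eta
    lam = eta @ Heta
    vec_ok = np.allclose(Heta, lam * eta, atol=tol)
    return spec_ok and vec_ok
```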

We start by making an observation about the necessary pattern of signs of the coupling strengths such that a specified eigenvector can correspond to a particular eigenvalue in the ordered sequence. Recall [42] that if all the Jn are negative, the eigenvector with the nth largest eigenvalue has N − n sign changes in its amplitudes. In order to ensure that a particular eigenvector $| \eta \rangle $ has the nth largest eigenvalue, find a diagonal matrix D, with ${D}^{2}={\mathbb{1}}$, such that $D| \eta \rangle $ has N − n sign changes. If a matrix H1 has coupling strengths Jn which are all negative, and an eigenvector $D| \eta \rangle $ which has N − n sign changes, and thus has the nth largest eigenvalue, then the matrix ${{DH}}_{1}D$ has the same magnetic fields, coupling strengths that are the same up to sign changes

and $| \eta \rangle $ is an eigenvector. Moreover, since D is unitary, the transformation is isospectral, and $| \eta \rangle $ must have the nth largest eigenvalue.
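The effect of conjugating by D can be verified directly; a small numerical sketch with arbitrary illustrative values:

```python
import numpy as np

# Conjugating a tridiagonal H1 by a diagonal sign matrix D (D^2 = 1) keeps
# the fields B_n and the spectrum, flips the sign of J_n wherever D_n and
# D_{n+1} differ, and maps each eigenvector |v> to D|v>.
rng = np.random.default_rng(0)
J = -rng.uniform(0.5, 1.5, 4)               # all-negative couplings
B = rng.uniform(-1.0, 1.0, 5)               # arbitrary magnetic fields
H1 = np.diag(B) + np.diag(J, 1) + np.diag(J, -1)
D = np.diag([1.0, -1.0, -1.0, 1.0, -1.0])   # one choice of sign pattern
H2 = D @ H1 @ D                             # isospectral by unitarity of D
```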

Lemma 1. Specifying a spectrum and a target eigenvector is insufficient to yield a unique solution.

Proof. By uniqueness, we mean choice of the values $\{{J}_{n}^{2}\}$—changing the signs of the Jn is a triviality which we want to discount. The Hamiltonian

where ${J}_{2}=-\sqrt{45}/{J}_{1}$ has spectrum $0,\pm 3,\pm 5$ and the 0-eigenvector is $| W\rangle $ for two distinct values of ${J}_{1}^{2}$:

Lemma 2. Problem 1 does not always have a solution.

Proof. It suffices to find a counterexample. To that end, fix N = 5 and $| \eta \rangle =| {W}_{\mathrm{odd}}\rangle $ with a target spectrum of $\{0,\pm 3,\pm 5\}$ (note that this example is of particular relevance to our studies of state synthesis). Requiring ${H}_{1}| \eta \rangle =0$ immediately restricts the structure to

We then fix $0={\sum }_{n}{\lambda }_{n}=\mathrm{Tr}({H}_{1})={B}_{2}+{B}_{4}$, i.e. ${B}_{4}=-{B}_{2}$. Next, $\mathrm{Tr}({H}_{1}^{3})=0=6{B}_{2}({J}_{1}^{2}-{J}_{4}^{2})$. We take the two cases of ${B}_{2}=0$ and ${J}_{1}^{2}={J}_{4}^{2}$ separately. If ${B}_{2}=0$, then we can solve ${J}_{1}^{2}$ and ${J}_{4}^{2}$ simultaneously in

There are no non-negative solutions. Similarly, for ${J}_{1}^{2}={J}_{4}^{2}$, one has to simultaneously solve

which, again, has no solutions. □

Appendix B.: Error analysis of perturbative method

In section 3 we described a method for creating arbitrarily good solutions, at the cost of increasing state synthesis time. We gave a reasonable, but unjustified, assessment of the accuracy of the scheme. In this appendix, we give a more rigorous argument. Recall that $| \eta \rangle $ is the target 0-eigenvector, while $| {\eta }_{\mathrm{actual}}\rangle $ is the 0-eigenvector that our perturbed system actually has. We estimate $F=\langle \eta | {\eta }_{\mathrm{actual}}\rangle $ as an accuracy parameter. By construction, F is real since both $| \eta \rangle $ and $| {\eta }_{\mathrm{actual}}\rangle $ are real. If U and $\tilde{U}$ diagonalise Hη and $\tilde{H}$ respectively, then the calculation of F is equivalent to $\langle m| {U}^{\dagger }\tilde{U}| m\rangle $ where m is the index of the relevant eigenvector: $U| m\rangle =| \eta \rangle $. However, U and $\tilde{U}$ must be very similar, so we choose an expansion

which maintains unitarity and the limit $\tilde{U}\to U$ as $\varepsilon \to 0$, where K is Hermitian [56]. Expanding for small ε,

Since F is real while the first order term is purely imaginary (the diagonal of K being real), the diagonal of K must be 0, such that we are left with the second order term, as required.
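For concreteness, with the natural parametrisation $\tilde{U}=U{{\rm{e}}}^{{\rm{i}}\varepsilon K}$ (our assumption, chosen to satisfy the stated unitarity and $\varepsilon \to 0$ requirements), the expansion reads:

```latex
F = \langle m|U^{\dagger}\tilde{U}|m\rangle
  = \langle m|\mathrm{e}^{\mathrm{i}\varepsilon K}|m\rangle
  = 1 + \mathrm{i}\varepsilon\langle m|K|m\rangle
    - \frac{\varepsilon^{2}}{2}\langle m|K^{2}|m\rangle + O(\varepsilon^{3}),
```

so reality of F removes the first order term, consistent with the ${\varepsilon }^{2}\langle m| {K}^{2}| m\rangle $ scaling used in the remainder of this appendix.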

Having shown that the error term scales as ${\varepsilon }^{2}\langle m| {K}^{2}| m\rangle $, the ε dependence is immediate, but the N dependence is suppressed. Following [56], we can derive that $\langle m| {K}^{2}| m\rangle ={\sum }_{n}| {U}_{{nm}}{| }^{2}{G}_{n}^{2}$ where G is a diagonal matrix satisfying

Equation (B1)

and em is the difference between the mth largest intended and actual eigenvalues as a fraction of ε. Consider

which is N times larger than the average error, and no smaller than the worst-case error. If $| G\rangle $ solves

(which must have a solution, even if $V={\sum }_{n,m}| {U}_{{nm}}{| }^{2}| m\rangle \langle n| $ is singular), then the error estimate is simply $\langle G| G\rangle $. Thus, if ζ is the smallest non-zero singular value of V, we have

To demonstrate that the scaling is not pathological, we study the special case in which Hη has Jn = 1 and Bn = 0 for all n. This is particularly pertinent to the creation of a W state. We have that

To find the eigenvalues, observe that for $N\gt 5$, the states

span a three-dimensional subspace in which the Hamiltonian may be represented as

The Hamiltonian restricted to the remaining subspace squares to ${\mathbb{1}}/(4(N+1))$. Thus, the smallest absolute eigenvalue is $1/(2\sqrt{N+1})$. Hence, $\sum {G}_{n}^{2}\sim N$, and the error dependence is $O({\varepsilon }^{2}N)$ in the worst case, but one anticipates that in typical cases, the dependence on N is much weaker.

Footnotes

  • Strictly, if we care about speed, we should actually start the excitation from the middle of the chain.

  • Here, and throughout this paper, 'almost all' is used in the mathematical sense that the set of target states for which it is not possible is of measure zero. For what the spin chain strictly produces, this is a statement about the size of the set of real vectors compared to the size of the set of real vectors with two or more consecutive zeros.

  • Reference [42] specifies a further property on sign changes between ${\eta }_{n-1}$ and ${\eta }_{n+1}$ if ${\eta }_{n}=0$ because they imposed that all the Jn should be negative. We make no such imposition.

  • In fact, [49] restricted the values of α more strongly, but this was because other specific properties of the spectrum were required. Reference [49] also gives the eigenvectors of these matrices in terms of the Hahn polynomials.

  • Up to some signs which can be corrected by judiciously changing the signs of the couplings.

  • For two eigenvectors $| {\eta }^{1}\rangle $ and $| {\eta }^{2}\rangle $ necessary conditions on there being a corresponding tridiagonal matrix include that ${s}_{n}={t}_{n}=0$ or ${s}_{n}{t}_{n}\gt 0$ for each n = 1,..., N, where ${s}_{n}={\sum }_{m=1}^{n}{\eta }_{m}^{1}{\eta }_{m}^{2}$ and ${t}_{n}={\eta }_{n}^{1}{\eta }_{n+1}^{2}-{\eta }_{n+1}^{1}{\eta }_{n}^{2}$. However, the condition of two consecutive zeros is ${\eta }_{n}^{1}+{\eta }_{n}^{2}=0$ and ${\eta }_{n+1}^{1}+{\eta }_{n+1}^{2}=0$, which in turn means tn = 0, requiring ${\eta }_{n}^{1}={\eta }_{n}^{2}=0$ such that sn = 0. This allows two zeros together, but to add a third consecutive zero would require two consecutive zeros in both eigenvectors.
