
Bethe Ansatz and Q-operator for the open ASEP


Published 9 July 2014 © 2014 IOP Publishing Ltd
Citation: Alexandre Lazarescu and Vincent Pasquier 2014 J. Phys. A: Math. Theor. 47 295202. DOI: 10.1088/1751-8113/47/29/295202


Abstract

In this paper, we look at the asymmetric simple exclusion process with open boundaries with a current-counting deformation. We construct a two-parameter family of transfer matrices which commute with the deformed Markov matrix of the system. We show that these transfer matrices can be factorized into two commuting matrices with one parameter each, which can be identified with Baxter's Q-operator, and that for certain values of the product of those parameters, they decompose into a sum of two commuting matrices, one of which is the usual one-parameter transfer matrix for a given dimension of the auxiliary space. Using this, we find the TQ equation for the open ASEP, and, through functional Bethe Ansatz techniques, we obtain an exact expression for the dominant eigenvalue of the deformed Markov matrix.


Introduction

The asymmetric simple exclusion process (ASEP), a one-dimensional discrete lattice gas model with hard core exclusion and biased diffusion, is one of the most extensively studied models of non-equilibrium statistical physics [1–4]. There are several good reasons for this. First of all, it is a simple, physically reasonable toy model, yet its behaviour is complex enough to be interesting. Furthermore, it has the mathematical property of being integrable, which makes it a good candidate for exact calculations. It is also related to other models, like growing interfaces [5], the XXZ spin chain [6], or certain random matrices [7], which makes its study relevant not only for itself, but for a wide range of physical systems.

One of the fundamental quantities describing the behaviour of the ASEP, as a particle gas driven out of equilibrium, is the current of particles that flows through it in its steady state. That current is closely related to the entropy production in the system [8], which is the defining characteristic of non-equilibrium systems, and is therefore of particular importance. The statistics of that current can be described through its large deviation function [9], which is the rescaled logarithm of its probability distribution, or, equivalently, through the generating function of its cumulants. This generating function can be expressed as the maximal eigenvalue of an integrable deformation of the Markov matrix of the system. That method was used in [10], in conjunction with the coordinate Bethe Ansatz [11], to find the generating function of the cumulants of the current in the periodic totally asymmetric simple exclusion process (TASEP), a simpler variant where the particles move only in one direction, and later in the periodic ASEP [12]. For the open ASEP, unfortunately, the presence of reservoirs makes it impossible to number the particles, which makes the coordinate Bethe Ansatz inapplicable. The generating function of the cumulants of the current was first obtained in [13] in the large size limit and in certain phases of the system. An exact general expression was then conjectured in [14] for the TASEP and in [15] for the ASEP (through calculations [16] relying on the famous matrix Ansatz [17] that describes the steady state of the system); their structure was found to be extremely similar to that of the periodic case, but no rigorous proof, and no precise explanation for that similarity, was found. Providing both is what we propose to do in this paper.

There is a version of the Bethe Ansatz more versatile than the coordinate one, called the algebraic Bethe Ansatz [18], which can in principle be used for the open ASEP. Its formulation comes from the fact that the Markov matrix of the ASEP (or the Hamiltonian of the XXZ spin chain) can be expressed in relation to the row-by-row transfer matrix of the six-vertex model [19]. That transfer matrix commutes with the Markov matrix, and depends on a spectral parameter. It has the form of a product of local tensors (Lax matrices), traced over an auxiliary space which is usually of dimension 2, like the physical space of a single site, although any dimension can be chosen. That product of Lax matrices, if the trace on the auxiliary space is not taken, is called the monodromy matrix of the system, and creation and annihilation operators for the particles can be extracted from it, which depend on their own spectral parameters. The algebraic Bethe Ansatz then consists in finding a trivial eigenvector of the transfer matrix (a vacuum state), on which the creation operator is then applied to obtain the other eigenvectors. Commutation relations between the transfer matrix and the creation operators, depending on their respective spectral parameters, produce the same Bethe equations as for the coordinate Bethe Ansatz, where the role of the Bethe roots is played by those spectral parameters.

In the case of a periodic system, with a fixed number of particles (or a fixed magnetisation sector), that vacuum state can be chosen as either completely empty, or completely full, which are two trivial eigenvectors of the system. In the case of an open system, there are two extra difficulties. Firstly, the transfer matrix is more complicated, and involves two rows of the vertex model instead of one, with certain reflection operators at the boundaries [20, 21]. This makes things harder, but not intractable. The second difficulty, however, is more serious: for an open system, with no occupation sectors and no trivial eigenvectors, we do not know, in general, how to find a suitable vacuum state. Such states have been found for certain special boundary conditions, such as triangular boundary matrices [22, 23], or full matrices with constraints on their coefficients [24–27], for which some pseudo-particles are conserved and the full construction can be performed. Recent progress has also been made for a semi-infinite chain [28, 29], which has only one boundary, through a method alternative to the Bethe Ansatz. The vacuum state for completely general boundary conditions, however, remains elusive.

There is however a way to obtain the eigenvalues of the Markov matrix (or Hamiltonian, or transfer matrix) we are interested in without having to deal with the eigenvectors at all. This can be done through Baxter's so-called 'Q-operator' method. It was first used as an alternative method to solve the six-vertex model [19], but was later discovered to be, in fact, a limit of the transfer matrix with an infinite-dimensional auxiliary space [30, 31]. Certain algebraic relations between the Q-operator and the transfer matrix, called 'TQ' relations, allow one to obtain the functional Bethe Ansatz equations for the eigenvalues directly, without need of the eigenvectors [32–36]. However, even with that method, the open case was solved only under certain constraints on the boundary parameters [36–38] (in the case of the ASEP deformed to count the current, those constraints involve all four boundary parameters, the current-counting fugacity, and the size of the system). More recently, a variant of the TQ equation was devised, for the XXZ spin chain [39–41] as well as in the XXX case [42, 43], where an extra inhomogeneous term is added to the equation. This extra term allows one to define a polynomial equivalent of the Q-operator which satisfies the same relation with the eigenvalue of the corresponding Hamiltonian (the usual Q-operator cannot be polynomial itself for general values of the boundary parameters, unless the aforementioned constraints are verified). While that method applies to the situation we are considering, it is for the moment unclear how it compares with the one presented in this paper. We will come back to this issue in the conclusion.

In this paper, we show that it is in fact possible to treat the most general case without introducing an inhomogeneity, by constructing explicitly the Q-operator (as well as the P-operator for the 'other side of the equator' [44]) for the open ASEP with any boundary parameters and current-counting deformation, and obtaining the functional Bethe equations for the eigenvalues of the deformed Markov matrix. We construct a transfer matrix with an auxiliary space of infinite dimension and two spectral parameters instead of one (the second of which is what is usually used as a representation parameter of the Uq(SU(2)) algebra [45] and fixed to a specific value, but we will see that it is essential to us to treat it as a free parameter). This is a natural generalization of the transfer matrix presented in [16], with the boundary vectors playing the part of the reflection operators we mentioned. We identify that transfer matrix as the product of P and Q, and we find that, taking special values of these two spectral parameters, we can recover the usual one-parameter transfer matrices with any auxiliary space dimension, and the T-Q relations for all of those matrices, as well as the corresponding fusion rules. This is the reverse of the usual construction, where Q is obtained through the fusion of an infinite number of two-dimensional transfer matrices.

In the first section, we consider the periodic case, where everything is known from the coordinate Bethe Ansatz, as a benchmark for our construction, and we connect this approach to that of the functional Bethe Ansatz presented in [12, 46], noticing that the polynomials P and Q that are constructed there are the eigenvalues of the operators P and Q that we consider here. In the second section, we apply the same method to the open ASEP, and see that the addition of the boundaries does not modify the structure of the Q-operator. This allows us to use the same method as in [12] to obtain the expression of the generating function of the cumulants of the current in the open ASEP that was conjectured in [15]. We settle, in passing, the question of how the matrix Ansatz for the steady state of the ASEP [17] relates to the algebraic Bethe Ansatz, and we show how our construction can also be applied to the spin-1/2 XXZ chain.

1. Periodic ASEP

In this first section, we treat the periodic case, for which the coordinate Bethe Ansatz solution is known [12]. By generalizing the tensors X that were defined in [16], as well as the algebraic relations their elements satisfy, we construct a transfer matrix with two free parameters, which commutes with the current-counting deformed Markov matrix of the periodic ASEP for any values of those parameters. We then show that, for certain special values, the transfer matrix decomposes into two independent blocks, one of which is the one-parameter transfer matrix for some dimension of the auxiliary space. We also show that our transfer matrix is in fact the product of two one-parameter operators P and Q. Putting these results together, we are able to recover the functional Bethe equations for the periodic ASEP [46].

The matrix that we want to diagonalize here is the Markov matrix of the periodic ASEP, with a current-counting deformation, which is given by:

with

acting on sites i and i + 1 in basis {00, 01, 10, 11}, and where M(L) connects site L with site 1. If all the μi are taken to be 0, this gives the Markov matrix of the periodic ASEP (which is stochastic: the columns sum to 0). Deforming it with fugacities allows one to keep track of the number of times a given jumping rate is used in a given realization of the system, and hence to get the statistics of the particle currents. The largest eigenvalue of that deformed matrix then gives the generating function for the cumulants of those currents, which is the quantity that we want to obtain. It can be shown [8] that the spectrum of that deformed matrix depends only on the sum of all the μi, so that we can for instance choose μL = μ and set all the others to 0. We will denote that deformed matrix by Mμ, and the corresponding highest eigenvalue by E(μ).
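As an illustration, Mμ can be built explicitly for a small system and diagonalized numerically, reading off E(μ) as the largest eigenvalue. This is a minimal sketch, not the construction used in the text: the hopping conventions (rate 1 to the right, rate q to the left, deformation on the bond (L, 1) only) are assumptions chosen to match the setup above.

```python
import numpy as np

def deformed_markov_matrix(L, q, mu):
    """Deformed Markov matrix M_mu of the periodic ASEP on L sites.

    Assumed conventions: a particle hops right with rate 1 and left with
    rate q; the fugacity mu counts hops across the bond (L, 1) only.
    Configurations are encoded as bitstrings (bit i = occupancy of site i).
    """
    dim = 2 ** L
    M = np.zeros((dim, dim))
    for c in range(dim):
        occ = [(c >> i) & 1 for i in range(L)]
        for i in range(L):
            j = (i + 1) % L
            # deformation acts only on the off-diagonal terms of bond (L, 1)
            w_right = np.exp(mu) if j == 0 else 1.0
            w_left = np.exp(-mu) if j == 0 else 1.0
            if occ[i] == 1 and occ[j] == 0:      # hop i -> i+1, rate 1
                c2 = c ^ (1 << i) ^ (1 << j)
                M[c2, c] += 1.0 * w_right
                M[c, c] -= 1.0
            if occ[i] == 0 and occ[j] == 1:      # hop i+1 -> i, rate q
                c2 = c ^ (1 << i) ^ (1 << j)
                M[c2, c] += q * w_left
                M[c, c] -= q
    return M

M0 = deformed_markov_matrix(4, 0.5, 0.0)
assert abs(M0.sum(axis=0)).max() < 1e-12      # stochastic at mu = 0
E = max(np.linalg.eigvals(deformed_markov_matrix(4, 0.5, 0.1)).real)
```

At μ = 0 the largest eigenvalue is 0 (the steady state), and for μ > 0 it becomes E(μ) > 0 by convexity of the generating function, since the mean current is positive for q < 1.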

1.1. Bulk algebra and commutation relations

The starting point for the results of this paper is to realize that the matrices d and e that are defined, for instance, in [47, 48], with the algebraic relation that they satisfy, $de-q\,ed=(1-q)$, correspond to a special representation of the Uq(SU(2)) algebra (up to a simple gauge transformation that we present in section 2.6). In light of this, it seems natural to wonder whether a more general representation might be used, and produce different, and perhaps better, results.

Let us therefore define:

in basis {0, 1}, corresponding to the occupancy of one of the sites, and where matrices A, d and e satisfy:

For practical purposes, we will be using a specific solution to these equations, given by:

Equation (4)

Equation (5)

and

Equation (6)

where S+ and S− are simply operators increasing or decreasing n by 1 (not to be confused with spin operators). We recover the simpler versions of these matrices simply by taking x = y = 0.

A few remarks need to be made here. First of all, the matrix A that we have just defined plays an important role in building the matrix Ansatz for the multispecies periodic ASEP [48]. Secondly, we could have chosen for d and e their contragredient representation $\overline{e}={}^t\!d$ and $\overline{d}={}^t\!e$, which is equivalent to a gauge transformation on d and e. We will be using this fact abundantly in the rest of the paper. Finally, we can actually define A, S+ and S− over $\mathbb {Z}$ rather than $\mathbb {N}$, so that S+ and S− are the inverse of one another: S+S− = 1 (which would not work on $\mathbb {N}$ because of the cut at −1). Because of the term $(1-q^n)$ in d, which is 0 between states ||0〉〉 and || − 1〉〉, we are assured that, if starting from a state ||n〉〉 with n ⩾ 0, we can never go to one with n < 0 through any combination of d, e and A. We just need to make sure that those expressions are always applied to vectors that have non-zero coefficients only for n ⩾ 0, which is enforced by the matrix Aμ that we will define momentarily, on $\mathbb {N}$ alone. This fact will make many future calculations much easier.
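The algebraic relation $de-q\,ed=(1-q)$ can be checked directly on a truncated space. The concrete matrices below use the x = y = 0 specialization as it can be inferred from the text (e raises n by 1, d lowers n with weight $1-q^n$); the truncation size N is illustrative, and the relation is tested away from the truncation edge, where the cutoff spoils it.

```python
import numpy as np

N, q = 12, 0.3   # truncation size and asymmetry (illustrative values)

# x = y = 0 specialization: e|n>> = |n+1>>, d|n>> = (1 - q^n)|n-1>>
e = np.diag(np.ones(N - 1), -1)             # raising operator
d = np.diag(1 - q ** np.arange(1, N), 1)    # lowering, vanishes on |0>>

lhs = d @ e - q * e @ d
# de - q ed = (1 - q) holds on states n = 0 .. N-2 (edge state excluded)
assert np.allclose(lhs[:N - 1, :N - 1], (1 - q) * np.eye(N - 1))
```

The vanishing weight between ||0〉〉 and || − 1〉〉 is visible in the superdiagonal of d, whose first entry is $1-q^1$ acting on ||1〉〉 while d||0〉〉 = 0, as the remark above requires.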

All our matrices are now combinations of only A and S, which satisfy a simple algebra:

Equation (7)

with $S^+=S$ and $S^-=S^{-1}$.

We also need to define another diagonal matrix Aμ, given by

Equation (8)

such that

Equation (9)

Note that this does not have a well-defined trace for μ = 0. If we want to consider that limit, we will have to multiply Aμ by (1 − e−μ) first.

Finally, we define the transfer matrix

Equation (10)

where the product symbol refers to a matrix product in the auxiliary space (i.e. the internal space of matrices A and S+) and a tensor product in configuration space, and the trace is taken only on the auxiliary space. The superscript (i) only indicates to which physical site each matrix X corresponds to (and their place in the product). In other terms, the weight of that matrix between configurations $\mathcal{C}=\lbrace \tau _i\rbrace$ and $\mathcal{C}^{\prime }=\lbrace \tau _i^{\prime }\rbrace$, where τi is the number of particles on site i in $\mathcal{C}$, is given by

Through a calculation which can be found in appendix A.1, we show that each of the local matrices in Mμ satisfy

Equation (11)

and, for M(L), which contains the deformation,

Equation (12)

Note that these relations are in fact the infinitesimal equivalent of the so-called 'RLL equation' for the commutation of matrices X with different parameters, where $\hat{X}$ is the derivative of X for a well chosen variable.

We can now recover Mμ in its entirety by summing over i in (11) and (12). The hat matrices cancel out from one term to the next, and we are left with 0, so that:

Equation (13)

$T_\mu ^{{\rm per}}(x,y)$ therefore has the same eigenvectors as Mμ.

Note that for a general set of fugacities {μi}, the corresponding transfer matrix is the same as the one we defined here, with matrices $A_{\mu _i}$ inserted at their appropriate place in the matrix product:

Equation (14)

1.2. Decomposition of the transfer matrix

Given the representation we chose for matrix e in (6), namely S(1 − xyA), an interesting case to consider is $xy=q^{1-k}$ with $k\in \mathbb {N}^\star$ (which sets one coefficient to 0 in e).

Let us therefore impose $y=1/(q^{k-1}x)$. The four matrices in X become:

Equation (15)

Equation (16)

and the coefficient of ||k〉〉〈〈k − 1|| in e vanishes. This makes all these matrices lower block-triangular (n0 and n1 obviously are, since they are diagonal, and d already was, but not e in general) with a block of size k (for n from 0 to k − 1 in the four series above) and one of infinite size (for n from k to ∞).
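The decomposition of the trace that follows relies on a simple linear-algebra fact: for lower block-triangular matrices, the trace of a product equals the sum of the traces of the products of the diagonal blocks. A quick numeric check, with arbitrary (illustrative) block sizes standing in for the size-k block and a truncation of the infinite one:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 3, 5   # block sizes (illustrative; m truncates the infinite block)

def lower_block_triangular(k, m):
    """Random (k+m) x (k+m) matrix with vanishing upper-right k x m block."""
    X = rng.standard_normal((k + m, k + m))
    X[:k, k:] = 0.0
    return X

Xs = [lower_block_triangular(k, m) for _ in range(4)]
P = np.linalg.multi_dot(Xs)
top = np.linalg.multi_dot([X[:k, :k] for X in Xs])      # first-block product
bot = np.linalg.multi_dot([X[k:, k:] for X in Xs])      # second-block product
assert np.isclose(np.trace(P), np.trace(top) + np.trace(bot))
```

This is exactly the mechanism that splits $T_\mu^{\rm per}$ into the contribution of the finite block (giving t[k]) plus that of the infinite block.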

The coefficients of that second block happen to be the same as the coefficients of the whole matrix for $x\rightarrow q^k x$ and $y\rightarrow q/x$, in the contragredient representation of d and e (i.e. exchanging and transposing them):

Equation (17)

Equation (18)

Indeed, taking $n\rightarrow n+k$ in (15) and (16), and removing the k negative indices, we get exactly (17) and (18).

Since the trace of a product of lower block-triangular matrices is the sum of the traces of the products of their diagonal blocks, this gives us an equation for $T_\mu ^{{\rm per}}$, which is one of the two results essential to our derivation of the functional Bethe Ansatz:

Equation (19)

where the factor ekμ is the first coefficient of Aμ on the second block. The transfer matrix t[k] is the contribution coming from the first block, which can be written as:

Equation (20)

where Xk(x) contains the same entries as X(x, 1/qk − 1x), but truncated at n = k − 1 (so that the auxiliary space is k-dimensional).

We saw that $T_\mu ^{{\rm per}}$ commutes with Mμ for any values of x and y, so it also commutes with another $T_\mu ^{{\rm per}}$ at different values of the parameters (this is in fact not guaranteed at this stage, because any one of those matrices could have a degenerate eigenspace, but we will assume that it holds for now; there is a better way to show that two matrices $T_\mu ^{{\rm per}}$ at different values of x and y commute, and we will come back to it in the next section). This tells us that those matrices also commute with t[k](x) for any k, and that the t[k](x)'s with different k's commute with one another. This matrix equation therefore implies the same relation for the eigenvalues Λi(x, y) of $T_\mu ^{{\rm per}}(x,y)$ and the eigenvalues $\Lambda ^{[k]}_i(x)$ of t[k](x).

To go further, we need to examine the first two cases in this last equation. Note that, to be consistent with our notation for X, all the matrices will be written with the physical space as their outer space and the auxiliary space as their inner space (i.e. separated in 2 × 2 blocks of matrices acting on the auxiliary space), which is opposite to the standard convention in some cases.

For k = 1, the first block of X is given by:

The matrix t[1](x), which is scalar inside of a given occupancy sector, is then given by:

Equation (21)

This is the same as the function h(x) that is defined in [12], up to a sign.

For k = 2, the 2 × 2 blocks from n0, e, n1 and d are:

and

Equation (22)

with $A_\mu ^{[2]}=\big[\begin{array}{@{}cc@{}}{ 1} & 0\\ { 0} & { {\rm e}^{-\mu }}\end{array}\big]$.

This matrix is, in fact, the standard transfer matrix for the periodic XXZ spin chain, with a two-dimensional auxiliary space. To write it in its usual form, we need to make a few transformations and change variables. To that effect, let us consider:

Equation (23)

with $\lambda = \frac{1+q x}{q(1+x)}$, i.e. $x=-\frac{1-q \lambda }{q(1-\lambda )}$. We use the symbol · to signify a product in configuration space, so that it is not confused with a product in the auxiliary space (for which we use the usual product notation). This is the common form of the Lax matrix for the ASEP:

where P(i) is a permutation matrix which has the effect of exchanging the physical space at site i with the auxiliary space (figure 1).
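A one-line sanity check (with illustrative values of q and x) that the two changes of variables between x and λ above are indeed inverse to each other:

```python
# change of variables lambda = (1+qx)/(q(1+x)) and its stated inverse
q, x = 0.5, 0.3                                # illustrative values
lam = (1 + q * x) / (q * (1 + x))
x_back = -(1 - q * lam) / (q * (1 - lam))
assert abs(x_back - x) < 1e-12                 # round trip recovers x
```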

Figure 1.

Figure 1. Schematic representation of L(i)(λ). The first box represents P(i), which exchanges the occupancies of the auxiliary space a and the physical space i (the matrix is seen as acting from SE to NW). The second box is the same exchange operator, with the local matrix M acting during the exchange.


The matrix we applied to $X^{[2]}$ from the left in (23) multiplies every entry by x for each occupied site (and since this number is conserved between the left and right entries of the transfer matrix, this operation actually commutes with t[2], so that we have not modified its eigenvectors). All in all, this operation multiplies t[2] by $x^N/(1+x)^L = 1/h(x)$. We therefore define:

Equation (24)

which is the usual transfer matrix for the periodic ASEP with one marked bond.

Since L(i)(0) = P(i) is a permutation matrix, and its derivative with respect to λ at 0 is $\frac{{\rm d}}{{\rm d}\lambda }L^{(i)}(0)=P^{(i)} M^{(i)}$, we find that $\hat{t}^{[2]}(0)$ is the matrix that translates the whole system back one step, and that, for the whole transfer matrix $\hat{t}^{[2]}$ (figure 2):

Equation (25)

(we have not considered how Aμ comes into play, but we can easily check that it gives the correct term in Mμ).

Figure 2.

Figure 2. Schematic representation of the value and first logarithmic derivative of $\hat{t}^{[2]}$ at 0. The first is a translation matrix which takes each site i to i − 1. The second (of which only one part of the sum over sites is represented) gives the local jump matrix M(i).


Written in terms of t[2](x), this becomes:

Equation (26)

Doing the same calculations at $\lambda =\infty$ instead of 0, after a few modifications (such as taking the contragredient representation for X and multiplying everything by h(x)/h(qx)), would have instead given us:

Equation (27)

which is also what we would have obtained if we had considered a system with L − N particles, exchanging Q with P, and kept $y=1/(qx)$ as a variable. This identity is usually called the 'crossing symmetry' in the language of quantum spin chains.

1.3. R matrix

The next step is to show that the eigenvalues Λi(x, y) of $T_\mu ^{{\rm per}}$ are in fact a product of a function of x and a function of y, which we will denote by Pi(x) and Qi(y). This factorization property is a well known fact for the periodic XXX [49, 50] and XXZ [34, 44] spin chains. This result will actually be derived in the next section, as there are a few preliminary calculations which need to be done first, mainly in order to find the R matrix of our system.

One way to go about this is through a method used in [51, 52] which consists in introducing two new Lax matrices, defined by:

where the operators Ai and Sj obey (9) for i = j (i.e. if they act on the same space), and commute otherwise.

We then take the product of those matrices (which is a matrix product on the physical two-dimensional space and a tensor product on the infinite-dimensional auxiliary spaces of L1 and $\tilde{L}_2$):

with A = A1A2. We will omit the notation · for the product between those new Lax matrices, since the indices are there to signify that their elements act on different spaces.

We now need to consider two special cases for the coefficients of L1 and $\tilde{L}_2$. Let us first set them as follows: a1 = x, c2 = y and the rest is 1. We write the corresponding matrices as $L_1^+$ and $\tilde{L}_2^-$:

and, by projecting each element on S1 = S2 = S (i.e. by applying ∑||i, j〉〉〈〈i + j|| to the right and its contragredient to the left), we get:

Equation (28)

Naturally, we check that A and S satisfy the correct relations.

Let us now set a2 = x, c1 = y/q, c2 = q and the rest to 1. We write the corresponding matrices as $L_1^-$ and $\tilde{L}_2^+$:

and, through the same operation as before, we get:

Equation (29)

In both of those special cases, one of the non-diagonal elements has a factor (1 − A) which allows us to truncate the representation at state ||0〉〉 and avoid some convergence issues. It would not be the case, however, if we were to construct $L_1^+(x) \tilde{L}_2^+(y)$ or $L_1^-(x) \tilde{L}_2^-(y)$, which we will therefore avoid at any cost.

We will now use this formalism in order to find the so-called 'R matrix' which is such that:

where R acts on the two auxiliary spaces of both X matrices. We will comment on the use of such a matrix at the end of this section.

Considering that $X(x,y)\cdot X(x^{\prime },y^{\prime })=L_1^+(x) \tilde{L}_2^-(y) L_3^+(x^{\prime }) \tilde{L}_4^-(y^{\prime })$, we will perform this exchange of parameters in steps, exchanging the parameters of only two Li's at a time. The first thing we could try is to exchange y and x', but this would transform $\tilde{L}_2^-(y)$ into $\tilde{L}_2^+(x^{\prime })$, so we would have $L_1^+(x) \tilde{L}_2^+(x^{\prime })$ on the left, which we want to avoid. The solution is then to first exchange x' and y', then y' and y, and finally y and x', to obtain X(x, y') · X(x', y), and then do the same once more to exchange x and x'.

We first need to find f12(x, y) such that $L_1^+(x) \tilde{L}_2^-(y) f_{1 2}(x,y) = f_{1 2}(x,y) L_1^-(y)\tilde{L}_2^+(x)$, which is to say:

That f12 may depend on A and S. In terms of the elements of X(x, y), this reads:

The first and second equations tell us that f12 commutes with A (i.e. it is diagonal), so it should be a function of A alone. Either the third or the fourth equation then gives us:

(where the second equality is due to the commutation of S with A), which we can rewrite as

Iterating this last equation, we finally find:

Equation (30)

where $(x)_\infty$ is the infinite q-Pochhammer symbol
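Assuming the standard convention $(x)_\infty = \prod_{k\geqslant 0}(1-q^k x)$ for |q| < 1, a truncated numerical evaluation confirms the functional relation $(x)_\infty = (1-x)\,(qx)_\infty$ that the iteration above produces:

```python
def qpoch(x, q, terms=200):
    """Truncated infinite q-Pochhammer (x)_inf = prod_{k>=0} (1 - q^k x)."""
    p = 1.0
    for k in range(terms):
        p *= 1 - (q ** k) * x
    return p

q, x = 0.4, 0.2   # illustrative values, |q| < 1 for convergence
# functional relation (x)_inf = (1 - x) (q x)_inf
assert abs(qpoch(x, q) - (1 - x) * qpoch(q * x, q)) < 1e-12
```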

To exchange the parameters back, one simply has to apply $f_{1 2}^{-1}(x,y)$.

This was for the exchange of parameters between the first and second or third and fourth matrices in $L_1^+(x) \tilde{L}_2^-(y) L_3^+(x^{\prime }) \tilde{L}_4^-(y^{\prime })$ (i.e. inside of a same X matrix). We now need to exchange parameters between the second and third matrices in that product.

Let us consider $\tilde{L}_2L_3$ with c2 = y, c3 = y'/q, and the rest set to 1:

We need to find $g_{2 3}^-(y,y^{\prime })$ such that $\tilde{L}_2^-(y) L_3^-(y^{\prime })g_{2 3}^-(y,y^{\prime })=g_{2 3}^-(y,y^{\prime })\tilde{L}_2^-(y^{\prime }) L_3^-(y)$. As before, the commutation for each of the four elements of $\tilde{L}_2^-(y) L_3^-(y^{\prime })$ is:

The first and second equations tell us that $g_{2 3}^-$ commutes with S2/S3 and with A2A3. The third and fourth suggest that $g_{2 3}^-$ keeps A2 and A3 separated, so it should only depend on S2/S3. The third equation then gives:

which can be rewritten as:

and produces, through iteration:

Equation (31)

Finally, we look for $g_{2 3}^+(x,x^{\prime })$ such that $\tilde{L}_2^+(x) L_3^+(x^{\prime })g_{2 3}^+(x,x^{\prime })=g_{2 3}^+(x,x^{\prime })\tilde{L}_2^+(x^{\prime }) L_3^+(x)$. Setting a2 = x, c2 = q, a3 = x', and the rest to 1, in $\tilde{L}_2 L_3$, we find:

and the exact same operations as before produce:

Equation (32)

Let us note that $g^+$ and $g^-$ commute with the projection which we perform on $L_1\tilde{L}_2$ and $L_3\tilde{L}_4$ to get the X matrices.

We can finally forget everything about that alternative construction of X(x, y), and simply define those operators f and g± as what we found them to be. After re-indexing them so that 1 refers to the auxiliary space of the first X matrix, and 2 to the second, we can write:

with

where we relabelled f12 as f1, f34 as f2, and $g^\pm _{23}$ as $g^\pm _{12}$, consistent with the indices of the X matrices.

Applying those one after the other, we find the full R matrix:

Equation (37)

with

Equation (38)

(cf figures 3–5), or an equivalent expression if we apply Rx to the left of Ry instead.

Figure 3.

Figure 3. Schematic representation of f12 and $f_{12}^{-1}$. The exponents + and − of the L matrices on one side and the other are represented by, respectively, right and left arrows (the top row being L1, and the second $\tilde{L}_2$), and their arguments are represented by colours (blue for x and orange for y).

Figure 4.

Figure 4. Schematic representation of $g^+_{23}$ and $g^-_{23}$. The top row represents $\tilde{L}_2$, and the second L3, and their arguments are represented by colours (red for x and y and green for x' and y').

Figure 5.

Figure 5. Schematic representation of the R matrix for the periodic ASEP.


There are many things to be said about that R matrix. First of all, it is the rigorous way to prove that $T^{{\rm per}}_\mu (x,y)$ commutes with $T^{{\rm per}}_\mu (x^{\prime },y^{\prime })$ for any value of those parameters: to do that, we insert R(x, y; x', y')R−1(x, y; x', y') at any point in the matrix product expression of $T^{{\rm per}}_\mu (x,y)T^{{\rm per}}_\mu (x^{\prime },y^{\prime })$, and make R(x, y; x', y') commute to the left all the way around the trace (figure 6), exchanging parameters between the two rows along the way. When crossing the marked bond, we just need to note that R commutes with Aμ · Aμ.

Figure 6.

Figure 6. Schematic representation of the commutation between R matrices and X matrices with different parameters (represented by different colours).


Equations (33) and (34) are also of interest by themselves: they tell us that $T^{{\rm per}}_\mu (x,y)$ and $T^{{\rm per}}_\mu (x^{\prime },y^{\prime })$ can actually exchange only one of their parameters and keep the other, instead of commuting altogether (by applying the procedure we just described, but with Rx or Ry instead of R). This decomposition of the R matrix is a well known property [49, 50], and will be extremely useful to us in the next section.

Moreover, considering the decomposition (19) which we obtained in the previous section for special values of the spectral parameters in $T^{{\rm per}}_\mu (x,y)$, by taking $y=1/(q^{k-1}x)$ and $y^{\prime }=1/(q^{l-1}x^{\prime })$ in R(x, y; x', y'), we should be able to recover the R matrix between auxiliary dimensions k and l as an independent part of the whole matrix. In other words, this R matrix of twice infinite dimension should contain all the smaller R matrices for the XXZ chain. Taking $y=1/(qx)$, in particular, should yield the Lax matrix X (possibly up to a permutation). Because of the complexity of equation (38), those facts are yet to be verified.

1.4. Q-operator and T–Q equations

From the previous section, we now know that $T_\mu ^{{\rm per}}$ satisfies:

Equation (39)

Fixing x' = y' = 0 in that equation (although any other constants would do), we can therefore write:

Equation (40)

with:

Since the matrices P and Q are combinations of the matrix $T_\mu ^{{\rm per}}$ taken at various values of its parameters, and considering equation (39), we immediately see that P(x) and Q(y) commute for any values of x and y.

This relation is crucial to our reasoning, and, put together with (19), will allow us to reach our conclusion in just a few more lines.

Using this, we can now rewrite (19) as:

Equation (41)

The first and second orders of this equation give:

Equation (42)

Equation (43)

Equation (44)

where we have written the first one twice (once at x, once at qx).

Combining those equations as Q(1/qx) ×(42) +e−μQ(q/x) ×(43)−Q(1/x) ×(44) yields the TQ relation:

Equation (45)

Using this equation in conjunction with (27) then gives Mμ in terms of Q:

Equation (46)

Let us note that using equation (41), we can obtain all the equations from the so-called 'fusion hierarchy' [35, 36], which gives equations on the decomposition of products of matrices t[k], as well as the TQ equations for any t[k] (which involve products of k − 1 matrices Q). We find for instance that

and

See appendix C for more details on these relations.

At this point, in order to get an explicit result for the largest eigenvalue E(μ) of Mμ, we need some additional information, which we get by taking μ to 0.

1.5. Non-deformed case

We want to recover the expression for E(μ) which was found in [12]. Everything that we have shown so far is valid for all eigenspaces of Mμ, but we now need to look for any information that we might have specifically on the dominant eigenvalue or eigenvectors of Mμ, so that we can solve equation (46) explicitly in that eigenspace. Starting from the coordinate Bethe Ansatz, this was done, in [12], using prior knowledge on the behaviour of Bethe roots for μ → 0. We cannot use that same argument here, for two reasons: we did not start from the coordinate Bethe Ansatz, so that our Q-operator is not defined in terms of Bethe roots, and even though we can, in principle, argue that the two definitions of Q have to be consistent with one another, we do not expect to be able to do the same in the open case. Fortunately, we can get to the same result by purely algebraic calculations.

Let us go back to the definition of $T_\mu ^{{\rm per}}$ as given in (10), as well as that of X in terms of S and A:

Expanding this expression in powers of A, we see that each weight in $T_\mu ^{{\rm per}}$ is a finite sum of traces of the form $x^i y^j q^k\,{\rm Tr}[A_\mu S^l A^m]$, where k, l, and m are integers, and the factor $q^k$ comes from ordering the powers of A and S. Because of the trace, those terms vanish if l ≠ 0 (which accounts for the conservation of the total number of particles: the numbers of ds and es in the trace have to be equal). Otherwise, they are simply equal to $x^i y^j q^k/(1-q^m {\rm e}^{-\mu })$. In particular, there is exactly one term with m = 0, which is equal to $1/(1-{\rm e}^{-\mu })$. In the non-deformed limit μ → 0, this term dominates the trace in each entry of the transfer matrix, as it is the only term that diverges, so that:

where $|1_N\rangle$ is a vector with all entries equal to 1 if the total number of particles is N, and 0 otherwise. $|1_N\rangle \langle 1_N|$ is the projector onto the dominant eigenspace (i.e. the steady state) of M in the sector with N particles.

From this, we draw two important conclusions. First, the prefactor in Q(y) diverges as 1/μ. Secondly, we see that this limit does not depend on x and y, which means that the roots of P(x) and Q(y), in this eigenspace, all go to infinity when μ goes to 0. This allows us to use a contour integral in (42) to separate the contributions from P(x) and P(qx) (whose roots all go to infinity, and therefore all lie outside the unit circle for μ small enough) and those from Q(1/x) and Q(q/x) (whose roots, in terms of x, all go to 0, and lie inside the unit circle for μ small enough). We also find that the eigenvalue of t[2] in that eigenspace is

which gives E(0) = 0. Using this, we can solve (42) perturbatively in μ and find Q in terms of h. Putting the result in (46), we finally find E(μ). Since we will be doing the same for the open ASEP in the next section with only a few differences, we will give those final steps in detail only in section 2.5.
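These traces can be checked in a truncated representation. The sketch below is our addition, not part of the paper, and assumes the common q-oscillator realization $S|n\rangle = |n+1\rangle$, $A|n\rangle = q^n|n\rangle$, $A_\mu |n\rangle = {\rm e}^{-n\mu }|n\rangle$, which reproduces the values quoted above:

```python
# Numerical sketch (ours) of the traces Tr[A_mu S^l A^m] in a truncated
# q-oscillator representation (assumed: S|n> = |n+1>, A|n> = q^n|n>,
# A_mu|n> = e^{-n mu}|n>): the trace vanishes unless l = 0, where it
# equals 1/(1 - q^m e^{-mu}).

import numpy as np

N = 400                                     # truncation of the auxiliary space
q, mu = 0.4, 0.7

S = np.eye(N, k=-1)                         # shift operator S|n> = |n+1>
A = np.diag(q ** np.arange(N))              # A|n> = q^n |n>
Amu = np.diag(np.exp(-mu * np.arange(N)))   # A_mu|n> = e^{-n mu} |n>

m = 3
trace_l0 = np.trace(Amu @ np.linalg.matrix_power(A, m))  # l = 0 term
trace_l2 = np.trace(Amu @ S @ S @ A)                     # l = 2 term
closed_form = 1.0 / (1.0 - q ** m * np.exp(-mu))
# trace_l0 matches closed_form; trace_l2 vanishes because S shifts every state
```

In particular, the m = 0 trace reproduces the divergent term $1/(1-{\rm e}^{-\mu })$ discussed above.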

2. Open ASEP

We will now apply the same procedure to the open ASEP. Considering the same X matrix as before, we first generalize the boundary vectors found in [16] accordingly, and see how our construction relates to the original matrix Ansatz [17]. We then show the PQ factorization of the transfer matrix, which is much more straightforward than in the periodic case. Finally, we see how it decomposes into blocks, one of which is a one-parameter transfer matrix with finite auxiliary dimension, according to the same equation as in the periodic case but with a different quantum determinant. We also show, as an appendix, what this all becomes in the language of the XXZ chain with spin-1/2.

The current-counting Markov matrix for the open ASEP is given by:

with

where the matrices m(0) and m(L) represent two particle reservoirs to which the system is coupled, through the first and the last site, respectively. m(0) acts on site 1 and m(L) on site L, both in the canonical occupancy basis {0, 1}, and, as previously, M(i) acts on sites i and i + 1 in basis {00, 01, 10, 11}.

As in the periodic case, the bond over which we count the current can be chosen arbitrarily, so for the sake of simplicity we choose the one between the left reservoir and the first site.
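For reference, a direct construction of $M_\mu$ for small sizes can be sketched as follows. This is our own code, assuming the standard open-ASEP conventions (bulk rates 1 and q, boundary rates α, γ on site 1 and β, δ on site L, deformation ${\rm e}^{\pm \mu }$ on the left-boundary jumps):

```python
# Sketch (ours) of the current-counting Markov matrix M_mu for the open ASEP:
# bulk hops 10 -> 01 with rate 1 and 01 -> 10 with rate q; injection/extraction
# (alpha, gamma) at site 1 and (beta, delta) at site L; current counted on the
# bond between the left reservoir and site 1.

import numpy as np

def markov_matrix(L, q, alpha, beta, gamma, delta, mu=0.0):
    # local bulk generator on sites (i, i+1), basis {00, 01, 10, 11}
    M_loc = np.array([[0., 0., 0., 0.],
                      [0., -q, 1., 0.],
                      [0., q, -1., 0.],
                      [0., 0., 0., 0.]])
    # boundary generators, basis {0, 1}; e^{+-mu} sits on the left-boundary jumps
    m0 = np.array([[-alpha, gamma * np.exp(-mu)],
                   [alpha * np.exp(mu), -gamma]])
    mL = np.array([[-delta, beta],
                   [delta, -beta]])
    M = np.kron(m0, np.eye(2 ** (L - 1))) + np.kron(np.eye(2 ** (L - 1)), mL)
    for i in range(L - 1):
        M += np.kron(np.eye(2 ** i), np.kron(M_loc, np.eye(2 ** (L - 2 - i))))
    return M

M = markov_matrix(L=4, q=0.5, alpha=0.8, beta=0.6, gamma=0.2, delta=0.1)
# at mu = 0 every column sums to zero (stochastic generator), so E(0) = 0
```

At μ = 0 the columns of M sum to zero and its largest eigenvalue vanishes, as a stochastic generator requires.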

2.1. Boundary algebra and commutation relations

Let us first find out how the presence of boundaries makes this case different from the previous one. Most of the results and calculations in this section are generalizations of those found in [16]. We define:

which has the same structure as the transfer matrix from [16], but with the X matrices replaced by their generalization. Note that, since we have two transfer matrices, we have, in principle, four free parameters (two in each matrix), but we must in fact put the same parameter twice in each matrix (so that Uμ depends only on x, and Tμ on y) if we want to be able to find suitable boundary vectors. To simplify notations, we will therefore rewrite X and $\hat{X}$ as:

with $n_x=1+xA$ and $\tilde{n}_x=1-x A$.

Note that for a general set of fugacities, the generalization (14) holds, with the matrices $A_{\mu _i}$ being inserted in both U and T.

The conditions that these boundary vectors must satisfy are:

where we notice that the first two are the same as in [16, 17] with $n_x$ replacing 1, and the next two also have an extra term (1 − q)yA. For x = y = 0, we naturally recover the conditions given in [16]. Note that those conditions were found by trial and error, so that they may not be unique.

2.1.1. Commutation relations

As in the periodic case, we will now see how the transfer matrices we have just defined commute with each of the individual jump matrices.

For the bulk matrices M(i), we can simply use equation (11). Summing all of them, we get

Equation (53)

Equation (54)

Unlike in the periodic case, where the trace over the product of matrices ensures that all the $\hat{X}$ matrices cancel one another, we here need to consider the action of the boundary matrices m(0) and m(L) as well. We find that we cannot cancel the two boundary terms in (53), and those in (54), independently. Instead, we have to consider the commutation with the product Uμ(x)Tμ(y). We show, in appendix A.2, that

Equation (55)

Equation (56)

which is exactly what is needed to compensate the terms coming from the bulk.

Putting those relations together, we can conclude that, for any values of x and y, we have:

Equation (57)

2.1.2. Explicit expressions for the boundary vectors

We will need an expression for the boundary vectors in a few calculations later, so we determine them explicitly here.

For that, we need to define four new parameters in terms of the boundary rates:

and conversely:

In terms of these, equations (49)–(52) become:

Equation (58)

Equation (59)

Equation (60)

Equation (61)

We first focus on the right boundary. We saw that a possible representation for d and e is $e = S_1(1-x^2A_1)$ in Uμ or $e = S_2(1-y^2A_2)$ in Tμ, and $d=S_i^{-1}(1-A_i)$ (where indices 1 and 2 refer to the auxiliary spaces of Uμ and Tμ, respectively). Equations (58) and (59) become:

which is to say, multiplying by Si to the left:

We can write those vectors as generating functions, which will make it easier for us to manipulate them. Let us then write $|\!| V \rangle \!\rangle =V(S_1) |\!|0\rangle \!\rangle =\sum \nolimits _{k=0}^\infty V_k S_1^k |\!|0\rangle \!\rangle$, which is such that $A_1 |\!| V \rangle \!\rangle =V(qS_1) |\!|0\rangle \!\rangle$. Those two last equations now become:

which we can iterate to get:

Equation (62)

where we recall that $(x)_\infty$ is the infinite q-Pochhammer symbol:

We will sometimes write products of those in a more compact form, with only one symbol, such as $(x,y)_\infty =(x)_\infty (y)_\infty$.
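Since the infinite q-Pochhammer symbol appears throughout what follows, here is a small numerical helper (our addition, not part of the paper) implementing it as a truncated product, together with the compact multi-argument notation:

```python
# Truncated infinite q-Pochhammer symbol (x)_inf = prod_{k >= 0} (1 - x q^k);
# the product converges quickly for |q| < 1, so a few hundred factors suffice.

def qpoch(x, q, nterms=200):
    p = 1.0
    for k in range(nterms):
        p *= 1.0 - x * q ** k
    return p

def qpoch_multi(q, *args):
    """Compact notation (x, y, ...)_inf = (x)_inf (y)_inf ..."""
    p = 1.0
    for x in args:
        p *= qpoch(x, q)
    return p

# e.g. for q = 0 the symbol reduces to a single factor: (x)_inf = 1 - x
```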

As for the left boundary, it is simpler to treat it using the contragredient representation $\overline{X}$ of X on both vectors (which are then the exact symmetric of ||V〉〉 and $|\!| \tilde{V} \rangle \!\rangle$, but with a and $\tilde{a}$ replacing b and $\tilde{b}$), and then apply the operator fi that we found in section 1.3 in order to go back to X. This gives us:

to which we could add factors $\frac{(q)_\infty }{(x^2)_\infty }$ and $\frac{(q)_\infty }{(y^2)_\infty }$ for normalization.

In fact, in most future calculations, we will use X for Uμ and $\overline{X}$ for Tμ, so that the factor $\frac{(y^2A_2)_\infty }{(qA_2)_\infty }$ goes to $|\!| \tilde{V} \rangle \!\rangle$ instead of $\langle \!\langle \tilde{W}|\!|$.

Note finally that scalar products involving those vectors (like, for instance, all the weights of Uμ and Tμ) are well defined only for |a|, |b|, $|\tilde{a}|$ and $|\tilde{b}|$ all smaller than 1. This corresponds to the ASEP being in its so-called 'maximal current' phase. We can assume this to be the case for now, but a simple analytic continuation can be performed in order to access more generic values of the parameters [47].
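The defining equations for the four parameters are not displayed in this extract; in the standard open-ASEP parametrization (which we assume here), a, $\tilde{a}$ and b, $\tilde{b}$ are the two roots $\kappa _\pm$ of a quadratic built from the boundary rates, so that, for instance, $a\tilde{a}=-\gamma /\alpha$:

```python
# Assumed standard parametrization of the boundary parameters (our sketch):
# a, a~ (resp. b, b~) are the roots kappa_+- of
#   u * kappa^2 - (1 - q - u + v) * kappa - v = 0,
# with (u, v) = (alpha, gamma) at the left boundary, (beta, delta) at the right.

import numpy as np

def kappa(u, v, q):
    disc = (1 - q - u + v) ** 2 + 4 * u * v
    kp = (1 - q - u + v + np.sqrt(disc)) / (2 * u)
    km = (1 - q - u + v - np.sqrt(disc)) / (2 * u)
    return kp, km

q, alpha, gamma, beta, delta = 0.5, 0.8, 0.2, 0.6, 0.1
a, a_t = kappa(alpha, gamma, q)   # a = kappa_+, a~ = kappa_-
b, b_t = kappa(beta, delta, q)
# by Vieta's formulas: a * a~ = -gamma/alpha, a + a~ = (1 - q - alpha + gamma)/alpha
```

For these sample rates one finds |a| < 1 and |b| < 1, i.e. the maximal current phase mentioned above.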

We now need to show two things: that the transfer matrix is a product of two commuting matrices, one depending on x and one on y, and that for special values of the parameters it decomposes into two independent blocks.

2.2. R matrix and PQ factorization

Unlike what we did for the periodic case, we will first show that Uμ(x)Tμ(y) factorizes into two commuting matrices P(x) and Q(y). Note that this decomposition is not trivial: Uμ and Tμ do not commute, so P is not simply equal to Uμ and Q to Tμ.

First, we need to show that Uμ(x)Tμ(y) commutes with Uμ(x')Tμ(y') for any values of x, y, x' and y'. If we want to be rigorous, the fact that both commute with Mμ is not enough (Mμ might have a degenerate eigenspace which is not degenerate for Uμ(x)Tμ(y)), so, as in the periodic case, we need to use R matrices to exchange spectral parameters. We cannot do this as easily as in the periodic case, because of the boundaries: instead of taking R around a trace to exchange the top and bottom matrices, we now need to show that, when applied to a boundary, R transfers the spectral parameters from one vector to the other. We find the two following results (which are proven in appendix B.1 and illustrated in figure 7):

where we use the same R matrix as defined in (38).


Figure 7. Action of the R matrix on the right boundary.


These relations imply, at the level of the transfer matrices, that

which immediately gives that Uμ(x)Tμ(y) commutes with Uμ(x')Tμ(y'), but also that

so that we can write, dividing by Uμ(0)Tμ(0),

Equation (68)

with

where P and Q commute with each other. Note that, in terms of P and Q and at y = 0, equation (66) becomes P(x)Q(0) = P(0)Q(x), so that P and Q are the same up to some constant (since P(0) is set to 1 but not Q(0)). We will not need to use this identity in our calculations, so we will keep using distinct names for those two objects, to have notations that are consistent with the periodic case.

2.3. Decomposition of the transfer matrix

In this section, we investigate whether anything happens for those special values of x and y that we considered in section 1.2. The calculations involved are slightly more complicated than they were then, not only because of the boundaries, but also because x and y are now separated, and the factors (1 − xyA) in e have been replaced by $(1-x^2A_1)$ and $(1-y^2A_2)$, which do not give anything useful for $xy=q^{1-k}$.

The simple solution to this problem is to put x and y back together, using $g_{12}^-(x,y)$ (as defined in section 1.3) to go from $X(x,x)\cdot \overline{X}(y,y)$ to $X(x,y)\cdot \overline{X}(y,x)$. We choose to keep the second row of matrices in the contragredient representation to simplify future calculations. We recall that it corresponds to having f(xy) commute through X(x, y), so that

This operation makes the boundary vectors more complicated: they are not a product of two vectors acting each on one row any more (see figure 8). We will write those vectors as $K^+$ for the right boundary and $K^-$ for the left one:

Equation (69)


Figure 8. Two equivalent representations for Uμ(x)Tμ(y). In the first one, the two rows of X matrices are independent, and the two spectral parameters x (orange) and y (blue) are separated. In the second, x and y are back together in X, but the two rows are now intertwined.


We will denote by $K^+_{i,j}$ the coefficient of ||i, j〉〉 in $K^+$ (i.e. the coefficient of $S_1^{i}S_2^{j}$ in the expansion of the ratios of q-Pochhammer symbols in $K^+$), and by $K^-_{j,i}$ the coefficient of 〈〈i, j|| in $K^-$. Note that, in this section, the normalization of the boundary vectors is important. The factors $\frac{(q)_\infty }{(x^2)_\infty }$ and $\frac{(q)_\infty }{(y^2)_\infty }$ in $K^+$ and $K^-$ are there so that $K^-_{0,0}=K^+_{0,0}=1$. Also note that the transpose of $K^-$ can be obtained from $K^+$ through the transformation $\lbrace x\leftrightarrow y,S_1\rightarrow aS_2,S_2\rightarrow aS_1, b\rightarrow 1/a,\tilde{b}\rightarrow \tilde{a}\rbrace$, so that we only need to do calculations on $K^+$.

Now that x and y have been reunited in the bulk of the system, the same decomposition (into block triangular matrices) happens as did in the periodic case. We have to see whether the same happens to the boundary matrices K±.

2.3.1. One-way boundaries

We first focus on the simpler case where $\tilde{a}=\tilde{b}=0$, which is to say γ = δ = 0. For this simpler case, we have:

where the factor $|\!| 0_1,0_2 \rangle \!\rangle$ on which this whole function acts is implicit.

This can be expanded into:

where the first sum comes from $\frac{(y S_1/S_2)_\infty }{(x S_1/S_2)_\infty }$, and the second from everything that is to the right of that.

We will shortly be in need of one of Heine's transformation formulae for basic hypergeometric series, which we give now. For a function ${}_2\phi _1(a,b;c;z)$ defined as:

Equation (71)

we have [53]:

Equation (72)
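Equation (72) is displayed as an image in the original; assuming it is Heine's first transformation, ${}_2\phi _1(a,b;c;z) = \frac{(b)_\infty (az)_\infty }{(c)_\infty (z)_\infty }\,{}_2\phi _1(c/b, z; az; b)$ (all symbols with base q), it can be checked numerically:

```python
# Numerical check (our addition) of Heine's first transformation for 2phi1.

def qpoch(x, q, n=300):
    """(x; q)_n; the default n gives a truncated infinite symbol."""
    p = 1.0
    for k in range(n):
        p *= 1.0 - x * q ** k
    return p

def phi21(a, b, c, q, z, nterms=200):
    """2phi1(a, b; c; q, z) as a truncated power series in z."""
    s = 0.0
    for n in range(nterms):
        s += (qpoch(a, q, n) * qpoch(b, q, n)
              / (qpoch(q, q, n) * qpoch(c, q, n))) * z ** n
    return s

q, a, b, c, z = 0.3, 0.2, 0.4, 0.5, 0.6
lhs = phi21(a, b, c, q, z)
rhs = (qpoch(b, q) * qpoch(a * z, q)
       / (qpoch(c, q) * qpoch(z, q))) * phi21(c / b, z, a * z, q, b)
# lhs and rhs agree up to the series truncation error
```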

We will also need this simple identity on q-Pochhammer symbols:

Equation (73)

Coming back to K+, we have:

where we used (73) between the first line and the second, and (72) between the second and the third. This finally gives us:

Equation (74)

and, at the left boundary:

Equation (75)

We need to compare those values for (x, y) equal to $(1/q^{k-1}y,\,y)$ or $(q/y,\,q^k y)$ (which are the same values as before, but keeping y as a variable instead of x). We get:

Since $(q^{j-k+1})_i = 0$ for $j-k+1\leqslant 0$ and $i+j-k\geqslant 0$, i.e. for $j\leqslant k-1$ and $i\geqslant k-j$, the first matrix is, as we expected, block triangular.
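This vanishing condition is easy to verify directly; the following sketch (ours) enumerates the vanishing pairs (i, j) of the finite symbol $(q^{j-k+1})_i$ and compares them with the stated index range:

```python
# Sketch (ours): (q^{j-k+1})_i contains the factor (1 - q^0) = 0 exactly when
# the exponent j - k + 1 + m hits zero for some 0 <= m < i, i.e. when
# j <= k - 1 and i >= k - j, which is the block-triangularity condition.

def qpoch_exp(e, q, n):
    """(q^e; q)_n, computed via integer exponents so zeros are exact."""
    p = 1.0
    for m in range(n):
        p *= 1.0 - q ** (e + m)
    return p

q, k = 0.3, 3
vanishing = [(i, j) for i in range(1, 7) for j in range(7)
             if qpoch_exp(j - k + 1, q, i) == 0.0]
expected = [(i, j) for i in range(1, 7) for j in range(7)
            if j <= k - 1 and i >= k - j]
# vanishing == expected
```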

We now consider, for $j\leqslant k-1$, the ratio:

The term $\frac{(q^{j+1})_k}{(q^{i+1})_k}$ accounts for going to the contragredient representation on both lines (consistently with what happens to the X matrices), and can be transferred to the left boundary, where it compensates similar terms emerging from the same calculation:

All the other terms depend only on k, and factor out of the matrix product. We will now put them together.

First, we need to make a few transformations on those factors. For $y = 1/q^{k-1}x$, we have:

which gives us:

with

Equation (76)

We see that the boundary matrices, just as the bulk matrices, are upper block triangular, with a first block of auxiliary dimension k, and a second of infinite auxiliary dimension, which is the same as the full matrix at different values of the spectral parameters, up to a global factor which we have just computed. Put into equations, this becomes:

where the factor ${\rm e}^{-2k\mu }$ comes, as before, from the first coefficient of the matrices Aμ (of which there are now two) in the second block.

To make things simpler, we can redefine all our matrices as:

Equation (77)

which is equivalent to choosing another normalization for Uμ and Tμ. Written in terms of those new transfer matrices, this last equation takes its definitive form:

Equation (78)

This has the exact same form as equation (41), the only difference being a factor 2μ instead of μ in the right-hand side. Note that, since $\tilde{P}$ and $\tilde{Q}$ are the same function up to a constant (as we saw at the end of section 2.2), we find that $\tilde{t}^{[k]}$ is symmetric under $x\leftrightarrow 1/q^{k-1}x$.

The rest of the reasoning is the same as in the periodic case. We first consider the case where k = 1. This gives us quite simply: $t^{[1]}(x)=(1+x)^L(1+1/x)^L=h(x)$. From this, we get:

This explains how the function F(x) replaces h(x) (from the periodic case) in all the expressions for the cumulants of the current that were found in [15].

For k = 2, through the same calculations as in the periodic case, we find the TQ equation

which translates into

Equation (79)

Note that although the equation for $\tilde{t}^{[2]}$ is more compact than that for t[2], the former is not polynomial, due to the presence of q-Pochhammer symbols in F and in the normalization of $\tilde{t}^{[2]}$, whereas the latter is.

We now have to verify that $\tilde{t}^{[2]}(x)$ is related to Mμ through some derivative. We will do this directly in the general case (with all four boundary rates), in a few pages. For now, we just give the two-dimensional boundary matrices that we find from $K^+$ and $K^-$:

2.3.2. Two-way boundaries

In the general case, where both boundaries have two non-zero rates, the calculations are much more involved, and can be found in appendix B.2. Starting from expression (70) for K+(x, y), and taking $xy=q^{1-k}$, we find that

Equation (80)

and the corresponding result for the left boundary:

Equation (81)

Using this, we do the exact same operations as in the previous case, replacing $h_b$ with

Equation (82)

and we get the completely general version of the function $F(x)=\tilde{t}^{[1]}(x)$:

Equation (83)

We finally look at the transfer matrix $\tilde{t}^{[2]}(x)$ and try to relate it to Mμ, as we did in section 1.2 for the periodic case. By analogy with equation (24), we will try to rewrite $\tilde{t}^{[2]}(x)/F(x)$ as the standard two-dimensional transfer matrix.

The two-dimensional blocks from $K^+$ and $K^-$ for $y = 1/qx$ are:

and the corresponding blocks from the bulk are:

Consider these transformations, with $\lambda = \frac{1+q x}{q(1+x)}$, i.e. $x=-\frac{1-q \lambda }{q(1-\lambda )}$ :

and

where the matrix products inside the parentheses are done in the auxiliary space, on each element of X2 or $\overline{X}_2$, and the third product is done on the physical space at each site. Notice that the inner products cancel out between one site and the next, and that the outer products are done between X2 and $\overline{X}_2$ and amount to a global factor $\frac{x}{(1+x)^2}$ on each site. Taking a product of L matrices $X_2\cdot \overline{X}_2$, this transformation gives, apart from the inner products at each end of the chain, a global factor $\frac{x^L}{(1+x)^{2L}}$, which accounts for the bulk part h(x) of F(x). Also note that the matrices from $\overline{L}^{(i)}$ need to be transposed if multiplied from right to left.

Considering that $\tilde{t}^{[2]}$ has a factor $h_b(x)h_b(1/qx)$, and that

the transformations we need to do on the boundary matrices are:

of which we will not write the full expression in terms of λ. Their values and first derivatives at λ = 0, which is all we will need, are, in terms of the original boundary parameters:

Equation (84)

Equation (85)

with $A=\frac{2+(\alpha -2)\alpha -\gamma ^2}{1+q}$ and $B=\frac{2(1-\alpha +\gamma )}{(1+q)^2}$.

We now put all these matrices together, obtaining $\tilde{t}^{[2]}(x)/F(x)$ as desired. The Lax matrices L(i) are not exactly the same as those we had for the periodic case. Their values at λ = 0 are L(i)(0) = P(i) and $\overline{L}^{(i)}(0)=\overline{P}^{(i)}$, where P(i) is the permutation matrix exchanging the auxiliary space with the physical space when applied to the right in the matrix product, and $\overline{P}^{(i)}$ is the same but when applied to the left. We see that the two boundaries do not play the same role. The right boundary matrix is simply the identity at λ = 0, and serves to connect the Lax matrices on the last site. The left boundary matrix is traced, at λ = 0, because of the Lax matrices on the first site, and we see that its trace is 1. All in all, at λ = 0, the whole transfer matrix is simply the identity (see figure 9(a)).


Figure 9. Schematic representation of the value and first logarithmic derivative of $\tilde{t}^{[2]}$ at 0. The first is the identity matrix (a). The second is a sum of terms adding up to Mμ: three terms at the left boundary (b), two for each bond in the bulk (c) and one term at the right boundary (d).


As for its first derivative with respect to λ, we find that each pair of Lax matrices L(i) and $\overline{L}^{(i)}$ has a total contribution of $2M^{(i)} + (1-q)(\tau _i-\tau _{i+1})$, where $\tau _i$ is the number of particles on site i. The right boundary matrix, as can be seen in (84), gives a term $2m^{(L)}+(1-q)\tau _L$. At the left boundary, we have three terms to consider, one involving the derivative of $K^-$ and the first Lax matrices taken at 0, and two more where we take the derivative of the first Lax matrices, and the boundary matrix at 0. The sum of those terms gives a total contribution of $2m^{(0)}-(1-q)\tau _1$. If we now sum all the terms that we have found, the parts proportional to (1 − q) all cancel out, and we are simply left with 2Mμ.

At the end of the day, we find that:

Equation (86)

which is the same as equation (26), with h(x) replaced by F(x), and an extra factor $\frac{1}{2}$. We have not mentioned the matrices Aμ in those last calculations, because their behaviour is trivial, and it is left to the reader, as an exercise, to check that adding one between two sites gives the correct deformation for Mμ.

As for the periodic case, we could have done all those calculations around λ = ∞ instead of 0, which would have given an equivalent result:

Equation (87)

In order to obtain the results found in [15], we need one last piece of information: how P and Q behave as μ goes to 0.

2.4. Non-deformed case: matrix Ansatz

As in the periodic case, the final step is to consider the μ → 0 limit in our transfer matrices, in order to get some information specific to the dominant eigenspace of Mμ and the corresponding eigenvalue E(μ). Once more, this is made much more difficult by the presence of the boundaries, but the principle remains the same.

We first need to consider the behaviour of Tμ(y) alone. As we recall, it is defined by:

where

and

As in the periodic case, we can expand each entry in Tμ as a finite sum of terms of the form $y^iq^k\langle \!\langle \tilde{W}|\!| A_\mu S^l A^m |\!| \tilde{V} \rangle \!\rangle$ (but this time, there is no constraint on l, as the number of particles is no longer conserved). In each entry, there is exactly one term with m = 0, equal to $\langle \!\langle \tilde{W}|\!| A_\mu S^l |\!| \tilde{V} \rangle \!\rangle$, where l is the difference between the number of particles in the initial and final configurations. We show, in appendix D, that for μ → 0, this term behaves as $1/(1-{\rm e}^{-\mu })$ times a prefactor which does not depend on l, and all the others remain finite, so that:

Notice that the prefactor is equal to $1/h_b(y)$ as defined in (82), up to a constant.

We can do the same calculations for Uμ, and see that there is no divergence there, but it is not necessary. Instead, we just apply Uμ(x) on this last result. As we recall, Uμ(x) is defined by

with

and where 〈〈W|| and ||V〉〉 verify:

Since the vector |1〉 can be written as the tensor product of the constant vector $|1_i\rangle$ on each site i, applying each X to |1〉 gives the vector

Since, as we recall, d, e, and A satisfy

we find that D and E satisfy

Equation (88)

and the conditions on the boundary vectors simply read

The matrix U0(x)T0(y) becomes, up to a prefactor:

where, for any configuration $\mathcal{C}=\lbrace \tau _i\rbrace$,

Equation (91)

(where we used the fact that Aμ = 1 for μ = 0), which does not depend on x.

Moreover, since we know that [M, U0(x)T0(y)] = 0 (by considering equation (57) at μ = 0), and that 〈1|M = 0 (because M is a stochastic matrix), we find that

Equation (92)

which is to say that |P〉 is the steady state of the open ASEP. We therefore recover the original 'matrix Ansatz', as presented in [17]. Note that a matrix Ansatz also exists for the steady state of the periodic multi-species ASEP [48], which one should be able to recover from an appropriate transfer matrix with spectral parameters and current-counting deformations, and which would be related to the Uq(SU(k)) algebra (for k different types of particles, counting holes as one of those types).
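The matrix Ansatz of [17] recovered here can be checked numerically for a tiny system. The sketch below is ours and uses the TASEP normalization DE = D + E with $\langle W|E=(1/\alpha )\langle W|$ and $D|V\rangle =(1/\beta )|V\rangle$, for α = β = 1 and q = 0, together with a truncated shift representation:

```python
# Minimal sketch (ours) of the matrix Ansatz for the TASEP (q = 0):
# DE = D + E, <W|E = <W|, D|V> = |V> for alpha = beta = 1. One can take
# D = 1 + d, E = 1 + e with d/e the down/up shift operators on a truncated
# auxiliary space, and <W| = <0|, |V> = |0>.

import numpy as np
from itertools import product

N = 30
d = np.eye(N, k=1)               # d|n> = |n-1>
e = np.eye(N, k=-1)              # e|n> = |n+1>
D, E = np.eye(N) + d, np.eye(N) + e
W, V = np.zeros(N), np.zeros(N)
W[0] = V[0] = 1.0

# stationary weights <W| X_{tau_1} ... X_{tau_L} |V>, X = D (particle), E (hole)
L = 2
weights = {}
for config in product([0, 1], repeat=L):
    X = np.eye(N)
    for tau in config:
        X = X @ (D if tau else E)
    weights[config] = float(W @ X @ V)
# weights = {(0,0): 1, (0,1): 1, (1,0): 2, (1,1): 1}: the unnormalized
# steady state of the L = 2 open TASEP with alpha = beta = 1 (Z = 5)
```

The same weights are obtained by solving the master equation directly, which is the content of equation (92).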

The normalization of U0(x)T0(y) still needs to be determined. Since we know that the only dependence on y is a prefactor $1/h_b(y)$, and that Uμ(x)Tμ(y) = Uμ(y)Tμ(x) (as shown in equation (66)), we conclude that, up to a constant term which we include in |P〉:

which is to say that, similarly to the periodic case, the prefactor in $\tilde{Q}(y)$ (as defined in (77)) diverges as 1/μ, and that $h_b(x)h_b(y)U_0(x)T_0(y)$ is independent of x and y, so that the roots and poles of $\tilde{P}(x)$ and $\tilde{Q}(y)$ all go to infinity when μ goes to 0.

Since, in that limit, $\tilde{Q}(y)$ is a constant, we also find, from equation (79), that the eigenvalue of $\tilde{t}^{[2]}$ in that eigenspace is

It is straightforward to check that equation (88) then gives E(0) = 0, which we knew from the fact that M0 is a stochastic matrix.

Starting from μ = 0, where, as we just saw, we know $\tilde{Q}$ explicitly, we can expand everything in series in μ, and find an explicit expression for E(μ) perturbatively in μ. This is what we do in the next section, where we put everything together and give a summary of the whole procedure.

2.5. Summary—functional Bethe Ansatz for the open ASEP

In this section, we collect all the results we found for the open ASEP, and show how they lead to the expressions for the cumulants of the current obtained in [15].

The first step is to construct two transfer matrices Uμ(x) and Tμ(y):

Equation (93)

Equation (94)

(involving the function hb defined in equation (82)), such that, for any x and y, we have:

Equation (95)

Using these, we can construct two commuting matrices P(x) and Q(y) as:

Equation (96)

such that:

Equation (97)

We can show that those matrices verify, for any positive integer k:

Equation (98)

(where we omit the tildes, since we have correctly normalized P and Q from the start). The matrix t[k](x) is the one-parameter transfer matrix with a k-dimensional auxiliary space.

The first two of those relations read:

Equation (99)

Equation (100)

where F(x) is a scalar function given by

Equation (101)

and t[2] is such that:

Equation (102)

Using equation (99) at x and qx, and equation (100), we can find the T-Q equation:

Equation (103)

which allows us to express Mμ in terms of Q instead:

Equation (104)

We now consider:

Equation (105)

As we saw in section 2.4, the first eigenvalue of B goes to 0 for μ → 0, which is not a priori the case for the others. Moreover, all the roots and poles of P(x) and Q(y) go to infinity in that eigenspace.

From here on, we restrict ourselves to that specific eigenspace, so that P, Q and B refer to functions rather than matrices.

The next step is to rewrite equation (99) in a different way, and see that all the information we have about P and Q makes it solvable. This was done in [12] for the periodic case, and we reproduce it here, under a slightly different form, for the open case. Let us therefore define a function W as:

Equation (106)

and a convolution kernel K, as:

Equation (107)

along with the associated convolution operator X:

Equation (108)

Using those, as we show in appendix E, one can rewrite equation (99) in terms of only one unknown function W:

Equation (109)

which is the same as equation (59) in [12].

The last step is to take equations (104) and (106) at x = 0, to find:

Equation (110)

Considering what we said before about the roots and poles of P(x) being outside of the unit circle, and those of Q(1/x) being inside, we can replace $\frac{1}{2}\log \bigl (\frac{Q(q/x)}{Q(1/x)}\bigr )$ by −W(x) when expressing E(μ) as a contour integral over the unit circle (since P will not contribute), and obtain:

Equation (111)

and

Equation (112)

in which we recognize (13) and (14) from [15]. All this is done for a < 1 and b < 1, but can then be generalized to any a and b through the same reasoning as in [47] for the mean current, replacing the unit circle c1 by small contours around $\Gamma =\lbrace 0,q^k a,q^k \tilde{a},q^k b,q^k \tilde{b}\rbrace$.

In this last result, E(μ) is expressed as an implicit function of μ, through the variable B. In order to get an explicit expression, one has to invert (111) to get B in terms of μ, and inject the result into (112). Since B is of order μ around μ = 0, this can be done perturbatively in B and μ, to yield the coefficients of E(μ) expanded in powers of μ. Those are, up to a factorial, the cumulants of the current in our system. In the general case, they can be expressed as combinations of multiple contour integrals around powers of F convolved through K [15]. In the simpler case of the TASEP, where q = 0, that convolution kernel vanishes, and (111) and (112) simplify to

and

where F(z) is also much simpler, and has at most nine poles. This makes calculating the cumulants of the current much easier, especially in the case where a = b = 0 [14].
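As an elementary end-to-end check (our addition, not from the paper), for very small systems E(μ) can be obtained by direct diagonalization of the deformed Markov matrix. For a single site with TASEP rates α = β = 1 (γ = δ = q = 0) and the current counted on the left bond, the 2×2 matrix gives $E(\mu )={\rm e}^{\mu /2}-1$ exactly, so the first cumulant (the mean current) is E'(0) = 1/2:

```python
# Sanity check (ours): E(mu) for the L = 1 open TASEP, alpha = beta = 1,
# obtained as the largest eigenvalue of the 2x2 deformed Markov matrix.

import numpy as np

def E_of_mu(mu, alpha=1.0, beta=1.0):
    M = np.array([[-alpha, beta],
                  [alpha * np.exp(mu), -beta]])
    return max(np.linalg.eigvals(M).real)

mu = 0.3
exact = np.exp(mu / 2) - 1.0          # closed form for alpha = beta = 1
h = 1e-6
mean_current = (E_of_mu(h) - E_of_mu(-h)) / (2 * h)   # ~ 0.5
```

For larger L the same diagonalization can be compared, order by order in μ, with the perturbative inversion of (111) and (112).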

2.6. XXZ spin chain with general boundary conditions

In this section, we explain how our construction for the open ASEP can be translated for the spin-1/2 XXZ chain with non-diagonal boundary conditions [6].

Let us first define the bulk Hamiltonian of the XXZ spin chain of length L:

with h(i) acting as:

on sites i and i + 1 (in basis {00, 01, 10, 11}, as usual), and as the identity on the rest of the chain. We define Δ as $\frac{1}{2}(q^{-1/2}-q^{1/2})$, which is not the usual definition for the XXZ chain (that can be obtained simply by replacing q by $q^2$).
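The correspondence with the XXZ chain rests on a diagonal similarity transformation of the local ASEP generator; the following sketch (ours, with an arbitrary gauge choice) shows the symmetrization numerically, consistent with the relation $M_{\lbrace \!\mu _i\!\rbrace }=\sqrt{q}H+\epsilon$ stated below:

```python
# Sketch (ours): conjugating the local ASEP bulk generator by a diagonal
# matrix symmetrizes it, both hopping terms becoming sqrt(q) -- the XXZ
# exchange term up to diagonal contributions.

import numpy as np

q = 0.4
M_loc = np.array([[0., 0., 0., 0.],
                  [0., -q, 1., 0.],
                  [0., q, -1., 0.],
                  [0., 0., 0., 0.]])          # basis {00, 01, 10, 11}

D = np.diag([1.0, np.sqrt(q), 1.0, 1.0])      # one convenient gauge choice
H = D @ M_loc @ np.linalg.inv(D)
# H is symmetric, with H[1, 2] = H[2, 1] = sqrt(q)
```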

Let us also write the deformed Markov matrix $M_{\lbrace \!\mu _i\!\rbrace }$ for the special choice of weights defined by:

which is on the line $\mu =\frac{1}{2} \log {\bigl (\frac{\gamma \delta }{\alpha \beta }q^{L-1}\bigr )}+\imath \mathbb {R}$ if $\nu _0$ and $\nu _L$ are imaginary numbers (in which case $M_{\lbrace \!\mu _i\!\rbrace }$ is Hermitian). The deformed local matrices become:

It is straightforward to check that in this case, we have $M_{\lbrace \!\mu _i\!\rbrace }=\sqrt{q}H+\epsilon$, where ε is a constant, with the boundary matrices being equal to:

Since we have three nontrivial parameters in each of those matrices, they are completely general: we can write (without restricting ourselves to Hermitian matrices)

with

which is to say

and

Those are well defined for any values of $a_z$, $a_+$, $a_-$, $b_z$, $b_+$ and $b_-$.

Considering expression (14), or its equivalent for an open chain, and noting that $A_{\frac{\mu _i}{2}}=A^{-1/4}$, we can rewrite U(x) and T(y) in a way better suited to this situation:

with

where

The boundary vectors become:

The matrices $N_x$, $\Sigma _+$ and $\Sigma _-$ satisfy the Uq[SU(2)] algebra [45]:

and the conditions on the boundary vectors become:

We may note that the structure of this solution bears a strong resemblance to that of the Lindblad master equation found in [54, 55]. In that case, the algebraic relations satisfied by the boundary vectors are different from ours, as there is only one vector per boundary but two equations per vector, which may constrain the values of the boundary parameters. The connection between integrable spin chains and certain boundary-driven Lindblad models is also investigated in [56, 57], and a matrix product structure of the density matrix was observed in [58] for an open Hubbard chain, but the precise conditions for such systems to be integrable have yet to be understood.

3. Conclusion

In the present work, we treat the case of the asymmetric simple exclusion process with generic open boundaries and a current-counting deformation. We construct a transfer matrix with an infinite-dimensional auxiliary space and two free parameters, which commutes with the deformed Markov matrix of our system. We then derive the two essential properties of that transfer matrix: it factorizes into two commuting matrices P and Q, each carrying one of the free parameters, and for certain values of the product of those parameters it breaks into two independent matrices, one of which is the usual one-parameter transfer matrix for a given dimension of the auxiliary space. From these results, we derive the TQ equations for the transfer matrices, which allow us to identify our Q matrix with Baxter's Q-operator [19], although it is obtained through an entirely different method. Writing the algebraic relations between the P and Q matrices in a given eigenstate of our system then yields the functional Bethe Ansatz equations, which we can use, through the same method as in [12], to find an explicit expression for the leading eigenvalue of our deformed Markov matrix, and thus give a rigorous proof of the results conjectured in [15].

We have mainly focused on one specific eigenvalue, which is of particular importance for the physics of the ASEP, and which we were able to obtain explicitly, for any system size and boundary parameters, and perturbatively in the deformation parameter μ, using specific knowledge of the corresponding eigenvector in the non-perturbed case. Note, however, that only a small part of our derivation depends on that specific knowledge: in principle, with similar information on other eigenstates (i.e. the positions of the roots and poles of Q for μ = 0, which can in principle be obtained through the usual Bethe equations [26, 59]), we would be able to obtain similar results. The density of the spectrum was obtained, in the thermodynamic limit, for the periodic TASEP in [60, 61], using the coordinate Bethe Ansatz, and it would be interesting to know whether that method can be extended to the open case or to the ASEP, and be rewritten in terms of the functional Bethe Ansatz (for the open ASEP, only the first excited state without deformation has been analysed, in [62]). Our construction can also be applied in more detail to the XXX and XXZ spin chains, which will be the subject of a future paper.

It should be noted that, when solving the TQ equation for μ = 0, Q is usually taken to have the so-called 'crossing symmetry' x↔1/qx, which is a symmetry of the equation [11, 21]. In our case, Q(y) is explicitly constructed as a series in powers of y, and the fact that it is entire and without zeros inside of the unit circle (which is impossible with the aforementioned symmetry) plays an important part in finding the explicit form of E(μ). It would be interesting to analyse further the precise relation between our construction and the usual approach to the TQ equation.

There remains the problem of finding an expression for the eigenvectors, possibly in a form similar to that which exists for the periodic case (as a generalized determinant, or equivalently as a product of creation operators acting on a vacuum state). This was done for special cases where the boundary parameters satisfy certain constraints [24–27], and also more recently in the general symmetric case (i.e. the XXX spin chain) [63], but no complete solution for the ASEP has yet been found.

Let us also comment on the alternative method of constructing the functional Bethe equations devised recently [39–43] in the context of the XXX and XXZ spin chains, which consists in adding an extra inhomogeneous term to the TQ equation, in order to make polynomial solutions possible (it is easy to see, from the TQ equation obtained here, that polynomial solutions do not in general exist without that extra term), without modifying the relation between Q and the eigenvalues of the Hamiltonian. We do not know at present exactly how that method compares with ours, or whether it solves the same aspects of the problem. As far as we can tell, both methods have their merits. The alternative construction has polynomial solutions and allows for a finite set of Bethe equations, which are easier to obtain explicitly, but only when that is possible (e.g. for small system sizes), while our construction allows us to use the PQ equation, which seems to be essential in obtaining a general, though perhaps less explicit, expression for the desired eigenvalue regardless of the system size, and which we would not be able to obtain in the same way with an inhomogeneous term. Moreover, although our construction requires some heavy calculations involving q-series, it is carried out explicitly and at the level of the whole matrices rather than for individual eigenvalues only, which spares us from having to make assumptions on Q. It would be interesting to see whether those two methods can be combined in any way.

Finally, we may note that certain steps in our derivation seem somewhat sub-optimal (in particular the proof of the block decomposition of our infinite-dimensional K matrices, which can be found in appendix B.2), and we believe that simpler, purely algebraic derivations might exist.

Acknowledgments

The authors would like to thank K Mallick, R Blythe, E Ragoucy, P Di Francesco, V Terras, G Misguich, T Prosen and P Baseilhac for their helpful comments and discussions. AL gratefully acknowledges financial support from the Interuniversity Attraction Pole—Phase VII/18 (Dynamics, Geometry and Statistical Physics) at the KU Leuven.

Appendix A.: Commutation relations

A.1. Bulk

The commutation relation for the periodic case or for the bulk of the open case involves a calculation almost identical to that which can be found in the appendices of [16].

Let us recall:

The parts of the transfer matrix and of the Markov matrix involving sites i and i + 1 are:

The commutator of those two gives:

Equation (A.1)

where we got from the second to the third line using:

Equation (A.2)

A.2. Boundaries

We recall, for the left boundary vectors:

We consider the commutator of the left boundary matrix from Mμ with the relevant part of Uμ and Tμ (omitting the matrix Aμ here, because, as in the periodic case, its action is trivial), and use those relations. We get:

The first terms in each of those two equations cancel one another:

We note that if we had had two different spectral parameters for the two diagonal terms in X(x) or X(y), this relation would not have been possible (or we would have needed two equations on each boundary vector instead of one).

As for the second terms, a straightforward calculation gives:

so that

The exact same calculations can be done at the right boundary, and yield:

Appendix B.: Boundary vectors: R matrix and truncation

We start this appendix with a few definitions and formulae that we will need during our calculation. Most of those can be found in [53].

First, the q-binomial recursion formula at order k:

Equation (B.1)
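The recursion itself is missing from the text above. A common form of the q-binomial recursion (assumed here; the paper's exact "order k" statement may be written differently) is the q-Pascal rule $\binom{n+1}{k}_q = \binom{n}{k-1}_q + q^k \binom{n}{k}_q$, with $\binom{n}{k}_q = \frac{(q)_n}{(q)_k (q)_{n-k}}$. A quick numerical check:

```python
q = 0.3  # any 0 < q < 1 works for a numerical check

def qpoch(a, n):
    """Finite q-Pochhammer symbol (a)_n = prod_{j=0}^{n-1} (1 - a q^j)."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def qbinom(n, k):
    """q-binomial coefficient [n choose k]_q = (q)_n / ((q)_k (q)_{n-k})."""
    if k < 0 or k > n:
        return 0.0
    return qpoch(q, n) / (qpoch(q, k) * qpoch(q, n - k))

# q-Pascal recursion: [n+1 choose k]_q = [n choose k-1]_q + q^k [n choose k]_q
for n in range(6):
    for k in range(n + 2):
        lhs = qbinom(n + 1, k)
        rhs = qbinom(n, k - 1) + q**k * qbinom(n, k)
        assert abs(lhs - rhs) < 1e-12
print("q-Pascal recursion verified")
```

This is the same recursion that makes the q-Leibniz formula (B.10) close under induction.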

Then, some useful basic hypergeometric functions:

Equation (B.2)

Equation (B.3)

and the q-Appell function:

Equation (B.4)

along with special relations that they satisfy: the q-Euler formulae

Equation (B.5)

Equation (B.6)
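The two q-Euler formulae are not visible above. The identities usually grouped under that name are Euler's sums $\sum_{n\geq0} \frac{x^n}{(q)_n} = \frac{1}{(x)_\infty}$ and $\sum_{n\geq0} \frac{q^{n(n-1)/2}(-x)^n}{(q)_n} = (x)_\infty$, together with the q-binomial theorem $\sum_{n\geq0}\frac{(a)_n}{(q)_n}x^n = \frac{(ax)_\infty}{(x)_\infty}$ (assumed standard forms; the paper may use variants). A numerical check:

```python
q = 0.3

def qpoch(a, n):
    """Finite q-Pochhammer (a)_n."""
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def qpoch_inf(a, terms=200):
    """Infinite q-Pochhammer (a)_inf, truncated (q^200 is negligible)."""
    return qpoch(a, terms)

x, a, N = 0.4, 0.7, 200

# Euler: sum x^n / (q)_n = 1 / (x)_inf
s1 = sum(x**n / qpoch(q, n) for n in range(N))
assert abs(s1 - 1.0 / qpoch_inf(x)) < 1e-10

# Euler: sum q^{n(n-1)/2} (-x)^n / (q)_n = (x)_inf
s2 = sum(q**(n * (n - 1) // 2) * (-x)**n / qpoch(q, n) for n in range(N))
assert abs(s2 - qpoch_inf(x)) < 1e-10

# q-binomial theorem: sum (a)_n / (q)_n x^n = (ax)_inf / (x)_inf
s3 = sum(qpoch(a, n) / qpoch(q, n) * x**n for n in range(N))
assert abs(s3 - qpoch_inf(a * x) / qpoch_inf(x)) < 1e-10
print("q-Euler and q-binomial identities verified")
```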

and relations that link Φ1 and 3ϕ2:

Equation (B.7)

Equation (B.8)

The integral representation of the q-Beta function:

Equation (B.9)

where $q^{\log _q(x)}=x$ and $\int \nolimits _0^1\frac{d_qt}{t}f(t)=\sum\nolimits_{k=0}^\infty f(q^k)$.

The q-derivative: $D_x f(x)=\frac{f(x)-f(qx)}{x}$, and the q-Leibniz formula:

Equation (B.10)
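The q-Leibniz formula itself is missing from the text. With the q-derivative defined above, one standard form (assumed here) is $D^n_x(fg)(x) = \sum_{k=0}^n \binom{n}{k}_q (D^k_x f)(x)\,(D^{n-k}_x g)(q^k x)$, which can be proved by induction using the q-Pascal recursion (B.1). A check on polynomials:

```python
q = 0.5

def qpoch(a, n):
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def qbinom(n, k):
    if k < 0 or k > n:
        return 0.0
    return qpoch(q, n) / (qpoch(q, k) * qpoch(q, n - k))

def D(f):
    """q-derivative: D f(x) = (f(x) - f(qx)) / x."""
    return lambda x: (f(x) - f(q * x)) / x

def Dn(f, n):
    """n-fold q-derivative."""
    for _ in range(n):
        f = D(f)
    return f

f = lambda x: 1 + 2 * x + x**3
g = lambda x: 3 - x**2 + 0.5 * x**4
fg = lambda x: f(x) * g(x)

x0, n = 0.7, 3
lhs = Dn(fg, n)(x0)
rhs = sum(qbinom(n, k) * Dn(f, k)(x0) * Dn(g, n - k)(q**k * x0)
          for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9
print("q-Leibniz formula verified")
```

Note the evaluation point $q^k x$ in the second factor, which is exactly what generates the shifted arguments appearing in the calculation of appendix B.2.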

Finally, a useful relation:

Equation (B.11)

which can be found by expanding $\frac{(axt)_\infty (byt)_\infty }{(xt)_\infty (yt)_\infty }$ in two different ways.
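Relation (B.11) is not visible above. Expanding the given product in the two ways suggested (pairing $(axt)_\infty$ with $(xt)_\infty$ and $(byt)_\infty$ with $(yt)_\infty$, or cross-pairing them) and matching powers of t via the q-binomial theorem yields $\sum_{n+m=N}\frac{(a)_n(b)_m}{(q)_n(q)_m}x^ny^m = \sum_{n+m=N}\frac{(ax/y)_n(by/x)_m}{(q)_n(q)_m}y^nx^m$ (a plausible reading; the paper's (B.11) may be normalized differently). A numerical check:

```python
q = 0.3

def qpoch(c, n):
    out = 1.0
    for j in range(n):
        out *= 1.0 - c * q**j
    return out

a, b, x, y = 0.6, 0.2, 0.5, 0.8

# both sides are the coefficient of t^N in (axt)_inf (byt)_inf / ((xt)_inf (yt)_inf)
for N in range(8):
    lhs = sum(qpoch(a, n) * qpoch(b, N - n)
              / (qpoch(q, n) * qpoch(q, N - n)) * x**n * y**(N - n)
              for n in range(N + 1))
    rhs = sum(qpoch(a * x / y, n) * qpoch(b * y / x, N - n)
              / (qpoch(q, n) * qpoch(q, N - n)) * y**n * x**(N - n)
              for n in range(N + 1))
    assert abs(lhs - rhs) < 1e-10
print("two-expansion identity verified")
```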

B.1. Action of the R matrix

We start from $I_0=R^{-1}(x,x;y,y) |\!| V(x) \rangle \!\rangle \cdot |\!| \tilde{V}(y) \rangle \!\rangle =\frac{(x y)_\infty }{(q)_\infty }\frac{(q A_2)_\infty }{(x y A_2)_\infty } K^+$, as defined in ...

Equation (B.12)

where the term ||01, 02〉〉 is implicit.

We expand all but the leftmost two ratios:

Using equation (B.1), and relabelling n − r and m − s as n and m, we get:

We then use equation (B.7):

Then we use (B.5) on 2ϕ1:

and (B.8) on 3ϕ2:

We then use (B.1) in reverse (and exchange k and n + m − k for convenience) to find:

Equation (B.13)

In this expression, we can see that, after rescaling S1 to S1/b, we have the symmetry {S1 ↔ S2, b ↔ 1/b}, so that

which is to say, after rearranging a few terms:

Through the exact same calculations, we can also show that

B.2. Truncation at xy = q^{1 − p}

What we want to prove here, taking xy = q^{1 − p}, is twofold. First, we need to show that $K^+_{i,j}(x,y)=0$ for j ⩽ p − 1 and i ⩾ p. Then, we need to find the relation between $K^+_{i+p,j+p}(x,y)$ and $K^+_{i,j}(q^px,q^py)$, which we will do by calculating $\overline{I_0}=D_{S_1}^p D_{S_2}^p I_0$.

The first part is rather straightforward. Considering equation (B.14), and since $K^+=\frac{(q)_\infty }{(xy)_\infty }\frac{(xy A_2)_\infty }{(q A_2)_\infty } I_0$, we see that K+ has a factor $\frac{(xy A_1)_\infty }{(x y A_2)_\infty }$ which results in a factor $\frac{(xy)_i}{(x y)_j}$ in $K^+_{i,j}$. Taking xy = q^{1 − p}, this factor is 0 precisely when j ⩽ p − 1 and i ⩾ p, and since the rest of the expression for K+ has no pole at xy = q^{1 − p}, we conclude that the corresponding entries of K+ are equal to 0. Note that, even though that prefactor is infinite for i ⩽ p − 1 and j ⩾ p, the absence of poles in K+ tells us that it is compensated by the rest of the expression.
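The vanishing pattern can be checked directly on the finite q-Pochhammer symbol: with $xy = q^{1-p}$, the product $(xy)_i = \prod_{k=0}^{i-1}(1-q^{1-p+k})$ contains the factor $1-q^0 = 0$ exactly when $i \geq p$, so the ratio $(xy)_i/(xy)_j$ vanishes for $i \geq p$, $j \leq p-1$. A sketch of this observation (with assumed indexing conventions):

```python
q, p = 0.4, 3
xy = q**(1 - p)  # the truncation point xy = q^{1-p}

def qpoch(c, n):
    out = 1.0
    for j in range(n):
        out *= 1.0 - c * q**j
    return out

# (xy)_i picks up the factor 1 - q^{1-p+(p-1)} = 1 - q^0 = 0 once i >= p
for i in range(2 * p):
    val = qpoch(xy, i)
    if i >= p:
        assert abs(val) < 1e-12
    else:
        assert abs(val) > 1e-12
print("(xy)_i = 0 iff i >= p, as claimed")
```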

We now need to calculate $\overline{I_0}=D_{S_1}^p D_{S_2}^p I_0$. In order to do this, we first notice that

and we use equation (B.9) on $\frac{(y/x)_k(xy)_{n+m-k}}{(y^2)_{n+m}}$ in (B.13) to write:

where $I_0=I_1 \frac{(xy)_\infty (y/x)_\infty }{(y^2)_\infty (q)_\infty }$. That prefactor will be put back in later when we re-sum the integral. Note that the term $(xy)_\infty$ gives 0 when we take xy = q^{1 − p}, but since that term is part of I0 and not of K+, it merely serves to simplify the calculation, and will be taken away at the end. For that reason, we will not apply xy = q^{1 − p} to that term, but keep it in its generic form.

We will now differentiate with respect to S1 and S2. Consider the following relation, which is easy to prove by recursion: if ab/cd = q^{1 − p}, then

This relation applies to both groups of q-Pochhammer symbols at the right in the integral (remember that xy = q^{1 − p}), and gives:

We then have:

We use the formula for the q-derivative of a product (B.10) on the derivative with respect to $\tilde{b}$, with $f(\tilde{b})=(y\tilde{b}t)_p$ and the rest in g. Considering that $D_{\tilde{b}}^l(y\tilde{b}t)_p=(q^pyt)^l(q^{-p})_l(y\tilde{b}tq^l)_{p-l}$, we find:

so that, renaming k − l as k, we have:

Consider the part that depends on l (writing $\frac{1}{(y/xq^{k+l}t)_\infty }=\frac{(y/xq^{k}t)_l}{(y/xq^{k}t)_\infty }$):

which, through (B.11), is equal to:

Using that xy = q^{1 − p}, only the term for l = p survives, namely $(y/\tilde{b})_p(y\tilde{b}tq^{k})^p(-1)^pq^{p(p-1)/2}$, so that:

Using again the formula for the derivative of a product and relabelling k − l as k again, we find:

We have almost the same sum as before over l, only with an extra term q^{pl}, which changes the result to:

so that, this time, the whole sum survives when taking xy = q^{1 − p}.

We now have to re-sum the integral over t. First, we expand the two ratios of q-Pochhammer symbols on the right, and apply the derivative with respect to $\tilde{b}$:

so that

We only have to consider the part that depends on k and l, as we need to keep the terms that depend on n and m intact. We re-sum the integral, going back from I1 to I0:

Noticing that $(xy)_{n+m-k+l+p}\,(xyq^{p+l})_{p-l} = (xyq^{p+l})_{n+m-k}\,(xy)_{2p}$, we find:

Using (B.11) on the sum over k, we get:

Equation (B.14)

We now write:

and

Putting this into (B.14), we get:

Since $(a)_p=\sum \limits _{l}\frac{(q)_p}{(q)_l(q)_{p-l}}q^{l(l-1)/2}(-a)^l$, this simplifies to

We can now get rid of the sum over r. We have:

which gives

Using (B.11) one final time on the sum over k gives:

We can at last put this back into $\overline{K^+}$. We get:

which is to say

In terms of K+, this gives us, for xy = q^{1 − p}:

Equation (B.15)

The equivalent result for K is:

Equation (B.16)

Appendix C.: TQ equations and fusion hierarchy

In this appendix, we show how the set of PQ equations

allows us to recover the T–Q equation for each t[k], as well as fusion equations, in which products of t[k]'s are expressed as linear combinations of t[l]'s with different dimensions l. We will consider the periodic case, where t[1](x) = h(x). For the open case, one simply has to replace h(x) by F(x) and μ by 2μ.

To make our notation more compact, we will make the dependence on x implicit, and write P(q^k x) = P_k, Q(1/q^k x) = Q_k, $t^{[k]}(q^lx)=t^{[k]}_l$ and h(q^k x) = h_k. In a given equation, one can shift all the indices by any integer k, which corresponds to taking the equation at q^k x instead of x.

C.1. T–Q equations

This first calculation is rather straightforward. Consider the PQ equation of order 1

Equation (C.1)

and multiply the PQ equation of order k

Equation (C.2)

by the product of all Ql's for l between 0 and k − 2:

The right-hand side of the equation can then be rewritten to give

in which we can simply use equation (C.1) shifted by n to obtain the T–Q equation:

Equation (C.3)

As we see, this equation involves products of k − 1 matrices Q. By transferring all the Qs to the right-hand side, we find an alternative expression:

Equation (C.4)

C.2. Fusion equations

Let us consider $t^{[2]}_0t^{[k]}_1$. Using the PQ equations of order 2 and k, we find:

The first two terms of this sum can also be obtained by considering $t^{[k+1]}_0h_1$:

so that

We obtain the fusion equations:

Equation (C.5)

which, for k → ∞, allows us to recover the T–Q equation (45).

Through the same reasoning, we also find

Equation (C.6)

and combining the two, we find

Equation (C.7)

Appendix D.: Non-deformed limit: asymptotic calculations

We want to estimate terms of the form $I_{ij}=\langle \!\langle \tilde{W}|\!| A_\mu A^i S^j |\!| \tilde{V} \rangle \!\rangle$, with

for μ → 0, assuming that |a|, $|\tilde{a}|$, |b| and $|\tilde{b}|$ are all smaller than 1. In particular, we want to show that Iij diverges if i = 0, and is finite otherwise.

Let us assume that j ⩾ 0 (the calculations for j < 0 are similar). We have:

Since we are looking for possible divergences as e^{−μ} goes to 1, we should focus on the asymptotic behaviour of the terms of the series as n + m becomes large. As we will need to approximate q-Pochhammer symbols, let us first remark that, for 0 ⩽ x ⩽ 1,

which can be proven by recursion, considering that $(xq^k)_\infty = (1 - xq^k)(xq^{k+1})_\infty$ for $k\in {\mathbb {N}}$ and that $(0)_\infty = 1$. From this, we deduce that

so that $(x)_\infty$ is a good approximation of $(x)_N$ if N is large enough.
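The displayed bounds did not survive extraction; the key exact identity behind this step is $(x)_\infty = (x)_N\,(xq^N)_\infty$, so that $(x)_N - (x)_\infty = (x)_N\bigl(1-(xq^N)_\infty\bigr) \leq xq^N/(1-q)$ for $0 \leq x \leq 1$ (our reconstruction of the estimate; the paper's exact inequality may differ). Numerically:

```python
q, x = 0.6, 0.9

def qpoch(c, n):
    out = 1.0
    for j in range(n):
        out *= 1.0 - c * q**j
    return out

def qpoch_inf(c, terms=300):
    # q^300 is far below machine precision, so truncation is exact in doubles
    return qpoch(c, terms)

for N in range(1, 20):
    # exact splitting: (x)_inf = (x)_N (x q^N)_inf
    assert abs(qpoch_inf(x) - qpoch(x, N) * qpoch_inf(x * q**N)) < 1e-12
    # the error (x)_N - (x)_inf is positive and O(q^N), uniformly in x
    err = qpoch(x, N) - qpoch_inf(x)
    assert -1e-12 <= err <= x * q**N / (1 - q)
print("(x)_N -> (x)_inf geometrically in N")
```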

Consider now, for N large, the term $\sum \nolimits _{n+m=N}\frac{(ya)_n(y/a)_{m}}{(q)_n(q)_{m}}(a\tilde{a})^{m}$. It can be approximated, for 0 ⩽ n ⩽ N/2, by $\sum \nolimits _{n=0}^{N/2}\frac{(ya)_n(y/a)_{\infty }}{(q)_n(q)_{\infty }}(a\tilde{a})^{N-n}$ with

and, for N/2 < n ⩽ N, by

The same can be done with $\sum \limits _{r+s=N-j}\frac{(y b)_r(y/b)_{s}}{(q)_r(q)_{s}}(b \tilde{b})^s\sim \frac{(yb)_\infty (y\tilde{b})_{\infty }}{(b\tilde{b})_\infty (q)_{\infty }}$, which leaves us with

As we can see, if i = 0, Iij diverges as 1/μ, with the prefactor that is given here, and is otherwise finite. We can check that, in all cases, the error in this estimation is finite.
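At its core, the 1/μ divergence at i = 0 is that of a geometric sum whose ratio tends to 1 as μ → 0; schematically (a toy illustration of the mechanism, not the full Iij):

```python
import math

# sum_N r^N with r = exp(-mu) equals 1/(1 - exp(-mu)) ~ 1/mu as mu -> 0
for mu in [0.1, 0.01, 0.001]:
    s = 1.0 / (1.0 - math.exp(-mu))  # closed form of the geometric sum
    # mu * s = 1 + mu/2 + O(mu^2), so the sum diverges exactly as 1/mu
    assert abs(mu * s - 1.0) < mu
print("geometric sum diverges as 1/mu")
```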

Appendix E.: Self-consistent integral representation of the PQ equation

We consider equation (99):

and define a function W(x) as in (106):

Combining them, along with $B=-{\rm e}^{2\mu }\bigl (Q(0)\bigr )^{-1}$, we see that we also have

Since for μ small enough, all the roots and poles of P(x) and Q(y) are outside of the unit circle, we can expand their logarithms on and inside of the unit circle in powers of x and y, respectively. We write:

where we recall that P(0) was set to 1.

In those terms, the two expressions for W(x) become

Notice that in the first equation, each of the coefficients a_k and b_k is connected to a different power of x, so that W(x) determines them completely. We can therefore, in principle, write the argument of the exponential in the second equation in terms of W(x) alone. In this case, we can even do it explicitly: consider that the coefficients of f(x) − f(qx) are of the form a_k(1 − q^k), and those of f(qx) are a_k q^k. To express the latter in terms of the former, each $x^k$ has to be replaced by $\frac{q^k}{1-q^k}x^k$, which can be done through a simple convolution:

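Coefficient-wise, the manipulation is simple. A minimal sketch (with illustrative coefficients, not the paper's actual series) of recovering f(qx) from f(x) − f(qx):

```python
q = 0.5
K = 8  # truncation order

# illustrative coefficients a_k of f(x) = sum_k a_k x^k, with a_0 = 0
a = [0.0, 1.0, -0.3, 0.7, 0.2, -0.1, 0.05, 0.4]

diff = [a[k] * (1 - q**k) for k in range(K)]   # coefficients of f(x) - f(qx)
shift = [a[k] * q**k for k in range(K)]        # coefficients of f(qx), the target

# multiply coefficient k by q^k / (1 - q^k); on functions this is a
# coefficient-wise (Hadamard) product, i.e. the convolution with the kernel
# sum_k q^k/(1-q^k) x^k mentioned in the text.  The k = 0 term is excluded,
# which is why the constant term disappears in the convolution.
recovered = [0.0] + [diff[k] * q**k / (1 - q**k) for k in range(1, K)]

assert all(abs(recovered[k] - shift[k]) < 1e-12 for k in range(K))
print("f(qx) recovered from f(x) - f(qx)")
```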
The same can be done for g with negative powers of x, and, putting everything together, we find:

where

with

Notice that the constant term 2μ in W disappears in the convolution.

This allows us to finally write:

Equation (E.1)

This same method was used in [12] for the periodic ASEP.
