Integrable operators, ∂-problems, KP and NLS hierarchy

We develop the theory of integrable operators K acting on a domain of the complex plane with smooth boundary, in analogy with the theory of integrable operators acting on contours of the complex plane. We show how the resolvent operator is obtained from the solution of a ∂-problem in the complex plane. When such a ∂-problem depends on auxiliary parameters we define its Malgrange one form, in analogy with the theory of isomonodromic problems. We show that the Malgrange one form is closed and coincides with the exterior logarithmic differential of the Hilbert–Carleman determinant of the operator K. With suitable choices of the setup, we show that the Hilbert–Carleman determinant is a τ-function of the Kadomtsev–Petviashvili (KP) hierarchy and, with a further specialization, of the focusing Nonlinear Schrödinger equation.

Γ_+(z) = Γ_-(z) M(z),  z ∈ Σ,  M(z) = 1 + 2πi f(z) g^T(z),  Γ(z) → 1 as |z| → ∞.  (1.4)

Here Γ_±(z) denote the boundary values of the matrix Γ(z) as z approaches the oriented contour Σ from the left and from the right, and 1 is the identity matrix in Mat(r × r, C). The matrices F and G that define the resolvent kernel (1.3) are related to the solution Γ of the RH problem (1.4) by the relation

F(z) = Γ(z) f(z),  G(z) = (Γ(z)^T)^{-1} g(z).  (1.5)

This connection between the Fredholm determinant and the RH problem has been exploited in several contexts where the kernel depends on large parameters, and the asymptotic behaviour of the Fredholm determinant is obtained via the Deift–Zhou nonlinear steepest descent method applied to the corresponding RH problem [12]. This analysis has been successfully implemented for n = 1 and r = 2 in a large class of kernels originating in random matrices, orthogonal polynomials, probability and partial differential equations (see e.g. [5,11,12,16,19]). The knowledge of the resolvent operator allows one to write variational formulae for the Fredholm determinant of the operator Id − K as

δ log det(Id − K) = −Tr((Id − K)^{-1} ∘ δK) = −Tr((Id + R) ∘ δK),  (1.6)

where here and below δ stands for the exterior total differential in the space of parameters. There exist Riemann–Hilbert problems more general than (1.4) that describe the inverse monodromy problem of a linear system of first order ODEs in the complex plane: in those cases the deformation parameters can be introduced in such a way that the monodromy data do not depend on them. In this context, the Kyoto school headed by Jimbo, Miwa and Ueno [25] introduced the concept of isomonodromic τ-function, starting from a differential one form ω (the Malgrange one form) defined on the space of isomonodromic deformation parameters.
In several situations the isomonodromic tau function can be identified with the Fredholm determinant (possibly up to multiplication by explicit factors) of an operator of integrable form [2], [9], [14].
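Although the determinants above are treated analytically, it may help to recall that Fredholm determinants of smooth kernels are also effectively computable: a Nyström-type discretization with Gauss–Legendre quadrature (in the spirit of Bornemann's method) converges rapidly. The sine-kernel example below, and all numerical choices in it, are ours, purely for illustration:

```python
import numpy as np

def fredholm_det(kernel, a, b, m=60):
    """Approximate det(Id - K) on L^2(a, b) by Gauss-Legendre discretization:
    det(Id - K) ~ det(I - diag(sqrt(w)) K(x_i, x_j) diag(sqrt(w)))."""
    x, w = np.polynomial.legendre.leggauss(m)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)   # nodes mapped to [a, b]
    w = 0.5 * (b - a) * w                   # rescaled weights
    s = np.sqrt(w)
    K = kernel(x[:, None], x[None, :]) * s[:, None] * s[None, :]
    return np.linalg.det(np.eye(m) - K)

def sine_kernel(x, y):
    """K(x, y) = sin(pi(x - y)) / (pi(x - y)), equal to 1 on the diagonal."""
    d = np.pi * (x - y)
    return np.where(np.abs(d) < 1e-14, 1.0, np.sin(d) / np.where(d == 0, 1.0, d))

det = fredholm_det(sine_kernel, 0.0, 0.1)
# to first order in the interval length s = 0.1 one has det(Id - K) ~ 1 - s
assert 0.895 < det < 0.905
```

The symmetrized discretization det(I − diag(√w) K diag(√w)) is the standard trick that keeps the discretized operator similar to a symmetric matrix while reproducing the quadrature rule.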
An enlargement of the class of integrable operators (1.1) was studied by Bertola and Cafasso [3] who considered Hankel composition operators that have been reduced to integrable operators in Fourier space.
The common feature of all these works is the appearance, in one way or another, of a Riemann-Hilbert problem, namely, a boundary value problem of a matrix with discontinuities across a contour (or union thereof) with boundary values related multiplicatively by a group-like element M (the "jump matrix") as in (1.4).
The goal of the present manuscript is to enlarge the class of integrable operators by considering operators acting on a bounded domain D of the complex plane with a matrix kernel K(z, z̄; w, w̄) ∈ Mat(n × n, C), namely

K(z, z̄; w, w̄) := f^T(z, z̄) g(w, w̄)/(z − w),  f^T(z, z̄) g(z, z̄) ≡ 0 ≡ (∂_z f(z, z̄))^T g(z, z̄),  f, g ∈ C^∞(D, Mat(r × n, C)).  (1.8)

Here and below, instead of ∂̄, we use the symbol ∂_z̄ to denote the derivative with respect to z̄. The dependence of f and g on z and z̄ is to remind the reader that f and g are in general smooth, not necessarily holomorphic, matrix functions on the complex plane. The kernel K(z, z̄; w, w̄) defines an integral operator K which is Hilbert–Schmidt with a well-defined and continuous diagonal in D × D, and therefore its Fredholm determinant is well defined [32]. Our results are the following.
• In Section 2 we show that the resolvent of the integral operator Id − K is obtained through the solution of a ∂-problem (instead of a Riemann–Hilbert problem) for a matrix function Γ:

∂_z̄ Γ(z, z̄) = Γ(z, z̄) M(z, z̄),  M(z, z̄) = π f(z, z̄) g^T(z, z̄) χ_D(z),  Γ(z) → 1 as |z| → ∞,  (1.9)

where χ_D(z) is the characteristic function of the domain D. Note that the matrix M(z, z̄) is nilpotent because of (1.8). We show that the ∂-problem is solvable if and only if the operator Id − K is invertible. Further we show, in analogy with integrable operators defined on contours, that the kernel of the resolvent is

R(z, z̄; w, w̄) = F^T(z, z̄) G(w, w̄)/(z − w),  F := Γ f,  G := (Γ^T)^{-1} g,

where Γ solves the ∂-problem (1.9).
• In Section 3, using the Jacobi variational formula (1.6), we show that the logarithmic variation of the regularized determinant of Id − K equals a one form ω given by an explicit area integral over D involving Γ and M (formula (1.11)), where Tr(K) is the (formal) operator trace, defined as

Tr(K) := ∫∫_D Tr(K(z, z̄; z, z̄)) d²z,  (1.12)

and Tr(Γ(z)) is the matrix trace. The one form ω is shown to be closed. In analogy with the literature on Riemann–Hilbert problems on contours [1,2], we call ω the Malgrange one form of the ∂-problem. The corresponding τ-function of the ∂-problem is henceforth defined by the relation δ log τ(t) = ω. Using the relation between the Fredholm determinant and the Hilbert–Carleman determinant [15],

det_2(Id − K) := det(Id − K) e^{Tr(K)},

we conclude that the τ-function of the ∂-problem coincides with the Hilbert–Carleman determinant of the operator K, namely τ(t) = det_2(Id − K). We also show (Subsection 3.1) that the formula (1.11) defines a closed one form under the less restrictive assumption that M(z, z̄) is traceless but not nilpotent; this allows us to define a τ-function (up to multiplicative constants) of the ∂-problem by the same relation δ log τ(t) = ω. Note that in this more general case the τ-function of the ∂-problem is well defined, but it is not in general related to a Hilbert–Carleman determinant of some integral operator.
• Finally, in Section 4 we use the results of the previous section by considering the ∂-problem (1.14), where M is a 2 × 2 matrix of the form

M(z, z̄, t) = e^{ξ(z,t) E_{11}} M_0(z, z̄) e^{−ξ(z,t) E_{11}},

where ξ(z, t) = Σ_{j=1}^{+∞} z^j t_j, t = (t_1, t_2, …), E_{11} = diag(1, 0), and M_0(z, z̄) is a traceless matrix compactly supported on D; we show that the corresponding τ-function (1.15) of the ∂-problem (1.14) is a Kadomtsev–Petviashvili (KP) τ-function, namely it satisfies the Hirota bilinear relations for the KP hierarchy (see e.g. [16]).
Further, we specialize the matrix M of the ∂-problem in (1.14) to a nilpotent and traceless form, where β(z, z̄) is a smooth function and χ_D, χ_D̄ are respectively the characteristic functions of a simply connected domain D ⊂ C_+ and its conjugate D̄. Then we show that the τ-function of the ∂-problem (1.14) is the τ-function for the focusing Nonlinear Schrödinger (NLS) equation and coincides with the Hilbert–Carleman determinant of the operator K with integrable kernel K(z, z̄; w, w̄), where the complex function ψ(x, t) in (1.17) solves the focusing NLS equation. While we can write the solution to the NLS equation in the form (1.17), the analytical properties of such a family of initial data and solutions (e.g. the long-time behaviour) have still to be explored. It is shown in [4] that such a family of initial data naturally emerges in the limit of an infinite number of solitons. For some choices of the density β and the domain D, the corresponding initial data are solitons or step-like oscillatory initial data.

Integrable operators and ∂-problems
Let D ⊂ C be a compact union of domains with smooth boundary and denote by K the integral operator acting on the space L²(D, d²z) ⊗ C^n with a kernel K(z, w) of the form

K(z, z̄; w, w̄) := f^T(z, z̄) g(w, w̄)/(z − w),  f^T(z, z̄) g(z, z̄) ≡ 0 ≡ (∂_z f(z, z̄))^T g(z, z̄).  (2.1)

Here the matrix-valued functions f, g are assumed to be sufficiently smooth on D, but no analyticity is required, and for this reason we indicate the dependence on both variables z and z̄. The vanishing requirements along the locus z = w are sufficient to guarantee that the kernel K admits a well-defined value on the diagonal and is continuous on D × D. We have emphasized that the kernel and the functions do not depend holomorphically on the variables; that said, from now on we omit the explicit dependence on z̄, trusting that the class of functions we are dealing with will be clear from the context each time. The operator K acts on functions as

(K h)(z) = ∫∫_D K(z, w) h(w) d²w,  h ∈ L²(D, d²z) ⊗ C^n.  (2.4)

We introduce the following ∂-problem for an r × r matrix-valued function Γ(z, z̄).
∂-problem 2.1. Find an r × r matrix-valued function Γ(z, z̄) such that

∂_z̄ Γ(z, z̄) = Γ(z, z̄) M(z, z̄),  Γ(z) → 1 as |z| → ∞,

where 1 is the identity in GL_r(C) and

M(z, z̄) := π f(z, z̄) g^T(z, z̄) χ_D(z).  (2.6)

We first show that any solution satisfies det Γ ≡ 1. Indeed,

∂_z̄ det Γ = Tr(adj(Γ) ∂_z̄ Γ) = Tr(adj(Γ) Γ M),

where adj(Γ) denotes the adjugate matrix (the cofactor matrix, transposed). Now the product in the last formula yields adj(Γ)Γ = (det Γ) 1, so that

∂_z̄ det Γ = (det Γ) Tr(M) = 0,

where the last identity follows from the fact that M is traceless: Tr(M) = π Tr(f g^T) χ_D = π Tr(g^T f) χ_D = 0, because g^T f = (f^T g)^T ≡ 0 by (2.1). Thus det Γ is an entire function which tends to 1 at infinity, and hence it is identically equal to 1 by Liouville's theorem. Now, if Γ_1, Γ_2 are two solutions, it follows easily that R(z) := Γ_1 Γ_2^{-1} is an entire matrix-valued function which tends to the identity matrix 1 at infinity. This proves that R(z) ≡ 1 and hence shows the uniqueness.
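The algebra that makes M traceless and nilpotent can be seen concretely. The toy sample below (r = 2, n = 1, random values of two functions p, q at a fixed point, with g built so that f^T g = 0, and the normalization constant immaterial for the checks) is our own illustration of the constraints in (1.8):

```python
import numpy as np

rng = np.random.default_rng(1)
# sample values of two smooth scalar functions p, q at one point (z, zbar)
p, q = rng.standard_normal(2) + 1j * rng.standard_normal(2)

f = np.array([[p], [q]])   # f in Mat(2 x 1, C)
g = np.array([[q], [-p]])  # chosen so that f^T g = p*q - q*p = 0

assert abs((f.T @ g)[0, 0]) < 1e-14   # the constraint f^T g = 0
M = np.pi * f @ g.T                   # M = pi f g^T (inside D); constant immaterial here
assert abs(np.trace(M)) < 1e-13       # traceless: Tr(f g^T) = Tr(g^T f) = 0
assert np.max(np.abs(M @ M)) < 1e-12  # nilpotent: M^2 = pi^2 f (g^T f) g^T = 0
```

The same cancellation g^T f = (f^T g)^T = 0 is exactly what makes det Γ ≡ 1 in the proof above.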
R(z, w) = F^T(z) G(w)/(z − w) = f^T(z) Γ^T(z) (Γ^T(w))^{-1} g(w)/(z − w),  (2.9)

where Γ(z) is an r × r matrix that solves the ∂-problem 2.1.
Proof. Suppose that the ∂-problem 2.1 is solved by Γ(z); we now show that the operator (Id − K) is invertible. Let us define the resolvent operator R with the kernel given by (2.9). To verify that R is indeed the resolvent, we need to check that the following condition is satisfied:

R − K = R ∘ K.  (2.10)

To this end we compute the kernel of R ∘ K, namely

(R ∘ K)(z, w) = ∫∫_D R(z, ζ) K(ζ, w) d²ζ.  (2.11)

If we consider the generalized Cauchy–Pompeiu formula for the matrix (Γ^T(z))^{-1}, we can express it as the integral equation

(Γ^T(z))^{-1} = 1 − (1/π) ∫∫_D ∂_ζ̄(Γ^T(ζ))^{-1}/(ζ − z) d²ζ.  (2.12)

We substitute (2.12) into (2.11); this shows that indeed R satisfies the resolvent equation (2.10) and hence the operator Id − K is invertible.
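The Cauchy–Pompeiu formula behind (2.12) can be sanity-checked numerically in the scalar case: the area Cauchy transform C(z) = (1/π)∫∫_D d²ζ/(ζ − z) of the unit disk equals −1/z for |z| > 1 (and −z̄ inside), as one sees by expanding 1/(ζ − z) in powers of ζ. The grid resolution and evaluation point below are our own choices:

```python
import numpy as np

# midpoint grid on [-1, 1]^2; keep cells whose centers lie in the unit disk D
n = 400
h = 2.0 / n
c = -1.0 + h * (np.arange(n) + 0.5)
X, Y = np.meshgrid(c, c)
Z = X + 1j * Y
inside = np.abs(Z) <= 1.0

z0 = 2.0 + 0.0j  # evaluation point outside D, away from the singularity
C = (1.0 / np.pi) * np.sum(h * h / (Z[inside] - z0))

# for |z| > 1 the transform equals -1/z; the staircase boundary limits accuracy
assert abs(C - (-1.0 / z0)) < 2e-2
```

The same transform applied to ∂_ζ̄ of a function that tends to 1 at infinity reproduces exactly the integral equation (2.12).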
Vice versa, let us now suppose that the operator Id − K is invertible and denote (Id − K)^{-1} =: Id + R. We now verify that R has kernel

R(z, w) = F^T(z) G(w)/(z − w),  (2.14)

where the matrices F(z) and G(z) are defined as

F(z) := ((Id − K)^{-1} f)(z),  G(z) := ((Id − K^T)^{-1} g)(z),  (2.15)

with the inverse applied to each entry (and the transposition T acting on the matrix indices). Indeed, we verify the condition (2.10) with R given by (2.14): adding and subtracting the kernels K(z, w) and R(z, w), we obtain (2.16); with the definitions (2.15) the contributions in the first line of (2.16) cancel out and the condition (2.10) is satisfied. To conclude the proof we need to verify that F(z) = Γ(z) f(z) and G(z) = (Γ^T(z))^{-1} g(z), where the matrix Γ solves the ∂-problem 2.1. To this end, let us define the matrix

Γ̃(z) := 1 − ∫∫_D F(ζ) g^T(ζ)/(ζ − z) d²ζ.  (2.18)

From this definition it follows that (2.20) holds; we now substitute (2.20) into the definition (2.18). Then, following the general Cauchy formula (2.12), we find that the matrix Γ̃(z) satisfies the ∂-equation of the ∂-problem 2.1. Finally, since the support D of M is compact, the equation (2.18) implies that Γ̃ is analytic outside of D and tends to 1 as |z| → ∞. Thus Γ̃ solves the same ∂-problem 2.1, and since the solution is unique, it must coincide with Γ.

The Fredholm determinant
In Section 2 we have linked the solution of the ∂-problem 2.1 to the existence of the inverse of Id − K. From the conditions (2.1) we conclude that K is a Hilbert–Schmidt operator with a well-defined and continuous diagonal in D × D: according to [32] this is sufficient to define the Fredholm determinant for the operator Id − K, as explained in the following remark.
Remark 3.1. In general, for a Hilbert–Schmidt operator A, the Fredholm determinant is not defined, but we can still define a regularization of it, called the Hilbert–Carleman determinant (3.1), where Ψ_n(A) is given by the Plemelj–Smithies formula. It is shown that if A is Hilbert–Schmidt then (3.1) converges ([15], Chapter 10, Theorem 3.1). If A has a well-defined trace, then we can rewrite it as

det_2(Id − A) = det(Id − A) e^{Tr(A)}.

Suppose now that A is a Hilbert–Schmidt operator with kernel A(x, y) on L²(X, dµ), for some space X and measure dµ, and such that ∫_X A(x, x) dµ(x) is well defined. Then the formula can be turned on its head to "force" a definition of the Fredholm determinant even if the operator itself may not be of trace class. The resulting formula has all the properties of the standard Fredholm determinant. See [32] for more details.
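In finite dimensions the Hilbert–Carleman regularization det_2(Id − A) = det(Id − A) e^{Tr(A)} reduces to the eigenvalue identity Π_k (1 − λ_k) e^{λ_k}; a quick check on a random matrix (our own toy example, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = 0.2 * rng.standard_normal((6, 6))
I = np.eye(6)

det2 = np.linalg.det(I - A) * np.exp(np.trace(A))   # det_2(Id - A)
lam = np.linalg.eigvals(A)
prod = np.prod((1.0 - lam) * np.exp(lam))           # eigenvalue form of det_2

assert abs(prod.imag) < 1e-10        # eigenvalues come in conjugate pairs
assert abs(det2 - prod.real) < 1e-10 # the two expressions agree
```

The factor e^{λ_k} is exactly what makes the infinite product converge for Hilbert–Schmidt operators, whose eigenvalues are only square-summable.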
Let us now assume that K depends smoothly on parameters t = (t_1, t_2, …, t_j, …) with t_j ∈ C for all j ≥ 1: we want to relate the solutions of the ∂-problem 2.1 to the variational equations for the determinant.
Proposition 3.2. Suppose that the matrix M(z, z̄) in the ∂-problem 2.1 depends smoothly on some parameters t, while remaining identically nilpotent. Then the solution Γ(z) of the ∂-problem 2.1 is related to the logarithmic derivative of the Fredholm determinant of Id − K by (3.3), where δ stands for the total differential in the space of parameters t.
Proof. Using the Jacobi variational formula (1.6), we can rewrite the left-hand side of (3.3) as (3.4). The last term in (3.3) comes from the identity term in (3.4). Let us now compute the term Tr(R ∘ δK). The composition of the two operators produces a kernel which splits into the two terms (3.5) and (3.6), where we have omitted the explicit dependence on t of the functions f, g, F, G, Γ.
We focus on the term (3.5). In order to compute the trace we need the kernel (3.7) along the diagonal z = w, and hence we consider lim_{w→z} of (3.7). Observe that

(Γ^T(ζ))^{-1} g(ζ) f^T(ζ) = −(1/π) ∂_ζ̄ (Γ^T(ζ))^{-1},

and hence we can apply the formula (2.12) to eliminate the integral and rewrite (3.7) as (3.9). We can now easily compute the expansion of (3.9) along the diagonal w → z by Taylor's formula, keeping in mind that Γ is not a holomorphic function inside D. Using the resulting expression we conclude that the trace in L²(D, d²z) ⊗ C^n of (3.5) is (3.12). Using the cyclicity of the trace and its invariance under transposition of the arguments, we reorder the terms of (3.12) into the form (3.13). We now consider the term (3.6). Taking its trace yields (3.14). We observe that the integrand is in L¹_loc because the numerator vanishes to order O(|z − ζ|) along the diagonal (3.15), and hence the integrand is O(|z − ζ|^{-1}), which is locally integrable with respect to the area measure. We can now relate this integral to ∂_z Γ as follows. Using the formula (2.12) and the ∂-problem 2.1, we can rewrite Γ^T(ζ) as an integral; taking the holomorphic derivative with respect to ζ and plugging the result into (3.14), we obtain the claimed expression. This concludes the proof of Proposition 3.2. In analogy with the theory of isomonodromic RH problems, we call the right-hand side of the above relation the Malgrange one form (see e.g. [1]).

Malgrange one form and τ -function
Combining Proposition 3.2 and Remark 3.3, we define the following one form on the space of deformations, which we call the Malgrange one form following the terminology in [1]: it is the one form ω of formula (3.21), where Γ(z) is the solution of the ∂-problem 2.1 and M(z) is defined in (2.6). For the operator K defined in (2.4), Proposition 3.2 implies that

ω = δ log det_2(Id − K),

and hence ω is an exact (and hence closed) one form in the space of deformation parameters the operator K may depend upon. The form ω can be shown to be closed under weaker assumptions on the matrix M than the ones that appear in the ∂-problem 2.1, as the following theorem shows.

Theorem 3.4. If the matrix M(z, z̄) in the ∂-problem 2.1 is only assumed to be traceless (not necessarily nilpotent), then the one form ω in (3.21) is closed.

Proof. From the ∂-problem we obtain (3.24). Using (3.24) we can compute (3.25). Substituting (3.26) into the equation (3.25), the crux of the proof becomes the correct evaluation of an iterated double integral over D × D. By Fubini's theorem, since the integrand is antisymmetric under the exchange of the variables z ↔ w, we quickly conclude that the integral is zero. However, the integrand is singular along the diagonal ∆ := {z = w} ⊂ D × D, and we need to make sure that it is absolutely integrable.
Recalling that F(z, w) = −F(w, z), so that F(z, z) ≡ 0, we now compute the Taylor expansion of F(z, w) with respect to w near w = z (3.29). Thus |F(z, w)|/|z − w|² = O(|z − w|^{-1}), which is integrable with respect to the area measure. Hence the application of Fubini's theorem is justified.
From this theorem, we can define a τ-function associated to the ∂-problem (1.14) by

δ log τ(t) = ω.  (3.30)

In general the above τ-function is defined only up to scalar multiplication, and hence should rather be thought of as a section of an appropriate line bundle over the space of deformation parameters, depending on the context. However, for M in the form specified in (2.6), we know from Proposition 3.2 that we can identify the τ-function with a regularized determinant:

τ(t) = det_2(Id − K).

In the next section, by choosing a specific dependence on the parameters t in the more general setting of M as in Thm. 3.4, we are going to show that τ(t) is a KP τ-function, in the sense that it satisfies the Hirota bilinear relations [17].

τ(t) as a KP τ-function
In this section we consider a specific type of dependence of M on the "times": let M(z, t) be a 2 × 2 matrix that depends on t = (t_1, t_2, …) in the form

M(z, t) = e^{ξ(z,t) E_{11}} M_0(z, z̄) e^{−ξ(z,t) E_{11}},  ξ(z, t) = Σ_{j=1}^{+∞} z^j t_j,

with M_0(z, z̄) a traceless matrix compactly supported on D. A τ-function of the Kadomtsev–Petviashvili hierarchy, τ(t), can be characterized as a function of (formally) an infinite number of variables which satisfies the Hirota bilinear relation

Res_{z=∞} ( τ(t − [z^{-1}]) τ(t' + [z^{-1}]) e^{ξ(z,t) − ξ(z,t')} ) = 0  for all t, t',  (4.3)

where t ± [z^{-1}] is the Miwa shift, defined as

t ± [z^{-1}] := (t_1 ± z^{-1}, t_2 ± z^{-2}/2, …, t_j ± z^{-j}/j, …).

The residue in (4.3) is meant in the formal sense, namely as the coefficient of z^{-1} in the expansion at infinity, and can be thought of as the limit of ∮_{|z|=R} as R → +∞. If the functions of z appearing in (4.3) can be written as analytic functions in a deleted neighbourhood of ∞, then the residue is a genuine integral; this is the case of interest below. As described in [23], equation (4.3) implies in particular the bilinear equation

(D_1^4 + 3 D_2^2 − 4 D_1 D_3) τ · τ = 0,

where D_j is the Hirota derivative with respect to t_j, defined as

D_j^n f · g := ∂_a^n [ f(…, t_j + a, …) g(…, t_j − a, …) ] |_{a=0}.

Putting u = 2 ∂²_{t_1} log τ, x = t_1, y = t_2, t = t_3, one obtains the celebrated KP equation

(−4 u_t + 6 u u_x + u_{xxx})_x + 3 u_{yy} = 0.  (4.8)

The rest of this section is devoted to the verification of the Hirota bilinear relation (4.3) for the KP τ-function.
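The bilinear equation (D_1^4 + 3 D_2^2 − 4 D_1 D_3) τ · τ = 0 can be verified symbolically on the simplest nontrivial KP τ-function, the one-soliton τ = 1 + e^{k t_1 + k² t_2 + k³ t_3} (an example of our own choosing), working directly from the definition of the Hirota derivatives:

```python
import sympy as sp

x1, x2, x3, a1, a2, a3, k = sp.symbols('x1 x2 x3 a1 a2 a3 k')

tau = 1 + sp.exp(k * x1 + k**2 * x2 + k**3 * x3)   # one-soliton tau-function

# Hirota derivatives: D_j^n f.g = (d/da_j)^n f(x + a) g(x - a) evaluated at a = 0
shift_p = [(x1, x1 + a1), (x2, x2 + a2), (x3, x3 + a3)]
shift_m = [(x1, x1 - a1), (x2, x2 - a2), (x3, x3 - a3)]
F = tau.subs(shift_p, simultaneous=True) * tau.subs(shift_m, simultaneous=True)

bilinear = (sp.diff(F, a1, 4) - 4 * sp.diff(F, a1, 1, a3, 1) + 3 * sp.diff(F, a2, 2))
bilinear = bilinear.subs([(a1, 0), (a2, 0), (a3, 0)])

assert sp.simplify(bilinear) == 0
```

The cancellation reflects the dispersion relation built into the exponent: the polynomial P(k) = k⁴ − 4k·k³ + 3(k²)² vanishes identically, which is exactly the one-soliton case of (4.3).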

Hirota bilinear relation for the KP hierarchy
The main result is the following.

Theorem 4.1. Let

M(z, t) = e^{ξ(z,t) E_{11}} M_0(z, z̄) e^{−ξ(z,t) E_{11}},

with the traceless matrix M_0(z) compactly supported on a bounded domain D of the complex plane and the function ξ given by the formal sum ξ(z, t) = Σ_{j=1}^{+∞} z^j t_j. Then the function

τ(t) = exp(∫ ω),  (4.9)

with ω defined in (3.21), is a KP τ-function; i.e. it satisfies the Hirota bilinear relation (4.3).

Remark 4.2.
In this setting the KP τ -function is in general complex-valued. Under appropriate additional symmetry constraints for the matrix M 0 and the domain D we can obtain a real-valued τ -function.
We prove the theorem in several steps. We first analyse the effect of the Miwa shifts on the τ-function. For this purpose we need to determine how the Miwa shift acts on the matrices Γ(z, z̄, t) and M(z, z̄, t). We consider M(z, z̄, t ± [ζ^{-1}]) first: since ξ(z, t ± [ζ^{-1}]) = ξ(z, t) ∓ log(1 − z/ζ), we obtain

M(z, z̄, t ± [ζ^{-1}]) = (1 − z/ζ)^{∓E_{11}} M(z, z̄, t) (1 − z/ζ)^{±E_{11}}.  (4.10)

For the matrices Γ(z, z̄, t ± [ζ^{-1}]) we need to consider the two cases separately. Let us start with the negative shift Γ(z, z̄, t − [ζ^{-1}]) and set

D(z, ζ) := (1 − z/ζ)^{E_{11}} = diag(1 − z/ζ, 1).  (4.11)

From (4.10) and (4.11) we notice that the matrix Γ(z, t − [ζ^{-1}]) D(z, ζ) satisfies the same ∂-equation as Γ(z, t), i.e. there exists a connection matrix C(z) such that

Γ(z, t − [ζ^{-1}]) D(z, ζ) = C(z) Γ(z, t),

where obviously C(z) also depends on ζ and t.
The matrix C(z) is determined by the conditions that both Γ(z, t) and Γ(z, z̄, t − [ζ^{-1}]) must tend to 1 as z → ∞ and be regular at z = ζ (4.13). Solving the system (4.13), we obtain that the matrix C(z) has the form (4.14). Following the same ideas, we can find a similar formula for Γ(z, z̄, t + [ζ^{-1}]). Also in this case we have three conditions, similar to (4.13), and we find that the matrix C̃(z) has the form (4.17).

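The logarithm appearing in the Miwa-shifted ξ in (4.10) comes from the elementary series Σ_{j≥1} w^j / j = −log(1 − w), valid for |w| < 1 and applied with w = z/ζ; a quick numerical confirmation (truncation level and sample point chosen by us):

```python
import numpy as np

w = 0.3 + 0.2j  # plays the role of z/zeta, with |w| < 1
N = 200         # truncation of the Miwa-shift series sum_j w^j / j
partial = sum(w**j / j for j in range(1, N + 1))

# the tail is O(|w|^N), utterly negligible here
assert abs(partial - (-np.log(1 - w))) < 1e-12
```

This is why the Miwa shift t → t ± [ζ^{-1}] acts on e^{ξ E_{11}} simply by multiplication by the entire diagonal factor diag(1 − z/ζ, 1)^{∓1}.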
We now need to show how the Miwa shift acts on the Malgrange one form. We define δ_{[ζ]}, the differential deformed to include the external parameter ζ, by (4.19), where Γ(z) solves the ∂-problem 2.1 and γ(ζ) is a t-independent function, defined as in (4.20), which is analytic for ζ ∉ D and goes to zero as ζ → ∞.
Observe that, since Tr M_0 = 0, we may express the formula in terms of the (2, 2) entry instead. The proof of this lemma is presented in Appendix B. Now we can state the following proposition, (4.21), where τ(t) is defined in (3.30), Γ(z) solves the ∂-problem 2.1 and γ(ζ) is defined in (4.20).

Proof. From Lemma 4.3 and the equation (3.30), we rewrite (4.19), and then, from the properties of the logarithm, the statement (4.21) is proved.

Lemma 4.6. The matrix H(z) := Γ(z, t) e^{(ξ(z,t) − ξ(z,s))E_{11}} Γ^{-1}(z, s) (4.23) is an entire function of z.

Proof. For z ∉ D the statement is trivial, so we consider the case z ∈ D. We apply the operator ∂_z̄ to the matrix (4.23):

∂_z̄ H(z) = ∂_z̄ Γ(z, t) e^{(ξ(z,t) − ξ(z,s))E_{11}} Γ^{-1}(z, s) + Γ(z, t) e^{(ξ(z,t) − ξ(z,s))E_{11}} ∂_z̄ Γ^{-1}(z, s)
= Γ(z, t) M(z, t) e^{(ξ(z,t) − ξ(z,s))E_{11}} Γ^{-1}(z, s) − Γ(z, t) e^{(ξ(z,t) − ξ(z,s))E_{11}} M(z, s) Γ^{-1}(z, s) = 0,

since M(z, t) e^{(ξ(z,t) − ξ(z,s))E_{11}} = e^{(ξ(z,t) − ξ(z,s))E_{11}} M(z, s) by the exponential form of M. From Lemma 4.6 we can also state the following corollary, whose proof follows at once from Cauchy's residue theorem and the fact that H is entire. Since both Γ_{12}(z, t) and Γ_{21}(z, s) are analytic for |z| sufficiently large (given that D is compact), we can omit the limit lim_{R→∞}, since the integrals only depend on the homotopy class of the contour. Moreover, for z ∼ ∞ we have Γ(z, t) ∼ 1 + O(z^{-1}), so that the product Γ_{12}(z) Γ_{21}(z) is O(z^{-2}); this means that the integrand in (4.27) does not have a pole at z = ∞. So the integral in (4.27) is zero and the statement is proved.

The case of focusing Nonlinear Schrödinger equation
In this subsection we make a specific choice of the matrix M_0, in which β(z, z̄) is a smooth function on D ⊂ C_+ and χ_D (χ_D̄) is the characteristic function of D (D̄). We observe that M_0 satisfies a Schwarz symmetry. Let us consider the ∂-problem (4.29), with ξ(z, t) as in (4.2); here we have sent t_j → −2i t_j with respect to the normalization in the KP hierarchy. Then the function ψ = ψ(t) satisfies the nonlinear Schrödinger hierarchy [13, 28], written in the recursive form

i ∂_{t_m} ψ_1 = 2 ψ_{m+1},  ψ_1 := ψ,  (4.30)

where ψ_m and h_m are functions of t.
The proof of this theorem is classical and is deferred to Appendix A. In particular, the second flow gives the focusing NLS equation, where, comparing with the notation in the introduction, t_2 = t and t_1 = x. The third flow gives the so-called complex modified KdV equation. Setting t_k = 0 for k ≥ 4, one obtains that v(t_1, t_2, t_3) := 2 |ψ_1(t_1, t_2, t_3)|² satisfies the KP equation (4.8) after the rescalings v = −4u and t_j → (i/2) t_j.
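The second flow can be tested on the simplest soliton. In the common normalization i ψ_t + ψ_xx + 2|ψ|² ψ = 0 (which may differ from the equation (1.17) of the text by rescalings of x, t, ψ), the function ψ = e^{it} sech x is an exact solution; a symbolic-numeric verification, with normalization and sample points of our own choosing:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
psi = sp.exp(sp.I * t) * sp.sech(x)   # one-soliton solution (assumed normalization)

# residual of i psi_t + psi_xx + 2 |psi|^2 psi, with |psi|^2 = psi * conj(psi)
res = sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) + 2 * psi**2 * sp.conjugate(psi)

# evaluate the residual numerically at a few sample points
for xv, tv in [(0.3, 0.7), (-1.1, 0.2)]:
    val = complex(res.subs({x: xv, t: tv}).evalf())
    assert abs(val) < 1e-12
```

The identity used is sech'' = sech − 2 sech³, so the dispersive and nonlinear terms cancel against the phase rotation e^{it}.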

Conclusions
The ∂-problems treated in this manuscript differ from the ∂-problems introduced in [29, 30] to study the asymptotic behaviour of orthogonal polynomials and of PDEs with non-analytic initial data, respectively. In those cases the ∂-problem is a by-product of the Deift–Zhou steepest descent method extended to the case where the jump matrix is not analytic, but otherwise the initial problem is an ordinary RHP; in our case, the initial data is defined from the solution of the ∂-problem and is encoded in the domain D and in the matrix M of the ∂-problem (1.9). An equation similar to (4.29) was also studied by Zhu et al. [35], with the aim of finding solutions of the defocusing/focusing NLS equation with nonzero boundary conditions. A generalization that could be considered is one where, instead of the "pure" ∂-problem (2.1), one has a mixed ∂ and Riemann–Hilbert problem; this would correspond to an operator acting, for example, on L²(D, d²z) ⊕ L²(Σ, |dz|) (typically with ∂D ⊆ Σ). This type of problem would require, in the computation of the exterior derivative of the Malgrange form, the full Cauchy–Pompeiu formula. We defer this investigation to future efforts.

A Connection between the ∂-Problem and the Inverse Scattering Theory
In this section we prove Theorem 4.8 by deriving the corresponding Zakharov–Shabat Lax pair [34] for the solution of the ∂-problem (4.29). To simplify the presentation, we restrict to the first two flows, namely we set t_1 = x, t_2 = t and t_j = 0 for j ≥ 3. The general case can be treated in a similar way. Let us consider the matrix Ψ(z; x, t) = Γ(z; x, t) e^{−i(zx + z²t)σ_3}.
where Γ is a solution of the ∂-problem (4.29), so that we obtain the ∂-problem (A.2) for Ψ. We denote by Γ_ℓ(x, t) the coefficients of the expansion of Ψ near z = ∞. The first observation is that Ψ satisfies a Schwarz-like symmetry, which follows from the uniqueness of the solution after observing that the matrix Φ(z; x, t) := Ψ(z̄; x, t)† solves the same ∂-problem, thanks to the property M(z; x, t) = −M(z̄; x, t)†. Given that det Ψ ≡ 1, we can rewrite the symmetry as

Ψ(z; x, t) = σ_2 Ψ(z̄; x, t)† σ_2.  (A.5)

This translates into a corresponding symmetry for the matrices Γ_ℓ(x, t). Since the operators ∂_x and ∂_z̄ commute, we see that ∂_x Ψ satisfies the problem (A.2). It now follows that the matrix U(z; x, t) := (∂_x Ψ) Ψ^{-1} is an entire function of z; thus we obtain the Lax equation (A.8), ∂_x Ψ = U(z; x, t) Ψ. We similarly conclude that V(z; x, t) := (∂_t Ψ) Ψ^{-1} is the polynomial part of the corresponding expression, a quadratic polynomial in z. To complete the calculation we need to relate the matrix Γ_2(x, t) to the ∂_x derivative of Γ_1, by taking the expansion of both sides of the Lax equation (A.8) as z → ∞ and using the explicit expression of U given in (A.9). The term of order O(z^{-1}) in (A.8) provides the equation (A.15); its (1, 1) entry yields one relation, while the off-diagonal entries give the remaining ones. In conclusion, this determines the matrix V(z; x, t). Summarizing, the matrix Ψ(z; x, t) solves the ∂-problem (A.2) as well as two linear PDEs, where we have set ψ(x, t) := 2i b(x, t). We can see that the matrices U(z; x, t) and V(z; x, t) are in the form of the Lax pair of the NLS (1.17): the zero-curvature equation (A.20) [34] is equivalent to the NLS equation (1.17).
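The zero-curvature mechanism of this appendix can be verified symbolically with the standard Zakharov–Shabat pair in a common normalization (which may differ from the matrices U, V in (A.9) by rescalings): U = −iλσ_3 + Q, V = −2iλ²σ_3 + 2λQ + iσ_3 Q_x + i|ψ|²σ_3, with Q built from ψ. Plugging in the soliton ψ = e^{it} sech x of i ψ_t + ψ_xx + 2|ψ|² ψ = 0, the residual ∂_t U − ∂_x V + [U, V] vanishes identically:

```python
import sympy as sp

x, t, lam = sp.symbols('x t lam', real=True)
s3 = sp.diag(1, -1)

# soliton of i psi_t + psi_xx + 2 |psi|^2 psi = 0 (assumed normalization)
psi = sp.exp(sp.I * t) * sp.sech(x)
pc = sp.conjugate(psi)

Q = sp.Matrix([[0, psi], [-pc, 0]])
U = -sp.I * lam * s3 + Q
V = (-2 * sp.I * lam**2 * s3 + 2 * lam * Q
     + sp.I * s3 * sp.diff(Q, x) + sp.I * psi * pc * s3)

# zero-curvature residual U_t - V_x + [U, V]
ZC = sp.diff(U, t) - sp.diff(V, x) + U * V - V * U

# check numerically, entry by entry, at a sample point (x, t, lambda)
for entry in ZC:
    val = complex(entry.subs({x: 0.4, t: 0.3, lam: 1.7}).evalf())
    assert abs(val) < 1e-12
```

The λ² and λ¹ coefficients cancel algebraically; the λ⁰ coefficient vanishes precisely because ψ solves the NLS equation, which is the content of the compatibility condition (A.20).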
Using the explicit expression (4.14) for the matrix C, we obtain the stated identity, and this proves Lemma 4.3.