Whitham Approach to Certain Large Fluctuation Problems in Statistical Mechanics

We show the relationship between the strongly non-linear limit (also termed the dispersionless or the Whitham limit) of the macroscopic fluctuation theory of certain statistical models and the inverse scattering method. We show that in the strongly non-linear limit the inverse scattering problem can be solved by applying the steepest descent method to the associated Riemann--Hilbert problem. The importance of establishing this connection is that the equations in the strongly non-linear limit can often be solved exactly by simple means; the connection then provides a limit in which the inverse scattering problem can be solved, potentially aiding the exact solution of a particular large deviation problem.


Introduction
In this paper we study some mathematical aspects of large deviation problems that have garnered considerable interest in recent years. Such problems include the Kardar-Parisi-Zhang problem [1], the Kipnis-Marchioro-Presutti model [2], the symmetric exclusion process [3], and more. Particularly interesting in this regard is the availability of exact solutions [4]. Even in the absence of such an exact solution one may use macroscopic fluctuation theory [5] to study the question of large deviations. One such case is the Kardar-Parisi-Zhang problem, in which the macroscopic fluctuation theory leads to the equations given in Eqs. (1.1, 1.2) [6], where the boundary values to be imposed are

p(x, 1) = Λδ(x), q(x, 0) = f(x), (1.3)

for some given f(x). Typically f(x) is chosen to be either constant (flat initial conditions) or itself proportional to a delta function, but other boundary conditions may be considered as well. Here f(x) has the meaning of the initial shape of the Kardar-Parisi-Zhang interface, and the boundary condition for p(x, 1) allows one to compute the appropriate generating function for the interface at final time t = 1.
The variable Λ serves as the parameter of the generating function [6].
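For concreteness, Eqs. (1.1, 1.2) form a nonlinear-Schrödinger-type pair with two real fields. A common form of this system in the literature is sketched below; the signs, factors, and placement of Λ depend on conventions, so this should be read as an illustration rather than the paper's exact normalization.

```latex
% Weak-noise / MFT equations of the KPZ problem, as commonly written
% (cf. Eqs. (1.1, 1.2); signs and factors are convention-dependent):
\begin{align}
  \partial_t q &= \partial_x^2 q - 2 q^2 p, \\
  -\partial_t p &= \partial_x^2 p - 2 p^2 q,
\end{align}
% supplemented by the mixed boundary conditions of Eq. (1.3):
% p(x,1) = \Lambda\,\delta(x), \qquad q(x,0) = f(x).
```

Note that one equation runs forward and one backward in time, which is the hallmark of such optimal-fluctuation boundary value problems.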
Due to the similarity of the formulation of the large deviation problem across different models, we shall concentrate here on the example where the equations to be solved are given by Eqs. (1.1, 1.2) and the boundary conditions are given in Eq. (1.3). Other problems have a very similar formulation, either directly [7][8][9] or through non-trivial manipulation [10]. For example, for the Kipnis-Marchioro-Presutti model one has to consider instead the derivative nonlinear Schrödinger equation, a variant of Eqs. (1.1, 1.2). Nevertheless, in the formal sense Eqs. (1.1, 1.2) and the derivative nonlinear Schrödinger equations have so much in common that the methods provided here are easily modified to deal with that problem as well. As a result, we prefer, for the sake of brevity, to describe the method for the particular case of the nonlinear Schrödinger equation, Eqs. (1.1, 1.2), without spelling out how the method must be generalized to other cases.
To solve the boundary value problem described above, one may apply the inverse scattering method [7][8][9][10][11][12], which we describe below. Despite the equations being integrable, or "completely soluble", there is no general recipe for solving a particular boundary value problem for the nonlinear Schrödinger equation, such as the problem presented in Eq. (1.3) above, and it is often necessary to rely on luck to find such a solution.
Nevertheless, certain limits are tractable. For example, if the fields p and q are small, such that the non-linear terms may be neglected, then a solution may easily be found. Another tractable limit is the limit where the fields are large, such that the non-linear term dominates over the dispersive term in the non-linear Schrödinger equation, leading to inviscid equations. In such limits it may be instructive to obtain a full solution to the large deviation problem, provided one knows how to apply the scattering transform to the solution obtained in these special limits. One may then examine the scattering data in order to make an educated guess for the full solution.
The current paper concerns itself with this program for the case of the strongly non-linear limit (the linear limit is much easier to deal with, and is usually also much less instructive in finding exact solutions, despite its physical importance). In certain cases in this limit it is possible simply to drop the dispersive terms in the equations, and thus to deal with the inviscid equations that arise. This case is the most instructive, although the method we present is not restricted to it. In fact, solitons or oscillatory features may often appear in the strongly non-linear limit, and the method described below is suitable for dealing with this situation as well. Moreover, even in the presence of sharp or oscillatory features, the solution may still contain regions of space-time where the inviscid equations hold with no dispersive terms, such that the inviscid equations carry a large amount of information even in the presence of sharp or oscillatory regions. This situation is well known in the strongly non-linear limit, which is often also called the "dispersionless limit", or the "Whitham limit" [13].
We proceed to solve the inverse scattering problem in the strongly non-linear limit. This is done by applying the map from the inverse scattering problem to the Riemann-Hilbert problem, and then solving the Riemann-Hilbert problem thus obtained using the steepest descent method, following closely Ref. [14].
The result of this work is then a relation between the strongly non-linear equations and the inverse scattering problem. In addition to the potential utility of this approach in discovering new solutions, the connection thus obtained has further advantages, of which we list two here. First, the method allows one to find a systematic expansion in a small parameter around the strongly non-linear solution. This is not done in the current paper, since applying this approach requires introducing yet more tools associated with the Riemann-Hilbert problem, and we leave that for future work. Secondly, the method allows one to find solutions which feature oscillatory regimes for the fields p and q. Admittedly, though, the physical meaning of such solutions is not known at present, beyond their importance for the case of periodic boundary conditions [15,16].
To demonstrate our approach we work out the inviscid flows in the case of the Kardar-Parisi-Zhang problem with flat initial conditions, as worked out in Ref. [17], and show how these are connected to the exact solution given in Ref. [11].

Naïve Inviscid Limit
We first study the inviscid limit of Eqs. (1.1, 1.2). Since these are a variant of the nonlinear Schrödinger equations (where p and q are not complex conjugates of each other but rather real), it is natural to apply the Madelung transformation to the fields, so as to obtain hydrodynamic-type equations for appropriately defined density and velocity fields, familiar from quantum mechanics. In this case it is also appropriate to call such a transformation a Cole-Hopf transformation. We apply the transformation of Eq. (2.1) [6] to obtain Eqs. (2.2, 2.3). One may now discard the terms on the right hand side of these equations, which are of higher derivative order and may thus be termed dispersive, or viscous, terms (thereby obtaining the naïve inviscid limit). The resulting equations, Eqs. (2.4, 2.5), are those of a fluid with density ρ, velocity v, and pressure −ρ²/2 [18]. These equations may be subjected to a further transformation that brings them into the form of Riemann invariants. The Riemann invariants λ_1, λ_2 are defined in Eq. (2.6), and the equations that result from this transformation are Eqs. (2.7, 2.8). These equations are typical of dispersionless integrable equations and are in fact the universal Whitham equations [13] written through Riemann invariants [19,20].
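For orientation, the structure just described can be sketched as follows. This is a sketch: the precise signs and factors depend on the conventions of Eqs. (2.1)-(2.8); only the Cole-Hopf map and the form of the endpoints λ_{1,2} = ıv/2 ± √ρ, both recovered later in the text, are taken from the paper.

```latex
% Cole-Hopf/Madelung map, in the form recovered later in the genus-0
% asymptotics (cf. Eq. (2.1)):
\begin{equation}
  q = e^{-\frac{1}{2}\int^x v}, \qquad p = -2\rho\, e^{\frac{1}{2}\int^x v}.
\end{equation}
% Dropping the higher-derivative (dispersive) terms yields Euler-type
% equations for a fluid with pressure P(\rho) = -\rho^2/2 (sketch;
% convention-dependent):
\begin{equation}
  \partial_t \rho + \partial_x(\rho v) = 0, \qquad
  \partial_t v + v\,\partial_x v = \partial_x \rho,
\end{equation}
% which diagonalize in terms of the Riemann invariants of Eq. (2.6),
\begin{equation}
  \lambda_{1,2} = \frac{\imath v}{2} \pm \sqrt{\rho}.
\end{equation}
```

The negative pressure makes the characteristic speeds complex, which is why the Riemann invariants, and later the reduced Riemann-Hilbert contour, lie away from the real axis.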
The Inverse Scattering Method

The inverse scattering method relies on the fact that, by introducing an auxiliary parameter k and defining two k-, x- and t-dependent matrices [21], one may recast the nonlinear Schrödinger equations, Eqs. (1.1, 1.2), in the form of Eq. (3.2). Indeed, if these equations are to hold for any k, x and t, then Eqs. (1.1, 1.2) must be satisfied. Eq. (3.2) has the form of a consistency condition for Eq. (3.3), a pair of linear equations for a 2 × 2 matrix P: indeed, if Eq. (3.2) is satisfied then a solution for P exists.
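Schematically, this recasting has the standard zero-curvature structure; here U and V stand for the two k-, x- and t-dependent matrices of Ref. [21], whose explicit entries we do not reproduce.

```latex
% Auxiliary linear problem for the 2x2 matrix P (cf. Eq. (3.3)):
\begin{equation}
  \partial_x P = U(k;x,t)\,P, \qquad \partial_t P = V(k;x,t)\,P,
\end{equation}
% whose consistency, \partial_t \partial_x P = \partial_x \partial_t P,
% is the zero-curvature condition (cf. Eq. (3.2)):
\begin{equation}
  \partial_t U - \partial_x V + [U, V] = 0,
\end{equation}
% which holds for all k precisely when Eqs. (1.1, 1.2) are satisfied.
```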
If one assumes that p and q tend to 0 as x → ±∞ for any t, then P has solutions in that region in the form of plane waves, P ∼ e^(−σ_z ıkx/2) T, for any x-independent matrix T. This leads naturally to a scattering problem, which reads as follows: given k and a plane wave solution at x → −∞ of the form P ∼ e^(−σ_z ıkx/2), find T(k, t) as featured in the asymptotic behavior of P at x → +∞, namely P ∼ e^(−σ_z ıkx/2) T(k, t). Advancing the plane waves in time amounts to applying the matrix e^(σ_z k²t/2). To obtain the scattering matrix T(k, t) at any finite time t it is then enough to know the scattering matrix at time t = 0. Indeed, one may deduce the scattering at time t by first considering the plane waves at x = −∞ at time t, rewinding these plane waves from time t to time 0 by applying the matrix e^(−σ_z k²t/2), then letting the waves scatter from −∞ to +∞ by applying the matrix T(k, 0), and finally advancing the plane waves from time 0 back to time t by applying the matrix e^(σ_z k²t/2). The matrices are always applied from the left, such that we get T(k, t) = e^(σ_z k²t/2) T(k, 0) e^(−σ_z k²t/2). Furthermore, it can be shown that det T(k) = a_+ a_- − b b̄ = 1, and a_± can be shown to be analytic in the upper and lower half planes, respectively.
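The time dependence of the scattering data follows directly from this conjugation; the arrangement of a_±, b, b̄ inside T shown below is one common convention, assumed here for illustration.

```latex
% Conjugation of the scattering matrix by the free time evolution:
\begin{equation}
  T(k,t) = e^{\sigma_z k^2 t/2}\, T(k,0)\, e^{-\sigma_z k^2 t/2},
  \qquad
  T = \begin{pmatrix} a_+ & \bar b \\ b & a_- \end{pmatrix},
\end{equation}
% so that, element-wise (cf. the footnote on time dependence),
\begin{equation}
  a_\pm(k,t) = a_\pm(k,0), \qquad
  b(k,t) = e^{-k^2 t}\, b(k,0), \qquad
  \bar b(k,t) = e^{k^2 t}\, \bar b(k,0).
\end{equation}
```

The diagonal entries are untouched by the conjugation, which is why a_± are conserved in time while b and b̄ simply acquire exponential factors.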

The Riemann-Hilbert Approach to Inverse Scattering
Let us find a matrix solution P(x, y) to Eq. (3.3) with the boundary conditions of Eq. (4.1). This function has the property stated in Eq. (4.2), which allows one to define the matrices G_± of Eqs. (4.4, 4.5). Using the property in Eq. (4.2) one obtains a relation between G_+ and G_− on the real axis; it is then possible to rearrange its elements [21] so as to write it in terms of a jump matrix G, defined in Eq. (4.8), where we have dropped the dependence of a_± and b, b̄ on k for brevity, and it is implicitly assumed that these objects are to be evaluated at t = 0. The matrices G_+ and G_− can then be shown to be analytic in the upper half and lower half planes, respectively, such that we obtain the following Riemann-Hilbert problem:

• G_± are analytic in the upper and lower half planes, respectively.
• On the real axis, G_− = G_+ G.

Given G, if one is able to solve the Riemann-Hilbert problem, then one obtains immediately the fields p and q. Indeed, G_± satisfy Eq. (4.9), from which p and q are easily extracted. However, the Riemann-Hilbert problem is often intractable. Nevertheless, asymptotes of the solution may be found by using the method of steepest descent, presented in the following.
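Collecting the above, the problem and the recovery of the fields take the following schematic form. The normalization G_± → 1 at infinity is a standard assumption made here; the precise constants relating p and q to the expansion coefficients are those of Eq. (4.9).

```latex
% Riemann-Hilbert problem (cf. Eqs. (4.4)-(4.8)):
%  (i)  G_+ (G_-) analytic in the upper (lower) half k-plane;
%  (ii) G_-(k) = G_+(k)\, G(k) on the real axis;
%  (iii) G_\pm(k) \to \mathbb{1} as k \to \infty (standard normalization,
%        assumed here).
% The fields are then read off the large-k expansion (cf. Eq. (4.9)),
% schematically:
\begin{equation}
  G_\pm(k) = \mathbb{1} + \frac{G^{(1)}}{k} + O\!\left(\frac{1}{k^2}\right),
\end{equation}
% with p and q proportional to the off-diagonal entries of G^{(1)}.
```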

The Riemann-Hilbert Problem in Macroscopic Fluctuation Theory
Let us first study some particular properties of the Riemann-Hilbert problem in the case of macroscopic fluctuation theory. In this case p is proportional to a delta function, so we write p(x, 1) = −8Λδ(x).
This is easily solved, for example, piecewise in x, with I(a, b) = ∫_a^b e^(−ıkx′) q(x′) dx′. From that solution we obtain the scattering data at t = 1, and, given the time dependence of the elements of T discussed above, their form at any time t. We then find G, the jump matrix associated with the Riemann-Hilbert problem, according to Eq. (4.8). At t = 1 one can find G_± explicitly through the Fourier transform of q. This is done by solving Eq. (4.10) and inserting that solution into Eqs. (4.4, 4.5, 4.8), which gives the expressions of Eqs. (4.16, 4.17). It is then easy to ascertain that the matrices G_± are indeed analytic in the upper and lower half planes, respectively, and that G_− = G_+ G.

The Steepest Descent Method
We now want to derive the strongly non-linear limit of the inverse scattering problem; this limit is in fact the one considered in Ref. [14], where the steepest descent method for the Riemann-Hilbert problem may be used. Before going on to describe the steepest descent method for the Riemann-Hilbert problem relevant at all times t, we concentrate on its form at t = 1. At this time-point the Riemann-Hilbert problem is solved by making use of Fourier transforms of q(x) and its restrictions to subintervals of R, see Eqs. (4.16, 4.17), such that the steepest descent method here is really nothing but the usual steepest descent method for a Fourier integral, which is intimately related to the Legendre transform.
A key to finding the solution in the inverse scattering method is to find b(k, t), from which all other elements of the scattering matrix T can be found. Indeed, since we know b(k, t) = Λe^(k²(t−1)), and due to the unimodularity of the scattering matrix, which results in the equation a_+ a_- − b b̄ = 1, one may easily find a_+ a_- given b(k, t). Then, using the fact that a_± are analytic in the upper and lower half k-planes, respectively, one can write the representation of Eq. (5.1). Now we assume that log(a_+ a_-) is large on a set S, where it is described by a function m(µ) defined in Eq. (5.2). We assume that S is a union of a finite number of intervals. We also have the condition m(µ) = m*(−µ*), which is required to obtain a real solution.
Then we may write log(a_+ a_-) approximately through m(µ). At t = 1 the Riemann-Hilbert problem is solved by the Fourier transform of q(x), which is denoted by I(−∞, ∞), and the steepest descent method yields nothing but the saddle point equations for this integral. Let us assume k is in the lower half plane; since a_- tends to one in this plane, a_+ must be large, and we may conclude that a_+ is given, to leading order, by Eq. (5.4). So q(x) is the inverse Fourier transform of a_+, and we find the saddle point equation, Eq. (5.5). Solving this equation for k, one may then find q(x) by substituting the saddle point k*(x) back into the integrand. The term in the square brackets is of course the Legendre transform of ı log a_±, which we may denote by Φ(x). Since Φ′(x) = k*(x), we can also write q = exp(ı ∫^x k*(x′) dx′). Making the identification k*(x) = ıv(x)/2 one obtains one half of the Cole-Hopf transformation, Eq. (2.1), namely q(x) = e^(−½ ∫^x v). Since ρ = 0 at time t = 1 for all x ≠ 0, we have p(x) = 0, and one cannot identify the other half of the Cole-Hopf transformation. At this point the identification k* = ıv/2 is merely suggestive, but later we shall be able to make it on more general grounds.
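Schematically, the t = 1 steepest-descent computation described above reads as follows; prefactors and integration contours are convention-dependent and suppressed.

```latex
% Saddle point of the t = 1 Fourier integral (schematic):
\begin{equation}
  q(x) \sim \int dk\; e^{\imath k x + \log a_+(k)},
  \qquad
  \imath x + \partial_k \log a_+(k)\big|_{k=k^*(x)} = 0,
\end{equation}
% so that, with \Phi the Legendre transform of \imath \log a_+,
\begin{equation}
  \Phi(x) = \big[\, k x - \imath \log a_+(k) \,\big]_{k=k^*(x)},
  \qquad
  \Phi'(x) = k^*(x),
  \qquad
  q(x) \sim e^{\imath \Phi(x)} = e^{\imath \int^x k^*(x')\,dx'},
\end{equation}
% and the identification k^* = \imath v/2 reproduces q = e^{-\frac12 \int^x v}.
```

The relation Φ′(x) = k*(x) follows from the saddle point condition, since the explicit k-dependence drops out when differentiating at the stationary point.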
This rather trivial steepest descent approach for t = 1 does not rely on the Riemann-Hilbert problem at all, since at this specific time b(k, 1) = Λ is a constant and the problem becomes essentially linear. Nevertheless, during the full time evolution the Riemann-Hilbert problem becomes more complex and requires a more involved solution, which we discuss in the next sub-section.

Reduction of the Problem
The steepest descent method for the Riemann-Hilbert problem consists of first deforming the contour on which the matrices G_± have a jump discontinuity (the "jump contour", or the "Riemann-Hilbert contour"), so as to simplify the Riemann-Hilbert problem greatly. This is in analogy to the steepest descent method for integrals, but here this step includes some matrix manipulation and a doubling or tripling of the jump contour, unlike in the simpler steepest descent method for integrals. After this is achieved, the simpler Riemann-Hilbert problem is solved. This sub-section concerns itself with the first step, namely the reduction of the Riemann-Hilbert problem to a simpler one. It is already in this step that we will be able to identify the Riemann invariants λ_1, λ_2, or, equivalently, the fields ρ and v, but only in the inviscid case. In the more general case the identification of the fields is more complicated and may only be deduced after performing the second step of actually solving the reduced Riemann-Hilbert problem.
The steepest descent method can be applied when log(a_+ a_-) is large. In this case the matrix G takes a simplified approximate form. We then follow the classic method of Ref. [14] to solve the problem approximately. The first step is to note that if we find two functions g_±, analytic in the upper and lower half planes, respectively, and tending to 1 at infinity in both half planes, then the transformation of G_± and G displayed in the text leads to a new Riemann-Hilbert problem which has the same formulation as the original one. Denoting the transformed quantities by tildes and defining h as in Eq. (5.11), one can search for functions g_± such that on some part of the jump contour g_+ − g_- = 0 while h′(k) > 0 there. In this case one may make use of the decomposition of G̃ given in Eq. (5.12), where one has used that a_+ a_- is large. One may then separate the jump contour into two contours, one on which the matrix G̃_+ jumps by G̃_1 and another on which it jumps by G̃_2. As a second step we deform each of these contours, that of G̃_1 towards the upper half plane and that of G̃_2 towards the lower half plane. The condition h′(k) > 0 then ensures that the off-diagonal terms in both G̃_1 and G̃_2 tend quickly to zero, such that the jump matrix becomes the identity matrix, namely there is no jump. The procedure is illustrated in Fig. 1. Thus the part of the jump contour on which we succeeded in finding g_+ − g_- = 0 and h′(k) > 0 simply disappears. Secondly, with the same g_± there may be a region where g_- − g_+ = log(a_+ a_-) and h′(k) < 0, allowing us to write G̃ in an analogous decomposed form and, by the same manipulation of the jump contour as before, simply remove the region where these conditions are met. Finally, there may be a region where neither set of conditions can be made to hold. There one may instead impose h′(k) = 0 and −log(a_+ a_-) < Re(g_+ − g_-) < 0, in which case

G̃ ≃ ( 0, e^(−ıh_0) ; −e^(ıh_0), 0 ). (5.15)

The simplification afforded by a Riemann-Hilbert problem of this form is that we have segments on each of which the jump matrix is constant and of a specific off-diagonal form, while on the other segments there is no jump. Such a problem can be solved by making use of Riemann theta functions, as shall be made explicit below.
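For convenience, the three regimes sought along the (deformed) jump contour can be collected as follows; h_0 here denotes the constant value of h on the residual segment where h′ = 0.

```latex
% Regimes for the g-function along the deformed jump contour:
%  (i)   g_+ - g_- = 0,              h'(k) > 0   ->  jump removable
%        (deformation kills the off-diagonal entries of \tilde G_1, \tilde G_2);
%  (ii)  g_- - g_+ = \log(a_+ a_-),  h'(k) < 0   ->  jump removable likewise;
%  (iii) h'(k) = 0,  -\log(a_+ a_-) < \mathrm{Re}(g_+ - g_-) < 0
%        ->  a residual constant off-diagonal jump:
\begin{equation}
  \tilde G \simeq
  \begin{pmatrix} 0 & e^{-\imath h_0} \\ -e^{\imath h_0} & 0 \end{pmatrix},
\end{equation}
% reproducing Eq. (5.15).
```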
Another point to be made is that for x > 0 and large k the conditions h′ > 0, g_+ − g_- = 0 can be met trivially, since log a_- → 0. For x < 0 it is much more convenient to apply the transformation G → e^(−σ_z log a_+) G e^(−σ_z log a_-), G_+ → G_+ e^(σ_z log a_+), G_- → G_- e^(−σ_z log a_-), to first bring G to the form of Eq. (5.16), whereupon the conditions h′ < 0, g_+ − g_- = 0 can be applied rather than g_- − g_+ = log(a_+ a_-). Of course both points of view are equivalent, but the latter is more symmetric with respect to x ↔ −x.
The fact that g_± can be found is related to the fact that g satisfies a certain scalar Riemann-Hilbert problem. We shall not go over this here, as the method is described in full in Ref. [14]. We do mention that, in practice, g_± can be constructed in a self-consistent manner, which will be described presently. The method thus described actually provides the solution to the inviscid equations, Eqs. (2.4, 2.5), in the case where the final Riemann-Hilbert problem contains just one non-trivial segment with a jump matrix of the form of Eq. (5.15). The endpoints of the segment are then λ_1, λ_2, which obey Eqs. (2.7, 2.8), equivalent to Eqs. (2.4, 2.5).
Since the case of only one segment is already very instructive, and since additional segments only encumber the notation, we concentrate on this case. However, once the case of one segment is understood, it is quite easy to generalize the procedure to several segments. We thus use notation in this sub-section which suggests a single segment, with the understanding that the generalization is straightforward.
It further turns out that, in order to satisfy all the above conditions, the segment [λ_1, λ_2] does not lie on the real axis. This is already suggested by the transformation to ρ and v in Eq. (2.6), where ρ and v are real but the λ_i are not. In fact the segment has endpoints of the form λ_{1,2} = ıv/2 ± √ρ, where v and ρ are real and ρ ≥ 0. This means that the jump contour is also deformed into the complex plane. We give a description of the contour in Fig. 2. Let us denote the endpoints of the segment by λ_1 and λ_2, and define the combination appearing in Eq. (5.17). In fact g′_± must obey Eq. (5.18) on the segment, and

g_+ − g_- = 0 otherwise. (5.19)

One may solve these conditions by taking g_± to be given by a single function g, defined on the complex plane with a cut on the segment [λ_1, λ_2], g_± being the values of this function above and below the cut. Such a function may be constructed by naturally associating with it a differential, writing Eq. (5.20), where the differential has a jump involving a_±(µ) over the branch cut at [λ_1, λ_2] and is smooth everywhere else. Such a differential can be written as in Eq. (5.21), where we have allowed for using either a_+ or a_- by making use of a sign σ = ±, respectively, or alternatively as in Eq. (5.22), where dw_sing(k), adorned with the subscript "sing", denotes the singular part of the terms which precede it, namely all the singularities of those terms away from the branch cut when the term is treated as a differential. These singularities are the poles of the expression at infinity and the branch cut on S; explicitly we have Eq. (5.23). In order for g′, defined in Eq. (5.20), not to diverge at the branch points λ_i, we must demand the conditions of Eq. (5.24). Setting λ_i = ıv/2 ± √ρ in these conditions, one gets the more explicit Eqs. (5.25). Summing the two equations and subtracting them (followed by a division by √ρ) leads to Eqs. (5.26, 5.27). Note that the second equation always has the solution ρ = 0, as the integral on its right hand side vanishes at ρ = 0 due to the symmetry µ → −µ*, which m(µ) and the set S respect, but with respect to which the integrand is antisymmetric. The solutions of these equations are solutions of the inviscid equations, Eqs. (2.4, 2.5), as shall be shown below.
If we set t = 1 and assume ρ to be small, we obtain from these equations two conditions, Eqs. (5.28, 5.29), the second of which can be satisfied either by the vanishing of the integral appearing in it or by ρ = 0. In fact, we know that at t = 1 we have ρ = 0 for any x ≠ 0; namely, the second condition, Eq. (5.29), may only be solved by ρ = 0 for x ≠ 0. To show that this is indeed the case, we take a derivative with respect to x of the first condition and obtain Eq. (5.30). Namely, if the integral in the second condition, Eq. (5.29), is to vanish, then ∂_x v must diverge. Since ∂_x v is finite for any finite x by assumption, we must conclude that ρ(x, 1) = 0 for all x ≠ 0. More generally, for any t there exists a region in which ρ = 0; in this region the hodograph equations (Eqs. (5.26, 5.27) above) reduce to Eq. (5.31). It is often the case that one may solve the large-κ (or, equivalently, large-Λ) limit at t = 1, where ρ = 0 for x ≠ 0, and obtain that the left hand side of Eq. (5.31) is equal to a given function of v. It is then a matter of solving a singular integral equation in order to find m(µ). This m(µ) also plays a role in an exact solution of the problem, where it is defined by Eq. (5.2). Of course, the exact m(µ) may have small corrections which are more difficult to obtain by examining the approximate solution (although it is in principle possible to develop a systematic expansion); nonetheless, the approximate m(µ) may either prove exact or supply a valuable first guess for finding an exact solution. We demonstrate this in Section 6 below. Eqs. (5.26, 5.27) are general equations for v and ρ, or, equivalently, for λ_1 and λ_2. We now wish to show that their solutions solve Eqs. (2.4, 2.5) or, equivalently, Eqs. (2.7, 2.8).
The requirement that g′ does not diverge suggests that around λ_i it has the form g′(k) ∼ √(k − λ_i). Let us define the differentials dΩ_x and dΩ_t as in Eqs. (5.32, 5.33). The compatibility condition for these two equations reads as in Eq. (5.34). Now dΩ_x and dΩ_t can easily be seen to be meromorphic differentials on the genus-0 Riemann surface associated with R²(k), with poles of order 2 and 3, respectively, and residue ±1 at infinity on the upper and lower sheets of the Riemann surface, respectively, which, by uniqueness of such differentials, means that we may immediately write them down explicitly. Indeed, expanding at infinity we get dΩ_x ∼ (±1 + O(1/k²)) dk and dΩ_t ∼ (±k + O(1/k²)) dk on the upper and lower sheets, respectively. If the explicit expressions for dΩ_x and dΩ_t are substituted into the compatibility equation, Eq. (5.34), one obtains equations for the λ_i by examining the behavior of both sides around λ_i. Indeed, one obtains a term that diverges as (k − λ_i)^(−3/2) around λ_i, and the residue of that divergence must coincide on both sides of the equation. This condition is Eq. (5.35); dividing both sides by λ_i − (λ_1 + λ_2)/2 yields immediately the inviscid equations encountered above for the Riemann invariants λ_1, λ_2 of Eqs. (2.4, 2.5), namely Eqs. (2.7, 2.8), reproduced in Eq. (5.36). Note that, although we have concentrated on the case where only two Riemann invariants are present, the same procedure yields equations with any (even) number of Riemann invariants, generalizing Eqs. (2.7, 2.8) to the case where the reduced Riemann-Hilbert contour is a union of segments, with g the genus of the associated Riemann surface. The λ_i thus obtained are then moduli of oscillatory solutions of the original nonlinear Schrödinger equations, Eqs. (1.1, 1.2). These oscillatory solutions are given in Eqs. (5.50, 5.51) below. When two of the λ_i coincide, namely when λ_{2j−1} → λ_{2j}, such an oscillatory solution takes on a solitonic nature. The only added component here is that the differentials dΩ_x and dΩ_t may be shown to be normalized so as to have null a-cycles.
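As a consistency check on the genus-0 construction, one can write the unique differentials with the stated pole structure explicitly. For the surface R²(k) = (k − λ_1)(k − λ_2) the standard candidates, matching the asymptotics dΩ_x ∼ ±1 dk and dΩ_t ∼ ±k dk (an assumption here; the paper's normalization may differ by convention-dependent factors), are dΩ_x = (k − (λ_1+λ_2)/2) dk/R(k) and dΩ_t = (k² − (λ_1+λ_2)k/2 − (λ_1−λ_2)²/8) dk/R(k). A short numerical sketch verifying these expansions:

```python
import numpy as np

# Sample endpoints of the assumed form lambda_{1,2} = i*v/2 +/- sqrt(rho),
# here with v = 2 and rho = 0.09 (illustrative values only).
l1, l2 = 0.3 + 1.0j, -0.3 + 1.0j

def R(k):
    # Square root of R^2(k) = (k - l1)(k - l2); for large real k the
    # principal branch behaves as k, i.e. we sit on the "upper sheet".
    return np.sqrt((k - l1) * (k - l2))

def dOx(k):
    # Candidate dOmega_x / dk: double pole at infinity,
    # expansion 1 + O(1/k^2) on the upper sheet.
    return (k - (l1 + l2) / 2) / R(k)

def dOt(k):
    # Candidate dOmega_t / dk: triple pole at infinity,
    # expansion k + O(1/k^2) on the upper sheet (the constant and 1/k
    # terms cancel by the choice of the polynomial in the numerator).
    return (k**2 - (l1 + l2) * k / 2 - (l1 - l2)**2 / 8) / R(k)

k = 1.0e4  # far from the branch cut
assert abs(dOx(k) - 1.0) < 1e-6   # dOmega_x ~ (1 + O(1/k^2)) dk
assert abs(dOt(k) - k) < 1e-6     # dOmega_t ~ (k + O(1/k^2)) dk
```

The check confirms that the subleading constant and 1/k terms indeed cancel, which is what makes these the unique normalized second- and third-order differentials on the genus-0 surface.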

Solution of the Reduced Problem
The solution of the reduced Riemann-Hilbert problem is achieved by making use of Riemann theta functions. One can find more details about this solution in Refs. [14,22]; here we give merely a very rapid exposition. It should be noted that this section is included only for completeness. If one is interested only in the case of two λ_i's, then the identification of the Riemann invariants as the endpoints of the reduced Riemann-Hilbert contour has already been made, albeit without full justification (it was merely suggestive that the two objects, the Riemann invariants of Eqs. (2.4, 2.5) and the endpoints of the reduced Riemann-Hilbert contour, obey the same differential equations, Eqs. (2.7, 2.8)). Furthermore, the solution of the Riemann-Hilbert problem is from this point on standard [14,22]; namely, there are no special features associated with the peculiar formulation of the large deviation problem, except that the Riemann-Hilbert contour lies, rather unconventionally, away from the real axis, the symmetry obeyed by the contour in this case being reflection across the imaginary axis. This situation is due to the non-conventional real section of the non-linear Schrödinger equation afforded by two real fields, p and q, rather than by two fields which are complex conjugates of each other.
In order to introduce the solution, assume that the Riemann surface at hand is given by Eq. (5.37). In this case we have g holomorphic differentials ω_i, which may be normalized according to Eq. (5.38), where a_i denotes an a-cycle, conventionally defined as shown in Fig. 3.
Then one defines the Abel map as in Eq. (5.39), where the integral is taken from ∞ on the upper sheet to the point k; A(k_±) denotes that the point is to be taken as k on the upper or lower sheet, respectively. One further defines the Riemann matrix, Eq. (5.40), and the Riemann theta function, defined as in Eq. (5.41) [22]. We also define A_g to be the vector whose i-th element is given by ∮_{b_i} g′ dk. Lastly, we define a differential dΩ_N to be a meromorphic differential with poles at infinity on the upper and lower sheets, of residue ±1, respectively; associated with this definition is the vector V whose i-th element is given by ∮_{b_i} dΩ_N. In the lore of algebraic Riemann surfaces, Eq. (5.42) is a standard identity. It is convenient to solve for a matrix M_± instead of G_±, where M_± is defined below; this change of variables results in modified asymptotics for M_± as k → ∞.
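For completeness, the standard definitions entering the solution read, in one common normalization (cf. Ref. [22]):

```latex
% Normalized holomorphic differentials, Riemann matrix and Abel map
% (one common convention):
\begin{equation}
  \oint_{a_i} \omega_j = \delta_{ij}, \qquad
  \tau_{ij} = \oint_{b_i} \omega_j, \qquad
  A_i(k) = \int_{\infty_+}^{k} \omega_i ,
\end{equation}
% Riemann theta function:
\begin{equation}
  \Theta(z) = \sum_{n \in \mathbb{Z}^g}
  e^{\,\imath \pi\, n \cdot \tau n \; + \; 2 \pi \imath\, n \cdot z } .
\end{equation}
```

The quasi-periodicity of Θ under shifts of z by lattice vectors of the Riemann matrix is what allows the jump conditions of the reduced problem to be satisfied by ratios of theta functions.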
But otherwise the matrix M ± solves the same Riemann-Hilbert problem as G ± , which becomes the reduced Riemann-Hilbert problem in the limit of large Λ.
The solution reads as given below; here D is a constant vector that depends neither on time, nor space, nor the auxiliary spectral parameter k.
The asymptote of M_± at large k is given in Eqs. (5.45, 5.46), where the ellipsis denotes terms unimportant for the sequel. This expression makes use of definitions of v_0, ω_0 and ρ_0, connected to the asymptotes of the theta functions. Substituting Eqs. (5.45, 5.46) into Eq. (4.9), one may deduce the fields p and q, given in Eqs. (5.50, 5.51). In the genus-0 case (g = 0) the theta function degenerates to a constant, and computing the asymptotes in Eqs. (5.47, 5.48) explicitly gives v_0 = v, ρ_0 = ρ and ω_0 = 2ρ − v²/4, which recovers the Cole-Hopf transformation of Eq. (2.1): q = e^(−½ ∫^x v), p = −2ρ e^(½ ∫^x v).

Example for Flat Initial Conditions
In Ref. [17] the boundary value problem of Eq. (6.1) was considered in the inviscid limit. These are termed "flat" initial conditions, since v is constant at the initial time. In Ref. [17], Eqs. (2.4, 2.5) were solved with these boundary conditions. The solution may be written by first solving for β(t) from Eq. (6.2); then ρ and v can be written explicitly as in Eqs. (6.3, 6.4). To identify the asymptotic Riemann-Hilbert problem associated with this large-Λ limit, we may take the solution for v(x, 1) and insert it into the t = 1 hodograph relation, Eq. (5.31). This yields a singular integral equation which can be solved by standard means; we may, however, guess the solution, namely m′(µ) = −4µ, which suggests m(µ) = 2(κ² − µ²). This coincides with the result of Ref. [11], where it was shown that an exact solution yields m(µ) = 2(κ² − µ²) − 2 log(µ), the last term being a logarithmic correction to our approximate solution, in which the large parameter is µ ∼ κ.
Having identified m(µ), we may now check whether Eqs. (5.26, 5.27), which may be considered as the result of integrating the inviscid differential equations, are satisfied in this case. This is a matter of substituting m′(µ) = −4µ into those equations and performing the integrals. One finds Eq. (6.5) and

Re[R²(κ)] = 0, (6.6)

and the fact that these equations are indeed satisfied can be checked by direct substitution.

Conclusion
In this paper we have established a connection between the inverse scattering method and the Whitham limit in the case of the boundary value problems that appear in certain large deviation problems.
We have tried to present all the important features of the approach that connects the two problems, while leaving out many of the specific details that apply in particular cases. First, we have only dealt with the case where the relevant nonlinear equations to be solved are the non-linear Schrödinger equations with real fields. We believe that the generalization to systems such as the derivative nonlinear Schrödinger equations is not substantially different from the present case. Furthermore, we have mainly dealt with the case where the strongly non-linear limit leads to inviscid equations, in which dispersive terms may simply be dropped. Although at first sight this case may seem rather special, as it is known that instabilities, such as shocks, can cause oscillations or solitons to appear in the solution, it is actually quite straightforward to generalize the method to such cases, since the form of the solution in the case where the Riemann-Hilbert contour is multi-segmented is written down in Eqs. (5.50, 5.51). The case of solitons appears when two endpoints of the Riemann-Hilbert contour meet, as is well known; indeed, in that limit the theta functions appearing in Eqs. (5.50, 5.51) degenerate into hyperbolic trigonometric functions, from which solitons are easily obtained. In the case of multi-segmented Riemann-Hilbert contours one can recover a generalization of the equations for the Riemann invariants, Eqs. (2.7, 2.8), by making use of the compatibility condition, Eq. (5.34), following the procedure outlined in this paper. Such a procedure is well known from Refs. [19,20,23,24].

Acknowledgement
I wish to thank Baruch Meerson for many useful discussions. I also wish to acknowledge the Binational Science Foundation, which has supported this research through grant number 2020193.
From the relation T(k, t) = e^(σ_z k²t/2) T(k, 0) e^(−σ_z k²t/2) one can easily find the time dependence of the elements as follows: a_±(k, t) = a_±(k, 0), b(k, t) = e^(−k²t) b(k, 0) and b̄(k, t) = e^(k²t) b̄(k, 0). This can be shown by advancing the plane wave solutions in time at x = ±∞: one obtains that at large |x| the solution behaves as P ∼ e^(σ_z(k²t − ıkx)/2).

Figure 1 :
Figure 1: We have dropped the tildes in this figure. On the left, the real axis is denoted by a heavy line and the matrices G_+ and G_− are shown above and below it, respectively. The jump matrix is G, and it can be decomposed as G_1 G_2. The jump can then be separated into contours, each with its own jump matrix, G_1 and G_2. The jump then proceeds in two steps: G_+ jumps over to G_m on the upper contour and G_m jumps over to G_− on the lower one. However, since the matrices G_1 and G_2 tend to the identity, the jump in fact disappears and G_+ becomes smoothly G_− as the real axis is crossed.

Figure 2 :
Figure 2: The jump contour after deformation. The actual jump is only on the segment [λ_1, λ_2]; the rest of the contour, denoted by a dashed line, has no jump on it, due to the procedure which removes it by decomposing G into G_1 G_2.

Figure 3 :
Figure 3: The cycles over the Riemann surface associated with the function R_(2g+2)(k). Dashed lines describe the parts of the cycles that lie on the lower sheet.