Exact Coupling Coefficient Distribution in the Doorway Mechanism

In many-body and other systems, the physical situation often allows one to interpret certain distinct states by means of a simple picture. In this interpretation, the distinct states are not eigenstates of the full Hamiltonian. Hence, there is an interaction which makes the distinct states act as doorways into background states which are modeled statistically. The crucial quantities are the overlaps between the eigenstates of the full Hamiltonian and the doorway states, that is, the coupling coefficients occurring in the expansion of the true eigenstates in the simple model basis. Recently, the distribution of the maximum coupling coefficients was introduced as a new, highly sensitive statistical observable. In the particularly important regime of weak interactions, this distribution is very well approximated by the fidelity distribution, defined as the distribution of the overlap between the doorway states with and without interaction. Using a random matrix model, we calculate the latter distribution exactly for regular and chaotic background states in the cases of preserved and fully broken time-reversal invariance. We also perform numerical simulations and find excellent agreement with our analytical results.


Introduction
In open quantum systems, strength function phenomena [1] give structural information about the system itself and about the excitation mechanism. Here, we address statistical features of the doorway mechanism, which can be defined as follows: there are one or several somehow "distinct" and "simple" excitations whose amplitudes are spread over many "complicated" states. In a many-body system, collective excitations are often distinct, because all or large groups of particles move in a coherent fashion. As compared to the complexity of the other, non-collective excitations, these states can be interpreted in the framework of a simple, typically semiclassical, picture. The distinct states act as "doorways" to the background of the complicated states [1,2]. Mostly, the statistical features of the latter are chaotic. The strength function has Breit-Wigner shape, largely independent of the statistics of the background states. The width characterizing the Breit-Wigner strength function is referred to as the spreading width [1].
The doorway mechanism is found in a rich variety of systems, comprising atoms and molecules [3], as well as atomic clusters, quantum dots and, more generally, mesoscopic systems [4,5,6]. Nuclear physics provides particularly beautiful and well-studied examples, such as Isobaric Analog States and multipole Giant Resonances [1,7,8,9,10].
What is a suitable theoretical interpretation of the Breit-Wigner shape? Although the simple picture for the distinct excitations captures the main physics, it is important to realize that these states are not eigenstates of the real quantum Hamiltonian. Similarly, the statistical models for the background states do not describe eigenstates either. Thus, if we use the simple picture for the distinct states and the statistical model for the background states as a basis of the Hilbert space, there must be a non-vanishing interaction between these two classes of states. Rediagonalization then yields proper eigenstates of the model Hamiltonian. Averaging over the background states, one obtains the local density of states around the energy of the distinct state or states in the simple picture. The local density of states is once more of Lorentzian or Breit-Wigner shape, with a spreading width that is, depending on the particular situation, closely related to or identical with the above-mentioned spreading width in the strength function. It can be viewed as a measure for the quality of the simple picture describing the distinct states: the smaller the spreading width, the closer this picture is to the physical reality.
The strength of the interaction between the two classes of states uniquely determines the spreading width and, equivalently, the size of the overlap between the distinct state in the simple picture and the true eigenstates of the model Hamiltonian. These overlaps are of course the coupling coefficients when expanding the true eigenstates in the above-mentioned basis. Recently, a new statistical observable was introduced: the distribution of the maximum coupling coefficients [11]. The first two moments of this distribution were already studied in Ref. [12], but with assumptions not valid in our context. In Ref. [11], however, the full distribution is addressed. Importantly, its shape depends sensitively on the interaction strength. Moreover, it is an especially well-tailored measure to investigate weak interactions.
Here, we present exact results for the distribution of the coupling coefficients to a distinct state in the framework of a random matrix model. In the particularly interesting regime of weak interactions, this distribution coincides with the distribution of the maximum coupling coefficients.
The article is organized as follows. After properly posing the problem in Sec. 2, we calculate the distribution exactly for regular and chaotic background, respectively in Secs. 3 and 4. We discuss our results in Sec. 5.

Posing the Problem
In Sec. 2.1, we present the random matrix model for the doorway mechanism. We introduce and define the distribution of the maximum coupling coefficient in Sec. 2.2.

Doorway Mechanism in a Random Matrix Model
The model to be discussed here stems from nuclear physics [1] and is also often used in other fields [13]. For the convenience of the reader and to define our notation, we compile its salient features. As we are aiming at a random matrix model, it is convenient to choose from the beginning a proper basis of the full Hilbert space such that we can represent the Hilbert space operators by matrices. Introducing a cutoff, their dimension is finite. Eventually this cutoff effect is removed by taking the matrix dimension to infinity. We nevertheless use the Dirac notation for the wave functions, even though they are finite-dimensional vectors.
The total Hamiltonian H consists of three parts: the Hamiltonian H_s for the K distinct states which become the doorway states, the Hamiltonian H_b describing the N background states, where N will eventually be taken to infinity, and the interaction V coupling the two classes of states. Hence, we have H = H_s + H_b + V. For the matrix elements of the interaction, we make the assumptions ⟨s_j|V|s_k⟩ = ⟨b_ν|V|b_µ⟩ = 0 and ⟨b_ν|V|s_j⟩ = V_νj for any j, k, µ, ν. Often, there is only one relevant doorway state, or the spacing between the doorway states is much larger than their spreading widths. We focus on these cases and consider only one doorway state by setting K = 1, |s_1⟩ = |s⟩ and V_ν1 = V_ν.
The eigenequations for the uncoupled Hamiltonians are H_s|s⟩ = E_s|s⟩ and H_b|b_ν⟩ = E_ν|b_ν⟩. Due to the interaction V, the doorway state is not an eigenstate of the Hamiltonian H. We denote the eigenstates of the full Hamiltonian H by |n⟩; the eigenequation to be solved is H|n⟩ = E_n|n⟩. Resembling the situation in most systems, we put the doorway state |s⟩ in the center of the background spectrum. It interacts with the surrounding N states. Without loss of generality, we may set E_s = 0. The exact eigenstate of H which evolves from the doorway state in the presence of the interaction is referred to as |0⟩. We expand the n-th eigenstate of H in the basis spanned by |b_ν⟩ and |s⟩, where the coupling coefficient c_ns is the overlap between the doorway state |s⟩ in the simple picture for this distinct state and the n-th exact eigenstate |n⟩ of the full Hamiltonian. We are interested in the statistical features of these coupling coefficients.
We have to solve our model for c_ns. The action of the full Hamiltonian H on the eigenstate |n⟩ yields one expression when H acts on the basis expansion, and another via the eigenvalue equation. Equating these two expressions, we find c_nν = V_ν c_ns/(E_n − E_ν). Using the normalization of |n⟩, we eventually arrive at |c_ns|² = [1 + Σ_ν |V_ν|²/(E_n − E_ν)²]^(−1), which is the desired expression for c_ns in terms of the matrix elements of H. Formula (9) holds for 0 ≤ n ≤ N. This expression is still exact. However, since the eigenvalues E_n of the full Hamiltonian depend on the coupling matrix elements V_ν, it is a complicated implicit expression. We proceed further by expanding the exact eigenvalues perturbatively in V, where the eigenstate |n⟩ of the full Hamiltonian to eigenvalue E_n has evolved from the eigenstate |b_ν(n)⟩ of the unperturbed Hamiltonian by adiabatically switching on the perturbation.
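For readers who wish to experiment with the model, the construction above can be realized numerically. The following sketch is illustrative only: the dimension N, the Poisson-type background levels, and all parameter values are our own assumptions, not taken from the text. It builds the (N + 1) × (N + 1) matrix of H for K = 1, diagonalizes it, and reads off the coupling coefficients c_ns as the components of the eigenvectors along |s⟩.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of background states (illustrative choice)
lam = 0.5        # dimensionless coupling strength lambda = v / D

# Regular (Poisson-like) background: independent levels with unit mean spacing D
D = 1.0
E_b = np.sort(rng.uniform(-N * D / 2.0, N * D / 2.0, size=N))

# Coupling matrix elements V_nu: real Gaussian with standard deviation v = lam * D
v = lam * D
V = rng.normal(0.0, v, size=N)

# Full (N+1) x (N+1) Hamiltonian: doorway state |s> with E_s = 0 in the first slot
H = np.zeros((N + 1, N + 1))
H[0, 1:] = V
H[1:, 0] = V
H[1:, 1:] = np.diag(E_b)

# Coupling coefficients c_ns = <s|n>: first component of each eigenvector
_, vecs = np.linalg.eigh(H)
c = np.abs(vecs[0, :])

print("max |c_ns| =", c.max(), "  sum |c_ns|^2 =", np.sum(c**2))
```

At weak coupling (λ well below 1) the largest coefficient lies close to 1 and belongs to the state |0⟩ that evolves from the doorway state.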
We now obtain a crucial simplification by setting E_n = E_ν(n), i.e., we keep only the leading-order term in the perturbative expansion of Eq. (10). In this approximation of |c_ns|² for n ≠ 0, the sum in Eq. (9) is completely dominated by the term ν = ν(n), which actually diverges, such that |c_ns|² ≈ 0 to first order for all n ≠ 0. The only overlap integral which remains finite is the overlap |c_0s|² of the doorway state with itself. Here we set E_0 ≈ E_s = 0 and no divergence occurs. Using the approximation E_n = E_ν(n), we have therefore essentially singled out |c_0s|² as the only non-vanishing, and thus inevitably maximum, overlap integral of the perturbed eigenstates with the doorway state.
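Since the display equations did not survive in this text, it may help to record the key formula of this approximation explicitly. The following is our reconstruction from the surrounding derivation (Eq. (9) specialized to n = 0 with E_0 ≈ E_s = 0), not a verbatim quote of the original equation:

```latex
% Overlap of the evolved doorway state |0> with the bare doorway state |s>,
% i.e. Eq. (9) evaluated at n = 0 with E_0 \approx E_s = 0:
|c_{0s}|^{2} \;=\; \left[\, 1 \,+\, \sum_{\nu=1}^{N} \frac{|V_{\nu}|^{2}}{E_{\nu}^{2}} \,\right]^{-1} .
```

All overlaps |c_ns|² with n ≠ 0 vanish at this order, so |c_0s| is automatically the maximum coupling coefficient.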

Distribution of the Maximum Coupling Coefficient
The new statistical observable introduced in Ref. [11] is the distribution of the maximum of the overlaps between the eigenstates of the full Hamiltonian and the distinct state, that is, the doorway state |s⟩. In order to obtain it, we have to average in a suitable way over the interaction matrix elements and over the Hamiltonian modeling the background states. For the time being it suffices to denote this average by square brackets; later on we give a precise definition. Hence, the distribution in question is the averaged density [δ(c − max_n |c_ns|)]. On the other hand, the distribution of the overlap between the evolved doorway state and the unperturbed doorway state reads p_0(c) = [δ(c − |c_0s|)]. Setting E_n = E_ν(n) amounts essentially to approximating the distribution of the maximum coupling coefficient by p_0(c). This approximation is certainly good for small interactions or, more precisely, as long as the mean coupling strength is an order of magnitude smaller than the mean level spacing of the background states. Our numerical simulations will strongly corroborate this statement. Hence, we focus on p_0(c), which can be treated analytically. The statistics of the interaction matrix elements can only have minor impact on the resulting distribution. Hence, if not stated otherwise, we assume that the interaction matrix elements are Gaussian distributed random variables. We have to distinguish two cases: the total Hamiltonian H can be time-reversal non-invariant or time-reversal invariant, where we disregard spin degrees of freedom. In the first case, labeled by the Dyson index β = 2, the interaction matrix elements V_ν are complex variables; in the second case, labeled β = 1, they are real. Introducing the N-component vector V, the corresponding Gaussian distribution P_i(V) is defined accordingly. In contrast to the behavior under time reversal, the statistical properties of the Hamiltonian H_b must strongly affect the distribution p_0(c). Hence, we do not specify it yet.
As is well known from Random Matrix Theory, the parameter governing the physics is λ = v/D, where v is the mean coupling strength and D is the mean level spacing of the background states in the center of the band [1,13]. The distribution P_i(V) is chosen such that λ is independent of β.
Technically, it is more convenient to work out the probability density Q(u) of the random variable u = 1/|c_0s|² = 1 + Σ_ν |V_ν|²/E_ν². The relation between the two distributions reads p_0(c) = (2/c³) Q(1/c²). Thus, once Q(u) is known, p_0(c) follows immediately. We now use our statistical assumption that the interaction matrix elements V_ν are Gaussian distributed. We write the distribution Q(u) as an integral over the measure P_i(V) d[V], where d[V] is the product of the differentials of all independent variables in V. The square brackets with index N denote an average over the N background states, that is, over the Hamiltonian H_b. For the calculation of the averages, it is helpful to write the distribution Q(u) as a Fourier transform with the characteristic function R(k). After rescaling V_ν = y_ν/|E_ν| we obtain an alternative expression, where the vector y has real entries for β = 1 and complex ones for β = 2. The infinitesimal volume element d[y] is a product of the differentials of all independent entries of the vector y. As a generating function, R is normalized to R(0) = 1.
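The mapping between Q(u) and p_0(c) can be checked by direct sampling. The sketch below assumes, consistent with the perturbative overlap formula, that u = 1/|c_0s|² = 1 + Σ_ν |V_ν|²/E_ν²; since the display equations are not reproduced in this text, this identification is our reading of the derivation rather than a quoted formula. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

N, lam, samples = 500, 0.5, 2000   # illustrative parameters
D = 1.0                            # mean level spacing of the background
v = lam * D                        # mean coupling strength

u = np.empty(samples)
for i in range(samples):
    E = rng.uniform(-N * D / 2.0, N * D / 2.0, size=N)  # regular background levels
    V = rng.normal(0.0, v, size=N)                      # real coupling, beta = 1
    u[i] = 1.0 + np.sum((V / E) ** 2)                   # u = 1 / |c_0s|^2

# Samples of u map to samples of the overlap c = u^(-1/2); a histogram of c
# estimates p_0(c), while a histogram of u estimates Q(u).
c = 1.0 / np.sqrt(u)
print("fraction of samples with c > 0.9:", np.mean(c > 0.9))
```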
In the sequel, we calculate the expressions (22) for generic choices of the Hamiltonian H b governing the dynamics of the background states.

Regular Background
The doorway state is embedded into a regular background if the eigenvalues E_ν of H_b do not repel each other. The distribution of the background Hamiltonian then factorizes into a product of single-level distributions p_b(E_ν). In order to keep the discussion most general, we use for the interaction matrix elements a general factorizing distribution p_i instead of the Gaussian distribution introduced before Eq. (15). We keep the reasonable, physically motivated assumption of statistical independence of the interaction matrix elements, but relax the global orthogonal (β = 1) or unitary (β = 2) invariance of the interaction matrix elements implicit in the measure Eq. (15). For complex coupling matrix elements we assume in addition that the distribution depends on the moduli |V_ν| only. We assign to complex coupling matrix elements with this invariance the Dyson index β = 2 and to real coupling matrix elements the Dyson index β = 1.
A straightforward calculation reveals that the characteristic function (21) factorizes as well and becomes an N-th power of a single integral. As we are interested in the local scale set by the mean level spacing D of the background states, the distribution p_0(c) should not be sensitive to the particular choice of the distribution p_b, as long as it does not contain scales competing with the mean level spacing D. The simplest choice is a flat distribution on an interval of length √N = ND, such that D = 1/√N. The following calculation is similar to the one described in Appendix B of Ref. [14]. We perform the integral over the background distribution in Eq. (26), using in the last step an integral identity of the Fresnel type, and find the characteristic function. We observe that the distribution of the interaction matrix elements p_i enters only via the expectation value m_1 as defined in Eq. (30) and not via the second moment m_2 = v². As pointed out after Eq. (16), p_i is chosen such that the mean coupling strength, as defined through v, is independent of β. This means that m_1 in general is different for real and for complex coupling. We write m_1 = a_β v, where a_β depends on Dyson's index and on the distribution p_i. For instance, for the Gaussian distribution we find a_1 = √(2/π) ≈ 0.80 and a_2 = √π/2 ≈ 0.89. Using the definition of λ in Eq. (16), we finally obtain the characteristic function in closed form. The reader can easily convince herself/himself that other reasonable choices for p_b(E), such as a Gaussian distribution, yield the same functional form as in Eq. (33). The Fourier transform (20) then yields Q(u) and, using the relation (18), we eventually arrive at p_0(c). As anticipated, the interaction strength enters the distribution only via the dimensionless ratio λ = v/D. The distribution p_i enters via the factor a_β defined in Eq. (30).
It is interesting to see that a_β depends not only on the distribution but also on the symmetry index β. This means that for a regular background and constant interaction strength λ, the distribution p_0 distinguishes between real and complex interaction. As we will see in the following, this does not happen for a chaotic background. This opens, at least theoretically, the possibility to distinguish between regular and chaotic background dynamics through the doorway state. Assume we can experimentally manipulate the interaction between the doorway state and the background such that the interaction matrix elements change from real to complex, for instance by switching on a magnetic field. For fixed mean interaction strength and a chaotic background dynamics, the distribution p_0 will be invariant, whereas for a regular background it will change.
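This diagnostic can be probed numerically for the regular background. The sketch below uses our own illustrative parameters and the perturbative overlap formula; it draws Poissonian background levels and compares the overlap statistics for real and complex coupling at the same mean coupling strength v, so that ⟨|V_ν|²⟩ = v² in both cases.

```python
import numpy as np

rng = np.random.default_rng(2)

N, lam, samples = 500, 1.0, 2000
D = 1.0
v = lam * D

def overlap_samples(complex_coupling):
    c = np.empty(samples)
    for i in range(samples):
        E = rng.uniform(-N * D / 2.0, N * D / 2.0, size=N)  # regular background
        if complex_coupling:                                # beta = 2: <|V|^2> = v^2
            V = (rng.normal(0.0, v / np.sqrt(2.0), N)
                 + 1j * rng.normal(0.0, v / np.sqrt(2.0), N))
        else:                                               # beta = 1: <V^2> = v^2
            V = rng.normal(0.0, v, N)
        c[i] = 1.0 / np.sqrt(1.0 + np.sum(np.abs(V / E) ** 2))
    return c

med_real = np.median(overlap_samples(False))
med_cplx = np.median(overlap_samples(True))
print("median |<0|s>|, real coupling:   ", med_real)
print("median |<0|s>|, complex coupling:", med_cplx)
```

The two medians come out slightly different for the regular background, reflecting the β dependence of a_β discussed above.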
In Fig. 1, p_0 is plotted for three different values of the mean coupling strength. We see that, due to the numerical similarity of a_1 and a_2, the plots for real and for complex couplings are almost the same. The fact that a_1 ≈ a_2 for a Gaussian distribution seems to be rather accidental. However, for p_i being a semicircle (SC) distribution, the constants a_β come out different.

Chaotic Background
The dynamics of the background states is usually chaotic. The N × N Hamilton matrix H_b modeling the background states then has to be chosen from a Gaussian random matrix ensemble. Again, we have to distinguish time-reversal invariant and non-invariant systems, that is, the cases labeled by the Dyson parameters β = 1 and β = 2, respectively. The Hamiltonian H_b is drawn from the Gaussian orthogonal ensemble (GOE) for β = 1 and from the Gaussian unitary ensemble (GUE) for β = 2, with variance w² of the diagonal elements. As already said, y is an N-component vector which has real or complex entries for β = 1 and β = 2, respectively. In Sec. 4.1 we reformulate the problem in terms of matrix invariants. This enables us to introduce a handy supermatrix model in Sec. 4.2, which we then solve in Sec. 4.3.
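A minimal numerical realization of the chaotic-background case, under our own choice of parameters and with the mean level spacing D estimated empirically from the central eigenvalues rather than from an analytic formula, might look as follows:

```python
import numpy as np

rng = np.random.default_rng(3)

N, lam, samples = 200, 0.5, 300
c0 = np.empty(samples)
for i in range(samples):
    A = rng.normal(size=(N, N))
    Hb = (A + A.T) / 2.0                      # GOE-type real symmetric background
    E = np.sort(np.linalg.eigvalsh(Hb))
    mid = E[N // 2 - 10: N // 2 + 10]
    D = np.mean(np.diff(mid))                 # empirical spacing at the band center
    V = rng.normal(0.0, lam * D, size=N)      # real coupling with v = lam * D
    H = np.zeros((N + 1, N + 1))
    H[0, 1:] = V
    H[1:, 0] = V
    H[1:, 1:] = Hb
    vecs = np.linalg.eigh(H)[1]
    c0[i] = np.abs(vecs[0]).max()             # maximum coupling coefficient

print("GOE background, lambda = %.1f: mean max overlap = %.3f" % (lam, c0.mean()))
```

For the β = 2 case one would replace Hb by a complex Hermitian (GUE-type) matrix and V by complex Gaussian entries of the same mean strength.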

Reformulation in a Rotation-Invariant Form
At first sight, the best way of tackling the problem seems to be to start from Eq. (19) and to introduce V_ν = y_ν/|E_ν|. The y integration is then simply an integration over an (N − 1)-sphere in a real or complex space. This leads to an expression in which the N × N matrix appearing under the average has rank one, and whose prefactor is the volume of the above-mentioned sphere. Unfortunately, the remaining ensemble average is only feasible for the GUE; we carry it out in Appendix A. For the GOE, the calculation is hampered by the modulus of the determinant.
To address the GOE case, and to have a method that handles both cases, GOE and GUE, in a unified way, it turns out necessary to cast the ensemble average into an invariant form. To this end, we start from Eq. (22), perform the integration over the vector y, and obtain an N × N-matrix average. Now we make the observation that this N × N-matrix average can be expressed as an (N + 1) × (N + 1)-matrix average, where H is an (N + 1) × (N + 1) matrix and the ensemble average is over an (N + 1) × (N + 1) GOE (GUE) ensemble. The derivation of Eq. (41), which is crucial for the calculation, is sketched in Appendix B. On the right-hand side, the inconvenient modulus of the determinant has disappeared. The combinatorial factor L_N^β is given in Eq. (42). We express the determinant on the right-hand side of Eq. (41) as a Gaussian integral over a real (β = 1) or complex (β = 2) (N + 1)-component vector y.
We then plug Eqs. (40), (41) and (43) into Eq. (20) and write the integral over the (N + 1)-component vector y in radial coordinates. We find an expression involving the function F_{N+1}(g) with the (N + 1) × (N + 1) matrix G = diag(g − 1, 0, . . . , 0). After combining the various constants, we arrive at a formula in which we formally introduced a fractional derivative. We can evaluate the fractional derivative for arbitrary β; for β = 1, 2 we obtain explicit expressions. Here we see that the GOE case is more complicated than the GUE case. For the GUE the fractional derivative disappears, and we find the result without further problems. For the GOE we obtain an integral expression. The remaining task is in both cases (GOE and GUE) the calculation of F_{N+1}(g).

Mapping onto a Supermatrix Model
Using tr δ(H) = (1/π) Im tr (H − iε)^(−1), the ensemble average F_{N+1}(g) defined in Eq. (45) can be expressed via standard techniques as a supersymmetric matrix integral. Here σ and τ are 2 × 2 (GUE) and 4 × 4 (GOE) supermatrices, respectively. The matrix entries denoted by Latin letters are real commuting integration variables; the matrix entries denoted by Greek letters are complex anticommuting integration variables.

The infinitesimal volume elements d[τ] and d[σ] are products of the differentials of all independent integration variables. The integration domain of the real commuting variables is the real axis. The matrix J is a 2 × 2 (GUE) or a 4 × 4 (GOE) diagonal supermatrix with entries J = diag(j, −j) (GUE) and J = diag(j, j, −j, −j) (GOE). Due to the broken rotation invariance of the original matrix model (1), the resulting supersymmetric representation (53) is a two-matrix model. We wish to evaluate the σ integral by a saddle-point approximation and to calculate the τ integral exactly afterwards. It is well known [15] that for large N the σ integral K_N(τ) in the saddle-point approximation yields an expression involving the mean level spacing D = √(βπ²w²/(2N)) in the center of the band. However, this approximation is only valid if Str τ itself is of the order of the mean level spacing. Since the integration domain of τ is the whole real axis, this is not automatically guaranteed. A necessary condition is that the variance of the Gaussian in the second line of Eq. (53) is itself of the order of the mean level spacing, i.e., the τ integral in Eq. (53) is essentially localised to a small window of width D around zero. Consequently, we must require this condition, and therefore g should scale as N for large N. The dimensionless coupling strength is given by λ = v/D. Since u is of order one, we obtain the corresponding scaling of g in the GUE case. Fortunately, this is exactly the scaling behaviour we need to apply the saddle-point approximation. In the GOE case, Eq. (57) holds in any interval ω_c < x < (u − 1), where ω_c is an infrared cutoff in the integral, which is small compared to one but large compared to the mean level spacing, i.e., in the limit N → ∞ in the whole integration domain of the x integral in Eq. (51). In conclusion, we can apply the approximation (55) both in the GUE and in the GOE case.

Remaining Matrix Integration and Final Result
Plugging Eq. (55) into Eq. (53), we obtain after a simple shift an expression in which we also employed that g ≃ N ≫ 1. Now the derivative with respect to the source term can be performed, where we used an identity valid for a complex number z with negative imaginary part. The remaining average can be calculated by employing techniques of standard analysis, using an identity which holds for r ∈ R, and computing the imaginary part of the trace. Finally we obtain a closed expression for F_{N+1}(g). This result simplifies considerably when we take into account the scaling behavior (57) of g, so that the large N limit can be taken. This can then be plugged into Eq. (46) to obtain expressions for Q(u) on the scale of the mean level spacing. For the GUE we find the result straightforwardly. For the GOE we are left with an integral expression for Q(u). The integral can be evaluated further and expressed in terms of standard special functions, where K_n is the modified Bessel function of the second kind of order n. In Fig. 2 the distributions p_0(c) of Eq. (67) for the GOE (blue curves) and of Eq. (65) for the GUE (red curves) are plotted for the values λ = 0.1, 0.5, 2 of the mean coupling strength λ. We see that for small λ there is only a minor difference between GUE and GOE background.

Comparison
In Fig. 3 the distribution function of the overlap |⟨0|s⟩| of the evolved doorway state with the unperturbed doorway state is plotted for four different coupling strengths. The difference between the distributions for different background complexities is, however, rather small. This suggests a certain degree of universality of the curves. One might choose other ensembles for the background Hamiltonian, for instance semi-Poisson [16] or transition ensembles. However, we expect that for these ensembles, which lie between the two extreme cases, GUE and Poissonian, the corresponding distributions will also lie in the channel between the full red line (GUE) and the full blue line (Poissonian with real coupling). For the most interesting case of small λ this channel is narrow. On the other hand, the distributions are highly sensitive to a change in the coupling strength λ.
In Fig. 4 we compare the curves for p_0(c = |⟨0|s⟩|) obtained from the analytical results (in this case from Eq. (67)) with Monte Carlo simulations in the case of a real coupling to a GOE background. The figure shows p_0(c = |⟨0|s⟩|) for three values of the coupling strength, λ = 0.1, 0.5 and 2. We see fairly good agreement for all three values, even for the strong-coupling value λ = 2. This shows that the approximation implied in Eq. (10) and thereafter is justified far beyond the perturbative regime.

Discussion
The distribution of the maximum coupling coefficients in the doorway mechanism has been introduced as a new statistical observable. These coupling coefficients, that is, the overlaps between the eigenstates of the full Hamiltonian and the doorway state, are not always at our disposal. However, in situations where they are accessible, this distribution provides a highly sensitive measure for the interaction strength. Of particular interest is the regime of weak interactions. In this regime, the distribution of the maximum coupling coefficients is very well approximated by the distribution of the overlap between the evolved doorway state and the unperturbed doorway state. While calculating the former seems unfeasible at present, we calculated the latter exactly for regular and chaotic background states in the cases of preserved and fully broken time-reversal invariance. We performed our calculations in the framework of Random Matrix Theory, which is well known to provide reliable models for regular and chaotic systems. We also carried out numerical simulations which fully confirm our analytical results. Our exact calculations are of general interest for matrix models. We managed to reformulate a problem with broken rotation invariance in the space of N × N random matrices as a rotation-invariant problem involving (N + 1) × (N + 1) random matrices. This made it possible to map the matrix model in ordinary space onto a matrix model in superspace, which we solved by a saddle-point approximation in the limit of infinite level number. Remarkably, the supermatrix model is in the class of two-matrix models which show up in a large variety of situations. Part of this work was discussed at the Centro Internacional de Ciencias (CIC) in Cuernavaca, Mexico, March 2-6, 2009.

Appendix A. GUE Background with Finite Level Number
We define the average over the GUE, where the diagonal N × N matrix G is defined as G = diag(g′, 1, . . . , 1). Here g′ is related to the parameter of the main text as g′ = 1 + 2(u − 1)w²/v². The integration is over the set of all Hermitian N × N matrices, that is, over the GUE ensemble. The normalisation A_N is chosen such that the average of unity equals one. The task is to calculate this average. A Laplace expansion of the determinant yields a double sum over permutations, where S_N is the permutation group. Obviously only the terms with ω = ω′ contribute.
It is useful to expand the remaining sum in cycles involving the index 1. For indices k_n > 1 and k_n ≠ k_m we define the cycles C^(1). Since the indices 1, k_1, . . . , k_n do not appear in the remainder of the product, we can integrate over the remainder separately. The average over the cycles is simple as well: only terms involving the index 1 yield a factor different from w². The averages are independent of the indices k_n, and the sum over the indices yields the combinatorial factor (N − 1)!/(N − 1 − n)!. Altogether we obtain a closed expression. Evaluating this expression for g′ = 1 allows us to replace the sum. After some further simple manipulations we finally obtain a formula which is almost our final result. The remaining task is to evaluate the g′-independent constant F_N(1). This is facilitated by the observation (A.10), where the kernel K_{N+1}(x, y) is built from oscillator wave functions and H_n is the n-th Hermite polynomial. The constant K_N(0, 0) can also be evaluated explicitly. Of course, K_N(0, 0) is just the inverse level spacing at the center of the semicircle; therefore lim_{N→∞} K_N(0, 0)/√(2N) = 1/π. We obtain our final result (A.13), where c_N = 1 for N odd and c_N = 1 + 1/N for N even (A.14). The even-odd difference disappears in the large N limit; in the following we set c_N = 1. Using Eq. (37) we find the corresponding expression for Q(u). This exact result can be compared with Eq. (50) in order to find a differential equation for F_{N+1}(g′). This differential equation can easily be solved; however, the solution is highly complicated. It discourages any attempt to calculate the matrix integral Eq. (53) for finite N in the GOE case. To make contact with the results obtained in the main text, we introduce the function ρ(x). With the change of variables y = ρ(x) in the integral, we can write an expression which coincides with Eq. (63) in the large N limit and for β = 2.

Appendix B. Derivation of Eq. (41)
We define the function G_N(z), where H is an N × N GOE or GUE random matrix and the brackets denote the corresponding GOE or GUE average. G_N(z) is an analytic function in the cut complex plane C \ R⁻. In this Appendix we prove the identity (B.2), where H is an (N + 1) × (N + 1) GOE or GUE random matrix. The constant L_N^β is given in Eq. (42). We write the right-hand side of Eq. (B.2) in angle-eigenvalue coordinates H → U⁻¹EU, where E is an (N + 1) × (N + 1) diagonal matrix of the eigenvalues E_i of H. Since the average is over an invariant function, the integral over the diagonalizing group is trivial. The average on the right-hand side can then be written as in Eq. (B.3). The power of the Vandermonde determinant ∆_N(x) = Π_{i<j}(x_i − x_j) arises as the Jacobian of the coordinate transformation. The constant C_{(N+1),β} arising from the group integration can be found in Mehta's book [17]. Now the integral over the δ-distribution can be performed.
We see that the resulting integral can be written as a GOE (GUE) average over N × N matrices; this is indicated by using a reduced set of eigenvalues as integration variables. Going back to Cartesian coordinates U⁻¹EU → H, we find lhs = (N + 1) C_{(N+1),β} C_{(N),β} L_N^β ⟨|det H|^β (H² + z)^{−β/2}⟩_N = (N + 1) C_{(N+1),β} C_{N,β} L_N^β G_N(z).