Gaussian-process-regression-based method for the localization of exceptional points in complex resonance spectra

Resonances in open quantum systems depending on at least two controllable parameters can show the phenomenon of exceptional points (EPs), where not only the eigenvalues but also the eigenvectors of two or more resonances coalesce. Their exact localization in the parameter space is challenging, in particular in systems where the computation of the quantum spectra and resonances is numerically very expensive. We introduce an efficient machine learning algorithm to find EPs based on Gaussian process regression (GPR). The GPR model is trained with an initial set of eigenvalue pairs belonging to an EP and used for a first estimation of the EP position via a numerically cheap root search. The estimate is then improved iteratively by adding selected exact eigenvalue pairs as training points to the GPR model. The GPR-based method is developed and tested on a simple low-dimensional matrix model and then applied to a challenging real physical system, viz., the localization of EPs in the resonance spectra of excitons in cuprous oxide in external electric and magnetic fields. The precise computation of EPs, taking into account the complete valence band structure and central-cell corrections of the crystal, can be the basis for the experimental observation of EPs in this system.


Introduction
In open quantum systems, resonances can occur [1,2]. These are quasi-bound states which can decay. By introducing a complex scaling, e.g. according to Reinhardt [3], and thus non-Hermitian operators, the complex energy eigenvalues of the resonances can be calculated. Here, the real part represents their energy, while the imaginary part determines their lifetime. Depending on the studied problem, the computational effort required to calculate the eigenvalues and eigenvectors can be enormous.
In a parameter-dependent Hamiltonian, certain choices of the parameters can lead to the degeneracy of two or more resonances. A special case of this is the so-called exceptional point (EP), at which not only the eigenvalues but also the eigenstates degenerate [4]. Thus, n ≥ 2 resonances coalesce in the general case of an EPn. EPs have attracted much attention during recent years. They appear in PT-symmetric quantum systems [5,6], photonic systems [7-9], electronic circuits [10], atoms in external fields [11,12], and Bose-Einstein condensates [13], and have been applied for enhanced sensing [14,15]. A property of the most prominent case of an EP2 is that the two associated eigenvalues exchange their positions after one adiabatic orbit in parameter space around the EP. In experiments, external fields can often be used as conveniently controllable parameters for the manipulation of resonance spectra. In 2007, the existence of these EPs was proven numerically for the hydrogen atom in electric and magnetic fields [11], however, at field strengths much higher than laboratory fields. Due to limitations especially in the achievable magnetic field strengths, EPs in the hydrogen atom have not yet been observed experimentally.
In 2014, a remarkable discovery by Kazimierczuk et al. [16] revealed a hydrogen-like spectrum up to a principal quantum number of n = 25 in cuprous oxide (Cu2O). These spectral features are caused by excitons, quasi-particles in a semiconductor consisting of an electron and a hole. To a first approximation, they closely resemble their atomic counterpart, the hydrogen atom. However, the fact that the excitons are embedded in the Cu2O crystal necessitates consideration of the details of the valence band structure to precisely describe the observed spectrum.
This experiment opened up research into giant Rydberg excitons. While the achieved principal quantum numbers are far lower than those reached in atomic systems, the modified material parameters in the crystal still lead to the strongly magnified properties known from Rydberg atoms. Similarly, for Cu2O the field strengths needed to observe EPs of resonances with small quantum numbers are much lower than those needed in the hydrogen atom, which is why it is more favorable to search for EPs in this system.
Under suitable conditions, the search for EPs can be reduced to a root search problem for the distance between the eigenvalues forming the EP. Existing methods to find EPs are the three-point method [17] and the octagon method [18]. They are based on a Taylor expansion of the spacing between the eigenvalues in the local vicinity of the EP, up to linear or quadratic order in the parameters, respectively. They work similarly to a Newton algorithm, iteratively generating improved estimates of the EP by finding the roots of these linear or quadratic approximations. They thus may fail if the starting points are not in close proximity to the EP. The numerical search for EPs in reference [18] has been applied to a hydrogen-like model of excitons in external fields; however, for a detailed comparison of experimental and theoretical results in cuprous oxide, the above-mentioned band structure terms need to be considered. This complexity drastically increases the computational cost of each diagonalization of the Hamiltonian, and thus methods using a large number of such diagonalizations cannot be applied. Hence, an improved method is required to accurately and efficiently identify EPs in complex systems like Cu2O with external fields.
Inspired by the remarkable advances in machine learning, especially within the realm of physics, a novel method based on Gaussian process regression (GPR) [19] is developed. As a prominent member of the supervised machine learning family, GPR serves as a powerful and innovative approach to predict the positions of EPs. The data used to train a GPR model are obtained from simulations; hence, the error is only due to numerical inaccuracies. Unlike neural networks, GPR offers the advantage that the provided training points are reproduced almost exactly, up to numerical accuracy, which is a key motivation for its utilization. Also, unlike kernel ridge regression, which is deterministic [20], the GPR model provides a prediction and also a corresponding variance at chosen points, both of which are used in our application of the method. The basic idea is to perform the root search over the cheaply calculated output of a GPR model, instead of using expensive exact diagonalizations or low-precision linear or quadratic Taylor approximations. Yet, the optimization of the searching process goes beyond the mere usage of machine learning. An efficient algorithm is devised to enhance the search for EPs in Cu2O, which contributes to the discovery of promising EP candidates and thus enables a possible experimental verification of these data. As a preparatory step, we introduce the stepwise grouping algorithm to detect the presence of EPs via their characteristic exchange behavior [21] in a dataset of parameter-dependent eigenvalue spectra, and to filter out the associated eigenvalues. The obtained data are then used as an initial training set for the GPR model, which learns the relationship between parameter values and eigenvalues. After the root search over the model output, which produces a first estimate of the EP position, we calculate the exact eigenvalues at the predicted parameters of the EP. An improved model is then obtained by appending the additional new eigenvalue pair to the training data. This is repeated until the desired precision is achieved. The procedure is more efficient than existing methods, since the model keeps the information from the initial data set and from previous iteration steps, and can thus approximate the eigenvalue distance very well after only very few iterations. Furthermore, the iterative process can start at a rather large distance from the true solution and needs fewer time-consuming exact diagonalizations.
The paper is organized as follows. In Section 2.1 we give a general background to resonances and EPs in quantum systems. In Section 2.2 we introduce our GPR-based method for localizing EPs, which is then applied in Section 3 to excitons in Cu2O in parallel electric and magnetic fields. We conclude with a summary and outlook in Section 4.

Resonances and exceptional points in quantum mechanics
Quantum states of real-world systems are rarely perfectly bound. External fields and coupling to the environment create decay channels, leading to quasi-bound states with limited lifetimes. These states are called resonances. Mathematically, they are conveniently described using complex energies Ẽ = E − iΓ/2, with the spectral energy position E and the FWHM linewidth Γ, so that the imaginary part is proportional to the decay rate. When applying the time evolution operator to a state with such a complex energy, the resulting probability density decays exponentially with the decay rate Γ/ℏ.

[Figure 1: (a) The EP of the matrix (2) is encircled in the parameter space κ. Each point in the complex plane is described by the angle ϕ in Euler form as given in Equation (6). (b) This leads to an exchange of the positions of the two eigenvalues in the complex energy plane. The eigenvalues calculated for each point on the circle are marked by the respective color of the color bar indicating the angle ϕ, in order to illustrate the path of each eigenvalue and the associated permutation.]
Since Hermitian operators only have real eigenvalues, these complex energies cannot be straightforwardly calculated as the eigenvalues of a standard quantum mechanical Hamiltonian, but they can be computed with an appropriate non-Hermitian modification of the Hamiltonian, e.g., by application of the complex-coordinate-rotation method [2,3,22].
In non-Hermitian quantum mechanics the eigenstates of the Hamiltonian are not necessarily orthogonal, and in the case of degenerate states it is even possible that not only the eigenvalues but also the eigenvectors of two or more resonances coalesce. These degeneracies are called EPs [4,23]. In order to visualize an EP2, where two resonances form an EP, and its behavior, a linear non-Hermitian map represented by the two-dimensional matrix (2) [4] can be used as a simple example with κ ∈ C. Despite being a straightforward illustration, all significant aspects of EPs can be demonstrated, since the close vicinity of an EP in an n-dimensional system can be expressed by a two-dimensional problem [23]. Calculating the eigenvalues of Equation (2) yields Equation (3), which shows that the eigenvalues are two branches of the same analytic function in κ. They coalesce at κ± = ±i, which means that there exist two EPs in this system. The corresponding eigenvectors (4) also coalesce at κ± = ±i, so that at each EP there is only one linearly independent eigenvector (5). The existence of an EP in the system (2) can be revealed by a characteristic exchange behavior of the eigenvalues in the complex plane when encircling the EP in the parameter space [21]. In the system (2) this exchange behavior can be visualized by the parameterization (6), which describes a circle with radius ϱ around the EP κ+ = i. This orbit is shown in Figure 1a, where the color bar denotes the angle ϕ. The permutation of the eigenvalues in the complex energy plane can be seen in Figure 1b: after one closed loop in the parameter space the eigenvalues exchange their positions. The path of each eigenvalue is depicted by the colors of the color bar. Due to the simplicity of this example, the permutation can also be shown analytically by substituting the circular parameterization (6) into the eigenvalues from Equation (3), which leads to Equation (7). After one closed loop in the parameter plane, i.e. ϕ = 2π, λ1 changes to λ2 and vice versa, which is exactly the behavior shown in Figure 1. Due to the small radius, both paths of the eigenvalues in the complex energy plane form a half circle; for larger radii or elliptical orbits this shape can be much more complex. EPs can occur in open physical quantum systems depending on at least a two-dimensional parameter space. However, for a high-dimensional non-Hermitian Hamiltonian their localization is a very nontrivial task.
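The exchange behavior can also be reproduced numerically. The following sketch assumes the common two-dimensional example matrix M(κ) = ((1, κ), (κ, −1)), whose eigenvalues ±√(1+κ²) coalesce at κ = ±i as described above; the concrete matrix entries are an assumption for illustration, not quoted from Equation (2):

```python
import numpy as np

def eigenvalues(kappa):
    # eigenvalues of the assumed example matrix [[1, kappa], [kappa, -1]]
    M = np.array([[1.0, kappa], [kappa, -1.0]], dtype=complex)
    return np.linalg.eigvals(M)

rho = 0.1                                   # radius of the circle around kappa = i
phis = np.linspace(0.0, 2.0 * np.pi, 200)
prev = eigenvalues(1j + rho)                # starting point at phi = 0
path = []
for phi in phis:
    lam = eigenvalues(1j + rho * np.exp(1j * phi))
    # follow each branch continuously by nearest-neighbor matching
    if (abs(lam[0] - prev[0]) + abs(lam[1] - prev[1])
            > abs(lam[1] - prev[0]) + abs(lam[0] - prev[1])):
        lam = lam[::-1]
    path.append(lam)
    prev = lam

# after one closed loop the two branches have exchanged their positions
print(np.allclose(path[0], path[-1][::-1]))
```

For this small radius the discrete nearest-neighbor tracking is unambiguous, and the printed check confirms the permutation of the two branches.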
To systematize the localization of EPs, we first reduce the task to a two-dimensional root search problem. We start with a dataset of parameter-dependent eigenvalue spectra in which the characteristic exchange behavior has been detected. We then consider the complex eigenvalues λ1 and λ2, depending on the complex control parameter κ, which are associated with a specific EP, and introduce the analytic complex functions p (Equation (9)) and s (Equation (10)), with p = 0 exactly at the EP. To ensure analyticity, the eigenvalue difference is squared in the variable p, owing to the square-root behavior of an EP2. The EP can now be localized via a root search for p = 0. For our iterative procedure, we also want to keep the information given by the centroid s, since p and s together uniquely determine the eigenvalue pair, and thus help select the correct eigenvalue pair among all eigenvalues of a given diagonalization, as described below in Section 2.2.3.
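As a minimal sketch, these two quantities can be computed from an eigenvalue pair as follows; the exact normalization in Equations (9) and (10) is not reproduced here, so the prefactors are an assumption (any fixed convention with p = 0 at the EP works for the root search):

```python
def p_and_s(lam1, lam2):
    """Analytic pair built from the two eigenvalues of a candidate EP."""
    # squared (half-)difference: analytic across the EP despite the square root
    p = (lam1 - lam2) ** 2 / 4.0
    # centroid of the eigenvalue pair
    s = (lam1 + lam2) / 2.0
    return p, s
```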
Existing methods for the localization of EPs include the three-point method [17] and the octagon method [18]. Both methods use a Taylor expansion to approximate p, with the difference that the three-point method only uses terms up to linear order, while the octagon method also considers quadratic terms. In particular, the three-point method converges towards the EP only if the starting points are in the close vicinity of the EP, similar to Newton's method. The main goal of this paper is to develop a method for the localization of EPs which, on the one hand, is not restricted to the local vicinity of the EP but acts more globally in the parameter space, and, on the other hand, requires a low number of exact diagonalizations to calculate the eigenvalues, as this is the most time-consuming and computationally expensive step. To this end we resort to GPR in what follows.

GPR-based method to find EPs
The GPR-based method for the localization of EPs shall be applicable to high-dimensional systems, which are much more challenging than the simple model (2). To address this complexity, a complex-symmetric five-dimensional matrix (11) is introduced, which depends on a complex parameter κ and complex matrix elements M_ij, 1 ≤ i ≤ 5, 3 ≤ j ≤ 5, i ≤ j, whose real and imaginary parts are random numbers. This matrix model, which has no analytic solution, is illustrated in Figure 2, showing the parameter plane with the orbit around the EP, highlighted in green, as well as the energy plane. Here, the computational effort to diagonalize the matrix and hence obtain the eigenvalues is low, which is why the EP can be found by performing a two-dimensional root search on p without the GPR model, where only the two eigenvalues belonging to the EP are considered. The result is marked as a green cross in the parameter plane. Due to its dimensionality, each diagonalization yields five eigenvalues. In the model (cf. Figure 2), two eigenvalues lie within the visible permutation of the two eigenvalues of the EP. All eigenvalues not belonging to an EP form a closed orbit similar to the one in parameter space. Given that the path encircling the EP is relatively large and the EP is not located at its center, the path of the two permuting eigenvalues exhibits significantly greater complexity than in the relatively straightforward two-dimensional example depicted in Figure 1. As the GPR model requires the two eigenvalues associated with the EP, the observed permutation needs to be distinguished from the other eigenvalues.

[Figure 2: The five-dimensional matrix model visualized in its parameter and energy planes. The EP, found via a two-dimensional root search on p without a GPR model, is marked as a green cross. Five eigenvalues are visible; their course when the EP is encircled can be traced via the color bar representing the angle ϕ. The model shows one permutation, indicating the existence of an EP; its shape is complex because of the large radius and because the EP is not at the center of the orbit.]
The GPR method is first presented using the five-dimensional matrix model to verify its functionality as well as its accuracy. The method requires as a starting point a set of eigenvalue pairs calculated along a closed path in the parameter space encircling the EP, which exhibit the characteristic permutation of the two eigenvalues. This is achieved with the stepwise grouping algorithm described in Section 2.2.1. The centroids and squared differences s and p (see Equations (9) and (10)) of these eigenvalue pairs are used as the initial training set for a GPR model, which yields a first estimate of the EP position in parameter space via a two-dimensional numerical root search. This step is introduced in Section 2.2.2. The estimate of the EP is then improved iteratively. To this end, an exact diagonalization of the Hamiltonian is performed at the estimated EP position. The eigenvalue pair belonging to the EP is filtered out of the complete set of eigenvalues and added to the training set for an improved GPR model, which yields an improved estimate of the EP position via a numerical root search. The steps of this loop are discussed in Section 2.2.3. The iteration stops when one of the convergence criteria presented in Section 2.2.4 is fulfilled.
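The loop described above can be summarized in schematic pseudocode; the helper names are hypothetical placeholders for the steps detailed in Sections 2.2.1 to 2.2.4, not a runnable implementation:

```python
# Non-runnable pseudocode; helper names are hypothetical placeholders.
def locate_ep(initial_pairs, initial_kappas, diagonalize, max_iter=20):
    training = list(zip(initial_kappas, initial_pairs))
    for _ in range(max_iter):
        model_p, model_s = train_gpr_models(training)      # Section 2.2.2
        kappa_ep = root_search(model_p)                    # root of p = 0
        eigenvalues = diagonalize(kappa_ep)                # expensive exact step
        pair = select_ep_pair(eigenvalues, model_p,        # Section 2.2.3
                              model_s, kappa_ep)
        training.append((kappa_ep, pair))
        if converged(training, pair):                      # Section 2.2.4
            return kappa_ep
    return kappa_ep
```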

Stepwise grouping algorithm
The starting point for the search for an EP is a set of eigenvalue spectra calculated along a closed path in the parameter space. Along the closed path, most eigenvalues start at and return to the same point, except for those eigenvalue pairs belonging to an EP. In large data sets it is a nontrivial task to select these eigenvalue pairs. By application of the stepwise grouping algorithm, we sort the eigenvalues into groups in such a way that each resulting group consists of the parameter-eigenvalue pairs tracing a single parameter-dependent path through the eigenvalue plane. An EP is then characterized by a path where the initial and final points do not coincide, i.e., two eigenvalues exchange their positions along that path. Selecting such paths yields an initial set of eigenvalue pairs belonging to an EP.
In our application, the numerically calculated eigenvalues are initially sorted by their real parts. For a given parameter κ_i, let λ(κ_i) denote the corresponding set of eigenvalues obtained from the diagonalization. In the limit where the {κ_i} constitute a continuous curve, λ({κ_i}) gives a set of continuous paths {λ_j({κ_i})} through the eigenvalue space. In practice, we have a finite set of discrete values of κ_i. For the stepwise grouping algorithm, we suppose that this set, while finite and discrete, still covers a closed loop in the parameter space sufficiently densely that for every path j_0, the eigenvalue λ_{j_0}(κ_{i+1}) is closer, according to some distance measure, to λ_{j_0}(κ_i) than the eigenvalues λ_j(κ_{i+1}) belonging to other paths j ≠ j_0 are. Starting from a given point κ_0 in the parameter space, the algorithm then follows the loop, in each step appending to each path the eigenvalue closest to the one from the previous step. Care must be taken if an eigenvalue would be the preferred continuation of multiple paths. In this case, the algorithm appends the value to the path where the distance is smallest and renews the sorting for the other eigenvalues at parameter κ_{i+1}. To define a suitable distance measure between eigenvalues, multiple options exist. A simple option is the Euclidean distance d_e = |λ_m − λ_n| in the complex plane, which is used for the matrix model. A more sophisticated approach, which additionally uses information contained in the eigenstates, is utilized for the application to cuprous oxide (see Section 3). We can form a vector Ψ in which we combine the complex eigenvalues with additional admixtures of quantum numbers characterizing the state. To obtain a distance measure, we can then use the cosine similarity, with M the dimension of the vectors Ψ. The cosine similarity is subtracted from 1 to obtain a measure that is small when the vectors are similar. Note that, in order to achieve an accurate sorting of resonances, a sufficient number of points must be computed along the orbit to avoid significant variations in the eigenvalues between consecutive points. The exact number of points required depends on the radius of the orbit. When applied to the five-dimensional matrix model in Figure 2, the algorithm selects the one pair of eigenvalues that exchange along the closed path in the parameter space.
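A compact sketch of the stepwise grouping idea follows. For brevity, conflicting continuations are resolved here by a globally optimal one-to-one assignment (Hungarian algorithm) rather than by the sequential re-sorting described above; for densely sampled loops both give the same grouping. Names and the Euclidean distance measure are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def group_paths(spectra):
    """spectra: list over loop points; each entry holds all eigenvalues there."""
    paths = [[lam] for lam in spectra[0]]
    for lams in spectra[1:]:
        prev = np.array([path[-1] for path in paths])
        lams = np.asarray(lams)
        # Euclidean distances between previous path ends and new eigenvalues
        dist = np.abs(prev[:, None] - lams[None, :])
        # globally optimal one-to-one continuation (resolves conflicts)
        rows, cols = linear_sum_assignment(dist)
        for r, c in zip(rows, cols):
            paths[r].append(lams[c])
    return paths

def exchanged_pairs(paths, tol=1e-8):
    """Index pairs whose start and end points swap, signalling an EP."""
    pairs = []
    for i, pi in enumerate(paths):
        for j in range(i + 1, len(paths)):
            pj = paths[j]
            if (abs(pi[0] - pj[-1]) < tol and abs(pj[0] - pi[-1]) < tol
                    and abs(pi[0] - pi[-1]) > tol):
                pairs.append((i, j))
    return pairs
```

Applied to spectra sampled along a closed loop, `exchanged_pairs` returns exactly the index pairs whose endpoints permute, i.e. the candidates for an EP.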

Model training
We want to interpolate numerically calculated values of the functions p and s defined in Equations (9) and (10), which are obtained via expensive diagonalizations. Since there is no experimental or other source of noise, we do not want to treat the training data as an approximation to some underlying function contaminated with small errors. Instead, we want to treat the training data as exact and interpolate to other input values. We are thus looking for a method that faithfully extends a set of functions from a known set of arguments to other input values. GPR-based methods are a much better choice for these requirements than, e.g., neural networks, where predicted values at training points can deviate significantly from the given values. By construction, they treat the data as known observations in a Bayesian sense, modulo some uncertainty σ_n ∼ 10⁻⁶ (see Equations (A.10), (A.14) and (A.16) in Appendix A), which is negligibly small for our application, and output a conditional distribution of fit functions. Other methods would potentially also fulfill our requirements and would probably lead to similar results. For example, assuming a suitable choice of kernel and hyperparameters, the posterior mean function obtained by GPR is the same as the fit obtained via kernel ridge regression [24]. GPR also has the advantage of allowing hyperparameter optimization via the log marginal likelihood, instead of requiring a grid search, and it provides a variance for each prediction. This variance, which corresponds to the model uncertainty about the prediction, is incorporated in the method as outlined in Section 2.2.3. The theoretical background and implementation of the GPR method are described in Appendix A. For our kernel, we use the Matérn class as given in Equation (A.10) with ν = 5/2. The other hyperparameters are optimized during the training step as outlined in Appendix A.5.
The selected set of eigenvalue pairs is used to train two GPR models in such a way that the two functions p, s ∈ C, defined in Equations (9) and (10), can be predicted for chosen values of κ, as mappings R² → R².
Due to the coalescence of the eigenvalues, p should be zero at the EP. Performing a two-dimensional root search on the model output p yields a prediction κ_EP for the EP. The predictions of s will be used in a later step. For the five-dimensional matrix model (11), the obtained value of κ_EP is already close to the exact position of the EP. It is marked in Figure 3a (dots in the small square window) and as a violet dot (first iteration step) in the enlarged area in Figure 3c.
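The training and root-search steps can be sketched with scikit-learn as one possible GPR implementation; the kernel, jitter, and optimizer settings below are plausible choices for illustration, not the exact values used in the paper:

```python
import numpy as np
from scipy.optimize import root
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, Matern

def train_model(kappas, values):
    """Fit one GPR model mapping R^2 (kappa plane) to R^2 (complex p or s)."""
    X = np.column_stack([kappas.real, kappas.imag])
    Y = np.column_stack([values.real, values.imag])
    kernel = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=2.5)
    gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-10,
                                   n_restarts_optimizer=5, normalize_y=True)
    gpr.fit(X, Y)
    return gpr

def predict_ep(gpr_p, kappa0):
    """Two-dimensional root search p = 0 over the cheap model output."""
    sol = root(lambda x: gpr_p.predict(x.reshape(1, 2)).ravel(),
               x0=[kappa0.real, kappa0.imag])
    return complex(sol.x[0], sol.x[1])
```

Every evaluation inside the root search queries only the trained model, so no additional exact diagonalizations are needed for the estimate itself.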

Iterative improvement of the GPR-model
To construct an iterative process, an exact diagonalization is performed at the predicted parameter value. However, the identification of the two eigenvalues associated with the EP among all obtained eigenvalues in each iteration is not a straightforward task for higher-dimensional systems.
The diagonalization of a five-dimensional matrix yields a total of five eigenvalues, but only the ones related to the EP warrant selection. In contrast to the problem of finding the eigenvalues belonging to the EP among the initial eigenvalue set in Section 2.2.1, there is no permutation pattern to guide the selection process here. To find the two eigenvalues associated with the EP in each iteration step, a similarity measure based on a Gaussian distribution is introduced in Equation (14). Here, the value of s in Equation (10) plays a significant role. The model predictions p and s at the predicted κ point are considered as µ and compared to the exact eigenvalues of the matrix at this κ point. The values σ² are obtained as the model uncertainties in the prediction given by the GPR method, see Equation (A.15) in Appendix A.5. To perform this comparison, all possible pairs of eigenvalues are formed (10 eigenvalue pairs in the case of the five-dimensional matrix model (11)), and their respective p and s values are calculated; they take the part of x in Equation (14). As the exponential function is monotonic, it is sufficient to consider the exponent, yielding the pair-discrepancy c, which is small when there is good agreement between the model prediction and the exact eigenvalue pair. The index 'm' indicates a model-predicted value and σ² is the variance obtained from the GPR model for the respective prediction. This pair-discrepancy is calculated for all possible eigenvalue pairs and the one with the lowest value is chosen.
Ideally, there should be a large gap between the smallest and second smallest c value to make sure that the correct eigenvalue pair is selected. Applying this similarity measure to the matrix model in Figure 2 after the first training step leads to the c values shown in Figure 4. As visible, there is indeed a large gap between the smallest and second smallest c value. This observation strongly suggests that the eigenvalue pair with the lowest c value is indeed the one associated with the EP. The new pair of eigenvalues is used as an additional training point for the GPR model. Repeated iterations of this procedure yield a highly precise model, consequently enabling the determination of the precise location of the EP.
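The selection step can be sketched as follows; the normalization of p and s is an assumption (any fixed convention works as long as model and candidates use the same one), and the variances would come from the GPR predictions:

```python
import itertools
import numpy as np

def pair_discrepancy(lam_pair, p_m, s_m, var_p, var_s):
    """Exponent of the Gaussian similarity measure for one candidate pair."""
    lam1, lam2 = lam_pair
    p = (lam1 - lam2) ** 2 / 4.0        # assumed convention for p and s
    s = (lam1 + lam2) / 2.0
    return abs(p - p_m) ** 2 / var_p + abs(s - s_m) ** 2 / var_s

def select_ep_pair(eigenvalues, p_m, s_m, var_p, var_s):
    """Pick the eigenvalue pair that best matches the model prediction."""
    pairs = list(itertools.combinations(eigenvalues, 2))
    c = [pair_discrepancy(pair, p_m, s_m, var_p, var_s) for pair in pairs]
    return pairs[int(np.argmin(c))]
```

Including s in the discrepancy is essential: several pairs may have a similar difference, but the centroid distinguishes them.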
For the five-dimensional matrix model, the whole iterative process is depicted in Figures 3a and 3b. The initial training set in this case consists of about 20 points. The GPR model slowly approaches the EP until it converges after nine training steps. The Euclidean distance between the exact EP and the last prediction is d_e = 5.526 × 10⁻⁶. In order to explore the energy plane and thus give more information to the GPR model, two attempts are made to both improve convergence and reduce the number of diagonalizations: an additional training point is added to the training set by calculating the distance between the last two predictions, adding it to the last prediction, and computing the exact eigenvalues at this new κ point. If this is executed after the second iteration, not only does the convergence improve significantly, but the number of diagonalizations is also reduced, both visible in Figure 3c. The number of training steps is decreased to three, i.e. four diagonalizations, and the Euclidean distance to the exact EP is d_e = 1.342 × 10⁻⁶. Adding the extra training point after the third iteration can be seen in Figure 3d. Here, no improved convergence or reduced number of diagonalizations is observable. Compared to the original training process, the Euclidean distance is almost identical, with d_e = 5.537 × 10⁻⁶.

Convergence criteria
To terminate the iterative procedure, it is essential to establish a convergence criterion. The eigenvalues λ_K of the covariance matrix K can be calculated for each training iteration. The parameter space (i.e. the κ-space) is the input space of the kernel function and thus of the covariance matrix. As the number of κ values increases, the eigenvalues decrease. If there are many training points, especially if they are close together, a drop in the eigenvalues is visible. An interpretation of this drop is that the model has effectively already seen the new training point, which thus yields no significant additional knowledge. The data utilized here originate from the training process in which an extra training point is incorporated after the second iteration. A drop (from O(10⁻⁴) to O(10⁻¹⁰)) in the kernel eigenvalues appears in the last training step in Figure 5a. This deviation from the previous training step facilitates the establishment of a threshold. It is important to note that this criterion does not provide insights into the convergence towards the EP itself.
At the EP, the difference of the two eigenvalues ∆λ = |λ1 − λ2| should be zero due to their degeneracy. Because of the square-root behavior (see Equation (3)), the gradient is infinite at the EP. As a consequence, even slight changes in the κ value can lead to significant variations in the difference between the eigenvalues. This strong dependency poses a challenge in identifying an appropriate threshold value for the eigenvalue difference as a convergence parameter. However, this is a criterion directly related to a property of an EP. In Figure 5b, the eigenvalue difference ∆λ is plotted as a function of the number of training steps. Here, the eigenvalue difference of the initial training set is calculated from the eigenvalue pairs at the training points κ_i, where i denotes the index of the i-th training point on the orbit. As visible, the eigenvalue difference decreases strictly monotonically, which verifies the model's convergence towards the EP.
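Both stopping tests can be sketched as follows; the drop factor is an illustrative choice, not a value fixed by the text:

```python
import numpy as np

def kernel_eigen_drop(K_prev, K_new, factor=1e4):
    """True if the smallest covariance-matrix eigenvalue dropped sharply,
    i.e. the newest training point adds (almost) no new information."""
    lam_prev = np.linalg.eigvalsh(K_prev).min()
    lam_new = np.linalg.eigvalsh(K_new).min()
    return lam_new < lam_prev / factor

def eigenvalue_distance_converged(lam1, lam2, threshold):
    """True if the two resonances are degenerate to within the threshold."""
    return abs(lam1 - lam2) < threshold
```

A near-duplicate training point makes two rows of K almost identical, which pushes the smallest eigenvalue towards zero and triggers the first criterion.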

Results for exceptional points in Cu2O
In this section, we discuss the application of our GPR-based method to a specific physical system and reveal the existence of EPs in exciton spectra of cuprous oxide in external electric and magnetic fields. In semiconductors like Cu2O, excitons are created when an external photon lifts an electron from the valence band to the conduction band, leaving a hole behind. The resulting electron-hole pair interacts via the Coulomb interaction, leading to bound states that approximately behave like the states known from the hydrogen atom, but with modified material parameters. Such a hydrogen-like model reproduces the qualitative and some of the quantitative features visible in experiment. When the effects of the crystal structure are additionally taken into account, the resulting model is accurate enough to allow for line-by-line comparisons with experimental spectra [25,26]. If external electric and magnetic fields are added to the system, resonances with finite lifetimes can occur. Tuning the field strengths can then lead to the appearance of EPs. More technical details about the system and the numerical computation of the resonance states are given in Refs. [26-28] and, for the convenience of the reader, also in Appendix B.

Application of the GPR-based method
To minimize the number of diagonalizations and thus the computational cost, the GPR-based method is now applied to the Hamiltonian of Cu2O. The Hamiltonian depends on the magnetic and electric field strengths, γ and f, respectively. In analogy to the matrix models described in Section 2, the Hamiltonian can be parameterized by a complex parameter κ = e^{iϕ}, which is related to the field strengths via an ellipse parameterization with ϱ = ∆γ/γ_c = ∆f/f_c. These fields are aligned parallel to the symmetry axis [001] of Cu2O. Here, κ lies on the unit circle in the complex plane and ϱ is the so-called relative radius, so the variations of the field strengths, ∆γ and ∆f respectively, depend on the center of the ellipse (γ_c, f_c). Due to this representation, the field strengths on the ellipse can be converted to their respective points on the unit circle in the complex plane, which are used to train the GPR model.
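The mapping from the angle ϕ on the unit circle to the field strengths can be sketched as follows, assuming the standard ellipse parameterization γ = γ_c + ∆γ cos ϕ, f = f_c + ∆f sin ϕ; the phase convention of the elided equation may differ, and the numerical values below are illustrative:

```python
import numpy as np

def fields_on_ellipse(phi, gamma_c, f_c, rho):
    """Map the angle phi on the unit circle to field strengths on the ellipse."""
    dgamma, df = rho * gamma_c, rho * f_c    # rho = dgamma/gamma_c = df/f_c
    gamma = gamma_c + dgamma * np.cos(phi)   # magnetic field strength
    f = f_c + df * np.sin(phi)               # electric field strength
    return gamma, f
```

With ϱ = 0.06 and an assumed center γ_c ≈ 1.296 T, this yields ∆γ ≈ 77.7 mT, of the order quoted below for the large ellipse.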
An ellipse with a relative radius of ϱ = 0.06 is drawn in the field plane, which corresponds to ∆γ = 77.733 mT and ∆f = 5.1966 V cm⁻¹. This ellipse and the observed permutations in the energy plane are depicted in Figure 6a. As a result of the large radius, there is a substantial overlap of resonances belonging to an EP and other resonances. Consequently, when plotting all eigenvalues for each angle ϕ (cf. Figure 6a), only the leftmost permutation is visually distinguishable. Applying the stepwise grouping algorithm yields all permutations shown in Figure 6b. It therefore functions as a powerful tool to efficiently filter these permutations, facilitating the discovery of additional EPs, and thus generates a greater amount of training data for a single closed loop in the parameter space. To avoid possible issues with the stepwise grouping algorithm, the orbit was discretized using 400 points to ensure that the eigenvalues and their respective quantum numbers change minimally between steps. This is of course computationally expensive, but the overall computational workload is nevertheless reduced drastically due to the discovery of additional EPs. For the application of GPR, the number of points is reduced in order to prevent numerical inaccuracies leading to negative kernel eigenvalues. The initial training set for all EPs discussed in this section thus consists of about 40-60 points.
The leftmost permutation in Figure 6b is taken as the initial training set to study the convergence of the method. Figure 7 visualizes the result of the iterative process. To verify its accuracy, a smaller ellipse with a relative radius of ϱ = 0.003, corresponding to ∆γ = 3.741 mT and ∆f = 0.266 V cm⁻¹, is drawn around the EP.
The GPR-based method applied to the large ellipse converges after ten training steps and leads to the EP position

γ_EP,lr = 1.246612 T,  f_EP,lr = 88.728936 V cm⁻¹,  (18)

similar to the prediction obtained with the small radius in Figure 8,

γ_EP,sr = 1.246554 T,  f_EP,sr = 88.728730 V cm⁻¹.
The indices 'lr' and 'sr' represent the large and small radius, respectively. Hence, the predicted field strengths are identical at least up to and including the fifth significant digit, and the Euclidean distance is only d_e = 2.140 × 10⁻⁴. Thus, the iterative process converges and yields a fairly accurate prediction of the position of the EP even for a large radius.

Figure 9. Convergence of the EP by means of the kernel eigenvalues λ_K and the eigenvalue difference ∆Ẽ of the two eigenvalues belonging to the EP for the training process in Figure 8. The significant drop in the kernel eigenvalues from order 10⁻⁶ to 10⁻⁹ is visible. This drop is accompanied by a very small change in the eigenvalue difference, suggesting the convergence of the method.
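The quoted Euclidean distance between the large-radius and small-radius predictions follows directly from the field values, treating (γ, f) as dimensionless coordinates:

```python
import math

# predictions from the large ('lr') and small ('sr') radius, cf. Eq. (18)
gamma_lr, f_lr = 1.246612, 88.728936
gamma_sr, f_sr = 1.246554, 88.728730

d_e = math.hypot(gamma_lr - gamma_sr, f_lr - f_sr)
print(f"{d_e:.3e}")  # -> 2.140e-04
```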
A look at the two convergence criteria in Figure 9 supports the assumption of a converged training process. Not only does the drop in the kernel eigenvalues λ_K from order 10⁻⁶ to 10⁻⁹ occur for the small ellipse, but the eigenvalue distance ∆Ẽ of the two eigenvalues associated with the EP also decreases significantly during the iterative process. Furthermore, the drop is accompanied by only a minimal change in the eigenvalue difference, indicating the convergence of the method. Since the exact position of the EP is unknown, it is not possible to determine the accuracy of the last prediction exactly. However, the deviation only in the sixth significant digit in Figure 8, as well as the small eigenvalue difference, indicates a very accurate result.
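The combined convergence criterion can be sketched as follows; the threshold values `drop_factor` and `rel_change` are illustrative assumptions, not the values used in the paper:

```python
def has_converged(lambda_K_history, delta_E_history,
                  drop_factor=1e2, rel_change=0.1):
    """Combined convergence criterion (illustrative thresholds).

    lambda_K_history: smallest kernel eigenvalue after each training step.
    delta_E_history: |E1 - E2| of the EP eigenvalue pair after each step.
    Converged when the smallest kernel eigenvalue drops by at least
    `drop_factor` between consecutive steps while the eigenvalue
    difference changes only little."""
    if len(lambda_K_history) < 2:
        return False
    kernel_drop = lambda_K_history[-2] / lambda_K_history[-1] >= drop_factor
    small_change = (abs(delta_E_history[-1] - delta_E_history[-2])
                    <= rel_change * abs(delta_E_history[-2]))
    return kernel_drop and small_change
```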
All EPs found with the GPR-based method are listed in Table 1. The method yields the positions and eigenvalues of the EPs with high accuracy and may serve as a starting point for an experimental verification of EPs in cuprous oxide. However, it should be noted that the accuracy of the results also depends on the size of the basis set and on the convergence parameters for the diagonalization of the cuprous oxide Hamiltonian (B.1) with external fields, as discussed in Appendix B.

Conclusions and outlook
In open quantum systems, resonances with a finite lifetime can occur. There are EPs, where two resonances coalesce, i.e. their eigenvalues as well as their eigenvectors degenerate. It is hard to pinpoint them, and additionally, in systems like Cu₂O, the required diagonalizations of the Hamiltonian are computationally expensive. GPR serves as a powerful machine learning method to optimize the whole search process. The GPR-based method was developed gradually by means of matrix models. An observed permutation of the eigenvalues belonging to the EP was used as the initial training set, since it confirms the existence of an EP inside the orbit. In higher-dimensional systems, it is not straightforward to select the eigenvalues that perform the permutation due to overlap with other resonances. A challenge was to extract these eigenvalues, so the stepwise grouping algorithm was developed to solve the initial-training-set problem. The underlying principle of this approach is to determine the eigenvalue corresponding to the next angle on the orbit for a given resonance. This is accomplished by selecting the eigenvalue that exhibits the closest proximity to the eigenvalue of the current angle, utilizing a suitable distance metric. The grouping algorithm allows filtering the eigenvalues corresponding to the EP and hence simplifies obtaining the initial training set. For all parameter values, e.g. field strengths, the centroid and the difference of the eigenvalues can be calculated. These are passed to the GPR model together with the field strengths. Thus, the latter provides a prediction of these quantities as a function of the field strengths. Due to the degeneracy at the EP, the eigenvalue difference has to be zero at this point, which is why a two-dimensional root search applied to the model provides a prediction for the position of the EP in the field plane. A diagonalization with the predicted field strengths was performed to obtain the exact eigenvalues. These were added to
the training set to improve the prediction of the model. Hence, an iterative process was established. In higher-dimensional systems, diagonalization involves computing more than just the two eigenvalues associated with the EP. Selecting the correct eigenvalues in each iteration was achieved by introducing a similarity measure, which compares the model prediction with the exact eigenvalues. The smallest value of the similarity measure should correspond to the eigenvalue pair belonging to the EP. To ensure a proper selection, a large gap between the smallest and second-smallest similarity values is desirable, which was indeed observed. To determine the convergence of this method, a combination of the drop in the kernel eigenvalues and the difference of the eigenvalues associated with the EP proved to be a promising convergence criterion. Training the GPR model and applying the method resulted in a relatively fast convergence to the EP with high accuracy. The GPR-based method was first developed and tested for a five-dimensional matrix model and then successfully applied to find EPs in Cu₂O.
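The similarity-measure selection of the eigenvalue pair in each iteration can be sketched as follows; the specific discrepancy c = |s − s_pred| + |p − p_pred| is an illustrative choice, not necessarily the measure defined in the paper:

```python
import itertools

def select_ep_pair(eigenvalues, s_pred, p_pred):
    """Pick the eigenvalue pair most consistent with the GPR prediction.

    For each pair, the centroid s = (l1 + l2)/2 and the squared
    difference p = (l1 - l2)**2 are compared with the model prediction;
    the pair with the smallest discrepancy c is selected. The specific
    form of c is an illustrative assumption."""
    best_pair, best_c = None, float("inf")
    discrepancies = []
    for l1, l2 in itertools.combinations(eigenvalues, 2):
        s = 0.5 * (l1 + l2)
        p = (l1 - l2) ** 2
        c = abs(s - s_pred) + abs(p - p_pred)
        discrepancies.append(c)
        if c < best_c:
            best_pair, best_c = (l1, l2), c
    return best_pair, sorted(discrepancies)
```

A large gap between the smallest and second-smallest discrepancy, as observed in the calculations, indicates a reliable selection.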
In this paper we have focused on the localization of EPs consisting of two coalescing resonances, i.e., the case of an EP2. The GPR-based method developed here can certainly be extended and applied to other, more general situations. For example, the method can, in principle, also be applied to diabolical points (DPs), where the eigenvectors corresponding to a degenerate eigenvalue are orthogonal, contrary to those of EPs, and the two eigenvalues show a linear splitting instead of a square-root branch point in the close vicinity of the degeneracy. In this case, the function p = (λ₁ − λ₂)² in Equation (9) must be replaced with p = λ₁ − λ₂ to determine the position of the DP. However, when encircling a DP the two resonances do not show the characteristic exchange behavior of an EP, and therefore a different strategy than the one described in Section 2.2.1 would be necessary to find candidates for DPs and initial training sets for the GPR-based method.
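The different splitting behavior near an EP and a DP can be illustrated with minimal 2×2 toy matrices (not related to the cuprous oxide Hamiltonian):

```python
import numpy as np

def splitting(matrix, kappa):
    """Absolute eigenvalue splitting |l1 - l2| of a 2x2 matrix family."""
    l1, l2 = np.linalg.eigvals(matrix(kappa))
    return abs(l1 - l2)

# minimal illustrative models:
ep2 = lambda k: np.array([[0.0, 1.0], [k, 0.0]])   # eigenvalues +-sqrt(k): EP2 at k = 0
dp = lambda k: np.array([[k, 0.0], [0.0, -k]])     # eigenvalues +-k: DP at k = 0
```

Reducing κ by a factor of 100 reduces the EP splitting only by a factor of 10, but the DP splitting by a factor of 100, reflecting the square-root versus linear behavior near the degeneracy.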
While two parameters are sufficient for the coalescence of the two resonances of an EP2, 2n − 2 parameters must be adjusted for the coalescence of all resonances of a more general higher-order EPn with n ≥ 3 [12,14,29]. (Note that a different number of (n² + n − 2)/2 required parameters is given in reference [29].) For the localization of such points, the stepwise grouping algorithm described in Section 2.2.1 must be modified to search for the characteristic exchange behavior of a higher-order EPn, and the functions p and s in Equations (9) and (10) need to be adapted to the nth-root branch-point singularity of the EPn, i.e., s = (1/n) ∑_{i=1}^{n} λ_i, and p must be extended to a set of complex functions p_i = (λ_{i+1} − λ_i)ⁿ for i = 1, ..., n − 1. The extended GPR-based method will then iteratively estimate the 2n − 2 real parameters of the system to finally achieve p_i = 0 for i = 1, ..., n − 1. However, it should be noted that although signatures of higher-order EPs have been observed, e.g., for the hydrogen atom in external electric and magnetic fields [12], the required number of parameters for the coalescence of all resonances of an EPn is often not available in physical systems, such as the hydrogen atom or cuprous oxide in external fields, which typically depend on a low and limited number of external parameters.

Retaining only those functions of the prior that pass through the training points results in the posterior distribution. This is illustrated in Figure A1. For this purpose, the joint Gaussian prior distribution is conditioned on the training points, leading to

μ* = m(X*) + K(X*, X) K(X, X)⁻¹ (f − μ),   (A.14)
Σ* = K(X*, X*) − K(X*, X) K(X, X)⁻¹ K(X, X*).   (A.15)

These are the key predictive equations for GPR [19], resulting in the posterior distribution with the mean function μ (μ_i = m(x_i)) in Equation (A.14) and the covariance function σ² in Equation (A.15), which can be used to predict f* at the respective input points X*, as shown in Figure A1b.
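The noise-free conditioning behind the predictive Equations (A.14) and (A.15) can be written out in a few lines of numpy; a zero prior mean and a squared-exponential kernel are assumed here for simplicity:

```python
import numpy as np

def rbf(A, B, length=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = variance * exp(-(a-b)^2 / (2 l^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length**2)

def gpr_posterior(X, f, Xs, length=1.0, variance=1.0, jitter=1e-10):
    """Posterior mean and covariance at test points Xs, given noise-free
    observations f at training points X (zero prior mean assumed)."""
    K = rbf(X, X, length, variance) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs, length, variance)
    Kss = rbf(Xs, Xs, length, variance)
    alpha = np.linalg.solve(K, f)
    mu = Ks.T @ alpha                          # cf. Eq. (A.14) with m = 0
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)  # cf. Eq. (A.15)
    return mu, cov
```

At the training points themselves the posterior mean reproduces the observations and the posterior variance collapses to (numerically) zero, which is the defining property of noise-free conditioning.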
To optimize the hyperparameters Θ, the log marginal likelihood (LML) is required, which is defined as

log p(f | X, Θ) = −(1/2) (f − μ)ᵀ K⁻¹ (f − μ) − (1/2) log |K| − (n/2) log 2π.

Note that the scalar p here denotes the marginal likelihood and should not be confused with the difference of eigenvalues in Equation (9). A short characteristic length-scale that might improve the data fit is not automatically preferred by the LML. Rather, maximizing the LML (maximum likelihood estimation) leads to values that increase the likelihood that the training data would be generated by the distribution over functions [35]. This technique is particularly powerful because the LML is differentiable with respect to the hyperparameters Θ_i and utilizes only the available training data. There are different Python packages to perform GPR; the one used in this work is GPflow [36].
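A minimal numpy version of the LML for a zero prior mean (GPflow computes and differentiates the same quantity internally when optimizing the hyperparameters):

```python
import numpy as np

def log_marginal_likelihood(K, f):
    """log p(f | X, Theta) = -1/2 f^T K^-1 f - 1/2 log|K| - n/2 log(2 pi)
    for a zero prior mean; K is the kernel matrix built from the
    hyperparameters Theta."""
    n = len(f)
    sign, logdet = np.linalg.slogdet(K)
    assert sign > 0, "kernel matrix must be positive definite"
    return (-0.5 * f @ np.linalg.solve(K, f)
            - 0.5 * logdet
            - 0.5 * n * np.log(2.0 * np.pi))
```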

Appendix B. Excitons in cuprous oxide
We are interested in excitons in cuprous oxide under the influence of external magnetic and electric fields and thus briefly recapitulate the description of the system used in this paper. The field-free spectrum of yellow excitons in Cu₂O is dominated by the odd-parity P states, with an additional fine-structure splitting of other odd-parity states, like the F states [16,37]. For their accurate modelling, one needs to go beyond the qualitative hydrogen-like description and consider the non-parabolicity of the valence band H_b(p). Since we also want to apply external fields, especially an external electric field, the mixing between the odd- and even-parity states needs to be taken into account. Due to their small extension, S states probe features of the crystal on short length scales, requiring the inclusion of the central-cell corrections H_CCC. With these, the total field-free Hamiltonian is given in [25, 28, 38-]; its contributions include the Pollmann-Büttner potential, the exchange interaction [42], and an additional short-distance correction [43]. The polaron radii are given by

ρ_e/h,i = √(ℏ / (2 m_e/h ω_LO,i))   (B.9)

with the energies ℏω_LO,i of the longitudinal Γ₄⁻ phonons, and V_uc = a³ is the volume of the unit cell.
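Equation (B.9), ρ = √(ℏ/(2 m ω_LO)), can be evaluated directly; the effective mass and phonon energy used below are illustrative stand-ins for the Table B1 parameters:

```python
import math

HBAR = 1.054571817e-34  # J s
M_E = 9.1093837015e-31  # kg, free electron mass
EV = 1.602176634e-19    # J per eV

def polaron_radius(m_eff, e_lo_mev):
    """rho = sqrt(hbar / (2 m omega_LO)), Eq. (B.9), with the phonon
    energy hbar*omega_LO given in meV."""
    omega_lo = e_lo_mev * 1e-3 * EV / HBAR  # meV -> angular frequency
    return math.sqrt(HBAR / (2.0 * m_eff * omega_lo))

# illustrative values only (not necessarily the Table B1 parameters):
rho = polaron_radius(0.99 * M_E, 18.7)  # -> a few nanometers
```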
For the investigation of EPs, we need at least two tunable parameters in the Hamiltonian. For this, we use parallel magnetic and electric fields. The former can be added via the minimal substitution p_e → p_e + eA, p_h → p_h − eA and the supplementation of the interaction term between the magnetic field and the spin degrees of freedom [26,39], with the Bohr magneton μ_B, the g-factor g_c of the electron, the g-factor g_s ≈ 2 of the hole, and the fourth Luttinger parameter κ.
For the electric field, we add the term [44]

H_F = eF · (r_e − r_h) = eF · r.   (B.12)

The relevant matrix elements of the coordinate vector r are given in Appendix B.2. All material parameters are listed in Table B1.

Appendix B.1. Complex coordinate-rotation and numerical solution
For the treatment of complex resonances with finite linewidths, we use the complex-coordinate-rotation method [1-3,27]. For the numerical solution of the Schrödinger equation we proceed analogously to Ref. [48], using the Coulomb-Sturmian functions [49] to obtain a generalized eigenvalue problem with dimension up to ∼ 10000 × 10000.

Table B1. Material parameters of Cu₂O used in the calculations.
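After complex-coordinate rotation, the non-Hermitian Hamiltonian in the non-orthogonal Coulomb-Sturmian basis leads to a generalized eigenvalue problem A c = E B c. A toy 2×2 sketch with illustrative matrices (for the ∼10000-dimensional problem a dedicated generalized solver, e.g. LAPACK's zggev via scipy.linalg.eig, would be used instead):

```python
import numpy as np

# A: complex-rotated Hamiltonian matrix, B: overlap matrix of the
# non-orthogonal basis; the numbers are illustrative only.
A = np.array([[1.0 - 0.1j, 0.2], [0.2, 2.0 - 0.3j]])
B = np.array([[1.0, 0.1], [0.1, 1.0]])

# For a small, well-conditioned B the generalized problem A c = E B c
# can be reduced to a standard eigenproblem for B^-1 A.
energies = np.linalg.eigvals(np.linalg.solve(B, A))
```

The real parts of the complex eigenvalues give the resonance energies, while the (negative) imaginary parts are related to the linewidths and hence the lifetimes.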

Figure 1. (a) The EP κ+ = i of the simple example (

Figure 3. Illustration of the iterative process by means of the five-dimensional matrix model. (a) The points on the orbit around the EP, marked in blue, and their associated eigenvalues are used as the initial training set. (b) The model slowly approaches the EP, marked as a green cross. After nine training steps the GPR model is converged and the Euclidean distance between the last model prediction and the exact EP is d_e = 5.526 × 10⁻⁶. Two different attempts are made to optimize convergence and reduce the number of exact diagonalizations. (c) First, an additional training point is added after the second iteration to explore the energy plane. For this purpose, the difference of the last two predictions is calculated and added to the second prediction. This leads to convergence after the third training step, i.e. after the fourth diagonalization (counting the additional point). Not only is the number of diagonalizations significantly reduced, but the Euclidean distance also decreases to d_e = 1.342 × 10⁻⁶. (d) Similarly to the previous approach, an additional training point is added after the third iteration. This neither reduces the number of diagonalizations (nine training steps, ten diagonalizations) nor improves convergence (d_e = 5.537 × 10⁻⁶).

Figure 4. Similarity measure employed for selecting the new eigenvalue pair in each iteration. The logarithmic plot displays the pair-discrepancy values defined in Section 2.2.3. The calculations are performed after the first training step for the model in Figure 2. A noticeable gap is observed between the smallest and second-smallest pair-discrepancy values. This indicates that the eigenvalue pair with the lowest c value is highly likely to correspond to the EP.

Figure 5. The two convergence criteria, namely the eigenvalues of the covariance matrix and the eigenvalue difference, are shown for the model system. (a) The eigenvalues λ_K of the covariance matrix K are depicted for each training step. A drop from order O(10⁻⁴) to O(10⁻¹⁰) compared to the previous step is visible in the third iteration. This indicates an already-seen training point that does not provide any new information. Since the deviation from the previous training step is clearly visible, an appropriate threshold value can be defined easily. (b) The difference of the two eigenvalues belonging to the EP is plotted over the number of training steps. It decreases strictly monotonically, verifying the convergence of the iterative process. Defining a threshold value is not as straightforward as for the kernel eigenvalues, since no clear change is visible between the last two iterations.

Figure 6. (a) Drawing an ellipse with a relative radius of ϱ = 0.06, corresponding to ∆γ = 77.733 mT and ∆f = 5.1966 V cm⁻¹, results in the visible resonances for the given energy range. Here, the band-gap energy E_g is subtracted from the real part of the energy. Due to the large radius, there is a significant overlap with other resonances, so that only the leftmost permutation is visually discernible when plotting all eigenvalues for each angle ϕ. (b) Applying the stepwise grouping algorithm reveals five permutations. Consequently, this algorithm serves as a powerful tool to effectively filter these permutations, enabling the identification of additional EPs and, subsequently, generating more training data for a single ellipse. This approach effectively reduces the overall computational workload while increasing the number of EPs discovered.

Figure 7. Applying the iterative process to the orbit in Figure 6, combined with the leftmost permutation as the initial training set, yields the illustrated prediction of the EP. It converges after ten training steps. The Euclidean distance to the prediction of the EP in Figure 8 is d_e = 2.140 × 10⁻⁴; thus the predicted field strengths are identical up to and including the fifth significant digit.

Figure 8. Applying the iterative process to the small orbit with a relative radius of ϱ = 0.003 yields the illustrated prediction of the EP. The iterative process converges already after the fourth training step.

Table 1. EPs in Cu₂O found using the GPR-based method. For each EP, the magnetic (γ) and electric (f) field strengths as well as the complex energy Ẽ are given. Here, the band-gap energy E_g is subtracted from the real part of the energy.