Local random potentials of high differentiability to model the Landscape

We generate random functions locally via a novel generalization of Dyson Brownian motion, such that the functions are in a desired differentiability class, while ensuring that the Hessian is a member of the Gaussian orthogonal ensemble (other ensembles might be chosen if desired). Potentials in such higher differentiability classes are required/desirable to model string theoretical landscapes, for instance to compute cosmological perturbations (e.g., smooth first and second derivatives for the power-spectrum) or to search for minima (e.g., suitable de Sitter vacua for our universe). Since potentials are created locally, numerical studies become feasible even if the dimension of field space is large (D ~ 100). In addition to the theoretical prescription, we provide some numerical examples to highlight properties of such potentials; concrete cosmological applications will be discussed in companion publications.


Introduction
Random functions have many applications in physics and mathematics; one of the best known is their use to describe disordered systems in solid state physics, leading to Anderson localization (often Gaussian random potentials based on a truncated Fourier series are used). In this paper we derive, to our knowledge, new methods to generate random functions of high differentiability locally, while retaining a Hessian in the Gaussian orthogonal ensemble. Our motivation stems from our desire to study cosmological implications of certain landscapes in string theory, but we have tried to make our results accessible to a wider audience by delegating cosmological applications to a separate publication. Readers not familiar with cosmology or string theory may simply skip the motivational paragraphs.
As alluded to, a recent application that motivated this work is the use of random functions to model certain landscapes in string theory [1]. For example, the Denef-Douglas landscape [2] was modelled by a random potential in [3,4] ("Random Supergravities"), see also [5]. Since a top-down approach yielding the full potential is virtually impossible in all but the simplest cases, random potentials are used to conduct numerical experiments and search for suitable vacua [5,6], investigate the feasibility of inflation [7][8][9][10] or compute (distributions of) observables [21][22][23][24][25][26][27], see also [28][29][30][31][32] for related work (for recent reviews of inflation see [33,34] and for model building in string theory see [35]). Naturally, the closer random potentials model the actual landscape of interest, the more reliable predictions become. Thus, our interest is to prescribe certain generic properties, such as the overall hilliness, the properties of the Hessian at well separated points etc., whenever a random potential is generated. For example, in random supergravities, the Hessian is a mix of Wishart and Wigner matrices [3]. A complementary analytic tool is random matrix theory (see [36,37] for a textbook introduction), which can often be used in conjunction with numerical experiments due to the feature of universality [38][39][40][41][42]. Most work in recent years relied on potentials constructed globally via truncated Fourier series [7, 8, 25-27, 43, 44], a subclass of which are Gaussian random potentials. However, this approach has the disadvantage of being computationally intensive as the dimensionality D of field space increases, since the number of random parameters increases as #^D. Thus, for a description of the hundreds of fields on generic string theoretical landscapes [1], this approach becomes useless.
Fortunately, many questions of interest, for instance the likelihood of inflation, the probability of encountering a minimum or the values of observables such as the scalar spectral index (the slope of the observed power-spectrum), only require knowledge of the scalar fields' potential (the landscape) in the vicinity of the trajectory taken by the fields. This motivated Marsh et al. [45] to construct the potential locally by employing Dyson Brownian motion [46] (DBM), greatly reducing the cost of numerical experiments (∝ D^#) to the point where hundreds of fields can be treated on a notebook with Mathematica.
To generate a potential via DBM, one stitches together patches wherein the potential is given by a Taylor series, truncated at second order. After a prescribed step, for instance a set distance along an inflationary trajectory, random components are added to the Hessian. In DBM, these random components conform to a Gaussian distribution with prescribed mean and variance, so that the Hessian is a member of the Gaussian orthogonal ensemble.
While efficient, this procedure has a serious drawback: after each step the eigenvalues of the Hessian, i.e. the masses of fields, jump. Such potentials are ill suited to study cosmological perturbations, since artefacts arise in correlation functions. For example, even a single jump in the mass of one of the fields causes a prominent ringing pattern in the power-spectrum [47][48][49] (i.e. the two point correlation function). Higher order correlation functions, commonly lumped together under the name non-Gaussianities, are affected even more. Such jumps in the Hessian can also hinder the search for minima: whenever a minimum is approached, the Hessian starts to dominate the evolution; due to the random jumps, the trajectory is bounced around preventing a smooth approach to the minimum. To a lesser degree the steps can also reduce the probability of finding regions that are sufficiently flat for inflation. It is possible to reduce these artefacts by decreasing the step size; however, this brute force approach reduces the computational advantage of DBM.
Thus, in this paper, we generalize the method by Dyson to generate potentials in any desired differentiability class: we delegate perturbations not to the Hessian, but to higher derivative tensors, while retaining a Hessian in the Gaussian orthogonal ensemble (other distributions may be chosen if desired); for example, if V ∈ C 2 is desired, random Gaussian perturbations are added to the tensor of third derivatives ∂ 3 V /(∂φ a ∂φ b ∂φ c ).
After a brief review of the method to generate potentials V ∈ C 1 via DBM in Sec. 2, we provide two distinct methods to generate V ∈ C 2 , both of which yield the same statistical properties of the Hessian. Additional freedom is present, since the number of random variables exceeds the number of conditions stemming from the prescribed statistical properties of the Hessian. The first method provides potentials that are "smoothest" in the sense that a maximal number of random variables is set to zero, Sec. 3.3.1. The second method adds perturbations primarily in the directions set by the eigenvectors of the Hessian, Sec. 3.4.1. We compare both methods in Sec. 5 and find them to be qualitatively indistinguishable and free of artefacts. We plan to use these potentials for cosmological applications in a forthcoming publication, where we also intend to test how sensitive observables are to the chosen method; naturally, any dependence would dramatically reduce the predictiveness and thus the usefulness of such random potentials to model concrete landscapes in string theory.
However, if observables are independent of the method, we have the opportunity to compute generic predictions for whole classes of string theoretical landscapes, as opposed to investigating inflationary models on a case by case basis.
While potentials V ∈ C 2 are sufficient to compute the power-spectrum, they should not be used for higher order correlation functions. For example, if one wants to compute the bi-spectrum, one needs to be able to take three derivatives of the potential, i.e. V ∈ C 3 is needed. We therefore provide a generalization of the first method to create potentials in any desired differentiability class (V ∈ C k with k ∈ N) in Sec. 4. For k = 3, we find that spurious oscillations arise in the evolution of the Hessian's eigenvalues, Sec. 5. These oscillations are caused by truncating the Taylor series at higher order and can't be avoided within the current framework; nevertheless, their amplitude, and thus their effect on observables, can be reduced to any desired level by decreasing the step length. While not an ideal solution, such potentials still improve on potentials of lower differentiability (one can't compute the bi-spectrum at all if V ∈ C 1 ). We leave applications of such potentials, as well as improvements to the methods put forth in this article, to future work.
We would like to reiterate that the methods introduced in this study are independent of the applications outlined above.
2 Creating random potentials along a trajectory

Motivation and goals
Inflationary observables depend only on properties of the potential in the vicinity of the trajectory, which motivated Marsh et al. [45] to develop a computationally economical approach to generate random potentials locally by defining random functions around a path Γ in field space: for any Γ, given the value of the potential V , its gradient V ′ and Hessian V ′′ ≡ H at a point p 0 , the values of the potential and the gradient vector at a nearby point p 1 can be obtained to leading order by means of a Taylor expansion. To construct a random potential, the Taylor expansion is truncated and the Hessian matrix at p 1 is altered by adding a random matrix δH to the Hessian at p 0 ,
H(p 1 ) = H(p 0 ) + δH . (2.1)
By repeating this process along the path Γ, a continuously differentiable random potential, i.e. V ∈ C 1 , can be obtained. The distribution of the Hessian matrix at well-separated points (i.e. separated by several units of a characteristic correlation length Λ h ) can be restricted to any desired distribution; if Wigner's Gaussian Orthogonal Ensemble (GOE) is chosen, as in [45], the elements of the Hessian undergo Dyson Brownian motion (DBM) [46]. As a consequence of statistical rotational invariance, Hessian matrices associated with well-separated points constitute a random sample of the statistical ensemble, which is invariant under orthogonal transformations. Further, if the field space is D-dimensional, the D(D + 1)/2 entries of the Hessian matrix H are statistically independent. While the choice of the GOE is the simplest one, it is by no means unique; in concrete applications, e.g. to construct potentials obeying prescribed properties of a landscape, the rules of constructing the potential have to be adjusted accordingly.
We review Dyson Brownian motion in more detail in the next section, before generalizing the prescription. We are particularly interested in two aspects: 1. Generate potentials V ∈ C k with k ≥ 2 (Sec. 3 and Sec. 4), which is needed to compute correlation functions (e.g., k = 2 for the power-spectrum, k = 3 for the bi-spectrum etc.) if artefacts are to be avoided. In addition, V ∈ C 2 is desirable for searches of extrema (if a critical point is approached, the jumps in the Hessian in ordinary DBM hinder the localisation/identification of extrema).
2. Incorporate a soft upper and lower bound on the values of the potential, as in [8]; such a bound is necessary if the potential is used to model a low energy, effective potential.
In this article, we focus on the first point, the generation of random functions V ∈ C k , which provides the foundation for concrete applications, such as the computation of cosmological perturbations, or further refinements, such as the incorporation of bounds mentioned in point 2. The latter topics are the subject of companion publications (in preparation).

Review: Dyson Brownian motion potentials
Dyson Brownian motion is a canonical, but not unique, choice of rules to govern the stochastic evolution of the Hessian matrix that gives rise to independent GOE Wigner matrices at well-separated points. To this end, the Hessian needs to be perturbed according to (see [45,46], whose results we summarize here)
δH ab = δA ab − H ab δs/Λ h , (2.2)
where δA ab are D(D + 1)/2 zero-mean stochastic variables and the term ∝ −H ab is the uniquely determined restoring force ensuring that the distribution of the entries of the Hessian remains finite and obeys the GOE. This restoring force does not imply the boundedness of the potential. The variable s represents the field space path length along the trajectory Γ and δs is the length of an individual step along Γ. Λ h can be interpreted as a horizontal correlation length. To achieve Dyson Brownian motion of H ab , the first two moments of δH ab need to satisfy [45]
⟨ δH ab ⟩ = − H ab δs/Λ h , (2.3)
⟨ (δH ab ) 2 ⟩ − ⟨ δH ab ⟩ 2 = (1 + δ ab ) σ 2 δs/Λ h , (2.4)
where σ represents the standard deviation of the corresponding Wigner ensemble. To implement the above prescription, consider a D-dimensional field space with fields φ a , a = 1 . . . D, and a potential V . We would like to model V as a random one given a suitable starting position. The potential in the vicinity of the starting point p 0 can be expanded up to second order as
V = √D Λ v 4 ( v 0 + v a φ̃ a + (1/2) v ab φ̃ a φ̃ b ) , (2.5)
where Λ v (mass dimension one) sets the vertical scale, and φ̃ a ≡ φ a /Λ h are rescaled, dimensionless fields. The normalization factor √D is introduced to simplify subsequent expressions. We use Einstein's summation convention over field indices and consider a flat field space metric if not stated otherwise. If we take the truncated Taylor expansion in (2.5) at p 0 , the potential at an adjacent point p 1 close to p 0 , i.e. with local coordinates δφ a satisfying ‖δφ a ‖ ≪ 1, where ‖ · ‖ is the Cartesian norm, can be written as
V | p 1 = √D Λ v 4 ( v 0 | p 0 + v a | p 0 δφ̃ a + (1/2) v ab | p 0 δφ̃ a δφ̃ b ) , (2.6)
with the corresponding updates of value and gradient,
v 0 | p 1 = v 0 | p 0 + v a | p 0 δφ̃ a + (1/2) v ab | p 0 δφ̃ a δφ̃ b , (2.7)
v a | p 1 = v a | p 0 + v ab | p 0 δφ̃ b . (2.8)
To generate a random potential, one can truncate the series expansion at second order and set
v ab | p 1 = v ab | p 0 + δv ab | p 0 , (2.9)
where δv ab | p 0 are taken to be elements of a random matrix.
Repeated application over successive points p n along Γ results in a random, piecewise patched-together potential V ∈ C 1 . Dyson Brownian motion random potentials are therefore defined by imposing
⟨ δv ab ⟩ = − v ab δs/Λ h , (2.10)
⟨ (δv ab ) 2 ⟩ − ⟨ δv ab ⟩ 2 = (1 + δ ab ) σ 2 δs/Λ h , (2.11)
for the mean and second moment of the added random components δv ab . Since O(v ab ) = 1/√D, the magnitude of a typical eigenvalue of v ab is of order one. Thus, v 0 and v a both receive contributions of order 1/√D over a correlation length. Hence, for potentials uncorrelated over distances of a few Λ h , the potential itself varies by an amount of order Λ v 4 per correlation length, which in turn explains the overall normalization factor of √D in (2.5).
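The patching procedure above can be summarized in a short numerical sketch. The moment conventions follow (2.10) and (2.11); the steepest-descent path, the parameter values, and all variable names (v0, va, vab, heights) are illustrative choices of ours, not taken from any published code.

```python
# Minimal sketch of patching together a C^1 random potential along a
# steepest-descent path via DBM; names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
D, ds, Lh = 5, 0.01, 1.0
sigma = np.sqrt(2.0 / D)

v0 = 0.0
va = rng.normal(0.0, 1.0 / np.sqrt(D), D)      # gradient coefficients, O(1/sqrt(D))
vab = rng.normal(0.0, sigma, (D, D))
vab = (vab + vab.T) / 2.0                      # symmetric starting Hessian

heights = [v0]
for _ in range(300):
    step = -(ds / Lh) * va / np.linalg.norm(va)  # steepest descent, rescaled fields
    # truncated second-order Taylor update of height and gradient
    v0 += va @ step + 0.5 * step @ vab @ step
    va += vab @ step
    # DBM perturbation of the Hessian coefficients: restoring force plus
    # symmetric Gaussian noise with doubled variance on the diagonal (GOE)
    dv = np.triu(rng.normal(size=(D, D)))      # D(D+1)/2 independent entries
    dv = dv + np.triu(dv, 1).T                 # symmetrize
    dv *= sigma * np.sqrt(ds / Lh)
    dv[np.diag_indices(D)] *= np.sqrt(2.0)
    vab += dv - vab * ds / Lh
    heights.append(v0)
```

The loop produces one realization of the potential's height along the path; the Hessian coefficients vab remain symmetric by construction at every step.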

Eigenvalue relaxation
The probability distribution of a matrix' eigenvalues in the unfluctuated GOE (i.e. the stationary distribution) is given by Wigner's semicircle law. If a matrix is initialized in a fluctuated state, for example such that the smallest negative eigenvalue is close to zero, its eigenvalues relax to the stationary distribution as Dyson Brownian motion proceeds. The relevant correlation length is given by Λ h . As a consequence, it is unlikely that a shallow patch remains flat for s = ∆φ ≫ Λ h . To be concrete, one can show [45] that the expectation value of the smallest eigenvalue λ min of a matrix H undergoing DBM relaxes as ⟨λ min [H]⟩ + 2 ∝ q ( λ min [H 0 ] + 2 ), where H 0 is the initial matrix at s = 0 and q ≡ exp(−s/Λ h ). Here, a normalization such that eigenvalues lie between −2 and 2 was chosen. Due to the exponential suppression in q, the initial eigenvalue λ min [H 0 ] is forgotten after a few Λ h are traversed. We demonstrate this eigenvalue relaxation and recover the results of Marsh et al. [45] in Fig. 1, where we plot a random potential V ∈ C 1 created via Dyson Brownian motion (using (2.5) and perturbations of the form (2.10) and (2.11)) next to the corresponding eigenvalues of the Hessian. Evidently, eigenvalues relax to the stationary distribution after about Λ h /δs ∼ 100 iterations or steps. Here and in all subsequent figures we take Λ h = 0.1 M P , δs = Λ h /100 and σ 2 = 2/D if not stated otherwise. While the choices of Λ h and Λ v have implications for concrete applications, they merely correspond to a rescaling here. Thus, without loss of generality, we can keep them fixed. Further, as long as the step length is small enough (δs ≪ Λ h has to hold) it should not have any impact on applications; how small it has to be depends on the type of application. We come back to this point in Sec. 5. In the meantime, δs = Λ h /100 is small enough for our purposes, while big enough to enable fast computations. σ controls the strength of the perturbations and thus affects the overall hilliness of V .
We chose σ 2 = 2/D in line with [45], to enable direct comparisons. To test our code, we recovered numerically the results of [45], which we omit here for reasons of brevity.
The path Γ we choose is given by following the slope of the potential. In a cosmological setting, one is commonly interested in solving the field equations in conjunction with the Friedmann equations, but following the steepest descent is faster and suffices for our purposes 7 .
While the observation of eigenvalue relaxation does not serve as a strong test for the ensemble to be the GOE (the same results hold true for other symmetric distributions with zero mean and finite variance for off diagonal elements), it provides a simple, necessary consistency check. In the following, we use such plots as benchmarks: as we construct potentials in a higher differentiability class, which is achieved by perturbing a higher order derivative tensor instead of the Hessian while retaining the statistical properties of the Hessian, plots such as the ones above should remain qualitatively unchanged.
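Eigenvalue relaxation of this kind is easy to reproduce in a few lines. The sketch below, with illustrative names and parameters, starts from a maximally fluctuated state (the whole spectrum at zero) and tracks the spectral radius as DBM proceeds; it does not reproduce the exact [−2, 2] normalization of [45].

```python
# Sketch of eigenvalue relaxation under DBM; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
D, ds, Lh = 10, 0.01, 1.0
sigma = np.sqrt(2.0 / D)

H = np.zeros((D, D))          # fluctuated start: the whole spectrum sits at zero
radii = []
for _ in range(1000):         # ten correlation lengths at Lh/ds = 100 steps each
    dA = np.triu(rng.normal(size=(D, D)))
    dA = dA + np.triu(dA, 1).T
    dA *= sigma * np.sqrt(ds / Lh)
    dA[np.diag_indices(D)] *= np.sqrt(2.0)
    H += dA - H * ds / Lh     # the restoring force drives the relaxation
    radii.append(np.max(np.abs(np.linalg.eigvalsh(H))))
# after a few Lh the spectral radius settles at an O(1) value set by sigma,
# i.e. the initial spectrum is forgotten
```

Plotting `radii` against the step number reproduces the qualitative behaviour of Fig. 1: growth away from the initial condition over roughly Λ h /δs steps, followed by fluctuation around the stationary semicircle edge.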
3 Extending Dyson Brownian motion to generate random potentials V ∈ C 2

The perturbations of the Hessian in the aforementioned procedure yield a potential in C 1 . If one wishes to study inflationary cosmology on random potentials, one is commonly interested in the evolution of cosmological perturbations to compute correlation functions of the gauge invariant curvature fluctuation ζ. The latter can be measured by observations of the cosmic microwave background radiation (CMBR) or large scale structure surveys. Of particular interest are the two-point function (power-spectrum) and the three-point function (a measure of non-Gaussianities). Since ζ is related to fluctuations in the inflatons at horizon crossing, see [33,34] for reviews, the correlation functions of ζ probe the properties of the inflationary potential around sixty e-folds before the end of inflation.
The n'th correlation function is sensitive to the n'th derivative of the potential; for instance, at the level of the slow roll approximation the second derivative of the potential enters the scalar spectral index. Thus, a discontinuity in the second derivative of the potential, as induced by Dyson Brownian motion, leads to artefacts already at the level of the power-spectrum: as shown in [47][48][49], such jumps lead to extended oscillations in the power-spectrum; higher order correlation functions are affected as well. Keeping δs ≪ 1, and thus δH ab sufficiently small, washes out these artefacts, since they are superimposed on top of each other as in [64]. However, such a brute force approach is computationally intensive, offsetting the main advantage of DBM, and potentially not entirely free of artefacts. Thus, it is desirable to have access to a random potential in the class C k if the k'th correlation function is to be computed.
Even if one is not interested in cosmological perturbations but merely properties of the inflationary background trajectory, it is advisable to have k ≥ 2: for example, one might be interested in the final vacuum reached after inflation, as in [8]. To this end one needs to identify the presence of a local minimum. However, if the necessarily shallow region near the minimum is approached the gradient approaches zero; as a consequence, the background dynamics become dominated by the Hessian, not the gradient. We expect the sudden steps in H to hinder the proper identification of a minimum, since the artificial jumps prevent a smooth approach to it. This expectation can be seen numerically, and we plan to elaborate on this point in future work. To a lesser degree we expect this effect to arise during slow roll inflation as well, particularly if inflation is driven near a saddle point or maximum.
In this section we generate a random potential V ∈ C 2 , such that the elements in the Hessian still obey the GOE, by adding the random fluctuations to the tensor of third derivatives instead of the Hessian. Such a potential is sufficient to discuss background dynamics, including questions pertaining to the final resting place, as well as the power-spectrum. A generalization to V ∈ C k is given in Sec. 4.

A potential to third order
Let us start by Taylor expanding V at p 0 to third order,
V = √D Λ v 4 ( v 0 + v a φ̃ a + (1/2) v ab φ̃ a φ̃ b + (1/3!) v abc φ̃ a φ̃ b φ̃ c ) . (3.2)
To create a random landscape, we wish to truncate the series at third order and add a perturbation to v abc via
v abc | p 1 = v abc | p 0 + δv abc | p 0 , (3.3)
with an appropriately chosen δv abc . Since the third order derivative tensor, and thus δv abc , has to be symmetric under permutations of abc, the Hessian automatically inherits the proper symmetries. As before, going along a path Γ from point to point, we can patch together a random potential; however, this time the potential, the gradient and the Hessian remain continuous. If one wishes to create a potential in C k , one needs to go up to the k + 1'th order in the Taylor expansion, as done in Sec. 4.
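The total symmetry of the third-derivative tensor can be enforced by explicit symmetrization over index permutations; the snippet below, with illustrative names, also counts the independent components of a symmetric rank-3 tensor in D dimensions.

```python
# Building a totally symmetric rank-3 tensor by group averaging.
import numpy as np
from itertools import permutations
from math import comb

rng = np.random.default_rng(3)
D = 4
raw = rng.normal(size=(D, D, D))                  # generic rank-3 tensor
# average over all 3! index permutations -> totally symmetric tensor
sym = sum(np.transpose(raw, p) for p in permutations(range(3))) / 6.0

# a symmetric rank-k tensor in D dimensions has comb(D + k - 1, k)
# independent components; here k = 3, D = 4 gives 20
n_indep = comb(D + 3 - 1, 3)
```

Symmetrizing once and then perturbing only symmetric combinations, as done below, keeps the tensor in the correct symmetry class at every step.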

Imposing properties of the Hessian
As explained in Sec. 2.2, the perturbation of the Hessian needs to obey (2.10) and (2.11) in order to create a Dyson Brownian motion random potential. Thus, we wish to consider δv abc such that the mean and the variance of the Hessian remain the same as if perturbations were added directly to the Hessian. Consider δv abc to be Gaussian random variables. Since the sum of Gaussian random variables yields a new Gaussian random variable with mean and variance equal to the sum of the respective quantities of the summed variables, it is possible to generate a perturbation of the Hessian δv ab with the desired properties leading to DBM. To be concrete, noting that
N (a 1 , m 1 2 ) + N (a 2 , m 2 2 ) = N (a 1 + a 2 , m 1 2 + m 2 2 ) , (3.7)
k N (a, m 2 ) = N (k a, k 2 m 2 ) , (3.8)
for N (a, m 2 ) a normally distributed random variable with mean a and standard deviation m, while k is a scalar, we can work out the needed mean and variances for δv abc such that (2.10) and (2.11) hold, yielding
⟨ v abc | p n−1 ⟩ δφ̃ c = − v ab | p n−1 δs/Λ h , (3.9)
Var ( v abc | p n−1 δφ̃ c ) = (1 + δ ab ) σ 2 δs/Λ h , (3.10)
where Var(x) = ⟨x 2 ⟩ − ⟨x⟩ 2 . Translating the above expressions into explicit conditions for the mean and variance of δv abc is complicated by the sum over c. To alleviate this hurdle and simplify the procedure, we perform a rotation in field space. Let us consider two distinct rotations/prescriptions to identify conditions for the means and variances of δv abc : 1. We rotate the field space such that one basis vector aligns with the step δφ, so that the sum collapses to a single entry. After dividing by the step-length, we can identify the desired conditions on the means and variances. This prescription is easily generalized to generate V ∈ C k .
2. We rotate the field space such that the Hessian is diagonalized. As a consequence, off-diagonal means are zero while variances can be simplified.
Once the conditions are imposed, we need to rotate back to the original coordinate system to perform the next step. While statistical properties of the Hessian are identical in both procedures, the actual conditions on δv abc differ. This is expected, since the number of independent statistical variables exceeds the number of conditions in (2.10) and (2.11) for k ≥ 2. Since the differences are delegated to a tensor not directly entering observables, we expect all prescriptions to yield consistent predictions, e.g. for the power-spectrum.
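The Gaussian closure properties invoked above (a scalar multiple of a Gaussian is Gaussian, and independent Gaussians add their means and variances under summation) can be spot-checked numerically; the moments and sample size below are arbitrary choices for illustration.

```python
# Numerical check of the closure of Gaussians under scaling and addition.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
x = rng.normal(1.0, 2.0, n)       # N(1, 2^2)
y = rng.normal(-0.5, 1.5, n)      # N(-0.5, 1.5^2), independent of x
z = 3.0 * x + y                   # expect N(3*1 - 0.5, 9*4 + 2.25) = N(2.5, 38.25)

mean_z, var_z = z.mean(), z.var()
```

Up to sampling noise, `mean_z` and `var_z` reproduce the predicted 2.5 and 38.25; it is exactly this bookkeeping that lets the moments of δv abc be chosen such that the induced δv ab satisfies (2.10) and (2.11).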

Rotating field space to align a basis vector with δφ
We wish to rotate a basis vector e i in field space into the direction of the vector δφ. For e i we choose the direction in which δφ has its maximal component, i.e. |δφ i | > |δφ c | for c ≠ i (if the maximal component is degenerate, we choose the one with the lowest index). To perform this rotation, we identify two orthonormal vectors in the plane spanned by e i and δφ, which we extend to an orthonormal basis of R D via the Gram-Schmidt procedure (this step is not unique). With respect to this basis, we consider the rotation by the required angle θ in the plane spanned by the two vectors of interest and use the identity on the space generated by the rest of the orthonormal basis.
Since by definition the basis vector e i is already of unit norm, we begin by normalizing the vector δφ,
u ≡ δφ / |δφ| . (3.11)
To create another orthonormal vector in the plane of interest, we define
v ≡ ( e i − (u · e i ) u ) / | e i − (u · e i ) u | , (3.12)
in line with the Gram-Schmidt method. The projection operator onto the plane spanned by e i and δφ is
P = u u T + v v T , (3.13)
and
Q = I − u u T − v v T (3.14)
projects onto the D − 2 dimensional perpendicular subspace of R D . Since the rotation takes place in the target space of P , we can write the rotation matrix as
R = I − u u T − v v T + [u v] R θ [u v] T , (3.15)
where [u v] is the D × 2 matrix with u and v written as column vectors. R θ is the normal rotation matrix in two dimensions, i.e.
R θ = ( cos θ − sin θ ; sin θ cos θ ) . (3.16)
R is the desired rotation matrix aligning e i with δφ, while R −1 is used to rotate back to the original coordinate system. Denoting quantities in the rotated coordinate system by an underscore, we have for example
H̲ = R H R −1 , v̲ abc = R aa′ R bb′ R cc′ v a′b′c′ , (3.17)
with the inverse transformation
H = R −1 H̲ R , v abc = (R −1 ) aa′ (R −1 ) bb′ (R −1 ) cc′ v̲ a′b′c′ . (3.18)
To avoid cluttering our notation, we suppress the underscore in the following whenever it is clear in which coordinate system computations are performed.
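The construction of R can be sketched directly; the function and variable names below are ours, and the snippet follows the Gram-Schmidt construction of the (e i, δφ) plane with a two-dimensional rotation inside it and the identity on its complement.

```python
# Rotation aligning a chosen basis vector with the step direction dphi.
import numpy as np

def align_rotation(dphi, i):
    """Orthogonal R mapping the unit vector along dphi onto the basis vector
    e_i, acting as the identity on the complement of the (e_i, dphi) plane.
    In the rotated coordinates the step therefore points along e_i."""
    D = len(dphi)
    u = dphi / np.linalg.norm(dphi)            # normalized step
    e = np.zeros(D); e[i] = 1.0
    w = e - (u @ e) * u                        # Gram-Schmidt residual
    if np.linalg.norm(w) < 1e-12:              # already aligned
        return np.eye(D)
    v = w / np.linalg.norm(w)                  # second orthonormal vector
    c = u @ e                                  # cos(theta)
    s = np.sqrt(max(0.0, 1.0 - c * c))         # sin(theta)
    # identity on the complement plus a 2D rotation in the (u, v) plane
    return (np.eye(D) - np.outer(u, u) - np.outer(v, v)
            + c * (np.outer(u, u) + np.outer(v, v))
            + s * (np.outer(v, u) - np.outer(u, v)))

rng = np.random.default_rng(5)
dphi = rng.normal(size=6)
i = int(np.argmax(np.abs(dphi)))               # component of maximal magnitude
R = align_rotation(dphi, i)
```

By construction R is orthogonal, so R −1 = R T can be used to rotate tensors back to the original frame.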

Imposing constraints on the mean and variance of δv abc
To impose (3.9) and (3.10), we go to the rotated coordinate system introduced in Sec. 3.3. All expressions and tensor components in this section are given in this coordinate system (denoted by an underscore in Sec. 3.3) if not stated otherwise. We first recall that the rescaled step length is given by
|δφ̃| = δs/Λ h , (3.19)
where we keep δs = constant from step to step. As δφ and e i are aligned (the index i is not a free index in this section), we have
δφ̃ c = δ c i δs/Λ h . (3.20)
As a consequence, (3.9) becomes
⟨ v abi | p n−1 ⟩ = − v ab | p n−1 . (3.21)
Noting that v abi | p n−1 = v abi | p n−2 + δv abi | p n−2 , we arrive at the D(D + 1)/2 independent conditions for the mean
⟨ δv abi | p n−2 ⟩ = − v ab | p n−1 − v abi | p n−2 . (3.22)
Since v abc is completely symmetric, the following components are determined by symmetry
⟨ δv ibc | p n−2 ⟩ = ⟨ δv cbi | p n−2 ⟩ , (3.23)
⟨ δv aic | p n−2 ⟩ = ⟨ δv aci | p n−2 ⟩ . (3.24)
Since a completely symmetric tensor of rank k with indices ranging from 1 to D has
N (k, D) = (D + k − 1)! / ( k! (D − 1)! )
independent components, there are N (3, D) − N (2, D) = D(D 2 − 1)/6 hitherto unspecified entries in δv abc | p n−2 . These leave the conditions (3.9), and thus the Hessian, unaltered, but they affect the overall smoothness of the potential. Thus, imposing the GOE for the Hessian does not uniquely determine the distribution of δv abc | p n−1 or the class of random potentials. The only constraint on these unspecified entries is that they should lead to a symmetric third order tensor. The simplest choice is to leave these components unchanged in the rotated coordinate system. As a consequence, the only variation of these entries in the original frame stems from rotating to-and-fro. This choice leads to the "smoothest" potential satisfying (3.9). Following a similar line of reasoning, we can use (3.10) to set the variance of δv abc | p n−2 . Using
Var ( δv ab ) = (δs/Λ h ) 2 Var ( δv abi | p n−2 ) , (3.26)
we arrive at
Var ( δv abi | p n−2 ) = (1 + δ ab ) σ 2 Λ h /δs , (3.27)
where we used (3.22). Again, symmetries determine the values of variances related by permutations of a, b, i, while D(D 2 − 1)/6 entries are up to our choice. We again choose to leave these entries unchanged in the rotated coordinate system. Conditions (3.22) and (3.27) set the mean and variance of δv abc in the rotated coordinate system S̲, and they are deceptively simple; but it should be noted that these equations are not tensorial. Thus, they only hold in S̲. However, since δv abc is a tensor, it can be rotated back to the original coordinate system S via (3.18). In S the simplicity of the imposed conditions is hard to spot; indeed, working entirely in S, it is hard to distinguish between conditions dictated by (3.9) as well as (3.10) and free choices made on our part.
To summarize, we arrive at totally symmetric, Gaussian perturbations δv abc such that the Hessian obeys (2.10) and (2.11), which in turn define a generalized Dyson Brownian motion random potential V ∈ C 2 as opposed to V ∈ C 1 . A generalization to V ∈ C k along the same lines is straightforward, see Sec. 4.
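Structurally, one step of this first method can be sketched as follows. The moment arrays mu and var are placeholders standing in for the conditions (3.22) and (3.27), the rotation matrix is a random orthogonal stand-in, and all names are ours; only the bookkeeping (perturb the abi components, enforce total symmetry, rotate back) is meant literally.

```python
# Structural sketch of one C^2 step in the rotated frame: Gaussian noise is
# delegated to the components v_{abi} of the third-derivative tensor, the
# remaining D(D^2-1)/6 free entries are left untouched, and the result is
# rotated back. mu and var are placeholders for (3.22) and (3.27).
import numpy as np

rng = np.random.default_rng(6)
D, i = 4, 0                                   # i: direction of the step

mu = np.zeros((D, D))                         # placeholder means
var = np.ones((D, D))                         # placeholder variances
noise = rng.normal(mu, np.sqrt(var))
noise = (noise + noise.T) / 2.0               # symmetry in a <-> b

dv = np.zeros((D, D, D))
dv[:, :, i] = noise                           # perturb only delta v_{abi}
dv[:, i, :] = noise                           # copies fixed by total symmetry
dv[i, :, :] = noise

# rotate the rank-3 tensor back to the original frame with an orthogonal R
R = np.linalg.qr(rng.normal(size=(D, D)))[0]  # stand-in rotation matrix
dv_orig = np.einsum('ai,bj,ck,ijk->abc', R.T, R.T, R.T, dv)
```

Total symmetry survives the rotation, so the patched potential keeps a well-defined, continuous Hessian.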

Rotating field space to diagonalize the Hessian
By relegating perturbations to the tensor of third derivatives, we were free to choose D(D 2 − 1)/6 entries to our liking, since they did not influence the statistical properties of the Hessian. To check whether or not this choice has a strong impact on other properties of the resulting potential, we provide in this section another recipe based on rotating the field space such that the Hessian is diagonalized. The required rotation matrix R is a D × D matrix with rows given by the eigenvectors of the Hessian, so that
H̲ = R H R −1 (3.28)
is diagonal in the rotated space. The inverse of this matrix, R −1 , is used to rotate back to the original coordinate system. The rotation to and from the rotated space is again governed by equations (3.17) and (3.18) respectively.
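The rotation (3.28) is simply the eigendecomposition of the symmetric Hessian; a minimal sketch with illustrative names:

```python
# Diagonalizing a symmetric Hessian: rows of R are its eigenvectors.
import numpy as np

rng = np.random.default_rng(7)
D = 5
H = rng.normal(size=(D, D))
H = (H + H.T) / 2.0                 # symmetric Hessian

eigvals, eigvecs = np.linalg.eigh(H)
R = eigvecs.T                       # rows of R are eigenvectors of H
H_rot = R @ H @ R.T                 # R H R^{-1}, diagonal since R is orthogonal
```

Since R is orthogonal, R −1 = R T, and applying R T from both sides recovers the Hessian in the original frame.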

Imposing constraints on the mean and variance of δv abc
In this section, all the quantities are given in the rotated coordinate system of Sec. 3.4 unless stated otherwise and we omit the underscore notation. Before imposing (3.9) and (3.10) in the rotated coordinate system, we note that the only non-zero components of the Hessian are the diagonal ones, i.e., v aa (a repeated index a is not to be summed over in this section). Thus, equations (3.9) and (3.10) can be re-stated separately for the diagonal elements and for the off-diagonal elements (a ≠ b). Again, we have the freedom to choose undetermined components of v abc to our liking, as long as the resulting tensor is completely symmetric under permutations of the indices abc. Our choice is motivated by the following wishes/observations: • We wish to keep the potential as smooth as possible. Further, rotational symmetry is desirable.
• We observe that v aaa is present only in the equations of the diagonal elements while other entries (v aab and v abc ) are present in the diagonal as well as off-diagonal elements.
Therefore, we choose the means accordingly. Since v abi | p n−1 = v abi | p n−2 + δv abi | p n−2 , we obtain the corresponding conditions on the perturbations. For the variance, we choose (3.39). This choice is straightforward, except for the variance of v aaa , which can be read off once symmetry under rotations is imposed and everything else is fixed. Once the elements of the third order tensor are set in the rotated frame, we rotate them back to the original one by using the inverse rotation matrix R −1 .

4 Generating random potentials V ∈ C k
To generate a potential V ∈ C k while maintaining a Hessian in the GOE, we need to continue the Taylor expansion in (3.2) to order k + 1, while adding the perturbation to the k + 1'th derivative tensor. To keep our notation economical, let us introduce the multi-index
C j ≡ c 1 c 2 . . . c j , (4.1)
so that e.g. the third derivative tensor reads v abc = v abC 1 . We kept the indices a, b explicit, since we want to impose conditions (2.10) and (2.11) onto the Hessian v ab . We add perturbations via
v abC k−1 | p n = v abC k−1 | p n−1 + δv abC k−1 | p n−1 , (4.3)
staying with our convention that δv abC k−1 | p n−1 is added at p n and thus relevant for the potential along Γ from point p n onward. From here on we work in the rotated coordinate system introduced in Sec. 3.3, i.e. all tensors are assumed to be transformed via (3.17) (omitting the underscore), and we assume a constant distance between perturbations so that δφ̃ = e i δs/Λ h = constant. Note that i is not a summation index in this section but designates the direction of δφ in the rotated coordinate system. As a consequence, we may write the change of the Hessian over one step in terms of the components v abC k−1 carrying the multi-index
C k−1 = i . . . i . (4.8)
Since the perturbation enters only in δv abC k−1 | p n−1 , the remaining deterministic terms can be taken out of the expectation value, where we used (4.4). Equating the result with the r.h.s. of equation (2.10) yields the condition for the mean, (4.10). Since v abC k−1 is completely symmetric, all means related to ⟨ δv abC k−1 | p n−2 ⟩ via a permutation of the indices, π(abC k−1 ) ≡ π(abi . . . i), are identical to the above expression. We leave the remaining N (k + 1, D) − N (2, D) elements of v abC k−1 unperturbed, i.e., impose zero mean and variance, as we did for V ∈ C 2 . The result (4.10) reduces to (3.22) for k = 2. Similarly, we can derive the variance: since only v abC k−1 contains a perturbation, the variance of the induced δv ab is proportional to Var ( δv abC k−1 | p n−2 ). (4.12) Plugging this into (2.11) leads to
Var ( δv abC k−1 | p n−2 ) = (1 + δ ab ) σ 2 ((k − 1)!) 2 (Λ h /δs) 2k−3 . (4.13)
Variances with indices given by a permutation π(abi . . . i) are identical to the above expression, while all remaining variances are set to zero. (4.13) reduces to (3.27) for k = 2. Evidently, it is straightforward to set the mean and variance of δv abC k−1 in the rotated coordinate system such that the Hessian is a matrix in the GOE and V ∈ C k .
The above values need to be transformed back to our original coordinate system via (3.18), in which the conceptual simplicity of the prescription is hidden.
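A useful piece of bookkeeping in this construction is counting how many random variables are actually needed. Assuming, as in the text, that N(n, D) counts the independent components of a totally symmetric rank-n tensor in D field-space dimensions, N(n, D) = C(D + n − 1, n), only N(2, D) of the N(k + 1, D) entries of the (k + 1)'th derivative tensor receive perturbations. A minimal sketch (the helper name `n_sym` is ours):

```python
from math import comb

def n_sym(n, D):
    """Number of independent components N(n, D) of a totally
    symmetric rank-n tensor in D field-space dimensions."""
    return comb(D + n - 1, n)

D, k = 5, 3
perturbed = n_sym(2, D)                  # entries tied to the Hessian conditions
untouched = n_sym(k + 1, D) - perturbed  # entries left with zero mean and variance
print(perturbed, untouched)              # -> 15 55
```

The number of perturbed entries grows only as D², independent of k, which is the source of the computational advantage over global constructions.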
5 Examples, comparisons and discussion of V ∈ C^i with i = 1, 2, 3

In addition to the known method of generating V ∈ C^1 via Dyson Brownian motion, see Sec. 2.2, we have derived two distinct methods to generate random potentials V ∈ C^2, one of which we generalized to provide V ∈ C^k for arbitrary k ∈ N; thus, we have at our disposal:

1. V ∈ C^1, generated via Dyson Brownian motion, see Sec. 2.2;

2. V ∈ C^2, generated via the first prescription of Sec. 3, which invokes the least number of random variables;

3. V ∈ C^2, generated via the second prescription of Sec. 3, which perturbs primarily in the principal directions of the Hessian;

4. V ∈ C^k, generated via rotating field space to align a basis vector with δφ and delegating the perturbation to the (k + 1)'th derivative tensor, yielding (4.10) and (4.13) for the means and variances.
In this section, we would like to compare the resulting potentials for k = 1, 2, 3 with each other based on a few selected examples to highlight general features. As explained in Sec. 2.3, we use Λ_h = 0.1, Λ_v = 1, δs = Λ_h/100, σ² = 2/D, and choose the direction of steepest descent to provide the path Γ along which the potential is generated. We choose D = 5, so that we are well within the multi-field regime and avoid a self-intersecting Γ, yet plots, such as the ones depicting eigenvalue relaxation of the Hessian, remain clear. Further, to ease comparison, we use the same initial configuration in all plots (height, slope and eigenvalues of the Hessian); we use the words "iterations" and "steps" interchangeably. We varied initial conditions and the dimensionality of field space to make sure that the plots depicted here are representative.
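The stepping prescription along Γ can be sketched as follows. This is a minimal illustration, assuming a first-order transport of the gradient between patches; the function name `next_point` and the sample numbers are ours, not from the text:

```python
import numpy as np

def next_point(grad, hess, ds):
    """One step along the steepest-descent path Gamma: move a distance ds
    against the (normalized) gradient, and transport the gradient to the
    new point to first order in dphi using the local Taylor expansion."""
    dphi = -ds * grad / np.linalg.norm(grad)
    new_grad = grad + hess @ dphi
    return dphi, new_grad

# hypothetical starting data for illustration (D = 5, ds = Lambda_h / 100)
grad = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
hess = np.eye(5)
dphi, new_grad = next_point(grad, hess, 0.001)
```

In the full prescription, all derivative tensors up to order k + 1 would be shifted analogously before the random perturbation is added at the new patch.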
In Fig. 2, we show exemplary plots of the potential along the path Γ: we observe that V ∈ C^2 generated via either method yields potentials comparable to the one originating from Dyson Brownian motion. However, potentials V ∈ C^3, panel (d), and to a lesser degree V ∈ C^2 in panel (c), are generically more convex than the other ones. While these differences are minor, they may be important if questions pertaining to inflationary cosmology are to be addressed (only flat regions can support inflation); thus, for such applications, it is crucial to check the sensitivity of predictions to the method used to generate the potential. We leave such an investigation to future work.
The main desired difference between potentials is their differentiability. This difference becomes evident if we plot the eigenvalues of the Hessian along the path, as in Sec. 2.3. The corresponding eigenvalues of the potentials in Fig. 2 are shown in Fig. 3. As expected for potentials whose Hessian is a member of the Gaussian orthogonal ensemble for distances above the horizontal correlation length Λ_h, we observe eigenvalue relaxation once the path-length exceeds Λ_h; in other words, the initial values of the eigenvalues are usually forgotten after O(Λ_h/δs) = O(100) iterations.
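This relaxation can be reproduced with the Hessian sector alone. The sketch below is a toy version, not the full C^k construction; it assumes the standard DBM relaxation conditions (cf. (2.10) and (2.11)), ⟨δH_ab⟩ = −H_ab δs/Λ_h and Var(δH_ab) = (1 + δ_ab) σ² δs/Λ_h:

```python
import numpy as np

def dbm_step(H, ds, lam_h, sigma, rng):
    """One Dyson-Brownian-motion update of the symmetric Hessian H:
    mean reversion <dH_ab> = -H_ab * ds/lam_h plus GOE-distributed noise
    with Var(dH_ab) = (1 + delta_ab) * sigma**2 * ds/lam_h."""
    D = H.shape[0]
    std = sigma * np.sqrt(ds / lam_h)
    g = rng.normal(0.0, std, size=(D, D))
    # symmetrize: off-diagonal variance std^2, diagonal variance 2*std^2
    noise = (g + g.T) / np.sqrt(2.0)
    return H - H * ds / lam_h + noise

rng = np.random.default_rng(0)
D, lam_h, ds = 5, 0.1, 0.001
sigma = np.sqrt(2.0 / D)
H = np.diag(np.arange(1.0, D + 1.0))   # arbitrary initial eigenvalues
for n in range(300):                   # ~ 3 * lam_h/ds steps
    H = dbm_step(H, ds, lam_h, sigma, rng)
eigs = np.linalg.eigvalsh(H)           # initial spectrum largely forgotten
```

With σ = 0 the update reduces to pure mean reversion, H_n = (1 − δs/Λ_h)^n H_0, which provides a convenient deterministic check of the implementation.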
While such plots look qualitatively the same to the naked eye, a close-up, as shown in Fig. 4, reveals the most important quantitative difference: potentials V ∈ C^1 show a jump of the eigenvalues after each step. These discontinuities are intrinsic to Dyson Brownian motion and, as discussed in Sec. 2, can be disastrous if cosmological perturbations generated during inflation are to be computed (artefacts arise, such as ringing patterns in correlation functions). Further, such potentials are not well suited to search for minima, which restricts their usability to model string theoretical landscapes if one's goal is to find suitable vacua for our universe. As we go to V ∈ C^2, panels (b) and (c) of Fig. 4, the eigenvalues change smoothly, as expected. Since perturbations are delegated to the tensor of third derivatives, we still observe kinks, but these kinks are harmless if one wishes to compute the power-spectrum or search for minima. Nevertheless, they lead to spurious signals in higher order correlation functions, which motivated us to create potentials in even higher differentiability classes.

The eigenvalues of a potential V ∈ C^3 are plotted in panel (d) of Fig. 4; while the slope indeed changes continuously, as desired, we observe an artefact of a different kind: the perturbations added to the 4'th derivative tensor add spurious oscillations to the eigenvalues, with a wavelength set by the step-length,

λ ∝ δs. (5.1)

The cause of these oscillations is similar to over-fitting data with a polynomial of high degree, and we expect them to be problematic for the computation of the bi-spectrum. Can we eliminate this effect? Since we stitch together different Taylor expansions after each step, and these oscillations trace back to changes in the 4'th derivative tensor, one may hope to reduce the step-length to the point where the contributions of the fourth order tensor are well below those of the third order tensor when the next patch is reached.
For the depicted eigenvalues of the Hessian (V ∈ C^3), we stitch together parabolas; thus, ideally, the step-length should be taken so small that the presence of a maximum/minimum of the particular Taylor expansion is unlikely to occur within the next step, i.e., ⟨δv_{abii}⟩ δs² ≪ O(v_{abi} δs, v_{abii} δs²). However, since the means of the perturbations are adjusted according to (4.10), this condition cannot be satisfied generically, due to the contribution ⟨δv_{abii}⟩ ∼ O(v_{abi}/δs): the left and right hand side of our tentative condition are generically of the same order. Thus, the presence of such oscillations is intrinsic to the method by which we create V ∈ C^3.

Figure 4. Eigenvalue evolution between successive steps for δs = Λ_h/100 and different methods of creating random potentials: (a) Dyson Brownian motion causes jumps at each step; (b) and (c): either method of creating V ∈ C^2 leads to a continuous evolution of the Hessian, sufficient for several cosmological applications (hunt for minima, computation of the power-spectrum); (d) for k ≥ 3, spurious oscillations arise that can be problematic for applications. While they are intrinsic to the method we used to create such potentials, their amplitude can be made arbitrarily small by reducing δs, see Fig. 5.
To test this explanation, we varied δs from Λ_h/10 to Λ_h/10^4 in Fig. 5, and indeed, the presence of oscillations is not altered by a reduction of δs, while the wavelength in field space is set by the step-length with a proportionality constant of order one. However, we also observe that the amplitude of oscillations diminishes as the step-length is decreased. To leading order in δs, we can estimate this amplitude, yielding A ∝ δs. The observed reduction is slightly weaker, which is understandable since our estimate did not take into consideration the variance of the perturbations in (3.36)–(3.38). However, it is clear how to proceed in applications, such as the computation of the bi-spectrum: to minimize the effect of oscillations, one needs to demand (at least) that A is considerably smaller than the eigenvalues under consideration. We normalised the potential such that eigenvalues are of order one, so that we need to demand A ≪ 1, which directly translates into a condition on δs. Practically, one may create a few plots of the eigenvalues as in Fig. 5 to decide on an appropriately small δs (in our case, δs ≲ Λ_h/10^4 appears appropriate). If observables such as the bi-spectrum are to be computed, one should check whether reducing δs further causes leading order changes in results (it should not). A similar line of reasoning needs to be followed for k > 3.

Figure 5. Eigenvalues for potentials V ∈ C^3 for varying step size δs. Spurious oscillations with a wavelength comparable to δs are visible. While their presence is intrinsic to the method used to create the potential, one can diminish their amplitude to any desired level by reducing δs accordingly.
Alternatively, one may contemplate altering the method whereby the potential is generated. For example, one may consider applying a running average to the potential as it is being created, to eliminate the effect of noisy artefacts on scales of order δs. We leave such improvements to future studies.
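The running-average idea can be sketched as a simple post-processing step; the window size and the toy signal below are our choices for illustration, not prescriptions from the text:

```python
import numpy as np

def running_average(samples, window):
    """Centered moving average over `window` consecutive samples, i.e. over
    a field-space distance window * ds.  mode='same' zero-pads at the edges,
    so the first and last window//2 points are damped."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="same")

# toy example: slow trend plus a spurious oscillation of period ~10 samples
x = np.arange(1000)
signal = 0.001 * x + 0.05 * np.sin(2 * np.pi * x / 10)
smooth = running_average(signal, 10)   # window matched to the oscillation period
```

A window matched to the oscillation wavelength (a few δs) removes the spurious signal while leaving structure on scales of order Λ_h essentially untouched.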
To summarize, while potentials V ∈ C^k with k ≥ 3 are not free of problems, they still offer an improvement over potentials in a lower differentiability class. In the end, one may pick the potential with the lowest k that is sufficient for the task at hand, in order to retain the computational advantage that locally created random potentials offer over globally created ones. Potentials V ∈ C^2 are sufficient to hunt for minima on landscapes in string theory and enable a computation of the power-spectrum of cosmological perturbations. Further, they are free of the artificial oscillations that occur for k ≥ 3. Thus, we plan to use such potentials in forthcoming publications on cosmological applications.

Conclusion
We derived novel methods to generate random functions in a desired differentiability class along a trajectory by extending the prescription of Dyson Brownian motion (DBM). As in DBM, the Hessian of these functions evaluated at well separated points is a random matrix in the Gaussian orthogonal ensemble (GOE).
We were motivated to construct such functions to model complicated potentials on the string theoretical landscape (a field space of high dimensionality) for cosmological applications. Potentials V ∈ C^2 are of particular interest to us, since they enable the search for minima as well as the study of cosmological perturbations and the computation of the power-spectrum (the two-point correlation function). Potentials V ∈ C^k with k ≥ 3 are needed to compute higher order correlation functions.
The method of constructing such potentials inherits the basic idea from DBM to stitch together local patches wherein the potential is given as a truncated Taylor series. Whenever the next patch is entered, random variables are added to the (k + 1)'th derivative tensor (k = 1 for DBM, so that perturbations are added to the Hessian). For DBM, the statistical properties of these variables are entirely determined by the desired ensemble (the GOE) of the Hessian. However, for k ≥ 2, additional freedom is present.
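The stitching rests on evaluating, within each patch, a truncated Taylor polynomial in the displacement δφ. A minimal sketch of evaluating one patch (dense numpy tensors and the name `eval_patch` are our choices):

```python
import numpy as np
from math import factorial

def eval_patch(tensors, dphi):
    """Evaluate the truncated Taylor series of one patch,
    V = sum_j (1/j!) v_{C_j} dphi^{c_1} ... dphi^{c_j},
    where tensors[j] is the totally symmetric rank-j derivative tensor
    at the patch point (tensors[0] = value, tensors[1] = gradient,
    tensors[2] = Hessian, ...)."""
    V = 0.0
    for j, t in enumerate(tensors):
        contracted = np.asarray(t, dtype=float)
        for _ in range(j):   # contract each index with the displacement
            contracted = np.tensordot(contracted, dphi, axes=([0], [0]))
        V += contracted / factorial(j)
    return float(V)
```

Truncating the list at rank k + 1 and re-expanding about the next point p_n (with the random variables added to the top tensor) reproduces the patch-by-patch construction described above.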
To explore this freedom, we provided two distinct prescriptions for k = 2: the first generates potentials that invoke the least number of random variables and thus provides, in a sense, the smoothest potentials. This prescription is readily extended to arbitrary k ∈ N. The second prescription perturbs primarily in the principal directions of the Hessian. It should be noted that all such potentials are indistinguishable if the statistical properties of the Hessian are used as a discriminator.
We followed with a small selection of examples to highlight the properties of potentials with k = 2, 3: for k = 2, we found potentials generated via the two different methods to be qualitatively indistinguishable and free of artefacts; we plan to use both of them in cosmological applications in a future publication. The k = 3 case is somewhat problematic: we observed spurious oscillations of the eigenvalues with a wavelength given by the step-length after which perturbations are added. These artefacts are intrinsic to the methods used, but can be made arbitrarily small by reducing the step-length. While not optimal from a computational-efficiency point of view, such potentials can at least in principle be used to compute higher order correlation functions.
While our motivations stem from cosmology, the method to construct such random functions is general and may be of use in other areas of science.