Improved mean-field dynamical equations are able to detect the two-step relaxation in glassy dynamics at low temperatures

We study the stochastic relaxation dynamics of the Ising p-spin model on a random graph, which is a well-known model with glassy dynamics at low temperatures. We introduce and discuss a new closure scheme for the master equation governing the continuous-time relaxation of the system, which translates into a set of differential equations for the evolution of local probabilities. The solution to these dynamical mean-field equations describes the out-of-equilibrium dynamics at high temperatures very well, notwithstanding the key observation that the off-equilibrium probability measure contains higher-order interaction terms not present in the equilibrium measure. In the low-temperature regime, the solution to the dynamical mean-field equations shows the correct two-step relaxation (a typical feature of glassy dynamics), but with a too-short relaxation timescale. We propose a solution to this problem by identifying the range of energies where entropic barriers play a key role and defining a renormalized microscopic timescale for the dynamical mean-field solution. The final result perfectly matches the complex out-of-equilibrium dynamics computed through extensive Monte Carlo simulations.


I. INTRODUCTION
A myriad of problems from condensed matter physics [1], combinatorial optimization [2], neuroscience [3] and machine learning [4] are formulated via the extremization of some function of N variables. The optimization landscape is sometimes very complex, and the solution becomes difficult, or impossible, because of the presence of local attractors that slow down the exploration of the configuration space. The system is called frustrated [5,6].
The relevance of the subject is widely recognized, and some theoretical tools for the study of systems' equilibrium properties are now becoming well-established science, after years of applications in diverse contexts. Replica symmetry breaking [6,7], the cavity method [8,9] and the Thouless-Anderson-Palmer approach [10] have made it possible to tackle numerous problems. On the other hand, the progress in out-of-equilibrium situations has been considerably slower. The field lacks a general understanding of the problems, and their treatments are highly specific depending on the type of variables (continuous [11] or discrete [12]), their interaction topology (fully connected [13], random [14] or latticed [15]), or the nature of time itself (again continuous [16] or discrete [17]).
If we complicate the scenario by looking at problems with a known spin-glass phase in equilibrium, the available results are even more scarce. The pioneering work of Sompolinsky and Zippelius [18] studied the continuous-time dynamics of a soft-spin version of the Sherrington-Kirkpatrick (SK) model, where the variables are continuous. This case is usually modelled with a Langevin formalism, as it is possible to write a differential equation directly for the soft-spin variables. Later, Sompolinsky [19] attempted the construction of a consistent mean-field theory that takes into account both equilibrium and non-equilibrium spin-glass behavior.
Motivated by the dynamical properties of spin glasses, like the aging regime [5,20,21], a series of works [11,22-24] pointed out some inconsistencies in Sompolinsky's approach and exploited the picture of weak ergodicity breaking [25] to revisit the problem. Besides the study of the SK model [23], they included a simpler but illustrative model: the spherical spin-glass model with p-spin interactions [22]. The theory reproduced aging and allowed for the existence of multiple timescales. However, this theory has recently been questioned for more complicated spherical models [26] and for Ising models on sparse random graphs [27].
While the spherical p-spin is a long-range (fully connected) model with continuous variables, we focus our attention on the diluted ferromagnetic p-spin model, defined for discrete spin variables on sparse random graphs. Although in our case there is no quenched disorder in the interactions, and despite all the differences mentioned before, there is a common phenomenology. At low temperatures, both models exhibit multiple relaxation processes within the dynamic evolution, one of them with a very large characteristic time scale. This relation between traditional mean-field spin-glass models and more realistic structural glass models has been discussed on various occasions [28,29].
Our work exploits shared aspects between them, like the existence of multiple time scales, to give a simple mean-field theory describing the spin-glass dynamics of discrete variables on random graphs. We derive a hierarchy of approximations that allows us to consider spatial correlations with increasing accuracy. Although there is abundant literature about another important hierarchical system of equations for the glassy dynamics, known as the Generalized Mode Coupling Theory (GMCT) [30,31], it is important to emphasize here that our approaches are fundamentally different.
While the GMCT does not take into account the topology of the interactions, we explicitly consider a specific random graph of interacting variables where the notions of distance and neighborhood are relevant. This article is organized as follows. Section II introduces a new closure for the master equation governing the system dynamics, written for a single instance of the interaction graph.
For simplicity, we apply it to the p-spin ferromagnet on random regular hypergraphs, where everything is reduced to one average-case equation. In Section III we generalize the work of Montanari and Semerjian [32] on dynamical phase transitions to the out-of-equilibrium scenario.
The latter allows us to re-interpret the results of the dynamical theory presented in Section II by providing a way to compute a new time scale for our calculations.The results are compared with Monte Carlo simulations in Section IV.

II. CONDITIONED DYNAMIC APPROXIMATION
In its simplest form, the ferromagnetic p-spin model is defined by the Hamiltonian H = −Σ_a Π_{k∈a} σ_k on N discrete spin variables σ_i = ±1, where the sum runs over interaction groups, called plaquettes, of exactly p variables. This kind of model is usually represented using a hypergraph where the spins correspond to variable nodes (denoted by the indexes i, j, ...) and the plaquettes to factor nodes (denoted by a, b, ...).
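As a concrete reference, the Hamiltonian above can be evaluated directly. The sketch below (plain Python, with the hypergraph given as a list of index tuples, which is our own convention, not a notation from the paper) computes the energy density e = H/N:

```python
def energy_density(sigma, plaquettes):
    """Energy density e = H/N of the ferromagnetic p-spin model,
    H = -sum_a prod_{k in a} sigma_k, for spins sigma_k = +/-1."""
    H = 0
    for a in plaquettes:
        s = 1
        for k in a:
            s *= sigma[k]
        H -= s
    return H / len(sigma)

# Example with p = 3: two plaquettes on four spins.
# Both plaquette products are -1 (unsatisfied), so H = +2 and e = 0.5.
sigma = [1, 1, -1, 1]
plaquettes = [(0, 1, 2), (1, 2, 3)]
print(energy_density(sigma, plaquettes))  # 0.5
```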
The continuous-time dynamics of the probabilities P_t(⃗σ) of having some configuration ⃗σ = {σ_1, σ_2, ..., σ_N} is governed by the Master Equation [33]:

dP_t(⃗σ)/dt = Σ_k [ r_k(F_k[⃗σ]) P_t(F_k[⃗σ]) − r_k(⃗σ) P_t(⃗σ) ],   (1)

where r_k(⃗σ) is the probability per unit time that the spin σ_k changes to −σ_k when the system's configuration is ⃗σ. These are called dynamic transition rates. The operator F_k[•] takes any configuration ⃗σ and flips the k-th spin. When the transition rate r_k depends only on the variables that directly interact with the k-th node, we can choose some site i and sum (1) over all the configurations that keep the spin σ_i fixed; the result is the local equation:

dP_t(σ_i)/dt = Σ_{σ_∂i} [ r_i(−σ_i, σ_∂i) P_t(−σ_i, σ_∂i) − r_i(σ_i, σ_∂i) P_t(σ_i, σ_∂i) ].   (2)

The symbol ∂i represents the set of nodes that interact with i according to the model's Hamiltonian.
Of course, we have an equation like (2) for all spins in the system. But these are not the only equations we can get. We could in principle marginalize (1) keeping fixed a plaquette of p interacting variables and thus obtain a differential equation for the probability of a plaquette's configuration, or we could fix a variable σ_i and all its neighborhood σ_∂i to obtain a differential equation for P(σ_i, σ_∂i):

dP_t(σ_i, σ_∂i)/dt = r_i(−σ_i, σ_∂i) P(−σ_i, σ_∂i) − r_i(σ_i, σ_∂i) P(σ_i, σ_∂i)
  − Σ_{b⊂∂i} Σ_{j∈b\i} Σ_{σ_∂j\b} [ r_j(σ_j, σ_∂j) P(σ_∂j\b, σ_∂i, σ_i) − r_j(−σ_j, σ_∂j) P(σ_∂j\b, F_j[σ_∂i], σ_i) ].   (4)

Here, the indexes i and j represent variable nodes, while the indexes a, b denote plaquettes, and σ_a is the configuration of the variables inside plaquette a. In order to lighten our notation, we prefer to think of the symbols a and b as sets of variable nodes. Thus, for example, b ⊂ ∂i stands for the subset of ∂i formed by the nodes inside the plaquette b.
By choosing each time a larger group of spins we can construct a hierarchical system of differential equations, which we must truncate at some point. The simplest approximation one can make is to neglect all connected correlations C_ij = ⟨σ_i σ_j⟩ − ⟨σ_i⟩⟨σ_j⟩, closing the system at the level of equation (2). In practice, this amounts to substituting P(σ_∂j\b, σ_∂i, σ_i) with P(σ_i) Π_{k∈∂i\j} P(σ_k).
For the sake of simplicity and concreteness, let us consider the ensemble of random regular hypergraphs, where all variables σ_i participate in the same number c of plaquettes (c is the node's connectivity), and all plaquettes contain exactly p = 3 variables. These are locally tree-like structures where the typical length of the loops diverges with the system size as ln(N).
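A sample from this ensemble can be drawn with a configuration-model pairing. The following sketch (our own minimal implementation, not code from the paper) gives each of the N variable nodes exactly c stubs, groups them into plaquettes of size p, and reshuffles until no plaquette repeats a node:

```python
import random

def random_regular_hypergraph(N, c, p, seed=0):
    """Sample a (c, p)-regular hypergraph by configuration-model pairing:
    each of the N variable nodes appears in exactly c plaquettes of size p.
    Plaquettes with a repeated node are avoided by reshuffling; two
    plaquettes may still share a pair of nodes, which is acceptable for
    a sketch (such events are rare for large N)."""
    assert (N * c) % p == 0
    rng = random.Random(seed)
    while True:
        stubs = [i for i in range(N) for _ in range(c)]
        rng.shuffle(stubs)
        plaquettes = [tuple(stubs[k:k + p]) for k in range(0, len(stubs), p)]
        if all(len(set(a)) == p for a in plaquettes):
            return plaquettes

# N*c/p = 12 plaquettes, every node in exactly c = 3 of them
G = random_regular_hypergraph(N=12, c=3, p=3)
print(len(G))  # 12
```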
It is important to notice that the only source of disorder in this ensemble is the presence of loops. After neglecting the correlations C_ij and setting homogeneous initial conditions, the system can be characterized by a single differential equation for ϕ(t), the probability that a spin points up. Given the structure of the Hamiltonian, we can write the transition rates in terms of a single integer: the number of unsatisfied interactions between a spin and its neighbors, u = Σ_{a⊂∂i} δ(Π_{k∈a} σ_k, −1).
Within this approximation, starting the relaxation at low temperatures from the initial condition ϕ(0) = 1/2, equation (5) gives dϕ/dt = 0 and therefore ϕ(t) = 1/2 at all times. The energy density is then easily computed as e(t) = 0 for all times, which is obviously very different from the real dynamics of the model.
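Since the rates depend on σ_i and σ_∂i only through u, they can be tabulated once. The text does not write the rate function explicitly at this point; the sketch below assumes the Glauber choice adopted later in this section, with energy change ΔE = 2(c − 2u), which follows because flipping the spin reverses the product of every incident plaquette (u unsatisfied plaquettes become satisfied, c − u satisfied ones become unsatisfied):

```python
import math

def glauber_rate(u, c, beta):
    """Glauber flip rate for a spin with u unsatisfied plaquettes out of c.
    dE = 2*(c - 2*u) is our own bookkeeping for the p-spin Hamiltonian;
    the Glauber form 1/(1 + exp(beta*dE)) is a standard choice."""
    dE = 2.0 * (c - 2 * u)
    return 1.0 / (1.0 + math.exp(beta * dE))

# At u = c/2 the flip is energetically neutral and the rate is exactly 1/2
print(glauber_rate(u=2, c=4, beta=1.0))  # 0.5
```

Rates are strongly suppressed for u = 0 (all plaquettes satisfied) and approach one for u = c, as expected for a relaxational dynamics.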
A less trivial approximation, similar to the one employed in [34], is the following closure:

P(σ_b\i | σ_i, σ_a\i) ≈ P(σ_b\i | σ_i).   (6)

In (6) we approximated the conditional probability: the configuration of σ_a\i is irrelevant for the probability distribution of σ_b\i once σ_i is given. The information of the variable at distance d = 1 in the graph is enough. This is reflected in the following connected correlations between the plaquette products S_a = Π_{k∈a} σ_k, computed using the conditional probability distributions:

C_{σ_i} = ⟨S_a S_b⟩_{σ_i} − ⟨S_a⟩_{σ_i} ⟨S_b⟩_{σ_i}.   (7)

In principle, C_{σ_i} can be non-zero. However, (6) allows us to factorize the joint conditional probability distribution, giving

P(σ_a\i, σ_b\i | σ_i) = P(σ_a\i | σ_i) P(σ_b\i | σ_i).   (8)

This immediately leads to C_{σ_i} = 0 for both values of σ_i.
Setting the same initial condition P(σ_i) = 1/2 for all i in a random regular hypergraph leads to a single equation for P(σ_a), where we can drop the index a. In what follows we call the resulting equation (9) the Conditioned Dynamic Approximation of the first order (CDA-1), for reasons that will become clearer later. This equation can be numerically integrated in time to get a non-trivial relaxation of the energy density e(t).
The results for the Glauber dynamics of this model, which corresponds to a specific choice of the transition rates, are compared with Monte Carlo simulations in Fig. 1. We represent the CDA-1 with dashed lines. For high temperatures, see Fig. 1a, the approximation works reasonably well for the transient regime and provides the right stationary value for the energy. However, below the dynamical spin-glass transition, which occurs at temperature T_d ≈ 0.51, the CDA-1 approximation is very poor and returns a relaxation very far from the behavior measured in the simulations (see Fig. 1b).
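For reference, a minimal version of such a Monte Carlo experiment fits in a few lines. The sketch below uses random-sequential Glauber updates (one sweep of N attempted flips per time unit) as a discrete-time stand-in for the continuous-time dynamics of the paper; it assumes p = 3 plaquettes given as index triples, and is an illustration rather than the authors' simulation code:

```python
import math
import random

def simulate_glauber(plaquettes, N, beta, sweeps, seed=0):
    """Random-sequential Glauber dynamics for the p = 3 ferromagnet on a
    given hypergraph (list of index triples). Starts from a random
    configuration (infinite temperature) and returns the energy density
    e(t) recorded after each sweep."""
    rng = random.Random(seed)
    sigma = [rng.choice((-1, 1)) for _ in range(N)]
    incident = [[] for _ in range(N)]      # plaquettes touching each spin
    for a in plaquettes:
        for i in a:
            incident[i].append(a)
    traj = []
    for _ in range(sweeps):
        for _ in range(N):
            i = rng.randrange(N)
            # Flipping sigma_i reverses the product of every incident
            # plaquette, so dE = 2 * (sum of current plaquette products).
            dE = 2.0 * sum(sigma[a[0]] * sigma[a[1]] * sigma[a[2]]
                           for a in incident[i])
            if rng.random() < 1.0 / (1.0 + math.exp(beta * dE)):
                sigma[i] = -sigma[i]
        traj.append(-sum(sigma[a[0]] * sigma[a[1]] * sigma[a[2]]
                         for a in plaquettes) / N)
    return traj

# Toy run on a tiny cyclic (c = 3, p = 3) hypergraph
plaqs = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 0), (5, 0, 1)]
print(simulate_glauber(plaqs, N=6, beta=2.0, sweeps=3))
```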
We are interested precisely in this two-step relaxation, which is typical of glassy dynamics at low temperatures. Given that we cannot obtain a non-zero correlation between neighboring plaquettes from the CDA-1, we will take another step. Analogously to (6), to close equation (4) we can write:

P(σ_∂j\b | σ_i, σ_∂i) ≈ P(σ_∂j\b | σ_j, σ_b\j).   (10)

Fig. 2 illustrates the meaning of equation (10), which we call the Conditioned Dynamic Approximation of the second order (CDA-2) in what follows. The target is the conditional probability P(σ_∂j\b | σ_i, σ_∂i), which involves the configuration of the nodes in the neighborhoods of i and j.
By approximating this probability by P(σ_∂j\b | σ_j, σ_b\j) we are neglecting the effect of the nodes which are at distance d = 3 from ∂j \ b (see Fig. 2), but allowing non-zero values of the connected correlations C_{σ_i} (see Eq. (7)). We then say that the closure is of the second order in space. It is straightforward to generalize (10) and write other levels of approximation: CDA-3, CDA-4, et cetera.
After putting Eqs. (4) and (10) together, we obtain a closed system of differential equations for P(σ_i, σ_∂i).
The equations (11) do not contain any further assumption about the actual structure of the interactions. Actually, it is not even necessary to have exactly p spins in each plaquette for these equations to be valid. The dynamical rates r_i(σ_i, σ_∂i) can take the form of any function of the spin on site i and its neighborhood. Thus, in the way we presented it, the CDA-2 is general-purpose.
If we consider again a random regular hypergraph with the initial condition P(σ_i) = 1/2, ∀i = 1, ..., N, Eq. (11) becomes the same for all values of i. Now all the sites are equivalent and it is possible to rewrite the probability P(σ_i, σ_∂i) in terms of one parameter: the number of unsatisfied interactions between the central spin σ_i and its neighbors, u = Σ_{a⊂∂i} δ(Π_{k∈a} σ_k, −1).
The equation for P(u) is derived in Appendix A and allows a considerable simplification of the numerical computations; we concentrate here on its solution. More importantly, Fig. 1b shows the behaviour of the model below T_d ≈ 0.51. In this case, the energy relaxation obtained from the CDA-2 approximation takes place through two different relaxation processes. This important feature of the glassy dynamics has been obtained thanks to the fact that the CDA-2 approximation takes into account the correlation between nearest-neighbour energy defects. Such a correlation starts playing a key role exactly at the energy value where the first plateau develops (let us call this energy value e_p). The plateau at e_p becomes much longer (mind the log scale) when the temperature is decreased.
Notwithstanding the important result about the two-step relaxation (not captured by simpler closure schemes), the CDA-2 approximation predicts too short a time scale for leaving the plateau, and an asymptotic energy value which depends on the temperature and differs from the energy reached in Monte Carlo numerical simulations. In fact, the final energies predicted by the CDA-2 correspond to the equilibrium paramagnetic phase, which is non-physical in the region T < T_d [34].
We believe these two failures can be ascribed to the following approximation underlying any mean-field closure scheme: at every time during the dynamics, the average is taken over a measure assuming that all relevant configurations are easily accessible by the dynamics itself. This is in general true at high enough energies, where correlations are weak. However, at low enough energies correlations become very strong (e.g. the correlation between energy defects below e_p) and increase the microscopic time scale needed to sample the measure. When this microscopic timescale grows, curves like e(t) should be "stretched" and, eventually, if such a time scale diverges, the relaxation would come to a halt.
It is clear that at the plateau energy e_p some relevant barrier must appear: indeed, the T = 0 dynamics is not able to relax below e_p, both in Monte Carlo simulations and in the mean-field equations. These barriers are likely to have an entropic origin, as discussed in detail in Ref. [35].
Thus we can safely assume that no barrier is present for e > e_p (at least along the typical trajectories followed by the relaxation dynamics), and the microscopic time scale can be set to the (conventional) unit value in such a regime. On the contrary, barriers are present below e_p. But these are non-extensive barriers, which remain finite in the large-N limit, as witnessed by the observation that the dynamics at any T > 0 can relax below e_p. Fig. 1b also shows that Monte Carlo simulations relax to an asymptotic energy e_d ≈ −0.96, strictly larger than −1. This is the so-called dynamical threshold energy. At this energy value, barriers become extensive, i.e. they diverge in the large-N limit, and the relaxation gets stuck. In the next Section, we compute the dynamical threshold energy, extending the classical computation of the point-to-set correlation function to the out-of-equilibrium regime.

III. NON-EQUILIBRIUM DYNAMICAL TRANSITION
Models on random graphs that manifest frustration have been shown to exhibit purely dynamical phase transitions [28,29], where ergodicity is broken but the free energy remains analytic. In such a complex situation, the work of Montanari and Semerjian [32] gives a simple method to detect the occurrence of the transition.
In short, the method reduces to looking at the relation between a variable σ_0 in the system and the variables {σ_l} at distance l from σ_0. For a given configuration of {σ_l}, it is possible to use a message-passing technique [36] to compute the expected value m^l_0 of the spin σ_0. As we explain in Appendix B, a set of self-consistent equations allows one to obtain the probability distribution of expected values, Q_l(m^l_0), as a function of the distance l. Then the magnitudes to measure are the point-to-set correlation C_l = ∫ dm Q_l(m) m and the corresponding length l(ϵ) = min{l : C_l < ϵ}.
The point-to-set correlation length l(ϵ) diverges exactly at T_d, regardless of the value chosen for ϵ. Thus, the problem of detecting the presence of a dynamic phase transition reduces to determining the divergence of a single quantity [32].
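Operationally, once the correlations C_l are known, extracting l(ϵ) is a one-liner. The sketch below also returns None when C_l never drops below ϵ within the available distances, which in this finite-distance proxy signals the non-ergodic regime:

```python
def correlation_length(C, eps):
    """Point-to-set length l(eps) = min { l : C_l < eps }.
    C is the sequence C_0, C_1, ... ; returns None if the correlation
    never decays below eps within the available distances."""
    for l, c in enumerate(C):
        if c < eps:
            return l
    return None

# Ergodic-like decay vs. a plateau that never crosses eps = 0.05
print(correlation_length([1.0, 0.6, 0.2, 0.03, 0.001], eps=0.05))  # 3
print(correlation_length([1.0, 0.7, 0.55, 0.54, 0.54], eps=0.05))  # None
```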
Here, we extend the calculation to consider local measures that are completely out of equilibrium. Let us assume that from a dynamical computation we are able to obtain the local probabilities in (12), where δ(x, y) stands for the Kronecker delta on x and y.
With the information contained in (12) we can write the set of self-consistent equations (13) and (14) for the cavity probability distributions Q_l(µ) and Q̃_l(μ) (see Appendix B), where we considered again the case P(σ_i) = 1/2 for all i = 1, ..., N and defined π(S_1, ..., S_{c−1}) ≡ P(S_1, ..., S_{c−1} | σ = 1). The Dirac delta functions in these equations enforce the relations defining µ and μ, where ⟨S⟩ = Σ_S S π(S). Now we can explore the solutions to Eqs. (13) and (14) to locate the divergence of the corresponding dynamic point-to-set correlation. For the sake of simplicity, we use the approximation π(S_i) for any local probability π. The "pair" probabilities π(S_i, S_{i+1}) can be written exactly in terms of two parameters: the energy density e and the conditional correlation between neighboring plaquettes C_{σ_i=1} ≡ C (see Eq. (7)). Fig. 3a shows the position in the plane (e, C) of the divergence of the point-to-set correlation length, i.e. of the physical time scale, obtained from Eqs. (13) and (14).
In Fig. 3b we report the same data shown in Section II, but as a parametric plot of (e(t), C(t)). In this representation the absolute time becomes irrelevant and we observe a good similarity between the Monte Carlo data (points) and the analytical results based on the CDA-2 approximation (lines). This is a strong indication that a proper redefinition of the microscopic time scale in the dynamical mean-field equations could eventually provide a solution very close to the physical one.
The inset in Fig. 3b is a zoom on the region reached at very long times. The black line with points marks the place where the point-to-set correlation length diverges. As Ref. [32] points out, this is to be associated with the divergence of the physical time scale. Indeed, the Monte Carlo simulation is not able to go to the left of this line, into the non-ergodic region (see Fig. 3a), and it seems to converge very closely to the dynamical critical line at large times. On the contrary, the CDA-2 approximation returns trajectories that enter the non-ergodic region, evidently ignoring the dynamical phase transition.

IV. AN EFFECTIVE TIME SCALE FOR THE DYNAMICAL MEAN-FIELD EQUATIONS
As shown in Fig. 1b, at low temperatures the CDA-2 approximation exhibits two failures compared to the physical dynamics obtained through Monte Carlo simulations: (i) a too-fast relaxation below the plateau energy e_p and (ii) a convergence to energy values below the dynamical threshold energy e_d (see the inset of Fig. 3b), in the non-ergodic region, which should not be accessible on finite time scales. We cannot correct (ii) by going to higher-order closures of the CDA hierarchy, because they will not capture the divergence of the point-to-set correlation, a key ingredient for the existence of e_d. On the other hand, we do not expect the CDA-3 and higher levels to improve (i) significantly enough to justify the necessary effort.
Nonetheless, in the (e, C) plane the CDA-2 approximated trajectories follow the true dynamics very closely, showing non-equilibrium correlations (i.e. C ≠ 0). So, we foresee the possibility of achieving very good agreement between the true and the CDA-2 approximated dynamics by just redefining the microscopic timescale in the dynamical mean-field equations.
The reason behind this time reparametrization comes from the following argument: within the mean-field approximation, at every time, one takes the average over a particular probability distribution that should correspond to the configurations that the system is likely to visit at that time. As long as this probability distribution is effectively ergodic, the hypothesis that the required average can be taken in a short time is reasonable. However, when barriers come into play and the probability distribution to sample is no longer effectively ergodic, this corresponds, in turn, to a longer effective time scale. Only if this longer time scale is used in the solution to the mean-field dynamical equations is there a chance of describing the actual dynamics. The same argument can be pushed to its extreme consequences when the CDA-2 approximated trajectory approaches the dynamical critical line, where the relaxation time scale diverges because ergodicity breaks down in the probability measure.
As discussed above, the microscopic time scale does not need to be renormalized for energies large enough (e > e_p), because in this region there are no barriers. Below e_p barriers arise, and we assume that the exploration of the accessible configurations is slowed down by a factor exp(∆_S).
The entropic barrier ∆_S is constant with respect to time and does not depend on the temperature.
This ansatz depends on two important energy values, e_p and e_d, that we now discuss in detail, and two parameters, ∆_S and γ, that will be adjusted to match the actual physical dynamics.
On the one hand, e_p is the energy density of the intermediate plateau, which does not depend on the temperature and thus can be computed by solving the CDA-2 equations at T = 0. On the other hand, e_d marks the frontier between ergodic and non-ergodic configurations as computed in Section III (see Fig. 3a). As already discussed, e_d marks the place where the physical time scale diverges. Below e_d the system relaxation would proceed by activation processes, which are ignored in the present approach.
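With e_p and e_d in hand, the effective time scale and the corresponding stretching of the mean-field trajectory can be sketched as follows. The piecewise form implements the ansatz of Eq. (17); the default values Delta_S = 0.15 and gamma = 2 are the c = 3 best-fit values quoted below, and the trapezoidal accumulation of the physical time is our own illustrative choice:

```python
import math

def tau(e, e_p, e_d, Delta_S, gamma):
    """Effective microscopic time scale of Eq. (17): unit time above the
    plateau energy e_p, an entropic slowdown exp(Delta_S) amplified by a
    power-law divergence as e approaches the threshold e_d from above,
    and infinite below e_d (relaxation stuck)."""
    if e > e_p:
        return 1.0
    if e <= e_d:
        return math.inf
    return math.exp(Delta_S) * ((e_p - e_d) / (e - e_d)) ** gamma

def reparametrize(times, energies, e_p, e_d, Delta_S=0.15, gamma=2.0):
    """Stretch a mean-field trajectory e(t) into physical time by
    accumulating t_phys = integral of tau(e(t)) dt (trapezoidal rule)."""
    t_phys = [0.0]
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        avg = 0.5 * (tau(energies[k], e_p, e_d, Delta_S, gamma)
                     + tau(energies[k - 1], e_p, e_d, Delta_S, gamma))
        t_phys.append(t_phys[-1] + avg * dt)
    return t_phys
```

Above the plateau the reparametrization is the identity; as the trajectory approaches e_d, the accumulated physical time diverges, reproducing the stuck relaxation seen in the simulations.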
The expression in Eq. (17) for the effective time scale depends on four parameters: e_p, e_d, ∆_S and γ. Two of them, e_p and e_d, can be computed analytically as explained above. The remaining two, ∆_S and γ, will be fixed by best fitting the data from Monte Carlo simulations. We stress, however, that their values do not depend on the temperature, and so we have to fix just two parameters to describe the relaxation in the entire low-temperature phase. Fig. 4 shows a comparison between Monte Carlo simulations and the dynamics derived from the CDA-2 equations when the proper time scale τ(e) is taken into account. We observe a very good agreement between the actual physical dynamics and the predictions from the CDA-2 approximation. The results are presented for two different connectivities (c = 3 and c = 5) in order to show their robustness. The best-fitting parameters are reported in the caption. While ∆_S changes a little bit with c, the γ parameter is very stable (γ = 2 fits the data perfectly). This γ value lies within the bounds derived in Refs. [32,37]. In particular, one could be tempted to compare it to the results of the Monte Carlo simulations reported in Ref. [37] (γ_MC ≈ 3.2). However, the key observation is that the computation of Ref. [37] is made on the equilibrium measure, while here we are studying an out-of-equilibrium process. The latter is likely to make smart choices to relax towards the lowest energy e_d, and it is not surprising that along these smart relaxation paths the divergence of the time scale approaching the ergodicity-breaking transition is less severe.

V. CONCLUSIONS
While the glassy dynamics of continuous and unbounded variables interacting through a fully connected topology was solved long ago [22-24] (although maybe not in all its aspects [26,38]), the glassy dynamics of other types of models is very hard to approximate at the analytical level. In this work, we have made an important step forward in the study of Ising models defined on sparse random graphs. By using a continuous-time description, we have been able to reproduce faithfully the relaxation of the energy in the low-temperature regime of the diluted Ising p-spin model, where such a relaxation takes place through a two-step process (see Fig. 4).
Our solution takes into account two crucial aspects of the dynamics, which were not fully considered in previous works. On the one hand, we derive the Conditioned Dynamic Approximation, which is a closure scheme for the master equation that keeps track of the correlation between nearest-neighbour energy defects. This correlation is related to the presence of local barriers that need to be crossed in order to proceed with the relaxation to lower energies. On the other hand, we realized that the mean-field equations are valid only under the assumption that all configurations with given mean-field parameters are well sampled: this can happen only on a time scale that grows approaching the ergodicity-breaking transition. Following this idea, we have computed the point-to-set correlation [32] on the out-of-equilibrium measure predicted by the mean-field approximation, and defined a proper effective time scale τ(e) for the mean-field evolution, see Eq. (17).
The ideas above provide a very satisfying result, with the solution to the CDA-2 dynamical mean-field equations matching perfectly the complex energy relaxation measured in Monte Carlo simulations. Nonetheless, there are many possible extensions of our approach which are worth pursuing in the near future. For example, observables depending on two times may show aging, even when one-time observables (like the energy) have reached their stationary value [11]. Checking the quality of the mean-field approximation presented in this work for those observables is certainly very useful. Moreover, we foresee the possibility to extend the present approach to other interaction topologies, like non-regular random graphs, or even graphs with short loops. The extension to other models (e.g. vector spin models) is very open and requires some additional analytical work. A quite different, but equally interesting, direction of development is to consider dynamics different from the Glauber one, eventually dynamics not satisfying detailed balance. This very broad class of processes is fundamental, for example, in the description of smart search algorithms to solve combinatorial optimization problems. Our methodology could be used to predict possible regions of divergence of the dynamical time scale and to estimate the relaxation near the algorithmic threshold.

In the notation of Appendix B, the point-to-set correlation reads

C^{l,π}_{PS} = ⟨σ_0 m^l_0⟩_{l,π} − ⟨σ_0⟩_{l,π} ⟨m^l_0⟩_{l,π}
             = Σ_{σ_0} P(σ_0) σ_0 ∫ dm^l_0 Q^l_{σ_0,π}(m^l_0) m^l_0 − [Σ_{σ_0} P(σ_0) σ_0] [Σ_{σ_0} P(σ_0) ∫ dm^l_0 Q^l_{σ_0,π}(m^l_0) m^l_0].   (B1)

If we draw σ_0 = ±1 from the uniform distribution P(σ_0) = 1/2, the second term in (B1) vanishes and it is not difficult to see that C^{l,π}_{PS} = ∫ dm^l_0 Q^l_{+,π}(m^l_0) m^l_0, where Q^l_{+,π}(m^l_0) ≡ Q^l_{σ_0=1,π}(m^l_0), Q^l_{−,π}(m^l_0) ≡ Q^l_{σ_0=−1,π}(m^l_0), and it is necessary to use the fact that, when P(σ_0) is uniform, the symmetry of the problem leads to the relation Q^l_{+,π}(m^l_0) ≡ Q^l_{−,π}(−m^l_0). An important point in solving the problem is to introduce two new parameters. They will be defined in broadcasting
processes that are slightly different from the original one. The first parameter is µ^l_0, the expected value of the root defined in a broadcasting process where also at the first step one generates c − 1 new factor nodes, instead of c. We denote by Q^l_{σ_0,π}(µ^l_0) the probability density of having some value of µ^l_0. The second parameter is the expected value μ^l_0, taken in a broadcasting process where at the first step one generates only one factor node; the corresponding probability density is denoted Q̃^l_{σ_0,π}(μ^l_0). For those distributions the relations Q^l_{+,π}(µ^l_0) ≡ Q^l_{−,π}(−µ^l_0) and Q̃^l_{+,π}(μ^l_0) ≡ Q̃^l_{−,π}(−μ^l_0) must also hold. One is then left with two distributions, Q^l_{+,π}(µ^l_0) and Q̃^l_{+,π}(μ^l_0), which in what follows will be denoted simply as Q^l_π(µ) and Q̃^l_π(μ). We will then only need the probability π(σ_1, ..., σ_{p−1} | σ_0 = 1), which will also be re-denoted as π(σ_1, ..., σ_{p−1}).
The advantage of introducing µ and μ is that one can compute the corresponding distributions using the following iterative equations:

FIG. 1: Time dependence of the energy density e(t) in the p-spin ferromagnet with p = 3 at several temperatures. Each point represents the average of 10^4 Monte Carlo simulations with system size N = 10000. Error bars are of the size of the points. The result of the numerical integration of the CDA-1 equations is represented with dashed lines, while continuous lines represent the CDA-2 approximation. Data in panels (a) and (b) are for high (T > T_d ≈ 0.51) and low (T < T_d) temperatures, respectively.

FIG. 2: Illustration of the Conditioned Dynamic Approximation of the second order (equation (10)). The circles represent variable nodes, and the squares represent factor nodes. The figures show the nodes involved in the conditional probability P(σ_∂j\b | σ_i, σ_∂i). The variables above the dashed lines are inside the argument of the conditional probability (∂j \ b), while the variables below the dashed lines are in the conditions (σ_i, σ_∂i). The approximation consists in neglecting the effect of the nodes that are not colored in the rightmost part of the figure. As these nodes are at distance d = 2 from the node j, we say that this is an approximation of the second order in space.

Fig. 1a illustrates the behaviour of e(t) for c = 3, showing results for three specific temperatures. Together with Monte Carlo simulations (points) and the previous approximation (CDA-1, dashed lines), we represent the results of the CDA-2 with continuous lines. When the temperature decreases, the theoretical technique becomes slightly less accurate in describing the transient regime, but it keeps predicting the steady state of the system very accurately. In all cases, the transient regime obtained from the CDA-2 is closer to the simulations than the results of the CDA-1.

FIG. 3: (a) Phase diagram in the (e, C) plane for p = 3 and c = 4. The continuous line with points marks the dynamical critical line where the relaxation time scale diverges. The dashed lines are physical bounds on e and C, derived using the conditions 0 ≤ π(S_1, S_2) ≤ 1. (b) Parametric plots (e(t), C(t)) for the p-spin ferromagnet with p = 3 and c = 3 at low temperatures. Each point represents the average over 10^4 Monte Carlo simulations with system size N = 10000. Error bars are of the size of the points. The result of the numerical integration of the CDA-2 equations is shown with continuous lines.

Finally, approaching the dynamical phase transition, the microscopic time scale must diverge, and we assume a simple power-law divergence (e − e_d)^{−γ}. So, the simplest ansatz for the effective time scale is the following:

τ(e) = 1                                     for e > e_p,
τ(e) = exp(∆_S) [(e_p − e_d)/(e − e_d)]^γ    for e_d < e < e_p,   (17)
τ(e) = ∞                                     for e < e_d.

FIG. 4: Time dependence of the energy density e(t) in the p-spin ferromagnet with p = 3 at low temperatures. Each point represents the average over 10^4 Monte Carlo simulations with system size N = 10000. Error bars are of the size of the points. The integration of the CDA-2 equations is performed using the effective time scale defined in Eq. (17) and reported with the continuous lines. In (a) the graph mean degree is c = 3 and the best-fitting parameters are ∆_S = 0.15 and γ = 2, while in (b) we have c = 5, ∆_S = 0.17 and γ = 2.

FIG. 5: Cavity point-to-set correlations in the p-spin ferromagnet defined over random regular hypergraphs with c = 3 and p = 3. (a) Dependence of the cavity point-to-set correlation on the distance, for several temperatures. (b) Dependence on the temperature of the cavity point-to-set correlation length, computed with small parameter ϵ = 0.05. A fit to the law l* = A/(T − T_d)^{1/2} (shown with dashed lines) gives T_d ≈ 0.51.