Nonperturbative renormalization for the neural network-QFT correspondence

In a recent work, arXiv:2008.08601, Halverson, Maiti and Stoner proposed a description of neural networks in terms of a Wilsonian effective field theory. The infinite-width limit is mapped to a free field theory, while finite-$N$ corrections are taken into account by interactions (non-Gaussian terms in the action). In this paper, we study two related aspects of this correspondence. First, we comment on the concepts of locality and power-counting in this context. Indeed, these usual space-time notions may not hold for neural networks (since inputs can be arbitrary); however, the renormalization group provides natural notions of locality and scaling. Moreover, we comment on several subtleties, for example, that data components may not have a permutation symmetry: in that case, we argue that random tensor field theories could provide a natural generalization. Second, we improve the perturbative Wilsonian renormalization of arXiv:2008.08601 by providing an analysis in terms of the nonperturbative renormalization group using the Wetterich-Morris equation. An important difference with the usual nonperturbative RG analysis is that only the effective (IR) 2-point function is known, which requires setting up the problem with care. Our aim is to provide a useful formalism to investigate neural network behavior beyond the large-width limit (i.e.~far from the Gaussian limit) in a nonperturbative fashion. A major result of our analysis is that changing the standard deviation of the neural network weight distribution can be interpreted as a renormalization flow in the space of networks. We focus on translation-invariant kernels and provide preliminary numerical results.


Introduction and outline
Deep learning and neural networks (NNs) [1,2] have experienced rapid development in the last decade, with an ever-increasing number of remarkable applications. In many cases, these systems outperform humans and ordinary algorithms. However, many challenges remain: in particular, most NNs work as black boxes and require a huge number of examples during the learning phase. More generally, there is no complete theoretical understanding of why deep learning works so well and how to improve it further. For example, it is not clear how training can be made more efficient and fast, how knowledge can be transferred to other tasks, or how to choose hyperparameters systematically. This lack of reliability poses, in certain cases, important ethical problems. Indeed, with the growing use of AI for making decisions (for example, in banking, employment, medicine, the military, etc.), it is crucial to be able to explain the choices of the AI in a transparent way [3]. Moreover, having a black box is also a drawback for scientific discovery, since the goal of science is to interpret and explain, and knowledge can grow only from understanding [4]. Our paper is part of the lively field of explainable AI [5,6], where physics does have a role to play [7]. A natural path for studying NNs is provided by theoretical physics: it offers an array of tools useful for describing a wide range of complex systems [8]. In recent years, evidence has accumulated [9][10][11][12][13][14][15][16][17][18][19][20][21][22] in favor of a scenario involving a particular form of 'coarse-graining', making contact with a familiar tool for physicists: the Wilsonian renormalization group (RG).
A macroscopic ideal gas is completely described by the ideal gas law, and a macroscopic fluid is well described by the Navier-Stokes equation. Both equations ignore the microscopic atomic and molecular interactions and provide a coarse-grained description. This idea was fully developed by Wilson's RG, which formalizes the general observation that the long-range behavior of physical systems does not require an understanding of the nature and interactions of their microscopic building blocks. Through an impressive line of argument, Wilson showed that it is possible to explain the apparent universality of physical systems near critical points from the observation that, up to the accuracy of physical predictions, the specific microscopic details can be absorbed into a few effective couplings, defining an effective large-scale theory. Although the RG was born in the era of critical phenomena, it turned out to be a very general framework, largely responsible for the success of field theory descriptions of long-distance physics, both in condensed matter physics and in high energy physics [23][24][25][26].
The ability of the RG to explain long-range universality can be traced back to information geometry [10,11,[27][28][29][30][31]. The RG coarse-graining is performed on the eigenvalues of the (free) Fisher information metric, which is a local version of the Kullback-Leibler (KL) divergence $D_{\rm KL}(p\|q)$ (or relative entropy) and which provides a reasonable measure of distinguishability between two probability distributions p and q [30]. Under coarse-graining, and in the absence of singular structures, the KL divergence decreases, and with it the distinguishability between the distributions, until it becomes smaller than any experimental precision. Beyond this point, we cannot distinguish the two distributions, however different they may have been originally. The ability of the RG to extract the relevant features from a large set of interacting microscopic degrees of freedom is a compelling argument for a link with deep learning. In fact, it is natural to expect a relation with any procedure able to extract relevant features from a massive data set, as is the case, for instance, in principal component analysis (PCA), where some recent works stressed such a connection between signal detection and RG [9][10][11][12][13][14][15].
In this paper, we aim at developing further the correspondence between quantum field theory (QFT) and NNs, called the NN-QFT correspondence [32,33]. The main objective is to provide a description of NN behavior using the non-perturbative RG and the corresponding effective field theory 5 . This positions our paper in a growing tradition of papers describing how the behavior of NNs can be understood through a more or less sophisticated coarse-graining, which can itself be related to an RG. Strong evidence in favor of a correspondence between RG and deep learning has been put forward for restricted Boltzmann machines, whose architecture exhibits similarities with the Ising model (a theoretical model for ferromagnets) [16][17][18][19][20][21][22]. Historically, the Ising model was precisely the conceptual cradle of the RG through Kadanoff's 'block-spin' method, which can be viewed as an elementary version of the general Wilsonian coarse-graining [34]. The use of a field-theoretical formalism is not a novelty either [32,[35][36][37][38][39][40]. In fact, this is expected, since physics has shown that field theories are a generic feature of systems involving emergent collective dynamics. For example, they have provided a good understanding of the qualitative behavior of NNs through the spin-glass formalism [41,42].
We follow the correspondence between NNs and QFT pioneered by Halverson, Maiti and Stoner [32]. Its originality with respect to other approaches lies in the observation that, under very general conditions, NNs with infinitely wide layers are described by a Gaussian process (GP) due to the central limit theorem [43][44][45][46][47][48][49][50][51]. Realistic architectures never involve an infinite width N, and their behavior fails to be exactly described by a GP. However, this useful but purely theoretical limit allows approaching the non-GP case as a perturbation of the large-N limit, with 1/N corrections which, for N large enough, can be computed perturbatively. In [32,33], the correspondence has been developed in the case of a fully connected network with a single hidden layer of width N. The authors developed the field-theoretical machinery necessary to describe NNs (see appendix A for a summary). This includes computing correlation functions of outputs, obtained in QFT by constructing Green functions from Feynman rules in perturbation theory. Effective interactions (also called couplings) can then be extracted by comparing the NN correlation functions and the QFT Green functions. Finally, they introduced an RG flow from a cut-off on the volume of the input data, based on the assumption that the effective field theory must be insensitive to the choice of this volume, up to a global rescaling of the couplings entering its definition. In the QFT language, this corresponds to an infrared (IR, large volume) cut-off: a major difference in our paper is that we will use a UV (data resolution) cut-off (see section 2.1.3).
The relation between different effective models can be translated locally through a set of β-functions which describe the evolution of the couplings when the cut-off changes, with universal features of the theory emerging from the flow. In this paper, we aim at proposing a non-perturbative formalism, based on the Wetterich-Morris equation [52][53][54][55][56], to investigate the NN-QFT correspondence beyond the perturbative regime, i.e. beyond the large-N regime. This means that our analysis does not require the coupling constants to be small, nor that our equations be organized as a 1/N expansion. As mentioned above, our framework differs from the one used in [32] in that we introduce a true partial integration over the degrees of freedom, without any assumption on the expected large-volume behavior of the corresponding effective field theory. Among the major differences with respect to ordinary QFT, the full (effective or IR) 2-point function is known theoretically, including non-Gaussian effects, whereas the free propagator (microscopic or UV) is not known. This unconventional setting allows going beyond standard limitations of the non-perturbative framework, in particular closing the infinite hierarchical system of equations describing the RG flow while keeping the full momentum dependence of the correlation functions, following the Blaizot-Mendez-Wschebor (BMW) method [57][58][59][60].
Another unconventional aspect concerns the notions of power-counting and locality, which are traditionally inherited from the background space-time (which we will call 'data-space' in the case of NNs). In the case of a QFT for NNs, such a relation appears as an additional hypothesis with no a priori experimental motivation. Other properties, such as rotation and permutation invariance of the data components, may not make sense for NN data. However, recent works in the context of background-independent quantum gravity [61,62] have shown that the notions of scales and power-counting are more primitive than that of space-time, and that locality can be derived from power-counting itself, ensuring moreover that the RG exists and is well-defined. In this article, we will discuss how these ideas can be relevant for NNs.
An RG can then be constructed by following the standard method, partially integrating over the degrees of freedom, starting with those associated with the highest scales (UV). However, the situation for the NN-QFT correspondence is quite different from usual studies of the non-perturbative RG: indeed, we are able to solve exactly for the 4-point vertex function while keeping the full momentum dependence, without approximation on the 2-point function (since it is already known exactly). We can also solve almost exactly for the other momentum-dependent n-point vertex functions (when two momenta are equal and the others vanish, we can again find an exact solution without approximation). In this paper, we consider two versions of the RG. In the first approach, called passive, the notion of scale is fixed by the resolution chosen to describe the data. In that approach, the standard deviation of the hidden weights, $\sigma_W$, is viewed as a reference mass scale. The resulting evolution equation provides an explicit realization of equivalence classes of networks having the same output (up to the machine precision) as the data is coarse-grained. In the second approach, called active, the RG flow is constructed by viewing $\sigma_W$ as a running scale. In this way, the equivalence class comprises networks having the same output at fixed data resolution. This implies that, for fixed N, NNs with different $\sigma_W$ can be viewed as belonging to the same RG trajectory. In particular, the renormalization flow can then be used to make predictions for any $\sigma_W$ given the results for one of them. We illustrate this by describing the behavior of the quartic coupling constant of the effective field theory and check the flow equations numerically. In this paper, we focus on the analytic results and plan to extend the numerical aspects in future works.

Outline
In section 2, we discuss some general concepts about the field theories which may be used to describe NNs. In particular, we comment on the definition of the data-space, the IR and UV regimes, (non-)locality and its consequences for scaling and power-counting. At the end, we describe the passive and active points of view for the RG. In sections 3 and 4, we derive the passive and active RG flow equations, respectively. In appendix A, we review the numerical simulations from [32] and provide some additional details. Finally, appendix B contains the details of the technical computations.

NN-QFT, locality, scaling and RG
In this section, we present the framework of the NN-QFT correspondence proposed in [32,33]. As explained in the introduction, we focus on the Gaussian network 6 (or Gauss-net), which has a translation-invariant kernel. We first recall the main ideas of the correspondence (some numerical results from [32] are reproduced in appendix A).
Then, we discuss the role played by non-local interactions 7 . In particular, we describe the different ways to relax locality and how this naturally leads to breaking the rotation invariance of the data. The most general QFTs in the latter case are called random tensor field theories (or group field theories), which are generalizations of random matrix field theories.
We also revise the concept of power-counting, preferring a notion intrinsic to the RG over the one used in [32], which is inherited from a background 'data-space'. In the latter case, a classical scale dimension is attributed to the data and dimensional analysis is performed by requiring that the action be dimensionless (such that its exponential can serve as a weight in the path integral). However, it is not clear how to extend this notion in the presence of non-local interactions. We introduce two notions of scales which emerge from the analysis: the first is attached to the data and called the 'working precision', and the second is attached to the network and called the 'observation scale'. We consider two versions of the RG, flowing in these two scale parameters. We conclude the section with a short presentation of the Wetterich-Morris formalism [52,53,63] for the non-perturbative RG and a discussion of the RG version considered in the reference paper [32]. Note that we voluntarily use the same notations and conventions to ease the comparison with their results.

Correspondence between neural networks and quantum field theory (NN-QFT)
In [32], the authors proposed a general QFT framework to describe the statistical behavior of NNs, working in the function-space rather than parameter-space (which can be viewed as a duality [33]). The original motivation stems from the observation that NNs in the infinite-width limit are described by a random GP [43]: the latter can also be described by a free (or Gaussian) QFT 8 . When the width is finite, the random process is not Gaussian and one can expect the NN to be mapped to an interacting field theory, which has been checked in [32].

Neural network and experimental Green functions
We consider a fully connected NN $f_{\theta,N}: \mathbb{R}^{d_{\rm in}} \to \mathbb{R}^{d_{\rm out}}$ with learnable parameters (weights and biases) $\theta = (W_0, b_0, W_1, b_1)$, a single hidden layer of width N, and an activation function σ:
$$f_{\theta,N}(x) = W_1\, \sigma\big(W_0\, x + b_0\big) + b_1 , \qquad (1)$$
where the weights $W_i$ and biases $b_i$ characterize the affine transformation of each layer and σ acts element-wise. The weights $W_0$ and $W_1$ follow centered Gaussian distributions $\mathcal{N}(0, \sigma_W^2/d_{\rm in})$ and $\mathcal{N}(0, \sigma_W^2/N)$ respectively, and both biases $b_0$ and $b_1$ are drawn from the centered Gaussian distribution $\mathcal{N}(0, \sigma_b^2)$. The input data x is a $d_{\rm in}$-dimensional vector, while we take $d_{\rm out} = 1$ for the output for simplicity. As a consequence, $W_0$ is a $(d_{\rm in}, N)$-matrix, $W_1$ an $(N, 1)$-matrix, $b_0$ an N-vector, and $b_1$ a scalar. The Gauss-net activation is slightly peculiar in that it acts as an exponential of the layer output, normalized by the data of the previous layer:
$$x_1 = \frac{\exp\big(W_0\, x + b_0\big)}{\sqrt{\exp\left[2\left(\sigma_b^2 + \frac{\sigma_W^2\, x \cdot x}{d_{\rm in}}\right)\right]}} . \qquad (2)$$
6 The term 'Gaussian' refers to the fact that the kernel is a Gaussian kernel, not that we have a GP in the infinite-width limit.
7 They were not considered in the original version of [32], but additional discussion has been added in a subsequent version during the preparation of this manuscript.
8 We refer to [32] for a gentle introduction to QFT with NNs in mind.
Finally, we stress that the NN is randomly initialized and that we will not consider the effect of training. Information about the NN can be extracted by considering correlations of the outputs: they are encoded in the 'experimental' correlation (or Green) functions $G^{(n)}_{\rm exp}$ [32]:
$$G^{(n)}_{\rm exp}(x_1, \ldots, x_n) = \mathbb{E}\big[f(x_1) \cdots f(x_n)\big] ,$$
where the statistical average 9 is taken over a large number of NNs with identical N and parameter distributions. The numerical evaluation of these quantities is explained in appendix A.
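To make the ensemble average concrete, here is a minimal Monte Carlo sketch of $G^{(n)}_{\rm exp}$. It is an illustration under simplifying assumptions: it uses `numpy.tanh` as a stand-in activation (not the Gauss-net activation), and all helper names (`sample_network`, `forward`, `experimental_green`) are ours, not from [32].

```python
import numpy as np

def sample_network(d_in, N, sigma_W, sigma_b, rng):
    """Draw one parameter set theta = (W0, b0, W1, b1) at initialization."""
    W0 = rng.normal(0.0, sigma_W / np.sqrt(d_in), size=(N, d_in))
    b0 = rng.normal(0.0, sigma_b, size=N)
    W1 = rng.normal(0.0, sigma_W / np.sqrt(N), size=(1, N))
    b1 = rng.normal(0.0, sigma_b, size=1)
    return W0, b0, W1, b1

def forward(x, params, activation=np.tanh):
    """Single-hidden-layer network; x has shape (n_points, d_in), output (n_points,)."""
    W0, b0, W1, b1 = params
    return (activation(x @ W0.T + b0) @ W1.T + b1).ravel()

def experimental_green(xs, n_nets, d_in, N, sigma_W, sigma_b, seed=0):
    """Monte Carlo estimate of G^(n)_exp(x_1,...,x_n) = E[f(x_1)...f(x_n)],
    averaging over n_nets independently initialized networks."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_nets):
        f = forward(xs, sample_network(d_in, N, sigma_W, sigma_b, rng))
        acc += np.prod(f)
    return acc / n_nets
```

For instance, `experimental_green(np.array([[0.5, 0.5], [0.5, 0.5]]), 1000, 2, 20, 1.0, 1.0)` estimates the 2-point function at coincident points.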

Large N: free field theory
NNs $f_{\theta,N}: \mathbb{R}^{d_{\rm in}} \to \mathbb{R}^{d_{\rm out}}$ with N → ∞ are well described statistically by a Gaussian distribution:
$$P[f] = \frac{1}{Z}\, e^{-S_{\rm kin}[f]} , \qquad Z = \int [df]\, e^{-S_{\rm kin}[f]} , \qquad (4)$$
where the factor Z ensures that the expression is normalized when integrating over the full functional space, $[df]$ denotes the path-integral measure in functional space, and Ξ(x, y) is the kinetic operator (Gaussian kernel) entering $S_{\rm kin}$. In general, we will omit the subscripts (θ, N) on NN function samples and write simply f. The origin of this Gaussian behavior in the limit N → ∞ can be traced to the central limit theorem, since $f_\theta(x)$ is formally a sum of N identically distributed random terms which self-average. The function f(x) splits into two contributions:
$$f(x) = f_b(x) + f_W(x) , \qquad (6)$$
where $f_b(x) \equiv b_1$ is essentially an N-independent variable following the Gaussian law $\mathcal{N}(0, \sigma_b^2)$, whereas $f_W(x)$ tends toward a Gaussian distribution only for large N. Formally, it reads:
$$f_W(x) = \sum_{i=1}^{N} (W_1)_i\, (x_1)_i ,$$
where $x_1$ is given by (2) (such that $f_W$ depends on $W_0$, $W_1$ and $b_0$). As stated above, for large N, one expects that such a quantity self-averages around its mean, and thus that fluctuations are small; the mean vanishes because the initial distributions for θ are centered and uncorrelated. Hence, the statistical properties of $f_W$ are essentially given by a centered Gaussian distribution, up to 1/N corrections. Obviously, the random nature of $f_W$ is inherited from the initial parameter distribution; the asymptotic Gaussian behavior, however, arises from the law of large numbers. The 2-point correlation (or Green) function:
$$K(x, y) := \mathbb{E}\big[f(x)\, f(y)\big]$$
is the inverse of the Gaussian kernel Ξ(x, y) which appears in the free action (4):
$$\int dz\; \Xi(x, z)\, K(z, y) = \delta(x - y) .$$
However, according to (6), it is also possible to decompose K as:
$$K(x, y) = \sigma_b^2 + K_W(x, y) ,$$
where $K_W$ is the 2-point function associated with $f_W$. It corresponds to the Fisher information metric [27,28] in the information-geometry language, and is fixed by the choice of the activation function.
In this paper, we essentially focus on translation-invariant kernels $K_W(x, y) \equiv K_W(|x - y|)$, which is achieved by the Gauss-net architecture (2), corresponding to the kernel:
$$K_W(x, y) = \sigma_W^2\, \exp\left(-\frac{\sigma_W^2\, |x - y|^2}{2\, d_{\rm in}}\right) , \qquad (12)$$
where $|x - y| := \sqrt{\sum_i (x_i - y_i)^2}$ denotes the ordinary Euclidean distance between x and y. In field-theory language, the kernel enters the definition of the classical kinetic action 10 (i.e. the log-likelihood in probability theory):
$$S_{\rm kin}[f] = \frac{1}{2} \int dx\, dy\; f(x)\, \Xi(x, y)\, f(y) , \qquad (13)$$
the corresponding probability distribution being given by the exponential law (4). The n-point correlation (or Green) functions are defined as:
$$G^{(n)}(x_1, \ldots, x_n) = \frac{1}{Z} \int [df]\; f(x_1) \cdots f(x_n)\; e^{-S_{\rm kin}[f]} . \qquad (14)$$
In the free theory, $G^{(n)}_0$ is completely determined in terms of $G^{(2)}_0(x, y) = K(x, y)$ through Wick's theorem, and vanishes for odd n [32]. Hence, this implies that:
$$G^{(2n)}_0(x_1, \ldots, x_{2n}) = \sum_{\text{pairings}\ \pi}\; \prod_{(i,j) \in \pi} K(x_i, x_j) .$$
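Wick's theorem for the free Green functions can be sketched in a few lines: the n-point function is a sum over all perfect pairings of products of 2-point kernels. This is a minimal illustration; the kernel passed to `free_green` is an arbitrary callable `K(x, y)`, and the function names are ours.

```python
import numpy as np

def wick_pairings(indices):
    """Generate all perfect pairings of an even-length list of indices."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        for sub in wick_pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + sub

def free_green(K, xs):
    """Free n-point Green function via Wick's theorem; vanishes for odd n."""
    n = len(xs)
    if n % 2 == 1:
        return 0.0
    return sum(np.prod([K(xs[i], xs[j]) for i, j in pairing])
               for pairing in wick_pairings(list(range(n))))
```

With a constant kernel K ≡ 1, `free_green` simply counts the pairings: 3 for n = 4 and 15 for n = 6, i.e. the (2n − 1)!! Wick contractions.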

Data-space and momentum space
In this subsection, we discuss some definitions related to the data-space 11 corresponding to the NN input x, and how they differ from [32]. Continuity and infinity exist in computer science only as idealizations. First, any real number x ∈ R is represented numerically by a decimal number, bounded in precision by the number of bits used to encode it. For instance, if the maximal number of decimals is $n_0$, two numbers x and $x + 10^{-m}$ cannot be distinguished for $m > n_0$. To be more realistic, we should view the data-space $\mathbb{R}^{d_{\rm in}}$ as a lattice of step $a_0 = 10^{-n_0}$ rather than a continuum manifold. The lattice spacing $a_0$ provides what physicists call a UV cut-off. Varying this parameter amounts to changing the data resolution, in full similarity with the space-time resolution in usual QFT. Second, computers cannot store an infinite amount of information. For this reason, they cannot handle infinite numbers (except as special data types with formal rules), and it is necessary to restrict the data to a finite interval $x \in [-L/2, L/2]$ 12 . Hence, we consider the data-space to be a $(2N_0)^{d_{\rm in}}$ square lattice with spacing $a_0$, $N_0 \in \mathbb{N}$, and total hypervolume:
$$V = L^{d_{\rm in}} = (2 N_0\, a_0)^{d_{\rm in}} .$$
It is generally more convenient to work in Fourier (or momentum) space. The allowed momenta $p = (p_1, \ldots, p_{d_{\rm in}})$ lie in the first Brillouin zone:
$$p_i \in \left[-\frac{\pi}{a_0}, \frac{\pi}{a_0}\right] ,$$
quantized in steps of $2\pi/L$. Note that we assume periodic boundary conditions. This is not a problem for L large enough; for small L 13 , we can simply repeat the data set a large number of times to obtain a large enough effective volume to make the boundary conditions irrelevant.
In the rest of this paper, we use the following definition:
Definition 1. We call $(2N_0)^{d_{\rm in}}$ the discrete volume and $a_0$ the working precision.
10 Note that in this paper we choose the subscript 'kin' for 'kinetic', more familiar to physicists, rather than 'G' for 'Gaussian' as used in [32].
11 As we will see, its properties may be sufficiently different from the usual space of positions (space-time) appearing in usual QFT to deserve another name.
12 In most of [32], the large-volume (what we call IR) cut-off in data-space L is denoted by 2Λ. However, we keep this notation for the large-volume cut-off in momentum space.
13 We will specify 'small with respect to what' in a moment.
Note that $(2N_0)^{d_{\rm in}}$ also counts the number of states in the first Brillouin zone. In this discrete setting, we can write the Fourier series of the network f(x) (for $x \in (a_0 \mathbb{Z}_{N_0})^{d_{\rm in}}$) as:
$$f(x) = \frac{1}{(2N_0)^{d_{\rm in}}} \sum_{p} \tilde f(p)\, e^{i p x} , \qquad (18)$$
the basis functions $e^{ipx}$ being normalized such that $\sum_x e^{i(p_1 - p_2) x} = (2N_0)^{d_{\rm in}}\, \delta_{p_1 p_2}$. Note that (18) holds for any discrete function on the lattice. In the continuum limit, for small $a_0$ and $N_0$ large such that L remains fixed, discrete sums can be replaced by integrals. Moreover, for a large enough volume, integrals become standard Fourier transforms. We call this limit the thermodynamic limit, following the standard terminology in physics, and we focus on this regime in our investigations. Taking the Fourier transform of the 2-point function:
$$\tilde K(p) := \int d^{d_{\rm in}}x\; K(x)\, e^{-i p x} ,$$
where $px := \sum_{i=1}^{d_{\rm in}} p_i x_i$, we get for (12) 14 :
$$\tilde K(p) = \sigma_W^2 \left(\frac{2\pi\, d_{\rm in}}{\sigma_W^2}\right)^{d_{\rm in}/2} e^{-\frac{d_{\rm in}\, p^2}{2 \sigma_W^2}} . \qquad (20)$$
Note that in this continuum approximation, a Dirac delta δ(p) has to be understood as a shorthand notation for a Kronecker delta $(2\pi)^{-d_{\rm in}} V \delta_{p 0}$. Translation invariance is crucial to obtain a kernel (20) which depends on a single momentum, and reflection invariance implies that it must be a function of $p^2$. It would be interesting to understand how to generalize our computations to kernels which are not translation-invariant [32]. For small $p^2$, we may expand $\tilde K(p)$ in powers of $p^2$. Up to $O(p^4)$ corrections, the propagator looks like the canonical propagator of a free scalar field theory:
$$\tilde K(p)^{-1} \simeq m_0^2 + Z_0\, p^2 ,$$
where:
$$m_0^2 := \tilde K(0)^{-1} , \qquad Z_0 := \frac{d \tilde K(p)^{-1}}{d p^2}\bigg|_{p^2 = 0} .$$
In the QFT terminology, $Z_0$ and $m_0^2$ are respectively the wave-function renormalization and the bare mass. One can rescale the field to set $Z_0 = 1$, in which case the mass becomes:
$$\bar m_0^2 = \frac{m_0^2}{Z_0} = \frac{2 \sigma_W^2}{d_{\rm in}} ,$$
the last equality holding for the kernel (20). We adopt the following definition: the mass $\bar m_0^2$ defines the typical mass scale, and its inverse defines the (IR) correlation length ξ, or typical observation scale:
$$\xi := \bar m_0^{-1} .$$
Note that the large-volume limit is defined with respect to this correlation length, i.e. L ≫ ξ. Besides providing an intrinsic length scale for the system, this shows that the propagator at large distances behaves like $(\bar m_0^2 + p^2)^{-1}$.
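The extraction of the wave-function renormalization $Z_0$ and bare mass $m_0^2$ from the small-$p^2$ behavior of the propagator can be illustrated numerically. This is a sketch under our own conventions: we fit $\tilde K(p)^{-1} \approx m_0^2 + Z_0\, p^2$ on a grid of small $p^2$ and read off the rescaled mass $\bar m_0^2 = m_0^2/Z_0$ and the correlation length $\xi = 1/\bar m_0$; `fit_mass_scale` is a hypothetical helper.

```python
import numpy as np

def fit_mass_scale(p2, Ktilde):
    """Fit 1/Ktilde ≈ m0^2 + Z0 * p^2 for small p^2; return (Z0, m0^2, mbar0^2, xi)."""
    Z0, m0sq = np.polyfit(p2, 1.0 / Ktilde, 1)  # slope = Z0, intercept = m0^2
    mbar0sq = m0sq / Z0                          # mass after setting Z0 = 1
    return Z0, m0sq, mbar0sq, 1.0 / np.sqrt(mbar0sq)

# Consistency check on an exact canonical propagator Ktilde(p) = 1/(m^2 + p^2):
p2 = np.linspace(0.0, 0.1, 20)
Z0, m0sq, mbar0sq, xi = fit_mass_scale(p2, 1.0 / (0.25 + p2))
```

On this toy propagator the fit recovers $Z_0 = 1$, $m_0^2 = 0.25$ and $\xi = 2$, as it should.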
Assigning the label x (position space) to the original data and p (momentum space) to the Fourier conjugate may seem arbitrary. Indeed, while the machine precision provides a natural UV cut-off and an associated identification of the data-space as position space (since UV corresponds to small distances in that space), signals in Fourier space are also represented in the computer up to the machine precision. However, in that case, using the machine precision as a UV cut-off would not match the usual intuition in QFT. Given a translation-invariant kernel, another possibility is to identify the momentum space as the space where the propagator is diagonal, such that the propagator in position space depends on the distance |x − x ′ |.

Finite-N corrections and interactions
For a GP, as we have seen earlier, the correlation functions $G^{(2n)}$ for n > 1 can be decomposed as a sum of products of 2-point functions thanks to Wick's theorem [23]. For N large but finite, the distribution is not exactly Gaussian, and the correlation functions no longer match the Gaussian predictions.
The deviations of the QFT and experimental correlation functions from the Gaussian case are denoted as:
$$\Delta G^{(n)} := G^{(n)} - G^{(n)}_0 , \qquad \Delta G^{(n)}_{\rm exp} := G^{(n)}_{\rm exp} - G^{(n)}_0 .$$
Note that the $G^{(n)}_0$ are still the large-N Green functions defined in (14). Importantly, we identify the exact 2-point function $G^{(2)}$ with the kernel K, which equals $G^{(2)}_0$. Since it contains (quantum) corrections due to the interactions, the free 2-point Green function computed from the kinetic term alone is not known (in standard QFT, the converse is true; see section 2.3.2 for a discussion). We will see in section 2.2.3 that the connected functions $\Delta G^{(2n)}_c$ behave as:
$$\Delta G^{(2n)}_c = O\!\left(N^{-(n-1)}\right) ,$$
which has also been investigated analytically and numerically in [32] (see also appendix A) 15 . This scaling is consistent with the fact that the exact 2-point function is independent of N. Qualitatively, this is reminiscent of what happens for the Ising model in large dimensions. The local magnetization self-averages because the number of nearest neighbors is large, and the statistical properties remain (quasi-)Gaussian. For space dimension d large but finite, thermodynamic quantities can be computed as power series in 1/d, whose corrections do not affect universal quantities as long as d > 4. For d < 4, however, the decoupling of physical scales breaks down and the Gaussian approximation is not suitable [24].
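The $1/N$ suppression of the connected correlators can be checked directly at initialization. The sketch below estimates the connected 4-point deviation at coincident points, $\Delta G^{(4)}_c(x,x,x,x) = \mathbb{E}[f_W^4] - 3\,\mathbb{E}[f_W^2]^2$, for two widths; it uses `tanh` as a stand-in activation and our own helper names, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def connected_4pt_at_x(x, N, sigma_W, sigma_b, n_nets, activation=np.tanh):
    """MC estimate of Delta G^(4)_c(x,x,x,x) for the hidden-layer part f_W."""
    d_in = x.size
    W0 = rng.normal(0.0, sigma_W / np.sqrt(d_in), size=(n_nets, N, d_in))
    b0 = rng.normal(0.0, sigma_b, size=(n_nets, N))
    W1 = rng.normal(0.0, sigma_W / np.sqrt(N), size=(n_nets, N))
    f = np.sum(W1 * activation(W0 @ x + b0), axis=1)  # one output per network
    return np.mean(f**4) - 3.0 * np.mean(f**2) ** 2   # fourth cumulant of f_W

x = np.array([0.7, -0.3])
d4_small = connected_4pt_at_x(x, N=4, sigma_W=1.5, sigma_b=0.5, n_nets=50000)
d4_large = connected_4pt_at_x(x, N=64, sigma_W=1.5, sigma_b=0.5, n_nets=50000)
# d4_large should be roughly 16 times smaller than d4_small (1/N scaling)
```

Up to Monte Carlo noise, the connected part shrinks with the width, consistent with the $O(1/N)$ behavior of $\Delta G^{(4)}_c$.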
The same scenario is expected to hold for NNs. For finite N, the distribution does not obey Wick's theorem, and the correlation functions receive contributions which do not reduce to products of 2-point functions; the classical action must therefore include non-Gaussian contributions, i.e. products of f of degree higher than 2. However, as long as N remains large enough, deviations from the Gaussian behavior are expected to remain small. In the classical action, these corrections materialize as products of m fields (m > 2), which we call interactions:
$$S[f] = S'_{\rm kin}[f] + S_{\rm int}[f] ,$$
where $S'_{\rm kin}[f]$ is a new free action of the form (13). We follow the orthodox assumption in field theory that S is polynomial, and we generically call couplings the coefficients of the monomials. The correlation functions are computed using (14) by replacing $S_{\rm kin}[f]$ with $S[f]$. But, since the interaction action $S_{\rm int}[f]$ is built from cubic and higher powers of f, this generally prevents computing the path integral exactly, and one has to resort to a perturbative expansion encoded in terms of Feynman graphs [32]. The form of the interactions is discussed in the next subsection. For the rest of this paper, we discard the contribution $f_b$ in (6) from our analysis and omit the subscript W.

Locality, scaling(s), and power-counting
In this subsection, we specify the class of interactions which are assumed to suitably reproduce the non-Gaussian properties of the correlations. Moreover, we discuss the scaling behaviors, which are especially relevant for the RG investigations of the next section.

The theory space
The set of allowed couplings defines the theory space. These are generally guided by physical arguments, the symmetries of the system, and fundamental assumptions about the physical laws. This is especially the case in fundamental physics, where the expected properties of the space-time background play a key role. In turn, the structure of space-time is itself a consequence of the interactions between physical matter 16 . Indeed, if we are able to say that something is 'here', it is because we can interact with this thing. A statement such as 'the field must interact locally' is physically equivalent to 'locality is defined by the interactions of the fields'. In other words, space-time in physics is more than a set of $d_{\rm in}$ coordinates $x \in \mathbb{R}^{d_{\rm in}}$. It is equipped with a group structure, the Poincaré group, which dictates how the coordinates can be transformed into one another. As the history of relativity shows [66][67][68], these properties are essentially consequences of interactions between light and matter.
In the QFT framework, the role of the background space is played by $\mathbb{R}^{d_{\rm in}}$. In [32], the authors adopt a conservative approach for most of their analysis, building the couplings as products of fields at the same point $x \in \mathbb{R}^{d_{\rm in}}$:
$$S_{\rm int}[f] = \sum_{n} \frac{g_n}{n!} \int d^{d_{\rm in}}x\; f(x)^n . \qquad (29)$$
However, this relies on various assumptions which may not be valid for a general NN-QFT. For this reason, we make them explicit and explain how to gradually lift them in order to consider the most general QFT. Deciding which assumptions to use should be dictated by numerical evidence: in particular, it was found in [32] that (29) is sufficient for the activation functions and the range of input parameters considered there (see appendix A for more details). This approach can be considered as NN phenomenology, in the sense that we are writing a model to match observations, but we can also use this model to check theoretical facts such as dualities [33,69,70]. The first assumption is locality of the interactions: the fields appearing in the monomial $f(x)^n$ can instead be taken at different points (for simplicity, we consider a single coupling in $S_{\rm int}$):
$$S_{\rm int}[f] = \frac{g}{n!} \int \prod_{i=1}^{n} d^{d_{\rm in}}x_i\; f(x_1) \cdots f(x_n) . \qquad (30)$$
This breaks locality because fields at different points in space(-time) can interact together. In fact, since g is a constant, this happens at arbitrarily large distances. Note that this preserves translation invariance. The next natural step is to replace g by a coupling function, i.e. a function of space independent of the field. Going back to (29), where all fields are at the same point, we can write a local action with a coupling function:
$$S_{\rm int}[f] = \frac{1}{n!} \int d^{d_{\rm in}}x\; g(x)\, f(x)^n .$$
It was argued in [32] from technical naturalness that g(x) must be approximately constant, since a coupling function g(x) breaks the translation invariance of the action.
However, this is correct only when assuming locality of the action: replacing g by a coupling function in (30) gives the non-local action [26]:
$$S_{\rm int}[f] = \frac{1}{n!} \int \prod_{i=1}^{n} d^{d_{\rm in}}x_i\; g(x_1, \ldots, x_n)\, f(x_1) \cdots f(x_n) .$$
Translation invariance can nevertheless be preserved if g depends only on the distances between the points:
$$g(x_1, \ldots, x_n) = g\big(|x_i - x_j|\big) .$$
Moreover, having a coupling function gives more control over the interaction region, for example by restricting the non-locality to a small region. For instance, we can choose g to be peaked around coincident points. This allows representing the non-locality by derivatives in momentum space and showing that they are subleading in the deep IR. Simple non-local models of this form have been considered in [32]. A special type of such non-local interaction is obtained by smearing the fields (only in the interactions): in (29), we can replace the field f(x) by another field $\tilde f(x)$ given by a convolution with a kernel κ(x, y) [71][72][73]:
$$\tilde f(x) = \int d^{d_{\rm in}}y\; \kappa(x, y)\, f(y) ,$$
such that the interaction becomes $\int d^{d_{\rm in}}x\, \tilde f(x)^n$. In order for this to make sense, the Fourier transform of the kernel κ(x, y) must be an entire analytic function (with rapid decay if one wants to ensure UV finiteness). This corresponds to the coupling function:
$$g(x_1, \ldots, x_n) = \int d^{d_{\rm in}}y\; \prod_{i=1}^{n} \kappa(y, x_i) .$$
Smeared fields naturally appear in string theory and are responsible for its well-behaved UV behavior [74,75]. In fact, for the Gauss-net, rescaling the field f to remove the exponential from the kinetic term (20) is equivalent to smearing the field (as pointed out earlier by comparing with the p-adic string [64,65]).
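Field smearing by a convolution kernel is easy to realize on the lattice: convolution becomes multiplication by the kernel's Fourier transform, which for a Gaussian κ is the entire, rapidly decaying function $e^{-\ell^2 p^2/2}$. The snippet below is a one-dimensional sketch with our own conventions (lattice spacing `a0`, smearing length `ell`).

```python
import numpy as np

def smear(f_vals, a0, ell):
    """Smear a field sampled on a 1d periodic lattice of spacing a0 with a
    Gaussian kernel of width ell, via multiplication by exp(-ell^2 p^2 / 2)
    in momentum space."""
    p = 2.0 * np.pi * np.fft.fftfreq(f_vals.size, d=a0)  # lattice momenta
    return np.fft.ifft(np.fft.fft(f_vals) * np.exp(-0.5 * (ell * p) ** 2)).real
```

A constant field is left untouched (only the p = 0 mode is nonzero), while the UV modes of a noisy field are strongly suppressed, reducing its variance.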
There is a final assumption in all the previous interactions we wrote: that all components of x (which is a d_in-dimensional vector) are homogeneous. First, this means that coordinates can be added to and subtracted from each other. Second, it implies that the role played by the ith coordinate can be played by the jth, or by any linear combination of the coordinates. Physically, this means that the previous interactions have an O(d_in) symmetry (the Euclidean rotation group, or the Lorentz group in Lorentzian signature), together with translations (if present)^17. However, it is not clear a priori that the data-space possesses this symmetry: it may not be possible to exchange two data components, or even to consider linear combinations if the components are not homogeneous. Even though the free theory supports such a symmetry, rotational invariance has no meaning for a NN in general. Moreover, symmetries of the free theory can be broken by interactions, which are necessary to fully characterize the system. Conditions under which input and output symmetries can be present have been analyzed in [33]. In general, one can start by assuming no symmetry in order to describe the most general model, and then adapt to what the numerical experiments indicate. Hence, we need to consider fields for which each component is independent: this amounts to interpreting f(x) as a field over d_in independent copies of R, meaning that each of the d_in components is independent and cannot be transformed into the others. Given that there are several fields, the ith component of a given point can be inserted only in the ith argument of a field; however, it is not necessary to use all components of a single point in a single field. Obviously, such an expression is non-local because the field is evaluated on components corresponding to different points.
For example, for d_in = 3, one can write the following cubic interaction: where x_i, y_i and z_i are the components of the 3-dimensional points x, y and z. Note that nothing prevents using only two points, for example setting y = z and integrating only over x and y, or, more generally, repeating the same component in any number of fields (for an early example, see [76]). Such general theories are too wild, and it is hard to make sense of them. A controllable subclass is provided by random tensor field theories [77]. In this case, the fields are tensors, each component of the position being seen as a (continuous) index, and indices can be contracted pairwise only (which is achieved by integrating over the component, since the index is continuous), such that a given component can appear at most twice. An intuitive way to represent this is to assign a color to each component; Feynman diagrams can then be written in terms of strand graphs (generalizing the ribbon graphs of matrix models). For instance, a possible quartic interaction for d_in = 3 is: We will see in the next subsection that tensor field theories are particularly interesting in the RG approach because, under some additional conditions, they possess a natural background-independent power-counting. We conclude this section by clarifying a subtlety concerning QFT in curved space. In this case, the Euclidean (or Poincaré) group is not a global symmetry (the symmetries of the action are given by the isometry group of the background space), and one may ask how this differs from tensor field theories. The point is that this group is still a local symmetry (general relativity can be seen as gauging the Poincaré group), such that the properties discussed above continue to hold. Indeed, one can always consider the tangent space associated to a point: since it is isomorphic to flat space, the coordinates are still homogeneous.
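The pairwise index contractions defining tensorial interactions can be made concrete with a small numerical sketch. Below, each continuous index (color) is discretized to a few values, and a quartic interaction in which every index appears in exactly two fields is evaluated with einsum; the contraction pattern and the size n are purely illustrative, not the specific interaction of the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                # discretization of each continuous index
phi = rng.normal(size=(n, n, n))     # stand-in for the rank-3 tensor field

# A quartic tensorial interaction: each index (color) is contracted pairwise,
# appearing in exactly two fields — here 'a' pairs fields 1 and 4, 'b','c'
# pair fields 1 and 2, 'd' pairs fields 2 and 3, 'e','f' pair fields 3 and 4.
quartic = np.einsum('abc,dbc,def,aef->', phi, phi, phi, phi)

# Explicit check of the contraction pattern with loops:
acc = 0.0
for a in range(n):
    for b in range(n):
        for c in range(n):
            for d in range(n):
                for e in range(n):
                    for f in range(n):
                        acc += phi[a, b, c] * phi[d, b, c] * phi[d, e, f] * phi[a, e, f]
print(np.isclose(quartic, acc))
```

This interaction equals Tr(M²) for the symmetric matrix M_{ad} = Σ_{bc} φ_{abc} φ_{dbc}, so it is positive, as expected of a well-defined interaction term.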

Λ-scaling and power-counting
We are aiming to construct a field theory which admits a well-defined RG flow. In standard QFT, the rigorous construction of such a flow requires essentially three basic ingredients: (1) a scale decomposition, (2) a locality principle, (3) a power-counting.
The scale decomposition is the first ingredient: it allows constructing slices, and then defining a partial integration procedure. Power-counting and locality, in turn, are essential to understand the notion of effective couplings, i.e. how Feynman graphs can be replaced by an effective vertex together with a slice-dependent coupling. As long as we are endowed with R^{d_in} as a background space, all of these notions are obvious. Scale decomposition is intuitively related to the notion of metric distance, locality and non-locality are defined with respect to the background itself, and power-counting is related to dimensionality as well. Indeed, the existence of an extrinsic length scale and the requirement that the classical action S be dimensionless, to give meaning to the exponential e^{−S}, allow fixing the dimensions (in units of the length scale) of the couplings appearing in the classical action. This is the choice made in [32]: denoting by [Q]_x the dimension of the quantity Q in units of x, they were able to fix the dimension of couplings like (29). We call such a scaling a Λ-scaling, for some reference scale Λ having (x)-dimension 1. However, from the discussion above, one may be puzzled by the assignment of a physical dimension to the variable x, and by truly viewing R^{d_in} as a background space. Rather, we adopt the minimal point of view and consider it only as a configuration space, without dimension. Sacrificing the background space then makes the issues of scale, locality and power-counting less intuitive. The discussion in section 2.1.3 shows that the theory has a canonical notion of scale, given by the Fourier modes (spectrum) of the propagator. Regarding the notions of locality and power-counting, the difficulty is quite similar to that encountered in canonical approaches to quantum gravity, where space-time and the background metric disappear [61].
In this context, a clever solution was found, which in some sense defines the power-counting from a locality principle, starting from the observation that standard locality in field theory can be algebraically translated as the ability of connected Feynman diagrams to be contracted to a point. Locality can then be defined algebraically from the requirement that, at least for some leading-order sector, such a contraction procedure exists. A recent example, arising from quantum gravity models, is provided by tensorial field theories [62,78,79]. In these theories, interactions are non-local in the usual sense (from the point of view of the configuration space) but, for some of them, the only divergences come from a sub-family of Feynman diagrams (in general the so-called melonic diagrams) which is contractible to an elementary vertex compatible with some internal symmetry defining the tensorial interactions themselves. Interactions having these properties are then said to be local. In turn, graphs admitting such a contraction property have been shown to admit a well-defined power-counting. The reason is that a well-defined power-counting requires the existence of a family of Feynman graphs having the same behavior with respect to some cut-off Λ. If, order by order in the perturbative series, quantum corrections have different scaling behaviors with respect to Λ, no power-counting exists. The existence of a contraction procedure allows defining the relative scaling with respect to Λ of the various terms entering the classical action, such that there exist non-vanishing leading sectors of the perturbative expansion sharing the same behavior with respect to Λ. Let us illustrate heuristically, on a simple example, how contractibility and power-counting allow fixing the scaling dimension of couplings. Consider the following classical action: describing the scalar field ϕ : R^d → R, where ∆ denotes the standard Laplacian.
It is moreover local in the usual sense. For g small enough, quantum corrections can be computed using standard perturbation theory. The first contribution to the effective mass, δ^(1)m², arises from the following integral in Fourier space (the symmetry factors are irrelevant for our discussion): for some cut-off Λ for large momenta. In the same way, the first correction to g, say δ^(2)g, involves the following integral: the upper index referring to the number of vertices involved in the Feynman diagram. Now, to obtain a well-defined power-counting, the correction to g has to scale with Λ in the same way as g itself. This is solved by g ∼ Λ^{4−d}, and we say that the Λ-scaling of g is [g]_Λ = 4 − d. This moreover implies δm² ∼ Λ², and thus [m²]_Λ = 2. We now have to check that this is consistent to all orders of the perturbative expansion. To this end, let us consider a Feynman graph G_V of order V, contributing to the perturbative expansion through the amplitude: Contracting along a spanning tree T_V ⊂ G_V, we reduce the original number of propagator edges L to L − V + 1, and the resulting graph looks like an effective (local) vertex with L − V + 1 loops of length one (tadpoles). Each tadpole behaves like ∫ d^d p/(p² + m²), and thus scales as Λ^{d−2}.
The degree of divergence for the contracted graph G_V\T_V is therefore: Because the contraction procedure removes V − 1 propagator edges, it increases ω(G_V) by 2(V − 1): Finally, because the interaction is quartic, we have the relation 2L = 4V − N, N being the number of external edges. We then get: Each vertex contributes a factor Λ^{d−4}, and the scaling g ∼ Λ^{4−d} ensures that all the quantum corrections have the same scaling. Moreover, setting N = 2, we get ω = 2, in agreement with the one-loop scaling dimension of the mass. Obviously, because this theory is local in the usual sense, the derived scaling dimensions are exactly the same as those obtained from the standard dimensional analysis of the classical action. The two methods do not coincide, however, for non-local interactions such as (38). We argue that this more abstract way of thinking about locality, scaling and power-counting is more appropriate in a context where the construction of the theory space is not guided by experimental evidence, so that it seems preferable to work from the outset within a framework broad enough to accommodate future developments of the formalism. However, the exploration of these aspects for NNs is beyond the scope of this paper, since standard locality seems to hold for the Gauss-net kernel [32].
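The counting above is easy to automate. The following sketch implements the resulting superficial degree of divergence ω for a ϕ⁴ graph with V vertices and N external edges in d dimensions, using 2L = 4V − N internal edges and L − V + 1 independent loops (each loop giving Λ^d, each propagator Λ^{−2}); it reproduces the statements of the text, e.g. ω = 2 for mass corrections in d = 4 independently of the order V.

```python
def omega(d, V, N):
    """Superficial degree of divergence of a phi^4 Feynman graph:
    d * (number of loops) - 2 * (number of internal edges)."""
    L = (4 * V - N) // 2     # internal (propagator) edges: 2L = 4V - N
    loops = L - V + 1        # independent loops
    return d * loops - 2 * L

# d = 4: omega = 4 - N, independent of the order V.
print([omega(4, V, 2) for V in (1, 2, 3)])   # mass corrections: always 2
print(omega(4, 2, 4))                        # coupling correction: 0 (log)
```

Closed form: ω = d − N(d − 2)/2 + V(d − 4), showing that d = 4 is the critical dimension where ω becomes independent of V.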

N-scaling
There exists another scaling dimension, called N-scaling, associated with the behavior of correlation functions with respect to the width N of the hidden layer. Gaussian universality at large N ensures that the couplings g_n behave as g_n ∼ N^{−α(n)} for some positive function α(n).
The computation can be done by returning to the definition (7) and using the fact that W_1 follows a centered Gaussian distribution with variance σ²_W/N. For instance, we find: which is of order 1. The computation of higher correlation functions can be done using a similar strategy, from the assumption that the x^(i)_1 with different indices i are statistically independent variables. This in particular ensures that: From this observation, a tedious calculation given in [32] shows that the connected 4-point function G_c(x_1, x_2, x_3, x_4) has to scale as 1/N, and more generally that α(n) = n/2 − 1. This analytic result also shows the limitations of the approach. Indeed, we expect that a more fundamental method should be able to predict the weights of the interactions. Moreover, the derivation assumes the relation (46), and thus the independence of the x^(i)_1 with different indices i, but such an assumption seems to be in conflict with an interaction such as (29), which, morally, must introduce couplings mixing different outputs given the definition (7) of f_W. One may expect that these difficulties could be solved by working with a random vector of size N, with components φ_i(x), rather than with the function f_W(x), defining the latter as an observable f_W(x) := ⟨φ_i⟩, i.e. the vacuum expectation value of the corresponding theory. However, the construction of such a theory goes beyond the scope of this paper, and we plan to investigate it in a forthcoming work.
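The 1/N scaling of the connected 4-point function can be checked numerically. The sketch below samples a toy single-hidden-layer network f(x) = Σ_i W2_i tanh(W1_i · x + b_i) at a single input and estimates the fourth cumulant κ₄ = ⟨f⁴⟩ − 3⟨f²⟩², i.e. the connected 4-point function at coincident points; the tanh activation, the input x and all variances are illustrative choices of ours, not those of [32].

```python
import numpy as np

def kappa4(N, d_in=3, samples=100_000, seed=0):
    """Estimate the fourth cumulant of the network output f at a fixed input x,
    sampling over random network weights. kappa4 vanishes at infinite width,
    where the output distribution becomes exactly Gaussian."""
    rng = np.random.default_rng(seed)
    x = np.ones(d_in)
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(samples, N, d_in))
    b = rng.normal(0.0, 1.0, size=(samples, N))
    W2 = rng.normal(0.0, 1.0 / np.sqrt(N), size=(samples, N))
    f = np.einsum('sn,sn->s', W2, np.tanh(W1 @ x + b))
    return np.mean(f ** 4) - 3.0 * np.mean(f ** 2) ** 2

k4_small, k4_large = kappa4(N=2), kappa4(N=50)
print(k4_small, k4_large)   # the non-Gaussianity shrinks roughly like 1/N
```

For this architecture one can show analytically that κ₄ = 3 Var(tanh²(z))/N with z the preactivation, consistent with α(4) = 1.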

Renormalization group 2.3.1. The Wilson approach
The RG is probably one of the most important concepts discovered in physics during the last century and forms, together with field theory, the reference framework of modern physics, from condensed matter to high energies. Pioneered in the works of Wilson and Kadanoff [34,80-82], the RG is based on the idea of organizing the theory according to length scales, integrating out short-distance degrees of freedom through a recursive procedure called coarse-graining, and providing an effective description of the long-distance degrees of freedom through an effective action in which microscopic interactions are hidden in effective interactions.
Note that RG is in fact a semi-group, which is non-invertible. Thus at each step, information is lost, and RG can be viewed as a systematic procedure to extract large scale relevant features.
To illustrate the physics underlying the Wilson procedure, and before making contact with the NN field theory, let us consider a physical system made of a single real scalar field ϕ whose configuration probability follows the exponential form p[ϕ] = e^{−S[ϕ]}, for some classical action S[ϕ]. To have a concrete example in mind, we can take for ϕ the real field described by the classical action (40). All the statistical properties of the distribution can be derived from the generating functional (partition function): This integral being over all configurations of ϕ(x), all the degrees of freedom are integrated out in one step. Equation (47) and the effective action Γ defined in (48) provide the two boundary descriptions of the theory, the classical field Ψ being defined as Ψ(x) := δW/δj(x). The RG is nothing but a path between these two boundaries. It is constructed by partially integrating out the degrees of freedom building the field ϕ. Note that such a partial integration procedure is never arbitrary: the Wilson RG assumes the existence of a canonical slicing s = {s_1, s_2, ⋯, s_∞}^18 in the configuration space of elementary degrees of freedom, allowing partial integration in a preferred order. In general, this slicing is provided by the spectral distribution µ(E), E ∈ R, of the UV 2-point function for an exponential family like (47): s_i ⊂ µ(E). In fact, the 2-point function can be identified with the Fisher information metric along the constrained space with fixed couplings. This gives a connection between RG and information geometry [31]: because of the regularity property of the Fisher metric, and in the absence of singular structures, the distance between probability distributions has to decrease under coarse-graining, which explains the power of the RG in discussing universality in physics [24]. Integrating all the degrees of freedom in the first slice s_1 leads to an effective model with classical action S′, which defines a new effective physics where effects coming from degrees of freedom in the first slice are hidden in effective interactions.
Now, integrating the slice s_2, we obtain a new classical action S′′, and so on. Such a partial integration (up to a global rescaling of the fields to reach a fixed point) is called an RG transformation, and the chain of RG transformations describes a 'move' in the interior of the theory space, bounded by the UV and IR effective physics (figure 1). Let us illustrate how this works on the concrete example of the scalar field ϕ described by the action (40). In that case, µ(E) corresponds to the spectrum of the Laplacian ∆, whose eigenmodes are Fourier modes, and E ≡ p. Assuming continuity of the spectrum, we can consider infinitesimal coarse-graining, integrating out slices of infinitesimal thickness. This leads to a differential equation describing how the couplings change as the reference scale changes. Formally, this can be done as follows. We assume the existence of an upper bound for p, say Λ, and we call µ_Λ(p) the spectrum of the free 2-point function with cut-off Λ, K_Λ(p). As the cut-off Λ moves, degrees of freedom are added to or removed from the spectrum. Thus, let us consider the bare action 'at scale Λ': where V[ϕ] includes interactions following our definition of section 2.1. Now consider the running cut-off Λ(s) = sΛ, for s ∈ [0, 1], which interpolates between the UV scale s = 1 and the IR scale s = 0. If K_Λ(s)(E) is at least C¹ in s, we can consider the variation at first order from s to s′ = s + δ:
Figure 1. The RG trajectory in theory space, from UV to IR physics.
The following decomposition can be translated as a partial integration from the original partition function using the functional identity: where S̄ and χ denote, respectively, the intermediate effective action and the degrees of freedom being integrated out. Indeed, defining: and: we show that the identity (52) can be rewritten as: The classical action at scale Λ(s′) formally looks like the action at scale Λ(s). What differs between them is the interaction, which at scale Λ(s′) comes from a partial integration over the field χ. The transformation (54) can be translated into a differential equation for δ small enough. Indeed, in this limit, the modes χ have a large mass and can be treated perturbatively. Thus, expanding V_Λ(s)[ϕ + χ] in powers of χ and keeping only terms up to order 2, we get Polchinski's equation [80]: This equation is formally 'exact'. However, it is notoriously hard to solve, for several reasons.
The first reason is that it takes place in an infinite-dimensional functional space. If we decide to work in a reduced phase space, taking into account only the most relevant interactions, difficulties appear: instabilities with respect to the chosen truncation arise as soon as we try to go beyond the perturbative sector, which is precisely what we aim at in this paper. For this reason, and as is the case for most non-perturbative investigations in the literature [52,53,56,63,83-85], we prefer to use the Wetterich formalism, which is better suited to non-perturbative approximations. We discuss this method in the next section.
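Before moving on, the partial-integration step at the heart of the Wilson procedure can be made exact for a free (Gaussian) field, which gives a concrete toy picture: integrating out the 'UV' block of a Gaussian distribution replaces the kinetic operator (precision matrix) of the remaining 'IR' modes by a Schur complement. The mode splitting and matrix below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ir, n_uv = 3, 5
n = n_ir + n_uv

# A random positive-definite "kinetic operator" coupling IR and UV modes,
# defining the Gaussian weight p[phi] ~ exp(-phi . A . phi / 2).
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)

A_ll, A_lh = A[:n_ir, :n_ir], A[:n_ir, n_ir:]
A_hl, A_hh = A[n_ir:, :n_ir], A[n_ir:, n_ir:]

# One RG "slice": integrating out the UV block yields the effective quadratic
# action for the IR modes, given by the Schur complement of the precision matrix.
A_eff = A_ll - A_lh @ np.linalg.inv(A_hh) @ A_hl

# Consistency check: the effective precision reproduces the exact IR covariance,
# i.e. the IR block of A^{-1}.
cov_ir = np.linalg.inv(A)[:n_ir, :n_ir]
print(np.allclose(np.linalg.inv(A_eff), cov_ir))
```

With interactions present, this single exact step is replaced by the recursive, slice-by-slice procedure described above, and the effective action acquires non-Gaussian terms.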

Renormalization group(s) for the NN-QFT
The analogy between NNs and the RG is evident: both aim at extracting relevant features from a massive number of degrees of freedom. The RG shows that microscopic details can be ignored when describing long-distance physics, and that distinct microscopic theories can be indistinguishable from the point of view of their common large-distance properties. Extracting regularities from large sets of data is exactly what machine learning does; and, as we recalled in the introduction, the question of the relevance of the RG for artificial intelligence is growing in the literature [16-20]. However, the effective field theory that we presented in the first part offers a new framework to discuss aspects related to the RG in the study of the behavior of NNs [32]. As the previous section stressed, the field theory that we consider exhibits strong similarities with theories usually considered by physicists: the long-distance (i.e. large volume, small momenta) limit (22) of the free propagator is the same as for the usual scalar field ϕ described by the action (40). This formal similarity will serve as a guide in the construction of the RG, and it is very tempting to carry out a coarse-graining in momenta, exactly as for the scalar field ϕ in the previous section. We will discuss two different coarse-graining strategies, which we call respectively passive and active RGs. But before going into them in detail, let us make a few general remarks about what distinguishes this NN field theory from ordinary theories.
In the standard scenario, what is known is the UV theory, i.e. the classical action. This action is itself viewed as an effective description, valid at some fundamental scale and ignoring the details of the nature and physics of the microscopic degrees of freedom underlying the physical world. The choice of the classical action is constrained by predictivity (which favors just-renormalizable theories), consistency with quantum effects (compensation of anomalies in gauge theories, for instance), and the effective structures at the scale at which the theory is defined, which generally imply some symmetries (rotation, reflection, gauge invariance, etc). In this respect, the RG aims to provide an approximation of the exact quantum theory, to be compared with experiments. The field theory that we consider differs from this general picture in the relation between UV and IR scales. The propagator (12) is exact and defined in the deep IR. From an RG point of view, the knowledge of this propagator takes into account the fluctuations at all scales. But, for finite N, the knowledge of the 2-point function is not sufficient to reproduce higher correlation functions, and non-Gaussian interactions are required in the classical action to reproduce the experimental correlation functions. Due to these interactions, the flow of the different ingredients entering the definition of the classical action becomes non-trivial, with the consequence that both S_kin and S_int in the UV are unknown. Thus, in some sense, the situation is the inverse of ordinary field theory: we have to infer the form of the UV theory (or, more likely, a class of UV theories) from the knowledge of only a part of the IR theory.
By construction, such an inference cannot lead to a single solution, but to a class of solutions that must satisfy the following requirements: (a) reproduce the exact 2-point function up to the experimental precision; (b) reproduce the deviations from Wick's theorem due to interactions, which become less and less perturbative as N becomes small, once again up to irrelevant corrections with respect to the experimental precision.
Any measurement in physics comes with a finite precision: hence, two effective descriptions are considered equivalent, and sufficient to describe a system, if their predictions agree up to the experimental precision. The precision is also finite in numerical simulations, and this explains why we are able to infer only an equivalence class of models rather than a point in theory space. In the first section, we showed that the interactions do not all have the same relative relevance, so that irrelevant interactions contribute below the machine-precision threshold, meaning that we have no way to distinguish between several initial conditions whose trajectories are sufficiently close in the IR (see figure 2). This argument allows working, in a first approximation, within a finite subspace of the full theory space, focusing on the interactions having the largest canonical dimension.

Passive RG
Because of the existence of an intrinsic length scale ξ defined in (25), we can think of partially integrating microscopic degrees of freedom with respect to this length scale to construct a proper RG flow following standard field theory. In this picture, what plays the role of a microscopic scale is the working precision (see section 2.1.3), which introduces a cut-off in momentum integration, Λ = 1/a_0. We can then construct a coarse-graining procedure from grid-size dilatation (see figure 3).
Note that such a procedure requires ξ ≫ a_0. Because the maximal value of p is p_∞ = 2π/a_0, this implies p_∞ξ ≫ 1, which invalidates the expansion (21). However, it may happen that such an expansion holds in a sufficiently large domain. A necessary condition is that the expansion (21) holds for the smallest (nonzero) momentum p_0 = 2π/(a_0 N_0), implying:
Figure 2. Behavior of the RG flow with different initial conditions. The red region corresponds to initial conditions for all microscopic actions whose RG flows are experimentally indistinguishable in the deep IR regime, and corresponds to the same effective physics described by Γ.
which is the condition defining the large-volume limit. A dilatation procedure as described in figure 3 induces an RG by partial integration of the momenta in the window ∼ ]1/a′_0, 1/a_0]. The existence of the two complementary limits ξ ≪ L and ξ ≫ a_0 is reminiscent of a crossover-scale behavior, between a deep UV limit p ∼ 1/a_0 and a deep IR limit p ∼ 1/L, which we will study separately in the next section. Note that such a crossover scale generally appears in situations involving two very different mass scales, ensuring decoupling^19 of some effects associated with the larger one when experiments focus on the smaller one [86]. Here, what plays the role of a large mass is the inverse of the typical observation scale ξ: in the very large mass limit, p_∞ ≪ ξ^{-1}, and the IR sector recovers all the physics. In the opposite limit, p_0 ≫ ξ^{-1}, everything is UV, and an expansion such as (21) does not hold. In other words, for p ≪ ξ^{-1}, one expects quantum effects to be suppressed by powers of ξ^{-1}. This observation can be a source of improvement for the approximations used to solve the RG flow equation (61) in the next section. In particular, we understand that contributions coming from higher couplings tend to stay small near the transition scale ξ^{-1}. Section 3 is devoted to this RG strategy.

Active RG
In the process described above, the observation scale ξ is kept fixed and the working precision is changed. Conversely, we can keep the data (i.e. the working precision) fixed and change the observation scale. If the first version is essentially passive with respect to the NN (i.e. the latter is not changed), this strategy is, in contrast, active (see figure 4). Indeed, recalling the expression (25), ξ is completely determined in terms of σ_W, the standard deviation of the weight distribution. Hence, flowing in the observation scale is equivalent to changing the weight standard deviation, and thus the NN itself.
Physically, if we think of a thermodynamic system like a ferromagnet, such a strategy is equivalent to turning the thermostat's knob to lower the temperature towards the critical regime. This alternative point of view is the subject of section 4. Remark 1. The active RG is closer to the RG version considered in [32] than the passive scheme, as the flow equations derived in section 4 show explicitly. However, despite this formal contact, our approach differs by its very construction. While, from their point of view, the RG is the mathematical explanation of a principle of invariance with respect to a certain volume 'cut-off', our RG is the result of a procedure of partial integration over the degrees of freedom of the field.
Indeed, the RG flow is usually performed with respect to a UV cut-off (spacetime/data-space resolution) and not an IR cut-off (volume). In [32], the large-volume cut-off was introduced because the 2-point function diverges at large distance (at least for the ReLU-net), which is reminiscent of the short-distance divergence of the canonical propagator in particle QFT. Moreover, one can ask whether the data-space should be identified with the position or the momentum space of usual spacetime QFT, and in principle this could depend on the problem. From our arguments in section 2.1.3, it seems more natural to identify the data-space with position space (except when the data is already the Fourier transform of a space(time) process). IR divergences are also present in particle QFT, and they are cured by different methods according to their origin. The first case is that of massless particles, for which a refined definition of amplitudes is needed [25,87]. An IR cut-off (such as a mass) can be introduced at intermediate stages to regulate the integrals, but it is not a renormalization parameter. In practice, the divergences of the ReLU-net have a similar origin (a singularity of the propagator at large distance/zero momentum). Second, IR divergences appear for internal on-shell propagators: they reflect the fact that quantum effects shift the vacuum and the masses of the fields. Resummation of quantum effects through renormalization leads to finite results [25,74]. Third, some quantities can diverge in the infinite-volume limit, for example when studying phase transitions: in that case, the usual method is to study the theory for different values of a volume cut-off and to extrapolate to infinite volume (thermodynamic limit) [88]. However, this is not a renormalization flow. For these reasons, we take a more conservative approach: we identify small resolution in data space with the UV limit and perform the RG flow with respect to the associated cut-off.
It is also noted in [32, section 4.3] that the Gauss-net does not require renormalization, because the 2-point function decays exponentially with distance, such that all integrals are convergent. In fact, the previous paragraph shows that renormalization is still needed in this case, because its role is not only to handle properly (spurious) UV divergences, but also to take into account quantum effects (some of which lead to IR divergences). Said another way, renormalization provides a mapping between the bare and physical parameters (at a given energy scale): there is always a renormalization flow in the space of couplings. Indeed, the bare parameters describe the properties of the fields without interactions: they are not physical, because fields do not live in isolation and any measurement implies an interaction. A famous example is string field theory [75,89,90]: a perfectly finite theory which nonetheless has an infinite number of finite counter-terms (such that predictivity is not lost) and a non-trivial RG flow (with respect to the so-called stub length).

Flowing through NN-QFT theory space: the passive RG
In this section, we show how the passive RG within the Wetterich formalism allows predicting the behavior of correlation functions for a fully connected NN with a single hidden layer. We start with a short presentation of the Wetterich formalism, before turning to applications. We will consider separately two different regimes: the deep IR regime k ≪ ξ^{-1}, where the effective propagator can be suitably approximated by an ordinary Laplacian-type propagator ∼ (−∆ + m²)^{-1}, and the UV regime k ∼ ξ^{-1}, where the propagator follows the exponential law ∼ e^{−∆/m²}/m².
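The two regimes can be visualized with a quick numerical comparison. Below we take an illustrative exponential kernel e^{−p²/m²}/m² in momentum space (a stand-in for the Gauss-net-like propagator, not its exact form) and compare it with the Laplacian-type approximation 1/(m² + p²): the two agree to O(p⁴/m⁴) in the deep IR and depart near p² ∼ m².

```python
import numpy as np

m2 = 1.0
p = np.linspace(0.0, 2.0, 201)

exact = np.exp(-p ** 2 / m2) / m2    # illustrative exponential kernel
ir = 1.0 / (m2 + p ** 2)             # Laplacian-type IR approximation

# Relative deviation: tiny in the deep IR, order one near p^2 ~ m^2.
rel = np.abs(exact - ir) / exact
print(rel[p <= 0.2].max(), rel[-1])
```

Both expressions expand as (1/m²)(1 − p²/m² + …) at small p, which is why the deep IR analysis of this section can rely on the Laplacian form.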

Wetterich formalism
In section 2.3.1, we provided a formal introduction to Wilson's ideas for the RG. In this section, we present another incarnation, the so-called Wetterich formalism [52,53,56], which focuses on the effective action for the integrated degrees of freedom rather than on the effective classical action for the remaining degrees of freedom, as is the case in (57). We focus on the passive RG as presented in section 2.3.2. Let Λ = 1/a_0 be some reference working precision and k ∈ [0, Λ]. Assuming that we have performed the partial integration up to the scale k, we denote by Γ_k the effective action for the averaged degrees of freedom. Obviously, it must satisfy the boundary conditions: (a) Γ_{k=Λ} = S: no fluctuations are integrated out, and the effective action reduces to the classical action. (b) Γ_{k=0} = Γ: all fluctuations are integrated out, and we recover the full effective action Γ defined in (48).
The Wetterich formalism aims to construct a smooth interpolation between these two limits. To this end, it is convenient to modify the classical action with a scale-dependent mass term ∆S_k, which reads in momentum space: The substitution S → S + ∆S_k defines a k-dependent partition function Z_k through the definition (47). The shape of the scale-dependent mass r_k(p²) is designed to freeze the low-momentum modes p² < k², decoupling them from the long-distance physics, whereas the high-momentum modes p² > k² remain essentially unaffected. Moreover, in order to recover the full effective action Γ for k = 0, r_k(p²) has to vanish in that limit. In the same way, it has to become very large in the opposite limit k → Λ, in order to satisfy the UV boundary condition Γ_{k→Λ} → S (all the fluctuations are frozen). The interpolating functional Γ_k is defined as: As k varies from k to k − δk, the effective couplings involved in the effective action change. To obtain the differential equation governing the behavior of Γ_k as k varies, we can differentiate the definition (60) with respect to k. After a tedious calculation, whose details can be found in [56], we get the following functional equation: where Γ^(n)_k denotes the nth functional derivative with respect to the classical field Ψ(x) := δ ln Z_k/δj(x). This equation, up to the formal character of its derivation, is as exact as equation (57). It defines a trajectory through a functional space and is as hard to solve as equation (57). Approximations are required to make the underlying physics tractable. The standard strategy, called truncation, is to identify a relevant finite-dimensional subspace of the full theory space and to project the flow equation (61) onto it. Working with equation (61) has the great advantage that this projection procedure does not require assuming that the couplings are small, and thus allows investigating approximate but non-perturbative solutions of the RG flow.
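The standard (unmodified) optimized Litim regulator makes the freezing mechanism explicit: below the scale k, every mode is lifted to the same effective mass k², while modes above k are untouched. A minimal sketch, with illustrative values of k² and m²:

```python
import numpy as np

def r_litim(p2, k2):
    """Optimized Litim regulator r_k(p^2) = (k^2 - p^2) * theta(k^2 - p^2):
    nonzero only for modes below the running scale k."""
    return np.where(p2 < k2, k2 - p2, 0.0)

p2 = np.linspace(0.0, 4.0, 401)
k2, m2 = 1.0, 0.25

# Regulated inverse propagator: flat (= k^2 + m^2) below the scale,
# the ordinary p^2 + m^2 above it.
inv_prop = p2 + m2 + r_litim(p2, k2)
print(inv_prop[0], inv_prop[-1])
```

The flat plateau below k² is what makes the momentum integrals in the flow equation analytically tractable, which is the practical reason for the popularity of this regulator.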

Local potential approximation in the deep IR
The local potential approximation (LPA) is one of the most popular approximation schemes [56] for solving the exact RG flow equation (61). This approximation focuses on the region of the full theory space spanned by local interactions in the sense of (29). For the investigations in this section, we assume p² ≪ 2σ_W²/d_in, which is our reference mass scale. This implies: which defines the IR regime (see section 2.3.2). Note that, due to the scaling behavior of derivative contributions, one expects the validity of this description to survive in the weak UV regime 1/a_0 ≫ k ≫ ξ^{−1}, due to the 'large river' effect, which states that, in a suitable vicinity of the Gaussian fixed point and in the absence of singularities along the flow, the latter projects itself onto the subspace spanned by the most relevant couplings [91].

Symmetric phase
To begin, we focus on the simplest truncation, around sixtic interactions, discarding from our analysis the contributions arising from higher couplings. This is equivalent to setting: where, to avoid confusion with the example given in section 2.3, we denote by Ψ the classical field. Such an expansion around Ψ = 0 is called a symmetric phase expansion, and we call symmetric phase the domain of the full phase space where it remains valid. This expansion may break down when Ψ = 0 becomes an unstable vacuum, which is the case when phase transitions are encountered. In this section, we focus on the symmetric phase and discuss more elaborate formalisms in the next section. The approximation (63) ensures that we keep effects up to order 1/N².
To be more concrete, we assume that Γ_k[Ψ] can be decomposed as a sum of two contributions: where: Without loss of generality, the kinetic contribution can be written as: The kernel K_k(p²) is a priori difficult to track. Fortunately, because we aim to deal with IR effects, the momentum p is expected to be small, justifying an expansion of K_k(p²) in powers of p²: The first term of this expansion defines the running mass, which we denote m²(k). In the same way, the second term of the expansion is called the running wave function renormalization, denoted Z(k). Nevertheless, it is easy to check that in the symmetric phase Z(k) does not depend on the running scale k (see below), so we must have Z(k) = Z_0 = 1. This scheme defines the derivative expansion [54,63,84,92], and in this section we focus on the first two terms: To keep only effects up to order 1/N², we consider the following truncation for the effective potential: where: This form follows from the expression of a local interaction of order n in position space: the Fourier transformation to momentum space introduces one momentum p_j for each field, together with a delta function enforcing momentum conservation, and a sum (since the p_j take discrete values) over each value of the momentum. The final piece is the regulator r_k. Given the choice of the kinetic truncation (67), it is suitable to use a modified version of the standard optimized Litim regulator [93]: where, on the RHS, the functions are computed at Ψ = 0. From the truncation (67), we must have:
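For reference, the standard (unmodified) optimized Litim regulator [93], on which the truncated choice above is based, reads:

```latex
r_k(p^2) = \left(k^2 - p^2\right)\,\theta\!\left(k^2 - p^2\right),
```

where θ is the Heaviside step function. Below the scale k, the regulator replaces p² by k², giving all slow modes a uniform mass gap, while modes with p² > k² propagate freely; the modified version used here dresses this expression with truncation-dependent factors.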
The fourth derivative Γ^{(4)}_k(p_1, p_2, p_3, p_4) can easily be computed from the truncation (68), leading to: replacing Ψ = 0 at the end of the computation. Thus, setting p_1 = 0 on both sides of equation (71), we get after some calculations^{20}: where Remark 2. In equation (71), the only dependence on the external momenta p_1 and p_2 on the left-hand side is through the conservation delta δ_{p_1,−p_2} arising from the structure of the four-point vertex Γ^{(4)}_k. Thus, the flow of the field strength Z(k), which could be deduced by taking derivatives of both sides of equation (71) with respect to p_1², vanishes identically. In the same way, taking the fourth and sixth derivatives of the flow equation (61) with respect to M, and using the condition (73), we get schematically: and: A tedious calculation leads to: and
k du_6/dk = 30 u_4 u_6 / (k² + m²(k))² Vol(k) − 90 u_4³ / (k² + m²(k))³ Vol(k).
These equations illustrate how the scaling can be fixed without assuming any background dimension, as discussed in section 2.2.2. Indeed, a moment of reflection shows that the argument below equation (42) about the existence of a non-trivial expansion is equivalent to the statement that there must exist a global rescaling of all couplings such that the flow equations become an autonomous system. For k large enough, the sum in Vol(k) can be well approximated by an integral^{21}: One expects such an approximation to remain valid for k² ≫ 4π²/L², with L large (see figure 5). Thus, defining the dimensionless couplings ū_{2n}, we get the autonomous system (β_{2n} := k dū_{2n}/dk):
^{20} Details are given in appendix B.
From these equations, it is obvious that the behavior of the flow depends on the dimension d_in. For instance, for d_in > 4, all the couplings are irrelevant and trajectories return toward the Gaussian region, the ū_2 axis being the only direction of instability. In contrast, for d_in < 4, some couplings become relevant, and trajectories are repelled from the Gaussian region: ū_4 is the first to become relevant, for 3 < d_in < 4; for d_in < 3, ū_6 becomes relevant as well. Figure 6 illustrates the behavior of the RG flow for several dimensions. We have integrated numerically the flow equations for u_2 = 1, u_4 = −0.5, u_6 = 0.01 in figure 7.
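A numerical integration of an autonomous system of this type can be sketched as follows. This is a minimal illustration, not the paper's equations: the linear terms implement the canonical scaling dimensions quoted above ((d_in − 4) for ū_4, (2d_in − 6) for ū_6), while the nonlinear terms only mimic the one-loop structure of the flow equations (the exact coefficients are those derived in the text); the propagator factor 1/(1 + ū_2) and the loop coefficients are illustrative assumptions.

```python
import numpy as np

def beta(u, d_in):
    """Schematic dimensionless beta functions beta_{2n} = k d(u_{2n})/dk.

    Linear parts: canonical scaling dimensions quoted in the text.
    Loop parts: illustrative stand-ins for the paper's flow equations.
    """
    u2, u4, u6 = u
    den = 1.0 + u2  # dimensionless propagator factor (illustrative)
    b2 = -2.0 * u2 - u4 / den**2
    b4 = (d_in - 4.0) * u4 - u6 / den**2 + 6.0 * u4**2 / den**3
    b6 = (2.0 * d_in - 6.0) * u6 + 30.0 * u4 * u6 / den**2 - 90.0 * u4**3 / den**3
    return np.array([b2, b4, b6])

def integrate_flow(u0, d_in, t_final=-5.0, n_steps=2000):
    """RK4 integration in RG 'time' t = ln(k/Lambda), flowing toward the IR (t < 0)."""
    u = np.array(u0, dtype=float)
    h = t_final / n_steps
    for _ in range(n_steps):
        k1 = beta(u, d_in)
        k2 = beta(u + 0.5 * h * k1, d_in)
        k3 = beta(u + 0.5 * h * k2, d_in)
        k4 = beta(u + h * k3, d_in)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

# Initial conditions quoted in the text: u2 = 1, u4 = -0.5, u6 = 0.01.
# For d_in = 5 > 4, the quartic and sixtic couplings are irrelevant and shrink
# toward the Gaussian region along the flow, while u2 grows (instability axis).
u_ir = integrate_flow([1.0, -0.5, 0.01], d_in=5)
```

With the actual beta functions of the text substituted for the schematic ones, the same integrator reproduces the trajectories of figures 6 and 7.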

Beyond the symmetric phase
In this section, we consider another approximation scheme for the effective potential U_k. Focusing on the IR regime, we assume that Ψ(p) essentially reduces to its zero-momentum component (the macroscopic field): and, defining χ := Ψ_0²/2, we expand the effective potential per unit volume in a power series around χ = κ(k): within this parametrization, we directly identify κ with the (non-zero) vacuum, which runs with the scale k.
The two-point function Γ^{(2)}_k is moreover defined as: Note that we introduced the field strength renormalization Z(k) because its flow is nonzero as soon as κ ≠ 0, i.e. broken-phase effects introduce an anomalous dimension. As a technical device, we move the mass contribution into the effective potential. For a uniform field configuration, we must have: Therefore, taking the derivative with respect to t := ln(k/Λ) (with Ẋ := k dX/dk) and writing U′_k(χ) = ∂U_k/∂Ψ_0, we get from (61): or, using the definition (87): As in the previous section, we use the Litim regulator^{22}, but modify it to deal with the running field strength Z(k): leading straightforwardly to: For k large enough, we may use the same integral approximation as for (80), but taking into account that K_{d_in} must now depend on the anomalous dimension η_k because of the factor Z(k) in (91), such that: As in the previous section, we introduce dimensionless quantities (labeled with overlines): Note that all these changes of variables make sense from the requirement that all the terms in the potential must have the same dimension (the dimensions of g and h having been fixed). The derivative on the RHS of equation (94) is taken at fixed χ. Therefore, we have: where on the RHS the derivative is taken at fixed χ̄. We obtain: The flow equations can be deduced from the normalization conditions at scale k: Hence, because Ū_k[χ̄ = κ̄] = −ḡκ̄, we obtain for κ̄: and, after a tedious calculation, we obtain for ū_4 and ū_6: The computation of the anomalous dimension is long and is provided in appendix B. The result is: For the Litim regulator (91), the anomalous dimension η_k in the LPA with a kinetic truncation up to order p² is given by: where: We now turn to the deep UV regime 1/a_0 > k ≳ ξ^{−1}, where the expansion (21) is not valid. In this regime, the derivative expansion breaks down as well, and a local approximation for interactions is no longer justified.
We present a method, inspired by the BMW formalism [57-60], which considerably improves the accuracy of truncations in regimes where the momentum dependence of the vertex functions is as relevant as the purely local contributions, i.e. relevant enough to invalidate the derivative expansion. In this section, we summarize the essential results through three compact statements, in order to focus on the results, and leave the technical details to appendix B. The procedure that we propose is based on the following three approximations (see also [57,58]): (a) We parametrize the 2-point function Γ^{(2)}_k(p, p′) with a single parameter, the running mass m²(k), such that: Γ^{(2)}_k(p, p′) then reduces to the exact 2-point function in the deep IR for m²(k = 0) = σ_W²/2d_in. (b) We assume that the vertices are slowly varying with respect to the momenta q running through the effective loops in the flow equation. The allowed window of momenta being q² ≲ k², for k small enough with respect to the other momenta we require: for some vacuum Ψ_0. (c) The third approximation concerns the propagator entering the flow equation. For q in the window of momenta allowed by ∂_k r_k(q²), we must have: where θ is the Heaviside step function and α a positive number, expected to be of order 1.
To complete these approximations, we need to choose a suitable regulator. In principle, we could always use the Litim regulator (or any other regulator used in the literature). However, due to the parametrization of the phase space that we have chosen, and in particular the expression of the 2-point function, this regulator loses its crucial advantage, which consists in freezing all the fluctuations below the scale k. The Litim optimality condition is nevertheless expected to remain a relevant constraint for defining a regulator, especially in the symmetric phase, and we have the following statement: Claim 1. The scale-dependent mass: satisfies all the requirements for a regulator as soon as m²(k) > 0, freezes out all fluctuations with momentum q² < k², and is optimal in Litim's sense.
The physical discussion motivating this choice being a little technical, we provide it in appendix B. Note that Litim's condition, which relies on the existence of an optimized gap for the inverse 2-point function Γ^{(2)}_k + r_k(p²), is not an absolute criterion regarding the reliability of the results. Indeed, some choices are expected to provide an optimal bound for the gap, which may influence the computation of physical quantities like critical exponents [94-96]. Working within the set of 'optimized regulators' in Litim's sense, we may complete the optimization argument with a principle of minimal sensitivity [94,97,98], requiring that physical quantities be stationary with respect to parameters spanning a family of regulators. This can be done, for instance, by replacing r_k → βr_k and varying the physical quantities with respect to β. This is the only optimization scheme that we discuss in this paper.
Within these approximations, the equation for the 2-point function reads: which, from the observation that Γ^{(n+1)}_k(p_1, ..., p_n, 0) ≡ ∂Γ^{(n)}_k(p_1, ..., p_n)/∂Ψ_0, leads to a closed equation for Γ^{(2)}_k: This is the standard BMW strategy. Our approach is however a little different. First, we work in the symmetric phase Ψ_0 = 0. Second, we exploit the fact that the 2-point function in our parametrization depends only on a single parameter (the mass) to close the hierarchy around the 6-point function, thus removing the need for the usual assumption of a proportionality relation between the 6- and 4-point contributions in the flow equation of Γ^{(4)}_k (see [57]). Instead, we are able to deduce an expression for the 6-point function from the knowledge of the 4-point function, itself deduced from the flow equation of the 2-point function. The only relevant parameters at sufficiently large RG times are the local couplings u_{2n}, whose flow equations are deduced from the derivative expansion. The derivation of these equations being technical, we provide it in appendix B and summarize it in the following statement: Such an equation, however, assumes that K_Λ is the free propagator. In our construction, it has to be understood instead as the effective propagator, taking fluctuations into account. Therefore, we have to construct a coarse-graining with a fixed shape of the effective propagator, the corresponding free propagator remaining unknown. There is a pragmatic way to do this: we formally introduce a regulator ∆S_k in the classical action.
This leads to the Wetterich equation (61), but with the additional constraint that: This equation simply means that we relate the running scale k to the standard deviation of the NN weights as: and that we keep the shape of the 2-point function fixed along the RG trajectory (if it exists). Fixing Γ^{(4)}_k(p_1, p_2, q, −q) with respect to the momentum q running through the effective loop, following the discussion of section 3.3, we get: Remark 3. Note that this approach implicitly assumes that k is small enough to justify the replacement (122). Hence, the resulting flow equations are expected to be exact for reference scales in the IR.
Because the LHS can be explicitly computed from (120), we therefore obtain: In the same way, the flow equation for Γ^{(4)}_k(p, −p, 0, 0) allows in principle to compute Γ^{(6)}_k(p, −p, 0, 0, 0, 0) within the same approximation. Let us illustrate how this works. Consider a given network defining the 'fundamental scale' k ≡ Λ_0. We can measure the 4-point function at zero momentum, Γ^{(4)}_{k=Λ_0}(0, 0, 0, 0) ≡ u_4(Λ_0)δ(0). This condition in turn fixes the value of ṙ_k(0). For instance, consider the following explicit example, working with the slightly modified Litim regulator: Straightforwardly, we have r_k(0) = αk² and ṙ_k(0) = 2αk², and the previous equality reads: where we introduced the dimensionless variable x := q/k, and: Introducing the dimensionless coupling ū_4 := Λ_0^{d_in−4} u_4 and solving for α, we thus obtain: Because u_4 = O(1/N) and I_2 is a pure number, α is close to 1 for large N. However, α increases as N decreases, and for ū_4 ∼ 2/I_2 the approximation breaks down. One could expect this to be a limitation of the Litim regulator rather than of the approach itself; however, a moment of reflection shows that such singular behavior is in fact very general and independent of the choice of regulator. Under the condition (127), the problem (123) is well posed but trivial: it reduces to a pure scaling behavior. Indeed, given (123), we have: The flow is entirely fixed by dimensional analysis, and the flow equation for u_4 reduces to its linear contribution: In turn, this equation determines Γ^{(6)}_k. The effective loop behaves like k^{6−2d_in}, times a k-independent factor. Hence, we deduce: meaning that u_6 follows a pure scaling behavior as well.
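The pure scaling behavior described above can be written explicitly. From the dimension assignment ū_4 = Λ_0^{d_in−4}u_4 and the k^{6−2d_in} behavior of the effective loop, the linear flow equations integrate to (with Λ_0 the reference scale):

```latex
u_4(k) = u_4(\Lambda_0)\left(\frac{k}{\Lambda_0}\right)^{4-d_{\mathrm{in}}},
\qquad
u_6(k) = u_6(\Lambda_0)\left(\frac{k}{\Lambda_0}\right)^{6-2d_{\mathrm{in}}},
```

so that the dimensionless combinations k^{d_in−4}u_4(k) and k^{2d_in−6}u_6(k) are constant along the trajectory.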
We can use (121) to write the flow equations in terms of the standard deviation σ_W: where now u_4 and u_6 are seen as functions of σ_W. As displayed in figures 8 and 9, the numerical simulations match the solution of this equation to good precision (see appendix A for the computation of u_4).
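This scaling solution can be used predictively: once u_4 is measured for one value of σ_W at fixed N, its value at any other σ_W follows from a power law. A minimal sketch, assuming a power-law solution u(σ_W) = u_ref (σ_W/σ_ref)^γ, with the exponent γ supplied externally (its value is fixed by the canonical dimensions and the relation (121) between k and σ_W, which we do not reproduce here):

```python
def run_coupling(u_ref: float, sigma_ref: float, sigma: float, gamma: float) -> float:
    """Pure scaling prediction u(sigma) = u_ref * (sigma / sigma_ref)**gamma.

    `gamma` is the scaling exponent of the coupling, to be read off from the
    flow equations; it is an input here, not derived.
    """
    return u_ref * (sigma / sigma_ref) ** gamma

# Example: doubling sigma_W with an (illustrative) exponent gamma = 2
# rescales the measured coupling by a factor 4.
prediction = run_coupling(u_ref=-0.5, sigma_ref=1.0, sigma=2.0, gamma=2.0)
```

This is the content of the statement that knowing the couplings at a single σ_W determines them at any other σ_W without further simulation.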
Remark 4. Finally, let us make a remark regarding the results obtained in [32]. The authors arrived at the equations (131) with σ_W replaced by an IR cut-off from perturbation theory, whose validity assumes N to be large enough. What is puzzling about this calculation is that the RG predictions work even for small N, where we expect perturbation theory to break down. Our derivation solves this paradox: using a nonperturbative framework, we are able to show that the coupling constants follow the scaling laws (128) without any assumption on their size (note, however, that the two derivations have been performed with different activation functions, so it would be interesting to check how general (131) is).

Conclusion and outlooks
In this paper, we have pushed further the use of the RG for the NN-QFT correspondence [32,33], which states that a NN can be represented by a QFT. In the infinite limit of the hidden layer width N, the NN is described by a GP and mapped to a free field theory, while interactions translate finite-N corrections. The main difference with the usual QFTs of physics stems from the choice of the kernel (or propagator), itself inherited from the choice of activation function in the NN. Since it encodes important properties of the theory (IR and UV divergences, scaling, etc), it is important to ask how the data space of the NN inputs differs from usual spacetime. As a consequence, the usual assumptions on interaction locality may not be appropriate. In the first part of this paper, we have discussed several of these aspects, providing an interpretation slightly different from the one in [32]^{24}. We have then described how to build a nonperturbative RG flow following the Wetterich-Morris formalism. We introduced two different points of view: in the passive case, the UV cut-off is related to the data resolution, while in the active case, it is given in terms of the standard deviation of the NN weights. The main difference with [32] is that they postulate a global scale invariance with respect to a large-volume cut-off (IR, in the language of our paper) on the data space. Intriguingly, their results agree strongly with numerical simulations even for small widths, where perturbation theory is expected to fail, and even though a scale invariance with respect to the volume is not expected. In this paper, we have solved this paradox by developing a RG based on an explicit coarse-graining, deriving flow equations from a process of partial integration of the field degrees of freedom. We find that the active point of view is formally identified with the flow in [32], thus justifying, in an explicitly nonperturbative framework, the agreement between theory and experiment found in that paper.
A natural extension of this work is to include non-local interactions using tensor models. Another possible direction is to generalize the derivation to other networks, such as ReLU-nets [32].
On the numerical side, the main result of our paper is the flow equation (131), which shows that the weight standard deviation σ_W can be interpreted as a running cut-off in terms of which the couplings of the NN-QFT change. This means that, given the couplings for a specific value of σ_W, it is possible to compute analytically the couplings for any other value of σ_W without running any numerical simulation. We have verified this statement using numerical simulations (figures 8 and 9). In this paper, we have focused the analysis on the analytical computations on the QFT side; we plan to analyze the equations numerically in future work.
From a function-space perspective, it is natural to understand the learning process as a RG flow induced in a suitable theory space. It would be very interesting to investigate how the notions presented in this paper could generalize to describe this process, and how the couplings change under learning.

Data availability statement
The data that support the findings of this study are openly available at the following URL/DOI: https://github.com/melsophos/nnqft.

The experimental n-point correlation functions are computed as in (3). We define the normalized difference m_n with respect to the large-N Green functions G^{(n)}_0 as: (A.5) Note that no absolute value has been taken so far, and the result can be positive or negative. The large-N Green functions are computed with Wick's theorem from the Gauss-net kernel (11). For example, the 4-point function is given by: In order to reduce the variance of the results, we compute the Green functions by averaging over n_bags bags, each made of n_nets networks: where G^{(n)}_exp(x_1, ..., x_n)|_A means that the correlation function is computed with the bag A. This also allows extracting standard deviations if needed.
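The bag-averaging procedure can be sketched as follows. This is a minimal illustration, not the repository code: the single-hidden-layer architecture, the tanh activation, and the 1/sqrt scalings of the weight variances are assumptions standing in for the Gauss-net of [32].

```python
import numpy as np

def network_output(xs, N, sigma_w=1.0, sigma_b=1.0, rng=None):
    """Outputs f(x_a) of one randomly initialized single-hidden-layer network.

    Architecture and activation are illustrative assumptions (tanh instead of
    the Gauss-net activation of [32]).
    """
    rng = np.random.default_rng() if rng is None else rng
    d_in = xs.shape[1]
    W0 = rng.normal(0.0, sigma_w / np.sqrt(d_in), size=(d_in, N))
    b0 = rng.normal(0.0, sigma_b, size=N)
    W1 = rng.normal(0.0, sigma_w / np.sqrt(N), size=N)
    b1 = rng.normal(0.0, sigma_b)
    return np.tanh(xs @ W0 + b0) @ W1 + b1

def green_function(xs, N, n_bags=20, n_nets=1000, seed=0):
    """Bag-averaged estimate of G^(n)(x_1,...,x_n) = E[f(x_1)...f(x_n)].

    Returns the mean over bags and the standard deviation across bags,
    the latter serving as an error estimate.
    """
    rng = np.random.default_rng(seed)
    bag_means = np.empty(n_bags)
    for a in range(n_bags):
        samples = np.array([np.prod(network_output(xs, N, rng=rng))
                            for _ in range(n_nets)])
        bag_means[a] = samples.mean()
    return bag_means.mean(), bag_means.std()

# Two-point function at coincident points, G^(2)(x, x) = E[f(x)^2] > 0.
xs = np.zeros((2, 1))  # n = 2 insertions of the input x = 0, with d_in = 1
g2, g2_err = green_function(xs, N=50, n_bags=5, n_nets=200)
```

Averaging within bags before averaging across them yields both a central value and a spread, which is exactly the "background" used below to assess the normalized deviations.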

(A.9)
For n = 2, 4, 6, there are respectively n_comb = 21, 126, 462 inequivalent combinations. We denote by ⟨·⟩ the average of a quantity over all possible combinations of points, and by ⟨|·|⟩ the average of the absolute value^{25}.
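The counts n_comb quoted above are consistent with choosing n points, with repetition allowed, from a fixed set of 6 sample inputs (a multiset coefficient; the size 6 of the input set is inferred here from the numbers, as an assumption):

```python
from math import comb

def n_inequivalent(n_points: int, n_inputs: int = 6) -> int:
    """Number of multisets of size n_points drawn from n_inputs values:
    C(n_inputs + n_points - 1, n_points)."""
    return comb(n_inputs + n_points - 1, n_points)

counts = [n_inequivalent(n) for n in (2, 4, 6)]  # [21, 126, 462]
```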
The numerical Green functions are exact Green functions in the sense that they already contain all quantum corrections from loop diagrams. Hence, it is more natural to write a 1PI effective field theory and determine the coefficients by matching the Green functions computed from 1PI Feynman diagrams. Moreover, the Wetterich formalism of sections 3 and 4 gives relations for the 1PI couplings. We consider the following 1PI interactions to describe the neural network: where S_kin is the large-N free action (13). We consider a local Lagrangian because it turns out to reproduce well the experimental Green functions for the points considered previously [32]. In the notations of [32], we have u_4 = 4!λ and u_6 = 6!κ. However, the interpretation is slightly different compared to [32], which writes a microscopic action. The interactions are associated with the part of the kinetic operator Ξ_W corresponding to the weights only, since the bias part is always Gaussian and independent of N [32]. Hence, the propagators attached to vertices are K_W instead of K: the latter appears only in the disconnected 2-point propagators.
We now turn our attention to the computation of the experimental Green functions. We take: σ_W = 1, n_bags = 20, n_nets = 30 000. (A.11) Since we know the exact 2-point function G^{(2)} = K, we must have: Similarly, we know from (27) that the higher-order Green functions must decrease as N increases: We check that this is indeed the case by plotting the values of m_2, m_4 and m_6 for the different combinations of points (A.8). Figure A1 (not present in [32]) shows that these values go toward 0 as N increases for n = 4, 6. On the other hand, the values for n = 2 do not show any specific pattern, which is expected since G^{(2)}_exp should be independent of N.
We can simplify this information further and extract a single number. To do this, we take the absolute value of the normalized deviations (A.5) and average over the different combinations of points (A.8) to get ⟨|m_n|⟩. Moreover, to get an idea of how small the normalized deviations are, we define a background as follows: we compute the standard deviation of m_n over all bags of neural networks for each combination of points (A.8), and then average over the latter. The idea is to compare the normalized error encoded by m_n with its numerical fluctuations over different bags, represented by the standard deviation. In figure A2, we reproduce the results from [32]: ⟨|m_n|⟩ for n = 4, 6 is below the background only for small N and for N = 1000, whereas it is always below the background for n = 2. In principle, ⟨|m_n|⟩ should always be below the background for higher N (which was not studied in the original paper [32] and which we could not reach for computational reasons), so the current test is not very sharp; figure A1 gives a cleaner assessment. Next, we can compute u_4(x_1, x_2, x_3, x_4). Using Feynman rules, it can be obtained by subtracting the disconnected contributions (equal to G^{(4)}_0 and built from the 1PI 2-point function) from the full 4-point function, to extract the contact interaction, and truncating the external legs: where K_W was defined in (11) (see [32] for more details). Importantly, this equation is really an equality and not an approximation, contrary to [32]: since we are working with 1PI diagrams, there are no quantum corrections, and any n-point Green function is built from vertices of order n′ ≤ n. Higher-order vertices n′ > n appear only in loop diagrams, which are not present. Hence, this allows determining all 1PI couplings exactly in a recursive way. Our results for u_4 agree quantitatively with those of [32] because the loop corrections are subleading in the large-N expansion.
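The subtraction of the disconnected pieces is the standard passage from moments to cumulants. For a zero-mean field, the connected 4-point function is G_c^(4) = G^(4) − G^(2)(12)G^(2)(34) − G^(2)(13)G^(2)(24) − G^(2)(14)G^(2)(23), and it vanishes identically for a Gaussian (free, infinite-N) ensemble. A self-contained check on synthetic Gaussian samples (not the network data):

```python
import numpy as np

def connected_4pt(samples):
    """Connected 4-point function from samples f_i(x_a), array of shape (n_samples, 4).

    Assumes a zero-mean field: subtracts the three Wick pairings of 2-point
    functions from the full 4-point moment.
    """
    g4 = np.mean(samples[:, 0] * samples[:, 1] * samples[:, 2] * samples[:, 3])
    g2 = lambda a, b: np.mean(samples[:, a] * samples[:, b])
    return g4 - g2(0, 1) * g2(2, 3) - g2(0, 2) * g2(1, 3) - g2(0, 3) * g2(1, 2)

# Gaussian samples (mimicking the infinite-width limit): the connected part
# vanishes up to statistical noise.
rng = np.random.default_rng(0)
z = rng.normal(size=(200_000, 1))
samples = np.repeat(z, 4, axis=1)  # f(x_1) = ... = f(x_4): a fully correlated case
c4 = connected_4pt(samples)        # E[z^4] - 3 E[z^2]^2, which is 0 for a Gaussian
```

Applied to finite-N network outputs instead of Gaussian samples, the same subtraction isolates the contact interaction, from which u_4 is read off after truncating the external legs.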
However, this may give different results for u_6, since the latter receives loop corrections from the microscopic quartic vertex. We take: σ_W = 1, n_bags = 30, n_nets = 30 000. (A.16) We find that u_4 is constant to very good precision when evaluated over all combinations of points (A.8). In figure A3, we display the values of u_4 averaged over all combinations, together with the corresponding standard deviation, and find that its absolute value decreases as N increases, reproducing the results of [32]. Importantly, we find that u_4 is negative, which was not indicated in [32] (their figure 4 has an implicit absolute value, needed to use the log scale). As a consequence, the effective action (A.10) must include a sixtic contribution, however small, for the path integral to be stable: truncating to quartic interactions as in [32] leads to an exponential growth of the weight. A preliminary analysis of the passive flow equations (section 3) indicates that they can be integrated over a large range of k only if the initial conditions satisfy u_4 < 0 and u_6 > 0; otherwise the flow diverges.
(A.17) We see that u_4 decreases as σ_W and N increase, and that the values are well predicted by the active RG flow equations (131). As such, knowing u_4 for a single σ_W at fixed N allows computing it for any other σ_W.
These equations depend on the local couplings ū_2 and ū_4. The flow of ū_2 is fixed by the flow equation (B.35), but requires the knowledge of ū_4. The latter can be obtained using the standard LPA, equations (76) and (77) for a sixtic truncation, which discard contributions of order 1/N³ from the N-scaling. We get, for ū_4 and ū_6: where: Finally, from the flow equation for Γ^{(4)}_k(p_1, p_2, p_3, p_4) (equation (76)), setting p_1 = −p_2 = p and p_3 = p_4 = 0, we get: The first term corresponds to ū_4 f_k(x, 0), the second defines R_k(x). A direct inspection shows that R_k ∼ ∫dq ṙ_k(q²)G_k(−q − p)G_k(q), which becomes small for p large enough. We thus obtain the following approximation for h̄_k(x) for x large enough: The function Γ^{(4)}_k(p_1, p_2, p_3, p_4) has to be symmetric under any permutation of the four external momenta p_1, p_2, p_3 and p_4. Moreover, because we assume local interactions as building blocks, the external momenta have to be conserved: p_1 + p_2 + p_3 + p_4 = 0. Let us assume that Γ^{(4)}_k is the analytic continuation with respect to some couplings of a perturbative solution Γ^{(4)}_{k,pert}, defined as the formal sum of an asymptotic perturbative series: where the first sum runs over one-particle irreducible (1PI) Feynman diagrams G having four external points. The product runs over the vertices υ ∈ G, with 2n(υ) denoting the number of fields involved in the interaction with coupling constant g_{2n(υ)}. Finally, A_G is the Feynman amplitude associated with the graph G. Note that all Feynman amplitudes come with a global Dirac delta δ(p_1 + p_2 + p_3 + p_4) ensuring momentum conservation. We recall that Feynman diagrams provide a graphical representation of the Wick contractions involved in the perturbative expansion around the Gaussian theory: a Feynman graph is a set of vertices and edges, vertices corresponding to interactions and edges to the Wick contractions between pairs of fields.
The momentum dependence of the 4-point function can be investigated from the structure of the Feynman graphs labeling its perturbative expansion. First, we assume that the theory involves only 4-point vertices. At one loop, Γ^{(4)}_{k,pert}(p_1, p_2, p_3, p_4) has the following structure:
Γ^{(4)}_{k,1-loop}(p_1, p_2, p_3, p_4) = δ(p_1 + p_2 + p_3 + p_4) Σ_{j=2}^{4} γ_{1-loop}(p_1 + p_j), (B.47)
each term corresponding to the allowed permutations of the external momenta^{28}. Explicitly, the relevant one-loop diagrams have the following structure: the loop being proportional to ∫dq K(q²)K((q + p_1 + p_2)²). It can easily be checked that the decomposition (B.47) keeps the same form after including sixtic interactions. Our aim is to prove that such a decomposition remains a suitable approximation for Γ^{(4)}_k beyond one loop. To this end, we make use of a renormalization group argument, using the explicit expression (B.38). We assume that (B.47) holds beyond one loop, i.e. that there exists a function γ_k(p) such that:
Γ^{(4)}_k(p_1, p_2, p_3, p_4) = δ(p_1 + p_2 + p_3 + p_4) Σ_{j=2}^{4} γ_k(p_1 + p_j),
the cyclic permutation covering the three pairings (p_1, p_2), (p_1, p_3) and (p_1, p_4), the solid black edge materializing the effective propagator ṙ_k(p²)G_k(p²), whereas the dotted edge corresponds to G_k(p²). Let us investigate the structure of the (Γ^{(4)}_k)² contribution. From our assumption, and neglecting the dependence of the effective vertex on the momentum q running through the effective loop, we get: where L^{(2)}_k(p_1 + p_2) := ∫dq ṙ_k(q²)G_k((q + p_1 + p_2)²)G_k(q²). For p_i² close to the running horizon ξ^{−2}(k) := k²u_2(k), f_k(p) becomes small, as the explicit expression (B.38) shows. Thus γ_k(p_i) ∼ −u_4/6, and (Γ^{(4)}_k)² keeps the assumed form, up to permutations of the external momenta. Assuming we are close to the quartic sector, we focus on the last contribution. Setting p_5 = −p_6 = q, we have two relevant configurations to investigate.
The first configuration is when p_5 and p_6 are hooked to the same vertex. In that case, the loop depends only on two momenta, hooked to another vertex, say p_1 + p_2 in the following: where we discarded the dependence of the effective vertices on the momentum running through the effective loop, and we assumed γ_k(p) = γ_k(−p). Such a contribution, in the first term on the RHS of equation (B.52), does not break the ansatz for Γ^{(4)}_k, the remaining momentum q being set to zero outside of the tadpole. The second configuration is when p_5 and p_6 are hooked to different effective vertices. It is however easy to check that, for external momenta large with respect to the IR cut-off k, these contributions are suppressed. For instance, we have: and, for p_4² large enough with respect to k², this contribution is less relevant than the first one in the flow equation for Γ^{(4)}_k. By the same argument, it is easy to check that the second kind of contribution in the flow of Γ^{(6)}_k, involving Γ^{(6)}_k and Γ^{(4)}_k, does not break the ansatz for Γ^{(4)}_k in the range of momenta that we consider. □