Gaussian information bottleneck and the non-perturbative renormalization group

The renormalization group (RG) is a class of theoretical techniques used to explain the collective physics of interacting, many-body systems. It has been suggested that the RG formalism may be useful in finding and interpreting emergent low-dimensional structure in complex systems outside of the traditional physics context, such as in biology or computer science. In such contexts, one common dimensionality-reduction framework already in use is information bottleneck (IB), in which the goal is to compress an ‘input’ signal X while maximizing its mutual information with some stochastic ‘relevance’ variable Y. IB has been applied in vertebrate and invertebrate sensory processing systems to characterize optimal encoding of the future motion of the external world. Other recent work has shown that the RG scheme for the dimer model could be ‘discovered’ by a neural network attempting to solve an IB-like problem. This manuscript explores whether IB and any existing formulation of RG are formally equivalent. A class of soft-cutoff non-perturbative RG techniques is defined by families of non-deterministic coarsening maps, and hence can be formally mapped onto IB, and vice versa. For concreteness, this discussion is limited entirely to Gaussian statistics, for which IB has exact, closed-form solutions (Gaussian IB, or GIB). Under this constraint, GIB has a semigroup structure, in which successive transformations remain IB-optimal. Further, the RG cutoff scheme associated with GIB can be identified. Our results suggest that IB can be used to impose a notion of ‘large scale’ structure, such as biological function, on an RG procedure.


I. INTRODUCTION
An overarching theme in the study of complex systems is effective low-dimensionality. We are content, for example, with the existence of laws of fluid dynamics whose few phenomenological parameters accurately account for the macroscopic behavior of many completely different fluids. We are also confident that the laws are insensitive to the particular microscopic configuration of a fluid at any given time. These are connected, but different, notions of low-dimensionality; the first deals with simplification in model space, while the second refers to the emergence of collective modes, of which relatively few, when compared to the total number of degrees of freedom, will be important. A central result of Wilson's renormalization group (RG) formulation is that an effective low-dimensional model of a system may be found through repeated coarsening of the microscopic or "bare" model. In other terms, by successively removing dynamical degrees of freedom from the system description, the effective model "flows" towards a description involving very few parameters. In general, there are many strategies which can be used to simplify the description of a high-dimensional system, and RG methods, though vast in breadth, form only a subset of these. An altogether different dimensionality-reduction framework is the information bottleneck (IB), which attempts to compress (or more accurately coarsen) a signal while keeping as much information about some a priori defined "relevance" variable as possible [1]. Both IB and RG have been applied in theoretical neuroscience [2-5], computer science [6-11], and other frontier areas of applied statistical physics [12,13]. Given the ubiquitous need to find simplifying structure in complex models and data, a synthesis of the ideas present in IB and RG could yield powerful new analysis methods and theoretical insight.
Probability-theoretic investigations of renormalization group methods are not a recent development [14]. One early paper by Jona-Lasinio used limit theorems from probability theory to argue the equivalence of the older, field-theoretic RG formalism due to Gell-Mann and Low with the modern view due to Kadanoff and Wilson [15]. Recent work [13,16-20] has focused on connections of RG to information theory. Since the general goal in RG is to remove information about some modes or system states through coarsening, an effective characterization of RG explains how the information loss due to coarsening generates the RG flow or relates to existing notions of emergence. Moreover, like the probabilistic viewpoint promoted by Jona-Lasinio, the information-theoretic viewpoint enjoys a healthy separation from physical context. The hope is that, by removing assumptions about the particular organization or interpretation of the degrees of freedom in the system, RG methods can be generalized and made applicable to problems outside of a traditional physics setting [5,21]. This viewpoint also has the potential to enrich traditional RG applications, as Koch-Janusz et al. point out [12]. Their neural-network implementation of an IB-like coarsening scheme was able to "discover" the relevant, large-scale modes of the dimer model, whose thermodynamics are completely entropic, and whose collective modes do not resemble the initial degrees of freedom. More recently, Gordon et al. built upon this scheme to formally connect notions of "relevance" between IB and RG [13].
In contrast to most RG formulations, which require an explicit, a priori notion of how the modes of the system should be ordered, the information bottleneck approach defines the relevance of a feature by the information it carries about a specified relevance variable. To be concrete, let X be a random variable, called the "input," which we wish to coarsen. Then, let Y be another random variable, called the "relevance variable," which has some statistical interaction with the input X. IB defines a non-deterministically coarsened version of X, denoted X̃, which is optimal in the sense that the mutual information (MI) between X̃ and Y is maximized. Because X̃ is defined as a non-deterministic coarsening of X, an exact correspondence between RG and IB demands that the RG scheme use what is known as a "soft" cutoff. This means, for example, that the ubiquitous perturbative momentum-shell approach put forth by Wilson cannot be mapped exactly onto IB under the interpretation of X̃ as some coarse-grained variable. The trade-off between the degree of coarsening, indicated by I(X̃; X), and the amount of relevant information retained, I(X̃; Y), is controlled by a continuous variable, denoted β. Formally, the non-deterministic map which yields X̃ from X is found by optimizing the IB objective function:

min_{P(x̃|x)} L = I(X̃; X) − β I(X̃; Y).    (1)

For large values of β, the compressed representation X̃ is more detailed and retains a greater amount of predictive information about Y. Conversely, for smaller β, relatively few features are kept, in favor of reducing I(X̃; X) (increasing compression/coarsening). The formalism investigated here is the one originally laid out in 2000 by Tishby et al. [1], but since then a number of thematically similar IB schemes have been proposed [22-24]. IB methods have been employed extensively in computer science, specifically towards artificial neural networks and machine learning [6-11]. In theoretical neuroscience, Palmer et al. have demonstrated using IB that the retina optimally encodes the future state of some time-correlated stimuli, suggesting that prediction is a biological function instantiated early in the visual stream [4,25]. IB has also been applied in studies of other complex systems, for instance to efficiently discover important reaction coordinates in large MD simulations [26], and to rigorously demonstrate hierarchical structure in the behavior of Drosophila over long timescales [27].
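The β trade-off is easiest to see in the scalar Gaussian case. The sketch below anticipates the closed-form Gaussian IB solution discussed in section II [29], for a single input mode with compressed variable X̃ = αX + ξ and unit noise; the variance s, canonical correlation eigenvalue λ, and function names are illustrative choices of ours, not notation fixed by the text.

```python
import numpy as np

def gib_alpha_sq(beta, lam, s):
    """Squared weight of the scalar Gaussian IB solution (unit noise):
    alpha^2 = max(0, beta*(1 - lam) - 1) / (lam * s)."""
    return max(0.0, beta * (1.0 - lam) - 1.0) / (lam * s)

def info_terms(beta, lam, s):
    """Mutual informations (nats) for Xt = alpha*X + xi, xi ~ N(0, 1)."""
    a2 = gib_alpha_sq(beta, lam, s)
    i_xt_x = 0.5 * np.log(a2 * s + 1.0)                            # coarsening cost I(Xt; X)
    i_xt_y = 0.5 * np.log((a2 * s + 1.0) / (a2 * lam * s + 1.0))   # relevant info I(Xt; Y)
    return i_xt_x, i_xt_y

lam, s = 0.2, 1.0                      # toy mode statistics
for beta in (2.0, 5.0, 50.0):
    print(beta, info_terms(beta, lam, s))
```

Increasing β raises both I(X̃; X) and I(X̃; Y), while the latter stays below the data-processing bound I(X; Y) = ½ log(1/λ).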
From a broad perspective, there are some basic similarities between RG and IB. Both frameworks entail a coarsening procedure by which the irrelevant aspects of a system description are discarded in order to generate a lower-dimensional, "effective" picture. Further, the Lagrange multiplier β in IB, which parameterizes the level of detail retained, can be seen as roughly analogous to the scale cutoff present in some implementations of RG. As a first guess, one might imagine that X in IB roughly corresponds to the (fluctuating) bare state of a system we are interested in renormalizing, and that its compressed representation X̃ is a coarsened dynamical field akin to a fluctuating "local" order parameter. However, it is not difficult to find implementations of RG which do not map to IB in this way, and vice versa. For example, in Wilsonian RG schemes with a hard momentum cutoff, the decimation step represents a deterministic map from bare to coarsened system state. Together with our provisional interpretation, this contradicts the original formulation of IB, in which the coarsening is non-deterministic [28].
Another, more serious discrepancy is due to the expected use cases of these two theoretical frameworks. Generically, the fixed-point description of criticality offered by RG is legitimate only in the presence of infinitely many interacting degrees of freedom; otherwise the coarsened model cannot be mapped back into the original model space. In IB, the random variable X is finite-dimensional, such as a finite lattice of continuous spins, and "dimensional reduction" does not refer to convergence towards a low-dimensional critical manifold in model space, but instead to the actual removal of dimensions from the coarsened representation of X. Finally, and perhaps most dauntingly, there is no obvious equivalent of the IB relevance variable Y in RG. It seems counterintuitive that one would want more control over the collective mode basis used to describe a system, when for the vast majority of RG applications, length or energy scale works perfectly well as a cutoff.
Despite these apparent mismatches, there are some significant structural similarities between IB for continuous variables and a class of RG implementations involving soft cutoffs. For concreteness, we restrict our discussion of the correspondence to Gaussian statistics. While this precludes the analysis of non-Gaussian criticality, it allows all of the results to be expressed analytically and makes the connections more transparent. This can also serve as a basis for later investigations involving non-Gaussian statistics and interacting systems. To begin, we show that Gaussian information bottleneck (GIB) [29] exhibits a semigroup structure in which successive IB coarsenings compose into larger IB coarsenings. This structure is summarized in an explicit function of the Lagrange multiplier β which simply multiplies under the semigroup action and is therefore analogous to the length scale in canonical RG. Next, we explore how the coarsening map P(x̃|x) provided by IB defines an infrared regulator which serves as a soft cutoff in several non-perturbative renormalization group (NPRG) schemes. This relation shows that the freedom inherent in choosing a cutoff scheme maps directly onto the choice of Y-statistics in IB. Finally, we use a Gaussian field theory as a toy model to explore the physical significance of this fact. One result is that the RG scheme provided by IB can select a collective mode basis which is not Fourier, and hence impose a cutoff which cannot be interpreted as a wavenumber. Additionally, in whichever collective mode basis is chosen, the shape of this IB cutoff scheme is closely related to the Litim regulator which is ubiquitous in the NPRG literature [30].

II. SEMIGROUP STRUCTURE IN GAUSSIAN INFORMATION BOTTLENECK
Every IB problem begins with the distribution P(x, y), which specifies the statistical dependencies linking the input variable X to the relevance variable Y. Gaussian information bottleneck (GIB) refers to the subset of IB problems in which P(x, y) is jointly Gaussian. Under this constraint, a family of coarsening maps P_β(x̃|x) can be found exactly for all β. Chechik et al. [29] showed this by explicitly parameterizing the coarsening map, then minimizing the IB objective function with respect to these parameters. Their parameterization consists of two matrices A and Σ_ξ, which are used to define the compressed representation X̃ as a linear projection of the input plus a Gaussian "noise" variable ξ. Explicitly, X̃ = AX + ξ with ξ ∼ N(0, Σ_ξ). Under this parameterization, one exact solution is given by:

A = diag(α_1, …, α_n) V^T,   Σ_ξ = I,   α_i = Θ(β − β_i) √[ (β(1 − λ_i) − 1) / (λ_i s_i) ],    (2)

where Θ is the Heaviside step function, β_i = (1 − λ_i)^{−1} are critical bottleneck values, and s_i = [V^T Σ_X V]_{ii}. The matrix V represents a set of eigenvectors with corresponding eigenvalues λ_i in the following way:

Σ_X^{−1} Σ_{X|Y} v_i = λ_i v_i.

The matrix Σ_X^{−1} Σ_{X|Y} used above also appears in canonical correlation analysis, and we therefore refer to it as the "canonical correlation matrix". Note that since it is not generally symmetric, the eigenvector matrix V is not generally orthogonal. An important property of the canonical correlation matrix is that its eigenvalues lie within the unit interval; that is, λ_i ∈ [0, 1] for all i.
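As a concrete check of this construction, the following sketch builds a small jointly Gaussian pair, forms the canonical correlation matrix, and assembles a GIB solution of the form discussed above. All numerical values (the dimension, the map M generating Y, the choice β = 10) are arbitrary illustrations, not quantities taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Toy jointly Gaussian pair: Y = M X + noise (all choices illustrative).
L = rng.normal(size=(n, n))
Sig_x = L @ L.T + n * np.eye(n)                # input covariance (positive definite)
M = rng.normal(size=(n, n))
Sig_y = M @ Sig_x @ M.T + np.eye(n)
Sig_xy = Sig_x @ M.T
Sig_x_given_y = Sig_x - Sig_xy @ np.linalg.solve(Sig_y, Sig_xy.T)

# Canonical correlation matrix and its (right) eigenvectors.
C = np.linalg.solve(Sig_x, Sig_x_given_y)
lam, V = np.linalg.eig(C)
lam, V = lam.real, V.real                      # symmetric-definite pencil -> real spectrum
assert np.all((lam > 0) & (lam < 1))           # eigenvalues lie in the unit interval

# GIB solution at a given beta: A = diag(alpha) V^T with unit noise.
beta = 10.0
s = np.diag(V.T @ Sig_x @ V)                   # s_i = [V^T Sig_X V]_ii
alpha = np.sqrt(np.maximum(0.0, beta * (1.0 - lam) - 1.0) / (lam * s))
A = np.diag(alpha) @ V.T
print("components kept at beta = 10:", int(np.sum(alpha > 0)))
```

Modes with critical value β_i = (1 − λ_i)^{−1} above the chosen β receive α_i = 0 and are dropped entirely; at β = 1 every α_i vanishes.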
The GIB solution (2) is not unique. At a cursory level, this follows from the IB objective function (1), which is a function only of mutual information terms and hence invariant to all invertible transformations on X, X̃, and Y. However, not all invertible transformations X̃ → f(X̃) will leave the joint distributions P(x, y) and P_β(x, x̃) Gaussian. It is specifically invertible linear transformations X̃ → LX̃ (and analogous transformations for X and Y) which preserve IB optimality and leave all joint distributions Gaussian. One consequence of this is that X̃ → LX̃ changes the coarsening parameters (A, Σ_ξ) → (LA, LΣ_ξL^T) = (A′, Σ′_ξ). If L is invertible, then these new parameters also solve GIB. When testing whether a given parameter combination (A, Σ_ξ) is GIB-optimal, it is therefore useful to consider the quantity V^{−1} A^T Σ_ξ^{−1} A V^{−T}, which is invariant to all invertible linear transformations on X, X̃, and Y.
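For the X̃ → LX̃ case, the invariance is a one-line algebraic identity, (LA)^T(LΣ_ξL^T)^{−1}(LA) = A^TΣ_ξ^{−1}A, which the sketch below verifies numerically; the matrices here are random placeholders rather than an actual GIB solution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hypothetical coarsening parameters (A, Sig_xi) and an invertible L.
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
Sig_xi = B @ B.T + np.eye(n)                     # noise covariance (positive definite)
Lmat = rng.normal(size=(n, n)) + 3 * np.eye(n)   # generically invertible

# Any invertible V exposes the identity; GIB would supply the eigenvectors
# of the canonical correlation matrix here.
V = rng.normal(size=(n, n)) + 2 * np.eye(n)

def invariant(A, Sig_xi, V):
    Vi = np.linalg.inv(V)
    return Vi @ A.T @ np.linalg.solve(Sig_xi, A) @ Vi.T

Q1 = invariant(A, Sig_xi, V)
Q2 = invariant(Lmat @ A, Lmat @ Sig_xi @ Lmat.T, V)   # (A, Sig) -> (LA, L Sig L^T)
assert np.allclose(Q1, Q2)
print("invariant unchanged under Xt -> L Xt")
```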
In this section, we show that solutions to GIB have an exact semigroup structure, wherein two GIB solutions "chained together" compose into a larger solution which is still optimal. To be more precise, let P(x, y) be jointly Gaussian, and suppose P_{β_1}(x̃_1|x) is IB-optimal. Because P_{β_1}(x̃_1|x) is Gaussian under the parameterization X̃_1 = A_1X + ξ_1, it must also be that P(x̃_1, y) is jointly Gaussian, and thus a valid starting point for a new GIB problem. Taking X̃_1 to be the new input variable, let the second optimal coarsening map be P_{β_2}(x̃_2|x̃_1), and parameterize it the same way: X̃_2 = A_2X̃_1 + ξ_2. Then, we claim, the composition of these two coarsening maps, obtained by integrating the expression P_{β_2}(x̃_2|x̃_1)P_{β_1}(x̃_1|x) over x̃_1, is also given by a single IB-optimal coarsening P_β(x̃_2|x) for some β = β_2 ∘ β_1, where ∘ is a binary operator whose explicit form will be provided shortly. We represent this composition schematically with the Markov chain:

X → X̃_1 → X̃_2.    (3)

To simplify the analysis, we begin by redefining [31] the input variable X by projecting it onto the eigenvectors of the canonical correlation matrix. Assuming that V is full-rank, X → V^T X is an invertible linear transformation. Invertibility guarantees that the objective function is unaffected, while linearity guarantees that P(y, x) remains Gaussian. We call this new basis for X the "natural basis," since after this transformation Σ_X, Σ_{X|Y}, and A_1 are diagonal. Additionally, after the first compression to X̃_1, the new analogous quantities, e.g. Σ_{X̃_1}, Σ_{X̃_1|Y}, and A_2, will remain diagonal. For the transformation matrices A_1 and A_2, this fact can be seen by inspecting (2), while Lemma B.1 in [29] proves that Σ_X and Σ_{X|Y} are diagonal. In this new basis, they are given by:

Σ_X = diag(s_1, …, s_n),   Σ_{X|Y} = diag(λ_1s_1, …, λ_ns_n).

We now show that successively applied GIB compressions, as portrayed in (3), compose into GIB transformations of greater compression. A more detailed treatment is given in appendix A.
Suppose that A and Σ_ξ describe a non-deterministic map AX + ξ. From Lemma A.1 in [29], this map (A, Σ_ξ) is IB-optimal if there exists some β such that

V^{−1} A^T Σ_ξ^{−1} A V^{−T} = diag(α_1^2, …, α_n^2),

where α_i is as given in (2). Consider two successive maps with bottleneck parameters β_1 and β_2, each with unit noise. The composition of these transformations is represented by the pair (A, Σ_ξ) = (A_2A_1, A_2A_2^T + I). Both A_1 and A_2 can be computed explicitly using (2), though A_2 is initially given in terms of the statistics P(x̃_1, y). Using X̃_1 = A_1X + ξ_1, we thus re-write A_2 in terms of the original relevance variable-input variable statistics P(x, y). After this substitution, direct evaluation of the optimality condition above yields

β_2 ∘ β_1 = β_1β_2 / (β_1 + β_2 − 1),

where β_2 ∘ β_1 is the bottleneck parameter of the full, one-step compression. It is important to note that this computation defines the binary operator ∘. If GIB did not have a semigroup structure, it would not be possible to identify ∘ in this manner. Direct computations show that this operator satisfies closure and associativity, and thus furnishes the space in which β values live, that is β ∈ (1, ∞), with a semigroup structure. As a bonus, if we consider β = ∞ to be an element, we see that it is the identity element. This aligns with the fact that in the limit β → ∞, the IB objective (1) becomes insensitive to the encoding cost I(X̃; X), and hence no coarsening occurs; X̃ becomes a deterministic function of every component of X which contains information about Y. Further, ∘ is commutative. One should be careful to note, however, that the maps P_{β_1∘β_2}(x̃|x) and P_{β_2∘β_1}(x̃|x) need only agree in the overall level of compression achieved, and may otherwise differ, since X̃ → LX̃ is a symmetry.
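A minimal numerical check of this composition for a single mode in the natural basis. The update rules for the intermediate statistics (s̃ = α_1²s + 1 and λ̃ = (α_1²λs + 1)/s̃) follow from X̃_1 = α_1X + ξ_1; the composition law β_1β_2/(β_1 + β_2 − 1) and the function b(β) = β/(β − 1) are the quantities this sketch tests, and the specific values of λ, s, β_1, β_2 are arbitrary.

```python
import numpy as np

def alpha_sq(beta, lam, s):
    """Scalar GIB weight (unit noise): alpha^2 = max(0, beta(1-lam)-1)/(lam s)."""
    return max(0.0, beta * (1.0 - lam) - 1.0) / (lam * s)

lam, s = 0.5, 1.0                 # illustrative mode statistics
b1, b2 = 4.0, 6.0                 # two successive bottleneck parameters

# First compression: Xt1 = a1 X + xi1; statistics seen by step two.
a1 = np.sqrt(alpha_sq(b1, lam, s))
s_new = a1**2 * s + 1.0                        # variance of Xt1
lam_new = (a1**2 * lam * s + 1.0) / s_new      # new canonical correlation eigenvalue

# Second compression, computed against P(xt1, y).
a2 = np.sqrt(alpha_sq(b2, lam_new, s_new))

# Composed map: A = a2*a1 with noise variance a2^2 + 1; normalize to unit noise.
eff = (a2 * a1)**2 / (a2**2 + 1.0)

# One-step bottleneck parameter claimed by the semigroup law.
b12 = b1 * b2 / (b1 + b2 - 1.0)
assert np.isclose(eff, alpha_sq(b12, lam, s))

# b(beta) = beta/(beta - 1) multiplies under composition.
b = lambda beta: beta / (beta - 1.0)
assert np.isclose(b(b12), b(b1) * b(b2))
print("composed map is GIB-optimal at beta =", b12)
```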
A. What is the significance of this structure?
A broad goal of this paper is to explore structural similarities between IB and RG. The semigroup structure present in Wilsonian RG is crucial to its explanation of scaling phenomena, so its presence in GIB is a promising sign. The traditional picture is this: consider the RG transformations R_{b_1} and R_{b_2}, which rescale length by factors b_1 and b_2, respectively. Then a fundamental property of R is that R_{b_1}R_{b_2} = R_{b_1b_2}. This structure imposes a strong constraint on the behavior of the flow near a fixed point. If σ represents an eigenvector of the Jacobian matrix at the fixed point, then its associated eigenvalue λ_σ will scale as b^{y_σ} [32]. In short, the semigroup typically allows one to define the critical exponent y_σ.
The operator ∘ we introduced does not immediately lend itself to this sort of analysis. However, we can introduce a function b(β) which satisfies b(β_2 ∘ β_1) = b(β_2)b(β_1). By inspection, this function is given by:

b(β) = β / (β − 1).

This quantity is interesting because it is analogous to the length-rescaling factor found in typical Wilsonian or Kadanoff RG schemes, yet in IB there is no need for a notion of space, and hence rescaling length generally means nothing. Compare this with, for example, a momentum-shell decimation scheme. One identifies the rescaling factor by comparing the new and old UV cutoffs, and so it acquires the meaning of a length-rescaling factor. Here, b is determined entirely by the Lagrange multiplier β and the structure of GIB, both of which are defined without deference to an a priori existing notion of spatial extent. As discussed in the introduction, connecting IB to RG is attractive, in part, precisely because IB is an information-theoretic framework and does not rely on physical interpretations. Hence this rescaling factor b should be considered an information-theoretic quantity in the same way as β. Can b(β) as defined above be used in the same way as the rescaling factor b is used in RG? First, limits of the IB problem involving extremal values of β should match intuition about b in an RG context. Indeed, for β → ∞, the zero-coarsening limit, b(β) → 1. Next, by the data processing inequality, at β = 1 the optimal IB solution is degenerate with complete coarsening, i.e. X̃ becomes independent of X.
Correspondingly, the limit β → 1 gives b(β) → ∞. Next, let us recall the scope of the GIB framework. First, GIB makes statements only about completely Gaussian statistics, so no anomalous scaling will appear, and thus a discussion of critical exponents is hard to motivate. Second, GIB is defined for finite-dimensional X and Y, so we cannot simply connect it to, say, momentum-shell Wilsonian RG, which only makes statements about infinite systems. Finally, and related to the last point, we have not yet identified what the analogous "model space" is in the context of IB, or how an optimal GIB map could represent an RG transformation in that space. This will be the subject of the next section, where we show that the non-deterministic nature of IB coarsening aligns exactly with existing soft-cutoff RG methods.
Whether or not this analysis helps to formally connect IB and RG, it is interesting to ask whether other IB problems exhibit semigroup structure. One could imagine, for example, that a series of high-β compression steps (the low-compression limit) might be easier than one large compression step. If this is the case, IB problems with semigroup structure may benefit from an iterative chaining scheme similar to the one we present here. One possible application of this structure is the construction of feed-forward neural networks with IB objectives. If the IB problem in question has semigroup structure, then the task of training the entire network can be reduced to training the layers one-by-one on smaller (higher-compression) IB problems. This may have benefits for biological systems, such as biochemical and neural networks, where processing is often hierarchical, likely as a result of underlying evolutionary and developmental constraints. Biological systems are also shaped by their output behavior, which sets a natural relevance variable in the arc from sensation to action.

III. STRUCTURAL SIMILARITIES BETWEEN IB AND NPRG

A. Soft-cutoff NPRG is a theory of non-deterministic coarsening
The renormalization group is not a single coherent framework, but rather a collection of theories, computational tools, and loosely-defined motifs. As such, it is probably not possible to succinctly define RG on the whole. A common theme, at least, is that RG techniques describe how the effective model of a given system changes as degrees of freedom are added or removed. The modern view of RG theory, which is largely due to Wilson [33-37] and Kadanoff [38], concerns itself with the removal of degrees of freedom through a process known as decimation, in which a thermodynamic quantity (typically the partition function) is re-written by performing a configurational sum or integral over a subset of the original modes. Here, even before discussing rescaling and renormalization, we must make procedural choices. To begin, one must specify the subset of degrees of freedom which are to be coarsened away. In theories where modes are labelled by wavenumber or momentum, one typically establishes a cutoff and decimates all modes with momentum above it. As a result, those modes are completely removed from the system description, and their statistics are incorporated into the couplings which parameterize the new effective theory. Another consideration is the practicality of carrying out such a procedure. If the model in question can be expanded in a perturbation series about a Gaussian model, and if the non-Gaussian operators are irrelevant or marginal under the flow, then this analysis is amenable to perturbative RG. However, this is often not the case, for example in systems far from their critical dimension, or in non-equilibrium phase transitions, where there may not even be critical dimensions [39,40].
In non-perturbative RG (NPRG) approaches, the need for a perturbative treatment is removed by working from a formally exact flow equation at the outset. The first such treatment was put forth in 1973 by Wegner and Houghton, who used Wilson's idea of an infinitesimal momentum-shell integration to derive an exact flow equation for the full coarse-grained Hamiltonian [41]. Because this equation describes the evolution of the Hamiltonian for every field configuration, this and other NPRG flow equations are functional integro-differential equations, and the NPRG is sometimes referred to as the functional renormalization group (FRG). Later, Wilson and Kogut [35], as well as Polchinski [42], proposed new NPRG flow equations in which the cutoff was not described explicitly through a literal demarcation between included and excluded modes, but instead through non-deterministic coarsening, so that the effective Hamiltonian satisfies a functional generalization of a diffusion equation [43]. These approaches were introduced, at least in part, as a response to difficulties [44] that arise from the sharp cutoff in the Wegner-Houghton construction. Correspondingly, the Wilson-Polchinski FRG approach can be thought of as giving a soft cutoff, where modes can be "partially coarsened".
The most common NPRG approach in use today was first described in 1993 by C. Wetterich [45]. Like the Wilson-Polchinski NPRG, the Wetterich approach uses a soft cutoff, but the objects computed by this framework are fundamentally different. Instead of computing the effective Hamiltonian of the modes below the cutoff, the Wetterich framework computes the effective free energy of the modes above the cutoff. For this reason, we say that the Wilson-Polchinski framework is UV-regulated and the Wetterich framework is IR-regulated. Yet, despite this difference in perspective, the Wetterich formalism still describes the flow of effective models from their microscopic to their macroscopic pictures. In this section, we will explore how the soft-cutoff construction is related to a notion of non-deterministic coarsening, and in turn, to the information bottleneck framework. An in-depth discussion of the philosophy and implementation of NPRG techniques would be distracting, so we instead refer the reader to a number of good references on the topic [46-49].
So far we have not explained how one actually imposes a soft-cutoff scheme. We begin by examining the Wetterich setup, in which one writes the effective (Helmholtz) free energy at cutoff k:

e^{W_k[J]} = ∫ Dχ exp( −S[χ] − ΔS_k[χ] + J†χ ).

The bare action, given by S, is the microscopic theory which is known a priori. The source J allows us to take (functional) derivatives of this object to obtain cumulants (connected Green's functions). The remaining term ΔS_k[χ] is known as the deformation, and it is this term which enforces the cutoff. It is written as a bilinear in χ:

ΔS_k[χ] = ½ ∫_q χ†(−q) R_k(q) χ(q).

For compactness, we will often resort to a condensed notation and express integrals instead as contractions over suppressed continuous indices. For example, the deformation may be re-written:

ΔS_k[χ] = ½ χ† R_k χ.

The kernel (matrix) R is known as the regulator, and it controls the "shape" of the cutoff. Almost always, it is chosen to be diagonal in the Fourier basis, so that the cutoff k has the interpretation of a wavenumber or momentum. The resulting Fourier-transformed regulator R_k(q) has some freedom in its definition, but it must satisfy the following properties [30]: (i) R_k(q) > 0 for q² ≪ k²; (ii) R_k(q) → 0 for q² ≫ k²; and (iii) R_k(q) → ∞ as k → ∞. These constraints guarantee that the deformation acts as an IR cutoff. The first condition increases the effective mass of low-momentum modes and suppresses their contribution to the effective free energy. The second ensures that modes with high momentum (q² > k²) are left relatively unaffected, and contribute more fully to W_k. The third condition ensures that the so-called "effective action," defined as

Γ_k[ϕ] = −W_k[J] + J†ϕ − ΔS_k[ϕ],

approaches the bare action (or Hamiltonian, as the case may be) in the limit k → ∞. Here, the order parameter ϕ is given by δW_k[J]/δJ†. Because of this construction, the second regulator property also ensures that in the limit k → 0, the deformation ΔS_k disappears, and the effective action Γ_k becomes the Legendre transform of W[J]. This functional Γ_{k=0} is known in many-body theory as the 1PI generating functional, and in statistical mechanics as the Gibbs free energy. In the Wetterich formalism, one is generally interested in computing the flow of Γ_k because of these useful boundary conditions.
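As a concrete instance of the three regulator conditions, the sketch below checks them for the Litim form R_k(q) = (k² − q²)Θ(k² − q²), which appears later in the text [30]; the momentum grid and parameter values are arbitrary.

```python
import numpy as np

def litim(q2, k2):
    """Litim regulator R_k(q) = (k^2 - q^2) * Theta(k^2 - q^2)."""
    return np.where(q2 < k2, k2 - q2, 0.0)

k2 = 1.0
q2 = np.linspace(0.0, 3.0, 301)
R = litim(q2, k2)

# (i) low-momentum modes (q^2 << k^2) acquire a positive effective mass
assert np.all(R[q2 < 0.5 * k2] > 0)
# (ii) high-momentum modes (q^2 > k^2) are left untouched
assert np.all(R[q2 > k2] == 0)
# (iii) the regulator diverges with the cutoff: R_k(q) -> inf as k -> inf
assert litim(0.0, 1e12) > 1e11
print("Litim regulator satisfies the three cutoff conditions")
```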
To see how this approach is related to non-deterministic coarsening, we will connect it to a soft-cutoff UV-regulated approach, also put forth by Wetterich, which is formally equivalent to the Wilson-Polchinski framework. We begin with the following expression defining the average action Γ^{av}_k[χ̃], taken directly from the paper [50], with only a slight change in notation:

exp( −Γ^{av}_k[χ̃] ) = ∫ Dχ P_k[χ̃|χ] e^{−S[χ]},

where we refer to the functional P_k[χ̃|χ] as the coarsening map. If we were interested in performing deterministic coarsening, i.e. one involving a hard cutoff, the coarsening map would be something like a delta function, P_k[χ̃|χ] = δ[χ̃ − Φ_k[χ]], for some functional Φ_k. However, in all soft-cutoff UV-regulated approaches, this distribution is Gaussian in χ̃:

P_k[χ̃|χ] = C_k exp( −½ (χ̃ − A_kχ)† Δ_k^{−1} (χ̃ − A_kχ) ).    (6)

In principle, given the coarsening parameters A_k and Δ_k for all k, the exact flow equation for Γ^{av}_k is determined. Wetterich gives explicit choices for these parameters, while Wilson and Polchinski independently give their own (though in slightly different fashion). The term C_k is a normalizing constant which is essentially unimportant to the remainder of our discussion. Now we connect the IR and UV approaches to show that they are complementary, and in some sense, equivalent. In particular, suppose we know P_k[χ̃|χ] for all k.
Then, from this single object, one can construct both the IR-regulated and UV-regulated flows. This should make intuitive sense; the IR-regulated part tracks the thermodynamics of the already-integrated modes, while the UV-regulated part tracks the model of the unintegrated modes. This can all be seen clearly by writing out the full sourced partition function Z[J] and invoking the normalization of the coarsening map:

Z[J] = ∫ Dχ e^{−S[χ] + J†χ} = ∫ Dχ̃ ∫ Dχ P_k[χ̃|χ] e^{−S[χ] + J†χ} = ∫ Dχ̃ exp( −½ χ̃†Δ_k^{−1}χ̃ ) e^{W_k[J̃[χ̃]]},    (7)

where J̃[χ̃] ≡ J + A_k†Δ_k^{−1}χ̃.
In the final expression, the normalizing constant C_k has been dropped. Readers familiar with the Polchinski formulation will immediately recognize W_k[J̃[χ̃]] as the effective interaction potential. However, the argument to this potential is shifted by the source J, which therefore enters nonlinearly, unlike in Polchinski's approach. This difference is due to the fact that we define a flow for each initial source configuration, instead of adding a linear source term to the vacuum flow.
To arrive at (7) above, we had to define the effective field-dependent source J̃ and identify a suitable deformation term in P_k[χ̃|χ]. By directly substituting (6), one can see that

ΔS_k[χ] = ½ χ† ( A_k† Δ_k^{−1} A_k ) χ.    (8)

As promised, the existence of a family of distributions P_k[χ̃|χ] with a known parameterization (A_k, Δ_k) allows us to define an IR regulator scheme, and therefore compute the NPRG flow both above and below the cutoff. The deformation term ΔS_k ultimately came from the χ² term present in the coarsening map, which could be interpreted as a free energy. We also identify immediately that the IR regulator R_k corresponding to a given choice of coarsening map is given by A_k†Δ_k^{−1}A_k. We will next use this viewpoint to introduce information bottleneck into the discussion. In particular, we will associate the coarsening map P_k[χ̃|χ] with the IB coarsening map P_β(x̃|x) and examine some consequences. This discussion comes with some restrictions. Firstly, one should note that all soft-cutoff NPRG frameworks, regardless of the structure of the microscopic action, assume a Gaussian coarsening map. With a non-Gaussian P_k[χ̃|χ], the flow may still be defined, but it will not, in general, satisfy any known exact flow equations. This is easiest to see in the IR Wetterich formalism, since a non-Gaussian P_k would yield a ΔS_k[χ] which is no longer bilinear in χ, and hence one could not write the flow equation in terms of the exact effective propagator, as it usually is. Indeed, the more general ΔS_k[χ] could have terms at arbitrarily high order in χ, and thus require arbitrarily high-order derivatives of Γ_k in the flow equation. So, while it is not impossible to seriously consider non-Gaussian P_k[χ̃|χ], it is certainly inadvisable without good reason.
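The identification R_k = A_k†Δ_k^{−1}A_k can be sanity-checked numerically: for any finite-dimensional Gaussian coarsening map, the induced deformation kernel is a symmetric, positive semi-definite quadratic form, as a regulator must be. The matrices below are random placeholders for a single value of k, not parameters from any specific scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Hypothetical coarsening-map parameters at one value of the cutoff k.
A_k = 0.5 * rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
Delta_k = B @ B.T + np.eye(n)            # noise covariance of the Gaussian map

# Regulator read off from the chi^2 term of the coarsening map.
R_k = A_k.T @ np.linalg.solve(Delta_k, A_k)

# The deformation (1/2) chi^T R_k chi must be a genuine quadratic
# suppression: R_k symmetric and positive semi-definite.
assert np.allclose(R_k, R_k.T)
assert np.min(np.linalg.eigvalsh(R_k)) >= -1e-9
print("R_k defines a valid quadratic deformation kernel")
```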
With this in mind, we must also note that IB has an exact solution involving Gaussian P_β(x̃|x), but only when the variables X and Y are jointly Gaussian. By analogy, this restricts us to discussing theories where the bare action S[χ], or perhaps more accurately, the bare Hamiltonian H[χ], contains only linear and bilinear terms in χ. While everything presented above holds for general S, everything that follows will be totally Gaussian, so that IB optimality can be exactly satisfied. Finally, note that IB may not be well-defined for infinite-dimensional random variables such as fields, so our scope is further limited to finite-dimensional multivariate Gaussian distributions of classical variables.

B. The Gaussian IB regulator scheme
In the last section, we briefly introduced soft-cutoff NPRG approaches and argued that both UV- and IR-regulated flows can be defined given a family of Gaussian coarsening maps P_k[χ̃|χ]. Broadly, we aim to show in this paper that IB and RG can be connected by identifying this map with the IB-optimal coarsening map P_β(x̃|x). By this we do not mean that the family of maps produced by IB is the "correct" starting point for NPRG. Instead, we simply note that IB optimality is a constraint one could impose on the coarse-graining scheme. Assuming we do so, what characteristics does the IB-RG scheme carry? Using the exact solution to GIB and Eq. (8), we identify in Eq. (9) the regulator, or soft-cutoff scheme, required by IB optimality for some known initial statistics P(x, y). Here the β_i are critical bottleneck values, indexed by the components of the so-called "natural" basis, which is found by diagonalizing the canonical correlation matrix Σ_X^{-1} Σ_{X|Y} as discussed in section II. The critical bottleneck values are given by β_i = (1 − λ_i)^{-1}. If V is the matrix of right eigenvectors of the canonical correlation matrix, then s_i is given by [V^T Σ_X V]_ii. Notice also that this regulator is diagonal in the natural basis. Θ denotes the Heaviside step function.
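The ingredients of this scheme are straightforward to compute numerically. The sketch below (the linear-Gaussian model relating X and Y, its dimensions, and all parameter values are illustrative assumptions, not taken from the text) builds the canonical correlation matrix, extracts the natural basis, and evaluates the critical bottleneck values β_i = (1 − λ_i)^{-1} and the factors s_i = [V^T Σ_X V]_ii:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative jointly Gaussian P(x, y): y = B x + noise (B, n are assumptions).
n = 4
B = rng.normal(size=(n, n))
Sigma_X = np.eye(n)
Sigma_noise = 0.5 * np.eye(n)
Sigma_Y = B @ Sigma_X @ B.T + Sigma_noise
Sigma_XY = Sigma_X @ B.T

# Conditional covariance and the canonical correlation matrix Sigma_X^{-1} Sigma_{X|Y}.
Sigma_XgY = Sigma_X - Sigma_XY @ np.linalg.solve(Sigma_Y, Sigma_XY.T)
C = np.linalg.solve(Sigma_X, Sigma_XgY)

# Right eigenvectors V and eigenvalues lambda_i define the "natural" basis.
lam, V = np.linalg.eig(C)
lam = np.real(lam)

# Critical bottleneck values and the s_i factors of the regulator scheme.
beta_crit = 1.0 / (1.0 - lam)
s = np.diag(V.T @ Sigma_X @ V)
```

Since 0 < λ_i < 1 whenever the noise covariance is nonsingular, every β_i is finite and greater than one, consistent with the regulator properties discussed below.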
In the typical context, R is diagonalized by a Fourier transform, and thus it represents a cutoff in wavevector or momentum. Here this notion is generalized, and instead of identifying a cutoff wavenumber k, we should consider the cutoff to be of information-theoretic origin, fundamentally defined by β. Consequently, the degree to which the mode labelled by i is coarsened should be found by comparing its corresponding critical value β_i to the cutoff β. As such, we can essentially make the replacements k² → β and q² → β_i, with the caveat that β and β_i should approach unity as k² and q² go to zero.
In Figure 1, we plot the R^(IB) obtained from the first toy model presented in section IV B and compare it against the well-known Litim regulator [30], denoted R^(L) and given in Eq. (11). Ignoring for now the particulars of the model, we point out that the IB and Litim regulators appear qualitatively similar, and that for fixed parameters t and η, all limits involving q and k satisfy the regulator scheme requirements. Moreover, we see that the NPRG and IB notions of mode relevance are in agreement: smaller canonical correlation eigenvalues λ (top plot) correspond to collective modes which get integrated out later in the flow. This is reflected in the structure of the soft cutoff, which increasingly suppresses fluctuations as q → 0.
Is it okay to take (9) seriously as an IR regulator scheme? Let us compare it with the conditions outlined in the last section.

FIG. 1. Top: eigenvalues of the canonical correlation matrix Σ_X^{-1} Σ_{X|Y} as a function of label q, which may be interpreted as a wavevector magnitude. Modes with smaller eigenvalue can be thought to carry more information about Y. Bottom: regulator values as a function of cutoff k and mode label q for the Litim scheme (11) and the IB scheme (black and blue, respectively).

The typical interpretation of the first requirement on R is that the lowest-energy modes should be given extra mass by the regulator so that they are "frozen out" of the configurational integral. In fewer words, there should be no soft modes at intermediate stages of the flow. By analogy, it must be true that (R_β)_11 > 0, where we take β_1 = min_i β_i to represent the most "relevant" mode (in the IB sense). Indeed, for all β, this is satisfied by (9). Next, R must vanish for the i-th mode when the cutoff β is taken sufficiently far below β_i; because of the step function, this is satisfied. Finally, each diagonal component (R_β)_ii should diverge as β → ∞ so that at zero compression, only the saddle-point configuration of the microscopic theory contributes to the generating function, or whichever thermodynamic potential we are interested in. If the β_i are all finite, then this limit holds as well [51].
Because it satisfies all of the properties required of a typical regulator in a soft-cutoff scheme, we call (9) the "IB regulator" and denote it R^(IB). This identification has some interesting consequences, which will be explored in the coming section. One particularly striking feature is that the cutoff scheme is now parameterized by the family of distributions P(x, y). In IB theory, these distributions formalize the notion of "important features" of X implicitly through its correlations with Y. This means that the RG scheme selected by a given set of IB solutions will not favor, for instance, "long-distance modes" unless P(x, y) is chosen to enforce that. Instead, the analogue of long-distance modes are those modes which carry the most information about Y. In section IV B we will attempt to clarify this by calculating the IB regulator explicitly in a simple, familiar context.

IV. CONSEQUENCES AND INTERPRETATIONS OF THE CORRESPONDENCE
A. The Blahut-Arimoto update scheme may displace the flow-equation description

The apparent goal of information bottleneck is to identify the coarsening map P_β(x̃|x) for some set of β values. This seems to align poorly with the problem statement and goals of NPRG, in which the coarsening map P_k[χ̃|χ] is taken as the starting point and used to derive the flow equations. Is it really true that solving IB only gets us to the starting point of an RG scheme, after which we still need to "do the RG part"? In this section, we investigate one way to resolve this dissonance by noting that the quantities one would usually consider to be the results of the NPRG flow can be used to parameterize P_β(x̃|x) itself. From this viewpoint, one may organize the computation around a set of self-consistent update equations instead of a set of flow equations.
The general IB problem can be solved, in principle, by iterating what is known as the Blahut-Arimoto (BA) procedure, which is borrowed from rate-distortion theory in a more general context [1]. This procedure relies on the fact that when P_β(x̃|x) is IB-optimal, it satisfies a self-consistency condition in which everything on the RHS is to be considered a function of P_β(x̃|x) through P_β(y|x̃) ∝ ∫dx P(y|x) P_β(x̃|x) P(x).
The function Z_β(x) normalizes P_β(x̃|x) and therefore also depends on P_β(x̃|x) through the above equations.
In brief, the BA procedure entails taking an estimate for P_β(x̃|x), plugging it into the IB optimality criterion above, and iterating until satisfactory convergence. In this way, we say that P_β(x̃|x) is self-determined. This procedure is practically very difficult, if not impossible, for distributions of multivariate continuous variables in general. However, in the case of GIB, we can parameterize the distributions and then use Gaussian integral identities to update these parameters exactly. Chechik et al. [29] carry out this procedure in terms of the matrices A and Σ_ξ, used to define X̃ = AX + ξ. We repeat this computation but instead parameterize the update equations using Σ_X̃, Σ_{X|X̃}, and Σ_{XX̃}. The first two of these represent objects of interest in the UV- and IR-regulated parts of the NPRG scheme, respectively. The third quantity, Σ_{XX̃}, carries information about how the IR degrees of freedom X̃ are coupled to the original, UV variables X. In very condensed form, the BA update equations in this parameterization involve matrices B and Σ_{X̃|X} which can be expressed in terms of β, P(x, y), and the current estimate for the parameterization of P(x̃, x). The full expressions are complicated and given in appendix C. Note that Σ_{X|X̃} represents the IR-regulated flow; it is directly analogous to the effective propagator G_k in the Wetterich formalism. In other words, given that we are only looking at Gaussian statistics, the function W_β(J) (or Γ_k) can be reconstructed simply from Σ_{X|X̃}. Next, Σ_X̃ represents the UV-regulated part, since the probability distribution describing X̃ can be reconstructed from it.
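For intuition, the BA iteration is easy to state for discrete variables. The sketch below implements the generic, non-parametric version from rate-distortion theory (not the Gaussian parametric updates of appendix C); the small joint distribution and the iteration count are arbitrary illustrative choices:

```python
import numpy as np

def ib_blahut_arimoto(p_xy, n_t, beta, n_iter=200, seed=0):
    """Generic discrete Blahut-Arimoto iteration for IB.

    p_xy: joint P(x, y) as a 2-D array; n_t: cardinality of the
    compressed variable t; beta: bottleneck parameter."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                     # P(x)
    p_y_x = p_xy / p_x[:, None]                # P(y|x)
    q_t_x = rng.random((n_t, p_x.size))
    q_t_x /= q_t_x.sum(axis=0)                 # random initial P(t|x)
    for _ in range(n_iter):
        q_t = q_t_x @ p_x                      # P(t) = sum_x P(t|x) P(x)
        q_y_t = (q_t_x * p_x) @ p_y_x / q_t[:, None]   # P(y|t)
        # D_KL[P(y|x) || P(y|t)] for every (t, x) pair
        kl = np.sum(p_y_x[None, :, :] *
                    (np.log(p_y_x[None, :, :] + 1e-30) -
                     np.log(q_y_t[:, None, :] + 1e-30)), axis=2)
        q_t_x = q_t[:, None] * np.exp(-beta * kl)
        q_t_x /= q_t_x.sum(axis=0)             # Z_beta(x) normalization
    return q_t_x

# Small sanity run on an arbitrary 3x2 joint distribution.
p_xy = np.array([[0.30, 0.05], [0.05, 0.30], [0.10, 0.20]])
q = ib_blahut_arimoto(p_xy, n_t=2, beta=5.0)
```

Each pass recomputes P(t), P(y|t), and the exponential-of-KL update in turn, which is exactly the self-determination of P_β(x̃|x) described above.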
We reiterate that this self-consistent updating scheme comes from IB optimality, written in terms of objects we would usually calculate in NPRG. The idea of a self-consistent updating scheme which determines the IR-regulated statistics and the UV-regulated dynamics simultaneously is interesting. In addition to essentially replacing the flow-equation description, it is very non-perturbative in nature. However, it seems wrong that imposing a constraint on P(x̃|x) should make anything easier, especially given that IB enforces a goal which is only sometimes aligned with the typical goals of RG analysis. A natural question, then, is whether IB has actually provided any new leverage. More precisely, if we really have given up the flow equation in favor of a self-consistency scheme, does this new scheme actually help to calculate the objects of interest, as the flow equation usually would? If so, why would IB optimality be necessary?
In the case of general, i.e. non-Gaussian, P(x, y), the integration ∫dx P(y|x)P(x̃|x) cannot be carried out directly. This is equivalent to the statement that at (and below) intermediate values of k in NPRG, W_k[J] cannot be computed directly from its integral representation. The whole point of Wilsonian RG is to get around this integration step by connecting W_k to W_{k→∞} = 0 via a known flow equation. So, to answer our question, the IB update scheme may actually provide the same leverage, but only if (1) we can represent the BA procedure parametrically, and (2) the derivation of that parametric representation does not require the explicit marginalization over x to obtain P(y|x̃). The updates we present above for the fully Gaussian problem satisfy the first requirement, but fail the second, since we explicitly carried out Gaussian integrals over x in the derivation. It is therefore unclear at this point whether some structure in IB could allow us to estimate P(y|x̃) parametrically, which seems to be a prerequisite for the utility of a more general IB-RG framework in which IB is exactly enforced. Finally, we note that these conditions are necessary but not sufficient, since further integration steps may be required to complete the BA update, for example in computing D_KL[P(y|x)||P(y|x̃)] and in going from an updated P(x̃, x) back to the moments of P(x̃|x).
In principle, the self-consistent structure imposed by IB optimality obviates the need for a traditional cutoff/flow-equation description. However, the opposite is also true: if the cutoff scheme and flow equation are known, then the self-consistency conditions are displaced. Because GIB is exactly solvable, we are able to examine both approaches here. In Eq. (9), we present a soft cutoff scheme which arises from the constraint of GIB optimality, but it is given in terms of quantities which have no physical context, so it is hard to say a priori how it relates structurally to existing cutoff schemes. In the next section, we consider a toy model which provides this physical context and therefore affords us a glimpse into how IB-optimal NPRG schemes differ structurally from those already employed.
B. Collective modes are not always Fourier: a minimal example

In the Wetterich NPRG, the cutoff is enforced through a deformation ΔS_k[χ] = ½ χ^† R_k χ added to the bare action or Hamiltonian. In section III B, we identified this structure as the free energy of a Gaussian coarsening map from the bare degrees of freedom χ to some compressed representation χ̃. We then defined the IB regulator through the deformation produced when the map solves the Gaussian information bottleneck problem, and showed that it satisfies the various "design" constraints traditionally placed upon a regulator. An immediate consequence of this construction is that the regulator design space is now parameterized by the joint distributions P(x, y) which define the starting point of IB, and for many such distributions, the preferred basis selected by IB will look nothing like Fourier modes. Of course, for finite systems not organized on a lattice, this is unsurprising; the Fourier basis will not exist in any familiar sense. However, for practitioners of NPRG, it may cause discomfort to consider a regulator R_{v(β)}(u) in which the numbers v and u do not represent radii in momentum space. In contrast, for the majority of applications, the standard cutoff scheme is provided by the Litim regulator, R_k^(L)(q) = (k² − q²) Θ(k² − q²), which should be interpreted as a soft momentum-space cutoff. The Litim regulator sees widespread use both because it is optimized to give good convergence properties in certain contexts [52], and because its simple form often leads to analytically expressible flow equations (after appropriate truncation procedures) [30, 53].

The IB regulator R_β^(IB) given in (9) does not manifestly have any such nice qualities, and in the general case may be difficult to interpret. In this section, we calculate R_β^(IB) explicitly in a trivial statistical field theory problem to explore its structure in a familiar context and to address some of its non-intuitive features. For our model, we consider a real scalar field χ(x) in d dimensions at equilibrium and finite temperature k_B T = 1. This fluctuating field will serve as the "input variable" X. We also add a disordered source field h(x), which will serve as the "relevance variable" Y.
We also give Gaussian statistics to the disorder; in our condensed notation, the above equations are re-expressed accordingly. Together, the Boltzmann weight H[χ|h] and the distribution P[h] describing the disorder statistics constitute a joint distribution P[χ, h] which is jointly Gaussian, and thus (momentarily casting aside worries about the continuously infinite-dimensional random variables) a valid starting point for GIB. From the IB standpoint, the goal would usually be to construct a coarsened field χ̃(x) which discards some information about χ while encoding as much as possible about the statistics of h. However, the goal here is not to discuss χ̃, but rather to better understand the NPRG cutoff scheme that IB imposes as a consequence of this starting point. Since we have assumed a canonical form for the bare Green's function G_0^{-1} and the source term is h • χ, the only remaining control over P[χ, h] is the two-point correlation of h, which we denote H(x_1, x_2). To explore different forms of R_β^(IB), we therefore consider three different constructions of H. First, we choose h to be totally uncorrelated at different points, with a constant variance at each point. Second, we choose H diagonal in the Fourier basis, but with some dispersion that adds position-space correlations. In both of these first examples, we will arrive at regulators with momentum-space cutoffs. The goal of the third case is to present an H which is not diagonal in the momentum basis, thereby introducing a non-momentum cutoff structure.

IB regulator when disorder correlations are diagonal in momentum space
In the first and simplest case, we take H to be a δ-function multiplied by some constant factor η. Since the Fourier transform F is unitary [54], the momentum-space representation of H is unchanged from its position-space representation. The first step in a GIB analysis is constructing the canonical correlation matrix Σ_X^{-1} Σ_{X|Y}, where we have chosen X ↔ χ and Y ↔ h. After a calculation involving only Gaussian integral identities and our definition of P[χ, h], we obtain the correlation kernel. Next, we find its right eigenfunctions V(x, u) and corresponding eigenvalues λ(u). For our choice of H, these are plane waves, with eigenvalues λ(q) = 1/(1 + η G̃_0(q)), where G̃_0(q) = 1/(t + q²) is obtained after Fourier transform of G_0. To finally obtain the IB regulator in a familiar form, we would like to express it completely in terms of q, k, and the various other parameters introduced in this application. However, equation (9) gives us R^(IB) in terms of the bottleneck parameter β, which has not yet been defined in this application.
The crucial insight is to note that β serves essentially the same role as k in the typical theory. To find the explicit map between the two, we use the fact that the critical bottleneck values β(q) are defined in terms of the canonical correlation eigenvalues λ(q) through β(q) = (1 − λ(q))^{-1}. In this model, the critical bottleneck values are β(q) = 1 + (t + q²)/η. Using this map, we can replace β with β(k), where k is the usual momentum cutoff. Doing so, we find that the IB regulator can be neatly expressed in terms of the Litim regulator; in particular, the limit η → 0 gives R^(IB) → R^(L). It is interesting that the Litim regulator appears in this expression, since its derivation invokes optimality principles which are not obviously connected to information bottleneck.
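As a quick numerical sanity check on this mapping (a sketch; the values of t and η, the q grid, and the cutoff k are arbitrary choices, and the Litim comparison uses its standard form (k² − q²)Θ(k² − q²)):

```python
import numpy as np

# Illustrative parameters of the toy model.
t, eta = 0.5, 1.0
q = np.linspace(0.0, 3.0, 301)

G0 = 1.0 / (t + q**2)            # Fourier-space bare propagator G~_0(q)
lam = 1.0 / (1.0 + eta * G0)     # canonical correlation eigenvalues lambda(q)
beta_q = 1.0 / (1.0 - lam)       # critical bottleneck values beta(q)

# beta(q) = 1 + (t + q^2)/eta is monotone increasing in q: long-wavelength
# (small-q) modes have the smallest beta(q) and are integrated out last.
assert np.allclose(beta_q, 1.0 + (t + q**2) / eta)
assert np.all(np.diff(beta_q) > 0)

# Litim regulator on the same grid, for comparison (k chosen off-grid).
k = 1.505
R_litim = (k**2 - q**2) * (k**2 > q**2)
assert np.all(R_litim >= 0)
```

Under the replacements k² → β(k) and q² → β(q), the step function suppresses exactly the modes with q < k, mirroring the Litim structure.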

Momentum-space IB regulator with dispersion in disorder correlations
Without changing our decision to make H diagonal in the Fourier basis, we can also add q-dependence to η. In this case, the steps taken above are essentially unchanged, and we end up with a slightly different regulator. With some manipulations, one could optionally rewrite this in terms of (1 − x)Θ(1 − x) in order to appeal to the Litim description once again.
A new feature appears in the regulator scheme when η is given q-dependence: for extreme choices of η, the ordering of modes can actually be reversed.

FIG. 2. A depiction of the IB problem applied to a Gaussian field theory for d = 2, as described in Eq. (12). Each column represents a different random variable (X̃, X, or Y) in the IB problem, while each row depicts a sample drawn from the joint distribution between them. Using h as the relevance variable Y, the GIB-optimal coarsened field χ̃ can be constructed through non-deterministic coarsening of χ, as depicted by the arrows. The Lagrange multiplier β_1 controls the trade-off between minimizing the mutual information between χ̃(β_1) and χ while maximizing the mutual information between χ̃(β_1) and h. According to the semigroup structure described in Sec. II, this process can be repeated to generate χ̃(β_2 ∘ β_1) through a non-deterministic mapping from χ̃(β_1) with compression level β_2.

To see how this reversal is possible, note that fundamentally it is the IB parameter β which sets the cutoff, while the critical values β(q) define the mapping to q. Therefore, by picking, e.g., η(q) ∼ G̃_0^{-2}(q), one achieves a β(q) which decreases monotonically with q, meaning that longer-wavelength modes (lower q) actually get integrated out before shorter ones. However, this construction presents some pathologies and is hard to interpret in the truly continuous case, so we will not explore it further here.
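The reversal is easy to see concretely by repeating the previous numerical sketch with η(q) ∝ G̃_0^{-2}(q) (the value of t and the grid remain arbitrary illustrative choices); β(q) then reduces to 1 + G̃_0(q), a decreasing function of q:

```python
import numpy as np

t = 0.5
q = np.linspace(0.0, 3.0, 301)
G0 = 1.0 / (t + q**2)

eta = G0**-2                        # extreme dispersion: eta(q) ~ G0^{-2}(q)
lam = 1.0 / (1.0 + eta * G0)        # canonical correlation eigenvalues
beta_q = 1.0 / (1.0 - lam)          # = 1 + G0(q)

assert np.allclose(beta_q, 1.0 + G0)
assert np.all(np.diff(beta_q) < 0)  # ordering reversed: beta(q) decreases with q
```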

Explicit form of the IB regulator in a more general case
In the last section we assumed a form of H which was diagonal in the Fourier basis. This assumption led to a regulator scheme which could be interpreted as a soft cutoff in momentum space. In this section we explore an example in which H is no longer diagonal in the Fourier basis: we take H to be built from the fractional Fourier transform F_α through angle α, with η a constant. Under this definition, we can again compute Σ_χ and find the spectrum of Σ_χ^{-1} Σ_{χ|h}. This yields eigenfunctions analogous to the plane-wave solutions of the last section, but indexed by a new parameter u which can be interpreted neither as a position nor as a wavenumber. Here, the notation [•] indicates that V^† is best conceptualized as a functional parameterized by u, where for instance the collective modes of χ(x) would be given by V^†[χ](u). Stated differently, the leftmost operator G̃_0^{1/2} is evaluated at u, and the rightmost is a Fourier transform over the integrand [•]. Unfortunately, this solution is only formal and cannot be visualized in the same manner as plane waves. In a true field theory, even with this trivial Gaussian setup, both H(x_1, x_2) and V(x, u) are poorly behaved when written as functions of x. When written as an integral in q, V diverges as |q_max| → ∞, and is discontinuous in both x and u. One way to conceptualize this is by comparison with G_0^{-1}, which includes ∇² and thus cannot be written in terms of elementary functions of x. After a Fourier transform, we can replace the operator description with a simple function of the continuous variable q. Similarly, although we cannot express H and V as functions of x, the various operators we are interested in can be written simply in the non-orthogonal basis defined by V. It is hard to say what the label u physically represents, beyond being a parameter that defines and orders the collective modes χ̃(u) = V^†[χ](u) of the system. Despite this, the regulator maintains its simple form, where now v takes the role of the cutoff, replacing k as u has replaced q. That is,
the collective modes labelled by u are ordered in terms of their predictiveness about the disordered source field h. GIB then imposes a soft-cutoff scheme at a scale v, which is a proxy for the bottleneck parameter β, as k was in the Fourier case. We stress that these labels v and u are defined by the correlation structure of P[χ, h] and have no simple intrinsic physical meaning. Without significantly more effort, all we can say is that a mode labelled u_1 carries more information about the disorder h than a mode labelled u_2 > u_1.

Many of the difficulties present in this discussion, such as the poorly behaved character of the collective modes V(x, u) and the disorder correlator H(x_1, x_2), as well as the non-intuitive nature of the mode labels u and v, stem from a common cause: IB is only suited to the analysis of systems with finitely many degrees of freedom, and field theories have infinitely many. The calculations above were nonetheless performed in this context to demonstrate that IB defines collective modes of a system and establishes a cutoff scheme which, in general, differs from traditional notions of relevance, as represented by the Fourier basis and momentum cutoff. This idea could be crucial to understanding collective behavior in systems without clear notions of locality or organization. Such problems abound in, for example, the brain, where long-distance connections between brain areas are common and important for computation, while information is also spread across many areas and recombined for important, multi-modal tasks. The recurrent, highly interconnected, and still computationally efficient structure of the brain renders the simple notion of physical distance between cells rather limiting.

C. The relevance variable Y can have many physical interpretations
Gaussian IB begins with a choice of joint distribution P(x, y). As we have discussed, this distribution gives a constrained parameterization of a cutoff scheme analogous to the one employed in Wetterich NPRG. In the last section, we showed that not all choices of P(x, y) lead to collective modes V^T X which have a canonical interpretation, such as Fourier modes. That discussion was carried out under the assumption that the relevance variable Y pertains to a source field with some disorder statistics. Generally speaking, this is only one way of constructing Y. Even within the constraint that P(x, y) be jointly Gaussian, the physical interpretations of x and y can vary. Here we briefly discuss some of these alternative interpretations.
First, Y may represent the environment of a set of variables X. This scenario is analogous to the one presented by Koch-Janusz et al. [12]. Consider a collection of spins on a lattice, and choose some enclosed region. Let X be the state of the spins in that region and let Y denote the state of those outside. In the case that these spins have Gaussian statistics, this is a valid starting point for GIB. With this setup, we expect that the most relevant collective modes would vary relatively slowly in position. In fact, Gordon et al. recently formalized this idea for field theories not restricted to Gaussian statistics [13]. They consider a "buffer" zone between X and Y whose size is taken to infinity. In this limit, the first collective variables encoded by IB at strong compression (low β in our notation) correspond to the operators with the smallest scaling dimensions, and hence the most relevant operators in the RG sense. Their approach is therefore promising for the analysis of systems with local interactions whose order parameter is not known a priori. More fundamentally, they have shown that Y and X can be chosen to enforce a traditional, "physical" definition of relevance.
Second, consider a stationary stochastic process with Gaussian statistics both in time and across variable indices. We could choose X to represent the current state of the system while Y represents its future. Here, the most relevant modes are those projections of X which vary the slowest. In fact, if we suppose that time has been properly discretized, this interpretation of the GIB problem is equivalent to a certain class of slow feature analysis problems [55].
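This setting is easy to check numerically in a minimal case. The sketch below (the decay rates are illustrative choices, not from the text) builds two independent Gaussian AR(1) modes, takes X to be the present state and Y the state one step in the future, and confirms that the slower mode has the smaller canonical correlation eigenvalue, i.e. is the more relevant in the IB sense:

```python
import numpy as np

# Two independent AR(1) modes x_{t+1} = a_i x_t + unit-variance noise.
a = np.array([0.95, 0.30])            # slow and fast decay rates (illustrative)
var = 1.0 / (1.0 - a**2)              # stationary variances

Sigma_X = np.diag(var)                # X = present state
Sigma_Y = np.diag(var)                # Y = state one step ahead (stationary)
Sigma_XY = np.diag(a * var)           # cov(x_t, x_{t+1}) = Sigma_X A^T

# Canonical correlation eigenvalues lambda_i of Sigma_X^{-1} Sigma_{X|Y}.
Sigma_XgY = Sigma_X - Sigma_XY @ np.linalg.solve(Sigma_Y, Sigma_XY.T)
lam = np.diag(np.linalg.solve(Sigma_X, Sigma_XgY))   # = 1 - a_i^2 here

# The slower mode (larger a) has the smaller eigenvalue, so it carries
# more predictive information and is integrated out later in the flow.
assert lam[0] < lam[1]
```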
Third, we can imagine another dynamical system, in which the variables X are driven by a stochastic signal Y such that the joint distribution is Gaussian and stationary. Now the most relevant features of X are no longer simply the slowest-varying components. The cutoff scheme we find will depend on the statistics which generate Y, the manner in which Y couples to X, the internal dynamics of X, and whether we take Y to be in the past, future, or present.
Together with the example from the last section, in which Y fulfilled the role of a disordered source field, these examples span a number of physically interesting scenarios. Certainly, more are possible. Any valid interpretation will generally consist of a set of random variables {Z_i} obeying a jointly Gaussian distribution, which is then partitioned into two or three disjoint sets: the first is {X_n}, the second is {Y_m}, and the third, which is optional, is a dummy set containing every Z_i which we do not care to include in the model. In the case that these sets are not disjoint, it is possible for X and Y to become deterministically related, which is an invalid starting point for GIB. Finally, we note that while this framework allows for some discussion of systems involving dynamics, it is poorly suited for application to general stochastic processes, as the distribution P(X, Y) must be stationary. This also means that the connections drawn here between GIB and NPRG are not meant to cover the more general, dynamical NPRG framework often seen in the nonequilibrium statistical mechanics literature [49, 56, 57]. However, given the importance of both IB and the dynamical NPRG to applications in nonequilibrium settings, we believe that a more general framework is in demand.

V. CONCLUSION
In this manuscript, we have examined structural similarities between the Gaussian information bottleneck problem and a class of RG techniques involving soft cutoffs. Our main result is the identification of the crucial connection between the two: a non-deterministic coarsening map. In NPRG, this map defines both the UV-regulated coarse-grained Hamiltonian of the Wilson-Polchinski picture and the IR-regulated free energy used in the Wetterich approach. Therefore, one can rigorously connect IB to RG by requiring that this coarsening map solve a particular IB problem. In doing so, one parameterizes a space of soft cutoff schemes in terms of the IB relevance-variable statistics P(x, y). Additionally, one can identify the structures in an IB problem which are analogous to UV and IR cutoffs in RG.
While we believe that this connection holds for more general IB problems, we limited our discussion to Gaussian statistics for two main reasons. First, NPRG coarsening maps are always Gaussian, since this leads to simpler flow equations with physical interpretations. Second, to be compatible with this first consideration, we studied only the GIB problem, which has exactly known solutions that are Gaussian [29].
Another result was to show that the GIB coarsening map satisfies a semigroup property. In particular, we identified an explicit function b(β) which multiplies under composition of coarsening maps, in a manner analogous to the length scale in a traditional RG setting. Although the typical role of semigroup structure in RG theory is the identification of anomalous exponents, it is beyond the scope of this manuscript to assign a similar task to b(β). More immediately, the presence of this structure within GIB raises the question of whether it may be present in IB schemes more generally. If so, would an iterative coarse-graining scheme consisting of repeated low-compression transformations be advantageous as an analysis technique?
By explicitly comparing the set of GIB solutions provided by Chechik et al. with a generic NPRG scheme, we identified the IR cutoff scheme present in GIB, Eq. (9). A similar analysis can be carried out to identify the UV cutoff, but doing so involves a discussion of reparameterization which we felt would distract from the main points. Direct computations on a toy model showed that the IB regulator shares some characteristics with the ubiquitous Litim regulator [30]. An important generalization is that IB selects the collective mode basis according to which features of the system state X are most informative about Y, whatever it is chosen to be. We gave a simple example in which this collective mode basis could not be interpreted as a Fourier basis. In general this will be the case, though depending on how Y is defined, one may still arrive at collective modes which are essentially Fourier in nature. One piece of analysis we did not carry out is the connection of IB to the dynamical NPRG; for non-equilibrium problems involving IB, such as the predictive coding problem, this may be a fruitful avenue for further work.
Next, we note that IB is generally extremely difficult to solve, so restricting an NPRG scheme to a family of exact IB solutions is completely unrealistic without significant advances in IB theory. One avenue of attack is to find better ways of solving IB. As outlined in section IV A, a more general parametric Blahut-Arimoto scheme would be very powerful in this context, since it could essentially replace the flow-equation description with a self-consistency scheme at each cutoff value. However, given that the exact Gaussian form we derive is complicated, this seems unlikely to work. A more realistic approach to a practical IB-RG implementation is to relax the IB-optimality constraint. We suggest that even in a non-Gaussian setting, one could directly calculate the IB regulator (9) proposed here and use the NPRG flow equations in exactly the same way. While the resulting statistics would no longer be exactly IB-optimal, this procedure is no more difficult than any other NPRG implementation, and may produce qualitatively similar results to an exact IB solution.
We reiterate that not all IB problems will benefit from the RG connections presented here, and vice versa. Ideally, the problem in question involves a system with a large but finite number of degrees of freedom X statistically coupled to a similarly large number of random variables Y. Finiteness is required by IB, but because of the construction of the NPRG, this is not an issue: the flow is defined exactly even in the absence of a traditional rescaling step, which would be illegal in a finite system since it adds more modes. Biophysical systems, for example, may be particularly well suited to IB-RG analysis, because Y can be chosen to have biological relevance, and the cutoff scheme will then define and prioritize the collective modes that are most informative about that function. Biological systems all have size and energy constraints that make the efficient compression of inputs from the external world critical for survival. Balancing that, and just as important for function, organisms also have clear preferences for what is relevant in that external signal, namely which aspects can be used to drive behavior that confers a fitness benefit. The IB framework helps cast behavioral relevance as the prime mover in input compression, while the RG can help show how this kind of computation is achieved. Uniting these theories can provide a way to pull together normative notions of relevance with their mechanistic implementation.
In order to ensure that both A_1 and A_2 are diagonal, we project X into the natural basis with the replacement X → V^T X. Note that A_2 is automatically diagonal, because the first compressed representation X̃_1 = A_1 X + ξ_1 is already in the natural basis. After this transformation, the optimality condition (A1) is simplified, because the V^{-1} matrices have been absorbed into the definition of X. Now we explicitly compute A_1 and A_2 from (2); the latter two equations must be re-expressed in terms of the original X-Y statistics, represented by λ_i and s_i.
Solving for λ, we have: Now, directly evaluating A_1 and s_i, we get the following for λ and s: Using these last two expressions, A_2 can be expressed directly in terms of s and λ. By direct substitution, we can rank these modes in terms of their information content about Y. In the toy model, we begin with physical definitions for the statistics in Eqs. (12) and (13). Then, by interpreting χ as the input variable X and the disorder h as the relevance variable Y, we ask what the structure of the resulting GIB-regularized NPRG scheme looks like. As in any other GIB problem, we must first calculate the canonical correlation Green's function, Σ_χ^{-1} Σ_{χ|h}. Two Green's functions come directly from the definitions: To find Σ_χ, we need Σ_{χh}, which we get through μ_{χ|h}: We compute this mean by looking at the Hamiltonian for χ with frozen disorder h: Hence we can identify Σ_{χh}: Now, we use the Schur complement formula to identify Σ_χ: Once the canonical correlation Green's function is known, one calculates its eigenfunctions (or eigenvectors, in the usual, finite-dimensional case) and eigenvalues. In the main text, we consider three constructions of H, which altogether yield two eigenbases: Fourier and non-Fourier. Let us first calculate the spectrum λ(q) for case 2 in section IV B, which also covers the analysis of case 1.
where H represents a "diagonal" function, H(q_1, q_2) = η(q_1) δ^d(q_1 − q_2). The frozen-disorder propagator G_0 is also diagonal in the Fourier basis: We use Ĝ_0(q) to represent both the function (t + q^2)^{-1} and the diagonal kernel (t + q^2)^{-1} δ^d(q_1 − q_2) interchangeably, as needed. Using the expression for Σ_χ^{-1} Σ_{χ|h} derived in the last section, we have: Since F is unitary and both H and Ĝ_0 are diagonal, we have:

V(x, q) = F^†(x, q) = (2π)^{-d/2} e^{iq·x}, λ(q) = 1/(1 + η(q) Ĝ_0(q))

b. Non-Fourier collective basis

Next, we carry out the same computation for case 3, in which H is not diagonal in the Fourier basis, and so neither is the canonical correlation Green's function. Written formally, the disorder correlator is: where F_α is the d-dimensional fractional Fourier transform. The 1-dimensional version is defined as: This transform is unitary and satisfies F_α F_β = F_{α+β}. Hence we arrive at the eigendecomposition: In the main text, we refrain from writing V^† as a kernel V^†(u, x), because it is discontinuous and divergent. This is more evident when it is expressed in integral form: where, e.g., u^2 = u·u = u_1^2 + u_2^2 + ... + u_d^2.
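To make the closed-form GIB construction used in these appendices concrete, here is a minimal numerical sketch: it builds the canonical correlation matrix Σ_X^{-1} Σ_{X|Y} for a randomly generated joint Gaussian, diagonalizes it, and assembles the noisy linear encoder mode by mode. The dimensions, the random joint covariance, and the trade-off parameter β are illustrative assumptions, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y = 4, 3                      # illustrative dimensions

L = rng.standard_normal((n_x + n_y, n_x + n_y))
Sigma = L @ L.T                      # random positive-definite joint covariance
Sxx, Sxy = Sigma[:n_x, :n_x], Sigma[:n_x, n_x:]
Syy = Sigma[n_x:, n_x:]

# Conditional covariance and the canonical correlation matrix Sxx^{-1} Sxx|y
Sx_given_y = Sxx - Sxy @ np.linalg.solve(Syy, Sxy.T)
M = np.linalg.solve(Sxx, Sx_given_y)

lam, V = np.linalg.eig(M)            # eigenvalues lambda_i lie in (0, 1)
lam, V = lam.real, V.real
order = np.argsort(lam)              # smallest lambda = most informative about Y
lam, V = lam[order], V[:, order]

beta = 10.0                          # IB trade-off parameter (assumed)
rows = []
for i in range(n_x):
    s = V[:, i] @ Sxx @ V[:, i]      # s_i = v_i^T Sigma_X v_i
    if beta * (1.0 - lam[i]) > 1.0:  # mode passes through the bottleneck
        alpha = np.sqrt((beta * (1.0 - lam[i]) - 1.0) / (lam[i] * s))
        rows.append(alpha * V[:, i])
    else:
        rows.append(np.zeros(n_x))   # mode is discarded entirely
A = np.vstack(rows)                  # encoder: X_1 = A X + xi, xi ~ N(0, I)
```

As β grows, the condition β(1 − λ_i) > 1 admits progressively less informative modes, which is the semigroup structure discussed in the abstract: successive compressions remain IB-optimal.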

FIG. 1. IR regulators compared between the Litim and IB schemes. The IB problem depicted here is from the toy model discussed in section IV B, for the simple case where the collective modes selected by IB are Fourier and the disorder correlation has no dispersion (η is constant). Top: eigenvalues of the canonical correlation matrix Σ_X^{-1} Σ_{X|Y} as a function of the label q, which may be interpreted as a wavevector magnitude. Modes with smaller eigenvalues can be thought of as carrying more information about Y. Bottom: regulator values as a function of cutoff k and mode label q for the Litim scheme (11) and the IB scheme (black and blue, respectively).
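The quantities in the two panels of Fig. 1 can be sketched directly from the closed-form expressions above: the spectrum λ(q) = 1/(1 + η Ĝ_0(q)) with Ĝ_0(q) = (t + q^2)^{-1} for the dispersionless case (top), and the Litim regulator R_k(q) = (k^2 − q^2) θ(k^2 − q^2) (bottom). The mass t, disorder strength η, cutoff k, and q grid below are arbitrary illustrative choices.

```python
import numpy as np

t, eta = 1.0, 2.0                  # mass and constant disorder strength (assumed)

def G0(q):
    """Frozen-disorder propagator (t + q^2)^{-1}, diagonal in Fourier basis."""
    return 1.0 / (t + q**2)

def lam(q):
    """Canonical-correlation spectrum lambda(q) = 1/(1 + eta * G0(q)) (top panel)."""
    return 1.0 / (1.0 + eta * G0(q))

def litim(q, k):
    """Litim IR regulator R_k(q) = (k^2 - q^2) * theta(k^2 - q^2) (bottom panel)."""
    return (k**2 - q**2) * (q**2 < k**2)

q = np.linspace(0.0, 3.0, 301)
spectrum = lam(q)                  # increases with q: small-q modes carry most
                                   # information about the disorder h
R = litim(q, k=1.0)                # vanishes for q > k, so those modes are
                                   # unregulated and are integrated out first
```

The monotonicity of λ(q) is what lets the IB cutoff be read as a wavevector cutoff in this simple case: lowering the bottleneck threshold removes modes from large q inward, paralleling the Litim scheme's sharp support on q < k.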