Relative stability toward diffeomorphisms indicates performance in deep nets

Understanding why deep nets can classify data in large dimensions remains a challenge. It has been proposed that they do so by becoming stable to diffeomorphisms, yet existing empirical measurements suggest that this is often not the case. We revisit this question by defining a maximum-entropy distribution on diffeomorphisms, which allows us to study typical diffeomorphisms of a given norm. We confirm that stability toward diffeomorphisms does not strongly correlate with performance on benchmark data sets of images. By contrast, we find that the stability toward diffeomorphisms relative to that of generic transformations, $R_f$, correlates remarkably with the test error $\epsilon_t$. It is of order unity at initialization but decreases by several decades during training for state-of-the-art architectures. For CIFAR10 and 15 known architectures, we find $\epsilon_t\approx 0.2\sqrt{R_f}$, suggesting that obtaining a small $R_f$ is important to achieve good performance. We study how $R_f$ depends on the size of the training set and compare it to a simple model of invariant learning.


Introduction
Deep learning algorithms LeCun et al. (2015) are now remarkably successful at a wide range of tasks Amodei et al. (2016); Huval et al. (2015); Mnih et al. (2013); Shi et al. (2016); Silver et al. (2017). Yet, understanding how they can classify data in large dimensions remains a challenge. In particular, the curse of dimensionality associated with the geometry of space in large dimension prohibits learning in a generic setting Luxburg and Bousquet (2004). If high-dimensional data can be learnt, then they must be highly structured.
A popular idea is that during training, hidden layers of neurons learn a representation Le (2013) that is insensitive to aspects of the data unrelated to the task, effectively reducing the input dimension and making the problem tractable Ansuini et al. (2019); Recanatesi et al. (2019); Shwartz-Ziv and Tishby (2017). Several quantities have been introduced to study this effect empirically. They include (i) the mutual information between the hidden and visible layers of neurons Saxe et al. (2019); Shwartz-Ziv and Tishby (2017), (ii) the intrinsic dimension of the neural representation of the data Ansuini et al. (2019); Recanatesi et al. (2019) and (iii) the projection of the label of the data on the main features of the network Kopitkov and Indelman (2020); Oymak et al. (2019); Paccolat et al. (2021a), the latter being defined from the top eigenvectors of the Gram matrix of the neural tangent kernel (NTK) Jacot et al. (2018). All these measures support that the neuronal representation of the data indeed becomes well-suited to the task. Yet, they are agnostic to the nature of what varies in the data that need not be represented by hidden neurons, and thus do not specify what it is. A related idea is that the true function is highly anisotropic, in the sense that it depends only on a linear subspace of the input space Bach (2017); Chizat and Bach (2020); Ghorbani et al. (2019, 2020); Paccolat et al. (2021a); Refinetti et al. (2021); Yehudai and Shamir (2019). For image classification, such an anisotropy would occur for example if pixels on the edge of the image are unrelated to the task. Yet, fully-connected nets (unlike CNNs) acting on images tend to perform best in training regimes where features are not learnt Geiger et al. (2020, 2021); Lee et al. (2020), suggesting that such a linear invariance in the data is not central to the success of deep nets.
Instead, it has been proposed that images can be classified in high dimensions because classes are invariant to smooth deformations or diffeomorphisms of small magnitude Bruna and Mallat (2013); Mallat (2016). Specifically, Mallat and Bruna could handcraft convolution networks, the scattering transforms, that perform well and are stable to smooth transformations, in the sense that $f(x) - f(\tau x)$ is small if the norm of the diffeomorphism $\tau$ is small too. They hypothesized that during training deep nets learn to become stable and thus less sensitive to these deformations, thus improving performance. More recent works generalize this approach to more common CNNs and discuss stability at initialization Bietti and Mairal (2019a,b). Interestingly, enforcing such a stability can improve performance Kayhan and Gemert (2020).
Answering whether deep nets become more stable to smooth deformations when trained, and quantifying how this affects performance, remains a challenge. Recent empirical results revealed that small shifts of images can change the output a lot Azulay and Weiss (2018); Dieleman et al. (2016); Zhang (2019), in apparent contradiction with that hypothesis. Yet in these works, image transformations (i) led to images whose statistics were very different from those of the training set or (ii) were cropping the image, and thus are not diffeomorphisms. In Ruderman et al. (2018), a class of diffeomorphisms (low-pass filtered in spatial frequencies) was introduced to show that stability toward them can improve during training, especially in architectures where pooling layers are absent. Yet, these studies do not address how stability affects performance, and how it depends on the size of the training set. To quantify these properties and to find robust empirical behaviors across architectures, we will argue that the evolution of stability toward smooth deformations needs to be compared relative to that of any deformation, which turns out to vary significantly during training.
Note that in the context of adversarial robustness, attacks that are geometric transformations of small norm that change the label have been studied Alaifari et al. (2018); Alcorn et al. (2019); Athalye et al. (2018); Engstrom et al. (2019); Fawzi and Frossard (2015); Kanbak et al. (2018); Xiao et al. (2018). These works differ from the literature above and from our study below in that they consider worst-case perturbations instead of typical ones.

Our Contributions
• We introduce a maximum-entropy distribution of diffeomorphisms, which allows us to generate typical diffeomorphisms of controlled norm. Their amplitude is governed by a "temperature" parameter T.
• We define the relative stability to diffeomorphisms index $R_f$, which characterizes the square magnitude of the variation of the output function f when its input is transformed along a diffeomorphism, relative to that of a random transformation of the same amplitude. It is averaged over the test set as well as over the ensemble of diffeomorphisms considered.
• We find that at initialization, $R_f$ is close to unity for various data sets and architectures, indicating that initially the output is as sensitive to smooth deformations as it is to random perturbations of the image. After training, $R_f$ decreases by several decades for state-of-the-art architectures, on benchmark data sets including ImageNet Deng et al. (2009). For more primitive architectures (whose test error is higher) such as fully connected nets or simple CNNs, $R_f$ remains of order unity. For CIFAR10 we study 15 known architectures and find empirically that $\epsilon_t \approx 0.2\sqrt{R_f}$.
• $R_f$ decreases with the size of the training set P. We compare it to the inverse power 1/P expected in simple models of invariant learning Paccolat et al. (2021a).
The library implementing diffeomorphisms on images is available online at github.com/pcslepfl/diffeomorphism. The code for training neural nets can be found at github.com/leonardopetrini/diffeo-sota and the corresponding pre-trained models at doi.org/10.5281/zenodo.5589870.
2 Maximum-entropy model of diffeomorphisms

2.1 Definition of maximum entropy model
We consider the case where the input vector x is an image. It can be thought of as a function x(s) describing intensity at position $s = (u, v) \in [0, 1]^2$, where u and v are the horizontal and vertical coordinates. To simplify notation we consider a single channel, in which case x(s) is a scalar (but our analysis holds for colored images as well). We denote by $\tau x$ the image deformed by the displacement field $\tau = (\tau_u, \tau_v)$, i.e. $\tau x(s) = x(s - \tau(s))$.
The deformation amplitude is measured by the norm $\|\nabla\tau\|^2 = \int_{[0,1]^2} \left(|\nabla\tau_u|^2 + |\nabla\tau_v|^2\right) ds$. To test the stability of deep nets toward diffeomorphisms, we seek to build typical diffeomorphisms of controlled norm $\|\nabla\tau\|$. We thus consider the distribution over diffeomorphisms that maximizes the entropy under a norm constraint. It can be solved by introducing a Lagrange multiplier T and by decomposing these fields on their Fourier components, see e.g. Kardar (2007) or Appendix A. In this canonical ensemble, one finds that $\tau_u$ and $\tau_v$ are independent with identical statistics. For the picture frame not to be deformed, we impose fixed boundary conditions: $\tau = 0$ if u = 0, 1 or v = 0, 1. One then obtains:

$$\tau_u = \sum_{i,j > 0} C_{ij} \sin(i\pi u) \sin(j\pi v), \qquad (2)$$

and similarly for $\tau_v$, where the $C_{ij}$ are Gaussian variables of zero mean and variance $\overline{C_{ij}^2} = T/(i^2 + j^2)$. If the picture is made of n × n pixels, the result is identical except that the sum runs over $0 < i, j \le n$. For large n, the norm then reads $\|\nabla\tau\|^2 = (\pi^2/2)\, n^2 T$, and is dominated by high spatial frequency modes. It is useful to add another parameter c to cut off the effect of high spatial frequencies, which can simply be done by constraining the sum in Eq. 2 to $i^2 + j^2 \le c^2$; one then has $\|\nabla\tau\|^2 = (\pi^3/8)\, c^2 T$.
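Sampling from Eq. 2 takes only a few lines. The following is a minimal numpy sketch (the helper name and its interface are ours, not those of the released library):

```python
import numpy as np

def sample_diffeo_field(n, T, c, rng=None):
    """Sample one component (tau_u or tau_v) of a max-entropy diffeomorphism
    on an n x n pixel grid, following Eq. 2:
    tau = sum_ij C_ij sin(i pi u) sin(j pi v), with C_ij ~ N(0, T/(i^2+j^2))
    and the high-frequency cutoff i^2 + j^2 <= c^2."""
    rng = np.random.default_rng(rng)
    k = np.arange(1, n + 1)
    I, J = np.meshgrid(k, k, indexing="ij")
    mask = (I**2 + J**2) <= c**2
    C = rng.standard_normal((n, n)) * np.sqrt(T / (I**2 + J**2)) * mask
    u = (np.arange(n) + 0.5) / n              # pixel centers in (0, 1)
    S = np.sin(np.pi * np.outer(k, u))        # S[i, a] = sin(i pi u_a)
    return S.T @ C @ S                        # tau evaluated on the grid

# the two displacement components are independent samples
tau_u = sample_diffeo_field(32, T=1e-3, c=3, rng=0)
tau_v = sample_diffeo_field(32, T=1e-3, c=3, rng=1)
```

Since only modes with $i^2 + j^2 \le c^2$ survive, the field is smooth for small c and its variance is set by T, as in the phase diagram of Fig. 1.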
Once $\tau$ is generated, pixels are displaced to their new positions. A new pixelated image can then be obtained using standard interpolation methods. We use two interpolations, Gaussian and bi-linear, as described in Appendix C. As we shall see below, this choice does not affect our results as long as the diffeomorphism induces a displacement of the order of the pixel size, or larger. Examples are shown in Fig. 1 as a function of T and c.

2.2 Phase diagram of acceptable diffeomorphisms
Diffeomorphisms are bijective, which is not the case for our transformations if T is too large. When this condition breaks down, a single domain of the picture can break into several pieces, as apparent in Fig. 1. It can be expressed as a condition on $\nabla\tau$ that must be satisfied at every point in space Lowe (2004), as recalled in Appendix B. This is satisfied locally with high probability if $\|\nabla\tau\|^2 \ll 1$, corresponding to $T \ll (8/\pi^3)/c^2$. In the Appendix, we extract empirically a curve of similar form in the (T, c) plane at which a diffeomorphism is obtained with probability at least 1/2. For much smaller T, diffeomorphisms are obtained almost surely.
Finally, for diffeomorphisms to have noticeable consequences, their associated displacement must be of the order of the pixel size. Defining $\delta^2$ as the average square norm of the pixel displacement at the center of the image, in units of the pixel size, it is straightforward to obtain from Eq. 2 that asymptotically for large c (cf. Appendix B for the derivation),

$$\delta^2 \simeq \frac{\pi}{4}\, T\, n^2 \ln c. \qquad (3)$$

The line $\delta = 1/2$ is indicated in Fig. 1, using empirical measurements that add pre-asymptotic terms to Eq. 3. Overall, the green region corresponds to transformations that (i) are diffeomorphisms with high probability and (ii) produce significant displacements, at least of the order of the pixel size.

3 Measuring the relative stability to diffeomorphisms

Relative stability to diffeomorphisms To quantify how a deep net f learns to become less sensitive to diffeomorphisms than to generic data transformations, we define the relative stability to diffeomorphisms $R_f$ as:

$$R_f = \frac{\langle \|f(\tau x) - f(x)\|^2 \rangle_{x,\tau}}{\langle \|f(x + \eta) - f(x)\|^2 \rangle_{x,\eta}}, \qquad (4)$$

where the notation $\langle y \rangle$ can indicate alternatively the mean or the median with respect to the distribution of y. In the numerator, this operation is made over the test set and over the ensemble of diffeomorphisms of parameters (T, c) (on which $R_f$ implicitly depends). In the denominator, the average is over the test set and over the vectors $\eta$ sampled uniformly on the sphere of radius $\|\eta\| = \langle \|\tau x - x\| \rangle_{x,\tau}$. An illustration of what $R_f$ captures is shown in Fig. 2. In the main text, we consider median quantities, as they better reflect the typical values of a distribution. In Appendix E.3 we show our results for mean quantities, to which our conclusions also apply.
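The measurement of $R_f$ (median diffeomorphism response over median matched-noise response) can be sketched as follows; f is any callable mapping an image to an output vector, and xs_diffeo holds the diffeomorphed test images (all names are ours):

```python
import numpy as np

def relative_stability(f, xs, xs_diffeo, rng=None):
    """Sketch of R_f: median over the sample of |f(tau x) - f(x)|^2, divided by
    the same quantity for isotropic noise eta whose norm is matched to the
    average diffeomorphism displacement |tau x - x|."""
    rng = np.random.default_rng(rng)
    eta_norm = np.mean([np.linalg.norm(xd - x) for x, xd in zip(xs, xs_diffeo)])
    num, den = [], []
    for x, xd in zip(xs, xs_diffeo):
        eta = rng.standard_normal(x.shape)
        eta *= eta_norm / np.linalg.norm(eta)   # uniform on sphere of radius eta_norm
        num.append(np.sum((f(xd) - f(x))**2))
        den.append(np.sum((f(x + eta) - f(x))**2))
    return np.median(num) / np.median(den)
```

For the identity function, both numerator and denominator reduce to the same squared displacement, so the sketch returns 1, consistent with the normalization of the definition.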
Dependence of $R_f$ on the diffeomorphism magnitude Ideally, $R_f$ could be defined for infinitesimal transformations, as it would then characterize the magnitude of the gradient of f along smooth deformations of the images, normalized by the magnitude of the gradient in random directions. However, infinitesimal diffeomorphisms move the image much less than the pixel size, and their definition thus depends significantly on the interpolation method used. This is illustrated in the left panels of Fig. 3, showing the dependence of $R_f$ on the diffeomorphism magnitude (here characterised by the mean displacement magnitude at the center of the image, $\delta$) for several interpolation methods. We do see that $R_f$ becomes independent of the interpolation when $\delta$ becomes of order unity. In what follows we thus focus on $R_f(\delta = 1)$, which we simply denote $R_f$.
SOTA architectures become relatively stable to diffeomorphisms during training, but are not at initialization The central panels of Fig. 3 show $R_f$ at initialization (shaded), and after training (full) for two SOTA architectures on four benchmark data sets. The first key result is that, at initialization, these architectures are as sensitive to diffeomorphisms as they are to random transformations.
Relative stability to diffeomorphisms at initialization (guaranteed theoretically in some cases Bietti and Mairal (2019a,b)) thus does not appear to be indicative of successful architectures. Standard data augmentation techniques (translations, crops, and horizontal flips) are employed for training. However, the results we find only mildly depend on using such techniques, see Fig. 12 in the Appendix.
Learning relative stability to diffeos requires large training sets How much data is needed to learn relative stability toward diffeomorphisms? To answer this question, newly initialized networks are trained on training sets of varying size P. $R_f$ is then measured for CIFAR10, as indicated in the right panels of Fig. 3. Neural nets need a certain number of training points ($P \sim 10^3$) in order to become relatively stable toward smooth deformations. Past that point, $R_f$ monotonically decreases with P. In a range of P, this decrease is approximately compatible with the inverse behavior $R_f \sim 1/P$ found in the simple model of Section 6. Additional results for MNIST and FashionMNIST can be found in Fig. 13, Appendix E.3.

Simple architectures do not become relatively stable to diffeomorphisms
To test the universality of these results, we focus on two simple architectures: (i) a 4-hidden-layer fully connected (FC) network (FullConn-L4), where each hidden layer has 64 neurons, and (ii) LeNet LeCun et al. (1989), which consists of two convolutional layers followed by local max-pooling and three fully-connected layers.
Measurements of $R_f$ for these networks are shown in Fig. 4. For the FC net, $R_f \approx 1$ at initialization (as observed for SOTA nets) but grows after training on the full data set, showing that FC nets do not learn to become relatively stable to smooth deformations. This is consistent with the modest evolution of $R_f(P)$ with P, suggesting that huge training sets would be required to obtain $R_f < 1$. The situation is similar for the primitive CNN LeNet, which only becomes slightly insensitive ($R_f \approx 0.6$) on a single data set (CIFAR10), and otherwise remains larger than unity.
Layers' relative stability monotonically increases with depth Up to this point, we measured the relative stability of the output function for any given architecture. We now study how relative stability builds up as the input data propagate through the hidden layers. In Fig. 14, Appendix E.3, we show that $R_f$ measured at intermediate layers is roughly independent of depth at initialization, and monotonically decreases with depth after training. Overall, the gain in relative stability appears to be well-spread through the net, as is also found for stability alone Ruderman et al. (2018).

4 Relative stability to diffeomorphisms indicates performance
Thus, SOTA architectures appear to become relatively stable to diffeomorphisms after training, unlike primitive architectures. This observation suggests that high performance requires such a relative stability to build up. To test this hypothesis further, we select a set of architectures that have been relevant in the state-of-the-art progress over the past decade; we systematically train them in order to compare $R_f$ to their test error $\epsilon_t$. Apart from fully connected nets, we consider the already cited LeNet (5 layers and ≈ 60k parameters); then AlexNet Krizhevsky et al. (2012) and VGG Simonyan and Zisserman (2015), deeper (8-19 layers) and highly over-parametrized (10-20 million parameters) versions of the latter. We introduce batch-normalization with VGGs and skip connections with ResNets. Finally, we consider EfficientNets, which incorporate the advancements introduced in previous models and achieve SOTA performance with a relatively small number of parameters (<10M); this is accomplished by designing an efficient small network and properly scaling it up. Further details about these architectures can be found in Table 1, Appendix E.2.
The results are shown in Fig. 5. The correlation between $R_f$ and $\epsilon_t$ is remarkably high (corr. coeff.: 0.97), suggesting that a low relative sensitivity to diffeomorphisms $R_f$ is important to obtain good performance. In Appendix E.3 we also report how changing the training set size P affects the position of a network in the ($\epsilon_t$, $R_f$) plane, for the four architectures considered in the previous section (Fig. 18). We also show that our results are robust to changes of $\delta$ and c (Fig. 21) and of data sets (Fig. 20).
What architectures enable a low $R_f$ value? The latter can be obtained with or without skip connections, and for quite different depths, as indicated in Fig. 5. Also, the same architecture (EfficientNetB0) trained by transfer learning from ImageNet, instead of directly on CIFAR10, shows a large improvement both in performance and in diffeomorphism invariance. Clearly, $R_f$ is much better predicted by $\epsilon_t$ than by the specific features of the architecture indicated in Fig. 5.

Figure 5: The color scale indicates depth, and the symbols indicate the presence of batch-norm and skip connections. Dashed grey line: power law fit $\epsilon_t \approx 0.2\sqrt{R_f}$. $R_f$ strongly correlates with $\epsilon_t$, much less so with depth or the presence of skip connections. Statistics: each point is obtained by training 5 differently initialized networks; each network is then probed with 500 test samples in order to measure $R_f$. The results are obtained by log-averaging over single realizations. Error bars (omitted here) are shown in Fig. 19, Appendix E.3.

5 Stability toward diffeomorphisms vs. noise
The relative stability to diffeomorphisms $R_f$ can be written as $R_f = D_f/G_f$, where $G_f$ characterizes the stability with respect to additive noise and $D_f$ the stability toward diffeomorphisms:

$$D_f = \frac{\langle \|f(\tau x) - f(x)\|^2 \rangle_{x,\tau}}{\langle \|f(x) - f(z)\|^2 \rangle_{x,z}}, \qquad G_f = \frac{\langle \|f(x + \eta) - f(x)\|^2 \rangle_{x,\eta}}{\langle \|f(x) - f(z)\|^2 \rangle_{x,z}}. \qquad (5)$$

Here, we chose to normalize these stabilities by the variation of f over the test set (to which both x and z belong), and $\eta$ is a random noise whose magnitude is prescribed as above. Stability toward additive noise has been studied previously in fully connected architectures Novak et al. (2018) and for CNNs as a function of spatial frequency in Tsuzuku and Sato (2019); Yin et al. (2019).
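The decomposition of $R_f$ into a diffeomorphism sensitivity and a noise sensitivity, both normalized by the test-set variation of f, can be sketched as follows (mean version, names ours):

```python
import numpy as np

def stabilities(f, xs, xs_diffeo, etas):
    """Sketch of the two stabilities: diffeomorphism sensitivity D_f and noise
    sensitivity G_f, both normalized by the variation of f between pairs of
    test points, so that R_f = D_f / G_f."""
    fx = [f(x) for x in xs]
    # normalization: mean squared variation of f over pairs of test points
    var = np.mean([np.sum((a - b)**2) for a in fx for b in fx])
    D = np.mean([np.sum((f(xd) - y)**2) for xd, y in zip(xs_diffeo, fx)]) / var
    G = np.mean([np.sum((f(x + e) - y)**2) for x, e, y in zip(xs, etas, fx)]) / var
    return D, G
```

Because the same normalization appears in both quantities, it cancels in the ratio D/G, which is why the relative stability can be studied without it.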
The decrease of $R_f$ with growing training set size P could thus be due to an increase in the stability toward diffeomorphisms (i.e. $D_f$ decreasing with P) or a decrease of stability toward noise ($G_f$ increasing with P). To disentangle these possibilities, we show in Fig. 6 how $D_f$ and $G_f$ evolve separately with P.

6 A minimal model for learning invariants
In this section, we discuss the simplest model of invariance in data where stability to transformations builds up, which can be compared with our observations of $R_f$ above. Specifically, we consider the "stripe" model Paccolat et al. (2021b), corresponding to a binary classification task on Gaussian-distributed data points $x = (x_\parallel, x_\perp)$ where the label function depends only on one direction in data space, namely $y(x) = y(x_\parallel)$. Layers of y = +1 and y = -1 regions alternate along the direction $x_\parallel$, separated by parallel planes. Hence, the data present d - 1 invariant directions in input space, denoted by $x_\perp$, as illustrated in Fig. 7-left.
When this model is learnt by a one-hidden-layer fully connected net, the first layer of weights can be shown to align with the informative direction Paccolat et al. (2021a), so that the projection of these weights onto the invariant directions decays as the training set size P grows. In this model, $R_f$ can be defined as:

$$R_f = \frac{\langle \|f(x_\parallel, x_\perp + \nu) - f(x_\parallel, x_\perp)\|^2 \rangle_{x,\nu}}{\langle \|f(x + \eta) - f(x)\|^2 \rangle_{x,\eta}}, \qquad (6)$$

where we made explicit the dependence of f on the two linear subspaces. Here, the isotropic noise $\nu$ is added only in the invariant directions. Again, we impose $\|\eta\| = \|\nu\|$. $R_f(P)$ is shown in Fig. 7-right. We observe that $R_f(P) \sim P^{-1}$, as expected from the weight alignment mentioned above.
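The alignment mechanism can be illustrated on a simplified single-interface variant of the stripe model (label given by the sign of the parallel coordinate alone; the setup and all names below are our own sketch, not the experiments of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
P, d = 2000, 10
# simplified single-interface variant of the stripe model: Gaussian inputs,
# label depending only on the first ("parallel") coordinate
x = rng.standard_normal((P, d))
y = np.sign(x[:, 0])

# one linear unit trained by gradient descent on the logistic loss, a crude
# sketch of the first-layer alignment discussed in the text
w = np.zeros(d)
for _ in range(500):
    m = y * (x @ w)                                   # margins
    grad = -(x * (y / (1.0 + np.exp(m)))[:, None]).mean(axis=0)
    w -= 1.0 * grad

alignment = abs(w[0]) / np.linalg.norm(w)             # weight fraction on x_par
```

The trained weight vector concentrates on the informative direction, so the response to perturbations in the invariant directions shrinks; in the full model this concentration sharpens with P, which is the mechanism behind the $R_f(P) \sim 1/P$ decay.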
Interestingly, Fig. 3 for CIFAR10 and SOTA architectures supports that the 1/P behavior is compatible with the observations over some range of P. In Appendix E.3, Fig. 13, we show analogous results for MNIST and Fashion-MNIST. We observe the 1/P power-law scaling for ResNets. It suggests that for these architectures, learning to become invariant to diffeomorphisms may be limited by a naive measure of sampling noise as well. By contrast, for EfficientNets, in which the decrease in $R_f$ is more limited, a 1/P behavior cannot be identified.

Discussion
A common belief is that stability to random noise (small $G_f$) and to diffeomorphisms (small $D_f$) are desirable properties of neural nets. The underlying assumption is that the true data label only mildly depends on such transformations when they are small. Our observations suggest an alternative view:
1. As shown in Figs. 6 and 16, better predictors are more sensitive to small perturbations in input space.
2. As a consequence, the notion that predictors are especially insensitive to diffeomorphisms is not captured by stability alone, but rather by the relative stability.
3. We propose the following interpretation of Fig. 5: to perform well, the predictor must build large gradients in input space near the decision boundary, leading to a large $G_f$ overall.
Networks that are relatively insensitive to diffeomorphisms (small $R_f$) can discover with less data that strong gradients must be there, and can generalize them to larger regions of input space, improving performance and increasing $G_f$. This last point can be illustrated in the simple model of Section 6, see Fig. 7-left panel. Imagine two data points of different labels falling close to, e.g., the left true decision boundary. These two points can be far from each other if their orthogonal coordinates differ. Yet, if $R_f = 0$ (now defined in Eq. 6), then the output does not depend on the orthogonal coordinates, and it will need to build a strong gradient in input space along the parallel coordinate to fit these two data points. This strong gradient will exist throughout that entire decision boundary, improving performance but also increasing $G_f$. Instead, if $R_f = 1$, fitting these two data points will not lead to a strong gradient, since they can be far from each other in input space. Beyond this intuition, decreasing $R_f$ in this model can quantitatively be shown to increase performance, see Paccolat et al. (2021b).

Conclusion
We have introduced a novel empirical framework to characterize how deep nets become invariant to diffeomorphisms. It is jointly based on a maximum-entropy distribution of diffeomorphisms, and on the realization that the stability toward these transformations relative to generic ones, $R_f$, strongly correlates with performance, unlike the plain diffeomorphism stability considered in the past.
The ensemble of smooth deformations we introduced may have interesting applications. It could serve as a complement to traditional data-augmentation techniques (whose effect on relative stability is discussed in Fig. 12 of the Appendix). A similar idea is present in Hauberg et al. (2016); Shen et al. (2020), but our deformations have the advantage of being easier to sample and data-agnostic. Moreover, the ensemble could be used to build adversarial attacks along smooth transformations, in the spirit of Alaifari et al. (2018); Engstrom et al. (2019); Kanbak et al. (2018). It would be interesting to test whether networks robust to such attacks are more stable in relative terms, and how such robustness affects their performance.
Finally, the tight correlation between relative stability $R_f$ and test error $\epsilon_t$ suggests that if a predictor displays a given $R_f$, its performance may be bounded from below. The relationship we observe, $\epsilon_t(R_f)$, may then be indicative of this bound, which would be a fundamental property of a given data set. Can it be predicted in terms of simpler properties of the data? Introducing simplified models of data with controlled stability to diffeomorphisms, beyond the toy model of Section 6, would be useful to investigate this key question.

A Maximum entropy calculation
Under the constraint on the borders, $\tau_u$ and $\tau_v$ can be expressed in a real Fourier basis as in Eq. 2. By injecting this form into $\|\nabla\tau\|^2$ we obtain:

$$\|\nabla\tau\|^2 = \frac{\pi^2}{4} \sum_{i,j} (i^2 + j^2)\left(C_{ij}^2 + D_{ij}^2\right),$$

where $D_{ij}$ are the Fourier coefficients of $\tau_v$. We aim at computing the probability distributions that maximize their entropy while keeping the expectation value of $\|\nabla\tau\|^2$ fixed. Since we have a sum of quadratic random variables, the equipartition theorem Beale (1996) applies: the distributions are normal, and every quadratic term contributes on average equally to $\|\nabla\tau\|^2$. Thus, the variance of the coefficients follows $\overline{C_{ij}^2} = \overline{D_{ij}^2} = T/(i^2 + j^2)$, where the parameter T determines the magnitude of the diffeomorphism.

B Boundaries of studied diffeomorphisms
Average pixel displacement magnitude $\delta$ We derive here the large-c asymptotic behavior of $\delta$ (Eq. 3). It is defined as the average square norm of the displacement field at the center of the image, in pixel units:

$$\delta^2 = n^2\, \overline{\tau_u(\tfrac{1}{2},\tfrac{1}{2})^2 + \tau_v(\tfrac{1}{2},\tfrac{1}{2})^2} = 2 n^2 \!\!\sum_{\substack{i,j \text{ odd} \\ i^2+j^2 \le c^2}}\!\! \frac{T}{i^2+j^2} \simeq \frac{\pi}{4}\, T\, n^2 \ln c,$$

where only odd i, j contribute (since $\sin^2(i\pi/2) = 1$ for odd i and 0 for even i), and where we approximated the sum with an integral in the last step. The asymptotic relations for $\|\nabla\tau\|$ that are reported in the main text are computed in a similar fashion. In Fig. 8, we check the agreement between the asymptotic prediction and empirical measurements. If $\delta \ll 1$, our results strongly depend on the choice of interpolation method. To avoid this, we only consider conditions for which $\delta \ge 1/2$.

Figure 9d: the bound of Eq. 12 ($u \cdot u' > 0$) corresponds to the green region; the gray disc corresponds to the bound $\|\nabla\tau\|_\infty < 1$.
Condition for diffeomorphism in the (T, c) plane For a given value of c, there exists a temperature scale beyond which the transformation is no longer injective, affecting the topology of the image and creating spurious boundaries, see Fig. 9a-c for an illustration. Specifically, consider a curve passing through the point s in the deformed image, with tangent direction u at s. When going back to the original image ($s' = s - \tau(s)$), the curve gets deformed and its tangent becomes

$$u' = u - (u \cdot \nabla)\tau(s). \qquad (9)$$

A smooth deformation is bijective iff all deformed curves remain curves, which is equivalent to having non-zero tangents everywhere:

$$\forall s,\ \forall u \neq 0:\quad u' \neq 0. \qquad (10)$$

Imposing $u' \neq 0$ directly does not give a tractable constraint on $\tau$. Therefore, we constrain $\tau$ a bit more and allow only displacement fields such that $u \cdot u' > 0$, which is a sufficient condition for Eq. 10 to be satisfied, cf. Fig. 9d. By extremizing over u, this condition translates into a pointwise inequality on $\nabla\tau$, whose l.h.s. we denote by $\Xi(s)$ (Eq. 12). We find that the median of the maximum of $\Xi$ over the whole image ($\|\Xi(s)\|_\infty$) can be approximated by an empirical law, see Fig. 8b. The resulting constraint on T follows.

C Interpolation methods
When a deformation is applied to an image x, each of its pixels gets mapped from the original pixel grid to a new position, generally outside of the grid itself, cf. Fig. 9a-b. A procedure (interpolation method) needs to be defined to project the deformed image back onto the original grid.
For simplicity of notation, we describe interpolation methods considering the square $[0, 1]^2$ as the region between four pixels; see an illustration in Fig. 10a. We propose here two different ways to interpolate between pixels, and then check that our measurements do not depend on the specific method considered.
Bi-linear Interpolation The bi-linear interpolation consists, as the name suggests, of two steps of linear interpolation, one in the horizontal and one in the vertical direction, Fig. 10b. If we look at the square $[0, 1]^2$ and apply a deformation $\tau$ such that $(0, 0) \to (u, v)$, the interpolated intensity reads

$$x(u, v) = (1-u)(1-v)\, x(0,0) + u(1-v)\, x(1,0) + (1-u)v\, x(0,1) + uv\, x(1,1).$$

Figure 10: (a) We consider the region between four pixels as the square $[0, 1]^2$ where, after the application of a deformation $\tau$, the pixel (0, 0) is mapped into (u, v). (b) Bi-linear interpolation: the value of x at (u, v) is computed by two steps of linear interpolation. First, we compute x at the red crosses by interpolating values along the vertical axis. Then, a line interpolates horizontally between the values at the red crosses to give the result. (c) Gaussian interpolation: we denote by $s_i$ the pixel positions in the original grid. The interpolated value of x at any point of the image is given by a weighted sum of n × n Gaussians centered on each $s_i$ (in red).
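The two-step scheme above amounts to the standard bi-linear formula. A minimal sketch, working in pixel units (the helper is our own, not the library's interface):

```python
import numpy as np

def bilinear_sample(img, u, v):
    """Bi-linear interpolation of a 2D image at the fractional position (u, v),
    given in pixel units: two nested linear interpolations between the four
    surrounding pixels, as described above."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    u1 = min(u0 + 1, img.shape[0] - 1)   # clamp at the image border
    v1 = min(v0 + 1, img.shape[1] - 1)
    return ((1 - du) * (1 - dv) * img[u0, v0] + du * (1 - dv) * img[u1, v0]
            + (1 - du) * dv * img[u0, v1] + du * dv * img[u1, v1])
```

At integer positions the formula reduces to the original pixel value, and at the center of a cell it averages the four surrounding pixels, matching the weights in the equation above.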
Gaussian Interpolation In this case, a Gaussian function is placed on top of each point in the grid, cf. Fig. 10c. The pixel intensity x can then be evaluated at any point s outside the grid by computing

$$x(s) = \frac{\sum_i x(s_i)\, G(s, s_i)}{\sum_i G(s, s_i)},$$

where the $s_i$ are the pixel positions in the original grid. In order to fix the standard deviation $\sigma$ of G, we introduce the participation ratio n. Given $\Psi_i = G(s, s_i)|_{s=(0.5, 0.5)}$, we define

$$n = \frac{\left(\sum_i \Psi_i\right)^2}{\sum_i \Psi_i^2}.$$

The participation ratio is a measure of how many pixels contribute to the value of a new pixel resulting from the interpolation. We fix $\sigma$ in such a way that the participation ratio for the Gaussian interpolation matches that of the bi-linear one (n = 4) when the new pixel is equidistant from the four pixels around it. This gives $\sigma = 0.4715$.
Notice that this interpolation method applies a Gaussian smoothing to the image even if $\tau$ is the identity. Consequently, when computing observables for f with the Gaussian interpolation, we always compare $f(\tau x)$ to $f(\bar x)$, where $\bar x$ is the smoothed version of x, so that the measured variation vanishes when $\tau$ is the identity.

Dependence of empirical results on interpolation Finally, we checked to what extent our results are affected by the specific choice of interpolation method. In particular, blue and red colors in Figs. 3 and 13 correspond to bi-linear and Gaussian interpolation, respectively. The interpolation method only affects the results in the small-displacement limit ($\delta \to 0$).
Note: throughout the paper, if not specified otherwise, bi-linear interpolation is employed.

Figure 11 (ResNet and EfficientNet): the slope 2 at small $\|\eta\|$ identifies the linear regime; for larger noise magnitudes, non-linearities appear.
D Stability toward additive noise

We introduced in Section 5 the stability toward additive noise:

$$G_f = \frac{\langle \|f(x + \eta) - f(x)\|^2 \rangle_{x,\eta}}{\langle \|f(x) - f(z)\|^2 \rangle_{x,z}}.$$

We study here the dependence of $G_f$ on the noise magnitude $\|\eta\|$. In the $\|\eta\| \to 0$ limit, we expect the network function to behave as its first-order Taylor expansion, leading to $G_f \propto \|\eta\|^2$. Hence, for small noise, $G_f$ gives an estimate of the average magnitude of the gradient of f in a random direction $\eta$.
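The quadratic small-noise scaling is exact for a linear function, which makes a convenient sanity check (a toy sketch with our own names; the normalization over the test set is omitted):

```python
import numpy as np

def noise_sensitivity(f, xs, eta_norm, rng):
    """Mean squared variation of f under isotropic noise of fixed norm
    (the numerator of G_f; the test-set normalization is omitted here)."""
    out = []
    for x in xs:
        eta = rng.standard_normal(x.shape)
        eta *= eta_norm / np.linalg.norm(eta)
        out.append(np.sum((f(x + eta) - f(x))**2))
    return np.mean(out)

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 50))
f = lambda x: W @ x                       # linear toy "network": exactly quadratic response
xs = [rng.standard_normal(50) for _ in range(20)]
g1 = noise_sensitivity(f, xs, 0.1, np.random.default_rng(1))
g2 = noise_sensitivity(f, xs, 0.2, np.random.default_rng(1))
# same noise directions, doubled magnitude: for a linear f the ratio is exactly 4
```

For a trained deep net, deviations from this ratio of 4 at a given $\|\eta\|$ signal that non-linearities have become relevant, as in Figure 11.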

Empirical results
Measurements of $G_f$ on SOTA nets trained on benchmark data sets are shown in Figure 11. We observe that the effect of non-linearities starts to be significant around $\|\eta\| = 1$. For large values of the noise, i.e. far away from data points, the average gradient of f does not change with training.

E Numerical experiments
In this Appendix, we provide details on the training procedure, on the different architectures employed and some additional experimental results.

• Dynamics:
-Fully connected nets: ADAM with learning rate = 0.1 and no scheduling.
-Transfer learning: SGD with learning rate = 10^-2 for the last layer and 10^-3 for the rest of the network, momentum = 0.9 and weight decay = 10^-3. Both learning rates decay exponentially during training with a factor γ = 0.975.
-All the other networks are trained with SGD with learning rate = 0.1, momentum = 0.9 and weight decay = 5 × 10^-4. The learning rate follows a cosine annealing schedule Loshchilov and Hutter (2016).
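The cosine annealing schedule mentioned above amounts to half a cosine period from the initial to the final learning rate; a minimal sketch (the full training code lives in the linked repository, so the function below is only illustrative):

```python
import math

def cosine_annealing_lr(step, total_steps, lr_max=0.1, lr_min=0.0):
    """Cosine annealing schedule (Loshchilov and Hutter, 2016): the learning
    rate decays from lr_max to lr_min following half a cosine period."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * step / total_steps))
```

At step 0 this returns lr_max, at the midpoint the average of the two rates, and at the last step lr_min.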

E.2 Network architectures
All network implementations can be found at github.com/leonardopetrini/diffeosota/tree/main/models. In Table 1, we report salient features of the network architectures considered.

E.3 Additional figures
We present here:
• Fig. 13: $R_f$ as a function of P for MNIST and FashionMNIST with the corresponding predicted slope, omitted in the main text.
• Fig. 14: the layers' relative stability as a function of depth.

Figure 1: Samples of max-entropy diffeomorphisms for different temperatures T and high-frequency cutoffs c, for an ImageNet data point of resolution 320 × 320. The green region corresponds to well-behaved diffeomorphisms (see Section 2.2). The dashed line corresponds to δ = 1. The colored points on the line are those on which we focus our study in Section 3.
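A maximum-entropy displacement field of this kind can be sampled mode by mode. The sketch below draws one component of the field on a pixel grid, with Fourier amplitudes of variance T/(i² + j²); the exact cutoff convention (here i² + j² ≤ c²) and grid details are our assumptions about the definition given in Section 2:

```python
import numpy as np

def sample_displacement_field(n, T, c, rng):
    """Sample one component of a max-entropy displacement field on an
    n x n grid: tau(x, y) = sum_ij C_ij sin(i pi x) sin(j pi y), with
    C_ij ~ N(0, T / (i^2 + j^2)). The cutoff convention i^2 + j^2 <= c^2
    is an assumption about the paper's exact definition."""
    xs = np.linspace(0.0, 1.0, n)
    tau = np.zeros((n, n))
    for i in range(1, c + 1):
        for j in range(1, c + 1):
            if i * i + j * j > c * c:
                continue
            C = rng.normal(0.0, np.sqrt(T / (i * i + j * j)))
            tau += C * np.outer(np.sin(i * np.pi * xs), np.sin(j * np.pi * xs))
    return tau

rng = np.random.default_rng(0)
tau_x = sample_displacement_field(32, T=1e-3, c=3, rng=rng)
tau_y = sample_displacement_field(32, T=1e-3, c=3, rng=rng)
# (tau_x, tau_y) gives the per-pixel displacement; the deformed image is
# obtained by re-sampling at (x + tau_x, y + tau_y) via interpolation.
print(tau_x.shape)
```

The sine basis makes the field vanish on the image boundary, so pixels at the border are never displaced; T controls the overall displacement magnitude and c the roughness of the deformation.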

Figure 2: Illustrative drawing of the data space R^{n×n} around a data point x (black point). We focus here on perturbations of fixed magnitude, i.e. on the sphere of radius r centered in x. The intersection between the sphere and the images of x transformed via typical diffeomorphisms is represented in dashed green. By contrast, the red point is an example of a random transformation; for large n, it is equivalent to adding i.i.d. Gaussian noise to all the pixel values of x. The figures on the right illustrate these transformations; the color of the dot labelling each corresponds to that of the left illustration. The relative stability to diffeomorphisms R_f characterizes how a net f varies in the green directions, normalized by the random ones.
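The quantity sketched in this figure can be illustrated with a toy computation. Below, a low-pass-filtered noise field stands in for the smooth "green" directions, and the toy function reads only high-frequency image content, so it barely moves along them; everything here (the function, the perturbation model, the normalization) is our illustrative assumption, not the paper's exact definition of R_f:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16  # image side

def smooth_field(rng, cutoff=2):
    # Low-pass-filtered white noise: a generic smooth perturbation standing
    # in for the pixel changes produced by a small diffeomorphism.
    z = rng.standard_normal((n, n))
    k = np.abs(np.fft.fftfreq(n) * n)
    mask = (k[:, None] <= cutoff) & (k[None, :] <= cutoff)
    return np.real(np.fft.ifft2(np.fft.fft2(z) * mask))

# Toy "network": a linear readout of high-frequency content only.
g = np.indices((n, n)).sum(axis=0) % 2 * 2.0 - 1.0  # checkerboard filter

def f(img):
    return float((g * img).sum())

x = rng.standard_normal((n, n))
D, G = 0.0, 0.0
for _ in range(500):
    d_pert = smooth_field(rng)
    e = rng.standard_normal((n, n))
    e *= np.linalg.norm(d_pert) / np.linalg.norm(e)  # match perturbation norms
    D += (f(x + d_pert) - f(x)) ** 2  # change along the smooth direction
    G += (f(x + e) - f(x)) ** 2       # change under isotropic noise
R_f = D / G
print(R_f < 1e-3)  # far more stable to smooth perturbations than to noise
```

Here R_f is essentially zero because the readout filter is orthogonal to all smooth modes, an idealization of the R_f << 1 regime reached by trained SOTA nets.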

Figure 3: Relative stability to diffeomorphisms R_f for SOTA architectures. Left panels: R_f vs. diffeomorphism displacement magnitude δ at initialization (dashed lines) and after training (full lines) on the full data set of CIFAR10 (P = 50k), for several cut-off parameters c and two interpolation methods, as indicated in the legend. ResNet is shown on top and EfficientNet on the bottom. Central panels: R_f(δ = 1) for four different data sets (x-axis) and two different architectures at initialization (shaded histograms) and after training (full histograms). The values of c (in different colors) are (3, 5, 15) and (3, 10, 30) for the first three data sets and ImageNet, respectively. ResNet18 and EfficientNetB0 are employed for MNIST, F-MNIST and CIFAR10; ResNet101 and EfficientNetB2 for ImageNet. Right panels: R_f(δ = 1) vs. training set size P at c = 3 for ResNet18 (top) and EfficientNetB0 (bottom) trained on CIFAR10. The value of R_f0 at initialization is indicated with dashed lines. The triangles indicate the slope R_f ∼ P⁻¹ predicted by a simple model of invariant learning, see Section 6. Statistics: each point in the graphs is obtained by training 16 differently initialized networks on 16 different subsets of the data sets; each network is then probed with 500 test samples in order to measure stability to diffeomorphisms and Gaussian noise. The resulting R_f is obtained by log-averaging the results from single realizations.

Figure 4: Relative stability to diffeomorphisms R_f in primitive architectures. Top panels: R_f at initialization (shaded) and after training (full) for a fully connected net (left) and a primitive CNN (right) at P = 50k. Bottom panels: R_f(P) for c = 3 and different data sets, as indicated in the legend. Statistics: see the caption of the previous figure.

Figure 5: Test error ε_t vs. relative stability to diffeomorphisms R_f, computed at δ = 1 and c = 3, for common architectures trained on the full 10-class CIFAR10 dataset (P = 50k) with SGD and the cross-entropy loss; the EfficientNets achieving the best performance are trained by transfer learning from ImageNet ( ). More details on the training procedures can be found in Appendix E.1. The color scale indicates depth, and the symbols indicate the presence of batch-norm ( ) and skip connections (†). Dashed grey line: power-law fit ε_t ≈ 0.2 √R_f. R_f strongly correlates with ε_t, much less so with depth or the presence of skip connections. Statistics: each point is obtained by training 5 differently initialized networks; each network is then probed with 500 test samples in order to measure R_f. The results are obtained by log-averaging over single realizations. Error bars, omitted here, are shown in Fig. 19, Appendix E.3.

Figure 6: Stability toward Gaussian noise (G_f) and diffeomorphisms (D_f) alone, and the relative stability R_f. Columns correspond to different data sets (MNIST, FashionMNIST and CIFAR10) and rows to architectures (ResNet18 and EfficientNetB0). Each panel reports G_f (blue), D_f (orange) and R_f (green) as a function of P, for different cut-off values c, as indicated in the legend. Statistics: cf. the caption of Fig. 3. Error bars, omitted here, are shown in Fig. 22, Appendix E.3.

Figure 7: Left: example of the stripe model. Dots are data points, the vertical lines represent the decision boundary and the color the class label. Right: relative stability R_f for the stripe model in d = 30. The slope of the curve is −1, as predicted.
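Data of the kind shown in the left panel can be generated in a few lines. In this sketch the label flips each time the first coordinate crosses a boundary; the boundary positions are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def stripe_data(P, d, boundaries=(-0.3, 1.18), rng=None):
    """Generate P points of a stripe model in d dimensions.

    The label depends on the first coordinate only: it flips each time x_1
    crosses a decision boundary (the positions here are illustrative, not
    the paper's). The remaining d - 1 coordinates are uninformative, so the
    task is invariant to them.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal((P, d))
    # Parity of the number of boundaries below x_1 gives the class.
    k = (x[:, :1] > np.asarray(boundaries)).sum(axis=1)
    y = 2 * (k % 2) - 1
    return x, y

x, y = stripe_data(1000, d=30)
print(x.shape)  # (1000, 30)
```

Because the label is invariant to the d − 1 uninformative directions, a net that learns the task should become relatively stable to perturbations along them, which is what the R_f ∼ P⁻¹ prediction quantifies.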

Figure 9: (a) Idealized image at T = 0. (b) Diffeomorphism of the image. (c) Deformation of the image at large T: colors get mixed up and shapes are not preserved anymore. (d) Allowed region for vector transformations under τ. For any point s in the image and any direction u, only displacement fields for which the deformed direction u is non-zero generate diffeomorphisms. The bound in Eq. 12 (u • u > 0) corresponds to the green region. The gray disc corresponds to the bound ‖∇τ‖∞ < 1.

Figure 11: Stability to isotropic noise G_f as a function of the noise magnitude η, for CIFAR10 (left) and ImageNet (right). The colors correspond to two different classes of SOTA architectures: ResNet and EfficientNet. The slope 2 at small η identifies the linear regime. For larger noise magnitudes, non-linearities appear.

Figure 13: Relative stability to diffeomorphisms R_f(P) at δ = 1. Analogous to Figure 3 (right), but with MNIST (a-b) and FashionMNIST (c-d) in place of CIFAR10. Stability monotonically decreases with P. The triangles give a reference for the slope predicted by the stripe model, i.e. R_f ∼ P⁻¹, see Section 6. The slopes in the case of ResNets are compatible with the prediction. For EfficientNets, the second panel of Fig. 3 suggests that stability to diffeomorphisms is less important; here, we also see that it builds up more slowly when increasing the training set size. Finally, blue and red colors indicate the different interpolation methods used for generating image deformations, as discussed in Appendix C. Results are not affected by this choice.

Figure 18: Test error ε_t vs. relative stability to diffeomorphisms R_f for different training set sizes P. Same data as Fig. 5; we report here curves corresponding to training on different set sizes for 4 architectures. The other architectures considered, together with the power-law fit, are left in the background. For a small training set, CNNs behave similarly. Statistics: each point is obtained by training 5 differently initialized networks; each network is then probed with 500 test samples in order to measure R_f. The results are obtained by log-averaging over single realizations.

• Our central result is that after training, R_f correlates very strongly with the test error ε_t: during training, R_f is reduced by several decades in current state-of-the-art (SOTA) architectures on four benchmark datasets, including MNIST Lecun et al. (1998), FashionMNIST Xiao et al. (2017), CIFAR-10 Krizhevsky (2009) and ImageNet.

We show G_f(P), D_f(P) and R_f(P) for MNIST, FashionMNIST and CIFAR10 for two SOTA architectures in Fig. 6. The central results are that (i) stability toward noise is always reduced for larger training sets. This observation is natural: when more data need to be fitted, the function becomes rougher. (ii) Stability toward diffeomorphisms does not behave universally: it can increase or decrease with P, depending on the architecture and the training set. Additionally, G_f and D_f alone show a much smaller correlation with performance than R_f; see Figs. 15, 16, 17 in Appendix E.3.

Table 1: Network architectures, main characteristics. We list here (columns) the classes of net architectures used throughout the paper, specifying some salient features (depth, number of parameters, etc.) for each of them.