Ising Model Selection Using $\ell_{1}$-Regularized Linear Regression: A Statistical Mechanics Analysis

We theoretically analyze the typical learning performance of $\ell_{1}$-regularized linear regression ($\ell_1$-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of $\ell_1$-LinR is obtained. Remarkably, despite the model misspecification, $\ell_1$-LinR is model selection consistent with the same order of sample complexity as $\ell_{1}$-regularized logistic regression ($\ell_1$-LogR), i.e., $M=\mathcal{O}\left(\log N\right)$, where $N$ is the number of variables of the Ising model. Moreover, we provide an efficient method to accurately predict the non-asymptotic behavior of $\ell_1$-LinR for moderate $M, N$, such as precision and recall. Simulations show a fairly good agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Although this paper mainly focuses on $\ell_1$-LinR, our method is readily applicable for precisely characterizing the typical learning performances of a wide class of $\ell_{1}$-regularized $M$-estimators including $\ell_1$-LogR and interaction screening.


Introduction
The advent of massive data across various scientific disciplines has led to the widespread use of undirected graphical models, also known as Markov random fields (MRFs), as a tool for discovering and visualizing dependencies among covariates in multivariate data [1]. The Ising model, originally proposed in statistical physics, is one special class of binary MRFs with pairwise potentials and has been widely used in different domains such as image analysis, social networking, and gene network analysis [2][3][4][5][6][7]. Among its various applications, one fundamental problem of interest is Ising model selection, which refers to recovering the underlying graph structure of the original Ising model from independent, identically distributed (i.i.d.) samples. A variety of methods have been proposed [8][9][10][11][12][13][14][15][16][17][18], demonstrating that successful Ising model selection is possible even when the number of samples is smaller than the number of variables. Notably, it has been demonstrated that for $\ell_1$-regularized logistic regression ($\ell_1$-LogR) [10] and interaction screening (IS) [14,15], $M = \mathcal{O}(\log N)$ samples suffice for consistent model selection, where $N$ is the number of variables.

In this paper, we focus on a simpler linear estimator called $\ell_1$-regularized linear regression ($\ell_1$-LinR) and theoretically investigate its typical learning performance using the powerful replica method [20-23] from statistical mechanics. The $\ell_1$-LinR estimator, widely known as the least absolute shrinkage and selection operator (LASSO) [24] in statistics and machine learning, is considered here mainly for two reasons. On the one hand, it is a representative example of model misspecification: the quadratic loss of $\ell_1$-LinR neither matches the true log-conditional-likelihood as $\ell_1$-LogR does, nor has the interaction screening property of IS. On the other hand, as one of the most popular linear estimators, $\ell_1$-LinR is more computationally efficient than $\ell_1$-LogR and IS, so it is of practical importance to investigate its learning performance for Ising model selection.
Since it is difficult to obtain results for general graphs, as a first step we consider random regular (RR) graphs $\mathcal{G}_{N,d,K_0}$ in the paramagnetic phase [23], where $\mathcal{G}_{N,d,K_0}$ denotes the ensemble of RR graphs with constant node degree $d$ and uniform coupling strength $K_0$ on the edges.

Contributions
The main contributions are summarized as follows. First, we obtain an accurate estimate of the typical sample complexity of $\ell_1$-LinR for Ising model selection on typical RR graphs in the paramagnetic phase, which, remarkably, has the same order as that of $\ell_1$-LogR. Specifically, for a typical RR graph $G \in \mathcal{G}_{N,d,K_0}$, using $\ell_1$-LinR with a regularization parameter $0 < \lambda < \tanh(K_0)$, one can consistently reconstruct the structure with $M > \frac{c(\lambda, K_0)\log N}{\lambda^2}$ samples, where $c(\lambda, K_0) = \frac{2\left(1 - \tanh^2(K_0) + d\lambda^2\right)}{1 + (d-1)\tanh^2(K_0)}$. The accuracy of our typical sample complexity prediction is verified by its excellent agreement with experimental results. To the best of our knowledge, this is the first result that provides an accurate typical sample complexity for Ising model selection. Interestingly, as $\lambda \to \tanh(K_0)$, a lower bound $M > \frac{2\log N}{\tanh^2(K_0)}$ on the typical sample complexity is obtained, which has the same scaling as the information-theoretic lower bound $M > \frac{c\log N}{K_0^2}$ [11] for some constant $c$ at high temperatures (i.e., small $K_0$), since $\tanh(K_0) = \mathcal{O}(K_0)$ as $K_0 \to 0$.
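As a quick numerical illustration, the closed-form prediction above can be evaluated directly. The sketch below is our own (the function names are not from the paper); it simply transcribes the expressions for $c(\lambda, K_0)$ and the sample-complexity threshold.

```python
import numpy as np

def c_coeff(lam, K0, d):
    """c(lambda, K0) = 2(1 - tanh^2(K0) + d*lam^2) / (1 + (d-1)*tanh^2(K0))."""
    t2 = np.tanh(K0) ** 2
    return 2.0 * (1.0 - t2 + d * lam ** 2) / (1.0 + (d - 1) * t2)

def m_threshold(N, lam, K0, d):
    """Predicted typical sample complexity: M > c(lam, K0) * log(N) / lam^2."""
    return c_coeff(lam, K0, d) * np.log(N) / lam ** 2
```

One can check algebraically that as $\lambda \to \tanh(K_0)$ the prefactor $c(\lambda, K_0)/\lambda^2$ reduces to $2/\tanh^2(K_0)$, recovering the stated lower bound.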
Second, we provide a computationally efficient method to precisely predict the typical learning performance of $\ell_1$-LinR in the non-asymptotic case with moderate $M, N$, in terms of precision, recall, and residual sum of squares (RSS). Such precise non-asymptotic predictions for Ising model selection have been unavailable even for $\ell_1$-LogR [10, 16] and IS [14,15], nor are they the same as previous asymptotic results for $\ell_1$-LinR assuming fixed $\alpha \equiv M/N$ [25-28]. Moreover, although our theoretical analysis is based on a tree-like structure assumption, experimental results on two-dimensional (2D) grid graphs also show a fairly good agreement, indicating that our theoretical result can be a good approximation even for graphs with many loops.
Third, while this paper mainly focuses on $\ell_1$-LinR, our method is readily applicable to a wide class of $\ell_1$-regularized $M$-estimators [19], including $\ell_1$-LogR [10] and IS [14,15]. Thus, an additional technical contribution is a generic approach for precisely characterizing the typical learning performances of various $\ell_1$-regularized $M$-estimators for Ising model selection. Although the replica method from statistical mechanics is non-rigorous, our results are conjectured to be correct, which is supported by their excellent agreement with the experimental results. Additionally, several technical advances proposed in this paper, e.g., the entropy term computation by averaging over the Haar measure and the modification of the EOS to address the finite-size effect, might be of general interest to those who use the replica method as a tool for performance analysis.

Related works
There have been some earlier works on the analysis of Ising model selection (also known as the inverse Ising problem) using the replica method [4-7, 29] from statistical mechanics. For example, in [6], the performance of the pseudo-likelihood (PL) method [30] is studied. However, instead of graph structure learning, [6] focuses on the problem of parameter learning. Then, [7] extends the analysis to the Ising model with sparse couplings using logistic regression without regularization. The recent work [29] analyzes the performance of $\ell_2$-regularized linear regression, but the techniques invented there are not applicable to $\ell_1$-LinR since the $\ell_1$-norm breaks the rotational invariance property.
Regarding the study of $\ell_1$-LinR (LASSO) under model misspecification, the past few years have seen a line of research in the field of signal processing with a specific focus on the single-index model [31,32,27,33,34]. These studies are closely related to ours, but there are several important differences. First, in our study, the covariates are generated from an Ising model rather than a Gaussian distribution. Second, we focus on the model selection consistency of $\ell_1$-LinR, while most previous studies consider estimation consistency, except [33]. However, [33] only considers the classical asymptotic regime, while our analysis includes the high-dimensional setting where $M \ll N$.
As far as we have searched, there is no earlier study of the $\ell_1$-LinR estimator for Ising model selection, though some are found for Gaussian graphical models [35,36]. One closely related work [15] states that at high temperatures, when the coupling magnitude approaches zero, both the logistic and ISO losses can be approximated by a quadratic loss. However, their claim is restricted to very small magnitudes near zero, while our analysis extends the validity range to the whole paramagnetic phase. Moreover, they evaluate the minimum number of samples necessary for consistently reconstructing "arbitrary" Ising models, which seems much larger than that actually needed. By contrast, we provide the first accurate assessment of the typical sample complexity for consistently reconstructing typical realizations of Ising models defined over RR graphs. Furthermore, [15] does not provide precise predictions of the non-asymptotic learning performance as we do.
2.1 Ising model

Consider an Ising model of $N$ spins $s = (s_0, \ldots, s_{N-1}) \in \{-1, +1\}^N$ with joint distribution $P_{\rm Ising}(s|J^*) = \frac{1}{Z_{\rm Ising}(J^*)} \exp\left(\sum_{i<j} J^*_{ij} s_i s_j\right)$, where $Z_{\rm Ising}(J^*) = \sum_{s} \exp\left(\sum_{i<j} J^*_{ij} s_i s_j\right)$ is the partition function and $J^* = \left(J^*_{ij}\right)_{i,j}$ are the original couplings. In general there are also external fields, but here they are assumed to be zero for simplicity. The structure of the Ising model can be described by an undirected graph $G = (V, E)$, where $V = \{0, 1, \ldots, N-1\}$ is the collection of vertices at which the spins are assigned, and $E = \{(i,j) \,|\, J^*_{ij} \neq 0\}$ is the collection of undirected edges, i.e., $J^*_{ij} = 0$ for all pairs $(i,j) \notin E$. For each vertex $i \in V$, its neighborhood is defined as the subset $\mathcal{N}(i) \equiv \{j \in V \,|\, (i,j) \in E\}$.
2.2 Neighborhood-based $\ell_1$-regularized linear regression ($\ell_1$-LinR)

The problem of Ising model selection refers to recovering the graph $G$ (edge set $E$), given $M$ i.i.d. samples $\mathcal{D}^M = \{s^{(1)}, \ldots, s^{(M)}\}$ from the Ising model. While the maximum likelihood method has the nice properties of consistency and asymptotic efficiency, it suffers from high computational complexity. To tackle this difficulty, several local learning algorithms have been proposed, notably the $\ell_1$-LogR estimator [10] and the IS estimator [14]. Both $\ell_1$-LogR and IS optimize a regularized local cost function $\ell(\cdot)$ for each spin, i.e., $\forall i \in V$,

$$\hat{J}_{\setminus i} = \arg\min_{J_{\setminus i}} \left\{ \sum_{\mu=1}^{M} \ell\left(s_i^{(\mu)} h_i^{(\mu)}\right) + \lambda M \left\| J_{\setminus i} \right\|_1 \right\}, \quad (2)$$

where $h_i^{(\mu)} \equiv \sum_{j \neq i} J_{ij} s_j^{(\mu)}$ is the local field and $\|\cdot\|_1$ denotes the $\ell_1$ norm. Specifically, $\ell(x) = \log\left(1 + e^{-2x}\right)$ for $\ell_1$-LogR and $\ell(x) = e^{-x}$ for IS, which correspond to the minus log conditional distribution [10] and the ISO [14], respectively. Consequently, the problem of recovering the edge set $E$ is reduced to local neighborhood selection, i.e., recovering the neighborhood set $\mathcal{N}(i)$ for each vertex $i \in V$. In particular, given the estimates $\hat{J}_{\setminus i}$ in (2), the neighborhood set of vertex $i$ can be estimated via the nonzero coefficients, i.e., $\hat{\mathcal{N}}(i) = \{j \,|\, \hat{J}_{ij} \neq 0\}$. (3)

In this paper, we focus on one simple linear estimator, termed the $\ell_1$-LinR estimator, i.e., $\forall i \in V$, the estimator (2) with the quadratic loss $\ell(x) = \frac{1}{2}(x-1)^2$, recalling that $s_i^{(\mu)} \in \{-1, +1\}$. (4) The neighborhood set of each vertex $i \in V$ is estimated in the same way as (3). Interestingly, the quadratic loss used in (4) implies that the postulated conditional distribution is Gaussian and thus inconsistent with the true one, which is a typical case of model misspecification. Furthermore, compared with the nonlinear estimators $\ell_1$-LogR and IS, the $\ell_1$-LinR estimator is more efficient to implement.
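As a concrete sketch (not the authors' code), the $\ell_1$-LinR neighborhood estimate of a single spin is an ordinary LASSO problem and can be solved with a few lines of proximal gradient descent (ISTA); the function names and the per-sample normalization of the objective are our own choices.

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def l1_linr_neighborhood(S, i, lam, n_iter=500):
    """l1-LinR (LASSO) neighborhood estimate for spin i.
    S: (M, N) array of +/-1 samples. Minimizes
    (1/2M) * ||s_i - S_{\\i} J||^2 + lam * ||J||_1 via ISTA.
    """
    y = S[:, i].astype(float)
    X = np.delete(S, i, axis=1).astype(float)
    M = X.shape[0]
    L = np.linalg.eigvalsh(X.T @ X / M).max()  # Lipschitz constant of the gradient
    J = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ J - y) / M
        J = soft(J - grad / L, lam / L)  # proximal gradient step
    return J
```

The estimated neighborhood of spin $i$ is then the support of the returned vector, as in (3).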

Statistical Mechanics Analysis
In this section, a statistical mechanics analysis of the $\ell_1$-LinR estimator is presented for typical RR graphs in the paramagnetic phase. Our analysis is applicable to any $M$-estimator of the form (2); please refer to Appendix A for a unified analysis, including detailed results for the $\ell_1$-LogR estimator.
To characterize the structure learning performance, the precision and recall are considered: $\mathrm{Precision} \equiv \frac{TP}{TP + FP}$, $\mathrm{Recall} \equiv \frac{TP}{TP + FN}$, where $TP$, $FP$, $FN$ denote the number of true positives, false positives, and false negatives in the estimated couplings, respectively. The concept of model selection consistency of an estimator is given in Definition 1, and is also known as the sparsistency property [10].

Definition 1. An estimator is called model selection consistent if both the associated precision and recall satisfy $\mathrm{Precision} \to 1$ and $\mathrm{Recall} \to 1$ as $M \to \infty$.
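In code, these two metrics take only a few lines. The sketch below uses our own naming; a coupling is counted as detected when its estimate is nonzero up to a tolerance.

```python
import numpy as np

def precision_recall(J_hat, J_true, tol=1e-8):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN) over estimated couplings."""
    est = np.abs(J_hat) > tol
    true = np.abs(J_true) > tol
    tp = np.sum(est & true)
    fp = np.sum(est & ~true)
    fn = np.sum(~est & true)
    precision = tp / (tp + fp) if tp + fp > 0 else 1.0
    recall = tp / (tp + fn) if tp + fn > 0 else 1.0
    return precision, recall
```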
Additionally, if one is further interested in the specific values of the estimated couplings, our analysis can also yield the residual sum of squares (RSS) for the estimated couplings.
Our theoretical analysis of the learning performance builds on the statistical mechanics framework. Contrary to the probably approximately correct (PAC) learning theory [37] in mathematical statistics, statistical mechanics tries to describe the typical (as defined in Definition 2) behavior exactly rather than to bound the worst case, which is likely to be over-pessimistic [38].

Definition 2. "Typical" means not just most probable but, in addition, that the probability of situations different from the typical one can be made arbitrarily small as $N \to \infty$ [38].
Similarly, by typical RR graphs we mean tree-like RR graphs, i.e., graphs that, when seen from a random node, look like part of an infinite tree; these are typical realizations from the uniform probability distribution on the ensemble of RR graphs.

Problem Formulation
For simplicity and without loss of generality, we focus on spin $s_0$. With a slight abuse of notation, we will drop certain subscripts in the following descriptions; e.g., $J_{\setminus i}$ will be denoted as $J$ hereafter, which represents a vector rather than a matrix. The basic idea of the statistical mechanical approach is to introduce the Hamiltonian and Boltzmann distribution induced by the loss function $\ell(\cdot)$:

$$H\left(J|\mathcal{D}^M\right) = \sum_{\mu=1}^{M} \ell\left(s_0^{(\mu)} h^{(\mu)}\right) + \lambda M \|J\|_1, \quad (6)$$
$$P\left(J|\mathcal{D}^M\right) = \frac{1}{Z} e^{-\beta H\left(J|\mathcal{D}^M\right)}, \quad (7)$$

where $Z = \int dJ \, e^{-\beta H\left(J|\mathcal{D}^M\right)}$ is the partition function and $\beta\,(>0)$ is the inverse temperature. In the zero-temperature limit $\beta \to +\infty$, the Boltzmann distribution (7) converges to a point-wise measure on the estimator (2). The macroscopic properties of (7) can be analyzed by assessing the free energy density $f\left(\mathcal{D}^M\right) = -\frac{1}{N\beta} \log Z$, from which, once obtained, we can evaluate averages of various quantities simply by taking its derivatives w.r.t. external fields [21]. In the current case, $f\left(\mathcal{D}^M\right)$ depends on the predetermined randomness of $\mathcal{D}^M$, which plays the role of quenched disorder. As $N \to \infty$, $f\left(\mathcal{D}^M\right)$ is expected to self-average, so it suffices to evaluate the average free energy density $f = -\frac{1}{N\beta}\left[\log Z\right]_{\mathcal{D}^M}$ (8), where $[\cdot]_{\mathcal{D}^M}$ denotes the expectation over the i.i.d. samples $\mathcal{D}^M \sim \prod_{\mu=1}^{M} P_{\rm Ising}\left(s^{(\mu)}|J^*\right)$. Consequently, one can analyze the typical performance of any $\ell_1$-regularized $M$-estimator of the form (2) via the assessment of (8), with $\ell_1$-LinR in (4) being a special case with $\ell(x) = \frac{1}{2}(x-1)^2$.

Replica computation of the free energy density
Unfortunately, computing (8) rigorously is difficult. To practically overcome this difficulty, we resort to the powerful replica method [20-23] from statistical mechanics, which is symbolized by the identity

$$\left[\log Z\right]_{\mathcal{D}^M} = \lim_{n \to 0} \frac{\left[Z^n\right]_{\mathcal{D}^M} - 1}{n}. \quad (9)$$

The basic idea is as follows. One replaces the average of $\log Z$ by that of the $n$-th power $Z^n$, which is analytically tractable for $n \in \mathbb{N}$ in the large-$N$ limit, constructs an expression that is analytically continuable from $n \in \mathbb{N}$ to $n \in \mathbb{R}$, and then takes the limit $n \to 0$ using that expression. Although the replica method is not rigorous, it has been empirically verified by extensive studies in disordered systems [22,23] and has also been found useful in the study of high-dimensional models in machine learning [39,40]. For more details of the replica method, please refer to [20][21][22][23].
Specifically, with the Hamiltonian $H\left(J|\mathcal{D}^M\right)$, assuming $n \in \mathbb{N}$ is a positive integer, the replicated partition function $\left[Z^n\right]_{\mathcal{D}^M}$ in (9) can be written as an integral over $n$ replicas, where $h^a = \sum_j J^a_j s_j$ will be termed the local field hereafter, and $a$ (and $b$ in the following) is the index variable of the replicas. The analysis below essentially depends on the distribution of $h^a$, which is nontrivial. To resolve this, we take an approach similar to [7, 29] and introduce the following ansatz.

Ansatz 1. Denote $\Psi \equiv \mathcal{N}(0)$ and $\bar{\Psi} \equiv \{j \,|\, j \notin \mathcal{N}(0)\}$ as the active and inactive sets of spin $s_0$, respectively. Then, for a typical RR graph $G \in \mathcal{G}_{N,d,K_0}$ in the paramagnetic phase, i.e., $(d-1)\tanh^2(K_0) < 1$, the $\ell_1$-LinR estimator in (4) is a random vector determined by the random realization of $\mathcal{D}^M$ and obeys the form $\hat{J}_j = \bar{J}_j \,\mathbb{1}\left[j \in \Psi\right] + \frac{1}{\sqrt{N}} w_j$ (11), where $\bar{J}_j$ is the mean value of the estimator and $w_j$ is a random variable that is asymptotically zero-mean with variance scaled as $\mathcal{O}(1)$.
The consistency of Ansatz 1 is checked in Appendix B. Under Ansatz 1, the local fields $h^a$ can be decomposed as $h^a = \sum_{j \in \Psi} \bar{J}_j s_j + h^a_w$, where $h^a_w \equiv \frac{1}{\sqrt{N}} \sum_j w^a_j s_j$ is the "noise" part. According to the central limit theorem (CLT), the $h^a_w$ can be approximated as multivariate Gaussian variables which, under the replica symmetric (RS) ansatz [21], are fully described by two order parameters $Q \equiv \frac{1}{N}\left(w^a\right)^{\rm T} C^{\setminus 0} w^a$ and $q \equiv \frac{1}{N}\left(w^a\right)^{\rm T} C^{\setminus 0} w^b$ $(a \neq b)$ (12), where $C^{\setminus 0} \equiv \{C^{\setminus 0}_{ij}\}$ is the covariance matrix of the original Ising model without the spin $s_0$. Since the difference between $C^{\setminus 0}$ and the covariance matrix including $s_0$ is not essential in the limit $N \to \infty$, the superscript $\setminus 0$ will hereafter be discarded. As shown in Appendix A, for the quadratic loss $\ell(x) = \frac{1}{2}(1-x)^2$ of $\ell_1$-LinR, the average free energy density (9) in the limit $\beta \to \infty$ can be computed as (13), where $\mathrm{Extr}\{\cdot\}$ denotes the extremum operation w.r.t. the relevant variables and $\xi, S$ denote the energy and entropy terms, with $\alpha \equiv M/N$, $\chi \equiv \lim_{\beta\to\infty} \beta(Q - q)$, and $\mathbb{E}_{s,z}(\cdot)$ the expectation w.r.t. $z \sim \mathcal{N}(0,1)$ and $(s_0, s_\Psi) \sim P_{\rm Ising}\left(s_0, s_\Psi|J^*\right) \propto e^{s_0 \sum_{j\in\Psi} J^*_j s_j}$ [7]. For different losses $\ell(\cdot)$, the free energy results (13) differ only in the energy term $\xi$, which in general is non-analytical (e.g., the logistic loss of $\ell_1$-LogR) but can be handled numerically. Please refer to Appendix A.3 for more details.
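The RS structure of the noise fields can be checked numerically: with a component $z$ shared across replicas and replica-specific components $v^a$, the construction $h^a_w = \sqrt{Q-q}\,v^a + \sqrt{q}\,z$ produces variance $Q$ within a replica and covariance $q$ across replicas. In the sketch below (our own; the values of $Q$, $q$, and the replica number are arbitrary illustrative choices), we verify this with Monte Carlo draws.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, q, n = 1.5, 0.9, 4                    # assumed order-parameter values, n replicas
T = 200000                               # number of Monte Carlo draws
z = rng.standard_normal(T)               # common component, shared across replicas
v = rng.standard_normal((n, T))          # replica-specific components
h = np.sqrt(Q - q) * v + np.sqrt(q) * z  # h^a = sqrt(Q - q) v^a + sqrt(q) z
C = np.cov(h)                            # empirical replica covariance matrix
# diagonal entries approximate Q, off-diagonal entries approximate q
```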
In contrast to the case of the $\ell_2$-norm in [29], the $\ell_1$-norm in (15) breaks the rotational invariance property, i.e., $\|w^a\|_1 \neq \|Ow^a\|_1$ for a general orthogonal matrix $O$, making it difficult to compute the entropy term $S$. To circumvent this difficulty, we employ the observation that, when considering the RR graph ensemble $\mathcal{G}_{N,d,K_0}$ as the coupling network of the Ising model, the orthogonal matrix $O$ diagonalizing the covariance matrix $C$ appears to be distributed according to the Haar orthogonal measure [41,42]. Thus, it is assumed that $I$ in (15) can be replaced by its average $[I]_O$ over the Haar-distributed $O$. Finally, under the RS ansatz, the average free energy density (9) in the limit $\beta \to \infty$ reads (17), where $z \sim \mathcal{N}(0,1)$, $G(x)$ is a function determined by the eigenvalue distribution (EVD) $\rho(\gamma)$ of the covariance matrix $C$, and $\Theta$ is the collection of macroscopic parameters $\Theta = \left\{\chi, Q, E, R, F, \eta, K, H, \{\bar{J}_j\}_{j\in\Psi}\right\}$. For details of these macroscopic parameters and $\rho(\gamma)$, please refer to Appendices A and F, respectively. Note that in (17), apart from the ratio $\alpha \equiv M/N$, $N$ and $M$ also appear as $\lambda M/\sqrt{N}$ in the free energy result, which differs from previous results [7,29,39]. The reason is that, owing to the $\ell_1$-regularization term $\lambda M \|J\|_1$, the mean estimates $\{\bar{J}_j\}_{j\in\Psi}$ in the active set $\Psi$ and the noise $w$ in the inactive set $\bar{\Psi}$ give different scaling contributions to the free energy density.
Although there are no analytic solutions, the macroscopic parameters in (17) can be obtained by numerically solving the corresponding equations of state (EOS), to use the physics terminology. Specifically, for the $\ell_1$-LinR estimator, the EOS can be obtained from the extremization condition in (17) as (19) (for the EOS of a general $M$-estimator and of $\ell_1$-LogR, please refer to Appendix A.3), where $\Lambda$ satisfying $E\eta = -\int \frac{\rho(\gamma)}{\Lambda - \gamma}\,d\gamma$ is determined by the extremization condition in (18), and $\mathrm{soft}(z,\tau) = \mathrm{sign}(z)\left(|z| - \tau\right)_+$ is the soft-thresholding function. Once the EOS is solved, the free energy density defined in (8) is readily obtained.
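The soft-thresholding function appearing in the EOS is the proximal operator of the $\ell_1$ penalty. A quick numerical check (our own sketch) confirms that $\mathrm{soft}(z,\tau)$ minimizes $\frac{1}{2}(x-z)^2 + \tau|x|$:

```python
import numpy as np

def soft(z, tau):
    """Soft-thresholding: sign(z) * max(|z| - tau, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

# soft(z, tau) is the proximal operator of tau * |x|:
#   soft(z, tau) = argmin_x 0.5 * (x - z)**2 + tau * |x|
z, tau = 1.7, 0.6
xs = np.linspace(-5, 5, 200001)                       # fine grid search
x_star = xs[np.argmin(0.5 * (xs - z) ** 2 + tau * np.abs(xs))]
```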

High-dimensional asymptotic result
One important result of our replica analysis is that, as derived from the free energy result (17) (see Appendix A.3), the original high-dimensional $\ell_1$-LinR estimator (4) is decoupled into a pair of scalar estimators, (20) for the active set and (21) for the inactive set, whose effective noises are standard Gaussian random variables. The decoupling property asserts that, once the EOS (19) is solved, the asymptotic behavior of $\ell_1$-LinR can be statistically described by a pair of simple scalar soft-thresholding estimators (see Figs. 1(a) and 1(b)).
In the high-dimensional setting, where $N$ is allowed to grow as a function of $M$, one important question is: what is the minimum number of samples $M$ required to achieve model selection consistency as $N \to \infty$? Although we obtain the pair of scalar estimators (20) and (21), there is no analytical solution to the EOS (19), making it difficult to derive an explicit condition. To overcome this difficulty, as shown in Appendix D, we perform a perturbation analysis of the EOS and obtain an asymptotic relation between $H$ and $F$. We then obtain that, for an RR graph $G \in \mathcal{G}_{N,d,K_0}$, given $M$ i.i.d. samples $\mathcal{D}^M$, the $\ell_1$-LinR estimator (4) can consistently recover the graph structure $G$ if $M > \frac{c(\lambda, K_0)\log N}{\lambda^2}$ (22), where $c(\lambda, K_0)$ is a constant dependent on the regularization parameter $\lambda$ and the coupling strength $K_0$, for which a sharp prediction (as verified in Sec. 5) is obtained as $c(\lambda, K_0) = \frac{2\left(1 - \tanh^2(K_0) + d\lambda^2\right)}{1 + (d-1)\tanh^2(K_0)}$ (23). For details of the analysis, including the counterpart for $\ell_1$-LogR, see Appendix D. Consequently, we obtain the typical sample complexity of $\ell_1$-LinR for Ising model selection on typical RR graphs in the paramagnetic phase. The result in (22) is derived for $\ell_1$-LinR with a fixed regularization parameter $\lambda$.
Since the value of $\lambda$ is upper bounded by $\tanh(K_0)$ (otherwise false negatives occur, as discussed in Appendix D), a lower bound on the typical sample complexity of $\ell_1$-LinR is obtained as $M > \frac{2\log N}{\tanh^2(K_0)}$ (24). Interestingly, the scaling in (24) is the same as that of the information-theoretic worst-case result $M > \frac{c\log N}{K_0^2}$ [11].

Non-asymptotic result for moderate M, N
In practice, it is desirable to predict the non-asymptotic performance of the $\ell_1$-LinR estimator for moderate $M, N$. However, the scalar estimator (20) for the active set (see Fig. 1(a)) fails to capture the fluctuations around the mean estimates. This is because, in obtaining the energy term $\xi$ (16) of the free energy density (17), the fluctuations around the mean estimates $\{\bar{J}_j\}_{j\in\Psi}$ are averaged out by the expectation $\mathbb{E}_{s,z}(\cdot)$. To address this problem, we replace $\mathbb{E}_{s,z}(\cdot)$ in (17) with a sample average that accounts for the finite-size effect, thus obtaining a modified estimator (25) for the active set. The modified estimator (see Fig. 1(c) for a schematic) is equivalent to the scalar one (20) (Fig. 1(a)) as $M \to \infty$, but it enables us to capture the fluctuations of $\{\hat{J}_j\}_{j\in\Psi}$ for moderate $M$. Note that, due to the replacement of the expectation with a sample average in the free energy density (17), the EOS (19) also needs to be modified; it can be solved iteratively as sketched in Algorithm 1, with details shown in Appendix E.1.
Consequently, for moderate $M, N$, the non-asymptotic statistical properties of the $\ell_1$-LinR estimator can be characterized by the reduced $d$-dimensional $\ell_1$-LinR estimator (25) (Fig. 1(c)) and the scalar estimator (21) (Fig. 1(b)) using Monte Carlo (MC) simulations. Denote by $\{\hat{J}^t_j\},\ t = 1, \ldots, T_{\rm MC}$, the estimates in the $t$-th MC simulation, where $\{\hat{J}^t_j\}_{j\in\Psi}$ and $\{\hat{J}^t_j\}_{j\in\bar{\Psi}}$ are solutions of (25) and (21), respectively, and $T_{\rm MC}$ is the total number of MC simulations. Then, the Precision and Recall are computed as in (26), where $\|\cdot\|_0$ is the $\ell_0$-norm indicating the number of nonzero elements. In addition, the RSS can be computed in a similar manner.

Discussions
It might seem surprising that, despite the apparent model misspecification due to the use of a quadratic loss, the $\ell_1$-LinR estimator can still correctly infer the structure with the same order of sample complexity as $\ell_1$-LogR. Our theoretical analysis thus implies that the idea of using linear regression for binary data is not as outrageous as one might imagine. Here we provide an intuitive explanation of its success, together with a discussion of its limitations.
On average, from (4), the stationarity condition for the $\ell_1$-LinR estimator is given as

$$\sum_{j \neq 0} \langle s_k s_j \rangle J_j - \langle s_0 s_k \rangle + \lambda \, \partial |J_k| \ni 0, \quad \forall k \neq 0, \quad (27)$$

where $\langle \cdot \rangle$ and $\partial|J_k|$ represent the average w.r.t. the Boltzmann distribution (7) and the sub-gradient of $|J_k|$, respectively. In the paramagnetic phase, $\langle s_i s_j \rangle$ decays exponentially in magnitude w.r.t. the distance between sites $i$ and $j$. This guarantees that once the connections $J_k$ of sites in the first nearest-neighbor set $\Psi$ are chosen so that (28) holds, the other conditions are automatically satisfied by setting all connections not from $\Psi$ to zero. For an appropriate choice of $\lambda$, (28) has solutions with $\mathrm{sign}(J^*_k) J_k > 0,\ \forall k \in \Psi$; namely, $\forall k \in \Psi$, the estimate of $J_k$ has the same sign as the true value $J^*_k$. This implies that, on average, the $\ell_1$-LinR estimator can successfully recover the network structure up to the connection signs if $\lambda$ is chosen appropriately.
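The exponential decay of $\langle s_i s_j \rangle$ invoked above can be seen exactly in the simplest tree-like case, an open one-dimensional chain, where $\langle s_0 s_j \rangle = \tanh(K)^{j}$. The brute-force check below is our own sketch; it enumerates all spin configurations of a short chain.

```python
import itertools
import numpy as np

def chain_corr(N, K):
    """Exact correlations <s_i s_j> of an open 1D Ising chain by enumeration."""
    Z = 0.0
    corr = np.zeros((N, N))
    for s in itertools.product([-1, 1], repeat=N):
        s = np.array(s)
        w = np.exp(K * np.sum(s[:-1] * s[1:]))  # Boltzmann weight, nearest neighbors
        Z += w
        corr += w * np.outer(s, s)
    return corr / Z

C = chain_corr(8, 0.4)
# <s_0 s_j> should equal tanh(K)^j, decaying exponentially with distance j
```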
The key to the above argument is that $\langle s_i s_j \rangle$ decays exponentially fast w.r.t. the distance between the two sites, which no longer holds after the phase transition. Thus, it is conjectured that the $\ell_1$-LinR estimator starts to fail in network recovery exactly at the phase transition point. However, it is worth noting that this limitation is in fact not specific to $\ell_1$-LinR: $\ell_1$-LogR also exhibits similar behavior unless post-thresholding is used, as reported in [43].

Experimental results
Figure 2: Theoretical and experimental results of RSS, Precision, and Recall for the RR graph and the 2D grid using $\ell_1$-LinR and $\ell_1$-LogR with different values of $\alpha \equiv M/N$. The standard error bars are obtained from 1000 random runs. A good agreement between theory and experiment is achieved, even for the small-size 2D grid graph with many loops. For more results, please refer to Appendix G.
In this section, we conduct numerical experiments to verify the accuracy of the theoretical analysis. The experimental procedure is as follows. First, a random graph $G \in \mathcal{G}_{N,d,K_0}$ is generated and the Ising model is defined on it. Then, the spin snapshots are obtained using the Metropolis-Hastings algorithm [44][45][46] in the same way as [7], yielding the dataset $\mathcal{D}^M$. We randomly choose a center spin $s_0$ and infer its neighborhood using the $\ell_1$-LinR (4) and $\ell_1$-LogR [10] estimators. To obtain standard error bars, we repeat this sequence of operations 1000 times. The RR graph $G \in \mathcal{G}_{N,d,K_0}$ with node degree $d = 3$ and coupling strength $K_0 = 0.4$ is considered, which satisfies the paramagnetic condition $(d-1)\tanh^2(K_0) < 1$. The active couplings $\{J^*_{ij}\}_{(i,j)\in E}$ take the positive and negative signs with equal probability.
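A minimal sketch of the sampling step is shown below. This is not the authors' implementation; the burn-in and thinning values are arbitrary choices, and the inverse temperature is absorbed into the couplings.

```python
import numpy as np

def metropolis_ising(J, n_samples, burn_in=500, thin=5, seed=0):
    """Metropolis-Hastings sampler for P(s) ∝ exp(sum_{i<j} J_ij s_i s_j).
    J: (N, N) symmetric coupling matrix with zero diagonal."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    s = rng.choice([-1, 1], size=N)
    samples = []
    for t in range(burn_in + n_samples * thin):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)  # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-dE):
            s[i] = -s[i]
        if t >= burn_in and (t - burn_in) % thin == 0:
            samples.append(s.copy())
    return np.array(samples)
```

For a sanity check, the two-spin model with coupling $K_0$ has the exact correlation $\langle s_0 s_1 \rangle = \tanh(K_0)$, which the sampler should reproduce.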
We first verify the precise non-asymptotic predictions of our method described in Sec. 3.4. As shown in Fig. 2 (upper figure), there is a good agreement between the theoretical predictions and experimental results, even for small $N = 200$ and small $\alpha$ (equivalently small $M$), verifying the correctness of the replica analysis. Interestingly, a quantitatively similar behavior between $\ell_1$-LinR and $\ell_1$-LogR is observed in terms of precision and recall. Regarding the RSS, the two estimators actually behave differently, which can be clearly seen in Fig. 7 in Appendix G: the RSS is much smaller for $\ell_1$-LogR, which is natural since the estimates of $\ell_1$-LogR are closer to the true ones due to the model match. As our theoretical analysis is based on the typical tree-like structure assumption, it is interesting to see whether it is applicable to graphs with loops. To this end, we consider the 2D 4-nearest-neighbor grid with periodic boundary conditions, which is a common loopy graph. Fig. 2 (lower figure) shows the results for a $15 \times 15$ 2D grid with uniform constant coupling $K_0 = 0.2$. The agreement between the theoretical and numerical results is fairly good, indicating that our theoretical result can be a good approximation even for loopy graphs. More results for different values of $N$ and $\lambda$ are shown in Fig. 7 in Appendix G. We also verify the asymptotic result, i.e., the critical scaling value $c_0(\lambda, K_0)$, by setting $M = c\log N$ and varying $c$ around $c_0(\lambda, K_0)$. As expected, the Precision increases consistently with $N$ when $c > c_0(\lambda, K_0)$ and decreases consistently with $N$ when $c < c_0(\lambda, K_0)$, while the Recall increases consistently and approaches 1 as $N \to \infty$, which verifies the sharpness of the critical scaling value prediction. The results for $\ell_1$-LogR, including the case of $\lambda = 0.1$ for both $\ell_1$-LinR and $\ell_1$-LogR, are shown in Figs. 9 and 10 in Appendix G.

Conclusion
In this paper, we provide a unified statistical mechanics framework for the analysis of the typical learning performance of $\ell_1$-regularized $M$-estimators, $\ell_1$-LinR in particular, for Ising model selection on typical paramagnetic RR graphs. Using the powerful replica method, the high-dimensional $\ell_1$-regularized $M$-estimator is decoupled into a pair of scalar estimators, by which we obtain an accurate estimate of the typical sample complexity. It is revealed that, perhaps surprisingly, the misspecified $\ell_1$-LinR estimator is model selection consistent using $M = \mathcal{O}(\log N)$ samples, which is of the same order as $\ell_1$-LogR. Moreover, with a slight modification of the scalar estimator for the active set to account for the finite-size effect, we further obtain sharp predictions of the non-asymptotic behavior of $\ell_1$-LinR (and also $\ell_1$-LogR) for moderate $M, N$. There is an excellent agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings.
Several key assumptions are made in our theoretical analysis, such as the paramagnetic assumption, which implies that the coupling strength should not be too large. It is worth noting that the restrictive paramagnetic assumption is not specific to $\ell_1$-LinR: other low-complexity estimators like $\ell_1$-LogR are also subject to it unless post-thresholding is used [43]. These assumptions restrict the applicability of the presented results, and overcoming such limitations will be an important direction for future work.

References

[11] Narayana P. Santhanam and Martin J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117-4134, 2012.

[13] Guy Bresler. Efficiently learning Ising models on arbitrary graphs. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 771-782, 2015.

A Free energy density f computation
The detailed derivation of the average free energy density $f = -\frac{1}{N\beta}\left[\log Z\right]_{\mathcal{D}^M}$ in (9) using the replica method is illustrated. Our method provides a unified framework for the statistical mechanics analysis of any $\ell_1$-regularized $M$-estimator of the form (2). For generality, the following derivations first address a generic $\ell_1$-regularized $M$-estimator (2) with a generic loss function $\ell(\cdot)$. After obtaining the generic results, specific results for both the $\ell_1$-LinR estimator (4) with quadratic loss $\ell(x) = \frac{1}{2}(x-1)^2$ and the $\ell_1$-LogR estimator with logistic loss $\ell(x) = \log\left(1 + e^{-2x}\right)$ are provided. For the IS estimator, the results can be easily obtained by substituting $\ell(x) = e^{-x}$, though the specific results are not shown.

A.1 Energy term ξ of f
The key of the replica method is to compute the replicated partition function $\left[Z^n\right]_{\mathcal{D}^M}$ (29), which involves the factor $e^{-\beta\lambda M n \sum_{j\in\Psi}|\bar{J}_j|}$, where the terms $\frac{1}{\sqrt{N}} w^a_j,\ j \in \Psi$, in the finite active set $\Psi$ are neglected in the second line when $N$ is large; $P\left(s_0, s_\Psi|J^*\right) = \sum_{s_{\bar{\Psi}}} P_{\rm Ising}\left(s|J^*\right)$ is the marginal distribution of $(s_0, s_\Psi)$, which can be computed as in [7]; and $P_{\rm noise}\left(\{h^a_w\}_a | \{w^a\}_a\right)$ is the distribution of the "noise" part $h^a_w \equiv \frac{1}{\sqrt{N}} \sum_{j} w^a_j s_j$ of the local field. In the last line, the asymptotic independence between $h^a_w$ and $(s_0, s_\Psi)$ is applied, as discussed in [7]. Regarding the marginal distribution $P\left(s_0, s_\Psi|J^*\right)$, in general one has to take into account the cavity fields. In the case considered in this paper, however, the paramagnetic assumption simplifies the marginal distribution, which finally becomes proportional to $e^{s_0 \sum_{j\in\Psi} J^*_j s_j}$; here $C$ denotes the covariance matrix of the original Ising model without $s_0$. To write the integration in terms of the order parameters $Q, q$, we introduce trivial delta-function identities so that $\left[Z^n\right]_{\mathcal{D}^M}$ in (29) can be rewritten with a factor $L$ collecting the $\ell_1$ term and the average over $(s_0, s_\Psi)$ (37). According to the CLT together with (30) and (31), the noise parts $h^a_w,\ a = 1, \ldots, n$, follow a multivariate Gaussian distribution with zero mean (paramagnetic assumption) and covariances $\left[h^a_w h^b_w\right] = Q\delta_{ab} + q\left(1 - \delta_{ab}\right)$. Consequently, by introducing auxiliary i.i.d. standard Gaussian random variables $v^a \sim \mathcal{N}(0,1)$ and $z \sim \mathcal{N}(0,1)$, the noise parts can be written in the compact form $h^a_w = \sqrt{Q - q}\, v^a + \sqrt{q}\, z,\ a = 1, \ldots, n$, so that $L$ in (37) can be written as a Gaussian integral over $Dz = \frac{dz}{\sqrt{2\pi}} e^{-z^2/2}$. As a result, using the replica formula, we obtain the energy term, where in the last line a change of variable $y = s_0 \sum_{j\in\Psi} \bar{J}_j s_j + \sqrt{Q - q}\, v + \sqrt{q}\, z$ is used.
As a result, from (9), the average free energy density in the limit $\beta\to\infty$ reads as in (42), where $\mathrm{Extr}\left\{\cdot\right\}$ denotes extremization w.r.t. the relevant variables, $\xi, S$ are the corresponding energy and entropy terms of $f$, respectively, and the relation $\lim_{\beta\to\infty}\beta\left(Q-q\right) \equiv \chi$ is used [6,7]. The extremization in the free energy result (42) follows from the saddle-point method in the large-$N$ limit.

A.2 Entropy term S of f
To obtain the final result of the free energy density, the remaining entropy term $S$ must be computed, which requires the result of $I$ in (44). However, unlike the $\ell_2$-norm, the $\ell_1$-norm in (44) is not rotationally invariant, which complicates the computation. Specifically, also under the RS ansatz, two auxiliary order parameters $R, r$ are introduced. Then, delta functions $\prod_a \delta\left(\cdot\right)$ over the $w^a$ are inserted. Moreover, replacing the original delta functions in (48) by their integral representations and taking the average over the orthogonal matrix $O$, after some algebra, $I$ is replaced with an average over $O$. To proceed with the computation, the eigendecomposition of the matrix $L_n$ is performed. After some algebra, for configurations of $w^a$ that satisfy both $\left(w^a\right)^T w^a = NR$ and $\left(w^a\right)^T w^b = Nr$, the eigenvalues and associated eigenvectors of $L_n$ can be calculated, where $\lambda_1$ is the eigenvalue corresponding to the eigenvector $u_1$ while $\lambda_2$ is the degenerate eigenvalue corresponding to the eigenvectors $u_a, a = 2,\dots,n$, and $\rho\left(\gamma\right)$ is the eigenvalue distribution (EVD) of $C$. Then, combined with (51), after some algebra, we obtain the corresponding average. Furthermore, replacing the original delta functions in (48) by their integral representations, we obtain an intermediate result which, using a Gaussian integral, can be linearized, where $Dz_i = \frac{dz_i}{\sqrt{2\pi}}e^{-\frac{z_i^2}{2}}$. Consequently, the entropy term $S$ of the free energy density $f$ is computed. For $\beta\to\infty$, according to the characteristics of the Boltzmann distribution, certain scaling relations are assumed to hold, and the entropy term is finally obtained.
Specifically, the free energy can be written as in (60), and the corresponding EOS can be written as in (61). Note that the mean estimates $\bar{J}_j, j\in\Psi$ in (61) are obtained by solving a reduced optimization problem, where the fixed-point equation associated with any $\bar{J}_k, k\in\Psi$ involves $\mathrm{sign}\left(\cdot\right)$, the standard sign function applied element-wise. For an RR graph $G \in \mathcal{G}_{N,d,K_0}$ with degree $d$ and coupling strength $K_0$, without loss of generality, assuming that all the active couplings are positive, we have $\mathbb{E}_s\left(s_0 s_k\right) = \tanh\left(K_0\right), \forall k\in\Psi$, and $\mathbb{E}_s\left(s_k s_j\right) = \tanh^2\left(K_0\right), \forall k, j\in\Psi, k\neq j$. Given these results and thanks to the symmetry, we obtain the closed-form solution (64), where $\mathrm{soft}\left(z, \tau\right) = \mathrm{sign}\left(z\right)\left(\left|z\right| - \tau\right)_+$ is the soft-thresholding function. On the other hand, in the inactive set $\bar\Psi$, each component of the scaled noise estimates can be statistically described as the solution to a scalar estimator $\min_w\left(\cdot\right)$. Consequently, recalling the definition of $w$ in (11), the estimates $\hat{J}_j, j\in\bar\Psi$ in the inactive set $\bar\Psi$ are given by (66), in which $z_j \sim \mathcal{N}\left(0,1\right), j\in\bar\Psi$ are i.i.d. standard Gaussian noises.
Consequently, it can be seen from (64) and (66) that, statistically, the $\ell_1$-LinR estimator is decoupled into two scalar thresholding estimators, one for the active set $\Psi$ and one for the inactive set $\bar\Psi$, respectively.
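The soft-thresholding function $\mathrm{soft}\left(z, \tau\right) = \mathrm{sign}\left(z\right)\left(\left|z\right| - \tau\right)_+$ at the heart of both scalar estimators is easy to implement; a minimal sketch (the function name and demo values are ours, not from the paper):

```python
def soft(z, tau):
    """Soft-thresholding: sign(z) * max(|z| - tau, 0).
    Shrinks |z| by tau and sets inputs with |z| <= tau exactly to zero."""
    if z > tau:
        return z - tau
    if z < -tau:
        return z + tau
    return 0.0

# e.g. soft(0.6, 0.2) gives 0.4, while soft(-0.1, 0.2) is killed to 0.0
```

Inputs below the threshold are mapped exactly to zero, which is what produces the sparse support of the $\ell_1$-regularized estimates.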

A.3.2 Logistic loss $\ell(y) = \log\left(1 + e^{-2y}\right)$
In the case of the logistic loss $\ell(y) = \log\left(1 + e^{-2y}\right)$ for the $\ell_1$-LogR estimator, however, there is no analytic solution to $y$ in $\min_y \frac{\left(y - s_0\left(\sqrt{Q}z + \sum_{j\in\Psi}\bar{J}_j s_j\right)\right)^2}{2\chi} + \ell\left(y\right)$, and it has to be solved iteratively together with the other parameters $\Theta$. After some algebra, we obtain the EOS for the $\ell_1$-LogR estimator, in which, e.g., $E = \alpha\mathbb{E}_{s,z}\left[\frac{s_0 z}{\sqrt{Q}}\tanh\left(\hat{y}\left(s,z,\chi,Q,J\right)\right)\right]$ and $F = \alpha\mathbb{E}_{s,z}\left[\left(1 - \tanh\left(\hat{y}\left(s,z,\chi,Q,J\right)\right)\right)^2\right]$. In the active set $\Psi$, the mean estimates $\bar{J}_j, j\in\Psi$ can be obtained by solving a reduced $\ell_1$-regularized optimization problem (68). In contrast to the $\ell_1$-LinR estimator, the mean estimates $\bar{J}_j, j\in\Psi$ in (68) for the $\ell_1$-LogR estimator do not admit analytic solutions and also have to be computed numerically. For an RR graph $G\in\mathcal{G}_{N,d,K_0}$ with degree $d$ and coupling strength $K_0$, after some algebra, the corresponding fixed-point equation for $\bar{J}_j = \bar{J}, j\in\Psi$ is obtained as $\bar{J} = \mathrm{soft}\left(\mathbb{E}_{s,z}\left[\hat{y}\left(s,z,\chi,Q,J\right) s_0 \sum_{j\in\Psi} s_j\right], \lambda d\chi\right)$, which can be solved iteratively.
The estimates in the inactive set $\bar\Psi$ are the same as those of $\ell_1$-LinR in (66), and can be described by a scalar thresholding estimator once the EOS is solved.
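The one-dimensional minimization above can be reduced to its stationarity condition $\left(y - h\right)/\chi = 1 - \tanh\left(y\right)$, where $h$ abstracts the local field $s_0\left(\sqrt{Q}z + \sum_{j\in\Psi}\bar{J}_j s_j\right)$. Since $1 - \tanh\left(y\right) \in \left(0, 2\right)$, the root is always bracketed by $\left[h, h + 2\chi\right]$, so a simple bisection solves it reliably; a sketch with our own (hypothetical) names:

```python
import math

def prox_logistic(h, chi, iters=200):
    """Solve (y - h)/chi = 1 - tanh(y) for y by bisection.
    This is the stationarity condition of
        min_y (y - h)^2 / (2*chi) + log(1 + exp(-2*y)),
    and since 1 - tanh(y) lies in (0, 2), the root is bracketed by [h, h + 2*chi]."""
    g = lambda y: (y - h) / chi - 1.0 + math.tanh(y)  # strictly increasing in y
    lo, hi = h, h + 2.0 * chi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Bisection is used here for robustness; any scalar root finder on the strictly increasing residual works equally well inside the EOS iteration.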

B Check the consistency of ansatz (A1)
To check the consistency of Ansatz (A1), first we categorize the estimates based on the distance, or generation, from the focused spin $s_0$. Considering the original Ising model whose coupling network is a tree-like graph, we can naturally define generations of the spins according to their distance from the focused spin $s_0$. We categorize the spins directly connected to $s_0$ as the first generation and denote the corresponding index set as $\Omega_1 = \left\{i \,|\, J_i^* \neq 0, i\in\left\{1,\dots,N-1\right\}\right\}$. Each spin in $\Omega_1$ is connected to some other spins besides $s_0$; those spins constitute the second generation, whose index set we denote as $\Omega_2$. This recursive construction of generations can be unambiguously continued on the tree-like graph, and we denote the index set of the $g$-th generation from spin $s_0$ as $\Omega_g$. The overall construction of generations is graphically represented in Fig. 4. Generally, assume that the set of nonzero values of the $\ell_1$-LinR estimator is denoted as $\Psi = \left\{\Omega_1, \dots, \Omega_g\right\}$. Then, Ansatz (A1) means that the correct active set of the mean estimates is $\Psi = \left\{\Omega_1\right\}$. Figure 4: Schematic of generations of spins. In general, the $g$-th generation of spin $s_0$ is denoted as $\Omega_g$, whose distance from spin $s_0$ is $g$.
To verify this, we examine the values of the mean estimates based on (60). Due to the symmetry, it is expected that the values of the mean estimates $\bar{J}_j = \bar{J}_a, j\in\Omega_a$ are identical within the same set $\Omega_a$, for each $a = 1,\dots,g$. In addition, if the solutions satisfy Ansatz (A1) in (11), i.e., $\bar{J}_1 = \bar{J}, \bar{J}_a = 0, a \geq 2$, from (60) we obtain (70), where the result $\mathbb{E}_s\left(s_i s_j\right) = \tanh^{d_0}\left(K_0\right)$ is used for any two spins $s_i, s_j$ whose distance is $d_0$ in the RR graph $G\in\mathcal{G}_{N,d,K_0}$. Note that the solution of the first equation in (70) automatically satisfies the second equation (sub-gradient condition) since $\left|\tanh\left(K_0\right)\right| \leq 1$, which indicates that $\bar{J}_1 = \bar{J}, \bar{J}_a = 0, a\geq 2$ is one valid solution. Moreover, the convexity of the quadratic loss function implies that this is the unique and correct solution, which verifies Ansatz (A1).
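The relation $\mathbb{E}_s\left(s_i s_j\right) = \tanh^{d_0}\left(K_0\right)$ used above is exact on trees; it can be checked directly on a small open Ising chain (the simplest tree) by brute-force enumeration. A minimal sketch, with our own function name and toy sizes:

```python
import itertools
import math

def chain_correlation(N, K0, i, j):
    """Exact E[s_i s_j] for an open Ising chain of N spins with uniform coupling K0,
    i.e. P(s) proportional to exp(K0 * sum_k s_k * s_{k+1}), by brute-force enumeration."""
    Z = 0.0
    corr = 0.0
    for s in itertools.product([-1, 1], repeat=N):
        w = math.exp(K0 * sum(s[k] * s[k + 1] for k in range(N - 1)))
        Z += w
        corr += w * s[i] * s[j]
    return corr / Z

# On a tree, the correlation between spins at distance d0 equals tanh(K0)**d0:
# chain_correlation(8, 0.3, 1, 5) matches math.tanh(0.3)**4 to machine precision.
```

The agreement is exact here because an open chain has no loops; on a tree-like RR graph the same relation holds in the large-$N$ limit.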

C Check the consistency of ansatz (A2)
We here check the consistency of one part of Ansatz (A2) in Sec. 3.2: that the orthogonal matrix $O$ diagonalizing the covariance matrix $C$ is distributed according to the Haar orthogonal measure. To achieve this, we compare certain properties of the orthogonal matrix obtained from the diagonalization of the covariance matrix $C$ with those of an orthogonal matrix actually generated from the Haar orthogonal measure. Specifically, we compute the cumulants of the traces of powers $k$ of the orthogonal matrix. All cumulants of degree $r \geq 3$ are known to vanish in the large-$N$ limit [41,42]; the only nontrivial cumulants are the second-order ones between traces with the same power $k$. We have computed these cumulants for the orthogonal matrix obtained from the covariance matrix $C$ and found that they exhibit the same behavior as those generated from the true Haar measure, as shown in Fig. 5.
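The Haar-measure reference statistics can be reproduced numerically by sampling orthogonal matrices via the QR decomposition of a Gaussian matrix (with the standard sign correction on the diagonal of $R$) and accumulating traces of powers. A sketch assuming NumPy is available; the sizes and seed are our toy choices:

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample from the Haar measure on O(n) via QR of an i.i.d. Gaussian matrix,
    rescaling column j of Q by the sign of R[j, j] (the standard correction that
    makes the QR-based sample exactly Haar-distributed)."""
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
n, samples = 30, 3000
traces = np.array([np.trace(haar_orthogonal(n, rng)) for _ in range(samples)])
# For Haar O(n), Tr(O) is asymptotically standard normal (mean ~ 0, variance ~ 1).
print(traces.mean(), traces.var())
```

Repeating this for $\mathrm{Tr}\left(O^k\right)$ with larger $k$ and estimating joint cumulants gives the reference curves against which the diagonalizer of $C$ is compared.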

D Details of the high-dimensional asymptotic result
Here the asymptotic performance in terms of $Precision$ and $Recall$ is considered for both the $\ell_1$-LinR estimator and the $\ell_1$-LogR estimator. Recall that perfect Ising model selection is achieved if and only if $Precision = 1$ and $Recall = 1$.

D.1 Recall rate
According to the definition in (5), the recall rate is only related to the statistical properties of the estimates in the active set $\Psi$, and thus the mean estimates $\bar{J}_j, j\in\Psi$ in the limit $M\to\infty$ are considered.

D.1.1 Quadratic loss
In this case, in the limit $M\to\infty$, the mean estimates $\bar{J}_j = \bar{J}, j\in\Psi$ in the active set $\Psi$ are shown in (64) and are rewritten here for ease of reference. As a result, as long as $\lambda\left(1+\chi\right) < \tanh\left(K_0\right)$, we have $\bar{J} > 0$, and thus the active set is successfully recovered so that $Recall = 1$. In addition, when $M = \mathcal{O}\left(\log N\right)$, $\chi\to 0$ as $N\to\infty$, as demonstrated later by the relation in (81). As a result, the regularization parameter needs to satisfy $0 < \lambda < \tanh\left(K_0\right)$.

D.1.2 Logistic loss
In this case, in the limit $M\to\infty$, the mean estimates $\bar{J}_j = \bar{J}, j\in\Psi$ in the active set $\Psi$ are shown in (69) and are rewritten here for ease of reference: $\bar{J} = \mathrm{soft}\left(\mathbb{E}_{s,z}\left[\hat{y}\left(s,z,\chi,Q,J\right) s_0 \sum_{j\in\Psi} s_j\right], \lambda d\chi\right)$. There is no analytic solution for $\hat{y}\left(s,z,\chi,Q,J\right)$, and the following fixed-point equation has to be solved numerically: $\frac{\hat{y}\left(s,z,\chi,Q,J\right) - s_0\left(\sqrt{Q}z + \bar{J}\sum_{j\in\Psi} s_j\right)}{\chi} = 1 - \tanh\left(\hat{y}\left(s,z,\chi,Q,J\right)\right)$.
Then one can determine the valid choice of $\lambda$ that ensures $\bar{J} > 0$. Numerical results show that the valid choice of $\lambda$ is similar to that for the quadratic loss.

D.2 Precision rate
According to the definition in (5), to compute the $Precision$, the numbers of true positives $TP$ and false positives $FP$ are needed. On the one hand, as discussed in Appendix D.1, in the limit $M\to\infty$ the recall rate approaches one, and thus we have $TP = d$ for an RR graph $G\in\mathcal{G}_{N,d,K_0}$. On the other hand, the number of false positives can be computed as $FP = FPR\cdot N$, where $FPR$ is the false positive rate.
As shown in Appendix A.3, the estimates in the inactive set $\bar\Psi$ can be statistically described by the scalar estimator (66), and thus the $FPR$ can be computed as $FPR = \mathrm{erfc}\left(\frac{\lambda M}{\sqrt{2HN}}\right)$, which depends on $\lambda, M, N, H$. However, for both the quadratic loss and the logistic loss, there is no analytic result for $H$ in (59). Nevertheless, we can obtain some asymptotic results using a perturbative analysis.
Specifically, we focus on the asymptotic behavior of the macroscopic parameters, e.g., χ, Q, K, E, H, F , in the regime F P R → 0, which is necessary for successful Ising model selection.
In addition, as $FPR = \mathrm{erfc}\left(\frac{\lambda M}{\sqrt{2HN}}\right)\to 0$, from (59) we obtain (84), where the first step uses the asymptotic expansion of $\mathrm{erfc}\left(x\right)$ as $x\to\infty$ and the last step results from the asymptotic relation in (80). Then, substituting (84) into (83) leads to the relation (85). Interestingly, the common terms $3E\eta\gamma/H$ on both sides of (85) cancel each other. Therefore, the key result for $H$ is obtained as (86). In addition, from (86) and (82), $Q$ can be simplified as (87). The resulting behavior of (59) is shown in Fig. 6 for $M = 50\log N$ with $N = 10^2 \sim 10^{12}$ using the $\ell_1$-LinR estimator.
In the paramagnetic phase, the mean value of the eigenvalue $\gamma$ can be obtained. Moreover, since $\left.\frac{d\ell\left(y\right)}{dy}\right|_{y=\hat{y}\left(s,z,\chi,Q,J\right)}^2 = \mathcal{O}\left(1\right)$, the $FPR$ in (74) can be upper bounded, where the last inequality uses the upper bound of the $\mathrm{erfc}$ function, i.e., $\mathrm{erfc}\left(x\right) \leq \frac{1}{x\sqrt{\pi}}e^{-x^2}$. Consequently, the number of false positives $FP$ satisfies the bound (89), where the last inequality holds when $\lambda^2 M^2 > 1$, which is necessary for $FP\to 0$ as $N\to\infty$.
Consequently, to ensure $FP\to 0$ as $N\to\infty$, from (89), the term $\lambda^2 M^2$ should grow at least as fast as $\log N$. Meanwhile, the number of false positives $FP$ then decays as $\mathcal{O}\left(e^{-c\log N}\right)$ for some constant $c > 0$.
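The $\mathrm{erfc}$ upper bound and the resulting decay of $FP$ can be illustrated numerically. In the sketch below the argument $\frac{\lambda^2 M^2}{2HN}$ is abstracted as $c\log N$ (our simplification for illustration, not a quantity computed in the paper):

```python
import math

def erfc_upper_bound(x):
    """The bound used in the text: erfc(x) <= exp(-x^2) / (x * sqrt(pi)) for x > 0."""
    return math.exp(-x * x) / (x * math.sqrt(math.pi))

def fp_proxy(N, c):
    """Number of false positives N * erfc(x), with x^2 = lambda^2 M^2 / (2 H N)
    abstracted as c * log(N); then FP scales roughly as N^(1 - c), vanishing for c > 1."""
    return N * math.erfc(math.sqrt(c * math.log(N)))

for N in (10**2, 10**4, 10**6):
    print(N, fp_proxy(N, 2.0))  # decreasing in N for c = 2
```

This mirrors the conclusion that $M = \mathcal{O}\left(\log N\right)$ samples with a fixed $\lambda$ suffice to drive the false positive count to zero.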

D.2.1 Quadratic loss
In this case, when $0 < \lambda < \tanh\left(K_0\right)$, from (61) we can obtain an analytic result for the corresponding scaling factor as follows. On the other hand, from the discussion in Appendix D.1, the recall rate $Recall\to 1$ as $M\to\infty$ when $0 < \lambda < \tanh\left(K_0\right)$. Overall, for an RR graph $G\in\mathcal{G}_{N,d,K_0}$ with degree $d$ and coupling strength $K_0$, perfect Ising model selection is achieved once $M > c\left(\lambda, K_0\right)\log N$, where $c\left(\lambda, K_0\right)$ is a value dependent on the regularization parameter $\lambda$ and the coupling strength $K_0$, which can be approximated in the limit $N\to\infty$ as follows.

D.2.2 Logistic loss
In this case, from (67), the value of the corresponding scaling factor can be computed as well. However, unlike the case of the $\ell_1$-LinR estimator, there is no analytic solution, and it has to be calculated numerically. It can be seen that, for Ising model selection, the $\ell_1$-LinR estimator differs from the $\ell_1$-LogR estimator only in the value of this scaling factor.
E Details of the non-asymptotic result for moderate M, N

As demonstrated in Appendix A.3, from the replica analysis, both the $\ell_1$-LinR and $\ell_1$-LogR estimators are decoupled, and their asymptotic behavior can be described by two scalar estimators for the active set and inactive set, respectively. It is desirable to obtain non-asymptotic results for moderate $M, N$. However, it is found that simply inserting finite values of $M, N$ into the EOS does not always yield good agreement between the two scalar estimators and the experimental results, especially for the $Recall$ when $M$ is small. This can be explained by the derivation of the free energy density: in calculating the energy term $\xi$, the limit $M\to\infty$ is taken implicitly when assuming the limit $N\to\infty$ with $\alpha\equiv M/N$. As a result, the scalar estimator associated with the active set can only describe the asymptotic performance in the limit $M\to\infty$, and thus cannot capture the fluctuations of the estimates in the active set, such as the recall rate, for finite $M$. To characterize the non-asymptotic behavior of the estimates in the active set $\Psi$, we replace the expectation $\mathbb{E}_s\left(\cdot\right)$ in (58) by the sample average over $M$ samples, and the corresponding estimates are obtained as in (97). As the mean estimates $\bar{J}_j, j\in\Psi$ are modified as in (97), the corresponding solution to the EOS in (61) also needs to be modified, which can be solved iteratively as sketched in Algorithm 1. The details of a practical implementation of Algorithm 1 are described in the following.
First, in the EOS (19), we need to obtain $\Lambda$ satisfying the relation (98), which is difficult to solve directly. To obtain $\Lambda$, we introduce an auxiliary variable $\Gamma \equiv -\frac{1}{\Lambda}$, by which (98) can be rewritten as (99) and solved iteratively. Accordingly, the $\chi, Q, K, H$ in the EOS (19) can be equivalently written in terms of $\Gamma$.
Second, when solving the EOS (19) iteratively using numerical methods, it is helpful for convergence to introduce a damping factor $\mathrm{damp}\in\left[0,1\right)$ for $\chi, Q, E, R, F, \eta, K, H, \Gamma$ in each iteration.
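The damping scheme simply mixes each new iterate with the previous one; a generic one-variable sketch (our own helper, not the paper's Algorithm 2, which damps all the EOS variables jointly):

```python
import math

def damped_fixed_point(f, x0, damp=0.5, tol=1e-10, max_iter=1000):
    """Iterate x <- (1 - damp) * f(x) + damp * x until successive iterates agree.
    damp in [0, 1): larger values slow the update but suppress oscillations."""
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - damp) * f(x) + damp * x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: x = cos(x) has a unique fixed point near 0.739 (the Dottie number).
root = damped_fixed_point(math.cos, 0.0, damp=0.5)
print(root)
```

In the EOS, the same update is applied component-wise to the vector of macroscopic parameters, with a single shared $\mathrm{damp}$.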
The detailed implementation of Algorithm 1 is shown in Algorithm 2. Similarly to the case of the quadratic loss, as the mean estimates $\bar{J}_j, j\in\Psi$ are modified as in (100), the corresponding solutions to the EOS in (67) also need to be modified, which can be solved iteratively as shown in Algorithm 3.

F Eigenvalue Distribution ρ (γ)
From the replica analysis presented above, the learning performance depends on the eigenvalue distribution (EVD) $\rho\left(\gamma\right)$ of the covariance matrix $C$ of the original Ising model.
There are two issues to be noted. One is the formula connecting the performance of the estimator with the spectral density, and the other is the numeric values of the quantities computed from this formula. For the first point, no assumption about the spectral density is needed to obtain the formula itself, and the formula is valid whenever the graph structure is tree-like and the Ising model defined on the graph is in the paramagnetic phase. For the second point, the specific form of the spectral density is needed to obtain numeric solutions in general. As a demonstration, we assume a random regular graph with constant coupling strength, for which the spectral density can be obtained analytically, as already known from [7].
In general, it is difficult to obtain this EVD; however, for sparse tree-like graphs such as the RR graph $G\in\mathcal{G}_{N,d,K_0}$ with constant node degree $d$ and sufficiently small coupling strength $K_0$ that yields the paramagnetic state ($\mathbb{E}_s\left(s\right) = 0$), it can be computed analytically. For this, we express the covariances as $C_{ij} = \tanh^{d\left(i,j\right)}\left(K_0\right)$, where $d\left(i,j\right)$ denotes the graph distance between spins $s_i$ and $s_j$.

Fig. 7 and Fig. 8 show the full results of the non-asymptotic learning performance prediction for $\lambda = 0.1$ and $\lambda = 0.3$, respectively. Good agreement between the replica results and the experimental results is achieved in all cases. As can be seen, there is a negligible difference in $Precision$ and $Recall$ between $\ell_1$-LinR and $\ell_1$-LogR. Meanwhile, compared to Fig. 7 with $\lambda = 0.1$, the difference in RSS between $\ell_1$-LinR and $\ell_1$-LogR is reduced when $\lambda = 0.3$. In addition, by comparing Fig. 7 and Fig. 8, it can be seen that, under the same setting, the $Precision$ becomes larger while the $Recall$ becomes smaller as $\lambda$ increases, implying a tradeoff in choosing $\lambda$ in practice for Ising model selection with finite $M, N$.

[Figure captions, partially recovered] The critical values are $c_0\left(\lambda, K_0\right) \approx 137$ and $c_0\left(\lambda, K_0\right) \approx 19.4$ in the two settings shown. When $c > c_0\left(\lambda, K_0\right)$, the $Precision$ increases consistently with $N$ and approaches 1 as $N\to\infty$, while it decreases consistently with $N$ when $c < c_0\left(\lambda, K_0\right)$. The $Recall$ increases consistently and approaches 1 as $N\to\infty$.
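As a rough numerical illustration of such an EVD, one can build the matrix $C_{ij} = \tanh^{d\left(i,j\right)}\left(K_0\right)$ on a ring (the $d = 2$ regular graph) and diagonalize it. The ring and all parameter values are our toy choices, and on a ring the relation is only approximate because of the single loop; NumPy is assumed available:

```python
import numpy as np

def tree_like_covariance(N, K0):
    """Toy covariance with C_ij = tanh(K0)**d(i,j) on a ring of N spins,
    using the ring distance d(i,j) = min(|i-j|, N-|i-j|). This approximates the
    tree-like relation E[s_i s_j] = tanh^d(K0) from the text; the matrix is
    circulant, symmetric, and positive definite for small K0."""
    t = np.tanh(K0)
    idx = np.arange(N)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, N - d)
    return t ** d

C = tree_like_covariance(200, 0.2)
gammas = np.linalg.eigvalsh(C)  # empirical EVD rho(gamma); mean eigenvalue = trace/N = 1
print(gammas.min(), gammas.max())
```

For a genuinely tree-like RR graph with degree $d > 2$, the analytic form of $\rho\left(\gamma\right)$ from [7] replaces this numerical toy.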