
Ising model selection using ℓ1-regularized linear regression: a statistical mechanics analysis*


Published 24 November 2022 © 2022 The Author(s). Published on behalf of SISSA Medialab srl by IOP Publishing Ltd
Machine Learning 2022. Citation: Xiangming Meng et al J. Stat. Mech. (2022) 114006. DOI 10.1088/1742-5468/ac9831


Abstract

We theoretically analyze the typical learning performance of ℓ1-regularized linear regression (ℓ1-LinR) for Ising model selection using the replica method from statistical mechanics. For typical random regular graphs in the paramagnetic phase, an accurate estimate of the typical sample complexity of ℓ1-LinR is obtained. Remarkably, despite the model misspecification, ℓ1-LinR is model selection consistent with the same order of sample complexity as ℓ1-regularized logistic regression (ℓ1-LogR), i.e. $M=\mathcal{O}\left(\mathrm{log}\,N\right)$, where N is the number of variables of the Ising model. Moreover, we provide an efficient method to accurately predict the non-asymptotic behavior of ℓ1-LinR for moderate M, N, such as precision and recall. Simulations show a fairly good agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Although this paper mainly focuses on ℓ1-LinR, our method is readily applicable to precisely characterizing the typical learning performances of a wide class of ℓ1-regularized M-estimators including ℓ1-LogR and interaction screening.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

The advent of massive data across various scientific disciplines has led to the widespread use of undirected graphical models, also known as Markov random fields (MRFs), as a tool for discovering and visualizing dependencies among covariates in multivariate data [1]. The Ising model, originally proposed in statistical physics, is one special class of binary MRFs with pairwise potentials and has been widely used in different domains such as image analysis, social networking, and gene network analysis [2–7]. Among various applications, one fundamental problem of interest is called Ising model selection, which refers to recovering the underlying graph structure of the original Ising model from independent, identically distributed (i.i.d.) samples. A variety of methods have been proposed [8–18], demonstrating the possibility of successful Ising model selection even when the number of samples is smaller than that of the variables. Notably, it has been demonstrated that for the ℓ1-regularized logistic regression (ℓ1-LogR) [10, 16] and interaction screening (IS) [14, 15] estimators, $M=\mathcal{O}\left(\mathrm{log}\,N\right)$ samples suffice for an Ising model with N spins under certain assumptions, which is consistent with respect to (w.r.t.) the previously established information-theoretic lower bound [11]. Both ℓ1-LogR and IS are ℓ1-regularized M-estimators [19] with logistic and IS objective (ISO) loss functions, respectively.

In this paper, we focus on a simpler linear estimator called ℓ1-regularized linear regression (ℓ1-LinR) and theoretically investigate its typical learning performance using the powerful replica method [20–23] from statistical mechanics. The ℓ1-LinR estimator, more widely known as the least absolute shrinkage and selection operator (LASSO) [24] in statistics and machine learning, is considered here mainly for two reasons. On the one hand, it is one representative example of model misspecification, since the quadratic loss of ℓ1-LinR neither matches the true log-conditional-likelihood as ℓ1-LogR does, nor has the interaction screening property of IS. On the other hand, as one of the most popular linear estimators, ℓ1-LinR is more computationally efficient than ℓ1-LogR and IS, and thus it is of practical importance to investigate its learning performance for Ising model selection. Since it is difficult to obtain results for general graphs, as a first step we consider the random regular (RR) graphs ${\mathcal{G}}_{N,d,{K}_{0}}$ in the paramagnetic phase [23], where ${\mathcal{G}}_{N,d,{K}_{0}}$ denotes the ensemble of RR graphs with constant node degree d and uniform coupling strength K0 on the edges.

1.1. Contributions

The main contributions are summarized as follows. First, we obtain an accurate estimate of the typical sample complexity of ℓ1-LinR for Ising model selection for typical RR graphs in the paramagnetic phase, which, remarkably, has the same order as that of ℓ1-LogR. Specifically, for a typical RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$, using ℓ1-LinR with a regularization parameter $0< \lambda < \mathrm{tanh}\left({K}_{0}\right)$, one can consistently reconstruct the structure with $M > \frac{c\left(\lambda ,{K}_{0}\right)\mathrm{log}\,N}{{\lambda }^{2}}$ samples, where $c\left(\lambda ,{K}_{0}\right)=\frac{2\left(1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)+d{\lambda }^{2}\right)}{1+\left(d-1\right){\mathrm{tanh}}^{2}\left({K}_{0}\right)}$. The accuracy of our typical sample complexity prediction is verified by its excellent agreement with experimental results. To the best of our knowledge, this is the first result that provides an accurate typical sample complexity for Ising model selection. Interestingly, as $\lambda \to \mathrm{tanh}\left({K}_{0}\right)$, a lower bound $M > \frac{2\,\mathrm{log}\,N}{{\mathrm{tanh}}^{2}\left({K}_{0}\right)}$ of the typical sample complexity is obtained, which has the same scaling as the information-theoretic lower bound $M > \frac{{c}^{\prime }\,\mathrm{log}\,N}{{K}_{0}^{2}}$ [11] for some constant c' at high temperatures (i.e. small K0), since $\mathrm{tanh}\left({K}_{0}\right)=\mathcal{O}\left({K}_{0}\right)$ as K0 → 0.
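As a quick numerical check of the formula above, the following minimal sketch evaluates $c(\lambda, K_0)$ and the resulting sample-size thresholds; the particular values (d = 3, K0 = 0.4, λ = 0.3, N = 1000) are illustrative choices rather than anything prescribed by the theory.

```python
import numpy as np

def c_coeff(lam: float, K0: float, d: int) -> float:
    """c(lambda, K0) = 2*(1 - tanh^2(K0) + d*lambda^2) / (1 + (d-1)*tanh^2(K0))."""
    t2 = np.tanh(K0) ** 2
    return 2.0 * (1.0 - t2 + d * lam ** 2) / (1.0 + (d - 1) * t2)

d, K0, lam, N = 3, 0.4, 0.3, 1000
c0 = c_coeff(lam, K0, d) / lam ** 2              # critical scaling value c0(lambda, K0)
M_typical = c0 * np.log(N)                       # predicted typical sample complexity
M_lower = 2.0 * np.log(N) / np.tanh(K0) ** 2     # lower bound as lambda -> tanh(K0)
print(f"c0 = {c0:.2f}, M > {M_typical:.0f} (N = {N}), lower bound M > {M_lower:.0f}")
```

With these values the critical scaling constant evaluates to roughly 19.4, consistent with the figure reported in section 5.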

Second, we provide a computationally efficient method to precisely predict the typical learning performance of ℓ1-LinR in the non-asymptotic case with moderate M, N, such as precision, recall, and residual sum of squares (RSS). Such precise non-asymptotic predictions for Ising model selection have been unavailable even for ℓ1-LogR [10, 16] and IS [14, 15], nor do they follow from previous asymptotic results for ℓ1-LinR assuming a fixed ratio $\alpha \equiv M/N$ [25–28]. Moreover, although our theoretical analysis is based on a tree-like structure assumption, experimental results on two-dimensional (2D) grid graphs also show a fairly good agreement, indicating that our theoretical result can be a good approximation even for graphs with many loops.

Third, while this paper mainly focuses on ℓ1-LinR, our method is readily applicable to a wide class of ℓ1-regularized M-estimators [19], including ℓ1-LogR [10] and IS [14, 15]. Thus, an additional technical contribution is a generic approach for precisely characterizing the typical learning performances of various ℓ1-regularized M-estimators for Ising model selection. Although the replica method from statistical mechanics is non-rigorous, our results are conjectured to be correct, which is supported by their excellent agreement with the experimental results. Additionally, several technical advances proposed in this paper, e.g. the computation of the entropy term by averaging over the Haar measure and the modification of the equations of state (EOS) to address the finite-size effect, might be of general interest to those who use the replica method as a tool for performance analysis.

1.2. Related works

There have been some earlier works on the analysis of Ising model selection (also known as the inverse Ising problem) using the replica method [4–7, 29] from statistical mechanics. For example, in [6], the performance of the pseudo-likelihood (PL) method [30] is studied. However, instead of graph structure learning, [6] focuses on the problem of parameter learning. Then, [7] extends the analysis to the Ising model with sparse couplings using logistic regression without regularization. The recent work [29] analyzes the performance of ℓ2-regularized linear regression, but the techniques invented there are not applicable to ℓ1-LinR since the ℓ1-norm breaks the rotational invariance property.

Regarding the study of ℓ1-LinR (LASSO) under model misspecification, the past few years have seen a line of research in the field of signal processing with a specific focus on the single-index model [27, 31–34]. These studies are closely related to ours, but there are several important differences. First, in our study, the covariates are generated from an Ising model rather than a Gaussian distribution. Second, we focus on the model selection consistency of ℓ1-LinR, while most previous studies consider estimation consistency, with the exception of [33]. However, [33] only considers the classical asymptotic regime, while our analysis includes the high-dimensional setting where $M\ll N$.

As far as we have searched, there is no earlier study of the ℓ1-LinR estimator for Ising model selection, though some are found for Gaussian graphical models [35, 36]. One closely related work [15] states that at high temperatures, when the coupling magnitude approaches zero, both the logistic and ISO losses can be approximated by a quadratic loss. However, their claim is restricted to coupling magnitudes very close to zero, while our analysis extends the validity range to the whole paramagnetic phase. Moreover, they evaluate the minimum number of samples necessary for consistently reconstructing 'arbitrary' Ising models, which, however, seems much larger than what is actually needed. By contrast, we provide the first accurate assessment of the typical sample complexity for consistently reconstructing typical instances of Ising models defined on RR graphs. Furthermore, [15] does not provide precise predictions of the non-asymptotic learning performance as we do.

2. Background and problem setup

2.1. Ising model

The Ising model is a classical model from statistical physics: a special class of MRFs with pairwise potentials in which each variable takes binary values [22, 23]. The joint probability distribution of an Ising model with N variables (spins) $\boldsymbol{s}={\left({s}_{i}\right)}_{i=0}^{N-1}\in {\left\{-1,+1\right\}}^{N}$ has the form

${P}_{\text{Ising}}\left(\boldsymbol{s}\vert {\boldsymbol{J}}^{\,\ast }\right)=\frac{1}{{Z}_{\text{Ising}}\left({\boldsymbol{J}}^{\,\ast }\right)}\,\mathrm{exp}\left\{{\sum }_{i< j}{J}_{ij}^{\ast }{s}_{i}{s}_{j}\right\}\qquad \left(1\right)$

where ${Z}_{\text{Ising}}\left({\boldsymbol{J}}^{\ast }\right)={\sum }_{\boldsymbol{s}}\mathrm{exp}\left\{{\sum }_{i< j}{J}_{ij}^{\ast }{s}_{i}{s}_{j}\right\}$ is the partition function and ${\boldsymbol{J}}^{\ast }={\left({J}_{ij}^{\ast }\right)}_{i,j}$ are the original couplings. In general, there are also external fields, but here they are assumed to be zero for simplicity. The structure of the Ising model can be described by an undirected graph $G=\left(\mathtt{V},\mathtt{E}\right)$, where $\mathtt{V}=\left\{0,1,\dots ,N-1\right\}$ is a collection of vertices at which the spins are assigned, and $\mathtt{E}=\left\{\left(i,j\right)\vert {J}_{ij}^{\ast }\ne 0\right\}$ is a collection of undirected edges, i.e. ${J}_{ij}^{\ast }=0$ for all pairs $\left(i,j\right)\notin \mathtt{E}$. For each vertex $i\in \mathtt{V}$, its neighborhood is defined as the subset $\mathcal{N}\left(i\right)\equiv \left\{j\in \mathtt{V}\vert \left(i,j\right)\in \mathtt{E}\right\}$.
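To make the definition concrete, the minimal sketch below evaluates $P_{\text{Ising}}(\boldsymbol{s}\vert \boldsymbol{J}^{\ast})$ by exhaustive enumeration; this is only feasible for very small N, and the 4-spin chain used here is an arbitrary toy example, not one of the RR graphs analyzed in the paper.

```python
import itertools
import numpy as np

def ising_distribution(J):
    """All 2^N configurations and P_Ising(s | J) for zero external fields;
    J is assumed symmetric with zero diagonal."""
    N = J.shape[0]
    configs = np.array(list(itertools.product([-1, 1], repeat=N)))
    # 0.5 * s^T J s = sum_{i<j} J_ij s_i s_j for symmetric J with zero diagonal
    weights = np.exp(0.5 * np.einsum('ki,ij,kj->k', configs, J, configs))
    return configs, weights / weights.sum()      # division by the partition function Z_Ising

J = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:            # a 4-spin chain with uniform coupling 0.4
    J[i, j] = J[j, i] = 0.4
configs, probs = ising_distribution(J)
print(probs.sum())                               # -> 1.0
```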

2.2. Neighborhood-based ℓ1-regularized linear regression (ℓ1-LinR)

The problem of Ising model selection refers to recovering the graph G (edge set $\mathtt{E}$), given M i.i.d. samples ${\mathcal{D}}^{M}=\left\{{\boldsymbol{s}}^{\left(1\right)},\dots ,{\boldsymbol{s}}^{\left(M\right)}\right\}$ from the Ising model. While the maximum likelihood method has the nice properties of consistency and asymptotic efficiency, it suffers from high computational complexity. To tackle this difficulty, several local learning algorithms have been proposed, notably the ℓ1-LogR estimator [10] and the IS estimator [14]. Both ℓ1-LogR and IS optimize a regularized local cost function $\ell \left(\cdot \right)$ for each spin, i.e. $\forall \enspace i\in \mathtt{V}$,

Equation (2)

where ${h}_{{\backslash}i}^{\left(\mu \right)}\equiv {\sum }_{j\ne i}{J}_{ij}{s}_{j}^{\left(\mu \right)}$, ${\boldsymbol{J}}_{{\backslash}i}\equiv {\left({J}_{ij}\right)}_{j(\ne i)}$, and ${{\Vert}\cdot {\Vert}}_{1}$ denotes the ℓ1-norm. Specifically, $\ell \left(x\right)=\mathrm{log}\left(1+{e}^{-2x}\right)$ for ℓ1-LogR and $\ell \left(x\right)={e}^{-x}$ for IS, which correspond to the negative log-conditional-likelihood [10] and the ISO [14], respectively. Consequently, the problem of recovering the edge set $\mathtt{E}$ is equivalently reduced to local neighborhood selection, i.e. recovering the neighborhood set $\mathcal{N}\left(i\right)$ for each vertex $i\in \mathtt{V}$. In particular, given the estimates ${\hat{\boldsymbol{J}}}_{{\backslash}i}$ in (2), the neighborhood set of vertex i can be estimated via the nonzero coefficients, i.e.

$\hat{\mathcal{N}}\left(i\right)=\left\{j\vert {\hat{J}}_{ij}\ne 0\right\}\qquad \left(3\right)$

In this paper, we focus on one simple linear estimator, termed the ℓ1-LinR estimator, i.e. $\forall \enspace i\in \mathtt{V}$,

Equation (4)

which, recalling that ${s}_{i}^{\left(\mu \right)}\in \left\{-1,+1\right\}$, corresponds to a quadratic loss $\ell \left(x\right)=\frac{1}{2}{\left(x-1\right)}^{2}$ in (2). The neighborhood set of each vertex $i\in \mathtt{V}$ is estimated in the same way as in (3). Interestingly, the quadratic loss used in (4) implies that the postulated conditional distribution is Gaussian and thus inconsistent with the true one, which is a typical case of model misspecification. Furthermore, compared with the nonlinear estimators ℓ1-LogR and IS, the ℓ1-LinR estimator is more efficient to implement.
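A minimal sketch of this neighborhood estimation using scikit-learn's Lasso as a generic solver: Lasso minimizes $\frac{1}{2M}\Vert y-Xw\Vert_2^2+\alpha \Vert w\Vert_1$, so its α plays the role of λ only up to the normalization convention of the penalty in (2), which should be checked against the definition used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def neighborhood_l1_linr(samples: np.ndarray, i: int, lam: float, tol: float = 1e-8):
    """samples: (M, N) array of +/-1 spins; returns the estimated neighbor indices of spin i."""
    y = samples[:, i]
    X = np.delete(samples, i, axis=1)                     # regress spin i on all other spins
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
    others = np.delete(np.arange(samples.shape[1]), i)
    return others[np.abs(fit.coef_) > tol]                # nonzero coefficients = estimated edges
```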

3. Statistical mechanics analysis

In this section, a statistical mechanics analysis of the ℓ1-LinR estimator is presented for typical RR graphs in the paramagnetic phase. Our analysis applies to any M-estimator of the form (2); please refer to appendix A for a unified treatment, including detailed results for the ℓ1-LogR estimator.

To characterize the structure learning performance, the precision and recall are considered:

$\text{Precision}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}},\qquad \text{Recall}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}\qquad \left(5\right)$

where TP, FP, FN denote the numbers of true positives, false positives, and false negatives in the estimated couplings, respectively. The concept of model selection consistency of an estimator is given in definition 1; it is also known as the sparsistency property [10].

Definition 1. An estimator is called model selection consistent if both the associated precision and recall satisfy Precision → 1 and Recall → 1 as $M\to \infty$.

Additionally, if one is further interested in the specific values of the estimated couplings, our analysis can also yield the residual sum of squares (RSS) of the estimated couplings. Our theoretical analysis of the learning performance builds on the statistical mechanics framework. Contrary to the probably approximately correct (PAC) learning theory [37] in mathematical statistics, statistical mechanics aims to describe the typical (as defined in definition 2) behavior exactly, rather than to bound the worst case, which is likely to be over-pessimistic [38].

Definition 2. 'Typical' means not just most probable, but in addition that the probability of situations different from the typical one can be made arbitrarily small as $N\to \infty$ [38].

Similarly, when referring to typical RR graphs, we mean tree-like RR graphs, i.e. graphs that, seen from a random node, look like part of an infinite tree; these are typical realizations from the uniform probability distribution on the ensemble of RR graphs.
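For concreteness, a typical member of ${\mathcal{G}}_{N,d,{K}_{0}}$ can be drawn with networkx, as in the sketch below; the paramagnetic check mirrors the assumption used throughout the analysis, and the particular values of N, d, K0 are arbitrary.

```python
import networkx as nx
import numpy as np

N, d, K0 = 200, 3, 0.4
G = nx.random_regular_graph(d, N)        # (near-)uniform sample of a d-regular graph on N nodes
J_true = K0 * nx.to_numpy_array(G)       # uniform coupling strength K0 on every edge
# (for randomly signed couplings, multiply each edge weight by an independent +/-1)
assert (d - 1) * np.tanh(K0) ** 2 < 1, "couplings too strong: outside the paramagnetic phase"
```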

3.1. Problem formulation

For simplicity and without loss of generality, we focus on spin s0. With a slight abuse of notation, we will drop certain subscripts in the following descriptions, e.g. ${\boldsymbol{J}}_{{\backslash}i}$ will be denoted as $\boldsymbol{J}$ hereafter, which represents a vector rather than a matrix. The basic idea of the statistical mechanical approach is to introduce the following Hamiltonian and Boltzmann distribution induced by the loss function $\ell \left(\cdot \right)$

Equation (6)

Equation (7)

where $Z=\int d\boldsymbol{J}\,{e}^{-\beta \mathcal{H}\left(\boldsymbol{J}\vert {\mathcal{D}}^{M}\right)}$ is the partition function, and $\beta \left( > 0\right)$ is the inverse temperature. In the zero-temperature limit β → +∞, the Boltzmann distribution (7) converges to a point-wise measure on the estimator (2). The macroscopic properties of (7) can be analyzed by assessing the free energy density $f({\mathcal{D}}^{M})=-\frac{1}{N\beta }\,\mathrm{log}\,Z$, from which, once obtained, we can evaluate averages of various quantities simply by taking its derivatives w.r.t. external fields [21]. In the current case, $f({\mathcal{D}}^{M})$ depends on the predetermined randomness of ${\mathcal{D}}^{M}$, which plays the role of quenched disorder. As N, M → ∞, $f({\mathcal{D}}^{M})$ is expected to show the self-averaging property [21]: for typical datasets ${\mathcal{D}}^{M}$, $f({\mathcal{D}}^{M})$ converges to its average over the random data ${\mathcal{D}}^{M}$:

$f\equiv -\underset{N\to \infty }{\mathrm{lim}}\,\frac{1}{N\beta }{\left[\mathrm{log}\,Z\right]}_{{\mathcal{D}}^{M}}\qquad \left(8\right)$

where ${\left[\cdot \right]}_{{\mathcal{D}}^{M}}$ denotes expectation over ${\mathcal{D}}^{M}$, i.e. ${\left[\cdot \right]}_{{\mathcal{D}}^{M}}={\sum }_{{\boldsymbol{s}}^{\left(1\right)},\dots ,{\boldsymbol{s}}^{\left(M\right)}}\left(\cdot \right){\prod }_{\mu =1}^{M}{P}_{\text{Ising}}\left({\boldsymbol{s}}^{\left(\mu \right)}\vert {\boldsymbol{J}}^{\,\ast }\right)$. Consequently, one can analyze the typical performance of any ℓ1-regularized M-estimator of the form (2) via the assessment of (8), with ℓ1-LinR in (4) being a special case with $\ell \left(x\right)=\frac{1}{2}{\left(x-1\right)}^{2}$.

3.2. Replica computation of the free energy density

Unfortunately, computing (8) rigorously is difficult. To practically overcome this difficulty, we resort to the powerful replica method [20–23] from statistical mechanics, which is symbolized by the following identity

${\left[\mathrm{log}\,Z\right]}_{{\mathcal{D}}^{M}}=\underset{n\to 0}{\mathrm{lim}}\,\frac{\partial \,\mathrm{log}{\left[{Z}^{n}\right]}_{{\mathcal{D}}^{M}}}{\partial n}\qquad \left(9\right)$

The basic idea is as follows. One replaces the average of log Z by that of the nth power Z^n, which is analytically tractable for $n\in \mathbb{N}$ in the large-N limit, constructs an expression that can be analytically continued from $\mathbb{N}$ to $\mathbb{R}$, and then takes the limit n → 0 using that expression. Although the replica method is not rigorous, it has been empirically verified through extensive studies of disordered systems [22, 23] and has also proved useful in the study of high-dimensional models in machine learning [39, 40]. For more details of the replica method, please refer to [20–23].

Specifically, with the Hamiltonian $\mathcal{H}\left(\boldsymbol{J}\vert {\mathcal{D}}^{M}\right)$, assuming $n\in \mathbb{N}$ is a positive integer, the replicated partition function ${\left[{Z}^{n}\right]}_{{\mathcal{D}}^{M}}$ in (9) can be written as

Equation (10)

where ${h}^{a}={\sum }_{j}{J}_{j}^{a}{s}_{j}$ will be termed the local field hereafter, and a (and b in the following) are replica indices. The analysis below essentially depends on the distribution of ${h}^{a}$, which is nontrivial. To resolve this, we take a similar approach to [7, 29] and introduce the following ansatz.

Ansatz 1 (A1): Denote ${\Psi}=\left\{j\vert j\in \mathcal{N}\left(0\right)\right\}$ and $\bar{{\Psi}}=\left\{j\vert j=1,\dots ,N-1,j\notin \mathcal{N}\left(0\right)\right\}$ as the active and inactive sets of spin s0, respectively; then for a typical RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ in the paramagnetic phase, i.e. $\left(d-1\right){\mathrm{tanh}}^{2}\left({K}_{0}\right)< 1$, the ℓ1-LinR estimator in (4) is a random vector determined by the random realization of ${\mathcal{D}}^{M}$ and obeys the following form

Equation (11)

where ${\bar{J}}_{j}$ is the mean value of the estimator and wj is a random variable which is asymptotically zero mean with variance scaled as $\mathcal{O}\left(1\right)$.

The consistency of ansatz 1 is checked in appendix B. Under ansatz 1, the local fields ${h}^{a}$ can be decomposed as ${h}^{a}={\sum }_{j\in {\Psi}}{\bar{J}}_{j}{s}_{j}+{h}_{w}^{a}$, where ${h}_{w}^{a}\equiv \frac{1}{\sqrt{N}}{\sum }_{j\in \bar{{\Psi}}}{w}_{j}^{a}{s}_{j}$ is the 'noise' part. According to the central limit theorem (CLT), ${h}_{w}^{a}$ can be approximated as multivariate Gaussian variables, which, under the replica symmetric (RS) ansatz [21], can be fully described by two order parameters

Equation (12)

where ${C}^{{\backslash}0}\equiv \left\{{C}_{ij}^{{\backslash}0}\right\}$ is the covariance matrix of the original Ising model without the spin s0. Since the difference between C\0 and the covariance matrix including s0 is not essential in the limit N → ∞, the superscript \0 will be discarded hereafter.

As shown in appendix A, for the quadratic loss $\ell \left(x\right)=\frac{1}{2}{(1-x)}^{2}$ of ℓ1-LinR, the average free energy density (9) in the limit β → ∞ can be computed as

Equation (13)

where $\mathrm{Extr}\left\{\cdot \right\}$ denotes the extremum operation w.r.t. the relevant variables and ξ, S denote the energy and entropy terms:

Equation (14)

Equation (15)

Equation (16)

where $\alpha \equiv M/N$, $\chi \equiv {\mathrm{lim}}_{\beta \to \infty }\,\beta \left(Q-q\right)$, and ${\mathbb{E}}_{s,z}(\cdot )$ denotes the expectation operation w.r.t. $z\sim \mathcal{N}(0,1)$ and $({s}_{0},{\boldsymbol{s}}_{{\Psi}})\sim {P}_{\text{Ising}}({s}_{0},{\boldsymbol{s}}_{{\Psi}}\vert {\boldsymbol{J}}^{\,\ast })\propto {e}^{{s}_{0}{\sum }_{j\in {\Psi}}{J}_{j}^{\ast }{s}_{j}}$ [7]. For different losses $\ell \left(\cdot \right)$, the free energy results (13) differ only in the energy term ξ, which in general is non-analytic (e.g. the logistic loss for ℓ1-LogR) but can be evaluated numerically. Please refer to appendix A.3 for more details.

In contrast to the case of the ℓ2-norm in [29], the ℓ1-norm in (15) breaks the rotational invariance property, i.e. ${{\Vert}{w}^{a}{\Vert}}_{1}\ne {{\Vert}O{w}^{a}{\Vert}}_{1}$ for a general orthogonal matrix O, making it difficult to compute the entropy term S. To circumvent this difficulty, we exploit the observation that, when considering the RR graph ensemble ${\mathcal{G}}_{N,d,{K}_{0}}$ as the coupling network of the Ising model, the orthogonal matrix O diagonalizing the covariance matrix C appears to be distributed according to the Haar orthogonal measure [41, 42]. Thus, it is assumed that I in (15) can be replaced by its average ${\left[I\right]}_{O}$ over the Haar-distributed O:

Ansatz 2 (A2): Denote $C\equiv {\mathbb{E}}_{\boldsymbol{s}}[\boldsymbol{s}{\boldsymbol{s}}^{T}]$, where ${\mathbb{E}}_{\boldsymbol{s}}[\cdot ]={\sum }_{\boldsymbol{s}}{P}_{\text{Ising}}(\boldsymbol{s}\vert {\boldsymbol{J}}^{\,\ast })(\cdot )$, as the covariance matrix of the spin configurations s. Suppose that the eigendecomposition of C is C = OΛOT , where O is the orthogonal matrix; then O can be seen as a random sample generated from the Haar orthogonal measure and thus, for typical graph realizations from ${\mathcal{G}}_{N,d,{K}_{0}}$, I in (15) is equal to the average [I]O .

The consistency of ansatz (A2) is numerically checked in appendix C. Under ansatz (A2), the entropy term S in (14) can alternatively be computed as $\underset{n\to 0}{\mathrm{lim}}\,\frac{1}{N\beta }\frac{\partial }{\partial n}\,\mathrm{log}{\left[I\right]}_{O}$, as shown in appendix A. Finally, under the RS ansatz, the average free energy density (9) in the limit β → ∞ reads

Equation (17)

where $z\sim \mathcal{N}\left(0,1\right)$, and $G\left(x\right)$ is a function defined as

Equation (18)

and $\rho \left(\gamma \right)$ is the eigenvalue distribution (EVD) of the covariance matrix C, while $\mathtt{\Theta }$ is a collection of macroscopic parameters $\mathtt{\Theta }=\left\{\chi ,Q,E,R,F,\eta ,K,H,{\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}\right\}$. For details of these macroscopic parameters and $\rho \left(\gamma \right)$, please refer to appendices A and F, respectively. Note that in (17), apart from the ratio $\alpha \equiv M/N$, N and M also appear through $\lambda M/\sqrt{N}$ in the free energy result, which is different from previous results [7, 29, 39]. The reason is that, thanks to the ℓ1-regularization term $\lambda M{{\Vert}\boldsymbol{J}{\Vert}}_{1}$, the mean estimates ${\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}$ in the active set Ψ and the noise w in the inactive set $\bar{{\Psi}}$ give essentially different scaling contributions to the free energy density.

Although there are no analytic solutions, the macroscopic parameters in (17) can be obtained by numerically solving the corresponding equations of state (EOS), to use the physics terminology. Specifically, for the ℓ1-LinR estimator, the EOS can be obtained from the extremization condition in (17) as follows (for the EOS of a general M-estimator and of ℓ1-LogR, please refer to appendix A.3):

Equation (19)

where $\tilde{\Lambda }$, satisfying $E\eta =-\int \frac{\rho \left(\gamma \right)}{\tilde{\Lambda }-\gamma }d\gamma $, is determined by the extremization condition in (18), and $\mathrm{soft}\left(z,\tau \right)=\mathrm{sign}\left(z\right){\left(\left\vert z\right\vert -\tau \right)}_{+}$ is the soft-thresholding function. Once the EOS is solved, the free energy density defined in (8) is readily obtained.
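The soft-thresholding function is elementary but central to the decoupled description that follows; a direct NumPy transcription of the definition above:

```python
import numpy as np

def soft(z, tau):
    """soft(z, tau) = sign(z) * max(|z| - tau, 0), applied elementwise."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)
```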

3.3. High-dimensional asymptotic result

One important result of our replica analysis is that, as derived from the free energy result (17) (see appendix A.3), the original high-dimensional ℓ1-LinR estimator (4) is decoupled into a pair of scalar estimators, one for the active set and one for the inactive set, i.e.

Equation (20)

Equation (21)

where ${z}_{j}\sim \mathcal{N}\left(0,1\right),j\in \bar{{\Psi}}$ are i.i.d. standard Gaussian random variables. The decoupling property asserts that, once the EOS (19) is solved, the asymptotic behavior of ℓ1-LinR can be statistically described by a pair of simple scalar soft-thresholding estimators (see figures 1(a) and (b)).


Figure 1. Equivalent low-dimensional estimators for the high-dimensional ℓ1-LinR estimator obtained from the statistical mechanics analysis. (a), (b) Diagrams of the pair of scalar estimators in equations (20) and (21). (c) A schematic description of the modified estimator in equation (25), which takes into account the finite-size effect.


In the high-dimensional setting where N is allowed to grow as a function of M, one important question is the minimum number of samples M required to achieve model selection consistency as N → ∞. Though we obtain the pair of scalar estimators (20) and (21), there are no analytic solutions to the EOS (19), making it difficult to derive an explicit condition. To overcome this difficulty, as shown in appendix D, we perform a perturbation analysis of the EOS and obtain an asymptotic relation between H and F. Then, we obtain that for an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$, given M i.i.d. samples ${\mathcal{D}}^{M}$, the ℓ1-LinR estimator (4) can consistently recover the graph structure G as N → ∞ if

$M > \frac{c\left(\lambda ,{K}_{0}\right)\mathrm{log}\,N}{{\lambda }^{2}}\qquad \left(22\right)$

where c(λ, K0) is a constant depending on the regularization parameter λ and the coupling strength K0; a sharp prediction (as verified in section 5) is obtained as

$c\left(\lambda ,{K}_{0}\right)=\frac{2\left(1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)+d{\lambda }^{2}\right)}{1+\left(d-1\right){\mathrm{tanh}}^{2}\left({K}_{0}\right)}\qquad \left(23\right)$

For details of the analysis, including the counterpart for ℓ1-LogR, see appendix D. Consequently, we obtain the typical sample complexity of ℓ1-LinR for Ising model selection for typical RR graphs in the paramagnetic phase.

The result in (22) is derived for ℓ1-LinR with a fixed regularization parameter λ. Since the value of λ is upper bounded by $\mathrm{tanh}\left({K}_{0}\right)$ (otherwise false negatives occur, as discussed in appendix D), a lower bound of the typical sample complexity for ℓ1-LinR is obtained as

$M > \frac{2\,\mathrm{log}\,N}{{\mathrm{tanh}}^{2}\left({K}_{0}\right)}\qquad \left(24\right)$

Interestingly, the scaling in (24) is the same as the information-theoretic worst-case result $M > \frac{c\,\mathrm{log}\,N}{{K}_{0}^{2}}$ obtained in [11] at high temperatures (i.e. small K0) since $\mathrm{tanh}\left({K}_{0}\right)=\mathcal{O}\left({K}_{0}\right)$ as K0 → 0.

3.4. Non-asymptotic result for moderate M, N

In practice, it is desirable to predict the non-asymptotic performance of the ℓ1-LinR estimator for moderate M, N. However, the scalar estimator (20) for the active set (see figure 1(a)) fails to capture the fluctuations around the mean estimates. This is because, in obtaining the energy term ξ (16) of the free energy density (17), the fluctuations around the mean estimates ${\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}$ are averaged out by the expectation ${\mathbb{E}}_{s,z}\left(\cdot \right)$. To address this problem, we replace ${\mathbb{E}}_{s,z}\left(\cdot \right)$ in (17) with a sample average that accounts for the finite-size effect, thus obtaining a modified estimator for the active set as follows

Equation (25)

where ${s}_{0}^{\mu },{s}_{j,j\in {\Psi}}^{\mu }\sim P\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\vert {\boldsymbol{J}}^{\ast }\right),\,{z}^{\mu }\sim \mathcal{N}\left(0,1\right),\mu =1,\dots ,M$. The modified d-dimensional estimator (25) (see figure 1(c) for a schematic) is equivalent to the scalar one (20) (figure 1(a)) as M → ∞, but it enables us to capture the fluctuations of ${\left\{{\hat{J}}_{j}\right\}}_{j\in {\Psi}}$ for moderate M. Note that, due to the replacement of the expectation with a sample average in the free energy density (17), the EOS (19) also needs to be modified; it can be solved iteratively as sketched in algorithm 1. The details are shown in appendix E.1.

Algorithm 1. Method to solve EOS (19) together with (25).
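Since the individual steps of algorithm 1 are not reproduced in this excerpt, the following skeleton only illustrates the kind of damped fixed-point loop it describes; `solve_active_set` and `update_order_parameters` are hypothetical placeholders standing in for the reduced estimator (25) and the modified EOS updates, not functions defined in the paper.

```python
def solve_eos(theta0, solve_active_set, update_order_parameters,
              max_iter=500, tol=1e-8, damping=0.5):
    """Damped fixed-point iteration over the macroscopic parameters (e.g. chi, Q, H, ...).

    solve_active_set(theta)            -> active-set estimates, placeholder for (25)
    update_order_parameters(theta, J)  -> updated parameter dict, placeholder for the EOS
    """
    theta = dict(theta0)
    for _ in range(max_iter):
        J_active = solve_active_set(theta)
        theta_new = update_order_parameters(theta, J_active)
        diff = max(abs(theta_new[k] - theta[k]) for k in theta)
        theta = {k: (1 - damping) * theta_new[k] + damping * theta[k] for k in theta}
        if diff < tol:
            break
    return theta
```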

Consequently, for moderate M, N, the non-asymptotic statistical properties of the ℓ1-LinR estimator can be characterized by the reduced d-dimensional ℓ1-LinR estimator (25) (figure 1(c)) and the scalar estimator (21) (figure 1(b)) using MC simulations. Denote $\left\{{\hat{J}}_{j}^{t}\right\},t=1,\dots ,{T}_{\text{MC}}$ as the estimates in the tth MC simulation, where ${\left\{{\hat{J}}_{j}^{t}\right\}}_{j\in {\Psi}}$ and ${\left\{{\hat{J}}_{j}^{t}\right\}}_{j\in \bar{{\Psi}}}$ are solutions of (25) and (21), respectively, and TMC is the total number of MC simulations. Then, the Precision and Recall are computed as

Equation (26)

where ${{\Vert}\cdot {\Vert}}_{0}$ is the ℓ0-norm, indicating the number of nonzero elements. In addition, the RSS can be computed as $\mathrm{R}\mathrm{S}\mathrm{S}={\sum }_{j}{\left\vert {\hat{J}}_{j}-{J}_{j}^{\ast }\right\vert }^{2}=\frac{1}{{T}_{\text{MC}}}{\sum }_{t=1}^{{T}_{\text{MC}}}{\sum }_{j\in {\Psi}}{\left\vert {\hat{J}}_{j}^{t}-{K}_{0}\right\vert }^{2}+R$.
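A sketch of this MC evaluation: given, for each run, the active-set estimates (from (25)) and the inactive-set estimates (from (21)), the quantities below are computed per run and then averaged. The exact averaging convention of (26) is not reproduced in this excerpt, so the per-run average used here is an assumption.

```python
import numpy as np

def mc_metrics(J_active_runs, J_inactive_runs, K0, tol=1e-8):
    """J_active_runs: (T_MC, d) estimates on Psi; J_inactive_runs: (T_MC, N-1-d) on Psi-bar."""
    tp = np.sum(np.abs(J_active_runs) > tol, axis=1)      # true positives in each run
    fp = np.sum(np.abs(J_inactive_runs) > tol, axis=1)    # false positives in each run
    d = J_active_runs.shape[1]
    precision = np.mean(tp / np.maximum(tp + fp, 1))
    recall = np.mean(tp / d)
    rss_active = np.mean(np.sum((J_active_runs - K0) ** 2, axis=1))  # active-set part of the RSS
    return precision, recall, rss_active
```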

4. Discussions

It might seem surprising that, despite the apparent model misspecification due to the use of the quadratic loss, the ℓ1-LinR estimator can still correctly infer the structure with the same order of sample complexity as ℓ1-LogR. Our theoretical analysis also implies that the idea of using linear regression for binary data is not as outrageous as one might imagine. Here we provide an intuitive explanation of its success, together with a discussion of its limitations.

On average, from (4), the condition satisfied by the ℓ1-LinR estimator is given as

Equation (27)

where ⟨⋅⟩ and ∂|Jk | represent the average w.r.t. the Boltzmann distribution (7) and the sub-gradient of |Jk |, respectively. In the paramagnetic phase, the magnitude of ⟨si sj ⟩ decays exponentially w.r.t. the distance between sites i and j. This guarantees that once the connections Jk of sites in the first nearest-neighbor set Ψ are chosen so that

Equation (28)

holds, the other conditions are automatically satisfied by setting all the other connections not from Ψ to zero. For an appropriate choice of λ, (28) has solutions with $\mathrm{sign}({J}_{k}^{\ast }){J}_{k} > 0,\forall \enspace k\in {\Psi}$; namely, ∀ k ∈ Ψ, the estimate of Jk has the same sign as the true value ${J}_{k}^{\ast }$. This implies that, on average, the ℓ1-LinR estimator can successfully recover the network structure up to the connection signs if λ is chosen appropriately.

The key to the above argument is that ⟨si sj ⟩ decays exponentially fast w.r.t. the distance between the two sites, which no longer holds beyond the phase transition. Thus, it is conjectured that the ℓ1-LinR estimator will start to fail in the network recovery right at the phase transition point. However, it is worth noting that this is not limited to ℓ1-LinR: ℓ1-LogR exhibits similar behavior unless post-thresholding is used, as reported in [43].

5. Experimental results

In this section, we conduct numerical experiments to verify the accuracy of the theoretical analysis. The experimental procedure is as follows. First, a random graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ is generated and the Ising model is defined on it. Then, the spin snapshots are obtained using the Metropolis–Hastings algorithm [44–46] in the same way as [7], yielding the dataset ${\mathcal{D}}^{M}$. We randomly choose a center spin s0 and infer its neighborhood using the ℓ1-LinR (4) and ℓ1-LogR [10] estimators. To obtain standard error bars, we repeat this sequence of operations 1000 times. The RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with node degree d = 3 and coupling strength K0 = 0.4 is considered, which satisfies the paramagnetic condition $\left(d-1\right){\mathrm{tanh}}^{2}\left({K}_{0}\right)< 1$. The active couplings ${\left\{{J}_{ij}\right\}}_{\left(i,j\right)\in \mathtt{E}}$ take either sign with equal probability.
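A sketch of this experimental pipeline: single-spin-flip Metropolis–Hastings sampling of the Ising model defined by a coupling matrix (for example `J_true` from the earlier RR-graph sketch), followed by neighborhood estimation for a chosen center spin with the `neighborhood_l1_linr` helper defined above. The burn-in length and thinning are illustrative choices, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_samples(J, M, thin=10, burn_in=1000):
    """Draw M spin configurations from P_Ising(s | J) by single-spin-flip Metropolis-Hastings."""
    N = J.shape[0]
    s = rng.choice([-1, 1], size=N)
    samples = []
    for sweep in range(burn_in + M * thin):
        for _ in range(N):                        # one sweep = N attempted single-spin flips
            i = rng.integers(N)
            dE = 2.0 * s[i] * (J[i] @ s)          # energy change from flipping spin i (J_ii = 0)
            if dE <= 0 or rng.random() < np.exp(-dE):
                s[i] = -s[i]
        if sweep >= burn_in and (sweep - burn_in) % thin == 0:
            samples.append(s.copy())
    return np.array(samples[:M])

# D = metropolis_samples(J_true, M=500)
# neighbors_hat = neighborhood_l1_linr(D, i=0, lam=0.3)
```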

We first verify the precise non-asymptotic predictions of our method described in section 3.4. Figure 2 (top) shows the replica and experimental results for RSS, Precision, and Recall for N = 200 with different values of $\alpha \equiv M/N$. It can be seen that for both ℓ1-LinR and ℓ1-LogR, there is a fairly good agreement between the theoretical predictions and the experimental results, even for the small size N = 200 and small α (equivalently small M), verifying the correctness of the replica analysis. Interestingly, a quantitatively similar behavior of ℓ1-LinR and ℓ1-LogR is observed in terms of precision and recall. Regarding RSS, the two estimators behave differently, which can be clearly seen in figure 7 in appendix G: the RSS is much smaller for ℓ1-LogR, which is natural since the estimates of ℓ1-LogR are closer to the true ones thanks to the model match. As our theoretical analysis is based on the typical tree-like structure assumption, it is interesting to see whether it is applicable to graphs with loops. To this end, we consider the 2D four-nearest-neighbor grid with periodic boundary conditions, which is a common loopy graph. Figure 2 (bottom) shows the results for a 15 × 15 2D grid with uniform constant coupling K0 = 0.2. The agreement between the theoretical and numerical results is fairly good, indicating that our theoretical result can be a good approximation even for loopy graphs. More results for different values of N and λ are shown in figures 7 and 8 in appendix G.


Figure 2. Theoretical and experimental results of RSS, Precision and Recall for the RR graph and the 2D grid using ℓ1-LinR and ℓ1-LogR with different values of $\alpha \equiv M/N$. The standard error bars are obtained from 1000 random runs. A good agreement between theory and experiment is achieved, even for the small-size 2D grid graph with many loops. For more results, please refer to appendix G.


Subsequently, the asymptotic result and the sharpness of the critical scaling value ${c}_{0}\left(\lambda ,{K}_{0}\right)\equiv \frac{c\left(\lambda ,{K}_{0}\right)}{{\lambda }^{2}}$ in (22) are evaluated. First, figure 3 (left) shows a comparison of ${c}_{0}\left(\lambda ,{K}_{0}\right)$ between ℓ1-LinR and ℓ1-LogR for the RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ when d = 3, K0 = 0.4, indicating similar behavior of ℓ1-LogR and ℓ1-LinR. Then, we conduct experiments for M = c log N with different values of c around ${c}_{0}\left(\lambda ,{K}_{0}\right)$, and investigate the trend of Precision and Recall as N increases. For λ = 0.3, figure 3 (middle and right) shows the results for Precision and Recall, respectively. As expected, the Precision increases consistently with N when $c > {c}_{0}\left(\lambda ,{K}_{0}\right)$ and decreases consistently with N when $c< {c}_{0}\left(\lambda ,{K}_{0}\right)$, while the Recall increases consistently and approaches 1 as N → ∞, which verifies the sharpness of the critical scaling value prediction. The results for ℓ1-LogR, including the case of λ = 0.1 for both ℓ1-LinR and ℓ1-LogR, are shown in figures 9 and 10 in appendix G.


Figure 3. (Left) Critical scaling value ${c}_{0}\left(\lambda ,{K}_{0}\right)\equiv \frac{c\left(\lambda ,{K}_{0}\right)}{{\lambda }^{2}}$ of ℓ1-LinR and ℓ1-LogR for the RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with d = 3, K0 = 0.4. (Middle, right) Precision and Recall for the RR graph using ℓ1-LinR with λ = 0.3. Experimental results are shown for N = 200, 400, 800, 1600, 3200. When $c > {c}_{0}\left(\lambda ,{K}_{0}\right)$ (${c}_{0}\left(\lambda =0.3,{K}_{0}\right)\approx 19.41$ in this case), the Precision increases consistently with N and approaches 1 as N → ∞, while it decreases consistently with N when $c< {c}_{0}\left(\lambda ,{K}_{0}\right)$. The Recall increases consistently and approaches 1 as N → ∞. For more results, please refer to appendix G.


6. Conclusion

In this paper, we provide a unified statistical mechanics framework for the analysis of the typical learning performances of ℓ1-regularized M-estimators, ℓ1-LinR in particular, for Ising model selection on typical paramagnetic RR graphs. Using the powerful replica method, the high-dimensional ℓ1-regularized M-estimator is decoupled into a pair of scalar estimators, by which we obtain an accurate estimate of the typical sample complexity. It is revealed that, perhaps surprisingly, the misspecified ℓ1-LinR estimator is model selection consistent using $M=\mathcal{O}\left(\mathrm{log}\,N\right)$ samples, which is of the same order as ℓ1-LogR. Moreover, with a slight modification of the scalar estimator for the active set to account for the finite-size effect, we further obtain sharp predictions of the non-asymptotic behavior of ℓ1-LinR (and also ℓ1-LogR) for moderate M, N. There is an excellent agreement between theoretical predictions and experimental results, even for graphs with many loops, which supports our findings. Several key assumptions are made in our theoretical analysis, such as the paramagnetic assumption, which implies that the coupling strength should not be too large. It is worth noting that the restrictive paramagnetic assumption is not specific to ℓ1-LinR; other low-complexity estimators such as ℓ1-LogR are subject to the same restriction unless post-thresholding is used [43]. These assumptions restrict the applicability of the presented result, and thus overcoming such limitations will be an important direction for future work.

Acknowledgments

This work was supported by JSPS KAKENHI Nos. 17H00764, 18K11463, and 19H01812, and JST CREST Grant No. JPMJCR1912, Japan.

Appendix A.: Free energy density f computation

In this appendix, the detailed derivation of the average free energy density $f=-\frac{1}{N\beta }{\left[\mathrm{log}\,Z\right]}_{{\mathcal{D}}^{M}}$ in (9) using the replica method is presented. Our method provides a unified framework for the statistical mechanics analysis of any ℓ1-regularized M-estimator of the form (2). For generality, the following derivations therefore first address a generic ℓ1-regularized M-estimator (2) with a generic loss function $\ell \left(\cdot \right)$. After obtaining the generic results, specific results for both the ℓ1-LinR estimator (4) with quadratic loss $\ell \left(x\right)=\frac{1}{2}{\left(x-1\right)}^{2}$ and the ℓ1-LogR estimator with logistic loss $\ell \left(x\right)=\mathrm{log}\left(1+{e}^{-2x}\right)$ are provided. For the IS estimator, the results can be easily obtained by substituting $\ell \left(x\right)={e}^{-x}$, though the specific results are not shown.

A.1. Energy term ξ of f

The key of the replica method is to compute the replicated partition function ${\left[{Z}^{n}\right]}_{{\mathcal{D}}^{M}}$. According to the definition in (10) and ansatz (A1) in section 3.2, the average replicated partition function ${\left[{Z}^{n}\right]}_{{\mathcal{D}}^{M}}$ can be rewritten as

Equation (29)

where $\left\{\frac{1}{\sqrt{N}}{w}_{j}^{a},j\in {\Psi}\right\}$ in the finite active set Ψ are neglected in the second line when N is large, $P\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\vert {\boldsymbol{J}}^{\,\ast }\right)={\sum }_{{\boldsymbol{s}}_{\bar{{\Psi}}}}{P}_{\text{Ising}}\left(\boldsymbol{s}\vert {\boldsymbol{J}}^{\,\ast }\right)$ is the marginal distribution of s0, s Ψ, which can be computed as in [7], and ${P}_{\text{noise}}\left({\left\{{h}_{w}^{a}\right\}}_{a}\vert {\left\{{w}^{a}\right\}}_{a}\right)$ is the distribution of the 'noise' part ${h}_{w}^{a}\equiv \frac{1}{\sqrt{N}}{\sum }_{j\in \bar{{\Psi}}}{w}_{j}^{a}{s}_{j}$ of the local field. In the last line, the asymptotic independence between ${h}_{w}^{a}$ and (s0, s Ψ) is applied, as discussed in [7]. Regarding the marginal distribution $P\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\vert {\boldsymbol{J}}^{\,\ast }\right)$, in general one has to take into account the cavity fields. In the case considered in this paper, however, the paramagnetic assumption simplifies the marginal distribution, which is finally proportional to ${e}^{{s}_{0}{\sum }_{j\in {\Psi}}{J}_{j}^{\enspace \ast }{s}_{j}}$ [7]. When Ψ has a small cardinality d, we can compute the expectation w.r.t. (s0, sΨ) exactly by exhaustive enumeration. For large d, MC methods like the Metropolis–Hastings algorithm [44–46] may be used.

To proceed with the calculation, according to the central limit theorem (CLT), the noise part ${\left\{{h}_{w}^{a}\right\}}_{a=1}^{n}$ can be regarded as Gaussian variables so that ${P}_{\text{noise}}\left({\left\{{h}_{w}^{a}\right\}}_{a}\vert {\left\{{w}^{a}\right\}}_{a}\right)$ can be approximated as a multivariate Gaussian distribution. Under the RS ansatz, two auxiliary order parameters are introduced, i.e.

Equation (30)

Equation (31)

where ${C}^{{\backslash}0}=\left\{{C}_{ij}^{{\backslash}0}\right\}$ is the covariance matrix of the original Ising model without s0. To write the integration in terms of the order parameters Q, q, we introduce the following trivial identities

Equation (32)

Equation (33)

so that ${\left[{Z}^{n}\right]}_{{\mathcal{D}}^{M}}$ in (29) can be rewritten as

Equation (34)

Equation (35)

where

Equation (36)

Equation (37)

According to CLT and (30) and (31), the noise parts ${h}_{w}^{a},a=1,\dots ,n$ follow a multivariate Gaussian distribution with zero mean (paramagnetic assumption) and covariances

Equation (38)

Consequently, by introducing two auxiliary i.i.d. standard Gaussian random variables ${v}_{a}\sim \mathcal{N}\left(0,1\right),z\sim \mathcal{N}\left(0,1\right)$, the noise parts ${h}_{w}^{a},a=1,\dots ,n$ can be written in a compact form

Equation (39)

so that L in (37) could be written as

Equation (40)

where $\mathcal{D}z=\frac{dz}{\sqrt{2\pi }}{e}^{-\frac{{z}^{2}}{2}}$. As a result, using the replica formula, we have

Equation (41)

where in the last line, a change of variable $y={s}_{0}\left({\sum }_{j\in {\Psi}}{\bar{J}}_{j}{s}_{j}+\sqrt{Q-q}v+\sqrt{q}z\right)$ is used.

As a result, from (9), the average free energy density in the limit β → ∞ reads

Equation (42)

where $\mathrm{Extr}\left\{\cdot \right\}$ denotes extremization w.r.t. the relevant variables, and ξ, S are the corresponding energy and entropy terms of f, respectively:

Equation (43)

Equation (44)

Equation (45)

and the relation ${\mathrm{lim}}_{\beta \to \infty }\,\beta \left(Q-q\right)\equiv \chi $ is used [6, 7]. The extremization in the free energy result (42) comes from the saddle-point method in the large-N limit.

A.2. Entropy term S of f

To obtain the final result of the free energy density, there is still one remaining entropy term S to compute, which requires the result of I (44). However, unlike the ℓ2-norm, the ℓ1-norm in (44) breaks the rotational invariance property, which makes the computation of I difficult, and the methods in [7, 29] are no longer applicable. To address this problem, applying the Haar orthogonal ansatz (A2) in section 3.2, we employ a method that replaces I with an average ${\left[I\right]}_{O}$ over the orthogonal matrix O generated from the Haar orthogonal measure.

Specifically, also under the RS ansatz, two auxiliary order parameters are introduced, i.e.

Equation (46)

Equation (47)

Then, by inserting the delta functions ${\prod }_{a}\delta \left({\left({w}^{a}\right)}^{T}{w}^{a}-NR\right){\prod }_{a< b}\delta \left({\left({w}^{a}\right)}^{T}{w}^{b}-Nr\right)$, we obtain

Equation (48)

Moreover, replacing the original delta functions in (48) as the following identities

and taking the average over the orthogonal matrix O, after some algebra, I is replaced with the following average ${\left[I\right]}_{O}$

Equation (49)

Equation (50)

To proceed with the computation, the eigendecomposition of the matrix Ln is performed. After some algebra, for configurations of wa that satisfy both ${\left({w}^{a}\right)}^{T}{w}^{a}=NR$ and ${\left({w}^{a}\right)}^{T}{w}^{b}=Nr$, the eigenvalues and associated eigenvectors of the matrix Ln can be calculated as follows

Equation (51)

where λ1 is the eigenvalue corresponding to the eigenvector u1 while λ2 is the degenerate eigenvalue corresponding to eigenvectors ua , a = 2, ..., n. To compute ${\left[{e}^{\frac{1}{2}\,\mathtt{Tr}\left(C{L}_{n}\right)}\right]}_{O}$, we define a function $G\left(x\right)$ as

Equation (52)

and $\rho \left(\gamma \right)$ is the eigenvalue distribution (EVD) of C. Then, combined with (51), after some algebra, we obtain that

Equation (53)

Furthermore, replacing the original delta functions in (48) as

we obtain

Equation (54)

In addition, using a Gaussian integral, the following term can be linearized as

where $\mathcal{D}{z}_{i}=\frac{d{z}_{i}}{\sqrt{2\pi }}{e}^{-\frac{{{z}_{i}}^{2}}{2}}$. Consequently, the entropy term S of the free energy density f is computed as

For β → ∞, according to the characteristics of the Boltzmann distribution, the following scaling relations are assumed to hold, i.e.

Equation (55)

Finally, the entropy term is computed as

Equation (56)

Equation (57)

A.3. Free energy density result

Combining the results (45) and (57), the free energy density for a general loss function $\ell \left(\cdot \right)$ in the limit β → ∞ is obtained as

Equation (58)

where the values of the parameters $\Theta =\left\{\chi ,Q,E,R,F,\eta ,K,H,{\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}\right\}$ can be calculated from the extremization condition, i.e. by solving the equations of state (EOS). For a general loss function $\ell \left(y\right)$, the EOS for (58) reads

Equation (59)

where $\tilde{\Lambda }$ satisfying $E\eta =-\int \frac{\rho \left(\gamma \right)}{\tilde{\Lambda }-\gamma }d\gamma $ is determined by the extremization condition in (52) combined with the free energy result (58). In general, there are no analytic solutions for the EOS (59) but it can be solved numerically.

A.3.1. Quadratic loss $\ell \left(y\right)={\left(y-1\right)}^{2}/2$

In the case of the quadratic loss $\ell \left(y\right)={\left(y-1\right)}^{2}/2$ for the ℓ1-LinR estimator, there is an analytic solution for y in $\underset{y}{\mathrm{min}}\left[\frac{{\left(y-{s}_{0}\left(\sqrt{Q}z+{\sum }_{j\in {\Psi}}{\bar{J}}_{j}{s}_{j}\right)\right)}^{2}}{2\chi }+\ell \left(y\right)\right]$ and thus the results can be further simplified. Specifically, the free energy can be written as follows

Equation (60)

and the corresponding EOS can be written as

Equation (61)

Note that the mean estimates $\left\{{\bar{J}}_{j},j\in {\Psi}\right\}$ in (61) are obtained by solving the following reduced optimization problem

Equation (62)

where the corresponding fixed-point equation associated with any ${\bar{J}}_{k},k\in {\Psi}$ can be written as follows

Equation (63)

where $\mathtt{sign}\enspace (\cdot )$ denotes an element-wise application of the standard sign function. For an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with degree d and coupling strength K0, without loss of generality assuming that all the active couplings are positive, we have ${\mathbb{E}}_{s}\left({s}_{0}{s}_{k}\right)=\mathrm{tanh}\left({K}_{0}\right),\forall \enspace k\in {\Psi}$, and ${\mathbb{E}}_{s}\left({s}_{k}{s}_{j}\right)={\mathrm{tanh}}^{2}\left({K}_{0}\right),\,\forall \enspace k,j\in {\Psi},k\ne j$. Given these results and thanks to the symmetry, we obtain

Equation (64)

where $\mathrm{soft}\left(z,\tau \right)=\mathrm{sign}\left(z\right){\left(\left\vert z\right\vert -\tau \right)}_{+}$ is the soft-thresholding function, i.e.

Equation (65)

On the other hand, in the inactive set $\bar{{\Psi}}$, each component of the scaled noise estimates can be statistically described as the solution to the scalar estimator $\underset{w}{\mathrm{min}}\left\{\frac{K}{2}{w}^{2}-\left(\sqrt{H}z-\frac{\lambda M}{\sqrt{N}}\,\mathrm{sign}\left(w\right)\right)w\right\}$ in (58). Consequently, recalling the definition of w in (11), the estimates $\left\{{\hat{J}}_{j},j\in \bar{{\Psi}}\right\}$ in the inactive set $\bar{{\Psi}}$ are

Equation (66)

where ${z}_{j}\sim \mathcal{N}\left(0,1\right),j\in \bar{{\Psi}}$ are i.i.d. standard Gaussian random variables.

Consequently, it can be seen from (64) and (66) that, statistically, the ℓ1-LinR estimator decouples into two scalar thresholding estimators, for the active set Ψ and the inactive set $\bar{{\Psi}}$, respectively.

A.3.2. Logistic loss $\ell \left(y\right)=\mathrm{log}\left(1+{e}^{-2y}\right)$

In the case of the logistic loss $\ell \left(y\right)=\mathrm{log}\left(1+{e}^{-2y}\right)$ for the ℓ1-LogR estimator, however, there is no analytic solution for y in $\underset{y}{\mathrm{min}}\left[\frac{{\left(y-{s}_{0}\left(\sqrt{Q}z+{\sum }_{j\in {\Psi}}{\bar{J}}_{j}{s}_{j}\right)\right)}^{2}}{2\chi }+\ell \left(y\right)\right]$ and it has to be solved iteratively together with the other parameters Θ. After some algebra, we obtain the EOS for the ℓ1-LogR estimator:

Equation (67)

In the active set Ψ, the mean estimates $\left\{{\bar{J}}_{j},j\in {\Psi}\right\}$ can be obtained by solving a reduced ℓ1-regularized optimization problem

Equation (68)

In contrast to the ℓ1-LinR estimator, the mean estimates $\left\{{\bar{J}}_{j},j\in {\Psi}\right\}$ in (68) for the ℓ1-LogR estimator do not have analytic solutions and also have to be obtained numerically. For an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with degree d and coupling strength K0, after some algebra, the corresponding fixed-point equations for $\left\{{\bar{J}}_{j}=J,j\in {\Psi}\right\}$ are obtained as follows

Equation (69)

which can be solved iteratively.

The estimates in the inactive set $\bar{{\Psi}}$ are the same as those of ℓ1-LinR in (66), and can be described by a scalar thresholding estimator once the EOS is solved.

Appendix B.: Check the consistency of ansatz (A1)

To check the consistency of ansatz (A1), we first categorize the estimated couplings according to the distance, or generation, of the corresponding spins from the focused spin s0. Considering the original Ising model whose coupling network is a tree-like graph, we can naturally define generations of the spins according to the distance from the focused spin s0. We categorize the spins directly connected to s0 as the first generation and denote the corresponding index set as ${{\Omega}}_{1}=\left\{i\vert {J}_{i}^{\ast }\ne 0,i\in \left\{1,\dots ,N-1\right\}\right\}$. Each spin in Ω1 is connected to some other spins besides s0, and those spins constitute the second generation, whose index set we denote as Ω2. This recursive construction of generations can be unambiguously continued on the tree-like graph, and we denote the index set of the gth generation from spin s0 as Ωg . The overall construction of generations is graphically represented in figure 4. Generally, assume that the set of nonzero values of the ℓ1-LinR estimator is denoted as ${\Psi}=\left\{{{\Omega}}_{1},\dots ,{{\Omega}}_{g}\right\}$. Then, ansatz (A1) means that the correct active set of the mean estimates is ${\Psi}=\left\{{{\Omega}}_{1}\right\}$.


Figure 4. Schematic of generations of spins. In general, the gth generation of spin s0 is denoted as Ωg , whose distance from spin s0 is g.


To verify this, we examine the values of the mean estimates based on (60). Due to the symmetry, it is expected that the values of the mean estimates ${\bar{J}}_{j\in {{\Omega}}_{a}}={J}_{a}$ are identical to each other within the same set Ωa , a = 1, ..., g. In addition, if the solutions satisfy ansatz (A1) in (11), i.e. J1 = J, Ja = 0, a ⩾ 2, from (60) we obtain

Equation (70)

where the result ${\mathbb{E}}_{s}\left({s}_{i}{s}_{j}\right)={\mathrm{tanh}}^{{d}_{0}}\left({K}_{0}\right)$ is used for any two spins si , sj whose distance is d0 in the RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$. Note that the solution of the first equation in (70) automatically satisfies the second equation (sub-gradient condition) since $\left\vert \mathrm{tanh}\left({K}_{0}\right)\right\vert \leqslant 1$, which indicates that J1 = J, Ja = 0, a ⩾ 2 is one valid solution. Moreover, the convexity of the quadratic loss function indicates that this is the unique and correct solution, which checks the ansatz (A1).

Appendix C.: Check the consistency of ansatz (A2)

We here check the consistency of a part of ansatz (A2) in section 3.2, namely that the orthogonal matrix O diagonalizing the covariance matrix C is distributed according to the Haar orthogonal measure. To achieve this, we compare certain properties of the orthogonal matrix obtained from the diagonalization of the covariance matrix C with those of an orthogonal matrix actually generated from the Haar orthogonal measure. Specifically, we compute the cumulants of the traces of powers of the orthogonal matrix, $\mathtt{Tr}\left({O}^{k}\right)$. For a Haar-distributed O, all cumulants of degree r ⩾ 3 vanish in the large-N limit [41, 42]; the only nontrivial cumulants are the second-order ones between traces of the same power k. We computed these cumulants for the orthogonal matrix obtained from the covariance matrix C and found that they exhibit the same behavior as those of matrices drawn from the true Haar measure, as shown in figure 5.
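The sketch below illustrates this check: compare the trace statistics of the orthogonal matrix obtained by diagonalizing a covariance matrix C with those of a Haar-random orthogonal matrix drawn via `scipy.stats.ortho_group`. How C itself is obtained (exact computation or MC estimation) is left open here, and the number of repetitions is an arbitrary choice.

```python
import numpy as np
from scipy.stats import ortho_group

def trace_powers(O, k_max=8):
    """Return [Tr(O^1), ..., Tr(O^k_max)]."""
    traces, P = [], np.eye(O.shape[0])
    for _ in range(k_max):
        P = P @ O
        traces.append(np.trace(P))
    return np.array(traces)

def haar_reference(N, n_rep=200, k_max=8):
    """Trace statistics of truly Haar-distributed orthogonal matrices, for comparison."""
    return np.array([trace_powers(ortho_group.rvs(dim=N), k_max) for _ in range(n_rep)])

# For each graph realization: O_cov = np.linalg.eigh(C)[1]; collect trace_powers(O_cov)
# over many realizations and compare its first and second cumulants with haar_reference(N),
# which reproduces the kind of comparison reported in figure 5.
```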


Figure 5. The RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with N = 1000, d = 3, K0 = 0.4 is generated and we compute the associated covariance matrix C and then diagonalize it as C = OΛOT , obtaining the orthogonal matrix O. Then the $\mathtt{Tr}\left({O}^{k}\right),\mathtt{Tr}\left({O}^{-k}\right)$ for several k (k = 1 ∼ 8) are computed, where $\mathtt{Tr}\left(\cdot \right)$ is the trace operation. This procedure is repeated 200 times with different random numbers, from which we obtain the ensemble of $\mathtt{Tr}\left({O}^{k}\right)$ and $\mathtt{Tr}\left({O}^{-k}\right)$. Consequently, the cumulants of 1st, 2nd, and 3rd orders are computed. All of them exhibit the expected theoretical behavior.


Appendix D.: Details of the high-dimensional asymptotic result

Here the asymptotic behavior of Precision and Recall is considered for both the ℓ1-LinR and ℓ1-LogR estimators. Recall that perfect Ising model selection is achieved if and only if Precision = 1 and Recall = 1.

D.1. Recall rate

According to the definition in (5), the recall rate is only related to the statistical properties of the estimates in the active set Ψ, and thus the mean estimates ${\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}$ in the limit M → ∞ are considered.

D.1.1. Quadratic loss

In this case, in the limit M → ∞, the mean estimates ${\left\{{\bar{J}}_{j}=J\right\}}_{j\in {\Psi}}$ in the active set Ψ are given in (64) and rewritten as follows for ease of reference

Equation (71)

As a result, as long as $\lambda \left(1+\chi \right)< \mathrm{tanh}\left({K}_{0}\right)$, J > 0 and thus the active set is successfully recovered, so that Recall = 1. In addition, when $M=\mathcal{O}\left(\mathrm{log}\,N\right)$, χ → 0 as N → ∞, as demonstrated later by the relation in (81). Consequently, the regularization parameter needs to satisfy $0< \lambda < \mathrm{tanh}\left({K}_{0}\right)$.

D.1.2. Logistic loss

In this case, in the limit M → ∞, the mean estimates ${\left\{{\bar{J}}_{j}=J\right\}}_{j\in {\Psi}}$ in the active set Ψ are given in (69) and rewritten as follows for ease of reference

Equation (72)

There is no analytic solution for $\hat{y}\left(s,z,\chi ,Q,J\right)$ and the following fixed-point equation has to be solved numerically

Equation (73)

Then one can determine the range of λ for which J > 0. Numerical results show that this range is similar to that of the quadratic loss.

D.2. Precision rate

According to the definition in (5), to compute the Precision, the numbers of true positives TP and false positives FP are needed. On the one hand, as discussed in appendix D.1, in the limit M → ∞ the recall rate approaches one, and thus TP = d for an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$. On the other hand, the number of false positives can be computed as $\mathrm{FP}=\mathrm{FPR}\cdot N$, where FPR is the false positive rate.

As shown in appendix A.3, the estimates in the inactive set $\bar{{\Psi}}$ can be statistically described by the scalar estimator (66), and thus the FPR can be computed as

Equation (74)

which depends on λ, M, N, and H. However, for both the quadratic and logistic losses, there is no analytic result for H in (59). Nevertheless, asymptotic results can be obtained using a perturbative analysis.

Specifically, we focus on the asymptotic behavior of the macroscopic parameters, e.g. χ, Q, K, E, H, F, in the regime FPR → 0, which is necessary for successful Ising model selection. From $\eta =\frac{1}{K}\mathrm{erfc}\left(\frac{\lambda M}{\sqrt{2HN}}\right)$ in the EOS (59) and the FPR in (74), we have FPR = Kη. Moreover, by combining $E\eta =-\int \frac{\rho \left(\gamma \right)}{\tilde{\Lambda }-\gamma }d\gamma $ and $K=E\tilde{\Lambda }+\frac{1}{\eta }$, the following relation can be obtained

Equation (75)

Thus, as $FPR=\mathrm{erfc}\left(\frac{\lambda M}{\sqrt{2HN}}\right)\to 0$, we have $\int \frac{\rho \left(\gamma \right)}{1-\frac{\gamma }{\tilde{\Lambda }}}d\gamma \to 1$, implying that the magnitude of $\tilde{\Lambda }$ diverges. Consequently, using a truncated series expansion, we obtain

Equation (76)

where $\left\langle {\gamma }^{k}\right\rangle =\int \rho \left(\gamma \right){\gamma }^{k}d\gamma $. Then, solving the quadratic equation (76) and discarding the smaller root, we obtain the solution of $\tilde{\Lambda }$ as

Equation (77)

To compute $\int \frac{\rho \left(\gamma \right)}{{\left(\tilde{\Lambda }-\gamma \right)}^{2}}d\gamma $, we use the following relation

Equation (78)

Equation (79)
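The relation used here is presumably the derivative identity below, combined with the same truncated expansion as in (76) (our reconstruction; equations (78) and (79) themselves are not reproduced):

$\int \frac{\rho \left(\gamma \right)}{{\left(\tilde{\Lambda }-\gamma \right)}^{2}}d\gamma =-\frac{\partial }{\partial \tilde{\Lambda }}\int \frac{\rho \left(\gamma \right)}{\tilde{\Lambda }-\gamma }d\gamma \simeq \frac{1}{{\tilde{\Lambda }}^{2}}+\frac{2\left\langle \gamma \right\rangle }{{\tilde{\Lambda }}^{3}}+\frac{3\left\langle {\gamma }^{2}\right\rangle }{{\tilde{\Lambda }}^{4}}.$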

Substituting the results (77)–(79) into (59), after some algebra, we obtain

Equation (80)

Equation (81)

Equation (82)

Equation (83)

In addition, as $FPR=\mathrm{e}\mathrm{r}\mathrm{f}\mathrm{c}\left(\frac{\lambda M}{\sqrt{2HN}}\right)\to 0$, from (59) we obtain

Equation (84)

where the first ≃ uses the asymptotic relation $\mathrm{erfc}\left(x\right)\simeq \frac{1}{x\sqrt{\pi }}{e}^{-{x}^{2}}$ as x → ∞ and the last ≃ follows from the asymptotic relation in (80). Substituting (84) into (83) then leads to the following relation

Equation (85)

Interestingly, the common terms $3E\eta \left\langle \gamma \right\rangle H$ on both sides of (85) cancel each other. Therefore, the key result for H is obtained as follows

Equation (86)

In addition, from (86) and (82), Q can be simplified as

Equation (87)

As shown in (59), $F=\alpha {\mathbb{E}}_{s,z}{\left(\frac{d\ell \left(y\right)}{dy}{\vert }_{y=\hat{y}\left(s,z,\chi ,Q,J\right)}\right)}^{2}$; thus the result $H\simeq F\left\langle \gamma \right\rangle $ in (86) implies a linear relation between H and α = M/N. The relations between E, F, H and α are also verified numerically in figure 6 for the 1-LinR estimator, with M = 50 log N and N = 10² ∼ 10¹².


Figure 6.  E, F, H versus α with α = 50(log N)/N for N = 10² ∼ 10¹², for an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with d = 3, K0 = 0.4. Note that in this case $\left\langle \gamma \right\rangle =1$.


In the paramagnetic phase, the mean eigenvalue $\left\langle \gamma \right\rangle $ can be obtained explicitly: since the diagonal elements of C equal unity, we have $\left\langle \gamma \right\rangle =\frac{1}{N}{\sum }_{i=1}^{N}{\gamma }_{i}=\frac{1}{N}\,\mathrm{Tr}\,C=(1/N)\times N=1$. Writing $H\simeq F\left\langle \gamma \right\rangle \equiv \alpha {\triangle}$, where ${\triangle}={\mathbb{E}}_{s,z}{\left(\frac{d\ell \left(y\right)}{dy}{\vert }_{y=\hat{y}\left(s,z,\chi ,Q,J\right)}\right)}^{2}=\mathcal{O}\left(1\right)$, the FPR in (74) can be rewritten as follows

Equation (88)

where the last inequality uses the upper bound of the erfc function, i.e. $\mathrm{erfc}\left(x\right)\leqslant \frac{1}{x\sqrt{\pi }}{e}^{-{x}^{2}}$. Consequently, the number of false positives FP satisfies

Equation (89)

where the last inequality holds when $\frac{{\lambda }^{2}M}{2{\triangle}} > 1$, which is necessary for FP → 0 as N → ∞. Consequently, to ensure FP → 0 as N → ∞, from (89) the term $\frac{{\lambda }^{2}M}{2{\triangle}}$ should grow faster than log N, i.e.

Equation (90)

Meanwhile, the number of false positives FP then decays as $\mathcal{O}\left({e}^{-c\,\mathrm{log}\,N}\right)$ for some constant c > 0.
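The scaling above can be illustrated numerically. The sketch below treats the loss-dependent factor △ as a stand-in O(1) input (its actual value follows from (91)–(95)) and evaluates FPR, FP and Precision when M = c log N.

# Sketch of the asymptotic behaviour of FPR, FP and Precision when M = c*log(N), using
# FPR = erfc(lambda*M / sqrt(2*H*N)) with H ~ alpha*Delta (alpha = M/N); Delta is a stand-in here.
import numpy as np
from scipy.special import erfc

def precision_sketch(N, lam, c, Delta, d=3):
    M = c * np.log(N)
    H = (M / N) * Delta                                # H ~ alpha * Delta
    FPR = erfc(lam * M / np.sqrt(2.0 * H * N))         # = erfc(sqrt(lambda^2 * M / (2*Delta)))
    FP = FPR * N
    return d / (d + FP), FP                            # TP = d since Recall -> 1

for N in (1e2, 1e4, 1e8, 1e12):
    prec, FP = precision_sketch(N=N, lam=0.3, c=30.0, Delta=1.0)
    print(f"N = {N:.0e}:  FP ~ {FP:.3g},  Precision ~ {prec:.6f}")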

D.2.1. Quadratic loss

In this case, when $0< \lambda < \mathrm{tanh}\left({K}_{0}\right)$, from (61), we can obtain an analytic result for △ as follows

Equation (91)

Equation (92)

On the other hand, from the discussion in appendix D.1, the recall rate Recall → 1 as M → ∞ when 0 < λ < tanh K0. Overall, for an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with degree d and coupling strength K0, given M i.i.d. samples ${\mathcal{D}}^{M}=\left\{{\boldsymbol{s}}^{\left(1\right)},\dots ,{\boldsymbol{s}}^{\left(M\right)}\right\}$ and using the 1-LinR estimator (4) with regularization parameter λ, perfect recovery of the graph structure G is achieved as N → ∞ if the number of samples M satisfies

Equation (93)

where $c\left(\lambda ,{K}_{0}\right)$ is a constant depending on the regularization parameter λ and the coupling strength K0, which can be approximated in the limit N → ∞ as:

Equation (94)

D.2.2. Logistic loss

In this case, from (67), the value of △ can be computed as

Equation (95)

However, unlike the case of the 1-LinR estimator, there is no analytic solution and △ has to be computed numerically. It can be seen that, for Ising model selection, the 1-LinR and 1-LogR estimators differ only in the value of the scaling factor △.
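A rough Monte Carlo sketch of how △ can be evaluated numerically for the logistic loss is given below. It assumes (i) that $\hat{y}$ is the minimizer of the one-dimensional problem quoted in appendix E.2, and (ii) that the joint law of $\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\right)$ is that of a spin and its d neighbors in the paramagnetic phase, $P\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\right)\propto \mathrm{exp}\left({K}_{0}{s}_{0}{\sum }_{j\in {\Psi}}{s}_{j}\right)$; the values of χ, Q, J are stand-ins, since in the analysis they follow from the EOS.

# Monte Carlo sketch of Delta = E_{s,z}[ (l'(y_hat))^2 ] for the logistic loss l(y) = log(1+e^{-2y}).
import numpy as np
from itertools import product

def delta_logistic(chi, Q, J, K0=0.4, d=3, n_z=20000, seed=0):
    rng = np.random.default_rng(seed)
    confs = np.array(list(product([-1, 1], repeat=d + 1)))        # column 0 is s0, the rest s_Psi
    w = np.exp(K0 * confs[:, 0] * confs[:, 1:].sum(axis=1))       # assumed neighbourhood weights
    w /= w.sum()
    z = rng.standard_normal(n_z)
    acc = 0.0
    for conf, p in zip(confs, w):
        h = conf[0] * (np.sqrt(Q) * z + J * conf[1:].sum())
        y = h.copy()
        for _ in range(200):                                      # damped fixed point for y_hat
            y = 0.5 * y + 0.5 * (h + 2.0 * chi / (1.0 + np.exp(2.0 * y)))
        lp = -2.0 / (np.exp(2.0 * y) + 1.0)                       # l'(y) evaluated at y_hat
        acc += p * np.mean(lp ** 2)
    return acc

print(delta_logistic(chi=0.5, Q=0.3, J=0.3))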

Appendix E.: Details of the non-asymptotic result for moderate M, N

As demonstrated in appendix A.3, the replica analysis shows that both the 1-LinR and 1-LogR estimators decouple, and their asymptotic behavior can be described by two scalar estimators for the active set and the inactive set, respectively. It is desirable to obtain non-asymptotic results for moderate M, N. However, simply inserting finite values of M, N into the EOS does not always yield good agreement between the behavior of the two scalar estimators and the experimental results, especially for the Recall when M is small. This can be explained by the derivation of the free energy density: in calculating the energy term ξ, the limit M → ∞ is implicitly taken when assuming the limit N → ∞ with α = M/N fixed. As a result, the scalar estimator associated with the active set only describes the asymptotic performance in the limit M → ∞, and it cannot capture the fluctuating behavior of the estimates in the active set, such as the recall rate, for finite M. To characterize the non-asymptotic behavior of the estimates in the active set Ψ, we replace the expectation ${\mathbb{E}}_{s}(\cdot )$ in (58) by the sample average over M samples, and the corresponding estimates are obtained as

Equation (96)

where ${z}^{\mu }\sim \mathcal{N}\left(0,1\right)$ and ${s}_{0}^{\mu },{s}_{j,j\in {\Psi}}^{\mu }\sim P\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\vert {\boldsymbol{J}}^{\ast }\right)$ are random samples for μ = 1, ..., M. Note that the mean estimates ${\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}$ are replaced by ${\left\{{\hat{J}}_{j}\right\}}_{j\in {\Psi}}$ in (96), as we now focus on their fluctuating behavior due to finite-size effects. In the limit M → ∞, the sample average converges to the expectation, and thus (96) is equivalent to (68).
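A minimal sketch of how the M triples $\left({s}_{0}^{\mu },{\boldsymbol{s}}_{{\Psi}}^{\mu },{z}^{\mu }\right)$ entering (96) can be generated is given below; as in the sketch of appendix D.2.2, the neighborhood marginal $P\left({s}_{0},{\boldsymbol{s}}_{{\Psi}}\vert {\boldsymbol{J}}^{\ast }\right)\propto \mathrm{exp}\left({K}_{0}{s}_{0}{\sum }_{j\in {\Psi}}{s}_{j}\right)$ for the paramagnetic RR-graph model is an assumption of the sketch.

# Sketch of drawing the M samples used in the finite-size average (96):
# z^mu ~ N(0,1) and (s0^mu, s_Psi^mu) drawn from the assumed neighbourhood marginal.
import numpy as np
from itertools import product

def draw_active_samples(M, K0=0.4, d=3, seed=0):
    rng = np.random.default_rng(seed)
    confs = np.array(list(product([-1, 1], repeat=d + 1)))        # (s0, s_1, ..., s_d)
    w = np.exp(K0 * confs[:, 0] * confs[:, 1:].sum(axis=1))
    idx = rng.choice(len(confs), size=M, p=w / w.sum())
    s0, s_Psi = confs[idx, 0], confs[idx, 1:]
    z = rng.standard_normal(M)
    return s0, s_Psi, z

s0, s_Psi, z = draw_active_samples(M=200)
print(s0.shape, s_Psi.shape, z.shape)                             # feed the sample average in (96)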

E.1. Quadratic loss $\ell \left(y\right)={\left(y-1\right)}^{2}/2$

In the case of the quadratic loss $\ell \left(y\right)={\left(y-1\right)}^{2}/2$, the minimization $\underset{y}{\mathrm{min}}\left[\frac{{\left(y-{s}_{0}\left(\sqrt{Q}z+{\sum }_{j\in {\Psi}}{\bar{J}}_{j}{s}_{j}\right)\right)}^{2}}{2\chi }+\ell \left(y\right)\right]$ admits an analytic solution for y. Consequently, similarly to (62), the result of (96) for the 1-LinR estimator becomes

Equation (97)

As the mean estimates ${\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}$ are modified as in (97), the corresponding solution of the EOS in (61) also needs to be modified; it can be solved iteratively as sketched in algorithm 1. The details of a practical implementation of algorithm 1 are described in the following.

First, in the EOS (19), we need to obtain $\tilde{\Lambda }$ satisfying the following relation

Equation (98)

which is difficult to solve directly. To obtain $\tilde{\Lambda }$, we introduce an auxiliary variable $\Gamma \equiv -\frac{1}{\tilde{\Lambda }}$, by which (98) can be rewritten as

Equation (99)

which can be solved iteratively. Accordingly, the quantities χ, Q, K, H in the EOS (19) can be equivalently written in terms of Γ.

Second, when solving the EOS (19) iteratively, it is helpful for the convergence of the solution to introduce a damping factor $\mathtt{d}\mathtt{a}\mathtt{m}\mathtt{p}\in [0,1)$ for χ, Q, E, R, F, η, K, H, Γ in each iteration.

The detailed implementation of algorithm 1 is shown in algorithm 2.

Algorithm 2. Detailed implementation of algorithm 1 for the 1-LinR estimator with moderate M, N.
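Since the table of algorithm 2 is not reproduced here, the following skeleton only sketches its iteration and damping structure; the functions update_gamma, update_macros and update_active_estimates are hypothetical placeholders standing in for the concrete updates (99), the EOS (61)/(19), and the finite-M estimates (97).

# Skeleton of the damped fixed-point iteration of algorithm 2 (structure only; the update
# functions are hypothetical placeholders, not the actual equations).
import numpy as np

def solve_eos(update_gamma, update_macros, update_active_estimates,
              M, N, lam, d=3, damp=0.7, n_iter=500, tol=1e-8):
    state = dict(chi=1.0, Q=1.0, E=0.1, R=0.1, F=0.1, eta=1.0, K=1.0, H=0.1, Gamma=-1.0)
    J_active = np.zeros(d)                            # estimates on the active set
    for _ in range(n_iter):
        new = dict(state)
        new["Gamma"] = update_gamma(state)            # solve (99) for Gamma = -1/Lambda_tilde
        new.update(update_macros(state, J_active, M, N, lam))
        J_active_new = update_active_estimates(state, M, lam)
        diff = 0.0
        for key in state:                             # damped update of every macroscopic quantity
            diff += abs(new[key] - state[key])
            state[key] = damp * state[key] + (1.0 - damp) * new[key]
        J_active = damp * J_active + (1.0 - damp) * J_active_new
        if diff < tol:
            break
    return state, J_active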

E.2. Logistic loss $\ell \left(y\right)=\mathrm{log}\left(1+{e}^{-2y}\right)$

In the case of the logistic loss $\ell \left(y\right)=\mathrm{log}\left(1+{e}^{-2y}\right)$, since there is no analytic solution for y in $\underset{y}{\mathrm{min}}\left[\frac{{\left(y-{s}_{0}\left(\sqrt{Q}z+{\sum }_{j\in {\Psi}}{\bar{J}}_{j}{s}_{j}\right)\right)}^{2}}{2\chi }+\ell \left(y\right)\right]$, the result of (96) for the 1-LogR estimator becomes

Equation (100)

Similarly to the case of the quadratic loss, since the mean estimates ${\left\{{\bar{J}}_{j}\right\}}_{j\in {\Psi}}$ are modified as in (100), the corresponding solution of the EOS in (67) also needs to be modified; it can be solved iteratively as shown in algorithm 3.

Algorithm 3. Detailed implementation of solving the EOS (67) together with (100) for 1-LogR with moderate M, N.

Appendix F.: Eigenvalue distribution $\rho \left(\gamma \right)$

As the replica analysis shows, the learning performance depends on the eigenvalue distribution (EVD) $\rho \left(\gamma \right)$ of the covariance matrix C of the original Ising model.

There are two points to be noted. One concerns the formula connecting the performance of the estimator to the spectral density, and the other concerns the numerical values of the quantities computed from that formula. Regarding the first point, no assumption about the spectral density is needed to obtain the formula itself; the formula is valid whenever the graph structure is tree-like and the Ising model defined on the graph is in the paramagnetic phase. Regarding the second point, the specific form of the spectral density is in general required to obtain numerical solutions. As a demonstration, we assume a random regular graph with constant coupling strength, for which the spectral density can be obtained analytically, as is already known [7].

In general, it is difficult to obtain this EVD; however, for sparse tree-like graphs such as an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ with constant node degree d and a sufficiently small coupling strength K0 yielding the paramagnetic state $({\mathbb{E}}_{\boldsymbol{s}}(\boldsymbol{s})=\mathbf{0})$, it can be computed analytically. To this end, we express the covariances as

Equation (101)

where $Z(\boldsymbol{\theta })=\int d\boldsymbol{s}{P}_{\text{Ising}}(\boldsymbol{s}\vert {J}^{\ast })\mathrm{exp}({\sum }_{i=0}^{N-1}{\theta }_{i}{s}_{i})$ and the evaluation is carried out at θ = 0.

In addition, for technical convenience we introduce the Gibbs free energy as

Equation (102)

The definition (102) indicates that the following two relations hold:

Equation (103)

where the evaluations are performed at θ = 0 and m = arg min m A( m ) (=0 under the paramagnetic assumption).

Consequently, we can focus on the computation of $A\left(\boldsymbol{m}\right)$ to obtain the EVD of C−1. The inverse covariance matrix of a RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$ can be computed from the Hessian of the Gibbs free energy [7, 47, 48] as

Equation (104)

and in matrix form, we have

Equation (105)

where I is the identity matrix of appropriate size, and the operations $\mathrm{tanh}\left(\cdot \right),{\mathrm{tanh}}^{2}\left(\cdot \right)$ on the matrix J are defined component-wise. For an RR graph $G\in {\mathcal{G}}_{N,d,{K}_{0}}$, J is a sparse matrix; therefore the matrix $\frac{\mathrm{tanh}\left(\boldsymbol{J}\right)}{1-{\mathrm{tanh}}^{2}\left(\boldsymbol{J}\right)}$ also corresponds to a sparse coupling matrix (whose nonzero coupling positions are the same as those of J ) with constant coupling strength ${K}_{1}=\frac{\mathrm{tanh}\left({K}_{0}\right)}{1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)}$ and fixed connectivity d. The distribution of its eigenvalues (denoted by ζ) can be calculated as [49]

Equation (106)

From (105), the eigenvalue η of C−1 is

Equation (107)

which, when combined with (106), readily yields the EVD of η as N → ∞ as follows:

Equation (108)

where $\eta \in \left[\frac{d}{1-{\mathrm{tanh}}^{2}{K}_{0}}-d+1-\frac{2\,\mathrm{tanh}\left({K}_{0}\right)\sqrt{d-1}}{1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)},\frac{d}{1-{\mathrm{tanh}}^{2}\,{K}_{0}}-d+1+\frac{2\,\mathrm{tanh}\left({K}_{0}\right)\sqrt{d-1}}{1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)}\right]$.

Consequently, since γ = 1/η, we obtain the EVD $\rho \left(\gamma \right)$ as follows

Equation (109)

where $\gamma \in \left[1/\left(\frac{d}{1-{\mathrm{tanh}}^{2}{K}_{0}}-d+1+\frac{2\,\mathrm{tanh}\left({K}_{0}\right)\sqrt{d-1}}{1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)}\right),1/\left(\frac{d}{1-{\mathrm{tanh}}^{2}\,{K}_{0}}-d+1-\frac{2\,\mathrm{tanh}\left({K}_{0}\right)\sqrt{d-1}}{1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)}\right)\right]$.
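As a numerical cross-check of the supports in (108) and (109), the sketch below builds an RR graph, forms ${\boldsymbol{C}}^{-1}$ from our reading of (104) and (105) (off-diagonal entries $-{K}_{1}$ on the edges and diagonal entries $1+d\,{\mathrm{tanh}}^{2}\left({K}_{0}\right)/\left(1-{\mathrm{tanh}}^{2}\left({K}_{0}\right)\right)$; these explicit values are an assumption of the sketch), and compares the empirical eigenvalues with the predicted bulk edges; networkx is assumed for the graph generation.

# Numerical cross-check of the supports of the eigenvalue distributions (108) and (109).
import numpy as np
import networkx as nx

N, d, K0 = 2000, 3, 0.4
A = nx.to_numpy_array(nx.random_regular_graph(d, N, seed=1))      # adjacency matrix
t = np.tanh(K0)
K1 = t / (1.0 - t**2)
Cinv = (1.0 + d * t**2 / (1.0 - t**2)) * np.eye(N) - K1 * A       # assumed form of C^{-1}

eta = np.sort(np.linalg.eigvalsh(Cinv))
eta_bulk = eta[1:]                    # drop the single Perron mode of A (eigenvalue d), an outlier
gamma = 1.0 / eta_bulk                #   not described by the bulk law (106)

centre = d / (1.0 - t**2) - d + 1.0
width = 2.0 * t * np.sqrt(d - 1.0) / (1.0 - t**2)
print("empirical eta range  :", eta_bulk.min(), eta_bulk.max())
print("predicted eta range  :", centre - width, centre + width)
print("empirical gamma range:", gamma.min(), gamma.max())
print("predicted gamma range:", 1.0 / (centre + width), 1.0 / (centre - width))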

Appendix G.: Additional experimental results

Figures 7 and 8 show the full results of the non-asymptotic learning performance prediction for λ = 0.1 and λ = 0.3, respectively. Good agreement between the replica and experimental results is achieved in all cases. As can be seen, the difference in Precision and Recall between 1-LinR and 1-LogR is negligible. Meanwhile, compared with figure 7 (λ = 0.1), the difference in RSS between 1-LinR and 1-LogR is reduced when λ = 0.3. In addition, comparing figures 7 and 8 shows that, under the same setting, increasing λ yields a larger Precision but a smaller Recall, implying a tradeoff in choosing λ in practice for Ising model selection with finite M, N.


Figure 7. Theoretical and experimental results of RSS, Precision and Recall for both 1-LinR and 1-LogR when λ = 0.1, N = 200, 400, 800 with different values of αM/N. The standard error bars are obtained from 1000 random runs. An excellent agreement between theory and experiment is achieved, even for small N = 200 and small α (small M).


Figure 8. Theoretical and experimental results of RSS, Precision and Recall for both 1-LinR and 1-LogR when λ = 0.3, N = 200, 400, 800 with different values of αM/N. The standard error bars are obtained from 1000 random runs. An excellent agreement between theory and experiment is achieved, even for small N = 200 and small α (small M).


Figures 9 and 10 show the full results of the critical scaling prediction for λ = 0.1 and λ = 0.3, respectively. For comparison, the results of both 1-LinR and 1-LogR are shown. It can be seen that, in addition to the good agreement between the replica and experimental results, the prediction of the scaling value ${c}_{0}\left(\lambda ,{K}_{0}\right)\equiv \frac{c\left(\lambda ,{K}_{0}\right)}{{\lambda }^{2}}$ is very accurate.


Figure 9.  Precision and Recall versus N when M = c log N and K0 = 0.4 for 1-LinR and 1-LogR when λ = 0.1, where ${c}_{0}\left(\lambda ,{K}_{0}\right)\equiv \frac{c\left(\lambda ,{K}_{0}\right)}{{\lambda }^{2}}\approx 137$. When $c > {c}_{0}\left(\lambda ,{K}_{0}\right)$, the Precision increases consistently with N and approaches 1 as N → ∞, while it decreases consistently with N when $c< {c}_{0}\left(\lambda ,{K}_{0}\right)$.


Figure 10.  Precision and Recall versus N when M = c log N and K0 = 0.4 for 1-LinR and 1-LogR when λ = 0.3, where ${c}_{0}\left(\lambda ,{K}_{0}\right)\equiv \frac{c\left(\lambda ,{K}_{0}\right)}{{\lambda }^{2}}\approx 19.4$. When $c > {c}_{0}\left(\lambda ,{K}_{0}\right)$, the Precision increases consistently with N and approaches 1 as N → ∞, while it decreases consistently with N when $c< {c}_{0}\left(\lambda ,{K}_{0}\right)$. The Recall increases consistently and approaches 1 as N → ∞.


Footnotes

  • This article is an updated version of: Meng X, Obuchi T and Kabashima Y 2021 Ising model selection using 1-regularized linear regression: a statistical mechanics analysis Advances in Neural Information Processing Systems vol 34 ed M Ranzato, A Beygelzimer, Y Dauphin, P S Liang and J Wortman Vaughan (New York: Curran Associates) pp 6290–303.

  • Though this setting is different from the analysis where the nonzero couplings take a uniform sign, the result can be directly compared thanks to gauge symmetry [21].
