Paper. The following article is Open access.

Perfect reconstruction of sparse signals with piecewise continuous nonconvex penalties and nonconvexity control


Published 14 September 2021 © 2021 The Author(s). Published on behalf of SISSA Medialab srl by IOP Publishing Ltd
Citation: Ayaka Sakata and Tomoyuki Obuchi, J. Stat. Mech. (2021) 093401. DOI: 10.1088/1742-5468/ac1403


Abstract

We consider the reconstruction of sparse signals from fewer measurements than the signal dimension, formulated as the minimization of nonconvex sparse penalties: the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP). The nonconvexity of these penalties is controlled by nonconvexity parameters, and the ℓ1 penalty is contained as a limit with respect to these parameters. The analytically derived reconstruction limit overcomes that of the ℓ1 penalty and is also expected to overcome the algorithmic limit of the Bayes-optimal setting when the nonconvexity parameters take suitable values. However, for small nonconvexity parameters, where the reconstruction of relatively dense signals is theoretically expected, approximate message passing (AMP), an algorithm closely related to the analysis, cannot achieve perfect reconstruction, leading to a gap from the analysis. Using the theory of state evolution, we clarify that this gap can be understood in terms of the shrinkage of the basin of attraction of the perfect reconstruction and the divergent behavior of AMP in some regions. Part of the gap is mitigated by controlling the shapes of the nonconvex penalties so as to guide the AMP trajectory into the basin of attraction.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

A signal processing scheme for reconstructing signals through linear measurements, when the number of measurements is less than the dimensionality of the signals, is known as compressed sensing (or compressive sensing) [1, 2]. Let ${\boldsymbol{x}}^{0}\in {\mathbb{R}}^{N}$ and $A\in {\mathbb{R}}^{M{\times}N}$ be the unknown original signal and measurement matrix, respectively. Compressed sensing is mathematically formulated as a problem of reconstructing the signal x 0 from its measurement y = A x 0, where the number of measurements is less than the signal dimension (M < N). In general, the problem is underdetermined, and the solution is not unique. However, the signal can be reconstructed utilizing the knowledge that the original signal is sparse; it contains zero components with a finite probability. The reconstruction of signals from a limited number of measurements is a common challenge in various fields. In the past decade, theories and techniques of compressed sensing have been enriched by interdisciplinary work in fields such as signal processing, medical imaging, and statistical physics.

A natural way to reconstruct a sparse signal is to minimize the ℓ0 norm under the measurement constraint:

Equation (1): $\hat{\boldsymbol{x}}=\underset{\boldsymbol{x}}{\mathrm{arg}\,\mathrm{min}}{\Vert}\boldsymbol{x}{{\Vert}}_{0}\enspace \text{subject to}\enspace \boldsymbol{y}=A\boldsymbol{x}$

where ${\Vert}\boldsymbol{x}{{\Vert}}_{0}$ is the number of nonzero components in x . However, a combinatorial search with respect to the support set is required to exactly solve (1); hence, it is impractical to implement. The minimization of the ℓ1 norm [1, 3] is a widely used alternative:

Equation (2): $\hat{\boldsymbol{x}}=\underset{\boldsymbol{x}}{\mathrm{arg}\,\mathrm{min}}{\Vert}\boldsymbol{x}{{\Vert}}_{1}\enspace \text{subject to}\enspace \boldsymbol{y}=A\boldsymbol{x}$

which is a convex relaxation of (1), where ${\Vert}\boldsymbol{x}{{\Vert}}_{1}={\sum }_{i=1}^{N}\vert {x}_{i}\vert $. Efficient algorithms to solve (2) have been developed [4, 5], in addition to general convex optimization techniques [6]. Further, conditions on the measurement matrix, such as the null space property and the restricted isometry property [7, 8], have been identified under which the solutions of (1) and (2) become equivalent.
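The convex problem (2) can be handed to a generic LP solver through the standard variable split x = u − v with u, v ⩾ 0, so that ${\Vert}\boldsymbol{x}{{\Vert}}_{1}={\sum }_{i}({u}_{i}+{v}_{i})$. A minimal sketch of this reformulation (our illustration, not a method from the paper; it uses SciPy's `linprog` for the LP):

```python
import numpy as np
from scipy.optimize import linprog

def l1_reconstruct(A, y):
    """Solve min ||x||_1 s.t. A x = y via the LP split x = u - v, u, v >= 0."""
    M, N = A.shape
    c = np.ones(2 * N)                    # objective sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])             # equality constraint A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

# demo: a 4-sparse signal in dimension N = 40 from M = 25 Gaussian measurements
rng = np.random.default_rng(0)
N, M = 40, 25
x0 = np.zeros(N)
x0[rng.choice(N, 4, replace=False)] = rng.normal(size=4)
A = rng.normal(scale=1.0 / np.sqrt(N), size=(M, N))
x_hat = l1_reconstruct(A, A @ x0)
assert np.allclose(x_hat, x0, atol=1e-4)  # exact recovery: well inside the l1 phase
```

For this small instance, the ℓ1 minimizer coincides with the sparse original signal, illustrating the equivalence of (1) and (2) under the conditions mentioned above.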

Despite the mathematical tractability of ℓ1 minimization, its performance is inferior to that of ℓ0 minimization for practical settings of the measurement matrix A. The difference between ℓ1 and ℓ0 is expected to be reduced by the minimization of the ℓp (0 < p < 1) norm. In fact, ℓp (0 < p < 1) minimization reconstructs the original signal from a smaller number of measurements than ℓ1 minimization [9, 10]. However, ℓp (0 < p < 1) minimization leads to a discontinuity of the reconstructed signal with respect to the input, which induces algorithmic instability. The smoothly clipped absolute deviation (SCAD) [11] and the minimax concave penalty (MCP) [12], which are piecewise continuous nonconvex penalties, are potential candidates to address this limitation. SCAD and MCP are designed to provide continuity, unbiasedness, and sparsity to the estimates, and their nonconvexity is controlled by nonconvexity parameters. The mathematical treatment of nonconvex penalties is seemingly more difficult than that of ℓ1. However, it has been shown that a data compression problem under SCAD and MCP can be solved without additional computational cost compared with convex optimization problems in a certain parameter region, characterized by replica symmetry in the context of statistical physics [13]. This finding suggests the promise of these penalties for improving signal reconstruction in compressed sensing.

In this study, we theoretically verify the performance of SCAD and MCP minimization for the reconstruction of sparse signals in compressed sensing. The perfect reconstruction is achieved with a smaller number of measurements than the ℓ1 reconstruction limit [14–16]. Further, SCAD and MCP minimization overcomes the algorithmic limit of the Bayes-optimal method [17], in the sense that there exists a unique stable solution corresponding to the perfect reconstruction even beyond that limit, and that there are no phase transitions that could act as algorithmic barriers. Based on this finding, we employ the approximate message passing (AMP) algorithm for SCAD and MCP minimization as a reconstruction algorithm. Examining its performance, we find that AMP cannot actually achieve the perfect reconstruction beyond the Bayes-optimal algorithmic limit, despite the theoretical basis. To investigate the gap between AMP's behavior and the analytical result, we use state evolution (SE), which allows us to track the macroscopic dynamics of AMP. We find that the gap originates from the shrinkage of the basin of attraction (BOA) of the perfect reconstruction and from the divergent behavior of AMP in some parameter regions. One contribution of this paper is the identification of a scenario of algorithmic failure different from that in the Bayes-optimal setting, where the emergence of a local minimum other than the success solution, which is the global minimum, hampers the convergence of AMP to the perfect reconstruction.

We mitigate the abovementioned gap and improve the performance of AMP by introducing into AMP a method that controls the nonconvexity parameters, named nonconvexity control; this is our other contribution. The efficiency of the nonconvexity control is understood from the flow of SE. Further, the property of the fixed point of SE provides a guide for the control protocol. The algorithmic limit of SCAD and MCP minimization, called the nonconvexity control (NCC) limit, is determined as the limit up to which the proposed nonconvexity control leads to the perfect reconstruction. The resultant algorithmic limit is very close to, but slightly inferior to, the Bayes-optimal algorithmic limit.

The remainder of this paper is organized as follows. In section 2, we introduce the nonconvex sparse penalties, SCAD and MCP, used in this study. The equilibrium properties of compressed sensing under SCAD and MCP are studied in section 3, based on the replica method under the replica symmetric (RS) assumption. In section 4, the limit for the perfect reconstruction is derived for SCAD and MCP, and we show that their performance is expected to overcome the ℓ1 reconstruction limit and the algorithmic limit of the Bayes-optimal method by investigating the presence of phase transitions and the local stability of the solution corresponding to the perfect reconstruction. In section 5, we demonstrate the actual reconstruction of the signal using AMP, and show that the reconstructed signal diverges when the nonconvexity parameters are small, even when the reconstruction is theoretically supported. We show that this divergence can be suppressed by introducing the nonconvexity control. Section 6 is devoted to the summary and discussion of this paper.

2. Definition of SCAD and MCP

The problem considered in this study is formulated as

Equation (3): $\hat{\boldsymbol{x}}=\underset{\boldsymbol{x}}{\mathrm{arg}\,\mathrm{min}}\enspace J(\boldsymbol{x};\lambda ,a)\enspace \text{subject to}\enspace \boldsymbol{y}=A\boldsymbol{x}$

where $J(\boldsymbol{x};\lambda ,a)={\sum }_{i=1}^{N}J({x}_{i};\lambda ,a)$ is a sparsity-inducing penalty and λ, a are regularization parameters. We deal with two types of nonconvex penalties, SCAD and MCP. The shapes of these penalties are controlled by the two parameters λ and a, and the ℓ1 penalty is recovered as a limit. We call these regularization parameters nonconvexity parameters.

SCAD is defined by [11]

Equation (4): $J(x;\lambda ,a)=\begin{cases}\lambda \vert x\vert & (\vert x\vert \leqslant \lambda )\\ \dfrac{2a\lambda \vert x\vert -{x}^{2}-{\lambda }^{2}}{2(a-1)} & (\lambda {< }\vert x\vert \leqslant a\lambda )\\ \dfrac{(a+1){\lambda }^{2}}{2} & (\vert x\vert { >}a\lambda )\end{cases}$

where λ ∈ (0, ∞) and a ∈ (1, ∞). Figure 1(a) represents the SCAD penalty at λ = 1 and a = 3. The dashed vertical lines are the thresholds |x| = λ and |x| = aλ. The SCAD penalty for |x| ⩽ λ is equivalent to the ℓ1 penalty, and that for |x| > aλ is equivalent to the ℓ0 penalty; i.e. the penalty has a constant value. These ℓ1 and ℓ0 regions are connected to each other through a quadratic function. As a → ∞, SCAD reduces to the ℓ1 penalty, J(x; λ, a) = λ|x|, and the minimization of SCAD as λ → ∞ is also equivalent to ℓ1 minimization.


Figure 1. Shapes of (a) SCAD for λ = 1 and a = 3 and (b) MCP for λ = 1 and a = 3. The dashed lines represent the thresholds where the penalty shape changes.


MCP is defined by [12]

Equation (5): $J(x;\lambda ,a)=\begin{cases}\lambda \vert x\vert -\dfrac{{x}^{2}}{2a} & (\vert x\vert \leqslant a\lambda )\\ \dfrac{a{\lambda }^{2}}{2} & (\vert x\vert { >}a\lambda )\end{cases}$

where λ ∈ (0, ∞) and a ∈ (1, ∞). Figure 1(b) represents MCP for λ = 1 and a = 3. The vertical line represents the threshold |x| = aλ. As with SCAD, MCP also reduces to ℓ1 in the limit a → ∞, and is equivalent to ℓ1 minimization as λ → ∞.
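Since the rendered equations (4) and (5) may not display above, the following sketch implements the standard SCAD [11] and MCP [12] penalty forms consistent with the description in the text (ℓ1 behavior below |x| = λ, a quadratic bridge, and a constant value beyond |x| = aλ); the checks mirror the continuity at the thresholds drawn in figure 1:

```python
import numpy as np

def scad_penalty(x, lam, a):
    """SCAD penalty (Fan & Li, 2001): l1 for |x| <= lam, quadratic bridge for
    lam < |x| <= a*lam, constant (a+1)*lam^2/2 beyond a*lam."""
    ax = np.abs(x)
    return np.where(
        ax <= lam, lam * ax,
        np.where(ax <= a * lam,
                 (2 * a * lam * ax - ax**2 - lam**2) / (2 * (a - 1)),
                 (a + 1) * lam**2 / 2))

def mcp_penalty(x, lam, a):
    """MCP (Zhang, 2010): lam*|x| - x^2/(2a) for |x| <= a*lam, constant beyond."""
    ax = np.abs(x)
    return np.where(ax <= a * lam, lam * ax - ax**2 / (2 * a), a * lam**2 / 2)

# continuity at the thresholds |x| = lam and |x| = a*lam (lam = 1, a = 3 as in figure 1)
lam, a = 1.0, 3.0
assert abs(float(scad_penalty(lam, lam, a)) - lam**2) < 1e-12
assert abs(float(scad_penalty(a * lam, lam, a)) - (a + 1) * lam**2 / 2) < 1e-12
assert abs(float(mcp_penalty(a * lam, lam, a)) - a * lam**2 / 2) < 1e-12
```

Both penalties grow linearly near the origin (inducing sparsity) and flatten at large |x| (reducing the estimation bias that λ|x| would otherwise impose).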

2.1. Estimator under SCAD and MCP

To understand the properties of SCAD and MCP, let us consider the one-dimensional problem of fitting an input w with a Gaussian model penalized by SCAD or MCP:

Equation (6)

where s > 0. Both SCAD and MCP have upward-convex (concave) terms; hence, to define a solution of (6) for all values of w, we need to carefully consider the admissible regions of s and a, which determine the coefficient of the quadratic term on the rhs of (6). In the case of SCAD, the relationship a > 1 + s should hold to obtain the solution as

Equation (7)

where ${\mathcal{M}}_{\text{SCAD}}$ and ΣSCAD represent the coefficients of the linear and quadratic terms in (6) given by

Equation (8)

Equation (9)

and sgn(w) denotes the sign of w. An example of the estimator as a function of the input w under SCAD is shown in figure 2(b), where s = 1, λ = 1 and a = 3. The SCAD estimator behaves like the ℓ1 estimator, shown in figure 2(a), when $\lambda (1+{s}^{-1})\geqslant \vert w\vert { >}\lambda $, and like the ordinary least squares (OLS) estimator when $\vert w\vert { >}a\lambda {s}^{-1}$. In the region $a\lambda {s}^{-1}\geqslant \vert w\vert { >}\lambda (1+{s}^{-1})$, the estimator interpolates linearly between the ℓ1 and OLS estimators.


Figure 2. Behaviour of the estimator for (a) ℓ1, (b) SCAD, and (c) MCP. The dashed diagonal lines represent the OLS estimator.


By the same argument as for SCAD, a > s should hold to define the solution of (6) under MCP. Further, the condition a > 1 is imposed in the definition of MCP; hence, in summary, the restriction is given as a > max{1, s}. When this condition is satisfied, the solution of the single-body problem under MCP is given by [18]

Equation (10)

where

Equation (11)

Equation (12)

Figure 2(c) shows the behaviour of the MCP estimator at s = 1, λ = 1, and a = 3. The MCP estimator behaves like the OLS estimator for $\vert w\vert { >}a\lambda {s}^{-1}$, and is connected from zero to the OLS estimator in the region $a\lambda {s}^{-1}\geqslant \vert w\vert { >}\lambda $.
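At the unit scale s = 1 used in figure 2, the SCAD and MCP scalar estimators reduce to the standard SCAD-thresholding and firm-thresholding rules. A sketch (our implementation of those standard rules, written for a > 2 in the SCAD case; the general-s forms are given by (7)-(12)):

```python
import numpy as np

def scad_estimator(w, lam, a):
    """Scalar SCAD estimator at unit quadratic scale (s = 1), requiring a > 2."""
    aw, sg = np.abs(w), np.sign(w)
    if aw <= 2 * lam:                               # soft-thresholding (l1-like) region
        return sg * max(aw - lam, 0.0)
    if aw <= a * lam:                               # linear interpolation region
        return sg * ((a - 1) * aw - a * lam) / (a - 2)
    return w                                        # OLS region: unbiased, returns input

def mcp_estimator(w, lam, a):
    """Scalar MCP estimator ("firm thresholding") at s = 1, requiring a > 1."""
    aw, sg = np.abs(w), np.sign(w)
    if aw <= lam:
        return 0.0
    if aw <= a * lam:                               # linear ramp from zero to OLS
        return sg * a * (aw - lam) / (a - 1)
    return w

# the three regimes of figure 2(b), (c) at lam = 1, a = 3
assert scad_estimator(0.5, 1.0, 3.0) == 0.0         # below the threshold: zero
assert abs(scad_estimator(1.5, 1.0, 3.0) - 0.5) < 1e-12   # l1-like shrinkage
assert scad_estimator(5.0, 1.0, 3.0) == 5.0         # OLS region: no bias
assert abs(mcp_estimator(2.0, 1.0, 3.0) - 1.5) < 1e-12
```

The unbiasedness at large |w| is exactly what the flat penalty regions of (4) and (5) buy relative to the ℓ1 estimator, which shrinks every input by λ.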

3. Replica analysis for SCAD and MCP

We assume that the signal to be reconstructed is generated according to the Bernoulli–Gaussian distribution,

Equation (13): $P({\boldsymbol{x}}^{0})={\prod }_{i=1}^{N}\left[(1-\rho )\delta ({x}_{i}^{0})+\frac{\rho }{\sqrt{2\pi {\sigma }_{x}^{2}}}\mathrm{exp}\left(-\frac{{({x}_{i}^{0})}^{2}}{2{\sigma }_{x}^{2}}\right)\right]$

where δ(x) is the Dirac delta function. Further, we consider the measurement matrix A to be random Gaussian, with each component independently and identically distributed according to the Gaussian distribution with mean 0 and variance ${N}^{-1}$. The measurement is expressed as y = A x 0, and the minimization of J( x ; λ, a) is implemented under the constraint  y = A x . For mathematical tractability, we express the constraint  y = A x by introducing a parameter τ as

Equation (14): ${P}_{\tau }(\boldsymbol{y}\vert \boldsymbol{x})={(2\pi \tau )}^{-M/2}\enspace \mathrm{exp}\left(-\frac{1}{2\tau }{\Vert}\boldsymbol{y}-A\boldsymbol{x}{{\Vert}}_{2}^{2}\right)$

where the probability concentrates on y = A x in the limit τ → 0. The posterior distribution corresponding to the problem (3) is given by

Equation (15)

where β is a parameter that yields the uniform distribution over the minimizers of (3) in the limit β → ∞, and

Equation (16)

is the normalization constant. The minimizer of (3) is given by $\hat{\boldsymbol{x}}=\langle \boldsymbol{x}\rangle $, where ⟨⋅⟩ denotes the expectation with respect to x according to the posterior distribution (15).
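The setting above is straightforward to instantiate numerically; a minimal sketch for drawing x 0 from (13) and forming the noiseless measurement y = A x 0 (function and variable names are ours):

```python
import numpy as np

def generate_instance(N, alpha, rho, sigma_x=1.0, seed=0):
    """Draw x0 from the Bernoulli-Gaussian prior (13) and A with i.i.d. N(0, 1/N) entries."""
    rng = np.random.default_rng(seed)
    M = int(alpha * N)
    support = rng.random(N) < rho                        # nonzero with probability rho
    x0 = np.where(support, rng.normal(0.0, sigma_x, N), 0.0)
    A = rng.normal(0.0, 1.0 / np.sqrt(N), size=(M, N))   # variance 1/N, as in the text
    y = A @ x0                                           # noiseless measurement
    return A, x0, y

A, x0, y = generate_instance(N=400, alpha=0.5, rho=0.25)
assert A.shape == (200, 400) and y.shape == (200,)
```

Such instances are what the AMP experiments of section 5 operate on.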

The performance of the reconstruction (3) depends on the randomness of A and x 0. Here, we examine the typical performance of SCAD and MCP minimization in the limit N → ∞ and M → ∞ keeping M/N = α ∼ O(1), where α is the compression ratio. The free energy density, defined by

Equation (17): $f=-\underset{N\to \infty }{\mathrm{lim}}\frac{1}{N\beta }{E}_{{\boldsymbol{x}}^{0},A}[\mathrm{ln}\enspace {Z}_{\beta }]$

is the key quantity in this discussion, where ${E}_{{\boldsymbol{x}}^{0},A}[\dots ]$ denotes the expectation with respect to A and x 0, introduced for the discussion of the typical property. Here, we proceed with the calculation for general τ and β, and take the limits after the derivation of the general form. The free energy density is calculated using the following identity:

Equation (18): ${E}_{{\boldsymbol{x}}^{0},A}[\mathrm{ln}\enspace {Z}_{\beta }]=\underset{n\to 0}{\mathrm{lim}}\frac{1}{n}\mathrm{ln}\enspace {E}_{{\boldsymbol{x}}^{0},A}[{Z}_{\beta }^{n}]$

Assuming that n is a positive integer, we can express the expectation of ${Z}_{\beta }^{n}$ in (18) using n-replicated systems

Equation (19)

The details of the calculation are shown in [13, 16, 19]; here we briefly summarize it. The free energy density under the RS assumption is given by

Equation (20)

where ${\mathrm{extr}}_{{\Omega},\tilde{{\Omega}}}$ represents the extremization with respect to the quantities Ω = {Q, χ, m} and $\tilde{{\Omega}}=\left\{\tilde{Q},\tilde{\chi },\tilde{m}\right\}$. The function $\xi (\tilde{Q},\sigma )$ depends on the regularization as

Equation (21)

Equation (22)

where $\int Dz={\int }_{-\infty }^{\infty }\mathrm{d}z\enspace \mathrm{exp}(-{z}^{2}/2)/\sqrt{2\pi }$. Equation (22) is equivalent to the one-dimensional problem (6). Here, $\overline{\dots }$ denotes the average over σ according to the distribution

Equation (23)

with ${\sigma }_{-}=\sqrt{\tilde{\chi }}$ and ${\sigma }_{+}=\sqrt{\tilde{\chi }+{\tilde{m}}^{2}{\sigma }_{x}^{2}}$. The random fields σ z and σ+ z effectively represent the randomness induced by A and x 0, in particular for the zero-signals and non-zero-signals, respectively. We denote the solution of x in the effective single-body problem (22) as ${x}^{{\ast}}({\tilde{Q}}^{-1},\sigma z)$, which depends on the regularization, and we consider the specific forms of ${x}^{{\ast}}({\tilde{Q}}^{-1},\sigma z)$ and $L(\tilde{Q},\sigma z)$ later. The saddle point equations are given by

Equation (24)

Equation (25)

Equation (26)

Equation (27)

Equation (28)

Equation (29)

At the saddle point, ${x}^{{\ast}}({\tilde{Q}}^{-1},\sigma z)$ is statistically equivalent to the point estimate $\hat{\boldsymbol{x}}$, and χ, Q and m are related to the physical quantities as

Equation (30)

Equation (31)

Equation (32)

Hence, the expectation value of the mean squared error (MSE) between the reconstructed signal and the original signal is represented as

Equation (33): $\frac{1}{N}{E}_{{\boldsymbol{x}}^{0},A}\left[{\Vert}\hat{\boldsymbol{x}}-{\boldsymbol{x}}^{0}{{\Vert}}_{2}^{2}\right]=Q-2m+\rho {\sigma }_{x}^{2}$

The saddle point equations for the variables Ω = {Q, χ, m} directly depend on the functional form of the regularization, but the equations for $\tilde{{\Omega}}$ do not. In the following subsections, we show the form of the saddle point equations of Ω for SCAD and MCP.

The RS solution is stable against the symmetry breaking perturbation when [13, 16]

Equation (34)

which is known as the de Almeida–Thouless (AT) condition [20]. The form of this condition also depends on the functional form of the regularization.

3.1. SCAD

As mentioned in section 2.1, $\tilde{Q}{ >}{(a-1)}^{-1}$ should hold to define the minimizer of (22). In the following, the subspace of the macroscopic parameters where the minimizer of (22) can be defined is denoted by ${{\Omega}}^{{\dagger}}(a)\equiv \left\{Q,\chi ,m\vert \tilde{Q}{ >}{(a-1)}^{-1}\right\}$ for each a, and we restrict our discussion to Ω†(a). Within Ω†(a), the minimizer of (22) under SCAD is given by [18]

Equation (35)

and substituting solution (35) into (22), we obtain

Equation (36)

where ${\theta }_{1}(\sigma )=\lambda /(\sqrt{2}\sigma )$, ${\theta }_{2}(\sigma )=\lambda (1+\tilde{Q})/(\sqrt{2}\sigma )$, and ${\theta }_{3}(\sigma )=a\lambda \tilde{Q}/(\sqrt{2}\sigma )$. Equation (21) for the SCAD regularization is derived as

Equation (37)

where

Equation (38)

Equation (39)

Equation (40)

Equation (41)

The regularization-dependent saddle point equations are given by

Equation (42)

Equation (43)

Equation (44)

where $\hat{\rho }$ is the density of nonzero components in the estimate, given by

Equation (45)

From (34), the AT condition is derived as

Equation (46)

3.2. MCP

As with SCAD, we concentrate our discussion on the subspace of the macroscopic parameters ${{\Omega}}^{{\dagger}}(a)=\left\{Q,\chi ,m\vert \tilde{Q}{ >}{a}^{-1}\right\}$, where the solution of (22) can be defined. The solution of the single-body problem under MCP in this subspace is given by [18]

Equation (47)

and we obtain

Equation (48)

where ${\theta }_{1}(\sigma )=\lambda /(\sqrt{2}\sigma )$ and ${\theta }_{2}(\sigma )=a\lambda \tilde{Q}/(\sqrt{2}\sigma )$, and (21) for MCP is derived as

Equation (49)

where

Equation (50)

Equation (51)

Equation (52)

The saddle point equations for Ω are given by

Equation (53)

Equation (54)

Equation (55)

where $\hat{\rho }$ is the density of nonzero components in the estimate, given by

Equation (56)

The AT condition for MCP is derived as

Equation (57)

4. Stability of success solution

One of the solutions of the saddle point equations in Ω†(a) is characterized by $Q=m=\rho {\sigma }_{x}^{2}$. Following the correspondence between the order parameters and the MSE (33), this solution indicates the perfect reconstruction of the original signal x 0. Hence, we call the solution with $Q=m=\rho {\sigma }_{x}^{2}$ the success solution. The saddle point equations can have solutions other than the success solution; however, these solutions do not satisfy the AT condition as far as we have observed. Substituting the relationship $Q=m=\rho {\sigma }_{x}^{2}$, we immediately obtain χ = 0 and $\tilde{Q}=\tilde{m}=\infty $, and the only variable to be solved for is $\tilde{\chi }$. The expansion of Q and m up to order $O({\tilde{Q}}^{-2})$ gives the expression of $\tilde{\chi }$ for the success solution under SCAD

Equation (58)

and under MCP

Equation (59)

where ${\theta }_{-}=\lambda /\sqrt{2\tilde{\chi }}$ and ${\theta }_{+}=\lambda /\sqrt{2{\sigma }_{x}^{2}}$. Equations (58) and (59) reduce to the saddle point equation of $\tilde{\chi }$ corresponding to the success solution for ℓ1 regularization by setting λ = 1 and taking the limit a → ∞ [16].

For both penalties, the success solution is a locally stable solution as a saddle point of the RS free energy when

Equation (60)

This condition is derived by the linear stability analysis of χ around 0. Further, we can show that the AT condition for the success solution is equivalent to (60). This means that when the success solution is locally stable as an RS saddle point, it is also stable with respect to the replica symmetry breaking perturbation. Therefore, the reconstruction limit αc (ρ) is defined as the minimum value of α that satisfies (60) for each ρ. We also define ρc (α) as the maximum value of ρ to satisfy (60) at α, and we use both αc (ρ) and ρc (α) for convenience.

Figures 3 and 4 show the ρ-dependence of αc (ρ) for SCAD minimization and MCP minimization, respectively, where (a) corresponds to a = 3 and (b) to λ = 0.01. The typical reconstruction is possible in the parameter region α ⩾ αc (ρ). The limit for ℓ1 minimization (L1), the algorithmic limit of the Bayes-optimal method given by the spinodal transition (BO(S)), and the phase transition boundary of the Bayes-optimal method (BO(P)), over which the success solution is locally stable, are shown for comparison. As λ and a decrease, αc (ρ) of SCAD and MCP becomes less than the algorithmic limit of the Bayes-optimal reconstruction method [17]. Further, the reconstruction limit αc (ρ) approaches ρ as λ → 0. Mathematically, αc (ρ) → ρ is obtained from the scaling of θ± and $\tilde{\chi }\to 0$ as λ → 0, which reduces (60) to ρ < α. This inequality, ρ < α, is considered to be the fundamental limit because, in general sparse estimation methods, the estimation of the support increases the effective degrees of freedom of the estimated variables; hence, we need more measurements than the number of variables to be estimated. This indicates that SCAD and MCP with λ → 0 achieve the typical reconstruction when the number of measurements and the number of nonzero variables are balanced.


Figure 3. Reconstruction limit of SCAD at (a) a = 3 for λ = 0.01 and λ = 0.1 and (b) λ = 0.01 for a = 3 and a = 50. The lines labelled 'L1', 'BO(S)' and 'BO(P)' are the reconstruction limit under ℓ1 minimization, the algorithmic limit of the Bayes-optimal method given by the spinodal transition, and the phase transition point of the Bayes-optimal method, respectively. The shaded regions are α < ρ.


Figure 4. Reconstruction limit of MCP at (a) a = 3 for λ = 0.01 and λ = 0.1 and (b) λ = 0.01 for a = 3 and a = 50. The lines labelled 'L1', 'BO(S)' and 'BO(P)' are the same as in figure 3. The shaded regions are α < ρ.


We denote by ac (λ) the maximum value of a under which the signals can be reconstructed for each λ. The reconstruction limit ac (λ) on the λ–a plane is shown in figure 5 for (a) SCAD and (b) MCP at α = 0.5 for ρ = 0.3 and ρ = 0.4. The horizontal dashed lines represent amin = 1, the minimum value of a for the success solution, and the signals can be reconstructed in the parameter region amin < a ⩽ ac (λ). The dashed vertical lines represent λc , defined as the maximum value of λ that gives ac (λ) > amin. Hence, the signal cannot be reconstructed at λ ⩾ λc . For the reconstruction of dense signals, small nonconvexity parameters λ and a are required, and ac (λ) and λc for MCP are always greater than those for SCAD. The dependence of λc on ρ/α for SCAD and MCP is compared in figure 6 for (a) α = 0.3 and (b) α = 0.5. The vertical lines represent the reconstruction limit of ℓ1 minimization, and λc diverges as ρ/α approaches this limit. The divergence of λc means that one can reconstruct the signals using any λ ∈ (0, ∞) and a ∈ (amin, ∞) when the signals are sufficiently sparse to be reconstructed by ℓ1 minimization. For any system parameters, the divergence of λc in MCP is faster than that in SCAD, which indicates that the range of nonconvexity parameters allowing reconstruction is wider for MCP than for SCAD. In this sense, MCP is superior to SCAD.


Figure 5. Reconstruction limit ac (λ) at α = 0.5 for (a) SCAD and (b) MCP. The vertical dashed lines represent the maximum value of λ for which the reconstruction is possible with amin < a ⩽ ac (λ). The horizontal dashed lines represent a = 1, which is the minimum value of a for the success solution.


Figure 6.  ρ/α-dependence of λc for (a) α = 0.3 and (b) α = 0.5. In the parameter region to the left of the vertical lines, ℓ1 minimization reconstructs the original signals.


4.1. Comment on the RS-failure solution

We comment on the existence of a solution of the RS saddle point equations at α < αc for subsequent discussions. One can find a solution with ɛ > 0 ($Q{< }\rho {\sigma }_{x}^{2}$ and $m{< }\rho {\sigma }_{x}^{2}$) and χ > 0 within Ω†(a) at α < αc , which violates the AT condition. We term this solution the RS-unstable failure solution. Figures 7 and 8 show the α-dependence of ɛ and χ for SCAD and MCP, respectively, where the vertical dashed lines represent αc . The RS-unstable failure solution is smoothly connected to the success solution that appears at α ⩾ αc , and does not coexist with the success solution. The RS-unstable failure solution does not contribute to the equilibrium property of the system, but it is still useful for understanding the behavior of the algorithm, as shown in the following sections.


Figure 7.  α-dependence of ɛ and χ in RS solution of SCAD at ρ = 0.35 for (a) λ = 1, a = 10 and (b) λ = 0.3, a = 5. The vertical dotted lines indicate αc , and the vertical dashed lines in (b) indicate the disappearance of the finite ɛ and χ.


Figure 8.  α-dependence of ɛ and χ in RS solution of MCP at ρ = 0.4 for (a) λ = 1, a = 10 and (b) λ = 0.5, a = 5. The vertical dotted lines indicate αc , and the vertical dashed lines in (b) indicate the disappearance of the finite ɛ and χ.


4.2. Comment on the diverging 'solution'

As shown in figures 7(b) and 8(b), when λ is sufficiently small, the values of ɛ and χ tend to diverge at sufficiently small α, and the solution with finite ɛ and χ disappears. In fact, (43) indicates that the solution with χ → ∞ is stable for both SCAD and MCP when

Equation (61)

holds, where ${\theta }_{\infty }^{+}=a\lambda \sqrt{\alpha /\left\{2(\alpha +\varepsilon )\right\}}$ and ${\theta }_{\infty }^{-}=a\lambda \sqrt{\alpha /(2\varepsilon )}$, although this solution is outside the physical region Ω†(a). Taking the limit χ → ∞, the set of saddle point equations for SCAD and MCP reduces to the same single equation for the MSE ɛ:

Equation (62)

The solutions of (62) can be finite or infinite depending on α, λ, ρ and a, and the infinite-ɛ solution is always stable if it exists when α < 1. The diverging solutions do not contribute to the thermodynamic behavior because they are not in the region Ω†(a), but they affect the algorithmic behavior of SCAD and MCP minimization.

5. Approximate message passing with nonconvexity control

For numerical computation of the estimate under a given measurement matrix, AMP is a feasible algorithm with low computational cost. As discussed below, the typical trajectory and fixed point of AMP can be connected to the analysis based on the replica method; hence, we can understand the algorithmic behavior by comparison with the analysis. The detailed derivation has been given in previous studies [13, 17, 21], and here we briefly introduce the algorithm. In AMP, the estimate under a general separable sparse penalty is recursively updated as

Equation (63)

where ${\hat{x}}_{i}^{(t)}$ denotes the estimate at time step t, and

Equation (64)

Equation (65)

Equation (66)

Equation (67)

Equation (68)

The solution of (63) corresponds to the minimizer of (22), with $\tilde{Q}$ and σz replaced by ${\tilde{Q}}_{i}^{(t)}$ and ${h}_{i}^{(t)}$, respectively.
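Since equations (63)-(68) are not reproduced here, the sketch below shows the generic structure of such an iteration: it is the standard Donoho-Maleki-Montanari AMP with a soft-thresholding (ℓ1) denoiser, an Onsager-corrected residual, and an adaptive threshold, not the paper's exact SCAD/MCP update. For convenience it normalizes A to unit-norm columns (entry variance 1/M) instead of the text's 1/N convention, which only rescales the iteration:

```python
import numpy as np

def soft(u, t):
    """Soft thresholding: the scalar l1 estimator of figure 2(a)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_l1(A, y, kappa=1.5, T=300):
    """Schematic DMM-style AMP. Assumes columns of A have approximately unit norm;
    the threshold is kappa times the estimated effective noise level ||z||/sqrt(M)."""
    M, N = A.shape
    delta = M / N
    x_hat = np.zeros(N)
    z = y.copy()
    onsager = 0.0
    for _ in range(T):
        z = y - A @ x_hat + (onsager / delta) * z   # residual with Onsager correction
        theta = kappa * np.linalg.norm(z) / np.sqrt(M)
        h = x_hat + A.T @ z                          # effective scalar channel
        x_hat = soft(h, theta)
        onsager = np.mean(np.abs(h) > theta)         # <eta'> = fraction of active entries
    return x_hat

# demo well inside the l1-reconstructable region (alpha = 0.5, rho = 0.05)
rng = np.random.default_rng(1)
N, M, rho = 500, 250, 0.05
x0 = np.where(rng.random(N) < rho, rng.normal(size=N), 0.0)
A = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))   # unit-norm columns on average
x_hat = amp_l1(A, A @ x0)
assert np.mean((x_hat - x0) ** 2) < 1e-3
```

Replacing `soft` by the SCAD or MCP scalar estimator, with the per-component scales of (63)-(68), gives the nonconvex variants studied in this section.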

The local stability of the AMP fixed point corresponds to the AT condition [13]. Hence, it is expected that AMP reconstructs the original signal in the theoretically derived parameter region α > αc (ρ) at sufficiently large N, because the current problem does not exhibit any first-order transitions or spinodal transitions under fixed nonconvexity parameters, in contrast to the Bayes-optimal setting [17] or the Monte Carlo sampling case [22]. Figure 9 shows examples of reconstructed signals of N = 400 and α = 0.5 after 1000 update steps of AMP under SCAD (a) at λ = 1 and a = 3 for ρ = 0.25 and (b) at λ = 0.1 and a = 3 for ρ = 0.3, where the original and reconstructed signals are represented by solid lines and circles, respectively. In these parameter regions, perfect reconstruction is theoretically supported, but as shown in figure 9(b), the naive update of AMP does not achieve perfect reconstruction for small values of the nonconvexity parameters. The discrepancy between the replica analysis and AMP appears, in particular, when the signal is dense. The tendency is common to both SCAD and MCP; hence, we explain the characteristic behavior using SCAD as a representative.


Figure 9. True signal ${x}_{i}^{0}$ (solid line) and reconstructed signal ${\hat{x}}_{i}$ (circles) at N = 100 and M = 50 for (a) ρ = 0.25 with SCAD at λ = 1 and a = 3, and (b) ρ = 0.28 with SCAD at λ = 0.1 and a = 3.


As mentioned above, no other stable solutions exist when the success solution is locally stable; hence, the discrepancy between the replica analysis and AMP cannot be explained by a spinodal transition, as in the Bayes-optimal method. To understand the difficulty of achieving the perfect reconstruction by AMP in the small nonconvexity parameter region, we use SE [23]. The typical trajectory of AMP is characterized by two parameters, ${V}^{(t)}\equiv {E}_{{\boldsymbol{x}}^{0},A}[{\hat{V}}^{(t)}]$ and ${\varepsilon }^{(t)}\equiv {E}_{{\boldsymbol{x}}^{0},A}[{\hat{\varepsilon }}^{(t)}]$, where ${\hat{\varepsilon }}^{(t)}\equiv {\sum }_{i=1}^{N}{({\hat{x}}_{i}^{(t)}-{x}_{i}^{0})}^{2}/N$ is the MSE at the tth iteration step. In particular, when the components of A are independently and identically distributed with mean 0 and variance 1/N, as for the Gaussian measurement matrix, the time evolution of V(t) and ɛ(t) is described by the SE equations [13, 17]

Equation (69)

Equation (70)

where $\hat{x}(s,w)={{\Sigma}}_{\text{p}}(s,w){\mathcal{M}}_{\text{p}}(s,w)$ for p ∈ {SCAD, MCP}. SE is equivalent to the RS saddle point equation, and the fixed point denoted by V* and ɛ* corresponds to the RS saddle point as V* = χ and ${\varepsilon }^{{\ast}}=Q-2m+\rho {\sigma }_{x}^{2}$, respectively. Hence, the success solution is described as V* = ɛ* = 0 in the SE. As mentioned in section 4.1, the failure solution appears in some parameter regions, but it always involves the RS instability and never coexists with the success solution. Note that the flow of the SE describes the typical trajectory of the AMP with respect to A and x 0. Hence, it does not necessarily describe a trajectory under a fixed realization of A and x 0. However, it is expected that the trajectories converge to the flow of SE for a sufficiently large system size. Hence, SE flow supports an understanding of a trajectory of AMP under a fixed set of A and x 0 [24].
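Equations (69) and (70) are likewise not reproduced here; as an illustration of how such an SE recursion is tracked numerically, the following Monte Carlo sketch implements the standard Bayati-Montanari SE for soft-thresholding AMP under the Bernoulli-Gaussian prior (with σx = 1), using the effective scalar channel X + τZ:

```python
import numpy as np

def se_mse(rho, delta, kappa, T=50, n_mc=200_000, seed=0):
    """Monte Carlo state evolution for soft-thresholding AMP (Bayati-Montanari form):
    the AMP input is modeled as X + tau*Z with X Bernoulli-Gaussian and Z ~ N(0,1),
    and the recursion is tau_{t+1}^2 = E[(eta(X + tau_t*Z; kappa*tau_t) - X)^2]/delta."""
    rng = np.random.default_rng(seed)
    x = np.where(rng.random(n_mc) < rho, rng.normal(size=n_mc), 0.0)
    z = rng.normal(size=n_mc)
    tau2 = np.mean(x**2) / delta             # tau^2 for the initialization x_hat = 0
    for _ in range(T):
        h = x + np.sqrt(tau2) * z            # effective scalar channel
        theta = kappa * np.sqrt(tau2)
        x_hat = np.sign(h) * np.maximum(np.abs(h) - theta, 0.0)
        tau2 = np.mean((x_hat - x)**2) / delta
    return delta * tau2                      # MSE epsilon = delta * tau^2

# inside the l1 phase (alpha = 0.5, rho = 0.05) the SE flow reaches epsilon = 0
assert se_mse(rho=0.05, delta=0.5, kappa=1.5) < 1e-8
```

Substituting the SCAD or MCP scalar estimator for the soft-thresholding step, with the (V, ɛ)-dependent scales of (69) and (70), would produce the SE flows discussed below.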

Figure 10 shows the flow of SE at α = 0.5 and ρ = 0.28 for SCAD at (a) λ = 1 and a = 3 and (b) λ = 0.1 and a = 3. The arrows assigned to the coordinate $(\hat{V},\hat{\varepsilon })$ are the normalized vectors of $({V}^{(t+1)}-{V}^{(t)},{\varepsilon }^{(t+1)}-{\varepsilon }^{(t)})$ with ${V}^{(t)}=\hat{V}$ and ${\varepsilon }^{(t)}=\hat{\varepsilon }$, which indicate the direction of the SE flow, and the stars depict the fixed points of SE. As shown in figure 10(a), SE has a fixed point with finite ɛ and V for large λ, which corresponds to the RS-unstable failure solution; as λ decreases, most of the SE flow leads to a divergence of V and ɛ, as shown in figure 10(b). However, there is still a region close to the V-axis where the flows are directed to V = ɛ = 0. This region shrinks toward the V-axis as the nonconvexity parameters decrease. The SE flows for MCP are almost the same as those for SCAD shown in figure 10. In the case of ℓ1 minimization, one can check that SE reaches the success solution from any initial condition of (V, ɛ); namely, the volume of the BOA diverges to infinity at α > αc (ρ) or ρ < ρc (α). Therefore, the shrinking basin and the diverging flow are distinctive properties of the minimization problems of SCAD and MCP.


Figure 10. Flow of SE at α = 0.5 and ρ = 0.28 for SCAD (a) λ = 1 and a = 3 and (b) λ = 0.1 and a = 3. The fixed points are depicted by stars.


We quantify the volume of the BOA under SCAD minimization and show its ρ-dependence for α = 0.5 in figure 11(a). The volume of the BOA to V = ɛ = 0 is zero at ρ = ρc and gradually increases from zero as ρ decreases from ρc . When the nonconvexity parameter λ is small, the basin volume tends to be small over the whole range of ρ. Figure 11(b) shows the maximum value of ɛ, denoted by ɛmax, for which an initial condition converges to V = ɛ = 0; namely, ɛmax is the maximum value of ɛ on the boundary of the BOA. This means that, to achieve perfect reconstruction at sufficiently large ρ with a small nonconvexity parameter, we need to set the initial condition such that ɛ ≲ O(10^−2) for λ = 0.1 and a = 3, and ɛ ≲ O(10^−5) for λ = 0.01 and a = 3. Such initial conditions with small ɛ are not realistic, and the probability that AMP attains the success solution V = ɛ = 0 from a randomly chosen initial condition is exceedingly small.
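The basin volume reported in figure 11(a) can be estimated by iterating SE from a grid of initial conditions and counting those attracted to the origin. A generic sketch of this measurement (se_update is again a toy placeholder with a basin around the origin and divergence outside; the paper's SE equations would be substituted in practice):

```python
import numpy as np

def se_update(V, eps):
    # Toy placeholder map: contracts toward the origin when V*eps < 0.1
    # and diverges otherwise, mimicking the qualitative SE behavior.
    g = 0.8 + 2.0 * V * eps
    return g * V, g * eps

def basin_volume(update, V_max=1.0, eps_max=1.0, n=60, t_max=300,
                 tol=1e-8, cap=1e6):
    """Fraction of the [0, V_max] x [0, eps_max] box attracted to the
    origin, multiplied by the box area: a grid estimate of the BOA volume."""
    hits = 0
    for V0 in np.linspace(0.0, V_max, n):
        for e0 in np.linspace(0.0, eps_max, n):
            V, e = V0, e0
            for _ in range(t_max):
                V, e = update(V, e)
                if V + e < tol:   # reached the success fixed point
                    hits += 1
                    break
                if V + e > cap:   # diverging trajectory
                    break
    return hits / n**2 * (V_max * eps_max)

vol = basin_volume(se_update)
```

The quantity ɛmax of figure 11(b) can be read off from the same grid by taking the largest ɛ coordinate among the converging initial conditions.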


Figure 11. (a) Volume of the BOA to V = ɛ = 0 at α = 0.5. (b) Maximum value of ɛ, denoted by ɛmax, on the boundary of the BOA.


The shrinking BOA is the origin of the difficulty of AMP at small nonconvexity parameters. To resolve this problem, we recall that the RS-unstable failure solution appears at α < αc for large nonconvexity parameters, as discussed in figures 7 and 8. The emergence of the RS-unstable failure solution implies that the SE has a locally stable fixed point, from the correspondence between the saddle points of the RS free energy and the SE, as shown in figure 10(a). Therefore, although AMP at a large nonconvexity parameter does not converge to a fixed point, its trajectory is confined to a subshell characterized by finite ɛ and V. We utilize this non-divergence property of AMP at α < αc with sufficiently large nonconvexity parameters for the perfect reconstruction of dense signals at small nonconvexity parameters. The procedure based on this observation is termed nonconvexity control: we decrease the value of the nonconvexity parameters while updating AMP.

Here, we consider controlling the parameter λ at a fixed value of a. Figure 12 shows the λ-dependence of ɛ and V at α = 0.5 and a = 3 for ρ = 0.28 and ρ = 0.32, which explains how the proposed nonconvexity control succeeds or fails in achieving perfect reconstruction. We treat the set of macroscopic fixed points as a sequence generated by shifting the value of λ; note that the sequence mainly consists of RS-unstable failure solutions. At ρ = 0.28, the sequences of ɛ and V are connected to zero as λ decreases, hence one can potentially attain perfect reconstruction by starting from a large λ and decreasing it. At ρ = 0.32, however, ɛ and V diverge for λ ∈ (0.2117, 0.5530), as shown in figure 12. In this region, the finite RS-unstable fixed points disappear and the SE flow goes to the diverging state, although that state is not allowed as a solution. Figure 13 shows an example of the SE flow heading to the diverging state in the absence of a solution with finite ɛ and V, observed at α = 0.5, ρ = 0.32, λ = 0.4 and a = 3. The discontinuity in the sequence of the macroscopic parameters and the SE flow to the diverging state obstruct the nonconvexity control.


Figure 12.  λ-dependence of (a) ɛ and (b) V at α = 0.5 and a = 3 for ρ = 0.28 and ρ = 0.32.


Figure 13. SE flow at α = 0.5, ρ = 0.32, λ = 0.4 and a = 3. There is no fixed point and the flow diverges.


In figure 14, we compare the sequence of the macroscopic fixed points with the BOA of the origin at α = 0.5 and a = 3 for (a) ρ = 0.28 and (b) ρ = 0.32. The solid lines in figure 14 are drawn by continuously shifting λ and are equivalent to figure 12; dots on the lines represent examples of the fixed points at each λ. In figure 14(a), the shaded region denotes the BOA of V = ɛ = 0 at λ = 0.3 and a = 3, where the perfect reconstruction is theoretically supported. To attain perfect reconstruction in this parameter region, we would need an initial condition with ɛ ≲ O(10^−2), which is not realistic. However, by decreasing the nonconvexity parameter λ from a larger value, such as λ = 1, the fixed point, which corresponds to the RS-unstable failure solution, enters the BOA of the perfect reconstruction at λ = 0.3, as shown in figure 14(a). In figure 14(b), the BOA of the perfect reconstruction at λ = 0.1 and a = 3 is depicted as the shaded region for α = 0.5 and ρ = 0.32. For this larger ρ, the sequence of fixed points is discontinuous, as shown in figure 14(b), owing to the divergence of the macroscopic quantities shown in figures 12 and 13. The sequence of fixed points below λ = 0.2117 might be expected to provide a clue for attaining perfect reconstruction, but the corresponding BOA has already shrunk.


Figure 14. Sequence of macroscopic fixed points of SCAD corresponding to a continuous decrease in λ at a = 3 (solid line) and the BOA (shaded region) for a sufficiently small λ at α = 0.5. The dots denote representative fixed points at the values of λ indicated next to them. (a) Macroscopic fixed points and the BOA at λ = 0.3 for ρ = 0.28. (b) Macroscopic fixed points and the BOA at λ = 0.1 for ρ = 0.32.


Based on the above observations, we define the NCC limit ρNCC(α) as the largest value of ρ at a given α for which the sequence of fixed points reaches V = ɛ = 0 as λ decreases without encountering the discontinuity caused by the divergence of the macroscopic parameters; αNCC(ρ) is defined analogously. Figures 15(a) and (b) show the phase diagrams on the ρ–λ plane at α = 0.5 and a = 3 and on the α–λ plane at ρ = 0.32 and a = 3, respectively. The NCC limit is denoted by the horizontal dashed lines, and the stability condition of the diverging state (61) is satisfied on the left side of the dotted–dashed lines. At ρ < ρNCC(α) or α > αNCC(ρ), the sequence of fixed points is connected to V = ɛ = 0 by decreasing λ, as shown in figure 14(a). Examples of the SE flow below the NCC limit in the RS-unstable failure region and the success region are shown in figures 10(a) and (b), respectively. At ρ > ρNCC or α < αNCC, a 'no solution' region, in which SE has no fixed point in Ω(a) and diverges, appears between the 'success' region and the 'RS-unstable failure' region, and the nonconvexity control fails because of this region. An example of the SE flow in the 'no solution' region is shown in figure 13. As λ increases, the RS-unstable failure phase and the success phase are connected to each other for any ρ; this property is shared with ℓ1 minimization.


Figure 15. Phase diagram of SCAD on (a) ρλ plane at α = 0.5 and a = 3 and (b) αλ plane at ρ = 0.32 and a = 3. The horizontal dashed lines denote the NCC limit: (a) ρNCC(α = 0.5) and (b) αNCC(ρ = 0.32). On the left of the dotted–dashed line, the diverging state is stable and the SE flow tends to diverge.


Figure 16 shows the α-dependence of the NCC limit ρNCC(α) for (a) SCAD and (b) MCP. The dependence of ρNCC on a is weak: changes in a shift the ρ value by less than O(10^−3), and here the value is maximized with respect to a. For comparison, the phase-transition boundary αc (ρ) at λ = 0.1 and a = 3 is shown as the principle limit, in the sense that the stability of the success solution is guaranteed there. According to the principle limit, MCP is expected to achieve perfect reconstruction for denser signals than SCAD, but its NCC limit is inferior to that of SCAD by O(10^−2) in ρ. This observation implies that there remains room for improvement in designing nonconvex penalties that overcome the basin shrinkage while sustaining the global stability of V = ɛ = 0 in the dense region.


Figure 16. Reconstruction limit by the NCC limit and the phase transition boundary (principle limit) at λ = 0.1 and a = 3 for (a) SCAD and (b) MCP. For comparison, the algorithmic limit of the Bayes optimal method (BO, spinodal) and the principle limit of the Bayes optimal method (BO, principle) are shown.


The remaining problem in practice is the protocol of the nonconvexity control. We consider here an 'equilibrium' approach: at each λ, we spend a sufficient number of time steps for convergence to the macroscopic fixed point, which corresponds to the RS-unstable failure solution, and then decrease λ by dλ. Hence, we need to set the 'equilibration' time and dλ appropriately. However, we cannot evaluate ${\hat{\varepsilon }}^{(t)}$ along AMP's trajectory, since its calculation requires the unknown true signal x 0. Instead, we observe ${\hat{V}}^{(t)}$ and ${\hat{D}}^{(t)}\equiv \frac{1}{N}{\sum }_{i=1}^{N}{({\hat{x}}_{i}^{(t+1)}-{\hat{x}}_{i}^{(t)})}^{2}$ as convergence criteria, and we decrease λ by dλ after ${\hat{V}}^{(t)}$ and ${\hat{D}}^{(t)}$ saturate around certain values. Next, in determining dλ, the macroscopic fixed point at λ must lie in the BOA of the macroscopic fixed point at λ − dλ for the nonconvexity control to be effective. We denote by dλmax the maximum value of dλ above which this condition is violated, and choose dλ smaller than dλmax for the nonconvexity control. The value of dλmax assessed by SE at α = 0.5 and ρ = 0.28 is shown as the solid line in figure 17(a), where dλmax is obtained by observing the SE flow at λ − dλ starting from the fixed point at λ for various dλ. Note that in this parameter region the perfect reconstruction is possible at λ < 0.3; hence dλ for λ < 0.3 is trivially dλ ≤ λ.
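The 'equilibrium' schedule above can be written as an outer loop around any AMP implementation. The following sketch illustrates the control logic only: amp_step is a toy placeholder for a real AMP iteration (a real one would use the measurement matrix, the observations, and the SCAD/MCP thresholding functions), and λ is decreased by dλ once both V̂ and D̂ have saturated:

```python
import numpy as np

def amp_step(x_hat, lam, a):
    # Toy placeholder for one AMP iteration; contracts the estimate so that
    # the control loop below terminates. Returns (new estimate, V_hat).
    x_new = 0.5 * x_hat * min(lam, 1.0)
    V_hat = float(np.mean(x_new**2))
    return x_new, V_hat

def nonconvexity_control(x0, lam_init=1.0, lam_final=0.3, dlam=0.1, a=3.0,
                         tol=1e-6, t_max=1000):
    """Decrease lambda by dlam after V_hat and D_hat saturate at each lambda."""
    x_hat, lam = x0.copy(), lam_init
    while lam >= lam_final - 1e-12:
        V_prev = np.inf
        for _ in range(t_max):
            x_new, V_hat = amp_step(x_hat, lam, a)
            D_hat = float(np.mean((x_new - x_hat)**2))  # per-component change
            dV = abs(V_hat - V_prev)
            x_hat, V_prev = x_new, V_hat
            if D_hat < tol and dV < tol:                # V_hat and D_hat saturated
                break
        lam -= dlam
    return x_hat
```

The choice of tol sets what 'saturation around certain values' means in practice, and dlam must stay below the dλmax assessed from SE.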


Figure 17. (a) The value of dλmax below which the nonconvexity control is effective at α = 0.5, ρ = 0.28 and a = 3 for SCAD. (b) Trajectory on the V–ɛ plane under the nonconvexity control of SCAD. The trajectories of AMP for one realization of x 0 and A at N = 10^5 (solid line) and SE (circles) are shown. The initial condition is set to V = ɛ = ρ, with λ = 1 and a = 3. The value of λ is decreased by dλ = 0.1 after the convergence of ${\hat{V}}^{(t)}$ and ${\hat{D}}^{(t)}$ at each λ.


The value of dλmax shown in figure 17(a) is for the typical realization of A and x 0, and the feasible dλ may fluctuate depending on A and x 0. To be on the safe side, we set dλ = 0.1 for all λ when applying the nonconvexity control to AMP under a given A and x 0. Figure 17(b) shows the actual trajectory of AMP (solid line) for one realization of A and x 0 at N = 10^5, and the corresponding SE (circles), at α = 0.5 and ρ = 0.28 under the nonconvexity control. The initial condition of AMP is set to x = 0 and v = ρ 1N , where 1N is the N-dimensional vector all of whose components are 1; hence that of SE is V = ε = ρ. We start with λ = 1 at a = 3 and decrease λ by dλ = 0.1 after the convergence of ${\hat{V}}^{(t)}$ and ${\hat{D}}^{(t)}$ at each λ, until λ reaches 0.3. The behavior of AMP is well described by SE. Compared with the SE flow (figure 10(b)), the trajectory of AMP with nonconvexity control approaches the BOA connected to the success solution at λ = 0.1, which lies almost on the V-axis.

6. Summary and discussion

We have analytically derived the perfect reconstruction limit of sparse signals in compressed sensing by the minimization of the nonconvex sparse penalties SCAD and MCP. In particular, when the nonconvexity parameters are small, SCAD and MCP minimization can reconstruct dense signals beyond the ℓ1 reconstruction limit. This analytical result also appears to imply that SCAD and MCP minimization overcome the algorithmic limit of the Bayes-optimal method, but numerical experiments using AMP have shown that this is not the case. The gap between the analytical and numerical results has been understood by observing the SE flow, revealing that the failure of AMP originates from the shrinking BOA and the divergent behavior of AMP in some parameter regions. We have found that SCAD and MCP minimization exhibit a failure scenario of the algorithm different from that of the Bayes-optimal setting, where the algorithmic limit is characterized by the emergence of local minima. To mitigate this gap and determine the algorithmic limit of AMP, we have proposed the nonconvexity-control protocol, which leads to largely improved performance.

Originally, SCAD and MCP were designed to satisfy continuity and the oracle property, namely the simultaneous appearance of asymptotic normality and consistency, in a certain limit of the nonconvexity parameters [11, 12]. However, these properties are not sufficient for practical usage. Designing nonconvex penalties that do not lead to basin shrinkage is another possibility for nonconvex compressed sensing. From the relationship between the sparse prior in the Bayesian approach and the sparse penalty in the frequentist approach, SCAD and MCP can be related to a Bernoulli–Gaussian prior with large variance in Bayesian terminology [25]. A unified understanding of sparsity across the Bayesian and frequentist approaches will be helpful for designing such desirable sparse penalties. Furthermore, the application of the nonconvexity control to general matrices beyond those consisting of i.i.d. entries should be investigated for practical usage; rotationally invariant matrices are natural candidates for examining its effectiveness in this direction [26–28].

Acknowledgments

The authors would like to thank Yoshiyuki Kabashima, Satoshi Takabe, Mirai Tanaka, and Yingying Xu for their helpful discussions and comments. This work is partially supported by JSPS KAKENHI No. 19K20363 (A S), and Nos. 19H01812, 18K11463 and 17H00764 (T O), Japan Science and Technology Agency (JST) PRESTO Grant No. JPMJPR19M2 (A S), and a Grant for Basic Science Research Projects from the Sumitomo Foundation (T O).
