MEM_GE: A New Maximum Entropy Method for Image Reconstruction from Solar X-Ray Visibilities


Published 2020 May 5 © 2020. The American Astronomical Society. All rights reserved.
Citation: Paolo Massa et al. 2020 ApJ 894 46, DOI 10.3847/1538-4357/ab8637


Abstract

Maximum Entropy is an image reconstruction method conceived to image a sparsely occupied field of view and is therefore particularly effective at achieving super-resolution effects. Although widely used in image deconvolution, this method has been formulated in radio astronomy for the analysis of observations in the spatial frequency domain, and an Interactive Data Language code has been implemented for image reconstruction from solar X-ray Fourier data. However, this code relies on a non-convex formulation of the constrained optimization problem addressed by the Maximum Entropy approach, and this sometimes results in unreliable reconstructions characterized by unphysical shrinking effects. This paper introduces a new approach to Maximum Entropy based on the constrained minimization of a convex functional. In the case of observations recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), the resulting code provides the same super-resolution effects as the previous algorithm, while also working properly when that code produces unphysical reconstructions. We also present results obtained by testing the algorithm against synthetic data simulating observations of the Spectrometer/Telescope for Imaging X-rays (STIX) on board Solar Orbiter. The new code is available in the HESSI folder of the Solar SoftWare (SSW) tree.


1. Introduction

Solar SoftWare (SSW) contains a large collection of computational procedures for the reconstruction of images from the X-ray data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI; Lin et al. 2002) between 2002 February and 2018 August (see Dennis & Tolbert 2019 for a recent evaluation of the performance of the different available methods). Some of these methods apply directly to RHESSI counts while others have been conceived to process RHESSI visibilities, i.e., calibrated samples of the Fourier transform of the incoming photon flux, generated via a data stacking process. Among count-based methods, SSW includes Back Projection (Hurford et al. 2002), Clean (Högbom 1974), Forward Fit (Aschwanden et al. 2002), Pixon (Metcalf et al. 1996), and Expectation Maximization (Benvenuto et al. 2013). Among visibility-based methods, SSW includes MEM_NJIT (Bong et al. 2006; Schmahl et al. 2007), a Maximum Entropy method; VIS_FWDFIT (Schmahl et al. 2007), which selects pre-defined source shapes based on their best fitting of visibilities; uv_smooth (Massone et al. 2009), an interpolation/extrapolation method in the Fourier domain; VIS_CS (Felix et al. 2017), a catalog-based compressed sensing algorithm; and VIS_WV (Duval-Poo et al. 2018), a wavelet-based compressed sensing algorithm. Although each one of these algorithms combines specific strengths with applicability limitations and specific flaws, a critical comparison of the maps of a given flaring event obtained by the application of all (or most) of these algorithms provides a good picture of what a reliable image of the event could be.

In particular, MEM_NJIT provides reconstructions characterized by a notable degree of reliability. The capability of fitting the experimental observations is generally satisfactory both in terms of comparison between the predicted and experimental visibility profiles with respect to each RHESSI detector and in terms of the reduced χ2 values computed considering either all detectors or just the detectors providing the observations. Furthermore, the algorithm is robust with respect to the level of noise affecting the observations. Moreover, the computational time is among the smallest in the SSW scenario and allows reconstruction within a few seconds. Finally, MEM_NJIT is characterized by super-resolution properties (Cornwell & Evans 1985).

However, MEM_NJIT sometimes produces images with multiple unrealistically small sources. The origin of this problem is not totally clear but is related to the minimization technique and the setting of the convergence criteria. MEM_NJIT addresses a constrained maximization of the entropy function, which turns into an optimization problem with two penalty terms: the chi-squared function relating the measured and predicted visibilities and a term that ensures the conservation of the overall flux. The optimization of these terms is computationally difficult, especially since the optimization functional is not convex, which implies that the numerical schemes may suffer convergence issues. Therefore, the optimization procedure may continue on past the physically meaningful solution to find a solution involving multiple bright points (see Figure 1, where the reconstructed images have dimension 101 × 101 pixels, the pixel size is 1.5 arcsec, and a zoom is applied for the sake of clarity). Of course, this outcome can generally be avoided by increasing a tolerance parameter from its default value to data-dependent higher values, but this typically results in a worse fitting of the observations and a less accurate estimate of the emission flux and, in general, impedes the use of MEM_NJIT in an automatic pipeline for data processing.

Figure 1.

Figure 1. Three examples of MEM_NJIT misbehavior. First row of panels: the 2002 February 20 event; time interval: 11:06:05–11:07:42; energy range: 25–50 keV; detectors: 3 through 9. Second row of panels: the 2002 May 1 event; time interval: 19:21:29–19:22:29; energy range: 3–6 keV; detectors: 3 through 9. Third row of panels: the 2007 December 13 event; time interval: 22:11:33–22:12:56; energy range: 6–12 keV; detectors: 3 through 9. Left column panels: MEM_NJIT reconstructions. Right column panels: comparison between predicted and measured visibilities.


The present paper illustrates a new algorithm for the constrained maximization of the image entropy, which, unlike MEM_NJIT, relies on the optimization of a convex functional. Indeed, this approach aims at the minimization of χ2 under the constraint of maximum entropy, which leads to the formulation of a convex functional characterized by a single penalty term and, therefore, a single regularization parameter that is fixed a priori. The minimization of this functional is performed iteratively and in an alternating fashion (Combettes & Pesquet 2011): at each iteration, a gradient step minimizes the χ2 functional, and then a proximal step minimizes the negative entropy and projects the result of the first step onto the hyper-surface of vectors with positive components and constant flux.

The method is implemented in SSW and can be reached via the HESSI GUI with the name MEM_GE. We point out that the Interactive Data Language (IDL) code is implemented in a way that is relatively independent of the instrument providing the experimental visibilities and the related uncertainties. In particular, when using RHESSI visibilities, MEM_GE images are very similar to MEM_NJIT images when this latter approach works but MEM_GE also provides meaningful images for those cases where the MEM_NJIT images, made with the standard value of the tolerance parameter, are unphysical.

The plan of the paper is as follows. Section 2 provides some details about the formulation of MEM_GE and the optimization algorithm. Section 3 illustrates the results of its application to both experimental visibilities recorded by RHESSI and synthetic visibilities generated within the simulation software of the Spectrometer/Telescope for Imaging X-rays (STIX) on board Solar Orbiter. Comments on these results are contained in Section 4. Our conclusions are offered in Section 5.

2. The Optimization Problem

Both RHESSI and STIX provide observations, named visibilities, which are calibrated samples of the Fourier transform of the incoming photon flux at specific sampling points ${{\boldsymbol{\xi }}}_{1},\ldots ,{{\boldsymbol{\xi }}}_{{N}_{v}}$ in the spatial frequency plane (u, v) (for RHESSI, the number Nv of recorded visibilities is variable and depends on the observation; for STIX, it is fixed at 30). Therefore, image reconstruction for RHESSI and STIX involves the solution of the inverse Fourier transform problem with limited data

${\boldsymbol{v}}={\boldsymbol{F}}{\boldsymbol{x}},$     (1)

where ${\boldsymbol{v}}\in {{\mathbb{C}}}^{{N}_{v}}$ is the vector containing the observed visibilities; the photon flux N × N image to reconstruct is transformed into the M-dimensional vector ${\boldsymbol{x}}\in {{\mathbb{R}}}^{M}$ with M = N2, by means of the standard column-wise concatenation procedure; ${\boldsymbol{F}}\in {{\mathbb{C}}}^{{N}_{v}\times M}$ is the matrix computing the Discrete Fourier Transform of x at the spatial frequencies ${{\boldsymbol{\xi }}}_{1},\ldots ,{{\boldsymbol{\xi }}}_{{N}_{v}}$ sampled by either RHESSI or STIX.
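As an illustration of this forward model, the following Python sketch builds a discrete Fourier matrix for a set of sampling frequencies and applies it to a toy flux map. It is only a schematic rendering of Equation (1), not the SSW IDL code: the grid size, pixel size, frequency values, and phase-sign convention used here are placeholder assumptions.

    import numpy as np

    def forward_matrix(uv, x_coords, y_coords):
        """Return the N_v x M matrix F that evaluates the Discrete Fourier Transform
        of the column-concatenated image at the sampled (u, v) points.
        The sign convention of the phase may differ from the instrument software."""
        u = uv[:, 0][:, None]
        v = uv[:, 1][:, None]
        phase = 2.0 * np.pi * (u * x_coords[None, :] + v * y_coords[None, :])
        return np.exp(1j * phase)

    # Hypothetical example: a 64 x 64 map with 1.5 arcsec pixels and 30 sampling points
    N, pix = 64, 1.5
    axis = (np.arange(N) - N // 2) * pix
    X, Y = np.meshgrid(axis, axis)
    x_coords, y_coords = X.ravel(), Y.ravel()                      # flattened pixel coordinates (arcsec)
    uv = np.random.default_rng(0).uniform(-0.05, 0.05, (30, 2))    # placeholder frequencies (arcsec^-1)

    F = forward_matrix(uv, x_coords, y_coords)
    x = np.zeros(N * N)
    x[N * N // 2] = 1.0                                            # a single bright pixel as a toy map
    vis = F @ x                                                    # predicted visibilities, v = F x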

The mathematical basis of MEM_NJIT is the constrained maximization of the entropy

$H({\boldsymbol{x}})=-\displaystyle \sum _{j=1}^{M}{x}_{j}\,\mathrm{ln}\left(\frac{{x}_{j}}{{me}}\right),$     (2)

where xj is the signal content of pixel j, m = F'/M, F' is the total flux in the image, and e is Euler's number (e ≈ 2.71828). Three constraints are considered in the algorithm; one is

Equation (3)

with

Equation (4)

where σ is the vector of the experimental uncertainties. A second constraint involves the flux and is given by

Equation (5)

with

Equation (6)

Finally, a third constraint requires that all components of x must be nonnegative. The algorithm implemented in the MEM_NJIT IDL code addresses the constrained maximum problem

Equation (7)

where α and β are the Lagrange multipliers associated with constraints (3) and (5); these two parameters are not estimated a priori but are updated during the maximization process. The first main drawback of this approach is that the maximization problem (7) involves a functional that is not convex, and therefore, numerical schemes may lead to unstable solutions. Further, the functional is characterized by two Lagrange multipliers, whose updating process is sometimes nonoptimal. When one of these conditions occurs, MEM_NJIT produces unphysical reconstructions like the ones shown in Figure 1.
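For reference, the two quantities entering both formulations can be written compactly as follows. This is a schematic Python sketch based on the definitions above, not the IDL codes; in particular, the overall normalization of the χ2 term used by the SSW implementations is not reproduced here.

    import numpy as np

    def entropy(x, total_flux):
        """H(x) = -sum_j x_j ln(x_j / (m e)), with m = total_flux / (number of pixels)."""
        m = total_flux / x.size
        x_pos = np.clip(x, 1e-12, None)        # x ln x -> 0 as x -> 0; clip only to avoid log(0)
        return -np.sum(x_pos * np.log(x_pos / (m * np.e)))

    def chi2(x, F, vis, sigma):
        """Weighted squared residual between predicted (F x) and measured visibilities;
        any overall normalization factor is omitted in this sketch."""
        res = (F @ x - vis) / sigma
        return np.sum(np.abs(res) ** 2)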

MEM_GE addresses these two issues by providing a different formulation of the maximum entropy optimization problem. The first idea is to replace the maximization problem (7) with the minimization problem

$\mathop{\min }\limits_{{\boldsymbol{x}}\geqslant 0}\ \left[{\chi }^{2}({\boldsymbol{x}})-\lambda H({\boldsymbol{x}})\right],$     (8)

under the flux constraint (5) and where λ is the regularization parameter. The main advantage of this approach is that now the optimization problem is convex, and therefore, it can be addressed by relying on several numerical methods whose convergence properties are well established. In particular, in MEM_GE, we adopted the following iterative scheme whereby, at each iteration, we compute (Combettes & Pesquet 2011):

  • 1.  
    A gradient step to minimize χ2;
  • 2.  
    A proximal step to maximize the entropy subjected to the positivity and flux constraints.

After this second step, the algorithm is accelerated by computing a linear combination with the approximation corresponding to the previous iteration, and a monotonicity check is also performed (Beck & Teboulle 2009).
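A schematic Python sketch of an iteration of this kind is given below. It is only an illustration under simplifying assumptions, not the SSW IDL implementation: the proximal operator of the entropy term is written in closed form through the Lambert W function, the flux constraint is imposed by a simple rescaling rather than by the projected Landweber procedure described below, the monotonicity check is omitted, and the regularization parameter and iteration number are arbitrary.

    import numpy as np
    from scipy.special import lambertw

    def prox_neg_entropy(z, m, a):
        """Proximal operator of a * x ln(x/(m e)): solve x + a ln(x/m) = z, i.e.
        x = a W((m/a) exp(z/a)); the output is automatically positive.
        For very large z/a an asymptotic expansion of W should be used to avoid overflow."""
        return a * lambertw((m / a) * np.exp(z / a)).real

    def mem_ge_sketch(F, vis, sigma, total_flux, lam=0.1, n_iter=200):
        """Alternating scheme: gradient step on the chi^2 term, proximal step on the
        negative entropy, crude flux re-normalization, and FISTA-like extrapolation."""
        Fw = F / sigma[:, None]                    # fold the uncertainties into the operator
        vw = vis / sigma
        tau = 1.0 / np.linalg.norm(Fw, 2) ** 2     # step size from the spectral norm of Fw
        m = total_flux / F.shape[1]
        x = np.full(F.shape[1], m)                 # start from the flat image
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = (Fw.conj().T @ (Fw @ y - vw)).real       # gradient of (1/2) chi^2
            x_new = prox_neg_entropy(y - tau * grad, m, tau * lam)
            x_new *= total_flux / x_new.sum()               # stand-in for the flux constraint
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # FISTA extrapolation
            x, t = x_new, t_new
        return x

The closed-form proximal step is what makes the entropy penalty convenient in this splitting: it enforces positivity automatically, so no separate projection onto the positive orthant is needed at that stage.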

Two main technical aspects are involved in the implementation of the algorithm. First, the regularization parameter λ is determined a priori by relying on the observation that an over-regularizing value λ0 of this parameter implies that the corresponding regularized solution must be related to the average flux by

$({{\boldsymbol{x}}}_{{\lambda }_{0}})_{j}\simeq m$     (9)

for all j. Simple numerical approximations show that (9) implies

Equation (10)

where ∂j indicates the partial derivative along the jth direction. In order to determine λ0, we approximate each component of ${{\boldsymbol{x}}}_{{\lambda }_{0}}$ with m. This results in an overestimate of the regularization parameter; therefore, the optimal value of the regularization parameter is chosen as a fraction of the estimated λ0, where this fraction is determined as a function of the visibility signal-to-noise ratio using a heuristic look-up table.

Second, the realization of the flux constraint in the second step relies on the solution of the non-regularized constrained minimum problem

Equation (11)

which is performed by applying the projected Landweber method (Piana & Bertero 1997): the estimate of the flux is computed by summing up the pixel content of the solution of problem (11).
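For completeness, a minimal Python sketch of a projected Landweber iteration of this kind (a gradient step on the residual followed by projection onto the nonnegative orthant) is shown below; the step size, iteration number, and absence of a stopping rule are illustrative choices, not those of the SSW code.

    import numpy as np

    def projected_landweber(F, vis, sigma, n_iter=500):
        """Landweber iteration with projection onto x >= 0."""
        Fw = F / sigma[:, None]
        vw = vis / sigma
        tau = 1.0 / np.linalg.norm(Fw, 2) ** 2     # below the 2 / ||Fw||^2 convergence bound
        x = np.zeros(F.shape[1])
        for _ in range(n_iter):
            x = x + tau * (Fw.conj().T @ (vw - Fw @ x)).real
            x = np.maximum(x, 0.0)                 # projection onto the nonnegative orthant
        return x

    # The flux estimate is obtained by summing the pixel content of the solution:
    # flux_estimate = projected_landweber(F, vis, sigma).sum()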

3. Application to X-Ray Visibilities

This section validates the reliability of MEM_GE in the case of both observations provided by RHESSI and synthetic visibilities simulated according to the STIX imaging concept setup.

3.1. RHESSI

In order to illustrate the behavior of the new algorithm when applied to RHESSI observations, we consider two sets of events. The first set is made of the same cases considered in Figure 1, for which we verified whether MEM_GE is able to provide reconstructions that do not suffer the pathological behavior characterizing MEM_NJIT, while guaranteeing the same data fidelity. Specifically, Figure 2 refers to the same events considered in Figure 1, but this time, the reconstruction method employed is MEM_GE. Then, Figures 3–5 compare the reconstructions provided by MEM_GE and MEM_NJIT with the ones of VIS_CS, EM, Clean, and uv_smooth for these same three data sets. Finally, Figures 6–8 show the reconstructions provided by all six algorithms for three events for which MEM_NJIT works properly (for the reconstructions in Figures 3–8, we do not report the comparisons between the measured and predicted visibilities since they show very similar behaviors among the reconstruction methods, with the only exception of uv_smooth, whose fitting performance is systematically slightly worse). Tables 1 and 2 illustrate a quantitative comparison of the performances of all codes for all events considered in this subsection. We point out that all MEM_NJIT reconstructions in this subsection have been obtained by using the default value of 0.03 for the tolerance parameter. We recall that the tolerance parameter epsilon is a number smaller than one such that the algorithm stops when χ2 is smaller than epsilon times the number of degrees of freedom and the relative difference between the predicted and estimated total flux is smaller than epsilon.
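In schematic terms, the stopping rule just described amounts to a check of the following kind (a Python rendering of the criterion as stated in the text, not the IDL code; the argument names are illustrative):

    def mem_njit_converged(chi2_value, n_dof, predicted_flux, estimated_flux, tol=0.03):
        """Stop when chi^2 is below tol times the number of degrees of freedom and the
        relative difference between the predicted and estimated total flux is below tol."""
        flux_mismatch = abs(predicted_flux - estimated_flux) / abs(predicted_flux)
        return (chi2_value < tol * n_dof) and (flux_mismatch < tol)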

Figure 2.

Figure 2. MEM_GE reconstructions (left panels) and corresponding comparisons between predicted and observed visibilities (right panels) for the same cases as in Figure 1.

Figure 3.

Figure 3. Reconstructions of the 2002 May 1 event provided by six imaging methods available from the HESSI GUI. The observations and conditions are the same as in Figure 1 for this same event. For the sake of comparison, contour levels of MEM_GE corresponding to 10% and 50% of the maximum intensity are superimposed on the reconstructions.

Figure 4.

Figure 4. Reconstructions of the 2002 February 20 event provided by six imaging methods available from the HESSI GUI. The observations and conditions are the same as in Figure 1 for this same event. For the sake of comparison, the contour levels of MEM_GE corresponding to 10% and 50% of the maximum intensity are superimposed on the reconstructions.

Figure 5.

Figure 5. Reconstructions of the 2007 December 13 event provided by six imaging methods available from the HESSI GUI. The observations and conditions are the same as in Figure 1 for this same event. For the sake of comparison, contour levels of MEM_GE corresponding to 10% and 50% of the maximum intensity are superimposed on the reconstructions.

Figure 6.

Figure 6. Reconstructions of the 2002 February 13 event provided by the same six imaging methods considered in Figures 3–5. Time interval: 12:29:40–12:31:22; energy range: 6–12 keV; detectors: 3 through 9. For the sake of comparison, contour levels of MEM_GE corresponding to 10% and 50% of the maximum intensity are superimposed on the reconstructions.

Figure 7.

Figure 7. Reconstructions of the 2002 July 23 event provided by the same six imaging methods considered in Figures 3–5. Time interval: 00:29:23–00:29:39; energy range: 25–50 keV; detectors: 3 through 9. For the sake of comparison, contour levels of MEM_GE corresponding to 10% and 50% of the maximum intensity are superimposed on the reconstructions.

Figure 8.

Figure 8. Reconstructions of the 2004 August 31 event provided by the same six imaging methods considered in Figures 3–5. Time interval: 05:34:47–05:35:47; energy range: 6–12 keV; detectors: 3 through 9. For the sake of comparison, contour levels of MEM_GE corresponding to 10% and 50% of the maximum intensity are superimposed on the reconstructions.


Table 1.  Quantitative Parameters Corresponding to the Reconstructions of the Three Events Presented in Figures 3–5

  Reduced χ2 (all) Reduced χ2 (used) Flux Time
2002 Feb 20 (25–50 keV; 11:06:05–11:07:42 UT)
MEM_GE 2.45 2.99 26.2 22
MEM_NJIT 2.43 2.54 21.0 11
VIS_CS 2.88 3.52 18.1 3
CLEAN 2.74 3.30 39.6 7
EM 2.39 2.91 18.4 83
UV_SMOOTH 5.39 7.21 40.8 1
2002 May 1 (3–6 keV; 19:21:29–19:22:29 UT)
MEM_GE 3.02 3.93 1.81 × 10^4 12
MEM_NJIT 2.90 1.68 1.49 × 10^4 12
VIS_CS 2.18 2.64 1.56 × 10^4 3
CLEAN 2.30 2.82 2.95 × 10^4 11
EM 1.88 2.18 1.56 × 10^4 100
UV_SMOOTH 3.19 4.18 1.94 × 10^4 1
2007 Dec 13 (6–12 keV; 22:11:33–22:12:56 UT)
MEM_GE 1.76 1.61 12.6 × 10^2 7
MEM_NJIT 7.45 2.29 5.63 × 10^2 59
VIS_CS 2.64 3.01 6.47 × 10^2 3
CLEAN 2.68 3.10 9.90 × 10^2 8
EM 2.56 2.91 6.57 × 10^2 108
UV_SMOOTH 2.75 3.20 8.01 × 10^2 1

Note. The flux is measured in photon cm−2 s−1, and time is measured in seconds.


Table 2.  Quantitative Parameters Corresponding to the Reconstructions of the Three Events Presented in Figures 6–8

  Reduced χ2 (all) Reduced χ2 (used) Flux Time
2002 Feb 13 (6–12 keV; 12:29:40–12:31:22 UT)
MEM_GE 0.96 0.99 4.15 × 10^2 20
MEM_NJIT 0.98 1.03 4.22 × 10^2 3
VIS_CS 0.99 1.04 3.70 × 10^2 3
CLEAN 1.07 1.15 6.67 × 10^2 8
EM 0.97 1.01 3.67 × 10^2 45
UV_SMOOTH 2.13 2.67 7.38 × 10^2 1
2002 Jul 23 (25–50 keV; 00:29:23–00:29:39 UT)
MEM_GE 3.42 3.53 2.90 × 10^3 20
MEM_NJIT 5.17 6.18 2.71 × 10^3 4
VIS_CS 5.45 6.07 2.41 × 10^3 3
CLEAN 4.78 5.04 2.85 × 10^3 10
EM 4.21 4.67 2.41 × 10^3 158
UV_SMOOTH 6.17 6.86 3.19 × 10^3 1
2004 Aug 31 (6–12 keV; 05:34:47–05:35:47 UT)
MEM_GE 1.24 1.32 1.44 × 10^4 14
MEM_NJIT 1.31 1.42 1.24 × 10^4 5
VIS_CS 1.20 1.26 1.29 × 10^4 3
CLEAN 1.22 1.29 1.70 × 10^4 10
EM 1.25 1.33 1.29 × 10^4 82
UV_SMOOTH 1.76 2.08 1.54 × 10^4 1

Note.  The flux is measured in photon cm−2 s−1, and time is measured in seconds.


3.2. STIX

We simulated the following four configurations with an overall incident flux of ${10}^{4}$ photons ${\mathrm{cm}}^{-2}\,{{\rm{s}}}^{-1}$ (see Figure 9 and Table 3 for all parameters):

  • 1.  
    a double foot-point flare in which one of the sources is more extended and has double the flux of the other source (Configuration 1);
  • 2.  
    a double foot-point flare in which the sources have the same size and the same flux (Configuration 2);
  • 3.  
    a loop flare with small curvature (Configuration 3);
  • 4.  
    a loop flare with large curvature (Configuration 4).

For each one of these configurations, the STIX software utilized a Monte Carlo approach to produce a set of synthetic visibilities with their corresponding standard deviations. These visibility sets have been processed by MEM_GE, and the results are compared in Table 3 with the ones provided by the two other methods currently implemented in the STIX software tree, i.e., visibility-based Clean and count-based Expectation Maximization (EM; Massa et al. 2019).
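As a much-simplified illustration of how noisy visibility realizations can be derived from a ground-truth map, the following Python sketch only adds Gaussian perturbations to the ideal visibilities; it is not the Monte Carlo photon simulator of the STIX software, and the noise model and parameters are hypothetical.

    import numpy as np

    def simulate_visibilities(F, true_map, noise_level=0.05, n_real=25, seed=0):
        """Return n_real noisy realizations of v = F x_true and the per-visibility
        standard deviation used to perturb the real and imaginary parts."""
        rng = np.random.default_rng(seed)
        v_true = F @ true_map
        sigma = noise_level * np.abs(v_true).mean() * np.ones(v_true.size)
        noise = rng.normal(scale=sigma, size=(n_real, v_true.size)) \
              + 1j * rng.normal(scale=sigma, size=(n_real, v_true.size))
        return v_true[None, :] + noise, sigma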

Figure 9.

Figure 9. Four ground-truth configurations utilized to generate synthetic STIX visibilities. From left to right: two foot points with different sizes (Configuration 1); two foot points with the same size (Configuration 2); a loop with orientation in the bottom left–top right direction (Configuration 3); a loop with orientation in the right–left direction (Configuration 4).

Standard image High-resolution image

Table 3.  Reconstruction of Four Source Configurations Characterized by an Overall Incident Photon Flux of ${10}^{4}$ Photons ${\mathrm{cm}}^{-2}\,{{\rm{s}}}^{-1}$

Configuration 1
  First Peak Second Peak Total Flux (×10^3) C-statistic
  X Y FWHM Flux (×10^3) X Y FWHM Flux (×10^3)
Simulated −12.0 −12.0 11.0 6.53 12.0 12.0 8.0 3.33 10.00  
MEM_GE −12.0 ± 0.5 −11.8 ± 0.5 10.9 ± 1.1 4.79 ± 0.12 12.0 ± 0.6 11.7 ± 0.6 8.7 ± 0.9 2.24 ± 0.11 10.69 ± 0.30 6.0 ± 1.2
EM −12.1 ± 0.6 −12.0 ± 0.5 11.1 ± 0.9 5.96 ± 0.13 12.0 ± 0.5 12.0 ± 0.5 8.3 ± 0.4 2.95 ± 0.10 10.63 ± 0.04 3.4 ± 0.3
CLEAN −12.3 ± 0.6 −12.0 ± 0.4 13.5 ± 0.5 5.34 ± 0.10 11.8 ± 0.5 12.1 ± 0.3 10.1 ± 0.2 2.59 ± 0.09 8.59 ± 0.12 26.5 ± 2.9
 
Configuration 2
  First Peak Second Peak Total Flux (×10^3) C-statistic
  X Y FWHM Flux (×10^3) X Y FWHM Flux (×10^3)
Simulated −15.0 15.0 10.0 4.95 15.0 −15.0 10.0 4.95 10.00  
MEM_GE −14.6 ± 0.7 14.7 ± 0.5 8.3 ± 0.5 3.37 ± 0.13 14.5 ± 0.7 −14.8 ± 0.4 8.2 ± 0.5 3.41 ± 0.12 10.64 ± 0.21 5.9 ± 0.8
EM −14.4 ± 0.6 14.8 ± 0.5 8.7 ± 0.6 4.30 ± 0.14 14.4 ± 0.6 −14.8 ± 0.6 8.5 ± 0.6 4.32 ± 0.13 10.61 ± 0.03 3.2 ± 0.3
CLEAN −13.5 ± 0.5 14.6 ± 0.5 12.1 ± 0.4 3.82 ± 0.10 13.4 ± 0.5 −14.4 ± 0.6 11.9 ± 0.4 3.84 ± 0.10 8.38 ± 0.11 31.2 ± 2.5
 
Configuration 3
  Peak Total Flux (×10^3) C-statistic
  X Y                
Simulated −10.0 −10.0 10.00              
MEM_GE −9.1 ± 1.8 −10.0 ± 2.1 10.55 ± 0.21 6.1 ± 0.6            
EM −9.6 ± 2.0 −9.8 ± 2.7 10.64 ± 0.04 3.4 ± 0.3            
CLEAN −10.2 ± 1.3 −9.0 ± 2.0 8.51 ± 0.11 29.0 ± 2.6            
 
Configuration 4
  Peak Total Flux (×10^3) C-statistic
  X Y                
Simulated 20.0 0.0 10.00              
MEM_GE 18.2 ± 1.0 0.2 ± 0.7 10.44 ± 0.17 5.8 ± 0.3            
EM 18.5 ± 1.1 0.4 ± 0.7 10.63 ± 0.03 3.4 ± 0.2            
CLEAN 18.6 ± 0.7 0.0 ± 0.3 8.39 ± 0.11 32.1 ± 2.9            

Note. The morphological and photometric parameters reconstructed by MEM_GE are compared with the ground truth and with the values provided by EM and CLEAN. The positions X, Y and the FWHM of the sources are given in arcseconds, while the total flux and the flux corresponding to each foot point in Configurations 1 and 2 are given in photons cm−2 s−1. Mean values and standard deviations are computed on 25 random realizations of the data performed with the simulator implemented in the STIX DPS.

Download table as:  ASCIITypeset image

4. Discussion of Results

One of the nice aspects of MEM_GE (see Figures 1 and 2) is that it provides reliable reconstructions for those events where MEM_NJIT, with its default tolerance parameter set to 0.03, produces multiple unrealistically small sources, while predicting the experimental visibilities with a statistical fidelity close to that of MEM_NJIT. On the other hand, it behaves similarly to MEM_NJIT for those flaring events where MEM_NJIT reconstructions are reliable (see Figures 6–8). As far as the morphological properties are concerned, MEM_GE systematically provides super-resolution effects. This is particularly visible in the case of the reconstruction of the 2002 February 13 event (Figure 6), where a convex loop shape is reproduced by MEM_GE and MEM_NJIT (and also EM), while this convexity is much more smeared out in the reconstructions provided by Clean and uv_smooth (however, we point out that the performances of Clean depend on which Regression Combined Method is used for the final Clean beam image and which Beam Width Factor is used). In this case, VIS_CS fails to produce the correct orientation of the loop and reproduces a concave shape. Analogously, the spatial resolution achieved by the two MEM codes (and by EM and, partly, by uv_smooth) in the reconstruction of the 2002 July 23 X-class flare is as fine as can be expected, given the angular resolution of the modulation collimators used in the analysis: all four sources characterizing this emission topography are clearly visible in the reconstructions. On the contrary, neither VIS_CS nor Clean is able to distinguish all four emitting regions. Tables 1 and 2 show that, when MEM_NJIT works properly, the two maximum entropy methods provide similar estimates of the quantitative parameters, although MEM_GE may reproduce the data with significantly smaller χ2 values (as in the case of the 2007 December 13 and 2002 July 23 events) but may require a higher computational time (as in the case of the 2002 February 13 and 2002 July 23 events). We finally point out that the tolerance parameter in MEM_NJIT can be manually tuned to higher values in order to obtain more regularized reconstructions. However, Figure 10 shows that increasing the tolerance value results in morphologies closer to the ones corresponding to MEM_GE reconstructions but with a less accurate ability of the method to both fit the observations and conserve the flux value given as input to the algorithm. Even more importantly, the figure shows that this tuning procedure is dependent on the experimental data set considered.

Figure 10.

Figure 10. Analysis of the outcome of MEM_NJIT for different values of the tolerance parameter. First row of panels: the 2002 February 20 event. Second row of panels: the 2002 July 23 event. First column panels: MEM_NJIT reconstructions for tolerance = 1, with superimposed level curves corresponding to MEM_GE reconstructions. Second column panels: reduced χ2 value vs. tolerance computed considering both used and all detectors. Third column panels: total flux of the source obtained from the image (red line) vs. tolerance. The blue line corresponds to an a priori estimate used by MEM_NJIT as input constraint.


Interestingly, the super-resolution properties of Maximum Entropy are confirmed by the analysis of synthetic STIX data illustrated in Table 3, where the performances of MEM_GE are compared to the ones of visibility-based Clean and count-based EM (there is no MEM_NJIT code adapted to the STIX framework). In fact, in the reconstructions of the foot points, the ability of MEM_GE to recover the ground-truth FWHM outperforms that of the other visibility-based algorithm, Clean (EM works systematically better, probably because the signal-to-noise ratio associated with the counts is higher than that associated with the real and imaginary parts of the visibilities, as shown by Massa et al. 2019).

Also, similarly to EM, MEM_GE behaves better than Clean both in reproducing the exact total flux and in fitting the synthetic measurements. On the other hand, EM and Clean seem more accurate in separately reproducing the flux of each of the two circular sources in Configuration 1 and Configuration 2.

5. Conclusions

This paper introduces a novel algorithm, named MEM_GE, implementing a Maximum Entropy approach to image reconstruction from X-ray visibilities in solar astronomy. The motivation for this effort is that the only existing algorithm (MEM_NJIT) realizing this approach for this kind of data provides reliable and super-resolved reconstructions for data sets associated with most events, but it sometimes suffers numerical instabilities and a lack of convergence to meaningful solutions. Unlike the old algorithm, the new implementation relies on the constrained minimization of a convex functional, which is realized by alternating gradient descent and proximal steps. The resulting code maintains the good imaging properties of the previous one, while guaranteeing convergence to reliable reconstructions in cases where MEM_NJIT gives unphysical sources (unless its tolerance parameter is tuned in a data-dependent fashion).

MEM_GE is available in the SSW tree and can be most easily accessed through the HESSI GUI. It is one of the algorithms that is currently used for the population of the RHESSI image archive, and it can also be used to process STIX synthetic visibilities.

The robust imaging properties of this algorithm, together with its systematic reliability, make it particularly appropriate for the accurate estimation of morphological properties like the ones considered in the analysis of coronal hard X-ray sources (Xu et al. 2008; Guo et al. 2012a, 2012b; Dennis et al. 2018). MEM_GE is an appropriate algorithm to use for conducting a statistical study of these kinds of events.
