Deepening gamma-ray point-source catalogues with sub-threshold information

We propose a novel statistical method to extend Fermi-LAT catalogues of high-latitude γ-ray sources below their nominal detection threshold. To do so, we rely on the determination of the differential source-count distribution of sub-threshold sources, which only provides the statistical flux distribution of the faint sources. By simulating ensembles of synthetic skies, we quantitatively assess the likelihood that pixels in the sky with relatively low test statistics are due to sources, thereby complementing the source-count distribution with spatial information. Besides being useful for orienting multi-messenger and multi-wavelength identification efforts towards new γ-ray sources, we expect the results to be especially advantageous for statistical applications such as cross-correlation analyses.


Introduction
Our view of the high-energy γ-ray sky has been revolutionised by the Large Area Telescope (LAT) onboard the Fermi satellite, which has been conducting its survey mission since 2008: since the publication of the fourth source catalogue (4FGL), based on 8 years of data [1], incremental updates have appeared periodically. The latest incarnation, data release 3 (DR3), based on 12 years of data [2], includes 6658 point-like sources in the energy range from 50 MeV to 1 TeV, with extragalactic blazars constituting the largest associated class. Apart from revealing entirely new classes of objects (such as Galactic millisecond pulsars [3]), the explosion of the number of γ-ray sources has allowed for numerous applications in multi-wavelength and multi-messenger astrophysics and astroparticle physics. Unsurprisingly, the essential requirement for a source to enter a catalogue is that its signal strength be significantly above the background, dominated by γ rays associated to energy-loss processes of cosmic rays in the interstellar gas and radiation field. Since the pioneering analysis of EGRET data [4], the signal strength is typically quantified by a test statistic 2 ln(L/L_0), comparing the maximum value of the likelihood function including the source, L, with the one without the source, L_0.
However, many more sources are believed to hide below the detection threshold. In particular, a significant part of the quasi-isotropic, high-Galactic-latitude emission is attributed to point-like sources too dim to be individually detected by the LAT [5]. Nonetheless, a statistical approach relying on the pixel-count distribution has long been suggested as a diagnostic tool to separate truly diffuse γ-ray signals from unresolved sources roughly isotropically distributed [6]. Since its pioneering application in [7], a number of articles have used it [8-10] to extend the measurement of the differential source count, i.e. dN/dS, below the detection threshold of Fermi-LAT catalogues, as well as to probe extragalactic source populations [11] and dark matter properties [12, 13].
Recently, in [14] one of us extended these results by adopting machine learning techniques, obtaining a dN/dS (number of sources per unit flux) distribution which, in the resolved regime, is in excellent agreement with the one derived from catalogues, while extending as dN/dS ∝ S^-2 in the unresolved regime down to fluxes of about 5 × 10^-12 cm^-2 s^-1. In [14], the dN/dS was deduced by training a neural network to reconstruct the source-count distribution of the high-latitude sky given a set of simulations with broad priors for the dN/dS parameters. The core idea was that if a neural network can learn to correctly reconstruct the dN/dS for a series of simulated sky maps whose prior is broad enough, and if the simulator is accurate enough, then the neural network can correctly determine the source-count distribution given the Fermi photon counts map. The output of [14], i.e. the dN/dS, describes the number of gamma-ray sources per differential unit of flux. This quantity can only describe the unresolved gamma-ray source population statistically, and gives us no information concerning the location of any of the sources.
The goal of the present work is instead to probabilistically extend the Fermi-LAT source catalogue by exploiting the output of [14]. We do so by proposing a new methodology, applied to the Fermi-LAT data, which consists in a) proposing a test statistic (TS) for the gamma-ray counts in the sky, whose distribution across spatial pixels can be compared between the Fermi-LAT measured map and the simulated maps; and b) identifying the pixels in the measured map whose TS values are large enough according to this comparison, therefore overcoming the limitation of [14] of not providing spatial information.
The new methodological development we propose in this work can be exploited to obtain an extended catalogue, the final product of this analysis, which, albeit only probabilistically defined, can nonetheless be very useful, for instance, in multi-wavelength (e.g. [15-17]) or cross-correlation [18-21] analyses. Other methods to build γ-ray probabilistic catalogues have been attempted in the past, e.g. [22], which, however, have been applied only to limited regions of the real sky.
This article is structured as follows: in Sec. 2 we describe our data selection and introduce the map-making models we use in our analysis. Sec. 3 is devoted to the quantitative setting of the problem and the statistical procedure followed. In Sec. 4 we present our results, which are also made publicly available at https://doi.org/10.5281/zenodo.8070852. In Sec. 5 we outline some perspectives and present our conclusions.
Data selection and model components

Data selection
We consider the updated 14-year Fermi-LAT data set (from week 9 to week 745) in the (1, 10) GeV energy range and the Pass 8 event selection [23, 24]. This ensures a good balance between high statistics and good angular resolution of the detector. We employ the Fermi Science Tools suite version 2.0.8 [25] to analyse the Fermi-LAT data with the following settings: we adopt the P8R3_SOURCEVETO_V3 instrument response functions (IRF), event class (EVCLASS) 2048 (SOURCEVETO) and event type (EVTYPE) 1 (FRONT). We use standard quality selection criteria, i.e. DATA_QUAL==1 and LAT_CONFIG==1. Atmospheric γ rays from the Earth-limb emission are removed by a cut on the maximum zenith angle (ZMAX) of 90 degrees. We consider FRONT events in order to have optimal angular resolution, and the SOURCEVETO class of events in order to obtain a good suppression of the charged cosmic-ray background while still retaining large event statistics. In table 1, we provide the reader with a summary of the settings.
We employ the Fermi Science Tools to compute the photon counts map, the exposure map and the point spread function (PSF) of the LAT. The maps are binned in N_pix equal-area pixels relying on HEALPix. The chosen pixelization is expressed in terms of the "resolution parameter" N_side, controlling the number of subdivisions of a great circle on the sphere and related to N_pix via N_pix = 12 N_side^2. As we motivate in section 4, we simulate the sky with N_side = 1024, but run the pixel analysis with N_side = 512.
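For concreteness, the HEALPix bookkeeping used above can be sketched in a few lines of Python (illustrative only; the actual pipeline relies on the HEALPix library itself):

```python
import math

def healpix_npix(nside: int) -> int:
    """Number of equal-area HEALPix pixels for resolution parameter N_side."""
    return 12 * nside**2

def mean_pixel_size_deg(nside: int) -> float:
    """Approximate pixel angular size in degrees: the square root of the
    (equal) pixel solid angle 4*pi/N_pix."""
    area_sr = 4.0 * math.pi / healpix_npix(nside)
    return math.degrees(math.sqrt(area_sr))

print(healpix_npix(1024))                    # 12582912 pixels (simulated skies)
print(round(mean_pixel_size_deg(1024), 3))   # 0.057 deg
print(round(mean_pixel_size_deg(512), 3))    # 0.115 deg
```

These pixel sizes are what motivates the two resolutions quoted above: N_side = 1024 for simulation and N_side = 512 for the pixel analysis.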
We perform an energy-integrated analysis in the (1, 10) GeV energy range, but our procedure can be extended to multiple energy bins.

Null-and alternative-hypothesis models
Our goal is to devise an algorithm able to detect point sources as γ-ray excesses on top of a Poisson-distributed model for the background diffuse emission components. In this respect, our null-hypothesis model consists only of diffuse emission components, while the alternative-hypothesis model also contains point sources.
We construct our background-only flux map B as the sum of two components, B = A_gal G + F_0 (eq. 2.1), where:

• F_0 is an isotropic (usually assumed extragalactic) component accounting for the average flux of all the unresolved sources, as well as possible diffuse emission mechanisms.
• G is the gll_iem_v07 template morphology of the diffuse γ-ray emission from the Milky Way provided by the Fermi-LAT collaboration, with normalization A_gal.
The parameters A_gal and F_0 are determined via a fit to the Fermi-LAT photon counts map, as explained in section 3. In order to obtain a background map in units of counts in each of the N_pix pixels, we multiply by the exposure map E and the steradian-to-pixel conversion factor 4π/N_pix, i.e. the expected counts in pixel i are λ_i = (A_gal G_i + F_0) E_i (4π/N_pix). The exposure map is extracted using the Fermi Science Tools in 10 logarithmic bins over the (1, 10) GeV range (and treated as the corresponding piecewise function when performing the integral), in order to be consistent with the binning of the Galactic foreground model. We also account for the PSF, responsible for an angular smoothing of the map, via the Fermi Science Tools. Similarly to the map E, we average over the (1, 10) GeV range with an E^-2.4 weight, according to the overall energy dependence of the data at high Galactic latitudes [26].
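As a minimal numerical sketch of this flux-to-counts conversion (with hypothetical template and exposure values standing in for gll_iem_v07 and the true LAT exposure), the expected background counts per pixel can be computed as:

```python
import numpy as np

def background_counts(G, E, A_gal, F0, npix):
    """Expected counts per pixel for the background-only model:
    flux (A_gal * G + F0) [cm^-2 s^-1 sr^-1] times exposure E [cm^2 s]
    times the steradian-to-pixel factor 4*pi/npix."""
    return (A_gal * G + F0) * E * (4.0 * np.pi / npix)

rng = np.random.default_rng(0)
npix = 12 * 64**2                       # toy N_side = 64 map for illustration
G = rng.uniform(1e-7, 5e-7, npix)       # hypothetical Galactic template values
E = np.full(npix, 3e11)                 # hypothetical uniform exposure
lam = background_counts(G, E, 0.914, 4.81e-7, npix)
counts = rng.poisson(lam)               # one Poisson realisation of the background sky
```

The same expectation values λ_i enter both the likelihood fit and the test statistic defined in section 3.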
On the other hand, our first alternative-hypothesis model is based on the 4FGL-DR3, and will be used to validate our procedure and TS analysis on the official Fermi catalogue. We generate a new map including the same background model as in eq. (2.1) (i.e. the same parameters A_gal and F_0), and adding a synthetic map, C, associated to the list of sources in the 4FGL-DR3 catalogue with |b| ≥ 30°. The map is appropriately smoothed via the PSF, convolved with the exposure map, and converted into a pixelized count map, as previously described.
Finally, as our second alternative hypothesis we consider a model for point sources based on the source-count distribution derived in [14]. Analogously to the catalogue case, we include an additional source component, S, on top of the background-only model. The map S consists of point sources randomly placed on the sphere, drawn according to the dN/dS inferred in [14]. We produced 5000 realisations of this source model, by considering variations of the dN/dS parameters inferred in [14]. In particular, we adopt the model related to the gll_iem_v07 foreground template and |b| < 30° Galactic-plane cut, and we vary the dN/dS within the estimated uncertainties by fitting a Gaussian process (cf. [27]) to the dN/dS output of the neural network. For each parameter configuration of the dN/dS, the isotropic flux F_iso is consistently determined by the procedure proposed in [14]. Note that the residual Poisson-distributed isotropic component F_iso is now lower than F_0 in eq. (2.1), since part of the flux F_0 is accounted for by the discrete sources. We note that both the K and M maps are simulated, i.e. they are "theoretical skies", not the "measured sky". The map K is deterministic, meaning we simulate it by accounting for the Fermi PSF for the point-like sources whose fluxes and positions are given by the 4FGL catalogue, on top of the assumed smooth background (the remaining contributions to the K map in eq. 2.3). In the case of the map M, we rely on statistical information: the dN/dS does not describe the position of any gamma-ray source, nor does it tell us their exact number. Rather, the integral of dN/dS in a flux bin gives the expected number of sources in that bin, while the actual number is only defined up to Poisson fluctuations. Once the number of sources per flux bin has been determined, we place them at uniformly random positions in the sky, building the S term in eq. (2.4). In summary, we run the simulator developed in [14] in order to build the maps M.
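The construction of the source map S just described (expected counts per flux bin from the integral of dN/dS, Poisson fluctuation of the realised number, uniform positions on the sphere) can be sketched as follows; the power-law normalisation below is a toy stand-in for the dN/dS inferred in [14]:

```python
import numpy as np

def sample_sources(dnds, s_edges, rng):
    """Draw one synthetic source list. In each flux bin, the expected number of
    sources is the integral of dN/dS (approximated as dnds * bin width), the
    realised number is Poisson-fluctuated, and positions are isotropic."""
    fluxes, lons, lats = [], [], []
    for dn, lo, hi in zip(dnds, s_edges[:-1], s_edges[1:]):
        n = rng.poisson(dn * (hi - lo))           # expected -> realised count
        fluxes.append(rng.uniform(lo, hi, n))     # flat within the narrow bin
        lons.append(rng.uniform(0.0, 360.0, n))   # longitude [deg]
        # isotropy on the sphere: sin(latitude) uniform in [-1, 1]
        lats.append(np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n))))
    return np.concatenate(fluxes), np.concatenate(lons), np.concatenate(lats)

rng = np.random.default_rng(1)
s_edges = np.logspace(-12, -8, 21)                # flux bin edges [cm^-2 s^-1]
dnds = 1e-10 * s_edges[:-1] ** -2.0               # toy dN/dS ~ S^-2
S, lon, lat = sample_sources(dnds, s_edges, rng)
```

Each call produces one realisation of the source component; repeating it (with varied dN/dS parameters) yields the ensemble of M maps.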

Problem setting and statistical framework
Our first step is the determination of the null-hypothesis model, consisting solely of Poisson-distributed background emission. To this end, we apply a mask of |b| < 30° around the Galactic plane, and remove the emission of all sources in the 4FGL-DR3 catalogue by further masking a 1° disk around each source centroid. This cut roughly corresponds to the 90% containment angle of the PSF and represents a good compromise between removing the bulk of the point-like emission and avoiding removing too much of the diffuse-emission area. We checked that the results are not particularly sensitive to the exact cut used in the 1°-2° range. We then compute the expected photon counts λ_i of model B (depending on A_gal and F_0) in each pixel i and, given the actual counts k_i, define the Poissonian likelihood L = Π_i λ_i^{k_i} e^{-λ_i} / k_i! (eq. 3.1). The adopted parameters A_gal and F_0 are those maximising this likelihood. For N_side = 512, we find A_gal = 0.914 and F_0 = 4.81 × 10^-7 cm^-2 s^-1 sr^-1.
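The maximum-likelihood determination of (A_gal, F_0) can be illustrated with a self-contained toy fit: counts are generated from known parameters (hypothetical template and exposure values stand in for the real maps) and recovered by a simple grid scan of the Poisson log-likelihood; the actual analysis uses the real maps and a proper optimiser.

```python
import numpy as np

# Toy setup with hypothetical template/exposure values
rng = np.random.default_rng(2)
npix = 3000
G = rng.uniform(1e-7, 5e-7, npix)         # Galactic template [cm^-2 s^-1 sr^-1]
E = np.full(npix, 1e13)                   # exposure [cm^2 s]
omega = 4.0 * np.pi / (12 * 512**2)       # pixel solid angle at N_side = 512
k = rng.poisson((0.914 * G + 4.81e-7) * E * omega)   # synthetic "observed" counts

def neg_loglike(A_gal, F0):
    """Negative Poisson log-likelihood (the constant ln k! term is dropped)."""
    lam = (A_gal * G + F0) * E * omega
    return np.sum(lam - k * np.log(lam))

# Coarse grid scan over the two background parameters
A_grid = np.linspace(0.5, 1.5, 101)
F_grid = np.linspace(2e-7, 8e-7, 121)
nll = np.array([[neg_loglike(a, f) for f in F_grid] for a in A_grid])
ia, jf = np.unravel_index(np.argmin(nll), nll.shape)
A_fit, F0_fit = A_grid[ia], F_grid[jf]    # recovered parameters
```

Dropping the ln k! term is harmless here because it does not depend on the fitted parameters.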
Our second step is to quantify the significance of additional point sources on top of the null-hypothesis model. For this purpose, we define a test statistic (TS) in each pixel, quantifying how well the observed photon counts agree with the background-only hypothesis: TS_i = (x_i − λ_i)^2 / λ_i (eq. 3.2), where λ_i is the best-fit background-only model counts in pixel i, computed according to eq. (2.1), and x_i is either the observed photon counts (when applied to the actual photon count map of the data, leading to the single set TS_i^data) or the photon counts in pixel i generated via synthetic maps. If the maps are generated according to the hypothesis of eq. (2.3), the procedure yields the single set TS_i^cat; if the maps are randomly taken from the model of eq. (2.4), we obtain a number of sets TS_i^sim equal to the number of realisations. The TS choice of eq. (3.2) is inspired by Pearson's χ² test statistic for Poisson-distributed counts, whose variance is σ_i² = λ_i. This choice also loosely mimics the one adopted by the Fermi-LAT collaboration, although no exact correspondence can be expected: Fermi-LAT uses a recursive procedure where, after a first step similar to the above, energy-dependent renormalisations of the background are allowed independently in each "region of interest". We prefer a simple and coherent definition valid for all pixels in the sky, also because we do not assume a particular probability distribution for the TS values. Rather, we use the TS as a "signal interest label", and derive a probabilistic interpretation for it from simulations of synthetic maps (see below).
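A per-pixel sketch of such a test statistic follows, assuming the Pearson-like form TS_i = (x_i − λ_i)²/λ_i suggested by the description above (the exact eq. (3.2) may differ in detail):

```python
import numpy as np

def pearson_ts(x, lam):
    """Per-pixel test statistic: squared deviation of the observed counts x
    from the background expectation lam, in units of the Poisson variance lam."""
    return (x - lam) ** 2 / lam

rng = np.random.default_rng(3)
lam = np.full(10000, 9.0)        # toy, spatially uniform background expectation
x = rng.poisson(lam).astype(float)
x[:5] += 30.0                    # inject a few bright "sources" by hand
ts = pearson_ts(x, lam)
# Background pixels have TS of order unity; the injected pixels stand out.
```

Note that pure background fluctuations give TS values of order one on average, which is why a probabilistic calibration of the TS scale via simulations, rather than an analytic distribution, is adopted in the following.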
Obviously, since in our simulations the sources are uniformly distributed across the sky, the spatial distribution of TS_i^sim never coincides with the one obtained from the actual data, TS_i^data, nor with the one from the 4FGL-DR3 catalogue, TS_i^cat. However, the histogram of the TS_i^sim values should statistically match the actual one, TS_i^data, if the functional form and inferred parameters of the underlying dN/dS are correct. Since the determination of the dN/dS exploits information below the nominal catalogue threshold, the statistical agreement between the actual map and the synthetic maps should hold at values below this threshold and inform us about new, faint sources. This is the gist of the argument that we exploit to statistically extend the Fermi-LAT catalogue below threshold. This expectation is illustrated in figure 1, where we report the descending cumulative TS distribution for real data (black), the 4FGL-DR3 catalogue source model K (gray), and several realisations of the dN/dS source model M (blue). While the distributions match rather well at high TS, the one associated to the catalogue departs from the data well before the simulations' ones do.
As a next (and third) step, we attribute a statistical meaning to the TS distributions. At large TS, we expect the observed map and the simulated ones to agree well: visible (but insignificant) discrepancies are expected at the highest values of TS, where Poisson fluctuations are important. At low TS, we expect the deviations between the two distributions to eventually become statistically significant. This is due to the interplay between the weakness of the sources and the systematic errors in the background template, which become more and more relevant at low fluxes.
To assess how well the two distributions agree, we apply a two-sample Kolmogorov-Smirnov (KS) test [28] to the normalised cumulative distribution functions of the real data and of a simulated sky (or the catalogue sky), both cut at a minimum TS*, which we vary on a grid. Their compatibility depends on the confidence level 1 − α at which we set the test. This is, strictly speaking, arbitrary, although conventionally α is set to 0.05 in most applications. We consider three values of α, 0.01, 0.05 and 0.1, to assess the dependence of the results on this meta-parameter. In general, increasing α yields more conservative constraints, as the criterion for compatibility becomes more restrictive (i.e. the distributions must agree more closely to pass the test).
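The KS comparison above a TS* cut can be sketched with scipy; the chi-square-distributed TS sets below are toy stand-ins for the real and simulated skies:

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_above_cut(ts_data, ts_sim, ts_star, alpha):
    """Two-sample KS test between the data and simulated TS values above the
    minimum cut TS*; returns (compatible at CL 1 - alpha, p-value)."""
    p = ks_2samp(ts_data[ts_data > ts_star], ts_sim[ts_sim > ts_star]).pvalue
    return p > alpha, p

rng = np.random.default_rng(4)
ts_data = rng.chisquare(1, 5000)     # toy "data" TS set
ts_good = rng.chisquare(1, 5000)     # simulation from the same model
ts_bad = rng.chisquare(3, 5000)      # simulation from a wrong model
ok_good, p_good = ks_above_cut(ts_data, ts_good, ts_star=1.0, alpha=0.05)
ok_bad, p_bad = ks_above_cut(ts_data, ts_bad, ts_star=1.0, alpha=0.05)
```

A simulation drawn from the same model as the data typically passes the test, while one drawn from a clearly wrong model is strongly rejected.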
For the single catalogue model in eq. (2.3), we present the results in terms of the TS* leading to agreement with the real data as a function of α.
In the case of the dN/dS model, since we have multiple realisations, we quantify the fraction of simulations which pass the test against the true-sky distribution as a function of TS* (and α). This provides a quality factor QF that we associate to the considered cut. Lacking a better criterion, a reasonable choice may be to require a majority of realisations to lead to agreement (i.e. QF > 0.5), but this is, strictly speaking, another meta-parameter which depends on the criteria of the user. We will also show how the results depend on QF, with values of QF closer to 1 associated to more restrictive requirements. Note that we "naively" interpret the QF as the probability of the TS being associated to a source, by considering the whole sample of simulations as equally viable (i.e. only the prior information based on the pixel statistics is used). In principle, one could refine this indicator by restricting the sample of simulations to the ones which reproduce the high-TS distribution; we expect the actual probability to be somewhat higher than the one estimated via the QF. Also note that nowhere in our procedure have we relied on the 4FGL catalogue as input.

The final step is to identify the "firing pixels" in the true sky, which constitute the main deliverable of our work. For a fixed KS-test significance α and quality factor QF, we obtain a value TS* above which a fraction QF of the simulations is compatible with the measured sky. The pixels i in the measured sky having TS_i > TS* constitute our candidate pixels.
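Putting the pieces together, the quality factor QF at a given cut TS* is simply the fraction of simulated TS sets that pass the KS test against the data; a toy sketch:

```python
import numpy as np
from scipy.stats import ks_2samp

def quality_factor(ts_data, ts_sims, ts_star, alpha=0.05):
    """Fraction of simulated TS sets whose distribution above TS* is
    KS-compatible with the data at confidence level 1 - alpha."""
    d = ts_data[ts_data > ts_star]
    n_pass = sum(ks_2samp(d, ts[ts > ts_star]).pvalue > alpha for ts in ts_sims)
    return n_pass / len(ts_sims)

rng = np.random.default_rng(5)
ts_data = rng.chisquare(1, 4000)                       # toy "data" TS set
good = [rng.chisquare(1, 4000) for _ in range(20)]     # same model as the data
bad = [rng.chisquare(3, 4000) for _ in range(20)]      # wrong model
qf_good = quality_factor(ts_data, good, ts_star=1.0)
qf_bad = quality_factor(ts_data, bad, ts_star=1.0)
```

Scanning TS* on a grid and recording QF(TS*) then yields, for each choice of α and QF threshold, the cut above which pixels are flagged as candidates.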
We conclude this section with a general remark. Our procedure treats every pixel independently, assigning a TS value to each of them regardless of the TS values of the neighbouring pixels. In this respect, it is useful to distinguish between a source and a "firing pixel": multiple pixels may be firing (i.e. have a sufficiently large TS) in association with a single source, depending on the choice of N_pix, the intensity of the source and the PSF of the instrument (indirectly, thus, also on the spectrum of the source). This instance becomes more likely the smaller the pixel size, i.e. the larger N_pix, is. Conversely, multiple sources may be associated to a single pixel, notably for choices of N_pix which are too low. Clearly, for the exercise proposed here to be useful, the choice of N_pix must lead to a number of pixels in the non-masked sky sufficiently larger than the current number of catalogue sources in the same region. At the same time, a pixel size much smaller than the angular resolution would be meaningless. Keeping this in mind, for the simulations we fix N_side = 1024, which corresponds to an angular resolution of approximately 0.06°, in order to match the angular resolution of the Galactic foreground template. However, since the Fermi-LAT angular resolution in the (1, 10) GeV energy band is at best of the order of 0.15°, for our pixel analysis we downgrade the simulations to N_side = 512, which corresponds to an angular resolution of approximately 0.12°. This choice of N_side ensures that there are considerably more pixels than resolved sources (approximately 500 times the number of resolved sources in the region |b| > 30°) and that the pixel size of our analysis roughly matches the best resolution of the LAT at the energies of interest. The TS analysis is therefore performed on synthetic and real data maps pixelized with N_side = 512.
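Since the children of NESTED-ordered HEALPix pixel p are pixels 4p..4p+3 at the next N_side, the count-preserving downgrade from N_side = 1024 to 512 amounts to summing groups of four children (analogous to what healpy's ud_grade function offers); a dependency-free sketch, assuming NESTED ordering:

```python
import numpy as np

def downgrade_nested_counts(counts):
    """Downgrade a NESTED-ordered HEALPix counts map by one step in N_side:
    each parent pixel collects the sum of its 4 children, so the total number
    of counts is preserved."""
    return counts.reshape(-1, 4).sum(axis=1)

rng = np.random.default_rng(6)
counts_1024 = rng.poisson(1.0, 12 * 1024**2)   # toy counts map at N_side = 1024
counts_512 = downgrade_nested_counts(counts_1024)
```

Summing (rather than averaging) children is the appropriate operation for counts maps, since it preserves the Poisson statistics of the total counts.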

Results
We comment here on some summary statistics of our main results. In table 2 we report, for different values of α, the TS* above which the 4FGL-DR3 catalogue source model and real data pass the KS test, and the corresponding TS* at which the dN/dS simulated sky maps do the same, at different values of QF. The first conclusion is that, provided the choice of QF (and/or of α) is not too stringent, our model maps M agree with the data better than the catalogue map does, i.e. the agreement extends to lower TS*: compare in particular the second with the last column in table 2. This is a manifestation of the fact that, at least for a majority of realisations, including sub-threshold sources provides a much better description of the Fermi-LAT sky map. On the other hand, requiring a too large QF leads to a more restrictive selection. In figure 2 we display the fraction of pixels in the catalogue map K with flux S > S_i whose positions are found in spatial coincidence with pixels in our firing-pixels map obeying the same flux condition, as a function of the flux and for different TS cuts.
Roughly, Fermi-LAT starts to lose sensitivity to sources below 2 × 10^-10 cm^-2 s^-1 [8, 14, 29, 30], above the rising region of the sigmoid-like curves of figure 2. As expected, modulo fluctuations, the fraction tends towards unity at high fluxes. Note that we do not expect perfect recovery of the sources in the Fermi-LAT catalogue anyway, due either to some simplifications (e.g. we use energy-integrated quantities for our analysis, as opposed to the differential energy analysis of the LAT collaboration) or to methodological differences, such as the pixelisation procedure or the iterative background-estimate procedure used by Fermi. But it is clearly the case that we recover the large majority of the "bona fide" sources that we should.

N      coincidence
3000   98.4%
5000   97.8%
9000   97.8%

Table 5. The fraction of firing pixels in common between the selections derived for the two Galactic foreground models. The comparison is performed at an equal number of selected sources for the two cases.
Figure 3 gives a more detailed look at the information contained in figure 2. In this case we do not consider the "cumulative" number of pixels, but only those lying in a given flux bin. The colored histograms correspond to the pixels in map K matching the firing pixels of our dN/dS model map (related to the numerator of the fraction computed in figure 2). The white histograms instead correspond to all the pixels in map K (related to the denominator of the aforementioned fraction). The sigmoid curves in figure 2 grow close to unity at large S since, barring fluctuations, the quasi-totality of sources in the catalogue is found within the adopted cuts. When lowering the TS cut, the peak of the colored histogram shifts to lower S, while still preferentially populating the high-S tail (compare the case TS = 36 with the case TS = 78). If the TS cut is lowered too much, as illustrated by the case TS = 10, the matching starts developing a "flat tail" at lower S, hinting at a growing random-match component. Eventually, the curves in figure 2 drop to zero at low S not because of the scarcity of matchings (i.e. the numerator), but because of the faster-growing number of pixels having flux above the indicated value (i.e. the denominator).
As a further check, we assess how robust the selected catalogue is against variations of the background model. We compare the results obtained with the gll_iem_v07 template to those obtained with the previous incarnation of the Galactic foreground model developed by the Fermi collaboration, gll_iem_v05_rev1. Since the TS* scale and the related QF are quantitatively different when different backgrounds are used, we proceed by comparing the two results at an equal number of selected sources: we select TS* cuts delivering the same number of selected sources in the two cases and evaluate the exact overlap of the pixel candidates. The results are reported in table 5. As can be seen, the catalogues obtained agree to within a few percent.
The main output of our work consists of sky maps of "firing" pixels (i.e. source candidates). These pixels have a significant TS value among all pixels for which there is statistical compatibility between simulations and real data, according to KS tests. The firing pixels depend on the significance α of the KS test, as well as on a "quality factor" QF, accounting for the fraction of simulated maps passing the KS test. We provide our results electronically in the form of a Python package called gPCS (for gamma-ray Photon Count Statistics), as well as a summary FITS file, both available on Zenodo: https://doi.org/10.5281/zenodo.8070852. The Python package can be used to determine the firing pixels given either a TS* cut or a quality factor and α parameter. It also includes a convenience function to export the firing pixels in the form of a FITS file, and we provide an example of how to compute the summary table provided with the package. The gPCS Python package can be easily installed using pip.

Discussion and conclusions
We have presented an approach to statistically push the Fermi-LAT sensitivity to point-like sources at high latitudes below the current detection threshold, leveraging the fact that the underlying source-count distribution function is constrained even at lower test statistics (TS). This function has recently been re-derived using machine learning methods, yielding results in accordance with existing pixel-statistics analyses. This information, not accounted for in the current catalogue-construction procedure, allowed us to provide a catalogue of directions (i.e. firing pixels) in the sky likely to be associated to sources, albeit only in a probabilistic sense, which we assessed via a Kolmogorov-Smirnov (KS) test. The actual catalogue of directions depends on meta-parameters such as the confidence level 1 − α of the KS test, or the fraction QF of simulated skies for which the simulation agrees with the data above a certain TS level. For reasonable choices of these meta-parameters, we found a number of "firing pixels" roughly 50% higher than what one would infer from a catalogue only including sources listed in the latest incarnation of the Fermi-LAT catalogue. Our results also pass some sanity checks, such as the fact that sufficiently luminous Fermi-LAT catalogue sources are found within the directions inferred by our method, within the angular resolution of the instrument. Compared with the "local" iterative procedure followed by Fermi-LAT to establish their catalogue, another possible advantage of our catalogue of directions is that its TS scale has a homogeneous meaning across the sky, and may be more suitable for global statistical analyses.
We provide our results in digital form, allowing the user to select the meta-parameters QF and α to be more or less restrictive in their selection criteria, hopefully adapting to a wide range of applications. The most obvious application consists of cross-correlation studies, either multi-wavelength or multi-messenger, where the statistical advantage of a significantly larger sample more than compensates for having a (sub-leading) fraction of spurious directions. Another straightforward use of our results could be a guided source-search and identification program via multi-wavelength studies, which would also help transform some of these candidate source directions into bona fide sources. In turn, these studies may further benefit from machine learning techniques that have recently been proposed to ease the probabilistic classification of unassociated sources [31].
We stress that, although our results are based on the source-count distribution as inferred from a specific machine learning analysis of the photon counts, the method developed here to identify "firing pixels" can be applied to any other determination of the dN/dS of the high-latitude sky. It is, in this respect, independent of the choice of the dN/dS, as long as the latter correctly describes point sources in the faint regime.
Since our work had a primarily methodological, proof-of-principle motivation, we resorted to a couple of technical simplifications: we worked with an energy-integrated spectrum and we limited ourselves to high Galactic latitudes. The former approximation could be lifted rather naturally by a brute-force approach; we expect the main complication to be computational, but the problem should remain tractable in a reasonable time. The latter limitation could also in principle be lifted, extending the analysis to low Galactic latitudes. The main problem we anticipate is that existing background models do not perform as well at low latitudes as they do at high latitudes. One also needs to include multiple populations contributing to both resolved and unresolved emission, see e.g. [32] and sec. 6.2 in [33] for the role played by Galactic millisecond pulsars in this context. In the worst case, the method may not allow one to gain much with respect to more traditional techniques. Depending on the outcome, one may however consider multi-zone models for the background or similar refinements.
A third direction for future progress is to attach a more realistic probability scale to the TS maps. We have constructed our synthetic skies based only on the statistical properties of the sources previously deduced via the pixel analysis. However, only a subset of these simulations is consistent with the properties of the actual sky: notably at high flux, where only a few sources exist, the associated TS distribution significantly depends on whether the randomly placed sources fall in a lower- or higher-background region (for an illustration, see the large variability at high TS in figure 1). As briefly discussed in the article, a promising approach would be to narrow the sample of admissible synthetic skies used to compute QF to the sole simulations which are statistically consistent with the TS pattern of the most luminous sources. Such "constrained" simulations should then improve the reliability of the QF scale. Another direction with the same goal could be to improve the description of the background and/or the parameterization of sub-threshold sources in the pixel analyses beyond what is considered in [14]. This may bring the simulated distributions illustrated in figure 1 into even better agreement with the data TS histogram. Finally, alternative metrics to the agreement in the KS sense are also potentially worth exploring.


Figure 1. Descending cumulative TS distribution for the maps corresponding to: real data (black), the 4FGL-DR3 catalogue (gray) and a few realisations of the dN/dS source model (blue).

Figure 2. Fraction of sources in the model K map whose position corresponds to a firing pixel, for different values of TS. See text for details.

Figure 3. Analogous to figure 2, but for differential flux, i.e. number of pixels in a given flux bin. White histograms correspond to the whole catalogue map K, while colored histograms show only those pixels in K coinciding (within the 68% PSF containment angle) with the firing pixels of our dN/dS model map.

Table 1. Fermi Science Tools settings used for the 14-year data set analysis.

Table 2. Required TS* for three values of α (the KS-test confidence level) for the matching between the 4FGL-DR3 catalogue source model and real data (second column, TS*^cat), and the corresponding TS* for the dN/dS simulated sky maps vs real data (remaining columns, TS*^sim), at different values of QF.
It is possible to export a FITS table using the export_fits_table function, for example to reproduce the table provided with the library. For further details on the package usage, please refer to the documentation and examples: https://github.com/aurelio-amerio/gPCS.