
The DECam Ecliptic Exploration Project (DEEP). III. Survey Characterization and Simulation Methods


Published 2024 February 27 © 2024. The Author(s). Published by the American Astronomical Society.
Focus on the DECam Ecliptic Exploration Project. Citation: Pedro H. Bernardinelli et al. 2024 AJ 167 134. DOI: 10.3847/1538-3881/ad1527


Abstract

We present a detailed study of the observational biases of the DECam Ecliptic Exploration Project's B1 data release, as well as survey simulation software that enables direct statistical comparisons between models and our data. We inject a synthetic population of objects into the images and subsequently recover them with the same processing applied to our real detections. This enables us to characterize the survey's completeness as a function of apparent magnitude and on-sky rate of motion. We study the statistically optimal functional form for the magnitude efficiency and develop a methodology that estimates the magnitude and rate efficiencies for all of the survey's pointing groups simultaneously. We have determined that our peak completeness is on average 80% in each pointing group, and that our magnitude efficiency drops to 25% of this value at m25 = 26.22. We describe the freely available survey simulation software and its methodology. We conclude by using it to infer that our effective search area for objects at 40 au is 14.8 deg², and that our lack of dynamically cold distant objects implies that there are at most 8 × 10³ objects with 60 < a < 80 au and absolute magnitudes H ≤ 8.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Our understanding of the trans-Neptunian region has improved significantly since the discovery of the first trans-Neptunian object (TNO) by Jewitt & Luu (1993). Thousands of objects have now been discovered, individual surveys are now capable of finding hundreds of objects (e.g., Petit et al. 2011; Bannister et al. 2016, 2018; Bernardinelli et al. 2020a, 2022), and models of this region make quantitative predictions of the orbital structure, number of objects, and size distribution of this population (see reviews by Morbidelli & Nesvorný 2020; Gladman & Volk 2021). However, such predictions are not immediately testable against the observations from any given survey. Any survey has observational biases, driven primarily by the observational strategy used and the apparent magnitude limits achieved in the exposures. Thus, any comparison of a model to the real trans-Neptunian region (as seen by each survey) must factor in these effects.

Survey simulators have become widespread tools for testing these population models against observations (Jones et al. 2006; Lawler et al. 2018a; Bernardinelli et al. 2022). These tools provide a standardized way to estimate the discovery biases of a survey for any given synthetic object as a function of orbital elements, magnitude, colors, and light-curve properties, without the need to directly inject objects into the survey images, which can often be prohibitively expensive in terms of computation time, especially if multiple models are being tested. A detailed understanding of these biases means that one can not only test the populations of detected objects but also place meaningful limits given survey nondetections in a region of parameter space. Examples in the literature include, but are not limited to, tests for clustering of high semimajor axis (a) and high perihelion (q) trans-Neptunian objects (Shankman et al. 2017; Bernardinelli et al. 2020b; Napier et al. 2021), limits on the discoverability of distant planets in the Solar System (Belyakov et al. 2022), tests of the structure of the classical Kuiper Belt (Petit et al. 2011; Kavelaars et al. 2021; Bernardinelli et al. 2022), and size distribution calculations (Lawler et al. 2018b).

The DECam Ecliptic Exploration Project (DEEP; Trilling et al. 2024) is an ongoing survey using the Dark Energy Camera (DECam; Flaugher et al. 2015) dedicated to finding faint trans-Neptunian objects with digital tracking, using a carefully chosen observational strategy (Trujillo et al. 2024) that maximizes the discovery potential of low-inclination Kuiper Belt objects. The survey consists of a series of "pointing groups" or "long stares," with a nominal observing strategy of 100 images per night in an expanding cone-shaped pattern that follows the typical motion of the TNO population (see Figure 2 of Trujillo et al. 2024). The survey covers four fields (A0, A1, B0, and B1) that started being imaged in 2019. The analysis presented here uses data from the B1 field taken between 2019 and 2021. Applications of the techniques presented here to the other DEEP fields are forthcoming. This observational strategy enables the DEEP project to reach a unique combination of depth and area, which will enable us to probe the absolute magnitude distribution of TNOs as a function of subpopulation with high statistical significance at magnitudes fainter than mr ≈ 25, beyond the limits of current (Bannister et al. 2018; Bernardinelli et al. 2022) and upcoming (Ivezić et al. 2019) large-area surveys (see Trilling et al. 2024, for a detailed description of the DEEP survey goals).

In Section 2, we present a synthetic population of objects injected into the DEEP images, and in Section 3, we discuss how this population can be used to characterize the survey's completeness as a function of magnitudes and orbital elements. We then use these results to construct a survey simulator, presented in Section 4 and publicly released on GitHub. Section 5 presents two quantitative analyses of "toy models" of the trans-Neptunian region to demonstrate the power of the survey simulator, and we conclude in Section 6 by outlining expected improvements.

2. Synthetic Population Injected in the DEEP Data

The DEEP images were processed using the GPU-based KBMOD algorithm (Whidden et al. 2019; Smotherman et al. 2021). A detailed presentation of the full object recovery methodology is given in Smotherman et al. (2024; see also Napier et al. 2024). The detection "tracklets" (a pair of sky coordinates derived from the digital tracking procedure, projected to the beginning and end of the exposures of each long stare) were subsequently linked into orbits. In Smotherman et al. (2024), we demonstrate that this recovery procedure leads to high-confidence recoveries of both injected fakes and known objects, yielding high-quality orbit fits (σa/a ≈ 10⁻⁴) and negligible contamination. The digital tracking procedure is conducted on a two-dimensional grid of rates of motion r and position angles ϕ, where positive ϕ indicates decreasing ecliptic longitude. In the results presented here, our search was limited to 90 < r < 400 px day⁻¹ (1 DECam pixel is 0.″263) and ∣ϕ∣ < 45°. As seen in Figure 1, this choice exhausts the parameter space of bound orbits from 20 to 80 au, including the entirety of the Kuiper Belt. We note that objects at slightly higher or lower rates can still be recovered by the search, as the joint fit procedure of Smotherman et al. (2021) corrects the rates of objects somewhat outside the searched range, provided they still rise above the detection threshold by producing a meaningful (but potentially trailed) stack.
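As a rough illustration (not part of the DEEP pipeline), the search-rate window can be converted to angular rates and compared with the approximate apparent rate of a TNO at opposition, which is dominated by Earth's reflex motion; the circular-orbit, at-opposition approximation below is our own simplification, and exposures taken away from opposition yield lower rates.

```python
import numpy as np

PIXEL_SCALE = 0.263           # arcsec per DECam pixel (quoted above)
RATE_MIN, RATE_MAX = 90, 400  # search limits, pixels per day

# Convert the search-rate limits to angular rates.
for rate_px in (RATE_MIN, RATE_MAX):
    rate_arcsec = rate_px * PIXEL_SCALE
    print(f"{rate_px} px/day = {rate_arcsec:.1f} arcsec/day = {rate_arcsec / 24:.2f} arcsec/hr")

def opposition_rate_arcsec_day(d_au):
    """Approximate apparent rate of a TNO on a circular orbit at distance d_au
    observed at opposition, where Earth's reflex motion dominates; exposures
    away from opposition give lower rates (hence the wider search window)."""
    v_earth = 2.0 * np.pi / 365.25       # au/day
    v_tno = v_earth / np.sqrt(d_au)      # circular-orbit speed, au/day
    return (v_earth - v_tno) / (d_au - 1.0) * 206265.0

for d in (30, 40, 80):
    rate = opposition_rate_arcsec_day(d)
    print(f"d = {d} au: {rate:.0f} arcsec/day ~ {rate / PIXEL_SCALE:.0f} px/day")
```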


Figure 1. Histograms of semimajor axis a, eccentricity e, inclination i, and barycentric distance d of the injected fakes. The blue lines represent all injected fakes. The red lines ("recoverable" fakes) represent those with rates 150 < r < 400 px day⁻¹ and mVR < 25.5, well inside the completeness limits of both the rate and magnitude selection functions (as discussed in Section 3.2). The orange lines indicate those fakes that were recovered by the digital tracking procedure. The histograms of a and d were restricted to the region of parameter space where the majority of the objects are found.


We use the LSST Science Pipelines (Jurić et al. 2017) to inject these fakes into the images. Each object was injected at the sky coordinates derived from its orbit at the midpoint of each exposure, using the realistic PSF model derived for that exposure (Smotherman et al. 2024). These fakes were injected after astrometric and photometric calibration of the images and before the creation of coadded templates and differenced images, guaranteeing that our objects have meaningful PSF fluxes that correspond to their nominal magnitudes in that exposure. This also means that any image-processing artifact stemming from the image coaddition and differencing procedures impacts real and injected objects equally. In other words, we have injected a realistic set of synthetic objects into the data that are detected by our detection pipeline along with the real objects, thus allowing us to determine the detection efficiency of our survey.

Our goal is to fully understand the limits of what can be detected in single-night stacks and subsequently linked across other nights in the DEEP survey. This population is therefore not constrained by the expected structure of the trans-Neptunian region, but is instead designed to fully characterize the range of parameters over which we can expect to discover objects. To do this, it is important to understand the efficiencies both in the regions of parameter space where objects have been found and where no objects have been detected, so this population (intentionally) extends beyond the boundaries of the known TNOs. By constructing such a population, instead of, for example, using a full model of the trans-Neptunian region, we ensure that we are not biasing ourselves toward what other projects have seen, which would impact our ability to make statistical statements about our observations. This population is divided into three parts (a sampling sketch for the first component is given after the list):

  • 1.  
    A low-eccentricity (e < 0.3) population in the 30 < a < 80 au range. This accounts for the classical Kuiper Belt, as well as moderate a and high-q objects. a and e are uniformly distributed in this range, and the inclination distribution is $\propto \sin (i)$ in the 0° ≤ i ≤ 60° range, leading to an approximately uniform distribution in i for objects inside the survey footprint. The longitude of ascending node Ω, argument of perihelion ω, and mean anomaly ${ \mathcal M }$ are uniformly distributed in the [0°, 360°] range.
  • 2.  
    An "excited" population with a logarithmically distributed in the 20 < a < 2000 au range, and perihelia uniformly distributed between 10 and 100 au. To avoid creating unphysical orbits, objects where we sampled a < q are changed to e = 0. The inclinations are distributed as in the first population, but going up to 90°, and Ω and ω are uniformly distributed. Objects with a > 150 au and q > 30 au are limited to $| { \mathcal M }| \leqslant 30^\circ $, to ensure that these objects are not generated at large heliocentric distances. This choice of parameters leads to a population that reproduces the range of parameters for the scattering and detached TNO populations.
  • 3.  
    A fully isotropic population that exhausts the parameter space of all bound orbits following the prescription of Bernardinelli et al. (2020a). This population covers the distance range between 25 and 1000 au, with half of the synthetic objects uniformly distributed between 25 < d < 80 au, and the other half logarithmically distributed between 80 < d < 1000 au. This population includes objects such as putative distant planets (e.g., Batygin et al. 2019) as well as comets and other highly eccentric orbits at Kuiper Belt distances (e.g., Bernardinelli et al. 2021).
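As an illustration of how the first component of this population could be drawn (a minimal sketch under our own conventions; the function name and use of numpy are not taken from the DEEP code), the inclination law p(i) ∝ sin(i) is sampled by inverting its cumulative distribution:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def sample_low_e_population(n, i_max_deg=60.0):
    """Sketch of component 1: uniform a in [30, 80] au, uniform e in [0, 0.3],
    p(i) proportional to sin(i) up to i_max, and uniform angles."""
    a = rng.uniform(30.0, 80.0, n)
    e = rng.uniform(0.0, 0.3, n)
    # Inverse-CDF sampling of p(i) ~ sin(i) truncated at i_max:
    # F(i) = (1 - cos i) / (1 - cos i_max)  =>  i = arccos(1 - u (1 - cos i_max))
    i_max = np.radians(i_max_deg)
    u = rng.uniform(0.0, 1.0, n)
    inc = np.degrees(np.arccos(1.0 - u * (1.0 - np.cos(i_max))))
    Omega, omega, M = (rng.uniform(0.0, 360.0, n) for _ in range(3))
    return dict(a=a, e=e, inc=inc, Omega=Omega, omega=omega, M=M)

population = sample_low_e_population(100_000)
```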

This population is illustrated in Figure 1. We note that the histograms of "recoverable" objects (limited to 150 < r < 400 px day⁻¹ and mVR < 25.5) and of "recovered" objects closely follow each other (up to a normalization), and there is no obvious structure in these distributions; i.e., there are no significant dependencies on e and i, while the dependencies on a and d are directly related to the dependency on r (quantified in Section 3). This indicates that our selection function is not dominated by the orbital elements of each object, but rather by the rates and magnitudes. To see that this must be the case, we note that, during the four hours of our long stares, the motion of any given TNO is linear and dominated by the reflex motion of the Earth scaled by the topocentric distance of the object, which suppresses the contribution from the instantaneous barycentric velocity of the object. In Section 3, we discuss how these parameters can be characterized.

These objects intentionally span a wide range of apparent magnitudes (20% of the sample uniformly distributed between 20 < mVR < 24 and 80% between 24 < mVR < 28), significantly beyond the detection limits of the survey, so that we can have a complete understanding of the efficiencies and potential pitfalls in the analyses. These magnitudes are assigned to each object for 2020 January 1, and their magnitudes at other epochs reproduce the expectation given their motion and the motion of the Earth. Half of the objects are also assigned a sinusoidal light curve that changes their magnitude by $\delta m=A\sin (2\pi t/T+{\phi }_{0})$, where the semi-amplitude A is distributed between 0 and 0.5 mag, peaking at A = 0.25 mag. The periods T are uniformly distributed between 2 and 100 hr, with random phases at the 2020 January 1 epoch; we note that, for any period significantly longer than the four hours of the long stare, this means that we sample the object at a fully randomized phase in each long stare. This simplified model enables a reasonable characterization of the effect of the light curves on the discoverability of our synthetic objects, and it reproduces the range of periods and amplitudes seen by Strauss et al. (2024). That is, this choice of magnitude amplitudes broadens our efficiency functions in a realistic manner (fainter objects are found if they are at the peak of their light curves, and conversely, brighter objects are missed if their light curve is at a minimum). All reported magnitudes correspond to the mean magnitude of each object during the corresponding long stare. We defer the full characterization of our light-curve amplitude selection function to a future publication (see also Strauss et al. 2024).
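A minimal sketch of this light-curve assignment follows; the triangular shape used for the amplitude distribution is an assumption made only for illustration, since the text specifies just the 0–0.5 mag range and the 0.25 mag peak.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_lightcurves(n):
    """Assign sinusoidal light curves delta_m = A sin(2 pi t / T + phi0).
    The triangular shape for A is an assumption; the text only specifies a
    0-0.5 mag range peaking at 0.25 mag."""
    A = rng.triangular(0.0, 0.25, 0.5, n)    # semi-amplitude [mag]
    T = rng.uniform(2.0, 100.0, n)           # period [hr]
    phi0 = rng.uniform(0.0, 2.0 * np.pi, n)  # phase at the 2020 January 1 epoch
    return A, T, phi0

def delta_mag(t_hr, A, T, phi0):
    """Magnitude offset at time t_hr (hours past the reference epoch)."""
    return A * np.sin(2.0 * np.pi * t_hr / T + phi0)
```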

3. Estimating the Detection Efficiency

The detectability of an object combines the information on magnitude m and orbital elements ${\boldsymbol{a}}=\{a,e,i,{\rm{\Omega }},\omega ,{ \mathcal M }\}$ by factoring in the detection probabilities p(m) with the survey geometry and digital tracking efficiencies p(a, m ∣ survey geometry, digital tracking), and, for objects linked in multiple nights, the linking probability p(a, {mμ} ∣ linking), which accounts for the probability that an object recovered in multiple pointing groups μ is subsequently linked to an orbit and that such a linkage represents the proper set of detections of the object. While we can easily estimate the magnitude-independent p(a ∣ survey geometry), the digital tracking probability of detection is a function of both magnitude m and rate of motion r. This is distinct from a survey that detects objects in each image, where, since TNOs are typically very close to point sources, detectability is dominated by the limiting magnitude of the exposure for an object inside its effective area. We introduce here a methodology that simultaneously accounts for the magnitude and rate completeness of each pointing group.

3.1. Efficiency as a Function of Magnitude

We begin by estimating the detection efficiency as a function of magnitude for each pointing group. We follow the prescription of Bernardinelli et al. (2020a), where we parameterize the efficiency by a probability of detection p(m ∣ θ) for an object with magnitude m, described by a set of parameters θ. After the synthetic objects have been properly recovered and identified, we index the recovered objects by α and the nonrecovered ones by β. We maximize the likelihood:

${ \mathcal L }(\boldsymbol{\theta }) = {\prod }_{\alpha }\,p({m}_{\alpha }\mid \boldsymbol{\theta })\,{\prod }_{\beta }\left[1 - p({m}_{\beta }\mid \boldsymbol{\theta })\right].$  (1)

This likelihood allows us to estimate the parameters of our chosen efficiency function without having to bin our injected samples.
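A minimal sketch of this unbinned fit is shown below (our own illustration rather than the DEEP fitting code; the choice of scipy's Nelder–Mead optimizer and the clipping of probabilities are assumptions), with the efficiency function left as a callable so that any of the functional forms discussed next can be plugged in.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(theta, m_found, m_missed, efficiency):
    """Negative of Equation (1): recovered fakes contribute ln p(m | theta),
    missed fakes contribute ln[1 - p(m | theta)]."""
    p_det = np.clip(efficiency(m_found, *theta), 1e-12, 1.0 - 1e-12)
    p_mis = np.clip(efficiency(m_missed, *theta), 1e-12, 1.0 - 1e-12)
    return -(np.sum(np.log(p_det)) + np.sum(np.log(1.0 - p_mis)))

# With m_found / m_missed the magnitudes of recovered / lost fakes and
# `efficiency` one of the functional forms discussed below, e.g.:
# result = minimize(neg_log_like, x0=initial_guess,
#                   args=(m_found, m_missed, efficiency), method="Nelder-Mead")
```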

There are many possible choices of functional forms for p(m ∣ θ) (e.g., Gladman et al. 1998; Bernstein et al. 2004; Jones et al. 2006; Bannister et al. 2016; Bernardinelli et al. 2020a). The simplest possible form is a step function, where the only free parameters are the limiting magnitude m0 and the efficiency for m < m0. Smooth functional forms are also possible, with the logistic function

$p(m\mid c,{m}_{50},k) = \dfrac{c}{1 + \exp \left[k\,(m - {m}_{50})\right]}$,  (2)

being one of the simplest possible parameterizations. Here, m50 is the magnitude of 50% completeness, c ∈ [0, 1] is the peak efficiency, and k sets the sharpness of the transition between the "detected" and "not detected" regimes. These parameters are such that p(m) ≈ c for m ≪ m50, p(m50) = c/2, and p(m) → 0 for m ≫ m50. The limit k → ∞ corresponds to a step function. A similar parameterization finds the magnitude of 25% efficiency (m25) by fitting the product of two logistics with transition sharpnesses k1 and k2,

$p(m\mid c,{m}_{25},{k}_{1},{k}_{2}) = \dfrac{c}{\left\{1 + \exp \left[{k}_{1}(m - {m}_{25})\right]\right\}\left\{1 + \exp \left[{k}_{2}(m - {m}_{25})\right]\right\}}.$  (3)

We also posit a third model, fit as a sanity check against overfitting, that uses three logistics and has a characteristic magnitude m12.5 of 12.5% completeness. We note that this methodology extends to other functional forms beyond the ones presented here (see, e.g., Bannister et al. 2016, for a functional form that quantifies lost bright objects).
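In code, the logistic and multi-logistic forms can be written as below (a sketch based on our reconstruction of Equations (2) and (3); the exact parameterization of the triple-logistic model is our assumption), and any of them can serve as the `efficiency` callable in the likelihood sketch above.

```python
import numpy as np

def p_logistic(m, c, m50, k):
    """Equation (2): peak efficiency c, 50% magnitude m50, sharpness k."""
    return c / (1.0 + np.exp(k * (m - m50)))

def p_double_logistic(m, c, m25, k1, k2):
    """Equation (3) as reconstructed above: two logistic factors, each equal
    to 1/2 at m25, so that p(m25) = c/4 (25% of the peak)."""
    return c / ((1.0 + np.exp(k1 * (m - m25))) * (1.0 + np.exp(k2 * (m - m25))))

def p_triple_logistic(m, c, m125, k1, k2, k3):
    """Three-factor analog with a characteristic 12.5% magnitude m125; the
    exact parameterization used in the paper may differ."""
    return c / ((1.0 + np.exp(k1 * (m - m125)))
                * (1.0 + np.exp(k2 * (m - m125)))
                * (1.0 + np.exp(k3 * (m - m125))))
```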

To decide the optimal functional form, we use the Bayesian information criterion (BIC; Schwarz 1978) test on the maximum likelihood ${ \mathcal L }$. For Npar model parameters and Ndata data points, we have that the optimal model minimizes

$\mathrm{BIC} = {N}_{\mathrm{par}}\,\mathrm{ln}\,{N}_{\mathrm{data}} - 2\,\mathrm{ln}\,{ \mathcal L }.$  (4)

Because we are interested in the general behavior of our efficiency function, we perform this test by fitting one p(m) to the entire set of synthetic objects, that is, we estimate an approximate selection function for all long stares. We restrict the data to 150 < r < 400 px day⁻¹, a region where the dominant cause of object loss is the magnitude selection function. Of the three functional forms for p(m) presented here, Equation (3) has the lowest BIC, 8055.14, while Equation (2) and the triple-logistic model have BIC = 8071.99 and 8061.12, respectively. Following Kass & Raftery (1995), the difference between two BIC values approximates twice the natural logarithm of the Bayes factor between the two models, and therefore, among a set of models (model 1, model 2, and model 3), we can estimate the odds that a given model best fits our data. If we call model i the model with the lowest BIC, then the probability ${ \mathcal O }_{i}$ that this model best fits the data is approximately

${ \mathcal O }_{i} \approx \dfrac{\exp \left(-{\mathrm{BIC}}_{i}/2\right)}{{\sum }_{j}\exp \left(-{\mathrm{BIC}}_{j}/2\right)}.$  (5)

The odds against Equation (2) fitting the data better than Equation (3) are approximately 4570:1, and the odds against the triple-logistic model fitting the data better than Equation (3) are approximately 20:1. We have therefore confirmed that the model given by Equation (3) is likely the best of the three considered here for our application.
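The quoted odds follow directly from the BIC differences; below is a short check using the values above (Equation (4) for the definition, and the approximation of Equation (5)).

```python
import numpy as np

def bic(n_par, n_data, ln_like_max):
    """Equation (4): BIC = Npar ln(Ndata) - 2 ln(L_max)."""
    return n_par * np.log(n_data) - 2.0 * ln_like_max

# BIC values quoted above for Equation (3), Equation (2), and the
# triple-logistic model, in that order:
bics = np.array([8055.14, 8071.99, 8061.12])
odds_against = np.exp((bics - bics.min()) / 2.0)
print(odds_against)   # ~ [1, 4566, 20]: the odds quoted in the text

# Approximate model probabilities (Equation (5)), shifted for numerical stability:
probs = np.exp(-(bics - bics.min()) / 2.0)
probs /= probs.sum()
```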

However, we stress that the BIC is an approximate metric and that the choice among these three models does not qualitatively alter our characterization, as the m50 and m25 models predict similar drop-off points. Converting the best-fit m25 = 26.22 from Equation (3) to an m50 value gives m50 = 25.96, while fitting Equation (2) directly yields m50 = 25.93. Our results are therefore only weakly sensitive to the choice of p(m). From now on, we will assume p(m) is given by Equation (3), but our results and methodology are independent of this choice.

3.2. Joint Rate and Magnitude Selection Function

The selection function for each pointing group is a function not only of the source magnitude m, but also of the rate of motion r; we assume the rate dependence has the same functional form and parameters for all pointing groups. Thus, fits of Equations (2) and (3) alone do not yield a meaningful completeness c: in principle, one can add an arbitrary number of bright objects outside the rate range in which objects are meaningfully detected, or reduce the sample to a predefined range of rates. On the other hand, attempting to fit a rate completeness function requires the data to be limited to m ≪ m25, also necessarily reducing the sample size. We propose a methodology that accounts for both rate and magnitude selection functions simultaneously. We posit a simple smooth functional form for the rate completeness,

${p}_{r}(r\mid {r}_{50,1},{r}_{50,2},{\kappa }_{1},{\kappa }_{2}) = \begin{cases}{\left\{1 + \exp \left[{\kappa }_{1}(r - {r}_{50,1})\right]\right\}}^{-1}, & r \lt {r}_{0}\\ {\left\{1 + \exp \left[{\kappa }_{2}(r - {r}_{50,2})\right]\right\}}^{-1}, & r \geqslant {r}_{0},\end{cases}$  (6)

corresponding to a logistic function at each end of the distribution (with κ1 < 0 and κ2 > 0), each with its own rate of 50% completeness. The constant r0 determines the transition between the two regimes and can be chosen prior to the fit without loss of constraining power. In principle, we could attempt a statistical exercise similar to that of the previous section to determine the optimal functional form for this function. In practice, the recovery efficiency drops to zero quite steeply, so we can safely posit this simpler form. We assume that this rate selection function is the same for all pointing groups, as the underlying digital tracking algorithm in kbmod does not change between processings.

The combined rate and magnitude selection function becomes

$p(m,r\mid {\boldsymbol{\theta }}_{\mu },\boldsymbol{\varphi }) = p(m\mid {c}_{\mu },{m}_{25,\mu },{k}_{1,\mu },{k}_{2,\mu })\;{p}_{r}(r\mid {r}_{50,1},{r}_{50,2},{\kappa }_{1},{\kappa }_{2}).$  (7)

As before, c corresponds to the peak efficiency (when m ≪ m25 and r50,1 < r < r50,2), the parameter range where objects are systematically recovered. To simplify the notation, we index each pointing group by μ and define θμ ≡ {m25,μ, cμ, k1,μ, k2,μ} and φ ≡ {r50,1, r50,2, κ1, κ2} (noting that φ is independent of pointing group). We generalize the likelihood in Equation (1) by including the rate in addition to the magnitude of each implanted source and by multiplying over all pointing groups, so

${ \mathcal L }(\{{\boldsymbol{\theta }}_{\mu }\},\boldsymbol{\varphi }) = {\prod }_{\mu }\left\{{\prod }_{\alpha \in \mu }p({m}_{\alpha },{r}_{\alpha }\mid {\boldsymbol{\theta }}_{\mu },\boldsymbol{\varphi })\;{\prod }_{\beta \in \mu }\left[1 - p({m}_{\beta },{r}_{\beta }\mid {\boldsymbol{\theta }}_{\mu },\boldsymbol{\varphi })\right]\right\}.$  (8)

Maximizing this ${ \mathcal L }$ achieves our goal of obtaining a meaningful {cμ} as well as characterizing the global rate selection function. One way of understanding Equation (8), then, is that for any given long stare, the rate selection function is informed by all other long stares, and so its constraints are stronger than those that could be derived from any individual long stare, mitigating the joint dependency of the magnitude selection function on the rates of the implanted sources. That is, we do not need as many sources in each (m, r) bin as we would if we were trying to reconstruct the joint rate and magnitude selection functions for each individual long stare. We illustrate this procedure in Figure 2.
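A sketch of this joint fit is given below (an illustration under our reconstructions of Equations (6)–(8), not the DEEP code; the pivot rate r0 and the parameter packing are placeholders).

```python
import numpy as np

def p_rate(r, r50_1, r50_2, kappa1, kappa2, r0=275.0):
    """Equation (6) as reconstructed: a rising logistic (kappa1 < 0) below the
    pivot r0 and a falling logistic (kappa2 > 0) above it. The pivot r0 is
    fixed before the fit; 275 px/day is a placeholder value."""
    low = 1.0 / (1.0 + np.exp(kappa1 * (r - r50_1)))
    high = 1.0 / (1.0 + np.exp(kappa2 * (r - r50_2)))
    return np.where(r < r0, low, high)

def p_joint(m, r, theta_mu, phi):
    """Equation (7): per-pointing-group magnitude term times the global rate
    term. theta_mu = (c, m25, k1, k2); phi = (r50_1, r50_2, kappa1, kappa2)."""
    c, m25, k1, k2 = theta_mu
    p_m = c / ((1.0 + np.exp(k1 * (m - m25))) * (1.0 + np.exp(k2 * (m - m25))))
    return p_m * p_rate(r, *phi)

def joint_neg_log_like(thetas, phi, groups):
    """Sum of -ln L over pointing groups (Equation (8)). `groups` maps each
    pointing-group index mu to arrays (m_found, r_found, m_missed, r_missed)."""
    total = 0.0
    for mu, (m_f, r_f, m_m, r_m) in groups.items():
        p_found = np.clip(p_joint(m_f, r_f, thetas[mu], phi), 1e-12, 1.0 - 1e-12)
        p_missed = np.clip(p_joint(m_m, r_m, thetas[mu], phi), 1e-12, 1.0 - 1e-12)
        total -= np.sum(np.log(p_found)) + np.sum(np.log(1.0 - p_missed))
    return total
```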


Figure 2. Left: completeness vs. magnitude for all long stares (solid blue) as well as the overall completeness for the survey (solid red), where all pointing groups are fit to the same probability curve. The vertical dashed lines indicate the best-fit m25 for each long stare (blue) and for the entire data set (red). The dotted lines represent the m25 that each individual exposure in the long stare (blue) and in the survey (red) would have if the data were not stacked. The green dots are the true binned recovery rates averaged over all long stares, with objects limited to r50,1 < r < r50,2 and Poisson error bars. Right: rate and magnitude for all synthetic sources (recovered in red and missed in blue) in our long stare with the deepest m25. The shaded gray curves represent the completeness in Equation (7), and the horizontal dashed purple lines represent the 50% rate completeness terms (r50,1 and r50,2).


4. Simulation Methodology

The characterization in the previous sections established that, for any given TNO inside our survey area, discoverability is a function of the object's on-sky motion (the rate) and magnitude. Thus, these are the primary parameters that our survey simulator needs in order to decide which objects are recovered. For any given object, the simulation proceeds as follows (a condensed sketch of this per-object logic appears after the list):

  • 1.  
    The object's orbital elements or state vector are integrated to all exposure times for the survey, and on-sky positions are generated. These positions are then used to check whether an object could be seen by a functional DECam CCD with this pointing. This properly accounts for chip gaps, and so this information is also used to decide whether this synthetic object would lie inside its search CCD for long enough to be properly recovered by the digital tracking methodology. It should be noted that only the synthetic objects that have successfully passed this check are included in the procedure of Section 3.
  • 2.  
    For each individual observation that could be recovered by the digital tracking procedure in a given long stare μ, we compute its rate r and mean magnitude m given the parent object's light curve (which can be constant), and sample a random uniform number u ∈ [0, 1]. If p(m, r ∣ θ μ , φ ) > u, the object is considered to be recovered. Because 0 ≤ p(m, r ∣ θ μ , φ ) ≤ cμ , even bright objects have a ∼20% chance of being discarded (as 〈cμ 〉 ≈ 0.8), thus properly accounting for the area lost in the digital tracking procedure.
  • 3.  
    We compute the discoverability of the orbit given its arc (the time span between the first and last observations); the shortest arc after dropping either the first or the last night of data; and the number of nights in which the object was detected. These inform whether the object would be linked or not. For the processing of Smotherman et al. (2024), an object is linked with high efficiency (94%) if it was observed in at least four nights, with complete arcs spanning 0.8 yr (0.5 yr for the shortened arcs). We sample another uniform number u ∈ [0, 1], and compare to the linking efficiency ≈ 94%, to determine whether the object would be properly linked. This accounts for objects with the proper number of observations but whose detections are astrometric outliers (that is, the recovered positions do not agree with the nominal position for the object) or objects whose orbit fits yield poor results, as measured by their reduced χ² (see Smotherman et al. 2024 for a detailed discussion).
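The per-object logic of steps 2 and 3 can be condensed as follows (a sketch with hypothetical data structures; the night-counting and arc bookkeeping here are our simplification of the criteria above).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
LINKING_EFFICIENCY = 0.94           # quoted linking efficiency
MIN_NIGHTS = 4                      # nights required for linking
MIN_ARC, MIN_SHORT_ARC = 0.8, 0.5   # required arcs, in years

def simulate_object(detections, selection_fn):
    """Condensed sketch of steps 2 and 3. `detections` holds one record per
    long stare (keys "m", "r", "t" for mean magnitude, rate, and time in
    years) that already passed the CCD/geometry check of step 1;
    `selection_fn(d)` returns the combined completeness p(m, r) of
    Equation (7) for that long stare."""
    # Step 2: keep each single-night detection with probability p(m, r).
    recovered = [d for d in detections if rng.uniform() < selection_fn(d)]
    if len(recovered) < MIN_NIGHTS:
        return False
    # Step 3: arc-length requirements, then the overall linking efficiency.
    t = np.sort([d["t"] for d in recovered])
    arc = t[-1] - t[0]
    short_arc = min(t[-1] - t[1], t[-2] - t[0])  # drop first or last night
    if arc < MIN_ARC or short_arc < MIN_SHORT_ARC:
        return False
    return rng.uniform() < LINKING_EFFICIENCY
```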

This simulation is implemented as an extension of DESTNOSIM (Bernardinelli et al. 2022), the survey simulator initially developed for the Dark Energy Survey (DES) search for TNOs (Bernardinelli et al. 2020a, 2022), and it makes use of the dynamics package Spacerocks (Napier 2020) to produce the ephemerides of the synthetic objects. The modular nature of the software allows characterization of any processing of DEEP, as well as of any similar survey, such as DES. A full tutorial, as well as documentation and scientific applications of the software, are publicly available in a GitHub repository. We encourage users to extend the capabilities of the software, as well as to publicly release their analyses alongside their publications, ensuring that their results are reproducible by the community.

5. Applications

As an initial demonstration of the software, we study two simple scenarios with toy populations that quantitatively illustrate its simulation capabilities.

5.1. Effective Search Area

The 2019–2021 data from the DEEP B1 field include 10 distinct DECam pointings (see Figure 3). Using the camera's nominal field of view (3 deg²), it would seem that our search area for multiyear orbits is ≈30 deg². This estimate is, however, not accurate: not all fields were observed in all years, not all of DECam's CCDs are functional, and a fraction of the area is lost due to the survey's observing geometry (the combination of fields, cadence, and camera footprint). The observing strategy of DEEP is such that fields are imaged in an expanding "cone" pattern: in 2019, the B1a, B1b, and B1c fields were visited; in 2020, the B1a through B1f fields were observed; and all fields were observed in 2021.


Figure 3. Sky coordinates of the B1 fields. The shaded regions indicate the observing coverage of each pointing, with the colors representing the observing years that covered each pointing (red: 2019–2021, orange: 2020–2021, yellow: 2021). The blue dots show a representative sample (20%) of the TNOs that could be successfully linked, displayed at their nominal sky positions on 2020 January 1 (the reference epoch). The dotted (dashed) line marks the location of the ecliptic (invariable) plane. We note that few objects in fields B1g–B1j are recovered: this is expected, as these fields were not observed in 2019 and 2020, so objects in these fields have not yet been imaged often enough to be linkable in the data.


We use the simulator to estimate the "effective" search area, that is, the area on the sky where any potential bound orbit could be observed by DEEP and subsequently recovered by the linking procedure. For these purposes, we ignore the magnitude and rate selection functions, as our interest is in the geometrical design of the survey. We simulate an isotropic, all-sky population of objects as defined above, but we fix their initial distance to 40 au (note that, as defined here, this effective area is distance dependent, as we are accounting for the motion of the object). The procedure of Bernardinelli et al. (2020a) distributes orbits on a spherical Fibonacci lattice, and so the fraction of the points we recover is a proxy for the area on the sphere that is available for objects to be searched (see González 2010 for a detailed explanation of these lattices). We simulated 4 × 10⁷ TNOs, and of these, 14,408 satisfy the linking criteria of Section 4, leading to Aeff ≈ 14.8 deg². We emphasize that this effective area will change as the survey progresses.
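The quoted effective area follows from the recovered fraction of the isotropic, Fibonacci-lattice population scaled to the full sky; below is a short check of the arithmetic (our own, with the full-sky area in square degrees).

```python
# Effective-area arithmetic: the recovered fraction of an isotropic,
# Fibonacci-lattice population scaled to the full sky.
FULL_SKY_DEG2 = 41_253.0              # ~ 4 pi steradians in square degrees
n_simulated, n_linked = 4.0e7, 14_408
a_eff = n_linked / n_simulated * FULL_SKY_DEG2
print(f"A_eff ~ {a_eff:.1f} deg^2")   # ~ 14.9 deg^2, close to the quoted ~14.8 deg^2
```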

5.2. Limits on a Distant, Dynamically Cold Population

There are few objects known in the outer Kuiper Belt, defined as having a larger than Neptune's 2:1 mean motion resonance and low eccentricities (e ≲ 0.24; Gladman & Volk 2021). While the high-inclination component of this population extends as far as the 3:1 resonance (Sheppard et al. 2016; Bernardinelli et al. 2022), this component is thought to be created by a combination of mean motion resonances and the Kozai effect (Gomes et al. 2008), and no dynamically cold (e ≤ 0.24 and i ≤ 5°) objects with a > 60 au are known. In Smotherman et al. (2024), we recovered 110 TNOs, primarily cold Classicals (low-inclination, low-eccentricity objects between Neptune's 3:2 and 2:1 mean motion resonances; see Figure 7 of Smotherman et al. 2024). In agreement with other surveys, we also did not detect any objects in this high-a, low-e, and low-i region of parameter space, so we can use our simulator to place limits on the existence of a distant thin disk of objects past this a range. We emphasize that this is not the first time such limits have been placed (Trujillo & Brown 2001; Trujillo et al. 2001; Allen et al. 2002; Bernstein et al. 2004), but no limits with our combination of area and magnitudes have been placed to date, and such a population has not been entirely ruled out (see Section 6.2.3 of Gladman & Volk 2021).

We simulate a population similar to the stirred classical Kuiper Belt population of Petit et al. (2011), following their $\propto a^{-5/2}$ surface density for the Kuiper Belt, but moving it further out to the 60 ≤ a ≤ 80 au range. The other parameters follow Petit et al. (2011): e uniformly distributed between 0 and 0.03; $i\propto \sin (i)\exp (-{i}^{2}/(2{\sigma }_{i}^{2}))$, where σi = 2.6°; and $\omega ,{\rm{\Omega }},{ \mathcal M }$ uniform in the 0°–360° range. We simulate this population in the 24 ≤ m ≤ 26 range, corresponding roughly to 5 < H < 8 at these distances. It is worth noting that, for m ≪ 24, our magnitude efficiency is constant, and our population estimates are limited by our survey area, so we do not need to simulate brighter objects. We measure the fraction of members of this population recovered in each magnitude bin mb. This allows us to compute our probability of detecting an object, p(mb). Assuming a size distribution, we can convolve it with our detection probability p(mb) to place limits on the number of objects with absolute magnitude H ≤ Hb. Using Poisson statistics, an undetected population whose simulation leads to three detections on average can be rejected with 95% confidence, as the Poisson error bar is $\sqrt{3}$ for such a process. Assuming the Fraser et al. (2014) size distribution for the cold Classical population, namely, a broken power law with slope 1.5 for the larger objects and 0.3 for the smaller objects, with the break at H = 6.9, we find N(H ≤ 8) ≤ 8 × 10³ objects in this region of the outer Solar System. It is important to note that any size distribution shallower than that of Fraser et al. (2014) would lead to a stronger constraint, that is, a smaller number of objects. A more useful limit would be for a ∼ 90 au (Gladman & Volk 2021), but our completeness drops sharply beyond d ≈ 80 au (see Figure 1). Further data releases of the DEEP survey will attempt to recover and place limits on these more distant populations.
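The logic of this limit can be sketched as follows (an illustration only: the per-bin detection probabilities are placeholders to be replaced by the simulator's output, and only the quoted slopes, break magnitude, and three-detection threshold come from the text).

```python
import numpy as np

def differential_counts(H, alpha_bright=1.5, alpha_faint=0.3, H_break=6.9):
    """Broken power law in absolute magnitude with the Fraser et al. (2014)
    slopes and break, continuous at H_break (arbitrary normalization)."""
    bright = 10.0 ** (alpha_bright * H)
    faint = 10.0 ** (alpha_faint * H + (alpha_bright - alpha_faint) * H_break)
    return np.where(H <= H_break, bright, faint)

# Placeholder H bins spanning 5 <= H <= 8 and placeholder per-bin detection
# probabilities p(m_b); in practice these come from the survey simulator.
H_bins = np.linspace(5.0, 8.0, 13)
p_detect = np.full_like(H_bins, 0.1)

weights = differential_counts(H_bins)
weights /= weights.sum()                          # shape of the H distribution
expected_per_object = np.sum(weights * p_detect)  # detection probability per object
# 95% (Poisson) limit: a population is excluded once it predicts ~3 detections.
N_limit = 3.0 / expected_per_object
print(f"N(H <= 8) upper limit ~ {N_limit:.0f} (with placeholder inputs)")
```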

6. Summary

We constructed a survey simulator for the DEEP survey via injection of a population of synthetic outer Solar System objects that exhausts the parameter space of orbits at Kuiper Belt distances (and beyond) into the DEEP images. This software, publicly released on GitHub, enables the user to characterize our observational biases as a function of magnitude and rate of motion for each pointing group, and of the survey as a whole. Such a construction enables further studies of recoverability of populations of trans-Neptunian objects by the survey, and thus detailed comparisons between models of the structure of the trans-Neptunian region and the TNOs observed by the DEEP survey. In addition to the software release, we will provide real-world use cases in the form of Jupyter Notebooks, ensuring that our analyses are reproducible by the community.

In this first release, we focused on the 2019–2021 data from the B1 fields (Smotherman et al. 2024) processed through kbmod (Whidden et al. 2019; Smotherman et al. 2021), but the methodology developed here is general and can be extended to any survey that uses digital tracking to recover TNOs. Our lack of dynamically cold objects in distant (a > 60 au) orbits, combined with the survey simulator, enables us to conclude that this yet-undetected potential population has at most 8 × 10³ members with H ≤ 8.

Acknowledgments

This work is based in part on observations at Cerro Tololo Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID 2019A-0337; PI: D. Trilling), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.

This work is supported by the National Aeronautics and Space Administration under grant No. NNX17AF21G issued through the SSO Planetary Astronomy Program and by the National Science Foundation under grants No. AST-2009096 and AST-2107800. This research was supported in part through computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor. This work used the Extreme Science and Engineering Discovery Environment (XSEDE; Towns et al. 2014), which is supported by National Science Foundation grant No. ACI-1548562. This work used the XSEDE Bridges GPU and Bridges-2 GPU-AI at the Pittsburgh Supercomputing Center through allocation TG-AST200009.

H.S. acknowledges support by NASA under grant No. 80NSSC21K1528 (FINESST). H.S., M.J., and P.H.B. acknowledge the support from the University of Washington College of Arts and Sciences, Department of Astronomy, and the DiRAC Institute. The DiRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences and the Washington Research Foundation. M.J. wishes to acknowledge the support of the Washington Research Foundation Data Science Term Chair fund, and the University of Washington Provost's Initiative in Data-Intensive Discovery.

This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the US Department of Energy, the US National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute for Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.

The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Enérgeticas, Medioambientales y Tecnológicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de Física d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, the Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
