THE DIFFERENCE IMAGING PIPELINE FOR THE TRANSIENT SEARCH IN THE DARK ENERGY SURVEY


Published 2015 November 9 © 2015. The American Astronomical Society. All rights reserved.
Citation: R. Kessler et al. 2015 AJ 150 172. doi:10.1088/0004-6256/150/6/172


ABSTRACT

We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg2 fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ∼130 detections per deg2 per observation in each band, of which only ∼25% are artifacts. Of the ∼7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ∼30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 "shallow" fields with single-epoch 50% completeness depth ∼23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 "deep" fields with mag-depth ∼24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.


1. INTRODUCTION

The discovery of the accelerating expansion of the universe (Riess et al. 1998; Perlmutter et al. 1999) using Type Ia supernovae (SNe Ia) has greatly motivated ever larger transient searches in broadband imaging surveys. The associated search pipelines have become increasingly complex in distributing enormous computing tasks needed to rapidly find new transients for spectroscopic observations, and in processing a wide range of data quality.

A new era of transient searches began in the early 2000s with "rolling searches" in which the same telescope is used for discovering new objects and providing precise photometric measurements of the light curve in multiple passbands. To collect large SN Ia samples for measuring cosmological parameters, the earliest rolling searches included the Supernova Legacy Survey (SNLS: Astier et al. 2006; Perrett et al. 2010), ESSENCE (Miknaitis et al. 2007), and the Sloan Digital Sky Survey-II (SDSS-II: Frieman et al. 2008; Sako et al. 2008). Each of these surveys discovered many hundreds of SNe Ia, about half of which were spectroscopically confirmed. The next generation of rolling searches includes the recently completed Pan-STARRS1 (Kaiser et al. 2002), the ongoing Dark Energy Survey (DES: Bernstein et al. 2012), and the Large Synoptic Survey Telescope (LSST: Ivezic et al. 2008; LSST Science Collaboration et al. 2009), expected to begin in the next decade. Another advantage of these rolling searches is that there is a complementary wide-area survey with the same instrument; this benefits the absolute calibration by including dithered exposures over the SN fields to inter-calibrate the CCDs, regular observations of standard-star fields, and measurements of the telescope and atmospheric transmission functions.

The goal of this paper is to describe the difference-imaging pipeline (DiffImg) used to discover point-source transients in DES. We present detailed performance results of DiffImg for single-epoch detections, and for the redshift dependence of discovering and classifying SN Ia light curves. While the search strategy was optimized to find SNe Ia to build a Hubble diagram for measuring dark energy properties (Bernstein et al. 2012), DiffImg does not depend on the transient type. In addition to SNe Ia, our DiffImg has found many other transient types including core-collapse SNe (CC SNe), super-luminous SNe (SLSNe: Papadopoulos et al. 2015), active galactic nuclei (AGNs), Kuiper Belt objects (KBOs: Gerdes et al. 2015), and a possible tidal disruption event (Rees 1988; Foley et al. 2015).

The challenges for DiffImg are to produce a high quality subtracted image for each search image by subtracting a deep coadded template, reject a large number of non-astrophysical detections (artifacts) in the subtracted images, develop a workflow to process each night of data in less than a day, and monitor the performance well enough to uncover subtle problems and to determine efficiencies and biases for science analyses. We use publicly available codes for the core routines needed to determine an astrometric solution, co-add exposures, measure the point-spread function (PSF), align template and search images, perform the subtractions, and fit light curves to a series of SN templates for classification. In addition to these existing codes, we have developed new software tools for automated scanning of subtracted images (Goldstein et al. 2015, hereafter G15), detailed monitoring based on artificial SNe overlaid on images, and a workflow to distribute jobs on arbitrary computing platforms.

The essential monitoring element is to inject fake SNe Ia onto galaxies in real images (hereafter called "fakes"). The Supernova Cosmology Project used fakes to monitor the efficiency of human scanners in the real-time SN Ia search (Pain et al. 2002), and also to measure the analysis efficiency as part of the SN Ia rate measurement. Fakes were later used for real-time monitoring in the SDSS-II Supernova search (Dilday et al. 2008) to measure the efficiency of the detection pipeline and human scanning. The Nearby Supernova Factory moved stars on the image to serve as fake transients; they monitored their single-epoch detection efficiency and trained their machine learning method that was used to reject large numbers of subtraction artifacts (Bailey et al. 2007). SNLS used fakes in an offline analysis (Perrett et al. 2010, hereafter P10) to measure their efficiencies and selection biases that impact the Hubble diagram. In DES-SN the fakes are used to (1) monitor the detection depth, (2) monitor the single-epoch detection efficiency from the fraction of fakes that are detected, (3) monitor the efficiency for multiple detections that are required for spectroscopic targeting and science analysis, (4) train the automated scanning software (G15), and (5) characterize the DiffImg performance for a fast Monte Carlo (MC) simulation.

Compared to the use of fakes in previous surveys, an improvement in DES-SN is that the ideal efficiency is predicted from the signal-to-noise ratio (S/N) of the flux measurements, and thus the DiffImg performance can be rigorously evaluated by comparing the predicted and fake efficiencies. This prediction, as a function of fake SN redshift, comes from a fast MC simulation that computes realistic light curves without using images or pixels. The fast MC simulation analytically computes the light curve fluxes and their uncertainties using input from observed conditions, and also from key DiffImg properties derived from the fakes. The agreement (or lack of) between the predicted and measured efficiency provides a robust measure of the DiffImg performance.

There is another practical motivation for using a fast MC simulation to validate the point-source DiffImg efficiency with fakes. Typical science analyses require large SN simulations that are repeated many times for development, evaluation of systematic uncertainties, and estimates of contamination from CC SNe. Ideally, such simulations would be similar to the fakes in which calculated light curves are overlaid on CCD images and processed with DiffImg. The CPU resources for so many image-based simulations, however, would be quite enormous. On the other hand, the fast MC simulation in SNANA (Kessler et al. 2009) can generate close to 10² light curves per second on a single core, which is five orders of magnitude faster than the ideal image-based simulation. Our goal, therefore, is to use a single realization of fakes to characterize the DiffImg performance for the fast MC simulation; the fast MC can then be used to rapidly generate samples of point-source transients with the same efficiencies and uncertainties as an image-based simulation. Although only one transient type (SN Ia) is used to generate fakes for image overlays and DiffImg processing, the resulting fast MC simulation can in principle be used for any SN type, and more generally for any point-source transient.
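To make the analytic calculation concrete, the sketch below (Python; the function name and inputs are illustrative and this is not the SNANA implementation) converts a model magnitude to CCD counts using the image zero point and estimates the flux uncertainty from source Poisson noise plus sky noise over the noise-equivalent area of an assumed Gaussian PSF.

    import numpy as np

    def simulate_flux(mag_true, zeropoint, psf_fwhm_pix, sky_sigma, gain=1.0, rng=None):
        """Minimal sketch of an analytic (pixel-free) flux measurement.

        Convert a true magnitude to counts with the image zero point, estimate
        the uncertainty from source Poisson noise plus sky noise integrated over
        the noise-equivalent area of a Gaussian PSF, and draw a noisy measurement.
        Host-galaxy noise and the SB anomaly (Section 4.3) are ignored here.
        """
        rng = rng or np.random.default_rng()
        flux_true = 10.0 ** (-0.4 * (mag_true - zeropoint))   # counts
        sigma_pix = psf_fwhm_pix / 2.355                      # Gaussian sigma in pixels
        npix_eff = 4.0 * np.pi * sigma_pix ** 2               # noise-equivalent area (pixels)
        flux_err = np.sqrt(flux_true / gain + npix_eff * sky_sigma ** 2)
        flux_meas = flux_true + rng.normal(0.0, flux_err)
        return flux_meas, flux_err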

The outline of this paper is as follows. An overview of DES and the transient search is given in Section 2, and DiffImg is described in Section 3. The monitoring of single-epoch detections is given in Section 4, including the single-epoch magnitude depths, data quality evaluation, efficiency versus S/N, and the anomalous scatter of flux measurements for objects on bright galaxies. The efficiency of multiple detections required for a transient is described in Section 5, including the discovery efficiency and the classification efficiency. In Section 6 we compare the simulation to data in a preliminary photometric analysis. Comparisons with SNLS and DiffImg limitations are discussed in Section 7, and we conclude in Section 8.

2. OVERVIEW OF THE DARK ENERGY SURVEY AND TRANSIENT SEARCH

The Dark Energy Survey includes a wide-area 5000 deg2 optical survey in the southern celestial hemisphere and a dedicated transient search over 27 deg2, both using the Dark Energy Camera (DECam: Flaugher et al. 2015). DECam is mounted on the Blanco 4 m telescope at the Cerro Tololo Inter-American Observatory (CTIO) and the data are processed by the DES data management system (Sevilla et al. 2011; Desai et al. 2012; Mohr et al. 2012) at the National Center for Supercomputing Applications (NCSA). The 570 Megapixel DECam has a 3 deg2 field of view and is composed of 62 science-image CCDs, each with 2k × 4k pixels, and 8 CCDs for guiding. After accounting for CCD gaps and two non-functioning CCDs, the active field of view is 2.7 deg2.

The transient search is performed in 10 "SN fields" (27 deg2) that are repeatedly observed in the g, r, i, z passbands. We refer to this part of the survey as DES-SN. Eight of these fields are observed with few-minute exposure times and are referred to as "shallow" fields; the remaining two "deep" fields are observed much longer (Table 1). Defining the AB magnitude-depth as the mag where the DiffImg single-epoch detection efficiency has fallen to 50%, the shallow and deep field depths are ∼23.5 and ∼24.5, respectively, and the depth in each band is the same. The SN portion of the DES observing strategy is that the wide-area survey transitions to observing SN fields when the seeing is above 1.1'', and, in addition, any SN field (in any band) which has not been observed for 7 days is scheduled with the highest observing priority regardless of the seeing. This 7 day trigger typically results in better data quality compared to the 1.1'' seeing trigger. For each SN field, the pointing at a repeat visit is the same to within a few arcseconds. Additional dithered observations from the wide-area survey are used for the inter-calibration of the CCDs.

Table 1.  Exposure Summary of DES-SN Fields

Fields    Band   Central λ (Å)   ${T}_{\mathrm{expose}}$ (s) per Epocha   $\langle {N}_{\mathrm{epoch}}\rangle$ Templateb
Shallow   g      4830            1 × 175 = 175                            8.0
          r      6430            1 × 150 = 150                            8.5
          i      7830            1 × 200 = 200                            9.3
          z      9180            2 × 200 = 400                            9.3
Deep      g      4830            3 × 200 = 600                            5.5
          r      6430            3 × 400 = 1200                           7.5
          i      7830            5 × 360 = 1440                           9.3
          z      9180            11 × 330 = 3630                          8.3

Notes.

a ${N}_{\mathrm{expose}}\times {t}_{\mathrm{expose},1}={T}_{\mathrm{expose}}$, where ${t}_{\mathrm{expose},1}$ is the time per single exposure. b Averaged over fields. The average template exposure time is $\langle {N}_{\mathrm{epoch}}\rangle \times {T}_{\mathrm{expose}}$.


On a given night, the number of consecutive exposures (${N}_{\mathrm{expose}}$) varies with band and field as shown in Table 1. ${N}_{\mathrm{expose}}$ = 1 for the shallow g, r, i bands where the sky level is well below saturation. In the deep fields (and shallow z band), ${N}_{\mathrm{expose}}$ > 1 to keep the sky level well below saturation within each exposure. For a shallow field, each observing block is scheduled for all four bands and takes ∼20 minutes with overhead. For a deep field, observing all exposures in each band takes about 2 hr, and it would be difficult to schedule such a long block within the constraints of the global DES observing strategy. Each deep-field band is therefore scheduled independently; the total exposure time per epoch (${T}_{\mathrm{expose}}$) is 10 minutes in the g band, and more than an hour in the z band (Table 1).

The ten SN fields are divided into four groups of adjacent fields: three C fields that overlap the Chandra deep fields, three X fields that overlap the XMM-LSS fields, two S fields that overlap SDSS stripe 82, and two E fields that overlap the ELAIS S1 field. The field locations were chosen based on (1) visibility from CTIO, (2) visibility from telescopes in the northern hemisphere to perform follow-up spectroscopy of live targets, (3) galactic extinction, (4) avoiding overlap with extremely bright stars, (5) overlap with pre-existing galaxy catalogs and calibration. A summary of each field and its location is given in Table 2. The maximum nightly data volume from observing all ten fields is 170 GB, corresponding to just over 5000 CCD images. The average data volume in a typical night corresponds to a few fields. In addition to the SN field observations, DES-SN makes use of the extensive calibration data (Section 3.1.1) taken as part of survey operations.

Table 2.  DES-SN Field Names and Locations

Field   Deep or Shallow   R.A. (deg)   Decl. (deg)   ${N}_{\mathrm{visit}}$ (g/r/i/z)a
C1      shallow           54.2743      −27.1116      29/30/30/30
C2      shallow           54.2743      −29.0884      28/28/27/28
C3      deep              52.6484      −28.1000      25/23/28/27
X1      shallow           34.4757      −4.9295       26/27/27/27
X2      shallow           35.6645      −6.4121       26/26/25/24
X3      deep              36.4500      −4.6000       21/20/22/24
S1      shallow           42.8200      0.0000        29/29/28/28
S2      shallow           41.1944      −0.9884       27/28/28/28
E1      shallow           7.8744       −43.0096      27/26/27/26
E2      shallow           9.5000       −43.9980      26/26/26/27

Note.

a Number of single-epoch visits to each field in each band during Y1.


Science Verification (SV) took place 2012 November through 2013 January, with the goal of ensuring that the DECam performance meets the DES science requirements. During the beginning of SV, the SN fields were observed to obtain initial calibrations and to build templates. The latter part of SV was used to test DiffImg. Nominal survey operations began in the Fall of 2013. The first season (2013 August to 2014 February) is referred to as Y1, and the second season (2014 August to 2015 February) is referred to as Y2.

3. THE DIFFERENCE-IMAGING PIPELINE

All images taken with DECam at CTIO are transferred to NCSA and run through the detrending process to produce images suitable for higher level analyses. For each exposure, all CCDs on the focal plane are processed as a single unit where bad pixels are masked and corrections are applied for bias, flat-field illumination, pupil ghost, crosstalk, linearity, and overscan. More details are given in Mohr et al. (2012), Desai et al. (2012) and references within. The detrending process is virtually identical for the SN fields and the wide-area survey, and it is similar to a community pipeline used to process DECam data for non-DES observers.

For the SN fields, DiffImg is run after the detrending and a schematic overview is shown in Figure 1. In contrast to detrending, DiffImg is run independently for each CCD in order to simplify the distribution of jobs among CPUs. Many of our DiffImg stages use publicly available Terapix/AstrOmatic codes (Bertin et al. 2002), including SCAMP (Bertin 2006) for astrometry, SExtractor (Bertin & Arnouts 1996) to find objects, PSFEx (Bertin 2011) to determine the position-dependent PSF, and SWarp to sum individual exposures (to make "coadds") and to align template images to search images. The sub-sections below describe DiffImg in more detail.

Figure 1. Schematic overview of the SN-field processing: detrending (left panel) and DiffImg (middle+right panels). The thin-lined boxes refer to operations on individual exposures; the thick-lined boxes refer to operations on coadds. Astromatic.net codes are shown in parentheses.


3.1. Pre-survey Observations and Analysis

3.1.1. Calibration

For the results presented here, the calibration is determined from data taken during SV. The calibration is needed to determine magnitudes for detected transients, which are used to select transients of appropriate brightness for spectroscopic observations. The calibration is also used to convert the fake magnitudes into fluxes in CCD counts.

During nightly operations, DES typically observes a set of 3 standard star fields corresponding to low, intermediate and high airmass. These observations are done during evening twilight, and again during morning twilight (Tucker et al. 2007, G. Bernstein et al. 2015, in preparation). These standard star fields are mostly in SDSS stripe 82, but supplemented with additional fields, mostly at decl. ≈ −45° to −40°. The stars in these fields, which we refer to as secondary standard stars, have had their magnitudes transformed into the defined DES "natural" system in which the color terms are close to zero. For photometric nights, these well calibrated secondary standard stars are used to determine a nightly calibration consisting of zero points (ZPs), atmospheric extinction coefficients, and color terms needed to transform the photometry from the individual DECam CCDs to the defined DES "natural" system. A set of standard stars within each of the ten SN fields, referred to as "tertiary standards," were calibrated from data taken during the SV period under photometric conditions, using exposures centered on the SN fields plus additional dithered exposures from the wide-area survey; this resulted in typically 100–200 well-calibrated tertiary standard stars per CCD-area in the SN fields (K. Wyatt et al. 2015, in preparation). These tertiaries were used to calibrate the template images for DiffImg (Section 3.1.2), and this calibration is transferred to each transient magnitude. The relative calibration between DES fields over large areas has been checked using the stellar locus regression method (High et al. 2009; Kelly et al. 2014), where consistency of colors is verified at the 2% level. The absolute calibration has been checked at the 2% level using very short DES exposures on a handful of spectrophotometric standards measured by the Hubble Space Telescope. While this early calibration meets some of the DES requirements, extensive efforts continue to significantly improve the calibration for analysis.

3.1.2. Templates

For Y1 we constructed deep coadded templates from the Y2 season, while the calibration is from SV. Starting with the image that has the lowest sky noise (${\sigma }_{\mathrm{SKY}}^{\mathrm{min}}$), up to 10 epochs are selected with the smallest PSF that have sky noise less than $2.5\cdot {\sigma }_{\mathrm{SKY}}^{\mathrm{min}}$. The average number of coadded epochs per band is shown in Table 1, along with the total exposure time per epoch. In the deep-field i band, for example, the templates include a CCD-average of 9.3 epochs which corresponds to a total exposure of 3.7 hr. With an average of 8 coadded epochs per template, the image-subtracted sky-noise (${\sigma }_{\mathrm{SKY}}$) is only 6% higher than with an ideal template of infinite S/N, consistent with adding the search and template noise in quadrature ($\sqrt{1+1/8}\approx 1.06$) if the template epochs have sky noise comparable to the search image.

Calibrated tertiary standards (Section 3.1.1) are used to determine the ZP for each exposure, and the pixel flux values in each exposure are re-scaled to a common ZP = 31.1928. The coadded templates are combined with a weighted average of each exposure, and the weight within each CCD is fixed to the inverse of the average sky-variance.
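A minimal sketch of this coaddition (assuming the exposures have already been rescaled to the common ZP and remapped to a common pixel grid; the function and argument names are illustrative):

    import numpy as np

    def coadd_template(exposures, sky_variances):
        """Inverse-variance weighted coadd of ZP-rescaled, aligned exposures.

        exposures     : list of 2D pixel arrays on a common ZP and pixel grid
        sky_variances : CCD-averaged sky variance for each exposure; the weight
                        is fixed per exposure (per CCD), not per pixel
        """
        weights = np.array([1.0 / v for v in sky_variances])
        stack = np.array(exposures)
        return np.tensordot(weights, stack, axes=1) / weights.sum()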

The astrometric alignment was done in two steps. First, the exposures were aligned to the USNO-B1 catalog (Monet et al. 2003) and then coadded to produce an intermediate set of templates in which the alignment is good to ∼100 mas, or 0.4 pixel. These intermediate templates were used to produce an internal DES catalog based on SExtractor output. Next, the exposures were re-aligned to this internal catalog, resulting in ∼20 mas (0.08 pixel) precision. After this final astrometric alignment, the exposures are coadded again to produce the final set of templates.

3.2. Single-CCD Processing

3.2.1. Astrometry

During SV and Y1, the astrometric solution was obtained for the entire focal plane in the detrending process, using the SCAMP program and the UCAC-4 catalog (Zacharias et al. 2013) as an astrometric reference. While this worked well for the wide-area survey, there were sometimes very poor solutions in the SN fields leading to errors up to an arcsecond. We suspected that bright saturated objects contributed to this problem because of the longer exposure times in the SN fields compared to the wide-area survey.

After Y1, two astrometry updates were incorporated. First, we switched to using a fainter reference catalog in the SN fields, USNO-B (Monet et al. 2003). Second, rather than using SCAMP to separately find an astrometric solution for the search and template images, we used the SCAMP feature allowing a joint astrometric solution for the search and template images. While the absolute astrometric precision of the USNO-B catalog (250 mas) is worse than UCAC-4 (60 mas), the second change ensures good astrometric alignment (<30 mas) between the search and template, which is critical for good subtractions.

These changes were not incorporated into the detrending process, and were instead added to DiffImg. Since DiffImg is designed for single-CCD processing, a SCAMP solution is obtained separately for each CCD rather than over the focal plane. The astrometric changes worked significantly better, but a few percent of the processed CCDs still suffered catastrophic failures in the astrometric solution. As a final refinement to eliminate these catastrophic solutions, we used our own DES data to construct a reference catalog (Section 3.1.2).

3.2.2. Overlaying Fakes onto Images

In the next stage, two classes of fake point sources are overlaid on the CCD image. The first class consists of four 20th mag fakes in each band (hereafter called "MAG20" fakes) overlaid in random locations away from masked regions. The resulting S/N from the DiffImg flux measurements is part of the data quality evaluation (Section 4).

The second class of fakes, "SN fakes," consists of SN Ia light curve fluxes overlaid onto the CCD image near real galaxies. The fake SN Ia light curve magnitudes are generated by the SNANA simulation (Kessler et al. 2009), and include true parent populations of stretch and color, a realistic model of intrinsic scatter (Guy et al. 2010; Kessler et al. 2013), a redshift range from 0.1 to 1.4, and a galaxy location chosen randomly with a probability proportional to its surface brightness density. All fake SN Ia light curves are generated and stored prior to the start of the survey in order to simplify the overlay software in DiffImg. The fake SN Ia flux added to the image is determined by a ZP based on the comparison of calibration star magnitudes with their fluxes recovered by SExtractor. The SN flux is spread over nearby pixels using the PSF found by the program PSFEx, and the flux in each pixel is smeared by random Poisson noise.
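A simplified sketch of the overlay step (assuming a PSFEx-style PSF stamp normalized to unit sum and a zero point from the calibration stars; sub-pixel centering and image-edge handling are omitted, and the names are illustrative):

    import numpy as np

    def overlay_fake(image, psf_stamp, x0, y0, mag_fake, zeropoint, gain=1.0, rng=None):
        """Add a fake point source to a CCD image (in place).

        The fake magnitude is converted to counts with the zero point, spread
        over pixels with the normalized PSF stamp, and each pixel is smeared
        by Poisson noise before being added to the image.
        """
        rng = rng or np.random.default_rng()
        flux = 10.0 ** (-0.4 * (mag_fake - zeropoint))         # total counts
        stamp = flux * psf_stamp                               # PSF-weighted counts
        noisy = rng.poisson(np.clip(stamp, 0, None) * gain) / gain
        ny, nx = psf_stamp.shape
        y, x = int(y0) - ny // 2, int(x0) - nx // 2
        image[y:y + ny, x:x + nx] += noisy
        return image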

Ideally, fake SNe would be overlaid onto a duplicate set of images so that images with and without fakes can be processed separately. For DES-SN we did not prepare for this duplication, and therefore care is taken to avoid consuming too many galaxies with fake SNe that can overlap real transients and cause them to be undetected. Figure 2 shows the fraction of catalog galaxies populated by fake SNe Ia as a function of redshift; the redshift distribution has been sculpted to ensure adequate low-redshift fakes for monitoring without populating more than a few percent of the galaxies at the low and high redshift ranges. At a given epoch, the average number of overlaid fakes per CCD is ∼20. Most of the overlaid fakes are far from peak or at high redshift, and thus only about 1/3 of these are bright enough to be detected.

Figure 2. As a function of redshift, fraction of catalog galaxies in the SN fields with an overlaid fake SN Ia.


There are a few caveats regarding the selection of galaxies and the placement of the fake. First, simulated SNe Ia are matched to a real galaxy based on the galaxy photo-z (${z}_{\mathrm{phot}}$) since we do not have a sufficiently large catalog based on spectroscopic redshifts. To avoid extreme photo-z outliers, we remove galaxies that are exceedingly bright or faint for their ${z}_{\mathrm{phot}}$ value by requiring a brightness-redshift constraint for both the r and i band magnitudes (${m}_{r,i}$),

Equation (1)

where $\mu ({z}_{\mathrm{phot}})$ is the distance modulus for a flat ΛCDM cosmology with ${{\rm{\Omega }}}_{{\rm{\Lambda }}}=0.7$ and ${H}_{0}=70$ km s−1 Mpc−1. This caveat has negligible impact because the fakes are overlaid over a wide redshift range and a wide range of galaxy mags.
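For reference, the distance modulus used in this cut follows from a standard flat ΛCDM calculation (radiation neglected), as in the sketch below.

    import numpy as np
    from scipy.integrate import quad

    C_KM_S, H0, OMEGA_M = 299792.458, 70.0, 0.3   # flat LCDM with Omega_Lambda = 0.7

    def distance_modulus(z):
        """mu(z) = 5*log10(d_L / 10 pc) for flat LCDM with H0 = 70 km/s/Mpc (z > 0)."""
        integrand = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
        d_c = (C_KM_S / H0) * quad(integrand, 0.0, z)[0]   # comoving distance (Mpc)
        d_l = (1.0 + z) * d_c                              # luminosity distance (Mpc)
        return 5.0 * np.log10(d_l) + 25.0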

The second caveat is that the surface brightness profile is assumed to be Gaussian (Sérsic index = 0.5) rather than a more general sum of Sérsic profiles such as a bulge plus disk component. This overly simplistic profile results in fakes placed preferentially near the galaxy cores with inadequate sampling of the disk tails. While this feature may actually help monitor subtraction problems on galaxies, it can result in biased estimates of quantities that depend on the distance to the galaxy core, such as measuring the fraction of SNe correctly matched to their host galaxies.

The final caveat concerns masking of bad pixels. While the placement of fakes is independent of the masking, the efficiency analysis presented here ignores fakes in which more than 10% of their PSF-weighted pixels are masked; 7% of the fakes are therefore discarded. For analyses requiring the absolute efficiency, such as rates, we can impose masking cuts on the data, or perform additional fake studies to include the effects of masking.

3.2.3. Image Coadding and Subtraction

The program SWarp is used to co-add the search exposures, and to remap the template to be aligned with the coadded search image. For image subtraction, we use a modified version of the difference imaging program hotPants. Our version is an attempt to improve the performance, and is based on the implementation by A. Becker, which uses the algorithm of Alard & Lupton (1998, hereafter AL98) and some of their original code. The basic approach in hotPants is to transform one image (which we call a template with pixel values ${t}_{x,y}$) so that it can be subtracted pixel by pixel from another image taken at a different time and under different observing conditions. This linear transformation is described by

${t}_{{xy}}^{\prime }=\displaystyle \sum _{x^{\prime} =x-r}^{x+r}\ \displaystyle \sum _{y^{\prime} =y-r}^{y+r}{k}_{x(x-x^{\prime} )y(y-y^{\prime} )}\,{t}_{x^{\prime} y^{\prime} }\quad\quad (2)$

where ${t}_{{xy}}^{\prime }$ is the convolved image which is subtracted pixel-by-pixel from the unconvolved image. The main computation in hotPants involves the determination of the values of the kernel of the transformation, ${k}_{x(x-x^{\prime} )y(y-y^{\prime} )}$. The parameter r is the size of the kernel and x, y, x' and y' are the pixel coordinates. In general, one should add a constant term to Equation (2), but our version makes a global background subtraction of the images before determining the kernel.

The kernel is assumed to vary slowly over the image and this variation is described by a polynomial:

${k}_{x(x-x^{\prime} )y(y-y^{\prime} )}=\displaystyle \sum _{i,j}{k}_{(x-x^{\prime} )(y-y^{\prime} )}^{{ij}}\,{x}^{i}{y}^{j}\quad\quad (3)$

The AL98 algorithm allows a polynomial of arbitrary order, but since we process each CCD separately our hotPants version includes only linear terms.

A major difference between our version and the AL98 algorithm lies in the parameterization of the ${k}_{(x-x^{\prime} )(y-y^{\prime} )}^{{ij}}.$ AL98 parameterize the kernel as an arbitrary number of Gaussian functions of fixed width multiplied by polynomials whose coefficients are parameters to be fit. In routine use, the number of polynomial coefficients is large and comparable to the number of pixels in the kernel. Instead, we have chosen a method similar to that in Bramich (2008) in which the pixel values in the core of the kernel are fitted without the use of a function to parameterize them. We have, however, retained the Gaussian function for the pixels at the edges of the kernel: the Gaussian form is useful for cases where a large kernel is needed to match images with very poor seeing. While our approach seems more transparent in terms of understanding the fit parameters, we do not have solid evidence that our parameterization results in better subtracted images.
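To illustrate the pixel-basis approach, the sketch below fits a single (2r+1) × (2r+1) kernel by linear least squares, in the spirit of Bramich (2008); it omits the Gaussian edge pixels and the linear spatial variation used in our hotPants version, and in practice would be applied to sub-regions or stamps rather than a full CCD.

    import numpy as np

    def fit_kernel(template, search, r):
        """Least-squares fit of a pixel-basis convolution kernel.

        Solve for the kernel k that minimizes |(template convolved with k) - search|^2
        over the region where the full kernel fits inside the image.
        """
        ny, nx = template.shape
        cols = []
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                cols.append(template[r + dy:ny - r + dy, r + dx:nx - r + dx].ravel())
        A = np.column_stack(cols)                 # one column per kernel pixel
        b = search[r:ny - r, r:nx - r].ravel()    # target pixels
        k, *_ = np.linalg.lstsq(A, b, rcond=None)
        return k.reshape(2 * r + 1, 2 * r + 1)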

3.2.4. Detections and Candidates

We first measure the PSF to define the detection profile we are searching for, and then PSF-like objects on the subtracted image are found by SExtractor. Selection requirements in Table 3 are applied to reduce the number of artifacts. An object satisfying these requirements is referred to as a "detection."

Table 3.  Selection Requirements on SExtractor Detections in a Subtracted Image

(1) S/N > 3.5, although the effective S/N cut from SExtractor is higher (∼5), as shown in Figure 8.
In a 35 × 35 pixel stamp around the detected object:
(2) fewer than 200 pixels with a flux less than −2σ below zero,
(3) fewer than 20 pixels with flux less than −4σ below zero,
(4) fewer than 2 pixels with flux less than −6σ below zero.
(5) Detection not near an object in a veto catalog containing 80,000 stars with r-band mag < 21. The veto radius is mag-dependent, and the total vetoed area over all 10 fields is 0.63 deg2, or 2.4% of the area.
(6) For co-added images, cosmic-ray rejection based on consistency of the detected object on each exposure.
(7) Detected object profile is PSF-like, based on the SExtractor SPREAD_MODEL variable (Desai et al. 2012).
(8) SExtractor A_IMAGE < 1.5 × PSF.


A "raw candidate" is defined when two or more detections have measured positions matching to within 1''. The two detections can be in the same band or different bands, or on the same night or different nights. All raw candidates are saved, which includes moving objects such as asteroids and KBOs. Requiring detections on separate nights (Section 3.3) is used to reject moving objects.

3.3. Post-processing

In addition to the single-CCD operations, there are post-processing steps that operate on all fields and CCDs, and continually update the candidate properties. A few percent of the events land on a CCD in two overlapping fields, and thus single-CCD processing is not a useful concept when constructing candidates from multiple observations.

In some past surveys, as well as the start of DES-SN, the first post-processing step was to perform a visual inspection of each detection in order to reject subtraction artifacts that produce false detections. In DES-SN we use a new machine learning based code to replace human scanning; this "autoScan" program is described in detail in G15. The algorithm makes use of the supervised machine learning technique Random Forest. The training sample includes nearly 900,000 DiffImg detections, half of which were flagged as artifacts by human scanners and the other half are detections of fakes. For each detection, the inputs to autoScan include a 51 × 51 pixel2 detection-centered stamp from the search, template, and subtracted images. The flux and uncertainty from each pixel on these three stamps contributes ∼15,000 pieces of information. However, rather than using the pixel-level information we found that autoScan performs better and faster using 37 high-level features computed from the stamps. The three most important features are (1) ratio of PSF-fitted flux to aperture flux on the template image, (2) mag-difference between the detection and the nearest catalog source, and (3) the SPREAD_MODEL output from SExtractor.
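A minimal sketch of this training setup using scikit-learn (the feature extraction itself is described in G15; the array and label conventions here are illustrative):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def train_autoscan_like(features, labels):
        """Train a Random Forest on high-level detection features.

        features : (n_detections, 37) array of per-detection features
        labels   : 1 for detections of fakes, 0 for human-flagged artifacts
        """
        X_train, X_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
        clf.fit(X_train, y_train)
        scores = clf.predict_proba(X_test)[:, 1]   # score in [0, 1]: fraction of trees voting "real"
        return clf, scores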

For each object, the autoScan program returns a score between 0 and 1, where 0 corresponds to an obvious artifact and 1 is for a high-quality detection. While autoScan could have been applied before making raw candidates, we have so far been conservative and apply the autoScan requirement here in the post-processing in order to fully monitor the autoScan performance.

The first post-processing step is to define "science candidates," each requiring a detection on two distinct nights, with each detection satisfying the autoScan requirement. Science candidates are the official product of DiffImg, and as more epochs are acquired these candidates are repeatedly analyzed to select targets (object and host galaxy) for spectroscopic observations. If there is a future science case requiring single-night detections, we can recover the raw single-night candidates; the caveat is that during survey operations, only the 2-night science candidates are selected for spectroscopic observations.

The next post-processing stage is to match each science candidate to a host galaxy, which is later targeted for a spectroscopic redshift. We use the "directional-light-radius" (${d}_{\mathrm{LR}}$) method described in Sako et al. (2014). Currently the galaxy profiles are approximated by a Gaussian (Sérsic index = 0.5), and will eventually be updated with profile fits to an arbitrary Sérsic index. If there are multiple nearby galaxies within $4\times {d}_{\mathrm{LR}}$ they are all flagged to acquire a spectroscopic redshift.

The next post-processing stage, "forced photometry," computes the PSF-fitted flux and its uncertainty for each observation since the start of the observing season, regardless of whether there was a detection. The flux and uncertainty are computed at the same coordinates (R.A., decl.) on each subtracted image, and the coordinates are computed as the weighted average from each detection. This stage allows recovering small fluxes just below detection threshold, and fluxes consistent with zero, in order to construct complete light curves. Ideally the autoScan program would be used to flag bad subtractions that could lead to badly measured fluxes. However, while the autoScan results exist for detections, we do not have the infrastructure to run autoScan on non-detections in a manner analogous to the forced photometry. In addition, autoScan would need additional training to accept subtractions with no significant detection. As an alternative to autoScan, forced photometry measurements are rejected from light curve fitting (below) if (1) the PSF-fitted flux and aperture flux differ by more than 5σ, or (2) within a 1'' radius there are 2 or more pixels with S/N < −6.
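A minimal sketch of a PSF-weighted (optimal) flux estimate at a fixed position on the subtracted image; the actual DiffImg PSF fit may differ in detail.

    import numpy as np

    def forced_psf_flux(diff_stamp, diff_var, psf_stamp):
        """Weighted least-squares PSF flux on a subtracted-image stamp.

        diff_stamp : subtracted-image pixels around the fixed (R.A., decl.)
        diff_var   : per-pixel variance of the subtracted image
        psf_stamp  : PSF model normalized to unit sum, centered on the position
        Returns the fitted flux and its uncertainty, computed whether or not
        there was a detection at this epoch.
        """
        w = psf_stamp / diff_var
        denom = np.sum(psf_stamp * w)          # sum of P^2 / sigma^2
        flux = np.sum(diff_stamp * w) / denom
        return flux, 1.0 / np.sqrt(denom)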

The final post-processing stage is to use the SNANA program PSNID (Sako et al. 2011) to perform photometric classification by comparing each candidate light curve to a series of photometric griz light curve templates constructed on a redshift grid for (1) SN Ia, (2) CC type II, and (3) CC type Ib/Ic. For each candidate-template χ2 calculation, we discard up to two epochs with the largest χ2 contribution (if above 10). This outlier rejection helps to avoid bad fits from a few poorly measured forced-photometry fluxes, particularly on bright galaxies as described in Section 4. A relative probability is computed from each χ2, and a Bayesian probability is computed for each SN type; the largest probability (${P}_{\mathrm{max}}$) determines the type and redshift. If ${P}_{\mathrm{max}}\lt 0.5,$ or the best fit χ2 is poor, the candidate type is flagged as unknown. The probability for each type and the estimate of peak magnitude contribute to the spectroscopic target selection process (Section 3.4.2).
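The probability bookkeeping can be sketched schematically as below; this is not the PSNID implementation and it ignores PSNID's priors and marginalization weights, but it illustrates how relative probabilities from the template-fit χ2 values lead to a best type or an "unknown" flag.

    import numpy as np

    def classify_from_chi2(chi2_by_type, p_min=0.5):
        """Schematic Bayesian type selection from template-fit chi2 values.

        chi2_by_type : dict mapping a type name ('Ia', 'II', 'Ibc') to an array
        of chi2 values over that type's template/redshift grid.
        """
        chi2_min = min(c.min() for c in chi2_by_type.values())
        evidence = {t: np.sum(np.exp(-0.5 * (c - chi2_min)))
                    for t, c in chi2_by_type.items()}
        total = sum(evidence.values())
        probs = {t: e / total for t, e in evidence.items()}
        best = max(probs, key=probs.get)
        return (best, probs) if probs[best] >= p_min else ("unknown", probs)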

3.4. Spectroscopic Target Selection

While spectroscopic target selection is outside the scope of DiffImg, here we give a brief description to give a more complete picture of the DES-SN program. The two components of spectroscopic targets, host redshifts and live transients, are described below.

3.4.1. Host Galaxy Redshifts

The large numbers and faint magnitudes of SNe discovered in DES-SN overwhelm the available resources for spectroscopically classifying each candidate. However, we can efficiently use multi-fiber spectroscopic resources to measure an accurate host-galaxy redshift for the majority of our SN candidates. Using the Anglo-Australian Telescope (AAT), the OzDES program (Yuan et al. 2015) is a 100-night spectroscopic survey with the 400 fiber Two Degree Field (2dF) instrument feeding the dual-beam AAOmega spectrograph. The overlap between the field of view of DECam and 2dF is nearly complete. With repeat visits to the same source, spectra are coadded to enable redshift measurements for much fainter galaxies than would naively be expected from a 4 m class telescope; redshifts are obtained for about half of the 24th mag galaxies (r band). In addition to targeting host galaxies for SN candidates, OzDES also targets a variety of DES sources such as AGN to derive reverberation mapped black-hole masses, galaxies for DES photo-z calibration, white dwarfs for calibration, and live transients for spectroscopic typing.

3.4.2. Spectroscopic Identification of Live Targets

The spectroscopic selection for live transients is primarily focused on SN Ia. The selection is based on a visual examination of light curves along with PSNID probabilities. The phase estimate is used to give higher priority to candidates near peak brightness. Highest priority is given to candidates with peak r band magnitude ${r}_{\mathrm{peak}}$ < 20.5 mag (mag-limited) and to candidates with a photometric redshift below 0.2 (volume limited). These two samples have large overlap, and are expected to be very nearly complete. Lower priority is given to candidates over the full redshift range where we expect to acquire a spectroscopic typing for ∼10% of the SN Ia sample. Starting in Y2, transient activity in multiple seasons is used to reject AGN-like candidates.

Telescopes used to spectroscopically confirm transients discovered by DiffImg include the 3.9 m AAT at Siding Springs Observatory in Australia, the 8.2 m Very Large Telescope (VLT) on Cerro Paranal in Chile, the 9.2 m South African Large Telescope (SALT) near Sutherland in South Africa, the 10.4 m Gran Telescopio Canarias (GTC) in La Palma, the Keck 10 m on Mauna Kea in Hawaii, the 6.5 m Magellan Telescope at Las Campanas Observatory in Chile, the 6.5 m MMT on Mount Hopkins in Arizona, the 3 m Shane telescope at Lick Observatory in California, the 4.1 m Southern Astrophysical Research (SOAR) telescope at Cerro Pachon in Chile, the 8.1 m Gemini-South telescope at Cerro Pachon in Chile, and the 9.2 m Hobby–Eberly Telescope at the McDonald Observatory in Texas.

3.5. Statistics Summary

A summary of the first-season (Y1) statistics for single-epoch detections is shown in Table 4. The average number of objects per field found by SExtractor increases with the passband central wavelength. In the shallow fields there are ∼100,000 per field in the g band, increasing to ∼170,000 in the z band. In the deep fields there are 130,000 in the g band, increasing to 270,000 in the z band. A visual scanning assessment shows that more than 90% of these detections are subtraction artifacts. Following the SExtractor detections on the subtracted image, there is a significant reduction from the selection cuts and autoScan. The selection cuts reduce the number of detections by a factor of 3–4 in the g band, and a factor of ∼2 in the z band. The automated scanning provides a further reduction of a factor of ∼4 in the g band, increasing to an order of magnitude in the z band. After all selection requirements and automated scanning, the average number of objects per field in Y1 is ∼10⁴ in both the deep and shallow fields, and the artifact fraction is ∼25% as determined from a visual scanning assessment.

Table 4.  Number of Non-fake Single-epoch Detections in the Y1 Season per 3 deg2 Field (Thousands)

                                      Number of Detections (×10³)
Fields    Detection Stage             g      r      i      z
Deep      SExtractor                  133    166    277    270
          + selection cutsa           32     81     172    167
          + autoScanb                 8      8      9      12
          autoScan/cuts ratio         0.25   0.10   0.06   0.07
Shallow   SExtractor                  98     103    126    173
          + selection cutsa           29     26     55     92
          + autoScanb                 8      7      9      10
          autoScan/cuts ratio         0.28   0.27   0.18   0.11

Notes.

a Includes selection cuts in Table 3. b Includes cuts and the automated scanning requirement (G15).


To determine the average number of detections per square degree for a single-epoch visit (${\bar{n}}_{\mathrm{detect}}$), the number of Y1 detections (autoScan row in Table 4) is divided by 2.7 deg2 and ${N}_{\mathrm{visit}}$ from Table 2: ${\bar{n}}_{\mathrm{detect}}\approx 110$ in the g band and $\approx 150$ in the z band.
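For example, taking the shallow-field g band entry from Table 4 ($\approx 8\times {10}^{3}$ detections per field after autoScan) and $\langle {N}_{\mathrm{visit}}\rangle \approx 27$ from Table 2 gives ${\bar{n}}_{\mathrm{detect}}\approx 8000/(2.7\times 27)\approx 110$ detections per deg2 per visit.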

The total number of raw candidates in Y1, which requires two SExtractor detections passing the selection cuts in Table 3, is 1.2 × 10⁵. Requiring two detections on different nights reduces this slightly to 1.0 × 10⁵. Requiring the two separate-night detections to satisfy the automated scanning reduces the number of candidates to 7489, or a factor of 13 reduction. Table 5 shows the average number of candidates per deep field and per shallow field.

Table 5.  Average Number of Non-fake Y1-candidates per 3 deg2 Field

Candidate Selection                          ${N}_{\mathrm{cand}}$ per Field
                                             Deep      Shallow
2 detections (raw cand)                      18830     10410
2 nights (without autoScan)                  17460     8230
2 nights + autoScan (science cand)           1040      680


Following SExtractor detections, the selection cuts and autoScan have a dramatic effect on reducing the number of detections and candidates. This is because the vast majority of the SExtractor detections are false positives, or artifacts of the image subtraction. These artifacts come from a variety of sources, including bright stars and galaxies, defective pixels, edges of masked regions, CCD edges, and cosmic rays. Some of these artifacts are illustrated in Figure 1 of G15. The large rejection by autoScan costs only a 1.0% loss of fake SNe Ia candidates, mainly for fakes with low S/N at peak brightness. We are therefore confident that autoScan is highly efficient for real astrophysical transients.

Subtraction artifacts are illustrated in Figure 3 for a deep field image processed by DiffImg. SExtractor detections failing selection cuts (dashed red boxes) are the most clearly evident upon visual inspection, while those failing autoScan (solid red boxes) are more subtle. In this example, most of the artifacts are around a few bright objects even though most of the bright sources are cleanly subtracted. On average, artifacts are ∼1 mag fainter than real transients. To estimate the impact of such artifacts on bright sources, we note that ∼3% of bright fakes (mag < 20) fail the detection and autoScan requirements. The origin of these artifacts is not understood.

Figure 3. From DiffImg, a co-added search image (top) and subtracted image (bottom) from a typical night (2013 October 13) in deep field C3 for i band. The image size is roughly 2.7 × 4.6 arcmin, or about 1/13 of the area viewed by a single CCD. Non-fake SExtractor detections on the subtracted image (bottom) are highlighted in both images: dashed red boxes for objects failing the selection cuts in Table 3, solid red boxes for objects passing these cuts and failing autoScan, and yellow circles for objects passing cuts and autoScan, which are used to make science candidates. To set the scale, the brightest masked star has mag m = 11.4; the other masked star has mag m = 15.0. To see detections in more detail, Figure 1 in G15 shows a collection of 51 × 51 pixel2 stamps for search+template+subtracted images, each centered on a detection.


3.6. Classification Summary

Here we show the breakdown of PSNID classifications for science candidates (Section 3.3). To avoid the noisiest light curves we consider the subset in which three bands each have an observation with S/N > 5; this subset is roughly half of all candidates. Applying PSNID to the entire light curves for the full Y1 sample results in nearly equal classification fractions (∼1/3) for SN Ia, SN CC (mostly Type II) and unknown.

While a full Y1 analysis is relevant after the survey, during survey operations PSNID is run on newly discovered light curves that have only a few epochs. To illustrate the real-time PSNID performance, Figure 4 shows the classification fractions as a function of time the light curve has been observed. ${\mathrm{MJD}}_{\mathrm{cand}}$ is the time when the second epoch is detected, or when the object became a science candidate. ${\mathrm{MJD}}_{\mathrm{ref}}$ represents the current MJD, which we take to be 56,600 in this example. The fits include observations between ${\mathrm{MJD}}_{\mathrm{cand}}-20$ and ${\mathrm{MJD}}_{\mathrm{ref}}.$ When only the early part of the light curve is available for fitting (−5 days in Figure 4), about 70% of the candidates are classified as SN Ia, fewer than 10% as SN CC, and the rest are unknown. When fitting 2 months of the light curve, more than half of the classifications are SN CC.

Figure 4.  PSNID classification fraction for SN Ia and SN CC vs. time that the light curve has been observed, for candidates in which three bands have an observation with S/N > 5. See Section 3.6 for explanation of ${\mathrm{MJD}}_{\mathrm{cand}}$ and ${\mathrm{MJD}}_{\mathrm{ref}}.$ Zero on the horizontal axis corresponds to newer candidates used in the PSNID fits; −50 days corresponds to older candidates whose second detection occurred 50 days earlier and thus have longer light curve coverage in the PSNID fits. "Unknown" corresponds to light curves for which PSNID cannot determine a SN type.


3.7. Data Reprocessing

During Y1, the monitoring of fakes showed a significant inefficiency that was traced to severe astrometry problems as described in Section 3.2.1. This problem was fixed after Y1, and before the start of Y2 operations all of Y1 was reprocessed in order to recover hundreds of host-galaxy spectroscopic targets that had been missed during Y1. During Y2, the monitoring of fakes showed good DiffImg performance in the shallow fields, but there were still significant flux-outliers in the deep fields. This problem was eventually traced to the program which determines the PSF used for calculating PSF-fitted fluxes, and it was fixed after Y2.

Both Y1 and Y2 have been fully reprocessed in all ten SN fields, with all DiffImg fixes. Results presented in this paper are based on the Y1 season, using templates constructed from Y2 images. The reprocessed results are used to discover transients missed during the survey, to update the photometric classification with PSNID, and to update the host-galaxy target list for measuring spectroscopic redshifts. While transients discovered in the reprocessing have become too faint to target for spectroscopic observations, this is not a serious issue because we target only a small fraction of the transients anyway.

Another subtle change in the reprocessing campaign was to fully analyze each exposure in the deep field sequences (in addition to the coadd) to improve the KBO search. In particular, this reprocessing led to the discovery of one of the two Neptune Trojans in Gerdes et al. (2015), as well as improved orbital fits for both objects. We are currently upgrading DiffImg to overlay fake KBOs onto the images; these fake KBOs will allow measuring the search efficiency, and they will be used to develop improved KBO-finding algorithms.

We do not expect more DiffImg improvements during the remainder of DES, unless our monitoring uncovers new problems or we improve the subtraction problem on bright galaxies as described in Section 4.3. Even without software changes, we may reprocess the data in the future using better templates and lower detection thresholds in order to improve the depth of the search.

3.8.  DiffImg Processing Time

Using the IBM iDataPlex Carver computational system at NERSC, we give the processing time for the DiffImg steps in the middle panel of Figure 1. For a shallow field with a single exposure, the processing time for a single CCD is ∼10 minutes, half of which is spent on the hotPants program. In the deep fields we perform the hotPants subtraction for each exposure as well as the coadded image, and thus the processing time scales roughly with the number of exposures. For a deep-field sequence with 11 z band exposures, the processing time for a single CCD is ∼90 minutes.

The post-processing steps (right panel in Figure 1) run serially, and the processing time depends on how long the survey has been running. Near the start of a survey season the post-processing takes a few minutes, but near the end of the season it takes several hours.

4.  DiffImg MONITORING-I: SINGLE EPOCHS

Here we describe monitoring of the single-epoch detection efficiency and data quality, using both the MAG20 fakes and the SN fakes processed by DiffImg.

4.1. Data Quality Assessment

The measured S/N from the MAG20 fakes is part of the data quality evaluation (see Figure 5). We define ${\overline{{\rm{S}}/{\rm{N}}}}_{\mathrm{mag}20}$ to be the average S/N among all of the (4 × 60 = 240) MAG20 fakes overlaid on each exposure, where each S/N is the ratio of the PSF-fitted flux to its uncertainty. If ${\overline{{\rm{S}}/{\rm{N}}}}_{\mathrm{mag}20}\lt 20$ in the shallow fields, or <80 in the deep fields, the exposures are flagged to be retaken. In addition, an exposure is retaken if the i band PSF width (FWHM) at zenith is >2''; this seeing value is computed by correcting the measured PSF for airmass and wavelength. These criteria for retaking an exposure are a compromise between data quality in the SN fields and lost observing time in the wide-area survey. The largest ${\overline{{\rm{S}}/{\rm{N}}}}_{\mathrm{mag}20}$ values are from high-quality data triggered because there were no observations within the past 7 days. The lower ${\overline{{\rm{S}}/{\rm{N}}}}_{\mathrm{mag}20}$ values are typically from data triggered by seeing >1.1'' and from observations at larger airmass.

Figure 5. For each set of exposures in Y1, the CCD-averaged PSF width (i-band at zenith, FWHM, arcsec) is plotted against the CCD-averaged S/N from the MAG20 fakes. Exposure sequences with points outside the dashed box are retaken, typically the following night. Left panels are for the deep-fields and the right panels are for the shallow fields.


While ${\overline{{\rm{S}}/{\rm{N}}}}_{\mathrm{mag}20}$ and the PSF are used to determine if an exposure sequence needs to be retaken, the SN Ia fakes are used to determine complementary information about the data quality. For a given epoch, the fakes are used to determine the magnitude depth, ${m}_{\mathrm{eff}=1/2},$ defined as the mag where the DiffImg detection efficiency has fallen to 50%. Figure 6 illustrates the determination of ${m}_{\mathrm{eff}=1/2}.$ The ${m}_{\mathrm{eff}=1/2}$ distribution is shown in Figure 7 for each band, and for deep and shallow fields. The variation in ${m}_{\mathrm{eff}=1/2}$ is from the variation in observing conditions.

Figure 6. True mag distribution for fakes from a single epoch. Left panel is for deep field C3-r; right panel is for shallow field C1-r. Shaded overlay is for fakes satisfying DiffImg detection requirements. The dashed vertical line shows ${m}_{\mathrm{eff}=1/2}$ as defined in the text.

Figure 7. Distribution of ${m}_{\mathrm{eff}=1/2}$ in each passband, determined with fakes. Each entry is from one epoch. Left panels are for the deep fields; right for the shallow fields.


4.2. Detection Efficiency versus S/N

The detection efficiency as a function of S/N (${\epsilon }_{{\rm{S}}/{\rm{N}}}$) is a crucial input to the MC simulation (Section 5) and also provides another monitoring metric. We do not attempt a first-principles calculation of ${\epsilon }_{{\rm{S}}/{\rm{N}}},$ primarily because of the complicated behavior of SExtractor that largely defines the detection threshold. Therefore ${\epsilon }_{{\rm{S}}/{\rm{N}}}$ is empirically measured from the fakes as illustrated in Figure 8 for the i band. The effective S/N threshold, defined for ${\epsilon }_{{\rm{S}}/{\rm{N}}}=0.5,$ is about 5 in each band and is the same in both the deep and shallow fields. Each sub-panel shows the nominal ${\epsilon }_{{\rm{S}}/{\rm{N}}}$ curve computed from all of the fake data, along with a systematic test based on splitting the data into two equal-size samples. The probability of detecting a transient depends on the ZP, PSF, and sky-noise through their effect on the S/N, and we expect the detection efficiency to depend primarily on S/N. Figure 8 shows that there is no unexpected dependence, which is important because not all of the selection criteria are based on S/N.
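A minimal sketch of how such an efficiency curve can be tabulated from the fakes, by binning in measured S/N and taking the detected fraction per bin (the binning choices here are illustrative); a table of this form can serve as the ${\epsilon }_{{\rm{S}}/{\rm{N}}}$ input to the fast MC simulation.

    import numpy as np

    def efficiency_vs_snr(snr, detected, bins=np.arange(0.0, 20.5, 0.5)):
        """Binned single-epoch detection efficiency measured from fakes.

        snr      : measured S/N of each overlaid fake
        detected : boolean array, True if the fake passed the detection cuts
        Fakes outside the bin range are ignored.
        """
        detected = np.asarray(detected, dtype=float)
        idx = np.digitize(snr, bins) - 1
        centers = 0.5 * (bins[:-1] + bins[1:])
        eff = np.full(len(centers), np.nan)
        for i in range(len(centers)):
            sel = idx == i
            if sel.any():
                eff[i] = detected[sel].mean()
        return centers, eff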

Figure 8. Single-epoch detection efficiency (${\epsilon }_{{\rm{S}}/{\rm{N}}}$) vs. S/N, as measured with fakes. The solid-filled circles are computed from all the data, and are the same in each panel. The solid and dashed curves correspond to splitting the sample into roughly two equal bins for zero point (top), PSF (middle), and sky noise (bottom).


4.3. Anomalous Subtractions on Bright Galaxies

The final issue is the reliability of forced-photometry flux measurements that are used to classify light curves, both visually and with fitting programs. The average fake fluxes are recovered to within a few percent of their true values, which is adequate precision since it is smaller than the model errors used in light curve fitting. We have also checked the reliability of the flux uncertainties, and found that these uncertainties are increasingly underestimated with increasing galaxy surface brightness (SB) at the SN location; we refer to this effect as the "SB anomaly." The excess flux scatter can cause problems with monitoring and light curve fitting, and thus we have modeled this effect in both simulations and fitting programs (Section 5).

To define the SB, we first sum the template flux at the candidate location, using an aperture with a 1.3'' radius, which contains most of the flux for a typical PSF. The SB flux is defined as the average flux per square arcsecond, and the SB-mag (${m}_{\mathrm{SB}}$) is the corresponding magnitude per square arcsecond. For fakes we characterize the quality of the uncertainties using the rms of ${\rm{\Delta }}F/{\sigma }_{F}$ (${\mathrm{RMS}}_{{\rm{\Delta }}}$), where ${\rm{\Delta }}F$ is the difference between the measured (forced photometry) flux and the true flux of the fake, and ${\sigma }_{F}$ is the uncertainty on the forced-photometry measurement. Ideally ${\mathrm{RMS}}_{{\rm{\Delta }}}=1$ in all cases, but we find that ${\mathrm{RMS}}_{{\rm{\Delta }}}$ increases with SB as shown in Figure 9 for the deep fields and in Figure 10 for the shallow fields. For low SB (${m}_{\mathrm{SB}}\gt 24$), ${\mathrm{RMS}}_{{\rm{\Delta }}}$ is very close to unity as expected. For the brightest galaxies where ${m}_{\mathrm{SB}}\approx 20,$ ${\mathrm{RMS}}_{{\rm{\Delta }}}\approx 5$ in the deep fields and ∼3 in the shallow fields.
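A sketch of how the diagnostic in Figures 9 and 10 can be computed from the fakes (the aperture radius and definitions follow the text above; the function names and binning are illustrative):

    import numpy as np

    def surface_brightness_mag(template_aper_flux, zeropoint, radius_arcsec=1.3):
        """SB mag per square arcsec from the template flux in a 1.3'' radius aperture."""
        area = np.pi * radius_arcsec ** 2
        return zeropoint - 2.5 * np.log10(template_aper_flux / area)

    def rms_pull_vs_sb(flux_meas, flux_true, flux_err, m_sb, sb_bins):
        """RMS of (F_meas - F_true)/sigma_F for fakes, in bins of SB mag."""
        pull = (np.asarray(flux_meas) - np.asarray(flux_true)) / np.asarray(flux_err)
        idx = np.digitize(m_sb, sb_bins) - 1
        centers = 0.5 * (sb_bins[:-1] + sb_bins[1:])
        rms = np.array([np.sqrt(np.mean(pull[idx == i] ** 2)) if np.any(idx == i)
                        else np.nan for i in range(len(centers))])
        return centers, rms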


Figure 9. For the 2 deep fields in each passband, rms of ΔF/${\sigma }_{F}$ as a function of the galaxy surface-brightness mag (${m}_{\mathrm{SB}}$) for fakes as defined in the text. ${\rm{\Delta }}F$ is the difference between the measured and true fake flux, and ${\sigma }_{F}$ is the uncertainty. The horizontal dashed line through 1 shows the expected value if DiffImg correctly determines the flux uncertainties. The dotted-red curve is for very faint fakes with SN mag m > 26; the dashed-blue curve is for fakes with SN mag m < 24. The model for this effect (Section 5) depends on ${m}_{\mathrm{SB}}$ and not on the transient mag.


Figure 10. Same as Figure 9, but for the 8 shallow fields.


Figures 9 and 10 also show rms versus ${m}_{\mathrm{SB}}$ separately for dim fakes with m > 26 (red curve) and for brighter fakes with m < 24 (blue curve). The consistency shows that this effect depends mainly on the brightness of the galaxy and not the transient source.

5.  DiffImg MONITORING-II: SCIENCE CANDIDATES

While monitoring the single-epoch detection efficiency and data quality is important on a nightly basis (Section 4), the science prospects ultimately depend on the DiffImg candidate efficiency and our ability to select spectroscopic targets based on a small number of epochs. Here we describe the DiffImg monitoring of science candidates using SN fakes combined with MC simulations. The basic idea is to use the MC simulation to predict the SNe Ia efficiency versus redshift, and compare with the true efficiency measured from the fakes.

There are two different efficiencies to monitor as a function of redshift. The first efficiency is the fraction of fakes that become a science candidate (${{\mathcal{E}}}_{\mathrm{cand}}$). As long as ${{\mathcal{E}}}_{\mathrm{cand}}$ is high, an improved offline photometry analysis can make all of the discovered light curves useful for science analysis, even if DiffImg measures fluxes with many catastrophic outliers. However, if there are too many flux outliers then real-time photometric classification becomes more difficult, which complicates the selection of spectroscopic targets.

It is therefore important to monitor a second DiffImg efficiency, the fraction of fakes passing the photometric analysis (${{\mathcal{E}}}_{\mathrm{PSNID}}$) used for spectroscopic targeting, which is based on the PSNID program (Sako et al. 2011). The key component of the PSNID analysis (Table 6) is a requirement on the fit probability computed from the template-fit χ2, and therefore even a few measured fluxes that are highly discrepant from their true values can cause PSNID to reject the light curve. Up to two highly discrepant fluxes (w.r.t. the fit) are rejected, allowing for a small level of subtraction problems. In summary, simply discovering an event is not adequate unless the flux measurements are of sufficient quality to perform light-curve template fitting without suffering significant inefficiency.

Table 6.  PSNID Analysis Requirements for ${{\mathcal{E}}}_{\mathrm{PSNID}}$ Study

Category Requirement
Sampling 5 or more observations.
  3 bands with at least one S/N >5 observation.
  An observation with ${T}_{\mathrm{obs}}\lt -2$ days.a
  An observation with ${T}_{\mathrm{obs}}\gt +5$ days.
Fit-χ2 Fit prob ${P}_{\mathrm{fit}}\gt 0.1$ b
  Reject up to two 3.16σ data-fit outliers (Δχ2 > 10).
Typing Best fit template (among Ia, II, Ib, Ic) is Type Ia.

Notes.

a ${T}_{\mathrm{obs}}$ is the observer-frame time since the epoch of peak brightness. b ${P}_{\mathrm{fit}}$ is calculated from χ2/dof. Because of the large PSNID model errors, the true chance of finding ${P}_{\mathrm{fit}}\lt 10\%$ for SNe Ia is ∼1%.

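
The sampling requirements in Table 6 are simple to express in code. The sketch below is a hypothetical helper, not part of PSNID; the fit-χ2, outlier-rejection, and typing requirements are performed by PSNID itself and are not reproduced here.

```python
import numpy as np

def passes_sampling_cuts(tobs, snr, band):
    """Table 6 sampling requirements for one light curve.

    tobs : observer-frame days relative to peak brightness for each observation
    snr  : signal-to-noise of each observation
    band : passband label of each observation (e.g. 'g', 'r', 'i', 'z')
    """
    tobs = np.asarray(tobs, dtype=float)
    snr = np.asarray(snr, dtype=float)
    if len(tobs) < 5:                                  # 5 or more observations
        return False
    bands_with_snr5 = {b for b, s in zip(band, snr) if s > 5}
    if len(bands_with_snr5) < 3:                       # 3 bands with an S/N > 5 epoch
        return False
    if not np.any(tobs < -2):                          # an observation with Tobs < -2 days
        return False
    if not np.any(tobs > +5):                          # an observation with Tobs > +5 days
        return False
    return True
```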

Details of the MC simulation are given in Appendix A, and here we give a brief overview. The MC simulation uses the observed cadence, and the simulated flux and noise are computed from the observing conditions at each epoch: ZP, PSF, sky noise, CCD gain.

While the cadence information is trivially obtained from survey observations, the MC simulation also needs two inputs based on the fakes processed by DiffImg. First, we use the efficiency versus S/N (${\epsilon }_{{\rm{S}}/{\rm{N}}}$) measured in each passband, and illustrated in Figure 8 for the i band. Since there is good agreement between the deep and shallow fields, we use the same ${\epsilon }_{{\rm{S}}/{\rm{N}}}$ function in all fields.

The second input from the fakes is a model for the SB anomaly, the anomalous flux uncertainty that increases with the local surface brightness. The galaxy Sérsic profile in the simulation is used to analytically compute ${m}_{\mathrm{SB}},$ and the ${\mathrm{RMS}}_{{\rm{\Delta }}}$ versus ${m}_{\mathrm{SB}}$ curves in Figures 9 and 10 are used to scale the sky noise as a function of passband, and as a function of deep or shallow field. These same ${\mathrm{RMS}}_{{\rm{\Delta }}}$ versus ${m}_{\mathrm{SB}}$ curves are used in the PSNID analysis to scale the flux uncertainties. The PSNID analysis results in $7662$ fakes passing the selection criteria in Table 6 (including all 10 fields), along with a similar number of SNe Ia from the MC simulation.
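
A minimal sketch of how a measured ${\mathrm{RMS}}_{{\rm{\Delta }}}$ versus ${m}_{\mathrm{SB}}$ curve can be applied as an error scaling is shown below; the tabulated values are illustrative placeholders, not the measured curves of Figures 9 and 10.

```python
import numpy as np

# Illustrative placeholder table of (m_SB, RMS_Delta) for one band and field group;
# the real inputs are the measured curves shown in Figures 9 and 10.
MSB_GRID = np.array([20.0, 21.0, 22.0, 23.0, 24.0, 26.0])
RMS_GRID = np.array([ 5.0,  3.5,  2.3,  1.5,  1.1,  1.0])

def rms_delta_of_msb(m_sb):
    """Interpolate RMS_Delta at the local surface-brightness mag."""
    return np.interp(m_sb, MSB_GRID, RMS_GRID)

def scale_flux_uncertainty(sigma_flux, m_sb):
    """Inflate a forced-photometry flux uncertainty for the SB anomaly."""
    return sigma_flux * rms_delta_of_msb(m_sb)
```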

Figure 11 shows the science-candidate efficiency (${{\mathcal{E}}}_{\mathrm{cand}}$) and PSNID-analysis efficiency (${{\mathcal{E}}}_{\mathrm{PSNID}}$) as a function of redshift for one shallow field in each group. The analogous deep-field plots are shown in Figure 12. In the shallow fields, ${{\mathcal{E}}}_{\mathrm{cand}}\simeq 1$ for redshifts z < 0.5, and falls to 50% at z ≃ 0.7. In the deep fields, ${{\mathcal{E}}}_{\mathrm{cand}}\simeq 1$ for redshifts z < 0.8, and falls to 50% at z ≃ 1.1. The overall agreement is good between the fakes and the MC simulation. While we might have expected the SB anomaly to affect the discovery of lower redshift SNe that preferentially lie on brighter galaxies, we find that the low-redshift efficiencies are ∼100% and thus the SB anomaly has a negligible impact on discovering SNe Ia. The SB anomaly and its impact are discussed further in Section 7.2. The most notable discrepancy is in ${{\mathcal{E}}}_{\mathrm{cand}}$ for redshifts z > 1.2 in the C3 deep field, and ${{\mathcal{E}}}_{\mathrm{PSNID}}$ for redshifts z > 0.8 in both of the deep fields (Figure 12). Finally, it is worth noting that prior to the final reprocessing the fake efficiencies were significantly worse than the MC prediction for the reasons described in Section 3.7.


Figure 11. For one shallow field in each group, the Y1 efficiency vs. redshift is shown for fakes processed by DiffImg (black dots), and the MC prediction (red histogram). Left panel is the efficiency for becoming a science candidate (${{\mathcal{E}}}_{\mathrm{cand}}$); right panel is the PSNID-analysis efficiency (${{\mathcal{E}}}_{\mathrm{PSNID}}$) defined in Table 6.


Figure 12. Same as Figure 11, but for the two deep fields.


5.1. What are the Science Candidates?

Here we give a very approximate breakdown for the 7500 science candidates discovered by DiffImg in Y1, where each candidate requires a DiffImg detection on 2 separate nights with no other selection requirements. First we use our MC simulation to predict the SN contribution (Ia+CC; see Appendix A) and we include events that reach peak brightness well before and after the Y1 season. We find 2000 ± 300 SNe, where the uncertainty is from the rate measurements, and nearly 60% of the SNe are Type Ia. This SN contribution corresponds to about 27% of the candidates.

A non-astrophysical candidate, or artifact, is defined as a candidate in which more than half of the detections fail the automated scanning requirement (Section 3.3 and G15). Using this arbitrary but illustrative definition, ∼30% of the science candidates are artifacts (i.e., ∼2300 in Y1), compared with 1.5% of the fakes. These artifacts become a science candidate because of the relatively loose requirement of only 2 detections passing the selection requirements and automated scanning. The relatively small number of artifacts does not cause problems during survey operations, and thus we choose to reject them with offline analysis software rather than trying to reduce the number of science candidates.
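
A minimal sketch of this artifact definition, assuming a per-detection list of autoScan pass/fail flags for a candidate:

```python
def is_artifact(autoscan_passed):
    """A candidate is labeled an artifact if more than half of its
    detections fail the automated scanning requirement."""
    n_fail = sum(1 for passed in autoscan_passed if not passed)
    return n_fail > len(autoscan_passed) / 2.0
```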

For the remaining science candidates, a preliminary assessment of the OzDES spectral classifications shows that they are mostly AGN and variable stars.

6. REALITY CHECK: DATA-MC COMPARISON

Since the DiffImg results presented so far are based on fakes and simulations, here we perform a reality check and compare the SNANA-based MC simulation to Y1 data, where the MC simulation is a mix of SNe Ia and CC SNe as described in Appendix A. Recall that the MC simulation has input from fakes processed by DiffImg, but there is no tuning with real science candidates. Here we make a data-MC comparison for the photometric redshift distribution (${z}_{\mathrm{phot}}$) of a photometrically selected SN Ia sample, using only the SN light curve information. We do not use any spectroscopically confirmed typing information, nor do we use any host-galaxy redshifts.

For this comparison we do not use the PSNID selection criteria in Table 6. Instead, we use a more stringent analysis designed to photometrically select a highly pure SN Ia sample. We fit both the data and MC samples with the SALT-II model (Guy et al. 2010) using the photo-z technique described in Kessler et al. (2010a). Finally, the SALT-II fit parameters are used in a nearest neighbor (NN) analysis similar to that described in Sako et al. (2014). Details of the analysis are given in Appendix B, and the resulting ${z}_{\mathrm{phot}}$ comparison is shown in Figure 13. The high-redshift roll-off in the ${z}_{\mathrm{phot}}$ distributions is mainly from the requirement that three bands each have an observation with S/N > 5. The overall agreement is reasonable, except for z > 1 in the deep fields. This data-MC discrepancy will be monitored as we continue to improve photometric classification methods and the simulation.


Figure 13.  ${z}_{\mathrm{phot}}$ distribution in Y1 from a photometric analysis (Appendix B). Left panel is for the 2 deep fields; right is for the 8 shallow fields. Black points are the data; red histogram is the MC prediction, re-scaled to have the same number of entries as the data.


7. DISCUSSION

7.1. Comparison of Search with SNLS

Here we make some rough performance comparisons between the SNLS and DES-SN deep field search for SNe Ia. These two surveys have similar depths and passbands, and they each measured their efficiency with fake SNe Ia overlaid on images. While the DES-SN trigger requires 2 epochs in any band, the SNLS trigger requires a single detection in the Megacam iM band. For the single-epoch detection efficiency, Figure 9 of P10 shows that the magnitude at 50% efficiency is ${m}_{\mathrm{eff}=1/2}=24.3$ in the iM band for an exposure time of 3640 s.47 This depth is very similar to our average DES i band depth, ${m}_{\mathrm{eff}=1/2}=24.5$ (Figure 7), using 1440 s exposures.

P10 also measure the efficiency versus redshift for finding fake SNe Ia. Both the P10 and SNANA simulations predict the observed color and stretch distribution for SNLS, and thus the two simulations are consistent in describing the parent populations of stretch and color. Figure 10 of P10 shows that ${{\mathcal{E}}}_{\mathrm{cand}}=50\%$ at z ≃ 0.95, slightly below the corresponding DES-SN redshift z ≃ 1.1.

7.2. SB Anomaly

As described in Section 4, our image subtractions degrade with increasing galaxy surface brightness, leading to increased flux scatter (see Figures 9 and 10). The origin of this SB anomaly has not been identified, but we speculate that it may be caused by an underestimate of the pixel flux errors in resampled images in the vicinity of bright galaxies. In particular, resampling introduces pixel-to-pixel correlations in the galaxy profile which are not included in our estimate of the PSF-fitted uncertainties. Other possibilities include subtle problems in the astrometric solution, the PSF determination, or the coadding of exposures.

To check for the possibility that we introduced the SB anomaly in our customized version of hotPants, we have run a few tests using the publicly available version. We find that the subtracted images look very similar, and that our version results in notably fewer outlier fluxes. We are therefore confident that we have not introduced bugs to cause the SB anomaly.

In the literature on transient-search pipelines we could not find a quantitative analysis of the effect of subtractions on bright galaxies. However, there are some interesting clues in the final-photometry results reported by Pan-STARRS1 and SNLS. In the recent Pan-STARRS1 cosmology analysis, which uses the same underlying subtraction technique as our DiffImg, their light curve fits have a reduced χ2 distribution with a larger high-side tail than expected (see Figure 6 in Rest et al. 2014). They attribute this effect to subtraction artifacts on bright galaxies, which is similar to our SB anomaly.

In the SNLS final photometry (Astier et al. 2013), they use a scene modeling technique with stacked images, originally developed for SDSS (Holtzman et al. 2008), which does not use resampled images. As a function of total SN + galaxy brightness, they find no evidence for flux bias or scatter (see Figures 7 and 10 in Astier et al. 2013), which is encouraging evidence that the SB anomaly can be resolved in the offline analysis. It is not clear if their lack of an SB anomaly is due to a different photometry method, their astrometric precision being an order of magnitude better than our DiffImg,48 or because they do not probe sufficiently bright galaxies to see the effect.

We are actively developing a final-analysis photometry method similar to that of Holtzman et al. (2008) and Astier et al. (2013), but the SB anomaly may not be resolved for finding transients with DiffImg. The SB anomaly's impact on discovering SNe Ia, however, is quite limited because of their brightness at the low redshifts where the SB anomaly is most pronounced, and because only 2 detections are needed among the many above-threshold observations. The main impact is that the larger flux uncertainties at low redshift slightly degrade the classification performance of the PSNID program.

In contrast to bright SNe Ia, the SB anomaly can have a more dramatic effect on detecting and measuring fluxes for faint or fast transients, such as CC SNe or kilonovae. For example, kilonova models for neutron-star (NS) mergers suggest optical signals that are much dimmer, redder, and shorter-lived compared to SNe Ia (Barnes & Kasen 2013). If DiffImg is used to search for such events in very nearby galaxies, the SB anomaly could significantly degrade the detection efficiency, and the events that are detected could have color uncertainties much larger than expected from photo-electron statistics, making it difficult to distinguish kilonovae from other astrophysical transients.

To further diagnose the SB anomaly, Figure 14 shows the autoScan score distribution for i band fake detections in the two deep fields (X3, C3). AutoScan assigns a score near zero to a clear artifact, and a score near one to a cleanly subtracted point-source transient; scores above 0.5 are used to make candidates. The upper-left panel in Figure 14 shows the autoScan score distribution for all of the i band detections; this reference distribution is strongly peaked near one, showing that most of the detections are from good subtractions. The remaining panels show the autoScan score distribution, in bins of ${m}_{\mathrm{SB}},$ for the small subset of >3σ flux outliers. For the brightest SB range ($20\lt {m}_{\mathrm{SB}}\lt 21$) the autoScan scores are all close to zero, indicating that these are visibly poor subtractions. As the SB decreases, the autoScan scores improve. We have checked the distributions of PSF, sky noise and ZP, and find no significant difference between the outliers and the reference; hence there is no apparent correlation of the SB anomaly with observing conditions.
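
A minimal sketch of the outlier selection behind Figure 14, assuming arrays of measured and true fake fluxes, their uncertainties, autoScan scores, and ${m}_{\mathrm{SB}}$ are available (the array names are ours):

```python
import numpy as np

def outlier_scores_by_sb(flux_meas, flux_true, flux_err, score, m_sb,
                         sb_bins=((20, 21), (21, 22), (22, 23), (23, 24))):
    """Return the autoScan scores of >3-sigma flux outliers, grouped in m_SB bins."""
    flux_meas, flux_true, flux_err, score, m_sb = map(
        np.asarray, (flux_meas, flux_true, flux_err, score, m_sb))
    pull = np.abs(flux_meas - flux_true) / flux_err
    outlier = pull > 3.0
    grouped = {}
    for lo, hi in sb_bins:
        sel = outlier & (m_sb >= lo) & (m_sb < hi)
        grouped[(lo, hi)] = score[sel]
    return grouped
```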


Figure 14.  autoScan score distribution for fakes in the two deep fields, i band. The score is 0 for a clearly bad subtraction and 1 for a good subtraction. The upper left panel shows the reference distribution for all fakes. The remaining panels probe the SB anomaly for the subset of fakes that are faint (mSN > 23), are detected by SExtractor, satisfy the cuts in Table 3, and have a measured flux more than 3σ away from the true flux. Each panel indicates a different ${m}_{\mathrm{SB}}$ range.


For PSNID light curve fitting we could remove the few observations that fail autoScan, but we do not currently have the infrastructure to apply this requirement to the non-detections, which are often more numerous than the autoScan failures. As described in Section 5, we have chosen instead to model the increased flux scatter and inflate the flux uncertainties based on ${m}_{\mathrm{SB}}.$

Finally, we note that our characterization of the bright-galaxy subtraction artifact depends on a single parameter: ${m}_{\mathrm{SB}}.$ While this description is adequate for classifying newly discovered SNe for spectroscopic observations, a more accurate description may be needed for dimmer transients (e.g., kilonovae) and for the Hubble-diagram analysis if this effect persists in the final photometry. For example, the SB anomaly could also depend on the exposure conditions and on the SB gradient at the SN location.

7.3. Host-galaxy Matching

The SN science analyses will rely mainly on photometric classification, and the redshifts will come from host galaxy spectroscopy, primarily from OzDES (Yuan et al. 2015). The spectroscopic redshifts are very accurate in principle, if the correct galaxy is matched to each SN. We have used fakes to measure the SN-host matching performance in Y1, and found a 99% success rate. However, our fakes are preferentially distributed close to the galaxy cores with too few events in the disk tails, and thus the SN-host matching result from fakes is too optimistic.

We are therefore preparing to test SN-host matching with an independent set of fake locations based on more realistic galaxy profiles from semi-analytic models that are fit to Sérsic profiles. As we obtain more accurate DES galaxy profiles in future analyses, we will be able to use our own data to evaluate the SN-host matching efficiency. Also note that fake locations can be rapidly generated and analyzed at the catalog level since there is no need to overlay SN fluxes on images to process with DiffImg. The eventual goal is to update the simulation to include a model of outlier redshifts from mis-matched host galaxies.

8. CONCLUSIONS

We have assembled a pipeline capable of using hundreds of CPU cores to process up to 170 GB of raw imaging data in less than a day, with the goal of discovering astrophysical transients. For the subtracted images produced by DiffImg in Y1, the typical number of SExtractor-detected objects per band is a few hundred thousand per 3 deg2 field, and the vast majority (>90%) are subtraction artifacts. Selection requirements and automated scanning reduce the artifact fraction to ∼25% and the number of detections to $\sim {10}^{4}$ per band (Table 4). The number of detections per single-epoch visit is ∼130 per deg2.

The number of science candidates, requiring a detection on 2 separate nights, is 1040 per deep field, and 680 per shallow field (Table 5). Our MC simulation predicts that roughly 27% of the discovered transients are SNe Ia or CC SNe. Another ∼30% are artifacts, and most of the remaining candidates are AGN or variable stars.

We have implemented extensive monitoring in DiffImg based on overlaying fake SNe Ia near galaxies on the search images. Comparing the DiffImg efficiency for fakes to the efficiency from MC simulations shows that the DiffImg performance is close to what is expected. The main defect of DiffImg is the SB anomaly, in which larger host-galaxy surface brightness results in flux scatter larger than the reported uncertainties (see Figures 9 and 10). There are other small fake-MC discrepancies in the efficiency (e.g., Figure 12); it is not clear if the cause is a more subtle DiffImg defect, or if the MC simulation is too optimistic.

As a rigorous demonstration of our monitoring technique, we performed a very preliminary photometric classification analysis on real (non-fake) data, and compared the resulting ${z}_{\mathrm{phot}}$ distribution to a MC simulation. Inputs to the MC simulation include observed conditions (PSF, ZP, sky noise) and the DiffImg behavior measured with fakes (efficiency versus S/N and anomalous flux scatter versus SB). The resulting data-MC agreement is reasonable in both the deep and shallow fields (Figure 13).

Finally, the results presented here are based on fully reprocessed data after the first two DES seasons. DiffImg issues during Y1 and Y2 resulted in some poor subtractions, but with recent DiffImg improvements and a reliable model of the flux uncertainties, we expect our spectroscopic target selection to be more efficient and more automated in the remaining seasons.

This research used resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Part of this research was conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The DES data management system is supported by the National Science Foundation under Grant Number AST-1138766. The DES participants from Spanish institutions are partially supported by MINECO under grants AYA2012-39559, ESP2013-48274, FPA2013-47986, and Centro de Excelencia Severo Ochoa SEV-2012-0234, some of which include ERDF funds from the European Union. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Enérgeticas, Medioambientales y Tecnológicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de Física d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. We are grateful for the extraordinary contributions of our CTIO colleagues and the DECam Construction, Commissioning and Science Verification teams in achieving the excellent instrument and telescope conditions that have made this work possible. The success of this project also relies critically on the expertise and dedication of the DES Data Management group.

APPENDIX A: MC SIMULATION TO PREDICT THE DiffImg EFFICIENCY

The fast MC simulation of SNe Ia is from SNANA (Kessler et al. 2009). It uses exactly the same generation parameters as those used to generate the fakes (Section 3.2.2): the parent populations of color and stretch, the intrinsic scatter model, and a random galaxy location chosen in proportion to the surface brightness density. For studies requiring CC SNe we use the SNANA simulation as described in Kessler et al. (2010b). For studies requiring the absolute rate, we use the SN Ia volumetric rate from Dilday et al. (2008) and the CC rate from Bazin et al. (2009). Each simulated epoch corresponds to a real observation in the survey, where the model magnitude is converted to an equivalent forced-photometry flux using the measured ZP. The observed PSF and sky noise at each epoch are used to predict the measurement uncertainty,

Equation (4): ${\sigma }_{\mathrm{SIM}}={\left[F+A\,b\,{\mathrm{RMS}}_{{\rm{\Delta }}}^{2}\right]}^{1/2},$

where ${\sigma }_{\mathrm{SIM}}$ is the uncertainty in photoelectrons, F is the flux, $A={\left[2\pi \int {\mathrm{PSF}}^{2}(r,\theta ){rdr}\right]}^{-1}$ is the noise-equivalent area, and b is the effective sky level including dark current, readout noise, and noise from the host galaxy. ${\mathrm{RMS}}_{{\rm{\Delta }}}$ is an empirical error scaling of the sky noise that increases with the local surface brightness as shown in Figures 9 and 10; this term accounts for the SB anomaly: systematic subtraction problems near bright galaxies. While the measured ${\mathrm{RMS}}_{{\rm{\Delta }}}$ curves are used to compute anomalous fluctuations in the measured fluxes, the reported uncertainties are computed with ${\mathrm{RMS}}_{{\rm{\Delta }}}=1$ in the same way as the data.
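
A minimal sketch of Equation (4), with the noise-equivalent area evaluated for an assumed circular Gaussian PSF (for which $A=4\pi {\sigma }^{2}$); the function names are ours and this is not the SNANA implementation.

```python
import numpy as np

def noise_equivalent_area(psf_fwhm_pix):
    """Noise-equivalent area A = [2*pi * integral(PSF^2 r dr)]^(-1); for a
    circular Gaussian PSF this reduces to A = 4*pi*sigma^2 (pixels^2)."""
    sigma = psf_fwhm_pix / 2.3548
    return 4.0 * np.pi * sigma ** 2

def sigma_sim(flux, psf_fwhm_pix, sky_level, rms_delta=1.0):
    """Simulated flux uncertainty (photoelectrons), Equation (4):
    sigma_SIM^2 = F + A * b * RMS_Delta^2."""
    A = noise_equivalent_area(psf_fwhm_pix)
    return np.sqrt(flux + A * sky_level * rms_delta ** 2)
```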

The simulation includes the candidate selection requirement of a detection on two separate nights. The detection efficiency is computed from the ${\epsilon }_{{\rm{S}}/{\rm{N}}}$ curves in Figure 8. A simulated detection requires ${\epsilon }_{{\rm{S}}/{\rm{N}}}\gt r,$ where $0\lt r\lt 1$ is a random number.
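
A minimal sketch of the simulated detection and two-night candidate logic described above; the analytic ${\epsilon }_{{\rm{S}}/{\rm{N}}}$ function is a placeholder for the measured curves in Figure 8.

```python
import numpy as np

rng = np.random.default_rng()

def eff_snr(snr):
    """Placeholder for the measured efficiency vs. S/N curves (Figure 8):
    a smooth function with a 50% threshold near S/N = 5."""
    return 1.0 / (1.0 + np.exp(-2.0 * (np.asarray(snr, dtype=float) - 5.0)))

def is_detected(snr):
    """A simulated detection requires eps_S/N > r, with r uniform in (0,1)."""
    snr = np.asarray(snr, dtype=float)
    return eff_snr(snr) > rng.uniform(size=snr.shape)

def is_candidate(snr_per_epoch, night_per_epoch):
    """Candidate requirement: detections on at least two separate nights."""
    det = is_detected(snr_per_epoch)
    detected_nights = {night for night, d in zip(night_per_epoch, det) if d}
    return len(detected_nights) >= 2
```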

Finally, the simulated light curves are stored in data files and analyzed in exactly the same way as transients (fakes or real events) found by DiffImg.

APPENDIX B: PHOTOMETRIC ANALYSIS AND SELECTION REQUIREMENTS

Here we describe a photometric analysis and selection requirements to obtain a high-purity sample of SNe Ia in the first season of the DES-SN program. The goal of this analysis is to compare the ${z}_{\mathrm{phot}}$ distribution for data and the MC simulation. Using the SALT-II model, light curve fitting is done with the SNANA program snlc_fit.exe. For each candidate, the 5 fitted parameters are (1) time of peak brightness (t0), (2) SALT-II color parameter (c), (3) SALT-II stretch parameter (x1), (4) SALT-II amplitude (x0), and (5) photometric redshift (${z}_{\mathrm{phot}}$).

The first fit iteration chi-squared (${\chi }_{1}^{2}$) is computed in the usual manner: from the data-model flux-difference for each epoch, and the quadrature sum of the data and model uncertainties. Since the model uncertainty depends on the fitted parameter ${z}_{\mathrm{phot}},$ the second fit iteration chi-squared (${\chi }_{2}^{2}$) is

Equation (5): ${\chi }_{2}^{2}={\sum }_{e}\,{({\rm{\Delta }}{F}^{e})}^{2}/{({\sigma }^{e})}^{2}+{\chi }_{\sigma }^{2},$ with ${\chi }_{\sigma }^{2}={\sum }_{e}\,\mathrm{ln}\left[{({\sigma }^{e})}^{2}/{({\sigma }_{1}^{e})}^{2}\right].$

Here e is the epoch index, ${\rm{\Delta }}{F}^{e}$ is the data-model flux difference, ${\sigma }^{e}$ is the quadrature sum of the data and model uncertainties evaluated with the current fit parameters, and ${\sigma }_{1}^{e}$ is the corresponding quantity from the first fit iteration, in which there is no ${\chi }_{\sigma }^{2}$ term. While the ${\sigma }_{1}^{e}$ add an irrelevant constant to ${\chi }_{2}^{2},$ they have the effect of keeping ${\chi }_{\sigma }^{2}$ small. The analysis selection requirements are as follows:

1. three bands with at least one observation satisfying S/N > 5.
2. at least one observation with ${T}_{\mathrm{rest}}\lt -2$ days, where ${T}_{\mathrm{rest}}\equiv {T}_{\mathrm{obs}}/(1+{z}_{\mathrm{phot}}).$
3. at least one observation with ${T}_{\mathrm{rest}}\gt +10$ days.
4. SALT-II stretch parameter $| {x}_{1}| \lt 4$.
5. $0.02\lt {z}_{\mathrm{phot}}\lt 2$.
6. fit probability ${P}_{\mathrm{fit}}\gt 0.1,$ calculated from the fit χ2/dof.
7. $| {\chi }_{\sigma }^{2}| \lt 2.5.$
8. the NN requirement described below.

The NN analysis is based on the four-dimensional space of x1, c, ${z}_{\mathrm{phot}}$ and ${\tilde{m}}_{B}.$ The first three variables are from the SALT-II light curve fit (see above). ${\tilde{m}}_{B}$ is the true rest-frame B-band magnitude as described in Section 4.3 of Kessler et al. (2013), and is not the naive best-fit model magnitude. For a given set of fitted parameters, the NNs are simulated events that satisfy a four-dimensional distance constraint,

Equation (6): $\frac{{(c-c^{\prime} )}^{2}}{{{\rm{\Delta }}}_{c}^{2}}+\frac{{({x}_{1}-{x}_{1}^{\prime} )}^{2}}{{{\rm{\Delta }}}_{{x}_{1}}^{2}}+\frac{{({z}_{\mathrm{phot}}-z^{\prime} )}^{2}}{{{\rm{\Delta }}}_{z}^{2}}+\frac{{({\tilde{m}}_{B}-{\tilde{m}}_{B}^{\prime} )}^{2}}{{{\rm{\Delta }}}_{B}^{2}}\lt 1,$

where the primed quantities are the fitted parameters from a simulated training sample that includes SNe Ia and CC SNe events. The optimal distance-metric parameters (${{\rm{\Delta }}}_{c},{{\rm{\Delta }}}_{{x}_{1}},{{\rm{\Delta }}}_{z},{{\rm{\Delta }}}_{B}$) are trained with the simulation to maximize the product of the SN Ia purity and the efficiency. The final selection requirement is that for simulated neighbors satisfying Equation (6), more than half are true SNe Ia with at least 1σ confidence.
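
A minimal sketch of the NN selection, combining the distance constraint of Equation (6) with the majority-Ia requirement; the 1σ criterion is approximated here with a simple binomial uncertainty, and the distance-metric parameters are inputs to be optimized on the training sample.

```python
import numpy as np

def nn_select(fit, train_fits, train_is_ia, delta_c, delta_x1, delta_z, delta_B):
    """Apply the 4D NN requirement to one fitted candidate.

    fit         : dict with keys 'c', 'x1', 'z', 'mB' for the candidate
    train_fits  : dict of arrays with the same keys for the simulated training sample
    train_is_ia : boolean array, True for true SNe Ia in the training sample
    """
    d2 = ((train_fits['c']  - fit['c'])  / delta_c) ** 2 \
       + ((train_fits['x1'] - fit['x1']) / delta_x1) ** 2 \
       + ((train_fits['z']  - fit['z'])  / delta_z) ** 2 \
       + ((train_fits['mB'] - fit['mB']) / delta_B) ** 2
    neighbors = d2 < 1.0                       # Equation (6)
    n = neighbors.sum()
    if n == 0:
        return False
    frac_ia = train_is_ia[neighbors].mean()
    sigma = np.sqrt(frac_ia * (1.0 - frac_ia) / n)
    # Require the Ia fraction among neighbors to exceed 0.5 by at least 1 sigma.
    return frac_ia - sigma > 0.5
```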

Footnotes

• 40
• 41 The global DES calibration plan is available at http://des-docdb.fnal.gov:8080/cgi-bin/ShowDocument?docid=6584&version=7
• 42 ${\sigma }_{\mathrm{SKY}}/{\sigma }_{\mathrm{SKY}}(\mathrm{ideal})\simeq \sqrt{(1+1/{N}_{\mathrm{template}})},$ where ${N}_{\mathrm{template}}$ is the number of coadded templates.
• 43 $\mathrm{ZP}=25+2.5{\mathrm{log}}_{10}(300),$ where 25 is a nominal ZP per second and 300 s is a reference exposure time.
• 44
• 45 PSNID—Photometric SN Identification.
• 46 National Energy Research Scientific Computing Center.
• 47 See Table 2 in P10 for the SNLS exposure time in each band.
• 48 While the SNLS final-photometry pipeline has much better astrometric precision than our DiffImg, the SNLS search pipeline (P10) and DiffImg have similar astrometric precision.
