
FAST GENERATION OF ENSEMBLES OF COSMOLOGICAL N-BODY SIMULATIONS VIA MODE RESAMPLING


Published 2011 July 21 © 2011. The American Astronomical Society. All rights reserved.
Citation: Michael D. Schneider et al. 2011 ApJ 737 11, DOI 10.1088/0004-637X/737/1/11


ABSTRACT

We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with <20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires ∼8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.


1. INTRODUCTION

The matter power spectrum is a sensitive probe of cosmological models. Observations of galaxy clustering, weak gravitational lensing, and the Lyα forest are all tracers of the matter power spectrum and have to date provided key constraints on the initial conditions of the universe as well as on its growth and expansion history. Many future surveys will rely on tracers of the matter power spectrum to constrain models for dark energy, dark matter, and massive neutrinos. However, due to nonlinear gravitational evolution of the matter distribution, both the mean matter power spectrum and the variance of, and correlations between, its band powers are difficult to predict without expensive numerical simulations.

Considerable work has been done to predict the mean nonlinear matter power spectra to the precision necessary for near-term surveys (e.g., Smith et al. 2003; Lawrence et al. 2010). In order to use observations of the matter power spectrum to constrain cosmological parameters, it is necessary to know the error distribution for the power spectrum estimates in addition to the mean spectrum predicted by the cosmological model. It is by now well established that nonlinear gravitational evolution creates a complicated error distribution for the power spectrum with significantly increased small-scale variance and correlations between all band powers when compared with the Gaussian case (Scoccimarro et al. 1999; Meiksin & White 1999; Hamilton et al. 2006; Neyrinck et al. 2006; Neyrinck & Szapudi 2007; Takahashi et al. 2009). The differences between the Gaussian error and nonlinear error models for the matter power spectrum are quite significant for parameter estimation from galaxy surveys (Hamilton et al. 2006; Neyrinck & Szapudi 2007), but uncertainty in nonlinear galaxy bias with respect to dark matter may dominate the inferred parameter constraints in the near future. Weak-lensing surveys probe the matter distribution directly and are thus insensitive to biasing. The line-of-sight projection in weak lensing reduces the nonlinear contribution to the power spectrum covariance, but it is still a significant consideration for parameter estimation (Semboloni et al. 2007; Takada & Jain 2009; Sato et al. 2009).

Standard estimators of the matter power spectrum covariance matrix in a given model require a large ensemble of statistically independent realizations of the matter density over volumes at least as large as the survey being analyzed. Meiksin & White (1999) found that at least several hundred realizations are necessary to estimate the covariance over an interesting dynamical range and Takahashi et al. (2009) used 5000 simulations to obtain a low-noise estimator for a Gpc3 volume. Hamilton et al. (2006) attempted to estimate the covariance matrix by sub-sampling a single simulation volume, but found that the window functions they applied to define different pseudo-independent sub-volumes significantly altered the covariance. That is, the Fourier transform of a windowed density field has Fourier modes that are linear combinations of the modes in the N-body simulation volume with periodic boundary conditions. The modes of the windowed density, then, have different covariance from the modes in the periodic box, which is particularly enhanced by the correlations between the largest and smallest scales (Rimes & Hamilton 2006). In a similar line of inquiry, Norberg et al. (2009) showed that jackknife estimates of the covariance matrix from simulated survey regions systematically underpredict the true nonlinear covariance matrix. Finally, if some information is known about the structure of the power spectrum covariance, then the "shrinkage estimator" of Pope & Szapudi (2008) gives a way to reduce both the bias and noise in covariance estimates from a fixed number of density realizations. However, in our experience the "shrinkage estimators" as formulated in Pope & Szapudi (2008) rely quite heavily on the prior information one is able to supply.
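The standard sample-covariance estimator that such ensembles feed can be written in a few lines. The sketch below is our own illustration on mock Gaussian band powers (not the simulations of Takahashi et al. 2009); the function name and toy numbers are ours.

```python
import numpy as np

def covariance_matrix(band_powers):
    """Standard sample covariance of power spectrum band powers.

    band_powers: (n_realizations, n_bands) array, one row of estimates
    P_hat(k_i) per statistically independent density realization.
    Returns the (n_bands, n_bands) covariance with 1/(N - 1) normalization.
    """
    p = np.asarray(band_powers, dtype=float)
    diff = p - p.mean(axis=0)
    return diff.T @ diff / (p.shape[0] - 1)

# Toy usage: 5000 mock "realizations" of 4 correlated band powers.
rng = np.random.default_rng(0)
true_cov = 0.01 * (np.eye(4) + 0.5)   # positive definite by construction
mock = rng.multivariate_normal(np.ones(4), true_cov, size=5000)
cov = covariance_matrix(mock)
```

The per-element noise of this estimator scales roughly as the covariance amplitude times $\sqrt{2/N}$, which is why thousands of independent realizations are needed for low-noise estimates.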

In this paper, we describe a method to obtain multiple realizations of the matter density from a single N-body simulation by resampling those Fourier modes (in a periodic box) that can be approximated as Gaussian distributed with zero correlations between Fourier modes. While we anticipate this algorithm could be useful for a number of applications (such as adding power larger than the simulation volume as in Cole 1997), in this paper we focus on the minimum number of simulations needed to estimate the matter power spectrum covariance matrix. As a benchmark for our study we use the results of Takahashi et al. (2009), who used 5000 N-body simulations to estimate the covariance matrix in a 1 (h−1 Gpc)3 simulation volume. While we focus here on the three-dimensional matter power spectrum, an eventual goal is to efficiently estimate the weak-lensing power spectrum covariance for upcoming cosmic shear surveys. An example of this using a brute-force approach is given in Sato et al. (2009).

The structure of this paper is as follows. In Section 2, we describe our method for resampling large-scale modes in a periodic N-body simulation and the necessary adjustments to the remaining small-scale modes. We give an approximate algorithm requiring only a single snapshot, but limited to simulations of at least several Gpc on a side, in Section 2.1.2. We describe our full algorithm for arbitrary simulation sizes (as long as the large-scale modes are Gaussian distributed) in Section 2.1.3. In Section 3, we present validation tests for adding new large-scale mode realizations. We describe how our resampling algorithm can be useful for power spectrum covariance matrix estimation in Section 3.2.3. We discuss several future directions and applications for this work in Section 4. Where numerical results are given, unless otherwise stated, we assume the same cosmology as Takahashi et al. (2009) with Ωm = 0.238, Ωb = 0.041, ΩΛ = 0.762, ns = 0.958, σ8 = 0.76, and H0 = 73.2 km s−1 Mpc−1.

2. RESAMPLING LARGE-SCALE POWER

Our goal is to isolate and resample those Fourier modes in a single periodic N-body simulation that can be accurately approximated as Gaussian distributed. We assume that the Fourier modes of the initial conditions for the simulation are generated from a random realization of a complex Gaussian field on a grid from a specified power spectrum (i.e., the real and imaginary parts of the Fourier modes are each drawn from independent Gaussian distributions such that the expected squared amplitude of each mode equals the power spectrum). As the simulation is evolved, gravitational interactions between the particles tracing the matter density field cause the phases of the Fourier modes to become correlated and the distribution of Fourier mode amplitudes to become skewed toward larger values. The growth rate of correlations and the departure from Gaussianity are functions of wavenumber, as gravitational collapse proceeds most rapidly on small scales.

Therefore, we first need to separate the Fourier modes into "large scales," defined to be Gaussian distributed, and "small scales," defined to be all the remaining Fourier modes in the simulation. As a simple criterion that proved adequate for our purposes, we separate large- and small-scale modes by treating all modes with wavenumber amplitude less than some value kthresh as Gaussian distributed. In practice, we defined kthresh so that the rms mass overdensity smoothed in top-hat spheres of radius 2π/kthresh was less than 0.2 (similar to Cole 1997). This is based on the idea that larger mass overdensity fluctuations indicate more nonlinear growth leading to larger departures from Gaussianity.

Having defined those Fourier modes that we will model as Gaussian, we next resample them by generating new realizations of the initial conditions that have nonzero Fourier modes only for $|\vec{k}| < k_{\rm thresh}$ and extrapolating the amplitudes of the modes to the desired redshift using linear theory.

To obtain a new realization of the density field that approximates what we would have obtained had we run a simulation with the new large-scale modes in the initial conditions of the simulation, we have to adjust the small-scale modes in some way to account for the phase correlations that would have emerged between the large and small scales.

2.1. Resampling Algorithm

In this section, we imagine that we know the particle positions in a simulation that has no large-scale power (where "large scale" denotes Fourier modes with $|\vec{k}| < k_{\rm thresh}$) and we seek an algorithm to add new large-scale modes. We will denote the simulation without large-scale power as a "small-modes" simulation and the simulation with all Fourier modes nonzero as an "all-modes" simulation.

2.1.1. Physical Picture

Altering the amplitudes and phases of the large-scale modes of the mass density in a cosmological volume has two dominant effects:

  • 1.  
    sub-volumes are expanded or compressed, and
  • 2.  
    sub-volumes gravitationally evolve with an effective matter background density that is enhanced or diminished,

depending on where the large-scale modes become more or less overdense, respectively (Frenk et al. 1988; Little et al. 1991; Tormen & Bertschinger 1996; Cole 1997; Angulo & White 2010). We follow Cole (1997) to model both of these effects. To model the expansion and compression of sub-volumes, we perturb the particle positions using the Zel'dovich displacements (Zel'dovich 1970) calculated from the large-scale modes, δL,

Equation (1)

$$\mathbf{d}_{L}(\mathbf{x}, a) = \int \frac{d^{3}k}{(2\pi)^{3}}\, \frac{i\vec{k}}{k^{2}}\, \delta_{L}(\vec{k}, a)\, e^{i\vec{k}\cdot \mathbf{x}}.$$

The expansion and compression of sub-volumes is analogous to the frequency modulation of a radio wave: the large-scale modes to be added are a new signal that modulates the sizes of sub-volumes in the matter density field, expressed to first order by the Zel'dovich displacements in Equation (1).
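To first order, the Zel'dovich displacement of Equation (1) amounts to solving $\nabla^{2}\phi = \delta_L$ and taking $\mathbf{d}_L = -\nabla\phi$, i.e., $\mathbf{d}_L(\vec{k}) = (i\vec{k}/k^2)\,\delta_L(\vec{k})$, so that $\delta_L = -\nabla\cdot\mathbf{d}_L$. A minimal FFT-based sketch (our own function name; assumes a cubic grid and omits interpolation of the displacements to particle positions):

```python
import numpy as np

def zeldovich_displacement(delta_L, box_size):
    """Zel'dovich displacement field from a gridded overdensity.

    Solves d(k) = (i k / k^2) delta_L(k) in Fourier space, so that
    delta_L = -div(d) to first order.  delta_L is an (n, n, n) real
    array; returns (dx, dy, dz) in the same length units as box_size.
    """
    n = delta_L.shape[0]
    delta_k = np.fft.rfftn(delta_L)
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    k3 = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1, k1, k3, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0          # avoid 0/0; the k = 0 mode carries no displacement
    disp = []
    for ki in (kx, ky, kz):
        dk = 1j * ki / k2 * delta_k
        dk[0, 0, 0] = 0.0
        disp.append(np.fft.irfftn(dk, s=delta_L.shape))
    return disp
```

For a single plane wave $\delta_L = A\cos(k_0 x)$ this gives $d_x = -(A/k_0)\sin(k_0 x)$, i.e., displacements that converge on the overdense planes, as expected.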

Compressed (expanded) regions also evolve with an increased (decreased) effective matter density, altering the growth rate of structure. Because the linear growth function increases monotonically with time, we model the changing effective local matter density by finding new times $a^{\prime }(\mathbf {x})$ at which the linear growth function matches that of a universe with matter density $\Omega _{m}\left(1+\delta _{L}(\mathbf {x}, a)\right)$ (with ΩΛ kept fixed). The perturbed scale factors $a^{\prime }(\mathbf {x})$ are then defined by the relation

Equation (2)

$$D\big(a^{\prime}(\mathbf{x}),\, \Omega_{m}\big) = D\big(a,\, \Omega_{m}\left[1+\delta_{L}(\mathbf{x},a)\right]\big),$$

where D(a, Ωm) is the linear growth function. In practice, we solve Equation (2) numerically for a', as illustrated in Figure 1.
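A sketch of this numerical solution, assuming the Heath (1977) growth integral for a matter + curvature + Λ universe (radiation neglected) and normalizing D to unity at aIC = 1/21 as in Figure 1. The function names are ours, not the authors' code:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

A_IC = 1.0 / 21.0   # scale factor of the initial conditions (z = 20)

def growth(a, omega_m, omega_l=0.762):
    """Linear growth function D(a) for matter + curvature + Lambda,
    D(a) ~ H(a) * integral_0^a dy / (y H(y))^3 (Heath 1977),
    normalized to unity at a = A_IC."""
    omega_k = 1.0 - omega_m - omega_l
    h = lambda x: np.sqrt(omega_m / x**3 + omega_k / x**2 + omega_l)
    d = lambda x: h(x) * quad(lambda y: (y * h(y))**-3, 0.0, x)[0]
    return d(a) / d(A_IC)

def perturbed_scale_factor(delta_l, a=1.0, omega_m=0.238, omega_l=0.762):
    """Growth-matching condition: find a' such that the growth of the
    unperturbed cosmology at a' equals the growth at a of a universe
    with matter density omega_m * (1 + delta_l), omega_l held fixed."""
    target = growth(a, omega_m * (1.0 + delta_l), omega_l)
    return brentq(lambda ap: growth(ap, omega_m, omega_l) - target, 1e-3, 30.0)
```

Overdense perturbations (δL > 0) map to later times a' > a and underdense ones to earlier times, consistent with the ranges quoted in the text.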


Figure 1. Top left: linear growth function normalized to one at the time of the simulation initial conditions (aIC = 1/21) vs. scale factor. Top right: linear growth function vs. matter density, also normalized to unity at the time of the initial conditions for all values of Ωm. Bottom: scale factor as a function of matter density at matched linear growth. The vertical dashed line shows the value of Ωm = Ω0m = 0.238 used in our simulations. The vertical dotted lines show the values of Ω0m(1 ± 4.3σ(δL)), where σ(δL) ≈ 0.17 is the rms mass overdensity in top-hat spheres at the scale R = 2π/(256kF) (kF ≡ 2π/Lbox) for the density perturbations with nonzero Fourier amplitudes only for $|\vec{k}| < k_{\rm thresh}$. The horizontal dotted lines in the bottom panel show the corresponding scale factor range over which our small-modes simulation must be saved.


2.1.2. Approximate Method for a Single Snapshot

Combining the two effects of expansion/compression and time perturbations, the model for adding large-scale fluctuations to the mass density field is

Equation (3)

$$\delta(\mathbf{x}, a) = \delta_{L}(\mathbf{x}, a) + \delta_{S}\big(\mathbf{x}^{\prime}, a^{\prime}(\mathbf{x})\big),$$

where $\delta (\mathbf {x},a)\equiv \rho (\mathbf {x},a)/\bar{\rho }- 1$ is the mass overdensity that would be measured in the all-modes simulation, $\delta _{S}(\mathbf {x},a)$ is the mass overdensity measured in the small-modes simulation, δL has zero power for $|\vec{k}| > k_{\rm thresh}$, a' is defined by Equation (2), and $\mathbf {x}^{\prime } \equiv \mathbf {x}+ \mathbf {d}_{L}(\mathbf {x},a)$. Note that for this definition of $\mathbf {x}^{\prime }$ the Zel'dovich displacements are added to the particle positions prior to application of the time perturbation (i.e., at the positions at time a, not at a'). We will discuss this choice below.

When |a' − a| ≪ 1 (which requires $\langle |\delta _L(\mathbf {x})|^{2}\rangle ^{1/2} \ll 1$) the first-order correction to δS in a' should be sufficient for adding δL to the density field. Expanding both sides of Equation (2) to first order in (a' − a) and δL,

Equation (4)

$$(a^{\prime} - a)\, \frac{\partial D(a, \Omega_{m})}{\partial a} = \Omega_{m}\, \delta_{L}(\mathbf{x}, a)\, \frac{\partial D(a, \Omega_{m})}{\partial \Omega_{m}}.$$

Solving for a' gives

Equation (5)

$$a^{\prime} = a + \Omega_{m}\, \delta_{L}(\mathbf{x}, a)\, \left(\frac{\partial D}{\partial \Omega_{m}}\right) \left(\frac{\partial D}{\partial a}\right)^{-1},$$

which shows that the perturbed scale factor is roughly proportional to the large-scale density perturbations being added. Using Equation (5) in the Taylor expansion of Equation (3) gives

Equation (6)

$$\delta(\mathbf{x}, a) \approx \delta_{L}(\mathbf{x}, a) + \delta_{S}(\mathbf{x}^{\prime}, a) + \delta_{S}(\mathbf{x}^{\prime}, a)\, \frac{(a^{\prime} - a)}{a}\, \frac{d\ln \delta_{S}}{d\ln a}.$$

This suggests that the addition of large-scale modes can be quickly computed given an estimate of the nonlinear growth rate of the small-modes simulation. With a single snapshot, the growth rate can be estimated from the continuity equation,

Equation (7)

$$\frac{d\ln \delta_{S}}{d\ln a} = -\frac{\nabla \cdot \left[(1+\delta_{S})\, \mathbf{v}\right]}{a\, H(a)\, \delta_{S}},$$

and, using Equation (3),

Equation (8)

So, given particle positions and velocities at scale factor a for a simulation that has no large-scale modes, Equation (6) provides a method to add new large-scale mode realizations when the large-scale density perturbations are small.

According to Equation (5) the range of a' − a is proportional to the range of $\delta _{L}(\mathbf {x}, a)$. Because $\delta _{L}(\mathbf {x},a)$ is Gaussian distributed by construction, we can determine the range of δL from its variance,

Equation (9)

$$\sigma^{2}(\delta_{L}) = \int_{k_{F}}^{k_{\rm thresh}} \frac{dk}{k}\, \Delta^{2}(k)\, W_{\text{top-hat}}^{2}(kR),$$

where kF ≡ 2π/Lbox is the fundamental frequency of the simulation box, Δ2(k) ≡ k3P(k)/(2π2) is the theoretical dimensionless matter power spectrum, and Wtop-hat(kR) is the Fourier transform of a spherical top-hat window of radius R. The scale R is roughly given by either the pixel scale of the gridded density field or the mean interparticle spacing, whichever is larger. For the simulations we use Lbox = 1000 h−1 Mpc, kthresh = 8kF ≈ 0.05 h Mpc−1, and, with 2563 particles, R ∼ 2π/(256kF). Evaluating Equation (9) gives σ ≈ 0.17, which is still less than 0.2 as we required when choosing kthresh. But, as shown in Figure 1, variations in $\delta _{L}(\mathbf {x},a)$ greater than about 2σ lead to |a' − a| > 1, making Equation (6) insufficiently accurate. In our test simulations δL actually varies over ±4.3σ ≈ ±0.87, giving a range of perturbed scale factor 0.26 ≲ a' ≲ 4.4 (shown by the dotted lines in Figure 1).
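Evaluating Equation (9) is a one-dimensional quadrature. The sketch below takes the dimensionless power Δ2(k) as a user-supplied callable (reproducing σ ≈ 0.17 requires the actual linear ΛCDM spectrum, which we do not tabulate here); the function names are ours:

```python
import numpy as np
from scipy.integrate import quad

def tophat_window(x):
    """Fourier transform of a spherical top-hat of radius R, W(kR)."""
    x = np.asarray(x, dtype=float)
    # Series expansion near x = 0 avoids catastrophic cancellation.
    return np.where(x < 1e-3, 1.0 - x**2 / 10.0,
                    3.0 * (np.sin(x) - x * np.cos(x)) / np.maximum(x, 1e-300)**3)

def sigma_delta_L(delta2, k_F, k_thresh, R):
    """rms of the large-scale field delta_L (Equation (9)):
    sigma^2 = int_{k_F}^{k_thresh} dln(k) Delta^2(k) W^2(k R),
    where delta2 is a callable returning the dimensionless power Delta^2(k)."""
    integrand = lambda lnk: float(delta2(np.exp(lnk))
                                  * tophat_window(np.exp(lnk) * R)**2)
    var, _ = quad(integrand, np.log(k_F), np.log(k_thresh))
    return np.sqrt(var)
```

For a constant Δ2 and negligible smoothing (kR ≪ 1) the integral reduces to Δ2 ln(kthresh/kF), a convenient sanity check.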

In Figure 2 we show the maximum value of |a' − a| as a function of the simulation box size while keeping kthresh/kF fixed at 8 and, as in the preceding paragraph, assuming maximal variations in $\delta _{L}(\mathbf {x},a)$ of 4.3σ. For Lbox ∼ 3000 h−1 Mpc, as in the recently completed MXXL simulation, the variation in a' − a is less than 0.2 and the approximate formula in Equation (6) could be sufficient for adding the large-scale modes to the small-modes simulation. To further assess the validity of Equation (6), we show a histogram of the values of the third term in Equation (6), normalized by $\delta _S(\mathbf {x}^{\prime },a)$, on a 5123 grid in our 1 h−1 Gpc small-modes simulation in Figure 3. The distribution of grid cell values is strongly peaked around zero, but has wide tails extending to values of several hundred. This again illustrates that Equation (6) is a poor approximation for a (1 h−1 Gpc)3 volume. However, if we scale the bin values on the lower horizontal axis by the ratio of the maximum perturbed scale factor, max(a' − a), in a (3 h−1 Gpc)3 simulation to the maximum (a' − a) in our simulation, we get the axis on the top of Figure 3. For a (3 h−1 Gpc)3 simulation, 92% of the grid cells of the third term in Equation (6) have values less than one. So, Equation (6) may be a good enough approximation for adding large-scale modes to the MXXL or other simulations with similarly large volumes.


Figure 2. Maximum variation in perturbed scale factor a' as a function of the simulation box size for fixed kthresh/kF = 8 and assuming the maximum fluctuations in $\delta _{L}(\mathbf {x},a)$ are 4.3σ. Top: maximum absolute difference between a' and a = 1 for the growth matching condition for adding large-scale modes to a simulation (Equation (2)). Bottom: maximum a' for the growth matching condition for removing large-scale modes from a simulation (Equation (11)). There is no solution for a' left of the vertical dashed line.


Figure 3. Histogram of the values of $\frac{(a^{\prime }-a)}{a}\,\frac{d\ln \delta _S}{d\ln a}$ on a 5123 grid for our ΛCDM cosmology in a 1 (h−1 Gpc)3 simulation. The tick marks on the top (blue) axis show the projected bin values if we scale the bins on the bottom axis by the ratio of the maximum perturbed scale factor, max(a' − a), in a 3 h−1 Gpc box vs. the maximum perturbed scale factor in our 1 h−1 Gpc box. The vertical dashed lines show the values of ±1 on this axis, where the third term in Equation (6) would be small relative to δS.


2.1.3. Full Recipe

For box sizes smaller than several Gpc on a side, we need an algorithm that accurately implements the growth matching condition of Equation (2) to compute the density $\delta _S(\mathbf {x}^{\prime },a^{\prime })$ in Equation (3). For computational expediency and for later comparison with Takahashi et al. (2009), we have limited our test simulations to Lbox = 1000 h−1 Mpc. Therefore, we have not tested the accuracy of Equation (6). Instead, because our test simulation parameters yield |a' − a| > 1, we apply the model of Equation (3) by adjusting the positions of each particle in the simulation. If $\mathbf {x}_{i}(a)$ is the position of the ith particle at scale-factor a in the small-modes simulation, the model from Equation (3) with modes δL added to the simulation can be rewritten for each particle position as

Equation (10)

$$\mathbf{x}_{i} \;\rightarrow\; \mathbf{x}_{i}(a_{i}^{\prime}) + \mathbf{d}_{L}\big(\mathbf{x}_{i}(a_{i}^{\prime}),\, a^{*}\big),$$

where a* is the scale factor of the target time where the full density field (with new large-scale modes) is desired. That is, we first apply the same Zel'dovich displacement to the particle positions in all the snapshots (evaluated at the unperturbed positions in each snapshot). Then, we interpolate between these modified snapshots to find the particle position at perturbed time $a_{i}^{\prime } \equiv a^{\prime }(\mathbf {x}_{i} + \mathbf {d}(\delta _{L}(\mathbf {x}_{i}(a)), a^{*}))$. To find the particle positions at all values of a' we must save simulation snapshots at times covering the range of a' values and spaced sufficiently closely in a so that the interpolation of positions is accurate. This requires running the simulation far into the future (a > 4 for our test simulations) and saving of order tens of snapshots, which creates both an extra computational burden and a disk-storage burden relative to the approximation in Equation (6).

The steps of our full algorithm for adding large-scale modes to the small-scale density are then

  • 1.  
    choose a threshold scale kthresh defining the boundary between large- and small-scale modes and a target time a* where the resampled density is desired;
  • 2.  
    calculate the range of perturbed scale factor a' defined by Equation (2) about a* where the simulation snapshots will be required;
  • 3.  
    run an N-body simulation in a periodic volume with nonzero Fourier modes in the initial conditions only for $|\vec{k}| > k_{\rm thresh}$ and save snapshots over the required range of a' that are spaced sufficiently close together to allow for accurate interpolation of particle positions;
  • 4.  
    calculate a new realization of large-scale modes δL(a*) on a grid;
  • 5.  
    calculate the Zel'dovich displacements defined in Equation (1) from δL;
  • 6.  
    apply the Zel'dovich displacements to each snapshot of the small-modes N-body simulation; and
  • 7.  
    move each particle in the small-modes simulation to its location at time $a^{\prime }(\mathbf {x}(a)+\mathbf {d}_L)$ as defined by Equation (2), interpolating between snapshots when necessary.

The resulting particle positions give a close approximation to the particle positions that would be obtained by running an N-body simulation with the Fourier modes in δL present in the initial conditions (with the amplitudes suitably scaled according to the linear growth function).
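Steps 6 and 7 can be sketched as follows. This is a deliberately simplified, per-particle linear interpolation between saved snapshots; a production version would also handle periodic wrapping and interpolate the displacement field to the particle positions. All names are ours:

```python
import numpy as np

def resample_positions(snap_times, snap_positions, zeldovich_disp, a_prime):
    """Steps 6-7 of the recipe: shift every saved snapshot by the
    Zel'dovich displacement, then read each particle's position off at
    its own perturbed time a' by linear interpolation between snapshots.

    snap_times:     (n_snap,) increasing scale factors of the saved snapshots
    snap_positions: (n_snap, n_part, 3) particle positions per snapshot
    zeldovich_disp: (n_part, 3) displacement for each particle
    a_prime:        (n_part,) perturbed scale factor per particle
    """
    shifted = snap_positions + zeldovich_disp[None, :, :]   # step 6
    n_part = a_prime.shape[0]
    out = np.empty((n_part, 3))
    for i in range(n_part):                                 # step 7
        for j in range(3):
            out[i, j] = np.interp(a_prime[i], snap_times, shifted[:, i, j])
    return out
```

The interpolation is accurate only when the snapshots are closely spaced in ln(a), which is why the recipe requires saving of order tens of snapshots.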

The two operations of adding Zel'dovich displacements and perturbing the time where the particle positions are evaluated do not commute when the density field has evolved nonlinearly from the initial conditions. To decide on how to order these operations we imagine an approximation to an N-body simulation where the initial conditions are first evolved by adding Zel'dovich displacements to the particles and then the density is scaled by the linear growth function. Reversing these operations, with the Zel'dovich move applied last, would be less analogous to the way the particle positions actually evolve in a simulation. In this latter scenario, the particles would first gravitationally collapse at rates determined by the local mean matter density and then at late times would undergo displacements to expand or compress regions in under- or overdense volumes. There is nothing physical about this ordering of operations. We have confirmed with our test simulations that the mode resampling does not work well unless the Zel'dovich move is the first step of the mode-resampling algorithm.

2.2. Algorithm Implementation

The algorithm in Section 2.1 describes how to add large-scale Fourier modes to a simulation that had zero large-scale power in the initial conditions. Ideally we would like to first remove the existing large-scale modes from a previously run simulation and then apply the algorithm of Section 2.1. We will discuss why this is challenging in Section 2.3, but first we describe details of the implementation of the mode-addition algorithm.

To generate a simulation without large-scale power (what we call a "small-modes" simulation), we generated Gaussian Fourier mode realizations on a grid, set all modes with $|\vec{k}| < k_{\rm thresh}$ to zero, and then performed the inverse Fourier transform to get the initial density field that is used for the first particle displacement step. We then evolve the simulation normally, but save 43 snapshots equally spaced in ln (a) over the range 0.1 ≲ a ≲ 5.4 so the particle positions at perturbed times can be accurately determined by interpolating between snapshots. For testing purposes we ran another simulation using the same initial conditions but before the large-scale modes were set to zero (called an "all-modes" simulation). The particle positions at a = 1 in the all-modes simulation give our benchmark for testing the reconstruction of the density field when the large-scale modes that were deleted from the initial conditions are added back into the small-modes simulation at a = 1. Note that all the phases in the initial conditions were identical for both the all-modes and small-modes simulations. Unless otherwise specified, all our simulations used a box 1 h−1 Gpc on a side with 2563 particles and were evolved using a lean version of the GADGET-2 code (Springel et al. 2001; Springel 2005) from Gaussian initial conditions at redshift 20.
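The construction of the small-modes initial density can be sketched with FFTs. This is schematic: the normalization convention for the mode amplitudes is one common choice, and the Hermitian symmetry of the kz = 0 plane is handled only implicitly by the inverse real FFT; the function names are ours:

```python
import numpy as np

def small_modes_density(n, box_size, power, k_thresh, seed=0):
    """Gaussian overdensity field with all modes below k_thresh zeroed,
    mimicking the "small-modes" initial conditions described in the text.
    `power` is a callable P(k); the normalization is schematic.
    """
    rng = np.random.default_rng(seed)
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    k3 = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1, k1, k3, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    safe_k = np.where(kmag > 0.0, kmag, 1.0)        # avoid evaluating P(0)
    amp = np.where(kmag > 0.0, np.sqrt(power(safe_k)), 0.0)
    white = rng.normal(size=kmag.shape) + 1j * rng.normal(size=kmag.shape)
    modes = white * amp / np.sqrt(2.0)
    modes[kmag < k_thresh] = 0.0                    # delete the large-scale modes
    return np.fft.irfftn(modes, s=(n, n, n))
```

Because the deleted region is symmetric under k → −k, the reality projection performed by the inverse real FFT keeps the large-scale modes exactly zero.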

Because gravitational evolution couples small-scale Fourier modes to the largest modes in the simulation box, the nonlinear perturbation growth rate in the small-modes simulation (i.e., with large-scale modes missing) is smaller than that in the all-modes simulation, as illustrated with the power spectrum comparison in Figure 4. The solid lines show the power spectra of select snapshots in the all-modes simulation while the dashed lines show power spectra of the small-modes simulation at the same times (recall that the power spectra are equal in the initial conditions). At late times, the all-modes power spectrum has a significantly larger amplitude on small scales than the small-modes power spectrum. A first test of our mode-addition algorithm will be to correct for this deficit of nonlinear power.


Figure 4. Matter power spectra for several snapshots of the simulations with scale factor increasing from bottom to top. The solid lines are for the all-modes simulation, dashed lines are for the small-modes simulation and dotted lines show the linear theory prediction. Notice the divergence of the nonlinear all-modes and small-modes spectra as the simulation proceeds.


In order for the growth-matching condition in Equation (2) to provide a useful means for correcting for missing large-scale power, the errors in matching the growth rate by perturbing the matter density must be much smaller than the change in the growth rate due to the omission of large-scale power. We tested this condition explicitly by running two simulations with different values of Ωm and we plot their power spectra at matched linear growth in Figure 5. The power spectra in the two cosmologies match to <0.1% for wavenumbers k < 0.2 h Mpc−1, which is much less variation than between the all-modes and small-modes power spectra. This is why matching the linear growth is a good way to mimic the change in the local matter density. We note that Zheng et al. (2002) performed similar tests and showed that when the linear theory power spectrum is fixed (as we have assumed), several clustering statistics besides just the nonlinear power spectrum can be easily related between models with different Ωm and σ8.


Figure 5. Power spectra relative to the all-modes spectra from simulations with different values of Ωm plotted at matched values of the linear growth. For comparison, the power spectra for our fiducial cosmology with and without large-scale modes are also shown. The variation in the spectra with different Ωm is only apparent for k ≳ 0.5 h Mpc−1, and even there it is much smaller than the deficit in the spectra with large-scale modes removed.


Next, we tested the various steps of the mode-addition algorithm from Section 2.1 by adding the large-scale modes from the all-modes initial conditions back into the small-modes simulation at a = 1 (after scaling the large-scale modes by the linear growth function) and comparing the resulting particle positions with those in the all-modes simulation. Figure 6 illustrates our comparison for a section of the 1 (h−1 Gpc)3 simulation volume. The black points show the particle positions in the all-modes simulation while red shows the positions in the small-modes simulation. It is readily apparent to the eye that both the time perturbation and Zel'dovich moves are necessary to obtain a good match to the particle positions in the all-modes simulation. The Zel'dovich move alone does a better job matching the particle positions in the largest overdensities than the time perturbation alone, but leaves significant position offsets compared to the final algorithm combining both perturbations.


Figure 6. Particle positions in a section of our 1 (h−1 Gpc)3 simulation box (of width 50 h−1 Mpc). The red points denote the particle positions in a simulation with the large-scale Fourier amplitudes omitted from the initial conditions as described in the text. The black points denote positions in a simulation with all Fourier modes included, but that is otherwise identical to the simulation denoted by the red points. The four panels show various components of our algorithm for replacing the missing large-scale power. The top left panel has no correction applied to the red points, and the top right panel has the red particles shifted to their positions at different times so that the linear growth rate mimics the perturbation to the local matter density from the added large-scale modes. The bottom left panel shows a Zel'dovich move applied to the red points, where the Zel'dovich displacements are derived from the large-scale modes only. Finally, the bottom right panel shows our full algorithm combining the time-perturbation and Zel'dovich displacements. The axis labels are in h−1 Mpc.


2.3. Subtracting Large-scale Power

The algorithm we have demonstrated requires that an N-body simulation be run with initial conditions that have the large-scale Fourier mode amplitudes set to zero and with a relatively large number of late-time snapshots saved that can be used for interpolating particle positions. Ideally, we would like to have an algorithm that can be applied to existing N-body simulations. Such an algorithm would first require a method for subtracting the large-scale modes already present in the simulation. Our procedure for adding large-scale modes should work in reverse but with the new growth-matching condition,

Equation (11)

$$D\big(a^{\prime}(\mathbf{x}),\, \Omega_{m}\left[1+\delta_{L}(\mathbf{x},a)\right]\big) = D\big(a,\, \Omega_{m}\big).$$

We arrive at this condition by modeling regions where δL is overdense (underdense) as regions that have evolved without large-scale power but with a larger (smaller) local value of Ωm. We then find the time a' where a simulation with density $\Omega _m(1+\delta _{L}(\mathbf {x},a))$ and no large-scale power would have the same linear growth as the original simulation with density Ωm at time a. However, removing the effect of large-scale underdensities can require evolving those underdense regions far into the future to match the growth on the right-hand side of Equation (11). For ΛCDM, the local linear growth in large underdensities can freeze and will never match the right-hand side in Equation (11) even if evolved infinitely into the future. The growth matching condition in Equation (11) and the problem with underdensities is illustrated in Figure 7. For our fiducial cosmology, Lbox = 1 h−1 Gpc, and kthresh = 8kF, all regions in δL that have a density smaller than the mean density minus ∼2σ will be impossible to correct using the growth matching condition of Equation (11). In the bottom panel of Figure 2, we show the maximum a' derived from Equation (11) as a function of simulation box size assuming kthresh/kF = 8 and that the maximum variations in $\delta _{L}(\mathbf {x},a)$ are 4.3σ. For cubic simulation boxes with sides less than ∼1.8 h−1 Gpc there is no solution for a'. For larger box sizes the maximum a' quickly decreases from ∼6 to very close to one. So for simulation box sizes of several h−1 Gpc, removing large-scale modes may be feasible with minimal extra computational effort. However, again for computational expediency, we have limited our considerations in this paper to our 1 h−1 Gpc simulations and therefore do not consider the removal of large-scale modes further.


Figure 7. Scale factor vs. matter density at matched linear growth for mode subtraction. Left panel: the scale factor vs. the linear growth function normalized to one at the time of the initial conditions (aIC = 1/21) for different values of Ωm over the range [0.01, 0.5]. Each black line represents a different value of Ωm starting with Ωm = 0.01 for the leftmost line and increasing in steps of 0.05. The solid vertical red line shows the value of the normalized growth function at a = 1 and the target mass density Ω0m = 0.238. Right panel: the growth matching condition for removing large-scale modes in Equation (11) amounts to finding the value of the scale factor in the left panel where the red vertical line intersects the black lines for each Ωm value. The line of intersection points is plotted as the black line in the right panel. The vertical dotted lines show the range of Ω0m(1 ± 4.3σ(δL)) (as in Figure 1). The scale factor value matching the upper bound in the perturbation to Ωm is shown by the dotted horizontal line. There is no scale factor value that can match the linear growth at the lower bound of the perturbed Ωm, rendering the mode subtraction impossible for the underdense regions of δL.


3. VALIDATION TESTS

We present additional tests in this section to assess the accuracy of our method for introducing large-scale modes in N-body simulations. We are primarily interested in the matter power spectrum and power spectrum covariance matrix and investigate estimators for these in turn.

3.1. Recovering the Nonlinear Power Spectrum

In Figure 8, we demonstrate the reconstruction of the nonlinear power spectrum from the small-modes simulation. Using the all-modes simulation as our benchmark, we used the Fourier modes from the all-modes initial conditions with $|\vec{k}| < k_{\rm thresh}$ to determine the Zel'dovich moves and perturbed times for adjusting the small-modes simulation (so if our algorithm worked perfectly the power spectra should be perfectly matched). In addition to the ΛCDM simulations described in Section 2.2, we ran analogous all-modes and small-modes simulations for a standard cold dark matter (SCDM) cosmology (i.e., with Ωm = 1). Because the SCDM model has no dark energy, the linear growth rate is completely determined by the matter density, and our growth matching condition in Equation (2) should be an especially good approximation for the effect of large-scale Fourier modes on the gravitational growth rate. Figure 8 shows that the fractional errors in the reconstructed power spectra are indeed slightly smaller for the SCDM simulation, but the errors are much less than 1% for both cosmologies. Due in part to the cloud-in-cell deconvolution (which overcorrects the power on small scales), the fractional errors begin to increase significantly at about 0.5 kNyquist.
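The cloud-in-cell deconvolution mentioned above divides each gridded Fourier amplitude by the CIC assignment window. A minimal sketch of that step (our own illustrative code; `cic_deconvolve` is a hypothetical name, not part of the simulation pipeline):

```python
import numpy as np

def cic_deconvolve(delta_k, n):
    """Divide gridded Fourier amplitudes by the cloud-in-cell window
    W(k) = prod_i sinc^2(k_i / (2 k_Nyquist)), where np.sinc(x) is
    sin(pi x)/(pi x).  delta_k is the n^3 FFT of the gridded density."""
    f = np.fft.fftfreq(n)                    # k_i / (2 k_Nyquist) per axis
    w1d = np.sinc(f) ** 2                    # 1-D CIC window, in (0, 1]
    w = w1d[:, None, None] * w1d[None, :, None] * w1d[None, None, :]
    return delta_k / w
```

Because the per-axis window falls to (2/π)² ≈ 0.41 at the Nyquist frequency, small errors in the window model are strongly amplified there, consistent with the growth of the fractional errors above ∼0.5 kNyquist noted in the text.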


Figure 8. Power spectra relative to the all-modes spectra demonstrating the accuracy with which large-scale power can be added to a simulation without any large-scale modes. Left: SCDM; right: ΛCDM. The large-scale modes (δL) added to the simulations without large-scale power (small modes) have the same phases as in the simulation run with all modes present (shown by the gray horizontal lines). The vertical dashed line marks the Nyquist frequency of the density fields.


Note that the Zel'dovich correction alone, shown by dotted blue lines in Figure 8, does a very poor job of correcting the small-scale power spectrum for the added large-scale modes. This seems to be in contrast to the lower left panel of Figure 6 where the Zel'dovich correction alone appears to correct the particle positions rather well. We now see that the eye notices the agreement of large-scale structures more easily than the disagreement of small-scale structures in the dot plots of Figure 6.

3.2. Power Spectrum Covariance Matrix

The gravitational growth of perturbations increases the small-scale power spectrum variance and induces significant correlations between power spectrum bands (Scoccimarro et al. 1999; Meiksin & White 1999). In this section, we test how well our mode-addition algorithm can reproduce these effects.

By perturbing a single small-modes simulation with independent Gaussian realizations of the large-scale modes, we can generate many realizations of the density field. These are not independent realizations because the small-scale modes are initially the same for each resampling. We use the covariance matrix estimated in Takahashi et al. (2009) from 5000 N-body simulations to test our sample covariance estimates from simulations with resampled large-scale modes.

3.2.1. Sample Covariance Matrix Estimator

First we define the estimator for the covariance matrix that we use for our comparisons.

Let $\delta _{\vec{k}_i}$ denote the amplitude at grid point $\vec{k}_i$ of the discrete Fourier transform of the density field from an N-body simulation measured on a three-dimensional uniform periodic grid. Then, the standard estimator for the power spectrum of the mode amplitudes is obtained by averaging $|\delta _{\vec{k}_i}|^2$ over a spherical shell of constant radius $|\vec{k}_i|$,

$$\hat{P}(k_i) = \frac{1}{N_{k_i}} \sum _{\vec{k}_j \,\in\, {\rm shell}(k_i)} \big |\delta _{\vec{k}_j}\big |^2 \qquad (12)$$

where $N_{k_i}$ is the number of modes that fit in the shell. We follow Takahashi et al. (2009) and set this shell thickness to 0.01 h Mpc−1. If we generate Nr realizations of the density field (e.g., by running Nr identical N-body simulations except with different random number seeds to generate the initial conditions), the mean power spectrum estimator for all the realizations is

$$\bar{P}(k_i) = \frac{1}{N_r} \sum _{n=1}^{N_r} \hat{P}_n(k_i) \qquad (13)$$

The estimator for the covariance of the power spectrum from all the realizations is then

$$\widehat{\rm Cov}(k_i,k_j) = \frac{1}{N_r-1} \sum _{n=1}^{N_r} \big [\hat{P}_n(k_i) - \bar{P}(k_i)\big ]\big [\hat{P}_n(k_j) - \bar{P}(k_j)\big ] \qquad (14)$$
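The estimators in Equations (12)–(14) can be written compactly. The following is a minimal sketch with our own FFT-normalization and shell-binning conventions (the function names are ours):

```python
import numpy as np

def shell_averaged_power(delta, box_size, dk=0.01):
    """Equation (12): average |delta(k)|^2 over shells of thickness dk.
    Returns (shell centers, P_hat, mode counts) for occupied shells with
    k > 0; assumes dk is below the fundamental frequency, so the first
    shell holds only the mean-density (k = 0) mode."""
    n = delta.shape[0]
    volume = box_size**3
    delta_k = np.fft.fftn(delta) * volume / n**3     # continuum normalization
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    power = np.abs(delta_k) ** 2 / volume            # per-mode P(k) estimate
    edges = np.arange(0.0, kmag.max() + dk, dk)
    shell = np.digitize(kmag.ravel(), edges) - 1
    n_modes = np.bincount(shell, minlength=edges.size)
    p_sum = np.bincount(shell, weights=power.ravel(), minlength=edges.size)

    keep = n_modes > 0
    keep[0] = False                                  # drop the k = 0 shell
    return (edges + 0.5 * dk)[keep], p_sum[keep] / n_modes[keep], n_modes[keep]

def power_covariance(pk_realizations):
    """Equations (13) and (14): sample mean and covariance of the band
    powers over N_r realizations (one row of pk_realizations each)."""
    pk = np.asarray(pk_realizations, dtype=float)
    mean_pk = pk.mean(axis=0)                        # Equation (13)
    resid = pk - mean_pk
    cov = resid.T @ resid / (pk.shape[0] - 1)        # Equation (14)
    return mean_pk, cov
```

Stacking the band powers from many realizations into the rows of a single array and calling `power_covariance` reproduces the standard unbiased sample covariance.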

Meiksin & White (1999), Norberg et al. (2009), and Takahashi et al. (2009) found that at least several hundred realizations of the matter density field are required to accurately estimate the sample covariance matrix. This means that hundreds (or thousands in some cases) of N-body simulations must be run with different random number seeds in order to obtain a power spectrum covariance matrix estimate that is sufficiently accurate for cosmological parameter estimation.

3.2.2. Multiple Power Spectra from a Single Simulation Box

Because running hundreds or thousands of N-body simulations is often prohibitively expensive, previous studies have attempted to estimate the power spectrum covariance from a single N-body simulation by applying window functions to the gridded density field to sub-sample the simulation volume. Hamilton et al. (2006) found that because the windows are convolved with the Fourier modes of the periodic simulation box, the resulting power spectrum covariance matrix estimates are significantly biased with respect to the result obtained from an ensemble of periodic simulations. Furthermore, this bias cannot simply be accounted for by deconvolving the window functions from the power spectrum estimates, but is a result of higher order connected correlations entering the covariance matrix estimator, which Hamilton et al. (2006) call "beat-coupling."

Although direct sub-sampling of the simulation volume has been shown not to reproduce the covariance from an ensemble of simulations, the idea that different regions of a large simulation volume provide statistically independent information about the small-scale covariance is sound if we accept the ergodic hypothesis and assume our simulation is big enough. To avoid the convolution in Fourier space imposed by the configuration-space windows of Hamilton et al. (2006), we attempted to sub-sample the Fourier modes that enter the shell-averaged power spectrum estimator in Equation (12). For shells with large wavenumbers there are thousands of modes used to estimate the power spectrum. The nonlinear power spectrum covariance depends on the connected four-point correlations of the Fourier modes, but with thousands of modes on a uniform grid there must be many redundant four-point configurations that could yield independent samples of the (shell-averaged) power spectrum covariance (after appropriate scaling by the number of modes used to estimate the mean power spectrum).

The enumeration and counting of the four-point configurations in each shell is complicated, so we instead divided the surface of each wavenumber shell into uniform area HEALPix (Górski et al. 2005) pixels and then selected only one wavevector within each pixel to obtain a power spectrum estimate. For a given number of pixels covering the spherical shell, the minimum multiplicity of wavevectors in a pixel determines the number of pseudo-independent power spectrum estimates we can derive with this method. That is, if all pixels have at least n wavevectors within their boundaries, then we can generate at most n pseudo-independent realizations of the power spectrum estimates. We defined smaller (and therefore more) pixels for shells of increasing wavevector magnitude. This ensured that the multiplicity of each pixel was greater than one for a wide range in wavevector magnitudes.
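To illustrate the idea, the following sketch groups the wavevectors of one shell into equal-area angular cells and keeps one per cell. We use a simple (cos θ, φ) pixelization as a stand-in for the HEALPix pixels described above, and the function name and return values are our own:

```python
import numpy as np

def subsample_shell(kvecs, n_theta, n_phi, rng):
    """Select one wavevector per equal-area angular cell on a k-shell.
    Cells are uniform in (cos(theta), phi), which gives equal areas on the
    sphere (a stand-in for HEALPix).  kvecs is an (N, 3) array of the
    wavevectors in one shell.  Returns (chosen indices, minimum cell
    multiplicity); the minimum multiplicity bounds how many
    pseudo-independent power spectrum estimates the shell supports."""
    unit = kvecs / np.linalg.norm(kvecs, axis=1, keepdims=True)
    cos_t = np.clip(unit[:, 2], -1.0, 1.0)
    phi = np.mod(np.arctan2(unit[:, 1], unit[:, 0]), 2.0 * np.pi)
    it = np.minimum((0.5 * (cos_t + 1.0) * n_theta).astype(int), n_theta - 1)
    ip = np.minimum((phi / (2.0 * np.pi) * n_phi).astype(int), n_phi - 1)
    cell = it * n_phi + ip
    chosen, mult = [], []
    for c in np.unique(cell):
        members = np.flatnonzero(cell == c)
        mult.append(members.size)
        chosen.append(rng.choice(members))          # one wavevector per cell
    return np.array(chosen), min(mult)
```

Repeating the selection with different random draws, and using more (smaller) cells on shells of larger radius, mimics the procedure in the text.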

We applied this sub-sampling to a small-modes simulation with 500 large-scale mode resamplings. The resulting power spectrum estimates and correlation coefficients of the estimated power spectrum covariance are shown in Figure 9. While the mean power spectrum is successfully recovered with this algorithm, the power spectrum covariance estimator has nearly zero off-diagonal terms, in stark contrast to the result from Takahashi et al. (2009; shown in red, dashed lines) for wavenumbers ≳ 0.1 h Mpc−1. One explanation for this could be that our method of sub-sampling modes in the wavevector shells is not actually selecting representative samples of all the four-point configurations in each shell, which would be fixed by a more precise sub-sampling procedure.


Figure 9. Top: power spectrum estimates from using only a sub-sample of the Fourier modes in each wavevector shell. The power spectra are normalized by the power spectrum of the all-modes simulation at a = 1, so the wiggles at small k indicate the sample variance in the all-modes simulation. Each power spectrum estimate from sub-sampled k-shells is shown by the gray lines in the top panel. The bottom panel of the top figure shows the number of Fourier modes used to estimate the shell-averaged power spectrum. The large jumps are where we changed the number (and size) of the pixels used to sub-sample each k-shell. Bottom: correlation coefficients of the power spectrum estimates (solid, black) compared to the result from Takahashi et al. (2009; dashed, red).


Because pursuing this algorithm further would significantly increase the computational complexity while limiting the range of applications, we instead turned to using multiple small-modes simulations with large-scale mode resampling to further explore the covariance matrix estimates as described in the following section.

3.2.3. Covariance Matrix Estimates from Mode Resampling

The power spectrum variance and correlation coefficients estimated from 500 resamplings of the large-scale modes in a single small-modes simulation are shown by the blue squares in Figure 10. Note that we use the same binning in wavenumber as Takahashi et al. (2009) in all the cases shown in order to make an accurate comparison. While the small-scale variance estimate is boosted beyond the Gaussian prediction, it is systematically biased low relative to the results of Takahashi et al. (2009). At intermediate wavenumbers just larger than kthresh, the estimated variance is only ∼20% of the true value. This is not too surprising because it is at these scales where our algorithm for adding large-scale modes is expected to be most inaccurate. This is because our approximation that longer wavelength modes simply change the effective local matter density must be wrong when the longer wavelengths are nearly equal to the smaller wavelengths to be perturbed. The modes both just larger and smaller than kthresh may contribute to coherent structures in the mass density distribution so that resampling only some of the Fourier modes in these structures would severely underestimate the true variance on these scales. Also, we should note that the modes with wavenumbers just smaller than kthresh are likely not strictly Gaussian distributed in the all-modes simulation, so that their true distribution and coupling to small modes is not properly represented.


Figure 10. Left: power spectrum variance normalized to that for a Gaussian density field with the same linear power. Right: select correlation coefficients of the power spectrum covariance. Each panel corresponds to a row of the matrix of correlation coefficients, with the row labeled by the annotated k value in each panel (which is also where the correlation coefficients are all equal to one in each panel). The vertical dashed lines show the value of kthresh ≡ 8kF. The colors and point styles are the same for both panels. The black circles are values from Takahashi et al. (2009). The red diamonds are derived from the sample covariance estimate from 500 resamplings of each of 20 small-modes simulations. The blue squares are the estimates using 500 resamplings of only 1 small-modes simulation. Finally, the green ×'s are the estimates when using 20 simulations without any resamplings (i.e., the sample covariance estimate that would be derived without knowledge of the algorithm in this paper).


The right panel in Figure 10 shows four rows of the matrix of estimated correlation coefficients of the power spectrum. It is significant that the mode resampling generates nonzero correlation coefficients between the large and small scales (blue squares; see especially the lower panels), which are identically zero in the unperturbed small-modes simulation. However, the correlation coefficients are again biased low by multiplicative factors of two or more.

We have checked that the biases in the estimated power spectrum variance and correlation coefficients do not change when using more than 500 large-scale mode realizations. Instead we speculate that this bias is real and indicates that (at least for a fixed kthresh) the mode-resampling algorithm generates realizations of the matter density from only a subset of the full nonlinear density probability distribution. This makes sense intuitively if we realize that the mode resampling as implemented here can never generate a new set of halos in the simulation box, but only moves and merges or de-merges the halos already present in the small-modes simulation. Several previous investigations have shown that the phase correlations in the Fourier modes of the density field in a dark matter simulation are dominated by the masses and positions of the most massive halos (Watts et al. 2003; Neyrinck & Szapudi 2007). It makes sense then that the power spectrum variance on small scales would also be quite sensitive to the particular realization of the most massive halos. However, with a larger simulation volume the covariance estimates might be less dependent on the particular realization of the massive halos because many different halo configurations would be present.

If our mode-resampling algorithm is simply sampling a subset of the density probability distribution, then the bias in the power spectrum variance and correlation coefficients should decrease if we use more than one simulation. To check this hypothesis we ran 19 more small-modes simulations (for a total of 20) with different random number seeds in the initial conditions and added 500 resamplings of the large-scale modes to each simulation (for a total of 10,000 realizations of the density field). The sample covariance estimates improve considerably as shown by the red diamonds in Figure 10. The biases in both the variance and the correlation coefficients are much smaller over all scales. In addition, the extremely low dip in the variance estimates at intermediate scales when using only one simulation has disappeared entirely with 20 simulations.

Finally, we show an estimate of the covariance matrix using 20 simulations without any resampling of large-scale modes with the green crosses in Figure 10. With such a small number of simulations the statistical noise dominates in the estimator. Comparing with the estimator in 500 resamplings of each of the N-body simulations shows how much our mode-addition algorithm reduces the statistical errors.

3.2.4. Convergence Rate of Covariance Estimates

Although with 20 small-modes simulations the bias in the power spectrum covariance estimate is much smaller than with only one simulation, a small bias is still noticeable in Figure 10. In Figure 11, we attempt to quantify the improvement in the covariance matrix estimates as more small-modes simulations are added by plotting the rms dispersion in elements of the power spectrum variance relative to the result from Takahashi et al. (2009). We show the distribution in the variance estimates for wavenumber bins with centers in 0.295 < k/(h Mpc−1) < 0.395 with the boxes in the left panel of Figure 11. The horizontal lines in the centers of the boxes denote the median of the variance estimates and the boxes' tops and bottoms denote the first and third quartiles. We have selected only high-k modes to plot in Figure 11 because the variance estimate in this range has a roughly constant bias as seen in Figure 10 and we are primarily interested in the accuracy of the covariance estimates on small scales where the deviation from the Gaussian model is most severe. We show how the normalization of the fit to the covariance dispersion changes with the number of mode resamplings per N-body simulation in the right panel of Figure 11. The reduction in the number of N-body simulations needed to give a certain error in the estimated covariance flattens around 100 resamplings per simulation.


Figure 11. Left: dispersion in components of the power spectrum covariance matrix estimates as functions of the number of N-body simulations used to estimate the covariance. The dispersion is measured relative to the result from Takahashi et al. (2009) using 5000 simulations. Each N-body simulation has 500 resamplings of the large-scale modes, so a point on the plot has (number of N-body simulations) × 500 total power spectra used to estimate the covariance. The boxes indicate the medians (central black lines) and quartiles (tops and bottoms of the boxes) for the dispersion of the diagonal covariance elements within the wavenumber bins 0.295 ⩽ k/(h Mpc−1) ⩽ 0.395. The dashed black line indicates the fit to the dispersion found in Takahashi et al. (2009). The red line is a fit to the medians of our dispersion measurements. Right: normalization of the fit to $\sigma ^2_{\rm cov}$ vs. $N_r$ as a function of the number of mode resamplings per N-body simulation. That is, we plot the fit parameter $A \equiv 2/(\sigma ^2_{\rm cov} N_r^b)$ just as for the red solid line in the left panel, but use between 1 and 500 mode resamplings per N-body simulation. (We fit a different b parameter for each value on the abscissa, but do not show these values.) The error bars denote the 95% confidence intervals on the nonlinear least-squares parameter estimate. The dashed gray line shows the expected scaling if each mode resampling was equivalent to running a new N-body simulation.


Takahashi et al. (2009) performed a similar analysis and found that the dispersion in their sample variance estimators was well fit by $\sigma ^2_{\rm cov} = 2/N_r$, where Nr is the number of N-body simulations (or realizations) used to compute the estimator (shown as the dashed black line in Figure 11). We find a nonlinear least-squares fit to our variance estimates of $\sigma ^2_{\rm cov} = 2/(8.5\, N_r^{0.8})$, which is shown by the solid red line in Figure 11. From this we conclude that the sample variance estimator using resampled large-scale modes can achieve similar accuracy as the standard sample variance estimator but with ∼8 times fewer N-body simulations. However, it is difficult to extrapolate this convergence rate far beyond the 20 simulations we have actually performed because small changes in our choice of wavenumber bins or in our least-squares fit can cause large extrapolation errors.
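The quoted fit has the form $\sigma ^2_{\rm cov} = 2/(A N_r^b)$. As an illustration only, with noiseless mock dispersions generated from the quoted best-fit values standing in for the real measurements, the nonlinear least-squares fit can be done with scipy:

```python
import numpy as np
from scipy.optimize import curve_fit

def dispersion_model(n_r, a, b):
    """sigma^2_cov = 2 / (A * N_r^b); A = b = 1 recovers the 2/N_r scaling
    found by Takahashi et al. (2009)."""
    return 2.0 / (a * n_r**b)

# Hypothetical dispersion measurements: noiseless mock data built from the
# best-fit values quoted in the text (A = 8.5, b = 0.8), used only to
# illustrate the fitting step itself.
n_sims = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
sigma2_cov = 2.0 / (8.5 * n_sims**0.8)

(a_fit, b_fit), pcov = curve_fit(dispersion_model, n_sims, sigma2_cov,
                                 p0=(1.0, 1.0))
print(a_fit, b_fit)
```

The diagonal of `pcov` gives the parameter variances from which confidence intervals like those in the right panel of Figure 11 can be formed.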

4. DISCUSSION AND CONCLUSIONS

We have demonstrated an algorithm to resample the large-scale Fourier modes of the density in an N-body simulation that successfully reproduces the nonlinear power spectrum and significantly reduces the number of simulations required to estimate the power spectrum covariance matrix. We expect our algorithm to aid the calculation of uncertainties for observational probes of the matter power spectrum such as weak lensing and galaxy clustering. Because we can quickly provide multiple realizations of the density field, it is possible to estimate uncertainties after applying appropriate survey windows to the density, which can have significant impacts on the estimated power spectrum covariances. This could be a distinct advantage over approaches that either precompute the power spectrum covariance without considering the survey window (Semboloni et al. 2007; Takahashi et al. 2009; Sato et al. 2009), or use approximate or analytic methods that may not include the survey windows correctly (e.g., Crocce et al. 2011). We have no reason to believe the extension to redshift space will pose any particular challenges. However, note that we have not yet tested the perturbation of the particle velocities so our results are limited to real-space statistics for now.

Our algorithm uses both Zel'dovich displacements as well as time perturbations to match the effective local linear growth in under- or overdense regions. It is intriguing that neither of these operations alone is sufficient to reconstruct even the nonlinear power spectrum. Evaluating the time perturbation at the particle positions prior to the Zel'dovich move gives a bad result, indicating that the early-time movement of the particles is significant in determining the later growth of structures. Any extensions of our algorithm will likely have to consider both of these components carefully.

Because applying the time perturbation to remove large-scale modes from an existing simulation generally requires many simulation snapshots far into the future, we found it onerous (or even impossible if resampling too many modes in a box that is too small) to use existing simulations for mode resampling. However, if the simulation volume is at least several Gpc on a side, then according to Figure 7 it may be possible to subtract a limited number of large-scale modes with a feasible amount of computation. For simulation volumes smaller than many Gpc on a side, we advocate running a new simulation with large-scale modes removed in the initial conditions, which can then have new large-scale modes added directly in later snapshots using our algorithm. Note that even for adding new large-scale modes the time perturbation requires many snapshots closely spaced in scale factor in both the past and future of the desired time (but the simulation does not have to be run as far into the future as is needed for removing large-scale modes). This means that application of our algorithm for, e.g., parameter estimation from a galaxy survey, will require special-purpose N-body simulations that will save significant computation time for the error analysis.

Our algorithm has other applications such as including scaling of halo masses and concentrations so that mock galaxy catalogs could be constructed from each of the resampled density fields. In our view, this is the only viable method to obtain robust uncertainties for estimating cosmological parameters from galaxy clustering and baryon acoustic oscillations. Our algorithm may even provide a feasible method to go beyond the work of Sefusatti et al. (2006) for estimating the covariance of higher-order correlations. For parameter estimation with weak lensing, our resampled density fields could be directly input to existing ray-tracing pipelines. However, one would need resampled densities for each of the lens planes (which could extend to z ∼ 3 for upcoming survey requirements). This would require saving even more closely spaced snapshots of the small-modes simulation(s).

The sample covariance matrix estimated from a large number of resamplings of a single simulation has nonzero correlations between large and small scales that are introduced by our mode-addition algorithm. But the covariance elements are biased low relative to the benchmark result from Takahashi et al. (2009), which used 5000 N-body simulations. A possible explanation for this bias is that our method of resampling large-scale modes samples only a subset of the full probability distribution for the nonlinear density field. We have shown that the bias decreases monotonically as a function of the number of N-body simulations used to estimate the covariance, which is consistent with this explanation. If our mode-resampling algorithm were perfect at adding large-scale fluctuations, the bias in the covariance that we find would indicate that the distribution of the small-scale nonlinear density field is not entirely determined by the coupling with large-scale modes. Rather, the phases of the small-scale Fourier modes in the initial conditions would be important as well, which contradicts the conclusion of Little et al. (1991) who found little impact on the final nonlinear density when the small-scale phases were randomized in the initial conditions. This could be because the eye notices mostly the large-scale structures in a dot plot of the particle positions but there can be significant small-scale deviations that are less noticeable (as shown by the "Zel'dovich only" lines in Figure 8 versus the "Zel'dovich only" bottom left panel of Figure 6).

One area for future investigation is to study the dependence of the bias in the power spectrum sample covariance (or other measures of the distribution of the nonlinear density) on kthresh. This would quantify the relative importance of different large-scale modes on the nonlinear density distribution. This could be a promising way to better understand the "loss" of information about cosmology in the matter power spectrum on translinear scales (Rimes & Hamilton 2006; Neyrinck et al. 2006; Neyrinck & Szapudi 2007). One could also quantify on which scales (if any) the density in a large simulation becomes ergodic (so that the distribution of the density in sub-volumes of the simulation is equivalent to the distribution of the density field in an ensemble of large simulations). The fact that our small-scale covariance estimates have such a large bias for kthresh ∼ 0.5 h Mpc−1 (and in a single simulation) suggests that assuming ergodicity on these scales is a fallacy when defining estimators of (at least some) summary statistics of the density.

Another application of our algorithm could be for tiling smaller simulation volumes to obtain the (properly correlated) density field in volumes that are too large to simulate, as originally proposed by Cole (1997). This would probably be most effective if we run several of our "small-modes" simulations and then tile them together by resampling the modes both larger and smaller than the fundamental frequency of the simulation volume. A related application is rerunning resimulation studies (that take a sub-volume of a high-resolution N-body simulation and add additional physics) with different large-scale mode realizations. This could be a relatively fast way to obtain different realizations of the resimulations with only one high-resolution N-body simulation at hand.

Finally, we hope to apply our algorithm for obtaining estimates of both the mean power spectrum and its covariance as needed in simulation emulator frameworks (Heitmann et al. 2009; Schneider et al. 2011). Because we can estimate the power spectrum covariance with an easily achievable number of simulations, it will now be feasible to build an emulator for the cosmological-parameter dependence of the covariance (Schneider et al. 2008). Note that this paper is complementary to Angulo & White (2010) who described how to rescale the outputs of a single N-body simulation to mimic the outputs with a different cosmology but did not consider the time perturbations we use here. In combination, our algorithm and that of Angulo & White (2010) may serve to drastically reduce the number of simulations needed to construct simulation emulators for cosmological analysis.

We thank Adrian Jenkins for extensive technical advice on GADGET-2 and setup of initial conditions, David Weinberg for pointing us to his earlier related work, Alex Szalay for explanations of the growth of correlations in the Fourier phases of the density field, Yanchuan Cai for advice on applying perturbation theory to (an ultimately failed attempt to) add large-scale modes to our simulations, and Mark Neyrinck, Bhuvnesh Jain, and Ravi Sheth for useful conversations. S.M.C. acknowledges the support of a Leverhulme Research Fellowship. Some of the calculations for this paper were performed on the ICC Cosmology Machine, which is part of the DiRAC Facility jointly funded by STFC, the Large Facilities Capital Fund of BIS, and Durham University. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Footnotes

  • Note that for simulation volumes ≫1 Gpc on a side relativistic corrections to the continuity equation could be important. These arise from the way that the large-scale potential perturbations modify the geodesic equations.

  • The MXXL simulation (R. E. Angulo et al. 2011, in preparation) was run by the Virgo Consortium (http://www.virgo.dur.ac.uk) in the spring of 2011. It is 3 h−1 Gpc on a side with 3 × 10¹¹ particles with masses of 6.3 × 10⁹ M☉.
