Parallelized Solution Method of the Three-dimensional Gravitational Potential on the Yin–Yang Grid


Published 2018 August 20 © 2018. The American Astronomical Society. All rights reserved.
Citation: Marius Almanstötter et al. 2018 ApJ 863 142. DOI: 10.3847/1538-4357/aad33a


Abstract

We present a new method for solving the 3D gravitational potential of a density field on the Yin–Yang grid. Our algorithm is based on a multipole decomposition and is completely symmetric with respect to the two Yin–Yang grid patches. It is particularly efficient on distributed-memory machines with a large number of compute tasks, because the amount of data being explicitly communicated is minimized. All operations are performed on the original grid without the need for interpolating data onto an auxiliary spherical mesh.


1. Introduction

Solving for Newtonian gravity in 3D is relevant in a large number of astrophysical and geophysical problems. For instance, the propagation of seismic waves on Earth (Komatitsch & Tromp 2002), the formation of planetesimals in protoplanetary disks (Simon et al. 2016), shock propagation in protostellar clouds (Falle et al. 2017), and the formation of the Earth's core (Mondal & Korenaga 2018) are 3D situations that require self-gravity to be taken into account. One example, and the focus of our interest, is the case of core-collapse supernovae.

The explosion mechanism of core-collapse supernovae is one of the long-standing riddles in stellar astrophysics. Thanks to growing supercomputing power, 3D neutrino-hydrodynamics simulations can now be performed to study the physical processes responsible for the onset of the explosion with highly optimized codes on distributed-memory architectures (Takiwaki et al. 2012, 2014; Lentz et al. 2015; Melson et al. 2015a, 2015b; Roberts et al. 2016; Müller et al. 2017; Ott et al. 2018; Summa et al. 2018). The Garching group uses the Prometheus-Vertex package (Rampp & Janka 2002), which extends the finite-volume hydrodynamics module Prometheus (Fryxell et al. 1989) with a state-of-the-art neutrino transport and interaction treatment. Its latest code version applies the Yin–Yang grid (Kageyama & Sato 2004)—a composite spherical mesh—to discretize the spatial domain.

Until now, self-gravity of the stellar plasma in 3D simulations has been treated in spherical symmetry on an averaged density profile in Prometheus, because the gravitational potential is dominated by the spherical proto-neutron star in the center. As stellar collapse to neutron stars is only a mildly relativistic problem, the Garching group applies Newtonian hydrodynamics but uses a correction of the monopole of the gravitational potential (Marek et al. 2006), which has turned out to yield results that are well compatible with fully relativistic calculations (Liebendörfer et al. 2005; Müller et al. 2010). However, as 3D core-collapse supernova simulations become more elaborate by taking more and more physical aspects into account, it is highly desirable to include a realistic 3D gravitational potential to treat large-scale asymmetries correctly. This holds true, in particular, in cases where the collapsing star develops a global deformation, e.g., due to centrifugal effects in the case of rapid rotation.

Müller & Chan (2018) recently presented a method for solving Poisson's equation on spherical polar grids using 3D Fast Fourier transforms. Although their algorithm yields an accurate solution of the gravitational potential even for highly aspherical density configurations, it is currently not available for the Yin–Yang grid, and its parallel efficiency needs to be improved to serve our purposes.

In this work, we present a method for efficiently computing the gravitational potential on the Yin–Yang grid based on the gravity solver of Müller & Steinmetz (1995). Wongwathanarat et al. (2010) also applied a 3D gravity solver to Yin–Yang data; however, their code was not parallelized for distributed-memory systems, and it relied on mapping the data onto an auxiliary spherical polar grid. With the approach presented here, we compute the gravitational potential on the Yin–Yang grid directly.

In Section 2, we will briefly summarize the algorithm developed by Müller & Steinmetz (1995) for 3D spherical grids. A method for its efficient parallelization will be discussed in Section 3, followed by a detailed explanation of the modifications for the Yin–Yang grid in Section 4. Test calculations will be presented in Section 5.

2. Solving Poisson's Equation on a Spherical Grid

In this section, we briefly summarize the procedure for computing the 3D gravitational potential on a spherical polar grid as shown by Müller & Steinmetz (1995).

Given the density distribution, $\varrho ({\boldsymbol{r}})$, the gravitational potential at the location ${\boldsymbol{r}}$ is determined by solving Poisson's equation, which can be expressed in its integral form as

Equation (1)

where G is the gravitational constant, and V denotes the whole computational domain. In the following, we employ spherical coordinates and express spatial vectors as ${\boldsymbol{r}} = (r,\vartheta,\varphi)$. A decomposition of $| {\boldsymbol{r}}-{\boldsymbol{r}}^{\prime} {| }^{-1}$ into spherical harmonics $Y_{\ell m}$ yields (Müller & Steinmetz 1995, Equation (8))

Equation (2)

In the latter equation, $C_{\ell m}$ and $D_{\ell m}$ are defined as

Equation (3)

Equation (4)

where $\overline{{Y}_{{\ell }m}}$ are the complex conjugates of the spherical harmonics. After inserting their definition and rearranging the terms, Müller & Steinmetz (1995) wrote the gravitational potential as a sum of two contributions: one for the gravitational potential inside a sphere of radius r, ${{\rm{\Phi }}}_{\mathrm{in}}^{({\ell }m)}({\boldsymbol{r}})$, and a second for the potential outside, ${{\rm{\Phi }}}_{\mathrm{out}}^{({\ell }m)}({\boldsymbol{r}})$,

Equation (5)

where ${P}_{{\ell }}^{m}$ are the associated Legendre polynomials. The integrals for these inner and outer contributions can be expressed as

Equation (6)

and

Equation (7)

The normalization factor in Equation (5) is given by

Equation (8)

with

Equation (9)

To discretize these equations, we divide the spatial domain into ${n}_{R}\times {n}_{\vartheta }\times {n}_{\varphi }$ grid cells, each spanning from $({r}_{i}^{-},{\vartheta }_{j}^{-},{\varphi }_{k}^{-})$ to $({r}_{i}^{+},{\vartheta }_{j}^{+},{\varphi }_{k}^{+})$ with $i=1,\,\ldots ,\,{n}_{R}$, $j=1,\,\ldots ,\,{n}_{\vartheta }$, and $k=1,\,\ldots ,\,{n}_{\varphi }$. The finite-volume method used in the Prometheus code assumes that, in each cell, the density is given as a cell average, i.e.,

Equation (10)

for ${r}_{i}^{-}\leqslant r\leqslant {r}_{i}^{+}$, ${\vartheta }_{j}^{-}\leqslant \vartheta \leqslant {\vartheta }_{j}^{+}$, and ${\varphi }_{k}^{-}\leqslant \varphi \leqslant {\varphi }_{k}^{+}$. This assumption simplifies Equations (6) and (7), which can then be written as

Equation (11)

Equation (12)

with

Equation (13)

In our chosen discretization, the gravitational potential is computed at the cell interfaces. The radial index, $n_r$, introduced above is equal to the cell index, $i$, if $r={r}_{i}^{+}$. The two integrals ${{ \mathcal C }}_{k}^{(m)}$ and ${{ \mathcal S }}_{k}^{(m)}$ can be evaluated analytically:

Equation (14)

and

Equation (15)
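For illustration, these closed forms can be cross-checked with a few lines of Python. The sketch below assumes the conventional definitions $C_k^{(m)} = \int_{\varphi_k^-}^{\varphi_k^+} \cos(m\varphi')\,{\rm d}\varphi'$ and $S_k^{(m)} = \int_{\varphi_k^-}^{\varphi_k^+} \sin(m\varphi')\,{\rm d}\varphi'$; since the displayed Equations (13)–(15) are not reproduced here, it is a sketch under these assumptions rather than a transcription of our implementation.

```python
import numpy as np
from scipy.integrate import quad

def azimuthal_integrals(m, phi_lo, phi_hi):
    """Closed-form cell integrals of cos(m*phi) and sin(m*phi), assuming
    C_k^(m) = int cos(m phi) dphi and S_k^(m) = int sin(m phi) dphi."""
    if m == 0:
        return phi_hi - phi_lo, 0.0
    c = (np.sin(m * phi_hi) - np.sin(m * phi_lo)) / m
    s = (np.cos(m * phi_lo) - np.cos(m * phi_hi)) / m
    return c, s

# quick cross-check against numerical quadrature for one phi cell
m, lo, hi = 3, 0.2, 0.9
c_ana, s_ana = azimuthal_integrals(m, lo, hi)
c_num, _ = quad(lambda p: np.cos(m * p), lo, hi)
s_num, _ = quad(lambda p: np.sin(m * p), lo, hi)
assert np.isclose(c_ana, c_num) and np.isclose(s_ana, s_num)
```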

The radial integrals in Equations (11) and (12) can also be computed directly:

Equation (16)

Equation (17)

The remaining integrals in Equations (11) and (12),

Equation (18)

can be evaluated efficiently and analytically, i.e., without numerical integration errors, using recurrence relations (see the Appendix).
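As an independent cross-check of the analytic recurrences, these angular integrals can also be evaluated by numerical quadrature. The following sketch assumes that Equation (18) defines ${{ \mathcal T }}_{j}^{({\ell }m)} = \int_{\vartheta_j^-}^{\vartheta_j^+} P_\ell^m(\cos\vartheta)\,\sin\vartheta\,{\rm d}\vartheta$ and uses SciPy's unnormalized associated Legendre function; it is meant only as a verification tool, not as the method used in the code.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lpmv  # unnormalized associated Legendre function P_l^m(x)

def T_numerical(l, m, theta_lo, theta_hi):
    """Quadrature estimate of int_{theta_lo}^{theta_hi} P_l^m(cos t) sin t dt,
    used here only to cross-check the analytic recurrence relations."""
    # substitute x = cos(t): the integral becomes int_{cos(theta_hi)}^{cos(theta_lo)} P_l^m(x) dx
    val, _ = quad(lambda x: lpmv(m, l, x), np.cos(theta_hi), np.cos(theta_lo))
    return val

# example: l = 2, m = 1 for a 2-degree theta cell near the equator
print(T_numerical(2, 1, np.deg2rad(89.0), np.deg2rad(91.0)))
```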

Solving Equation (5) numerically requires setting an upper bound for the summation over $\ell$, which we denote as ${{\ell }}_{\max }$. We will discuss the choice of ${{\ell }}_{\max }$ in Section 6.

3. Parallelization of the Method

For efficiently computing the gravitational potential on distributed-memory systems, it is important to minimize the amount of data that is explicitly exchanged between compute tasks. The simplest parallel treatment of the aforementioned equations would be to collect the entire density field from all compute tasks, calculate the gravitational potential serially, and send the result back to all tasks. This, however, would require a large amount of data to be communicated very inefficiently. More precisely, roughly ${n}_{R}\times {n}_{\vartheta }\times {n}_{\varphi }\times 2$ floating-point numbers would be sent serially, where the factor of two accounts for the collection of the data and their subsequent redistribution. The serial calculation of the gravitational potential would additionally cause a load imbalance between the tasks. Especially if the number of grid cells is very large, this strategy is therefore impractical.

Here, we present an approach for accurately computing the gravitational potential on a spherical grid whose angular domain is decomposed among a large number of tasks, while each task retains the full radial extent of the grid. This is precisely the setup used in our neutrino-hydrodynamics code, Prometheus-Vertex. If we rewrite Equations (11) and (12) as

Equation (19)

Equation (20)

and define

Equation (21)

Equation (22)

we immediately see that the summation over the angular domain is separated from the remaining calculation. After each compute task has solved Equations (21) and (22) locally, a global summation of ${A}_{{\rm{C}},i}^{({\ell }m)}$ and ${A}_{{\rm{S}},i}^{({\ell }m)}$ is performed. This is the only step where explicit data communication is necessary. Parallel computing libraries provide efficient built-in methods to compute such a sum across $n_{\mathrm{tasks}}$ compute tasks. In the case of the commonly used Message Passing Interface (MPI), this function is MPI_Allreduce, which is usually an implementation of the "recursive doubling" algorithm and requires ${\mathrm{log}}_{2}\,{n}_{\mathrm{tasks}}$ communication steps to compute the global sum (Thakur et al. 2005).
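Our production code is written in Fortran; purely as an illustration of this reduction pattern, a minimal Python/mpi4py sketch could look as follows. Array names and sizes are illustrative assumptions and do not reflect our actual data layout.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

n_R, l_max = 400, 25
n_lm = (l_max + 1) * (l_max + 2) // 2   # number of (l, m) pairs with 0 <= m <= l <= l_max

# Partial sums of Equations (21) and (22) over the angular cells owned by this
# task (filled elsewhere; zeros serve as placeholders in this sketch).
A_C_local = np.zeros((n_R, n_lm))
A_S_local = np.zeros((n_R, n_lm))

A_C = np.empty_like(A_C_local)
A_S = np.empty_like(A_S_local)

# The only explicit communication step: global sums over all compute tasks.
comm.Allreduce(A_C_local, A_C, op=MPI.SUM)
comm.Allreduce(A_S_local, A_S, op=MPI.SUM)
```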

With our method, we thus communicate $2\times {n}_{R}\,\times \tfrac{1}{2}[{({{\ell }}_{\max }+1)}^{2}+({{\ell }}_{\max }+1)]\times {\mathrm{log}}_{2}{n}_{\mathrm{tasks}}$ floating-point numbers serially, where the factor of two accounts for the two sums, ${A}_{{\rm{C}},i}^{({\ell }m)}$ and ${A}_{{\rm{S}},i}^{({\ell }m)}$, and the third factor counts the number of different terms for $0\leqslant {\ell }\leqslant {{\ell }}_{\max }$ and $0\leqslant m\leqslant {\ell }$. The fourth factor is based on the assumption that the recursive doubling algorithm is used implicitly by the parallel computing library, as explained above. Considering a computational mesh with an angular resolution of 2°, our procedure reduces the amount of data being sent as long as ${{\ell }}_{\max }\lesssim 55$. In the core-collapse supernova context, a realistic setting of ${{\ell }}_{\max }$ would be considerably smaller than this threshold (e.g., $\sim 25$), because the gravitational potential is dominated by the proto-neutron star, whose density distribution is spherically symmetric to good approximation.
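The break-even point quoted above can be reproduced with a short script. The task count entering ${\mathrm{log}}_{2}\,{n}_{\mathrm{tasks}}$ is an assumption made only for this illustration; with $n_{\mathrm{tasks}} = 1024$, for example, the crossover lies near ${{\ell }}_{\max }\approx 55$, consistent with the threshold given above.

```python
import numpy as np

# full sphere with 2-degree angular resolution
n_theta, n_phi = 90, 180
n_tasks = 1024                        # assumed task count for this illustration

# floating-point numbers communicated serially, per radial zone:
naive = 2 * n_theta * n_phi           # gather the density, then scatter the potential

def ours(lmax):
    n_lm = (lmax + 1) * (lmax + 2) // 2
    return 2 * n_lm * np.log2(n_tasks)  # two coefficient sets, recursive doubling

l_max = np.arange(1, 81)
print(l_max[ours(l_max) < naive].max())  # ~55 for these assumptions
```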

The global summation of ${A}_{{\rm{C}},i}^{({\ell }m)}$ and ${A}_{{\rm{S}},i}^{({\ell }m)}$ can be dominated by round-off errors, even in double floating-point precision. The reproducibility of the results is not guaranteed, because the summation order depends on the number of compute tasks and on implementation details of the message-passing library used for data communication. To tackle this issue, the "double-double summation" algorithm described by He & Ding (2001) is applied, which treats each number as a pair of the value itself and its error being accumulated during floating-point operations. With this method, we achieve reproducibility of the results up to machine precision.
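The following sketch illustrates the underlying idea with a compensated (Neumaier-style) accumulation built on Knuth's error-free two-sum transformation. It is a simplified stand-in for the double-double summation of He & Ding (2001), not a transcription of their algorithm or of our implementation.

```python
import math
import numpy as np

def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with a + b = s + e exactly."""
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def compensated_sum(values):
    """Carry each partial sum as a (value, error) pair, in the spirit of
    the double-double summation of He & Ding (2001)."""
    hi, lo = 0.0, 0.0
    for x in values:
        hi, e = two_sum(hi, x)
        lo += e
    return hi + lo

rng = np.random.default_rng(0)
vals = rng.normal(scale=1e10, size=100_000)
exact = math.fsum(vals)                   # exactly rounded reference sum
print(sum(vals) - exact)                  # naive sum: typically off by many ulps
print(compensated_sum(vals) - exact)      # compensated sum: at or near machine precision
```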

Generally, only the terms that explicitly depend on the density have to be calculated in each step of a time-dependent simulation. All other terms remain constant and therefore need to be computed only once, as long as the grid coordinates do not change with time.

4. Algorithm and its Parallelization for the Yin–Yang Grid

In its latest version, the Prometheus-Vertex code solves the hydrodynamics on the Yin–Yang grid (Kageyama & Sato 2004; Wongwathanarat et al. 2010), which is a composite mesh consisting of two low-latitude parts of a spherical polar grid. Its main advantage over a spherical polar grid covering the full 4π sphere is that it avoids the grid singularities at the poles and their associated numerical difficulties. We show the geometry of the Yin–Yang grid in Figure 1.

Figure 1. Three-dimensional visualization of the Yin–Yang grid cells on a sphere of constant radius.


Let the coordinates on Yin and Yang be denoted as ($r_{\mathrm{Yin}}$, ${\vartheta }_{\mathrm{Yin}}$, ${\varphi }_{\mathrm{Yin}}$) and ($r_{\mathrm{Yang}}$, ${\vartheta }_{\mathrm{Yang}}$, ${\varphi }_{\mathrm{Yang}}$), respectively, and defined in the intervals

$0 \leqslant r \leqslant r_{\max}$, $\pi/4 - \delta_{\vartheta} \leqslant \vartheta \leqslant 3\pi/4 + \delta_{\vartheta}$, and $-3\pi/4 - \delta_{\varphi} \leqslant \varphi \leqslant 3\pi/4 + \delta_{\varphi}$

on each patch, where $r_{\max}$ is the outer radius of the grid, and ${\delta }_{\vartheta }$ and ${\delta }_{\varphi }$ are small values that control the width of the buffer zones in the Yin–Yang overlap region. Both grid patches consist of ${n}_{R}\times {n}_{\vartheta }\times {n}_{\varphi }$ cells and have identical local coordinate ranges. The angular coordinates of a given point on Yin are transformed into the Yang system according to

Equation (23)

Equation (24)

where the inverse transformation is obtained by exchanging the subscripts.
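For illustration, this transformation can be implemented compactly by passing through Cartesian coordinates. The sketch below assumes the conventional axis permutation $(x, y, z) \rightarrow (-x, z, y)$ between the two patches (Kageyama & Sato 2004); this map is its own inverse, consistent with the symmetry of Equations (23) and (24) noted above.

```python
import numpy as np

def yin_to_yang_angles(theta, phi):
    """Map angular coordinates from one Yin-Yang patch to the other, assuming the
    conventional axis permutation (x, y, z) -> (-x, z, y). Because the map is an
    involution, the same routine also converts Yang coordinates back to Yin."""
    # unit vector of the point in the source patch
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    # axis permutation relating the two patches
    xp, yp, zp = -x, z, y
    theta_p = np.arccos(np.clip(zp, -1.0, 1.0))
    phi_p = np.arctan2(yp, xp)
    return theta_p, phi_p

# the round trip recovers the original angles up to floating-point error
th, ph = np.deg2rad(70.0), np.deg2rad(30.0)
print(yin_to_yang_angles(*yin_to_yang_angles(th, ph)), (th, ph))
```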

The computation of the gravitational potential on the Yin–Yang grid is not trivial, because the two grid patches are rotated against each other and partially overlap. Wongwathanarat et al. (2010) also used a version of Prometheus with the Yin–Yang grid. They computed the gravitational potential by interpolating the density from the individual Yin–Yang grid parts onto an "auxiliary" spherical polar grid that covers the entire domain. The algorithm by Müller & Steinmetz (1995) could then be applied without any modifications. Afterwards, the gravitational potential was mapped back to the Yin–Yang grid patches. Although the orientation and grid resolution of the auxiliary grid are arbitrary, Wongwathanarat et al. (2010) aligned it with the Yin part for simplicity.

The method by Wongwathanarat et al. (2010), however, has two major drawbacks for us. First, their code runs on shared-memory systems only, while our implementation of the gravity solver should be optimized for distributed-memory architectures; collecting the density data from all compute tasks on a single worker would be very inefficient, due to the amount of explicit data communication and the serialization of the calculation (see Section 3). Second, interpolation errors of the density and of the calculated gravitational potential occur on the grid patch that is not aligned with the auxiliary spherical polar grid (Yang), while the data on the Yin part can simply be copied. This introduces an artificial asymmetry between the Yin–Yang grid patches.

To efficiently compute the gravitational potential on the parallelized Yin–Yang grid, we developed a new method based on Equations (21) and (22). It exploits the linearity of the integrals, i.e., integrals over the entire computational domain can be written as the sum of two integrals computed separately on Yin and Yang. Only the overlap regions between the grid patches have to be treated differently, by introducing a surface weight, $w_{jk}$, which is 0.5 for grid cells inside the overlap and 1 for cells outside. The weight factor for cells that are intersected by a patch boundary, and are therefore only partially overlapped, is calculated numerically (Peng et al. 2006). The sum of all surface elements multiplied by their surface weights yields

Equation (25)

Note that this summation extends over one grid patch only (either Yin or Yang) and therefore yields 2π, i.e., half of the full solid angle.

Similar to the parallelization procedure described in Section 3 for spherical polar grids, our new approach for the Yin–Yang grid is based on computing ${A}_{{\rm{C}},i}^{({\ell }m)}$ and ${A}_{{\rm{S}},i}^{({\ell }m)}$ locally, followed by a global summation over all compute tasks. In the Yin–Yang setup, each task operates on data from either Yin or Yang, i.e., it calculates the following sums locally,

Equation (26)

Equation (27)

Note that due to the symmetry between both grid patches and their identical local coordinates, the values of wjk, ${{ \mathcal T }}_{j}^{({\ell }m)}$, ${{ \mathcal C }}_{k}^{(m)}$, and ${{ \mathcal S }}_{k}^{(m)}$ are equal on Yin and Yang.

After evaluating the latter equations, the global sums are computed separately for both grid parts resulting in different values for ${A}_{{\rm{C}},i,\mathrm{Yin}}^{({\ell }m)}$, ${A}_{{\rm{S}},i,\mathrm{Yin}}^{({\ell }m)}$, ${A}_{{\rm{C}},i,\mathrm{Yang}}^{({\ell }m)}$, and ${A}_{{\rm{S}},i,\mathrm{Yang}}^{({\ell }m)}$. Four different potentials are calculated using Equations (19), (20), and (5), namely ${{\rm{\Phi }}}_{\mathrm{Yin}}({{\boldsymbol{r}}}_{\mathrm{Yin}})$, ${{\rm{\Phi }}}_{\mathrm{Yin}}({{\boldsymbol{r}}}_{\mathrm{Yang}})$, ${{\rm{\Phi }}}_{\mathrm{Yang}}({{\boldsymbol{r}}}_{\mathrm{Yin}})$, and ${{\rm{\Phi }}}_{\mathrm{Yang}}({{\boldsymbol{r}}}_{\mathrm{Yang}})$. Here, the subscript of Φ denotes the grid part on which ${A}_{{\rm{C}},i}^{({\ell }m)}$ and ${A}_{{\rm{S}},i}^{({\ell }m)}$ are computed, while the subscript of ${\boldsymbol{r}}$ determines in which coordinate system the angles ϑ and φ in Equations (5), (19), and (20) are expressed.

The final gravitational potential for all compute tasks on Yin and Yang is, respectively,

Equation (28)

Equation (29)

Hence, the contributions from both grid parts are transformed into the local coordinates of the task and added. This procedure is completely symmetric with respect to both Yin–Yang grid patches and does not introduce a preferred direction. An interpolation onto an auxiliary grid is not required, thus avoiding additional numerical errors.

In Figure 2, we present the strong-scaling behavior of our implementation of the Yin–Yang gravity solver. The gravitational potential was computed for a density distribution on the Yin–Yang grid with an angular resolution of 1°. While the grid configuration was kept fixed, we varied the number of processors and measured the speedup, defined as the wall-clock time of the slowest run divided by the wall-clock time of the current run. The strong scaling deviates from the theoretically expected perfect behavior with increasing number of processors, because communication operations start to dominate. However, we emphasize that the computation of the gravitational potential by one single compute task, as implemented in the unparallelized code by Wongwathanarat et al. (2010), would result in a flat curve in Figure 2. Our approach is thus suitable for large-scale simulations involving thousands of processors.

Figure 2. Strong scaling of the 3D gravity solver on the Yin–Yang grid with an angular resolution of 1°. The dashed line indicates the theoretically perfect scaling behavior. Note the logarithmic scales of both coordinate axes.


5. Test Calculations

To validate our approach for solving the gravitational potential on the Yin–Yang grid, we performed various tests. First, the gravitational potential of a spherically symmetric density distribution was verified to agree with the corresponding analytical solution up to machine precision. Second, an axisymmetric density distribution was mapped to the Yin–Yang grid with 1° angular resolution, and its gravitational potential was compared with the result obtained from an axisymmetric gravity solver (see Footnotes), yielding agreement to within a relative deviation of $10^{-4}$. Third, the potential of a homogeneous ellipsoid was compared to the semi-analytical solution. We elaborate on the latter test in detail here.

We constructed an ellipsoid of constant density, $\varrho_0$, with semi-axes a, b, and c, where b = 1.5a and c = 2a. In Cartesian coordinates, a point lies inside the ellipsoid if

Equation (30)

The gravitational potential of this configuration is given by (Chandrasekhar 1969)

Equation (31)

where ${\boldsymbol{r}}=(x,y,z)$ and

Equation (32)

Equation (33)

Equation (34)

Equation (35)

If the point ${\boldsymbol{r}}$ is located inside the ellipsoid, ${u}_{0}({\boldsymbol{r}})=0$. Otherwise, ${u}_{0}({\boldsymbol{r}})$ is given as the positive root of the equation

Equation (36)

We evaluated the root and the integrals numerically up to a precision of 10−13.
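For reference, the semi-analytical solution can be evaluated with standard library routines. The sketch below assumes the Chandrasekhar (1969) form $\Phi({\boldsymbol{r}}) = -\pi G {\varrho}_{0}\,a b c \int_{u_0}^{\infty} \left[1 - \tfrac{x^2}{a^2+u} - \tfrac{y^2}{b^2+u} - \tfrac{z^2}{c^2+u}\right] \Delta(u)^{-1}\,{\rm d}u$ with $\Delta(u)=\sqrt{(a^2+u)(b^2+u)(c^2+u)}$, which we take to correspond to Equations (31)–(36) up to the sign convention of the potential.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def ellipsoid_potential(x, y, z, a, b, c, rho0, G=6.674e-8):
    """Gravitational potential of a homogeneous ellipsoid (Chandrasekhar 1969), assuming
    Phi = -pi*G*rho0*a*b*c * int_{u0}^{inf} [1 - sum x_i^2/(a_i^2+u)] du / Delta(u).
    G defaults to the cgs value; any consistent unit system works."""
    def f(u):  # function whose positive root defines u0 for exterior points
        return x**2/(a**2 + u) + y**2/(b**2 + u) + z**2/(c**2 + u) - 1.0
    if f(0.0) <= 0.0:
        u0 = 0.0                                  # point inside or on the surface
    else:
        u0 = brentq(f, 0.0, 1e3*(a**2 + b**2 + c**2), xtol=1e-13)
    def integrand(u):
        delta = np.sqrt((a**2 + u)*(b**2 + u)*(c**2 + u))
        return (1.0 - x**2/(a**2 + u) - y**2/(b**2 + u) - z**2/(c**2 + u)) / delta
    integral, _ = quad(integrand, u0, np.inf, epsabs=1e-13, epsrel=1e-13)
    return -np.pi * G * rho0 * a * b * c * integral

# example: a point on the x-axis outside an ellipsoid with b = 1.5a and c = 2a
a = 1.0
print(ellipsoid_potential(1.5*a, 0.0, 0.0, a, 1.5*a, 2.0*a, 1.0))
```

For a sphere (a = b = c = R) this expression reduces to the familiar central value $-2\pi G \varrho_0 R^2$, which provides a quick sanity check of the routine.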

The transformation from the Yin spherical polar coordinates to Cartesian coordinates is given by

Equation (37)

Equation (38)

Equation (39)

where we have introduced a rotation angle ${\rm{\Delta }}\varphi =\pi /8$ to break the symmetry between both Yin–Yang grid patches. To compute the Cartesian coordinates for Yang, the transformation given by Equations (23) and (24) has to be applied to express the Yang coordinates in the Yin system.

To assess the accuracy of our 3D gravity solver for the case of the homogeneous ellipsoid, we compared the calculated gravitational potential in the entire computational domain (i.e., inside and outside the ellipsoid) with the semi-analytical solution. This was done on the Yin–Yang grid as well as on a standard spherical polar grid covering the full sphere. Both mesh configurations had an equidistant radial spacing and an outer radius $r_{\max} = c$. We varied both the grid resolution and the maximum multipole order ${{\ell }}_{\max }$. In the following, we denote the models with $n_R = 400$ and an angular resolution of 2° as low resolution (LR). The runs with twice as many grid points in all directions are referred to as high resolution (HR).

In Figure 3, we show the maximum relative error of the calculated gravitational potential with respect to the semi-analytical solution for the different mesh configurations and grid resolutions. As expected, the relative errors of the HR models are always smaller than in the LR cases, because the gravitational potential is sampled more accurately in the former runs and the surface of the ellipsoid is better resolved. While for the LR case the results computed on the Yin–Yang grid and on the spherical polar grid deviate from each other, both setups yield basically identical solutions in the HR case. This demonstrates that our adaptation of the gravity solver performs correctly on the Yin–Yang grid.

Figure 3. Maximum relative error of the calculated gravitational potential of the homogeneous ellipsoid compared to the semi-analytical solution as a function of the maximum multipole order ${{\ell }}_{\max }$. The results computed on the spherical polar grid and on the Yin–Yang grid are shown with red and blue lines, respectively. Dashed lines mark low-resolution (LR) setups with $n_R = 400$ and a 2° angular resolution, while solid lines show high-resolution (HR) grids with $n_R = 800$ and a 1° angular resolution.


With increasing maximum multipole order ${{\ell }}_{\max }$, the relative error decreases in all setups, because the shape of the ellipsoid is approximated more accurately. In principle, an infinite number of multipole terms would be required to reproduce an ellipsoidal configuration exactly. Including higher-order terms thus reduces the deviation from the semi-analytical solution.

The spatial distribution of the relative error on the Yin–Yang grid is shown in Figure 4 for the HR case with ${{\ell }}_{\max }=80$ in the plane z = 0. The error is clearly dominated by the cells close to the surface of the ellipsoid. Their deviation from the semi-analytical solution is two to three orders of magnitude larger than that of cells near the grid origin. Machine precision is not reached in the center, because the gravitational potential there also depends on the density distribution outside (see Equation (5) and the term ${{\rm{\Phi }}}_{\mathrm{out}}^{({\ell }m)}$ there). As the ellipsoidal surface is not sampled to machine precision, we do not expect machine precision for the gravitational potential in the central region either.

Figure 4. Relative error of the calculated gravitational potential in a cross-sectional plane through the homogeneous ellipsoid on the Yin–Yang grid compared to the semi-analytical solution. The data are displayed with logarithmic color coding in the xy plane (z = 0) for the HR case with ${{\ell }}_{\max }=80$. The surface of the ellipsoid is indicated by a black ellipse. Note that the entire computational domain is shown.


6. Discussion and Conclusions

In this work, we have presented a new method for solving the 3D gravitational potential of a density field on the Yin–Yang grid, based on the algorithm developed by Müller & Steinmetz (1995). Our approach is particularly efficient on distributed-memory machines with a large number of compute tasks, because the amount of data being explicitly communicated is minimized and the workload is evenly distributed across all tasks, resulting in good scaling behavior.

In contrast to the strategy of Wongwathanarat et al. (2010), who interpolated the density from the Yin–Yang grid onto an auxiliary spherical mesh, we perform all operations on the original Yin–Yang grid. Our algorithm is completely symmetric with respect to the Yin–Yang grid patches and does not introduce a preferred direction. The validity of the results has been verified by performing test calculations that have an analytical or semi-analytical solution.

Our ellipsoidal test configuration mimics the situation near the surface of the newly formed neutron star during the later post-bounce evolution of a core-collapse supernova. The increasingly steep density gradient between the neutron star, which may be centrifugally deformed in the case of rapid rotation, and its surrounding medium is represented by the boundary of the ellipsoid. Because our solver accurately determines the gravitational potential in this test setup with its extremely "sharp" surface, reliable results can also be expected for the more moderate conditions encountered during the birth of neutron stars in stellar core collapse and explosion.

The maximum multipole order ${{\ell }}_{\max }$ of the spherical harmonics decomposition determines the accuracy of the gravitational potential. The computational cost increases roughly as ${{\ell }}_{\max }^{2}$. A suitable balance between accuracy and computing time has to be found depending on the characteristics of the investigated physical problem. In core-collapse supernova models, the gravitational potential is dominated by the approximately spherically symmetric proto-neutron star. Therefore, a moderate ${{\ell }}_{\max }$ of $\sim 25$ should be able to capture most of the asymmetries.

We thank Alexander Summa and Annop Wongwathanarat for fruitful discussions. This project was supported by the European Research Council through grant ERC-AdG No. 341157-COCO2CASA and by the Deutsche Forschungsgemeinschaft through the Excellence Cluster "Universe" (EXC-153). The test calculations were performed on the Hydra system of the Max Planck Computing and Data Facility (MPCDF). The code scaling was tested on SuperMUC at the Leibniz Supercomputing Center with resources granted by the Gauss Centre for Supercomputing (LRZ project ID: pr53yi).

Software: Prometheus-Vertex (Fryxell et al. 1989; Rampp & Janka 2002; Buras et al. 2006), NumPy and Scipy (Oliphant 2007), IPython (Perez & Granger 2007), Matplotlib (Hunter 2007).

Appendix: Recurrence Relations for the Integrals of the Associated Legendre Polynomials

The integrals ${{ \mathcal T }}_{j}^{({\ell }m)}$ defined in Equation (18) can be calculated using the following recurrence relations (Zwerger 1995), which are evaluated in the order given below.

Equation (40)

Equation (41)

Equation (42)

Equation (43)

Equation (44)

Equation (45)

Equation (46)

Footnotes

  • The axisymmetric solver has already been used, e.g., by Buras et al. (2006), Marek & Janka (2009), and Summa et al. (2016).
