
RAM: Rapid Advection Algorithm on Arbitrary Meshes


Published 2023 July 20 © 2023. The Author(s). Published by the American Astronomical Society.
Citation: Pablo Benítez-Llambay et al. 2023 ApJ 952 106. DOI: 10.3847/1538-4357/acd698


Abstract

The study of many astrophysical flows requires computational algorithms that can capture high Mach number flows, while resolving a large dynamic range in spatial and density scales. In this paper we present a novel method, RAM: Rapid Advection Algorithm on Arbitrary Meshes. RAM is a time-explicit method to solve the advection equation in problems with large bulk velocity on arbitrary computational grids. In comparison with standard upwind algorithms, RAM enables advection with larger time steps and lower truncation errors. Our method is based on the operator splitting technique and conservative interpolation. Depending on the bulk velocity and resolution, RAM can decrease the numerical cost of hydrodynamics by more than one order of magnitude. To quantify the truncation errors and speed-up with RAM, we perform one- and two-dimensional hydrodynamics tests. We find that the order of our method is given by the order of the conservative interpolation and that the effective speed-up is in agreement with the relative increment in time step. RAM will be especially useful for numerical studies of disk−satellite interaction, characterized by high bulk orbital velocities and nontrivial geometries. Our method dramatically lowers the computational cost of simulations that simultaneously resolve the global disk and potential well inside the Hill radius of the secondary companion.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

A broad range of open questions in astrophysics rely on the nonlinear outcome of computational studies of fluid dynamics. These problems often involve high Reynolds number flows that must be studied across many orders of magnitude in scale. A central aspect of computational fluid dynamics is finding stable, accurate, conservative, and efficient numerical methods to solve the advection equation

Equation (1)

$$\frac{\partial Q}{\partial t}+\frac{\partial \left(vQ\right)}{\partial x}=0,$$

for a given initial condition Q(x, t = 0) = Q0(x) and suitable boundary conditions. Here Q(x, t) is the quantity advected along the x-direction by the velocity v over a time t. Without loss of generality, we can assume v = v0 + δv, where the bulk velocity, v0, is constant in space and δv(x) is the departure of the flow velocity v from v0.

In the Eulerian formalism, Equation (1) can be solved with Riemann or finite-volume algorithms combined with upwind reconstruction methods, such as piecewise linear (e.g., van Leer 1977) or piecewise parabolic (Colella & Woodward 1984) reconstruction, among others. These conservative upwind methods introduce truncation errors that depend on the bulk velocity, creating a direct correlation between the accuracy of the numerical solution and the bulk velocity of the flow (see, e.g., Robertson et al. 2010). Therefore, solving the advection equation in a frame of reference that moves at the bulk velocity minimizes the errors in the solution.

In addition, explicit finite-difference methods are subject to the so-called Courant–Friedrichs–Lewy (CFL) stability condition, which limits the time step to a value inversely proportional to the velocity v. As such, a large bulk velocity limits the allowed time step Δt, therefore increasing the numerical cost of finding the solution at time t. Crucially, this numerical cost is unbounded, as it scales with the—arbitrarily large—bulk velocity v0. In contrast, finite-volume methods do not require a CFL condition as long as the upwind fluxes are integrated over the entire domain of dependence. However, it is customary to limit the distance that information can travel within a time step to less than the size of the smallest cell.
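The linear scaling of cost with bulk speed can be made concrete with a short sketch (the grid spacing, final time, and CFL factor below are illustrative values, not taken from any test in this paper):

```python
import numpy as np

# With a CFL-limited step dt <= beta * dx / |v|, the number of steps
# needed to reach a fixed time t_final grows linearly with the speed.
dx, t_final, beta = 0.01, 1.0, 0.5

def n_steps(v):
    dt = beta * dx / abs(v)            # CFL-limited time step
    return int(np.ceil(t_final / dt))  # steps required to reach t_final
```

For these parameters a tenfold increase in speed costs tenfold more steps: `n_steps(10.0)` equals ten times `n_steps(1.0)`.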

Because of these two observations, dealing with a large bulk speed requires the development of special techniques (1) to decrease the large truncation errors and (2) to increase the effective time step. Increasing the time step (without compromising the stability of the solver) is an especially effective way to decrease the numerical cost of the simulations.

In terms of v0, Equation (1) can be written as

Equation (2)

$$\frac{\partial Q}{\partial t}+v_0\frac{\partial Q}{\partial x}+\frac{\partial \left(\delta v\,Q\right)}{\partial x}=0.$$

In order to address the problems described above, it is desirable to remove v0 from Equation (2). One way of doing that is to define the coordinate transformation

Equation (3)

$$x^{\prime} = x - v_0 t, \qquad t^{\prime} = t,$$

such that Equation (2) becomes

Equation (4)

$$\frac{\partial Q}{\partial t^{\prime}}+\frac{\partial \left(\delta v\,Q\right)}{\partial x^{\prime}}=0.$$

This approach has been followed by Shariff & Wray (2018) to solve advection on fixed uniform meshes. It consists of solving advection via standard methods in the frame of reference that moves at the bulk velocity. Then, in the rest frame, the fixed mesh is filled after shifting the advected quantity by an amount v0Δt in Fourier space (see also Johansen et al. 2009). This can be done efficiently with the fast Fourier transform (FFT). However, an FFT (and its inverse) can only be computed efficiently on uniform meshes. On nonuniform meshes, a nonuniform discrete Fourier transform (NUDFT) can be used, but at the expense of additional cost or precision (Keiner et al. 2009). Moreover, arbitrary shifting of nonsmooth functions through the discrete Fourier transform is subject to the Gibbs phenomenon, which produces over/undershoots in the reconstructed (shifted) signal near large gradients in the sampled function. More importantly, the Gibbs phenomenon produces spurious values that could change the sign of the advected quantity. This could be particularly catastrophic for positive-valued quantities like the density or internal energy.
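The Fourier-shift idea can be sketched in a few lines (a minimal illustration on a uniform periodic grid; the function name and parameters are our own, not taken from Shariff & Wray 2018):

```python
import numpy as np

def fft_shift(q, delta, L):
    """Shift a periodic signal q, sampled on a uniform grid of period L,
    by an arbitrary distance delta via a phase factor in Fourier space."""
    k = 2.0 * np.pi * np.fft.fftfreq(q.size, d=L / q.size)  # wavenumbers
    return np.real(np.fft.ifft(np.fft.fft(q) * np.exp(-1j * k * delta)))

# Band-limited signals are shifted exactly; a sharp (discontinuous)
# profile would instead show Gibbs over/undershoots near its jumps.
N, L = 64, 2.0 * np.pi
x = np.arange(N) * L / N
q1 = fft_shift(np.sin(x), 0.5, L)   # approximates sin(x - 0.5)
```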

A second possibility to remove v0 from Equation (2), also related to a coordinate transformation, is to solve the advection equation in a Lagrangian mesh that moves at the bulk speed, as done, for example, by the codes AREPO (Springel 2010) and DISCO (Duffell 2016). In one dimension this is, perhaps, the optimal solution on both uniform and nonuniform domains. However, the time step could be limited in multidimensional meshes with strong shear unless the mesh is regularly connected.

Another possibility is the FARGO algorithm developed by Masset (2000). The FARGO algorithm is a widely used approach in the context of disk dynamics to avoid the limitations of the time step due to a large bulk speed in differentially rotating fluids. The FARGO algorithm employs the operator splitting method on a static mesh (or one rotating as a rigid body; see, e.g., Masset 2000; Benítez-Llambay & Masset 2016), for which, to first order in time, Equation (2) is solved in two separate steps. In a first step, one solves

Equation (5)

$$\frac{\partial Q}{\partial t}+v_0\frac{\partial Q}{\partial x}=0,$$

and then, using the solution of Equation (5) as an initial condition, one solves through standard methods

Equation (6)

$$\frac{\partial Q}{\partial t}+\frac{\partial \left(\delta v\,Q\right)}{\partial x}=0.$$

The solution of Equation (6) then corresponds to the approximate solution of Equation (2). To first order, this method is equivalent to the one proposed by Shariff & Wray (2018). One crucial property of the FARGO algorithm is that Equation (5) has an analytical solution

Equation (7)

$$Q(x,\,t_n+{\rm{\Delta }}t) = Q(x - v_0{\rm{\Delta }}t,\,t_n),$$

which corresponds to a shift of the initial condition along the x-coordinate. On uniform meshes, this shift can be cast as an integer shift of grid cells plus a residual advection. Furthermore, on periodic domains the integer shift corresponds to a cyclic permutation of grid cells, so it does not introduce any numerical error. While the simplicity of this integer shift makes the FARGO algorithm extremely efficient, it is not applicable to nonuniform meshes.
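On a uniform periodic mesh the integer part of the shift reduces to an index permutation, e.g. (illustrative values):

```python
import numpy as np

# FARGO's error-free integer shift: a cyclic permutation of cell values.
Q = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # cell values on a uniform ring
n_shift = 2                                   # integer part of v0*dt/dx
Q_shifted = np.roll(Q, n_shift)               # exact to machine precision
```

Only the residual, sub-cell part of the shift then needs an actual (and hence diffusive) advection step.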

In this work, we present an algorithm to solve Equation (1) with arbitrarily large bulk speed v0 that brings the efficiency of the FARGO algorithm to uniform and nonuniform meshes alike. Our method is based, like FARGO, on the operator splitting technique and the method of characteristics, with suitable conservative reconstruction methods that are used to evaluate Equation (7). In addition, the method of characteristics that we develop to shift the signal could be used to build higher-order schemes on nonuniform meshes (or to avoid potential issues with FFT methods on uniform meshes). We furthermore show that if the underlying mesh is distributed according to a predefined mesh density function, a coordinate transformation that maps the nonuniform space into a uniform one speeds up calculations significantly.

This paper is organized as follows. In Section 2 we define the basic equations used throughout the manuscript. In Section 3 we present the details of the RAM method for solving the advection equation. Results from test problems and benchmarking of our implementation are shown in Section 4. Finally, we summarize our work in Section 5.

2. Basic Definitions

Consider a static grid made of N + 1 nodes with monotonic coordinates ${x}_{\min }={x}_{-\tfrac{1}{2}}\lt {x}_{\tfrac{1}{2}}\lt ...\lt {x}_{N-\tfrac{1}{2}}={x}_{\max }$. We assume the function Q to be periodic within the domain and sampled at the cell coordinates ${x}_{i}=({x}_{i+\tfrac{1}{2}}+{x}_{i-\tfrac{1}{2}})/2$ at time tn and define ${Q}_{i}^{n}\equiv Q({x}_{i},{t}_{n})$. Consecutive pairs of nodes define the cell size ${\rm{\Delta }}{x}_{i}={x}_{i+\tfrac{1}{2}}-{x}_{i-\tfrac{1}{2}}$ for i = 0,..., N − 1. The volume average of Q within a cell is

Equation (8)

$$\langle Q\rangle_i = \frac{1}{{\rm{\Delta }}x_i}\int_{x_{i-\tfrac{1}{2}}}^{x_{i+\tfrac{1}{2}}} Q(x,t)\,dx.$$

In this work, we assume

Equation (9)

$$\langle Q\rangle_i \simeq Q_i.$$

It can be shown via a Taylor expansion of Q around xi that, for smooth functions, Equation (9) corresponds to a second-order approximation of Equation (8) within the i-cell. Therefore, the volume integral of Q within a cell is given by

Equation (10)

$$M_i = Q_i\,{\rm{\Delta }}x_i.$$
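In code, the quantities of this section map onto a few arrays (an illustrative nonuniform mesh; the node positions are arbitrary):

```python
import numpy as np

# N+1 nodes x_{i-1/2} of a nonuniform mesh on [0, 1]
nodes = np.array([0.0, 0.1, 0.35, 0.6, 1.0])
xc = 0.5 * (nodes[1:] + nodes[:-1])   # cell centers x_i
dx = np.diff(nodes)                   # cell sizes Delta x_i
Q = np.sin(2.0 * np.pi * xc)          # sampled quantity Q_i
M = Q * dx                            # volume integrals M_i = Q_i * Delta x_i
```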

For a periodic domain, a conservative update of Q from time t = t0 to t = tn implies

Equation (11)

$$\sum_{i=0}^{N-1} M_i^{n} = \sum_{i=0}^{N-1} M_i^{0}.$$

Because Q is sampled at a discrete set of points, we also define

Equation (12)

$$q^{n}(x) \equiv q_i(x, t_n) \quad {\rm for}\ x\in[x_{i-\tfrac{1}{2}}, x_{i+\tfrac{1}{2}}].$$

The function ${q}^{n}$ is made of a set of piecewise monotonic interpolating functions ${q}_{i}(x,{t}_{n})$ that approximate Q(x, tn) within each cell, such that

Equation (13)

$$\int_{x_{i-\tfrac{1}{2}}}^{x_{i+\tfrac{1}{2}}} q_i(x, t_n)\,dx = M_i^{n}.$$

In particular, we also assume

Equation (14)

Furthermore, it is required that q does not introduce new extrema, preventing the development of over/undershoots in the solution.

In terms of q, Equation (7) becomes

Equation (15)

$$q^{n+1}(x) = q^{n}(x - v_0{\rm{\Delta }}t).$$

With the above relation, the advection of the discrete function Q is transformed into the advection of a piecewise, continuous function q that shares the local and global volume integral of Q.

Finally, given a velocity v(x), it is customary to define the constant background bulk velocity v0 as the spatially averaged velocity (see Masset 2000). This definition is arbitrary; for example, it could instead be defined as the velocity of the center of mass. To obtain an optimal solution in terms of both efficiency and accuracy, v0 should be defined such that it minimizes the maximum amplitude of $|\delta v|=|v-{v}_{0}|$ within the domain, so the allowed time step is maximized. This is readily obtained after defining v0 as the average between the maximum and minimum of v (see Benítez-Llambay & Masset 2016, Section 3.5.1). It is worth noticing that the method presented in this paper is agnostic about the definition of v0 as long as it is constant in space. The way it is defined influences the effective speed-up and overall precision obtained.
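The midpoint choice of v0 is easy to verify numerically (the velocity values below are illustrative):

```python
import numpy as np

# v0 = (max(v) + min(v))/2 minimizes the maximum residual |v - v0|,
# which maximizes the time step allowed for the residual advection.
v = np.array([3.0, 5.0, 10.0, 4.0])
v0_mid = 0.5 * (v.max() + v.min())   # midpoint definition -> 6.5
v0_avg = v.mean()                    # spatial average (Masset 2000) -> 5.5
```

Here max|v − v0_mid| = 3.5, smaller than max|v − v0_avg| = 4.5, so the midpoint definition allows a roughly 30% larger residual-advection time step.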

3. RAM

As discussed previously in Section 1, a method to solve Equation (1) that maximizes the time step and minimizes truncation errors in arbitrary meshes requires an efficient solution of Equation (5). In other words, we need a method to evaluate the exact solution given by Equation (7) that is fast, stable, and conservative.

Evaluating Equation (7) from time tn to tn+1 = tn + Δt corresponds to shifting Q(x, tn) by an amount v0Δt in space. In the particular case where Q is sampled on a discrete but uniformly spaced grid and v0Δt is an exact multiple of the cell size, the shift is achieved via a cyclic permutation of indices (see Masset 2000). However, in general, Q could be sampled on a nonuniform domain, in which case v0Δt cannot be an integer multiple of every cell size. Therefore, shifting Q requires interpolation. Here we provide a method to evaluate Equation (7) via Equation (15) that is conservative, i.e., it conserves the total volume average of Q to machine precision (as defined by Equation (11)).

To evaluate Equation (15) from time tn to time tn+1, we propose the following three steps:

  1. Calculate the piecewise interpolating functions ${q}_{i}(x,{t}_{n})$ using the discrete nodes ${Q}_{i}^{n}$ (see Section 3.1).
  2. Obtain ${M}_{i}^{n+1}$ after analytically shifting ${q}^{n}$ by an amount v0Δt (see Section 3.2).
  3. Use Equation (10) to obtain the advected profile (see Section 3.3).

These three steps are shown explicitly in Figure 1. Together with the operator splitting needed to build Equation (5), these three steps are the core of our Rapid Advection Algorithm on Arbitrary Meshes (RAM). In what follows, we explain each of these steps and give a practical example.

Figure 1.

Figure 1. RAM steps involved in solving Equation (5) for a positive-valued velocity v0 over a time Δt in a periodic domain. The top panel shows an arbitrary continuous function Q(x) at time tn, known at a discrete set of points xi, with values ${Q}_{i}^{n}$. These values allow us to calculate ${q}_{i}^{n}$, which we draw with red lines. For this particular example, we use PLM interpolation (see Section 3.1). Here we also draw the length v0Δt used to shift the mesh to the left in step 2. The middle panel shows ${M}_{i}^{n+1}$ obtained from evaluating Equation (22), where different colors are used for different cells. These colors allow us to link each area (integral within each cell of ${q}^{n}$) with the resulting area obtained when reconstructing ${Q}_{i}^{n+1}$ in step 3 (see Section 3.2). The bottom panel shows the reconstructed values ${Q}_{i}^{n+1}$ obtained from ${M}_{i}^{n+1}$ via Equation (10). We also show the exact solution $Q(x-{v}_{0}{\rm{\Delta }}t,{t}_{n})$ for completeness.


3.1. Step 1—Interpolating ${Q}_{i}^{n}$

The first step of the RAM method requires the interpolation of the nodes ${Q}_{i}^{n}$ to obtain the piecewise functions ${q}_{i}^{n}(x)$. Ideally, this interpolation should be of the order of (or higher than) the one used to solve Equation (6), so errors are dominated by the standard advection step. The RAM method does not rely on any specific choice as long as the interpolation is conservative and does not introduce new extrema. In this work we explore two different conservative piecewise monotonic interpolation methods (i.e., they do not introduce new maxima/minima) that are often utilized in hydrodynamics. These are the piecewise linear method (PLM) and the piecewise parabolic method (PPM), which we explain below. Throughout this section, all the quantities are evaluated at time tn, so for the sake of simplicity we omit the superscript n.

Piecewise Linear Method.—This method provides an interpolation that is accurate up to second order (see, e.g., Mignone 2014, Section 3.1). Assume qi to be of the form

Equation (16)

$$q_i(x) = Q_i + \sigma_i\left(x - x_i\right),$$

where σi is the slope of the linear function. Monotonicity is preserved by utilizing the slope limiter proposed by van Leer (1974), where

Equation (17)

$$\sigma_i = \left\{\begin{array}{ll} \dfrac{2\,{\rm{\Delta }}Q_{i-\tfrac{1}{2}}\,{\rm{\Delta }}Q_{i+\tfrac{1}{2}}}{{\rm{\Delta }}Q_{i-\tfrac{1}{2}}+{\rm{\Delta }}Q_{i+\tfrac{1}{2}}} & {\rm if}\ {\rm{\Delta }}Q_{i-\tfrac{1}{2}}\,{\rm{\Delta }}Q_{i+\tfrac{1}{2}} > 0,\\[8pt] 0 & {\rm otherwise}, \end{array}\right.$$

with ${\rm{\Delta }}{Q}_{i+\tfrac{1}{2}}=({Q}_{i+1}-{Q}_{i})/({x}_{i+1}-{x}_{i})$.
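A vectorized sketch of this limited slope on a periodic, possibly nonuniform grid of cell centers (the function name and periodic wrapping convention are our own choices):

```python
import numpy as np

def vanleer_slopes(x, Q, L):
    """PLM slopes sigma_i with the van Leer (1974) harmonic-mean limiter
    on a periodic grid of period L; returns 0 at local extrema."""
    dx = np.roll(x, -1) - x
    dx[-1] += L                     # periodic wrap of the last interval
    dQ = (np.roll(Q, -1) - Q) / dx  # one-sided slopes (Q_{i+1}-Q_i)/(x_{i+1}-x_i)
    dQm = np.roll(dQ, 1)            # left-sided slope of cell i
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma = 2.0 * dQm * dQ / (dQm + dQ)
    return np.where(dQm * dQ > 0.0, sigma, 0.0)
```

For a linear profile the limiter returns the exact slope away from the periodic wrap, and zero at the two cells adjacent to the wrap-induced extremum.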

Piecewise Parabolic Method.—This method, first introduced by Colella & Woodward (1984), interpolates Q up to third order via quadratic polynomials. The piecewise parabolic function is

Equation (18)

$$q_i(x) = Q_{L,i} + \xi\left[{\rm{\Delta }}Q_i + Q_{6,i}\left(1-\xi\right)\right], \qquad \xi = \frac{x - x_{i-\tfrac{1}{2}}}{{\rm{\Delta }}x_i}, \quad {\rm{\Delta }}Q_i = Q_{R,i} - Q_{L,i},$$

with ${Q}_{6,i}=6{\langle Q\rangle }_{i}-3\left({Q}_{L,i}+{Q}_{R,i}\right)$. To obtain the quadratic polynomials, we need to calculate ${\langle Q\rangle }_{i}$ (see Section 2) and interpolate it to the cell interfaces to obtain ${\langle Q\rangle }_{i+\tfrac{1}{2}}={Q}_{L,i+1}={Q}_{R,i}$ (see Appendix A.1). Furthermore, monotonicity constraints have to be applied (see Appendix A.2). It is worth noticing that the coefficients of the quadratic polynomial depend on ${\langle Q\rangle }_{i}$, ${Q}_{L,i}$, and ${Q}_{R,i}$; therefore, the errors of the latter limit the accuracy of the interpolation. In this work we assume ${\langle Q\rangle }_{i}\simeq {Q}_{i}$, which is a second-order approximation of ${\langle Q\rangle }_{i}$. Therefore, we expect the accuracy of our interpolation to be lower than third order. Higher accuracy could be obtained by improving the calculation of ${\langle Q\rangle }_{i}$ (Colella & Sekora 2008). Note, however, that since PPM conserves ${\langle Q\rangle }_{i}$ locally, a higher-order approximation of ${\langle Q\rangle }_{i}$ may not be conservative in the terms defined in this work, i.e., it may not conserve ${\sum }_{i}{Q}_{i}{\rm{\Delta }}{x}_{i}$ over successive time steps. The conservation will depend on the approximation of the spatial derivative of Q. For example, the fourth-order approximation ${\langle Q\rangle }_{i}={Q}_{i}+\tfrac{1}{24}\left({Q}_{i+1}-2{Q}_{i}+{Q}_{i-1}\right)$ is conservative globally over a periodic domain (i.e., it satisfies Equation (11)). In general, the accuracy of our method can be increased provided that a higher-order approximation of ${\langle Q\rangle }_{i}$ is conserved within a time step (e.g., Felker & Stone 2018).
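The parabola of Equation (18) can be written compactly in the scaled coordinate ξ; by construction its endpoints are the interface values and its cell average is recovered exactly (a minimal sketch; the interface values and limiters of Appendices A.1–A.2 are assumed to be given):

```python
import numpy as np

def ppm_parabola(QL, QR, Qavg):
    """PPM parabola q(xi) with q(0) = QL, q(1) = QR, and cell average Qavg,
    where xi = (x - x_{i-1/2}) / dx_i."""
    dQ = QR - QL
    Q6 = 6.0 * Qavg - 3.0 * (QL + QR)
    return lambda xi: QL + xi * (dQ + Q6 * (1.0 - xi))

# Midpoint-rule quadrature recovers the prescribed cell average.
q = ppm_parabola(1.0, 2.0, 1.6)
xi = (np.arange(100000) + 0.5) / 100000.0
cell_average = np.mean(q(xi))
```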

Finally, we note that for both PLM and PPM monotonicity constraints increase truncation errors locally within nonsmooth regions, which in turn reduces the overall accuracy of the interpolation. An example of reduction of accuracy due to this effect is shown in Section 4.1, for the advection of a square profile.

3.2. Step 2—Obtaining ${M}^{n+1}$ from the Advection of ${q}^{n}$

The volume integral ${M}_{i}^{n+1}$ is obtained from the definition (10) combined with the solution (15),

Equation (19)

$$M_i^{n+1} = \int_{x_{i-\tfrac{1}{2}}}^{x_{i+\tfrac{1}{2}}} q^{n}(x - v_0{\rm{\Delta }}t)\,dx.$$

After defining $\tilde{x}\equiv x-{v}_{0}{\rm{\Delta }}t$, we can write Equation (19) as

Equation (20)

$$M_i^{n+1} = \int_{\tilde{x}_{i-\tfrac{1}{2}}}^{\tilde{x}_{i+\tfrac{1}{2}}} q^{n}(\tilde{x})\,d\tilde{x}.$$

We define positive integers, p(i), such that

Equation (21)

$$x_{p(i)-\tfrac{1}{2}} \leq \tilde{x}_{i-\tfrac{1}{2}} < x_{p(i)+\tfrac{1}{2}},$$

which allows Equation (20) to be evaluated as

Equation (22)

$$M_i^{n+1} = \left\{\begin{array}{ll} \displaystyle\int_{\tilde{x}_{i-\tfrac{1}{2}}}^{\tilde{x}_{i+\tfrac{1}{2}}} q^{n}_{p(i)}(\tilde{x})\,d\tilde{x} & {\rm if}\ p(i) = p(i+1),\\[12pt] \displaystyle\int_{\tilde{x}_{i-\tfrac{1}{2}}}^{x_{p(i)+\tfrac{1}{2}}} q^{n}_{p(i)}(\tilde{x})\,d\tilde{x} + \sum_{m=p(i)+1}^{p(i+1)-1} M^{n}_{m} + \int_{x_{p(i+1)-\tfrac{1}{2}}}^{\tilde{x}_{i+\tfrac{1}{2}}} q^{n}_{p(i+1)}(\tilde{x})\,d\tilde{x} & {\rm otherwise}, \end{array}\right.$$

where p(N) = p(0) owing to periodicity. Here the sum over ${M}_{m}^{n}$ corresponds to the integral of ${q}_{m}$ over an entire cell centered at ${x}_{m}$. Because the reconstruction is conservative, we replace this integral with ${M}_{m}^{n}$.

Equation (22) is the generalization of standard upwind methods for arbitrary domains of dependence and can be interpreted as a conservative remapping of volume-averaged values (e.g., mass) from coordinates xi to the new coordinates ${\tilde{x}}_{i}={x}_{i}-{v}_{0}{\rm{\Delta }}t$. Note that since Δt is unbounded, ${\tilde{x}}_{i}$ may be nowhere near xi, and/or a grid cell in x may intersect one, two, or multiple grid cells in $\tilde{x}$ and vice versa. However, this remapping is simplified by the index calculation p(i) (Equation (21) and Section 3.2.1) and the two conditionals in Equation (22). If the time step Δt is bounded (e.g., ${\rm{\Delta }}t\lesssim \min ({\rm{\Delta }}{x}_{i})/{v}_{0}$), Equation (22) reduces to the coordinate remap described in Lufkin & Hawley (1993, see their Equation (58)). In this case, p(i) = i, p(i + 1) = i + 1, and the algorithm in Equation (22) can be cast into the form of a standard upwind advection scheme, similar to the one used to solve Equation (6).
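This conservative remap can be condensed into a short reference implementation, here using piecewise-constant reconstruction in place of the PLM/PPM interpolants for brevity (an O(N²) brute-force overlap search; the production algorithm instead uses the index p(i) to locate intersecting cells directly):

```python
import numpy as np

def remap_pcm(edges, Q, shift, L):
    """Conservatively shift cell averages Q on a periodic, nonuniform mesh
    whose N+1 nodes are 'edges'; total mass sum(Q*dx) is preserved."""
    N = len(Q)
    shift = shift % L
    M_new = np.zeros(N)
    for i in range(N):
        a = edges[i] - shift        # back-traced target cell (Equation (20))
        b = edges[i + 1] - shift
        for m in range(N):          # overlap with periodic source-cell images
            for k in (-1, 0, 1):
                lo = max(a, edges[m] + k * L)
                hi = min(b, edges[m + 1] + k * L)
                if hi > lo:
                    M_new[i] += Q[m] * (hi - lo)
    return M_new / np.diff(edges)
```

Shifting by exactly one cell on a uniform mesh reproduces FARGO's error-free cyclic permutation; an arbitrary shift still conserves the total mass to round-off.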

3.2.1. Efficient Calculation of p(i)

If it happens that a coordinate transformation u(x) exists such that ${\rm{\Delta }}u\equiv u({x}_{i+\tfrac{1}{2}})-u({x}_{i-\tfrac{1}{2}})$ is constant, Equation (21) can be written as

Equation (23)

$$u_{p(i)-\tfrac{1}{2}} \leq \tilde{u}_{i-\tfrac{1}{2}} < u_{p(i)+\tfrac{1}{2}}.$$

In other words, p(i) corresponds to the index of the closest neighbor of the original mesh that is smaller than ${\tilde{u}}_{i-\tfrac{1}{2}}$. Because u is uniformly spaced, this calculation is straightforward,

Equation (24)

$$p(i) = {\rm int}\left(\frac{\tilde{u}_{i-\tfrac{1}{2}} - u_{-\tfrac{1}{2}}}{{\rm{\Delta }}u}\right),$$

where "int(a)" represents the smaller integer that is closer to a. In what follows we introduce a method to build a mesh such that finding this transformation is trivial.

Consider the mesh density function $\psi :[{x}_{\min },{x}_{\max }]\to {{\mathbb{R}}}_{\gt 0}$ such that

Equation (25)

We define $F:[{x}_{\min },{x}_{\max }]\to [0,N-1]$ such that

Equation (26)

The cumulative function F serves as the "index function," as it returns the (continuous) index of a mesh distributed according to the mesh density ψ. This correspondence is evident from the fact that the number of cells per unit space is proportional to dF/dx = ψ. The mesh nodes are defined implicitly by

Equation (27)

with ${x}_{-\tfrac{1}{2}}\equiv {x}_{\min }$. Finding analytical expressions for F or its inverse may not be possible, so numerical integration, combined with root-finding methods, has to be used. Finally, the coordinate transformation $u:[{x}_{\min },{x}_{\max }]\to [0,1]$ such that

Equation (28)

ensures that, by definition, Δu is constant, and Equation (24) evaluates to

Equation (29)

which allows us to speed up index calculations needed in step 2 of RAM significantly. It is worth noticing that Hugger (1993) generalizes a similar method to multiple dimensions.
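The construction can be sketched numerically: tabulate F by quadrature, invert it by interpolation to place the nodes, and then each p(i) costs a single floor operation in the uniform u coordinate (the function names and the normalization F(xmax) = N are our own conventions for this sketch):

```python
import numpy as np

def build_mesh(psi, xmin, xmax, N, samples=20001):
    """Place N+1 nodes so that the cell count per unit length follows the
    mesh density function psi, by inverting the cumulative function F."""
    xs = np.linspace(xmin, xmax, samples)
    w = 0.5 * (psi(xs[1:]) + psi(xs[:-1])) * np.diff(xs)  # trapezoid rule
    F = np.concatenate(([0.0], np.cumsum(w)))
    F *= N / F[-1]                              # normalize so F(xmax) = N
    return np.interp(np.arange(N + 1), F, xs)   # node i-1/2 sits where F = i

def p_index(u_tilde, N):
    """Cell index containing a back-traced node in the uniform u-coordinate
    on [0, 1): a single floor, plus periodic wrapping."""
    return int(np.floor(u_tilde * N)) % N
```

With a constant ψ the construction reduces to a uniform mesh, as expected.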

3.3. Step 3—Obtaining ${Q}_{i}^{n+1}$ from ${M}_{i}^{n+1}$

Once ${M}_{i}^{n+1}$ is obtained, the third step consists of using Equation (10) to obtain the advected profile ${Q}_{i}^{n+1}$ as

Equation (30)

$$Q_i^{n+1} = \frac{M_i^{n+1}}{{\rm{\Delta }}x_i}.$$

3.4. Example of Advection with RAM

In Figure 1 we advect to the right a sinusoidal profile, Q(x, tn), sampled on a discrete nonuniform grid, over a length v0Δt. To advect it, we follow the three steps described previously. The first step (top panel) consists of calculating ${q}_{i}^{n}$, which, for simplicity, is done using PLM. These linear functions are plotted with red lines. To explain the process in more detail, we now focus on the interval $[{x}_{4-\tfrac{1}{2}},{x}_{4+\tfrac{1}{2}}]$ only, but a similar procedure has to be done for all the intervals. The goal is to find ${Q}_{4}^{n+1}$ at x4 and time tn+1 that results from advecting ${q}^{n}$. For that we follow step 2 (middle panel). We transform x4 to ${\tilde{x}}_{4}$ and find all p(i) that are within $[{\tilde{x}}_{4-\tfrac{1}{2}},{\tilde{x}}_{4+\tfrac{1}{2}}]$. This step can be done efficiently if the mesh follows a mesh density function ψ, as explained in Section 3.2.1. For this particular example, the specific indices needed to evaluate Equation (22) in this interval are p(4) and p(5). To satisfy the inequality (21), p(4) = 1 and p(5) = 3, and Equation (22) becomes

Equation (31)

$$M_4^{n+1} = \int_{\tilde{x}_{4-\tfrac{1}{2}}}^{x_{1+\tfrac{1}{2}}} q_1^{n}(\tilde{x})\,d\tilde{x} + M_2^{n} + \int_{x_{3-\tfrac{1}{2}}}^{\tilde{x}_{4+\tfrac{1}{2}}} q_3^{n}(\tilde{x})\,d\tilde{x}.$$

Thus, to calculate ${M}_{4}^{n+1}$, we have to integrate ${q}_{1}^{n}$, ${q}_{2}^{n}$, and ${q}_{3}^{n}$ in the interval $[{\tilde{x}}_{4-\tfrac{1}{2}},{\tilde{x}}_{4+\tfrac{1}{2}}]$. Here the integral of ${q}_{2}^{n}$ is replaced by ${M}_{2}^{n}$ simply because the step of building ${q}_{2}$ is conservative. Finally, in step 3 (bottom panel) we take ${M}_{4}^{n+1}$ and use Equation (30) to calculate ${Q}_{4}^{n+1}$ (depicted with an open triangle) at x4 as ${Q}_{4}^{n+1}={M}_{4}^{n+1}/{\rm{\Delta }}{x}_{4}$.

3.5. General Applicability of RAM

RAM can be implemented in two- and three-dimensional hydrodynamics solvers through dimensional splitting (Stone & Norman 1992; Benítez-Llambay & Masset 2016, Section 3.4) and used to advect along the direction for which the bulk velocity is larger. In this case, an additional limitation to the CFL condition due to relative shear needs to be implemented (see Benítez-Llambay & Masset 2016, Section 3.5.2). In Section 4.2, we show an example of a two-dimensional polar disk in which RAM was implemented through dimensional splitting, using it to solve advection along the azimuthal direction, for which the Keplerian speed dominates the dynamics. RAM is also agnostic to the discretization scheme. It can be used for orbital advection in both finite-difference and finite-volume methods, as done previously with the uniform-mesh FARGO method (Johnson et al. 2008; Stone & Gardiner 2010; Mignone et al. 2012; Lyra et al. 2017). We finally note that if higher-order time integration is needed on multidimensional problems, an extra source term needs to be considered in the hydrodynamics equations (see Shariff & Wray 2018, Equation (33)).

4. Testing RAM

In this section, we present results from one- and two-dimensional test problems, solved with the code FARGO3D (Benítez-Llambay & Masset 2016). To test RAM on nonuniform meshes, we extended the code to allow for nonuniform spacing along the x-direction in Cartesian coordinates and the ϕ-direction in polar coordinates. Additional modifications have been added to ensure mass and momentum conservation on nonuniform meshes. We describe these modifications in Appendix C. Our goal is twofold: first, we aim to characterize the convergence rate of RAM for the interpolation methods described in Section 3.1; second, we want to compare the efficiency of RAM with that of other advection methods. These methods are the so-called standard (STD) and FARGO methods, the latter being the default advection algorithm used along x/ϕ in FARGO3D. STD solves advection through upwind methods using the full velocity (see Benítez-Llambay & Masset 2016, for details), whereas FARGO and RAM solve advection following the splitting described in Section 1. For both FARGO and RAM, additional STD advection steps are required if δv is not zero. In addition, in the case of FARGO, even if δv = 0, an STD step is required when Δt is chosen arbitrarily (as we do in Section 4.1). All the methods utilize the reconstructions discussed in Section 3.1, and we specify which one is being used for each particular test problem (PLM or PPM).

4.1. One-dimensional Advection with Constant Speed

To compare the errors and convergence rates for the different advection methods discussed above, we study the one-dimensional advection of smooth and sharp profiles on uniform and nonuniform meshes. In this test, we consider the transport step of FARGO3D only (see Benítez-Llambay & Masset 2016, for details). This test isolates the advection errors that a general hydrodynamics integration incurs for different advection algorithms. The periodic domain spans the range [−π, π]. For the smooth profile, the initial condition is

Equation (32)

whereas for the square profile it is

Equation (33)

While Equation (32) is not strictly periodic within the domain, the discontinuity produced at the edges in the initial condition is very small and does not introduce any significant error. In the two cases, the advection velocity is v = π, and it is used as the bulk velocity v0 for both FARGO and RAM. We integrate the system for a time t = 2, so the profile moves a full period, returning to its initial position. To test convergence with resolution, we double the number of grid points from 32 up to 512 and estimate truncation errors, e1, through the L1-norm,

Equation (34)

where Qref is a reference solution, which we take to be equal to the initial condition. This is because after one period the exact solution corresponds to the same initial profile (see Equation (7)). We consider two cases, one in which the mesh is uniformly distributed and another where the mesh is distributed according to the periodic bump mesh density function described in Appendix B, with parameters a = π/2, b − a = 0.1, c = 5 (see also Section 3.2.1). This density function enables a smooth transition between two refinement levels, each with uniform resolution. The time step for the integration is the standard CFL condition

Equation (35)

$${\rm{\Delta }}t = \beta\,\min_i\left(\frac{{\rm{\Delta }}x_i}{|v(x_i)|}\right),$$

with β the CFL factor, which is usually smaller than 1 because of stability constraints. We arbitrarily choose β = 0.5 for STD and β = 5.5 for FARGO and RAM, the latter being of the order of the upper bound for the typical speed-up obtained with FARGO in protoplanetary disk simulations.
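For reference, a volume-weighted L1 error of this kind can be computed as follows (one common definition on nonuniform meshes; the exact normalization used in Equation (34) may differ):

```python
import numpy as np

def l1_error(Q, Qref, dx):
    """Volume-weighted mean absolute error between a numerical solution
    and a reference solution, suitable for nonuniform meshes."""
    return np.sum(np.abs(Q - Qref) * dx) / np.sum(dx)
```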

In Figure 2 we summarize the results obtained for this test. We find that, when advecting a smooth profile, truncation errors for all the methods are second order, as they decrease quadratically with N (e1 ∝ N−2), whereas for the sharp profile the errors scale ∝N−3/4, consistent with the reduced convergence expected for a discontinuous profile. For both uniform and nonuniform meshes, the absolute errors of RAM with PLM are slightly larger than errors with PPM, as expected. For the uniform mesh, errors using FARGO or RAM with PPM are identical and decrease with increasing resolution slightly faster than the errors using STD or RAM with PLM; however, the difference in slope is very small. The errors with RAM (PPM) are the truncation error of the PPM interpolation. On the other hand, the errors using FARGO correspond to an additional advection with the residual velocity ${v}_{{\rm{shift}}}^{R}={v}_{0}-n{\rm{\Delta }}x/{\rm{\Delta }}t$, with n the integer number of cells shifted (because the time step does not allow the profile to be shifted by an integer number of cells only). This residual step is solved with the STD advection scheme with PPM reconstruction (see Equation (69) in Benítez-Llambay & Masset 2016). Thus, it is not surprising that FARGO and RAM with PPM produce the same errors.

Figure 2.

Figure 2. Results of the 1D test with RAM on uniform (top panels) and nonuniform (bottom panels) meshes described in Section 4.1. Left and middle panels show the initial condition (black curve) and a simulation snapshot at t = 2 (symbols). The left panel advects a smooth profile on a mesh with 32 grid points. The middle panel advects a square profile on a mesh with 128 grid points. Right panels show the errors, which for smooth profiles converge as N−2 with both PLM and PPM reconstructions.


Finally, the large difference between the absolute errors of STD and RAM/FARGO is due to the significant difference between the time steps chosen for each method. As discussed in Section 1, truncation errors scale with the time step and the bulk velocity v0. This highlights the importance of having a method that allows advection using larger time steps on both uniform and nonuniform domains.

This first test demonstrates the validity of our formulation of RAM. As expected, the precision and convergence properties of RAM are controlled by the interpolation step. It also shows that RAM is equivalent to FARGO on uniform meshes, while being its generalization on nonuniform grids.

4.2. Planet Embedded in a Two-dimensional Polar Disk

In this section, we demonstrate that RAM, combined with the mesh density function, can be used to accelerate hydrodynamics calculations of differentially rotating fluids and decrease numerical diffusion on nonuniform grids. With this purpose in mind, we set up a two-dimensional viscous disk orbiting a central star of mass ${M}_{\star }={M}_{0}$. The disk model corresponds to a steady-state α-disk (Shakura & Sunyaev 1973), in which the gas surface density, Σ, and temperature, T, are power laws in radius with exponents −1/2 and −1, respectively. The disk aspect ratio and viscosity parameters are set to h = 0.05 and α = 10−3. We use polar coordinates, where the disk domain extends radially and azimuthally as $r/{r}_{0}\in \left[0.4,2.1\right]$, $\varphi \in \left[-\pi ,\pi \right]$, respectively. The disk is assumed to be locally isothermal, with a sound speed cs = hvK, where ${v}_{{\rm{K}}}=\sqrt{{{GM}}_{\star }/r}$ is the Keplerian speed and G is the gravitational constant. We include an embedded planet of mass ${m}_{p}={10}^{-3}\,{M}_{\star }$ orbiting the central star on a circular orbit of radius rp/r0 = 1 and modeled as a Plummer potential with a small softening length 3 × 10−3 r0. The planet is inserted at t = 0, and the calculations are stopped after one orbital period of the planet, at $t=2\pi {{\rm{\Omega }}}_{0}^{-1}$, with ${{\rm{\Omega }}}_{0}=\sqrt{{{GM}}_{0}/{r}_{0}^{3}}$ the Keplerian frequency at r0. The hydrodynamics equations are solved in a frame that corotates with the planet, where the planet is fixed at an azimuth ${\varphi }_{p}=0$. We choose a unit system such that G = r0 = M0 = 1. The intent of this test is not to simulate planet−disk interaction realistically but to demonstrate the efficiency of RAM in an astrophysical scenario with high Mach number bulk velocities, high amplitude, and small-scale flow features. We compare the computational cost of RAM with traditional advection schemes.

We run simulations with 45, 90, and 180 cells within the Hill radius ${R}_{{\rm{H}}}={r}_{p}\sqrt[3]{{{Gm}}_{p}/(3{M}_{\star })}$ on both azimuthally uniform and nonuniform meshes. To preserve the resolution per scale height across the disk domain, meshes that are evenly spaced in azimuth are also radially uniform in logarithmic space, which is a common choice for the study of planet−disk interaction. In runs with nonuniform spacing, cells are distributed azimuthally according to the periodic bump mesh density function described in Appendix B, with parameters a = 0.1387, b = 0.2773, and c = 15. In the radial direction, we use the modified bump of Appendix B.2, centered at s0 = 1, with the same a and b parameters but with c = 2.5. In Figure 3, we depict the structured nonuniform mesh generated with these mesh density functions. The white region corresponds to the resolution transition controlled by the difference b − a. This mesh increases the resolution close to the planet in both the radial and azimuthal directions. In addition, because of the logarithmic nature of the radial mesh, the cell size decreases inward. When the azimuthal spacing is uniform, we utilize both FARGO and STD as advection methods (we already showed in Section 4.1 that RAM is equivalent to FARGO on uniform domains). Otherwise, when the azimuthal spacing is nonuniform, advection is solved with STD and RAM. In all cases, we use the faster PLM reconstruction method. As demonstrated in Section 4.1, results with PPM are similar but with lower truncation errors. Because of the small azimuthal size of the cells in the innermost region of the disk and the large bulk speed, a RAM-like method is expected to produce a large speed-up.

Figure 3.

Figure 3. A schematic of the polar mesh utilized in the test described in Section 4.2. The mesh is built in (φ, r) coordinates using the periodic bump and radial bump mesh density functions described in Appendices B.1 and B.2, respectively. The white region corresponds to the resolution transition controlled by the difference b − a.


4.2.1. Theoretical Speed-up of RAM

We analyze the main terms in the CFL stability condition to obtain a first estimate of the expected computational gain using RAM for this particular problem. The time step is computed following Section 3.5 in Benítez-Llambay & Masset (2016). We also account for the strong gravitational acceleration produced by the planet by assuming that rotational balance is reached quickly after the planet is inserted in the disk. In this way, Δt may be limited by the additional local Keplerian speed induced within the Hill radius. This estimate is a lower bound for Δt because, in reality, rotational equilibrium may take longer to become fully established, or pressure support may make the disk rotate more slowly. The expression we use to calculate Δt is

Equation (36)

where ξ = 1/(rΔφ), η = 1/Δr, and vframe = ΩK(rp)r is the azimuthal velocity of the rotating frame. Following the FARGO3D implementation, we define the CFL parameter β = 0.44. The rotational velocity induced by the planet, vp, is given by

Equation (37)

with s = ∣rrp∣ and σ = RH/2. Here the Keplerian speed has been multiplied by an exponential cutoff to make the effect of rotation local.

Equation (36) includes the contribution of the bulk velocity of the disk, vKvframe, and the velocity induced by the planet, vp. As such, for a given mesh, it estimates Δt for STD. To obtain the time step allowed by RAM or FARGO, we set vKvframe = 0. In addition, the maximum speed-up possible with RAM can be calculated after setting vp = 0. In Figure 4, we show the results of Equation (36) when applied to nonuniform (left panels) and uniform (right panels) meshes with 180 cells/RH. For each mesh, the top and bottom panels show the allowed time step in a region that includes both the planet and the inner boundary for STD and for RAM or FARGO, respectively. Since cs increases inward while the cell size decreases (see Figure 3), Δt decreases toward the inner boundary. In addition, because the nonuniform mesh has an azimuthal band in which the azimuthal cell size also decreases with decreasing r, Δt can become even more restrictive. This effect is particularly strong for STD, which also has the contribution of the bulk velocity, vKvframe, which increases rapidly with decreasing r. Another notable feature is the small Δt produced by the high rotational velocity induced by the planet within the high-resolution region around (x, y) = (r0, 0). Whether or not this velocity dominates the time step of the simulation depends exclusively on the location of the inner boundary (the closer it is to the central star, the smaller the time step). When using RAM, the absence of the term proportional to the bulk velocity allows Δt to be larger almost everywhere, except at the planet's orbit, where the bulk velocity is, by construction, zero. Therefore, in the case of RAM, the terms that limit Δt are those associated with cs and vp. The former may limit Δt because cs is large and the cell size small in the inner disk, while the latter may limit Δt because of the small cell size and large induced velocity close to the planet.
In the case in which the mesh is uniform, we also calculate Δt for FARGO and STD. The time step allowed globally by this mesh is almost indistinguishable from the one obtained for the nonuniform case, and the same dependencies of Δt on r are observed. It is also worth noting that, for both meshes, viscosity could strongly limit Δt if either α or the resolution is increased further.
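The time-step analysis above can be sketched numerically. Since Equation (36) is not reproduced here, the snippet below evaluates an assumed, simplified CFL bound on a 1D radial slice of a polar mesh: the combination of terms and the Gaussian form of the exponential cutoff in vp are illustrative assumptions, with β = 0.44 as in FARGO3D.

```python
import numpy as np

def time_step_map(r, dr, dphi, h, G=1.0, M0=1.0, mp=1e-3, rp=1.0,
                  beta=0.44, include_bulk=True):
    """Simplified CFL estimate on a radial slice of a polar mesh.

    Assumed sketch of Equation (36): inv_dt combines the azimuthal
    bulk/planet/sound-speed terms with the radial sound-speed term.
    r, dr: cell-center radii and radial widths; dphi: azimuthal width.
    """
    vK = np.sqrt(G * M0 / r)                 # Keplerian speed
    cs = h * vK                              # locally isothermal sound speed
    vframe = np.sqrt(G * M0 / rp**3) * r     # corotating-frame velocity
    s = np.abs(r - rp)                       # distance to the planet's orbit
    RH = rp * (mp / (3.0 * M0)) ** (1.0 / 3.0)
    sigma = RH / 2.0
    # velocity induced by the planet, with an assumed Gaussian cutoff
    vp = np.sqrt(G * mp / np.maximum(s, 1e-12)) * np.exp(-s**2 / (2 * sigma**2))
    xi = 1.0 / (r * dphi)                    # inverse azimuthal cell size
    eta = 1.0 / dr                           # inverse radial cell size
    v_adv = np.abs(vK - vframe) if include_bulk else 0.0  # bulk term (STD only)
    inv_dt = xi * (v_adv + vp + cs) + eta * cs
    return beta / inv_dt
```

Setting `include_bulk=False` mimics RAM or FARGO, which drop the bulk-velocity term; setting `mp=0` additionally removes the planet term and yields the maximum possible speed-up.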

Figure 4.

Figure 4. Time step obtained with Equation (36) when applied to nonuniform meshes (left panels) and uniform meshes (right panels) with a resolution of 180 cells/RH at the planet location (x, y) = (1, 0). The top panels show the allowed time step for STD, and the bottom panels are for RAM or FARGO.


We now report the maximum allowed Δt obtained for RAM and STD with/without a planet on the nonuniform mesh. For STD with a planet, the maximum time step is ${\rm{\Delta }}{t}_{\mathrm{STD}}\,=5.7\times {10}^{-5}{{\rm{\Omega }}}_{0}^{-1}$, while for RAM we obtain ${\rm{\Delta }}{t}_{\mathrm{RAM}}\,=2.9\times {10}^{-4}{{\rm{\Omega }}}_{0}^{-1}$. This shows that RAM allows a time step that is ∼5 times larger than STD. If we do not consider the planet in this calculation, for STD we obtain ${\rm{\Delta }}{t}_{\mathrm{STD},\mathrm{np}}=5.7\times {10}^{-5}{{\rm{\Omega }}}_{0}^{-1}$ and ${\rm{\Delta }}{t}_{\mathrm{RAM},\mathrm{np}}=7.6\times {10}^{-4}{{\rm{\Omega }}}_{0}^{-1}$ for RAM, which is ∼13 times larger than that of STD.

Finally, when considering a uniform mesh, producing the same resolution of 180 cells/RH requires 16 times more cells than in the nonuniform case, using eight and two times more cells in azimuth and radius, respectively. Given that RAM and FARGO produce the same time step, RAM on a nonuniform mesh is 16 times faster than a FARGO run on a uniform mesh. When comparing RAM on a nonuniform mesh with STD on a uniform one, RAM can be ∼80–208 times faster than STD in a corotating frame, and even faster in noncorotating frames.
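The factors quoted above follow from simple arithmetic on the reported time steps and cell counts; a quick check (all values taken from the text; rounding explains the quoted ∼80–208 range):

```python
# Time steps (in units of Omega_0^{-1}) and cell ratios quoted in the text.
dt_std = 5.7e-5      # STD with planet, nonuniform mesh
dt_ram = 2.9e-4      # RAM with planet, nonuniform mesh
dt_ram_np = 7.6e-4   # RAM without the planet term

cells_ratio = 8 * 2  # uniform mesh: 8x more cells in azimuth, 2x in radius

gain_planet = dt_ram / dt_std        # ~5: time-step gain with planet
gain_no_planet = dt_ram_np / dt_std  # ~13: time-step gain without planet

# RAM on the nonuniform mesh vs. STD on the uniform mesh:
speedup_low = cells_ratio * gain_planet      # ~81
speedup_high = cells_ratio * gain_no_planet  # ~213 (the text's 208 uses the rounded factor 13)
print(gain_planet, gain_no_planet, speedup_low, speedup_high)
```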

4.2.2. Simulation Results

On large scales, the features obtained with different methods/resolutions are indistinguishable from one another. We highlight these features in Figure 5, which shows the surface density, Σ, for the run advected with RAM and 180 cells/RH. In Figure 6, we show snapshots of Σ for the runs with 45 cells/RH (left column) and 180 cells/RH (middle column), for RAM (top panels), FARGO (middle panels), and STD in a nonuniform domain (bottom panels) after one orbit of the planet. In each panel, we zoom in on the circumplanetary region where spirals develop. In addition, in the right panels we plot cuts along the black line drawn in the zoom-in for different resolutions (low/high resolution with blue/orange curves).

Figure 5.

Figure 5. Snapshot of the 2D global run for the RAM method with 180 cells/RH. The nonuniform mesh enables us to resolve the small-scale perturbation near the planet, as well as the spiral wakes propagating in the disk. A zoomed-in version of the plot is shown in Figure 6.

Figure 6.

Figure 6. Snapshots of surface density after one orbit for runs with 45 cells/RH (left panels) and 180 cells/RH (middle panels), for RAM (top panels), FARGO with a uniform mesh (middle panels), and STD in a nonuniform domain (bottom panels), where we also include, for each resolution, one zoom-in panel within the circumplanetary region (the global scale of the runs is shown in Figure 5). Right panels show a cut of surface density along the solid line drawn in the zoom-in for each resolution (blue/orange curve corresponds to the low/high-resolution run). We plot the high-resolution RAM run with solid gray lines to facilitate the comparison between the methods. STD is clearly affected by larger numerical diffusion, whose main effect is to smooth out the spirals and to increase the ambient density within the circumplanetary region. Given that the bulk velocity is zero at the planet's orbit, the extra diffusion for STD is only explained by the smaller time step used and the corresponding larger number of hydrodynamics steps (which translates into a larger number of interpolations/reconstructions before reaching the output time). When comparing the circumplanetary region of RAM and FARGO, we observe a remarkable agreement between the methods, but with RAM being ≳10 times faster.


It is when we look within the circumplanetary region that significant differences between the low- and high-resolution runs are observed. The ambient density within the Hill radius and the density contrast of the spirals are most impacted. Low-resolution runs are affected by larger truncation errors and numerical diffusion; the effect of these errors is to increase the ambient density within RH and smooth out the spirals. This can be recognized by comparing the low-resolution runs of FARGO, RAM, and STD (blue curves in the right panels) with their high-resolution counterparts. We note that the high numerical diffusion of STD within the circumplanetary region is not due to the bulk velocity, because this velocity is small in the corotating frame used. Hence, we conclude that the smaller time steps of STD, and the corresponding larger number of interpolations/reconstructions required to advance the solution up to the final integration time, are the main source of numerical diffusion close to the planet. It is worth noting that RAM on a nonuniform mesh reproduces the sharp features obtained with FARGO. However, the nonuniform mesh utilizes 16 times fewer grid points than FARGO, significantly reducing the computational cost while maintaining the same level of accuracy.

We now report measurements of the time step required by each method; we quote values for the high-resolution run with 180 cells/RH only, but similar results were obtained for the lower-resolution runs. We find that ${\rm{\Delta }}{t}_{\mathrm{STD}}=5.7\times {10}^{-5}{{\rm{\Omega }}}_{0}^{-1}$ and ${\rm{\Delta }}{t}_{\mathrm{RAM}}=3.87\times {10}^{-4}{{\rm{\Omega }}}_{0}^{-1}$. Despite the simplified modeling, these values are in excellent agreement with the estimates described in Section 4.2.1. Ideally, the relative speed-up between methods is given by the ratio ΔtRAMtSTD ≃ 6.9; however, RAM introduces an overhead with respect to STD owing to the velocity splitting (see Section 1) and the additional calculations required to shift the profile (see Section 3). Using fast index calculation (see Section 3.2.1), in our tests we found that this overhead is ∼20%. Therefore, the effective speed-up is ΔtRAMtSTD × 0.8 ≃ 5.5. A similar overhead is introduced by FARGO, but it requires eight times more cells in azimuth to match the resolution of 180 cells/RH. Since we find ΔtRAM ≃ ΔtFARGO, we conclude that FARGO is eight times more expensive than RAM on nonuniform meshes, provided that the radial spacing used for both methods is identical. On uniform meshes like the one presented in Section 4.2, which use two and eight times more cells in radius and azimuth, respectively, FARGO is 16 times more expensive than RAM on the nonuniform mesh.

The problem studied in this section illustrates the main advantages of RAM for studying the dynamics close to a planet at a lower computational cost. Two main results can be highlighted from our analysis. First, the standard approach of using FARGO to simulate the problem on an azimuthally uniform mesh is expensive; interestingly, a cheaper method is to utilize STD on a nonuniform mesh (in both radius and azimuth). Second, our results clearly show the advantage of using nonuniform meshes combined with RAM, which has smaller truncation errors and is significantly cheaper than STD.

5. Summary and Conclusions

In this work, we developed RAM, a method to solve the advection equation that allows larger time steps and lower truncation errors with respect to standard upwind explicit advection algorithms in problems dominated by a large bulk velocity. Our method is based on operator splitting and conservative interpolation with monotonic reconstruction. We provided PLM (van Leer 1974) and PPM (Colella & Woodward 1984) formulae to implement RAM, obtaining a method that can be first- or second-order accurate, but other reconstruction methods could be used. RAM works with arbitrary meshes, but we provide a recipe to generate meshes through mesh density functions that allow faster interpolation. It can be applied to multidimensional problems via dimensional splitting. In this case, an additional time-step limitation arising from local shear needs to be implemented (see Benítez-Llambay & Masset 2016). Higher-order (in time) implementations of RAM require an additional source term (see Shariff & Wray 2018).
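The core splitting that RAM generalizes can be illustrated with a minimal 1D sketch on a uniform periodic mesh. This is not the paper's implementation: it uses first-order donor-cell reconstruction instead of the PLM/PPM conservative interpolation, but it shows how the bulk velocity is split into an exact integer-cell shift plus a residual, with only the residual entering the CFL condition.

```python
import numpy as np

def advect_fargo_like(q, v_bulk, dx, dt):
    """One advection step on a uniform periodic 1D mesh, FARGO-style.

    The bulk velocity is split into an integer number of cells per time
    step (applied exactly as a circular shift) plus a residual advected
    with a standard upwind (donor-cell) step. Only the residual limits
    dt, so dt is not restricted by v_bulk. Illustrative sketch only.
    """
    shift = int(np.round(v_bulk * dt / dx))  # integer-cell part of the motion
    v_res = v_bulk - shift * dx / dt         # residual, |v_res * dt / dx| <= 1/2
    c = v_res * dt / dx                      # residual Courant number
    if c >= 0:
        F = c * q                            # F[i]: flux through interface i+1/2
    else:
        F = c * np.roll(q, -1)
    q_new = q - (F - np.roll(F, 1))          # conservative flux-form update
    return np.roll(q_new, shift)             # exact integer shift (periodic)
```

On a nonuniform mesh the integer shift is no longer a plain array roll, which is where the conservative interpolation of RAM comes in.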

We tested RAM by implementing it on the publicly available code FARGO3D. By advecting sharp and smooth one-dimensional profiles, we measured the convergence rate of our method for different reconstructions and compared them with FARGO and standard upwind advection on uniform and nonuniform meshes. Smooth profiles converged faster than sharp profiles for all the methods. In addition, this test demonstrated that FARGO and RAM give identical results on uniform meshes. On nonuniform meshes, RAM converges toward the right solution while being cheaper and more precise than standard advection.

Motivated by the long-standing problem of planet−disk interaction, we also tested RAM in a two-dimensional polar disk with very high resolution within the Hill radius of an embedded planet. For this problem, we found that RAM is ∼5.5 times faster than standard advection on nonuniform meshes. When comparing RAM on a nonuniform mesh with respect to FARGO on a uniform one with the same effective resolution within the Hill radius, we found that RAM required 1/16th the computational cost of FARGO. We emphasize that the speed-up or computational cost reduction obtained with RAM is problem dependent. Since RAM and FARGO allow the same time step (for the same effective resolution), the reduction in computational resources required scales with the ratio between the number of grid points used (i.e., computational overhead).

While in this paper we have worked with meshes generated by a mesh density function, RAM can be applied to arbitrary meshes, including nested meshes, or under the framework of adaptive mesh refinement. The utility of one mesh-generating method over another is surely problem dependent.

We conclude that RAM reduces the computational cost of hydrodynamics calculations significantly while decreasing truncation errors in problems dominated by a large bulk velocity. This, combined with the possibility of applying the method to nonuniform meshes, allows the numerical cost of hydrodynamics to be decreased by more than one order of magnitude. The actual value depends on the bulk velocity. This method may impact several fields concerning studies of the so-called disk−satellite problem. In particular, it can be used to speed up two- and three-dimensional calculations of planet−disk interaction in problems that require both global disks and high resolution within the Hill radius of the planet and planetary orbit.

Acknowledgments

We thank the anonymous referee, whose comments helped us improve this manuscript. We thank Frédéric Masset for a thorough reading of an early version of this manuscript. P.B.L. acknowledges support from ANID, QUIMAL fund ASTRO21-0039, and FONDECYT project 1231205. L.K. and K.M.K. acknowledge support from the Heising-Simons 51 Pegasi b postdoctoral fellowship and TCAN grant 80NSSC19K0639. X.S.R. acknowledges support from ANID, Millennium Science Initiative, ICN12_009, and the Independent Research Fund Denmark via grant No. DFF 8021-00400B.

Appendix A: PPM

A.1. Interface Interpolation for the PPM Method

Following Colella & Woodward (1984), the interface values $\langle Q{\rangle }_{i+\tfrac{1}{2}}$ are calculated as

Equation (A1)

with

Equation (A2)

and

Equation (A3)

A.2. PPM Monotonicity Constraint

To apply the monotonicity constraint, δQi is first modified in the calculation of $\langle Q{\rangle }_{i+\tfrac{1}{2}}$ as

With updated values of $\langle Q{\rangle }_{i+\tfrac{1}{2}}$, the coefficients QR,i and QL,i are obtained as

Equation (A4)

Equation (A5)

Equation (A6)

with ${Q}_{a,i}=\langle Q{\rangle }_{i}-\tfrac{1}{2}\left({Q}_{L,i}+{Q}_{R,i}\right)$.
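For reference, in the uniform-mesh limit Equations (A1)–(A3) reduce to the classic fourth-order interface formula of Colella & Woodward (1984). A sketch with the standard monotonized central slope, assuming periodic boundaries for simplicity:

```python
import numpy as np

def ppm_interface_uniform(q):
    """Interface values <Q>_{i+1/2} on a uniform periodic mesh.

    Uniform-spacing limit of the Colella & Woodward (1984) interpolation;
    the general nonuniform-mesh coefficients of Equations (A1)-(A3)
    reduce to this form when all cells have equal width.
    """
    qp = np.roll(q, -1)    # q_{i+1}
    qm = np.roll(q, 1)     # q_{i-1}
    dq = 0.5 * (qp - qm)   # centered slope delta-Q_i
    # monotonized slope: zero at extrema, limited to twice the local jumps
    dqlim = np.where(
        (qp - q) * (q - qm) > 0,
        np.sign(dq) * np.minimum(np.abs(dq),
                                 2 * np.minimum(np.abs(qp - q), np.abs(q - qm))),
        0.0)
    # fourth-order interface value for uniform spacing
    return q + 0.5 * (qp - q) - (np.roll(dqlim, -1) - dqlim) / 6.0
```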

Appendix B: Mesh Density Functions

B.1. Periodic Bump

In this work, we restrict our analysis to functions F with known analytical expressions for the integrals and use the bisection method to solve Equation (27), or the expression of the inverse when it is easy to find.

We define the function ψ(s) as

Equation (B1)

with ${x}_{\min }\leqslant s\leqslant {x}_{\max }$. To satisfy the condition given by Equation (25), we normalize ψ(s) by Iψ , which is obtained by integrating the density function over the domain $\left[{x}_{\min },{x}_{\max }\right]$, that is,

Equation (B2)

Thus, the desired mesh density function corresponds to

Equation (B3)

This function generates a mesh with two levels of uniform resolution and includes a smooth transition between the two levels. The value of the parameter a sets the extent of the highest-resolution level. The parameter b sets the location of the transition between two levels, and therefore ∣b∣ > ∣a∣. The extent of the transition is given by ba. The value of the parameter c sets the relative resolution between the two levels.

For this case, the cumulative $F(x)={\int }_{-\pi }^{x}\bar{\psi }(s){ds}$, which corresponds to the definition of the coordinate u(x), is

Equation (B4)

with the functions f, g±, h defined as

Equation (B5)

Equation (B6)

Equation (B7)

In Figure 7 we plot an example of a mesh density function with $s\in \left[-\pi ,\pi \right]$. The parameters are defined to provide a ∼5 × resolution contrast between the two levels. In the right panel of Figure 7 we include the cumulative distribution for a mesh with N = 64 points. Finally, we note that the piecewise density function proposed here can be modified to allow for arbitrary levels of resolution with continuous transitions.

Figure 7.

Figure 7. Example of the mesh density functions. The left panel corresponds to the periodic bump function with a = 1.0, b = 1.1, and c = 9. In the right panel we plot the cumulative function, F(x), as a function of the nonuniform coordinate x for a mesh with N = 64 grid points. This mesh was used in the test described in Section 4.1.

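A generic numerical version of this recipe, tabulating the normalized cumulative F and inverting F(x_j) = j/N by bisection as described above, might look as follows. The Gaussian bump in the usage test is an illustrative stand-in, not the exact periodic bump of Equation (B1):

```python
import numpy as np

def mesh_from_density(psi, xmin, xmax, N, tol=1e-12):
    """Generate N+1 cell edges from a positive mesh density function psi(s).

    F(x) is the cumulative of psi normalized to 1 over [xmin, xmax];
    edge j solves F(x_j) = j/N, found by bisection. Sketch only: for the
    analytic density functions of Appendix B, F and its inverse can be
    written in closed form instead of tabulated.
    """
    fine = np.linspace(xmin, xmax, 20001)
    w = psi(fine)
    # trapezoidal cumulative integral of psi, normalized so F(xmax) = 1
    F_tab = np.concatenate([[0.0],
                            np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(fine))])
    F_tab /= F_tab[-1]

    def F(x):
        return np.interp(x, fine, F_tab)

    edges = np.empty(N + 1)
    edges[0], edges[-1] = xmin, xmax
    for j in range(1, N):
        target, lo, hi = j / N, xmin, xmax
        while hi - lo > tol:          # bisection on F(x) = j/N
            mid = 0.5 * (lo + hi)
            if F(mid) < target:
                lo = mid
            else:
                hi = mid
        edges[j] = 0.5 * (lo + hi)
    return edges
```

Cells come out small where psi is large, so a bump centered at s = 0 concentrates resolution there.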

B.2. Radial Bump

In Section 4.2 we utilize a mesh density function in the radial direction centered at s0 = 1 and defined as

Equation (B8)

with ${I}_{\psi }=\mathrm{log}({x}_{\max })-\mathrm{log}({x}_{\min })+c(a+b)$ (see Equation (B2)). Note that when the parameter c = 0 we obtain a log-uniform mesh density.

Appendix C: Momentum and Mass Conservation in Nonuniform Meshes

Here we describe the modifications that need to be made to the standard algorithms described in Benítez-Llambay & Masset (2016) to conserve mass and momentum to machine precision on nonuniform meshes. Let us consider a 3D Cartesian mesh with coordinates x, y, z and corresponding cell-center indices i, j, k. The momentum along the y-direction is defined as

Equation (C1)

where ${v}_{y,j-\tfrac{1}{2}}$ is the velocity along the y-direction at the cell interface and ρj is the density at the cell center.

The mass of a cell corresponds to

Equation (C2)

where Vijk is the volume of the grid cell. Conservation of momentum and mass implies that ${\sum }_{j}{{\rm{\Pi }}}_{y,{ijk}}^{n+1}{V}_{{ijk}}={\sum }_{j}{{\rm{\Pi }}}_{y,{ijk}}^{n}{V}_{{ijk}}$ and ${\sum }_{k}{\sum }_{j}{\sum }_{i}{M}_{{ijk}}^{n+1}={\sum }_{k}{\sum }_{j}{\sum }_{i}{M}_{{ijk}}^{n}$, respectively. Updates in density and velocity must be consistent with these conservation conditions. Therefore, we revisit the updates performed during the source and transport steps defined in Benítez-Llambay & Masset (2016).

C.1. Source Step

Consider a 1D pressure gradient along the y-direction. The velocity is updated as

Equation (C3)

where n+s indicates a time update from step n and ${\left\langle {\rho }^{n}\right\rangle }_{{ij}-\tfrac{1}{2}k}$ is a spatially averaged density (see Benítez-Llambay & Masset 2016). It can be shown that momentum is conserved on uniform and nonuniform grids if the average density is defined as

Equation (C4)

The same holds for cylindrical and spherical coordinates. Note that the same update needs to be done in substep2 for artificial viscosity calculations.

C.2. Transport Step

Consistency in the transport step is crucial for the conservation of mass and momentum. In nonuniform meshes, Equation (50) in Benítez-Llambay & Masset (2016) must be updated to

Equation (C5)

The same must be done for vx , vz .
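The conservation property at stake can be checked with a minimal 1D sketch: a flux-form (donor-cell) transport step on a periodic nonuniform mesh conserves Σ ρiΔxi to machine precision for any spacing, because each interface flux is added to one cell and subtracted from its neighbor. This illustrates the principle only; it is not the FARGO3D implementation of Equation (C5):

```python
import numpy as np

def transport_step(rho, v_face, dx, dt):
    """Flux-form donor-cell transport on a periodic nonuniform 1D mesh.

    rho: cell-centered density; dx: cell widths; v_face[i]: velocity at
    interface i-1/2. Each interface flux appears with opposite signs in
    the two adjacent cells, so sum(rho * dx) is conserved to roundoff
    regardless of the spacing.
    """
    upwind = np.where(v_face > 0, np.roll(rho, 1), rho)  # donor cell at i-1/2
    F = v_face * upwind                                   # mass flux through i-1/2
    return rho - dt * (np.roll(F, -1) - F) / dx           # divergence of fluxes
```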

Footnotes

  • 5  

    FARGO, or Fast Advection for Rotating Gaseous Objects, is an advection algorithm that shares its name with the original hydrodynamics code in which it was implemented. Here, when we refer to FARGO, we mean the algorithm, rather than the namesake code.

  • 6  

    Enabled in FARGO3D through the option -DSTANDARD in the .opt configuration file.

  • 7  

    It is worth noting that for this test β could be 1 and none of the methods would introduce truncation errors on uniform meshes. This is because the signal would be advected exactly one cell per time step and the conservative reconstruction used for all the methods (including STD) results in a shift that is exact to machine precision. In addition, for this particular time step the solution is not sensitive to the reconstruction method used or its order. Therefore, one could also use the STD method combined with operator splitting (as done by FARGO) to advance the solution in a subcycle of time steps Δx/v0, and, at the end of the cycle, advance the solution using the remainder, which would result in a significant speed-up and reduction of truncation errors.

  • 8  

    The disk model is a slightly modified version of the publicly available setup fargo.
