Verification methods for drift–diffusion reaction models for plasma simulations

Compared to other areas of computational physics, such as codes for general computational fluid dynamics, the documentation of verification methods for plasma fluid codes remains underdeveloped. Existing analytical solutions for plasmas are often of limited use for testing highly coupled physics, owing to the harsh assumptions needed to derive even simple plasma equations. This work highlights these limitations and suggests the method of manufactured solutions (MMS) as a potential option for future verification efforts. To demonstrate the flexibility of MMS in verifying these highly coupled systems, the Multiphysics Object-Oriented Simulation Environment (MOOSE) framework was utilized. Thanks to the MOOSE framework's robustness and modularity, as well as to its physics module capabilities and ecosystem applications (i.e. Zapdos and the chemical reaction network) developed for plasma physics modeling and simulation, this report lays the groundwork for a structured method of conducting plasma fluid code verification.


Introduction
Verification and validation (V&V) are the backbone of any computational physics code. Those two principles help assure users that the tools they work with are reliable and can guarantee a certain amount of accuracy under a given set of physical conditions. For the low-temperature plasma (LTP) computational community, code validation is a commonly utilized and well-understood process. While there are still challenges for plasma validation, such as the uncertainty of input data (cross sections, reaction networks, material properties, etc), the LTP community has made great strides in advancing its validation capabilities, thanks to various reference sources for experimental validation (e.g. the Gaseous Electronics Conference radio frequency [RF] reference cell and the COST Reference Microplasma Jet) [1,2]. However, despite recently receiving increased attention [3-6], the computational LTP community lacks a formal approach to verification of the type employed in similar computational fields [7-11]. This lack of commonly used, structured verification methods will become an increasingly significant issue as more open-source codes continue to be developed [12-15] in order to keep up with the heightened importance of LTP in established and emerging fields (e.g. microchip manufacturing, biomedical applications, and water treatment, to name a few [16-18]). The aim of this work is not necessarily to establish a fully structured verification method for LTP codes (such guidelines would require input from leaders and community members in the field of computational LTP), but rather to lay the necessary groundwork for launching such an undertaking. To begin, two previous V&V guideline documents, developed by the American Institute of Aeronautics and Astronautics (AIAA) and Sandia National Laboratories (SNL) [7,9], were heavily relied on.
There are two reasons why these guidelines were followed more closely than those from other computational fields: (1) the AIAA guidelines are considered the gold standard in V&V methodologies, based on the numerous works that use them as a template for computational fluid dynamics (CFD) [8-11,19], and (2) SNL was able to improve on those standards, mostly by rigorously defining the different areas of verification.
To better understand the importance of V&V and why the LTP community should have its own verification guidelines, one must understand the differences between the two principles. Based on the definitions provided by the AIAA [7]:

• Validation is the process of determining the degree to which a model accurately represents the real world, from the perspective of the model's intended uses.

• Verification is the process of determining that a model implementation accurately represents the developer's conceptual description of, and solution to, the model.
While validation is the final act of ensuring that a code is ready for use in predictive modeling, without a comprehensive verification study it becomes increasingly difficult to reduce the number of potential code inaccuracies. Without verification, one cannot be certain whether identified errors are due to a lack of physical representation in the model, or to computational errors and limitations. One form of pseudo-verification used throughout the LTP community (but also one whose usefulness is potentially diminished without more rigid verification) is benchmarking. In this paper, benchmarking refers to comparing two or more codes applied to a given problem or physical scenario [20-22]. For the LTP community, these problems often involve highly coupled partial differential equations (PDEs), and the solutions are usually unknown. The core idea behind benchmarking is that if two or more codes produce identical, or even similar, solutions, the codes are deemed correct and valid. A few difficulties can potentially arise when comparing codes without exact solutions. First, the scope of the benchmarking entails limited operational conditions, and thus limited utility outside of those conditions when compared to more stringent verification methods. Not only are the physical conditions being compared, but so are the conditions related to computational approaches (e.g. time steppers, mesh size, and variable order). Secondly, when a difference between the codes does exist (as is very often the case), the question of which code, and which component of that code, is at fault can become tricky to answer. Thirdly, even if multiple codes produce similar results, that does not necessarily mean that all the codes are error free. Depending on the development history of the codes in question and the method by which the benchmarking study was conducted, unintended errors shared by the codes could propagate.
That is, the codes in the benchmarking study could produce similar results because they share similar errors. These concerns can be mitigated via a standard verification procedure that involves software quality checks, comparisons to exact solutions, and convergence testing. A more detailed overview of the proposed verification methods is given in section 3.
Regarding verification processes in plasma physics, this work focuses on the drift-diffusion reaction model as the physics model of choice. The drift-diffusion reaction model is commonly studied and implemented in plasma simulations [13,20,23,24]. It requires fewer assumptions than global models, and is less computationally expensive than molecular dynamics or particle-in-cell models. The drift-diffusion reaction model also closely resembles the structure of the PDEs that describe the physics of CFD codes, as they themselves are advection-diffusion equations. This is of relevance because the AIAA and SNL guidelines were developed with CFD in mind.
The Multiphysics Object-Oriented Simulation Environment (MOOSE) framework was chosen as the computational platform. The MOOSE framework is an open-source, multiphysics code platform written in C++ and capable of implementing both finite element and finite volume methods, utilizing the Portable, Extensible Toolkit for Scientific Computation (PETSc) [25] and libMesh [26] as the underlying solver and finite element library, respectively [27]. Additional information on MOOSE can be found in the appendix. Within the wider MOOSE application ecosystem, the physics of the drift-diffusion reaction model is divided between two applications: Zapdos for the flux and electrostatic equations [12,23], and the chemical reaction network (CRANE) [28,29] for the chemical reaction equations. These two applications can be further coupled with the MOOSE electromagnetics module for non-electrostatic cases, and with the MOOSE Navier-Stokes module for advection-driven cases caused by background fluid flow.
It should be noted that though this work focuses on the applications within the MOOSE ecosystem, the methods and practices of verification demonstrated here are applicable to any code using the drift-diffusion reaction model. By using the MOOSE ecosystem as a test bed for the verification process of plasma simulations, the authors hope that this work will act as a jumping-off point for further discussion and standardization of verification within the LTP community, bringing it in line with similar computational communities.

Plasma fluid application: Zapdos
Zapdos is a MOOSE-based application designed for simulating multi-fluid plasmas. It relies on the drift-diffusion approximation, which assumes that collective behaviors dominate over single-particle interactions. This results in plasma species (e.g. electrons, ions, and metastables) being treated as separate fluids. For a non-magnetized plasma, the drift-diffusion equations are as follows:

∂n_j/∂t + ∇ · Γ_j = S_j (1)

Γ_j = sign_j µ_j n_j E − D_j ∇n_j (2)

where n is the density, the subscript j represents a given fluid species (e.g. electron, ion, or metastable), sign_j indicates the advection behavior (+1 for positively charged species, −1 for negatively charged species, and 0 for neutrals), Γ is the flux of the species, S is the source term, µ is the mobility coefficient, D is the diffusivity coefficient, and E is the electric field. The source term consists of a reaction network, discussed in more detail in section 2.2. Within Zapdos, the electric field is resolved by assuming the electrostatic approximation, such that:

E = −∇V (3)

∇²V = −(e/ε_0) Σ_j sign_j n_j (4)

where V is the electric potential, ε_0 is the permittivity of free space, and e is the elementary charge. To account for energy dependence of the electron transport coefficients and reaction rates, Zapdos also solves a mean-electron-energy equation, in which ϵ is the mean electron energy, m_e is the electron mass, m_g is the mass of the background gas, n_g is the background gas density, k_elastic is the elastic rate coefficient, T_e is the electron temperature (i.e. T_e = (2/3)ϵ), E_j is the threshold energy of inelastic collisions, and K_j is the inelastic rate coefficient. Using the mean electron energy, Zapdos interpolates the electron transport coefficients from a matrix precalculated by a Boltzmann solver such as BOLSIG+ [30].

Chemistry application: CRANE
CRANE is the chemistry solver used to calculate the source term in equation (1). CRANE solves reaction sets by using either rate or Townsend coefficients, with source terms of the generic forms:

S_i = Σ_r v_{i,r} k_r ∏_j n_j (rate coefficient form)

S_i = Σ_r v_{i,r} α_r |Γ_e| (Townsend coefficient form)

where v is the stoichiometric coefficient, k is the rate coefficient, n_j is the density of the reactant species j, and α is the Townsend coefficient. Energy-dependent coefficients can either be handled by the user entering an energy-dependent function, or by precalculating them using a Boltzmann solver.

MOOSE electromagnetics module
The MOOSE electromagnetics module provides objects, components, and infrastructure to enable the modeling and simulation of electromagnetic wave problems within the MOOSE ecosystem in 1D and 2D domains [31,32]. Because it is based on the MOOSE framework, it also facilitates the coupling of electromagnetic simulations to other physical domains and models. The base equations of the module rely on Maxwell's equations, the curl relations of which are given here in differential form as:

∇ × E = −∂B/∂t (11)

∇ × H = J + ∂D/∂t (12)

together with ∇ · D = ρ and ∇ · B = 0, where E is the electric field intensity, D is the electric flux density, H is the magnetic field intensity, B is the magnetic flux density, J is the electric current density, and ρ is the electric charge density. Maxwell's equations can be made definite through the following constitutive relations, which tie the field quantities to the properties of the medium:

D = εE (13)

B = µH (14)

J = σE (15)

where ε is the electric permittivity, µ is the magnetic permeability, and σ is the electrical conductivity.
To obtain the wave equation form of Maxwell's equations that is predominantly used in the module, the curl of equations (11) and (12) can be taken and, utilizing the constitutive relations of equations (13)-(15), a time-domain wave equation of the generic form of equation (16) can be derived for either the electric or the magnetic field, where the further assumption has been made that the medium properties are constant. For time-harmonic analysis in the frequency domain, it is assumed within the module that the fields can be represented by a Fourier series with exp(jωt) time dependence, where j = √−1 and ω = 2πf (with f being the operating frequency). The generic wave equation given by equation (16) is then modified and represented in the module code accordingly.
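The time-harmonic convention described above can be illustrated with a short numerical sketch: a complex phasor amplitude F̃ yields the instantaneous field value via F(t) = Re{F̃ exp(jωt)}. The frequency and phasor values below are arbitrary choices for illustration only:

```python
import cmath
import math

f = 1.0e9                          # operating frequency, Hz (illustrative value)
omega = 2.0 * math.pi * f

# Complex phasor amplitude: unit magnitude, 30 degree phase (illustrative values)
F_tilde = cmath.rect(1.0, math.radians(30.0))

def instantaneous(t):
    # Time-harmonic convention: F(t) = Re{F_tilde * exp(j*omega*t)}
    return (F_tilde * cmath.exp(1j * omega * t)).real

print(instantaneous(0.0))          # Re{F_tilde} = cos(30 deg)
```

In the module itself, the real and imaginary parts of such phasors are carried as separate finite element variables, as noted below.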
The properties of the physical medium are entered into the module by utilizing the MOOSE material system, and complex vector field components (both real and imaginary) are represented as individual finite element variables, approximated using first-order Nédélec basis functions [33]. Boundary and interface conditions related to other aspects of basic wave propagation and electrostatic contact are also available [32].

Software quality
According to SNL, the verification methodology can be divided into three components: software quality engineering (SQE), numerical algorithm verification, and solution verification [9]. With respect to what is being tested, SQE tests whether the code is reliable on different hardware and in different software environments, numerical algorithm verification tests whether the code is correctly implementing the required numerical algorithms, and solution verification tests whether the code is accurately solving the system of PDEs. Though this work does not put a great deal of focus on SQE as compared to numerical algorithm verification and solution verification, SQE is extremely important, and the approach used by the MOOSE ecosystem in order to maintain good SQE standards will be mentioned briefly in this section. As multiphysics codes grow, it is important to house documentation on established models and verification tests, as well as to ensure that code additions do not unintentionally negatively affect previously verified components. To address these concerns, MOOSE utilizes a custom system for in-code documentation and website generation (MooseDocs) along with the custom Continuous Integration, Verification, Enhancement, and Testing (CIVET) tool, which allows for multi-platform, multi-architecture testing to enable the introduction of new code capabilities or fixes [34]. Zapdos also uses the GitHub Pages system to host the code documentation website, alongside the code repository [35]. Once new code is ready to be added to MOOSE or to one of its tracked applications, the required testing of that capability is added to the test suite in order to avoid code regressions. These tests are intended to be short in terms of computational time (i.e. on the order of seconds), and in the case of Zapdos, are usually derived from longer simulation cases within the V&V process. 
The proposed update to MOOSE or to a MOOSE-based application then uses CIVET to run all existing tests on all compatible operating systems and architectures before the merge of the proposed updates into the production code base is accepted or rejected. SQE is used not only to improve code quality, but also to reduce errors and poor practices that can propagate due to inconsistent code formatting, and to flag incomplete documentation. In addition to running the tests previously mentioned, additional checks are implemented within CIVET and MooseDocs to enforce uniform code formatting and to detect gaps in the documentation. During code development, users are also encouraged to use additional tools, such as the MOOSE language extension for the VSCode text editor, to help pass the MOOSE GitHub CI pre-checks and reduce time spent on reformatting.

Analytical solutions and the method of manufactured solutions (MMS)
For both numerical algorithm verification and solution verification, highly accurate solutions are needed. Within numerical algorithm verification, these are used for comparison purposes to ensure that the numerical algorithms are coded correctly. This is similar to the benchmarking method mentioned in section 1, but without the uncertainty of comparing unknown solutions. Within solution verification, convergence studies are often performed to test whether the PDEs are being solved correctly for a given discretization in space or time. The process for these convergence studies depends on the accuracy of the available known solution (as discussed in more detail in section 3.3). SNL categorizes highly accurate solutions into four areas: analytical solutions, MMS, highly accurate numerical solutions to ordinary differential equations (ODEs), and highly accurate numerical solutions to PDEs [9]. In this work, only analytical solutions and MMS are discussed in the context of plasma simulation.
Analytical solutions and MMS serve as verification methods for a wide range of physics. Both methods verify a developed code in the following manner, which involves the utilization of a known solution:

(i) Input the user-defined parameters and boundary conditions required for the known solution.

(ii) Compare the calculated solution to the known solution (usually via some type of error metric).
Though both methods instill confidence that the code can perform as expected, they diverge in terms of how the known solutions are obtained.
For analytical solutions, the known solution is constructed by solving the given set of equations in accordance with known initial and boundary conditions. This leads to a well-defined solution within the domain of the provided assumptions. The LTP community has sets of analytical solutions for single-phenomenon cases, and even for some limited coupled cases as well [36,37]. The disadvantage of this method is that the assumptions required to obtain an analytical solution often limit the physics being tested. One workaround is to apply many different simple analytical solutions over multiple simulations in order to individually test many different parts of the code, then assume that if all the simple solutions are acceptable, the highly coupled ones will be as well. In actuality, this may not be the case, as unintentional errors can still propagate (e.g. when combinations of different discretization methods lead to non-ideal convergence [10]), so special care still needs to be taken.
To test these highly coupled systems, for which analytical solutions may be unavailable, MMS can be used. This method has been rigorously developed and tested within the CFD community [38,39]. Within the plasma community, MMS has been used for fusion plasma applications, though it has been underutilized for LTP [40,41]. Unlike an analytical solution that imposes known initial or boundary conditions, MMS starts with a known solution devised by the developer. This known solution is entered into the system of PDEs being solved, enabling determination of the initial/boundary conditions and any other input parameters required for the simulation (e.g. input coefficients). This verification method minimizes the number of assumptions applied within the model, but requires a detailed construction of the known solution and the input coefficients in order to ensure that no term in the tested equation set is neglected. This does not mean that the solution must be physically accurate; however, it should exercise all terms of the equations in question. For example, in an advection-diffusion problem, the solution's advection and diffusion components should be of similar magnitudes. Thus, trigonometric functions are used to ensure that non-trivial derivatives exist in the solution set. And despite there being no requirement that the manufactured solutions be physically accurate, if the solutions have physical traits that software attempts to preserve, these traits should be preserved within the manufactured solutions. For example, Zapdos ensures positive density profiles by solving the density variable in a logarithmic form (i.e. N j = ln(n j ), where N j is the solution variable within the solver and n j is the number density for species j). Since the code preserves such physical traits, when MMS is employed to test electron density in an RF-driven plasma, the solution function should exhibit nonnegative values and feature an oscillation proportional to the drive frequency. 
Section 4.2.1 outlines the formulation of the known solutions for a given MMS test. To create a 'perfect' solution to the PDE by using MMS, an extra forcing function term (i.e. an artificial source term) is often required to ensure that the tested set of equations generates the unique desired solution. This source term is added to the code for testing purposes, which might be intrusive. For modular codes or in cases in which one may easily include additional source terms, this additional requirement becomes trivial and nonintrusive. To derive this additional source term for complicated PDEs (for which generating solutions via pen and paper could prove tedious), a symbolic mathematics system may be used. For example, MOOSE ecosystem codes often use the Python module SymPy [42].
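As an illustration of this workflow, the following SymPy sketch derives the forcing function for a 1D drift-diffusion equation with constant coefficients. The manufactured solution and symbols below are arbitrary choices for demonstration (note that the solution is smooth, strictly positive, and non-trivial in both space and time, per the guidance above); they are not the functions used in later sections:

```python
import sympy as sp

x, t = sp.symbols('x t')
mu, D, E = sp.symbols('mu D E', positive=True)   # constant coefficients (assumed)

# Manufactured solution: smooth, strictly positive, non-trivial in space and time
n = sp.exp(sp.sin(sp.pi * x)) * (2 + sp.cos(t))

# Residual of the 1D drift-diffusion equation dn/dt + d/dx(mu*E*n - D*dn/dx) = Q;
# the forcing function Q must be added as a source so that n solves the PDE exactly
Q = sp.simplify(sp.diff(n, t) + sp.diff(mu * E * n - D * sp.diff(n, x), x))
print(Q)
```

The printed expression for Q, evaluated with the chosen coefficients, is what would be supplied to the code under test as the artificial source term.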

Spatial and temporal convergence studies
Once a highly accurate solution is chosen and the calculation errors are made sufficiently low for a given set of computational parameters (e.g. time steppers, mesh size, and iteration/convergence tolerances), the question often arises as to whether the code is correctly solving the set of equations, or just solving it within the bounds of the tested set of computational parameters. Convergence studies can help address this concern. When performing a convergence study for a known spatial or time discretization scheme, there is typically a power-law relationship (i.e. linear on a log-log scale) between the error of the approximated solution and the element size or time step [9,10,39,43]. This relationship can be generalized in the forms of:

e = C h^(p_spatial),  e = C ∆t^(p_temporal)

where e is the error, h is the element size, ∆t is the time step, C is a constant, and p_spatial and p_temporal are the orders of convergence for the spatial and temporal convergence studies, respectively. Common errors usually associated with convergence studies are the L2 and L∞ errors. These errors can be defined as:

‖e‖_2 = [ (1/N) Σ_{i=1}^{N} (u_i − u_exact,i)² ]^(1/2),  ‖e‖_∞ = max_i |u_i − u_exact,i|

where u_exact is the exact solution, u is the approximated solution for element size h and time step ∆t, N is the total number of elements, i is a single element out of N elements, and ‖e‖_2 and ‖e‖_∞ represent the L2 and L∞ errors, respectively. The formulation of the convergence rate, p, can depend on several factors (such as the type of error, uniform vs. non-uniform mesh refinement, the type of spatial or temporal discretization, etc). For this work, all cases used the second-order backward differentiation formula (BDF2) for temporal discretization, and the Galerkin method for spatial discretization with first-order Lagrange scalar finite element basis functions, unless stated otherwise. For uniform refinement, both the temporal and spatial discretizations should have an ideal convergence rate of 2 for both the L2 and L∞ errors [43]. Besides the known convergence rate, two other criteria must be met.
First, the known solutions for the PDE should be smooth. Second, the mesh resolution and time step should be within the range in which the leading-order error term dominates the total discretization error [39]. This second criterion implies two conditions: that the solutions are in the region of asymptotic power convergence, and that the error caused by floating-point precision contributes negligibly to the total error. If these criteria are met, any significant divergence from the characteristic convergence rate could be the result of an error in how the model is implemented within the code. It should be noted that additional effects, such as mixed discretization order at boundaries and unwanted changes in mesh resolution, can also cause divergence from the ideal convergence rate. A common method for convergence testing that is not mentioned in later sections, but that should still be addressed, is Richardson extrapolation. Instead of needing a highly accurate solution, such as an analytical or manufactured solution, Richardson extrapolation can approximate the exact solution by estimating a higher-order solution from the solution sets of two different mesh resolutions, in the form of:

u_RE = u_2 + (u_2 − u_1) / ((h_1/h_2)^p − 1)

where u_RE is the higher-order solution using Richardson extrapolation, h is the element size, u is the lower-order solution, p is the order of convergence, and the subscripts 1 and 2 represent the cases for the coarse and refined meshes, respectively. While Richardson extrapolation is a great tool for confirming that a code's discretization scheme is implemented correctly, it does not help ensure that the model of choice is implemented correctly. To demonstrate this, figure 1 shows the spatial convergence for the steady-state form of equation (2) for Richardson extrapolation and MMS solutions. Figure 1(a) has the correct formulation of equation (2), while figure 1(b) introduces a sign error in the diffusion term.
While both methods of verification show similar slopes for the correct formulation, Richardson extrapolation cannot capture the incorrect implementation of the model, since the discretization scheme is the same for both cases.
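The observed-order and Richardson-extrapolation calculations described above can be sketched numerically. The example below uses a second-order central difference as a stand-in for any second-order discretization, recovering the observed order p from two resolutions and showing that the extrapolated value improves on the refined solution:

```python
import math

def second_derivative(f, x, h):
    # Central difference approximation of f''(x): O(h^2) accurate
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

x0 = 0.5
exact = -math.sin(x0)          # d^2/dx^2 sin(x) = -sin(x)

h1, h2 = 0.1, 0.05             # coarse and refined "meshes"
u1 = second_derivative(math.sin, x0, h1)
u2 = second_derivative(math.sin, x0, h2)

# Observed order of convergence from the two error samples
p = math.log(abs(u1 - exact) / abs(u2 - exact)) / math.log(h1 / h2)

# Richardson extrapolation: a higher-order estimate without the exact answer
u_re = u2 + (u2 - u1) / ((h1 / h2) ** p - 1.0)

print(f"observed order p = {p:.3f}")
print(f"|u2 - exact|   = {abs(u2 - exact):.2e}")
print(f"|u_re - exact| = {abs(u_re - exact):.2e}")
```

The observed order comes out close to the ideal value of 2, and the extrapolated error is orders of magnitude below the refined-mesh error; a sign error in the scheme itself, however, would go undetected by the extrapolation alone, as discussed above.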

Analytical solutions
In many cases of deriving analytical solutions for plasma physics, severe assumptions are needed even for simple cases.
While more complex models can approach these analytical solutions when given input conditions that approximate these assumptions, the error introduced by those approximations leads to the non-ideal convergence rates mentioned in section 3.3. Though analytical solutions for LTPs are limited, owing to assumptions that often heavily restrict one or more variables within a highly coupled system, they can be a great choice for unit tests. Unit tests are simplified problems that decouple the physics in order to verify different components of the software [10]. The following sections do not show all possible sets of analytical solutions for LTPs, but rather how these sets of solutions may be used for unit tests, and what their limitations are.

CRANE.
The analytical solutions that exist for CRANE demonstrate both time-dependent single- and multi-body reactions. The first case is a three-species system coupled by single-body reactions, such that:

dn_A/dt = −K_A n_A,  dn_B/dt = K_A n_A − K_B n_B,  dn_C/dt = K_B n_B

where n_i is the number density of a species and K_i is the reaction rate for a given single-body reaction. For initial conditions of n_A = n_A0 and n_B = n_C = 0 at t = 0, the known solutions are as follows:

n_A = n_A0 e^(−K_A t)

n_B = n_A0 (K_A/(K_B − K_A)) (e^(−K_A t) − e^(−K_B t))

n_C = n_A0 − n_A − n_B

The second case involves a two-species system coupled with two- and three-body reactions, such that:

dn_e/dt = K_iz n_e n_Ar − K_r n_e n_Ar+ n_Ar

dn_Ar+/dt = K_iz n_e n_Ar − K_r n_e n_Ar+ n_Ar (33)

where n_e is the electron density, n_Ar+ is the ion density, n_Ar is the background gas density, K_iz is the ionization rate coefficient, and K_r is the recombination rate coefficient. This system represents an argon plasma in which only ionization and recombination are tracked. For a constant argon density and ionization rate coefficient, the electron density grows exponentially at a rate of K_iz n_Ar and asymptotically approaches the value of K_iz/K_r. Figure 2 shows the CRANE results for the analytical solutions, where K_A = 1, K_B = 5, K_iz = 7.26 × 10⁻¹² cm³ s⁻¹, K_r = 1 × 10⁻²⁵ cm⁶ s⁻¹, and n_Ar = 2.5 × 10¹⁹ cm⁻³. While these analytical tests are great for the first stage of verification, they do not incorporate energy-dependent reaction rates. And though there are options to add such rates, such as coefficients found in Lieberman's works, as well as other analytical solutions for multi-body reaction systems, this work was not intended to document all possible analytical cases [36]. Rather, it is meant to offer a starting outline for LTP verification. For this reason, the verification of energy-dependent reaction rates will be introduced in sections 4.1.2 and 4.2.1.
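Assuming the first case is the sequential chain A → B → C (consistent with the single-body rates K_A = 1 and K_B = 5 quoted above), the analytical solutions can be checked against a simple numerical integration, mirroring how a verification test compares a solver's output to the known solution:

```python
import math

K_A, K_B, n_A0 = 1.0, 5.0, 1.0   # rates and initial density (arbitrary units)

def rhs(n):
    # A -> B -> C single-body reaction chain
    nA, nB, nC = n
    return (-K_A * nA, K_A * nA - K_B * nB, K_B * nB)

def rk4_step(n, dt):
    # Classical fourth-order Runge-Kutta step
    k1 = rhs(n)
    k2 = rhs(tuple(a + 0.5 * dt * b for a, b in zip(n, k1)))
    k3 = rhs(tuple(a + 0.5 * dt * b for a, b in zip(n, k2)))
    k4 = rhs(tuple(a + dt * b for a, b in zip(n, k3)))
    return tuple(a + dt / 6.0 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(n, k1, k2, k3, k4))

n, dt, t_end = (n_A0, 0.0, 0.0), 1e-3, 1.0
t = 0.0
while t < t_end - 1e-12:
    n = rk4_step(n, dt)
    t += dt

# Analytical solution of the sequential decay chain at the final time
nA = n_A0 * math.exp(-K_A * t)
nB = n_A0 * K_A / (K_B - K_A) * (math.exp(-K_A * t) - math.exp(-K_B * t))
nC = n_A0 - nA - nB
print(n, (nA, nB, nC))
```

The numerical and analytical triples agree to within the integrator's discretization error; note that the n_B expression assumes K_A ≠ K_B.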

Zapdos.
Analytical solutions for plasma fluids are difficult to derive, and coupled problems often require heavy restrictions to either the electric field and potential or to the fluid densities. Furthermore, analytical solutions are often divided up into piecewise functions representing the highly differing regions within the plasma itself (e.g. the plasma bulk, pre-sheath, and sheath regions). While these regional differences make it difficult to employ fully self-consistent models (e.g. the drift-diffusion reaction model covered in this report), the analytical solutions for the regional differences can be used for unit tests. Since Zapdos houses both the drift-diffusion equations and Poisson's equation for the electrostatic approximation of the electric field, examples of utilizing both will be presented here.
For the electrostatic approximation, where n_i = n_0 and n_e = n_0 e^(V/T_e), equation (4) becomes:

∇²V = (e n_0/ε_0) (e^(V/T_e) − 1) (34)

For the 1D case in which V ≪ T_e, where the potential is pinned to the wall at the value of V_0 and a natural boundary condition (such that ∇V · n = 0) is applied to the other end of the mesh, the solution becomes:

V(x) = V_0 e^(−x/λ_De) (35)

where λ_De = (ε_0 T_e / (e n_0))^(1/2). Figure 3 compares the Zapdos results for equation (34), where T_e/V = 1000, V_0 = −1 V, and n_0 = 1 × 10¹⁶ m⁻³. This example demonstrates one type of assumption often used within the LTP community, which requires extreme conditions in order to reduce the higher-order terms of the analytical solution. For this case, the condition V ≪ T_e restricts the problem to the high-electron-temperature or low-potential regime.
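As a minimal numerical check, assuming the standard linearized Debye-sheath solution V(x) = V_0 exp(−x/λ_De) and the parameter values quoted above, one can confirm that the profile satisfies the linearized equation ∇²V = V/λ_De²:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
e    = 1.602176634e-19    # elementary charge, C
Te   = 1000.0             # electron temperature in volts (the Te/V = 1000 regime)
n0   = 1.0e16             # plasma density, m^-3
V0   = -1.0               # wall potential, V

# Electron Debye length
lam_De = math.sqrt(eps0 * Te / (e * n0))

def V(x):
    # Linearized (V << Te) sheath solution
    return V0 * math.exp(-x / lam_De)

# Check d2V/dx2 = V / lam_De^2 via a central finite difference
x, h = 2.0 * lam_De, 1e-4 * lam_De
lhs = (V(x + h) - 2.0 * V(x) + V(x - h)) / h**2
rhs = V(x) / lam_De**2
print(lam_De, lhs, rhs)
```

For these parameters the Debye length is a few millimeters, and the finite-difference residual vanishes to discretization accuracy.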
For the drift-diffusion flux term, analytical solutions for diffusion problems can be used, alongside the assumptions Γ_e = Γ_i and n_e = n_i = n. These sets of equations are commonly known as ambipolar diffusion. With ambipolar diffusion, for an electric field of:

E = ((D_i − D_e)/(µ_i + µ_e)) (∇n/n) (36)

the flux term for ions and electrons can be reduced to a single diffusion term:

Γ = −D_a ∇n (37)

where D_a = (µ_i D_e + µ_e D_i)/(µ_i + µ_e). With this formulation, if the electric field of equation (36) is supplied to the flux equations for the electrons and ions of equation (2), the results will be identical to any solution of equation (37). One such analytical solution for ambipolar diffusion can be derived from the problem of:

−D_a ∇²n = G_0 (38)

where G_0 is the source term. For the 1D boundary conditions of n = 1 at x = ±l/2, the analytical solution for equation (38) is:

n(x) = 1 + (G_0/(2 D_a)) ((l/2)² − x²) (39)

Figure 4 shows the Zapdos results for the electron and ion densities, given equation (38). The coefficients used for this comparison were D_e = µ_e = 10 and D_i = µ_i = 6. This case demonstrates that assumptions are needed to decouple the problems and derive an analytical solution. While this case tests both the advection and diffusion terms of the drift-diffusion equation, neither the ions, the electrons, nor the electric field are truly coupled together, since all terms are reduced to terms of the variable n.
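Using the coefficients quoted above (D_e = µ_e = 10, D_i = µ_i = 6), with an illustrative source term G_0 and domain length l (assumed values, not taken from figure 4), the ambipolar coefficient and the parabolic solution can be verified directly:

```python
mu_e = D_e = 10.0
mu_i = D_i = 6.0

# Ambipolar diffusion coefficient: D_a = (mu_i*D_e + mu_e*D_i) / (mu_i + mu_e)
D_a = (mu_i * D_e + mu_e * D_i) / (mu_i + mu_e)

G0, l = 1.0, 1.0   # source term and domain length (assumed for illustration)

def n(x):
    # Solution of -D_a * n'' = G0 with n(+/- l/2) = 1
    return 1.0 + G0 / (2.0 * D_a) * ((l / 2.0) ** 2 - x ** 2)

# Verify the boundary conditions and the PDE residual via a finite difference
h, x0 = 1e-5, 0.2
n_xx = (n(x0 + h) - 2.0 * n(x0) + n(x0 - h)) / h**2
print(D_a, n(l / 2.0), n(-l / 2.0), -D_a * n_xx)
```

With these coefficients D_a = 7.5, the boundary values equal 1, and −D_a n″ reproduces the source term G_0.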
The next set of problems involves coupling Zapdos and CRANE. These analytical solutions are present in the work of Lieberman, along with the full in-depth derivations [36]. The following is a summarized explanation of the derivation. The first is used to plot the electron temperature versus the background gas density multiplied by the effective diameter (n_g d_eff); the ideal model can be found in section 10.2 of Principles of Plasma Discharges and Materials Processing [36]. Assuming that the total surface particle loss is equal to the total volume ionization, the relationship for n_g d_eff becomes:

K_iz(T_e)/u_B(T_e) = 1/(n_g d_eff)

where the Bohm velocity is u_B = (e T_e/M)^(1/2) and M is the mass of the ions. Using the ionization rates found in the literature, an analytical solution for the electron temperature can be derived [36]. Figure 5 shows the results of comparing Zapdos simulation output to the analytical solution taken from Lieberman [36]. While the case works for an ionization-only source of electrons, the analytical solution becomes more difficult to derive as additional sources are introduced. This simplification of source terms is often needed for these types of solutions. The last set of analytical solutions was chosen to help illustrate the limitations of coupled solutions for LTPs. For this case, Zapdos was compared to the analytical solution for the electronegative region of an oxygen plasma; the ideal model can be found in section 10.5 of Principles of Plasma Discharges and Materials Processing [36]. Within the electronegative region at low pressures, the charged species densities follow a parabolic profile, where n_+ is the positive ion density, n_− is the negative ion density, α_0 is the ratio of the centerline negative ion density to the electron density, and l_1 marks the end of the electronegative region.
l_1 can be derived by knowing n_e0, n_g, α_0, and T_e for the rate coefficients, and by assuming a simplified particle balance within the electronegative region. While this analytical solution can only be obtained by omitting plasma sheath effects, the results can still be compared to Zapdos to ensure that the electronegative bulk profiles of the two models are similar. Figure 6 compares the Zapdos solution against the analytical model. Zapdos supplied the values n_e0 = 3.261 × 10^14 m^−3, n_g = 1.6 × 10^21 m^−3, and α_0 = 9.3898 to the analytical model. The electron temperature was adjusted to better align with the Zapdos results: T_e ≈ 2.93 V in the Zapdos calculation versus T_e = 2.54 V in the analytical model. While the analytical solution was accurate for the electronegative bulk, it still needed adjustment, as the electron temperatures of the two models did not match; this is due to the analytical model failing to achieve a smooth solution between the electronegative, electropositive, and sheath regions.
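The particle-balance approach above lends itself to a simple numerical check. The sketch below solves K_iz(T_e)/u_B(T_e) = 1/(n_g d_eff) for T_e by bisection; the Arrhenius ionization rate form and the constants K0, E_iz, and M (argon-like values) are illustrative assumptions for demonstration, not values from this work:

```python
import math

def electron_temperature(ng_deff, K0=2.34e-14, E_iz=17.44, M=6.63e-26):
    """Solve K_iz(Te)/u_B(Te) = 1/(ng*deff) for Te (in volts) by bisection.

    Assumes an illustrative Arrhenius ionization rate
    K_iz = K0*exp(-E_iz/Te) (m^3/s) and ion mass M (kg),
    with Bohm velocity u_B = sqrt(e*Te/M).
    """
    e = 1.602e-19  # elementary charge (C)

    def balance(Te):
        K_iz = K0 * math.exp(-E_iz / Te)
        u_B = math.sqrt(e * Te / M)
        return K_iz / u_B - 1.0 / ng_deff

    lo, hi = 0.5, 20.0  # bracket in volts; balance() is monotone increasing
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if balance(lo) * balance(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

Te = electron_temperature(ng_deff=1e19)
print(f"Te = {Te:.2f} V")
```

A root finder like this reproduces the T_e versus n_g d_eff curve that figure 5 compares against, one point per n_g d_eff value.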

MOOSE electromagnetics module.
An example initial verification test of the electromagnetics module (prior to using it alongside the coupled Zapdos-CRANE codes), given the PDE formulation described in section 2.3, tests the module's ability to reproduce a known instantaneous field expression for a given component of an electric field. In this scenario, a single-frequency rectangular waveguide is considered. The geometry, shown in figure 7, is a vacuum-filled waveguide of length L and width b. In this waveguide, an electric plane wave representing the TM11 mode travels from the left entry port (x = 0) to the right exit port (x = L). Parameters for this scenario are shown in table 1.
Ideally, a unit test should prioritize accuracy of the calculated solution as well as good simulation convergence; therefore, a well-defined problem should be chosen to test the individual physics components of the code. Following these guidelines, this scenario was designed to compare a known analytical solution against the computed solution of equation (16) for the electric field only. Other unit tests should be designed to cover other scenarios and code components. Harrington showed that, because the transverse electric field distributions can be determined from the component in the direction of wave travel, only the E_x component is required for making an adequate comparison in this case [44]. Perfectly conducting walls were used in the waveguide, and an absorber was placed on the waveguide exit. At the entry port, an incoming wave shape was prescribed (equation (45)). The present study uses the analytic expressions provided by Cheng for the electric field structure of a TM11 mode within a vacuum-filled rectangular waveguide [45]. For simplicity, only the real component of the electric field in a 2D waveguide cross section is compared. The E_x component of the field is given by

E_x(x, y) = E_0 sin(πy/b) e^{−jβx},  (46)

where β = (k² − (π/b)²)^{1/2} is the propagation constant of the wave, k = ω/c is the free-space wavenumber (with c being the speed of light), ω = 2πf is the angular frequency (with f the operating frequency), E_0 is the peak electric field amplitude (set to 1 in this case, given equation (45)), and b is the width of the waveguide in the y direction.
The complex-field result of this verification case is separated into real and imaginary components, and is shown in figures 8(a) and (b).
For the comparison, data points were taken down the centerline of the waveguide domain (y = 5) and compared to Cheng's analytic solution given in equation (46). This comparison is shown in figure 9(a). The magnitude of the computed result aligns well with that of the analytic solution, but note the increasing phase error along the length of the simulation domain. This result brings to light another aspect of model verification: confirmation of known numerical effects in the result. In the literature, this is known as 'numerical dispersion' or 'numerical phase error' and is a byproduct of poor resolution of the calculated wave velocity. Two ways of mitigating this are (1) using higher-order finite element basis functions and (2) using smaller elements [46][47][48]. The module is currently limited to a first-order vector basis, but figure 9(b) shows the result obtained following an 8× refinement of the mesh, with the solution being improved over the entire domain. An 8× refinement might be too much for regular usage of the code; however, generally speaking, determining the numerical effect on the result through thorough code verification testing enables mitigation strategies and the development of code-related best practices for increasing the robustness of the calculated results.
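As an illustration of the centerline comparison described above, the analytic field can be sampled directly. The sketch below assumes the reconstructed TM11 expression Re(E_x) = E_0 sin(πy/b) cos(βx), along with illustrative values of b, f, and the sampling length (these are assumptions, not the table 1 parameters):

```python
import numpy as np

# Assumed (illustrative) waveguide parameters -- not the table 1 values
c = 2.998e8       # speed of light (m/s)
b = 10.0          # waveguide width in y (m)
f = 20e6          # operating frequency (Hz); must exceed the mode cutoff

omega = 2 * np.pi * f
k = omega / c                              # free-space wavenumber
beta = np.sqrt(k**2 - (np.pi / b) ** 2)    # propagation constant

def Ex_analytic(x, y, E0=1.0):
    """Real part of the TM11 E_x expression (reconstructed form)."""
    return E0 * np.sin(np.pi * y / b) * np.cos(beta * x)

# Sample down the waveguide centerline (y = b/2), as in figure 9(a)
x = np.linspace(0.0, 100.0, 201)
centerline = Ex_analytic(x, b / 2)
print(beta, centerline[:3])
```

The computed field, sampled at the same points, can then be differenced against `centerline` to expose the accumulating phase error discussed above.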

Convergence studies using the MMSs
MMS can be utilized to overcome the limitations mentioned in section 4.1. MMS can be applied in a modular fashion in order to test each individual component of a code, just like the unit tests based on analytical solutions, or to broadly test highly coupled systems. This is because the only assumption needed for MMS is that the solution be smooth in time and space. This does, however, limit the testing of terms whose solutions are expected to contain a discontinuity. For these cases, multiple MMS solutions can be derived, such that one manufactured solution tests the region before the discontinuity, while another tests the region after. The following sections offer a set of solutions for testing a modular setup of the drift-diffusion reaction models.

CRANE and Zapdos.
For Zapdos and CRANE, a set of MMS solutions was derived for a 2D oscillating electron density and mean energy, paired with a steady-state ion density. The solutions for the electron density, ion density, electron mean energy, and electrostatic potential are as follows:

n_e = sin(πy/l_y) + 1.0 + cos(πx/(2l_x)) + 0.2 sin(2πft) cos(πy/l_y),  (47)

n_i = sin(πy/l_y) + 1.0 + 0.9 cos(πx/(2l_x)),  (48)

ε_e = [cos(πx/(2l_x)) + sin(2πft) cos(πy/l_y)] sin(πy/l_y),  (49)

V = −(e_eff/ε_eff) l_y² cos(πy/l_y) sin(2πft),  (50)

where l_i is the scale length in either the x or y direction, f is the operating frequency, e_eff is an effective elementary charge, and ε_eff is an effective permittivity. It is worth noting that although these solutions were derived to exhibit physical characteristics similar to those of an RF plasma, such need not be the case: MMS tests whether the model is correctly implemented in the code, not whether it is physically valid. Using MMS, modular convergence studies were conducted, whose categories are described in table 2. From table 2, a temporal convergence study was only conducted for the 1D diffusion problem, for two main reasons. First, the time derivative term is non-coupled and depends only on the variable of interest in the time domain. This leads to the assumption that if the kernel works for a 1D, single-species problem, it must also work for multi-dimensional, multi-species problems. Second, the errors used for convergence depend on both time and space. This requires a temporal convergence study to use a significantly refined mesh, so that the error introduced by the spatial discretization does not impact the overall error (and vice versa for spatial convergence studies). It should be noted that this method of neglecting the temporal convergence for higher spatial dimensions is only valid when the discretization of the kernels in question separates purely into either temporal or spatial discretization.
For cases where kernels involve both temporal and spatial discretization, temporal convergence for higher spatial dimensions is needed.
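The MMS workflow used throughout these studies, deriving the forcing source by substituting the manufactured solution into the governing PDE, can be sketched for a 1D time-dependent diffusion problem. The manufactured form below is illustrative, not the exact solution set of equations (47)-(50):

```python
import sympy as sp

x, t = sp.symbols("x t")
D, lx, f = sp.symbols("D l_x f", positive=True)

# Illustrative manufactured solution: smooth in space and time
n = 1 + sp.cos(sp.pi * x / (2 * lx)) * sp.sin(2 * sp.pi * f * t)

# For dn/dt - D*d2n/dx2 = S, the MMS source term S is obtained by
# substituting the manufactured solution into the PDE
S = sp.simplify(sp.diff(n, t) - D * sp.diff(n, x, 2))
print(S)

# Verify: with S added as a forcing term, the residual vanishes identically
residual = sp.simplify(sp.diff(n, t) - D * sp.diff(n, x, 2) - S)
print(residual)
```

The derived source `S` is what gets coded into the simulation as a forcing kernel, after which the discretization error against the known manufactured solution can be measured on successively refined meshes.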
The coefficients within the drift-diffusion reaction model for MMS were chosen such that all terms being tested resulted in similar magnitudes. A list of these coefficients can be found in table 3. For the electromagnetics module, only the spatial discretization of equation (16) was tested; thus, the spatial rate of convergence is the only metric shown in this section. The module currently utilizes a Nédélec first-order vector finite element basis that is curl-conforming for vector field simulations [33]. Given a first-order Nédélec basis, the spatial rate of convergence has been shown to be first order in terms of element size as well [49]. This differs from the spatial rate of convergence of the first-order Lagrange scalar finite element basis (the rate discussed up to this point for simulations using Zapdos and CRANE), which is one order greater than the variable order (e.g. second-order accurate for a first-order Lagrange basis).
The solution for the complex electric vector field in this study is separated into real and imaginary components, where 'Re' and 'Im' indicate the real and imaginary components of the complex field, respectively, and x̂ and ŷ are the unit vectors in the x and y spatial directions, respectively. All coefficients used in this study are those of equation (16); µ_0, ϵ, and ω were all given a value of unity. The coefficient values were chosen such that the magnitudes of all terms being tested were within the same order of magnitude. Given the simple structure of the above MMS solutions, the source terms leading to the MMS solutions can be directly determined, and they fit easily into the code structure for the source term (including the current source J), enabling the source to be directly included in the testing. Figure 12 demonstrates the outcome of the spatial convergence study, showing that the first-order Nédélec rate of spatial convergence calculated by the module was within 1% of the expected value found in the literature [49].
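The rate of convergence reported in such studies is typically obtained as the slope of the error versus element size on a log-log scale. A minimal sketch, with synthetic first-order data standing in for actual simulation errors:

```python
import math

def observed_order(h, err):
    """Least-squares slope of log(err) vs log(h): the observed
    rate of convergence from a mesh-refinement study."""
    lx = [math.log(v) for v in h]
    ly = [math.log(v) for v in err]
    n = len(h)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic data: errors proportional to element size (first order),
# as expected for a first-order Nedelec basis [49]
h = [0.1, 0.05, 0.025, 0.0125]
err = [2.0 * hi for hi in h]
print(observed_order(h, err))  # ~1.0
```

Comparing the observed slope against the theoretical order (first order for Nédélec elements here, second order for first-order Lagrange elements in the Zapdos-CRANE studies) is the pass/fail criterion of the convergence test.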

Ongoing development.
For ongoing and future development of Zapdos, MMS and convergence studies will continue to be used as the first steps in verifying the implementation of other coupled problems. An example of one such problem is the coupling of the MOOSE Navier-Stokes module to Zapdos when simulating gas-driven flow. While both applications have been individually verified, verification of the coupling between the two applications must still be performed. Figure 13 shows the results of the one-way coupling between Zapdos and the Navier-Stokes module, with the ion flux gaining a gas-driven advection term, such that

Γ_i = µ_i n_i E − D_i ∇n_i + v n_i,

where v is the velocity profile of the flow-driven background gas. The solution set for the plasma parameters is otherwise the same as that of equations (47)-(50), and figure 13 shows good spatial convergence for the one-way coupling. With this verification test, a shortened dynamic test can be constructed for future SQE, while the next steps in the V&V process would include benchmarking and experimental validation.
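The extra MMS source contribution introduced by the gas-driven advection term can be derived symbolically, in the same manner as for the uncoupled terms. The sketch below uses assumed 1D manufactured forms for the ion density and gas velocity, and omits the electric-field drift term for brevity:

```python
import sympy as sp

x = sp.symbols("x")
D_i, lx = sp.symbols("D_i l_x", positive=True)

# Illustrative 1D manufactured ion density and background-gas velocity
# (assumed forms for demonstration, not this work's exact solution set)
n_i = 1 + sp.Rational(9, 10) * sp.cos(sp.pi * x / (2 * lx))
v = sp.sin(sp.pi * x / lx)

# Steady ion equation with the gas-driven advection term added:
#   d(v*n_i)/dx - D_i * d2(n_i)/dx2 = S
# The MMS source S follows by substituting the manufactured fields
S = sp.simplify(sp.diff(v * n_i, x) - D_i * sp.diff(n_i, x, 2))
print(S)

# Sanity check: the forced equation is satisfied identically
residual = sp.simplify(sp.diff(v * n_i, x) - D_i * sp.diff(n_i, x, 2) - S)
print(residual)
```

Because only the source term changes, the same convergence-study machinery used for the uncoupled cases carries over directly to the one-way-coupled problem.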

Verification documentation
For LTP code verification to achieve parity with that of other areas of physics modeling and simulation, a database of relevant and reproducible verification cases should be developed. For such documentation, this work again references the guidelines from SNL [9], which we summarize here. SNL categorizes verification documentation into four areas: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional user information.
A conceptual description is an overview of a particular verification test, and is distinguished by the fact that it is purely textual, with no equations or symbols used. Use of purely textual descriptions makes storage and reference in electronic databases easier, such as when one needs to search for particular keywords. SNL divides the conceptual description into five parts: title, initial conditions and boundary conditions, related physical processes, type of benchmark, and numerical and/or code features tested. An example of the conceptual description for Test type 'e' from table 2 would be:

Title:
Two-fluid simulation, with steady-state ions, time-dependent electrons, and electrostatic potential

Initial conditions and boundary conditions:
Non-zero initial conditions for species densities, 2D Cartesian coordinates, boundary conditions of the first kind.

Related physical processes:
Charged species fluid continuity, electrostatic potential

Type of benchmark:
Manufactured solution

Numerical and/or code features tested:
Two-way coupling between the potential-dependent advection terms for the ion and electron fluids and the charged-species-dependent source term for the potential

By contrast, the mathematical description contains more detailed equations and/or symbols. This would include the set of equations being tested, the known solutions, and any additional source term(s) needed for cases involving MMS. This section would include all assumptions used to derive the equation set being tested; for example, the assumptions needed for the drift-diffusion equation and the electrostatic approximation would need to be addressed when documenting Test type 'e' from table 2. All values for input parameters, initial conditions, and boundary conditions would also be outlined in detail within this section. However, it would not include any mention of the numerical solver, discretization, or numerical methods.
The accuracy assessment documents the domains for which the known solutions are highly accurate, as well as whether there are any spatial or temporal limitations in the applicability of the known solution or in the allowable ranges of any input parameters. For MMS, this might be the acceptable ranges of coefficients needed to keep each term of the modeled PDEs in question within the same order of magnitude. For analytical solutions, this might be the acceptable ranges of any inequalities needed to obtain the known solutions, such as for equation (34).
The section for additional user information is intended to provide any remaining information needed to repeat the verification method in question. This is where any documented results would include information about the hardware and software used, the discretization or numerical methods, and the methods for obtaining the error between the known solutions and the computed solutions.

Conclusion and future work
The current under-reliance on verification for LTP codes (as compared to computational codes and frameworks in other areas of physics) could become increasingly problematic as the LTP community continues to grow. As mentioned in section 1, the work presented here is not meant as a complete outline of a verification process for LTP, but as the potential groundwork upon which to begin formulating a standardized process for the wider LTP community. Just as the LTP computational community has worked with the experimental community to establish validation methods, domain computational scientists should work together to better establish verification methods. While analytical solutions work for simplified unit tests, the limitations needed to obtain such solutions make them ill-suited for verifying highly coupled systems. The work presented here demonstrates that MMS can help with those highly coupled systems by enabling custom solutions to be derived for testing individual terms or multi-physics simulations. For future work, the authors suggest that a database of relevant verification cases be developed for existing LTP codes, using the documentation style discussed by SNL [9]. Further discussion of SQE is also needed for LTP codes. Inspiration could be taken from the Institute of Electrical and Electronics Engineers (IEEE), which has developed comprehensive SQE standards since as early as 1980 [50]. The current version of the IEEE SQE standard, IEEE 730-2014, includes detailed criteria for software quality assurance covering all stages of software development and maintenance [51].
While standardizing verification testing will be an iterative process requiring input from leaders and community members, similar standardization efforts have been undertaken in the past by the LTP community within the Gaseous Electronics Conference (GEC). The GEC community has played an important role in LTP validation and databases, such as through the development of the GEC RF reference cell for experimental validation and the expansion of the LXCat database for LTP modeling [1,52]. The authors believe that a similar community-based project held within GEC could serve as an opportune venue for the continued development of verification practices for LTP codes.

Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).

Appendix. Physics coupling within MOOSE
MOOSE was chosen for its open-source nature and its modular method of coupling physics within the framework. Within the MOOSE code repository itself, several modules focusing on particular types of physics (e.g. tensor/solid mechanics, geochemistry, thermal hydraulics, ray tracing, and heat conduction) have been developed by the MOOSE community. By default, externally-created MOOSE applications depend on the core framework, though they can optionally be made to depend on any (or all) of the modules in order to help create a unique set of capabilities. All of these applications rely on the framework as their backbone, and their code objects are interoperable with each other, thus enabling multiphysics simulations to be performed. These can be solved in a fully coupled manner-using the same solver, solver parameters, and matrix-or by using some form of customized, looser coupling, utilizing the MOOSE MultiApp system to allow data transfer and unique sub-cycling between individual simulations.
Due to the MOOSE framework's consistent handling of input syntax, solvers, and finite element/volume discretizations, all MOOSE codes can easily be coupled together. The underlying code structure and included physics modules of MOOSE allow an impressive level of flexibility that, when extended with custom physics application code, allows scientists and researchers to solve a wide range of physics problems. MOOSE physics modules and MOOSE-based applications can either be run as standalone binary executables or be linked in as libraries for larger, aggregate applications, which can then apply any of the objects in the linked libraries within their input files. These aggregate applications can then perform multiphysics simulations via either a fully coupled methodology (i.e. solved with all variables contained within the same matrix) or a more loosely coupled methodology (i.e. separate MOOSE MultiApp simulations that can transfer data and communicate with each other).
To help with code modularity and code reuse for complex, fully coupled PDEs, physical equations within MOOSE are divided into individual mathematical terms that are given the category label 'kernels.' These kernels are written in the finite element weak form of the equations, which multiplies the equation of interest by a test function and then integrates over the physical domain. To demonstrate how these kernels are structured, we present an example of the weak form of the electron flux continuity equation, featuring an ionization-only source term:

∫_Ω ψ (∂n_e/∂t) dΩ − ∫_Ω ∇ψ · Γ_e dΩ + ∮_∂Ω ψ (Γ_e · n̂) dS − ∫_Ω ψ k_iz n_g n_e dΩ = 0,  (A.1)

where ψ is the test function, Γ_e is the electron flux of equation (2), n̂ is the outward surface normal, and k_iz is the ionization rate coefficient. Equation (A.1) contains kernels housed both in Zapdos and in CRANE, but since MOOSE allows for full coupling between applications, all kernels can be used within the same input file, and all kernel contributions can be added to the same matrix. The accompanying listing shows how equation (A.1) can be entered into a Zapdos input file, such that the time derivative, advection, diffusion, and ionization terms are entered as separate components. This modular setup of physics terms within a given MOOSE-based input file lends itself to serving as a straightforward path to verification, as obtained through MMS and comparisons to analytical solutions, without the need for intensive code modifications.
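To illustrate, a sketch of such a [Kernels] block in MOOSE's hit input syntax is shown below. The kernel type names and parameters here are illustrative placeholders, not necessarily the actual Zapdos or CRANE class names:

```
[Kernels]
  # d(n_e)/dt -- time derivative term, housed in the MOOSE framework
  [electron_time_derivative]
    type = TimeDerivative
    variable = em
  []
  # Advection (drift) term, coupled to the electrostatic potential
  [electron_advection]
    type = EFieldAdvection        # illustrative name
    variable = em
    potential = potential
  []
  # Diffusion term
  [electron_diffusion]
    type = CoeffDiffusion         # illustrative name
    variable = em
  []
  # Ionization source term, housed in CRANE
  [electron_ionization]
    type = ReactionSource         # illustrative name
    variable = em
    reaction = 'em + Ar -> em + em + Ar+'
  []
[]
```

Each sub-block maps to one term of equation (A.1), so an individual term can be enabled, disabled, or forced with an MMS source without modifying the others.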