On scalable liquid-metal MHD solvers for fusion breeder blanket multiphysics applications

While substantial research effort has recently been devoted to the development of computational liquid-metal magnetohydrodynamics (MHD) solvers, this has typically been confined to closed-source and commercial codes. This work aimed to investigate some open-source alternatives. Two OpenFOAM-based MHD solvers, mhdFoam and epotFoam, were found to show strong scaling profiles typical of fluid dynamics codes, while weak scaling was impeded by an increase in iterations per timestep with increasing resolution. Both were found to accurately solve the Shercliff and Hunt flow problems for Hartmann numbers from 20 to 1000, with the exception of mhdFoam, which failed in the Hunt flow Ha = 1000 case. A basic inductionless MHD solver was implemented in the Proteus MOOSE application as a proof of concept, using two methods referred to as the kernel method and the material method. Future work will aim to build on these studies, exploring more advanced OpenFOAM MHD solvers as well as improving the Proteus MHD solver.


Introduction
A key step in accelerating the development of magnetic confinement fusion (MCF) power plants is developing the capability to study and design tokamak components in silico, reducing the financial costs of prototyping and testing through predictive modelling [1]. Digital replicas and twins of these components, even extending to full plant simulations, pose a significant challenge, requiring carefully validated multiphysics software that can make efficient use of upcoming exascale high performance computing (HPC) systems. These multiphysics packages must therefore be scalable (to maximise the size of problems that can be studied) and portable (such that they function on the architectures of future HPC resources), in addition to being capable of accurately simulating the physics involved.
For MCF devices to become commercially viable, a key component will be the breeder blanket used to generate tritium required for the fusion reaction [2]. Some designs of these blankets use a liquid metal as the breeding material, the flow of which is governed by magnetohydrodynamics (MHD) due to the interaction with the strong magnetic fields confining the plasma. These components, and the complex liquid-metal flows within them, have been the target of significant research effort in recent years, with many studies aiming to improve liquid-metal magnetohydrodynamics (LM-MHD) simulation capabilities [3, 4]. These efforts have typically been focused on closed-source research codes and commercial solvers, which have shown promising results [5]. However, the closed-source nature of the research codes makes access challenging, while prohibitive licensing and digital rights management of commercial solvers restricts portability.
In order to capture all the key physics involved in breeder blanket designs with liquid-metal flows in multiphysics simulations, LM-MHD must be included in multiphysics packages. The capability already exists to study some of the physics involved, such as in the AURORA code, which couples MOOSE and OpenMC for neutron transport and thermomechanical analysis [6]. MOOSE is an open-source, parallel finite element method (FEM) library which simplifies coupling between MOOSE-based applications and external codes through its MultiApp system [7], and is reported to be highly scalable [8]. AURORA is part of a wider suite of open-source tools under development at https://github.com/aurora-multiphysics, covering fluid dynamics (Proteus), electromagnetism (Apollo and Hephaestus), tritium transport (Achlys) and more. Adding LM-MHD to this collection will enable more detailed breeder blanket simulations and fill a gap in the current capability of the suite. This could be achieved by introducing a new application to couple in an external open-source solver, or by adding new physics to one of the existing applications, such as Proteus, using the inbuilt FEM solver capabilities of MOOSE.
To simulate conducting fluids in magnetic fields, several effects must be considered in addition to capturing the fundamental coupling between fluid dynamics and electromagnetism in the bulk of the fluid. The behaviour at the walls must be resolved, particularly in terms of electromagnetic coupling to the walls (and beyond) and boundary layers that become extremely narrow for high magnetic flux densities [3]. Flows may be turbulent, and so this behaviour must either be resolved through direct numerical simulation (DNS) or modelled using methods such as large-eddy simulation (LES) [9]. Thermal effects, such as buoyancy and temperature dependence of material properties, must also be considered [3].
This work aims to investigate existing open-source LM-MHD solvers, as well as introduce a new proof-of-concept implementation using MOOSE. An overview of relevant LM-MHD and parallel scaling theory is provided in section 2. Section 3 details a study of two OpenFOAM-based solvers, mhdFoam and epotFoam, evaluating them in terms of parallel scaling and validation against analytic solutions. Section 4 describes an early implementation of a new LM-MHD solver in the Proteus MOOSE application, before drawing overall conclusions in section 5.

LM-MHD
LM-MHD is a well documented and active field, with reviews of the physics found in several text books [10-12]. The set of partial differential equations (PDEs) governing liquid metals in magnetic fields is derived by combining the incompressible Navier-Stokes equations with Maxwell's equations, as detailed in [12]. For a single fluid with constant properties (density ρ, kinematic viscosity ν, magnetic permeability µ and conductivity σ), the resulting equations are

$$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \rho\nu \nabla^2 \mathbf{u} + \mathbf{J} \times \mathbf{B}, \tag{1a}$$
$$\nabla \cdot \mathbf{u} = 0, \tag{1b}$$
$$\frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{u} \times \mathbf{B}) + \frac{1}{\mu\sigma} \nabla^2 \mathbf{B}, \tag{1c}$$
$$\nabla \cdot \mathbf{B} = 0, \tag{1d}$$
$$\mathbf{J} = \frac{1}{\mu} \nabla \times \mathbf{B}, \tag{1e}$$

where u is flow velocity, p is pressure, B is magnetic flux density and J is current density. These equations describe momentum conservation (1a), incompressibility (1b), magnetic induction (1c), the solenoidal nature of magnetic fields (1d) and Ampère's law neglecting electric displacement currents (1e). Current density J can be eliminated by substituting (1e) into the Lorentz force term in (1a).
Several key dimensionless groups can be derived from these equations. The Hartmann number is defined as

$$Ha = B L \sqrt{\frac{\sigma}{\rho \nu}},$$

the square of which describes the strength of magnetic forces relative to viscous forces, where B is the magnetic flux density scale and L is a characteristic length scale [4]. The magnetic Reynolds number

$$R_m = \mu \sigma U L,$$

where U is a characteristic velocity scale, is analogous to the Reynolds number Re [13] but instead describes the ratio between magnetic induction and magnetic diffusion [4]. If R_m ≪ 1, magnetic induction is negligible and the induced magnetic field is negligible, such that B can be treated as a constant imposed field, B ≈ B₀. This is typically the regime of fusion blanket applications [4]. In this limit the inductionless approximation can be taken [11], resulting in a simplified set of equations

$$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} \right) = -\nabla p + \rho\nu \nabla^2 \mathbf{u} + \mathbf{J} \times \mathbf{B}_0, \tag{4a}$$
$$\nabla \cdot \mathbf{u} = 0, \tag{4b}$$
$$\nabla^2 \phi = \nabla \cdot (\mathbf{u} \times \mathbf{B}_0), \tag{4c}$$
$$\mathbf{J} = \sigma(-\nabla \phi + \mathbf{u} \times \mathbf{B}_0), \tag{4d}$$

where ϕ is the electric potential, with the Poisson equation (4c) derived by taking the divergence of Ohm's law (4d) and imposing charge conservation, ∇ · J = 0. The Hartmann and Reynolds numbers can be combined to form the Stuart number N = Ha²/Re, which provides information about the turbulent behaviour of the fluid [14]. This work considers only the case of N ≫ 1 with low Re and simple geometries, such that flow is laminar.
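As a concrete illustration of these groups, the short sketch below evaluates Ha, Re, R_m and N for a set of assumed, order-of-magnitude liquid-metal properties and scales. The values are placeholders chosen only to indicate a fusion-relevant regime; they are not data used elsewhere in this paper.

```python
import numpy as np

# Illustrative evaluation of the dimensionless groups defined above.
# All property values and scales are assumed placeholders, not data from this work.
rho   = 9.5e3        # density, kg/m^3
nu    = 1.3e-7       # kinematic viscosity, m^2/s
sigma = 7.6e5        # electrical conductivity, S/m
mu    = 4e-7 * np.pi # magnetic permeability, H/m
B     = 4.0          # magnetic flux density scale, T
L     = 0.1          # characteristic length (duct half-width), m
U     = 0.1          # characteristic velocity, m/s

Ha = B * L * np.sqrt(sigma / (rho * nu))  # Hartmann number
Re = U * L / nu                           # Reynolds number
Rm = mu * sigma * U * L                   # magnetic Reynolds number
N  = Ha**2 / Re                           # Stuart number

print(f"Ha = {Ha:.3g}, Re = {Re:.3g}, Rm = {Rm:.3g}, N = {N:.3g}")
# Rm << 1 for these values, so the inductionless approximation (4) is reasonable.
```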
While liquid-metal flows are often complex, analytic solutions have been obtained for some cases of laminar flow in homogeneous magnetic fields [15-17]. For cases with the magnetic field applied perpendicular to the flow, the solution for perfectly insulating walls was derived by Shercliff [15], as shown in figure 1, while Hunt studied more general cases of mixed wall conductivities, including walls of arbitrary conductivity [16], as shown for perfectly conducting and insulating walls in figure 2.
A key challenge in simulating LM-MHD flows is resolving the Hartmann and side layers. As Ha increases, the Hartmann and side layers narrow. The Hartmann layer thickness δ varies strongly with Ha as δ ∝ Ha^{-1}, while the side layers have a weaker dependence of δ ∝ Ha^{-1/2} [11]. This becomes a significant challenge as Ha approaches Ha ∼ 10^4, as is common in fusion [4, 18], when the Hartmann layers in particular become vanishingly small.
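To make this resolution requirement concrete, the minimal sketch below solves fully developed one-dimensional Hartmann flow between insulating walls with a simple finite-difference scheme and compares it to the classical analytic profile. The unit properties and Ha = 50 are assumed purely for illustration, and this 1D problem is a simplification of the duct (Shercliff and Hunt) cases considered later in this work.

```python
import numpy as np

# Fully developed Hartmann flow between insulating walls at y = +/- a,
# driven by a constant pressure gradient G, with a uniform field B in y:
#   rho*nu*u'' - sigma*B^2*u = -G,  u(+/-a) = 0.
# Illustrative unit properties (an assumption, not values from this paper).
rho, nu, sigma, a, G = 1.0, 1.0, 1.0, 1.0, 1.0
Ha = 50.0
B = Ha / (a * np.sqrt(sigma / (rho * nu)))

# Uniform finite-difference grid; the Hartmann layers have thickness ~ a/Ha,
# so of order Ha points per half-width are needed just to place a few points
# inside each layer.
n = 2001
y = np.linspace(-a, a, n)
h = y[1] - y[0]

# Tridiagonal operator rho*nu*D2 - sigma*B^2*I on the interior nodes.
main = -2.0 * rho * nu / h**2 - sigma * B**2
off = rho * nu / h**2
A = (np.diag(np.full(n - 2, main))
     + np.diag(np.full(n - 3, off), 1)
     + np.diag(np.full(n - 3, off), -1))
u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, np.full(n - 2, -G))

# Analytic Hartmann profile for comparison.
u_exact = G / (sigma * B**2) * (1.0 - np.cosh(Ha * y / a) / np.cosh(Ha))

print(f"Ha = {Ha:.0f}, Hartmann layer thickness ~ a/Ha = {a / Ha:.3f} m")
print(f"max relative error vs analytic: "
      f"{np.max(np.abs(u - u_exact)) / u_exact.max():.2e}")
```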

Parallel scaling
A critical aspect of simulations targeting the exascale is parallel scaling. This can be considered in terms of two key properties. Strong scaling is the relationship between the time required to perform a fixed amount of work and the amount of computational resources used. Weak scaling, in contrast, is the relationship between the time required for an increasing amount of work and a proportionally increasing amount of computational resources. To quantify these scaling relations, first consider the rate R at which a computer does work,

$$R = \frac{W}{t},$$

where W is a measure of the amount of work done by the computer, such as the total number of degrees of freedom (DOFs), and t is the computation time [19]. As computational resources are increased from a reference of N_ref cores to N cores, the relative speedup S_N can then be considered, where

$$S_N = \frac{R_N}{R_{\mathrm{ref}}},$$

which is sometimes referred to as scaled speedup [20]. Parallel efficiency η_N can be defined as the ratio of the total resource use time per amount of work on N_ref cores to that on N cores, which for strong scaling gives η_N = t_ref N_ref / (t_N N), while for weak scaling η_N = t_ref / t_N. Ideal scaling is the case in which η_N = 1 ∀ N. In the case of strong scaling, a threshold number of cells per core can be considered, below which increasing the number of resources is inefficient. If n is the number of cells per core, the threshold value of n below which efficiency drops below 80% is defined as n_0.8.
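As a worked example of these definitions, the sketch below computes strong- and weak-scaling efficiencies from per-timestep run times and estimates n_0.8 by interpolation. The timings are synthetic placeholders chosen only to demonstrate the arithmetic, not measurements from this study.

```python
import numpy as np

# Strong- and weak-scaling efficiency from measured times per timestep,
# following the definitions above.
def strong_efficiency(n_cores, t, n_ref=None, t_ref=None):
    n_ref = n_cores[0] if n_ref is None else n_ref
    t_ref = t[0] if t_ref is None else t_ref
    return (t_ref * n_ref) / (np.asarray(t) * np.asarray(n_cores))

def weak_efficiency(t, t_ref=None):
    t_ref = t[0] if t_ref is None else t_ref
    return t_ref / np.asarray(t)

# Synthetic strong-scaling data: a fixed 80k cell problem on 1..64 cores.
cores = np.array([1, 2, 4, 8, 16, 32, 64])
times = np.array([100.0, 52.0, 27.0, 14.5, 8.2, 5.0, 3.6])  # s per timestep
eta = strong_efficiency(cores, times)
cells_per_core = 80_000 / cores

# n_0.8: cells per core at which strong-scaling efficiency drops below 80%,
# estimated by interpolating efficiency against cells per core.
n_08 = np.interp(0.8, eta[::-1], cells_per_core[::-1])
print("strong-scaling efficiency:", np.round(eta, 2))
print(f"estimated n_0.8 ~ {n_08:.0f} cells per core")

# Weak scaling: times per timestep at a fixed number of cells per core.
print("weak-scaling efficiency:", np.round(weak_efficiency([10.0, 11.5, 14.0, 19.0]), 2))
```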

LM-MHD in OpenFOAM
OpenFOAM is a freely distributed open-source finite volume method (FVM) software package, primarily designed for computational fluid dynamics (CFD) [21]. LM-MHD capability for OpenFOAM has been developed within OpenFOAM itself (the mhdFoam solver) as well as through more advanced closed-source academic solvers implemented in OpenFOAM, such as the work of Mistrangelo and Bühler [22]. While the usefulness of closed-source codes is limited to those with access, there are some freely available open-source OpenFOAM solvers that may extend similar capabilities to the wider community.

Open-source OpenFOAM MHD solvers
The mhdFoam solver, distributed with OpenFOAM, solves a variation of the full induction formulation (1) for u, p and B for incompressible conducting fluids. In this implementation, the Lorentz force is reformulated in terms of B only, eliminating J using (1e). The pressure implicit with splitting of operators (PISO) algorithm [23] is used for the coupling between pressure and velocity to effectively apply ∇ · u = 0. The solenoidal magnetic field constraint (1d) is not directly imposed; instead, noting the similarity between the momentum equation (1a) with its incompressibility constraint (1b) and the induction equation (1c) with its solenoidal constraint (1d), a fictitious magnetic flux pressure p_B is introduced. This allows a comparable (but simplified) method to the PISO algorithm to be applied, referred to as the B-PISO loop. This algorithm produces the divergence-free magnetic flux field required by the ∇ · B = 0 constraint, with the magnetic induction equation taking the place of the momentum predictor [24]. A more detailed description of the mhdFoam solver can be found in [24].
An inductionless MHD solver has been implemented in the open-source epotFoam solver presented by Tassone [24], based on the works of Dousset [25] and Mas de les Valls [26], solving equations based on the electric potential inductionless formulation (4) for u, p and ϕ, while B is assumed to be unchanged by the flow. Again, a PISO loop is used for pressure-velocity coupling; however, the fluid and electromagnetic solves are coupled only through the electric potential solve, which consumes the velocity u computed earlier in the timestep and outputs the Lorentz force to be used in the momentum equation solve of the next timestep. Because charge conservation is assumed in the derivation of the inductionless equations (4), and is therefore not directly imposed by the code, the Lorentz force is implemented using the scheme developed by Ni et al [27, 28] to ensure the scheme is conservative and consistent. Full details of the epotFoam solver can be found in [24].
For the studies presented here, OpenFOAM 9 from the OpenFOAM Foundation was used for both mhdFoam and epotFoam. This choice was made because this was the latest version of OpenFOAM with which epotFoam was found to be compatible.
Case input files for the OpenFOAM simulations providing full details of the setup can be found in the open data repository associated with this paper (see data availability statement) [29].

Scalability methods
To assess the parallel scaling properties of the mhdFoam and epotFoam solvers, the laminar Shercliff flow case was used as a benchmark [15]. For this, a 20 m × 2 m × 2 m square duct centred on (10, 0, 0) was used, with flow along the x-axis and a magnetic field B₀ = (0, 20, 0) T imposed parallel to the side walls (as opposed to the Hartmann walls) of the domain, as defined in section 2.1. The 3D domain was chosen, as opposed to solving a narrow domain in the flow direction with periodic boundary conditions (BCs), in order to increase the number of cells and so enable demonstration of scaling out to a higher number of cores. All constant properties (density, viscosity, permeability and conductivity) were set to unity, such that Ha = |B₀| = 20. The inlet at x = 0 m was set with a uniform velocity BC of u_in = (1, 0, 0) m s⁻¹, resulting in laminar flow with Re ∼ 2. The walls were set with no-slip u = 0 conditions and the outlet at x = 20 m was set with a zero normal-gradient BC. Pressure was set to p = 0 Pa at the outlet, with zero-gradient conditions on all other boundaries. In the mhdFoam case, the magnetic field was imposed as a constant vector BC on the walls with zero-gradient conditions on the inlet and outlet, and the fictitious magnetic flux pressure p_B was set to zero at the inlet and outlet with zero-gradient conditions on the walls. For epotFoam, the magnetic field was instead set as a constant property, with zero-gradient conditions for electric potential ϕ on all boundaries to give perfectly insulating walls. For mhdFoam, the magnetic field was given an initial condition equal to the imposed field B₀ = (0, 20, 0) T, while all other variables (u, p, p_B) were set to zero. For epotFoam, all variables (u, p, ϕ) were set with initial conditions of zero.
The simulations were run as a transient solve, despite the benchmark case having a steady laminar solution, due to the lack of steady-state support in these solvers. The solvers were run for 2 s of simulation time to allow pressure to stabilise through the domain and the system to reach steady state. Euler timestepping was used with second-order Gaussian integration for spatial gradients, with linear interpolation used to obtain cell-surface values from cell-centred values. Unbounded second-order conservative Laplacian terms were used. The simulation was therefore first order in time and second order in space. Orthogonal surface-normal gradient schemes were used due to the regularity of the hexahedral mesh. For both mhdFoam and epotFoam, the velocity field equation was solved using the symmetric Gauss-Seidel smoother method. In mhdFoam, the same solver was used for the magnetic flux density. With both mhdFoam and epotFoam, the pressure field was solved using the preconditioned conjugate gradient (PCG) solver with the simplified diagonal-based incomplete Cholesky (DIC) preconditioner, using three corrector iterations without non-orthogonal correctors in the PISO loop, as well as in the B-PISO loop for mhdFoam. Similarly, for the electric potential solve in epotFoam the PCG solver was used with DIC, but without corrector iterations. PCG and DIC were also used for the p_B solve in mhdFoam.
In all cases for strong and weak scaling, the mesh was generated using OpenFOAM's blockMesh utility, generating block-structured hexahedral meshes. The domain was decomposed for parallelisation using the Scotch algorithm [30] distributed with OpenFOAM, using default settings. The Scotch algorithm features automated decomposition strategies such as minimising the number of processor boundaries. Strong scaling was studied using three cases of the base problem at different resolutions, with the timestep adjusted according to the Courant-Friedrichs-Lewy (CFL) condition [13], as detailed in table 1. Mesh grading was used in the 80k and 640k cell cases to increase resolution smoothly between the centre of the domain and each wall. This was achieved by setting the expansion ratio E, defined in OpenFOAM as the cell size ratio between the first and last cell in a direction, with the individual cell sizes between those cells determined by a geometric progression, as illustrated in the sketch below. For each case a single grading ratio E_y,z = E_y = E_z (specified in table 1) was set in both the y- and z-directions, while the resolution in the flow direction was uniform. A cross-section of the mesh for the 80k cell case is shown in figure 3. The 80k and 640k cell cases used N_ref = 1 core. The highest resolution 10 million cell case used a full node for the reference case, N_ref = 76 cores, due to the expected wall times exceeding limits on the HPC cluster used for N_ref < 76 at this resolution.
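The sketch below shows how cell sizes follow from an expansion ratio under the blockMesh convention of a geometric progression between the first and last cell. The example of 20 cells over 1 m with E = 10 is illustrative of half of a graded cross-section direction and is not an exact reproduction of the meshes in table 1.

```python
import numpy as np

# Cell sizes along one graded direction, assuming the blockMesh convention in
# which the expansion ratio E is the ratio of the last cell size to the first,
# with intermediate sizes following a geometric progression.
def graded_cell_sizes(length, n_cells, expansion_ratio):
    if np.isclose(expansion_ratio, 1.0):
        return np.full(n_cells, length / n_cells)
    r = expansion_ratio ** (1.0 / (n_cells - 1))        # cell-to-cell growth factor
    first = length * (1.0 - r) / (1.0 - r ** n_cells)   # geometric series sums to length
    return first * r ** np.arange(n_cells)

# Illustrative half cross-section (wall to centre) with cells growing away
# from the wall: 20 cells over 1 m with E = 10.
sizes = graded_cell_sizes(1.0, 20, 10.0)
print(f"wall cell: {sizes[0]:.4f} m, centre cell: {sizes[-1]:.4f} m, "
      f"ratio: {sizes[-1] / sizes[0]:.1f}, total: {sizes.sum():.4f} m")
```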
To study weak scaling, the reference case was set up with (50, 20, 10) cells run on a single core (N_ref = 1), with a timestep of 0.008 s. A constant n = 10 000 ± 1% cells per core was maintained as N was increased by increasing the number of cells in each direction from the reference case by a factor of (N/N_ref)^{1/3}, manually adjusting the resolution by up to three cells in each direction to minimise deviation from n = 10 000 cells per core. The timestep was decreased by the same factor in accordance with the CFL condition [13]. Because mesh grading would not be needed for the higher resolution cases, it was not used for any case in the weak scaling study, in order to maintain mesh uniformity as a control variable; accuracy was therefore expected to improve with increasing number of cells (and so increasing number of cores) as the boundary layers became better resolved. The root-mean-square error (RMSE) in velocity u_x = u · x̂ and the error in pressure drop K were also calculated based on the analytic Shercliff solution [15], with the cross-sectional velocity profile taken at x = 13 m and the pressure drop measured as the average pressure gradient over the region 12.5 ⩽ x ⩽ 13.5 m. The RMSE was normalised by dividing by the mean absolute analytic value of u_x over the slice, while the pressure drop error was normalised by the analytic pressure drop. For the analytic solution, the Hunt solution [16] was used in the exponential form described by Ni et al [28] with 200 Fourier iterations, which simplifies to the Shercliff solution in the case of perfectly insulating Hartmann and side walls. This form of the solution was used due to its higher accuracy in numerically computing the exponential terms, rather than computing the hyperbolic functions of the original solutions.
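A minimal sketch of this case construction is given below, scaling the reference (50, 20, 10) mesh and 0.008 s timestep by (N/N_ref)^{1/3}. The rounding here is plain nearest-integer, whereas the study additionally adjusted by up to three cells per direction to hold n close to 10 000.

```python
import numpy as np

# Sketch of the weak-scaling case construction: scale each mesh direction by
# (N/N_ref)^(1/3) so that cells per core stays near 10,000, and shrink the
# timestep by the same factor in line with the CFL condition.
base_cells = np.array([50, 20, 10])
base_dt = 0.008  # s
n_ref = 1

for n_cores in [1, 8, 64, 512]:
    factor = (n_cores / n_ref) ** (1.0 / 3.0)
    cells = np.rint(base_cells * factor).astype(int)
    dt = base_dt / factor
    per_core = cells.prod() / n_cores
    print(f"N = {n_cores:4d}: cells = {tuple(int(c) for c in cells)}, "
          f"dt = {dt:.5f} s, cells/core = {per_core:.0f}")
```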
The computation time was calculated as the mean time per timestep for the 2nd to 21st timestep. This eliminated the dependence on the timestep, as the overall simulation time was expected to increase with decreasing timestep in the weak scaling case, such that the scaling assessment was based only on work per timestep. Each strong scaling or weak scaling case with each solver was repeated three times, with the results averaged. In the weak scaling case, this process was also performed for the total number of iterations per timestep for all the field solves, in order to understand the behaviour of the problem with increasing resolution.

Figure 4. Strong scaling profiles for the 80k cell (solid lines) and 640k cell (dashed lines) cases, for mhdFoam (orange) and epotFoam (blue). Ideal strong scaling is shown in (b) by the dotted black line. The shaded region marks runs using only a single node, where the number of cores is varied up to the total number of cores on the node. Approximate values of n_0.8 are marked in (d), with the horizontal line showing 80% efficiency.
Scaling behaviour was studied using the Cambridge Service for Data Driven Discovery (CSD3) HPC cluster's Ice Lake nodes, each consisting of two 38 core Intel Xeon Platinum 8368Q CPUs for a total of 76 cores, with 256 GiB RAM per node and connected via Mellanox HDR200 Infiniband.

Scalability results and discussion
The strong scaling profiles for the 80k and 640k cases are shown in figure 4. It can be seen that epotFoam was faster per timestep, due to its simpler system of equations. The scaling profiles observed are typical of CFD codes, with parallel efficiency diminishing as the number of cores increased, due to the growing cost of communication between the domain partitions becoming dominant. As the resolution of the problem increased, n_0.8 increased, from n_0.8 ∼ 7.0 × 10³ cells per core for the 80k cell case to n_0.8 ∼ 2.5 × 10⁴ for the 640k cell case. In the higher resolution 10M cell case shown in figure 5, it can be seen that scaling was initially super-linear, demonstrating very good strong scaling for n > 10⁴. While n_0.8 ∼ 6 × 10³ cells per core in this case, this cannot be directly compared to the lower resolution strong scaling profiles due to the higher value of N_ref. In all cases, the simulation converged to the correct solution, with accuracy increasing between the three cases with increasing resolution, as expected. Solution accuracy was confirmed to be independent of the number of processors and mesh partitions. These results are detailed in table 2.
Results of the weak scaling study are shown in figure 6. While Gustafson's law predicts a linear relationship between scaled speedup S_N^weak and N/N_ref, a non-linear relationship was instead found, with the rate of increase of scaled speedup dropping off as N increased, resulting in poor weak scaling efficiency, as seen in figure 6(c). This was found to be due to the number of iterations of each solver increasing as the resolution increased, as shown in figure 6(b). As a result, the amount of work in each timestep increased by more than the intended proportional increase of resolution with resources, slowing down the solution. This may have been due to the nature of the sparse matrix problems involved in the field solves.

Figure 6. Weak scaling profiles for mhdFoam (orange) and epotFoam (blue). Ideal weak scaling is shown in (a) and (c) by the dotted black line. An increase in number of iterations is seen in (b), which plots the total number of iterations for all matrix solvers per timestep. RMSE in ux and relative error in K are shown in (d) by solid and dashed lines respectively for each solver.

It can be seen from figure 6(d) that as resolution increased, so too did the accuracy of the simulation, as was the case across the different strong scaling studies detailed in table 2. At lower resolution, epotFoam was found to be unstable and failed to converge on the solution, indicating that epotFoam particularly struggled when the Hartmann and side layers were insufficiently resolved (with only 1 to 2 cell centres in the Hartmann layers in the N = 1 and N = 2 cases). This was less of an issue for mhdFoam, for which the relative errors steadily improved as resolution increased. However, the errors became large in both cases for the N = 1216 core case, and for the cases with higher resolution the simulation failed due to numerical instability before reaching the 2 s endpoint of the simulation. In each of these high resolution failed cases, the Courant number was seen to grow before the simulation failed, despite the CFL condition being met in the problem setup.

Validation methods
To further validate these solvers, the Shercliff (perfectly insulating walls) and Hunt flow (perfectly insulating side walls with perfectly conducting Hartmann walls) cases, as described in section 2.1, were studied at increasing Hartmann numbers (Ha = 20, 100 and 1000).
The Shercliff case was set up using the same methods described in section 3.2 for both mhdFoam and epotFoam. The Ha = 20 cases were set up exactly as described for the 80k cell case in table 1. To set up the Ha = 100 and Ha = 1000 cases, the magnitude of the magnetic field applied parallel to the side walls was increased to |B₀| = 100 T and |B₀| = 1000 T respectively, and the resolution and mesh grading ratios were increased as described in table 3. The higher Ha cases used different grading ratios in the y- and z-directions to capture the stronger dependence of the Hartmann layer thickness on Ha compared with that of the side layers, varying these along with the resolution in order to reduce the size of the cells at the Hartmann and side walls by the same factor as the expected reduction in Hartmann and side layer thicknesses. Initially, the timestep for the higher Ha cases was decreased by the same factor as the change in the size of the smallest cell in y (equal to the change in Hartmann number), based on the CFL condition. However, this was found to be insufficient, with stability instead requiring the timestep to be decreased by a factor closer to the increase in Ha², as shown in table 3. The Ha = 20 and Ha = 100 cases were run on eight cores, while the Ha = 1000 cases were run on 76 cores due to the increased resolution. Due to the increased wall time requirements from the small timestep, the end time of the Ha = 1000 case was reduced to 0.02 s. To ensure the solution was fully developed, the inlet velocity in the Ha = 1000 case was increased to 100 m s⁻¹, with the flow remaining laminar at Re ∼ 200.
To set up the Hunt flow cases, the Shercliff flow cases were used as a basis, modifying only the BCs in order to reflect the perfectly conducting Hartmann walls. For mhdFoam, a zero-gradient BC was set on the Hartmann walls, while for epotFoam the electric potential BC on the Hartmann walls was set to ϕ = 0.
The RMSE in the velocity field and the error in pressure drop at x = 13 m compared to the analytic solution were calculated for each case, as described in section 3.2.

Validation results
Figure 7 shows the mhdFoam and epotFoam velocity profiles for Shercliff and Hunt flow compared to the analytic solutions, while tables 4 and 5 give the relative errors for the Shercliff and Hunt cases respectively. In most cases, both solvers were capable of accurately resolving the problem, with epotFoam typically calculating u_x more accurately than mhdFoam, but with an order of magnitude higher error in the pressure drop K; this is consistent with the results observed in figure 6, showing that this difference persisted over a wide range in both Hartmann number and resolution. For both solvers, u_x RMSE was notably higher in the Hunt flow problem. Discrepancies in u_x(y, z = 0) can be seen in figure 7 near the core of the flow.
However, mhdFoam was found to fail due to numerical instability in the Ha = 1000 Hunt flow problem after many timesteps, progressing to t = 0.0016 s before failing; even with the timestep decreased to 1 × 10⁻⁷ s, the simulation still failed, reaching only t = 0.0028 s. The cause of this remains to be identified.

Validation discussion
Both solvers were largely capable of simulating both problems, indicating that OpenFOAM may be a promising candidate for further development of LM-MHD solvers. However, various issues with these solvers were identified. Firstly, the cause of the failure of mhdFoam in the Ha = 1000 Hunt flow case is unknown to the authors and requires further investigation. Additionally, the requirement of a reduced timestep as the Hartmann number is increased poses an interesting challenge. This could be an additional stability requirement for LM-MHD; however, the authors were unable to find much research on this topic in the literature. The dependence appeared to be consistent with ∆t ∝ Ha⁻², which could be related to the magnetic damping time τ = ρ/(σ|B|²), which is understood to be relevant in the low R_m limit [10]. An argument is made in [31] that the timestep should be adjusted due to the addition of the Lorentz force as a source term in the momentum equation, resulting in a modified stability limit in which a second term supplements the timestep calculated from the CFL condition; however, this second term is not dimensionally consistent with a time, differing from τ by a factor of density, and replacing it with τ would resolve the inconsistency. A parameter sweep of density and conductivity, identifying the maximum timestep at which the simulations remain stable, would inform this hypothesis, as would studying the dependence of stability on other parameters. It should be noted that this could be a dependence specific to OpenFOAM, the chosen Euler timestepping scheme, the FVM, or this specific regime, or it could be a fundamental limitation of LM-MHD simulations. Regardless of the cause, this poses a challenge to fusion-relevant simulations, with high Hartmann numbers significantly increasing simulation costs, both from the increased number of timesteps required for a given time in silico and from the high resolution required to resolve the narrow Hartmann and side layers, with the former imposing a limitation that cannot easily be countered without parallel-in-time methods.
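As a simple plausibility check of this hypothesis (not a derived stability bound), the sketch below evaluates τ for the unit-property validation cases, in which |B| is numerically equal to Ha; if the stable timestep tracks τ, the observed Ha⁻² reduction would follow.

```python
import numpy as np

# Magnetic damping time tau = rho / (sigma * |B|^2) for the unit-property
# validation cases, where |B| = Ha numerically. A Ha^-2 reduction of the
# stable timestep would be consistent with the timestep tracking tau.
rho, sigma = 1.0, 1.0
for Ha in [20, 100, 1000]:
    B = float(Ha)
    tau = rho / (sigma * B**2)
    print(f"Ha = {Ha:5d}: tau = {tau:.1e} s")
```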
The results obtained during this work, using two readily available open-source solvers, demonstrate that these tools can be effective in the study of LM-MHD, but that they should be used with caution. Higher accuracy may be obtained by following refinements to these methods that can be found in the literature, such as the introduction of a magnetic field correction step to the mhdFoam solver investigated in [26], as well as various solvers which have been demonstrated in published work but have not been made directly available as open-source codes [5]. Additional limitations of these codes can be seen when considering that practical applications may involve finite wall conductivity, turbulence, thermal coupling and other complications, the simulation of which would require modification of the simple solvers demonstrated in this work. Promise is shown by the OpenFOAM-based inductionless solvers developed by Blishchik et al [32], which have become formally available in an open-source repository (https://github.com/Kommbinator/MHD_Solvers_OpenFOAM) since the conclusion of the work presented in this paper.

Initial implementation of an LM-MHD solver in Proteus
Proteus is an application built on the MOOSE FEM framework [33], focusing on fluid dynamics and related domains for multiphysics coupling, which can be found at https://github.com/aurora-multiphysics/proteus. This work aimed to introduce a simple steady state incompressible resistive MHD solver into Proteus as a proof of concept, in order to test the concept of using MOOSE for LM-MHD problems. For this initial implementation, the inductionless approximation (4) was used. This decision was made to minimise the numerical complexity and to test the simplest case of incompressible resistive MHD, with the inductionless approximation being valid in many typical fusion-relevant problems. This section details the attempted implementation and highlights some of the challenges and limitations that were identified. Future work aims to resolve these issues and develop an improved implementation for the Proteus application.

Implementation
To implement this solver, the incompressible FEM part of the MOOSE Navier-Stokes module was used as a base [34]. This includes Kernel objects, which compute weak form terms in PDEs and their Jacobians, for most of the terms in the continuity and momentum equations, with the exception of the Lorentz force term. Automatic differentiation (AD) was used in all kernels in this implementation to calculate Jacobians, reducing the complexity of implementing new kernels as well as improving convergence by using an accurate Jacobian, at the cost of increased memory requirements [35].
The required additional kernels were identified by expressing each of the terms of (4) in weak form, rearranging each equation such that the right hand side is zero, multiplying by a test function ψ and integrating over the domain Ω. Terms with time derivatives were dropped in order to solve in steady state. Current density J was eliminated by substituting (4d) into the momentum equation (4a). In (4a) the only additional term is the Lorentz force,

$$\int_\Omega \psi \, \sigma(-\nabla\phi + \mathbf{u} \times \mathbf{B}_0) \times \mathbf{B}_0 \, \mathrm{d}\Omega, \tag{11}$$

which can alternatively be expressed as two separate kernels for the electrostatic and velocity-dependent terms respectively. The Poisson equation for electric potential converts to weak form as

$$\int_\Omega \nabla\psi \cdot \nabla\phi \, \mathrm{d}\Omega - \int_{\partial\Omega} \psi \, (\nabla\phi \cdot \mathbf{n}) \, \mathrm{d}\Gamma - \int_\Omega \nabla\psi \cdot (\mathbf{u} \times \mathbf{B}_0) \, \mathrm{d}\Omega + \int_{\partial\Omega} \psi \, (\mathbf{u} \times \mathbf{B}_0) \cdot \mathbf{n} \, \mathrm{d}\Gamma = 0, \tag{12}$$

where the terms integrated over the boundaries of the domain ∂Ω are the BCs. The first term in (12) represents diffusion of electric potential and is implemented in MOOSE in the ADDiffusion kernel, while the third term represents the production of electric potential.
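To illustrate the split of (11) into separate kernels, the minimal numpy sketch below evaluates the electrostatic and velocity-dependent contributions at a single quadrature point. It is not the Proteus kernel code; the function name, arguments and example values are illustrative assumptions only.

```python
import numpy as np

def lorentz_kernel_terms(psi, grad_phi, u, B0, sigma):
    """Evaluate the two Lorentz-force contributions at one quadrature point,
    as they would be split between an 'electrostatic' kernel and a
    'velocity-dependent' kernel.

    psi      : scalar test function value at the point
    grad_phi : gradient of electric potential, shape (3,)
    u        : velocity, shape (3,)
    B0       : imposed magnetic flux density, shape (3,)
    sigma    : electrical conductivity
    """
    electrostatic = psi * sigma * np.cross(-grad_phi, B0)
    velocity_dep = psi * sigma * np.cross(np.cross(u, B0), B0)
    return electrostatic, velocity_dep

# Example: uniform vertical field, flow in x, potential gradient in z.
es, vd = lorentz_kernel_terms(
    psi=1.0,
    grad_phi=np.array([0.0, 0.0, 0.1]),
    u=np.array([1.0, 0.0, 0.0]),
    B0=np.array([0.0, 20.0, 0.0]),
    sigma=1.0,
)
print("electrostatic term:     ", es)
print("velocity-dependent term:", vd)  # opposes the flow, as expected
```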
The first attempt at implementing these kernels in MOOSE, referred to here as the kernel method, utilised existing kernels in the MOOSE Navier-Stokes module for the incompressibility constraint and momentum equation. New kernels were added to Proteus and used to implement the separate electrostatic and velocity-dependent terms of (11). This implementation requires using second order velocity and first order pressure variables, due to the lack of stabilisation terms. The Poisson equation for electric potential (12) was implemented using an existing MOOSE diffusion kernel for the ∇²ϕ term and a new kernel in Proteus for the ∇ · (u × B₀) electric potential production term.
The second method took a similar approach, but instead more closely followed the design philosophy of the MOOSE incompressible Navier-Stokes implementation, in which strong form residuals are calculated in Material objects and then computed in weak form in Kernel objects. This allows the computed strong forms to be used elsewhere in the solve, such as in stabilisation terms. Adding the pressure-stabilised Petrov-Galerkin (PSPG) and streamline-upwind Petrov-Galerkin (SUPG) terms into the incompressibility constraint and momentum equation respectively provides stabilisation such that equal first order pressure and velocity variables can be used [34]. These MOOSE Navier-Stokes Material objects were duplicated in Proteus, adding the Lorentz force strong form parts σ∇ϕ × B₀ and σ(u × B₀) × B₀ to the total strong form residual of the momentum equation. This allowed these terms to be included in the residuals required for the PSPG and SUPG stabilisation terms. A kernel was added to add both Lorentz force components to the momentum equation. This implementation is referred to as the material method. To set up a problem with this method, the same approach as the kernel method was used, but instead with first order velocity and pressure variables, using the new Material objects and adding the PSPG and SUPG kernels.

Methods
Both methods were tested against the Shercliff and Hunt problems for Ha = 20. The cases were set up to match the Ha = 20 validation cases described in section 3.4 in terms of physical properties and BCs, with the exception of the inlet, which was set with a parabolic velocity profile with a mean velocity of 1 m s⁻¹. As with those cases, a 100 × 40 × 20 mesh was used; however, mesh grading was limited to a grading ratio of 5 due to poor convergence of the solvers with stronger grading (though this was tested only with the asm preconditioner). The kernel method used 20-node hexahedra as required for the second order vector Lagrange velocity shape functions, with eight-node hexahedra used for the material method for first order velocity. Velocity was initialised with the parabolic flow profile used for the inlet. For both methods, pressure and electric potential were represented by first order Lagrange shape functions. The magnetic field was represented by a first order Lagrange AuxVariable, which could in principle reflect a non-uniform magnetic field or be replaced by a coupled variable, but in this case was simply calculated as uniform B₀ = (0, 20, 0) T. Each case was solved in steady state using Newton's method [35], using automatic scaling and limiting linear iterations to 100 and non-linear iterations to 1000. Each method was tested in the Shercliff and Hunt flow cases with several preconditioners: asm, bjacobi, gamg, gasm, ilu and ksp. Additionally, tests were conducted using 8 cores and 28 cores in order to gain a basic understanding of strong scaling, setting a time limit of 24 h and 12 h for 8 cores and 28 cores respectively. On 28 cores, both replicated and distributed mesh methods were tried; however, this was not found to make a significant difference to the results.
Case input files for these simulations can be found in the open data repository associated with this paper (see data availability statement) [29].

Results and discussion
The kernel method failed to solve the Shercliff case with the preconditioners tested, but was more successful in the Hunt case, for which convergence was achieved with multiple preconditioners. The material method was found to converge with various preconditioners for both cases. For the preconditioners with which the simulations converged on both 8 cores and 28 cores, strong scaling was demonstrated for both methods, as expected based on [8], with a 3.5 times increase in computational resources resulting in speedups of approximately 2.5.
Significant room for improvement was identified by assessing the accuracy of the solvers. Inaccuracy was a major issue for the material method, with errors for Shercliff flow of RMSE(u_x) = 14% and K error = 10%, and for Hunt flow RMSE(u_x) = 18% and K error = 11%. The kernel method was found to be more accurate, achieving RMSE(u_x) = 4.5% and K error = 2.0% for Hunt flow, though this accuracy is still considered insufficient for most applications. The higher accuracy of the kernel method could be due to either the increased number of DOFs from the higher element order for velocity or potential inaccuracies in the material method stabilisation terms. Convergence was found to be limited to low mesh grading ratios, E ⩽ 8. The Proteus solvers were also found to be at least an order of magnitude slower than the OpenFOAM solvers when given comparable computing resources.
Overall, the relatively simple approach taken in this initial proof-of-concept work, casting the inductionless MHD equations (4) in weak form and implementing these directly in the Proteus MOOSE app, was found to be insufficient. Instead, future efforts will work to implement LM-MHD in Proteus by conducting a more rigorous derivation of the weak form required for finite element analysis, considering more carefully the implicit assumptions made, such as finite element spaces and shape functions. This is expected to require more complex element types than the Lagrange elements used in the current implementation. Additionally, this work only considered the monolithic approach of solving for all fields in a single matrix; however, splitting the problem by fields would allow for different preconditioners to be selected for each field, which could improve convergence.
Previous work in this field has demonstrated successful implementation of inductionless MHD in an FEM framework [36, 37], as well as extending this to incorporate thermal effects [38]; these resources will be valuable for informing the future Proteus LM-MHD implementation. The algorithms used are rigorously derived to provide stable, accurate implementations, with significant complexity compared to the simple FEM implementation presented in this paper. Ensuring a proper conformal representation of the LM-MHD equations in finite element form is expected to improve accuracy, convergence and simulation times. Future work will investigate following similar methods to develop the MOOSE implementation.

Conclusions
This work has identified OpenFOAM to be a useful package for portable, scalable, open-source CFD, which could be a strong candidate for developing open-source LM-MHD capability. The mhdFoam and epotFoam solvers are good examples of the full induction and inductionless formulations respectively, with both capable of solving most of the simple LM-MHD cases considered; however, the accuracy of these solvers as reported in this paper (on the order of 1% error) should be contrasted with that of the closed-source research codes presented in the literature. It may be possible to achieve higher accuracy with these solvers by increasing resolution, solving to lower residuals, or varying solution parameters such as the interpolation, derivative and integration schemes. Both solvers demonstrated good strong scaling, though n_0.8 was seen to increase with increasing resolution. Room for improvement was observed in terms of weak scaling, which could be approached by performing a preconditioner study to identify the optimal set of preconditioners for weak scaling.
However, both solvers lack features that would be required for more practical applications, such as the ability to simulate walls of arbitrary conductivity or to couple in thermal effects such as buoyancy and electromagnetic heating. Furthermore, this work only validated the solvers against simple problems with analytic solutions. More complex validation cases exist, such as those outlined in [39], which include turbulent behaviour, thermal effects, and extension to Ha ∼ 10⁴. Turbulent flow could be simulated in these solvers by DNS; however, for complex multiphysics simulations it would likely be beneficial to reduce resolution requirements by using turbulence modelling methods such as LES [9]. Future work aims to explore more advanced open-source LM-MHD solvers, such as those developed by Blishchik et al [32], which incorporate electromagnetic coupling with the walls and dynamic Smagorinsky LES turbulence modelling in addition to other features. Additionally, it would be beneficial for some simple cases to be able to solve in steady state, though this is only a benefit if the behaviour is steady, which is not necessarily the case for practical fusion applications.
An initial implementation of LM-MHD was added to the Proteus MOOSE application and studied, in order to test the viability of this approach. This was found to require significant improvements, with poor accuracy and convergence (particularly with high mesh grading ratios) becoming evident. However, this study has identified areas that can be improved on in future work, and may inform future research efforts on FEM LM-MHD implementations.
Overall, it was found that open-source LM-MHD solvers have the capability to become viable, scalable, portable solvers for inclusion in multiphysics packages for fusion research. OpenFOAM is clearly a promising candidate, with significant existing research in the literature and some more advanced solvers, provided issues with weak scaling can be overcome. However, it is clear that this area of research has fallen behind the closed-source development of prominent solvers in the field, such as those demonstrated in [5], and as such there is significant room for further open-source development. Building on these existing LM-MHD solvers and providing improved open-source methods for simulating liquid metals in fusion-relevant conditions would be of great benefit to the wider fusion community, particularly if developed with an emphasis on scalability and portability, in order to maximise the usefulness of these solvers as components coupled into multiphysics packages for studying integrated fusion components such as liquid-metal breeder blankets.

Figure 1 .
Figure 1. Shercliff flow [15] (perfectly insulating walls) for Ha = 20. Flow is into the page, and the magnetic field is imposed vertically. The colour map shows velocity, while the contours show the induced magnetic field and also correspond to induced current field lines. The opposing pair of walls perpendicular to B are known as the Hartmann walls and the remaining walls parallel to B are the side walls. The core flow is seen to be inviscid. Velocity varies rapidly in the Hartmann and side layers.

Figure 2 .
Figure 2. Hunt flow [16] (conducting Hartmann walls and perfectly insulating side walls) for Ha = 20. Velocity is concentrated into jets along the side walls. At high Ha, reverse flow jets can form between the side wall jet and core regions.

Figure 3 .
Figure 3. Transverse cross-section of the mesh for the 80k cell strong scaling case, with 40 cells in y and 20 cells in z with expansion ratio Ey,z = 10 in both directions.

Figure 5 .
Figure 5. Inter-node strong scaling profiles for the 10M cell case, for mhdFoam (orange) and epotFoam (blue). Ideal strong scaling is shown in (a) by the black line; super-linear scaling can be seen between N = N_ref and N = 2N_ref, and between N = 2N_ref and N = 4N_ref.

Figure 7 .
Figure 7. Comparison of velocity profiles computed with mhdFoam and epotFoam to the analytic solution for the Shercliff and Hunt problems. Plots of ux(y, z = 0) and ux(y = 0, z), measured at x = 13 m, are shown in the left and right parts of each subplot respectively. For Ha = 20, shown in (a) and (b), the full slices are shown. For Ha = 100, shown in (c) and (d), and for Ha = 1000, shown in (e) and (f), the domain shown is restricted to the narrow boundary layers on one side of the symmetrical domain to show the ability of the solvers to simulate the strong velocity gradients.

Table 1 .
Spatial and temporal discretisation in the strong scaling cases. The number of cores in the reference cases is also stated.

Table 2 .
Relative errors in cross-sectional velocity field ux and pressure drop K at x = 13 m for both mhdFoam and epotFoam with varying resolution in the strong scaling studies. The error values presented were averaged across all runs for different numbers of cores, with minor variations in the error values displaying no trend with increasing number of cores.

Table 3 .
Spatial and temporal discretisation in the Ha = 20, 100 and 1000 Shercliff and Hunt flow validation cases.

Table 4 .
Relative errors in cross-sectional velocity field ux and pressure drop K at x = 13 m for both mhdFoam and epotFoam for the Shercliff flow case with varying Hartmann number.

Table 5 .
Relative errors in cross-sectional velocity field ux and pressure drop K at x = 13 m for both mhdFoam and epotFoam for the Hunt flow case with varying Hartmann number.