Probability flow solution of the Fokker–Planck equation

The method of choice for integrating the time-dependent Fokker–Planck equation (FPE) in high dimension is to generate samples from the solution via integration of the associated stochastic differential equation (SDE). Here, we study an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability. Acting as a transport map, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time. Unlike integration of the stochastic dynamics, the method has the advantage of giving direct access to quantities that are challenging to estimate from trajectories alone, such as the probability current, the density itself, and its entropy. The probability flow equation depends on the gradient of the logarithm of the solution (its ‘score’), and so is a priori unknown. To resolve this dependence, we model the score with a deep neural network that is learned on-the-fly by propagating a set of samples according to the instantaneous probability current. We show theoretically that the proposed approach controls the Kullback–Leibler (KL) divergence from the learned solution to the target, while learning on external samples from the SDE does not control either direction of the KL divergence. Empirically, we consider several high-dimensional FPEs from the physics of interacting particle systems. We find that the method accurately matches analytical solutions when they are available, as well as moments computed via Monte-Carlo when they are not. Moreover, the method offers compelling predictions for the global entropy production rate that outperform those obtained from learning on stochastic trajectories, and can effectively capture non-equilibrium steady-state probability currents over long time intervals.


Introduction
The time evolution of many dynamical processes occurring in the natural sciences, engineering, economics, and statistics is naturally described in the language of stochastic differential equations (SDEs) [14,39,12]. Typically, one is interested in the probability density function (PDF) of these processes, which describes the probability that the system will occupy a given state at a given time. The density can be obtained as the solution to a Fokker-Planck equation (FPE), which can generically be written as [44,1]

(FPE)  ∂_t ρ*_t(x) = −∇·(b_t(x) ρ*_t(x)) + ∇·(∇·(D_t(x) ρ*_t(x))),

where ρ*_t(x) ∈ R_{≥0} denotes the value of the density at time t, b_t(x) ∈ R^d is a vector field known as the drift, and D_t(x) ∈ R^{d×d} is a positive-semidefinite tensor known as the diffusion matrix. (FPE) must be solved for t ≥ 0 from some initial condition ρ*_{t=0}(x) = ρ_0(x), but in all but the simplest cases the solution is not available analytically and can only be approximated via numerical integration.
High-dimensionality. For many systems of interest - such as interacting particle systems in statistical physics [4,55], stochastic control systems [26], and models in mathematical finance [39] - the dimensionality d can be very large. This renders standard numerical methods for partial differential equations inapplicable: such methods become infeasible for d as small as five or six due to the exponential scaling of their computational complexity with d. The standard solution to this problem is a Monte-Carlo approach, whereby the SDE associated with (FPE),

(1)  dX_t = b_t(X_t) dt + √2 σ_t(X_t) dW_t,

is evolved via numerical integration to obtain a large number n of trajectories [24]. In (1), σ_t(x) satisfies σ_t(x)σ_t^T(x) = D_t(x) and W_t is a standard Brownian motion on R^d. Assuming that we can draw samples {x_0^i}_{i=1}^n from the initial PDF ρ_0, simulation of (1) enables the estimation of expectations via empirical averages

(2)  ∫_Ω φ(x) ρ*_t(x) dx ≈ (1/n) Σ_{i=1}^n φ(x_t^i),

where φ : Ω → R is an observable of interest. While widely used, this method only provides samples from ρ*_t, and hence other quantities of interest, like the value of ρ*_t itself or the time-dependent differential entropy of the system H_t = −∫_Ω log ρ*_t(x) ρ*_t(x) dx, require sophisticated interpolation methods that typically do not scale well to high dimension.
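As a concrete illustration of this Monte-Carlo approach, the following sketch integrates (1) with the Euler-Maruyama scheme and estimates an expectation via the empirical average (2). The Ornstein-Uhlenbeck drift b(x) = −x and the constant diffusion D = 1/2 are illustrative choices, not taken from the examples below.

```python
import numpy as np

# Euler-Maruyama integration of dX_t = b_t(X_t) dt + sqrt(2) sigma_t(X_t) dW_t
# for a batch of n samples, followed by an empirical average as in (2).

def euler_maruyama(b, sigma, x, dt, n_steps, rng):
    t = 0.0
    for _ in range(n_steps):
        dw = rng.normal(size=x.shape) * np.sqrt(dt)
        x = x + b(t, x) * dt + np.sqrt(2.0) * sigma(t, x) * dw
        t += dt
    return x

rng = np.random.default_rng(0)
x0 = rng.normal(size=(100_000, 1))          # samples from rho_0 = N(0, 1)
b = lambda t, x: -x                          # illustrative linear restoring drift
sigma = lambda t, x: np.sqrt(0.5)            # constant sigma, so D = sigma^2 = 1/2
xT = euler_maruyama(b, sigma, x0, dt=1e-2, n_steps=500, rng=rng)
second_moment = np.mean(xT**2)               # empirical estimate of E[X_T^2]
```

For this linear drift the stationary second moment equals D = 1/2, which provides a simple consistency check on the estimator.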
A transport map approach. Another possibility, building on recent theoretical advances that connect transportation of measures to the Fokker-Planck equation [22], is to recast (FPE) as the transport equation [59,48]

(3)  ∂_t ρ*_t(x) = −∇·(v*_t(x) ρ*_t(x)),

where we have defined the velocity field

(4)  v*_t(x) = b_t(x) − ∇·D_t(x) − D_t(x) ∇log ρ*_t(x).

This formulation reveals that ρ*_t can be viewed as the pushforward of ρ_0 under the flow map X*_{τ,t}(·) of the ordinary differential equation

(5)  (d/dt) X*_{τ,t}(x) = v*_t(X*_{τ,t}(x)),  X*_{τ,τ}(x) = x.

Equation (5) is known as the probability flow equation, and its solution has the remarkable property that if x is a sample from ρ_0, then X*_{0,t}(x) will be a sample from ρ*_t. Viewing X*_{τ,t} : Ω → Ω as a transport map, ρ*_t = X*_{0,t}♯ρ_0 can be evaluated at any position in Ω via the change of variables formula [59,48]

(6)  ρ*_t(x) = ρ_0(X*_{t,0}(x)) exp(−∫_0^t ∇·v*_τ(X*_{t,τ}(x)) dτ),

where X*_{t,0}(x) is obtained by solving (5) backward from some given x. Importantly, access to the PDF as provided by (6) immediately gives the ability to compute quantities such as the probability current or the entropy; by contrast, this capability is absent when directly simulating the SDE.
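The change of variables formula (6) can be checked numerically in a case where the score is known. The sketch below takes a one-dimensional Ornstein-Uhlenbeck process with Gaussian solution ρ*_t = N(0, c_t), propagates samples with the probability flow velocity v_t(x) = −x + Dx/c_t, accumulates log ρ_t along each trajectory via d/dt log ρ_t(X_t) = −∇·v_t(X_t), and compares against the analytical Gaussian log-density. The parameters are illustrative.

```python
import numpy as np

# Probability flow for the 1D OU process dX = -X dt + sqrt(2D) dW:
# rho*_t = N(0, c_t) with c' = 2D - 2c, score = -x/c_t, and
# v_t(x) = b(x) - D * score = x * (D/c_t - 1). Along trajectories,
# d/dt log rho_t(X_t) = -div v_t, the differential form of (6).

D, c0, dt, T = 1.0, 4.0, 1e-3, 1.0
rng = np.random.default_rng(1)
x = rng.normal(scale=np.sqrt(c0), size=2000)
logp = -0.5 * x**2 / c0 - 0.5 * np.log(2 * np.pi * c0)  # log rho_0(x)

c = c0
for _ in range(int(T / dt)):
    v = x * (D / c - 1.0)          # probability flow velocity at the samples
    div_v = D / c - 1.0            # its (spatially constant) divergence
    x = x + dt * v                 # push samples forward
    logp = logp - dt * div_v       # accumulate the Jacobian term in (6)
    c = c + dt * (2 * D - 2 * c)   # covariance ODE, forward Euler

logp_exact = -0.5 * x**2 / c - 0.5 * np.log(2 * np.pi * c)
max_err = np.max(np.abs(logp - logp_exact))
```

Up to time-discretization error, the transported log-density agrees pointwise with the exact solution, which is precisely the property that simulation of the SDE alone does not provide.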
Learning the flow. The simplicity of the probability flow equation (5) is somewhat deceptive, because the velocity v*_t depends explicitly on the solution ρ*_t of the Fokker-Planck equation (FPE). Nevertheless, recent work in generative modeling via score-based diffusion [51,52,53] has shown that it is possible to use deep neural networks to estimate v*_t, or equivalently the so-called score ∇log ρ*_t of the solution density. Here, we introduce a variant of score-based diffusion modeling in which the score is learned on-the-fly over samples generated by the probability flow equation itself. The method is self-contained and enables us to bypass simulation of the SDE entirely; moreover, we provide both empirical and theoretical evidence that the resulting self-consistent training procedure offers improved performance when compared to training via samples produced from simulation of the SDE.
1.1. Contributions. Our contributions are both theoretical and computational:
• We provide a bound on the Kullback-Leibler divergence from the estimate ρ_t produced via an approximate velocity field v_t to the target ρ*_t. This bound motivates our approach, and shows that minimizing the discrepancy between the learned score and the score of the push-forward distribution systematically improves the accuracy of ρ_t.
• Based on this bound, we introduce two optimization problems that can be used to learn the velocity field (4) in the transport equation (3) so that its solution coincides with that of the Fokker-Planck equation (FPE). Due to its similarities with score-based diffusion modeling (SBDM) in generative modeling, we call the resulting method score-based transport modeling (SBTM).
• We provide specific estimators for quantities that can be computed via SBTM but are not directly available from samples alone, such as pointwise evaluation of ρ_t itself, the differential entropy, and the probability current.
• We test SBTM on several examples involving interacting particles that pairwise repel but are kept close by common attraction to a moving trap. In these systems, the FPE is high-dimensional due to the large number of particles, which varies from 5 to 50 in the examples below. Problems of this type frequently appear in the molecular dynamics of externally-driven soft matter systems [13,55]. We show that our method can be used to accurately compute the entropy production rate, a quantity of interest in the active matter community [38], as it quantifies the out-of-equilibrium nature of the system's dynamics.
1.2. Notation and assumptions. Throughout, we assume that the stochastic process (1) evolves over a domain Ω ⊆ R^d in which it remains at all times t ≥ 0. We assume that the drift vector b_t : Ω → R^d and the diffusion tensor D_t : Ω → R^{d×d} are twice-differentiable and bounded in both x and t, so that the solution to the SDE (1) is well-defined at all times t ≥ 0. The symmetric tensor D_t(x) = D_t^T(x) is assumed to be positive semi-definite for each (t, x), with Cholesky decomposition D_t(x) = σ_t(x)σ_t^T(x). We further assume that the initial PDF ρ_0 is three times differentiable, positive everywhere on Ω, and such that H_0 = −∫_Ω log ρ_0(x) ρ_0(x) dx < ∞. This guarantees that ρ*_t enjoys the same properties at all times t > 0. Finally, we assume that log ρ*_t is K-smooth globally for (t, x) ∈ [0, ∞) × Ω, i.e.
∃K > 0 :  |∇log ρ*_t(x) − ∇log ρ*_t(y)| ≤ K|x − y|  for all x, y ∈ Ω and all t ≥ 0.

This technical assumption is needed to guarantee global existence and uniqueness of the solution of the probability flow equation. Throughout, we use the shorthand notation ẏ_t = (d/dt) y_t for any time-dependent quantity y_t.

Related work
Score matching. Our approach builds directly on the toolbox of score matching originally developed by Hyvärinen [18,17,19,20] and more recently extended in the context of diffusion-based generative modeling [51,52,54,7,10,37]. These approaches assume access to training samples from the target distribution (e.g., in the form of examples of natural images). Here, we bypass this need and use the probability flow equation to obtain the samples needed to learn an approximation of the score. Lu et al. [33] recently showed that using the transport equation (TE) with a velocity field learned via SBDM can lead to inaccuracies in the likelihood unless higher-order score terms are well-approximated. Proposition 1 shows that the self-consistent approach used in SBTM avoids these issues and ensures a systematic approximation of the target ρ*_t. Lai et al. [27] recently used a similar idea to improve sample quality with score-based probability flow equations in generative modeling.
Density estimation and Bayesian inference. Our method shares commonalities with transport map-based approaches [36] for density estimation and variational inference [61,2] such as normalizing flows [57,56,43,16,40,25]. Moreover, because expectations are approximated over a set of samples according to (2), the method also inherits elements of classical "particle-based" approaches for density estimation such as Markov chain Monte Carlo [45] and sequential Monte Carlo [6,9].
Our approach is also reminiscent of a recent line of work in Bayesian inference that aims to combine the strengths of particle methods with those of variational approximations [5,47]. In particular, the method we propose bears some similarity to Stein variational gradient descent (SVGD) [30,31,32] (see also [34,28]), in that both methods approximate the target distribution via deterministic propagation of a set of samples. The key differences are that (i) our method learns the map used to propagate the samples, whereas the SVGD map is obtained by minimizing the kernelized Stein discrepancy, and (ii) the methods have distinct goals, as we are interested in capturing the dynamical evolution of ρ*_t rather than sampling from an equilibrium density. Indeed, many of the examples we consider do not have an equilibrium density, i.e. lim_{t→∞} ρ*_t does not exist.
Approaches for solving the FPE. Most closely connected to our paper are the works by Maoutsa et al. [35] and Shen et al. [49], who similarly propose to bypass the SDE through use of the probability flow equation, building on earlier work by Degond and Mustieles [8] and Russo [46]. The critical differences between Maoutsa et al. [35] and our approach are that they perform estimation over a linear space or a reproducing kernel Hilbert space rather than over the significantly richer class of neural networks, and that they train using the original score matching loss of Hyvärinen [18], while the use of neural networks requires the introduction of regularized variants. Because of this, [35] studies systems of dimension less than or equal to five; in contrast, we study systems with dimensionality as high as 100.
Concurrently to our work, Shen et al. [49] proposed a variational problem similar to SBTM. A key difference is that SBTM is not limited to Fokker-Planck equations that can be viewed as a gradient flow in the Wasserstein metric over some energy (i.e., the drift term in the SDE (1) need not be the gradient of a potential), and that it allows for spatially-dependent and rank-deficient diffusion matrices. Moreover, our theoretical results are similar, but by avoiding the use of costly Sobolev norms lead to a practical optimization problem that we show can be solved in high dimension and over long times. In a follow-up to Shen et al. [49] and our present work, Li et al. [29] propose an algorithm that can be seen as an expectation-maximization algorithm for the loss function in (SBTM), which avoids calculation of G t according to equation (10).
Neural-network solutions to PDEs. Our approach can also be viewed as an alternative to recent neural network-based methods for the solution of partial differential equations (see e.g. [11,41,15,50,3]). Unlike these existing approaches, our method is tailored to the solution of the Fokker-Planck equation and guarantees that the solution is a valid probability density. Our approach is fundamentally Lagrangian in nature, which has the advantage that it only involves learning quantities locally at the positions of a set of evolving samples; this is naturally conducive to efficient scaling for high-dimensional systems.

Methodology
3.1. Score-based transport modeling. Let s_t : Ω → R^d denote an approximation to the score ∇log ρ*_t of the target, and consider the solution ρ_t : Ω → R_{≥0} to the transport equation

(TE)  ∂_t ρ_t(x) = −∇·(v_t(x) ρ_t(x)),  v_t(x) = b_t(x) − ∇·D_t(x) − D_t(x) s_t(x).

Our goal is to develop a variational principle that may be used to adjust s_t so that ρ_t tracks ρ*_t. Our approach is based on the following inequality, whose proof may be found in Appendix B.1:

Proposition 1 (Control of the KL divergence). Assume that the conditions listed in Sec. 1.2 hold. Let ρ_t denote the solution to the transport equation (TE), and let ρ*_t denote the solution to the Fokker-Planck equation (FPE). Assume that ρ_{t=0}(x) = ρ*_{t=0}(x) = ρ_0(x) for all x ∈ Ω. Then

(8)  (d/dt) KL(ρ_t ∥ ρ*_t) ≤ (1/2) ∫_Ω |s_t(x) − ∇log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx,

where |·|²_{D_t(x)} = ⟨·, D_t(x)·⟩. In particular, (8) implies that for any T ∈ [0, ∞) we have explicit control on the KL divergence:

(9)  KL(ρ_T ∥ ρ*_T) ≤ (1/2) ∫_0^T ∫_Ω |s_t(x) − ∇log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx dt.

Remarkably, the right-hand side of (9) only depends on the approximate ρ_t and does not involve ρ*_t: it states that the accuracy of ρ_t as an approximation of ρ*_t can be improved by enforcing agreement between s_t and ∇log ρ_t. This means that we can optimize (9) without making use of external data from ρ*_t, which offers a self-consistent objective function to learn the score s_t using (TE) alone.
The primary difficulty with this approach is that ρ_t must be considered as a functional of s_t, since the velocity v_t used in (TE) depends on s_t. To render the resulting minimization of the right-hand side of (9) practical, we can exploit the fact that (TE) can be solved via the method of characteristics, as summarized in Appendix A. Specifically, if Ẋ_t(x) = v_t(X_t(x)) is the probability flow equation associated with the velocity v_t, then ρ_t = X_t♯ρ_0. This means that the expectation of any function φ over ρ_t can be expressed as the expectation of φ(X_t(·)) over ρ_0. Observing that the score of the solution to (TE) along trajectories of the probability flow, ∇log ρ_t(X_t(x)), solves a closed equation leads to the following proposition.
Proposition 2 (Score-based transport modeling). Assume that the conditions listed in Sec. 1.2 hold, and let (X_t(x), G_t(x)) solve the coupled system

(10)  Ẋ_t(x) = v_t(X_t(x)),  X_{t=0}(x) = x,
      Ġ_t(x) = −[∇v_t(X_t(x))]^T G_t(x) − ∇(∇·v_t)(X_t(x)),  G_{t=0}(x) = ∇log ρ_0(x),

with v_t(x) = b_t(x) − ∇·D_t(x) − D_t(x) s_t(x). Then ρ_t = X_t♯ρ_0 solves (TE), the equality G_t(x) = ∇log ρ_t(X_t(x)) holds, and for any T ∈ [0, ∞)

(11)  KL(ρ_T ∥ ρ*_T) ≤ (1/2) ∫_0^T ∫_Ω |s_t(X_t(x)) − G_t(x)|²_{D_t(X_t(x))} ρ_0(x) dx dt.

Moreover, if s* is a minimizer of the constrained optimization problem

(SBTM)  min_s ∫_0^T ∫_Ω |s_t(X_t(x)) − G_t(x)|²_{D_t(X_t(x))} ρ_0(x) dx dt  subject to (10),

then the map X*_t associated to any minimizer is a transport map from ρ_0 to ρ*_t, i.e.

(12)  x ∼ ρ_0 implies that X*_t(x) ∼ ρ*_t for all t ∈ [0, T].

Proposition 2 is proven in Appendix B.3. The result also holds with a standard Euclidean norm replacing the diffusion-weighted norm, in which case the minimizer is unique and is given by s*_t(x) = ∇log ρ*_t(x). In the special case when the SDE is an Ornstein-Uhlenbeck process, the score and the equations for both X_t and G_t can be written explicitly; they are studied in Appendix C.
In practice, the objective in (SBTM) can be estimated empirically by generating samples from ρ_0 and solving the equations for X_t(x) and G_t(x) with x ∼ ρ_0. The constrained minimization problem (SBTM) can then in principle be solved with gradient-based techniques via the adjoint method. The corresponding equations are written in Appendix B.3, but they involve fourth-order spatial derivatives that are expensive to evaluate via automatic differentiation. Moreover, each gradient step requires solving a system of ordinary differential equations whose dimensionality is the number of samples used to compute expectations times the dimension of (FPE). Instead, we now develop a sequential timestepping procedure that avoids these difficulties entirely and, as a byproduct, can scale to arbitrarily long time windows.
3.2. Sequential score-based transport modeling. An alternative to the constrained minimization in Proposition 2 is to consider an approach whereby the score s_t is obtained independently at each time t so as to ensure that KL(ρ_t ∥ ρ*_t) remains small. This suggests choosing s_t to minimize (d/dt) KL(ρ_t ∥ ρ*_t), which admits the simple closed-form bound (8) shown in Proposition 1. While this explicit form can be used directly, an application of Stein's identity recovers an implicit objective, analogous to Hyvärinen score matching, that is equivalent to minimizing (d/dt) KL(ρ_t ∥ ρ*_t) but obviates the calculation of G_t. Expanding the square in (8) and applying the integration-by-parts identity

∫_Ω ⟨s_t(x), D_t(x) ∇log ρ_t(x)⟩ ρ_t(x) dx = −∫_Ω ∇·(D_t s_t)(x) ρ_t(x) dx,

the cross term can be replaced by a divergence; since the remaining square term ∫_Ω |∇log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx does not depend on s_t, we may neglect it during optimization. This leads to a simple and comparatively less expensive way to build the pushforward X*_t such that X*_t♯ρ_0 = ρ*_t sequentially in time, as stated in the following proposition.

Algorithm 1 Sequential score-based transport modeling.
1: Input: samples {x_i}_{i=1}^n from ρ_0, a score model s, a timestep ∆t, a horizon T.
2: for k = 0, 1, . . . , T/∆t do
3:   Fit the score: obtain s_{t_k} by minimizing the implicit score-matching objective over the current samples.
4:   Compute the velocity: v_{t_k}(x_i) = b_{t_k}(x_i) − ∇·D_{t_k}(x_i) − D_{t_k}(x_i) s_{t_k}(x_i).
5:   Propagate samples: x_i ← x_i + ∆t v_{t_k}(x_i).
6: end for
Proposition 3 (Sequential SBTM). In the same setting as Proposition 2, let X_t(x) solve the first equation in (10) with the velocity v_t(x) = b_t(x) − ∇·D_t(x) − D_t(x) s_t(x), where at each time t ≥ 0 the score s_t is obtained by solving

(SSBTM)  min_{s_t} ∫_Ω [ |s_t(X_t(x))|²_{D_t(X_t(x))} + 2 ∇·(D_t s_t)(X_t(x)) ] ρ_0(x) dx.

Then X_t♯ρ_0 = ρ*_t for all t ≥ 0.

Proposition 3 is proven in Appendix B.4. Critically, (SSBTM) is no longer a constrained optimization problem. Given the current value of X_t at any time t, we can obtain s_t via direct minimization of the objective in (SSBTM). Given s_t, we may compute the right-hand side of (10) and propagate X_t (and possibly G_t) forward in time. The resulting procedure, which alternates between self-consistent score estimation and sample propagation, is presented in Algorithm 1 for the choice of a forward-Euler integration routine in time. The output of the method produces a feasible solution for (SBTM) with an a-posteriori bound on the loss obtained via integration. A few remarks on Algorithm 1 are now in order.
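To make the alternation between score fitting and sample propagation concrete, the sketch below runs the sequential procedure for a one-dimensional Ornstein-Uhlenbeck process with a linear score model s(x) = a x, for which the implicit loss E[|s(x)|² + 2 s′(x)] is minimized in closed form at a = −1/E[x²]; for a neural network model this step would instead be a few steps of gradient descent. All parameters are illustrative.

```python
import numpy as np

# Sequential score-based transport sketch for dX = -X dt + sqrt(2D) dW with a
# linear score model s(x) = a * x (exact for Gaussian solutions). Each step:
#   (i)  fit the score on the current samples via the implicit loss
#        E[a^2 x^2 + 2a], minimized at a = -1 / E[x^2];
#   (ii) propagate samples with v_t(x) = b(x) - D * s(x).

D, c0, dt, T = 1.0, 4.0, 1e-3, 1.0
rng = np.random.default_rng(2)
x = rng.normal(scale=np.sqrt(c0), size=5000)

c = c0
for _ in range(int(T / dt)):
    a = -1.0 / np.mean(x**2)          # closed-form score fit on current samples
    v = -x - D * a * x                # probability flow velocity
    x = x + dt * v                    # forward-Euler sample propagation
    c = c + dt * (2 * D - 2 * c)      # analytical covariance ODE, for reference

rel_var_err = abs(np.mean(x**2) - c) / c
```

The empirical variance of the deterministically propagated samples tracks the analytical covariance of the stochastic process, which is the self-consistency property exploited by the method.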
Higher-order integrators. Algorithm 1 is stated with forward-Euler integration for simplicity. In practice, any off-the-shelf integrator can be used, such as an adaptive Runge-Kutta method, by temporal discretization of the dynamics Ẋ_t(x) = v_t(X_t(x)) and spatial discretization of the expectation over a set of samples propagated according to the equation for X_t(x). The minimization itself can be performed over a parametric class of functions such as neural networks via a few steps of gradient descent.
Divergence computation. To avoid computation of the divergence -which can be costly for neural networks with high input dimension -we can use the denoising score matching loss function introduced by [60], which we discuss in Appendix B.6. Empirically, we find that use of either the denoising objective or explicit derivative regularization is necessary for stable training to avoid overfitting to the training data; the level of regularization (or the noise scale in the denoising objective) can be decreased as the size of the dataset increases.
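As a minimal illustration of the denoising idea, the sketch below fits a linear score model to Gaussian data by regressing s(x + σε) onto −ε/σ, for which the optimal slope is −1/(c + σ²), the score slope of the σ-smoothed density. The Gaussian setting and all parameters are illustrative toys, not the neural-network loss used in the experiments.

```python
import numpy as np

# Denoising score matching sketch: perturb data with noise of scale sigma and
# regress the score model evaluated at the noised point onto -eps/sigma.
# For x ~ N(0, c), the optimal linear model s(y) = a*y has slope
# a = -1/(c + sigma^2), the score of the smoothed density N(0, c + sigma^2).

rng = np.random.default_rng(3)
c, sigma, n = 2.0, 0.3, 500_000
x = rng.normal(scale=np.sqrt(c), size=n)
eps = rng.normal(size=n)
y = x + sigma * eps                           # noised samples
target = -eps / sigma                         # denoising regression target
a_hat = np.sum(y * target) / np.sum(y * y)    # least-squares slope
```

Note that no divergence of the model ever appears in this objective, which is the computational advantage highlighted above; the noise scale σ plays the role of the regularization level.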
Time-dependence. When optimizing over a parametric class of functions, the score can be taken to be explicitly time-dependent, or the time-dependence can originate only through the parameters. In either case, all required outputs can be computed on-the-fly to avoid saving the entire history of parameters, which could be memory-intensive for large neural networks. If a time-dependent architecture is used, the method is amenable to online learning by randomly re-drawing initial conditions and optimizing over the resulting trajectory. In the numerical experiments below, we consider time-independent models with time-dependent parameters, because we found them to be sufficient.

SBTM vs. Sequential SBTM. Given the simplicity of the optimization problem (SSBTM), one may wonder whether (SBTM) is useful in practice, or whether it is simply a stepping stone to arrive at (SSBTM). The primary difference is that (SBTM) offers global control on the discrepancy between s_t and ∇log ρ_t over t ∈ [0, T] that unavoidably arises in practice due to learning and time-discretization errors. By contrast, because (SSBTM) proceeds sequentially, these errors could accumulate over time in a way that is harder to control. In the numerical examples below, we took the timestep ∆t sufficiently small, and the number of samples n sufficiently large, that we did not observe any accumulation of error. Nevertheless, (SBTM) may allow for more accurate approximation, because its loss is exactly minimized at zero and high-order derivatives of s_t are controlled through the calculation of Ġ_t.
Why not train on external data? An alternative to the sequential procedure outlined here would be to generate samples from the target ρ*_t via simulation of the associated SDE, and to approximate the score ∇log ρ*_t via minimization of the implicit score-matching loss

∫_Ω [ |s_t(x)|²_{D_t(x)} + 2 ∇·(D_t s_t)(x) ] ρ*_t(x) dx,

estimated over the SDE samples. However, neither KL(ρ_t ∥ ρ*_t) nor KL(ρ*_t ∥ ρ_t) is controlled when using this procedure, where ρ_t = X_t♯ρ_0 is the density produced by the probability flow equation. Empirically, we find in the numerical experiments that this approach is significantly less stable than sequential SBTM. In particular, and importantly for the applications we consider, we could not stably estimate the trajectory of the entropy production rate using a score model learned from the SDE with the same number of samples as used for SBTM.

Numerical experiments
In the following, we study two high-dimensional examples from the physics of interacting particle systems, where the spatial variable of the Fokker-Planck equation (FPE) can be written as x = (x^(1), . . . , x^(N)) with particle coordinates x^(i) ∈ R^d̄. Here, d̄ describes a lower-dimensional ambient space, e.g. d̄ = 2, so that the dimensionality d = Nd̄ of the Fokker-Planck equation will be high if the number of particles N is even moderate.¹ The still figures shown in this section do not fully depict the complexity of the interacting particle dynamics, and we encourage the reader to view the movies available here. With a timestep ∆t = 10⁻³, a horizon T = 10, and a fixed nNd̄ = 10⁵, we find that the sequential SBTM procedure takes around two hours for each simulation on a single NVIDIA RTX8000 GPU. In addition, we conclude with a low-dimensional example from the physics of active matter, which highlights the ability of sequential SBTM to remain stable over long times and to capture non-equilibrium probability currents.

Harmonically interacting particles in a harmonic trap.
Setup. Here we study a problem that admits a tractable analytical solution for direct comparison. We consider N two-dimensional particles (d̄ = 2) that repel according to a harmonic interaction but experience harmonic attraction towards a moving trap β_t ∈ R². The motion of the physical particles is governed by the stochastic dynamics

(13)  dx_t^(i) = (β_t − x_t^(i) + (α/N) Σ_{j=1}^N (x_t^(i) − x_t^(j))) dt + √(2D) dW_t^(i),  i = 1, . . . , N,

where α ∈ (0, 1) is a fixed coefficient that sets the magnitude of the repulsion. The dynamics (13) is an Ornstein-Uhlenbeck process in the extended variable x ∈ R^{d̄N} with block components x^(i). Assuming a Gaussian initial condition, the solution to the Fokker-Planck equation associated with (13) is a Gaussian for all time, and hence can be characterized entirely by its mean m_t and covariance C_t. These can be obtained analytically (Appendices C and D), which facilitates a quantitative comparison to the learned model. The differential entropy is given by

(14)  S_t = ½ log det(2πe C_t).

In the experiments, we take β_t = a(cos πωt, sin πωt)^T with a = 2, ω = 1, D = 0.25, α = 0.5, and N = 50, giving rise to a 100-dimensional Fokker-Planck equation. The particles are initialized from an isotropic Gaussian with mean β_0 (the initial trap position) and variance σ_0² = 0.25.
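For a linear (Ornstein-Uhlenbeck) dynamics of this type, the mean and covariance obey closed moment ODEs, and the Gaussian entropy (14) follows from the covariance. The sketch below integrates these ODEs for an assumed single-particle reduction of the trap dynamics with drift β_t − x (no interaction); the matrices and trap are illustrative stand-ins rather than the exact many-particle system.

```python
import numpy as np

# Moment ODEs for a linear SDE dx = (A x + u_t) dt + sqrt(2 D) dW:
#   m' = A m + u_t,   C' = A C + C A^T + 2 D,
# with Gaussian differential entropy S_t = 0.5 * log det(2 pi e C_t).
# Single-particle illustration: A = -I (attraction to the trap), u_t = beta_t.

d = 2
A = -np.eye(d)
Dmat = 0.25 * np.eye(d)
beta = lambda t: 2.0 * np.array([np.cos(np.pi * t), np.sin(np.pi * t)])

m, C = np.zeros(d), np.eye(d)
dt, T = 1e-3, 5.0
for k in range(int(T / dt)):
    t = k * dt
    m = m + dt * (A @ m + beta(t))
    C = C + dt * (A @ C + C @ A.T + 2 * Dmat)

S = 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * C))
# Stationary covariance solves A C + C A^T + 2 D = 0, i.e. C -> 0.25 * I here.
```

Note that the moving trap β_t affects only the mean, so the covariance (and hence the entropy) relaxes to a constant while the mean keeps circling the trap.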
¹ We would like to emphasize at this stage the difference between the number of physical particles N, which is a parameter of the system under study and sets the dimensionality of the resulting FPE, and the number of algorithmic samples n, which is a hyper-parameter that can be chosen at will to improve the accuracy of the learning.

Figure 1: The learned solution matches the entropy production rate, score, and covariance well. A movie of the particle motion can be found here.
Network architecture. We take s_t(x) = −∇U_{θ_t}(x), where the potential U_{θ_t}(·) is given as a sum of one- and two-particle terms,

(15)  U_θ(x) = Σ_{i=1}^N V_θ(x^(i)) + Σ_{i<j} W_θ(x^(i), x^(j)),

which ensures permutation symmetry amongst the physical particles by direct summation over all pairs. Modeling at the level of the potential introduces an additional gradient into the loss function, but makes it simple to enforce permutation symmetry; moreover, by writing the potential as a sum of one- and two-particle terms, the dimensionality of the function estimation problem is reduced. As motivation for this choice of architecture, we show in Appendix D.1 that the class of scores representable by (15) contains the analytical score for the harmonic problem considered in this section. To obtain the parameters θ_{t_k + ∆t} at the next timestep, we perform a warm start and initialize from θ_{t_k}, which reduces the number of optimization steps that need to be performed at each iteration. All networks are taken to be multi-layer perceptrons with the swish activation function [42]; further details on the architectures used can be found in Appendix D.
Quantitative comparison. For a quantitative comparison between the learned model and the exact solution, we study the empirical covariance Σ over the samples and the entropy production rate dS_t/dt. Because an analytical solution is available for this system, we may also compute the target score ∇log ρ*_t(x) = −C_t^{-1}(x − m_t) and measure the goodness of fit via the relative Fisher divergence

(16)  ∫_Ω |s_t(x) − ∇log ρ*_t(x)|² ρ̂(x) dx / ∫_Ω |∇log ρ*_t(x)|² ρ̂(x) dx.

In (16), ρ̂ can be taken to be equal to the current empirical estimate of ρ_t (the training data), or estimated using samples from the stochastic differential equation (the SDE data).
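The relative Fisher divergence (16) is straightforward to estimate over samples once the exact score is known. The sketch below computes it in a one-dimensional Gaussian toy setting, with a score model whose variance is deliberately misspecified; the parameters are illustrative.

```python
import numpy as np

# Relative Fisher divergence (16) estimated over samples, using the exact
# Gaussian score -C^{-1}(x - m) as the target. 1D illustration: the model
# uses a misspecified variance 1.6 in place of the true variance 1.5.

rng = np.random.default_rng(4)
c, m = 1.5, 0.3                                         # true variance and mean
x = rng.normal(loc=m, scale=np.sqrt(c), size=100_000)   # stand-in for rho-hat
score_exact = -(x - m) / c                              # grad log rho*_t at samples
score_model = -(x - m) / 1.6                            # misspecified model score
rel_fisher = np.mean((score_model - score_exact) ** 2) / np.mean(score_exact**2)
```

Normalizing by the magnitude of the exact score makes the metric dimensionless, so values of order 10⁻² indicate a close fit regardless of the scale of the problem.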
Results. The representation of the dynamics (13) in terms of the flow of probability leads to an intuitive deterministic motion that accurately captures the statistics of the underlying stochastic process. Snapshots of particle trajectories from the learned probability flow (5), the SDE (13), and the noise-free equation obtained by setting D = 0 in (13) are shown in Figure 1A.
Results for this quantitative comparison are shown in Figure 1B. The learned model accurately predicts the entropy production rate of the system and minimizes the relative metric (16) to the order of 10⁻². The noise-free system incorrectly predicts a constant and negative entropy production rate, while the SDE cannot make a prediction for the entropy production rate without an additional learning component; we study this possibility in the next example. In addition, the learned model accurately predicts the high-dimensional covariance Σ of the system (curves lie directly on top of the analytical result; the trace is shown for simplicity). The SDE also captures the covariance, but exhibits more fluctuations in the estimate; the noise-free system incorrectly estimates all covariance components as decaying to zero.

Soft spheres in an anharmonic trap.
Setup. Here, we consider a system of N = 5 physical particles in an anharmonic trap in dimension d̄ = 2 that exhibit soft-sphere repulsion. This system gives rise to a 10-dimensional (FPE), which is significantly too high for standard PDE solvers. The stochastic dynamics (17) combines a soft-sphere pair repulsion, whose strength is set by A > 0 and whose range is set by the sphere size r, with an attraction of strength B > 0 towards the moving trap β_t. We set β_t = a(cos πωt, sin πωt)^T or β_t = a(cos πωt, 0)^T with a = 2, ω = 1, D = 0.25, A = 10, and r = 0.5. We fix B = D/R² with R = √(γN) r and γ = 5.0. This ensures that the trap scales with the number of particles and that they have sufficient room in the trap to generate complex dynamics. The circular case converges to a distribution ρ*_t = ρ* ∘ Q_t that can be described as a fixed distribution ρ* composed with a time-dependent rotation Q_t, and hence the entropy production rate converges to zero by a change of variables. The linear case does not exhibit this kind of convergence, and the entropy production rate should oscillate around zero as the particles are repeatedly pushed and pulled by the trap. We make use of the same network architecture as in Sec. 4.1.
Results. Similar to Section 4.1, an example trajectory from the learned system, the SDE (17), and the noise-free system obtained by setting D = 0 are shown in Figure 2A in the circular case. The learned particle trajectories exhibit an intuitive circular motion when compared to the SDE trajectory. When compared to the noise-free system, the learned trajectories exhibit a greater amount of spread, which enables the deterministic dynamics to accurately capture the statistics of the stochastic dynamics. Numerical estimates of a single component of the covariance and of the entropy production rate are shown in Figure 2B/C, with all moments shown in Appendix D.2. The learned and SDE systems accurately capture the covariance, while the noise-free system underestimates the covariance in both the linear and the circular case. The prediction of the entropy production rate via Algorithm 1 is reasonable in both cases, exhibiting the expected convergence to and oscillation around zero in the circular and linear cases, respectively. In the inset, we show the prediction of the entropy production rate when learning on samples from the SDE; the prediction is initially offset, and later becomes divergent. We found that this behavior was generic when training on the SDE, but never observed it when training on self-consistent samples.

An active swimmer.
Setup. We now consider a model from the physics of active matter, which describes the motion of a single motile swimmer in an anharmonic trap. The swimmer can be thought of as a run-and-tumble bacterium [58]; it travels in a fixed direction for a fluctuating duration before picking a new direction at random in which to swim. The system is two-dimensional, and is given by a stochastic differential equation for the position x and velocity v of the swimmer. While low-dimensional, this system exhibits convergence to a non-equilibrium statistical steady state in which the probability current j_t(x) = v_t(x)ρ_t(x) is non-zero. Here, we show that sequential SBTM is capable of accurately capturing such currents, which is necessary to resolve the dynamics of the Fokker-Planck equation: if our goal were solely to sample at equilibrium, it would be sufficient to freeze the samples after an initial transient. Moreover, we show that the method preserves the stationary distribution over long times relative to the persistence time 1/γ of the swimmer, and does not display appreciable accumulation of error. We set γ = 0.1 and D = 1.0. Because noise only enters the system through the velocity variable v, the score can be taken to be one-dimensional, which is equivalent to learning the score only in the range of the rank-deficient diffusion matrix. Further details on the architecture can be found in Appendix D.3.

Figure 3: Phase portrait of the learned probability flow, computed by rolling out an additional set of 50 trajectories for time 5/γ with a fixed set of parameters (frozen at time t = 10/γ after learning for time 10/γ). The phase portrait depicts closed limit cycles between and centered within the modes, reminiscent of the classical phase portrait for the pendulum; here, the closed limit cycles correspond to non-equilibrium currents that preserve the steady-state density.
Low-opacity curves depict closed limit cycles, while arrows indicate the direction of the probability flow. The phase portrait reveals non-equilibrium steady-state currents, both within and between the two modes. The nullcline v = x 3 passes through the two modes (shown in blue), with an unstable equilibrium at the origin.

Results. A phase portrait for the learned probability flow dynamics is shown in
A kernel density estimate for the distribution of samples produced by the learned system, the stochastic system, and the noise-free systems are shown in Figure 4, which demonstrate that the distribution of the learned samples qualitatively matches the distribution of the SDE samples. Comparatively, the noise-free system grows overly concentrated with time, ultimately converging to a singular dirac measure at the origin. A movie of the motion of the samples (x i (t), v i (t)) t≥0 over a duration 10/γ in phase space can be seen at this link. The movie highlights convergence of the learned solution to one with a non-zero steady-state probability current that qualitatively matches that of the SDE, but which enjoys more interpretable sample trajectories.
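The qualitative behavior described above can be reproduced with a few lines of simulation. The sketch below is a hypothetical Euler–Maruyama discretization, not the paper's code: it assumes the drift ẋ = v − x³ (consistent with the stated nullcline v = x³) and an Ornstein–Uhlenbeck velocity with relaxation rate γ and assumed noise amplitude √(2γD); the exact amplitude in (17) may differ.

```python
import numpy as np

# Hypothetical Euler-Maruyama discretization of an active-swimmer SDE:
#   dx = (v - x^3) dt                            (nullcline v = x^3)
#   dv = -gamma * v dt + sqrt(2 * gamma * D) dW  (assumed noise amplitude)
rng = np.random.default_rng(0)
gamma, D, dt, n_steps = 0.1, 1.0, 1e-2, 5000

x, v = 0.0, 1.0
xs = np.empty(n_steps)
for k in range(n_steps):
    x += (v - x**3) * dt
    v += -gamma * v * dt + np.sqrt(2 * gamma * D * dt) * rng.standard_normal()
    xs[k] = x

# The anharmonic confinement keeps the position bounded; the swimmer
# dwells near x = ±|v|^(1/3), the slow branches of the nullcline.
print(xs.min(), xs.max())
```

Because the velocity decorrelates only on the slow timescale 1/γ, the position trajectory shows the long persistent excursions characteristic of run-and-tumble motion.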

Outlook and conclusions
Building on the toolbox of score-based diffusion recently developed for generative modeling, we introduced a related approach, score-based transport modeling (SBTM), that provides an alternative to simulating the corresponding SDE when solving the Fokker-Planck equation.

Appendix A. Here, we derive some results linking the solution of the transport equation (TE) with that of the probability flow equation (5).
A.1. Probability density and probability current. We begin with a lemma.
Lemma A.1. Let ρ_t : Ω → R_{≥0} satisfy the transport equation

∂_t ρ_t(x) = −∇·(v_t(x) ρ_t(x)). (A.1)

Assume that v_t(x) is C² in both t and x for t ≥ 0 and globally Lipschitz in x. Then, given any t, t' ≥ 0, the solution of (A.1) satisfies

ρ_t(x) = ρ_{t'}(X_{t,t'}(x)) exp(−∫_{t'}^{t} ∇·v_τ(X_{t,τ}(x)) dτ), (A.2)

where X_{τ,t} is the probability flow solution to (5). In addition, given any test function φ : Ω → R, we have

∫_Ω φ(x) ρ_t(x) dx = ∫_Ω φ(X_{t',t}(x)) ρ_{t'}(x) dx. (A.3)

In words, Lemma A.1 states that an evaluation of the PDF ρ_t at a given point x may be obtained by evolving the probability flow equation (5) backwards to some earlier time t' to find the point x' = X_{t,t'}(x) that evolves to x at time t, assuming that ρ_{t'}(x') is available. In particular, for t' = 0, we obtain

ρ_t(x) = ρ_0(X_{t,0}(x)) exp(−∫_0^t ∇·v_τ(X_{t,τ}(x)) dτ). (A.4)

Since the probability current is by definition j_t(x) = v_t(x)ρ_t(x), using (A.4) to express ρ_t(x) also gives the following equation for the current:

j_t(x) = v_t(x) ρ_0(X_{t,0}(x)) exp(−∫_0^t ∇·v_τ(X_{t,τ}(x)) dτ). (A.5)

Proof. The assumed C² and globally Lipschitz conditions on v_t guarantee global existence (on t ≥ 0) and uniqueness of the solution to (5). Differentiating ρ_t(X_{t',t}(x)) with respect to t and using (5) and (A.1), we deduce

d/dt ρ_t(X_{t',t}(x)) = −∇·v_t(X_{t',t}(x)) ρ_t(X_{t',t}(x)).

Integrating this equation in t from t' to t gives

ρ_t(X_{t',t}(x)) = ρ_{t'}(x) exp(−∫_{t'}^{t} ∇·v_τ(X_{t',τ}(x)) dτ).

Evaluating this expression at x = X_{t,t'}(x) and using the group properties (i) X_{t',t}(X_{t,t'}(x)) = x and (ii) X_{t',τ}(X_{t,t'}(x)) = X_{t,τ}(x) gives (A.2). Equation (A.3) can be derived by using (A.2) to express ρ_t(x) in the integral on the left-hand side, changing the integration variable via x = X_{t',t}(y), and noting that the exponential factor arising from (A.2) is precisely cancelled by the Jacobian of this change of variables. The result is the integral on the right-hand side of (A.3).
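As an illustrative check of (A.4) (not part of the original text), consider v_t(x) = −x in one dimension, for which the backward flow is X_{t,0}(x) = x eᵗ, the divergence is ∇·v_t = −1, and ρ_0 = N(0, 1) is transported to ρ_t = N(0, e^{−2t}):

```python
import numpy as np

def rho0(x):
    # Standard Gaussian initial density.
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def rho_t_exact(x, t):
    # Pushforward of rho0 under the flow of v_t(x) = -x: N(0, e^{-2t}).
    var = np.exp(-2 * t)
    return np.exp(-0.5 * x**2 / var) / np.sqrt(2 * np.pi * var)

def rho_t_via_A4(x, t):
    # Formula (A.4): rho_t(x) = rho_0(X_{t,0}(x)) * exp(-int_0^t div v dtau),
    # with backward flow X_{t,0}(x) = x e^t and div v = -1 (so the integral is -t).
    return rho0(x * np.exp(t)) * np.exp(t)

xs = np.linspace(-2.0, 2.0, 101)
err = np.abs(rho_t_exact(xs, 0.7) - rho_t_via_A4(xs, 0.7)).max()
print(err)  # agrees to machine precision
```

The exponential factor exp(t) is exactly the volume contraction of the flow, which is the mechanism behind (A.2).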
Lemma A.1 also holds locally in time for any v_t(x) that is C² in both t and x. In particular, it holds locally if we set s_t(x) = ∇ log ρ_t(x) and if we assume that ρ_0(x) is (i) positive everywhere on Ω and (ii) C³ in x. In this case, (A.1) is the Fokker-Planck equation (FPE), and (A.2) holds for the solution to that equation.
A.2. Calculation of the differential entropy. We now consider computation of the differential entropy, and state a similar result.
Lemma A.2. Assume that ρ_0 : Ω → R_{≥0} is positive everywhere on Ω and C³ in its argument. Let ρ_t : Ω → R_{≥0} denote the solution to the Fokker-Planck equation (FPE) (or, equivalently, to the transport equation (A.1) with s_t(x) = ∇ log ρ_t(x) in the definition of v_t(x)). Then the differential entropy H_t = −∫_Ω log ρ_t(x) ρ_t(x) dx can be expressed as

H_t = −∫_Ω log ρ_t(X_t(x)) ρ_0(x) dx = H_0 + ∫_0^t ∫_Ω ∇·v_τ(X_τ(x)) ρ_0(x) dx dτ, (A.9)

and its rate of change satisfies

dH_t/dt = −∫_Ω s_t(x)·v_t(x) ρ_t(x) dx = −∫_Ω s_t(X_t(x))·v_t(X_t(x)) ρ_0(x) dx. (A.10)

Proof. We first derive (A.9). Observe that applying (A.3) with φ = log ρ_t leads to the first equality. The second can then be deduced from (A.4). To derive (A.10), notice that from (A.1),

dH_t/dt = −∫_Ω ∂_t ρ_t(x) (1 + log ρ_t(x)) dx = ∫_Ω log ρ_t(x) ∇·(v_t(x) ρ_t(x)) dx = −∫_Ω v_t(x)·∇ log ρ_t(x) ρ_t(x) dx = −∫_Ω s_t(x)·v_t(x) ρ_t(x) dx.

Above, we used (A.1) and ∫_Ω ∂_t ρ_t(x) dx = 0 to obtain the second equality, integration by parts to obtain the third, and s_t = ∇ log ρ_t to get the fourth. Now, using (A.3) with φ = s_t·v_t gives the second expression in (A.10).
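For the same one-dimensional example v_t(x) = −x (an illustration, not from the original text), ∇·v_t = −1 and (A.9) gives H_t = H_0 − t, which can be checked against the closed-form entropy of the transported Gaussian using only log ρ_0 evaluated at the initial samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 200_000, 0.7

# Initial samples from rho_0 = N(0, 1); the flow of v(x) = -x contracts
# them deterministically, and div v = -1 is constant in space.
x0 = rng.standard_normal(n)
log_rho0 = -0.5 * x0**2 - 0.5 * np.log(2 * np.pi)

# (A.9): H_t = -E_{rho_0}[log rho_0] + int_0^t E_{rho_0}[div v] dtau = H_0 - t.
H_t_estimate = -log_rho0.mean() - t

# Closed form: differential entropy of N(0, e^{-2t}).
H_t_exact = 0.5 * np.log(2 * np.pi * np.e * np.exp(-2 * t))
print(H_t_estimate, H_t_exact)
```

Note that no density evaluations at time t are needed: the entropy is propagated along the particles, which is the property exploited by the method in the main text.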
A.3. Resampling of ρ_t at any time t. If the score s_t ≈ ∇ log ρ_t is known to sufficient accuracy, ρ_t can be resampled at any time t using the Langevin dynamics (A.12). In (A.12), τ is an artificial time used for sampling that is distinct from the physical time in (1). For s_t = ∇ log ρ_t, the equilibrium distribution of (A.12) is exactly ρ_t. In practice, s_t will be imperfect, with an error that increases away from the samples used to learn it; as a result, (A.12) should only be run near existing samples and for a fixed amount of time, to avoid introducing additional errors.
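A minimal sketch of this resampling step, under the simplifying assumptions that D = I, that the score is known exactly (here s(x) = −x for a standard Gaussian target), and that (A.12) takes the overdamped Langevin form X_{τ+Δτ} = X_τ + s(X_τ) Δτ + √(2Δτ) ξ:

```python
import numpy as np

rng = np.random.default_rng(2)

def score(x):
    # Exact score of the target N(0, 1): s(x) = d/dx log rho(x) = -x.
    return -x

# Unadjusted Langevin dynamics in the artificial time tau:
#   X_{k+1} = X_k + s(X_k) dtau + sqrt(2 dtau) xi_k
n_particles, d_tau, n_steps = 2000, 1e-2, 5000
X = np.full(n_particles, 3.0)  # deliberately off-equilibrium start
for _ in range(n_steps):
    X += score(X) * d_tau + np.sqrt(2 * d_tau) * rng.standard_normal(n_particles)

print(X.mean(), X.var())  # approximately 0 and 1
```

With a learned, imperfect score the same iteration is only trustworthy near the training samples, which is why the text recommends short runs initialized at existing samples.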
Appendix B. Further details on Score-Based Transport Modeling

B.1. Bounding the KL divergence. Let us restate Proposition 1 for convenience:

Proposition 1 (Control of the KL divergence). Assume that the conditions listed in Sec. 1.2 hold.

Proof. The constrained minimization problem (SBTM2) can be handled by considering the extended objective (B.2), in which µ_t is a Lagrange multiplier. The Euler-Lagrange equations associated with (B.2) can then be derived. Clearly, these equations will be satisfied if s*_t(x) = ∇ log ρ*_t(x) for all x ∈ Ω, µ*_t(x) = 0 for all x, and ρ*_t solves (FPE). This solution is also a global minimizer, because it zeroes the value of the objective. Moreover, all global minimizers must be of this form, as this is the only way to zero the objective.
It is also easy to see that there are no other local minimizers. To check this, we can use the fourth Euler-Lagrange equation to express s_t in terms of the remaining variables; this reduces the first three equations to a system for ρ_t and µ_t. Since the equation for µ_t is homogeneous in µ_t and µ_T = 0, we must have µ_t = 0 for all t ∈ [0, T), and the equation for ρ_t then reduces to (FPE).
B.3. SBTM in the Lagrangian frame. As stated, Proposition B.1 is not practical, because it is phrased in terms of the density ρ t . The following result demonstrates that the transport map identity (6) can be used to re-express Proposition B.1 entirely in terms of known quantities.
Proposition 2 (Score-based transport modeling). Assume that the conditions listed in Sec. 1.2 hold.
Then ρ_t = X_{t♯}ρ_0 solves (TE), the equality G_t(x) = ∇ log ρ_t(X_t(x)) holds, and for any T ∈ [0, ∞)

KL(X_{T♯}ρ_0 ∥ ρ*_T) ≤ ½ ∫_0^T ∫_Ω |s_t(X_t(x)) − G_t(x)|²_{D_t(X_t(x))} ρ_0(x) dx dt.

Moreover, if s*_t is a minimizer of the constrained optimization problem

min_s ∫_0^T ∫_Ω |s_t(X_t(x)) − G_t(x)|²_{D_t(X_t(x))} ρ_0(x) dx dt subject to (10), (SBTM)

then D_t(x) s*_t(x) = D_t(x) ∇ log ρ*_t(x), where ρ*_t solves the Fokker-Planck equation (FPE). The map X*_t associated to any minimizer is a transport map from ρ_0 to ρ*_t, i.e. ρ*_t = X*_{t♯}ρ_0.
Proof. Let us first show that G_t(x) = ∇ log ρ_t(X_t(x)) satisfies (10) if ρ_t = X_{t♯}ρ_0, i.e. if ρ_t satisfies the transport equation (TE). Since (TE) implies that

∂_t log ρ_t = −∇·v_t − v_t·∇ log ρ_t,

taking the gradient of this relation gives

∂_t ∇ log ρ_t = −∇(∇·v_t) − [∇v_t]ᵀ ∇ log ρ_t − [∇∇ log ρ_t] v_t.

Differentiating G_t(x) = ∇ log ρ_t(X_t(x)) in time and using this identity together with Ẋ_t(x) = v_t(X_t(x)), we deduce

d/dt G_t(x) = −∇(∇·v_t)(X_t(x)) − [∇v_t(X_t(x))]ᵀ G_t(x),

which recovers the equation for G_t(x) in (10). Hence, the objective in (SBTM) can also be written as

∫_0^T ∫_Ω |s_t(X_t(x)) − ∇ log ρ_t(X_t(x))|²_{D_t(X_t(x))} ρ_0(x) dx dt = ∫_0^T ∫_Ω |s_t(x) − ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx dt,

where the equality follows from (A.3) if ρ_t(x) satisfies (A.1). Hence, (SBTM) is equivalent to (SBTM2). The bound on KL(X_{T♯}ρ_0 ∥ ρ*_T) follows from (9).
Adjoint equations. In terms of a practical implementation, the objective in (SBTM2) can be evaluated by generating samples {x_i}_{i=1}^n from ρ_0 and solving the equations for X_t and G_t with the initial conditions X_0(x_i) = x_i and G_0(x_i) = ∇ log ρ_0(x_i). Note that evaluating this second initial condition only requires knowledge of ρ_0 up to a normalization factor. To evaluate the gradient of the objective, we can introduce equations adjoint to those for X_t and G_t; they are given in (B.9). In terms of these adjoint variables, the gradient of the objective is the gradient with respect to s_t(x) (or with respect to the parameters of this function when it is modeled by a neural network) of the extended objective.

B.4. Sequential SBTM. Let us restate Proposition 3 for convenience:

Proposition 3 (Sequential SBTM). In the same setting as Proposition 2, let X_t(x) solve the first equation in (10). Then each minimizer s*_t of (SSBTM) satisfies D_t(x) s*_t(x) = D_t(x) ∇ log ρ*_t(x), where ρ*_t is the solution to (FPE). Moreover, the map X*_t associated to s*_t is a transport map from ρ_0 to ρ*_t.

Proof. If X_{t♯}ρ_0 = ρ_t, then by definition of the pushforward we have the identity

∫_Ω φ(X_t(x)) ρ_0(x) dx = ∫_Ω φ(x) ρ_t(x) dx for any test function φ.

This means that the optimization problem in (SSBTM) is equivalent to the same problem posed as an expectation over ρ_t. All minimizers s*_t of this optimization problem satisfy D_t(x) s*_t(x) = D_t(x) ∇ log ρ_t(x). Hence, by (TE), the transport velocity built from s*_t reproduces the drift and diffusion terms of (FPE), so that ρ_t(x) = ρ*_t(x) solves (FPE).
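To make the role of the divergence term concrete, the following toy example (not from the original text) fits a one-dimensional linear score model s_θ(x) = θx using a loss of implicit score-matching type, E[|s_θ|² + 2 s_θ'], whose minimizer over samples with second moment m₂ is θ* = −1/m₂, the score slope of the matching Gaussian:

```python
import numpy as np

rng = np.random.default_rng(3)
x = 2.0 * rng.standard_normal(50_000)  # samples from N(0, sigma^2), sigma = 2

# Implicit score matching for a linear model s_theta(x) = theta * x:
#   L(theta) = E[s_theta(x)^2 + 2 s_theta'(x)] = theta^2 E[x^2] + 2 theta,
# minimized at theta* = -1 / E[x^2], i.e. the Gaussian score slope -1/sigma^2.
m2 = np.mean(x**2)
theta_star = -1.0 / m2

print(theta_star)  # approximately -0.25 for sigma = 2
```

In the sequential scheme the same fit is repeated at every timestep on the current self-consistent samples, with a neural network in place of the linear model.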
B.5. Learning from the SDE. In this section, we show that learning from the SDE alone, i.e., avoiding the use of self-consistent samples, does not provide a guarantee on the accuracy of ρ_t. We have already seen in (9) that it is sufficient to control ∫_0^T ∫_Ω |s_t(x) − ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx dt. The proof of Proposition 1 shows that control on the analogous quantity with the expectation taken over ρ*_t,

∫_0^T ∫_Ω |s_t(x) − ∇ log ρ*_t(x)|²_{D_t(x)} ρ*_t(x) dx dt, (B.13)

as would be provided by training on samples from the SDE, does not ensure control on KL(ρ_T ∥ ρ*_T). The following proposition shows that control on (B.13) does not guarantee control on KL(ρ*_T ∥ ρ_T) either.

B.6. Denoising loss. The following standard trick can be used to avoid computing the divergence of s_t(x): for ξ ~ N(0, Id),

∇·s_t(x) = lim_{α↓0} α⁻¹ E_ξ[ξ·s_t(x + αξ)], tr(D_t(x)∇s_t(x)) = lim_{α↓0} α⁻¹ E_ξ[(D_t(x)ξ)·s_t(x + αξ)]. (B.17)

To see this, Taylor expand

ξ·s_t(x + αξ) = ξ·s_t(x) + α ξ·(∇s_t(x)ξ) + O(α²). (B.18)

The expectation of the first term on the right-hand side of this equation is zero; the expectation of the second gives α∇·s_t(x). Hence, taking the expectation of (B.18) and evaluating the result in the limit as α ↓ 0 gives the first equation in (B.17). The second equation in (B.17) can be proven similarly. Replacing ∇·s_t(x) in (SSBTM) with the first expectation in (B.17) at a fixed α > 0 gives a loss that requires no divergence computations. Evaluating the square term at the perturbed data point recovers the denoising loss of Vincent [60] (B.20). We can improve the accuracy of the approximation with a "doubling trick" that applies two draws of the noise of opposite sign to reduce the variance. This amounts to replacing the expectations in (B.17) with

(2α)⁻¹ E_ξ[ξ·(s_t(x + αξ) − s_t(x − αξ))] and (2α)⁻¹ E_ξ[(D_t(x)ξ)·(s_t(x + αξ) − s_t(x − αξ))], (B.21)

whose limits as α → 0 are ∇·s_t(x) and tr(D_t(x)∇s_t(x)), respectively. In practice, we observe that the doubling trick consistently helps stabilize training. Moreover, use of the denoising loss also stabilizes training, so that it is preferable to full computation of ∇·s_t(x) even when the latter is feasible.
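To make the limits in (B.17) and (B.21) concrete (an illustrative check, not the paper's code), the sketch below compares the single-sided estimator with the antithetic doubling trick against the exact divergence of a cubic test field s(x) = −x³ (componentwise); the symmetrized version has far lower variance because the α⁻¹ ξ·s(x) term cancels:

```python
import numpy as np

rng = np.random.default_rng(4)

def s(x):
    # Test field s_i(x) = -x_i^3, with div s(x) = -3 * sum_i x_i^2.
    return -x**3

x = np.array([0.5, -1.0])
exact_div = -3.0 * np.sum(x**2)  # = -3.75

alpha, n = 0.05, 200_000
xi = rng.standard_normal((n, x.size))

# Single-sided denoising estimator of div s(x), as in (B.17).
single = np.mean(np.sum(xi * s(x + alpha * xi), axis=1)) / alpha

# "Doubling trick" (B.21): antithetic noise of opposite sign cancels the
# O(1/alpha) term xi . s(x), drastically reducing the variance.
double = np.mean(np.sum(xi * (s(x + alpha * xi) - s(x - alpha * xi)), axis=1)) / (2 * alpha)

print(exact_div, single, double)
```

Both estimators carry an O(α²) bias for this cubic field, but the Monte Carlo error of the single-sided version scales like |s(x)|/α while the symmetrized version does not.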
The unique PDF minimizing this energy is Z⁻¹e^{−U(x)}, and as t → ∞, X_t converges towards a transport map from the initial ρ_0 onto Z⁻¹e^{−U(x)}.
Because the particles are indistinguishable so long as they are initialized from a distribution that is symmetric with respect to permutations of their labeling, the moments satisfy a permutation-symmetric ansatz: the mean takes the form m_t = m̄(t) ⊗ 1_N ∈ R^{Nd}, and the covariance C_t has identical diagonal blocks C̄_d(t) and identical off-diagonal blocks C̄_o(t). The dynamics for the vector m̄ : R_{≥0} → R^d, as well as the matrices C̄_d : R_{≥0} → R^{d×d} and C̄_o : R_{≥0} → R^{d×d}, can then be obtained from (D.2) and (D.3); in particular, m̄' = β_t − m̄. For a given β : R → R^d, these equations can be solved analytically in Mathematica as a function of time, giving the mean m_t = m̄(t) ⊗ 1_N ∈ R^{Nd} and the covariance C_t. Because the solution is Gaussian for all t, we then obtain the analytical solution to the Fokker-Planck equation ρ*_t = N(m_t, C_t) and the corresponding analytical score ∇ log ρ*_t(x) = −C_t⁻¹(x − m_t).

Potential structure. Here, we show that the potential for this example lies in the class of potentials described by (15). From Equation (D.5), we have a characterization of the structure of the covariance matrix C_t for the analytical potential U_t. In particular, C_t is block circulant, and hence is block diagonalized by the roots of unity (the block discrete Fourier transform). That is, we may take a "block eigenvector" of the form ω_k = (I_{d×d} ρ^k, I_{d×d} ρ^{2k}, …, I_{d×d} ρ^{(N−1)k})ᵀ with ρ = exp(−2πi/N) for k = 0, …, N − 1. By direct calculation, this block diagonalization leads to two distinct block eigenmatrices, where V ∈ R^{Nd×Nd} denotes the matrix with block columns ω_k. The inverse matrix C_t⁻¹ then must similarly have only two distinct block eigenmatrices. By inversion of the block Fourier transform, we then find that the quadratic form defined by C_t⁻¹ splits into single-particle and pairwise contributions: we may identify the first term as Σ_{i=1}^N U_1(x^(i)) and the second term as N⁻¹ Σ_{i≠j} U_2(x^(i), x^(j)). Moreover, U_2(·, ·) is symmetric with respect to its arguments.
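The two-eigenvalue structure can be checked numerically in the scalar case d = 1, with hypothetical values for the common diagonal and off-diagonal entries; the discrete Fourier transform diagonalizes any circulant matrix:

```python
import numpy as np

# Circulant covariance with d = 1: diagonal entries c_d, off-diagonal c_o.
N, c_d, c_o = 8, 2.0, 0.3
C = np.full((N, N), c_o) + (c_d - c_o) * np.eye(N)

# Eigenvalues of a circulant matrix are the DFT of its first row.
eigs = np.sort(np.fft.fft(C[0]).real)

# Two distinct groups: c_d + (N-1) c_o (the k = 0 mode) and c_d - c_o (k != 0).
print(np.unique(np.round(eigs, 10)))
```

The inverse matrix inherits the same two-group structure with eigenvalues 1/(c_d + (N−1)c_o) and 1/(c_d − c_o), which is what produces the single-particle and pairwise potential terms above.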
Analytical entropy. For this example, the entropy can be computed analytically and compared directly to the learned numerical estimate. By definition, since ρ*_t = N(m_t, C_t), the differential entropy is H_t = ½ log det(2πe C_t).

Additional figures. Images of the learned velocity field and potential in comparison to the corresponding analytical solutions can be found in Figures D.1 and D.2, respectively. Further detail can be found in the corresponding captions. We stress that the two-dimensional images represent single-particle slices of the high-dimensional functions.

D.2. Soft spheres in an anharmonic trap.
Network architecture. Both potential terms U_{θt,1} and U_{θt,2} are modeled as fully connected networks with four hidden layers and n_hidden = 32 neurons in each layer. The initialization is identical to Appendix D.2.
Optimization and initialization. The Adam optimizer is used with an initial learning rate of η = 5 × 10⁻³ and otherwise default settings. At time t = 0, the loss (D.1) is minimized to a value less than 10⁻⁶ over n samples X_0^(i) ~ ⊗_{j=1}^N N(β_0, σ_0²I), i = 1, …, n, with σ_0 = 0.5 and n = 10⁴. Past this initial stage, the denoising loss is used with a noise scale σ = 0.1; we found that a higher noise scale regularized the problem and led to a smoother prediction for the entropy, at the expense of a slight bias in the moments. By increasing the number of samples n, the noise scale can be reduced while maintaining an accurate prediction for the entropy. The loss is minimized by taking n_opt_steps = 25 steps of Adam at each timestep. The physical timestep is set to ∆t = 10⁻³.

Figure D.1. Cross sections of the velocity field for N = 50 harmonically interacting particles in a moving harmonic trap. Columns depict the learned, analytical, noise-free, and error between the learned and analytical velocity fields, respectively. Rows indicate different time points, corresponding to t = 1.25, 2.5, 3.75, and 5.0, respectively. Each velocity field is plotted as a function of a single particle's coordinate (denoted x and y); all other particle coordinates are fixed at the location of a sample. Color depicts the magnitude of the velocity field, while arrows indicate the direction. The learned, analytical, and noise-free fields share a colorbar for direct comparison; the error occurs on a different scale and is plotted with its own colorbar. White circles in the error plot indicate samples projected onto the xy plane; locations of low error correlate well with the presence of samples.

Figure D.2. Cross sections of the potential field U_{θt}(x) computed via (15). Columns depict the learned, the analytical, and the error between the learned and analytical, respectively. Rows indicate distinct time points, corresponding to t = 1.25, 2.5, 3.75, and 5.0, respectively.
As in Figure D.1, each potential field is plotted as a function of a single particle's coordinate (denoted x and y) with the other particle coordinates fixed on a sample. All potentials are normalized via an overall shift so that the minimum value is zero. White circles in the error plot indicate samples from the learned system projected onto the xy plane.

A system of N = 5 soft-sphere particles in an anharmonic trap: moments. All components of the covariance matrix over time for the linear trap motion. The learned system and the stochastic system agree well, while the noise-free system underestimates the moments.

D.3. An active swimmer.

Setup. We directly parameterize the score s_t : R² → R using a neural network with three hidden layers and n_hidden = 32 neurons per hidden layer. Because the dynamics is anti-symmetric, we impose that s(x, v) = −s(−x, −v).
Optimization and initialization. The network initialization is identical to the previous two experiments. The physical timestep is set to ∆t = 10⁻³. The Adam optimizer is used with an initial learning rate of η = 10⁻⁴. At time t = 0, the loss (D.1) is minimized to a tolerance of 10⁻⁴ over n = 10⁴ samples drawn from the initial distribution N(0, σ_0²I) with σ_0 = 1. Thereafter, the denoising loss is used with a noise scale σ = 0.05, taking up to n_opt_steps = 25 steps of Adam at each timestep and stopping early if the norm of the gradient falls below gtol = 0.5.
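For intuition about the overall sequential procedure, the following self-contained sketch (an illustrative toy, not one of the paper's experiments) applies it to a one-dimensional Ornstein–Uhlenbeck process, where the implicit score-matching fit is available in closed form for a linear score model and the FPE variance is known analytically:

```python
import numpy as np

rng = np.random.default_rng(6)
n, dt, T = 5000, 1e-3, 1.0

# OU drift b(x) = -x with D = 1; the FPE variance obeys d(var)/dt = -2 var + 2.
x = 2.0 * rng.standard_normal(n)  # self-consistent samples from rho_0 = N(0, 4)
var0 = x.var()

for _ in range(int(T / dt)):
    # Fit a linear score s_theta(x) = theta * x by implicit score matching
    # on the current (self-consistent) samples: theta = -1 / E[x^2].
    theta = -1.0 / np.mean(x**2)
    # Propagate the samples with the probability flow: dx/dt = b(x) - D * s(x).
    x += (-x - theta * x) * dt

var_exact = 1.0 + (var0 - 1.0) * np.exp(-2 * T)
print(x.var(), var_exact)
```

Because the score is refit on the propagated samples at every timestep, the deterministic flow reproduces the relaxation of the stochastic dynamics without ever simulating noise, which is the essence of the self-consistency guarantee in Proposition 3.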