Direct Poisson neural networks: learning non-symplectic mechanical systems

In this paper, we present neural networks that learn mechanical systems which are either symplectic (for instance, particle mechanics) or non-symplectic (for instance, the rotating rigid body). Mechanical systems have Hamiltonian evolution, which consists of two building blocks: a Poisson bracket and an energy functional. We feed a set of snapshots of a Hamiltonian system to our neural network models, which then find both building blocks. In particular, the models distinguish between symplectic systems (with non-degenerate Poisson brackets) and non-symplectic systems (with degenerate brackets). In contrast with earlier works, our approach does not assume any further a priori information about the dynamics except its Hamiltonianity, and it returns Poisson brackets that satisfy the Jacobi identity. Finally, the models indicate whether a system of equations is Hamiltonian or not.


Introduction
The estimation of unknown parameters in physics and engineering is a standard step in many well-established methods and workflows. One usually starts with a model, that is, a set of assumptions and equations that are considered given, and then, based on the available data, estimates the exact form of the evolution equations for the system of interest. As an example, consider a situation where we need to estimate the mass of a distant star from its interaction with light [1], or where the moments of inertia of an asteroid are inferred from its rotations [2]. The assumptions can be of varying complexity, and the method for parameter estimation should therefore be chosen adequately.
Techniques for machine learning of dynamical systems have sparked significant interest in recent years. With the rise of neural-network-related advances, several methods have been developed for capturing the behavior of dynamical systems, each with its advantages and drawbacks. A symbolic approach (for instance [3]) allows us to learn the precise symbolic form of the equations from a predefined set of allowed operations. This is often the most efficient approach and frequently leads to an exact match between the learned and target system, but the class of captured equations is by definition limited by the algebraic operations we consider as candidates.
Alternatively, one can learn the equations of motion ẋ = f(x) directly, by learning the right-hand side f parameterized by weights θ. The function can be represented by any function approximator, in many cases by a neural network. Although this approach is very general, it does not incorporate any known physics into the procedure. There is no concept of energy of the system, no quantities are implicitly conserved, and the method thus might produce unphysical predictions. A remedy is physics-informed machine learning, which constrains the neural network models so that they obey some required laws of physics [4]. In particular, models of mechanical systems, which can be described by Hamiltonian mechanics, preserve several physical quantities like energy or angular momentum, as well as geometric quantities (for instance, the symplectic two-form) that ensure the self-consistency of the systems. A neural-network model that learns a Hamiltonian system from its trajectories in a way compatible with the underlying geometry, without any a priori knowledge about the system, has been missing, to the best of our knowledge, and the main purpose of the current manuscript is to introduce it. Moreover, we present several models that vary in how strictly they reproduce the underlying geometry, and the degree to which these models learn a system can be used to estimate whether the system is Hamiltonian or not.
Geometrization of a dynamical system. A dynamical system is described by a differential equation; in particular, a mechanical system obeys Hamiltonian evolution equations. These equations have a geometric origin that is invariant with respect to changes of coordinates and preserved during the motion of the system. The geometry of Hamiltonian systems goes back to Sophus Lie and Henri Poincaré [5,6]. Modern approaches extend to infinite-dimensional systems and provide foundations for many parts of contemporary physics [7,8,9,10]. Formulating a mechanical system geometrically typically means finding a bracket algebra (symplectic, Poisson, Jacobi, Leibniz, etc.) and a generating function (Hamiltonian, energy, or entropy). The bracket is generally required to satisfy some algebraic conditions (Jacobi identity, Leibniz identity, etc.). However, there is no general algorithmic way to obtain the Hamiltonian formulation of a given system (even if it exists) by means of analytical calculations. Such an analysis is therefore well suited for machine learning.
Apart from Hamiltonian mechanics, one can also include dissipation [11,12,13] or extend the learning of Hamiltonian systems to control problems [14]. With a suitable choice of integrator, such an approach ensures the conservation of physically important quantities, such as energy, momentum, or angular momentum.
A reversible system is a candidate for being a Hamiltonian system. For a reversible system, the starting point could be to search for a symplectic manifold and a Hamiltonian function. Learning the symplectic character (if it exists) of a physical system (including particles in potential fields and pendulums of various complexities) can be done with neural networks; see, for example, [15,16]. Symplectic geometry exists only in even-dimensional models and, due to the non-degeneracy criterion, it is very rigid. A generalization of symplectic geometry is Poisson geometry, where the non-degeneracy requirement is relaxed [17,18]. In Poisson geometry, there exists a Poisson bracket (defining an algebra on the space of functions) satisfying the Leibniz and the Jacobi identities. This generalization permits Hamiltonian analysis in odd dimensions. The degenerate character of Poisson geometry brings some advantages for the investigation of (total, or even super-) integrability of the dynamics. We provide three flavors of Direct Poisson Neural Networks (DPNNs). The least-informed flavor directly learns the Hamiltonian function and the Poisson bivector, assuming its skew-symmetry but not the Jacobi identity. Another flavor adds squares of the Jacobi identity to the loss function and thus softly imposes its validity. The most geometry-informed flavor satisfies the Jacobi identity automatically by building the Poisson bivector as the general solution of the Jacobi identity in three dimensions.
While the most geometry-informed version is typically the most successful in learning Hamiltonian systems, it is restricted to three-dimensional systems, where the general solution of the Jacobi identity is available. The second flavor is typically a bit less precise, and the least-informed flavor is usually the least precise, albeit still able to learn Hamiltonian systems to a good degree of precision.
Interestingly, when we try to learn a non-Hamiltonian model with these three flavors of DPNNs, the order of precision is reversed and the least-informed flavor becomes the most precise. The order of precision of the DPNN flavors thus indicates whether a system of equations is Hamiltonian or not.
Section 2 recalls Poisson dynamics, in particular symplectic dynamics, rigid body mechanics, the Shivamoggi equations, and the evolution of the heavy top. Section 3 introduces DPNNs and illustrates their use in learning Hamiltonian systems.
Finally, Section 4 shows DPNNs applied to a non-Hamiltonian system (a dissipative rigid body).

Hamiltonian Dynamics on Poisson Geometry

General Formulation
A Poisson bracket on a manifold M (physically corresponding to the state space, for instance the position and momentum of a body) is a skew-symmetric bilinear bracket {•, •} on the space F(M) of smooth functions on M, satisfying the Leibniz rule {F, HG} = {F, H}G + H{F, G} and the Jacobi identity {F, {G, H}} + {G, {H, F}} + {H, {F, G}} = 0, for arbitrary functions F, G, and H [10,17,18]. A manifold equipped with a Poisson bracket is called a Poisson manifold and is denoted by the pair (M, {•, •}). A function C is called a Casimir function if it commutes with all other functions, that is, {F, C} = 0 for all F. For instance, the magnitude of the angular momentum of a rigid body is a Casimir function.
Hamiltonian Dynamics. Hamiltonian dynamics can be seen as evolution on a Poisson manifold. For a Hamiltonian function (physically the energy) H on M, the Hamiltonian vector field and Hamilton's equation are, respectively, X_H = {•, H} and ẋ = {x, H}, where x ∈ M is a parametrization of the manifold M. The algebraic properties of the Poisson bracket have physical consequences. Skew-symmetry implies energy conservation, Ḣ = {H, H} = 0. The bracket determines a Poisson bivector L via L(dF, dH) := {F, H}, and this identification makes it possible to define a Poisson manifold by a tuple (M, L) consisting of a manifold and a Poisson bivector. In this notation, the Jacobi identity can be rewritten as the vanishing of the Lie derivative of the Poisson bivector with respect to the Hamiltonian vector field, L_{X_H} L = 0 [21]. In other words, the Jacobi identity expresses the self-consistency of the Hamiltonian dynamics: both building blocks (the Hamiltonian function and the bivector field) are constant along the evolution. Assuming a local coordinate system x = (x^i) on M, the Poisson bivector determines the Poisson matrix L = [L^{kl}], and the Poisson bracket and Hamilton's equations are written as {F, H} = ∂_k F L^{kl} ∂_l H and ẋ^k = L^{kl} ∂_l H, respectively. Here, we have assumed summation over repeated indices. Further, the Jacobi identity (3) turns out to be the following system of PDEs: L^{il} ∂_l L^{jk} + L^{jl} ∂_l L^{ki} + L^{kl} ∂_l L^{ij} = 0. The left-hand side of this equation is called the Jacobiator, and in the case of Hamiltonian systems it is equal to zero.
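To make the coordinate form concrete, here is a minimal sketch (the oscillator Hamiltonian and the canonical L below are illustrative choices, not taken from the paper) of Hamilton's equation ẋ^k = L^{kl} ∂_l H and of the fact that skew-symmetry of L forces Ḣ = ∂_k H L^{kl} ∂_l H = 0:

```python
# Hamilton's equation x_dot = L grad(H) for a 1D harmonic oscillator
# with the canonical (symplectic) Poisson matrix L = [[0, 1], [-1, 0]].

def grad_H(x):
    """Gradient of H(q, p) = (q^2 + p^2) / 2."""
    q, p = x
    return [q, p]

def hamiltonian_vector_field(L, gH):
    """x_dot^k = L^{kl} * dH/dx^l (summation over l)."""
    n = len(gH)
    return [sum(L[k][l] * gH[l] for l in range(n)) for k in range(n)]

L_canonical = [[0.0, 1.0], [-1.0, 0.0]]
x = [0.3, -1.2]
gH = grad_H(x)
x_dot = hamiltonian_vector_field(L_canonical, gH)

# Skew-symmetry of L implies dH/dt = grad(H) . x_dot = 0 identically.
dH_dt = sum(g * v for g, v in zip(gH, x_dot))
```

The same identity underlies energy conservation in all models below, regardless of whether L is symplectic or degenerate.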
The Jacobi identity (9) is a system of PDEs. In 3D, it is a single PDE whose general solution is known [23]. In 4D, it consists of four PDEs, for which some partial results exist, but for arbitrary n there is, to our knowledge, no general solution yet. We shall focus on the 3D, 4D, and 6D cases in the upcoming subsections.

Symplectic Manifolds. A manifold M is called symplectic if it is equipped with a closed non-degenerate two-form Ω, the symplectic two-form. A two-form Ω is non-degenerate when ι_Y Ω = 0 implies Y = 0, and it is closed when it is in the kernel of the de Rham exterior derivative, dΩ = 0. A Hamiltonian vector field X_H on a symplectic manifold (M, Ω) is defined by ι_{X_H} Ω = dH, where ι is the contraction operator (more precisely, the interior derivative) [21]. Referring to a symplectic manifold, one can define a Poisson bracket whose Hamiltonian vector fields are defined through Equation (11); the closedness of the symplectic two-form guarantees the Jacobi identity. In the Darboux-Weinstein local formula (13), the coefficient functions λ_{αβ} vanish at the origin. If k = 0 in (13), only the first term on the right-hand side remains and the Poisson manifold turns out to be a 2m-dimensional symplectic manifold.
Newtonian mechanics, for instance, fits this kind of realization. On the other hand, if m = 0 in (13), only the fully degenerate part of the Poisson bivector remains. When the Poisson bivector is non-degenerate, it commutes with the canonical Poisson bivector, and this compatibility condition is employed later in Section 3 to measure the error of DPNNs when learning symplectic Poisson bivectors.

3D Hamiltonian Dynamics
In this subsection, we focus on three-dimensional Poisson manifolds, following [24,25,26]. One of the important observations in 3D is the isomorphism between the space of vectors and the space of skew-symmetric matrices, given by L^{ij} = −ε^{ijk} J_k. This isomorphism lets us write the Jacobi identity (9) as a single scalar equation, J · (∇ × J) = 0; see, for example, [25,27,28,29]. The general solution of the Jacobi identity (17) is J = (1/φ)∇C for arbitrary functions φ and C, where C is a Casimir function. Hamilton's equation then takes the particular form ẋ = (1/φ) ∇C × ∇H. Note that by exchanging the roles of the Hamiltonian function H and the Casimir C, one can arrive at another Hamiltonian structure for the same system. In this case, the Poisson vector is defined as J = −(1/φ)∇H and the Hamiltonian function is C. This is an example of a bi-Hamiltonian system, which manifests integrability [23,30,31,32]. A bi-Hamiltonian system admits two different but compatible Hamiltonian formulations. In 3D, two Poisson vectors, say J_1 and J_2, are compatible if condition (20) holds. This compatibility condition will be used later in Section 3 to measure the error of learning Poisson bivectors in 3D by DPNNs.
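The bi-Hamiltonian role swap can be checked numerically. A small sketch, assuming the 3D general solution takes the form J = (1/φ)∇C so that Hamilton's equation reads ẋ = (1/φ) ∇C × ∇H (the concrete gradients and the constant φ below are illustrative, not from the paper):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# Illustrative choice: C = |x|^2 / 2 (a rigid-body-like Casimir),
# an arbitrary H, and a constant conformal factor phi.
x = [0.5, -1.0, 2.0]
grad_C = x[:]                        # gradient of |x|^2 / 2
grad_H = [2*x[0], 3*x[1], -1.0]      # gradient of some Hamiltonian H
phi = 2.0

# Flow generated by the pair (J, H) with J = (1/phi) grad C:
xdot_1 = cross([g / phi for g in grad_C], grad_H)
# Flow generated by the swapped pair (J', C) with J' = -(1/phi) grad H:
xdot_2 = cross([-g / phi for g in grad_H], grad_C)

# Both formulations generate the same dynamics (a bi-Hamiltonian system),
# and the flow is orthogonal to both gradients, so C and H are conserved.
```

The check works because u × v = −(v × u): swapping C and H while flipping the sign of the Poisson vector leaves the right-hand side unchanged.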
Rigid Body Dynamics. Let us consider an example of a 3D Hamiltonian system: a freely rotating rigid body. The state variable M ∈ M is the angular momentum in the frame of reference co-rotating with the body. The Poisson structure is the Lie-Poisson bracket (21) with L_{ij} = −ε_{ijk} M_k; see [33]. The Poisson bracket (21) is degenerate because it preserves any function of the magnitude of M. The Hamiltonian function is the energy E = M_x²/(2I_x) + M_y²/(2I_y) + M_z²/(2I_z), where I_x, I_y, and I_z are the moments of inertia of the body. In this case, Hamilton's equation gives Euler's rigid body equations [34].
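A minimal sketch of Euler's rigid-body equations in the form Ṁ = M × ∇E implied by L_{ij} = −ε_{ijk} M_k (the inertia values and the state below are illustrative); both the energy E and the Casimir |M|² are conserved along the flow, since Ṁ is orthogonal to both gradients:

```python
def rigid_body_rhs(M, I):
    """Euler's equations M_dot = M x grad(E), with grad(E) = (Mx/Ix, My/Iy, Mz/Iz)."""
    gE = [M[i] / I[i] for i in range(3)]
    return [M[1]*gE[2] - M[2]*gE[1],
            M[2]*gE[0] - M[0]*gE[2],
            M[0]*gE[1] - M[1]*gE[0]]

I = [1.0, 2.0, 3.0]        # illustrative moments of inertia
M = [0.4, -0.7, 1.1]       # illustrative angular momentum
M_dot = rigid_body_rhs(M, I)

# Directional derivatives of the energy and of the Casimir |M|^2 / 2
# along the flow; both vanish because M_dot = M x grad(E).
dE_dt = sum(M[i] / I[i] * M_dot[i] for i in range(3))
dC_dt = sum(M[i] * M_dot[i] for i in range(3))
```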

4D Hamiltonian Dynamics
In four dimensions, we consider the local coordinates (u, x, y, z) = (u, x). A skew-symmetric matrix L can then be identified with a couple of vectors U and V. After this identification, the Jacobi identity (9) turns out to be a system of four PDEs [35]. Note that L is degenerate (its determinant being zero) if and only if U · V = 0. For degenerate Poisson matrices, the Jacobi identity is then satisfied in three cases, up to multiplication with a conformal factor ϑ. The 4D gradient is denoted by ∇H = (∂H/∂u, ∂H/∂x, ∂H/∂y, ∂H/∂z).

Semi-direct Extension to a 6D system
Six-dimensional Hamiltonian systems can again be symplectic or non-symplectic (degenerate). The former case is represented by a particle in three dimensions, while the latter is represented, for instance, by heavy top dynamics. Since the evolution of a particle in 3D is canonical and thus analogous to the 2D dynamics, we recall only the heavy top dynamics.
A supported rotating rigid body in a uniform gravitational field is called a heavy top [39]. The mechanical state of the body is described by the position of the center of mass r and the angular momentum M. The Poisson bracket is of semidirect-product Lie-Poisson type. Even though the model is even-dimensional, it is not symplectic. Two non-constant Casimir functions are r² and M · r.
In this case, we assume the Hamiltonian function to be the sum of the rotational kinetic energy and the gravitational potential energy, where −gχ is the vector of gravitational acceleration. Hamilton's equation then gives the coupled evolution equations for M and r. In the following sections, we apply DPNNs to the models recalled here, and we show that DPNNs are capable of extracting the Poisson bivector and the Hamiltonian from simulated trajectories of the models.
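As a sanity check of the Casimirs, the following sketch assumes the standard Lie-Poisson form of the heavy-top equations, Ṁ = M × ∂H/∂M + r × ∂H/∂r and ṙ = r × ∂H/∂M (see, e.g., [39]); the inertia values, gravity vector, and state below are illustrative, not from the paper:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def heavy_top_rhs(M, r, I, g_chi):
    """Lie-Poisson heavy-top equations (standard form):
    M_dot = M x dH/dM + r x dH/dr,  r_dot = r x dH/dM."""
    dH_dM = [M[i] / I[i] for i in range(3)]   # gradient of the kinetic energy
    dH_dr = g_chi                             # gradient of the potential energy
    M_dot = [a + b for a, b in zip(cross(M, dH_dM), cross(r, dH_dr))]
    r_dot = cross(r, dH_dM)
    return M_dot, r_dot

I = [1.0, 2.0, 3.0]          # illustrative moments of inertia
g_chi = [0.0, 0.0, 9.81]     # illustrative gravity vector
M = [0.4, -0.7, 1.1]
r = [0.1, 0.2, -0.3]
M_dot, r_dot = heavy_top_rhs(M, r, I, g_chi)

# The Casimirs r^2 and M . r are conserved along the flow:
d_r2 = 2 * sum(ri * vi for ri, vi in zip(r, r_dot))
d_Mr = sum(mi * vi for mi, vi in zip(M, r_dot)) + \
       sum(ri * vi for ri, vi in zip(r, M_dot))
```

The conservation follows from vector-triple-product identities and holds for any Hamiltonian, which is exactly why r² and M · r are Casimirs of the bracket rather than properties of a particular energy.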

Learning Hamiltonian systems
When we have a collection of snapshots of a trajectory of a Hamiltonian system, how can we identify the underlying mechanics? In other words, how can we learn the Poisson bivector and the energy from the snapshots? Machine learning provides a robust method for such a task. It has been previously shown that machine learning can reconstruct GENERIC models [40,11,12], but there the Poisson bivector is typically known and symplectic. Poisson Neural Networks [20] provide a method for learning also non-symplectic mechanics, which, however, relies on the identification of the dimension of the symplectic subdynamics in the Darboux-Weinstein coordinates and on a transformation to those coordinates. Here, we show a robust method that does not need to know the dimension of the symplectic subsystem and that satisfies the Jacobi identity directly in the coordinates in which the data are prescribed. Therefore, we refer to the method as Direct Poisson Neural Networks (DPNNs).

DPNNs learn Hamiltonian mechanics directly by simultaneously training a model for the L(x) matrix and a model for the Hamiltonian H(x). The neural network that encodes L(x) only learns the upper triangular part of L, and skew-symmetry is then automatically satisfied. The network has one hidden fully connected layer equipped with the softplus activation. The network that learns H(x) has the same structure. The actual learning was implemented within the standard framework PyTorch [41], using the Adam optimizer [42]. The loss function contains squares of the deviation between the training data and predicted trajectories, which are obtained by the implicit midpoint rule (IMR) numerically solving the exact equations (for the training data) or Hamilton's equation with the trained models for L(x) and H(x) (for the predicted data). Although such a model leads to a good match between the validation trajectories and predicted trajectories, it does not need to satisfy the Jacobi identity. Therefore, we also use an alternative model where squares of the Jacobiator (9) are added to the loss function, which enforces the Jacobi identity in a soft way, see Figure 1. Finally, in 3D we know the form of the Poisson bivector, since we have the general solution (19) of the Jacobi identity. In such a case, the neural network encoding L can be simplified to a network learning C(x), and the Jacobi identity is automatically satisfied, see Figure 2. In summary, we use three methods:

• (WJ) Training L(x) and H(x) without the Jacobi identity; only skew-symmetry of L is enforced.

• (SJ) Training L(x) and H(x) with the soft Jacobi identity, where the L2-norm of the Jacobiator (9) is a part of the loss function.
• (IJ) Training C(x) and H(x) with implicitly valid Jacobi identity, based on the general solution (19) of the Jacobi identity in 3D.
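The soft-Jacobi penalty of the SJ method requires evaluating the Jacobiator L^{il}∂_l L^{jk} + L^{jl}∂_l L^{ki} + L^{kl}∂_l L^{ij}. A finite-difference sketch (illustrative, not the paper's implementation), evaluated here for the rigid-body bivector L^{ij} = −ε^{ijk} M_k, for which the Jacobi identity holds exactly:

```python
def L_rigid_body(x):
    """Rigid-body Poisson matrix L^{ij} = -eps^{ijk} M_k."""
    Mx, My, Mz = x
    return [[0.0, -Mz,  My],
            [ Mz, 0.0, -Mx],
            [-My,  Mx, 0.0]]

def jacobiator(Lfun, x, h=1e-6):
    """J^{ijk} = sum_l ( L^{il} d_l L^{jk} + L^{jl} d_l L^{ki} + L^{kl} d_l L^{ij} ),
    with the derivatives d_l approximated by central finite differences."""
    n = len(x)
    L = Lfun(x)
    dL = []                       # dL[l][a][b] ~ d L^{ab} / d x^l
    for l in range(n):
        xp, xm = list(x), list(x)
        xp[l] += h
        xm[l] -= h
        Lp, Lm = Lfun(xp), Lfun(xm)
        dL.append([[(Lp[a][b] - Lm[a][b]) / (2 * h) for b in range(n)]
                   for a in range(n)])
    return [[[sum(L[i][l] * dL[l][j][k]
                  + L[j][l] * dL[l][k][i]
                  + L[k][l] * dL[l][i][j] for l in range(n))
              for k in range(n)] for j in range(n)] for i in range(n)]

J = jacobiator(L_rigid_body, [0.4, -0.7, 1.1])
max_abs = max(abs(J[i][j][k]) for i in range(3) for j in range(3) for k in range(3))
```

In the SJ flavor, the sum of squares of such components (computed by automatic differentiation rather than finite differences) is added to the trajectory-matching loss.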
The training itself then proceeds in the following steps:

1. Simulation of the training and validation data. For a randomly generated set of initial conditions, we simulate a set of trajectories by the IMR. These trajectories are then split into steps, and the collection of steps is split into a training set and a validation set.
2. The parameters of the neural networks WJ, SJ, and IJ are trained by back-propagation on the training data, minimizing the loss function. Then, the loss function is evaluated on the validation data to report the errors.

3. A new set of initial conditions is randomly chosen and new trajectories are generated using the IMR, which gives the ground truth (GT).
4. Trajectories with the GT initial conditions are simulated using the trained models for L and H and compared with the GT.
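The IMR step used in these steps can be sketched with a simple fixed-point iteration (a minimal illustration, not the paper's solver); for a harmonic oscillator, the IMR conserves the quadratic energy up to round-off, which makes it a natural choice for generating Hamiltonian training data:

```python
def imr_step(f, x, h, iters=50):
    """Implicit midpoint rule: x_new = x + h * f((x + x_new) / 2),
    solved by fixed-point iteration starting from an explicit Euler guess."""
    x_new = [xi + h * fi for xi, fi in zip(x, f(x))]
    for _ in range(iters):
        mid = [(a + b) / 2 for a, b in zip(x, x_new)]
        x_new = [xi + h * fi for xi, fi in zip(x, f(mid))]
    return x_new

# Harmonic oscillator: q_dot = p, p_dot = -q.  The IMR conserves the
# quadratic energy (q^2 + p^2) / 2 exactly (up to fixed-point tolerance).
f = lambda x: [x[1], -x[0]]
x = [1.0, 0.0]
for _ in range(100):
    x = imr_step(f, x, 0.1)
energy = (x[0]**2 + x[1]**2) / 2
```

For stiff right-hand sides, the fixed-point iteration would be replaced by a Newton solver, but the structure-preserving property of the scheme is the same.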
In the following sections, we illustrate this procedure by learning rigid body mechanics, a particle in 2D, the Shivamoggi equations, a particle in 3D, and heavy top dynamics.

Rigid body
Figure 3a shows a sample trajectory of rigid body dynamics (23) from the GT set, as well as trajectories with the same initial conditions generated by the DPNNs. The training was carried out on 200 trajectories, while the GT consisted of 400 trajectories. Errors of the three learning methods (WJ, SJ, and IJ) are shown in Table 6. All three methods were capable of learning the dynamics well. Figure 3b shows the norm of the Jacobiator evaluated on the validation set. The Jacobiator is zero for IJ and small for SJ, while it does not go to zero for WJ. Therefore, IJ and SJ satisfy the Jacobi identity, while WJ does not. Figure 4 shows the compatibility error (20) of learning the Poisson bivector. All three methods learn the Poisson bivector well, but IJ is the most precise, followed by SJ and WJ. Finally, Figure 5 shows errors in learning the trajectories M(t). All three methods learn the trajectories well, but in this case the SJ method works slightly better.
Table 6 shows the medians of the errors for all the models and applied learning methods. N/A indicates quantities that are not supposed to be conserved. The error ∆M is calculated as the median of the square deviation of M over all time steps. The errors ∆r, ∆M · r, ∆M², and ∆r² are calculated analogously. The error ∆L in the RB case is calculated as log₁₀ of the L2-norm of the compatibility condition (20), calculated for the learned J divided by its trace and multiplied by a factor of 1000 (using the exact J when generating the GT). In the P2D and P3D cases, where the Poisson bivector is symplectic, the error is calculated as log₁₀ of the squares of the symplecticity condition (15). In the case of the Shivamoggi equations, the ∆L error is log₁₀ of the squared superintegrable compatibility condition (29). The ∆ det L errors are medians of squares of the learned det L; in the Shivamoggi and heavy top cases, the values are logarithmic, since the determinants are supposed to be zero in those cases.

Particle in 2D
A particle moving in a 2D potential field represents a four-dimensional symplectic system. The simulated trajectories were learned by the WJ and SJ methods. No implicit IJ method was used, because no general solution of the Jacobi identity in 4D is available that would cover both degenerate and symplectic Poisson bivectors. Results of the learning are in Table 6; both WJ and SJ learn the dynamics comparably well. Figure 7 shows a sample trajectory, and Figure 8 shows the distribution of the learned det(L). The median determinant (after a normalization such that the determinant equals 1.0 in the GT) was close to this value for both SJ and WJ, indicating a symplectic system.

Shivamoggi equations

The training trajectories were restricted to a region with z ∈ [−0.5, 0.5]; it was necessary to constrain the range of r = (u, x, y, z) because, for instance, when u = 0, the solutions explode [37]. Figure 9 shows the u-component along a sample trajectory. Figure 10 shows the distribution of log₁₀(det(L)), indicating that the system is indeed degenerate.
In comparison with the determinants of L learned in the symplectic case of a two-dimensional particle (P2D), see Table 6, the learned determinants are quite low in the Shivamoggi case (after the same normalization as in the P2D case).
Therefore, DPNNs are able to distinguish between symplectic and non-symplectic Hamiltonian systems.
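The determinant-based distinction can be illustrated directly (the matrices below are textbook examples, not learned ones): the canonical 4D symplectic bivector has det L = 1, while any odd-dimensional skew-symmetric bivector, such as the rigid-body one, is necessarily singular:

```python
def det(A):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# Canonical symplectic Poisson matrix in 4D: non-degenerate, det = 1.
L_symplectic = [[0, 0, 1, 0],
                [0, 0, 0, 1],
                [-1, 0, 0, 0],
                [0, -1, 0, 0]]

# Rigid-body-type Poisson matrix in 3D: degenerate, det = 0,
# since every odd-dimensional skew-symmetric matrix is singular.
M = [0.4, -0.7, 1.1]
L_degenerate = [[0.0, -M[2], M[1]],
                [M[2], 0.0, -M[0]],
                [-M[1], M[0], 0.0]]
```

A learned det(L) near its normalized reference value thus signals a symplectic system, while a determinant near zero signals degeneracy.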

Particle in 3D
Figure 11 shows the momentum along a sample trajectory of a particle in 3D space taken from the GT set, as well as trajectories with the same initial conditions obtained by the DPNNs (WJ and SJ flavors). The training and validation were done on two sets of trajectories (with 200 and 400 trajectories, respectively). Table 6 contains the numerical values of the learning errors. The median determinant is close to unity, which indicates a symplectic system.

Heavy top
Figures 12a and 12b show a sample trajectory of a heavy top from the GT set and trajectories with the same initial conditions obtained by the DPNNs. The training and validation were done on two sets of trajectories (with 300 and 400 trajectories, respectively). Numerical values of the learning errors can be found in Table 6. In particular, the learned L matrix is close to singular, indicating a non-symplectic system, and SJ learns slightly better than WJ. Similarly to the four-dimensional case, DPNNs distinguish between the symplectic (P3D) and non-symplectic (HT) cases.

Learning non-Hamiltonian systems
Let us now try to apply the WJ, SJ, and IJ methods, which were developed for learning purely Hamiltonian systems, to a non-Hamiltonian system, specifically a dissipative rigid body. One way to formulate the dissipative evolution of a rigid body is the energetic Ehrenfest regularization [43], where the Hamiltonian evolution of a rigid body is supplemented with dissipative terms that keep the magnitude of the angular momentum constant while dissipating the energy. In the evolution equations (1), τ is a positive dissipation parameter and Ξ = Lᵀ(d²E)L is a symmetric positive definite matrix (assuming that the energy is positive definite), constructed from the Poisson bivector of the rigid body, L_{ij} = −ε_{ijk} M_k, and the energy E(M). These equations satisfy that M² is constant while Ė ≤ 0, and their solutions converge to pure rotations around a principal axis of the rigid body (the axis with the highest moment of inertia), which is the physically relevant solution.
Results of learning the trajectories generated by solving Equations (1) are shown in Figure 13. All the methods (WJ, SJ, and IJ) are capable of learning the trajectories to some extent, but WJ is the most successful, followed by SJ and IJ. As SJ, and especially IJ, use deeper properties of Hamiltonian systems (soft and exact validity of the Jacobi identity), they are less robust in the case of non-Hamiltonian systems. The WJ method is the most robust, capable of learning also the dissipative system relatively well (although worse than in the purely Hamiltonian case). The SJ method, which softly imposes the Jacobi identity, is less capable of learning the dissipative system. The IJ method, which has the best performance on purely Hamiltonian systems, see Table 6, has the worst learning capability in the dissipative case.
Conclusion

This paper proposes a machine learning method for learning Hamiltonian systems from data. Direct Poisson Neural Networks (DPNNs) learn the Poisson bivector and the Hamiltonian of a mechanical system directly, with no further assumptions about the structure of the system. In particular, DPNNs can distinguish between symplectic and non-symplectic systems by measuring the determinant of the learned Poisson bivector.
DPNNs come in three flavors: (i) without the Jacobi identity (WJ), (ii) with softly imposed Jacobi identity (SJ), and (iii) with implicitly valid Jacobi identity (IJ). Although all three methods are capable of learning the dynamics, only SJ and IJ also satisfy the Jacobi identity. The typical behavior is that IJ learns Hamiltonian models most precisely, see Table 6, followed by SJ and WJ.
When the three flavors of DPNNs are applied to a non-Hamiltonian system, the order of precision is reversed, making WJ the most precise, followed by SJ and IJ. This reversed order of precision can be used as an indicator that distinguishes between Hamiltonian and non-Hamiltonian systems.
In the future, we would like to extend DPNNs to systems with dissipation prescribed by gradient dynamics.
The Leibniz rule ensures that the dynamics does not depend on biasing the energy by a constant. Referring to a Poisson bracket, one may determine the Poisson bivector field according to L(dF, dH) := {F, H}; this expresses the self-consistency of the Hamiltonian dynamics in the sense that both building blocks (the Hamiltonian function and the bivector field) are constant along the evolution. Assuming a local coordinate system x = (x^i) on M, the Poisson bivector determines the Poisson matrix L = [L^{kl}], which enables us to write the bracket in components [22]. The closedness of the symplectic two-form guarantees the Jacobi identity (3). The non-degeneracy condition on Ω puts an extra condition on the bracket (12): the Casimir functions are only the constant functions, in contrast with Poisson manifolds, which may also have non-constant Casimirs. The Darboux-Weinstein coordinates show more explicitly the relationship between Poisson and symplectic manifolds in a local picture. Darboux-Weinstein Coordinates. We start with an n = (2m + k)-dimensional Poisson manifold M equipped with a Poisson bivector L. Near every point of the Poisson manifold, the Darboux-Weinstein coordinates (x^i) = (q^a, p_b, u^α) (here a runs from 1 to m, and α runs from 1 to k) give a local form (13) of the Poisson bivector.

Figure 1: Scheme SJ (Soft Jacobi) of the methods that learn both the energy and Poisson bivector.
Figure 3: (a) Comparison of an exact trajectory (GT) and trajectories obtained by integrating the learned models; all three methods fit the trajectories well. (b) Error of the Jacobi identity on the validation set; the error of IJ is zero by construction, the error of SJ goes to zero, while the error of WJ does not.

Figure 4: Rigid body: compatibility errors for RB evaluated as log₁₀ of the squares of Equation (20). The distribution of errors is approximately log-normal. The compatibility error of the IJ method is the lowest, followed by SJ and WJ.

Figure 5: Rigid body: distribution of log₁₀ of the squares of errors in M.

Figure 7: P2D: a sample trajectory. Both SJ and WJ learn the dynamics of a particle in two dimensions well.

Figure 11: P3D: Comparison of momentum M(t) on an exact trajectory (GT) and trajectories obtained by integrating the learned models (without Jacobi and with soft Jacobi) in the case of a 3D harmonic oscillator.
(a) Angular momentum M(t) during a sample trajectory. (b) Vector r(t) during the same trajectory.

Figure 12: Heavy top: comparison of an exact trajectory (GT) and trajectories obtained by integrating the learned models (without Jacobi and with soft Jacobi) in the case of the heavy top.

Figure 13 can actually be seen as an indication of the non-Hamiltonianity of Equations (1). Systems where IJ learns best, followed by SJ and WJ, are much more likely to be Hamiltonian, in contrast with non-Hamiltonian systems, where WJ learns best, followed by SJ and IJ. In other words, DPNNs can distinguish between Hamiltonian and non-Hamiltonian systems by the order in which the flavors of DPNNs perform.

Figure 13: Distribution of errors in the angular momentum M when learning dissipative rigid-body dynamics (1) by the methods assuming purely Hamiltonian systems (WJ, SJ, and IJ). The WJ method is the most robust, capable of learning also the dissipative system relatively well (although worse than in the purely Hamiltonian case). The SJ method, which softly imposes the Jacobi identity, is less capable of learning the dissipative system. The IJ method, which has the best performance on purely Hamiltonian systems, see Table 6, has the worst learning capability in the dissipative case.
If there is no non-constant Casimir function for a Poisson manifold, then it is also a symplectic manifold. Although we can see symplectic manifolds as examples of Poisson manifolds, it is possible to define a symplectic manifold directly, without referring to a Poisson manifold: a manifold M is called symplectic if it is equipped with a closed non-degenerate two-form Ω (the symplectic two-form). If, on the other hand, only the fully degenerate term of the Poisson bivector remains, we obtain a fully degenerate Poisson manifold. A large class of Poisson manifolds is of this form, namely the Lie-Poisson structures on duals of Lie algebras, including rigid body dynamics, Vlasov dynamics, etc. In general, Poisson bivectors have both a symplectic part and a fully degenerate part, as for instance in the heavy top dynamics of Section 2.4. When the Poisson bivector is non-degenerate, it generates a symplectic Poisson bracket, and it commutes with the canonical Poisson bivector.