Qualitative inverse problems: mapping data to the features of trajectories and parameter values of an ODE model

Our recent work on linear and affine dynamical systems has laid out a general framework for inferring the parameters of a differential equation model from a discrete set of data points collected from a system being modeled. It introduced a new class of inverse problems where qualitative information about the parameters and the associated dynamics of the system is determined for regions of the data space, rather than just for isolated experiments. Rigorous mathematical results have justified this approach and have identified common features that arise for certain classes of integrable models. In this work we present a thorough numerical investigation that shows that several of these core features extend to a paradigmatic linear-in-parameters model, the Lotka–Volterra (LV) system, which we consider in the conservative case as well as under the addition of terms that perturb the system away from this regime. A central construct for this analysis is a concise representation of parameter and dynamical features in the data space that we call the P n -diagram, which is particularly useful for visualization of the qualitative dependence of the system dynamics on data for low-dimensional (small n) systems. Our work also exposes some new properties related to non-uniqueness that arise for these LV systems, with non-uniqueness manifesting as a multi-layered structure in the associated P 2 -diagrams.


Introduction
In a wide range of settings, we have enough information about a physical system of interest that we can develop a mathematical model that captures its key properties. The mathematical model may be derived from physical principles or may be a phenomenological representation designed to match system dynamics. Either way, if the model evolves continuously in time and the rate of change of the state variables is a function of the current state vector, the model can be written as a system of ordinary differential equations ẋ = f(x, p), (1.1) where f describes the functional relationship for interactions between the components of the time-dependent state vector x(t) (with ẋ representing dx(t)/dt), and the parameter vector p describes the strengths and signs of such interactions. If the functional form of f is known and the values of the components of p are directly measured, one can solve the forward problem of determining the solution of (1.1) for given initial conditions. More often, however, the available information about the system consists of measurements of a trajectory x(t), or of some known function of the trajectory (i.e. the output), y(t) = h(x(t)), at specified time points t 0 , t 1 , . . ., which together form a data vector d. The aim of solving an inverse problem in this case is to infer functional and parametric properties of the model f(x(t), p) from d. This inverse problem has a variety of flavors that differ in the amount of prior information assumed about f, the amount of data available for the system, and the sought-after information about the model. Commonly, measurements of the output from one or several trajectories are accessible, and the inverse problem focuses on parameter identification, i.e. estimating p from d using deterministic (e.g. [1][2][3][4][5]) or probabilistic (e.g. [6][7][8][9][10][11]) approaches. 
If more data is available and physical principles are not obvious, then researchers may seek to identify not only the parameters but also the appropriate f by system identification [12,13]. On the other hand, when given limited data, researchers can seek qualitative information about the parameters of the model or dynamical properties of the continuation of the observed trajectory. The work presented here addresses this third type of inverse problem, which we refer to as the qualitative inverse problem. In particular, assuming a specific form of f in model (1.1), we seek to identify data vectors d or regions in data space such that when data points lie in such a region, certain qualitative features of (1.1) are assured. Examples of such features include the existence or uniqueness of solutions; parameter identifiability or robustness (or sensitivity) of parameter estimates; or convergence, divergence, boundedness, or periodicity of trajectories passing through d.
The reasons for seeking a solution to the qualitative inverse problem are manifold. Suppose, for example, that for a fixed f we have determined algebraic conditions on data d that guarantee that there is no parameter vector p ∈ Ω for which the trajectory x(t; p) of (1.1) passes through the data d. Then, if measurements are performed with sufficient accuracy and it is found that the data satisfy those conditions, one should reject the model (1.1) as inadequate for representing the observed trajectory. Alternatively, suppose we have determined conditions on d for which there may be more than one choice of p that gives rise to such a trajectory x(t; p), and the observed data satisfy those conditions. Then we cannot draw a unique conclusion about the behavior of the system, and we should expect that any numerical procedure for parameter estimation will only yield one of the several potential parameter alternatives. Finally, although the trajectories (and hence data) depend differentiably on p, the reverse need not be true, which makes parameter identification an ill-conditioned problem. As a result, small changes in data may have great influence on inferred parameters and qualitative behavior of the system, and we can seek conditions on d for which this situation may or may not occur. Ideally, in any modeling situation, the qualitative inverse problem would be studied before attempts are made either to identify numerically the model parameter values associated with a specific data set or to estimate effects of fluctuations and measurement noise [14], in order to properly interpret the results of these procedures.
Our approach to solving the qualitative inverse problem is to combine classical results on existence and uniqueness of solutions of ODEs and their differentiable dependence on parameters [15,16] with basic results from singularity theory on invertibility of nonlinear maps [17,18] to derive a mapping between the data and corresponding qualitative characteristics of the parameter values or dynamic features of the solution set of (1.1). This approach is related to, but distinct from, traditional analyses performed on dynamical systems using bifurcation theory, where one seeks to identify parameters p or regions in parameter space that lead to specific qualitative dynamical features [19,20]. The emphasis on conditions on the data also distinguishes our work from related work on identifiability, observability, and controllability, which has focused on conditions satisfied by the parameters of the system (see, e.g. [21][22][23][24][25]).
To make the problem tractable, we analyze the qualitative inverse problem for situations in which we have discrete data points derived from a single trajectory collected at uniformly spaced time points. The restriction to data from a single experiment arises in modeling many real-world systems for which experiments are non-repeatable, including living systems in the lab or the clinic with heterogeneity across individuals as well as climate and other systems that cannot be set to specific initial conditions by an experimentalist. Because of the generality of equation (1.1) and the associated inverse problem, we do not expect a one-size-fits-all theory that encompasses all such scenarios. To make headway in addressing the qualitative inverse problem for data from a single trajectory, we began to develop mathematical theory in the setting of linear systems with specific assumptions on the data available [26,27]. Analytical results for linear, affine and matrix Riccati systems are concisely summarized in section 2 with new results described by theorems 2.1 and 2.2. The conjecture underlying this systematic progression through relatively simple classes of equations is that these results, or at least some of their features, can be generalized in directions that include allowing specific forms of nonlinearity in f, and in this work, we describe one such generalization by considering how our results extend to linear-in-parameter (LIP) systems. We make a few comments on general LIP systems in section 3 and then focus on a conservative Lotka-Volterra (LV) model in section 4, where we describe in detail what new features emerge in this nonlinear setting. We also discuss results for several non-conservative extensions of that model in section 5. 
This study provides a natural opportunity to collect results from across several model classes and integrate them in a way that, we hope, will be useful to guide continued work in related directions on the understudied but important topic of qualitative inverse problems.

Integrable systems
When explicit solutions of the initial value problem for (1.1) are available, investigations of any type of inverse problem reduce to the study of inverses of nonlinear algebraic functions [18]. A thorough analysis of linear, affine, and matrix Riccati systems reveals a picture of a complex, but understandable, dependence of the existence, uniqueness, and properties of the inverse problem solution on the data, which is outlined in the subsections below. In each case we find it revealing to display the results of this dependence schematically using what we henceforth call the P n -diagram (with n replaced by an appropriate integer). The diagram depicts the dependence of the existence, uniqueness, and other qualitative features of the solution of the inverse problem on the last data point P n (with coordinates x n ) when the first n data points P 0 , P 1 , . . . , P n−1 (with coordinates x 0 , . . . , x n−1 ) are held fixed. The accompanying theorems 2.1 and 2.2 then describe how the P n -diagram transforms when P 0 , P 1 , . . . , P n−1 are changed.

Linear systems
Linear systems provide a suitable starting point for theoretical investigations of parameter identifiability and estimation because their solutions can be readily expressed in closed form. These systems take the form ẋ = Ax (2.1) with dependent variable x ∈ R n , parameter matrix A ∈ R n×n , and initial condition x(0) = b ∈ R n for a natural number n. Despite the linearity of the ODE in (2.1), the relation between the parameters and the data is nonlinear; thus, we can already start to derive non-trivial insights into the inverse problem in this dynamically simple setting. For a unique solution to the inverse problem to exist, it is intuitively natural that we need to have at least as many pieces of information available as we have parameter values in the system. For system (2.1), there are (n + 1)n such parameters consisting of the entries of A and b. For simplicity we assume that all components of x can be observed; with that assumption our information requirement amounts to the need for a set of at least n + 1 data points, with coordinates x j ∈ R n . Let us focus for now on the case when we have exactly n + 1 data points, collected at equally spaced times t j such that t j+1 − t j = ∆t is constant; moreover, for simplicity, we will take ∆t = 1 and translate time as needed to take b = x 0 . (Extensions to non-equally spaced points and a subset of variables being observed can be found in [27].) In this setting, we have obtained four fundamental results on the qualitative inverse problem for system (2.1). First, a necessary but not sufficient condition for the uniqueness of a parameter matrix A, with b = x 0 , is that d is a linearly independent set [26]. 
Second, if d is linearly independent, then necessary and sufficient conditions for existence of an inverse problem solution and for the uniqueness of the solution when it exists can be expressed in terms of the eigenvalues and Jordan block structure of the matrix Φ = X 1 (X 0 ) −1 , where X k , k ∈ {0, 1}, is the square matrix with columns [x k |x k+1 | . . . |x k+n−1 ], and the invertibility of X 0 follows from the linear independence of d [27]. Third, under slightly stronger conditions on the spectrum of Φ, small perturbations to all elements of d do not change the qualitative nature of the inverse problem solution; rather, existence or non-existence, uniqueness, and even various other properties of the solution matrix are preserved in open neighborhoods [27]. Fourth, analytical lower and upper bounds relating to the size of the neighborhood where a given qualitative property is preserved have been computed [27].
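As a concrete illustration of the second result, the matrix Φ can be assembled directly from data and a solution A recovered as a matrix logarithm of Φ. The following sketch is our own illustration with arbitrary test matrices (it is not code from [26,27]) and recovers A in two dimensions:

```python
import numpy as np
from scipy.linalg import expm, logm

# Generate n + 1 = 3 equally spaced data points (dt = 1) from a trajectory
# of the linear system x' = A_true x, then recover A_true from the data.
A_true = np.array([[-0.3, 1.0],
                   [-1.0, -0.3]])                # spiral sink; principal log applies
x0 = np.array([1.0, 0.5])
xs = [expm(j * A_true) @ x0 for j in range(3)]   # x_0, x_1, x_2

# X0 = [x_0 | x_1], X1 = [x_1 | x_2]; Phi = X1 X0^{-1} = expm(A)
X0 = np.column_stack(xs[0:2])
X1 = np.column_stack(xs[1:3])
Phi = X1 @ np.linalg.inv(X0)

# Any real logarithm of Phi solves the inverse problem; logm returns the
# principal branch, which for these eigenvalues coincides with A_true.
A_rec = np.real(logm(Phi))
print(np.allclose(A_rec, A_true))
```

Other real branches of the matrix logarithm of Φ, when they exist, correspond to additional solutions of the inverse problem, in line with the uniqueness discussion above.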
Theorem 3.4 of [27] and its extensions to higher dimensions reveal nice features of the regions in data space that correspond to the solution of the inverse problem. Specifically, once data points d̄ := {P 0 , . . . , P n−1 } are fixed, the n-dimensional space parameterized by the coordinates x n consists of a collection of open regions such that if two possible choices of P n1 , P n2 are made from the same region, then the inverse problem solution will have the same properties for d 1 := {d̄, P n1 } and for d 2 := {d̄, P n2 }. This is exemplified in 2 dimensions by the P 2 -diagram depicted in figure 1. This diagram makes clear that the boundaries of these regions are smooth curves given by simple algebraic expressions [27]. Furthermore, this diagram is universal, in that although it depends on the positions of the data points P 0 , P 1 , that dependence is defined by a simple affine transformation of the 2D coordinate system given by stretching and rotation about the origin, with the possible addition of a mirror image transformation. Formally, this is a result of the invariance of solutions of the dynamical system (2.1) to transformations as follows: Theorem 2.1. Suppose that for data points P 0 , . . . , P n with coordinates x 0 , . . . , x n , of which the first n are linearly independent, the system (2.1) has the inverse problem solution given by the parameter matrix A. Then for any linearly independent coordinates x̄ 0 , . . . , x̄ n−1 there is an invertible matrix S such that for the data x̄ 0 , . . . , x̄ n with x̄ n = Sx n the system (2.1) has the inverse problem solution given by the parameter matrix Ā = SAS −1 .
Proof. There is a solution A of the inverse problem with given data point coordinates x 0 , . . . , x n if and only if the system (2.1) with parameter matrix A has a trajectory x(t) that passes through the data, i.e. x j = x(j) for j = 0, 1, . . . , n. Let S = X̄ 0 (X 0 ) −1 , where X 0 is as above and X̄ 0 is the corresponding matrix built from the coordinates x̄ 0 , . . . , x̄ n−1 . Then the function x̄(t) = Sx(t) solves the system (2.1) with coefficient matrix SAS −1 , and such a solution passes through the data x̄ j = x̄(j) for j = 0, 1, . . . , n, where x̄ n = Sx n . Theorem 2.1 makes clear that the P n -diagram is determined by the position of the first n data points, labeled P 0 , P 1 , . . . , P n−1 . If those points are changed, there is a one-to-one transformation between the points of the original and the transformed diagram, realized as multiplication by the matrix S. The preservation of the eigenstructure of the parameter matrix upon transformation results in preservation of the stability properties of the system and hence the nature of the domains in the P n -diagram and their boundaries. Another consequence of the P n -diagram and theorem 2.1 is that no information can be deduced about the qualitative behavior of a trajectory of (2.1) passing through n points P 0 , P 1 , . . . , P n−1 without the knowledge of P n .
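The transformation rule of theorem 2.1 is easy to verify numerically. In this sketch (our own arbitrary test values) we conjugate a 2-D solution by S = X̄ 0 (X 0 ) −1 and check that the transformed trajectory interpolates the transformed data:

```python
import numpy as np
from scipy.linalg import expm

# Original data x_0, x_1, x_2 generated from x' = Ax with dt = 1
A = np.array([[0.1, 0.8],
              [-0.6, -0.2]])
x0 = np.array([1.0, 0.0])
xs = [expm(j * A) @ x0 for j in range(3)]
X0 = np.column_stack(xs[:2])

# New first-two data points (any linearly independent pair will do)
xb = [np.array([0.5, 1.0]), np.array([-1.0, 0.3])]
S = np.column_stack(xb) @ np.linalg.inv(X0)   # maps x_0, x_1 to the new points
A_bar = S @ A @ np.linalg.inv(S)              # conjugated solution of theorem 2.1

# The trajectory of A_bar passes through xbar_j = S x_j for j = 0, 1, 2
for j in range(3):
    assert np.allclose(expm(j * A_bar) @ xb[0], S @ xs[j])
print("theorem 2.1 transformation verified")
```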

Affine systems
A first, relatively gentle extension beyond linear systems is provided by consideration of affine systems, which take the form ẋ = Ax + c (2.2) with states x ∈ R n , parameters A ∈ R n×n and c ∈ R n , and initial condition x(0) = b ∈ R n . Affine systems, like linear systems, can be solved explicitly. In fact, if A is invertible, then system (2.2) has a unique equilibrium point at x * = −A −1 c and can be transformed into a linear system in y for y = x − x * . For general affine systems (2.2), even without the assumption of invertibility of A, many of the identifiability properties shown to hold for linear systems (2.1) still carry over [28]. The key step in establishing this generalization is to define the augmented linear system ż = Mz with z = (x, 1) ∈ R n+1 and M = [A c; 0 0], (2.3) whose trajectories reproduce those of (2.2) in their first n components. If Z k , k ∈ {0, 1}, denotes the matrix built from the augmented data points z j = (x j , 1) in analogy with X k above, and if Z 0 is invertible, then we can also write Ψ = Z 1 (Z 0 ) −1 , which is useful for studying the inverse problem for (2.2).
As in the linear case, we find that linear independence of data points is helpful for the uniqueness of a solution to the inverse problem for (2.2) and that the properties of the solution relate to the spectrum of a matrix that can be derived from the data. In fact, in the affine case, the matrix Ψ relates to the solution matrix Φ for (2.1) and hence, not surprisingly, properties of the spectrum of Φ also impact the solution of the inverse problem for (2.2). Moreover, the existence of open regions in data space where inverse problem features persist also extends to the affine case, with some modifications to the analytical lower bounds on the sizes of regions where persistence occurs [28]. In n-dimensional state space such regions make up the P n+1 -diagram for the inverse problem, which for n = 2 is depicted in figure 3(a) of [28] and reproduced here in figure 2. Similarly to the linear case, this diagram depends on data points P 0 , P 1 , P 2 via a simple transformation of the state space consisting of stretching, rotation, and mirror inversion, combined with translation. Formally this property is summarized by the following result:

Theorem 2.2. Suppose that for data points P 0 , . . . , P n+1 with coordinates x 0 , . . . , x n+1 , of which the successive differences x j+1 − x j , j = 0, . . . , n − 1, are linearly independent, the system (2.2) has the inverse problem solution given by the parameters (A, c). Then for any coordinates x̄ 0 , . . . , x̄ n with linearly independent successive differences there are an invertible matrix S and a vector r such that for the data x̄ 0 , . . . , x̄ n+1 with x̄ n+1 = Sx n+1 + r the system (2.2) has the inverse problem solution given by the parameters (SAS −1 , Sc − SAS −1 r).
Proof. There is a solution (A, c) of the inverse problem with given data x 0 , . . . , x n+1 if and only if the system (2.2) with parameters (A, c) has a trajectory x(t) that passes through the data, i.e. x j = x(j) for j = 0, 1, . . . , n + 1. Let X 0 , X 1 be as above, with X̄ 0 , X̄ 1 built analogously from the new coordinates, and set S = (X̄ 1 − X̄ 0 )(X 1 − X 0 ) −1 , which clearly is invertible as well. Then for each j = 0, . . . , n − 1 we have x̄ j+1 − x̄ j = S(x j+1 − x j ), which implies that there is a vector r such that r = x̄ j − Sx j for j = 0, . . . , n. If we now let x̄(t) = Sx(t) + r, we find that x̄(t) solves (2.2) with parameters (SAS −1 , Sc − SAS −1 r) and passes through the data x̄ j = x̄(j) for j = 0, 1, . . . , n + 1, where x̄ n+1 = Sx n+1 + r.
Figure 2. The P 3 -diagram for affine system (2.2) with n = 2, which depicts the regions of location of P 3 for which the inverse problem has a solution with specified stability properties when the location of data points P 0 , P 1 , P 2 is fixed as indicated. Note that in the river and parabolic regimes, the system has no fixed point, which cannot occur in the linear case.
Theorem 2.2 makes clear that in n-dimensional state space the P n+1 -diagram for affine systems is determined by the position of the first n + 1 data points P 0 , P 1 , . . . , P n . If those points are changed, there is a one-to-one transformation between the points of the original and the transformed diagram, realized with the matrix S and translation r. The preservation of the eigenstructure of the parameter matrix upon transformation results in preservation of the stability properties of the system and hence the nature of the domains in the P n+1 -diagram and their boundaries.
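The augmented construction makes the affine inverse problem computationally identical to the linear one. The sketch below is our own illustration, assuming the standard lifting z = (x, 1) with M = [A c; 0 0], and recovers (A, c) from n + 2 = 4 data points in two dimensions:

```python
import numpy as np
from scipy.linalg import expm, logm

# Affine system x' = Ax + c lifted to the linear system z' = Mz, z = (x, 1)
A = np.array([[-0.4, 0.7],
              [-0.7, -0.4]])
c = np.array([0.5, -0.2])
M = np.zeros((3, 3))
M[:2, :2], M[:2, 2] = A, c

z0 = np.array([1.0, 0.3, 1.0])                 # z_0 = (x_0, 1)
zs = [expm(j * M) @ z0 for j in range(4)]      # four augmented data points

# Psi = Z1 Z0^{-1} = expm(M); the principal log recovers M, hence (A, c)
Z0 = np.column_stack(zs[0:3])
Z1 = np.column_stack(zs[1:4])
M_rec = np.real(logm(Z1 @ np.linalg.inv(Z0)))
A_rec, c_rec = M_rec[:2, :2], M_rec[:2, 2]
print(np.allclose(A_rec, A) and np.allclose(c_rec, c))
```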

Nonsymmetric matrix Riccati systems
An interesting extension of the above results arises from considering a nonlinear dynamical system provided by the nonsymmetric matrix Riccati differential equation (RDE) Ẋ = B + AX − XD − XCX, (2.4) with X ∈ R m×n and parameter matrices A ∈ R m×m , B ∈ R m×n , C ∈ R n×m , and D ∈ R n×n . A classical result of Radon tells us that, like the affine system, any solution of this system is locally equivalent to a solution of a linear ODE system with block coefficient matrix M = [A B; C D] (see [29]). This result can be used to solve the inverse problem for the system (2.4), i.e. to determine the parameter matrices [30]. Given that there are (m + n) 2 parameters and m + n initial conditions in the problem, one expects that unique determination of the parameters can be achieved with p = ⌈(m + n + 1)(m + n)/(mn)⌉ data points, with the exception of a structural unidentifiability in the system where changing A into A + αI and D into D + αI does not change the dynamics. Although a detailed analysis of this case is yet to be done, the availability of explicit solutions and formulas for the parameter matrices in terms of the data makes it straightforward to formulate conditions for existence and uniqueness of a solution to the inverse problem for (2.4) and hence to generalize the fundamental results first observed for linear systems. As an example, a straightforward extension of theorems 2.1 and 2.2 can be formulated to show that the P p -diagram is invariant with respect to transformations of the data point coordinates of the form X̄ j = S(X j + R)Q, where S ∈ R m×m and Q ∈ R n×n are invertible matrices and R ∈ R m×n .
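Radon's linearization can be checked directly by numerical integration. The sketch below uses our own arbitrary test matrices and the standard nonsymmetric form Ẋ = B + AX − XD − XCX matching M = [A B; C D]; it compares a directly integrated RDE solution against U(t)V(t) −1 computed from the lifted linear system U' = AU + BV, V' = CU + DV with U(0) = X 0 , V(0) = I:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Small (2x2 blocks) test matrices for the nonsymmetric RDE
rng = np.random.default_rng(0)
A, B, C, D = (rng.normal(scale=0.3, size=(2, 2)) for _ in range(4))
X0 = rng.normal(scale=0.3, size=(2, 2))

def rde(t, x):
    # Direct integration of X' = B + AX - XD - XCX
    X = x.reshape(2, 2)
    return (B + A @ X - X @ D - X @ C @ X).ravel()

def lifted(t, uv):
    # Lifted linear system U' = AU + BV, V' = CU + DV
    U, V = uv[:4].reshape(2, 2), uv[4:].reshape(2, 2)
    return np.concatenate(((A @ U + B @ V).ravel(), (C @ U + D @ V).ravel()))

T = 1.0
sol_X = solve_ivp(rde, (0, T), X0.ravel(), rtol=1e-10, atol=1e-12)
sol_UV = solve_ivp(lifted, (0, T),
                   np.concatenate((X0.ravel(), np.eye(2).ravel())),
                   rtol=1e-10, atol=1e-12)
U_T = sol_UV.y[:4, -1].reshape(2, 2)
V_T = sol_UV.y[4:, -1].reshape(2, 2)

# Radon: X(T) = U(T) V(T)^{-1} as long as V(t) stays invertible
print(np.allclose(sol_X.y[:, -1].reshape(2, 2), U_T @ np.linalg.inv(V_T), atol=1e-6))
```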

Linear-in-parameters models
The extension of fundamental properties of the inverse problem from linear to affine systems raises the intriguing possibility that they may hold for more general classes of nonlinear systems. The class that we will consider in the remainder of this work are the LIP systems, which take the form ẋ = Bf(x) (3.1) for time-dependent state vector x ∈ R n , parameter matrix B ∈ R n×m , and function f : R n → R m . Previous numerical work [28] showed that affine approximation of this system can be highly effective for estimating the inverse problem solution B for (3.1) from a discrete data set. Indeed, for continuously differentiable f(x), linearization of (3.1) around any point x * in the domain of f yields the affine system ẋ = BDf(x * )x + B(f(x * ) − Df(x * )x * ), where Df denotes the Jacobian of f. This result, however, does not provide any insight into the qualitative properties of inverse problem solutions for such LIP systems.
To date only the first fundamental result has been generalized as follows [26]: for a given trajectory γ(B) := {x(t; B) | t ∈ R} of system (3.1), the uniqueness of B as a solution to the inverse problem holds if and only if {f(x(t; B)) | t ∈ R} is not confined to a proper subspace of R m . In terms of discrete data, a necessary (but not sufficient) condition for uniqueness of B is therefore that the images {f(x 0 ), f(x 1 ), . . . , f(x n−1 )} are linearly independent.
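For discrete data, this independence condition amounts to a simple rank test. The sketch below (our own illustration) applies it with the Lotka-Volterra library f(x, y) = (x, xy, y) that is used in section 4, for three hypothetical data points:

```python
import numpy as np

# LIP library for the LV system of section 4: f(x, y) = (x, xy, y), m = 3
def f(p):
    x, y = p
    return np.array([x, x * y, y])

# Hypothetical data points; the necessary condition for uniqueness of B is
# that the columns f(x_0), f(x_1), f(x_2) are linearly independent.
data = [(1.0, 0.5), (1.3, 0.9), (0.8, 1.4)]
F = np.column_stack([f(p) for p in data])
print(np.linalg.matrix_rank(F) == F.shape[1])   # full column rank: condition holds
```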
The lack of explicit solutions of the forward problem, and thereby a lack of explicit or implicit functions that describe the dependence of such solutions on the parameters of the system, hinders any systematic description of regions in data space that correspond to existence, uniqueness, or various qualitative aspects of the solutions of the inverse problem. We therefore turn to numerical studies of example nonlinear systems to gauge the range of validity of the fundamental results observed for linear and affine systems and to identify departures from those results. We use boundary value problem solvers combined with continuation methods to numerically compute the boundaries between existence/nonexistence regions and various types of dynamics in data space and describe the results below. Because we have seen its utility for summarizing much information about the solvability and solutions of the inverse problem in other cases, we are particularly interested in finding the P n -diagram for LIP systems and observing both how it transforms with changes in data point coordinates and how it differs qualitatively from the diagrams found for linear and affine systems.

A conservative Lotka-Volterra system
As we have discussed, a key idea from the study, in linear and affine systems, of the inverse problem of determining (n + 1)n unknown parameters from n + 1 discrete, linearly independent data points in R n is that if we treat n of these data points as fixed, then we can partition R n into simply bounded regions such that the region in which the (n + 1)st data point lies determines the properties of the inverse problem solution. To test the extent to which this idea carries over to LIP systems, we performed a thorough numerical study of this approach to the inverse problem for discrete data in a specific such system, namely the famous LV equations, which have a rich history, wide range of applications, and a well-developed theory [31][32][33].
We consider the LV equations in the form ẋ = α 1 x + β 1 xy, ẏ = β 2 xy + α 2 y. (4.1) Advantages of this choice are the low-dimensionality of the model, with n = 2, and the fact that the model has potential biological relevance over a wide range of parameter sign structures. The model is not integrable except for special parameter combinations [34], but one can construct power-series approximations of solutions [35]. We focus on positive solutions of (4.1), since x and y generally denote either the numbers or the densities of species in two interacting populations, the first quadrant is invariant under the flow of (4.1), and the results for any other quadrant (likewise invariant) can be related to those in the first quadrant by a proper adjustment of the signs of parameters. Note that system (4.1) always has a critical point at the origin but need not have a critical point in the interior of the positive quadrant.
One can write the LV system in the LIP form (3.1) as follows: (ẋ, ẏ) T = A (x, xy, y) T with A = [α 1 β 1 0; 0 β 2 α 2 ]. (4.2) For brevity of notation we will use the matrix A to represent the parameters (α 1 , β 1 , β 2 , α 2 ) of the system (4.1), implicitly assuming that in any such matrix A the entries a 13 and a 21 are equal to zero and never take any other values. We denote by σ A the signature of the system, i.e. σ A = [sgn(α 1 ) sgn(β 1 ) sgn(β 2 ) sgn(α 2 )], (4.3)
which determines the type of interactions described by the model. In the classical theory [36], positive values of α 1 , β 1 , α 2 , β 2 correspond to a cooperative interaction which characterizes mutualism or symbiosis, negative values of β 1 and β 2 combined with positive values of α 1 and α 2 correspond to competition, while when β 1 β 2 < 0, α 1 β 1 < 0 and α 2 β 2 < 0, the system is classified as a predator-prey system. While other combinations of signs do not have established nomenclatures, we say the system describes parasitic behavior if α 1 > 0 and α 2 > 0 (i.e. survival of both species is possible without interaction), and β 1 β 2 < 0 (i.e. the interaction benefits only one of the species). Furthermore, we call the behavior cooperative dependency if β 1 and β 2 are positive but one of α 1 and α 2 is negative (i.e. survival of one species requires the presence of the other) and codependency if β 1 and β 2 are positive but both α 1 and α 2 are negative (i.e. survival of both species requires the presence of the other).
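The sign-pattern classification above can be encoded compactly. The helper below uses our own naming but follows the definitions in the text, mapping the signs of (α 1 , β 1 , β 2 , α 2 ) to the corresponding interaction type:

```python
# Map the sign pattern sigma_A of (alpha_1, beta_1, beta_2, alpha_2)
# to the interaction type defined in the text. The sign conditions are
# mutually exclusive, so the order of the tests does not matter.
def interaction_type(a1, b1, b2, a2):
    if b1 > 0 and b2 > 0:
        if a1 > 0 and a2 > 0:
            return "cooperation"
        return "codependency" if (a1 < 0 and a2 < 0) else "cooperative dependency"
    if b1 < 0 and b2 < 0 and a1 > 0 and a2 > 0:
        return "competition"
    if b1 * b2 < 0 and a1 * b1 < 0 and a2 * b2 < 0:
        return "predator-prey"
    if b1 * b2 < 0 and a1 > 0 and a2 > 0:
        return "parasitism"
    return "unclassified"

print(interaction_type(-1, 1, -1, 1))   # -> predator-prey
```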
We will primarily consider a data set d = {P 0 , P 1 , P 2 } where P j , j = 0, 1, 2, has coordinates (x j , y j ) such that 0 < x 0 < x 1 and 0 < y 0 < y 1 . As before, we will assume that P 0 and P 1 are fixed and that P 2 can lie anywhere in the first quadrant of the (x, y) plane. Within this domain we ask the following questions: (i) Existence: What is the set of values of P 2 for which there exists some A (i.e. parameters (α 1 , β 1 , β 2 , α 2 )), such that the system defined by (4.1) or equivalently (4.2) and (4.3) has a trajectory ϕ(t; A) with P j = ϕ(j, A), j ∈ {0, 1, 2}? (ii) Uniqueness: What is the set of values of P 2 for which the parameter matrix A that solves the inverse problem is unique? (iii) Parameter properties: What is the set of values of P 2 for which the parameter matrix A has specific signs of its entries, σ A (corresponding to specific types of behavior)?
Via extensive use of numerics, described in section 1 of supplementary materials, we find that the desirable properties obtained in linear and affine systems carry over to this LIP system; specifically, we obtain clearly defined, simply bounded regions such that the properties of the inverse problem solution (or lack thereof) are the same for all choices of data point P 2 within the same region. The numerical results can be interpreted in two ways: (a) as a manifold M in 6-dimensional space with coordinates (x 2 , y 2 , α 1 , β 1 , β 2 , α 2 ) composed of points for which the parameter matrix A constructed from parameters α 1 , β 1 , β 2 , α 2 solves the inverse problem with P 2 located at (x 2 , y 2 ), or (b) as a projection of the manifold M onto the 2-dimensional space with coordinates (x 2 , y 2 ), shown in figure 3 and described in table 1, which, in analogy to the earlier sections, we call the P 2 -diagram. The P 2 -diagram contains a large amount of information and depicts schematically the answers to questions (i)-(iii) above; that is, it displays the locations of points P 2 for which we have evidence about the existence, uniqueness and properties of the inverse problem solution.
As depicted in the P 2 -diagram (figure 3), we make the following statements about the solutions of the inverse problem: (i) Any trajectory of system (4.1) that starts at P 0 remains in the first quadrant. Thus, there are no solutions of the inverse problem with P 2 outside of the first quadrant, and the P 2 -diagram is thereby restricted to that quadrant. (ii) The first quadrant can be partitioned into open regions R Ω in which there are solutions to the inverse problem with particular sign structure σ A of A. Regions labeled by the same subscript and shown in the same color share the same sign structure for A, as indicated in table 1.
(iii) If P 2 is located in the non-existence region R NE := ∪ j R NEj , then the inverse problem has no solution. (iv) If P 2 is located in any labeled region R Ω not included in R NE except for regions R G or regions labeled with two letters, or P 2 lies on any boundary between regions represented by a solid curve in the P 2 -diagram, then the inverse problem has a unique solution. (v) In regions R G the inverse problem has a countable family of solutions that correspond to periodic orbits (discussed below in section 4.1). (vi) In regions R Ω labeled by two letters (but not by NE), two solutions arise due to a fold in the manifold M. (vii) The regions are separated by curves C ω on which some of the parameters vanish (discussed below in section 4.2). Figure 4 shows examples of trajectories that pass through specific choices of the point P 2 in various existence regions. Notice that each such trajectory passes through several regions in figure 3 including the non-existence region, R NE . These behaviors are consistent with the nature of this diagram; that is, the P 2 -diagram provides information about what we can infer about A from the location of P 2 and does not tell us what regions of the (x, y) phase space the trajectory does or does not visit before or after it reaches P 2 .

Trivial non-uniqueness for periodic solutions
We next go into more detail about the P 2 -diagram for the LV system (4.1), including some aspects that differ from those that can occur for linear and affine systems. First, we note that due to the presence of a conserved quantity, or Hamiltonian, in the system (see section 2 of the supplementary materials), in some regions of P 2 values, there is a form of trivial non-uniqueness of parameter matrices A for which P j = ϕ(j, A) for all j ∈ {0, 1, 2}. In the cases when σ A = [− + −+] or [+ − +−], each trajectory through P 0 , P 1 , and P 2 lies on a periodic trajectory, which is a level set of the Hamiltonian. This gives infinitely many solutions to the inverse problem by the following rescaling argument: Let A be a parameter matrix that solves the inverse problem for data d = {P 0 , P 1 , P 2 }, let T denote the period of the corresponding orbit, and let τ denote the time of the shortest clockwise passage from P 0 to P 1 (or shortest counterclockwise passage from P 1 to P 0 ). If the trajectory with ϕ(0) = P 0 travels m times clockwise around the orbit before ϕ(1) = P 1 , then we have 1 = mT + τ . The same relation holds for a trajectory with ϕ(0) = P 0 that travels counterclockwise, with m = −1 for a trajectory that reaches ϕ(1) = P 1 before completing a full orbit, and m < −1 for a trajectory that travels counterclockwise 1 − m full orbits before reaching ϕ(1) = P 1 . Autonomy of system (4.1) implies that the same relation also holds for travel from P 1 = ϕ(1) to P 2 = ϕ(2). Consider now a system (4.1) (i.e. (4.2)) with the parameter matrix Ã = γA. Since the system is LIP, the orbit of the trajectory starting at P 0 will not change, but the period of the trajectory will change to T̃ = T/γ and the shortest time between P 0 and P 1 to τ̃ = τ/γ. If γ = (m̃ − m)T + 1 for some integer m̃, then the trajectory of the system with parameter matrix Ã again obeys ϕ(i) = P i , i = 0, 1, 2, and hence Ã is another solution of the inverse problem, one which travels m̃ times clockwise around the orbit. 
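The rescaling argument rests on the fact that multiplying the parameter matrix of an LIP system by γ > 0 reparameterizes time by the factor γ without changing the orbit. This is easy to confirm numerically; the sketch below (our own test values, writing (4.1) as ẋ = x(α 1 + β 1 y), ẏ = y(α 2 + β 2 x)) checks that the trajectory of γA at time 1 coincides with the trajectory of A at time γ:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conservative predator-prey parameters, sigma_A = [- + - +]
a1, b1, b2, a2 = -1.0, 1.0, -1.0, 1.0

def lv(t, z, g=1.0):
    # LV system (4.1) with parameter matrix scaled by g (i.e. A -> g*A)
    x, y = z
    return [g * (a1 * x + b1 * x * y), g * (b2 * x * y + a2 * y)]

z0 = [1.5, 0.5]
gamma = 2.7

# Trajectory of A run to time gamma vs trajectory of gamma*A run to time 1
ref = solve_ivp(lv, (0, gamma), z0, rtol=1e-10, atol=1e-12)
scaled = solve_ivp(lv, (0, 1.0), z0, args=(gamma,), rtol=1e-10, atol=1e-12)
print(np.allclose(ref.y[:, -1], scaled.y[:, -1], atol=1e-6))
```

Choosing γ = (m̃ − m)T + 1 then makes the rescaled trajectory hit P 1 and P 2 at the integer times, as in the argument above.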
This observation explains why, in table 1, the regions R G are associated with infinitely many solutions. We treat this type of non-uniqueness as trivial and count all of these matrices as a single solution to the inverse problem, which we represent with the parameter matrix A for which m = 0 (clockwise trajectory) or m = −1 (counterclockwise trajectory), depending on which of those trajectories does not travel a full orbit before reaching P 2 when starting from P 0 . Moreover, as the periodic orbit is convex (see section 2 of the supplementary materials), this choice uniquely determines the signature of the inverse problem solution to be of predator-prey type; i.e. we have σ A = [− + −+] (and the trajectory travels clockwise) if ϕ(2) is below the straight line P 0 P 1 , and σ A = [+ − +−] (and the trajectory travels counterclockwise) if ϕ(2) is above that line. Note, however, that in contrast to this trivial form of non-uniqueness, the R GG region in the P 2 -diagram and in table 1 corresponds to more than one solution with σ A = [− + −+] or [+ − +−]; here, the multiple trajectories truly represent different orbits. The trivial form of non-uniqueness and the argument we used to establish its presence carry over to periodic solutions of linear and affine systems, since the same multiplicative parameter rescaling idea applies in those settings as well. The non-trivial non-uniqueness of R GG , on the other hand, is a new feature not found in linear and affine systems.
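The rescaling argument can be checked numerically. The sketch below (the parameter matrix and initial point are illustrative choices, not the data used in the paper) verifies the underlying LIP property that scaling A by γ rescales time along the same orbit, ϕ γA (t) = ϕ A (γt):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, A):
    # Lotka-Volterra field (4.1): xdot = a1*x + b1*x*y, ydot = a2*y + b2*x*y
    a1, b1, a2, b2 = A
    x, y = z
    return [a1 * x + b1 * x * y, a2 * y + b2 * x * y]

def flow(A, z0, t):
    # Integrate from z0 for time t and return the endpoint.
    sol = solve_ivp(lv, [0.0, t], z0, args=(A,), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

A = np.array([1.0, -1.0, -1.0, 1.0])   # illustrative predator-prey matrix
z0 = [1.5, 1.0]
gamma = 2.5

# Linearity in parameters: scaling A by gamma rescales time along the
# same orbit, so phi_{gamma*A}(t, z0) == phi_A(gamma*t, z0).
p1 = flow(gamma * A, z0, 1.0)
p2 = flow(A, z0, gamma)
print(np.allclose(p1, p2, atol=1e-6))
```

Choosing γ = (m̃ − m)T + 1 then produces a matrix whose trajectory hits the same data points after a different number of windings.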

Boundary curves in the P 2 -diagram
A key feature of the P 2 -diagram is that the regions with different sign signatures or identifiability properties are separated by smooth, simple boundary curves. Thus, a full specification of the P 2 -diagram requires an explanation of what these boundary curves represent in terms of model parameters. Therefore, we next focus on these curves, which we label in figure 5 and classify into three different types: (i) the curves along which at least one of the parameters α 1 , β 1 , β 2 , α 2 is zero and which therefore separate the regions of the diagram with different σ A ; (ii) the curves along which at least one of the parameters α 1 , β 1 , β 2 , α 2 approaches infinity; and (iii) the curves that describe folds in the manifold M, resulting in different solutions A that co-exist for the same value of P 2 . Most of the curves are found numerically; the numerical methods used in this section are discussed in section 1 of the supplementary materials. We note that the overlap of regions in P 2 -space in which solutions with different dynamical behavior co-exist is a new feature that arises in this LIP setting, relative to the linear and affine cases, and thus represents an important example of the complexity that nonlinearity can add to solving the inverse problem.
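Since the curves are obtained numerically, it may help to sketch one way such computations can be set up (an illustrative shooting formulation with synthetic data, not necessarily the exact method of section 1 of the supplementary materials): find A such that the flow through P 0 hits P 1 at t = 1 and P 2 at t = 2, by root-finding on the shooting residual.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

def lv(t, z, A):
    a1, b1, a2, b2 = A
    x, y = z
    return [a1 * x + b1 * x * y, a2 * y + b2 * x * y]

def flow(A, z0, ts):
    sol = solve_ivp(lv, [0.0, ts[-1]], z0, args=(A,), t_eval=ts,
                    rtol=1e-10, atol=1e-12)
    return sol.y

def residual(A, P0, P1, P2):
    # Shooting residual: mismatch of the flow with the data at t = 1, 2.
    traj = flow(A, P0, [1.0, 2.0])
    return np.concatenate([traj[:, 0] - P1, traj[:, 1] - P2])

# Synthetic data generated from a known parameter matrix, so the
# correct answer of this test problem is known in advance.
A_true = np.array([1.0, -1.0, -1.0, 1.0])
P0 = np.array([1.5, 1.0])
P1, P2 = flow(A_true, P0, [1.0, 2.0]).T

# Recover A by root-finding on the residual from a perturbed guess.
res = root(residual, A_true + 0.1, args=(P0, P1, P2), tol=1e-12)
print(res.success, np.round(res.x, 6))
```

Sweeping P 2 over a grid while re-solving this root problem is one way the boundary curves and sign signatures can be traced out.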

Signature boundaries
The boundaries separating regions of fixed σ A in the P 2 -diagram correspond to the cases when at least one of the parameters α 1 , β 1 , β 2 , α 2 is zero. For certain combinations of zero parameters one can find an analytical solution of system (4.1). The detailed derivations of solutions for these cases are given in section 3 of supplementary materials. We summarize those results below.
When β 1 = 0, the x-equation in system (4.1) becomes decoupled and has the trajectory x(t) = e α1t x 0 . Hence, α 1 = ln(x 1 /x 0 ) and x 2 = x 2 1 /x 0 , so P 2 for such a system lies on a vertical straight line C β1 in the (x 2 , y 2 )-plane (see figure 5). We can determine explicitly the parameter values corresponding to any point on C β1 as functions of P 0 , P 1 , P 2 . For example, the intersection between C β1 and C α2 is given by P β1,α2 = (x 2 1 /x 0 , y 0 (y 1 /y 0 )^(1+x 1 /x 0 )). An analogous situation holds in the case when β 2 = 0, with P 2 then lying on the horizontal line y = y 2 1 /y 0 , which we denote C β2 . The intersection between C β2 and C α1 is given by P β2,α1 = (x 0 (x 1 /x 0 )^(1+y 1 /y 0 ), y 2 1 /y 0 ). We have not been able to find explicit formulas for the curves C α1 and C α2 . When α 1 = 0, the first equation of the system becomes ẋ = β 1 xy, and x 1 > x 0 implies β 1 > 0. We use numerical computation (see section 1 of the supplementary materials) to obtain the curve C α1 of points P 2 corresponding to α 1 = 0. The curve C α1 appears to be single-valued in y 2 and defined for y 2 ∈ (0, ∞). Since β 1 > 0, this curve lies to the right of the line x = x 1 . Similarly, we can find numerically the curve C α2 corresponding to α 2 = 0.
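A quick numerical check of the decoupled case (data values chosen for illustration): with β 1 = 0, the value x 2 = x 2 1 /x 0 is independent of the remaining parameters, and imposing additionally α 2 = 0 pins down the endpoint that marks the intersection of C β1 and C α2 .

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, A):
    a1, b1, a2, b2 = A
    x, y = z
    return [a1 * x + b1 * x * y, a2 * y + b2 * x * y]

x0, y0, x1, y1 = 1.0, 1.0, 2.0, 1.5
a1 = np.log(x1 / x0)           # alpha_1 is fixed by x1 = exp(alpha_1)*x0

# With beta_1 = 0 the x-equation decouples, so x(2) = x1**2 / x0
# regardless of the remaining parameters alpha_2, beta_2.
x2_vals = []
for a2, b2 in [(0.5, -0.3), (-1.0, 0.8), (0.0, 0.4)]:
    sol = solve_ivp(lv, [0.0, 2.0], [x0, y0], args=([a1, 0.0, a2, b2],),
                    rtol=1e-10, atol=1e-12)
    x2_vals.append(sol.y[0, -1])
print(np.allclose(x2_vals, x1**2 / x0, atol=1e-6))

# If additionally alpha_2 = 0, then beta_2 is fixed by y(1) = y1, since
# ln(y1/y0) = (b2/a1)*(x1 - x0); the endpoint at t = 2 follows explicitly.
b2 = a1 * np.log(y1 / y0) / (x1 - x0)
sol = solve_ivp(lv, [0.0, 2.0], [x0, y0], args=([a1, 0.0, 0.0, b2],),
                rtol=1e-10, atol=1e-12)
endpoint = sol.y[:, -1]
print(np.allclose(endpoint,
                  [x1**2 / x0, y0 * (y1 / y0)**(1.0 + x1 / x0)],
                  atol=1e-6))
```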
When both α 1 = α 2 = 0, however, the system (4.1) can be rewritten as a Bernoulli equation for x, and the trajectory is a straight line passing through P 0 and P 1 . The intersection point of the curves C α1 and C α2 is P α1,α2 = (a + b − ab)^(−1) (x 2 1 /x 0 , y 2 1 /y 0 ), where a = x 1 /x 0 > 1 and b = y 1 /y 0 > 1 for our choice of the data points P 0 , P 1 . The scaling factor (a + b − ab)^(−1) is either larger than one or negative under these conditions. In the former case, by comparison with the expressions (4.5) and (4.6), we observe that P α1,α2 is to the right of the line C β1 and above C β2 , as in figures 3 and 5. On the other hand, as a and b grow, the scaling factor and hence the intersection point blow up to infinity; beyond this, when the ratios are large enough that a^(−1) + b^(−1) < 1, the scaling factor is negative and the curves C α1 and C α2 do not intersect in the positive quadrant. One implication of these observations is that the P 2 -diagram is not invariant under a change in the location of the points P 0 , P 1 , even if that change preserves the inequalities between their x and y coordinates.
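The intersection point and its scaling factor can be verified numerically. In the sketch below, β 1 and β 2 are determined from P 0 and P 1 (with α 1 = α 2 = 0), using the data points P 0 = [1, 1] and P 1 = [2, 1.5] that appear elsewhere in the text; the endpoint at t = 2 then lands at (a + b − ab)^(−1) (x 2 1 /x 0 , y 2 1 /y 0 ).

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, A):
    # Lotka-Volterra field: xdot = a1*x + b1*x*y, ydot = a2*y + b2*x*y
    a1, b1, a2, b2 = A
    x, y = z
    return [a1 * x + b1 * x * y, a2 * y + b2 * x * y]

# Data points used for illustration elsewhere in the text.
x0, y0, x1, y1 = 1.0, 1.0, 2.0, 1.5
a, b = x1 / x0, y1 / y0

# With alpha_1 = alpha_2 = 0, fitting P0 and P1 fixes beta_1, beta_2:
c = np.log(a / b)                    # c = beta_1*y0 - beta_2*x0
b2 = c * (b - 1.0) / (x0 * (a - b))
b1 = (c + b2 * x0) / y0
A = [0.0, b1, 0.0, b2]

sol = solve_ivp(lv, [0.0, 2.0], [x0, y0], args=(A,), t_eval=[1.0, 2.0],
                rtol=1e-11, atol=1e-12)
mid, end = sol.y[:, 0], sol.y[:, 1]

s = 1.0 / (a + b - a * b)            # the scaling factor (a + b - ab)^(-1)
print(np.allclose(mid, [x1, y1], atol=1e-6),
      np.allclose(end, [s * x1**2 / x0, s * y1**2 / y0], atol=1e-6))
```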
Note that in the P 2 -diagram, an intersection occurs between C α1 and C β1 . However, any solution of the system (4.1) with α 1 = 0 and β 1 = 0 has x(t) = x 0 and hence cannot pass through both of the specified points P 0 and P 1 . Thus, the apparent intersection point of these curves is only due to the projection of M onto the P 2 -plane. Such an intersection necessarily corresponds to two distinct solutions and hence must be a point of non-uniqueness for solutions to the inverse problem. We discuss more details about non-uniqueness below.

Separatrix C s
The separatrix C s is the part of the boundary of the red region defined by a set of locations in the P 2 -plane at which the solution of the inverse problem ceases to exist. Numerical computations show that as C s is approached from the direction of R R , the solutions of the inverse problem are such that all four parameters α 1 , β 1 , β 2 , α 2 diverge, with α 1 , α 2 → ∞ and β 1 , β 2 → −∞. Indeed, as P 2 approaches P 1 through R R , the parameters must scale in such a way that the overall rates of change of x and y near P 1 become arbitrarily small, corresponding to the shrinking distance between P 1 and P 2 that is covered in one time unit. It is difficult to compute C s numerically because one cannot simply proceed with continuation along a specific parameter.
The separatrix C s has an apparent intersection with C β1 that lies below the horizontal line y = y 0 . Similarly, C s seems to intersect the line C β2 to the left of x = x 0 . Intersections between a curve where parameter values diverge and curves where individual parameter values are 0 suggest the existence of some non-trivial three-dimensional structure to M, which we discuss further below. The graph of C s is shown in panel (a) of figure 5 as the dashed curve going from the top left corner to the bottom right corner, forming the boundary between the red regions R R and the white regions R NE4 , R NE2 , and R NE3 (see figure 3 for region labels).

Periodic orbit limits
There are two curves in the P 2 -diagram labeled C p1 , C p2 , which form boundaries of the green regions of the P 2 -diagram that correspond to periodic orbits. Several parameters go to infinity when P 2 approaches the curve C p1 from below or the curve C p2 from the left. These limits are discussed below and shown in figures 6-8. For example, in figure 6 we take β 1 as a representative parameter and plot a two-dimensional structure M β1 that corresponds to the P 2 -diagram near P 0 with β 1 as a third coordinate; the figure illustrates the divergence of β 1 along certain curves, including the line {y = y 0 }.

The fold curves
We have found that there are two folds in the projection of the manifold M onto the P 2 -diagram, and each of these folds forms a boundary between a region of P 2 values on which the inverse problem has two solutions in parameter space and a region with no inverse. To obtain a more detailed picture of each fold, we fix x 2 > x 2 1 /x 0 and compute the value of β 1 as a function of y 2 using the numerical methods described in section 1 of the supplementary materials. In figure 7, a point on the fold curve is represented by a blue dot with y 2 = y F < y 0 . On the bottom branch of the diagram, β 1 decreases as y 2 increases, while on the top branch, β 1 increases to ∞ as y 2 approaches y 0 (and (x 2 , y 2 ) approaches C p1 ). The fold results in non-uniqueness of the solution of the inverse problem for y 2 ∈ (y F , y 0 ). Both branches of solutions have σ A = [− + −+].

Figure 6. The structure M β1 formed by using the β 1 values along M to embed its projection over the region (0, x 1 ) × (0, y 1 ) into (x 2 , y 2 , β 1 )-space. The dark green curves correspond to lines passing through P 0 with different x/y slopes and β 1 heights; M β1 behaves like a ruled surface near P 0 . Note the divergence of β 1 as x 2 approaches x 0 and as y 2 approaches y 0 .
The curve C f1 , obtained by a two-parameter continuation in (x 2 , y 2 ) of the fold point shown in figure 7(a), is one of the fold curves and appears in the P 2 -diagram in figure 5. On the manifold M, the curve C f1 passes through the regions R G , R M , and R R . When projected onto the P 2 -diagram, those regions overlap, and each overlap gives rise to two possible solutions of the inverse problem. As noted previously, in the P 2 -diagram we denote the overlaps as R Ω1Ω2 , where each Ω i is one of the letters R, G, M, and C as in table 1. Two distinct letters imply that in that overlap region the signature σ A for the inverse problem can attain two distinct values, i.e. there are two distinct types of systems compatible with points P 2 located in that region.
Notice that there are two fold curves shown in figure 5, C f1 and C f2 . Fold curve C f1 borders the region R NE3 and the regions labeled R R , R G , and R M , as sketched in the panels of figure 5; fold curve C f2 borders the regions labeled R R , R G , and R C and is only sketched in panel (a). As is evident in panel (c) of figure 5, the projection of the fold curve C f1 onto the P 2 -diagram intersects tangentially with the projections of the curves C α1 , C β1 , and C s .

Figure 7. (a) A cross-section of M β1 at x 2 = 4.5 (with P 0 = [1, 1] and P 1 = [2, 1.5]) near a fold, located at (β 1 , x 2 , y 2 ) = (β 1 , x 2 , y F ). The blue dot is the fold, while the deep and light green dots are nearby points on M β1 that share the same y 2 coordinate, slightly greater than y F . (b) The P 2 -diagram augmented with the two trajectories of system (4.1) that solve the inverse problem when x 2 = 4.5 and y 2 takes the value indicated by the green dots in (a); these dots lie on top of each other in (b). The blue ray from the green dot is a segment in {x 2 = 4.5} corresponding to the range of y 2 values, from just below y F to above y 0 , for which the cross-section in (a) is displayed.

The visualization of the P 2 -diagram in three dimensions
Now that we have captured the fold curves along M, we proceed to give a more complete illustration of the folds and twists in the P 2 -diagram, as shown in figures 6, 8 and 9. To this end, we take β 1 as a representative parameter and plot the surface M β1 of solutions of the inverse problem in (x 2 , y 2 , β 1 )-space. Figure 8 presents two different views of M β1 . We show the images in M β1 of some of the curves we described in the previous subsection, namely C α1 , C β1 , C β2 , C α2 , C f1 , and C f2 . The separatrix C s is not visible as a curve in this figure since it corresponds to β 1 → −∞. We see that when P 2 ∈ [x 2 1 /x 0 , ∞) × [y 2 1 /y 0 , ∞), the surface is relatively flat. The surface twists over near P 0 ; we have already seen in figure 6 that when P 2 is exactly at P 0 , there is non-uniqueness of the solutions of the inverse problem and the distribution of the solutions is no longer discrete; we will discuss this phenomenon in the next subsection. The surface also folds over in the region corresponding to panel (c) of figures 3 and 5 (that is, the region where the four curves C α1 , C β1 , C f1 , and C s interact), so we sketch zoomed views focusing on these two regions separately in figures 6 and 9, respectively. In these regions, when we color parts of M β1 according to their sign signatures σ A , we in fact have two surfaces to consider. Within each surface, we apply our usual convention of using red for σ A = [+ − −+], magenta for σ A = [+ + −+], and cyan for σ A = [+ − ++], as can be seen in figures 8 and 9. In figure 9 specifically, within the (x 2 , y 2 )-plane that forms the base of each panel, we label each of several regions with the pair of letters indicating those σ A that arise on these regions.

Non-uniqueness at P 2 = P 0
We now study a special case of non-uniqueness, which occurs when P 2 = P 0 .
This case also yields σ A = [− + −+] or [+ − +−], but as we have seen in figure 6, here there exist infinitely many nontrivial solutions of the inverse problem, which form a continuum in parameter space; that is, in figure 6, the surface M β1 contains a vertical line over P 0 , where {x 2 = x 0 } intersects {y 2 = y 0 }.
We can explain this special non-uniqueness from the following perspective. There is a family T of trajectories of system (4.1) starting at P 0 and passing through P 1 at t = 1; generically, only one member of this family also passes through P 2 at t = 2. When P 2 = P 0 , however, only two distinct points on the trajectory are specified, and hence the family of candidate trajectories is larger, so that the number of trajectories passing through P 2 (= P 0 ) at t = 2 becomes infinite.
Notice that in this case, the period of each of the elliptic orbits is two, which means that starting at P 0 , it takes 1 time unit for the flow to evolve to P 1 and an additional 1 time unit for the flow to evolve from P 1 to P 2 . Thus, if we have a solution A, then the matrix −A (trivially) provides another solution, as discussed in section 4.1.
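The observation that −A provides another solution rests on time reversal in LIP systems: negating the parameter matrix reverses the direction of travel along the same orbit, so on a period-two orbit the data points are hit at the same times. A minimal check of the time-reversal property (illustrative values):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, A):
    a1, b1, a2, b2 = A
    x, y = z
    return [a1 * x + b1 * x * y, a2 * y + b2 * x * y]

A = np.array([1.0, -1.0, -1.0, 1.0])   # illustrative parameter matrix
z0 = [1.5, 1.0]

# Negating the parameter matrix reverses time along the same orbit:
# phi_{-A}(t, z0) == phi_A(-t, z0).
fwd = solve_ivp(lv, [0.0, 0.7], z0, args=(-A,),
                rtol=1e-10, atol=1e-12).y[:, -1]
bwd = solve_ivp(lv, [0.0, -0.7], z0, args=(A,),
                rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(fwd, bwd, atol=1e-6))
```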

P 2 -diagrams for alternative positions of P 0 and P 1
In previous subsections we focused in detail on the situation where P 0 and P 1 are positioned so that condition (C 1 ) applies: x 0 < x 1 and y 0 < y 1 . For completeness and comparison, figure 10 shows the P 2 -diagram for cases in which the positions of P 0 and P 1 obey the alternative conditions (C 2 ): x 1 < x 0 , y 1 > y 0 , or (C 3 ): x 1 < x 0 , y 1 < y 0 . Under these new conditions, new possible signatures σ A appear for solutions of the inverse problem, and different regions now correspond to the signatures we have already discussed above, but there are also many similarities between the new diagrams and the case when (C 1 ) holds. For example, in these P 2 -diagrams, the curves C αi and C βi , the folds C f , and the separatrices C s also exist, and the lines C βi are located at the same coordinate values, i.e. x = x 2 1 /x 0 and y = y 2 1 /y 0 . The non-existence region in both cases again separates into four disjoint components. There are again regions of non-trivial non-uniqueness, which for simplicity we color dark green in figure 10, irrespective of which types of solutions coexist. We do not show a diagram for the case when x 1 > x 0 and y 1 < y 0 , as it is equivalent to (C 2 ) with x and y interchanged.

Non-conservative Lotka-Volterra systems
The LV system (4.1) that we analyzed in previous sections is a conservative system; that is, there is a quantity, the Hamiltonian, that is conserved along trajectories. As a result, we have a trivial non-uniqueness of solutions of the inverse problem corresponding to periodic trajectories, where there are countable families of trajectories that travel along identical closed orbits, as discussed in section 4.1. Linear and affine systems, discussed in sections 2.1 and 2.2, do not have such trivial non-uniqueness and instead non-unique solutions of the inverse problem give rise to spiraling solutions with various numbers of rotations occurring between the times when the trajectory passes through the data points. We show in the following sections that removal of the conservative nature of the system eliminates the trivial non-uniqueness observed for (4.1).
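For concreteness, the conservation can be checked numerically. The sketch below uses one standard form of the LV conserved quantity (the form used in section 2 of the supplementary materials may be written differently) and illustrative parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, A):
    a1, b1, a2, b2 = A
    x, y = z
    return [a1 * x + b1 * x * y, a2 * y + b2 * x * y]

def H(z, A):
    # One standard form of the conserved quantity of (4.1):
    # H = b2*x + a2*log(x) - b1*y - a1*log(y); dH/dt = 0 along trajectories.
    a1, b1, a2, b2 = A
    x, y = z
    return b2 * x + a2 * np.log(x) - b1 * y - a1 * np.log(y)

A = [1.0, -1.0, -1.0, 1.0]   # illustrative predator-prey parameters
sol = solve_ivp(lv, [0.0, 10.0], [1.5, 1.0], args=(A,), dense_output=True,
                rtol=1e-11, atol=1e-12)
vals = [H(sol.sol(t), A) for t in np.linspace(0.0, 10.0, 50)]
drift = max(abs(v - vals[0]) for v in vals)
print(drift < 1e-6)
```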

Lotka-Volterra system with rotated field
One way in which we can remove the conservative nature of the system is by introducing a rotation-like perturbation that mixes the terms in the vector field [37,38]. Specifically, we consider the system

ẋ = α 1 x + β 1 xy − p(α 2 y + β 2 xy),
ẏ = α 2 y + β 2 xy + p(α 1 x + β 1 xy),     (5.1)

where p is a small parameter. This system reduces to system (4.1) when p = 0, and its vector field is rotated at each point by a fixed angle with respect to the field of (4.1). As a result, when p ≠ 0, the trajectories of (5.1) intersect the level sets of the Hamiltonian transversally. In figure 7, we showed a cross-section of the manifold M β1 for system (4.1) at fixed x 2 near a fold. In contrast, figure 11 shows a cross-section of the manifold M β1 for system (5.1) at the same fixed value of x 2 . Note that the fold in M β1 , which gives rise to non-uniqueness in solutions of the inverse problem, persists and deepens for negative values of p but disappears when p exceeds some positive value. The disappearance of the fold at positive values of p implies that on this particular segment of M β1 the inverse problem has a unique solution. However, we suspect that for low values of y 2 this branch will overlap with the part of M β1 corresponding to the solutions with signature [+ − −+], which would yield an enlargement of the R RG region.
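The transversal crossing of level sets can be seen numerically: for p = 0 the quantity H is conserved along trajectories, while for p ≠ 0 it drifts. A sketch with illustrative values (the form of H and the parameters here are assumptions for this check, not taken from the paper's computations):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lv_rot(t, z, A, p):
    # Rotated field (5.1): (f1, f2) -> (f1 - p*f2, f2 + p*f1).
    a1, b1, a2, b2 = A
    x, y = z
    f1 = a1 * x + b1 * x * y
    f2 = a2 * y + b2 * x * y
    return [f1 - p * f2, f2 + p * f1]

def H(z, A):
    # One standard form of the conserved quantity of the unperturbed system.
    a1, b1, a2, b2 = A
    x, y = z
    return b2 * x + a2 * np.log(x) - b1 * y - a1 * np.log(y)

A = [1.0, -1.0, -1.0, 1.0]   # illustrative parameters
z0 = [1.5, 1.0]

drifts = {}
for p in [0.0, 0.1]:
    sol = solve_ivp(lv_rot, [0.0, 5.0], z0, args=(A, p),
                    rtol=1e-11, atol=1e-12)
    drifts[p] = abs(H(sol.y[:, -1], A) - H(z0, A))

# H is (numerically) conserved for p = 0 but drifts for p != 0.
print(drifts[0.0] < 1e-6, drifts[0.1] > 1e-3)
```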

Saturated Lotka-Volterra model
Another natural variation on system (4.1) is the introduction of saturation in the interaction terms of the model. For example, in a predator-prey model with predator-prey interaction term r(x)y, the rate r(x) can naturally saturate as x increases, due to satiation of the predators. To implement this modification, we consider the saturated LV model (5.2), where ϵ is a nonnegative parameter. For ϵ = 0, system (5.2) reduces to system (4.1), while with ϵ > 0 the system includes a saturating dependence of the species interaction rate on the value of x. For this system, we first choose data points P 0 = [1, 1], P 1 = [2, 1.5], P 2 = [2.45, 4] and calculate the parameter matrix A for which the trajectory of (5.2) passes through these data points at times t = 0, t = 1, t = 2, respectively. Here σ A = [+ − +−], and from section 4.1 we know that there exist infinitely many solutions with trajectories that overlap when ϵ = 0.
We consider what happens to a selection of these solutions as we increase ϵ from 0. We choose seven trajectories, three with clockwise (cw) rotation and four with counterclockwise (ccw) rotation. We continue each of these trajectories as a solution to the inverse problem for (5.2) as ϵ increases, and we plot the resulting curves of parameter values versus ϵ in figure 12. We find that only two of the seven curves (ccw 0 and cw 1 ) persist over the whole interval ϵ ∈ [0, 1]. The other curves each have folds at some ϵ ∈ (0, 1), which implies that the corresponding solutions are lost before ϵ reaches 1, in some cases for quite small values of ϵ.
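The continuation procedure can be sketched generically. The following illustrative computation continues a solution of the inverse problem for the rotated system (5.1) in the parameter p (rather than in ϵ), warm-starting each solve from the previous solution; all data and numerical choices here are synthetic, for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

def lv_rot(t, z, A, p):
    # Rotated LV field (5.1).
    a1, b1, a2, b2 = A
    x, y = z
    f1 = a1 * x + b1 * x * y
    f2 = a2 * y + b2 * x * y
    return [f1 - p * f2, f2 + p * f1]

def shoot(A, P0, p):
    sol = solve_ivp(lv_rot, [0.0, 2.0], P0, args=(A, p), t_eval=[1.0, 2.0],
                    rtol=1e-10, atol=1e-12)
    return sol.y

def residual(A, P0, P1, P2, p):
    traj = shoot(A, P0, p)
    return np.concatenate([traj[:, 0] - P1, traj[:, 1] - P2])

# Synthetic data from a known solution of the unperturbed (p = 0) problem.
A0 = np.array([1.0, -1.0, -1.0, 1.0])
P0 = np.array([1.5, 1.0])
P1, P2 = shoot(A0, P0, 0.0).T

# Natural-parameter continuation in p: warm-start each solve from the
# previous solution; a fold would appear as a failed solve.
A = A0.copy()
branch = []
for p in np.linspace(0.0, 0.2, 11):
    out = root(residual, A, args=(P0, P1, P2, p), tol=1e-12)
    if not out.success:
        break
    A = out.x
    branch.append((p, A.copy()))

assert branch, "continuation failed at p = 0"
print("continued to p =", branch[-1][0])
```

A fold in ϵ of the kind seen in figure 12 would manifest here as a solve that fails to converge from the warm start, after which a pseudo-arclength method would be needed to follow the branch around the fold.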
Interestingly, we do find multiple co-existing inverse problem solutions at ϵ = 1. The coexistent solutions yield spiral trajectories, some cw and some ccw, some but not all of which exhibit the same number of rotations between passages through the data points. In figure 13, we show the continuation of six such solution curves as ϵ decreases from 1, plotted versus each of the four original model parameters. The dashed green curve and the solid orange curve are exactly the same curves as shown in figure 12. The other four curves are only defined for ϵ ∈ (0, 1] and appear to approach vertical asymptotes, where one or more parameter values go to ±∞, at ϵ = 0. In order to distinguish the curves cw 2 and cw 3 appearing in both figures 12 and 13, we label them with superscripts 0 and 1, respectively. In figure 13, there are distinct alternative solutions that all have one rotation between data points (examples of trajectories corresponding to selected points on the curves in figures 12 and 13 with ϵ = 0.01, including these one-rotation solutions, are shown in figures 1 and 2 in the supplementary material). These alternative solutions are indicated as the curves cw 1 (orange), cw′ 1 (cyan) and cw′′ 1 (magenta). This non-uniqueness of spirals with the same number of rotations between data points does not necessarily appear for all choices of P 2 . In the supplementary material, we consider the curves generated with the same P 0 and P 1 but with P 2 = [2.45, 0.5], and show in supplementary figure 3 that in this case there are three curves that persist over the whole interval ϵ ∈ [0, 1], while we no longer have non-uniqueness of spirals with the same number of rotations. This example shows that the P n -diagram for the saturated LV system (5.2) displays a richer variety of non-uniqueness of solutions of the inverse problem than arises, for example, in linear ODE systems.

Discussion
Estimation of model parameters from data is a key step in modeling physical systems. The usual approaches to this task involve identifying the parameter set that minimizes some measure of deviation of model trajectories from known data points or deriving a probability distribution on parameter space [6,10]. The latter approach can be used sequentially to update the probability distribution, or the parameter estimate together with estimates of uncertainty, as new measurements are obtained [39]. In this work, we take an alternative approach in which, instead of fixing the data points and the associated inverse problem, we consider a whole space of possible data values and the corresponding class of inverse problems. We show that this approach, first developed for linear [27] and affine [28] systems, extends easily to LIP models, exemplified by the LV system and associated perturbations. In this approach, we obtain a manifold M over the space of data coordinates and unknown parameters, which represents the mapping from the former to the latter.
Our results show that if we project the manifold M onto the coordinates x n , y n of the last data point necessary to eliminate systemic unidentifiability, we obtain a diagram (the P 2 - or, more generally, P n -diagram) that encompasses the answers to almost all of the questions posed in section 4, and we can visualize additional information by plotting M β1 as shown in figure 8. Among the similarities of this diagram with the corresponding diagrams for linear and affine systems are the following: (i) Domains containing data that yield existence or non-existence of the solution to the inverse problem are simply connected regions with boundaries that are composed of smooth curves. (ii) Domains containing data for which the inverse problem solution has specified characteristics are also simply connected regions with boundaries composed of smooth curves.
The knowledge that the regions of data points for which the inverse problem solutions share the same properties are simply connected allows a modeler to have confidence in conclusions drawn from fitting the model to the data, because these conclusions are robust to slight changes in the data; in particular, if the data used to solve the inverse problem deviates from the true properties of the system being modeled due to measurement error, that need not yield erroneous conclusions. Care must be taken, however, to understand the structure of the P n -diagram for a system of interest, because some of its regions may be small. Estimating the sizes of these regions analytically has been done for linear systems [27], but performing similar analysis for nonlinear systems remains a challenge.
The main differences between this diagram and the corresponding diagrams for linear and affine systems are the following: (i) There are folds in the diagram that give rise to regions of non-uniqueness where two qualitatively distinct models fit the same data. (ii) The structure of the diagram is no longer invariant under changes in the location of the points P 0 , P 1 . (iii) Structural stability of the inverse problem solution set may be lost.
In our analysis, the folds referenced in point (i) appeared as fold curves in figure 3 and were illustrated in three-dimensional views in section 4.3 above. Such folds do not occur in the P 2 -diagrams for linear and affine systems, where the only form of non-uniqueness arises in regions with spiraling solutions and occurs due to countable families of solutions featuring different numbers of rotations between crossings through the data points. This form of non-uniqueness occurs in the LV system as well, and it was described as the trivial non-uniqueness in section 4.1.
Regarding point (ii), in our construction of the P 2 -diagram we have focused on the case in which P 0 and P 1 are located so that x 0 < x 1 and y 0 < y 1 . Figure 10 in section 4.5 shows the diagram corresponding to other choices of the relative location of P 0 and P 1 . Although there are similarities between those diagrams and that of figure 3 in the main text, for example in the location of the lines C β1 and C β2 , the differences are in the appearance of regions for new signature types and in the location of the nonexistence region R NE and the boundary curves C α1 and C α2 . Even if the inequalities x 0 < x 1 and y 0 < y 1 are preserved, however, we have shown that the position of the intersection point of the curves C α1 and C α2 is sensitive to the location of P 0 and P 1 , and that the region R B4 in the diagram may not survive when the ratios x 1 /x 0 and y 1 /y 0 are too large.
Finally, we briefly illustrated point (iii) in section 5. With the introduction of nonlinearity comes new opportunities for changes in solution structure with changes in the vector field. For example, arbitrarily small perturbations to a conservative system can make the perturbed system non-conservative. The inverse problem solutions that we considered for the conservative LV system did continue along smooth curves with the introduction of a saturating perturbation, although the interval of persistence was small in some cases. On the other hand, solutions present for arbitrarily small, positive values of the perturbation parameter did not necessarily persist when the perturbation was completely eliminated. The loss of these solutions corresponded to parameter values along a solution branch diverging to infinity. This outcome is analogous to what we observed along some curves in the P 2 -diagram with certain variations of data point locations.
We made several simplifying assumptions in this study. One central assumption was that the number of data points provided was the smallest capable of identifying the system, i.e. half of the number of unknowns (six), so that the number of given pieces of information about trajectory coordinates was exactly equal to six. Clearly, the need to fit fewer data points would impose weaker constraints on inverse problem solutions, leading to unidentifiability, while the inclusion of more data, although possibly eliminating non-uniqueness, would add constraints or greatly expand the region of non-existence for a strict inverse problem solution in the absence of measurement errors. Another central assumption was that the intervals between the times when trajectories cross through the data points were equal. This is a convenient assumption for analytical treatment of linear and affine systems, so we kept it here for consistency. The qualitative properties observed in our study, however, do not depend on this assumption. For example, for any time of passage t > 0 between P 1 and P 2 , there will always be positions in the P 2 -diagram such that as P 2 approaches such a position, the parameter values must go to ±∞ to maintain compatibility with the time of passage from P 0 to P 1 . Nonetheless, there remain many open directions related to the ideas in this study, including rigorous analysis of our numerical observations and extension of the ideas in this work to more general classes of nonlinear systems.

Data availability statement
No new data were created or analyzed in this study.
Acknowledgments
J E R was partially supported by NSF award DMS 1951095.