Curves in quantum state space, geometric phases, and the brachistophase

Given a curve in quantum spin state space, we inquire what is the relation between its geometry and the geometric phase accumulated along it. Motivated by Mukunda and Simon’s result that geodesics (in the standard Fubini-Study metric) do not accumulate geometric phase, we find a general expression for the derivatives (of various orders) of the geometric phase in terms of the covariant derivatives of the curve. As an application of our results, we put forward the brachistophase problem: given a quantum state, find the (appropriately normalized) Hamiltonian that maximizes the accumulated geometric phase after time τ—we find an analytical solution for all spin values, valid for small τ. For example, the optimal evolution of a spin coherent state consists of a single Majorana star separating from the rest and tracing out a circle on the Majorana sphere.

The geometric phase accumulated during the evolution of a quantum system plays an essential role in a variety of physical phenomena, such as the nuclear dynamics in Born-Oppenheimer molecular theory [1,2] and physical properties of materials like polarization, magnetization, or the various Hall effects [3][4][5][6][7], to mention but a few. Additionally, it has been proposed as a key ingredient in the implementation of quantum computing through holonomic quantum gates [8]. Moreover, several sets of universal quantum gates include a unitary operation that imparts a generic geometric phase to a state [9]. The first formal deduction of the geometric phase in the quantum realm was given by Michael Berry in 1984 [10], for a system in a non-degenerate hamiltonian eigenstate, in the adiabatic approximation. Eventually, the concept was generalized to non-abelian (Wilczek-Zee) geometric phases [11], nonadiabatic evolutions [12,13], and even non-cyclic curves [14], reaching its most general form in the work of Mukunda and Simon [15]. Nowadays, there is experimental evidence of the geometric phase for both cyclic and non-cyclic curves [16,17]. The mathematical characterization of the geometric phase as the holonomy of a connection dictated by the Schrödinger evolution of the quantum state [18] underlies the generalizations mentioned above. Following this geometrical point of view, we ask the following question: given a curve in quantum state space, which geometrical properties of the curve give rise to the geometric phase? A key result, within this mathematical framework, is that geodesic curves (in the natural Fubini-Study metric), i.e., curves without acceleration, do not accumulate geometric phase [15]. Consequently, the geometric phase associated with a curve, parametrized by arclength, depends on the (covariant) derivatives of second and higher order of the curve. Spelling out this relation in detail is one of the main goals of the present work.
The experimental generation of geometric phases in quantum computation faces multiple challenges, like decoherence and other systematic errors [19], necessitating the implementation of quantum gates in the shortest possible time, within the most efficient scheme. Several scenarios have been studied in this context [19][20][21], exemplified by the well-known quantum brachistochrone problem [22]: realize a quantum gate, or "control protocol", in the shortest possible time under suitable conditions. We contribute to this application-oriented direction by first determining the hamiltonian that maximizes the initial acceleration of a given state, and then posing (and solving analytically) the brachistophase problem: for a given initial state, find the (time-independent) hamiltonian that maximizes the geometric phase accumulated after a given time τ.
The paper is organized as follows: In Sec. II we review pertinent geometrical aspects of quantum state space. Covariant derivatives of a general curve in quantum state space are studied in Sec. III, including the particular case of a Schrödinger curve, i.e., a curve that evolves according to the homonymous equation, with a time-independent hamiltonian. Section IV discusses the relation between the geometric phase and the covariant derivatives of the curve. The maximization problems mentioned above are studied in Sec. V. A summary of our results and some concluding comments are presented in Section VI.

A. Coordinates, metric, connection, and curvature of the projective space
Let  ≡ ℂ +1 be the Hilbert space of a spin-quantum system, where = 2 .The elements | ⟩ ∈  that differ by a non-zero scalar factor, the latter being a point in the complex projective space ℂ , i.e., the space of complex lines through the origin in , with the quantities = ∕ 0 , together with their complex conjugates ̄ ≡ being coordinates in the chart 0 of ℂ , where 0 ≠ 0 -we will denote them collectively by , with ranging over {1, … , , 1, … , ̄ }, implying the slight abuse of notation ̄ ≡ ̄ ≡ .We denote the group of unitary matrices of dimension +1 and its corresponding Lie algebra of hermitian matrices by ( +1) and ( + 1), respectively (we follow the physicists' convention in which the structure constants are pure imaginary).ℂ may be embedded into ( + 1) as the ( + 1)−adjoint orbit of the density matrix 0 = diag(1, 0, … , 0) (see, e.g., [23]), the latter living naturally in with Δ ≡ 1 + ∑

=1
We denote the image of ℂP^n under the embedding by ℙ ⊂ u(n+1). The dimension of ρ is that of |ψ⟩, equal to n+1 - we enumerate the components of |ψ⟩ and the rows and columns of ρ by greek indices ranging from 0 to n. We also use the notation Z(ζ) = (1, ζ^1, …, ζ^n)^T, so that, e.g., ρ(ζ) = Z Z†/Δ. A "basis" in the tangent space T_ρℙ is given by the matrices e_μ ≡ ∂ρ/∂ζ^μ, with real tangent vectors v = v^μ e_μ constrained to satisfy v^ā = (v^a)*, where v^ā denotes the component of v along e_ā, v = v^a e_a + v^ā e_ā. Note that the matrices e_μ are not hermitean and, hence, are not by themselves tangent to ℙ - to obtain tangent vectors to ℙ we need to restrict to real v's, satisfying the above constraint (a true basis in T_ρℙ is then given by, e.g., {e_a + e_ā, i(e_a − e_ā)}). In the "basis" {e_μ}, the Fubini-Study (FS) metric g and its inverse have components g_{μν} and g^{μν}, with g_{μν} = g_{νμ} (i.e., g is symmetric) and g_{ab̄} = (g_{bā})* (i.e., (g_{ab̄}) is hermitean), similar statements holding true for the inverse metric. Note that the fact that g comes from a Kähler potential (K = 2 log Δ) implies that g_{ab̄,c} = g_{cb̄,a} and g_{ab̄,c̄} = g_{ac̄,b̄}.
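The Kähler-potential statement can be sanity-checked numerically. The sketch below is our own illustration (function names ours): it differentiates K = 2 log Δ with Wirtinger finite differences and compares the result to the closed form this potential implies, which we derived under the normalization stated in the text.

```python
import numpy as np

def K(zeta):
    """Kahler potential K = 2 log(Delta), Delta = 1 + sum_a |zeta^a|^2 (normalization as in the text)."""
    return 2.0 * np.log(1.0 + np.vdot(zeta, zeta).real)

def d_dz(f, zeta, a, eps):
    """Wirtinger derivative d f / d zeta^a = (d/dx - i d/dy)/2, by central differences."""
    ex = np.zeros_like(zeta); ex[a] = eps
    ey = np.zeros_like(zeta); ey[a] = 1j * eps
    return ((f(zeta + ex) - f(zeta - ex)) - 1j * (f(zeta + ey) - f(zeta - ey))) / (4 * eps)

def d_dzbar(f, zeta, b, eps):
    """Wirtinger derivative d f / d zetabar^b = (d/dx + i d/dy)/2."""
    ex = np.zeros_like(zeta); ex[b] = eps
    ey = np.zeros_like(zeta); ey[b] = 1j * eps
    return ((f(zeta + ex) - f(zeta - ex)) + 1j * (f(zeta + ey) - f(zeta - ey))) / (4 * eps)

def fs_metric_fd(zeta, eps=1e-4):
    """g_{a bbar} as the mixed second derivative of K, by nested central differences."""
    n = len(zeta)
    return np.array([[d_dz(lambda z: d_dzbar(K, z, b, eps), zeta, a, eps)
                      for b in range(n)] for a in range(n)])

def fs_metric_closed(zeta):
    """Closed form implied by K: g_{a bbar} = 2 (Delta delta_ab - zetabar^a zeta^b) / Delta^2 (our derivation)."""
    Delta = 1.0 + np.vdot(zeta, zeta).real
    return 2.0 * (Delta * np.eye(len(zeta)) - np.outer(zeta.conj(), zeta)) / Delta**2
```

The finite-difference metric agrees with the closed form to the accuracy of the nested differences, and the resulting matrix (g_{ab̄}) is hermitean, as claimed above.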
The Christoffel symbols, all mixed components of which vanish, and the Riemann tensor, all other independent components of which are zero, are given explicitly in [24].

B. Geometry of the embedding ℂP^n ↪ u(n + 1)
The tangent space T_{|ψ⟩}ℋ can be decomposed into parallel and normal subspaces, and the FS metric on ℙ is obtained from the hermitean inner product in ℋ. The complex structure on ℋ, given by J(|ψ⟩) = i|ψ⟩, induces a complex structure (also denoted by J) on ℙ. An arbitrary matrix H ∈ u(n+1), considered as a hamiltonian operating on ℋ, generates the Schrödinger vector field |ψ̇⟩ = −iH|ψ⟩, which projects to the fundamental field Ĥ on ℙ, Ĥ(ρ) = −i[H, ρ]. The natural metric G in u(n+1) is given by G(A, B) = (1/2)Tr(AB), which is invariant under the adjoint action of U(n+1). The tangent space T_ρ u(n+1), ρ ∈ ℙ, can be decomposed into subspaces tangent and normal to ℙ, respectively (in the metric G), T_ρ u(n+1) = T_ρℙ ⊕ N_ρℙ, with T_ρℙ ⟂ N_ρℙ. The vectors Ĥ(ρ), with H ∈ u(n+1), generate T_ρℙ, since the action of U(n+1) on ℙ is transitive. Note that both ρ and Ĥ(ρ) are matrices in u(n+1), and if v belongs to T_ρℙ then J(v) also belongs to T_ρℙ, and vice versa. As shown in [23], the normal space N_ρℙ is generated by matrices A ∈ u(n+1) such that [A, ρ] = 0. In view of (11), this means that the normal part, w.r.t. ρ, of an H ∈ u(n+1) does not contribute to Ĥ(ρ). For a given H ∈ u(n+1) and ρ ∈ ℙ, we define the even and odd parts of H (w.r.t. ρ) by the conditions [ρ, H_e] = 0 and H = H_e + H_o. It can be shown that ad_{H_e} maps T_ρℙ to T_ρℙ, and N_ρℙ to N_ρℙ, while ad_{H_o} maps T_ρℙ to N_ρℙ and vice versa. Indeed, for a tangent vector [A, ρ], we have [H_e, [A, ρ]] = [[H_e, A], ρ] + [A, [H_e, ρ]] = [[H_e, A], ρ], where, in the first equality, we used the Jacobi identity, while, in the second one, the fact that [ρ, H_e] = 0.
Thus, ad_{H_e} sends [A, ρ] ∈ T_ρℙ to [[H_e, A], ρ] ∈ T_ρℙ. On the other hand, a similar computation, using the fact that even parts commute with ρ, shows that, for a tangent vector v, [H_o, v] is orthogonal to T_ρℙ and, hence, belongs to N_ρℙ. It can also be seen easily that the above decomposition of a general hermitean matrix H, regarded as a tangent vector to u(n+1) at ρ, coincides with the decomposition T_ρ u(n+1) = T_ρℙ ⊕ N_ρℙ, with H_o ∈ T_ρℙ and H_e ∈ N_ρℙ. Indeed, [ρ, H_e] = 0 implies that H_e ∈ N_ρℙ, while, for any v ∈ T_ρℙ, [ρ, [ρ, v]] = v, i.e., projection onto the tangent space of ℙ is obtained by a double commutator with ρ. Another way to obtain the last result is by writing H = H_e + H_o and noting that the normal (even) part is filtered out in the first commutator, [ρ, H] = [ρ, H_o].
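The double-commutator projection just described is easy to test numerically. The sketch below (our illustration, with hypothetical variable names) checks, for a random pure state and a random hermitean matrix, that X ↦ [ρ, [ρ, X]] is idempotent and splits X into a tangent (odd) part and a normal (even) part that commutes with ρ, the two being orthogonal in the metric G(A, B) = Tr(AB)/2.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4  # e.g. spin 3/2: states live in C^4

# random pure-state density matrix rho = |psi><psi|
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# random hermitean matrix, regarded as an ambient tangent vector at rho
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
X = (X + X.conj().T) / 2

comm = lambda A, B: A @ B - B @ A
P = lambda Y: comm(rho, comm(rho, Y))   # tangential projection: double commutator with rho

X_par = P(X)        # odd part: tangent to the orbit
X_perp = X - X_par  # even part: commutes with rho, i.e. normal to the orbit
```

The checks below confirm P² = P, [ρ, X_⟂] = 0, and Tr(X_∥ X_⟂) = 0.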

A. General curves in ℙ
Consider any basis {T_I} of u(n+1), with position-independent entries (T_I)_{μν} (we choose to enumerate the (n+1)² elements of the basis by the composite index I). The metric G on u(n+1), in that basis, has position-independent components, so that the Christoffel symbols vanish and, for a position-dependent matrix X, interpreted as a vector field on u(n+1), the ambient covariant derivative ∇̃ reduces to the ordinary directional derivative. In the vicinity of ℙ in u(n+1) one may choose coordinates for u(n+1) complementing the 2n coordinates on ℙ by additional coordinates, transversal to ℙ and extending continuously in a neighborhood of ℙ. A short(ish) calculation, starting from (3), together with (17) (with the change in notation H_o → X_∥, H_e → X_⟂), then shows that (compare (24), (25), (26) to (7)), for v, w ∈ T_ρℙ, ∇_v(w) = (∇̃_v(w))_∥. In other words, the Levi-Civita covariant derivative on ℙ may be obtained by projecting the ambient covariant derivative (corresponding to the euclidean metric G on u(n+1)) onto the tangent space of ℙ, as in the standard treatment of, e.g., surfaces in ℝ³. Given a curve ρ_t in ℙ, its velocity is v = ρ̇_t ∈ T_{ρ_t}ℙ, while its acceleration, when ρ_t is viewed as a curve in u(n+1), is ∇̃_v(v) = ρ̈_t, so that (dropping the subscript t) the covariant acceleration is a = (ρ̈)_∥. It might prove useful to cast (27) in a "big matrix" form. An arbitrary X ∈ u(n+1) may be represented by an (n+1)²-dimensional vector |X⟩ containing the entries in the standard order, |X⟩ = (X_{11}, X_{12}, …)^T. Then (27) can be written as |a⟩ = Π|ρ̈⟩, with Π an (n+1)² × (n+1)² hermitean matrix that can also be written in matrix tensor product form. Being a projection operator, Π satisfies Π² = Π. Since projection onto the tangent space of ℙ is effected by a double commutator with ρ, Π is the square of the "big matrix" form of ad_ρ. The modulus squared of a then follows, using Π† = Π and Π² = Π.
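The "big matrix" form can be sketched concretely. With a row-major vectorization (our convention; the paper's ordering may differ by a permutation), ad_ρ becomes 𝔄 = ρ ⊗ 1 − 1 ⊗ ρᵀ, and Π = 𝔄² is then a hermitean projector that reproduces the double commutator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

I = np.eye(n)
A = np.kron(rho, I) - np.kron(I, rho.T)   # ad_rho acting on row-major vec(X)
Pi = A @ A                                # "big matrix" projector onto the tangent space

X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
X = (X + X.conj().T) / 2
double_comm = rho @ (rho @ X - X @ rho) - (rho @ X - X @ rho) @ rho
```

Π is idempotent and hermitean (the eigenvalues of ad_ρ for a pure state are 0, ±1), and Π|X⟩ reshapes back to [ρ, [ρ, X]].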

B. Schrödinger curves in ℙ
By Schrödinger curves in ℙ we mean curves that are solutions to the Schrödinger equation, ρ̇ = −i[H, ρ]. We will limit our attention to the case where H does not depend on time; then |ψ_t⟩ = e^{−iHt}|ψ_0⟩ and ρ_t = e^{−iHt} ρ_0 e^{iHt}. Computing the successive time derivatives of ρ_t and using (29), we obtain the covariant acceleration, as well as its modulus squared, in which ℌ† = ℌ was used. Introducing the "density matrix of the density matrix", R ≡ |ρ⟩⟨ρ|, the above can also be written in terms of R. Finally, we may also define the acceleration of the curve as its second covariant derivative w.r.t. length, rather than time.
Since the modulus of the velocity of a curve ρ_t, describing time evolution generated by Schrödinger's equation with a time-independent hamiltonian, is constant in time, d|v|/dt = 0, we get d/ds = |ρ̇|^{-1} d/dt, and the corresponding expression for the acceleration w.r.t. arclength. Note that, in this case, the modulus of the latter is the curvature of the curve.
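Both statements are easy to illustrate numerically. The sketch below (ours; the hamiltonian and state are random test inputs) verifies the constancy of the speed in the metric G and evaluates the curvature at t = 0 as |a|/|v|², i.e., the modulus of the covariant acceleration after reparametrizing by arclength.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (H + H.conj().T) / 2                      # a random time-independent hamiltonian

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi.conj())

comm = lambda A, B: A @ B - B @ A
G = lambda A, B: 0.5 * np.trace(A @ B).real   # ambient metric G(A, B) = Tr(AB)/2

w, V = np.linalg.eigh(H)
U = lambda t: (V * np.exp(-1j * w * t)) @ V.conj().T   # exp(-iHt)

def speed(t):
    rho = U(t) @ rho0 @ U(t).conj().T
    v = -1j * comm(H, rho)                    # rho-dot
    return np.sqrt(G(v, v))

speeds = [speed(t) for t in np.linspace(0.0, 2.0, 9)]

# curvature at t = 0: modulus of the covariant acceleration over speed squared
v0 = -1j * comm(H, rho0)
rho_dd = -comm(H, comm(H, rho0))              # ambient second derivative
a0 = comm(rho0, comm(rho0, rho_dd))           # tangential projection
kappa = np.sqrt(G(a0, a0)) / G(v0, v0)
```

The speed is constant along the curve, as expected for Schrödinger evolution with a time-independent hamiltonian.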

IV. GEOMETRIC PHASE AND COVARIANT DERIVATIVES OF CURVES IN ℙ
A generalization of the standard geometric phase, valid for open (i.e., non-cyclic) curves, is given in [15]. As shown there, the geometric phase accumulated along a geodesic (in the Fubini-Study metric) is zero. On the other hand, geodesics are characterized by their vanishing acceleration. It seems reasonable then to inquire about the relation between the acceleration of a curve and the associated geometric phase - note that this relation ought to exist independently of the Schrödinger dynamics.
Given a curve ρ_t, 0 ≤ t ≤ τ, in ℙ, the accumulated geometric phase Φ_g(t) up to a time 0 ≤ t ≤ τ is given in [15], where a dot denotes a time derivative, while P exp denotes a path-ordered exponential. By definition, this means that the corresponding operator satisfies a differential equation, subject to the appropriate initial condition at t = 0. The open-curve phase has a simple geometrical interpretation: it is the usual Berry phase of the closed curve obtained by gluing the curve ρ_t, 0 ≤ t ≤ τ, to the geodesic that connects ρ_τ with ρ_0. In what follows, we find an explicit formula for the derivatives of Φ_g at t = 0 in terms of quantities intrinsic to ℙ.
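On a discretized curve, the open-curve phase can be evaluated with the Pancharatnam product, Φ_g = arg⟨ψ_0|ψ_N⟩ − Σ_k arg⟨ψ_k|ψ_{k+1}⟩ (our discretization, not the paper's formula verbatim). The sketch below uses it to check the key fact quoted above: a geodesic accumulates zero geometric phase.

```python
import numpy as np

def geometric_phase(states):
    """Discrete (Pancharatnam) version of the open-curve phase:
    arg<psi_0|psi_N> minus the accumulated local phases along the curve."""
    total = np.angle(np.vdot(states[0], states[-1]))
    local = sum(np.angle(np.vdot(states[k], states[k + 1])) for k in range(len(states) - 1))
    return total - local

# a geodesic: |psi(s)> = cos(s)|psi0> + sin(s)|chi>, with <psi0|chi> = 0
rng = np.random.default_rng(0)
n = 4
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)
chi = rng.normal(size=n) + 1j * rng.normal(size=n)
chi -= np.vdot(psi0, chi) * psi0
chi /= np.linalg.norm(chi)
geodesic = [np.cos(s) * psi0 + np.sin(s) * chi for s in np.linspace(0.0, 1.2, 400)]
```

All inner products along the geodesic are real and positive, so every term in the discrete phase vanishes.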

The first three derivatives of the geometric phase
Using the notation ⧼A⧽ ≡ Tr(ρ_0 A) for an arbitrary time-dependent operator A, one gets d⧼AB⧽/dt = ⧼ȦB + AḂ⧽, so that the successive derivatives of the phase can be computed directly (dropping the index g). Note that, by (45), Φ is the imaginary part of the logarithm of such a trace, so that its derivatives follow from those above. All time derivatives of ρ are hermitean so, using cyclicity of the trace, one concludes that the first two time derivatives of Φ vanish, at t = 0, for every curve ρ_t. Similarly, one computes the third derivative. The quantity in the r.h.s. of (56) is, for a general curve, nonzero, so the first nonzero derivative of Φ for a general curve is the third one. The matrix ρ̈_0 in (56) represents a vector tangent to the ambient vector space u(n + 1), but not to ℙ - we may remedy this by noting that, for a general curve, only the part of ρ̈_0 tangential to ℙ contributes, so we may write (with ρ̇_0 = v and a the covariant acceleration) ⃛Φ_0 = ω(v, a), where (10) was used to obtain the second line, i.e., ⃛Φ_0 is equal to the symplectic area of the parallelogram spanned by the (initial) velocity and acceleration of the curve. It follows that, for geodesics, where a = 0, ⃛Φ_0 vanishes, a result that we extend below to derivatives of all orders.
For Schrödinger curves, a short calculation gives (58), where h, as defined in Section III B, is evaluated at ρ_0.
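The vanishing of the first two derivatives can be seen numerically for a Schrödinger curve: the accumulated phase scales as t³ for small t, so halving the time divides the phase by roughly 8. Below is our own check, for spin-1/2 with an arbitrarily chosen field direction; we exploit H² = 1 to write the propagator exactly as cos(t) − i sin(t) H.

```python
import numpy as np

def geometric_phase(states):
    # discrete (Pancharatnam) version of the open-curve phase
    total = np.angle(np.vdot(states[0], states[-1]))
    local = sum(np.angle(np.vdot(states[k], states[k + 1])) for k in range(len(states) - 1))
    return total - local

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = (sz + sx) / np.sqrt(2)                 # unit field at 45 degrees (a test choice)
psi0 = np.array([1, 0], dtype=complex)     # spin-1/2 coherent state along z
I2 = np.eye(2)

def phase_after(t, steps=400):
    # H^2 = 1, so exp(-iHt) = cos(t) - i sin(t) H, exactly
    states = [(np.cos(s) * I2 - 1j * np.sin(s) * H) @ psi0
              for s in np.linspace(0.0, t, steps + 1)]
    return geometric_phase(states)

ratio = phase_after(0.2) / phase_after(0.1)   # ~ 2^3 = 8 if the leading term is cubic
```

The ratio approaches 8 as t → 0, confirming that the t and t² terms of the expansion are absent.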

The fourth and fifth derivative of the geometric phase
For the fourth time derivative of Φ we find (59). We proceed to express this in terms of the covariant derivatives of ρ. From ρ² = ρ one gets ρρ̇ + ρ̇ρ = ρ̇, which implies ρρ̇² = ρ̇²ρ, i.e., ρ̇² ∈ N_ρℙ, so that the second term in the r.h.s. of (59) vanishes. Putting a ≡ ∇_v(v) = (ρ̈)_∥, and using (61), we find from (59) (putting ρ̇ → v) that the second term in the r.h.s. of (63) vanishes, and we arrive at (64). Next, we proceed with the fifth derivative. Denote by j the third covariant derivative of ρ, j = ∇_v(a). The fifth derivative of Φ at t = 0 involves the tangential part of ρ^{(4)} at t = 0, which, in turn, can be expressed in terms of j and lower-order t-derivatives. Expressing the latter in terms of v, a, we find (65). The first two terms on the right only involve covariant geometric quantities explicitly - the rest need some work. From (60), taking one more derivative, one gets (67). Taking the trace of the result, one finds (68), where E ≡ ⟨ψ̇|ψ̇⟩. Note also that, from (68), it is easily inferred that the third and sixth terms in the r.h.s. of (65) can be combined. The fourth term in the r.h.s. of (65) is zero because [ρ, ρ̇²] = 0. For the fifth term, start with (72), where ⟨ψ̈|ψ⟩ = ⟨ψ|ψ̈⟩ = −E has been used (derived by taking the derivative of ⟨ψ̇|ψ⟩ = 0). A straightforward calculation now shows that Tr(ρ[ρ̈², ρ̇]) = 0, so that, finally, we arrive at (74). As can be appreciated in the above examples, this line of attack quickly becomes intractable - in the next section we follow an alternative approach that simplifies the calculation of higher-order derivatives of the geometric phase.

B. Derivatives of the geometric phase in terms of integrals
Let α be a time-independent p-form defined over a manifold. Consider the integral of α over a domain D_t that is flowing (as t varies) along the integral curves of a vector field Y, D_t = φ_t(D_0), with φ_t|_{t=0} = id. It can be shown (see, e.g., [25]) that the time derivative of the integral equals the integral, over D_t, of the Lie derivative of α along Y. We intend to use this formula to get an expression for the various time derivatives of the geometric phase - a toy example illustrating the use of (76) appears in appendix B.
Recall that Φ_g(t) is the usual Berry phase of the closed curve ρ•h (with • denoting concatenation), where ρ denotes the curve ρ_{t′}, 0 ≤ t′ ≤ t, and h is the geodesic that connects ρ_t with ρ_0. By using the fact that the symplectic form ω is proportional to the Berry curvature [26], we obtain (with the conventions for ω adopted above) the expression (77), where S is any surface with boundary ρ•h - we choose as S the surface swept out by the geodesics h_{t′}, 0 ≤ t′ ≤ t.
In the sketch on the right, the black curve denotes ρ, while the green curve is h - we take the latter parametrized by an affine parameter σ, 0 ≤ σ ≤ 1, for all t. Assuming that the various geodesics h_t for distinct values of t only intersect at ρ_0, we can use the coordinates (t, σ), 0 ≤ t ≤ τ, 0 ≤ σ ≤ 1, to label the point corresponding to σ on h_t - the surface S, which is the hatched area in the figure, corresponds to the range 0 ≤ t′ ≤ t, 0 ≤ σ ≤ 1, and its boundary is ∂S = ρ − h. Define the tangent vectors T ≡ ∂_t, Σ ≡ ∂_σ. Note that parametrizing h_t by the affine parameter σ implies that all the points on ρ have σ = 1, so that the vector field T is tangent to ρ. Also, note that the surface S_{t+dt} can be obtained by flowing the points of S_t along the integral curves of T, as assumed in (76).
Since ω is closed, Cartan's formula, L_T ω = d(i_T ω) + i_T (dω) = d(i_T ω) (where i_T denotes the contraction with T), holds, giving (78), where we used Stokes' theorem for the second equality and the fact that the contraction vanishes at the curve ρ, since the corresponding line element is along T by construction. Note that the integration in the above formula is only along the geodesic h_t - the curve ρ affects the result via the vector field T, which depends on it. By parametrizing h_t with the coordinates (t, σ), 0 ≤ σ ≤ 1, we obtain (79) and, by computing the time derivative k times, (80). As shown below, the integral on the r.h.s. of the above equation can be computed exactly. Given a point ρ_t, consider the state |ψ_t⟩ such that ρ_t = |ψ_t⟩⟨ψ_t| and ⟨ψ_0|ψ_t⟩ = cos θ_t ≥ 0, where θ_t is the distance between ρ_0 and ρ_t. Then the geodesic h_t can be parametrized by σ as in (82), with |χ_t⟩ being the point of h_t orthogonal to |ψ_0⟩. Defining ρ(t, σ) = h_t(σ), we may write (84), with U(t, σ) unitary. Given ρ_0 and the point ρ_t, (84) does not determine U uniquely. We fix this ambiguity by choosing U, for fixed t, to be the one-parameter subgroup (85). It is easily seen that (86) holds, with |χ_t⟩ defined in (82). Note that U evolves |ψ_0⟩ along a geodesic (for fixed t and varying σ). From (84) we get (87), where a prime denotes the partial derivative ∂_σ, and (88). Substitution in (79) gives (89), where (90). We now calculate Ŵ explicitly. We start with the expression (91) for U, which is easily derived by noting that (−iW)³ = −iW, implying that the eigenvalues of −iW are 0, ±1, and that U can be written as a quadratic polynomial in −iW, with coefficients determined by substitution of the above eigenvalues. A straightforward calculation gives a somewhat lengthy expression for the relevant commutator (see (A21) of appendix A) - in projecting this result onto ρ_0, only terms proportional to ρ_0 contribute, so that (92), where we defined ξ_t ≡ ⟨χ_t|ψ̇_t⟩. Finally, from the last equality of (89), we get (93). We now cast (93) in terms of the symplectic structure. To this end we find that (94) holds, which, projected onto ρ_0, yields a relation linear in ξ_t. Solving this for ξ_t and substituting in the r.h.s.
of (93) gives (95), where we used the fact that the quantity involved can be regarded as an element of T_{ρ_0}ℙ (as can be verified by noting that its normal part is zero) to obtain the second line, and defined η_t as in (96). Higher-order derivatives of the geometric phase are then given by (97). We conclude this section by noting that η_t, regarded as an element of T_{ρ_0}ℙ, has a precise geometrical interpretation; a short calculation reveals that the geodesic exponential map of −η_t is ρ_t, by construction. Indeed, as noticed previously, for fixed t and varying 0 ≤ σ ≤ 1, U traces the geodesic from ρ_0 to ρ_t, and its tangent vector at σ = 0 is −η_t, so −η_t is the inverse of the exponential map along the curve. If we assume that η_t has an expansion of the form (99), where η̃^{(k)} is a tangent vector in T_{ρ_0}ℙ, and note that ω_{ρ_0}(X, Y) = −ω_{ρ_0}(Y, X), we obtain, e.g., 6Φ^{(5)}_0 = 4ω(η̃^{(4)}, η̃^{(0)}) + 5ω(η̃^{(3)}, η̃^{(1)}) − … The relation between the η̃^{(k)} and the covariant derivatives ∇_v^{(k)}(v) is as in (101), and so on for higher values of k. Finally, note that the tangent vectors η̃^{(k)}, k ≥ 1, are trivially zero for geodesics, making it clear that all derivatives of the geometric phase (and, hence, the phase itself) are zero in this case.

V. THE BRACHISTOPHASE
We study Schrödinger curves in ℙ that accumulate the maximum possible geometric phase for a given evolution time τ.

A. The hamiltonian of maximal acceleration
As a warmup, we consider the following problem: find the time-independent hamiltonian that, when used in Schrödinger's equation, maximizes the initial acceleration of a given state ρ_0. From (39) we conclude that we need to maximize a function f of H and h ≡ Tr(Hρ_0), where f is viewed as a function from u(n + 1), where H lives, to the nonnegative reals. Since f(λH) = λ⁴ f(H), we need to fix the norm of H to, e.g., unity, to get a well-posed problem. Also, any component of H along the unit matrix does not contribute to the dynamics of ρ, so the solution to our problem should have zero such component - we arrive then at the corresponding two constraints. Both f and the constraints are invariant under the simultaneous unitary transformation of ρ_0 and H by V ∈ U(n + 1). Since the above action of U(n + 1) on ℙ is transitive, we can solve the problem for any conveniently chosen state ρ_0, and then transform the solution as above to solve it for any other state. We choose then as ρ_0 the coherent state along ẑ, and write accordingly the hamiltonian in a block form, with α ∈ ℝ, β ∈ ℂⁿ, and C ∈ u(n). The stability subgroup U_0 of the above ρ_0 consists of block-diagonal matrices built from V ∈ U(n). Under a transformation by such a matrix, ρ_0 remains invariant while H transforms to H′, and the solution space for H, for a given ρ_0, is the entire orbit U_0 ⊳ H_0 of a particular solution H_0 under U_0, together with the orbit of −H_0, since the latter hamiltonian clearly produces the same (modulus of) acceleration.
Using the above form for H, we find (117), where b² ≡ β†β, with b ≥ 0, which leads to (118), while the constraints, expressed in terms of C̃, assume the form (119), (120). We may use a U_0 transformation (as in (115)) to bring C (and, hence, C̃) into diagonal form, C̃ = diag(c_1, …, c_n). To maximize the modulus of C̃β, we need to align β along the eigenvector corresponding to the maximal (in the absolute sense) C̃-eigenvalue, and then make |β| (which gives the modulus of the resulting vector) the maximum possible. We may assume, without loss of generality, that the maximal, in the absolute sense, eigenvalue of C̃ is c_1. The parameter space of the maximization problem is ℝ, where α ranges, times u(n), where C lives, times ℂⁿ, where β lives, modulo the constraints. Using Lagrange multipliers (λ_1, λ_2) to incorporate the constraints, we get (123). Computing the derivatives of this function with respect to every variable (fixing the other variables), we obtain (124)-(129), where we take into account that, for fixed α and β, C̃ has to lie on the Tr C̃ = −(n + 1)α hyperplane, satisfy additionally the alignment condition, and also have its first eigenvector along β. Substituting Tr C̃ from (129) in (126) we find that λ_2 = 0. Now, substituting λ_2 = 0 in (124), we obtain the corresponding relation. If λ_1 = 0, by (125) we get that β = 0, and then the acceleration vanishes (it is the minimum one), hence we assume that λ_1 ≠ 0 and c_k = −α for k = 2, …, n. Now, from (129), we have c_1 − (n − 1)α + (n + 1)α = c_1 + 2α = 0, thus c_1 = −2α. Substituting in (125) and then in (127), we find that either β = 0 or −1 + 4λ_1² = 0, i.e., λ_1 = ±1/2; discarding the first possibility (which again gives the minimum), we get, accordingly, the corresponding value of α. Finally, from (128), we obtain the remaining parameters. Thus, the general form of H that maximizes the initial acceleration of ρ_0 is given by (134). Example 1. Maximum acceleration for a spin-1/2 state. For s = 1/2, the maximum corresponds, in physical terms, to a magnetic field at an angle of 45 (or 135) degrees w.r.t. the ẑ-axis. The stability subgroup action rotates the direction of the magnetic field around the ẑ-axis. □
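Example 1 can be cross-checked by brute force (our sketch, with hypothetical names): scan the field angle θ in the x-z plane and compute the modulus of the initial covariant acceleration of the coherent state along ẑ.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # spin-1/2 coherent state along z

comm = lambda A, B: A @ B - B @ A
G = lambda A, B: 0.5 * np.trace(A @ B).real

def accel_modulus(theta):
    H = np.cos(theta) * sz + np.sin(theta) * sx    # unit-norm field in the x-z plane
    rho_dd = -comm(H, comm(H, rho0))               # ambient acceleration
    a = comm(rho0, comm(rho0, rho_dd))             # covariant (tangential) part
    return np.sqrt(G(a, a))

thetas = np.linspace(0.0, np.pi / 2, 901)
best = thetas[int(np.argmax([accel_modulus(th) for th in thetas]))]
```

A short computation shows |a| = |sin 2θ| for this family, so the scan peaks at θ = π/4, in agreement with the 45-degree result above.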

Statement of the brachistophase problem
We now consider the following problem: given an initial state ρ_0, find a time-independent hamiltonian H such that, after a fixed time τ of evolution generated by H, the geometric phase accumulated by the state is maximal. Since the phase is only defined modulo 2π, we assume τ is sufficiently small for the phase to remain less than 2π at all times. Define the function Φ_{H,ρ_0}(τ) to be the geometric phase accumulated by the initial state ρ_0 when it is evolved for time τ by the hamiltonian H. A Taylor expansion of Φ_{H,ρ_0} around τ = 0 gives (136), where we used the fact that Φ and its first two derivatives at τ = 0 vanish, while its higher-order derivatives can be computed with the help of (97).

Truncation up to order τ³
Truncating the expansion to only include the τ³ term, we need to maximize the third derivative of Φ at τ = 0. Note that (58) gives directly the third time derivative of the phase so, proceeding as before, we find (to order τ³) the quantity to be maximized, where, in the last step, β has been assumed aligned as before. Using the Lagrange multiplier method as before, we obtain the new stationarity conditions. Calculating the partial derivatives with respect to every variable, we note that (124), (126), (128) and (129) are still valid, thus λ_2 = 0 still holds, and additionally we have (139); from (124) and (138) we conclude that c_k = −α, for k = 2, …, n, and from (129) we have c_1 = −2α. Now, using (138), we obtain the corresponding relation. Substituting in (139) and using the fact that λ_1 and β are not 0, we have λ_1 = 1/6; finally, substituting in (128) fixes the remaining parameters. Therefore, the general form of H, solution to the brachistophase problem, is given by (143).
FIG. 2. Geometric phase corresponding to the numerically determined optimal hamiltonian (dotted blue curve) and the third-order approximation (continuous red curve).
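For spin-1/2, the truncated problem can be explored with a numerical scan (our own cross-check, not the paper's closed-form solution): maximize the modulus of the accumulated (discretized) geometric phase over the field angle for a small τ. For this one-parameter family the cubic-order coefficient is proportional to sin²θ cos θ, whose maximum sits at arctan √2 ≈ 54.7°, and the scan lands there.

```python
import numpy as np

def geometric_phase(states):
    total = np.angle(np.vdot(states[0], states[-1]))
    local = sum(np.angle(np.vdot(states[k], states[k + 1])) for k in range(len(states) - 1))
    return total - local

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)    # coherent state along z
I2 = np.eye(2)
tau, steps = 0.1, 300                     # "small" evolution time (our choice)

def phase_mod(theta):
    H = np.cos(theta) * sz + np.sin(theta) * sx   # unit-norm field at angle theta
    # H^2 = 1, so the propagator is cos(s) - i sin(s) H, exactly
    states = [(np.cos(s) * I2 - 1j * np.sin(s) * H) @ psi0
              for s in np.linspace(0.0, tau, steps + 1)]
    return abs(geometric_phase(states))

thetas = np.linspace(0.05, np.pi / 2 - 0.05, 181)
best = thetas[int(np.argmax([phase_mod(th) for th in thetas]))]
```

Note that the optimal angle differs from the 45 degrees of the maximal-acceleration problem, illustrating that the two optimization problems have different solutions.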

Evolution of GHZ and tetrahedral states
Using (112) we find the optimal hamiltonian for the spin-3/2 GHZ and the spin-2 tetrahedral states; for the former, the state components are proportional to (1, 0, 0, −1). The evolution of the above quantum states is given by (150), with plots appearing in Figures 3 and 4, respectively.
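The Majorana constellation mentioned here can be computed as the root set of the Majorana polynomial; sign conventions vary, so the snippet below (ours) uses one common choice. For the spin-3/2 GHZ components ∝ (1, 0, 0, −1), the three stars come out on the equator (|z| = 1 under stereographic projection), 120° apart, consistent with the triangular constellations of Figure 3.

```python
import numpy as np
from math import comb

def majorana_roots(psi):
    """Roots of the Majorana polynomial p(z) = sum_k (-1)^k sqrt(C(n, k)) psi_k z^(n-k)
    (one common sign convention; others differ by a rigid rotation of the constellation)."""
    n = len(psi) - 1
    coeffs = [(-1) ** k * np.sqrt(comb(n, k)) * psi[k] for k in range(n + 1)]
    return np.roots(coeffs)

ghz = np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2)   # spin-3/2 GHZ components
stars = majorana_roots(ghz)   # |z| = 1 corresponds to the equator
```

The three roots of the resulting cubic have unit modulus and are equally spaced in angle: an equilateral triangle on the equator of the Majorana sphere.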

VI. SUMMARY AND CONCLUDING REMARKS
We have studied the relation between the geometric phase and the covariant derivatives of a smooth curve in quantum state space. We found that the various derivatives of the geometric phase are proportional to the symplectic areas of the parallelograms generated by various pairs of covariant derivatives; e.g., the first nonvanishing derivative of the phase (the third-order one) is exactly equal to the symplectic area of the parallelogram generated by the velocity and the acceleration of the curve (see (57)). When the curve in question corresponds to evolution generated by a time-independent hamiltonian, the time derivatives of the phase can be related to the expectation values of powers of the hamiltonian (see, e.g., (58)). A general formula for the various time derivatives of the phase is given in (97). It is worth emphasizing at this point that the geometric phase accumulated by a curve is not additive under curve concatenation: e.g., if a curve γ_1, going from point A to point B, is glued to a curve γ_2, going from B to C, the geometric phase for the resulting curve, going from A to C, is not the sum of the phases for the γ_i. This implies that the phase derivatives mentioned above depend on the starting point, t = 0, of the curve - in our analysis said derivatives are calculated exactly at the starting point.
As an application of our geometric analysis, we discussed two maximization problems: given an initial state, find the (appropriately normalized) hamiltonian that maximizes i) the modulus of its initial acceleration and ii) (the modulus of) the geometric phase accumulated after a fixed time τ. Both problems were solved with the initial state being a coherent state along ẑ (see (134), (143)). For both problems, the solution for the maximizing hamiltonian is not unique - given a particular solution, one obtains more solutions by acting on it with the stability subgroup of the initial state, while the time evolution of the state is independent of the particular optimal hamiltonian chosen. Starting with a coherent state along ẑ, the time evolution generated by any optimal hamiltonian consists of a single star leaving the north pole and tracing out a circle, the characteristics of which are different for the two problems, and also depend on the spin of the state - a few examples are depicted in Figure 1. The time evolution for other initial states is then easily obtained using the transitive action of the unitary group on the state space - see (149) and Figures 3, 4 for the brachistophase solution for the spin-3/2 GHZ and the tetrahedral state. Note that the optimal hamiltonian for the brachistophase problem depends, in general, on the time τ. Our analytic solution is valid for appropriately defined small times, where the cubic term (in τ) dominates. In this approximation the solution does not depend on τ - including higher-order terms seems like a rather hard problem analytically and should probably be attempted with numerical methods.
There are several open problems that we are currently pursuing as a follow-up to the present paper's considerations. In particular, we would like to elucidate the physical significance of the moduli of the various covariant derivatives of a state, appropriately averaged over the driving hamiltonians. There is empirical evidence that the functions on state space so defined can be used as measures of interesting physical properties, like entanglement (when the spin-s state is considered as a symmetrized state of 2s spin-1/2 subsystems) - this points to possible connections with the "total variance" concept in [29,30].

FIG. 3. Optimal time evolution for a spin-3/2 GHZ state: shown is the Majorana constellation for t = 0, 0.5, 1, 1.5, 2, 3.2 (left-to-right, top-to-bottom). The red points represent the original constellation, the curves in blue describe the trajectory of the stars during the evolution, while the triangles shown help visualize the constellation at the given time t.