An Algebraic Theory of Non-Relativistic Spin

In this paper we present a new, elementary derivation of non-relativistic spin using exclusively real algebraic methods. To do this, we formulate a novel method to decompose the domain of a real endomorphism according to its algebraic properties. We reveal non-commutative multipole tensors as the primary physically meaningful observables of spin, and indicate that spin is fundamentally geometric in nature. In so doing, we demonstrate that neither dynamics nor complex numbers are essential to the fundamental description of spin.


Introduction
The fundamental nature of spin is one of the most important topics in modern physics. It is most commonly viewed as a form of angular momentum intrinsic to particles and fields that aligns with a given axis in discrete amounts. This behaviour is widely considered an example of quantisation. In other settings, fundamental spins are the building blocks of emergent space-time, with deep connections to gravity as a result. However, studying spin in isolation from other areas of physics is challenging. This paper will present an elementary construction of all non-relativistic spins by real algebraic methods without the use of: complex numbers; manifolds; calculus; spinors; explicit matrix representations; quantum mechanical notions such as states or probabilities; or dynamical notions such as angular momentum, energy, or time. In so doing we will reveal a new perspective on its fundamental nature. To begin, let us examine the current mathematical formalism of spin.

Current Mathematical Description of Spin
The usual description of a spin-s system is as a finite-dimensional, irreducible representation ρ^(s) of the real Lie algebra su(2, C) of the Lie group SU(2, C).
While this method succeeds in finding every spin representation, we gain only limited insight into their fundamental nature. To see this, we note the usual interpretation of the ladder operators: they increase or decrease the amount of alignment or anti-alignment the spin angular momentum has with a particular spatial direction. This is a physically meaningful description; however, the ladder operators are only definable in the complexification su(2, C) ⊗_R C, so their behaviour cannot form a foundational physical description of representations of the real su(2, C).
Another difficulty the standard formalism encounters is in revealing all the physically meaningful observables of the theory. To begin with, the ladder operators are not Hermitian, and thus not observable. The two observables the formalism does highlight are, up to isomorphism, the generator ρ^(s)(S_z) and the Casimir operator (1.4). Since in this picture these are sufficient to derive all of the spin representations, it is easy to believe that these are the only relevant ones. However, it is known that there are higher-order observables hidden in the spin matrices, such as the quadrupole and higher-order moments [2]. Since these play no role in the traditional development of the spin theory, it is not clear if their existence is significant.
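As an illustrative aside (our own numerical check, not part of the formalism developed below), the hidden quadrupole observables can be exhibited concretely: with the standard complexified spin-1 matrices, the symmetrised, traceless products of the generators span a five-dimensional space of Hermitian observables beyond the dipole operators.

```python
import numpy as np

# Standard (complexified) spin-1 matrices -- used purely for illustration.
sqrt2 = np.sqrt(2.0)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / sqrt2
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / sqrt2
Sz = np.diag([1.0, 0.0, -1.0])
S = [Sx, Sy, Sz]

# Quadrupole moments: symmetrised, traceless products of the generators.
Q = {}
for a in range(3):
    for b in range(a, 3):
        sym = S[a] @ S[b] + S[b] @ S[a]
        Q[(a, b)] = sym - (np.trace(sym) / 3) * np.eye(3)

# The six Q_ab obey one trace constraint, leaving five independent Hermitian
# observables hidden in products of the spin matrices.
rank = np.linalg.matrix_rank(np.array([q.flatten() for q in Q.values()]))
assert rank == 5
```

These five operators, together with the identity and the three generators, already account for the full 9-dimensional operator space of spin-1.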
The physical significance of the group SU(2, C) is also unclear. Some insight can be gained by recognising that SU(2, C) is the double cover (and in this case also universal cover) of the homogeneous symmetry group of Euclidean three-space, SO(3, R). Unlike SU(2, C), SO(3, R) has a direct physical interpretation as the group of rotations. Furthermore, su(2, C) ≅ so(3, R) as Lie algebras. This connection between spin and geometric symmetry is not just mathematical, it is demonstrably non-trivial: fermionic systems require a 4π rotation to return to their original state.
From these observations, we contend that a physically more meaningful description of spin will be derived by working exclusively with the natural structures associated with the rotation group SO(3, R); thus notions of dynamics shall be avoided. It is worth noting here that the rotations of SO(3, R) are not gradual transformations of a space over time, but atemporal mappings between two states of the space. As such they are adynamical, and their preservation of the Euclidean metric is what imparts their geometric character. Since SO(3, R) is a symmetry of real three-space, we will maintain this close link by avoiding the use of complex numbers.
This presents an immediate challenge, as without the algebraic closure of the complex numbers we have no guarantee of eigenvalues for our operators. This makes the usual root system analysis inaccessible. To overcome this difficulty, we will utilise entirely real algebraic methods, which we will show give a more meaningful description of spin entirely in terms of its physical observables. To proceed in this direction, let us first consider algebraic theories in physics more generally.

Algebraic Theories in Physics
The working definition of an algebraic theory we shall use throughout this work is as follows: an algebraic theory is an algebra over the field F which completely describes the properties of the system of study within its structure. In particular, this means that all of the objects of the algebraic theory have physical significance, and that all of the states of the system can be described in terms of elements of the algebra. The first of these two points will be the focus of this paper, as each deserves careful discussion.
The pursuit of algebraic theories in physics was advocated by Einstein [3] as a means to describe quanta more naturally than a continuum theory. Though rarely used exclusively, algebraic approaches have been behind several seminal results in physics. For example, Dirac's standard bra/ket notation [4] enabled him to construct quantum theory almost entirely in terms of operators. Furthermore, his derivation of the Dirac equation from the Klein-Gordon equation necessitated the definition of an algebra between the α_k and β. The emergence of spin in that setting was not a result of relativity, however, as Lévy-Leblond derived the Pauli equation from the Schrödinger equation [5] by similar means. This, again, required the acceptance of an algebraic structure inherent to the system he was describing. As a final historical example, von Neumann defined an idempotent element with respect to the position-momentum algebra in his proof of the Stone-von Neumann theorem [6]. More recently, Hestenes [7], Doran and Lasenby [8], and Hiley and Callaghan [9,10] have used Clifford algebras to study the Schrödinger, Pauli, and Dirac equations, indicating that more extensive algebraic study of quantum theory is possible.
In the case of non-relativistic spin, there already exist theories with strong algebraic influences, such as Racah's spherical tensor operator formalism [11]. However, these remain strongly bound to quantum mechanics and to its Hilbert space formalism, precluding the elementary study of spin that we are advocating. For spin-1/2 and spin-1, though, there exist real algebraic constructions independent of quantum mechanics. These are the Clifford [8] and Kemmer [12,13] algebras respectively, where ρ^(s)(S_a), a ∈ {1, 2, 3}, are the usual matrix generators for spin-s [14].
Together with the Lie bracket and the Casimir element, these identities completely specify the properties of spin-1/2 and spin-1, including the eigenspectrum of the generators ρ^(s)(S_a). The algebras become completely real if we perform the transformation ρ^(s)(S_a) → iρ^(s)(S_a), as then (1.1), (1.2), (1.3), and (1.4) have all real constants. This demonstrates that real algebraic descriptions are possible for spin-1/2 and spin-1. However, it is unclear how to generalise this description to arbitrary spin, or what the physical observables are in the theory.
To make progress, let us consider in the broadest terms what properties the spin representations share. All spin representations represent their generators by finite-dimensional matrices {ρ^(s)(S_a)}, or equivalently by their actions on spinors. The identity I_{2s+1}, and all linear combinations of matrix products of the {I_{2s+1}, ρ^(s)(S_a)}, are also allowable actions on spinors. Finally, (1.3) always holds. Implicit in this account are the following properties: the actions form a finite-dimensional vector space spanned by a subset of {I_{2s+1}, ρ^(s)(S_a), ρ^(s)(S_a) • ρ^(s)(S_b), ...}; there exists an associative, bilinear product between them that respects (1.3); this vector space formed by the actions is closed under this product (by construction and finite-dimensionality [15]). More succinctly, the actions form a real unital associative algebra, where the commutator between two generators yields their Lie bracket. We shall use these observations as a heuristic to guide our arguments.

Initial Setup for Algebraic Spin Analysis
Let us now begin our analysis, starting from the Lie group of rotational symmetry SO(3, R). It is well known that SO(3, R) is generated by a real Lie algebra so(3, R). so(3, R) is a three-dimensional real vector space equipped with an alternating bilinear product satisfying the Jacobi identity, often called a Lie bracket. We will use the terms Lie bracket and Lie product interchangeably in this work. The Lie product of so(3, R) is isomorphic to the cross-product; we adopt this notation to avoid confusion with commutators. In terms of the standard basis {S_a}, the Lie product is S_a × S_b = ε_abc S_c (1.5), with summation over repeated indices. Note immediately that we are describing the real Lie algebra so(3, R) and not its complexification so(3, R) ⊗_R C, which was described in (1.3); to return to the standard basis of so(3, R) ⊗_R C, simply transform S_a → iS_a. All results will be derived using (1.5), so care should be taken to convert when needed.
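As a quick concrete check (our own illustration, not part of the derivation), the real antisymmetric matrices (S_a)_{bc} = −ε_{abc} realise so(3, R), and their commutators reproduce the cross-product Lie bracket.

```python
import numpy as np

# Levi-Civita symbol eps_{abc}.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# Real antisymmetric generators of so(3, R): (S_a)_{bc} = -eps_{abc}.
S = [-eps[a] for a in range(3)]

# The matrix commutator realises the cross-product Lie bracket.
for a in range(3):
    for b in range(3):
        comm = S[a] @ S[b] - S[b] @ S[a]
        cross = sum(eps[a, b, c] * S[c] for c in range(3))
        assert np.allclose(comm, cross)
```

No complex numbers appear anywhere in this realisation, in keeping with the programme of the paper.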
Since we wish to describe spin with a unital associative algebra, let us first consider the most general unital associative algebra on the elements of so(3, R). This is the tensor algebra [16] of so(3, R), T(so(3, R)) = ⊕_{k=0}^{∞} so(3, R)^⊗k (1.7), where ⊗ is associative, bilinear, and has identity element 1. The elements of this algebra are called "tensors", and are all R-linear combinations of "k-adic" tensors of the form v_1 ⊗ v_2 ⊗ ... ⊗ v_k, v_j ∈ so(3, R). We define the "tensor order" of a k-adic to be k, and extend to arbitrary linear combinations by taking the largest tensor order amongst the terms. T(so(3, R)) possesses most of the properties common to spin representations, except finite-dimensionality and the encoding of the identity (1.5) within its commutator. It is at least clear how we may implement either of these two properties individually. We may obtain a finite-dimensional algebra from T(so(3, R)) by quotienting [16] out all tensors above a certain tensor order k, T(so(3, R))/I_{so(3, R)^⊗(k+1)} (1.8), where I_{so(3, R)^⊗(k+1)} is the ideal generated [16] by the order-(k + 1) tensors.
On the other hand, we may impose the identity (1.5) by constructing the universal enveloping algebra [17] of so(3, R), U(so(3, R)) = T(so(3, R))/I(S_a ⊗ S_b − S_b ⊗ S_a − S_a × S_b), where I(S_a ⊗ S_b − S_b ⊗ S_a − S_a × S_b) is the ideal generated by elements of the form in its argument. This embeds the Lie product (1.5) into the commutator of the algebra. By construction, U(so(3, R)) is the most general associative algebra of the elements of so(3, R) with (1.5) embedded in this way, and thus all other algebras sharing these properties must derive from it. Abusing notation slightly, we denote the product on U(so(3, R)) by ⊗ as well; which ⊗ is intended will be clear from context. Like T(so(3, R)), an arbitrary element of U(so(3, R)) can be written as a linear combination of k-adic tensors. However, we derive mostly trivial algebras if we try to implement both finite-dimensionality and the identity (1.5) by performing both quotients; more precisely, this happens when the quotiented tensors are of order 2 or greater. This is because in U(so(3, R)) the summands of (1.7) are no longer orthogonal, as can be seen from the relation (1.10) holding in U(so(3, R)). Informally, the quotient fails because the ideal we construct in (1.8) does not respect the structure the Lie product identity gives the algebra.
The tools and procedures required to derive finite-dimensional algebras from U(so(3, R)) by quotient will be the focus of the remainder of this paper. In section 2, we will overcome the lack of eigenvalues by constructing a novel mathematical framework to decompose vector spaces according to the algebraic properties of real operators. In section 3, we will apply this formalism to U(so(3, R)) to re-express it as a direct sum of its physically meaningful components, which respect the structure imparted by the Lie product. Finally, in section 4 we will demonstrate how quotienting all but a finite number of these components leads to purely algebraic formulations of all spins, and necessarily a new physical description of spin.

Real Operator Formalism
We will now construct a novel, basis-independent, algebraic formalism to decompose a vector space according to the properties of a real operator on that space. Let V be a vector space over F and A ∈ End(V). The minimal polynomial [15] of A is a polynomial m(x) of least degree with coefficients in F such that m(A) = 0_V on V, where 0_V is the zero map on V. It is unique up to scalar multiple and always exists when dim(V) ∈ N. Let us factorise the unique monic minimal polynomial m(x) into a product of non-zero powers of irreducible polynomials over F, as in (2.2), where gcd(f_{j1}(x), f_{j2}(x)) = 1 for all j1 ≠ j2. As we are working with a monic polynomial, none of these factors are constant. Let us choose one value of j = k and write m(x) = p_k(x) q_k(x), where p_k(x) = f_k(x)^{d_k} and q_k(x) is the product of the remaining factors (2.3c). By Bézout's Identity [18], there exist polynomials a_k, b_k such that a_k(x) p_k(x) + b_k(x) q_k(x) = 1. For notational convenience let us define I_k := (a_k p_k)(A) and Π_k := (b_k q_k)(A), with • denoting composition (2.6b). By equations (2.1) and (2.3a) we see that I_k and Π_k are orthogonal: I_k • Π_k = Π_k • I_k = 0_V. Additionally, equations (2.1) and (2.7) imply idempotency. Thus, we have performed a partial orthogonal decomposition V = Im(I_k) ⊕ Im(Π_k), where the summands are the images of I_k and Π_k respectively. It can be proven that q_k(x) is the minimal polynomial of A on Im(I_k); thus we may iterate this bipartite decomposition on Im(I_k), until the final q_l(x) has only a single multiplicand. This process is guaranteed to terminate since |m| ∈ Z⁺. Thus, from the minimal polynomial of A we have arrived at a basis-independent orthogonal decomposition of V through the projectors Π_j, V = ⊕_j Im(Π_j), with Π_j defined as in equation (2.6b). This has been achieved without the use of complex numbers, or reference to vectors in V; it has been derived entirely from the algebraic properties of A.
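The bipartite decomposition just described can be made concrete with a small computer algebra sketch (an illustration under our own choice of operator, not part of the derivation): for a real operator combining a planar rotation with an axial scaling, the Bézout cofactors of the coprime factors of the minimal polynomial yield the orthogonal idempotents directly.

```python
import sympy as sp

x = sp.symbols('x')

# Illustrative operator: rotation by pi/3 in the e1-e2 plane, scaling by 2
# along e3 (chosen so both an irreducible quadratic and a linear factor appear).
A = sp.Matrix([[sp.Rational(1, 2), -sp.sqrt(3)/2, 0],
               [sp.sqrt(3)/2, sp.Rational(1, 2), 0],
               [0, 0, 2]])

# Minimal polynomial m(x) = f1(x)*f2(x), coprime irreducible factors over R.
f1 = x**2 - x + 1   # irreducible quadratic from the rotation plane
f2 = x - 2          # linear factor from the scaling axis
assert sp.simplify((A**2 - A + sp.eye(3)) * (A - 2 * sp.eye(3))) == sp.zeros(3, 3)

# Bezout cofactors: a*f1 + b*f2 = 1, guaranteed since gcd(f1, f2) = 1.
a, b, g = sp.gcdex(f1, f2, x)
assert g == 1

def polyval(poly, M):
    """Evaluate a polynomial in x at the matrix M via Horner's scheme."""
    out = sp.zeros(M.rows, M.cols)
    for c in sp.Poly(poly, x).all_coeffs():
        out = out * M + c * sp.eye(M.rows)
    return out

# Orthogonal idempotents: Pi1 projects onto ker f1(A) (the rotation plane),
# Pi2 onto ker f2(A) (the scaling axis), and together they resolve the identity.
Pi1 = polyval(sp.expand(b * f2), A)
Pi2 = polyval(sp.expand(a * f1), A)
assert sp.simplify(Pi1 + Pi2) == sp.eye(3)
assert sp.simplify(Pi1 * Pi2) == sp.zeros(3, 3)
assert sp.simplify(Pi1 * Pi1 - Pi1) == sp.zeros(3, 3)
```

The projectors are built purely from polynomials in A, without eigenvalues or any choice of basis, mirroring the construction above.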
In the case where there is only one multiplicand in equation (2.2) with d_j > 1, we cannot find a resolution of the identity into more than one projector by this method alone. Furthermore, there may be subspaces closed under the action of A within each Im(Π_j) which we cannot differentiate using only the above. These limitations will not hamper our analysis of spin, however, as we will find that every minimal polynomial encountered has d_j = 1 for all factors. Furthermore, our method of decomposition will ensure that all subspaces closed under the action of A are isolated.
It is instructive to relate what has just been developed to existing concepts. A traditional eigenspace is an Im(Π_k) for which the corresponding factor is linear, f_k(x) = x − λ_k. The case where |f_k| > 1 does not occur when considering operators on complex vector spaces, as C is algebraically closed. A simple example of a real operator with such a subspace is a planar rotation by an angle θ: the subspace Im(Π_k) in the plane of rotation satisfies (A² − 2 cos(θ) A + id) • Π_k = 0. If instead of a single operator we have a finite collection of mutually commuting operators {A_n}, then each decomposition of the identity id_V = Σ_{a_n} Π_{a_n}(A_n) we find using the above method may be composed together to give a unique mutual decomposition. Doing this, we cannot guarantee that the image of each combined projector is non-trivial; exactly which projectors have non-trivial image depends on the relationship between the operators in the collection. This is a basis-independent generalisation of simultaneous diagonalisation of operators to the real operator case, where even generalised eigenspaces may fail to exist. If the operators do not all mutually commute, the situation is markedly more complex.
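A numerical sketch of the mutual decomposition (illustrative; the operators and tolerances are our own choices): composing the spectral projectors of two commuting real symmetric operators yields combined projectors, some of which have trivial image.

```python
import numpy as np

# Two commuting real symmetric operators with partially aligned eigenspaces.
A = np.diag([1.0, 1.0, 2.0])
B = np.diag([3.0, 4.0, 4.0])
assert np.allclose(A @ B, B @ A)

def projectors(M):
    """Spectral projectors of a real symmetric M, one per distinct eigenvalue."""
    w, V = np.linalg.eigh(M)
    out = {}
    for lam in np.unique(np.round(w, 9)):
        cols = V[:, np.isclose(w, lam)]
        out[lam] = cols @ cols.T
    return out

PA, PB = projectors(A), projectors(B)

# Compose the two resolutions of the identity; rank 0 means a trivial image.
images = {(a, b): np.linalg.matrix_rank(PA[a] @ PB[b], tol=1e-9)
          for a in PA for b in PB}
assert images[(1.0, 3.0)] == 1   # shared e1 direction
assert images[(2.0, 3.0)] == 0   # trivial combined projector
```

Which combined projectors survive depends entirely on how the two eigenspace lattices intersect, exactly as described above.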
Of particular interest to our present problem is the case where all |f_k| = 1 and d_k = 1. In this case p_k(x) = f_k(x) is linear, and |b_k| = 0, i.e. b_k is constant, so Π_k is proportional to q_k(A). The method of algebraic orthogonal decomposition of a vector space developed here, in the particular case of |f_k| = 1 and d_k = 1, will be used extensively in our analysis.

Actions on U(so(3, R))
To begin the decomposition of U(so(3, R)) we must identify some suitable operators to use. The first natural Lie algebra action [17] defined on U(so(3, R)) is left multiplication, L(v)(A) = v ⊗ A. Using left multiplication we can describe a recursive relationship between the k-adic and (k − 1)-adic tensors. We may use this relationship to aid our decomposition of an arbitrary A ∈ U(so(3, R)) in the following way. Since our method of decomposition is linear, we may decompose A by decomposing its constituent k-adic tensors. The above recursive relationship shows that we may decompose a k-adic tensor by considering the action of left multiplication on the decomposition of a (k − 1)-adic tensor. Therefore, to decompose U(so(3, R)) it is sufficient to start with a scalar and decompose the result of each left multiplication L(v) by elements v ∈ so(3, R). This allows us to build up our decomposition order-by-order. Unfortunately, left multiplication itself cannot also be used to decompose a finite-dimensional subspace of U(so(3, R)). This is because the tensor order of L(v)(A), ∀v ∈ so(3, R), A ∈ U(so(3, R)), is one greater than the tensor order of A; thus we have left the finite-dimensional subspace we were trying to study. If U(so(3, R)) were finite-dimensional this would not be an issue, as repeated left multiplications would eventually become linearly dependent. However, as U(so(3, R)) is infinite-dimensional, no minimal polynomial for left multiplication by any non-zero element can be expected to exist.
To overcome this difficulty, we must utilise the adjoint action [1], the second natural Lie algebra action defined on U(so(3, R)): ad(B)(A) = B ⊗ A − A ⊗ B. The tensor order of ad(B)(A), ∀A, B ∈ U(so(3, R)), is the same as the tensor order of A, and thus we remain confined to the finite-dimensional subspace of study, provided that subspace is closed under the action of ad(B), which is the case for the k-adics. Specifically, we will use the adjoint action of the centre of U(so(3, R)), Z(U(so(3, R))). The centre Z(U(so(3, R))) can be generated by sums and products of scalars R and the Casimir element S² (3.5). The action of ad(α), ∀α ∈ R, is uniform on all of U(so(3, R)). Thus, we need only consider the action of ad(S²). To this end, we introduce the notation E := ad(S²) (3.6a) and ε(k) := E + k(k + 1) id (3.6b). The factor of k(k + 1) in (3.6b) shall be explained shortly.

Relationship Between L(v) and E
As previously outlined, the key to decomposing U(so(3, R)) is to understand how the action of L(v) for v ∈ so(3, R) interacts with E, as this enables us to decompose a k-adic tensor order-by-order starting from a scalar. This relationship is derived in Appendix C as (3.7), where [•, •] is the commutator, and every implied product is composition. This identity holds on the whole of U(so(3, R)).
To understand the consequences of (3.7), it is instructive to consider its action on a subspace Im(Π_{E+t}) for which (E + t id) • Π_{E+t} = 0. Doing this, we find that a polynomial in E of the form (3.8) annihilates L(v) • Π_{E+t}. It can be proven that the two roots with radicals are natural numbers iff t = m(m + 1), m ∈ N; in that case the roots are (m − 1)m, m(m + 1), and (m + 1)(m + 2) (3.9). For m ≠ 0 all three roots are consecutive naturals of the form m(m + 1), and when m = 0 the roots are 0, 0, and 2.
To see the significance of this observation, let us begin our order-by-order decomposition by noting that E(α) = 0 for 0-adics α ∈ R, and E(v) = −2v for 1-adics v ∈ so(3, R). Therefore, on R and so(3, R) the action of E has minimal polynomials x (3.11a) and x + 2 (3.11b) respectively. Since these contain a power of a single irreducible polynomial, no further decomposition can be made here. Noting that 0 = 0(0 + 1) and 2 = 1(1 + 1), we see that our iterative process of decomposition is initialised by subspaces annihilated by E + m(m + 1) id for some m ∈ N. Thus, by (3.9), all non-trivial orthogonal subspaces in our decomposition are annihilated by compositions of ε(n) = E + n(n + 1) id for various n ∈ N. This accounts for the constant in (3.6b).

Scheme of Decomposition
Clearly from (3.11a) and (3.11b), minimality of an E polynomial of the form (3.9) will depend on the subspace we are acting on. Regardless, we may still use it to gain some insight into the gross structure of U(so(3, R)): starting from R, let us alternately apply L(v_j), then decompose the resulting subspaces according to (3.9). It is useful to capture each subspace derived this way as the image of a map from the order-k tensors, so(3, R)^⊗k → U(so(3, R)), where k is the total number of L(v_j) that have been applied. We note that for this to make sense, the domains of these maps are so(3, R)^⊗k ⊂ T(so(3, R)).
On such a subspace Im(Π_{ε(k)}), applying first L(v) and then decomposing with E by the methods of section 2, we find ∀B ∈ so(3, R)^⊗m the decomposition (3.12a), where we have introduced the "step-down/step-level/step-up by v ∈ so(3, R)" operators, ∀k ∈ N, which by construction satisfy (3.12b) and (3.12c). If (3.9) is not minimal on the subspace, then some of these steps will be into the trivial subspace. This will prove to be the case for L^−(v) • Π_{ε(0)}.
Using the step operators, our decomposition scheme for U(so(3, R)) is equivalent to applying to 1 all sequences of steps by v_j ∈ so(3, R) that yield non-trivial subspaces. We are guaranteed to decompose so(3, R)^⊗k fully in terms of the non-trivial subspaces reached after k steps due to (3.12a) holding at every step. This process is summarised graphically in Figure 1.
Figure 1: Diagrammatic representation of the decomposition of U(so(3, R)), where ε(j) • V_j = 0. The vertical yellow bands contain subspaces of a given tensor order. The horizontal yellow bands contain subspaces annihilated by a given polynomial of E. The green vertical band contains the closures of the unions of all subspaces in each yellow band, expressed as modules M^(k) of multipoles Im(M^(k)) over the centre Z(U(so(3, R))). These objects will be defined in section 3.4.

Decomposition of U(so(3, R)) into Multipoles
From (3.9), (3.11a), and (3.11b), we see ∀k ∈ N that amongst all subspaces reached in k steps starting from 1, there is exactly one subspace annihilated by ε(k). This subspace was reached by stepping up from the unique subspace reached in k − 1 steps that is annihilated by ε(k − 1). Furthermore, there are no subspaces annihilated by ε(k) reachable in fewer than k steps.
This implies the existence of a family of maps M^(k): so(3, R)^⊗k → U(so(3, R)), whose images are the subspaces of least tensor order such that ε(k) • M^(k) = 0. It can be proven that each of these maps satisfies the properties (3.16a)-(3.16c), with (3.16c) recognised as vacuously true for M^(0) and M^(1). Given these properties, we find that (3.9) is minimal on Im(L(v) • M^(k)) for k ∈ Z⁺, implying that all M^(k) have non-trivial image. We also find that (3.11b) is minimal on Im(L(v) • M^(0)), implying (3.17a). Examining Im(M^(k)) more closely, we find that it bears a striking, but not exact, resemblance to the Cartesian 2^k-pole tensor. For this reason we term the maps M^(k) "multipoles". As shown in Appendix A, the images of the multipoles agree with the forms implied by [19], though their algebraic properties and interrelationships are much clearer from this method.
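The dimensions of the multipole subspaces can be checked in a representation (a consistency check of ours, using the complexified spin-1 matrices rather than the abstract U(so(3, R))): the adjoint Casimir Σ_a ad(S_a)² decomposes the 9-dimensional matrix space into eigenspaces of dimension 2k + 1 with eigenvalue k(k + 1), k ∈ {0, 1, 2}.

```python
import numpy as np
from collections import Counter

# Complexified spin-1 matrices (for a concrete, finite-dimensional check).
sqrt2 = np.sqrt(2.0)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / sqrt2
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / sqrt2
Sz = np.diag([1.0, 0.0, -1.0])

def ad(S):
    """ad(S) as a 9x9 operator on the matrix space, row-major vectorisation."""
    n = S.shape[0]
    return np.kron(S, np.eye(n)) - np.kron(np.eye(n), S.T)

# Adjoint Casimir: its eigenspaces are the multipole subspaces.
E = sum(ad(S) @ ad(S) for S in (Sx, Sy, Sz))

counts = Counter(np.round(np.linalg.eigvals(E).real, 6))
# Eigenvalue k(k+1) with multiplicity 2k+1: monopole, dipole, quadrupole.
assert counts[0.0] == 1 and counts[2.0] == 3 and counts[6.0] == 5
```

The multiplicities 1 + 3 + 5 = 9 exhaust the matrix space, illustrating the claim that the multipoles account for all of the subspaces reached in the decomposition.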
The significance of the multipoles to our decomposition can be seen by considering the images of (3.12b) and (3.12c) on M^(k), as derived in Appendix D. On M^(0), L↓(v) is trivial by definition and L−(v) is trivial by (3.17a). On M^(1), the images take a special form, and for M^(k) with k > 1 they are given by (3.19a) and (3.19b); thus the step-down and step-level images on M^(k) for k > 0 can be written entirely in terms of M^(k−1) and M^(k) respectively. Since the step-up image lies within the next multipole by definition, (3.12a) shows that L(v) • M^(k)(B) can be written entirely in terms of multipoles ∀k ∈ N.
Since in our decomposition we apply all combinations of steps starting from M^(0), this means that every subspace we reach can be written entirely in terms of the multipoles. More precisely, we conclude that every subspace of U(so(3, R)) is a linear combination of products between the multipoles and central elements of Z(U(so(3, R))). This allows us to extend our proofs of minimality to all subspaces reached that are annihilated by the same ε(k) as M^(k). In particular, this means that (3.17a) and (3.17b) hold on all subspaces annihilated by ε(0).
This also allows us to linearly extend the step operators to arbitrary subspaces of U(so(3, R)). Being able to write all subspaces in terms of the multipoles is a manifestation of Weyl's theorem on complete reducibility [20], since the multipoles are all simple as U(so(3, R))-modules under the adjoint action.
With that, we have completed our decomposition of U(so(3, R)). In the process, we have identified a countable infinity of multipoles and given a recursive method for their construction. We have seen that all subspaces of U(so(3, R)) are isomorphic to direct sums of these multipoles, up to multiples of central elements z ∈ Z(U(so(3, R))). Our decomposition may thus be summarised by (3.20), where M^(j) = { Σ_p z_p ⊗ m_p | z_p ∈ Z(U(so(3, R))), m_p ∈ Im(M^(j)) }.

Spin Algebras
Unlike the tensor order decomposition (1.7) of U(so(3, R)), each summand in the multipole decomposition (3.20) is orthogonal. Thus, we may derive a family of algebras A^(k/2) by quotienting out all multipole modules above M^(k). This process leaves only a finite number of basis elements, since (3.14) ensures that every multipole above M^(k) is removed by the quotient. What is not obvious is that this process will yield a real algebra, since each summand in (3.20) is a module over Z(U(so(3, R))).
Appendix E proves that our quotient necessarily entails that the left action of the Casimir element S² becomes the action of a real scalar in the new algebra. Reindexing k = 2s, this action is L(S²) = L(−s(s + 1)), exactly what is expected for the spin-s representation.
In fact, this connection is total.
It can be proven that dim Im(M^(k)) = 2k + 1, and thus A^(k/2) has dimension Σ_{j=0}^{k} (2j + 1) = (k + 1)² = (2s + 1)². This is exactly the complex dimension of the operators in the usual complexified spin-s representation. Furthermore, Im(M^(k+1)) = {0} implies the relations (4.3) if k is odd and (4.4) if k is even, which yield the complete eigenspectrum expected for our basis. Due to this correspondence we will name the algebra A^(s) the "spin-s algebra". More concretely, the spin-s representation is simply an associative algebra representation of A^(s), and derives its bulk structure from it.
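The dimension count can be verified numerically in a representation (our own check; the spin value and conventions are illustrative): for spin-3/2, products of the usual spin matrices of length at most 2s = 3 already span the full (2s + 1)² = 16-dimensional matrix space.

```python
import numpy as np
from itertools import product

def spin_matrices(s):
    """Standard spin-s matrices (Hermitian convention), dimension 2s+1."""
    d = int(2 * s + 1)
    m = np.array([s - k for k in range(d)])
    Sz = np.diag(m).astype(complex)
    Sp = np.zeros((d, d))
    for k in range(d - 1):
        Sp[k, k + 1] = np.sqrt(s * (s + 1) - m[k + 1] * (m[k + 1] + 1))
    return (Sp + Sp.T) / 2, (Sp - Sp.T) / 2j, Sz

s = 1.5
gens = spin_matrices(s)
d = int(2 * s + 1)

# All products of generators of length at most 2s = 3, plus the identity.
words = [np.eye(d, dtype=complex)]
for length in range(1, int(2 * s) + 1):
    for w in product(gens, repeat=length):
        M = np.eye(d, dtype=complex)
        for g in w:
            M = M @ g
        words.append(M)

# The span is the full matrix space: dimension (2s + 1)^2 = 16.
rank = np.linalg.matrix_rank(np.array([M.flatten() for M in words]))
assert rank == d * d
```

The count 1 + 3 + 5 + 7 = 16 matches the multipole dimensions dim Im(M^(k)) = 2k + 1 summed over k = 0, ..., 2s.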
In deriving the algebras A^(s), we have established that any spin may be specified entirely by its largest non-zero multipole, and equivalently described by: a finite collection of multipoles {M^(n) | n ∈ {0, ..., 2s}}; their multiplication table; and the implied relation L(S²) = L(−s(s + 1)). Such a multiplication table is given in Appendix B.
However, there are certain aspects of the usual formalism implied but not immediately accessible in the algebraic theory. For example, the non-zero parts of the eigenspectra from (4.3) and (4.4) are pure imaginary; this means projectors onto the corresponding eigenspaces are not constructible within the real A^(s). Similarly, the matrix representations of the odd-n multipoles are anti-Hermitian, while those of the even-n multipoles are Hermitian. This is not a defect of the spin algebra formalism, however; it indicates that the observable multipoles and spin eigenstates are the result of coupling with complex structure from another algebra within a larger physical theory. Therefore, the observability of these objects is an emergent, non-trivial prediction of such a theory.
For example, following standard quantum mechanics, we may construct the algebra tensor product between A^(s), ∀A_j ∈ A^(s), B_j ∈ H^(ℏ), and the associative Heisenberg algebra H^(ℏ) with ℏ ≠ 0, which is formed from the real Heisenberg Lie algebra h_{2n+1} = span({q_j, p_k, r}). We then find we are able to factorise previously irreducible polynomials. From the identities (4.3) and (4.4), this implies that S_a ⊠ r has the usual real eigenspectrum expected for spin operators in quantum physics, and reveals why this eigenspectrum has units of ℏ. Furthermore, S_a ⊠ r has Hermitian matrix representations. Together, these observations reveal that S_a ⊠ r has similar properties to the angular momentum operators, whereas we have seen that S_a ⊠ 1 does not. Therefore, we can say that the character spin has in quantum mechanics, as a form of angular momentum, is an emergent property of the algebra coupling (4.5); it is a non-trivial prediction of quantum mechanics, not an intrinsic property of spin.
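A trivial numerical illustration of the restored real spectrum (ours, written in the Hermitian convention rather than the real algebra): the eigenvalues of ℏ S_z for spin-3/2 are ℏ m with m ∈ {−3/2, −1/2, 1/2, 3/2}.

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J s

# Spin-3/2 S_z in the Hermitian convention.
Sz = np.diag([1.5, 0.5, -0.5, -1.5])

# The coupled operator carries the familiar real spectrum hbar * m.
spectrum = np.sort(np.linalg.eigvalsh(hbar * Sz))
assert np.allclose(spectrum, hbar * np.array([-1.5, -0.5, 0.5, 1.5]))
```

In the real formalism the uncoupled generator is anti-Hermitian, with purely imaginary spectrum; the factor of ℏ and the real eigenvalues only appear after the coupling described above.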
From the above, it also follows that, by their total symmetry, all multipoles formed from S_a ⊠ r also have Hermitian matrix representations. The measurability of these observables in experiment will ultimately depend on the precise form of their electromagnetic couplings; however, one might expect the coupling strength of the 2^k-pole for a particle of charge e and mass m to be of the order required for consistency with the norms of the (S_a ⊠ r)-multipoles and the coupling strength of the spin magnetic dipole moment. Independently of which algebras we might couple A^(s) to, we have established that it captures the essential structure of a spin-s system. By using our real algebraic methods, we have shown that this structure can be derived without the use of dynamics, matrix representations, or complex numbers, amongst other things. Therefore, by only using those structures naturally associated with the geometric symmetry group SO(3, R) of Euclidean three-space, we must conclude that spin is similarly geometric in nature.

Conclusion
In this paper we have constructed a completely algebraic theory of non-relativistic spin from spatial symmetry using only elementary arguments, without the use of quantisation, dynamics, calculus, matrix representations, or complex numbers. To do this we developed a formalism appropriate to the study of real operators, which can readily be applied to other symmetries, such as those arising in field theory, amongst other mathematical and physical contexts. Through this formalism, we have shown that a spin-s system is a finite collection of non-commutative generalisations of Cartesian multipole tensors, completely determined by specifying only the largest non-zero multipole. In working exclusively with structures naturally related to a geometric symmetry group, we have indicated that spin is fundamentally geometric in nature.

B Multipole Multiplication Table
[Table: for each multipole, the image of S_a ⊗ S_b ⊗ ... is expressed as a sum over the symmetrisations S({a, b}), S({a, b, c}), and S({a, b, c, d}) of the free indices.] This may be extended using the images of multipoles in Appendix A, and the results of Appendix D.

C Proof of Left Action Identity
To facilitate this and other proofs we must first discuss some identities. Let us define the right multiplication R(v)(A) := A ⊗ v, where we note this is not a Lie algebra action on U(so(3, R)), since it reverses the order of composition. This allows us to describe the adjoint action of v ∈ so(3, R) as ad(v) = L(v) − R(v). We may now proceed with the proof.

D Derivation of the Images of Multipoles under Step-Level and Step-Down

Here we will prove the results given in (3.19a) and (3.19b).

D.1 Step-Level Image
The step-level by v ∈ so(3, R) of a multipole M^(k) is given by (D.1). Commuting through the ε(•), we find the following.
From (C.13) we see an expression which we combine with the previous equation and (C.3) to find (D.2), from which (3.19b) follows.

D.2 Step-Down Image
The step-down by S_a ∈ so(3, R) of a multipole M^(k) is given by (D.3). To proceed, let us consider how E and E² act on L(S_a) • M^(k). First, we see from (C.5) the action of a single E. We may evaluate its second term with the help of (C.12). For notational convenience, let us denote the resulting terms compactly, so we may write the action of E in closed form. Applying a second E, we may reuse the results so far, defining further shorthand along the way. To progress further we must extract the summed S_d from within M^(k) in (D.4) and (D.7). We do this by using the multipole recursion relation (3.14), and an identity we shall now derive.
For k = 1, the form of L(S_e) • M^(1)(S_e) follows directly. For k > 1, consider the analogous expression; the result is consistent with the k = 1 case. Applying this to B and D in (D.9), we find the identity (3.19a).

D.3 Right Multiplication Images
The results of the previous subsections may be utilised to derive the form of a right multiplication of a multipole. This is essential to expand the multiplication table of Appendix B. We observe this from the definition of ad(v), where v ∈ so(3, R), which implies (E.5), where ∏_{j=n}^{k} denotes composition over the indexed maps. A priori, any combination of these L(4S² + j(j + 2)) could be responsible for annihilating Im(M^(n)). To make progress, let us first consider a non-empty subset I ⊂ {0, ..., k − 1} and suppose that ∏_{j∈I} L(4S² + j(j + 2)) • M^(n) = 0 (E.7) for some n ∈ {0, ..., k}. Since M^(k) may be written as a series of step-ups from M^(n) by (3.14), we may use the fact that the composition in (E.7) commutes with step-ups to find (E.10). We note that since k ∉ I, there is a left multiplication of a non-zero scalar in (E.10). Thus, from (E.5) we find ∏_{j∈I} [−(k − j)(k + j + 2)] M^(k) = 0 (E.11), which implies Im(M^(k)) is trivial. This is in contradiction with our construction of A^(k/2), and thus (E.7) must be impossible ∀n ∈ {0, ..., k}. This means any annihilating action of a composition of factors L(4S² + p(p + 2)) must include the factor with p = k.

where |a_k| + |p_k| < |m| and |b_k| + |q_k| < |m|, and the final equality follows by construction. a_k and b_k may in general be computed by, for example, the extended GCD algorithm. We observe that the polynomial ring F[A] is naturally ring isomorphic to a quotient ring of F[x], since their polynomials differ only by A ↔ x and the identity given by equation (2.1). This implies an identity in F[A]:

Table 2: A partial multiplication table for multipoles, where M_{a1 a2 ... ak} := M^(k)(S_{a1} ⊗ S_{a2} ⊗ ... ⊗ S_{ak}). Repeated indices in the same term are summed over.