Canonical methods in classical and quantum gravity: An invitation to canonical LQG

Loop Quantum Gravity (LQG) is a candidate quantum theory of gravity still under construction. LQG was originally conceived as a background independent canonical quantization of Einstein’s general relativity theory. This contribution provides some physical motivations and an overview of some mathematical tools employed in canonical Loop Quantum Gravity. First, Hamiltonian classical methods are reviewed from a geometric perspective. Canonical Dirac quantization of general gauge systems is sketched next. The Hamiltonian formulation of gravity in geometric ADM and connection-triad variables is then presented, to finally lay down the canonical loop quantization program. The presentation is geared toward advanced undergraduate or graduate students in physics and/or non-specialists curious about LQG.


Why quantum gravity?
Classical general relativity is the most successful theory of gravity, but it is incomplete. As made manifest by the celebrated singularity theorems due to Hawking and Penrose, in regimes of high energy density and strong curvature, geodesic incompleteness and other singularities signal the loss of predictability of the theory. On the other hand, quantum mechanics seems to describe remarkably well the physics down to the 10⁻¹⁹ m scale probed by particle accelerator experiments like the LHC. All three other known fundamental interactions in nature obey the rules of quantum mechanics, so we believe these rules should apply at a fundamental level, even at the Planck scale of 10⁻³⁵ m! In particular, we expect a fundamental description of the gravitational interaction to adhere to these rules and certainly, Einstein's theory does not.
A theory of gravity obeying quantum mechanics is expected to 'resolve' the classical singularities of general relativity (cosmological and black hole) along with other 'puzzles' of black holes (information loss, entropy count, etc.). It is also expected to serve as a natural regulator of ultraviolet divergences in QFT. Such a sought-for theory should do all this while at the same time recovering general relativity in the semi-classical limit.
The need for a quantum theory of gravity is not only conceptual. Even if quantization of gravity will not be directly detectable in the near future (as argued e.g. by F. Dyson), indirect evidence from its imprints on cosmological observations may be accessible in the not so distant future. In any case, one expects to learn a great deal and to get deep insights into the structure of spacetime by constructing a consistent quantum theory of gravity. There are however several conceptual and technical problems one must tackle before claiming victory, and it must be emphasized here that no such theory fulfilling the requirements or expectations above is available yet. There are many (tentative) roads to quantum gravity. Amongst the most notable candidate theories or, more accurately, approaches to finding such a theory are: String Theory, Noncommutative Geometry, Asymptotic Safety, Causal Sets, Causal Dynamical Triangulations (CDT) and Loop Quantum Gravity (LQG). Each path has its own limitations, advantages and perhaps modest but non trivial achievements.

Why Loop Quantum Gravity?
The path to quantum gravity one may take depends heavily on one's own prejudices about what physical (and correspondingly mathematical) principles must guide the construction of the theory and hold true in the quantum regime. This bias is usually tied to physicists' own training and skills. For example, high energy physicists may prefer String Theory or Asymptotic Safety scenarios, mathematically minded physicists may choose Noncommutative Geometry, while relativists will perhaps lean towards background independent CDT or LQG. Whether any of these theories will lead to the 'correct' description of nature is, as we have said, still an open question.
In these notes we will consider LQG [1,2,3]. From the very beginning, the underlying philosophy behind the LQG program has been to take the lessons from Einstein's theory of gravity very seriously. The guiding concept in the construction of LQG is the physical principle of general relativity. This is mathematically encoded in the requirement of diffeomorphism invariance and background independence of the theory. The latter necessarily implies a nonperturbative quantization, as opposed to the perturbative approach of quantum field theory on Minkowski spacetime which has proved so successful in the description of the standard model of particle physics. The universality of gravity makes it different from the other known interactions in nature, and perhaps it should come as no surprise that different methods or descriptions are needed.
There are several alternatives to constructing a LQG theory. These alternatives may be grouped in essentially two approaches:
• Canonical (Dirac) quantization
• Path integral-type quantizations (spin foams)
Historically, a LQG theory was first obtained by applying well tested (Dirac) canonical quantization techniques to Einstein's general relativity theory. This procedure is, in the author's opinion, the more mathematically rigorous and solid way to construct kinematics. Path integral type quantizations were introduced later to include dynamics and explicit covariance in the framework from the outset. These two aspects have certainly proved harder to implement (or make explicit) in the canonical setting.
While the resulting (non separable) Hilbert spaces of LQG may seem 'exotic' compared to the good old Fock spaces used in particle physics, LQG is the result of applying well tested techniques and physical principles with only minimal assumptions. No extra dimensions or symmetries are required (although they may be incorporated in the framework), nor any supposition on non-commutativity. Under some mild assumptions, diffeomorphism invariance and background independence uniquely select the so-called loop representation.
As we will review later, general relativity in Hamiltonian form may be expressed in terms of a so-called connection and a triad. These canonical variables allow for natural smearings that enable one to construct a Poisson subalgebra of observables independent of any background structure or auxiliary metric: the algebra of holonomies and fluxes. The hallmark of loop quantization is to select this particular set of observables and promote its Poisson bracket relations to the quantum theory. Gravity is geometry, and as a candidate quantum theory of gravity, LQG has shed light on several aspects of the quantum geometry of space, including a possible discreteness of space and some kinematical aspects of quantum black holes. Furthermore, the application of loop techniques in the quantization of symmetry-reduced mini or midisuperspace models has allowed the exploration of the implications of quantum dynamics in these reduced settings and, in particular, it gave birth to Loop Quantum Cosmology [4]. These results provide concrete mechanisms for singularity resolution, black hole entropy, and UV regulation for matter fields.
The kinematics of LQG are well understood. The kinematical Hilbert space and basic operators thereon can be rigorously constructed, and there are no hidden infinities. The dynamics of the theory, on the other hand, is not yet under control. Despite proposals for candidate Hamiltonian operators governing dynamics, or propagators in the corresponding manifestly covariant formulation of spin foams, there remain several technical and conceptual problems to be solved. In particular, for the canonical approach there are the quantization ambiguities in the definition of the Hamiltonian operator and, most importantly, the challenge of showing that these operators lead to a theory free of anomalies¹. As we shall review briefly, this is a technical but crucial consistency condition, related to the covariance of the theory and to its semiclassical limit. Finding the latter is perhaps one of the most pressing issues also in the path integral formulation of spin foams. One may take Einstein's general relativity as a starting point for quantization, but a priori none of our known quantization recipes guarantees that the resulting quantum theory will reproduce, in an appropriately defined semiclassical regime, the classical theory one began with.
The problem of extracting dynamical information along with the semiclassical limit and physical predictions that could make LQG a falsifiable theory is compounded by the fact that semiclassical states are expected to be highly excited states, constructed nontrivially from the basic spin-network states of the kinematical Hilbert space. Even in symmetry reduced models, the construction of Hamiltonian operators and the analysis of the dynamical difference equations they imply have proved difficult to deal with.
One possible route to overcome these problems and to arrive at physical implications of the quantum theory is to use effective classical equations [5,6,7], which are in closer contact with classical spacetime notions but nevertheless incorporate corrections from a loop quantization. Indeed, effective equations have successfully been used in Loop Quantum Cosmology. For the simplest solvable systems they have been shown to closely follow quantum evolution not only in semiclassical regimes but even in the high energy density regime where the classical Big Bang singularity is replaced by a 'bounce'. Furthermore, by careful application of these effective techniques, phenomenology for the early universe that may be tested in future observations can be obtained [8]. We will however not delve into the canonical effective description here. For a brief introduction from a geometric perspective see e.g. [9].
The purpose of these notes is to make clearer some of the aspects and problems just exposed. We will not delve into the details of LQG, but rather focus on very basic notions and tools which are prerequisites to understanding these problems. Hopefully this will give the reader a general perspective on the subject and its methods, and serve as an incentive to get him or her more interested in this challenging but fascinating field.
For a more detailed but still concise introduction to LQG, the reader is referred to [1], and to [2,3] for more ample coverage.

¹ Not to mention other universal and long-standing conceptual issues like the problem of time.

Classical vs quantum descriptions
In physics, when we say we have a theory of some physical phenomena, what we imply is that we have a mathematical model characterizing and describing the behavior of some particular physical system or set of systems. This model includes some mathematical constructs which are supposed to completely codify all the relevant information one could possibly ask about the physical system, and a correspondence between these mathematical objects and the things we can measure in a lab: the observables. While a purely relational description of the physical world including gravity is perhaps more fundamental, for physical phenomena more immediately accessible to our senses (and also for those far from our everyday experience), it has been most fruitful or convenient to separate and formulate this description in terms of states of the system and their dynamical evolution. For our everyday non relativistic theories, and even for relativistic ones, these states characterize the system 'at a given time' and their dynamical evolution describes their change 'with respect to time'.
Another ingredient in our formulations of physical reality is symmetries, which play a major role in a concise description and in our understanding of physical theories. They are central in the gauge theories of the standard model and in general relativity.
One of the revolutions of the 20th century was the discovery that the behavior of physical systems at atomic and subatomic scales requires a mathematical description fundamentally different from the classical description tied to our everyday experience. We believe this quantum description to be more fundamental, with the classical description being only an approximation. So given a classical theory like general relativity, how does one find the corresponding more fundamental quantum theory? Unfortunately, there is no straightforward answer to this question. Fortunately, there exist patterns and structures in both classical and quantum formulations which have given us general guidelines or quantization procedures to construct candidate quantum theories from classical ones. We are interested here in Dirac's canonical quantization procedure which, as we have said, can be followed to construct LQG. Dirac's procedure requires a classical theory to be in Hamiltonian form, so we will first go back to basics and review the Hamiltonian formulation, putting emphasis on the structures relevant for quantization. We will compare the classical and quantum descriptions in terms of how the different aspects mentioned above (states, observables, dynamics and symmetries) are implemented.

Classical description
Let us first recall how one arrives at a Hamiltonian formulation starting from a classical action. Consider for simplicity a non relativistic mechanical system with a finite number N of degrees of freedom q^i, generalized velocities q̇^i, with i = 1, …, N, and action

S = ∫ dt L(q^i, q̇^i) .

For example, for n particles of mass m in R³, N = 3n, and our typical Lagrangian has the form

L = Σ_i (1/2) m (q̇^i)² − V(q^i) ,

for some potential function V. Variation of the action, δS = 0, gives the Euler-Lagrange equations of motion

∂L/∂q^i − (d/dt) ∂L/∂q̇^i = 0 ,

resulting in this case in Newton's equations: m q̈^i = −∂V/∂q^i. To pass to the Hamiltonian formulation, one performs a Legendre transform:

p_i := ∂L/∂q̇^i ,   H(q, p) := p_i q̇^i − L .

The canonical action becomes

S = ∫ dt ( p_i q̇^i − H(q, p) ) ,

and its variation δS = 0 gives Hamilton's equations:

q̇^i = ∂H/∂p_i ,   ṗ_i = −∂H/∂q^i .   (1)

The Legendre transform changes the second order differential equations of motion into a first order system for generalized positions q^i and momenta p_i. It follows that the state of the system at a given time t is completely determined by the values (q^i(t), p_i(t)). All possible physical observables are simply functions of these 2N variables and their dynamics is governed by Hamilton's equations. What are then the underlying geometrical and algebraic structures behind this Hamiltonian formulation?
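The Legendre transform just described can be carried out symbolically. Below is a minimal sympy sketch for a one-dimensional harmonic oscillator; the Lagrangian L = m q̇²/2 − k q²/2 and all symbol names are our illustrative choices, not taken from the text.

```python
# Legendre transform L(q, qdot) -> H(q, p) for a 1D harmonic oscillator.
# The system and symbol names are illustrative choices.
import sympy as sp

m, k, q, qdot, p = sp.symbols('m k q qdot p', real=True)

L = sp.Rational(1, 2)*m*qdot**2 - sp.Rational(1, 2)*k*q**2

p_expr = sp.diff(L, qdot)                        # conjugate momentum p = m*qdot
qdot_sol = sp.solve(sp.Eq(p, p_expr), qdot)[0]   # invert: qdot = p/m
H = sp.simplify((p*qdot - L).subs(qdot, qdot_sol))   # H = p^2/(2m) + k*q^2/2

# Hamilton's equations (1) read off from H
dq_dt = sp.diff(H, p)     # qdot = dH/dp = p/m
dp_dt = -sp.diff(H, q)    # pdot = -dH/dq = -k*q
```

As expected, the resulting first order system reproduces Newton's equation m q̈ = −k q once the two equations are combined.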
2.1.1. States: Classical phase space
In the Hamiltonian formulation the state of the system is described by the generalized coordinates (q^i, p_i). The set of all possible states of a physical system defines its phase space Γ. This is a geometrical object: a manifold. For example, the state of a non relativistic point particle in three-dimensional space is determined by its position and momentum vectors in Cartesian coordinates: (X^i, P_i). These coordinate components may take arbitrary values, so the phase space is Γ = R⁶. In contrast, for a particle confined to move around a circle, phase space is a cylinder: Γ = S¹ × R, with coordinates given by the particle's angular position and momentum (θ, P_θ). The geometrical nature of phase space is clear if one recalls there is more than one way to describe a physical system and choose generalized coordinates for it. The state a physical system is in and the relations among different states are independent of these coordinates. This is essentially the defining property of a (topological) manifold. Since generalized coordinate transformations are further assumed to be smooth, Γ is a differentiable manifold. Each point on the manifold represents a physical state of the system (Fig. 1).
The phase space of a classical theory is equipped with additional structures. Typical phase spaces obtained from a Lagrangian theory have the structure of, or may be identified with, a special type of manifold²: the cotangent bundle T*Q over the configuration space Q. This is a particular example of a more general class of manifolds called symplectic. Symplectic manifolds (Γ, Ω) are equipped with a special anti-symmetric bilinear form Ω_{αβ} (this is analogous to the physical four-dimensional spacetime manifold of special or general relativity, which is equipped with a special symmetric bilinear form: the Minkowski metric η_{μν} for special relativity or a more general pseudo-Riemannian metric g_{μν} for general relativity). We will say more about this symplectic structure, which plays a prominent role in the Hamiltonian description, in the next subsections. All phase spaces we consider have a symplectic structure and are hence symplectic manifolds. Most important for canonical quantization is that on these spaces one may define a Poisson bracket {·,·} : C^∞(Γ) × C^∞(Γ) → C^∞(Γ). This is a map, satisfying certain properties, that takes two differentiable functions on Γ and assigns another differentiable function on Γ. As is standard, here C^∞(Γ) will denote the collection of all differentiable real functions on Γ. Going back to our original example of the phase space obtained from a Lagrangian theory, given the generalized coordinates (q^i, p_i) obtained after a Legendre transformation and for arbitrary functions f, g ∈ C^∞(Γ), the Poisson bracket is defined as

{f, g} := Σ_i ( ∂f/∂q^i ∂g/∂p_i − ∂f/∂p_i ∂g/∂q^i ) .   (2)

However, we emphasize that just as points (and functions) on a manifold are independent of coordinates, this mapping of functions is also coordinate independent. As a manifold, the phase space may be coordinatized by different generalized or arbitrary coordinates (q̃^i, p̃_i).
For a general phase space, canonical or Darboux coordinates (q^i, p_i) are precisely defined to be coordinates for which the Poisson bracket is expressed as in (2). For canonical coordinates, we have the canonical relations:

{q^i, q^j} = 0 ,   {p_i, p_j} = 0 ,   {q^i, p_j} = δ^i_j .

Just for fun, we can abstract things even further. Symplectic manifolds are actually a subclass of a more general type of manifolds: Poisson manifolds (Γ, {·,·}). These are manifolds equipped with a bracket {·,·} : C^∞(Γ) × C^∞(Γ) → C^∞(Γ) acting on functions on the manifold and satisfying the basic properties that (2) does³. Every phase space, being a symplectic manifold, is a Poisson manifold and its bracket may be written as in (2), but the converse is not true. Not every Poisson manifold is symplectic and its bracket may not have the form (2).
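The canonical relations can be checked directly from the coordinate expression (2) of the bracket. A small sympy sketch for N = 2 degrees of freedom follows; the variable names are our own.

```python
# Canonical Poisson bracket for N = 2 degrees of freedom and a direct check
# of the canonical relations; an illustrative sketch.
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2', real=True)
qs, ps = [q1, q2], [p1, p2]

def pb(f, g):
    """{f, g} = sum_i (df/dq^i dg/dp_i - df/dp_i dg/dq^i)."""
    return sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
               for q, p in zip(qs, ps))

# Canonical relations: {q^i, p_j} = delta^i_j, all other brackets vanish
print(pb(q1, p1), pb(q1, p2), pb(q1, q2), pb(p1, p2))   # 1 0 0 0
```

The same helper can of course be evaluated on any pair of differentiable phase-space functions, not just the coordinate functions themselves.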
So far we have focused on mechanical systems with only a finite number of degrees of freedom, for which phase spaces are finite dimensional. For field theories like Maxwell's electromagnetism (or gravity) there is an infinite number of degrees of freedom. Essentially, for each point x = (x^a) on three-dimensional space there is a set of canonical pairs. For example, for electromagnetism the pair (A_a(x), E^a(x)) defined by the components of the magnetic (vector) potential and the electric field defines canonical variables at each point of space. The corresponding phase spaces are therefore infinite dimensional manifolds. Nevertheless, at least formally, all the constructions above generalize to this case⁴.

Observables: Poisson algebras
In the classical description, physical observables are represented by differentiable functions on phase space. Prime examples are the total energy H(q, p) in a conservative system or the very same generalized coordinate functions π^i(q^j, p_j) = q^i, π_i(q^j, p_j) = p_i of non relativistic mechanical theories, which may represent coordinates and momenta of individual components or particles of the system. At the end of the day, physical observables are the most important, or even the only, thing one cares about. It is not surprising then that relations among these functions play a central role in the description of the classical theory and its quantization. These relations are algebraic in nature. Even mathematically, the underlying geometry of phase space may be subordinated to these algebraic relations and reconstructed from them. This is key in canonical quantization and it is also at the root of approaches to quantization of gravity like noncommutative geometry. Intuitively, one may guess this encoding to be true, since the coordinate functions mapping and differentiating points on phase space are part of the set of observable functions.
What are the algebraic structures we are referring to? Let us first recall the mathematical definition of an algebra. An algebra A is a vector space (over R or C) equipped with a multiplication of vectors compatible with the sum and scalar product; this means that for any vectors A, B, C ∈ A and scalars a, b ∈ C, the multiplication satisfies the expected properties:

A(B + C) = AB + AC ,   (A + B)C = AC + BC ,   (aA)(bB) = ab (AB) .

An algebra is said to be associative if the product further satisfies A(BC) = (AB)C. It is abelian if the product of two vectors commutes: AB = BA. If the product of an algebra is neither abelian nor associative but instead is anti-symmetric and satisfies the Jacobi identity, i.e.

AB = −BA   and   A(BC) + B(CA) + C(AB) = 0 ,

then A is called a Lie algebra⁵. The set of observables C^∞(Γ) is naturally an abelian associative algebra with product defined pointwise, but most importantly, it is also a Lie algebra with respect to a product defined by the Poisson bracket. Indeed, C^∞(Γ) is clearly a vector space with sum and product by a scalar c defined pointwise:

(f + g)(x) := f(x) + g(x) ,   (cf)(x) := c f(x) .

C^∞(Γ) can be made an abelian associative algebra with product also defined pointwise:

(fg)(x) := f(x) g(x) .   (3)

However, since phase space is a Poisson manifold, the Poisson bracket {·,·} : C^∞(Γ) × C^∞(Γ) → C^∞(Γ) defines another product:

f · g := {f, g} .

It is trivial to see that the properties of the Poisson bracket are equivalent to this product being compatible with the linear structure of C^∞(Γ) and satisfying

{f, g} = −{g, f} ,   {f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0 .

Thus, the set of observable functions C^∞(Γ) is a Lie algebra with the product defined by the Poisson bracket. Notice however that, even for finite dimensional phase spaces, this algebra is infinite dimensional (as a vector space). This algebra is what one calls the Poisson algebra of observables, and it is the relevant algebraic structure for quantization.
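That the Poisson bracket indeed equips C^∞(Γ) with a Lie algebra product can be verified explicitly on sample observables. The sympy sketch below checks antisymmetry, the Jacobi identity and the Leibniz rule for one degree of freedom; the polynomial test functions are arbitrary choices of ours.

```python
# Explicit check of antisymmetry, the Jacobi identity and the Leibniz rule
# for the Poisson bracket, on arbitrary polynomial observables.
import sympy as sp

q, p = sp.symbols('q p', real=True)

def pb(f, g):
    """Poisson bracket for a single canonical pair (q, p)."""
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

f, g, h = q**2*p, q + p**3, q*p    # sample observables

anti = sp.expand(pb(f, g) + pb(g, f))                                    # 0
jacobi = sp.expand(pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g)))  # 0
leibniz = sp.expand(pb(f, g*h) - pb(f, g)*h - g*pb(f, h))                # 0
```

The Leibniz rule is the extra property that makes the bracket a derivation in each argument, i.e. compatible with the pointwise product (3).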
For theories with local gauge symmetries, there is a complication. Due to the ambiguities in the description of the system, which translate into different points on Γ representing the same physical state, C^∞(Γ) does not directly correspond to the actual physical observables we measure. In order to set this correspondence, one needs to further 'factor out' these ambiguities. We will comment more on this later.

Dynamics: Symplectic geometry
Evolution is given by Hamilton's equations

q̇^i = ∂H/∂p_i ,   ṗ_i = −∂H/∂q^i ,   (4)
or, for a general function f(q, p) on phase space,

ḟ = {f, H} .   (5)

But how does one describe dynamical evolution in the geometric picture given by phase space? Or, in other words, what do Hamilton's equations tell us about the geometry of phase space and vice versa? Hamiltonian evolution is characterized by two structures: a privileged real function on Γ, i.e. the Hamiltonian H (usually associated with the energy of the system), and the symplectic structure Ω. To see how such structures emerge from Hamilton's equations (1), we may first write these equations in block matrix form:

(q̇, ṗ)ᵀ = A (∂H/∂q, ∂H/∂p)ᵀ ,   with   A := ( 0  I ; −I  0 ) ,

with I denoting the N × N identity matrix. Let us now recall that canonical transformations are also defined as those transformations leaving Hamilton's equations unchanged. One can then see that A, and therefore also its inverse

Ω := A⁻¹ = ( 0  −I ; I  0 ) ,

are invariant under canonical transformations or, equivalently, they are independent of canonical coordinates. But there is more. At each point on phase space, Ω defines a nondegenerate, antisymmetric bilinear transformation of tangent vectors. In the language of differential geometry, Ω is a nondegenerate two-form. This is the symplectic structure we referred to before. It is a geometrical object, independent of any coordinates on Γ. The symplectic structure defines an isomorphism between tangent vectors and their dual one-forms. In other words, we can use Ω_{αβ} to 'lower' vector indices: X_α = Ω_{αβ} X^β, and we can use its inverse Ω^{αβ} := P^{αβ} to 'raise' these indices back: X^α = P^{αβ} X_β (just like one uses the metric g_{μν} and its inverse g^{μν} to 'lower' and 'raise' indices of tensors in special or general relativity). For our examples here, indices (matrix components) run as α, β = 1, …, 2N.
The inverse of the symplectic structure, P = Ω⁻¹, is also called the Poisson bivector, and there is a reason for that. The Poisson bracket is really this bivector acting on differentials:

{f, g} = P^{αβ} (df)_α (dg)_β .

To complete the geometric picture of Hamiltonian evolution, we observe that the symplectic structure (or equivalently the Poisson bracket) provides a very neat way to construct vector fields on Γ. Indeed, given a function f, its differential df determines a vector field X_f thanks to the isomorphism, defined by the symplectic structure, between tangent vectors and their dual one-forms. [From (5), this vector field is sometimes also denoted as X_H = {·, H}.] So given the Hamiltonian function of the system H : Γ → R, by raising the indices of its differential dH, we get a Hamiltonian vector field⁷:

X^α_H := P^{αβ} (dH)_β .

Now the picture is complete. As the physical system evolves in time, it describes a curve γ(t) := (q^i(t), p_i(t)) on phase space (Fig. 1). What Hamilton's equations (4) tell us is that this evolution is along directions determined by the Hamiltonian vector field X_H (Fig. 2). Indeed, (4) may be rewritten as γ̇(t) = X_H, which says γ(t) is an integral curve of X_H (a curve whose tangent velocities γ̇(t) coincide with the vector field X_H).
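The integral-curve picture can be made concrete numerically. The sketch below integrates the Hamiltonian vector field X^α_H = P^{αβ}(dH)_β for the unit-mass, unit-frequency oscillator H = (q² + p²)/2; the Hamiltonian, step size and integration time are our illustrative assumptions.

```python
# Numerical sketch: the phase-space trajectory as an integral curve of the
# Hamiltonian vector field X_H, for H = (q^2 + p^2)/2 (unit-mass oscillator).
import numpy as np

P = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Poisson bivector in canonical coords

def X_H(x):
    """X_H^a = P^{ab} (dH)_b; here (dH)_a = (dH/dq, dH/dp) = (q, p) = x."""
    return P @ x

def rk4_step(x, dt):
    """One fourth-order Runge-Kutta step along the vector field X_H."""
    k1 = X_H(x)
    k2 = X_H(x + 0.5*dt*k1)
    k3 = X_H(x + 0.5*dt*k2)
    k4 = X_H(x + dt*k3)
    return x + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

x = np.array([1.0, 0.0])      # initial state (q, p)
dt, steps = 0.01, 1000        # integrate up to t = 10
for _ in range(steps):
    x = rk4_step(x, dt)

energy = 0.5*(x[0]**2 + x[1]**2)   # H is conserved along the flow
```

The orbit stays on the level set H = 1/2, illustrating that the flow generated by X_H preserves the Hamiltonian itself (since {H, H} = 0).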

Symmetries: Gauge theories and constrained systems
Symmetries are very important in physics. Symmetries may be global (acting on the physical system as a whole) or local (acting independently on individual parts of the system, e.g. on fields at each point in space). Here we will focus on the latter. All our fundamental theories of nature, the standard model and general relativity, are gauge theories. This is to say they are theories with local symmetries. Intuitively, local symmetries reflect our freedom to change the 'reference frame' with respect to which we describe the physical system locally. So different (mathematical) states will result from changing to different 'reference frames' locally, but they will all represent or correspond to the same physical state. Local symmetries are hence a redundancy in the mathematical description of the theory. This redundancy translates into two facts in the Hamiltonian description:
(i) The number of (canonical) variables used is larger than what is really needed to describe the system at a given 'instant' of its evolution, but eliminating the redundant variables is not trivial. More specifically, the phase space Γ that results from allowing arbitrary values of the canonical variables is larger than needed. Actual physical states and evolution are necessarily confined to a subregion within this larger space Γ. This subregion may be characterized by a set of equations for the canonical variables (just like the unit circle on the (x, y)-plane is characterized by the equation x² + y² − 1 = 0). In other words, for gauge theories there are non trivial relations or constraints among the canonical variables:

C_I(q^i, p_i) = 0 .   (6)

These constraint equations define the constraint (hyper)surface within Γ, to which physical states and evolution are restricted (Fig. 3). The index I runs over the number of constraints necessary to characterize this surface.
(ii) The redundancy also implies that different points on Γ may represent the same physical state. These points are necessarily related by a local (gauge) symmetry.
Starting from one point on the constraint surface and applying successively (infinitesimal) gauge symmetry transformations, one may obtain all points in Γ representing the same physical state as the original point. These are the gauge orbits (Fig. 3).
When one 'changes the gauge', i.e. when one applies a local symmetry transformation, the physical system does not change but the actual point on Γ representing the system does. This transformation should not change the (Hamiltonian) equations of motion, so mathematically it is analogous to dynamical evolution. (Infinitesimal) gauge transformations are therefore also represented by Hamiltonian flows, i.e. they correspond to Hamiltonian vector fields X_{C_I} defined using generator functions C_I : Γ → R as in the previous discussion on dynamics:

X^α_{C_I} := P^{αβ} (dC_I)_β .

The same geometric picture as for evolution (Fig. 2), with X_H replaced by X_{C_I}, is valid for gauge transformations.
The notation for the generator functions of gauge transformations is no coincidence. Generically, if the constraint functions in (6) are, in Dirac's terminology, first class, i.e. their Poisson bracket is a linear combination of constraints,

{C_I, C_J} = f_{IJ}^K C_K ,   (7)

for some functions f_{IJ}^K(q, p), then one can show they also correspond to generator functions for vector fields X_{C_I} generating gauge symmetry orbits.
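A simple set of functions whose brackets close in this fashion, with constant 'structure functions', is given by the angular momentum components on the phase space Γ = R⁶ of a particle: {L_i, L_j} = ε_{ijk} L_k. This example is our own illustration of closure under the Poisson bracket, not a constraint set taken from gravity; a sympy check:

```python
# The angular momentum functions close under the Poisson bracket:
# {L_x, L_y} = L_z and cyclic permutations (an illustrative example).
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z', real=True)
qs, ps = [x, y, z], [px, py, pz]

def pb(f, g):
    """Poisson bracket on the phase space of a particle in R^3."""
    return sum(sp.diff(f, qi)*sp.diff(g, pi) - sp.diff(f, pi)*sp.diff(g, qi)
               for qi, pi in zip(qs, ps))

Lx = y*pz - z*py
Ly = z*px - x*pz
Lz = x*py - y*px

print(sp.simplify(pb(Lx, Ly) - Lz))   # 0: the bracket closes on the set
```

In a gauge theory proper, the C_I additionally vanish on the constraint surface; the point here is only the algebraic closure of the brackets on the set of generators.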
For an in depth discussion of the Hamiltonian description and constrained systems, the reader is referred to the excellent references [10,11,12].

Quantum description
We now turn to quantum mechanics. At the center of the mathematical quantum description is the notion of Hilbert spaces and operators thereon. The probabilistic interpretation then connects these mathematical objects to physical or measurable observables. While there are important generalizations or reformulations of the quantum theory which do not rely directly on these mathematical notions, for the moment we will not consider such extensions, and will focus on the Hilbert space formulation that canonical quantization takes aim at. Again, we will briefly recall how states, observables, dynamics and symmetries of the physical system are represented in this case.

States: Hilbert spaces
In the quantum description, the states of a physical system are represented by normalized vectors |ψ⟩ (or more accurately by rays e^{iθ}|ψ⟩) in a Hilbert space H.
Hilbert spaces generalize notions of finite dimensional Euclidean vector spaces, with their 'dot product', to include infinite dimensional vector spaces. More precisely, a Hilbert space H is a complex vector space equipped with an inner product ⟨·|·⟩ : H × H → C satisfying:

⟨φ|c₁ψ₁ + c₂ψ₂⟩ = c₁⟨φ|ψ₁⟩ + c₂⟨φ|ψ₂⟩ ,   ⟨φ|ψ⟩ = conj(⟨ψ|φ⟩) ,   ⟨ψ|ψ⟩ ≥ 0 (with equality iff |ψ⟩ = 0) .

The inner product induces a norm ‖ψ‖ := √⟨ψ|ψ⟩ and therefore a metric d(ψ, φ) := ‖ψ − φ‖ on H. The metric introduces topological notions on H, in particular the notion of convergence of sequences of vectors. As a metric space, H is required to be complete (a complete space contains the limit points of all its Cauchy sequences).
The completeness condition guarantees that on every Hilbert space there exists an orthonormal basis. This is a set of vectors {|e_α⟩} in H satisfying

⟨e_α|e_β⟩ = δ_{αβ} ,

and such that for any vector |ψ⟩ ∈ H we have an expansion of the form

|ψ⟩ = Σ_n c_n |e_{α_n}⟩ ,   c_n ∈ C .

We want to stress that for each vector |ψ⟩, the number of basis elements in the previous sum may be finite or infinite, but always countable (n ∈ N). However, notice that we have not specified the index set for the labels α identifying each element |e_α⟩ of the basis. This index set determines the number of elements in the basis necessary to expand every vector in H in this way. A Hilbert space may be classified by the cardinality of its orthonormal basis:
• Separable Hilbert spaces have a countable basis {|e_n⟩}_{n∈N}. The space l² of square summable sequences used in basic quantum mechanics or, more precisely, L²(R, dx), the space of square integrable (∫_R dx |f|² < ∞) functions f : R → C, and even the Fock spaces used in quantum field theory: they are all Hilbert spaces of this type.
• Non-separable Hilbert spaces have an uncountable basis {|e_α⟩}_{α∈R}, that is, bases with the cardinality of the real numbers R (or bigger!). The Hilbert spaces resulting from loop quantization are precisely of this type, with bases labeled by points on the real line as in so-called polymer quantum mechanics, by the total volume of the universe as in Loop Quantum Cosmology, or by graphs in three-dimensional space representing excitations of geometry as in full LQG.
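In finite dimensions, the basis expansion and the role of the inner product are easy to exhibit explicitly. The numpy sketch below builds an orthonormal basis, expands a normalized state in it, and checks Parseval's identity; the dimension, random seed and state are arbitrary choices of ours.

```python
# Finite-dimensional sketch: orthonormal basis expansion |psi> = sum_a c_a |e_a>
# and Parseval's identity. Dimension and random seed are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Build an orthonormal basis (the columns of Q) from a random complex matrix
A = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
Q, _ = np.linalg.qr(A)

# A normalized state |psi>
psi = rng.normal(size=d) + 1j*rng.normal(size=d)
psi = psi / np.linalg.norm(psi)

c = Q.conj().T @ psi             # coefficients c_a = <e_a|psi>
reconstructed = Q @ c            # the expansion reproduces |psi>
norm_sq = np.sum(np.abs(c)**2)   # Parseval: sum_a |c_a|^2 = <psi|psi> = 1
```

In the infinite dimensional (separable) case the same formulas hold with sums over n ∈ N, which is precisely the l² structure referred to below.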
Actually, in the category of Hilbert spaces, the cardinality or number of elements in an orthonormal basis completely characterizes the Hilbert space. If two Hilbert spaces H and H̃ have corresponding bases {|e_α⟩}_{α∈I} and {|ẽ_α⟩}_{α∈I} labeled by the same index set I, the mapping U : H → H̃, |e_α⟩ ↦ |ẽ_α⟩ for each α ∈ I, defines an isometry. In particular, all infinite dimensional separable Hilbert spaces are isometric to the seemingly simpler l² space. Nevertheless, as models of physical systems, it may certainly be more convenient to work with or construct separable Hilbert spaces which do not resemble l². When Heisenberg introduced his 'matrix quantum mechanics' he was essentially working with l². At the same time Schrödinger, with his 'wave functions' for particles in one dimension, was using L²(R, dx). The latter realization of the Hilbert space for the quantum behavior of a point particle is equivalent to l², but it has the virtue of having a more direct connection or physical interpretation in terms of positions, momenta and the undulatory phenomena associated with the way we measure quantum properties of particles in a laboratory.
These observations already make manifest a difference between the role of states in classical and quantum (mathematical) descriptions. Whereas in the classical formulation all physical information is essentially already encoded in the states, i.e. in the phase space manifold (which mathematically, through the symplectic structure, determines the algebra of observables), in the quantum description the Hilbert space $\mathcal{H}$ by itself does not contain or codify the whole information about the physical system. Additional (external) input is required to complete the mathematical description. This additional information is codified in the quantum algebra of observables.

Observables: operator algebras
The correspondence between the mathematical objects defining the quantum theory and the physical quantities one may measure in a lab is less direct than in the classical theory. Each physical observable is associated in the quantum theory with a linear operator on the Hilbert space: $f \to \hat f$. The value $\lambda$ a physical observable may possibly take after a measurement is necessarily an eigenvalue of its corresponding operator: $\hat f|\psi\rangle = \lambda|\psi\rangle$. Real observables then generally correspond to Hermitian or self-adjoint operators, which ensures their eigenvalues are real.
The inner product and normalization of vectors make possible the probabilistic interpretation, which provides physical predictions through expectation values of measured observables on a state $|\psi\rangle$, $\langle \hat f\rangle = \langle\psi|\hat f|\psi\rangle$, and transition probability amplitudes between states, $\langle\phi|\psi\rangle$. Just like the algebra of observable functions on phase space in a classical description, the set of observable operators (operators associated with physical observables) forms a (Lie) algebra. Indeed, observable operators are a subset of $GL(\mathcal{H})$, the set of general linear operators on $\mathcal{H}$. $GL(\mathcal{H})$ not only has the structure of a vector space, but also of an associative algebra and of a Lie algebra. Vector addition of operators $\hat f + \hat g$ and scalar multiplication $c\hat f$ are defined by their action on vectors $|\psi\rangle \in \mathcal{H}$: $(\hat f + \hat g)|\psi\rangle = \hat f|\psi\rangle + \hat g|\psi\rangle$ and $(c\hat f)|\psi\rangle = c(\hat f|\psi\rangle)$. The product given by the composition of operators, $(\hat f \hat g)|\psi\rangle := \hat f(\hat g|\psi\rangle)$, makes $GL(\mathcal{H})$ a noncommutative associative algebra [in contrast to the abelian associative algebra defined by classical observable functions with pointwise product (3)]. Using this composition product, the commutator $[\hat f, \hat g] := \hat f\hat g - \hat g\hat f$ (which defines yet another product) makes $GL(\mathcal{H})$ a Lie algebra. The set of observable operators is necessarily a subalgebra of it. Analogously to its classical counterpart, due to the chief role of physical observables, the latter algebra of operator observables is a key ingredient of the quantum description. The 'abstract algebra' is however not enough. To extract physical information one further needs a state. This is where the representation, i.e. the actual Hilbert space with its inner product and specific operators, plays its part.
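The Lie-algebra properties of the commutator (antisymmetry and the Jacobi identity) are easy to verify numerically on a toy finite-dimensional space; the random $4\times 4$ operators below are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def comm(a, b):
    """Commutator [a, b] = ab - ba, the Lie bracket on operators."""
    return a @ b - b @ a

# Three random operators on a toy 4-dimensional Hilbert space.
f, g, h = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)) for _ in range(3))

# The composition product is associative but noncommutative...
assert not np.allclose(f @ g, g @ f)
# ...while the commutator is antisymmetric and satisfies the Jacobi identity,
# which is what makes GL(H) a Lie algebra:
assert np.allclose(comm(f, g), -comm(g, f))
jacobi = comm(f, comm(g, h)) + comm(g, comm(h, f)) + comm(h, comm(f, g))
assert np.allclose(jacobi, np.zeros((4, 4)))
```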

Dynamics
Dynamical or time evolution must preserve the probabilistic interpretation and the superposition principle, i.e. it must preserve the normalization of vectors and it must be linear. Evolution must therefore be implemented by unitary transformations $U_t : \mathcal{H} \to \mathcal{H}$: $|\psi(t)\rangle = U_t|\psi(0)\rangle$. For (conservative, linear) unconstrained systems, this evolution operator may be (infinitesimally) generated by a Hamiltonian operator $\hat H$ (see next subsection). In this case, evolution of a state is dictated by the Schrödinger equation: $i\,d|\psi(t)\rangle/dt = \hat H|\psi(t)\rangle$ (with $\hbar = 1$). Since the information about the physical system is honestly split mathematically between the states and the algebra of observable operators in the quantum theory, it is not surprising that evolution may be equivalently thought of as evolution of the observable operators (Heisenberg picture) as opposed to evolution of the states (Schrödinger picture). The Schrödinger equation is then equivalent to the Heisenberg equation $d\hat f/dt = i[\hat H, \hat f]$. Notice the special role of $t$ (and the Hamiltonian $\hat H$) in these quantum descriptions. The Schrödinger or Heisenberg equations do not easily generalize to more complicated systems. This is especially true for totally constrained systems, which have vanishing Hamiltonian, like general relativity.
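The equivalence of the two pictures can be checked directly on a small example: evolving the state with $U_t = e^{-i\hat H t}$ or the operator with $\hat A(t) = U_t^\dagger \hat A U_t$ gives the same expectation values. The Hamiltonian and observable below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

# Random Hermitian Hamiltonian and a diagonal observable on a 4-dim space (hbar = 1).
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
A = np.diag([0.0, 1.0, 2.0, 3.0])

t = 0.7
U = expm(-1j * t * H)                 # unitary evolution operator
assert np.allclose(U.conj().T @ U, np.eye(4))

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

# Schrodinger picture: evolve the state; Heisenberg picture: evolve the operator.
psi_t = U @ psi0
A_t = U.conj().T @ A @ U
# Both pictures yield the same physical prediction:
assert np.allclose(np.vdot(psi_t, A @ psi_t), np.vdot(psi0, A_t @ psi0))
```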
Recall, an operator $U$ is unitary iff $U^\dagger U = U U^\dagger = \mathbb{1}$, so indeed physical predictions (eigenvalues, expectation values, transition amplitudes) are unchanged under such transformations; e.g. for the transition amplitude between two states: $\langle U\phi|U\psi\rangle = \langle\phi|U^\dagger U|\psi\rangle = \langle\phi|\psi\rangle$. If $g \mapsto U_g$ gives a unitary representation of the symmetry group of transformations $G$, this means it satisfies $U_g U_{g'} = U_{gg'}$. If $g(\lambda)$ with $\lambda \in \mathbb{R}$ is a one-parameter family of symmetry transformations, that is, the family of transformations labeled by $\lambda$ satisfies $g(\lambda)g(\lambda') = g(\lambda + \lambda')$, at least formally we can build the transformation $U_{g(\lambda)}$ out of lots of little ones: $U_{g(\lambda)} = [U_{g(\lambda/N)}]^N$. In the limit $N \to \infty$, these pieces are infinitesimal transformations which are 'almost the identity transformation' and therefore may be written as $U_{g(\lambda/N)} \approx \mathbb{1} + i\frac{\lambda}{N}\hat C$. A finite transformation is then recovered from $\hat C$: $U_{g(\lambda)} = \lim_{N\to\infty}\big(\mathbb{1} + i\tfrac{\lambda}{N}\hat C\big)^N = e^{i\lambda\hat C}$. $\hat C$ is the infinitesimal generator of the one-parameter family of transformations. A state $|\psi\rangle$ representing a configuration of the physical system invariant under symmetry transformations must satisfy $U_{g(\lambda)}|\psi\rangle = |\psi\rangle$, and hence it also satisfies $\hat C|\psi\rangle = 0$.
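The 'lots of little transformations' construction can be seen numerically. As a minimal sketch we use the real analog $e^{\lambda C}$ with an antisymmetric generator $C$ (rotations about the $z$-axis in $\mathbb{R}^3$, our illustrative choice) instead of $e^{i\lambda\hat C}$:

```python
import numpy as np
from scipy.linalg import expm

# Generator of rotations about the z-axis (a one-parameter group of transformations).
C = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 0.0]])
lam = 0.9

# Finite transformation built from many 'almost identity' pieces (1 + lam*C/N)^N ...
N = 100000
step = np.eye(3) + (lam / N) * C
U_from_steps = np.linalg.matrix_power(step, N)

# ... converges to the exponential of the generator:
assert np.allclose(U_from_steps, expm(lam * C), atol=1e-4)
```

Vectors along the $z$-axis are annihilated by $C$ and are therefore invariant under every finite rotation, the analog of $\hat C|\psi\rangle = 0$ implying $U_{g(\lambda)}|\psi\rangle = |\psi\rangle$.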

Canonical quantization
The behavior of atomic and subatomic systems requires quantum mechanics, so we believe quantum theory to be more fundamental (and richer) than the classical description, which is supposed to emerge from the former as an effective or coarse-grained description. Unfortunately, we have no physical first principles for quantum theory yet. To construct a quantum theory for a physical system from scratch, we really only have the correspondence principle as guidance: Excluding certain quantum observables, like the spin of particles, which have no classical analog, the quantum theory is expected (and must be made) to correspond to the classical mechanics (or field theory) one started with in an appropriately defined semi-classical regime. That is, through the semi-classical limit, where by some kind of generalization of the Ehrenfest theorem, expectation values of quantum operators in so-called semi-classical states should reproduce the classical dynamics.
Of course in principle, to construct a quantum theory of a given physical system, one could devise and perform a great many experiments on the system (perhaps infinitely many!) and cleverly organize and synthesize the results according to the basic (mathematical) rules of quantum mechanics. Besides being unsatisfactory from a theoretical point of view, this way of proceeding is generally impractical, and for systems like gravity, technologically impossible today (and most likely for many more decades to come). The only known systematic way to formulate a possible quantum theory describing a physical system is through quantization of the corresponding classical description. This is the process of building the seemingly very different mathematical objects describing states, observables, evolution and symmetries of a candidate quantum theory from their mathematical counterparts in the classical theory, in such a way that the correspondence principle is satisfied.
So given a classical theory, how do we actually quantize it? How do we find the particular Hilbert space $\mathcal{H}$, operators $\hat f$ and all that which will actually describe the quantum system? There are several general approaches, or rather recipes, for finding such a theory. To name the 'most popular' we have:
• Canonical quantization
• Path integral quantization
• Geometric quantization
• Deformation quantization
with the first two probably being at the top of physicists' likings. While observables are the heart of both classical and quantum descriptions, each procedure privileges certain objects over the others and uses them differently to construct a candidate theory. Notice we have said that quantization of a classical theory in general, and the procedures mentioned above in particular, result not in the quantum theory, but in a candidate quantum theory. The classical theory is assumed to be a coarse-grained version of the more fundamental quantum theory, a mere shadow of the richer quantum description. So it is no surprise that generically the classical description cannot uniquely determine the quantum theory. For complex theories there are many ambiguities. In each of these general procedures there are choices to be made. Surely, many of these choices may be physically or mathematically motivated, but a priori, nothing built into these procedures guarantees the constructed theory will be the one preferred by nature. Actually, for complex theories like gravity, these general procedures do not even guarantee that the resulting theory will satisfy the correspondence principle and recover the semi-classical limit one started with! This is something that needs to be checked by hand.
We focus now on canonical quantization. As we have reviewed, in the quantum theory as we have described it, a big part of the physical information is contained in the operators and the relations among them. In canonical quantization these relations are postulated or extracted directly from the relations classical observables satisfy (which contain all the information of the classical description). But which exactly are the relevant relations for quantization? A physical justification or motivation comes from the fact that in physical quantum systems certain pairs of observable variables satisfy generalized uncertainty relations, which mathematically are the result of their corresponding operators not commuting. Through the correspondence principle, these variables turn out to be canonical pairs in the classical description. This hints at the Poisson algebra structure of the classical observables and the commutator algebra of quantum operators as the relevant algebraic structures to connect both descriptions. Indeed, mathematically, both $(C^\infty(\Gamma), \{\cdot,\cdot\})$ and the quantum algebra of observable operators [as a subset of $(GL(\mathcal{H}), [\cdot,\cdot])$] are Lie algebras. From now on then, for quantization purposes we will extend $C^\infty(\Gamma)$ and consider not only real but also complex-valued functions $f : \Gamma \to \mathbb{C}$. We will keep the same notation and still refer to this extension as the set of classical observable functions.
Dirac was the first to postulate the canonical commutation relations (CCRs) from the Poisson bracket. In fact, he was more ambitious and also proposed a broader correspondence for quantization. In modern and slightly more rigorous terms: the existence of a Hilbert space $\mathcal{H}$ and a quantization mapping $\hat{}\;: C^\infty(\Gamma) \to GL(\mathcal{H})$, $f \mapsto \hat f$, satisfying (among other more technical requirements) that $[\hat f, \hat g] = i\hbar\,\widehat{\{f,g\}}$. We now know that for general phase spaces it is impossible for the whole algebra of observables $C^\infty(\Gamma)$ to satisfy these conditions consistently. Even for simple cases like $\Gamma = \mathbb{R}^{2n}$, there are 'no-go' theorems (e.g. Van Hove's) which point at obstructions to constructing this mapping. So in general we can only choose a proper subalgebra of $C^\infty(\Gamma)$ to be represented unambiguously as quantum operators. This is the first and a key choice in canonical quantization.

Dirac's procedure
Let us now elaborate on how one can exploit this (partial) correspondence between quantum and classical observables to construct the quantum theory. Canonical quantization may be thought of as roughly consisting of four steps:
(i) Choose a subalgebra of basic observables: This is a subset $\mathcal{A} \subset C^\infty(\Gamma)$ of (real or complex) functions on phase space which will correspond to unambiguous quantum operators. This choice is however not arbitrary. At the very least this set $\mathcal{A}$ of basic observables must satisfy:
• The set $\mathcal{A}$ should be 'controllable'. This usually means $\mathcal{A}$ is closed under the operation of taking the Poisson bracket, i.e. $\mathcal{A}$ should be a Poisson subalgebra of the algebra of observables.
• The set of basic observables should not be so large as to preclude the existence of a quantization map, but it should be large enough to separate points on $\Gamma$. This means we should be able to distinguish different points on phase space just by the values observables in $\mathcal{A}$ take on these points: $x, x' \in \Gamma$ and $x \neq x'$ implies $\exists f \in \mathcal{A}$ such that $f(x) \neq f(x')$. This guarantees $\mathcal{A}$ contains enough geometric information about the phase space.
Example: In standard quantum mechanics (finite number of degrees of freedom), one fixes canonical coordinates $(q,p) \in \Gamma$ and chooses the coordinate functions and the constant unit function. For a single canonical pair: $\mathcal{A} = \mathrm{span}_{\mathbb{C}}\{q, p, 1\}$. This is the Heisenberg algebra. Clearly the Heisenberg algebra separates points since it contains the coordinate functions. For the same system, we may however choose different subalgebras of observables:
• One may consider the family of exponential functions $W_{(\mu,\nu)}(q,p) = e^{i(\mu q + \nu p)}$, for $\mu, \nu \in \mathbb{R}$. The generated Poisson subalgebra $\mathcal{A}$ consists of linear combinations of these functions and it also separates points.
• A similar but different choice would be the Poisson algebra generated by the family $F_1(q,p) = p$ and $U_\mu(q,p) = e^{i\mu q}$, for $\mu \in \mathbb{R}$. These last two choices of basic observable subalgebras $\mathcal{A}$ are analogs of the so-called holonomy-flux algebra chosen in LQG. These choices may seem less natural here, but they are well motivated in the gravitational case.
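Closure of the last family under the Poisson bracket can be checked symbolically. The sketch below (with the bracket defined for a single canonical pair, our own minimal setup) verifies that $\{U_\mu, p\} = i\mu\, U_\mu$ lands back in the span of the generators and that the $U_\mu$'s commute among themselves:

```python
import sympy as sp

q, p, mu, nu = sp.symbols('q p mu nu', real=True)

def pb(f, g):
    """Poisson bracket {f, g} for a single canonical pair (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

F1 = p
U = sp.exp(sp.I * mu * q)
V = sp.exp(sp.I * nu * q)

# {U_mu, p} = i*mu*U_mu: the bracket stays in the span of the generators,
assert sp.simplify(pb(U, F1) - sp.I * mu * U) == 0
# and the exponentials Poisson-commute among themselves:
assert sp.simplify(pb(U, V)) == 0
```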
(ii) Construct the quantum algebra $\mathcal{A}$ of basic observables: This is the detailed version of 'imposing the canonical commutation relations'. More accurately, what one does is:
• Construct the abstract (free associative) algebra of basic operators by promoting basic observable functions in $\mathcal{A}$ to abstract operators, $f \mapsto \hat f$, subject to the commutation relations $[\hat f, \hat g] = i\hbar\,\widehat{\{f,g\}}$;
• along with an abstract involution $* : \mathcal{A} \to \mathcal{A}$. Recall this is a map satisfying $(\hat f + \hat g)^* = \hat f^* + \hat g^*$, $(c\hat f)^* = \bar c\, \hat f^*$, $(\hat f\hat g)^* = \hat g^* \hat f^*$ and $(\hat f^*)^* = \hat f$. This involution represents the reality conditions: for real functions $f = \bar f$, one imposes $\hat f = \hat f^*$ (eventually this will correspond to the adjoint operation in a concrete Hilbert space).
$\mathcal{A}$ is called the quantum $*$-algebra of basic observables. This is an abstract mathematical object containing the relevant algebraic information of the system. At this point there is no specific Hilbert space or concrete operators yet, only the abstract commutation relations.
Example: In standard quantum mechanics the abstract operators $\hat q$ and $\hat p$ generate the free algebra of polynomials in $\hat q$ and $\hat p$, and we have the canonical commutation relations (CCRs): $[\hat q, \hat p] = i\hbar\,\hat 1$ (8).
(iii) Find representations of the quantum $*$-algebra $\mathcal{A}$: This is the crucial step of implementing the CCRs, i.e. finding a concrete Hilbert space $\mathcal{H}$ and specific operators that obey the CCRs. More rigorously, one needs to find a (kinematical) Hilbert space $\mathcal{H}$ and a map $\pi : \mathcal{A} \to GL(\mathcal{H})$ which preserves the $*$-algebra structure: $\pi(\hat f\hat g) = \pi(\hat f)\pi(\hat g)$ and $\pi(\hat f^*) = \pi(\hat f)^\dagger$. But how do we find this Hilbert space and operators? Do we have to use our instinct and guess the answer? Luckily, this is now a highly nontrivial but purely mathematical problem in the realm of so-called representation theory. To try to find representations for the quantum $*$-algebra $\mathcal{A}$, one can resort to powerful representation theorems and constructions like the Gelfand isomorphism, the GNS construction (Gelfand-Naimark-Segal), the Riesz representation theorem, etc.
Example: For the CCRs (8) Schrödinger gave us the answer. In his position representation of standard quantum mechanics: $\mathcal{H}_{Sch} = L^2(\mathbb{R}, dq)$, and the map $\pi : \mathcal{A} \to GL(\mathcal{H}_{Sch})$ represents the abstract operators $\hat q$ and $\hat p$ as multiplication and derivation operators acting on square-integrable 'wave functions' $\psi \in \mathcal{H}_{Sch}$. Of course, to avoid cluttering our formulas, we physicists simply write $\hat q\,\psi(q) = q\,\psi(q)$ and $\hat p\,\psi(q) = -i\,\partial\psi(q)/\partial q$ (we have also set $\hbar = 1$). The well-known example above may give the impression that we are overly complicating things. However, for more sophisticated theories, like gravity, it pays off to have a refined or more formal understanding of what 'imposing the commutation relations' really means.
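One can see the Schrödinger representation at work numerically by discretizing the line. In this rough sketch (uniform grid and central differences are our own choices; the approximation is crude at the boundary), the commutator $[\hat q, \hat p]$ acting on a smooth, localized wave function reproduces $i\psi$ in the interior:

```python
import numpy as np

# Uniform grid on the real line and a smooth, well-localized 'wave function'.
n, L = 2001, 20.0
qs = np.linspace(-L / 2, L / 2, n)
h = qs[1] - qs[0]
psi = np.exp(-qs**2 / 2)

def p_op(f):
    """p = -i d/dq via central differences (hbar = 1); inaccurate at the edges."""
    return -1j * np.gradient(f, h)

# [q, p] psi = q p psi - p (q psi) should be approximately i*psi away from the edges.
comm_psi = qs * p_op(psi) - p_op(qs * psi)
interior = slice(100, -100)
assert np.allclose(comm_psi[interior], 1j * psi[interior], atol=1e-3)
```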
Already for the canonical commutation relations of standard non-relativistic quantum mechanical systems of our example, our discussion raises the important question of the uniqueness of the Schrödinger representation. Indeed, by a simple Fourier transformation, we may come up with a 'different' representation of the CCRs: the Schrödinger momentum representation on $\mathcal{H} = L^2(\mathbb{R}, dp)$. While the Hilbert space of 'momentum wave functions' is mathematically the same as that of 'position wave functions' (square-integrable functions), the representation of the abstract $\hat q$ and $\hat p$ is completely different: for $\psi \in \mathcal{H}$, $\hat q\,\psi(p) = i\,\partial\psi(p)/\partial p$ and $\hat p\,\psi(p) = p\,\psi(p)$. We can go further and give another, Fock-type representation of the CCRs with the different Hilbert space $\mathcal{H} = \ell^2$ of square-summable infinite sequences. For any sequence $(z_0, z_1, z_2, \ldots) \in \ell^2$ we may formally define shifting operators moving each component up or down the sequence. The reason we do not generally worry about other representations in a basic quantum mechanics course is that for the finite-dimensional mechanical case 'almost all of them' are unitarily equivalent and predict the same physics. In some very precise sense the Schrödinger representation is unique. Stone-von Neumann theorem: Any irreducible, unitary representation of the Weyl algebra (exponentiated version of the CCRs) which is continuous in a suitable sense is unitarily equivalent to the Schrödinger representation.
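The shifting operators on $\ell^2$ can be realized concretely on truncated sequences; the $\sqrt{k}$ weights below are the usual harmonic-oscillator (ladder) normalization, an illustrative choice of ours. The truncation also shows why no finite-dimensional matrices can satisfy the CCR exactly (the trace of a commutator vanishes, so a corner term must appear):

```python
import numpy as np

# Truncate l^2 to sequences (z_0, ..., z_{n-1}) and realize the shifting ('ladder')
# operators as matrices: a lowers the index, a_dag raises it, with sqrt weights.
n = 12
a = np.diag(np.sqrt(np.arange(1, n)), k=1)   # annihilation: shifts components down
a_dag = a.conj().T                           # creation: shifts components up

comm = a @ a_dag - a_dag @ a
# The CCR [a, a_dag] = 1 holds exactly except on the highest component kept:
assert np.allclose(comm[:-1, :-1], np.eye(n - 1))
assert np.isclose(comm[-1, -1], 1 - n)       # truncation artifact in the corner
```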
For field theories there are no general uniqueness results for the analog of the CCRs; there are actually infinitely many inequivalent 'nice' representations of the CCRs. However, if we are in Minkowski space and we require the quantum theory to 'inherit its symmetries', then one can single out a preferred representation. A further source of ambiguity is factor ordering: the product of classical observables does not have a definite quantum analog [for $f, g \in \mathcal{A}$, the bracket $\{f,g\}$ has an unambiguous quantum representation as a commutator, but $fg$ does not]. As we have reviewed, for gauge theories like general relativity there is an additional complication. The Hamiltonian theory has constraints $C_I$ associated to gauge symmetries which need to be implemented in the quantum theory to eliminate redundancies and identify the true physical states. To do so, one needs to:
(a) Construct quantum constraint operators $\hat C_I$ from basic operators on the kinematical Hilbert space $\mathcal{H}_{kin}$. Again, this step is not trivial and is subject to quantization ambiguities because generically the constraint functions $C_I : \Gamma \to \mathbb{R}$ do not belong to $\mathcal{A}$.
(b) Impose the constraints to find gauge invariant states, $\hat C_I|\psi\rangle = 0$, or alternatively implement the action of finite gauge transformations $U_g = e^{i\lambda\hat C_I}$ and solve $U_g|\psi\rangle = |\psi\rangle$. For the first option there is an important subtlety: if the operators $\hat C_I$ are to consistently represent infinitesimal generators of quantum gauge symmetries, the quantization must be anomaly-free, i.e. an analog of the first-class constraint algebra (7) must hold for the quantum operators.
(c) Find the physical Hilbert space $\mathcal{H}_{phys}$ to start doing physics! This extra step is usually necessary because generically the gauge invariant states $|\psi\rangle$ solving the equations above may not belong to the Hilbert space $\mathcal{H}_{kin}$. The kinematical Hilbert space $\mathcal{H}_{kin}$ (where the constraints have not been imposed) then needs to be replaced or extended somehow to contain the gauge invariant states.
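The two equivalent ways of imposing a constraint can be illustrated on a toy finite-dimensional example (the diagonal constraint operator below is an arbitrary choice of ours, not a gravitational constraint): states annihilated by $\hat C$ are exactly those invariant under the finite gauge transformations $e^{i\theta\hat C}$.

```python
import numpy as np
from scipy.linalg import expm, null_space

# Toy constraint operator C = diag(0, 0, 1, -1) on a 4-dim 'kinematical' Hilbert space.
C = np.diag([0.0, 0.0, 1.0, -1.0])

# Gauge invariant states solve C|psi> = 0; here they span the first two directions.
phys = null_space(C)
assert phys.shape[1] == 2

# Equivalently, they are invariant under the finite gauge transformations U = e^{i*theta*C}:
U = expm(1j * 0.8 * C)
assert np.allclose(U @ phys, phys)
```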

Gravity
We finally turn to the Hamiltonian formulation of gravity and its (loop) quantization. Since our purpose is only to give an overview of the application to gravity of ideas previously discussed, we will be slightly less rigorous and we will only sketch the general ideas.
Recall that in Einstein's theory of general relativity, spacetime is modeled by a four-dimensional manifold $M$ whose points each represent an event (some 'here and now') in the universe. According to Einstein, and as a consequence of the equivalence principle, gravity is a manifestation of the curvature of spacetime. Mathematically, this curvature, which we can physically measure, is given by the so-called Riemann curvature tensor $R^\rho{}_{\sigma\mu\nu}$. In turn, the curvature tensor may be reconstructed from a simpler object: the metric $g_{\mu\nu}$. The components of $R^\rho{}_{\sigma\mu\nu}$ are quadratic functions of the metric and its derivatives, schematically $R(g, g^2, \partial g, \partial^2 g)$, which hence acts as some kind of potential for curvature [similarly to potential functions $V(q)$ for conservative forces in mechanics, $F = -\partial V/\partial q$, or the magnetic vector potential $\vec A$ for the magnetic field, $\vec B = \nabla \times \vec A$]. The principle of general relativity implies first that any free-falling observer is entitled to assume he/she is in an inertial reference frame. Mathematically, this translates to the requirement that at each point $p$ on $M$ one may find coordinates for which $g_{\mu\nu}$ looks like the Minkowski metric of special relativity: $\eta_{\mu\nu} = \mathrm{diag}(-1, 1, 1, 1)$. In other words, spacetime $(M, g_{\mu\nu})$ is a four-dimensional pseudo-Riemannian manifold.
Secondly, the principle of general relativity also implies that physical laws should be independent of the coordinates an observer uses to describe them. In other words, the laws of physics should be invariant under arbitrary coordinate transformations on $M$ or, equivalently, diffeomorphisms $\phi : M \to M$ of the spacetime manifold. The equivalence principle combined with this coordinate independence then implies general covariance of the theory. This general covariance means equations in general relativity must be spacetime tensorial and should only involve the metric, or quantities derived from it like curvature, to describe the dynamics of spacetime itself. This entails in particular background independence of the theory: when written in tensorial form, no additional preferred or 'background' geometrical structures other than the metric should appear in the equations.
Finally, gravity or the geometry of spacetime is dynamical. 'Free-falling observers' or test particles in general relativity follow geodesics. This is how curvature manifests itself or influences matter. But in turn, $T_{\mu\nu}$, the energy-momentum tensor of matter in a spacetime, shapes curvature according to Einstein's equations: $R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} = 8\pi G\, T_{\mu\nu}$ (9). In summary, we may distill three key lessons from general relativity: Gravity is a dynamical gauge theory of the geometry of a spacetime $(M, g_{\mu\nu})$. As such, its formulation must be background independent and invariant under diffeomorphisms, i.e. the group of gauge or local symmetries of the theory must necessarily contain Diff(M), the group of diffeomorphisms of $M$.

Hamiltonian ADM formulation
General covariance implies there is no preferred notion of time in general relativity. However, as we have reviewed, in a classical canonical or Hamiltonian formulation, a definite notion of a time coordinate t not only plays a crucial role to describe dynamics but is also essential to define generalized momenta or a Legendre transform. Einstein's covariant field equations (9) are not in Hamiltonian form. In order to reformulate general relativity in Hamiltonian form, one first needs to select a 'notion of time' in M. This necessarily breaks explicit di↵eomorphism invariance and covariance. Nevertheless, at the end of the day, the canonical formulation will still be di↵eomorphism invariant and background independent.
One may think of the goal of this reformulation as rewriting general relativity as a theory of fields on a three-dimensional space $\Sigma$ evolving in time $t$. The key idea is to slice spacetime into three-dimensional 'constant time $t$' hypersurfaces $\Sigma_t$ (one for each $t$). The picture is similar to how one slices a loaf of bread. This is a foliation of spacetime with space-like hypersurfaces $\Sigma_t$. One may think of the $\Sigma_t$'s as the simultaneity hypersurfaces of some special observer in $M$. That this can actually be done necessarily implies certain global properties for $M$, like having topology $\mathbb{R} \times \Sigma$, but we will not worry about such technicalities here. The foliation by itself is not sufficient to define a notion of time evolution in $M$; one further needs a (generally time-like) direction transverse to the foliation in order to map and identify points on different hypersurfaces $\Sigma_t$. This is given by an evolution vector field $t^\mu$ on $M$ or, equivalently, by its flow. Using these two geometric background structures, by means of the orthogonal projection of vectors to the hypersurfaces $\Sigma_t$, one may decompose every spacetime 4-vector or tensor into a purely spatial part tangent to the foliation and a normal time-like part. This choice of foliation and splitting of tensors is called a 3+1 decomposition. For example, for the evolution vector field one has the decomposition $t^\mu = N n^\mu + N^\mu$, where $n^\mu$ is the unit normal to $\Sigma_t$, $N^\mu$ is the tangential component called the shift vector, and $N$ the normal component called the lapse function [Fig. 4(b)]. For the metric one has the decomposition $g_{\mu\nu} = q_{\mu\nu} - n_\mu n_\nu$.
The 'tangential' or purely spatial part $q_{\mu\nu}$ defines a three-dimensional Riemannian metric [of signature (+,+,+)] on $\Sigma_t$. Every other vector or tensor has a similar decomposition. Now, by following the integral curves of $t^\mu$, one may map one $\Sigma_t$ to the next and see how the purely spatial or tangential components of tensors change as we move from one $\Sigma_t$ to the next, i.e. as we 'evolve in time'. It turns out this 'evolution' of three-dimensional fields fully reconstructs the four-dimensional geometry or spacetime dynamics. This identification procedure also defines adapted coordinates on $M$: given coordinates $x^a$ on a particular $\Sigma_t$, by 'dragging' them along $t^\mu$, we may use $(t, x^a)$ as coordinates for $M$. Using these coordinates the metric takes the block form $ds^2 = -N^2 dt^2 + q_{ab}(dx^a + N^a dt)(dx^b + N^b dt)$, with $N^a$ the three-dimensional shift vector and $q_{ab}$ the $3\times 3$ matrix representing the spatial metric. Spatial indices are lowered and raised with this 3-metric. The 3-metric $q_{ab}$ fully describes the intrinsic geometry of $\Sigma_t$. Additionally, there is another important bilinear form describing the extrinsic geometry of $\Sigma_t$: the extrinsic curvature $K_{ab}$. The extrinsic curvature describes how $\Sigma_t$ is embedded in the full spacetime $M$, i.e. how $\Sigma_t$ bends inside $M$. One can now derive the Hamiltonian theory. One may start with the covariant Einstein-Hilbert action, $S_{EH} = \frac{1}{16\pi G}\int_M d^4x \sqrt{-g}\, R$, and perform a 3+1 decomposition to select adapted coordinates $(t, x^a)$ and a notion of time.
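The 3+1 block form of the metric can be verified symbolically. The sketch below (all symbol names are illustrative) assembles the 4-metric from a lapse $N$, shift $N^a$ and generic spatial metric $q_{ab}$, and checks that it reproduces the line element $ds^2 = -N^2 dt^2 + q_{ab}(dx^a + N^a dt)(dx^b + N^b dt)$:

```python
import sympy as sp

# Symbolic check of the ADM block form of the 4-metric (signature -,+,+,+).
N = sp.Symbol('N')
Ns = sp.Matrix(sp.symbols('N1 N2 N3'))        # shift vector N^a
q = sp.Matrix(3, 3, sp.symbols('q11 q12 q13 q21 q22 q23 q31 q32 q33'))
q = (q + q.T) / 2                             # symmetrize the spatial metric q_ab

N_low = q * Ns                                # N_a = q_ab N^b
g = sp.zeros(4, 4)
g[0, 0] = -N**2 + (Ns.T * N_low)[0, 0]        # g_tt = -(N^2 - N_a N^a)
for a in range(3):
    g[0, a + 1] = g[a + 1, 0] = N_low[a]      # g_ta = N_a
    for b in range(3):
        g[a + 1, b + 1] = q[a, b]             # g_ab = q_ab

# The line element reproduces ds^2 = -N^2 dt^2 + q_ab (dx^a + N^a dt)(dx^b + N^b dt):
dt = sp.Symbol('dt')
dx = sp.Matrix(sp.symbols('dx1 dx2 dx3'))
dX = sp.Matrix([dt, *dx])
ds2_block = sp.expand((dX.T * g * dX)[0, 0])
shifted = dx + Ns * dt
ds2_adm = sp.expand(-N**2 * dt**2 + (shifted.T * q * shifted)[0, 0])
assert sp.simplify(ds2_block - ds2_adm) == 0
```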
One can then apply a Legendre transform to define canonical variables and write the action in Hamiltonian form. These are non-trivial steps but we will not go over the details here. The end result is that the induced three-dimensional Riemannian metric $q_{ab}(t,x)$ and the extrinsic curvature $K_{ab}(t,x)$ of $\Sigma_t$ provide the canonical variables of the theory. More precisely, the momentum density $p^{ab}(t,x)$ conjugate to the metric is $p^{ab} := \sqrt{q}\,(K^{ab} - q^{ab}K)$, and one has the canonical Poisson bracket relations $\{q_{ab}(x), p^{cd}(y)\} = \delta^{(c}_a\delta^{d)}_b\,\delta^3(x,y)$, with Einstein's equations taking Hamiltonian form. There are four constraints per point on $\Sigma_t$: three are grouped in a vector or spatial diffeomorphism constraint $C^D_a \approx 0$, $a = 1,2,3$, and the remaining one is the scalar or Hamiltonian constraint $C_H \approx 0$. Their functional form in terms of the canonical variables is of little relevance for our discussion, but we nevertheless write it here for completeness (in units where $16\pi G = 1$): $C^D_a = -2 D_b p^b{}_a$ and $C_H = \frac{1}{\sqrt{q}}\big(p_{ab}p^{ab} - \tfrac{1}{2}p^2\big) - \sqrt{q}\,R^{(3)}$, with $R^{(3)}$ the Ricci scalar of $q_{ab}$ and $D_a$ its compatible derivative. Notice only that the Hamiltonian constraint has the form of a typical Hamiltonian in mechanics: a kinetic term quadratic in the momenta $p^{ab}$ plus a (quadratic) potential function $R^{(3)}(q, q^2, Dq, D^2q)$ of the configuration variables $q_{ab}$. This potential is not simple though: it is the Ricci scalar of the three-dimensional intrinsic curvature. The total Hamiltonian prescribing evolution in the bulk spacetime turns out to be a sum of the constraints smeared with lapse and shift: $H_{Total} = \int_{\Sigma_t} d^3x\,(N C_H + N^a C^D_a)$ (10). General relativity is hence a totally constrained system, i.e. its total Hamiltonian vanishes on the constraint surface, $H_{Total} \approx 0$. This is a reflection of the general covariance of the theory. [The extrinsic curvature is the projected gradient of the normal $n^\mu$ to the hypersurfaces $\Sigma_t$: $K_{\mu\nu} = P^\rho{}_\mu P^\sigma{}_\nu \nabla_\rho n_\sigma$, where $P^\mu{}_\nu$ denotes the orthogonal projector to the hypersurfaces, $P^\mu{}_\nu := \delta^\mu_\nu + n^\mu n_\nu$. Extrinsic curvature is purely spatial, so one may use adapted coordinates or abstract spatial indices to denote it: $K_{ab}$.]
The constraints are first class, i.e. their Poisson brackets are linear combinations of the constraints. As we have said before, this implies they are canonical generators of the gauge symmetry transformations of the theory. As we have emphasized, the gauge symmetries of general relativity encompass spacetime diffeomorphisms. The infinitesimal generators of diffeomorphisms of spacetime are spacetime 4-vector fields $V^\mu$, so that, for example, an infinitesimal transformation of the spacetime metric $g_{\mu\nu} \to g_{\mu\nu} + \delta g_{\mu\nu}$ under a diffeomorphism generated by $V^\mu$ is given by its Lie derivative along $V^\mu$: $\delta g_{\mu\nu} = \mathcal{L}_V g_{\mu\nu}$. In the 3+1 decomposition of the Hamiltonian theory, one may split this generator into normal and spatial parts, $V^\mu = \xi n^\mu + \xi^\mu$, with the smeared constraints $C_H(\xi)$ and $C_D(\xi^a)$ generating the corresponding transformations of the canonical variables. The lapse and shift appearing in (10) are arbitrary, so the theory is independent of these background structures because we are free to choose them arbitrarily.
The information that we have a covariant theory is actually completely contained or codified in the Poisson bracket relations (11). These relations are a representation of the hypersurface deformation algebra. This algebra is a fundamental object, encoding not only the gauge symmetries of Einstein's theory but the structure of spacetime itself: the Poisson bracket relations (11) express the fact that dynamics takes place on space-like hypersurfaces embedded in a pseudo-Riemannian manifold (Hojman, Kuchař, Teitelboim). With explicit diffeomorphism invariance and covariance broken, it is then key that these relations be appropriately represented in the corresponding quantum theory. This is exactly what Wheeler and DeWitt first tried but, except for simplified or reduced models (like cosmological ones), failed to accomplish for full general relativity. At present, there is still no satisfactory construction following this path. One gets stuck with the procedure already at step (iii). Due mainly to the unwieldy nature of the so-called superspace of spatial metrics, it has proved extremely difficult to rigorously construct Hilbert spaces of functions of the metric, or any other representation for that matter, let alone constraint operators thereon. A fortunate turn of events came with the realization that if the superspace of metrics is so difficult to handle, then perhaps one is better off doing without it and choosing instead a different set of variables to describe the gravitational field. This is the root of loop quantization.

Vielbein-connection variables and loop quantization
The gravitational field may indeed be described by a different set of variables: a tetrad $e^\mu_I$ and a Lorentz connection $\omega^I{}_{\mu J}$. Let us take a moment to gain some physical (and mathematical) insight into these alternate variables.
The spacetime metric $g_{\mu\nu}$ acts as a 'potential' for curvature and contains all information about gravity, but there is a different and more fundamental potential coding the same information about the gravitational field. Gravity dictates which observers are 'free falling', and such observers define or carry an inertial reference frame. Thus the gravitational field can be viewed as the field that singles out which reference frames are inertial at each point, and further determines how these reference frames at different points are related. This suggests we can use these inertial reference frames as the basic variables to describe gravity.
Mathematically, these preferred reference frames are described by the cotetrad one-form field $e^I_\mu$. The cotetrad projects spacetime tangent vector components $V^\mu$ in arbitrary frames to their components $V^I$ in an inertial frame (so $I = 0, 1, 2, 3$). One can think of every inertial observer as having attached to him/herself an internal Minkowski space with Minkowski metric $\eta_{IJ}$. The cotetrad then defines a map (actually an isomorphism) from the tangent space at each point to this internal Minkowski space (Fig. 5).
The cotetrad contains all the information about geometry. The spacetime metric is derived from it as $g_{\mu\nu} = \eta_{IJ} e^I_\mu e^J_\nu$.
Figure 5. Fundamentally, gravity defines at each point in spacetime inertial reference frames described by a cotetrad field $e^I_\mu$: (a) in the standard picture, inner products are computed with the metric, $U \cdot V = g_{\mu\nu} U^\mu V^\nu$; (b) more fundamentally, the local geometry is Minkowski and $e^I_\mu$ projects components from arbitrary frames to inertial frames. Comparing the two pictures, one can deduce that the metric field is derived from the cotetrad: $g_{\mu\nu} = \eta_{IJ} e^I_\mu e^J_\nu$.
The components of the cotetrad field define a $4\times 4$ matrix. Its inverse $e^\mu_I$ is called the tetrad and defines a (non-coordinate) orthonormal basis on tangent spaces: $g_{\mu\nu} e^\mu_I e^\nu_J = \eta_{IJ}$. A free-falling observer defines or carries not one but an infinite family of inertial reference frames: an inertial observer at a given point in spacetime is free to rotate or boost his/her own frame as he/she pleases, independently of inertial observers at other points. So the cotetrad description of the gravitational field has an additional local 'internal' Lorentz gauge symmetry. Mathematically, a rotated or boosted cotetrad $\tilde e^I_\mu := \Lambda^I{}_J e^J_\mu$, for $\Lambda^I{}_J \in SO(1,3)$, determines the same spacetime metric because for a Lorentz matrix $\Lambda^T\eta\Lambda = \eta$: $\eta_{IJ}\tilde e^I_\mu \tilde e^J_\nu = \eta_{IJ}\Lambda^I{}_K\Lambda^J{}_L e^K_\mu e^L_\nu = \eta_{KL}e^K_\mu e^L_\nu = g_{\mu\nu}$. On the other hand, the way reference frames at different spacetime points are related to one another is mathematically described by curvature, or more accurately by a connection. In general relativity this interconnection among frames is also part of the information the gravitational field provides, so the connection is fully determined by the metric (and therefore also by the cotetrad). Mathematically, however, there are many ways to connect or relate frames or vectors on tangent spaces at different points which need not be determined by the frames themselves.
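The internal Lorentz gauge invariance of the metric is straightforward to check numerically; the random cotetrad and the boost rapidity below are illustrative choices of ours:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # internal Minkowski metric
rng = np.random.default_rng(3)

# A generic (invertible) cotetrad e^I_mu (rows: I, columns: mu) and its metric.
e = rng.normal(size=(4, 4))
g = e.T @ eta @ e                        # g_{mu nu} = eta_IJ e^I_mu e^J_nu

# A boost along x with rapidity 0.5: an internal Lorentz transformation.
ch, sh = np.cosh(0.5), np.sinh(0.5)
Lam = np.array([[ch, sh, 0, 0],
                [sh, ch, 0, 0],
                [0,  0, 1, 0],
                [0,  0, 0, 1]])
assert np.allclose(Lam.T @ eta @ Lam, eta)   # Lambda^T eta Lambda = eta

# The boosted cotetrad e~^I_mu = Lambda^I_J e^J_mu gives the SAME spacetime metric:
e_tilde = Lam @ e
assert np.allclose(e_tilde.T @ eta @ e_tilde, g)
```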
Let us recall that a general connection defines a notion of parallel transport: given a curve on spacetime from point x to point y, a connection gives a prescription for how to 'translate' vectors from the tangent space at x to the tangent space at y, so that one may compare vectors on different tangent spaces. Equivalently, a connection defines a way to take 'directional derivatives' of vector fields on a manifold: a covariant derivative operator $\nabla_\mu$ (Fig. 6). In general relativity the connection or covariant derivative prescribing parallel transport is the Levi-Civita connection, compatible with (and hence determined by) the metric: $\nabla_\rho g_{\mu\nu} = 0$. Operationally, the Levi-Civita connection or covariant derivative is given by the Christoffel 'symbols' $\Gamma^\nu_{\mu\rho}$, which provide a correction to the flat coordinate derivative:
$$\nabla_\mu V^\nu = \partial_\mu V^\nu + \Gamma^\nu_{\mu\rho} V^\rho .$$
This allows us to define e.g. directional derivatives $U^\mu \nabla_\mu V^\nu$. On a general (curved) manifold there is no preferred notion for comparing vectors at different points: these vectors belong to different tangent spaces, or to more general vector spaces attached at each point, and one needs a connection to define derivatives of such vector fields. The notion of a connection is actually more general. In the standard metric description of general relativity, one is only concerned with parallel transport and covariant derivatives of vectors $V^\mu$ on tangent spaces or their tensor products. But one may define these notions for vectors on more general vector spaces attached to each point of the spacetime manifold$^{14}$. In particular, we have said inertial observers carry an 'internal Minkowski space' defined by their reference frame or cotetrad $e^I_\mu$. To compare vectors $V^I$ on these different internal Minkowski spaces at each spacetime point or, equivalently, to compare or connect different frames $e^I_\mu$, one needs a notion of parallel transport and covariant derivative $D_\mu$ of internal vector fields $V^I$, i.e. a connection for the internal spaces. Operationally, this connection or covariant derivative is given by a connection potential $\omega^I{}_{\mu J}$ which 'knows how to act on internal indices' and provides a correction to the flat coordinate derivative:
$$D_\mu V^I = \partial_\mu V^I + \omega^I{}_{\mu J} V^J .$$
Physicists usually refer to $D_\mu$ or $\omega^I{}_{\mu J}$ simply as 'the connection'. Among the infinitely many possibilities for connections on the internal Minkowski space, Lorentz connections are defined as those 'compatible with the internal Minkowski metric', i.e. those satisfying $D_\mu \eta_{IJ} = 0$.
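To make the compatibility condition $\nabla_\rho g_{\mu\nu} = 0$ concrete, here is a small numerical check (an illustration, not from the text) on the familiar round 2-sphere, whose Levi-Civita connection has the well known nonvanishing Christoffel symbols $\Gamma^\theta_{\phi\phi} = -\sin\theta\cos\theta$ and $\Gamma^\phi_{\theta\phi} = \Gamma^\phi_{\phi\theta} = \cot\theta$:

```python
import math

def g(th):
    # round 2-sphere metric diag(1, sin^2 th) in coordinates (theta, phi)
    return [[1.0, 0.0], [0.0, math.sin(th) ** 2]]

def Gamma(th):
    # Christoffel symbols G[lam][mu][nu] = Gamma^lam_{mu nu}; indices 0=theta, 1=phi
    G = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]
    G[0][1][1] = -math.sin(th) * math.cos(th)              # Gamma^th_{ph ph}
    G[1][0][1] = G[1][1][0] = math.cos(th) / math.sin(th)  # Gamma^ph_{th ph}
    return G

def nabla_g(th, rho, mu, nu, h=1e-6):
    # covariant derivative of the metric:
    # nabla_rho g_{mu nu} = d_rho g_{mu nu} - Gamma^lam_{rho mu} g_{lam nu}
    #                                       - Gamma^lam_{rho nu} g_{mu lam}
    # the metric depends only on theta, so only rho = 0 gives a nonzero partial
    dg = (g(th + h)[mu][nu] - g(th - h)[mu][nu]) / (2 * h) if rho == 0 else 0.0
    G = Gamma(th)
    corr = sum(G[l][rho][mu] * g(th)[l][nu] + G[l][rho][nu] * g(th)[mu][l]
               for l in range(2))
    return dg - corr

# metric compatibility: all components of nabla g vanish (up to finite differences)
th0 = 0.8
assert all(abs(nabla_g(th0, r, m, n)) < 1e-8
           for r in range(2) for m in range(2) for n in range(2))
```

The partial derivative $\partial_\theta g_{\phi\phi} = 2\sin\theta\cos\theta$ is exactly cancelled by the Christoffel correction, which is the whole content of metric compatibility.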
Just like the Levi-Civita connection defines the Riemann curvature $R^\rho{}_{\sigma\mu\nu}$, the connection $D_\mu$ has an associated curvature $F^I{}_{\mu\nu J}$. As we have said, in general relativity the gravitational field $e^I_\mu$ itself determines the notion of parallel transport, so the connection $\omega^I{}_{\mu J}$ must be determined by, or be a function of, the cotetrad: $\omega(e)$. The connection determined by the cotetrad is called the spin connection; it is the one 'compatible with it', in other words it satisfies
$$D_\mu e^I_\nu = 0 , \qquad (12)$$
and it is necessarily Lorentzian. In a first order formulation of gravity, however, the Lorentz connection $\omega^I{}_{\mu J}$ is not assumed from the outset to be compatible with the cotetrad; this condition becomes a dynamical equation. Both fields $(e^I_\mu, \omega^I{}_{\mu J})$ then substitute the metric $g_{\mu\nu}$ as the basic dynamical variables describing gravity. After this detour into covariant tetrad-connection variables, we are now ready to describe the Hamiltonian formulation of gravity in terms of so-called Ashtekar-Barbero variables. These variables actually describe a larger phase space that extends or contains the ADM phase space described with metric variables $(q_{ab}, p^{ab})$. This alternate description is the starting point for a loop quantization.
In the standard Hamiltonian formulation, gravity is described by the time evolution of a Riemannian metric $q_{ab}$ on three-dimensional space $\Sigma$. For the three-dimensional (time dependent) geometry $(\Sigma, q_{ab})$, there is a description analogous to the first order formulation in terms of the cotetrad and Lorentz connection $(e^I_\mu, \omega^I{}_{\mu J})$ for the four-dimensional geometry $(M, g_{\mu\nu})$. In the Hamiltonian formulation, the analog of an inertial frame is a spatial reference frame described by a cotriad $e^i_a$, with $a = 1, 2, 3$ a spatial index and $i = 1, 2, 3$ an 'internal' index. In this case the internal space is good old three-dimensional Euclidean space with flat Euclidean metric $\delta_{ij} = \mathrm{diag}(1, 1, 1)$, and we have
$$q_{ab} = \delta_{ij}\, e^i_a e^j_b ,$$
with $e^a_i$ the inverse orthonormal triad. The internal gauge symmetries are SO(3) rotations at each point of $\Sigma$. A cotriad or triad can be directly obtained from a cotetrad or tetrad in a 3+1 decomposition of spacetime. Indeed, one can partially reduce the Lorentz gauge freedom of the cotetrad $e^I_\mu$ by restricting the frames so that they are 'compatible' with the 3+1 foliation. This basically means that the tetrad basis $e^\mu_I$ the spacetime frames define is such that the time-like basis vector $e^\mu_0$ is normal to $\Sigma_t$, while the remaining space-like basis vectors $e^\mu_i$, with $i = 1, 2, 3$, are necessarily tangential and hence can be used as a triad $e^a_i$. This condition (called the time gauge) automatically guarantees (13).
To complete the analogy with four dimensions one needs a connection variable. Just like the cotetrad determines a preferred connection (the four-dimensional spin connection $\omega^I{}_{\mu J}$ satisfying (12)), a cotriad or triad determines a compatible three-dimensional spin connection $\Gamma^i_a$ satisfying $D_a e^i_b = 0$.
While the pair $(e^i_a, \Gamma^i_a)$ fully describes the fixed geometry $(\Sigma_t, q_{ab})$ at a given instant of time, in the Hamiltonian formulation of gravity we know the intrinsic geometry of $\Sigma_t$, defined by the metric $q_{ab}$ or equivalently by the cotriad $e^i_a$, is not the complete story. To fully determine the gravitational field one also needs information about the extrinsic geometry of $\Sigma_t$, encapsulated in the extrinsic curvature $K_{ab}$. To describe a dynamical, time dependent three-dimensional geometry matching the spacetime of general relativity, one then needs a different connection, one which incorporates this information about the extrinsic geometry and combines it with that given by the intrinsic geometry.
In a 3+1 decomposition, one can derive such a connection variable as a combination of certain components of the covariant Lorentz connection $\omega^I{}_{\mu J}$. This connection, which we denote $A^i_a$, is called the Ashtekar-Barbero connection. Unfortunately, the relation between $A^i_a$ and $\omega^I{}_{\mu J}$ is not as simple geometrically as the relation between the cotriad $e^i_a$ and the cotetrad $e^I_\mu$. This does not mean, however, that $A^i_a$ lacks a geometrical interpretation within the 3+1 Hamiltonian formulation. In the part of the extended phase space which corresponds to the standard ADM formulation, the Ashtekar-Barbero connection is expressed as
$$A^i_a = \Gamma^i_a + \gamma K^i_a ,$$
where $\gamma$ is a free constant parameter of the theory called the Barbero-Immirzi parameter and $K^i_a = K_{ab} e^b_j \delta^{ij}$. The momentum canonically conjugate to $A^i_a$ is the densitized triad $E^a_i = \sqrt{\det q}\, e^a_i$.

The Hamiltonian equations of general relativity in terms of Ashtekar-Barbero variables are similar to (10), except that there is an additional Gauss constraint $C_{G\,i} \approx 0$ responsible for implementing the additional SO(3) gauge freedom of the triad-connection formulation:
$$\dot A^i_a = \{A^i_a, H_{\rm Total}\}, \qquad \dot E^a_i = \{E^a_i, H_{\rm Total}\}, \qquad C_{G\,i} \approx 0, \qquad C_a \approx 0, \qquad C \approx 0 ,$$
with the total Hamiltonian prescribing evolution in the bulk spacetime again a sum of the diffeomorphism and Hamiltonian constraints (now together with the Gauss constraint). Just for completeness again, we write the form of the diffeomorphism and Hamiltonian constraints in terms of Ashtekar-Barbero variables:
$$C_a = E^b_i F^i_{ab}, \qquad C = \frac{E^a_i E^b_j}{\sqrt{\det E}} \left( \epsilon^{ij}{}_k F^k_{ab} - 2(1+\gamma^2)\, K^i_{[a} K^j_{b]} \right) , \qquad (15)$$
where $F^i_{ab}$ is the curvature of $A^i_a$.

4.2.1. Loop quantization. Finally we sketch the Dirac (loop) quantization program for gravity:

(i) Choice of subalgebra of basic observables: Instead of choosing the canonical variables $(A^i_a, E^b_j)$ themselves and the Heisenberg-type algebra they generate as the basic functions on phase space to quantize, the hallmark of a loop quantization is to select a different subalgebra of basic observables: the algebra of holonomies and fluxes. As we have said, the connection $A^i_a$ defines a notion of parallel transport, a way to translate vectors $V^i$ along a path $\gamma^a(s)$ in space $\Sigma$. Mathematically, the parallel transported vector $V^i$ along the curve $\gamma^a(s)$ from point x to point y (Fig. 7) is given by the solution to the parallel transport equation
$$\frac{dV^i}{ds} + \dot\gamma^a A^j_a\, (\tau_j)^i{}_k\, V^k = 0 ,$$
with $\tau_j$ a basis of so(3) generators. The solution is $V^i(y) = (h_\gamma[A])^i{}_k\, V^k(x)$, where the holonomy $h_\gamma[A]$ is essentially the (path-ordered) 'exponential of the line integral of the connection $A^i_a$ along $\gamma$':
$$h_\gamma[A] = \mathcal{P} \exp \int_\gamma A^i_a\, \tau_i\, \dot\gamma^a\, ds .$$
It is a mathematical fact that if we know the holonomies of a fixed connection for all paths in space we can reconstruct the connection. So holonomies define a special family of functions $\{h_\gamma\}$ on phase space. This family is parameterized by paths $\gamma^a(s)$ on $\Sigma$ and contains all the geometric information about $A^i_a$. This family is chosen as a basic subset of observables to quantize. The other set of basic observables to quantize are the fluxes of the densitized triad $E^a_i$ across arbitrary two-dimensional surfaces S in space $\Sigma$:
$$F_{S,f} = \int_S f_i\, E^a_i\, n_a\, d^2\sigma ,$$
with $n_a$ the conormal to S. These surface integrals are the gravitational analogs of electric fluxes in electromagnetism. This family $\{F_{S,f}\}$ is parameterized by 2-surfaces S embedded in space $\Sigma$ and smearing functions $f_i$ on them, which are technically necessary to have well defined functions on phase space independent of coordinates and internal bases. This choice of basic observables may seem odd at first, but it is well justified by its geometric interpretation and by the requirement of background independence$^{16}$. The Poisson brackets of holonomies $h_\gamma$ and fluxes $F_{S,f}$ have a very neat geometrical description in terms of the paths $\gamma$ and surfaces S parameterizing them, and they form a closed subalgebra which furthermore separates points on phase space.

(ii) Construction of the quantum holonomy-flux algebra: The next step is to promote holonomies and fluxes to abstract operators,
$$h_\gamma \to \hat h_\gamma , \qquad F_{S,f} \to \hat F_{S,f} ,$$
and to construct an abstract quantum algebra from the Poisson bracket relations:
$$[\hat h_\gamma, \hat F_{S,f}] = i\hbar\, \widehat{\{h_\gamma, F_{S,f}\}} . \qquad (16)$$
This is again a nontrivial step, but a rigorous construction was provided by Ashtekar, Corichi and Zapata.

(iii) Loop representation: It turns out one can rigorously construct a representation of the (quantum) holonomy-flux algebra (16). The representation may be obtained using the GNS construction, and also by other equally robust techniques.
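The path-ordered exponential defining the holonomy can be approximated numerically as an ordered product of small-step matrices. The following sketch (illustrative, not from the text: the connection components along the path are made up, and it assumes the common su(2) basis $\tau_i = -\tfrac{i}{2}\sigma_i$) checks that the resulting holonomy indeed lands in SU(2), i.e. it is unitary with unit determinant:

```python
import math

# Pauli matrices; a common su(2) basis choice is tau_i = -i*sigma_i/2
SIGMA = [
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def holonomy(pullback, n=2000):
    """Path-ordered exponential h = P exp(int_0^1 A^i(s) tau_i ds), where
    pullback(s) returns the three components A^i_a dgamma^a/ds of the
    connection contracted with the tangent of the path.  Approximated by an
    ordered product of first-order steps 1 + ds*X(s)."""
    h = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    ds = 1.0 / n
    for k in range(n):
        a = pullback((k + 0.5) * ds)
        X = [[sum(-0.5j * a[i] * SIGMA[i][r][c] for i in range(3))
              for c in range(2)] for r in range(2)]
        step = [[(1.0 if r == c else 0.0) + ds * X[r][c] for c in range(2)]
                for r in range(2)]
        h = mat_mul(step, h)  # later steps multiply on the left: path ordering
    return h

# made-up connection components pulled back to a path gamma(s), s in [0, 1]
h = holonomy(lambda s: (0.3, 1.2 * s, -0.5))

# h should be (approximately) an SU(2) matrix: unit determinant and unitary
det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
assert abs(det - 1.0) < 1e-3
hd = [[h[j][i].conjugate() for j in range(2)] for i in range(2)]  # h^dagger
p = mat_mul(hd, h)
assert abs(p[0][0] - 1) < 1e-3 and abs(p[0][1]) < 1e-3
```

Because each step matrix is (to first order in $ds$) the exponential of an anti-hermitian traceless matrix, the ordered product converges to a group element as the number of steps grows.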
The kinematical Hilbert space $\mathcal{H}_{\rm kin}$ is the space of square integrable complex wave functions $\Psi(A)$ of (generalized) connections: $\mathcal{H}_{\rm kin} = L^2(\bar{\mathcal{A}}, d\mu_{AL})$.
In this Hilbert space, holonomies $\hat h_\gamma$ act by multiplication (nonlinear "creation operators"), and fluxes $\hat F_{S,f}$ act by derivation. The subtle part in the construction of the Hilbert space is the definition of a proper (spatial) diffeomorphism invariant measure $d\mu_{AL}$ (the analog of $dx$) to define integrals of functions on the space of connections $\bar{\mathcal{A}}$ which are invariant under changes of coordinates on $\Sigma$. One of the most important results of the theory is that this loop representation is essentially unique [13]: LOST-Fleischhack Theorem (Lewandowski, Okolow, Sahlmann, Thiemann; Fleischhack): "Any cyclic representation of the holonomy-flux algebra invariant under spatial diffeomorphisms is unitarily equivalent to the loop representation". The Hilbert space of the loop representation is non-separable; still, it is quite manageable. In practice, one may use cylindrical wave functions. These are special wave functions that depend on the connection $A^i_a$ only through its holonomies along a finite set of 'edges' (paths) composing a 'graph' $\Gamma = \{\gamma_1, \gamma_2, \ldots, \gamma_N\}$:
$$\Psi_\Gamma(A) = \psi\big(h_{\gamma_1}[A], \ldots, h_{\gamma_N}[A]\big) .$$
In fact, using a result from harmonic analysis on compact groups called the Peter-Weyl theorem, one can construct an orthonormal basis of spin networks (Baez). These are cylindrical functions labeled by graphs $\Gamma$ embedded in space $\Sigma$, with a 'coloring' of the edges by half-integer numbers [irreducible representations of SU(2)] and another 'coloring' on the vertices where the edges of the graph $\Gamma$ intersect (this coloring is an assignment of certain matrices called intertwiners). It turns out, furthermore, that these basis vectors may be interpreted as polymer-like excitations of geometry (Fig. 8). Finally, given the complexity of the Hamiltonian constraint (15), the implementation of the corresponding operator and the solution of the quantum constraint equation still pose many issues. There are some heroic efforts and, impressively, actual candidate operators have been constructed.
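For a flavor of how the Peter-Weyl theorem underlies the spin network basis, the following sketch (illustrative, not from the text) numerically verifies the orthonormality of the SU(2) characters $\chi_j(\theta) = \sin\!\big((2j+1)\theta/2\big)/\sin(\theta/2)$, one per half-integer spin $j$, with respect to the Haar measure reduced to conjugacy classes, $d\mu = \tfrac{1}{\pi}\sin^2(\theta/2)\, d\theta$ on $[0, 2\pi]$:

```python
import math

def chi(j, theta):
    """Character of the spin-j irreducible representation of SU(2)."""
    if abs(math.sin(theta / 2)) < 1e-12:
        return 2 * j + 1  # the limit at theta = 0 is the dimension 2j+1
    return math.sin((2 * j + 1) * theta / 2) / math.sin(theta / 2)

def inner(j1, j2, n=4000):
    """Inner product of class functions with the reduced Haar measure
    (1/pi) sin^2(theta/2) dtheta on [0, 2*pi], via the midpoint rule."""
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        th = (k + 0.5) * h
        total += math.sin(th / 2) ** 2 * chi(j1, th) * chi(j2, th)
    return total * h / math.pi

# Peter-Weyl: characters of distinct spins are orthonormal
spins = [0, 0.5, 1, 1.5]
for a in spins:
    for b in spins:
        target = 1.0 if a == b else 0.0
        assert abs(inner(a, b) - target) < 1e-6
```

It is exactly this completeness of the matrix elements (and characters) of the irreducible representations that lets one expand any cylindrical function of holonomies in the spin network basis.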
However, apart from the usual quantization ambiguities afflicting these candidate operators (which are many in this case), the most pressing problem is to guarantee that some such implementation actually results in a quantum theory free of anomalies which, furthermore, has the correct semiclassical limit.
Much work remains to be done on the canonical front of LQG!