
Entanglement decoherence in a gravitational well according to the event formalism

T C Ralph and J Pienaar

Published 19 August 2014 © 2014 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Focus on Gravitational Quantum Physics. Citation: T C Ralph and J Pienaar 2014 New J. Phys. 16 085008. DOI: 10.1088/1367-2630/16/8/085008


Abstract

The event formalism is a nonlinear extension of quantum field theory designed to be compatible with the closed time-like curves that appear in general relativity. Whilst reducing to standard quantum field theory in flat space-time, the formalism leads to testably different predictions for entanglement distribution in curved space. In this paper we introduce a more general version of the formalism and use it to analyse the practicality of an experimental test of its predictions in the Earthʼs gravitational well.


Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

A complete theory of quantum gravity has remained elusive for almost a century since the discovery of quantum mechanics. While quantum mechanics is well tested and confirmed on Earthʼs surface, where gravity is approximately uniform, there is little experimental data on quantum systems across significant gravitational potentials. This is true even for relatively accessible regimes such as the gravitational potential of Earth, for which there exist well-established theoretical models based on semi-classical techniques. Given the lack of data, and the lack of consensus on a fundamental theory of quantum gravity, there is still potential for new experimental discoveries to be made in these regimes.

In proposing such experiments, it is important to consider alternatives to the usual semi-classical approach [1]. On one hand, such alternative theories challenge the status quo and encourage experimental tests to deepen our understanding of the existing paradigm. Less appreciated, however, is the role of such alternatives in hypothesis testing: they allow us to design effective experiments. In particular, a negative result for the standard hypothesis might provide positive support to an alternative theory, instead of being written off as anomalous or due to experimental error. The role of alternative theoretical models in driving and guiding experimental progress therefore should not be underestimated.

At the interface between quantum mechanics and gravity at the meso- or macroscopic scale, most models predict a decoherence-like effect on quantum entanglement and quantum superpositions. However, the precise mechanism of decoherence and its relation to gravity tends to differ widely between approaches (see e.g. [2] for a more complete discussion). The more conservative approaches focus on weak gravitational fields in a semi-classical setting [3–7]; in addition, decoherence due to centre-of-mass coupling to internal degrees of freedom in the presence of time dilation has also recently been considered [8]. On the other hand, more radical objective state-reduction models call for a break-down of quantum mechanics [9–11]. These latter models are related to a famous thought experiment of Penrose, in which it was argued that a massive object placed in a superposition should quickly decohere in the position basis due to the inherent uncertainty induced in the space-time metric. In contrast to the above approaches, the mechanism considered in the present work is based on a completely different thought experiment due to Deutsch: the self-consistent dynamics of quantum systems near closed time-like curves [12].

Deutschʼs thought experiment is perhaps less well-known than Penroseʼs. It considers exotic space-times in which gravity creates closed time-like curves and hence permits time-travel into the past. Deutsch argued that the usual paradoxes associated with such solutions of general relativity can be resolved by quantum mechanics. Deutsch does not attempt to quantize gravity, but considers quantum systems localized to semi-classical trajectories in a classical background space-time. Deutsch argues that a system scattering from a closed time-like curve in space-time exhibits globally nonlinear and non-unitary dynamics. The event formalism extrapolates Deutschʼs model to massless fields propagating in a globally hyperbolic space-time background, in which case it predicts a de-correlation of entanglement due to gravitational curvature [13]. Unlike Penrose and other models that also treat space-time classically and posit a nonlinear dynamical equation, the event formalism has a number of novel features: it predicts decoherence only for entangled systems and not single systems in a superposition; the effect is in principle reversible by further gravitational interactions (hence it is better called 'de-correlation' than decoherence); and it may exhibit information processing power greater than that of standard quantum mechanics [14].

Nonlinear modifications of quantum mechanics, whether due to gravity or otherwise, tend to be susceptible to pathologies and in particular to faster-than-light signalling, which is widely considered to be unphysical, particularly in a relativistic setting like quantum gravity. While the theoretical soundness of the Schrödinger–Newton equation remains a topic of debate [3–5], recent work by Kent [15] indicates that it is possible to formulate a sub-class of nonlinear theories that are manifestly free of such pathologies. So far, few physically motivated models have been constructed using this idea, but Deutschʼs model is one example that can be cast in this form [16], as is the modelʼs extension to quantum optics in gravitational fields that we consider here. The present model is therefore theoretically interesting, as well as being experimentally testable.

In this paper we calculate the expected decoherence effect from distributing time-energy entanglement from a ground station to a detector in orbit. The calculation is based mainly on the event formalism introduced by Ralph, Milburn and Downes in [13], but the formalism is generalized to more realistically account for the expected experimental situation. We begin by introducing the event formalism in a more general way. For simplicity we restrict ourselves to $1+1$ conformal space-times that admit foliation into space-like hyper-surfaces with respect to some global time parameter t. We use units in which $c=\hbar =1$.

2. Event formalism

Deutsch considers a situation in which a particle (qubit) interacts with a future incarnation of itself via a closed time-like curve and shows that this situation can be solved consistently [12]. The event formalism makes minimalist modifications to quantum optics on a curved background such that it reproduces the predictions of Deutsch in appropriate limits for space-times containing closed time-like curves formed via wormhole-type metrics [17]. In Deutschʼs model, operators defined at later times along the particleʼs trajectory commute with operators defined at earlier times; this allows the future version of a system to interact with itself in the past. Equivalently, we can think of the time-travelling particle as being represented by a pair of particles, one labelled 'younger' and the other 'older'. The younger particle disappears at time $t={{t}_{F}}$ and the older particle appears at time $t={{t}_{P}}$ (where ${{t}_{P}}\lt {{t}_{F}}$), and the initial state of the older particle is required to be equal to the final state of the younger particle. We can define a parameter τ that is monotonically increasing along the semi-classical trajectory of the single time-travelling particle and we can associate two Hilbert spaces to the same particle: one for its younger self $\tau \lt {{t}_{F}}$ and the other for its older self $\tau \gt {{t}_{P}}$. At times ${{t}_{P}}\lt t\lt {{t}_{F}}$, for which the parameter τ is two-valued, the Hilbert space of the particle is doubled, allowing the past and future versions to interact. The result is a non-linear map from the state just before tP to the state just after tF .
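Deutschʼs self-consistency condition is not written out above, so the following minimal numerical sketch (our own illustration, not reproduced from [12]) may help fix ideas. It solves the Deutsch fixed-point condition $\rho_{\rm CTC}={\rm Tr}_{\rm CR}[U(\rho_{\rm in}\otimes \rho_{\rm CTC})U^{\dagger}]$ by iteration for a single qubit scattering off its older self through a CNOT gate; the emerging state ${\rm Tr}_{\rm CTC}[U(\rho_{\rm in}\otimes \rho_{\rm CTC})U^{\dagger}]$ depends nonlinearly on $\rho_{\rm in}$, which is the origin of the nonlinear map referred to in the text.

```python
# Minimal sketch of Deutsch's self-consistency condition for a qubit interacting
# with its time-travelled copy via a unitary U (illustration only, not from the paper).
#   rho_ctc = Tr_CR[ U (rho_in x rho_ctc) U^dag ]          (fixed point)
#   rho_out = Tr_CTC[ U (rho_in x rho_ctc) U^dag ]          (emerging CR state)
import numpy as np

def deutsch_map(rho_in, U, iters=200):
    """Solve the Deutsch fixed point by iteration; return (rho_out, rho_ctc)."""
    d = rho_in.shape[0]
    rho_ctc = np.eye(d, dtype=complex) / d          # start from the maximally mixed state
    for _ in range(iters):
        joint = U @ np.kron(rho_in, rho_ctc) @ U.conj().T
        # partial trace over the chronology-respecting (first) factor -> new CTC state
        rho_ctc = joint.reshape(d, d, d, d).trace(axis1=0, axis2=2)
    rho_out = joint.reshape(d, d, d, d).trace(axis1=1, axis2=3)   # trace out the CTC factor
    return rho_out, rho_ctc

# Example: a qubit prepared in |+> interacts with its older self via a CNOT (CR = control).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
rho_out, rho_ctc = deutsch_map(plus, CNOT)
print(np.round(rho_out, 3))   # the emerging state is fully decohered in the computational basis
```

In this example the superposition $|+\rangle$ emerges as the maximally mixed state, a simple instance of the non-unitary behaviour described above.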

The generalization of the model to fields follows similar reasoning: we replace the point particle by a wavepacket, centred on a well-defined semi-classical trajectory. The parameter $\tau (t,{{t}_{d}})$ is a monotonically increasing function of the global time t, and is defined for all times up to the detection of the propagating mode at time td . Specifically, τ is the time elapsed from t until td , as measured incrementally by a set of local observers, all at rest with respect to the choice of co-ordinates $(x,t)$ and stationed along the semi-classical trajectory of the wavepacket. Physical quantities will not depend directly on τ but rather on the relative mismatch in this parameter induced between two different modes of the field by propagation along different paths in space-time. Using this definition of τ, we require that operators acting on the field at sufficiently different values of τ should commute with each other. For this purpose we introduce Ω, the Fourier complement of τ, and modify the standard commutation relations as follows. Given an appropriate choice of coordinates the standard quantum optical mode annihilation operator can be written

Equation (1)

The event formalism generalizes this to the event operator

Equation (2)

The operators ${{\hat{a}}_{k,\Omega }}$ behave as standard boson operators with the commutator

Equation (3)

and the property that they annihilate the vacuum: ${{\hat{a}}_{k,\Omega }}|0\rangle =0$. The same-time event commutator is defined

Equation (4)

where the normalization term is necessary to avoid double counting the mode overlap. Equation (4) has the following properties:

Equation (5)

and

Equation (6)

Equation (5) guarantees that when $\Delta -\Delta ^{\prime} =0$ all commutators reduce to those predicted by the mode operators. Equation (6) guarantees that when $\Delta -\Delta ^{\prime} \ne 0$ equation (4) is still a well behaved commutator. Δ parametrizes the difference between the globally defined detection time td and a locally defined time $\tau (t)$:

Equation (7)

As noted earlier, the parameter $\tau (t,{{t}_{d}})$ records the propagation time between the detection time, td , and t, as incrementally measured by a set of local observers along the light path of this particular mode, i.e.

Equation (8)

where ds is the propagation time across an incremental local frame. We require that these local frames are all at rest with respect to the chosen frame of reference, i.e. with respect to the choice of co-ordinates $(x,t)$. This co-ordinate dependence of Δ is necessary to ensure that all physical predictions are reference frame independent (see end of this section).

For a sufficiently large space-time curvature as measured by Δ, the operators commute at different times along the trajectory. If the system traverses a closed time-like curve, this ensures that the Deutsch model is recovered [14, 18]. Conversely, for an inertial detection frame in flat space, all the local observers along the mode paths are in the same inertial frame (for example, the detection frame), so from equation (8), $\tau ={{t}_{d}}-t$. Hence for this situation we have $\Delta =t$, so $\Delta -\Delta ^{\prime} =0$ in all same-time commutators and we recover the standard theory. For curved space in general (not necessarily containing closed time-like curves), $\Delta \ne t$ and hence for modes that follow different paths we can have $\Delta -\Delta ^{\prime} \ne 0$, potentially leading to non-linear effects.
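As a concrete illustration of how Δ departs from t in a curved background, the sketch below evaluates the difference $\Delta - t$ for a radially outgoing light ray in the weak-field Schwarzschild geometry used in section 3. It is a minimal numerical reading of the verbal definition in equation (8), not a reproduction of the paperʼs (omitted) expressions: local static observers at radius r are assumed to measure proper time $ds=\sqrt{1-2M/r}\,dt$, and the radial null ray obeys $dr/dt=1-2M/r$.

```python
# Sketch (our reading of equation (8), not the paper's expressions) of the event
# parameter for a radially outgoing light ray in the weak-field Schwarzschild metric:
#   coordinate flight time   t_d - t = integral dr / (1 - 2M/r)
#   locally accumulated time tau     = integral dr / sqrt(1 - 2M/r)
#   Delta = t_d - tau,  so  Delta - t = (t_d - t) - tau
import numpy as np
from scipy.integrate import quad

M_EARTH = 4.4e-3        # earth's mass in geometric units (metres), quoted in section 3
R_EARTH = 6.38e6        # metres

def delta_minus_t(r_emit, r_det, M=M_EARTH):
    """Delta - t for a radial null ray emitted at r_emit and detected at r_det."""
    coord_time, _ = quad(lambda r: 1.0 / (1.0 - 2.0 * M / r), r_emit, r_det)
    tau, _ = quad(lambda r: 1.0 / np.sqrt(1.0 - 2.0 * M / r), r_emit, r_det)
    return coord_time - tau

# ground-to-satellite leg (500 km) versus the same leg in flat space
print(delta_minus_t(R_EARTH, R_EARTH + 500e3))          # non-zero in curved space
print(delta_minus_t(R_EARTH, R_EARTH + 500e3, M=0.0))   # exactly zero in flat space
```

Setting M = 0 recovers $\Delta =t$ exactly, the flat-space case noted above; with the earthʼs mass switched on, two modes following different radial paths pick up slightly different values of Δ.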

These definitions are sufficient to write down a simple recipe for calculating expectation values in the Heisenberg picture with the event formalism. First, write the desired expectation value in terms of a Hermitian function of mode operators representing the final measurement, $M({{\hat{a}}_{K}},\hat{a}_{K}^{\dagger },...)$, acting on an initial state formed via a unitary transformation of the global ground state $|\phi \rangle =U({{\hat{a}}_{K^{\prime} }},\hat{a}_{K^{\prime} }^{\dagger },...)|0\rangle $. The distinction between K and $K^{\prime} $ here represents the possibility that the measurement and preparation modes differ. We obtain

Equation (9)

where $M^{\prime} ({{\hat{a}}_{K}},\hat{a}_{K}^{\dagger },{{\hat{a}}_{K^{\prime} }},\hat{a}_{K^{\prime} }^{\dagger },...)={{U}^{\dagger }}MU$ is the Heisenberg picture measurement operator and the subscript ti indicates that all mode operators are evaluated at the same initial time (ti ). The equivalent event expectation value is obtained by simply replacing mode operators with event operators (and mode commutators with event commutators) in equation (9) such that

Equation (10)

where $\bar{M}^{\prime} =M^{\prime} ({{\bar{a}}_{K}},\bar{a}_{K}^{\dagger },{{\bar{a}}_{K^{\prime} }},\bar{a}_{K^{\prime} }^{\dagger },...)$. The definitions given in equations (2)–(4) are then sufficient to calculate the expectation value. Notice that in flat space the fact that $\Delta -\Delta ^{\prime} =0$ means that all expectation values will be the same as their mode operator equivalents.

Notice that expectation values only depend on the values of same-time commutators. In the standard same-time mode commutator (rhs of equation (5)) Lorentz invariance is ensured because a change of reference frame leads to $|K(k){{|}^{2}}{{e}^{ik(x-{{x}^{\prime }})}}\to |K(k^{\prime} ){{|}^{2}}{{e}^{ik^{\prime} (x-{{x}^{\prime }})}}$ in the new frame, provided a suitable transformation of the dummy variable $k\to k^{\prime} $ is made (see footnote). As a result integrals in the commutator remain invariant under the change of reference frame. Similarly, we also have $|K(\Omega ){{|}^{2}}{{e}^{i\Omega (\Delta -{{\Delta }^{\prime }})}}\to |K(\Omega ^{\prime} ){{|}^{2}}{{e}^{i\Omega ^{\prime} (\Delta -{{\Delta }^{\prime }})}}$ under a change of frame. This in turn ensures that the same-time event commutator, equation (4), and hence all expectation values in the event formalism, are reference frame independent.

3. Experimental proposal

We consider a generalized version of the scenario analysed in [13] in which time-energy entanglement is produced by a down-converter and distributed along two paths that experience different curvatures (see figure 1). We will first consider a general scenario in which the detectors and sources can be placed at arbitrary heights within the gravitational field of the earth. We will then consider a specific, realistic scenario in which the source and one of the detectors are on the surface of the earth whilst the other detector is in low earth orbit.


Figure 1. Schematic of generic correlation experiment. The source populates a vacuum mode with photon pairs of orthogonal polarizations that propagate towards a massive body. The pairs are split up and reflected away from the body to two different photon counting detectors using a polarizing beamsplitter (pbs) and a mirror. The photo-currents from the photon counters are sent to a correlator (C) to count coincidences.


The gravitational field of the earth is modelled via the Schwarzschild metric in polar coordinates [1]:

Equation (11)

An infinitesimal proper distance in space-time as measured by a local observer is denoted ds. The proper distance between two space-time points is an invariant quantity, which all observers can agree on, regardless of the coordinate system they use. Here dt is an infinitesimal time interval as measured by an observer far from the earth, $r={\rm circumference}/2\pi $ is the reduced circumference, and θ and ϕ are spherical coordinates that remain the same for the observer on earth and the far-away observer. We are using units where G, the universal gravitational constant, and c, the speed of light, are both set to unity. M is the mass of the earth expressed in geometric units (metres).
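For orientation, the conversion to geometric units (and the value of M quoted later in this section) can be checked in a couple of lines; the values of G, c and the earthʼs mass used here are standard and are our own inputs.

```python
# Quick numerical illustration of the geometric units used in equation (11):
# with G = c = 1, a mass in kilograms becomes a length M = G * M_kg / c^2.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_kg = 5.972e24        # mass of the earth, kg

M = G * M_kg / c**2
print(M)                                   # ~4.4e-3 m, the value quoted in the text

# gravitational time-dilation factor for a static observer at the earth's surface
r_e = 6.38e6                               # reduced circumference of the earth, m
print((1 - 2 * M / r_e) ** 0.5)            # ~1 - 7e-10: the effect being probed is tiny
```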

In particular we consider the production of time-energy entanglement from vacuum inputs via a parametric unitary. Using the recipe outlined above and the properties of parametric amplification [19] we obtain the following event operators

Equation (12)

where

Equation (13)

and

Equation (14)

and

Equation (15)

with j = 1, 2.

The function G(k) is the mode function of the detector, whilst H(k) is the mode function of the source. In the general recipe of equations (9)–(10) these correspond to K and $K^{\prime} $ respectively. For simplicity we will consider the case of weak parametric amplification for which $\cosh (\chi )\approx 1$ and $\sinh (\chi )\approx \chi $. Under this condition, and assuming unit transmission and detection efficiency, the rate of coincidence detection according to the event formalism is given by

Equation (16)

where

Equation (17)

Notice that if ${{\Delta }_{t}}=0$ the Ω integral quotient goes to 1. The expression for C then reduces to its standard quantum optical prediction. Here we are assuming that ${{t}_{d2}}\gt {{t}_{d1}}$ and hence that the detectors are space-like separated. We will consider the case of time-like separated detectors in section 4.
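As a small sanity check on the weak parametric amplification limit used above (this is standard quantum optics, not specific to the event formalism), the pair-production probability of a two-mode squeezed vacuum reduces to $|\chi {{|}^{2}}$ when $\cosh (\chi )\approx 1$ and $\sinh (\chi )\approx \chi $:

```python
# Check of the weak-pumping approximation: for a two-mode squeezed vacuum the
# single-pair probability is tanh^2(chi)/cosh^2(chi), which tends to chi^2 as chi -> 0.
import numpy as np

chi = 0.05
print(np.cosh(chi), np.sinh(chi))              # ~1.00125, ~0.05002
p_pair = np.tanh(chi) ** 2 / np.cosh(chi) ** 2
print(p_pair, chi ** 2)                        # agree to better than 1% at this pump strength
```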

Assuming radial propagation of narrow (with respect to earth radius) beams gives [13]

Equation (18)

describing the phase shifts acquired by the modes through propagation. Here xm , xp , xd1, xd2, td1 and td2 are defined in figure 1. The phase shifts acquired by the event degree of freedom are obtained by integrating the time coordinates along a series of stationary shell frames connecting the source to the detectors [13]

Equation (19)

where we have simplified the results by assuming $r\gg 2M$ for all radii of interest. Also ti is an arbitrary initial time, ${{x}_{i1}}=-{{t}_{i}}+2{{x}_{m}}+4M\ln ({{x}_{m}})+{{t}_{d1}}-{{x}_{d1}}-2M\ln ({{x}_{d1}})$ and ${{x}_{i2}}=-{{t}_{i}}+2{{x}_{p}}+4M\ln ({{x}_{p}})+{{t}_{d2}}-{{x}_{d2}}-2M\ln ({{x}_{d2}})$. This leads to

Equation (20)

We will assume that the detector has a much sharper intrinsic temporal response than the source. Hong–Ou–Mandel-type interference measurements indicate that the intrinsic resolution of silicon APDs is $\leqslant 100$ fs [20]. Under such conditions we can approximate the detector mode function as a constant, $G(k)=1/\sqrt{2\pi }$.

To estimate the size of the effect we now consider the specific scenario in which the source, mirror, PBS, correlator and detector 1 in figure 1 are all approximately at height re , whilst detector 2 is at height ${{r}_{e}}+h$. This corresponds to having the source and detector 1 on the ground, whilst detector 2 is on a satellite. A classical channel links the second detector to the correlator on the ground. Equation (20) describes the magnitude of Δt . Substituting ${{x}_{d1}}\approx {{x}_{m}}\approx {{x}_{p}}$ (and maximizing the modal functions by choosing ${{x}_{i1}}={{x}_{i2}}$) we have

Equation (21)

We assume a Gaussian form for the function $H(\Omega )$,

Equation (22)

In figure 2 we plot normalized coincidences as a function of the offset time delay between the detectors (where the offset has been chosen such that zero is the maximum) for the second detector at an altitude of 500 km, and for various source coherence lengths. Also shown are the plots expected from standard quantum mechanics (obtained by setting ${{\Delta }_{t}}=0$). It is seen that as the source coherence lengths become narrower the coincidence counts are suppressed compared to the standard predictions.


Figure 2. Ratio of coincidences to singles as a function of offset time delay between the detectors in the standard theory (purple) and the event theory (blue). The offset is chosen such that the maxima lie at 0. Three different coherence lengths are plotted for a fixed height of 500 km.


It is easier to see what is going on if we assume that the coincidence number is obtained by integrating over the pulse length, i.e. the area under the curves in figure 2 (this is also a likely scenario for the experiment). We obtain

Equation (23)

and hence we get

Equation (24)

The coherence length of current sources suggested for space-based experiments is around ${{t}_{c}}=30$ ps [21] and hence we set the standard deviation in units of length to ${{d}_{t}}={{t}_{c}}\times c=9\times {{10}^{-3}}$ m. Using equation (21), the mass of the earth in units of length, $M=(G/{{c}^{2}}){{M}_{{\rm kg}}}=4.4\times {{10}^{-3}}$ m, and the radius of the earth, ${{r}_{e}}=6.38\times {{10}^{6}}$ m, we find this implies significant decorrelation when $h\gt 10\;000$ km (see figure 3), which is not very practical. In order to get an effect at the height of, say, the International Space Station we need a source with a coherence length $\leqslant 1$ ps (see figure 4).
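As a rough numerical check on these claims, the sketch below compares an estimate of ${{\Delta }_{t}}$ with the source coherence length dt . The exact expression for ${{\Delta }_{t}}$ is equation (21) (not reproduced in this extract); purely for illustration we assume the logarithmic scaling ${{\Delta }_{t}}\approx 2M\ln (1+h/{{r}_{e}})$ suggested by the terms in equation (19). This scaling is our assumption, not the paperʼs stated result, and should be read only as an order-of-magnitude guide.

```python
# Order-of-magnitude comparison of the (assumed) path mismatch Delta_t with the
# source coherence length d_t = c * t_c.  Decorrelation becomes significant once
# Delta_t is comparable to d_t.  The form of delta_t below is an assumption, NOT
# the paper's equation (21).
import numpy as np

M = 4.4e-3                       # mass of the earth in metres (geometric units)
r_e = 6.38e6                     # radius of the earth, m
c = 3.0e8                        # m/s

def delta_t(h):
    return 2 * M * np.log(1 + h / r_e)   # assumed logarithmic scaling with height h

for t_c, h in [(30e-12, 1.0e7),      # 30 ps source, detector at 10 000 km
               (30e-12, 4.0e5),      # 30 ps source, ISS altitude (~400 km)
               (1e-12, 4.0e5)]:      # 1 ps source, ISS altitude
    d_t = c * t_c
    print(f"h = {h:8.0f} m, d_t = {d_t:.1e} m, Delta_t = {delta_t(h):.1e} m, "
          f"ratio = {delta_t(h) / d_t:.2f}")
```

With this assumed form the ratio ${{\Delta }_{t}}/{{d}_{t}}$ reaches order unity near $h\approx 10\;000$ km for a 30 ps source and near ISS altitude for a 1 ps source, which is at least qualitatively consistent with the behaviour described above.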


Figure 3. Ratio of coincidences to singles as a function of height when the source coherence length is 30 ps. Standard quantum mechanics would predict a ratio of 1 for all heights.


Figure 4. Ratio of coincidences to singles as a function of height when the source coherence length is 1 ps. Standard quantum mechanics would predict a ratio of 1 for all heights.


The coincidence rates in the figures are normalized against the singles rate $|{{\chi }_{j}}{{|}^{2}}$. In the presence of transmission loss it might be better to normalize against the product of the singles rates, thus removing the efficiency, but then $|{{\chi }_{1}}{{|}^{2}}$ and $|{{\chi }_{2}}{{|}^{2}}$ need to be determined independently. In an actual experiment the satellite will be in motion; however, the effect of detector 2ʼs motion can probably be ignored because the rates are dominated by the source mode function, so a Doppler shift at the detector will not significantly affect the result.

4. The causal relationship of detectors

In standard quantum mechanics proper and improper mixtures look operationally identical to observers with no information about the way they were created. However, in nonlinear extensions of quantum mechanics proper and improper mixtures may become distinguishable. This leads to the so-called preparation problem [16]: when should a particular preparation technique be considered to lead to a proper mixture, and when should it be considered to lead to an improper one? The significance of this question is that bad choices can lead to theories which allow instantaneous signalling to occur or other pathologies.

Two different solutions in the literature, which do not lead to signalling, are due to Bennett et al [22] and Kent [15]. Bennett et al essentially assign an improper mixture to all preparation procedures involving the collapse of a quantum state. This includes not only the collapse of all entangled states but also situations in which states are produced via macroscopic settings chosen in accordance with statistics given by a quantum random number generator. In this scenario the only proper mixtures are those produced via macroscopic settings that are determined shot to shot by a deterministic program.

In contrast, Kent assigns a proper mixture to all preparations in which the prepared quantum state lies in the forward light-cone of the preparation outcome. As a result the only improper mixtures in Kentʼs scheme are those involving entangled states for which the measurement that collapses the state occurs in a region of space-time which is space-like separated from the region in which the non-linear evolution takes place.

Applying the event formalism as so far described, regardless of the space-time relationship of the two detectors, corresponds, in the appropriate limits, to the Bennett et al solution. Although simple, this solution is not altogether satisfactory [16]. We are thus motivated to make an adjustment to the way the Δs are calculated, which then corresponds to the Kent solution.

Consider the geometry of figure 1. In the previous section we assumed that ${{t}_{d2}}\gt {{t}_{d1}}$ and hence that the detectors are space-like separated. We now relax that condition but require that

Equation (25)

where

Equation (26)

In words, the end point of the evolution for beam 1, $t_{d1}^{\prime }$, is either taken to be its detection time, or the point at which beam 1 enters the forward light cone of the detection point of beam 2—whichever comes first. With this assumption the behaviour of the event formalism corresponds, in the appropriate limits, with the Kent solution, and transitions smoothly between those limits. The experimental proposal would be unaffected if the entanglement is distributed directly to the satellite and the ground station. However, if the beam to the satellite was delayed on the ground sufficiently long such that it fell within the forward light cone of the ground detector, then, using equation (25), we would predict that the decoherence effect would vanish and the coincidence rates would return to the standard quantum mechanical prediction. This effect could provide a straightforward way to confirm that any decoherence observed in the experiment is due to the physical model of event operators, as opposed to any alternative models or sources of decoherence. In particular, such an effect is certainly not predicted if the decoherence is of a purely semi-classical origin, or if the decoherence were caused by an ordinary coupling to some environmental degrees of freedom, as might be introduced by imperfections in the experimental setup. If a sharp change is seen in the coincidence rates as one adjusts the interval between the detection events, this would provide a 'smoking gun' confirmation of the model. On the other hand, if one observes an anomalous decoherence rate, unexplained by other effects but unaffected by the causal relationship of the detectors, this might support the solution of Bennett et al.
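The rule of equations (25) and (26) is easy to state operationally. The toy routine below is a flat-space, 1+1-dimensional illustration with an assumed straight-line trajectory for beam 1; it is our own sketch of the stated rule, not part of the proposal itself. It returns the effective endpoint $t_{d1}^{\prime }$: the earlier of beam 1ʼs detection time and the moment it first enters the forward light cone of detection event 2.

```python
# Flat-space sketch of the endpoint rule described in the text (c = 1, 1+1 dimensions):
# beam 1's evolution ends either at its detection time t_d1 or when it first satisfies
# t - t_d2 >= |x1(t) - x_d2|, i.e. enters the forward light cone of detection event 2.
def effective_endpoint(t_d1, t_d2, x_d2, x1_of_t, t_start, dt=1e-3):
    """Return t'_d1 = min(t_d1, first time beam 1 is inside the light cone of event 2)."""
    t = t_start
    while t < t_d1:
        if t - t_d2 >= abs(x1_of_t(t) - x_d2):   # inside the forward light cone of event 2
            return t
        t += dt
    return t_d1

# Example: beam 1 travels towards a ground detector at x = 0; detector 2 clicks earlier at x = 5.
x1 = lambda t: 10.0 - t            # assumed straight-line trajectory at speed c = 1
print(effective_endpoint(t_d1=10.0, t_d2=2.0, x_d2=5.0, x1_of_t=x1, t_start=0.0))
```

In this example the beam enters the light cone at t = 3.5, well before its detection at t = 10, so under equation (25) the nonlinear evolution would terminate there and the standard prediction would be recovered, as described above.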

5. Conclusion

The event operator model is a novel alternative to the standard semi-classical theory of quantum mechanics in curved space-time. It is distinct from most other alternatives because it is based on Deutschʼs quantum gravity thought experiment on closed time-like curves, as opposed to Penroseʼs better known thought experiment of a mass in superposition, the latter being the basis for most other popular nonlinear models of decoherence due to gravity. As a consequence, the event operator model makes novel physical predictions in a regime quite different from other models: specifically the distribution of optical entanglement through regions of different curvature, where a general relativistic description is essential. In contrast, Penrose-type models predict differences when massive objects are put into superposition in Newtonian potentials. The very different regimes of the models leave open the possibility that they are both limits of some more general model.

Finally, the outcome of such an experiment would have implications for future theoretical work on the topic. Given that non-pathological nonlinear theories can be formulated, for example by employing the method of Kent [15], such theories remain an interesting candidate for modelling quantum-gravitational effects. However, such efforts are necessarily contingent on the results of any experiments performed in this new regime, as is our willingness to extrapolate the standard formulation of quantum mechanics into regimes where it might not belong.

Acknowledgments

We thank Rupert Ursin and Thomas Scheidl for useful discussions.

Footnotes

  • For example, for the boost v, the appropriate change of variable is $k^{\prime} =\sqrt{1-{{v}^{2}}}k$.
