Empty Bottle: The Revenge of Schrödinger’s Cat?

We discuss some measurement problems which, in spite of all the progress of the statistical interpretation, still seem not well understood in present-day quantum theory.


Introduction
Since the basic idea of E. Schrödinger [1,2] about the linear evolution of wave packets, quantum theory has shown impressive mathematical progress, but rather few fundamental changes. The pure states of a microsystem are always represented by vectors in Hilbert spaces, in which they always evolve linearly, except when the process is interrupted by a measurement. The results of a measurement are statistical and can be predicted with certainty only for some special states. The probabilities correspond to quadratic forms of the state vectors, and their statistical averages are ⟨ψ|A|ψ⟩, where A are certain self-adjoint operators. This form of the probabilities was predicted already by Schrödinger, who, however, was against their present-day interpretation.
A key argument was the stability problem, represented most naively in Fig. 1. In the first instants, or in fast repetitions, of any measurement, the apparatus readings (e.g., the indication of the needle) should not change, or must be almost the same. Otherwise, the measurement could not be performed, because of the instability of the results (i.e., the dancing of the needle on the apparatus scale). So, the initial micro-object state must suddenly be reduced to one of the states for which the measurement result is certain (and therefore stable). Almost all physicists accepted the argument...
But not everybody! While he himself guessed the quadratic form of the probabilities, Schrödinger was in strong disagreement with the sudden jumps ('reductions'). He exclaimed: "Diese verdammte Quantenspringerei!" ("this damned quantum jumping!"), as cited by Sudbery [5]. In fact, is there any fundamental barrier between the measuring devices and the rest of physical objects? Is there any fundamental difference between microscopic and macroscopic bodies? If not, then why should certain macroscopic objects cause some unpredictable jumps? Moreover, why could the quantum 'indeterminacy', originally restricted to micro-objects, not be shared by macroscopic systems? In his ample and fundamental paper of 1935 [6] (see also [2]), Schrödinger presents his short but famous tale about a half-alive and half-dead animal: "One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (it must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small that perhaps in the course of the hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and the dead cat (pardon the expression) mixed or smeared in equal parts." The fragment occupies less than 1/300 of an ample 22-page article. Yet, almost nobody remembers the profound ideas of E. Schrödinger, but almost everybody has heard about his humorous story about the cat in a superposed state of being dead and being alive. However, almost everybody thinks it is just a dark anecdote of the past. But is it?
• Hawking's gun. The situation strongly antagonized Stephen Hawking: "When I hear of Schrödinger's cat, I reach for my gun!" However, Schrödinger's cat will probably survive Hawking's gun. The point is that until now we have not been able to construct a truly consistent picture of quantum state reduction.

The decoherence?
One of the widely announced answers is that the mechanism of wave packet reductions, "these damned quantum jumps" during measurements, is in fact a decoherence effect of the dynamical evolution, always leading to an entropy increase [7]. In fact, some increase of disorder in our prediction of future states is rather obvious. The data about the present state of either a classical or a quantum particle system, even if good, are never infinitely exact. So even the little errors of our "initial knowledge" might cause an increasing blurring in our predictions of the future. The problem, however, is whether this loss of information is in any way responsible for the sudden state jumps (interpretable rather as an escape from the disorder?). By trying to understand the phenomenon in an arbitrarily complex but still micro-reversible theory, one inevitably faces the seldom remembered Loschmidt paradox. It states that while the prediction of the future might be increasingly blurred (entropy increase!), if only the theory is micro-reversible, i.e., the change of t into −t is irrelevant for the dynamical laws, the disorder (blurring) has the tendency to grow whether we try to predict the future or to reconstruct the past. So the 'entropy' (or at least the 'entropy of our personal information') is rather badly defined: it is smallest at the 'present moment', growing both into the future and into the past! This, however, is not limited to our subjective feeling, since in micro-reversible physical theories the system evolution toward the past is governed by a law identical to its evolution toward the future [8,9] (cf. Fig. 2).

Figure 2.
A naive image of the Loschmidt paradox: an extremely regular particle state shows an increasing disorder in the future, but also in the past.
The paradox makes senseless the persistent belief in growing disorder (entropy) in a large class of physical theories. It includes all known cases of classical (multi-particle) dynamics without friction. It is no different in quantum theories. No matter how many subsystems a quantum state describes, in all customarily considered cases of quantum dynamics the evolution 'toward the past', after a simple CPT conjugation, is identical to the 'motion toward the future'.
An independent argument in favor of the decoherence is frequently presented by considering a macroscopic object with two macro-states A and B (for instance A = cat alive, B = cat dead). Now, if the statistical measurements reduce any initial superposition into a random mixture of either A or B, this means decoherence; but it does not yet imply the inverse statement, that the decoherence reduces the wave packet. Indeed, a different measurement, with the eigenstates C = (A + B)/√2 and D = (A − B)/√2, might create the same mixture (decoherence) without 'reducing' the state to either A or B. The question then remains: why must an increment in the entropy 'reduce' the system state to either A or B (alive or dead), but not to, e.g., C or D (alive+dead or alive−dead), or to a continuous mixture of all qubit states representing the same state of complete chaos [10]?
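The ambiguity is easy to verify numerically: a 50/50 mixture of A and B and a 50/50 mixture of C and D are described by literally the same density matrix, so no entropy argument alone can prefer one basis over the other. A minimal sketch (the state labels are just the ones used above):

```python
import numpy as np

# Two-state macro-system: A = "alive", B = "dead" basis vectors.
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])
C = (A + B) / np.sqrt(2)   # C = (A + B)/sqrt(2)
D = (A - B) / np.sqrt(2)   # D = (A - B)/sqrt(2)

def mixture(states, probs):
    """Density matrix of a statistical mixture of pure states."""
    return sum(p * np.outer(s, s) for s, p in zip(states, probs))

rho_AB = mixture([A, B], [0.5, 0.5])   # 50/50 mixture of A and B
rho_CD = mixture([C, D], [0.5, 0.5])   # 50/50 mixture of C and D

# Both mixtures equal the same maximally mixed state I/2,
# so the decohered ensemble carries no memory of the basis.
```

The two density matrices coincide exactly, which is the whole point: decoherence fixes the mixture, not the alternative into which the state was 'reduced'.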
Hence, if the unitary evolution is the only element of the story, there is no miracle by which one of these time directions will introduce more disorder than the other (cf. also [8,9]). The situation can be different if the unitary evolution is not the only law, but is completed by some additional mechanism. Only then might it explain the growing decoherence in the future. Yet, the question still remains whether, apart from the growing disorder (entropy), it will indeed describe the sudden 'reduction jumps' of quantum states.
• The computer experiment of Gisin and Percival. An intriguing computer study of Gisin and Percival published in 1992 [11] shows a distinguished role of the energy eigenstates for the harmonic oscillator perturbed by a dissipative term consistent with the Lindblad formalism [12]. It turns out that the average number of photons emitted stabilizes asymptotically around the energy levels (see e.g. Fig. 10 in the Gisin and Percival paper [11]). The results awakened enormous interest (more than 600 citations, though, curiously, the paper is not even mentioned in the latest Gisin article [13]). The authors of [11] chose a Lindblad dissipative term which privileges the direction of time and is determined by the oscillator algebra. Would it represent the missing element of the theory?
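For readers who want a feel for such dissipative dynamics, here is a toy quantum-jump (Monte Carlo trajectory) unravelling of an oscillator damped by the Lindblad operator √Γ a. This is only an illustrative sketch of the formalism, not the driven model actually studied in [11], and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Truncated Fock space and the oscillator algebra.
dim = 12
a = np.diag(np.sqrt(np.arange(1, dim)), 1)    # annihilation operator
n_op = a.conj().T @ a                          # number operator a†a

G, dt, steps, ntraj = 1.0, 0.0025, 800, 300    # arbitrary sketch parameters
n0 = 5                                         # start in the Fock state |5>

avg_n = np.zeros(steps)                        # ensemble-averaged photon number
for _ in range(ntraj):
    psi = np.zeros(dim)
    psi[n0] = 1.0
    for k in range(steps):
        nbar = np.real(psi.conj() @ n_op @ psi)
        avg_n[k] += nbar / ntraj
        if rng.random() < G * dt * nbar:       # photon emitted: jump psi -> a psi
            psi = a @ psi
        else:                                  # no-jump, non-unitary drift
            psi = psi - 0.5 * G * dt * (n_op @ psi)
        psi /= np.linalg.norm(psi)

# The trajectory average reproduces the master-equation result
# <n>(t) = n0 * exp(-G t) up to statistical fluctuations.
```

The individual trajectories are random staircases of jumps, while their average follows the smooth Lindblad decay; it is the localization of single trajectories onto energy eigenstates, in the driven case of [11], that made the original study striking.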

The relativistic problems
The equally serious problems with the verdammte Quantenspringerei become obvious in the relativistic theories. As it seems, they must appear even in special relativity, if only the evolving (and reduced) states are defined on the infinite space-like hyperplanes. The simplest case is presented in Fig. 3. The more general symptoms of the trouble are reported e.g. as the paradox of two bottles [14], see Fig. 4. Suppose a wave packet of one particle is trapped in a closed bottle, which is then split carefully into two parts (without affecting coherence), tentatively with probabilities 1/2 and 1/2 of containing the particle. The bottles then travel to widely separated places of our galaxy, both still containing 50% of the initial packet. Now, the problem concerns the wave packet reduction on some extended space-like hyperplane Σ₀. Suppose an observer whose simultaneity hyperplane is Σ₀ checks the interior of bottle 1 and finds the particle. Hence, the probability of finding the particle in bottle 2 becomes 0. But not only on Σ₀. If now another observer, who moves differently and has a slightly distinct simultaneity hyperplane Σ₁, checks the interior of bottle 1 immediately after, then by continuity he must find the particle in bottle 1 as well. Hence, the bottle 2 was empty already on Σ₁, etc. By induction, the bottle 2 was already empty on the hyperplanes Σ₁, Σ₃, Σ₅, ..., and the bottle 1 certainly contained the particle on Σ₀, Σ₂, Σ₄, ... By an obvious limiting transition, one must conclude that the bottle 2 was empty and the particle was from the very beginning in the bottle 1; the wave packet reduction affected the whole past, from the very moment when the original bottle was split. About one year later I suddenly noticed that exactly the same problem had been considered much earlier by Y. Aharonov and D. Albert [15,16].
In 1992 J. Finkelstein criticized the very idea of an observer dealing with (and reducing) the quantum state defined on a space-like hyperplane in Minkowski space [18]. The measurements are always local, he argued, so the wave packet reduction can affect only the future causal cone of the measuring apparatus, never an entire space-like hyperplane! A polemic answer was published by Mosley in 1993 [19], with an opposite argument, following Hellwig and Kraus [20]. Any local observer can construct the wave packet of a micro-object only on the basis of some signals arriving from the past causal cone. If some experiment brings new information, it means that the state description must be corrected, implying the state reduction on the past cone. Both arguments have some logical merits. However, if the wave packet reduction affected only the future light cones (Finkelstein), then it could not explain the (benign) teleportation, cryptography, etc. of the entangled states. On the other hand, the quantum states limited to the past cones (Mosley) could not obey unitary evolution laws, which require more ample dynamical information (on the simultaneity hyperplanes). What we need are quantum evolution equations for states defined on families of Cauchy hypersurfaces.

Interaction free interaction?
In their inspired article [21,22], Elitzur and Vaidman consider a photon in a system of optical fibers, with the beam splitters and mirrors of a Mach-Zehnder interferometer, see Fig. 6. The photon wave function is divided into two coherent parts by the first beam splitter, then reflected by two mirrors toward the second splitter, where they unite again, recovering their original state of motion. So, if there is no obstacle, the photon recovers its original propagation momentum, falling into the detector D. However, if one of the branches is blocked, e.g. by a perfectly absorbing obstacle, then what occurs is the first state reduction. Either the obstacle detects (by absorbing) the photon, which therefore arrives neither at D nor at E; or it reduces the entire propagation scheme, cancelling the blocked trajectory. The second splitter will then receive the photon which propagated only along the unblocked path (as if the first splitter had acted as a mirror). Its state is then decomposed by the second splitter into a superposition of two parts, tentatively reaching D or E. The choice of one of them will be the second state reduction. The peculiar effect is that in 1/2 of the cases the first reduction eliminates completely one of the trajectories, while the second one makes a balanced choice between D and E, so in 1/4 of the cases the photon will appear in E, thus revealing the existence of an obstacle which it never approached. Elitzur and Vaidman chose a still more challenging version of the experiment, assuming that the obstacle is a supersensitive bomb, which would explode immediately upon any contact with the photon. Hence, if the detector E clicks, it would mean that the bomb could be detected (without exploding) by a photon which could have passed many kilometers away! All this can seem natural if the photon is just an instantaneous pulse. However, what precisely is the single photon? Must it always propagate as an infinitely short pulse?
Or perhaps it can also form a very long, narrow wave, divided by the first splitter into a pair of still weaker but equally long components which laboriously reconstruct their initial form at the second splitter, falling then (gradually) into the detector D? If so, the problem is at which moment precisely the detector responds to the single photon. At the beginning or at the end of the process? In fact, whatever we try, it seems completely impossible to imagine the single photon wave in the presence of an obstacle. Can it know from the very beginning that one of the trajectories is blocked? If not, then what can it do after arriving at the obstacle? Does it announce: "Oh, excuse me, I was wrong, I have to return!"? But then it is too late, since it would have no time to reappear in the free fiber!
Worse: if one of the (EV) trajectories is blocked by the bomb, then after what time does the bomb explode? And if it does not, then after what time is the (large but incomplete) photon trajectory which tried to cross the bomb mysteriously annihilated, contributing (again mysteriously) to the other weak component, finally creating the (entire) one-photon state which arrives at the second beam splitter, but now with a probability of activating the second detector E? We can only conclude that the story is incomplete: indeed, it is impossible to form any mental picture of the obligatory linear propagation if it means the extinction of the whole propagation arm, detecting an obstacle which exists precisely in the place where the photon, finally, never was! Here it is worth recalling the point made by Sudbery [23]: "It is often stated that however puzzling some of its features may be, quantum mechanics does constitute a well defined algorithm for calculating physical quantities. Unless some form of continuous projection postulate is included as a part of the algorithm, this is not true." All this, except some simple cases, does not yet entirely explain the photon behavior in the topological nets of optical fibers. The tensor products of Hilbert spaces were introduced to assure the linear propagation of the entangled states between the multiple detectors. A success of the picture was the prediction of the benign teleportation acts (which do not permit the teleported messages to be read immediately) [24]. It started the ample work on quantum cryptography, with hopes of programming efficient quantum computers [13,25,26]. Despite all interpretational questions, the (EV) idea [21] also opened the unsuspected perspectives of "seeing in the dark" [27], with the prospect of a new age in quantum information permitting the use of powerful quantum computing [28]. On the other hand, a sequence of studies of the imperfect cases of (EV) bombs was undertaken [29-32].
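The (EV) probabilities themselves are a few lines of amplitude bookkeeping. In the sketch below the 50/50 splitter convention (1, i; i, 1)/√2 is an arbitrary but standard choice; with it the 'bright' output port feeding detector D is port 1, and the common mirror phases are ignored:

```python
import numpy as np

# Amplitude bookkeeping for the Elitzur-Vaidman Mach-Zehnder setup (a sketch;
# the beam-splitter convention and port labels are assumptions).
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter

psi_in = np.array([1.0 + 0j, 0.0])               # photon enters port 0

# No obstacle: the two arms recombine coherently; everything reaches D.
out_free = BS @ BS @ psi_in
p_D_free = abs(out_free[1]) ** 2                 # -> 1 (detector D always clicks)

# Bomb blocks arm 1: its amplitude is absorbed (the explosion probability),
# and only the surviving arm-0 amplitude reaches the second splitter.
after_first = BS @ psi_in
p_explode = abs(after_first[1]) ** 2             # -> 1/2
out_blocked = BS @ np.array([after_first[0], 0.0])
p_D = abs(out_blocked[1]) ** 2                   # -> 1/4
p_E = abs(out_blocked[0]) ** 2                   # -> 1/4: the "dark" detection
```

The numbers reproduce the quoted statistics: half of the runs end in an explosion, a quarter in D, and a quarter in E, the E-clicks being the interaction-free detections.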
Apart from all these achievements, some ambitious trends in Quantum Field Theories (QFT) have already started to talk about the "Theory of Everything", as if the end of fundamental research were not too far away. Yet, some signs indicate that the satisfaction might be premature. Besides all the known relativistic problems, some simple evaluations indicate that the experiments with linearly propagating entangled states can affect the past [29]. A similar paradoxical conclusion on quantum steering into the past seems to reemerge from the 2012 study of the Vienna-Innsbruck group [33]. In spite of the 'benign' teleportation mechanism, one can wonder whether our quantum theories do not cross some common sense limits neglected also in the past. Indeed, the persistent image of the entangled states navigating in the tensor product spaces has some similarity to the thousands of years of ancient astronomy. Just remember the Ptolemaic idea of describing all trajectories by the holy shape of the circle. The present day "tensor products" use another holy shape, the 2D surface of the 3D Euclidean sphere (the quantum qubit, see Fig. 7). The Ptolemaic astronomy was, in fact, not wrong, even if complicated. However, what about our theory of microscopic phenomena?
In order to recover some independent view, let us postpone for a moment the locality problems, returning to the traditional quantum paradoxes still waiting for some better understanding.

Half full, half empty
Our story will concern a quantum system in a superposed energy state, which will be reduced, though not when the experimentalist decides, but when the system itself decides, by emitting a photon (compare with the 'time of arrival' [34-36]). As a simplified model, we consider a bottle containing an atom in a state of equitable superposition of the two lowest energy levels, the ground state φ₀ and an excited state φ₁.
Since our theory until today obeys a persistent intuition of the quantum (pure) state, which must always navigate linearly until suffering some 'reduction', let me try to adopt it in the most naive way and examine the consequences.
In some distant past, the experimentalists examining the spectral lines imagined an atom always in one of the energy eigenstates. Today the picture has changed. The existence of the superposed (but pure) energy states is (or seems?) unavoidable if one takes seriously the quantum mechanical formalism. Now, if the atom is in the excited state, we shall say that "the bottle is full", but if in the ground state, "the bottle is empty". The bottle is just to assure that the atom is left in peace, isolated from external perturbations. It should be wide enough to neglect the influence of its surface on the atom's behavior, but it should also be able to detect the events of radiation. So, at the top of each bottle, at some safe distance, there is a sensitive screen, prepared to detect the photon, should the atom radiate. If it does, the top of the cell turns black (it is burned!). For purely illustrative reasons, the bottles in Fig. 8 are painted hexagonal, resembling a bee hive (bees today are also in trouble!). By observing the detectors which have turned black, we can see how many photons have already 'incubated'. Now, in almost all studies of the atom radiation one can find the description of the process starting from the excited state φ₁, but not from the superposed one. This allows the suggestive representation of the excited states as some narrow superpositions of slightly different energy eigenstates, forming an unstable composition, with the average lifetime τ inversely proportional to the (small) energy width δE, in agreement with the time-energy uncertainty (even though all this awakens a lot of unfinished discussions [34-36]). Anyhow, by reading the literature you can always find considerations in which the beginning of the decay process is an excited state, with a slightly diffused spectral line, which seems to confirm the validity of the idea. However, what about the study of a decay starting from a superposition of quite distant levels?
Perhaps the difference is superfluous, but it may be worth examining.
To fix attention, let us thus assume that our initial state φ is an equitable superposition φ = a₀φ₀ + a₁φ₁, where |a₀|² = |a₁|² = 1/2 (bottle half full, half empty). From a credible phenomenology we know the behavior of an atom in its ground state φ₀. If unperturbed, it just remains in φ₀ forever, φ₀(t) = e^{−itE₀}φ₀. We also know something about the behavior of the excited state φ₁. In a purely quantum mechanical approximation, this state is stationary as well:

φ₁(t) = e^{−itE₁}φ₁.    (1)

In reality, though, the stationary evolution might be interrupted by an unpredictable photon emission, with a sudden jump to the ground state φ₀. The number of bottles N(t) with undecayed atoms thus goes down; assuming an almost exponential decay, it behaves approximately as N(t) = N₀e^{−Γt}. This is sometimes represented in the literature by assigning to the unstable state a complex energy E = E₁ − iΓ/2, so that (1) can be replaced by φ₁(t) = e^{−itE₁ − Γt/2}φ₁. Yet, this is just a verbal allegory, pretending to attribute to the single atoms the properties of their ensemble. Suppose, however, that the transitions φ₁ → φ₀ are incomparably faster than the average lifetime of the whole ensemble in the excited state φ₁. Then we are back to the old idea of the excited states in their linear navigation interrupted by sudden jumps (indeed, state reductions) caused by the photon detectors. This does not yet affect essentially our model of the excited state radiation. However, what happens with the atoms in the initial superposed energy state φ = (φ₀ + φ₁)/√2 (the bottle neither full nor empty)? At first sight, it may seem that there is hardly any problem here: the evolution of the atom must simply obey the standard law granted by the superposition principle, except if (bam!) it suddenly radiates, emitting a photon of energy E₁ − E₀ and settling into the ground state φ₀. Yet, this plausible picture contains a certain disquieting gap.
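At the ensemble level, at least, the allegory is consistent: |e^{−itE₁ − Γt/2}|² = e^{−Γt}, which is exactly what one gets by drawing an independent random exponential lifetime for each of N₀ atoms. A quick Monte Carlo sketch (Γ and N₀ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble reading of the complex-energy allegory: the survival probability
# |e^{-i E1 t - G t/2}|^2 = e^{-G t} is reproduced by giving each atom an
# independent exponentially distributed lifetime.
G, N0 = 1.0, 200_000
lifetimes = rng.exponential(1.0 / G, size=N0)

def undecayed(t):
    """N(t): number of bottles with still-undecayed atoms at time t."""
    return int(np.sum(lifetimes > t))

# undecayed(t) / N0 tracks e^{-G t} up to statistical fluctuations,
# i.e. N(t) = N0 * e^{-G t} for the ensemble, while nothing in the sample
# says anything about any single atom's state before its jump.
```

The last comment is the author's point: the exponential law is a property of the statistical sample, and attributing the damping factor to the individual state vector is only a convenient manner of speaking.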
We are accustomed to think that the quantum system, in the presence of a measuring device, performs first of all a unitary evolution process in a certain Hilbert space (an extremely continuous picture?), until interrupted by the sudden reduction (an extremely discontinuous picture?).
On the other hand, in our initial state the average energy is (E₀ + E₁)/2, i.e., only (E₁ − E₀)/2 above the ground level, and nobody has observed photons emitted with (incomplete) energies smaller than ∆E = E₁ − E₀. Does it mean that before radiating, some atoms must first perform a spontaneous (introspective?) state reduction (see Fig. 9), deciding whether they are in φ₀ or φ₁? The question then is whether they must ask for some energy credit from their detector. If so, is the detector's kindness due to its very existence, even if the photon was not yet emitted [37,38], or is it a kind of shadowing [39], or a generous grant from the polarized vacuum? (Though notice the criticism of Penrose about the very concept of the 'polarized vacuum' [40].) Furthermore, if the spontaneous reduction failed to bring some more energy, locating the semi-excited atom in the ground state φ₀, then it will stay there without ever emitting anything. Even if the total energy balance is not violated, the single atom behavior still hides some mystery.

The experiment of vanishing hope?
It may therefore be curious to imagine a population of N atoms in the initial state φ = (φ₀ + φ₁)/√2, each closed in its own bottle, in the form of a little, mesoscopic cell. The top surface of each cell is simultaneously a detector, sensitive to photons of the particular energy ℏω = E₁ − E₀. By counting the (increasing) number of black cells, we know how many atoms have already radiated, see Fig. 9. If all atoms are initially in an identical superposed state φ, then for t → +∞ all must end up in the ground state φ₀, though for different reasons: 1/2 of them, since they have radiated and settled down in φ₀; the remaining 1/2, just because the long waiting was a kind of yes-no measurement (i.e., asking whether the atom was able to radiate at all), the negative answer reducing the φ₁ component to nonexistence, even though no photon was emitted. As a matter of fact, what caused the state collapse in this last case was not any active intervention, but just the vanishing hope (that the atom could have been in the excited state φ₁). Thus, e.g., supposing that the average lifetime of the excited state φ₁ was 1 ns, but the atom in the initial state φ did not radiate for 1 year, then we can be practically certain that it will never radiate: the atom state can no longer correspond to φ, but is indistinguishable from φ₀. Even if the global energy balance is not affected, the situation seems a bit strange. While the atoms which have radiated already cause some problem, the ones which did not contain a true puzzle! Their superposed energy state vanished, giving place to φ₀. The only external factor was our vanishing hope (take it as a rhetorical figure if you dislike it!), or perhaps shadowing [39], instability, or some kind of non-linearity? Anyhow, the bottle was half full, half empty, nothing escaped, yet the bottle is empty!
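The bookkeeping of the two halves can be sketched as a simulation, under the loudly speculative assumption that each atom is first reduced to φ₀ or φ₁ with probability 1/2 and only the φ₁ atoms later radiate (the rate Γ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the bottle ensemble under the naive "spontaneous reduction" reading:
# half the atoms behave as phi1 and emit after a random exponential lifetime,
# the other half behave as phi0 and never emit (decay time = infinity).
G, N = 1.0, 100_000
excited = rng.random(N) < 0.5                  # the half that reduces to phi1
decay_time = np.where(excited, rng.exponential(1.0 / G, N), np.inf)

def black_fraction(t):
    """Fraction of cells whose detector has turned black by time t."""
    return float(np.mean(decay_time <= t))

# At long times only ~1/2 of the cells are black, although ALL atoms
# end up in phi0 -- the unburned half by "vanishing hope" alone.
```

The curve saturates near 1/2 rather than 1, which is exactly the puzzle of the text: the never-blackened half reaches φ₀ with no photon, no interaction, and no recorded event.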
A way to avoid the trouble would be to answer that we have post-selected our ensemble. The principle of "do not post-select" (equivalently, "do not retrospect") is reasonable if some micro-objects were submitted to a measurement. Their initial states (in general) were reduced, and it makes no sense to look for their past. The principle is not so clear if the micro-objects escaped the detection. In their fascinating paper, the Swiss group considers the non-orthogonal transformations of (pure into pure) states for an ensemble whose particles escaped absorption [37,38]. The (EV) story of the interaction free experiment too contains an element of retrospection. The same holds for the famous "delayed choice measurements" of J.A. Wheeler [2]; similarly for all the paradoxes implying the reduction of the past states. So must we restrict our theory only to strictly pragmatic rules, like ensembles and only ensembles, correlations and only correlations, resigning from considering the pure states of single particles? Limited by "do not retrospect" and other "do not think" principles? However, the naive ideas were the true source of our sophisticated theories! On the other hand, what if our idea of "spontaneous reductions" was overdone? Note that all difficulties of our strange story vanish if we simply assume that no coherent superposition of two bound atomic states can indeed be created. Then a variant of the Bohr-Sommerfeld formula would turn out sufficient to define the atomic energy levels (back to the beginning of the 20th century?). The quantum state of undecided energy could only be a mixture, and our over-sophisticated problems would not exist. Is it better?

Einstein boxes
About a year ago, I had an occasion to discuss with D. Albert in Tepoztlan, and I learned about something unexpected. "Oh!", said Albert after hearing my arguments on the measurement problems, "then you are a follower of the Einstein boxes!". This turned out to be an old subject, unknown to the participants of many discussions. After participating in the EPR paper [41] (truly written by Podolsky), a deeply frustrated Einstein exchanged some angry opinions with Louis de Broglie, not officially published but reconstructed from some materials, see Norsen [42,43]. The discussion was about a single particle state closed in two boxes (as in the paradox of two bottles [14], though without any reference to the relativistic aspects; the boxes were simply traveling to two distinct cities, Fig. 10). Let me just quote the categoric opinion of de Broglie: "Suppose a particle is enclosed in a box B with impermeable walls. The associated wave ψ is confined in the box and cannot leave it. (...) Let us suppose that by some process or other, for example, by inserting a partition into the box, the box B is divided into two separate parts B₁ and B₂ and that B₁ and B₂ are then transported to two very distant places, for example to Paris and Tokyo. (...) The particle which has not yet appeared thus remains potentially in the assembly of the two boxes and its wave function ψ consists of two parts, one of which, ψ₁, is located in B₁ and the other, ψ₂, in B₂. (...) According to the usual interpretation (...) the particle (...) would be immediately localized in box B₁ in the case of a positive result in Paris. This does not seem to me to be acceptable. If we show that the particle is in box B₁, it implies that it was already there prior to localization", see Fig. 10. I must admit that were it not for the discussion with Albert, I would still know nothing about the Einstein boxes (the same as many other colleagues discussing the conceptual problems of teleportation, quantum computing, etc.).
The subject proves that the quantum paradoxes were quite well understood already by L. de Broglie, perhaps even before the 'official' doctrine of the quantum state reduction. Now, however, if the argument of de Broglie was right and if the linear superposition cannot exist for two perfectly isolated quantum states, then what about the ground and the first excited state of any hydrogen or hydrogen-like atom? True, they can never be completely isolated, but what if our present day theories are not exactly true, so that in some circumstances (e.g. in the absence of Rabi rotations) ψ₀ and ψ₁ tend to behave like the Einstein boxes, with the linear combination nonexistent, so that in spite of the illusion of creating such a state we would always end up with the mixture instead of the superposition? In this case, the evolution problem of the linear combination (ψ₀ + ψ₁)/√2 would disappear, and the number of emitted photons after any time τ would correspond exactly to the decaying ψ₁ component, i.e., N(τ) = N₀(1 − e^{−Γτ}). This form of N(τ) would mean a decay a bit faster than the slightly delayed decay which would occur if the atoms had to wait for the spontaneous reductions of their φ = (φ₀ + φ₁)/√2 states to φ₁. Can the two be experimentally distinguished? If so, some information about the universal linearity (and the fate of the Schrödinger cats) could be obtained.
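How the two scenarios would differ can be sketched numerically; the reduction rate λ below is a purely hypothetical parameter, introduced only to make the 'delayed' curve concrete (the text itself does not fix any such rate):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical photon-count curves for the atoms that end up radiating.
# Assumption (illustrative only): if a "spontaneous reduction" to phi1 exists,
# it happens at some finite rate lam before the usual decay at rate G.
G, lam, N0 = 1.0, 5.0, 200_000

# Mixture picture: the radiating atoms are in phi1 from t = 0.
t_mix = rng.exponential(1.0 / G, N0)
# Reduction picture: a reduction step (rate lam) precedes the decay (rate G).
t_red = rng.exponential(1.0 / lam, N0) + rng.exponential(1.0 / G, N0)

tau = 1.0
n_mix = int(np.sum(t_mix <= tau))   # ~ N0 * (1 - e^{-G tau})
n_red = int(np.sum(t_red <= tau))   # systematically smaller: delayed decay
```

With any finite λ the reduction curve lags behind the pure mixture curve N₀(1 − e^{−Γτ}), so a sufficiently precise photon count at early times could in principle distinguish the two stories.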