Redundant Information from Thermal Illumination: Quantum Darwinism in Scattered Photons

We study quantum Darwinism, the redundant recording of information about the preferred states of a decohering system by its environment, for an object illuminated by a blackbody. We calculate the quantum mutual information between the object and its photon environment for blackbodies that cover an arbitrary section of the sky. In particular, we demonstrate that more extended sources have a reduced ability to create redundant information about the system, in agreement with previous evidence that initial mixedness of an environment slows---but does not stop---the production of records. We also show that the qualitative results are robust for more general initial states of the system.


Introduction
The theory of decoherence [1,2,3] is supported by striking experimental evidence [4,5,6] and helps explain the emergence of the classical realm in a purely quantum universe. Classicality is marked by several characteristics that, at first sight, appear not to be native to quantum mechanics. The progress over the past three decades has been due to the realization that these classical aspects of our Universe can arise dynamically, and are sensitive to the details of the systems and interactions being studied.
Decoherence can explain why effectively classical pointer states (e.g. Gaussian wavepackets) are preferred for certain systems, in the sense that the system's pointer state is unaffected by its interaction with the environment, so that any other initial state will quickly become approximately diagonal in the associated pointer basis. It also explains why (and under what circumstances) interference between components in a system-environment entangled state can be ignored, the environment can be traced out, and the system state can be regarded as having "reduced" to an approximate mixture of pointer states.
Nevertheless, there are still deep unanswered questions about the quantum-classical transition. In particular, decoherence alone does not explain two aspects of classical mechanics which we take for granted: the states of classical systems are robust and objective. By "robust", we mean that observers may discover an initially unknown state of the classical system without disturbing it. By "objective", we mean that multiple observers may independently find out the state of the same classical system, that they will all agree on the answer, and that their measurements will leave the system in the preexisting (objective) state.
In general, quantum states are neither robust nor objective even after decoherence has eliminated obviously quantum superpositions. When an observer measures a system in something other than its pointer basis, he dramatically affects the state by repreparing the system in an eigenstate of the observable he has measured. Therefore, when multiple observers each measure a system in different bases, they will each get different, incompatible results, and the last measurement will leave the system in a state that has little to do with the states revealed by its predecessors. What, then, explains the objective and robust nature of classical macroscopic objects?
Quantum Darwinism provides the answer to this question [7,8]. It recognizes that real observers do not typically interact directly with the system they measure. Instead, the system is immersed in and correlated with a (decohering) environment. The observer then interacts with a fraction of that environment to find out its state, thereby becoming correlated with the system and measuring it indirectly. For instance, when we "measure" the position of a chair by looking at it, our eyes do not directly interact with the chair. The chair's state is not affected by whether or not we open our eyes. By opening our eyes, we merely allow them (and hence, our neurons) to become correlated with some of the photons scattered by the chair (and hence, its position).
The environment in the Universe we inhabit acts as an information channel through which the observer finds out about the system [9,8]. But this information is filtered, as only the observables that are recorded in many copies in the environment can be found out from the intercepted fragment. This means that observers cannot choose any arbitrary basis in which to measure; they are restricted in the type of information that can be acquired about the system by the nature of the system-environment interaction. Under the condition of effective decoherence, this information can only describe the pointer states of the system, not superpositions thereof [10,9]. Indeed, the no-cloning theorem [11,12] implies that arbitrary quantum states of the system will not be able to proliferate in this manner, as only certain preferred states (that turn out to be pointer states) can be imprinted onto many fragments of the environment [13]. In fact, observables complementary to the pointer states effectively become inaccessible after decoherence has set in since they can only be recovered through global measurements on the whole of the environment. So based only on the assumption that typical observers learn about the system through the environment (i.e. independent of any details about the size of the observers or the way they interact with the environment), we can conclude that (1) observers do not disturb the system ("robustness") and (2) all observers can learn only about the pointer basis and, consequently, will agree ("objectivity").
This description, which is related [14,15] to the formalism of quantum trajectories [16], can only hold when multiple observers can actually determine the state of the system by sampling just part of the environment. That is, information about the system must be recorded redundantly in the environment. This will not always be true. In some cases of effective decoherence, the environment simply does not make multiple copies of the information (e.g. collisional decoherence from a single high-energy environmental particle). In other cases, strong self-interactions scramble correlations and prevent information about the system from being extracted from any accessible fraction of the environment (e.g. collisional decoherence from air molecules).
The capacity of quantum Darwinism to explain the quantum-classical transition then rests on whether decoherence in everyday settings actually induces sufficient redundancy such that the classical approximations of robustness and objectivity are justified. This has been investigated for a spin-1/2 particle monitored by a pure [17] and mixed [18,19] bath of spins and a harmonic oscillator monitored by a pure bath of oscillators [20,21]. We recently showed for the first time that a physically realistic setting, an object illuminated by a point-source blackbody, does in fact lead to enormous redundancies [22]. In this work, we generalize that analysis to include partially-angularly-mixed illumination and arbitrary initial object wavefunctions. We confirm evidence in earlier studies [18,19] that initial partial mixedness of the environment hampers, but does not eliminate, its ability to redundantly record the state of the system. We also show that quantum Darwinism in our model is robust for general initial states of the system.

Quantum Darwinism in Photon Collisional Decoherence
When an object in a mesoscopic superposition is exposed to radiation, scattering photons will quickly reduce its pure, nonlocal state to a mixture of localized alternatives via collisional decoherence [23]. (See also [24,25,26,27] for refinements and corrections.) The fantastic rate of collisional decoherence has been confirmed experimentally [28,29].
Observers typically access a small part of the environment (in this case, the photons that enter one's eye), so we will estimate how much information about the object is available in a subset of the environmental photons. For simplicity, we assume our environment consists of a large but fixed number N of identical photons: E = ⊗_{i=1}^{N} E_i, where E_i is the Hilbert space of a single photon in a box of volume V. We then define F_f = ⊗_{i=1}^{fN} E_i to be the fragment corresponding to some fraction f of the environment, composed of fN photons. Since each photon has the same initial conditions and interactions, the choice of photons with which to construct the fragment is unimportant. To get our final results, we will take V and N to infinity while holding the physical photon density N/V constant.
The primary quantity investigated will be the quantum mutual information

I_{S:F} = H_S + H_F − H_{SF}

between the system S and the fragment F, where H denotes the von Neumann entropy. From this we will calculate the redundancy R_δ, which is the number of distinct fragments in the environment that supply, up to an information deficit δ, the classical information about the state of the system. More precisely, R_δ = 1/f_δ, where f_δ is the smallest fragment such that I_{S:F_{f_δ}} = (1 − δ)H_S. (Here H_S is the maximum entropy of S. Only very large fragments, f ≥ 0.5, will be able to have perfect classical information, I_{S:F_f} = H_S, about the object [17].) At any given time, the redundancy is the measure of objectivity; it counts the number of observers who could each independently determine the state of the system (up to a small residual uncertainty δ) by interacting with disjoint fragments of the environment.

Following Joos and Zeh [23], we take our system to be a dielectric sphere of radius a and relative permittivity ε in a pure state. The system and the photons in the environment are assumed to be initially unentangled: ρ^0 = ρ_S^0 ⊗ ρ_e^0 ⊗ · · · ⊗ ρ_e^0, where ρ_S and ρ_e are the density matrices of the system and of a single photon, respectively, and a superscript "0" denotes prescattering states.
The system is illuminated by photons originating from a far away blackbody of temperature T that covers B ⊂ S, where S is the unit sphere "sky" as seen from S. Let Ω ≤ 4π be the solid angle measure of B. See figure 1.
In the Hilbert space of a single photon, we break the momentum eigenstates into a tensor product |k⃗⟩ = |k⟩|n̂⟩/k of magnitude and directional eigenstates. (The factor of 1/k comes from the Jacobian determinant associated with the fact that the kets in this infinite-dimensional Hilbert space are densities, not normalized vectors.)

Figure 1. A dielectric sphere of radius a and permittivity ε is initially in a superposition with separation Δx = |x_1 − x_2|. The object is subjected to radiation from a blackbody at temperature T that originates from a patch B ⊂ S (where S is the unit sphere "sky" as seen from S) with solid angle Ω ≤ 4π. The complement of B is B̄. The photons propagate in directions labeled by n̂, which makes an angle θ with the vector Δx.

Blackbody
illumination is then described by

ρ_e^0 = ∫ dk p(k) |k⟩⟨k| ⊗ (1/Ω) ∫_B dn̂ |n̂⟩⟨n̂|,

where p(k) ∝ k²/[exp(ℏkc/k_B T) − 1] and c is the speed of light. Above, the probability distribution over the solid angle is the normalized characteristic function of B. This "step-function" illumination is not contrived. Perfect blackbodies are Lambertian, which means that surface elements appear to have the same brightness no matter the angle of viewing. In other words, the sun appears as a uniform disk of illumination in the sky; it is not dimmer near the edge.

We ignore the self-Hamiltonian of the object and assume it is heavy enough to have negligible recoil from photon scattering, so the evolution is governed by

|x⟩ ⊗ |ψ⟩ → |x⟩ ⊗ S_x|ψ⟩,    (4)

where S_x is a scattering matrix acting on the single-photon state when the particle is located at x. Elastic scattering leads to

S_x = ∫ dk |k⟩⟨k| ⊗ S_x^k

under the magnitude-direction decomposition of the photon momentum states.

Decoherence
For now, we take the initial state of the object to be a (Schrödinger) "cat" state, ψ(x) = [δ(x − x_1) + δ(x − x_2)]/√2. The decoherence of the superposition is governed by the decay of the off-diagonal terms in the position basis,

⟨x_1|ρ_S(t)|x_2⟩ = γ^N ⟨x_1|ρ_S^0|x_2⟩,    (6)

where the complex value γ is the decoherence factor attributable to a single photon. The two-dimensional ρ_S can be diagonalized, with eigenvalues (1 ± |Γ|)/2, and its entropy is

H_S = ln 2 − h(|Γ|²),    (8)

where Γ ≡ γ^N is the decoherence factor associated with the environment as a whole. The function

h(x) ≡ (1/2)[(1 + √x) ln(1 + √x) + (1 − √x) ln(1 − √x)]

will appear often. Note that h(0) = 0 and h(1) = ln 2. Also, h(x) is analytic and monotonic on the interval [0, 1] and, for small x, h(x) ≈ x/2.
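The extraction of this text mangled the definition of h, but the quoted properties (h(0) = 0, h(1) = ln 2, small-x behavior h(x) ≈ x/2, analyticity and monotonicity on [0, 1]) single out a natural candidate, h(x) = ½[(1 + √x) ln(1 + √x) + (1 − √x) ln(1 − √x)], for which ln 2 − h(Γ²) is exactly the binary entropy (in nats) of the eigenvalues (1 ± Γ)/2. A minimal numerical sketch of that candidate, treated as an assumption rather than a quotation of the paper's equation:

```python
import math

def h(x):
    """Candidate entropy function: ln 2 minus the binary entropy (in nats)
    of the probabilities (1 +/- sqrt(x))/2."""
    s = math.sqrt(x)
    if s >= 1.0:
        return math.log(2.0)
    if s == 0.0:
        return 0.0
    return 0.5 * ((1 + s) * math.log(1 + s) + (1 - s) * math.log(1 - s))

# Properties quoted in the text:
print(h(0.0))                   # 0
print(h(1.0), math.log(2))      # both ln 2
print(h(1e-4), 1e-4 / 2)        # small-x behavior h(x) ~ x/2
```

With this reading, the relation H_S = ln 2 − h(Γ²) reproduces the entropy of a 2x2 density matrix with eigenvalues (1 ± Γ)/2 identically.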
Applying (4) gives

γ = ∫ dk p(k) (1/Ω) ∫_B dn̂ ⟨n̂|S_{x_1}^{k†} S_{x_2}^{k}|n̂⟩.    (11)

To get the key matrix element appearing on the r.h.s., we use the classical cross section of a dielectric sphere [30] in the dipole approximation (λ ≫ a) and assume the photons are not sufficiently energetic to resolve the superposition individually (λ ≫ Δx)‡. This gives [22]

1 − Re ⟨n̂|S_{x_1}^{k†} S_{x_2}^{k}|n̂⟩ ≈ (4π c t / 15 V) ã^6 k^6 Δx² (3 + 11 cos²θ),    (12)

where ã ≡ a[(ε − 1)/(ε + 2)]^{1/3} is the effective radius of the object, θ is the angle between n̂ and Δx, and t is the elapsed time.
For increasing V, photon momentum eigenstates become diffuse, so individual photons decohere the state less and less. In other words, γ → 1 because S_x^k → I. This is balanced by the increasing number of photons in the box. In the limit V, N → ∞, we combine (6), (11), and (12) and use e = lim_{q→∞}(1 + 1/q)^q to get the decoherence factor

Γ = exp(−t/τ_D),

where τ_D is the decoherence time. Its inverse is the decoherence rate§,

τ_D^{−1} = Ω [8! ζ(9) / (15π²)] ã^6 Δx² [k_B^9 T^9 / (c^8 ℏ^9)] ⟨3 + 11 cos²θ⟩_B,    (15)

‡ High-energy photons are uninteresting, as they can easily "see" the separation Δx and each will become completely correlated with the position of the object; the redundancy will just be equal to the number of scattered photons.

§ This is related to Joos and Zeh's convention by τ_D^{−1} = 2τ^{−1} = 2ΛΔx², where τ and Λ are the "characteristic time" and "localization rate" of [23].
where ζ(n) denotes the Riemann zeta function, which arises from integrating over the thermal distribution. In (15), we have expressed the number density of blackbody radiation in terms of the apparent solid angle, N/V = Ω ζ(3) k_B³T³/(2π³ c³ ℏ³), and we have written the angular integral as ⟨3 + 11 cos²θ⟩_B to emphasize that it is merely the average of the trigonometric quantity 3 + 11 cos²θ over B. If we ignore the order-unity change in that average (it is obviously constrained to lie between 3 and 14), then the decoherence rate is linear in the apparent size of the blackbody. This is natural because thermal radiation is uncorrelated: each new photon contributes an independent multiplicative decoherence factor, and these combine additively in the decoherence rate.
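The quoted photon number density can be checked against a familiar benchmark. A short numerical sketch, assuming N/V = Ω ζ(3)(k_B T)³/(2π³ (ℏc)³) with Ω = 4π for full-sky illumination (the comparison value of roughly 411 photons/cm³ for a thermal bath at the CMB temperature is a standard result):

```python
import math

ZETA3 = 1.2020569031595943  # Apery's constant, zeta(3)
KB, HBAR, C = 1.380649e-23, 1.054571817e-34, 2.99792458e8  # SI units

def photon_density(T, omega=4 * math.pi):
    """Blackbody photon number density (m^-3) for thermal radiation arriving
    from solid angle omega: n = omega * zeta(3) * (kT)^3 / (2 pi^3 (hbar c)^3)."""
    return omega * ZETA3 * (KB * T) ** 3 / (2 * math.pi ** 3 * (HBAR * C) ** 3)

n_cmb = photon_density(2.725)        # full-sky thermal bath at the CMB temperature
print(n_cmb / 1e6, "photons/cm^3")   # ~411, the textbook CMB value
```

The linearity in Ω is exactly the statement in the text that each patch of sky contributes photons (and hence decoherence) independently.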
For blackbodies that are far enough away to be approximated as point sources, the irradiance I (radiative power per unit area) is a more physically accessible quantity than the solid angle Ω, especially in the presence of optical distortion. In that case we use the Lambertian relation Ω = πI/(σT⁴), where σ is the Stefan-Boltzmann constant, and replace the average ⟨3 + 11 cos²θ⟩_B by 3 + 11 cos²θ, where θ is the angle between Δx and the point source.
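For the Sun this substitution can be checked numerically. A sketch assuming the Lambertian relation Ω = πI/(σT⁴) (the solar constant and effective temperature below are standard values, not taken from this paper):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
I_SUN = 1361.0           # solar constant at Earth, W m^-2
T_SUN = 5772.0           # solar effective temperature, K

# Apparent solid angle of a Lambertian blackbody delivering irradiance I:
omega = math.pi * I_SUN / (SIGMA * T_SUN ** 4)
print(omega)  # ~6.8e-5 sr, matching the Sun's disk (angular radius ~0.27 deg)
```

Recovering the Sun's actual apparent solid angle from its irradiance and temperature is a consistency check on the relation used here.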
At the opposite extreme, where Ω = 4π and B = S, we recover the decoherence rate for isotropic thermal illumination [26],

T_D^{−1} ≡ τ_D^{−1}|_{B=S} = [16 · 8! ζ(9) / (9π)] ã^6 Δx² k_B^9 T^9 / (c^8 ℏ^9),

obtained from (15) with Ω = 4π and ⟨3 + 11 cos²θ⟩_S = 20/3. This is the decoherence rate when the system is surrounded by a uniform blackbody, e.g. inside an oven. For the corresponding decoherence time, we have introduced the symbol T_D ≡ τ_D|_{B=S}, since it will serve as a useful B-independent timescale in the rest of this paper.

For concreteness, consider a blackbody that appears as a disk on the sky centered on a direction ẑ which makes an angle χ with Δx. That is, B = {(θ, φ) ∈ S | θ ≤ θ_0} for some maximum polar angle θ_0. See figure 4. For such disks, the decoherence rate follows from (15) by averaging 3 + 11 cos²θ over the disk. It is plotted in figure 4(a) in terms of the solid angle Ω = 2π(1 − cos θ_0). It is monotonically increasing with Ω since each additional photon can only further decohere the system.
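Averaged over the full sphere, ⟨cos²θ⟩ = 1/3, so ⟨3 + 11 cos²θ⟩ = 20/3; the disk case can be checked by direct quadrature. A minimal sketch (the grid resolution is arbitrary; χ is the angle between the disk center and Δx):

```python
import math

def disk_integral(theta0, chi, n=400):
    """Integral of (3 + 11 cos^2 theta) over a spherical cap of half-angle theta0
    centered at angle chi from the Delta-x axis. theta is measured from Delta-x;
    parametrizing by (t, phi) about the cap's own center gives
    cos(theta) = cos(chi)cos(t) + sin(chi)sin(t)cos(phi)."""
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * theta0 / n
        for j in range(n):
            phi = (j + 0.5) * 2 * math.pi / n
            c = math.cos(chi) * math.cos(t) + math.sin(chi) * math.sin(t) * math.cos(phi)
            total += (3 + 11 * c * c) * math.sin(t)  # area element sin(t) dt dphi
    return total * (theta0 / n) * (2 * math.pi / n)

full = disk_integral(math.pi, 0.0)   # whole sky
print(full, 4 * math.pi * 20 / 3)    # both ~83.78
```

The unnormalized integral grows monotonically with θ_0 for any fixed χ, which is the monotonicity of the decoherence rate with Ω noted above.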

Quantum Darwinism
To get the redundancy in the environment, we will need to find the mutual information I_{S:F} = H_S + H_F − H_{SF}. We can avoid calculating H_{SF} by using the identity [Eq. (8) of [19]]

H_{SF} = H_{SdE/F} + H_F^0,    (19)

where H_{SdE} = H_S is the entropy of the system as decohered by the entire environment E, and H_{SdE/F} is the entropy of the system if it were decohered by only E/F. We get

I_{S:F} = H_{SdE} + (H_F − H_F^0) − H_{SdE/F}.    (20)

The calculation of H_F is tedious, and we relegate the details to the appendix. The change in the entropy of the fragment F is

ΔH_F ≡ H_F − H_F^0 = ln 2 − h(Γ^{2αf}),

where the effect of the initial mixedness of the environment on the production of records is accounted for by the single parameter

α ≡ [∫_B dn̂ ∫_B̄ dm̂ |g(n̂, m̂)|²] / [∫_B dn̂ ∫_S dm̂ |g(n̂, m̂)|²].    (21)

For reasons that we explain below, we call α the receptivity of the environment with respect to the decoherence process. Above, B̄ ≡ S\B is the complement of B, and |g(n̂, m̂)|² is a measure of the distinguishability of the out states S_{x_1}|n̂⟩ and S_{x_2}|m̂⟩ for different incoming angles n̂ and m̂ of E and for different locations x_1 and x_2 of S; its explicit angular form is given below. The receptivity α is a dimensionless ratio constructed from this function that, from the form of (21), we know obeys 0 ≤ α ≤ 1.

Now that we have the change in fragment entropy ΔH_F = H_F − H_F^0, we use (19) to finally write down the mutual information. With H_{SdE} = ln 2 − h(Γ²) and H_{SdE/F} = ln 2 − h(Γ^{2(1−f)}),

I_{S:F_f} = ln 2 − h(Γ^{2αf}) + h(Γ^{2(1−f)}) − h(Γ²).    (22)

The terms in (22) can be expanded as power series in Γ (the closed form is analogous to (8)), and the power series is more useful for calculating the redundancy. For large times, Γ = exp(−t/τ_D) is exponentially small and the sum is dominated by the lowest power of Γ. If 0 < f < 1/2 and α ≠ 0, then 0 < αf < (1 − f) < 1 and

I_{S:F_f} ≈ ln 2 − (1/2)Γ^{2αf}.    (23)

So long as the information deficit is not unreasonably large, δ < 1/(2 ln 2) ≈ 0.72, we can estimate the redundancy in the limit t ≫ τ_D:

R_δ ≈ t/τ_R,    (24)

where

τ_R^{−1} ≡ 2α τ_D^{−1} / ln[1/(2δ ln 2)]    (25)

is the redundancy rate, the characteristic rate at which records about the state of the system are produced.
The redundancy (24) depends only weakly (logarithmically) on the information deficit δ, which is consistent with previous results [19,31]. The redundancy increases linearly with time at a rate proportional to the decoherence rate. This is intuitive because (1) photons scatter off the object at a constant rate and (2) it is precisely the dependence of photon out states on the position of the object (roughly corresponding to a record) that causes decoherence. The times for which this is a good approximation to the true redundancy are shown in figure 3. For added rigor, we can use (10) to get I_{S:F} > ln 2 (1 − Γ^{αf} − Γ), which, for t > τ_D ln(2/δ), yields the lower bound R_δ ≥ α(t/τ_D)/ln(2/δ). Since Γ decays exponentially in time, this conservative bound tracks our estimate (24) very closely for all δ < 1/(2 ln 2).

Figure 2 caption: Top left: α = 1. The linearity in f means each piece of the environment contains new, independent information. For t > τ_D (blue solid lines), the plateau shape of the curve indicates redundancy; the first few pieces of the environment reveal a lot of information about the system, but additional pieces just confirm what is already known. On the plateau, the mutual information approaches its maximum classical value, H_S = 1 bit = ln 2 nats. The remaining information (i.e., above the plateau) is highly encoded in the global state, in the sense that it can only be read by capturing almost all of E. Top right: α = 0.5. Redundant copies are only produced at half the rate, and the mutual information is no longer anti-symmetric about f = 0.5 because the environment is mixed in the information-storing degrees of freedom. Nevertheless, the plot approaches the same symmetric plateau shape for large t, illustrating that extensive redundancy is still achieved. Bottom left: α = 0.01. For receptivity this low (which is only expected for nearly isotropic illumination), the mutual information is initially greatly skewed. Still, this just slows down acquisition of information by a factor of α^{−1} = 100. (Compare the t = 10^{3.5} τ_D curve for α = 0.01 to the t = 10^{1.5} τ_D curve for α = 1.) Bottom right: α = 0. Only for this idealized case of perfectly uniform illumination is information storage halted. This is because the directional photon states are already "full" and cannot store more information about the state of the object. Zero redundant copies are produced and the mutual information approaches 0 as t → ∞ for all f < 1.
It can now be seen why we call α the receptivity of the environment; it determines, for a fixed rate of decoherence, the rate at which records about the system are created in the environment. For maximum receptivity (α = 1), the redundancy rate is equal to the decoherence rate and records are produced at the maximum speed. For vanishing receptivity (α = 0), no redundant copies are produced no matter how effective the decoherence.
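These two extremes can be illustrated numerically. A minimal sketch, assuming the closed form I(f) = ln 2 − h(Γ^{2αf}) + h(Γ^{2(1−f)}) − h(Γ²) with h(x) = ½[(1+√x)ln(1+√x) + (1−√x)ln(1−√x)]; this is one self-consistent reading of the (garbled) equations above, matched to the limiting behaviors described in the text, not a quotation of the paper's formula:

```python
import math

def h(x):
    s = math.sqrt(x)
    if s >= 1.0:
        return math.log(2.0)
    if s == 0.0:
        return 0.0
    return 0.5 * ((1 + s) * math.log(1 + s) + (1 - s) * math.log(1 - s))

def mutual_info(f, alpha, t_over_tD):
    """Assumed closed form for I(S:F_f), with Gamma = exp(-t/tau_D)."""
    G = math.exp(-t_over_tD)
    return (math.log(2.0) - h(G ** (2 * alpha * f))
            + h(G ** (2 * (1 - f))) - h(G ** 2))

# alpha = 1, t >> tau_D: even a small fragment sits on the classical plateau
print(mutual_info(0.3, 1.0, 20.0))   # ~ln 2
# alpha = 0: no records ever form; I stays near zero for f < 1
print(mutual_info(0.3, 0.0, 20.0))   # ~0
```

Solving I(f_δ) = (1 − δ) ln 2 for small f with the small-x approximation h(x) ≈ x/2 reproduces a redundancy growing linearly in t and proportional to α, as claimed in the text.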

Receptivity
We now examine how the receptivity depends on the distribution of the illumination and, through the operator S_{x_1}^{k†} S_{x_2}^{k}, on the scattering matrix. However, most of our discussion will not rely on the particular angular dependence of the differential cross section.
In general, both the decoherence rate τ_D^{−1} and the receptivity α will vary with different choices of B. The decoherence rate τ_D^{−1} changes for two reasons: (a) Different incoming photons contribute independently (that is, additively) to the decoherence rate, so larger or closer blackbodies covering more of the sky will naturally decohere the system faster. (b) Less dramatically, the contribution per unit solid angle to the rate of decoherence changes as the blackbody is moved around the unit sphere because of the factor of 3 + 11 cos²θ in (12). The receptivity α similarly depends both on (a) the solid angle of the regions integrated over in (21), and (b) the angular dependence of the integrand |g(n̂, m̂)|².
To disentangle these two quantities, simply consider a blackbody dimmed by some uniform intermediate medium. Let β ∈ [0, 1] be the fractional degree to which the intensity of the illumination is reduced, expressed by modifying the number density N/V → βN/V (equivalently, the irradiance I → βI). This reduces the decoherence rate accordingly, τ_D^{−1} → βτ_D^{−1}, but leaves the receptivity α unchanged. We can then consider an arbitrary family of blackbodies B^{(i)} with different receptivities α^{(i)}, taking i = 0 to be the one with the smallest decoherence rate. By dimming each of the others with β^{(i)} = τ_D^{(i)}/τ_D^{(0)} ≤ 1, we equalize all the decoherence rates without changing the receptivities. This gives a physical interpretation for considering, with a fixed decoherence rate, how the receptivity (and hence the redundancy rate) depends on the shape of the blackbody.
In the appendix we show that |g(n̂, m̂)|², which is bounded, has the angular dependence

|g(n̂, m̂)|² ∝ (1 + cos²θ_{n,m})(cos θ_{Δx,n} − cos θ_{Δx,m})²,

where cos θ_{a,b} = â · b̂ is the cosine of the angle between the unit vectors â and b̂. The explicit form of the receptivity for general B is then

α = ∫_B dn̂ ∫_B̄ dm̂ (1 + cos²θ_{n,m})(cos θ_{Δx,n} − cos θ_{Δx,m})² / ∫_B dn̂ ∫_S dm̂ (1 + cos²θ_{n,m})(cos θ_{Δx,n} − cos θ_{Δx,m})².    (27)

We know that 0 ≤ α ≤ 1 and, since |g(n̂, m̂)|² only vanishes when cos θ_{Δx,n} = cos θ_{Δx,m} (a set of measure zero on S × S), the two extremes are realized only in the following physical situations:

• For B = {n̂_0} (point-source illumination from the direction n̂_0), Ω = 0, α = 1, and we recover (13) of [22],

I_{S:F_f} = ln 2 − h(Γ^{2f}) + h(Γ^{2(1−f)}) − h(Γ²).    (28)

See figure 2. Of course, as noted above, exact point sources are a mathematical idealization; the total flux of blackbody radiation is proportional to Ω, so this is really the case when finite-size effects can be ignored.
• For B = S (isotropic illumination), Ω = 4π, α = 0, and we recover (17) of [22]. For times t ≫ τ_D, the entropy of the system as decohered by the entire environment, H_{SdE}, and by just the complement of the fragment, H_{SdE/F}, are both exponentially close to ln 2, so that I_{S:F_f} = H_{SdE} − H_{SdE/F} is only non-negligible for the brief period when t ∼ τ_D. This is plotted in figure 2, which shows that the mutual information barely rises from zero before fading away, never yielding a single redundant copy. (For strictly vanishing α, the approximation used for (24) breaks down.) The photon directional states, which are the component of the environment in which information about the object is stored, are initially fully mixed and so cannot hold any new information. In other words, an observer relying on scattered radiation in an oven can see nothing [32]. This makes it clear that decoherence, which is maximized for isotropic illumination, is not sufficient to guarantee redundancy [8].
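Both extremes can be checked directly by Monte Carlo integration. A sketch assuming the ratio form α = ∫_B dn̂ ∫_B̄ dm̂ |g|² / ∫_B dn̂ ∫_S dm̂ |g|² with |g|² ∝ (1 + cos²θ_{n,m})(cos θ_{Δx,n} − cos θ_{Δx,m})², specialized to a disk B centered on the Δx axis (χ = 0); the sample count and seed are arbitrary:

```python
import math
import random

def g2(n, m):
    """|g|^2 up to a constant: (1 + (n.m)^2) * ((n.z) - (m.z))^2, z along Delta-x."""
    dot_nm = sum(a * b for a, b in zip(n, m))
    return (1 + dot_nm ** 2) * (n[2] - m[2]) ** 2

def rand_dir(cos_min=-1.0):
    """Uniform random direction with cos(polar angle) in [cos_min, 1]."""
    c = random.uniform(cos_min, 1.0)
    phi = random.uniform(0.0, 2 * math.pi)
    s = math.sqrt(1 - c * c)
    return (s * math.cos(phi), s * math.sin(phi), c)

def receptivity(theta0, samples=40000):
    """Monte Carlo estimate of alpha for a disk B of half-angle theta0 about +z."""
    random.seed(0)
    cos0 = math.cos(theta0)
    num = den = 0.0
    for _ in range(samples):
        n = rand_dir(cos0)   # n uniform in the cap B
        m = rand_dir()       # m uniform over the whole sky S
        w = g2(n, m)
        den += w
        if m[2] < cos0:      # m lies in the complement Bbar
            num += w
    return num / den

print(receptivity(0.05))     # nearly a point source: alpha ~ 1
print(receptivity(math.pi))  # isotropic illumination: alpha = 0
```

For the isotropic case B̄ is empty, so the numerator vanishes identically; for a tiny disk, pairs with both directions inside B carry negligible weight because cos θ_{Δx,n} ≈ cos θ_{Δx,m} there, so α → 1.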
Between these two extremes, the form (21) indicates that the receptivity α will tend to decrease with increasingly large B¶. In physical situations (e.g., objects lit by light bulbs, the Sun, or ambient light), we expect illumination to be nonuniform. This will correspond to a receptivity that is not particularly close to either 0 or 1, so that the initial mixedness of the photon environment decreases the redundancy only by roughly a factor of order unity (in accordance with detailed calculations made for spin-1/2 systems [19]). Since even very tiny objects have extremely short decoherence times [1,23], redundancies will still be very large for realistic illumination.
To get intuition about the dependence of α on B, we again specialize to the case of the blackbody disk with solid angle Ω, so that the integrals in (27) can be performed. (See the appendix.) The results for the decoherence rate, the receptivity, and the redundancy rate are plotted in figure 4.
The receptivity of the decoherence is related to the haziness [18,19] (initial entropy) of the environment. Environments with zero haziness in the information-storing degrees of freedom (photon directional states) will have maximum receptivity. Environments with maximum haziness will have zero receptivity. However, the receptivity is not a strict function of the haziness since it depends also on the form of the scattering operator, i.e. the way in which S makes its mark on E.

Origin of difference between decoherence and redundancy rates
The key difference between the redundancy rate τ_R^{−1} and the decoherence rate τ_D^{−1} is the type of matrix elements on which each depends. Mere decoherence of the two pointer states |x_1⟩ and |x_2⟩ of S is determined only by the overlap of the S-conditioned states of E. So long as this average overlap is unchanged, mixedness of the environment does not affect decoherence. Importantly, the decoherence rate depends only on inner products between the same⁺ initial states of E conditioned on different states of S.

¶ Even for a strictly enlarging sequence of blackbodies B(s) (where B(s_1) ⊂ B(s_2) and Ω(s_1) < Ω(s_2) for s_1 < s_2), the receptivity does not necessarily decrease monotonically with Ω. For the dipole scattering cross section considered in this work, as well as a few other example scattering operators, we have been able to construct unusual examples with B_1 ⊂ B_2 and Ω_1 < Ω_2 such that α_1 < α_2. However, simple blackbody shapes like uniform disks have receptivity that decreases monotonically with solid angle Ω. The number of local extrema α takes with growing B is restricted by the size of higher-frequency terms in the Fourier expansion of |g(n̂, m̂)|² and the complexity of the shape of B. Trivially, if |g(n̂, m̂)|² were constant, then the receptivity would just decrease linearly with Ω.

Figure 4 caption: For Ω = 0, there is zero illumination and so there is neither decoherence nor quantum Darwinism. For Ω = 4π, the decoherence is maximized but there is no Darwinism because the receptivity vanishes; the angular mixing of the environment is total, allowing no recording of information. Away from either extreme the redundancy rate is within an order of magnitude of the decoherence rate.
On the other hand, the production of records is very sensitive to the initial mixedness. The redundancy rate is proportional to the numerator of (A.26). That numerator includes matrix elements of the form ⟨n̂|S_{x_1}^{k†} S_{x_2}^{k}|m̂⟩ with n̂ ≠ m̂. For redundant records to be produced, there must be small overlap between S-conditioned states for different pure states of E in the initial mixture. In other words, the observer must be able to distinguish the imprint of the pointer states of the system on different initial environment states. When |⟨n̂|S_{x_1}^{k†} S_{x_2}^{k}|m̂⟩|² is not small for n̂ ≠ m̂, then the observer cannot tell whether she has sampled (a) a photon that started in state |n̂⟩ and scattered off the system in state |x_1⟩ or (b) a photon that started in state |m̂⟩ and scattered off the system in state |x_2⟩. See figure 5.

General superpositions
So far we have considered only objects localized in a balanced "cat" state, ψ(x) = [δ(x − x_1) + δ(x − x_2)]/√2. We now gather some evidence that relaxing this assumption to allow more general initial object states does not give qualitatively new behavior.
For an arbitrary initial wavefunction ψ(x), the decoherence behavior is very simple. There is no self-evolution of the object, and the initial off-diagonal terms ρ_S(x, x′) decay exponentially in time at a rate that is a function of Δx = |x − x′|. As described in [25], the rate is proportional to Δx² for small distances and saturates to a constant value for large distances. We would like to know if the mutual information I_{S:F} is similarly

⁺ The apparent dependence of (A.30) on matrix elements of the form ⟨n̂|S_{x_1}^{k†} S_{x_2}^{k}|m̂⟩ for n̂ ≠ m̂ can be removed by using the completeness relation ∫_S dm̂ |m̂⟩⟨m̂| = I. The same cannot be done for (A.31).
well-behaved for general initial object wavefunctions.

First, we consider the mutual information for the "unbalanced cat" state, ψ(x) = √p_1 δ(x − x_1) + √p_2 δ(x − x_2) with p_1 + p_2 = 1, illuminated by a point source. Diagonalizing the post-scattering ρ_S is not any more difficult than in the balanced case, yielding the eigenvalues

λ_± = (1/2)[1 ± √(µ + (1 − µ)Γ²)],

where µ ≡ (p_1 − p_2)². From this, we see that H_S for the unbalanced cat can be obtained from the balanced case by making the replacement Γ² → µ + (1 − µ)Γ². (The decoherence factor, defined as the inner product between the environment states conditioned on pure system pointer states, is still Γ.) We decompose the photon Hilbert space as in the appendix, (A.1), and it is only a little more work to show that the eigenvalues of ρ_F^χ (which carry momentum dependence) are similarly modified. Carrying out the momentum integrals leads to the analogous replacement Γ^{2f} → µ + (1 − µ)Γ^{2f}. The net effect of unbalancing the cat state on the mutual information is to make the replacement x → µ + (1 − µ)x, for each of x = Γ², Γ^{2f}, and Γ^{2(1−f)}, in (28). The only qualitative change to the partial information plot is to lower the classical plateau to the new maximum system entropy, H_S = −p_1 ln p_1 − p_2 ln p_2 < ln 2, and to "soften" the shoulders. (See figure 6.) The mutual information for the extreme case (µ = 1) is trivially zero, but the limit of the renormalized mutual information I_{S:F_f}/H_S for extremal probabilities is finite. Note that there is finite softening as µ → 1.

Now we look at a "balanced M-way-cat" state, ψ(x) = M^{−1/2} Σ_{a=1}^{M} δ(x − x_a), exposed to a point-source blackbody. Let us assume the special case where the decoherence factors for all off-diagonal elements of ρ_S are equal. That is, assume ⟨n̂|S_{x_a}† S_{x_b}|n̂⟩ is the same for all a ≠ b (such as when M = 3 and x_1, x_2, and x_3 form an equilateral triangle in a plane perpendicular to the direction of illumination). Generalizing from the original case of the balanced 2-way-cat does not require any new tricks.
The corresponding mutual information can be checked to reduce to (28) for M = 2. It yields a partial information plot with slightly softer shoulders and, as expected, a classical plateau at H_S = ln M. (See figure 7.) The limit of the renormalized mutual information for large M is finite; again, there is only finite softening in even the extreme case (M → ∞).
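The eigenvalue replacement for the unbalanced two-way cat above can be verified directly: for ρ_S with diagonal (p_1, p_2) and off-diagonal √(p_1 p_2)Γ, the eigenvalues (1 ± √(µ + (1 − µ)Γ²))/2 with µ = (p_1 − p_2)² follow exactly from the 2x2 trace/determinant formula. A quick stdlib check:

```python
import math

def eig_from_mu(p1, gamma):
    """Eigenvalues via the replacement form: (1 +/- sqrt(mu + (1-mu) Gamma^2))/2."""
    mu = (p1 - (1 - p1)) ** 2
    r = math.sqrt(mu + (1 - mu) * gamma ** 2)
    return (1 + r) / 2, (1 - r) / 2

def eig_direct(p1, gamma):
    """Eigenvalues of [[p1, sqrt(p1 p2) G], [sqrt(p1 p2) G, p2]] from the
    characteristic polynomial: trace = 1, det = p1 p2 (1 - G^2)."""
    p2 = 1 - p1
    det = p1 * p2 * (1 - gamma ** 2)
    disc = math.sqrt(1 - 4 * det)
    return (1 + disc) / 2, (1 - disc) / 2

for p1, g in [(0.5, 0.3), (0.8, 0.3), (0.99, 0.9), (0.7, 0.0)]:
    a, b = eig_from_mu(p1, g)
    c, d = eig_direct(p1, g)
    assert abs(a - c) < 1e-12 and abs(b - d) < 1e-12
print("replacement form matches direct diagonalization")
```

The agreement is an identity: 1 − 4p_1p_2(1 − Γ²) = (p_1 − p_2)² + 4p_1p_2Γ² = µ + (1 − µ)Γ², using p_1 + p_2 = 1.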
Balanced M-way-cat states with arbitrary decoherence factors are difficult to handle because they require diagonalizing a matrix with M(M − 1)/2 different off-diagonal terms. We can still say the following. Let γ_{i,j}, for i ≠ j, be the set of decoherence factors. Call the factor with the largest (smallest) absolute value γ_W (γ_S), with "W" ("S") standing for weak (strong) decoherence. Let ρ_S^W (ρ_S^S) be the hypothetical state resulting from setting all decoherence factors to γ_W (γ_S), and likewise for ρ_SF^W (ρ_SF^S) and ρ_F^W (ρ_F^S). There is strong numerical evidence that we can bound H_S^W ≤ H_S ≤ H_S^S, in agreement with intuition. The same is true for H_F and H_SF because, in the proper basis, ρ_F and ρ_SF take the same form as ρ_S. Finally, we note that these bounds combine in the large-t (small-Γ) limit. This limit is quickly reached since Γ = e^{−t/τ_D}, and it is exactly where we would like to calculate redundancy. For times t long enough to make higher-order powers of Γ negligible, the mutual information I_{S:F_f}, and hence the redundancy R_δ, is bounded from above and below by the corresponding strong- and weak-decoherence values for f < 0.5. In other words, having unequal decoherence factors for a balanced M-way-cat state does not drastically alter the behavior of the mutual information; for times much larger than the decoherence time τ_D, the redundancy can be bounded from above and below by calculations using the largest and smallest decoherence factors. For "unbalanced M-way-cat" states, ψ(x) = Σ_{a=1}^{M} √p_a δ(x − x_a), (43)–(45) still hold, but we cannot use (41) to get a power series for the entropies in terms of Γ. But we do know that the classical plateau must still form at H_S = −Σ_{a=1}^{M} p_a ln p_a since, for large times, all off-diagonal terms are driven to zero. As M → ∞, these states can approximate a generic continuous wavefunction, but subtleties enter.
The maximum entropy $H_{\mathcal{S}} = \ln M$ diverges, while the decoherence factor associated with two sufficiently nearby points becomes arbitrarily weak. For these small separations, the self-Hamiltonian of the system will no longer be negligible on the time scales of decoherence.
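The sandwich bound $H^{\mathrm{W}}_{\mathcal{S}} \leq H_{\mathcal{S}} \leq H^{\mathrm{S}}_{\mathcal{S}}$ can be probed numerically. The sketch below is a toy $M = 3$ check with illustrative helper names (not code from the paper); a small pure-Python Jacobi diagonalization stands in for a linear-algebra library, and the chosen decoherence factors $\{0.2, 0.3, 0.4\}$ are an arbitrary example.

```python
import math

def jacobi_eigvals(A, sweeps=50):
    """Eigenvalues of a small real symmetric matrix via cyclic Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-14:
                    continue
                # Angle that zeroes A[p][q] under the similarity rotation.
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):   # rotate rows p and q
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k], A[q][k] = c * apk - s * aqk, s * apk + c * aqk
                for k in range(n):   # rotate columns p and q
                    akp, akq = A[k][p], A[k][q]
                    A[k][p], A[k][q] = c * akp - s * akq, s * akp + c * akq
    return [A[i][i] for i in range(n)]

def entropy(eigs):
    return -sum(lam * math.log(lam) for lam in eigs if lam > 1e-12)

def cat_rho(g12, g13, g23):
    """Balanced 3-way-cat density matrix with pairwise decoherence factors g_ij."""
    return [[1/3,  g12/3, g13/3],
            [g12/3, 1/3,  g23/3],
            [g13/3, g23/3, 1/3]]

g_S, g_W = 0.2, 0.4  # smallest (strong) and largest (weak) |decoherence factor|
H_weak   = entropy(jacobi_eigvals(cat_rho(g_W, g_W, g_W)))  # all factors -> gamma_W
H_actual = entropy(jacobi_eigvals(cat_rho(0.2, 0.3, 0.4)))
H_strong = entropy(jacobi_eigvals(cat_rho(g_S, g_S, g_S)))  # all factors -> gamma_S
print(H_weak <= H_actual <= H_strong)   # True: the sandwich bound holds here
```

The equal-factor endpoints also cross-check the solver, since their spectra are known in closed form: one eigenvalue $(1+2\gamma)/3$ and two eigenvalues $(1-\gamma)/3$.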

Conclusion
We have illustrated quantum Darwinism [7,9,8,33] in systems decohered by blackbody radiation. Such radiation is by far the dominant form of illumination in everyday life and is the medium through which we, as observers, gather most of our information. The huge redundancy growth rates we have calculated support the claim that a purely quantum universe can account for the appearance of objective and robust classical states. This is the first realistic model of quantum Darwinism, and the first with an environment with the capacity for two distinct types of mixing. The more important type of mixing is in the component of the environment responsible for storing information about the system (the angular degrees of freedom). We have shown, in agreement with previous studies of abstract systems [18,19], that mixing of this type decreases the environment's ability to record information about the system without decreasing its ability to decohere it. The other type of mixing (in the energy spectrum) affects records only insofar as certain (higher-energy) modes are better able to resolve, and therefore record, states of the system.
We have extended the model to more general initial states, giving evidence that the qualitative features of the mutual information and redundancy are robust.

Appendix A

Here $|\chi\rangle\langle\chi| = \bigotimes_{i \in \mathcal{F}} |k_i\rangle\langle k_i|$. The normalized $\rho^{\chi}_{\mathcal{F}}$, which lives in the Hilbert space $\tilde{\mathcal{F}}$ of angular eigenstates, is the density matrix conditional on the particular set of momenta $\chi$.
Since $\rho_{\mathcal{F}}$ is block-diagonal, its entropy is $H_{\mathcal{F}} = H[p(\chi)] + \sum_\chi p(\chi)\, H^{\chi}_{\mathcal{F}}$, where $H^{\chi}_{\mathcal{F}}$ is the entropy of $\rho^{\chi}_{\mathcal{F}}$ and $H[p(k)] = H[p(\chi)]/fN$ is the entropy associated with the energy distribution $p(k)$ of a single thermal photon (which diverges since the photon Hilbert space is infinite-dimensional).
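The standard entropy decomposition for a block-diagonal state, $H = H[p(\chi)] + \sum_\chi p(\chi) H^\chi$, can be verified directly on a toy example (illustrative helper names; diagonal blocks keep the arithmetic elementary):

```python
import math

def entropy(probs):
    """Shannon / von Neumann entropy of a probability list (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Two blocks chi with weights p(chi); each block is a diagonal conditional state.
p_chi = [0.25, 0.75]
rho_chi = [[0.5, 0.5], [0.9, 0.1]]   # eigenvalues of each normalized block

# Global block-diagonal spectrum: p(chi) times the conditional eigenvalues.
global_spectrum = [p * lam for p, block in zip(p_chi, rho_chi) for lam in block]

H_total = entropy(global_spectrum)
H_decomposed = entropy(p_chi) + sum(p * entropy(block)
                                    for p, block in zip(p_chi, rho_chi))
print(abs(H_total - H_decomposed))   # ~0: the two expressions agree
```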
Handling the mutual information is easier if we discretize the angular directions. Let $\Delta\Omega$ be the solid angle associated with a single discretized state and $D_B = \Omega/\Delta\Omega$ be the dimension of the projector onto the directions in $B$. In the discrete picture, the conditional angular states are built from the $D_B$-dimensional projectors $P^{(i)}$ and $Q^{(i)}$, which are unitarily related by $U^{(i)} = S^{\vec{x}_2}_{k_i} (S^{\vec{x}_1}_{k_i})^{\dagger}$. $P$ and $Q$ are likewise unitarily equivalent and can therefore be written in an appropriate basis as [34]
$$P = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q = \begin{pmatrix} C^2 & CS \\ CS & S^2 \end{pmatrix},$$
where $C$ and $S$ are commuting positive matrices obeying $C^2 + S^2 = I$. (Their eigenvalues are the cosines and sines of the canonical angles between the subspaces onto which $P$ and $Q$ project.) The eigenvalues of $\rho^{\chi}_{\mathcal{F}} = (P + Q)/2D_B^{fN}$ are given by $(1 \pm |C|)/2D_B^{fN}$. To calculate $C$, it turns out to be easier to diagonalize a related matrix. There is a technical complication: subspaces on which $P$ and $Q$ commute (which necessarily exist when $D_B$ is odd) must be handled separately. This is straightforward, with no changes to the final results.
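The role of the canonical angles is already visible for two one-dimensional projectors in two dimensions. The sketch below (illustrative, not code from the appendix) checks that the eigenvalues of $(P+Q)/2$ are $(1 \pm \cos\theta)/2$, where $\theta$ is the angle between the two ranges:

```python
import math

def projector(theta):
    """Rank-1 projector onto the unit vector (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

def sym2_eigvals(A):
    """Eigenvalues of a real symmetric 2x2 matrix, in closed form."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr / 4 - det)
    return [tr / 2 + disc, tr / 2 - disc]

theta = 0.7                        # canonical angle between the two subspaces
P, Q = projector(0.0), projector(theta)
avg = [[(P[i][j] + Q[i][j]) / 2 for j in range(2)] for i in range(2)]
eigs = sym2_eigvals(avg)           # expect (1 + cos theta)/2 and (1 - cos theta)/2
```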

(A.40)
This formula allows one to calculate the exact receptivity for any blackbody B for which the integrals can be performed.