Topical Review

Entanglement typicality

Oscar C O Dahlsten, Cosmo Lupo, Stefano Mancini and Alessio Serafini

Published 22 August 2014 © 2014 IOP Publishing Ltd
Citation: Oscar C O Dahlsten et al 2014 J. Phys. A: Math. Theor. 47 363001. DOI: 10.1088/1751-8113/47/36/363001


Abstract

We provide a summary of both seminal and recent results on typical entanglement. By 'typical' values of entanglement, we refer here to values of entanglement quantifiers that (given a reasonable measure on the manifold of states) appear with arbitrarily high probability for quantum systems of sufficiently high dimensionality. We shall focus on pure states and work within the Haar measure framework for discrete quantum variables, where we report on results concerning the average von Neumann and linear entropies as well as arguments implying the typicality of such values in the asymptotic limit. We then proceed to discuss the generation of typical quantum states with random circuitry. Different phases of entanglement, and the connection between typical entanglement and thermodynamics are discussed. We also cover approaches to measures on the non-compact set of Gaussian states of continuous variable quantum systems.


1. Introduction

The term 'quantum entanglement' was coined by Schrödinger [57] in connection with the criticism of quantum theory put forward by Einstein, Podolsky and Rosen [19]. Since then, it has become synonymous with quantum correlations that cannot be explained by any local and real (hence classical) theory. As such it was traditionally relegated to foundational (and philosophical) issues until a few decades ago. In the nineties, however, its usefulness in quantum information processing was realized, with the seminal protocols of superdense coding and teleportation [4, 5]. It then started to be considered as a resource, and consequently much attention has been devoted to its characterization and quantification through the introduction of suitable measures (see e.g. [33, 54]).

While the structure of two-particle entanglement has been almost thoroughly explored, it has become evident that the extension to many-particle systems is a prohibitive task [33, 54]. This is because the complexity (and diversity) of multi-particle entanglement grows exponentially with the number of particles. A viable approach towards a characterization of entanglement in systems with many constituents consists in focusing on the 'typical' entanglement. Here by 'typical' we mean the type of entanglement that appears with arbitrarily high probability in a quantum system of sufficiently high dimensionality. This subsumes the use of random states (first introduced in [40]), so that an entanglement measure becomes a function of a random variable, hence a random variable itself, with an associated probability distribution. The strategy is then to simplify the problem at hand by restricting attention to those states corresponding to the most pronounced part of this probability distribution and neglecting the others. To sample random states, it is reasonable to resort to the most unbiased probability measure, that is the one emerging from the Haar measure of the unitary group (the measure invariant under application of any unitary transformation), whose elements allow one to reach any state when applied to a given starting pure state. Of course, the consideration of typical entanglement returns exact results if the distribution of the entanglement measure becomes strongly peaked in the limit of a large number of particles. In this paper we review the achievements concerning typical bipartite entanglement for random quantum states involving a large number of particles. We shall emphasize the statistical properties when such a number tends to infinity and discuss finite size effects as well.
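
As a concrete illustration of this sampling strategy, here is a minimal numerical sketch (our own; the function names are hypothetical) that draws Haar-distributed pure states by normalizing vectors of i.i.d. complex Gaussians—equivalent to applying a Haar-random unitary to a fixed state—and shows the entropy of entanglement of a bipartition (defined in section 2) concentrating near its maximum:

```python
import numpy as np

def haar_random_pure_state(N, rng):
    """Draw a pure state uniformly (Haar) on an N-dimensional Hilbert space.

    Normalizing a vector of i.i.d. complex Gaussians gives the same
    distribution as applying a Haar-random unitary to a fixed state.
    """
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    return v / np.linalg.norm(v)

def entropy_of_entanglement(psi, NA, NB):
    """Von Neumann entropy (in nats) of subsystem A for a pure state on A x B."""
    p = np.linalg.svd(psi.reshape(NA, NB), compute_uv=False) ** 2
    p = p[p > 1e-15]                      # squared Schmidt coefficients
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
NA = NB = 32
S = [entropy_of_entanglement(haar_random_pure_state(NA * NB, rng), NA, NB)
     for _ in range(500)]
# the distribution is sharply peaked just below the maximum log(NA):
# the mean is close to log(NA) - NA/(2 NB) = log(32) - 1/2 (see section 3)
print(np.mean(S), np.std(S), np.log(NA))
```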

Besides its interest in the context of quantum information theory, typical entanglement has also been put forward as a fundamental explanation for the emergence of thermodynamics, where randomized global quantum states result in mixed, 'thermal' local Gibbs states. We shall also concisely review this area of application of entanglement typicality.

The layout of the paper is the following. Section 2 recalls basic notions about pure states' entanglement and its measures. Then, random quantum states will be presented in section 3 and their generation through random quantum circuits discussed in section 4. The statistical properties of typical entanglement in terms of different phases are detailed in section 5 and the relations to thermodynamics considered in section 6. Finally, in section 7 the issue of typical entanglement will be addressed within the continuous variable (CV) framework. Conclusions are drawn in section 8. The appendix contains a cursory treatment of random mixed states of many qubits.

2. Entanglement in a nutshell

We summarize here basic notions about the entanglement of pure states and refer the reader to recent reviews on the subject of quantum entanglement for further details [33, 54].

Let us consider a bipartite system with associated Hilbert space ${{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$. Then, any bipartite pure state $|{{\Psi }_{AB}}\rangle \in {{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$ is said to be separable if there exist $|{{\psi }_{A}}\rangle \in {{\mathcal{H}}_{A}},|{{\psi }_{B}}\rangle \in {{\mathcal{H}}_{B}}$ such that

$$\left| \Psi_{AB} \right\rangle = \left| \psi_A \right\rangle \otimes \left| \psi_B \right\rangle, \qquad (1)$$

i.e. it can be written as a tensor product of vectors belonging to the Hilbert spaces of the subsystems5. On the contrary, if no such $|\psi_A\rangle \in \mathcal{H}_A$, $|\psi_B\rangle \in \mathcal{H}_B$ exist, the state $|\Psi_{AB}\rangle$ is said to be entangled.

Quite generally, given orthonormal bases $\{|e_{A}^{i}\rangle {{\}}_{i}}$ for ${{\mathcal{H}}_{A}}$ and $\{|e_{B}^{j}\rangle {{\}}_{j}}$ for ${{\mathcal{H}}_{B}}$, we can write

$$\left| \psi_{AB} \right\rangle = \sum_{i=1}^{N_A} \sum_{j=1}^{N_B} \Psi_{ij}\, \left| e_A^i \right\rangle \left| e_B^j \right\rangle, \qquad (2)$$

where ${{N}_{A}}={\rm dim}{{\mathcal{H}}_{A}}$ and ${{N}_{B}}={\rm dim}{{\mathcal{H}}_{B}}$. Then it turns out that the state $|{{\psi }_{AB}}\rangle $ is separable if and only if the matrix Ψ of coefficients Ψij has rank one. Furthermore, there exists a bi-orthonormal basis $\{|\tilde{e}_{A}^{i}\rangle |\tilde{e}_{B}^{i}\rangle {{\}}_{i}}$ for ${{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$ where $\left| {{\psi }_{AB}} \right\rangle $ takes the form

$$\left| \psi_{AB} \right\rangle = \sum_i \lambda_i \left| \tilde{e}_A^i \right\rangle \left| \tilde{e}_B^i \right\rangle, \qquad (3)$$

known as the Schmidt decomposition. The quantities $\lambda_i$ (Schmidt coefficients) are the non-zero singular values of Ψ; in other words, $p_i = \lambda_i^2$ are the non-zero eigenvalues of either reduced density operator $\rho_A = {\rm tr}_B\left( \left| \psi_{AB} \right\rangle \left\langle \psi_{AB} \right| \right)$ or $\rho_B = {\rm tr}_A\left( \left| \psi_{AB} \right\rangle \left\langle \psi_{AB} \right| \right)$.
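
In practice the Schmidt coefficients are computed as the singular values of the coefficient matrix Ψ. A short sketch (our own illustration) checking that their squares coincide with the spectra of both reductions:

```python
import numpy as np

rng = np.random.default_rng(1)
NA, NB = 3, 5
Psi = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))
Psi /= np.linalg.norm(Psi)                     # coefficient matrix of |psi_AB>

lam = np.linalg.svd(Psi, compute_uv=False)     # Schmidt coefficients lambda_i
rhoA = Psi @ Psi.conj().T                      # tr_B |psi_AB><psi_AB|
rhoB = Psi.conj().T @ Psi                      # tr_A |psi_AB><psi_AB|

p = np.sort(lam ** 2)                          # p_i = lambda_i^2
print(np.allclose(p, np.sort(np.linalg.eigvalsh(rhoA))))        # True
print(np.allclose(p, np.sort(np.linalg.eigvalsh(rhoB))[-NA:]))  # True (rest 0)
```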

Entanglement is invariant under local unitary operations ${{U}_{A}}\otimes {{U}_{B}}$. Since the coefficients λi (or equivalently the coefficients pi ) are the only parameters invariant under such transformations, they completely determine the bipartite entanglement.

A quantitative measure of entanglement must satisfy two fundamental properties6:

  • (i) it cannot increase under local operations and classical communication (LOCC);
  • (ii) it must vanish for separable states.

In addition we may require normalization so that the amount of entanglement is ${\rm log} N$ when $\left| {{\psi }_{AB}} \right\rangle =\sum _{i=1}^{N}\left| \tilde{e}_{A}^{i} \right\rangle \left| \tilde{e}_{B}^{i} \right\rangle /\sqrt{N}$, i.e. for a maximally entangled state.

The entropy of entanglement, i.e. the entropy of either subsystem (A or B),

$$\mathcal{S}(\rho_A) = -{\rm tr}\left( \rho_A \log \rho_A \right) = -\sum_i p_i \log p_i, \qquad (4)$$

satisfies these conditions and will be considered throughout this paper7. More generally, one could consider the quantum Renyi entropy in place of the von Neumann entropy, that is

$$\mathcal{S}_q(\rho_A) = \frac{1}{1-q} \log {\rm tr}\left( \rho_A^q \right). \qquad (5)$$

For q = 1 it reduces to (4), while for q = 2 it is related to the so-called purity

$$\mathcal{P} = {\rm tr}\left( \rho_A^2 \right) = \sum_i p_i^2, \qquad (6)$$

by the relationship

$$\mathcal{S}_2(\rho_A) = -\log \mathcal{P}. \qquad (7)$$
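
A small numerical check of these relations (our own sketch, for an arbitrary probability vector of squared Schmidt coefficients):

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy S_q of a probability vector p (q = 1: von Neumann limit)."""
    p = p[p > 1e-15]
    if q == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1 - q)

p = np.array([0.5, 0.3, 0.2])
purity = np.sum(p ** 2)                                   # equation (6)
print(np.isclose(renyi_entropy(p, 2), -np.log(purity)))   # S_2 = -log(P): True
print(renyi_entropy(p, 1 + 1e-6), renyi_entropy(p, 1))    # S_q -> S as q -> 1
```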

2.1. Separability and entanglement of CV

The extension of the above arguments to infinite dimensional Hilbert spaces presents some oddities. Systems with associated Hilbert space isomorphic to ${{\ell }^{2}}(\mathbb{C})$ or ${{L}^{2}}(\mathbb{R})$ are usually referred to as CV systems.

For a bipartite CV pure state living in ${{\ell }^{2}}{{(\mathbb{C})}_{A}}\otimes {{\ell }^{2}}{{(\mathbb{C})}_{B}}$ (where Fock bases are standardly used), separability occurs if and only if it has Schmidt rank equal to one, i.e. one Schmidt coefficient equal to one and all the others zero. However, there cannot be maximally entangled states, simply because a state with all Schmidt coefficients equal would have infinite norm. Moreover, the set of separable (mixed) states is nowhere dense [11] and as a consequence has volume zero, in contrast to what happens in finite dimension [65].

The entropy of entanglement remains a good measure for pure CV states. However, it is unbounded and becomes infinite for certain states. To avoid this problem one has to impose suitable constraints. That can be more easily done within the set of Gaussian states (see [9, 62] for reviews on Gaussian states in quantum information).

A Gaussian state $\rho_{AB}$ of two modes on the Hilbert space ${{L}^{2}}{{(\mathbb{R})}_{A}}\otimes {{L}^{2}}{{(\mathbb{R})}_{B}}$ of functions of position variables $({{q}_{A}},{{q}_{B}})$ is characterized by the displacement vector (first moments)

$$d_k = {\rm tr}\left( \rho_{AB}\, R_k \right), \qquad (8)$$

and the covariance matrix (second moments)

$$\sigma_{kk'} = \tfrac{1}{2}\, {\rm tr}\left( \rho_{AB} \left\{ R_k - d_k ,\; R_{k'} - d_{k'} \right\} \right) \qquad (9)$$

($\{\cdot,\cdot\}$ denotes the anticommutator)

where $R=({{Q}_{A}},{{P}_{A}},{{Q}_{B}},{{P}_{B}})$ and Q, P are mode position and momentum observables respectively. These operators satisfy the canonical commutation relations $[{{R}_{k}},{{R}_{k^{\prime} }}]=2i{{J}_{kk^{\prime} }}$ where

$$J = \bigoplus_{k=1}^{2} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \qquad (10)$$

is the symplectic form (we assume the convention that the covariance matrix of the vacuum state is set to I).

Since the displacement d can be removed by local unitary operations, only the covariance matrix σ is relevant for entanglement. Notice that Heisenberg uncertainty imposes that [9, 62]

$$\sigma + {\rm i} J \geqslant 0. \qquad (11)$$

The covariance matrix σ describes a pure state if and only if ${{(\sigma J)}^{2}}=-I$.

In order to avoid divergences of physical quantities it is standard within the manifold of Gaussian states to constrain the mean value of the 'energy' (assuming free, non-interacting oscillators) in each subsystem. This amounts to fixing the following values

$$E_A = {\rm tr}\left[ \rho_{AB} \left( a^\dagger a + a\, a^\dagger \right) \right], \qquad (12)$$

$$E_B = {\rm tr}\left[ \rho_{AB} \left( b^\dagger b + b\, b^\dagger \right) \right], \qquad (13)$$

where $a,{{a}^{\dagger }}$ (resp. $b,{{b}^{\dagger }}$) are ladder operators, whose Hermitian real and imaginary parts are given by ${{Q}_{A}},{{P}_{A}}$ (resp. ${{Q}_{B}},{{P}_{B}}$), and which provide a natural link to ${{\ell }^{2}}{{(\mathbb{C})}_{A}}\otimes {{\ell }^{2}}{{(\mathbb{C})}_{B}}$.

Now, suppose that σ describes a bipartite Gaussian pure state ${{\rho }_{AB}}$ of $1+1$ modes; then the reduced density operator ${{\rho }_{A}}$ will still be Gaussian and characterized by a covariance matrix ${{\sigma }_{A}}$. The latter can always be diagonalized by means of some symplectic matrix SA , so that ${{S}_{A}}{{\sigma }_{A}}S_{A}^{T}={\rm diag}(\nu ,\nu )$, where $\nu \in [1,\infty )$ is the so-called symplectic eigenvalue of ${{\sigma }_{A}}$ (the symplectic eigenvalues of ${{\sigma }_{A}}$ equal those of ${{\sigma }_{B}}$ if ${{\rho }_{AB}}$ is a pure Gaussian state). Similarly, for a bipartite Gaussian state of ${{n}_{A}}+{{n}_{B}}$ modes the symplectic diagonalization defines a set of nA (assuming ${{n}_{A}}\leqslant {{n}_{B}}$) symplectic eigenvalues ${{\nu }_{1}},{{\nu }_{2}},...,{{\nu }_{{{n}_{A}}}}$. One can show that any entanglement measure is a function of the symplectic eigenvalues only. In particular, the entropy of entanglement (4) reads [32]

$$\mathcal{S}(\rho_A) = \sum_{i=1}^{n_A} h(\nu_i), \qquad (14)$$

where

$$h(\nu) = \frac{\nu + 1}{2} \log \frac{\nu + 1}{2} - \frac{\nu - 1}{2} \log \frac{\nu - 1}{2}. \qquad (15)$$

The purity (6) can be expressed in terms of symplectic eigenvalues too, as

$$\mathcal{P} = \prod_{i=1}^{n_A} \nu_i^{-1}. \qquad (16)$$
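
These formulas are straightforward to evaluate numerically. The sketch below (our own; it assumes the vacuum-covariance-equal-to-I convention stated above, under which the symplectic eigenvalues are the positive eigenvalues of iJσ) computes ν, h(ν) and the purity for the reduction of a two-mode squeezed vacuum:

```python
import numpy as np

def h(nu):
    """Entropy contribution of a symplectic eigenvalue nu >= 1, equation (15)."""
    if nu <= 1:
        return 0.0
    a, b = (nu + 1) / 2, (nu - 1) / 2
    return a * np.log(a) - b * np.log(b)

def symplectic_eigenvalues(sigma):
    """Positive eigenvalues of i J sigma, with J the symplectic form (10)."""
    n = sigma.shape[0] // 2
    J = np.kron(np.eye(n), np.array([[0, 1], [-1, 0]]))
    ev = np.linalg.eigvals(1j * J @ sigma).real
    return np.sort(ev[ev > 0])

# two-mode squeezed vacuum with squeezing r: the reduced state is thermal
r = 0.8
c, s = np.cosh(2 * r), np.sinh(2 * r)
Z = np.diag([1.0, -1.0])
sigma = np.block([[c * np.eye(2), s * Z], [s * Z, c * np.eye(2)]])

nu = symplectic_eigenvalues(sigma[:2, :2])   # reduced covariance of mode A
print(nu, h(nu[0]), 1 / np.prod(nu))         # nu = cosh(2r), entropy, purity
```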

3. Random quantum states

The set of pure states on an N-dimensional Hilbert space $\mathcal{H}$ forms a complex projective space $\mathbb{C}{{P}^{N-1}}$, on which there exists a natural uniform measure, in the sense that, because of its unitary invariance, it equally weighs different regions of the space. This measure is constructed by borrowing the Haar measure on U(N). In this framework, to generate a random pure state one applies a random unitary $U\in {\rm U}(N)$ to a fixed state $\left| {{\psi }_{0}} \right\rangle \in \mathcal{H}$, which is equivalent to taking a (column) vector of a random unitary $U\in {\rm U}(N)$. This particular choice of a measure on the set of quantum states enjoys the privilege of being invariant under any Hamiltonian evolution which, in a sense, establishes a connection with dynamics.

Since ${\rm U}(N)$ is an $N^2$-dimensional manifold embedded in ${{\mathbb{R}}^{2{{N}^{2}}}}$, expressing the Haar measure on it requires $N^2$ local coordinates. Exploiting the Hurwitz parametrization, which generalizes the Euler angles, these read [7]

Equation (17)

with $1\leqslant k\lt \ell \leqslant N$. Then the measure on ${\rm U}(N)$ (normalized to 1) can be written as [63]

Equation (18)

Notice that, since ${\rm U}(N)={\rm U}(1)\times {\rm SU}(N)$ up to a discrete quotient (with α the ${\rm U}(1)$ parameter), from (18) we can also get the measure on ${\rm SU}(N)$. In turn, the measure on $\mathbb{C}{{P}^{N-1}}$ can be derived by observing that $\mathbb{C}{{P}^{N-1}}={\rm SU}(N)/{\rm U}(N-1)$; it reads (normalized to 1) [64]

Equation (19)

where

Equation (20)

with $1\leqslant k\leqslant N-1$.

Let us now discuss the induced measure on the set of reduced density operators for a bipartite system. Consider a bipartite quantum system with Hilbert space ${{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$ of dimension ${{N}_{A}}\times {{N}_{B}}$. A pure state $\left| {{\psi }_{AB}} \right\rangle \in {{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$ can be expanded as in equation (2) and thus represented by a rectangular (${{N}_{A}}\times {{N}_{B}}$) matrix Ψij . The ensemble of uniformly distributed pure states coincides, upon normalization, with the Ginibre ensemble of random matrices with i.i.d. Gaussian distributed entries of zero mean and equal finite variance. In turn, the density operator $\left| {{\psi }_{AB}} \right\rangle \left\langle {{\psi }_{AB}} \right|$ will be represented by a square (${{N}_{A}}{{N}_{B}}\times {{N}_{A}}{{N}_{B}}$) matrix ${{\Psi }_{ij}}\Psi _{i^{\prime} j^{\prime} }^{*}$. The partial trace with respect to the subspace ${{\mathcal{H}}_{B}}$ gives the reduced density matrix (of the A subsystem)

$$\rho_{ii'} = \sum_{j=1}^{N_B} \Psi_{ij}\, \Psi_{i'j}^{*}, \qquad (21)$$

where we have omitted the label indicating the A subsystem.

Assuming ${{N}_{B}}\geqslant {{N}_{A}}$, as a consequence of (21) we write $\rho =\Psi {{\Psi }^{\dagger }}$. Then, we have the following distribution of (Hermitian) matrices

$$P(\rho) = \int \delta\left( \rho - \Psi \Psi^\dagger \right)\, \delta\left( 1 - {\rm tr}\, \Psi \Psi^\dagger \right)\, {\rm d}\mu(\Psi), \qquad (22)$$

where the measure ${\rm d}\mu \left( \Psi \right)\equiv {\rm d}\mu (\left| \psi \right\rangle )$ is the one in $\mathbb{C}{{P}^{{{N}_{A}}{{N}_{B}}-1}}$ (see (19)). Furthermore, the first delta imposes that $\rho =\Psi {{\Psi }^{\dagger }}$, while the second one imposes the unit trace. That is, the distribution of the reduced density matrix coincides (upon normalization) with the distribution of the Wishart matrix $\Psi {{\Psi }^{\dagger }}$.

Making the change of variable $\Psi =\sqrt{\rho }\tilde{\Psi }$, ${\rm d}\mu \left( \Psi \right)={\rm det} {{\rho }^{{{N}_{B}}}}{\rm d}\mu \left( \tilde{\Psi } \right)$ and noticing that $\delta \left( \sqrt{\rho }\left( 1-\tilde{\Psi }{{\tilde{\Psi }}^{\dagger }} \right)\sqrt{\rho } \right)={\rm det} {{\rho }^{-{{N}_{A}}}}\delta \left( 1-\tilde{\Psi }{{\tilde{\Psi }}^{\dagger }} \right)$ we obtain

$$P(\rho) \propto \theta(\rho)\, \delta\left( 1 - {\rm tr}\, \rho \right) \left( {\rm det}\, \rho \right)^{N_B - N_A}, \qquad (23)$$

where the theta function guarantees the positivity of ρ. Now, since ρ is unitarily diagonalizable we write it as $\rho =U\Lambda {{U}^{\dagger }}$ with $\Lambda ={\rm diag}\{{{p}_{1}},\ldots ,{{p}_{{{N}_{A}}}}\}$. Then, by integrating over ${\rm d}\mu (U)$ given by (18) we obtain the joint eigenvalue density distribution (this expression was first derived in [39])

$$P\left( p_1, \ldots, p_{N_A} \right) = \mathcal{N}\, \delta\left( 1 - \sum_{i=1}^{N_A} p_i \right) \prod_{i=1}^{N_A} p_i^{N_B - N_A} \prod_{i<j} \left( p_i - p_j \right)^2, \qquad (24)$$

where the normalization constant reads [64]

$$\mathcal{N} = \frac{\Gamma\left( N_A N_B \right)}{\prod_{j=0}^{N_A - 1} \Gamma\left( N_B - j \right) \Gamma\left( N_A - j + 1 \right)}, \qquad (25)$$

with Γ denoting the Euler gamma function.

At this point we may notice that (24) is the distribution of the squared Schmidt coefficients of the state $\left| {{\psi }_{AB}} \right\rangle $. By using (24) one can derive the distribution for the subsystem entropy $\mathcal{S}$

$$P(\mathcal{S}) = \int \prod_{i=1}^{N_A} {\rm d}p_i\; P\left( p_1, \ldots, p_{N_A} \right)\, \delta\left( \mathcal{S} + \sum_i p_i \log p_i \right), \qquad (26)$$

which results in the mean value

$$\langle \mathcal{S} \rangle = \sum_{k=N_B+1}^{N_A N_B} \frac{1}{k} - \frac{N_A - 1}{2 N_B} \qquad (N_A \leqslant N_B). \qquad (27)$$

This expression was first conjectured by Page [51] and then proved later on by several authors [24, 56, 58].

Recall that the subsystem entropy is an entanglement measure for bipartite pure states and its maximum is ${\rm log} {{N}_{A}}$ . From (27) it follows [40]

$$\langle \mathcal{S} \rangle \simeq \log N_A - \frac{N_A}{2 N_B}, \qquad (28)$$

which scales like ${\rm log} {{N}_{A}}$ when ${{N}_{A}}\simeq {{N}_{B}}$.

Likewise one can derive the expectation value of any Renyi entropy. For q = 2, the expectation value of the purity reads

$$\langle \mathcal{P} \rangle = \frac{N_A + N_B}{N_A N_B + 1}. \qquad (29)$$
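
Both averages are easy to confirm by Monte-Carlo sampling (a sketch of ours):

```python
import numpy as np

def sample_reduced_spectrum(NA, NB, rng):
    """Squared Schmidt coefficients of a Haar-random pure state on NA x NB."""
    Psi = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))
    Psi /= np.linalg.norm(Psi)
    return np.linalg.svd(Psi, compute_uv=False) ** 2

NA, NB = 8, 16
rng = np.random.default_rng(1)
samples = [sample_reduced_spectrum(NA, NB, rng) for _ in range(5000)]

S = np.mean([-np.sum(p[p > 0] * np.log(p[p > 0])) for p in samples])
page = sum(1.0 / k for k in range(NB + 1, NA * NB + 1)) - (NA - 1) / (2 * NB)
print(S, page)                              # Page's average (27), in nats

P = np.mean([np.sum(p ** 2) for p in samples])
print(P, (NA + NB) / (NA * NB + 1))         # Lubkin's average purity (29)
```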

Now, a value of a random variable like $\mathcal{S}(\rho )$ is called typical if its probability distribution is peaked around the mean. Indeed, in [30] it has been shown that the probability that a state $\left| {{\psi }_{AB}} \right\rangle $ drawn randomly from ${{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$ has subsystem entropy appreciably smaller than ${\rm log} {{N}_{A}}$ is exponentially small; more precisely

Equation (30)

This is referred to as the concentration of measure effect and comes from the concentration of the spectrum of the reduced density matrix of a bipartite system when the dimensions of both subsystems become large. This effect can be traced back to the fact that the uniform measure on the k-sphere ${{\mathbb{S}}^{k}}$ concentrates about any equator as k gets large, and any polar cap smaller than the hemisphere has a relative volume exponentially small in k. This implies that similar results hold true for the values of any smooth function on the sphere: all such functions are overwhelmingly likely to take values close to the average, except on a set of volume exponentially small in k. Hence, random pure states are typically highly entangled (though not necessarily maximally entangled).

The complete distribution of bipartite entanglement of random pure states may be exactly reconstructed in terms of purity [20, 27], von Neumann entropy [23, 37, 46], and all Renyi entropies with $q\gt 1$ [45]. For a possible approach to typical multipartite entanglement, see [21, 22].

The extension of the above arguments from the case of pure states $\left| {{\psi }_{AB}} \right\rangle $ to the case of mixed states ${{\rho }_{AB}}$ is briefly discussed in the appendix.

4. Random quantum circuits

We provide here an operational interpretation of typical entanglement by means of random circuits.

Given a set of n qubits, a random circuit ${{C}_{\ell }}$ is a product ${{W}_{\ell }}\ldots {{W}_{1}}$ of two-qubit gates where each Wi is independently constructed in the following way: a pair of distinct integers $c\ne t$ is randomly and uniformly chosen from $1,\ldots ,n$. Then, single-qubit unitaries $U[c]$ and $V[t]$ acting on qubit c and t respectively are drawn independently from the uniform measure on ${\rm U}(2)$. Finally, $W={\rm CNOT}[c,t]U[c]V[t]$ where ${\rm CNOT}[c,t]$ is the controlled-NOT gate with control and target qubit c and t respectively.

Since the universal set (of all one-qubit gates together with CNOT) can generate the whole of ${\rm U}({{2}^{n}})$, such random circuits can produce any unitary. This process converges to a unitarily invariant distribution, and since the Haar distribution is the unique such distribution, the resulting unitary will be uniformly distributed in ${\rm U}({{2}^{n}})$. However, the convergence is exponentially slow in the number of qubits n, since approximating an arbitrary unitary to a given accuracy using gates from a fixed set requires a number of steps that grows exponentially with n [36]. Thus obtaining the uniform distribution to a fixed accuracy may look unphysical.

On the other hand, entanglement gives rise to peculiar properties of a quantum state, and the faithful reproduction of these properties may be possible with fewer physical resources, i.e. elementary gates, than required to reproduce the expectation value of an arbitrary observable. Ref. [49] explored whether typical entanglement properties can be obtained efficiently, i.e. polynomially in the number of qubits, using only one- and two-qubit gates.

There, the set of n qubits was split into two subsets A (with nA qubits) and B (with nB qubits). Let $\left| {{\psi }_{0}} \right\rangle $ be an initial state of AB and consider a random circuit ${{C}_{\ell }}$ consisting of ℓ randomly chosen two-qubit quantum gates. Defining $\left| {{\psi }_{\ell }} \right\rangle ={{C}_{\ell }}\left| {{\psi }_{0}} \right\rangle $, the amount of entanglement of the reduced density operator ${{\rho }_{A,\ell }}={\rm t}{{{\rm r}}_{B}}(\left| {{\psi }_{\ell }} \right\rangle \left\langle {{\psi }_{\ell }} \right|)$ of subsystem A is $\mathcal{S}({{\rho }_{A,\ell }})$ according to equation (4). Then, in Ref. [49] it was shown that, independently of the initial state $\left| {{\psi }_{0}} \right\rangle $, convergence of the expected entanglement to its asymptotic value to an arbitrary fixed accuracy $\epsilon$ is achieved after a number of random two-qubit gates that is polynomial in the number of qubits. More precisely, given ${{n}_{B}}\geqslant {{n}_{A}}$, $\epsilon \in (0,1)$ and a number ℓ of gates in ${{C}_{\ell }}$ satisfying

Equation (31)

we have

Equation (32)

which is similar to the bound of equation (28). The convergence occurs in approximately $n{\rm log} n$ steps, so the bound is not tight.
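
A brute-force state-vector simulation of this random circuit model is compact enough to sketch. The code below is our own illustration (not the implementation of Ref. [49]): it applies gates $W={\rm CNOT}[c,t]\,U[c]\,V[t]$ to an initial product state and tracks the subsystem entropy, which rapidly approaches its typical value:

```python
import numpy as np

def haar_u2(rng):
    """Haar-random 2x2 unitary via QR of a complex Ginibre matrix."""
    Z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def apply_1q(psi, U, t, n):
    """Apply a single-qubit unitary U to qubit t of an n-qubit state vector."""
    psi = psi.reshape(2 ** t, 2, 2 ** (n - t - 1))
    return np.einsum('ab,ibj->iaj', U, psi).reshape(-1)

def apply_cnot(psi, c, t, n):
    """Apply CNOT with control c and target t (flip target where control = 1)."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[c] = 1
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=t if t < c else t - 1)
    return psi.reshape(-1)

def random_gate(psi, n, rng):
    """One step W = CNOT[c,t] U[c] V[t] of the random circuit defined above."""
    c, t = rng.choice(n, size=2, replace=False)
    psi = apply_1q(psi, haar_u2(rng), c, n)
    psi = apply_1q(psi, haar_u2(rng), t, n)
    return apply_cnot(psi, c, t, n)

def entropy_A(psi, nA, n):
    """Entanglement entropy (in bits) of the first nA qubits."""
    p = np.linalg.svd(psi.reshape(2 ** nA, 2 ** (n - nA)), compute_uv=False) ** 2
    p = p[p > 1e-15]
    return -np.sum(p * np.log2(p))

n, nA = 8, 4
rng = np.random.default_rng(2)
psi = np.zeros(2 ** n, complex)
psi[0] = 1.0
for ell in range(1, 201):
    psi = random_gate(psi, n, rng)
    if ell % 50 == 0:
        print(ell, entropy_A(psi, nA, n))   # saturates near the typical value
```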

This can be explained as follows. Writing $\left| {{\psi }_{\ell }} \right\rangle \left\langle {{\psi }_{\ell }} \right|={{2}^{-n/2}}{{\sum }_{s\in {{\{0,x,y,z\}}^{n}}}}{{\xi }_{\ell }}(s)\bigotimes _{i=1}^{n}{{\sigma }^{{{s}_{i}}}}[i]$, where ${{\xi }_{\ell }}(s)={{2}^{-n/2}}{\rm tr}\left( \bigotimes _{i=1}^{n}{{\sigma }^{{{s}_{i}}}}[i]\left| {{\psi }_{\ell }} \right\rangle \left\langle {{\psi }_{\ell }} \right| \right)$ and ${{\sigma }^{{{s}_{i}}}}[i]$ is the si th Pauli operator acting on the ith qubit, the reduced density operator ${{\rho }_{A}}={\rm t}{{{\rm r}}_{B}}\left( \left| {{\psi }_{\ell }} \right\rangle \left\langle {{\psi }_{\ell }} \right| \right)$ reads

$$\rho_A = 2^{(n_B - n_A)/2} \sum_{s_A \in \{0,x,y,z\}^{n_A}} \xi_\ell(s_A 0_B) \bigotimes_{i=1}^{n_A} \sigma^{s_i}[i], \qquad (33)$$

where $s_A 0_B$ denotes the Pauli strings acting trivially on the B qubits.

The coefficients $\mathbb{E}\left[ \xi _{\ell }^{2}(s) \right]$ form a probability distribution on ${{\{0,x,y,z\}}^{n}}$ for all ℓ, and these probabilities evolve as a Markov chain whose transition matrix maps the distribution ${{\left( \mathbb{E}\left[ \xi _{\ell }^{2}(s) \right] \right)}_{p}}$ at step ℓ to the distribution ${{\left( \mathbb{E}\left[ \xi _{\ell +1}^{2}(s) \right] \right)}_{q}}$ at step ℓ+1. Then, after a certain number of steps of the Markov chain, an abrupt approach to the stationary distribution occurs, giving rise to a cut-off effect in the entanglement probability distribution [49]. Ref. [13] discussed how to identify the transition from the phase of rapid spread of entanglement to the stationary phase where entanglement is typically maximal.

Actually, the efficient generation of typical entanglement features can be traced back to the fact that random circuits of only polynomial length form approximate 1- and 2-designs [14, 28]. The notion of k-designs quantifies the extent to which pseudo-random operators behave like the uniform distribution. Let $\{{{p}_{i}},{{U}_{i}}\}$ be an ensemble of unitary operators and define

$$\mathcal{G}_W(\rho) = \sum_i p_i\, U_i^{\otimes k}\, \rho \left( U_i^\dagger \right)^{\otimes k}, \qquad \mathcal{G}_H(\rho) = \int_{{\rm U}(N)} U^{\otimes k}\, \rho \left( U^\dagger \right)^{\otimes k} {\rm d}\mu_H(U), \qquad (34)$$

then the ensemble is a unitary k-design if ${{\mathcal{G}}_{W}}={{\mathcal{G}}_{H}}$, and an $\epsilon$-approximate unitary k-design if $\parallel {{\mathcal{G}}_{W}}-{{\mathcal{G}}_{H}}{{\parallel }_{\diamond }}\leqslant \epsilon $, with $\parallel \bullet {{\parallel }_{\diamond }}$ the diamond norm [35].

In Ref. [28] it is conjectured, based on an analogous classical result, that a random circuit on n qubits of length ${\rm poly}(n,k)$ is an approximate k-design. While this is not proved there, it is rigorously shown that a circuit of length $O(n(n+{\rm log} 1/\epsilon ))$ yields an $\epsilon$-approximate 2-design (so the first two moments are equal, within $\epsilon$, to those of the Haar distribution). More recent results show that certain random circuits are actually approximate k-designs of polynomial length for any fixed k [6]. (This does not mean that the circuits approximately generate the Haar measure in a reasonable number of gates: the statement is that if one fixes k, then poly(n) gates suffice; it is not a statement about how the number of gates required scales with k for fixed n.)
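
A standard, equivalent diagnostic is the frame potential $F^{(k)}=\mathbb{E}_{U,V}\,|{\rm tr}(U^\dagger V)|^{2k}$, which is bounded below by its Haar value k! (for $N\geqslant k$), with equality precisely for k-designs. The sketch below (ours) estimates it for a sampled Haar ensemble as a baseline:

```python
import math
import numpy as np

def haar_unitary(N, rng):
    Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def frame_potential(ensemble, k):
    """Monte-Carlo estimate of F^(k) = E |tr(U^dag V)|^(2k), U, V independent."""
    m = len(ensemble)
    vals = [abs(np.trace(ensemble[i].conj().T @ ensemble[j])) ** (2 * k)
            for i in range(m) for j in range(m) if i != j]
    return np.mean(vals)

rng = np.random.default_rng(3)
N, k = 8, 2
us = [haar_unitary(N, rng) for _ in range(200)]
print(frame_potential(us, k), math.factorial(k))   # both close to 2
```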

In Ref. [53] a different scheme to generate random quantum circuits, one that does not make use of classical random numbers, was proposed. It relies on a particular type of entangled state called a weighted graph state. Consider n × m qubits sitting on the vertices of a simple graph G embedded in a two-dimensional lattice, each one in the state $\left| + \right\rangle $. Then a weighted graph state is

$$\left| G \right\rangle = \prod_{\{a,b\}} U_{a,b} \left| + \right\rangle^{\otimes nm}, \qquad (35)$$

where the product is taken over all edges and the unitaries are defined as

$$U_{a,b} = \exp\left( -{\rm i}\, A_{a,b} \left| 11 \right\rangle \left\langle 11 \right|_{a,b} \right), \qquad (36)$$

where ${{A}_{a,b}}$ are the entries of the adjacency matrix. Suppose that the first column of qubits represents the input state; then measurements are performed successively on columns 1 through $m-1$, leaving the output state on the last column (the mth). Furthermore, on each column one just performs projective measurements in the ${{\{\left| + \right\rangle ,\left| - \right\rangle \}}^{\otimes n}}$ basis. In this way the randomness of the measurement outcomes chooses the particular circuit and no classical random numbers are necessary.

Moreover, non-universal sets of gates can also generate typically maximal entanglement. Stabilizer states, an important discrete subset of general quantum states in finite dimensions, have typically maximal entanglement [12, 61], and applying gates that are universal for stabilizer states generates this typical entanglement in finite time [13]. In addition it has been shown that random circuits of elementary diagonal (in the computational basis) unitary gates are also 2-designs, for a suitable definition of diagonal 2-designs [47, 48]. The typical entanglement generated by these depends on the initial state, e.g. the product state ${{\left| 0 \right\rangle }^{\otimes n}}$ is left invariant but certain superposition product states will typically become maximally entangled.

5. Phase transitions of entanglement

We have seen in section 3 that uniformly distributed states are typically close to being maximally entangled. That is, the average entanglement (as quantified, e.g., by the Renyi entropy of entanglement) is close to its maximum value, and the probabilities of deviation from the average are exponentially suppressed in the dimension of the quantum system.

Consider a bipartite system of dimension ${{N}_{A}}\times {{N}_{B}}$, with ${{N}_{A}}\leqslant {{N}_{B}}$. In terms of the squared Schmidt coefficients, the typicality of entanglement is expressed by the fact that for typical states ${{p}_{i}}\sim 1/{{N}_{A}}$. However, this information alone is not sufficient to characterize the typical distribution of the squared Schmidt coefficients (also known as the entanglement spectrum). This goal can be achieved by applying the method of stationary phase in the limit ${{N}_{A}}\to \infty $. Moreover, the same method allows one to derive the explicit distribution of certain entanglement measures, e.g., the Renyi entropies of order $q\gt 1$.

Let us consider the integral of the probability density of the squared Schmidt coefficients given by equation (24)

$$Z = \int \prod_{i=1}^{N_A} {\rm d}p_i\; \delta\left( 1 - \sum_{i=1}^{N_A} p_i \right) \prod_{i=1}^{N_A} p_i^{N_B - N_A} \prod_{i<j} \left( p_i - p_j \right)^2. \qquad (37)$$

This gives

$$Z = \int_{C(1)} \prod_{i=1}^{N_A} {\rm d}p_i\; {\rm e}^{-\left[ V(p) + V_{\rm ext}(p) \right]}, \qquad (38)$$

where the integration is over the region $C(1)$ determined by the constraints ${{\sum }_{i}}{{p}_{i}}=1$ and ${{p}_{i}}\geqslant 0$. The latter expression for Z shows the formal analogy between the statistics of random states of a quantum system and the thermodynamics of a two-dimensional Coulomb gas (a well known fact in random matrix theory). According to this analogy, Z corresponds to the partition function of NA charged particles on a line with coordinates ${{p}_{i}}\in [0,1]$, interacting through the two-dimensional Coulomb potential $V=-2{{\sum }_{i\lt j}}{\rm ln} |{{p}_{i}}-{{p}_{j}}|$, in the external potential ${{V}_{{\rm ext}}}=-({{N}_{B}}-{{N}_{A}}){{\sum }_{i}}{\rm ln} {{p}_{i}}$.

For ${{N}_{A}}\gg 1$ one can apply the method of stationary phase and evaluate

$$Z \approx {\rm e}^{-E_s}, \qquad (39)$$

where

$$E_s = \min_{p_i \geqslant 0} \left[ V(p) + V_{\rm ext}(p) + \mu \left( \sum_i p_i - 1 \right) \right] \qquad (40)$$

is the minimum energy under the constraints ${{\sum }_{i}}{{p}_{i}}=1$ and ${{p}_{i}}\geqslant 0$, and μ is the associated Lagrange multiplier. Notice that the typical distribution of the squared Schmidt coefficients is the one that minimizes the energy; it solves the stationary phase equation

$$2 \sum_{j \neq i} \frac{1}{p_i - p_j} + \frac{N_B - N_A}{p_i} = \mu. \qquad (41)$$

In the limit ${{N}_{A}}\to \infty $ we replace the sum with an integral and obtain the equation

$$2\, {\rm P}\!\!\int_0^1 {\rm d}p' \, \frac{\omega(p')}{p - p'} + \frac{N_B - N_A}{p} = \mu \qquad (42)$$

(P denotes the Cauchy principal value),

which has to be solved under the constraint

$$\int_0^1 {\rm d}p\; p\, \omega(p) = 1, \qquad (43)$$

where $\omega (p)={{\sum }_{i}}\delta (p-{{p}_{i}})$ is the density of the squared Schmidt coefficients (satisfying $\int _{0}^{1}{\rm d}p\;\omega (p)={{N}_{A}}$). The solution is given by the Marchenko–Pastur distribution [42, 51] which, under the assumption ${{N}_{A}}\leqslant {{N}_{B}}$ made above, reads

$$\omega(p) = \frac{N_A N_B}{2\pi} \frac{\sqrt{\left( p_+ - p \right)\left( p - p_- \right)}}{p}, \qquad p \in \left[ p_-, p_+ \right], \qquad (44)$$

where

$$p_\pm = \frac{1}{N_A} \left( 1 \pm \sqrt{\frac{N_A}{N_B}} \right)^2. \qquad (45)$$
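
The Marchenko–Pastur law is easy to check against sampled spectra of random reduced states (a sketch of ours, using the edges of equation (45)):

```python
import numpy as np

NA, NB, draws = 100, 200, 50
rng = np.random.default_rng(4)
p = []
for _ in range(draws):
    Psi = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))
    Psi /= np.linalg.norm(Psi)
    p.extend(np.linalg.svd(Psi, compute_uv=False) ** 2)
p = np.array(p)

pm = (1 - np.sqrt(NA / NB)) ** 2 / NA                 # p_-, equation (45)
pp = (1 + np.sqrt(NA / NB)) ** 2 / NA                 # p_+
hist, edges = np.histogram(p, bins=20, range=(pm, pp))
centers = 0.5 * (edges[1:] + edges[:-1])
emp = hist / (draws * (edges[1] - edges[0]))          # empirical density
mp = NA * NB / (2 * np.pi) * np.sqrt((pp - centers) * (centers - pm)) / centers
print(np.round(emp[::4]), np.round(mp[::4]))          # agree up to fluctuations
```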

Given an entanglement measure $\mathcal{E}(p)$, the same approach can be applied to compute its probability density

$$P(\mathcal{E}) \propto \int_{C(1,\mathcal{E})} \prod_{i=1}^{N_A} {\rm d}p_i\; {\rm e}^{-\left[ V(p) + V_{\rm ext}(p) \right]}, \qquad (46)$$

where the integration is over the region $C(1,\mathcal{E})$ determined by the constraints ${{\sum }_{i}}{{p}_{i}}=1$, $\mathcal{E}(p)=\mathcal{E}$ and ${{p}_{i}}\geqslant 0$. For ${{N}_{A}}\to \infty $ the stationary phase approximation yields

$$P(\mathcal{E}) \approx {\rm e}^{-\left[ E_s(\mathcal{E}) - E_s \right]}, \qquad (47)$$

where ${{E}_{s}}(\mathcal{E})$ is the minimum energy under the above constraints. The corresponding solution ${{\omega }_{s}}(p|\mathcal{E})$ describes the entanglement spectrum of states belonging to the submanifold with $\mathcal{E}(p)=\mathcal{E}$.

As already mentioned, the entanglement spectrum has been characterized as a function of the purity [20, 27], the von Neumann entropy [23], and all Renyi entropies with $q\gt 1$ [45] (see also [15]). It has been observed that the entanglement spectrum changes abruptly at two critical values of the Renyi entropy. This property defines three phases with different entanglement features [17, 20, 45, 46]. The phase corresponding to large values of the entropy contains maximally entangled states. The central phase contains the typical states. Finally, the third phase contains separable states. In particular, while in the first two phases all the squared Schmidt coefficients are typically $\Omega (1/{{N}_{A}})$, for low values of the entropy there is a finite probability that a single squared Schmidt coefficient equals $\mu =\Omega (1)$. (The latter phase contains a rich structure of metastable configurations corresponding to local minima of the energy, see [17].) The qualitative features of the entanglement spectrum in the three phases are depicted in figure 1.

Figure 1. Qualitative entanglement spectra at ${{N}_{A}}={{N}_{B}}=N$ for states in (from left to right): the phase containing maximally entangled states; the phase containing typical states; the phase containing separable states (not to scale).


6. Typical entanglement as an approach to thermodynamics

This fundamental law [that systems are in a Gibbs thermal state] is the summit of statistical mechanics, and the entire subject is either the slide-down from this summit, as the principle is applied to various cases, or the climb-up to where the fundamental law is derived and the concepts of thermal equilibrium and temperature T clarified.

Feynman, Statistical Physics, Benjamin-Cummings, 1972

The phenomenon of typical entanglement has been advocated as an alternative to approaches based on ergodicity and mixing to ascend Feynmanʼs summit by probing and justifying the thermal state assumption [25, 26, 38, 40, 55].

To understand whether the thermal state can be justified in terms of typical entanglement we begin by considering the cleanest case wherein all states of the system have the same energy (such that the Hamiltonian is proportional to the identity, e.g. H = 0.) In this case the Gibbs thermal state ${{\rho }_{th}}(\beta ,H)={{\sum }_{i}}\frac{{\rm exp} (-\beta {{E}_{i}})}{Z}\left| i \right\rangle \left\langle i \right|$ reduces to ${{\rho }_{th}}(\beta ,H)=\sum _{i=1}^{{{N}_{A}}}\frac{1}{{{N}_{A}}}\left| i \right\rangle \left\langle i \right|=\frac{I}{{{N}_{A}}}$, where NA is the dimension of the system in question. For our purposes it is crucial to note that this is equivalent to the reduced state ${\rm t}{{{\rm r}}_{B}}{{\rho }_{AB}}$ in the event that ${{\rho }_{AB}}$ is pure and A and B are maximally entangled (assuming ${{N}_{B}}\geqslant {{N}_{A}}$), since pure states are defined to be maximally entangled when the von Neumann entropy of the smaller subsystem is maximal and this is the case precisely for the state of the smaller system being ${{\rho }_{A}}=\frac{I}{{{N}_{A}}}$. Thus—in this case of H = 0—an argument that says that the subsystem should be maximally entangled with the environment would also imply that the system is in a thermal state. One can then use the typical entanglement phenomenon—for dimensions of A and B where it exists—to say that typically the smaller system should be in a thermal state.

In the case of a non-trivial Hamiltonian, the argument for justifying the Gibbs state in terms of typical entanglement is less direct and still a topic of active research. In Refs. [25, 26, 38, 40, 55] the following line is taken. The principle of equal a priori probability states that all allowed states, given e.g. a restriction on the energy of the total system (A) + environment (B) state, are equally likely, i.e. that ${{\rho }_{AB}}=\frac{I}{{{N}_{R}}}$ where NR is the dimension (the minimal number of basis vectors) needed to express the most general state satisfying the restriction (e.g. $H{{\left| \psi \right\rangle }_{AB}}=E{{\left| \psi \right\rangle }_{AB}}$). Under certain additional assumptions it is well established that the principle of equal a priori probability implies the Gibbs state on the smaller system A. Now, they show, if rather than relying on the principle of equal a priori probability one considers states ${{\left| \psi \right\rangle }_{AB}}$ drawn from the Haar measure, then the reduced states are typically the same, i.e. typically ${\rm t}{{{\rm r}}_{B}}\frac{I}{{{N}_{R}}}\approx {\rm t}{{{\rm r}}_{B}}{{\left| \psi \right\rangle }_{AB}}\left\langle \psi \right|$. To gain an intuition for why this is the case, note firstly that for some given observable, if we pick a typical state of a large system it is very unlikely that the observable carries predictability; on that particular measurement it is thus very unlikely that there is a difference to the uniform distribution on the restricted state space. Secondly, recall that the number of local measurements on A, i.e. of the form ${{g}_{A}}\otimes I$, is very small and fixed by the subsystem size, so that increasing the environment makes it increasingly unlikely that any of these carries predictability. If none of them carries predictability, the state is indistinguishable from the reduced state of the uniform distribution.

For completeness, even though this argument is independent of typical entanglement, we briefly give a standard argument for how to get the canonical Gibbs state once one has the uniform distribution over states with a given energy (or in a given energy interval), see e.g. [38]. Firstly, assume that the interaction part of the Hamiltonian is so weak that the energy eigenstates of energy E can be well approximated as $\left| {{E}_{i}} \right\rangle \left| E-{{E}_{i}} \right\rangle $. There may, crucially, be degeneracies; we assume for simplicity that these occur only in the environment (the argument is easily generalized), so that the states are labelled $\left| {{E}_{i}} \right\rangle \left| E-{{E}_{i}},j \right\rangle $, with $j=1,2,\ldots ,{{j}_{{\rm max} }}(i)$, and the total dimension of the subspace is ${{\sum }_{i}}{{j}_{{\rm max} }}(i)$. Then

$${\rm tr}_B \left( \frac{I}{N_R} \right) = \sum_i \frac{j_{\max}(i)}{\sum_{i'} j_{\max}(i')} \left| E_i \right\rangle \left\langle E_i \right|. \qquad (48)$$

Now make the second crucial assumption that $j_{\max}(i)$ is large and changes only slightly when altering i, such that $j_{\max}(i) := j_{\max}(E - E_i) \approx j_{\max}(E) - E_i\, j'_{\max}(E) \approx j_{\max}(E)\, \exp\left( -E_i\, \frac{j'_{\max}(E)}{j_{\max}(E)} \right)$. Now define the inverse temperature as $\beta := \frac{j'_{\max}(E)}{j_{\max}(E)}$. The Gibbs state follows.

One can identify certain advantages and disadvantages with justifying the thermal state in this manner. If one models a quantum system interacting with an environment it is natural to expect them to become entangled, so the idea that thermalization is associated with entanglement with the environment seems to follow naturally from assuming that quantum theory is a good model of reality. Moreover it gives thermalization more of an objective character than e.g. Jaynes' maximum entropy principle [34]. This is because if two systems are highly entangled, the outcomes of measurements on the individual systems cannot be predictable to any external observer, whereas Jaynes' argument is concerned with classical subjective lack of knowledge. At the same time there is an arbitrariness regarding the measure from which to pick the global pure state. The Haar measure is mathematically natural but there are several physical issues. Many of these have already been mentioned in this review, and it seems the conclusions often survive adding further physical restrictions to the set of states, but understanding how physical the typical entanglement approach is remains an area of active research.

A key original paper on typical entanglement concerned black hole thermodynamics, more specifically black hole formation and evaporation [52]; very similar ideas appeared already in [38]. A starting point was to treat a black hole formation and evaporation process as fully quantum and in particular assume that the total formation and evaporation process acts as a unitary evolution on the systems involved. This is in contrast to Hawkingʼs original semi-classical description where the total evolution is modelled as irreversible. The basic idea of Pageʼs model for black hole formation and evaporation is that there is some matter sitting in space which collapses to a black hole, undergoes a unitary evolution, and then gradually splits into an increasing number of subsystems outside the hole and a decreasing number inside the hole. At the level of the quantum model, the total system is a pure state ${{\left| \psi \right\rangle }^{(0)}}$ which undergoes a unitary evolution to ${{\left| \psi \right\rangle }^{(1)}}=U{{\left| \psi \right\rangle }^{(0)}}$. Page models this U as picked from the Haar measure. Next we introduce a partition of the system into two parts, representing the inside and outside of the black hole event horizon as described by a distant observer. This partition is gradually shifted to represent subsystems leaking out of the hole. Now the typical phenomenon tells us that the entropy of the subsystem outside will typically increase until the dimension of the outside equals the inside (this is sometimes called the Page-time), and then decrease towards 0. This means that any information encoded in the choice of initial pure state of the total system is recovered in principle after the evaporation is complete, according to this model. At the same time, as Page points out, if one only looked at the reduced states of the subsystems as they come out it would appear all information is lost, as it is hidden in the correlations. Thus, he argues, calculations which show entropy increase in the reduced states of subsystems do not prove that information is lost. In fact typically states have both maximally mixed subsystems and maximal entanglement in which the information about the original state is encoded. Apart from those early insights random unitaries and typical entanglement remain an important part of the discourse concerning the Black Hole information paradox, see for example [8], which states

Our best understanding of the growth and decay of the (von Neumann) entropy of black hole radiation comes from models based on random unitaries.

There is in addition a more general perspective on typical entanglement which is finding use both in thermodynamics and black hole arguments, going by the name of the decoupling theorem, see e.g. [29]. The standard description above concerns a bipartite state ${{\rho }_{AB}}$ as described by some implicit observer C. One may view the decoupling theorem as concerning the same scenario but viewed from a second external observer who also includes C in the description. When A and B are maximally entangled, so that C assigns maximal entropy to A, this would from the external observerʼs perspective be described as the state on AC being

$$\rho_{AC} = \frac{I}{N_A} \otimes \rho_C. \qquad (49)$$

One says that A is decoupled from C. The decoupling theorem is a proven quantification of the statement that when a Haar random unitary is applied to AB, A typically becomes decoupled from C. More specifically, ${{\rho }_{AC}}\approx \frac{I}{{{N}_{A}}}\otimes {{\rho }_{C}}$, provided that the dimension NB satisfies

Equation (50)
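
The decoupling phenomenon itself is easy to observe numerically. The sketch below (our own illustration, not the decoupling theorem) draws a Haar-random pure state on A⊗B⊗C and shows the trace distance between $\rho_{AC}$ and $\frac{I}{N_A}\otimes\rho_C$ shrinking as $N_B$ grows:

```python
import numpy as np

def random_pure(N, rng):
    v = rng.normal(size=N) + 1j * rng.normal(size=N)
    return v / np.linalg.norm(v)

def decoupling_distance(NA, NB, NC, rng):
    """|| rho_AC - I/NA (x) rho_C ||_1 for a Haar-random pure state on A x B x C."""
    psi = random_pure(NA * NB * NC, rng).reshape(NA, NB, NC)
    rho_AC = np.einsum('abc,dbe->acde', psi, psi.conj()).reshape(NA * NC, NA * NC)
    rho_C = np.einsum('abc,abe->ce', psi, psi.conj())
    target = np.kron(np.eye(NA) / NA, rho_C)
    return np.abs(np.linalg.eigvalsh(rho_AC - target)).sum()

rng = np.random.default_rng(5)
for NB in [2, 8, 32, 128]:
    d = np.mean([decoupling_distance(2, NB, 2, rng) for _ in range(20)])
    print(NB, d)                      # the distance decreases as NB grows
```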

This scenario is more general than the standard typical entanglement scenario in that there may be entanglement between AB and C, whereas writing down ${{\rho }_{AB}}$ as a well defined state as assigned by C implicitly assumes this is not the case (it implicitly assumes the ABC state is 'quantum-classical', so that the conditional reduced states on AB corresponding to different states of C are well-defined). The decoupling theorem is for example used to make a more sophisticated version of Pageʼs argument [31]. In this scenario, the black hole has formed and been evaporating for some time in the same manner as in Pageʼs argument, but then a new system is introduced, which consists of two maximally entangled halves (one representing some secret information and the other half the record thereof). One of the entangled halves is thrown into the black hole and one half is kept outside as a reference (R), and the question is how quickly, in terms of the number of subsystems emitted, this information comes back out of the black hole, meaning how quickly R is purified by systems outside of the black hole. Haar unitary black hole evolution is again applied to the new larger black hole. The decoupling theorem can then be used to show that this information may leak out very quickly as a function of the number of subsystems emitted, hence the expression 'black holes as mirrors'. To see this more concretely, let A be the remnant of the black hole and B the part emitted (since after the diary was thrown in). The decoupling theorem then says that for sufficiently large B, A will be decoupled from R, i.e. ${{\rho }_{A,R}}\approx {{\rho }_{A}}\otimes {{\rho }_{R}}$. This means that R must be purified by something outside of A: one possible purification of ${{\rho }_{A}}\otimes {{\rho }_{R}}$ is a product state ${{\left| \psi \right\rangle }_{AT}}\otimes \left| R,T^{\prime} \right\rangle $ where T and $T^{\prime} $ do not overlap with A and are thus outside the black hole. For this purification R is indeed purified by something outside of A, and all purifications are equivalent up to a unitary on the purifying system ($TT^{\prime} $), which cannot change the entanglement between A and R. We also note that another version of the decoupling theorem is formulated in [10] to describe the evaporation of black holes with trans-event-horizon entanglement and provide a potential solution to the black hole firewall paradox [1, 10].

To end this section we remark that black holes are one of the few scenarios where one might not want to assume that quantum theory holds, and two papers consider the same kind of question for probabilistic theories with reversible dynamics more generally, in what is known as the convex framework [43, 44]. Generalizations of the typical entanglement statement [43] and the decoupling theorem [44] are proven in those works, elucidating which features of quantum theory are involved in this phenomenon. In the case of black holes it is for example shown that one may imagine a non-quantum world inside the black hole which holds on to most of the information even after most subsystems have leaked out; basically this could happen if the subsystems inside the hole have more free parameters than in the quantum case [44].

7. Random quantum states for CV systems

The starting point for the discussion of entanglement typicality in an N-dimensional quantum system is the probability density distribution of the (squared) Schmidt coefficients across a bipartition of the system. One natural way to think of the N-dimensional system is as a collection of ${{{\rm log} }_{2}}\;N$ qubits, which are then split into two subsets.

A different approach has to be taken if instead one wants to describe a system composed of n CV subsystems (introduced in section 2.1). A case of special interest is that of Gaussian states of the n CV quantum subsystems. In that case, given a bipartition of the system into nA and nB CV systems (with ${{n}_{A}}+{{n}_{B}}=n$ and ${{n}_{A}}\leqslant {{n}_{B}}$), the relevant object is the probability density distribution of the symplectic eigenvalues of subsystem A. In order to compute it one has to recall that pure Gaussian states form the submanifold of states obtained by applying the subgroup of Gaussian unitaries to the vacuum state, that is

$$\left| \psi_G \right\rangle = U_G \left| 0 \right\rangle^{\otimes n}. \qquad (51)$$

Let us also recall that Gaussian unitaries are of the form [7]

$$U_G = \exp\left( -{\rm i} \sum_{i,j=1}^{n} Y_{ij}\, a_i^\dagger a_j \right) \exp\left[ \frac{1}{2} \sum_{k=1}^{n} s_k \left( a_k^2 - a_k^{\dagger 2} \right) \right] \exp\left( -{\rm i} \sum_{i,j=1}^{n} Y'_{ij}\, a_i^\dagger a_j \right), \qquad (52)$$

where ${{a}_{i}},a_{i}^{\dagger }$ are mode ladder operators, Y and $Y^{\prime} $ are Hermitian matrices, and ${{s}_{1}},{{s}_{2}},...,{{s}_{n}}$ are real and non-negative parameters. The Gaussian unitaries of the form ${\rm exp} \left( -{\rm i}\sum _{i,j=1}^{n}{{Y}_{ij}}a_{i}^{\dagger }{{a}_{j}} \right)$, with Y Hermitian, define a representation of the group ${\rm U}(n)$. Equation (52) expresses the well known Euler, or Bloch–Messiah, decomposition of symplectic operations into passive optical elements (essentially beam splitters and phase plates in the quantum optics laboratory, parametrized here by Y and $Y^{\prime} $) and single-mode squeezing operations (implemented, optically, in parametric down-conversion processes through nonlinear crystals, and represented here by the parameters $\{{{s}_{k}}\}$).

The invariance of Gaussian states under the action of Gaussian unitaries allows one to compute the probability density distribution of the symplectic eigenvalues ${{\nu }_{1}},{{\nu }_{2}},...,{{\nu }_{{{n}_{A}}}}$ on the manifold of ${{n}_{A}}+{{n}_{B}}$ Gaussian states (${{n}_{A}}\leqslant {{n}_{B}}$). One then obtains [41]

Equation (53)

where $Z_{CV}^{-1}$ is a normalization factor. Notice that, because the manifold of Gaussian states is unbounded, the density $P(\nu )$ cannot be globally normalized.
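
For concreteness, pure Gaussian states can be sampled directly through the Euler decomposition (52): draw the two passive operations from the Haar measure on U(n), fix (or draw) the squeezing parameters, and read off the symplectic spectrum of a reduction. A sketch of ours follows; we work in $(q_1,\ldots,q_n,p_1,\ldots,p_n)$ ordering, equivalent to equation (10) up to a permutation, and the squeezing normalization is our own convention:

```python
import numpy as np

def haar_unitary(n, rng):
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def passive_symplectic(U):
    """Orthogonal symplectic matrix (qq..pp ordering) of a passive unitary U."""
    X, Y = U.real, U.imag
    return np.block([[X, -Y], [Y, X]])

def random_pure_gaussian_cov(n, s, rng):
    """Covariance matrix S S^T of (passive) x (squeezers) x (passive) |vacuum>."""
    S = (passive_symplectic(haar_unitary(n, rng))
         @ np.diag(np.concatenate([np.exp(s), np.exp(-s)]))
         @ passive_symplectic(haar_unitary(n, rng)))
    return S @ S.T

def symplectic_spectrum(sigma):
    n = sigma.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    ev = np.linalg.eigvals(1j * J @ sigma).real
    return np.sort(ev[ev > 0])

n, nA = 6, 2
rng = np.random.default_rng(6)
s = rng.uniform(0, 1, size=n)                   # squeezing parameters
sigma = random_pure_gaussian_cov(n, s, rng)
idx = list(range(nA)) + list(range(n, n + nA))  # q's and p's of subsystem A
nu = symplectic_spectrum(sigma[np.ix_(idx, idx)])
print(nu)             # nu_i >= 1; the entanglement follows via h(nu), eq. (14)
```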

To cope with this problem, in line with the strand of works relating typicality to thermodynamical behaviour reviewed in section 6, Refs. [59, 60] resorted to imposing a statistical (canonical) principle on the desired measure [26, 55]:

Given a sufficiently small subsystem of the universe, almost every pure state of the universe is such that the subsystem is approximately in the 'canonical state' ρc .

The canonical state is the local reduction of the global state picked from a distribution of states with maximal entropy under the constraint that the expectation value of the total energy operator HG be E. This amounts to taking a thermal canonical state, namely a Gaussian state with null first moments and covariance matrix ${{\sigma }_{c}}=(1+T/2)I$. Here the temperature T is defined by passage to the thermodynamical limit, that is for $n\to \infty $ and $E\to \infty $, $(E-2n)/n\to T$ (assuming kB = 1 for the Boltzmann constant).

Then, in analogy with thermodynamics, two privileged options can be considered: introducing a temperature or fixing the energy. These two possibilities correspond to canonical and micro-canonical approaches respectively.

Within the former approach the modes' energies $\vec{E}=({{E}_{1}},\ldots ,{{E}_{n}})$—defined as in equation (12) and given, in terms of the variables sk of equation (52), by ${{E}_{k}}=\left( {{{\rm e}}^{{{s}_{k}}}}+{{{\rm e}}^{-{{s}_{k}}}} \right)$—are assumed to be distributed according to the probability distribution

$$p(\vec{E}) = \prod_{j=1}^{n} \frac{{\rm e}^{-(E_j - 2)/T}}{T}, \qquad E_j \geqslant 2. \qquad (54)$$

Actually, this distribution maximizes the entropy given knowledge of the energies Ej for fixed average total energy ${{E}_{av}}=nT$. It follows that the canonical average Qc (T) over pure Gaussian states at temperature T of a quantity $Q(\vec{E},U,U^{\prime} )$ determined by the second moments alone reads

$$Q_c(T) = \int {\rm d}\vec{E}\; p(\vec{E}) \int {\rm d}\mu_H(U)\, {\rm d}\mu_H(U')\; Q(\vec{E}, U, U'), \qquad (55)$$

where the integration over the energies is understood to be carried out over the whole allowed domain.

Focusing on the behavior of the entanglement of a subsystem of m modes (as quantified by the von Neumann entropy of the reduction describing such a subsystem), keeping m fixed and letting the total number of modes $n\to \infty $, the average asymptotic entanglement has been determined and it has been rigorously proved that the variance of the entanglement tends to zero in the thermodynamical limit [60].

The micro-canonical approach consists in assuming a Lebesgue ('flat') measure for the energies Ej inside the region ${{\Gamma }_{E}}=\{\vec{E}:{{\sum }_{j}}{{E}_{j}}\leqslant E\}$. Denoting by ${\rm d}{{P}_{mc}}(\vec{E})$ the probability of the occurrence of the energies $\vec{E}$, one has

$${\rm d}P_{mc}(\vec{E}) = \mathcal{N}\, {\rm d}E_1 \cdots {\rm d}E_n \qquad \text{for } \vec{E} \in \Gamma_E, \qquad (56)$$

where $\mathcal{N}$ is a normalization constant equal to the inverse of the volume of ${{\Gamma }_{E}}$. Such a flat distribution maximizes the entropy over the local energies under the constraint of bounded total energy.

The micro-canonical average Qmc (E) over pure Gaussian states at maximal energy E of a quantity $Q(\vec{E},U)$ determined by the second moments alone is thus defined as

$$Q_{mc}(E) = \mathcal{N} \int_{\Gamma_E} {\rm d}\vec{E} \int {\rm d}\mu_H(U)\; Q(\vec{E}, U), \qquad (57)$$

where the integration over the Haar measure is understood to be carried out over the whole compact domain of the variables Yij of equation (52), compactly represented by the unitary U in agreement with the convention above. The normalization can be easily determined as $\mathcal{N}=n!/{{(E-2n)}^{n}}$ and leads to a marginal probability density ${{P}_{n}}({{E}_{j}},E)$ for each of the energies Ej given by

$$P_n(E_j, E) = \frac{n \left( E - E_j - 2(n-1) \right)^{n-1}}{\left( E - 2n \right)^{n}}, \qquad 2 \leqslant E_j \leqslant E - 2(n-1). \qquad (58)$$

Although the energies Ej are not independently and identically distributed for finite n, they are so in the thermodynamical limit, where one has ${{P}_{n}}({{E}_{j}},E)\simeq {{{\rm e}}^{-\frac{{{E}_{j}}-2}{T}}}/T$ by defining $T=(E-2n)/n$. The equivalence of statistical ensembles of classical thermodynamics is thus recovered, and the general canonical principle fulfilled.
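
The flat measure on $\Gamma_E$ is easy to sample (a uniform point of a simplex can be obtained from normalized exponential variates), which gives a direct check of the marginal density written in equation (58) above (our reconstruction); a sketch of ours:

```python
import numpy as np

n, E = 10, 50                                   # n modes, total energy bound E
rng = np.random.default_rng(7)
g = rng.exponential(size=(200000, n + 1))
# flat (uniform) samples of the simplex E_j >= 2, sum_j E_j <= E
Ej = 2 + (E - 2 * n) * g[:, :n] / g.sum(axis=1, keepdims=True)

hist, edges = np.histogram(Ej[:, 0], bins=40, density=True)
c = 0.5 * (edges[1:] + edges[:-1])
model = n * (E - c - 2 * (n - 1)) ** (n - 1) / (E - 2 * n) ** n  # equation (58)
print(np.max(np.abs(hist - model)))             # small sampling error
```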

The average purity of a subsystem of m modes may be determined exactly under both the canonical and the micro-canonical measures. Considering, for simplicity, a single-mode subsystem of an n-mode system, and denoting its canonical and micro-canonical average purity by, respectively, ${{\mathcal{P}}_{c}}$ and ${{\mathcal{P}}_{mc}}$, one has [60]

Equation (59)

where T is the temperature associated with the canonical measure, and

Equation (60)

where E is the micro-canonical energy. As a reference value, the maximal purity ${{\mathcal{P}}_{M}}$ for given energy $E=\tilde{E}+2n$ obeys the relationship

Equation (61)

The standard deviations on such average quantities may also be derived analytically, by evaluating the fourth order moments

Equation (62)

and

Equation (63)

The micro-canonical mean $\mathcal{P}_{mc}^{-2}$ increases monotonically with E for fixed n and, for $n\gt 2$, decreases monotonically with n for given E (although a peculiar finite size effect is apparent for $\tilde{E}\leqslant 10$, with $\mathcal{P}_{mc}^{-2}$ increasing in going from 2 to 3 modes). This general trend just reflects the fact that more available energy generally allows for higher entanglement, while the presence of more modes drains energy away to establish correlations which do not involve the particular chosen mode. The canonical average entanglement is instead monotonically increasing with both temperature and number of modes. This behavior is encountered also for the micro-canonical entanglement with given maximal total energy per mode, which is, even for small n, very similar to the canonical average (upon replacing $\tilde{E}/n$ with T). The increase of the average canonical entanglement with increasing number of modes at fixed temperature is a non-trivial, purely 'geometric' effect, due to the average over the Haar-distributed compact variables. An analogous increase is observed assuming a given, fixed value for the squeezing variables and averaging only over the compact variables: as the number of total modes increases, a given mode has more possibilities of getting entangled, even keeping a fixed mean energy per mode. The standard deviations above generally increase with total energy and temperature for fixed total number of modes (as more energy allows for a broader range of entanglement). Significantly, these partial analytical results clearly show the arising of the concentration of measure around a thermal average. Both in the canonical case and in the micro-canonical one with $\tilde{E}=nT$ the standard deviation decreases with increasing number of modes n, falling to zero asymptotically (after transient finite size effects at very small n).

Moreover, in the micro-canonical instance, the average around which the purity concentrates is generally very distant, even for relatively small n, from the allowed maximum of (61) (which clearly diverges in the thermodynamical limit): e.g., for $\tilde{E}=10n$ the average $\mathcal{P}_{mc}^{-2}$ is, respectively, 16.5 and 257.1 standard deviations away from the maximal value $\mathcal{P}_{M}^{-2}$ for n = 5 and n = 20. Such a distance increases monotonically with the total number of modes. This highly peaked concentration for finite n allows one to obtain strict upper and lower bounds on the average von Neumann entropy of entanglement of single-mode subsystems [60].

8. Outlook

This article is meant to be a summary of mathematical findings related to the notion of typicality of composite quantum states. This approach to the study of local entropies has produced an established framework that can now be applied in several fields of mathematical and fundamental physics. The phase transitions discussed in section 5 are purely classical, but quantum entanglement is liable to provide one with a signature for quantum phase transitions [50]: investigating possible links between typicality and quantum phase transitions would hence be worthwhile. In a different direction, a phase transition occurring for the typical purity [20] can be paralleled to a well known phase transition occurring in conformal field theory [16]. As already mentioned in section 6, the typicality approach to thermodynamics could be extended to the notion of black hole entropy. The notion of typical entanglement for CV systems should also be developed further, and more explicitly connected to random processes and thermodynamics.

Acknowledgments

OCOD acknowledges support from EU Network TherMiQ. CL thanks F Delan Cunden for useful comments.

Appendix. Random mixed states

In this appendix, we sketch a couple of possible approaches to the issue of random mixed states of finite dimensional systems (for a more general approach to the topic, please refer to [66]; see also [18] for the local purity distribution of globally mixed random states). A way to define them is to borrow the results of section 3 about the measure induced by random pure states. Specifically, suppose that a mixed state ρ of a system S comes from a pure state $|\psi {{\rangle }_{SE}}\in {{\mathcal{H}}_{S}}\otimes {{\mathcal{H}}_{E}}$ by tracing away the environment E

$$\rho = {\rm tr}_E \left( \left| \psi \right\rangle_{SE} \left\langle \psi \right| \right), \qquad ({\rm A.1})$$

where $|\psi {{\rangle }_{SE}}$ is picked randomly according to the measure (19). As seen in section 3, this induces a measure on the density operators of the system S. Let us now split the latter into two further subsystems, considering ${{\mathcal{H}}_{S}}={{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$. Then we can look at the entanglement of the random induced states on ${{\mathcal{H}}_{S}}$ with respect to the $A$–$B$ bipartition. In Refs. [2, 3], using tools from high-dimensional convexity (i.e. asymptotic geometric analysis), it has been shown that random induced states on ${{\mathcal{H}}_{S}}={{\mathcal{H}}_{A}}\otimes {{\mathcal{H}}_{B}}$ exhibit a phase transition phenomenon with respect to the dimension NE of the environment space. Assuming ${{N}_{A}}\leqslant {{N}_{B}}$, one can define a threshold given by $N_{A}^{2}{{N}_{B}}$ (up to a poly-log factor): that is, if one has a system AB comprising a number $n={{n}_{A}}+{{n}_{B}}$ of individual qubit systems, then the two subsystems A and B typically share entanglement if ${{n}_{E}}\lt 2{{n}_{A}}+{{n}_{B}}$, and typically do not share entanglement if ${{n}_{E}}\gt 2{{n}_{A}}+{{n}_{B}}$. This approach provides a relationship between entanglement properties and the environment.

Alternatively, to define random mixed states, one can get rid of the environment and start from a global mixed state ρ with fixed purity. Then, following [43], let us consider the ensemble of states obtained by applying a Haar-distributed unitary, according to (18), to ρ. Specifically, suppose we have a system AB comprising a number $n={{n}_{A}}+{{n}_{B}}$ of individual qubit systems. Any state ρ of such a system may always be expressed in the basis $\{{{g}_{j}}\}$ of the space of Hermitian operators as

$$\rho = \sum_{j=0}^{4^n - 1} \xi_j\, g_j, \qquad ({\rm A.2})$$

where each gj is a tensor product of Pauli operators (and identities). The operator basis chosen is clearly orthogonal with respect to the Hilbert–Schmidt inner product

$${\rm tr}\left( g_j g_k \right) = 2^n\, \delta_{jk}, \qquad ({\rm A.3})$$

and such that ${{g}_{0}}={{I}_{{{2}^{n}}}}$. Then, ${{\xi }_{0}}=\frac{1}{{{2}^{n}}}$ by normalization of the quantum state. Besides, this basis features the remarkable property that any two gj and gk for positive j and k are related by a unitary, i.e. $\exists V\in {\rm U}({{2}^{n}}):$ ${{g}_{j}}=V{{g}_{k}}{{V}^{\dagger }}$, $\forall j,k\geqslant 1$.

Let us now fix the Renyi-2 entropy of ρ, i.e. its purity ${\rm tr}{{\rho }^{2}}=\mathcal{P}$, and consider the ensemble of states obtained by applying a Haar-distributed unitary to ρ. Averages with respect to such an ensemble will be indicated by $\mathbb{E}$. We want to determine the average local purity of nA degrees of freedom under such a global average, at fixed global purity.

A proper ordering of the basis elements allows one to write the local state ${{\rho }_{A}}$, without loss of generality, as

$$\rho_A = 2^{n_B} \sum_{j=0}^{4^{n_A} - 1} \xi_j\, g_j^{(A)}, \qquad ({\rm A.4})$$

where, for $j < 4^{n_A}$, $g_j = g_j^{(A)} \otimes I_{2^{n_B}}$

(only operators which reduce to the identity on the space of the nB qubits contribute to the partial trace). Also, the orthogonality of the basis leads to

$${\rm tr}\, \rho_A^2 = 2^{2 n_B + n_A} \sum_{j=0}^{4^{n_A} - 1} \xi_j^2, \qquad ({\rm A.5})$$

while the constraint on the global purity implies

$${\rm tr}\, \rho^2 = 2^n \sum_{j=0}^{4^n - 1} \xi_j^2 = \mathcal{P}. \qquad ({\rm A.6})$$

Now, because the basis elements gj are all unitarily equivalent for $j\geqslant 1$, one has that the Haar averages of the coefficients ξj must be the same: $\mathbb{E}\xi _{j}^{2}=\mathbb{E}\xi _{k}^{2}\quad \forall j,k\geqslant 1.$ Combining this fact with the unitarily invariant constraint (A.6) yields

$$\mathbb{E}\, \xi_j^2 = \frac{\mathcal{P}\, 2^{-n} - 4^{-n}}{4^n - 1}, \qquad \forall j \geqslant 1. \qquad ({\rm A.7})$$

We can now determine the exact average of the local purity of equation (A.5) as

$$\mathbb{E}\, {\rm tr}\, \rho_A^2 = 2^{-n_A} + 2^{n_B}\, \frac{4^{n_A} - 1}{4^n - 1} \left( \mathcal{P} - 2^{-n} \right), \qquad ({\rm A.8})$$

which may be recast as $\frac{\mathbb{E}{\rm tr}\rho _{A}^{2}-{{2}^{-{{n}_{A}}}}}{\mathcal{P}-{{2}^{-n}}}={{2}^{{{n}_{B}}}}\frac{{{4}^{{{n}_{A}}}}-1}{{{4}^{n}}-1}.$ This shows that the ratio between the average deviation from the minimal possible purity of a subsystem and the deviation from the minimum possible global purity is proportional to the ratio between the number of local and global degrees of freedom. It is also apparent that, regardless of the global purity $\mathcal{P}$ and in agreement with asymptotic considerations, the local purity tends to its minimum value in the limit where nB goes to infinity and nA stays finite. Even in the asymptotic case where n goes to infinity but the ratio between nA and n tends to a constant, the local purity tends to the minimum value (zero, in this instance). In the case of a global pure state ($\mathcal{P}=1$), equation (A.8) simplifies to (29).
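
The exact average (A.8) can also be verified by direct Monte-Carlo integration over the Haar measure (our own sketch, using a rank-two global state of tunable purity):

```python
import numpy as np

def haar_unitary(N, rng):
    Z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

nA, nB = 1, 2
n = nA + nB
N, NA, NB = 2 ** n, 2 ** nA, 2 ** nB

# fixed-purity global state: rank-2 mixture with P = w^2 + (1 - w)^2
w = 0.7
rho0 = np.zeros((N, N))
rho0[0, 0], rho0[1, 1] = w, 1 - w
P = w ** 2 + (1 - w) ** 2

rng = np.random.default_rng(8)
vals = []
for _ in range(20000):
    U = haar_unitary(N, rng)
    rho = U @ rho0 @ U.conj().T
    rhoA = np.trace(rho.reshape(NA, NB, NA, NB), axis1=1, axis2=3)   # tr_B
    vals.append(np.trace(rhoA @ rhoA).real)

lhs = np.mean(vals)
rhs = 2 ** (-nA) + 2 ** nB * (4 ** nA - 1) / (4 ** n - 1) * (P - 2 ** (-n))
print(lhs, rhs)                       # agree up to Monte-Carlo error
```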

Footnotes

  • 5  For pure states the notion of separability coincides with that of factorability.

  • 6  It is still not clear if convexity is a further mandatory ingredient.

  • 7  In the asymptotic regime all pure-state entanglement measures coincide with this one.
