Entropic uncertainty relations—a survey
arXiv:0907.3704v1 [quant-ph] 21 Jul 2009

Uncertainty relations play a central role in quantum mechanics. Entropic uncertainty relations in particular have gained significant importance within quantum information, providing the foundation for the security of many quantum cryptographic protocols. Yet, little is known about entropic uncertainty relations with more than two measurement settings. In the present survey, we review known results and open questions.

The uncertainty principle is one of the fundamental ideas of quantum mechanics. Since Heisenberg's uncertainty relations for canonically conjugate variables, such relations have been one of the most prominent examples of how quantum mechanics differs from the classical world (Heisenberg, 1927). Uncertainty relations today are probably best known in the form given by (Robertson, 1929), who extended Heisenberg's result to two arbitrary observables A and B. Robertson's relation states that if we prepare many copies of the state |ψ⟩, and measure each copy individually using either A or B, we have

∆A ∆B ≥ (1/2) |⟨ψ|[A, B]|ψ⟩|,  (1)

where ∆X = (⟨ψ|X²|ψ⟩ − ⟨ψ|X|ψ⟩²)^{1/2} for X ∈ {A, B} is the standard deviation resulting from measuring |ψ⟩ with observable X. The consequence is the complementarity of quantum mechanics: there is no way to simultaneously specify definite values of non-commuting observables. This, and later, formulations concern themselves with the tradeoff between the "uncertainties" in the values of non-commuting observables on the same state preparation. In other words, they compare counterfactual situations.
It was eventually realized that other measures of "spread" of the distribution over measurement outcomes can be used to capture the essence of uncertainty relations, which can be advantageous. Arguably the universal such measure is the entropy of the distribution, which led Hirschman to propose the first entropic uncertainty relation for position and momentum observables (Hirschman, 1957). His results were later improved by the inequalities of (Beckner, 1975) and the uncertainty relations of (Białynicki-Birula and Mycielski, 1975), which we will review below. In (Białynicki-Birula and Mycielski, 1975) it is shown that this relation implies the Heisenberg uncertainty relation (1), and thus entropic uncertainty relations provide us with a more general framework for quantifying "uncertainty".
That entropic uncertainty relations are indeed desirable was pointed out by (Deutsch, 1983), who emphasized the fact that the lower bound given by Robertson's uncertainty relation depends on the state |ψ⟩. In particular, this lower bound is trivial when |ψ⟩ happens to give zero expectation on [A, B], which in finite dimension is always possible. He addressed this problem by proving a first entropic uncertainty relation in terms of the Shannon entropy for any two non-degenerate observables, which gives a bound that is independent of the state to be measured. His uncertainty relation was later improved by (Maassen and Uffink, 1988), following a conjecture by (Kraus, 1987), which we will discuss in detail below. Apart from allowing one to put universal lower bounds on uncertainty even in finite dimension, another side effect of considering entropic uncertainty relations is a conceptual liberation. Indeed, Robertson's inequality (1) is best when the commutator [A, B] is proportional to the identity 𝟙, i.e. when A and B are canonically conjugate, which happens if and only if they are related by a Fourier transform. In the finite dimensional case, (Maassen and Uffink, 1988) show that the largest uncertainty is obtained more generally for so-called mutually unbiased observables, which opens the way for uncertainty tradeoffs of more than two observables. Even though entropic uncertainty relations thus play an important role in our understanding of quantum mechanics, and have interesting applications ranging from quantum cryptography (Damgaard et al., 2005; Koashi, 2005) and information locking (DiVincenzo et al., 2004) to the question of separability (Guehne, 2004), very little is known about them. Indeed, only in the case of two measurement settings do we have a reasonable understanding of such relations. The purpose of this review is to present what is known about entropic uncertainty relations for a number of different entropic quantities.
Let us first consider the general form of an entropic uncertainty relation more formally. Let M_j = {M_j^x | x ∈ X} be a measurement on the space H with a (finite) set of outcomes x ∈ X, that is, for all x we have M_j^x ≥ 0 and Σ_x M_j^x = 𝟙. For any quantum state ρ, the measurement M_j induces a distribution P_j over the outcomes given by P_j(x) = Tr(M_j^x ρ). We will write H_α(M_j|ρ) for an entropy H_α of the resulting distribution. For example, for the Shannon entropy we have

H(M_j|ρ) = −Σ_{x∈X} P_j(x) log P_j(x).

An entropic uncertainty relation captures the incompatibility of several measurements M_1, …, M_L. In particular, any such relation takes the form

(1/L) Σ_{j=1}^{L} H_α(M_j|ρ) ≥ c_{{M_j}}  (2)

for all ρ ∈ S(H), where c_{{M_j}} is a constant depending solely on our choice of measurements, and not on the state ρ. It is a particularly interesting question to find measurements for which c_{{M_j}} is as large as possible.
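To make the notation concrete, the induced distribution P_j(x) = Tr(M_j^x ρ) and its Shannon entropy can be computed directly. The following sketch (in Python with numpy; the qubit example is our own choice) does just that for a projective measurement applied to the maximally mixed state.

```python
import numpy as np

def measurement_distribution(povm, rho):
    """P_j(x) = Tr(M_j^x rho) for each element of the measurement."""
    return np.array([np.real(np.trace(M @ rho)) for M in povm])

def shannon_entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Projective measurement in the computational basis of a qubit, applied
# to the maximally mixed state: both outcomes occur with probability 1/2.
computational = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.eye(2) / 2
p = measurement_distribution(computational, rho)
print(p, shannon_entropy(p))  # [0.5 0.5] 1.0
```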
Outline. In Section I we first provide an overview of the entropic quantities we will use throughout this text. We also introduce the concept of maximally strong uncertainty relations and discuss mutually unbiased bases, which play a special role in the study of uncertainty relations. We then first consider the case of two measurement settings (L = 2) in Section II, which is the only case that is well understood. In Section III we present an overview of the few results known for multiple measurements. We conclude in Section IV with some applications of uncertainty relations in cryptography.

A. Entropic quantities
We begin by introducing all entropic quantities used in this text. The expert reader may safely skip this section. Let P_X be a distribution over a set X, where we write P_X(x) for the probability of choosing a particular element x ∈ X. The Rényi entropy (Rényi, 1960) of this distribution is defined as

H_α(P_X) = (1/(1 − α)) log Σ_{x∈X} P_X(x)^α

for any α ≥ 0 with α ≠ 1. It will be useful to note that the Rényi entropy is in fact related to the α-norm ‖v‖_α of the vector v of probabilities by taking the logarithm,

H_α(P_X) = (α/(1 − α)) log ‖v‖_α.

A special case of the Rényi entropy is the well-known Shannon entropy (Shannon, 1948), obtained by taking the limit α → 1:

H(P_X) = −Σ_{x∈X} P_X(x) log P_X(x).

We are especially interested in the so-called collision entropy, that is, the Rényi entropy of order α = 2 given by

H_2(P_X) = −log Σ_{x∈X} P_X(x)²,

and the min-entropy given by the limit α → ∞ as

H_∞(P_X) = −log max_{x∈X} P_X(x).

The Rényi entropies are monotonically decreasing in α, i.e. H_α(P_X) ≥ H_β(P_X) for α ≤ β.
Note that any such entropies can take on values in the interval 0 ≤ H α (•) ≤ log |X |, where the lower bound is clearly attained if the distribution is sharply defined with P X (x) = 1 for some x ∈ X , and the upper bound is attained when P X (x) = 1/|X | is the uniform distribution.
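The quantities above are easy to compute. A minimal sketch (numpy; the example distribution is our own) covering the general Rényi entropy, its Shannon, collision and min-entropy special cases, and the monotonicity in α:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """H_alpha(P) = (1/(1-alpha)) log2 sum_x P(x)^alpha, with the limiting
    cases alpha -> 1 (Shannon) and alpha -> infinity (min-entropy)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if alpha == 1:                 # Shannon entropy
        return float(-np.sum(p * np.log2(p)))
    if alpha == np.inf:            # min-entropy
        return float(-np.log2(np.max(p)))
    return float(np.log2(np.sum(p ** alpha)) / (1 - alpha))

p = np.array([0.5, 0.25, 0.125, 0.125])
alphas = [0.5, 1, 2, np.inf]
values = [renyi_entropy(p, a) for a in alphas]
# H_alpha is monotonically decreasing in alpha
assert all(values[i] >= values[i + 1] for i in range(len(values) - 1))
```

For this example distribution the four values are approximately 1.87, 1.75, 1.54 and 1 bits, illustrating the monotonic decrease.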
In the following, we will write H_α(B|ρ) to denote the entropy arising from a measurement in an orthonormal basis B = {|b_x⟩}, and H_α(A|ρ) to denote the entropy arising from measuring with an observable A given by the projectors {A^x}.

B. Maximally strong uncertainty relations
An intriguing question is to find measurements which are very incompatible, in the sense that the r.h.s. of (2) is very large. We will refer to this as a strong uncertainty relation. Note that given any set of projective measurements M_1, …, M_L, we can always find a state ρ such that H_α(M_j|ρ) = 0 for one of the measurements M_j, namely by choosing ρ to be an eigenstate of one of the measurement operators. We thus know that the r.h.s. of (2) can never exceed

(1 − 1/L) log |X|.

If for a choice of measurements the lower bound is given by c_{{M_j}} = (1 − 1/L) log |X|, we know that if ρ has zero entropy for one of the measurements, the entropy is maximal for all others. We call a set of measurements that satisfies this property maximally incompatible, and refer to the corresponding uncertainty relation as being maximally strong. As outlined below, mutually unbiased bases lead to maximally strong uncertainty relations for L = 2 measurements. This, however, does not hold in general for the case of L > 2. We will also see that maximally incompatible measurements can be found for any L if we only consider |X| = 2 outcomes.
For measurements in different bases, note that all bases must be mutually unbiased in order for us to obtain maximally strong uncertainty relations: suppose two bases B_1 and B_2 are not mutually unbiased, so that there exist two basis vectors |x⟩ ∈ B_1 and |y⟩ ∈ B_2 with overlap |⟨x|y⟩|² > 1/d. Then choosing ρ = |x⟩⟨x| yields zero entropy when measured in basis B_1 and less than full entropy when measured in the basis B_2.
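This necessity argument can be checked directly. In the sketch below (numpy; the qubit instance is our own choice), an eigenstate of the computational basis gives full entropy in a mutually unbiased basis (rotation by π/4) but strictly less in a basis that is not unbiased (rotation by π/8):

```python
import numpy as np

def H(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def basis(theta):
    """Orthonormal qubit basis rotated by theta from the computational one."""
    return [np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)])]

def entropy_in_basis(psi, theta):
    return H(np.array([abs(np.vdot(v, psi)) ** 2 for v in basis(theta)]))

psi = np.array([1.0, 0.0])  # eigenstate of the computational basis B_1
h_mub = entropy_in_basis(psi, np.pi / 4)     # mutually unbiased: overlaps 1/2
h_biased = entropy_in_basis(psi, np.pi / 8)  # not unbiased: max overlap > 1/2
assert abs(h_mub - 1.0) < 1e-9  # full entropy
assert h_biased < 1.0           # less than full entropy
```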

C. Mutually unbiased bases
Since mutually unbiased bases play an important role in the study of uncertainty relations, we briefly review two well-known constructions for which particular uncertainty relations are known to hold. Two orthonormal bases B_1 and B_2 of a d-dimensional Hilbert space are called mutually unbiased if |⟨x|y⟩|² = 1/d for all |x⟩ ∈ B_1 and |y⟩ ∈ B_2. For example, the well-known computational and Hadamard bases are mutually unbiased. We use N(d) to denote the maximal number of MUBs in dimension d. In any dimension d, we have that N(d) ≤ d + 1 (Bandyopadhyay et al., 2002). If d = p^k is a prime power, we have that N(d) = d + 1 and explicit constructions are known (Bandyopadhyay et al., 2002; Wootters and Fields, 1989). If d = s² is a square, N(d) ≥ MOLS(s), where MOLS(s) denotes the number of mutually orthogonal s × s Latin squares (Wocjan and Beth, 2005). In general, we have N(nm) ≥ min{N(n), N(m)} for all n, m ∈ N (Klappenecker and Rötteler, 2004; Zauner, 1999). From this it follows that in any dimension there is an explicit construction for 3 MUBs (Grassl, 2004). Unfortunately, not much else is known. For example, it is still an open problem whether there exists a set of 7 (or even 4!) MUBs in dimension d = 6. In this text, we refer to two specific constructions of mutually unbiased bases. There exists a third construction based on Galois rings (Klappenecker and Rötteler, 2004), which we do not consider here, since we do not know of any specific uncertainty relations in this setting.

Latin squares
First, we consider MUBs based on mutually orthogonal Latin squares (Wocjan and Beth, 2005). Informally, an s × s Latin square over the symbol set [s] = {1, …, s} is an arrangement of elements of [s] into an s × s square such that in each row and each column every element occurs exactly once. Let L_{ij} denote the entry in a Latin square in row i and column j. Two Latin squares L and L′ are called mutually orthogonal if and only if every pair of symbols (ℓ, ℓ′) ∈ [s] × [s] occurs exactly once among the pairs (L_{ij}, L′_{ij}). Intuitively, this means that if we place one square on top of the other, and look at all pairs generated by the overlaying elements, all possible pairs occur. An example is given in Figure 1 below. From any s × s Latin square we can obtain a basis for C^s ⊗ C^s. First, we construct s of the basis vectors from the entries of the Latin square itself. Let

|v_{1,ℓ}⟩ = (1/√s) Σ_{i,j∈[s]} E^L_{i,j}(ℓ) |i, j⟩,

where E^L is a predicate such that E^L_{i,j}(ℓ) = 1 if and only if L_{i,j} = ℓ. Note that for each ℓ we have exactly s pairs i, j such that E^L_{i,j}(ℓ) = 1, because each element of [s] occurs exactly s times in the Latin square. Secondly, from each such vector we obtain s − 1 additional vectors by adding successive rows of an s × s complex Hadamard matrix H = (h_{ij}) as coefficients to obtain the remaining |v_{t,ℓ}⟩ for t ∈ [s], where h_{ij} = ω^{ij} with i, j ∈ {0, …, s − 1} and ω = e^{2πi/s}. Two additional MUBs can then be obtained in the same way from the two non-Latin squares where each element occurs for an entire row or column respectively. From each mutually orthogonal Latin square and these two extra squares, which also satisfy the above orthogonality condition, we obtain one basis. This construction therefore gives MOLS(s) + 2 many MUBs. It is known that if s = p^k is itself a prime power, we obtain p^k + 1 ≈ √d MUBs from this construction. Note, however, that there do exist many more MUBs in prime power dimensions, namely d + 1. If s is not a prime power, it is merely known that MOLS(s) ≥ s^{1/14.8} (Wocjan and Beth, 2005).

FIG. 1 Mutually orthogonal Latin squares
As an example, consider the first 3 × 3 Latin square depicted in Figure 1 and the 3 × 3 complex Hadamard matrix H = (h_{ij}) with entries h_{ij} = ω^{ij}, where ω = e^{2πi/3}. First, we obtain the vectors |v_{1,1}⟩, |v_{1,2}⟩ and |v_{1,3}⟩ from the entries of the square itself. With the help of H we obtain s − 1 = 2 additional vectors from each of the ones above. From the vector |v_{1,1}⟩, for example, we obtain |v_{2,1}⟩ and |v_{3,1}⟩.

This gives us the basis B = {|v_{t,ℓ}⟩ | t, ℓ ∈ [3]} of C³ ⊗ C³.
The construction of another basis follows in exactly the same way from a mutually orthogonal Latin square. The fact that two such squares L and L′ are mutually orthogonal ensures that the resulting bases will be mutually unbiased. Indeed, suppose we are given another such basis, B′ = {|u_{t,ℓ}⟩ | t, ℓ ∈ [s]}, belonging to L′.
We then have |⟨u_{t′,ℓ′}|v_{1,ℓ}⟩| = 1/s for any ℓ, ℓ′ ∈ [s], as there exists exactly one pair i, j ∈ [s] such that E^L_{i,j}(ℓ) E^{L′}_{i,j}(ℓ′) = 1. Clearly, the same argument holds for the additional vectors derived from the complex Hadamard matrix.
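The construction is easy to check numerically. The sketch below (numpy) builds one basis per square for s = 3, using the two mutually orthogonal Latin squares (i + j) mod 3 and (i + 2j) mod 3 together with the row and column squares. The enumeration of the cells that carry the Hadamard coefficients is one natural reading of "adding successive rows of H as coefficients"; any fixed enumeration yields the properties checked here.

```python
import numpy as np

s = 3
w = np.exp(2j * np.pi / s)
# Two mutually orthogonal 3x3 Latin squares over {0, 1, 2}, plus the
# "row" and "column" squares used for the two extra bases.
squares = [
    [[(i + j) % s for j in range(s)] for i in range(s)],      # L
    [[(i + 2 * j) % s for j in range(s)] for i in range(s)],  # L', orthogonal
    [[i for j in range(s)] for i in range(s)],                # row square
    [[j for j in range(s)] for i in range(s)],                # column square
]

def mub_basis(L):
    """One basis of C^s x C^s per square: for each symbol l, the s cells
    with L[i][j] == l carry Hadamard coefficients w**(t*k), where k
    enumerates those cells in a fixed order."""
    vecs = []
    for l in range(s):
        cells = [(i, j) for i in range(s) for j in range(s) if L[i][j] == l]
        for t in range(s):
            v = np.zeros(s * s, dtype=complex)
            for k, (i, j) in enumerate(cells):
                v[i * s + j] = w ** (t * k) / np.sqrt(s)
            vecs.append(v)
    return vecs

bases = [mub_basis(L) for L in squares]
# Each basis is orthonormal, and bases from different squares are
# mutually unbiased: |<u|v>|^2 = 1/s^2 = 1/d.
for a in range(len(bases)):
    G = np.array([[np.vdot(u, v) for v in bases[a]] for u in bases[a]])
    assert np.allclose(G, np.eye(s * s))
    for b in range(a + 1, len(bases)):
        for u in bases[a]:
            for v in bases[b]:
                assert abs(abs(np.vdot(u, v)) ** 2 - 1 / (s * s)) < 1e-9
print("4 mutually unbiased bases in dimension", s * s)
```

This reproduces MOLS(3) + 2 = 4 MUBs in dimension d = 9, as stated above.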

Generalized Pauli matrices
The second construction we consider is based on the generalized Pauli matrices X_d and Z_d (Bandyopadhyay et al., 2002), defined by their actions on the computational basis C = {|0⟩, …, |d−1⟩} as follows:

X_d |j⟩ = |j + 1 mod d⟩,
Z_d |j⟩ = ω^j |j⟩,

where ω = e^{2πi/d}. For d = p^k, we say that a product of the form (X_p)^{a_1}(Z_p)^{b_1} ⊗ ⋯ ⊗ (X_p)^{a_k}(Z_p)^{b_k} is a string of Pauli matrices. Note that for d = 2 these are just the usual Pauli matrices.
If d is a prime, it is known that the d + 1 MUBs constructed first by Wootters and Fields (Wootters and Fields, 1989) can also be obtained as the eigenbases of the matrices Z_d, X_d, X_d Z_d, X_d Z_d², …, X_d Z_d^{d−1} (Bandyopadhyay et al., 2002). If d = p^k is a prime power, consider all d² − 1 possible strings of Pauli matrices excluding the identity, and group them into disjoint sets C_1, …, C_{d+1} such that |C_i| = d − 1 and all elements of C_i commute. Let B_i be the common eigenbasis of all elements of C_i. Then B_1, …, B_{d+1} are MUBs (Bandyopadhyay et al., 2002). A similar result for d = 2^k has also been shown in (Lawrence et al., 2002). A special case of this construction are the three mutually unbiased bases in dimension d = 2^k given by the unitaries 𝟙^{⊗k}, H^{⊗k} and K^{⊗k} applied to the computational basis, where H is the Hadamard transform and K rotates the computational basis into the eigenbasis of Y, i.e. K|j⟩ = (|0⟩ + (−1)^j i|1⟩)/√2. A simple example of this construction are the mutually unbiased bases in dimension d = 2, which are given by the eigenvectors of the Pauli matrices X, Z and Y. A very interesting aspect of such mutually unbiased bases is that there exists an ordering B_1, …, B_{d+1} and a unitary U that cyclically permutes all bases, that is, U B_j = B_{j+1} for all j, where U B_{d+1} = B_1 (Wootters and Sussman, 2007).
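For prime d these eigenbases are straightforward to generate and check. A sketch with numpy for d = 3 (our own minimal instance; for d = 3 the four matrices Z, X, XZ and XZ² already give all d + 1 = 4 bases):

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)        # X|j> = |j+1 mod d>
Z = np.diag([w ** j for j in range(d)])  # Z|j> = w^j |j>

# Weyl commutation relation: Z X = w X Z
assert np.allclose(Z @ X, w * X @ Z)

def eigenbasis(M):
    """Orthonormal eigenbasis (columns) of a matrix with distinct eigenvalues."""
    _, V = np.linalg.eig(M)
    return V / np.linalg.norm(V, axis=0)

# For prime d, the eigenbases of Z, X, XZ, XZ^2, ... form d + 1 MUBs.
matrices = [Z, X, X @ Z, X @ Z @ Z]
bases = [eigenbasis(M) for M in matrices]
for a in range(len(bases)):
    for b in range(a + 1, len(bases)):
        overlaps = np.abs(bases[a].conj().T @ bases[b]) ** 2
        assert np.allclose(overlaps, 1 / d)
print("found", len(bases), "MUBs in dimension", d)
```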

II. TWO MEASUREMENTS
The case of two measurements (L = 2) is reasonably well understood in any dimension, and for any number of outcomes. This case was of particular interest as it is directly inspired by the two measurements for which Heisenberg originally formulated his uncertainty relation, i.e., position and momentum. We begin by recalling some of the history of this fascinating problem, before reviewing the currently relevant results.

A. History
The first entropic uncertainty relation was given by (Hirschman, 1957) for position and momentum observables, and was improved by the inequalities of (Beckner, 1975) and the entropic uncertainty relations of (Białynicki-Birula and Mycielski, 1975) to an entropic uncertainty relation for systems of n canonical pairs of position and momentum coordinates X_i and P_i:

H(X_1 … X_n|ρ) + H(P_1 … P_n|ρ) ≥ n log(eπ),

where H(Q_1 … Q_n|ρ) refers to the (differential) Shannon entropy of the joint distribution of the coordinates Q_1, …, Q_n when measured on the state ρ.
That entropic uncertainty relations are of great importance was pointed out by (Deutsch, 1983), who proved that for measurements in two bases A and B we have

(1/2) [H(A||ψ⟩) + H(B||ψ⟩)] ≥ −log((1 + c(A, B))/2),

where c(A, B) := max{|⟨a|b⟩| : |a⟩ ∈ A, |b⟩ ∈ B}. We will see later that the same bound holds for the min-entropies H_∞(·). His results were extended to a continuous setting for angle and angular momentum and for position and momentum by (Partovi, 1983), which in turn was improved by (Białynicki-Birula, 1984). Different relations for particular angular momentum observables were later also derived by (Białynicki-Birula and Madajczyk, 1985). A Rényi entropic version of such an uncertainty relation may be found in (Białynicki-Birula, 2006).

Any choice of bases
Following a conjecture by Kraus (Kraus, 1987), Maassen and Uffink (Maassen and Uffink, 1988) improved Deutsch's uncertainty relation for measurements in two different bases. In particular, they showed that if we measure any state |ψ⟩ ∈ H with dim H = d using observables with orthonormal eigenbases A = {|a_1⟩, …, |a_d⟩} and B = {|b_1⟩, …, |b_d⟩} respectively, we have

(1/2) [H(A||ψ⟩) + H(B||ψ⟩)] ≥ −log c(A, B),  (3)

where c(A, B) := max{|⟨a|b⟩| : |a⟩ ∈ A, |b⟩ ∈ B}. Since H(·) is concave, this result also applies to mixed states ρ. What is the strongest possible relation we could obtain? That is, which choices of A and B maximize the r.h.s. of equation (3)? It turns out that the maximum is reached when the two bases are mutually unbiased (see Section I.C), i.e. when all the inner products above are equal to 1/√d. We then obtain that the average entropy is lower bounded by (1/2) log d. This is tight, as the example of |ψ⟩ = |a_1⟩ shows. Note that for general observables, this lower bound is not necessarily tight, but its usefulness lies in the fact that it is in terms of very simple geometric information about the relative position of the bases.
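The bound is easy to probe numerically. The following sketch (numpy; dimension d = 4 and the Fourier basis as the mutually unbiased partner are our own choices) checks the relation on random states and evaluates the lower bound (1/2) log d:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def H(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

A = np.eye(d)  # computational basis (columns)
F = np.array([[np.exp(2j * np.pi * j * k / d) for k in range(d)]
              for j in range(d)]) / np.sqrt(d)  # Fourier basis, unbiased to A
c = max(abs(np.vdot(A[:, a], F[:, b])) for a in range(d) for b in range(d))

for _ in range(200):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    hA = H(np.abs(A.conj().T @ psi) ** 2)
    hB = H(np.abs(F.conj().T @ psi) ** 2)
    # Maassen-Uffink: the average entropy is at least -log c(A, B)
    assert 0.5 * (hA + hB) >= -np.log2(c) - 1e-9

print(-np.log2(c))  # about 1.0, i.e. (1/2) log2(d) for mutually unbiased bases
```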

Improved bounds for specific bases
For dimension d = 2, optimal uncertainty relations have been obtained for two observables A = a · σ and B = b · σ, where σ = (X, Y, Z) and a, b are unit Bloch vectors, analytically for some relative angles of a and b and numerically for others (Ghirardi et al., 2003). Uncertainty relations which give improved bounds for a large class of measurements in two different bases A and B have also been obtained in (de Vicente and Sanchez-Ruiz, 2008) for the case that the maximal overlap between two basis vectors is large, that is, c(A, B) ≥ 1/√2. Letting c := c(A, B), an analytical bound is shown for this regime, and a numerical bound is provided that is slightly better for 1/√2 ≤ c ≤ 0.834.

Relations for Rényi entropies
It is an often overlooked fact that Maassen and Uffink actually also show uncertainty relations in terms of the Rényi entropies. In particular, they extend a result by (Landau and Pollak, 1961) to show that for any |ψ⟩

H_∞(A||ψ⟩) + H_∞(B||ψ⟩) ≥ −2 log((1 + c(A, B))/2).

To see that this bound can be tight for some choices of A and B, consider two mutually unbiased bases in dimension d = 2, for example the computational basis A = {|0⟩, |1⟩} and the Hadamard basis B = {|+⟩, |−⟩}. For the state |ψ⟩ = cos(π/8)|0⟩ + sin(π/8)|1⟩, which lies halfway between |0⟩ and |+⟩ on the Bloch sphere, the largest outcome probability equals cos²(π/8) = (1 + 1/√2)/2 for both bases, and hence the bound is attained.
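A quick numerical check of this tightness (numpy; the intermediate state is the standard example):

```python
import numpy as np

def H_inf(p):
    return float(-np.log2(np.max(p)))

# Computational and Hadamard bases of a qubit: c(A, B) = 1/sqrt(2).
A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
B = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]
c = 1 / np.sqrt(2)
bound = -2 * np.log2((1 + c) / 2)

# The state "halfway" between |0> and |+> attains the bound.
psi = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
hA = H_inf(np.array([abs(np.vdot(a, psi)) ** 2 for a in A]))
hB = H_inf(np.array([abs(np.vdot(b, psi)) ** 2 for b in B]))
assert abs(hA + hB - bound) < 1e-9
print(hA + hB, bound)  # both about 0.457 bits
```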

Shannon entropy
The result by (Maassen and Uffink, 1988) has been extended to the case of a general POVM. The first such result was given by (Hall, 1997), who pointed out that their result can easily be extended to the case of rank-one POVMs. His result was subsequently strengthened (Massar, 2007; Rastegin, 2008a) by noting that any two rank-one POVMs M_1 = {|x_1⟩⟨x_1|} and M_2 = {|x_2⟩⟨x_2|} on H have a Naimark extension to projective measurements on a larger space; maximizing over such extensions of the POVMs, one obtains the strongest bound of this form. The general case was settled by (Krishna and Parthasarathy, 2002), who showed that

H(M_1||ψ⟩) + H(M_2||ψ⟩) ≥ −2 log max_{x_1, x_2} ‖(M_1^{x_1})^{1/2} (M_2^{x_2})^{1/2}‖

for any two POVMs M_1 = {M_1^{x_1} | M_1^{x_1} ∈ B(H)} and M_2 = {M_2^{x_2} | M_2^{x_2} ∈ B(H)} and any state |ψ⟩ ∈ H.

Rényi entropy
Entropic uncertainty relations for Rényi entropies have also been obtained for the case of POVMs. In particular, it has been shown by (Rastegin, 2008b,c) that analogous lower bounds in terms of the Rényi entropies hold for any two POVMs M_1 and M_2 and any state ρ.

D. Beyond classical entropies
In the context of quantum information theoretical applications some other uncertainty relations were discovered, which are entropic in spirit, but lie outside of the formalism introduced above.
Here we quote two, which can be viewed as extensions of the inequality of (Maassen and Uffink, 1988) in the case of two measurement bases related by the Fourier transform, to multipartite quantum systems and involving the von Neumann entropy S(ρ) = − Tr ρ log ρ.
With this entropy, one can formally construct a mutual information and a conditional entropy, respectively, for bipartite states ρ_AB with marginals ρ_A = Tr_B ρ_AB and ρ_B = Tr_A ρ_AB:

I(A;B) = S(ρ_A) + S(ρ_B) − S(ρ_AB),
S(A|B) = S(ρ_AB) − S(ρ_B).

Both inequalities compare two conjugate bases, i.e. without loss of generality one is the standard basis {|z⟩ : z = 0, …, d−1}, the other one its Fourier transform {|x̃⟩ = (1/√d) Σ_z e^{2πixz/d} |z⟩ : x = 0, …, d−1}. (These are just the eigenbases of the generalized Pauli operators Z and X.) Denote the operations projecting onto these bases by Z and X, respectively:

Z(ρ) = Σ_z |z⟩⟨z| ρ |z⟩⟨z|,  X(ρ) = Σ_x |x̃⟩⟨x̃| ρ |x̃⟩⟨x̃|.

The first uncertainty relation is by (Christandl and Winter, 2005): for a bipartite quantum state ρ_AB such that ρ_A is maximally mixed,

I(A;B)_{(Z⊗id)(ρ)} + I(A;B)_{(X⊗id)(ρ)} ≤ log d.  (4)

The second is by (Renes and Boileau, 2009), who show similarly that for any tripartite state ρ_ABC,

S(A|B)_{(Z⊗id)(ρ)} + S(A|C)_{(X⊗id)(ρ)} ≥ log d.  (5)

Note that this directly reduces to (3) for trivial systems B and C, which is why (Renes and Boileau, 2009) conjecture the following inequality when Z and X are more generally the projections onto two arbitrary bases A and B, respectively: S(A|B)_{(Z⊗id)(ρ)} + S(A|C)_{(X⊗id)(ρ)} ≥ −log c(A, B).

III. MORE THAN TWO MEASUREMENTS
We now review the known results for entropic uncertainty relations for more than two measurement settings.Rather little is known in this scenario, except for a number of special cases.In particular, it is an interesting open question whether strong uncertainty relations even exist for a small constant number of measurement settings and more than two measurement outcomes.As pointed out already in the beginning, this is conceivable because unlike canonically conjugate variables, which come in pairs, there are generally more than two mutually unbiased observables.

A. Random choice of bases
First of all, it may not be obvious that strong uncertainty relations can be obtained at all for more than two measurement settings, independent of the number of measurement outcomes. We will use B_j = {U_j|x⟩ | x ∈ {0, …, d−1}}, where {|x⟩} forms an orthonormal basis for H, to denote the basis obtained by rotating the standard basis with the unitary U_j. It was shown in (Hayden et al., 2004) that L = (log d)^4 unitaries U_j, chosen randomly and independently from the Haar measure, with high probability obey an uncertainty relation in which the average entropy (1/L) Σ_j H(B_j|ρ) is, for all states ρ, close to its maximal value log d, for sufficiently large dimension d. It is important to note that the number of measurement settings is not constant but depends on the dimension.

B. Mutually unbiased bases
Now that we know that it is in principle possible to obtain reasonably strong uncertainty relations, can we construct explicit measurements for which we obtain such relations?Recall that it is a necessary condition for bases to be mutually unbiased in order to obtain a maximally strong uncertainty relation in the first place.
Given that for two measurement settings choosing the bases to be mutually unbiased leads to maximally strong uncertainty relations, it may be tempting to conclude that choosing our measurements to be mutually unbiased is in general also a sufficient condition. Perhaps surprisingly, this is not the case.

For d + 1 mutually unbiased bases
We first consider the case of all d + 1 mutually unbiased bases, for which we can obtain strong uncertainty relations. In particular, (Ivanovic, 1992; Sanchez, 1993) have shown that for the mutually unbiased bases B_1, …, B_{d+1} we have for any state ρ

(1/(d+1)) Σ_{j=1}^{d+1} H(B_j|ρ) ≥ log((d+1)/2).  (6)

If the dimension d is even, this can further be improved to (Sanchez-Ruiz, 1995)

(1/(d+1)) Σ_{j=1}^{d+1} H(B_j|ρ) ≥ (1/(d+1)) [(d/2) log(d/2) + (d/2 + 1) log(d/2 + 1)].

In dimension d = 2, the latter bound gives 2/3, which is tight for the mutually unbiased bases given by the eigenvectors of the Pauli matrices X, Z and Y. The case of d = 2 was also addressed separately in (Sanchez-Ruiz, 1998).
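For d = 2 this is easy to verify numerically: the sketch below (numpy) checks that the entropy sum over the three qubit MUBs never falls below 2 bits (i.e. 2/3 per basis on average) on random states, and that an eigenstate of Z attains it.

```python
import numpy as np

rng = np.random.default_rng(1)

def H(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# Eigenbases of Z, X and Y: the three MUBs of a qubit (columns are vectors).
bases = [np.eye(2),
         np.array([[1, 1], [1, -1]]) / np.sqrt(2),
         np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)]

def entropy_sum(psi):
    return sum(H(np.abs(B.conj().T @ psi) ** 2) for B in bases)

for _ in range(500):
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    assert entropy_sum(psi) >= 2.0 - 1e-9

# tight for an eigenstate of Z: H(Z) = 0 and H(X) = H(Y) = 1
assert abs(entropy_sum(np.array([1.0, 0.0])) - 2.0) < 1e-9
```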
It is worth noting that the first bound (6) is in fact obtained by first lower bounding the Shannon entropy H(·) by the collision entropy H_2(·), and then proving that

(1/(d+1)) Σ_{j=1}^{d+1} H_2(B_j|ρ) ≥ log((d+1)/2).  (7)

This inequality can also be proven using the fact that a full set of mutually unbiased bases forms a 2-design (Ballester and Wehner, 2007), and we provide a completely elementary proof of this inequality in the appendix. Interestingly, it has been shown (Wootters and Sussman, 2007) that the states ρ minimizing the l.h.s. of (7) are states which are invariant under a unitary transformation that permutes the mutually unbiased bases as discussed in Section I.C.

For less than d + 1 mutually unbiased bases
What about less than d + 1 mutually unbiased bases? First of all, note that it is easy to see that we do not always obtain a maximally strong uncertainty relation in this setting. Consider dimension d = 3 and three mutually unbiased bases B_1, B_2 and B_3 given by the eigenvectors of X_3, Z_3 and X_3 Z_3 respectively. A simple calculation shows that for the state |ψ⟩ = (|1⟩ − |2⟩)/√2 we have H(B_j||ψ⟩) = 1 for all bases j ∈ {1, 2, 3}, and hence

(1/3) Σ_j H(B_j||ψ⟩) = 1 < (2/3) log 3,

the value that a maximally strong relation would require. In (DiVincenzo et al., 2004) (see the eprint version) numerical work on three and more mutually unbiased bases in prime dimensions up to 29 is reported, consistent with an average entropy of roughly (1/2) log d. The mutually unbiased bases are taken as a subset of the MUBs constructed via the generalized Pauli matrices in prime power dimension.
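The d = 3 counterexample can be verified directly (a numpy sketch; the eigenbases are computed numerically):

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X3 = np.roll(np.eye(d), 1, axis=0)        # X_3|j> = |j+1 mod 3>
Z3 = np.diag([w ** j for j in range(d)])  # Z_3|j> = w^j |j>

def H(p):
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def eigenbasis(M):
    return np.linalg.eig(M)[1]  # columns are the eigenvectors

bases = [eigenbasis(M) for M in (X3, Z3, X3 @ Z3)]
psi = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)
entropies = [H(np.abs(B.conj().T @ psi) ** 2) for B in bases]
# every entropy is 1 bit, so the average stays below the maximally
# strong value (1 - 1/3) * log2(3), which is about 1.057
assert np.allclose(entropies, 1.0)
assert sum(entropies) / 3 < (2 / 3) * np.log2(3)
```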
Trivial bounds for more than two and less than d + 1 bases can be derived quite easily. For example, for any number of mutually unbiased bases B_1, …, B_L we obtain, by combining (3) for each pair of bases B_i and B_j, that

(1/L) Σ_{j=1}^{L} H(B_j|ρ) ≥ (1/2) log d.  (8)

As shown in the appendix, a similar bound is easily obtained for the collision entropy. Curiously, it turns out (Ballester and Wehner, 2007) that in square prime power dimensions d = p² there exist up to L = p + 1 MUBs derived from the generalized Pauli matrices for which we obtain extremely weak uncertainty relations! In particular, we have for any such set of MUBs that the lower bound of (8) can be attained, that is,

min_ρ (1/L) Σ_{j=1}^{L} H(B_j|ρ) = (1/2) log d.

Furthermore, the same is true for all mutually unbiased bases derived from Latin squares. These results clearly show that mutual unbiasedness is not enough to obtain strong uncertainty relations. Combined with the numerical results from above, we also note that the dimension d, as well as the choice of mutually unbiased bases, may indeed matter. In (Ballester and Wehner, 2007) it was noted that the sets of mutually unbiased bases derived from the generalized Pauli matrices for which we obtain weak uncertainty relations are exactly those which are separable across the space C^p ⊗ C^p. However, we can now conclude from the results of (Wootters and Sussman, 2007) that there is nothing inherently special about these separable bases, since there exists a unitary U that maps them to a set of entangled bases (see Section I.C).
It has also been shown by (Ambainis, 2006) that for any three bases from the "standard" mutually unbiased bases construction in prime power dimension, the lower bound cannot exceed (1/2 + o(1)) log d for large dimension, assuming the Generalized Riemann Hypothesis. Furthermore, for any 0 ≤ ε ≤ 1/2, there always exist k = d^ε many of these bases such that the lower bound cannot be larger than (1/2 + ε + o(1)) log d. It remains an interesting open question to show tight uncertainty relations for all mutually unbiased bases.

C. Anti-commuting observables
Maximally strong uncertainty relations are known to exist for any number of measurement settings L, if we limit ourselves to |X| = 2 outcomes. These uncertainty relations are derived for generators of a Clifford algebra (Dietz, 2006; Lounesto, 2001), which has many beautiful geometrical properties. For any integer n, the free real associative algebra generated by Γ_1, …, Γ_{2n}, subject to the anti-commutation relations

Γ_i Γ_j + Γ_j Γ_i = 2 δ_{ij} 𝟙,

is called a Clifford algebra. It has a unique representation by Hermitian matrices on n qubits (up to unitary equivalence). This representation can be obtained via the famous Jordan-Wigner transformation (Jordan and Wigner, 1928):

Γ_{2j−1} = Z^{⊗(j−1)} ⊗ X ⊗ 𝟙^{⊗(n−j)},
Γ_{2j} = Z^{⊗(j−1)} ⊗ Y ⊗ 𝟙^{⊗(n−j)},

for j = 1, …, n, where we use X, Y and Z to denote the Pauli matrices. An additional such matrix can be found by taking the product Γ_0 := Γ_1 Γ_2 ⋯ Γ_{2n}, which is sometimes known as the pseudo-scalar. To see how such operators are observables with two measurement outcomes, note that the eigenvalues of Γ_i always come in pairs: let |η⟩ be an eigenvector of Γ_i with eigenvalue λ. From Γ_i² = 𝟙 we have that λ² = 1. Note that both ±1 occur, since for j ≠ i we have Γ_i(Γ_j|η⟩) = −λ Γ_j|η⟩. We can therefore express each Γ_i as

Γ_i = Γ_i^0 − Γ_i^1,

where Γ_i^0 and Γ_i^1 are projectors onto the positive and negative eigenspace of Γ_i respectively. Furthermore, note that we have for i ≠ j

Tr(Γ_i Γ_j) = 0.

That is, all such operators are orthogonal. To gain some intuition of why such operators may give good uncertainty relations, note that the positive and negative eigenspaces of such operators are mutually unbiased (analogous to bases), since for all i ≠ j and an arbitrary eigenvector |ψ_i⟩ of Γ_i,

⟨ψ_i|Γ_j^0|ψ_i⟩ = ⟨ψ_i|Γ_j^1|ψ_i⟩ = 1/2.

Hence, if we were to measure the maximally mixed state on the positive eigenspace of Γ_j with any of the other observables, the probability of obtaining measurement outcome 0 is the same as that of obtaining outcome 1. For simplicity, we will write H_α(Γ_j|ρ) := H_α({Γ_j^0, Γ_j^1}|ρ). It was shown in (Wehner and Winter, 2008) that the following maximally strong uncertainty
relation holds for any set of anti-commuting observables S ⊆ {Γ_j | j ∈ {0, …, 2n}} with K = |S|:

(1/K) Σ_{Γ_j ∈ S} H(Γ_j|ρ) ≥ 1 − 1/K.

For dimension d = 2, this reduces to an uncertainty relation for the mutually unbiased bases given by the eigenvectors of X, Z and Y respectively. For the collision entropy, the bound becomes

(1/K) Σ_{Γ_j ∈ S} H_2(Γ_j|ρ) ≥ 1 − log(1 + 1/K),

and for the min-entropy we have

(1/K) Σ_{Γ_j ∈ S} H_∞(Γ_j|ρ) ≥ 1 − log(1 + 1/√K).  (10)

Interestingly, uncertainty relations for anti-commuting observables can also be used to prove Tsirelson's bound (Ver Steeg and Wehner, 2009). It is not known how to extend this result to more than two measurement outcomes. One may conjecture that the generalized Clifford algebra generated by operators Λ_1, …, Λ_n, where for all i < j we have

Λ_i Λ_j = ω Λ_j Λ_i with ω = e^{2πi/ℓ},

may give strong uncertainty relations for measurements with ℓ measurement outcomes. However, the example of X_3, Z_3 and X_3 Z_3 given above, and numerical evidence for higher dimensions, refute this conjecture.
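The Jordan-Wigner generators and their properties can be checked in a few lines of numpy (the n = 2 instance is our own choice):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    return reduce(np.kron, ops)

def jordan_wigner(n):
    """2n Hermitian generators of a Clifford algebra on n qubits:
    Gamma_{2j-1} = Z..Z X I..I and Gamma_{2j} = Z..Z Y I..I."""
    gammas = []
    for j in range(n):
        prefix, suffix = [Z] * j, [I2] * (n - j - 1)
        gammas.append(kron(*(prefix + [X] + suffix)))
        gammas.append(kron(*(prefix + [Y] + suffix)))
    return gammas

n = 2
G = jordan_wigner(n)
d = 2 ** n
# anti-commutation relations: G_i G_j + G_j G_i = 2 delta_ij * identity
for i in range(2 * n):
    for j in range(2 * n):
        anti = G[i] @ G[j] + G[j] @ G[i]
        expect = 2 * np.eye(d) if i == j else np.zeros((d, d))
        assert np.allclose(anti, expect)

# eigenspaces are "mutually unbiased": an eigenvector of Gamma_1 gives
# outcome probabilities (1/2, 1/2) when any other Gamma_j is measured
vals, vecs = np.linalg.eigh(G[0])
psi = vecs[:, 0]
p_plus = np.real(psi.conj() @ ((np.eye(d) + G[1]) / 2) @ psi)
assert abs(p_plus - 0.5) < 1e-9
```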

IV. APPLICATIONS
Uncertainty relations for measurements in different bases have recently played an important role in proving security of cryptographic protocols in the bounded-storage (Damgaard et al., 2007) and noisy-storage model (König et al., 2009; Wehner et al., 2008) respectively. Here, uncertainty relations are used to bound the information that a cheating party has about bits which are encoded into several possible bases, where the choice of basis is initially unknown to him. The simplest example is an encoding of a single bit x_j ∈ {0, 1} into either the computational (as |x_j⟩) or Hadamard basis (as H|x_j⟩). Suppose we choose the bit x_j, as well as the basis, uniformly at random, and suppose further that the cheating party is allowed to perform any measurement on the encoded qubit, giving him some classical information K. After his measurement, we provide him with the basis information Θ. It can be shown using a purification argument that we can turn the uncertainty relation for the min-entropy for the computational basis B_1 and Hadamard basis B_2 (see (10)) into the following bound on the adversary's knowledge about the bit X_j given K and the basis information Θ:

H_∞(X_j|KΘ) ≥ −log(1/2 + 1/(2√2)).

The conditional min-entropy thereby has a very intuitive interpretation as H_∞(X_j|KΘ) = −log P_guess(X_j|KΘ), where P_guess(X_j|KΘ) is the average probability that the cheating party can guess X_j given K and Θ, maximized over all strategies (König et al., 2008).
In a cryptographic setting, we are especially interested in the case where we repeat the encoding above many times. Suppose we choose an n-bit string X_1, …, X_n uniformly at random, and encode each bit in either the computational or Hadamard basis, also chosen uniformly and independently at random. Using the SDP formalism of (Ballester et al., 2008) it is easily seen (Wehner et al., 2008) that this gives us

H_∞(X_1 … X_n|KΘ) ≥ −n log(1/2 + 1/(2√2)).

In the limit of large n, it is known that for independent states the min-entropy behaves approximately like the Shannon entropy (Renner, 2005; Tomamichel et al., 2008). This allows one to turn the uncertainty relation of (Maassen and Uffink, 1988) for the Shannon entropy into a better bound on the adversary's knowledge about the long string X_1, …, X_n in terms of the min-entropy. More precisely, it is known (Damgaard et al., 2007) that

H_∞^ε(X_1 … X_n|KΘ) ≥ (1/2 − δ) n

for arbitrarily small δ > 0, where H_∞^ε is the ε-smooth min-entropy defined in (Renner, 2005). Intuitively, this quantity behaves like the min-entropy, except with probability ε. We refer to (König et al., 2009) for more information, where this uncertainty relation was recently used to prove security in the noisy-storage model.
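As a sanity check on the single-qubit bound, the sketch below (numpy) evaluates the success probability of the intermediate "Breidbart" measurement, a fixed measurement that is known to achieve the optimal guessing probability 1/2 + 1/(2√2) for this encoding; we simply verify the arithmetic.

```python
import numpy as np

# BB84-type encoding: bit x in the computational or Hadamard basis,
# each of the four states occurring with probability 1/4.
zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (zero + one) / np.sqrt(2), (zero - one) / np.sqrt(2)
states = {0: [zero, plus], 1: [one, minus]}  # bit -> [computational, Hadamard]

# Measure in the intermediate ("Breidbart") basis and guess the bit
# from the outcome.
b0 = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
b1 = np.array([-np.sin(np.pi / 8), np.cos(np.pi / 8)])

p_guess = 0.0
for x, encodings in states.items():
    for psi in encodings:
        guess_vec = b0 if x == 0 else b1
        p_guess += 0.25 * abs(np.vdot(guess_vec, psi)) ** 2

assert abs(p_guess - (0.5 + 0.5 / np.sqrt(2))) < 1e-9
print(p_guess, -np.log2(p_guess))  # about 0.854 and 0.228 bits of min-entropy
```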

V. OPEN PROBLEMS
Since a full set of mutually unbiased bases forms a 2-design, it may be interesting to consider sets of bases forming a t-design for t > 2. Using the result of (Klappenecker and Rötteler, 2005) and the technique of (Ballester and Wehner, 2007), it is straightforward to prove an incredibly weak uncertainty relation for the Rényi entropy of order t, where the lower bound obeys (1/(1 − t)) log(t! d! / (t + d − 1)!). Evidently, this lower bound becomes weaker for higher values of t, which is exactly the opposite of what one would hope for. It is an interesting open question whether one can find good uncertainty relations for higher designs.
The most interesting open problem, however, is to find any sets of measurements at all for which we obtain maximally strong uncertainty relations for more than two measurement settings and a constant number of measurement outcomes |X| > 2. Note that

c_{{M_j}} ≤ (1 − 1/L) log |X|

always holds, for any set of measurements {M_j} with outcomes in the set X. The problem of entropic uncertainty relations at its most general is to find an expression, or at least a lower bound, for the quantity c_{{M_j}} in "simple" terms of the geometry of the measurements M_j.
For measurements in different bases, which are of special interest for example in locking applications (DiVincenzo et al., 2004), one is interested in the quantity

h(d; L) := max_{B_1, …, B_L} c_{{B_j}},  (11)

where the maximization is taken over bases B_1, …, B_L, and one would like to have a characterization of the sets of bases attaining the maximum. Note that if in dimension d there exist L mutually unbiased bases, then by virtue of (8) and (11), h(d; L) ≥ (1/2) log d. Seeing thus the scaling of h(d; L) with log d, and assuming an asymptotic viewpoint of large dimension, we finally consider the quantity

h(L) := lim inf_{d→∞} h(d; L)/log d,

which depends now only on the number of bases L. For example, h(2) = 1/2, and it is clear that 1/2 ≤ h(L) ≤ 1 − 1/L, but we don't know if h(L) actually strictly grows with L. If so, does it approach the value 1 − 1/L suggested by the upper bound, or at least 1 − 1/f(L) with some growing function f of L?
APPENDIX

In Section III.B we used the fact that the set of all mutually unbiased bases forms a 2-design (Ballester and Wehner, 2007). Here we present a very simple alternative proof for dimension d = 2^n, which has the advantage that it neither requires the introduction of 2-designs, nor the results of (Larsen, 1990) that were used in the previous proof by Sanchez-Ruiz (Sanchez, 1993). Instead, our proof (Wehner, 2008) is elementary: after choosing a convenient parametrization of quantum states, the statement follows immediately from Fourier analysis.
For the parametrization, we first introduce a basis for the space of 2^n × 2^n matrices with the help of mutually unbiased bases. Recall that in dimension 2^n, we can find exactly 2^n + 1 MUBs. We will use the short-hand notation [k] := {1, …, k}, and write j ⊕ j′ to denote the bitwise XOR of the strings j and j′. We are now ready to prove an entropic uncertainty relation for L mutually unbiased bases.
