
When quantum tomography goes wrong: drift of quantum sources and other errors

S J van Enk and Robin Blume-Kohout

Published 18 February 2013 © IOP Publishing and Deutsche Physikalische Gesellschaft
Citation: S J van Enk and Robin Blume-Kohout 2013 New J. Phys. 15 025024. DOI: 10.1088/1367-2630/15/2/025024


Abstract

The principle behind quantum tomography is that a large set of observations—many samples from a 'quorum' of distinct observables—can all be explained satisfactorily as measurements on a single underlying quantum state or process. Unfortunately, this principle may not hold. When it fails, any standard tomographic estimate should be viewed skeptically. Here we propose a simple way to test for this kind of failure using the Akaike information criterion. We point out that the application of this criterion in a quantum context, while still powerful, is not as straightforward as it is in classical physics. This is especially the case when future observables differ from those constituting the quorum.


Content from this work may be used under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

1.1. General remarks

The goal of quantum-state tomography [1] is to give a statistically reliable estimate of a quantum state ρ. Two further questions may come to mind: (i) what is the purpose of that estimate ρ? and (ii) why or when are we correct in giving an estimate of just one quantum state?

There are at least two answers to the first question: our experiment may be aimed at producing a particular state, say, a cluster state, and we may just want to verify how close ρ is to the desired state. But that answer describes only an intermediate goal. The ultimate goal is always to use the desired state for some particular quantum information processing task. So we could say that the goal of producing an estimate ρ is to be able to predict the future performance, in a particular protocol, of one or more unmeasured quantum system(s) produced by the same source.

Now there is a nice statistical method for ranking different models according to their ability to predict future measurement results (not on how well they fit the past data!), based on the Akaike information criterion (AIC) [2]. That criterion was developed entirely within a classical context, but it ought to apply to quantum-state estimation, too. We show this is true, even though we will point out some interesting differences between classical and quantum statistics.

The motivation behind the second question is as follows. Since we do not have full control over all physical quantities relevant to the quantum-state generation process (for example, even the best laser suffers from phase diffusion; and there are always spatially and temporally fluctuating magnetic and electric fields), the quantum states produced by a quantum source are not all identical. A possible description of the individual states of M systems k = 1,...,M would be a sequence {ρk,k = 1,...,M} where each ρk+1 is a little different from the previous one (even with entanglement or correlation between the different systems, we can define ρk by tracing out all the other systems). So, why would we use just a single estimate ρ in this case? One aspect of the answer is, of course, that we have no way of estimating each individual ρk. A more positive answer is that multiple measurements of a given observable $\hat {O}$ only yield estimates of average quantities such as $\left \langle O\right \rangle =\overline {\mathrm {Tr}\,\rho _k\hat {O}}$ or $p_n=\overline {\mathrm {Tr}\,\rho _k| O_n\rangle \!\langle O_n |}$ , where the average is over those k on which $\hat {O}$ was measured, and where $\left | O_n \right \rangle $ denotes an eigenstate of $\hat {O}$ . These averages, being linear in ρk, are determined by a single density matrix, namely the average density matrix $\rho =\overline {\rho _k}$ . This simple picture has been made much more rigorous by Renner in [3]. He showed that the crucial ingredient (missing in the simple picture) is permutation invariance. That is, if we randomly permute the sequence of quantum systems, and then trace out some subset, the joint state of the remaining systems is to a good approximation independently and identically distributed (i.i.d.). In our context this means that as long as the quorum of observables is measured in a random order, then to a good approximation any one of the remaining unmeasured systems can be described by a single density matrix ρ. We now discuss what may go wrong if we measure the observables constituting a quorum in a nonrandom order.

1.2. Possible errors in standard quantum state tomography

It is much easier to measure a given observable from the quorum many times in a row, before switching to measurement of the next observable. Such a procedure is standard practice, but it voids Renner's proof, and so it may be that there is not a single density matrix that can be validly assigned to the remaining unmeasured quantum systems.

Let us introduce this problem with a simple example. Given an ensemble of 3N ≫ 1 qubits that—we assume!—are identically and independently prepared, we want to estimate their density matrix. So we divide them into three equal and sequential groups, and measure σx on samples 1,...,N, σy on samples N + 1,...,2N, and σz on the last N. Now, if the samples are indeed identically prepared in some state ρ, then we can safely perform the measurements in this order—the state ρ⊗3N is invariant under permutations, so all orderings are equivalent. But if the source is drifting over time, the first N copies are best described by a mean density matrix $\bar {\rho }_{1}$ , while the second and third sets of N qubits are best described by (possibly different) average states $\bar {\rho }_{2}$ and $\bar {\rho }_{3}$ , respectively.

For an amusing (albeit extreme) example, consider a situation where the first N copies are best described by $\bar {\rho }_{1} = | +\rangle \!\langle + |$ , the second group by $\bar {\rho }_{2}=| +i\rangle \!\langle +i |$ , and the third by $\bar {\rho }_{3}=| 0\rangle \!\langle 0 |$ . The measurement outcomes in this case are not random at all: every single measurement (of σx, σy, σz) will yield eigenvalue +1. Linear inversion tomography will yield a radically nonpositive state

Equation (1):  $\hat{\rho}_{\mathrm{tomo}} = \frac{1}{2}\left({\mathbbm{1}} + \sigma_x + \sigma_y + \sigma_z\right)$

and maximum likelihood estimation (MLE) yields the projector onto $\hat {\rho }_{\mathrm {tomo}}$ 's positive eigenspace. Although both estimates are plausible answers to 'what single matrix best fits the observed data?', neither one of them is of any predictive use at all! The source is drifting so rapidly and drastically that this set of 3N samples really tells us almost nothing about future observations. This is the simplest and best conclusion at which our data analysis should arrive.
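As a minimal numerical sketch (ours, not from the paper), the following Python snippet performs linear inversion on the extreme data above, where every measurement of σx, σy and σz returned +1, and exhibits both the negative eigenvalue of the linear-inversion estimate and the pure-state MLE projector:

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
ident = np.eye(2, dtype=complex)

X, Y, Z = 1.0, 1.0, 1.0                                # every outcome was +1 in each block
rho_tomo = 0.5 * (ident + X * sx + Y * sy + Z * sz)    # linear inversion

evals, evecs = np.linalg.eigh(rho_tomo)
print(evals)                                           # ~ [-0.366, 1.366]: radically nonpositive

# MLE for a single qubit returns the projector onto the positive eigenspace:
psi_plus = evecs[:, np.argmax(evals)]
rho_mle = np.outer(psi_plus, psi_plus.conj())
print(np.round(rho_mle, 3))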

This is a rather extreme and contrived example of experimental drift (below we will discuss a more common type of nonrandom experiment where the above cycle of measurements is repeated once: so we measure σx on the first N/2 copies, then σy, then σz, and then σx,σy,σz again, each on N/2 sequential copies). More realistic examples show similar behavior, though. The statistics given above are actually more consistent with a different (and still plausible) mechanism: when the measurement apparatus is 'rotated' to perform a different measurement, the experimenter inadvertently 'rotates' the samples as well. A particularly naïve version of this could occur with photon polarization, where one way to physically rotate a polarizer is for the experimentalist to simply rotate his own frame of reference (e.g., by lying down). Such a passive rotation obviously fails to change the relative orientation of samples and apparatus. More realistic examples occur when similar quantum gate devices are used to (i) prepare states (e.g. EPR states) and (ii) implement measurements. In quantum process tomography, this sort of pitfall is well known; it violates the conditions for complete positivity of processes, and causes negative eigenvalues just as in our example above [4].

All of these failures are examples of a single phenomenon: sample-apparatus correlation. In process tomography, this is usually explained by correlation between the system and its environment. In state tomography, there is no environment per se, but if the state of the kth sample is (in any way) correlated with the behavior of the measurement apparatus (e.g. with what measurement it is oriented to perform), then tomography goes wrong. Experimental drift is a simple and easy to understand example: the sample state is correlated with time, and if the apparatus setting is also allowed to vary with time, then there will be sample-apparatus correlation. As noted above, this can be eliminated by explicitly randomizing the order of measurements, so that while the samples are still time-dependent, the apparatus is not. Other kinds of sample-apparatus correlation are not so easy to remedy.

In the example given above, the extremity of the data—and the fact that the linear inversion estimate is radically negative—are a dead giveaway. On the other hand, linear inversion can produce negative estimates even with ideal data [5, 6] because of statistical fluctuations. The raison d'etre of MLE is to fix this negativity, but by constraining the estimate to positive states, MLE also hides the tell-tale signature of failed tomography. Moreover, negative estimates are not (in general) a reliable symptom even of drastic experimental drift. If the drifting states in the example above were a bit more mixed—e.g. $\bar {\rho }_{k}' = \frac{1}{2}\bar {\rho }_{k} + \frac {1}{4}{\mathbbm{1}}$ —then linear inversion and MLE would yield identical and positive density matrices. But, just as in the original example, those estimates would be useless and not predictive.
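To make the 'more mixed' version concrete, here is a small sketch (ours, using the same block structure as above): the three drifting block states, each mixed with the identity, yield a perfectly positive linear-inversion estimate, yet its Bloch vector is three times longer than that of the true time-averaged state, so it badly over-predicts future Pauli measurements.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
ident = np.eye(2, dtype=complex)

plus   = np.array([1, 1], dtype=complex) / np.sqrt(2)    # sigma_x = +1 eigenstate
plus_i = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # sigma_y = +1 eigenstate
zero   = np.array([1, 0], dtype=complex)                 # sigma_z = +1 eigenstate
blocks = [0.5 * np.outer(v, v.conj()) + 0.25 * ident for v in (plus, plus_i, zero)]

# Each block is only ever measured with "its own" Pauli, so the observed averages are
X = np.trace(blocks[0] @ sx).real    # 0.5
Y = np.trace(blocks[1] @ sy).real    # 0.5
Z = np.trace(blocks[2] @ sz).real    # 0.5

rho_lin = 0.5 * (ident + X * sx + Y * sy + Z * sz)
print(np.linalg.eigvalsh(rho_lin))   # ~ [0.067, 0.933]: positive, so MLE returns the same state

# The true time-averaged state has Bloch components of only 1/6 each:
rho_avg = sum(blocks) / 3
print([round(np.trace(rho_avg @ s).real, 3) for s in (sx, sy, sz)])   # [0.167, 0.167, 0.167]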

Fortunately, there is a general solution to this problem. It elegantly generalizes the observation (made above) that a radically negative $\hat {\rho }_{\mathrm {tomo}}$ should trigger skepticism. It can also diagnose drift in the absence of negativity if the data are sufficiently rich. It is called model selection.

The core principle is that, when tomography fails:

  • 1.  
    The standard model for tomography—i.i.d. samples described by a single density matrix—is bad.
  • 2.  
    Some other model will be better.
  • 3.  
    We can quantify 'bad' and 'better', and use the results to decide whether our tomography went wrong.

Clearly, putting this into practice requires that we come up with alternative models to describe the data. Model design is more of an art than a science. Here, we demonstrate alternative models for some simple and relevant problems, and leave the rich problems of general and optimal alternative-model design to future work. Instead, we focus on model selection, which means determining whether (i) the standard tomographic model is pretty good, or (ii) some other model (e.g. a drifting source model) is better.

1.3. Akaike to the rescue

To accomplish this, we propose, as we mentioned above, to use the AIC [2]. Widely used outside of physics [7, 8], the AIC is relatively unknown within the physics community. However, it has been applied in astrophysics [9], entanglement verification [10] and quantum state estimation [11–13]. Its function is to quantify (by assigning a real number) how well a given model describes the data from a given experiment. The AIC's absolute value is not meaningful, but the relative AIC values for multiple different models have a deep and useful meaning (see the following section for a more detailed discussion of the AIC, its meaning, and its derivation). Their simplest use is to rank all the different models, and thus to identify (a) which is the best, and (b) how significantly 'worse' the others are.

The AIC assigns a number Ωk to each model k, given by⁵

Equation (2):  $\Omega_k = \ln {\cal L}_k - K_k$

where ${\cal L}_k$ is the likelihood of model k—or, if model k has adjustable parameters (as is usually the case), the maximum of the likelihood over all those parameters—and Kk is the number of independent model parameters used in model k to fit the data⁶. The larger the AIC (Ωk) is, the higher the model is ranked. While Ωk's absolute value is meaningless, the difference Δ = Ωk − Ωk' represents (roughly speaking) the weight of evidence in favor of k over k', measured in bits. So, for example, if we want to report a weighted average of the two models, the ratio of the weights assigned to models k and k' should be $w_k/w_{k'} = \exp (\Omega _k-\Omega _{k'})$ .

The AIC's simple form admits a simple interpretation: fitting the data better (higher likelihood) is good, but extra parameters are bad. Additional parameters must justify their existence by improving the likelihood (a measure of goodness-of-fit) by at least a factor of e. This helps to prevent overfitting. Adding adjustable parameters will always improve a model's fit—but a good fit to past data is not a guarantee that the model will accurately predict future measurements.
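For concreteness, here is a short sketch (ours; the log-likelihood values below are purely illustrative) of this ranking and of the model weights mentioned above, using the convention of equation (2) in which larger Ω is better:

import numpy as np

def aic_score(log_likelihood, num_params):
    """Omega_k = ln L_k - K_k (larger is better, as in equation (2))."""
    return log_likelihood - num_params

def akaike_weights(scores):
    """Relative model weights w_k proportional to exp(Omega_k)."""
    scores = np.asarray(scores, dtype=float)
    w = np.exp(scores - scores.max())      # subtract the maximum for numerical stability
    return w / w.sum()

# Illustrative numbers only: a 'standard' model and a richer 'alternative' model.
omega_std = aic_score(log_likelihood=-2031.4, num_params=3)
omega_alt = aic_score(log_likelihood=-2027.9, num_params=5)
print(omega_std - omega_alt)               # negative here: the extra parameters pay for themselves
print(akaike_weights([omega_std, omega_alt]))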

Example. If we measure each of 3N qubits, measuring $\hat {O}_j$ on qubit j for j = 1,...,3N, then the best possible fit to the data is to assume that each qubit j just happened to be in the appropriate eigenstate of $\hat {O}_j$ so that the probability of the observed data is $\mathcal {L} = 1$!

Intuitively, this 'explanation' is absurd. The AIC quantifies that intuition; that model requires a huge (O(N)) number of parameters, and the resulting penalty will overwhelm its higher likelihood, ensuring that its AIC is far worse than that of simpler models.

To apply the AIC to our example of section 1.2, we need an alternative model (the 'standard model' just uses a single density matrix for all 3N qubits). A simple alternative that describes experimental drift (as well as some other forms of sample-apparatus correlation) is to use one density matrix for each of the three groups of samples. This alternative model will always fit the data at least as well, but it may use more parameters⁷. The AIC ranks both models, and quantifies how much better one is than the other. We perform and analyze this calculation for our single-qubit example (where just two models are sufficient) in section 2.1, and address more complicated variations on this theme—with multiple alternative models—in section 2.3.

To conclude this (long) introduction, we note that the appearance of maximum likelihoods in the AIC does not imply any privileged role for MLE of states or any other physical quantities. The likelihood is a central concept in statistics, and appears in almost every method. In the AIC, it is used specifically to quantify goodness-of-fit, and (obviously) the AIC balances this quantity against another (model complexity). Moreover, the AIC is used only to rank different models. There is no implicit requirement that the highest-ranked model must be chosen exclusively (in fact, a common strategy is to average over high-ranked models), and even if the 'best' model is chosen, we remain free to analyze that model without MLE (e.g. via Bayesian averaging).

2. Examples

In this section we first treat the example from the introduction, tomography on single qubits, in more detail (section 2.1). In this example, inconsistencies can arise only when the observed average values of σx,σy,σz are inconsistent with each other, which in turn can only happen if the density matrix obtained by linear inversion is unphysical. The next example, discussed in section 2.2, also concerns single qubits, but now measurements of σx,σy,σz are each repeated once. In this (experimentally more relevant) case inconsistencies can arise when two estimates of the same quantity are statistically different. Ad hoc methods that just consider this particularly simple type of inconsistency work just as well as the AIC. In the last section, 2.3, we will consider the case of multiple qubits, in which the validity of ad hoc methods is much harder to verify, but the AIC still works in the same manner, thus showing the universality of that method.

2.1. One qubit, part 1

We return to tomography of single qubits, where we measure σx on the first N qubits, then σy on the next N, and σz on the last N qubits. Denote the three observed averages by $X:=\left \langle \sigma _x\right \rangle _{{\mathrm { obs}}}$ , $Y:=\left \langle \sigma _y\right \rangle _{{\mathrm { obs}}}$ , and $Z:=\left \langle \sigma _z\right \rangle _{{\mathrm { obs}}}$ . In order to calculate likelihoods, we need the frequencies of having observed spin up (+) and down (−), respectively. They are given in terms of these averages by

Equation (3a):  $f^{\pm}_x = \frac{1\pm X}{2}$
Equation (3b):  $f^{\pm}_y = \frac{1\pm Y}{2}$
Equation (3c):  $f^{\pm}_z = \frac{1\pm Z}{2}$
A density matrix describing just the first set of N measurements needs only one parameter, X (the other two parameters are, obviously, not at all determined by those data). And no matter what X is, there is always a perfect fit to the data. The logarithm of the (maximum) likelihood of such a density matrix is, therefore,

Equation (4):  $\ln {\cal L}_1 = -N\,H\!\left(\frac{1+X}{2}\right)$

with H(.) the Shannon entropy. The same story holds for the next two sets of measurements, and so there is always a perfect fit to the data when we use the 'alternative model' with three density matrices, and that model needs three independent parameters. We conclude that the AIC assigns the following ranking to the alternative model:

Equation (5):  $\Omega_{\mathrm{a}} = \ln {\cal L}_{\mathrm{a}} - 3 = -N\left[H\!\left(\frac{1+X}{2}\right)+H\!\left(\frac{1+Y}{2}\right)+H\!\left(\frac{1+Z}{2}\right)\right] - 3$

The performance of the 'standard model' depends on the value of just one number. If

Equation (6):  $R := \sqrt{X^2+Y^2+Z^2} \leqslant 1$

there is a single maximum likelihood density matrix $\bar {\rho }$ (with purity $\mathrm {Tr}\,\bar {\rho }^2=(R^2+1)/2$ ) that describes the whole measurement perfectly, just as the alternative model does. The standard model also needs three parameters in this case, and the maximum likelihood is also the same as for the alternative model. So, in this case there is no real difference between the two models—we could pick $\bar {\rho }_{1}=\bar {\rho }_{2}=\bar {\rho }_{3}=\bar {\rho }$ —and we have Ωs = Ωa. There is no reason to reject the standard model when R ⩽ 1.

Now let us suppose that R > 1. We have then the choice between two descriptions:

  • 1.  
    Alternative model. We describe each of the three measurements by their own density matrix. The maximum likelihood estimates of those three states satisfy
    Equation (7a):  $\mathrm{Tr}\,\bar{\rho}_{1}\sigma_x = X$
    Equation (7b):  $\mathrm{Tr}\,\bar{\rho}_{2}\sigma_y = Y$
    Equation (7c):  $\mathrm{Tr}\,\bar{\rho}_{3}\sigma_z = Z$
    Three independent parameters are needed for this model. ( $\bar {\rho }_{1},\bar {\rho }_{2},\bar {\rho }_{3}$ are underdetermined, of course, but for the purpose of finding the maximum likelihood ${\cal L}_{\mathrm {a}}$ the information suffices.)
  • 2.  
    Standard model. We use one density matrix to describe all three measurements together. The maximum likelihood estimate of that state will be pure. There is no known method to compute it exactly, but a generally good approximation is given by
    Equation (8a):  $\mathrm{Tr}\,\bar{\rho}_{\mathrm{s}}\sigma_x = X/R$
    Equation (8b):  $\mathrm{Tr}\,\bar{\rho}_{\mathrm{s}}\sigma_y = Y/R$
    Equation (8c):  $\mathrm{Tr}\,\bar{\rho}_{\mathrm{s}}\sigma_z = Z/R$
    and this state's likelihood is a strict (but generally pretty tight) lower bound on the maximum likelihood for the standard model. Two independent parameters are needed in this model⁸.

The reason we end up with a pure maximum likelihood state in the standard model is that the single matrix fitting the data perfectly lies outside the set of physical states (it has a negative eigenvalue), and the closest physical state lies on the boundary [5]. In the case of qubits, this means a pure state. More precisely, if the unphysical best-fit matrix $\tilde {\rho }$ is written in its diagonal form, $\tilde {\rho }=\sum _{k=+,-}\lambda _k| \psi _k\rangle \!\langle \psi _k |$ , with λ+ > 1 and λ− < 0, then the maximum likelihood estimate would be $\bar {\rho }_{\mathrm {s}}=| \psi _+\rangle \!\langle \psi _+ |$ . The latter state has the properties (8), as can be easily verified by explicit calculation.

Thus, when R > 1 the alternative model fits the data better but uses one more parameter than does the standard model. We can calculate the maximum likelihoods analytically in each of the two models, and thus obtain the relative AIC score of the two models:

Equation (9):  $\Omega_{\mathrm{s}} - \Omega_{\mathrm{a}} = 1 + N\sum_{M=X,Y,Z}\left[\frac{1+M}{2}\,\ln\frac{1+M/R}{1+M} + \frac{1-M}{2}\,\ln\frac{1-M/R}{1-M}\right]$

We accept the standard model as consistent iff Ωs ⩾ Ωa. This will happen only if R is sufficiently close to 1. If we expand R around 1, we can Taylor expand the right-hand side of (9) as

Equation (10):  $\Omega_{\mathrm{s}} - \Omega_{\mathrm{a}} \approx 1 - N(R-1)^2\sum_{M=X,Y,Z}\frac{M^2}{2(1-M^2)}$

provided (R − 1)² ≪ (1 − M²) for M = X,Y,Z. That is, with this proviso, the standard model is consistent only when

Equation (11):  $R \leqslant 1 + \frac{C}{\sqrt{N}}$

with the constant C given by

Equation (12):  $C = \left[\sum_{M=X,Y,Z}\frac{M^2}{2(1-M^2)}\right]^{-1/2}$

The dependence of the condition (11) on N agrees with the simple idea that it is sufficient for R to be less than about a standard deviation or two above 1 for the standard model to still apply, and that standard deviation, of course, decays like $1/\sqrt {N}$ for N → ∞.
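The whole comparison of this subsection can be sketched in a few lines of Python (our reconstruction: it uses the pure-state approximation of equation (8) for the standard model when R > 1, and the convention of equation (2)):

import numpy as np

def xlogy(x, y):
    """x * log(y) with the convention 0 * log(0) = 0."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    out = np.zeros_like(x)
    mask = x != 0
    out[mask] = x[mask] * np.log(y[mask])
    return out

def delta_aic(X, Y, Z, N):
    """Omega_s - Omega_a for the observed block averages X, Y, Z (N shots per block)."""
    M = np.array([X, Y, Z], dtype=float)
    R = np.sqrt((M ** 2).sum())
    if R <= 1.0:
        return 0.0                          # both models fit perfectly with the same K
    f_plus, f_minus = (1 + M) / 2, (1 - M) / 2
    # Alternative model: perfect fit, 3 parameters.
    logL_alt = N * (xlogy(f_plus, f_plus) + xlogy(f_minus, f_minus)).sum()
    # Standard model: pure state with Bloch vector M/R, 2 parameters.
    logL_std = N * (xlogy(f_plus, (1 + M / R) / 2) + xlogy(f_minus, (1 - M / R) / 2)).sum()
    return (logL_std - 2) - (logL_alt - 3)

print(delta_aic(1.0, 1.0, 1.0, N=100))      # ~ -70: overwhelming evidence against a single state
print(delta_aic(0.61, 0.59, 0.56, N=100))   # ~ +0.98: the standard model is still acceptable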

2.2. One qubit, part 2

The implementation of tomography in the previous example is probably too simple and too obviously wrong for it to have been applied in an actual experiment. The straightforward improvement to measure each of σx,σy,σz in two separate blocks will allow one to detect drift. Let us denote the six observed averages by $X_{1,2}:=\left \langle \sigma _x\right \rangle _{{\mathrm { obs 1,2}}}$ , $Y_{1,2}:=\left \langle \sigma _y\right \rangle _{{\mathrm { obs 1,2}}}$ , and $Z_{1,2}:=\left \langle \sigma _z\right \rangle _{{\mathrm { obs 1,2}}}$ . Drift can be detected by comparing the pairs of estimates X1,2 with each other, Y1,2 with each other, and Z1,2 with each other. The AIC works as follows: we need again at least two different models for describing the data. One will be the standard model, with one density matrix describing all six measurements. This density matrix will be determined by the three averages (X1 + X2)/2, etc. The alternative model may consist of two independent density matrices (with six parameters in total) or of two density matrices that are not independent with either four or five parameters in total. Let us test the AIC in a simulation of data generated by single-qubit states of the form

Equation (13):  $\rho_\phi = p\,| \psi_\phi\rangle\!\langle \psi_\phi | + (1-p)\,\frac{{\mathbbm{1}}}{2}$

where the pure state $\left | \psi _\phi \right \rangle $ depends on an angle ϕ, which we assume to undergo a random walk:

Equation (14):  $| \psi_\phi\rangle = \frac{1}{\sqrt{2}}\left(| 0\rangle + \mathrm{e}^{\mathrm{i}\phi}| 1\rangle\right)$

For p we take the value p = 0.9. We perform in total 3000 measurements, divided into six groups of 500, in which we measure σx,σy,σz,σx,σy,σz in that order.
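A simulation of this kind can be sketched as follows (the assumptions here are ours, made for illustration: a Gaussian random walk for ϕ, the equatorial state of equation (14), empirical block frequencies as maximum-likelihood fits, and the six-parameter alternative model of two independent density matrices):

import numpy as np

rng = np.random.default_rng(1)
p, n_shots, block = 0.9, 3000, 500
step = 0.02                                   # per-shot standard deviation of the phase walk
phi = np.cumsum(rng.normal(0.0, step, n_shots))

# Bloch vector of rho_phi = p |psi_phi><psi_phi| + (1 - p) 1/2 with
# |psi_phi> = (|0> + exp(i phi)|1>)/sqrt(2):  r = (p cos phi, p sin phi, 0).
bloch = np.stack([p * np.cos(phi), p * np.sin(phi), np.zeros(n_shots)], axis=1)

axes = [0, 1, 2, 0, 1, 2]                     # sigma_x, y, z, x, y, z in blocks of 500
outcomes = np.empty(n_shots)
for b, ax in enumerate(axes):
    sl = slice(b * block, (b + 1) * block)
    prob_up = (1 + bloch[sl, ax]) / 2
    outcomes[sl] = np.where(rng.random(block) < prob_up, 1, -1)

def block_loglike(data, mean):
    """Binomial log-likelihood of +/-1 outcomes under a predicted average `mean`."""
    n_up = np.sum(data == 1)
    n_dn = data.size - n_up
    return n_up * np.log((1 + mean) / 2) + n_dn * np.log((1 - mean) / 2)

halves = [outcomes[:1500].reshape(3, block), outcomes[1500:].reshape(3, block)]
means = [h.mean(axis=1) for h in halves]      # (X1, Y1, Z1) and (X2, Y2, Z2)
pooled = (means[0] + means[1]) / 2            # standard model: one density matrix

logL_alt = sum(block_loglike(halves[h][a], means[h][a]) for h in range(2) for a in range(3))
logL_std = sum(block_loglike(halves[h][a], pooled[a]) for h in range(2) for a in range(3))
# Standard model: 3 parameters; alternative model (two independent states): 6 parameters.
print("Omega_s - Omega_a =", (logL_std - 3) - (logL_alt - 6))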

In figures 1 and 2 we plot two qualitatively different cases. In the first case the diffusion of ϕ is so fast that it leads to noticeably different values of X1,2 and Y1,2. The AIC in this case gives a very clear preference for the alternative model of using two density matrices with five parameters in total (only the expectation value of σz does not change over the course of the experiment). In the second case the drift over the course of the experiment is small enough so that the standard model is still the best, even though there is some drift, and even though the more complicated model does, of course, fit the data slightly better.


Figure 1. Left: simulation of diffusion of the angle ϕ in the state $\left | \psi _\phi \right \rangle $ over the course of 3000 measurements. Right: the number of 'spin up' results for the measurements of σx (in red, for measurements 1–500 and 1501–2000), of σy (in green, for measurements 501–1000, and 2001–2500), and of σz (in black, for the remaining measurements). The numbers for σy are statistically different, and this is reflected in the relative ranking the AIC accords to the different models. Here we have Ωs − Ωa = −5.07, where the negative sign implies the standard model of a single density matrix is significantly worse than the alternative model (here, two density matrices with five parameters in total, two parameters more than the standard model). Tomography failed in this case.


Figure 2. Same as figure 1, but for a case where the drift is much smaller over the course of 3000 measurements. Here Ωs − Ωa = 1.38, so that the standard model is better than the best alternative model (which has two extra parameters). Tomography succeeded.


2.3. Two or more qubits

Consider now a tomographically complete measurement on 9N copies of two qubits, where on the first N pairs of qubits we measure σx on both qubits independently, then on the next N pairs we measure σx on one qubit and σy on the other, then on the third set of N pairs we measure σx on the one and σz on the other ... until on the last (ninth) set of N pairs we measure σz on both qubits. The first measurement is described by three independent averages that are obtained from measuring σx on both qubits independently:

Equation (15a):  $XX := \left\langle \sigma_x\otimes\sigma_x\right\rangle_{\mathrm{obs}\,1}$
Equation (15b):  $IX := \left\langle {\mathbbm{1}}\otimes\sigma_x\right\rangle_{\mathrm{obs}\,1}$
Equation (15c):  $XI := \left\langle \sigma_x\otimes{\mathbbm{1}}\right\rangle_{\mathrm{obs}\,1}$
Thus, a two-qubit density matrix perfectly fitting the data of the first measurements needs three parameters. The description of the second measurement of σx on one qubit and σy on the other is likewise determined by three observed averages
Equation (16a):  $XY := \left\langle \sigma_x\otimes\sigma_y\right\rangle_{\mathrm{obs}\,2}$
Equation (16b):  $IY := \left\langle {\mathbbm{1}}\otimes\sigma_y\right\rangle_{\mathrm{obs}\,2}$
Equation (16c):  $XI := \left\langle \sigma_x\otimes{\mathbbm{1}}\right\rangle_{\mathrm{obs}\,2}$
The new feature arising here is that we get a second estimate of the same parameter, XI in this case. That is, if there were only a single two-qubit state in the experiment, the estimates (15c) and (16c) would have to agree (within error bars). Conversely, if they do not agree, we have encountered a new diagnostic of inconsistent tomography.

Writing down all the different averages obtained from this particular experiment, we find nine quantities that are measured once, and six other quantities that are measured thrice. It now becomes much harder to judge whether all the differences between those different estimates of the same quantities are, in total, statistically significant. That is, the generalization of the ad hoc method that worked fine for a single qubit becomes troublesome. This, of course, becomes exponentially worse for more than two qubits.
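A small bookkeeping sketch (ours) makes this counting explicit:

from collections import Counter
from itertools import product

settings = list(product("XYZ", repeat=2))     # ('X','X'), ('X','Y'), ..., ('Z','Z')

counts = Counter()
for a, b in settings:
    counts[a + b] += 1       # two-qubit correlator, e.g. <sigma_x x sigma_y>
    counts[a + "I"] += 1     # marginal on the first qubit, e.g. <sigma_x x 1>
    counts["I" + b] += 1     # marginal on the second qubit

once = [q for q, c in counts.items() if c == 1]
thrice = [q for q, c in counts.items() if c == 3]
print(len(once), len(thrice))                 # 9 correlators measured once, 6 marginals measured thrice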

On the other hand, the AIC can be applied straightforwardly to various alternative models. It is sufficient to find just one alternative model superior to the standard model in order to have succeeded in diagnosing an inconsistency in our tomographic experiment. Of course there is a large multitude of alternative models, but one can be guided in searching for such models by looking for those estimates of the same quantities that are the least consistent.

3. Model selection, the Akaike information criterion and quantum quirks

Data are generally assumed to be generated by some stochastic process⁹—e.g. a probability distribution f(x) (where x ranges over the sample space of all possible events). Unfortunately, these 'true' probabilities are unknown to us. All we have are some data. So, in order to (i) describe the data; (ii) approximate the underlying process f; and (iii) most importantly, predict future observations, we use models.

A model is just another probability distribution g(x). Almost always, the model contains a whole family of parameterized distributions gθ(x), where θ comprises the values of K distinct (real-valued) parameters. One obvious model is the universal one where each of the probabilities g(x)—for every possible value of x—is itself a free parameter. This is the richest possible model, with the most parameters. If x takes on uncountably many values, this model is utterly intractable (and the AIC penalizes it infinitely for its richness). The ubiquity of this problem in statistics motivates the use of restricted parameterized models (e.g. Gaussian distributions) where finitely many parameters can specify g(x) for every possible x.

Quantum tomography applications usually involve finitely many parameters, but few-parameter models are still important. This is partly because of the simplification obtained by eliminating many parameters (e.g. when a quantum state in 2N dimensions is approximated by a matrix product state with poly(N) parameters), but even more importantly because it guards against overfitting. This is precisely where well-designed model selection techniques come in, and the AIC is a canonical example. When there is a choice between different candidate models describing one and the same experiment, the AIC provides a numerical ranking of the different models.

The AIC (as given in equation (2)) appears very simple. Moreover, it bears a strong resemblance to quantities that appear in likelihood-ratio (LR) hypothesis testing (see, e.g., [14]). But in fact, the AIC's theoretical underpinnings are rather different, and remarkably elegant (see [7] for extensive discussion). LRs are a fundamentally frequentist technique: given two competing models, we calculate ahead of time the probability that various values of the LR statistic will be observed if one model or the other is 'correct', and then we formulate a rule for what to announce upon seeing any given value of the LR statistic. Many canonical results on LR tests require that the models be nested—i.e. that one be a subset of the other. In particular, given this and a few other conditions, it is possible to derive expectation values of the LR statistic that look identical to equation (2) because the log-likelihood ratio is $\chi^2_K$ distributed, and has mean value K.

But despite this similarity, the AIC is derived differently. Akaike began by postulating that 'goodness' of a model is quantified by the Kullback–Leibler divergence [15] between the model and the 'true model' that actually underlies the data. Then, rather remarkably, he showed that it is possible to estimate this divergence¹⁰ even when the true model is unknown! The AIC is the expected value of the (unknown) Kullback–Leibler divergence between a specified model and the (unknown) true model, conditional upon the data in our possession. So the AIC (i) has a powerful and universal interpretation, and (ii) can be used to compare arbitrary models, without any requirement for nesting.

This is not to say that the AIC is the acme of model selection, nor that it is perfectly adapted to quantum tomography problems. First, there are competing derivations of other model ranking statistics, such as the Bayesian information criterion (again, see [7]). Moreover, the AIC is inherently an asymptotic result—much like, for example, the efficiency of MLE. So, even though there is a finite sample size correction (the AICc), this correction is part of an asymptotic expansion and may be unreliable for any fixed N.

One significant consequence of this is that, for finite samples, an event x whose true probability is nonzero may not be observed—in which case a model might assign zero probability to it. (The MLE within the full model, where each probability g(x) is a parameter, behaves this way.) This results inevitably in an infinite Kullback–Leibler divergence. Asymptotically, the probability of such a pathology occurring goes to zero. But for any finite sample size it is a concern. So, beware of rank-deficient estimates in tomography!
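A tiny sketch of this pathology, with made-up counts and model probabilities:

import numpy as np

observed_counts = np.array([58, 41, 1])   # the rare third outcome did occur
model_probs = np.array([0.6, 0.4, 0.0])   # a rank-deficient model assigns it probability zero
with np.errstate(divide="ignore"):
    log_likelihood = np.sum(observed_counts * np.log(model_probs))
print(log_likelihood)                     # -inf: the estimated divergence blows up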

A related phenomenon is (almost) unique to quantum tomography. Akaike's derivation assumes that a very good (if not the best) measure of predictive power is the Kullback–Leibler divergence between the true model f(x) for the observed process x and the assigned model g(x). But in quantum tomography, the observed process 'x' is some particular (and rather arbitrary) quorum of measurements that the tomographer has performed. We do not necessarily care about predicting those measurements! Instead, we care about the underlying quantum state—or, to put it more operationally, we care about a large and unknown set of other measurements that might be performed on samples of that state in the future. Quite frequently, we care about measurements in that state's diagonal basis. This completely undermines Akaike's assumption (that predicting x is the goal). This does not mean that the AIC should not be used—but it does strongly suggest that

  • 1.  
    conclusions drawn from the AIC, or any other classical statistical method, should be treated with thoughtful care,
  • 2.  
    better methods may still be derived (e.g. a 'quantum AIC'),
  • 3.  
    estimates obtained via the AIC should not be expected to have good properties with respect to quantum relative entropy (the quantum version of Kullback–Leibler divergence).

Importantly, however, there are cases where our future measurements will be the same as those used for our preliminary quantum tomography experiment. For instance, in the case of quantum computing, where error correction is implemented by CSS codes, all measurements will be Pauli measurements. In such a case, the conclusions of the AIC, applied to a tomography experiment that used Pauli measurements as well, should be trustworthy.

4. Summary and discussion

Our central message here is that when the assumptions of tomography fail, it is often due to some sort of sample-apparatus correlation, and that this can be detected with statistical reliability by model selection using the AIC. One particular example, the drifting source, clearly voids the single-density-matrix model, but can be described naturally (and more accurately!) by multiple density matrices associated with different times and/or measurement settings. The AIC is a particularly good and elegant tool for identifying whether the added complexity of this model is justified. Ultimately, the point of model selection (especially using the AIC) is to get better predictions of future measurement outcomes—not just better fits to observed data.

While the AIC ranks competing models, by assigning each model k a number Ωk, through equation (2), we have great flexibility in what to do with that ranking. Small differences in AIC are not significant; if |Ωk − Ωk'| ≪ 1, then both models are equally good. But even when significant differences exist, we may choose to use the 'best' model exclusively, or to hedge by mixing it with lower-ranked models (with weights determined by their respective AICs). We could apply Bayesian methods to the highest-ranked model, or use maximum likelihood estimates to choose model parameters. Choosing between these alternatives is beyond the scope of this paper.

If a model-selection (e.g. AIC) analysis finds overwhelming evidence of sample-apparatus correlation (e.g. source drift), it is often possible to go beyond the conclusion 'tomography has failed!'. What has really failed is the i.i.d. assumption—we have convincing evidence that the samples are not identically distributed. The joint state is therefore not (with high confidence) of De Finetti form (see [16]). But it may be possible to assign states with a relaxed De Finetti form, and thereafter to do tomography with this in mind. For example, if the AIC declares the alternative three-state model much superior to the single-state model, one could assign a state of the form

Equation (17):  $\rho^{(3N)} = \int\!\!\int\!\!\int \mathrm{d}\rho_1\,\mathrm{d}\rho_2\,\mathrm{d}\rho_3\; P_{\mathrm{a}}(\rho_1,\rho_2,\rho_3)\; \rho_1^{\otimes N}\otimes\rho_2^{\otimes N}\otimes\rho_3^{\otimes N}$

to the 3N qubits, where Pa(.,.,.) is a joint probability distribution over three two-dimensional density matrices. This form itself needs to be tested and validated, by comparison to a richer model (e.g. a model with six, nine, or more different states). In general, validating a model requires more sophisticated model design—e.g. to describe more arbitrary forms of source drift—and perhaps different measurements or experiments specifically aimed at detecting those models, as proposed in [17]. But once a given model is validated, if it implies a relaxed De Finetti form as in equation (17), then we can in principle perform tomography independently on each of the i.i.d. subsets of the whole sample.

In the simplest case of tomography on single qubits, we discussed two competing models. Either one uses just a single density matrix $\bar {\rho }$ to describe the experiment (the standard model), or one uses three— $\bar {\rho }_{1},\bar {\rho }_{2},\bar {\rho }_{3}$ —one for each set of N qubits used to measure σx, σy, and σz, respectively (the alternative model). But what does it mean to use three density matrices for predicting future measurement outcomes? The answer is that the predictions refer to measurements on qubits that have not been measured yet (of course). Consider one unmeasured qubit taken from, say, a set of N + n qubits, from which N qubits were randomly picked to be measured in the σx basis and n were not measured. In this case, those n qubits would be assigned a state of the form

Equation (19):  $\rho^{(n)} = \int \mathrm{d}\rho_1\, P_{\mathrm{a}}(\rho_1)\,\rho_1^{\otimes n}$

valid for any n, including n = 1. The mixed model, as mentioned above, would combine the standard and alternative models and assign an even more mixed state. For example, in the case n = 1 it would assign the estimate

Equation (20):  $\hat{\rho} = w_{\mathrm{a}}\int \mathrm{d}\rho_1\, P_{\mathrm{a}}(\rho_1)\,\rho_1 + w_{\mathrm{s}}\int \mathrm{d}\rho\, P_{\mathrm{s}}(\rho)\,\rho$

with $w_{\mathrm {a}}=\exp (\Omega _{\mathrm {a}})/(\exp (\Omega _{\mathrm {a}})+\exp (\Omega _{\mathrm {s}}))$ and ws = 1 − wa the relative weights of the two models, as assigned by the AIC, and with Ps(.) the standard De Finetti probability distribution over single density matrices.

Although we have avoided discussion of model design here, one simple but powerful technique deserves mention. In the example at the beginning of the paper, we introduced an alternative model wherein each measurement setting is associated with a different density matrix. When the measurements are informationally complete, this alternative model has precisely as many parameters as the standard model. But if they are overcomplete, then the alternative model has more parameters. As long as the samples really are i.i.d., we expect the alternative model to fit slightly better, and the AIC to declare them (on average) equally good. However, in the presence of experimental drift, we will find inconsistencies within the overcomplete measurement set—i.e. we will not be able to fit all the measurements well with a single density matrix! This is a simple test for experimental drift that does not rely on negativity of $\hat {\rho }_{\mathrm {tomo}}$ .

For the main point of this paper, however, all these complications are unnecessary. All that matters is whether assigning a single density matrix to our tomography experiment constitutes the best model or not. If not, something is amiss, but at least we have diagnosed the problem.

The main issue we left open is the following: is there a sense in which the AIC works reliably if future measurements are different from those used in our tomography experiment? If not, is there a 'quantum' version of the AIC that, e.g. takes into account the quorum of observables that have been measured, as well as the set of observables that will be measured?

Acknowledgments

This work was supported by NSF grant no. PHY-1004219. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy, National Nuclear Security Administration under contract no. DE-AC04-94AL85000.

Note added in proof:

Upon completion of this paper, [18] appeared; it is similar in spirit to our paper, but uses χ² tests to detect errors in tomography. It points out, too, the problem with pure-state assignments for those tests. After submission of the proofs we also became aware of a very relevant article [19].

Footnotes

  • 5. Usually an overall minus sign and an extra factor of 2 appear on the right-hand side of (2), but for our purposes it is more convenient not to include those.

  • 6. Actually, when the number of measurements (e.g. 3N in the example given) is small, there is a correction term. The corrected AIC (AICc) is given by Ωck = Ωk − Kk(Kk + 1)/(3N − Kk − 1), with Ωk given by (2). Hereafter, we will simply assume 3N ≫ Kk.

  • 7. Intuitively, the alternative model is more contrived, and this should be reflected in its ranking. However, it is not immediately obvious how to quantify this complexity.

  • 8. One of the authors remains bothered by the decision to separate the standard model, effectively, into two distinct models that contain (i) all the mixed states (K = 3) and (ii) the pure states (K = 2), respectively. However, this protocol is specifically discussed and justified by Burnham and Anderson [7, section 6.9.6]. They express a similar concern, but also provide some preliminary justification for it. So, while further thought and research seems warranted, so does this choice.

  • 9. A semi-philosophical note: it is not necessary to invoke intrinsic randomness. The underlying process might be deterministic but so complex that its description is hopeless or just not worth the time. The Bayesian view of probabilities is entirely compatible with this view, and permits us to describe data and the processes that generate them without needing to 'believe in' randomness.

  • 10. More precisely, this estimate is in fact determined only up to a constant, but that constant is the same for all models, and hence drops out when comparing different models.
