
Randomness in post-selected events


Published 7 March 2016 © 2016 IOP Publishing Ltd and Deutsche Physikalische Gesellschaft
Focus on Device Independent Quantum Information. Citation: Le Phuc Thinh et al 2016 New J. Phys. 18 035007. DOI: 10.1088/1367-2630/18/3/035007


Abstract

Bell inequality violations can be used to certify private randomness for use in cryptographic applications. In photonic Bell experiments, a large amount of the data that is generated comes from no-detection events and presumably contains little randomness. This raises the question as to whether randomness can be extracted only from the smaller post-selected subset corresponding to proper detection events, instead of from the entire set of data. This could in principle be feasible without opening an analogue of the detection loophole as long as the min-entropy of the post-selected data is evaluated by taking all the information into account, including no-detection events. The possibility of extracting randomness from a short string has a practical advantage, because it reduces the computational time of the extraction. Here, we investigate the above idea in a simple scenario, where the devices and the adversary behave according to i.i.d. strategies. We show that indeed almost all the randomness is present in the pair of outcomes for which at least one detection happened. We further show that in some cases applying a pre-processing to the data can capture features that an analysis based on global frequencies only misses, thus resulting in the certification of more randomness. We then briefly consider non-i.i.d. strategies and provide an explicit example of such a strategy that is more powerful than any i.i.d. one even in the asymptotic limit of infinitely many measurement rounds, something that was not reported before in the context of Bell inequalities.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Sources of randomness have numerous applications: in algorithms, sampling, numerical simulations, gambling, and of course cryptography [1–3]. The last application demands sources that can be certified as being uncorrelated to any outside process or variable, i.e. private randomness. Typically, the output of a physical process (thermal noise, shot noise, ...) is considered random in this sense only if certain assumptions are made on its underlying behavior. The violation of Bell inequalities, however, certifies private randomness in a device-independent way [4, 5]. From the amount of violation, one obtains a lower bound on the min-entropy H of the output string generated by the process [5–7]. This information is then sufficient to extract randomness: indeed, one can design seeded extractors, whose output is a string of (roughly) H bits guaranteed to be uniformly random, even according to an external adversary.

A Bell experiment, however, produces much more information than the mere violation of a single inequality. For instance, one can estimate the single-run frequencies $p(a,b| x,y)$ of the outcomes (a, b) conditioned on the settings (x, y). When this knowledge is taken into account, higher values for the lower bounds on H can in principle be obtained [8, 9]. More generally, there may be other ways to process the data that can lead to improved bounds on the randomness, as the following example illustrates.

Consider a Bell experiment running for two days, each day consisting of $N\gg 1$ runs. Suppose that, on the first day, the setup produces outcomes that violate the CHSH inequality maximally; on the second day, due to some technical glitch, the detectors do not fire, so the list of outcomes consists only of double no-detection events. Suppose that the users estimate the amount of randomness generated using solely the observed CHSH violation I, using the simple bound $H\geqslant 1-{\mathrm{log}}_{2}(1+\sqrt{2-{I}^{2}/4})$ [5]. Suppose further that they planned to extract randomness every two days. Over the two-day period, they observe an average CHSH violation of $(2\sqrt{2}+2)/2\simeq 2.41$ (we take the convention that no-detection events are mapped to +1 outcomes), from which they deduce a randomness rate of ∼0.2 bits per run for Alice's outcomes, that is $\sim 0.4N$ bits in total for the two-day period. However, the users might have chosen to extract randomness at the end of each day instead. The same techniques now certify 1 bit per run for Alice on the first day and 0 on the second, for a total of N bits over the two days (footnote 6). What happened is clear: the data contain the information that two processes are involved; this information was missed by the overall analysis, but was revealed by the choice of sorting the data in two blocks.
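To make the arithmetic of this example explicit, the short snippet below (our own illustration, not part of the original analysis) evaluates the bound $H\geqslant 1-{\mathrm{log}}_{2}(1+\sqrt{2-{I}^{2}/4})$ for the two daily violations and for their average:

```python
import numpy as np

def H_bound(I):
    """Lower bound on the min-entropy per run as a function of the CHSH value I [5]."""
    return 1 - np.log2(1 + np.sqrt(2 - I ** 2 / 4))

I_day1 = 2 * np.sqrt(2)        # maximal violation on day 1
I_day2 = 2.0                   # only double no-detections (mapped to +1) on day 2
I_avg = (I_day1 + I_day2) / 2  # ~2.41 when the two days are pooled

print(H_bound(I_day1))  # 1.0 bit per run
print(H_bound(I_day2))  # 0.0 bits per run
print(H_bound(I_avg))   # ~0.2 bits per run, i.e. ~0.4N bits over the 2N runs
```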

The example is extreme, but a simple variation is very relevant: the case in which no-detection events are evenly spread over the whole duration of the experiment is a good approximation to the data produced in photonic Bell tests, in which no-detection events constitute a large fraction of the runs (see e.g. table I in [10]). No-detection events come from two processes: the finite efficiency of the detectors, and the fact that parametric down-conversion often produces the vacuum state. The physics of both suggests that these events contain little or no randomness: it is thus tempting to sort the outcomes of the Bell test into two groups, the detections and the no-detections. As in the previous example, this may lead to certifying more randomness. Even if it does not, one may get a practical advantage by extracting randomness only from the detection events. Indeed, randomness extractors require an independent random seed: the longer the initial string, the longer the needed seed and the computational time to output the result; in fact, it is an active research direction to construct randomness extractors with short seed lengths [3]. Thus, it is beneficial to be able to extract randomness from a short string.

Here, we investigate the amount of randomness that can be certified in Bell tests within the subset of detection events. For this first study, our aim is simply to determine whether this is actually a viable strategy. We thus perform our analysis in the simplified scenario in which the devices and the adversary behave in an i.i.d. way and in the limit of infinitely many measurement rounds. If randomness cannot be certified in this simple scenario, then it certainly cannot be certified in the non-i.i.d., finite-statistics case either.

The post-selection of detection events notoriously opens the detection loophole [11, 12]. It is important to clarify that our approach does not fall into that trap. We shall compute a lower bound on the randomness that can be extracted from a subset of events, but the bound is obtained by taking into account the whole set of events. In particular, if the behavior of the devices is compatible with local realism due to the detection loophole, our method will say that no randomness can be certified in the post-selected set of detection events.

Let us remark that a similar analysis in the context of violations of local realism, namely computing the p-value of the post-selected events that do not contain no-detections, has recently been carried out [13].

After introducing the technique that we will use to bound randomness in section 2, we apply it to several physically-motivated examples in section 3. In section 4 we analyse more precisely the effect of post-selection in a simplified case. A glimpse beyond the i.i.d. restriction is given in section 5 before the conclusion.

2. Average randomness in post-selected events

Consider a Bell experiment consisting of two separate devices in which each party inputs $x\in { \mathcal X }$ and $y\in { \mathcal Y }$ and obtains outputs $a\in { \mathcal A }$ and $b\in { \mathcal B }$, respectively. The behavior of such devices over n successive runs can be characterized by the—generally unknown—joint probabilities $p({\bf{ab}}| {\bf{xy}})$ to obtain the output string ${\bf{ab}}=({a}_{1}{b}_{1},\ldots ,{a}_{n}{b}_{n})$ given the input string ${\bf{xy}}=({x}_{1}{y}_{1},\ldots ,{x}_{n}{y}_{n})$. The information that an adversary has over the output string can be characterized by a tripartite quantum distribution $p({\bf{abe}}| {\bf{xyz}})$, where ${\bf{e}}$ denotes the output the adversary obtains when he makes a measurement ${\bf{z}}$ on a system possibly entangled with Alice and Bob's devices. In general ${\bf{e}}$ can be a string of arbitrary size representing the total information that the adversary can get about Alice and Bob's outcomes and ${\bf{z}}$ can be an arbitrary measurement that depends on the information available to the adversary in the protocol before his measurement.

Here we shall make the following simplifying assumptions. First, we will assume that the devices behave in an i.i.d. way and, similarly, that the adversary extracts his information in an i.i.d. way by performing at each run individual measurements zi. We can thus write $p({\bf{abe}}| {\bf{xyz}})={\prod }_{i=1}^{n}p({a}_{i}{b}_{i}{e}_{i}| {x}_{i}{y}_{i}{z}_{i})$. Second, we are going to assume that Alice and Bob's marginal $p({ab}| {xy})$ at each run is known and given. In this way, we do not need to take care of estimation. With these assumptions, finding the adversary's optimal attack thus amounts to optimizing some quantity over all tripartite quantum distributions $p({abe}| {xyz})=\langle {\rm{\Psi }}| {M}_{a| x}\otimes {M}_{b| y}\otimes {M}_{e| z}| {\rm{\Psi }}\rangle $ compatible with a given bipartite marginal $p({ab}| {xy})={\sum }_{e}p({abe}| {xyz})=\langle {\rm{\Psi }}| {M}_{a| x}\otimes {M}_{b| y}\otimes I| {\rm{\Psi }}\rangle $.

Let us now introduce the additional ingredient of post-selection. For this, we consider a bipartition of the joint output alphabet ${ \mathcal O }={ \mathcal A }\times { \mathcal B }$ into two sets ${ \mathcal V }$ (valid symbols) and ${ \mathcal N }$. If the outputs at a given round satisfy $(a,b)\in { \mathcal V }$, we say that the round is valid; otherwise, if $(a,b)\in { \mathcal N }$, we say that it is invalid. We refer to the events obtained in valid runs only as the post-selected events. Our goal is to estimate how much randomness can be extracted from these post-selected events.

A priori, an adversary trying to guess the post-selected events might not have access to the information about which run turned out to be valid or invalid, since he should not have access to the outputs observed by the parties. For simplicity, however, we will assume here that the adversary has access to this information. This allows him to know exactly which run he should try to guess and is thus advantageous for him. The amount of randomness that can be certified in this case thus constitutes a lower bound on the amount that can be certified when the adversary is not given this information. This assumption might however be problematic in a non-i.i.d. situation (see section 5).

We are going to assume in the following that Alice and Bob use a certain pair of inputs ($\bar{x}$, $\bar{y}$) for randomness generation (footnote 7). Since there is a promise on the marginal $p({ab}| {xy})$ and since we do not need to consider how to estimate this quantity, we are going to assume for simplicity that Alice and Bob always measure their systems using the inputs ($\bar{x}$, $\bar{y}$). Suppose that by measuring n systems, they obtain m results in ${ \mathcal V }$ and n − m results in ${ \mathcal N }$. The number m of valid results is a random variable with probability distribution $p(m)=\binom{n}{m}{p}_{\bar{x}\bar{y}}^{m}{(1-{p}_{\bar{x}\bar{y}})}^{n-m}$, where ${p}_{\bar{x}\bar{y}}={\sum }_{{ab}\in { \mathcal V }}p({ab}| \bar{x}\bar{y})$ is the single-run probability to obtain a pair of valid results when using inputs $(\bar{x},\bar{y})$.

By the i.i.d. assumption, the min-entropy of the m-element post-selected string is $m\;{H}_{\bar{x}\bar{y}}$, where ${H}_{\bar{x}\bar{y}}$ is the single-run min-entropy defined below. Applying a randomness extractor to this string then yields $m\;{H}_{\bar{x}\bar{y}}$ bits of randomness (such extractors exist up to epsilon correction, see [14, 15]). The average length of the final random string is then ${\sum }_{m=0}^{n}p(m){{mH}}_{\bar{x}\bar{y}}=n\;{p}_{\bar{x}\bar{y}}\;{H}_{\bar{x}\bar{y}}$. We can also interpret this last quantity as an 'average' min-entropy (footnote 8). The rate of randomness extraction per use of the device can then be defined as ${p}_{\bar{x}\bar{y}}\;{H}_{\bar{x}\bar{y}}$.
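The following snippet illustrates this accounting with hypothetical numbers (n, ${p}_{\bar{x}\bar{y}}$ and ${H}_{\bar{x}\bar{y}}$ below are placeholders, not values from the experiments discussed later): averaging $m\,{H}_{\bar{x}\bar{y}}$ over the binomial distribution of m indeed gives $n\,{p}_{\bar{x}\bar{y}}\,{H}_{\bar{x}\bar{y}}$.

```python
from scipy.stats import binom

n = 10_000      # hypothetical number of runs
p_xy = 0.01     # hypothetical single-run probability of a valid pair of outcomes
H_xy = 0.8      # hypothetical single-run min-entropy (bits)

avg_bits = sum(binom.pmf(m, n, p_xy) * m * H_xy for m in range(n + 1))
print(avg_bits, n * p_xy * H_xy)   # both equal 80.0 bits
```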

To complete the analysis, it remains to determine ${H}_{\bar{x}\bar{y}}$. By definition, the min-entropy is related to the guessing probability ${G}_{\bar{x}\bar{y}}$ as ${H}_{\bar{x}\bar{y}}=-\;{\mathrm{log}}_{2}\;{G}_{\bar{x}\bar{y}}$, where the guessing probability is the maximal probability that the adversary correctly guesses Alice and Bob's outputs by performing an optimal measurement on his quantum side information [16]. Here since we condition on valid runs, this quantum side information can be represented by the cq-state ${\rho }_{{ABE}}=\frac{1}{{p}_{\bar{x}\bar{y}}}{\sum }_{{ab}\in { \mathcal V }}| {ab}\rangle \langle {ab}| \otimes {\rho }_{E}^{{ab}}$, where ${\rho }_{E}^{{ab}}=\mathrm{tr}({M}_{a| \bar{x}}\otimes {M}_{b| \bar{y}}\otimes I| {\rm{\Psi }}\rangle \langle {\rm{\Psi }}| )$. The probability that the adversary then makes a correct guess $e=(a,b)$ of Alice and Bob's outputs $a,b$ by performing a measurement z on his system is, averaged over Alice and Bob's possible outputs, $\frac{1}{{p}_{\bar{x}\bar{y}}}{\sum }_{{ab}\in { \mathcal V }}\mathrm{tr}({M}_{{ab}| z}{\rho }_{E}^{{ab}})=\frac{1}{{p}_{\bar{x}\bar{y}}}{\sum }_{{ab}\in { \mathcal V }}\langle {\rm{\Psi }}| {M}_{a| \bar{x}}\otimes {M}_{b| \bar{y}}\otimes {M}_{{ab}| z}| {\rm{\Psi }}\rangle $. To determine the maximal value of this guessing probability, we should maximize it over all quantum realizations $R=(| {\rm{\Psi }}\rangle ,\{{M}_{a| x}\},\{{M}_{b| y}\},\{{M}_{e| z}\})$ compatible with the given marginals $p({ab}| {xy})$ characterizing Alice and Bob's devices. We thus have

Equation (1)

$$G_{\bar{x}\bar{y}}=\max_{R}\ \frac{1}{p_{\bar{x}\bar{y}}}\sum_{ab\in{\mathcal V}}\langle\Psi|M_{a|\bar{x}}\otimes M_{b|\bar{y}}\otimes M_{ab|z}|\Psi\rangle\quad\text{subject to}\quad \langle\Psi|M_{a|x}\otimes M_{b|y}\otimes I|\Psi\rangle=p(ab|xy)\ \ \forall\, a,b,x,y.$$

Following [9] and introducing the bipartite subnormalized quantum correlations ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ $\;=\langle {\rm{\Psi }}| {M}_{a| x}\otimes {M}_{b| y}\otimes {M}_{{a}^{\prime }{b}^{\prime }| \bar{z}}| {\rm{\Psi }}\rangle $ where $\bar{z}$ denotes the adversary's optimal measurement which maximizes (1), the above optimization program can be rewritten as

Equation (2)

$$G_{\bar{x}\bar{y}}=\max\ \frac{1}{p_{\bar{x}\bar{y}}}\sum_{a'b'\in{\mathcal V}}\tilde{p}_{a'b'}(a'b'|\bar{x}\bar{y})\quad\text{subject to}\quad \sum_{a'b'\in{\mathcal V}}\tilde{p}_{a'b'}(ab|xy)=p(ab|xy)\ \ \forall\, a,b,x,y,\qquad \tilde{p}_{a'b'}(ab|xy)\in\tilde{Q}\ \ \forall\, a'b'\in{\mathcal V}.$$

where $\tilde{Q}$ denotes the set of unnormalized bipartite quantum correlations. The meaning of this program is intuitive: Eve prepares one of $| { \mathcal V }| $ systems for Alice and Bob, one for each outcome pair $({a}^{\prime }{b}^{\prime })$. Each system is characterized by joint probabilities ${p}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})={\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})/{q}_{{a}^{\prime }{b}^{\prime }}$ and is prepared with probability ${q}_{{a}^{\prime }{b}^{\prime }}={\sum }_{{ab}}{\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$. When Eve prepares system ab, she guesses that Alice's and Bob's outputs are ab, hence the probability that she guesses correctly on average is given by the objective function in (2). Eve's preparations should of course on average reproduce the given correlations $p({ab}| {xy})$, hence the first constraint of (2). The second constraint simply expresses that Eve's preparations should be compatible with quantum theory.

Notice that the constraints in the second line of (1) and in the second one of (2) involve all outputs $a,b$ and not only those belonging to the post-selected set ${ \mathcal V }$. This reflects the fact that our analysis is not subject to the detection loophole.

To summarize, for a given set of bipartite correlations $p({ab}| {xy})$ characterising the behavior of the devices, the figure of merit that we are going to consider in this paper, which we call the randomness rate, is ${p}_{\bar{x}\bar{y}}{H}_{\bar{x}\bar{y}}={p}_{\bar{x}\bar{y}}\times (-{\mathrm{log}}_{2}\;{G}_{\bar{x}\bar{y}})$, where ${G}_{\bar{x}\bar{y}}$ is the output of the optimization problem (2).

In general, it is not possible to carry out this optimization explicitly, as there is no closed form for the set of quantum correlations $\tilde{Q}$. However, we can upper-bound the optimal value of (2), and thus lower-bound the randomness rate, through semidefinite programming by relaxing the last condition ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})\in \tilde{Q}$ and asking instead that ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ belongs to some level of the NPA hierarchy [17–19] rather than to the exact quantum set. All optimizations reported here were performed at local level 1 of the SDP hierarchy [20].
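For concreteness, here is a minimal sketch of the structure of program (2) in Python. It is not the computation behind the figures: instead of the NPA relaxation, the condition ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}\in \tilde{Q}$ is replaced by the weaker requirement that each preparation be a nonnegative, subnormalized no-signaling behavior, which turns the problem into a linear program and still gives a valid, though generally looser, upper bound on ${G}_{\bar{x}\bar{y}}$. The target correlations are those of the simplified model used later in section 4 (a singlet measured with standard CHSH settings with probability ν, vacuum otherwise); all function and variable names are our own.

```python
import itertools
import numpy as np
import cvxpy as cp

OUT = ['0', '1', 'n']                    # 'n' denotes the no-detection outcome
IN = [0, 1]
VALID = [(a, b) for a in OUT for b in OUT if (a, b) != ('n', 'n')]
xbar, ybar = 0, 0                        # inputs used for randomness generation

def singlet_chsh(a, b, x, y):
    """p(ab|xy) for a singlet measured with standard CHSH settings (S = 2*sqrt(2))."""
    if a == 'n' or b == 'n':
        return 0.0
    E = (1.0 if (x, y) != (1, 1) else -1.0) / np.sqrt(2)
    return 0.25 * (1 + (-1) ** (int(a) + int(b)) * E)

def model_p(nu):
    """Block-structured correlations: nu * singlet + (1 - nu) * double no-detection."""
    return {(a, b, x, y): nu * singlet_chsh(a, b, x, y)
            + (1 - nu) * (1.0 if (a, b) == ('n', 'n') else 0.0)
            for a, b, x, y in itertools.product(OUT, OUT, IN, IN)}

def guessing_probability_ns(p):
    # One subnormalized behavior per possible guess (a', b') in the valid set.
    pt = {g: {k: cp.Variable(nonneg=True) for k in p} for g in VALID}
    cons = []
    # Eve's preparations must reproduce the observed correlations (2nd line of (2)).
    for k in p:
        cons.append(sum(pt[g][k] for g in VALID) == p[k])
    # Relaxed membership constraint: each preparation is no-signaling.
    for g in VALID:
        for a, x in itertools.product(OUT, IN):
            cons.append(sum(pt[g][(a, b, x, 0)] for b in OUT)
                        == sum(pt[g][(a, b, x, 1)] for b in OUT))
        for b, y in itertools.product(OUT, IN):
            cons.append(sum(pt[g][(a, b, 0, y)] for a in OUT)
                        == sum(pt[g][(a, b, 1, y)] for a in OUT))
    p_valid = sum(p[(a, b, xbar, ybar)] for (a, b) in VALID)
    objective = sum(pt[g][(g[0], g[1], xbar, ybar)] for g in VALID) / p_valid
    cp.Problem(cp.Maximize(objective), cons).solve()
    return objective.value, p_valid

G, p_valid = guessing_probability_ns(model_p(nu=0.5))
print("G <=", G, "; randomness rate >=", p_valid * -np.log2(G), "bits per run")
```

Replacing the no-signaling constraints by an NPA-level constraint, as done in the paper, tightens the bound but requires semidefinite rather than linear programming.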

3. Approximating photonic experiments

The natural benchmark for testing our tools is the set of correlations expected in a Bell experiment using spontaneous parametric down-conversion (SPDC). In the single-mode case, such a pulsed SPDC source produces a state of the form

Equation (3)

where ${a}_{H/V}$ $({b}_{H/V})$ are polarization modes for Alice (Bob), $| 0\rangle $ is the vacuum state, and $c(g,\bar{g})=\sqrt{1-{\mathrm{tanh}}^{2}\;g}\sqrt{1-{\mathrm{tanh}}^{2}\bar{g}}$ for $g,\bar{g}$ being the two squeezing parameters. The parties Alice and Bob can measure this state by placing two detectors after the usual set of wave plates and a polarization beam splitter. If the detectors do not resolve the number of incident photons, four cases can then be observed: no detection, a click in the first detector, a click in the second detector, or two clicks. In the following, we label a click in the first detector as 0, a click in the second detector as 1, and the case where either no detection or double detections are observed as ∅, so that each party effectively produces one of three possible outcomes. The statistics observed in this situation as a function of the polarization measurements and the detection efficiency (or equivalently the losses between the source and the detectors) are described in [21].

Using the program (2), we are going to compute lower bounds on the extractable randomness that can be found in presence of these statistics in the following cases:

  • (a) All outcomes are considered (no post-selection), i.e. ${ \mathcal N }={{ \mathcal N }}_{{\rm{a}}}=\{\}$ (the empty set).
  • (b) The post-selected string of outcomes does not contain double occurrences of ∅, i.e. ${ \mathcal N }={{ \mathcal N }}_{{\rm{b}}}=\{\varnothing \varnothing \}$.
  • (c) The post-selected string of outcomes does not contain any occurrence of a no-detection event ∅, i.e. ${ \mathcal N }={{ \mathcal N }}_{{\rm{c}}}=\{0\varnothing $, $1\varnothing $, $\varnothing \varnothing $, $\varnothing 0$, $\varnothing 1\}$.

For the sake of comparison, we will sometimes also consider the case in which the measurements are performed only when at least one photon pair is produced by the source, i.e.

  • (h) The source is heralded.

An example of a heralded experiment is the recent one by Hensen et al [22]. Note that in this particular case the state is encoded in a non-photonic system and always yields a detection whenever measured.

3.1. Perfect detectors, variable squeezing

We first consider the case of an experiment with no loss, and with unit efficiency detectors. In this case it seems natural to try to generate a maximally entangled state. We thus set $g=\bar{g}$ and vary the squeezing g. Varying g can also be understood as changing the time window τ during which detectors are monitored. Indeed, the average number of photon pairs produced within this window is given by $\nu ={\mathrm{sinh}}^{2}\;g+{\mathrm{sinh}}^{2}\bar{g}=2{\mathrm{sinh}}^{2}\;g$.
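For reference, the conversion between the squeezing parameter and ν implied by this relation (assuming $g=\bar{g}$) is, in a couple of lines:

```python
import numpy as np

def nu_from_g(g):
    return 2 * np.sinh(g) ** 2          # mean number of photon pairs per window

def g_from_nu(nu):
    return np.arcsinh(np.sqrt(nu / 2))  # inverse relation

print(g_from_nu(0.01), nu_from_g(g_from_nu(0.01)))  # squeezing that gives nu = 0.01
```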

Figure 1 shows the randomness per run obtained when setting the polarization measurement according to the standard CHSH settings. The various discarding strategies yield different amounts of certified randomness, the largest amount being obtained using strategy (b).

Figure 1. Randomness from an SPDC source when setting the polarization measurement according to the standard CHSH settings, as a function of the average number of photon pairs produced in each detection window. No losses and unit efficiency detectors are assumed. The qualitative shape of the curves can be understood as follows: for small g, the generated state contains mostly the vacuum; for large g, the source generates several pairs, which worsens the statistics [23]. Strategies (a), (b) and (c) certify various amounts of randomness. Here and in the following figures, all the curves are normalized to the same number of runs, namely the total number of runs. Inset: randomness certified in a given time period when the length of a time window varies (and the number of time windows varies accordingly). This curve is obtained at constant pumping g.


One may be tempted to infer that, for randomness extraction, SPDC sources should be operated with a detection window such that $\nu \sim 0.6$. However, this is the amount of randomness per run, not per unit of time. For a given pump power, decreasing the window size τ decreases the average number of photon pairs in a proportional manner: $\nu \propto \tau $. At the same time, the number of time windows increases as $\sim 1/\tau \propto 1/\nu $. If $f(\nu )$ denotes the randomness rate per time window, the randomness that can be certified in a given time interval is thus given, up to a constant factor, by $f(\nu )/\nu $. This quantity is plotted in the inset of figure 1, where one can see that the total amount of randomness certified is larger when ν is small, i.e. when the time window τ is small. Therefore, in the asymptotic limit of infinitely many runs, one should set $\tau \to 0$ to get more randomness per unit of time against an i.i.d. adversary. In this case, the observed data set is dominated by double no-detection events, which reinforces the relevance of our post-selection approach. The regime of small ν is also the regime in which optical experiments closing the detection loophole have been performed [10, 24, 25], for a different reason: in the presence of losses and imperfect detectors, the Bell violation disappears if too many pairs are created, while it is preserved in the limit of small windows [26].

3.2. Imperfect detectors, small squeezing

For the reasons just mentioned, we now focus on $g,\bar{g}\ll 1$ (i.e. small ν). In this case, a large number of no-detection events is expected. In spite of this, we are going to see that strategy (b) continues to perform better than the others. Concretely, we choose to fix the average number of photon pairs per detection window at $\nu =0.01$. The state produced by the source can be approximated to first order in g and $\bar{g}$ by

Equation (4)

In analogy with the partially entangled state $\mathrm{cos}\theta | 01\rangle -\mathrm{sin}\theta | 10\rangle $, we define the entanglement parameter of the state as $\theta =\mathrm{arctan}(\mathrm{tanh}\bar{g}/\mathrm{tanh}g)$.

We now introduce finite detection efficiency η and study how the certification of randomness varies with this parameter. We consider two families of correlations. In the first, the two-photon state is maximally entangled, i.e. with $\theta =\pi /4$, and we fix the standard CHSH polarization measurements. The expected randomness per run as a function of η is shown in figure 2. We note that no randomness can be extracted if $\eta \leqslant 82.8\%$, which is known to be the boundary at which those correlations can be explained by a local model exploiting the detection loophole. The second case is that of Eberhard's famous study [11], in which the entanglement parameter θ depends on the detector efficiency η, and Alice's measurements are parametrized by two angles ${\alpha }_{0},{\alpha }_{1}$ which also depend on η. These parameters are chosen to optimize the violation of a lifting [27] of the CHSH inequality, in the case where exactly one pair of photons is measured, for each value of η. The resulting randomness rate is plotted in figure 3. Again, no randomness can be extracted below the known detection-loophole threshold $\eta \leqslant 66.6\%$.
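The two efficiency thresholds quoted above are the known detection-loophole limits; for convenience, the snippet below simply recalls where the numerical values come from ($2/(1+\sqrt{2})$ for the maximally entangled state with CHSH settings [12], and 2/3 for Eberhard's partially entangled states [11]):

```python
import numpy as np

eta_chsh = 2 / (1 + np.sqrt(2))  # 0.8284..., the "82.8%" threshold of figure 2
eta_eberhard = 2 / 3             # 0.6666..., the "66.6%" threshold of figure 3
print(eta_chsh, eta_eberhard)
```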

Figure 2. Randomness from a singlet with finite detection efficiency. Curves (b) and (h) coincide almost perfectly and approach 0 at the detection loophole limit 0.828 [12].

Figure 3. Randomness from Eberhard correlations. Curves (b) and (h) coincide and approach 0 at the Eberhard limit of 2/3 [11]. Two recent experiments used these Eberhard correlations. In [25], the overall efficiencies are estimated at 78.6% for Alice and 76.2% for Bob; in [24], at 74.7% for Alice and 75.6% for Bob. Thus, strategies (a) and (b) would extract a very similar (small) amount of randomness. If efficiencies are increased in the future, strategy (b) should be preferred.


In both cases we notice again that, within a numerical precision of ∼10−5, strategy (b) certifies the largest amount of randomness and in fact recovers the result that one would obtain with a heralded source (h). The expected proportion of discarded events is $\sim (1-\nu )+\nu {(1-\eta )}^{2}$, which can be substantial: it is larger than 99% in our case for all η. Strategy (c), i.e. removing all events where some no-detection occurred, results in clearly lower randomness per run; and for efficiencies lower than 86% and 85%, respectively, no randomness at all is certified. This kind of post-selection is thus too strong if one is interested in certifying an optimal amount of randomness. Strategy (a) certifies essentially the maximum amount of randomness for efficiencies $\eta \lesssim 90\%$, but would become suboptimal as the efficiency increases.

4. Understanding why one certifies more randomness from a subset of data

Let us stress again that in figures 1–3, all the curves are normalized to the same number of runs, the total one. Thus, they show that if a suitable small fraction of the symbols is processed, a strictly larger amount of total randomness can be certified, as compared to the case where all the symbols are processed. In order to shed light on this behavior, we consider a simplified model in which the source emits a perfect maximally-entangled state with probability $\nu $, and the vacuum otherwise (in other words, compared to the previous section, we neglect completely the possibility of double detections in each party's measurement setup). We also work at perfect detection efficiency $\eta =1$. The statistics observed with such a source can be written as

Equation (5)

Notice that, for the source efficiency $\nu =\frac{1}{2}$, these correlations can be seen as the scrambled version of the two-day extreme situation mentioned in the introduction.
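This 'scrambled two-day' reading can be checked directly: mapping no-detections to +1, the correlations above yield an average CHSH value that is the ν-weighted mixture of the maximal violation $2\sqrt{2}$ and the local value 2, which for $\nu =1/2$ reproduces the ≃2.41 of the introduction.

```python
import numpy as np

def chsh_value(nu):
    # double no-detections mapped to +1 contribute the local value S = 2
    return nu * 2 * np.sqrt(2) + (1 - nu) * 2

print(chsh_value(0.5))  # ~2.41, as in the two-day example of the introduction
```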

In figure 4, we show how much randomness can be certified for these statistics when ν varies. In this case, the lower bound on the randomness computed from the raw data is this time consistently lower than the one obtained after removing double no-detections from the data. In fact, after discarding double no-detections, the same amount of randomness that could be certified if the source were heralded is recovered (i.e. it is proportional to the source efficiency ν).

Figure 4. Randomness from a singlet produced with finite probability ν, with $\eta =1$. Curves (b) and (c) are identical, since there are no events with one detection and one no-detection in the raw data (the post-selection procedures (b) and (c) are actually the same for these correlations). Curve (h), which gives the randomness from the raw string of outcomes upon the heralding of a successful preparation of the state (i.e. the randomness from the correlations (5)), exactly coincides with curves (b) and (c). Curve (a) lies below the other ones.


We thus recover the same behavior as discussed in section 3.2 and in the two-day example of the introduction. If we do not consider it an overwhelmingly improbable fluctuation, the two-day example clearly suggests a non-i.i.d. process, for which the possibility of identifying two separate processes is easy to understand. Here, on the contrary, the statistics are manifestly i.i.d.—and nevertheless, the extraction of randomness based on the single-run frequencies $p({ab}| {xy})$ can be improved. We are going to show that the cause is the same: because of the structure of the correlations, one can actually identify the presence of two distinct processes, and the post-selection of detection events happens to capture this fact. That the alternation between the two processes is done in an i.i.d. way, instead of a disruptive way as in the two-day example, eventually does not matter.

Note first that by definition $p({ab}| {xy})$ has the block structure $p({ab}| {xy})=\nu \;q({ab}| {xy})+(1-\nu )r({ab}| {xy})$, where

Equation (6)

and

Equation (7)

It follows that in the decomposition $p({ab}| {xy})={\sum }_{{a}^{\prime }{b}^{\prime }\in { \mathcal V }}{\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ in the second line of the program (2), every ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ must also have this block structure, since if $p({ab}| {xy})$ is equal to zero for some $a,b,x,y$ then ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ must also necessarily be equal to zero. We can thus write ${\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})={\nu }_{{a}^{\prime }{b}^{\prime }}\;{q}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})+(1-{\nu }_{{a}^{\prime }{b}^{\prime }})r({ab}| {xy})$, where ${q}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ is normalized and has the same general form as $q({ab}| {xy})$ above. The condition $p({ab}| {xy})={\sum }_{{a}^{\prime }{b}^{\prime }\in { \mathcal V }}{\tilde{p}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ is then equivalent to ${\sum }_{{a}^{\prime }{b}^{\prime }\in { \mathcal V }}{\nu }_{{a}^{\prime }{b}^{\prime }}=\nu $ and ${\sum }_{{a}^{\prime }{b}^{\prime }\in { \mathcal V }}{q}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})=\nu q({ab}| {xy})$.

Furthermore, when we post-select events according to (b) or (c), the effective set of valid symbols is in both cases ${ \mathcal V }=\{00,01,10,11\}$ since outcome pairs $0\varnothing $, $1\varnothing $, $\varnothing 0$, $\varnothing 1$ have zero probability. The objective value in (2) therefore only involves the ${q}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$ part and is equal to $1/\nu \;\mathrm{max}{\sum }_{{ab}\in { \mathcal V }}{\nu }_{{ab}}\;{q}_{{ab}}({ab}| {xy})$, where we used that ${p}_{\bar{x}\bar{y}}={\sum }_{{ab}\in { \mathcal V }}{\nu }_{{ab}}=\nu $.

All together, we can thus rewrite the optimization (2) as

Equation (8)

where ${ \mathcal Q }$ denotes the set of normalized quantum correlations. Defining ${\tilde{q}}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})={\nu }_{{a}^{\prime }{b}^{\prime }}/\nu \times {q}_{{a}^{\prime }{b}^{\prime }}({ab}| {xy})$, we can further rewrite it as

Equation (9)

This optimization is nothing but the one associated to a heralded source characterized by the correlations $q({ab}| {xy})$ and explains why curve (h) of figure 4 coincides with curves (b) and (c).

5. Going beyond i.i.d. for the source

In this section, we are going to relax the i.i.d. assumption for the source. We will not be able to derive bounds for the extraction of randomness from the most general non-i.i.d. source, but we are going to provide two examples of non-i.i.d. strategies that are strictly more powerful than i.i.d. strategies even in the asymptotic limit of infinitely many runs. To our knowledge, this is a feature not found in previous works on randomness from Bell tests [5, 28, 29] or on quantum key distribution [30]. In the strategies we found, the adversary exploits the knowledge of whether each outcome is kept or discarded. As mentioned in section 2, it would definitely be reasonable not to reveal anything, but such a scenario may introduce other security concerns (e.g. the raw key is private conditional on some other information being kept private).

Specifically, suppose that the outcomes of run k are valid, i.e. they are kept for the raw key; the adversary would like to know their value. In a non-i.i.d. case, the fact of keeping or discarding the outcome at run $k+1$, information which we assume the adversary will learn, may leak some information about the outcome that is kept at run k. This is similar to the argument of [31] against reusing QKD devices in the device-independent level of characterization [32]. Notice that this behavior does not require the adversary to have tampered with the device in a malicious way; it may simply be a fabrication defect that the adversary is aware of. For instance, suppose that the detector corresponding to outcome 0 has an inordinately long jitter time compared to the other detector: if a detection happens at run $k+1$, it means that the outcome at run k was 1; if no detection, the outcome at run k was most probably 0.

5.1. First example

The simplest example we found requires both Alice's and Bob's devices to depend on the previous inputs and outputs of both sides. Note that this is not in contradiction with the basic assumption in all device-independent protocols that the two boxes are non-communicating, since this assumption must only be verified during the measurement runs. Between measurement runs, however, the boxes could in principle be free to communicate. For instance, before the measurement runs, the boxes may open a door within a small time interval to let in the incoming quantum systems generated by the source. Malicious boxes could take advantage of this interval to exchange the inputs and outputs obtained in previous runs. In the next subsection, we will present a more convoluted example that does not require signalling between the boxes, and which thus also works if measures are taken to ensure that the boxes do not exchange this kind of information between measurement runs.

Consider the i.i.d. correlations obtained when the parties measure a singlet with probability ν, and nothing with probability $1-\nu $. We have encountered this situation in section 4: for any $\nu \gt 0$, some randomness remains in the non-discarded outcomes (see figure 4).

In all existing protocols, the amount of randomness that is extracted is determined from a statistical test which is based on the input and output pair counts $\#(x,y)$ and $\#(a,b)$ (or simply the relative outcome frequencies $\#(a,b)/\#(x,y)$). However, the same statistics obtained for $\nu =2/5$ can be obtained with high probability when measurements are always performed on a perfect singlet, but runs with double no-detections are artificially added by using the following non-i.i.d. rule:

Equation (10)

where M means that a usual measurement is performed on the perfect singlet to determine the outcome of that run. In this case, counting the number of successive discarded events fully informs about the value of both parties' outcomes. Thus, in the non-i.i.d. case, and allowing signalling from one box to the other between measurement runs, no private randomness can be certified from a non-heralded source characterized by $\nu \leqslant 2/5$ (unless some more complicated processing beyond looking at simple outcome counts is done).
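The mechanism can be illustrated with a small simulation of a rule of the type of equation (10). The exact mapping in (10) is not reproduced here, so the one below (appending 0, 1, 2 or 3 artificial double no-detection rounds depending on the outcome pair of the genuine measurement M) is our own illustrative assumption; it adds 1.5 rounds per measurement on average, so the fraction of valid rounds converges to $1/2.5=2/5$, and counting the discarded rounds that follow a kept round reveals that round's outcome pair.

```python
import numpy as np

rng = np.random.default_rng(0)
PAIRS = [('0', '0'), ('0', '1'), ('1', '0'), ('1', '1')]
APPEND = {('0', '0'): 0, ('0', '1'): 1, ('1', '0'): 2, ('1', '1'): 3}  # illustrative choice

def singlet_outcome(x, y):
    """Sample (a, b) from a singlet measured with standard CHSH settings."""
    E = (1.0 if (x, y) != (1, 1) else -1.0) / np.sqrt(2)
    probs = [0.25 * (1 + (-1) ** (int(a) + int(b)) * E) for a, b in PAIRS]
    return PAIRS[rng.choice(4, p=probs)]

rounds = []
for _ in range(100_000):
    x, y = rng.integers(2), rng.integers(2)
    ab = singlet_outcome(x, y)
    rounds.append(ab)
    rounds += [('n', 'n')] * APPEND[ab]   # artificially added double no-detections

valid_fraction = sum(ab != ('n', 'n') for ab in rounds) / len(rounds)
print(valid_fraction)   # close to 2/5 = 0.4
```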

5.2. Second example

The second example was found numerically. It is admittedly hard to find a narrative justification for it, besides the general intuition given above; but we describe it in detail since, to our knowledge, it is the first example in which a non-i.i.d. strategy actually outperforms the i.i.d. ones in a Bell scenario in the asymptotic limit.

Resources. In each run, Alice and Bob share two binary variables $\lambda ,\mu \in \{0,1\}$ and one out of five quantum correlations that we denote by Pj with $j\in \{1,2,3\}$ and ${P}_{\lambda }^{\prime }$. These correlations are such that Alice's box has three outcomes $\{0,1,\varnothing \}$, while Bob's box has only the two outcomes $\{0,1\}$: in other words, information about previous outcomes will be leaked out by Alice's box detection or no-detection events. We can write these correlations in the form of Collins–Gisin tables [33], which only need to list the marginals and a subset of the joint probabilities, because by no-signaling it holds that $P(a1| {xy})={P}_{A}(a| x)-P(a0| {xy})$ and $P(\varnothing b| {xy})={P}_{B}(b| y)-P(0b| {xy})-P(1b| {xy})$, and of course ${\sum }_{a}P(a| x)={\sum }_{b}P(b| y)=1$. The example that we find uses:

Equation (11)

Equation (12)

Equation (13)

Equation (14)

Equation (15)

Protocol. One starts with one of the three Pj's. As long as j = 1 or j = 2, the next round will also use one of the three Pj's. When P3 is chosen, the next box will be ${P}_{\lambda }^{\prime }$, with the value of λ available in that run. Moreover, if Alice's outcome from P3 was $a=\mu $, in the next run Alice uses the box ${P}_{\lambda }^{\prime }$; if the outcome was $a=1-\mu $, in the next run Alice ignores ${P}_{\lambda }^{\prime }$ and outputs ∅. After this, the process starts again by selecting one of the three Pj's.

Now, when x = 0, for each potential correlation except P3 one of the outcomes 0 or 1 cannot occur; and when P3 is used, its outcome is fully leaked in the next run by the information of whether the subsequent outcome is kept or not, since ${P}_{\lambda }^{\prime }(\varnothing | x)=0$.

One can check, however, that it would not be possible to fully guess Alice's outcome if the same relative outcome frequencies as the ones generated by the above process were produced by devices behaving in an i.i.d. manner. For instance, let us specify ${q}_{1}=0.4097$, ${q}_{2}=0.4992$, ${q}_{3}=0.0911$ as the frequencies at which the Pj's are chosen; and $p(\lambda =0)=1-p(\lambda =1)=0.0013$, $p(\mu =0)=p(\mu =1)=1/2$. The expected relative frequencies in the asymptotic limit are then peaked around the following values

Equation (16)

where ${{P}_{\lambda }^{\prime }}^{B}$ denotes the correlations obtained when Bob uses ${P}_{\lambda }^{\prime }$ and Alice outputs ∅. Applying our i.i.d. programme to these correlations, one can show that in the case where Alice uses x = 0 and the run is not discarded, the guessing probability of her outcome is upper-bounded by 0.9874.

6. Conclusion

This work stems from the general remark that randomness extraction does not need to be performed on all of the raw data and can be done by blocks, or on a subset of data. In the context of randomness certification by Bell inequalities, we have investigated in a simple scenario whether this could provide an advantage when post-selecting detection events, which is relevant for photonics Bell tests. Because we estimate the randomness present in a subset of data conditioned on the knowledge of the whole set of data, this certification does not open the detection loophole.

Naively, one could a priori think that 'full detection' events, where a detection happens on both sides, are the most important for randomness certification and that discarding all other events would influence the randomness rate only negligibly. However, our findings show, for several physically-motivated models of the observed statistics, that this is not the case. In particular, figures 2 and 3 show that the resistance to detection inefficiencies is substantially lower (up to 20% for the scenario of figure 3) when the post-selected data do not contain any occurrence of a no-detection event.

The physical intuition that the double no-detection events contain almost no randomness is, however, vindicated. In some cases, the post-selection actually helps identify a better way of reading the data. From a practical perspective, our work suggests the possibility of hashing a small post-selected subset of the original data, thereby reducing the needed seed length, and ultimately the computational time. However, one should still embed this idea within a full randomness certification protocol, in particular one that can deal with finite statistics and non-i.i.d. devices.

Regarding this last point, the physical intuition that double no-detection events can safely be discarded, as vindicated by our numerical results in an i.i.d. setting, should, however, be contrasted with the example of section 5 in which we prove that non-i.i.d. strategies outperform i.i.d. ones even in the asymptotic limit of infinitely many runs, something that had not been reported previously in the context of Bell inequalities. Whether these strategies are actually harmful in a more general and realistic case remains to be determined.

In particular, we recall that for simplicity we have performed our analysis assuming that the adversary gets to know which runs are kept and which ones are discarded in the post-selection. This scenario is rather artificial for randomness generation, insofar as the two boxes for the Bell experiment do not need to be in separate labs. Relaxing this assumption could increase the randomness rate and the security of the final string. Specifically, the non-i.i.d. attacks of section 5 would not apply anymore in this case.

Acknowledgments

We thank Nicolas Brunner and Nicolas Sangouard for stimulating discussions. This work is funded by the Singapore Ministry of Education (partly through the Academic Research Fund Tier 3 MOE2012-T3-1-009) and by the National Research Foundation of Singapore. GdlT acknowledges support from Spanish FPI grant (FIS2010-14830) and the subsequent hospitality from CQT. SP acknowledges financial support from the European Union under the project QALGO, from the F.R.S.-FNRS under the project DIQIP, and by the Interuniversity Attraction Poles program of the Belgian Science Policy Office under the grant IAP P7-35 photonics@be. SP is a Research Associate of the Fonds de la Recherche Scientifique F.R.S.-FNRS (Belgium).

Footnotes

  • Notice that the difference grows linearly with N, thus it cannot be accounted for by finite-size corrections related to processing two N-symbol sets instead of a single $2N$-symbol one.

  • In the case they would rather use several pairs of inputs for randomness generation, the analysis below could be extended by using the tools presented in [8].

  • Note that usually, the average min-entropy of a variable A (e.g. the output string) given some information M (e.g. the length of the post-selected string) is defined as $-\;{\mathrm{log}}_{2}\;{\sum }_{m}P(m)G(A| M=m)$, where $G(A| M=m)$ is the guessing probability of A given M = m. Here our definition of 'average' min-entropy is ${\sum }_{m}P(m)(-{\mathrm{log}}_{2}\;G(A| M=m))$, that is, we inverted the sum over m and the logarithm. The reason is that in our scenario, the user, and not only the adversary, actually knows the value of m and thus a bound on $G(A| M=m)$ which allows him to apply a different extractor depending on m.
