
On the equivalence of the Clauser–Horne and Eberhard inequality based tests


Published 19 December 2014 © 2014 The Royal Swedish Academy of Sciences
Citation: Andrei Khrennikov et al 2014 Phys. Scr. T163 014019. DOI: 10.1088/0031-8949/2014/T163/014019


Abstract

Recently, the results of the first experimental test for entangled photons closing the detection loophole (also referred to as the fair sampling loophole) were published (Vienna, 2013). From the theoretical viewpoint, the main distinguishing feature of this long-aspired-to experiment was that the Eberhard inequality was used. Almost simultaneously, another experiment closing this loophole was performed (Urbana-Champaign, 2013), based on the Clauser–Horne inequality (for probabilities). The aim of this note is to analyze the mathematical and experimental equivalence of tests based on the Eberhard inequality and various forms of the Clauser–Horne inequality. The structure of the mathematical equivalence is nontrivial. In particular, it is necessary to distinguish between algebraic and statistical equivalence. Although the tests based on these inequalities are algebraically equivalent, they need not be equivalent statistically; i.e., theoretically the level of statistical significance can drop under the transition from one test to another (at least for finite samples). Nevertheless, the data collected in the Vienna test imply a statistically significant violation not only of the Eberhard inequality but also of the Clauser–Horne inequality (in the ratio-rate form): for both, a violation $\gt 60\sigma $.


1. Introduction

The experimental realization of a loophole-free test for Bell inequalities [1] will impact both quantum foundations and quantum technologies. In both cases the present situation, e.g., in quantum cryptography [2] and randomness generation [3] (see also [4] for discussion), is unsatisfying from the scientific viewpoint. To experimentally falsify local realism, a so-called loophole-free Bell experiment will have to be accomplished successfully (see footnote 7). This has not been claimed so far in any of the reported experiments. Up to now, space-like separation of measurements and basis choices has been accomplished in the pioneering experiments of Aspect et al [5, 6] and Weihs et al [7], closing the so-called locality loophole (see footnote 8). These experimental tests were based on the Clauser–Horne–Shimony–Holt (CHSH) inequality [9]. For this inequality to falsify local realism with maximally entangled states, one must either reach a very high total detection efficiency (which includes the optical losses of the setup and the efficiency of the detectors), $\eta =82.8\%$ [10, 11], or proceed under an assumption that circumvents the loss, the so-called fair sampling assumption (see section 2 for discussion). It has been shown that, with unfair sampling at sufficiently low detection efficiencies, very simple local models with hidden variables can violate the CHSH-inequality, e.g., [12, 13–21]. In spite of technological progress and the existence of detectors whose efficiency nears unity (e.g., TES detectors with an efficiency of around 95% [22]), it is very challenging to reach the required total detection efficiency.
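For orientation, the 82.8% figure is the well-known critical total efficiency for a CHSH test with maximally entangled states:

$\eta \geqslant \frac{2}{1+\sqrt{2}}=2\left(\sqrt{2}-1\right)\approx 0.828.$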

In 1974, Clauser and Horne [13] proposed a new inequality that is not based on the fair sampling assumption. This inequality is expressed in terms of probabilities and we shall call it the CH-inequality for probabilities:

$p(\alpha_1,\beta_1)+p(\alpha_1,\beta_2)+p(\alpha_2,\beta_1)-p(\alpha_2,\beta_2)\leqslant p^A(\alpha_1)+p^B(\beta_1),\qquad(1)$

where $p(\alpha_i,\beta_j)$ and $p^A(\alpha_i),p^B(\beta_j)$ are probabilities for coincidence and single counts (see footnote 9), respectively; see section 4 for more detail.

In this paper we shall discuss various forms of the CH-inequality, see section 4 (see also the review of Clauser and Shimony [14] and the Stanford encyclopedia entry of Shimony [23] for details); we therefore label each form explicitly. However, we restrict our considerations to the class of CH-inequalities that are not based on experimentally untestable auxiliary assumptions. Thus we shall not consider the CH-inequality whose derivation is based on the 'no-enhancement assumption' [13]: if an analyzer is removed from one of the paths, the resulting probability of detection is at least as great as with the analyzer in place.

To determine probabilities in (1), one has to know the total number of emitted pairs of photons. As was pointed out by Clauser and Horne [13] (see also the review of Clauser and Shimony [14] for extended discussion), it is practically impossible to determine this number experimentally. To escape this problem, it was proposed [14] to exclude the total number of emitted pairs from consideration by using a version of the CH-inequality expressed as a ratio of detection count rates (see footnote 10):

$\frac{R(\alpha_1,\beta_1)+R(\alpha_1,\beta_2)+R(\alpha_2,\beta_1)-R(\alpha_2,\beta_2)}{R^A(\alpha_1)+R^B(\beta_1)}\leqslant 1,\qquad(2)$

where $R\left( \alpha ,\beta \right)$ and ${{R}^{A}}\left( \alpha \right),{{R}^{B}}\left( \beta \right)$ are coincidence and single rates, respectively; see section 4 for details.

For the CH-inequality for probabilities (1), Clauser and Horne [13] formulated the restrictions on the experimental setup in a straightforward way: first presupposing the 'optimal angles' $\left( \alpha ,\beta \right)$ and then calculating the other experimental parameters, namely the degree of entanglement and the detection efficiency, needed to violate the inequality, see equations (5) and (6) in [13]. It can be shown that this procedure leads to a very high required detection efficiency. To violate the CH-inequality for probabilities, the detection efficiency has to be at least 82.8%, see [12, 24].

In [25] Eberhard proposed a different approach by jointly optimizing all aforementioned parameters. He derived a new Bell inequality which we will abbreviate by 'E-inequality':

$J=n_{oe}(\alpha_1,\beta_2)+n_{ou}(\alpha_1,\beta_2)+n_{eo}(\alpha_2,\beta_1)+n_{uo}(\alpha_2,\beta_1)+n_{oo}(\alpha_2,\beta_2)-n_{oo}(\alpha_1,\beta_1)\geqslant 0,\qquad(3)$

where $n_{xy}(\alpha_i,\beta_j)$ is the number of pairs detected in a given time period for settings $\alpha_i,\beta_j$ with outcomes $x,y=o,e,u$. The outcomes $(o)$ and $(e)$ correspond to detections in the ordinary and extraordinary beams, respectively, and the event that a photon is undetected is denoted by the symbol $(u)$. We point to the main distinguishing features of the E-inequality:

  • (a)  
    derivation without the fair sampling assumption (and without the no-enhancement assumption);
  • (b)  
    taking into account undetected photons;
  • (c)  
    background events are taken into account;
  • (d)  
    the linear form of presentation (non-negativity of a linear combination of coincidence and single rates).

The latter feature (which is typically not emphasized in the literature) is crucial for finding a simple procedure for optimizing the experimental parameters; hence, it makes the E-inequality the most promising experimental test for closing the detection loophole and rejecting local realism without the fair sampling assumption. Eberhardʼs optimization has two main outcomes which play an important role in the experimental design:

  • It is possible to perform an experiment without the fair sampling assumption for detection efficiency less than 82.8%. Nevertheless, the detection efficiency must still be very high, at least 66.6% (in the absence of background).
  • The optimal parameters correspond to non-maximally entangled states.
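As a concrete illustration of the E-test, here is a minimal sketch in Python computing the J of (3) from the relevant count categories; the counts are hypothetical placeholders, not data from any experiment:

```python
# Minimal sketch of the Eberhard test (equation (3)); all counts below are
# hypothetical placeholders, not data from any experiment.
def eberhard_J(n):
    """n[(setting_a, setting_b)][(x, y)] = pair counts, x, y in {'o', 'e', 'u'}."""
    return (n[(1, 2)][('o', 'e')] + n[(1, 2)][('o', 'u')]
            + n[(2, 1)][('e', 'o')] + n[(2, 1)][('u', 'o')]
            + n[(2, 2)][('o', 'o')] - n[(1, 1)][('o', 'o')])

# Hypothetical counts: only the categories entering J are needed.
counts = {
    (1, 1): {('o', 'o'): 1000},
    (1, 2): {('o', 'e'): 100, ('o', 'u'): 150},
    (2, 1): {('e', 'o'): 120, ('u', 'o'): 130},
    (2, 2): {('o', 'o'): 300},
}
J = eberhard_J(counts)
print(J, "local realism requires J >= 0; J < 0 indicates a violation")
```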

In 2013, the possibility of proceeding with overall efficiencies lower than 82.8% (but larger than 66.6%) was explored for the E-inequality, and the first experimental test ('the Vienna test') closing the detection loophole was published [26]; for a more detailed presentation of the statistical data see also [27, 28].

Almost simultaneously another experiment closing the detection loophole was performed [29, 30] based on the probability version of the CH-inequality [13], see (1).

In this note we analyze the mathematical and experimental equivalence of tests based on the E-inequality (3) and the CH-inequality in the ratio-rate form (2) (in fact, its modification for the ratio of detection counts, the inequality (9) in section 3.2). In particular, one has to distinguish between algebraic and statistical equivalence (and, for the latter, the cases of finite and infinite samples). Although these inequalities are (trivially) algebraically equivalent (see section 3.2), the tests based on them need not be equivalent statistically (for finite samples); i.e., theoretically the level of statistical significance can change substantially under the transition from one test to another, see section 3.4. Nevertheless, the data collected in the Vienna test [26] imply a statistically significant violation not only of the E-inequality, but also of the CH-inequality for the ratio of detection counts and, hence, for the ratio of detection rates: for both, a violation $\gt \;60\sigma .$

One of the aims of this note is to determine confidence intervals from the statistics of the data collected in [26]. We remark that if one does not assume that the data are Gaussian, then it is impossible to determine the confidence interval exactly with the aid of the standard deviation alone. However, it is possible to estimate it by using the Chebyshev inequality, see, e.g., [31]. This inequality is applicable under the assumptions of finite dispersion and independence of the experimental runs. Although the Chebyshev inequality gives only rough estimates of probabilities, in our case (for the data collected in the Vienna test) it is sufficiently powerful to yield confidence intervals showing that the hypothesis of a local realistic description of the Vienna data must be rejected.

2. Fair sampling assumption

The fair sampling assumption plays a crucial role in the justification of tests based on the CHSH-inequality.

Extended discussions on the role of this assumption in the resolution of the classical-quantum dilemma can be found in the papers of Pearle [12] and Clauser and Horne [13] and Aspectʼs PhD thesis [6], see also Aspectʼs 'naive experimentalist presentation' of Bellʼs tests [32]. Later the fair sampling assumption was analyzed in detail in the PhD theses of Larsson [33] and Adenier [34].

The fair sampling assumption is not made in a Bell test based on the E-inequality or any CH-inequality. We also remark that the no-enhancement assumption is not present in the list of assumptions for the derivation of the E-inequality, see [25] (assumptions (i)–(iii)).

Finally, we remark that in this paper we do not discuss two other important loopholes: the coincidence-time loophole (and the role of space-time in the Bell argument in general, cf [20, 35–43]) and the freedom-of-choice loophole [44] (and its relation to the impossibility of using the conventional model of classical probability theory, the Kolmogorov model, 1933, see [20, 38, 39, 45–50]).

3. On the equivalence of the E-inequality to the CH-inequalities

3.1. E-inequality

We follow Eberhard [25]: photons are emitted in pairs $\left( a,b \right).$ Under each measurement setting combination $\left( \alpha ,\beta \right)$, the events in which the photon a is detected in the ordinary and extraordinary beams are denoted by the symbols $\left( o \right)$ and $\left( e \right)$, respectively, and the event that it is undetected is denoted by the symbol $\left( u \right).$ The same symbols are used to denote the corresponding events for the photon b. Therefore, for the pairs of photons there are nine types of events: $\left( o,o \right),\left( o,u \right),\left( o,e \right),$ $\left( u,o \right),\left( u,u \right),\left( u,e \right),\left( e,o \right),\left( e,u \right),$ and $\left( e,e \right).$

Under the conditions of locality, realism, and statistical reproducibility, Eberhard derived the inequality (3), see section 1, for this four-detector design (one detector in each output beam on either side).

3.2. Algebraic equivalence

As was mentioned in section 3.1, Eberhard originally derived his inequality for the four-detector experiment, with one detector at each output of the two polarizing beam splitters (PBSs). In [26] it was shown that the 'e-outputs' can be eliminated from the E-inequality (3), i.e., the four-detector experimental design can be reduced to the two-detector design corresponding to detection of only the 'o-outputs'. In this section we demonstrate that the latter version of the E-inequality is algebraically equivalent to the CH-inequalities in their various forms: for probabilities, for the ratio of probabilities, for the ratio of rates, and for the ratio of detection counts.

As was pointed out in [26], the E-inequality can be transformed into the following inequality:

$J=S_o^A(\alpha_1)+S_o^B(\beta_1)-n_{oo}(\alpha_1,\beta_1)-n_{oo}(\alpha_1,\beta_2)-n_{oo}(\alpha_2,\beta_1)+n_{oo}(\alpha_2,\beta_2)\geqslant 0,\qquad(4)$

where $S_{o}^{A}\left( {{\alpha }_{1}} \right)$ and $S_{o}^{B}\left( {{\beta }_{1}} \right)$ are numbers of single counts in the o-channels for 'Alice' and 'Bob', in settings ${{\alpha }_{1}}$ and ${{\beta }_{1}}$ respectively. To match with the CH-inequalities completely, we change the sign and collect singles terms in the right-hand side:

$n_{oo}(\alpha_1,\beta_1)+n_{oo}(\alpha_1,\beta_2)+n_{oo}(\alpha_2,\beta_1)-n_{oo}(\alpha_2,\beta_2)\leqslant S_o^A(\alpha_1)+S_o^B(\beta_1).\qquad(5)$

Then, dividing both sides by the number N of emitted pairs per setting combination (and omitting the index 'o'), we obtain the CH-inequality for probabilities (see footnote 11). (Here we proceed under the assumption of a statistically constant production rate for pairs of photons, cf section 4.)

$p(\alpha_1,\beta_1)+p(\alpha_1,\beta_2)+p(\alpha_2,\beta_1)-p(\alpha_2,\beta_2)\leqslant p^A(\alpha_1)+p^B(\beta_1),\qquad(6)$

where $p(\alpha_i,\beta_j)=n_{oo}(\alpha_i,\beta_j)/N$, $p^A(\alpha_1)=S_o^A(\alpha_1)/N$, and $p^B(\beta_1)=S_o^B(\beta_1)/N$.

However, as pointed out by Clauser and Horne [13], this inequality suffers from the problem that the number N and, hence, the probabilities are not well determined in an experiment. To solve this problem, (6) can be transformed into the inequality:

$T=\frac{p(\alpha_1,\beta_1)+p(\alpha_1,\beta_2)+p(\alpha_2,\beta_1)-p(\alpha_2,\beta_2)}{p^A(\alpha_1)+p^B(\beta_1)}\leqslant 1.\qquad(7)$

(We call this inequality [14] the CH-inequality for the ratio of probabilities, see Remark 1 below.) Finally, T can be represented as a ratio of detection count rates:

$T=\frac{R(\alpha_1,\beta_1)+R(\alpha_1,\beta_2)+R(\alpha_2,\beta_1)-R(\alpha_2,\beta_2)}{R^A(\alpha_1)+R^B(\beta_1)}\leqslant 1,\qquad(8)$

where $R(\alpha_i,\beta_j)$ and $R^A(\alpha_1),R^B(\beta_1)$ are coincidence and single rates, respectively. (Following Clauser and Shimony [14], we call this inequality the CH-inequality for the ratio of detection count rates, or simply the ratio-rates CH-inequality; see again Remark 1.) This inequality is evidently equivalent to the following inequality in Eberhardʼs notation, i.e., with the total numbers of coincidences and single counts instead of the rates:

$T=\frac{n_{oo}(\alpha_1,\beta_1)+n_{oo}(\alpha_1,\beta_2)+n_{oo}(\alpha_2,\beta_1)-n_{oo}(\alpha_2,\beta_2)}{S_o^A(\alpha_1)+S_o^B(\beta_1)}\leqslant 1.\qquad(9)$

(We call this inequality the CH-inequality for the ratio of detection counts, or simply the ratio-counts CH-inequality; see again Remark 1.)

This inequality can be derived directly from (5) without making any statements or assumptions about $N$. We can also proceed the other way around and derive the E-inequality in the form (5) from the CH-inequality in the form (9). Thus these two inequalities are algebraically equivalent. The problem of their statistical equivalence, which is more subtle, will be studied in section 3.4.
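The algebraic equivalence can be illustrated with a short sketch (hypothetical counts): since the singles sum is positive, $J=J_2-J_1\geqslant 0$ holds exactly when $T=J_1/J_2\leqslant 1$, where $J_1$ and $J_2$ denote the two sides of (5):

```python
# Sketch of the algebraic equivalence of (5) and (9): with positive singles,
# J = J2 - J1 >= 0 holds exactly when T = J1 / J2 <= 1. Counts are hypothetical.
def lhs_coincidences(n_oo):
    """Left-hand side of (5): n_oo[(i, j)] are o-o coincidences for settings (i, j)."""
    return n_oo[(1, 1)] + n_oo[(1, 2)] + n_oo[(2, 1)] - n_oo[(2, 2)]

n_oo = {(1, 1): 1000, (1, 2): 450, (2, 1): 430, (2, 2): 300}
singles = 1500  # S_o^A(alpha_1) + S_o^B(beta_1), hypothetical
J1 = lhs_coincidences(n_oo)
J, T = singles - J1, J1 / singles
assert (J >= 0) == (T <= 1)  # the two tests agree as yes/no criteria
print(f"J = {J}, T = {T:.4f}")
```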

Remark 1 (On terminology). The CH-inequality for probabilities, see (6), is sometimes called simply the CH-inequality. On the other hand, in the paper of Clauser and Shimony [14] this inequality was considered just an intermediate step towards the ratio-rates CH-inequality, see (8). Thus it would be natural to refer to the latter as the CH-inequality. This terminology is used, e.g., by Shimony [23], who called the ratio-rates inequality the BCH-inequality; see also the experimental proposal of Fry and Walther [52]. The main reason for referring to (6) as the CH-inequality is that the material in the original CH-paper [13] was presented in a very compact form; in particular, the inequalities for the ratios of probabilities and rates, see (7) and (8), were not written out explicitly. The authors just remarked that the upper limit in (6) can be tested experimentally without N being known. This statement can be interpreted simply as the (algebraic) equivalence of the inequality (6) to the E-inequality (5), achieved by multiplying both sides of (6) by N (again under the assumption of a statistically constant production rate).

3.3. Application of Chebyshevʼs inequality to data from the Vienna test

Consider statistical data that are normally distributed, with mean value μ and standard deviation σ. This information is sufficient to find the spread of the data in terms of the number of standard deviations from the mean: 68% of the data lie within $1\sigma $ of μ, 95% within $2\sigma $, and approximately 99.7% within $3\sigma $.

However, if the statistical data are not normally distributed, i.e., their density deviates from the bell shape, then a different fraction of the data may lie within the $k\sigma $-interval, $k=1,2,3,\ldots $. In this case one can apply Chebyshevʼs inequality [51], a tool for bounding the fraction of the data that falls within a few standard deviations of the mean. We recall that the Chebyshev theorem states that, for any random variable ξ with finite second moment (see footnote 12), i.e., $E\left( |\xi {{|}^{2}} \right)\lt \infty ,$ where E denotes the expectation value, and any positive number $c,$

$P\left(|\xi|\geqslant c\right)\leqslant \frac{E\left(|\xi|^2\right)}{c^2}.\qquad(10)$

Typically in applications one starts with a random variable η and in (10) selects $\xi =\eta -\mu ,$ where $\mu =E\left( \eta \right)$ is the mean value of $\eta .$ Thus

$P\left(|\eta-\mu|\geqslant c\right)\leqslant \frac{\sigma^2}{c^2},\qquad(11)$

where σ is the standard deviation of $\eta .$ We remark that taking $c=k\sigma $ yields a confidence probability of at least $\left( 1-1/{{k}^{2}} \right).$
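A small numerical illustration (with an exponential distribution chosen arbitrarily as a non-Gaussian example) shows the bound (11) in action:

```python
# Numerical illustration of Chebyshev's inequality (11) for a strongly
# non-Gaussian (exponential) distribution; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
eta = rng.exponential(scale=1.0, size=100_000)  # mean 1, std 1
mu, sigma = eta.mean(), eta.std()
for k in (1, 2, 3):
    empirical = np.mean(np.abs(eta - mu) >= k * sigma)
    print(f"k={k}: P(|eta-mu| >= k*sigma) = {empirical:.4f} <= {1/k**2:.4f}")
```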

By using the values for the mean value and standard deviation calculated in [26] and by applying Chebyshevʼs inequality, we can find (without knowing the probability distribution exactly, but only assuming that the dispersion is finite, see footnote 13) that the E-inequality is violated statistically significantly.

Remark 2. It is important to present the procedure of calculation of the empirical values ${{m}_{J}}$ and ${{s}_{J}},$ which was used by Giustina et al [26]: 'After recording for a total of 300 ${\rm s}$ per setting we divided our data into 10-${\rm s}$ blocks and calculated the standard deviation of the resulting 30 different J values.'

We consider random variables corresponding to the left-hand and right-hand sides of the inequality (5):

$J_1=n_{oo}(\alpha_1,\beta_1)+n_{oo}(\alpha_1,\beta_2)+n_{oo}(\alpha_2,\beta_1)-n_{oo}(\alpha_2,\beta_2)\qquad(12)$

and

$J_2=S_o^A(\alpha_1)+S_o^B(\beta_1).\qquad(13)$

Thus

$J=J_2-J_1,\qquad(14)$

i.e., J coincides with the Eberhard expression in (4), so that local realism implies $J\geqslant 0$.

Denote the mean value and standard deviation of the random variable J by the symbols ${{\mu }_{J}}$ and ${{\sigma }_{J}},$ respectively (see footnote 13). Thus ${{\mu }_{J}}=E\left( J \right)$ and $\sigma _{J}^{2}=E{{\left( J-{{\mu }_{J}} \right)}^{2}}.$

Our aim is to estimate the confidence interval for the mean. As always we shall use the statistical estimates of the mean and dispersion:

$\bar{J}=\frac{1}{L}\sum_{i=1}^{L}J_i,\qquad s_J^2=\frac{1}{L-1}\sum_{i=1}^{L}\left(J_i-\bar{J}\right)^2\qquad(15)$

(in the experiment [26] $L=30$; this is the number of the 10-${\rm s}$ blocks, see Remark 2; each block is used to calculate ${{J}_{i}}$; the number of pairs in each block is very large, and in this framework its exact value is not important).

We shall also use the standard error of the mean (the standard deviation of the sample-meanʼs estimate of a population mean):

${\rm SE}_{\bar{J}}=\frac{s_J}{\sqrt{L}},\qquad(16)$

and the standard deviation of the mean:

${\rm SD}_{\bar{J}}=\frac{\sigma_J}{\sqrt{L}}.\qquad(17)$

We remark that ${\rm S}{{{\rm E}}_{{\bar{J}}}}$ decreases as $1/\sqrt{L}$ with increasing sample size $L$ (see footnote 14).
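As a sketch of the block procedure of Remark 2 and the estimates (15) and (16), with synthetic block values standing in for the real $J_i$ (which we do not reproduce here):

```python
# Sketch of the block procedure of Remark 2 and estimates (15)-(16):
# L block values J_i (synthetic stand-ins, chosen only so that the standard
# error is of the order of the value reported in [26]) give mean and SE.
import numpy as np

rng = np.random.default_rng(1)
L = 30                                    # number of 10-s blocks, as in [26]
J_blocks = rng.normal(-4200.0, 340.0, L)  # synthetic stand-ins for the J_i
J_bar = J_blocks.mean()                   # (15), sample mean
s_J = J_blocks.std(ddof=1)                # (15), sample std with 1/(L-1)
SE = s_J / np.sqrt(L)                     # (16), standard error of the mean
print(f"J_bar = {J_bar:.1f}, s_J = {s_J:.1f}, SE = {SE:.2f}")
```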

By using the Chebyshev inequality we obtain:

$P\left(\left|\bar{J}-\mu_J\right|\geqslant c\right)\leqslant \frac{{\rm SD}_{\bar{J}}^2}{c^2}.\qquad(18)$

We proceed by using the standard error of the mean, instead of the standard deviation of the sample mean (in the formal mathematical presentation one has to use the correction related to the finite $L,$ see [53, 54] for details):

$P\left(\left|\bar{J}-\mu_J\right|\geqslant c\right)\lesssim \frac{{\rm SE}_{\bar{J}}^2}{c^2}.\qquad(19)$

From [26] we take the values $\bar{J}\approx -4224$ and ${\rm S}{{{\rm E}}_{{\bar{J}}}}\approx 61.23.$ This is a $\gt 60\sigma $ violation, where $\sigma \equiv {\rm S}{{{\rm E}}_{{\bar{J}}}}.$ To avoid assuming normally distributed data, further analysis is needed. By using the Chebyshev inequality we estimate the confidence interval corresponding to the confidence level 99.95%. (In applied statistics the level 95% is often considered sufficiently high.) Here $c\approx 2738.$ Thus:

$\mu_J\in\left[\bar{J}-c,\ \bar{J}+c\right]\approx\left[-6962,\ -1486\right]\quad\text{with probability at least }99.95\%.\qquad(20)$

Since this interval lies entirely below zero, the local realistic bound $J\geqslant 0$ in (3) is rejected. Thus the confidence that can be placed in the result of the Vienna test is very high: the demonstrated violation of the E-inequality is very unlikely to be a matter of chance.
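The Chebyshev radius c used here can be reproduced directly from (19): requiring ${\rm SE}_{\bar{J}}^2/c^2=1-\gamma$ for confidence level γ gives $c={\rm SE}_{\bar{J}}/\sqrt{1-\gamma}$. A quick check with the values quoted above:

```python
# Reproducing the Chebyshev confidence radius used above: for confidence
# level gamma, (19) gives c = SE / sqrt(1 - gamma). Values from [26]:
from math import sqrt

J_bar, SE, gamma = -4224.0, 61.23, 0.9995
c = SE / sqrt(1.0 - gamma)
print(f"c = {c:.0f}")                                    # ~2738, as in the text
print(f"interval: [{J_bar - c:.0f}, {J_bar + c:.0f}]")   # lies entirely below 0
```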

3.4. Statistical (non-)equivalence

Generally (i.e., without additional assumptions on the probability distributions), the algebraic equivalence of two tests does not imply their statistical equivalence, at least for finite samples. In particular, the tests based on the E-inequality and the CH-inequality in the form (9) are not statistically equivalent. The latter means that a violation of one of them by $k\sigma ,$ where k is sufficiently large, need not imply that the other will be violated with the same $k$; the significance of the violation can change substantially. This is a general statistical feature, i.e., it is not tied specifically to the two tests under consideration; see the appendix.

We remark that if the conditions of the central limit theorem are satisfied (in particular, for identically distributed independent random variables), then by using the δ-method [55] (the error propagation method) one can prove the statistical equivalence of the E-test and the CH-test for $L\to \infty .$ However, for finite $L,$ by looking at different functions of the statistics of interest and using the delta method, one can in general get any answer one likes. All of these answers are just approximations, and some approximations are better than others. As was pointed out, for $L\to \infty $ they give the same answer, but for fixed L they give different answers.
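For illustration (a standard first-order expansion, stated here for orientation rather than taken from [26] or [55]), applying the δ-method to the ratio $T=J_1/J_2$ gives the large-L variance approximation

$\sigma_T^2\approx\frac{\sigma_{J_1}^2}{\mu_{J_2}^2}-\frac{2\,\mu_{J_1}}{\mu_{J_2}^3}\,\mathrm{cov}\left(J_1,J_2\right)+\frac{\mu_{J_1}^2}{\mu_{J_2}^4}\,\sigma_{J_2}^2,$

an approximation whose accuracy depends on L; this is precisely why the J-based and T-based significance levels can differ for finite samples.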

3.5. Statistically significant violation of the CH-inequality (in the ratio form) for the Vienna test

Thus, on the basis of purely theoretical arguments, one cannot derive a statistically significant violation of the CH-inequality in the form (9) from a statistically significant violation of the E-inequality. One has to use the experimental data again.

In this section we apply Chebyshevʼs inequality to show that the statistical data collected in the Vienna test also imply a statistically significant violation of the (ratio-counts) CH-inequality. Set ${{\mu }_{T}}=E\left( T \right)$ (the mean value of the random variable $T$) and $\sigma _{T}^{2}=E{{\left( T-{{\mu }_{T}} \right)}^{2}}$ (its dispersion). We shall use the statistical estimates of the mean and dispersion: $\bar{T}=\frac{1}{L}\sum _{i=1}^{L}{{T}_{i}}$, $s_{T}^{2}=\frac{1}{L-1}\sum _{i=1}^{L}{{\left( {{T}_{i}}-\bar{T} \right)}^{2}}$ (in the experiment [26] $L=30,$ the number of 10-${\rm s}$ blocks; each block is used to calculate ${{T}_{i}}$). We also consider the standard deviation of the mean, ${\rm S}{{{\rm D}}_{{\bar{T}}}}=\frac{{{\sigma }_{T}}}{\sqrt{L}},$ and the standard error of the mean, ${\rm S}{{{\rm E}}_{{\bar{T}}}}=\frac{{{{\rm s}}_{T}}}{\sqrt{L}}.$

By using the data from [26] and the more detailed presentation in [28], we obtain $\bar{T}\approx 1.0394$ and ${\rm S}{{{\rm E}}_{{\bar{T}}}}\approx 0.0006.$ This yields a $\gt 60\sigma $ violation, where $\sigma ={\rm S}{{{\rm E}}_{{\bar{T}}}}.$ From the Chebyshev inequality

$P\left(\left|\bar{T}-\mu_T\right|\geqslant c\right)\lesssim \frac{{\rm SE}_{\bar{T}}^2}{c^2},\qquad(21)$

we estimate the confidence interval corresponding to the confidence level 99.95%. We have $c\approx 0.027.$

$\mu_T\in\left[\bar{T}-c,\ \bar{T}+c\right]\approx\left[1.012,\ 1.066\right]\quad\text{with probability at least }99.95\%.\qquad(22)$

Since this interval lies entirely above the local realistic bound 1 in (9), the confidence that can be placed in the result of the Vienna test is again very high: the demonstrated violation of the (ratio-counts) CH-inequality is very unlikely to be a matter of chance.
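The same check as before, now for the ratio test (values from [26, 28] as quoted above):

```python
# The same Chebyshev estimate for the ratio test: with the values quoted
# above, the 99.95% interval for mu_T lies entirely above the local bound 1.
from math import sqrt

T_bar, SE, gamma = 1.0394, 0.0006, 0.9995
c = SE / sqrt(1.0 - gamma)
print(f"c = {c:.3f}")                                    # ~0.027, as in the text
print(f"interval: [{T_bar - c:.4f}, {T_bar + c:.4f}]")
```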

4. The Vienna test for the CH-inequality (in the ratio form): taking into account intensity drift

In all the previous considerations we assumed, as Eberhard did originally [25], that the number of emitted pairs N is constant during the experiment and does not depend on the angles $\left( {{\alpha }_{i}},{{\beta }_{j}} \right).$ In the real experiment, the intensity drift was very small [26]. In [28], a data analysis procedure was proposed based on the following assumption: the proportion of the intensities (the numbers of emitted pairs) corresponding to the different setting combinations can be estimated from the recorded data, even though N itself is unknown.

One then proceeds not simply with the coincidence and single counts ${{n}_{oo}},{{n}_{oe}},\ldots ,S_{o}^{A},S_{o}^{B},$ but with their normalized values based on the above-mentioned proportion of intensities. We denote the normalized quantities by ${{\tilde{n}}_{oo}},{{\tilde{n}}_{oe}},\ldots ,\tilde{S}_{o}^{A},\tilde{S}_{o}^{B}.$ For these normalized numbers of coincidences and singles we can use the E-inequality:

$\tilde{S}_o^A(\alpha_1)+\tilde{S}_o^B(\beta_1)-\tilde{n}_{oo}(\alpha_1,\beta_1)-\tilde{n}_{oo}(\alpha_1,\beta_2)-\tilde{n}_{oo}(\alpha_2,\beta_1)+\tilde{n}_{oo}(\alpha_2,\beta_2)\geqslant 0.\qquad(23)$

From this we pass directly to the CH-inequality for the total numbers of coincidences and singles (which is equivalent to the CH-inequality for the rates):

$T^{\prime}=\frac{\tilde{n}_{oo}(\alpha_1,\beta_1)+\tilde{n}_{oo}(\alpha_1,\beta_2)+\tilde{n}_{oo}(\alpha_2,\beta_1)-\tilde{n}_{oo}(\alpha_2,\beta_2)}{\tilde{S}_o^A(\alpha_1)+\tilde{S}_o^B(\beta_1)}\leqslant 1.\qquad(24)$

Using the data collected in [26] and [28], we get $\overline{{{T}^{\prime }}}\approx 1.0384$, with a violation $\gt 60\sigma $. This shows that the experimental data, with the intensity drift taken into account, lead to the same amount of non-classical correlations.
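A minimal sketch of this normalization step, assuming hypothetical intensity weights (the actual estimation procedure is described in [28]):

```python
# Sketch of the drift normalization described above: the intensity weights
# w[(i, j)] are hypothetical estimates of the relative numbers of emitted
# pairs per setting combination, not values from [26] or [28].
def normalize(counts, w):
    """Rescale raw counts for setting (i, j) by the intensity weight w[(i, j)]."""
    return {key: value / w[key] for key, value in counts.items()}

n_oo = {(1, 1): 1000, (1, 2): 450, (2, 1): 430, (2, 2): 300}
w = {(1, 1): 1.00, (1, 2): 0.99, (2, 1): 1.01, (2, 2): 1.00}  # hypothetical
n_tilde = normalize(n_oo, w)
print(n_tilde)  # normalized coincidences entering (23) and (24)
```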

5. Concluding remark

The statistical data collected in the Vienna test [26, 28] violate, statistically significantly, not only the E-inequality but equivalently also the CH-inequality for the ratio of detection counts and, hence, the ratio-rate inequality, cf [29, 30]. Thus we can consider this experimental test as closing the detection loophole also with regard to the CH-inequality in the ratio form.

6. Appendix: Statistical non-equivalence of (algebraically equivalent) linear and ratio test

Let X be a set of data sampled from realizations of some random variable $x.$ Take two (for the moment arbitrary, positive-valued) functions ${{J}_{1}}\left( x \right)$ and ${{J}_{2}}\left( x \right);$ set $J\left( x \right)={{J}_{1}}\left( x \right)-{{J}_{2}}\left( x \right)$ and $T\left( x \right)=\frac{{{J}_{2}}\left( x \right)}{{{J}_{1}}\left( x \right)}.$ Consider two tests for statistical data:

$J\geqslant 0,\qquad(25)$

$T\leqslant 1.\qquad(26)$

Suppose that the data X showed a $k{{\sigma }_{J}}$ violation of the first inequality, where k is large; thus the violation of this inequality is significant. Our aim is to show that the same data can in principle show an insignificant violation of the second inequality, say $\gamma {{\sigma }_{T}},$ where γ is very small.

Suppose, for example, that the data X were obtained as the result of measurements of a discrete random variable x taking two values ${{x}_{1}},{{x}_{2}},$ where ${{x}_{1}},{{x}_{2}}$ are arbitrary real numbers, with probabilities ${{p}_{1}}$ and ${{p}_{2}}=1-{{p}_{1}}.$ Here we need to set the values of all functions only at the two points ${{x}_{1}}$ and ${{x}_{2}}:$

$J_1(x_i)=A_i\gt 0,\qquad J_2(x_i)=B_i\gt 0,\qquad i=1,2.$

We take ${{B}_{i}}=\left( 1+{{\epsilon }_{i}} \right){{A}_{i}},{{\epsilon }_{i}}\gt 0,$ so that both (25) and (26) are violated pointwise: $J(x_i)=-\epsilon_iA_i\lt 0$ and $T(x_i)=1+\epsilon_i\gt 1.$ Let $R_J=|\mu_J|/\sigma_J$ and $R_T=(\mu_T-1)/\sigma_T$ denote the numbers of standard deviations by which (25) and (26) are violated. We have:

$R_T=\frac{p_1\epsilon_1+p_2\epsilon_2}{\sqrt{p_1p_2}\,|\epsilon_1-\epsilon_2|}.$

By playing with the parameters we want to make ${{R}_{J}}\gg 1$ and at the same time ${{R}_{T}}\ll 1.$ First we set ${{\epsilon }_{1}}=\lambda {{\epsilon }_{2}},$ where $\lambda \gt 1.$ We have

$R_T=\frac{\lambda p_1}{\sqrt{p_1p_2}\left(\lambda-1\right)}+\frac{p_2}{\sqrt{p_1p_2}\left(\lambda-1\right)}.$

To make the first term very small, we select ${{p}_{1}}={{\delta }^{2}}\ll 1,$ so ${{p}_{2}}\approx 1;$ to make the second term very small, we select λ in such a way that the denominator is very large, i.e., $\sqrt{{{p}_{1}}{{p}_{2}}}\left( \lambda -1 \right)\gg 1.$ Thus, for the model parameters satisfying the conditions

$p_1=\delta^2\ll 1,\qquad \sqrt{p_1p_2}\left(\lambda-1\right)\gg 1,\qquad(27)$

the inequality (26) is violated insignificantly. We remark that, since the parameter ${{\epsilon }_{2}}$ has not yet been constrained, it is possible to make the expectation value very large, ${{\mu }_{T}}\gg 1.$ We now want to make ${{R}_{J}}\gg 1.$ We represent ${{A}_{1}}=a{{A}_{2}},$ and we have ${{R}_{J}}=\frac{a\lambda {{p}_{1}}+{{p}_{2}}}{\sqrt{{{p}_{1}}{{p}_{2}}}|a\lambda -1|}.$ Now we select a such that

$a\gt \frac{1}{\lambda}.\qquad(28)$

Thus

$R_J=\frac{a\lambda p_1}{\sqrt{p_1p_2}\left(a\lambda-1\right)}+\frac{p_2}{\sqrt{p_1p_2}\left(a\lambda-1\right)}.$

The first summand is small, see (27); it does not play any role in our considerations. The parameter a has to be selected in such a way that the second summand is very large. Thus $\sqrt{{{p}_{1}}{{p}_{2}}}\left( a\lambda -1 \right)\ll 1,$ or, taking into account (28), $0\lt a\lambda -1\ll 1/\sqrt{{{p}_{1}}{{p}_{2}}}\approx 1/\delta .$ Take a very large natural number $k$ and suppose that

$a\lambda-1=\frac{1}{k\sqrt{p_1p_2}}.\qquad(29)$

Then ${{R}_{J}}\approx k$ (see footnote 15).
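The construction can be checked numerically with the parameters of footnote 15; the short script below (a sketch using the formulas (27)–(29) above) reproduces $R_T\approx 0.2$ and $R_J\approx k$:

```python
# Numerical check of the appendix construction with the parameters of
# footnote 15: delta = 0.1 (p1 = 0.01), lambda = 102, k = 69, a ~ 0.0112.
from math import sqrt

p1, lam, k = 0.01, 102.0, 69
p2 = 1.0 - p1
R_T = (lam * p1 + p2) / (sqrt(p1 * p2) * (lam - 1.0))
a = (1.0 + 1.0 / (k * sqrt(p1 * p2))) / lam           # from (28) and (29)
R_J = (a * lam * p1 + p2) / (sqrt(p1 * p2) * abs(a * lam - 1.0))
print(f"a = {a:.4f}, R_T = {R_T:.2f}, R_J = {R_J:.1f}")  # R_T ~ 0.2, R_J ~ k
```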

Acknowledgments

We would like to thank all the authors of the original paper [26], as they contributed to the data we used in this paper; in particular, M Giustina, whose numerous comments and suggestions substantially improved our paper. We would like to thank R Gill and R Pettersson for discussions on the statistical aspects of the paper, and J-Å Larsson and G Adenier for discussions on the role of the fair sampling assumption in the derivation of various versions of Bellʼs inequality. This work was partially supported by MPNS COST Action MP1006, Fundamental Problems in Quantum Physics (I Basieva); a visiting fellowship (A Khrennikov) to the Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences; and an EU Marie Curie Fellowship (S Ramelow), PIOF-GA-2012-32985.

Footnotes

  • 7 

    We remark that a priori one still cannot exclude the possibility that in the final loophole-free experiment the Bell inequality would be satisfied. In such a (very improbable) case, since quantum theory predicts that, for the state under preparation, Bellʼs inequality has to be violated, the experiment would imply a rejection of the quantum model. Thus the Bell test can also be considered as an attempt to falsify quantum mechanics. (At the initial stage of Bell experimentation, the expectation that quantum mechanics would be falsified was common to some extent.)

  • 8 

    Although we discuss only Bell tests for entangled photons, it is relevant that the first closure of the detection loophole was achieved with massive particles [8].

  • 9 

    'Single counts' are defined as all counts registered on one side for a given setting.

  • 10 

    We remark that an important implicit assumption of applicability of this inequality is the assumption of a (statistically) constant production rate for pairs of photons, see also section 4 for discussion.

  • 11 

    This inequality is sometimes referred to simply as 'CH-inequality'; see remark 1 for a short discussion on the terminology related to the papers of Clauser and Horne [13] and Clauser and Shimony [14].

  • 12 

    In particular, the Chebyshev inequality is applicable to any bounded random variable.

  • 13 

    We remark that, since, for experimental runs of fixed duration, the number of emitted photon pairs is a bounded random variable, the random variable J is bounded and, hence, $E|J{{|}^{2}}\lt \infty .$ Therefore dispersion is well defined and the Chebyshev inequality is applicable.

  • 14 

    This also makes intuitive sense: the larger the sample, the smaller the confidence interval for the mean value. At the same time, one must not overestimate the importance of obtaining a very small standard error of the mean. It mainly means that one was able to perform measurements over very long runs of the experiment and, in particular, to guarantee the stability of the functioning of the source and the measurement devices.

  • 15 

    For example, set $\delta =0.1,$ i.e., ${{p}_{1}}=0.01,{{p}_{2}}=0.99.$ We remark that to get very small ${{R}_{T}}$ and at the same time very large ${{R}_{J}}$ the probability distribution of our model has to be strongly asymmetric. Then $\lambda =102$ guarantees that ${{R}_{T}}\lt 0.2.$ Finally, by setting k = 69 we obtain that it is sufficient to take a = 0.0112.
