Experimental certification of more than one bit of quantum randomness in the two inputs and two outputs scenario

One of the striking properties of quantum mechanics is the occurrence of Bell-type non-locality, a fundamental feature of the theory that allows two parties sharing an entangled quantum system to observe correlations stronger than any possible in classical physics. In addition to their theoretical significance, non-local correlations have practical applications, such as device-independent randomness generation, which provides private, unpredictable numbers even when they are obtained using devices delivered by an untrusted vendor. Thus, determining the quantity of certifiable randomness that can be produced using a specific set of non-local correlations is of significant interest. In this paper, we present an experimental realization of recent Bell-type operators designed to provide private random numbers that are secure against adversaries with quantum resources. We use semi-definite programming to provide lower bounds on the generated randomness, in terms of both min-entropy and von Neumann entropy, in a device-independent scenario. We compare experimental setups providing Bell violations close to the Tsirelson bound but with lower event rates against setups with slightly lower violations but higher event rates. Our results demonstrate the first experiment that certifies close to two bits of randomness from binary measurements of two parties. Apart from single-round certification, we provide an analysis of a finite-key protocol for quantum randomness expansion using the Entropy Accumulation Theorem and show its advantages compared to existing solutions.


I. INTRODUCTION
Randomness is one of the basic resources in information processing. Randomly generated numbers find applications in areas such as cryptography, where they are one of the key elements of protocols such as the Data Encryption Standard (DES) or the Advanced Encryption Standard (AES). The standards document RFC 4086, Randomness Requirements for Security [1], lists such fields of application as creating private keys for digital signature algorithms, keys and initialization values for encryption, secure PINs and passwords, keys for MAC (Message Authentication Code) algorithms, and nonces, i.e., numbers that are used just once in a cryptographic communication and cannot be reused.
It is a known fact that classical computers, which operate on deterministic algorithms, cannot generate truly random numbers, but only sequences of pseudo-random values, which at first glance resemble truly random numbers but cannot be guaranteed to be unpredictable. Anyone who knows the algorithm used to create them and its input parameters, i.e. the so-called seed, can determine all the numbers that will ever be obtained using the deterministic generator.
The situation is conceptually different in quantum mechanics, since its essence lies in processes that behave in a non-deterministic way. Thus, many quantum phenomena have intrinsic randomness. The so-called coherence of quantum states can be shown to be directly related to the complete unpredictability of certain quantities [2].
Nevertheless, verifying that a quantum device works as expected is a much more difficult issue than checking the correctness of deterministic algorithms. Typically, we cannot tell if a quantum device behaves exactly as designed, and imperfections in both quantum states and measurements can cause the entire process to lose quantum characteristics, such as the violation of Bell inequalities [3]. The situation is even worse due to the complexity and intricacy of quantum devices, as we are often unable even to check whether the components we use have been intentionally manipulated by a malevolent adversary.
An important milestone towards solving this problem was the emergence of the so-called device-independent approach [4], which allows the assessment of the fidelity of a quantum device based on its visible external behavior. The early works containing experimental implementations of quantum randomness protocols presented proofs of concept [5], but they were not very efficient in terms of the generation rate. Recently, a new theorem called the Entropy Accumulation Theorem (EAT) was introduced and proven [6-8]. This theorem allows determining the amount of randomness that is certified to be generated by a particular quantum device.
One of the important problems is how to obtain the largest possible amount of certified randomness per round using a given device with the simplest configuration of settings and outcomes. It is easy to see that a device involving two components, A and B, each generating one output bit, allows generating at most two random bits per round. The first protocols allowing certification of the maximal amount of two bits, where one of the parties uses three measurement settings, were presented in [9].
Recently, the simplest known protocol for the certification of two bits, involving only two binary measurements by each of two parties, has been introduced [10]. In this work, we present its experimental implementation, along with an analysis of the certified randomness using various numerical techniques to provide a lower bound on the randomness generated per round [11, 12].

II. METHODS
For a given behavior of a quantum device, our task is to specify lower bounds on the generated randomness. This task is called randomness certification. To this end, it is necessary to perform a complex optimization taking into account all devices implementing this behavior that are allowed by the laws of quantum physics. This optimization is essentially a consideration of the set of all possible probability distributions obtainable by quantum devices. There are no known tools to optimize exactly over this set of probability distributions. Fortunately, there are approximate techniques that determine so-called relaxations of the set of all possible quantum probability distributions. It turns out that if the optimization over probability distributions is allowed to cover a set slightly wider than that allowed by quantum mechanics, the optimization problem can be dealt with efficiently using convex optimization techniques, in particular semi-definite programming (SDP) [13]. One of the most widely used methods is the so-called Navascués-Pironio-Acín (NPA) hierarchy [14, 15].
The NPA hierarchy is used to optimize the probability of an adversary guessing the value of a random number [16]. The guessing probability is directly related to the so-called min-entropy [17]. It can be shown that the min-entropy is a lower bound on the Shannon entropy of a given random variable. Thus, the values obtained using the NPA are suitable for certification of the randomness generated by quantum devices. The NPA is limited to optimizing functions that are linear expressions of the probabilities.
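The chain of quantities mentioned above can be checked numerically; a minimal illustration (the distribution is an arbitrary example, not experimental data):

```python
import math

def shannon_entropy(p):
    # H(X) = -sum_x p(x) log2 p(x)
    return -sum(q * math.log2(q) for q in p if q > 0)

def min_entropy(p):
    # H_min(X) = -log2(p_guess), where p_guess is the best guess,
    # i.e. the largest outcome probability
    return -math.log2(max(p))

dist = [0.7, 0.1, 0.1, 0.1]   # example distribution over four outcomes
assert min_entropy(dist) <= shannon_entropy(dist)
print(min_entropy(dist), shannon_entropy(dist))
```

For the uniform distribution over four outcomes (the ideal case of two certified bits) both entropies coincide at exactly 2 bits.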
Let us consider the set of all probability distributions {P(a, b|x, y)}, where a and b are the outcomes of the measurements performed by Alice and Bob when their measurement settings are x and y, respectively. One of the protocols proposed in [9] used the Bell operator (1) as a randomness privacy certificate, where the correlators are defined as follows: C(x, y) ≡ P(0, 0|x, y) + P(1, 1|x, y) − P(0, 1|x, y) − P(1, 0|x, y). (2) One may note that (1) consists of the well-known CHSH expression [18] plus an additional term. The maximal value of (1) allowed in quantum mechanics, i.e. its Tsirelson bound, is 1 + 2√2. When the Tsirelson bound is achieved, the quantum state and all measurement operators are uniquely determined [19], and for the pair of settings x = 0, y = 0 the measurement results are uniformly distributed: P(a, b|0, 0) = 0.25 for all a, b. Protocols that employ a particular setting for randomness generation are called spot-checking protocols [20]. A full analysis of the randomness generation protocol based on (1), including the randomness accumulation and extraction aspects, has been presented in [11].
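For the maximally entangled polarization state, the correlator for wave-plate analyzer angles θ_a and θ_b reduces to the textbook relation E = cos 2(θ_a − θ_b) (used here purely for illustration, not as the paper's model); with the standard CHSH angles the expression reaches the Tsirelson bound:

```python
import math

def correlator(theta_a, theta_b):
    # E(a, b) = cos 2(theta_a - theta_b) for the |Phi+> polarization state
    return math.cos(2 * (theta_a - theta_b))

# Standard CHSH measurement angles (radians)
a0, a1 = 0.0, math.pi / 4
b0, b1 = math.pi / 8, -math.pi / 8

chsh = (correlator(a0, b0) + correlator(a0, b1)
        + correlator(a1, b0) - correlator(a1, b1))
print(chsh)  # 2*sqrt(2), approximately 2.8284
```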
Initially, SDP was used to produce an approximation of the matrix logarithm function [21], resulting in an SDP for efficient optimization of expressions involving the so-called quantum relative entropy [22]. Furthermore, this method has been used to determine lower bounds on the conditional von Neumann entropy certified in the device-independent approach [23], using the extended NPA [24] with the NCPOL2SDPA tool [25].
The technique can be applied to calculate a lower bound on the conditional von Neumann entropy (3), where E represents any sort of knowledge (classical or quantum) that an adversary may possess, provided the adversary is governed by the laws of quantum mechanics. Let Q_A, Q_B, and Q_E be the Hilbert spaces of the devices of Alice, Bob, and the adversary, respectively, and ρ_{Q_A,Q_B,Q_E} their shared tri-partite quantum state. Let {{M_{a|x}}_a}_x and {{N_{b|y}}_b}_y denote the operators of the positive operator-valued measurements (POVMs) performed by Alice and Bob, respectively. The method employs the Gauss-Radau quadrature rule to lower bound (3). Let t_i and w_i be the nodes and weights defined by this quadrature. A lower bound (4) can be obtained from [26], where the optimization variables satisfy a certain set of linear constraints specified by the protocol. The operators O_1 and O_2 are defined by the protocol, and c_i are coefficients calculated from the Gauss-Radau quadrature as c_i ≡ w_i/(t_i log(2)). The index i in the summation (4) runs over the quadrature nodes, omitting the last one.
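The quadrature-based bound rests on the integral representation ln x = ∫₀¹ (x − 1)/(t(x − 1) + 1) dt, which a finite quadrature turns into a sum of rational, and hence SDP-representable, terms. The sketch below illustrates this idea with Gauss-Legendre nodes from numpy in place of Gauss-Radau (a simplifying substitution; the method of [26] specifically requires Gauss-Radau):

```python
import numpy as np

def log_via_quadrature(x, n=6):
    # ln(x) = integral_0^1 (x - 1) / (t*(x - 1) + 1) dt,
    # approximated by an n-node Gauss-Legendre rule mapped to [0, 1]
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (nodes + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * weights         # rescale weights accordingly
    return float(np.sum(w * (x - 1.0) / (t * (x - 1.0) + 1.0)))

print(log_via_quadrature(2.0), np.log(2.0))
```

Because the integrand is a ratio that is linear in x in both numerator and denominator, each summand can be handled within a non-commutative SDP relaxation, which is what makes the entropy bound computationally tractable.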
The first family of Bell expressions, parametrized by δ, is given in (6). The members of this family have self-testing properties, use two settings for each party, and can certify two bits of randomness for the measurement settings x* = y* = 0.
The second family, parametrized by γ ∈ [0, π/12], defines the Bell expressions (7). These Bell expressions also use two settings and have self-testing properties, yet in most cases do not certify two bits of randomness.
The relative Bell value is defined as the ratio of the value of (6) or (7) obtained in the experiment to the corresponding Tsirelson bound; it attains the value 1 in the noiseless case. For correlation-based Bell expressions, like those analyzed in this paper, if η is the relative value of the Bell expression, then η is attained with the noised state ρ_{Q_A,Q_B} = η ρ^{optimal}_{Q_A,Q_B} + (1 − η) ρ^{white}_{Q_A,Q_B}, where ρ^{optimal}_{Q_A,Q_B} and ρ^{white}_{Q_A,Q_B} are the quantum state providing the Tsirelson bound and the maximally mixed state, respectively.

III. RESULTS
In this section, we describe the experimental setup and report on the analysis of the randomness generated in the series of experiments.

A. Experimental setup
Ultraviolet light centered at a wavelength of 390 nm is focused onto two 2 mm thick β-barium borate (BBO) nonlinear crystals placed in an interferometric configuration to produce photon pairs, emitted into two spatial modes (a) and (b), through the second-order degenerate type-I spontaneous parametric down-conversion process. The spatial, spectral, and temporal distinguishability between the down-converted photons is carefully removed by coupling into single-mode fibers, passing through narrow-bandwidth interference filters (F), and compensating with quartz wedges, respectively. We have realized these quantum protocols using polarization-entangled pairs of photons. The measurements for Alice are performed by a half-wave plate (HWP) oriented at θ_A0 or θ_A1, and the measurements for Bob by an HWP oriented at θ_B0 or θ_B1. The polarization measurement was performed using a polarizing beam splitter (PBS) and single-photon detectors (D) placed at the two output modes of the PBS. Our detectors are actively quenched Si avalanche photodiodes. All single-detection events were registered using a VHDL-programmed multichannel coincidence logic unit with a time coincidence window of 1.7 ns.
We performed the experiment for the Bell expressions (6) at a low rate (approximately 675 two-photon coincidences per second). At these low rates, multi-photon pair emission is small and accidental events can be neglected. We benchmark the state preparation by measuring an average visibility of 99.07% in the diagonal polarization basis. Each of the measurement runs was repeated 180 times, with a collection time of 250 s per run. We have also performed state tomography to estimate the fidelity of the state and obtained (99.63 ± 0.04)%.
For the Bell expressions (7), the rate was around 780 two-photon coincidences per second, with 160 measurement runs of 250 s each and a visibility of 99.13% in the diagonal polarization basis. The fidelity of the state obtained by state tomography is (99.75 ± 0.02)%. For these two experiments, the total average number of events is around 120 000 000.
We have also performed an experiment for the Bell expressions (6) at a higher rate of 2000 two-photon coincidences per second; the visibility was on average 98.73% in the diagonal polarization basis. Each of the measurement runs was repeated 100 times, with a collection time of 250 s per run.
To reduce experimental errors in the measurements, we used computer-controlled high-precision motorized rotation stages to set the orientation of the wave-plates with a repeatability of 0.02°. The error was estimated for each of the experiments by taking the standard deviation of the measurements. The experimental setup is illustrated in Fig. 1. The angles of the HWPs for the experiments are provided in Appendix A.

B. Certified randomness
To calculate the guessing probability, we reflected the experimental results by imposing on the maximization the constraint that the value of the Bell expression is equal to the one observed in the experiment.
To calculate the conditional von Neumann entropy, we have considered two different sets of constraints for its certification using the optimization (4). The standard approach [5] is to impose the constraint that the value of the relevant Bell expression is equal to the one from the experiment. A more involved method [9, 27-29] is to constrain the optimization with more than one parameter. The purpose of this is to increase the amount of certified randomness, at the price of a more demanding error analysis for finite data sets and more complicated numerical calculations. We imposed the constraint that each of the correlators C(0, 0), C(0, 1), C(1, 0), and C(1, 1) is equal to the one from the experiment. Note that the latter constraints are stronger than the former one, as the Bell expressions (6) and (7) are functions of the correlators.
To be more precise, we have further relaxed the above constraints. We formulated the single-parameter constraint in the form that the value of the relevant Bell expression is not smaller than the one from the experiment. The multi-parameter constraints were formulated so that each of the correlators C(0, 0), C(0, 1), and C(1, 0) is not smaller than the one obtained in the experiment, and C(1, 1) is not greater than the one from the experiment. It is easy to see that the minimization of the conditional von Neumann entropy with equality constraints is lower bounded by the minimization with inequality constraints. The reason behind this relaxation is that it improves the stability of the numerical optimization, as the feasible region has a wider interior than with equality constraints. Similarly, for the guessing probability calculations, we relaxed the equality to an inequality, imposing the constraint that the value of the Bell expression is not smaller than the one obtained in the experiment.
As mentioned, the method of [23] requires specifying the number of nodes in the quadrature. We calculated both variants of constraints with 6 nodes, and the optimization with the correlation constraints also with 8 nodes. To improve the certification of entropy, one can increase the number of nodes, but this comes at the price of a longer optimization time. Firstly, we performed the experiment for the Bell expressions (6). We concentrated on the quality of the source, at the cost of the rate of generated events. The experiment was performed for δ = 0.45, 0.5, 0.52. The obtained relative Bell values were 0.994, 0.994, and 0.997, respectively, and the certified randomness is shown in Tab. I. We observed 675 events per second, and thus the randomness generation rate for δ = 0.52 is 1270 bits of von Neumann entropy, or 1012 bits of min-entropy, per second.
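The quoted rates are simply the product of the event rate and the certified entropy per round; e.g., 1270 bits/s of von Neumann entropy at 675 events/s corresponds to roughly 1.88 certified bits per round (a back-of-the-envelope consistency check, not a calculation from the paper):

```python
def generation_rate(events_per_second, bits_per_round):
    # bits of certified randomness produced per second
    return events_per_second * bits_per_round

# Reported: 1270 bits/s of von Neumann entropy at 675 events/s
bits_per_round = 1270 / 675
print(round(bits_per_round, 2))                      # about 1.88 bits per round
print(round(generation_rate(675, bits_per_round)))   # 1270
```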
When finite statistics are taken into account, one should also consider the uncertainty in evaluating, e.g., the Bell expression value. In the case of the performed experiment, the values are shown in Tab. II, with theoretical boundaries for comparison, for δ = 0.45, 0.5, 0.52, respectively. The Gauss-Radau approximation with six nodes showed that this Bell violation allows certifying 1.54, 1.58, and 1.72 bits of von Neumann entropy, respectively, thus slightly less than in the asymptotic case of Tab. I.
The observed event rate was 780 per second, giving a randomness generation rate of 1300 bits of von Neumann entropy, or 1030 bits of min-entropy, per second.
The violation of Bell's inequality is given in Tab. IV.

3. I_δ Bell expressions and high event rate
The third experiment also concerned the Bell expressions (6). We performed it for the values δ = 0.5, 0.4, 0.3, observing relative Bell values of 0.987, 0.991, and 0.991, respectively. We show the certified randomness in Tab. V. The rate of observed events was 2000 per second, so the randomness generation rate for δ = 0.4 is 3180 bits of von Neumann entropy, or 2120 bits of min-entropy, per second.
The violation of Bell's inequality for the experiment concentrated on the high rate of observed events is given in Tab. VI.

IV. DISCUSSION AND CONCLUSIONS
We have presented an experimental setup aiming to generate close to the maximum amount of randomness possible in the binary-measurement setup with two parties. We have realized experiments for two different families of Bell expressions and obtained up to 1.88 bits per round, which is close to the theoretical maximum of two bits. We have also performed a comparison of different measures of randomness, the von Neumann entropy and the min-entropy. The min-entropy is smaller than the von Neumann entropy, whereas some applications can take advantage of the latter. Finally, we have shown that it may be beneficial for the randomness generation rate to increase the event rate at the cost of decreasing the quality of the quantum realization. We find it an interesting question for future research how this influences the net gain of the randomness extraction. We expect that having close to two bits per elementary event will simplify the randomness extraction procedure, in terms of both the requirements for the extractor's seed and the extraction processing time. We leave the problem of employing the setup for full quantum randomness extraction for future work.

FIG. 1 .
FIG. 1. Experimental setup. Entangled photon pairs are generated through the SPDC process. The signal is filtered. The two measurement stations are each composed of a half-wave plate (HWP) and a polarizing beam splitter (PBS). (See main text for details.)

1. I_δ Bell expressions and high relative violation

TABLE I .
Randomness certified by Bell expressions (6) for the experiment concentrated on high relative violation of the Bell inequality.

TABLE V .
Randomness certified by Bell expressions (6) for the experiment concentrated on the high rate of the observed events.