Calibration and High Fidelity Measurement of a Quantum Photonic Chip

Integrated quantum photonic circuits are becoming increasingly complex. Accurate calibration of device parameters and detailed characterisation of the prepared quantum states are critically important for further progress. Here we report an effective experimental calibration method based on Bayesian updating and Markov chain Monte Carlo integration. We use this calibration technique to characterise a two-qubit chip and extract the reflectivities of its directional couplers. An average quantum state tomography fidelity of 93.79±1.05% against the four Bell states is achieved. Furthermore, comparing the measured density matrices against a model that uses the non-ideal device parameters derived from the calibration, we achieve an average fidelity of 97.57±0.96%. This pinpoints the non-ideality of chip parameters as a major factor in the decrease of Bell state fidelity. We also perform quantum state tomography for Bell states while continuously varying the photon distinguishability and find excellent agreement with theory.

A large variety of quantum systems have been studied in recent years for use in quantum information processing [1,2,3,4]. Among the possible implementations, quantum information processing based on photons stands out due to its high stability, wide availability and ease of manipulation [1,5]. Recent developments in integrated photonics have been promising from the point of view of future large scale on-chip quantum information processing [6,7,8,9]. Moreover, integrated reconfigurable quantum photonic circuits have shown great potential for generic quantum operations [10,11]. For instance, it has been shown that the generation of arbitrary two-qubit states and the corresponding state tomography can be realised on a chip with high fidelity [12]. However, with increasing device complexity the accurate calibration of quantum devices becomes a crucially important task. In this paper we study a reconfigurable two-qubit quantum photonic device designed to create maximally entangled states and to perform quantum state tomography on them. In particular we focus on statistically rigorous calibration and tomography which allow us to reach a very high fidelity between theoretically expected and measured states.
The paper is organised as follows. We first describe the experimental arrangement and the device design. We continue by detailing the statistical calibration procedure and the theoretical model used, which takes into account the finite quantum interference. We also explain how the state tomography is performed. We then proceed to the benchmarking results for states ranging from fully mixed to nearly maximally entangled, obtained by adjusting the photon delay. We conclude by discussing the results.
The photonic circuit investigated here was fabricated using silica-on-silicon technology [6]. Figure 1 shows a schematic design of the device. A qubit is encoded in the amplitude and phase of a single photon travelling on a pair of waveguides (path encoding) [15,16]. Realising a two-qubit state requires two identical photons and four waveguides. For instance, one photon at each of the inputs 2 and 4 corresponds to the two-qubit state |10⟩. The chip can be viewed as composed of three parts: the first part on the left prepares arbitrary single-qubit states (see the pink P/M1 and blue P/M2 blocks in Fig. 1). The central part (see the yellow block C in Fig. 1) is responsible for the quantum entanglement owing to the probabilistic Controlled-NOT (CNOT) gate [17]. The blocks on the right-hand side are mirror images of the preparation blocks on the left and are used to choose the basis for projective measurements on the two single qubits. Each block consists of a number of directional couplers (DC) and voltage-controlled thermal variable phase shifters [10].
The experimental setup we used is similar to that of Ref. [11], the only difference being the reconfigurable photonic chip. The chip was mounted on a chip holder and butt-coupled to optical fibre V-groove arrays at the input and output. Photon pairs were generated by a type-I spontaneous parametric down-conversion source pumped by a 50 mW 405 nm laser. The 810 nm photons were filtered with 2 nm bandpass filters, collected into polarisation maintaining fibres with aspheric lenses and then directed to the chip. The photons were detected by single photon avalanche diodes, with an efficiency of about 50%, connected to the output fibre array. All photon arrival times were recorded by a counting card with a time resolution of about 165 ps. Two-photon coincidences could then be detected between every pair of output waveguides. The relative delay in response time between the detectors was calibrated and subtracted before counting coincidences. A time window of 1.5 ns was employed to count coincidences in the experiment; this window width is required because of the statistical variation of the detector response time. The accidental two-photon coincidence rate for this time window was calculated during the experiments and was always less than 0.4%.
The generation, manipulation and measurement of entanglement and single-qubit mixed states has been demonstrated with a linear photonic two-qubit chip of the same design [12] as in the present paper. In those experiments an average quantum state tomography fidelity of 92.8±2.5% was achieved for the four maximally entangled Bell states. However, from experiments performed to date it is unclear what mechanisms cause the decrease from ideal fidelity. Possible causes include distinguishability of the input photon pair, inaccurate phase shifters, non-ideal reflectivities of the on-chip directional couplers, variations in the output coupling, and photon detection efficiencies. In order to further improve the fidelity of chip operation it is important to characterise all of these carefully. We focus here on the chip parameters and distinguishability.

Figure 1. Schematic design of the integrated photonic device. The reconfigurable phase shifters are labelled ϕ_1...ϕ_8 and the waveguide directional couplers DC1-13. All directional couplers have a design reflectivity of 1/2 except the ones marked by a dot (reflectivity 1/3). The shaded blocks indicate different functions within the circuit. The pink block P/M1 and the blue block P/M2 perform single-qubit rotations on the target qubit and the control qubit, respectively. The yellow region C performs a CNOT operation. The arrows on the input side indicate the input waveguides for the photon pair arriving from the source. We adopt the convention that the control qubit states |0⟩ and |1⟩ correspond to output channels 5 and 4, respectively, while the target qubit states |0⟩ and |1⟩ correspond to output channels 3 and 2, respectively.

Before using the Bayesian technique to calibrate the (passive) chip parameters, an accurate mapping of the phase shift as a function of the applied voltage is required.
The phase-voltage dependence of each of the eight resistor-based variable thermo-optic phase shifters was measured individually with single photons [12]. This was done interferometrically, by repeatedly sending single photons into waveguide 2 or 4 and counting the output photons within a fixed time window as a function of the heater voltage. All eight calibrations were performed while the remaining heaters were driven at a medium power level to mimic the conditions of a typical experiment.
With all eight phase shifters calibrated, the reflectivities of the 13 on-chip directional couplers r = (r_1, ..., r_13) could be determined using Bayesian inference [14]. A statistical model for a set of variables X given a set of parameters Y is described by the conditional probability P(X|Y). This model can be converted into the distribution of the parameters given a set of observations X using Bayes' rule,

P(Y|X) ∝ P(X|Y) P(Y),

where P(Y) is the prior distribution containing the initial assumptions and P(Y|X) is the posterior distribution. In the present context X is the set of observed photon coincidences for different configurations, whereas Y is the set of device parameters. The rule can be used iteratively to update the distribution: the latest posterior can serve as the prior for new data. The method makes the dependence on underlying assumptions transparent and provides a general framework that can be applied whenever one is able to write down an appropriate statistical model for the experiment. In the present study the required likelihood function is that of the two-photon coincidences given the parameters of the photonic circuit. Evaluating the posterior distribution exactly is hard, so we use a Markov chain Monte Carlo (MCMC) [18] method to draw samples from it.
To generate the experimental data for the Bayesian inference task we picked 1000 sets of 8 random phases, drawn uniformly between 0 and 2π, and biased the phase shifters accordingly. For each of the 1000 phase settings ϕ_j, j ∈ {1, ..., 1000}, identical photons were sent into waveguides 2 and 4 while we recorded the number of coincidences N^j_kl, 1 ≤ k < l ≤ 6, between all pairs (k, l) of the 6 output channels. That is, we recorded the frequency of the 15 different coincidence events for 1000 randomly chosen phase settings. We denote this data set collectively by N, and the total number of coincidences observed for a given phase setting ϕ_j by N^j = Σ_{l=1}^{6} Σ_{k=1}^{l−1} N^j_kl. For two distinct channels k ≠ l one can express the expected probability of a coincidence compactly as

p_kl = (1/C) [ (1 − p_dist) |U_k2 U_l4 + U_l2 U_k4|^2 + p_dist ( |U_k2 U_l4|^2 + |U_l2 U_k4|^2 ) ],   (2)

where p_dist is a parameter describing the probability that the two photons are distinguishable and U is the underlying 6 × 6 unitary describing the single-photon behaviour of the chip. U can be obtained in a straightforward way by combining the effect of the 13 directional couplers and 8 phase shifters (see Appendix A). The probability p_kl thus depends on all the chip parameters of interest. The normalisation factor C is obtained by summing over all the events that our detection scheme can detect; this is needed since events corresponding to two photons in the same channel are not measured. For a derivation of the coincidence probabilities starting from the device unitaries see Appendix B. The model for the coincidence probability can be interpreted as a statistical mixture of ideal quantum interference behaviour and fully distinguishable behaviour.
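As a sketch of the coincidence model above, the following Python function evaluates the normalised coincidence probabilities for an arbitrary single-photon unitary; the function name and 0-indexed waveguide convention are ours:

```python
import numpy as np

def coincidence_probs(U, p_dist, m=1, n=3):
    """Coincidence probabilities p_kl for two photons injected into
    waveguides m and n (0-indexed; defaults correspond to waveguides
    2 and 4), modelled as a mixture of indistinguishable (amplitudes
    add) and distinguishable (probabilities add) behaviour.
    U is the single-photon unitary of the circuit."""
    d = U.shape[0]
    p = {}
    for k in range(d):
        for l in range(k + 1, d):
            quantum = abs(U[k, m] * U[l, n] + U[l, m] * U[k, n]) ** 2
            classical = abs(U[k, m] * U[l, n]) ** 2 + abs(U[l, m] * U[k, n]) ** 2
            p[(k, l)] = (1 - p_dist) * quantum + p_dist * classical
    C = sum(p.values())  # normalise over detectable events only
    return {kl: v / C for kl, v in p.items()}
```

For a 6-mode unitary this returns the 15 probabilities that enter the likelihood, normalised so that same-channel (undetectable) events are excluded.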
To see how this model can be used to obtain the unknown parameters, consider the probability of observing the set of coincidences N given the parameters β = (r_1, ..., r_13, p_dist). For each experiment j the probability of observing the coincidences N^j_kl (1 ≤ k < l ≤ 6) is given by the multinomial distribution

P(N^j | β) = N^j! Π_{k<l} (p_kl)^{N^j_kl} / (N^j_kl)!.

We can therefore write the total probability as the product of 1000 multinomial distributions (for a given total number of events N^j in each experiment j),

P(N | β) = Π_{j=1}^{1000} P(N^j | β).

This function should be understood as the conditional distribution given that N^j events have occurred for each j. Although N^j is in principle a stochastic variable, it could just as well be fixed by collecting precisely that amount of data. When the N^j_kl are fixed, this function of β is called the likelihood function. However, we are interested in the distribution of the parameters given the observations, not vice versa. We therefore used Bayes' theorem to write the posterior distribution for β as

P(β | N) ∝ P(N | β) P(β).

In principle the normalisation factor can be obtained by integrating over β, but in practice this is hard to do without solving for the full distribution. Here we took the prior to be constant, set to zero in unphysical regions of the parameter space. Direct evaluation of P(β | N) is very difficult. Instead, we used the Metropolis-Hastings Markov chain Monte Carlo algorithm to perform a random walk in the parameter space β. This method starts from a random point in parameter space and picks trial points randomly in a symmetric way. A trial point is accepted or rejected depending on how its probability compares to that of the previous point: the new point is always accepted if it is more probable, and if it is less probable it is accepted with probability equal to the ratio of the probabilities. Pseudo-random numbers are used for these decisions. Note that working with ratios avoids calculating the normalisation factor.
Numerically it is much more accurate and stable to perform the comparisons using logarithms of the pseudo-random numbers and logarithms of the probabilities, log P(β | N). In our case this avoided multiplying 15 000 numbers below unity before each comparison. It also eliminated the factorial term, which is constant as a function of β. The resulting samples were stored and, owing to detailed balance and ergodicity, the walk samples the distribution as if the points were drawn from it directly. One can then calculate, e.g., moments or histograms from the data. For the data shown below we let the chain equilibrate for 30 000 steps and then sampled for 200 000 steps; the number of steps was chosen empirically. Trial points were generated by adding a normally distributed number (standard deviation chosen as 0.5% of the design/expected value) to one randomly picked parameter. Without optimising the code, the sampling takes about the same time as the experiment on a laptop running MATLAB (overnight).
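The log-space accept/reject scheme described above can be sketched as follows; this is a generic random-walk Metropolis-Hastings sampler of our own construction, not the paper's MATLAB code:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=0.005, burn=1000, seed=0):
    """Random-walk Metropolis-Hastings with log-probabilities.
    log_post returns log P(x | data) up to an additive constant; the
    accept/reject test log(u) < log_post(trial) - log_post(current)
    avoids multiplying thousands of sub-unity numbers."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for i in range(n_steps):
        trial = x.copy()
        j = rng.integers(len(x))            # perturb one randomly picked parameter
        trial[j] += rng.normal(0.0, step)   # symmetric (Gaussian) proposal
        lp_trial = log_post(trial)
        if np.log(rng.random()) < lp_trial - lp:   # accept/reject in log space
            x, lp = trial, lp_trial
        if i >= burn:                        # discard equilibration steps
            samples.append(x.copy())
    return np.array(samples)
```

An unphysical trial point can be handled by having `log_post` return a very large negative number, which makes it essentially never accepted.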
The resulting expectation values and standard deviations of the reflectivities of the directional couplers 1-13 are shown in Table 1. The differences between fitted and designed values were expected to be less than 5% for the process employed. The slightly larger observed variation indicates that the fabrication process was not fully within specifications.
In addition, we found the probability of two-photon distinguishability (which equals one minus the visibility of two-photon interference) to be p_dist = 4.51 ± 0.11%. This probability of distinguishability accounts for all contributions that might deteriorate the visibility of two-photon interference, such as non-identical spectra and polarisation of the photon source as well as non-uniform refractive index or birefringence in the waveguides.
To confirm the reliability of the fitted results for both the coupler reflectivities and the distinguishability of the photons, two-photon interference experiments were carried out over various Mach-Zehnder interferometers on the chip. Figure 2 shows a Hong-Ou-Mandel dip [19], measured over the branch in the top right corner of the chip. The dip was obtained by inserting photons into waveguides 1 and 4 and counting the coincidences at waveguides 2 and 3. Phase shifters 3 and 4 on the preparation side were adjusted so that the first interference takes place at directional coupler 5, while phase shifters 5 and 6 were tuned so that the effective reflectivity of the whole branch was as close to 50% as possible (estimated 52%). The visibility of the dip was measured to be 96.09 ± 1.8%. Another Hong-Ou-Mandel dip measurement over the central directional coupler 8 yielded a visibility of 73.09 ± 1.0%. This is about 3.1% lower than the value (76.21%) expected from the reflectivity of 0.3175 for coupler 8. Both Hong-Ou-Mandel scans thus showed about 3-4% imperfection, in agreement with our Markov chain Monte Carlo-based characterisation.
Having carefully calibrated the integrated quantum photonic chip parameters we then performed a demanding benchmark experiment. The generation and characterisation of maximally entangled Bell states offered an ideal test case for this purpose. Using inputs 2 and 4 we prepared the initial state |10⟩ and drove the input-side phase shifters 1-4 such that a maximally entangled state such as (|01⟩ + |10⟩)/√2 could then be produced utilising the CNOT gate in the centre of the chip. However, it is clear from the characterisation of the device that the beam splitter reflectivities deviate from their ideal values, so even for perfectly indistinguishable photons we cannot expect to prepare precisely these states. We therefore also calculated the theoretically expected modified density matrices for the purpose of the current benchmark test. Similarly to the Markov chain Monte Carlo-based calibration, we modelled the theoretically expected density matrix, taking the real chip parameters into account, as (see Appendix C)

ρ_real = C [ (1 − p_dist) ρ_ind + p_dist ρ_dist ]   (5)

within the two-qubit subspace, where C is chosen such that Tr ρ_real = 1. The indistinguishable part, exhibiting ideal quantum interference, is obtained simply as ρ_ind = |ψ_ind⟩⟨ψ_ind| using |ψ_ind⟩ = |ψ_dist1⟩ + |ψ_dist2⟩, where |ψ_dist1⟩ and |ψ_dist2⟩ are the amplitudes of the two distinguishable possibilities, obtained from the unitary U^(p) of the preparation stage alone (see Appendix A). The distinguishable part is the statistical mixture of the two distinguishable possibilities,

ρ_dist = |ψ_dist1⟩⟨ψ_dist1| + |ψ_dist2⟩⟨ψ_dist2|.

In order to arrive at a valid density matrix the two-qubit state has to be normalised to account for the fact that the CNOT works probabilistically, i.e. the state is projected onto the two-qubit subspace. Note that the diagonals of ρ_dist and ρ_ind consist of the familiar-looking elements: in the former case probabilities are added, and in the latter case amplitudes are added.
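The mixture model for ρ_real described above can be sketched numerically; the function name is ours, and the inputs are assumed to be the (unnormalised) two-qubit amplitude vectors of the two distinguishable possibilities:

```python
import numpy as np

def rho_real(psi_dist1, psi_dist2, p_dist):
    """Model density matrix as a statistical mixture of ideal quantum
    interference and fully distinguishable behaviour, normalised to
    account for the probabilistic CNOT (projection onto the two-qubit
    subspace)."""
    psi_ind = psi_dist1 + psi_dist2                    # amplitudes add
    rho_ind = np.outer(psi_ind, psi_ind.conj())
    rho_dist = (np.outer(psi_dist1, psi_dist1.conj())  # probabilities add
                + np.outer(psi_dist2, psi_dist2.conj()))
    rho = (1 - p_dist) * rho_ind + p_dist * rho_dist
    return rho / np.trace(rho)                         # normalise: Tr rho = 1
```

For p_dist = 0 and amplitude vectors pointing at |01⟩ and |10⟩ this reproduces the ideal Bell state; for p_dist = 1 the off-diagonal coherences vanish.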
To benchmark the photonic chip we reconstructed the density matrix by quantum state tomography [20,21,22]. Instead of the more commonly used maximum likelihood tomography we used a Bayesian MCMC method. This serves two purposes: it conveniently ensures that the density matrix is physical, and it yields rigorous error bars for the fidelity of the density matrix against theoretical expectations. To obtain the required experimental data we used nine different phase settings of the output-side phase shifters per input state, which are in principle enough to characterise the state fully [23]. That is, we measured the qubits along {X, Y, Z} × {X, Y, Z} as accurately as possible. Each of these measurements gives information not only about the corresponding two-qubit density matrix element but also about the single-qubit terms. However, to account for imperfections and the finite number of repetitions we resorted to numerical methods. We parametrised the density matrix as ρ = Σ_{r,s=0}^{3} α_rs σ_r ⊗ σ_s, where the α_rs are the 15 free real unknown parameters, with α_00 = 1/4, and the σ_r are the Pauli matrices, including the identity σ_0 in the notation. As in the parameter estimation, we can obtain the distribution of the density matrix parameters α using P(α|M) ∝ P(M|α)P(α), or log P(α|M) = log P(M|α) + log P(α) up to a constant. Here M denotes the collective set of observations. We can write the multinomial log-likelihood as

log P(M | α) = D + Σ_i Σ_ab M^i_ab log p^i_ab,

where D is a constant, the index i runs over the nine tomography phase settings, M^i_ab is the number of times that we detected the qubit state |ab⟩ and p^i_ab is the corresponding expected probability. These probabilities depend on both the unknown density matrix parameters that we optimise over and the known phase settings of shifters 5-8. We used a uniform prior P(α) over physical density matrices, i.e. P(α) is constant whenever the eigenvalues of the corresponding ρ are non-negative, and zero otherwise.
In practice this means that log P(α) is set to a large negative value (ideally −∞) whenever the MCMC algorithm attempts to move outside the physical region. There are numerous alternative ways to choose the prior [24,25]; we chose the present one for numerical convenience. The real parts of the measured density matrices of all four Bell states are shown in Figure 3. The fidelity estimates and the mean density matrices are obtained by averaging over the random walk. Both the fidelity against the ideal Bell state and the fidelity against the density matrix predicted using the calibrated parameters are shown. These are denoted by F_ideal and F_real, respectively. We used the definition of fidelity [26]

F(ρ_ref, ρ_exp) = ( Tr √( √ρ_ref ρ_exp √ρ_ref ) )^2,

where ρ_exp is the experimentally measured density matrix. To calculate the fidelity F_ideal we choose the reference density matrix ρ_ideal to be the density matrix of one of the Bell states. To obtain F_real we set the reference density matrix to that calculated from Eq. (5), using the same phase shifter settings as for the Bell states but with the real coupler reflectivities and the non-ideal photon indistinguishability taken into account. For the four Bell states F_ideal is 93.79 ± 1.05% on average. This is an improvement over previously reported [12] fidelities. The remaining 5-6% imperfection mostly arises from the non-ideal reflectivities and the photon source, as the average fidelity increases to 97.57 ± 0.96% for F_real. This indicates that a major source of the decrease in fidelity is the non-ideal reflectivities of the on-chip directional couplers.
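The fidelity definition above can be evaluated numerically; a minimal sketch, using an eigendecomposition-based matrix square root (helper names are ours):

```python
import numpy as np

def _sqrtm_psd(rho):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)   # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho_ref, rho_exp):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho_ref) rho_exp sqrt(rho_ref)))^2."""
    s = _sqrtm_psd(rho_ref)
    return float(np.real(np.trace(_sqrtm_psd(s @ rho_exp @ s))) ** 2)
```

For a pure reference state this reduces to the overlap ⟨ψ|ρ_exp|ψ⟩; e.g. a Bell state against the maximally mixed two-qubit state gives F = 1/4.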
To further demonstrate the agreement between the theoretical model and the experimental results, we varied the relative delay of the two photons and performed quantum state tomography at each fixed delay. This experiment can be viewed as a Hong-Ou-Mandel measurement for Bell states. Depending on the delay between the two photons, the four two-qubit states were expected to change between maximally entangled and totally mixed, as predicted by the model presented above. We compared the measured density matrices against (i) the ideal Bell states, (ii) the best expected density matrix with finite but optimal p_dist and real reflectivities, and (iii) the delay-dependent ρ(p_dist) (with real reflectivities), where p_dist was deduced from the independent measurement in Fig. 2. Figure 4 (a)-(d) illustrates the results for |Φ−⟩, |Φ+⟩, |Ψ−⟩ and |Ψ+⟩, respectively. There are three curves for each state: the lowest (red) curve corresponds to case (i), the middle (blue) to case (ii) and the top (magenta) to case (iii). In cases (i) and (ii) one can clearly see how the fidelities peak in analogy with the Hong-Ou-Mandel effect.
Comparing the experiments with the detailed model results in an increase in the fidelities. The tops of the four peaks correspond to the states shown in Fig. 3. Case (iii) shows the agreement of theory and experiment most clearly: the fidelity of the measured density matrices as a function of delay agrees almost perfectly with the model for the delay-dependent ρ_real(p_dist), which is a statistical mixture of distinguishable and indistinguishable behaviour. How the reconstructed density matrices evolve as a function of the delay is shown in a supplementary video.
Our method introduces minimal disturbance between calibration and the actual measurements, as it is not necessary to switch the input ports of the photons. Having obtained the real chip and source parameters, we checked the fidelity of a prepared state against a theoretical prediction using these parameters. A maximum fidelity of 98.02 ± 1.03% against the theory prediction using the real device parameters shows that practically all error sources are accounted for, and that improving the photon source as well as the chip fabrication will make it possible to prepare states with fidelities larger than 99%. Remarkably, the almost unit fidelity also extends to mixed states. This enables us to reliably prepare not only maximally entangled states but also mixed states with a varying degree of mixedness. A source of such states can be of considerable interest for investigating mixed state quantum computation and for testing concepts like quantum discord.

Appendix A. The Single-Photon Unitary of the Chip

The chip is described by a 6 × 6 unitary built from phase shifters S^(k)(ϕ) and directional couplers D^(a,b)(r), where k, a and b label the waveguides. For example, a phase shifter on waveguide 2 is given by

S^(2)(ϕ) = diag(1, e^{iϕ}, 1, 1, 1, 1),

i.e. a unit matrix in which the 2nd diagonal element is replaced by a phase factor. A directional coupler between waveguides a and b with reflectivity r acts as the identity on all other waveguides and on the subspace of waveguides a and b as the 2 × 2 matrix

( √r        i√(1−r) )
( i√(1−r)   √r      ).

As an example, D^(2,3)(r) is a directional coupler with reflectivity r between waveguides 2 and 3. The unitary of the chip U has a rather complicated form and we therefore present it as the product of its elementary building blocks. It can be written in terms of the preparation-stage unitary U^(p) and the unitary describing the selection of the measurement bases U^(m) as

U = U^(m) · U^(p),

where the preparation-stage unitary can be broken down further into the two single-qubit preparation blocks and the unitary of the central part, whose reflectivities r_i (design and measured) can be found in Table 1.
The two other blocks have the same product structure; for instance

U_P2 = S^(4)(ϕ_4) · D^(4,5)(r_10) · S^(4)(ϕ_3) · D^(4,5)(r_9).   (A.11)

In the same way the unitary describing the selection of the measurement bases can be written as a product of blocks such as

D^(4,5)(r_12) · S^(4)(ϕ_8) · D^(4,5)(r_11) · S^(4)(ϕ_7).   (A.14)

This fully defines the single-photon unitary of the photonic chip under consideration.
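The elementary building blocks S^(k) and D^(a,b) can be sketched numerically as follows; the i√(1−r) cross-coupling phase convention is an assumption, chosen to be consistent with the Hong-Ou-Mandel coincidence result of Appendix B:

```python
import numpy as np

def S(k, phi, dim=6):
    """Phase shifter on waveguide k (0-indexed): the identity with the
    k-th diagonal element replaced by exp(i*phi)."""
    M = np.eye(dim, dtype=complex)
    M[k, k] = np.exp(1j * phi)
    return M

def D(a, b, r, dim=6):
    """Directional coupler of reflectivity r between waveguides a and b
    (0-indexed), identity on all other waveguides."""
    M = np.eye(dim, dtype=complex)
    M[a, a] = M[b, b] = np.sqrt(r)
    M[a, b] = M[b, a] = 1j * np.sqrt(1 - r)
    return M

# A block in the style of Eq. (A.11) (0-indexed waveguides 4,5 -> 3,4):
# U_P2 = S(3, phi4) @ D(3, 4, r10) @ S(3, phi3) @ D(3, 4, r9)
```

With this convention a 50:50 coupler (r = 1/2) completely suppresses the two-photon coincidence amplitude, reproducing the ideal Hong-Ou-Mandel dip.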

Appendix B. From Unitaries to Probabilities
Once the unitary is known we can proceed to calculate the probabilities for coincidence counts at the detectors. We assume that the source produces photons with a given frequency distribution, so that a photon in waveguide i is described by

∫ dω α(ω) â†_i(ω) |0⟩,

where |0⟩ is the vacuum state, â†_i(ω) is the creation operator for a photon with frequency ω in waveguide i and α(ω) is a normalised frequency amplitude, ∫ dω |α(ω)|^2 = 1. With a photonic circuit described by the single-photon unitary U and an initial state with two photons entering the waveguides m and n, n ≠ m, the wave function at the exit of the circuit is obtained by transforming the creation operators according to U. The coincidence counts at detectors k and l, k ≠ l, are then given by the expectation value of the product of the number operators n̂_k n̂_l, where n̂_k = ∫ dω â†_k(ω) â_k(ω). After a lengthy but straightforward calculation, contracting pairs of creation and annihilation operators using the commutation relation [â_i(ω), â†_j(ω′)] = δ_ij δ(ω − ω′), we obtain

n_kl = (1 − p_dist) |U_km U_ln + U_lm U_kn|^2 + p_dist ( |U_km U_ln|^2 + |U_lm U_kn|^2 ).

Note that we have here selected the frequency distribution to distinguish the photons, but a similar argument can be made for other degrees of freedom, e.g. polarisation. In our experiments we are interested in the probability of a certain coincidence, given that we insert a photon into each of the waveguides 2 and 4. We obtain the probability of a coincidence as p_kl = n_kl / C with the normalisation constant given by C = Σ_{k<l} n_kl, which is exactly the form of Eq. (2). We now show that p_dist can be directly obtained from a Hong-Ou-Mandel type experiment. In such a measurement two photons are directed to the two input ports of a directional coupler with reflectivity r = 1/2, and the quantity measured is the number of coincidence counts relative to the number obtained for completely distinguishable photons. For a beam splitter with reflectivity r we obtain

n_12 = (1 − p_dist)(1 − 2r)^2 + p_dist (1 − 2r + 2r^2).
Normalising this to the number of coincidence counts for completely distinguishable photons we get

x = [ (1 − p_dist)(1 − 2r)^2 + p_dist (1 − 2r + 2r^2) ] / (1 − 2r + 2r^2),

and writing r = 1/2 + δr and expanding to second order in δr we obtain

x ≈ p_dist + 8 (1 − p_dist) δr^2.

This shows that we can directly read off p_dist from a Hong-Ou-Mandel type experiment and that the error induced by using a non-ideal beam splitter is quadratic in the deviation.
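The normalised coincidence ratio and its quadratic behaviour near r = 1/2 can be checked numerically; the function name is ours:

```python
def hom_ratio(r, p_dist):
    """Coincidence rate relative to fully distinguishable photons for a
    directional coupler of reflectivity r: quantum term (1-2r)^2 mixed
    with classical term 1-2r+2r^2, normalised by the classical term."""
    quantum = (1 - 2 * r) ** 2
    classical = 1 - 2 * r + 2 * r ** 2
    return ((1 - p_dist) * quantum + p_dist * classical) / classical
```

At r = 1/2 exactly the ratio equals p_dist; for the 52% effective reflectivity quoted in the main text the correction is of order 8(1 − p_dist)(0.02)^2 ≈ 0.3%, small compared with the measured p_dist.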

Appendix C. The Two Qubit Density Matrix
If we want to consider the photonic quantum circuit in the light of quantum computation we have to assign the meaning of qubits to certain combinations of photons. We do this in the following way: waveguides 2 and 3 encode one qubit in dual-rail encoding, while waveguides 4 and 5 encode the second one. A photon present in waveguide 2 with the second photon in waveguide 4 maps onto the logical two-qubit state |11⟩_L. Similarly we make the mappings |25⟩ → |01⟩_L, |34⟩ → |10⟩_L and |35⟩ → |00⟩_L. Given the full wave function on the chip, and the fact that the detectors only register the presence of a photon but do not resolve parameters like polarisation or wavelength, we can construct the reduced density matrix for the two-qubit subspace as the statistical mixture of the indistinguishable and distinguishable parts projected onto these logical states and normalised. This is the form of the two-qubit density matrix given in Eq. (5).
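The waveguide-pair to logical-state mapping can be sketched as follows; the dictionary layout and function name are ours, with the basis ordered |00⟩, |01⟩, |10⟩, |11⟩ (control qubit first, per the convention of Fig. 1):

```python
import numpy as np

# Waveguide pair -> index into the logical basis (|00>, |01>, |10>, |11>):
# |2,4> -> |11>_L, |2,5> -> |01>_L, |3,4> -> |10>_L, |3,5> -> |00>_L
LOGICAL = {(2, 4): 3, (2, 5): 1, (3, 4): 2, (3, 5): 0}

def two_qubit_state(amplitudes):
    """Project a dict {(waveguide_i, waveguide_j): amplitude} onto the
    two-qubit subspace and normalise; pairs outside the subspace
    (e.g. both photons in the same rail) are discarded."""
    psi = np.zeros(4, dtype=complex)
    for pair, amp in amplitudes.items():
        if pair in LOGICAL:
            psi[LOGICAL[pair]] = amp
    return psi / np.linalg.norm(psi)
```

The renormalisation implements the projection onto the two-qubit subspace that accounts for the probabilistic operation of the CNOT.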