Implications of CMS data for photon PDFs

As part of a recent analysis of exclusive two-photon production of W+W− pairs at the LHC, the CMS experiment used di-lepton data to obtain an "effective" photon-photon luminosity. We show how the CMS analysis of their 8 TeV data, along with some assumptions about the likelihood for events in which the proton breaks up to pass the selection criteria, can be used to significantly constrain the photon parton distribution functions, such as those from the CTEQ, MRST, and NNPDF collaborations. We compare the data with predictions using these photon distributions, as well as the new LUXqed photon distribution. We study the impact of including these data on the NNPDF2.3QED, NNPDF3.0QED and CT14QEDinc fits. We find that these data provide a useful and complementary cross-check on the photon distribution, which is consistent with the LUXqed prediction while suggesting that the NNPDF photon error band should be significantly reduced. Additionally, we propose a simple model for describing the two-photon production of W+W− at the LHC. Using this model, we constrain the amount of the inelastic photon contribution that remains after the experimental cuts are applied.

With the start of the 13 TeV run of the Large Hadron Collider (LHC), more precise theory calculations are needed to correctly interpret the present and upcoming experimental data. Calculations at next-to-next-to-leading order (NNLO) in Quantum Chromodynamics (QCD) are becoming the standard, so that the theoretical uncertainty can be reduced to the same order as the experimental uncertainty. At this level of precision, the leading-order electroweak correction is also important, because the square of the strong coupling (α_s) is of the same order of magnitude as the electromagnetic coupling (α). Therefore, it becomes necessary to include electroweak corrections in the calculations.
One particular electroweak correction of interest is that due to photons coming from the proton in the initial state. This requires the inclusion of the photon as a parton inside the proton, with an associated parton distribution function (PDF). This is necessary both for consistency when electroweak corrections are included and because photon-initiated processes can become significant at high energies. The treatment of the photon PDF in a global analysis was first performed by the MRST collaboration [1]. Since then, both the NNPDF and CTEQ collaborations have introduced photon PDFs [2, 3], along with PDF evolution at leading order (LO) in QED and next-to-leading order (NLO) or NNLO in QCD. The MRST2004QED set contains photon PDFs with a parametrization based on radiation off "primordial" up and down quarks, with the photon radiation cut off at low scales. Two choices of the cutoff are given: a set with the cutoff at the current quark masses, labeled MRST0, and a set with the cutoff at the constituent quark masses, labeled MRST1 [1]. The NNPDF2.3QED set uses a more general photon parametrization, which was then constrained by W, Z and Drell-Yan data at the LHC [2]. This was recently updated in the NNPDF3.0QED set [4]. The CT14QED sets also use the radiative ansatz, but for the "inelastic" component of the photon PDF only, with the inelastic photon momentum fraction at the initial scale left as a free parameter. Data on isolated photon production in electron-proton deep inelastic scattering (DIS), measured by the ZEUS Collaboration [5], were used to constrain the inelastic initial photon momentum fraction to be less than 0.14% at the 90% confidence level (CL) and less than 0.11% at the 68% CL [3]. In the same article, the CTEQ-TEA group also presented the CT14QEDinc sets, which describe the inclusive photon PDF in the proton, given at the initial scale Q_0, as the sum of the (inelastic) CT14QED photon plus the "elastic" photon contribution [6]. The elastic contribution to the photon PDF, in which the initial proton remains intact, was obtained from the Equivalent Photon Approximation (EPA) [7]. Since the CT14QEDinc PDFs were obtained from fitting to ZEUS data, the photon PDFs are best known for the parton momentum fraction x ranging from 10^-4 to around 0.4. Recently, a new determination of the photon PDF, LUXqed, was obtained from the lepton-proton structure functions [8]. This approach has the potential to greatly reduce the uncertainties in the determination of the photon PDF, though the uncertainty in the LUXqed photon PDF is likely to be underestimated due to the lack of a global fit to experimental data.
With the large amounts of data to be collected at the LHC, photon-initiated processes will become increasingly important. For instance, a precise determination of the quartic couplings of photons and W bosons can be obtained through the analysis of W-pair production through photon fusion. This has been shown to be the most precise channel to measure these couplings [9, 10], with the possibility of measurements that are several orders of magnitude more precise than the limits found at the Tevatron [11] and LEP [12-18]. For all of these uses, a good understanding of the initial photon PDF is vital.
In this paper we consider the CMS studies of exclusive two-photon production of W-boson pairs [19], and show how the di-lepton cross-check analysis can be used to constrain the photon PDF. We compare predictions from the various photon PDFs against each other and against the CMS data analysis, after invoking a simple model to separate the various photon-photon initiated scattering contributions. We find that the predictions from the various PDF sets are in good agreement with the CMS data under the assumption that the double-dissociative contribution is negligible. After comparing the photon PDFs of CT14QEDinc, LUXqed, MRST2004QED, NNPDF2.3QED and NNPDF3.0QED through the photon-photon luminosity at the LHC with a 13 TeV center-of-mass energy, we demonstrate how the result of the CMS data analysis strongly constrains the most commonly used NNPDF photon PDFs. Consequently, many studies in the literature that used the NNPDF2.3QED photon PDF, which predicted large photon-initiated contributions at the LHC (with large uncertainties due to the photon PDFs), should see reduced photon-initiated contributions. As an example, we show that the predicted high-mass Drell-Yan pair production cross sections at the LHC are reduced by more than one order of magnitude in the multi-TeV region if the NNPDF photon PDFs are reweighted to include the impact of the CMS data. This is followed by our summary.
Recently, the CMS experiment at the LHC has performed measurements of the W-boson pair production process (pp → p(*) W+W− p(*)) at √s = 7 TeV [20] and at √s = 8 TeV [19], and used these to put constraints on anomalous quartic gauge couplings. In these measurements they selected for photon-photon fusion events, including elastic events, where both protons remain intact, and inelastic (quasi-exclusive or "proton dissociative") events, in which one or both protons dissociate. This selection was attained by requiring no additional associated charged tracks beyond the oppositely charged muon (µ) and electron (e) pairs (µ± e∓), which identified the W-boson pairs, in the central rapidity region (less than about 2.5 in magnitude). In order to predict the expected rate of pp → p(*) W+W− p(*), they used the much-higher-statistics sample of ℓ+ℓ− events (away from the Z-peak and in the same mass-squared range, with ℓ = µ or e) to extract an effective photon-photon luminosity. This was obtained by taking the ratio of the observed ℓ+ℓ− events with no additional associated charged tracks to the number predicted from purely elastic scattering (after subtracting possible quark-initiated contamination, estimated from Z-peak events). The effective photon-photon luminosity determined from this data-driven approach was then used to predict the total cross section.
Since these predicted cross sections use their respective extracted photon-photon luminosities, they include both elastic and proton-dissociative contributions. Therefore, they can be used to constrain the photon PDFs if we make some assumptions about the fraction of dissociative events that pass the no-additional-charged-tracks cut. For this comparison, we calculate the total cross section for W-pair production [1] via the photon-photon fusion process γγ → W+W−, with the proper W-boson decay branching ratios included, at leading order in the electroweak interaction. The factorization scale is chosen to be the invariant mass (√ŝ) of the W-boson pair, unless specified. Using CT14QEDinc PDFs for the inclusive photon and the EPA for the elastic photon, we separate the prediction into elastic, single-dissociative, and double-dissociative events. To take into account the cut on additional charged tracks, we use a crude approximation based on the fact that double-dissociative events are the most likely to produce additional tracks in the central detector due to hadronic rescattering [21]. We assume that the elastic and single-dissociative events all pass the cut, while the double-dissociative events are reduced by a factor f, which we vary between 0 and 1. Namely, we compare the effective photon-photon luminosity extracted from the CMS di-muon data with the following theory calculation:
σ_theory(f) = σ_elastic + σ_single-dissociative + f σ_double-dissociative.
Here, σ_elastic is calculated using EPA photon PDFs from both colliding protons; σ_single-dissociative is obtained by using one EPA photon PDF and one inelastic photon PDF; while σ_double-dissociative is calculated using inelastic photon PDFs from both colliding protons. For the inelastic photon PDF we use the difference between an inclusive photon PDF, such as the CT14QEDinc photon PDF, and the EPA photon PDF. We note that the CT14QEDinc PDF includes both elastic and inelastic contributions to the photon PDF, and can be well approximated by the linear sum of the elastic component from the EPA and the inelastic component from CT14QED at any given scale Q, as illustrated in Figs. 1(a) and 1(b). This observation was used in the original analysis to constrain the CT14QED and CT14QEDinc photon PDFs [3] from the ZEUS data, and it also agrees with the conclusion made in Ref. [6]. Furthermore, Fig. 2 shows that the EPA photon contribution to the proton momentum (p_γ) becomes essentially constant at scales Q above the initial scale Q_0 = 1.3 GeV. For example, at Q = 10 GeV, the (elastic) EPA photon contributes about 0.15% of the proton momentum, and the (inelastic) CT14QED photon contributes about 0.11% and 0.22% of the proton momentum, respectively, for the two PDF sets labelled by their initial inelastic photon momentum fractions as [CT14QED 0%] and [CT14QED 0.11%].
[Footnote 1: We emphasize that, although we are using the W+W− cross section for the comparison, it is in fact the effective photon-photon luminosity extracted from the CMS di-muon data that constrains the photon PDFs.]
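As an illustration, the cut model described above can be sketched in a few lines of code. The numerical cross-section values below are placeholders for illustration only, not the numbers entering our analysis.

```python
def sigma_theory(sigma_elastic, sigma_single, sigma_double, f):
    """Predicted cross section passing the no-extra-tracks cut.

    Elastic and single-dissociative events are assumed to survive the
    cut fully; double-dissociative events survive with fraction f.
    """
    return sigma_elastic + sigma_single + f * sigma_double

# Hypothetical component cross sections (arbitrary units), scanning f:
for f in (0.0, 0.5, 1.0):
    print(f, sigma_theory(0.40, 0.25, 0.15, f))
```

Varying f between 0 and 1 then interpolates between the "only elastic plus single-dissociative events survive" and "all events survive" limits of the model.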
Using this approximation we can calculate the predicted cross section as a function of f and compare with the CMS result. In Fig. 3 we show the predicted cross sections for f = 0 and f = 1, using the CT14QEDinc PDFs, as a function of the initial inelastic photon momentum fraction (p_γ0), compared with the √s = 8 TeV prediction from the CMS analysis. It clearly shows that the CMS result is consistent with a fraction f much less than 1. Assuming f ≈ 0, the 8 TeV CMS prediction favors small values of p_γ0 ≈ 0.04%, with p_γ0 ≤ 0.11% at the 68% confidence level (CL). For comparison, we note that this result is consistent with the constraint p_γ0 ≤ 0.14% at the 90% CL, derived from the isolated photon production rate in the DIS process measured by the ZEUS Collaboration [3].
We can also calculate the same cross section using other photon PDFs (assumed to be inclusive) in the same manner, as a function of f. In Fig. 4 we compare the CMS result with predictions from the CT14QEDinc, LUXqed, MRST2004QED, NNPDF2.3QED and NNPDF3.0QED photon PDF sets. In all cases, the f = 0 assumption is in good agreement with the CMS data. In addition, we can see that, while all PDF sets are consistent with the data for f = 0, the uncertainty due to the photon PDF increases as we change from LUXqed to CT14QEDinc, MRST, and finally to NNPDF, which predicts the largest uncertainty. This originates from the different methods used by the different groups to extract the photon PDFs. LUXqed derived its photon PDF from the proton electromagnetic form factors, obtained partly from data and partly from theory calculations using PDF4LHC15 PDFs; CT14QED was fit to the ZEUS isolated photon production data, in which the photon-initiated process contributes at leading order; MRST2004QED modeled the photon PDF without fitting to data, using two different scale choices to estimate the uncertainty; while NNPDF2.3QED and NNPDF3.0QED were fit to inclusive Drell-Yan pair data, which are dominated by the much larger quark-antiquark initiated processes. In other words, the NNPDF photon PDF fits were dominated by the error in the measurement of the Drell-Yan pair production rate, which explains the quite large uncertainty in its Monte Carlo replica sets.
To facilitate the comparison of theory predictions of various production rates induced by the photon-photon fusion process at the LHC, we compute the photon-photon parton luminosity for each of the PDF sets, defined as:
dL_γγ/dτ = ∫ dy f_γ/p(x_1, M) f_γ/p(x_2, M),
where y = (1/2) ln(x_1/x_2), τ = x_1 x_2 = M²/s, M is the invariant mass of the photon pair, and x_1 = √τ e^y and x_2 = √τ e^{-y} are the momentum fractions of the photons from each proton. This is shown in Fig. 5 for the LHC at 13 TeV collider energy in the high-invariant-mass region. In the high-invariant-mass region, above approximately 1 TeV, the central NNPDF2.3QED and NNPDF3.0QED luminosities greatly exceed those of the other PDFs. This can be traced to the large uncertainty in the photon PDF determination at large x, as well as the extra freedom in the NNPDF photon PDF parametrization, resulting in larger NNPDF photon PDFs at large x. Here, we can see that the LUXqed luminosity prediction is enveloped by the CT14QEDinc estimated uncertainty, which in turn is enveloped by the MRST uncertainty, while all of these predictions lie within the NNPDF error bands.
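The luminosity integral can be evaluated numerically for any photon PDF. The sketch below uses a purely hypothetical toy shape for f_γ(x) (the 0.01(1-x)^4/x form is an illustrative placeholder, not any of the fitted PDFs discussed here) and ignores scale dependence.

```python
import math

def lumi_gamma_gamma(f_gamma, M, s):
    """dL_gammagamma/dtau at tau = M^2/s, evaluated as an integral over
    the rapidity y of the photon pair (midpoint rule).

    f_gamma(x) is a photon PDF in x; scale dependence is ignored here.
    """
    tau = M**2 / s
    ymax = 0.5 * math.log(1.0 / tau)   # enforces x1, x2 <= 1
    n = 2000
    dy = 2.0 * ymax / n
    total = 0.0
    for i in range(n):
        y = -ymax + (i + 0.5) * dy
        x1 = math.sqrt(tau) * math.exp(y)
        x2 = math.sqrt(tau) * math.exp(-y)
        total += f_gamma(x1) * f_gamma(x2) * dy
    return total

# Toy photon PDF (illustrative shape only):
toy = lambda x: 0.01 * (1.0 - x) ** 4 / x

s = 13000.0 ** 2   # GeV^2, LHC at 13 TeV
print(lumi_gamma_gamma(toy, 1500.0, s))
```

For any reasonable falling photon PDF, the luminosity computed this way drops steeply with the invariant mass M, which is why the large-x behavior of the photon PDF dominates the high-mass comparison.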
Next, we examine the impact of the CMS data on the CT14QEDinc, NNPDF2.3QED, and NNPDF3.0QED photon PDFs. We adopt the PDF Bayesian reweighting technique to study its effect. The idea of reweighting PDFs was originally proposed by Giele and Keller in [22], and later discussed by the NNPDF collaboration [23, 24]. Ref. [25] gave a detailed comparison of these two reweighting methods and favored the original procedure of [22]. (In the case of including only one new data point, as in the present study, the two methods coincide.) The reweighting technique assigns a weight to each of the replica sets, strongly suppressing those whose theory predictions are in poor agreement with the new (CMS) data. The weights are derived from the chi-square (χ²) values of the comparison between the new data and the theory prediction from each of the PDF replicas. The central value of any observable is the weighted average of the values extracted from each of the PDF replicas, and its PDF error is given by the weighted root-mean-square (RMS) of those values [22]. While NNPDF2.3QED and NNPDF3.0QED are already in the form of Monte Carlo replicas, we must first construct Monte Carlo replicas from the two CT14QEDinc photon PDFs, [CT14QEDinc 0%] and [CT14QEDinc 0.11%], which represent the two error PDFs along the negative and positive directions of the photon error PDF eigenvector in the Hessian method [26]. For that, we use the public code MCGEN [27], which implements the method described in Ref. [28], to generate the CT14QEDinc replicas for this study.
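A minimal sketch of the reweighting step, in the likelihood form of [22] (which, as noted above, coincides with the NNPDF prescription for a single new data point). The χ² values and replica predictions below are hypothetical numbers chosen only to illustrate the mechanics.

```python
import math

def reweight(chi2_list):
    """Likelihood weights w_k proportional to exp(-chi2_k / 2),
    normalized to sum to 1."""
    raw = [math.exp(-c / 2.0) for c in chi2_list]
    norm = sum(raw)
    return [r / norm for r in raw]

def weighted_mean_rms(values, weights):
    """Central value (weighted average) and PDF error (weighted RMS)
    of an observable over the replicas."""
    mean = sum(w * v for w, v in zip(weights, values))
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, math.sqrt(var)

# Hypothetical example: four replicas, their chi^2 against one new data
# point, and their predictions for some observable (arbitrary units).
chi2 = [0.2, 1.0, 9.0, 25.0]
preds = [1.0, 1.1, 2.0, 3.5]
weights = reweight(chi2)
# Replicas in poor agreement with the data (large chi^2) are strongly
# suppressed, pulling the central value toward the well-fitting replicas.
mean, err = weighted_mean_rms(preds, weights)
```

In this toy example the two replicas with large χ² receive negligible weight, so both the central value and the weighted RMS are driven by the two replicas that agree with the data, mirroring the error-band reduction discussed below.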
The results of including the CMS data to reweight the different photon PDF replicas are shown in Figs. 6(a) and 6(b), where we calculate the relative uncertainties in the distribution of the lepton-pair invariant mass in the high-mass region. As expected, the PDF uncertainties for this distribution are reduced for both the CT14QEDinc and NNPDF photon PDF sets after including the 8 TeV CMS data. In particular, the CMS data have a very large effect in reducing the errors due to the NNPDF photon PDFs. For example, at 2 TeV and 3 TeV, the relative errors (∆σ/σ) in the NNPDF3.0QED predictions are reduced from 240% and 380%, respectively, to about 40%, while the average values of the cross sections (σ) are reduced by about a factor of 2 after including the 8 TeV CMS data. In contrast, the reduction in ∆σ/σ in the CT14QEDinc prediction is mild, from about 25% to 15%, while the average predicted σ is almost unchanged.
In summary, we have shown that the "effective" photon-photon luminosity, obtained by the CMS collaboration from analyzing the exclusive two-photon production of W+W− pairs at the LHC, can impose a strong constraint on the photon PDFs, particularly on the most commonly used NNPDF2.3QED and NNPDF3.0QED photon PDFs. This will have a great impact on present and future LHC experimental analyses. Many previous analyses that were based on the NNPDF2.3QED or NNPDF3.0QED photon PDFs and found a large contribution from photon-induced processes will need to be reexamined. For example, it was pointed out in Ref. [29] that the largest source of uncertainty in predicting the W±H production rate, which is important for measuring the coupling of the Higgs boson to W bosons, is due to photon-induced contributions. This conclusion needs to be reexamined in light of our finding that the NNPDF photon PDFs overestimate the photon contribution to, as well as the uncertainty in, the calculation of processes such as W±H, lepton-pair or vector-boson-pair production at the LHC. Likewise, it will also modify earlier conclusions about the potential of the LHC and future hadron colliders to search for new-physics effects induced by photon-initiated processes, e.g., Ref. [30].

FIG. 5: Photon-photon luminosity predicted by various photon PDFs for an invariant mass of 1.5 TeV to 4.5 TeV, at the LHC with 13 TeV collider energy. The lower error curves of the NNPDF2.3QED and NNPDF3.0QED predictions are below the x-axis of this plot.