Quantifying (dis)agreement between direct detection experiments in a halo-independent way

We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.


Introduction
The interpretation of results from dark matter (DM) direct detection experiments is significantly affected by uncertainties in the local DM density and velocity distribution [1][2][3][4][5][6][7][8][9][10][11][12]. While the assumed value for the local DM density only affects the determination of the overall DM scattering cross section, the assumed halo velocity distribution f(v) enters in the predicted event rate in a complicated way and changes the shape of the predicted recoil spectrum. Indeed, a nuclear recoil of energy E_R can originate from any DM particle which has a speed greater than a minimum value v_min(E_R), and experiments are then sensitive to the velocity integral \( g(v_\mathrm{min}) = \int_{v > v_\mathrm{min}} f(v)/v \, \mathrm{d}^3 v \).
In a recent publication [13], the authors of the present work proposed a new way to treat halo uncertainties, which is based on determining the velocity integral that best describes a given set of data. In that work it was shown that such a method can be applied to the task of determining DM parameters from the clear detection of a DM signal across multiple future experiments without requiring any assumptions about the underlying velocity distribution. In the present work we demonstrate that a similar method can also be used to study less robust detections, with smaller statistics and with experiments that may appear to be in disagreement. Our goal here will be to determine, in a quantitative and halo independent way, to what extent such a set of data points towards a consistent DM signal.
Previous such halo independent analyses have appeared in the literature, but our approach improves upon them in two key respects: • True halo independence. By virtue of using the method from [13] to find the best-fit halo for a given set of data and DM parameters, our method yields results which are manifestly independent of any halo assumptions. The halo velocity integral is written as a sum of step functions with step heights optimized to fit the data. The number of steps is then taken to approach infinity, yielding the function g(v min ) which matches the data as well as possible. This approach is similar to the one proposed in [14] for the qualitative analysis of individual direct detection experiments, but should be contrasted with the quantitative global analyses in [3,15,16] that make use of more restricted parameterizations of the halo and with the methods proposed in [8,[17][18][19][20][21][22][23] for the analysis of future data with high statistics. We will also discuss an extension to the method from [13], which allows for adding observational results on the Galactic escape velocity to the determination of the best-fit halo.
• Quantitative global results for goodness of fit. Since our method considers all experiments simultaneously when constructing the best-fit halo and determining DM model parameters, we are able to perform a global, quantitative analysis of how well the best-fit point describes the full set of data. Previous halo-independent methods have typically only been able to compare experiments in pairs for fixed DM parameters [24][25][26] (see also [27][28][29][30][31][32][33]). The basic idea of these previous approaches was to take experiments seeing a signal as giving measurements of the velocity integral g(v min ), while experimental null results were taken to provide upper limits on this function. If the measured values were seen to violate the upper limits anywhere in velocity space, the results were then deemed incompatible.
Such analyses have several disadvantages. In particular, as they only consider constraints from one experiment at a time, and at one point in velocity space at a time, they do not yield quantitative statements about the net amount of agreement or disagreement in the data. Even the direct comparison of two experiments is only possible if the two experiments probe the same region of v min -space [24,34]. Moreover, since for given DM parameters no aggregate test statistic is computed, it also follows that it is not possible for these methods to quantitatively compare different choices of DM parameters or to determine an allowed DM parameter region. Our approach, on the other hand, allows us to use all the information available in v min -space in order to not only find the best-fit values for the DM parameters, but also to determine how well the proposed model actually fits the data.
In addition to the halo optimization procedure of [13], our approach depends on three essential ingredients. The first is the construction of a likelihood function for each direct detection experiment, including those which report an excess as well as those which report null results. Depending on the available experimental information, we use either a binned likelihood function as in [35] or an unbinned likelihood function [36]. The advantage over using statistical tests like the 'maximum gap method' [37] is that we can multiply the individual likelihood functions in order to perform a global analysis and determine the best-fit model parameters for a given set of events. Similar constructions have been made in Ref. [15] in the context of Bayesian analyses of DM direct detection experiments and in Ref. [38] using a global χ 2 function.
In contrast to a χ² test statistic, the numerical value of the likelihood function at the best-fit point does not reveal how well the model is actually able to describe the data, because only likelihood ratios between different points in parameter space are invariant under changes of variables in the experimentally observed quantities. Nevertheless, for ambiguous experimental hints and possible tension between different results it is of crucial importance to determine the goodness of fit of the best-fit point. We cannot simply perform a χ² analysis in the present context, because we generally do not obtain sufficiently well-populated bins of data from experiments observing a small number of events. The second additional ingredient for our approach is thus an alternative test statistic, based on likelihood ratios, which does contain the desired goodness-of-fit information. This statistic is obtained by dividing the global maximum likelihood by the product of the maximum likelihoods obtainable for each individual experiment on its own. Intuitively, this statistic measures how badly the experiments have to compromise in order to satisfy each other by moving to the global best-fit point. Following [39], we will refer to this quantity as the "parameter goodness of fit statistic" (PGS), with larger values of the statistic indicating larger disagreement. By considering both the PGS and the likelihood ratio between the best-fit point and the background-only hypothesis we can quantify the preference for a DM signal in a given set of experimental data.
The final ingredient in our approach is to determine the probability distribution functions for our test statistics, in particular the probability that the PGS would have been observed to take a value as large as the measured one. When the PGS is applied to data with high statistics, it can be shown analytically that this quantity follows a χ 2 distribution with an appropriate number of degrees of freedom [39]. In the present case, however, we do not have such an analytic formula because of the small number of observed signal events, and thus the distribution of PGS values must be extracted from Monte Carlo (MC) simulations. It is thus fortunate that our halo optimization procedure from [13] has the additional virtue of being numerically highly efficient, and is thus well suited to being implemented in MC simulations.
The structure of this paper is as follows. In Sec. 2 we briefly review the basic formalism of DM direct detection and introduce various ways to construct likelihood functions for direct detection experiments. In this context, we also review the halo-independent method that we employ. Sec. 3 then deals with the combination of the individual likelihood functions into a global likelihood function, as well as possible test statistics that can be constructed from likelihood ratios. We also motivate the need for MC simulations to extract the distribution of these test statistics. We then present two specific examples in Secs. 4 and 5, where we apply our method to recent results from direct detection experiments for different model assumptions. Finally, we discuss our findings in Sec. 6. Two appendices provide additional details on the experiments under consideration and the implementation of the MC simulations.

The basic method
In this section we provide the relevant formulas for the analysis of direct detection experiments and review the halo-independent method that we use. Moreover, we discuss how to construct a global likelihood function suitable for analyzing experimental hints of an emerging DM signal.
The minimum velocity that a DM particle must have in order to induce a nuclear recoil with energy E_R is given by

\[ v_\mathrm{min}(E_R) = \sqrt{\frac{m_N E_R}{2 \mu^2}} \, , \]

where µ = m_χ m_N/(m_χ + m_N) is the reduced mass of the DM-nucleus system and m_χ and m_N are the DM mass and the mass of the target nucleus, respectively. We can then define the velocity integral

\[ g(v_\mathrm{min}) = \int_{v > v_\mathrm{min}} \frac{f(v)}{v} \, \mathrm{d}^3 v \, , \]

where f(v) is the local DM velocity distribution in the lab frame. Following [24,25], we also introduce the rescaled velocity integral

\[ \tilde{g}(v_\mathrm{min}) = \frac{\rho \, \sigma_p}{m_\chi} \, g(v_\mathrm{min}) \, , \]

where σ_p is the DM-proton scattering cross section and ρ is the local DM density. Assuming elastic spin-independent scattering, we can then write the differential event rate with respect to recoil energy in a very simple form:

\[ \frac{\mathrm{d}R}{\mathrm{d}E_R} = \frac{C_T^2(A, Z)}{2 \, \mu_{n\chi}^2} \, F^2(E_R) \, \tilde{g}(v_\mathrm{min}(E_R)) \, , \]

where F(E_R) is the appropriate nuclear form factor and µ_nχ is the reduced DM-nucleon mass. For the present study, we adopt the Helm form factor from [40]; in particular, we neglect the small difference between the proton and neutron form factors discussed in [41]. The factor

\[ C_T(A, Z) = Z + (A - Z) \, f_n/f_p \, , \]

where A and Z are the mass and charge numbers of the target nucleus, parameterizes the effective coupling to the entire target nucleus in terms of f_n/f_p, the ratio of DM-neutron to DM-proton couplings. If for a given isotope the coupling ratio satisfies f_n/f_p ∼ −Z/(A − Z), the effective coupling becomes very small due to destructive interference and scattering off this isotope is strongly suppressed [42][43][44][45].
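The kinematic relation above is easy to evaluate numerically. The following is a minimal sketch with our own function names and unit choices (masses in GeV, recoil energies in keV); it is an illustration, not code from the paper.

```python
import math

def reduced_mass(m1, m2):
    """Reduced mass of a two-body system (same units as the inputs)."""
    return m1 * m2 / (m1 + m2)

def v_min(E_R_keV, m_chi_GeV, m_N_GeV):
    """Minimum DM speed (km/s) needed to produce an elastic nuclear recoil
    of energy E_R: v_min = sqrt(m_N * E_R / (2 * mu^2)).

    E_R is converted from keV to GeV; the dimensionless result (in units
    of c) is converted to km/s with c = 299792.458 km/s.
    """
    c = 299792.458  # speed of light in km/s
    mu = reduced_mass(m_chi_GeV, m_N_GeV)
    E_R_GeV = E_R_keV * 1e-6
    return c * math.sqrt(m_N_GeV * E_R_GeV / (2.0 * mu ** 2))
```

For instance, for a 7 GeV DM particle scattering off silicon (m_N ≈ 26.1 GeV), a 7 keV recoil requires a speed of roughly 500 km/s, which illustrates why light DM probes the high-velocity tail of the halo.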
In order to analyze direct detection experiments without making any ansatz for the DM velocity distribution f(v), we follow the approach outlined in [13], i.e. we parameterize the rescaled velocity integral g̃(v_min) in a fully general way and then find the form that best describes a given set of data. For this purpose, we divide the region of v_min-space under consideration into N_s intervals of the form [v_j, v_{j+1}] and take the rescaled velocity integral to be constant (g̃(v_min) = g_j) within each interval. The fact that the velocity integral is both positive and monotonically decreasing leads to the requirement 0 ≤ g_j ≤ g_{j−1} for all j.
In order to calculate predicted event rates for a given direct detection experiment we need to know the probability p(E_D; E_R) that a DM interaction with true (i.e. physical) recoil energy E_R leads to a signal with detected energy E_D. In terms of this function, and using our ansatz for g̃(v_min), we find

\[ \frac{\mathrm{d}R}{\mathrm{d}E_D} = \int_0^\infty p(E_D; E_R) \, \frac{\mathrm{d}R}{\mathrm{d}E_R} \, \mathrm{d}E_R = \sum_{j=1}^{N_s} g_j \int_{E_j}^{E_{j+1}} p(E_D; E_R) \, \frac{C_T^2(A,Z)}{2 \mu_{n\chi}^2} \, F^2(E_R) \, \mathrm{d}E_R \equiv \sum_{j=1}^{N_s} g_j \, D_j(E_D) \, , \]

where E_j is defined implicitly by v_j = v_min(E_j) and we have introduced N_s functions D_j(E_D) in the last step. If we are interested in the number of events R_i predicted in a bin of the form [E_i, E_{i+1}], we simply need to multiply dR/dE_D with the total exposure κ and integrate over E_D:

\[ R_i = \kappa \int_{E_i}^{E_{i+1}} \frac{\mathrm{d}R}{\mathrm{d}E_D} \, \mathrm{d}E_D \equiv \sum_{j=1}^{N_s} D_{ij} \, g_j \, . \]

In general, the function p(E_D; E_R) depends on the detection efficiency and the detector resolution. In the case where the detection efficiency ε(E_R) depends only on E_R and the fluctuations can be approximated by a Gaussian distribution with standard deviation given by ∆E(E_R), we can write

\[ p(E_D; E_R) = \frac{\epsilon(E_R)}{\sqrt{2\pi} \, \Delta E(E_R)} \exp\left( -\frac{(E_D - E_R)^2}{2 \, \Delta E(E_R)^2} \right) \, . \]

In this case, we can directly perform the integration over E_D and obtain

\[ D_{ij} = \kappa \int_{E_j}^{E_{j+1}} \frac{C_T^2(A,Z)}{2 \mu_{n\chi}^2} \, F^2(E_R) \, \frac{\epsilon(E_R)}{2} \left[ \mathrm{erf}\!\left( \frac{E_{i+1} - E_R}{\sqrt{2} \, \Delta E(E_R)} \right) - \mathrm{erf}\!\left( \frac{E_i - E_R}{\sqrt{2} \, \Delta E(E_R)} \right) \right] \mathrm{d}E_R \, , \]

where erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt. The functions D_j(E_D) and the constants D_ij can easily be calculated numerically and depend only on the experimental details and the assumed DM properties, but are independent of astrophysics. The signal predictions then depend linearly on the constants g_j, making it easy to handle a large number of free parameters. In the following, we will always take N_s = 50, and we have checked that a larger number of steps leads to only negligible improvements.
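The linear structure R_i = Σ_j D_ij g_j can be sketched numerically. The toy implementation below is our own construction under simplifying assumptions (F² = 1, unit efficiency, all prefactors and the exposure set to one); it builds D_ij from the Gaussian resolution model via the error function and then shows that the prediction is linear in the step heights.

```python
import math

def bin_response(E_lo, E_hi, E_R, dE):
    """Probability that a true recoil energy E_R is detected inside
    [E_lo, E_hi], for Gaussian resolution of width dE and unit efficiency."""
    s = math.sqrt(2.0) * dE
    return 0.5 * (math.erf((E_hi - E_R) / s) - math.erf((E_lo - E_R) / s))

def D_matrix(det_bins, step_edges, dE, n_quad=200):
    """Toy D_ij: response of detected-energy bin i to velocity step j.
    Integrates the Gaussian bin response over each true-energy interval
    with a simple midpoint rule (all physics prefactors set to one)."""
    D = []
    for (E_lo, E_hi) in det_bins:
        row = []
        for j in range(len(step_edges) - 1):
            a, b = step_edges[j], step_edges[j + 1]
            h = (b - a) / n_quad
            row.append(sum(bin_response(E_lo, E_hi, a + (k + 0.5) * h, dE)
                           for k in range(n_quad)) * h)
        D.append(row)
    return D

def predicted_events(D, g):
    """Signal prediction is linear in the step heights: R_i = sum_j D_ij g_j."""
    return [sum(Dij * gj for Dij, gj in zip(row, g)) for row in D]
```

Because the D_ij are computed once per choice of DM parameters, scanning over halo parameters g_j only requires cheap matrix-vector products, which is the key to the numerical efficiency claimed in the text.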
We now want to determine the constants g j that best describe given data from direct detection experiments. In contrast to the analysis performed in [13], we want to focus on current and near-future direct detection experiments. For this purpose, the χ 2 test statistic introduced in [13] is not appropriate, since it would place undue emphasis on bins where the number of predicted events is small, resulting in significant dependence on the choice of binning. Moreover, in many cases it is desirable to use a method that requires no binning at all, as was recently suggested by [14].
To study experiments with very small numbers of observed events we introduce a likelihood function of the form

\[ \mathcal{L} = \prod_\alpha \mathcal{L}^{(\alpha)} \, , \]

where α represents the different experiments under consideration. Depending on the available experimental information, we will either use a binned or an unbinned likelihood function for the individual likelihoods L^(α). The binned likelihood function for an experiment with n_α bins is given by

\[ -2 \log \mathcal{L}^{(\alpha)} = 2 \sum_{i=1}^{n_\alpha} \left[ R_i^{(\alpha)} + B_i^{(\alpha)} - N_i^{(\alpha)} + N_i^{(\alpha)} \log \frac{N_i^{(\alpha)}}{R_i^{(\alpha)} + B_i^{(\alpha)}} \right] \, , \]

where R_i^(α), B_i^(α) and N_i^(α) are the expected signal, expected background and observed number of events in the i-th bin, respectively. For N_i^(α) = 0, the last term is taken to be zero.
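A minimal sketch of this binned Poisson likelihood (with constant terms dropped, as in the expression above; the function name is ours):

```python
import math

def neg2_logL_binned(signal, background, observed):
    """-2 log L for Poisson-distributed bins, dropping N-dependent constants:
    2 * sum_i [ R_i + B_i - N_i + N_i * log(N_i / (R_i + B_i)) ],
    with the logarithmic term set to zero for empty bins (N_i = 0)."""
    total = 0.0
    for R, B, N in zip(signal, background, observed):
        mu = R + B
        total += mu - N
        if N > 0:
            total += N * math.log(N / mu)
    return 2.0 * total
```

This form vanishes when every bin prediction exactly matches the observed count, so it behaves like a χ² in the high-statistics limit while remaining valid for sparsely populated bins.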
The unbinned likelihood for an experiment with N_α observed events at energies E_i^(α) is given by

\[ \mathcal{L}^{(\alpha)} = e^{-(R^{(\alpha)} + B^{(\alpha)})} \prod_{i=1}^{N_\alpha} \left[ \frac{\mathrm{d}R^{(\alpha)}}{\mathrm{d}E_D} + \frac{\mathrm{d}B^{(\alpha)}}{\mathrm{d}E_D} \right]_{E_D = E_i^{(\alpha)}} \, , \]

where dR^(α)/dE_D and dB^(α)/dE_D are the predicted signal and background distributions, respectively, and R^(α) + B^(α) is the total number of predicted events. Note that this function is not invariant under a change of units for the differential event rate. The absolute value of the unbinned likelihood function therefore carries no physical significance, and only (dimensionless) likelihood ratios can be studied in a sensible way.
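The extended unbinned likelihood can likewise be sketched. Here the differential rates are passed in pre-evaluated at the observed event energies, which is an interface assumption of this toy version, not the paper's implementation:

```python
import math

def neg2_logL_unbinned(R_tot, B_tot, signal_rate_vals, bkg_rate_vals):
    """-2 log L for the extended unbinned likelihood
    L = exp(-(R + B)) * prod_i [ dR/dE(E_i) + dB/dE(E_i) ].
    signal_rate_vals / bkg_rate_vals are the differential signal and
    background rates evaluated at the observed event energies."""
    logL = -(R_tot + B_tot)
    for s, b in zip(signal_rate_vals, bkg_rate_vals):
        logL += math.log(s + b)
    return -2.0 * logL
```

As the text notes, the absolute value depends on the units of the differential rate; only differences of this quantity between hypotheses (i.e. likelihood ratios) are meaningful.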
Once we have constructed the individual likelihoods, we determine the step heights g_j that minimize −2 log L, while at the same time satisfying the monotonicity requirement 0 ≤ g_j ≤ g_{j−1}. It was pointed out in [13] that this minimization is very simple for a χ² test statistic, because any local minimum is automatically a global minimum. This property remains true when we replace the χ² function by −2 log L, making the minimization very fast even for a large number of parameters. We can therefore quickly determine the best possible form for the velocity integral given a set of data and DM model parameters.
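The constrained fit can be illustrated with a toy version of the optimization. We replace the likelihood by a simple least-squares objective (an assumption of this sketch, not the paper's statistic) and enforce 0 ≤ g_j ≤ g_{j−1} by construction, parameterizing the step heights through non-negative decrements:

```python
def heights_from_decrements(d):
    """Monotone step heights as suffix sums: g_j = d_j + d_{j+1} + ... ,
    so d_k >= 0 automatically gives 0 <= g_j <= g_{j-1}."""
    g, acc = [], 0.0
    for dk in reversed(d):
        acc += dk
        g.append(acc)
    return g[::-1]

def fit_step_heights(D, N_obs, n_iter=10000, lr=0.01):
    """Toy monotone fit: minimize sum_i (sum_j D_ij g_j - N_i)^2 over
    decrements d >= 0 with projected gradient descent. The quadratic
    objective stands in for -2 log L; the parameterization trick is
    the point being illustrated."""
    n_bins, n_steps = len(D), len(D[0])
    d = [0.0] * n_steps
    for _ in range(n_iter):
        g = heights_from_decrements(d)
        resid = [sum(D[i][j] * g[j] for j in range(n_steps)) - N_obs[i]
                 for i in range(n_bins)]
        grad_g = [2.0 * sum(resid[i] * D[i][j] for i in range(n_bins))
                  for j in range(n_steps)]
        for k in range(n_steps):
            # gradient w.r.t. d_k collects all steps j <= k; clip at zero
            d[k] = max(0.0, d[k] - lr * sum(grad_g[: k + 1]))
    return heights_from_decrements(d)
```

Because every feasible monotone g corresponds to some d ≥ 0, the shape constraint turns into a simple box constraint on a convex problem, which is one way to see why local minima are not an issue here.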

Studying the global likelihood function
The global likelihood function constructed in the previous section depends on the DM parameters (such as m_χ and f_n/f_p) and the parameters g = (g_1, ..., g_{N_s}) that characterize the velocity integral.¹ By finding the best-fit velocity integral, we obtain the profile likelihood function

\[ \tilde{\mathcal{L}}(m_\chi, f_n/f_p) = \max_{\mathbf{g}} \, \mathcal{L}(m_\chi, f_n/f_p, \mathbf{g}) \, . \]

In the practical examples below, we will introduce additional nuisance parameters relating to background uncertainties and the cut-off velocity v_max. In these cases, the profile likelihood function is obtained by maximizing the likelihood with respect to both the halo parameters g and the additional nuisance parameters.² In other words, L̃ always depends only on the assumed particle physics parameters of DM.
It is now straightforward to find the DM parameters that maximize the profile likelihood function. We refer to these parameters as m̂_χ and f̂_n/f_p and call the maximum value of the profile likelihood function L_max. Nevertheless, as long as the presence of a DM signal in the existing data is not established, this result in itself has very little meaning, since one would always obtain a best-fit point, even for highly incompatible data sets. Rather than inferring best-fit parameter regions, we therefore want to address the following two questions: 1) How strongly is the best-fit DM interpretation of the data favored over the background-only hypothesis? 2) How well does the global best-fit model describe the data from the individual experiments?
To answer the first question, we can calculate the likelihood for the background-only hypothesis, L_bkg, by setting g̃(v_min) = 0 (and maximizing the likelihood with respect to all other nuisance parameters). We then define the likelihood ratio for the signal and background hypotheses:

\[ q_\mathrm{bkg} = -2 \log \frac{\mathcal{L}_\mathrm{bkg}}{\mathcal{L}_\mathrm{max}} \, . \]

Note that, by construction, q_bkg ≥ 0.³ In order to quantify the p-value of the background-only hypothesis, we now need to quantify how likely it is to obtain a value of q_bkg as large as the one implied by the data simply from random fluctuations of the expected backgrounds. Naively, q_bkg should follow a χ²-distribution with number of degrees of freedom equal to the number of parameters fitted in order to obtain L_max. In the present case, however, there are two additional complications: First of all, the notion of free parameters is rather unclear, since most of the parameters g_j used to describe the velocity integral are not actually fitted but simply set to one of their boundary values (i.e. either g_j = g_{j−1} or g_j = 0) [13,14]. Moreover, even if we kept the velocity integral fixed, q_bkg would not necessarily follow a χ²-distribution for experiments with a small number of expected events. This complication can clearly be seen in the MC simulations of q_bkg performed by the CDMS-II collaboration [46].

¹ Note that we have absorbed the DM scattering cross section into the definition of g̃(v_min). The likelihood function hence depends on σ_p only implicitly via the halo parameters g.
² In the presence of additional nuisance parameters it is no longer necessarily true that the likelihood function has a unique minimum. In practice, this problem is addressed by first calculating a profile likelihood function by maximizing with respect to the halo parameters and then determining the optimum values for the nuisance parameters.
³ The background-only hypothesis with g̃(v_min) = 0 is contained in the set of models which are optimized over to find L_max. This is why q_bkg is necessarily non-negative.
In order to determine the probability distribution function of q_bkg we therefore need to run MC simulations of the background distributions, i.e. we simulate the possible results of repeating the experiments under consideration many times in the absence of a DM signal. For each individual realization, we need to repeat the procedure outlined above, i.e. we first find L̃ and then L_max by determining the velocity integral and the DM parameters that best describe each simulated data set. The p-value of the background-only hypothesis is then defined as

\[ p_\mathrm{bkg} = P(q_\mathrm{bkg} \geq \hat{q}_\mathrm{bkg}) \, , \]

where q̂_bkg is the actually observed value of q_bkg in the published data.
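Extracting the p-value from MC amounts to counting pseudo-experiments whose test statistic is at least as extreme as the observed one. A generic sketch (the `simulate_q` callable is a hypothetical stand-in for the full pipeline of generating a pseudo-data set and refitting it):

```python
import random

def mc_p_value(q_observed, simulate_q, n_sims=1000, seed=0):
    """Monte Carlo p-value: the fraction of pseudo-experiments whose
    test statistic is at least as large as the observed value.
    `simulate_q` takes a random.Random instance and returns one
    simulated value of the test statistic."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_sims) if simulate_q(rng) >= q_observed)
    return hits / n_sims
```

In practice the expensive part is inside `simulate_q`, where each pseudo-data set must be refitted; this is why the numerical efficiency of the halo optimization matters so much here.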
Even if there is a strong preference for an additional signal contribution on top of the known backgrounds, this observation is not sufficient to conclude that the proposed best-fit DM model actually gives a good description of the data. For example, it is possible that (for the DM parameters under consideration) an excess in one experiment is in significant tension with the exclusion limit from another experiment. In this case, the global best-fit point will be a compromise between the two results, unable to fully explain the observed excess in the first experiment while still predicting an uncomfortably large number of events in the second.
To study the amount of tension between individual experiments and their overall agreement with the model predictions, we introduce another test statistic in analogy to the parameter goodness-of-fit method discussed in [39] (see also [38]). For a given set of data we define L^(α)_max as the maximum likelihood for the experiment α, obtained by varying both the halo parameters and the particle physics properties of DM.⁴ We then define

\[ \mathcal{L}_\mathrm{PG} = \prod_\alpha \mathcal{L}^{(\alpha)}_\mathrm{max} \, . \]

Note that, in contrast to our definition of L_max, we allow different parameters for each experiment, i.e. we optimize the DM parameters and the velocity integral separately for each experiment. We can now define our second test statistic

\[ q_\mathrm{PG} = -2 \log \frac{\mathcal{L}_\mathrm{max}}{\mathcal{L}_\mathrm{PG}} \, , \]

which by construction satisfies q_PG ≥ 0. Intuitively, q_PG quantifies how much the individual experiments have to compromise in order to find a consistent description of all available data. If the individual experiments are in good agreement, we would expect q_PG to be rather small, since similar parameters can fit all experiments simultaneously. A very large value of q_PG, on the other hand, would point towards strong disagreement between the individual experiments, because the respective best-fit parameters strongly disagree. To extract more quantitative statements about the probability distribution of q_PG we again need to rely on MC simulations, but this time we want to generate events under the assumption of the best-fit DM model (rather than under the background-only hypothesis as above). We define the p-value of the best-fit point as

\[ p_\mathrm{PG} = P(q_\mathrm{PG} \geq \hat{q}_\mathrm{PG}) \, , \]

where, as before, q̂_PG denotes the actually observed value of q_PG. We can generalize our definitions of q_bkg and q_PG to apply not only at the best-fit point, but for any set of DM parameters:

\[ q_\mathrm{bkg}(m_\chi, f_n/f_p) = -2 \log \frac{\mathcal{L}_\mathrm{bkg}}{\tilde{\mathcal{L}}(m_\chi, f_n/f_p)} \, , \qquad q_\mathrm{PG}(m_\chi, f_n/f_p) = -2 \log \frac{\tilde{\mathcal{L}}(m_\chi, f_n/f_p)}{\mathcal{L}_\mathrm{PG}} \, . \]

Our previous definitions then correspond to q_x = q_x(m̂_χ, f̂_n/f_p). Note that the set of DM parameters which maximizes q_bkg also minimizes q_PG.
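In terms of log-likelihoods the PGS statistic is a one-liner; the sketch below uses our own function names:

```python
def q_pg(logL_global_max, logL_indiv_max):
    """Parameter goodness-of-fit statistic:
    q_PG = -2 * ( log L_max - sum_alpha log L^(alpha)_max ).
    Non-negative by construction, because each experiment fitted on its
    own can only do at least as well as at the shared global best fit."""
    return -2.0 * (logL_global_max - sum(logL_indiv_max))
```

For example, if the global fit gives log L_max = −10 while the two experiments fitted separately give −4 and −5, the experiments lose two units of −2 log L by being forced to agree, i.e. q_PG = 2.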
In the following two sections, we will make our discussion more explicit for a specific set of experimental results and different DM models. We emphasize that we study the currently available experimental data simply to illustrate our method. The aim of these examples is not to advertise a particular DM interpretation of any recently observed excesses. In fact, we will show that, under certain assumptions, a DM interpretation is entirely inconsistent with the data for any DM velocity distribution. Our objective is therefore to demonstrate how our method can be applied to future, more reliable data sets.

Example 1: CDMS-Si without isospin dependence
To give an example for the method introduced above and to determine the test statistics q bkg and q PG as well as their distributions in a specific case, we now turn to the analysis of several recent results from direct detection experiments. Our aim is to compare the excess claimed by CDMS-Si [47] with the recent bounds from LUX [48] and SuperCDMS [49] for different values of the DM mass. In the subsequent section, we will allow the neutron-proton coupling ratio to vary, while here we set f n /f p = 1.
To study CDMS-Si, we follow the approach of Refs. [14, 25, 46] and use the extended maximum likelihood method from Eq. (2.11). For LUX and SuperCDMS we have insufficient information on the background distribution and therefore use the binned maximum likelihood method for these two experiments. Details on the assumed detector properties and background distributions are provided in Appendix A. Before combining the likelihood functions for a global analysis, we validate our approach by confirming that we recover the standard results from the literature using the individual likelihood functions.
The left panel of Fig. 1 shows the exclusion bounds (for LUX and SuperCDMS) and the CDMS-Si best-fit region in the m_χ-σ_p parameter plane under the assumption of the Standard Halo Model for the DM velocity distribution, which is a truncated Maxwell-Boltzmann distribution with velocity dispersion σ_dis = √(3/2) × 220 km s⁻¹ and escape velocity v_esc = 540 km s⁻¹ in the Galactic rest frame. Our results agree well with the published bounds.
For the right panel of Fig. 1 we analyze the three experiments in v_min-space, following the method first introduced in [24]. The basic idea of this method is that an experimental null result gives an upper bound on the rescaled velocity integral as a function of v_min. To obtain the bound at v_min = v̄_min, one can simply use the standard techniques for setting an exclusion limit, but with

\[ \tilde{g}(v_\mathrm{min}) = \tilde{g}(\bar{v}_\mathrm{min}) \, \theta(\bar{v}_\mathrm{min} - v_\mathrm{min}) \, , \]

which is the velocity integral that gives the smallest DM signal consistent with monotonicity for a given g̃(v̄_min). The bounds we obtain for LUX and SuperCDMS agree well with the ones shown in [50].
An observed excess, on the other hand, can be used to deduce information on g̃(v_min) by binning both the observed events and the background expectation and then inverting Eq. (2.4) (see [29]). For a bin width of 3 keV one obtains the two data points shown in blue. To avoid the need for binning, it was suggested in [14] to directly show the rescaled velocity integral that maximizes the unbinned likelihood function for CDMS-Si. Our best-fit velocity integral is indicated by the cyan line. This result agrees well with the one found in [14].
It is important to note that we obtain the best-fit velocity integral for CDMS-Si in a way that is rather different from the one proposed in [14]. It was shown in [14] that the unbinned likelihood is maximized by taking the velocity integral to be a sum of n steps, where n is smaller than or equal to the number of observed events. The best-fit velocity integral can therefore be found by simply maximizing the likelihood with respect to the height and position of each of these steps. In our approach we allow for a much larger number of steps, but find agreement with the results from [14] in that the best-fit velocity integral consists of only a very small number of steps with non-zero step heights. Nevertheless, our approach allows us to determine the endpoints of these non-trivial steps in a numerically very efficient way (varying step endpoints directly requires changing the endpoints of integration in equation (2.5) and is therefore numerically taxing). This will become important when we combine the information from several experiments (as well as for the case of upcoming experiments with more than a few observed events).
Having validated the individual likelihood functions, we can now combine all the information to construct a global likelihood function and determine L_max. We will first focus on the case of isoscalar DM with f_p = f_n, turning to different coupling ratios in the next section. For this assumption, the only free DM parameter is the DM mass m_χ.
As expected from Fig. 1, for isoscalar DM the constraints from LUX and SuperCDMS very strongly disfavor the best-fit interpretation of CDMS-Si. Fig. 2 shows the best-fit velocity integral obtained from the combined likelihood of all three experiments for m_χ = 7 GeV. As expected, this velocity integral respects the bounds on the velocity integral obtained from the individual likelihood functions. We observe that the combined best fit is about two orders of magnitude smaller than the best fit obtained for CDMS-Si alone. Although there is a significant preference for the best-fit point over the background-only hypothesis (q_bkg = 5.0), the obvious tension between the individual experiments is reflected in the fact that we obtain q_PG = 12.7. In other words, the proposed model does not give a good fit to the observed data.
Let us now discuss whether a different choice of m_χ could give a better fit to the data and thereby increase q_bkg and decrease q_PG. Indeed, since CDMS-Si has the lightest target element of the experiments under consideration, the disagreement between the three experiments can be reduced by making m_χ smaller and smaller. This reduces the overlap of the regions in v_min-space probed by the experiments, and allows for reducing tension by an appropriate choice of g̃(v_min). Although doing so indeed leads to an increasing likelihood for the experimental data, there is a big price to pay: Moving to smaller masses pushes the best-fit velocity integral to be non-zero at higher and higher values of v_min. There are, however, strong observational constraints on the Galactic escape velocity, and the benefit of going to lower masses is significantly outweighed by the resulting tension of the best-fit velocity integral with astrophysical bounds.
To check for a sensible DM interpretation, we clearly want to restrict ourselves to velocity distributions with reasonable escape velocities. To impose the observational constraints, we therefore introduce an additional term in our likelihood function:

\[ -2 \, \Delta \log \mathcal{L} = \frac{(v_\mathrm{max} - v_\mathrm{esc} - v_E)^2}{\Delta v^2} \quad \text{for} \quad v_\mathrm{max} > v_\mathrm{esc} + v_E \, , \]

where v_max is the largest velocity v_j for which g_j ≠ 0 and we take v_esc = 540 km/s, v_E = 225 km/s and Δv = √(Δv²_esc + Δv²_E) ≈ 42 km/s in accordance with [51] and [52]. This extra term penalizes velocity integrals with exceedingly large velocities, treating the measured value of v_esc + v_E as approximately following a normal distribution. Indeed, with this additional term it becomes unfavorable to reduce the DM mass significantly below 6 GeV (see Fig. 3), because the two events in CDMS-Si at higher energies can no longer be explained by DM with a reasonable escape velocity. The best-fit point is found for m_χ = 5.7 GeV and gives q_bkg = 5.1 and q_PG = 12.6.⁶ Clearly, significant tension remains even for the optimum choice of m_χ. To determine how large this tension actually is, we now run MC simulations of the background distribution (to determine the p-value of the background-only hypothesis) as well as MC simulations of the best-fit DM model (to determine the p-value of the DM interpretation). Our results are shown in Fig. 4.
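The escape-velocity penalty can be sketched as a one-sided Gaussian contribution to −2 log L. The one-sidedness is our reading of the text (only exceedingly large cut-off velocities are penalized); the default values come from the numbers quoted above.

```python
def escape_velocity_penalty(v_max, v_esc=540.0, v_E=225.0, dv=42.0):
    """One-sided Gaussian penalty added to -2 log L (a sketch of the
    reconstructed penalty term): halo cut-offs below the measured
    v_esc + v_E (in km/s) cost nothing, larger ones are disfavored."""
    excess = v_max - (v_esc + v_E)
    return (excess / dv) ** 2 if excess > 0 else 0.0
```

A cut-off one standard deviation above v_esc + v_E = 765 km/s thus adds one unit to −2 log L, which is what makes very light DM (requiring v_max ∼ 1430 km/s) so strongly disfavored.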
First of all, we notice that for the background-only model (background+DM model) there is a probability of 71% (55%) to find q_bkg = 0 (q_PG = 0). This can happen, for example, when the MC gives a case with no events observed in both LUX and CDMS-Si, implying q_bkg = q_PG = 0.⁷ However, we also obtain q_bkg ≈ 0 for cases where there are only high-energy events in CDMS-Si, which are firmly excluded by the bound from LUX. Similarly, if there are only events in LUX, we typically find q_PG ≈ 0, since neither SuperCDMS nor CDMS-Si can constrain such an excess.
We furthermore observe from the left panel of Fig. 4 that for x > 0 the probability to have q_bkg > x falls off roughly exponentially with increasing x, with some discrete features arising from the binning of the data in LUX. Based on 5700 data sets, we determine the p-value of the background-only hypothesis for the actually observed events (q̂_bkg = 5.1) to be (1.46 ± 0.16)%. Given the excess of events observed by CDMS-Si, it is unsurprising that the hypothesis of only known backgrounds can be rejected with a probability of almost 98.5%. The fact that this number is smaller than the value quoted by the CDMS-II collaboration (99.8%) reflects the difficulty of interpreting the observed excess in terms of DM when taking the exclusion bounds from LUX and SuperCDMS into account.
The crucial question therefore is whether the data can be well described by the best-fit DM model. Since we found q̂_PG = 12.6 ≫ q̂_bkg, we expect to find a very small p-value for the best-fit point. Indeed, based on 2500 data sets, we find that the probability to have q_PG > q̂_PG is only (0.44 ± 0.13)%. In other words, if the best-fit DM model were the true model of nature, there should be less tension between the individual experiments than observed with a probability of 99.6%.

⁵ [...] when determining the best-fit velocity integral numerically, and then repeat the optimization for different values of v_max. This procedure is necessary because there are typically several local minima in the full {g_j, v_max} halo parameter space. Recall that this does not occur when only the g_j step parameters are used.
⁶ With no constraint on v_esc, and for e.g. m_χ = 3 GeV, we find q_bkg = 6.2 and q_PG = 11.5, though very large velocities with v_max ∼ 1430 km/s are required.
⁷ Note that, since we do not make any assumption on the background in SuperCDMS, no positive evidence for DM can come from this experiment.
In summary, we find that the known-background-only hypothesis is disfavored with 98.5% probability. Nevertheless, the individual experimental results exclude each other with a probability of 99.6%. We would like to emphasize again that in order to derive these results, we have made no assumptions on the DM velocity distribution apart from imposing an observational constraint on the escape velocity. We therefore confirm the conclusion about the incompatibility of LUX, SuperCDMS and CDMS-Si obtained for the Standard Halo Model in a halo-independent way.

Example 2: CDMS-Si with isospin-dependent couplings
It is well known that the constraints from SuperCDMS and LUX can be weakened by considering non-isoscalar DM [42][43][44][45]. In particular, f n /f p = −0.7 strongly suppresses the bounds from LUX, while f n /f p ∼ −0.8 strongly suppresses the bound from SuperCDMS [53]. We illustrate these two possibilities in Fig. 5 using the mapping into v min -space introduced in Sec. 4. Indeed, this figure demonstrates one of the key problems of the conventional presentation of these results: It is impossible to tell by looking at these plots which choice of f n /f p actually leads to the better agreement between the three experiments. Similarly, one might wonder whether even better agreement can be obtained for some intermediate value of f n /f p . By using the global likelihood function we are now in a position to answer these questions.
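The v min-space mapping used in Fig. 5 follows from standard elastic-scattering kinematics, v min = √(m N E R /2)/μ, with μ the DM-nucleus reduced mass. A small illustrative sketch (the mass numbers and the atomic-mass-unit approximation are our own inputs here):

```python
import math

def v_min(E_R_keV, m_chi_GeV, A):
    """Minimum DM speed (km/s) required to produce an elastic nuclear recoil
    of energy E_R on a nucleus of mass number A:
    v_min = sqrt(m_N * E_R / 2) / mu, with mu the DM-nucleus reduced mass."""
    c = 299792.458            # speed of light in km/s
    m_N = 0.9315 * A          # nuclear mass in GeV (atomic mass unit approx.)
    mu = m_chi_GeV * m_N / (m_chi_GeV + m_N)
    return c * math.sqrt(m_N * E_R_keV * 1e-6 / 2.0) / mu

# e.g. a 10 keV silicon recoil (A = 28) for a 7 GeV DM particle
v = v_min(10.0, 7.0, 28)
```

For light DM the reduced mass is small, so even moderate recoil energies probe the high-velocity tail of the halo, which is why the escape-velocity constraint matters so much in this regime.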
The left panel of Fig. 6 shows the value of q bkg as a function of f n /f p for m χ = 7 GeV. This figure allows us to compare directly the two cases shown in Fig. 5. We observe that there is a slight preference for the suppression of LUX (f n /f p = −0.7) compared to the suppression of SuperCDMS (f n /f p = −0.8). In the right panel we scan over both f n /f p and m χ simultaneously. As in Sec. 4, we have introduced an extra term in the likelihood function to disfavor very small masses and unacceptably large escape velocities. We find the global minimum at m χ = 6.2 GeV and f n /f p = −0.71. A second (almost degenerate) minimum is found at m χ = 6.3 GeV and f n /f p = −0.79. For these two best-fit points we find q bkg = 12.9, q PG = 4.8 and q bkg = 12.6, q PG = 5.1 respectively.
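The location of these suppression points can be understood from the coherent scattering weight [f p Z + f n (A − Z)]², which vanishes for f n /f p = −Z/(A − Z). A single-isotope sketch (real targets require an average over isotopic abundances, which shifts the exact cancellation points slightly):

```python
def suppression_ratio(fn_over_fp, Z, A):
    """Coherent scattering weight [Z + (fn/fp)(A-Z)]^2 relative to the
    isoscalar case fn = fp, for a single isotope (Z protons, A-Z neutrons)."""
    return (Z + fn_over_fp * (A - Z)) ** 2 / A ** 2

def max_suppression(Z, A):
    """Coupling ratio at which the single-isotope cross section vanishes."""
    return -Z / (A - Z)

# Dominant isotopes: xenon (Z=54, A~131) and germanium (Z=32, A~73)
xe = max_suppression(54, 131)   # close to -0.70
ge = max_suppression(32, 73)    # close to -0.78
```

For the dominant isotopes this indeed gives f n /f p ≈ −0.70 for xenon and ≈ −0.78 for germanium, in line with the values quoted above for suppressing LUX and SuperCDMS, respectively.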
As above, we begin by determining the p-value for the background-only hypothesis. The cumulative distribution function for q bkg based on 5700 data sets is shown in Fig. 7. First of all, we observe that the probability to have no preference for a DM signal at all is only 49% (compared to 71% for the case of f n = f p ). The reason is that, by varying f n /f p , it is now possible to find a consistent DM interpretation for a larger number of upward fluctuations. The observed value of q bkg in this case is significantly larger than in the case without isospin dependence. Consequently, we find a p-value for the background-only hypothesis of p(background) = (0.070 ± 0.035)%. In other words, the background-only hypothesis is disfavored with a probability of about 99.93%. 8 Nevertheless, as we have seen above, it is not sufficient to simply reject the assumption of known backgrounds only. We also have to demonstrate that the proposed DM model provides a good fit to the data, in the sense that the tension between the individual experiments is no larger than what is expected from random fluctuations of the data. We have therefore run MC simulations for both minima found in Fig. 6 and determined the value of q PG for each data set. The results based on 2000 MC samples are shown in Fig. 8.
We observe that, in contrast to the case of f n = f p , the observed values of q PG now lie well within the central regions of the distributions. Indeed, we find that, for the global minimum, a value of q PG as large as (or larger than) the observed value q̂ PG = 4.8 will occur with a probability of (18.7 ± 1.0)%. For the second local minimum, the p-value is still (18.5 ± 1.0)%. In other words, a consistent DM model would produce at least as much "tension" between the different experiments in about 20% of the cases.

Discussion
In this work, we have presented a new quantitative method for analyzing an emerging DM signal from direct detection experiments in a way that is independent of uncertainties in the halo velocity distribution. This is the first such method in the literature which makes no assumptions about the form of the velocity distribution and yet still allows precise conclusions to be drawn about the significance of the signal in the data, as well as the compatibility of the results of multiple experiments. Our method makes use of two test statistics: q bkg , which characterizes the goodness of fit relative to the background-only hypothesis, and q PG , which evaluates the compatibility of the full set of data.
We have concentrated on analyzing the best-fit point in DM parameter space (as defined by q PG or equivalently by q bkg ). Where the evidence for a DM signal is found to be significant, one might also be interested in computing confidence intervals in that parameter space. This is feasible with our method as well, though it would likely be numerically time-consuming. The reason is that, for experiments seeing few events, we cannot determine the distribution of q PG independently of the DM parameters; rather, our method currently requires MC simulations to determine the p-values for q bkg and q PG for each set of model parameters.
In practice, we do not expect the probability distribution for the PGS to exhibit a strong dependence on the DM parameters. For example, for the two different sets of parameters considered in Fig. 8 we obtain almost identical distributions. Nevertheless, it is conceivable that one might find a set of DM parameters that gives a comparably high goodness-of-fit even though it does not maximise the likelihood function. To study these possibilities in detail, it would be highly desirable to have - at least approximately - an analytical understanding of the distribution functions.
In this context we note that for all the cases considered in this paper, the distribution of q bkg has a faster falling tail than the distribution of q PG . In other words, for comparable values of q bkg and q PG , the latter typically has the larger p-value. This observation is related to the fact that we study q bkg under the assumption of only background events and q PG using the best-fit DM model. 9 We can therefore conclude that a certain amount of information can already be extracted without the need for MC simulations: If for a given best-fit DM model, we find q PG < q bkg , we can conclude that the tension between the individual experiments is smaller than the tension with the background-only hypothesis. In other words, in such a case the probability is higher for the observed data to have arisen from fluctuations of the DM model than from fluctuations of the background. If, on the other hand, q PG is significantly larger than q bkg , we can conclude that the proposed DM interpretation is not a good explanation for the observed data and that indeed background fluctuations could better account for the experimental results. Only if q bkg ∼ q PG , or if a more quantitative statement is required, will MC simulations become necessary.
Finally, we would like to point out an interesting result concerning the overall DM-nucleon scattering cross section. For the entire discussion so far, we have absorbed the DM-proton scattering cross section into the definition of the rescaled velocity integral

g̃(v min ) = (ρ σ p /m χ ) ∫ v>v min f(v)/v d³v ,

which is the quantity we have optimized during our analysis. This has enabled us to vary the overall normalization of g̃(v min ) arbitrarily, since σ p is a priori unknown. However, producing appropriately large values for g̃(v min ) may require correspondingly large values of σ p , which might be possible to test via other types of experiments, such as collider searches [54, 55]. Indeed, since f(v) is normalized to unity, we have

∫ 0 ^∞ g̃(v min ) dv min = ρ σ p /m χ .

Now, direct detection experiments only allow us to determine g̃(v min ) in a certain range of velocities [v low , v high ]. In other words, we cannot determine the fraction of the local DM density that has a velocity below v low or above v high . Nevertheless, we can use the monotonicity and positivity of the velocity integral to obtain a lower bound on σ p as desired:

ρ σ p /m χ = ∫ 0 ^∞ g̃(v min ) dv min ≥ v low g̃(v low ) + ∫ v low ^v high g̃(v min ) dv min ≡ G̃ ,

where the best-fit value of G̃ may be calculated in our method, and we then have

σ p ≥ m χ G̃/ρ .

For the best-fit velocity integral obtained in section 5 for the CDMS-Si, SuperCDMS and LUX data, and assuming ρ ≲ 0.3 GeV cm⁻³, we find approximately σ p ≳ 1.7 × 10⁻⁴¹ cm². It is important to note that this is not a 90% confidence level lower bound on the scattering cross section, but simply the statement that the best-fit velocity integral is mathematically inconsistent with smaller values of σ p . Consequently, if complementary searches for DM can exclude scattering cross sections of this magnitude, they would also exclude the best-fit DM interpretation of the current data from direct detection experiments in a halo-independent way.
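For the step-function velocity integrals produced by our method, the bound can be evaluated with elementary arithmetic. A sketch with hypothetical step values, in units chosen so that the result comes out in cm² (ρ in GeV cm⁻³, m χ in GeV, velocities in km/s, g̃ in cm⁻¹ (km/s)⁻¹):

```python
def sigma_p_lower_bound(m_chi, rho, v_edges, g_steps):
    """Lower bound on sigma_p from a non-increasing step-function g_tilde
    defined on [v_low, v_high], using
        rho*sigma_p/m_chi >= v_low*g(v_low) + integral of g over [v_low, v_high],
    which follows from monotonicity (g >= g(v_low) below v_low) and
    positivity (g >= 0 above v_high)."""
    G = v_edges[0] * g_steps[0]              # contribution from v < v_low
    for j in range(len(g_steps)):            # step-wise integral on [v_low, v_high]
        G += g_steps[j] * (v_edges[j + 1] - v_edges[j])
    return m_chi * G / rho

# Hypothetical two-step velocity integral between 400 and 700 km/s
bound = sigma_p_lower_bound(7.0, 0.3, [400.0, 550.0, 700.0], [2e-27, 5e-28])
```

As expected from σ p ≥ m χ G̃/ρ, assuming a larger local density weakens the bound, while a heavier DM particle strengthens it.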
In this appendix we provide the experimental details needed to construct the likelihood functions that we use for our global analysis.

CDMS-Si
The CDMS-II collaboration has observed 3 events in the DM search region of their silicon detectors (CDMS-Si for short), compared to a background expectation of only 0.62 events [47]. For our analysis we assume an energy resolution of ∆E R = 0.3 keV [56] and use the detector acceptance from Ref. [47]. For the background estimate we take the normalized background distributions from Ref. [57] and rescale the individual contributions in such a way that 0.41, 0.13 and 0.08 events are expected from surface events, neutrons and 206 Pb, respectively, as in [47]. The likelihood for a given DM model is then calculated using Eq. (2.11).
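Eq. (2.11) is an extended unbinned likelihood for a small number of events. A minimal sketch (the flat spectra and the [7, 100] keV window below are placeholders for the properly convolved signal and background distributions):

```python
import math

def log_likelihood_unbinned(energies, mu_sig, f_sig, mu_bkg, f_bkg):
    """Extended unbinned log-likelihood:
    log L = -(mu_sig + mu_bkg) + sum_i log[mu_sig*f_sig(E_i) + mu_bkg*f_bkg(E_i)],
    with f_sig and f_bkg unit-normalized energy distributions."""
    ll = -(mu_sig + mu_bkg)
    for E in energies:
        ll += math.log(mu_sig * f_sig(E) + mu_bkg * f_bkg(E))
    return ll

flat = lambda E: 1.0 / 93.0        # placeholder: flat distribution on [7, 100] keV
events = [8.2, 9.5, 12.3]          # the three observed CDMS-Si event energies (keV)
ll_bkg = log_likelihood_unbinned(events, 0.0, flat, 0.62, flat)
```

In the actual analysis f_sig is the DM recoil spectrum convolved with the acceptance and resolution, and f_bkg is the sum of the three rescaled background components.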

SuperCDMS
We use the results from Ref. [49] based on an exposure of 577 kg days. We divide the search region [1.6 keV, 10 keV] into 8 evenly spaced bins and calculate the expected number of events in each bin using the detection efficiency from Ref. [49] and assuming a detector resolution of ∆E R = 0.3 keV as above. We then calculate the likelihood following Eq. (2.10). Following the analysis by the SuperCDMS collaboration, we do not make any assumption on the level of background. In any bin where the number of events predicted from DM is smaller than the number of observed events, we therefore take the background to account for the difference, such that this bin gives no contribution to log L. If the number of events predicted from DM is larger than the number of observed events, we take the background contribution to be zero. Effectively this procedure means that only bins where the predicted DM signal exceeds the number of observed events give a finite contribution to log L. This very conservative approach reproduces the published bound (see Fig. 1), but implies that we can only ever obtain an upper bound from SuperCDMS, never positive evidence for DM.
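This prescription can be written compactly as a log-likelihood ratio relative to the best achievable fit: the free, non-negative background absorbs any deficit of the DM prediction, so only bins where the DM prediction alone exceeds the observed count are penalized. A sketch with hypothetical bin contents:

```python
import math

def log_likelihood_conservative(n_obs, mu_dm):
    """Binned Poisson log-likelihood ratio with a free non-negative background:
    bins with mu_i <= n_i contribute zero (background set to n_i - mu_i);
    bins with mu_i > n_i contribute the Poisson penalty with zero background."""
    ll = 0.0
    for n, mu in zip(n_obs, mu_dm):
        if mu > n:
            ll += (n * math.log(mu / n) - (mu - n)) if n > 0 else -mu
    return ll

# Hypothetical bin contents: the second bin's deficit is absorbed as background
ll = log_likelihood_conservative([0, 2, 1], [1.5, 1.0, 3.0])
```

Every contribution is non-positive, which is the formal statement that SuperCDMS can only ever disfavor, never favor, a DM hypothesis in this treatment.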

LUX
We analyse the results from the first run of LUX, based on an exposure of 10065.4 kg days [48]. In principle, the optimal way to calculate the likelihood for LUX would be to construct an unbinned likelihood function based on the two-dimensional distributions of signal and background in the S1-log 10 (S2/S1) parameter plane. Unfortunately, not all the necessary information on these distributions is publicly available. We therefore define a more restrictive signal region by considering only events that fall below the mean of the nuclear recoil band. Nevertheless, to retain at least some information on the energy of the observed events, we divide the signal region into 8 bins that are evenly spaced in reconstructed recoil energy. In order to calculate the likelihood for LUX we therefore need two key ingredients: first, the acceptance for each bin as a function of the true nuclear recoil energy, i.e. the probability that a nuclear recoil with a given recoil energy is reconstructed with a recoil energy within a given bin and has a value of log 10 (S2/S1) below the mean of the nuclear recoil band; and second, an estimate of the background expectation in each bin. We will now explain how we extract these two quantities from the publicly available information.
To determine the detector acceptance, we need to estimate the distribution of the S1 and S2 signals as a function of the true nuclear recoil energy. For the distribution of the S1 signal, we take the relative scintillation yield L eff and the absolute yield from [58]. The average detection efficiency for a scintillation photon is 0.14 [48]. For the fluctuations we make the standard assumption [59] that for an expected S1 signal of ν the S1 distribution is given by

p(S1; E R ) = ε(S1) Σ n=1 ^∞ Gauss(S1; n, σ(n)) Pois(n; ν) , (A.1)

where Pois(n; ν) is a Poisson distribution with expectation value ν and Gauss(S1; n, σ(n)) is a normal distribution with mean n and standard deviation σ(n). Because of the similarity of LUX and XENON100, we assume σ(n) = 0.5 √n as in [59]. Finally, ε(S1) gives the detector acceptance for an S1 signal of given magnitude. This quantity has been extracted by comparing calibration data to numerical simulations [48].
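This Poisson-Gaussian mixture is straightforward to evaluate numerically. A sketch with the acceptance ε(S1) set to unity for simplicity (the real acceptance must be taken from the calibration comparison in [48]):

```python
import math

def p_S1(S1, nu, eps=lambda s: 1.0, n_max=50):
    """p(S1) = eps(S1) * sum_{n>=1} Gauss(S1; n, 0.5*sqrt(n)) * Pois(n; nu):
    Poisson-distributed photoelectron counts smeared by the PMT resolution."""
    total = 0.0
    for n in range(1, n_max + 1):
        pois = math.exp(-nu) * nu ** n / math.factorial(n)
        sigma = 0.5 * math.sqrt(n)
        gauss = math.exp(-0.5 * ((S1 - n) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += gauss * pois
    return eps(S1) * total
```

Since the n = 0 term (no detected photon) is excluded from the sum, integrating p(S1) over all S1 gives 1 − e^(−ν) rather than unity; the missing probability corresponds to events that produce no S1 signal at all.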
For the distribution of the S2 signal, we take the ionization yield for an electric field of 181 kV/m from [60]. According to [48] the average probability for an electron to be extracted into the gas phase is 0.65. We include an additional factor of 0.95 to account for the probability that an electron will recombine before it reaches the liquid-gas interface. Once the electron reaches the gas phase, it will create on average a raw S2 signal of 25.8 phe [58], out of which on average 11.1 are on the bottom PMT array, contributing to the "S2 b " signal. 10 It is the S2 b signal that is used to define the nuclear recoil band by the LUX collaboration, and its total value is then given by the number of electrons created at the interaction point n e times the amplification factor f a ≈ 6.9 = 0.65 × 0.95 × 11.1. Taking into account both fluctuations in n e and fluctuations in the amplification, the S2 b distribution is given by

p(S2 b ; E R ) = ε(S2 b ) Σ n=1 ^∞ Gauss(S2 b ; f a n, σ(f a n)) Pois(n; n e ) , (A.2)

where σ is defined as before and ε(S2 b ) is the acceptance of the S2 b signal. According to Fig. 6 in [48] this latter quantity should be very close to unity for the signals we are interested in. Since we have n e ≫ 1 for recoil energies above the threshold, we can approximate the Poisson distribution by a Gaussian and, after convoluting the two Gaussians, we obtain approximately

p(S2 b ; E R ) = Gauss(S2 b ; f a n e , σ̃(n e )) , (A.3)

where σ̃(n e ) = √(0.25 f a n e + f a ² n e ) ≈ f a √n e . We neglect anti-correlation between the S1 and S2 b signals and therefore write the combined distribution as

p(S1, S2 b ; E R ) = p(S1; E R ) p(S2 b ; E R ) . (A.4)

Convoluting this distribution with a given recoil spectrum then yields the expected S1-S2 b distribution. A simple variable transformation to lS = log 10 (S2 b /S1) then gives

p(S1, lS; E R ) = p(S1, S2 b ; E R ) × ln(10) × 10^lS × S1 . (A.5)

Having constructed the distributions as above, we can easily run a MC simulation of DM recoils in order to generate mock calibration data and validate our implementation. We find that our implementation is sufficient to reproduce the nuclear recoil band from [48] to good approximation. Similarly, we reproduce the various cut acceptances given in [48]. Using these data sets we can now extract the acceptance for each bin. The resulting acceptance functions are shown in Fig. 9.
Before we can calculate the likelihood for a given DM model, we need an estimate of the background distribution. We assume that the background is dominated by leakage from the electron recoil band and neglect the contribution from neutrons. The S1 distribution of electron recoils is given in [48]. To calculate the number of electron recoils that leak into each bin (i.e. that have a value of log 10 S2 b /S1 below the mean of the nuclear recoil band), we furthermore need to know the leakage fraction as a function of energy. This quantity was also estimated in [48]. Combining all available information, we find that the distribution of electron recoils below the nuclear recoil band can be well fitted by d bkg (E R ) = (0.002 keV −1 ) · (1 + (E R /keV)). This approximation predicts 0.03, 0.04, 0.06, 0.07, 0.09, 0.10, 0.12 and 0.13 background events in the 8 bins under consideration, in agreement with the published prediction of a total background of 0.64 events.
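The fitted leakage spectrum integrates trivially over each bin. As a numerical cross-check (the bin edges below, eight even bins spanning roughly 3-24.6 keV, are our own reconstruction from the quoted per-bin expectations, not the official LUX binning):

```python
def expected_bkg_events(E_lo, E_hi):
    """Integral of the leakage fit d_bkg(E) = 0.002/keV * (1 + E/keV)
    over one reconstructed-energy bin [E_lo, E_hi] (energies in keV)."""
    return 0.002 * ((E_hi - E_lo) + 0.5 * (E_hi ** 2 - E_lo ** 2))

# Hypothetical evenly spaced bins chosen to reproduce the quoted expectations
edges = [3.0 + 2.7 * i for i in range(9)]
counts = [expected_bkg_events(a, b) for a, b in zip(edges[:-1], edges[1:])]
total = sum(counts)   # roughly 0.64 events in the full signal region
```

With these edges the per-bin integrals round to the quoted sequence 0.03, 0.04, ..., 0.13 and sum to about 0.64 events, consistent with the published total.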

B Details on the Monte Carlo simulations
The MC simulations used to determine the distribution functions for the test statistics q bkg and q PG are based on a three-step process: The calculation of the expected event rates for a given model, the event generation and the determination of the best-fit DM parameters for each set of events. In this appendix, we will provide some additional information on each of these steps.
The calculation of the expected event rates for a given model proceeds in complete analogy to the calculation used to determine the likelihood for a given model. The only subtlety is that in order to run MC simulations of SuperCDMS, we actually need to make an assumption on the background expectation. To be consistent with the approach outlined in Appendix A, we determine the background from the actual observations. In every bin where the model prediction exceeds the number of observed events, we set the background equal to zero; in all other bins we set the background equal to the difference between observation and prediction. This approach means that the MC simulation for SuperCDMS actually depends not only on the assumed model, but also on the experimental data. Given further information on the background expectation in SuperCDMS, it would be straightforward to implement a different approach.
For SuperCDMS and LUX, where we consider only binned numbers of events, it is straightforward to generate large samples of events using Poisson distributions centred at the expected number of events. For CDMS-Si, on the other hand, we actually need to sample the full distribution function, which (for a best-fit halo with several steps) can have several complicated features. For this purpose, we calculate the expected energy spectrum by convoluting the DM recoil spectrum with the detector acceptance and resolution and then numerically integrate this spectrum to determine the cumulative distribution function (CDF). We can then apply inverse transform sampling, i.e. we use the inverse of the CDF and a uniformly distributed sample of random numbers to sample the energy spectrum in CDMS-Si.
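Inverse transform sampling from a tabulated spectrum is simple to implement. A self-contained sketch with a toy falling spectrum standing in for the convolved CDMS-Si prediction:

```python
import bisect
import math
import random

def sample_spectrum(E_grid, dN_dE, n_events, rng):
    """Inverse-transform sampling of recoil energies from a tabulated spectrum:
    build the CDF by cumulative trapezoidal integration, normalize it, then
    invert it by linear interpolation on uniform random numbers."""
    cdf = [0.0]
    for i in range(1, len(E_grid)):
        step = 0.5 * (dN_dE[i] + dN_dE[i - 1]) * (E_grid[i] - E_grid[i - 1])
        cdf.append(cdf[-1] + step)
    norm = cdf[-1]
    cdf = [c / norm for c in cdf]
    samples = []
    for _ in range(n_events):
        u = rng.random()
        j = min(max(bisect.bisect_left(cdf, u), 1), len(cdf) - 1)
        frac = (u - cdf[j - 1]) / (cdf[j] - cdf[j - 1])
        samples.append(E_grid[j - 1] + frac * (E_grid[j] - E_grid[j - 1]))
    return samples

rng = random.Random(1)
E_grid = [7.0 + 0.2 * i for i in range(466)]         # 7 to 100 keV
spec = [math.exp(-E / 10.0) for E in E_grid]         # toy falling spectrum
events = sample_spectrum(E_grid, spec, 5000, rng)
```

Because the CDF is tabulated once and then inverted by a binary search per event, this sampler remains fast even for step-function halos with intricate spectral features.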
Finally, in order to determine q bkg and q PG for each set of events, we proceed in exactly the same way as for the original data set. In particular this means that we no longer make use of the background estimate in SuperCDMS used to generate the MC sample, and instead determine the best-fit background estimate from the new set of data. Moreover, given the background distributions in LUX and CDMS-Si, we inevitably obtain samples with a number of events at relatively high recoil energies (E R > 20 keV). This fact, together with our constraint on the escape velocity, means that we can no longer focus just on the low-mass region but need to search for additional minima with large values of m χ .
Given that for each MC sample we need to scan over the DM parameters and find the best-fit DM velocity integral at every point, it is clear why an efficient implementation is absolutely crucial. Even though we can optimize the halo parameters in a few seconds, a full parameter scan can still take up to 30 minutes. In practice, such an extensive scan is not necessary for each set of events. 11 In the end, a single CPU can generate and process about 100-200 MC samples per day.