Joint cosmological and gravitational-wave population inference using dark sirens and galaxy catalogues

In the absence of numerous gravitational-wave detections with confirmed electromagnetic counterparts, the "dark siren" method has emerged as a leading technique of gravitational-wave cosmology. The method allows redshift information for such events to be inferred statistically from a catalogue of potential host galaxies. Due to selection effects, dark siren analyses necessarily depend on the mass distribution of compact objects and the evolution of their merger rate with redshift. Informative priors on these quantities will impact the inferred posterior constraints on the Hubble constant (H_0). It is thus crucial to vary these unknown distributions during an H_0 inference. This was not possible in earlier analyses due to the high computational cost, restricting them to either excluding galaxy catalogue information, or fixing the gravitational-wave population mass distribution and risking the introduction of bias into the H_0 measurement. This paper introduces a significantly enhanced version of the Python package gwcosmo, which allows joint estimation of cosmological and compact binary population parameters, thereby ensuring the analysis is robust to a major source of potential bias. The gravitational-wave events from the Third Gravitational-Wave Transient Catalogue are reanalysed with the GLADE+ galaxy catalogue, and an updated, more reliable measurement of H_0 = 69^{+12}_{−7} km s^{−1} Mpc^{−1} is found (maximum a posteriori probability and 68% highest density interval). This improved method will enable cosmological analyses with future gravitational-wave detections to make full use of the information available (both from galaxy catalogues and from the compact binary population itself), leading to promising new independent bounds on the Hubble constant.

1 Introduction

The disagreement surrounding the value of the Hubble constant (H_0) is one of the dominant unanswered questions in current cosmology. Current measurements of H_0 are in tension with each other at a level above 4σ (see, e.g. [1, 2]), with local (low-redshift) measurements typically preferring a higher value, and early-universe (high-redshift, model-dependent) measurements preferring a lower one. The cause of this tension is unknown, but potential explanations include unaccounted-for measurement errors (which are more likely to impact the local measurements), or a flaw in the current standard cosmological model, Λ-cold-dark-matter (ΛCDM) (which would impact the early-universe measurements) [3, 4]. The relationship between redshift z, luminosity distance d_L, and H_0 for a flat universe can be expressed as

d_L(z) = (1 + z) (c/H_0) ∫_0^z dz′ / √(Ω_M(1 + z′)³ + Ω_Λ).

At low redshifts this simplifies to

d_L ≈ cz/H_0.

Hence, for a local measurement of H_0 to be made, all that is required is redshift and luminosity distance information. The amplitude of a gravitational wave signal from a compact binary coalescence (CBC) directly encodes the luminosity distance to the source, requiring no external calibration. This property has led to these types of gravitational wave (GW) events being commonly termed "standard sirens". If information about the redshift of the source can be supplied, the GW data can be used to produce a measurement of H_0 which is independent of the cosmic distance ladder, and of all electromagnetic (EM) measurements of H_0 to date. Herein lies significant potential for the field of gravitational wave astronomy to resolve the long-standing Hubble tension. If a uniquely-identified EM counterpart can be associated with the GW event (making it a "bright siren"), the redshift of the GW host galaxy can be determined via spectroscopy, allowing for a direct measurement of H_0. Thus far this has only been possible for the detection of the binary neutron star GW170817 and its EM counterpart [5, 6]. The small localisation region of the event (possible because the two LIGO detectors [7] and the Virgo detector [8] were online at the time) meant that multiple telescopes were able to search the relevant area over a short time-frame, leading to the identification of the kilonova associated with the GW event, and subsequently the identification of its host galaxy. This led to the first ever measurement of H_0 from GWs [9], a remarkably informative measurement from a single event.
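The distance-redshift relations above can be sketched numerically. The following is a minimal example for a flat ΛCDM model (the parameter values H_0 = 70 and Ω_M = 0.3 are illustrative, not the paper's results), showing that at low redshift the full integral reduces to Hubble's law, d_L ≈ cz/H_0:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, H0=70.0, omega_m=0.3):
    """Luminosity distance in Mpc for a flat LambdaCDM universe:
    d_L(z) = (1+z) * (c/H0) * int_0^z dz' / E(z'),
    with E(z) = sqrt(Omega_M (1+z)^3 + Omega_Lambda)."""
    E = lambda zp: np.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
    comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (1 + z) * (C_KM_S / H0) * comoving

# At low redshift the relation reduces to Hubble's law, d_L ~ c*z/H0,
# so a measured (z, d_L) pair yields H0 directly.
z = 0.01
print(luminosity_distance(z))   # full expression
print(C_KM_S * z / 70.0)        # low-z approximation
```

At z = 0.01 the two values agree to about one per cent, which is why a nearby standard siren with a known redshift constrains H_0 so directly.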
Since then, the third observing run of Advanced LIGO, Virgo and KAGRA [10] has taken place, with observations lasting for just under a year and increasing the current catalogue of GW transients to ∼ 90 [11]. Despite improvements in the sensitivity of the LIGO and Virgo detectors [12, 13], no EM counterparts have been uniquely associated with any of the subsequent events. This distinct lack of EM counterparts has led to the blossoming of methods for supplying redshift information to these "dark sirens".
Current dark siren methods can be broadly split into two further categories:

• The galaxy catalogue method: here the redshift information is provided by a catalogue of potential host galaxies within the sky localisation region of a GW event (see [21-28]).
Each galaxy contributes to a hypothetical measurement of H_0, such that the galaxy structure within a GW event's localisation volume is reflected in the H_0 posterior it produces. How informative the individual events are will depend strongly on their localisation volumes. By combining the contributions of many events, the true value of H_0 will be measured, as other values will statistically average out.
• The population method, also known as "spectral sirens": here features in the mass distribution of the compact object population break the mass-redshift degeneracy (see [29-33]). This method can be thought of as analogous to the redshifting of emission lines in EM spectra: if the location of a feature is known in the source frame, its measured position in the detector frame, shifted by a factor of (1 + z), informs us about the redshift of the source.
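As a toy illustration of this shift (the 35 M_⊙ feature location used here is hypothetical, chosen only for the example):

```python
# Toy "spectral siren" illustration: a feature at a known source-frame mass
# appears in the detector frame shifted by (1+z), so the observed feature
# location encodes the redshift of the population.
m_source_peak = 35.0       # assumed source-frame feature location (solar masses)
z_true = 0.4
m_detector_peak = m_source_peak * (1 + z_true)  # what the detector measures

# Inverting the shift recovers the redshift:
z_inferred = m_detector_peak / m_source_peak - 1
print(z_inferred)  # 0.4
```

In a real analysis neither the feature location nor the redshift is known exactly, which is why the mass-model hyper-parameters must be inferred jointly with the cosmology.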
These two varieties of dark siren methods (as presented in [28, 32]) were applied in the most recent LIGO-Virgo-KAGRA cosmology paper [34], making use of the GW data from the third Gravitational-Wave Transient Catalogue (GWTC-3) [11]. Until recently, the two dark siren methods have always been carried out separately, due to the computational difficulty of incorporating galaxy catalogue information (which tends to be highly discrete) into a sampling method, and the computational cost of incorporating a reliable correction for galaxy catalogue incompleteness. This means that the population method has not benefited from the additional constraining power of galaxy redshifts. Conversely, the galaxy catalogue method required fixing the GW population when computing the selection function, which opens it up to potential bias if the assumed model (with fixed hyper-parameters) is incorrect. This second point is clearly more serious, as it could lead to a significant bias in the measurement of H_0 if not corrected. This paper presents a novel method for carrying out cosmological and GW population parameter estimation jointly, such that the resulting posterior on H_0 is free from bias due to GW population assumptions. This method combines the best of both dark siren techniques above: for the large numbers of binary black holes (BBHs) at high redshifts the galaxy catalogues are relatively uninformative (due to incompleteness), but these events offer significant constraining power through their mass distribution. Nearby, well-localised binary neutron stars (BNSs), neutron star-black hole pairs (NSBHs) and BBHs are relatively lower in number and hence do not strongly probe the mass distribution, but can instead access the highly informative nature of a galaxy catalogue to provide redshift information. This updated method is implemented within the Python package gwcosmo, and used to reanalyse the GWTC-3 data in combination with the GLADE+ galaxy catalogue [35], leading to a final
updated value of H_0 which utilises 47 GW events and incorporates redshift information from the black hole (BH) mass distribution, a full-sky galaxy catalogue, and the one confirmed EM counterpart to date. This version of gwcosmo, which is about 1000 times faster than its predecessor, is well suited to the challenges that lie ahead in the field of GW cosmology: the large number of gravitational wave events expected over the coming years, and the increasing depths of galaxy surveys, mean that the sheer quantity of available data will require efficient handling and analyses which can scale to large datasets. Looking to the future, beyond the H_0 tension, this version of gwcosmo can additionally be used to put constraints on models of modified gravitational-wave propagation [36].
Section 2 presents the updated methodology in a Bayesian framework, including the crucial concept of a "line-of-sight (LOS) redshift prior". Section 3 discusses the choice of GW data and galaxy catalogues, including how the GLADE+ galaxy catalogue is converted into a LOS redshift prior. Section 4 reanalyses the GWTC-3 data with the updated methodology, under the same assumptions as [34], and quantifies the impact of various assumptions, then presents a full population + galaxy catalogue reanalysis of GWTC-3, including an updated measurement of H_0 which utilises the greatest amount of GW information to date. Finally, section 5 concludes the paper and discusses the application of this method to future data.
Finally, a paper presenting an alternative method of joint cosmological and population inference, using the Icarogw package, was released while this paper was in preparation [37]. Its methodology is derived using a different approach (expressing the likelihood in terms of CBC merger rates) which is consistent with the gwcosmo methodology where equivalent assumptions are made. The way in which gravitational-wave data is handled differs between the two codes: Icarogw computes the numerical integrals involved in the evaluation of the likelihood by summing over posterior samples, while gwcosmo constructs a kernel density estimate (KDE) for different lines of sight. The Icarogw approach enables a faster analysis, at the expense of requiring that pixel sizes and galaxy redshift distributions are large enough to contain a representative number of posterior samples. The gwcosmo approach, while potentially slower, does not suffer from these resolution effects, due to the smoothing of the GW posterior. Differences aside, we find good agreement with their results when considering the same set of 42 BBHs for a full catalogue+population analysis, and when considering the highly informative dark siren GW190814 for a catalogue-only analysis.

2 Method
In general, the posterior on a set of hyper-parameters Λ, given a set of GW data {x_GW} corresponding to N_det detections, {D_GW}, is given by [38, 39]:

p(Λ|{x_GW}, {D_GW}, I) ∝ p(Λ|I) p(N_det|Λ, I) ∏_{i=1}^{N_det} p(x_GWi|D_GWi, Λ, I).    (2.1)

The parameter D_GWi is a binary parameter (taking the value 0 or 1) denoting whether the GW data of event i passed some detection threshold statistic in order to be deemed "detected". The term Λ includes the cosmological parameters of interest (nominally H_0 for this analysis) as well as a set of population hyper-parameters which describe the underlying mass distribution of compact binaries and their distribution in redshift. This expression expands to:

p(Λ|{x_GW}, {D_GW}, I) ∝ p(Λ|I) p(N_det|Λ, I) ∏_{i=1}^{N_det} [ ∫ p(x_GWi|θ, Λ, I) p(θ|Λ, I) dθ / p(D_GW|Λ, I) ],    (2.2)

where it has been written explicitly that the individual GW likelihoods need to be marginalised over the individual event parameters (such as sky location, masses, inclination, spins, redshift, etc.), grouped together and denoted by θ. GW selection effects are incorporated in the term p(D_GW|Λ, I) = ∫ p(D_GW|θ, Λ, I) p(θ|Λ, I) dθ. This is the "probability of detection" for a CBC drawn from a population described by parameters Λ, and because it applies to the whole population of CBCs (as opposed to being event-specific) it can be taken outside the product. The term I on the right-hand side of each expression is a catch-all term which holds any additional assumptions that have not been explicitly expressed.
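A minimal sketch of this kind of hierarchical likelihood, with the per-event integrals and the detection probability both estimated by Monte Carlo, is shown below. This is a one-dimensional toy (a Gaussian "population" with a hypothetical detection threshold at θ > 0), not the gwcosmo implementation; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_hyper_posterior(lam, event_samples, injections, inj_found, prior_pe):
    """Toy version of Eq. (2.2) for one hyper-parameter 'lam' (the mean of a
    unit-width Gaussian population). event_samples: per-event posterior-sample
    arrays; prior_pe: the (constant) parameter-estimation prior density;
    injections/inj_found: injected values and detection flags, used to
    Monte-Carlo the selection term p(D_GW|Lambda)."""
    def pop_pdf(theta):
        return np.exp(-0.5 * (theta - lam) ** 2) / np.sqrt(2 * np.pi)

    # Per-event marginal likelihoods: reweight posterior samples from the
    # PE prior to the population prior p(theta|Lambda).
    log_like = sum(np.log(np.mean(pop_pdf(s) / prior_pe)) for s in event_samples)

    # Selection term: probability that a draw from the population is detected.
    pdet = np.average(inj_found, weights=pop_pdf(injections))
    return log_like - len(event_samples) * np.log(pdet)

# Mock data: population mean 2.0, unit scatter; "detected" when theta > 0.
true_thetas = rng.normal(2.0, 1.0, 30)
detected = true_thetas[true_thetas > 0]
event_samples = [rng.normal(t, 0.3, 500) for t in detected]  # stand-in PE posteriors
injections = rng.uniform(-5.0, 10.0, 20000)
inj_found = (injections > 0).astype(float)
prior_pe = 1.0 / 15.0  # uniform PE prior density over [-5, 10]
```

With this setup, the hyper-posterior evaluated at the true population mean exceeds that at a badly wrong value, illustrating how the division by p(D_GW|Λ) compensates for the fact that only supra-threshold events are observed.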
The method, as written to this point, is the same between the population and the galaxy catalogue methods of computing H_0, but at this point it is worth reiterating the manner in which these analyses will differ: the prior redshift information. In the population method, the redshift prior would commonly be taken to be uniform in comoving volume (in order to model the galaxy distribution with the least informative assumptions when lacking any galaxy positional data), modified by some additional term for the evolution of the merger rate of binaries with redshift. In this scenario, the redshift information necessary for cosmological inference comes mainly from the distribution of the observed GW masses p(m^d_1, m^d_2) in the detector frame (denoted by the superscript "d"). Gravitational-wave detectors are sensitive to detector-frame mass, rather than source-frame mass. This means that the values observed in the detector frame have suffered the effect of the cosmological expansion, and are related to their value in the source frame (labelled with the superscript "s") by the relation m^s = m^d/(1 + z). If the mass distribution of compact binaries contains features, this breaks the mass-redshift degeneracy, meaning that the observed population of compact objects becomes cosmologically informative. If the attributes of these features are known then they can be strongly informative, with the caveat that assuming an incorrect mass distribution will have a strong impact on the inferred cosmology (see e.g. [32, 34]).
In order to avoid biasing the cosmological inference by assuming a mass distribution, the uncertainty in the population can be marginalised over, by carrying out multi-parameter estimation on cosmological and population hyper-parameters together. Explicitly, the p(θ|Λ, I) terms in Eq. 2.2 can be expanded as p(θ|Λ, I) = p(m^s_1, m^s_2|Λ, I) p(ϕ|Λ, I), where θ = {m^s_1, m^s_2, ϕ} and Λ is the set of cosmological and population hyper-parameters of interest, including those which describe the source-frame mass distribution prior. For the sake of compactness, we do not make this expansion for the rest of the derivation, but it is important to remember that redshift information from the mass distribution enters the analysis in this way.
In the galaxy catalogue case, instead of taking the redshift prior to be uniform in comoving volume, it is built from a galaxy catalogue, and so instead of being a smooth function of redshift it becomes something highly discrete, due to the point-like nature of galaxies. In this scenario, information from the structures observed in the mass distribution still contributes to the cosmological inference as before, though it can be a subdominant effect. In addition to galaxies being discrete, galaxy catalogues are incomplete at the redshifts to which GWs are currently detectable, meaning that an incompleteness correction must be applied to account for the fact that the host galaxy of the GW event may not be contained within the catalogue at all, due to being fainter than the flux limit of the galaxy survey. To further complicate matters, the completeness of galaxy catalogues is not uniform across the sky, due to 1) obscuration by the Milky Way and 2) different surveys covering different patches of the sky, leading to varying depths of completeness as a function of sky location, Ω. The variation of incompleteness across the sky was addressed extensively in [28], in which the sky was separated into N_pix equally-sized pixels and the completeness correction was based on the catalogue's apparent magnitude threshold within each pixel. Here, a similar approach is applied, but by deriving the rest of the method in a different order to the one presented in that paper, the analysis can be made much more computationally tractable. With this in mind, Eq.
2.2 can be rewritten to explicitly include the marginalisation over Ω as a sum over pixels:

p(Λ|{x_GW}, {D_GW}, I) ∝ p(Λ|I) p(N_det|Λ, I) ∏_{i=1}^{N_det} [ Σ_j p(Ω_j|I) ∫ p(x_GWi|θ, Ω_j, Λ, I) p(θ|Ω_j, Λ, I) dθ / p(D_GW|Λ, I) ].    (2.3)

The p(Ω_j|I) terms are recognised as being identical between same-sized pixels, so they come out the front and cancel between numerator and denominator. Now it is worth explicitly writing the marginalisation over z (with θ = {z, θ′}):

p(Λ|{x_GW}, {D_GW}, I) ∝ p(Λ|I) p(N_det|Λ, I) ∏_{i=1}^{N_det} [ Σ_j ∫∫ p(x_GWi|z, θ′, Ω_j, Λ, I) p(z|Ω_j, Λ, I) p(θ′|Λ, I) dz dθ′ / p(D_GW|Λ, I) ].    (2.4)

Here the assumption has been made that p(D_GW|Λ, I) is uniform across the sky (i.e. pixel-independent), and hence can be taken outside the sum over pixels. As p(D_GW|Λ, I) is averaged over an observing run, the rotation of the Earth blurs out much of the right ascension (RA) and declination (dec) dependence of the detectors' antenna patterns, leaving only a mild declination dependence which for the time being can be safely ignored. The remaining parameters contained within θ are independent of Ω and so lose their Ω-dependence. If the prior on the GW merger rate, R, is set ∝ 1/R, the term p(N_det|Λ, I) loses its dependence on Λ and can be ignored (see [23]), a simplification which is used throughout the rest of this paper.

The line of sight redshift prior
Now the LOS redshift prior, p(z|Ω_j, Λ, I), can be considered more closely. As in the original gwcosmo methodology, the redshift prior must include the cases where the host galaxy is (G), and is not (Ḡ), inside the galaxy catalogue, which also requires marginalising over apparent and absolute magnitude (m and M), as these determine, to leading order, which galaxies will end up inside a galaxy catalogue formed from a flux-limited EM survey.
An important distinction to make at this point is that between a galaxy and a host galaxy. So far in the derivation, the assumption that a real GW signal has been emitted has been implicit (i.e. x_GW is assumed to correspond to a real GW trigger, not a false alarm). This can be made explicit by including the parameter s (denoting the presence of a GW source), which until now has been hidden inside I on the right-hand side of every expression.
As in [28], p(z, M, m|G, Ω_i, Λ, s, I) is the prior on z, M and m for host galaxies, informed by the galaxy catalogue, within the sky area covered by pixel i. Its counterpart, p(z, M, m|Ḡ, Ω_i, Λ, s, I), is the prior on the same parameters for galaxies outside the catalogue, and so will have some associated apparent magnitude threshold (m_th). The LOS redshift prior is then the sum of the in- and out-of-catalogue contributions, marginalised over m and M:

p(z|Ω_i, Λ, s, I) = ∫∫ [ p(G|Ω_i, Λ, s, I) p(z, M, m|G, Ω_i, Λ, s, I) + p(Ḡ|Ω_i, Λ, s, I) p(z, M, m|Ḡ, Ω_i, Λ, s, I) ] dm dM.    (2.5)

The in-catalogue part
Taking a closer look at the in-catalogue part, p(z, M, m|G, Ω_i, Λ, s, I): galaxies come as (approximately) delta-function-like points in Ω and m, with some larger uncertainty on z. The absolute magnitude of a galaxy is not provided in the catalogue, and requires a cosmological assumption to compute: for galaxy j, M_j = f(z_j, m_j, Λ). Applying Bayes' theorem to the term p(z, M, m|G, Ω_i, Λ, s, I) gives:

p(z, M, m|G, Ω_i, Λ, s, I) = p(s|z, M, Λ, I) δ(M − M(z, m, Λ)) p(z, m|G, Ω_i, I) / p(s|G, Ω_i, Λ, I).    (2.6)

The value of M is fixed for given values of z, m and a cosmological model, and so its prior turns into a delta function. The term p(z, m|G, Ω_i, I) is simply the prior on z and m given the galaxies in the catalogue within pixel i. Note that this is now the prior on all galaxies, not just host galaxies, as the parameter s has been separated out. Because this information comes purely from the galaxy catalogue, the dependence on Λ is lost. The probability for a galaxy to host a GW merger retains some dependence on z and M, to allow for possible luminosity weighting or merger rates which evolve with redshift (i.e. the p(s|z, M, Λ, I) term, which allows galaxies with certain properties to be considered more likely hosts for CBCs). Now integrating p(z, M, m|G, Ω_i, Λ, s, I) over m and M leads to

p(z|G, Ω_i, Λ, s, I) = (1/p(s|G, Ω_i, Λ, I)) ∫ p(s|z, M(z, m, Λ), Λ, I) p(z, m|G, Ω_i, I) dm.    (2.7)

The delta-function relationship between M, z and m means that the integral over M can be carried out by simply replacing all occurrences of M with M(z, m, Λ). The galaxy catalogue information enters through the term p(z, m|G, Ω_i, I), which can be replaced by a sum over galaxies. The measured apparent magnitudes (in a given observation band) can be taken as delta functions, while the measured redshift of each galaxy will come with some non-negligible uncertainty, particularly in the case of photometric redshifts:

p(z, m|G, Ω_i, I) = (1/N_gal(Ω_i)) Σ_{k=1}^{N_gal(Ω_i)} δ(m − m̂_k) p(z|ẑ_k).    (2.8)

Here the hat notation is used for measured quantities. The term p(z|ẑ_k) is the posterior on the redshift of galaxy k, given the measured redshift ẑ_k. The rest of this paper will use the assumption that

p(z|ẑ_k) = G(z − ẑ_k; σ_k),    (2.9)

i.e.
that a Gaussian centred at ẑ_k with standard deviation σ_k is a reasonable approximation for the redshift posterior of galaxy k. Additionally, the assumption is made that the galaxy redshift measurements provided in the catalogue used for the analyses later in this paper (see Sec. 3.2) are posteriors. Returning to Eq. 2.7, and substituting the sum over galaxies into the integral over m, returns the following:

p(z|G, Ω_i, Λ, s, I) = (1/p(s|G, Ω_i, Λ, I)) (1/N_gal(Ω_i)) Σ_{k=1}^{N_gal(Ω_i)} p(s|z, M(z, m̂_k, Λ), Λ, I) p(z|ẑ_k).    (2.10)

This in-catalogue part of the LOS redshift prior is essentially a weighted sum of Gaussians, where the weights are some function of redshift and absolute magnitude.
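This weighted sum of Gaussians can be sketched in a few lines. Here the weights array stands in for the host-probability term p(s|z, M(z, m̂_k, Λ), Λ, I), and the galaxy values are invented for the example:

```python
import numpy as np

def gaussian(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def in_catalogue_prior(z_grid, z_hat, sigma_z, weights):
    """Sketch of Eq. (2.10): the in-catalogue LOS prior as a weighted sum of
    per-galaxy redshift posteriors G(z - z_k; sigma_k), normalised by N_gal.
    'weights' is a stand-in for the host-probability term."""
    prior = np.zeros_like(z_grid)
    for zk, sk, wk in zip(z_hat, sigma_z, weights):
        prior += wk * gaussian(z_grid, zk, sk)
    return prior / len(z_hat)  # the 1/N_gal(Omega_i) normalisation
```

With uniform unit weights, the result integrates to 1 over z, matching the normalisation property used below when combining with the out-of-catalogue part.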

The out-of-catalogue part
Now turning attention to the out-of-catalogue part, p(z, M, m|Ḡ, Ω_i, Λ, s, I), and expanding it leads to a slightly different expression:

p(z, M, m|Ḡ, Ω_i, Λ, s, I) = p(s|z, M, Λ, I) p(Ḡ|z, M, m, Ω_i, Λ, I) p(z, M, m|Ω_i, Λ, I) / [p(s|Ḡ, Ω_i, Λ, I) p(Ḡ|Ω_i, Λ, I)].    (2.11)

Here an additional step has been taken to separate out the term p(Ḡ|z, M, m, Ω_i, Λ, I), which is the probability that a galaxy with properties z, M, m is outside the galaxy catalogue; this requires modelling the EM selection effects of the particular survey. The main assumption here is that the galaxy catalogue is flux-limited, such that the probability that a galaxy is or is not contained within it depends on the galaxy's apparent magnitude, and on whether it is greater or less than the apparent magnitude threshold of the catalogue along the same line of sight, m_th(Ω_i). However, this is not the only reason galaxies may be missing from a particular catalogue. If the redshift or colour information of galaxies at high redshifts becomes particularly unreliable, it may be prudent to discard galaxies above some redshift, z_cut, from the catalogue, in which case their exclusion must be accounted for in the out-of-catalogue term. In this case it becomes

p(Ḡ|z, M, m, Ω_i, Λ, I) = Θ[m − m_th(Ω_i)] Θ[z_cut − z] + Θ[z − z_cut].    (2.12)

Here, Θ denotes a Heaviside step function. If no redshift information is discarded, i.e. z_cut → ∞, then Θ[z_cut − z] = 1 and Θ[z − z_cut] = 0, meaning that the selection effects simplify to depending only on the m_th of the catalogue.
Another term to examine more closely is p(z, M, m|Ω_i, Λ, I), which is the prior on redshift, absolute and apparent magnitude for (all) galaxies within pixel i. Because the parameter Ḡ is not present here, this term does not depend on direction-dependent EM selection effects, and so Ω_i can be dropped from the right-hand side. Due to the relationship between z, M and m, priors need only be defined for two of these parameters, as the third is specified by its relationship to the other two. For the out-of-catalogue case it makes sense to define priors for z and M, because these are reasonably well known (galaxies are broadly expected to follow a uniform-in-comoving-volume distribution in redshift when viewed on a large enough scale, and galaxy absolute magnitudes are expected to follow a Schechter function). This means that the term can be rewritten as p(z, M, m|Λ, I) = δ(m − m(z, M, Λ)) p(z, M|Λ, I). Substituting this and Eq. 2.12 into Eq. 2.11 gives

p(z, M, m|Ḡ, Ω_i, Λ, s, I) = p(s|z, M, Λ, I) [Θ[m − m_th(Ω_i)] Θ[z_cut − z] + Θ[z − z_cut]] δ(m − m(z, M, Λ)) p(z, M|Λ, I) / [p(s|Ḡ, Ω_i, Λ, I) p(Ḡ|Ω_i, Λ, I)].    (2.13)

Now integrating this term over both m and M leads to

p(z|Ḡ, Ω_i, Λ, s, I) = [ Θ[z_cut − z] ∫_{M(z, m_th(Ω_i), Λ)}^{∞} p(s|z, M, Λ, I) p(z, M|Λ, I) dM + Θ[z − z_cut] ∫_{−∞}^{∞} p(s|z, M, Λ, I) p(z, M|Λ, I) dM ] / [p(s|Ḡ, Ω_i, Λ, I) p(Ḡ|Ω_i, Λ, I)].    (2.14)

The Heaviside step function for the apparent magnitude threshold has been converted into a redshift-dependent lower integration limit over M, which depends on the apparent magnitude threshold of pixel i. The integration limits on M in general depend on H_0, because the parameters of the Schechter function are H_0-dependent, but the final distribution is insensitive to its value.
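The M-integral above can be sketched numerically under toy assumptions: the low-redshift distance relation d_L ≈ cz/H_0, illustrative Schechter parameters, and a finite faint-end cutoff M_faint standing in for the upper integration limit. None of the numerical values below come from the paper:

```python
import numpy as np
from scipy.integrate import quad

def schechter_M(M, M_star=-20.5, alpha=-1.2):
    """Schechter luminosity function in absolute magnitude (unnormalised)."""
    x = 10 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10) * x ** (alpha + 1) * np.exp(-x)

def abs_mag_threshold(z, m_th, H0=70.0):
    """Faintest absolute magnitude visible at z given the apparent-magnitude
    limit m_th, using the toy low-z distance d_L ~ c*z/H0."""
    d_L = 299792.458 * z / H0  # Mpc
    return m_th - 5 * np.log10(d_L) - 25

def out_of_catalogue_weight(z, m_th, M_faint=-10.0):
    """Sketch of the first M-integral in Eq. (2.14): the fraction of the
    luminosity function fainter than the survey limit along this sightline."""
    M_lim = abs_mag_threshold(z, m_th)
    missing, _ = quad(schechter_M, M_lim, M_faint)  # fainter than the limit
    total, _ = quad(schechter_M, -30.0, M_faint)
    return missing / total
```

As expected, the missing fraction grows with redshift for a fixed survey depth, which is exactly why the out-of-catalogue term comes to dominate the LOS prior at high z.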

The full expression for the LOS prior
Both Eq. 2.10 and Eq. 2.14 contain prefactors which are hard to compute. They take the form 1/p(s|G, Ω_i, Λ, I) for the former, and 1/[p(s|Ḡ, Ω_i, Λ, I) p(Ḡ|Ω_i, Λ, I)] for the latter. However, when substituting Eq. 2.10 and Eq. 2.14 into Eq. 2.5 it can be seen that these expressions simplify. For ease of notation, let the remaining factors of Eq. 2.10 and Eq. 2.14 be termed f(IN CAT) and f(OUT OF CAT) respectively. Then the expression becomes

p(z|Ω_i, Λ, s, I) = [ p(G|Ω_i, Λ, I) f(IN CAT) + f(OUT OF CAT) ] / p(s|Ω_i, Λ, I).    (2.15)

There is a lack of symmetry between the in- and out-of-catalogue terms: one might have expected a p(Ḡ|Ω_i, H_0, I) prefactor to the out-of-catalogue term, but this term has cancelled with one in the denominator. With further consideration, this makes sense: the sum over galaxies of the in-catalogue part by definition integrates to 1 over z (assuming uniform galaxy weights for now), while the integral over the out-of-catalogue part does not, due to the apparent magnitude threshold appearing in the integration limits.
The term 1/p(s|Ω_i, Λ, I) can be ignored as a normalisation constant. It will be the same for all pixels, assuming GW sources are isotropically distributed, and as such will eventually cancel with an identical term in the denominator once the LOS prior is substituted back into Eq. 2.4.
The one term we have not yet looked at is the probability of a galaxy in pixel i being inside the galaxy catalogue (notice that there is no s on the right-hand side, so here we are considering all galaxies, not just host ones):

p(G|Ω_i, Λ, I) = ∫∫∫ p(G|z, M, m, Ω_i, Λ, I) p(z, M, m|Λ, I) dz dM dm.    (2.16)

The probability that a galaxy is inside the catalogue can then be expanded as

p(G|z, M, m, Ω_i, Λ, I) = Θ[m_th(Ω_i) − m] Θ[z_cut − z],    (2.17)

which leads to

p(G|Ω_i, Λ, I) = ∫_0^{z_cut} ∫_{−∞}^{M(z, m_th(Ω_i), Λ)} p(z, M|Λ, I) dM dz.    (2.18)

The fact that all probability densities must be properly normalised may cause the reader to wonder whether the choice of redshift range over which this normalisation is carried out has an impact on the analysis. After all, the term p(G|Ω_i, Λ, I) will take a very different value if p(z, M|Λ, I) is normalised over a redshift range which is truncated at z_max = 2 rather than z_max = 10. However, as long as the same expression for p(z, M|Λ, I) is also used in Eq. 2.14, the normalisation comes out the front as a constant. As such, the choice of redshift range over which to normalise p(z, M|Λ, I) has no bearing on the result. Combining Eqs. 2.15, 2.10 and 2.14 leads to a full expression for the redshift prior:

p(z|Ω_i, Λ, s, I) = (1/p(s|Ω_i, Λ, I)) [ (p(G|Ω_i, Λ, I)/N_gal(Ω_i)) Σ_{k=1}^{N_gal(Ω_i)} p(s|z, M(z, m̂_k, Λ), Λ, I) p(z|ẑ_k) + Θ[z_cut − z] ∫_{M(z, m_th(Ω_i), Λ)}^{∞} p(s|z, M, Λ, I) p(z, M|Λ, I) dM + Θ[z − z_cut] ∫_{−∞}^{∞} p(s|z, M, Λ, I) p(z, M|Λ, I) dM ].    (2.19)

Looking ahead, Fig. 8 shows an example of the LOS redshift prior, demonstrating the discrete galaxies which are visible at low redshift, and how the prior converges to a uniform-in-comoving-volume distribution at higher redshifts as galaxy catalogue completeness decreases.
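The completeness term of Eq. 2.18 can be sketched in the same toy setting used above (illustrative Schechter parameters, a z² stand-in for the uniform-in-comoving-volume prior, and the low-z distance relation; none of these values are the paper's):

```python
import numpy as np
from scipy.integrate import dblquad

C = 299792.458  # km/s

def schechter_M(M, M_star=-20.5, alpha=-1.2):
    x = 10 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10) * x ** (alpha + 1) * np.exp(-x)

def p_z(z, z_max=1.0):
    """Toy uniform-in-comoving-volume prior at low z: p(z) ~ z^2."""
    return 3 * z ** 2 / z_max ** 3

def completeness(m_th, z_cut=1.0, H0=70.0, M_faint=-10.0, M_bright=-30.0):
    """Sketch of Eq. (2.18): the fraction of all galaxies (under the assumed
    z and M priors) bright enough to enter a catalogue with apparent
    magnitude limit m_th along this line of sight."""
    def M_lim(z):
        d_L = C * z / H0  # toy low-z distance
        return m_th - 5 * np.log10(d_L) - 25
    norm, _ = dblquad(lambda M, z: p_z(z) * schechter_M(M),
                      0.001, z_cut,
                      lambda z: M_bright, lambda z: M_faint)
    incat, _ = dblquad(lambda M, z: p_z(z) * schechter_M(M),
                       0.001, z_cut,
                       lambda z: M_bright,
                       lambda z: min(M_lim(z), M_faint))
    return incat / norm
```

A deeper survey (larger m_th) yields a larger completeness fraction, and the value is always between 0 and 1, consistent with its role as a probability in Eq. 2.19.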

The separability of GW mass and rate evolution hyper-parameters from the line-of-sight redshift prior
The LOS redshift prior, as presented in Eq. 2.19, retains a clear dependence on Λ throughout, where Λ is a set of cosmological and population hyper-parameters of interest, many of which we would like to jointly estimate. However, if Eq. 2.19 has to be recomputed for every newly drawn set of parameter values, this will slow things down to the point of impracticality. To address this, Λ can be broken into its constituent parts, namely Λ = {Λ_cosmo, Λ_mass, Λ_rate}. Λ_cosmo denotes the parameters of the cosmological model, namely {H_0, Ω_M, Ω_Λ}. Λ_mass denotes the parameters of the GW mass model, such as minimum and maximum mass, slope, the position of any features, etc. Λ_rate denotes the parameters of the rate-evolution model, which determines how the intrinsic rate of CBC mergers evolves with redshift (see Sec. 4 for details).
The immediate impact of substituting this definition into Eq. 2.19 is that the priors on galaxy redshift and absolute magnitude depend only on Λ_cosmo. Beyond this, some simplifying assumptions must be made. We assume that the GW mass model does not evolve with redshift (there is, as yet, no strong evidence for a mass model which evolves with redshift [40], though this may change once the data from future observing runs has been analysed [41, 42]). In this scenario, none of the terms in Eq. 2.19 depend on Λ_mass. The host galaxy weights term retains dependence on Λ_rate (and, in the case where those galaxies come from the catalogue, such that their absolute magnitude is computed from their apparent magnitude and redshift, on Λ_cosmo).
So far in Eq. 2.19, the only assumption about the relationship between GW mergers and host galaxies is that it has some (unspecified) dependence on redshift and absolute magnitude. At this point in time, the true relationship between galaxy properties and GW merger probability is not well constrained, but there are a couple of reasons for parameterising the problem this way. The usual justification is that there is likely a causal link between galaxy mass and/or star formation rate (SFR) and that galaxy's probability of hosting a GW merger. Higher mass means more matter available to be turned into compact binaries. Higher SFR means more stars forming, which means more binary systems forming and eventually turning into compact binaries. Of course, there may well be some significant time-delay between star formation and a binary merging, but this is also currently uncertain. In any case, both mass and SFR are traced by galaxy luminosity in different bands, and so estimates for both can be constructed from observable data. As we do not yet know the exact relationship between GW mergers and host galaxy properties, it is worth remaining flexible and allowing for some additional redshift-dependence in the merger rate model. As such, for the remainder of this paper, the following factorised form is taken:

p(s|z, M, Λ_rate, I) ∝ p(s|z, Λ_rate, I) p(s|M, I).    (2.20)

Substituting this into Eq. 2.19 allows the redshift-dependent rate term to be factored out:

p(z|Ω_i, Λ, s, I) ∝ p(s|z, Λ_rate, I) [ (p(G|Ω_i, Λ, I)/N_gal(Ω_i)) Σ_{k=1}^{N_gal(Ω_i)} p(s|M(z, m̂_k, Λ), I) p(z|ẑ_k) + Θ[z_cut − z] ∫_{M(z, m_th(Ω_i), Λ)}^{∞} p(s|M, I) p(z, M|Λ, I) dM + Θ[z − z_cut] ∫_{−∞}^{∞} p(s|M, I) p(z, M|Λ, I) dM ].    (2.22)

The term p(s|z, Λ_rate, I) has come out the front, meaning that the Λ_rate parameters are not needed for computing the bulk of the LOS prior, and the aim becomes simply to precompute the part of the expression within the square brackets.
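This separability can be verified numerically in a toy setting. Here the rate evolution, luminosity weighting and mock galaxies are all hypothetical; the point is only that applying the rate term after precomputing the catalogue part gives the same result as evaluating it inside the per-galaxy sum:

```python
import numpy as np

z_grid = np.linspace(0.0, 1.0, 501)

def gaussian(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def rate_term(z, gamma=2.5):
    """Hypothetical p(s|z, Lambda_rate): a simple (1+z)^gamma evolution."""
    return (1 + z) ** gamma

def lum_weight(M):
    """Hypothetical p(s|M): luminosity weighting favouring brighter hosts."""
    return 10 ** (-0.4 * M)

M_gal = np.array([-21.0, -19.5])                 # toy absolute magnitudes
z_hat = np.array([0.1, 0.4])
sig = np.array([0.01, 0.03])

# Direct evaluation: rate term applied inside the per-galaxy sum.
direct = sum(rate_term(z_grid) * lum_weight(M) * gaussian(z_grid, zk, sk)
             for M, zk, sk in zip(M_gal, z_hat, sig))

# Factorised evaluation: precompute the bracket once, apply the rate after.
bracket = sum(lum_weight(M) * gaussian(z_grid, zk, sk)
              for M, zk, sk in zip(M_gal, z_hat, sig))
factorised = rate_term(z_grid) * bracket

assert np.allclose(direct, factorised)  # identical up to floating-point error
```

This is exactly what makes the joint inference tractable: the expensive catalogue-dependent bracket is computed once, and only the cheap prefactor changes as the sampler explores Λ_rate.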
Within the square brackets there is still a clear dependence on Λ_cosmo in both the in- and out-of-catalogue parts. The term p(z|Λ_cosmo, I) is a uniform-in-comoving-volume distribution, and its dependence on H_0 drops out when normalised, leaving only the dependence on {Ω_M, Ω_Λ}. The galaxy luminosity function, p(M|z, Λ_cosmo, I), will depend only on H_0 (Schechter function parameters are quoted with a dependence on h ≡ H_0/100). The relation between redshift, apparent and absolute magnitude, M(z, m, Λ_cosmo), will retain dependence on all cosmological parameters, due to the necessity of converting redshift to luminosity distance.
Despite the dependence of individual terms on H_0, when computed for different values of H_0 the LOS prior is independent of its value, up to a normalisation constant. Because the same prior is used to evaluate the GW likelihood and the GW selection effects, this dependence cancels, meaning that the LOS prior can in fact be computed just once and used in an analysis in which {H_0, Λ_mass, Λ_rate} all vary.

The resolution of the LOS prior
Looking at Eq. 2.22, it should be noted that pixel area does not explicitly appear anywhere within the current definition, but there is an implicit assumption present which is worth taking care with. The quantities computed in the method thus far are based on assumptions about the distribution of galaxies which we cannot see (i.e. that they are distributed, on average, uniformly in comoving volume, and follow some kind of luminosity distribution, such as a Schechter function). Consider p(G|Ω_i, Λ, I), for example, and how it is expressed in Eq. 2.18. This term is the fraction of galaxies within pixel i which are inside the galaxy catalogue. The "true" value is given by N_gal(Ω_i)/N_tot(Ω_i), that is, the ratio of the number of galaxies in the catalogue (in that pixel) to the total number of galaxies (including the ones we cannot see) which lie in that pixel. Because we do not have access to the true total number of galaxies, we must use an approximation based on our chosen priors. This approximation is acceptable as long as the pixel size is large enough that N_gal(Ω_i) ≈ p(G|Ω_i, Λ, I) × N_tot × ∆Ω, where N_tot is the total number of galaxies in the universe, and ∆Ω is the fractional area covered by one pixel. This will clearly break down in the limit of infinitesimally small pixels, which may not contain any galaxies but which would still have a non-zero value of p(G|Ω_i, Λ, I). In order to avoid this limit, the pixel size must be chosen to be large enough that the approximation is valid.
This fine-resolution limit to the method has a drawback, in that all galaxies within a pixel are treated as having the same Ω, effectively discarding the galaxies' positional (RA and dec) information within the pixel. If the GW likelihood changes significantly across the area of one pixel, valuable information will be lost. In order to carry out the analysis with suitably high resolution, we make the choice to define two different levels of resolution: n map , which determines the resolution on which N gal (Ω i ) and m th are estimated, and n high , a finer resolution into which pixels are subdivided. The specific choices for these resolutions are discussed in section 3.2. These parameters, n map and n high , are values of nside, a parameter which sets the number of equal-sized pixels the surface of a sphere is separated into, via the formula n pix = 12 × nside 2 .
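The nside-to-pixel-count relation above, and the number of sub-pixels each coarse pixel splits into, can be sketched in a few lines of Python (the first function mirrors what healpy's nside2npix computes):

```python
# Pixel counts for the HEALPix resolutions discussed in the text.

def nside2npix(nside: int) -> int:
    """Number of equal-area HEALPix pixels for a given nside: 12 * nside^2."""
    return 12 * nside * nside

def subpixels_per_coarse_pixel(n_map: int, n_high: int) -> int:
    """How many fine (n_high) pixels each coarse (n_map) pixel splits into."""
    assert n_high % n_map == 0, "n_high must be a multiple of n_map"
    return (n_high // n_map) ** 2

print(nside2npix(32))                       # 12288 pixels at n_map = 32
print(nside2npix(128))                      # 196608 pixels at n_high = 128
print(subpixels_per_coarse_pixel(32, 128))  # 16 sub-pixels per coarse pixel
```

This makes explicit that each doubling of nside quadruples the pixel count, which is also why the LOS prior file size quadruples with each resolution increase.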
The n map resolution is used to compute HEALPix maps [43,44] of two quantities: m th and N gal (Ω i ). When computing the full LOS redshift prior on the n high resolution, m th is taken from the m th map and used for the incompleteness corrections. N gal (Ω i ) is also taken from the coarser map, divided through by the number of sub-pixels that the coarse pixel is split into, and this value is then used in place of N gal (Ω i ) in Eq. 2.22. For clarity, the sum is still carried out over the true number of galaxies in the fine pixel, but the normalising factor of 1/N gal (Ω i ) is constructed from the coarse map. This means that computing the full LOS prior on a coarse map gives the same result as computing it at a higher resolution and then summing pixels to return to the coarse resolution. As such, the method is robust in the limit of infinitely high resolution.
Figure 1 may help to visualise this. First assume that m th and N gal (Ω i ) are not pre-computed on a coarser resolution. Looking at the left-hand panel, if the LOS prior were constructed, each galaxy would have a normalising factor of 1/4 in front of it. In the middle panel, however, the pixel has been split into 4 sub-pixels, and the same galaxies have normalisation factors of 1 or 1/2, depending on which pixel they fall in. If the 4 sub-pixels were then summed, the result of the left-hand panel would not be recovered; instead two of the galaxies would be down-weighted relative to the other two, simply due to small number statistics. Additionally, in the high-resolution limit (right-hand panel) there will be a maximum of 1 galaxy per pixel, but there can be an arbitrarily high number of empty pixels, which means the out-of-catalogue contribution will dominate. If, however, N gal (Ω i ) is computed for a coarse pixel and then divided by the number of sub-pixels (leading to an average of 4 galaxies per sub-pixel in the left-hand panel of Fig. 1, 1 for the middle panel, and 1/4 for the right-hand panel), then the effective N gal (Ω i ) of each high-resolution pixel remains correctly balanced against the out-of-catalogue contributions (1 per sub-pixel).
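The normalisation argument above can be checked numerically. The layout below is a hypothetical one matching the description of the middle panel of Fig. 1: four galaxies in one coarse pixel, split into four sub-pixels, with two galaxies sharing one sub-pixel:

```python
# Worked example of the coarse-to-fine normalisation: two galaxies share one
# sub-pixel, two sit alone in their own sub-pixels, one sub-pixel is empty.
galaxies_per_subpixel = [2, 1, 1, 0]
n_sub = len(galaxies_per_subpixel)
n_gal_coarse = sum(galaxies_per_subpixel)   # 4 galaxies in the coarse pixel

# Naive normalisation within each sub-pixel: galaxies sharing a sub-pixel
# get weight 1/2 while lone galaxies get weight 1, so summing sub-pixels
# no longer matches the coarse-pixel result.
naive_weights = [1.0 / n if n > 0 else 0.0 for n in galaxies_per_subpixel]

# Robust normalisation: divide the coarse count evenly among sub-pixels, so
# every galaxy keeps weight 1/N_gal(coarse) however the pixel is split.
effective_n_gal = n_gal_coarse / n_sub      # average of 1 galaxy per sub-pixel
robust_weight = 1.0 / n_gal_coarse          # each galaxy contributes 1/4

# Summing the sub-pixels reproduces the coarse-pixel normalisation exactly:
coarse_total = sum(n * robust_weight for n in galaxies_per_subpixel)
print(coarse_total)  # 1.0
```

With the naive weights, the two galaxies sharing a sub-pixel are down-weighted relative to the lone galaxies, which is exactly the small-number-statistics artefact the coarse map avoids.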

Gravitational-wave selection effects
In order to obtain an unbiased estimate of cosmological (and population) parameters, it is crucial to accurately model GW selection effects. The "probability of detection" of a GW event (often shortened to P det ) can be written as p(D GW |Λ, I) = ∫ p(D GW |θ, Λ, I) p(θ|Λ, I) dθ. In previous versions of gwcosmo, where the only parameter being constrained was H 0 (i.e. Λ = Λ cosmo = H 0 ), and all other mass model and cosmological parameters were set to fixed values, this was done by simulating a population of GW events and computing their probability of detection as a function of redshift and H 0 (marginalising over masses, sky position, polarisation, and inclination). Now, with a dozen or more parameters to jointly constrain, this approach is no longer practical due to the high computational burden.
In order for Eq. 2.4 to be used by a Markov Chain Monte Carlo (MCMC) or nested sampling method, the probability of detection must be quickly computable for any values of Λ = {Λ cosmo , Λ mass , Λ rate }. The probability of detection is equivalent to the ratio of the expected number of detectable events N exp (Λ) to the total number of mergers N (Λ) up to a certain redshift. This integral can be computed numerically using a Monte Carlo approximation.
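The Monte Carlo idea can be illustrated with a toy simulation. The population prior and SNR model below are deliberately crude, illustrative stand-ins, not gwcosmo's actual models:

```python
# Toy Monte Carlo estimate of the detection probability N_exp / N: draw
# simulated events from a population prior and count those whose SNR
# clears the network threshold.
import random

random.seed(1)
N_SIM = 100_000
SNR_THRESHOLD = 11.0

def draw_redshift():
    # Stand-in population prior: uniform in z up to z_max = 1.
    return random.random()

def simulated_snr(z):
    # Toy model: SNR falls off with distance, with log-normal scatter
    # standing in for sky position, inclination and noise realisation.
    return 8.0 / (z + 0.1) * random.lognormvariate(0.0, 0.3)

detected = sum(1 for _ in range(N_SIM)
               if simulated_snr(draw_redshift()) > SNR_THRESHOLD)
p_det = detected / N_SIM
print(f"P_det ~= {p_det:.3f}")
```

In the real analysis the injections are generated once with full waveforms and detector noise; only their weights are recomputed as Λ changes, as described next.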
where π s inj denotes a (prior) probability density function which is distinct from another prior on the same parameters (see later in this section). The discrete summation is carried out over a large set of simulated GW signals (typically of the order of N sim = 10 5 - 10 6 ), drawn using a prior on the characteristics of the simulated CBC (masses, and redshift or luminosity distance). These signals are injected into realistic time sequences of noise following the detector sensitivities and duty cycles, in order to reproduce observational conditions as closely as possible. The corresponding signal-to-noise ratio (SNR) is then computed for each simulated signal, yielding a number N det of detected events (injections) above the network SNR threshold specified in the analysis, which, when summed with the appropriate weights, produces the probability of detection.
The probabilities appearing in the numerator of Eq. 2.23, p(m s 1,i , m s 2,i , z i |Λ, I), are computed for each value of Λ drawn during the sampling process, applying the desired priors on compact binary source-frame masses and redshift (as an example, the prior for the distribution of the primary mass m 1 in the source frame can be seen in Eq. 4.1). In particular, the model used for p(z|Λ, I) is the LOS redshift prior of Eq. 2.22. The probabilities in the denominator, π s inj (m s 1,i , m s 2,i , z i |Λ), are the individual probabilities of the detected injections after taking into account the current value of Λ, and must be recomputed accordingly. These probabilities are therefore closely related to their known values, as the injections have been drawn from the chosen prior distribution π inj . In the following analyses, we chose to draw injections in the detector frame (in detector-frame masses and luminosity distance), as the detectability of a GW event is a function of these parameters, and they therefore provide coverage for a wide range of cosmological values. With this choice, it is possible to obtain the required values in terms of source-frame masses, π s inj (m s 1,i , m s 2,i , z i |Λ), using the relation π s inj (m s 1 , m s 2 , z|Λ) = π inj (m det 1 , m det 2 , d L ) × (1 + z) 2 × dd L /dz, where the Jacobians for transforming between source frame and detector frame, and between d L and z, have been written explicitly. Finally, substituting Eq. 2.24 into the denominator of Eq. 2.23 gives Eq. 2.25, in which the denominator acts as a weight for each detected injection. The sampling process modifies these weights for each new value of Λ; this is a fast operation, and we are therefore able to obtain, at every point of the hyper-parameter space, the estimate of N exp /N for any value of Λ.
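The reweighting step for a single injection can be sketched as follows. A low-redshift approximation d L ≈ cz/H 0 is assumed for simplicity (so dd L /dz ≈ c/H 0 ), and the priors are illustrative stand-ins, not gwcosmo's actual models:

```python
# Sketch of the injection-reweighting step: the detector-frame injection
# density is converted to a source-frame density with the Jacobian
# (1+z)^2 * d(d_L)/dz, then compared against the population prior.

C_KM_S = 299792.458  # speed of light in km/s

def injection_density_source_frame(d_L, H0, pi_det=1.0):
    """pi_s_inj(m1^s, m2^s, z | Lambda): the detector-frame injection
    density (a uniform stand-in here) times the Jacobian (1+z)^2 * dd_L/dz."""
    z = d_L * H0 / C_KM_S          # invert d_L ~ c z / H0 (low-z approximation)
    ddL_dz = C_KM_S / H0
    return pi_det * (1.0 + z) ** 2 * ddL_dz

def population_density(z, gamma=3.0):
    """Stand-in p(z | Lambda): merger rate scaling as (1+z)^gamma."""
    return (1.0 + z) ** gamma

# Reweight one detected injection for a trial value of H0:
H0 = 70.0
m1_det, m2_det, d_L = 40.0, 30.0, 1000.0   # masses in Msun, d_L in Mpc
z = d_L * H0 / C_KM_S
m1_s, m2_s = m1_det / (1.0 + z), m2_det / (1.0 + z)
weight = population_density(z) / injection_density_source_frame(d_L, H0)
print(f"z = {z:.3f}, source-frame m1 = {m1_s:.1f}, weight = {weight:.2e}")
```

Because only these scalar weights change with Λ, the sum over a fixed injection set is cheap to re-evaluate at every point visited by the sampler.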
Assuming that the redshift prior in the numerator of Eq. 2.25 is constructed from the LOS redshift prior, this is (to within a normalisation constant) the quantity between the first set of square brackets appearing in Eq. 2.4.

3 Gravitational-wave data and galaxy catalogues

Gravitational-wave data
In this paper we select all GW events with SNR > 11 from GWTC-3 (as was done in [34]). This leads to a selection of 42 BBHs, 2 NSBHs (GW200105 and GW200115), the asymmetric-mass binary GW190814 (which is treated as an NSBH in all the analyses which follow), as well as two BNSs (GW170817 and GW190425). gwcosmo makes use of both GW posterior samples (for computing a KDE of the GW distance distribution along every line-of-sight) and skymaps (for weighting the normalised distance KDEs to retain the correct 3-dimensional distribution of GW probability in RA, dec and luminosity distance). A set of injections (as described in section 2.2) was generated using gwcosmo to characterise the detectability of GW events, assuming a network SNR threshold of 11.

Galaxy catalogues
For this analysis we use the GLADE+ galaxy catalogue, which is a composite catalogue made by cross-matching the Gravitational Wave Galaxy Catalogue [46], HyperLEDA [47], the 2 Micron All-Sky Survey Extended Source Catalog [48,49], the 2MASS Photometric Redshift Catalog [50], the WISExSCOS Photometric Redshift Catalogue [51], and the Sloan Digital Sky Survey quasar catalogue from the 16th data release [52]. We choose galaxies in the K-band in order for our results to be comparable to [34]. We assume that the galaxy luminosity function follows a Schechter function that does not evolve with redshift, with parameters M * = -23.39 + 5 log(h) (where h ≡ H 0 /100) and α = -1.09, following [53]. To curtail the faint end of the Schechter function we choose a limit of M max = -19.0 + 5 log(h). The default assumption for host galaxy weighting is that host probability is proportional to K-band luminosity, which traces galaxy stellar mass. As mentioned in section 2.1.4, an assumption of CBC merger rate evolution with redshift is not required at this stage, as it is incorporated later in the analysis. K-corrections are also applied when converting galaxy apparent magnitudes to rest-frame absolute magnitudes, following the prescription in [53].
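The Schechter function with the K-band parameters quoted above can be evaluated directly in magnitude space. The normalisation φ* is omitted here, since only the shape matters for host-galaxy weighting:

```python
# Unnormalised Schechter luminosity function in absolute magnitude, using
# the K-band parameters from the text: M* = -23.39 + 5 log10(h), alpha = -1.09.
import math

def schechter_mag(M, h=0.7, M_star_0=-23.39, alpha=-1.09):
    """phi(M) up to the overall normalisation phi*."""
    M_star = M_star_0 + 5.0 * math.log10(h)
    x = 10.0 ** (0.4 * (M_star - M))   # L / L*
    return 0.4 * math.log(10.0) * x ** (alpha + 1.0) * math.exp(-x)

# The faint-end limit quoted in the text (h = 0.7 is an illustrative choice):
h = 0.7
M_max = -19.0 + 5.0 * math.log10(h)
print(schechter_mag(-23.0, h=h), schechter_mag(M_max, h=h))
```

The exponential cut-off suppresses galaxies much brighter than M*, while the power-law faint end is truncated by hand at M max as described above.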
With the galaxy catalogue selected and the observation band chosen, the LOS redshift prior can now be pre-computed. The first stage of this is to pre-compute maps describing the m th of the catalogue across the sky, as well as the corresponding numbers of galaxies. In order to assess which coarse resolution satisfies the conditions for n map listed in Sec. 2.1.5, we have computed maps for two different n map values: 32 and 64. These maps are shown in Figs. 2 and 3. For the m th maps, the minimum number of galaxies a pixel must contain in order to not be considered empty is 10. For n map = 32 this results in a map for which the only empty pixels are those where the Milky Way band obscures the sky. For n map = 64, due to the limited number of galaxies with available K-band luminosities and the small pixel sizes, many of the pixels evaluate as empty. For this reason, n map = 32 is chosen as the fiducial value (though the impact of changing to n map = 64 is investigated in Sec. 4.2), and we do not investigate coarser resolutions, as they would not be able to accurately capture the varying galaxy catalogue completeness across the sky.
With the m th and N gal maps pre-computed, the full LOS redshift prior can now be computed at a higher resolution. The LOS redshift prior (the part within square brackets of Eq. 2.22) is computed for every pixel assuming n high = 32, 64 and 128. The impact of changing the choice of resolution is investigated in section 4.2, and n high = 128 is chosen as the fiducial value for the analyses moving forwards, as it allows the maximum information from galaxy RAs and decs to be utilised. Higher resolutions are not investigated in this paper due to 1) the length of time it takes to compute the LOS redshift prior (though this is parallelisable) and 2) the resulting file sizes: a LOS prior with n high = 128 is about 7 GB, and each subsequent increase in resolution divides each pixel into 4, and so quadruples the resulting file size. Usefully, however, this file size is completely independent of the number of galaxies which were originally in the galaxy catalogue. The second criterion which determines the overall file size, other than pixel resolution, is the redshift resolution. Every pixel in the LOS redshift prior is computed for the same redshift array, in order to facilitate fast operations summing pixels at later stages. For this reason, the redshift array resolution must be high enough to capture the redshift profile of every galaxy in the catalogue while maintaining an output file of manageable size. Looking at Fig. 4, which shows a scatter plot of absolute redshift uncertainty against redshift for all the galaxies in GLADE+, it is clear that there are several distinct regions. These correspond to the different galaxy surveys which make up GLADE+, for which different uncertainty models were used.
We identified 4 redshift ranges in GLADE+ where the galaxies with the smallest redshift uncertainties (σ z ) show differing dependence on z, and we construct the grid accordingly, shown in orange. The grid starts at z = 10 -6 and ends at z = 10, switching between linear and logarithmic spacing in order to closely match the data. By setting a requirement of having at least 1

Table 2. Summary of merger rate shape parameters with the corresponding priors (including the redshift at which the slope of the merger rate changes, with prior U(0, 4.0)).
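A redshift grid of this kind, switching between linear and logarithmic spacing, can be sketched as below. The breakpoints and point counts are hypothetical; only the endpoints z = 10⁻⁶ and z = 10 come from the text:

```python
# Piecewise linear/logarithmic redshift grid (illustrative breakpoints).
import math

def log_grid(start, stop, n):
    """n points logarithmically spaced between start and stop (inclusive)."""
    step = (math.log10(stop) - math.log10(start)) / (n - 1)
    return [10 ** (math.log10(start) + i * step) for i in range(n)]

def lin_grid(start, stop, n):
    """n points linearly spaced between start and stop (inclusive)."""
    step = (stop - start) / (n - 1)
    return [start + i * step for i in range(n)]

# Stitch segments together, dropping the duplicated endpoint at each join.
z_grid = (log_grid(1e-6, 1e-2, 50)
          + lin_grid(1e-2, 0.5, 200)[1:]
          + log_grid(0.5, 10.0, 100)[1:])

assert z_grid == sorted(z_grid)   # the stitched grid is monotonic
print(len(z_grid))
```

Because every pixel shares this single array, summing LOS priors over pixels reduces to element-wise addition.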
min(M NS max , m 1 ). The BNSs use a uniform prior between M NS min and M NS max . Table 1 describes the parameters used in the mass distributions and gives the prior ranges which are used for the analyses in Secs. 4.1 and 4.3.
The CBC merger rate evolution model is assumed to follow a Madau-Dickinson-like distribution [55], which is given by R(z) ∝ (1 + z) γ / [1 + ((1 + z)/(1 + z p )) γ+κ ]. Definitions of the parameters appearing in Eq. 4.2 are given in Tab. 2, as well as the prior ranges which are used for the analyses in Secs. 4.1 and 4.3. Using the selection of events described in Sec. 3.1, three separate analyses are carried out: a pure population analysis using 42 BBHs (see Sec. 4.1), a galaxy catalogue analysis with fixed population assumptions using all the BBHs, NSBHs and BNSs (see Sec. 4.2) and, finally, a population + galaxy catalogue analysis which uses all GW events and galaxy catalogue information, and jointly estimates cosmological and population parameters (Sec. 4.3). For the multi-parameter estimation analyses, gwcosmo utilises nested sampling with the sampler dynesty [56], while for the H 0 -only analysis a gridded method is used.
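The Madau-Dickinson-like rate shape can be written as a short function, normalised so that R(0) = 1. The default parameter values below are illustrative, close to the Madau-Dickinson star-formation-rate fit, not the posterior values from this analysis:

```python
# Madau-Dickinson-like merger rate evolution (shape only, R(0) = 1):
# rises as (1+z)^gamma at low z, turns over at z_p, falls at high z.

def rate_evolution(z, gamma=2.7, kappa=2.9, z_p=1.9):
    norm = 1.0 + (1.0 + z_p) ** (-(gamma + kappa))
    return norm * (1.0 + z) ** gamma / (
        1.0 + ((1.0 + z) / (1.0 + z_p)) ** (gamma + kappa))

print(rate_evolution(0.0))   # 1.0 by construction
print(rate_evolution(1.9))   # near the peak of the distribution
```

The parameter γ controls the low-redshift slope (and so correlates strongly with H 0 , as seen in Sec. 4.1), while z p marks where the slope changes and κ sets the decline beyond it.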

The population analysis of GWTC-3 BBHs
In this section the 42 BBHs are reanalysed using gwcosmo, allowing cosmological and population parameters to vary. Fig. 5 shows the result of the analysis for H 0 and the parameters which correlate most strongly with it, namely µ g , M BH min , M BH max and σ g from the mass model (see Table 1) and γ from the merger rate model (see Table 2). The full corner plot showing all parameters can be found in appendix A. Marginalising over population parameters results in a value of H 0 = 33 +51 -18 km s -1 Mpc -1 (maximum a posteriori probability (MAP) and 68% highest density interval (HDI)). All results from this point on will be quoted using the same convention, unless specified otherwise. The result is fully compatible with the Icarogw result from [34], with minor deviations due to the versions of GW parameter estimation used in each analysis ([34] used parameter estimation from the GWTC-2 release, whereas we make use of the updated GWTC-2.1 release), and the fact that gwcosmo implicitly marginalises over R 0 with a 1/R 0 prior, while in [34] Icarogw used a uniform prior on R 0 .

The galaxy catalogue analysis of GWTC-3 with fixed population assumptions
In order to best assess the individual contributions of GW events to the H 0 measurement, as well as to determine to what extent a result is driven by population information versus galaxy catalogue information, it is useful to run an analysis with fixed population assumptions. This analysis is carried out for the 46 dark siren events from GWTC-3 with SNR > 11.
The posteriors for individual events (assuming uniform priors on H 0 ) can be seen in Fig. 6, for three different analysis assumptions: luminosity weighting of host galaxies in the K-band, uniform weighting of host galaxies, and no galaxy catalogue (also known as the "empty catalogue" analysis). The empty catalogue analysis demonstrates the information which is driven by the chosen GW population, and as such it is the difference between this line and the others which should be used to assess how informative the galaxy catalogue method is for any given event. For the majority of events, lying at distances beyond those to which the GLADE+ catalogue is complete in the K-band, the results of all 3 analyses are the same. Some nearby events show deviations at the low-H 0 end, which corresponds to low redshift (for a fixed luminosity distance), where the galaxy catalogue is more complete. In general, the luminosity-weighted results are the most informative, as the majority of the GW host probability gets condensed into a smaller number of galaxies, while in the unweighted case the presence of many low-luminosity galaxies means that the final LOS redshift prior will appear much closer to uniform in comoving volume when averaged over the volume of an event.
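The contrast between luminosity and uniform weighting can be seen in a toy example. Host probability under luminosity weighting scales as L ∝ 10^(-0.4 M); the absolute magnitudes below are made up for illustration:

```python
# Toy comparison of luminosity-weighted vs uniform host-galaxy weights.
absolute_mags = [-24.0, -22.0, -18.5]   # two bright hosts, one faint

raw = [10 ** (-0.4 * M) for M in absolute_mags]      # L propto 10^(-0.4 M)
total = sum(raw)
lum_weights = [w / total for w in raw]               # luminosity weighting
uni_weights = [1.0 / len(absolute_mags)] * len(absolute_mags)  # uniform

print([round(w, 4) for w in lum_weights])
```

The brightest galaxy absorbs most of the host probability under luminosity weighting, which is why this scheme condenses the LOS prior onto fewer galaxies and yields the most informative posteriors.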
The event of most interest, from a galaxy catalogue perspective, is GW190814. Due to its nearby distance (around 240 Mpc) and small sky localisation (18.5 deg 2 for the 90% HDI) [57], this event is an ideal dark siren. In order to assess the impact of analysis assumptions relating to the choice of pixel resolution, GW190814 is reanalysed under a variety of different assumptions (see Fig. 7). Two different resolutions of m th and N gal maps are used: n map = 32 and 64 (see Figs. 2 and 3), and for each of these, different resolutions up to n high = 128 are used for the final LOS prior. Interestingly, the choice of map resolution seems to have a larger impact on the shape of the posterior than that of n high . Looking at Fig. 8, it can be seen that while the same galaxies are present in the LOS redshift prior for the n map = 32 and 64 cases, the relative contributions of these galaxies can differ significantly. The driving reason for this is the small number statistics discussed in detail in Sec. 2.1.5: in the n map = 64 scenario, many pixels within GW190814's sky area contain only about a dozen galaxies, meaning that the relative contributions from adjacent pixels can be significantly impacted. The larger pixel sizes in the n map = 32 scenario reduce this impact, and the results are more robust because of it.
The value of n high also impacts the final shape of the posterior. This is most noticeable for the n high = 32 result, compared with the n high = 64 and 128 results (using n map = 32 in all cases). A lower-redshift galaxy which contributes a large peak for n high = 32 is subsequently suppressed at higher resolutions, because the pixel it lies within contains significant variation in GW support: a detail which is only captured at higher resolutions.
GW190814 is the most well-localised dark siren in GWTC-3, so we conclude that this resolution is sufficient to capture all the necessary GW information for this analysis. It is worth remembering, however, that the choice of resolution which is "good enough" will depend on the localisation area of the GW events it is used with: a future GW event with a smaller localisation region than GW190814 will need a higher-resolution LOS prior to do it justice.

Figure 9. Posterior on H 0 produced using 42 BBHs, 2 NSBHs, GW190814, and GW190425 using the GLADE+ galaxy catalogue, and GW170817 with its EM counterpart (black line). The separate contributions from the BBHs (blue), the NSBHs and GW190814 (orange), and GW170817 (green) are also shown (normalised). The 1σ Planck [1] and SH0ES [2] results are shown as shaded pink and brown bands respectively.

Combined cosmological and population inference using GWTC-3
With the robustness of this version of gwcosmo established, the final step is to perform a full population + galaxy catalogue analysis of GWTC-3. The BBHs are analysed as in Sec. 4.1, but this time utilising a LOS redshift prior constructed from the GLADE+ K-band, using resolutions of n map = 32 and n high = 128. The two NSBHs and GW190814 are also analysed using the full population + galaxy catalogue method. The maximum and minimum mass of the neutron star (NS) component is allowed to vary in this analysis, as GW190814's secondary mass is very heavy for a NS, and we want to avoid a cut-off in the mass distribution which could incorporate additional cosmological information into the analysis (see appendix A for the full results of this analysis). When considering these 3 events alone, the majority of population parameters are poorly constrained (as expected; see also the NSBH analysis in [36]). Because GW190425 is the only BNS without an EM counterpart, we do not carry out a population analysis for it. Given the lack of features in the assumed BNS mass distribution (assuming the event is not so heavy or light as to touch the BNS prior bounds), we simply fix the population parameters to fiducial values and carry out a 1-dimensional analysis over H 0 . The result which combines the inference from all 3 GW populations is presented in Fig. 9. The final constraint on H 0 is 69 +12 -7 km s -1 Mpc -1 which, as anticipated, is less informative than the previous result produced by gwcosmo in [34] of H 0 = 68 +8 -6 km s -1 Mpc -1 , due to the marginalisation over the uncertainty of the population hyper-parameters. While less informative, this result is far more robust, and no longer susceptible to bias due to fixed assumptions about the GW population.
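The combination across source classes amounts to multiplying normalised per-population posteriors on a shared H 0 grid (valid for uniform H 0 priors and independent events). The Gaussian stand-ins below are made up for illustration and do not reproduce the paper's posteriors:

```python
# Sketch of combining independent H0 posteriors by multiplying normalised
# posteriors on a common grid. The three Gaussians are illustrative stand-ins
# for the BBH, NSBH + GW190814, and GW170817 contributions.
import math

h0_grid = [50.0 + 0.5 * i for i in range(101)]   # 50 .. 100 km/s/Mpc

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

posteriors = [
    [gaussian(h, 72.0, 12.0) for h in h0_grid],  # broad (BBH-like)
    [gaussian(h, 66.0, 15.0) for h in h0_grid],  # broad (NSBH-like)
    [gaussian(h, 70.0, 8.0) for h in h0_grid],   # tighter (bright-siren-like)
]

combined = [1.0] * len(h0_grid)
for post in posteriors:
    norm = sum(post)                             # normalise on the grid
    combined = [c * p / norm for c, p in zip(combined, post)]

map_h0 = h0_grid[max(range(len(h0_grid)), key=lambda i: combined[i])]
print(f"combined MAP H0 ~= {map_h0} km/s/Mpc")
```

As expected for a product of unimodal posteriors, the combined MAP lands between the individual peaks, pulled towards the tightest contribution.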

Conclusions
This paper presents a novel method for cosmological analyses using GW data and galaxy catalogues, incorporating information from the GW mass distribution in a way which allows the joint estimation of cosmological and GW population parameters: a crucial improvement while the true mass distribution of compact mergers remains uncertain. A key part of this new method is the construction of a LOS redshift prior which incorporates both galaxy catalogue data and an incompleteness correction, the pre-computation of which produces a speed-up in the analysis of roughly 3 orders of magnitude over the previous version of gwcosmo. Comparing the population-only cosmological results using 42 BBHs from GWTC-3 to the Icarogw results in [34] shows excellent agreement between the methods. Similarly, the fixed-population result using the GLADE+ galaxy catalogue agrees very well with the previous version of gwcosmo in [34]. The ability to jointly estimate, or marginalise over, the population of compact mergers means that this analysis is largely robust to the current uncertainties in the population. The LVK's fourth observing run (O4) is currently underway and is expected to increase the number of GW detections to several hundred. This improved version of gwcosmo will allow these new events to be used in an informative cosmological analysis which incorporates data from current galaxy surveys, and also utilises information from the GW population itself. Nearby and well-localised sources will make informative contributions due to the known galaxy structure within their localisation volumes. The numerous GW events at large distances, where catalogue support is low, will still contribute cosmologically through their redshifted masses. By joining these two sources of redshift information the best of both is utilised, enabling reliable and informative future measurements of H 0 using dark sirens. However, there are still areas in which further improvements must be made. Here, for example, in the final
result we treat population hyper-parameters as uncorrelated between the different source populations. Ideally, parameters which are common between the populations (e.g. the hyper-parameters describing the black hole as the primary mass in both the BBH and NSBH cases) would be estimated jointly. However, certain parameters, such as those parameterising the merger rate evolution model, would not be expected to be the same between the different GW populations. There, and perhaps in general, the solution could be to infer hyper-parameters for the entire CBC population, rather than breaking it into sub-categories. Given the current difficulties in firmly placing current GW detections into these categories (GW190814 being an excellent example, not to mention the current uncertainty surrounding the existence, or not, of a mass gap between the NS and BH populations [40]), this could simplify the process noticeably.
While the gwcosmo method is now able to jointly constrain cosmological and population hyper-parameters, it should not be forgotten that the choice of mass model can have an impact on the inference, and ensuring that a model is used which adequately captures the structure in the compact binary mass distribution will be an important aspect of this analysis moving forwards. Similarly, while current evidence does not confirm whether the mass distribution of compact binaries evolves with redshift or remains constant, this would

Figure 11. Posteriors on 14 cosmological and population parameters using the 2 NSBH events (GW200105 and GW200115) and GW190814, including information from the GLADE+ galaxy catalogue.

Figure 1. Schematic demonstrating the impacts of pixel resolution. The same patch of sky containing four galaxies is shown with 3 different pixel resolutions.
Figure 2. m th maps for different n map values. For n map = 32 the sky is divided into 12,288 pixels, each covering an area of 3.36 deg 2 ; for n map = 64 it is divided into 49,152 pixels, each covering 0.84 deg 2 . The minimum number of galaxies a pixel must contain in order to not be considered empty is 10. Empty pixels are shown in white.

Figure 3. Effective N gal maps for different n map values.

Figure 5. Posteriors on parameters H 0 , γ, µ g , M BH max , M BH min and σ g using the 42 BBH events of the GWTC-3 catalogue, obtained with a pure population analysis with gwcosmo.

Figure 6. H 0 posteriors (uniform prior) for the 46 dark sirens from GWTC-3 with SNR > 11. The dotted line shows the contribution coming purely from the GW population assumptions and no galaxy catalogue information (the "empty catalogue" scenario). The dashed orange line includes galaxy catalogue information with uniform weighting of host galaxies, while the blue dot-dashed line includes galaxy catalogue information and applies weighting proportional to the galaxy's luminosity.

Figure 7. H 0 posteriors for GW190814 using different values for n high and n map .

Figure 8. Ratio of the galaxy catalogue and empty catalogue LOS redshift priors, weighted by the sky area of GW190814, with n high = 128. The blue curve corresponds to the n map = 32 case and the orange curve to n map = 64. The horizontal dashed black line at 1.0 corresponds to the scenario in which matter is distributed truly uniformly in comoving volume, so oscillations above and below this correspond to over- and under-densities in matter. At high redshift, where the catalogue contains no information in the K-band, the curves tend to 1.0 as the incompleteness correction dominates.

Figure 10. Posteriors on 12 cosmological and population parameters using the 42 BBH events of the GWTC-3 catalogue, obtained with a pure population analysis with gwcosmo.
By keeping the parameter(s) Λ rate flexible, such that p(s|z, Λ rate , I) can return to a uniform distribution, additional unknown redshift dependence can be captured. Making this substitution in Eq. 2.19 leads to p(z|Ω i , Λ, s, I) = p(s|z, Λ rate , I) p(z|Ω i , Λ, I) / p(s|Ω i , Λ, I).

Table 1. Summary of mass distribution parameters with the corresponding prior ranges.
The n high = 64 and 128 results are very similar, showing that a reasonable level of convergence has been reached.