Observational constraints on extended Proca-Nuevo gravity and cosmology

We confront massive Proca-Nuevo gravity with cosmological observations. The former is a non-linear theory involving a massive spin-1 field, which can be extended by incorporating operators of the Generalized Proca class and, when coupled to gravity, can be covariantized in a way that exhibits consistent and ghost-free cosmological solutions, without experiencing instabilities or superluminalities at the perturbative level. When applied in a cosmological framework it induces extra terms in the Friedmann equations; however, due to the special non-linear construction, the field can be eliminated in favor of the Hubble function. Thus, the resulting effective dark-energy sector is dynamical, yet it contains the same number of free parameters as the ΛCDM concordance model. We use data from Supernovae Ia (SNIa) and Cosmic Chronometer (CC) observations and construct the corresponding likelihood contours for the free parameters. Interestingly enough, the application of various information criteria, such as AIC, BIC and DIC, shows that the scenario of massive Proca-Nuevo gravity, although having exactly the same number of free parameters as the ΛCDM paradigm, is more efficient in fitting the data. Finally, the reconstructed dark-energy equation-of-state parameter shows statistical compatibility with the model-independent, data-driven reconstruction.


Introduction
The concordance model of cosmology, namely the ΛCDM paradigm, which is based on general relativity with a cosmological constant, on the standard-model particles, and on cold dark matter, has proven very efficient in describing the Universe evolution both at the background and perturbation levels. Nevertheless, according to recent observations of various origins, ΛCDM predictions seem to be in tension with the data, for instance the H_0 tension [1], the σ_8 tension [2], etc. (for reviews see [3,4]). On the other hand, at the theoretical level, ΛCDM faces the cosmological constant problem [5,6], while general relativity itself is non-renormalizable and thus cannot be brought close to a quantum description [7,8]. Hence, a large amount of research has been devoted to constructing gravitational modifications, namely theories that possess general relativity as a limit while presenting theoretical as well as phenomenological advantages (for a review see [9]).
A first subset of modified gravity theories emerges by enhancing the Einstein-Hilbert Lagrangian through the inclusion of supplementary terms. This results in a diverse array of formulations, encompassing f(R) gravity [10,11], f(G) gravity [12], f(P) gravity [13], and Lovelock gravity [14]. Nonetheless, an alternative trajectory involves transcending the conventional curvature-centric approach to gravity by incorporating other geometrical quantities, such as torsion and non-metricity. More specifically, one can start from the teleparallel equivalent of general relativity [15,16], utilizing the torsion scalar T as the Lagrangian and extending it to f(T) gravity [17,18]. Alternatively, one may adopt the symmetric teleparallel theory, utilizing the non-metricity scalar Q as the Lagrangian [19], and further extend it to f(Q) gravity [20,21]. All these classes of gravitational theories demonstrate rich cosmological behaviors, attracting the keen interest of the scientific community.
One interesting subclass of modified gravity arises by considering the graviton to be massive. The inquiry into whether the graviton can possess mass has intrigued theorists for a long time. Originating with Fierz and Pauli [53], the subsequent development of a comprehensive nonlinear framework for massive gravity encountered a significant challenge, namely the emergence of the Boulware-Deser (BD) ghost [54]. This problem persisted for decades until a notable breakthrough, marked by the introduction of a specific nonlinear extension of massive gravity by de Rham, Gabadadze, and Tolley (dRGT) [55]. Through meticulous Hamiltonian constraint analysis [56] and an effective field theory approach [57], it was demonstrated that the BD ghost can be eradicated by introducing a secondary Hamiltonian constraint (see [58] for a review). Beyond its theoretical significance, this nonlinear construction offers an additional advantage by potentially explaining the observed late-time cosmic acceleration: by fine-tuning the graviton mass to an adequately small value, gravity becomes weaker at cosmological scales, and the graviton potential can effectively emulate a cosmological constant [59,60]. However, the basic versions of the theory exhibit instabilities at the perturbative level [61].
Recently, an extended Proca theory, namely Proca-Nuevo (PN) theory, appeared in the literature [80]. It corresponds to a non-linear theory of a massive spin-1 field, which can be extended by incorporating operators of the Generalized Proca class without compromising the primary constraint crucial for consistency. When the theory is combined with gravity it can be covariantized in models that support consistent and ghost-free cosmological solutions. In particular, these exhibit hot Big Bang solutions featuring a late-time self-accelerating epoch; additionally, at the perturbative level, certain sub-classes of the theory satisfy all stability and subluminality conditions while, intriguingly, gravitational waves propagate at the speed of light. Moreover, further cosmological solutions of the theory have been studied in [81], while the complete analysis of the constraint algebra has been performed in [82,83]. Finally, the quantum stability of Proca-Nuevo interactions was explored in [84], where it was found that Proca-Nuevo and generalized Proca theories exhibit analogous behaviour at the quantum level, opening the door to speculation that they are specific cases of a more general theory.
In the present work we confront massive Proca-Nuevo gravity and cosmology with cosmological data from Supernovae Ia (SNIa) and Cosmic Chronometer (CC) observations, and we extract constraints on the model parameters. The plan of the work is the following: In Section 2 we present extended Proca-Nuevo gravity and apply it in a cosmological framework. In Section 3 we describe the datasets used in our analysis, as well as the various information criteria used for model comparison. Then, in Section 4 we perform the observational confrontation and present the results. Finally, Section 5 is devoted to the conclusions.

Extended Proca-Nuevo gravity and cosmology
In this section we briefly present extended Proca-Nuevo (EPN) theory coupled with gravity, and we apply it in a cosmological framework, following [81]. The action of the covariant extended Proca-Nuevo theory, given in (2.1), is based on a Lorentz-invariant massive spin-1 field; in it, R is the Ricci scalar and L_M is the standard matter Lagrangian. In the massive spin-1 Lagrangian of [81], A_µ is the vector field, Λ is an energy scale that controls the strength of the vector self-interactions, and the coefficients α_n(X) and d_n(X) are functions of X = −A_µ A^µ/(2Λ²) (a subscript X denotes differentiation with respect to X). Additionally, [K] = tr(K) is the trace of the tensor K^µ_ν = X^µ_ν − δ^µ_ν [85,86], with X^µ_ν built from η_µν, the flat Minkowski metric, and the Stückelberg-inspired Lorentz tensor. Finally, the term G^µν in L_3 is the Einstein tensor. In summary, massive Proca-Nuevo gravity is a consistent theory of a massive spin-1 field coupled with gravity through minimal, non-minimal, and derivative terms.
Let us apply the above theory in a cosmological framework. In order to achieve this, we focus on the flat Friedmann-Robertson-Walker (FRW) metric ds² = −dt² + a²(t) δ_ij dx^i dx^j, with a(t) the scale factor, while for the vector field we adopt the homogeneous ansatz A_µ = (ϕ(t), 0, 0, 0) of [81], with ϕ(t) a scalar field.
Variation of the action with respect to the metric leads to the two Friedmann equations [81], while variation with respect to ϕ(t) gives the field equation (2.12). In the above equations ρ_m and p_m are respectively the energy density and pressure of the perfect matter fluid, while we have introduced an effective dark-energy sector with energy density ρ_DE and pressure p_DE. What makes the present scenario interesting is that the vector-field equation, and thus due to the ansatz (2.9) the scalar-field equation (2.12), is non-dynamical, i.e. it is just a constraint imposing an algebraic relation between H and ϕ. Hence, exactly due to the specific construction of the action (2.1), the resulting Friedmann equations depend only on the Hubble function and not on the vector condensate. Therefore, the effective dark-energy density and pressure acquire the simple forms of (2.15), which involve the model parameters c_m ∼ 1 and y [81]. Finally, the equations close by considering the matter conservation equation ρ̇_m + 3H(ρ_m + p_m) = 0, which according to the Friedmann equations (2.10), (2.11) leads to dark-energy conservation too, namely ρ̇_DE + 3H(ρ_DE + p_DE) = 0.
In the following we use the redshift z = a_0/a − 1 as the independent variable, setting the present scale factor to a_0 = 1 (from now on, a subscript "0" denotes the value of a quantity at present time). Furthermore, we focus on dust matter, namely we consider p_m = 0, and thus the matter conservation equation gives ρ_m = ρ_m0 (1 + z)³. Additionally, as usual, we introduce the density parameters Ω_i ≡ ρ_i/(3 M_Pl² H²) in (2.17), (2.18). Thus, the first Friedmann equation (2.10) becomes Ω_m + Ω_DE = 1, which applied at present time, i.e. at z = 0, and using (2.15), gives (2.19). Note that although the scenario has two intrinsic parameters, namely Λ and m, the specific combination in which they appear can be eliminated in favour of just Ω_m0. Inserting (2.19) back into (2.15) we can eliminate c_m y^{2/3}, as well as Λ, and therefore we obtain (2.20). Interestingly enough, in the scenario at hand the effective dark-energy density depends only on the present matter density parameter Ω_m0, as well as on the usual normalization factor, i.e.
the present Hubble parameter H_0, and thus it has the same number of free parameters as ΛCDM cosmology, which, we recall, has a constant dark-energy density. In summary, the scenario of extended Proca-Nuevo cosmology, assuming a spatially flat universe and considering only pressureless matter, has exactly the same number of parameters as ΛCDM cosmology; however, its cosmological behavior will in general be different. Inserting all these into the first Friedmann equation (2.10), we obtain a simple algebraic equation for the normalized Hubble function E(z) ≡ H(z)/H_0, in which the dark-energy contribution enters through a term proportional to E(z)^{−2/3}. It is apparent that for large redshifts the term E(z)^{−2/3} becomes much less important, and the model effectively reduces to the pure general relativity plus cold dark matter scenario. On the other hand, at late times, where H → H_0 and thus E(z) → 1, we observe that ρ_DE behaves close to ρ_DE|_ΛCDM, and for z = 0 we obtain full coincidence. Hence, the scenario at hand of Proca-Nuevo cosmology has exactly the same number of parameters as ΛCDM cosmology and reduces to the latter at early and present times; however, at intermediate times it exhibits a non-trivial deviation. In the next section we examine whether this intermediate-time deviation can improve the fit to the data compared to the ΛCDM paradigm.
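Since the first Friedmann equation reduces to an algebraic relation for E(z) ≡ H(z)/H_0, it can be solved numerically at each redshift. The sketch below assumes the normalized form E² = Ω_m0 (1+z)³ + (1 − Ω_m0) E^{−2/3}, which reproduces the limits discussed above (full coincidence with ΛCDM at z = 0, matter domination at high z); the exact coefficients should be read from (2.10), so this is an illustration rather than the paper's own code.

```python
import math

def E_of_z(z, Om0=0.3, tol=1e-12):
    """Solve E^2 = Om0*(1+z)^3 + (1-Om0)*E^(-2/3) for E = H/H0 by bisection.

    f(E) = E^2 - Om0*(1+z)^3 - (1-Om0)*E^(-2/3) is strictly increasing in E
    (both -E^(-2/3) and E^2 grow with E), so the root is unique.
    """
    target = Om0 * (1.0 + z) ** 3

    def f(E):
        return E * E - target - (1.0 - Om0) * E ** (-2.0 / 3.0)

    lo, hi = 1e-6, math.sqrt(target) + 2.0   # f(lo) < 0 and f(hi) > 0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

At z = 0 the solution is E = 1 identically, while for z ≫ 1 it approaches the matter-dominated value sqrt(Ω_m0 (1+z)³), in line with the limits quoted in the text.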

Observational Data and Analysis
In this section we review the datasets that we employ in our analysis, namely data from Supernovae Ia (SNIa) observations and Cosmic Chronometer (CC) direct measurements of the Hubble function. Additionally, we present the various information criteria used for model comparison. In what follows, ϕ^µ is the statistical vector that contains the free parameters at hand.

Supernovae Ia (SNIa)
One of the most extensively studied classes of standard candles in cosmology is the Type Ia Supernovae (SNIa). We incorporate the binned Pantheon sample [87], in which the entire dataset is approximated by N = 40 binned data points within the redshift range 0.01 ≲ z ≲ 1.6.
The chi-square function for this dataset is expressed in terms of the distance modulus, defined as µ_i = µ_B,i − M, with µ_B,i the apparent magnitude at maximum in the rest frame for redshift z_i. The introduction of the free parameter M is essential due to the dependence of the observable distance modulus, µ_obs, on assumptions related to H_0 and the fiducial cosmology. By absorbing artifacts originating from the transformation of flux observations to distance-modulus estimates, M enables independence from the fiducial cosmology, as discussed in [87]. Additionally, the theoretical form of the distance modulus is given in terms of the luminosity distance d_L(z).
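The construction above can be sketched in code. This is a minimal illustration assuming a diagonal covariance for the binned sample (the published analysis uses the full Pantheon covariance), with the flat-geometry luminosity distance d_L(z) = c(1+z)∫₀^z dx/H(x) evaluated by the trapezoid rule; the function names are our own:

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def luminosity_distance(z, H_of_z, n=1000):
    """d_L(z) = c (1+z) * integral_0^z dx / H(x), spatially flat geometry.

    Returns d_L in Mpc when H is given in km/s/Mpc (trapezoid rule, n steps)."""
    xs = [z * i / n for i in range(n + 1)]
    ys = [1.0 / H_of_z(x) for x in xs]
    integral = sum((ys[i] + ys[i + 1]) * 0.5 * (xs[i + 1] - xs[i]) for i in range(n))
    return C_KMS * (1.0 + z) * integral

def mu_theory(z, H_of_z):
    """Distance modulus mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return 5.0 * math.log10(luminosity_distance(z, H_of_z)) + 25.0

def chi2_snia(data, H_of_z, M=0.0):
    """Diagonal chi-square; data is a list of (z, mu_B_obs, sigma) tuples.

    The nuisance parameter M is subtracted from the observed mu_B,
    as in mu_i = mu_B,i - M."""
    return sum(((mu_B - M) - mu_theory(z, H_of_z)) ** 2 / sigma ** 2
               for z, mu_B, sigma in data)
```

Any background model enters only through the callable H_of_z, so the same routine serves both the EPN and the ΛCDM expansion histories.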

Cosmic Chronometers (CC)
We utilize data from the most recent compilation of the H(z) dataset, as provided by [88]. We focus on data obtained from cosmic chronometers (CC), namely massive galaxies evolving at a relatively slow pace during distinct intervals of cosmic time. By leveraging their differential age, it becomes possible to directly measure the Hubble rate [89]. A notable advantage of utilizing the differential age of passively evolving galaxies is that the resulting Hubble-rate measurements rely on minimal assumptions about the underlying cosmology, i.e. only an FRW geometry. Moreover, we exclude the CC measurements of [90], in accordance with Ref. [91], which found that they are not reproducible. Our analysis incorporates a total of N = 22 measurements of the Hubble expansion, covering the redshift range 0.07 ≲ z ≲ 2.0.
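Since the CC points are direct measurements of H(z), the corresponding chi-square is a simple sum over uncorrelated points; a minimal sketch, assuming independent Gaussian errors as stated above (function name is our own):

```python
def chi2_cc(data, H_of_z):
    """Chi-square for cosmic-chronometer H(z) measurements.

    data: list of (z, H_obs, sigma) tuples, with H in km/s/Mpc;
    H_of_z: callable returning the model prediction H(z)."""
    return sum((H_of_z(z) - H_obs) ** 2 / sigma ** 2 for z, H_obs, sigma in data)
```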

Likelihood analysis and model selection
Given P independent observational datasets and assuming Gaussian errors, the total likelihood function is the product of the individual likelihoods, L_tot(ϕ^µ) = ∏_p L_p(ϕ^µ), so that the corresponding total chi-square is the sum χ²_tot = Σ_p χ²_p. The statistical vector has dimension k, comprising the ν parameters of the model under consideration plus the ν_hyp hyper-parameters of the utilized datasets, yielding k = ν + ν_hyp.
In the above expression ϕ^µ is the vector containing the free parameters, which in our case is ϕ^µ = {Ω_m0, h, M}, and P = {CC, SNIa}. Lastly, to obtain the posterior distributions of the model parameters given the data, we use a Markov Chain Monte Carlo (MCMC) sampler. Instead of the standard Metropolis-Hastings algorithm, in order to avoid the need to fine-tune hyper-parameters, we use an affine-invariant MCMC sampler as implemented in the open-source Python package emcee [92,93], involving 1000 chains (walkers) and 2500 steps (states). Regarding the convergence of the MCMC algorithm, we use the traditional Gelman-Rubin criterion as well as an auto-correlation time analysis.
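Since convergence is monitored with the Gelman-Rubin criterion, a minimal pure-Python sketch of the diagnostic for a single parameter may be useful (the emcee pipeline itself is not reproduced here; this is the textbook potential-scale-reduction factor, not the paper's own implementation):

```python
def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for one parameter.

    chains: list of equally long lists of samples, one per independent chain.
    Values close to 1 indicate convergence."""
    m = len(chains)       # number of chains
    n = len(chains[0])    # samples per chain
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # between-chain variance B and mean within-chain variance W
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n   # pooled posterior-variance estimate
    return (var_hat / W) ** 0.5
```

Chains drawn from the same distribution give R-hat ≈ 1, while chains stuck in different regions of parameter space give R-hat well above 1.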
In the assessment of cosmological models based on their predictions with respect to the available data, we employ three widely recognized criteria: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Deviance Information Criterion (DIC) (see [94] and references therein).
The AIC criterion addresses the issue of model adequacy from an information-theoretic perspective. Specifically, it serves as an estimator of the Kullback-Leibler information and possesses the property of asymptotic unbiasedness. Under the standard assumption of Gaussian errors, the AIC estimator is expressed as [94] AIC = −2 ln(L_max) + 2k + 2k(k + 1)/(N_tot − k − 1), where L_max represents the maximum likelihood of the considered dataset(s) and N_tot is the total number of data points. For large N_tot the expression simplifies to AIC ≃ −2 ln(L_max) + 2k, which corresponds to the standard form of the AIC criterion. Consequently, it is advisable to utilize the modified AIC criterion in all cases [94]. The Bayesian Information Criterion (BIC) serves as an estimator of the Bayesian evidence, and its expression is given by BIC = −2 ln(L_max) + k ln(N_tot). (3.7) On the other hand, the Deviance Information Criterion (DIC) is formulated by incorporating concepts from both Bayesian statistics and information theory [94]. It is expressed as DIC = D(ϕ̄^µ) + 2C_B, with C_B = D̄(ϕ^µ) − D(ϕ̄^µ) the Bayesian complexity, where the overline represents the mean value. Moreover, D(ϕ^µ) corresponds to the Bayesian deviation, which, for a general class of distributions (the exponential family), is given by D(ϕ^µ) = −2 ln(L(ϕ^µ)). This is closely tied to the effective number of degrees of freedom [94], representing the number of parameters that actually contribute to the fit.
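The three criteria can be sketched in code; the expressions below follow the standard corrected-AIC, BIC, and complexity-based DIC forms quoted above, and the function names are our own:

```python
import math

def aic_c(ln_lmax, k, n_tot):
    """Corrected AIC: -2 ln L_max + 2k + 2k(k+1)/(N_tot - k - 1)."""
    return -2.0 * ln_lmax + 2.0 * k + 2.0 * k * (k + 1) / (n_tot - k - 1)

def bic(ln_lmax, k, n_tot):
    """BIC = -2 ln L_max + k ln(N_tot)."""
    return -2.0 * ln_lmax + k * math.log(n_tot)

def dic(deviance_samples, deviance_at_mean):
    """DIC = D(mean params) + 2 C_B, with C_B = mean(D) - D(mean params).

    deviance_samples: values of D = -2 ln L over the posterior chain;
    deviance_at_mean: D evaluated at the posterior-mean parameters."""
    d_bar = sum(deviance_samples) / len(deviance_samples)
    c_b = d_bar - deviance_at_mean   # Bayesian complexity
    return deviance_at_mean + 2.0 * c_b
```

For large N_tot the corrected AIC indeed reduces to the familiar −2 ln L_max + 2k, which is why the modified form can safely be used in all cases.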
In ranking a set of competing models based on the quality of their fit to the observational data, we employ the aforementioned criteria, focusing on the relative difference of the Information Criterion (IC) values within the given set of models. The difference ∆IC_model = IC_model − IC_min compares each model's IC value to the minimum IC value in the set of competing models. We use the rule (3.9) in order to assess the "degree of belief" in each model [95], where the index i runs over the set of n models. Finally, according to the Jeffreys scale [96], ∆IC ≤ 2 means that the model is statistically compatible with the model most favored by the data, 2 < ∆IC < 6 implies a moderate tension between the two models, while ∆IC ≥ 10 indicates a significant tension.
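The ranking step reduces to computing ∆IC for each model and reading it off the Jeffreys scale; a small sketch with our own function names (the range 6 ≤ ∆IC < 10 is not classified in the scale quoted above, so we label it as intermediate):

```python
def delta_ic(ic_values):
    """Compute Delta IC_model = IC_model - IC_min for a dict {name: IC}."""
    ic_min = min(ic_values.values())
    return {name: ic - ic_min for name, ic in ic_values.items()}

def jeffreys_verdict(d_ic):
    """Qualitative reading of a Delta IC value on the Jeffreys scale."""
    if d_ic <= 2.0:
        return "statistically compatible with the favored model"
    if d_ic < 6.0:
        return "moderate tension with the favored model"
    if d_ic >= 10.0:
        return "significant tension with the favored model"
    return "intermediate evidence against the model"
```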

Results and Discussion
We now have all the machinery needed to proceed to the observational confrontation of extended Proca-Nuevo gravity at the cosmological level. We perform the analysis described in the previous section, and we summarize the results for the posterior parameter values in Table 1, while in Table 2 we present the values of the model selection criteria.

Table 1. Observational constraints and the corresponding χ²_min for the extended Proca-Nuevo (EPN) cosmology, as well as for the concordance ΛCDM scenario, which is included for direct comparison, using the joint analysis of the SNIa/CC datasets.
As we observe, the scenario at hand is in agreement with observations. Nevertheless, the most interesting result is that, according to the model selection criteria, for the considered datasets the extended Proca-Nuevo cosmology is preferred over the concordance one. In particular, all considered model selection criteria (AIC, BIC, DIC) point to this result, with ∆IC ∼ 8 in all cases (see Table 2). To better interpret these ∆IC values, we calculate the "degree of belief" (3.9), given the data, which for the extended Proca-Nuevo scenario is P ∼ 0.99998, while the corresponding one for ΛCDM is P ∼ 10⁻⁶.
In comparison with other cosmological models that potentially perform better than ΛCDM, such as Running Vacuum models (RVMs) [98] (∆IC_max ≤ 3) or f(Q) cosmologies [99] (∆IC_max ∼ 3) and [44] (∆IC_max ∼ 0.3), extended Proca-Nuevo cosmology shows the best performance against the concordance cosmology. One might be inclined to interpret these findings directly. However, we refrain from making such a strong and definitive statement, since for the moment we have focused solely on background data for the late universe, without including a perturbation analysis, namely the utilization of fσ_8 data as was done in [44], and/or CMB data.
On the other hand, in terms of our results for the posterior parameter values, we note an approximately 10% increase in the Ω_m0 parameter compared to its value in the concordance model, accompanied by a corresponding reduction in the Hubble constant h. These values for extended Proca-Nuevo cosmology lie within approximately 2σ of the Planck results, namely Ω_m0 = 0.315 ± 0.007 and h = 0.674 ± 0.005 [100].
Moreover, we use the data and expression (2.23) to reconstruct the equation-of-state parameter of dark energy, depicting it in Fig. 2. In particular, in the upper panel of Fig. 2 we illustrate the present-day value of the equation-of-state parameter, w_DE(z = 0) ≡ w_DE,0, for various parameter vectors. It is observed that as we approach the concordance-model value w_DE,0 = −1, smaller values of Ω_m0 are attained. In the lower panel of Fig. 2 we illustrate the reconstructed evolution of w_DE(z) for the extended Proca-Nuevo scenario (red line), using a re-sampling of the posterior parameter distribution. For the benefit of the reader, we additionally include the ΛCDM value w_DE(z) = −1 (dashed black line), as well as the model-independent reconstruction of w_DE(z) taken from [97], which was obtained using similar datasets. We observe statistical compatibility, at the ∼1-2σ level, between our reconstructed w_DE(z)_EPN and the model-independent one from [97], which serves as an additional check of the correctness of our analysis.
Furthermore, the increased present-time value w_DE(z = 0)_EPN, in comparison with the corresponding value in the ΛCDM scenario, is related to the reduced h and the increased Ω_m0. Both the Ω_m0 increase and the H_0 decrease may affect the matter clustering; however, again we must restrain ourselves until the full perturbation analysis is performed. According to [101], for a cosmology which asymptotically reaches ΛCDM at large redshifts, a phantom crossing is needed in order to solve the H_0 tension while keeping G(z) ≡ G_N constant; according to the lower panel of Fig. 2, such a crossing does not seem to be realized. However, this may not be a problem: due to the spin-1 field of EPN gravity, interactions emerging at the perturbative level provide scale-dependent effects and significant modifications of the friction term in the matter overdensity δ_m equation via the modified sound velocity [81], thus violating part of the assumptions used by [101], e.g. the incorporation of new-physics effects only via ∆G ≡ G_eff(z) − G_N.

Conclusions
In this work we confronted massive Proca-Nuevo gravity with cosmological observations. The former is a recently proposed non-linear theory, inspired by dRGT massive gravity but involving a massive spin-1 field, which can be extended by incorporating operators of the Generalized Proca class without compromising the primary constraint crucial for consistency. The theory can then be coupled to gravity and covariantized in a way that exhibits consistent and ghost-free cosmological solutions. Specifically, one can recover the thermal history of the universe and obtain a late-time accelerated epoch. Additionally, these cosmological solutions are well-behaved at the perturbative level, without experiencing instabilities or superluminalities, which was the typical problem of the basic massive gravity scenarios.
When applied in a cosmological framework, massive Proca-Nuevo gravity induces extra terms in the Friedmann equations that can be collectively absorbed into an effective dark-energy sector. The interesting feature of the special non-linear construction is that the Klein-Gordon equation that arises from the scalar field parametrizing the vector ansatz is non-dynamical, namely it is just a constraint imposing an algebraic relation between the field and the Hubble function. This allows one to eliminate the field in favor of the Hubble function, and thus the effective dark-energy sector finally depends only on the Hubble function. In summary, the cosmological scenario of massive Proca-Nuevo gravity is different from the ΛCDM paradigm, while possessing exactly the same number of free parameters.
We used data from Supernovae Ia (SNIa) and Cosmic Chronometer (CC) observations in order to extract constraints on the free parameters of the theory. In particular, we provided the corresponding likelihood contours, as well as the best-fit values and the 1σ intervals for the parameters, showing that the scenario is in agreement with observations. Nevertheless, interestingly enough, the application of various information criteria showed that massive Proca-Nuevo gravity can be more efficient in fitting the data compared to the ΛCDM concordance model. The reason is that the scenario includes a dynamical dark energy, but without extra parameters compared to the ΛCDM model, and hence it can lead to improved behavior. Finally, the reconstructed dark-energy equation-of-state parameter showed statistical compatibility, at the ∼1-2σ level, with the model-independent, data-driven reconstruction.
In summary, massive Proca-Nuevo gravity and cosmology may potentially challenge the ΛCDM paradigm. However, there are definitely additional investigations that one must perform before reaching such a conclusion. Among these is the confrontation with observations at the perturbative level, using data from Large Scale Structure (i.e. fσ_8 data) and other probes. Such a study lies beyond the present work and is left for a future project.
"Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse)".
with d_L(z) = c (1 + z) ∫₀^z dx/H(x; ϕ^µ) the luminosity distance for a spatially flat geometry. Note that M and the normalized Hubble constant h exhibit an intrinsic degeneracy within the context of the Pantheon dataset.

Table 2.
Figure 1. The 1σ, 2σ and 3σ iso-likelihood contours for the extended Proca-Nuevo (EPN) cosmology, for the various two-dimensional subsets of the parameter space (Ω_m0, h, M), using the joint analysis of the SNIa/CC datasets. Furthermore, we present the mean values of the parameters.

Figure 2 .
Figure 2. Upper panel: Scatter plot of 8500 re-sampled points from the posterior parameter distributions with respect to the present value of the equation-of-state parameter w_DE(z = 0) for the extended Proca-Nuevo (EPN) cosmology. The point corresponding to the most probable values of the parameters is shown as a red cross. Lower panel: Reconstruction of the equation-of-state parameter (2.23) for EPN cosmology. The blue shaded areas correspond to the 1σ (deep blue) and 2σ (light blue) regions of the model-independent reconstruction of w_DE(z) taken from [97] using similar datasets. The red continuous curve corresponds to the most probable parameter values of the EPN scenario, and the black dashed line to the concordance value w_DE = −1.

Table 2 presents the values of the model selection criteria. The posterior distributions of the model parameters are presented in Fig. 1 as iso-likelihood contours on two-dimensional sub-spaces of the parameter space (triangle plot).