Hubble diagrams in statistically homogeneous, anisotropic universes

We consider the form of Hubble diagrams that would be constructed by observers in universes that are homogeneous but anisotropic, when averaged over suitably large length-scales. This is achieved by ray-tracing in different directions on the sky in families of exact inhomogeneous cosmological solutions of Einstein's equations, in order to determine the redshifts and luminosity distances that observers in these space-times would infer for distant astrophysical objects. We compare the results of this procedure to the Hubble diagrams that would be obtained by direct use of the large-scale-averaged anisotropic cosmological models, and find that observables calculated in the averaged model closely agree with those obtained from ray-tracing in all cases where a statistical homogeneity scale exists. In contrast, we find that in cosmologies whose spaces contain no statistical homogeneity scale, Hubble diagrams inferred from the averaged cosmological model can differ considerably from those that observers in the space-time would actually construct. We hope that these results will be of use for understanding and interpreting recent observations that suggest that large-scale anisotropy may have developed in the late Universe.


Introduction
Hubble diagrams describe the relationship between the redshift of light received from distant sources, and the luminosity distance to them. They are of fundamental importance in cosmology, as they have played a crucial historical role, and underpin a wide array of cosmological observables. Of course, Hubble diagrams are typically interpreted within the homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) models of the Universe, and within this class of models can be used to determine the isotropic Hubble rate, H_0, and deceleration parameter, q_0. However, they can also be constructed in anisotropic cosmological models. In such cases, the luminosity distance as a function of redshift depends on direction in space, and one could imagine constructing Hubble diagrams along certain lines of sight, so that H_0 and q_0 become functions on the sky. This issue has been brought to the fore by the apparent direction-dependence of dipole asymmetries in the CMB and matter distribution, which has led some to ask whether the late Universe might in fact be best described as being anisotropic on large scales [1].
In a previous paper we considered how such an anisotropy might potentially emerge from structure formation [2]. In this work, we consider what the observational consequences of such a scenario might be, focusing in particular on how Hubble diagrams might be constructed in cosmologies that have emergent large-scale anisotropy in their expansion, and how the values of H_0 and q_0 inferred from such diagrams might be related to the average expansion and shear of space itself. In order to investigate this possibility we use the Sachs formalism for propagating bundles of rays of light in general space-times, as well as our general framework for incorporating the direction-dependent back-reaction of inhomogeneities on the large-scale properties of space. Our approach is entirely relativistic and non-perturbative, and will therefore allow us to test explicitly whether the optical properties of inhomogeneous cosmological models can be described by an anisotropic average, even in situations where the inhomogeneities are of very large scale and/or amplitude.
The problem of calculating luminosity distances in inhomogeneous cosmologies is far from a new one [3], with numerous studies having been performed in (for example) Swiss cheese models [4-9], Lemaître-Tolman-Bondi and Szekeres cosmologies [10-17], Lindquist-Wheeler models [18-21], post-Newtonian cosmologies [22], N-body simulations [23, 24, 26], models constructed using numerical relativity [25, 27, 28], and cosmographic analyses that construct the low-z Hubble diagram in a generalised way [29-32]. In this work, we will extend this field by investigating the extent to which Hubble diagrams in inhomogeneous universes can be accurately described by a large-scale averaged model that is anisotropic. We will study not only the all-sky average of the Hubble diagram (i.e. the monopole), but also the full variation of the luminosity distance function across the skies of many observers. This is made possible by the framework we built in Ref. [2], which is an extended version of the spatial averaging procedure of Buchert [33]. We find that our formalism can account well for that full variation, as long as the average model is allowed to be anisotropic, and as long as the spatial averaging is done on an appropriate foliation of the space-time.
The family of space-times we choose to consider for our study are dust-filled and plane symmetric. These solutions are discussed in detail in Section 4, after we discuss our averaging formalism in Section 2, and the formalism we use to calculate distance measures in Section 3. In Section 5 we bring these techniques together in the context of inhomogeneous and anisotropic space-times with zero back-reaction. We show that in that case there is a unique choice of homogeneous model corresponding to the scalar averages, and that within this class of models the averaged geometry permits a very good understanding of the Hubble diagrams of observers within it. We follow the same procedure in Sections 6 and 7, for space-times with non-zero back-reaction. Section 6 deals with a universe that is homogeneous, but where the surfaces of homogeneity are tilted with respect to the matter distribution. In contrast, Section 7 considers an inhomogeneous and anisotropic space-time where back-reaction is small but non-zero. We conclude in Section 8. Throughout this paper, we use units such that c = 8πG = 1, and adopt the metric signature (−, +, +, +). Latin indices from the beginning of the alphabet a, b, c, ... denote space-time indices, whereas those from the middle of the alphabet i, j, k, ... are reserved for space only.

Cosmological Averaging with Anisotropy
In order to extract the large-scale cosmological behaviour of an inhomogeneous space-time one requires an averaging, or coarse-graining, procedure. For this we use the spatial averaging procedure pioneered by Buchert [33], originally used in the context of the 1+3-decomposition of Ehlers and Ellis [37]. This is one of the most well-established averaging formalisms available, and can be applied to any covariantly defined scalar S in a three-dimensional spatial domain D of constant time t as follows:

⟨S⟩ := (1/V_D) ∫_D S √((3)g) d³x ,

where (3)g_ab = g_ab + n_a n_b is the induced spatial metric in D, where the spatial volume of the domain is given by V_D = ∫_D d³x √((3)g(t, x^i)), and where n_a = −N(t, x^i) ∇_a t is a time-like normal to D, with N being the lapse function. The spatial volume, defined in this way, can be used to define an effective scale factor for D of the form a_D := (V_D / V_D,0)^{1/3}, where V_D,0 is the present-day volume of the domain. Within this approach, the time-evolution of scalar averages can be computed using the commutation rule

∂_t ⟨S⟩ = ⟨∂_t S⟩ + Cov(NΘ, S) ,

where Cov(·, ·) indicates a covariance of its two arguments. In the above equation Θ = ∇_a n^a is the isotropic expansion associated with the congruence n^a, such that its (lapse-weighted) average satisfies ⟨NΘ⟩ = 3 ∂_t a_D / a_D. In Buchert's original approach [33], spatial averaging is carried out on the scalars that are required to describe a homogeneous and isotropic FLRW universe, resulting in the following averaged versions of the Raychaudhuri and Hamiltonian constraint equations:

3 ∂_t² a_D / a_D = −⟨ρ⟩/2 + B ,   (2.4)
3 (∂_t a_D / a_D)² = ⟨ρ⟩ − ⟨(3)R⟩/2 − B/2 ,   (2.5)

where we have taken the matter content to be pressureless dust, and set the lapse equal to unity and the cosmological constant to zero. Here ρ = T_ab n^a n^b is the local energy density of matter, and (3)R is the Ricci scalar curvature of the three-spaces orthogonal to n^a. The term B, often referred to as the 'kinematical back-reaction scalar', is given by

B = (2/3) Var(Θ) − 2⟨σ²⟩ ,

where σ_ab is the shear tensor associated with n^a, and σ² = σ_ab σ^ab / 2. Buchert's equations (2.4)-(2.5) provide a useful formalism for modelling the large-scale expansion of an inhomogeneous cosmological space-time, but do not allow one to calculate the anisotropy in the emergent expansion, which is the subject of the next section.
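As a concrete illustration, the commutation rule above can be checked numerically in a one-dimensional toy setting. The volume element and scalar field below are arbitrary illustrative choices for the purpose of the test, not quantities taken from any of the space-times studied in this paper:

```python
import numpy as np

# Toy numerical check of the commutation rule
#   d/dt <S> = <dS/dt> + Cov(Theta, S)   (with lapse N = 1),
# on a periodic 1D grid.  The volume element v = sqrt(g) and the scalar S
# below are arbitrary illustrative choices, not taken from the paper.
r = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)

v     = lambda t: t + 0.3*t**2*np.sin(r)**2        # local volume element
Theta = lambda t: (1.0 + 0.6*t*np.sin(r)**2)/v(t)  # Theta = d/dt ln v
S     = lambda t: np.sin(r) + t*np.cos(r)**2       # arbitrary scalar field
dSdt  = lambda t: np.cos(r)**2

def avg(F, t):
    """Volume-weighted spatial average <F> = int F v dr / int v dr."""
    return np.sum(F*v(t)) / np.sum(v(t))

t, eps = 1.0, 1e-5
lhs = (avg(S(t+eps), t+eps) - avg(S(t-eps), t-eps)) / (2*eps)
rhs = avg(dSdt(t), t) + avg(Theta(t)*S(t), t) - avg(Theta(t), t)*avg(S(t), t)
# lhs and rhs agree to the accuracy of the finite differencing
```

The covariance term is non-zero here, so the check confirms that the time derivative of an average genuinely differs from the average of the time derivative whenever the expansion is inhomogeneous.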

Averaging to an anisotropic universe
If one has reason to believe that there may be substantial anisotropy, or simply wishes to test how well a certain anisotropic model fits observational data, then it is necessary to consider not only the average of the isotropic expansion, but also those of a number of other quantities (as in Ref. [2]). The first step in this approach is to decompose the space-time not only with respect to the time-like vector n^a, but also a space-like vector field m^a, which could correspond (for example) to the axis of a cosmic dipole. The general name for such a decomposition, and the equations that result, is the 1+1+2-formalism [39]. It leads to a large set of covariantly-defined scalars, which may then be averaged in order to obtain an emergent anisotropic cosmology that is locally rotationally symmetric (LRS) [40]. A locally rotationally symmetric space-time is one in which every point has associated with it a single preferred space-like direction, about which rotations leave the geometry unchanged. They do not need to be homogeneous in general, but when they are we will refer to them as 'LRS cosmologies'. Such LRS cosmologies exist within the Bianchi classification, and can be of types I, II, III, V, VII or IX, with types I, V, VII and IX containing FLRW as special cases.
Assuming that the matter content of the universe is well-described by pressureless dust, all the LRS space-times we are interested in can be described purely in terms of the expansion scalar Θ, the energy density ρ, and the following three additional covariantly-defined 1+1+2-scalars:

ϕ := D_a m^a ,  Σ := m^a m^b ∇_⟨a n_b⟩ ,  E := E_ab m^a m^b ,

which correspond to the space-like expansion of m^a, the shear of n^a, and the electric part of the Weyl tensor, respectively. In the above, we have defined D_a to be the covariant derivative projected into space-like hypersurfaces of constant t, E_ab := C_acbd n^c n^d is the electric part of the Weyl tensor, and the notation t_⟨ab⟩ denotes a rank-2 tensor t_ab that has been made symmetric and trace-free as well as being projected into the hypersurfaces orthogonal to n^a.
After averaging, the LRS scalars {Θ, ρ, ϕ, Σ, E} should therefore be thought of as non-local, large-scale quantities that evolve according to a set of evolution equations, and which obey the constraints given in Eqs. (2.10)-(2.17). These are a reduced set of equations, tailored to the case of plane-symmetric dust-filled space-times, and with the lapse function chosen to be N = 1. For the full set of 15 equations¹, and a presentation and explanation of their origin, the reader is referred to Ref. [2].
The back-reaction scalars, B_i, in Eqs. (2.10)-(2.17) encode the contribution of inhomogeneities to the averaged equations. In our current setting, these scalars are composed of averages of spatial gradients of scalars along the preferred direction m^a, as well as of the variances (Var) and covariances (Cov) of various of the covariantly-defined scalars, such as Cov(ϕ, Σ) and Cov(Σ, ρ), where Var(S) := Cov(S, S) for any scalar S.

Homogeneous, anisotropic LRS cosmologies
Given a set of averaged scalars, one may wish to construct a homogeneous but anisotropic LRS cosmology out of them, in order to interpret the large-scale behaviour of the averaged space-time. In such a scenario, the existence of any non-zero back-reaction scalars would indicate that the inhomogeneities that have been averaged away are having an influence on the dynamics of the large-scale cosmology. For the simplest possible example, the averaged scalars ⟨Θ⟩ and ⟨Σ⟩ can be used to specify a Bianchi type-I line-element of the form

ds² = −dt² + A²(t) dx² + B²(t) (dy² + dz²) ,   (2.18)

where A(t) and B(t) are scale factors that are related to our averaged scalars by

⟨Θ⟩ = Ȧ/A + 2Ḃ/B  and  ⟨Σ⟩ = (2/3)(Ȧ/A − Ḃ/B) .

Similarly, one can write down a Bianchi type-V line-element of the form

ds² = −dt² + A²(t) dx² + e^{2βx} B²(t) (dy² + dz²) ,

where ⟨Θ⟩ and ⟨Σ⟩ are related to A(t) and B(t) as in the equations above, and where the spatial curvature parameter β satisfies β² = −A² (3)R / 6, where (3)R is the Ricci scalar curvature of the hypersurfaces orthogonal to n^a. Making use of the Hamiltonian constraint equation [37], this can in turn be used to relate β to ⟨ρ⟩, ⟨Θ⟩ and ⟨Σ⟩. In both cases we recover an FLRW model when A(t) = B(t), as can be seen from the vanishing of the shear in this case.
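The map between the Bianchi type-I scale factors and the averaged scalars can be sketched as follows, assuming the common convention ⟨Θ⟩ = Ȧ/A + 2Ḃ/B and ⟨Σ⟩ = (2/3)(Ȧ/A − Ḃ/B); sign conventions for the shear scalar vary between authors:

```python
import numpy as np

# Hedged sketch: kinematic scalars of the LRS Bianchi type-I line-element
#   ds^2 = -dt^2 + A^2(t) dx^2 + B^2(t)(dy^2 + dz^2),
# using one common sign convention for the shear scalar (conventions vary).
def kinematics(A, B, t, dt=1e-6):
    dA = (A(t+dt) - A(t-dt)) / (2*dt)   # numerical time derivatives
    dB = (B(t+dt) - B(t-dt)) / (2*dt)
    HA, HB = dA/A(t), dB/B(t)
    Theta = HA + 2.0*HB                 # volume expansion <Theta>
    Sigma = (2.0/3.0)*(HA - HB)         # shear along the preferred axis
    return Theta, Sigma

# FLRW (Einstein-de Sitter) limit A = B = t^(2/3): shear vanishes, Theta = 2/t
Th, Sg = kinematics(lambda t: t**(2/3), lambda t: t**(2/3), 1.0)
```

The FLRW limit at the end illustrates the statement in the text that the shear vanishes whenever A(t) = B(t).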

Redshifts and Distance Measures in General Space-Times
To achieve our goal of testing how well emergent cosmological models describe observations in inhomogeneous and anisotropic universes, we will require a general understanding of how to calculate redshifts and distance measures in General Relativity. There are three ingredients required for this: (i) a ray-tracing procedure to determine the paths followed by photons, (ii) a formalism to describe null geodesic congruences, and (iii) a way to calculate distance measures from the properties of those congruences. These are explained below.

Ray tracing
In order to calculate redshifts and luminosity distances we need to know the trajectories of rays of light in a given space-time. Under the eikonal approximation these will be null geodesics with k^b ∇_b k^a = 0, where k^a is the tangent vector to the ray. Finding these paths can be achieved straightforwardly by constructing the Hamiltonian H := g^{ab} k_a k_b / 2, subject to the constraint H = 0, and using Hamilton's equations:

dx^a/dλ = ∂H/∂k_a = g^{ab} k_b  and  dk_a/dλ = −∂H/∂x^a = −(1/2) (∂_a g^{bc}) k_b k_c .   (3.1)

These equations will be integrated backwards in time from the observer to the source, by choosing k^a to be past-directed. For the initial conditions, one requires a space-time location x^a_obs, and a propagation direction e^a_obs along which rays arrive at the observer. In defining e^a_obs, we have decomposed the tangent vector k^a with respect to the time-like vector n^a, such that k^a = −E (n^a − e^a), where e^a n_a = 0 and e^a e_a = 1. The photon energy is then E = k_a n^a, and e^a gives the direction of the ray in the space-like hypersurfaces orthogonal to n^a.
The initial direction of propagation e^a_obs is chosen by specifying the angles (θ_c, ϕ_c) on the observer's celestial sphere. These angles pick out a space-like unit vector

e^a = sin θ_c cos ϕ_c (e_1)^a + sin θ_c sin ϕ_c (e_2)^a + cos θ_c (e_3)^a ,

where {(e_i)^a} is an orthonormal triad spanning the hypersurfaces orthogonal to n^a. Aligning e^a_obs with this unit vector, and using the null condition k^a k_a = 0, is then sufficient to determine k^a at the observer (up to the specific value of E). By varying the observing angles (θ_c, ϕ_c), and the observer's location in space-time x^a_obs, we can then calculate the path of any null geodesic. We can of course also calculate the redshift along any particular geodesic, using the following definition:

1 + z := E_source / E_obs = (k_a n^a)|_source / (k_a n^a)|_obs .

In order to construct the Hubble diagram, however, it is also necessary to consider congruences of such curves, which is what we will do in the next section.
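The ray-tracing procedure described above can be sketched as follows, using an Einstein-de Sitter test metric (not one of the models studied later) so that the numerically computed redshift can be checked against the exact FLRW result 1 + z = a(t_obs)/a(t_source):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of the ray-tracing step, in an Einstein-de Sitter test metric
# ds^2 = -dt^2 + a^2(t)(dr^2 + ...), a = t^(2/3), rather than one of the
# paper's plane-symmetric models.  State vector: (t, r, k_t, k_r), with k_a
# the covariant components of the tangent vector.
a  = lambda t: t**(2/3)
da = lambda t: (2.0/3.0)*t**(-1/3)

def hamilton(lam, s):
    t, r, kt, kr = s
    return [-kt,                       # dt/dlam   = dH/dk_t = g^tt k_t
            kr/a(t)**2,                # dr/dlam   = dH/dk_r = g^rr k_r
            da(t)/a(t)**3 * kr**2,     # dk_t/dlam = -dH/dt
            0.0]                       # dk_r/dlam = 0 (r is a cyclic coordinate)

# Past-directed null ray from t_obs = 1 with photon energy E_obs = k_a n^a = 1:
s0 = [1.0, 0.0, 1.0, a(1.0)]
hit = lambda lam, s: s[0] - 0.25       # terminate when the ray reaches t = 0.25
hit.terminal = True
sol = solve_ivp(hamilton, [0.0, 10.0], s0, events=hit, rtol=1e-10, atol=1e-12)

t_s, r_s, kt_s, kr_s = sol.y[:, -1]
z = kt_s - 1.0                         # 1 + z = E_source / E_obs
# In FLRW this must reproduce 1 + z = a(t_obs)/a(t_source)
```

The null constraint H = 0 is preserved along the integration, which provides a useful consistency check on any implementation of this kind.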

Geodesic null congruences
By considering a family of many infinitesimally separated null geodesics with tangent k^a, one can construct a congruence that is orthogonal to a two-dimensional screen space, which defines the projection tensor s_ab = g_ab + n_a n_b − e_a e_b. By parallel-transporting the screen-space basis along the null geodesic congruence, one can study how the properties of that congruence evolve, allowing us to study the optical properties of the space-time.
The expansion and shear of the screen space are defined by θ := (1/2) ∇_a k^a and σ̃_ab := (s^c_(a s^d_b) − (1/2) s_ab s^cd) ∇_c k_d, and the screen space itself is spanned by a pair of orthonormal space-like vectors s_1^a and s_2^a, or equivalently by the complex null vector s^a = (s_1^a − i s_2^a)/√2 and its complex conjugate s̄^a. The screen-space projection tensor is related to these basis vectors by s_ab = s¹_a s¹_b + s²_a s²_b = s_a s̄_b + s̄_a s_b. Since the null shear is symmetric, trace-free and fully projected into the (2-dimensional) screen space, it contains only two independent degrees of freedom. It can therefore be characterised entirely by a single complex scalar σ := −s^a s^b σ̃_ab. The evolution of θ and σ along the null geodesics is governed by the Sachs equations [41-43],

dθ/dλ = −θ² − σσ̄ + Φ_00 ,   (3.4)
dσ/dλ = −2θσ + Ψ_0 ,   (3.5)

where Φ_00 = −R_ab k^a k^b / 2 and Ψ_0 = C_abcd s^a k^b s^c k^d are the Newman-Penrose scalars corresponding to Ricci focusing and Weyl lensing, respectively, with the null energy condition implying Φ_00 ≤ 0. These equations determine the evolution of the null expansion and shear, and are complemented by the following equation for the evolution of redshift:

dz/dλ = (1 + z)² H_∥ ,   (3.6)

where

H_∥ := Θ/3 + σ_ab e^a e^b

is the rate of expansion of space in the direction of the ray of light, and where we have normalised the affine parameter such that E_obs = 1.
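The screen-space algebra quoted above is easy to verify numerically in a Minkowski frame; a minimal check that the two expressions for the projector s_ab agree, and that the screen space is genuinely two-dimensional:

```python
import numpy as np

# Check of the screen-space algebra in a Minkowski frame: the projector
# s_ab = g_ab + n_a n_b - e_a e_b equals s1 s1 + s2 s2 and also
# s sbar + sbar s, and projects onto a two-dimensional screen.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
n = np.array([1.0, 0.0, 0.0, 0.0])     # observer 4-velocity n^a
e = np.array([0.0, 1.0, 0.0, 0.0])     # propagation direction e^a
s1 = np.array([0.0, 0.0, 1.0, 0.0])    # orthonormal screen basis
s2 = np.array([0.0, 0.0, 0.0, 1.0])
s = (s1 - 1j*s2)/np.sqrt(2.0)          # complex null screen vector

n_d, e_d = g @ n, g @ e                # lowered indices
s_ab = g + np.outer(n_d, n_d) - np.outer(e_d, e_d)
alt1 = np.outer(g @ s1, g @ s1) + np.outer(g @ s2, g @ s2)
alt2 = np.outer(g @ s, g @ np.conj(s)) + np.outer(g @ np.conj(s), g @ s)
# the mixed trace s^a_a gives the screen dimension, 2
dim = np.trace(np.linalg.inv(g) @ s_ab)
```

The two-dimensionality of the projector is what reduces the null shear to the single complex degree of freedom σ used in the Sachs equations.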

Distance measures
The measure of distance in which we are ultimately interested, for the construction of Hubble diagrams, is the luminosity distance, d_L. However, for a single observer seeing multiple astrophysical sources it is most straightforward to calculate the angular diameter distance d_A, which is related to the null expansion scalar θ by

θ = d(ln d_A)/dλ .   (3.7)

Working with d_A, the Sachs equations can be re-written as

d²d_A/dλ² = (Φ_00 − σσ̄) d_A ,   (3.8)
dσ/dλ = −2σ d(ln d_A)/dλ + Ψ_0 ,   (3.9)

with the initial conditions

d_A |_obs = 0  and  (d d_A/dz) |_obs = 1/H_∥0 ,

where H_∥0 denotes H_∥ evaluated at the observer's location. One may then use d_A to determine the luminosity distance using Etherington's reciprocity theorem [44-46]:

d_L = (1 + z)² d_A ,

which allows a Hubble diagram to be constructed from the function d_L(z) along any given line of sight. However, in practice it is often useful to display the Hubble diagram in terms of the distance modulus, defined by

μ := 5 log_10 (d_L / 10 pc) .   (3.12)

We will use the distance modulus in much of the analysis we perform in Sections 5-7.
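Putting the pieces together, the focusing equation for d_A can be integrated alongside the geodesic equations. As a sanity check we again use an Einstein-de Sitter test metric (shear-free, so Ψ_0 = 0), where the result can be compared against the well-known closed form d_A(z) = (2/H_0)(1+z)^{-1}(1 − (1+z)^{-1/2}); the affine normalisation E_obs = 1 is assumed:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: the focusing equation d^2(d_A)/dlam^2 = Phi00 d_A integrated
# along a radial null ray in Einstein-de Sitter (shear-free, so Psi0 = 0),
# then converted to d_L via Etherington's theorem.  Units c = 8piG = 1 and
# affine normalisation E_obs = 1 are assumed; H0 = 2/3 since t0 = 1.
a   = lambda t: t**(2/3)
da  = lambda t: (2.0/3.0)*t**(-1/3)
rho = lambda t: 4.0/(3.0*t**2)         # EdS dust density

def rhs(lam, s):
    t, kt, dA, v = s                   # v = d(d_A)/dlam ; k_r = 1 is conserved
    Phi00 = -0.5*rho(t)*kt**2          # Phi00 = -R_ab k^a k^b / 2 for dust
    return [-kt, da(t)/a(t)**3, v, Phi00*dA]

stop = lambda lam, s: s[0] - 0.25
stop.terminal = True
sol = solve_ivp(rhs, [0.0, 10.0], [1.0, 1.0, 0.0, 1.0], events=stop,
                rtol=1e-10, atol=1e-12)

t_s, kt_s, dA_num, v_s = sol.y[:, -1]
z = kt_s - 1.0
dA_exact = 3.0*(1.0 - (1.0 + z)**-0.5)/(1.0 + z)   # closed-form EdS result
dL = (1.0 + z)**2 * dA_num                          # Etherington reciprocity
```

The same structure carries over to the anisotropic models of later sections, with the difference that Ψ_0 and the null shear must then be retained.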

Plane-Symmetric Cosmological Models
The next ingredient that we need in order to implement our formalism is a set of inhomogeneous cosmological models. Our intent is to average these models using the equations from Section 2, and to calculate observables in both the averaged and un-averaged space-times using the approach outlined in Section 3. An appropriate choice is provided by the family of plane-symmetric dust-filled cosmologies, which exhibit a single preferred space-like direction orthogonal to the homogeneous planes of symmetry, and which can exhibit arbitrary amounts of inhomogeneity along this direction. Such plane-symmetric space-times are necessarily LRS, as every point in each plane of symmetry has associated with it a 1-parameter isotropy group consisting of rotations within that plane [34].
The metric for plane-symmetric space-times can be written in the general form

ds² = −e^{2ν(t)} dt² + e^{2λ(t,r)} dr² + R²(t,r) (dy² + dz²) .   (4.1)

This very general class of metrics includes the spatially-flat and negatively-curved FLRW, degenerate Kasner (with identical scale factors in the y and z directions), and vacuum Taub solutions as special cases. In this case, the solutions to Einstein's equations can be split into two distinct classes: (i) those with ∂R/∂r = 0, and (ii) those with ∂R/∂r ≠ 0. Both of these classes allow for significant inhomogeneity, but only the second will turn out to have non-zero back-reaction scalars, B_i. The y and z coordinates in Eq. (4.1) label points in the planes of symmetry, while t and r correspond to time and space directions orthogonal to those planes. For these models, a natural choice for unit vectors in the preferred time-like and space-like directions is prescribed by n_a = −e^{ν(t)} δ^t_a and m_a = e^{λ(t,r)} δ^r_a, and the scalars {Θ, ρ, ϕ, Σ, E} are naturally all functions of t and r only. Plane-symmetric space-times are therefore very well-suited for study within our formalism, and for the remainder of this section we will outline the form the metric functions ν(t), λ(t,r) and R(t,r) must take in order to be solutions to Einstein's equations.

Dust-filled solutions with R ′ = 0
When R′ = 0, where a prime denotes partial differentiation with respect to r, one can redefine the time coordinate such that R(t) = t. The metric functions can then be expressed as [34, 35]

e^{2ν(t)} = t/t_0 ,

with e^{2λ(t,r)} determined by two arbitrary functions c_1(r) and c_2(r), where t_0 is a constant with units of time. By a further re-definition of the time coordinate, t → t_0 (3t/2t_0)^{2/3}, we can set ν = 0. Finally, the factors of 3/2 and t_0 can be absorbed into c_1(r) and c_2(r), to end up with the general solution in the form

ds² = −dt² + (c_1(r) t^{2/3} + c_2(r) t^{−1/3})² dr² + t^{4/3} (dy² + dz²) .   (4.3)

The line-element above corresponds to the Einstein-de Sitter solution when c_2 vanishes, and the degenerate Kasner solution when c_1 vanishes. The 1+1+2-scalars in this case are given by

Θ = (2c_1 t + c_2) / (t (c_1 t + c_2)) ,  Σ = −2c_2 / (3t (c_1 t + c_2)) ,  E = −4c_2 / (9t² (c_1 t + c_2)) ,   (4.4)

and ϕ = 0. Due to the plane symmetry, averages over space-like domains reduce to ratios of one-dimensional integrals over the specified range of the r coordinate, such that

⟨S⟩ = ∫ S e^{λ(t,r)} dr / ∫ e^{λ(t,r)} dr

for any scalar S = S(t,r). Evaluating the back-reaction scalars in Section 2.1, one finds that for any choice of the functions c_1(r), c_2(r), and for any interval (r_min, r_max), all B_i = 0, as long as c_1(r) and c_2(r) are integrable.
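The reduction of averages to one-dimensional integrals can be illustrated numerically. The radial volume weight below is assumed to be e^λ = c_1(r) t^{2/3} + c_2(r) t^{−1/3}, which is consistent with the EdS (c_2 = 0) and Kasner (c_1 = 0) limits discussed above; the oscillatory choices of c_1 and c_2 anticipate the example of Section 5:

```python
import numpy as np

# Numerical illustration that plane symmetry reduces spatial averages to
# ratios of 1D r-integrals.  The radial volume weight is ASSUMED here to be
# exp(lambda) = c1(r) t^(2/3) + c2(r) t^(-1/3), consistent with the EdS
# (c2 = 0) and Kasner (c1 = 0) limits quoted in the text.
eta, q = 1.0/3.0, 100.0
r = np.linspace(0.0, np.pi/q, 20001)
c1 = 2.0*np.cos(q*r)**2
c2 = 2.0*eta*np.sin(q*r)**2

def trap(y):                            # trapezoidal rule on the uniform grid
    return float(np.sum(y[1:] + y[:-1])*0.5*(r[1] - r[0]))

def avg(S, t):                          # <S> = int S e^lam dr / int e^lam dr
    w = c1*t**(2/3) + c2*t**(-1/3)
    return trap(S*w)/trap(w)

# Directional scale factor from the averaged volume element:
A = lambda t: trap(c1*t**(2/3) + c2*t**(-1/3))/(np.pi/q)
# A(1) equals the plain r-averages <c1> + <c2> = 1 + eta; at late times the
# EdS-like c1 term dominates the weight, so the weighted average of c1
# itself drifts towards <c1^2>/<c1> = 3/2.
```

This also makes concrete the statement below that the weighted averages of c_1 and c_2 acquire a time dependence through the t-dependent weight.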
As an example, to illustrate how this works, consider the back-reaction scalar B_1. Writing each of the averages that enter it as a ratio of one-dimensional integrals of the form above, where all integrals should be understood to be between r_min and r_max, one finds that its terms cancel identically. This clearly demonstrates that B_1 vanishes, as long as all the integrals involved are well-defined. Similar calculations show that the other B_i also vanish. This pleasing result can be understood by thinking about the averages of c_1(r) and c_2(r):

⟨c_1⟩ = ∫ c_1 e^{λ} dr / ∫ e^{λ} dr  and  ⟨c_2⟩ = ∫ c_2 e^{λ} dr / ∫ e^{λ} dr ,

where the averages pick up a time dependence due to the presence of t in the integrands.
From these, we can define an effective line-element

ds² = −dt² + A²(t) dr² + B²(t) (dy² + dz²) ,   (4.7)

where A(t) and B(t) are effective scale factors constructed from the averaged metric functions, with B(t) = t^{2/3}. This means that the averaged geometry behaves like a degenerate (i.e. LRS) Bianchi type-I cosmology, with directionally-dependent scale factors A(t) and B(t). As the metric in Eq. (4.7) is a member of the target space of solutions in our averaging formalism (i.e. it is an LRS Bianchi space-time), there is therefore no back-reaction. This result means that we can always identify a unique homogeneous model that describes the large-scale dynamics, as long as we are prepared for that model to be anisotropic². Finally, the quantities required to solve Eqs. (3.4)-(3.6) in the R′ = 0 class of plane-symmetric space-times follow directly from the metric in Eq. (4.3), where k^a is the tangent vector to the ray of light. With these quantities calculated as a function of affine distance along every null ray arriving at the observer, one can directly solve the Sachs optical equations (3.8) and (3.9).

Dust-filled solutions with R ′ ̸ = 0
Let us now consider the class of plane-symmetric dust-filled cosmologies with R′ ≠ 0. In this case, one has

Ṙ′ = λ̇ R′ ,

where dots denote partial derivatives with respect to t, and where we have assumed T_tr = 0 by aligning ∂_t with the flow-lines of the dust. This equation is solved by λ = ln R′ − ln f, where f = f(r) is any arbitrary function of r. We can therefore write the metric as

ds² = −dt² + (R′²/f²) dr² + R² (dy² + dz²) ,   (4.10)

where we have assumed that the matter is dust and chosen the time coordinate to set ν = 0. The kinematic 1+1+2-scalars are then given by

Θ = Ṙ′/R′ + 2Ṙ/R  and  Σ = (2/3)(Ṙ′/R′ − Ṙ/R) ,

²Note, however, that if one were to take the target space for one's averaging procedure to be FLRW, as in most approaches to scalar averaging in cosmology, then this effective line-element cannot be mapped exactly onto that space. Hence, performing averages that map this class of space-times onto an FLRW cosmology must necessarily involve some non-zero amount of back-reaction. This exemplifies the usefulness of our approach, which is designed specifically for understanding space-times with large-scale anisotropy. It also provides a way to understand the result that the square of the shear, σ², need not be small [36], which would usually be interpreted as a contribution to the back-reaction scalar B, but in the present case would be accounted for by the emergent large-scale anisotropy.
while the Ricci and Weyl curvature scalars follow similarly from the metric functions R and f, and ϕ = 2f/R. We note that in the R′ ≠ 0 class we have ϕ ≠ 0, in contrast to what happens in the case R′ = 0. Within this class, the rest of Einstein's equations are solved completely if we write the following constraint equation [34]:

Ṙ² = 2m(r)/R + f²(r) ,   (4.14)

where m(r) is another arbitrary function, which can be related to the energy density of the dust according to

ρ = 2m′ / (R² R′) .

Eq. (4.14) is solved in parametric form by

R = (m/f²)(cosh ξ − 1)  and  t − t_0(r) = (m/f³)(sinh ξ − ξ) ,

where t_0(r) is a third free function, which can be thought of as setting the bang time at each value of r, such that the coordinate extent of the space-time is bounded by the curve t = t_0(r). A choice of the three free functions f(r), m(r) and t_0(r) specifies a solution to Einstein's equations of the form given in Eq. (4.10). These constitute two independent functional degrees of freedom, as there remains a freedom in re-parameterising the r coordinate.
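A sketch of how a parametric solution of this kind can be inverted numerically to obtain R(t, r), assuming the hyperbolic LTB-type form R = (m/f²)(cosh ξ − 1), t − t_0 = (m/f³)(sinh ξ − ξ) for the constraint Ṙ² = 2m/R + f²; the parameter values are purely illustrative:

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch: numerically inverting a hyperbolic-type parametric solution
#   R = (m/f^2)(cosh(xi) - 1),  t - t0 = (m/f^3)(sinh(xi) - xi),
# which is the LTB-like form assumed here for the constraint
# Rdot^2 = 2m/R + f^2.  Parameter values are purely illustrative.
def R_of_t(t, m=1.0, f=1.0, t0=0.0):
    g = lambda xi: (m/f**3)*(np.sinh(xi) - xi) - (t - t0)
    xi = brentq(g, 1e-12, 50.0)        # invert t(xi) by root-finding
    return (m/f**2)*(np.cosh(xi) - 1.0)

# Finite-difference check that the constraint is satisfied at t = 2:
t, dt = 2.0, 1e-6
Rdot = (R_of_t(t + dt) - R_of_t(t - dt))/(2.0*dt)
# Rdot**2 should equal 2m/R + f^2 = 2/R_of_t(2) + 1
```

In practice one would tabulate ξ(t, r) over a grid of r values, with m, f and t_0 evaluated at each r, before computing the derivatives R′ and Ṙ′ needed for the metric and kinematic scalars.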
In the present case, the plane symmetry of the space-time means that calculating the Buchert averages reduces to computing a set of one-dimensional integrals of the form

⟨S⟩ = ∫ S (R′/f) dr / ∫ (R′/f) dr .   (4.17)

Aside from a small number of special cases, the back-reaction scalars are generically non-zero in these solutions, as will be verified explicitly in Sections 6 and 7.
For any set of functions f(r), m(r) and t_0(r), the quantities required to solve Eqs. (3.4)-(3.6) follow directly from the metric in Eq. (4.10), where k^a is again the tangent vector to a ray of light. We will use these equations to create Hubble diagrams in tilted and inhomogeneous space-times in Sections 6 and 7, below.
An R′ = 0 Universe with Inhomogeneity

In this section we consider Hubble diagrams constructed in plane-symmetric dust-dominated cosmologies with R′ = 0, in which all B_i = 0. This means that the average evolution of the cosmology is exactly equivalent to that of a Bianchi model, and the metric can be written as in Eq. (4.3). We note that in such cases Buchert's back-reaction scalar B does not vanish [35], even though the scalars B_i from our anisotropic formalism are all zero.
Let us construct a space-time within this class that exhibits non-perturbative inhomogeneity in the matter distribution. This can be achieved by choosing the free functions c_1(r) and c_2(r) to be oscillatory, such that c_1(r) = 2cos²(qr) and c_2(r) = 2η sin²(qr). The energy density of the dust, as measured by comoving observers, is then

ρ = 4cos²(qr) / [3t (t cos²(qr) + η sin²(qr))] ,

where for simplicity we have normalised the time coordinate so that t_0 = 1. We see in Fig. 1 that for t ≪ η, ρ → 4cot²(qr)/(3tη), so the density profile is dominated by Kasner-like vacuum at early times, with small regions of very high density, whereas for t ≫ η, ρ → 4/(3t²), as the density profile tends towards that of a homogeneous Einstein-de Sitter (EdS) universe. Correspondingly, the expansion, shear and electric Weyl scalars follow from Eq. (4.4) with these choices of c_1 and c_2. From these one may reconstruct the shear tensor as σ_ab = Σ (3m_a m_b − n_a n_b − g_ab)/2, and the electric part of the Weyl tensor as E_ab = E (3m_a m_b − n_a n_b − g_ab)/2. The expansion scalar Θ behaves in a similar fashion to the plot of ρ in Fig. 1, while the scalars E and Σ have the opposite behaviour. At early (vacuum-dominated) times E and Σ are mostly large and non-zero (as in Kasner), whereas at late (matter-dominated) times they are mostly zero (as in EdS), except for spikes in the vacuum regions.
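The algebraic properties of the reconstructed shear tensor can be verified directly; a minimal check, in a Minkowski frame with assumed index placements, that σ_ab = Σ(3m_a m_b − n_a n_b − g_ab)/2 is trace-free and has magnitude σ² = σ_ab σ^ab/2 = (3/4)Σ²:

```python
import numpy as np

# Check, in a Minkowski frame with assumed index placements, that the
# reconstruction sigma_ab = Sigma (3 m_a m_b - n_a n_b - g_ab)/2 is
# trace-free and has sigma^2 = sigma_ab sigma^ab / 2 = (3/4) Sigma^2.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
ginv = np.linalg.inv(g)
n_d = np.array([-1.0, 0.0, 0.0, 0.0])   # n_a (unit time-like, n^a n_a = -1)
m_d = np.array([0.0, 1.0, 0.0, 0.0])    # m_a (unit space-like)
Sigma = 0.7                             # arbitrary shear amplitude

sig = Sigma*(3.0*np.outer(m_d, m_d) - np.outer(n_d, n_d) - g)/2.0
sig_up = ginv @ sig @ ginv              # sigma^ab
trace = np.trace(ginv @ sig)            # g^ab sigma_ab
sigma2 = 0.5*np.sum(sig*sig_up)         # sigma_ab sigma^ab / 2
```

The relation σ² = (3/4)Σ² is useful when comparing the LRS scalar Σ to the shear magnitude that enters Buchert's back-reaction scalar B.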

Ray tracing
We solve the geodesic equations (3.1) for a large number of observing directions θ_c, and observer positions r_0, for observers at time t = t_0. Because of the plane symmetry of the space-time, the initial coordinates y_0 and z_0 are irrelevant, as is the azimuthal angle ϕ_c on the observer's celestial sphere (we choose ϕ_c = 5π/4, for the sake of numerical simplicity). Moreover, the symmetry of the sinusoidal metric profile means that one need only consider θ_c in the range [0, π/2]. We can then numerically integrate the geodesic and Sachs equations. The free parameters in the metric are set to η = 1/3 and q = 100. We normalise the time coordinate so that the observing time t_obs = t_0 is equal to 1, and consider 100 observers at even r-coordinate spacings between r = 0 and r = π/q. The angular range is split up into discrete intervals of Δθ_c = π/200. Some key results of carrying out the ray tracing are shown in Fig. 2. In these plots, we have considered an observer at time t_0 who is placed at a point r_obs = π/2q, which corresponds to the centre of an underdensity, i.e. ρ(t_0, r_obs) = 0. For each initial direction θ_c, we can first calculate the redshift z as a function of the affine parameter in the geodesic equation (3.1). For small θ_c (e.g. the red and blue curves in Fig. 2a), the bumpy nature of the function z(λ) indicates the strong oscillatory inhomogeneities in the metric as light rays propagate in that direction. For those directions, the function z(λ) is not monotonically increasing. When a future-directed null ray passes through the vacuum-dominated regions, it can gain energy as it moves forward in time (and vice versa for past-directed null rays), if it is directed along or close to the spatially contracting symmetry axis of the rotationally symmetric Kasner-like geometry. For large θ_c (e.g. the black and pink curves), z(λ) is much smoother, because those observers are looking in directions along which the space-time is much closer to homogeneous.
With z(λ) obtained, one can then calculate any quantity as a function of the observable redshift, rather than the non-observable λ. In Fig. 2b we show the expansion rate H_∥ parallel to the corresponding null geodesic, as a function of the redshift along that curve. As the light ray passes through the underdense regions, the function H_∥ dips and can become negative if the effect of the inhomogeneity is sufficiently strong, as can happen for small θ_c. The loops in the H_∥ curve for θ_c = 0 reflect the non-monotonicity of z(λ), wherein a past-directed null ray moving from an overdense to an underdense region initially has H_∥ decreasing but dz/dλ remaining positive. This produces the leading, right-hand side of each loop. Then, as the light ray moves towards the centre of the underdensity, dz/dλ becomes negative as H_∥ crosses zero, as per Eq. (3.6). After the null ray passes the centre of the underdensity, H_∥ begins to increase again. Finally H_∥ passes back through zero, and therefore the redshift begins to increase again. For null rays travelling in directions where H_∥ is always positive, these loops do not exist. Moreover, as seen in the pink curve in Fig. 2b, for θ_c = π/2 the geometry that the ray moves through appears spatially homogeneous, and H_∥ increases monotonically.
Because we are displaying the results for observers situated within a vacuum region, the expansion rate H_∥0 along the line of sight θ_c = 0 is negative, and comparable in magnitude to the all-sky average of H_0. The expansion rate of space along the line of sight then monotonically increases as a function of θ_c, to over twice the all-sky average at θ_c = π/2. This is indicative of the very strong anisotropy in the space-time. One sees a similar set of effects for the Ricci and Weyl curvature terms Φ_00 and Ψ_0, in Figs. 2c and 2d, respectively. The scalar Ψ_0 is small in matter-dominated regions, where the EdS-like dynamics dominate, and rises sharply in the Kasner-like underdensities. For null rays with θ_c near π/2, Ψ_0 increases monotonically, and |Ψ_0| ≫ |Φ_00| at all redshifts. The first of these facts is explained, like for the H_∥ curve in Fig. 2b, by the apparent homogeneity along that line of sight. For the latter fact, we recall that

Φ_00 = −R_ab k^a k^b / 2 = −ρ E² / 2

for dust, by Einstein's equations. Therefore, for a light ray that always resides in regions of zero (or very low) matter density, the energy-momentum tensor remains zero (or near zero), and so the space-time curvature is dominated instead by the free gravitational field encoded in the Weyl curvature.
For null rays with θ_c = 0, Φ_00 is large (and negative, as it must be due to the null energy condition) at most points where the density is sufficiently large that the dynamics are EdS-like, and then drops sharply to zero in vacuum and near-vacuum regions. Note, however, that the Weyl scalar Ψ_0 remains zero throughout, because a null congruence that travels directly along the rotational symmetry axis of the space-time cannot be sheared (it is a principal null direction). The loops in Fig. 2c have the same origin as those in Fig. 2b. They are just about visible in the blue curve (θ_c = π/8) in Fig. 2d, for which the Weyl term is non-zero, but is strongly suppressed relative to the Ricci curvature.
With past-directed solutions to the geodesic equation (3.1) known, we can now solve the Sachs equations (3.8) and (3.9) to obtain the angular diameter distance d_A(z), and hence the luminosity distance, along individual lines of sight. This will be done for each possible θ_c, at multiple observer locations r_obs on the t = t_0 constant-time hypersurface.

Hubble diagrams
A typical approach used in cosmology is to consider the Hubble diagrams that would result in some average cosmology, which is expected to reproduce the large-scale properties of the Universe. This process is left somewhat implicit in the standard FLRW framework, but is done explicitly in the case of the Buchert averaging procedure. In doing this, one obtains a "Hubble diagram of the average". As we are currently dealing with an R ′ = 0 metric, it follows that for any choice of averaging domain the back-reaction scalars will vanish, and the large-scale model is obtained by simply replacing the sinusoidal functions c 1 (r) and c 2 (r) by their average values. In the present case, the average model is therefore described by a Bianchi type-I metric (2.18), with scale factors A(t) = t 2/3 + ηt −1/3 and B(t) = t 2/3 . We can now perform ray tracing in that averaged model universe, and compare the results to those of the inhomogeneous model that existed before averaging. The averaged model in this case is homogeneous but anisotropic, meaning that the spatial location of the observer is irrelevant, but that we still expect angular variation in the Hubble diagram. By integrating the Sachs equations using the averaged Bianchi-I metric, we finally calculate d model L (z) in different directions on the observer's sky. This will let us determine how accurately that homogeneous averaged model fits the distance-redshift relation for observers in the actual inhomogeneous space-time.
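For the averaged Bianchi type-I model quoted above, the direction dependence of the expansion rate can be written down explicitly: using H ∥ = Θ/3 + σ ab e a e b (see the caption of Fig. 2), an axisymmetric Bianchi I metric gives H ∥ (θ c ) = cos² θ c (Ȧ/A) + sin² θ c (Ḃ/B). A small Python sketch (with η = 1/3, the value used in Fig. 3, and simple numerical derivatives) illustrates this:

```python
import numpy as np

eta = 1.0 / 3.0  # anisotropy parameter of the averaged model (eta = 1/3 here)

def A(t):
    # Scale factor along the symmetry axis: A(t) = t^(2/3) + eta * t^(-1/3).
    return t ** (2.0 / 3.0) + eta * t ** (-1.0 / 3.0)

def B(t):
    # Scale factor in the plane of symmetry: B(t) = t^(2/3).
    return t ** (2.0 / 3.0)

def H_parallel(t, theta_c, dt=1e-6):
    # Line-of-sight Hubble rate: cos^2/sin^2 weighting of the two
    # directional expansion rates (central-difference time derivatives).
    HA = (A(t + dt) - A(t - dt)) / (2.0 * dt * A(t))
    HB = (B(t + dt) - B(t - dt)) / (2.0 * dt * B(t))
    return np.cos(theta_c) ** 2 * HA + np.sin(theta_c) ** 2 * HB
```

At θ c = π/2 this reduces to the EdS rate 2/(3t), while along the axis the extra η t −1/3 term slows the late-time expansion, so the averaged model predicts a quadrupolar pattern of H ∥ on the observer's sky.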
First, we consider the angular variation of d L , at a constant specified redshift. Such a variation is entirely absent in an isotropic universe, but is non-trivial, and dependent on the observer location, in an inhomogeneous space-time. Our results are shown in Fig. 3a. The function d L (θ c ) is rather complicated in the inhomogeneous space-time, as each light ray will have passed through a continually oscillating geometry, leading to an oscillatory pattern within the function, arising from the cos 2 qr and sin 2 qr terms in the metric. That pattern is itself contained in a quadrupole envelope, which, rather than coming from any oscillations, is due to the overall discrepancy between the expansion rates in the r coordinate direction and each of the y and z directions, producing a shear σ ab which is naturally quadrupolar and affects the Hubble diagram through e.g. the line-of-sight Hubble parameter H ∥ 0 at O(z), the line-of-sight deceleration parameter q ∥ 0 at O(z 2 ), and so on. Thus, the envelope is a signature of the global anisotropy that arises due to the presence of Kasner-like regions, which are contracting along θ c = 0 and therefore reducing redshifts for a given luminosity distance in that direction. Therefore, to reach the target redshift requires the past-directed null geodesic to travel further, meaning that d L exceeds the all-sky average (i.e. the monopole). Conversely, observing at θ c = π/2 means that one sees the lowest possible d L for that redshift, as there is only EdS-like expansion along that line of sight. The averaged Bianchi-I model captures something of the quadrupolar envelope, which in that context can be interpreted entirely in terms of the shear anisotropy σ ab of the Bianchi I space-time, but loses all information about any higher-order multipoles (i.e. multipoles with l > 2). For observers residing in underdense, roughly Kasner-like, regions, d L is over-estimated at the axes and under-estimated in between, because the local geometry is less isotropic than the average. The extent of the average model's success (or lack thereof) in predicting the Hubble diagrams of an individual observer is displayed in Fig. 3b, wherein we see that the angular variation of µ − µ model is roughly maintained as z increases.
Fig. 4a shows the performance of the averaged homogeneous model as a function of redshift, for the same observer. The magnitude difference can be very large at low redshifts, as one expects: on very small scales, and in the presence of very large inhomogeneities, observations in one's immediate vicinity do not produce an accurate Hubble diagram of the universe at large. However, as null congruences travel through many cycles of the oscillatory geometry, d L becomes much closer to the average. This is seen most clearly in the red curve, where observing along the axis of inhomogeneity θ c = 0 means that the light ray ultimately samples equal numbers of overdensities and underdensities. It therefore converges, with an oscillatory pattern, towards the homogeneous model obtained through our averaging procedure, as it corresponds to one of the axes of the quadrupole in d L for that model. Similarly, for the pink curve, which corresponds to θ c = π/2, the quadrupolar nature of the average model means that d L is accurately reproduced, as also shown by the return of µ − µ model to zero as a function of θ c in Fig. 3b. For observing angles θ c that are off the quadrupole axes, the curve µ(z) − µ model (z) does not always converge back to zero, but to a constant offset value. This is to be expected for single observers, as they cannot be said to be measuring a fair sample of the universe in all directions.
While any averaged model could not be expected to reproduce the Hubble diagrams of every individual observer, we might expect it to produce a good representation of the average Hubble diagram that would be obtained by averaging the results from many different observers. To investigate this possibility, we pick an observing direction, θ c , and consider d L (z) for 100 different observers at many different r obs across an averaging domain, separated from one another by equal intervals of the r coordinate. Comparing their measurements gives rise to a situation as in Fig. 5, where we consider the difference µ − µ model for a given θ c and z, as a function of the observer's location within the averaging domain.
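The ensemble statistics used here amount to computing the mean and 1σ spread of the residuals µ − µ model across the observers. A schematic Python version with hypothetical stand-in residuals (the oscillatory form and amplitudes below are illustrative only, not the paper's data) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residuals mu - mu_model for 100 observers evenly spaced in r
# over one averaging domain; the sinusoid mimics the oscillatory geometry.
n_obs = 100
r_obs = np.linspace(0.0, 1.0, n_obs, endpoint=False)
residuals = 0.05 * np.sin(2.0 * np.pi * r_obs) + 0.005 * rng.standard_normal(n_obs)

mean_residual = residuals.mean()  # ensemble average <mu> - mu_model
sigma = residuals.std(ddof=1)     # 1-sigma spread across the observers
```

Because the observers fairly sample a full oscillation of the domain, the position-dependent part averages away and the ensemble mean is consistent with zero to well within 1σ, mirroring the behaviour seen in Fig. 6.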
By choosing a large range of redshift intervals, we can then calculate the mean and variance of d L for each point in the discretised parameter space spanned by (r obs , θ c ). The averaged distance modulus ⟨µ⟩ that results is displayed in Fig. 6, where we have calculated ⟨µ⟩ − µ model for a variety of directions in the redshift range z ∈ [0, 0.30). Although at low redshift the homogeneous large-scale averaged model describes the distance modulus poorly, the curve ⟨µ⟩ (z) rapidly converges to µ model (z). This convergence is particularly strong for a collection of observers viewing along the axis of inhomogeneity, θ c = 0, but even for the collection of observers who are off-axis the average distance modulus can be seen to settle down to have only a small offset from the averaged model (comfortably within one standard deviation). This result reflects the capacity of our formalism to account for large-scale anisotropy.

6 An R ′ ̸ = 0 Universe with Tilt

While the back-reaction scalars, B i , vanish in the R ′ = 0 cosmologies, the situation with R ′ ̸ = 0 cosmologies is more complicated. For example, the presence of a non-zero R ′ means that the dust that sources the space-time curvature does not need to be moving along integral curves of n a . This is true even if the space-time is homogeneous, and constitutes a class of anisotropic cosmological models that is usually referred to as being "tilted". As in Ref. [2], we wish to study this possibility using the anisotropic cosmologies of Farnsworth [47].
Farnsworth's cosmologies are exact homogeneous solutions of Einstein's equations of Bianchi type-V. They are locally rotationally symmetric, but tilted, and are given by the

where W , k, α and C are constants, with the latter determining the tilt velocity of the matter with respect to the homogeneous surfaces, which are level surfaces of the function t + Cr [47]. This means that a set of observers comoving with the matter flow should measure global inhomogeneity in the hypersurface that is spanned by their rest spaces, so the "homogeneous" cosmological model that they would construct by averaging over domains of those hypersurfaces would appear to exhibit back-reaction, and the Hubble diagram they would infer within that model may well be a poor fit to observations of distance measures. The exception to this is the special case C = 0, in which case the solution (6.1) is just an FLRW geometry with negative spatial curvature. Fig. 7 shows that observers comoving with the dust would have orthogonal rest spaces that are inhomogeneous, provided C ̸ = 0. This effect is entirely due to the tilt. We now wish to carry out numerical integrations for rays of light in this space-time, for which we make the following choices for parameter values: α = 1 for the characteristic length scale, and k = 5, W = 125 and C = 2 for the curvature, density, and tilt parameters, respectively (as in Ref. [2]). Because of the nature of the apparent inhomogeneity induced by the tilt, the rest spaces of the observers in this case are not statistically homogeneous. This means that there is no natural homogeneity scale that can be used to define our averaging domain, D. We therefore choose to average between {r min , r max } = {α −1 , 3α −1 }, a choice which, in the absence of an homogeneity scale, is made purely out of computational convenience. We find that back-reaction scalars in this case can be non-zero, with relative sizes of up to 10%, though they decay at late times (as shown in Fig. 7b).
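The geometric origin of this apparent inhomogeneity is simple to demonstrate: if every scalar is constant on the level surfaces of t + Cr, then a constant-t slice cuts across those surfaces whenever C ̸= 0. A toy Python illustration (the density profile below is hypothetical, chosen only to be constant on the tilted surfaces; it is not the actual Farnsworth solution):

```python
import numpy as np

def rho(t, r, C, t0=10.0):
    # Toy density that depends only on the combination t + C*r, and is
    # therefore homogeneous on the level surfaces of t + C*r.
    return (t0 / (t + C * r)) ** 2

r = np.linspace(1.0, 3.0, 201)       # averaging domain {1/alpha, 3/alpha}
rho_no_tilt = rho(10.0, r, C=0.0)    # constant-t slice with no tilt
rho_tilted = rho(10.0, r, C=2.0)     # constant-t slice with tilt C = 2
```

With C = 0 the constant-t slice is exactly homogeneous, while with C = 2 the density varies monotonically across the slice; this is precisely the type of inhomogeneity visible in Fig. 7a.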

Ray tracing
We now calculate the paths of null geodesics arriving at observers on a constant-time hypersurface, by repeatedly solving the geodesic equation in this space-time. Once again, the plane symmetry of the space-time restricts the initial conditions we need to vary to r obs and θ c . We note that although the space-time has homogeneous surfaces, these are not the constant-time surfaces of observers comoving with the dust, and so we would expect different observers on any given t = t obs hypersurface to construct different Hubble diagrams, even if they observe in the same direction, as they will not exist on the same hypersurface of homogeneity. By contrast, if we picked observers at different t obs , but the same value of the combination t obs + Cr obs , they would all be on the same homogeneous hypersurface, and would therefore construct identical Hubble diagrams. We consider 100 different observers at regular intervals of the r coordinate throughout our averaging domain, on a hypersurface of constant time, t = t obs = 10. In Fig. 8 we show results for an observer located at r obs = 2.5/α. As expected, there are no bumps in the functions z(λ), H ∥ (z), Φ 00 (z) and Ψ 0 (z), the last of which is in this case entirely real. The effect of the Weyl curvature on the light ray's propagation, given by Ψ 0 , is in all directions substantially smaller than the effect of Ricci curvature, Φ 00 . This is indicative of the late-time isotropisation of the Farnsworth metric, wherein it tends towards an FLRW universe with negative spatial curvature (and therefore no Weyl curvature).

Hubble diagrams
The Farnsworth cosmologies are of Bianchi type-V, with homogeneous surfaces tilted by a 3-velocity Cf (r)/R ′ (t, r) with respect to the matter-comoving surfaces of constant time. It therefore seems natural to map the averages from the t = const. surfaces to Bianchi type-V cosmologies. The angular variation of µ at specified redshift shells, compared to this averaged Bianchi type-V cosmology, is shown in Fig. 9a, where it can be seen that the difference between the curves increases with redshift at all angles. This suggests that the model will fail to capture the correct Hubble diagram at intermediate and high redshifts, as verified in Fig. 9b. This is in contrast to the case studied in Section 5, and we interpret it as being due to the lack of an homogeneity scale in the tilted hypersurfaces. As expected, averaging over a large ensemble of observers does not alleviate the discrepancy, as displayed in Fig. 10. Not only does ⟨µ⟩ − µ model not return to zero, but the standard deviation remains large in all cases. This indicates not only that the lack of an homogeneity scale makes the averaged Bianchi model perform poorly, but also that averaging in the constant-t surfaces may not be a sensible procedure in the first place. This is because the tilt gives rise to a type of global inhomogeneity, meaning that imposing an homogeneous model gives rise to a Hubble diagram that bears no relation to one that observers in the space-time would record.
7 An R ′ ̸ = 0 Universe with Inhomogeneity

Finally, let us consider an inhomogeneous plane-symmetric universe with R ′ ̸ = 0. In this case the back-reaction scalars are not a priori restricted to be zero. An homogeneous, tilt-free solution to the plane-symmetric metric with R ′ ̸ = 0 is provided by the metric functions in Eqs. (4.10) and (4.16) taking the form f (r) = kr, m(r) = Af 3 (r) and t 0 (r) = 0, for some positive constants k and A. To introduce inhomogeneity, we can therefore simply modify these functions by adding oscillatory terms, where b and q are free parameters controlling the amplitude and frequency of oscillations in the inhomogeneities. The 1+1+2-scalars, evaluated at the observing time t obs = 40, are displayed in Fig. 11 for this case, where we have chosen k = 5, A = 1, and q = 5. Importantly, we restrict the amplitude b so that the matter density undergoes fluctuations of order unity, but is always non-negative. We find that b = 0.1 produces a density that is never less than 30%, or more than 180%, of its average value at t = t obs . This means that the model never reaches perfect matter-domination nor vacuum-domination (unlike in the R ′ = 0 case from Section 5), but that the density variations are still large. We then calculate the full set of scalar averages B i . By inspection of Fig. 11, one sees that ∆r = π/q defines the statistical homogeneity scale, meaning that one may choose any such oscillation cycle as constituting the extent of the r-coordinate in a well-motivated averaging domain (the y and z coordinates again being irrelevant, due to the plane symmetry of the space-time). We choose r min = 9π/q, so r max = 10π/q, and show our results in Fig. 12.
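The statement that ∆r = π/q is a statistical homogeneity scale can be checked directly: the average of an oscillation of the form sin 2 qr (or cos 2 qr, as used for the metric functions elsewhere in this work) over any window of length π/q is independent of where the window is placed. A quick Python check under that assumed functional form:

```python
import numpy as np

q = 5.0
dr = np.pi / q  # one full oscillation cycle of sin^2(q r) / cos^2(q r)

def window_average(r_min, n=100000):
    # Midpoint-rule average of sin^2(q r) over a window of length dr,
    # starting at r_min.
    r = r_min + dr * (np.arange(n) + 0.5) / n
    return float(np.mean(np.sin(q * r) ** 2))

avg_origin = window_average(0.0)
avg_domain = window_average(9.0 * np.pi / q)  # the domain used in the text
```

Both windows return the same average (1/2), so any single cycle constitutes a well-motivated averaging domain; the choice r ∈ [9π/q, 10π/q) is then just one representative.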
The reader can see that the back-reaction scalars all make very small contributions to the evolution equations in this case. Their contributions are roughly constant in time, relative to the overall scale of the quantities in the equation, and only B 12 (the back-reaction term in the evolution equation for ⟨E⟩) is larger than 0.01% of the dominant term in its evolution equation. Furthermore, B 12 has the opposite sign to ⟨Θ⟩ ⟨E⟩, so it is not causing the average Weyl curvature to grow, but rather is suppressing it. The other B i are all of positive sign, but are negligibly small. Overall, one sees that although the back-reaction scalars are allowed to be non-zero in the R ′ ̸ = 0 plane-symmetric space-times, they are still highly restricted by symmetries. This is true even though the matter density fluctuations are of order unity, and seems to be independent of our precise choice of parameter values and functions.

Ray tracing
The results of our ray tracing procedure are summarised in Fig. 13, where we have displayed z(λ), H ∥ (z), Φ 00 (z) and Ψ 0 (z) for null geodesics arriving at an observer at time t = t obs and location r obs = r min + π/(4q). Fig. 13a shows that the redshift z is monotonic in affine parameter λ for all θ c considered. The effect of inhomogeneities on the null rays is, however, clear once one considers the inferred line-of-sight Hubble parameter, which is displayed in Fig. 13b. Here the oscillations in the metric function f (r) are of amplitude 10%, so the effects of inhomogeneity on H ∥ are less drastic than in the R ′ = 0 case (where the metric oscillated entirely between dust-dominated and vacuum regions).

Figure 12. Non-vanishing back-reaction scalars B i for constant-time surfaces of the R ′ ̸ = 0 geometry. These scalars have all been made dimensionless, by normalising them with respect to the largest term from the equation in which they appear.
The Ricci and Weyl curvature terms have the same oscillatory pattern as H ∥ . Unless one is observing directly along the symmetry axis (θ c = 0), the Weyl curvature contribution Ψ 0 is typically of the same order of magnitude as the Ricci contribution Φ 00 , which is displayed in Fig. 13c. Fig. 13d shows that Ψ 0 has a changing sign in each case. This means that for observations at high redshift, for which null rays will typically have to travel through many oscillations in the geometry, the effect of Weyl curvature will be suppressed relative to Ricci curvature, even though the two terms Ψ 0 and Φ 00 are typically of comparable magnitude. This can be understood by studying Sachs' equation for the null shear (3.9), which can be integrated directly along the ray. As H ∥ is always positive in this case, the oscillations in the sign of Ψ 0 cause the right-hand side of the integrated equation (7.2) to remain bounded, when integrated to intermediate and high redshifts, as can be seen in Fig. 14. The effect of Weyl curvature on d A is communicated through the presence of σσ in Sachs' equation for d A . By z = 1, σσ is four orders of magnitude smaller than Φ 00 , and hence the integrated effects of Weyl curvature are small.
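The suppression mechanism can be mimicked with a one-line numerical experiment: integrating a sign-oscillating source along the ray keeps the accumulated shear bounded, whereas a one-signed source of the same magnitude grows secularly. A toy Python sketch (amplitudes and frequencies are illustrative only):

```python
import numpy as np

lam = np.linspace(0.0, 50.0, 200001)  # affine parameter along the ray
dlam = lam[1] - lam[0]

psi0_osc = 0.1 * np.sin(5.0 * lam)           # sign-changing Weyl source (toy)
psi0_mono = 0.1 * np.abs(np.sin(5.0 * lam))  # same magnitude, one-signed

# Running integrals of the source (crude Riemann sums), standing in for the
# right-hand side of the integrated null-shear equation:
shear_osc = np.cumsum(psi0_osc) * dlam
shear_mono = np.cumsum(psi0_mono) * dlam
```

The oscillating source stays bounded by its first half-cycle, while the one-signed source grows linearly; this is why the integrated shear contribution remains orders of magnitude below Φ 00 by z = 1 even though Ψ 0 and Φ 00 are locally comparable.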

Hubble diagrams
In this section we consider both the Bianchi type-I and V geometries as candidate models, by mapping the scalar averages ⟨Θ⟩, ⟨Σ⟩ and (3) R onto these classes (as appropriate). Our averages are calculated over the domains described above, and evolved backwards in time from t obs to t obs /4. As in the previous sections, we then perform ray tracing in those homogeneous averaged models by solving the geodesic equation for past-directed null geodesics emanating from observers at t obs , and compare the results of this to the results of ray tracing in the true inhomogeneous space-time. Fig. 15a shows a comparison of the distance moduli to the average models, as a function of θ c for a given redshift. It demonstrates that the type-V average model appears more appropriate than type-I, which is clear once one considers observing directions θ c that are sufficiently far from the rotational symmetry axis. Not only are the differences µ − µ model significantly smaller for type-V than type-I, but they also decay with redshift. However, at this point it is not yet clear that observations of type-Ia supernovae would actually indicate a preference for either average model, as any overall shift in µ can be removed by a recalibration of the intrinsic magnitude of the supernovae. All that would remain in that case would be the angular profile, which is not easily explained by either choice of homogeneous large-scale geometry. If this were all of the information available then one would not be able to select which average model would be more appropriate, which could lead to serious errors in inferring cosmological parameters.
Figure 15b shows the Hubble diagrams µ(z) that would be constructed for observing directions θ c = {0, π/8, π/4, π/2}, for redshifts up to z ∼ 0.5. One sees that at low redshifts there is little distinction between the models, but at high redshift the Bianchi type-V interpretation of the scalar averages is substantially preferred over type-I. In the case of the type-V models, µ − µ model remains very close to zero at all redshifts for θ c = π/8 and larger. This is a direct result of the angular profile of µ displayed in Fig. 15a: the function µ(θ c ) is steepest near θ c = 0 and then rapidly becomes a nearly flat line just above the monopole. Therefore, µ − µ model for each θ c ≳ π/8 is nearly flat. This explains the lack of features in Fig. 15b, for both observing locations and all the observing directions shown (except directly along the symmetry axis). On the other hand, the effect of inhomogeneities is clearest for θ c = 0, but there is still a relatively rapid trend towards zero, with |µ − µ model | < 0.05 for z > 0.2 for the type-V models, in both plots. This shows the success of our averaging procedure at describing the large-scale Hubble diagram of an anisotropic cosmology, if the large-scale averaged model is chosen appropriately.
Let us now suppose that we had access to information from 100 such observers, at different coordinate locations equally separated from one another, throughout our averaging domain (i.e. with r obs ∈ [9π/q, 10π/q)). The effect of the inhomogeneity on distance measures in this case is displayed in Fig. 16, where it can be seen that the preference for the type-V model only becomes clear at redshifts z ≳ 0.2.
Performing an ensemble average over this set of observers gives the results displayed in Fig. 17, which shows that there are small effects in all directions in the redshift range z < 0.1. This is due to some observers receiving photons that have just moved through a region of high density, and so a large negative Ricci curvature term Φ 00 , and others receiving photons coming through underdensities. These local effects are, however, only pronounced for θ c close to zero, as can be seen from the results for θ c = π/200, which show a rapid transition in behaviour as the observing angle θ c is increased from zero. We also note that oscillatory features in the Hubble diagram are highly suppressed if one is observing away from the symmetry axis. This is in obvious contrast to the R ′ = 0 models considered in Section 5, where Fig. 6 shows that for the R ′ = 0 model, the "average Hubble diagram" has distinct features of inhomogeneity for θ c = π/8 and π/4 as well as just θ c = 0.
While the θ c = 0 and θ c = π/200 cases demonstrate significant effects from the inhomogeneities, it remains true that ⟨µ⟩ is always consistent with µ model to within 1σ. It can also be seen that the size of the oscillations, and their associated confidence intervals, is clearly decaying with increased redshift. Finally, let us note that the offset at z = 0 can be explained with reference to the Hubble parameter, since for z ≪ 1 a Taylor series expansion of d L (z) shows that the leading term at low redshifts (in a generic space-time) is given by d L ≃ z/H ∥ 0 [29], which in our diagrams leads to a vertical displacement in µ ∼ log d L . At the same time, the ensemble average of the line-of-sight Hubble parameters H ∥ 0 that are measured by each of the observers (as displayed in Fig. 13b) is not always equal to the line-of-sight Hubble parameter in the averaged homogeneous model, which means that a local effect on µ is entirely expected. In the θ c = π/8 and π/2 diagrams we have subtracted off this small offset. Thus, the nearly flat line ⟨µ⟩ − µ model that appears for z ≳ 0.1 is consistent with zero, which reflects the fact that a flat vertical displacement would not be observable. This concludes our study of the Hubble diagrams constructed by observers in statistically homogeneous, anisotropic cosmologies, and their ability to be fit by the models that result from our averaging formalism.
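The vertical-offset argument above can be made concrete with a short numerical check. Using the standard low-redshift expansion d L ≃ (z/H ∥ 0 )(1 + (1 − q ∥ 0 )z/2) (the second-order term is included only for illustration), a mismatch between the observers' mean H ∥ 0 and the averaged model's value produces a constant shift in µ:

```python
import numpy as np

def mu(z, H0, q0=0.5):
    # Distance modulus up to an additive constant, from the low-z expansion
    # d_L ~ (z/H0) (1 + (1 - q0) z / 2); mu = 5 log10(d_L) + const.
    dL = (z / H0) * (1.0 + 0.5 * (1.0 - q0) * z)
    return 5.0 * np.log10(dL)

z = np.linspace(0.01, 0.1, 10)
offset = mu(z, H0=0.9) - mu(z, H0=1.0)  # observer's H0 differs from model's
```

The residual is exactly 5 log 10 (1/0.9) ≈ 0.229 mag at every redshift (when the deceleration parameters match): a flat vertical displacement that can be absorbed by recalibration, just as described above.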

Discussion
We have constructed Hubble diagrams in universes that are inhomogeneous, and anisotropic on large scales. These diagrams depend on the line-of-sight of the particular observer, as well as their position in space-time. We have then compared these diagrams to those that would be created by considering the large-scale average space-time, using a formalism we presented in Ref. [2]. In order to carry out this comparison, we have focused on three families of cosmological models within the plane-symmetric class of dust-dominated solutions of Einstein's equations. These families admit closed-form exact solutions, and allow for arbitrary amounts of inhomogeneity to be introduced in the directions orthogonal to the surfaces of symmetry (though care is needed to avoid situations that may involve shell-crossings and singularities). The homogeneous sub-class of this set of solutions belongs to the LRS Bianchi type-I and V cosmologies, which are therefore considered to be the "target space" for the averaged cosmological models. While the observations made by any one observer in these space-times are not necessarily reproduced well by the averaged cosmological models, we find that the observations made by many observers have an average that can be described well. In particular, we found in Sections 5 and 7 that there exist averaged anisotropic cosmological models that can accurately predict the Hubble diagram for average observations made in all directions to within 1σ. This is a non-trivial result, as the Hubble diagrams can take very different shapes in different directions in universes that exhibit large-scale anisotropy. In particular, it extends previous studies that considered only observations along lines-of-sight that are aligned with principal null directions [43]. Furthermore, the 1σ confidence interval generically shrinks as redshift increases, meaning that the average model becomes a better approximation for typical observers who make measurements over larger distances, as one would hope for a useful cosmological model.
While our study has shown some successes for the anisotropic cosmologies that result from the application of our averaging formalism, it has also shown some clear warning signs. In particular, it is clear that the foliation on which the averaging is performed must be chosen with care. This is exemplified by the tilted Farnsworth cosmologies studied in Section 6, where we found that an anisotropic homogeneous model constructed from our averaging procedure could not fit the Hubble diagrams of observers accurately, even in an average sense. This is despite the fact that the Farnsworth space-time is genuinely spatially homogeneous, and clearly indicates the importance of suitably foliating the space-time. In general, we do not expect averaged cosmological models to reproduce the average of observables if there is no statistical homogeneity scale. If a foliation is such that no scale of this type exists, then we expect that choice to fail.
Another area which requires care, in order to get a sensible result for the averaged cosmology, is the choice of target space for the symmetries of the averaged model. In particular, if the averaged model does not allow for all aspects of the averaged covariant scalars to be accounted for, then it is unlikely to reproduce the average of observations made within the space-time. We believe this to be the reason why the Bianchi type-I models failed to reproduce the average Hubble diagrams for the observers considered in Section 7. In that case the average spatial curvature is non-zero, and so one needs an averaged cosmology that allows for this possibility to exist. This is the case for Bianchi type-V models (which reproduced observables well), but not for Bianchi type-I (which did not, at high z). We suspect the same will be true for cosmological models with large-scale tilt, which is only allowed in a restricted set of Bianchi classes.
Ultimately, as cosmologists, we would like to obtain a full picture of the large-scale properties of our Universe. As the number of type-Ia supernovae, quasars, radio galaxies and other distant sources we observe rises in the coming years, it will be possible to construct an increasingly precise sky map of d L (z), to z ∼ 1 and beyond. If these observations continue to support the existence of large-scale anisotropies in the Universe, then we will need cosmological models that can include that freedom. The analysis presented in this work constitutes a step towards understanding the Hubble diagram in such space-times, which we hope will be of use for understanding the theoretical modelling problem at hand, as well as potentially providing a framework within which anisotropic observations in the real Universe can be understood and interpreted. In future work we hope to investigate situations in which our back-reaction scalars can be large, while maintaining statistical homogeneity within our averaging domains. This did not prove possible in the plane-symmetric space-times we considered in the present work, but could potentially be found in more general situations. The use of relativistic simulations [27, 28, 49-51] may well prove to be fruitful in this regard.

Figure 2. Top left panel: photon redshift z as a function of affine parameter λ, for observing angles θ c = 0 (red), π/8 (blue), π/4 (black) and π/2 (magenta) in the R ′ = 0 geometry. Top right: line-of-sight Hubble parameter H ∥ = Θ/3 + σ ab e a e b (normalised by its monopole H 0 at z = 0). Bottom left and right: Ricci and Weyl lensing scalars Φ 00 = −(1/2)R ab k a k b and Ψ 0 = C abcd s a k b s c k d as functions of redshift for the same observer (both made dimensionless by normalising with respect to H 2 0 ).

Figure 3. Left: luminosity distance d L at a given redshift as a function of observing angle θ c , relative to the monopole, for an observer at the centre of an underdensity in the R ′ = 0 geometry (as per Eq. (4.3), with c 1 (r) = 2 cos 2 qr and c 2 (r) = 2η sin 2 qr, where η = 1/3). The curves are for z = 0.1 (red), z = 0.2 (blue), and z = 0.3 (brown). The solid lines are obtained by performing ray tracing in the inhomogeneous space-time, and the dashed lines come from ray tracing in the averaged model. Right: difference between the distance modulus µ obtained from ray tracing in the inhomogeneous space-time and in the averaged model, as a function of θ c . Curves are as given in the left-hand plot.

Figure 4. Left: difference in distance modulus from the averaged model, µ − µ model , as a function of redshift, shown for an observer at the centre of an underdense region in the R ′ = 0 geometry. The curves are for θ c = 0 (red), θ c = π/8 (blue), θ c = π/4 (black), and θ c = π/2 (magenta). Right: an illustration of the ray tracing procedure used to calculate the sky map of d L (z) for each observer. Past-directed null geodesics emanating from the observer are distinguished by their value of θ c , as in the left-hand plot. The space-like vector m a is indicated as being normal to the planes of symmetry P symm , which here are in the horizontal plane.

Figure 5. µ − µ model for a given line of sight θ c , as a function of observer location within a single averaging domain in the R ′ = 0 geometry. The left and right edges of the plot correspond to overdense regions, whereas the middle corresponds to an underdensity. The curves are for z = 0.1 (red), z = 0.2 (blue), and z = 0.3 (brown).

Figure 6. Averaged distance moduli, ⟨µ⟩ − µ model , as a function of redshift, in the geometry with R ′ = 0. These averages are obtained from the mean of 100 observers evenly spaced in coordinate location r throughout the averaging domain. The shaded red regions indicate the 1σ confidence intervals. Results are displayed for sets of observers all viewing in the directions θ c = {0, π/8, π/4, π/2}.

Figure 7. Left panel: energy density profile on constant-time surfaces of the Farnsworth cosmologies, normalised by its value at r = α −1 . The curves show ρ(r) at the observing time t = t obs (black), t = t obs /3 (red), and t = 3t obs (blue). Right panel: all non-vanishing back-reaction scalars in constant-time hypersurfaces of the Farnsworth geometry (6.1). The scalars B i have all been made dimensionless, by normalising them with respect to the largest term from the equation in which they appear.

Figure 9. Left: µ − µ model as a function of θ c , for an observer at r obs = 2.5/α in the Farnsworth cosmology. Curves are for redshifts 0.1 (red), 0.2 (blue) and 0.3 (brown). The solid lines are obtained by performing ray tracing in the tilted space-time, while the dashed lines come from ray tracing in the averaged LRS Bianchi type-V model. Right: µ − µ model as a function of redshift, for the observer at r = 2.5/α considered in the last set of plots. Curves are for observing angles θ c = 0 (red), π/8 (blue), π/4 (black) and π/2 (magenta). Again, µ model refers to a Bianchi type-V model.

Figure 10. The ensemble mean of µ over 100 observers evenly spaced throughout the averaging domain in the Farnsworth geometry, compared to µ model obtained by mapping the scalar averages from that domain onto an LRS Bianchi type-V model, for θ c = 0 (left) and π/4 (right).

Figure 11. Left panel: matter density ρ, isotropic expansion Θ, and space-like expansion ϕ, at the observing time t obs , as a function of the r coordinate in the R ′ ̸ = 0 models. Right panel: shear Σ and electric Weyl curvature E, also at t obs . All quantities are normalised by their averages, which have been calculated over the domain [r min , r max ).

Figure 13. Top left: redshift as a function of affine distance in models with R ′ ̸ = 0, for observing angles θ c = 0 (red), π/8 (blue), π/4 (black) and π/2 (magenta). Top right: line-of-sight Hubble parameter H ∥ as a function of redshift, normalised by its monopole at z = 0. Bottom left and right: Ricci and Weyl lensing scalars Φ 00 and Ψ 0 as functions of redshift. Each of the Ricci and Weyl terms has been normalised by the maximum affine parameter value λ −2 max .

Figure 16. The same set of plots as in Fig. 5, but for the R ′ ̸ = 0 models, and where µ model refers to averaged Bianchi type-I (solid) and V cosmologies. Curves are for z = 0.1 (red), z = 0.2 (blue), and z = 0.3 (brown). The averaging domain extends from r min = 9π/q to r max = 10π/q.