Is It Possible to Know Cosmological Fine-tuning?

Fine-tuning studies whether some physical parameters, or relevant ratios between them, are located within so-called life-permitting intervals of small probability, outside of which carbon-based life would not be possible. Recent developments have found estimates of these probabilities that circumvent previous concerns of measurability and selection bias. However, the question remains whether fine-tuning can indeed be known. Using a mathematization of the concepts of learning and knowledge acquisition, we argue that most examples that have been touted as fine-tuned cannot be formally assessed as such. Nevertheless, fine-tuning can be known when the physical parameter is seen as a random variable supported on the nonnegative real line, provided the size of the life-permitting interval is small in relation to the observed value of the parameter.


INTRODUCTION
Cosmological fine-tuning (FT) says that some physical parameters, some ratios between them, and some boundary conditions must pertain to intervals of small probability to permit the existence of carbon-based life (Lewis & Barnes 2016). FT started to spark the interest of the scientific community when Carter (1974) brought it to light, calling it the anthropic principle, a term that, according to Lewis & Barnes (2016), was popularized and modified by Barrow & Tipler (1988). Since then, considerable effort has been made to determine the length of these so-called life-permitting intervals (LPIs), constituting the area on which the scientific literature has primarily focused (see, e.g., Adams 2019; Barnes 2012; Davies 1982). For this reason, some have linked FT with the small length of the intervals, with no regard for their probability (see, e.g., Helbig 2023). This evokes the idea of "naturalness" in physics, where a good theory is assumed to present numbers that are close to unity, and departures from it point to ad hoc explanations trying to fit observations to theory, like Mercury's epicycles in the Ptolemaic model (Hossenfelder 2020). FT goes beyond naturalness, asserting that, according to our current theories, such unnaturalness is needed if carbon-based life is going to exist (Barnes 2021). The existence of life sets a specification; i.e., a subset of possible outcomes of parameters for which life is permitted. These outcomes maximize a function f that quantifies how specified outcomes are, and f is stochastically independent of the original random variable. In cosmological FT, this random variable is the physical parameter, whose value is assumed to be independent of the requisites for the existence of carbon-based life, and the LPI maximizes an indicator function f that determines whether life is possible or not. Therefore, the tuning of a physical constant for life is fine if and only if the probability of its LPI is small (Díaz-Pachón & Hössjer 2022).
Fine-tuning divides the scientific and philosophical communities between those who dismiss it as mere speculation (e.g., Hossenfelder 2020; Colyvan et al. 2005; McGrew et al. 2001) and those who consider it a serious scientific endeavor (e.g., Davies 2008; Lewis & Barnes 2016; Tegmark 2015). One of the main criticisms, even among those who see it as a legitimate question, is the lack of a valid probability distribution to impose over the parameters (Adams 2019). In this direction, Hossenfelder asks: "[H]ow do you make a statement about probability without a probability distribution?" (Hossenfelder 2020, p. 205). The problem is indeed daunting: it requires estimating the probability of the LPI using a biased sample of size 1 taken from an unknown distribution supported on an unknown space. More explicitly, and to establish some notation, let X be the set of possible values of a random physical parameter X. FT is about inferring the probability of its LPI ℓ_X when the only observation we possess is x_0, the observed value of X in our universe; moreover, such an observation suffers from selection bias because that universe harbors life; and the only information about the probability distribution F = F_X of X is that F is supported on X and x_0 ∈ X. Moreover, assuming Carter's definition, it is natural to assume that F corresponds to a prior distribution of X that maximizes the entropy (Carter 1974).
Past efforts to find the probabilities of LPIs mainly use uniform distributions over the set of possible values that constants might take (see, e.g., Sandora 2019a; Barnes 2019-2020; Tegmark et al. 2006). Nonetheless, this approach attests to the difficulty of the task, since the choice of the uniform distribution is based on Bernoulli's Principle of Insufficient Reason (Bernoulli 1713), which requires knowledge that the sample space is finite, a strong assumption that cannot be warranted for cosmological FT. This legitimate criticism of FT measurements came from philosophy, where it was called the normalization problem (Colyvan et al. 2005; McGrew et al. 2001; McGrew & McGrew 2005; McGrew 2018).
Thus, the tuning problem can be divided into two steps: first, determining the size of the LPI for a given constant of nature (a task of physics); second, determining the probability of the LPI (a mathematical task). For this reason, we formally define FT next.
Definition 1 (Fine-tuning). Fine-tuning happens if and only if F_0(ℓ_X) is small, where F_0 = F_X is the assumed true distribution of X, which is supported on X. In more detail, there is fine-tuning if and only if there exists a real-valued δ > 0 such that

F_0(ℓ_X) ≤ δ ≪ 1.    (1)

If the tuning is not fine, we will call it coarse.
Recently Díaz-Pachón, Hössjer, and Marks developed Algorithm 1 (below) to find an upper bound on the probability of the known LPI ℓ_X of a parameter X (Díaz-Pachón et al. 2021; Díaz-Pachón et al. 2023). Algorithm 1 can be viewed as a statistical test, where the null hypothesis H_0 that the tuning of X is coarse is tested against the alternative hypothesis H_1 that X is fine-tuned. The first two steps of Algorithm 1 ask for the input: a parameter space X (e.g., N, R+, R, R^n, C, or some subset thereof), and the constraints to impose on the prior distributions F_X supported on X (e.g., whether some event B ⊂ X has a known probability, whether some of the moments of X are finite, etc.). In the third step, a collection F := {F(•; θ); θ ∈ Θ} of all the possible maximum entropy (maxent) priors F is determined that satisfy these constraints. These priors are indexed by a hyperparameter θ that varies over a hyperparameter space Θ. In particular, we will assume that the true distribution of X satisfies

F_0 = F(•; θ_0)    (2)

for some θ_0 ∈ Θ. In the fourth step of Algorithm 1, the maximum probability TP_max of the LPI ℓ_X is found over the collection of all the posterior probabilities {F(ℓ_X; θ); θ ∈ Θ} induced by the priors in F. That is, for each θ ∈ Θ we first define the corresponding marginal probability of ℓ_X as

F(ℓ_X; θ) = ∫_{ℓ_X} f(x; θ) dx,

by integrating the density f(•; θ) over the possible outcomes x ∈ ℓ_X of X ∼ F(•; θ). We refer to F(ℓ_X; θ) as a tuning probability. By maximizing this tuning probability over Θ, the maximum tuning probability

TP_max = max_{θ∈Θ} F(ℓ_X; θ)

is obtained. In the fifth step, the outcome of Algorithm 1 is produced: if TP_max is small (≤ δ according to Definition 1), we reject the null hypothesis H_0 and conclude there is FT. If TP_max is not small, the algorithm is inconclusive, and we cannot reject H_0. This does not mean that X is not finely tuned, though: H_1 may still be true when H_0 is not rejected, since F_0(ℓ_X) may be larger or smaller than δ when TP_max > δ.
Algorithm 1: Algorithm for testing fine-tuning
Step 1 (Input): Choose the set of possible values X of X.
Step 2 (Input): Define constraints on the distribution F = F_X of X.
Step 3: Find the family F = {F(•; θ); θ ∈ Θ} of maxent distributions F with support X, subject to the constraints in Step 2, and assume (H_0) that the true distribution of X satisfies F_0 = F(•; θ_0) ∈ F for some θ_0 ∈ Θ.
Step 4: Compute the maximum tuning probability TP_max = max_{θ∈Θ} F(ℓ_X; θ).
Step 5: If TP_max ≤ δ, reject H_0 and conclude that X is fine-tuned; otherwise, the test is inconclusive.
Algorithm 1 has some important properties. First, the upper bound TP_max is sharp because if the family F consists of a single maxent distribution F_0, then TP_max = F_0(ℓ_X). For instance, assume that Step 1 selects X = R+, and in Step 2 it is assumed that the first moment is finite and known (e.g., E_0 X = θ_0). Then the family of Step 3 consists of a single distribution with density F′(x; θ_0) = f(x; θ_0) = e^{−x/θ_0}/θ_0, the exponential with mean θ_0, since this distribution is maxent among all the distributions with mean θ_0 that are supported on R+. Consequently, TP_max = F_0(ℓ_X) = ∫_{ℓ_X} e^{−x/θ_0} dx/θ_0.
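As a quick numerical sketch of this example (our own illustration; the values of θ_0 and of the interval below are arbitrary stand-ins, not quantities from the paper), the single-member family makes Step 4 of Algorithm 1 a one-line computation:

```python
import math

def exp_interval_prob(a: float, b: float, theta: float) -> float:
    """Probability of [a, b] under an Exponential prior with mean theta,
    i.e. the integral of exp(-x/theta)/theta over [a, b]."""
    return math.exp(-a / theta) - math.exp(-b / theta)

# Steps 1-2: parameter space R+, first moment known (E X = theta0).
# Step 3: the maxent family reduces to a single exponential distribution.
theta0 = 1.0
# Hypothetical LPI centered at x0 = theta0 with relative half-size eps.
eps = 0.01
a, b = theta0 * (1 - eps), theta0 * (1 + eps)

# Step 4: with a one-member family, TP_max is just F0(l_X).
tp_max = exp_interval_prob(a, b, theta0)
print(tp_max)  # close to 2*exp(-1)*eps for a small interval at the mean
```

Because the family has a single member, the bound is sharp: the printed value is exactly F_0(ℓ_X).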
Second, the selection of the class F as a family of maxent distributions simultaneously solves both the normalization and selection bias problems (for a detailed discussion, see Díaz-Pachón et al. 2023), maybe the two biggest concerns raised by physicists (Tegmark et al. 2006; Adams 2019; Hossenfelder 2020, 2021) and philosophers (Colyvan et al. 2005; McGrew et al. 2001; Bostrom 2002) alike. On one side, considering more general classes of maxent distributions than uniform ones solves the normalization problem. At the same time, the selection bias problem is solved by considering the whole class F, and not only the distribution induced by an estimate of θ_0 from the observed value of X in our universe. Consider for instance the case when X = R+ and F is the class of exponential distributions with expected value θ ∈ R+. It can then be shown that the maximum likelihood estimator of θ_0 is θ̂_0 = X. But we still have no guarantee that F_0(ℓ_X) = F(ℓ_X; θ_0) ≤ F(ℓ_X; θ̂_0). On the other hand, by (2) we know that F_0(ℓ_X) ≤ TP_max. Thus, considering all possible values of θ removes the bias induced by the weak anthropic principle, since the class F includes F(•; θ_0) but is not reduced to it.
Third, Algorithm 1 reveals that the output heavily depends on the input, namely the sample space X of X and the constraints imposed on the family F of prior distributions F of X. Moreover, Díaz-Pachón, Hössjer, and Marks proved a theorem (Appendix 3 of Díaz-Pachón et al. 2023, summarized in Table 1 below and explained in Remark 1) showing that subtle changes in the input might produce extremely different outcomes. This can be seen, for instance, from Rows 4-5 of Table 1, when considering families F of distributions over X = R that depend on a scale parameter only: if 0 ∈ ℓ_X, then TP_max = 1, regardless of the size of the interval ℓ_X (therefore the level of tuning cannot be assessed), whereas 0 ∉ ℓ_X, with midpoint x_0 of ℓ_X, will produce a small TP_max, provided the relative half-size ϵ = |ℓ_X|/(2x_0) of the interval is small. In the same direction, when X = R+ (X = R) and the form and scale (location and scale) family F is considered, TP_max jumps from very small to 1, depending on whether the maximal signal-to-noise ratio T of the prior distributions F ∈ F in Rows 2-3 (7-8) is bounded or not.
Remark 1. The results of Table 1 are presented for LPIs of the form

ℓ_X = [x_0(1 − ϵ), x_0(1 + ϵ)],    (3)

where ϵ is a dimensionless quantity representing the relative half-size of the interval, and x_0 is the observed value of X, taken to be the middle point of the interval (if x_0 does not coincide with the middle point of ℓ_X, nothing is lost by assuming the two values coincide, and the notation is greatly simplified). When ϵ > 0 is small, fine-tuning probabilities are computed as a function of ϵ for diverse parametric families F of prior distributions F of a randomly generated universe X. These TP_max are presented given certain constraints on ℓ_X and/or θ ∈ Θ, the latter of which includes one or two variable hyperparameters of the prior densities f(x; θ) = F′(x; θ) in F. The different choices of F correspond to a prior density f(x/θ)/θ for the scale family with scale parameter θ, a prior density f(x/θ_2; θ_1)/θ_2 for the form and scale family with form parameter θ_1 and scale parameter θ_2, and a prior density f((x − θ_1)/θ_2)/θ_2 for the location and scale family with location parameter θ_1 and scale parameter θ_2. When F involves two hyperparameters, the constraints on these hyperparameters are formulated in terms of the maximal value T of the signal-to-noise ratio SNR = E²(X)/Var(X), that is, the maximal value of the ratio of the squared first moment and the variance of the distribution F. There is also a constant associated to each family F: C_1 = max_{x>0} x f(x) for the scale family, C_2 = 1/√(2π) for the form and scale family, and C_3 = max_{x∈R} f(x) for the location and scale family.

MATHEMATICAL FRAMEWORK FOR LEARNING AND KNOWLEDGE ACQUISITION
The No Free Lunch theorems (Wolpert & Macready 1995, 1997) assert that, on average, no search does better than a blind one, and therefore a guided search infuses information into the search problem. Active information was thus defined to measure the amount of information introduced by a programmer in order to reach a target, compared to a baseline distribution which is usually, but not necessarily, in maxent (Dembski & Marks II 2009; Díaz-Pachón & Marks II 2020). Formally, active information is defined as

I^+(A) = log( P(A) / P_0(A) ),    (4)

where the target A ⊂ Ω is a subset of a search space Ω, whereas P and P_0 are probability distributions on Ω that represent the searches of the programmer and blind search, respectively.
Since its inception, active information has been used in the measurement of bias of machine learning algorithms (Montañez 2017a,b; Montañez et al. 2019, 2021), hypothesis testing (Díaz-Pachón et al. 2020; Hom et al. 2023), statistical genetics (Thorvaldsen & Hössjer 2023; Díaz-Pachón & Marks II 2020), bump hunting (Díaz-Pachón et al. 2019; Liu et al. 2023), and the estimation and correction of prevalence estimators of Covid-19 (Hössjer et al. 2023; Zhou et al. 2023), among others. In fact, active information can be used as a measure of FT if the search space Ω equals the sample space X of the physical parameter X, and (4) is large for A = ℓ_X, with P_0(A) = F_0(A) as in (1) and P(A) = δ_{x_0}(A) a one-point distribution at x_0, the observed value of X (or the midpoint of ℓ_X) (Díaz-Pachón & Hössjer 2022; Thorvaldsen & Hössjer 2020).
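For concreteness, (4) can be illustrated with a toy computation (entirely our own; the search space and probabilities below are hypothetical, not taken from the cited works):

```python
import math

def active_information(p_target: float, p0_target: float) -> float:
    """I^+(A) = log(P(A) / P0(A)): the information a guided search adds
    toward the target A relative to the blind-search baseline P0."""
    return math.log(p_target / p0_target)

# Hypothetical finite search space with 1000 outcomes and a target A
# of 10 outcomes; the maxent baseline P0 is uniform.
p0_A = 10 / 1000      # blind search: P0(A) = |A| / |Omega|
p_A = 0.5             # a guided search placing half its mass on A
print(active_information(p_A, p0_A))  # log(50) > 0: biased toward A
```

A positive value indicates the search is biased toward A; a blind search (P = P_0) gives I^+(A) = 0.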
Based also on active information, a mathematical formalization of the epistemological notions of learning and knowledge acquisition was recently developed by Hössjer et al. (2022), where knowledge is defined, as usual, as "justified true belief" (Gettier 1963; Ichikawa & Steup 2018; Schwitzgebel 2021). This means that a subject or agent S knows a proposition p if
(i) S believes p,
(ii) p is true,
(iii) S's belief about p is justified.
If only properties (i) and (ii) are satisfied, we say that S learns p. In other words, there is learning if and only if there is a true belief. Thus, there is a hierarchization from belief to knowledge through learning: knowledge ⊂ learning ⊂ belief.

Moreover, the inclusions are proper, since it is possible to have a belief in a false proposition, so that such a belief does not constitute learning; and it is also possible to learn p without getting to know p, if the true belief cannot be justified.
As for the mathematical formalization, belief is defined as a probability, as is customary in Bayesian theory (see, e.g., Berger 2010; MacKay 2003), whereas propositions are parameters that have a true value. More explicitly, suppose we want to learn whether a proposition p is either true or false. A set of possible worlds Ω is defined, where one world ω_0 represents the value of the parameter ω, referred to as the true world, whereas all other ω ∈ Ω \ {ω_0} are counterfactuals. The set A of interest consists of the worlds in which proposition p is true. An ignorant person assigns beliefs to every subset of Ω according to some initial distribution P_0, whereas an agent S with some data D and discernment G (corresponding to a σ-algebra generated by the non-trivial events which the data cannot discern into smaller events) updates his beliefs to P according to Bayes's rule,

P(A) = L(D | A) P_0(A) / P_0(D),

with L(D | A) the likelihood of observing D given A. Then learning of p is defined as follows.
Definition 2 (Learning). I) Agent S has learnt about proposition p, compared to an ignorant person, either when p is true and the posterior belief P about p is higher than the prior belief P_0 about p, or when p is false and the posterior belief about p is smaller than the prior belief about p. II) The agent S has fully learnt p (regardless of the beliefs of the ignorant person) if the posterior belief P about p is 1 (0) when p is true (false).
Remark 2. Mathematically, the two parts of Definition 2 can be phrased as follows: I) There is learning about p, compared to an ignorant person, if either I^+(A) > 0 and p is true in the true world ω_0, or 0 > I^+(A) and p is false in the true world ω_0.
II) There is full learning about p (regardless of the beliefs of the ignorant person) if P(A) = 1 (P(A) = 0) when p is true (false) in the true world ω_0.
In particular, a subject in a maximum state of ignorance is represented by a maxent distribution P_0 over Ω. Nonetheless, learning does not necessarily entail a particular belief about the true world, so it does not satisfy the conditions of a justified true belief, which requires having a belief for the right reasons. Knowledge acquisition is defined to cover this gap.
Definition 3 (Knowledge). I) Agent S has acquired knowledge about p, compared to an ignorant person, if the following three conditions are satisfied:
1. S has learnt about p, compared to the ignorant person.
2. The true world ω_0 is among the pool of possibilities for S, according to his posterior beliefs.
3. The belief in ω_0 under P is stronger than that under P_0.
II) S has acquired full knowledge about p (regardless of the beliefs of the ignorant person) if P = δ_{ω_0}.

Remark 3. Mathematically, the three requirements of Part I of Definition 3 amount respectively to:
1. The criteria of Part I of Definition 2 are satisfied.
2. The true world ω_0 is in supp(P), the support of P.
3. P(B_{ϵ′}(ω_0)) > P_0(B_{ϵ′}(ω_0)) for all 0 < ϵ′ ≤ ϵ, for some ϵ > 0, where B_{ϵ′}(ω_0) = {ω ∈ Ω : d(ω, ω_0) ≤ ϵ′} and d is a metric over Ω.

LEARNING AND KNOWLEDGE IN COSMOLOGY
It is pointless to think of different universes with different laws of nature because they are beyond our comprehension. Therefore, following the reasonable little question of Barnes (2019-2020), we reduce the set of possible universes to those with the same laws but possibly different constants. In such a scenario, there is a one-to-one relation between X and the multiverse, because each x ∈ X represents a universe with value x for the parameter X. In other words, there is a bijective function from X to the set of possible universes (Adams 2019; Sandora 2019a,b,c,d; Sandora et al. 2022, 2023a,b,c), with x_0 being the actual value of the constant of nature in our world, and any other x ∈ X being a counterfactual universe.
The framework from Section 2 will be applied to analyze conditions under which FT can be known. As a preparation, we first consider the simpler proposition that our universe harbors life (which we know to be true).
Theorem 1. The proposition p_1: Our universe harbors life can be fully learned and known. A sufficient condition for learning and knowledge acquisition of p_1, compared to an ignorant person, is having TP_max < 1.

Proof. Identify the set of possible worlds Ω for the learning and knowledge acquisition problem with the set of possible universes X, with ω_0 corresponding to the observed value x_0 of X for our universe. According to Section 2, the set A = A_1 of worlds for which p_1 is true coincides with ℓ_X, the LPI of X.
Let P be given by our best current theories and data. Since our theories and data admit knowledge of ω_0 = x_0 (i.e., we know that our universe admits life, as well as the observed value x_0 of X), it follows that P = δ_{x_0} is a one-point distribution at x_0. Since x_0 ∈ ℓ_X, this implies

P(A_1) = δ_{x_0}(ℓ_X) = 1.

Then, according to Part II) of Definition 2 and Remark 2, we fully learn p_1 because P(A_1) = 1 and p_1 is true. Since P = δ_{x_0}, it follows from Part II) of Definition 3 that we also acquire full knowledge about p_1. This completes the proof of the first part of Theorem 1. For a proof of the second part of Theorem 1, we also need to consider the beliefs P_0 of the ignorant person about the value of X. Recall that P_0 is in maxent over Ω = X, given the restrictions imposed by F, the class of distributions considered in Table 1. This implies P_0 ∈ F, and consequently

P_0(A_1) ≤ TP_max.

Assume TP_max < 1. From this assumption and the preceding display it follows that P_0(A_1) < P(A_1), which, according to Part I) of Definition 2 and Remark 2, implies learning about p_1 compared to an ignorant person, since p_1 is true. The assumption TP_max < 1 also implies knowledge acquisition about p_1 compared to an ignorant person, since Conditions 2 and 3 of Definition 3 are satisfied as well.
Remark 4. Notice that p_1 does not include the concept of fine-tuning.
After having dealt with learning and knowledge acquisition of our universe harboring life, let us now study whether it is possible to learn and know that our universe is fine-tuned. This corresponds to the proposition p_2: Our universe is fine-tuned.
We prove the following result:

Theorem 2. Let F_0 be the actual distribution of X and 0 < δ ≪ 1 the upper bound probability for fine-tuning, as stated in Definition 1. Assume the agent S believes that the assumption of Algorithm 1 is true. Then the active information for the set of worlds A for which p_2 is true satisfies a bound determined by Steps 1 and 2 of Algorithm 1, that is, by what is assumed about our universe.

It turns out that these two steps are of crucial importance for the validity of (A) and (B), and hence for establishing that our universe is fine-tuned. The less we assume in Steps 1 and 2 of Algorithm 1 (the larger F is), the easier it is to satisfy (A), but the more difficult it is to satisfy (B). This has interesting implications for the list of possible choices of F in Table 1. First, since the maximal tuning probability TP_max is large for Rows 2, 5, and 8 of Table 1, (B) is not satisfied, and FT can be neither established nor known in these cases.
On the other hand, Rows 3, 4, 6, and 7 of Table 1 represent a second scenario where F is smaller, making it easy to satisfy (B) but more difficult to satisfy (A). For instance, when the signal-to-noise ratio of the distribution F_X of X is bounded (Rows 3, 6, and 7), it is still possible that (A) does not hold (e.g., in the X = R case, if the true maxent distribution F_0 of X is positive, with a very small standard deviation in relation to its mean).

Random distributions
For the second scenario of Section 4.4.1, when F is chosen so small that (B) holds but (A) does not, one may assume that the distribution F = F_X of X is random. Under appropriate conditions, this leads to a generalization of Theorem 3, whereby FT of X can be known, not with certainty, but still with high probability.
In more detail, since F_X = F(•; θ), such an approach leads to having a random hyperparameter θ, with the true distribution F_0 = F(•; θ_0) of X corresponding to an instantiated value θ_0 of θ. It follows from the proofs of Corollary 2.1 and Theorem 3 that if (B) holds, we can know that X is fine-tuned, with a high probability 1 − P(E), if the event

E = {F(ℓ_X; θ) > δ}    (10)

has low probability. (Note that P(E) being small is less stringent than (A) failing with a small probability.) Showing that P(E) is small thus requires choosing the distribution of F = F(•; θ), for which a distribution G_θ of θ is needed. This requires implementing Algorithm 1 on a new class of hyperhyperparameters λ that parametrize the distribution G_θ = G(•; λ) of θ, which in turn leads to new restrictions λ ∈ Λ on the hyperhyperparameter. We thus end up in an endless loop (Hofstadter 1999), because it requires showing that the hyperparameters θ are themselves tuned in order to obtain FT of the physical parameters X. In other words, we replace the FT question about the physical parameters X with a tuning question about the hyperparameters θ.

Bounded parameter space
The strategy of Section 4.4.2, to regard F_X as random, would be great if the parameter space X = [a, b] of X were finite and (B) were satisfied for the chosen class F of distributions of X. If there are no asymmetric restrictions on F, it is natural to regard [a, b] as a torus and assume that F is translation invariant, in the sense that F(•) and F(• − x) have the same distribution for all x. It can then be proven, as a consequence of Markov's inequality, that the probability of the set E in (10) satisfies

P(E) ≤ |ℓ_X| / ((b − a)δ).    (11)

This implies that, for a finite parameter space X, we can know that X is fine-tuned, with a large probability 1 − P(E), if F is chosen small enough so that (B) holds, but also large enough so that (11) is valid. Since a finite parameter space is a favorite scenario for physicists (remember the discussion after Theorem 1), it is worth exploring whether there are actual FT instances in which the parameter space is actually finite. However, current arguments in favor of this hypothesis are unconvincing (Colyvan et al. 2005; McGrew et al. 2001). Therefore, we need to deal with X being unbounded, and for such cases FT cannot be known unless some restrictions, such as (A) and (B), are imposed.

The real line as parameter space
The argument of Section 4.4.3, based on random measures, is unfortunately not straightforward to generalize when the parameter space X of X equals R. Rows 4 and 5 of Table 1 correspond to a scale family of distributions that typically are symmetric around the origin. However, the scenario of Row 4 does not consider the observed value x_0 of the parameter in our universe as an estimator of a location parameter for F_X, making it plausible to suggest that (A) is violated. Indeed, it is a basic principle to obtain maxent distributions subject to the available observations (Celis et al. 2020; Singh & Vishnoi 2014; Wainwright & Jordan 2008; Suh 2023). Thus, defining F without including in it a random variable whose location parameter is given by the observed value of X in our universe might be too restrictive.
On the other hand, the location and scale family with an upper bound T on the signal-to-noise ratio (Row 7 of Table 1) does include distributions with x_0 as mean parameter. It is then possible to know that X is fine-tuned if the true distribution F_0 satisfies SNR ≤ T (so that (A) holds), and the relative half-length ϵ of the LPI is small enough so that (B) holds. The assumption SNR ≤ T cannot be derived from the maxent principle, though, and it rather needs some other justification (such as simplicity).

Positive line as parameter space
When X = R+, it is possible to use Row 1 of Table 1. Notice that this is a scale family of distributions over R+ that includes all the exponential distributions. This is important because the exponential distribution F(•; θ) is maxent over all distributions restricted to have mean θ. Other maxent distributions over R+ are possible under different constraints; however, they are less parsimonious because they require more restrictions. For instance, the χ² distribution is maxent over all distributions on the nonnegative reals given restrictions on E(X) and E(log X) (see, e.g., Park & Bera 2009), thus pertaining to Row 2 of Table 1. Hence, the family of exponential distributions {F(•; θ)}_{θ∈R+} is the least restrictive family of maxent distributions over R+, and we can naturally include in it the exponential distribution whose mean corresponds to the observed value x_0 of the constant in our universe. Therefore, in this scenario it is plausible that (A) is satisfied and, whenever (B) holds as well, we can establish and know FT. To give a more explicit formulation of (B) for the family of exponential distributions of X (cf. Row 1 of Table 1), we notice that the maximum tuning probability equals

TP_max ≈ 2e^{−1}ϵ    (12)

when ϵ > 0 is small. By the reasoning above, we have thus established the following important principle:

Principle 1. The fine-tuning of the parameter X can be known when:
1. The constant can only take nonnegative values (X = R+).
2. The size of the LPI is small relative to the observed value of the constant in our universe (2e^{−1}ϵ ≤ δ ≪ 1).
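Principle 1 can be checked with a short numerical sketch (ours, not the paper's code): for exponential priors, the probability of ℓ_X = [a, b] is e^{-a/θ} − e^{-b/θ}, its maximizer over θ is θ* = (b − a)/ln(b/a) (set the θ-derivative to zero), and the maximum approaches 2e^{-1}ϵ as ϵ → 0, consistent with (12). The observed value x_0 below is an arbitrary stand-in.

```python
import math

def tp_max_exponential(a: float, b: float) -> float:
    """Maximum over theta of P(a <= X <= b) for X ~ Exponential(theta);
    the maximizing mean is theta* = (b - a) / ln(b / a)."""
    theta_star = (b - a) / math.log(b / a)
    return math.exp(-a / theta_star) - math.exp(-b / theta_star)

x0 = 3.7e5  # arbitrary observed value; only eps matters, not the scale
for eps in (1e-2, 1e-4, 1e-6):
    a, b = x0 * (1 - eps), x0 * (1 + eps)
    ratio = tp_max_exponential(a, b) / (2 * math.exp(-1) * eps)
    print(eps, ratio)  # ratio tends to 1 as eps shrinks
```

Note that the maximum depends on the interval only through the ratio b/a, which is why only the relative half-size ϵ matters.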

EXAMPLES
We consider several examples of physical parameters X to determine whether FT can be known for them. We will only treat cases in which the parameter cannot take negative values, and make use of Principle 1. This section does not intend to be exhaustive. It rather intends to show two things: first, that FT can be known in a few instances; second, that we can fail to know the level of tuning of a parameter if the relative half-length ϵ of its LPI is not small. More bounds for the LPIs can be found, for instance, in Adams (2019) and references therein. In particular, Table 2 of Adams (2019) shows that many LPIs span several orders of magnitude. This implies that FT cannot be known in those cases, because ϵ is large and therefore (by Table 1) (B) is not satisfied.

Gravitational force
According to Davies (1982), when observing the ratio X of the gravitational constant G to the contribution from vacuum energy to the cosmological constant Λ_vac, the LPI of the gravitational constant is ℓ_X = x_0[1 − 10^{−100}, 1 + 10^{−100}], where x_0 is the observed value of X. Since gravitation is attractive, its constant cannot be negative, and therefore X is positive as well (so that X = R+). Since ϵ = 10^{−100}, it follows that ϵ ≪ 1. Thus, according to Row 1 of Table 1, Díaz-Pachón et al. (2021), and (12), the maximal tuning probability is TP_max = 2e^{−1} × 10^{−100}. By Principle 1, we know that there is FT.

Higgs Vacuum Expectation Value
The Higgs vacuum expectation value v is given by v_0 = 2 × 10^{−17} m_P, where m_P is the Planck mass. According to Barnes (2012), its LPI (in units of m_P) is an interval ℓ_v = [a, b]. If v falls below the lower bound a, the proton becomes heavier than the neutron, causing hydrogen to be unstable and enabling the electron capture reaction (Damour & Donoghue 2008). If v exceeds the upper bound b of ℓ_v, there is no nuclear binding. The Higgs vacuum expectation value is positive, as it describes the average value of the Higgs field in the vacuum (i.e., X = R+). Writing ℓ_v as in (3), we obtain ϵ = 0.617. Since ϵ = 0.617 is not small, we cannot use (12) to calculate the maximum tuning probability as TP_max = 2e^{−1} × 0.617 = 0.454. Instead, we make use of the fact that, for an exponential distribution F(•; θ) with mean θ, the tuning probability of ℓ_v is e^{−a/θ} − e^{−b/θ}, which is maximized at θ* = (b − a)/ln(b/a) and yields TP_max ≈ 0.488. Thus, despite the smallness of ℓ_v, FT cannot be known for v, because TP_max is large and hence (B) is not satisfied.
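This failure of (B) can be cross-checked by brute force (our own sketch; the only input taken from the text is the relative half-size ϵ = 0.617, and we work in units of the interval midpoint):

```python
import math

eps = 0.617                # relative half-size of the Higgs LPI (from the text)
a, b = 1 - eps, 1 + eps    # endpoints in units of the interval midpoint

def tuning_prob(theta: float) -> float:
    """Probability of [a, b] under an Exponential prior with mean theta."""
    return math.exp(-a / theta) - math.exp(-b / theta)

# Brute-force scan over theta; the analytic maximizer is (b - a)/ln(b/a).
tp_max = max(tuning_prob(0.001 * k) for k in range(1, 10000))
print(round(tp_max, 3))  # about 0.488 -- far from small, so (B) fails
```

The scan agrees with the closed-form maximizer θ* = (b − a)/ln(b/a) ≈ 0.857 (in midpoint units).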

Amplitude of primordial fluctuations
According to Rees (2000), the LPI of the amplitude Q of the primordial fluctuations is ℓ_Q = [Q_0(1 − ϵ), Q_0(1 + ϵ)], with Q_0 = 5.5 × 10^{−6} the midpoint of ℓ_Q. Since the amplitude cannot be negative, X = R+. Since ϵ = 0.818 is not small, we follow the procedure of the previous example and calculate the maximal tuning probability as in Díaz-Pachón et al. (2021), obtaining TP_max ≈ 0.697. Since this number is not small, the FT of Q cannot be known.

Baryon-Photon Ratio
According to Adams (2019), the baryon-photon ratio η has a value η ∼ 6 × 10^{−10}, and it can sustain life within the interval ℓ_η = [6 × 10^{−13}, 6 × 10^{−7}] = [a, b]. (15) Again, since ϵ is not small, we follow the procedure of the previous two examples to calculate the maximal tuning probability, which turns out to be very close to 1. Thus, since TP_max is not small, the level of tuning of η cannot be known.
Remark 7. It follows from Examples 5.3-5.5 that the maximal tuning probability of the life-permitting interval ℓ_X = [a, b] of X ∈ X = R+, with a class of exponential priors for F_X, is a function

TP_max = h(r) = r^{−1/(r−1)} − r^{−r/(r−1)}    (17)

of the ratio r = b/a = (1 + ϵ)/(1 − ϵ) between the right and left end points of ℓ_X. It can be seen that h : (1, ∞) → (0, 1) is a strictly increasing function of r, with lim_{r→1} h(r) =: h(1) = 0, h′(1) = e^{−1}, and lim_{r→∞} h(r) = 1. From this it follows that (12) is a special case of (17), which holds in the limit of small relative half-sizes ϵ > 0 of ℓ_X. Moreover, from Theorem 3, a sufficient condition to know about FT of X on X = R+ is

h(r) ≤ δ ≪ 1.    (18)

When δ > 0 is small, (18) is essentially equivalent to the condition on ϵ > 0 in the second part of Principle 1.
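The stated properties of h can be confirmed numerically (a standalone sketch, not code from the paper):

```python
import math

def h(r: float) -> float:
    """Maximal tuning probability of [a, b] under exponential priors,
    as a function of r = b/a > 1 (equation (17))."""
    return r ** (-1 / (r - 1)) - r ** (-r / (r - 1))

# Strictly increasing on (1, infinity)
values = [h(r) for r in (1.01, 1.5, 2.0, 5.0, 50.0, 1e3)]
assert all(u < v for u, v in zip(values, values[1:]))

# Endpoint limits: h -> 0 as r -> 1+ and h -> 1 as r -> infinity
print(h(1.0001), h(1e9))

# Slope e^{-1} at r = 1: h(1 + s) ~ s/e, recovering TP_max ~ 2*eps/e
s = 1e-5
print(h(1 + s) / (s / math.e))  # close to 1
```

For instance, h(2) = 2^{−1} − 2^{−2} = 1/4 exactly, and the Higgs example (ϵ = 0.617) corresponds to r = 1.617/0.383 ≈ 4.22, well inside the regime where (18) fails.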

CONCLUSION
Using the recent mathematization of knowledge acquisition in Hössjer et al. (2022), we formalize Principle 1, which establishes that the FT of a given physical parameter X can be known at least if the following two conditions are satisfied:
1. The constant X can only take nonnegative values.
2. The size of its life-permitting interval is small relative to the observed value x_0 of the constant in our universe.

This latter proviso is extremely important since, as we saw in the examples, it is possible to have very small intervals whose size relative to x_0 is large. As a result, (B) is not satisfied and the level of tuning cannot be known. In other words, to know that a given constant is fine-tuned, it is the relative half-size of the interval that must be small, not the interval itself. That is, |ℓ_X| ≪ x_0 is required in order to satisfy (B). Our conclusion is sobering in the sense that many things that, in spite of the ado, are touted as fine-tuned cannot be known to be so.
On the other hand, when Principle 1 is satisfied, we can be sure that FT can be known. However, we also highlight that not satisfying (B) does not imply that the tuning is coarse; it only implies that FT cannot be known, even if (A) is satisfied, since it is still possible that the real probability F_0(ℓ_X) of tuning is small. Thus, when the maximal tuning probability TP_max is not small, the conclusion of coarse tuning is epistemological, not ontological.