Multifractal analysis for Markov interval maps with countably many branches

We study multifractal decompositions based on Birkhoff averages for sequences of functions belonging to certain classes of symbolically continuous functions. We do this for an expanding interval map with countably many branches, which we assume can be coded by a topologically mixing countable Markov shift. This generalises previous work on expanding maps with finitely many branches, and expanding maps with countably many branches where the coding is assumed to be the full shift. When the infimum of the derivative on each branch approaches infinity in the limit, we can directly generalise the results of the full countable shift case. However, when this does not hold, we show that there can be different behaviour, in particular in cases where the coding has finite topological entropy.


Introduction
Given a dynamical system (X, T) on a metric space X and a measurable function f : X → R, for α ∈ R it is natural to consider the size (Hausdorff dimension) of the sets

L(α) := {x ∈ X : lim_{n→∞} (1/n) Σ_{k=0}^{n−1} f(T^k x) = α}.

When X is compact and both T and f are continuous, L(α) is non-empty if and only if there exists a T-invariant measure µ such that ∫ f dµ = α. Moreover, if (X, T) is expanding, the space of invariant measures is typically very rich and it is often possible to express the dimension of L(α) as a conditional variational principle in terms of the entropies and Lyapunov exponents of such measures µ. Notice that one can similarly define corresponding sets L(α) for finitely or countably many functions f_i : X → R. For compact expanding dynamical systems, there is usually little difficulty in extending this result from one function to finitely or countably many functions.
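Schematically, such a conditional variational principle takes the following form (stated here only as orientation, with hypotheses varying between results; this display is not a theorem of this paper):

```latex
\dim_H L(\alpha) \;=\; \sup\Big\{ \frac{h_\mu}{\lambda_\mu} \;:\; \mu \in M(X,T),\ \int f \, d\mu = \alpha \Big\},
\qquad \lambda_\mu := \int \log |T'| \, d\mu .
```

Here h_µ is the measure-theoretic entropy of µ; the supremum is over invariant measures realising the prescribed average.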
Related results have been attained in cases when X is not compact. However, without compactness, the situation is more complicated and interesting new behaviour has been found. For example, it is possible for L(α) to be non-empty and have positive dimension but not support any invariant measures. In this paper, we consider a dynamical system on the unit interval with a countable number of expanding C 1 branches which we assume can be coded by a topologically mixing countable Markov shift (CMS). This generalises work by A. Fan, T. Jordan, L. Liao and M. Rams in their paper Multifractal Analysis for Expanding Interval Maps with Infinitely Many Branches [FJLR15] where they considered the problem in the specific case where the CMS is the full shift. While much of the theory is analogous and can be seen as a direct generalisation, we also find new behaviour, in particular when the CMS has finite topological entropy.
Interest in problems of this type goes back as far as 1934, when A. S. Besicovitch considered the Hausdorff dimension of the set of points in the unit interval whose base 2 expansions have digits with given frequencies [Bes34] (see also [Kni34]). This is equivalent to finding the Hausdorff dimension of the sets L(α) in the case where X is the unit interval, T is the doubling map and f is the characteristic function of [0, 1/2). In 1949, Eggleston generalised this to the base N case [Egg49], and this was further extended in papers including [BSa01], [Caj81], [Dur97], [Oli98], [Oli00], [Ols02], [Ols03b], [OW03], [PS07] and [Vol58].
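Eggleston's base-N theorem gives an explicit formula for these dimensions: the set of points whose base-N digits occur with frequencies (p_1, . . . , p_N) has Hausdorff dimension −Σ_i p_i log p_i / log N. A quick numerical check of this classical formula (the function name is ours):

```python
import math

def eggleston_dimension(freqs, base):
    """Hausdorff dimension of the set of points whose base-`base` digits
    occur with frequencies freqs (Eggleston's theorem):
    dim = -sum_i p_i log p_i / log N, with 0 log 0 := 0."""
    assert len(freqs) == base and abs(sum(freqs) - 1.0) < 1e-12
    h = -sum(p * math.log(p) for p in freqs if p > 0)  # entropy of (p_i)
    return h / math.log(base)

# Uniform frequencies give dimension 1 (Borel-normal numbers):
print(eggleston_dimension([0.5, 0.5], 2))   # 1.0
# A biased binary case of Besicovitch type:
print(eggleston_dimension([0.25, 0.75], 2))
```

The uniform case recovers the full dimension of the interval, while any bias strictly decreases it.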
In the paper Recurrence, Dimension and Entropy [FFW01], A. Fan, D. Feng and J. Wu considered the problem with a finite number of continuous functions f_i on a topologically mixing sub-shift of finite type. Related problems were also studied in the papers [BSc00], [BSS02a], [BSS02b], [FF00], [FLW02], [Oli99], [Ols03a], [OW07], [PW01] and [Tem01]. The most fundamental application in this setting is to consider the size of the set of points with digits of given frequency. In our setting, we can consider the analogous problem relating to the set of points whose orbits occupy each branch with given frequency, that is, the set of points whose codings have digits with given frequency. As our system is coded by a countable Markov shift, there will be points whose digit frequencies sum to less than one. In [FJLR15], in the specific case where the coding is the full countable shift, it was shown that there is some value s_∞, depending on the map T, such that when the frequencies sum to less than one the dimension of the corresponding frequency sets is s_∞. This behaviour was first found in [FLM10] for the Gauss map G : (0, 1] → (0, 1] defined by G(x) = 1/x mod 1. We show in Theorem 2.2 that this holds in the more general setting where the infimum of the derivative on each branch approaches infinity in the limit. However, in Theorem 2.4, we find that in some instances where the derivative has a uniform bound, the dimension of these sets may vary. In this case, the CMS has finite topological entropy and the quantity δ_∞, the entropy at infinity, plays a significant role.
A convenience in the full shift case is that one is able to approximate invariant measures using Bernoulli measures. For sub-shifts of finite type, there is a bound for the number of steps it takes to get from one digit to another on the shift space, so the argument using Bernoulli measures can be adapted. This is no longer the case for general countable Markov shifts, which causes additional complications in the analysis. We are able to work around this problem using recent work by G. Iommi, M. Todd and A. Velozo on the space of invariant measures for countable Markov shifts and the properties in the limit of sequences of these measures [IV19], [ITV19].

Setting and Results
Let {I_i}_{i∈N} be a countable collection of disjoint subintervals of [0, 1] for which the closure of ⋃_{i∈N} I_i equals ⋃_{i∈N} Ī_i, the union of the closures. Let T_i : I_i → [0, 1] be an injective C^1 map such that |T′_i(x)| ≥ ζ > 1 for all x ∈ I_i. By this we mean that T_i can be extended to a C^1 diffeomorphism from an open neighbourhood of I_i to an open neighbourhood of T_i(I_i) which maps I_i to T_i(I_i). We define the map T : ⋃_{i∈N} I_i → [0, 1] by T(x) = T_i(x) for all x ∈ I_i and adopt the convention that T′(x) = T′_i(x) for all x ∈ I_i. We also assume that log |T′| has variations uniformly tending to 0 (see Definition 3.1).
We assume that int T_i(I_i) ∩ int I_j is equal to int I_j or the empty set for all i, j ∈ N, where int denotes the interior. Let (Σ, σ) be the countable Markov shift with transition matrix given by A_ij = 1 if and only if int T_i(I_i) ∩ int I_j = int I_j. Throughout this paper we assume that this coding (Σ, σ) is topologically mixing (see Section 3.1). Consider the natural projection Π : Σ → [0, 1] which sends ω to the unique point in ⋂_{n=1}^∞ closure(T_{ω_1}^{−1} ∘ ⋯ ∘ T_{ω_n}^{−1}([0, 1])), and let Λ := Π(Σ). Then (Λ, T) defines a dynamical system. We denote by E := {x ∈ Λ : #Π^{−1}(x) ≥ 2} the set of points without a unique coding and note that ⋃_{n=0}^∞ T^{−n}E is at most countable, so for any set Ω ⊂ Λ we have that dim Ω = dim Ω \ ⋃_{n=0}^∞ T^{−n}E. We assume that Π(ω_1, ω_2, . . .) ∈ I_{ω_1} for every ω ∈ Σ \ Π^{−1}(E), so that T(Π(ω)) = T_{ω_1}(Π(ω)) = Π(σω) for these ω. This can be achieved by modifying the endpoints of the {I_i}_{i∈N} where necessary. We will also assume that there are no periodic points contained in E.
Let M(Λ, T) be the set of T-invariant probability measures on Λ. Since E is at most countable and does not contain any periodic points, it does not support any invariant measures. It follows that Π gives a bijection between the set of T-invariant measures and the set of shift invariant measures M(Σ, σ). For µ ∈ M(Λ, T), let λ_µ := ∫ log |T′| dµ be the Lyapunov exponent of µ and let h_µ be the entropy of µ with respect to T (see Section 3.3). We also define M_E(Λ, T) and M_E(Σ, σ) to be the subsets of M(Λ, T) and M(Σ, σ), respectively, consisting of the ergodic measures. It is easy to see that Π also gives a bijection between M_E(Λ, T) and M_E(Σ, σ).
For a sequence of functions φ_i : Λ → R with variations uniformly tending to 0 (Definition 3.1), we will study the possible limit points in R^N of the Birkhoff average sequences (A_n φ_i(x))_{n∈N}, where

A_n φ_i(x) := (1/n) Σ_{j=0}^{n−1} φ_i(T^j(x)).

In particular, we investigate sets of the form

Λ(γ) := {x ∈ Λ : lim_{n→∞} A_n φ_i(x) = γ_i for all i ∈ N},  γ = (γ_i)_{i∈N} ∈ R^N.

The following sets will be used to describe the possible limits of the Birkhoff averages. Let Z_0 := {(∫ φ_i dµ)_{i∈N} : µ ∈ M(Λ, T), λ_µ < ∞} and let Z be the closure of Z_0 in the pointwise limit topology. The following set will also be of importance in this paper. Let

R := {ω ∈ Σ : there exists q ∈ N with ω_i = q for infinitely many i ∈ N}

be the recurrent set. We will also use T to denote the transient set Σ \ R. For a set Ω ⊂ Σ we denote the set Π(Ω) \ ⋃_{n=0}^∞ T^{−n}E by Λ_Ω and Π(Ω) ∩ Λ(γ) by Λ_Ω(γ). Unfortunately, unlike in the case where the coding is the full countable shift, there may exist γ ∈ Z which is the Birkhoff limit of some x ∈ Λ_T. For this reason, we must restrict our attention to the recurrent set R. We calculate the Hausdorff dimension of the sets Λ_R(γ).

Theorem 2.1. Let (φ_i)_{i∈N} be a sequence of functions with variations uniformly tending to 0.
We would like to have the dimension without the limits in k and ε. This is possible if we restrict the behaviour of |T′| on the I_i in the limit as i → ∞, and set stricter conditions on the φ_i. For Theorem 2.2 we assume (in addition to our previous assumptions) that

inf_{x∈I_i} |T′(x)| → ∞ as i → ∞,   (2.1)

and that the φ_i are bounded. In this case we can extend the result of Theorem 1.2 in [FJLR15]. Analogously to [FJLR15], we define

s_∞ := inf{s ≥ 0 : P(−s log |T′|) < ∞},

where, for a function φ : Λ → R, P(φ) is the pressure defined by

P(φ) := sup{ h_µ + ∫ φ dµ : µ ∈ M(Λ, T), ∫ φ dµ > −∞ }.

We remark that if the topological entropy h_top(Σ, σ) is finite (see Section 3.4), then s_∞ = 0.
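To illustrate the quantity s_∞, consider the Gauss map example of [FLM10] mentioned above (the computation sketched here is standard and not taken from this paper): the branch of G on I_n = (1/(n + 1), 1/n] satisfies |G′(x)| = 1/x² ≍ n², so, roughly,

```latex
P(-s \log |G'|) \;\asymp\; \log \sum_{n=1}^{\infty} \Big( \sup_{x \in I_n} |G'(x)|^{-s} \Big)
\;\asymp\; \log \sum_{n=1}^{\infty} n^{-2s},
```

which is finite exactly when s > 1/2. Hence for the Gauss map s_∞ = 1/2, matching the dimension found in [FLM10] for frequency sets whose frequencies sum to less than one.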
Theorem 2.2. In the setting of Theorem 2.1, let |T′| further satisfy condition (2.1) and let the (φ_i)_{i∈N} be bounded. For γ ∈ Z_0 we have

Remark 2.3. The methods used in [FJLR15] can be modified to hold when the coding satisfies the big images and pre-images (BIP) property (see [Sar15, Definition 5.8]). Therefore, if (Σ, σ) satisfies the BIP property, Theorem 2.1 and Theorem 2.2 hold without taking the intersection with Π(R).

We state our final theorem now but defer the definitions until Section 4. If, in addition to the conditions in Theorem 2.1, we assume the functions φ_i ∈ C_0(Λ) and log |T′| − L ∈ C_0(Λ) for some L ≥ log ζ, where C_0(Λ) is a class of bounded functions with variations uniformly tending to 0 which symbolically vanish at infinity, we can also obtain an exact-type result. We remark that log |T′| being bounded implies that the topological entropy h_top(Σ, σ) is finite, since h_µ ≤ λ_µ for all µ ∈ M(Λ, T) (see Lemma 3.4 and Remark 3.5) and entropy is preserved under Π. As the functions φ_i are in C_0(Λ), for γ ∈ Z \ {0} we have Λ(γ) = Λ_R(γ) trivially. This allows us to calculate the dimension of the sets Λ(γ) without taking any intersection. With these conditions, we show that Z can be written as above, where δ_∞ is the entropy at infinity of (Σ, σ) (see Definition 4.4). When µ is the zero measure, the quantity in the brackets is to be interpreted as δ_∞/L.
Moreover, 0 ∈ Z and satisfies dim Λ(0) = max{α_4(0), dim Λ_T}.

In the next section we introduce some basic definitions and prove some distortion estimates. In Section 4 we then recall some theory of countable Markov shifts with finite topological entropy. While we are not assuming (Σ, σ) has finite topological entropy, the suspension space on Σ with roof function log |T′| will have finite topological entropy. Hence in Section 5, via Abramov's formula, we are able to relate the quantities h_µ/λ_µ to the entropies of measures on a CMS with finite topological entropy. We use this to prove two propositions: Proposition 5.1 and Proposition 5.2. The first allows us to approximate arbitrary shift invariant measures with ergodic measures supported on finite sub-shifts. The second gives us upper semi-continuity of the map µ → h_µ/λ_µ in the weak* topology when the measures satisfy h_µ/λ_µ > s_∞ (in the setting of Theorem 2.2). Note that this is a necessary tool for proving Theorem 2.2 from Theorem 2.1. In [FJLR15] this is proved in the specific case where (Σ, σ) is the full shift; however, their methods rely on the uniform structure of the shift space and cannot be used here. The following three sections are devoted to proving Theorem 2.1, Theorem 2.2 and Theorem 2.4 respectively. Finally, in Section 9 we discuss some applications, in particular to the frequency of digits case and to the map F_λ : (0, 1] → (0, 1], defined for λ ∈ (0, 1) branchwise on the intervals I_n, n ≥ 2, which was studied in [SV97], [BT12], [BT15] and [IJT17]. We finish by discussing some cases when α_4(γ) can instead be written as a supremum over probability measures.

CMS and basic intervals

An admissible word of length n is a string (ω_1, ω_2, . . . , ω_n) ∈ N^n such that A_{ω_i, ω_{i+1}} = 1 for all i = 1, . . . , n − 1. We denote by C_n(Σ) the set of all admissible words of length n. For a point ω = (ω_1, ω_2, . . .) ∈ Σ we use ω|_i^j to denote the word (ω_i, . . . , ω_j) ∈ C_{j−i+1}(Σ). For (ω_1, . . . , ω_n) ∈ C_n(Σ) the nth level cylinder [ω_1, . . . , ω_n] is defined by {ω′ ∈ Σ : ω′_i = ω_i, ∀ i = 1, . . . , n} and the nth level basic interval is defined by

C_n(ω_1, . . . , ω_n) := Conv(Π([ω_1, . . . , ω_n])),

where Conv denotes the convex hull. An admissible word w is said to connect a, b ∈ N if the cylinder [a, w, b] is non-empty. In this paper we always assume the coding (Σ, σ) corresponding to (Λ, T) is topologically mixing, that is, for each pair a, b ∈ N there exists an N ∈ N such that for all n ≥ N there is an admissible word of length n connecting a and b. In Section 5, from the coding (Σ, σ) we will construct a CMS which may not be topologically mixing, but will be topologically transitive. This is a weaker condition and means that for each pair a, b ∈ N there exists an admissible word connecting a and b. We endow Σ with the topology generated by the cylinders. We use the metric d(ω, ω′) := 2^{−min{i∈N : ω_i ≠ ω′_i}} (with d(ω, ω) := 0). This generates the same topology as that of the cylinders.
We define log |T′| : ⋃_{i∈N} I_i → R to have variations uniformly tending to 0 similarly. Note that if φ has variations uniformly tending to 0 then there exists a uniformly continuous function, which we will denote by f_φ, such that f_φ(ω) = φ ∘ Π(ω) for all ω ∈ Σ \ Π^{−1}(E). Since we are assuming E does not contain any periodic points, it cannot support any invariant measures. It follows that for every ν ∈ M(Σ, σ) and every φ with variations uniformly tending to 0,

∫ f_φ dν = ∫ φ d(Π_*ν).   (3.1)

We also have that for every φ with variations uniformly tending to 0, every n ∈ N, and all ω ∈ Σ \ Π^{−1}(⋃_{j=0}^∞ T^{−j}E),

A_n f_φ(ω) = A_n φ(Π(ω)).   (3.2)

Given a basic interval C_n(ω_1, . . . , ω_n), we define M*φ(ω_1, . . . , ω_n) := sup_{ω∈C_n(ω_1,...,ω_n)}

Proof. This follows straightforwardly as for any n ∈ N and (ω_1, . . . , ω_n) ∈ C_n(Σ)

We adapt Lemma 2.3 in [FJLR15] to our setting. Note that we have an additional term since we are allowing diam(T(I_i)) to be less than 1.
Lemma 3.3. There exists a positive sequence ε(n) converging to 0 such that for any ω ∈ Σ \ Π^{−1}(E),

Proof. By the Mean Value Theorem we have, for some x ∈ C_n(ω),

Hence, we can then apply Lemma 3.2 to log |T′| since we are assuming it has variations uniformly tending to 0.

Entropy of invariant measures
We briefly recall the definition of the entropy of an invariant probability measure (for more details, see [Wal81, Chapter 4]). Let (Y, B, f, µ) be a probability-preserving transformation. A partition β is a finite or countable collection of subsets ξ_i ∈ B such that ξ_i ∩ ξ_j = ∅ if i ≠ j and ⋃_i ξ_i = Y. The entropy of the partition is defined to be

H_µ(β) := −Σ_i µ(ξ_i) log µ(ξ_i),

where 0 log 0 := 0. Note it is possible for H_µ(β) to be infinite. For a partition β, we define f^{−1}(β) := {f^{−1}(ξ_i) : ξ_i ∈ β}. Then f^{−1}(β) is also a partition. Furthermore, for two partitions β, β′ we define the join β ∧ β′ to be the set {ξ ∩ ξ′ : ξ ∈ β, ξ′ ∈ β′}. Again, the set β ∧ β′ is also a partition. The entropy of µ with respect to β is then defined to be

h_µ(f, β) := lim_{n→∞} (1/n) H_µ(β ∧ f^{−1}(β) ∧ ⋯ ∧ f^{−(n−1)}(β)).

Finally, the entropy of µ is defined to be

h_µ(f) := sup{h_µ(f, β) : β a partition with H_µ(β) < ∞}.

We have the following lemma bounding the entropy of measures µ ∈ M_E(Λ, T) by their Lyapunov exponents. The proof is standard and likely known. However, as there are some adjustments needed to prove it in the setting we are working in, we include it here for completeness.

Lemma 3.4. Let µ ∈ M_E(Λ, T). Then h_µ ≤ λ_µ.
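As a numerical sanity check of these definitions (illustrative code, not part of the paper): for a Bernoulli measure with weights (p_i) and β the partition into first-level cylinders, the cells of the n-fold join are n-cylinders with product weights, so H_µ of the join equals n H_µ(β) and the limit defining h_µ(f, β) is −Σ p_i log p_i.

```python
import math
from itertools import product

def partition_entropy(weights):
    """H(beta) = -sum_i mu(xi_i) log mu(xi_i), with 0 log 0 := 0."""
    return -sum(w * math.log(w) for w in weights if w > 0)

def joined_entropy(weights, n):
    """Entropy of the n-fold join beta ∧ f^{-1}(beta) ∧ ... for a
    Bernoulli measure: join cells are n-cylinders with product weights."""
    cells = [math.prod(ws) for ws in product(weights, repeat=n)]
    return partition_entropy(cells)

p = [0.5, 0.25, 0.25]
H = partition_entropy(p)
for n in (1, 2, 3):
    # Additivity under independence: H(join of n copies) = n * H(beta)
    assert abs(joined_entropy(p, n) - n * H) < 1e-9
print(H)  # the entropy rate h_mu(f, beta) for this Bernoulli measure
```

The linear growth of the joined entropy is exactly why the (1/n) normalisation in the definition converges here.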
Remark 3.5. Lemma 3.4 is sufficient for our use in the proof of Theorem 2.1. However, it then follows from Theorem 2.1 that the inequality h µ ≤ λ µ holds for all µ ∈ M(Λ, T ).

Topological pressure and topological entropy
Recall that for a function φ : Λ → R, the pressure P(φ) is defined by

P(φ) := sup{ h_µ + ∫ φ dµ : µ ∈ M(Λ, T), ∫ φ dµ > −∞ }.

While this definition of pressure is sufficient for our purposes, we remark that when φ has summable variations, that is Σ_{n=1}^∞ var_n(φ) < ∞, this can be alternatively stated (see [Sar99, Theorem 3] and [IJT15, Theorem 2.10]) as

P(φ) = lim_{n→∞} (1/n) log Σ_{σ^n ω = ω, ω_1 = a} exp( Σ_{k=0}^{n−1} φ ∘ Π(σ^k ω) ),

where the value does not depend on the a ∈ N chosen. This is equivalently the Gurevich pressure of φ ∘ Π on (Σ, σ). This was defined by Sarig in [Sar99] based on work by Gurevich [Gur69]. The Gurevich pressure of the zero function is of special importance. We call this the topological entropy of (Σ, σ) and denote it by h_top(Σ, σ). Explicitly,

h_top(Σ, σ) = lim_{n→∞} (1/n) log #{ω ∈ Σ : σ^n ω = ω, ω_1 = a}.

If (Σ, σ) is topologically transitive but not topologically mixing then the limit is replaced by a limsup. In either case, this satisfies the variational principle

h_top(Σ, σ) = sup{ h_ν : ν ∈ M(Σ, σ) } = sup{ h_ν : ν ∈ M_E(Σ, σ) },

where the second equality follows from [Sar99, Theorem 2] and [Wal81, Corollary 8.6.1].
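As a consistency check of the periodic-point counting formula (our illustration, for the simplest case): on the full shift over N symbols, the number of periodic points of period n whose first symbol is a is N^{n−1}, so

```latex
h_{\mathrm{top}}(\Sigma, \sigma)
= \lim_{n \to \infty} \frac{1}{n} \log \#\{\omega \in \Sigma : \sigma^n \omega = \omega,\ \omega_1 = a\}
= \lim_{n \to \infty} \frac{1}{n} \log N^{\,n-1}
= \log N ,
```

in agreement with the variational principle, where the supremum log N is attained by the uniform Bernoulli measure.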

Countable Markov shifts with finite topological entropy
In this section we recall some definitions and theory for countable Markov shifts with finite topological entropy. For a more thorough account we refer the reader to [IV19] and [ITV19]. We emphasise that we are not in general assuming the shift space (Σ, σ) corresponding to (Λ, T) has finite topological entropy. However, in Section 5 we show that the quantities h_µ/λ_µ can be related to the entropies of measures on a topologically transitive (but not necessarily topologically mixing) CMS with finite topological entropy. Furthermore, under the conditions of Theorem 2.4, the topological entropy of (Σ, σ) corresponding to (Λ, T) will indeed be finite, so the theory outlined here will be useful for us once again. Throughout this section, to account for the fact that the CMS in Section 5 may not be topologically mixing, by (Σ, σ) we mean a topologically transitive CMS with finite topological entropy.

The space of invariant measures
Recall we denote by M(Σ, σ) the set of all σ-invariant probability measures on Σ and M E (Σ, σ) the subset of M(Σ, σ) consisting of the ergodic measures. The following proposition was proved in [ITV19] (Theorem 8.7). While they state in the theorem that the measures ν n may be taken to be compactly supported, they prove the stronger result stated here. Notice that as log |T ′ • Π| is not necessarily bounded we cannot also conclude that lim n→∞ λ νn = λ ν . However, in Proposition 5.1 we extend this result allowing us to assume this also holds.
Proposition 4.1. Let ν ∈ M(Σ, σ), then there exists a sequence of ergodic measures ν n ∈ M(Σ, σ) such that ν n converges to ν in the weak* topology and lim n→∞ h νn = h ν . It is moreover possible to choose the ν n such that they are supported on finitely many symbols, that is, supported on {1, . . . , k n } N , respectively, for some sequence (k n ) n∈N ⊂ N.
It is well known that M(Σ, σ) is not compact in the weak* topology as mass may be lost in the limit. Accordingly, let M_≤1(Σ, σ) be the set of sub-probability σ-invariant measures, that is, the set of σ-invariant measures ν such that |ν| ≤ 1, where |ν| := ν(Σ). Given an enumeration (C_i)_{i∈N} of the cylinders of Σ, we define a metric on M_≤1(Σ, σ) by

d(µ, ν) := Σ_{i=1}^∞ 2^{−i} |µ(C_i) − ν(C_i)|.

The topology induced by this metric is called the topology of convergence on cylinders. We say a sequence of measures ν_n ∈ M_≤1(Σ, σ) converges to ν ∈ M_≤1(Σ, σ) on cylinders if for every cylinder C we have lim_{n→∞} ν_n(C) = ν(C). Clearly a sequence ν_n converges to a measure ν on cylinders if and only if it converges to ν in the topology of convergence on cylinders. It was shown in [IV19] (Theorem 1.2) that M_≤1(Σ, σ) endowed with this topology is compact. Moreover, weak* convergence and convergence on cylinders are equivalent when there is no loss of mass (see [IV19, Lemma 3.17]).

We can further characterise this topology in terms of test functions as follows. If C is a cylinder of length m, denote by 1_C its characteristic function. For a non-empty set Ω ⊂ Σ we define var_Ω(f) := sup{|f(ω) − f(ω′)| : ω, ω′ ∈ Ω}. We say f : Σ → R is in C_0(Σ) if and only if the following four conditions hold:

Definition 4.2. We further define C_0(Λ) to be the subset of functions φ : Λ → R with variations uniformly tending to 0 such that f_φ ∈ C_0(Σ), where f_φ is as defined in Section 3.2.
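To illustrate loss of mass in this topology, here is a toy computation (our own encoding of measures by their cylinder masses, with a finite truncation of the enumeration): the invariant Dirac measure at the fixed point (q, q, q, . . .) gives every fixed cylinder mass 0 once q leaves the symbols appearing in that cylinder, so these probability measures drift towards the zero measure on cylinders even though each has total mass 1.

```python
def dirac_cylinder_mass(q, cylinder):
    """Mass the Dirac measure at (q, q, q, ...) gives the cylinder [w1,...,wm]."""
    return 1.0 if all(w == q for w in cylinder) else 0.0

def metric(mass_a, mass_b, cylinders):
    """d(mu, nu) = sum_i 2^{-i} |mu(C_i) - nu(C_i)| over a (truncated) enumeration."""
    return sum(abs(mass_a(C) - mass_b(C)) / 2 ** (i + 1)
               for i, C in enumerate(cylinders))

# Truncated enumeration: all cylinders of length <= 2 over symbols 1..5.
cylinders = [(a,) for a in range(1, 6)] + \
            [(a, b) for a in range(1, 6) for b in range(1, 6)]
zero = lambda C: 0.0
dists = [metric(lambda C, q=q: dirac_cylinder_mass(q, C), zero, cylinders)
         for q in (1, 6, 7)]
print(dists)  # once q is outside the enumerated symbols the distance to 0 vanishes
```

Of course with the full (infinite) enumeration the distance would only tend to 0 rather than vanish, but the truncation already shows the mechanism: no fixed cylinder retains mass.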
The following lemma was shown in [IV19] (see Lemma 3.19) and characterises the topology of convergence on cylinders in terms of test functions. We will use this lemma multiple times in the proof of Theorem 2.4.

Entropy at infinity
An important quantity of a CMS with finite topological entropy is the entropy at infinity. This is a measure of how complex the system is near infinity.
This too satisfies a variational principle. Theorem 1.4 in [ITV19] says that

δ_∞ = sup_{(ν_n)_n → 0} lim sup_{n→∞} h_{ν_n},

the metric theoretic entropy at infinity of (Σ, σ). Here (ν_n)_n → 0 means that the sequence of measures converges to the zero measure on cylinders and the supremum is over all such sequences (ν_n)_{n∈N} ⊂ M(Σ, σ).
The following theorem was proved in [ITV19]. We will use this to prove Proposition 5.2 and in the proof of the upper bound of Theorem 2.4. Note that it also gives upper semi-continuity of the entropy map in the weak* topology.
Theorem 4.5. Let (Σ, σ) be a topologically transitive CMS with finite topological entropy. Let (ν_n)_{n∈N} be a sequence of σ-invariant probability measures converging on cylinders to ν ∈ M_≤1(Σ, σ). Then

lim sup_{n→∞} h_{ν_n} ≤ |ν| h_{ν/|ν|} + (1 − |ν|) δ_∞.

If the sequence converges on cylinders to the zero measure, then the right hand side is understood as δ_∞.

Entropy relations via a suspension space

In this section we will construct a sequence of suspension spaces on Σ with locally constant roof functions τ_m which take integer values and are such that τ_m/2^m converges to f_{log |T′|} uniformly. We will show that each suspension space is closely related to a countable Markov shift with finite topological entropy, Σ_{τ_m} say, and further show that there is a correspondence between measures in M_{λ<∞}(Σ, σ) and measures in M(Σ_{τ_m}, σ). By Abramov's formula, for each ν ∈ M_{λ<∞}(Σ, σ) the entropy of the corresponding measure in M(Σ_{τ_m}, σ) will be approximately equal to h_ν/λ_ν, with the approximation converging as m → ∞. Using this and the results on CMS with finite topological entropy discussed in the previous section, we are able to prove the following two propositions.

Proposition 5.1. Let ν ∈ M_{λ<∞}(Σ, σ). Then there exists a sequence of ergodic measures ν_n ∈ M_{λ<∞}(Σ, σ) supported on finitely many symbols such that ν_n → ν weak*, lim_{n→∞} h_{ν_n} = h_ν and lim_{n→∞} λ_{ν_n} = λ_ν.

Proposition 5.2. Let δ > 0 and let (ν_n)_{n∈N} ⊂ M_{λ<∞}(Σ, σ) be a sequence of measures satisfying h_{ν_n}/λ_{ν_n} > s_∞ + δ for all n ∈ N. Then there exists ν ∈ M_{λ<∞}(Σ, σ) and a subsequence (n_k)_{k∈N} such that ν_{n_k} → ν weak* and lim sup_{k→∞} h_{ν_{n_k}}/λ_{ν_{n_k}} ≤ h_ν/λ_ν.

As well as the constructed countable Markov shifts and Abramov's formula, the key elements in the proof of Proposition 5.1 are Lemma 5.1 in [ITV18] and Proposition 4.1. Once we have proved Proposition 5.1, we will discuss some heuristics for the proof of Proposition 5.2.
We now define the suspension spaces on Σ and prove several lemmas. For m ∈ N we define a function τ_m on Σ which is constant on cylinders of length m. Then τ_m/2^m ր f_{log |T′|}, and each τ_m is uniformly continuous, takes integer values on Σ, and is strictly positive if m is sufficiently large. We may assume that τ_m is strictly positive for all m ∈ N (otherwise replace 2^m with l^m, where l ∈ N is such that 1/l < log ζ). Since f_{log |T′|} is uniformly continuous and by (3.1), we have that

For now, fix an m ∈ N. We define the suspension space X by

X := {(ω, t) ∈ Σ × R : 0 ≤ t ≤ τ_m(ω)},

where we identify the points (ω, τ_m(ω)) = (σ(ω), 0) for all ω ∈ Σ. Let ϕ_t be the map given by ϕ_t(ω, x) = (ω, x + t). Then (X, ϕ_t) defines a semi-flow. We will mainly consider the map ϕ_1 because, due to the roof function taking integer values, (X, ϕ_1) is closely related to a CMS. We endow X with the Bowen–Walters metric described in [BI06, Section 2.3]. We can define a map M on M_{λ<∞}(Σ, σ) by

Mν := (ν × Leb)|_X / ∫ τ_m dν,

where Leb is the Lebesgue measure on the real line R and (ν × Leb)|_X stands for the restriction of ν × Leb to X. When we want to be explicit about the dependence of M on m, we will write M_m. Work by Ambrose and Kakutani implies that M is a bijection. It is easy to see that Mν is ergodic if and only if ν is ergodic. Let ν ∈ M_{λ<∞}(Σ, σ) and let A ⊂ X be the set A = Σ × [0, 1). Then Mν(A) = 1/∫ τ_m dν. Furthermore, A is spanning with respect to the map ϕ_1, so by Abramov's formula

h_{Mν}(ϕ_1) = h_ν(σ) / ∫ τ_m dν.

We now define a map from X onto a topologically transitive CMS, Σ_{τ_m} say, with finite topological entropy. This will be related to Σ in the following simple way. First note that the CMS Σ is isomorphic to a CMS, Σ_m say, on the alphabet C_m(Σ), where the transition (ω_1, . . . , ω_m) → (ω′_1, . . . , ω′_m) is allowed if and only if ω_{l+1} = ω′_l for all 1 ≤ l ≤ m − 1. The CMS Σ_{τ_m} is then formed from Σ_m by replacing each vertex (ω_1, . . . , ω_m) by a string of k_{(ω_1,...,ω_m)} vertices.
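The entropy bookkeeping behind this correspondence can be sketched as follows (our summary of the intended use of Abramov's formula): for ν ∈ M_{λ<∞}(Σ, σ),

```latex
h_{M\nu}(\varphi_1) = \frac{h_\nu(\sigma)}{\int \tau_m \, d\nu},
\qquad
2^m \, h_{M\nu}(\varphi_1)
= \frac{h_\nu(\sigma)}{\int (\tau_m / 2^m) \, d\nu}
\xrightarrow{\ m \to \infty\ }
\frac{h_\nu(\sigma)}{\int f_{\log |T'|} \, d\nu}
= \frac{h_\nu}{\lambda_\nu},
```

using that τ_m/2^m → f_{log |T′|} uniformly and that λ_ν = ∫ f_{log |T′|} dν by (3.1). This is the sense in which the suspension entropies approximate h_ν/λ_ν.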
It is easy to see that π is invertible and its inverse is continuous.
We are now ready to prove Proposition 5.1.
For the rest of this section, we assume that inf_{x∈I_i} |T′(x)| → ∞ as i → ∞. To prove Proposition 5.2 we require the following lemma, which follows from routine arguments involving the pressure. Since the proof is relatively long and unrelated to the countable Markov shifts Σ_{τ_m}, we will postpone it until Section 7 (see Lemmas 7.1 and 7.2).
Before we continue, let us give some heuristics for the proof of Proposition 5.2. Using Lemma 5.7 it can be shown that, since the measures ν_n in Proposition 5.2 satisfy h_{ν_n}/λ_{ν_n} > s_∞ + δ for all n ∈ N, they must be tight and hence have a weak* limit point, ν say. Assume that ν_n → ν weak* (note this can be done without loss of generality). One may hope to use Proposition 8.5 in [ITV19] to prove Proposition 5.2. Unfortunately, the measures ν_n converging weak* does not imply that the measures Mν_n converge weak*, nor even is the set of measures (Mν_n)_{n∈N} necessarily tight. To see this, consider the measures ρ_n := (1 − ε_n)ν_n + ε_n η_n, where η_n ∈ M(Σ, σ) is a sequence of measures converging on cylinders to the zero measure and ε_n is a positive sequence converging to zero. Clearly ρ_n also converges weak* to ν, but as the map M gives a disproportionate amount of weight to cylinders where log |T′| ∘ Π is large, the part of the measure corresponding to ε_n η_n on the suspension space may not decay. However, as h_{ν_n}/λ_{ν_n} > s_∞ + δ and by Lemma 5.7,

That is, intuitively, 'small' bits of measure going off to infinity should not hinder upper semi-continuity from holding. While we cannot split the measures ν_n up into parts that are staying bounded and parts going off to infinity, the behaviour of 'small' bits of measure going off to infinity on Σ is captured by loss of mass in the topology of convergence on cylinders on Σ_{τ_m}. In particular, for each m ∈ N, by compactness π_* M_m ν_n has a limit point in the topology of convergence on cylinders, η_m ∈ M_≤1(Σ_{τ_m}, σ) say. We will show that the measures

ν^{(m)} := M_m^{−1}( (π^{−1})_* ( (η_m/|η_m|) × Leb|_{[0,1)} ) )

are precisely equal to ν. Moreover, by Lemma 5.7 and (5.4) one can see that s_∞ is closely related to the quantities δ_∞(Σ_{τ_m}, σ). Hence using that π_* M_m ν_n → η_m on cylinders, Theorem 4.5, and (5.4), we are able to prove the upper semi-continuity statement.
The next two lemmas show that we can relate cylinders in Σ to cylinders in Σ τm , and vice versa. In Lemma 5.10 we will use this to relate s ∞ with the quantities δ ∞ (Σ τm , σ). Later, Lemma 5.9 will be used again to prove that the measures ν (m) are indeed equal to the weak* limit point ν. We remark that we do this directly by showing that, along some subsequence n k , ν n k → ν (m) on cylinders for every m. Thus, it is not necessary for us to use the tightness of the measures ν n to deduce the existence of the weak* limit point ν.
By Lemma 5.10 and (5.7), we have with ε(m) → 0 as m → ∞. This implies that for all m large enough and consequently Hence by the Monotone Convergence Theorem, This completes the proof of Proposition 5.2.
Proof of Theorem 2.1

The following proposition proves the first statement in Theorem 2.1.
Proof. We adapt the proof of Proposition 4.1 in [FJLR15]. Given γ, assume there exists x ∈ Λ_R(γ) such that lim_{n→∞} A_n φ_i(x) = γ_i for all i ∈ N. Let ω ∈ Σ satisfy Πω = x, and let a ∈ N be such that ω_i = a infinitely often. We may assume ω_1 = a, since for any j ∈ N we have lim_{n→∞} A_n φ_i(T^j(x)) = γ_i. If we fix ε > 0 and k ∈ N, then by Lemma 3.2 we can find N ∈ N such that for all n ≥ N,

For some n > N we must have ω_n = ω_1. Let ν be the shift invariant probability measure on Σ supported on the periodic orbit of (ω_1, . . . , ω_{n−1}). The measure µ = ν ∘ Π^{−1} satisfies |∫ φ_i dµ − γ_i| ≤ ε for each 1 ≤ i ≤ k. This finishes the proof.

Upper Bound
We now prove the upper estimate.
We adapt the proof of the upper bound of Theorem 1.1 in [FJLR15]. Note that, as we are considering subsets of Λ_R, we are still able to define the σ^n-invariant Bernoulli measures used in the proof.
Proof of Proposition 6.2. We can write Λ_R(γ) as a countable union over a ∈ N and over iterates of T of the sets below. Hence, as the T_j are bi-Lipschitz on the I_j and by the countable stability of Hausdorff dimension, it suffices to prove an upper bound for the sets

Λ_{R,a}(γ) := Λ_R(γ) ∩ {Πω ∈ Λ : ω_1 = a, ω_i = a infinitely often}.
Fix a ∈ N and let s̄ = dim Λ_{R,a}(γ). Given ε > 0 and k ∈ N, it follows from Lemma 3.2 that the basic intervals corresponding to the set ⋃_{n≥l} P_{n,a}(γ) form a covering of Λ_{R,a}(γ) for any l ∈ N. We must have Σ_{I∈P_{n,a}(γ)} diam(Π(I))^{s̄−ε} > 1 for infinitely many n, as otherwise for any δ > 0 we would obtain a contradiction with s̄ = dim Λ_{R,a}(γ). For these n we can choose a finite subfamily S_n ⊂ P_{n,a}(γ) such that the sum of the diameters raised to the power s̄ − ε is still greater than 1. We can then choose a different exponent s_n > s̄ − ε for which this sum is equal to 1. Thus, for these n we can define a σ^n-invariant Bernoulli measure η_n on Σ by giving each I ∈ S_n weight diam(Π(I))^{s_n}. Then the measures ν_n := (1/n) Σ_{j=0}^{n−1} η_n ∘ σ^{−j} are σ-invariant, ergodic, and satisfy ∫ φ_i ∘ Π dν_n ∈ (γ_i − ε, γ_i + ε) for all i ≤ k. Moreover, by Abramov's formula for entropy (see [PU10], Theorem 2.4.6),

h_{ν_n} = −(1/n) Σ_{I∈S_n} diam(Π(I))^{s_n} log diam(Π(I))^{s_n},

and, by Lemma 3.3, for n large enough,

λ_{ν_n} ≤ −(1/n) Σ_{I∈S_n} diam(Π(I))^{s_n} log diam(Π(I)) + ε.
Thus, considering the measures µ_n := Π_*ν_n and taking the limit as k → ∞ and ε → 0 completes the proof.
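The choice of the exponent s_n above is a Moran-type equation: given the diameters d_I of a finite subfamily, pick the s for which Σ_I d_I^s = 1. Since each d_I < 1 the sum is strictly decreasing in s, so the solution is found easily by bisection (a small helper of our own, purely for illustration):

```python
def moran_exponent(diams, tol=1e-12):
    """Solve sum_I d_I^s = 1 for s by bisection; requires 0 < d_I < 1,
    so s -> sum d_I^s is strictly decreasing and the root is unique."""
    f = lambda s: sum(d ** s for d in diams) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # grow the bracket until the sum drops below 1
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# Two pieces of diameter 1/2: solves 2*(1/2)^s = 1, i.e. s = 1.
print(moran_exponent([0.5, 0.5]))
# Middle-third Cantor covers: 2*(1/3)^s = 1, i.e. s = log 2 / log 3.
print(moran_exponent([1/3, 1/3]))
```

The second example returns the familiar Cantor set dimension, mirroring how the exponents s_n track the dimension of the covered set.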

Lower Bound
We now prove the lower bound.
Lemma 6.4. Let (µ_n)_{n∈N} ⊂ M(Λ, T) be a sequence of measures with finite Lyapunov exponent such that the following limits exist: (6.1)

For the proof of this lemma, we use the technique of w-measures. The term 'w-measures' was introduced in [GR09], though similar notions had been around before this. For convenience, in this proof we will denote f_{log |T′|} by f_0 and f_{φ_i} by f_i, where recall these are the corresponding uniformly continuous functions on the shift space. Furthermore, let Σ_R(γ) := {ω ∈ R : lim_{n→∞} A_n f_i(ω) = γ_i for all i ∈ N} and note that, by (3.2), Π(Σ_R(γ)) = Λ_R(γ). The idea of the proof is to construct a probability measure η which gives mass to Σ_R(γ) by defining it on a family of cylinders which has a product structure. We then apply the mass distribution principle to the push-forward measure Π_*η. Note that by Proposition 5.1, we may assume the measures ν_n := µ_n ∘ Π are ergodic and supported on finitely many symbols. For each n, using Birkhoff's ergodic theorem and uniform continuity, by making m_n large enough we can find a collection of cylinders of length m_n with total ν_n-measure arbitrarily close to one such that each point contained in these cylinders has partial m_n-Birkhoff average close to ∫ f_i dν_n for all 0 ≤ i ≤ n. By Lemma 3.3 and the fact that the ν_n are supported on finitely many symbols, from this we can also approximate the diameter of the corresponding basic intervals. Furthermore, using the Shannon–McMillan–Breiman theorem we can control the ν_n-measure of these cylinders, insisting that their individual masses do not vary too far from exp(−m_n h_{ν_n}). The product measure is then constructed by using bridge words to connect each set of cylinders to the cylinders constructed from the next measure in the sequence. Crucially, the length of the bridge words and the convergence constants of the following measure do not depend on our m_n, and so we can choose our m_n large enough so that any effect from bridging between measures is negligible.
Provided the total ν n -measure of each set of cylinders approaches one fast enough, the push-forward of the constructed measure will give mass to Λ R (γ), allowing us to apply the mass distribution principle.
The proof is split into three parts. We first define the product measure η, then show that η(G) > 0 for some well-behaved set G ⊂ Σ R (γ), and finally we apply the mass distribution principle.
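For later reference, the mass distribution principle we will apply is the following standard statement (the constants C, s, r_0 below are generic):

```latex
% Mass distribution principle (standard statement, included for reference):
% if a measure spreads mass no faster than r^s on balls, then any set it
% charges has Hausdorff dimension at least s.
Let $\eta$ be a finite Borel measure and $S$ a set with $\eta(S) > 0$.
If there exist $C > 0$, $s \ge 0$ and $r_0 > 0$ such that
\[
  \eta\big(B_r(x)\big) \le C r^{s}
  \qquad \text{for all } x \in S \text{ and all } 0 < r < r_0,
\]
then $\dim_H S \ge s$.
```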
Proof of Lemma 6.4. Applying Proposition 5.1 to the measures µ_n ∘ Π ∈ M(Σ, σ), we can find ergodic measures ν_n ∈ M_E(Σ, σ) supported on finitely many symbols such that lim_{n→∞} h_{ν_n}/λ_{ν_n} = lim sup_{n→∞} h_{µ_n}/λ_{µ_n} and the limits in (6.1) are preserved. Without loss of generality we may assume that lim_{n→∞} h_{ν_n}/λ_{ν_n} > 0. This ensures that the constructed measure η does not give mass to individual points, in particular to Π^{−1}(∪_{j=0}^∞ T^{−j} E).

Let (k_n)_{n∈N} be an increasing sequence such that ν_n({1, . . . , k_n}^N) = 1 for all n ∈ N. As Σ is mixing, there exists N_n ∈ N such that for each pair a ∈ {1, . . . , k_n}^N, b ∈ {1, . . . , k_{n+1}}^N we can choose an admissible word w(a, b) ∈ C_{N_n}(Σ) connecting a and b. Furthermore, let (K_n)_{n∈N} be an increasing sequence of finite Markov shifts K_n ⊂ Σ encompassing all elements of Σ consisting only of digits in {1, . . . , k_{n+1}} and those appearing in the chosen admissible words connecting {1, . . . , k_n} and {1, . . . , k_{n+1}}. Each f_i, 0 ≤ i ≤ n, is bounded on K_n by a uniform bound, λ_n say. Let (m_n)_{n∈N} be an increasing sequence of integers such that

λ_n/m_n → 0, (6.2)

among other conditions which we will specify in what follows. For each n ∈ N we define η on cylinders of the form [u_1, w_1, u_2, . . . , w_{n−1}, u_n], where each w_i depends on u_i ∈ C_{m_i}(Σ) and u_{i+1} ∈ C_{m_{i+1}}(Σ). We call these w_i bridge words. They have length N*_i := 2N_i + 1. We use these bridge words to make sure our measure accumulates on R. For all cylinders u which do not intersect one of the [u_1, w_1, u_2, . . . , w_{n−1}, u_n], set η(u) = 0. Then η is a uniquely defined probability measure, and it is clear that η(R) = 1.
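The product structure of η can be sketched as follows; this is a schematic under the assumption that the weights follow the usual w-measure construction of [GR09] and [FJLR15], with C_i the collections of m_i-level cylinders specified below:

```latex
% Schematic form of the w-measure eta (an assumption modelled on [GR09], [FJLR15]):
% each block u_i is distributed according to the normalised restriction of nu_i
% to the chosen collection C_i, while each bridge word w_i is determined by its
% neighbouring blocks and so carries full conditional mass.
\[
  \eta\big([u_1, w_1, u_2, \dots, w_{n-1}, u_n]\big)
  \;=\;
  \prod_{i=1}^{n}
  \frac{\nu_i([u_i])}{\nu_i\big(\bigcup_{u \in \mathcal{C}_i} [u]\big)},
  \qquad u_i \in \mathcal{C}_i .
\]
```

The independence of the blocks under η is what makes the measure behave like a product measure, so that Birkhoff averages along a generic point of η track the successive ν_i.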
We now specify the other conditions we want our m_n to satisfy. Let (ε_n)_{n∈N} be a sequence of positive numbers decreasing to zero. Since each f_i is uniformly continuous, for each n ∈ N there exists M_n ∈ N such that (6.4) holds for all ω, ω′ ∈ Σ and all 0 ≤ i ≤ n. By Birkhoff's ergodic theorem, for any δ′_n > 0 there exists M′′_n such that (6.5) holds. For f_0 we can further relate the Birkhoff averages to the diameters of basic intervals: by Lemma 3.3 there exists M′′′_n such that (6.6) holds for all m ≥ M′′′_n and all ω ∈ K_n. We let M*_n = max{M_n, M′_n, M′′_n, M′′′_n}. Taking each n in turn, choose m_n large enough so that (6.7) holds and m_n ≥ M*_n. Let C_n be the collection of m_n-level cylinders formed by truncating the points in the intersection of the sets in (6.4) and (6.5). Let G be the set of points of the form (u_1, w_1, u_2, w_2, . . .), where the u_i ∈ C_i and the w_i are the bridge words defined earlier. Choosing δ_n, δ′_n small enough, we can ensure η(G) > 0.
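The Shannon–McMillan–Breiman control entering the choice of the collections C_n can be sketched as follows (the precise thresholds, and that this is the role of M′_n, are our assumption):

```latex
% Sketch of the Shannon--McMillan--Breiman condition from the proof outline:
% for the ergodic measure nu_n, cylinder masses are eventually exponentially
% close to exp(-m h_{nu_n}), off a set of small nu_n-measure.
\[
  \exp\!\big(-m\,(h_{\nu_n} + \delta_n)\big)
  \;\le\;
  \nu_n\big([\omega_1, \dots, \omega_m]\big)
  \;\le\;
  \exp\!\big(-m\,(h_{\nu_n} - \delta_n)\big)
\]
for all sufficiently large $m$ and all $\omega$ outside a set of
$\nu_n$-measure at most $\delta_n$.
```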
We now show that (6.9) holds. Given ω ∈ G, we may write it in the form ω = (u_1, w_1, u_2, w_2, . . . , w_{n−1}, u_n, . . .), where the bridge words w_i have length N*_i. We claim that (6.10), (6.11) and (6.12) hold for each k ∈ N and all m ≥ M*_k, where

A_l^m f := (1/(m − l + 1)) Σ_{i=l−1}^{m−1} f ∘ σ^i

is the partial Birkhoff average of f between terms l and m. Notice that (6.9) follows immediately from these statements. In particular, G ⊆ Σ_R(γ) follows directly from (6.10), and the other inclusion can be proved from (6.10), (6.11), (6.12) and the elementary fact that if a_n, b_n > 0 and Σ_n b_n = ∞, then a_n/b_n → L implies Σ_{i=1}^n a_i / Σ_{i=1}^n b_i → L.

Let us prove each of these statements. For the first, we can find ω′ in the set in (6.5), with n = k, such that (ω′_1, . . . , ω′_{m_k}) = u_k; then (6.10) follows by (6.3), (6.5) and (6.7) for any i ≤ k. The second follows by (6.8). Finally, for the third we can find ω′ ∈ K_k \ Π^{−1}(∪_{n=0}^∞ T^{−n} E) such that ω′|_1^{m+M_k} = ω|_1^{m+M_k}; then (6.12) follows by (6.3) and (6.6). Notice that by (6.10) and (6.12) we also have (6.13).

We are now ready to apply the mass distribution principle. Given ε > 0, we can find M ∈ N and a subset S ⊂ G with η(S) > 0 such that for all m ≥ M and all ω ∈ S,

sup_{ω′,ω′′∈G} |(1/m) log diam(C_m(ω′)) − (1/m) log diam(C_m(ω′′))| < ε, (6.14)

and (6.15) holds. Given r > 0, let m be the smallest positive integer such that diam(C_m(ω)) < r for all ω ∈ S. Clearly m ≥ M + 1. We also have that

m ≤ −log r / log ζ + 1, (6.18)

since by the definition of m there exists ω′ ∈ S with diam(C_{m−1}(ω′)) ≥ r. For this ω′ ∈ S the corresponding bounds hold, and so they hold for all ω ∈ S, where the third inequality follows from (6.3) and (6.17). Hence for any x ∈ [0, 1], the ball B_r(x) overlaps Π(S) at no more than exp(4mε) + 2 m-th level intervals. This is less than exp(5mε) provided our M was chosen large enough. Thus by equations (6.16) and (6.18) the mass distribution principle applies, giving the required lower bound.

Remark 6.5. If desired, we could instead have used bridge words which depend on u_1 ∈ C_{m_1}(Σ), . . . , u_{i+1} ∈ C_{m_{i+1}}(Σ) and have length 2N_i + i.
This would make the measure accumulate on a subset of R. Therefore, in Theorem 2.1 and Theorem 2.2 we could alternatively take the intersection with this set. Note that this is a commonly used notion of recurrence (for example, the one considered in [Iom05]).
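The elementary fact used in the proof of Lemma 6.4 to pass from block averages to full Birkhoff averages is a Stolz–Cesàro-type statement; for completeness, here is a short proof.

```latex
% Stolz--Cesaro-type lemma: ratios of partial sums inherit the limit of the
% termwise ratios, provided the denominators have divergent sum.
\textbf{Fact.} If $a_n, b_n > 0$, $\sum_n b_n = \infty$ and $a_n/b_n \to L$,
then $\sum_{i=1}^{n} a_i \big/ \sum_{i=1}^{n} b_i \to L$.

\textit{Proof.} Given $\varepsilon > 0$, choose $N$ with
$(L - \varepsilon)\, b_n \le a_n \le (L + \varepsilon)\, b_n$ for all $n > N$.
Summing over $N < i \le n$ and writing $B_n := \sum_{i=1}^{n} b_i$,
\[
  \frac{\sum_{i=1}^{N} a_i}{B_n} + (L - \varepsilon)\,\frac{B_n - B_N}{B_n}
  \;\le\;
  \frac{\sum_{i=1}^{n} a_i}{B_n}
  \;\le\;
  \frac{\sum_{i=1}^{N} a_i}{B_n} + (L + \varepsilon)\,\frac{B_n - B_N}{B_n}.
\]
Since $B_n \to \infty$, the first terms vanish and $(B_n - B_N)/B_n \to 1$,
so the ratio is eventually within $2\varepsilon$ of $L$. \qed
```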

Proof of Theorem 2.2
In this section we assume that the φ_i are also bounded and that the hypotheses of Theorem 2.2 hold. Before proving Theorem 2.2, we first prove Lemma 5.7, whose proof was deferred when proving Proposition 5.2. This follows immediately from the following two lemmas.
Lemma 7.1. There exists a sequence of measures µ_n ∈ M(Λ, T) such that lim_{n→∞} λ_{µ_n} = ∞ and lim_{n→∞} h_{µ_n}/λ_{µ_n} = s_∞.

Proof. We adapt the proof of Lemma 2.5 in [FJLR15] to our setting. First suppose s_∞ > 0. Let (ε_n)_{n∈N} be a positive sequence converging to zero. Note that for any µ ∈ M(Λ, T) such that h_µ/λ_µ ≥ s_∞ + 2ε_n we have λ_µ ≤ P(−(s_∞ + ε_n) log|T′|)/ε_n. Now take two positive sequences (t_n)_{n∈N} and (k_n)_{n∈N} such that for each n, t_n < s_∞, lim_{n→∞} t_n = s_∞ and lim_{n→∞} k_n = ∞. Since for all n ∈ N we have P(−t_n log|T′|) = ∞, we can find a sequence of measures µ_n ∈ M(Λ, T) such that h_{µ_n} − t_n λ_{µ_n} > k_n and hence h_{µ_n}/λ_{µ_n} > t_n. Furthermore, by the fact that λ_{µ_n} ≥ h_{µ_n} (see Remark 3.5), we have λ_{µ_n} > k_n.
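The pressure argument above rests on the variational principle for the pressure of −t log|T′|; the following is the form we have in mind (the finiteness restriction λ_µ < ∞ in the supremum is our assumption on the precise formulation):

```latex
% Variational principle for pressure, as used above: infinite pressure forces
% the existence of invariant measures with h - t*lambda arbitrarily large.
\[
  P\big(-t \log |T'|\big)
  \;=\;
  \sup\Big\{\, h_\mu - t\,\lambda_\mu
  \;:\; \mu \in M(\Lambda, T),\ \lambda_\mu < \infty \,\Big\}.
\]
In particular, $P(-t_n \log |T'|) = \infty$ yields measures $\mu_n$ with
$h_{\mu_n} - t_n \lambda_{\mu_n} > k_n$.
```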
Proof. This follows in the same way as the proof of Lemma 6.4 in [FJLR15]. Let t ∈ R be such that s_∞ < t < s_∞ + δ. By the variational principle we have h_µ − tλ_µ ≤ P(−t log|T′|). Since t > s_∞, we have P(−t log|T′|) < ∞, and so h_µ/λ_µ ≤ t + P(−t log|T′|)/λ_µ. The next lemma shows that α_1(γ) ≥ s_∞ for any γ ∈ Z. As Theorem 2.1 is already proved, from this it follows that dim Λ_R(γ) = max{s_∞, α_1(γ)}.
Lemma 7.3. Let γ ∈ Z, k ∈ N, ε > 0 and let µ ∈ M(Λ, T) be such that the relevant conditions hold. Then there exists a measure ν ∈ M(Λ, T) satisfying the conclusion.

Proof. The proof follows as in Lemma 5.1 in [FJLR15]. Let ε > 0 and let A = sup_{1≤i≤k} sup_{x∈Λ} |φ_i(x)|. By Lemma 7.1 we can find a sequence of measures µ_n ∈ M(Λ, T) such that lim_{n→∞} λ_{µ_n} = ∞ and lim_{n→∞} h_{µ_n}/λ_{µ_n} = s_∞. Consider a suitable convex combination of µ and µ_n. Then we can estimate, for each 1 ≤ i ≤ k, the change in ∫ φ_i dµ.
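One natural candidate for the measure considered above, following [FJLR15, Lemma 5.1], is a convex combination; the weight ε and the name ν_n below are our choice of notation, not taken from the source.

```latex
% Hypothetical form of the averaged measure (notation assumed): mixing in a
% small amount of mu_n perturbs the integrals of the bounded phi_i by at most
% 2*eps*A, while driving the Lyapunov exponent up with mu_n.
\[
  \nu_n := (1 - \varepsilon)\,\mu + \varepsilon\,\mu_n,
  \qquad
  \Big|\int \varphi_i \, d\nu_n - \int \varphi_i \, d\mu \Big|
  \;=\;
  \varepsilon \Big|\int \varphi_i \, d\mu_n - \int \varphi_i \, d\mu \Big|
  \;\le\; 2\,\varepsilon A
\]
for each $1 \le i \le k$, since $A$ bounds each $|\varphi_i|$ on $\Lambda$.
```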
Applying Theorem 1.2 in [IV19] to the measures ν_n := µ_n ∘ Π, there exists a measure ν ∈ M_{≤1}(Σ, σ) and a subsequence (ν_{n_j})_{j∈N} such that ν_{n_j} converges to ν in the topology of convergence on cylinders. Without loss of generality we may assume that ν_n converges to this measure. Let µ := Π_*ν. By Lemma 4.3 and (8.1) the required limits hold, which shows the first claim. Furthermore, since log|T′ ∘ Π| − L ∈ C_0(Λ) and by Theorem 4.5, the Lyapunov exponents also converge.

We now prove the inequality α_4(γ) ≤ α_1(γ). Let η > 0 and choose µ ∈ M_{≤1}(Λ, T) realising the supremum defining α_4(γ) up to η. We can find a sequence of measures ν_n ∈ M(Σ, σ) converging to the zero measure on cylinders such that h_{ν_n} → δ_∞. Consider the measures µ_n = µ + (1 − |µ|)Π_*ν_n. Notice that µ_n ∘ Π converges to µ ∘ Π on cylinders. By the affinity of the entropy map (see [Wal81, Theorem 8.1], noting that it also holds in this noncompact setting), we can decompose h_{µ_n} accordingly. Hence, as h_{ν_n} → δ_∞ and ∫ (log|T′ ∘ Π| − L) dν_n → 0, for all n sufficiently large the required entropy and Lyapunov exponent bounds hold. Moreover, by (8.2), for any ε > 0 and k ∈ N and all n sufficiently large the corresponding integral estimates hold. We may choose ε > 0 small enough and k ∈ N large enough so that these estimates combine. Putting this together, with n large enough we obtain the desired inequality. This concludes the proof of Lemma 8.1.
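The affinity of the entropy map invoked above takes the following form; the application to h_{µ_n} is our reading of the argument.

```latex
% Affinity of the entropy map: entropy is linear along convex combinations of
% invariant measures.
\[
  h_{t \mu_1 + (1-t) \mu_2} \;=\; t\, h_{\mu_1} + (1-t)\, h_{\mu_2},
  \qquad t \in [0,1].
\]
Applied to $\mu_n = \mu + (1 - |\mu|)\,\Pi_* \nu_n$, writing
$\mu = |\mu| \cdot (\mu / |\mu|)$, this gives
\[
  h_{\mu_n} \;=\; |\mu|\, h_{\mu / |\mu|} + (1 - |\mu|)\, h_{\Pi_* \nu_n}.
\]
```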
A major difference between the two cases is that if log|T′| is as in Theorem 2.2, then for γ ∈ Z with Σ_{i=1}^∞ γ_i < 1 the dimension dim Λ(γ) can take only one value, whereas if log|T′| is as in Theorem 2.4 it can take a range of values.
9.3 Cases when α_4(γ) = α_3(γ)

It is natural to ask whether the supremum α_4(γ) in Theorem 2.4 can instead be written as a supremum over probability measures; that is, when does α_4(γ) = α_3(γ)? Notice that this holds in the frequency-of-digits case whenever the sum of the frequencies equals one. Corollary 2.4 in [IJT17] gives other circumstances where this is the case. Note that when we have only one function φ : Λ → R, the set {γ ∈ R : ∃µ ∈ M(Λ, T), ∫ φ dµ = γ} is an interval, with endpoints γ_m, γ_M say. An application of their results, which they prove in much greater generality, says that when we have only one function φ ∈ C_0(Λ) and both f_φ and f_{log|T′|} are moreover locally Hölder continuous, then for γ ∈ (γ_m, γ_M),

dim Λ(γ) = sup_{µ∈M(Λ,T)} { h_µ/λ_µ : ∫ φ dµ = γ }.
We prove the following theorem, which says that this holds whenever we have finitely many functions, and that it is not necessary to assume in addition that f_{φ_i} and f_{log|T′|} are locally Hölder continuous.