
Dimensions of Points in Self-Similar Fractals

2008, SIAM Journal on Computing


Dimensions of Points in Self-Similar Fractals

Jack H. Lutz∗  Elvira Mayordomo†

Abstract

Self-similar fractals arise as the unique attractors of iterated function systems (IFSs) consisting of finitely many contracting similarities satisfying an open set condition. Each point x in such a fractal F arising from an IFS S is naturally regarded as the “outcome” of an infinite coding sequence T (which need not be unique) over the alphabet Σ_k = {0, . . . , k − 1}, where k is the number of contracting similarities in S. A classical theorem of Moran (1946) and Falconer (1989) states that the Hausdorff and packing dimensions of a self-similar fractal coincide with its similarity dimension, which depends only on the contraction ratios of the similarities. The theory of computing has recently been used to provide a meaningful notion of the dimensions of individual points in Euclidean space. In this paper, we use (and extend) this theory to analyze the dimensions of individual points in fractals that are computably self-similar, meaning that they are unique attractors of IFSs that are computable and satisfy the open set condition. Our main theorem states that, if F ⊆ R^n is any computably self-similar fractal and S is any IFS testifying to this fact, then the dimension identities

  dim(x) = sdim(F) dim^{π_S}(T)

∗ Department of Computer Science, Iowa State University, Ames, IA 50011 USA. [email protected]. Research supported in part by National Science Foundation Grants 0344187, 0652569, and 0728806 and by Spanish Government MEC Project TIN 2005-08832-C03-02. Part of this author’s research was performed during a sabbatical at the University of Wisconsin and two visits at the University of Zaragoza.
† Departamento de Informática e Ingeniería de Sistemas, María de Luna 1, Universidad de Zaragoza, 50018 Zaragoza, SPAIN. [email protected]. Research supported in part by Spanish Government MEC Project TIN 2005-08832-C03-02.
and

  Dim(x) = sdim(F) Dim^{π_S}(T)

hold for all x ∈ F and all coding sequences T for x. In these equations, sdim(F) denotes the similarity dimension of the fractal F; dim(x) and Dim(x) denote the dimension and strong dimension, respectively, of the point x in Euclidean space; and dim^{π_S}(T) and Dim^{π_S}(T) denote the dimension and strong dimension, respectively, of the coding sequence T relative to a probability measure π_S that the IFS S induces on the alphabet Σ_k. The above-mentioned theorem of Moran and Falconer follows easily from our main theorem by relativization. Along the way to our main theorem, we develop the elements of the theory of constructive dimensions relative to general probability measures. The proof of our main theorem uses Kolmogorov complexity characterizations of these dimensions.

Keywords: Billingsley dimension, computability, constructive dimension, fractal geometry, geometric measure theory, iterated function system, Hausdorff dimension, packing dimension, self-similar fractal.

1 Introduction

The theory of computing has recently been used to formulate effective versions of Hausdorff dimension and packing dimension, the two most important dimensions in geometric measure theory [34, 35, 11, 1]. These effective fractal dimensions have already produced quantitative insights into many aspects of algorithmic randomness, Kolmogorov complexity, computational complexity, data compression, and prediction [26]. They are also beginning to yield results in geometric measure theory itself [20]. The most fundamental effective dimensions are the constructive dimension [35] and the constructive strong dimension [1]. These two constructive dimensions (which are defined explicitly in section 3 below) are the constructive versions of the Hausdorff and packing dimensions, respectively.
For each set X of (infinite) sequences over a finite alphabet Σ, the constructive dimension cdim(X) and the constructive strong dimension cDim(X) are real numbers satisfying the inequalities

  0 ≤ dim_H(X) ≤ cdim(X) ≤ cDim(X) ≤ 1  and  dim_H(X) ≤ Dim_P(X) ≤ cDim(X),

where dim_H(X) and Dim_P(X) are the Hausdorff and packing dimensions, respectively, of X. These constructive dimensions are exact analogs of the constructive Lebesgue measure that Martin-Löf used to formulate algorithmic randomness [36]. As such, they are endowed with universality properties that make them especially well behaved. For example, unlike the other effective dimensions, and unlike their classical counterparts, the constructive dimensions are absolutely stable, meaning that the dimension of any union (countable or otherwise) of sets is the supremum of the dimensions of the individual sets. In particular, this implies that, if we define the dimension and strong dimension of an individual sequence S ∈ Σ^∞ to be

  dim(S) = cdim({S})   (1.1)

and

  Dim(S) = cDim({S}),   (1.2)

respectively, then the constructive dimensions of any set X ⊆ Σ^∞ are determined by the equations

  cdim(X) = sup_{S∈X} dim(S)   (1.3)

and

  cDim(X) = sup_{S∈X} Dim(S).   (1.4)

Constructive dimensions are thus investigated entirely in terms of the dimensions of individual sequences. The two constructive dimensions also admit precise characterizations in terms of Kolmogorov complexity [37, 1]. Specifically, for any sequence S ∈ Σ^∞,

  dim(S) = liminf_{j→∞} K(S[0..j − 1]) / (j log |Σ|)   (1.5)

and

  Dim(S) = limsup_{j→∞} K(S[0..j − 1]) / (j log |Σ|),   (1.6)

where K(S[0..j − 1]) denotes the Kolmogorov complexity of the j-symbol prefix of S [31] and the logarithm is base 2. Since K(w) measures the algorithmic information content (in bits) of a string w, (1.5) and (1.6) say that dim(S) and Dim(S) are asymptotic measures of the algorithmic information density of S.
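Since K(w) is uncomputable, the information-density characterizations (1.5) and (1.6) cannot be evaluated directly. A standard computable stand-in is to replace K(w) by a compressed length, which upper-bounds K(w) up to an additive constant. The sketch below is our own illustration (not part of the paper; `zlib` is an arbitrary choice of compressor); it contrasts the density proxy of a highly regular sequence with that of a random-looking one:

```python
import os
import zlib
from math import log2

def density_proxy(s: str, alphabet_size: int = 2) -> float:
    """Crude upper-bound proxy for K(w) / (|w| log |Sigma|).

    K(w) itself is uncomputable; zlib's compressed length only bounds it
    from above (up to additive constants), so this illustrates the idea
    behind (1.5)-(1.6) rather than computing dim(S).
    """
    compressed_bits = 8 * len(zlib.compress(s.encode(), 9))
    return compressed_bits / (len(s) * log2(alphabet_size))

# A highly regular sequence compresses extremely well; a random-looking
# one does not (a byte-oriented compressor may even exceed density 1).
periodic = "01" * 5000
random_ish = "".join(format(b, "08b") for b in os.urandom(1250))
assert density_proxy(periodic) < 0.1 < density_proxy(random_ish)
```

The gap between the two proxies mirrors the gap between the dimensions of a computable sequence (dimension 0) and a Martin-Löf random one (dimension 1).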
Although the constructive dimensions have primarily been investigated in sequence spaces Σ^∞, they work equally well in Euclidean spaces R^n. One of several equivalent ways to achieve this is to fix a base k ≥ 2 in which to expand the coordinates of each point x = (x_1, . . . , x_n) ∈ R^n. If the expansions of the fractional parts of these coordinates are S_1, . . . , S_n ∈ Σ_k^∞, respectively, where Σ_k = {0, . . . , k − 1}, and if S is the interleaving of these sequences, i.e.,

  S = S_1[0]S_2[0] . . . S_n[0]S_1[1]S_2[1] . . . S_n[1]S_1[2]S_2[2] . . . ,

then the dimension of the point x is

  dim(x) = n dim(S),   (1.7)

and the strong dimension of x is

  Dim(x) = n Dim(S).   (1.8)

If one or more of the coordinates of x have two base-k expansions (because they are rationals whose denominators are powers of k), it is easily seen that the numbers dim(x) and Dim(x) are unaffected by how we choose between these base-k expansions. Also, a theorem of Staiger [45], in combination with (1.5) and (1.6), implies that dim(x) and Dim(x) do not depend on the choice of the base k. The dimension and strong dimension of a point x ∈ R^n are thus properties of the point x itself, not properties of a particular encoding. Clearly, 0 ≤ dim(x) ≤ Dim(x) ≤ n for all x ∈ R^n. In fact, this is the only restriction that holds in general, i.e., for any two real numbers 0 ≤ α ≤ β ≤ n, there is a point x in R^n with dim(x) = α and Dim(x) = β [1]. The theory of computing thus assigns a dimension dim(x) and a strong dimension Dim(x) to each point x in Euclidean space. This assignment is robust (i.e., several natural approaches all lead to the same assignment), but is it geometrically meaningful? Prior work already indicates an affirmative answer.
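As a concrete illustration of the encoding just described (a finite-precision sketch of our own; actual points have infinite expansions), the following interleaves base-k digit prefixes of the coordinates' fractional parts:

```python
def interleave(*seqs):
    """Interleave n digit sequences S1,...,Sn as S1[0]S2[0]...Sn[0]S1[1]...

    This is the encoding under which the paper sets dim(x) = n * dim(S);
    finite prefixes here stand in for the infinite base-k expansions.
    """
    return [s[i] for i in range(len(seqs[0])) for s in seqs]

def base_k_prefix(x: float, k: int, length: int):
    """First `length` base-k digits of the fractional part of x."""
    digits, frac = [], x - int(x)
    for _ in range(length):
        frac *= k
        d = int(frac)
        digits.append(d)
        frac -= d
    return digits

S1 = base_k_prefix(0.5, 2, 8)   # 0.5  = 0.1000... in base 2
S2 = base_k_prefix(0.25, 2, 8)  # 0.25 = 0.0100... in base 2
S = interleave(S1, S2)
assert S[:6] == [1, 0, 0, 1, 0, 0]
```

For n = 2 coordinates, every digit of S carries information about x at half the rate, which is why (1.7) and (1.8) rescale by the factor n.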
By Hitchcock’s correspondence principle for constructive dimension ([25], extending a result of [44]), together with the absolute stability of constructive dimension [35], if X ⊆ R^n is any countable (not necessarily effective) union of computably closed, i.e., Π^0_1, sets, then cdim(X) = dim_H(X). Putting this together with (1.3) and (1.7), we have

  dim_H(X) = sup_{x∈X} dim(x)   (1.9)

for any set X that is a union of computably closed sets. That is, the classical Hausdorff dimension [15] of any such set is completely determined by the dimensions of its individual points. Many, perhaps most, of the sets which arise in “standard” mathematical practice are unions of computably closed sets, so (1.9) constitutes strong prima facie evidence that the dimensions of individual points are indeed geometrically meaningful.

This paper analyzes the dimensions of points in the most widely known type of fractals, the self-similar fractals. The class of self-similar fractals includes such famous members as the Cantor set, the von Koch curve, the Sierpinski triangle, and the Menger sponge, along with many more exotic sets in Euclidean space [2, 12, 13, 15]. A self-similar fractal (defined precisely in section 5 below) is constructed from an iterated function system (IFS) S = (S_0, . . . , S_{k−1}), which is a list of two or more contracting similarities mapping an initial nonempty, closed set D ⊆ R^n into itself. Each set S_i(D) is a strictly smaller “copy” of D inside of D, and each set S_i(S_j(D)) is a strictly smaller “copy” of S_j(D) inside of S_i(D). Continuing in this way, each sequence T ∈ Σ_k^∞ encodes a nested sequence

  D ⊇ S_{T[0]}(D) ⊇ S_{T[0]}(S_{T[1]}(D)) ⊇ . . .   (1.10)

of nonempty, closed sets in R^n. Each of these sets is strictly smaller than the one preceding it, because each similarity S_i has a contraction ratio c_i ∈ (0, 1). Hence there is a unique point S(T) ∈ R^n that is an element of all the sets in (1.10).
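The convergence argument above can be made concrete. The sketch below is our own illustration (the vertex placement of the Sierpinski triangle's three similarities is an arbitrary choice); it approximates S(T) by composing the first `depth` maps named by T, using the fact that the nested sets in (1.10) have geometrically shrinking diameters:

```python
from math import sqrt

# Illustrative IFS for the Sierpinski triangle: three similarities with
# contraction ratio 1/2 toward the triangle's vertices.
V = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)]

def apply_map(i, p):
    """S_i(p) = (p + V[i]) / 2 : contract by 1/2 toward vertex i."""
    return ((p[0] + V[i][0]) / 2, (p[1] + V[i][1]) / 2)

def point_from_coding(T, depth=40, start=(0.0, 0.0)):
    """Approximate S(T): apply S_{T[0]} o ... o S_{T[depth-1]} to a seed.

    The sets in (1.10) shrink like 2**-m here, so the result converges to
    the unique point coded by T; depth=40 gives ~1e-12 accuracy.
    """
    p = start
    for i in reversed(T[:depth]):   # innermost map is applied first
        p = apply_map(i, p)
    return p

# The constant sequence 000... codes the fixed point of S_0, vertex V[0],
# and 111... codes vertex V[1].
x = point_from_coding([0] * 40)
y = point_from_coding([1] * 40)
assert abs(x[0]) < 1e-9 and abs(x[1]) < 1e-9
assert abs(y[0] - 1.0) < 1e-9 and abs(y[1]) < 1e-9
```

Note the composition order: S(T) applies S_{T[0]} outermost, so the loop applies the maps named by the prefix of T from the inside out.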
Figure 1 illustrates how a coding sequence T represents a point S(T) in the Sierpinski triangle. The attractor of the IFS S is the set

  F(S) = {S(T) | T ∈ Σ_k^∞}.   (1.11)

In general, the sets S_0(D), . . . , S_{k−1}(D) need not be disjoint, so a point x ∈ F(S) may have many coding sequences, i.e., many sequences T for which S(T) = x. A self-similar fractal is a set F ⊆ R^n that is the attractor of an IFS S that satisfies a technical open set condition (defined in section 5), which ensures that the sets S_0(D), . . . , S_{k−1}(D) are “nearly” disjoint. The similarity dimension of an IFS S is the (unique) solution sdim(S) of the equation

  ∑_{i=0}^{k−1} c_i^{sdim(S)} = 1,   (1.12)

where c_0, . . . , c_{k−1} are the contraction ratios of the similarities S_0, . . . , S_{k−1}, respectively. The similarity dimension of a self-similar fractal F = F(S) is sdim(F) = sdim(S).

[Figure 1. A sequence T ∈ {0, 1, 2}^∞ codes a point S(T) in the Sierpinski triangle.]

A classical theorem of Moran [38] and Falconer [14] says that, for any self-similar fractal F,

  dim_H(F) = Dim_P(F) = sdim(F),   (1.13)

i.e., the Hausdorff and packing dimensions of F coincide with its similarity dimension. In addition to its theoretical interest, the Moran-Falconer theorem has the pragmatic consequence that the Hausdorff and packing dimensions of a self-similar fractal are easily computed from the contraction ratios by solving equation (1.12). Our main theorem concerns the dimensions of points in fractals that are computably self-similar, meaning that they are attractors of computable iterated function systems satisfying the open set condition. (We note that most self-similar fractals occurring in practice – including the four famous examples mentioned above – are, in fact, computably self-similar.)
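Equation (1.12) is easy to solve numerically, since f(s) = ∑_i c_i^s is strictly decreasing in s with f(0) = k ≥ 2. The bisection sketch below is our own illustration of the pragmatic consequence just mentioned, checked against the Sierpinski triangle and the middle-thirds Cantor set:

```python
from math import log, isclose

def similarity_dimension(ratios, lo=0.0, hi=None, tol=1e-12):
    """Solve sum_i c_i**s == 1 for s, equation (1.12), by bisection.

    f(s) = sum c_i**s is strictly decreasing (each c_i is in (0, 1)),
    f(0) = k >= 2 > 1, and f(s) -> 0 as s grows, so the root is unique.
    """
    f = lambda s: sum(c ** s for c in ratios)
    if hi is None:                       # grow an upper bracket
        hi = 1.0
        while f(hi) > 1:
            hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 1 else (lo, mid)
    return (lo + hi) / 2

# Sierpinski triangle: three maps of ratio 1/2 -> sdim = log 3 / log 2.
assert isclose(similarity_dimension([0.5, 0.5, 0.5]), log(3) / log(2), rel_tol=1e-9)
# Middle-thirds Cantor set: two maps of ratio 1/3 -> sdim = log 2 / log 3.
assert isclose(similarity_dimension([1/3, 1/3]), log(2) / log(3), rel_tol=1e-9)
```

By (1.13), these numbers are also the Hausdorff and packing dimensions of the corresponding fractals.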
Our main theorem says that, if F is any fractal that is computably self-similar with the IFS S as witness, then, for every point x ∈ F and every coding sequence T for x, the dimension and strong dimension of the point x are given by the dimension formulas

  dim(x) = sdim(F) dim^{π_S}(T)   (1.14)

and

  Dim(x) = sdim(F) Dim^{π_S}(T),   (1.15)

where dim^{π_S}(T) and Dim^{π_S}(T) are the dimension and strong dimension of T with respect to the probability measure π_S on the alphabet Σ_k defined by

  π_S(i) = c_i^{sdim(F)}   (1.16)

for all i ∈ Σ_k. (We note that dim^{π_S}(T) is the constructive analog of Billingsley dimension [3, 9].) This theorem gives a complete analysis of the dimensions of points in computably self-similar fractals and the manner in which the dimensions of these points arise from the dimensions of their coding sequences.

Although our main theorem only applies directly to computably self-similar fractals, we use relativization to show that the Moran-Falconer theorem (1.13) for arbitrary self-similar fractals is an easy consequence of our main theorem. Hence, as is often the case, a theorem of computable analysis (i.e., the theoretical foundations of scientific computing [6]) has an immediate corollary in classical analysis. The proof of our main theorem has some geometric and combinatorial similarities with the classical proofs of Moran [38] and Falconer [14], but the argument here is information-theoretic. Specifically, our proof uses Kolmogorov complexity characterizations of dimensions with respect to probability measures. These characterizations (proven in section 4 below) say that, if ν is a suitable probability measure on a sequence space Σ^∞, then, for every sequence S ∈ Σ^∞,

  dim^ν(S) = liminf_{j→∞} K(S[0..j − 1]) / I_ν(S[0..j − 1])   (1.17)

and

  Dim^ν(S) = limsup_{j→∞} K(S[0..j − 1]) / I_ν(S[0..j − 1]),   (1.18)

where I_ν(w) = log(1/ν(w)) is the Shannon self-information of the string w with respect to the probability measure ν [10].
The older characterizations (1.5) and (1.6) are the special cases of (1.17) and (1.18) in which ν(w) = |Σ|^{−|w|} for all w ∈ Σ^*. The characterizations (1.17) and (1.18) say that dim^ν(S) and Dim^ν(S) are asymptotic measures of the algorithmic information density of S, but the “density” here is now an information-to-cost ratio. In this ratio, the “information” is algorithmic information, i.e., Kolmogorov complexity, and the “cost” is the Shannon self-information. To see why this makes sense, consider the case of interest in our main theorem. In this case, (1.16) says that

  ν(w) = ∏_{j=0}^{|w|−1} c_{w[j]}^{sdim(F)},

whence the cost of a string w ∈ Σ_k^* is

  I_ν(w) = sdim(F) ∑_{j=0}^{|w|−1} log(1/c_{w[j]}),

i.e., the sum of the costs of the symbols in w, where the cost of a symbol i ∈ Σ_k is sdim(F) log(1/c_i). These symbol costs are computational and realistic. A symbol i with high cost invokes a similarity S_i with a small contraction ratio c_i, thereby necessitating a high-precision computation.

We briefly mention some other recent research on fractals in theoretical computer science. Braverman and Cook [5, 6] have used computability and complexity of various fractals to explore the relationships between the two main models of real computation. Rettinger and Weihrauch [40], Hertling [23], and Braverman and Yampolsky [7] have investigated computability and complexity properties of Mandelbrot and Julia sets. Gupta, Krauthgamer, and Lee [21] have used fractal geometry to prove lower bounds on the distortions of certain embeddings of metric spaces. Most of the fractals involved in these papers are more exotic than the self-similar fractals that we investigate here. Cai and Hartmanis [8] and Fernau and Staiger [17] have investigated Hausdorff dimension in self-similar fractals and their coding spaces. This work is more closely related to the present paper, but the motivations and results are different. Our focus here is on a pointwise analysis of dimensions.
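The measure π_S of (1.16) and the per-symbol costs sdim(F) log(1/c_i) can be tabulated directly from the contraction ratios. The sketch below is our own illustration (the Sierpinski ratios and the hand-picked string w are examples); it checks that π_S is a probability measure and computes the self-information cost of a coding string:

```python
from math import log, log2, isclose

def coding_measure_and_costs(ratios, sdim):
    """pi_S(i) = c_i**sdim from (1.16), and the per-symbol costs
    sdim * log(1/c_i) that appear in the self-information I_nu(w)."""
    pi = [c ** sdim for c in ratios]
    cost = [sdim * log2(1 / c) for c in ratios]
    return pi, cost

# Sierpinski triangle: ratios (1/2, 1/2, 1/2), sdim = log 3 / log 2.
sdim = log(3) / log(2)
pi, cost = coding_measure_and_costs([0.5, 0.5, 0.5], sdim)
assert all(isclose(p, 1/3) for p in pi)   # pi_S is uniform on Sigma_3
assert isclose(sum(pi), 1.0)              # pi_S is a probability measure

# I_nu(w) is the sum of the symbol costs of w; with equal ratios every
# symbol costs sdim * log2(2) = log2(3) bits.
w = [0, 2, 1, 1]
I_w = sum(cost[i] for i in w)
assert isclose(I_w, len(w) * log2(3))
```

For unequal contraction ratios the symbol costs differ, and (1.17) weighs the Kolmogorov complexity of a prefix against this variable cost rather than against its plain length.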
Some of the most difficult open problems in geometric measure theory involve establishing lower bounds on the fractal dimensions of various sets. Kolmogorov complexity has proven to be a powerful tool for lower-bound arguments, leading to the solution of many long-standing open problems in discrete mathematics [31]. There is thus reason to hope that our pointwise approach to fractal dimension, coupled with the introduction of Kolmogorov complexity techniques, will lead to progress in this classical area. In any case, our results extend computable analysis [39, 28, 52] in a new, geometric direction.

The rest of this paper is organized as follows. Section 2 summarizes basic terminology and notation. Section 3 develops the basic theory of constructive dimensions with respect to probability measures. Section 4 establishes the Kolmogorov complexity characterizations of these dimensions. Section 5 is a brief exposition of self-similar fractals for readers who are not familiar with iterated function systems. Section 6 proves our main theorem and derives the Moran-Falconer theorem from it.

2 Preliminaries

Given a finite alphabet Σ, we write Σ^* for the set of all (finite) strings over Σ and Σ^∞ for the set of all (infinite) sequences over Σ. If ψ ∈ Σ^* ∪ Σ^∞ and 0 ≤ i ≤ j < |ψ|, where |ψ| is the length of ψ, then ψ[i] is the ith symbol in ψ (where ψ[0] is the leftmost symbol in ψ), and ψ[i..j] is the string consisting of the ith through the jth symbols in ψ. If w ∈ Σ^* and ψ ∈ Σ^* ∪ Σ^∞, then w is a prefix of ψ, and we write w ⊑ ψ, if there exists i ∈ N such that w = ψ[0..i − 1]. If A ⊆ Σ^*, then A^{=n} = {x ∈ A | |x| = n}.

For functions on Euclidean space, we use the computability notion formulated by Grzegorczyk [19] and Lacombe [29] in the 1950s and exposited in the monographs by Pour-El and Richards [39], Ko [28], and Weihrauch [52] and in the recent survey paper by Braverman and Cook [6].
A function f : R^n → R^n is computable if there is an oracle Turing machine M with the following property. For all x ∈ R^n and r ∈ N, if M is given a function oracle φ_x : N → Q^n such that, for all k ∈ N, |φ_x(k) − x| ≤ 2^{−k}, then M, with oracle φ_x and input r, outputs a rational point M^{φ_x}(r) ∈ Q^n such that |M^{φ_x}(r) − f(x)| ≤ 2^{−r}. A point x ∈ R^n is computable if there is a computable function ψ_x : N → Q^n such that, for all r ∈ N, |ψ_x(r) − x| ≤ 2^{−r}.

For subsets of Euclidean space, we use the computability notion introduced by Brattka and Weihrauch [4] (see also [52, 6]). A set X ⊆ R^n is computable if there is a computable function f_X : Q^n × N → {0, 1} that satisfies the following two conditions for all q ∈ Q^n and r ∈ N.

(i) If there exists x ∈ X such that |x − q| ≤ 2^{−r}, then f_X(q, r) = 1.
(ii) If there is no x ∈ X such that |x − q| ≤ 2^{1−r}, then f_X(q, r) = 0.

The following two observations are well known and easy to verify.

Observation 2.1 A nonempty set X ⊆ R^n is computable if and only if the associated distance function

  ρ_X : R^n → [0, ∞),  ρ_X(y) = inf_{x∈X} |x − y|,

is computable.

Observation 2.2 If X ⊆ R^n is both computable and closed, then X is a computably closed, i.e., Π^0_1, set.

All logarithms in this paper are base 2.

3 Dimensions relative to probability measures

Here we develop the basic theory of constructive fractal dimension on a sequence space Σ^∞ with respect to a suitable probability measure on Σ^∞. We first review the classical Hausdorff and packing dimensions. Let ρ be a metric on a set 𝒳. We use the following standard terminology. The diameter of a set X ⊆ 𝒳 is diam(X) = sup{ρ(x, y) | x, y ∈ X} (which may be ∞). For each x ∈ 𝒳 and r ∈ R, the closed ball of radius r about x is the set

  B(x, r) = {y ∈ 𝒳 | ρ(y, x) ≤ r},

and the open ball of radius r about x is the set

  B^o(x, r) = {y ∈ 𝒳 | ρ(y, x) < r}.

A ball is any set of the form B(x, r) or B^o(x, r).
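Returning briefly to the set-computability definition of section 2: here is a minimal sketch of our own (with float arithmetic standing in for exact rational arithmetic) of a witness f_X for the closed unit disk in R^2, built from its distance function as in Observation 2.1:

```python
from math import hypot

def f_disk(q, r):
    """Sketch of the Brattka-Weihrauch witness f_X(q, r) for the closed
    unit disk X = {x : |x| <= 1} in R^2.

    rho_X(q) = max(0, |q| - 1) is the distance function of Observation
    2.1. Answering 1 whenever rho_X(q) <= 2**(1-r) meets both conditions:
    (i)  some x in X has |x-q| <= 2**-r   => rho_X(q) <= 2**-r   => output 1;
    (ii) no   x in X has |x-q| <= 2**(1-r) => rho_X(q) > 2**(1-r) => output 0.
    In the gap between the two thresholds, either answer is permitted.
    """
    rho = max(0.0, hypot(q[0], q[1]) - 1.0)
    return 1 if rho <= 2.0 ** (1 - r) else 0

assert f_disk((0.5, 0.5), 10) == 1   # interior point: always 1
assert f_disk((2.0, 0.0), 0) == 1    # rho = 1 <= 2**1: 1 is permitted
assert f_disk((2.0, 0.0), 2) == 0    # rho = 1 >  2**-1: must answer 0
```

The slack between the 2^{−r} and 2^{1−r} thresholds is what makes such a witness realizable: an exact membership test at the boundary would not be computable.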
A ball B is centered in a set X ⊆ 𝒳 if B = B(x, r) or B = B^o(x, r) for some x ∈ X and r ≥ 0. For each δ > 0, we let C_δ be the set of all countable collections B of balls such that diam(B) ≤ δ for all B ∈ B, and we let D_δ be the set of all B ∈ C_δ such that the balls in B are pairwise disjoint. For each X ⊆ 𝒳 and δ > 0, we define the sets

  H_δ(X) = {B ∈ C_δ | X ⊆ ∪_{B∈B} B},
  P_δ(X) = {B ∈ D_δ | (∀B ∈ B) B is centered in X}.

If B ∈ H_δ(X), then we call B a δ-cover of X. If B ∈ P_δ(X), then we call B a δ-packing of X. For X ⊆ 𝒳, δ > 0, and s ≥ 0, we define the quantities

  H_δ^s(X) = inf_{B∈H_δ(X)} ∑_{B∈B} diam(B)^s,
  P_δ^s(X) = sup_{B∈P_δ(X)} ∑_{B∈B} diam(B)^s.

Since H_δ^s(X) and P_δ^s(X) are monotone as δ → 0, the limits

  H^s(X) = lim_{δ→0} H_δ^s(X),  P_0^s(X) = lim_{δ→0} P_δ^s(X)

exist, though they may be infinite. Let

  P^s(X) = inf { ∑_{i=0}^∞ P_0^s(X_i) | X ⊆ ∪_{i=0}^∞ X_i }.   (3.1)

It is routine to verify that the set functions H^s and P^s are outer measures [15]. The quantities H^s(X) and P^s(X) – which may be infinite – are called the s-dimensional Hausdorff (outer) ball measure and the s-dimensional packing (outer) ball measure of X, respectively. The optimization (3.1) over all countable partitions of X is needed because the set function P_0^s is not an outer measure.

Definition. Let ρ be a metric on a set 𝒳, and let X ⊆ 𝒳.

1. (Hausdorff [22]). The Hausdorff dimension of X with respect to ρ is

  dim_H^{(ρ)}(X) = inf{s ∈ [0, ∞) | H^s(X) = 0}.

2. (Tricot [48], Sullivan [47]). The packing dimension of X with respect to ρ is

  Dim_P^{(ρ)}(X) = inf{s ∈ [0, ∞) | P^s(X) = 0}.

When 𝒳 is a Euclidean space R^n and ρ is the usual Euclidean metric on R^n, dim_H^{(ρ)} and Dim_P^{(ρ)} are the ordinary Hausdorff and packing dimensions, also denoted by dim_H and Dim_P, respectively.

We now focus our attention on sequence spaces. Let Σ be a finite alphabet with |Σ| ≥ 2. A (Borel) probability measure on Σ^∞ is a function ν : Σ^* → [0, 1] such that ν(λ) = 1 and ν(w) = ∑_{a∈Σ} ν(wa) for all w ∈ Σ^*.
Intuitively, ν(w) is the probability that w ⊑ S when a sequence S ∈ Σ^∞ is chosen according to the probability measure ν. A probability measure ν on Σ^∞ is strongly positive if there exists δ > 0 such that, for all w ∈ Σ^* and a ∈ Σ, ν(wa) > δν(w). The following type of probability measure is used in our main theorem.

Example 3.1 Let π be a probability measure on the alphabet Σ, i.e., a function π : Σ → [0, 1] such that ∑_{a∈Σ} π(a) = 1. Then π induces the product probability measure π on Σ^∞ defined by

  π(w) = ∏_{i=0}^{|w|−1} π(w[i])

for all w ∈ Σ^*. If π is positive on Σ, i.e., π(a) > 0 for all a ∈ Σ, then the probability measure π on Σ^∞ is strongly positive.

Example 3.2 We reserve the symbol µ for the uniform probability measure on Σ^∞, which is the function µ : Σ^* → [0, ∞) defined by µ(w) = |Σ|^{−|w|} for all w ∈ Σ^*. Note that this is the special case of Example 3.1 in which π(a) = 1/|Σ| for each a ∈ Σ.

Definition. The metric induced by a strongly positive probability measure ν on Σ^∞ is the function ρ_ν : Σ^∞ × Σ^∞ → [0, 1] defined by

  ρ_ν(S, T) = inf{ν(w) | w ⊑ S and w ⊑ T}

for all S, T ∈ Σ^∞. The following fact is easily verified.

Observation 3.3 For every strongly positive probability measure ν on Σ^∞, the function ρ_ν is a metric on Σ^∞.

Hausdorff and packing dimensions with respect to probability measures on sequence spaces are defined as follows.

Definition. Let Σ be a finite alphabet with |Σ| ≥ 2, let ν be a strongly positive probability measure on Σ^∞, and let X ⊆ Σ^∞.

1. The Hausdorff dimension of X with respect to ν (also called the Billingsley dimension of X with respect to ν [3, 9]) is dim_H^ν(X) = dim_H^{(ρ_ν)}(X).

2. The packing dimension of X with respect to ν is Dim_P^ν(X) = Dim_P^{(ρ_ν)}(X).

Note: We have assumed strong positivity here for clarity of presentation, but this assumption can be weakened in various ways for various results. When ν is the probability measure µ, it is generally omitted from the terminology.
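The product measure of Example 3.1 and the induced metric ρ_ν can be evaluated on finite prefixes. The sketch below is our own illustration (the measure π and the sequences are arbitrary examples); for a positive measure, the infimum in ρ_ν is attained by the longest common prefix, since ν only shrinks as prefixes lengthen:

```python
from math import isclose

def product_measure(pi, w):
    """pi(w) = prod_i pi[w[i]] : the product measure of Example 3.1."""
    p = 1.0
    for a in w:
        p *= pi[a]
    return p

def rho_nu(nu, S, T, depth):
    """Induced metric rho_nu(S, T) = inf { nu(w) : w ⊑ S and w ⊑ T }.

    For a positive measure this is nu of the longest common prefix; we
    evaluate it on length-`depth` approximations of the sequences.
    """
    i = 0
    while i < depth and S[i] == T[i]:
        i += 1
    return nu(S[:i])

pi = [0.5, 0.25, 0.25]                      # a positive measure on Sigma_3
nu = lambda w: product_measure(pi, w)
assert isclose(nu([0, 1]), 0.125)           # 0.5 * 0.25
S, T = [0, 1, 2, 0, 1], [0, 1, 0, 0, 1]
assert isclose(rho_nu(nu, S, T, 5), 0.125)  # common prefix is [0, 1]
```

Under ρ_ν, the cylinder C_w of all sequences extending w has diameter ν(w), which is what lets the Billingsley dimension below appear as an ordinary Hausdorff dimension in this metric.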
Thus, the Hausdorff dimension of X is dim_H(X) = dim_H^µ(X), and the packing dimension of X is Dim_P(X) = Dim_P^µ(X). It was apparently Wegmann [51] who first noticed that the metric ρ_ν could be used to make Billingsley dimension a special case of Hausdorff dimension. Fernau and Staiger [17] have also investigated this notion.

We now develop gale characterizations of dim_H^ν and Dim_P^ν.

Definition. Let Σ be a finite alphabet with |Σ| ≥ 2, let ν be a probability measure on Σ^∞, and let s ∈ [0, ∞).

1. A ν-s-supergale is a function d : Σ^* → [0, ∞) that satisfies the condition

  d(w)ν(w)^s ≥ ∑_{a∈Σ} d(wa)ν(wa)^s   (3.2)

for all w ∈ Σ^*.

2. A ν-s-gale is a ν-s-supergale that satisfies (3.2) with equality for all w ∈ Σ^*.
3. A ν-supermartingale is a ν-1-supergale.
4. A ν-martingale is a ν-1-gale.
5. An s-supergale is a µ-s-supergale.
6. An s-gale is a µ-s-gale.
7. A supermartingale is a 1-supergale.
8. A martingale is a 1-gale.

The following observation shows how gales and supergales are affected by variation of the parameter s.

Observation 3.4 [35]. Let ν be a probability measure on Σ^∞, let s, s′ ∈ [0, ∞), and let d, d′ : Σ^* → [0, ∞). Assume that

  d(w)ν(w)^s = d′(w)ν(w)^{s′}

holds for all w ∈ Σ^*.

1. d is a ν-s-supergale if and only if d′ is a ν-s′-supergale.
2. d is a ν-s-gale if and only if d′ is a ν-s′-gale.

For example, if the probability measure ν is positive, then a function d : Σ^* → [0, ∞) is a ν-s-gale if and only if the function d′ : Σ^* → [0, ∞) defined by d′(w) = ν(w)^{s−1} d(w) is a ν-martingale.

Martingales were introduced by Lévy [30] and Ville [50]. They have been used extensively by Schnorr [41, 42, 43] and others in investigations of randomness and by Lutz [32, 33] and others in the development of resource-bounded measure. Gales are a convenient generalization of martingales introduced by Lutz [34, 35] in the development of effective fractal dimensions. The following generalization of Kraft’s inequality [10] is often useful.
Lemma 3.5 [35]. Let d be a ν-s-supergale, where ν is a probability measure on Σ^∞ and s ∈ [0, ∞). Then, for all w ∈ Σ^* and all prefix sets B ⊆ Σ^*,

  ∑_{u∈B} d(wu)ν(wu)^s ≤ d(w)ν(w)^s.

Intuitively, a ν-s-gale d is a strategy for betting on the successive symbols in a sequence S ∈ Σ^∞. We regard the value d(w) as the amount of money that a gambler using the strategy d will have after betting on the symbols in w, if w is a prefix of S. If s = 1, then the ν-s-gale identity,

  d(w)ν(w)^s = ∑_{a∈Σ} d(wa)ν(wa)^s,   (3.3)

ensures that the payoffs are fair in the sense that the conditional ν-expected value of the gambler’s capital after the symbol following w, given that w has occurred, is precisely d(w), the gambler’s capital after w. If s < 1, then (3.3) says that the payoffs are less than fair. If s > 1, then (3.3) says that the payoffs are more than fair. Clearly, the smaller s is, the more hostile the betting environment is.

There are two important notions of success for a supergale.

Definition. Let d be a ν-s-supergale, where ν is a probability measure on Σ^∞ and s ∈ [0, ∞), and let S ∈ Σ^∞.

1. We say that d succeeds on S, and we write S ∈ S^∞[d], if limsup_{t→∞} d(S[0..t − 1]) = ∞.
2. We say that d succeeds strongly on S, and we write S ∈ S_str^∞[d], if liminf_{t→∞} d(S[0..t − 1]) = ∞.

Notation. Let ν be a probability measure on Σ^∞, and let X ⊆ Σ^∞.

1. G^ν(X) is the set of all s ∈ [0, ∞) such that there is a ν-s-gale d for which X ⊆ S^∞[d].
2. G^{ν,str}(X) is the set of all s ∈ [0, ∞) such that there is a ν-s-gale d for which X ⊆ S_str^∞[d].
3. Ĝ^ν(X) is the set of all s ∈ [0, ∞) such that there is a ν-s-supergale d for which X ⊆ S^∞[d].
4. Ĝ^{ν,str}(X) is the set of all s ∈ [0, ∞) such that there is a ν-s-supergale d for which X ⊆ S_str^∞[d].

The following theorem gives useful characterizations of the classical Hausdorff and packing dimensions with respect to probability measures on sequence spaces.

Theorem 3.6 (gale characterizations of dim_H^ν(X) and Dim_P^ν(X)).
If ν is a strongly positive probability measure on Σ^∞, then, for all X ⊆ Σ^∞,

  dim_H^ν(X) = inf G^ν(X) = inf Ĝ^ν(X)   (3.4)

and

  Dim_P^ν(X) = inf G^{ν,str}(X) = inf Ĝ^{ν,str}(X).   (3.5)

Proof. In this proof we use the following notation: for each w ∈ Σ^*,

  C_w = {S ∈ Σ^∞ | w ⊑ S}.

Notice that, for each S ∈ Σ^∞ and r > 0, the balls satisfy B(S, r) = C_v and B^o(S, r) = C_w for some v, w ∈ Σ^*. Therefore two balls C_w, C_{w′} are either disjoint or one is contained in the other. In order to prove (3.4) it suffices to show that, for all s ∈ [0, ∞),

  H^s(X) = 0 ⟹ s ∈ G^ν(X) ⟹ s ∈ Ĝ^ν(X) ⟹ H^s(X) = 0.

First, assume that H^s(X) = 0. Then H_1^s(X) = 0, which implies that, for each r ∈ N, there is a disjoint cover B ∈ C_1 such that ∑_{B∈B} diam(B)^s < 2^{−r}. Let A_r = {w ∈ Σ^* | C_w ∈ B}. We define a function d_r : Σ^* → [0, ∞) as follows. Let w ∈ Σ^*. If there exists v ⊑ w such that v ∈ A_r, then

  d_r(w) = (ν(w)/ν(v))^{1−s}.

Otherwise,

  d_r(w) = ∑_{u : wu∈A_r} (ν(wu)/ν(w))^s.

It is routine to verify that the following conditions hold for all r ∈ N.

(i) d_r is a ν-s-gale.
(ii) d_r(λ) < 2^{−r}.
(iii) For all w ∈ A_r, d_r(w) = 1.

Let d = ∑_{r=0}^∞ 2^r d_{2r}. Notice that d is a ν-s-gale. To see that X ⊆ S^∞[d], let T ∈ X, and let r ∈ N be arbitrary. Since B covers X, there exists w ∈ A_{2r} such that w ⊑ T. Then, by (iii) above, d(w) ≥ 2^r d_{2r}(w) = 2^r. Since r ∈ N is arbitrary, this shows that T ∈ S^∞[d], confirming that X ⊆ S^∞[d]. We have now shown that d is a ν-s-gale such that X ⊆ S^∞[d], whence s ∈ G^ν(X).

Conversely, assume that s ∈ Ĝ^ν(X). To see that H^s(X) = 0, let δ > 0 and r ∈ N. It suffices to show that H_δ^s(X) ≤ 2^{−r}. If X = ∅ this is trivial, so assume that X ≠ ∅. Since s ∈ Ĝ^ν(X), there is a ν-s-supergale d such that X ⊆ S^∞[d]. Note that d(λ) > 0 because X ≠ ∅. Let

  A = {w ∈ Σ^* | ν(w) < δ, d(w) ≥ 2^r d(λ), and (∀v)[v ⊏ w ⟹ v ∉ A]}.

It is clear that A is a prefix set. It is also clear that B = {C_w | w ∈ A} is a δ-cover of S^∞[d], and since X ⊆ S^∞[d], B is also a δ-cover of X.
By Lemma 3.5 and the definition of A, we have

  d(λ) ≥ ∑_{w∈A} ν(w)^s d(w) ≥ 2^r d(λ) ∑_{w∈A} ν(w)^s.

Since B ∈ H_δ(X) and d(λ) > 0, it follows that

  H_δ^s(X) ≤ ∑_{w∈A} ν(w)^s ≤ 2^{−r}.

This completes the proof of (3.4). The proof of (3.5) is based on the following three claims.

Claim 1. For each family X_i ⊆ Σ^∞, i ∈ N,

  inf G^{ν,str}(∪_i X_i) = sup_i inf G^{ν,str}(X_i).

Claim 2. For each X ⊆ Σ^∞, if P_0^s(X) < ∞, then inf G^{ν,str}(X) ≤ s.

Claim 3. For each X ⊆ Σ^∞, if s > inf Ĝ^{ν,str}(X), then P^s(X) = 0.

Proof of Claim 1. The ≥ inequality follows from the definition of G^{ν,str}(·). To prove that inf G^{ν,str}(∪_i X_i) ≤ sup_i inf G^{ν,str}(X_i), let s > sup_i inf G^{ν,str}(X_i). Assume that X_i ≠ ∅ for every i (otherwise the proof is similar, taking only the nonempty X_i’s). Then for each i ∈ N there is a ν-s-gale d_i such that X_i ⊆ S_str^∞[d_i]. We define a ν-s-gale d by

  d(w) = ∑_i 2^{−i} d_i(w)/d_i(λ)

for all w ∈ Σ^*. Then for each i and any S ∈ X_i, we have

  d(S[0..n − 1]) ≥ 2^{−i} d_i(S[0..n − 1])/d_i(λ)

for all n, so S ∈ S_str^∞[d]. Therefore ∪_i X_i ⊆ S_str^∞[d], and the claim follows. ✷

Proof of Claim 2. Assume that P_0^s(X) < ∞. Let ε > 0, and let

  A = {w ∈ Σ^* | C_w ∩ X ≠ ∅}.

Notice that there is a constant c such that, for every n, ∑_{w∈A^{=n}} ν(w)^s < c, and that for each T ∈ X and every n, T[0..n − 1] ∈ A. For each n ∈ N we define d_n : Σ^* → [0, ∞) as in the first part of this proof: let w ∈ Σ^*. If there exists v ⊑ w such that v ∈ A^{=n}, then

  d_n(w) = (ν(w)/ν(v))^{1−s}.

Otherwise,

  d_n(w) = ∑_{u : wu∈A^{=n}} (ν(wu)/ν(w))^s.

Then d_n is a ν-s-gale, d_n(λ) = ∑_{u∈A^{=n}} ν(u)^s, and d_n(w) = 1 for all w ∈ A^{=n}. Let

  d(w) = ∑_{n=0}^∞ ν(w)^{−ε} d_n(w).

Notice that d is a ν-(s + ε)-gale. To see that X ⊆ S_str^∞[d], let T ∈ X and let n be arbitrary. Since T[0..n − 1] ∈ A,

  d(T[0..n − 1]) ≥ ν(T[0..n − 1])^{−ε} d_n(T[0..n − 1]) ≥ ν(T[0..n − 1])^{−ε}.

Since ν(T[0..n − 1]) → 0 as n → ∞, this shows that T ∈ S_str^∞[d]. Therefore X ⊆ S_str^∞[d] and inf G^{ν,str}(X) ≤ s + ε for arbitrary ε, so the claim follows.
✷

Proof of Claim 3. Let $s > t > \inf \widehat{\mathcal{G}}^{\nu,\mathrm{str}}(X)$. To see that $P^{s}(X) = 0$, let $d$ be a ν-t-supergale such that $X \subseteq S^{\infty}_{\mathrm{str}}[d]$. For each $i \in \mathbb{N}$, let
\[ X_i = \{ T \mid \forall n \ge i,\ d(T[0..n-1]) > d(\lambda) \}. \]
Then $X \subseteq \cup_i X_i$. For each $i \in \mathbb{N}$ we prove that $P^{s}_{0}(X_i) = 0$. Let $\delta_i = \min_{|w| \le i} \nu(w)$. Let $\delta < \delta_i$, and let $B$ be a δ-packing of $X_i$. Then $B \subseteq \{ w \mid d(w) > d(\lambda) \}$ and $\sum_{w \in B} \nu(w)^{t} \le 1$. Therefore $P^{t}_{0}(X_i) \le 1$, and $P^{s}_{0}(X_i) = 0$, since
\[ \sum_{w \in B} \nu(w)^{s} \le \delta^{s-t} \sum_{w \in B} \nu(w)^{t} \le \delta^{s-t} \xrightarrow{\ \delta \to 0\ } 0. \]
Therefore $P^{s}(X) = 0$, and the claim follows. ✷

We next prove (3.5). The inequality $\inf \mathcal{G}^{\nu,\mathrm{str}}(X) \le \mathrm{Dim}^{\nu}_{P}(X)$ follows from Claims 1 and 2, and $\mathrm{Dim}^{\nu}_{P}(X) \le \inf \widehat{\mathcal{G}}^{\nu,\mathrm{str}}(X)$ follows from Claim 3. Since $\inf \widehat{\mathcal{G}}^{\nu,\mathrm{str}}(X) \le \inf \mathcal{G}^{\nu,\mathrm{str}}(X)$ holds trivially (every gale is a supergale), the two equalities of (3.5) follow. ✷

We note that the case ν = µ of (3.4) was proven by Lutz [34], and the case ν = µ of (3.5) was proven by Athreya, Hitchcock, Lutz, and Mayordomo [1].

Guided by Theorem 3.6, we now develop the constructive fractal ν-dimensions.

Definition. A ν-s-supergale $d$ is constructive if it is lower semicomputable, i.e., if there is an exactly computable function $\hat{d} : \Sigma^* \times \mathbb{N} \to \mathbb{Q}$ with the following two properties.

(i) For all $w \in \Sigma^*$ and $t \in \mathbb{N}$, $\hat{d}(w,t) \le \hat{d}(w,t+1) < d(w)$.

(ii) For all $w \in \Sigma^*$, $\lim_{t \to \infty} \hat{d}(w,t) = d(w)$.

Notation. For each probability measure ν on $\Sigma^\infty$ and each set $X \subseteq \Sigma^\infty$, we define the sets $\mathcal{G}^{\nu}_{\mathrm{constr}}(X)$, $\mathcal{G}^{\nu,\mathrm{str}}_{\mathrm{constr}}(X)$, $\widehat{\mathcal{G}}^{\nu}_{\mathrm{constr}}(X)$, and $\widehat{\mathcal{G}}^{\nu,\mathrm{str}}_{\mathrm{constr}}(X)$ exactly like the sets $\mathcal{G}^{\nu}(X)$, $\mathcal{G}^{\nu,\mathrm{str}}(X)$, $\widehat{\mathcal{G}}^{\nu}(X)$, and $\widehat{\mathcal{G}}^{\nu,\mathrm{str}}(X)$, respectively, except that the gales and supergales $d$ are now required to be constructive.

Definition. Let ν be a probability measure on $\Sigma^\infty$, and let $X \subseteq \Sigma^\infty$.

1. The constructive ν-dimension of $X$ is $\mathrm{cdim}^{\nu}(X) = \inf \widehat{\mathcal{G}}^{\nu}_{\mathrm{constr}}(X)$.

2. The constructive strong ν-dimension of $X$ is $\mathrm{cDim}^{\nu}(X) = \inf \widehat{\mathcal{G}}^{\nu,\mathrm{str}}_{\mathrm{constr}}(X)$.

3. The constructive dimension of $X$ is $\mathrm{cdim}(X) = \mathrm{cdim}^{\mu}(X)$.

4. The constructive strong dimension of $X$ is $\mathrm{cDim}(X) = \mathrm{cDim}^{\mu}(X)$.

The fact that the "unhatted" $\mathcal{G}$-classes can be used in place of the "hatted" $\widehat{\mathcal{G}}$-classes is not as obvious in the constructive case as in the classical case.
Nevertheless, Fenner [16] proved that this is the case for constructive ν-dimension. (Hitchcock [24] proved this independently for the case ν = µ.) The case of strong ν-dimension also holds, with a more careful argument [1].

Theorem 3.7 (Fenner [16]). If ν is a strongly positive, computable probability measure on $\Sigma^\infty$, then, for all $X \subseteq \Sigma^\infty$,
\[ \mathrm{cdim}^{\nu}(X) = \inf \mathcal{G}^{\nu}_{\mathrm{constr}}(X) \]
and
\[ \mathrm{cDim}^{\nu}(X) = \inf \mathcal{G}^{\nu,\mathrm{str}}_{\mathrm{constr}}(X). \]

A correspondence principle for an effective dimension is a theorem stating that the effective dimension coincides with its classical counterpart on sufficiently "simple" sets. The following such principle, proven by Hitchcock [25], extended a correspondence principle for computable dimension that was implicit in results of Staiger [44].

Theorem 3.8 (correspondence principle for constructive dimension [25]). If $X \subseteq \Sigma^\infty$ is any union (not necessarily effective) of computably closed, i.e., $\Pi^{0}_{1}$, sets, then $\mathrm{cdim}(X) = \dim_H(X)$.

We now define the constructive dimensions of individual sequences.

Definition. Let ν be a probability measure on $\Sigma^\infty$, and let $S \in \Sigma^\infty$. Then the ν-dimension of $S$ is $\dim^{\nu}(S) = \mathrm{cdim}^{\nu}(\{S\})$, and the strong ν-dimension of $S$ is $\mathrm{Dim}^{\nu}(S) = \mathrm{cDim}^{\nu}(\{S\})$.

4 Kolmogorov Complexity Characterizations

In this section we prove characterizations of constructive ν-dimension and constructive strong ν-dimension in terms of Kolmogorov complexity. These characterizations are used in the proof of our main theorem in section 6.

Let Σ be a finite alphabet, with $|\Sigma| \ge 2$. The Kolmogorov complexity of a string $w \in \Sigma^*$ is the natural number
\[ K(w) = \min \{ |\pi| \mid \pi \in \{0,1\}^* \text{ and } U(\pi) = w \}, \]
where $U$ is a fixed optimal universal prefix Turing machine. This is a standard notion of (prefix) Kolmogorov complexity. The reader is referred to the standard text by Li and Vitanyi [31] for background on prefix Turing machines and Kolmogorov complexity.

If ν is a probability measure on $\Sigma^\infty$, then the Shannon self-information of a string $w \in \Sigma^*$ with respect to ν is
\[ \mathcal{I}_{\nu}(w) = \log \frac{1}{\nu(w)}. \]
Note that $0 \le \mathcal{I}_{\nu}(w) \le \infty$. Equality holds on the left here if and only if $\nu(w) = 1$, and equality holds on the right if and only if $\nu(w) = 0$. Since our results here concern strongly positive probability measures, we will have $0 < \mathcal{I}_{\nu}(w) < \infty$ for all $w \in \Sigma^{+}$.

The following result is the main theorem of this section. It gives characterizations of the ν-dimensions and the strong ν-dimensions of sequences in terms of Kolmogorov complexity.

Theorem 4.1. If ν is a strongly positive, computable probability measure on $\Sigma^\infty$, then, for all $S \in \Sigma^\infty$,
\[ \dim^{\nu}(S) = \liminf_{m \to \infty} \frac{K(S[0..m-1])}{\mathcal{I}_{\nu}(S[0..m-1])} \tag{4.1} \]
and
\[ \mathrm{Dim}^{\nu}(S) = \limsup_{m \to \infty} \frac{K(S[0..m-1])}{\mathcal{I}_{\nu}(S[0..m-1])}. \tag{4.2} \]

Proof. Let $S \in \Sigma^\infty$. Let $s > s' > \liminf_m \frac{K(S[0..m-1])}{\mathcal{I}_{\nu}(S[0..m-1])}$. For infinitely many $m$, $K(S[0..m-1]) < s' \mathcal{I}_{\nu}(S[0..m-1])$, so $\nu(S[0..m-1])^{s'} < 2^{-K(S[0..m-1])}$. Let $m \in \mathbb{N}$. We define the computably enumerable (c.e.) set
\[ A = \{ w \mid K(w) < s' \mathcal{I}_{\nu}(w) \}, \]
and the constructive ν-s-supergale $d_m$ as follows. If there exists $v \sqsubseteq w$ such that $v \in A^{=m}$, then
\[ d_m(w) = \left( \frac{\nu(w)}{\nu(v)} \right)^{1-s}. \]
Otherwise,
\[ d_m(w) = \sum_{u :\, wu \in A^{=m}} \left( \frac{\nu(wu)}{\nu(w)} \right)^{s}. \]
First notice that $d_m$ is well defined, since $d_m$ is a supergale and
\[ d_m(\lambda) = \sum_{u \in A^{=m}} \nu(u)^{s} \le \sum_{u \in A^{=m}} 2^{-K(u)} (1-\delta)^{m(s-s')} \le (1-\delta)^{m(s-s')} < \infty, \]
where $\delta \in (0,1)$ is a constant testifying that ν is strongly positive. We define the constructive ν-s-supergale
\[ d(w) = \sum_m (1-\delta)^{-m(s-s')} d_{2m}(w) + \sum_m (1-\delta)^{-m(s-s')} d_{2m+1}(w). \]
Notice that the fact that $A$ is c.e. is necessary for the constructivity of $d$. Since $d_{|w|}(w) = 1$ for $w \in A$, we have $d(w) \ge (1-\delta)^{-|w|(s-s')/2}$ for $w \in A$. Since $S[0..m-1] \in A$ for infinitely many $m$, we have $S \in S^\infty[d]$ and $\dim^{\nu}(S) \le s$. This finishes the proof of the first inequality of (4.1).

For the other direction, let $s > \dim^{\nu}(S)$. Let $d$ be a constructive ν-s-gale succeeding on $S$. Let $c \ge d(\lambda)$ be a rational number, and let $B = \{ w \mid d(w) > c \}$; notice that $B$ is c.e. For every $m$,
\[ \sum_{w \in B^{=m}} \nu(w)^{s} \le 1. \]
Let $\theta_m : B^{=m} \to \{0,1\}^*$ be the Shannon-Fano-Elias code given by the probability submeasure $p$ defined as $p(w) = \nu(w)^{s}$ for $w \in B^{=m}$ (this code assigns shorter code words $\theta_m(w)$ to words with larger probability $p(w)$; see, for example, [10]). Specifically, for each $w \in B^{=m}$, $\theta_m(w)$ is defined as the most significant $1 + \lceil \log \frac{1}{p(w)} \rceil$ bits of the real number
\[ \sum_{|v| = m,\ v <_B w} p(v) + \frac{p(w)}{2}, \]
where $<_B$ orders the words in $B$ according to their appearance in the computable enumeration of $B$. Then
\[ |\theta_m(w)| = 1 + \left\lceil \log \frac{1}{p(w)} \right\rceil = 1 + \lceil s\, \mathcal{I}_{\nu}(w) \rceil \]
for $w \in B^{=m}$. Since $B$ is c.e., encoding and decoding can be computed given the length; that is, every $w \in B$ can be computed from $|w|$ and $\theta_{|w|}(w)$. Therefore, if $w \in B$,
\[ K(w) \le 2 + s\, \mathcal{I}_{\nu}(w) + 2 \log(|w|). \]
Notice that, since ν is strongly positive, $\mathcal{I}_{\nu}(w) = \Omega(|w|)$, and since there exist infinitely many $m$ for which $S[0..m-1] \in B$,
\[ \liminf_{m \to \infty} \frac{K(S[0..m-1])}{\mathcal{I}_{\nu}(S[0..m-1])} \le s. \]
The proof of (4.2) is analogous. ✷

If ν is a strongly positive probability measure on $\Sigma^\infty$, then there is a real constant $\alpha > 0$ such that, for all $w \in \Sigma^*$, $\mathcal{I}_{\nu}(w) \ge \alpha |w|$. Since other notions of Kolmogorov complexity, such as the plain complexity $C(w)$ and the monotone complexity $Km(w)$ [31], differ from $K(w)$ by at most $O(\log |w|)$, it follows that Theorem 4.1 also holds with $K(S[0..m-1])$ replaced by $C(S[0..m-1])$, $Km(S[0..m-1])$, etc.

The following known characterizations of dimension and strong dimension are simply the special case of Theorem 4.1 in which $\Sigma = \{0,1\}$ and ν = µ.

Corollary 4.2 ([37, 1]). For all $S \in \{0,1\}^\infty$,
\[ \dim(S) = \liminf_{m \to \infty} \frac{K(S[0..m-1])}{m} \]
and
\[ \mathrm{Dim}(S) = \limsup_{m \to \infty} \frac{K(S[0..m-1])}{m}. \]

Alternative proofs of Corollary 4.2 have since appeared in [35, 46].

We define the dimension and strong dimension of a point $x$ in Euclidean space as in (1.7) and (1.8). It is convenient to characterize these dimensions in terms of the Kolmogorov complexity of rational approximations in Euclidean space.
Specifically, for each x ∈ Rn and r ∈ N, we define the Kolmogorov complexity of x at precision r to be the natural number  Kr (x) = min K(q) q ∈ Qn and |q − x| ≤ 2−r . That is, Kr (x) is the minimum length of any program π ∈ {0, 1}∗ for which U (π) ∈ Qn ∩ B(x, 2−r ). (Related notions of approximate Kolmogorov complexity have recently been considered by Vitanyi and Vereshchagin [49] and Fortnow, Lee and Vereshchagin [18].) We also mention the quantity  Kr (r, x) = min K(r, q) q ∈ Qn and |q − x| ≤ 2−r , in which the program π must specify the precision parameter r as well as a rational approximation q of x to within 2−r . The following relationship between these two quantities is easily verified by standard techniques. Observation 4.3 There exist constants a, b ∈ N such that, for all x ∈ Rn and r ∈ N, Kr (x) − a ≤ Kr (r, x) ≤ Kr (x) + K(r) + b. We now show that the quantity Kr (r, x) is within a constant of the Kolmogorov complexity of the first nr bits of an interleaved binary expansion of the fractional part of the coordinates of x, which was defined in section 1, together with the integer part of x. 23 Lemma 4.4 There is a constant c ∈ N such that, for all x = (x1 , . . . , xn ) ∈ Rn , all interleaved binary expansions S of the fractional parts of x1 , . . . , xn , and all r ∈ N, |Kr (r, x) − K(⌊x⌋, S[0..nr − 1])| ≤ c. (4.3) where ⌊x⌋ is the interleaved binary expansion of (⌊x1 ⌋, . . . , ⌊xn ⌋) Proof. We first consider the case x ∈ [0, 1]n . For convenience, let l = ⌈ log2 n ⌉ (notice that both n and l are constants). Let M be a prefix Turing machine such that, if π ∈ {0, 1}∗ is a program such that U (π) = w ∈ {0, 1}∗ and |w| is divisible by n, and if v ∈ {0, 1}nl , then M (πv) = (|w|/n, q), where q ∈ Qn is the dyadic rational point whose interleaved binary expansion is wv. Let c1 = nl + cM , where cM is an optimality constant for M . Let x ∈ Rn , let S be an interleaved binary expansion of x, and let r ∈ N. 
Let π ∈ {0, 1}∗ be a witness to the value of Kr (S[0..nr − 1]), and let v = S[nr..n(r + l) − 1]. Then M (πv) = (r, q), where q is the dyadic rational point whose interleaved binary expansion is S[0..n(l + r) − 1]. Since q √ |q − x| = n(2−(r+l) )2 = 2−(r+l) n ≤ 2−r , it follows that Kr (r, x) ≤ |πv| + cM = K(S[0..nr − 1]) + c1 , (4.4) which is one of the inequalities we need to get (4.3). We now turn to the reverse inequality. For each r ∈ N and q ∈ Qn , let Ar,q be the set of all r-dyadic points within 2l−r + 2r of q. That is, Ar,q is the set of all points q ′ = (q1′ , . . . , qn′ ) ∈ Qn such that |q − q ′ | ≤ 2l−r + 2r and each qi′ is of the form 2−r a′i for some integer a′i . Let q ′ , q ′′ ∈ Ar,q . For each 1 ≤ i ≤ n, let qi′ = 2−r a′i and qi′′ = 2−r a′′i be the ith coordinates of q ′ and q ′′ , respectively. Then, for each 1 ≤ i ≤ n, we have |a′i − a′′i | = ≤ ≤ = 2r |qi′ − qi′′ | 2r (|q ′ − q| + |q ′′ − q|) 2r+1 (2l−r + 2−r ) 2l+1 + 2. This shows that there are at most 2l+1 + 3 possible values of a′i . It follows that |Ar,q | ≤ (2l+1 + 3)n . (4.5) 24 Let M ′ be a prefix Turing machine such that, if π ∈ {0, 1}∗ is a program such that U (π) = (r, q) ∈ N×Qn , 0 ≤ m < |Ar,q |, and sm is the mth string in the standard enumeration s0 , s1 , s2 , . . . of {0, 1}∗ , then M ′ (π0|sm | 1sm ) is the nr-bit interleaved binary expansion of the fractional points of the coordinates of the mth element of a canonical enumeration of Ar,q . Let c2 = n(2l′ + 1) + cM ′ , where l′ = ⌈log(2l+1 + 3)⌉ and cM ′ is an optimality constant for M ′ . Let x ∈ Rn , let S be an interleaved binary expansion of x, and let r ∈ N. Let q ′ be the r-dyadic point whose interleaved binary expansion is S[0..nr−1], and let π ∈ {0, 1}∗ be a witness to the value of Kr (r, x). Then U (π) = (r, q) for some q ∈ Qn ∩ B(x, 2−r ). Since |q ′ − q| ≤ |q ′ − x| + |q − x| √ ≤ 2−r n + 2−r ≤ 2l−r + 2−r , we have q ′ ∈ Aq,r . It follows that there exists 0 ≤ m < |Ar,q | such that M ′ (π0|sm | 1sm ) = S[0..nr − 1]. 
This implies that K(S[0..nr − 1]) ≤ = ≤ = ≤ ≤ |π0|sm | 1sm | + cM ′ Kr (r, x) + 2|sm | + cM ′ + 1 Kr (r, x) + 2|s|Ar,q |−1 | + cM ′ + 1 Kr (r, x) + 2⌊log |Ar,q |⌋ + cM ′ + 1 Kr (r, x) + 2⌊n log(2l+1 + 3)⌋ + cM ′ + 1 Kr (r, x) + c2 . (4.6) If we let c = max{c1 , c2 }, then (4.4) and (4.6) imply (4.3). For the general case, notice that Kr (r, ⌊x⌋) = K(⌊x⌋) + O(1). ✷ We now have the following characterizations of the dimensions and strong dimensions of points in Euclidean space. Theorem 4.5 For all x ∈ Rn , dim(x) = lim inf Kr (x) , r (4.7) Dim(x) = lim sup Kr (x) . r (4.8) r→∞ and r→∞ 25 Proof. Let x ∈ Rn , and let S be an interleaved binary expansion of the fractional parts of the coordinates of x. By (1.7) and Corollary 4.2, we have dim(x) = n dim(S) = n lim inf m→∞ K(S[0..m − 1]) . m Since all values of K(S[0..m − 1]) with nr ≤ m < n(r + 1) are within a constant (that depends on the constant n) of one another, it follows that K(S[0..nr − 1]) r→∞ nr K(S[0..nr − 1]) . = lim inf r→∞ r dim(x) = n lim inf Since K(r) = O(log r) [31], it follows by Observation 4.3 and Lemma 4.4 that (4.7) holds. The proof that (4.8) holds is analogous. ✷ 5 Self-Similar Fractals This expository section reviews a fragment of the theory of self-similar fractals that is adequate for understanding our main theorem and its proof. Our treatment is self-contained, but of course far from complete. The interested reader is referred to any of the standard texts [2, 12, 13, 15] for more extensive discussion. Definition. A contracting similarity on a set D ⊆ Rn is a function S : D → D for which there exists a real number c ∈ (0, 1), called a contraction ratio of S, satisfying |S(x) − S(y)| = c|x − y| for all x, y ∈ D. Definition. An iterated function system (IFS) is a finite sequence S = (S0 , . . . , Sk−1 ) of two or more contracting similarities on a nonempty, closed set D ⊆ Rn . We call D the domain of S, writing D = dom(S). We note that iterated function systems are often defined more generally. 
For example, IFSs consisting of contractions, which merely satisfy inequalities of the form |S(x) − S(y)| ≤ c|x − y| for some c ∈ (0, 1), are often considered. Since our results here are confined to self-similar fractals, we have used the more restrictive definition. We use the standard notation K(D) for the set of all nonempty compact (i.e., closed and bounded) subsets of a nonempty closed set D ⊆ Rn . For each IFS S, we write K(S) = K(dom(S)). 26 With each IFS S = (S0 , . . . , Sk−1 ), we define the transformation S : K(S) → K(S) by k−1 [ S(A) = Si (A) i=0 for all A ∈ K(S), where Si (A) is the image of A under the contracting similarity Si . Observation 5.1 For each IFS S, there exists A ∈ K(S) such that S(A) ⊆ A. Proof. Assume the hypothesis, with S = (S0 , . . . , Sk−1 ) and dom(S) = D, and let c0 , . . . , ck−1 be contraction ratios of S0 , . . . , Sk−1 , respectively. Let c = max{c0 , . . . , ck−1 }, noting that c ∈ (0, 1), and fix z ∈ D. Let r= 1 max |Si (z) − z|, 1 − c 0≤i<k and let A = D ∩ Br (z). Then A is a closed subset of the compact set Br (z), and z ∈ A, so A ∈ K(S). For all x ∈ A and 0 ≤ i < k, we have |Si (x) − z| ≤ ≤ ≤ = |Si (x) − Si (z)| + |Si (z) − z| c|x − z| + |Si (z) − z| cr + (1 − c)r r, so each Si (A) ⊆ A, so S(A) ⊆ A. ✷ For each IFS S = (S0 , . . . , Sk−1 ) and each set A ∈ K(S) satisfying S(A) ⊆ A, we define the function SA : Σ∗k → K(S) by the recursion SA (λ) = A; SA (iw) = Si (SA (w)) for all w ∈ Σ∗k and i ∈ Σk . If c = max{c0 , . . . , ck−1 }, where c0 , . . . , ck−1 are contraction ratios of S0 , . . . , Sk−1 , respectively, then routine inductions establish that, for all w ∈ Σ∗k and i ∈ Σk , SA (iw) ⊆ SA (w) (5.1) 27 and diam(SA (w)) ≤ c|w| diam(A). Since c ∈ (0, 1), it follows that, for each sequence T ∈ point SA (T ) ∈ Rn such that \ SA (w) = {SA (T )}. (5.2) Σ∞ k , there is a unique (5.3) w⊑T n In this manner, we have defined a function SA : Σ∞ k → R . 
The following observation shows that this function does not really depend on the choice of $A$.

Observation 5.2. Let S be an IFS. If $A, B \in \mathcal{K}(S)$ satisfy $S(A) \subseteq A$ and $S(B) \subseteq B$, then $S_A = S_B$.

Our proof of Observation 5.2 uses the Hausdorff metric on $\mathcal{K}(\mathbb{R}^n)$, which is the function $\rho_H : \mathcal{K}(\mathbb{R}^n) \times \mathcal{K}(\mathbb{R}^n) \to [0, \infty)$ defined by
\[ \rho_H(A, B) = \max\left\{ \sup_{x \in A} \inf_{y \in B} |x - y|,\ \sup_{y \in B} \inf_{x \in A} |x - y| \right\} \]
for all $A, B \in \mathcal{K}(\mathbb{R}^n)$. It is easy to see that $\rho_H$ is a metric on $\mathcal{K}(\mathbb{R}^n)$. It follows that $\rho_H$ is a metric on $\mathcal{K}(S)$ for every IFS S.

Proof of Observation 5.2. Assume the hypothesis, with $S = (S_0, \ldots, S_{k-1})$, and let $c_0, \ldots, c_{k-1}$ be contraction ratios of $S_0, \ldots, S_{k-1}$, respectively. The definition of $\rho_H$ implies immediately that, for all $E, F \in \mathcal{K}(S)$ and $0 \le i < k$,
\[ \rho_H(S_i(E), S_i(F)) = c_i \rho_H(E, F). \]
It follows by an easy induction that, if we let $c = \max\{c_0, \ldots, c_{k-1}\}$, then, for all $w \in \Sigma_k^*$,
\[ \rho_H(S_A(w), S_B(w)) \le c^{|w|} \rho_H(A, B). \tag{5.4} \]
To see that $S_A = S_B$, let $T \in \Sigma_k^\infty$, and let $\epsilon > 0$. For each $w \sqsubseteq T$, (5.1), (5.2), and (5.3) tell us that
\[ \rho_H(\{S_A(T)\}, S_A(w)) \le \mathrm{diam}(S_A(w)) \le c^{|w|} \mathrm{diam}(A) \tag{5.5} \]
and
\[ \rho_H(\{S_B(T)\}, S_B(w)) \le \mathrm{diam}(S_B(w)) \le c^{|w|} \mathrm{diam}(B). \tag{5.6} \]
Since $c \in (0,1)$, (5.4), (5.5), and (5.6) tell us that there is a prefix $w \sqsubseteq T$ such that
\begin{align*}
\rho_H(\{S_A(T)\}, \{S_B(T)\}) &\le \rho_H(\{S_A(T)\}, S_A(w)) + \rho_H(S_A(w), S_B(w)) + \rho_H(\{S_B(T)\}, S_B(w)) \\
&< \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon.
\end{align*}
Since this holds for all $\epsilon > 0$, it follows that $\rho_H(\{S_A(T)\}, \{S_B(T)\}) = 0$, i.e., that $S_A(T) = S_B(T)$. ✷

For each IFS S, we define the induced function $S : \Sigma_k^\infty \to \mathbb{R}^n$ by setting $S = S_A$, where $A$ is any element of $\mathcal{K}(S)$ satisfying $S(A) \subseteq A$. By Observations 5.1 and 5.2, this induced function $S$ is well defined. We now have the machinery to define a rich collection of fractals in $\mathbb{R}^n$.

Definition. The attractor (or invariant set) of an IFS $S = (S_0, \ldots, S_{k-1})$ is the set
\[ F(S) = S(\Sigma_k^\infty), \]
i.e., the range of the induced function $S : \Sigma_k^\infty \to \mathbb{R}^n$.
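The induced function can be approximated to any desired accuracy from a finite prefix of a sequence $T \in \Sigma_k^\infty$: by (5.2), $S_A(w)$ has diameter at most $c^{|w|} \mathrm{diam}(A)$, so applying $S_{w[0]} \circ \cdots \circ S_{w[m-1]}$ to any point of $A$ lands within that distance of $S_A(T)$. The following Python sketch illustrates this; the particular equilateral-triangle IFS with all ratios $1/2$ is our own illustrative choice, not part of the formal development.

```python
# Approximate S_A(T) from a finite prefix w of T.  By (5.2), S_A(w) has
# diameter <= c^|w| * diam(A), so the result is within that distance of
# the coded point.  The vertices and ratios below are illustrative only.

from math import sqrt

VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)]
RATIOS = [0.5, 0.5, 0.5]

def similarity(i, p):
    """The contracting similarity S_i(p) = v_i + c_i (p - v_i)."""
    (vx, vy), c = VERTICES[i], RATIOS[i]
    return (vx + c * (p[0] - vx), vy + c * (p[1] - vy))

def approx_point(prefix, start=(0.0, 0.0)):
    """Apply S_{prefix[0]} o ... o S_{prefix[-1]} to a start point of A."""
    p = start
    for i in reversed(prefix):  # the innermost similarity acts first
        p = similarity(i, p)
    return p

# The constant sequence 000... codes the fixed point v_0 = (0, 0):
# any start point is pulled to within 0.5^40 of the origin.
x, y = approx_point([0] * 40, start=(1.0, 0.0))
```

Note that the loop applies the maps in reverse order, since in $S_A(iw) = S_i(S_A(w))$ the first symbol of the prefix corresponds to the outermost similarity.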
It is well known that the attractor $F(S)$ is the unique fixed point of the induced transformation $S : \mathcal{K}(S) \to \mathcal{K}(S)$, but we do not use this fact here. For each $T \in \Sigma_k^\infty$, we call $T$ a coding sequence, or an S-code, of the point $S(T) \in F(S)$.

Example 5.3 (generalized Sierpinski triangles in $\mathbb{R}^2$). Let $D$ be the set consisting of the triangle in $\mathbb{R}^2$ with vertices $v_0 = (0,0)$, $v_1 = (1,0)$, and $v_2 = (\frac{1}{2}, \frac{\sqrt{3}}{2})$, together with this triangle's interior. Given $c_0, c_1, c_2 \in (0,1)$, define $S_0, S_1, S_2 : D \to D$ by
\[ S_i(p) = v_i + c_i (p - v_i) \]
for $i \in \{0,1,2\}$ and $p \in D$. Then $S_0$, $S_1$, and $S_2$ are contracting similarities with contraction ratios $c_0$, $c_1$, and $c_2$, respectively, so $S = (S_0, S_1, S_2)$ is an IFS with domain $D$. Intuitively, a coding sequence $T \in \{0,1,2\}^\infty$ can be regarded as an abbreviation of the procedure

∆0 := D;
for j := 0 to ∞ do
  ∆j+1 := the cT[j]-reduced copy of ∆j lying in corner T[j] of ∆j.

The point $S(T)$ of $F(S)$ is then the unique point of $\mathbb{R}^2$ lying in all of the triangles $\Delta_0, \Delta_1, \Delta_2, \ldots$. The attractor $F(S)$ is thus a generalized Sierpinski triangle. Figure 2(a, b) illustrates this construction in the case where $c_0 = \frac{1}{2}$, $c_1 = \frac{1}{4}$, and $c_2 = \frac{1}{3}$. If $c_0 = c_1 = c_2 = \frac{1}{2}$, then $F(S)$ is the familiar Sierpinski triangle of Figure 2(c).

[Figure 2: (a) The IFS S of Example 5.3, with $c_0 = \frac{1}{2}$, $c_1 = \frac{1}{4}$, and $c_2 = \frac{1}{3}$. (b) The attractor $F(S)$ of this IFS. (c) The attractor $F(S)$ when $c_0 = c_1 = c_2 = \frac{1}{2}$.]

In general, the attractor of an IFS $S = (S_0, \ldots, S_{k-1})$ is easiest to analyze when the sets $S_0(\mathrm{dom}(S)), \ldots, S_{k-1}(\mathrm{dom}(S))$ are "nearly disjoint". (Intuitively, this prevents each point $x \in F(S)$ from having "too many" coding sequences $T \in \Sigma_k^\infty$.) The following definition makes this notion precise.

Definition. An IFS $S = (S_0, \ldots, S_{k-1})$ with domain $D$ satisfies the open set condition if there exists a nonempty, bounded, open set $G \subseteq D$ such that $S_0(G), \ldots, S_{k-1}(G)$ are disjoint subsets of $G$.
We will say that $G$ is the witness of the open set condition for S. We now define the most widely known type of fractal.

Definition. A self-similar fractal is a set $F \subseteq \mathbb{R}^n$ that is the attractor of an IFS that satisfies the open set condition.

Example 5.3 (continued). If $c_0 + c_1 \le 1$, $c_0 + c_2 \le 1$, and $c_1 + c_2 \le 1$, then the set $G = D^{o}$ (the topological interior of $D$) testifies that S satisfies the open set condition, whence $F(S)$ is a self-similar fractal. If $c_0 + c_1 > 1$ or $c_0 + c_2 > 1$ or $c_1 + c_2 > 1$, then S does not satisfy the open set condition.

The following quantity plays a central role in the theory of self-similar fractals.

Definition. The similarity dimension of an IFS $S = (S_0, \ldots, S_{k-1})$ with contraction ratios $c_0, \ldots, c_{k-1}$ is the (unique) solution $\mathrm{sdim}(S) = s \in [0, \infty)$ of the equation
\[ \sum_{i=0}^{k-1} c_i^{s} = 1. \]

If $F = F(S)$ is a self-similar fractal, where S is an IFS satisfying the open set condition, then the classical Hausdorff and packing dimensions of $F$ are known to coincide with the similarity dimension of S. (The fact that this holds for Hausdorff dimension was proven by Moran [38]. The fact that it holds for packing dimension was proven by Falconer [14]. As we shall see, both facts follow from our main theorem.) In particular, this implies that the following definition is unambiguous.

Definition. The similarity dimension of a self-similar fractal $F$ is the number $\mathrm{sdim}(F) = \mathrm{sdim}(S)$, where S is an IFS satisfying $F(S) = F$ and the open set condition.

It should be noted that some authors define a fractal to be self-similar if it is the attractor of any IFS S. With this terminology, i.e., in the absence of the open set condition, the Hausdorff and packing dimensions may be less than the similarity dimension.

6 Pointwise Analysis of Dimensions

In this section we prove our main theorem, which gives a precise analysis of the dimensions of individual points in computably self-similar fractals.
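Returning briefly to the similarity dimension defined above: since $s \mapsto \sum_{i=0}^{k-1} c_i^{s}$ is continuous and strictly decreasing, from $k \ge 2$ at $s = 0$ toward $0$ as $s \to \infty$, the equation has a unique root, which can be found numerically by bisection. A minimal sketch (the tolerance and initial bracket are our own choices):

```python
def similarity_dimension(ratios, tol=1e-12):
    """Solve sum(c_i ** s) == 1 for s by bisection.

    Each c_i lies in (0, 1), so s -> sum(c_i ** s) is strictly
    decreasing; it equals len(ratios) >= 2 at s = 0 and tends to 0
    as s -> infinity, so the root is unique.
    """
    assert len(ratios) >= 2 and all(0.0 < c < 1.0 for c in ratios)
    f = lambda s: sum(c ** s for c in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0.0:          # grow the bracket until f(hi) <= 0
        hi *= 2.0
    while hi - lo > tol:        # standard bisection on [lo, hi]
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# The familiar Sierpinski triangle (c_0 = c_1 = c_2 = 1/2) has
# similarity dimension log 3 / log 2.
s = similarity_dimension([0.5, 0.5, 0.5])
```

For the generalized Sierpinski triangle of Example 5.3 with unequal ratios, the same routine applies unchanged; only the list of ratios differs.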
We first recall the known fact that such fractals are computable. Definition. An IFS S = (S0 , . . . , Sk−1 ) is computable if dom(S) is a computable set and the functions S0 , . . . , Sk−1 are computable. Theorem 6.1 (Kamo and Kawamura [27]). For every computable IFS S, the attractor F (S) is a computable set. One consequence of Theorem 6.1 is the following. Corollary 6.2 For every computable IFS S, cdim(F (S)) = dimH (F (S)). Proof. Let S be a computable IFS. Then F (S) is compact, hence closed, and is computable by Theorem 6.1, so F (S) is computably closed by Observation 2.2. It follows by the correspondence principle for constructive dimension (Theorem 3.8) that cdim(F (S)) = dimH (F (S)). ✷ We next present three lemmas that we use in the proof of our main theorem. The first is a well-known geometric fact (e.g., it is Lemma 9.2 in [15]) whose proof is short enough to repeat here. Lemma 6.3 Let G be a collection of disjoint open sets in Rn , and let r, a, b ∈ (0, ∞). If every element of G contains a ball of radius ar and is contained in n a ball of radius br, then no ball of radius r meets more than 1+2b of the a closures of the elements of G. Proof. Assume the hypothesis, and let B be a ball of radius r. Let  M = G ∈ G B ∩ G 6= ∅ , and let m = |M|. Let B ′ be a closed ball that is concentric with B and has radius (1 + 2b)r. Then B ′ contains G for every G ∈ M. Since each G ∈ M 32 contains a ball BG of radius ar, and since these balls are disjoint, it follows that X volume(B ′ ) ≥ volume(BG ). G∈M This implies that [(1 + 2b)r]n ≥ m(ar)n , n whence m ≤ 1+2b . ✷ a Our second lemma gives a computable means of assigning rational “hubs” to the various open sets arising from a computable IFS satisfying the open set condition. Definition. A hub function for an IFS S = (S0 , . . . , Sk−1 ) satisfying the open set condition with G as witness is a function h : Σ∗k → Rn such that h(w) ∈ SG (w) for all w ∈ Σ∗k . 
In this case, we call h(w) the hub that h assigns to the set SG (w). Lemma 6.4 If S = (S0 , . . . , Sk−1 ) is a computable IFS satisfying the open set condition with G as witness, then there is an exactly computable, rationalvalued hub function h : Σ∗k → Qn for S and G. Proof. Assume the hypothesis. From oracle Turing machines computing S0 , . . . , Sk−1 , it is routine to construct an oracle Turing machine M computing the function S̃ : dom(S) × Σ∗k → dom(S) defined by the recursion S̃(x, λ) = x, S̃(x, iw) = Si (S̃(x, w)) for all x ∈ dom(S), w ∈ Σ∗k , and i ∈ Σk . Fix a rational point q ∈ G ∩ Qn , and let Cq be the oracle that returns the value q on all queries, noting that |M Cq (w, r) − S̃(q, w)| ≤ 2−r (6.1) holds for all w ∈ Σ∗k and r ∈ N. Fix l ∈ Z+ large enough to satisfy the following conditions. (i) G contains the closed ball of radius 2−l about q. (ii) For each i ∈ Σk , 2−l ≤ ci , where ci is the contraction ratio of Si . 33 Then a routine induction shows that, for each w ∈ Σ∗k , SG (w) contains the closed ball of radius 2−l(1+|w|) about S̃(q, w). It follows by (6.1) that the function h : Σ∗k → Qn defined by h(w) = M Cq (w, l(1 + |w|)) is a hub function for S and G. It is clear that h is rational-valued and exactly computable. ✷ Iterated function systems induce probability measures on alphabets in the following manner. Definition. The similarity probability measure of an IFS S = (S0 , . . . , Sk−1 ) with contraction ratios c0 , . . . , ck−1 is the probability measure πS on the alphabet Σk defined by sdim(S) πS (i) = ci for all i ∈ Σk . For w ∈ Σ∗k , we use the abbreviation IS (w) = IπS (w). Our third lemma provides a decidable set of well-behaved “canonical prefixes” of sequences in Σ∞ k . Lemma 6.5 Let S = (S0 , . . . , Sk−1 ) be a computable IFS, and let cmin be the minimum of the contraction ratios of S = (S0 , . . . , Sk−1 ). 
For any real number 1 , (6.2) α > sdim(S) log cmin there exists a decidable set A ⊆ N × Σ∗k such that, for each r ∈ N, the set Ar = {w ∈ Σ∗k | (r, w) ∈ A} has the following three properties. (i) No element of Ar is a proper prefix of any element of Ar′ for any r′ ≤ r. (ii) Each sequence in Σ∞ k has a (unique) prefix in Ar . (iii) For all w ∈ Ar , r · sdim(S) < IS (w) < r · sdim(S) + α. (6.3) Proof. Let S, cmin , and α be as given, and let c0 , . . . , ck−1 be the contraction ratios of S0 , . . . , Sk−1 , respectively. Let cmax = max{c0 , . . . , ck−1 }, and let 34 1 δ = 21 min{cmin , 1 − cmax }, noting that δ ∈ (0, 2k ]. Since S is computable, there is, for each i ∈ Σk , an exactly computable function b ci : N → Q ∩ [δ, 1 − δ] such that, for all t ∈ N, |b ci (t) − ci | ≤ 2−t . (6.4) For all T ∈ Σ∞ k and l, t ∈ N, we have l−1 Y i=0 = b cT [i] (t + i) − l−1 X i=0 − = i=0 cT [j] j=0 i Y cT [j] ! ! cT [j] j=0 i=0 l−1 Y j=i l−1 Y j=i+1 ! l−1 Y j=i+1 Since each |pi | ≤ 1, it follows by (6.4) that l−1 Y i=0 cT [i] ! b cT [j] (t + j) !# b cT [j] (t + j) (b cT [i] (t + i) − cT [i] )pi , i−1 Y pi = i−1 Y j=0 l−1 X where " l−1 Y b cT [i] (t + i) − l−1 Y ! b cT [j] (t + j) . cT [i] < 21−t (6.5) i=0 holds for all T ∈ Σ∞ k and l, t ∈ N. By (6.2), we have 2−α/sdim(S) /cmin < 1, so we can fix m ∈ Z+ such that 21−m < 1 − 2−α/sdim(S) /cmin . For each T ∈ Σ∞ k and r ∈ N, let ( ) l−1 Y lr (T ) = min l ∈ N b cT [i] (r + m + i + 1) ≤ 2−r − 2−(r+m) , i=0 35 (6.6) and let A = {(r, T [0..lr (T )]) | T ∈ Σ∞ k and r ∈ N } . Since the functions b c0 , . . . , b ck−1 are rational-valued and exactly computable, the set A is decidable. It is clear that each Ar has properties (i) and (ii). Let r ∈ N. To see that Ar has property (iii), let w ∈ Ar . Let l = |w|, and fix T ∈ Σ∞ k such that l = lr (T ) and w = T [0..l − 1]. By the definition of lr (T ) and (6.5), we have l−1 Y cw[i] < 2−r , i=0 which implies that IS (w) > r · sdim(S). 
(6.7) If l > 0, then the minimality of lr (T ) tells us that l−2 Y i=0 b cw[i] (r + m + i + 1) > 2−r − 2−(r+m) . It follows by (6.5) and (6.6) that l−2 Y i=0 cw[i] > 2−r − 21−(r+m) = 2−r (1 − 21−m ) > 2−(r+α/sdim(S)) /cmin , whence l−1 Y i=0 cw[i] > cw[l−1] −(r+α/sdim(S)) 2 cmin ≥ 2−(r+α/sdim(S)) . This implies that πS (w) > 2−(r·sdim(S)+α) . (6.8) If l = 0, then πS (w) = 1, so (6.8) again holds. Hence, in any case, we have IS (w) < r · sdim(S) + α. By (6.7) and (6.9), Ar has property (iii). 36 (6.9) ✷ Our main theorem concerns the following type of fractal. Definition. A computably self-similar fractal is a set F ⊆ Rn that is the attractor of an IFS that is computable and satisfies the open set condition. Most self-similar fractals occurring in the literature are, in fact, computably self-similar. For instance, let F be a generalized Sierpinski triangle with contraction ratios c0 , c1 , c2 ∈ (0, 1), defined as in Example 5.3. As we have noted, F is self-similar if c0 + c1 ≤ 1, c0 + c2 ≤ 1, and c1 + c2 ≤ 1. It is easy to see that F is computably self-similar if c0 , c1 , and c2 are also computable real numbers. We now have the machinery to give a complete analysis of the dimensions of points in computably self-similar fractals. Theorem 6.6 (main theorem). If F ⊆ Rn is a computably self-similar fractal and S is an IFS testifying this fact, then, for all points x ∈ F and all S-codes T of x, dim(x) = sdim(F )dimπS (T ) (6.10) and Dim(x) = sdim(F )DimπS (T ). (6.11) Proof. Assume the hypothesis, with S = (S0 , . . . , Sk−1 ). Let c0 , . . . , ck−1 be the contraction ratios of S0 , . . . , Sk−1 , respectively, and let G be a witness to the fact that S satisfies the open set condition, and let l = max{0, ⌈log diam(G)⌉}. Let h : Σ∗k → Qn be an exactly computable, rational-valued hub function for S and G as given by Lemma 6.4. Let α = 1 1 + sdim(F ) log cmin , for cmin = min{c0 , . . . , ck−1 }, and choose a decidable set A for S and α as in Lemma 6.5. 
For all w ∈ Σ∗k , we have diam(SG (w)) = diam(G) |w|−1 Y cw[i] i=0 1 = diam(G)πS (w) sdim(F ) . It follows by (6.3) that, for all r ∈ N and w ∈ Ar+l , 2−r a1 ≤ diam(SG (w)) ≤ 2−r , where a1 = 2−(l+ sdim(F ) ) diam(G). α 37 (6.12) Let x ∈ F , and let T ∈ Σ∞ k be an S-code of x, i.e., S(T ) = x. For each r ∈ N, let wr be the unique element of Ar+l that is a prefix of T . Much of this proof is devoted to deriving a close relationship between the Kolmogorov complexities Kr (x) and K(wr ). Once we have this relationship, we will use it to prove (6.10) and (6.11). Since the hub function h is computable, there is a constant a2 such that, for all w ∈ Σ∗k , K(h(w)) ≤ K(w) + a2 . (6.13) Since h(wr ) ∈ SG (wr ) and x = S(T ) ∈ SG (wr ) = SG (wr ), (6.12) tells us that |h(wr ) − x| ≤ diam(SG (wr )) ≤ 2−r , whence Kr (x) ≤ K(h(wr )) for all r ∈ N. It follows by (6.13) that Kr (x) ≤ K(wr ) + a2 (6.14) for all r ∈ N. Combining (6.14) and the right-hand inequality in (6.3) gives K(wr ) + a2 Kr (x) ≤ r · sdim(F ) IS (wr ) − α (6.15) for all r ∈ N. Let E be the set of all triples (q, r, w) such that q ∈ Qn , r ∈ N, w ∈ Ar+l , and |q − h(w)| ≤ 21−r . (6.16) Since the set A and the condition (6.16) are decidable, the set E is decidable. For each q ∈ Qn and r ∈ N, let Eq,r = {w ∈ Σ∗k | (q, r, w) ∈ E } . We prove two key properties of the sets Eq,r . First, for all q ∈ Qn and r ∈ N, |q − x| ≤ 2−r ⇒ wr ∈ Eq,r . (6.17) To see that this holds, assume that |q−x| ≤ 2−r . Since x = S(T ) ∈ SG (wr ) = SG (wr ), the right-hand inequality in (6.12) tells us that |q − h(wr )| ≤ |q − x| + |x − h(wr )| ≤ 2−r + diam(SG (wr )) ≤ 21−r , 38 confirming (6.17). The second key property of the sets Eq,r is that they are small, namely, that |Eq,r | ≤ γ (6.18) holds for all q ∈ Qn and r ∈ N, where γ is a constant that does not depend on q or r. To see this, let w ∈ Eq,r . Then w ∈ Ar+l and |q − h(w)| ≤ 21−r , so h(w) ∈ SG (w) ∩ B(q, 21−r ). This argument establishes that w ∈ Eq,r ⇒ B(q, 21−r ) meets SG (w). 
(6.19) Now let Gr = {SG (w) | w ∈ Ar+l } . By our choice of G, Gr is a collection of disjoint open sets in Rn . By the right-hand inequality in (6.12), each element of Gr is contained in a closed ball of radius 2−r . Since G is open, it contains a closed ball of some radius a3 > 0. It follows by the left-hand inequality in (6.12) that SG (w), being a a1 a3 contraction of G, contains a closed ball of radius 21−r a4 , where a4 = 2diam(G) . 1−r By Lemma 6.3, this implies that B(q, 2 ) meets  no more than γ of the n (closures of the) elements of Gr , where γ = a24 . By (6.19), this confirms (6.18). Now let M be a prefix Turing machine with the following property. If U (π) = q ∈ Qn (where U is the universal prefix Turing machine), sr is the rth string in a standard enumeration s0 , s1 , . . . of {0, 1}∗ , and 0 ≤ m < |Eq,r |, then M (π0|sr | 1sr 0m 1) is the mth element of Eq,r . There is a constant a5 such that, for all w ∈ Σ∗k , K(w) ≤ KM (w) + a5 . (6.20) Taking π to be a program testifying to the value of Kr (x) and applying (6.17) and (6.18) shows that KM (wr ) ≤ = ≤ ≤ |π0|sr | 1sr 0m 1| Kr (x) + 2|sr | + m + 2 Kr (x) + 2 log(r + 1) + |Eq,r | + 1 Kr (x) + 2 log(r + 1) + γ + 1, whence (6.20) tells us that K(wr ) ≤ Kr (x) + ǫ(r) 39 (6.21) for all r ∈ N, where ǫ(r) = 2 log(r + 1) + a5 + γ + 1. Combining (6.21) and the left-hand inequality in (6.3) gives Kr (x) K(wr ) − ǫ(r) ≥ r · sdim(F ) IS (wr ) (6.22) for all r ∈ N. Note that ǫ(r) = o(IS (wr )) as r → ∞. By (6.15) and (6.22), we now have K(wr ) − ǫ(r) Kr (x) K(wr ) + a2 ≤ ≤ IS (wr ) r · sdim(F ) IS (wr ) − α (6.23) for all r ∈ N. In order to use this relationship between Kr (x) and K(wr ), r) for r ∈ N is the we need to know that the asymptotic behavior of IK(w S (wr ) for arbitrary prefixes w of T . Our same as the asymptotic behavior of IK(w) S (w) verification of this fact makes repeated use of the additivity of IS , by which we mean that IS (uv) = IS (u) + IS (v) (6.24) holds for all u, v ∈ Σ∗k . 
Let r ∈ N, and let w_r ⊑ w ⊑ w_{r+1}, writing w = w_r u and w_{r+1} = wv. Then (6.24) tells us that I_S(w_r) ≤ I_S(w) ≤ I_S(w_{r+1}), and (6.3) tells us that I_S(w_{r+1}) − I_S(w_r) ≤ sdim(F) + α, so we have

\[
I_S(w_r) \le I_S(w) \le I_S(w_r) + a_6, \tag{6.25}
\]

where a_6 = \mathrm{sdim}(F) + \alpha. We also have

\[
\begin{aligned}
a_6 &\ge I_S(w_{r+1}) - I_S(w_r) \\
&= I_S(uv) \\
&= \log\frac{1}{\pi_S(uv)} \\
&\ge \log c_{\min}^{-\mathrm{sdim}(F)|uv|} \\
&= |uv|\,\mathrm{sdim}(F)\log\frac{1}{c_{\min}},
\end{aligned}
\]

i.e.,

\[
|w_{r+1}| - |w_r| \le a_7, \tag{6.26}
\]

where a_7 = \frac{a_6}{\mathrm{sdim}(F)\log\frac{1}{c_{\min}}}. Since (6.26) holds for all r ∈ N and a_7 does not depend on r, there is a constant a_8 such that, for all r ∈ N and w_r ⊑ w ⊑ w_{r+1},

\[
|K(w) - K(w_r)| \le a_8. \tag{6.27}
\]

It follows by (6.25) that

\[
\frac{K(w_r)-a_8}{I_S(w_r)+a_6} \;\le\; \frac{K(w)}{I_S(w)} \;\le\; \frac{K(w_r)+a_8}{I_S(w_r)} \tag{6.28}
\]

holds for all r ∈ N and w_r ⊑ w ⊑ w_{r+1}. By (6.23), (6.28), Theorem 4.5, and Theorem 4.1, we now have

\[
\begin{aligned}
\dim(x) &= \liminf_{r\to\infty}\frac{K_r(x)}{r} \\
&= \mathrm{sdim}(F)\liminf_{r\to\infty}\frac{K(w_r)}{I_S(w_r)} \\
&= \mathrm{sdim}(F)\liminf_{j\to\infty}\frac{K(T[0..j-1])}{I_S(T[0..j-1])} \\
&= \mathrm{sdim}(F)\dim^{\pi_S}(T)
\end{aligned}
\]

and

\[
\begin{aligned}
\mathrm{Dim}(x) &= \limsup_{r\to\infty}\frac{K_r(x)}{r} \\
&= \mathrm{sdim}(F)\limsup_{r\to\infty}\frac{K(w_r)}{I_S(w_r)} \\
&= \mathrm{sdim}(F)\limsup_{j\to\infty}\frac{K(T[0..j-1])}{I_S(T[0..j-1])} \\
&= \mathrm{sdim}(F)\,\mathrm{Dim}^{\pi_S}(T),
\end{aligned}
\]

i.e., (6.10) and (6.11) hold.

Finally, we use relativization to derive the following well-known classical theorem from our main theorem.

Corollary 6.7 (Moran [38], Falconer [14]). For every self-similar fractal F ⊆ R^n,

\[
\dim_H(F) = \mathrm{Dim}_P(F) = \mathrm{sdim}(F).
\]

Proof. Let F ⊆ R^n be self-similar. Then there is an IFS S satisfying F(S) = F and the open set condition. For any such S, there is an oracle A ⊆ {0, 1}∗ relative to which S is computable. We then have

\[
\begin{aligned}
\dim_H(F) &\le \mathrm{Dim}_P(F) \\
&= \mathrm{Dim}^A_P(F) \\
&\le \mathrm{cDim}^A(F) \\
&= \sup_{x\in F}\mathrm{Dim}^A(x) \\
&\overset{(a)}{=} \mathrm{sdim}(F)\sup_{T\in\Sigma^\infty_k}\mathrm{Dim}^{\pi_S,A}(T) \\
&= \mathrm{sdim}(F) \\
&= \mathrm{sdim}(F)\sup_{T\in\Sigma^\infty_k}\dim^{\pi_S,A}(T) \\
&\overset{(b)}{=} \sup_{x\in F}\dim^A(x) \\
&= \mathrm{cdim}^A(F) \\
&\overset{(c)}{=} \dim^A_H(F) \\
&= \dim_H(F),
\end{aligned}
\]

which implies the corollary. Equalities (a) and (b) hold by Theorem 6.6, relativized to A. Equality (c) holds by Corollary 6.2, relativized to A. ✷
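Corollary 6.7 identifies the Hausdorff and packing dimensions of F with its similarity dimension, the unique s ≥ 0 satisfying Moran's equation ∑_i c_i^s = 1 over the contraction ratios. As a quick illustration (ours, not the paper's), the following sketch solves Moran's equation by bisection; the function name `sdim` and the two test cases are illustrative.

```python
import math

def sdim(ratios, tol=1e-12):
    """Similarity dimension of an IFS with contraction ratios 0 < c < 1:
    the unique s >= 0 with sum(c**s for c in ratios) == 1 (Moran's equation).
    s -> sum(c**s) is strictly decreasing, so bisection applies."""
    f = lambda s: sum(r ** s for r in ratios)
    lo, hi = 0.0, 1.0
    while f(hi) > 1:      # grow the bracket until f(hi) <= 1
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Middle-thirds Cantor set: two maps of ratio 1/3 -> log 2 / log 3.
assert math.isclose(sdim([1/3, 1/3]), math.log(2) / math.log(3), abs_tol=1e-9)

# Sierpinski triangle: three maps of ratio 1/2 -> log 3 / log 2.
assert math.isclose(sdim([1/2, 1/2, 1/2]), math.log(3) / math.log(2), abs_tol=1e-9)
```

Bisection suffices because s ↦ ∑_i c_i^s is continuous and strictly decreasing from the number of maps (at s = 0) toward 0, so it crosses 1 exactly once.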
7 Conclusion

Our main theorem gives a complete analysis of the dimensions of points in computably self-similar fractals. It will be interesting to see whether larger classes of fractals are also amenable to such pointwise analyses of dimension.

Acknowledgments

The first author thanks Dan Mauldin for useful discussions. We thank Xiaoyang Gu for pointing out that dim^ν_H is Billingsley dimension, and we thank the referees for careful reading and several corrections and improvements.

References

[1] K. B. Athreya, J. M. Hitchcock, J. H. Lutz, and E. Mayordomo. Effective strong dimension in algorithmic information and computational complexity. SIAM Journal on Computing, 37:671–705, 2007.

[2] M. F. Barnsley. Fractals Everywhere. Morgan Kaufmann, 1993.

[3] P. Billingsley. Hausdorff dimension in probability theory. Illinois Journal of Mathematics, 4:187–209, 1960.

[4] V. Brattka and K. Weihrauch. Computability on subsets of Euclidean space I: Closed and compact subsets. Theoretical Computer Science, 219:65–93, 1999.

[5] M. Braverman. On the complexity of real functions. In Proceedings of the Forty-Sixth Annual IEEE Symposium on Foundations of Computer Science, pages 155–164, 2005.

[6] M. Braverman and S. Cook. Computing over the reals: Foundations for scientific computing. Notices of the AMS, 53(3):318–329, 2006.

[7] M. Braverman and M. Yampolsky. Constructing non-computable Julia sets. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, pages 709–716, 2007.

[8] J. Cai and J. Hartmanis. On Hausdorff and topological dimensions of the Kolmogorov complexity of the real line. Journal of Computer and System Sciences, 49:605–619, 1994.

[9] H. Cajar. Billingsley dimension in probability spaces. Springer Lecture Notes in Mathematics, 892, 1981.

[10] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., New York, NY, 1991.

[11] J. J. Dai, J. I. Lathrop, J. H. Lutz, and E. Mayordomo. Finite-state dimension.
Theoretical Computer Science, 310:1–33, 2004.

[12] G. A. Edgar. Integral, Probability, and Fractal Measures. Springer-Verlag, 1998.

[13] K. Falconer. The Geometry of Fractal Sets. Cambridge University Press, 1985.

[14] K. Falconer. Dimensions and measures of quasi self-similar sets. Proceedings of the American Mathematical Society, 106:543–554, 1989.

[15] K. Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons, 2003.

[16] S. A. Fenner. Gales and supergales are equivalent for defining constructive Hausdorff dimension. Technical Report cs.CC/0208044, Computing Research Repository, 2002.

[17] H. Fernau and L. Staiger. Iterated function systems and control languages. Information and Computation, 168:125–143, 2001.

[18] L. Fortnow, T. Lee, and N. Vereshchagin. Kolmogorov complexity with error. In Proceedings of the 23rd Symposium on Theoretical Aspects of Computer Science, volume 3884 of Lecture Notes in Computer Science, pages 137–148. Springer-Verlag, 2006.

[19] A. Grzegorczyk. Computable functionals. Fundamenta Mathematicae, 42:168–202, 1955.

[20] X. Gu, J. H. Lutz, and E. Mayordomo. Points on computable curves. In Proceedings of the Forty-Seventh Annual IEEE Symposium on Foundations of Computer Science, pages 469–474, 2006.

[21] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In Proceedings of the Forty-Fourth Annual IEEE Symposium on Foundations of Computer Science, pages 534–543, 2003.

[22] F. Hausdorff. Dimension und äußeres Maß. Mathematische Annalen, 79:157–179, 1919.

[23] P. Hertling. Is the Mandelbrot set computable? Mathematical Logic Quarterly, 51:5–18, 2005.

[24] J. M. Hitchcock. Gales suffice for constructive dimension. Information Processing Letters, 86(1):9–12, 2003.

[25] J. M. Hitchcock. Correspondence principles for effective dimensions. Theory of Computing Systems, 38:559–571, 2005.

[26] J. M. Hitchcock. Effective fractal dimension bibliography. http://www.cs.uwyo.edu/˜jhitchco/bib/dim.shtml.
[27] H. Kamo and K. Kawamura. Computability of self-similar sets. Mathematical Logic Quarterly, 45:23–30, 1999.

[28] K. Ko. Complexity Theory of Real Functions. Birkhäuser, Boston, 1991.

[29] D. Lacombe. Extension de la notion de fonction récursive aux fonctions d'une ou plusieurs variables réelles, and other notes. Comptes Rendus, 240:2478–2480; 241:13–14, 151–153, 1250–1252, 1955.

[30] P. Lévy. Théorie de l'Addition des Variables Aléatoires. Gauthier-Villars, 1937 (second edition 1954).

[31] M. Li and P. M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications. Springer-Verlag, Berlin, 1997. Second edition.

[32] J. H. Lutz. Almost everywhere high nonuniform complexity. Journal of Computer and System Sciences, 44(2):220–258, 1992.

[33] J. H. Lutz. Resource-bounded measure. In Proceedings of the 13th IEEE Conference on Computational Complexity, pages 236–248, 1998.

[34] J. H. Lutz. Dimension in complexity classes. SIAM Journal on Computing, 32:1236–1259, 2003.

[35] J. H. Lutz. The dimensions of individual strings and sequences. Information and Computation, 187:49–79, 2003.

[36] P. Martin-Löf. The definition of random sequences. Information and Control, 9:602–619, 1966.

[37] E. Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Information Processing Letters, 84(1):1–3, 2002.

[38] P. A. Moran. Additive functions of intervals and Hausdorff dimension. Proceedings of the Cambridge Philosophical Society, 42:5–23, 1946.

[39] M. B. Pour-El and J. I. Richards. Computability in Analysis and Physics. Springer-Verlag, 1989.

[40] R. Rettinger and K. Weihrauch. The computational complexity of some Julia sets. In Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, pages 177–185, 2003.

[41] C. P. Schnorr. A unified approach to the definition of random sequences. Mathematical Systems Theory, 5:246–258, 1971.

[42] C. P. Schnorr. Zufälligkeit und Wahrscheinlichkeit. Lecture Notes in Mathematics, 218, 1971.

[43] C. P.
Schnorr. A survey of the theory of random sequences. In R. E. Butts and J. Hintikka, editors, Basic Problems in Methodology and Linguistics, pages 193–210. D. Reidel, 1977.

[44] L. Staiger. A tight upper bound on Kolmogorov complexity and uniformly optimal prediction. Theory of Computing Systems, 31:215–229, 1998.

[45] L. Staiger. The Kolmogorov complexity of real numbers. Theoretical Computer Science, 284:455–466, 2002.

[46] L. Staiger. Constructive dimension equals Kolmogorov complexity. Information Processing Letters, 93:149–153, 2005.

[47] D. Sullivan. Entropy, Hausdorff measures old and new, and limit sets of geometrically finite Kleinian groups. Acta Mathematica, 153:259–277, 1984.

[48] C. Tricot. Two definitions of fractional dimension. Mathematical Proceedings of the Cambridge Philosophical Society, 91:57–74, 1982.

[49] K. Vereshchagin and P. M. B. Vitányi. Algorithmic rate-distortion function. In Proceedings of the IEEE International Symposium on Information Theory, pages 798–802, 2006.

[50] J. Ville. Étude Critique de la Notion de Collectif. Gauthier-Villars, Paris, 1939.

[51] H. Wegmann. Über den Dimensionsbegriff in Wahrscheinlichkeitsräumen von P. Billingsley I und II. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 9:216–221 and 222–231, 1968.

[52] K. Weihrauch. Computable Analysis: An Introduction. Springer-Verlag, 2000.