
V-variable fractals and superfractals

2003, arXiv (Cornell University)


V-variable fractals and superfractals

Michael Barnsley†, John E Hutchinson† and Örjan Stenflo‡
† Department of Mathematics, Australian National University, Canberra, ACT 0200, Australia
‡ Department of Mathematics, Stockholm University, SE-10691, Stockholm, Sweden

Abstract. Deterministic and random fractals, within the framework of Iterated Function Systems, have been used to model and study a wide range of phenomena across many areas of science and technology. However, for many applications deterministic fractals are locally too similar near distinct points, while standard random fractals have too little local correlation. Random fractals are also slow and difficult to compute. We construct V-variable fractals, which have the property that at each level of decomposition there are at most V distinct components up to a natural equivalence relation. We show that V-variable fractals are elements of a superfractal, for which there is a fast forward algorithm. Finally we show that V-variable fractals approximate standard random fractals for large V, and thereby obtain a fast forward algorithm for obtaining standard random fractals and their natural probability distribution to within any prescribed degree of approximation. The main ideas are developed by means of examples with the intention of being accessible to a wide readership.

AMS classification scheme numbers: 28A80, 60D05, 28A78

1. Introduction

We introduce the class of V-variable fractals. This solves two problems previously restricting further applications of Iterated Function System (IFS) generated fractals: the absence of fine control on local variability, and the absence of a fast algorithm for computing and accurately sampling standard random fractals. In order to make the ideas clearer and hopefully facilitate the application of these notions to nonmathematical areas, except in Section 5 we have deliberately avoided technicalities and illustrated the ideas by means of a number of model examples, but the general construction and results should be clear. A careful mathematical treatment of the material here, together with a number of generalisations and further developments, is given in Barnsley, Hutchinson and Stenflo (2003b).

The integer parameter V in "V-variable" controls the number of distinct shapes or forms at each level of magnification (Figures 4, 6 and the discussion at the end of Section 3). The case V = 1 includes standard deterministic fractals (Figures 1, 2, also the fern and lettuce in Figure 4) generated by a single IFS, and homogeneous random fractals (c.f. Hambly 1992, 2000 and Stenflo 2001). See Section 3 for a description of the construction in a particular case with V = 5, and note that the parameter V equals the number of buffers used in the construction. In Section 6 we construct and discuss examples of 2-variable fractals. In Section 5 we define the class of V-variable fractals, and in (4) of the theorem in that section we justify the terminology "V-variable".

The construction of V-variable fractals is by means of a fast Markov Chain Monte Carlo type algorithm. See Section 6 for an example of the algorithm. Large V allows rapid approximations to standard random fractals in a quantifiable manner, and the approximation is not to a single example but to a potentially infinite sequence of correctly distributed examples. See parts (5) and (6) of the theorem in Section 5 and the remarks following the proof of that result.
In particular, one can approximate standard random fractals, together with their associated probability distribution, by means of a fast forward algorithm. A surprising but important fact is that each family of V-variable fractals, together with its naturally associated probability distribution, forms a single superfractal generated by a single superIFS, operating not on points in the plane (for example) as for a standard IFS but on V-tuples of images. See Section 4 and the theorem in Section 5.

Dimensions of V-variable fractals are computable using products of random matrices and ideas from statistical mechanics. We implement a Monte Carlo method for this purpose, see Section 7.

Since the mathematically natural notion of a V-variable fractal solves two major problems previously restricting wider applications, we anticipate that V-variable fractals should lead to further developments and applications of fractal models. For example, IFSs provide models for certain plants, leaves and ferns by virtue of the self-similarity which often occurs in branching structures in nature. But nature also exhibits randomness and variation from one level to the next — no two ferns are exactly alike, and the branching fronds become leaves at a smaller scale. V-variable fractals allow for such randomness and variability across scales, while at the same time admitting a continuous dependence on parameters which facilitates geometrical modelling. These factors allow us to make the hybrid biological models in Figures 4 and 9. Because of the underlying special code trees (Barnsley, Hutchinson and Stenflo 2003a), which provide the fundamental information-theoretic basis of V-variable fractals, we speculate that when a V-variable geometrical fractal model is found that has a good match to the geometry of a given plant, then there is a specific relationship between these code trees and the information stored in the genes of the plant.

In Barnsley, Hutchinson and Stenflo (2003a) we survey the classical properties of IFSs, develop the underlying theory of V-variable code trees, and establish general existence and other properties for V-variable fractals and superfractals. In Barnsley, Hutchinson and Stenflo (2003b) we give a fairly extensive mathematical treatment of the theory and generalisations, and in particular prove the dimension results for V-variable fractals for which we here give an informal justification.

2. Iterated Function Systems

By way of background we first recall the concept of an IFS via the canonical example of the Sierpinski triangle S (approximated in the bottom right panel of Figure 1), which has been studied extensively (see Falconer 1990, 1997 and Hambly 1992, 2000) both mathematically and as a model for diffusion processes through disordered and highly porous material. The set S has three components, each of which is a scaled image of itself; each of these components has three sub-components, giving nine scaled images of S at the next scale, and so on ad infinitum. A simple observation is that if f1, f2, f3 are contractions of space by the factor 1/2 with fixed points given by the three vertices A1, A2, A3 respectively of S, then the three major sub-components of S are f1(S), f2(S), f3(S) respectively and

    S = f1(S) ∪ f2(S) ∪ f3(S).    (1)

Figure 1. Convergent or Backward Process. Beginning from any set (fish) T0, iterates T1 = F(T0), T2 = F(T1), . . . converge to the Sierpinski Triangle S. Shown are iterates T0, T1, T2, T3, T4, T8.

The collection of maps F = (f1, f2, f3) is called an Iterated Function System (or IFS). For any set T one similarly defines F(T) = f1(T) ∪ f2(T) ∪ f3(T). It is standard that S is the unique compact set satisfying (1). Furthermore, beginning from any compact set T0, and for k ≥ 1 recursively defining Tk = F(Tk−1), it follows that Tk converges to S in the Hausdorff metric as k → ∞, independently of the initial set T0 (Figure 1). For this reason S is called the fractal set attractor of the IFS F, and this approximation method is called the convergent or backward process, c.f. Hutchinson (1981).
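To make the backward process concrete, here is a minimal sketch in Python (not from the paper; it assumes numpy, represents a compact set by a finite point cloud, and places the triangle vertices at an arbitrary illustrative choice):

    import numpy as np

    # The three maps of the IFS F: contraction by 1/2 towards each vertex of the triangle.
    VERTICES = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

    def F(points):
        """The set map F(T) = f1(T) ∪ f2(T) ∪ f3(T), with T approximated by a point cloud."""
        return np.vstack([0.5 * points + 0.5 * v for v in VERTICES])

    T = np.random.rand(200, 2)   # an arbitrary compact starting set T0 (the "fish")
    for _ in range(8):           # T1, T2, ..., T8 as in Figure 1
        T = F(T)
    # T now lies within roughly (1/2)^8 of the Sierpinski triangle S in the Hausdorff metric.

Each iteration triples the number of points and halves the distance to S, matching the exponential convergence of the backward process.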
An alternative approach to generating S is by a chaotic or forward process (Barnsley and Demko 1985), sometimes called the chaos game (Figure 2). Begin from any point x0 in the plane and recursively define xk = f_{bk}(xk−1), where each f_{bk} is chosen independently and with equal probability from (f1, f2, f3). With probability one the sequence of points (xk)k≥0 approaches, and moves ergodically around and increasingly closer to, the attractor S. For this reason F is called an iterated function system. If instead the f_{bk} are selected from (f1, f2, f3) with probabilities (p1, p2, p3) respectively, where each pi > 0 and p1 + p2 + p3 = 1, then the same set S is determined by the sequence (xk)k≥0, but now the points accumulate unevenly, and the resulting measure attractor can be thought of as a greyscale image on S, or probability distribution on S, or more precisely as a measure. In this case (f1, f2, f3; p1, p2, p3) is called an IFS with weights.

Figure 2. Chaotic or Forward Process. Beginning from any initial point and randomly and independently applying f1, f2 or f3 produces the Sierpinski Triangle as attractor with probability one. Shown are the first 10,000 and 100,000 points respectively.
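A minimal Python sketch of the chaos game just described (illustrative only; equal probabilities 1/3 each, as in the text, and the same vertex placement as in the previous sketch):

    import numpy as np

    VERTICES = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
    rng = np.random.default_rng(0)

    def chaos_game(n_points, probs=(1 / 3, 1 / 3, 1 / 3)):
        """Forward process x_k = f_{b_k}(x_{k-1}), with b_k chosen iid from the weights."""
        x = rng.random(2)                       # arbitrary starting point x0
        points = np.empty((n_points, 2))
        for k in range(n_points):
            i = rng.choice(3, p=probs)          # pick f1, f2 or f3
            x = 0.5 * x + 0.5 * VERTICES[i]     # apply the chosen contraction
            points[k] = x
        return points                           # accumulates on (and near) the attractor S

    pts = chaos_game(100_000)                   # cf. the 100,000 points of Figure 2

Unequal weights (p1, p2, p3) in probs give the same attractor S but a non-uniform density of points, i.e. the measure attractor of the IFS with weights.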
These ideas and results naturally extend to general families of contraction maps and probabilities. Even with a few affine or projective transformations, one can construct natural looking images (see the initial fern and lettuce in Figure 4). IFSs have been extended to study the notion of random fractals. See Falconer (1986), Graf (1987) and Mauldin and Williams (1986); also Hutchinson and Rüschendorf (1998, 2000), where the idea of an IFS operating directly in the underlying probability spaces is used.

3. Construction of V-variable fractals

We now proceed to the construction of V-variable fractals. This can be understood in the model situation of two IFSs F and G, with F as before and with G having the same fixed points A1, A2, A3 but with contraction ratios 1/3 instead of 1/2. We emphasise that the following construction is in no sense ad hoc, but is the natural chaotic or forward process for a superfractal whose members are V-variable fractals, as we discuss in Section 4 and establish in Section 5.

Figure 3. Forward algorithm for generating families of V-variable fractals. Shown are levels 1 (top), 2 and 3 in the construction of a potentially infinite sequence of 5-tuples of 5-variable Sierpinski Triangles. For each buffer from level 2 onwards, F or G indicates which IFS was used, and the input arrows indicate the buffers to which this IFS was applied.

One begins with arbitrary sets, one in each of V input buffers at level 1 (Figure 3, where V = 5, colour coded to indicate the maps involved, and Figure 6, where V = 2). A set in the first of V output buffers (level 2, Figure 3) is constructed as follows: choose an IFS F or G with the preassigned probabilities P^F or P^G respectively; then apply the chosen IFS to the contents of three buffers chosen randomly and independently with uniform probability from the input buffers at level 1, allowing the possibility that the same buffer is chosen more than once (thus one is performing uniform sampling with replacement). The resulting set is placed in the first buffer at level 2. The content of each of the remaining buffers at level 2 is constructed similarly and independently of the others at level 2. These output buffers then become the input buffers for the next step and the process is repeated, obtaining the bottom row in Figure 3, and so on.

The construction produces an arbitrarily long sequence of V-tuples of approximate V-variable fractals associated to the pair of IFSs (F, G) and the probabilities (P^F, P^G). The degree of approximation is soon, and thereafter remains, within screen resolution or machine tolerance (Figure 6). The empirically obtained distribution of V-variable fractals over any infinite sequence of runs is the same with probability one and is the natural distribution, as we explain later. The generalisation to the case of a family of IFSs (F^1, . . . , F^N) with associated probabilities (P^1, . . . , P^N) is straightforward; also, sets can be replaced by greyscale images, or more generally by coloured sets built up from primary colours if IFSs with weights are used (Figure 9).

Figure 4. 2-Variable Fractals. In the first "Squares and Shields" example, the number of components at each level of magnification is 1, 4, 16, 64, . . . , but at each level there are at most two distinct shapes/forms up to affine transformations. The actual shapes depend on the level. In the second example of fractal breeding, fern and lettuce fractal parents are shown with four possible 2-variable offspring. The two IFSs used are those generating the fern and the lettuce. The associated superfractal is the family of all possible offspring together with the naturally associated probability distribution.

The V-variability can be understood as follows. In Figure 3 the set in each buffer from level 2 onwards is composed of three component parts, each obtained from one of the V = 5 buffers at the previous level; at level 3 onwards sets are composed of 9 smaller component parts, each obtained from one of the V buffers two levels back; at level 4 onwards sets are composed of 27 smaller component parts, each obtained from one of the V buffers three levels back; etc. Thus at each level of magnification there are at most V distinct component parts up to rescaling. In general, for a V-variable fractal, although the number of components grows exponentially with the level of magnification, the number of distinct shapes or forms is at most V up to a suitable class of transformations (e.g. rescalings, affine or projective maps) determined by the component maps in each of the IFSs (F^1, . . . , F^N) (Figure 4 first panel, Figure 6). If parts of the V-variable fractals overlap each other, the V-variability is not so obvious, and fractal measures (greyscale or colour images) rather than fractal sets (black and white images) are then more natural to consider.
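The forward buffer algorithm of this section can be sketched in a few lines of Python (an illustration only, not the authors' code; sets are approximated by point clouds, the probabilities P^F = P^G = 1/2 and the subsampling cap are assumptions made only for the sketch):

    import numpy as np

    VERTICES = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

    def make_ifs(ratio):
        """Three contractions with the given ratio, fixed points at the vertices."""
        return [lambda p, v=v, r=ratio: r * p + (1 - r) * v for v in VERTICES]

    F, G = make_ifs(1 / 2), make_ifs(1 / 3)   # the two IFSs of this section
    P_F = 0.5                                  # probability of choosing F (assumed 1/2 here)
    V = 5
    MAX_PTS = 100_000                          # practical cap on points per buffer (illustrative)
    rng = np.random.default_rng(1)

    buffers = [rng.random((100, 2)) for _ in range(V)]   # arbitrary level-1 contents

    def next_level(buffers):
        """One step of the forward algorithm: for each output buffer choose an IFS,
        sample one input buffer per map (uniformly, with replacement) and take the
        union of the images."""
        out = []
        for _ in range(V):
            ifs = F if rng.random() < P_F else G
            sources = rng.integers(0, V, size=len(ifs))
            new = np.vstack([f(buffers[w]) for f, w in zip(ifs, sources)])
            if len(new) > MAX_PTS:                         # subsample to bound memory
                new = new[rng.choice(len(new), MAX_PTS, replace=False)]
            out.append(new)
        return out

    for _ in range(12):          # after roughly 12 levels the images are at screen resolution
        buffers = next_level(buffers)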
4. Superfractals

The limit probability distribution on the infinite family of V-variable fractals obtained by the previous construction is independent of the experimental run with probability one, as we explain at the end of this section and establish in Section 5. As noted previously, an initially surprising but basic fact is that this family of V-variable fractals and its probability distribution is a fractal in its own right, called a superfractal, and the construction process for the generated collection of V-variable fractals turns out to be the forward or chaotic process for this superfractal; see also Barnsley, Hutchinson and Stenflo (2003a). For large V, V-variable fractals approximate standard random fractals and their naturally associated probability distributions in a quantifiable manner, providing another justification for the canonical nature of the construction, see Barnsley, Hutchinson and Stenflo (2003a, b). Although there was previously no useful forward algorithm for standard random fractals such as Brownian sheets, one can now rapidly generate correctly distributed families of such fractals to any specified degree of approximation by using the previously described fast forward (Markov Chain Monte Carlo) algorithm for large V. The number of required operations typically grows linearly in V, as only sparse matrix type operations are required.

The superfractal idea can be understood as follows. The process of passing from the V-tuple of sets at one level of construction to the V-tuple at the next (Figures 3, 6) is given by a random function F^a : H^V → H^V, where H^V is the set of all V-tuples of compact subsets of [0, 1]^2 ⊂ R^2, or of some other compact metric space as appropriate, and where a belongs to some index set A. All information necessary to describe the chosen F^a at each level in Figure 3 is given by the chosen IFSs for each buffer at that level (namely G, F, G, F, F across level 2) and by the arrows pointing to each buffer at that level (indicating which three buffers at the previous level are used for each application of F or of G); see also the captions in Figure 6. Each F^a has a certain probability P^a of being chosen; this probability is induced in the natural manner from the probabilities P^F and P^G of selecting F or G. The F^a are contraction maps on H^V in the Hausdorff metric, with contraction ratio equal to 1/2, and in general the contraction ratio of F^a equals the maximum of the contraction ratios of the individual maps in the IFSs being used. In particular, the superIFS (F^a, P^a, a ∈ A) is an IFS operating not on points in R^2 as for a standard IFS but on V-tuples of sets in H^V. From IFS theory applied in this setting, there is a unique superfractal set and superfractal measure which with probability one is effectively given by the collection of V-tuples of V-variable fractals together with the experimentally obtained probability distribution arising from the previous construction. See Barnsley, Hutchinson and Stenflo (2003a) for detailed proofs.

5. Definitions and Proofs

In this section, which is independent of the rest of the paper and written in a more formal style, we provide definition, theorem and proof.

Suppose (X, d) is a compact metric space. For each n = 1, . . . , N let F^n = (f^n_1, . . . , f^n_M) be an IFS consisting of Lipschitz maps f^n_m : X → X. Let r^n_m := Lip f^n_m, i.e. d(f^n_m(x), f^n_m(y)) ≤ r^n_m d(x, y) for all x, y ∈ X. Assume r := max_{m,n} r^n_m < 1.
For a fixed integer V ≥ 1 let A_V be the set of all matrices

    a = [ I^a(1)  J^a(1, 1)  . . .  J^a(1, M) ]
        [  ...       ...     . . .     ...    ]        (2)
        [ I^a(V)  J^a(V, 1)  . . .  J^a(V, M) ]

where 1 ≤ I^a(v) ≤ N and 1 ≤ J^a(v, m) ≤ V.

Let H(X) be the set of compact subsets of X and let dH be the Hausdorff metric on H(X). Then (H(X), dH) is a compact metric space. We also use dH to denote the induced Hausdorff metric on H(H(X)) in (6) of the following theorem.

For 1 ≤ n ≤ N define F^n : H(X)^M → H(X) by

    F^n(K_1, . . . , K_M) = ∪_{m=1}^{M} f^n_m(K_m).

For a ∈ A_V define the buffer map F^a : H(X)^V → H(X)^V by

    F^a(K_1, . . . , K_V) := ( F^{I^a(1)}(K_{J^a(1,1)}, . . . , K_{J^a(1,M)}), . . . , F^{I^a(V)}(K_{J^a(V,1)}, . . . , K_{J^a(V,M)}) ).

Define the metric

    dH((K_1, . . . , K_V), (K'_1, . . . , K'_V)) = max_{1 ≤ v ≤ V} dH(K_v, K'_v).

Then (H(X)^V, dH) is a compact metric space. Moreover, it is immediate that each F^a is a contraction map with Lip F^a ≤ r < 1. Let F_V = (F^a, a ∈ A_V). Then F_V is an IFS with constituent maps F^a : H(X)^V → H(X)^V. (Recall for comparison that the original IFSs F^n had constituent maps f^n_m : X → X.)

Let H(H(X)^V) denote the set of compact collections of subsets of H(X)^V and let d*_H denote the Hausdorff metric on H(H(X)^V). Then (H(H(X)^V), d*_H) is itself a compact metric space, whose elements are compact collections of V-tuples of compact subsets of X. Since F_V is an IFS on H(X)^V we define in the usual manner F_V : H(H(X)^V) → H(H(X)^V) by

    F_V(K) = ∪_{a ∈ A_V} F^a(K) = ∪_{a ∈ A_V} { F^a(K_1, . . . , K_V) : (K_1, . . . , K_V) ∈ K }
           = { F^a(K_1, . . . , K_V) : a ∈ A_V, (K_1, . . . , K_V) ∈ K }

for any K ∈ H(H(X)^V).
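For readers who prefer code to notation, the following Python sketch (illustrative only; it uses 0-based indices rather than the 1-based indices of (2), and again approximates compact sets by point clouds) encodes a matrix a ∈ A_V and the corresponding buffer map F^a:

    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    import numpy as np

    PointSet = np.ndarray                          # a compact set, approximated by a finite point cloud
    IFS = Sequence[Callable[[PointSet], PointSet]]

    @dataclass
    class BufferRow:
        """One row of the matrix a in (2): the IFS index I^a(v) and the input-buffer
        indices J^a(v,1), ..., J^a(v,M) used to build output buffer v (0-based here)."""
        I: int
        J: Sequence[int]

    def apply_buffer_map(ifs_list: Sequence[IFS], a: Sequence[BufferRow],
                         K: Sequence[PointSet]) -> List[PointSet]:
        """The buffer map F^a: the v-th output set is F^{I^a(v)}(K_{J^a(v,1)}, ..., K_{J^a(v,M)}),
        i.e. the union over m of f^{I^a(v)}_m applied to the m-th selected input buffer."""
        out = []
        for row in a:
            maps = ifs_list[row.I]
            out.append(np.vstack([f(K[j]) for f, j in zip(maps, row.J)]))
        return out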
Definition. The pair F_V = (F^a, a ∈ A_V) is a superIFS. The unique compact attractor K*_V ⊂ H(X)^V is called a superfractal, as is its projection K_V onto any component (see the following theorem). Members of K_V are called V-variable fractals.

Theorem.
(1) There exists a unique K*_V ∈ H(H(X)^V) such that F_V(K*_V) = K*_V. For any K_0 ∈ H(H(X)^V), F_V^k(K_0) → K*_V in the d*_H sense as k → ∞, and d*_H(F_V^k(K_0), K*_V) ≤ r^k d*_H(K_0, K*_V).
(2) If π_v : H(X)^V → H(X) denotes the vth projection map, then K_V := π_v[K*_V] is independent of v.
(3) Each (K_1, . . . , K_V) ∈ K*_V can be written in the form
    (K_1, . . . , K_V) = lim_{k→∞} F^{a_1} ◦ · · · ◦ F^{a_k}(K^0_1, . . . , K^0_V)
for some (in fact many) (a_k)_{k≥1} ⊂ A_V. The limit is independent of (K^0_1, . . . , K^0_V). Any set (K_1, . . . , K_V) of this form belongs to K*_V.
(4) Let T be the tree of all finite sequences from {1, . . . , M}, including the empty sequence ∅ of length 0. Then for each K ∈ K_V and each k ≥ 1,
    K = ∪_{m_1,...,m_k=1}^{M} f^{η(∅)}_{m_1} ◦ f^{η(m_1)}_{m_2} ◦ f^{η(m_1 m_2)}_{m_3} ◦ · · · ◦ f^{η(m_1...m_{k−1})}_{m_k}(K_{m_1...m_k})
for some "code function" η : T → {1, . . . , N} and some collection of sets K_{m_1...m_k} ∈ K_V. Moreover, there are at most V distinct such sets K_{m_1...m_k} for each k.
(5) Suppose for k ≥ 1 that the a_k ∈ A_V are iid, so that I^{a_k}(v) ∈ {1, . . . , N} and J^{a_k}(v, m) ∈ {1, . . . , V} each have the corresponding uniform distribution. Then for every (K^0_1, . . . , K^0_V) ∈ H(X)^V and a.e. (a_k)_{k≥1}, the set of limit points of the sequence (K^k_1, . . . , K^k_V)_{k≥1}, where (K^k_1, . . . , K^k_V) := F^{a_k} ◦ · · · ◦ F^{a_1}(K^0_1, . . . , K^0_V), equals the set K*_V.
(6) Let K_∞ be the collection of random fractal sets generated in the usual manner from the IFSs F^1, . . . , F^N. Then
    dH(K_V, K_∞) ≤ r^{[log V / log M]} diam X,
where [s] denotes the integer part of s. In particular, K_V → K_∞ in the dH sense as V → ∞.

Proof. Results (1), (3) and (5) follow from standard results for IFSs. More precisely, since F_V is a contraction map with contraction ratio r on a complete metric space, it has a unique fixed point with exponential convergence as stated in (1). For (3) one similarly uses the fact that each F^a is a contraction. Result (5) follows from ergodic theory, or in this case directly from the given probability distributions.

For (2) note that F_V is invariant under permutations of {1, . . . , V} and use the uniqueness of K*_V.

Result (4) follows from F_V^k(K*_V) = K*_V and expanding out the left side. Note that any standard random fractal generated from the IFSs F^1, . . . , F^N is of the same form but without any restriction on the K_{m_1...m_k}.

For (6) let k = [log V / log M]. Then M^k ≤ V < M^{k+1}. It follows from a combinatorial argument that for any element of K_∞ with code function η there is an element of K_V whose code function agrees with η on nodes of length at most k − 1. It follows that dH(K_V, K_∞) ≤ r^k diam X, which gives the result.

The terminology V-variable is justified by (4). In the case of deterministic fractals, or more generally of homogeneous random fractals, V = 1. Standard random fractals correspond to the case "V = ∞". From (5) there is a fast forward algorithm for generating V-variable fractals, and from (6) it follows that one has a fast forward algorithm for generating standard random fractals up to any a priori prescribed error.

In fact much more is true. We show in Barnsley, Hutchinson and Stenflo (2003b) that similar results apply to random fractal measures. Moreover, the empirical distribution obtained by the forward algorithm approximates the probability measure on standard random fractals in a quantifiable manner. The results of the theorem hold under much weaker average contractive conditions, provided one works with fractal measures and the appropriate metrics are used, as is established in Barnsley, Hutchinson and Stenflo (2003b).

6. Examples of 2-variable fractals

We begin with two IFSs, U = (f1, f2) ("Up with a reflection") and D = (g1, g2) ("Down"), where

    f1(x, y) = ( x/2 + 3y/8 − 1/16,  x/2 − 3y/8 + 9/16 ),    f2(x, y) = ( x/2 − 3y/8 + 9/16, −x/2 − 3y/8 + 17/16 ),
    g1(x, y) = ( x/2 + 3y/8 − 1/16, −x/2 + 3y/8 + 7/16 ),    g2(x, y) = ( x/2 − 3y/8 + 9/16,  x/2 + 3y/8 − 1/16 ).

The corresponding fractal attractors are shown in Figure 5.

Figure 5. Up (green) and Down (red) attractors.

Figure 6 shows the first 20 steps in the construction of a sequence of pairs of 2-variable fractals from the two IFSs U and D. The initial pair of input figures can be arbitrarily chosen; here they are each the same and consist of four leaves. For the first step in the construction (producing the contents of the second pair of buffers) the IFS U = (f1, f2) was chosen, f1 was applied to the previous left buffer L and f2 was applied to the previous right buffer R; the second buffer was obtained by applying U with f1 and f2 both acting on the right buffer R at the previous step. Thus the first step in the construction can be described by U(L, R) and U(R, R) respectively; see the caption below the second pair of screens in Figure 6. The second step is given by D(R, R) and D(R, L), the third by U(L, R) and D(R, L), the fourth by U(R, R) and D(L, L), and so on from left to right and then down the page.
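A short Python sketch (illustrative, not the authors' code) of the first steps shown in Figure 6, using the maps f1, f2, g1, g2 defined above and point-cloud images:

    import numpy as np

    # The affine maps of U = (f1, f2) and D = (g1, g2), acting on arrays of points.
    def f1(p): x, y = p.T; return np.column_stack([x/2 + 3*y/8 - 1/16,  x/2 - 3*y/8 + 9/16])
    def f2(p): x, y = p.T; return np.column_stack([x/2 - 3*y/8 + 9/16, -x/2 - 3*y/8 + 17/16])
    def g1(p): x, y = p.T; return np.column_stack([x/2 + 3*y/8 - 1/16, -x/2 + 3*y/8 + 7/16])
    def g2(p): x, y = p.T; return np.column_stack([x/2 - 3*y/8 + 9/16,  x/2 + 3*y/8 - 1/16])

    U, D = (f1, f2), (g1, g2)

    def step(ifs, first, second):
        """Apply an IFS to a chosen pair of input buffers, e.g. step(U, L, R) is U(L, R)."""
        h1, h2 = ifs
        return np.vstack([h1(first), h2(second)])

    rng = np.random.default_rng(2)
    L = R = rng.random((1000, 2))               # arbitrary initial images (the "four leaves")
    L, R = step(U, L, R), step(U, R, R)         # first step of Figure 6: U(L,R), U(R,R)
    L, R = step(D, R, R), step(D, R, L)         # second step: D(R,R), D(R,L)
    L, R = step(U, L, R), step(D, R, L)         # third step: U(L,R), D(R,L), and so on

In an actual run the IFS and the two input buffers are chosen at random at each step, as described in the caption of Figure 6 below.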
Figure 6. A sequence of pairs of (approximate) 2-variable fractals (from left to right and then down the page). The captions below the successive pairs of screens read: Initial Sets; U(L,R) U(R,R); D(R,R) D(R,L); U(L,R) D(R,L); U(R,R) D(L,L); U(R,R) D(L,R); D(R,L) U(R,R); D(R,R) D(R,R); D(L,L) D(R,L); D(L,R) D(L,L); U(R,R) D(R,L); U(R,R) D(R,L); U(R,R) D(R,L); U(R,L) D(R,R); U(L,R) D(R,L); D(L,L) U(R,R); U(R,L) D(L,L); D(L,R) D(R,R); U(R,L) U(L,R); U(L,R) U(L,L); U(R,R) U(R,L). In each case in this particular example, for each buffer at each step, either U or D was chosen with probability 1/2. For each buffer, the (in this case two) input buffers chosen from the previously generated pair of buffers were also each chosen as either L or R with probability 1/2, and the same buffer is allowed to be selected twice.

After about 12 iterations, the images obtained are independent of the initial images up to screen resolution. After this stage the images (or "necklaces") can be considered as examples of 2-variable fractals corresponding to the family (U, D) of IFSs with associated choice probabilities (1/2, 1/2). The pair of 2-variable fractals obtained at each step depends on the previous choices of IFS and input buffers, and will vary from one experimental run to another. However, over any sufficiently long experimental run, the empirically obtained distribution on pairs of 2-variable fractals will (up to any prescribed resolution) be the same with probability one. This follows from ergodic theory and the fact that the construction process corresponds to the chaos game for an IFS (operating here on pairs of images rather than on single points as does a standard IFS — see the discussion of the chaos game in the second half of Section 2 and also Figure 2). As discussed in the previous section, we call this type of IFS a superIFS. The collection of 2-variable necklaces obtained over a long experimental run should be thought of as a single superfractal, and the corresponding probability distribution on necklaces should be thought of as the corresponding superfractal measure.

In Figure 7 we have superimposed the members of a generated sequence of 2-variable fractal necklaces. By virtue of the fact that, as discussed before, the probability distribution given by such a sequence approximates the associated superfractal measure, the image can be regarded as a projection of the superfractal onto 2-dimensional space. The attractors of the individual IFSs U and D are shown in green and red respectively. The projected support of the superfractal is shown on a black background but, inside the support, increasing density of the superfractal measure is indicated by increasing intensity of white.

Figure 7. Superfractal projected onto 2-dimensional space.

In Figure 8 are shown a fern and lettuce generated by two IFSs, each IFS consisting of 4 functions. In Figure 9 is a sequence of hybrid offspring, extending the examples in Figure 4. The colouring was obtained by working with two IFSs in 5-dimensional space, with the three additional dimensions corresponding to RGB colouring. The two IFSs used project onto two IFSs operating in 2-dimensional space which give the (standard black and white) fern and lettuce attractors respectively. The 2-variable offspring were coloured by extending the superfractal construction to 5-dimensional space in a natural manner.
Figure 8. Lettuce and fern attractors.

Figure 9. A sequence of fern-lettuce hybrid offspring.

7. Computation of Dimension for V-variable Fractals

An important theoretical and empirical classification of fractals is via their dimension. We first show how to compute the dimensions of V-variable Sierpinski triangles. As we discuss in the final paragraph of the next section, and prove in Barnsley, Hutchinson and Stenflo (2003b), the method generalises by using the uniform open set condition.

Associated with the transition from the k-level set of V buffers to the (k+1)-level is a matrix M^k(α), defined for each α as follows. Entries of M^k(α) are initialised to zero. For the set in the vth output buffer at level k + 1 one considers each input buffer w used in its construction and adds r^α to the wth entry in the vth row, where r = 1/2 or 1/3 is the corresponding contraction ratio. The construction of M^k(α) can be seen in passing from level 1 to level 2 and from level 2 to level 3 in Figure 3, giving respectively

    M^1(α) = [ 1/3^α  1/3^α  1/3^α   0      0    ]        M^2(α) = [ 1/2^α  1/2^α  1/2^α   0    0     ]
             [ 1/2^α   0     1/2^α  1/2^α   0    ]                 [  0     2/3^α   0      0   1/3^α  ]
             [  0      0     1/3^α  1/3^α  1/3^α ]                 [  0     1/3^α  1/3^α   0   1/3^α  ]
             [ 1/2^α   0      0     2/2^α   0    ]                 [ 1/3^α   0      0      0   2/3^α  ]
             [  0      0     1/2^α  1/2^α  1/2^α ]                 [  0      0     2/2^α   0   1/2^α  ]

The "pressure" function

    γ_V(α) = lim_{k→∞} (1/k) log ( (1/V) ||M^1(α) · . . . · M^k(α)|| )        (3)

exists and is independent of the experimental run with probability one, by a result of Furstenberg and Kesten (1960); see also Cohen (1988) for the version required here. (By ||A|| we mean the sum of the absolute values of all entries in the matrix A.) The factor 1/V is not necessary in the limit, but is the correct theoretical and numerical normalisation, as we see in the next section. (See Feng and Lau (2002) for another use of Furstenberg and Kesten type results for computing dimensions of random fractals.) In the case V = 1,

    γ_1(α) = (1 − α/2) log 3 − (α/2) log 2        (4)

from the strong law of large numbers.

It can be shown, see Barnsley, Hutchinson and Stenflo (2003b), that for each V the function γ_V(α) is monotone decreasing. In this example the derivative lies between − log 2 and − log 3, corresponding to the contraction ratios 1/2 and 1/3 respectively, see Figure 10. Moreover, there is a unique d = d(V) such that γ_V(d) = 0. This is the dimension of the corresponding V-variable random fractals with probability one. The establishment and generalisation of this method uses the theory of products of random matrices and ideas from statistical mechanics, as we discuss in the next section.

Figure 10. Graphs of the "pressure" function γ_V(α) for V = 1, 2, and 5 respectively, from left to right. (The horizontal axis shows α, roughly from 1.23 to 1.27, with the zeros d(1), d(2), d(5) and d(∞) marked; the vertical axis shows γ(α), roughly from −0.04 to 0.03.)

It was previously known that the dimension of homogeneous random Sierpinski triangles (V = 1) is 2 log 3/(log 2 + log 3) ≈ 1.226, see Hambly (1992, 2000), and that the dimension of standard random Sierpinski triangles (V → ∞) is the solution d of (3/2)(1/2)^d + (3/2)(1/3)^d = 1, or approximately 1.262, see Falconer (1986), Graf (1987) and Mauldin and Williams (1986). In particular, (4) is in agreement when computing d(1). For V > 1 we used Monte Carlo simulations to compute γ_V(α) in the region of the interval [d(1), d(∞)], with the computed values shown (Figure 10). These values have error at most 0.001 at the 95% confidence level, and from this one obtains the dimensions d(2) ≈ 1.241 and d(5) ≈ 1.252 (Figure 10). The computed graphs for V > 1 are concave up, although this does not show on the scale of Figure 10.
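The Monte Carlo computation of γ_V(α) and d(V) can be sketched as follows (Python/numpy, illustrative only; the uniform buffer sampling and the probabilities P^F = P^G = 1/2 are assumptions of this sketch, and the running matrix product is renormalised at each step to avoid numerical overflow or underflow):

    import numpy as np

    rng = np.random.default_rng(3)
    RATIOS = np.array([0.5, 1 / 3])      # contraction ratios of F and G
    P = np.array([0.5, 0.5])             # assumed probabilities P^F, P^G

    def random_transition(V, alpha):
        """One random matrix M^k(alpha): for each output buffer choose an IFS, then
        three input buffers (uniformly, with replacement) and add r^alpha to the
        corresponding entries of that row.  Each row sums to 3 r^alpha."""
        M = np.zeros((V, V))
        for v in range(V):
            r = RATIOS[rng.choice(2, p=P)]
            for w in rng.integers(0, V, size=3):
                M[v, w] += r ** alpha
        return M

    def gamma(V, alpha, k=50_000):
        """Monte Carlo estimate of gamma_V(alpha) = lim (1/k) log(||M^1 ... M^k|| / V)."""
        x, log_total = np.full(V, 1.0 / V), 0.0    # x^T starts as (1/V, ..., 1/V)
        for _ in range(k):
            x = random_transition(V, alpha).T @ x  # x^T stays proportional to (1/V,...,1/V) M^1 ... M^j
            s = x.sum()
            log_total += np.log(s)                 # accumulates log(||M^1 ... M^j|| / V)
            x /= s                                 # renormalise to prevent under/overflow
        return log_total / k

    def dimension(V, alphas=np.linspace(1.22, 1.27, 11), k=50_000):
        """Estimate d(V): gamma_V is decreasing, so locate its sign change on a grid
        over [d(1), d(infinity)] and interpolate linearly."""
        g = np.array([gamma(V, a, k) for a in alphas])
        i = int(np.argmax(g <= 0))                 # first alpha with gamma <= 0
        a0, a1, g0, g1 = alphas[i - 1], alphas[i], g[i - 1], g[i]
        return a0 + g0 * (a1 - a0) / (g0 - g1)

    # print(dimension(2), dimension(5))   # should come out near 1.241 and 1.252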
8. Analysis of Dimension results for V-variable Fractals

In order to motivate the following analysis, consider a smooth curve or smooth surface, having dimension 1 or 2 respectively. It is possible to cover each "efficiently" (i.e. with little overlap) by sets of small diameter such that the sum of the diameters raised to the power 1 or 2 respectively is very close to the length, or to the area divided by π/4, respectively. However, if any power α > 1 or 2 respectively is used, then the limit of this sum, as the maximum diameter of the covering sets becomes arbitrarily small, is zero. For any power α < 1 or 2 respectively, the limit of the sum, as the maximum diameter becomes arbitrarily small, is infinity.

In the case of the construction of 5-variable Sierpinski fractal triangles, we see that if one begins the construction process with 5 copies of a triangle T as indicated (Figure 3), then the contents of each buffer at later stages will consist of a large number of tiny triangles which approximate and cover the "ideal" or limiting 5-variable Sierpinski fractal triangles. (Note that what is obtained after k steps is only an actual 5-variable Sierpinski fractal triangle up to k levels of magnification.) One can check that the sum S_v^k(α) of the diameters to the power α of the triangles in the vth buffer at level k is given by the sum of the entries in the vth row of the matrix M^1(α) · . . . · M^k(α). Using (3) it is not too difficult to show that lim_{k→∞} (1/k) log S_v^k(α) also exists for each v and equals γ_V(α), independently of v, with probability one. (The argument relies on the existence of "necks", where a neck in the construction process occurs at some level if the same IFS and the same single fixed input buffer is used for constructing the set in each buffer at that level.)

One can show (Barnsley, Hutchinson and Stenflo 2003b) that γ_V(α) is decreasing in α, and deduce that there is a unique d = d(V) such that γ_V(d) = 0 (Figure 10). From this and (3) it follows that

    α < d  ⇒  γ(α) > 0  ⇒  lim_{k→∞} S_v^k(α) = lim_{k→∞} exp(k γ(α)) = ∞,
    α > d  ⇒  γ(α) < 0  ⇒  lim_{k→∞} S_v^k(α) = lim_{k→∞} exp(k γ(α)) = 0.

It is hence plausible from the previous discussion of curves and surfaces that the dimension of V-variable Sierpinski triangles equals d(V) with probability one. The motivation is that the covering by small triangles is very "efficient". The justification that the dimension is at most d is in fact now straightforward from the definition of (Hausdorff) dimension of a set. The rigorous argument that the dimension is at least d, and hence exactly d, is much more difficult, see Barnsley, Hutchinson and Stenflo (2003b). It requires a careful analysis of the frequency of occurrence of necks and the construction of Gibbs type measures on V-variable Sierpinski triangles, analogous to ideas in statistical mechanics.
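The following sketch (illustrative; same assumptions as the sketch after Section 7) estimates (1/k) log S_v^k(α) directly as the row sums of a product of random matrices, showing the change of sign of γ on either side of d(5) ≈ 1.252:

    import numpy as np

    rng = np.random.default_rng(4)
    RATIOS = np.array([0.5, 1 / 3])    # F and G; P^F = P^G = 1/2 assumed, as before
    V = 5

    def random_transition(alpha):
        """Random M^k(alpha) with uniform buffer sampling, as in Section 7."""
        M = np.zeros((V, V))
        for v in range(V):
            r = RATIOS[rng.integers(0, 2)]
            for w in rng.integers(0, V, size=3):
                M[v, w] += r ** alpha
        return M

    def log_covering_sums(alpha, k=2000):
        """Returns (1/k) log S_v^k(alpha) for v = 1,...,V, where S_v^k(alpha) is the
        v-th row sum of a product of k random matrices.  (The matrices are iid, so
        the order of multiplication does not affect the limit with probability one.)"""
        x, log_scale = np.ones(V), 0.0
        for _ in range(k):
            x = random_transition(alpha) @ x   # row sums of the growing product
            s = x.max()
            log_scale += np.log(s)
            x /= s                             # renormalise to avoid overflow/underflow
        return (log_scale + np.log(x)) / k

    print(log_covering_sums(1.20))   # positive: alpha < d(5), the sums S_v^k blow up
    print(log_covering_sums(1.30))   # negative: alpha > d(5), the sums S_v^k tend to 0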
Similar results on dimension have been established much more generally, see Barnsley, Hutchinson and Stenflo (2003b). For example, suppose the functions in each IFS are similitudes, i.e. built up from translations, rotations, reflections in lines, and a single contraction around a fixed point by a fixed ratio r (both the point and the ratio r may depend on the function in question). We also require that the IFSs involved satisfy the uniform open set condition. In the case of 5-variable Sierpinski fractal triangles constructed from the IFSs F = (f1, f2, f3) and G = (g1, g2, g3) this means the following. There is an open set O (the interior of the triangle T) such that f1(O) ⊆ O, f2(O) ⊆ O, f3(O) ⊆ O, and f1(O), f2(O), f3(O) are pairwise disjoint, and such that analogous conditions apply to the maps g1, g2, g3 with the same set O. (In general, a single open set O which applies to every IFS may not be so simple to obtain.) Under these circumstances one constructs the matrices M^k(α) and the pressure function γ_V(α) as before, and it follows that the solution d(V) of γ_V(d) = 0 is the dimension of the corresponding V-variable fractals with probability one.

9. Generalisations

Many generalisations are possible. These are proved in Barnsley, Hutchinson and Stenflo (2003b), or are otherwise clear.

The number of functions M in each of the IFSs F^n = (f^n_1, . . . , f^n_M; p^n_1, . . . , p^n_M) may vary. The maps f^n_m need only be mean contractive, when their contraction ratios are averaged over m and n according to the probabilities p^n_m and P^n respectively. Neither the number of functions in an IFS nor the number of IFSs need be finite; this is important for simulating various selfsimilar processes, including Brownian motion, see Hutchinson and Rüschendorf (2000). The maps f^n_m may be nonlinear, and many of the results and arguments, including those concerning dimension, will still be valid. The set maps f^n_m need not be induced from point maps; this is technically useful in extending results to the case where M is not constant, by artificially adding set maps f^n_m such that f^n_m(A) is always the empty set. It could also be important in applications to modelling biological or physical phenomena where the objects under consideration are not naturally modelled as sets or measures. Buffer sampling need not be uniform; buffers could be placed in a rectangular or other grid, and nearby buffers sampled with greater probability, in order to simulate various biological and physical phenomena.

An IFS operates on R^2, or more generally on a compact metric space (X, d), to produce a fractal set attractor; a weighted IFS produces a fractal measure attractor. We have seen in this paper how a family of IFSs operating on (X, d), a probability distribution on this family of IFSs, and an integer parameter V, can be used to generate a (super)IFS operating in a natural way on (H(X)^V, dH), where H(X) is the space of compact subsets of X and dH is the induced Hausdorff metric. In the case of a family of weighted IFSs, the induced superIFS operates on (P(X)^V, dMK), where P(X) is the space of unit mass measures on X and dMK is the induced Monge-Kantorovitch metric. In either case there is a superfractal set (consisting of V-variable sets or measures respectively) together with an associated superfractal measure (a probability distribution on the collection of V-variable sets or measures). There is also a fast forward algorithm to generate this superfractal.

We can consider iterating this procedure. Replace (X, d) by (H(X)^V, dH) or (P(X)^V, dMK), take a family of superIFSs, an associated probability distribution, and a new parameter W. To speculate: if a superfractal may be thought of as a gallery of a new class of fractal images, can one use some version of the iteration to produce a museum of galleries of yet another new class of fractal images?
10. Conclusion

There appear to be many potential applications, which include both the extension of modelling possibilities to allow a controlled degree of variability where deterministic or random fractals have been previously applied, and the rapid generation of accurately distributed examples of random fractals — previously not possible except in very special cases.

Acknowledgments

We thank the Australian Research Council for their support of this research, which was carried out at the Australian National University.

References

Barnsley M F and Demko S 1985 Iterated function systems and the global construction of fractals Proc. Roy. Soc. Lond. Ser. A 399 243–275
Barnsley M F, Hutchinson J E and Stenflo Ö 2003a A fractal valued random iteration algorithm and fractal hierarchy (submitted)
Barnsley M F, Hutchinson J E and Stenflo Ö 2003b New classes of fractals (in preparation)
Cohen J E 1988 Generalised products of random matrices and operations research SIAM Review 30 69–86
Falconer K J 1986 Random fractals Math. Proc. Cambridge Philos. Soc. 100 559–582
Falconer K J 1990 Fractal Geometry: Mathematical Foundations and Applications (Chichester: John Wiley & Sons)
Falconer K J 1997 Techniques in Fractal Geometry (Chichester: John Wiley & Sons)
Feng D J and Lau K S 2002 The pressure function for products of non-negative matrices Math. Res. Letters 9 1–16
Furstenberg H and Kesten H 1960 Products of random matrices Ann. Math. Statist. 31 457–469
Graf S 1987 Statistically self-similar fractals Probab. Theory Related Fields 74 357–392
Hambly B M 1992 Brownian motion on a homogeneous random fractal Probab. Theory Related Fields 94 1–38
Hambly B M 2000 Heat kernels and spectral asymptotics for some random Sierpinski gaskets Progr. Probab. 46 239–267
Hutchinson J E 1981 Fractals and self-similarity Indiana Univ. Math. J. 30 713–749
Hutchinson J E and Rüschendorf L 1998 Random fractal measures via the contraction method Indiana Univ. Math. J. 47 471–487
Hutchinson J E and Rüschendorf L 2000 Random fractals and probability metrics Adv. in Appl. Probab. 32 925–947
Hutchinson J E and Rüschendorf L 2000 Selfsimilar fractals and selfsimilar random fractals Progr. Probab. 46 109–123
Mauldin R D and Williams S C 1986 Random recursive constructions: asymptotic geometric and topological properties Trans. Amer. Math. Soc. 295 325–346
Stenflo Ö 2001 Markov chains in random environments and random iterated function systems Trans. Amer. Math. Soc. 353 3547–3562