Fractal Patterns in

Nonlinear Dynamics
and Applications
Santo Banerjee
Institute for Mathematical Research
University Putra Malaysia
Serdang, Malaysia
M K Hassan
Dhaka University
Dhaka, Bangladesh
Sayan Mukherjee
Department of Mathematics
Sivanath Sastri College
Kolkata, India
A Gowrisankar
Department of Mathematics
Vellore Institute of Technology
Vellore, Tamil Nadu, India

A SCIENCE PUBLISHERS BOOK
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2020 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20191014

International Standard Book Number-13: 978-1-4987-4135-4 (Hardback)


This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been
made to publish reliable data and information, but the author and publisher cannot assume responsibility for the
validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the
copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to
publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let
us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted,
or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, includ-
ing photocopying, microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com
(http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers,
MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety
of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment
has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at


http://www.taylorandfrancis.com

and the CRC Press Web site at


http://www.crcpress.com
This volume is dedicated to all researchers
who love to explore the mystery of nature
Preface

Mathematics and the physical sciences have long been used together to describe natural
phenomena, and as a result of their numerous successes there has been a growth of
scientific discoveries. This book is meant for anyone who wants to understand
the patterns of fractal geometry in detail, along with the physical aspects and basic
mathematical background. It is our goal to give readers a broad interpretation of
the notions underlying fractals and multifractals. Furthermore, we want
to illustrate the fundamentals of fractals, stochastic fractals and multifractals
with applications.
Many phenomena in nature exhibit self-similarity. That is, either a part is
similar to the whole or snapshots of the same system at different times are similar
to one another, albeit differing in size. The book begins by describing novel physical
applications and recent progress through scale-invariance and self-similarity.
In general, mathematics models real-world problems with sets and functions,
traditionally by means of classical Euclidean geometry. However, many
phenomena are too irregular or complex to be described by classical Euclidean
geometry. In such cases an alternative geometry is needed to resolve these
complexities and to help us better understand natural patterns. The study of
irregular objects was revolutionized by Benoit B. Mandelbrot and is called fractal
geometry. It has generated widespread interest in almost every branch of science.
The advent of inexpensive computer power and graphics has led to the study of
non-traditional geometric objects in many fields of science, and the idea of the
fractal has been used to describe them. In a sense, the idea of fractals has brought
many seemingly unrelated subjects under one umbrella. The second chapter deals
with the construction of fractals through an iterated function system of contractive
mappings and illustrates some examples.
Nature loves randomness, and the natural objects we see around us
evolve in time. The apparently complex look of most natural objects does not
mean that nature favours complexity; rather, the opposite is true. Often the
inherent, basic rule is trivially simple; it is in fact the randomness and the
repetition of the same simple rule over and over again that makes the object look
complex. Of course, natural fractals cannot be strictly self-similar; rather, they are
statistically self-similar. For instance, one can draw a curve describing the tips
of the trees on the horizon, but the details of two such pictures drawn by the same
person will never be the same, however hard one tries. Capturing the generic
features of a cloud distribution without knowing anything about self-similarity
can be described as our natural instinct. The later chapters attempt to
incorporate both ingredients (randomness and kinetics) into the various classical
fractals, namely stochastic fractals and multifractals, in order to learn what role
these two quantities play in the resulting processes.
Contents

Preface v
Symbols xi

1. Scaling, Scale-invariance and Self-similarity 1


1.1 Dimensions of physical quantity 4
1.2 Buckingham Pi-theorem 5
1.3 Examples to illustrate the significance of Π-theorem 8
1.4 Similarity 11
1.5 Self-similarity 13
1.6 Dynamic scaling 16
1.7 Scale-invariance: Homogeneous function 18
1.7.1 Scale invariance: Generalized homogeneous functions 20
1.7.2 Dimension functions are scale-invariant 21
1.8 Power-law distribution 21
1.8.1 Examples of power-law distributions 22
1.8.1.1 Euclidean geometry 22
1.8.1.2 First return probability 23
1.8.2 Extensive numerical simulation to verify power-law first return probability 28

2. Fractals 31
2.1 Introduction 31
2.2 Euclidean geometry 33
2.3 Fractals 35
2.3.1 Recursive Cantor set 37
2.3.2 von Koch curve 40
2.3.3 Sierpinski gasket 42

2.4 Space of fractal 45


2.4.1 Complete metric space 45
2.4.2 Banach contraction mapping 49
2.4.3 Completeness of the fractal space 51
2.5 Construction of deterministic fractals 58
2.5.1 Iterated function system 58

3. Stochastic Fractal 69
3.1 Introduction 69
3.2 A brief description of stochastic process 70
3.3 Dyadic Cantor Set (DCS): Random fractal 71
3.4 Kinetic dyadic Cantor set 73
3.5 Stochastic dyadic Cantor set 77
3.6 Numerical simulation 81
3.7 Stochastic fractal in aggregation with stochastic self-replication 85
3.8 Discussion and summary 95

4. Multifractality 97
4.1 Introduction 97
4.2 The Legendre transformation 99
4.3 Theory of multifractality 101
4.3.0.1 Properties of the mass exponent τ(q) 102
4.3.1 Legendre transformation of τs(q): f (α) spectrum 104
4.3.1.1 Physical significance of α and f(α) 105
4.4 Multifractal formalism in fractal 105
4.4.1 Deterministic multifractal 107
4.5 Cut and paste model on Sierpinski carpet 110
4.6 Stochastic multifractal 115
4.7 Weighted planar stochastic lattice model 115
4.8 Algorithm of the weighted planar stochastic lattice (WPSL) 116
4.9 Geometric properties of WPSL 119
4.9.0.1 Multifractal analysis of the stochastic Sierpinski carpet 120
4.9.0.2 Legendre transformation of the mass exponent τs(q): The f(α) spectrum 122
4.10 Multifractal formalism in kinetic square lattice 124
4.10.1 Discussions 126

5. Fractal and Multifractal in Stochastic Time Series 129


5.1 Introduction 129
5.2 Concept of scaling law, monofractal and multifractal time series 130

5.3 Stationary and non-stationary time series 133


5.4 Fluctuation analysis on monofractal stationary and non-stationary time series 135
5.4.1 Autocorrelation function 135
5.4.2 Fourier based spectrum analysis 135
5.4.3 Hurst exponent 136
5.4.4 Fluctuation analysis (FA) 139
5.4.5 Detrended fluctuation analysis 140
5.5 Fluctuation analysis on stationary and non-stationary multifractal time series 144
5.5.1 Wavelet transform modulus maxima 144
5.5.2 Multifractal detrended fluctuation analysis 145
5.6 Discussion 150

6. Application in Image Processing 153


6.1 Introduction 153
6.1.1 Digital image 153
6.1.2 Digital image processing 154
6.1.3 Image segmentation 155
6.2 Generalized fractal dimensions 156
6.2.1 Monofractal dimensions 156
6.2.2 Box dimension of image 158
6.2.3 Multifractal dimension 160
6.3 Image thresholding 162
6.3.1 Multifractal dimensions: A threshold measure 162
6.4 Performance analysis 164
6.4.1 Evaluation measure for quantitative analysis 164
6.4.2 Human visual perception 164
6.5 Medical image processing 166
6.6 Mid-sagittal plane detection 168
6.6.1 Description of experimental MRI data 171
6.6.2 Performance evaluation metrics 171
6.6.3 Results and discussions 172
6.6.4 Conclusion 181
References 183
Index 191
Symbols

N the set of all natural numbers.


R the set of all real numbers.
Rn the n-dimensional Euclidean space.
(X, d) metric space.
K (X) collection of all non-empty compact subsets of X.
Hd Hausdorff metric induced by the metric d.
N(K, ε) smallest number of closed balls required to cover K.
dimT topological dimension.
dimB box dimension or fractal dimension.
Hs(K) s-dimensional Hausdorff measure of K.
dimH Hausdorff dimension.
REq Renyi entropy of order q.
Dq generalized fractal dimension of order q.
a0 the Bohr radius.
P(m, N) probability that the walker is at position m after N random steps.
C[a, b] set of all continuous real-valued functions defined on the closed
interval [a, b].
B(x, r) open ball centred at x with radius r.
B[x, r] closed ball centred at x with radius r.
τ (q) the mass exponent.
M(m, n; t) 2-tuple Mellin transform.
D0 the dimension of the support or fractal dimension.
D1 the information dimension.
D2 the correlation dimension.
Chapter 1

Scaling,
Scale-invariance and
Self-similarity

Physics is all about observations and the measurement of physical
quantities, with the desire to find patterns in the data. These pat-
terns subsequently lead to finding principles of the phenomena under
investigation. Finding patterns often means finding order, similarity,
self-similarity and scale-invariance in the phenomena under investiga-
tion which may otherwise look disordered. Many phenomena in nature
exhibit similarity and self-similarity. That is, either part is similar to
the whole or snapshots of the same system at different times are simi-
lar to one another albeit they differ in size. Trying to understand the
idea of self-similarity is therefore like trying to learn the nature of such
phenomena. It has proved quintessential to understand the ideas of
dimension and of dimensionless quantity in order to really grasp the
ideas of similarity and self-similarity.
The building blocks of physics are the physical quantities quantified
by numbers which are obtained by measurements. When we can mea-
sure a physical quantity and express it in numbers only then we can say
we have gained some knowledge about that quantity; but when we can-
not express it in numbers then our knowledge about that quantity can
be described as meagre and unsatisfactory. Numbers to quantify physi-
cal quantities are typically obtained by a direct or indirect comparison
with the corresponding units of measurement, which are of two kinds,
called fundamental and derivative units. The fundamen-
tal units of measurements are usually defined arbitrarily in the form
of certain standards, artificial or natural. The derivative ones, on the
other hand, are obtained from the fundamental units of measurement
by virtue of their definitions, which always indicate some conceptual
method of measurement. For instance, speed is the ratio of the distance tra-
versed in a given time interval to the magnitude of that time interval and
hence as a unit of speed one can take the ratio of the unit of length
to the unit of time in the given system. A set of fundamental units
of measurement is sufficient for measuring the properties of a class of
phenomena under investigation and it is then called a system of units of
measurement. Until recently, the centimeter-gram-second system, ab-
breviated as CGS system, has been widely used by scientists working
in laboratories, dealing with small quantities where one gram (gm) is
adopted as the unit of mass, one centimeter (cm) is adopted as the unit
of length, and one second (s) is adopted as the unit of time. A system of
units of measurement consisting of two units such as a unit for the mea-
surement of length and a unit for the measurement of time for example
is sufficient for investigating the kinetic phenomena, while a system of
units consisting of only one length unit is sufficient for measuring the
geometric aspects of the object. In addition to the CGS system, one
often considers a system in which 1 km ($10^5$ cm) is used as the unit of
length, 1 metric ton ($10^6$ gm) is used as the unit of mass, and 1 hour
(3600 s) is used as the unit of time. These two systems of units have the
following property in common. The standard quantities and the funda-
mental units are of the same physical nature (mass, length, and time).
Consequently, we say that these systems belong to the same class. To
generalize, a set of systems of units that differ only in the magnitude
(but not in the physical nature of their standard) of the fundamental
units is called a class of systems of units.
The system just mentioned and the CGS system are members of the
same class since the same standards of lengths, masses and time are
used as the fundamental units. The corresponding units for an arbitrary
system within the same class are as follows: (i) unit of length = cm/L,
(ii) unit of mass = g/M, and (iii) unit of time = s/T, where L, M, T are
the abstract positive numbers that indicate the factors by which the
fundamental units of length, mass and time decrease in passing from
the original system (in this case, the CGS system) to another system in
the same class. This class is called the LMT class. In particular, we find
that the widely used SI system belongs to the same LMT
class, in which the unit of mass is taken to be 1 kg = 1000 gm, which is
the complete mass of the previously mentioned standard of mass; the unit
of length is taken to be 1 meter = 100 cm, which is the complete length
of the standard of length mentioned before; and the unit of time is taken
to be 1 second. Thus upon passage from the cgs system to the SI system
M = 0.001, L = 0.01, and T = 1. Often one also uses systems of the FLT
class, in which the fundamental units of measurement have the form
kgf/F, cm/L, and s/T. These are certainly not the only possible choices
of units of measurement; rather, the choice of units
depends on the scale of the physical quantity under investigation. For
instance, in studying phenomena at the atomic and molecular
level it is useful to choose a unit of length which is comparable to the
size of an atom. We do this by using the electron charge e as the unit
of charge, and the electron mass $m_e$ as the unit of mass. At the atomic
level the forces are all electromagnetic, and therefore the energies are
always proportional to $e^2/4\pi\epsilon_0$, which has dimensions $ML^3T^{-2}$. Another
quantity that appears in quantum physics is $\hbar$, Planck's constant
divided by $2\pi$, which has dimensions $ML^2T^{-1}$. Dimensional analysis
then reveals that the atomic unit of length is

$$a_0 = \frac{4\pi\epsilon_0\hbar^2}{m_e e^2}. \qquad (1.1)$$

This is known as the Bohr radius, the radius of the smallest orbit of an
electron circling the proton of a hydrogen atom. In the atomic
world the Bohr radius can be chosen as the standard of length. In as-
tronomy we deal with very large distances hence the Bohr radius as a
standard can be highly inconvenient and therefore several other units
are in use. For instance, the astronomical unit (AU $= 1.496 \times 10^{11}$ m)
in which the average distance between the Earth and the Sun is taken
as the standard. The basic idea is that the physical laws do not depend
upon arbitrariness in the choice of the basic units of measurement.
Newton’s second law, F = ma, is true regardless of whether we mea-
sure mass in kilograms, acceleration in meters per second squared, and
force in newtons, or whether we measure mass in slugs, acceleration in
feet per second squared, and force in pounds. This universality of the
physical laws makes science so beautiful.
Measurement systems are fundamental to physics as they allow us
to obtain the numbers that quantify different physical quan-
tities. However, the precise magnitude of the numerical value strictly
depends on the choice of the class of units of measurement we use to
obtain it. Any investigation in physics ultimately comes down to deter-
mining a certain quantity which may depend on various other physical
quantities that characterize the phenomena under consideration. The
problem therefore reduces to establishing the relationship, which can
always be represented in the form

$$f = f(a_1, \ldots, a_n), \qquad (1.2)$$


where f is the quantity of primary interest which we will call the gov-
erned parameter and the quantities a1 , a2 , ..., an on which f depends will
be called governing parameters. The governing parameters are usually
combined into a single term that bears the same physical dimension
as $f$. In this opening chapter we shall go over some
important aperitifs that we will need throughout this book. We will dis-
cuss the idea of dimension of physical quantity and how it differs from
dimensionless quantity. We will then rigorously prove that the dimen-
sional function of the physical quantities are always power-law mono-
mial in nature. The primary aim of this chapter is to establish the fact
that the scale-invariance, self-similarity or scaling as well as the math-
ematical definitions of the homogeneous function are all deeply rooted
to the power-law monomial nature of the dimensions of the physical
quantity (recommended reading [10, 20, 32, 37, 39]).

1.1 Dimensions of physical quantity


It can be said with certainty that everyone reading this book
has dealt with dimensional analysis at some stage of their studies,
especially while solving problems in their early undergraduate years.
Yet many of us may not know how to separate a dimensionless quan-
tity from a dimensional one. It is therefore essential to have a
clear-cut definition of a dimensional quantity and of how it differs from
a dimensionless quantity. The dimension of a physical quantity is the function
that determines the factor by which one needs to alter the numerical
value of this quantity in passing from one system of units of mea-
surement to another system within the same class. For instance, if the
unit of length is decreased by a factor L, the unit of mass is decreased
by a factor of M and the unit of time is decreased by a factor T then
the unit of force is smaller by a factor of $MLT^{-2}$ than the original
unit. Consequently, the numerical values of all the forces in the new
unit of measurement will be increased by a factor of $MLT^{-2}$, owing to
the principle of equivalence. Upon decreasing the unit
of mass by a factor $M$ and the unit of length by a factor $L$, we find
that the new unit of density is smaller by a factor $ML^{-3}$ than the original
unit, so that the numerical values of all the densities are increased by a
factor $ML^{-3}$. One can treat the other physical quantities of interest in
a similar fashion. Let us consider that we have a stick whose dimension
function is $\phi(L, M, T) = L$. Say that we measure its length
using a meter scale and find that its length is 9 meters. Say now that
we decrease the unit of measurement by a factor of $L = 100$; then
in the new scale the numerical value will be increased by a factor of
100, since its dimension is $L$.
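To make this bookkeeping concrete, here is a minimal Python sketch (our own illustration; the `rescale` helper and the force value are hypothetical, chosen to match the stick and force examples above):

```python
# Sketch: how a numerical value transforms under a change of units.
# 'exponents' are the powers of (M, L, T) in the quantity's dimension
# function; 'factors' are the factors by which the fundamental units
# of mass, length and time are decreased.

def rescale(value, exponents, factors):
    """Multiply value by the dimension function evaluated at the
    unit-change factors, i.e. by prod(factor**exponent)."""
    out = value
    for factor, exponent in zip(factors, exponents):
        out *= factor ** exponent
    return out

# The stick: 9 (meters) with the length unit shrunk by L = 100.
# The dimension of length has (M, L, T) exponents (0, 1, 0) -> 900.
print(rescale(9.0, (0, 1, 0), (1000, 100, 1)))

# A force under M = 1000, L = 100, T = 1 grows by M*L*T**-2 = 100000.
print(rescale(2.5, (1, 1, -2), (1000, 100, 1)))
```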
On the other hand, quantities whose numerical values remain iden-
tical in all systems of units within a given class are called dimensionless
and hence the dimension function is equal to unity for a dimension-
less quantity. Perhaps a couple of examples at this stage will provide a
better understanding about the dimensionless quantity than its mere
definition. Consider that you want to measure the ratio of the cir-
cumference to the diameter of a circle. Regardless of the choice of the
unit of measurement the numerical value of the ratio will always be
the same. We will find this definition extremely useful not only in this
chapter but in the subsequent chapters as well. Again, consider that
the angular frequency $\omega$ of a pendulum of length $l$ for small oscillations
is 9.8 cycles/sec. On the other hand, a simple dimensional analysis
or a detailed solution reveals that the natural frequency of a pendulum
is $\sqrt{g/l}$, where $g$ is the acceleration due to gravity, which is 9.8 m s$^{-2}$
on earth (in the SI system of units). To obtain the natural frequency of
the simple pendulum one usually needs to solve the differential equa-
tion which results from applying Newton's second law to the pendulum.
Thus it is clear that the ratio

$$\Pi = \frac{\omega}{\sqrt{g/l}}, \qquad (1.3)$$

is a dimensionless quantity. Suppose that $l = 1$ m and $\omega = 9.8$ cycles/sec,
and hence $\Pi = \sqrt{9.8}$ in the SI system of units. Now let us change the
system of units so that the unit of mass is decreased by a factor of
$M = 1000$, the unit of length is decreased by a factor of $L = 100$, and
the unit of time is decreased by a factor of $T = 1$. With this change, the
unit of frequency will decrease by a factor of $T^{-1} = 1$ and the unit of
acceleration by a factor of $LT^{-2} = 100$. Therefore, the numerical value
of $l$ in the new system of units will be 100 instead of 1,
and that of $g$ will be 980. However, the numerical value of $\Pi$ will still
remain invariant under this change of units and hence it is a dimensionless
quantity.
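A short numerical check of this invariance (a sketch using the same numbers as above; not from the book):

```python
import math

# Sketch: Pi = omega / sqrt(g / l) keeps the same numerical value
# under a change of units, even though omega, g and l individually change.

def pendulum_pi(omega, g, l):
    return omega / math.sqrt(g / l)

# SI values from the text: l = 1 m, g = 9.8 m/s^2, omega = 9.8 /s.
omega, g, l = 9.8, 9.8, 1.0
print(pendulum_pi(omega, g, l))       # sqrt(9.8) ~ 3.1305

# Decrease the units: M = 1000, L = 100, T = 1.
L_, M_, T_ = 100, 1000, 1
omega2 = omega * T_**-1               # dimension T^-1  -> unchanged
g2 = g * L_ * T_**-2                  # dimension L T^-2 -> 980
l2 = l * L_                           # dimension L      -> 100
print(pendulum_pi(omega2, g2, l2))    # the same value: sqrt(9.8)
```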

1.2 Buckingham Pi-theorem


The name Pi comes from the mathematical notation $\Pi$ used,
for historical reasons, to describe the dimensionless variables ob-
tained from power products of the governing parameters, denoted
$\Pi_1, \Pi_2, \Pi_3, \ldots$, etc. If a physical process involves investigation of a cer-
tain dimensional physical quantity (governed quantity) that depends
on $n$ other dimensional variables, then the Buckingham Pi
theorem guides us in a systematic way to reduce the problem of a function
of $n$ variables to a problem of a function of $k$ dimensionless variables,
where each of the $k$ dependent variables among the original $n$ can be
expressed in terms of the $n - k$ dimensionally independent variables.
Now the question is: what is a dimensionally independent variable? A set
of dimensional variables $a_1, a_2, \ldots, a_k$ of the total $n$ variables is said
to have independent dimensions if none of the $k$ variables has dimen-
sions which can be represented as a product of powers of the dimensions
of the remaining variables. For instance, density $[\rho] = ML^{-3}$, velocity
$[v] = LT^{-1}$ and force $[F] = MLT^{-2}$ have independent dimensions in
the sense that none of these quantities can be expressed in terms of the
rest. In other words, no dimensionless product of powers of these quan-
tities exists. On the other hand, consider the case where
force is replaced by pressure $[P] = ML^{-1}T^{-2}$. In this case the dimen-
sion of pressure can be expressed in terms of the dimensions of
velocity and density, since $[P] = [v^2][\rho]$, and hence velocity and density
have independent dimensions but pressure does not.
The first step in modeling any physical phenomena is the identifi-
cation of the relevant variables, and then relating these variables via
known physical laws. For sufficiently simple phenomena, we can usu-
ally construct a quantitative relationship amongst the physical variables
from the first principles; however, for many complex phenomena such
an ab initio theory is often difficult, if not impossible. In these situa-
tions dimensional analysis can be a quite useful method for constructing
a model in a systematic manner with minimum input parameters. In
fact, it has proved to be a quite powerful method for analyzing
experimental data. Most undergraduates have already encountered di-
mensional analysis during their course studies but without knowing its
full potential. Here we will use dimensional analysis as a tech-
nique for analyzing experimental or empirical data and for solving partial
differential equations, which also helps infer some insights about the so-
lution. As we have already stated, the relationship found in physical
theories or experiments can always be represented in the form
$$a = f(a_1, a_2, \ldots, a_n), \qquad (1.4)$$
where the quantities a1 , a2 , . . . , an are called the governing parameters.
Any investigation ultimately comes down to determining one or sev-
eral dependencies of the form Eq. (1.4). It is always possible to classify
the governing parameters a1 , ..., an into two groups using the definition
of dependent and independent variables. Let the arguments $a_{k+1}, \ldots, a_n$
have independent dimensions, while the dimensions of the arguments
$a_1, a_2, \ldots, a_k$ can be expressed in terms of the dimensions of the gov-
erning independent parameters $a_{k+1}, \ldots, a_n$ in the following way:

$$[a_1] = [a_{k+1}]^{\alpha_1} \cdots [a_n]^{\gamma_1}, \qquad (1.5)$$
$$[a_2] = [a_{k+1}]^{\alpha_2} \cdots [a_n]^{\gamma_2},$$
$$\vdots$$
$$[a_k] = [a_{k+1}]^{\alpha_k} \cdots [a_n]^{\gamma_k}.$$

The dimension of the governed parameter $a$ must also be expressible in
terms of the dimensionally independent governing parameters $a_{k+1}, \ldots, a_n$,
since $a$ does not have an independent dimension, and hence we can write

$$[a] = [a_{k+1}]^{\alpha} \cdots [a_n]^{\gamma}. \qquad (1.6)$$
Thus, there exist numbers $\alpha, \ldots, \gamma$ such that Eq. (1.6) holds. We
can therefore form a set of dimensionless governing parameters

$$\Pi_1 = \frac{a_1}{a_{k+1}^{\alpha_1} \cdots a_n^{\gamma_1}}, \qquad (1.7)$$
$$\Pi_2 = \frac{a_2}{a_{k+1}^{\alpha_2} \cdots a_n^{\gamma_2}},$$
$$\vdots$$
$$\Pi_k = \frac{a_k}{a_{k+1}^{\alpha_k} \cdots a_n^{\gamma_k}},$$

and a dimensionless governed parameter

$$\Pi = \frac{f(\Pi_1\, a_{k+1}^{\alpha_1} \cdots a_n^{\gamma_1}, \ldots, \Pi_k\, a_{k+1}^{\alpha_k} \cdots a_n^{\gamma_k},\; a_{k+1}, \ldots, a_n)}{a_{k+1}^{\alpha} \cdots a_n^{\gamma}}. \qquad (1.8)$$
The right hand side of this equation clearly reveals that the dimension-
less quantity Π is a function of ak+1 , ..., an , Π1 , ..., Πk , i.e.,
$$\Pi \equiv F(a_{k+1}, \ldots, a_n, \Pi_1, \ldots, \Pi_k). \qquad (1.9)$$
The quantities Π, Π1 , ..., Πk are obviously dimensionless, and hence
upon transition from one system of units to another inside the given
class their numerical values must remain unchanged. At the same time,
according to the above, one can pass to a system of units of measure-
ment such that any of the parameters of ak+1 , ..., an , say for example,
ak+1 , is changed by an arbitrary factor, and the remaining ones are un-
changed. Upon such a transition the first argument of F is changed ar-
bitrarily, and all the other arguments of the function remain unchanged
as well as its value $\Pi$. Hence, it follows that $\partial F/\partial a_{k+1} = 0$, and entirely analo-
gously $\partial F/\partial a_{k+2} = 0, \ldots, \partial F/\partial a_n = 0$. Therefore, the relation Eq. (1.9) is in fact
represented by a function of $k$ arguments, which proves that it is independent
of $a_{k+1}, \ldots, a_n$, i.e.,
$$\Pi = \Phi(\Pi_1, \ldots, \Pi_k), \qquad (1.10)$$
and the function $f$ can be written in the following special form:

$$f(a_1, \ldots, a_k, \ldots, a_n) = a_{k+1}^{\alpha} \cdots a_n^{\gamma}\, \Phi(\Pi_1, \ldots, \Pi_k). \qquad (1.11)$$

This constitutes the central statement of dimensional
analysis, which has far-reaching consequences for understanding
scaling theory. It is also known as the $\pi$-theorem, formu-
lated and proved for the first time by E. Buckingham.
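The bookkeeping of the $\pi$-theorem can even be mechanized: the dimensionless products are exactly the nullspace vectors of the matrix of dimension exponents. A minimal sketch, assuming SymPy is available (the pendulum variables are our own illustration, not an example from the text):

```python
from sympy import Matrix

# Sketch: finding the dimensionless groups of the Pi-theorem mechanically.
# Columns are the governing parameters of the simple pendulum
# (period t, length l, mass m, gravity g); rows are the exponents of
# M, L, T in each parameter's dimension.  Every vector in the nullspace
# of this matrix is an exponent tuple of a dimensionless product.
#              t   l   m   g
dim = Matrix([[0,  0,  1,  0],   # M
              [0,  1,  0,  1],   # L
              [1,  0,  0, -2]])  # T

for v in dim.nullspace():
    # One basis vector: exponents (2, -1, 0, 1), i.e. Pi = t^2 g / l.
    print(v.T)
```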
The importance and use of dimensional analysis, and of the Bucking-
ham $\pi$-theorem in particular, in the description of scaling or self-
similarity can hardly be exaggerated. Theoretically, it is found that
most physical systems are governed either by partial differential or
by partial integro-differential equations and finding scaling or self-
similarity will be the subject of intense and detailed consideration in
the subsequent chapters. Nevertheless, here we give some instructive ex-
amples that will certainly help with grasping the essential ideas behind
the π -theorem. In particular, the definition of dimensionless quantity
will play the central role in establishing scaling or self-similarity.

1.3 Examples to illustrate the significance of the Π-theorem

Perhaps the simplest and yet most amusing example is to demonstrate
how the idea of the $\pi$-theorem can be used to prove the Pythagorean
theorem. Consider a right triangle whose three sides are of length $a$,
$b$ and $c$ and, for definiteness, denote the smaller of its acute angles by $\theta$. Assume
that we are to measure the area S of the triangle. The area S can be
written in the following form

S = S (a, b, c). (1.12)

However, the dimension of two governing parameters a and b can be


expressed in terms of c alone since we have

[a] ∼ [c] and [b] ∼ [c], (1.13)


and so it is true for the governed parameter $S$, as we can write the di-
mensional relation $[S] \sim [c^2]$. We can therefore define two dimensionless
governing parameters

$$\Pi_1 = \sin\theta = a/c \quad \text{and} \quad \Pi_2 = \cos\theta = b/c, \qquad (1.14)$$

and the dimensionless governed parameter

$$\Pi = \frac{S}{c^2} = c^{-2} S(c\,\Pi_1, c\,\Pi_2, c) \equiv F(c, \Pi_1, \Pi_2). \qquad (1.15)$$
Now it is possible to pass from one unit of measurement to another
system of unit of measurement within the same class and upon such
transition the arguments Π1 and Π2 of the function F as well as the
function itself remains unchanged. It implies that the function F is
independent of c and hence we can write

Π = φ(Π1 , Π2 ). (1.16)

However, Π1 and Π2 both depend on the dimensionless quantity θ, i.e.,

φ(Π1 , Π2 ) = φ(θ). (1.17)

and hence we can write


S = c2 φ(θ), (1.18)
where the scaling function φ(θ) is universal in character.
In order to further capture the significance of Eq. (1.18), we rewrite
it as $S/c^2 \sim \phi(\theta)$. This result has far-reaching consequences. For instance,
consider that we have a right triangle with arbitrary sides $a' \neq a$,
$b' \neq b$ and $c' \neq c$ but with the same acute angle $\theta$ as before. This
can be ensured by choosing an arbitrary point on the hypotenuse of
the previous triangle and dropping a perpendicular onto the base $b$. Consider
that the area of the new triangle is $S'$; yet we will have

$$\frac{S}{c^2} = \frac{S'}{(c')^2}, \qquad (1.19)$$

since the numerical value of the ratio of the area to the square of
the hypotenuse depends only on the angle $\theta$. It implies that if we plot
the ratio of the area to the square of the hypotenuse as a function of $\theta$,
all the data points should collapse onto a single curve regardless of the
size of the hypotenuse and the respective areas of the right triangles. In
fact, the detailed calculation reveals that

$$\phi(b/c) = \frac{1}{4}\sin 2\theta. \qquad (1.20)$$
Figure 1.1: This figure shows how the area $S$ of one of the triangles of the preceding
figure (say the largest) varies as the base $b$ is changed.

Figure 1.2: If we plot $S/c^2$ vs $b/c$ instead of $S$ vs $b$, then the three curves
of the previous figure collapse onto a single master curve. It implies that for a given
numerical value of the ratio $b/c$, regardless of the size of the triangle, the corresponding
$S/c^2$ is the same, because triangles that satisfy these conditions are similar and hence
the function $\phi(b/c)$ is unique.

Now the question is: what does this data collapse imply? It implies that
if two or more right triangles have one of their acute angles iden-
tical, then such triangles are similar. We shall see later that whenever
we find a data collapse between two different systems of the same
phenomenon, it means that the corresponding systems or
their underlying mechanisms are similar. Similarly, if we find that data
collected from the whole system collapse with similarly collected data
from a suitably chosen part of the whole system, then we can conclude
that the part is similar to the whole. On the other hand, if we have a
set of data collected at many different times for a kinetic system and
find they all collapse onto a single curve, then we can say that the same
system at different times is similar. However, similarity in this case is
that of the same system at different times, and hence we may coin it
temporal self-similarity.
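A quick numerical illustration of such a collapse (our own sketch, not the book's figures): right triangles of very different sizes but a common acute angle all give the same $S/c^2$.

```python
import math

# Sketch: triangles of different sizes with the same acute angle theta
# give identical S/c^2, equal to (1/4) sin(2 theta).

def right_triangle_area(theta, c):
    a, b = c * math.sin(theta), c * math.cos(theta)   # the two legs
    return 0.5 * a * b

theta = 0.6                       # one fixed acute angle, in radians
for c in (1.0, 7.3, 120.0):       # hypotenuses of very different sizes
    S = right_triangle_area(theta, c)
    print(f"c = {c:7.1f}   S = {S:12.4f}   S/c^2 = {S / c**2:.6f}")

print(0.25 * math.sin(2 * theta)) # the same number: the collapsed value
```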
Proof of the Pythagoras theorem: The area S of the original right
triangle is determined by the size of its hypotenuse c and the acute angle
θ since S = c2 φ(θ). Now, by drawing the altitude which is perpendicular
to the hypotenuse c we can divide the original triangle into two smaller
right triangles whose hypotenuses are a and b and say their respective
areas are $S_1$ and $S_2$. The two smaller right triangles will also have the
acute angle $\theta$, and hence we can easily show that

$$S_1 = a^2\phi(\theta) \quad \text{and} \quad S_2 = b^2\phi(\theta). \qquad (1.21)$$

On the other hand we have

S = S1 + S2 , (1.22)

and it immediately leads to the conclusion that $c^2 = a^2 + b^2$, which is


the Pythagorean theorem.

1.4 Similarity
In most cases, before a large and expensive structure or object such
as a ship or aeroplane is built, efforts are always devoted to testing
on a model system instead of working directly with the real system.
The results of testing on model systems are then used to infer the
various physical characteristics of the real object or structure under
their future working conditions. In order to have successful modeling,
it is necessary that we know how to relate the results of testing on
models to the actual manufactured product. Needless to mention that
if such connections are not known or cannot be made then modeling
is simply a useless pursuit. For the purpose of rational modeling, the
concept and the importance of similarity phenomena can hardly be
overemphasized. Indeed, physically similar phenomena are the key
to successful modeling.
The concept of physical similarity is a natural generalization of the
concept of similarity in geometry. For instance, two triangles are similar
if they differ only in the numerical values of the dimensional parameters,
i.e., the lengths of the sides, while the dimensionless parameters, the
angles at the vertices are identical for the two triangles. Analogously,
physical phenomena are called similar if they differ only in their numer-
ical values of the dimensional governing parameters; the values of the
corresponding dimensionless parameters Π1 , ..., Πm being identical. In
connection with this definition of similar phenomena, the dimensionless
quantities are called similarity parameters.
Let us consider two similar phenomena, one of which will be called
the prototype and the other the model; it should be understood that
this terminology is just a convention. For both phenomena there is some
relation of the form
a = f (a1 , a2 , a3 , b1 , b2 ), (1.23)
where the function f is the same in both cases by the definition of sim-
ilar phenomena, but the numerical values of the governing parameters
$a_1, a_2, a_3, b_1, b_2$ are different. Thus for the prototype we have

$$a^{(p)} = f(a_1^{(p)}, a_2^{(p)}, a_3^{(p)}, b_1^{(p)}, b_2^{(p)}), \qquad (1.24)$$

and for the model system we have

$$a^{(m)} = f(a_1^{(m)}, a_2^{(m)}, a_3^{(m)}, b_1^{(m)}, b_2^{(m)}), \qquad (1.25)$$
where the index p denotes quantities related to the prototype and the
index m denotes quantities related to the model. Consider that b1 and
b2 are dependent variables so that they can be expressed in terms of
a1 , a2 and a3 in both model and prototype systems. Using dimensional
analysis we find for both phenomena

$$\Pi^{(p)} = \Phi(\Pi_1^{(p)}, \Pi_2^{(p)}), \qquad (1.26)$$

and

$$\Pi^{(m)} = \Phi(\Pi_1^{(m)}, \Pi_2^{(m)}), \qquad (1.27)$$
where the function Φ must be the same for the model and the prototype.
By the definition of similar phenomena, the dimensionless quantities
must be identical in both cases, i.e., in the prototype and in the
model:

$$\Pi_1^{(m)} = \Pi_1^{(p)}, \qquad \Pi_2^{(m)} = \Pi_2^{(p)}. \qquad (1.28)$$
It also follows that the governed dimensionless parameter satisfies

$$\Pi^{(m)} = \Pi^{(p)}. \qquad (1.29)$$

Returning to dimensional variables, we get from the above equation

$$a^{(p)} = a^{(m)} \left(\frac{a_1^{(p)}}{a_1^{(m)}}\right)^{p} \left(\frac{a_2^{(p)}}{a_2^{(m)}}\right)^{q} \left(\frac{a_3^{(p)}}{a_3^{(m)}}\right)^{r}, \qquad (1.30)$$

which is a simple rule for recalculating the results of measurements
on the similar model for the prototype, for which direct measurements
may be difficult to carry out for one reason or another.
The conditions for similarity of the model to the prototype (equality
of the similarity parameters $\Pi_1, \Pi_2$ for both phenomena) show that it is
necessary to choose the governing parameters $b_1^{(m)}, b_2^{(m)}$ of the model
so as to guarantee the similarity of the model to the prototype:

$$b_1^{(m)} = b_1^{(p)} \left(\frac{a_1^{(m)}}{a_1^{(p)}}\right)^{\alpha_1} \left(\frac{a_2^{(m)}}{a_2^{(p)}}\right)^{\beta_1} \left(\frac{a_3^{(m)}}{a_3^{(p)}}\right)^{\gamma_1}, \qquad (1.31)$$

and

$$b_2^{(m)} = b_2^{(p)} \left(\frac{a_1^{(m)}}{a_1^{(p)}}\right)^{\alpha_2} \left(\frac{a_2^{(m)}}{a_2^{(p)}}\right)^{\beta_2} \left(\frac{a_3^{(m)}}{a_3^{(p)}}\right)^{\gamma_2}, \qquad (1.32)$$

whereas the model parameters $a_1^{(m)}, a_2^{(m)}, a_3^{(m)}$ can be chosen arbitrarily.
The simple definitions and statements presented above describe the
entire content of the theory of similarity: we emphasize that there is
nothing more to this. We will now give a few examples to help us grasp
the idea.
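Here is a first, minimal example (our own, with made-up numbers): treat the simple pendulum as both model and prototype. Equality of the similarity parameter $\Pi = \omega/\sqrt{g/l}$ for the two similar systems is exactly the recalculation rule of Eq. (1.30) specialized to this case.

```python
import math

# Sketch: a small "model" pendulum predicts the period of a large
# "prototype".  Similarity demands equal Pi, so the periods satisfy
# t_p = t_m * sqrt(l_p / l_m); the lengths here are hypothetical.

g = 9.8                                   # m/s^2, same for both systems
l_model, l_proto = 0.25, 4.0              # pendulum lengths in meters

t_model = 2 * math.pi * math.sqrt(l_model / g)    # "measured" on the model
t_proto = t_model * math.sqrt(l_proto / l_model)  # recalculated for the prototype

print(t_model, t_proto)
print(2 * math.pi * math.sqrt(l_proto / g))       # direct check: identical
```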

1.5 Self-similarity
The notion of self-similarity will be an underlying theme for the rest
of the book. In a way the word self-similarity needs no explanation.
Perhaps the best way to perceive the meaning of self-similarity is to
consider an example and at this stage there is no better example than
a cauliflower. The cauliflower head contains branches or parts, which
when removed and compared with the whole is found to be very much
the same except it is scaled down. These isolated branches can again
be decomposed into smaller parts, which again look very similar to the
whole as well as to the branches. Such self-similarity can easily be car-
ried through for about three to four stages. After that the structures
are too small for further dissection. Of course, from the mathe-
matical point of view the property of self-similarity may be continued
through infinitely many stages, though in the real world such properties persist
for only a few stages.
For instance, snowflakes often display self-similar branching patterns
and aggregating colloidal particles have growth patterns that are sta-
tistically self-similar. Self-similarity, in fact, is so widespread in nature
that examples are exceedingly easy to find. Consequently, the main ideas of
self-similarity are not difficult to grasp. Hopefully, when we are done
with this chapter, we will have the basic understanding of self-similar
phenomena and when we are done with the whole book we will be
able to appreciate the fact that self-similar phenomena are ubiquitous
in nature, appearing virtually everywhere we look. In our endeavour we
need to spend just a few more minutes laying the foundation for that
knowledge. We begin our exploration with three more examples of self-
similarity.
Firstly, think of a lone leafless tree standing against a gray winter
sky as the background. Needless to mention that branching patterns
are quite familiar to us. There is a single trunk that rises to a region
from which major branches split off. If we follow any one of these ma-
jor branches, we see that they also split into smaller branches, which
split into even smaller branches, and so on. A unit pattern (in this case
one length splitting into two or more thinner branches) is repeated on
ever-smaller size scales until we reach the tree-top. Having this pic-
ture in mind, we can state a simple generic definition: the fundamental
principle of a self-similar structure is the repetition of a unit pattern
on different size scales. Note that the delicate veins of leaves also have
self-similar branching patterns, as do virtually all root systems. A tree
is therefore the epitome of self-similarity.
Secondly, consider the decimal number system, which is probably the oldest
and most useful construction that uses the idea of self-similarity. How-
ever, we use it so much in our daily life that we hardly find time to
appreciate its self-similar properties. Let us look at the meter stick,
which has marks for decimeters (ten of them make a meter), centimeters
(ten of them make a decimeter, a hundred a meter), and millimeters
(ten of them make a centimeter). In a sense a decimeter together with its
marks looks like a meter with its marks, however, scaled down by a
factor of 10. This is not an accident. It is in strict correspondence with
the decimal system. When we say 248 mm, for example, we mean 2
decimeters, 4 centimeters, and 8 millimeters. In other words, the posi-
tion of the figures determines their place value, exactly as in the decimal
number system. One meter has a thousand millimeters to it and when
we have to locate position 248, only a fool would start counting from
left to right from 1 to 248; rather, we would go to the 2 decimeter tick
mark, from there to the 4 centimeter tick mark, and from there to
the 8 millimeter tick mark. Most of us take this elegant procedure for
granted. But somebody who has to convert miles, yards, and inches can
really appreciate the beauty of this system. Finding a position
on the meter stick thus corresponds to descending the branches of a tree;
this tree structure expresses the self-similarity of the
decimal system very strongly.
Thirdly, distant mountain ranges are an example of statistical
self-similarity, as we find a hierarchy of peak/valley morphologies on size
scales ranging from kilometers to centimeters. This is in sharp contrast
to the above-mentioned decimal system. Such self-similarity holds
for almost all the surfaces we encounter, since the majority of them
have a hierarchy of self-similar structure over at least some range of
size scales.
displaying self-similar divots that range in diameter from kilometers to
millimeters. Erosion processes in desert regions yield a wealth of self-
similar surfaces, and areas of snow and ice, from the vast continent of
Antarctica to the piles of snow next to our driveways, have self-similar
structures. There are examples of statistical self-similarity in the plant
kingdom too. Ferns and cedar boughs have pleasing multi-leveled de-
signs that are strongly self-similar. The overall shape of the fronds or
boughs is roughly that of a broad sword with sharp tip and a stem run-
ning along its mid-line. However, a closer look reveals that it is actually
composed of many small swords oriented at right angles to the main
stem. These second-level swords, in turn, are divided into third-level
swords lying roughly perpendicular to the second-level stems. Perhaps
many of us have looked on countless occasions at such seemingly
self-similar structures without really appreciating their true self-similar
design.
One may even find self-similarity in social systems if one thinks
about how our governments, judicial systems, law enforcement agen-
cies or various economic systems work. It is fairly easy to see that they
have almost discrete hierarchical arrangements with federal, state, and local
down to family levels. Basically a similar type of activity occurs at
the different scales, and so we find a familiar design in a new context.
The main principle of self-similarity—same thing on different scales—
can be found in fact in countless examples from the social sectors. The
Internet has grown according to the laws of self-similarity, and its com-
plex multi-scaled networking actually results from the repetition
of a simple law. Multi-scaled networking is reminiscent of the self-similar
networking structure in the brain of living systems. Likewise the growth
patterns and inter-connectedness of cities exhibit similar phenomena on
different scales, the hallmark of self-similarity. We could go on with fur-
ther examples of self-similarity such as lightning bolts, designs on shells,
aggregation of bacteria or metal ions, surfaces of cancer cells, crystal-
lization patterns in agate, scores of scaling laws in biology, quantum
particle paths, gamma-ray burst fluctuations, species distributions or
abundances, drop formation, renormalization in quantum electrody-
namics, and so on. We will have many opportunities in the chapters to
come to add more examples, because it can be said beyond any reasonable
doubt that nature adores self-similarity.
In physics we often investigate problems which we cannot
see with the naked eye, and hence cannot tell straight away whether the
system possesses self-similarity. However, we can attempt to solve them
through modeling, often governed or described by a kinetic
or rate equation approach. Mathematicians and physicists often look for
a scaling or self-similar solution to their respective equations, which is
the solution in the long-time limit. In this limit, the solution usually
assumes a simpler and universal form. An equation is considered to
admit a self-similar or scaling solution if its solution has either a power-
law form or the dynamic scaling form, which are discussed below.

1.6 Dynamic scaling


There are many phenomena which physicists investigate that are
not static but rather evolve probabilistically with time. Our universe is per-
haps one of the best examples, having been expanding ever since the Big
Bang. Similarly, networks like the WWW, the Internet, etc. are
also ever-growing systems. Another example is the polymer degradation
process, where degradation does not occur in the blink of an eye but rather
happens over quite a long time. The spread of biological and computer
viruses, too, does not happen overnight. In such systems we find a cer-
tain stochastic variable x which assumes values that depend on time.
In such cases, we are often interested in knowing the distribution of $x$
at various instants of time, i.e., $f(x, t)$. Now the numerical value of $f$
and the typical or mean value of $x$ may well be very different at ev-
ery instant of measurement. The question is: what happens
to the corresponding dimensionless variables? If the numerical values
of the dimensional quantities are different but the corresponding di-
mensionless quantities remain invariant, then we can argue that the
snapshots of the system at different times are similar. When this hap-
pens we conclude that the system is self-similar, because the system is
similar to itself at different times. Dynamic scaling is the litmus test
showing that an evolving system exhibits such self-similarity.
Let us first consider that one of the variables, say $t$, is an independent
parameter, so that $x$ can be expressed in terms of the time $t$. Using the idea
that the dimension of a physical quantity must obey the power-law monomial
rule, we can write
$$x \sim t^z. \qquad (1.33)$$
It implies that we can choose $t^z$ as the unit of measurement, or yard-stick,
and quantify $x$ in units of $t^z$; the corresponding dimensionless quan-
tity is
$$\xi = x/t^z. \qquad (1.34)$$
Here, the quantity $\xi$ is a number that tells us how many units of $t^z$ we need to
measure $x$. If $t$ is an independent quantity then we can also express $f$ in
terms of $t$ alone, and hence we can write

$$f \sim t^{\theta}. \qquad (1.35)$$

We can, therefore, define yet another dimensionless quantity

$$\Pi = f(x, t)/t^{\theta} = F(t^z\xi, t) \equiv F(t, \xi), \qquad (1.36)$$

where the exponent $\theta$ is fixed by the dimensional requirement $[f] = [t^{\theta}]$
[17, 18]. Now, the numerical value of $F$ should remain invariant even if
the unit of measurement of $t$ is changed by some factor, since $\Pi$
is a dimensionless quantity. It implies that $F$ must be independent of
$t$, i.e.,
$$F(t, \xi) = \phi(\xi), \qquad (1.37)$$
and hence we write
$$f(x, t) \sim t^{\theta} \phi(x/t^z), \qquad (1.38)$$
where the function φ(ξ ) is known as the scaling function. The function
f (x, t) is said to obey dynamic scaling if it satisfies the above relation.
We thus see that the Buckingham $\pi$-theorem provides a systematic
procedure for obtaining the dynamic scaling form.
The emergence of dynamic scaling is thus deeply rooted
in the Buckingham $\pi$-theorem. Knowing that a system exhibits dy-
namic scaling significantly develops our understanding
about the system. That is, the distribution f (x, t) at various moments
of time can be obtained from one another by similarity transformation

$$x \longrightarrow \lambda^z x, \qquad t \longrightarrow \lambda t, \qquad f \longrightarrow \lambda^{\theta} f, \qquad (1.39)$$

revealing the self-similar nature of the function f (x, t). One way of
verifying the dynamic scaling is to plot dimensionless variables f /tθ as
a function of x/tz of the data extracted at various different times. Then
if all the plots of f vs x obtained at different times collapse onto a single
universal curve, then it is said that the systems at different times are
similar and obey dynamic scaling. Essentially such systems can be
termed temporally self-similar, since the same system is similar
at different times.
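As a concrete check (our own sketch; the Gaussian is a standard example, not one worked in the text), the solution of the diffusion equation obeys dynamic scaling with $\theta = -1/2$ and $z = 1/2$:

```python
import numpy as np

# Sketch: f(x, t) = exp(-x^2 / 4t) / sqrt(4 pi t) obeys dynamic scaling.
# Rescaling data taken at different times makes the curves coincide.

def f(x, t):
    return np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

xi = np.linspace(0.1, 3.0, 5)            # values of the scaling variable
for t in (1.0, 10.0, 100.0):
    x = xi * t**0.5                      # x = xi * t^z with z = 1/2
    print(np.round(f(x, t) * t**0.5, 6)) # f / t^theta, theta = -1/2
# All three rows are identical: the data collapse onto phi(xi).
```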

1.7 Scale-invariance: Homogeneous function


A function is called scale-invariant or scale-free if it retains its form
keeping all its characteristic features intact even if we change the mea-
surement unit (scale). Mathematically, a function f (r) is called scale-
invariant or scale-free if it satisfies
f (λr) = g (λ)f (r) ∀ λ, (1.40)
where g (λ) is a yet unspecified function. That is, one is interested in
the shape of f (λr) for some scale factor λ which can be taken to be a
length or size rescaling. For instance dimensional functions of physical
quantity are always scale-free since they obey power monomial law. In
fact, it can be rigorously proved that the functions that satisfy Eq.
(1.40) should always have power-law form f (r) ∼ r−α .
Proof: Starting from Eq. (1.40), let us first set $r = 1$ to obtain
$f(\lambda) = g(\lambda)f(1)$. Thus $g(\lambda) = f(\lambda)/f(1)$, and Eq. (1.40), with $x$ in place
of $r$, can be written as
$$f(\lambda x) = \frac{f(\lambda)f(x)}{f(1)}. \qquad (1.41)$$
The above equation is supposed to be true for any $\lambda$; we can therefore
differentiate both sides with respect to $\lambda$ to yield
$$x f'(\lambda x) = \frac{f'(\lambda)f(x)}{f(1)}, \qquad (1.42)$$
where $f'$ indicates the derivative of $f$ with respect to its argument. Now
we set $\lambda = 1$ and get
$$x f'(x) = \frac{f'(1)f(x)}{f(1)}. \qquad (1.43)$$
This is a first-order ordinary differential equation which has the solution
$$f(x) = f(1)\, x^{-\alpha}, \qquad (1.44)$$
where $\alpha = -f'(1)/f(1)$ [58]. It clearly proves that the power-law type
is the only solution that can satisfy Eq. (1.40).
In fact, it can also be proved rigorously that g (λ) too has the power-
law solution.
Proof: Suppose we make two changes of scale, first by a factor of $\mu$
and then by a factor of $\lambda$. The homogeneity condition of the function
$f(r)$ implies

$$f\big(\lambda(\mu r)\big) = g(\lambda)f(\mu r) = g(\lambda)g(\mu)f(r). \qquad (1.45)$$

A similar result can also be obtained by a single change of scale as
follows:
$$f\big((\lambda\mu) r\big) = g(\lambda\mu)f(r). \qquad (1.46)$$
By equating the above two expressions we
obtain
$$g(\lambda\mu) = g(\lambda)g(\mu). \qquad (1.47)$$
Now any continuous function that satisfies the above functional equa-
tion must either be identically zero or else have a simple power-law
form in its argument. To prove this we take the derivative
with respect to $\mu$ and find
$$\frac{\partial}{\partial\mu}\, g(\lambda\mu) = \lambda g'(\lambda\mu) = g(\lambda)g'(\mu), \qquad (1.48)$$
where prime indicates the derivative with respect to the argument of
the corresponding function. We now set $\mu = 1$ and write $g'(1) = p$; then we
can immediately write
$$\frac{g'(\lambda)}{g(\lambda)} = \frac{p}{\lambda}. \qquad (1.49)$$
Integrating it we find
$$\ln[g(\lambda)] = p \ln[\lambda] + c, \qquad (1.50)$$
or
$$g(\lambda) = e^c \lambda^p. \qquad (1.51)$$
Now from the above equation we have $g'(\lambda) = p\, e^c \lambda^{p-1}$, and the defi-
nition $g'(1) = p$ implies that the integration constant $c$ has the value
zero. Thus
$$g(\lambda) \sim \lambda^p, \qquad (1.52)$$
which completes the proof.
Power-law distributions $f(x) \sim x^{-\alpha}$ are regarded as scale-
free because $f(\lambda x)/f(x)$ depends only on $\lambda$, not on $x$, and hence the
distribution requires no characteristic or typical scale. The idea is that
if the unit of measurement of $x$ is increased (decreased) by a factor of
$\lambda$, then the numerical value of $f$ is decreased (increased) by a factor of
$g(\lambda)$, but the overall shape of the function remains invariant. It is called
scale-free also because it ensures self-similarity in the sense that the
function looks the same at whatever scale we look at it. One immediate
consequence of the scale-invariance of $f(r)$ is that if we know its
value at one point, say at $r = r_0$, and we know the functional form of
g (λ), then the function f (r) is known everywhere. This follows because
any value of r can always be written in the form r = λr0 , and

f (λr0 ) = g (λ)f (r0 ). (1.53)

The above equation says that the value f (r) at any point is related to
the value of f (r0 ) at a reference point r = r0 by a simple change of
scale. Of course, this change of scale is, in general, not linear.
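A two-line numerical check of this scale-free property (our own sketch): the ratio $f(\lambda x)/f(x)$ is constant in $x$ for a power law but not for an exponential.

```python
import numpy as np

# Sketch: for a power law, f(lam*x)/f(x) depends only on lam, so the
# ratio is the same at every x; for an exponential it varies with x,
# so the exponential carries a characteristic scale.

x = np.array([0.5, 1.0, 2.0, 4.0])
lam = 3.0

power = lambda x: x**-2.5
expo = lambda x: np.exp(-x)

print(power(lam * x) / power(x))   # constant: lam**-2.5 everywhere
print(expo(lam * x) / expo(x))     # varies with x -> not scale-free
```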

1.7.1 Scale invariance: Generalized homogeneous functions

We shall see later, in the static scaling hypothesis, that the
thermodynamic potentials are homogeneous functions of the following form.
A function $f(x, y)$ of two independent variables $x$ and $y$ is said to be a
generalized homogeneous function if, for all values of the parameter $\lambda$,
$f(x, y)$ satisfies
$$f(\lambda^a x, \lambda^b y) = \lambda f(x, y), \qquad (1.54)$$
where a and b are arbitrary numbers [5, 10]. In analogy with the homo-
geneous function why doesn’t one define the generalized homogeneous
function as $f(\lambda x, \lambda y) = \lambda^p f(x, y)$? The answer is the following. It is
worth noting that Eq. (1.54) cannot be further generalized to an
equation of the form

f (λa x, λb y ) = λp f (x, y ), (1.55)

because without loss of generality we can choose $p = 1$ in the above
equation; i.e., a function $f(x, y)$ that satisfies Eq. (1.55) also satisfies

f (λa/p x, λb/p y ) = λf (x, y ), (1.56)

and the converse statement is also valid. Since the above equation is of
the form of Eq. (1.54), we conclude that Eq. (1.55) is no more general
than Eq. (1.54). Other equivalent forms of Eq. (1.54) that frequently
appear in the literature on scaling laws are

f (λx, λb y ) = λp f (x, y ) (1.57)

and
f (λa x, λy ) = λp f (x, y ). (1.58)
The main point to note here is that there are at least two undetermined
parameters $a$ and $b$ for a generalized homogeneous function. So the
litmus test for a generalized homogeneous function is given by Eq.
(1.54). Since it holds for any $\lambda$, we can choose $\lambda^a = 1/x$.
Then Eq. (1.55) becomes

$$f(1, y/x^{b/a}) = x^{-p/a} f(x, y), \qquad (1.59)$$

or
$$f(x, y) = x^{p/a}\, \Phi(y/x^{b/a}), \qquad (1.60)$$
where $\Phi(u) \equiv f(1, u)$. It implies that the two variables $x$ and $y$ of the
function $f$ are combined into a single term $y/x^{b/a}$ in a non-trivial way.
Such simplification has far-reaching consequences, as it has been the
primary essence of Widom scaling in the theory of phase transitions and
critical phenomena.
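A quick verification of Eq. (1.54) in this reduced form (our own sketch; $\Phi = \tanh$ and the exponents are arbitrary choices):

```python
import numpy as np

# Sketch: with a = 2, b = 3 and f(x, y) = x**(1/a) * Phi(y / x**(b/a))
# for an arbitrary Phi, the function satisfies
# f(lam^a x, lam^b y) = lam * f(x, y) for every lam.

a, b = 2.0, 3.0
Phi = np.tanh                      # any well-behaved scaling function

def f(x, y):
    return x**(1 / a) * Phi(y / x**(b / a))

x, y = 1.7, 0.9
for lam in (0.5, 2.0, 10.0):
    print(f(lam**a * x, lam**b * y), lam * f(x, y))   # equal pairs
```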

1.7.2 Dimension functions are scale-invariant


Dimension functions of physical quantities are always homogeneous with
respect to the units of measurement, since they are power-law monomials:
$$f(\lambda^a P, \lambda^b Q, \lambda^c R) = \lambda^{\theta} f(P, Q, R), \qquad (1.61)$$
where $\theta = a\alpha + b\beta + c\gamma$. Note that there exists a dimension function for
every physical quantity, and these are always of power-law monomial type.
We can therefore conclude that physical quantities are homogeneous
functions which satisfy Eq. (1.54) and hence have a power-law type
solution. This is the reason why power-law distributions are so ubiquitous
in nature. The property of power laws which makes
them interesting is their scale-free character, since it makes the function
look the same at whatever scale we look at it.

1.8 Power-law distribution


Many things we measure have a typical size or scale, in the sense of
a typical value around which measurements are centered. The heights
of human beings are perhaps the simplest example. Most adult human
beings are about 1.8 m tall. Of course, the actual value will vary accord-
ing to the geographic location of the population. However, the extent
of variation may also depend on sex. Nonetheless, the crucial point is
that we neither see individuals as small as 10 cm nor individ-
uals as tall as 5.0 m. If one plots the heights of the adult persons of any
country or city, one will definitely find that the distribution is
22  Fractal Patterns in Nonlinear Dynamics and Applications

relatively narrow and peaked around a characteristic value. We can cite


another example of the emergence of a typical scale: the blood pressure
of the adult human population, the speeds of cars in miles per hour on
the motorway. For instance, the histogram of speeds is strongly peaked
around 75 mph in this case. Of course the exact peak value may vary
depending on the quality of city, the traffic situation, etc. However,
one thing that remains universal is that there is always a sharp peak
around the specific value which is the point we want to make here. Also
the blood pressure of normal human beings peaks around a typical or
characteristic value that we usually regard as the normal pressure.
But not all things we measure are peaked around a typical value. Some vary over several orders of magnitude. A classic example of this type of behaviour is the sizes of towns and cities, or even the asset sizes of the people of these towns and cities. The largest population of any city in the US is 8.00 million, for New York City, as of the most recent (2000) census. The town with the smallest population is harder to pin down, since it depends on what we call a town. America's smallest town is Duffield, Virginia, with a population of 52. Whichever way you look at it, the ratio of largest to smallest population is at least 150,000. Clearly, this is quite different from what we see for the heights of people.

This observation seems first to have been made by Auerbach, though it is often attributed to Zipf. What does it mean? Let p(x)dx be the fraction of cities with population between x and x + dx; then p(x) exhibits the power-law

p(x) ∼ x^{−α},    (1.62)

where the exponent α is of special interest. Distributions of the form (1.62) are said to follow a power-law.

1.8.1 Examples of power-law distributions


Power-law distributions occur in an extraordinarily diverse range of
phenomena [58]. Some of the examples are given below.

1.8.1.1 Euclidean geometry


How do we measure the size of a set of points that may constitute a line, a square or a cube? Consider that we are to measure the length L of a straight line. To quantify its size we need a yardstick, say of size δ, and use it to find the number N of yardsticks needed to cover the length L. Note that δ here is an arbitrarily chosen size, and hence the number N should depend on the size of the yardstick, i.e.,

N(δ) = L/δ,    (1.63)

which is an integer if a suitable size δ is chosen. Now, if we measure the same length L with a yardstick of smaller size δ′ = δ/n (where n > 0 is an integer), then we will have a new number

N(δ/n) = L/(δ/n).    (1.64)

Combining the above two equations (1.63) and (1.64), one can immediately find that

N(δ/n) = nN(δ).    (1.65)
Extending the idea to the case of a plane, and to an object that occupies space, we can write the generalized relation

N(δ/n) = n^d N(δ),    (1.66)

where d = 1, 2, 3, i.e., d assumes only the integer values corresponding to Euclidean geometry. It implies that if the size of the yardstick is decreased by a factor of n, then the numerical value of the number N is increased by a factor of n^d. It can be rigorously proved that Eq. (1.66) can have none but the inverse power-law solution

N(δ) ∼ δ^{−d},    (1.67)

where d = 1, 2, 3. We can take this as the definition of dimension, which provides a systematic procedure for finding the dimension of an object. For instance, if we have a regular object that occupies a plane and we want to find its dimension, the idea is as follows. We can measure it with a yardstick of area δ × δ to obtain N(δ) = A/δ². We can then measure it with a yardstick of area δ/n × δ/n and collect data for N versus δ by varying the value of n. The plot of ln(N) versus ln(δ) will always be a straight line according to Eq. (1.67), and the magnitude of its slope will be d = 2, which is the dimension of the plane. Finding the dimension following this procedure is well known as the box-counting method in physics and mathematics. The power-law solution is the signature of the fact that the object in question is self-similar.
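As a rough numerical illustration of this box-counting procedure, the following Python sketch (our own construction; the number of sample points and the box sizes are arbitrary choices) covers a filled unit square with boxes of shrinking size δ and fits the slope of ln N versus ln δ.

import numpy as np

rng = np.random.default_rng(1)
# Sample points of a filled unit square -- a compact, two-dimensional object.
pts = rng.uniform(0.0, 1.0, size=(200000, 2))

deltas, counts = [], []
for n in (4, 8, 16, 32, 64):
    delta = 1.0 / n
    # Integer box index of every point; count the distinct occupied boxes.
    idx = np.floor(pts / delta).astype(int)
    deltas.append(delta)
    counts.append(len(np.unique(idx, axis=0)))

# The slope of ln N versus ln(delta) estimates -d, cf. Eq. (1.67).
slope = np.polyfit(np.log(deltas), np.log(counts), 1)[0]
print(f"estimated dimension: {-slope:.2f}")      # close to 2 for the square

Applied to the point sets discussed later in this book, the same procedure returns non-integer estimates.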

1.8.1.2 First return probability


The random walk problem can be best understood using the walk in one dimension, which can be defined as follows. A walker walks along a 1D lattice, say, for simplicity, and before each step the walker flips an honest coin. If it is heads, the walker takes a step forward (or to the right), and if it is tails the walker takes a step backward (or to the left). The coin is unbiased, so that the chances of getting heads or tails are equal. A random walk (RW) is often compared with a drunkard's walk: the walker is so drunk that he may move forward or backward with equal probability, and each step is independent of the steps already taken. The resulting walk is so irregular that one can predict nothing with certainty about the next step. Instead, all we can talk about is the probability of his covering a specific distance in a given time, and that too only in the statistical sense.
Another interesting question one may ask in the context of the RW problem is the following: how long does it take for a random walker to reach a given point for the first time? This is a well-known problem, more generally known as the first passage problem. One defines the first passage probability f(r, t) as the probability that the walker reaches the position r for the first time at time t [52]. We, however, will focus on the special case where the walker starts walking from the origin and we want to know the probability that the walker returns to zero for the first time after time t. This is also known as the gambler's ruin process. The statement of the gambler's ruin problem and its relation with the first return probability is trivially simple. Say a gambler has i dollars out of a total n, and the rule is set so that if a coin is flipped and it turns out to be heads then the gambler gains a dollar, or else loses a dollar. The game continues until the gambler goes broke. However, we will restrict our discussion to only the first return probability.
Consider that we have N independent walkers, all initially at the origin. All of them are set to walk at the same time, and we record the time each takes to return to the origin. Equivalently, one could perform the same experiment with a single walker, let the walker return to the origin N times, and take a record of the time it took each time the walker returned. The record of the corresponding return times will have exactly the same features as that for N walkers. If one now looks at the record, one will definitely find no order, which may lead to the conclusion that there cannot exist any law to characterize the emergent behaviour. However, if the size of the record is large enough, by letting N → ∞, and if it undergoes a systematic statistical processing, then it may lead to finding some order even in this seemingly disordered data record.

We now discuss the processing procedure to extract the first return probability using the records obtained from N independent walks till they return to the origin. We first bin the record into bins of equal size, say of width 0.1. That is, the first bin goes from 0 to 0.1, the second from 0.1

to 0.2, and so forth. We then find what fraction of the total N walks return to zero within these bins, which is actually the frequency of returns within a specific bin. The plot of these fractions against bin position is equivalent to plotting a normal histogram of the record produced by binning it into bins of equal width 0.1. To check whether the corresponding distribution reveals a power-law or not, it is better to plot the histogram on a logarithmic scale. Doing exactly this with the current records gives a straight line, which is the characteristic feature of a power-law distribution when plotted on a logarithmic scale. Besides, we find the magnitude of the slope of the straight line to be exactly 3/2, which is essentially the exponent of the power-law. However, one should also notice that the right-hand end of the distribution is quite noisy, reflecting scarce data points near the tail: each bin near the right-hand end only has a few samples in it, if any at all. Thus the fractional fluctuations in the bin counts are large, which ultimately appears as a fat tail.
We can solve the problem analytically. Let us assume that S_n represents the position of the walker at the end of n seconds. We say that a return to the origin has occurred at time t = n if S_n = 0. We note that this can only occur if n is an even integer. In order to calculate the probability of a return to the origin at time 2m, we only need to count the number of paths of length 2m which begin and end at the origin; the number of such paths is clearly $\binom{2m}{m}$. We first consider u_{2m}, the unconstrained probability of a return to the origin at time t = 2m, in which the walk is allowed to return to zero as many times as it likes before returning there again at time t = 2m. The probability of return to the origin at time 2n is therefore given by

$$u_{2n} = \binom{2n}{n}\, 2^{-2n}, \qquad (1.68)$$

since each path has the probability 2^{−2n}. Note that there are thus u_{2n} 2^{2n} paths of length 2n which have endpoints (0, 0) and (2n, 0). We make the obvious assumption that u_0 = 1, since the walker starts at zero.

A random walker is said to have a first return to the origin at time 2m if m > 0 and S_{2k} ≠ 0 for all k < m. We define f_{2m} as the first return probability at time t = 2m. In analogy with u_{2n}, we can also think of the expression f_{2m} 2^{2m} as the number of paths of length 2m between the points (0, 0) and (2m, 0) such that the displacement S_{2n} versus time (i.e., n) curve does not touch the horizontal axis except at the end points (0, 0) and (2m, 0). Using this idea it is easy to establish a relation between the unconstrained return probability u_{2n} and the first return probability f_{2m}. There are u_{2n} 2^{2n} paths of length 2n which have end points (0, 0) and (2n, 0). The collection of such paths can be partitioned into n sets,


Figure 1.3: First return probability: ln f(t) vs ln(t), with 25 independent realizations superimposed. The straight line, whose slope has magnitude 1.5, clearly confirms the analytical prediction.

depending upon the time of first return to the origin. A path in this collection which has a first return to the origin at time 2m consists of an initial segment from (0, 0) to (2m, 0), in which the path does not touch the time axis, and a terminal segment from (2m, 0) to (2n, 0), with no further restrictions on this segment. Thus, the number of paths in the collection which have a first return to the origin at time 2m is

f_{2m} 2^{2m} × u_{2n−2m} 2^{2n−2m} = f_{2m} u_{2n−2m} 2^{2n}.    (1.69)
If we sum the above equation over m, we obtain the number of paths u_{2n} 2^{2n} of length 2n, and hence we have

$$u_{2n}\, 2^{2n} = \sum_{m=1}^{n} f_{2m}\, u_{2n-2m}\, 2^{2n}, \qquad (1.70)$$

which can be re-written as

$$u_{2n} = \begin{cases} 1 & \text{if } n = 0, \\ \sum_{m=1}^{n} f_{2m}\, u_{2n-2m} & \text{if } n \geq 1, \end{cases} \qquad (1.71)$$

where m is also an integer and we define f_0 = 0, since finding the walker at the origin after 0 steps is not really a return.

We find it convenient to use the generating function approach to solve the above equation for f_{2n}, and hence we define

$$U(z) = \sum_{n=0}^{\infty} u_{2n} z^n, \qquad F(z) = \sum_{n=1}^{\infty} f_{2n} z^n. \qquad (1.72)$$

Then multiplying Eq. (1.71) throughout by z^n and summing over the whole range yields

$$U(z) = 1 + \sum_{n=1}^{\infty}\sum_{m=1}^{n} f_{2m}\, u_{2n-2m}\, z^n = 1 + \sum_{m=1}^{\infty} f_{2m} z^m \sum_{n=m}^{\infty} u_{2n-2m} z^{n-m} = 1 + F(z)U(z). \qquad (1.73)$$

So we can immediately obtain

$$F(z) = 1 - \frac{1}{U(z)}. \qquad (1.74)$$

Now, if we can find a closed-form solution for the function U(z), we will also be able to find a closed-form solution for F(z). The function U(z), however, is quite easy to calculate. The probability u_t that the walker is at position zero after t steps can be obtained from

$$P(m, N) = \frac{N!}{\left(\frac{N+m}{2}\right)!\left(\frac{N-m}{2}\right)!}\left(\frac{1}{2}\right)^N. \qquad (1.75)$$

Here, P(m, N) is the probability that the walker is at position m after N random steps. By setting m = 0 and N = 2n in Eq. (1.75), since we need an even number of steps to return to the origin, we obtain

$$u_{2n} = 2^{-2n} \binom{2n}{n}, \qquad (1.76)$$

so

$$U(z) = \sum_{n=0}^{\infty} \binom{2n}{n} \frac{z^n}{4^n} = \frac{1}{\sqrt{1-z}}. \qquad (1.77)$$

Therefore, we have

$$F(z) = 1 - \sqrt{1-z}. \qquad (1.78)$$

We find it worthwhile to note that

$$F'(z) = \frac{U(z)}{2}, \qquad (1.79)$$

where the prime on the function F indicates differentiation with respect to its argument. Using Eq. (1.76) in the definition of U(z) we obtain

$$U(z) = \sum_{n=0}^{\infty} 2^{-2n} \binom{2n}{n} z^n. \qquad (1.80)$$

Substituting this into Eq. (1.79), integrating term by term with respect to z, and comparing coefficients of z^m, we immediately obtain

$$f_{2m} = \frac{u_{2m-2}}{2m} = \frac{\binom{2m-2}{m-1}}{m\, 2^{2m-1}} = \frac{\binom{2m}{m}}{(2m-1)\, 2^{2m}}, \qquad (1.81)$$

and we then have our solution for the distribution of first return times.
Now consider the form of f_{2n} for large n. Writing out the binomial coefficient as $\binom{2n}{n} = (2n)!/(n!)^2$, we take logs to obtain

$$\ln f_{2n} = \ln(2n)! - 2\ln n! - 2n\ln 2 - \ln(2n-1), \qquad (1.82)$$

and use Stirling's formula $\ln n! \approx n\ln n - n + \frac{1}{2}\ln n$ to get $\ln f_{2n} \approx \frac{1}{2}\ln\frac{2}{n} - \ln(2n-1)$, or

$$f_{2n} \approx \sqrt{\frac{2}{n(2n-1)^2}}. \qquad (1.83)$$

In the limit n → ∞ this implies f_{2n} ∼ n^{−3/2}, or

f_t ∼ t^{−3/2}.    (1.84)

This shows that the distribution of return times falls off as an inverse power of the time t, a power-law decay f_t ∼ t^{−α} characterized by the exponent α = 3/2.
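The exact distribution (1.81) and the asymptotic law (1.84) can be compared directly. The sketch below (ours, in Python) evaluates f_{2m} exactly, checks that t^{3/2} f_t settles at a constant (which works out to √(2/π)), and verifies numerically that the first-return probabilities sum to 1, as F(1) = 1 in Eq. (1.78) demands: a one-dimensional walker eventually returns with certainty.

from math import comb, pi, sqrt

# Exact first-return probability f_{2m} from Eq. (1.81).
def f_exact(m):
    return comb(2 * m, m) / ((2 * m - 1) * 4**m)

# Tail behaviour: t**(3/2) * f_t approaches the constant sqrt(2/pi).
for m in (10, 100, 1000):
    t = 2 * m
    print(t, t**1.5 * f_exact(m), sqrt(2 / pi))

# Normalization: sum_m f_{2m} -> 1, computed via the recurrence
# f_{2(m+1)} = f_{2m} * (2m - 1) / (2m + 2), which follows from Eq. (1.81).
total, f = 0.0, 0.5            # f_2 = 1/2
for m in range(1, 10**6):
    total += f
    f *= (2 * m - 1) / (2 * m + 2)
print(total)                   # close to 1 (converges slowly, like 1/sqrt(m))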

1.8.2 Extensive numerical simulation to verify power-law first return probability
Consider N independent walkers, all initially at the origin and all set to walk at the same time; we record the time each needs to return to the origin from which it started. Equivalently, one could perform the same experiment with a single walker and let it return to the origin N times, recording each return time. As discussed in the previous section, the raw record of return times shows no apparent order, but if the record is made large enough and processed systematically, order emerges. We bin the record into bins of equal width, find the fraction of the total N walks whose return time falls in each bin, and plot the resulting histogram on a logarithmic scale. Doing exactly this gives a straight line, the characteristic feature of a power-law distribution, although the right-hand end of the distribution is quite noisy: each bin near the tail holds only a few samples, if any at all, so the fractional fluctuations in the bin counts are large and ultimately appear as a fat tail.
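A minimal implementation of this experiment is sketched below in Python (our own code, not the authors'; the number of walkers N and the cutoff t_max on the walk length are arbitrary choices). It simulates the walkers, bins the recorded first-return times logarithmically, and fits the slope of the histogram on logarithmic scales.

import numpy as np

rng = np.random.default_rng(2)
N, t_max = 50000, 10**4            # number of walkers and cutoff (our choices)

def first_return_time(rng, t_max):
    # First time a +-1 random walk started at 0 is back at 0 (None if never).
    steps = 2 * rng.integers(0, 2, size=t_max) - 1
    path = np.cumsum(steps)
    hits = np.flatnonzero(path == 0)
    return int(hits[0]) + 1 if hits.size else None

times = np.array([t for t in (first_return_time(rng, t_max)
                              for _ in range(N)) if t is not None])

# Logarithmically binned, normalized histogram, then a power-law fit.
bins = np.logspace(np.log10(2), np.log10(t_max), 20)
hist, edges = np.histogram(times, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]
print(f"fitted exponent: {slope:.2f}")   # close to the predicted -3/2

The fitted exponent wanders slightly with the binning and the cutoff, but stays close to −3/2, exactly as Fig. 1.3 and Eq. (1.84) suggest.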
• Zipf's law: In the 1930s Zipf, the Harvard professor of linguistics, found that the frequency of occurrence f of some event, as a function of its rank r (the rank being determined by the frequency of occurrence itself), follows a power-law f ∼ r^{−k} with exponent close to unity. In particular, he found that the frequency of words in English texts follows such a power-law. Similar distributions are seen for words in other languages too.
• Power-laws have also been observed in earthquakes, as found by Gutenberg and Richter in 1954. According to Gutenberg and Richter, the number N(E) of earthquakes which release a certain amount of energy E has the form N(E) ∼ E^{−α} with α = 1.5, independent of the geographic region. Such a power-law implies that it is meaningless to define a typical or average strength of an earthquake, as would correspond to the peak of a Gaussian distribution. It also implies that the same mechanism is responsible for earthquakes of all sizes, including the largest ones. The power-law distribution suggests that although large earthquakes are rare, they are expected to occur occasionally and do not require any special mechanism. This may oppose our physical intuition, in the sense that we are used to thinking that small disturbances lead to small consequences and that some special mechanism is required to produce large effects. It has been found that there exists a power-law relation between the amplitude (strength) and the frequency of occurrence.
In fact, power-law distributions are found in an extraordinarily diverse range of phenomena, including solar flares, computer files and wars, the numbers of papers scientists write, the number of citations received by papers, the number of hits on web pages, the sales of books, music recordings and almost every other branded commodity, and the numbers of species in biological taxa, just to name a few.
Chapter 2

Fractals

2.1 Introduction
Benoit B. Mandelbrot (20 November 1924–14 October 2010), born in Poland, brought up in France, and later living and working in America, added a new word to the scientific vocabulary through his creative work: fractal, a term he himself coined [8, 13]. He was Sterling Professor of Mathematical Sciences at Yale University and IBM Fellow Emeritus in Physics at the IBM T.J. Watson Research Center. He was a maverick mathematician who conceived, enriched and at the same time popularized the idea of fractals almost single-handedly. With the idea of the fractal he revolutionized the notion of geometry, generating widespread interest in almost every branch of science. He presented his new concept through his monumental book 'The Fractal Geometry of Nature' in an awe-inspiring way, and ever since then it has remained the standard reference for both the beginner and the researcher. The advent in recent years of inexpensive computer power and graphics has led to the study of nontraditional geometric objects in many fields of science, and the idea of fractals has been used to describe them. In a sense, the idea of the fractal has brought many seemingly unrelated subjects under one umbrella. Mandelbrot himself wrote a large number of scientific papers dealing with the fractal geometry of things as diverse as price changes and salary distributions, turbulence, the statistics of errors in telephone messages, word frequencies in written texts, and aggregation and fragmentation processes, just to name a few. To make his technical papers more accessible to the scientific community he later wrote two more books on fractals, which have inspired many to use fractal geometry in their own fields of research.
In fact, the history of describing natural objects using geometry is as old as the advent of science itself. Traditionally, lines, squares, rectangles, circles, spheres, etc. have been the basis of our intuitive understanding of geometry. However, nature is not restricted to such Euclidean objects, which are characterized typically by integer dimensions only. Yet we confined ourselves for so long to only integer dimensions. Even now, in the early stage of our education we learn that objects which have only length are one dimensional, objects with length and width are two dimensional, and those with length, width and breadth are three dimensional. Did we ever question why we jump only between integer dimensions? The answer, to the best of my knowledge, is: never. We never questioned whether there exist objects with non-integer dimensions. However, one person did, and he was none other than Mandelbrot. He was bewildered by such thoughts for many years. He realized that nature is not restricted to Euclidean or integer-dimensional space. Instead, most of the natural objects we see around us are so complex in shape that the conventional Euclidean or integer dimension is not sufficient to describe them. The idea of fractal geometry appears to be indispensable for characterizing such complex objects, at least quantitatively.
The idea of fractals in fact enables us to see a certain symmetry and order even in an otherwise seemingly disordered and complex system. The importance of the discovery of fractals can hardly be exaggerated. Since its discovery there has been a surge of research activity using this powerful concept in almost every branch of science to gain deep insights into many unresolved problems. Yet there is no neat and complete definition of a fractal. Possibly the simplest way to define a fractal is as an object which appears self-similar under varying degrees of magnification, i.e., one loosely associates a fractal with a shape made of parts similar to the whole in some way. There is nothing new in this; in fact, all Euclidean objects also possess this self-similar property. What is new, though, is the following: objects which are now known as fractals were previously regarded as geometric monsters. We could not appreciate the hidden symmetry they possess. Thanks to the idea of fractals, we can now appreciate them and also quantify them by a non-integer exponent called the fractal dimension. The numerical value of the fractal dimension can characterize the structure and the extent of stringiness or degree of ramification. This definition immediately confirms the existence of scale invariance; that is, such objects look the same on different scales of observation. To understand fractals, their physical origin, and how they appear in nature, we need to be able to model them theoretically. The present chapter is motivated by this desire.

2.2 Euclidean geometry


In order to grasp the notion of fractals it is absolutely necessary first to know what Euclidean geometry is. Look at man-made objects like houses, mosques, temples, churches and the furniture inside them, toys, etc., and check their shapes. You will be amazed to find out that most of these objects are comprised of lines, squares, rectangles, triangles, parallelograms, semi-circles, hemispheres, spheres, etc. We all know that these objects have only integer dimensions such as 1, 2 or 3. We first encountered the concept of such integer dimensions in school, where we are told that objects having only length are one dimensional, those having both length and width are two dimensional, and objects having length, width and height are three dimensional. A more refined definition, however, is taught later in college or university, where we learn that the number of linearly independent axes needed to describe the points of a given object is the geometric dimension of that object. For instance, a line is one dimensional because it takes only one number to uniquely define any point on it; that one number could be the distance from the start of the line. A plane is two dimensional since in order to uniquely define any point on its surface we require two numbers. On the other hand, objects that occupy space and require three linearly independent basis vectors to describe their points are said to have dimension 3.
The concept of dimension is not easy to grasp. It has been one of the major challenges for mathematicians to streamline its meaning and its properties. To this end, mathematicians have ended up with some ten or more different notions of dimension, e.g., topological dimension, Hausdorff-Besicovitch dimension, similarity dimension, box-counting dimension, information dimension, Euclidean dimension, fractal dimension, etc. Some of these are related in one way or another, and their details can be confusing. Some of them are more useful in some situations than others. In the context of fractal geometry it is more than enough to restrict ourselves to an elementary discussion of the Hausdorff-Besicovitch dimension, the box-counting dimension and the similarity dimension. It is noteworthy at this stage that the basic notion of the Hausdorff-Besicovitch dimension and the box-counting dimension are the same, while the latter is more experimentally friendly than the former. In fact, it is the Hausdorff-Besicovitch dimension which we take seriously, as it provides us with a better understanding of fractals than the others. As a passing note, we would like to mention that in quantum mechanics we learn about Hilbert space, which is known as a space of infinite dimensions. Mathematically, it is just an extension of Euclidean geometry, as we require infinitely many linearly independent basis vectors to specify its state vectors.
Yet another description of the dimension of Euclidean geometry is based on observing how the "mass" of an object varies as the size of the object is changed while preserving its shape. We will see later that this plays an important role in defining fractals as well. For instance, we can take a line segment and see what happens if its size is changed by a factor of, say, two, three, or k in general. The line segment can either be a set of points sitting next to each other leaving no holes in between, or it can be full of gaps or holes of the same size, forming a lattice; in either case the points are uniformly distributed. Therefore, if the linear dimension of the line is increased by a factor, say of λ, then the mass of the line is also increased by the same factor, and hence M ∼ L in the case of a line. We can extend this argument to a circular or spherical object. The object in question can be either compact or full of holes of the same size distributed uniformly. If the diameter of a circular object is increased from L to λL, the mass of the object is increased by a factor of λ². On the other hand, the mass is increased by a factor of λ³ if the object is spherical. This relationship between the dimension D_E, the linear scaling L and the resulting increase in mass M can be generalized to the well-known mass-length relation

M = L^{D_E}.    (2.1)

This relationship, which holds for Euclidean shapes, just tells us mathematically what we all know from everyday experience. An object whose mass-length relation follows Eq. (2.1) is said to be compact and at the same time Euclidean. In general, the mass-length relation implies that if the linear dimension of an object is increased by a factor of L while preserving its shape, the mass of the object is increased by a factor of L^d. This mass-length relation is closely related to our intuitive idea of dimension. Note that the mass density of an object is defined as

ρ = (Mass of the object)/(Volume of the space occupied by the object).    (2.2)

Now, if the dimension d of the object and the dimension D_E of the embedding space coincide, then the object is said to be compact, since its mass density ρ ∼ M/L^d is

ρ ∼ L^0 = const.    (2.3)

That is why, when we are to draw 1d, 2d or 3d lattices, we decorate points ensuring that the density of lattice points is constant, while having the luxury of choosing the spacing between lattice points. Do all the objects (natural or man-made) we see around us follow this mass-length relation with D_E strictly an integer? An answer to this question is central to this chapter.
Some of the key ideas that we have learnt here are:
• Traditionally, lines, squares, rectangles, circles, spheres, etc., have been the basis of our intuitive understanding of geometry; they correspond to integer dimensions, and this is known as Euclidean geometry.
• Self-similarity in Euclidean objects is far too obvious; even a layman can appreciate it.
• Are there objects in nature that cannot be described by conventional Euclidean geometry?
• The question is: why do we jump to integer values like 0 to 1, 1 to 2 and 2 to 3? We in fact took the idea of integer dimension for granted, and hence prior to the 80s we never even questioned whether it was possible for an object to have non-integer dimensions; at least I did not.

2.3 Fractals
Let us ask: how long is the coast of the Bay of Bengal, or the coast of Britain? Finding an answer to this question lies in appreciating the fact that there are curves twisted so much that their length depends on the size of the yardstick we choose to measure them, and there are surfaces that fold so wildly that they fill space, whose size too depends on the size of the yardstick we choose to measure them. Consider that we are to measure the size of the coast of the Bay of Bengal, the longest beach in the world. Obviously, the measured size will be larger if we use a yardstick 1.0 cm long than if we use one 1.0 m long, since in the former case we can capture finer details of the twists of the coast than in the latter. Likewise, there are curves, surfaces and volumes in nature that can be so complex and wild in character that they used to be known as geometric monsters. However, with the advent of fractal geometry they are no longer called so. The idea of fractal geometry, vis-a-vis the fractal dimension, provides a systematic measure of the degree of inhomogeneity in a structure. The two key ingredients for any geometric structure to be coined a fractal are (i) self-similarity, at
Prior to Mandelbrot, we always took it for granted that d can only assume integer values, and the existence of objects having non-integer geometric dimension had never come under scrutiny. Now, one may ask the following:
• Are there objects which result in non-integer exponents of the mass-length relation, or of the power-law relation between the number N and the yardstick size δ?
• If yes, then how do they differ from those which have integer exponents?
Assume that we have infinitely many dots and we are to decorate them to constitute a Euclidean object like a line or a plane. What do we really have to ensure to that end? The dots must be embedded in a Euclidean space, making sure that they are uniformly distributed following a regular pattern. It does not matter whether the dots are closely or widely spaced, as long as they are decorated in a regular array. The constant density keeps its signature through the seemingly apparent self-similar appearance. In such cases the relation between the number N and the yardstick δ always exhibits a power-law with integer exponent, coinciding with the dimension of the embedding space. Can we decorate the dots in space so that the power-law relation between N and the corresponding yardstick δ still prevails, where neither the density is constant nor the self-similarity is as apparent as it was in the Euclidean counterpart? It was Mandelbrot who for the first time raised this question, which no one had even thought of before. Indeed, we shall see several such cases in the sections below, where dots can be decorated in Euclidean space to constitute an object such that the power-law relation between N and the corresponding yardstick δ still prevails, but with a non-integer exponent. Below we will discuss a few simple examples where the relation between N(δ) and δ does exhibit an inverse power-law with a non-integer exponent.
The geometrical structures and properties of irregular objects were addressed by Benoit B. Mandelbrot in 1975, when he coined the term "fractal" based on the Latin word fractus, derived from the verb frangere, meaning "to break". Mandelbrot defined a fractal as a set whose Hausdorff dimension is non-integral and strictly greater than its topological dimension. This definition proved to be unsatisfactory in the sense that it excludes a number of sets that ought to be regarded as fractals. In fact, there is no well-defined mathematical definition characterizing a fractal. However, Falconer suggested the following properties, which guarantee a set to be a fractal [57].

 Fractals exhibit a fine structure at arbitrarily small scales: nearly the same at every corner, on a large scale and on a small scale. Further, no smooth part should be observed by scaling.
 A fractal has a Hausdorff dimension that is in general strictly greater than its topological dimension.
 It is recursively defined through initiators.
 A fractal is made of finite copies of itself.
 In general, fractals cannot be easily described by Euclidean geometry!
In general, fractals are rough or fragmented geometric shapes that can be split into parts, where each smaller part is a resemblance of the whole. That is, fractals can be defined through the self-similarity property. According to this property, fractals can be characterized into two types, namely random fractals and deterministic fractals. An object having an approximate or statistical self-similarity is called a "random fractal", and an object having regular or exact self-similarity is called a "deterministic fractal" (for further reading see [11, 27, 28, 57, 64, 74]).

2.3.1 Recursive Cantor set


The best known textbook example of a fractal is the Cantor set. The novelty of this example lies in the simplicity of its construction. The notion of a fractal and its inherent character, such as self-similarity, is almost always introduced to the beginner through this example. In the years 1871–1884 Georg Cantor invented the theory of infinite sets. In the process, Cantor constructed a set that is strictly self-similar at all scales: magnifying a suitable portion of the set reveals a piece that looks like the entire set itself. At the time Cantor discovered these pathological sets, it was believed that they were the purest form of mathematical invention, and that never would they find an application in the natural world. After the invention of fractal geometry this belief proved to be wrong. Today we know that many natural processes produce such self-similar objects, now called fractals. Almost every book written so far on fractals begins its illustration of fractals by introducing the Cantor set, simply because of its simplicity in definition; yet it has all the ingredients that a fractal must have.
The best aid to the comprehension of the Cantor set fractal is an
illustration of its method of construction. This is given in Fig. 2.1 for

Figure 2.1: Construction of the triadic cantor set. The initiator is the unity interval
[0, 1]. The generator removes the open middle third.

the simplest form of the Cantor set, namely the triadic Cantor set. Consider a line segment of unit length described by the closed interval C_0 = [0, 1], which is known as the initiator. We start generating the set by dividing the initiator into three equal parts and removing the open middle third. We now have two closed intervals [0, 1/3] and [2/3, 1] in step 1, each of length 1/3. Thus, C_1 = [0, 1/3] ∪ [2/3, 1]. Now remove the middle thirds from the remaining two intervals, leaving four closed intervals of length 1/9: [0, 1/9], [2/9, 1/3], [2/3, 7/9], [8/9, 1]. This leaves C_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1]. Now remove the middle thirds from each of the remaining four intervals to create eight smaller closed intervals. Perhaps by now the idea of the future steps is clear enough. The triadic Cantor set is the limit C of the sequence C_n of sets. The sets decrease: C_0 ⊇ C_1 ⊇ C_2 ⊇ ···. So we define the limit to be the intersection of the sets,

$$C = \bigcap_{n\in\mathbb{N}} C_n.$$

If we continue the above construction process through infinitely many steps, one may well ask: (i) What will we be left with? (ii) How much of the initiator have we thrown away?

In step one we remove one interval of size 1/3, in step two we remove two intervals of size 1/3², in step three we remove four intervals of size 1/3³, and in general in step n we remove 2^{n−1} intervals of size 1/3^n. Let us add up the lengths of the segments we have removed:

$$\frac{1}{3} + \frac{2}{9} + \frac{4}{27} + \cdots = \frac{1}{3}\sum_{n=0}^{\infty}\left(\frac{2}{3}\right)^n = \frac{1}{3}\,\frac{1}{1-\frac{2}{3}} = \frac{1}{3}\times 3 = 1. \qquad (2.4)$$
We thus find that asymptotically the total length of all the intervals thrown away equals the size of the initiator. Moreover, the set C_n consists of 2^n disjoint closed intervals, each of length (1/3)^n at step n, so the total length of C_n, the sum of these lengths, is (2/3)^n. The limit is

$$\lim_{n\to\infty}\left(\frac{2}{3}\right)^n = 0.$$

So the total length of the Cantor set is zero. This is quite surprising, as it implies that there is hardly anything left in the Cantor set; but as we will soon see, there are tons of points in the Cantor set. Now let us check the leftovers. To this end, we look into the k-th moment M_k of the remaining intervals at the n-th step of the construction process. The moment M_k is

$$M_k = \sum_{i=1}^{2^n} x_i^k. \qquad (2.5)$$

Note that each of the remaining intervals at the n-th step is of equal size x_i = 3^{−n}, and hence we can write

$$M_k = e^{n\ln 2 - kn\ln 3}. \qquad (2.6)$$

It means that if we choose k = ln 2/ln 3 then we find

$$M_{\ln 2/\ln 3} = 1, \qquad (2.7)$$

regardless of the value of n. That is, this result is true even in the limit n → ∞. We can thus conclude that the set is not empty, which is one more surprising feature of the Cantor set. We will now attempt to find the significance of the value k = ln 2/ln 3.
Let us observe carefully which points form the Cantor set. We started constructing the Cantor set with the initiator [0, 1]; the endpoints 0 and 1 belong to all of the subsequent sets C_n, and therefore belong to the intersection C. Taking all the endpoints of all the intervals of all the approximations C_n, we get an infinite set of points, all belonging to C.
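Both surprises — the vanishing total length and the conserved moment — can be checked in a few lines of code. The Python sketch below (our own illustration) generates the intervals of C_n recursively and prints the total length (2/3)^n together with the moment of order k = ln 2/ln 3 of Eq. (2.7).

from math import log

def cantor_step(intervals):
    # Replace each closed interval [a, b] by its two outer thirds.
    out = []
    for a, b in intervals:
        third = (b - a) / 3.0
        out.append((a, a + third))       # left third
        out.append((b - third, b))       # right third
    return out

k = log(2) / log(3)                      # the special moment order ln2/ln3
intervals = [(0.0, 1.0)]                 # the initiator C_0
for n in range(1, 11):
    intervals = cantor_step(intervals)
    total = sum(b - a for a, b in intervals)        # (2/3)**n -> 0
    moment = sum((b - a)**k for a, b in intervals)  # pinned at 1, Eq. (2.7)
    print(n, len(intervals), total, moment)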

2.3.2 von Koch curve


The triadic Koch curve is another textbook-simple example, used to illustrate that a curve can be twisted so badly that, although embedded in the plane, it has a dimension 1 < D < 2. It is a good example of a fractal because it gives a visual illustration of how simple rules can give rise to self-similar fractal structures. One of the interesting features of the Koch curve is that it is continuous but nowhere differentiable. The construction of the Koch curve starts with the initiator, which we may choose as a line segment of unit length L(1) = 1. It is the zeroth generation (k = 0) of the Koch curve. The algorithm for the construction of the Koch

(a) Initiator (b) Step 1

(c) Step 2 (d) Step 3

(e) 10th step

Figure 2.2: Construction of the von Koch curve.



curve is as follows. We divide the initiator into three equal pieces. The middle third is then replaced with two equal segments, both one-third in length, which would form an equilateral triangle with the middle third had it not been removed (step k = 1). This step is regarded as the generator of the Koch curve. At the next step (k = 2), the middle third is removed from each of the four remaining line segments and each of them is replaced by the generator. The resulting curve has N = 4² line segments of size δ = 1/3², and the length of the prefractal curve is L = (4/3)². This process is repeated ad infinitum to produce the Koch curve. Once again the self-similarity of the set is evident: each sub-segment is an exact replica of the original curve. As the number of generations increases, the length of the curve clearly diverges. By applying a reduced generator to all segments of a generation of the curve a new generation is obtained; such a curve is called a pre-fractal. Clearly, at the nth step of the generation we will have N = 4^n line segments of size δ = 3^{−n}. The k-th moment M_k of the intervals is therefore M_k = 4^n 3^{−nk} = e^{n ln 4 − nk ln 3}. It implies that the k-th moment is always equal to the size of the initiator, regardless of the value of n, if we choose k = ln 4/ln 3. We will soon see that this is once again the fractal dimension of the Koch curve.
Let us now follow the detailed calculation of how the Hausdorff-Besicovitch dimension D can be obtained in the case of the triadic Koch curve. The length of the nth prefractal is given by

L(δ) = (4/3)^n.    (2.8)

The length of each of the small line segments at the nth step is

δ = 1/3^n.    (2.9)

Note that the generation number n can be expressed as

n = −ln δ/ln 3.    (2.10)

We then find that the number of segments is N(n) = 4^n, and using Eq. (2.10) we find ln N(δ) = −(ln 4/ln 3) ln δ. We thus immediately find that N(δ) exhibits the power-law

N(δ) ∼ δ^{−ln 4/ln 3}.    (2.11)

We see once again that the exponent of the inverse power-law relation between N(δ) and δ is a non-integer. Note that the Koch curve is embedded in a plane, and hence the dimension of the embedding space is d = 2, which is higher than its Hausdorff-Besicovitch dimension ln 4/ln 3. It implies that the Koch curve is a fractal with fractal dimension d_f = ln 4/ln 3.

Like the Cantor set, the Koch curve too can be constructed by aggregation of intervals of unit size L(0) = 1 and unit mass, so as to find the mass-length relation. In step one, we assemble three intervals of unit size and unit mass next to each other and replace the middle one by an equilateral triangle of sides L(1) = 1, deleting the base. This is the generator, which contains four line segments; hence the mass is M = 4 and the linear size of the curve is L = 3. In step two we place three line segments of length L(2) = 3 next to each other and replace the middle one by an equilateral triangle, deleting the base again. Each of these line segments is then replaced by the generator. The system in step two thus has a mass M = 4² and linear size L = 3². In step three we again place three line segments of length L(3) = 9 next to each other, replace the middle one by an equilateral triangle, and delete the base. Each line segment is then replaced by the generator to give M = 4³ and linear size L = 3³. We continue the process ad infinitum. At the nth step the system has mass M = 4^n and linear size L = 3^n. As before, we can eliminate n in favour of L to obtain

M ∼ L^{ln 4/ln 3}.    (2.12)

Using this mass-length relation in the definition of density, we again find that the density of intervals in the system decreases as the linear size of the system increases, like ρ ∼ L^{d_f − d} with d_f = ln 4/ln 3 and d = 2.
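The construction translates directly into code. The following Python sketch (our own, with an arbitrary number of generations) applies the generator repeatedly to the initiator, confirming N = 4^n segments of length δ = 3^{−n} and the divergence of the length L = (4/3)^n.

import numpy as np
from math import log

def koch_step(points):
    # Replace every segment of a polyline by the four-segment Koch generator.
    new_pts = [points[0]]
    for p, q in zip(points[:-1], points[1:]):
        a = p + (q - p) / 3.0           # end of the first third
        b = p + 2.0 * (q - p) / 3.0     # start of the last third
        d = b - a
        # Apex of the equilateral triangle erected on the (removed) middle third.
        apex = a + d / 2.0 + np.array([-d[1], d[0]]) * (np.sqrt(3) / 2.0)
        new_pts += [a, apex, b, q]
    return new_pts

pts = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]   # the initiator, L(1) = 1
for n in range(1, 7):
    pts = koch_step(pts)
    N, delta = len(pts) - 1, 3.0**(-n)
    print(n, N, N * delta)              # N = 4**n; total length (4/3)**n grows

print("exponent ln4/ln3 =", log(4) / log(3))   # the fractal dimension, ~1.2619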

2.3.3 Sierpinski gasket


We can generalize the idea of the Cantor set to higher dimensions too. For instance, in the case of the Sierpinski gasket the initiator is an equilateral triangle, say S_0, and the generator divides it into four equal triangles, using lines joining the midpoints of the sides, and removes the interior triangle whose vertices are the midpoints of the sides of the initiator, leaving the boundary of that triangle. The resultant set is S_1 ⊆ S_0. Now each of the remaining three triangles is subdivided into four smaller triangles with edge length 1/4, and the three middle triangles are removed. The result is S_2 ⊆ S_1. The generator is then applied over and over again to all the available triangles; this process gives a sequence S_n of sets such that S_0 ⊇ S_1 ⊇ S_2 ⊇ ···. The limit of S_n is called the Sierpinski gasket, thus $S = \bigcap_{n\in\mathbb{N}} S_n$.

It is easy to find out that at the nth step there are 3^n triangles of side δ = 2^{−n}. So the total area of S_n is 3^n · (1/2^n)² · √3/4. As n → ∞ this converges to 0, so the total area of the Sierpinski gasket is 0.

(a) Initiator (b) Step 1

(c) Step 2 (d) Step 3

(e) Step 10

Figure 2.3: Construction of the Sierpinski gasket.

The line segments that make up the boundary of one of the triangles of S_k remain in all the later approximations S_n, n ≥ k. So the set S contains at least all of these line segments. In S_n there are 3^n triangles, each having 3 sides of length 2^{−n}. So the total length of S is at least 3^n · 3 · 2^{−n}. This goes to ∞ as n → ∞, so it makes sense to say that the total length of S is infinite.
At the nth step there are N = 3^n triangles of side δ = 2^{−n}. Eliminating n in favour of δ we find N ∼ δ^{−ln 3/ln 2}. One can also show that M ∼ L^{ln 3/ln 2} and M_{ln 3/ln 2} = 1. Likewise, in the case of the Sierpinski carpet the initiator is assumed to be a square instead of a triangle, and the generator divides it into b² equal-sized squares and removes one of them, say the top-left square if b = 2, or the square in the middle if b = 3. This action is then applied ad infinitum to all the available squares in the subsequent steps. That is, in each application of the generator every surviving square is divided into b² smaller squares and one is removed in a certain prescribed fashion. The resultant fractal is called the Sierpinski carpet. In the nth generation the number of squares is N = (b² − 1)^n, each square being of equal size with sides δ = (1/b)^n. According to the definition of the Hausdorff-Besicovitch dimension, we use δ as the yardstick to measure the set created in the nth generation and find that the number N scales with δ as

N(δ) ∼ δ^{−ln(b²−1)/ln b}.    (2.13)

Following the same procedure we can also obtain a similar relation for the Sierpinski gasket:

N(δ) ∼ δ^{−ln 3/ln 2}.    (2.14)

We thus find that the dimension of the Sierpinski gasket or the Sierpinski carpet is less than the dimension of their embedding space d = 2, and in both cases the exponent is non-integer. The conservation law

M_{ln 3/ln 2} = 1    (2.15)

is obeyed here as well for the Sierpinski gasket, and its analogue, with the corresponding exponent ln(b² − 1)/ln b, holds for the Sierpinski carpet. So far we have looked at constructions of strictly deterministic fractals embedded in a line (Cantor set) and in the plane (Koch curve, Sierpinski gasket and carpet). We can also construct fractals embedded in 3d space, such as the Menger sponge.
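The gasket dimension ln 3/ln 2 ≈ 1.585 can also be recovered numerically. The Python sketch below (ours) generates points of the gasket with the so-called chaos game — repeatedly jumping halfway toward a randomly chosen vertex of the initiating triangle, a standard alternative construction not described above — and then applies the box-counting method described earlier.

import numpy as np

rng = np.random.default_rng(3)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2.0]])

# Chaos game: starting anywhere, jump halfway toward a random vertex;
# the visited points rapidly settle onto the Sierpinski gasket.
n_pts = 200000
choices = rng.integers(0, 3, size=n_pts)
pts = np.empty((n_pts, 2))
p = np.array([0.1, 0.2])
for i, c in enumerate(choices):
    p = (p + vertices[c]) / 2.0
    pts[i] = p

deltas, counts = [], []
for n in (8, 16, 32, 64, 128):
    delta = 1.0 / n
    idx = np.floor(pts / delta).astype(int)   # box index of every point
    deltas.append(delta)
    counts.append(len(np.unique(idx, axis=0)))

slope = np.polyfit(np.log(deltas), np.log(counts), 1)[0]
print(f"estimated dimension: {-slope:.2f}")   # close to ln3/ln2 ~ 1.585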
The aforementioned attractors, the von Koch curve and the Sierpinski gasket, have infinite length as well as zero area. If they are thought of as 1-dimensional objects, they are too big; if they are considered as 2-dimensional objects, they are too small. Thus, the Cantor set, von Koch curve and Sierpinski gasket satisfy the following:

 They cannot be easily described by Euclidean geometry.

 They exhibit a fine structure at arbitrarily small scales: nearly the same at every corner, on a large scale and on a small scale; further, no smooth part is observed by scaling (the self-similar property).

 They have a Hausdorff dimension that is in general strictly greater than the topological dimension. (This point will be verified in a later chapter.)

 They are recursively defined through initiators, in the sense of an iterated function system.

 They consist of finite copies of themselves, since all are obtained from the self-referential equation $F(A) = \bigcup_{k\in\mathbb{N}_n} f_k(A)$.

According to the self-similarity property, fractals can be characterized into two types:

(i) an object having approximate or statistical self-similarity, called a random fractal;

(ii) an object having regular or exact self-similarity, called a deterministic or regular fractal.

The central theme of this book is to present a clear idea of deterministic fractals and random fractals and their applications. The subsequent sections of this chapter concisely present the mathematical background needed to construct deterministic fractals in a metric space. Later, we discuss random fractals.

2.4 Space of fractals

2.4.1 Complete metric space

Definition 2.1 [7, 12]
A metric space (X, d) is a nonempty set X together with a function d : X × X → [0, ∞), called a metric or distance function, satisfying

(i) d(x, y) = 0 ⇐⇒ x = y,

(ii) d(x, y) = d(y, x) for all x, y ∈ X,

(iii) d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ X.

The nonnegative real number d(x, y) measures the distance between x and y in X. These conditions express natural properties of distance:

(i) the distance is zero if and only if the two objects occupy the same position;

(ii) the distance is symmetric with respect to the positions of the objects, i.e., the distance between two points is the same from whichever point it is measured;

(iii) the distance between two points x, y cannot exceed the sum of the distances from x to an intermediate point z and from z to y.

Example 2.1 The set of all real numbers R together with d(x, y) = |x − y| is a metric space, denoted (R, |.|), where

|x| = x if x > 0,  −x if x < 0,  0 if x = 0.

The distance function |.| is called the usual metric on R.

Proof: The range of the modulus function |.| is [0, ∞); hence d(x, y) ≥ 0 for all x, y ∈ R.

(i) |x − y| = 0 ⇐⇒ x = y.

(ii) |x − y| = |−(x − y)| = |y − x|.

(iii) Observe that |x| ≥ ±x for all x ∈ R. Using this inequality it is easy to derive the triangle inequality for |.|:

|x − y| = |x − z + z − y| ≤ |x − z| + |z − y|.

Example 2.2 Let X be a nonempty set. Define

d(x, y) = 0 if x = y,  1 if x ≠ y.

Then d defines a metric on X, and (X, d) is called the discrete metric space.

Example 2.3 Let X = R^n, the set of ordered n-tuples of real numbers. Define

i. d_1(x, y) = Σ_{k=1}^{n} |x_k − y_k|,

ii. d_p(x, y) = (Σ_{k=1}^{n} |x_k − y_k|^p)^{1/p}, p ≥ 1,

iii. d_∞(x, y) = max_{1≤k≤n} |x_k − y_k|,

for all x, y ∈ R^n. Then d_1(x, y), d_p(x, y) and d_∞(x, y) are metrics on R^n.
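These metrics are easy to experiment with numerically. The following Python sketch (our own illustration, with an arbitrary pair of points) evaluates d_1, d_p and d_∞ in R³; for p = 2, d_p reduces to the familiar Euclidean distance.

import numpy as np

def d1(x, y):
    return np.sum(np.abs(x - y))                 # the "taxicab" metric

def dp(x, y, p):
    return np.sum(np.abs(x - y)**p)**(1.0 / p)   # the p-metric, p >= 1

def dinf(x, y):
    return np.max(np.abs(x - y))                 # the maximum metric

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])
print(d1(x, y), dp(x, y, 2), dinf(x, y))         # 5.0, ~3.606, 3.0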

Definition 2.2 Let f be a function from a metric space (X, d_1) to another metric space (Y, d_2), and let a ∈ X. We say that f is continuous at a if, for any given ε > 0, there exists δ > 0 such that

d_1(x, a) < δ implies d_2(f(x), f(a)) < ε for all x ∈ X.

Definition 2.3 A function f : X → Y is said to be uniformly continuous on X if for each ε > 0 there exists δ > 0 such that

d_1(x, y) < δ implies d_2(f(x), f(y)) < ε for all x, y ∈ X.

Example 2.4 Let C[a, b] be the set of all continuous real-valued functions defined on the closed interval [a, b]. Consider the function |.|_∞ : C[a, b] × C[a, b] → [0, ∞) defined by

|f − g|_∞ = sup_{x∈[a,b]} |f(x) − g(x)|.

This function |.|_∞ := d_∞ defines a metric on the set C[a, b], called the uniform metric. The metric d_∞ measures the distance from f to g as the maximum vertical distance from the point (x, f(x)) to (x, g(x)) on the graphs of f and g.

Definition 2.4 Let (X, d) be a metric space. Given a point x ∈ X and a


positive real number r. The subsets

B (x, r) = {y ∈ X : d(x, y ) < r}

and
B[x, r] = {y ∈ X : d(x, y) ≤ r}

are respectively called the open and closed balls centred at x with radius r with respect to the metric d.

Figure 2.4: Distance between f(x) and g(x).

Definition 2.5 Let (X, d) be a metric space and A ⊆ X. A point x ∈ A is said to be an interior point of A if there exists r > 0 such that B(x, r) ⊆ A; in other words, B(x, r) ∩ A = B(x, r).

A point x ∈ X is said to be a limit point of A if for every r > 0, B(x, r) ∩ (A \ {x}) ≠ ∅; in other words, each open ball centred at x contains at least one point of A different from x.

Definition 2.6 Let (X, d) be a metric space. A sequence in X is a function f from the set of all natural numbers N into X. We denote the sequence by (x_n)_{n∈N}, where x_n is the image of n under the function f.

A sequence (x_n) in X is said to converge to the point x ∈ X if, given ε > 0, there exists n_0 ∈ N such that d(x_n, x) < ε for all n ≥ n_0. The point x is called the limit of the sequence (x_n), and we write lim_{n→∞} x_n = x or x_n → x as n → ∞.

Definition 2.7 A sequence {x_n}_{n=1}^∞ in a metric space (X, d) is called a Cauchy sequence if for every ε > 0 there exists n_0 ∈ N such that d(x_n, x_m) < ε whenever n, m ≥ n_0.

Theorem 2.1 Every convergent sequence in a metric space is a Cauchy sequence.

Proof: Let (x_n) converge to x in a metric space (X, d). Given ε > 0, we can choose n_0 ∈ N such that d(x_n, x) < ε/2 for n ≥ n_0. Then, if n, m ≥ n_0,

d(x_n, x_m) ≤ d(x_n, x) + d(x_m, x) < ε/2 + ε/2 = ε.

Hence (x_n) is a Cauchy sequence.

Consider the metric space (Q, |.|) and the sequence x_n = ⌊n√2⌋/n in Q. Clearly, the sequence (x_n) converges to √2 in R. Using Theorem 2.1, one can show that (x_n) is a Cauchy sequence in (Q, |.|). However, it does not converge to a point of Q. Thus the converse of Theorem 2.1 is not true.

Proposition 2.1 Let {x_n}_{n=1}^∞ be a sequence in a metric space (X, d) with d(x_n, x_{n+1}) < 1/2^n for all n. Then {x_n}_{n=1}^∞ is a Cauchy sequence.

Proof: Let ε > 0 and choose n_0 ∈ N such that 1/2^{n_0−1} < ε. Then for all n > m ≥ n_0 we have

$$d(x_m, x_n) \le \sum_{k=m}^{n-1} d(x_k, x_{k+1}) < \sum_{k=m}^{n-1} \frac{1}{2^k} < \sum_{k=m}^{\infty} \frac{1}{2^k} = \frac{1}{2^{m-1}} \le \frac{1}{2^{n_0-1}} < \varepsilon.$$

Thus d(x_m, x_n) < ε for all n, m ≥ n_0, and therefore {x_n}_{n=1}^∞ is a Cauchy sequence.
Definition 2.8 A metric space (X, d) is said to be complete if every
Cauchy sequence in X converges to an element in X.

2.4.2 Banach contraction mapping


Definition 2.9 Let (X, d) be a metric space. A self-mapping f on X is said to be a contraction mapping (contraction) if there exists a constant α ∈ [0, 1) such that

d(f(x), f(y)) ≤ αd(x, y) for all x, y ∈ X.    (2.16)

The constant α is called the contraction factor.

Definition 2.10 (fixed point) Let (X, d) be a metric space and f : X → X be a mapping. A point x* is said to be a fixed point of f if f(x*) = x*.

Example 2.5 Consider the space R with the usual metric.

1. Let f : R → R be the mapping defined by f(x) = ax, where a ∈ R and a ≠ 1. Then x* = 0 is the fixed point of f.

2. If f : R → R is defined by f(x) = x + a, where a ∈ R and a ≠ 0, then f has no fixed point.

3. If f : R → R is defined by f(x) = x, then every point of R is a fixed point of f.

Theorem 2.2 A contraction mapping on a metric space (X, d) is a uniformly continuous function.

Proof: Let ε > 0 be given. If α = 0, then d(f(x), f(y)) = 0 < ε for all x, y ∈ X. If α ∈ (0, 1), choose δ = ε/α. Then, whenever d(x, y) < δ,

d(f(x), f(y)) ≤ αd(x, y) < αδ = ε.

This shows that f is uniformly continuous on X.
The most fascinating result connecting contraction mappings and complete metric spaces was established by Stefan Banach in 1922. It states that every contraction mapping on a complete metric space has a unique fixed point. This result has come to be known as the Banach fixed point theorem.
Theorem 2.3 Let (X, d) be a complete metric space and f be a contraction mapping on X. Then f has a unique fixed point x*.

Proof: Let x_0 ∈ X be an arbitrary point with f(x_0) ≠ x_0 (otherwise there is nothing to prove). Define x_1 = f(x_0) and x_{n+1} = f(x_n) for n ∈ N. We claim that the sequence (x_n) is Cauchy in X. Observe that

$$d(x_n, x_{n+1}) = d(f(x_{n-1}), f(x_n)) \le \alpha\, d(x_{n-1}, x_n) \le \alpha^2 d(x_{n-2}, x_{n-1}) \le \cdots \le \alpha^n d(x_0, x_1).$$

For any n, m ∈ N with n < m,

$$d(x_m, x_n) \le \sum_{i=n}^{m-1} d(x_i, x_{i+1}) \le \sum_{i=n}^{m-1} \alpha^i d(x_1, x_0) \le \frac{\alpha^n}{1-\alpha}\, d(x_1, x_0).$$

Hence, for given ε > 0, choose n_0 large enough that d(x_0, x_1) α^{n_0}/(1 − α) < ε. Then, for n, m ≥ n_0, we have d(x_m, x_n) < ε. This shows that (x_n) is Cauchy. Since X is complete, there exists x* ∈ X such that x_n → x* as n → ∞. Since f is a contraction, it is continuous; therefore x_n → x* implies f(x_n) → f(x*), and it follows that f(x*) = x*.

To prove the uniqueness of the fixed point, assume x* and y* are two distinct fixed points of f, so that f(x*) = x* and f(y*) = y*. Since f is a contraction mapping,

d(x*, y*) = d(f(x*), f(y*)) ≤ αd(x*, y*) < d(x*, y*),

which is a contradiction. Hence x* = y*.


Note 2.1 1. Let X = (0, 1) and f : X → X defined by f (x) = xa
where a ∈ R and a 6= 0. Then f has no fixed point. Here X is not a
complete metric space.
2. If X = [a, ∞), a ≥ 1 and f : X → X defined by f (x) = x + xa then
X is complete metric space but f is not a contraction. Since,
 
a
|f (x) − f (y )| = |x − y| 1 − < |x − y|, ∵ a < xy.
xy
Here α = 1, it gives f is not a contraction. Therefore f has no fixed
point in X.
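To make the contraction principle concrete, the following minimal Python sketch (ours, not from the original text) iterates $x_{n+1} = f(x_n)$ for the contraction $f(x) = \frac{1}{2}\left(x + \frac{2}{x}\right)$ on the complete space $[1, 2]$, where $|f'(x)| \le 1/2$; its unique fixed point is $\sqrt{2}$, tying Theorem 2.3 back to the opening example. The function name and tolerance are arbitrary choices.

```python
# A minimal sketch (ours): Picard iteration x_{n+1} = f(x_n) for a
# contraction on the complete space [1, 2]; by Theorem 2.3 the iterates
# converge to the unique fixed point, here sqrt(2).

def banach_iterate(f, x0, tol=1e-12, max_steps=100):
    x = x0
    for n in range(max_steps):
        x_next = f(x)
        if abs(x_next - x) < tol:   # successive terms become Cauchy-close
            return x_next, n + 1
        x = x_next
    return x, max_steps

f = lambda x: (x + 2.0 / x) / 2.0   # contraction on [1, 2]: |f'(x)| <= 1/2
fixed_point, steps = banach_iterate(f, 1.0)
print(fixed_point, steps)           # ~1.414213..., reached in a few steps
```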

2.4.3 Completeness of the fractal space


Let $(X, d)$ be a complete metric space and $\mathcal{H}(X)$ be the set of all nonempty compact subsets of $X$. We define the distance between a point $x \in X$ and a compact subset $A \in \mathcal{H}(X)$ as
\[
d(x, A) = \inf\{d(x, a) : a \in A\}. \tag{2.17}
\]
Remarks 2.1 Here $A$ is a compact set, so $d(x, a)$ exists and is nonnegative for every $a \in A$. Hence the set $\{d(x, a) : a \in A\}$ is nonempty and bounded below, and by the completeness axiom the infimum in Eq. (2.17) exists.
Suppose $d(x, A) = r$ and consider the open ball centered at $x$ with radius $r$ (see Fig. 2.5). No point of $A$ lies in this open ball, since $r$ is the infimum of distances from $x$ to points of $A$; on the other hand, for any $r_1 > 0$ the open ball centered at $x$ with radius $r + r_1$ contains at least one point of $A$.
Construct a sequence $(a_n)$ in $A$ such that $d(x, a_n) < r + 1/n$. Since $A$ is compact, there exists a subsequence $(a_{n_k})$ of $(a_n)$ that converges to some $a_0$ in $A$. Hence $d(x, a_0) \le r$. By definition of $d(x, A)$ we have $d(x, A) \le d(x, a_0)$, which gives $d(x, A) = d(x, a_0)$. That is, if $A$ is compact then there exists a point $a_0 \in A$ such that $d(x, A) = d(x, a_0)$.
Now define the distance between two sets $A, B \in \mathcal{H}(X)$ as
\[
d(A, B) = \sup\{d(a, B) : a \in A\}. \tag{2.18}
\]

Figure 2.5: Distance from a point x to the set A.

Remarks 2.2 By the definition of supremum, there is a sequence $(a_n)$ in $A$ such that $d(A, B) = \lim_{n\to\infty} d(a_n, B)$. By Remarks 2.1, there exists a sequence $(b_n)$ in $B$ such that $d(a_n, B) = d(a_n, b_n)$. Since $A$ and $B$ are compact, there exist subsequences $(a_{n_k})$ of $(a_n)$ and $(b_{n_k})$ of $(b_n)$ such that $\lim_{k\to\infty} a_{n_k} = a_0$ and $\lim_{k\to\infty} b_{n_k} = b_0$. Therefore,
\[
d(A, B) = \lim_{k\to\infty} d(a_{n_k}, B) = \lim_{k\to\infty} d(a_{n_k}, b_{n_k}) = d(a_0, b_0).
\]

Example 2.6 Let $A = \{0\}$ and $B = \{x \in \mathbb{R} : 0 < x < 1\}$ be subsets of $\mathbb{R}$ with the usual metric. Then $d(A, B) = 0$, but $A \cap B = \emptyset$. Likewise, for the point $a = 0$ we have $d(a, B) = 0$, but $a \notin B$. Thus we observe the following:
• $d(x, B) = 0$ does not imply $x \in B$.
• $d(A, B) = 0$ does not guarantee that $A$ and $B$ have common points.
Note that here $B$ is not a closed subset of $\mathbb{R}$.

Exercise 2.1 Let $A, B, C \in \mathcal{H}(X)$.
1. Show that $d(x, A) = 0$ if and only if $x \in A$.
2. Show that $d(A, B) = 0$ if and only if $A \subseteq B$.
3. If $A \subseteq B$, show that $d(x, B) \le d(x, A)$.
4. Show that $d(A \cup B, C) = \max\{d(A, C), d(B, C)\}$.

Definition 2.11 Let $(X, d)$ be a complete metric space and $\mathcal{H}(X)$ be the associated hyperspace of nonempty compact subsets of $X$. Then the Hausdorff distance between $A$ and $B$ in $\mathcal{H}(X)$ is defined as
\[
H_d(A, B) = \max\{d(A, B), d(B, A)\}. \tag{2.19}
\]

Theorem 2.4 The Hausdorff distance $H_d$ defines a metric on $\mathcal{H}(X)$.

Proof: It is enough to verify the following conditions to conclude that $H_d$ is a metric on $\mathcal{H}(X)$:
i. $H_d(A, B) > 0$ for all $A, B \in \mathcal{H}(X)$ with $A \cap B = \emptyset$;
ii. $H_d(A, B) = 0$ if and only if $A = B$;
iii. $H_d(A, B) = H_d(B, A)$ for all $A, B \in \mathcal{H}(X)$;
iv. $H_d(A, C) \le H_d(A, B) + H_d(B, C)$ for all $A, B, C \in \mathcal{H}(X)$.
i. For any $A, B \in \mathcal{H}(X)$, Eq. (2.17) gives $d(A, B) \ge 0$. Let $A \cap B = \emptyset$. Then $A \not\subseteq B$ and $B \not\subseteq A$, which gives $d(A, B) > 0$ and $d(B, A) > 0$. Thus $H_d(A, B)$, being the maximum of two positive real numbers, is positive.
ii. Assume A = B . Clearly A ⊆ B and B ⊆ A. Therefore d(A, B ) =
0, d(B, A) = 0 and thus Hd (A, B ) = 0. Now assume Hd (A, B ) = 0. This
gives d(A, B ) = 0 and d(B, A) = 0, since Hd is maximum of d(A, B ) and
d(B, A). But d(A, B ) = 0 gives A ⊆ B and d(B, A) = 0 gives B ⊆ A.
Hence A = B .
iii. By the definition, Hd is the maximum of two nonnegative real num-
bers. We know that the maximum value is symmetric with respect to
any pair of numbers, i.e., max{a, b} = max{b, a} for any a, b ∈ R. Hence
Hd (A, B ) = Hd (B, A).
iv. Let $A, B, C \in \mathcal{H}(X)$. For each $a \in A$ there exists $b_0 \in B$ such that $d(a, B) = d(a, b_0)$. Now
\[
\begin{aligned}
d(a, C) &= \inf\{d(a, c) : c \in C\} \le \inf\{d(a, b_0) + d(b_0, c) : c \in C\} \\
&= d(a, b_0) + \inf\{d(b_0, c) : c \in C\} = d(a, B) + d(b_0, C).
\end{aligned}
\]
Since $a \in A$ is arbitrary and $d(b_0, C) \le d(B, C)$, taking the supremum over $a$ gives
\[
\sup_{a \in A} d(a, C) \le \sup_{a \in A}\{d(a, B) + d(b_0, C)\}, \quad \text{i.e.,} \quad d(A, C) \le d(A, B) + d(B, C).
\]

Now, d(A, C ) ≤ d(A, B ) + d(B, C )


≤ max{d(A, B ), d(B, A)} + max{d(B, C ), d(C, B )}
= Hd (A, B ) + Hd (B, C ).
Similarly, d(C, A) ≤ Hd (C, B ) + Hd (B, A).
Hence, Hd (A, C ) = max{d(A, C ), d(C, A)} ≤ Hd (A, B ) + Hd (B, C ).
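For finite sets the suprema and infima in Eqs. (2.17)-(2.19) reduce to maxima and minima, so the Hausdorff distance can be computed directly. The following small Python sketch (ours; the names are illustrative) does so for finite subsets of $\mathbb{R}^n$:

```python
# A small sketch (ours): Hausdorff distance of Definition 2.11 for
# finite point sets in R^n, where inf/sup become min/max.
import math

def d(A, B):
    # directed distance d(A, B) = max_{a in A} min_{b in B} d(a, b), Eq. (2.18)
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # H_d(A, B) = max{d(A, B), d(B, A)}, Eq. (2.19)
    return max(d(A, B), d(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 1.0)]
print(d(A, B), d(B, A), hausdorff(A, B))  # note the directed distances differ
```

The printed pair illustrates that $d(A, B)$ alone is not symmetric, which is exactly why Definition 2.11 takes the maximum of the two directed distances.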

Definition 2.12 (Dilation) Let $A \in \mathcal{H}(X)$ and $\epsilon \ge 0$. Then $A + \epsilon = \{x \in X : d(x, A) \le \epsilon\}$ is called the dilation of $A$.

Theorem 2.5 For all $A \in \mathcal{H}(X)$ and $\epsilon \ge 0$, the set $A + \epsilon$ is closed.

Proof: Let $A \in \mathcal{H}(X)$. If $\epsilon = 0$ then clearly $A + \epsilon = A$, which is closed. Let $\epsilon > 0$ and let $x$ be a limit point of $A + \epsilon$. It is enough to show that $x \in A + \epsilon$, i.e., $d(x, A) \le \epsilon$. As $x$ is a limit point of $A + \epsilon$, there exists a sequence $\{x_n\}$ in $A + \epsilon$ such that $x_n \to x$. Since $x_n \in A + \epsilon$ for all $n$, we get $d(x_n, A) \le \epsilon$ for all $n$. For each $n$ there exists $a_n \in A$ such that $d(x_n, A) = d(x_n, a_n)$; thus $d(x_n, a_n) \le \epsilon$ for all $n$.
Since $A \in \mathcal{H}(X)$ is compact, the sequence $\{a_n\}$ in $A$ has a subsequence $\{a_{n_k}\}$ which converges to a point of $A$, say $a$. Since $x_n \to x$, the subsequence $\{x_{n_k}\}$ of $\{x_n\}$ converges to $x$. Hence $d(x_{n_k}, a_{n_k}) \to d(x, a)$. But $d(x_{n_k}, a_{n_k}) \le \epsilon$ for all $k$, which gives $d(x, a) \le \epsilon$ and hence $d(x, A) \le \epsilon$. Thus $x \in A + \epsilon$.

Theorem 2.6 Let $A, B \in \mathcal{H}(X)$ and $\epsilon > 0$. Then $H_d(A, B) \le \epsilon$ if and only if $A \subset B + \epsilon$ and $B \subset A + \epsilon$.

Proof: Suppose $H_d(A, B) \le \epsilon$. Then $\max\{d(A, B), d(B, A)\} \le \epsilon$, thus $d(A, B) \le \epsilon$ and $d(B, A) \le \epsilon$. It is enough to show that $d(A, B) \le \epsilon \iff A \subset B + \epsilon$. Assume $d(A, B) \le \epsilon$. Then for every $a \in A$ we have $d(a, B) \le \epsilon$; hence $a \in B + \epsilon$ for each $a \in A$, that is, $A \subset B + \epsilon$. Conversely, assume $A \subset B + \epsilon$. Then for every $a \in A$ we get $d(a, B) \le \epsilon$, which implies $d(A, B) \le \epsilon$.

Theorem 2.7 Let $\{A_n\}_{n=1}^{\infty}$ be a Cauchy sequence in $(\mathcal{H}(X), H_d)$ and $\{n_k\}_{k=1}^{\infty}$ be a sequence of positive integers satisfying $n_k < n_{k+1}$ for all $k$. If $\{x_{n_k}\}_{k=1}^{\infty}$ is a Cauchy sequence in $X$ such that $x_{n_k} \in A_{n_k}$ for all $k$, then there exists a Cauchy sequence $\{\tilde{x}_n\}_{n=1}^{\infty}$ in $X$ such that $\tilde{x}_n \in A_n$ for all $n$ and $\tilde{x}_{n_k} = x_{n_k}$ for all $k$.

Proof: Let $\{x_{n_k}\}$ be a Cauchy sequence in $X$ such that $x_{n_k} \in A_{n_k}$ for all $k$. For each $n$ with $n_{k-1} < n \le n_k$ choose $\tilde{x}_n \in A_n$ such that $d(x_{n_k}, A_n) = d(x_{n_k}, \tilde{x}_n)$. Then we get
\[
d(x_{n_k}, \tilde{x}_n) = d(x_{n_k}, A_n) \le d(A_{n_k}, A_n) \le H_d(A_{n_k}, A_n).
\]
Since $x_{n_k} \in A_{n_k}$, we have $d(x_{n_k}, A_{n_k}) = 0$, which gives $\tilde{x}_{n_k} = x_{n_k}$ for all $k$.
Given that $\{x_{n_k}\}$ is a Cauchy sequence in $X$, for given $\epsilon > 0$ there exists a positive integer $N_0$ such that $d(x_{n_k}, x_{n_j}) < \epsilon/3$ for all $k, j \ge N_0$.
Since $\{A_n\}_{n=1}^{\infty}$ is Cauchy in $(\mathcal{H}(X), H_d)$, there exists $N_1 \ge n_{N_0}$ such that $H_d(A_n, A_m) < \epsilon/3$ for all $n, m \ge N_1$. If $n, m \ge N_1$, then there exist integers $j, k \ge N_0$ such that $n_{k-1} < n \le n_k$ and $n_{j-1} < m \le n_j$. Then
\[
\begin{aligned}
d(\tilde{x}_n, \tilde{x}_m) &\le d(\tilde{x}_n, x_{n_k}) + d(x_{n_k}, x_{n_j}) + d(x_{n_j}, \tilde{x}_m) \\
&= d(x_{n_k}, A_n) + d(x_{n_k}, x_{n_j}) + d(x_{n_j}, A_m) \\
&\le H_d(A_{n_k}, A_n) + d(x_{n_k}, x_{n_j}) + H_d(A_{n_j}, A_m) \\
&< \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon.
\end{aligned}
\]
Hence $\{\tilde{x}_n\}_{n=1}^{\infty}$ is a Cauchy sequence in $X$ such that $\tilde{x}_n \in A_n$ for all $n$ and $\tilde{x}_{n_k} = x_{n_k}$ for all $k$.

Theorem 2.8 Let $\{A_n\}_{n=1}^{\infty}$ be a sequence in $(\mathcal{H}(X), H_d)$ and let $A$ be the set of all points $x \in X$ such that there is a sequence $\{x_n\}_{n=1}^{\infty}$ that converges to $x$ and satisfies $x_n \in A_n$ for all $n$. If $\{A_n\}_{n=1}^{\infty}$ is a Cauchy sequence, then the set $A$ is closed and nonempty.

Proof: Given that $\{A_n\}_{n=1}^{\infty}$ is a Cauchy sequence, there exists an integer $n_1$ such that $H_d(A_m, A_n) < \frac{1}{2}$ for all $m, n \ge n_1$. Similarly, there exists an integer $n_2 > n_1$ such that $H_d(A_m, A_n) < \frac{1}{2^2}$ for all $m, n \ge n_2$. Continuing this process we get a sequence of integers $\{n_k\}_{k=1}^{\infty}$ such that $n_k < n_{k+1}$ for all $k$ and $H_d(A_m, A_n) < \frac{1}{2^k}$ for all $m, n \ge n_k$. Let $x_{n_1}$ be a fixed point in $A_{n_1}$. By Remarks 2.1, there exists a point $x_{n_2} \in A_{n_2}$ such that $d(x_{n_1}, x_{n_2}) = d(x_{n_1}, A_{n_2})$. Then
\[
d(x_{n_1}, x_{n_2}) = d(x_{n_1}, A_{n_2}) \le d(A_{n_1}, A_{n_2}) \le H_d(A_{n_1}, A_{n_2}) < \frac{1}{2},
\]
since $\{A_{n_k}\}_{k=1}^{\infty}$ is a subsequence of the Cauchy sequence $\{A_n\}_{n=1}^{\infty}$. Similarly, there exists $x_{n_3} \in A_{n_3}$ such that
\[
d(x_{n_2}, x_{n_3}) = d(x_{n_2}, A_{n_3}) \le d(A_{n_2}, A_{n_3}) \le H_d(A_{n_2}, A_{n_3}) < \frac{1}{2^2}.
\]
Continuing this process, we can construct $\{x_{n_k}\}_{k=1}^{\infty}$ satisfying $x_{n_k} \in A_{n_k}$ and, for all $k$,
\[
d(x_{n_k}, x_{n_{k+1}}) = d(x_{n_k}, A_{n_{k+1}}) \le d(A_{n_k}, A_{n_{k+1}}) \le H_d(A_{n_k}, A_{n_{k+1}}) < \frac{1}{2^k}.
\]
By Proposition 2.1, $\{x_{n_k}\}_{k=1}^{\infty}$ is a Cauchy sequence. Since $\{x_{n_k}\}$ is a Cauchy sequence with $x_{n_k} \in A_{n_k}$ for all $k$, Theorem 2.7 gives a Cauchy sequence $\{\tilde{x}_n\}$ in $X$ such that $\tilde{x}_n \in A_n$ for all $n$ and $\tilde{x}_{n_k} = x_{n_k}$ for all $k$. As $X$ is complete, the Cauchy sequence $\{\tilde{x}_n\}$ has a limit $\tilde{x} \in X$. Since $\tilde{x}_n \in A_n$ for all $n$, it follows that $\tilde{x} \in A$. Hence $A$ is nonempty.
Suppose $s$ is a limit point of $A$; to prove $A$ is closed, it is enough to show that $s \in A$. Since $s$ is a limit point of $A$, there exists a sequence $\{s_n\}$ in $A \setminus \{s\}$ that converges to $s$, and since $\{s_n\} \subseteq A$, by the definition of $A$ each $s_k$ is a limit of points drawn from the sets $A_n$. In particular, there exists an integer $n_1$ and $x_{n_1} \in A_{n_1}$ such that $d(x_{n_1}, s_1) < 1$. Similarly, there exists an integer $n_2 > n_1$ and $x_{n_2} \in A_{n_2}$ such that $d(x_{n_2}, s_2) < \frac{1}{2}$. Continuing this way we can choose a sequence of integers $\{n_k\}$ with $n_k < n_{k+1}$ and points $x_{n_k} \in A_{n_k}$ such that $d(x_{n_k}, s_k) < \frac{1}{k}$ for all $k$. Then
\[
d(x_{n_k}, s) \le d(x_{n_k}, s_k) + d(s_k, s) < \frac{1}{k} + d(s_k, s) \to 0 \text{ as } k \to \infty.
\]
It follows that $\{x_{n_k}\}$ converges to $s$, and hence $\{x_{n_k}\}$ is a Cauchy sequence (every convergent sequence is Cauchy) with $x_{n_k} \in A_{n_k}$ for all $k$. Theorem 2.7 then guarantees a Cauchy sequence $\{\tilde{x}_n\}$ in $X$ such that $\tilde{x}_n \in A_n$ for all $n$ and $\tilde{x}_{n_k} = x_{n_k}$; its limit is $s$. Therefore $s \in A$, which shows $A$ is closed.

Theorem 2.9 Let $\{A_n\}_{n=1}^{\infty}$ be a sequence of totally bounded subsets of $X$ and let $A$ be any subset of $X$. If for each $\epsilon > 0$ there exists a positive integer $N$ such that $A \subseteq A_N + \epsilon$, then $A$ is totally bounded.

Proof: For given $\epsilon > 0$, choose a positive integer $N$ such that $A \subseteq A_N + \frac{\epsilon}{4}$. Since $A_N$ is totally bounded, choose a finite set $\{x_n \in A_N : 1 \le n \le k\}$ such that $A_N \subseteq \bigcup_{n=1}^{k} B(x_n, \frac{\epsilon}{4})$. Without loss of generality, assume that $B(x_n, \frac{\epsilon}{2}) \cap A \ne \emptyset$ for $1 \le n \le k$ and $B(x_n, \frac{\epsilon}{2}) \cap A = \emptyset$ for $n > k$. For each $1 \le n \le k$, let $y_n \in B(x_n, \frac{\epsilon}{2}) \cap A$. Let $a \in A$. To prove $A$ is totally bounded, it is enough to prove $A \subseteq \bigcup_{n=1}^{k} B(y_n, \epsilon)$, i.e., $a \in B(y_n, \epsilon)$ for some $n$. Now, $a \in A$ gives $a \in A_N + \frac{\epsilon}{4}$, so $d(a, A_N) \le \frac{\epsilon}{4}$. By Remarks 2.1, there exists $x \in A_N$ such that $d(a, x) = d(a, A_N)$, and $x \in B(x_n, \frac{\epsilon}{4})$ for some $n$. Then
\[
d(a, x_n) \le d(a, x) + d(x, x_n) \le \frac{\epsilon}{4} + \frac{\epsilon}{4} = \frac{\epsilon}{2}.
\]
Hence $a \in B(x_n, \frac{\epsilon}{2})$ for this $n$, so $B(x_n, \frac{\epsilon}{2})$ meets $A$ and thus $1 \le n \le k$, with $d(x_n, y_n) < \frac{\epsilon}{2}$. It follows that
\[
d(a, y_n) \le d(a, x_n) + d(x_n, y_n) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.
\]
Therefore for each $a \in A$ there is some $y_n$, $1 \le n \le k$, such that $a \in B(y_n, \epsilon)$; it follows that $A \subseteq \bigcup_{n=1}^{k} B(y_n, \epsilon)$. Hence $A$ is a totally bounded subset of $X$.

Theorem 2.10 Let $(X, d)$ be a metric space and $\mathcal{H}(X)$ the associated hyperspace of nonempty compact subsets of $X$ with Hausdorff metric $H_d$. If $(X, d)$ is complete, then $(\mathcal{H}(X), H_d)$ is complete.

Proof: Let $\{A_n\}_{n=1}^{\infty}$ be a Cauchy sequence in $\mathcal{H}(X)$ and let $A$ be the set of all points $x \in X$ such that there is a sequence $\{x_n\}$ that converges to $x$ and satisfies $x_n \in A_n$ for all $n$. To prove the completeness of $\mathcal{H}(X)$, we need to show that $A$ is a member of $\mathcal{H}(X)$ and that $\{A_n\}$ converges to $A$.
By Theorem 2.8, the set $A$ is a nonempty closed set; hence $A$ is complete, being a closed subset of the complete metric space $(X, d)$. Let $\epsilon > 0$. Since $\{A_n\}$ is a Cauchy sequence, there exists a positive integer $N$ such that $H_d(A_n, A_m) < \epsilon$ for all $m, n \ge N$. Further, Theorem 2.6 gives $A_m \subseteq A_n + \epsilon$ for all $m > n \ge N$. Let $a \in A$ and fix $n \ge N$. By definition of $A$, there exists a sequence $\{x_k\}$ such that $x_k \in A_k$ for all $k$ and $\{x_k\}$ converges to $a$. Theorem 2.5 gives that $A_n + \epsilon$ is closed. Since $x_k \in A_n + \epsilon$ for each $k > n$, it follows that $a \in A_n + \epsilon$. As $a$ was an arbitrary element of $A$, we have shown
\[
A \subseteq A_n + \epsilon. \tag{2.20}
\]
By Theorem 2.9, the set $A$ is totally bounded. Therefore $A$ is compact, being nonempty, complete and totally bounded; thus $A \in \mathcal{H}(X)$.
Let $\epsilon > 0$ and $y \in A_n$ with $n \ge N$. Since $\{A_n\}$ is a Cauchy sequence, there exists a positive integer $N$ such that $H_d(A_m, A_n) < \frac{\epsilon}{2}$ for all $m, n \ge N$, and there exists a strictly increasing sequence of positive integers $\{n_k\}$ with $n_1 > N$ such that $H_d(A_m, A_n) < \frac{\epsilon}{2^{k+1}}$ for all $m, n \ge n_k$. By using Remarks 2.1 we get a sequence $\{x_{n_k}\}$ such that $x_{n_k} \in A_{n_k}$ for all $k$, $d(y, x_{n_1}) < \frac{\epsilon}{2}$ and $d(x_{n_k}, x_{n_{k+1}}) \le \frac{\epsilon}{2^{k+1}}$. Moreover, Proposition 2.1 guarantees that $\{x_{n_k}\}$ is a Cauchy sequence, and Theorem 2.7 gives that the limit $a$ of the sequence $\{x_{n_k}\}$ is a member of $A$. Further,
\[
d(y, x_{n_k}) \le d(y, x_{n_1}) + \sum_{i=1}^{k-1} d(x_{n_i}, x_{n_{i+1}}) < \epsilon.
\]
Since $d(y, x_{n_k}) \le \epsilon$ for all $k$, it follows that $d(y, a) \le \epsilon$ and hence $y \in A + \epsilon$. Thus, there exists $N$ such that $A_n \subseteq A + \epsilon$ for all $n \ge N$. Moreover, from Eq. (2.20), $A \subseteq A_n + \epsilon$ for all $n \ge N$. Hence, by Theorem 2.6, $H_d(A_n, A) \le \epsilon$ for all $n \ge N$. This shows that $\{A_n\}$ converges to $A$. Thus $(\mathcal{H}(X), H_d)$ is complete.

Now we have enough material to present the theory of iterated function systems, and thereby to construct deterministic fractals.

2.5 Construction of deterministic fractals


2.5.1 Iterated function system
Hutchinson introduced the conventional description of deterministic fractals through the theory of iterated function systems. Subsequently, Barnsley developed this theory further, in what is called the Hutchinson-Barnsley theory, in order to define and construct fractals as nonempty compact invariant subsets of a complete metric space obtained via the Banach fixed point theorem (for further reading see [11, 27, 62, 86, 87, 88, 90, 91, 94, 95]). This section concisely discusses the construction of deterministic fractals (or metric fractals) in a complete metric space generated by an IFS of Banach contractions.

Definition 2.13 For n ∈ N, let Nn denote the subset {1, 2, . . . , n} of N.


Consider a finite set of contraction mappings f1 , f2 , . . . , fn on X with con-
traction ratios αk ∈ [0, 1), k ∈ Nn , simply written as (fk )k∈Nn . Then the
system {X ; fk : k ∈ Nn } is called an Iterated Function System (IFS)
or finite iterated function system.

Definition 2.14 Define the self-mapping $F : \mathcal{H}(X) \to \mathcal{H}(X)$ by
\[
F(A) = \bigcup_{k \in \mathbb{N}_n} f_k(A), \quad \text{for all } A \in \mathcal{H}(X). \tag{2.21}
\]
This self-mapping $F$ is called the Hutchinson-Barnsley mapping (HB mapping) on $\mathcal{H}(X)$.

Lemma 2.1 Let $f : \mathcal{H}(X) \to \mathcal{H}(X)$ be defined by $f(A) = \{f(a) : a \in A\}$ for all $A \in \mathcal{H}(X)$. If $f$ is a contraction on $X$ with contraction ratio $\alpha$, then $f$ is a contraction on $\mathcal{H}(X)$ with the same contraction ratio $\alpha$.

Proof: Let $A, B \in \mathcal{H}(X)$. By Eq. (2.17), $d(a, B) = \min\{d(a, b) : b \in B\}$, the infimum being attained since $B$ is compact. Now, $d(f(A), f(B)) = \max\{\min\{d(f(a), f(b)) : b \in B\} : a \in A\}$. Since $f$ is a contraction mapping with contraction ratio $\alpha$,
\[
\begin{aligned}
d(f(A), f(B)) &\le \max\{\min\{\alpha\, d(a, b) : b \in B\} : a \in A\} \\
&= \alpha \max\{\min\{d(a, b) : b \in B\} : a \in A\} = \alpha\, d(A, B).
\end{aligned}
\]
Similarly, $d(f(B), f(A)) \le \alpha\, d(B, A)$. Hence
\[
H_d(f(A), f(B)) = \max\{d(f(A), f(B)), d(f(B), f(A))\} \le \alpha \max\{d(A, B), d(B, A)\} = \alpha H_d(A, B).
\]
This shows that $f$ is also a contraction on $\mathcal{H}(X)$ with contraction ratio $\alpha$.


Theorem 2.11 Let A, B, C, D ∈ H(X ). Then

Hd (A ∪ B, C ∪ D) ≤ max{Hd (A, C ), Hd (B, D)}.

Proof: Let $A, B, C, D \in \mathcal{H}(X)$. Then we have
\[
H_d(A \cup B, C \cup D) = \max\{d(A \cup B, C \cup D), d(C \cup D, A \cup B)\}.
\]
There are two possible cases: i) $H_d(A \cup B, C \cup D) = d(A \cup B, C \cup D)$ and ii) $H_d(A \cup B, C \cup D) = d(C \cup D, A \cup B)$.
Case i: If $H_d(A \cup B, C \cup D) = d(A \cup B, C \cup D)$, then $d(A \cup B, C \cup D) = d(A, C \cup D)$ or $d(A \cup B, C \cup D) = d(B, C \cup D)$. Suppose $d(A \cup B, C \cup D) = d(A, C \cup D)$; then
\[
H_d(A \cup B, C \cup D) = d(A, C \cup D) \le d(A, C) \le H_d(A, C) \le \max\{H_d(A, C), H_d(B, D)\}.
\]
Suppose $d(A \cup B, C \cup D) = d(B, C \cup D)$; then
\[
H_d(A \cup B, C \cup D) = d(B, C \cup D) \le d(B, D) \le H_d(B, D) \le \max\{H_d(A, C), H_d(B, D)\}.
\]

Case ii: If $H_d(A \cup B, C \cup D) = d(C \cup D, A \cup B)$, then $d(C \cup D, A \cup B) = d(C, A \cup B)$ or $d(C \cup D, A \cup B) = d(D, A \cup B)$. Suppose $d(C \cup D, A \cup B) = d(C, A \cup B)$; then
\[
H_d(A \cup B, C \cup D) = d(C, A \cup B) \le d(C, A) \le H_d(A, C) \le \max\{H_d(A, C), H_d(B, D)\}.
\]
Suppose $d(C \cup D, A \cup B) = d(D, A \cup B)$; then
\[
H_d(A \cup B, C \cup D) = d(D, A \cup B) \le d(D, B) \le H_d(B, D) \le \max\{H_d(A, C), H_d(B, D)\}.
\]
Hence $H_d(A \cup B, C \cup D) \le \max\{H_d(A, C), H_d(B, D)\}$.


Theorem 2.12 Let $(X, d)$ be a metric space and $\mathcal{H}(X)$ the associated hyperspace of nonempty compact subsets of $X$ with Hausdorff metric $H_d$. If the $f_k$'s are contraction mappings on $X$ for all $k \in \mathbb{N}_n$, then the HB mapping $F$ is a contraction on $\mathcal{H}(X)$.

Proof: Let $A, B \in \mathcal{H}(X)$ and consider the case $n = 2$. Then, using Theorem 2.11 and Lemma 2.1,
\[
\begin{aligned}
H_d(F(A), F(B)) &= H_d\Big(\bigcup_{k=1}^{2} f_k(A), \bigcup_{k=1}^{2} f_k(B)\Big) \\
&\le \max\{H_d(f_1(A), f_1(B)), H_d(f_2(A), f_2(B))\} \\
&\le \max\{\alpha_1 H_d(A, B), \alpha_2 H_d(A, B)\} \le \alpha H_d(A, B),
\end{aligned}
\]
where $\alpha = \max\{\alpha_k : k \in \mathbb{N}_2\}$. The general case follows by induction on $n$.

Theorem 2.13 Let $(X, d)$ be a complete metric space and $(\mathcal{H}(X), H_d)$ the associated Hausdorff metric space. If the self-mapping $F$ in Eq. (2.21) is defined by the IFS $\{X; f_k : k \in \mathbb{N}_n\}$, then $F$ has a unique fixed point $A^*$ in $\mathcal{H}(X)$; that is, there exists a unique nonempty set $A^* \in \mathcal{H}(X)$ satisfying the self-referential equation
\[
A^* = F(A^*) = \bigcup_{k \in \mathbb{N}_n} f_k(A^*).
\]
Moreover, for any $B \in \mathcal{H}(X)$,
\[
\lim_{p \to \infty} F^{\circ p}(B) = A^*,
\]
the limit being taken with respect to the Hausdorff metric.


Proof: By Theorem 2.10, completeness of the space $(X, d)$ implies that $(\mathcal{H}(X), H_d)$ is a complete metric space. Theorem 2.12 shows that $F$ is a contraction mapping on $\mathcal{H}(X)$. Hence, by the Banach fixed point theorem (Theorem 2.3), the contraction mapping $F$ on the complete metric space $(\mathcal{H}(X), H_d)$ has a unique fixed point. This completes the proof.
In Theorem 2.13, $F^{\circ p}$ denotes the $p$th composition of the HB mapping $F$, that is, $F^{\circ p} = \underbrace{F \circ F \circ \cdots \circ F}_{p \text{ times}}$.
Definition 2.15 A nonempty compact set $A^*$ obtained from Theorem 2.13 is called an invariant set, self-referential set, or attractor of the IFS $\{X; f_k : k \in \mathbb{N}_n\}$.
Note 2.2 In general, A∗ has Hausdorff dimension which exceeds its topo-
logical dimension. Hence, the fixed point A∗ ∈ H(X ) of the HB mapping F
is called the Fractal of the IFS of Banach contractions. Sometimes A∗ is
called a deterministic fractal generated by the IFS of Banach contractions.
Let us see some classical examples of deterministic fractal.
Example 2.7 (Cantor set) Let $X = [0, 1]$ and consider the IFS on $X$ consisting of the following two mappings,
\[
f_1(x) = \frac{x}{3}; \qquad f_2(x) = \frac{x + 2}{3}.
\]
The self-mappings $f_1, f_2$ are contractions with contraction factor $1/3$. By Theorem 2.13, the Hutchinson-Barnsley mapping $F$ of the IFS $([0, 1]; f_1, f_2)$ has a unique attractor, and the resulting fractal is called the Cantor set, as shown in Fig. 2.6.

Figure 2.6: Construction of the Cantor set.
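As an illustration, the following Python sketch (ours, not from the text) iterates the Hutchinson-Barnsley mapping of Example 2.7 on the initiator $B = [0, 1]$, representing each element of $\mathcal{H}(X)$ as a list of closed intervals; after $p$ steps it holds $F^{\circ p}(B)$, which converges to the Cantor set by Theorem 2.13.

```python
# A minimal sketch (ours) of the Hutchinson-Barnsley iteration of
# Example 2.7: start from B = [0, 1] and apply F(A) = f1(A) U f2(A).

def hb_step(intervals):
    # apply f1(x) = x/3 and f2(x) = (x + 2)/3 to every interval [a, b]
    new = []
    for a, b in intervals:
        new.append((a / 3.0, b / 3.0))                    # image under f1
        new.append(((a + 2.0) / 3.0, (b + 2.0) / 3.0))    # image under f2
    return new

intervals = [(0.0, 1.0)]      # the initiator B
for _ in range(5):
    intervals = hb_step(intervals)
print(len(intervals))         # 2^5 = 32 intervals, each of length 3^-5
print(intervals[:2])
```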


Figure 2.7: Construction of the Sierpinski gasket. (a)-(c) Behaviour of f1, f2 and f3 on S0; (d) first iteration; (e) 10th iteration.

Example 2.8 (Sierpinski gasket) Let $X$ be an equilateral triangle $ABC \subseteq [0, 1] \times [0, 1]$ with the vertices $A = (0, 0)$, $B = (1, 0)$ and $C = (1/2, \sqrt{3}/2)$, and consider the IFS on $X$ consisting of the following three contractions,
\[
\begin{aligned}
f_1(x, y) &= \left(\tfrac{1}{2}x, \tfrac{1}{2}y\right); \\
f_2(x, y) &= \left(\tfrac{1}{2}x + \tfrac{1}{2}, \tfrac{1}{2}y\right); \\
f_3(x, y) &= \left(\tfrac{1}{2}x + \tfrac{1}{4}, \tfrac{1}{2}y + \tfrac{\sqrt{3}}{4}\right).
\end{aligned}
\]
All the above contractions have contraction factor $1/2$. By Theorem 2.13, the resulting fractal is called the Sierpinski gasket, as depicted in Fig. 2.7.
Example 2.9 (Sierpinski carpet) Let $X$ be the unit square in $\mathbb{R}^2$ with the vertices $A = (0, 0)$, $B = (1, 0)$, $C = (0, 1)$ and $D = (1, 1)$, and consider the IFS on $X$ consisting of the following eight contractions,
\[
\begin{aligned}
f_1(x, y) &= \left(\tfrac{1}{3}x, \tfrac{1}{3}y\right); & f_2(x, y) &= \left(\tfrac{1}{3}x, \tfrac{1}{3}y + \tfrac{1}{3}\right); \\
f_3(x, y) &= \left(\tfrac{1}{3}x, \tfrac{1}{3}y + \tfrac{2}{3}\right); & f_4(x, y) &= \left(\tfrac{1}{3}x + \tfrac{1}{3}, \tfrac{1}{3}y\right); \\
f_5(x, y) &= \left(\tfrac{1}{3}x + \tfrac{1}{3}, \tfrac{1}{3}y + \tfrac{2}{3}\right); & f_6(x, y) &= \left(\tfrac{1}{3}x + \tfrac{2}{3}, \tfrac{1}{3}y\right); \\
f_7(x, y) &= \left(\tfrac{1}{3}x + \tfrac{2}{3}, \tfrac{1}{3}y + \tfrac{1}{3}\right); & f_8(x, y) &= \left(\tfrac{1}{3}x + \tfrac{2}{3}, \tfrac{1}{3}y + \tfrac{2}{3}\right).
\end{aligned}
\]
All the above contractions have contraction factor $1/3$. By Theorem 2.13, the resulting fractal is called the Sierpinski carpet, as depicted in Fig. 2.8.
Example 2.10 (von Koch curve) Let $X = [-1, 1] \times \{0\} \subseteq [-1, 1]^2$ and consider the IFS on $X$ consisting of the following four contractions,
\[
\begin{aligned}
f_1(x, y) &= \left(\tfrac{1}{3}x - \tfrac{2}{3}, \tfrac{1}{3}y\right); \\
f_2(x, y) &= \left(\tfrac{1}{6}x - \tfrac{\sqrt{3}}{6}y - \tfrac{1}{6},\ \tfrac{\sqrt{3}}{6}x + \tfrac{1}{6}y + \tfrac{\sqrt{3}}{6}\right); \\
f_3(x, y) &= \left(\tfrac{1}{6}x + \tfrac{\sqrt{3}}{6}y + \tfrac{1}{6},\ -\tfrac{\sqrt{3}}{6}x + \tfrac{1}{6}y + \tfrac{\sqrt{3}}{6}\right); \\
f_4(x, y) &= \left(\tfrac{1}{3}x + \tfrac{2}{3}, \tfrac{1}{3}y\right).
\end{aligned}
\]
All the above contractions have contraction factor $1/3$. By Theorem 2.13, the resulting fractal is called the von Koch curve, as shown in Fig. 2.3.2. The von Koch curve is an example of a continuous curve which is nowhere differentiable. It is also a curve of infinite length.

Figure 2.8: Sierpinski Carpet.

Example 2.11 (Dragon curve) Let $X$ be the line segment joining the two points $(0, 0)$ and $(1, 0)$, and consider the IFS on $X$ consisting of the following two contractions,
\[
\begin{aligned}
f_1(x, y) &= \left(\tfrac{1}{2}x - \tfrac{1}{2}y,\ \tfrac{1}{2}x + \tfrac{1}{2}y\right); \\
f_2(x, y) &= \left(-\tfrac{1}{2}x - \tfrac{1}{2}y + 1,\ \tfrac{1}{2}x - \tfrac{1}{2}y\right).
\end{aligned}
\]
Both contractions have contraction factor $1/\sqrt{2}$. By Theorem 2.13, the resulting fractal is called the Dragon curve, as shown in Fig. 2.9.

Figure 2.9: Dragon Curve.

Example 2.12 (Fractal leaves) Let $X$ be a subset of $\mathbb{R}^2$. The following IFS on $X$ consists of the four contractions that generate the fern leaf,
\[
\begin{aligned}
f_1(x, y) &= \left(0,\ \tfrac{4}{25}y\right); \\
f_2(x, y) &= \left(\tfrac{17}{20}x + \tfrac{1}{25}y,\ -\tfrac{1}{25}x + \tfrac{17}{20}y + \tfrac{4}{25}\right); \\
f_3(x, y) &= \left(\tfrac{1}{5}x - \tfrac{13}{50}y,\ \tfrac{23}{100}x + \tfrac{11}{50}y + \tfrac{4}{25}\right); \\
f_4(x, y) &= \left(-\tfrac{3}{20}x + \tfrac{7}{25}y,\ \tfrac{13}{50}x + \tfrac{6}{25}y + \tfrac{11}{25}\right).
\end{aligned}
\]
The following IFS on $X$ consists of the four contractions that generate the maple leaf,
\[
\begin{aligned}
f_1(x, y) &= \left(\tfrac{49}{100}x + \tfrac{1}{100}y + \tfrac{1}{4},\ \tfrac{31}{100}y - \tfrac{1}{50}\right); \\
f_2(x, y) &= \left(\tfrac{27}{100}x + \tfrac{13}{25}y,\ -\tfrac{2}{5}x + \tfrac{9}{25}y + \tfrac{14}{25}\right); \\
f_3(x, y) &= \left(\tfrac{9}{50}x - \tfrac{73}{100}y + \tfrac{22}{55},\ \tfrac{1}{2}x + \tfrac{13}{50}y + \tfrac{2}{25}\right); \\
f_4(x, y) &= \left(\tfrac{1}{25}x - \tfrac{1}{100}y + \tfrac{13}{25},\ \tfrac{1}{2}x + \tfrac{8}{25}\right).
\end{aligned}
\]
Figure 2.10: Fractal leaves. (a) Fern leaf; (b) Maple leaf.
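The attractors of Example 2.12 are conveniently rendered by the so-called chaos game: starting from an arbitrary point, one repeatedly applies a randomly chosen map of the IFS and records the orbit, which accumulates on $A^*$. The Python sketch below (ours) does this for the fern IFS written in decimal form; the visiting weights are the customary Barnsley choices and are an assumption, not part of the text — any positive weights fill in the same attractor, only at different rates.

```python
# A sketch (ours) of the chaos-game rendering of the fern IFS above.
import random

f1 = lambda x, y: (0.0, 0.16 * y)
f2 = lambda x, y: (0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 0.16)
f3 = lambda x, y: (0.20 * x - 0.26 * y, 0.23 * x + 0.22 * y + 0.16)
f4 = lambda x, y: (-0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44)

maps = [f1, f2, f3, f4]
weights = [0.01, 0.85, 0.07, 0.07]       # assumed visiting probabilities
x, y = 0.0, 0.0
points = []
for _ in range(100_000):
    f = random.choices(maps, weights)[0]  # pick one contraction at random
    x, y = f(x, y)
    points.append((x, y))
# `points` now samples the fern attractor; render with any 2-D scatter tool.
```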

To summarize, the deterministic fractals constructed in Examples 2.7-2.12 possess the following properties:

• Fine structure: nearly the same detail appears at every scale, and no smooth part emerges under magnification.
• Agreement with the Mandelbrot definition: the Hausdorff dimension is strictly greater than the topological dimension.
• A recursive construction: each is defined recursively through an IFS with an initiator, here a line or a region in the $\mathbb{R}^2$ plane.
• Self-similarity: each is a union of finitely many scaled copies of itself.
• Because of their length and area measures, they cannot be easily described by Euclidean geometry.
• A natural pattern.
Chapter 3

Stochastic Fractal

3.1 Introduction
All the examples discussed in the previous chapter are far removed
from what we have around us in nature. Nature loves randomness and
the natural objects which we see around us evolve with time. The apparently complex look of most natural objects does not mean that nature favours complexity; rather, the opposite is true. Often the inherent and basic rule is trivially simple; it is in fact the randomness and the repetition of the same simple rule over and over again that makes the object look complex. Of course, natural fractals cannot be strictly self-similar; rather, they are statistically self-similar [8, 13, 38].
For instance, if a child is shown a picture of a sky full of clouds dis-
tributed all over, then even the child can capture the generic feature
of cloud distribution and use it in the future. Later if the same child
is asked to draw the picture of clouds without looking at it, the child
may well do it. The two pictures will of course never be the same, but they will be similar, depending on how well the child can draw. Similarly, we can all draw a curve describing the tips of the trees on the horizon, but the details of two such pictures drawn by the same person will never be the same, however hard one tries. Capturing the generic feature of cloud distribution without having to know anything about self-similarity can be described as our natural instinct. In the following sections we attempt to incorporate both ingredients (randomness and kinetics) into the various classical fractals in order to learn what role these two quantities play in the resulting processes [29, 35, 40, 42, 54, 55].

To this end, we will consider a couple of interesting variants of the


classical Cantor set in which probability, time and randomness are in-
corporated in a logical progression. This allows us to learn the role that each of these parameters plays in the resulting fractal. We first
propose the dyadic Cantor set (DCS) which is simpler than the tri-
adic Cantor set since dividing into two intervals requires only one cut
while dividing into three requires two of them [85]. Note that addi-
tion of every extra cut adds extra complications in solving the problem
either analytically or numerically. It is worth noting that the genera-
tor that divides an interval into two equal parts and removes one will
leave nothing to investigate since there will always be only one interval
left in the system. The dyadic Cantor set results in a fractal only if we remove one of the intervals with a certain probability, which makes it inherently a random fractal; this is in sharp contrast to its triadic counterpart. We then introduce time into the problem while we still
divide the intervals into two equal parts. In the kinetic DCS, we apply
the generator sequentially, i.e., at each step the generator is applied to
only one interval, instead of applying it recursively to all the intervals
which is done in the traditional Cantor set [71]. We then further modify the generator such that it divides an interval randomly into two parts, instead of into two equal parts, and apply it sequentially. It now incorporates both temporal and spatial randomness in the system, making it a stochastic counterpart of the DCS. Each of these variants is solved analytically to obtain the fractal dimension and to show self-similarity. Analytical results, especially the self-similar properties, are verified numerically by invoking the idea of data collapse [37].

3.2 A brief description of stochastic process


A stochastic process provides the theoretical framework for studying non-equilibrium statistical mechanics. In order to understand and appreciate the meaning of a stochastic process it is essential to understand the concept of a random variable, as it lies at the very heart of its definition. A stochastic or random variable $x$ is defined by (i) a set of possible values $x_1, x_2, \ldots, x_n$, which we may call the range or set of states, and (ii) a probability distribution over this set. Consider the simple example of a dice to illustrate the two points mentioned above. In each throw, the number on the upper face corresponds to the variable $x$, with possible outcomes $\{1, 2, 3, 4, 5, 6\}$, and the probability attached to each outcome is $1/6$ (for an honest dice). It is worthwhile to mention that the range or set of states may be discrete or continuous, finite or infinite. If the set is discrete (as in the case of a dice or a coin) then the probability distribution is given by a set of nonnegative numbers $P_n$ such that
\[
\sum_n P_n = 1. \tag{3.1}
\]

On the other hand, if the range is continuous within an interval $[a, b]$ on the $x$ axis, then the probability distribution is given by a nonnegative function $P(x) \ge 0$ which is normalized in the sense that
\[
\int_a^b P(x)\,dx = 1, \tag{3.2}
\]
where the integral extends over the whole range. The probability that $x$ has a value between $x$ and $x + dx$ is $P(x)\,dx$.
Once a stochastic variable $x$ has been defined, an infinity of other stochastic variables can be derived from it. For instance, any quantity $y = f(x)$ obtained by some mapping $f$ is again a stochastic variable. These quantities could also be functions of an additional variable $t$, i.e.,
\[
y(t) = f(x, t), \tag{3.3}
\]
where $t$ is the time, which typically labels the realization of the process. The function $y(t)$ describes a stochastic process if $t$ stands for time [53, 6]. Thus a stochastic function is simply a function of two variables, one of which is the time $t$ and the other a stochastic variable $x$ as defined above. In other words, systems which evolve probabilistically in time, i.e., systems in which there exists a certain time-dependent random variable $x(t)$, are regarded as stochastic processes (refer [53, 6]).
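As a toy illustration of the two defining ingredients of a stochastic variable, the following Python snippet (ours, not from the text) estimates the probability distribution of an honest dice empirically; the frequencies approach $P_n = 1/6$ and their sum obeys the normalization of Eq. (3.1).

```python
# A toy sketch (ours): the set of states {1,...,6} of an honest dice and
# its empirical probability distribution, normalized as in Eq. (3.1).
import random
from collections import Counter

throws = [random.randint(1, 6) for _ in range(60_000)]
freq = Counter(throws)
for face in range(1, 7):
    print(face, freq[face] / len(throws))     # each close to 1/6 ~ 0.1667
print(sum(freq.values()) / len(throws))       # normalization: exactly 1
```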

3.3 Dyadic Cantor Set (DCS): Random fractal


The dyadic Cantor set starts with an initiator, which is typically an interval of unit length $[0, 1]$. The generator then divides it into two equal parts and deletes one, say the right half, with probability $(1 - p)$. After step one, the system will have on average $(1 + p)$ sub-intervals, each of size $1/2$, since the right half remains there after step one with probability $p$. Say that we give a piece of thread and a pair of scissors to each of $N$ people and ask them to divide their thread into two equal parts. We then give them a fair coin and ask them to remove one of the two pieces if the outcome of the toss is head, and to keep both pieces if the outcome is tail. This would correspond to $p = 1/2$, and hence in the limit $N \to \infty$ half of the people will have two intervals
and the other half will have only one interval, making the average number of intervals equal to $1 + p$, where $p = 1/2$ in this case. In the next step, the generator is applied to each of the available $(1 + p)$ sub-intervals to divide them into two equal parts and remove the right half from each with probability $(1 - p)$. The system will then have on average $(1 + p)^2$ intervals of size $1/4$, as $(1 - p)(1 + p)$ intervals of size $1/4$ are removed on average. The process is then continued over and over again by applying the generator to all the available intervals at each step recursively. Like its definition, finding the fractal dimension of the DCS problem is also trivially simple. According to the construction of the DCS process there are $N = (1 + p)^n$ intervals in the $n$th generation, each of size $\delta = 2^{-n}$, which is therefore also the mean interval size. As in the recursive triadic Cantor set, we can construct the $k$th moment $M_k$ of the interval sizes at the $n$th step of the DCS construction process too, and find that here as well there is a conserved moment.
Once again the most convenient yardstick to measure the size of the set at the $n$th step is the mean interval size $\delta = 2^{-n}$, which coincides with the individual size of the intervals at the $n$th step. Expressing $N$ in favour of $\delta$ using $\delta = 2^{-n}$, we find that the number $N$ falls off as a power law against the mean interval size $\delta$, i.e.,
\[
N(\delta) \sim \delta^{-d_f}, \tag{3.4}
\]
with $d_f = \frac{\ln(1+p)}{\ln 2} < 1$ for all $0 < p < 1$. Note that the exponent $d_f$ is non-integer and at the same time less than the dimension $d = 1$ of the space in which the set is embedded; for $0 < p < 1$ it is the fractal dimension of the resulting dyadic Cantor set [85]. Unlike the triadic Cantor set, where the Cantor dust is distributed in a strictly self-similar fashion, the Cantor dust in the dyadic Cantor set is distributed in a random fashion; yet it is self-similar, albeit in the statistical sense [54, 55, 66, 67, 70, 83, 85, 97].
It is well known that the triadic Cantor set possesses the following unusual property. The intervals removed from the set have total sizes $1/3, 2/9, 4/27, \ldots$, etc., and if we add them up we get
\[
\frac{1}{3}\sum_{n=0}^{\infty} \left(\frac{2}{3}\right)^n = 1, \tag{3.5}
\]
which is the size of the initiator. This leads us to conclude that the size of the set that remains is precisely zero, since the sum of the sizes removed equals the size of the initiator. The question is: does the dyadic Cantor set possess the same property? It is indeed the case. On average, in step one the amount removed is
$\frac{1-p}{2}$; in step two the total amount removed is $\frac{(1-p)(1+p)}{4}$; in step three it is $\frac{(1-p)(1+p)^2}{8}$; in step four it is $\frac{(1-p)(1+p)^3}{16}$; and so on. If we add these up we obtain
\[
\frac{1-p}{2}\sum_{n=0}^{\infty} \left(\frac{1+p}{2}\right)^n = 1, \tag{3.6}
\]

which is again the size of the initiator. It means there is hardly anything left in the set. However, we will show later that there are still plenty of members in the set. One of the virtues of the dyadic Cantor set is its simplicity: the notion of a random fractal and its inherent character, self-similarity, can be introduced to the beginner through this example in the simplest possible way.
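A direct Monte Carlo realization makes these statements tangible. The following Python sketch (ours; the parameter values are arbitrary choices) applies the generator recursively to every interval and estimates $d_f$ from the ensemble-averaged interval count, to be compared with $\ln(1+p)/\ln 2$:

```python
# A minimal Monte Carlo sketch (ours) of the dyadic Cantor set: at every
# generation each interval is halved; the left half is always kept and
# the right half survives with probability p.
import math, random

def dcs_generation(intervals, p):
    new = []
    for a, b in intervals:
        mid = 0.5 * (a + b)
        new.append((a, mid))              # left half always kept
        if random.random() < p:
            new.append((mid, b))          # right half kept with probability p
    return new

p, n, runs = 0.5, 12, 200
counts = []
for _ in range(runs):                     # ensemble average over many realizations
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        intervals = dcs_generation(intervals, p)
    counts.append(len(intervals))
N = sum(counts) / runs
print(N, (1 + p) ** n)                    # empirical count vs (1 + p)^n
print(math.log(N) / (n * math.log(2)))    # estimate of d_f = ln(1 + p)/ln 2
```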

3.4 Kinetic dyadic Cantor set


Now we come to the following question, which is worth addressing: what if the generator, which divides an interval into two equal parts and removes one with probability $1 - p$, is applied to only one interval at each step instead of to all the available intervals? Clearly, the sizes of the remaining intervals along the line will then take on a great many different values, in sharp contrast to the set created by the DCS problem. It raises a further question: how do we choose one interval when the system has more than one interval of different sizes? We choose the case whereby an interval is picked with probability proportional to its size, as this appears to be the most generic case. One advantage of modifying the DCS problem in this way is that we can use a customized fragmentation equation approach to solve it analytically.
Note that the construction of the Cantor set is essentially a fragmentation process. We can thus apply the kinetics of the fragmentation equation to the dyadic Cantor set problem analytically. The kinetics of the fragmentation process can be described by the evolution of the particle (which we shall call interval) size distribution function $c(x, t)$, where $c(x, t)dx$ is the number of intervals of size within $x$ and $x + dx$, governed by the following integro-differential equation
\[
\frac{\partial c(x, t)}{\partial t} = -c(x, t)\int_0^x dy\, F(y, x - y) + 2\int_x^\infty dy\, c(y, t)\, F(x, y - x). \tag{3.7}
\]
In this equation the kernel $F(x, y)$ describes the rules and the rate at which a parent interval of size $x + y$ is divided into two smaller intervals of size $x$ and $y$ [19, 21]. The first term on the right hand side of Eq. (3.7) describes the loss of intervals of size $x$ due to their division into two smaller intervals. The second term describes the gain of intervals of size $x$ due to the division of an interval of size $y > x$ into two smaller intervals such that one of them is of size $x$. The factor "2" in the gain term reflects the fact that at each breaking event we create two intervals out of one.
To customize Eq. (3.7) for the kinetic dyadic Cantor set we just need to replace the factor "2" in the gain term by $1 + p$ and choose the following kernel
\[
F(x, y) = (x + y)\,\delta(x - y). \tag{3.8}
\]
The delta function ensures that the two product intervals $x$ and $y$ must be equal in size. However, the process differs from the earlier definition of the dyadic Cantor set: in the fragmentation process, at each step only one interval is divided. Thus it is necessary to choose a rule to decide how an interval is picked when there is more than one interval of different sizes. To this end, the pre-factor $(x + y)$ specifies that a parent interval of size $x + y$ is chosen preferentially according to its size, and the delta function then ensures that it is divided into two intervals of size $x$ and $y$ with $x = y$. The resulting equation for the kinetic dyadic Cantor set becomes
\[
\frac{\partial c(x, t)}{\partial t} = -\frac{x}{2}\, c(x, t) + (1 + p)\, 2x\, c(2x, t), \tag{3.9}
\]
which is the required rate equation describing the kinetic DCS problem.
The algorithm of the $j$th step, which starts with, say, $N_j$ intervals, can be described as follows (a minimal simulation following these steps is sketched below).
(a) Generate a random number $R$ from the open interval $(0, 1)$.
(b) Check which of the $1, 2, \ldots, N_j$ intervals contains the random number $R$. Say the interval that contains $R$ is labelled $m$; pick the interval $m$. If none of the surviving $N_j$ intervals contains $R$, then increase time by one unit and go to step (a).
(c) Apply the generator to the sub-interval $m$ to divide it into two equal pieces and remove one of the two parts with probability $(1 - p)$.
(d) Label the newly created intervals starting from the left end, which keeps its parent's label $m$; the interval on the right, if it survives, is labelled with a new number $N_j + 1$.
(e) Increase time by one unit.
(f) Repeat the steps (a)-(e) ad infinitum.
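The following Python rendering of steps (a)-(f) is ours (the tick count and $p$ are arbitrary choices); it also prints the conserved moment of Eq. (3.13), whose value stays constant in time within a realization:

```python
# A direct sketch (ours) of steps (a)-(f): intervals live on [0, 1]; one
# is selected per tick with probability proportional to its length (the
# random point r lands in it), halved, and the right half kept with
# probability p.
import math, random

def kdcs(p, ticks):
    intervals = [(0.0, 1.0)]
    for _ in range(ticks):
        r = random.random()                                   # step (a)
        hit = next((i for i, (a, b) in enumerate(intervals) if a <= r < b), None)
        if hit is None:
            continue                                          # r fell in a gap: only time advances
        a, b = intervals.pop(hit)
        mid = 0.5 * (a + b)
        intervals.append((a, mid))                            # step (c): left half always kept
        if random.random() < p:
            intervals.append((mid, b))                        # right half survives with prob. p
    return intervals

p = 0.75
iv = kdcs(p, 200_000)
n_star = math.log(1 + p) / math.log(2)
print(len(iv))                                # N(t) grows as t^{n*}, Eq. (3.18)
print(sum((b - a) ** n_star for a, b in iv))  # conserved moment, Eq. (3.13)
```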
To solve Eq. (3.9) we find it convenient to introduce the $n$th moment of $c(x, t)$,
\[
M_n(t) = \int_0^\infty x^n c(x, t)\,dx; \tag{3.10}
\]
instead of solving for $c(x, t)$ itself we attempt to find the solution for its moments, since the latter is much simpler than the former. Incorporating this definition in Eq. (3.9) gives the rate equation for $M_n(t)$, which reads
\[
\frac{dM_n(t)}{dt} = -\Big[\frac{1}{2} - \frac{(1 + p)}{2^{n+1}}\Big] M_{n+1}(t). \tag{3.11}
\]
We can easily find a value $n = n^*$ for which $M_{n^*}$ is time independent, i.e., a conserved quantity, simply by finding the root of the equation
\[
\frac{1}{2} - \frac{(1 + p)}{2^{n^* + 1}} = 0. \tag{3.12}
\]
Solving it we immediately find $n^* = \frac{\ln(1+p)}{\ln 2}$, implying that the quantity $M_{\ln(1+p)/\ln 2}(t)$ is conserved. Numerically, it means that if we label all the surviving intervals at the $j$th step as $x_1, x_2, x_3, \ldots, x_j$, from the leftmost to the rightmost, then the $n^*$th moment is
\[
M_{n^*} = x_1^{n^*} + x_2^{n^*} + x_3^{n^*} + \cdots + x_j^{n^*}. \tag{3.13}
\]

The numerical simulation suggests this too, as shown in Fig. (3.2). Note that the value of this moment in a given realization remains the same, although the exact numerical value at which it stays constant may vary between realizations. Interestingly, the ensemble-averaged value, on the other hand, is equal to the size of the initiator regardless of the time at which we choose to measure it. To find out why the index of the moment $n^* = \frac{\ln(1+p)}{\ln 2}$ is so special, we need the solution for the $n$th moment.
Figure 3.1: The plot shows that the number of intervals N grows with time t following the power law N(t) ∼ t^{d_f} with exponent n* = ln(1 + p)/ln 2, exactly as predicted by Eq. (3.18).

It is expected that the kinetic DCS problem too, like the DCS problem, will generate fractals in the long-time limit and hence must exhibit self-similarity, an essential property of fractals. It is therefore reasonable

to anticipate that the solution of Eq. (3.11) for general $n$ will exhibit scaling. Existence of scaling means that the various moments of $c(x, t)$ should have power-law relations with time [26]. We therefore can write a tentative solution of Eq. (3.11) as
\[
M_n(t) \sim A(n)\, t^{\alpha(n)}. \tag{3.14}
\]
Substituting Eq. (3.14) in Eq. (3.11) we obtain the recursion relation
\[
\alpha(n + 1) = \alpha(n) - 1. \tag{3.15}
\]
Iterating it subject to the condition $\alpha(\ln(1+p)/\ln 2) = 0$ gives
\[
\alpha(n) = -\Big(n - \frac{\ln(1+p)}{\ln 2}\Big). \tag{3.16}
\]
We therefore now have an explicit asymptotic solution for the $n$th moment,
\[
M_n(t) \sim t^{-\left(n - \frac{\ln(1+p)}{\ln 2}\right)}. \tag{3.17}
\]
It implies that the number of intervals $N(t) = M_0(t)$ grows with time as
\[
N(t) \sim t^{\frac{\ln(1+p)}{\ln 2}}, \tag{3.18}
\]
which is verified numerically (see Fig. 3.1). On the other hand, the mean interval size $\delta = M_1(t)/M_0(t)$ decreases with time as
\[
\delta \sim t^{-\gamma} \quad \text{with } \gamma = 1. \tag{3.19}
\]
Using this in Eq. (3.18) to eliminate time in favour of $\delta$, we find that $N$ exhibits the power law
\[
N \sim \delta^{-d_f}, \tag{3.20}
\]
with the same exponent $d_f = \frac{\ln(1+p)}{\ln 2} = n^*$ as the fractal dimension of the DCS [23]. Moreover, the value of $d_f$ coincides with the index $n^*$ of the conserved moment. It proves that the exact value of the fractal dimension does not depend on whether we apply the generator to one interval or to all the available intervals at each step, as long as the generator remains the same.

3.5 Stochastic dyadic Cantor set


Yet another interesting question is: what if we use a generator that divides an interval randomly into two smaller intervals instead of dividing it into two equal intervals?

Figure 3.2: The sum of the $d_f$th power of all the remaining intervals, $x_1^{d_f} + x_2^{d_f} + \cdots + x_j^{d_f} = 1$, regardless of time, provided $d_f = \ln(1 + p)/\ln 2$, the fractal dimension of the kinetic dyadic Cantor set. An interesting point to note is that the numerical value of the conserved quantity differs in every independent realization, albeit it remains constant in time.

Obviously, the interval sizes will now be a random variable $x$ whose sample space and the corresponding probabilities will differ at different times in a given realization. That is, the system will now evolve probabilistically in time, with a time-dependent random variable $x(t)$, and hence we regard it as the stochastic dyadic Cantor set. The process starts with an initiator, which can be the unit interval $[0, 1]$ as before. However, the generator here divides an interval randomly into two pieces and removes one with probability $(1 - p)$. Perhaps the model can best be described by giving the algorithm for the $j$th generation step, which starts with, say, $N_j$ intervals. It can be described as follows.
(i) Generate a random number $R$ from the open interval $(0, 1)$.
(ii) Check which of the available intervals contains $R$.
(iii) Pick the interval $[a, b]$, labelled $k$, if it contains $R$; if none of the available intervals contains $R$, go back to step (i). Increase the time by one unit in either case.
(iv) Apply the generator to the interval $k$ to divide it randomly into two pieces and remove one with probability $(1 - p)$. For this we generate a random number, say $c$, from the open interval $(a, b)$ to divide $[a, b]$ into $[a, c]$ and $[c, b]$, and delete $[c, b]$ with probability $(1 - p)$.
(v) Update the logbook by labelling the newly created interval $[a, c]$ with its parent's label $k$, which has become redundant, and label the other interval $N_j + 1$ if it has not been removed in step (iv).
(vi) Repeat the steps (i)-(v) ad infinitum.
A direct implementation of these steps is sketched below.
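The following Python sketch (ours; the tick count is an arbitrary choice) implements steps (i)-(vi) directly and checks two predictions derived in what follows, namely $N(t) \sim t^p$ and the conservation of the $p$th moment:

```python
# A sketch (ours) of the stochastic DCS algorithm (i)-(vi): the selected
# interval [a, b] is cut at a uniformly random point c, and [c, b] is
# discarded with probability 1 - p.
import random

def sdcs(p, ticks):
    intervals = [(0.0, 1.0)]
    for _ in range(ticks):
        r = random.random()                                   # step (i)
        hit = next((i for i, (a, b) in enumerate(intervals) if a <= r < b), None)
        if hit is None:
            continue                                          # no interval contains r; time advances
        a, b = intervals.pop(hit)
        c = random.uniform(a, b)                              # step (iv): random cut point
        intervals.append((a, c))
        if random.random() < p:
            intervals.append((c, b))                          # right piece kept with probability p
    return intervals

p = 0.75
iv = sdcs(p, 200_000)
print(len(iv))                                # N(t) ~ t^p, Eq. (3.25)
print(sum((b - a) ** p for a, b in iv))       # M_p(t) is conserved, Eq. (3.24)
```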
The binary fragmentation equation given by Eq. (3.7) describes the rules of the SDCS problem stated in the algorithm (i)-(vi) if we choose
\[
F(x, y) = 1, \tag{3.21}
\]
and replace the factor "2" in the gain term by $(1 + p)$. The master equation for the stochastic dyadic Cantor set then is
\[
\frac{\partial c(x, t)}{\partial t} = -x\, c(x, t) + (1 + p)\int_x^\infty c(y, t)\,dy. \tag{3.22}
\]
Incorporating the definition of the $n$th moment in it gives
\[
\frac{dM_n(t)}{dt} = -\Big[1 - \frac{(1 + p)}{n + 1}\Big] M_{n+1}(t). \tag{3.23}
\]
To solve it for $M_n(t)$ we follow the same procedure as for the KDCS problem and find the asymptotic solution for the $n$th moment,
\[
M_n(t) \sim t^{(n - p)z} \quad \text{with } z = -1. \tag{3.24}
\]
It implies that $p$ is the special value of $n$ for which $M_p(t)$ is a conserved quantity. Note that once again the exponent of the power-law relation for $M_n(t)$ is linear in $n$, and hence the system must obey simple scaling, albeit only in the statistical sense. It is interesting to note that the $n$th moment $M_n(t)$ is conserved for $n = p$, whereas for the KDCS the conserved quantity corresponds to $n = \ln(1 + p)/\ln 2$, which is greater than $p$ for all $p$.
The next interesting step is to check whether the solution for the mean interval size agrees with the numerical simulation. From Eq. (3.24) we find that the number of intervals $N(t)$, which is the zeroth moment $M_0(t)$, grows with time as
\[
N(t) \sim t^p, \tag{3.25}
\]
and the total mass $M(t)$, which is the first moment $M_1(t)$, decreases with time as
\[
M(t) \sim t^{-(1 - p)}, \tag{3.26}
\]
since $(1 - p) > 0$ for $0 < p < 1$. Using these in the definition of the mean interval size,
\[
\delta = \frac{M(t)}{N(t)}, \tag{3.27}
\]
we find that
\[
\delta(t) \sim t^{-1}, \tag{3.28}
\]
which is non-trivial, as it is independent of $p$. Expressing the number of intervals $N(t)$ in terms of $\delta$, we find that it scales as
\[
N(\delta) \sim \delta^{-d_f}, \tag{3.29}
\]
with $d_f = p$ the fractal dimension of the stochastic dyadic Cantor set [85]. That is the significance of the value $p$ for which the $p$th moment is conserved according to Eq. (3.24). Note that $d_f = p$ is always less than $\frac{\ln(1+p)}{\ln 2}$ for all $0 < p < 1$, revealing that the fractal dimension of the stochastic fractal is always less than that of its recursive or kinetic counterpart.
We still have to show that the resulting fractal is self-similar, which is one of its basic ingredients. We shall now apply the Buckingham Pi theorem to obtain the scaling solution for $c(x, t)$, as it provides deep insight into the problem [37]. Note that according to Eq. (3.22) the governed parameter $c$, for a given value of $p$, depends on two parameters $x$ and $t$. However, the knowledge of the decay law for the mean interval size implies that one of the parameters, say $x$, can be expressed in terms of $t$, since according to Eq. (3.28) $t^{-1}$ bears the dimension of interval size $x$. We therefore can define a dimensionless governing parameter
\[
\xi = \frac{x}{t^{-1}}, \tag{3.30}
\]
and a dimensionless governed parameter
\[
\Pi = \frac{c(x, t)}{t^{\theta}}. \tag{3.31}
\]
The numerical value of the right side of the above equation remains the same even if the time $t$ is changed by some factor, say $\mu$, since the left hand side is a dimensionless quantity. It means that the two parameters $x$ and $t$ must combine to form the dimensionless quantity $\xi = x/t^{-1}$, and the dimensionless parameter $\Pi$ can depend only on $\xi$.
In other words we can write
\[
\frac{c(x, t)}{t^{\theta}} = \phi(x/t^{-1}), \tag{3.32}
\]
which leads to the following dynamic scaling form
\[
c(x, t) \sim t^{\theta} \phi(x/t^{-1}), \tag{3.33}
\]
where the exponent $\theta = 1 + p$ is fixed by the conservation law and $\phi(\xi)$ is known as the scaling function [17, 20].
To obtain a solution for $c(x, t)$ we now substitute Eq. (3.33) in Eq. (3.22) and find the following equation for the scaling function
\[
\xi \frac{d\phi(\xi)}{d\xi} + \big[\xi + (1 + p)\big]\phi(\xi) = (1 + p)\int_\xi^\infty \phi(\eta)\,d\eta. \tag{3.34}
\]
We thus see that it reduces the partial integro-differential equation for the distribution function $c(x, t)$ to an ordinary integro-differential equation for the scaling function $\phi(\xi)$, and the latter is much simpler to solve than the former. To simplify it further we differentiate Eq. (3.34) with respect to $\xi$ and obtain
\[
\xi \frac{d^2\phi(\xi)}{d\xi^2} + \big[\xi + (2 + p)\big]\frac{d\phi(\xi)}{d\xi} + (2 + p)\phi(\xi) = 0, \tag{3.35}
\]
which we can rewrite as
\[
(-\xi)\frac{d^2\phi(\xi)}{d(-\xi)^2} + \big[(2 + p) - (-\xi)\big]\frac{d\phi(\xi)}{d(-\xi)} - (2 + p)\phi(\xi) = 0. \tag{3.36}
\]
This is exactly Kummer's confluent hypergeometric equation, and we write its solution as
\[
\phi(\xi) = {}_1F_1(2 + p;\, 2 + p;\, -\xi), \tag{3.37}
\]
which reduces to
\[
\phi(\xi) = e^{-\xi}, \tag{3.38}
\]
where ${}_1F_1(a; b; z)$ is known as Kummer's function [4]. Using it in Eq. (3.33) we can write the solution for $c(x, t)$ in the long-time limit,
\[
c(x, t) \sim t^{1+p}\, e^{-xt}. \tag{3.39}
\]

An interesting point about finding solutions using the Buckingham Pi theorem is that specification of the initial condition is not required at any stage. It implies that the solution is universal, in the sense that it is independent of the initial condition.

3.6 Numerical simulation


We performed the numerical simulation based on the rules depicted in the algorithm (i)-(vi). First, we address the question of how to define time. The time scale in the simulations is defined as the number of attempts to divide an interval, regardless of whether the attempt is successful or not. We can verify this by checking whether our numerical data satisfy Eq. (3.25). In Fig. (3.3) we plot $\log(N(t))$ versus $\log(t)$ for different $p$. In each case we find a straight line with slope $d_f = p$, which agrees with Eq. (3.25). It means that the way we defined time $t$ is consistent with Eq. (3.25).
Next, we verify the conservation law. To verify it we label all the existing intervals at a given time, say $t_i$, in a given realization as $x_1, x_2, \ldots, x_N$. We then measure the sum of the $d_f$th power of their sizes at each time $t_i$ in a single realization, which is essentially the $d_f$th moment of $c(x, t)$, i.e., $M_{d_f}(t)$. Plotting the resulting data in Fig. (3.4) as a function of time $t$, we find it constant. One interesting point is that the exact numerical value of this quantity differs from one realization to another. However, the average over an ensemble of realizations is also conserved, and the corresponding numerical value is always equal to one, as shown in Fig. (3.4) by the deep black filled circles, provided the ensemble size is large enough. Furthermore, to verify Eq. (3.29) we measure the mean interval size $\delta$ by summing the sizes of all the intervals at a given time and dividing the resulting sum by the corresponding interval number.
Figure 3.3: Plots of log(N) versus log(t) for different p, where N is the zeroth moment M0(t). In each case we find a straight line with slope equal to p, as it should be according to Eq. (3.25).

Figure 3.4: The d_f th moment of all the remaining intervals, x1^{d_f} + x2^{d_f} + ... + xj^{d_f}, remains constant within each independent realization, although the numerical value at which it is constant may differ between realizations, as shown by the colored lines. Their ensemble-averaged value, on the other hand, is always equal to one, as shown by the black line.
We then plot $\log(N)$ versus $\log(\delta)$ in Fig. (3.5) and find an excellent straight line with slope equal to $p$, as predicted by Eq. (3.29).
To test the analytic solution for $c(x, t)$ of the rate equation given by Eq. (3.22), we perform a simulation up to a given fixed time and then extract the sizes of all intervals. The distribution of interval sizes can then be constructed. We bin the data to find the number of intervals within a suitable class $\Delta$, which is then normalized by the width of the class $\Delta$ itself, so that $c(x, t)\Delta x$ gives the number of intervals which fall within the range $x$ to $x + \Delta x$. We then plot a histogram in Fig. (3.6), where the number of intervals in each class is normalized by the class width $\Delta x$. The resulting curves for different times, shown in Fig. (3.6), represent plots of $c(x, t)$ versus $x$ at four different instants, $t_1 = 100$k, $t_2 = 200$k, $t_3 = 300$k and $t_4 = 400$k time units. We then divide the ordinate by $t_i^{1+p}$ and multiply the abscissa by $t_i$, where $t_i$ is the time when the snapshot was taken. The resulting plot, which is equivalent to plotting $t^{-(1+p)} c(x, t)$ vs $xt$, is shown in Fig. (3.7(a)); it clearly shows that all the distinct plots of Fig. (3.6) now collapse superbly onto one universal curve. Plotting the same curve on a log-linear scale, as shown in Fig. (3.7(b)), gives a straight line, suggesting that the scaling function is exponential, as we found analytically.
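In code, the data-collapse test amounts to rescaling the binned distribution. The sketch below is ours, reusing the sdcs() routine given earlier and assuming numpy is available; plotting the rescaled curves for the different times should make them fall on the single exponential of Eq. (3.39).

```python
# A sketch (ours) of the data-collapse test: histogram the interval
# sizes at different times t, then rescale as c/t^{1+p} versus x*t.
import numpy as np

p = 0.75
for t in (100_000, 200_000, 400_000):
    sizes = np.array([b - a for a, b in sdcs(p, t)])
    hist, edges = np.histogram(sizes, bins=50)
    x = 0.5 * (edges[:-1] + edges[1:])        # class mid-points
    c = hist / np.diff(edges)                 # c(x, t): counts per unit size
    xi, phi = x * t, c / t ** (1 + p)         # scaled variables of Eq. (3.33)
    print(t, xi[:3], phi[:3])                 # plot phi vs xi to see the collapse
```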

Figure 3.5: We plot log(N) versus log(δ) and find a set of straight lines with slope d_f = p. It provides numerical verification of the theoretical relation N ∼ δ^{-d_f} for the stochastic DCS.
Figure 3.6: The distribution function c(x, t) drawn as a function of x at four different times; for each fixed time it decays exponentially, as predicted by the solution given by Eq. (3.39).

Figure 3.7: (a) The same data as in Fig. 3.6 plotted as c(x, t)/t^{1+p} versus xt; the curves for the different times collapse onto a single universal curve. (b) The same set of data plotted on a log-linear scale. The straight line clearly shows that the scaling function decays exponentially, as predicted by Eq. (3.39).

Recall that two triangles are called similar even if their dimensional quantities differ, provided the corresponding dimensionless quantities coincide. For evolving systems like the SDCS we can conclude that the system is similar to itself at different times, and hence we say that it exhibits self-similarity. Note that self-similarity is also a kind of symmetry: if $x$ is replaced by $\lambda^{-1}x$ and time $t$ is replaced by $\lambda t$ in Eq. (3.39), we find that $c(x, t)$ is brought back to itself,
\[
c(\lambda^{-1}x, \lambda t) = \lambda^{1+d_f}\, c(x, t), \tag{3.40}
\]
which is a kind of symmetry. That is, as the system evolves, if we take a few snapshots at arbitrary times, they can all be obtained from one another by a similarity transformation. On the other hand, data collapse, which proves dynamic scaling, means self-similarity; indeed, self-similarity and similarity transformation are the same thing. All this suggests that there is a continuous symmetry along the time axis. Emmy Noether showed that whenever there exists a continuous symmetry there must exist a conservation law; this is the well-known Noether's theorem. Indeed, we find that for every value of $p$ within $0 < p < 1$ there exists a unique conservation law. That is, the system is governed by a conservation law which ultimately can be held responsible for fixing the values of $\theta$ and $z$ for a given value of $p$, albeit all the major variables of the system change with time. We can thus conclude that the fractal that emerges through evolution in time must obey Noether's theorem.

3.7 Stochastic fractal in aggregation with


stochastic self-replication
The basic idea of the Cantor set is based on fragmentation and removal of intervals in some way. What if we consider the opposite process, namely the aggregation of intervals instead of fragmentation? It has been observed that in this case we have to add intervals in some way, not remove them, to create fractals [83]. In general the kinetics of the aggregation process can be described by Smoluchowski's equation,
\[
\frac{\partial c(x, t)}{\partial t} = -c(x, t)\int_0^\infty K(x, y)\, c(y, t)\,dy + \frac{1}{2}\int_0^x K(y, x - y)\, c(y, t)\, c(x - y, t)\,dy, \tag{3.41}
\]
where the aggregation kernel $K(x, y)$ determines the rate at which particles of size $x$ and $y$ combine to form a particle of size $(x + y)$ [1, 2]. Essentially, Eq. (3.41) describes the following kinetic reaction scheme,
\[
A_x(t) + A_y(t) \xrightarrow{\,R\,} A_{x+y}(t + \tau), \tag{3.42}
\]
where $A_x(t)$ represents an aggregate of size $x$ at time $t$, and the reaction rate $R$ is related to the aggregation kernel via
\[
R = \int_0^\infty K(x, y)\, c(y, t)\,dy. \tag{3.43}
\]
The factor $1/2$ in the gain term of Eq. (3.41) implies that at each step two particles combine to form one particle. The Smoluchowski equation was studied extensively in and around the eighties for a large class of kernels satisfying $K(bx, by) = b^{\lambda} K(x, y)$, where $b > 0$ and $\lambda$ is known as the homogeneity index [9].
We can consider the case where each of the time two intervals, say,
of size x and y combine to form an aggregate of size x + y an interval
of the same size is added to the system with probability p. To describe
this process we just need to replace the factor 12 of the gain term of
Eq. (3.41) by 1+p 2 . The mechanism that the resulting Smoluchowski
equation describes can be best understood by giving an algorithm. The
process starts with a system that comprise of a large number of chem-
ically identical Brownian particles and a fixed value for the probability
p ∈ [0, 1] by which particles are self-replicated. The algorithm of the
model is as follows:
(i) Two particles, say of sizes x and y , are picked randomly from
the system to mimic a random collision via Brownian motion.
(ii) Add the sizes of the two particles to form one particle of their
combined size (x + y ) to mimic aggregation.
(iii) Pick a random number 0 < R < 1. If R ≤ p then add another
particle of size (x + y ) to the system to mimic self-replication.
(iv) The steps (i)–(iii) are repeated ad infinitum to mimic the time
evolution.
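To make the algorithm concrete, here is a minimal Monte Carlo sketch in Python; the function name simulate and its parameters n0, events, every and seed are our own illustrative choices, not part of the model definition:

    import random

    def simulate(p, n0=50000, events=30000, every=500, seed=1):
        # Sketch of steps (i)-(iv): constant-kernel aggregation with
        # stochastic self-replication, recording (N, mean size) snapshots.
        rng = random.Random(seed)
        sizes = [1.0] * n0          # mono-disperse initial condition
        history = []
        for e in range(events):
            # (i) pick two distinct particles at random (Brownian collision)
            i, j = rng.sample(range(len(sizes)), 2)
            merged = sizes[i] + sizes[j]
            # (ii) replace the pair by one particle of the combined size;
            # removal is O(1) by swapping with the last element
            for k in sorted((i, j), reverse=True):
                sizes[k] = sizes[-1]
                sizes.pop()
            sizes.append(merged)
            # (iii) self-replicate the new particle with probability p
            if rng.random() <= p:
                sizes.append(merged)
            if e % every == 0:      # (iv) repeat, taking snapshots
                history.append((len(sizes), sum(sizes) / len(sizes)))
        return sizes, history

Each event removes one particle with probability 1 − p and leaves the count unchanged otherwise, so the population decays on average; the recorded history of (N, s) pairs is reused in the sketches below.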
One may also consider that the system has two different kinds of particles:
active and passive. As the system evolves, active particles always
remain active and take part in aggregation, while the character of a
passive particle is altered irreversibly to an active one with probability
p. Once a passive particle turns into an active particle it can take
part in further aggregation on an equal footing with the other active particles
already present in the system, and it never turns back into a passive
particle. This interpretation is very similar to the work of Krapivsky
and Ben-Naim [32, 48]. While in their work the character of an active
particle is altered, in our work it is the other way around. The two
models are different also because here we only consider the dynamics
of the active particles, whereas Krapivsky and Ben-Naim studied the
dynamics of both entities, since a passive particle in their case exists
at the expense of an active particle and therefore a consistency check is
required. The present model does not require such a consistency
check.
Note that random collision due to Brownian motion can be ensured
if we choose a constant kernel K(x, y), e.g.,

K(x, y) = 2,    (3.44)

for convenience. The generalized Smoluchowski equation then is

∂c(x, t)/∂t = −2c(x, t) ∫_0^∞ c(y, t) dy + (1 + p) ∫_0^x c(y, t) c(x − y, t) dy.    (3.45)

This is the governing equation for the model described by the algorithm
(i)–(iv). Incorporating the definition of the nth moment, M_n(t) = ∫_0^∞ x^n c(x, t) dx,
in Eq. (3.45) we obtain

dM_j(t)/dt = ∫_0^∞ ∫_0^∞ dx dy c(x, t) c(y, t) [(1 + p)(x + y)^j − x^j − y^j].    (3.46)

Setting p = 0 we recover the conservation of mass (M_1(t) = const.)
of the classical Smoluchowski equation for the constant kernel. It is clearly
evident from Eq. (3.46) that the mass of the system for 0 < p < 1 is
no longer a conserved quantity, which is obvious from the very
definition of our model. However, it is not obvious from Eq. (3.46) whether
the system will still be governed by a conservation law or not.
It is fairly easy to obtain solutions to Eq. (3.46) for the first two
moments, namely M_0(t) ≡ N(t) and M_1(t) ≡ M(t). For instance, for
the mono-disperse initial condition c(x, 0) = δ(x − 1) they are

N(t) = 1/(1 + (1 − p)t),    (3.47)

and

M(t) = L(0) (1 + (1 − p)t)^{2p/(1−p)},  0 ≤ p < 1,    (3.48)

respectively. We find it convenient to check how the mean or typical
interval size s(t) grows with time t. It is defined as

s(t) = ⟨x⟩ = ∫_0^∞ x c(x, t) dx / ∫_0^∞ c(x, t) dx = M_1(t)/M_0(t).    (3.49)
Figure 3.8: We plot ln(s(t)) against ln(t) for three different values of p, starting
with mono-disperse initial conditions (we choose 50,000 particles of unit size). The
lines have slopes given by (1 + p)/(1 − p), confirming that s(t) ∼ t^{(1+p)/(1−p)}.

Using Eqs. (3.47) and (3.48) we find that

s(t) = (L(0)/N(0)) (1 + (1 − p)N(0)t)^{(1+p)/(1−p)},  0 ≤ p < 1.    (3.50)

We thus see that for 0 ≤ p < 1 the mean particle size s(t) in the limit
t → ∞ grows following the power-law

s(t) ∼ ((1 − p)t)^{(1+p)/(1−p)}.    (3.51)

To verify this we plot ln(s(t)) against ln(t) in Fig. (3.8) using data
from numerical simulation for three different values of p with the same
mono-disperse initial condition in each case. Appreciating the fact that
t ∼ 1/N in the long-time limit, we obtain three straight lines whose
gradients are given by (1 + p)/(1 − p), providing numerical confirmation of the
theoretically derived result given by Eq. (3.51).
In fractal analysis, one usually seeks a power-law relation between
the number N needed to cover the object under investigation
and a suitable yard-stick size. Let us choose s(t) as the yard-stick size;
expressing N(t) in terms of s(t) we find

N(s(t)) ∼ s^{−d_f},    (3.52)


Figure 3.9: Plots of ln(N(s)) against ln(s) for three different values of p with the
same initial conditions. The lines have slopes equal to −(1 − p)/(1 + p), as predicted by
theory. In each case the simulation was run up to 30,000 aggregation events, starting
with N(0) = 50,000 particles of unit size.

with d_f = (1 − p)/(1 + p). Notice that the exponent d_f is a non-integer
for all p with 0 < p < 1, and its value is less than the dimension of the
embedding space; hence it is the fractal dimension of the resulting
system. To verify our analytical result, we have drawn ln(N) versus
ln(s) in Fig. (3.9) from the numerical data collected for a fixed initial
condition but varying only the value of p. On the other hand, in Fig.
(3.10) we have drawn the same plots for a fixed p value but varying
only the initial conditions (monodisperse and polydisperse). Both figures
show an excellent power-law fit, as predicted by Eq. (3.52), with an
exponent exactly equal to d_f regardless of the choice we make for the
initial size distribution of particles in the system.
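The slope can be extracted directly from the recorded history; a sketch using simulate() from above (the cut-off s > 1.05, which discards the earliest pre-scaling snapshots, is an arbitrary choice of ours):

    import math

    p = 0.5
    _, history = simulate(p)
    pts = [(math.log(s), math.log(N)) for N, s in history if s > 1.05]
    # least-squares slope of ln N against ln s
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    print(slope, -(1 - p) / (1 + p))   # both close to -1/3 for p = 1/2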
We already know from the Cantor set that the d_f th moment is a conserved
quantity. To check this in the present context we label each particle
of the system at a given time t by the index i = 1, 2, 3, ..., N where
N = M_0(t). Then we construct the d_f th moment at time t, given by
Σ_i x_i^{d_f}, which is equivalent to its theoretical counterpart ∫_0^∞ x^{d_f} c(x, t) dx
in the continuum limit. Using data from numerical simulation we have
shown in Fig. (3.11) that the sum of the qth power of the sizes of all the
particles existing in the system remains conserved regardless of time t if
we choose q = (1 − p)/(1 + p). Conserved quantities have always attracted physicists
as they usually point to some underlying symmetry in the theory
or model in which they manifest. Therefore, it is worth pursuing an
understanding of the non-trivial value (1 − p)/(1 + p) for p > 0, as it leads to the
conserved quantity M_{(1−p)/(1+p)}(t) in the scaling regime. Such a non-trivial
conserved quantity has also been reported in one of our recent works on
condensation-driven aggregation, and it indicates a close relation
to the fractal dimension. It will be interesting if we find similar close
connections between the fractal dimension and the non-trivial conserved
quantity.
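This conservation law is a one-line check on the simulation: with all particles initially of unit size, the sum of the qth powers of the sizes should stay close to its initial value N(0) in the scaling regime. A sketch, again reusing simulate():

    p = 0.5
    q = (1 - p) / (1 + p)            # the conserved moment order d_f
    sizes, _ = simulate(p)
    # sum of q-th powers of all sizes; starts at N(0) = 50000 and
    # should remain roughly constant, up to fluctuations
    print(sum(x ** q for x in sizes))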
We shall now apply the Buckingham Pi theorem to obtain the scaling
solution, as it will show us that the resulting fractal is also self-similar.
Note that according to Eq. (3.45) the governed parameter c depends on
three parameters x, t and p. However, the knowledge of the growth
law for the mean particle size implies that one of the parameters, say
x, can be expressed in terms of t and p, since according to Eq. (3.51)
the quantity ((1 − p)t)^{(1+p)/(1−p)} bears the dimension of particle size. Note
that although p itself is dimensionless, we keep it, as
we find it convenient for the discussion to follow. If we consider (1 − p)t
as an independent parameter then the distribution function c(x, t) too
Figure 3.11: ln(M_{(1−p)/(1+p)}(t)) is plotted against ln(t) for various values of p and
various different initial conditions. The horizontal straight lines indicate that M_{(1−p)/(1+p)}(t)
is constant in the scaling regime. In all cases initially 50,000 particles were drawn
randomly from the size range between 1 and n, where n = 1000, 3000, denoted
as poly n.

can be expressed in terms of (1 − p)t alone, and using the power-law
monomial nature of the dimension of physical quantities we can write
c(x, t) ∼ ((1 − p)t)^θ. We therefore can define a dimensionless governing
parameter

ξ = x/((1 − p)t)^z,    (3.53)

where z = (1 + p)/(1 − p), and a dimensionless governed parameter

Π = c(x, t)/((1 − p)t)^θ.    (3.54)
The numerical value of the right hand side of each of the above two equations
remains the same even if the time t is changed by some factor µ,
since the left hand sides are dimensionless. It means that the
two parameters x and t must combine to form a dimensionless quantity
ξ = x/t^z such that the dimensionless governed parameter Π can only
depend on ξ. In other words, we can write

c(x, t)/((1 − p)t)^θ = f(x/t^z),    (3.55)
which leads to the following dynamic scaling form

c(x, t) ∼ ((1 − p)t)^θ f(x/((1 − p)t)^z),    (3.56)

where the exponents θ and z are fixed by the dimensional relations
[t^θ] = [c] and [t^z] = [x] respectively, and f(ξ) is known as the scaling
function.
We now use the scaling form given by Eq. (3.56) in Eq. (3.45) and
find that the scaling function f(ξ) satisfies

t^{−(θ+z+1)} = [(1 − p)^{2θ+z}/F(p, ξ)] [−2µ_0 f(ξ) + (1 + p) ∫_0^ξ f(η) f(ξ − η) dη],    (3.57)

where

F(p, ξ) = θ(1 − p)^θ f(ξ) − z(1 − p)^θ ξ df(ξ)/dξ,    (3.58)

and

µ_0 = ∫_0^∞ f(ξ) dξ,    (3.59)
is the zeroth moment of the scaling function. The right hand side of Eq.
(3.57) is dimensionless and hence dimensional consistency requires
θ + z + 1 = 0, or

θ = −2/(1 − p).    (3.60)

The equation for the scaling function f(ξ) which we have to solve for
this value of θ is

(1 + p) [ξ df(ξ)/dξ + ∫_0^ξ f(η) f(ξ − η) dη] = 2f(ξ)(µ_0 − 1).    (3.61)

Integrating it over ξ from 0 to ∞ immediately gives µ_0 = 1, and hence
the equation that we have to solve to find the scaling function f(ξ) is

ξ df(ξ)/dξ = −∫_0^ξ f(η) f(ξ − η) dη.    (3.62)

To solve Eq. (3.62) we apply the Laplace transform G(k) of f(ξ) in
Eq. (3.62) and find that G(k) satisfies

d(kG(k))/dk = G²(k).    (3.63)
It can easily be solved after linearizing it by the transformation
G(k) = 1/u(k); integrating straightaway gives

G(k) = 1/(1 + k).    (3.64)

Using it in the definition of the inverse Laplace transform we find the
required solution

f(ξ) = e^{−ξ},    (3.65)
and hence according to Eq. (3.56) the scaling solution for the distribution
function is

c(x, t) ∼ ((1 − p)t)^{−2/(1−p)} e^{−x/((1−p)t)^{(1+p)/(1−p)}}.    (3.66)

The advantage of using the scaling theory is that one does not need to
specify the initial condition, revealing the fact that the solution is true
for any initial condition.
The question is: How do we verify Eq. (3.66) using the data extracted
from the numerical simulation? First, we need to appreciate
the fact that each step of the algorithm does not correspond to one
time unit, since t ∼ 1/((1 − p)N) in the long-time limit, as predicted
by Eq. (3.47). Secondly, we collect data for a fixed time t; c_t(x) is
a histogram in which the height, representing the number of particles
within a given range, say of width ∆x, is normalized by the width itself,
so that the area under the curve gives the number of particles present
in the system at time t regardless of their size. This is exactly what is
shown in Figs. (3.12) and (3.13), where Fig. (3.13) is drawn in the
log-linear scale to show that c_t(x) for fixed time decays exponentially.
Now, the solution given by Eq. (3.66) implies that the distinct data
points of c(x, t) as a function of x at various different times can be made
to collapse onto a single master curve if we plot t^{2/(1−p)} c(x, t) vs
x t^{−(1+p)/(1−p)} instead. Note that multiplying time t by the constant
factor (1 − p) has no impact on the resulting plot. Indeed, we find that
the same data points of all three distinct curves of Figs. (3.12) and
(3.13) merge superbly onto a single universal curve, which is essentially
the scaling function f(ξ). It is clear from Fig. (3.14) that the scaling
function f(ξ) decays exponentially, and once again this is in perfect
agreement with our analytical solution given by Eq. (3.65). Besides
aggregation with stochastic self-replication, particles may also grow in
size by condensation, deposition or by accretion. For instance, in
aggregation in the vapor phase or in a damp environment, particles
or droplets may continuously grow by condensation. It has been
shown that the condensation-driven aggregation problem can be best
described as fractal [66, 68].
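The collapse itself can be scripted as follows (a sketch assuming numpy and matplotlib are available and reusing simulate(); the bin count and the three event numbers are arbitrary choices):

    import numpy as np
    import matplotlib.pyplot as plt

    p = 0.5
    z, theta = (1 + p) / (1 - p), 2 / (1 - p)
    for events in (10000, 20000, 30000):        # three different times
        sizes, _ = simulate(p, events=events)
        t = 1.0 / ((1.0 - p) * len(sizes))      # t ~ 1/((1-p)N), up to a constant
        counts, edges = np.histogram(sizes, bins=40)
        x = 0.5 * (edges[1:] + edges[:-1])      # bin centres
        c = counts / np.diff(edges)             # heights normalized by bin width
        keep = counts > 0
        plt.semilogy(x[keep] / t**z, (c * t**theta)[keep], 'o')
    plt.xlabel('x t^{-(1+p)/(1-p)}')
    plt.ylabel('t^{2/(1-p)} c(x,t)')
    plt.show()    # on the semi-log scale the collapsed points fall on a line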
Figure 3.12: Plot of the distribution function c(x, t) as a function of x at
three different times, using data obtained by numerical simulation. Essentially, it is
a plot of a histogram where the number of particles in each size class is normalized
by the width ∆x of the interval.

Figure 3.13: Log-linear plot of the same data as in Fig. (3.12), showing the exponential
decay of the particle size distribution function c_t(x) with particle size x at
fixed time, as found analytically.
Figure 3.14: The three distinct curves of Figs. (3.12) and (3.13) for three different
times are well collapsed onto a single universal curve when c(x, t) is measured in units
of t^{−2/(1−p)} and x is measured in units of t^{(1+p)/(1−p)}. Such data-collapse implies that the
process evolves with time preserving its self-similar character. We have chosen the semi-log
scale to demonstrate that the scaling function decays exponentially, f(ξ) ∼ e^{−ξ},
as predicted by the theory.

3.8 Discussion and summary
We have investigated several fractals of seemingly disparate nature. Despite
their differences we have found two common features. First, they
are self-similar, either spatially or temporally. Second, in each case there
is at least one conservation law. Now, self-similarity being also a kind
of symmetry suggests an intimate connection between the two, which is
reminiscent of Noether's first theorem. We argue that self-similarity
cannot exist without a quantity that remains conserved. Think of the
stochastic fractal that evolves through time yet whose snapshots taken at
different times are similar. Here similarity means that the numerical values of
the dimensional quantities of each snapshot may differ, but the numerical
values of the corresponding dimensionless quantities must remain
the same. The conserved quantity is the quantity which preserves the
self-similar property.
Chapter 4

Multifractality

4.1 Introduction
A fractal only tells us about an object in which a certain physical quantity
is distributed evenly wherever it is found in the embedding space.
Therefore, the fractal dimension of the object will remain the same
even if the content is diluted, changing its density, and
then distributed throughout the space exactly in the same way as before;
the fractal dimension thus tells us nothing about the distribution of the
content. There are situations in which physical quantities are unevenly
distributed over space. That is, the distribution may not be homogeneous
but rather heterogeneous, in the sense that the density of
the content differs from one occupied region to another. Many precious
elements, gold for instance, are found in high concentrations at
only a few places, in moderate to low concentrations at many places,
and in very low concentrations (where the cost of extraction is higher
than the price of the gold extracted) almost everywhere. In such cases, the
fractal vis-a-vis the fractal dimension tells us nothing about the distribution.
We therefore have to invoke the idea of multifractality, which tells
us about the uneven or heterogeneous distribution of the content on a
geometric support. The support itself may or may not be a fractal.
Perhaps an example can provide better understanding than a long
description. For this we consider the case of the generalized Sierpinski
carpet, which is constructed as follows. Let us consider that the initiator
is a square of unit area and unit mass. That is, a unit amount of mass is
distributed uniformly on a square of unit area. The generator then divides
the initiator into four equal smaller squares, and the content of the upper
left square is pasted onto the lower left square. The process involves
cutting and pasting by applying the generator on the available squares
over and over again. In the end we have two things: (i) the
support without content, and (ii) an unevenly distributed mass on
the geometric support, which is the Sierpinski carpet. We shall show
rigorously in this chapter that the distribution of mass on the support is
multifractal. Note that we talk about fractals if all the occupied squares
of equal size at any stage have the same amount of mass; it is in this
sense that we say the mass is evenly distributed. In the case
of a cut and paste model, however, the mass is not distributed evenly but
unevenly. In this section we develop the multifractal formalism
in the context of both the deterministic and the random or stochastic
case.
So, a system is a candidate for multifractal analysis if a certain
physical quantity or variable fluctuates wildly in space or on a support.
Consider the case of the stochastic Cantor set. In the previous chapter we
have seen that the nth moment M_n(t) of the interval size distribution
function c(x, t) of a stochastic fractal has in general the following form

∫_0^∞ x^n c(x, t) dx = M_n(t) ∼ t^{−(n−D)z}.    (4.1)

We therefore can calculate

⟨x^n⟩ = ∫_0^∞ x^n c(x, t) dx / ∫_0^∞ c(x, t) dx = M_n(t)/M_0(t),    (4.2)

and

⟨x⟩ = ∫_0^∞ x c(x, t) dx / ∫_0^∞ c(x, t) dx = M_1(t)/M_0(t).    (4.3)
Using Eq. (4.1) in Eq. (4.2) and Eq. (4.3) we find

⟨x^n⟩ ∼ ⟨x⟩^n.    (4.4)

Thus, there exists a single length scale, the mean interval size
⟨x⟩, which can characterize all the moments of the distribution function.
This is true because the exponent (n − D(β))/(3β + 2) of M_n(t) is linear in n,
which implies that there exists a constant gap between the exponents
of consecutive values of n [32]. Such linear dependence of the exponent of
the nth moment is taken as the litmus test for simple scaling.
There are cases where one finds

⟨x^n⟩ ≠ ⟨x⟩^n,    (4.5)
which is typical of systems where fluctuations in the distribution
of the property of interest are wild in nature. Such systems are typical
candidates to be checked for multifractality.
One of the important ingredients used to obtain the significance of the
spectrum of dimensions requires us to use the Legendre transformation. We
therefore find it worthwhile to discuss the Legendre transformation in brief.

4.2 The Legendre transformation

To understand the significance of f(α) we need to use the Legendre
transform of τ(q). Before we apply it, it is worthwhile to first understand
the principle of the Legendre transformation [73]. For this we consider a
function

y = y(x_1, x_2, ..., x_n).    (4.6)

The Legendre transformation can then be described as a method whereby
the derivatives

m_k = ∂y/∂x_k,    (4.7)
can be considered as independent variables without sacrificing any of
the mathematical content of Eq. (4.6). For the sake of simplicity,
let us first consider a function of a single variable, e.g.,

y = y(x).    (4.8)

Now,

m = ∂y/∂x,    (4.9)

is the slope of the curve y(x) at the point (x, y). In order to treat m as
an independent variable in place of x, one may be tempted to eliminate
x between Eq. (4.8) and Eq. (4.9), thereby getting y as a function
of m. However, this is not correct, since in doing so one would sacrifice
some of the mathematical content of Eq. (4.8), in the sense that knowing
y as a function of m we would not be able to reconstruct the function
y = y(x), as emphasized in Fig. (1).
As an illustrative example, consider the function y = (1/2)e^{2x}, for which
m = dy/dx = e^{2x} and hence y(m) = m/2. Let us try to reconstruct
y(x) from the relation y(m) = m/2. Since m = dy/dx = 2y, we get
y(x) = Ce^{2x}, where the integration constant C remains unspecified.
Clearly, the inadequacy of this procedure arises from the fact that the
relation y = y(m) involves dy/dx, and the integration required to get
y(x) from this first order differential equation yields y(x) only up to an
unspecified constant of integration.
It is straightforward to realize even from this simple example that a
curve (a sequence of points) y = y(x) can also be represented uniquely
by the relation c = c(m) that describes the tangent lines. In other
words, a knowledge of the intercepts c of the tangent lines as a function
of the slope m enables one to construct the family of tangent lines and
thence the curve, as shown in Fig. (3). Now the question is: How
do we calculate c(m) if we are given y(x)? The appropriate mathematical
tool is known as the Legendre transformation. Consider a tangent line that
passes through the point (x, y) and has slope m and intercept c on the
Y-axis. Then,

c = y − mx,    (4.10)

can be taken as the analytical definition of c. In fact, c is the Legendre
transform of y.
We note that there is a simple and general method for systematically
obtaining all the state functions or thermodynamic potentials describing
a system with state variables associated with exact differential
forms, which is done by using the method of Legendre transformation.
Let us first discuss the Legendre transformation in general for a function
Y which depends on the variables x_1, x_2, ..., x_i, etc., i.e.,

Y = Y(x_1, x_2, ..., x_i).    (4.11)

The partial derivatives of Y with respect to its variables are

a_i = (∂Y/∂x_i)_{x_j, j≠i}.    (4.12)

If now x_1 is replaced by a_1 = (∂Y/∂x_1)_{x_j, j≠1} as the independent variable,
we can then define a new function as

Ψ = Y − a_1 x_1,    (4.13)

and in general

Ψ = Y − Σ_i a_i x_i.    (4.14)

The new transformed state function Ψ is now a function of the a_i. Such a
transformation from the original variables x_i to a new set of variables
is called the Legendre transformation.
Once we know the fundamental principle behind the Legendre transformation
we can follow the simple working procedure described below
without having to go through all the details.
(i) We first have to find the differential dY of the function Y whose
Legendre transformation we intend to find. Say it is

dY = a dx.    (4.15)
(ii) We then re-write it as

dY = a dx + x da − x da.    (4.16)

(iii) Hence we can write

d(Y − ax) = −x da.    (4.17)

(iv) The quantity in the parenthesis on the left hand side of the above
equation can be defined as a new quantity

Ψ ≡ Y − ax,    (4.18)

which, according to the right hand side of Eq. (4.17), is a function
of a, which is in fact the slope of the original function Y. In the
following section we will in fact use this procedure.
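As a quick check of this recipe on the earlier example y = (1/2)e^{2x}, a short computer-algebra sketch (assuming the sympy package is available) reproduces the intercept function c(m) of Eq. (4.10):

    import sympy as sp

    x, m = sp.symbols('x m', positive=True)
    y = sp.exp(2 * x) / 2                      # the example function y(x)
    slope = sp.diff(y, x)                      # m = dy/dx = e^{2x}
    x_of_m = sp.solve(sp.Eq(slope, m), x)[0]   # invert the slope: x = ln(m)/2
    c = sp.simplify(y.subs(x, x_of_m) - m * x_of_m)   # c = y - mx, Eq. (4.10)
    print(c)                                   # -> m/2 - m*log(m)/2

Unlike the relation y(m) = m/2 considered earlier, the pair (m, c(m)) retains the full information needed to reconstruct y(x).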

4.3 Theory of multifractality

In general, one can consider a set S of points describing an object which
can be divided into boxes labelled by an index i, such that the ith box
contains N_i of the total N points of the set. These points are
sample points describing the content of the underlying measure. Let us
use the 'mass' or probability µ_i = N_i/N in the ith cell to construct
the weighted d-measure, which we write as

M_d(q, δ) = Σ_{i=1}^{N} µ_i^q δ^d = Z(q, δ) δ^d.    (4.19)

The mass exponent τ(q) for the set depends on the moment of order q
chosen through

Z(q, δ) = Σ_i µ_i^q.    (4.20)

This is also known as the partition function, as it behaves in a similar
manner when one seeks an analogy between the multifractal formalism
and thermodynamic phase transitions. Like N(δ) of the bare d-measure,
the partition function Z(q, δ) also often exhibits a power-law behaviour,

Z(q, δ) ∼ δ^{−τ(q)},    (4.21)
Z (q, δ ) ∼ δ −τ (q) , (4.21)


where the exponent τ(q) is called the mass exponent, not the fractal
dimension, if it is non-linear in q [23]. The mass exponent is more revealing
than the simple fractal dimension, as we shall soon find out. It
can equivalently be defined as

τ(q) = −lim_{δ→0} ln Z(q, δ)/ln δ.    (4.22)

Using Eq. (4.21) in Eq. (4.19) we can write

M_d(q, δ) ∼ δ^{d−τ(q)} → 0 if d > τ(q), and → ∞ if d < τ(q), as δ → 0.    (4.23)

This weighted d-measure has a critical mass exponent d = τ(q) for
which the measure neither vanishes nor diverges as δ → 0, analogous
to the definition of the Hausdorff-Besicovitch dimension for the bare d-measure.
The weighted measure is characterized by a whole sequence
of exponents τ(q) that controls how the moments of the probabilities
{µ_i} scale with δ.
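In practice τ(q) is estimated numerically from sample points. A hypothetical box-counting helper for a two-dimensional point set (the function name and parameter choices are ours), implementing Eqs. (4.19)-(4.22) with numpy:

    import numpy as np

    def mass_exponents(points, qs, scales):
        # points: (N, 2) array of samples in the unit square
        # scales: numbers of boxes per side, so that delta = 1/nbox
        taus = []
        for q in qs:
            logZ, logd = [], []
            for nbox in scales:
                H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                         bins=nbox, range=[[0, 1], [0, 1]])
                mu = H[H > 0] / H.sum()        # box probabilities mu_i
                logZ.append(np.log(np.sum(mu ** q)))
                logd.append(np.log(1.0 / nbox))
            # Z(q, delta) ~ delta^{-tau(q)}, so tau(q) is minus the slope
            taus.append(-np.polyfit(logd, logZ, 1)[0])
        return taus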

4.3.0.1 Properties of the mass exponent τ(q)

• We first note that if we choose q = 0 then µ_i^{q=0} = 1 and
hence Z(q = 0, δ) = N(δ). This number Z(0, δ) = N(δ)
is simply the number of boxes needed to cover the support, and
therefore

τ(0) = D,    (4.24)

is the Hausdorff-Besicovitch dimension of the support; it tells
nothing about the content in it.

• On the other hand, the probabilities are normalized, Σ_i µ_i = 1,
and it follows from Eq. (4.22) that

τ(1) = 0.    (4.25)

Choosing large values of q, say 10 or 100, in Eq. (4.20) favours
contributions from cells with relatively high values of µ_i, since µ_i^q >> µ_j^q
for µ_i > µ_j if q >> 1. Conversely, q << −1 favours the cells with
relatively low values of the measure µ_i on the cell. These limits are
best discussed by considering the derivative dτ(q)/dq given by

dτ(q)/dq = −lim_{δ→0} Σ_i µ_i^q ln µ_i / [(Σ_i µ_i^q) ln δ].    (4.26)
Let µ_min be the minimum value of µ_i in the sum. Then we find

lim_{q→−∞} dτ(q)/dq = −lim_{δ→0} (Σ'_i µ_min^q) ln µ_min / [(Σ'_i µ_min^q) ln δ],    (4.27)

where the prime on the sum indicates that only cells with µ_i = µ_min
contribute. The expression may be written as

lim_{q→−∞} dτ(q)/dq = −lim_{δ→0} ln µ_min/ln δ = −α_max.    (4.28)

A similar argument in the limit q → +∞ leads to the conclusion that
the minimum value of α is given by

lim_{q→+∞} dτ(q)/dq = −lim_{δ→0} ln µ_max/ln δ = −α_min,    (4.29)

where µ_max is the largest value of µ_i, which leads to the smallest value
of α. Eqs. (4.28) and (4.29) imply that

µ_min ∼ δ^{α_max},    (4.30)

and

µ_max ∼ δ^{α_min}.    (4.31)

We can combine the above two relations and write the general equation

µ ∼ δ^α.    (4.32)

Later we shall show that this is indeed the case and that

α(q) = −dτ/dq,    (4.33)

is true in general [16, 22].

For q = 1 we find that dτ/dq has an interesting value:

(dτ(q)/dq)|_{q=1} = −lim_{δ→0} Σ_i µ_i ln µ_i / ln δ = lim_{δ→0} S(δ)/ln δ,    (4.34)

where S(δ) is the information entropy of the partition of the measure,
which we may write as

S(δ) = −Σ_i µ_i ln µ_i ∼ −α(1) ln δ.    (4.35)

The exponent α(1) = −(dτ/dq)|_{q=1} = d_S is also the fractal dimension
of the set onto which the measure concentrates, and it describes the scaling
with the box size δ of the entropy of the measure. Note that the
partition entropy S(δ) at resolution δ is given in terms of the entropy
S of the measure by S(δ) = −S ln δ.
4.3.1 Legendre transformation of τ(q): the f(α) spectrum

Often we find that the Legendre transform of a function is more useful
than the function itself. For instance, the Hamiltonian H = T + V, which
is the Legendre transform of the Lagrangian L = T − V, and the Helmholtz free
energy F, which is the Legendre transform of the internal energy E,
are more useful than their respective parent functions. In the present case we
have a function

τ = τ(q),    (4.36)

which is non-linear in q and hence its slope depends on q. Legendre
transformation of τ(q) means a method whereby its derivative

dτ/dq = −α,    (4.37)

can be considered an independent variable instead of q itself [23]. It is
straightforward to realize that the curve given by Eq. (4.36) can also be
represented uniquely by the relation f = f(α) that describes a family
of tangent lines as a function of the slope α, without sacrificing any of the
mathematical content of the parent equation.
Note that the slope of τ vs q is different at each value of q.
From the curve we can draw the tangent for a given value of q and write
down the equation of the resulting straight line. In general, if we denote
the slope by −α and the intercept by f (which should depend on the slope
itself), then the equation of the straight line is

τ = −αq + f(α).    (4.38)

The function f(α) is in fact the Legendre transform of the function
τ(q). Below, we give a more familiar systematic procedure
to obtain the Legendre transform of the mass exponent τ(q). From Eq.
(4.37) we can write

dτ = −α dq,    (4.39)

which we can re-write as

dτ = −α dq − q dα + q dα,    (4.40)

and hence

d(τ + αq) = q dα.    (4.41)

The above equation immediately reveals that we can define a new function

f(α) = τ + αq,    (4.42)

where the new function f is a function of α.
4.3.1.1 Physical significance of α and f(α)

The sequence of mass exponents τ(q) is related to the f(α) spectrum
via the Legendre transformation. Recall the multifractal partition function
given by

Z(q, δ) = Σ_i µ_i^q,    (4.43)

where the cells are indexed by labelling them as i = 1, 2, ..., etc. On the
other hand, we have seen that the probability µ scales as δ^α. So, the
distribution of the content can be subdivided into subsets characterized
by α instead of i; we can think of cells sharing the same scaling exponent
α. We can now obtain the number Z(α, δ) needed to cover such subsets
of cells. Since the exponent α is a continuous variable, we further consider
that ρ(α)dα is the number of subsets from S_α to S_{α+dα}. Upon transition
from the discrete to the continuum variable we write

Z(α, δ)dα = ρ(α)dα δ^{−f(α)},    (4.44)

and hence the so-called partition function can be re-written as

Z(q, δ) = ∫ ρ(α)dα δ^{−f(α)} δ^{αq} = ∫ ρ(α)dα δ^{qα−f(α)}.    (4.45)

The integral in Eq. (4.45) will be dominated by the value of α which
makes qα − f(α) smallest, provided that ρ(α) is nonzero. Thus, we
replace α by α(q), which is defined by the extremal condition

(d/dα)[qα − f(α)]|_{α=α(q)} = 0.    (4.46)

We also have

(d²/dα²)[qα − f(α)]|_{α=α(q)} > 0,    (4.47)

so that f′(α(q)) = q and f″(α(q)) < 0. Since f′(α(q)) = q, the maximum
of f(α) occurs at q = 0, and we conclude from Eq. (4.42) that f_max = D,
since τ(0) = D, where D is the fractal dimension of the support of the
measure.

4.4 Multifractal formalism in fractal

We already know that the d_f th moment of the interval size at any
stage during the construction process is equal to one. It means we
can consider each interval as a cell containing a mass equal to the
d_f th power of the size of the interval. If the cells are labelled as i =
1, 2, ..., etc., then we can define

µ_i = x_i^{d_f}.    (4.48)
The corresponding partition function therefore is

Z_q = Σ_i µ_i^q,    (4.49)

for a discrete system, and for the continuum system it is

Z_q = ∫_0^∞ µ^q c(x, t) dx.    (4.50)

Let us first consider the dyadic Cantor set [85]. In the nth step of its
construction there are (1 + p)^n intervals, each of size x_i = 2^{−n},
and hence the mean interval size is also δ = 2^{−n}. Thus the partition
function is

Z_q = Σ_i µ_i^q = N (2^{−n})^{q d_f} = δ^{−(1−q)d_f},    (4.51)

so that

τ(q) = (1 − q)d_f.    (4.52)

Its Legendre transformation gives

f(α) = d_f,    (4.53)

and hence it is just a simple fractal.
On the other hand, if we apply the formalism to the stochastic dyadic
Cantor set then the partition function is

Z_q = ∫_0^∞ µ^q c(x, t) dx,    (4.54)

where µ = x^{d_f}. We can then immediately write

Z_q = M_{q d_f}(t).    (4.55)

We know the solution for the nth moment,

M_n(t) ∼ t^{−(n−d_f)z}.    (4.56)

Using Eq. (4.56) in Eq. (4.55) gives

Z_q ∼ t^{−(q−1)d_f z}.    (4.57)

Expressing it in terms of δ we find

Z_q ∼ δ^{−(1−q)d_f}.    (4.58)

Once again this is the same result as for the deterministic dyadic Cantor
set [85].
4.4.1 Deterministic multifractal

In this section we further modify the construction process of the
dyadic Cantor set. Here the support is the DCS with fractal dimension
d_f = log(1 + p)/log 2, on which the total mass of the initiator will
be distributed heterogeneously. However, before we do that, let us first
apply the multifractal formalism to the DCS and show that it results in
a unique f value instead of a multifractal f(α) spectrum. Recall that at
step one the generator divides the initiator, an interval of unit length, into two equal
pieces and removes one with probability 1 − p. In the next step, the generator
is applied to each of the remaining intervals to divide them into two
equal pieces. If we continue the process, then in the nth step we will have
N = (1 + p)^n intervals of size s = 2^{−n}, and we already know that in the
large-n limit the resulting system emerges as a fractal. However, here
we want to apply the multifractal formalism so that we can appreciate
the difference between a fractal and a multifractal.
In order to apply the multifractal formalism, we first have to know
what to choose as a measure and find what fraction of this measure is
in the ith cell. To make things simpler, let us think of a case where
the generator divides the initiator into two equal parts but removes the
right half with probability 1 − p. That is, after step one there are 1 + p
intervals of size equal to 1/2. In such a case we assume that each cell is
occupied with a content equal to (1/2)^{d_f}. In step two, we divide the
interval on the left into two equal parts and remove the right half with
probability 1 − p. Similarly, the interval which is present with probability
p is divided in the same fashion. At the end of step two, we shall
have (1 + p)^2 intervals of size x_i = 1/2^2. If we now continue the process,
then at the end of the nth step we shall have (1 + p)^n intervals of size
x_i = 2^{−n}. So, we can take the occupation probability of each interval
to be p_i = x_i^{d_f}, so that

Σ_{i=1}^{(1+p)^n} p_i = 1,    (4.59)
independent of the step number n.
The so-called partition function Z_q therefore is

Z_q = Σ_{i}^{(1+p)^n} p_i^q.    (4.60)

Now, eliminating n in favour of s, where s = 2^{−n}, we can write

Z_q(s) ∼ s^{−(1−q)d_f}.    (4.61)

The mass exponent therefore is

τ(q) = (1 − q)d_f,    (4.62)
with d_f = ln(1 + p)/ln 2. The mass exponent satisfies the two required
conditions: τ(0) = ln(1 + p)/ln 2 and τ(1) = 0. It is worthwhile to mention as a passing
note that Hentschel and Procaccia introduced another set of dimensions
defined by

τ(q) = (1 − q)D_q.    (4.63)

A system can be described as multifractal only if D_q depends on q.
In such a case D_0 is just the dimension of the support, while D_1 is the
information dimension and D_2 is the correlation dimension. Legendre
transformation of τ(q) immediately gives

f(α) = ln(1 + p)/ln 2.    (4.64)

Thus, instead of getting an f(α) spectrum we get a constant value, and
hence it is not a multifractal, since the slope of the mass exponent is the
same regardless of the value of q. We conclude with the note that the
resulting system is a fractal, since the fraction of the total measure
contained is the same in each cell of the same size [97].
Now we slightly change the construction process of the DCS. Instead
of throwing away the right half of the interval each time we apply
the generator, we paste it onto the left half, which always remains, with
probability 1 − p [97]. That is, after step one there are 1 + p intervals, of
which the one on the left has mass (2 − p)/2 and the one on the right
has mass p/2, so that the sum equals the mass of the initiator.

Figure 4.1: Multifractal f(α) spectrum of the multifractal dyadic Cantor set for p =
1/3, 1/2 and p = 2/3. The peak of the respective curves occurs at log(1 + p)/log 2,
which is the dimension of the skeleton on which the measure is distributed.
In such a case we could say that the left cell has fractional mass
p_1 = (2 − p)/2 and the right cell, present with probability p, has mass
p_2 = 1/2. That is, after step n = 1 there are two types of
mass, µ_0 = p_1 and µ_1 = p_2, such that the total mass Σ_{k=0}^{1} N_k µ_k = 1
is conserved, since N_k = C(1, k) p^k for k = 0, 1, where C(n, k) =
n!/(k!(n − k)!) denotes the binomial coefficient. In step two,
the generator is applied to the remaining 1 + p intervals, and in each
we paste the right half onto the left half with probability (1 − p);
hence at the end of step two there are (1 + p)^2 intervals with three
types of masses: one cell has mass µ_0 = p_1^2 with probability one, two cells have mass
µ_1 = p_1 p_2 with probability p, and one cell has mass µ_2 = p_2^2 with probability p^2. The total
mass of all the cells is Σ_{k=0}^{2} N_k µ_k = 1, where the number of cells of
type k is N_k = C(2, k) p^k and k = 0, 1, 2. This process of cut and
paste is repeated ad infinitum. After the nth step we have (1 + p)^n
intervals with n + 1 different types of mass. We label each type by an
integer k (k = 0, 1, ..., n) such that the mass of each k-type cell is
given by µ_k = p_1^{n−k} p_2^k. The number of k-type cells is
N_k = C(n, k) p^k, where k = 0, 1, ..., n. The distribution of the mass
is clearly not uniform but heterogeneous. We shall now analyze
it below, invoking the idea of the multifractal formalism.
We can now construct the partition function as

Z_q = Σ_{k=0}^{n} N_k µ_k^q = Σ_{k=0}^{n} C(n, k) ((2 − p)/2)^{q(n−k)} (p/2^q)^k.    (4.65)

We can re-write it as

Z_q = [((2 − p)/2)^q + p/2^q]^n.    (4.66)

If we now measure the partition function Z_q in terms of s = 2^{−n}, then
we can eliminate n from Eq. (4.66) in favour of s and find that

Z_q = s^{−τ(q)},    (4.67)

where

τ(q) = log[((2 − p)/2)^q + p/2^q] / log 2.    (4.68)

One can easily check that the two essential properties of the mass exponent
τ(q) are obeyed: τ(1) = 0, and τ(0) = ln(1 + p)/ln 2 is the
dimension of the support. How do we obtain the f(α) curve?
To find the fractal subset, we use the usual Legendre transformation
of τ(q). This is a method whereby the slope of τ(q) is considered an
independent variable instead of q itself. To obtain the Legendre transformation
we write

dτ = (dτ(q)/dq) dq,    (4.69)

where dτ(q)/dq is the slope of the τ versus q curve, and we define it as

α(q) = −dτ(q)/dq.    (4.70)

We can now write Eq. (4.69) as follows:

d(τ + αq) = q dα.    (4.71)

The above equation immediately reveals that we can define the new
function

f(α) = τ(q) + αq,    (4.72)

where the new function f is a function of α, since the quantity q is constant
on the right hand side of Eq. (4.71) while α is the variable [97]. In this way q is
replaced by α as the independent variable. We can thus obtain the following
expression for the multifractal spectrum,

f(α) = qα + log[((2 − p)/2)^q + p/2^q] / log 2,    (4.73)

and the Hölder exponent is

α(q) = (1/ln 2) [(p/2^q) ln 2 − ((2 − p)/2)^q ln((2 − p)/2)] / [((2 − p)/2)^q + p/2^q].    (4.74)

The f(α) vs α spectrum is shown in Fig. (4.1), which clearly shows
that the peak value of f(α) is the dimension of the skeleton,
log(1 + p)/log 2, and that the curve is concave in shape. Here the skeleton is the
dyadic Cantor set, but the distribution of the content on this skeleton is
multifractal.
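The whole spectrum follows from these closed forms by numerical differentiation. A short numpy sketch (the q grid is an arbitrary choice) evaluating Eqs. (4.68), (4.70) and (4.72), and confirming that the peak of f(α) equals log(1 + p)/log 2:

    import numpy as np

    p = 0.5
    q = np.linspace(-15, 15, 601)
    A = ((2 - p) / 2) ** q + p * 2.0 ** (-q)    # the base appearing in Eq. (4.66)
    tau = np.log(A) / np.log(2)                 # Eq. (4.68)
    alpha = -np.gradient(tau, q)                # Eq. (4.70)
    f = tau + alpha * q                         # Eq. (4.72)
    print(f.max(), np.log(1 + p) / np.log(2))   # peak of f = dimension of skeleton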

4.5 Cut and paste model on Sierpinski carpet

In this section we will first discuss an example of a multifractal which is
strictly self-similar in character. It is constructed by a recursive process
for which exact analytical calculations can be made. Here the support
is the Sierpinski carpet with b = 2, which we have already discussed in
section 4.3.3. However, let us first apply the multifractal formalism to the
Sierpinski carpet and show that it results in a unique f value instead
of an f(α) spectrum. Recall that at step one the generator divides the
initiator of unit area into b² = 4 equal pieces and removes one, which
we assume always to be the top left of the four new squares. In the
next step, the generator is applied to each of the remaining three blocks
to divide them into four equal pieces. If we continue the process, then
in the nth step we will have N = 3^n blocks of size δ = 2^{−n}, and we
already know that in the large-n limit the resulting system emerges as a
fractal of dimension d_f = ln 3/ln 2. However, here we want to apply the
multifractal formalism so that we can appreciate the difference between
a fractal and a multifractal.
In order to apply the multifractal formalism we first have to know
what to choose as a measure and find what fraction of this measure is
in the ith cell. To make things simpler, let us think of the case where
the generator divides the initiator into four equal parts but removes
nothing. In such a case we could assume that each block is occupied with
a content equal to its area, and the sum of all the areas would be equal
to one, as the initiator is chosen to be a square of unit area. So, we could
describe the area of the respective block as the occupation probability
p_i = x_i^2 of each block, where x_i^2 is the area of the ith block, and hence
Σ_i p_i = Σ_{i=1}^{4^n} x_i^2 = 1. The exponent 2 in p_i = x_i^2 is in fact the dimension
of the resulting system, which is actually the square lattice. We can
generalize the idea. In the context of the Sierpinski carpet we already
know its dimension d_f = ln 3/ln 2. So, in analogy with the square lattice,
we find that the d_f th moment of the sides of the remaining squares,
Σ_{i=1}^{3^n} x_i^{d_f}, too is a conserved quantity. We, therefore, can assume that
the ith block of the 3^n remaining blocks is occupied with a certain content equal
to p_i = x_i^{ln 3/ln 2}. It can easily be shown that indeed Σ_{i=1}^{3^n} p_i = 1, since

x_1 = x_2 = ... = x_{3^n} = 2^{−n},    (4.75)

and hence

Σ_{i=1}^{3^n} x_i^{d_f} = 3^n 2^{−n d_f} = e^{n ln 3} e^{−n d_f ln 2} = 1.    (4.76)

We therefore can once again regard p_i as the occupation probability.
The so-called partition function Z_q therefore is

Z_q = Σ_{i}^{3^n} p_i^q.    (4.77)
Now, eliminating n in favour of δ, where δ = 2^{−n}, we can write

Z_q(δ) ∼ δ^{−(1−q)d_f}.    (4.78)

The mass exponent therefore is

τ(q) = (1 − q)d_f,    (4.79)

where d_f = ln 3/ln 2, which is the fractal dimension of the Sierpinski carpet.
The mass exponent satisfies the two required conditions: τ(0) = ln 3/ln 2 and
τ(1) = 0. It is worthwhile to mention as a passing note that Hentschel
and Procaccia introduced another set of dimensions defined by

τ(q) = (1 − q)D_q.    (4.80)

A system can be described as multifractal only if D_q depends on q.
In such a case D_0 is just the dimension of the support, while D_1 is the
information dimension and D_2 is the correlation dimension. Legendre
transformation of τ(q) immediately gives

f(α) = ln 3/ln 2.    (4.81)

So, it is not a multifractal, since the slope of the mass exponent is the
same regardless of the value of q. We conclude with the note that it is
a simple fractal, since the fraction of the total measure contained is the same in each cell
of the same size. In the cut and paste model we will take the Sierpinski
carpet as the support, where blocks are occupied by a fraction of the
total measure which can be different even if the cells are of the same size.
The construction of the cut and paste model can be described by
a curdling process where the initiator is a square of unit area with
unit mass distributed on it uniformly [44]. The generator
of the Sierpinski carpet is then modified as follows. In step one the
generator divides the initiator into four equal squares; the upper left
square is removed and its mass, equal to 1/4, is pasted onto the lower left
square. The mass of the lower left square is therefore equal to 1/2, and
each of the remaining two squares has mass equal to 1/4. In step two
the generator is applied to the remaining three squares, and in each case
the mass of the upper left quarter is removed and pasted
onto the respective lower left quarter. This process of cut and paste is
repeated ad infinitum. The distribution of the mass is clearly not
even but heterogeneous, and we analyze it by invoking the idea of
the multifractal formalism discussed in the previous section.
The construction process may equivalently be described as follows.
At the first step (n = 1), divide the square into four equal squares;
redistribute the total mass of the original square into three smaller
Figure 4.2: The first two steps are shown to illustrate the cut and paste model on
the Sierpinski carpet.

squares, of which the two on the right have a fractional mass p_2 = 1/4
and the one on the bottom left has mass p_1 = 1/2, such that p_1 + 2p_2 = 1.
That is, after step n = 1 there are two types of mass, µ_0 = p_1 and
µ_1 = p_2. Repeat this process recursively for each of the smaller squares.
The case of n = 2 is shown in Fig. (4.2). In step n = 2 the generator,
when applied to the lower left square of mass µ_0, produces two types of
mass, µ_0 = p_1^2 and µ_1 = p_1 p_2. Similarly, when the generator is applied
to the upper right square, then the bottom left of the four new smaller
squares will have mass µ_1 = p_2 p_1 and the two squares on the right will
have mass µ_2 = p_2^2. In the same way, the generator applied
to the third square will again produce two types of mass, as obtained for
the upper right square. Thus at step n = 2 we find three types of mass.
It can be shown that at the nth step, the linear size of each square is
δ = 2^{−n}, and there are (n + 1) different types of squares when classified
according to their masses. Each type can be designated by an integer
k (k = 0, 1, ..., n) such that the mass of each of the k-type squares is
given by µ_k = p_1^{n−k} p_2^k. The number of the k-type squares is
N_k = C(n, k) 2^k.
These results can easily be seen by observing that the different
masses at the nth step can be obtained from a multiplicative process.
At the nth step, the possible masses in the squares are given by all the
terms in the expansion of (p_1 + 2p_2)^n, which is equivalent to

Σ_{k=0}^{n} C(n, k) p_1^{n−k} (2p_2)^k = Σ_{k=0}^{n} C(n, k) 2^k p_1^{n−k} p_2^k.    (4.82)

How do we obtain the mass exponent? The fraction of mass in the ith
cell, µ_i, can be taken to be the fractal measure. The sequence of mass
exponents τ(q) is defined by

N(q, δ) = Σ_i µ_i^q ∼ δ^{−τ(q)},    (4.83)

as δ → 0, where δ is the size of each cell. Specialized to our case here,
one obtains

N(q, δ) = Σ_{k=0}^{n} N_k µ_k^q = Σ_{k=0}^{n} C(n, k) 2^k (p_1^{n−k} p_2^k)^q = Σ_{k=0}^{n} C(n, k) (p_1^q)^{n−k} (2p_2^q)^k = (p_1^q + 2p_2^q)^n.    (4.84)

Eliminating n in favour of δ using δ = 2^{−n} and using Eqs. (4.82) and
(4.83), we find that

Z_q(δ) ∼ δ^{−τ(q)},    (4.85)

where

τ(q) = ln(p_1^q + 2p_2^q)/ln 2.    (4.86)
One can easily check that the two essential properties of the mass exponent
τ(q) are obeyed: τ(1) = 0, and τ(0) = ln 3/ln 2 is the dimension
of the support. How do we obtain the f(α) curve? Using the Legendre transformation,
a few steps of algebra give the following expression
for the multifractal spectrum,

f(α) = qα + ln(p_1^q + 2p_2^q)/ln 2,    (4.87)

and the Hölder exponent is

α(q) = −(1/ln 2) (p_1^q ln p_1 + 2p_2^q ln p_2)/(p_1^q + 2p_2^q).    (4.88)

It only requires us to see the difference between the cut and paste
model and the Sierpinski carpet in order to understand the origin of
multifractality.
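A quick numerical consistency check of Eqs. (4.84) and (4.86) (with p_1 = 1/2 and p_2 = 1/4 from the curdling rule; the values of n and q are arbitrary choices):

    import math

    p1, p2, n, q = 0.5, 0.25, 12, 3.0
    # direct sum over the k-types, Eq. (4.84)
    Z = sum(math.comb(n, k) * 2 ** k * (p1 ** (n - k) * p2 ** k) ** q
            for k in range(n + 1))
    tau_direct = math.log(Z) / (n * math.log(2))   # from Z = delta^{-tau}, delta = 2^{-n}
    tau_closed = math.log(p1 ** q + 2 * p2 ** q) / math.log(2)   # Eq. (4.86)
    print(tau_direct, tau_closed)                  # agree for any n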
Figure 4.3: Multifractal f(α) spectrum of the cut and paste model on the Sierpinski
carpet. The peak of the curve occurs at ln 3/ln 2, the dimension of the support on
which the measure is distributed.

4.6 Stochastic multifractal

4.7 Weighted planar stochastic lattice model

Perhaps the square lattice is the simplest example of a cellular structure
where every cell has the same size and the same coordination number.
Its construction starts with an initiator, say a square of unit area,
and a generator that divides it into four equal parts. In the next step
and the steps thereafter, the generator is applied to all the available blocks,
which eventually generates a square lattice. In this section, we intend
to address the following questions. Firstly, what if the generator
is applied to only one of the available blocks at each step, picked
preferentially with respect to area? Secondly, what if
we use a modified generator that divides the initiator randomly into
four blocks instead of four equal parts, and apply it to only one of the
available blocks at each step, again picked preferentially with
respect to area? Our primary focus will be on the latter
case, which results in the tiling of the initiator into increasingly smaller
mutually exclusive rectangular blocks. We term the resulting structure
the weighted planar stochastic lattice (WPSL), since spatial randomness
is incorporated by the modified generator and time is
incorporated by its sequential application [75, 80, 92]. The definition of the model may appear too simple, but
the results it offers, as we shall see soon, are far from simple.
Figure 4.4: A snapshot of the weighted planar stochastic lattice containing 30001
blocks.

To illustrate the type of systems expected, we show a snapshot of the resulting
weighted planar stochastic lattice taken during the evolution (see
Fig. (4.4)). We intend to investigate its topological and geometrical
properties in an attempt to find some order in this seemingly disordered
lattice.

4.8 Algorithm of the weighted planar stochastic lattice (WPSL)
Perhaps an exact algorithm can provide a better description of the
model than the mere definition. In step one, the generator divides
the initiator, say a square of unit area, randomly into four smaller
blocks. The four newly created blocks are then labelled by their respective
areas a_1, a_2, a_3 and a_4 in a clockwise fashion starting from the
upper left block (see Fig. 2). In step two and thereafter, only one block
is picked at each step, with probability equal to its area,
and is then divided randomly into four blocks. In general, the jth
step of the algorithm can be described as follows (a minimal simulation
sketch is given after the list).
(i) Subdivide the interval [0, 1] into (3j − 2) sub-intervals
[0, a_1], [a_1, a_1 + a_2], ..., [Σ_{i=1}^{3j−3} a_i, 1], each of which represents the
blocks labelled by their areas a_1, a_2, ..., a_{(3j−2)} respectively.
(ii) Generate a random number R from the interval [0, 1] and find
which of the (3j − 2) sub-intervals contains this R. The corresponding
block it represents, say the pth block of area a_p, is
picked.
(iii) Calculate the length x_p and the width y_p of this block and keep
note of the coordinate of the lower-left corner of the pth block,
say (x_low, y_low).
(iv) Generate two random numbers x_R and y_R from [0, x_p] and [0, y_p]
respectively, and hence the point (x_R + x_low, y_R + y_low), mimicking
the random nucleation of a seed in the block p.
(v) Draw two perpendicular lines through the point (x_R + x_low, y_R +
y_low), parallel to the sides of the pth block, mimicking orthogonal
cracks which stop growing upon touching existing cracks;
they divide the block into four smaller
blocks. The label a_p is now redundant and hence can be reused.
(vi) Label the four newly created blocks by their areas a_p,
a_{(3j−1)}, a_{3j} and a_{(3j+1)} respectively, in a clockwise fashion starting
from the upper left corner.
(vii) Increase time by one unit and repeat the steps (i)–(vii) ad infinitum.
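A minimal Python sketch of this algorithm (our own bookkeeping: each block is stored explicitly as a rectangle (x_low, y_low, width, height) rather than through the area labels a_1, a_2, ...; recomputing all areas at every step is O(N), which is fine for a sketch):

    import random

    def wpsl(steps, seed=1):
        rng = random.Random(seed)
        blocks = [(0.0, 0.0, 1.0, 1.0)]    # the initiator: a unit square
        for _ in range(steps):
            # pick one block preferentially with respect to its area
            areas = [w * h for _, _, w, h in blocks]
            i = rng.choices(range(len(blocks)), weights=areas)[0]
            x0, y0, w, h = blocks.pop(i)
            # nucleate a seed uniformly inside the block and crack it
            # into four smaller blocks
            xr, yr = rng.uniform(0, w), rng.uniform(0, h)
            blocks += [(x0, y0, xr, yr),
                       (x0 + xr, y0, w - xr, yr),
                       (x0, y0 + yr, xr, h - yr),
                       (x0 + xr, y0 + yr, w - xr, h - yr)]
        return blocks

Each step replaces one block by four, so after t steps there are N(t) = 1 + 3t blocks; for example, 10,000 steps give the 30,001 blocks of the snapshot in Fig. (4.4).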
In general, the distribution function C(x, y; t), describing the blocks
of the lattice by their length x and width y, evolves according to the
following kinetic equation [32]:

∂C(x, y; t)/∂t = −C(x, y; t) ∫_0^x ∫_0^y dx_1 dy_1 F(x_1, x − x_1, y_1, y − y_1) + 4 ∫_x^∞ ∫_y^∞ C(x_1, y_1; t) F(x, x_1 − x, y, y_1 − y) dx_1 dy_1,    (4.89)

where the kernel F(x_1, x_2, y_1, y_2) determines the rules and the rate at which
a block of sides (x_1 + x_2) and (y_1 + y_2) is divided into four smaller
blocks whose sides are the arguments of the kernel [31, 32, 33]. The
first term on the right hand side of equation (4.89) represents the loss
of blocks of sides x and y due to the nucleation of a seed of a crack on one such
block, from which mutually perpendicular cracks grow to divide it
into four smaller blocks. Similarly, the second term on the right hand
side represents the gain of blocks of sides x and y due to the nucleation of
a seed of a crack on a block of sides x_1 and y_1, ensuring that one of the four
new blocks has sides x and y. Let us now consider the case where the
generator divides the initiator randomly into four smaller rectangles
and apply it thereafter to only one of the available blocks, picked
preferentially with respect to area. This effectively describes the
random sequential nucleation of seeds with uniform probability on the
initiator. Within the rate equation approach, this can be ensured if one
chooses the kernel

F(x_1, x_2, y_1, y_2) = 1.    (4.90)
Substituting it into equation (4.89) we obtain

∂C(x, y; t)/∂t = −xy C(x, y; t) + 4 ∫_x^∞ ∫_y^∞ C(x_1, y_1; t) dx_1 dy_1.    (4.91)

The coefficient xy of C(x, y; t) in the loss term implies that seeds of
cracks are nucleated on the blocks preferentially with respect to their
areas, which is consistent with the definition of our model.
Incorporating the 2-tuple Mellin transform, given by

M(m, n; t) = ∫_0^∞ ∫_0^∞ x^{m−1} y^{n−1} C(x, y; t) dx dy,    (4.92)

in equation (4.91), we get

dM(m, n; t)/dt = (4/(mn) − 1) M(m + 1, n + 1; t).    (4.93)

Iterating equation (4.93) to get all the derivatives of M(m, n; t) and
then substituting them into the Taylor series expansion of M(m, n; t)
about t = 0, one can immediately write its solution as

M(m, n; t) = ₂F₂(a_+, a_−; m, n; −t),    (4.94)

where M(m, n; t) = M(n, m; t) for symmetry reasons and

a_± = (m + n)/2 ± [((m − n)/2)² + 4]^{1/2},    (4.95)

where ₂F₂(a, b; c, d; z) is the generalized hypergeometric function [4].
One can see that (i) M(1, 1; t) = 1 + 3t is the total number of blocks
N(t), and (ii) M(2, 2; t) = 1 is the sum of the areas of all the blocks, which
is obviously a conserved quantity [35, 41]. Both properties are again
consistent with the definition of the WPSL depicted in the algorithm.
The behaviour of M(m, n; t) in the long-time limit is

M(m, n; t) ∼ t^{−a_−}.    (4.96)

Thus, in addition to the conservation of the total area, the system is also
governed by infinitely many non-trivial conservation laws, as Eq. (4.96) implies

M(n, 4/n; t) ∼ constant ∀n.    (4.97)

We used numerical simulation to verify equation (4.97), or its discrete
counterpart Σ_i^N x_i^{n−1} y_i^{4/n−1} if we label all the available blocks
as i = 1, 2, ..., N. We found that the analytical solution is in perfect
agreement with the numerical simulation, which we performed based on
the algorithm for the WPSL model (see Fig. (4.5)).
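With the wpsl() sketch above, these non-trivial conservation laws can be checked in a few lines:

    blocks = wpsl(10000)
    for n in (3, 4, 5):
        s = sum(w ** (n - 1) * h ** (4 / n - 1) for _, _, w, h in blocks)
        print(n, s)    # each sum stays of order one while N grows to 30001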

4.9 Geometric properties of WPSL

We now find it interesting to focus on the distribution function
n(x, t) = ∫_0^∞ C(x, y; t) dy, which describes the concentration of rectangles which have

Figure 4.5: The plots of Σ_i^N x_i^{n−1} y_i^{4/n−1} vs N for n = 3, 4, 5, drawn using data
collected from one realization.
length x at time t regardless of their width y. The qth
moment of n(x, t) is then defined as

M_q(t) = ∫_0^∞ x^q n(x, t) dx.    (4.98)

It follows from Eq. (4.96), with m = q + 1 and n = 1, that

M_q(t) ∼ t^{(√(q² + 16) − (q + 2))/2}.    (4.99)

Note that for symmetry reasons it does not matter whether we consider
the qth moment of n(x, t) or that of n(y, t), since we have
M(q + 1, 1; t) = M(1, q + 1; t). According to equation (4.99) the quantity
M_3(t), and hence Σ_i^N x_i^3 or Σ_i^N y_i^3, is a conserved quantity. Although
M_3(t) remains constant against time in every independent realization,
its exact numerical value is found to be different in every
realization (see Fig. 4). As in the dyadic Cantor set, their ensemble
average value is, however, equal to one. We also find that the qth moment
of n(x, t) and the qth power of the first moment are not equal,
i.e.,

⟨x^q⟩ = ∫_0^∞ x^q n(x, t) dx / ∫_0^∞ n(x, t) dx ≠ ⟨x⟩^q = (∫_0^∞ x n(x, t) dx / ∫_0^∞ n(x, t) dx)^q.    (4.100)

It suggests that a single length scale cannot characterize all the moments
of the distribution function n(x, t).

4.9.0.1 Multifractal analysis of the stochastic Sierpinski carpet


Of all the conservation laws we find that M_3(t) = \sum_i x_i^3 is a special
one, since we can use it as a multifractal measure consisting of members
p_i = x_i^3/\sum_i x_i^3, the fraction of the total measure, distributed on the
geometric support WPSL. That is, we assume that the i-th block is
occupied with the cubic power of its own length x_i. The corresponding
"partition function" of the multifractal formalism then is

Z_q(t) = \sum_i p_i^q \sim M_{3q}(t).   (4.101)
i

Its solution can immediately be obtained from equation (4.99) to give

Z_q(t) \sim t^{\{\sqrt{9q^2+16}\,-\,(3q+2)\}/2}.   (4.102)

Using the square root of the mean area, \delta(t) = \sqrt{M(2,2;t)/M(1,1;t)} \sim t^{-1/2},
as the yard-stick to express the partition function Z_q gives the weighted
number of squares N(q,\delta) needed to cover the measure, which we find
decays following the power law

N(q,\delta) \sim \delta^{-\tau(q)},   (4.103)

where the mass exponent

\tau(q) = \sqrt{9q^2+16} - (3q+2).   (4.104)

Figure 4.6: The plots of \tau(q) vs q, showing how the slope varies as a function of q.
The non-linear nature of \tau(q) (see Fig. 4.6, for instance) suggests that the
gap exponent

\Delta = \tau(q) - \tau(q-1)   (4.105)

is different for every value of q. It implies that we require an infinite
hierarchy of exponents to specify how the moments of the probabilities
{p_i} scale with \delta. Now, if we choose q = 0, then it gives an estimate
of the number N(0,\delta) = N(\delta) of squares of side \delta we need to cover
the support on which the members of the population are distributed. We
find that N(\delta) scales as

N(q=0,\delta) \sim \delta^{-\tau(0)},   (4.106)

where \tau(0) = 2 is the Hausdorff-Besicovitch dimension of the support.
On the other hand, if we choose q = 1 we have Z_1 = \sum_i p_i = \text{const.}, and
hence we must have \tau(1) = 0. This is indeed the case according to equation
(4.104). Therefore, \tau(0) = 2 (the dimension of the support) and
\tau(1) = 0 (required by the normalization condition) are often considered
as the first self-consistency check for the multifractal analysis.
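As a quick sanity check, both conditions can be verified numerically from equation (4.104). The snippet below (a minimal Python sketch of ours) also evaluates the gap exponent of equation (4.105) at a few values of q to show that it is not constant.

    import numpy as np

    def tau(q):
        # mass exponent of equation (4.104)
        return np.sqrt(9.0 * q**2 + 16.0) - (3.0 * q + 2.0)

    # self-consistency: tau(0) = 2 (dimension of the support), tau(1) = 0
    print(tau(0.0), tau(1.0))

    # the gap exponent tau(q) - tau(q-1) differs for every q
    for q in (0.0, 1.0, 2.0, 3.0):
        print(q, tau(q) - tau(q - 1.0))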

4.9.0.2 Legendre transformation of the mass exponent \tau(q): the f(\alpha) spectrum

We now perform the Legendre transformation of the mass exponent \tau(q)
by using the Lipschitz-Hölder exponent \alpha, as given in equation (4.37), as
an independent variable to obtain the new function

f(\alpha) = q\alpha + \tau(q).   (4.107)

Replacing \tau(q) in equation (4.103) in favour of f(\alpha) we find that

N(q(\alpha), \delta) \sim \delta^{q\alpha - f(\alpha)} \quad \text{as } \delta \to 0.   (4.108)

On the other hand, using p \sim \delta^{\alpha} in the expression for the partition
function Z_q = \sum_i p^q, and replacing the sum by an integral while indexing
the blocks by a continuous Lipschitz-Hölder exponent \alpha as variable with
a weight \rho(\alpha), we obtain

N(q(\alpha), \delta) \sim \int \rho(\alpha)\, d\alpha\, N(\alpha, \delta)\, \delta^{q\alpha},   (4.109)

where N(\alpha, \delta) is the number of squares of side \delta needed to cover the
measure indexed by \alpha. Comparing Eqs. (4.108) and (4.109) we find

N(\alpha, \delta) \sim \delta^{-f(\alpha)}.   (4.110)

Figure 4.7: The f (α) spectrum.



It implies that a spectrum of spatially intertwined fractal dimensions,

f(\alpha(q)) = \frac{16}{\sqrt{9q^2+16}} - 2,   (4.111)

is needed to characterize the measure. That is, the size disorder of the
blocks is multifractal in character, since the measure {p_\alpha} is related to
the size of the blocks. In other words, the distribution of {p_\alpha} in the WPSL
can be subdivided into a union of fractal subsets, each with fractal dimension
f(\alpha) \le 2, in which the measure p_\alpha scales as \delta^{\alpha}. Note that f(\alpha) is always
concave in character (see Fig. 4.7), with a single maximum at q = 0 which
corresponds to the dimension of the WPSL with empty blocks.
On the other hand, we can obtain the entropy S(\delta) = -\sum_i p_i \ln p_i
associated with the partition of the measure on the support by using
the relation \sum_i p_i^q \sim \delta^{-\tau(q)} in the definition of S(\delta). A few steps
of algebraic manipulation then reveal that S(\delta) exhibits the scaling

S(\delta) = \ln \delta^{-\alpha(1)},   (4.112)

where the exponent \alpha(1) = 6/5 is obtained from

\alpha(q) = -\,\frac{d\tau(q)}{dq}\Big|_q.   (4.113)

It is interesting to note that \alpha(1) is related to the generalized dimension
D_q, which is in turn related to the Rényi entropy H_q(p) = \frac{1}{1-q}\ln\sum_i p_i^q
of information theory, given by

D_q = \lim_{\delta\to 0}\Big[\frac{1}{q-1}\,\frac{\ln\sum_i p_i^q}{\ln\delta}\Big] = \frac{\tau(q)}{1-q},   (4.114)
which is often used in the multifractal formalism as it can also provide
insightful interpretations. For instance, D_0 = \tau(0) is the dimension of
the support, D_1 = \alpha(1) is the Rényi information dimension and D_2 is
known as the correlation dimension. Multifractal analysis was initially
proposed to treat turbulence, but was later successfully applied in a wide
range of exciting fields of research. Recently it has gained renewed
momentum, as it has been found that the probability density function of
the wave functions at the Anderson and the quantum Hall transitions
exhibits multifractality: in the vicinity of the transition point the
fluctuations are wild, a characteristic feature of multifractal behaviour.
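For concreteness, the whole spectrum can be generated directly from the mass exponent. The short Python sketch below (ours, with hypothetical helper names) evaluates \alpha(q) = -d\tau/dq in closed form, the Legendre transform f(\alpha) = q\alpha + \tau(q), and a few generalized dimensions D_q = \tau(q)/(1-q).

    import numpy as np

    def tau(q):
        # mass exponent of equation (4.104)
        return np.sqrt(9.0 * q**2 + 16.0) - (3.0 * q + 2.0)

    def alpha(q):
        # Lipschitz-Holder exponent alpha(q) = -dtau/dq, equation (4.113)
        return 3.0 - 9.0 * q / np.sqrt(9.0 * q**2 + 16.0)

    def f(q):
        # Legendre transform f(alpha) = q*alpha + tau(q), equation (4.107);
        # it reduces to 16/sqrt(9q^2 + 16) - 2, equation (4.111)
        return q * alpha(q) + tau(q)

    print(f(0.0))            # 2.0: the maximum of the spectrum, at q = 0
    print(alpha(1.0))        # 6/5: the Renyi information dimension D_1
    for q in (0.0, 2.0):     # generalized dimensions D_q = tau(q)/(1 - q)
        print(q, tau(q) / (1.0 - q))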

4.10 Multifractal formalism in the kinetic square lattice

In an attempt to understand the origin of multifractality we now consider
the case where the generator divides the initiator into four equal blocks
instead of randomly into four blocks. If the generator is applied over and
over again thereafter to only one of the available squares, picked
preferentially with respect to their areas, then it results in the kinetic
square lattice (KSL). Within the rate equation approach it can be described
by the kernel

F(x_1, x_2, y_1, y_2) = (x_1+x_2)(y_1+y_2)\,\delta(x_1-x_2)\,\delta(y_1-y_2),   (4.115)

and hence the resulting rate equation can be obtained after substituting
it in equation (4.89) to give

\frac{\partial C(x,y;t)}{\partial t} = -\frac{1}{4}\, xy\,C(x,y;t) + 4^2\, xy\,C(2x,2y;t).   (4.116)
Incorporating equation (4.92) in equation (4.116) yields

\frac{dM(m,n;t)}{dt} = -\Big(\frac{1}{4} - \frac{4}{2^{m+n}}\Big)\, M(m+1,n+1;t).   (4.117)
To obtain the solution of this equation in the long-time limit, we assume
the following power-law asymptotic behaviour of M(m,n;t) and write

M(m,n;t) \sim A(m,n)\, t^{\theta(m+n)},   (4.118)

with \theta(4) = 0, since the total area, obtained by setting m = n = 2, is
an obvious conserved quantity. Using it in equation (4.117) yields the
following difference equation:

\theta(m+n+2) = \theta(m+n) - 1.   (4.119)

Iterating it, subject to the condition \theta(4) = 0, gives

M(m,n;t) \sim t^{-\frac{m+n-4}{2}}.   (4.120)
Apparently, in addition to the conservation of the total area M(2,2;t),
we find that the integrals M(3,1;t) and M(1,3;t) are also conserved.
Interestingly, all three integrals M(2,2;t), M(3,1;t) and M(1,3;t)
effectively describe the same physical quantity, since all the blocks are
square in shape and hence

\sum_{i=1}^{N} x_i^2 = \sum_{i=1}^{N} y_i^2 = \sum_{i=1}^{N} x_i y_i.   (4.121)

Therefore, in reality the system obeys only one conservation law:
conservation of the total area.
We again look into the q-th moment of n(x,t) using equation (4.98)
and, appreciating the fact that M_q(t) equals M(q+1,1;t) or M(1,q+1;t),
we immediately find that

M_q(t) \sim t^{-\frac{q-2}{2}}.   (4.122)

Unlike in the previous case, where the exponent of the power-law solution
of M_q(t) is non-linear in q, here we have an exponent which is linear
in q. It immediately implies that in the case of the kinetic square lattice

\langle x^q \rangle = \langle x \rangle^q,   (4.123)

and hence a single length scale is enough to characterize all the moments
of n(x,t). That is, the system now exhibits simple scaling instead of
multiscaling. Like before, let us consider that each block is occupied
with a fraction of the measure equal to the square of its own length
(its area), p_i = x_i^2/\sum_i^N x_i^2, and hence the corresponding partition
function is

Z_q = \sum_i^N p_i^q \sim M_{2q}(t).   (4.124)

Using equation (4.122) we can immediately write its solution

Z_q(t) \sim t^{-\frac{2q-2}{2}}.   (4.125)

Expressing it in terms of the square root of the average area \delta \sim t^{-1/2}
gives the weighted number of squares N(q,\delta) of side \delta needed to cover
the measure, which has the following power-law solution

N(q,\delta) \sim \delta^{-2(1-q)},   (4.126)

where the mass exponent \tau(q) is

\tau(q) = 2 - 2q.   (4.127)

The Legendre transform of the mass exponent is a constant,

f(\alpha) = 2,   (4.128)

and so is the generalized dimension,

D_q = 2.   (4.129)

We thus find that if the generator divides the initiator into four equal
squares and we apply it thereafter sequentially, then the resulting
lattice is no longer a multifractal. The reason is that the distribution
of the population on the resulting support is in this case uniform. The
two models therefore provide a unique opportunity to look for the possible
origin of multifractality. The two models discussed in this section
differ only in the definition of the generator. In the case where the
generator divides the initiator randomly into four blocks and we apply
it over and over again sequentially, we have multifractality, since
the underlying mechanism is governed by a random multiplicative
process. This is not, however, the case if the generator divides
the initiator into four equal blocks and is applied over and over again
sequentially, since the resulting dynamics is governed by a deterministic
multiplicative process instead.

4.10.1 Discussions

In this section, we have discussed two main features: (i) finding the
explicit time-dependent and scaling properties of the particle size
distribution function when particles are characterized by more than one
variable, and (ii) the connection between the kinetics of the fragmentation
process and the occurrence of multifractality in describing the rich
pattern formed due to the breakup process. Both these results have important
applications, including a unique opportunity to search for the origin
of multifractality and multiscaling. In reality, fragmenting objects
have both size and shape, i.e., a geometry. Intrigued by the possibility
that the geometry of the fragmenting objects may influence the
fragmentation process, we have investigated three distinct models of
fragmentation. For some simple choices of the fragmentation rule we
give exact and explicit solutions to these geometric models.
We find it difficult to obtain the explicit solution for general
homogeneity in two dimensions. Nevertheless, the rate equation for the
moments is analytically tractable, and we can find the temporal behaviour
of the moments in the long-time limit. Since the moments keep the signature
of some generic features of the particle size distribution function, we
confined ourselves to the asymptotic behaviour of the moments for general
homogeneity indices, which is essential in looking out for the occurrence
of the shattering transition. We suggest that the existence of an infinite
number of hidden conserved quantities clearly indicates the absence of
scaling solutions.

The models we discuss in this chapter can also be used to produce
stochastic fractals which are reminiscent of the Cantor gasket (d = 2),
Cantor cheese (d = 3), etc. We derive exact expressions for the fractal
dimensions when the initiator is a rectangle and at each time step the
generator subdivides it into four rectangles, one or more of which are
removed randomly. We continue the process ad infinitum with the remaining
pieces. In this case it appears that one cannot describe the phenomenon
by a single fractal dimension: infinitely many are required. Such phenomena
are called multifractal. Physically, it means that it is possible to
partition the resulting system into subsets such that each subset is a
fractal with its own characteristic dimension. Typically, multifractal
patterns appear in systems that develop far from equilibrium and that do
not yield a minimum energy configuration, such as diffusion-limited
aggregation, or a metal foil grown by electrodeposition.
When the system describes the fragmentation process, the dimension of
the resulting set in the long-time limit is an integer, typical of
Euclidean shapes. Yet we find that the system shows multifractality and
gives a unique measure of support (D_f = 2) on which subsets can be
distributed. However, any observable fluctuates strongly from one
realization to another. Although each realization is statistically
self-similar, these fluctuations mean that averaged quantities of any
observable can be measured with reasonable accuracy only through an
ensemble average. That is, a single experiment over a longer period of
time will not give any averaged quantities with good accuracy; instead
a large number of independent experiments is required. This is a very
important property for real or numerical experiments. But when describing
stochastic fractals, one associates pictures of wildly varying
probabilities of the measure, since at each realization the dimension of
the support can be different. This reflects the fact that, in the case of
a system describing stochastic fractals, the entropy of the system has one
more source than in the fragmentation process. This extra source arises
due to the competition among the fractal supports for different m* in a
given experiment. Recently, fragmentation of a heavy drop falling in a
lighter miscible fluid has been performed in the laboratory. During this
process, the droplet sizes were found to display multifractal properties,
which is in agreement with our investigation.
One might wonder whether it is possible to say when to expect a random
fractal or a multifractal. Perhaps it is quite safe at this stage to say
that whenever we hopelessly fail to produce an identical copy under
the same initial condition, but each realization has the same generic
form by which we can recognize it, we find a fractal object. On the other
hand, we expect a system to show multifractality whenever each copy appears
with strong fluctuations between different copies and each copy can
be partitioned into subsets such that each subset scales with a different
exponent, yet they can be recognized due to their generic features.
Note that when three fragments are removed from the system at each
time event (s = 1), the dimension D_f(s) of the support on which the
measure can be distributed is zero.
Chapter 5

Fractal and Multifractal in Stochastic Time Series

5.1 Introduction

In the previous chapters, we have discussed the concepts of self-similarity,
fractality and multifractality for both deterministic and
stochastic processes. In each case, a rule (deterministic or stochastic)
is considered to generate the process and define its fractal or multifractal
behaviour. A deterministic rule is mathematical, and the corresponding
process is called deterministic [47, 49]. On the other hand,
a stochastic rule is completely statistical, i.e., the behaviour of the
system depends on a probabilistic law, and the associated system is known
as a stochastic process [50, 53]. However, these kinds of predefined rules
do not work well for the prediction of real-world phenomena (e.g., human
heart oscillations, the nervous system, tumor cell growth, cancer growth,
the environment, socio-economic systems, etc.). Moreover, it is impossible
to define an exact rule for such phenomena. In this case, the only way to
predict the process is time series analysis. A time series is deterministic
or stochastic according to the corresponding nature of the process. For a
deterministic time series, fractal as well as multifractal analysis can
be done by reconstructing the attractor from the given time series [47, 49].
Reconstruction of the attractor requires a suitable time-delay and a proper
embedding dimension [14, 47, 49]. On the other

hand, such a delay-embedding method cannot be applied to a stochastic
time series (STS), because both the delay and the embedding dimension
become very large [50]. However, self-similarity can be investigated by
measuring a scaling law for the STS. The scaling law can be measured by
fluctuation analysis based on the statistical variation in the time series
[30, 25, 69, 81]. A single scaling behaviour corresponds to a monofractal
STS [50]. However, a single scaling is not always effective in describing
the different types of irregularity in the time series. In this context,
multifractality is investigated for the STS [50]. In this chapter, we
discuss the methods of fluctuation analysis, fractality and multifractality
in STS with various numerical examples.
A summarized plan of the discussion is given in Fig. 5.1. In Section 5.2,
basic concepts of scaling laws and of monofractal and multifractal STS are
discussed. As two different fluctuation theories were developed for
stationary and non-stationary STS respectively, we first discuss the method
of verifying the stationarity and non-stationarity of a time series, in
Section 5.3. The next discussion, given in Section 5.4, is about fluctuation
analysis for monofractal STS. For stationary monofractal STS, the
auto-correlation coefficient method (Section 5.4.1), spectral analysis
(Section 5.4.2), the method of the Hurst exponent (Section 5.4.3) and
fluctuation analysis (Section 5.4.4) can be applied to find the underlying
scaling law. On the other hand, non-stationary monofractal STS can be
analyzed by wavelet analysis, detrended fluctuation analysis (Section 5.4.5),
the detrended moving average technique, the centred moving average, etc.
In practice, detrended fluctuation analysis is generally used to characterize
monofractal STS, owing to drawbacks and the huge computational cost of the
other methods. Section 5.5 discusses the wavelet transform modulus maxima
method and multifractal detrended fluctuation analysis, which characterize
a multifractal STS. In the last section, we summarize the chapter and
discuss the future scope of all the analyses.

5.2 Concept of scaling law, monofractal and multifractal time series
Figure 5.1: A flow chart representing the plan of discussion of the remaining topics. The chart starts from the upper left corner (the rectangle) and ends at the bottom left corner (the square). Each geometrical figure signifies a different section, and the arrows indicate the direction of progression through the successive sections.

A collection of random variables {X_t}, t ∈ T, where X_t is defined as
a map from a probability space (S, P, Ω) to a measurable space M, is
known as a stochastic or random process. The index set T is generally
considered as the time domain. We get a discrete or a continuous
stochastic process according as T = N or T = R respectively. For example,
let us consider a random variable X : S → R (S being the sample space of
throwing two coins) defined by

X(\omega_i) = \begin{cases} \text{number of heads}, & \text{if } \omega_i \in S \\ 0, & \text{otherwise}. \end{cases}

If S is defined by {(H,H) = ω₁, (H,T) = ω₂, (T,H) = ω₃, (T,T) = ω₄},
then the values of X are given by X(ω₁) = 2, X(ω₂) = 1, X(ω₃) = 1,
X(ω₄) = 0. Then the probability P of the event 0 ≤ X(ω_i) ≤ 1, denoted
by P(0 ≤ X ≤ 1) or P(X ≤ 1), equals P(X ≤ 1) = 3/4.
For any stochastic process, each random variable yields a set of
observations at different times t ∈ T. These are known as state spaces
of the stochastic process.

Definition 5.1 For a random variable X, a set of observations {x(k)}_{k=1}^N
(sometimes denoted by {x}) obtained at successive times t = nk, n ∈ N, is
known as a stochastic time series (STS) of the corresponding process.

For fixed n, the time series is known as regularly sampled; otherwise
we call it irregularly sampled. In this chapter, we deal only with
regularly sampled time series.

Definition 5.2 For a given time series {x}, a measure µ is said to obey a
scaling law with exponent α if µ(cs) = c^α µ(s) holds, where c is a constant
and s represents the scale parameter.

For example, consider a time series with a measure µ that satisfies
the relation µ(s) = as^α. Then µ(cs) = a(cs)^α = c^α µ(s); thus µ obeys
the scaling law with exponent α.
It can be seen that the measure is independent of scaling up to the
desired order. So the long-term behaviour of the time series can be
characterized by a statistical measure that obeys a scaling law. Hence,
the long-term dynamics of an STS can be described by scaling laws of some
statistical measures which are valid over a wide range of time or frequency
scales. The parameter α is called the self-similarity parameter. According
to a single or multiple non-integer scaling exponents, a time series
is characterized as a monofractal or a multifractal process respectively.

Definition 5.3 A stochastic process is said to be a fractal process if the
corresponding time series {x} obeys the following conditions:
(a) it satisfies a scaling law,
(b) the scaling exponent is non-integer.

Sometimes, it has been observed that a single scaling exponent is not
sufficient to describe the stochastic process. In that case, the time series
needs to be characterized by multiple scaling exponents and the
underlying processes are called multifractal processes.

Definition 5.4 A stochastic process is said to be a multifractal process if
the corresponding time series {x} obeys the following conditions:
(a) it satisfies a scaling law with a number of different scaling exponents,
(b) the scaling exponents are non-integer.

In order to find the fractal scaling law, stationarity and
non-stationarity need to be investigated for a given time series.
Stationarity, as well as non-stationarity, of a time series can be verified
by the auto-correlation method, which is basically a correlation of a time
series with itself. So, the next section is devoted to a discussion of the
method of verifying the stationarity and non-stationarity of a given STS.

5.3 Stationary and non-stationary time series

A stationary time series is one whose distribution changes nowhere in any
time span [24]. A time series {x(k)}_{k=1}^N with probability p(x(k))
is said to be stationary if and only if p(x(k)) = p(x(k+τ)), where
τ (∈ Z) represents a translation or lag of the time index k. In practical
situations, equality between p(x(k)) and p(x(k+τ)) is not always possible
to establish for a given time series {x(k)}_{k=1}^N. It is an ideal case
that can be discussed in the theory of stationary time series, often called
strict (or strong) stationarity. To overcome this limitation,
an alternative criterion is proposed, known as weak stationarity.

Definition 5.5 A time series {x(k)}_{k=1}^N is said to be weakly stationary if
\langle x\rangle = \langle x(k+\tau)\rangle and \sigma_{x(k)} = \sigma_{x(k+\tau)}, where

\langle x\rangle = \frac{1}{N}\sum_{k=1}^{N} x(k), \qquad \sigma_{x(k)} = \sqrt{\frac{1}{N}\sum_{k=1}^{N} \{x(k) - \langle x\rangle\}^2},   (5.1)

τ (∈ Z) being the time-delay.

Based on parametric and non-parametric techniques, two methods
are generally used to test stationarity. The parametric method is
generally used in the time domain, for instance by economists, who make
certain assumptions about the nature of financial data [50]. On
the other hand, the non-parametric method is mainly developed in
the frequency domain, for instance by engineers, who often consider the
system as a black box. In fact, the non-parametric method makes no
assumption about the system. Moreover, a normal distribution of the data
is not needed in the non-parametric method. Though the non-parametric
method is widely applicable from a statistical point of view, it reveals
less powerful results than the parametric method. The parametric method
is mainly based on the auto-covariance or auto-correlation method. We now
define the auto-covariance and auto-correlation of an STS of length N.

Definition 5.6 Let {x(k)}_{k=1}^N be a given time series. Then the
auto-covariance of {x(k)}_{k=1}^N with delay τ is denoted by ACOV(τ) and
defined as

ACOV(\tau) = \frac{1}{N}\sum_{k=1}^{N-\tau} (x(k) - \langle x\rangle)(x(k+\tau) - \langle x(k+\tau)\rangle),   (5.2)

where τ = 0, 1, 2, 3, . . ..

Similarly, we can define ACOV(−τ) by

ACOV(-\tau) = \frac{1}{N}\sum_{k=1-\tau}^{N} (x(k) - \langle x\rangle)(x(k+\tau) - \langle x(k+\tau)\rangle),   (5.3)

where τ = −1, −2, −3, . . ..

Definition 5.7 For a time series {x(k)}_{k=1}^N, the auto-correlations AC(τ)
and AC(−τ) are defined by

AC(\tau) = \frac{ACOV(\tau)}{\sigma_{x(k)}\,\sigma_{x(k+\tau)}}, \quad \tau = 0, 1, 2, 3, \ldots   (5.4a)

AC(-\tau) = \frac{ACOV(-\tau)}{\sigma_{x(k)}\,\sigma_{x(k+\tau)}}, \quad \tau = -1, -2, -3, \ldots   (5.4b)

From Definition 5.7, it can be verified that a time series is stationary
if and only if AC(τ) = AC(−τ); otherwise the time series is called
non-stationary. A numerical illustration is given in Example 5.1
to show the applicability of the auto-correlation measure in verifying the
stationarity and non-stationarity of an STS.
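Before turning to the example, a Python sketch of Definition 5.7 may be helpful. The function name and the exact pairing of the two windows below are our own reading of Eqs. (5.2)-(5.4), not code from the authors.

    import numpy as np

    def ac(x, tau):
        """Auto-correlation AC(tau) in the spirit of Eqs. (5.2)-(5.4);
        tau may be negative."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        if tau >= 0:
            a, b = x[:N - tau], x[tau:]      # x(k) and x(k + tau)
        else:
            a, b = x[-tau:], x[:N + tau]
        cov = np.sum((a - a.mean()) * (b - b.mean())) / N   # Eq. (5.2)/(5.3)
        return cov / (a.std() * b.std())                    # Eq. (5.4)

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)         # white noise: stationary
    y = np.cumsum(x)                 # random walk: non-stationary
    print(ac(x, 50))                 # ~0: correlations die out quickly
    print(ac(y, 50))                 # stays large: the distribution drifts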

Example 5.1 First consider two different time series x and y, each of
length 500, as given in Fig. 5.2. From Fig. 5.2a, only one mode of variation
can be observed in the time series x. It indicates that the corresponding
probability distributions are the same in each time window that covers at
least one vibration. So x is weakly stationary. On the other hand, variable
oscillation can be observed in y over different time intervals. In fact, the
oscillation in y for the time interval I₁ = (250, 500] is faster than the same

Figure 5.2: (a), (b) represent a stationary time series {x} and a non-stationary time series {y} respectively. In each case, we have considered the same length L = 500. (c) and (d) represent the respective auto-correlation curves with τ ∈ [−1000, 1000]. The dotted lines indicate the auto-correlations at τ = 0.

in I2 = [0, 250] (see Fig. 5.2b). It implies that the respective oscillating
patterns in I1 and I2 are always differs. As no common oscillation ex-
ists there, corresponding probability distribution will change over time and
hence non-stationarity indicates in {y}.

Self-similarity can be observed in both stationary and non-stationary
STS, so monofractal and multifractal scaling laws can be investigated
for stationary and non-stationary time series alike. However, fluctuations
need to be defined separately in each of the cases. The next sections
discuss various methods of fluctuation analysis for monofractal STS.

5.4 Fluctuation analysis on monofractal stationary and non-stationary time series

5.4.1 Autocorrelation function

For a given STS {x}, the autocorrelation function of lag τ was already
described in (5.4). Now, if the data is uncorrelated (e.g., the steps of a
random walk), AC(τ) = 0 for τ > 0. If the data has short-term correlations,
then AC(τ) decays exponentially, as exp(−τ/t_c), where t_c is the
characteristic decay time. Moreover, for long-range correlations AC(τ)
satisfies the condition AC(τ) ∼ τ^{−γ} (0 < γ < 1). The exponent γ is known
as the autocorrelation exponent. In practice, the value of γ can be calculated
by measuring the gradient of a straight line fitted to the log AC(τ) vs.
log τ plot. However, a direct calculation of AC(τ) on a non-stationary
time series always reveals incorrect results, due to continuous changes
in the average ⟨x⟩. Further, the values of AC(τ) fluctuate strongly around
zero for large τ, which makes it impossible to find the correct γ [69]. A
numerical illustration of the effectiveness of AC(τ) is given at the
end of Section 5.4.

5.4.2 Fourier based spectrum analysis

For a given continuous STS {x}, the Fourier spectrum S(f) (f represents
the frequency of the STS) is defined by

S(f) = \|X(f)\|^2,   (5.5)

where X(f) is the Fourier transform of {x}, given by

X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i f t}\, dt.   (5.6)

The discrete power spectrum can also be defined via the discrete Fourier
transform.
For a given stationary time series, the scaling behaviour of the time
series can be observed by measuring the fluctuation of S(f) over a long
range of f. To do this, the slope of a straight line fitted to the log S(f)
vs. log f plot is calculated. The slope is equivalent to β, where β satisfies
S(f) ∼ f^{−β}. Further, it has been established that β is related to the
autocorrelation exponent γ by β = 1 − γ. So uncorrelated, short-term
as well as long-term correlations of an STS can also be characterized by
spectrum analysis.
We now discuss the efficiency of spectrum analysis in Example 5.2.

Example 5.2 First consider two different STS, white noise ({x}) and
pink noise ({y}), shown in Fig. 5.3a and b respectively. From Fig.
5.3a, it can be observed that the fluctuation in the corresponding STS is
completely random. On the other hand, a less random fluctuation can be seen
in Fig. 5.3b. Then calculate S(f) using (5.5). To compute the power β for
each STS, we have drawn log S(f) vs. log f plots. The corresponding plots
are given in Fig. 5.3c and d respectively. By fitting straight lines to the
mean trend of the plots, the slopes of the respective lines are found to be
0 and 1 respectively.

Note
In order to apply fluctuation analysis, one of the major tasks is to classify
the noisy and random-walk nature of the given STS. The method of spectrum
analysis (described in Section 5.4.2) can be used to identify these natures.
Figure 5.4 shows some STS with β ∈ [0, 2]. An STS with β ∈ [0, 1] is known
as a noisy time series. On the other hand, a random walk can be identified by
observing a value of β in (1, 2]. Moreover, β = 0 and 1 indicate white and
pink noise respectively.

However, this analysis does not reveal better results until a logarithmic
binning procedure is applied to the double-logarithmic plot of S(f).
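A minimal periodogram-based estimate of β, assuming the conventions of Eq. (5.5), is sketched below in Python (our own illustration; the logarithmic binning step mentioned above would make it more robust).

    import numpy as np

    def spectral_beta(x):
        """Estimate beta in S(f) ~ f^(-beta) from the periodogram."""
        x = np.asarray(x, dtype=float)
        X = np.fft.rfft(x - x.mean())        # discrete Fourier transform
        S = np.abs(X)**2                     # power spectrum, Eq. (5.5)
        f = np.fft.rfftfreq(len(x))
        keep = f > 0                         # drop the zero frequency
        slope, _ = np.polyfit(np.log(f[keep]), np.log(S[keep]), 1)
        return -slope                        # beta is minus the slope

    rng = np.random.default_rng(1)
    print(spectral_beta(rng.normal(size=2000)))   # ~0 for white noise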

5.4.3 Hurst exponent

The method of the Hurst exponent was proposed by the British hydrologist
H.E. Hurst while he was working in Egypt on the Nile River Dam
Project [3]. He observed that the time axis and the axis of a statistical
quantity are not equivalent for an observation or time series. Therefore,
rescaling of time is necessary for adjusting the time series. He
rescaled the time scale k ∈ N by a factor a and the corresponding

Figure 5.3: (a), (b) represent white noise ({x}) and pink noise ({y}) respectively. In each case, the length of the noise is N = 2000. (c) and (d) represent the corresponding log S(f) vs. log f plots. The solid (pink) lines indicate the straight lines fitted to the log S(f) vs. log f plots.

Figure 5.4: Power noise signals 1/f^β (from white noise at β = 0, through pink noise at β = 1, to brown noise at β = 2) are represented for β = 0 (gray), 0.5 (blue), 1 (pink), 1.25 (red), 1.5 (brown).

time series {x(k)}_{k=1}^N by a^H x(ak) (i.e., a^H x(ak) → x(k)), which
actually reflects the statistical self-similarity (self-affinity) of the
time series. Then

he computed the exponent H based on a statistical quantity applied
to the rescaled time series, which is known as the Hurst exponent
(HE). The steps for calculating the HE from a given STS are as follows:

Step-1: For a given {x(k)}_{k=1}^N, N_s (N_s = [N/s]) non-overlapping segments
A_ν (ν = 1, 2, . . . , N_s) of size s are defined by

A_1 = \{x(1), x(2), \ldots, x(s)\}
A_2 = \{x(s+1), x(s+2), \ldots, x(2s)\}
\vdots
A_{N_s} = \{x((N_s-1)s+1), x((N_s-1)s+2), \ldots, x(N_s s)\}.   (5.7)

Step-2: For each segment A_ν (ν = 1, 2, . . . , N_s), the mean ⟨x_ν⟩ and
deviation S_ν are calculated by

\langle x_\nu\rangle = \frac{1}{s}\sum_{k=1}^{s} x((\nu-1)s+k), \qquad S_\nu = \sqrt{\frac{1}{s}\sum_{k=1}^{s} \{x((\nu-1)s+k) - \langle x_\nu\rangle\}^2}.   (5.8)

Step-3: Next, define two quantities, the profile X_ν^j and the range
R_ν(s), by

X_\nu^j = \sum_{k=1}^{j} \{x_{(\nu-1)s+k} - \langle x_\nu\rangle\},   (5.9)

R_\nu(s) = \max_j X_\nu^j - \min_j X_\nu^j, \quad j = 1, \ldots, s.   (5.10)

Step-4: The rescaled range is then obtained by averaging to get the
fluctuation function F_RS(s), where

F_{RS}(s) = \frac{1}{N_s}\sum_{\nu=1}^{N_s} \frac{R_\nu(s)}{S_\nu(s)}.   (5.11)

It can be verified that F_RS(s) obeys the scaling law F_RS(s) ∼ s^H,
where H is known as the HE. In practice, H is calculated by fitting a
straight line to the log F_RS(s) vs. log s plot; the slope of the fitted
straight line gives the value of H. Example 5.3 illustrates the calculation
of H for three types of power noise.

Example 5.3 Power noise is a kind of stochastic time series whose power
spectrum S(f) (f being the frequency of the time series) obeys the law
S(f) ∼ f^{−β}, where β ∈ R⁺ ∪ {0}. For our purpose, we consider β = 0, 0.5
and 1. To calculate H, we have investigated the log F_RS(s) vs. log s graphs
for the respective time series. The corresponding plots are shown in Fig.
5.5a, b and c respectively. From the figures, it can be seen that the
slopes of the straight lines fitted to the corresponding plots are found to
be 0.4959, 0.7495 and 0.8302 respectively. These correspond to the values of
H of the respective power noises. It is noted that the values of H for β = 0
and 0.5 (given in Fig. 5.5a and b) are almost equal to their standard
theoretical values. For β = 1, the value of H is 0.8302, which is far from
its theoretical value (see Fig. 5.5c). However, non-stationarity is one of
the major reasons for this incoherence.

Figure 5.5: (a), (b) and (c) represent the log₁₀ F_RS(s) vs. log₁₀ s plots for β = 0, 0.5 and 1 respectively (the fitted slopes give H = 0.4959, 0.7495 and 0.8302). For each plot, the length of the noise is 2000. In order to find the HE, straight lines are fitted on the linear region of the log₁₀ F_RS(s) vs. log₁₀ s plots. In the computation, the scales are chosen as 10^s, where s = 0, 1, 2, 3, 4.
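Steps 1-4 translate directly into code. The following Python sketch (ours; unoptimized, with our own function name) estimates H by rescaled-range analysis:

    import numpy as np

    def hurst_rs(x, scales):
        """Rescaled-range estimate of H, following Steps 1-4 above."""
        x = np.asarray(x, dtype=float)
        F = []
        for s in scales:
            Ns = len(x) // s
            ratios = []
            for v in range(Ns):                       # segments of Eq. (5.7)
                seg = x[v * s:(v + 1) * s]
                prof = np.cumsum(seg - seg.mean())    # profile, Eq. (5.9)
                R = prof.max() - prof.min()           # range, Eq. (5.10)
                S = seg.std()                         # deviation, Eq. (5.8)
                if S > 0:
                    ratios.append(R / S)
            F.append(np.mean(ratios))                 # F_RS(s), Eq. (5.11)
        H, _ = np.polyfit(np.log(scales), np.log(F), 1)   # F_RS(s) ~ s^H
        return H

    rng = np.random.default_rng(2)
    print(hurst_rs(rng.normal(size=2000), [16, 32, 64, 128, 256]))  # ~0.5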

Another measure based on random-walk theory, known as fluctuation
analysis (FA), was also proposed to find the scaling law of the fluctuations
of an STS. The method of FA is discussed in the following section.

5.4.4 Fluctuation analysis (FA)

In this method, a time series {x(k)}_{k=1}^N with zero mean is considered
(if the mean is not zero, it is subtracted first). In order to find the
fluctuation of the series {x(k)}, the following steps are performed:

Step-1: A profile y(j) is constructed by

y(j) = \sum_{k=1}^{j} x(k) \quad \text{for } j = 1, 2, \ldots, N.   (5.12)

The profile y(j) is nothing but the position of a random walker on a linear
chain after time step j.

Step-2: Define N_s (= [N/s]) segments A_ν (ν = 1, 2, . . . , N_s) as given in
Eq. (5.7).

Step-3: In each A_ν, the square fluctuation of the profile y(j) is calculated
by

F^2(\nu, s) = [y((\nu-1)s+1) - y(\nu s)]^2, \quad (\nu = 1, 2, \ldots, N_s).   (5.13)

Step-4: Then the mean fluctuation of {F²(ν, s)} (ν = 1, 2, . . . , N_s) is
calculated by

F^2(s) = \frac{1}{N_s}\sum_{\nu=1}^{N_s} F^2(\nu, s).   (5.14)

From Eq. (5.14) it is seen that F(s) is the root-mean-square
displacement of the random walker {y(j)} at the scale s. For a given
time series having long-term correlations, the corresponding fluctuation
F(s) increases with some power of s, say s^α, so we can write F(s) ∼ s^α.
That is, the fluctuation {F²(s)}^{1/2} follows the scaling law
{F²(s)}^{1/2} ∼ s^α. Since α ≈ H holds for a monofractal time series,
2α = 1 + β. In FA, the values of α are always taken in (0, 1); at α = 0, 1,
significant inaccuracies have been observed in FA results. Also, it can be
seen that FA is reliable only when the scale s does not exceed N/10.
To overcome the limitations of the FA method, detrended fluctuation
analysis (DFA) was proposed by Peng et al. The next section describes
the DFA method with numerical illustrations.

5.4.5 Detrended fluctuation analysis

The detrended fluctuation method can characterize the long-term correlations
of a non-stationary signal. In DFA, the detrending technique is
applied to the linear trend observed in the FA. As the detrending is applied
within the FA scheme, DFA likewise counts the variance
of the random walk of the given time series. To calculate the detrended
fluctuation of a time series {x(k)}_{k=1}^N, we need to perform the following
steps:

Step-1: A profile X(j) (j = 1, 2, . . . , N) is constructed from {x(k)}_{k=1}^N
by

X(j) = \sum_{k=1}^{j} \big[x(k) - \langle x\rangle\big],   (5.15)

where \langle x\rangle = \frac{1}{N}\sum_{k=1}^{N} x(k) denotes the mean of {x(k)}.

Step-2: Define N_s (= [N/s]) non-overlapping segments A_ν of length
s, the same as given in Eq. (5.7).

Step-3: For each segment, a polynomial trend X_{ν,s}(j) is fitted by the
least-squares method and subtracted from the corresponding original profile.
The resultant profile X̃_s(j) is calculated by X̃_s(j) = X(j) − X_{ν,s}(j)
[see Example 5.4].

Step-4: The detrended variance is defined by

F^2(s) = \frac{1}{sN_s}\sum_{\nu=1}^{N_s}\ \sum_{j=(\nu-1)s+1}^{\nu s} [\tilde{X}_s(j)]^2.   (5.16)

If long-term correlations exist in {x(k)}_{k=1}^N, then the fluctuation
F(s) increases with s according to a power law, i.e., F(s) ∼ s^α. The
exponent α is called the scaling exponent, and it is calculated by measuring
the slope of a straight line fitted to the log F(s) vs. log s plot.
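A compact Python sketch of the DFA procedure (our own minimal version, with first-order detrending by default) is:

    import numpy as np

    def dfa(x, scales, order=1):
        """Detrended fluctuation analysis; returns alpha from F(s) ~ s^alpha."""
        x = np.asarray(x, dtype=float)
        X = np.cumsum(x - x.mean())                   # profile, Eq. (5.15)
        F = []
        for s in scales:
            Ns = len(X) // s
            var = []
            for v in range(Ns):
                seg = X[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                var.append(np.mean((seg - trend)**2)) # detrended variance
            F.append(np.sqrt(np.mean(var)))           # F(s), Eq. (5.16)
        alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
        return alpha

    rng = np.random.default_rng(3)
    print(dfa(rng.normal(size=4000), [16, 32, 64, 128, 256]))   # ~0.5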

Note
Before calculating the values of α, we discuss the detrending technique
(described in Step-3). To do this, we choose 1/f-noise and construct a
profile X(j) by (5.15). Figure 5.6a and b show the same profile X(j) for the
1/f-noise. Then we divide the profile into some non-overlapping intervals of
the same length. In Fig. 5.6a, we have considered straight lines for fitting
the profile X(j) in each of the intervals. On the other hand, quadratic
polynomials are fitted to X(j) in Fig. 5.6b. From both figures, it can be
observed that the best fitting is obtained in Fig. 5.6b. It indicates that
DFA cannot reveal an appropriate result until the detrending shows minimum
error.

To show the effect of DFA on monofractal analysis, a numerical


investigation is done on three power noises. The discussion is given in
Example 5.4.

Figure 5.6: (a), (b) represent the profile X(j) (j = 1, 2, . . . , 8000) (green) for a 1/f-noise of length 8000. In both figures, the dotted (red) lines represent the trend X_ν(s, j). In (a), X_ν(s, j) is a straight line for each ν; in (b), X_ν(s, j) is a quadratic polynomial for each ν. The vertical (black) lines indicate the non-overlapping segments.

Example 5.4 Considering the quadratic trend, we have computed α
for each β = 0, 0.5, 1 using (5.16). Figure 5.7 shows the respective values
of α. It can be observed that α ≈ 0.5, 0.75 and 1 for the aforesaid noises.
These values also correspond to the respective HEs. Moreover, it can be
verified that α = 0.5, α > 0.5 and α < 0.5 indicate white noise, persistent
and anti-persistent behaviour respectively. For α = 1 and 1.5, the signals
look like 1/f-noise and Brownian motion. So α behaves equivalently to the
HE. This numerical illustration indicates that DFA is very effective in
finding the scaling behaviour of both stationary and non-stationary STS.

However, the supremacy of DFA cannot be established at this stage. To
do this, a comparative study between DFA and the remaining measures
for monofractal STS is needed. As DFA is an extension of FA, we only
compare the method of DFA with the two independent methods, the
autocorrelation function and Fourier based spectrum analysis. For this
purpose, we have considered three types of power noise, 1/f⁰, 1/f^{0.5} and
1/f respectively. Figure 5.8a shows the auto-correlations of the respective
noises. It can be observed from the figure that all the fluctuations
have a similar trend with respect to the scale s. On the other hand, the
corresponding trends of the log S(f) vs. log f plots are all equivalent for
the three respective power noises (see Fig. 5.8b). So, the behaviour of the
noise cannot always be distinguished by the method of auto-correlation or
the power spectral density, even when stationarity exists. Furthermore, we
investigate the values of α for the aforesaid noises by

Figure 5.7: (a), (b) and (c) represent the log F(s) vs. log s plots for β = 0, 0.5 and 1 respectively; the fitted slopes give α = 0.4882, 0.7486 and 0.9962. For each plot, the length of the noise is 2000. Straight lines are fitted on the linear region of the log F(s) vs. log s plots.

DFA. The corresponding results are given in Fig. 5.8c. From Fig. 5.8c,
it can be observed that the values of α always differ for the different
noises. So, the autocorrelation method and power spectral analysis do
not always describe the true characteristics of the signals. Thus, it is
better to find the nature of a signal by DFA.

Figure 5.8: Comparison of the three methods for the 1/f⁰-, 1/f^{0.5}- and 1/f-noise: (a) the auto-correlation curves, (b) the log S(f) vs. log f plots and (c) the log F(s) vs. log s plots obtained by DFA. For each plot, the same length (L = 2000) of the time series is considered. Straight lines are fitted on the linear trend of the log F(s) vs. log s plots, and their gradients are taken as the values of α. In each case, the scales are chosen as s = [16, 32, 64, 128, 256, 512, 1024].

Note
In DFA, the detrended profile X̃(j) changes abruptly at the boundaries of the
segments, since the polynomials fitted in neighbouring segments are sometimes
uncorrelated. To adjust the abrupt jumps of X̃(j), the windows (in which the
polynomials are fitted) can be taken mutually overlapping. It has been noted
that this approach takes too much computation time. Due to this drawback,
people have tried to modify the DFA method, and the Backward Moving Average
(BMA) technique, the Centred Moving Average (CMA) method, Modified Detrended
Fluctuation Analysis (MDFA), Fourier DFA, empirical mode decomposition,
singular value decomposition, and DFA based on high-pass filtering have been
developed [50].

Sometimes a single scaling law is not able to identify the proper
self-similar structure of the STS. In this case, the fluctuations need to
be studied at different scales, and the concept of multifractality is thus
proposed. In the following section, we discuss multifractal STS.

5.5 Fluctuation analysis on stationary and non-stationary multifractal time series

To investigate the scaling law of a multifractal stationary STS, the standard
partition-function multifractal formalism, based on generalized fluctuation
analysis, was first proposed. Unfortunately, it fails to reflect the
underlying scaling behaviour of non-stationary STS. This deficiency
led to the development of the wavelet transform modulus maxima
(WTMM) method, based on the wavelet transform.

5.5.1 Wavelet transform modulus maxima

For a given continuous STS {x(t)}, the wavelet transform is generally
defined by

L_\psi(\tau, s) = \frac{1}{s}\int_{-\infty}^{\infty} x(t)\,\psi\Big(\frac{t-\tau}{s}\Big)\, dt,   (5.17)

where \psi(t) represents the mother wavelet, from which all the sub-wavelets
\psi_{\tau,s}(t) = \psi\big(\frac{t-\tau}{s}\big) can be calculated.
Similarly, the discrete wavelet transform for an STS {x(k)}_{k=1}^N can be
defined as

L_\psi(\tau, s) = \frac{1}{s}\sum_{k=1}^{N} x(k)\,\psi\Big(\frac{k-\tau}{s}\Big).   (5.18)

The wavelet coefficient L_\psi(\tau, s) depends on both the time position \tau and
the frequency scale s. Hence, the local fluctuations of the STS can be
described in both time and frequency resolution.
In the WTMM method, the multifractal scaling coefficient is calculated
as follows:

Step-1: Find the positions \tau_j at which L_\psi(\tau, s) satisfies the condition

|L_\psi(\tau_j - 1, s)| \le |L_\psi(\tau_j, s)| \ge |L_\psi(\tau_j + 1, s)|,

where j = 1, 2, \ldots, j_{max}. This condition gives the local maxima of
L_\psi(\tau, s) over \tau, which are known as the modulus maxima. The reason for
considering the modulus is that the wavelet coefficients oscillate in sign,
while the strength of the local fluctuation is carried by their magnitude.

Step-2: Then the q-th order fluctuation Z(q, s) is defined by

Z(q, s) = \sum_{j=1}^{j_{max}} |L_\psi(\tau_j, s)|^q.   (5.19)

For increasing s, it can be observed that Z(q, s) obeys the law
Z(q, s) \sim s^{\tau(q)}, where \tau(q) is defined through

Z_q(s) = \sum_{\nu=1}^{N_s} |X(\nu, s)|^q \sim s^{\tau(q)} \quad \text{for } s > 0.   (5.20)

The quantity X(\nu, s) is calculated by X(\nu, s) = \sum_{i=1}^{s} x(\nu s + i) for
\nu = 1, 2, \ldots, N_s, where N_s = [N/s] (N being the length of the given STS
{x(k)}_{k=1}^N).

Step-3: For each q, \tau(q) is then estimated by fitting a straight line
to the log Z(q, s) vs. log s plot.
The quantity \tau(q) is known as the scaling exponent, and it characterizes
the multifractal properties of the STS. In this method the process
of obtaining the multiscaling exponent is very laborious and takes large
computational loops. To overcome this situation, the method of multifractal
detrended fluctuation analysis (MF-DFA) was developed. With
MF-DFA, fast and reliable results can be obtained compared to the WTMM
method.
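For completeness, a rough Python sketch of the WTMM partition sum is given below (ours; it builds the discrete transform of Eq. (5.18) by brute force, with a Mexican-hat mother wavelet as one common, but not unique, choice of ψ).

    import numpy as np

    def ricker(t):
        # Mexican-hat mother wavelet psi(t)
        return (1.0 - t**2) * np.exp(-t**2 / 2.0)

    def wtmm_Z(x, s, q):
        """Partition sum Z(q, s) of Eq. (5.19) at a single scale s."""
        x = np.asarray(x, dtype=float)
        k = np.arange(len(x))
        # L_psi(tau, s) of Eq. (5.18) for every position tau
        L = np.array([np.sum(x * ricker((k - tau) / s)) / s
                      for tau in range(len(x))])
        m = np.abs(L)
        # modulus maxima: local maxima of |L_psi| over tau (Step-1)
        j = np.where((m[1:-1] >= m[:-2]) & (m[1:-1] >= m[2:]))[0] + 1
        return np.sum(m[j]**q)

    # tau(q) then follows from the slope of log Z(q, s) vs. log s (Step-3)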

5.5.2 Multifractal detrended fluctuation analysis

Multifractal detrended fluctuation analysis (MF-DFA) is basically a
combination of DFA and the generalized Hurst exponent (GHE).
For a given stochastic time series {x(k)}_{k=1}^N, the MF-DFA algorithm is
described as follows:

Step-1: Calculate the mean ⟨x⟩ of the time series {x(k)}_{k=1}^N, where

\langle x\rangle = \frac{1}{N}\sum_{k=1}^{N} x(k).

Check the value of ⟨x⟩. If ⟨x⟩ = 0, then go to Step-2; if ⟨x⟩ ≠ 0, then
standardize the data to make its mean zero.
Let us consider the general case, i.e., ⟨x⟩ ≠ 0, with standardized
data {y(k)}_{k=1}^N satisfying ⟨y⟩ = 0.

Step-2: Construct a profile X(k) from the resultant {y(k)}_{k=1}^N by

X(k) = \sum_{i=1}^{k} \big[y(i) - \langle y\rangle\big] \quad (k = 1, 2, \ldots, N).   (5.21)

Step-3: Define N_s non-overlapping segments A_ν (ν = 1, 2, . . . , N_s) of
length s, as given in (5.7).

Step-4: For each segment A_ν, fit a local trend X_{ν,s}(k) (a linear or
higher-order polynomial) to X(k) and subtract it from X(k). Then calculate
the detrended variance F²(ν, s) by

F^2(\nu, s) = \frac{1}{s}\sum_{k=(\nu-1)s+1}^{\nu s} \big[X(k) - X_{\nu,s}(k)\big]^2 \quad (\nu = 1, 2, \ldots, N_s).   (5.22)

Step-5: Then the q-th order fluctuation function F_q is calculated by

F_q(s) = \Big\{\frac{1}{N_s}\sum_{\nu=1}^{N_s}\big[F^2(\nu, s)\big]^{q/2}\Big\}^{1/q},   (5.23)

where q can take any real value except zero. With the above definition,
the 0-th order fluctuation would reveal divergent exponents; instead, a
logarithmic averaging approach gives us

F_0(s) = \exp\Big\{\frac{1}{2N_s}\sum_{\nu=1}^{N_s}\log\big[F^2(\nu, s)\big]\Big\} \sim s^{h(0)}.   (5.24)

Step-6: Continuing the above process with increasing s, a relation between
F_q(s) and s can be obtained as F_q(s) ∼ s^{h(q)} for long-term

STS. The exponent h(q) is called the scaling exponent or the GHE. The value
of h(q) is calculated from the slope of the linear regression of the
log F_q(s) vs. log s plot.
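The full MF-DFA loop is a small extension of the DFA sketch above; the version below (again our own minimal illustration) returns h(q) for a list of q values, using Eq. (5.24) for q = 0.

    import numpy as np

    def mfdfa_h(x, scales, qs, order=1):
        """Generalized Hurst exponents h(q) by MF-DFA (a sketch)."""
        x = np.asarray(x, dtype=float)
        X = np.cumsum(x - x.mean())                    # profile, Eq. (5.21)
        logF = np.zeros((len(qs), len(scales)))
        for j, s in enumerate(scales):
            Ns = len(X) // s
            F2 = np.empty(Ns)
            for v in range(Ns):                        # Eq. (5.22)
                seg = X[v * s:(v + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)
                F2[v] = np.mean((seg - trend)**2)
            for i, q in enumerate(qs):
                if q == 0:                             # Eq. (5.24)
                    Fq = np.exp(0.5 * np.mean(np.log(F2)))
                else:                                  # Eq. (5.23)
                    Fq = np.mean(F2**(q / 2.0))**(1.0 / q)
                logF[i, j] = np.log(Fq)
        # h(q): slope of log Fq(s) against log s for each q
        return [np.polyfit(np.log(scales), logF[i], 1)[0]
                for i in range(len(qs))]

    rng = np.random.default_rng(4)
    # for monofractal white noise, h(q) stays near 0.5 for all q
    print(mfdfa_h(rng.normal(size=8000), [16, 32, 64, 128, 256], [-5, 0, 5]))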

Note
For positive and negative values of q, h(q) describes large- and small-scale
fluctuations respectively. If a time series is monofractal, then h(q) is
independent of q; in this case, the scaling behaviour of the variances F_q(s)
is identical for all segments. On the other hand, it has been observed that
h(q) varies with q for a multifractal process, which indicates non-identical
scaling of F_q(s). So, the monofractality and multifractality of a time
series can be identified using the GHE. The computation of the GHE needs
appropriate values of q. If we set q > 0, then the segments A_ν with greater
fluctuations (segments with relatively high F²(ν, s)) get a larger weight in
F_q(s) than the segments with relatively smaller fluctuations; the opposite
holds for q < 0. The next important parameter is the degree of the polynomial
which is fitted to detrend the fluctuations. Since MF-DFA is suitable for
non-stationary processes, MF-DFA can be conducted for different polynomials
to decide the best data fit. It has been observed that over-fitting leads to
values of F_q(s) close to zero for small values of s. As far as the selection
of the scale is concerned, the smallest scale needs to contain enough
elements so that the computed local fluctuation is reliable; in most studies,
the minimum scale is taken between 10 and 20. The maximum scale must ensure
enough segments for the computation of the fluctuation function; most studies
have been done with s not exceeding N/10.

The multifractal nature can also be investigated by utilizing the
concept of the Hölder exponent and the Legendre transform together with the
GHE [81]. It has been seen that the exponents τ(q) and the GHE h(q) are
related by τ(q) = qh(q) − 1. A linear trend in the τ(q) vs. q plot indicates
monofractality of the corresponding time series; on the other hand, a
nonlinear trend in the same plot assures the existence of multifractality.
Example 5.5 illustrates the efficiency of τ(q) in testing monofractal and
multifractal STS.

Example 5.5 We consider a monofractal and a multifractal STS, shown in
Fig. 5.9a and b respectively. Using (5.24), we first calculate
h(q) for both STS. Then, by τ(q) = qh(q) − 1, we obtain τ(q)
for q ∈ [−5, 5]. Figure 5.9c and d show the corresponding τ(q) vs. q curves
for the monofractal and the multifractal series respectively. From the
figures, linear and nonlinear trends can be observed in the respective τ(q)
vs. q curves. As linear and nonlinear trends correspond to monofractality
and multifractality

Figure 5.9: (a) and (b) represent a monofractal and a multifractal STS respectively, each of length 1000. (c) and (d) represent the corresponding τ(q) vs. q curves for q ∈ [−5, 5], with τ(5) = 2.5116, h(−5) = 0.75417, h(0) = 0.72715 for the monofractal series, and τ(5) = 1.5868, h(−5) = 1.4477, h(0) = 0.98634 for the multifractal one.

respectively, it indicates that the GHE can characterize the underlying
fractal nature of an STS.

Furthermore, it has also been established that the singularity spectrum
f(α) is related to τ(q) by the relations

\alpha(q) = \frac{d\tau(q)}{dq}, \qquad f(\alpha(q)) = q\,\alpha(q) - \tau(q),   (5.25)

where α(q) is the Lipschitz-Hölder exponent.
where α(q ) is the Lipschitz-Holder exponent.
We now discuss the effectiveness of f(α) in characterizing two different
STS having the same statistical properties. Example 5.6 gives the
corresponding numerical explanation.

Example 5.6 We first consider two power noises, 1/f^{1.50} and 1/f^{1.75},
shown in Fig. 5.10a and b respectively. From the figures, it can be seen that
both STS are statistically equivalent in nature. Using (5.25), we
calculate the respective f(α)s. The corresponding f(α) vs. α curves are shown in

Figure 5.10: (a) and (b) represent the power noises 1/f^β with β = 1.50 and 1.75 respectively. Each time series is of length 3000. (c) represents the corresponding multifractal spectra, for 1/f^{1.50} (solid) and 1/f^{1.75} (dotted). In each computation, the scales are chosen as 2^N, N = 3, 4, . . . , 15 (since the degree of the fitted polynomial is taken as 1), and q is taken in [−5, 5].

Fig. 5.10c. From the figure, completely different multifractal spectra can
be observed, which indicates a separate multifractal structure in each of
the respective STS.

The singularity spectrum of a monofractal signal is represented by a
single point in the f(α) plane, whereas a multifractal process reveals
a single-humped function. Multifractal analysis can describe the complexity
level of a fractal process by quantifying its spectrum. The quantification
method was first proposed by Shimizu et al. In this method,
the spectrum f(α) is fitted by a regression curve:

f(\alpha) = a + b(\alpha - \alpha_0) + c(\alpha - \alpha_0)^2,   (5.26)

where α₀ is the position of the maximum of f(α) and a, b, c are regression
coefficients. A smaller value of α₀ corresponds to a more correlated and
regular process. The coefficient b is known as the asymmetry parameter; a
zero value of b indicates a symmetric shape, while for b > 0 and b < 0
the respective α vs. f(α) curves are right and left skewed. Basically,
the dominance of high exponents reveals the left-skewed nature of f(α),
and it indicates a fine structure of the process. On the other hand, the
dominance of low exponents indicates a smoother or more regular structure.
Further, the width of the spectrum, defined by W = α_max − α_min,
measures the degree of multifractality; a wide range indicates the
rich structure of the process. α_max and α_min, known as the strongest and

weakest Hölder exponents, are calculated by setting f(α_max) = 0 and
f(α_min) = 0. In some cases, a quadratic polynomial does not fit the
multifractal spectrum f(α); in that case, f(α) is approximated by

f(\alpha) = a_1 + a_2(\alpha - \alpha_0) + a_3(\alpha - \alpha_0)^2 + a_4(\alpha - \alpha_0)^3 + a_5(\alpha - \alpha_0)^4.   (5.27)

To quantify the regression parameters, apart from α₀ and W, the symmetry
parameter is calculated in a different way. In fact, the symmetry parameter
r is defined by

r = \frac{\alpha_{max} - \alpha_0}{\alpha_0 - \alpha_{min}}.

It has been observed that the time series possesses a symmetric, right-skewed
or left-skewed multifractal spectrum for r = 1, r > 1 and
r < 1 respectively.
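Given a computed spectrum, these descriptors are one-liners. A small Python sketch (ours, taking the endpoints of the computed curve as proxies for α_min and α_max, which would strictly be the roots of f(α) = 0):

    import numpy as np

    def quantify_spectrum(alpha, f):
        """Position alpha0, width W and symmetry r of an f(alpha) curve."""
        i0 = np.argmax(f)
        alpha0 = alpha[i0]                      # position of the maximum
        W = alpha.max() - alpha.min()           # width: degree of multifractality
        r = (alpha.max() - alpha0) / (alpha0 - alpha.min())  # symmetry parameter
        return alpha0, W, r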
By applying the Legendre transform, the dimension of the fractal process,
known as the generalized fractal dimension, can also be calculated.
For a stochastic process with exponents τ(q), the generalized fractal
dimension is defined by D_q = \frac{\tau(q)}{q-1}. The spectrum of the q vs.
D_q curve also quantifies the deeper structure of the stochastic process.

5.6 Discussion

This chapter has highlighted the statistical self-similarity of a univariate
time series obtained from stochastic systems. For an unknown system,
it is therefore the primary task to check the stochastic nature of
the corresponding time series. Several methods have been proposed to
identify the stochasticity of a time series [59, 45]; among them, the DVV
method [60] is the most robust and effective one. An STS can be obtained
in two forms, stationary and non-stationary. A stationary STS
shows an invariant probability distribution over every time interval; on
the contrary, a variable probability distribution is observed in a
non-stationary STS. We have numerically investigated the stationary as well
as the non-stationary nature of time series by the auto-correlation method,
and the results show the effectiveness of this method.
The self-similar nature has been explained for both stationary and
non-stationary stochastic time series. In both cases, self-similarity can
be characterized by the theory of fluctuations in the STS. In order to
define the fluctuation, different types of statistical measures have been
proposed. In fact, these measures correspond to scaling laws which
characterize the underlying monofractal and multifractal structure of the

STS. Whereas a monofractal STS is identified by a single scaling law, a
different scaling law is needed to characterize a multifractal STS.
For a monofractal STS, different methods−the auto-correlation function, Fourier-based spectrum analysis, the Hurst exponent and fluctuation analysis−are applied to find the scaling law. However, these are not able to identify the proper law for a non-stationary STS; in this context, the detrended fluctuation analysis is used. On the other hand, multifractality of an STS can be measured by the wavelet transform modulus maxima and Generalized Hurst exponent (GHE) methods. Due to the computational inefficiency of the wavelet-based method, we generally apply GHE-based fluctuation analysis. It has been seen that GHE can successfully differentiate monofractal and multifractal STS. Moreover, the underlying multifractal structure can be described by the multifractal spectrum, which is also computed from the GHE, and two STS which show similar behaviour in time can be further distinguished using a polynomial-fit technique.
Thus, both monofractal and multifractal stochastic processes can be analyzed successfully by fluctuation theory, and the measure of statistical fluctuation can be used for the long-term prediction of the dynamics of any STS. This indicates the applicability of monofractality and multifractality in the field of stochastic signal analysis.
Chapter 6

Application in Image
Processing

6.1 Introduction
Signal processing is a broad branch of electrical engineering. This area studies different signals such as image signals, video signals, voice/sound signals, transmission signals, etc., and their properties. Image processing is a sub-area of signal processing; it focuses on the behavior and characteristics of image signals. Based on the nature of the signal, images fall into two categories: analog images and digital images.
An image that is manipulated using analog (electrical) signals is called an 'Analog Image', whereas an image composed of discrete signals obtained by sampling and quantization is called a 'Digital Image'.

6.1.1 Digital image


A digital image is a two-dimensional discrete function f (x, y ), where x and y are spatial coordinates and the amplitude of f at (x, y ) gives the intensity. Mathematically, a digital image (in R2 space) is represented in matrix form, where the x and y coordinate values are mapped to the rows and columns of the matrix and the amplitude values are assigned to the cells of the matrix. A digital image of dimension M × N is represented as an M × N matrix. For each i = 1, 2, . . . , M and j = 1, 2, . . . , N , (i, j ) denotes the location of a picture element, or pixel. The mapping f : M × N −→ G, where G = {0, 1, 2, . . . , l − 1}, is called the image function or amplitude; the amplitude values are finite and discrete. The value of f at any pair of spatial coordinates (i, j ) describes the intensity or gray level of the image at (i, j ).

      ⎡ f0,0       f0,1      · · ·   f0,N −1    ⎤
      ⎢ f1,0       f1,1      · · ·   f1,N −1    ⎥
I =   ⎢   ⋮          ⋮        ⋱        ⋮        ⎥
      ⎣ fM −1,0   fM −1,1    · · ·   fM −1,N −1 ⎦

The number of gray levels l is 2^k , where k ∈ N; k is the bit level. The dynamic range of the intensity of a k-bit image is between 0 and 2^k − 1. For example, an 8-bit image has 256 gray levels and its intensity values lie between 0 and 255. The number of bits required to store a digital image is M × N × k.
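As a small, hedged illustration of this representation (NumPy and the names below are assumptions of the example, not part of the text):

```python
import numpy as np

M, N, k = 256, 256, 8                        # rows, columns, bits per pixel
rng = np.random.default_rng(0)
image = rng.integers(0, 2**k, size=(M, N), dtype=np.uint8)  # gray levels 0..255

print(image.min(), image.max())              # intensities lie in [0, 2**k - 1]
print(M * N * k, "bits required to store the image")   # storage M x N x k
```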

Figure 6.1: 8-bit cameraman image at various sizes: (a) 128 × 128, (b) 256 × 256, (c) 512 × 512.

6.1.2 Digital image processing


The procedure of processing digital images by digital computers is called Digital Image Processing (DIP). DIP finds applications in a wide range of domains such as forensics, medicine, remote sensing, the manufacturing industry and consumer electronics. The main functionalities of DIP are image quality improvement using image restoration and image enhancement, extraction of image attributes/features using image segmentation and object recognition, and reduction of the storage size of the image.
The fundamental steps of DIP are image acquisition, image enhancement, image restoration, image compression, image segmentation and object recognition (see for more details [56, 93, 63]).

• Image Acquisition is the process of creating/synthesizing the digital image using different sensors and modalities. It also involves preprocessing methods such as scaling and shrinking of the digital image.

• Image Enhancement aims to improve the quality of the image for a specific application. It increases the brightness and contrast of the image and also dehazes it.

• Image Restoration is a vital preprocessing step in DIP. It mainly recovers the digital image from noise and blurring and restores it to its original form.

• Noise creeps into the image from various sources such as atmospheric conditions, faulty sensors, transmission errors, etc. Noise filters are used to suppress the noise in the image.

• Image Compression is the process of reducing the storage size of the digital image by different compression methods. It resolves the storage issues related to the image.

• Image Segmentation is the method used to partition the digital image into its regions or objects with respect to the objective of the process. Object Recognition is the process that assigns features/labels to an object based on its descriptors.

This chapter investigates image segmentation through multifractal methods. Further, for the extraction of the mid-sagittal plane from the brain's magnetic resonance image, this chapter investigates the possibility of using generalized fractal dimensions to measure the asymmetry between the hemispheres that partition the human brain.

6.1.3 Image segmentation


Image segmentation is the division of an image into regions or categories with respect to the objective of the process. The segmentation is carried out according to similarities or dissimilarities of intensities in the region of interest, and it is an application-specific/task-specific process.
Image segmentation finds applications in domains such as defense, medicine, remote sensing and industry. In defense, image segmentation is employed for identifying intruders at the border. In industry, it helps to automate the manufacturing process by detecting defects in products during their making. In medicine, image segmentation algorithms are applied in a wide range of automated tools, assisting physicians in the diagnosis of diseases such as different types of cancer, blockages in blood vessels, etc.
Segmentation adopts three general approaches, namely thresholding, edge-based methods and region-based methods. In thresholding, pixels are categorized according to a fixed value or values and the image is partitioned based on the threshold values. In edge-based segmentation, the image is segmented using different edge filters or edge-detection operators such as Canny, Laplace, Sobel and Roberts. Region-based segmentation algorithms operate iteratively by grouping together neighbouring pixels that have similar values and splitting groups of pixels that are dissimilar in value.
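A rough sketch of the first two approaches follows (assuming NumPy; a plain gradient magnitude stands in for the Canny/Sobel operators named above):

```python
import numpy as np

def threshold_segment(image, t):
    """Thresholding: label pixels above t as foreground (1)."""
    return (image > t).astype(np.uint8)

def edge_map(image, t_edge):
    """Edge-based: mark pixels whose gradient magnitude exceeds t_edge."""
    gy, gx = np.gradient(image.astype(float))
    return (np.hypot(gx, gy) > t_edge).astype(np.uint8)
```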

6.2 Generalized fractal dimensions


Fractal dimension is insufficient to characterize an object of interest having complex and inhomogeneous scaling properties, since different irregular structures may have the same fractal dimension. Generalized fractal dimensions thus give more information about the space-filling properties than the fractal dimension alone. Let us review the generalized fractal dimensions, beginning with the fractal dimension itself.

6.2.1 Monofractal dimensions


Suppose that K is a subset of Rn . The topological dimension of K , denoted dimT K , is inductively defined as follows:
1. dimT ∅ = −1,
2. the topological dimension of K at a point p ∈ K is less than or equal to n, written as dimpT K ≤ n, if there exist arbitrarily small neighborhoods of p whose boundaries have topological dimension at most n − 1,
3. K has topological dimension at most n if it has topological dimension at most n at each of its points p:

dimT K ≤ n ⇐⇒ dimpT K ≤ n for all p ∈ K.


In addition, dimpT K = ∞ if condition (2) does not hold for any n ∈ N, and dimT K = ∞ if condition (3) does not hold for any n ∈ N.
If U is any non-empty subset of n-dimensional Euclidean space Rn , the diameter of U is defined as |U | = sup{|x − y| : x, y ∈ U }, i.e., the greatest distance apart of any pair of points in U . If {Ui } is a countable (or finite) collection of sets of diameter at most δ that cover K , i.e., K ⊂ ∪∞i=1 Ui with 0 < |Ui | ≤ δ for each i, we say that {Ui } is a δ-cover of K .
Suppose that K is a subset of Rn and s is a non-negative number. For any δ > 0 we define

Hδs (K ) = inf { Σ∞i=1 |Ui |s : {Ui } is a δ-cover of K } .

As δ decreases, the class of permissible covers of K is reduced. Therefore the infimum Hδs (K ) increases, and so approaches a limit as δ → 0. Thus,

Hs (K ) = limδ→0 Hδs (K ).

This limit exists for any subset K of Rn , though the limiting value can be 0 or ∞. We call Hs (K ) the s-dimensional Hausdorff measure of K . Then the Hausdorff dimension, or Hausdorff-Besicovitch dimension, of K is defined as

dimH (K ) = inf {s : Hs (K ) = 0} = sup {s : Hs (K ) = ∞} ,

so that

Hs (K ) = ∞ if s < dimH (K ), and Hs (K ) = 0 if s > dimH (K ).

If s = dimH (K ), then Hs (K ) may be zero or infinite, or may satisfy 0 < Hs (K ) < ∞.
Hausdorff dimension has the advantage of being defined for any set, and it is mathematically convenient, as it is based on measures, which are relatively easy to manipulate. A main disadvantage is that the explicit computation of the Hausdorff dimension of a given set K is rather difficult, since it involves taking the infimum over covers consisting of balls of radius less than or equal to a given ε > 0. A slight simplification is obtained by considering only covers by balls of radius equal to ε. This gives rise to the concept of box dimension.
Let K ∈ K (X ) and let N (K, ε) denote the smallest number of closed balls of radius ε > 0 required to cover K . If

DB = limε→0 [ ln(N (K, ε)) / ln(1/ε) ]        (6.1)

exists, then DB is called the box dimension or fractal dimension of K .
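A minimal box-counting sketch of Eq. (6.1) for a binary set (a hypothetical helper, assuming `mask` is a 2-D boolean array marking the set K ):

```python
import numpy as np

def box_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Slope of ln N(eps) versus ln(1/eps) over the given box sizes."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s          # crop to a multiple of s
        w = (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```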

6.2.2 Box dimension of image


The box dimension of an image is estimated through the scaling law

N (ε) ∝ 1/εDB ,   i.e.,   N (ε) = c ε−DB ,        (6.2)

as ε → 0, where N (ε) denotes the number of boxes of size ε required to cover the entire intensity surface of the image and c is a constant. In practice the image contains only discrete and finite data, so the limit ε → 0 cannot be taken; to overcome this difficulty, the fractal dimension of a given image is estimated as the slope of the best-fitting line through the points (− ln ε, ln N (ε)) for various values of ε.
Let us see the procedure to calculate N (ε):

N (ε) = ΣMk=1 k P (k, ε),        (6.3)

where M is the total number of pixels of the image and P (k, ε) denotes the probability that there are k points within a box of size ε, centred about an arbitrary point of the image. The probability P (k, ε) can be estimated by the relation

P (k, ε) = n(k, ε)/Nr ,        (6.4)

where Nr is the number of randomly chosen reference points in the image and n(k, ε) denotes the number of boxes of size ε, centred around each reference point, that contain k points of the image.
Differential Box Counting (DBC): Consider an image of size M × M . The domain of the image is partitioned into grids of size r × r. On each grid there is a column of boxes of size r × r × h, where h is the height of a single box. If the total number of gray levels is G then G/h = M/r. The boxes are numbered sequentially 1, 2, . . .. Let the minimum and maximum gray levels of the image in the (i, j )th grid fall in box numbers p and q , respectively. Then nr (i, j ) = q − p + 1 is the contribution of the (i, j )th grid to N (ε). Taking contributions from all grids, we have

N (ε) = Σi,j nr (i, j ).

Because of the differential nature of computing nr (i, j ), the method is called the differential box counting (DBC) approach. Calculating N (ε) in this manner gives a better approximation to the boxes intersecting the image intensity surface, especially when there are sharp gray-level variations in neighbouring pixels.
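A hedged Python sketch of this count (the function name and loop structure are illustrative; a square gray image `I` with G gray levels is assumed):

```python
import numpy as np

def dbc_count(I, r):
    """N(eps) for grid side r: sum of n_r(i, j) = q - p + 1 over all grids."""
    M = I.shape[0]
    G = int(I.max()) + 1
    h = G * r / M                      # box height, from G/h = M/r
    N = 0
    for i in range(0, M - M % r, r):
        for j in range(0, M - M % r, r):
            block = I[i:i + r, j:j + r]
            p = int(block.min() // h)  # box holding the minimum gray level
            q = int(block.max() // h)  # box holding the maximum gray level
            N += q - p + 1             # n_r(i, j)
    return N
```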
Relative Differential Box Counting (RDBC): A modification of the DBC, called the relative differential box-counting (RDBC) method, was proposed. According to this method, N (ε) is obtained by

N (ε) = Σi,j ⌈k dr (i, j )/r⌉,

where dr (i, j ) denotes the difference between the maximum and the minimum gray levels of the image in the grid (i, j ), k = M/G and ⌈x⌉ denotes the ceiling function of x, i.e., the smallest integer greater than or equal to x.
Correlation Algorithm: A very popular way to compute the dimension is to use the correlation algorithm, which estimates the dimension from the statistics of pairwise distances. According to this algorithm the dimension is defined as

ν = limr→0 ln C (r) / ln r,

where C (r) is the correlation integral given by

C (r) = (number of distances less than r) / (number of distances altogether).

The correlation algorithm provides a particularly elegant formulation and has the substantial advantage that the function C (r) can be estimated even for r as small as the minimum interpoint distance. For an image with M pixels, C (M, r) has a dynamic range of O(M 2 ); logarithmically speaking, this range is twice that available in the box-counting method (see [46, 72, 84]).
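A minimal sketch of this estimator (assuming the pixel coordinates of interest are collected in an array `points`):

```python
import numpy as np

def correlation_dimension(points, radii):
    """Slope of ln C(r) vs ln r; C(r) = fraction of pair distances below r."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(points), k=1)        # each pair counted once
    pair_dist = dist[iu]
    C = np.array([np.mean(pair_dist < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope
```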

6.2.3 Multifractal dimension


Alfréd Rényi introduced a measure to quantify the uncertainty or randomness of a given system; it plays a vital role in information theory. Given probabilities pi with ΣNi=1 pi = 1, the Rényi entropy of order q is given by

REq = (1/(1 − q)) ln ΣNi=1 pqi ,

where q ≥ 0 and q ≠ 1. At q = 1 the value of REq is undefined, as the expression yields an indeterminate form; otherwise REq is a decreasing function of q .
If q → 1, then REq → RE1 , which is defined by

RE1 = − ΣNi=1 pi ln pi .

RE1 is called the Shannon entropy [34, 43]. The Rényi fractal dimensions, or Generalized Fractal Dimensions (GFD), of order q ∈ (−∞, ∞) are defined, in terms of the generalized Rényi entropy, as

Dq = limr→0 (1/(q − 1)) [ ln ΣNi=1 pqi ] / ln r,        (6.5)

where pi is the probability distribution. As q −→ 1, Dq converges to D1 , which is given by

D1 = limr→0 [ ΣNi=1 pi ln pi ] / ln r,        (6.6)

where D1 is the information dimension. Dq is a monotonically decreasing function of q , so that D0 ≥ D1 ≥ D2 ; here D0 and D2 denote the fractal dimension and the correlation dimension respectively.
Multifractal Spectra: The multifractal spectrum f (α(q )) is defined as

f (α(q )) = limr→0 [ Σi µi (q, r) ln(µi (q, r)) ] / ln r,        (6.7)

α(q ) = limr→0 [ Σi µi (q, r) ln(Pi (r)) ] / ln r,        (6.8)

with the sums running over i = 1, . . . , N (r), where the normalized measures µi are defined, in terms of the probabilities Pi , as

µi (q, r) = [Pi (r)]q / Σi [Pi (r)]q .        (6.9)

Here N (r) is the number of boxes required to cover the object with box size r and Pi is the probability of the ith box of size r. Generally Pi is defined as

Pi = (area of ith part) / (total area).
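A minimal numerical sketch of Eqs. (6.7)–(6.9) at a single box size r (the division by ln r and the limit r → 0 are taken as slopes over several r in practice; the function name is illustrative):

```python
import numpy as np

def spectrum_numerators(P, q):
    """Numerators of Eqs. (6.7) and (6.8) from box probabilities P at one r."""
    P = P[P > 0]
    mu = P**q / np.sum(P**q)            # normalized measures, Eq. (6.9)
    f_num = np.sum(mu * np.log(mu))     # numerator of f(alpha(q)), Eq. (6.7)
    a_num = np.sum(mu * np.log(P))      # numerator of alpha(q),    Eq. (6.8)
    return f_num, a_num
```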
Some properties of the multifractal spectrum are as follows:
• The spectrum f (α) is concave;
• f (α) has a single inflection point at q = 0;
• At q = 0, f achieves its maximum, and f (α(0)) = D0 gives the fractal dimension;
• In the case of monofractal analysis the spectrum f (α) reduces to a single point.

Figure 6.2: Standard 8-bit images used for segmentation: (a) Cameraman, (b) Lena, (c) Mandrill, (d) Peppers, (e) Pirate.

Table 6.1: Generalized fractal dimensions of standard images.

Image D0 D1 D2
Cameraman 1.1959 1.1881 1.1855
Lena 1.2434 1.2398 1.2379
Mandrill 1.2489 1.2480 1.2471
Peppers 1.2465 1.2433 1.2407
Pirate 1.2412 1.2341 1.2294

6.3 Image thresholding


In thresholding, pixels are categorized according to a fixed value (the threshold value) and the image is divided based on this value. In this section, we partition the image using the multifractal dimension as the threshold measure.

6.3.1 Multifractal dimensions: A threshold measure


After the sampling and quantization process, an image can be represented in matrix form. Let M, N be finite subsets of the natural numbers and let G = {0, 1, 2, . . . , 2^k − 1} be the set of integers denoting the gray levels of a k-bit image. Then an M × N dimensional image can be defined as a function f : M × N −→ G by f (x, y ) = i, i ∈ G. Assume that t ∈ G is an optimal thresholding value and B = {0, 1} is the set of binary gray levels in G. Then the thresholded image can be defined as a mapping T : M × N −→ B such that

T (x, y ) = 0 if f (x, y ) ≤ t, and T (x, y ) = 1 if f (x, y ) > t.

A gray-scale image can be described in terms of a mass distribution: we regard the total intensity as a finite mass scattered over the whole image, so that white areas have high density and black areas have low density. The threshold value t is determined from the gray levels of the pixels through the following steps.

Step 1: Read the input image.

Step 2: Let N be the number of boxes of size r required to cover the image.

Step 3: The probability pi for the ith box of size r in the image is defined as

pi = Xi /X,

where Xi is the intensity value of the image in the corresponding ith box of size r and X is the total intensity of the image.

Step 4: Estimate the value of N .

Step 5: Fix q and calculate Dq as defined in Eq. (6.5) for various r → 0.

Step 6: Repeating Step 5 for various q ∈ (−150, 150), we find Dq for each intensity level of the given input image using Eq. (6.5).

Step 7: Find the median value of the Dq 's and fix the corresponding intensity level as the optimal threshold, t = med(Dq ).

Step 8: Based on the threshold value, the image is partitioned into foreground and background regions. The binary image B (x, y ) is generated from the original image f (x, y ) as

B (x, y ) = 0 if f (x, y ) ≤ t, and B (x, y ) = 1 if f (x, y ) > t.

Step 9: Mask the input image by the generated binary image.
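A condensed Python sketch of Steps 1–9 (an illustration under stated assumptions, not the authors' implementation: box probabilities come from intensity mass as in Step 3, only moderate |q| values are used to avoid numerical overflow, and the mapping of med(Dq ) back to a gray level in Step 7 is left schematic):

```python
import numpy as np

def gfd(img, q, sizes=(2, 4, 8, 16, 32)):
    """D_q of Eq. (6.5): slope of ln(sum p_i^q)/(q-1) against ln r."""
    X = img.sum()
    ys = []
    for s in sizes:
        h = (img.shape[0] // s) * s
        w = (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
        p = blocks[blocks > 0] / X          # p_i = X_i / X, Step 3
        ys.append(np.log(np.sum(p**q)) / (q - 1.0))
    slope, _ = np.polyfit(np.log(sizes), ys, 1)
    return slope

def gfd_threshold(img, qs=range(-20, 21)):
    """Steps 5-7: D_q over a range of q, then t from the median D_q."""
    Dq = np.array([gfd(img.astype(float), q) for q in qs if q != 1])
    return np.median(Dq)                    # Step 7: t = med(D_q)
```

Here the q-range is kept moderate for numerical stability; the text's q ∈ (−150, 150) would need the same computation at higher precision, and the binary mask of Step 8 is simply `img > t` once the intensity level tied to med(Dq ) is fixed.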

Figure 6.3: Generalized fractal dimensions plot of standard images.



6.4 Performance analysis


6.4.1 Evaluation measure for quantitative analysis
The region nonuniformity measure characterizes the inherent quality of the segmented region. If f (x, y ) is the given gray-scale image, the region nonuniformity measure RN U is defined as

RN U = ( |F G| × V ar(F G) ) / ( |F G + BG| × V ar(f ) ),

where F G denotes the foreground image pixels, BG the background image pixels, V ar(f ) the variance of the whole image, V ar(F G) the variance of the foreground region of the given image f (x, y ), and | · | the cardinality of the given set. Well-segmented images have an RN U value close to 0.
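A direct sketch of this metric (assuming NumPy arrays `img` for the image and a binary `mask` for the segmented foreground):

```python
import numpy as np

def region_nonuniformity(img, mask):
    """RNU = |FG| Var(FG) / (|FG + BG| Var(f)); values near 0 are better."""
    fg = img[mask == 1].astype(float)
    return (fg.size * fg.var()) / (img.size * img.astype(float).var())
```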

6.4.2 Human visual perception


The proposed method uses the generalized fractal dimensions as a thresholding measure to segment gray-scale images. Let us discuss the result of the proposed method for foreground extraction. As mentioned in Chapter 2, one way to define a fractal is as a set whose non-integer Hausdorff dimension exceeds its topological dimension. According to this definition, choosing q = 0 in Eq. (6.5) provides the fractal dimension D0 . Table 6.1 lists the fractal dimension D0 of the standard images given in Fig. 6.2; it lies between 1 and 2 and thus exceeds the topological dimension of a gray-scale image, which is 1, as such images are considered as objects in the Euclidean plane. Further, the generalized fractal dimensions of the standard images are depicted in Fig. 6.3, where the q values range from −100 to 100. The threshold values of the standard images obtained by the proposed method and by the Otsu method are given in Table 6.2 and Table 6.3 respectively, together with the region nonuniformity of the two methods. The foreground images extracted by the proposed method are depicted in Fig. 6.4, and the foregrounds of the standard images obtained through the Otsu method are given in Fig. 6.5.
The objective of the proposed algorithm is to develop a precise technique to find the optimal threshold value and thereby extract the foreground of a given gray-scale image. Moreover, the developed method has to provide good performance on irregular gray-level distributions. In order to analyze the efficiency of the proposed method, it has been compared with a state-of-the-art method, namely the Otsu method.

Table 6.2: Threshold value and Region nonuniformity of standard images.

Image Threshold value Region nonuniformity


Cameraman 137 0.0305
Lena 133 0.1378
Mandrill 130 0.1271
Peppers 122 0.0973
Pirate 131 0.1098

Table 6.3: Threshold value and Region nonuniformity by the Otsu method.

Image Threshold value Region nonuniformity


Cameraman 86 0.0466
Lena 116 0.1374
Mandrill 126 0.1372
Peppers 106 0.1267
Pirate 115 0.1078

Figure 6.4: Segmented images by the multifractal method: (a) Cameraman, (b) Lena, (c) Mandrill, (d) Peppers, (e) Pirate.


Figure 6.5: Segmented images by the Otsu method: (a) Cameraman, (b) Lena, (c) Mandrill, (d) Peppers, (e) Pirate.

The performance of the developed algorithm on the standard images is evaluated by a quantitative metric, namely region nonuniformity. These values are recorded in Table 6.2 and Table 6.3, which exhibit the comparison between the foregrounds extracted by the proposed method and by the Otsu method. It is evident that the multifractal-based method gives region nonuniformity values significantly closer to 0, and hence better, than the Otsu method's values.
For the human visual perception analysis, the foregrounds of the standard image dataset extracted by the developed method and by the Otsu method are depicted in Fig. 6.4 and Fig. 6.5. This comparison illustrates the superior performance of the proposed method for extraction of the foreground from the specified image dataset.

6.5 Medical image processing


Medical image diagnosis is an emerging field dealing with high-technology and modern instrumentation that plays a vital role in diagnosis, analysis and treatment. One of the precise techniques in the diagnostic process is the segmentation of medical images. The aim of medical image segmentation is to extract the region of interest (RoI), in order to study the intrinsic anatomical structure and thereby estimate the severity of abnormalities, which helps in the prognosis of the disease. This situation gives rise to the development of efficient computing techniques for segmentation and analysis. Brain magnetic resonance image segmentation is an essential step in classifying the anatomical areas of interest for diagnosis, treatment and medical planning of tumours, infarcts and strokes.
The human brain is partitioned into two hemispheres, left and right, by a tissue layer called the inter-hemispheric fissure (IF), which is filled with cerebrospinal fluid (CSF). This partition is described by the Mid-sagittal Plane (MSP) along the IF, which is generally estimated as a plane passing between the hemispheres. The development of algorithms for the automated detection of the MSP has wide applications in the analysis of brain tumours through Computer Assisted Diagnosis (CAD). Although brain tissues have complex structures, fractal analysis assumes that the object of interest can be described by a single fractal dimension alone; multifractal analysis, in contrast, provides more information about the space-filling properties than the fractal dimension D0 . In order to overcome this limitation, the proposed algorithm aims to detect and segment the MSP precisely from a brain image by exploring the potential of generalized fractal dimensions along with the multifractal spectrum.
Many algorithms for the detection of the MSP have been reported in the literature, most of which concentrate on image intensity in order to detect the symmetry plane [51, 78, 79, 82]. Unfortunately, most of these methods do not analyze the image slices by considering the texture of the image. The human brain possesses a complex geometric structure which cannot be described through Euclidean geometry alone. As brain tissues possess self-similar structures, fractal-based analysis is well suited to detecting the MSP from a brain image. Fractal dimension analyzes the irregularity of an object with homogeneous scaling properties, and the concept of fractal dimension can be applied to the measurement and categorization of shape and texture. In fact, only a handful of works have used fractal analysis for the detection of the MSP. Jayasuriya et al. [82] proposed a method for identifying the symmetry plane in three-dimensional brain MRI, based on fractal dimension and lacunarity. However, the fractal dimension is insufficient to characterize an object of interest having complex and inhomogeneous scaling properties, since different irregular structures may have the same fractal dimension. Lacunarity contributes significantly to the description of an object with a known fractal dimension: it describes the empty space around the object, and thus it relates to how the object fills the
space. Since, as noted above, a single fractal dimension cannot describe such complex and inhomogeneous structures, multifractal analysis provides more information about the space-filling properties than the fractal dimension D0 (see for details [14, 15, 16, 84, 89, 96]). In order to overcome this limitation, the present work aims to detect and segment the MSP precisely from a brain image by exploring the potential of generalized fractal dimensions along with the multifractal spectrum.

6.6 Mid-sagittal plane detection


The mid-sagittal plane divides the brain into two similar hemispheres, the right and the left. Each hemisphere should have similar Generalized Fractal Dimensions (GFD); hence the difference between the GFD of the left hemisphere and the GFD of the right hemisphere should be minimal at the MSP. The present method therefore defines a symmetry measure, in terms of GFD, as

γ = |GF DR − GF DL | / GF DW ,        (6.10)

where GF DR , GF DL and GF DW denote the generalized fractal dimensions of the right hemisphere, the left hemisphere and the whole image of the MRI respectively. The MSP is optimal when γ tends to zero.
For an input MRI I , the proposed method initially calculates GF DW for the whole image I . Based on the intensity levels, the extracted RoI is partitioned into left and right regions, for which the GFD is calculated accordingly as (GF DL , GF DR ). The symmetry measure is then calculated from GF DL , GF DR and GF DW . Considering the optimal intensity given by the minimum symmetry measure, the sagittal plane is found; the multifractal spectrum is then applied on the candidate plane s to detect the final mid-sagittal plane from the input MRI. The step-by-step algorithm for the above methodology is given below.

—————————————————————————————–
Algorithm: Mid-sagittal Plane Detection from MRI
—————————————————————————————–
Phase I: Computation of Generalized Fractal Dimension of MRI
—————————————————————————————–
Step 1: Read a brain MRI I .

Step 2: Let N be the number of boxes of size r required to cover the image I .

Step 3: The probability Pi for the ith box of size r in the image is defined as

Pi = Xi /X,

where Xi is the intensity value of the image in the corresponding ith box of size r and X = ΣNi=1 Xi .

Step 4: Fixing q and varying r, compute Dq using Eq. (6.5):

Begin
  for i : 1 −→ r
    for j : 1 −→ r
      mass(i, j ) = sum(sum(I (i ∗ N − (N − 1) : i ∗ N, j ∗ N − (N − 1) : j ∗ N )));  % intensity mass in box (i, j )
      p(i, j ) = mass(i, j )/X ;                      % box probability Pi
      if q ≠ 1
        pq (i, j ) = [p(i, j )]^q ;                   % Pi^q , for Eq. (6.5)
      else
        pq (i, j ) = p(i, j ) × ln p(i, j );          % Pi ln Pi , for Eq. (6.6)
  pi ← sum(sum(pq ));                                 % sum over all boxes
  if q ≠ 1
    D(q ) ← (1/(q − 1)) × ln(pi )/ ln(r);             % Eq. (6.5)
  else
    D(q ) ← pi / ln(r);                               % Eq. (6.6)
End

Step 5: Repeat Step 4 for various q ∈ (−∞, ∞) to acquire GF DW , the GFD of I .

—————————————————————————————
Phase II: Detection of Mid-sagittal Plane
—————————————————————————————
Step 6: Extract the initial sagittal region ∆ from the image I by removing/subtracting the extreme 20% of the region on both sides.

Step 7: Based on each intensity of ∆, partition ∆ into a left part and a right part.

Step 8: Estimate the GFD of the left and right parts of ∆ to acquire GF DL and GF DR respectively.

Step 9: Estimate the symmetry measure γ = |GF DL − GF DR | / GF DW .

Step 10: Estimate c∗ , the optimal intensity c for which γ is minimum.

Step 11: Consider the RoI centered on c∗ , and select the sagittal plane s within the RoI centered on [c∗ ± δ ], where δ is 0.5 mm.

Step 12: Apply the multifractal spectrum on s using Eq. (6.7) and Eq. (6.8) to acquire IM F .

Step 13: Compute the average multifractal spectrum value λs of IM F as

Begin
  for each q and for i : 1 −→ N
    µi (q, r) = Pi^q / Σj=1..N Pj^q ;                        % normalized measures, Eq. (6.9)
    I1 = ln( Σi=1..N Pi^q );
    τ (q ) ← slope of the linear fit of I1 against ln r;     % mass exponent
    I2 = Σi=1..N µi × ln Pi ;
    α(q ) ← slope of the linear fit of I2 against ln r;      % Eq. (6.8)
    I3 = Σi=1..N µi × ln µi ;
    f (α(q )) ← slope of the linear fit of I3 against ln r;  % Eq. (6.7)
End

compute λs ← Σq∈s f (α(q )) / Σq∈s q .

Step 14: Find the optimal sagittal plane s0 that gives the ∆γ value for γ over the RoI of the MRI planes, where ∆γ = γmax − γmin .

Step 15: Output s0 , the MSP.

Step 16: Stop.

—————————————————————————————–
In the developed algorithm, Phase I elucidates the computational procedure of the generalized fractal dimensions for the input MRI. In order to obtain an accurate estimation of the similarity between the left and right hemispheres, the symmetry measure is defined in terms of GFD as illustrated in Step 9; hence the proposed algorithm performs well even on asymmetrical or pathological images.
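To make Phases I–II concrete, the following Python sketch sweeps candidate vertical planes across a coronal slice and returns the column minimizing γ of Eq. (6.10). It reuses the gfd() helper sketched in the thresholding example, stands in a small set of q values for the full GFD spectrum, and reads the intensity-based partition of Step 7 as a sweep over candidate plane positions; all of these are assumptions of the example, not the authors' code.

```python
import numpy as np

def msp_column(img, qs=(-3, -2, 2, 3)):
    """Column c* of the candidate mid-sagittal plane minimizing gamma."""
    gfd_w = np.mean([gfd(img, q) for q in qs])      # GFD of the whole image
    cols = img.shape[1]
    lo, hi = int(0.2 * cols), int(0.8 * cols)       # Step 6: drop extreme 20%
    best_c, best_gamma = lo, np.inf
    for c in range(lo, hi):
        gfd_l = np.mean([gfd(img[:, :c], q) for q in qs])   # left of plane
        gfd_r = np.mean([gfd(img[:, c:], q) for q in qs])   # right of plane
        gamma = abs(gfd_r - gfd_l) / gfd_w                  # Eq. (6.10)
        if gamma < best_gamma:
            best_c, best_gamma = c, gamma
    return best_c, best_gamma
```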

6.6.1 Description of experimental MRI data


The MRI datasets are taken from the Centre for Morphometric Analysis at Massachusetts General Hospital, available at http://www.cma.mgh.harvard.edu/ibsr/ [76], and from BrainWeb at http://www.bic.mni.mcgill.ca/brainweb/ [77]. In order to evaluate the performance of the developed method, normal and pathological MRI samples are used, as listed in Table 6.4. Further, simulated data from BrainWeb were used to assess the accuracy of the proposed method by comparison with the ground-truth line. The data volumes have been simulated using T1- and T2-weighted sequences with slice thicknesses of 3 mm, 5 mm and 7 mm, intensity non-uniformities (INU) of 20% and 40%, and noise levels of 5%, 7% and 9%. These datasets are shown in the coronal orthogonal view in Fig. 6.6 and Fig. 6.7.

6.6.2 Performance evaluation metrics


A quantitative measure is used to assess the accuracy of the proposed method by comparing it with ground-truth MSPs marked manually, as provided by the Internet Brain Segmentation Repository (IBSR) and the BrainWeb data. This is done by measuring the Yaw angle error and the Roll angle error (for more details, see [51]). The computation of the evaluation metrics is as follows.
The MSP can be described, in a coordinate system, as aX + bY + cZ + d = 0, where (a, b, c) is the normal vector of the plane and d/√(a2 + b2 + c2 ) is its perpendicular distance from the origin. The Yaw angle (φy ) and Roll angle (φr ) are then defined as

φy = arctan(b/a),        (6.11)

φr = arctan( −c / √(a2 + b2 ) ).        (6.12)

Table 6.4: Brain MRI datasets for evaluation of the proposed method.

Pathology #Sample Modality Matrix Orientation


Infarct 23 T1 , T2 256 × 256 × 173 sagittal
Tumour 35 T1 , T2 256 × 256 × 124 sagittal, axial
Stroke 24 T1 256 × 256 × 76 sagittal, axial
Normal 33 T1 256 × 256 × 256 sagittal, coronal
Figure 6.6: Simulated brain MRI with intensity non-uniformity 20% at noise levels 5%, 7% and 9%, and slice thicknesses 3 mm, 5 mm and 7 mm.

The Yaw angle error φ′y and the Roll angle error φ′r are estimated as the differences between the MSP detected by the proposed method and the ground-truth MSP.
The angular deviation θ is calculated between the ground-truth line and the MSP estimated by the proposed method. Furthermore, the average deviation of the distance (d, in pixels) between the estimated MSPs and the ground-truth lines is estimated from their upper and lower endpoints as

d = [ √((a − x)2 + (b − y)2 ) + √((a′ − x′ )2 + (b′ − y ′ )2 ) ] / 2,        (6.13)

where (a, b), (a′ , b′ ) denote the upper and lower endpoints of the MSP estimated by the proposed method and (x, y ), (x′ , y ′ ) denote the upper and lower endpoints of the ground-truth MSP [82].
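A hedged sketch of these metrics (np.arctan2 stands in for arctan to keep quadrants well defined; the endpoint tuple names are illustrative):

```python
import numpy as np

def yaw_roll(a, b, c):
    """Yaw and roll angles (degrees) of the plane aX + bY + cZ + d = 0."""
    phi_y = np.degrees(np.arctan2(b, a))                 # Eq. (6.11)
    phi_r = np.degrees(np.arctan2(-c, np.hypot(a, b)))   # Eq. (6.12)
    return phi_y, phi_r

def endpoint_distance(p_top, p_bot, g_top, g_bot):
    """Average endpoint deviation d of Eq. (6.13), in pixels."""
    d1 = np.hypot(p_top[0] - g_top[0], p_top[1] - g_top[1])
    d2 = np.hypot(p_bot[0] - g_bot[0], p_bot[1] - g_bot[1])
    return 0.5 * (d1 + d2)
```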

6.6.3 Results and discussions


Jayasuriya et al. proposed a fractal-based method for extracting the symmetry plane from brain MRI. The limitation of their algorithm is that it still requires a certain degree of symmetry between the left and right hemispheres. Therefore, it may not provide accurate results on images with severe global asymmetry, such as substantial hemispheric removal; this limitation cannot be fully overcome by any correlation-based technique. In order to overcome this problem, the proposed method uses the generalized fractal dimensions as a symmetry measure and the multifractal spectrum for the accurate estimation of the MSP from brain MRI samples. The results of the proposed method for MSP extraction are presented here.
A fractal is defined as a set whose non-integer Hausdorff dimension exceeds its topological dimension. Choosing q = 0 in Eq. (6.5) provides the fractal dimension D0 . In Table 6.5 and Table 6.6 the fractal dimension D0 lies between 1 and 2, while the digital MRI datasets used in the proposed method are subsets of the Euclidean plane with topological dimension 1. Hence these images are fractal objects with an irregular distribution of gray levels, although they have a self-similar structure.
Table 6.5 lists the generalized fractal dimensions D0 , D1 and D2 of the simulated MRI data shown in Fig. 6.6, computed by fixing the intensity non-uniformity at 20% with slice thicknesses of 3 mm, 5 mm and 7 mm and varying the noise level over 5%, 7% and 9%. The multifractal strength Δα is estimated as the difference between the minimum and maximum values of the multifractal spectrum. The values listed in Table 6.6 are computed for 40% INU with the same slice thicknesses and noise levels, for the MRI data depicted in Fig. 6.7.
Figure 6.8 depicts the GFD spectra for q values between −100 and 100 and the multifractal spectra for the RoI centered at the optimal intensity c∗ obtained from Step 10 of the proposed algorithm. Figure 6.8 and Fig. 6.9 show the multifractal spectra of the brain MRI datasets with INU 20% and INU 40% respectively; for all slice thicknesses and noise levels the spectrum is concave, with a single inflection point at α(0), which provides the fractal dimension D0 . Further, Fig. 6.10 and Fig. 6.11 depict the MSP extracted by the developed method for the MRIs in Fig. 6.6 and Fig. 6.7 respectively.
Table 6.7 presents the mean and standard deviation (SD) of the angular deviation θ between the ground-truth lines and the MSP estimated by the proposed method; here θ is computed for the datasets tabulated in Table 6.4. Further, Table 6.7 provides the mean and SD of the average deviation of the distance d between the lower and upper endpoints of the estimated MSP and the ground-truth MSP, determined from Eq. (6.13).
The MSP extracted by the proposed method is compared with three state-of-the-art methods presented by Liu et al. [78], Ruppert et al. [79] and Zhang et al. [65]. For the human visual perception analysis, the extracted MSPs of pathological and normal brain MRI

Table 6.5: Analysis of multifractal strength Δα, maximum value of the multifractal spectrum αmax and generalized fractal dimensions of brain MRI with INU 20% and varying slice thickness.

Slice Thickness   Noise Level   αmax     Δα       D0       D1       D2
3 mm              5% Noise      2.9666   1.1829   1.9205   1.8229   1.8208
3 mm              7% Noise      2.9406   1.1546   1.8275   1.8230   1.8210
3 mm              9% Noise      2.6786   0.8828   1.8297   1.8233   1.8216
5 mm              5% Noise      2.8899   1.1041   1.8273   1.8233   1.8217
5 mm              7% Noise      3.0290   1.2480   1.8286   1.8243   1.8229
5 mm              9% Noise      2.7538   0.9554   1.8253   1.2834   1.8218
7 mm              5% Noise      3.0293   1.2426   1.8267   1.8242   1.8232
7 mm              7% Noise      3.0343   1.2445   1.8294   1.8239   1.8227
7 mm              9% Noise      2.7204   0.9201   1.9204   1.8244   1.8235
Table 6.6: Analysis of multifractal strength Δα, maximum value of the multifractal spectrum αmax and generalized fractal dimensions of brain MRI with INU 40% and varying slice thickness.

Slice Thickness   Noise Level   αmax     Δα       D0       D1       D2
3 mm              5% Noise      2.8756   1.0937   1.8264   1.8243   1.8233
3 mm              7% Noise      2.9488   1.1653   1.8290   1.8241   1.8230
3 mm              9% Noise      2.8816   1.0924   1.8253   1.8244   1.8236
5 mm              5% Noise      2.7310   0.9468   1.8257   1.8232   1.8214
5 mm              7% Noise      3.0570   1.2697   1.8252   1.8231   1.8212
5 mm              9% Noise      2.8060   1.0092   1.8253   1.8233   1.8216
7 mm              5% Noise      3.0755   1.2892   1.8284   1.8243   1.8234
7 mm              7% Noise      2.9663   1.1762   1.8253   1.8243   1.8231
7 mm              9% Noise      2.8170   1.0190   1.8261   1.8242   1.8232

Table 6.7: Comparison of the proposed method with the ground-truth MSP through angular deviation θ and average deviation of distance d.

Dimension   θ (in degrees)        d (in pixels)
            Mean      SD          Mean      SD
D0          0.91      0.21        0.72      0.19
D1          0.89      0.25        0.69      0.23
D2          0.85      0.27        0.67      0.27

datasets obtained from the aforementioned methods and from the proposed method are depicted in Fig. 6.12. Moreover, the Yaw angle error φ′y and Roll angle error φ′r are computed for the proposed method and the existing methods using Eq. (6.11) and Eq. (6.12). Table 6.8 provides the mean and standard deviation of φ′y and φ′r estimated for the datasets listed in Table 6.4.
The objective of the present study is to develop a precise technique for the appropriate estimation of the mid-sagittal plane from normal and pathological brain MRI. Moreover, the developed method provides better performance on irregular/asymmetrical MRI structures. In order to analyze the stability of the proposed method, Gaussian noise was added, along with intensity non-uniformity levels of 20% and 40%, to the MRI at the time of simulation from the BrainWeb database. Besides, pathological images are also used to evaluate the efficiency of the proposed method.
From Table 6.8, the mean and SD values of the Yaw angle error φ′y and Roll angle error φ′r of the proposed method are significantly the smallest among the four methods compared. Also, Table 6.8 exhibits
Figure 6.8: Comparative analysis of multifractal spectra and generalized fractal dimension spectra of simulated brain MRI with INU 20%, noise levels 5%, 7% and 9%, and various slice thicknesses. Panels (a), (c), (e) show Dq versus q and panels (b), (d), (f) show f (α(q )) versus α(q ), for slice thicknesses 3 mm, 5 mm and 7 mm respectively.
Figure 6.9: Comparative analysis of multifractal spectra and generalized fractal dimension spectra of simulated brain MRI with INU 40%, noise levels 5%, 7% and 9%, and various slice thicknesses. Panels (a), (c), (e) show Dq versus q and panels (b), (d), (f) show f (α(q )) versus α(q ), for slice thicknesses 3 mm, 5 mm and 7 mm respectively.
Figure 6.10: Extracted MSP from the simulated MRI brain volumes shown in Fig. 6.6 (noise levels 5%, 7% and 9%; slice thicknesses 3 mm, 5 mm and 7 mm).

the efficiency of the method using multifractal techniques, namely generalized fractal dimensions and multifractal spectra. In addition, Table 6.7 and Fig. 6.12 show the superior performance of the proposed method for extraction of the MSP on pathological MRI such as tumour, infarct and stroke.
The performance of the developed method on pathological and normal MRIs is evaluated by further quantitative metrics, namely the angular deviation θ and the average deviation of distance d. These values are recorded in Table 6.7, which exhibits the comparison between the extracted MSP and the ground-truth line. It is evident that the multifractal-based method is significantly close to the ground-truth MSP in both angle and pixel distance. The performance of the developed method on simulated MRI obtained from the BrainWeb database is shown in Fig. 6.10 and Fig. 6.11 for human visual perception.
Table 6.8: Comparison using Yaw angle error φ′y and Roll angle error φ′r of the MSP (mean ± SD).

Type      Zhang and Hu                 Ruppert et al.               Liu et al.                   Proposed Method
          φ′y           φ′r            φ′y           φ′r            φ′y           φ′r            φ′y           φ′r
Infarct   2.432±0.317   2.323±0.375    2.217±0.271   2.243±0.295    1.877±0.217   1.813±0.234    0.913±0.153   0.921±0.178
Tumour    2.340±0.312   2.317±0.369    2.204±0.274   2.043±0.291    1.873±0.213   1.795±0.227    0.897±0.149   0.917±0.163
Stroke    2.351±0.309   2.371±0.371    2.301±0.269   1.976±0.283    1.732±0.209   1.703±0.229    0.907±0.143   0.873±0.159
Normal    2.209±0.315   2.246±0.367    1.989±0.267   1.896±0.280    1.708±0.211   1.702±0.223    0.793±0.148   0.781±0.152
Figure 6.11: Extracted MSP from the simulated MRI brain volumes shown in Fig. 6.7 (noise levels 5%, 7% and 9%; slice thicknesses 3 mm, 5 mm and 7 mm).

Comparing the proposed method with the existing methods for MSP extraction by the evaluation metrics and human visual perception, the developed technique has the following merits: (1) generalized fractal dimensions and the multifractal spectrum characterize the noise behaviour of MRI well, so the developed method has a good tolerance to noise; (2) the algorithm works well on various intensity non-uniformities and slice thicknesses of pathological images; (3) GFD is used as the symmetry measure, so the method computes the symmetry between the left and right hemispheres accurately.
The degree of symmetry between the left and right hemispheres is measured by the generalized fractal dimensions and the multifractal spectrum, which are used to achieve optimal MSP detection. Although the proposed algorithm is robust to normal and pathological images in comparison with the other existing techniques, the proposed method finds the MSP as a straight plane, whereas the surface that actually separates the two hemispheres is not exactly planar. A possible future work is to extend the multifractal-based method to detect a curved MSP.
Figure 6.12: Comparison between the MSP extracted by the proposed method and three state-of-the-art methods (Zhang et al., Ruppert et al., Liu et al.) on MRI images: (a) Normal, (b) Stroke, (c) Tumour-benign, (d) Infarct, (e) Tumour-malignant.

6.6.4 Conclusion
The proposed algorithm uses the generalized fractal dimensions as a symmetry measure and the multifractal spectrum to refine the optimal mid-sagittal plane, combining GFD and the multifractal spectrum for a more accurate and efficient extraction of the mid-sagittal plane from brain MRI. The algorithm has been comprehensively tested on various normal and pathological brain MRI samples with various modalities. Experiments using a large and heterogeneous dataset have shown that the proposed method provides higher accuracy and precision than the three state-of-the-art methods, as assessed by the performance evaluation measures. The method presented in this chapter provides a reliable estimate of the mid-sagittal plane for numerous applications in computer-assisted diagnosis.
References

[1] M.V. Smoluchowski, Versuch einer mathematischen Theorie der Koagulationskinetik kolloider Lösungen, Z. Phys. Chem., 92, 215, 1917.
[2] S. Chandrasekhar, Stochastic problems in physics and astronomy,
Rev. Mod. Phys. 15, 1, 1943.
[3] H.E. Hurst, Long-term storage capacity of reservoirs, Trans. of
Am. Soc. of Civil Engg. 116, 1951.
[4] Y.L. Luke, The Special Functions and their Approximations I,
New York: Academic Press, 1969.
[5] H.E. Stanley, Introduction to Phase Transitions and Critical Phe-
nomena, Oxford University Press, Oxford, New York, 1971.
[6] S. Karlin and H.M. Taylor, A First Course in Stochastic Pro-
cesses, 2nd edn. Academic Press, New York, 1975.
[7] W. Rudin, Principles of Mathematical Analysis, 3rd edn.,
McGraw-Hill Book company, New Delhi, 1976.
[8] B.B. Mandelbrot, Fractals, Fractal Cities, Freeman, San Fran-
cisco, 1977.
[9] S.K. Frielander, Smoke, Dust and Haze: Fundamentals of Aerosol
Behavior, Wiley, New York, 1977.
[10] P.G. de Gennes, Scaling Concepts in Polymer Physics, Cornell
University Press, Ithaca, NY, 1979.
[11] J.E. Hutchinson, Fractals and self similarity, Indiana University
Mathematics Journal, 30, 713–747, 1981.
[12] T.M. Apostol, Mathematical Analysis, 2nd edn., Addison-Wesley Publishing Company, London, 1981.
[13] B.B. Mandelbrot, The Fractal Geometry of Nature, W.H. Free-
man and Company, New York, 1983.
[14] P. Grassberger and I. Procaccia, Measuring the Strangeness of
Strange Attractors, Physica D 9, 189–208, 1983.
[15] P. Grassberger, Generalized dimensions of strange attractors,
Physics Letters A, 97, 227–320, 1983.
[16] H.G.E. Hentschel and I. Procaccia, The infinite number of gener-
alized dimensions of fractals and strange attractors, Physica D:
Nonlinear Phenomena, 8(3), 435–444, 1983.
[17] T. Viscek and F. Family, Dynamic scaling for aggregation of clus-
ters, Phys. Rev. Lett. 52, 1669, 1984.
[18] R.M. Ziff and E.D. McGrady, The kinetics of cluster fragmenta-
tion and depolymerisation, J. Phys. A: Math. Gen. 18, 3027–3037,
1984.
[19] R.M. Ziff and E.D. McGrady, The kinetics of cluster fragmenta-
tion and depolymerisation, J. Phys. A: Math. Gen., 18(5), 3027,
1985.
[20] P.G.J. van Dongen and M.H. Ernst, Dynamic scaling in the ki-
netics of clustering, Phys. Rev. Lett. 54, 1396, 1985.
[21] R.M. Ziff and E.D. McGrady, Kinetics of polymer degradation,
Macromolecules, 19(10), 2513–2519, 1986.
[22] C. Amitrano, A. Coniglio and F. di Liberto, Growth probability
distribution in kinetic aggregation processes, Phys. Rev. Lett. 57,
1016, 1986.
[23] J. Feder, Fractals, New York: Plenum, New York, 1988.
[24] R.M. Bethea and R.R. Rhinehart, Applied Engineering Statistics,
Marcel Dekker, Inc., New York, NY, 1991.
[25] C.-K. Peng, S.V. Buldyrev, A.L. Goldberger, S. Havlin, F.
Sciortino, M. Simons and H.E. Stanley, Long-range correlations
in nucleotide sequences, Nature 356, 168–171, 1992.
[26] P.L. Krapivsky, Kinetics of random sequential parking on a line,
J. Stat. Phys. 69, 135–150, 1992.
[27] M.F. Barnsley, Fractals Everywhere, 2nd edn., Academic Press, USA, 1993.
[28] P.R. Massopust, Fractal Functions, in Fractal Surfaces and
Wavelets, Academic Press, San Diego, 1994.
[29] P.L. Krapivsky and E. Ben-Naim, Multiscaling in stochastic frac-
tals, Phys. Lett. A 196, 168–172, 1994.
[30] A. Bunde and S. Havlin, Fractals in Science, Springer, Berlin,
1994.
[31] P.L. Krapivsky and E. Ben-Naim, Multiscaling in stochastic frac-
tals, Phys. Lett. A, 196, 168, 1994.
[32] P.L. Krapivsk and E. Ben-Naim, Scaling and multiscaling in mod-
els of fragmentation, Phys. Rev. E, 50, 3502, 1994.
[33] G.J. Rodgers and M.K. Hassan, Fragmentation of particles with
more than one degree of freedom, Phys. Rev. E 50, 3458–3463
1994.
[34] A. Rényi, On a new axiomatic theory of probability, Acta Mathematica Hungarica, 6, 285–335, 1955.
[35] M.K. Hassan and G.J. Rodgers, Models of fragmentation and
stochastic fractals, Phys. Lett. A, 208, 95–98, 1995.
[36] M. Rao, S. Sengupta and H.K. Sahu, Kinematic scaling and
crossover to scale invariance in martensite growth, Phys. Rev.
Lett. 75, 2164, 1995.
[37] G.I. Barenblatt, Scaling, Self-similarity, and Intermediate
Asymptotics, Cmpridge University Press, 1996.
[38] H.E. Stanley in Fractals and Disordered Systems eds. Bunde A and
Havlin S, New York: Springer, 1996.
[39] P.L. Krapivsky and E. Ben-Naim, Kinematic scaling and
crossover to scale invariance in martensite growth, Phys. Rev.
Lett., 76, 3234, 1996.
[40] M.K. Hassan and G.J. Rodgers, Multifractality and multiscaling
in two dimensional fragmentation, Phys. Lett. A, 218, 207–211,
1996.
[41] M.K. Hassan, Multifractality and the shattering transition in
fragmentation processes, Phys. Rev. E 54, 1126–1133, 1996.
[42] M.K. Hassan, Fractal dimension and degree of order in sequential deposition of a mixture of particles, Physical Review E, 55(5), 5302, 1997.
[43] C.E. Shannon, The Mathematical Theory of Communication,
University of Illinois Press, Champaign, IL, 1998.
[44] Lui Lam, Nonlinear Physics for Beginners, World Scientific, Sin-
gapore, 1998.
[45] N.A. Salingaros and B.J. West, A universal rule for the distribu-
tion of sizes, Environment and Planning B: Planning and Design
26(6), 909–923, 1999.
[46] P. Asvestas, G.K. Matsopoulos and K.S. Nikita, Estimation of
fractal dimension of images using a fixed mass approach, Pattern
Recognition Letters, 20, 347–354, 1999.
[47] T. Schreiber and A. Schmitz, Surrogate Time Series, Physica D,
142, 346–382, 2000.
[48] E. Ben-Naim and P.L. Krapivsky, Stochastic aggregation: rate
equations approach, J. Phys. A: Math. Gen., 33, 547, 2000.
[49] S.H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, Perseus Books Publishing, 2001.
[50] J.W. Kantelhardt, E. Koscielny-Bunde, H.H.A. Rego and S. Havlin, Detecting long-range correlations with detrended fluctuation analysis, Physica A, 295, 441–454, 2001.
[51] Y. Liu, R.T. Collins and W.E. Rothfus, Robust midsagittal plane
extraction from normal and pathological 3-D neuroradiology im-
ages, IEEE Transactions on Medical Imaging, 20(3), 2001.
[52] S. Redner, A Guide to First-Passage Processes, Cambridge Uni-
versity Press, Cambridge, 2001.
[53] N.G. van Kampen, Stochastic Processes in Physics and Chem-
istry, North-Holland, Amsterdam, 2001.
[54] M.K. Hassan and J. Kurths, Transition from random to ordered
fractals in fragmentation of particles in an open system, Phys.
Rev. E, 64(1), 016119, 2001.
[55] M.K. Hassan and J. Kurths, Can randomness alone tune the frac-
tal dimension? Physica A, 315, 342–352, 2002.

[56] John C. Russ, The Image Processing Handbook, 4th ed., CRC
Press, London, 2002.
[57] K.J. Falconer, Fractal Geometry: Mathematical Foundations and
Applications, 2nd. edition, John Wiley & Sons Ltd., England,
2003.
[58] M.E.J. Newman, SIAM Review, 45, 167, 2003.
[59] C. Jingdong, Filtering Techniques for Noise Reduction and
Speech Enhancement, Chapter in Adaptive Signal Processing:
Applications to Real-World Problems, Springer Berlin Heidel-
berg, 129–154, 2003.
[60] T. Gautama, D.P. Mandic and M.M.V. Hulle, The delay vector
variance method for detecting determinism and nonlinearity in
time series, Physica D, 190, 167–176, 2004.
[61] S.N. Majumdar, D.S. Dean and P.L. Krapivsky, Understanding
search trees via statistical physics, Pramana - J. Phys., 64(6),
1175–1189, 2005.
[62] Y.S. Liang and W.Y. Su, The relationship between the fractal
dimensions of a type of fractal functions and the order of their
fractional calculus, Chaos, Solitons and Fractals, 34, 682–692,
2007.
[63] K.R. Castleman, Digital Image Processing, Pearson Education
India, 1st ed., 2007.
[64] G. Edgar, Measure, Topology, and Fractal Geometry, 2nd edition,
Springer, New York, 2008.
[65] Y. Zhang and Q. Hu, A PCA-based approach to the representa-
tion and recognition of MR brain midsagittal plane images, 30th
Annual International Conference of the IEEE-EMBS, 3916–3919,
2008.
[66] M.K. Hassan and M.Z. Hassan, Condensation-driven aggregation
in one dimension, Phys. Rev. E 77, 061404, 2008.
[67] G.W. Delaney, S. Hutzler and T. Aste, Relation between grain
shape and fractal properties in random apollonian packing with
grain rotation, Phys. Rev. Lett., 101, 120602, 2008.
[68] M.K. Hassan and M.Z. Hassan, Emergence of fractal behavior
in condensation-driven aggregation, Phys. Rev. E, 79(2), 021406,
2009.

[69] P.A. Varotsos, N.V. Sarlis and E.S. Skordas, Detrended fluctu-
ation analysis of the magnetic and electric field variations that
precede rupture, Chaos 19, 023114, 2009.
[70] M.K. Hassan and M.Z. Hassan, Emergence of fractal behavior in
condensation-driven aggregation, Phys. Rev. E, 79, 021406, 2009.
[71] J. Aguirre, R.L. Viana and M.A.F. Sanjuan, Rev. Mod. Phys. 81,
333, 2009.
[72] Jian Li, Qian Du and Caixin Sun, An improved box-counting
method for image fractal dimension estimation, Pattern Recog-
nition, 42, 2460–2469, 2009.
[73] R.K.P. Zia, E.F. Redish and S.R. McKay, Am. J. Phys., 77, 614,
2009.
[74] P.R. Massopust, Interpolation and Approximation with Splines
and Fractals, Oxford University Press, New York, 2010.
[75] M.K. Hassan, M.Z. Hassan and N.I. Pavel, Scale-free network
topology and multifractality in a weighted planar stochastic lat-
tice, New J. Phys., 12, 093045, 2010.
[76] MRI Image Database, website: http://www.cma.mgh.harvard.edu/ibsr/
[77] MRI Image Database, website: http://www.bic.mni.mcgill.ca/brainweb/
[78] S.X. Liu, J. Kender, C. Mielinska and A. Laine, Employing sym-
metry features for automatic misalignment correction in neuroim-
ages, Journal of Neuroimaging, 21, 15–33, 2011.
[79] G.C.S. Ruppert, L.A. Teverovskiy, C.P. Yu, A.X. Falcao and
Y. Liu, A new symmetry-based method for mid-sagittal plane
extraction in neuroimages, IEEE International Symposium on
Biomedical Imaging: From Nano to Macro, 285–288, 2011.
[80] M.K. Hassan, M.Z. Hassan and N.I. Pavel, J. Phys: Conf. Ser,
297, 012010, 2011.
[81] A. Biswas, T.B. Zeleke and B.C. Si, Multifractal detrended fluctuation analysis in examining scaling properties of the spatial patterns of soil water storage, Nonlin. Processes Geophys., 19, 227–238, 2012.

[82] S.A. Jayasuriya, A.W.C. Liew and N.F. Law, Brain symmetry
plane detection based on fractal analysis, Computerized Medical
Imaging and Graphics, 37, 568–580, 2013.
[83] M.K. Hassan, M.Z. Hassan and N. Islam, Emergence of fractals in
aggregation with stochastic self-replication, Phys. Rev. E, 88(4),
042137, 2013.
[84] R. Uthayakumar and A. Gowrisankar, Generalized Fractal Di-
mensions in Image Thresholding Technique, Information Sciences
Letters, Natural Sciences, 3(3), 125–134, 2014.
[85] M.K. Hassan, N.I. Pavel, R.K. Pandit and J. Kurths, Chaos, Soli-
tons & Fractals, 60, 31–39, 2014.
[86] R. Uthayakumar and A. Gowrisankar, Generation of Fractals via Self-Similar Group of Kannan Iterated Function System, Applied Mathematics & Information Sciences, Natural Sciences, 9(6), 3245–3250, 2015.
[87] R. Uthayakumar and A. Gowrisankar, Attractor and self-similar group of generalized fuzzy contraction mapping in fuzzy metric space, Cogent Mathematics, Taylor & Francis, 2(1), 1024579, 1–12, 2015.
[88] A. Gowrisankar and R. Uthayakumar, Fractional calculus on fractal interpolation for a sequence of data with countable iterated function system, Mediterranean Journal of Mathematics, Springer, 13(3), 3887–3906, 2016.
[89] R. Uthayakumar and A. Gowrisankar, Mid-sagittal plane detection in magnetic resonance image based on multifractal techniques, IET Image Processing, 10(10), 751–762, 2016.
[90] A. Gowrisankar, Generation of fractal through iterated function systems, Ph.D. Thesis, The Gandhigram Rural Institute (Deemed to be University), 2016.
[91] Y.S. Liang and Q. Zhang, A type of fractal interpolation functions and their fractional calculus, Fractals, 24(2), 1650026, 2016.
[92] F.R. Dayeen and M.K. Hassan, Chaos, Solitons & Fractals, 91, 228, 2016.
[93] R.C. Gonzalez, R.E. Woods and S.L. Eddins, Digital Image Processing Using MATLAB, 2nd ed., McGraw Hill Education, 2017.
[94] A. Gowrisankar and M. Guru Prem Prasad, Riemann-Liouville fractional calculus on quadratic fractal interpolation function with variable scaling factors, The Journal of Analysis, Springer, 1–7, 2018.
[95] A. Gowrisankar and D. Easwaramoorthy, Local countable iterated function systems, ICAMS-2017—Trends in Mathematics—Springer Book Series, 1, 169–175, 2018.
[96] C. Raja Mohan, A. Gowrisankar, R. Uthayakumar and K. Jayakumar, Morphology dependent electrical property of chitosan film and modeling by fractal theory, The European Physical Journal Special Topics, 228, 233–243, 2019.
[97] M.K. Hassan, Is there always a conservation law behind the emergence of fractal and multifractal?, Eur. Phys. J. ST, 228(1), 209–232, 2019.
Index

δ-cover 157

A
aggregation kernel 85, 86
angular deviation 172, 174, 175, 178
attractor 44, 61, 129
auto-correlation 130, 132–134, 142, 150, 151
Autocorrelation function 135, 142
auto-covariance 133

B
Banach contraction 49, 58, 61
Binomial coefficient 28
Bohr radius 3
box dimension 157, 158
Brownian motion 86, 87, 142
Buckingham Pi-theorem 5

C
Cauchy sequence 48, 49, 54–57
cerebrospinal fluid 167
change of scaling 19, 20
completeness axiom 51
concave 110, 123, 161
conservation law 44, 80, 81, 85, 87, 95, 119, 120, 125
contraction ratios 58, 59
Correlation Algorithm 159
correlation dimension 108, 112, 123, 160
Cut and paste model 98, 110, 112–114

D
degree of the multifractality 149
deterministic dyadic Cantor set 106
deterministic fractal 37, 44, 45, 58, 61, 67
Deterministic multifractal 107
deterministic process 129
detrended fluctuation 140, 144, 145
detrended variance 141, 146
DFA 140–147
diameter 5, 15, 34, 157
Differential Box Counting 158, 159
Digital image 153–155
Digital Image Processing 154
Dilation 54
dimension function 5, 21
Dimensional analysis 3–6, 8, 12
dimensionless 1, 4–9, 12, 13, 16, 17, 80, 84, 91, 92, 95
discrete metric 46
distance function 45, 46
distribution function 73, 80, 84, 90, 93, 94, 98, 117, 119, 120, 126
Dragon curve 64, 65
DVV method 150
Dyadic Cantor Set 70–74, 77–79, 106–108, 110, 115, 120
Dynamic scaling 16–18, 80, 85, 92

E
Euclidean geometry 22, 23, 33–35, 37, 44, 67, 167
Euclidean objects 32, 35, 36

F
Fern leaf 64
first return probability 23, 26
fixed point 49–51, 55, 58, 60, 61
Fixed point theorem 50, 58, 60
fluctuation analysis 135, 139, 144, 145
fluctuation in the STS 150
Fourier spectrum 135
fractal 31–33, 35–37, 40, 41, 44, 45, 51, 58, 61, 63, 64, 66, 67, 69–73, 77, 79, 85, 88–90, 93, 95, 97, 98, 102, 103, 105–108, 110–112, 114, 123, 127, 129, 132, 148–150, 155, 156, 158, 160–164, 167, 168, 170, 172–178, 180, 181
fractal geometry 31–33, 35, 37
Fractal leaves 64, 66
fractal process 132, 149, 150
fractal space 51

G
Generalized fractal dimension 150, 155, 156, 160, 162–164, 167, 168, 170, 173–178, 180, 181
generalized homogeneous function 20, 21
GHE 145, 147, 148, 151
gray levels 154, 158, 159, 162, 164, 173

H
Hamiltonian 104
Hausdorff dimension 36, 37, 45, 61, 67, 157, 164, 173
Hausdorff distance 52
Hausdorff-Besicovitch dimension 33, 41, 44, 102, 121, 157
HB mapping 58, 60, 61
Hilbert space 34
Hölder exponents 110, 114, 122, 147, 148, 150
homogeneous function 4, 18, 20, 21
human visual perception 164, 166, 174, 178, 180
Hurst exponent 130, 136, 138, 145, 151
hypergeometric function 118
hyperspace 52, 57, 60

I
Image Acquisition 155
Image Enhancement 154, 155
Image thresholding 162
information dimension 108, 112, 123, 160
information entropy 103
initiator 37–45, 67, 71–73, 75, 78, 97, 98, 107, 108, 111, 112, 115, 116, 118, 124, 126, 127
integer dimension 32, 33, 35
integro-differential equation 8, 80
intensity non-uniformities 171
inter-hemispheric fissure 167
invariant set 61
inverse Laplace transform 93
Iterated function system 45, 58

K
k-bit 162
kinetic counterpart 79
Kinetic dyadic Cantor set 73, 74, 77
kinetic square lattice 124, 125
Koch curve 40–42, 44, 63, 64
Kummer's function 81

L
Legendre transform 99, 100, 104, 125, 147, 150
Lipschitz-Hölder 148
Lipschitz-Hölder exponent 148
logarithmic scale 25, 29
long-term correlation 136, 140

M
magnetic resonance image 155, 167
magnitude 2, 3, 22
mass exponent 101, 102, 104, 105, 107–109, 112, 114, 121, 122, 125
mass-length relation 34–36, 42
measurements 1–5, 7, 9, 13, 16–19, 21, 167
Medical image 166
Mellin transform 118
MF-DFA 145–147
Midsagittal Plane 167
monofractal 130, 132, 135, 140–142, 147, 149–151, 156, 161
Monofractal dimension 156
monofractal STS 130, 135, 142, 151
multifractal 98, 99, 101, 105, 107–112, 114, 115, 120, 121, 123, 124, 126, 127, 129, 130, 132, 135, 144, 145, 147–151, 160–162, 165–168, 170, 173–178, 180, 181
multifractal analysis 98, 120, 121, 123, 129, 149, 167, 168
Multifractal dimension 160, 162
multifractal formalism 98, 101, 105, 107, 109, 111, 112, 120, 123, 124, 144
multifractal process 132, 147, 149
Multifractal Spectra 160, 161, 167, 168, 170, 173, 174, 176–178, 180, 181
multifractal STS 130, 144, 147, 151
multifractality 97, 101, 114, 123, 124, 126, 127, 129, 130, 144, 147, 149, 151
multi-scaled network 15

N
natural pattern 67
Newton's second law 3, 5
Noise 136–139, 141–143, 148, 149, 155, 171–178, 180
non-integer dimension 32, 35
non-stationary time series 133–135

O
Open ball 48, 51
orthogonal 117, 171
Otsu's method 165

P
partition function 101, 105–107, 109, 111, 120, 122, 125, 144
pathological image 170, 175, 180
polynomial fitting 141
power monomial law 17, 18
power noise 137–139, 141, 142, 148, 149
Power spectral density 142
power-law 4, 18, 19, 21–23, 25, 28–30, 36, 41, 72, 76, 77, 88, 89, 91, 101, 121, 124, 125
Power-law distribution 21, 22, 25, 29, 30, 101
pre-fractal 41
Pythagoras theorem 11
Pythagorean theorem 8, 11

Q
q-th order fluctuation 145, 146

R
Random fractal 37, 45, 70, 71, 73, 127
random process 130
Random walk 23–25, 135–137, 139, 140
region nonuniformity 164–166
Relative Differential Box Counting 159
Rényi entropy 160
Rényi Fractal Dimensions 160
rescale range fluctuation 138
right skewed 150
Roll angle 171, 172, 175, 179

S
scale-free 18, 19, 21
scale-invariance 1, 4, 18
scaling function 9, 17, 80, 83, 92, 93, 95
scaling law 16, 20, 130, 132, 135, 138–140, 144, 150, 151, 158
Self-referential equation 60
self-similarity 1, 4, 8, 11, 13–16, 18, 19, 35–37, 45, 67, 69, 75, 85, 95, 130, 132, 135, 137, 150, 167
Shannon entropy 160
short-term correlation 136
Sierpinski gasket 42–44, 62, 63
similarity 1, 4, 8, 11, 13–16, 18, 19, 35–37, 45, 67, 69, 75, 85, 95, 130, 132, 135, 137, 150, 167
similarity parameters 12, 13, 132
singularity spectrum 148, 149
Smoluchowski's equation 85
smooth or regular structure 149
spectrum of multifractal 108, 115
square lattice 111, 115, 124, 125
Statistical self-similarity 15, 45, 137, 150
Stirling's formula 28
Stochastic dyadic Cantor set 77–79, 106
Stochastic fractal 69, 79, 85, 95, 98, 127
stochastic lattice 115, 116
stochastic process 70, 71, 129–132, 150, 151
stochastic self-replication 85, 93
stochastic time series 129–131, 139, 146, 150
Stochastic variable 16, 71
supremum 51, 53
symmetric measure 169, 181
symmetry parameter 149, 150

T
topological dimension 33, 36, 37, 45, 61, 67, 156, 164, 173
triadic Cantor set 38, 70, 72

U
uniform metric 47
uniformly continuous 47, 49, 50
usual metric 46, 49, 52

V
von Koch curve 40, 44, 63, 64

W
wavelet transform 130, 144, 151
weakly stationary 133, 134
Widom scaling 21

Y
yardstick 22, 23, 35, 36, 44
Yaw angle 171, 172, 175, 179

Z
Zipf law 29