Fractal Patterns in Nonlinear Dynamics
and Applications
Santo Banerjee
Institute for Mathematical Research
University Putra Malaysia
Serdang, Malaysia
M K Hassan
Dhaka University
Dhaka, Bangladesh
Sayan Mukherjee
Department of Mathematics
Sivanath Sastri College
Kolkata, India
A Gowrisankar
Department of Mathematics
Vellore Institute of Technology
Vellore, Tamil Nadu, India
A SCIENCE PUBLISHERS BOOK
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted,
or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, includ-
ing photocopying, microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com
(http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers,
MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety
of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment
has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe.
Mathematics and the physical sciences have long been used together to describe natural
phenomena, and their numerous successes have led to a growth of
scientific discoveries. This book is meant for anyone who wants to understand the
patterns of fractal geometry in detail, along with the physical aspects and the basic
mathematical background. Our goal is to give readers a broad interpretation of
the underlying notions behind fractals and multifractals, and to
illustrate the fundamentals of fractals, stochastic fractals and multifractals
with applications.
Many phenomena in nature exhibit self-similarity. That is, either a part is
similar to the whole, or snapshots of the same system at different times are similar
to one another, albeit differing in size. The book begins by describing novel physical
applications and recent progress through scale-invariance and self-similarity.
In general, mathematics models real-world problems with sets and functions,
traditionally through classical Euclidean geometry. However, there are
many phenomena which are too irregular or complex
to be described using classical Euclidean geometry. In such cases there is a
need for an alternative geometry to resolve these complexities, one which helps in a
better understanding of natural patterns. The study of irregular objects was
revolutionized by Benoit B. Mandelbrot and is called fractal geometry. It has
generated widespread interest in almost every branch of science. The advent of
inexpensive computer power and graphics has led to the study of non-traditional
geometric objects in many fields of science, and the idea of the fractal has been
used to describe them. In a sense, the idea of fractals has brought many seemingly
unrelated subjects under one umbrella. The second chapter deals with the construction
of fractals through iterated function systems of contractive mappings and
illustrates some examples.
Nature loves randomness, and the natural objects we see around us
evolve in time. The apparently complex look of most natural objects does not
vi < Fractal Patterns in Nonlinear Dynamics and Applications
mean that nature favours complexity; rather, the opposite is true. Often the
inherent basic rule is trivially simple; it is in fact randomness and the
repetition of the same simple rule over and over again that make the object look
complex. Of course, natural fractals cannot be strictly self-similar; rather, they are
statistically self-similar. For instance, one can draw a curve describing the tips
of the trees on the horizon, but the details of two such pictures drawn by the same
person will never be the same, no matter how hard one tries. Capturing the generic
features of a cloud distribution without knowing anything about self-similarity
can be described as our natural instinct. The subsequent sections attempt to
incorporate both ingredients (randomness and kinetics) into the various classical
fractals, namely stochastic fractals and multifractals, in order to understand what role
these two quantities play in the resulting processes.
Contents
Preface v
Symbols xi
2. Fractals 31
2.1 Introduction 31
2.2 Euclidean geometry 33
2.3 Fractals 35
2.3.1 Recursive Cantor set 37
2.3.2 von Koch curve 40
2.3.3 Sierpinski gasket 42
3. Stochastic Fractal 69
3.1 Introduction 69
3.2 A brief description of stochastic process 70
3.3 Dyadic Cantor Set (DCS): Random fractal 71
3.4 Kinetic dyadic Cantor set 73
3.5 Stochastic dyadic Cantor set 77
3.6 Numerical simulation 81
3.7 Stochastic fractal in aggregation with stochastic self-replication 85
3.8 Discussion and summary 95
4. Multifractality 97
4.1 Introduction 97
4.2 The Legendre transformation 99
4.3 Theory of multifractality 101
4.3.0.1 Properties of the mass exponent τ(q) 102
4.3.1 Legendre transformation of τs(q): f (α) spectrum 104
4.3.1.1 Physical significance of α and f(α) 105
4.4 Multifractal formalism in fractal 105
4.4.1 Deterministic multifractal 107
4.5 Cut and paste model on Sierpinski carpet 110
4.6 Stochastic multifractal 115
4.7 Weighted planar stochastic lattice model 115
4.8 Algorithm of the weighted planar stochastic lattice (WPSL) 116
4.9 Geometric properties of WPSL 119
4.9.0.1 Multifractal analysis of stochastic Sierpinski carpet 120
4.9.0.2 Legendre transformation of the mass exponent τs(q): The f (α) spectrum 122
4.10 Multifractal formalism in kinetic square lattice 124
4.10.1 Discussions 126
Chapter 1
Scaling, Scale-invariance and Self-similarity
a_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2}. (1.1)
is a dimensionless quantity. Suppose that l = 1 m and ω = 9.8 cycles/sec,
and hence Π = √9.8 in the SI system of units. Now let us change the
system of units so that the unit of mass is decreased by a factor of
M = 1000, the unit of length is decreased by a factor of L = 100, and
the unit of time is decreased by a factor of T = 1. With this change, the
units of frequency will decrease by a factor of T^{-1} = 1 and the units of
acceleration by a factor of LT^{-2} = 100. Therefore, the numerical value
of l in the new system of units of measurement will be 100 instead of 1,
and that of g will be 980. However, the numerical value of Π will still
remain invariant under the change of units, and hence it is a dimensionless
quantity.
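The unit-change bookkeeping above is easy to verify numerically. The sketch below assumes the dimensionless combination is Π = ω√(l/g) (consistent with Π = √9.8 for the quoted values); the function name `Pi` is ours:

```python
import math

def Pi(omega, l, g):
    """Dimensionless pendulum combination: Pi = omega * sqrt(l / g)."""
    return omega * math.sqrt(l / g)

# SI values from the text: l = 1 m, g = 9.8 m/s^2, omega = 9.8 cycles/sec
p_si = Pi(9.8, 1.0, 9.8)

# shrink the unit of length by L = 100 and the unit of time by T = 1:
# numerical values rescale as l -> l*L, g -> g*L/T**2, omega -> omega/T
L_fac, T_fac = 100, 1
p_new = Pi(9.8 / T_fac, 1.0 * L_fac, 9.8 * L_fac / T_fac**2)

print(p_si, p_new)  # both equal sqrt(9.8): Pi does not feel the units
```

Any other choice of M, L, T leaves `Pi` unchanged in the same way, which is exactly what makes it dimensionless.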
f(a_1, …, a_k, …, a_n) = a_{k+1}^α ⋯ a_n^γ Φ(Π_1, …, Π_k). (1.11)
and so it is true for the governed parameter S, as we can write the dimensional relation [S] ∼ [c^2]. We can therefore define two dimensionless governing parameters
Π = φ(Π1 , Π2 ). (1.16)
Figure 1.1: This figure shows how the area S of one of the triangles of the above figure
(say the largest triangle) varies as the base b is changed.
Figure 1.2: This shows that if we plot S/c^2 vs b/c instead of S vs b, then all three
curves of Fig. 1.1 collapse onto a single master curve. It implies that for a given
numerical value of the ratio b/c, regardless of the size of the triangle, the corresponding
S/c^2 is the same, because triangles that satisfy these conditions are similar and hence
the function φ(b/c) is unique.
Now the question is: what does this data collapse imply? It implies that
if two or more right triangles have one of their acute angles identical,
then the triangles are similar. We shall see later that whenever
we find a data collapse between two different systems of the same
phenomenon, it means that the corresponding systems, or
their underlying mechanisms, are similar. Similarly, if we find that data
collected from the whole system collapse with similarly collected data
from a suitably chosen part of the whole system, then we can conclude
that the part is similar to the whole. On the other hand, if we have a
set of data collected at many different times for a kinetic system and
find they all collapse onto a single curve, then we can say that the same
system at different times is similar. However, the similarity in this case is
of the same system at different times, and hence we may coin it
temporal self-similarity.
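The triangle collapse of Figs. 1.1–1.2 can be demonstrated in a few lines. This is an illustrative sketch (the helper `right_triangle` is ours): triangles of very different hypotenuse c but the same acute angle all land on the same point of the master curve when plotted as S/c² vs b/c:

```python
import math

def right_triangle(c, theta):
    """Right triangle with hypotenuse c and acute angle theta:
    base b = c*cos(theta), height a = c*sin(theta), area S = a*b/2."""
    b = c * math.cos(theta)
    return b, 0.5 * c * math.sin(theta) * b

# three triangles of wildly different sizes but the same acute angle
collapsed = []
for c in (1.0, 10.0, 250.0):
    b, S = right_triangle(c, math.pi / 6)
    collapsed.append((b / c, S / c**2))

print(collapsed)  # one repeated point: the master-curve value phi(b/c)
```

Changing θ moves the point along the master curve; changing c alone never does, which is the data collapse in miniature.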
Proof of the Pythagorean theorem: The area S of the original right
triangle is determined by the size of its hypotenuse c and the acute angle
θ, since S = c^2 φ(θ). Now, by drawing the altitude perpendicular
to the hypotenuse c, we can divide the original triangle into two smaller
right triangles whose hypotenuses are a and b, with respective
areas S1 and S2. The two smaller right triangles also have the
acute angle θ, and hence we can easily show that
S = S1 + S2, (1.22)
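Since each of the three right triangles contains the same acute angle θ, each area carries the same shape function φ(θ), and Eq. (1.22) yields the theorem directly:

```latex
S = c^2\varphi(\theta),\qquad S_1 = a^2\varphi(\theta),\qquad S_2 = b^2\varphi(\theta),
\qquad\text{so}\qquad
c^2\varphi(\theta) = a^2\varphi(\theta) + b^2\varphi(\theta)
\;\Longrightarrow\; a^2 + b^2 = c^2 .
```

Cancelling the common factor φ(θ) gives the Pythagorean theorem without computing any areas explicitly.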
1.4 Similarity
In most cases, before a large and expensive structure or object such
as a ship or an aeroplane is built, effort is devoted to testing
a model system instead of working directly with the real system.
The results of testing on model systems are then used to infer the
various physical characteristics of the real object or structure under
its future working conditions. For successful modelling,
it is necessary to know how to relate the results of testing on
models to the actual manufactured product. Needless to say, that
if such connections are not known or cannot be made, then modelling
is simply a useless pursuit. For the purpose of rational modelling, the
and
b_2^{(m)} = b_2^{(p)} \left( \frac{a_1^{(m)}}{a_1^{(p)}} \right)^{\alpha_2} \left( \frac{a_2^{(m)}}{a_2^{(p)}} \right)^{\beta_2} \left( \frac{a_3^{(m)}}{a_3^{(p)}} \right)^{\gamma_2}, (1.32)
whereas the model parameters a_1^{(m)}, a_2^{(m)}, a_3^{(m)} can be chosen arbitrarily.
The simple definitions and statements presented above describe the
entire content of the theory of similarity: we emphasize that there is
nothing more to it. We will now give a few examples to help us grasp
the idea.
1.5 Self-similarity
The notion of self-similarity will be an underlying theme for the rest
of the book. In a way, the word self-similarity needs no explanation.
Perhaps the best way to perceive its meaning is to
consider an example, and at this stage there is no better example than
a cauliflower. The cauliflower head contains branches or parts which,
when removed and compared with the whole, are found to be very much
the same except scaled down. These isolated branches can again
be decomposed into smaller parts, which again look very similar to the
whole as well as to the branches. Such self-similarity can easily be carried
through for about three to four stages. After that the structures
are too small for further dissection. Of course, from the mathematical
point of view the property of self-similarity may be continued
through infinitely many stages, though in the real world such properties persist for
only a few stages. Self-similarity can also be found on smaller scales.
For instance, snowflakes often display self-similar branching patterns,
and aggregating colloidal particles have growth patterns that are statistically
self-similar. Self-similarity, in fact, is so widespread in nature
that it is exceedingly easy to find. Consequently, the main ideas of
self-similarity are not difficult to grasp. Hopefully, by the end of this
chapter we will have a basic understanding of self-similar
phenomena, and by the end of the whole book we will be
able to appreciate that self-similar phenomena are ubiquitous
in nature: they are virtually everywhere we look. In our endeavour we
need to spend just a few more minutes laying the foundation for that
knowledge. We begin our exploration with three more examples of self-similarity.
Firstly, think of a lone leafless tree standing against a gray winter
sky. Branching patterns are quite familiar to us:
there is a single trunk that rises to a region
from which major branches split off. If we follow any one of these major
branches, we see that it also splits into smaller branches, which
split into even smaller branches, and so on. A unit pattern (in this case
one length splitting into two or more thinner branches) is repeated on
ever-smaller size scales until we reach the tree-top. Having this picture
in mind, we can state a simple generic definition: the fundamental
principle of a self-similar structure is the repetition of a unit pattern
on different size scales. Note that the delicate veins of leaves also have
self-similar branching patterns, as do virtually all root systems. A tree
is therefore the epitome of self-similarity.
Secondly, the decimal number system is probably the oldest
and most useful construction that uses the idea of self-similarity. However,
we use it so much in our daily life that we hardly take time to
appreciate its self-similar properties. Consider a meter stick,
which has marks for decimeters (ten of them make a meter), centimeters
(ten of them make a decimeter, and a hundred make a meter), and millimeters
(ten of them make a centimeter). In a sense, a decimeter together with its
marks looks like a meter with its marks, scaled down by a
factor of 10. This is not an accident; it is in strict correspondence with
the decimal system. When we say 248 mm, for example, we mean 2
decimeters, 4 centimeters, and 8 millimeters. In other words, the position
of the figures determines their place value, exactly as in the decimal
number system. One meter has a thousand millimeters, and when
we have to locate position 248, only a fool would start counting from
f ∼ t^θ. (1.35)
x → λ^z x, t → λt, f → λ^θ f, (1.39)
revealing the self-similar nature of the function f(x, t). One way of
verifying dynamic scaling is to plot the dimensionless variable f/t^θ as
a function of x/t^z for data extracted at various different times. Then,
if all the plots of f vs x obtained at different times collapse onto a single
universal curve, then the systems at different times are said to be
similar and to obey dynamic scaling. Essentially, such systems can be
said to exhibit temporal self-similarity, since the same system is similar
at different times.
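A concrete check of this recipe, as a sketch: the spreading Gaussian of 1D diffusion is a standard function obeying dynamic scaling with z = 1/2 and θ = −1/2 (our choice of example, not from the text). At a fixed value of the scaling variable x/t^z, the rescaled values f/t^θ taken at very different times coincide:

```python
import math

def f(x, t):
    # spreading Gaussian of 1D diffusion (D = 1): a known example of
    # dynamic scaling with exponents z = 1/2 and theta = -1/2
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

theta, z = -0.5, 0.5
xi = 1.3  # fix the scaling variable xi = x / t**z
vals = [f(xi * t**z, t) / t**theta for t in (1.0, 10.0, 1000.0)]
print(vals)  # three snapshots collapse onto a single number Phi(xi)
```

Sweeping `xi` over a range traces out the whole master curve Φ(ξ); any failure of the values to coincide would mean the chosen exponents are wrong.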
or
g(λ) = e^c λ^p. (1.51)
Now from the above equation we have g′(λ) = p e^c λ^{p−1}, and the definition g′(1) = p implies that the integration constant c has the value
zero. Thus
g(λ) ∼ λ^p, (1.52)
which completes the proof.
The power-law type distributions f(x) ∼ x^{−α} are regarded as scale-free
because f(λx)/f(x) depends only on λ, not on x, and hence the
distribution requires no characteristic or typical scale. The idea is that
if the unit of measurement of x is increased (decreased) by a factor of
λ, then the numerical value of f is decreased (increased) by a factor of
g(λ), but the overall shape of the function remains invariant. It is called
scale-free also because it ensures self-similarity in the sense that the
function looks the same at whatever scale we look at it. One immediate
consequence of a scale-invariant function f(r) is that if we know its
value at one point, say at r = r0, and we know the functional form of
g(λ), then the function f(r) is known everywhere. This follows because
any value of r can always be written in the form r = λr0, and
f(r) = f(λr0) = g(λ) f(r0).
The above equation says that the value f(r) at any point is related to
the value f(r0) at a reference point r = r0 by a simple change of
scale. Of course, this change of scale is, in general, not linear.
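Both properties are easy to exhibit numerically; the following sketch uses an arbitrary exponent α = 2.5 of our choosing:

```python
# a power law f(x) = x**-alpha is scale-free: f(lam*x)/f(x) depends on
# lam alone, so rescaling x only rescales f by g(lam) = lam**-alpha
alpha = 2.5
f = lambda x: x ** -alpha

lam = 3.0
ratios = {round(f(lam * x) / f(x), 12) for x in (0.1, 1.0, 7.0, 400.0)}
print(ratios)  # a single value, 3**-2.5, whatever x we probe

# hence one reference value f(r0) plus g(lam) fixes f everywhere:
r0, r = 2.0, 17.0
print(f(r), (r / r0) ** -alpha * f(r0))  # agree to floating precision
```

By contrast, an exponential such as e^{-x} would give a ratio that depends on x, signalling a characteristic scale.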
and the converse statement is also valid. Since the above equation is of
the form of Eq. (1.54), we conclude that Eq. (1.55) is no more general
than Eq. (1.54). Other equivalent forms of Eq. (1.54) that frequently
appear in the literature on scaling laws are
and
f(λ^a x, λ^b y) = λ^p f(x, y). (1.58)
The main point to note here is that there are at least two undetermined
parameters a and b for a generalized homogeneous function. So the
litmus test for a generalized homogeneous function is given by Eq.
(1.54). Say that it is true for any λ, so that we can choose λ^a = 1/x.
Then Eq. (1.54) becomes
N(δ) = L/δ, (1.63)
Combining the above two equations (1.63) and (1.64), one can immediately find that
N(δ/n) = nN(δ). (1.65)
Extending the idea to the case of a plane, and to an object that
occupies space, we can write the generalized relation
N(δ/n) = n^d N(δ), (1.66)
N(δ) ∼ δ^{−d}, (1.67)
in a 1D lattice for simplicity; before each step, the walker flips an
honest coin. If it comes up heads, the walker takes a step forward (or to the
right), and if tails, the walker takes a step backward (or to the
left). The coin is unbiased, so the chances of getting heads or tails
are equal. A random walk (RW) is often compared with a drunkard's walk: the drunkard
is so very drunk that he may move forward or backward with equal
probability, and each step is independent of the steps already
taken. The resulting walk is so irregular that one can predict
nothing with certainty about the next step. Instead, all we can talk
about is the probability of his covering a specific distance in a given
time, and that too only in a statistical sense.
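This statistical character is easy to see in a simulation; the following sketch (helper name `walk` is ours) runs many independent fair ±1 walks and looks only at the ensemble statistics:

```python
import random

def walk(n, rng):
    """Final position of an n-step fair +/-1 random walk started at 0."""
    return sum(rng.choice((-1, 1)) for _ in range(n))

rng = random.Random(0)
n, trials = 100, 5000
finals = [walk(n, rng) for _ in range(trials)]

mean = sum(finals) / trials
rms = (sum(x * x for x in finals) / trials) ** 0.5
print(round(mean, 2), round(rms, 2))
# the mean stays near 0 while the rms spread is near sqrt(n) = 10:
# a single walk is unpredictable, but its statistics are lawful
```

No individual trajectory can be forecast, yet the mean and the √n growth of the rms displacement are perfectly reproducible.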
Another interesting question one may ask in the context of the RW
problem is the following: how long does it take for a random walker to
reach a given point for the first time? This is a well-known problem,
more generally known as the first-passage problem. It is
characterized by the probability f(r, t) that the walker returns for the
first time to the position r at time t [52]. We, however, will focus on
the special case where the walker starts walking from the origin and we
want to know the probability that the walker returns to zero for the
first time after time t. This is also known as the gambler's ruin process.
The statement of the gambler's ruin problem and its relation to the
first-return probability is trivially simple. Say a gambler has i dollars
out of a total n, and the rule is set so that if a coin is flipped and it
turns out heads, then the gambler gains a dollar, or else loses a
dollar. The game continues until the gambler goes broke. However, we
will restrict our discussion to only the first-return probability.
Consider that we have N independent walkers, all initially
at the origin. All of them are set to walk at
the same time, and we record the time each takes to return to the origin.
Equivalently, one could perform the same experiment with a single
walker, let the walker return to the origin N times, and keep a
record of the time taken for each return. The record
of return times will have exactly the same features as
that for N walkers. If one now looks at the record, one will definitely
find no order, which may lead to the conclusion that there cannot exist
any law characterizing the emergent behaviour. However, if the record
is large enough, obtained by letting N → ∞, and if it undergoes a
systematic statistical processing, then some order may be found
even in this seemingly disordered data.
We now discuss the processing procedure to extract the first-return
probability using the records obtained from N independent walks up to
their return to the origin. We first bin the record into bins of equal size, say of
width 0.1. That is, the first bin goes from 0 to 0.1, the second from 0.1
to 0.2, and so forth. We then find what fraction of the total N walks
return to zero within each bin, which is the frequency
of returns within that bin. The plot of these fractions against the bins
is equivalent to plotting a normal histogram of the record, produced
by binning it into bins of equal width 0.1. To check whether the
corresponding distribution reveals a power law or not, it is better to plot
the histogram on a logarithmic scale. Doing exactly this with the current
records gives a straight line, which is the characteristic feature of a
power-law distribution when plotted on a logarithmic scale. Moreover,
we find that the slope of the straight line is exactly 3/2, which is essentially
the exponent of the power law. However, one should also notice that
the right-hand end of the distribution is quite noisy, reflecting scarce
data points near the tail: each bin near the right-hand end has only
a few samples in it, if any at all. Thus the fractional fluctuations
in the bin counts are large, which ultimately appears as a fat tail.
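The whole experiment fits in a short script. This is a sketch, not the book's code: instead of histogram bins we use the equivalent survival fraction P(T > t), which for f(t) ∼ t^{−3/2} falls as t^{−1/2}, so a two-point log-log slope recovers the exponent with less tail noise (the helper names and the step cap are ours):

```python
import math, random

def first_return(rng, cap=10_000):
    """Steps until a fair +/-1 walk first revisits 0 (cap if it hasn't)."""
    pos, t = 0, 0
    while True:
        pos += rng.choice((-1, 1))
        t += 1
        if pos == 0 or t >= cap:
            return t

rng = random.Random(1)
times = [first_return(rng) for _ in range(20_000)]

# if f(t) ~ t**-1.5 then the survival fraction P(T > t) falls as t**-0.5,
# so a two-point log-log slope recovers the first-return exponent
surv = lambda t: sum(1 for T in times if T > t) / len(times)
expo = 1 + math.log(surv(10) / surv(1000)) / math.log(1000 / 10)
print(round(expo, 2))  # close to the predicted 3/2
```

The capped walks sit correctly in the survival count (they certainly had not returned by t = 1000), so the cap does not bias the estimate over the fitted range.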
We can solve the problem analytically. Let Sn represent the position of the walker at the end of n seconds. We say that
a return to the origin has occurred at time t = n if Sn = 0. We note
that this can only occur if n is an even integer. In order to calculate
the probability of a return to the origin at time 2n, we only need to count
the number of paths of length 2n which begin and end at the origin.
The number of such paths is clearly \binom{2n}{n}. We first consider u_{2n},
the unconstrained probability of a return to the origin at time t = 2n,
in which the walk is allowed to return to zero as many times as it likes
before returning there again at time t = 2n. The probability of a return
to the origin at time 2n is therefore given by
u_{2n} = \binom{2n}{n} 2^{−2n}, (1.68)
since each path has probability 2^{−2n}. Note that there are u_{2n} 2^{2n}
paths of length 2n which have endpoints (0, 0) and (2n, 0). We make
one obvious assumption, that u_0 = 1, since the walker starts at zero.
A random walker is said to have a first return to the origin at time
2m if m > 0 and S_{2k} ≠ 0 for all k < m. We define f_{2m} as the first-return
probability at time t = 2m. In analogy with u_{2n}, we can also think of
f_{2m} 2^{2m} as the number of paths of length 2m between the
points (0, 0) and (2m, 0) such that the displacement S_{2n} versus time (i.e.,
n) curve does not touch the horizontal axis except at the end points (0, 0)
and (2m, 0). Using this idea it is easy to establish a relation between the
unconstrained return probability u_{2n} and the first-return probability
f_{2m}. There are u_{2n} 2^{2n} paths of length 2n which have end points (0, 0)
and (2n, 0). The collection of such paths can be partitioned into n sets,
Figure 1.3: First-return probability, ln f(t) vs ln t, with 25 independent
realizations superimposed. The straight line with slope equal to 1.5 clearly
confirms the analytical prediction.
depending upon the time of the first return to the origin. A
path in this collection which has a first return to the origin at time
2m consists of an initial segment from (0, 0) to (2m, 0), in which the
path does not touch the time axis, and a terminal segment from
(2m, 0) to (2n, 0), with no further restrictions on this segment. Thus,
the number of paths in the collection which have a first return to the
origin at time 2m is
f_{2m} 2^{2m} × u_{2n−2m} 2^{2n−2m} = f_{2m} u_{2n−2m} 2^{2n}. (1.69)
If we sum the above equation over m we obtain the total number of paths
u_{2n} 2^{2n} of length 2n, and hence we have
u_{2n} 2^{2n} = \sum_{m=1}^{n} f_{2m} u_{2n−2m} 2^{2n}, (1.70)
Now, if we can find a closed-form solution for the function U(z), we will
also be able to find a closed-form solution for F(z). The function U(z),
however, is quite easy to calculate. The probability u_t that the walker
is at position zero after t steps can be obtained from
P(m, N) = \frac{N!}{\left(\frac{N+m}{2}\right)! \left(\frac{N-m}{2}\right)!} \left(\frac{1}{2}\right)^N. (1.75)
Therefore, we have
F(z) = 1 − \sqrt{1 − z}. (1.78)
We find it worthwhile to note that
F′(z) = \frac{U(z)}{2}, (1.79)
• Zipf's law: In the 1930s, Zipf (a Harvard linguistics professor)
found that the frequency of occurrence f of some event, as a function of its rank r
(where the rank is determined by the extent of occurrence, i.e., the frequency f),
follows a power law f ∼ r^{−k} with exponent close to unity. In particular,
he found that the frequency of words in English texts follows such a
power law. Similar distributions are seen for words in other languages
too.
• Power laws have also been observed in earthquakes, as noted by
Gutenberg and Richter in 1954. According to Gutenberg and Richter,
the number N(E) of earthquakes that release a certain amount
of energy E has the form N(E) ∼ E^{−α} with α = 1.5, independent
of the geographic region. Such a power law implies that it is meaningless
to define a typical or average strength of an earthquake, which would
correspond to the peak of a Gaussian distribution. It also implies that
the same mechanism is responsible for earthquakes of all sizes, including
the largest ones. The power-law distribution suggests that although
large earthquakes are rare, they are expected to occur
occasionally and do not require any special mechanism. This may oppose
our physical intuition, in the sense that we are used to thinking that small
disturbances lead to small consequences and that some special mechanism
is required to produce large effects. It has been found that there exists a
power-law relation between the amplitude (strength) and the frequency
of occurrence.
In fact, power-law distributions are found in an extraordinarily diverse
range of phenomena, including solar flares, computer files and wars,
the numbers of papers scientists write, the number of citations received
by papers, the number of hits on web pages, the sales of books, music
recordings and almost every other branded commodity, and the numbers of
species in biological taxa, just to name a few.
Chapter 2
Fractals
2.1 Introduction
Benoit B. Mandelbrot (20 November 1924–14 October 2010), born in
Poland, brought up in France, and later living and working in
America, added a new word to the scientific vocabulary through his creative
work, a word which he himself coined: fractal [8, 13]. He was Sterling
Professor of Mathematical Sciences at Yale University and IBM Fellow
Emeritus in Physics at the IBM T.J. Watson Research Center. He was
a maverick mathematician who conceived, enriched and at the same
time popularized the idea of fractals almost single-handedly. With the
idea of fractals he revolutionized the notion of geometry and
generated widespread interest in almost every branch of science. He
presented his new concept through his monumental book 'The Fractal
Geometry of Nature' in an awe-inspiring way, and ever since then it has
remained the standard reference for both the beginner and the
researcher. The advent in recent years of inexpensive computer power
and graphics has led to the study of nontraditional geometric objects
in many fields of science, and the idea of fractals has been used to describe
them. In a sense, the idea of fractals has brought many seemingly
unrelated subjects under one umbrella. Mandelbrot himself wrote
a large number of scientific papers dealing with the fractal geometry
of things as diverse as price changes and salary distributions, turbulence,
the statistics of errors in telephone messages, word frequencies in
written texts, and aggregation and fragmentation processes, just to
name a few. To make his technical papers more accessible to the scientific
community, he later wrote two more books on fractals, which have
inspired many to use fractal geometry in their own fields of research.
In fact, the history of describing natural objects using geometry
is as old as the advent of science itself. Traditionally, lines, squares,
rectangles, circles, spheres, etc. have been the basis of our intuitive
understanding of geometry. However, nature is not restricted to
such Euclidean objects, which are typically characterized only by integer
dimensions. Yet we confined ourselves for so long to integer
dimensions alone. Even now, in the early stages of our education we learn
that objects which have only length are one dimensional, objects with
length and width are two dimensional, and those with length, width and
height are three dimensional. Did we ever question why we jump to
only integer dimensions? The answer, to the best of our knowledge, is:
never. We never questioned whether there exist objects with non-integer
dimensions. However, one person did, and he was none other than Mandelbrot.
He was bewildered by such thoughts for many years.
He realized that nature is not restricted to Euclidean or integer-dimensional
space. Instead, most of the natural objects we see around
us are so complex in shape that a conventional Euclidean or integer
dimension is not sufficient to describe them. The idea of fractal geometry
appears to be indispensable for characterizing such complex objects, at
least quantitatively.
The idea of fractals in fact enables us to see a certain symmetry and
order even in an otherwise seemingly disordered and complex system.
The importance of the discovery of fractals can hardly be exaggerated.
Since their discovery there has been a surge of research activity using
this powerful concept in almost every scientific discipline
to gain deep insights into many unresolved problems. Yet there is no
neat and complete definition of a fractal. Possibly the simplest way to
define a fractal is as an object which appears self-similar under varying
degrees of magnification, i.e., one loosely associates a fractal with a
shape made of parts similar to the whole in some way. There is nothing
new in this; in fact, all Euclidean objects also possess this self-similar
property. What is new, though, is the following: objects which are now
known as fractals were previously regarded as geometric monsters, and we
could not appreciate the hidden symmetry they possess. Thanks to the
idea of fractals, we can now appreciate them and also quantify them
by a non-integer exponent called the fractal dimension. The
numerical value of the fractal dimension can characterize the structure
and the extent of stringiness or the degree of ramification. This definition
immediately confirms the existence of scale invariance, that is, objects
look the same on different scales of observation. To understand fractals,
their physical origin, and how they appear in nature, we need to be able
M = L^{d_E}. (2.1)

ρ ∼ L^0 = const. (2.3)
2.3 Fractals
Let us ask: How long is the coast of the Bay of Bengal, or the coast of
Britain? Finding an answer to this question lies in appreciating the fact
that there are curves twisted so much that their length depends on the
size of the yardstick we choose to measure them, and there are surfaces
that fold so wildly that they fill space, whose size likewise depends on
the size of the yardstick. Consider that we are to measure the size of
the coast of the Bay of Bengal, the longest beach in the world. Obviously,
the measured size will be larger if we use a yardstick 1.0 cm long than if
we use one 1.0 m long, since in the former case we can capture finer
details of the twists of the coast than in the latter. Likewise, there are
curves, surfaces, and volumes in nature so complex and wild in character
that they used to be known as geometric monsters. With the advent of
fractal geometry, however, they are no longer called so. The idea of
fractal geometry vis-a-vis the fractal dimension provides a systematic
measure of the degree of inhomogeneity in the structure. The two key
ingredients for any geometric structure to be termed a fractal are (i)
self-similarity, at least in the statistical sense, and (ii) a dimension of
the object less than the dimension of the space in which the object is
embedded.
Prior to Mandelbrot, we always took it for granted that d can only
assume integer values, and the existence of objects having non-integer
geometric dimension had never come under scrutiny. Now, one may ask
the following:

• Are there objects which result in non-integer exponents of the
mass-length relation, or of the power-law relation between the
number N and the yardstick size δ?

• If yes, then how do they differ from those which have integer
exponents?
Assume that we have infinitely many dots and we are to arrange
them to constitute a Euclidean object like a line or a plane. What do
we really have to ensure to that end? The dots must be embedded in a
Euclidean space, making sure that they are uniformly distributed
following a regular pattern. It does not matter whether the dots are
closely or widely spaced as long as they are arranged in a regular
array. The constant density keeps its signature through their seemingly
apparent self-similar appearance. In such cases the relation between the
number N and the yardstick δ always exhibits a power law with integer
exponent coinciding with the dimension of the embedding space. Can
we arrange the dots in space so that the power-law relation between
N and the corresponding yardstick δ still prevails, where neither the
density is constant nor the self-similarity as apparent as it was in the
Euclidean case? It was Mandelbrot who first raised this question, which
no one had even thought of before. Indeed, we shall see several such
cases in the sections below where dots can be arranged in Euclidean
space to constitute an object such that the power-law relation between
N and the corresponding yardstick δ still prevails, but with a non-integer
exponent. Below we discuss a few simple examples where the relation
between N(δ) and δ does exhibit an inverse power law with a non-integer
exponent.
The geometrical structures and properties of irregular objects were
addressed by Benoit B. Mandelbrot in 1975, who coined the term
"fractal" from the Latin adjective fractus, derived from the verb
frangere, meaning to break. Mandelbrot defined a fractal as a set whose
Hausdorff dimension is strictly greater than its topological dimension.
This definition proved to be unsatisfactory in the sense that it excludes
a number of sets that ought to be regarded as fractals. In fact, there
is no well-defined mathematical definition characterizing a fractal.
However, Falconer suggested the following, which guarantees a set to
be a fractal [57].
Figure 2.1: Construction of the triadic Cantor set. The initiator is the unit interval
[0, 1]. The generator removes the open middle third.
We begin with the simplest form of the Cantor set, namely the triadic Cantor set.
Consider a line segment of unit length described by the closed interval
C_0 = [0, 1], which is known as the initiator. We start generating the
set by dividing the initiator into three equal parts and removing the
open middle third. We now have two closed intervals [0, 1/3] and [2/3, 1]
in step 1, each of length 1/3. Thus, C_1 = [0, 1/3] ∪ [2/3, 1]. Now remove
the middle thirds from the remaining two intervals, leaving four closed
intervals of length 1/9, namely [0, 1/9], [2/9, 1/3], [2/3, 7/9] and [8/9, 1].
This gives C_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1]. Now remove
the middle thirds from each of the remaining four intervals to create
eight smaller closed intervals. The pattern of the future steps should
now be clear. The triadic Cantor set is the limit C of the decreasing
sequence of sets C_0 ⊇ C_1 ⊇ C_2 ⊇ ···, so we define the limit to be
their intersection,

C = ⋂_{n∈ℕ} C_n.
So the total length of the Cantor set is zero. This is quite surprising,
as it implies that there is hardly anything left in the Cantor set; but as
we will soon see, there are tons of points in the Cantor set. Now let us
check the leftovers. To this end, we look into the k-th moment M_k of
the remaining intervals at the n-th step of the construction process. The
moment M_k is

M_k = Σ_{i=1}^{2^n} x_i^k. (2.5)

Note that each of the remaining intervals at the n-th step is of equal
size x_i = 3^{-n}, and hence we can write

M_k = 2^n 3^{-nk} = e^{n ln 2 − kn ln 3}. (2.6)

It means that if we choose k = ln 2/ln 3 then we find

M_{ln 2/ln 3} = 1, (2.7)

regardless of the value of n. That is, this result holds even in the limit
n → ∞. We can thus conclude that the set is not empty, which is one
more surprising feature of the Cantor set. We will now attempt to find
the significance of the value k = ln 2/ln 3.
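The conservation expressed by Eq. (2.7) is easy to verify numerically. The sketch below (function names are ours, purely illustrative) evaluates M_k = 2^n 3^{-nk} across several generations:

```python
import math

def cantor_moment(n, k):
    """k-th moment of the 2**n intervals, each of size 3**-n,
    that remain at step n of the triadic Cantor construction."""
    return (2 ** n) * (3.0 ** -n) ** k

k_star = math.log(2) / math.log(3)  # the special index ln 2 / ln 3

for n in (1, 5, 10, 20):
    # M_0 counts the intervals, M_1 is the total remaining length,
    # and M_{k*} stays pinned at 1 for every n.
    print(n, cantor_moment(n, 0), cantor_moment(n, 1), cantor_moment(n, k_star))
```

M_1 = (2/3)^n shrinks to zero (the total length vanishes) while M_{k*} remains equal to 1, mirroring Eqs. (2.6) and (2.7).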
Let us observe carefully which points form the Cantor set. We began
constructing the Cantor set with the initiator [0, 1], and the endpoints 0
and 1 belong to all of the subsequent sets C_k, and therefore belong to the
intersection C. Taking all the endpoints of all the intervals of all the
approximations C_n, we get an infinite set of points, all belonging to C.
curve is as follows: We divide the initiator into three equal pieces. The
middle third is then replaced with two equal segments, both one-third
in length, which would form an equilateral triangle with the removed
middle third (step k = 1). This step is regarded as the generator of the
Koch curve. At the next step (k = 2), the middle third is removed from
each of the four remaining line segments and each of them is replaced by
the generator. The resulting curve has N = 4^2 line segments of size
δ = 1/3^2, and the length of the prefractal curve is L = (4/3)^2. This process
is repeated ad infinitum to produce the Koch curve. Once again the self-
similarity of the set is evident: each sub-segment is an exact replica of
the original curve. As the number of generations increases, the length
of the curve clearly diverges. By applying a reduced generator to all
segments of a generation of the curve, a new generation is obtained; such
a curve is called a pre-fractal. Clearly, in the n-th step of the generation we
have N = 4^n line segments of size δ = 3^{-n}. The k-th moment M_k
of the intervals is therefore M_k = 4^n 3^{-nk} = e^{n ln 4 − nk ln 3}. It
implies that the k-th moment is always equal to the size of the initiator,
regardless of the value of n, if we choose k = ln 4/ln 3. We will soon see
that this is once again the fractal dimension of the Koch curve.
Let us now follow the detailed calculation of how the Hausdorff-
Besicovitch dimension D can be obtained in the case of the triadic Koch
curve. The length of the n-th prefractal is given by

L(δ) = (4/3)^n. (2.8)

The length of each of the small line segments at the n-th step is

δ = 1/3^n. (2.9)

Note that the generation number n can be expressed as

n = −ln δ/ln 3. (2.10)

We then find that the number of segments is N(n) = 4^n, and using Eq.
(2.10) we find ln N(δ) = −(ln 4/ln 3) ln δ. We thus immediately find that
N(δ) exhibits the power law

N(δ) ∼ δ^{−ln 4/ln 3}. (2.11)

We see once again that the exponent of the inverse power-law relation
between N(δ) and δ is a non-integer. Note that the Koch curve is
embedded in a plane, and hence the dimension of the embedding space
is d = 2, which is higher than its Hausdorff-Besicovitch dimension
ln 4/ln 3. It implies that the Koch curve is a fractal with fractal dimension
d_f = ln 4/ln 3.
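The same bookkeeping can be checked in a few lines; a minimal sketch (names illustrative) recovering ln 4/ln 3 from the counts N = 4^n and δ = 3^{-n}:

```python
import math

def koch_step(n):
    """Segment count N and segment length delta at step n of the
    triadic Koch construction: N = 4**n, delta = 3**-n."""
    return 4 ** n, 3.0 ** -n

for n in (2, 5, 10):
    N, delta = koch_step(n)
    # d = ln N / (-ln delta) recovers ln 4 / ln 3 at every step
    print(n, math.log(N) / -math.log(delta))

print(math.log(4) / math.log(3))  # the exact dimension, ~1.26186
```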
Like the Cantor set, the Koch curve too can be constructed by aggre-
gation of intervals of unit size L(0) = 1 and unit mass to find the
mass-length relation. In step one, we assemble three intervals of
unit size and unit mass next to each other. We replace the middle one
by an equilateral triangle of sides L(1) = 1 and delete the base. This
is known as the generator, which contains four line segments; hence the
mass is M = 4 and the linear size of the curve is L = 3. In step two we
place three line segments of length L(2) = 3 next to each other and replace
the middle one by an equilateral triangle, deleting the base again. Each
of these line segments is then replaced by the generator. The system
in step two thus has a mass M = 4^2 and a linear size L = 3^2. In step
three we again place three line segments of length L(3) = 9 next to each
other, replace the middle one by an equilateral triangle, and delete the
base. Each line segment is then replaced by the generator to give M = 4^3
and linear size L = 3^3. We continue the process ad infinitum. In the n-th
step the system has mass M = 4^n and linear size L = 3^n. Like before,
we can eliminate n in favour of L to obtain

M ∼ L^{ln 4/ln 3}. (2.12)
The line segments that make up the boundary of one of the triangles
of S_k remain in all the later approximations S_n, n ≥ k. So the set S
contains at least all of these line segments. In S_n there are 3^n triangles,
each having 3 sides of length 2^{-n}, so the total length of S is at least
3^n · 3 · 2^{-n}. This goes to ∞ as n → ∞, so it makes sense to say that
the total length of S is infinite.

At the n-th step there are N = 3^n triangles of side δ = 2^{-n}.
Eliminating n in favour of δ, we find N ∼ δ^{−ln 3/ln 2}. One can also show that
M ∼ L^{ln 3/ln 2} and M_{ln 3/ln 2} = 1. Likewise, in the case of the
Sierpinski carpet the initiator is assumed to be a square instead of a
triangle, and the generator divides it into b^2 equal-sized squares and
removes one of them, say the top left if b = 2 and the middle square if
b = 3. This action is then applied ad infinitum to all the available squares
in the subsequent steps. That is, in each application of the generator every
surviving square is divided into b^2 smaller squares and one is removed in
a certain prescribed fashion. The resulting fractal is called the Sierpinski
carpet. In the n-th generation the number of squares is N = (b^2 − 1)^n,
each of equal size with sides δ = (1/b)^n. According to the definition of
the Hausdorff-Besicovitch dimension we use δ as the yardstick to measure
the set created in the n-th generation and find that the number N scales
with δ as

N(δ) ∼ δ^{−ln(b^2 − 1)/ln b}. (2.13)

Following the same procedure we can also obtain a similar relation for
the Sierpinski gasket,

N(δ) ∼ δ^{−ln 3/ln 2}. (2.14)

We thus find that the dimension of the Sierpinski gasket or the Sierpinski
carpet is less than the dimension of their embedding space d = 2, and in
both cases the exponent is non-integer. The conservation law

M_{d_f} = 1, (2.15)

is obeyed here as well, with d_f = ln 3/ln 2 for the Sierpinski gasket and
d_f = ln(b^2 − 1)/ln b for the Sierpinski carpet (for b = 2 this is again
ln 3/ln 2). So far we have looked at constructions of strictly deterministic
fractals embedded in a line (the Cantor set) and in the plane (the Koch
curve and the Sierpinski gasket and carpet). We can also construct
fractals embedded in 3d space, such as the Menger sponge.
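All the deterministic fractals met so far share the same arithmetic: a set made of N copies of itself, each scaled down by a factor s, has dimension d_f = ln N/ln s. A small summary can be generated as a sketch (the Menger sponge keeps 20 of its 27 subcubes):

```python
import math

def similarity_dimension(copies, scale):
    """d_f = ln N / ln s for N self-similar copies at reduction 1/s."""
    return math.log(copies) / math.log(scale)

examples = {
    "triadic Cantor set": (2, 3),    # 2 pieces, each scaled by 1/3
    "Koch curve":         (4, 3),
    "Sierpinski gasket":  (3, 2),
    "Sierpinski carpet":  (8, 3),    # b = 3: 9 squares, middle removed
    "Menger sponge":      (20, 3),   # 27 subcubes, 7 removed
}

for name, (n_copies, s) in examples.items():
    print(f"{name:>20s}: d_f = {similarity_dimension(n_copies, s):.4f}")
```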
The aforementioned attractors, the von Koch curve and the Sierpinski
gasket, have infinite length but zero area. Regarded as 1-dimensional
objects they are too big; regarded as 2-dimensional objects they are too
small. To study sets such as the Cantor set, the von Koch curve and the
Sierpinski gasket more carefully, we need the notion of a distance
function satisfying the following:

(i) The distance is zero if and only if the two objects lie at the same position.

(ii) The distance is symmetric with respect to the positions of the objects,
that is, the distance between two points is the same whichever point
it is measured from.

(iii) The distance between two points x, y cannot exceed the sum of the
distances from x to an intermediate point z and from z to y.
Example 2.1 The set of all real numbers R together with d(x, y) = |x − y|
is a metric space, denoted (R, |.|), where

|x| = x if x > 0, −x if x < 0, 0 if x = 0.

Proof: The range of the modulus function |.| is [0, ∞); hence d(x, y) ≥ 0
for all x, y ∈ R.

(i) |x − y| = 0 ⇐⇒ x = y.

(ii) |x − y| = |−(x − y)| = |y − x|.

(iii) Observe that |x| ≥ ±x for all x ∈ R. Using this inequality it is
easy to derive the triangle inequality for |.|:

|x − y| = |x − z + z − y| ≤ |x − z| + |z − y|.
Example 2.4 Let C[a, b] be the set of all continuous real-valued functions
defined on the closed interval [a, b]. Consider the function d∞ : C[a, b] ×
C[a, b] → [0, ∞) defined by

d∞(f, g) = max_{x∈[a,b]} |f(x) − g(x)|.

This function d∞ defines a metric on the set C[a, b], called the uniform
metric. The metric d∞ measures the distance from f to g as the maximum
of the vertical distances from the point (x, f(x)) to (x, g(x)) on the graphs
of f and g.
The sets

B(x, r) = {y ∈ X : d(x, y) < r}

and

B[x, r] = {y ∈ X : d(x, y) ≤ r}

are respectively called the open and closed ball centred at x with radius r
with respect to the metric d.
Proof: Let (x_n) converge to x in a metric space (X, d). Given ε > 0,
we can choose n_0 ∈ N such that d(x_n, x) < ε/2 for n ≥ n_0. Then, if
n, m ≥ n_0,

d(x_n, x_m) ≤ d(x_n, x) + d(x, x_m) < ε/2 + ε/2 = ε.
Proposition 2.1 Let {x_n}_{n=1}^∞ be a sequence in a metric space (X, d)
with d(x_n, x_{n+1}) < 1/2^n for all n. Then {x_n}_{n=1}^∞ is a Cauchy
sequence.

Proof: Given ε > 0, choose n_0 ∈ N such that 1/2^{n_0 − 1} < ε. For
n > m ≥ n_0 we have

d(x_m, x_n) ≤ Σ_{k=m}^{n−1} d(x_k, x_{k+1}) < Σ_{k=m}^{n−1} 1/2^k
< Σ_{k=m}^{∞} 1/2^k = 1/2^{m−1} ≤ 1/2^{n_0 − 1} < ε.

Hence d(x_m, x_n) < ε for all n > m ≥ n_0. Therefore {x_n}_{n=1}^∞ is a
Cauchy sequence.
Definition 2.8 A metric space (X, d) is said to be complete if every
Cauchy sequence in X converges to an element in X.
compact, there exist subsequences (a_{n_k}) of (a_n) and (b_{n_k}) of (b_n)
such that lim_{k→∞} a_{n_k} = a_0 and lim_{k→∞} b_{n_k} = b_0. Therefore,
Proof: Let {x_{n_k}} be a Cauchy sequence in X such that x_{n_k} ∈ A_{n_k}
for all k. For each n between n_{k−1} and n_k choose x̃_n ∈ A_n such that
d(x_{n_k}, A_n) = d(x_{n_k}, x̃_n). Then we get

Since x_{n_k} ∈ A_{n_k}, we have d(x_{n_k}, x̃_{n_k}) = d(x_{n_k}, A_{n_k}) = 0.
This gives x̃_{n_k} = x_{n_k} for all k.

Given that {x_{n_k}}_{k=1}^∞ is a Cauchy sequence in X, for a given ε > 0
there exists a positive integer N_0 such that d(x_{n_k}, x_{n_j}) < ε/3 for
all k, j ≥ N_0.
Since {A_n}_{n=1}^∞ is Cauchy in (H(X), H_d), there exists N_1 ≥ n_{N_0}
such that H_d(A_n, A_m) < ε/3 for all n, m ≥ N_1. If n, m ≥ N_1, then there
exist integers j, k ≥ N_0 such that n_{k−1} < n ≤ n_k and n_{j−1} < m ≤ n_j.
Then

Hence, {x̃_n}_{n=1}^∞ is a Cauchy sequence in X such that x̃_n ∈ A_n for
all n and x̃_{n_k} = x_{n_k} for all k.
Theorem 2.8 Let {A_n}_{n=1}^∞ be a sequence in (H(X), H_d) and let A be
the set of all points x ∈ X such that there is a sequence {x_n}_{n=1}^∞ that
converges to x and satisfies x_n ∈ A_n for all n. If {A_n}_{n=1}^∞ is a Cauchy
sequence, then the set A is closed and nonempty.

Proof: Let {A_n}_{n=1}^∞ be a Cauchy sequence in H(X) and define A to be
the set of all points x in X such that there is a sequence {x_n} that converges
to x and satisfies x_n ∈ A_n for all n. To prove the completeness of H(X),
we need to show that {A_n} converges to A and that A is a member of H(X).
By Theorem 2.8, the set A is a nonempty closed set. It follows that A is
complete, since A is a closed subset of the complete metric space (X, d). Let
ε > 0. Then there exists a positive integer N such that H_d(A_n, A_m) < ε
for all m, n ≥ N, since {A_n} is a Cauchy sequence. Further, Theorem 2.6
gives A_m ⊆ A_n + ε for all m > n ≥ N. Let a ∈ A and fix n ≥ N. By the
definition of A, there exists a sequence {x_k} such that x_k ∈ A_k for all k
and {x_k} converges to a. Theorem 2.5 gives that A_n + ε is closed.
Since x_k ∈ A_k ⊆ A_n + ε for each k > n, it follows that a ∈ A_n + ε. We
took a to be an arbitrary element of A and showed that a ∈ A_n + ε. Hence,

A ⊆ A_n + ε. (2.20)

Since d(y, x_{n_k}) ≤ ε for all k, it follows that d(y, a) ≤ ε and hence
y ∈ A + ε. Thus, there exists N such that A_n ⊆ A + ε. Moreover,
from equation (2.20), A ⊆ A_n + ε for all n ≥ N. Hence, by Theorem 2.6,
H_d(A_n, A) < ε for all n ≥ N. This shows that {A_n} converges to A. Thus,
(H(X), H_d) is complete.
d(A ∪ B, C ∪ D) = max{d(A, C ∪ D), d(B, C ∪ D)}
≤ max{d(A, C), d(B, D)} ≤ max{H_d(A, C), H_d(B, D)},

and similarly

d(C ∪ D, A ∪ B) = max{d(C, A ∪ B), d(D, A ∪ B)}
≤ max{d(C, A), d(D, B)} ≤ max{H_d(A, C), H_d(B, D)}.

Hence H_d(A ∪ B, C ∪ D) ≤ max{H_d(A, C), H_d(B, D)}.
(Figure: behaviour of the contractions f1, f2 and f3 on S0, panels (a)-(c).)
All the above contractions have contraction factor 1/2. By Theorem 2.13,
the resulting fractal is called the Sierpinski gasket, as depicted in Fig. 2.8.
Example 2.9 (Sierpinski carpet) Let X be the unit square in R^2 with
the vertices A = (0, 0), B = (1, 0), C = (0, 1) and D = (1, 1), and consider
the IFS on X consisting of the following eight contractions:

f1(x, y) = (x/3, y/3);          f2(x, y) = (x/3, y/3 + 1/3);
f3(x, y) = (x/3, y/3 + 2/3);    f4(x, y) = (x/3 + 1/3, y/3);
f5(x, y) = (x/3 + 1/3, y/3 + 2/3);  f6(x, y) = (x/3 + 2/3, y/3);
f7(x, y) = (x/3 + 2/3, y/3 + 1/3);  f8(x, y) = (x/3 + 2/3, y/3 + 2/3).

All the above contractions have contraction factor 1/3. By Theorem 2.13,
the resulting fractal is called the Sierpinski carpet, as depicted in Fig. 2.8.
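The attractor of such an IFS can be approximated by random iteration (the so-called chaos game): starting from an arbitrary point, repeatedly apply one of the contractions chosen at random and record the orbit. A minimal sketch using the eight maps above (plotting left out; names are ours):

```python
import random

# offsets (a, b) of the eight contractions (x, y) -> (x/3 + a, y/3 + b)
OFFSETS = [(0, 0), (0, 1/3), (0, 2/3), (1/3, 0),
           (1/3, 2/3), (2/3, 0), (2/3, 1/3), (2/3, 2/3)]

def chaos_game(n_points, seed=0):
    """Random iteration over the carpet IFS; the first 100 points
    are discarded as a transient before the orbit settles on the
    attractor."""
    rng = random.Random(seed)
    x, y = rng.random(), rng.random()
    points = []
    for i in range(n_points + 100):
        a, b = rng.choice(OFFSETS)
        x, y = x / 3 + a, y / 3 + b
        if i >= 100:
            points.append((x, y))
    return points

pts = chaos_game(10_000)
# no orbit point ever lands strictly inside the removed middle square
assert not any(1/3 < x < 2/3 and 1/3 < y < 2/3 for x, y in pts)
```

Since no pair of offsets equals (1/3, 1/3), every iterate avoids the open middle ninth of the unit square, which is exactly the first deleted square of the carpet.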
Example 2.10 (von Koch curve) Let X = [−1, 1] × {0} ⊆ [−1, 1]^2 and
consider the IFS on X consisting of the following four contractions:

f1(x, y) = (x/3 − 2/3, y/3);
f2(x, y) = (x/6 − (√3/6) y − 1/6, (√3/6) x + y/6 + √3/6);
f3(x, y) = (x/6 + (√3/6) y + 1/6, −(√3/6) x + y/6 + √3/6);
f4(x, y) = (x/3 + 2/3, y/3).

All the above contractions have contraction factor 1/3. By Theorem 2.13,
the resulting fractal is called the von Koch curve, as shown in Fig. 2.3.2.
The von Koch curve is an example of a continuous curve which is nowhere
differentiable. It is also a curve of infinite length.
Example 2.11 (Dragon curve) Let X be the line segment joining the two
points (0, 0) and (1, 0), and consider the IFS on X consisting of the following
two contractions:

f1(x, y) = (x/2 − y/2, x/2 + y/2);
f2(x, y) = (−x/2 − y/2 + 1, x/2 − y/2).

All the above contractions have contraction factor 1/√2. By Theorem
2.13, the resulting fractal is called the Dragon curve, as shown in Fig. 2.9.
Example 2.12 (Fractal leaves) Let X be a subset of R^2. The following
IFS on X consists of the four contractions that generate the fern leaf:

f1(x, y) = (0, (4/25) y);
f2(x, y) = ((17/20) x + (1/25) y, −(1/25) x + (17/20) y + 4/25);
f3(x, y) = ((1/2) x − (13/50) y, −(23/100) x + (11/50) y + 4/25);
f4(x, y) = (−(3/20) x + (7/25) y, (13/50) x + (6/25) y + 11/25).

The following IFS on X consists of the four contractions that generate the
maple leaf:

f1(x, y) = ((49/100) x + (1/100) y + 1/4, (31/100) y − 1/50);
f2(x, y) = ((27/100) x + (13/25) y, −(2/5) x + (9/25) y + 14/25);
f3(x, y) = ((9/50) x − (73/100) y + 22/55, (1/2) x + (13/50) y + 2/25);
f4(x, y) = ((1/25) x − (1/100) y + 13/25, (1/2) x + 8/25).
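The leaf IFSs can be rendered by the same random-iteration idea. The sketch below uses the fern coefficients as printed above; the selection weights are an assumption on our part (the text does not specify them), chosen so that the heavily used stem-to-leaf map dominates:

```python
import random

# fern contractions, each row (a, b, c, d, e, f) meaning
# (x, y) -> (a*x + b*y + e, c*x + d*y + f); coefficients as in Example 2.12
FERN = [
    (0.0,    0.0,     0.0,     4/25,  0.0, 0.0),     # f1
    (17/20,  1/25,   -1/25,   17/20,  0.0, 4/25),    # f2
    (1/2,   -13/50,  -23/100, 11/50,  0.0, 4/25),    # f3
    (-3/20,  7/25,    13/50,   6/25,  0.0, 11/25),   # f4
]
WEIGHTS = [1, 85, 7, 7]   # assumed weights, not from the text

def iterate_ifs(maps, weights, n_points, seed=0):
    """Chaos game for an affine IFS given as (a, b, c, d, e, f) tuples."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        a, b, c, d, e, f = rng.choices(maps, weights=weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

leaf = iterate_ifs(FERN, WEIGHTS, 5000)
```

Because every map is a contraction, the orbit remains bounded; scattering the points (x, y) gives an approximation of the attractor.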
Stochastic Fractal
3.1 Introduction
All the examples discussed in the previous chapter are far removed
from what we have around us in nature. Nature loves randomness, and
the natural objects which we see around us evolve with time. The
apparently complex look of most natural objects does not mean that
nature favours complexity; rather, the opposite is true. Often the
inherent, basic rule is trivially simple; it is in fact the randomness
and the repetition of the same simple rule over and over again that
make the object look complex. Of course, natural fractals cannot be
strictly self-similar; rather, they are statistically self-similar [8, 13, 38].
For instance, if a child is shown a picture of a sky full of clouds
distributed all over, then even the child can capture the generic feature
of the cloud distribution and use it in the future. If the same child
is later asked to draw the picture of the clouds without looking at it, the
child may well do it. The two pictures will of course never be the same,
but they will be similar, depending on how well the child can draw.
Similarly, we can all draw a curve describing the tips of the trees on the
horizon, but the details of two such pictures drawn by the same person
will never be the same, no matter how hard one tries. Capturing the
generic feature of the cloud distribution without having to know anything
about self-similarity can be described as our natural instinct. In the
following sections we attempt to incorporate both ingredients (randomness
and kinetics) into the various classical fractals in order to learn what role
these two quantities play in the resulting processes [29, 35, 40, 42, 54, 55].
where the integral extends over the whole range. The probability that
x has a value between x and x + dx is P(x)dx.

Once a stochastic variable x has been defined, an infinity of other
stochastic variables can be derived from it. For instance, any quantity
y = f(x) obtained by some mapping f is again a stochastic variable.
Such a quantity y could also be a function of an additional variable t,
i.e.,

y(t) = f(x, t), (3.3)

where t is the time; y(t) then represents a realization of the process.
The function y(t) describes a stochastic process if t stands for time
[53, 6]. Thus a stochastic function is simply a function of two variables,
one of which is the time t and the other a stochastic variable x as defined
above. In other words, a system which evolves probabilistically in time,
i.e., in which there exists a certain time-dependent random variable x(t),
is regarded as a stochastic process (refer to [53, 6]).
and the other half will have only one interval, making the average
number of intervals 1 + p, where p = 1/2 in this case. In the
next step, the generator is applied to each of the available (1 + p) sub-
intervals to divide them into two equal parts, and the right half of each
is removed with probability (1 − p). The system will then have on
average (1 + p)^2 intervals of size 1/4, since (1 − p)(1 + p) intervals of
size 1/4 are removed on average. The process is then continued over and
over again by applying the generator to all the available intervals at each
step recursively. Like its definition, finding the fractal dimension of the
DCS problem is trivially simple. According to the construction of the
DCS process there are N = (1 + p)^n intervals in the n-th generation,
each of size δ = 2^{-n}, which is hence also the mean interval size. As for
the recursive triadic Cantor set, we can construct the k-th moment M_k
of the interval sizes at the n-th step of the DCS construction process too,
and find that here too it is a conserved quantity.
Once again the most convenient yardstick to measure the size of the
set in the n-th step is the mean interval size δ = 2^{-n}, which coincides
with the individual size of the intervals in the n-th step. Expressing N in
favour of δ using δ = 2^{-n}, we find that the number N falls off as a
power law of the mean interval size δ, i.e.,

N(δ) ∼ δ^{−d_f}, (3.4)

with d_f = ln(1 + p)/ln 2 < 1 for all 0 < p < 1. Note that the exponent
d_f is non-integer and at the same time less than the dimension d = 1 of
the space in which the set is embedded whenever 0 < p < 1; it is the
fractal dimension of the resulting dyadic Cantor set [85]. Unlike the
triadic Cantor set, where the Cantor dust is distributed in a strictly self-
similar fashion, the Cantor dust of the dyadic Cantor set is distributed
randomly, yet it is self-similar in the statistical sense
[54, 55, 66, 67, 70, 83, 85, 97].
It is well known that the triadic Cantor set possesses the following
unusual property. The intervals which are removed from the set are
1/3, 2/9, 4/27, ..., etc., and if we add them up we get

(1/3) Σ_{n=0}^{∞} (2/3)^n = 1, (3.5)

which is the size of the initiator. This leads us to conclude that the size
of the set that remains is precisely zero, since the sum of the sizes
removed equals the size of the initiator. The question is: Does the
dyadic Cantor set too possess the same property? It is indeed the case.
For instance, on average the amount removed in step one is (1 − p)/2,
in step two it is (1 − p)(1 + p)/4, in step three (1 − p)(1 + p)^2/8, in step
four (1 − p)(1 + p)^3/16, and so on. If we add these amounts we obtain

((1 − p)/2) Σ_{n=0}^{∞} ((1 + p)/2)^n = 1, (3.6)

which is again the size of the initiator. It means there is hardly anything
left in the set. However, we will show later that there are still tons of
members in the set. One of the virtues of the dyadic Cantor set is its
simplicity: the notion of a random fractal and its inherent character, self-
similarity, can be introduced to the beginner through this example in
the simplest possible way.
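The sum in Eq. (3.6) is an ordinary geometric series with ratio (1 + p)/2 < 1, so it can be checked directly; a quick numerical sketch:

```python
def removed_length(p, n_steps):
    """Partial sum of Eq. (3.6): the average total length removed
    after n_steps generations of the dyadic Cantor set."""
    return sum((1 - p) / 2 * ((1 + p) / 2) ** n for n in range(n_steps))

for p in (0.25, 0.5, 0.75):
    print(p, removed_length(p, 200))   # approaches 1, the initiator size
```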
In this equation the kernel F(x, y) describes the rules and the rate at
which a parent interval of size x + y is divided into two smaller intervals
of sizes x and y [19, 21]. The first term on the right-hand side of Eq. (3.7)
describes the loss of intervals of size x due to their division into two
smaller intervals. The second term describes the gain of intervals of size
x due to the division of an interval of size y > x into two smaller
intervals such that one of the two is of size x. The factor "2" in the gain
term describes the fact that at each breaking event we create two intervals
out of one.

To customize Eq. (3.7) for the kinetic dyadic Cantor set we just
need to replace the factor "2" in the gain term by 1 + p and choose the
following kernel:

F(x, y) = (x + y) δ(x − y). (3.8)

The delta function ensures that the two product intervals x and
y must be equal in size. However, this differs from the earlier definition
of the dyadic Cantor set: in the fragmentation process, at each
step only one interval is divided. Thus it is necessary to choose a rule to
decide how an interval is picked when there is more than one interval of
different sizes. To this end, the prefactor (x + y) states that a parent
interval of size x + y is chosen preferentially according to its size, and
the delta function then ensures that it is divided into two intervals of
sizes x and y with x = y. The resulting equation for the kinetic dyadic
Cantor set becomes

∂c(x, t)/∂t = −(x/2) c(x, t) + (1 + p) 2x c(2x, t), (3.9)

which is the required rate equation describing the kinetic DCS problem.
The algorithm for the j-th step, which starts with, say, N_j intervals,
can be described as follows.

(a) Generate a random number R from the open interval (0, 1).

(b) Check which of the intervals 1, 2, ..., N_j contains the random
number R. Say the interval that contains R is labelled m; pick the
interval m. If none of the surviving N_j intervals contains R, then
increase time by one unit and go to step (a).

(c) Apply the generator to the sub-interval m to divide it into two
equal pieces and remove one of the two parts with probability
(1 − p).

(d) Label the newly created intervals: the one starting from the left
end keeps its parent's label m, and the interval on the right, if it
survives, is labelled with a new number N_j + 1.

(e) Increase time by one unit.

(f) Repeat steps (a)-(e) ad infinitum.
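Steps (a)-(f) translate directly into a short Monte Carlo program. The sketch below (variable names and bookkeeping are ours) also tracks the conserved moment of Eq. (3.13), which should stay of order unity while the number of intervals keeps growing:

```python
import math
import random

def dcs(p, steps, seed=1):
    """Steps (a)-(f): intervals are (left_end, size) pairs; each hit
    halves an interval, the right half surviving with probability p."""
    rng = random.Random(seed)
    intervals = [(0.0, 1.0)]                      # the initiator [0, 1]
    for _ in range(steps):
        r = rng.random()                          # step (a)
        hit = next((i for i, (a, s) in enumerate(intervals)
                    if a <= r < a + s), None)     # step (b)
        if hit is None:
            continue                              # R missed: only time advances
        a, s = intervals[hit]
        intervals[hit] = (a, s / 2)               # left half keeps label (d)
        if rng.random() < p:                      # right half removed w.p. 1 - p (c)
            intervals.append((a + s / 2, s / 2))
    return intervals

p = 0.5
n_star = math.log(1 + p) / math.log(2)            # conserved moment index
intervals = dcs(p, 20_000)
moment = sum(s ** n_star for _, s in intervals)
print(len(intervals), moment)   # N grows; the moment stays of order one
```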
To solve Eq. (3.9) we find it convenient to introduce the n-th moment
of c(x, t),

M_n(t) = ∫_0^∞ x^n c(x, t) dx; (3.10)

instead of solving for c(x, t) itself we attempt to find the solution
for its moments, since the latter is much simpler than the former.
Incorporating it in Eq. (3.9) gives the rate equation for M_n(t), which
reads

dM_n(t)/dt = −[1/2 − (1 + p)/2^{n+1}] M_{n+1}(t). (3.11)

We can easily find a value n = n* for which M_{n*} is time independent,
or a conserved quantity, simply by finding the root of the equation

1/2 − (1 + p)/2^{n*+1} = 0. (3.12)

Solving it we immediately find n* = ln(1 + p)/ln 2, implying that the
quantity M_{ln(1+p)/ln 2}(t) is a conserved quantity. Numerically, it means
that if we label all the surviving intervals at, say, the j-th step as
x_1, x_2, x_3, ..., x_j, starting from the leftmost to the rightmost interval,
then the n*-th moment is

M_{n*} = x_1^{n*} + x_2^{n*} + x_3^{n*} + ... + x_j^{n*}. (3.13)

The numerical simulation too suggests this, as shown in Fig. (3.1).
Note that the value of this moment in a given realization remains the
same, although the exact numerical value at which it stays constant may
vary between realizations. Interestingly, the ensemble-averaged value, on
the other hand, is equal to the size of the initiator regardless of the time
at which we choose to measure it. To find why the moment index
n* = ln(1 + p)/ln 2 is so special we need to know the solution for the
n-th moment.
It is expected that the kinetic DCS problem too, like the DCS problem,
will generate fractals in the long-time limit and hence must exhibit
self-similarity, an essential property of fractals. It is therefore reasonable
Figure 3.1: The plot shows that the number of intervals N grows with time t
following the power law N(t) ∼ t^{d_f} with exponent n* = ln(1 + p)/ln 2, which is
exactly what has been predicted by Eq. (3.18).
to anticipate that the solution of Eq. (3.11) for general n will exhibit
scaling. The existence of scaling means that the various moments of c(x, t)
should have power-law relations with time [26]. We can therefore write
a tentative solution of Eq. (3.11) as a power law in time, which is verified
numerically (see Fig. (3.1)). On the other hand, the mean interval size
δ = M_1(t)/M_0(t) decreases with time.

Figure 3.2: The sum of the d_f-th power of all the remaining intervals, x_1^{d_f} +
x_2^{d_f} + ... + x_j^{d_f} = 1 regardless of time, provided d_f = ln(1 + p)/ln 2, which
is the fractal dimension of the kinetic dyadic Cantor set. An interesting point to note
is that the numerical value of the conserved quantity is different in every independent
realization, albeit it remains constant with time.
into two equal intervals? Obviously, the interval sizes will now be a random variable x whose sample space and corresponding probabilities will differ from time to time in a given realization. That is, the system will now evolve probabilistically in time, with a time-dependent random variable x(t), and hence we regard the system as the stochastic dyadic Cantor set (SDCS). The process starts with an initiator, which can be the unit interval [0, 1] as before. However, the generator here divides an interval randomly into two pieces and removes one of them with probability (1 − p). Perhaps the model can best be described by giving the algorithm for the jth generation step that starts, say, with N_j intervals. It can be described as follows.
(i) Generate a random number R from the open interval (0, 1).
(ii) Check which of the available intervals contains R.
(iii) If one of them, say the interval [a, b] labelled k, contains R, pick it; otherwise go back to step (i). In either case, increase the time by one unit.
(iv) Apply the generator to the interval k to divide it randomly into two pieces and remove one with probability (1 − p). For this we generate a random number, say c, from the open interval (a, b) to divide [a, b] into [a, c] and [c, b], and delete the interval (c, b) with probability (1 − p).
(v) Update the logbook by labelling the left of the two newly created intervals, [a, c], with its parent's label k, since the label k has become redundant, and label the other interval as N_j + 1 if it has not been removed in step (iv).
(vi) Repeat steps (i)-(v) ad infinitum.
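The algorithm above translates almost line by line into a simulation. The following sketch (names and data layout are ours) produces one realization, from which quantities such as N(t) or the moments can be measured:

```python
import random

def sdcs(p, steps, seed=42):
    """One realization of the stochastic dyadic Cantor set, steps (i)-(v)."""
    rng = random.Random(seed)
    intervals = [(0.0, 1.0)]              # the unit-interval initiator
    for _ in range(steps):                # each pass advances time by one unit
        R = rng.random()                  # (i) random number in (0, 1)
        hit = next((i for i, (a, b) in enumerate(intervals) if a < R < b),
                   None)                  # (ii)-(iii) interval containing R
        if hit is None:
            continue                      # no interval hit: only time advances
        a, b = intervals.pop(hit)
        c = rng.uniform(a, b)             # (iv) divide [a, b] at c
        intervals.append((a, c))          # left piece keeps the parent's slot
        if rng.random() < p:              # right piece survives with prob. p
            intervals.append((c, b))      # (v) the newly labelled interval
    return intervals

parts = sdcs(p=0.75, steps=2000)
print(len(parts), sum(b - a for a, b in parts))  # N(t) and total mass M(t)
```

Repeating this over many seeds gives the ensemble averages discussed above.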
The binary fragmentation equation given by Eq. (3.7) can describe the rules of the SDCS problem stated in the algorithm (i)-(vi) if we choose
$$F(x, y) = 1, \qquad (3.21)$$
and the factor 2 in the gain term is replaced by (1 + p). The master equation for the stochastic dyadic Cantor set then is
$$\frac{\partial c(x, t)}{\partial t} = -x\,c(x, t) + (1+p)\int_x^{\infty} c(y, t)\,dy. \qquad (3.22)$$
To solve it for $M_n(t)$ we follow the same procedure as for the KDCS problem and find the asymptotic solution for the nth moment. In particular, the number of intervals grows as
$$N(t) \sim t^{p}, \qquad (3.25)$$
and the total mass M(t), which is the first moment $M_1(t)$, decreases with time as
$$M(t) \sim t^{-(1-p)}, \qquad (3.26)$$
since (1 − p) > 0 for 0 < p < 1. Using these in the definition of the mean interval size
$$\delta = \frac{M(t)}{N(t)}, \qquad (3.27)$$
we find that
$$\delta(t) \sim t^{-1}, \qquad (3.28)$$
which is non-trivial, as it is independent of p. Expressing the number of intervals N(t) in terms of δ, we find that it scales as
$$N(\delta) \sim \delta^{-d_f}. \qquad (3.29)$$
However, the knowledge of the decay law for the mean interval size implies that one of the parameters, say x, can be expressed in terms of t, since according to Eq. (3.28) $t^{-1}$ bears the dimension of the interval size x. We can therefore define a dimensionless governing parameter
$$\xi = \frac{x}{t^{-1}}, \qquad (3.30)$$
and a dimensionless governed parameter
$$\Pi = \frac{c(x, t)}{t^{\theta}}. \qquad (3.31)$$
The numerical value of the right-hand side of the above equation remains the same even if the time t is changed by some factor, say µ, since the left-hand side is a dimensionless quantity. It means that the two parameters x and t must combine to form a dimensionless quantity $\xi = x/t^{-1}$, and the dimensionless parameter Π can depend only on ξ. In other words, we can write
$$\frac{c(x, t)}{t^{\theta}} = \phi(x/t^{-1}), \qquad (3.32)$$
and substituting this dynamic scaling form into the rate equation leads to the following equation for the scaling function:
$$\xi \frac{d^2\phi(\xi)}{d\xi^2} + \big(\xi + (2+p)\big)\frac{d\phi(\xi)}{d\xi} + (2+p)\,\phi(\xi) = 0, \qquad (3.35)$$
which we can re-write as
$$(-\xi)\frac{d^2\phi(\xi)}{d(-\xi)^2} + \big((2+p) - (-\xi)\big)\frac{d\phi(\xi)}{d(-\xi)} - (2+p)\,\phi(\xi) = 0. \qquad (3.36)$$
Figure 3.3: Plots of log(N) versus log(t) for different p (p = 0.50, 0.65 and 0.75; t = 10^6, E = 10^4), where N is the zeroth moment $M_0(t)$. In each case we find a straight line with slope equal to p, as it should be according to Eq. (3.25).
Figure 3.4: The $d_f$th moment $M_{d_f} = \sum_i x_i^{d_f}$ of all the remaining intervals (p = 0.75, t = 10^6, E = 10^4) remains constant within each independent realization. However, the numerical value at which it is constant may differ between realizations, as shown by the coloured lines. On the other hand, the ensemble-averaged value is always equal to one, as shown by the black line.
We then plot log(N) versus log(δ) in Fig. (3.5) and find an excellent straight line with slope equal to p, as predicted by Eq. (3.29).
To test the analytic solution for c(x, t) of the rate equation, given by Eq. (3.22), we perform a simulation for a given fixed time and then extract the sizes of all intervals. The distribution of interval sizes can then be constructed. We bin the data to find the number of intervals within a suitable class of width ∆x, which is then normalized by the width ∆x itself, so that c(x, t)∆x gives the number of intervals which fall within the range x to x + ∆x. We thus plot a histogram in Fig. (3.6), where the number of intervals that fall in each class is normalized by the width ∆x of the class. The resulting curves, shown in Fig. (3.6), represent plots of c(x, t) versus x at four different instants, namely $t_1 = 100$k, $t_2 = 200$k, $t_3 = 300$k and $t_4 = 400$k time units. We then divide the ordinate by $t_i^{1+p}$ and multiply the abscissa by $t_i$ of the c(x, t) vs x data, where $t_i$ is the time when the snapshot was taken. The resulting plot, which is equivalent to plots of $t^{-(1+p)} c(x, t)$ vs $xt$, is shown in Fig. (3.7(a)); it clearly shows that all the distinct plots of Fig. (3.6) now collapse superbly onto one universal curve. Plotting the same curve on a log-linear scale, as shown in Fig. (3.7(b)), gives a straight line, suggesting that the solution for the scaling function is exponential, as we found analytically.
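The binning and data-collapse procedure just described can be sketched in a few lines (function names are ours; NumPy is assumed to be available):

```python
import numpy as np

def interval_distribution(sizes, nbins=40):
    """Histogram the interval sizes and divide each count by its bin width,
    so that the result estimates the distribution c(x, t)."""
    counts, edges = np.histogram(sizes, bins=nbins)
    widths = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / widths

def rescale(x, c, t, p):
    """Data-collapse coordinates: abscissa x*t, ordinate c(x, t)/t**(1 + p)."""
    return x * t, c / t ** (1 + p)
```

Plotting the pairs returned by `rescale` for snapshots taken at several times should then reproduce the collapse of Fig. (3.7(a)).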
Figure 3.5: We plot log(N) versus log(δ) (p = 0.50, 0.65 and 0.75; t = 10^6, E = 10^4) and find a set of straight lines with slope $d_f = p$. It provides numerical verification of the theoretical relation $N \sim \delta^{-d_f}$ for the stochastic DCS.
Figure 3.6: The distribution function c(x, t) drawn as a function of x at four different times (p = 0.75, E = 10^4; t = 100K, 200K, 300K and 400K). On a log-linear scale the same data fall on straight lines, showing that c(x, t) at fixed time decays exponentially, as predicted by the solution given by Eq. (3.39).
Figure 3.7: (a) The rescaled distribution $c(x, t)/t^{1.75}$ drawn as a function of $xt$ for four different times (p = 0.75, E = 10^4; t = 100K, 200K, 300K and 400K). (b) The same set of data plotted on a log-linear scale. The straight line clearly shows that c(x, t) at fixed time decays exponentially, as predicted by the solution given by Eq. (3.56).
Recall that two triangles are called similar if their corresponding dimensionless quantities coincide, even though their dimensional quantities may differ. For evolving systems like the SDCS we can conclude that the system is similar to itself at different times, and hence we say that it exhibits self-similarity. Note that self-similarity is also a kind of symmetry: if x is replaced by $\lambda^{-1}x$ and t is replaced by $\lambda t$ in Eq. (3.39), we find that c(x, t) is brought back to itself,
$$c(\lambda^{-1}x, \lambda t) = \lambda^{1+d_f}\,c(x, t), \qquad (3.40)$$
which is a kind of symmetry. That is, as the system evolves, if we take a few snapshots at arbitrary times, they can all be obtained from one another by a similarity transformation. On the other hand, data-collapse, which proves dynamic scaling, means self-similarity. Indeed, self-similarity and similarity transformation amount to the same thing. All this suggests that there is a continuous symmetry along the time axis.
Emmy Noether showed that whenever there exists a continuous symmetry there must exist a conservation law; this is the now well-known Noether's theorem. Indeed, we find that for every value of p within 0 < p < 1 and every value of β there exists a unique conservation law. That is, the system is governed by a conservation law for each value of β and p, which ultimately can be held responsible for fixing the values of θ and z, albeit all the major variables of the system change with time. We can thus conclude that the fractal that emerges through evolution in time must obey Noether's theorem.
The factor 1/2 in the gain term of Eq. (3.41) implies that at each step
two particles combine to form one particle. The Smoluchowski equation
has been studied extensively in and around the eighties for a large class
of kernels satisfying K (bx, by ) = bλ K (x, y ), where b > 0 and λ is known
as the homogeneity index [9].
We can consider the case where, each time two intervals, say of sizes x and y, combine to form an aggregate of size x + y, an interval of the same size is added to the system with probability p. To describe this process we just need to replace the factor 1/2 in the gain term of Eq. (3.41) by (1 + p)/2. The mechanism that the resulting Smoluchowski equation describes can best be understood by giving an algorithm. The process starts with a system that comprises a large number of chemically identical Brownian particles and a fixed value for the probability p ∈ [0, 1] with which particles are self-replicated. The algorithm of the model is as follows:
(i) Two particles, say of sizes x and y , are picked randomly from
the system to mimic a random collision via Brownian motion.
(ii) Add the sizes of the two particles to form one particle of their
combined size (x + y ) to mimic aggregation.
(iii) Pick a random number 0 < R < 1. If R ≤ p then add another
particle of size (x + y ) to the system to mimic self-replication.
(iv) The steps (i)–(iii) are repeated ad infinitum to mimic the time
evolution.
One may also consider that the system has two different kinds of particles: active and passive. As the system evolves, active particles always remain active and take part in aggregation, while the character of a passive particle is altered irreversibly to an active particle with probability p. Once a passive particle turns into an active particle it can take part in further aggregation on an equal footing with the other active particles already present in the system, and it never turns back into a passive particle. This interpretation is very similar to the work of Krapivsky and Ben-Naim [32, 48]. While in their work the character of an active particle is altered, in our work it is the other way around. The two models also differ because here we only consider the dynamics of the active particles, whereas Krapivsky and Ben-Naim studied the dynamics of both entities, since a passive particle in their case exists at the expense of an active particle and therefore a consistency check is required. The present model does not require such a consistency check.
Note that random collision due to Brownian motion can be ensured
if we choose a constant kernel K (x, y ), e.g.,
K (x, y ) = 2, (3.44)
Figure 3.8: We plot ln(s(t)) against ln(t) for three different values of p, starting with mono-disperse initial conditions (we choose 50,000 particles of unit size). The lines have slopes given by the relation (1+p)/(1−p), confirming that $s(t) \sim t^{\frac{1+p}{1-p}}$.
We thus see that for 0 ≤ p < 1 the mean particle size s(t) in the limit t → ∞ grows following a power-law
$$s(t) \sim \big((1-p)t\big)^{\frac{1+p}{1-p}}. \qquad (3.51)$$
To verify this we plot ln(s(t)) against ln(t) in Fig. (3.8) using data from numerical simulation for three different values of p with the same mono-disperse initial condition in each case. Appreciating the fact that t ∼ 1/N in the long-time limit, we obtain three straight lines whose gradients are given by (1+p)/(1−p), providing numerical confirmation of the theoretically derived result given by Eq. (3.51).
In fractal analysis, one usually seeks a power-law relation between the number N needed to cover the object under investigation and a suitable yardstick size. Let us choose s(t) as the yardstick size; expressing N(t) in terms of s(t) we find
Figure 3.9: Plots of ln(N(s)) against ln(s) for three different values of p with the same initial conditions. The lines have slopes equal to −(1−p)/(1+p), as predicted by theory. In each case the simulation was performed for 30,000 aggregation events, with the process starting from N(0) = 50,000 particles of unit size.
Figure 3.10: The parallel lines resulting from plots of ln(N(s)) against ln(s) for mono-disperse and poly-disperse initial conditions, confirming that $N(s) \sim s^{-\frac{1-p}{1+p}}$ independently of the initial conditions. In each case the simulation started with 50,000 particles: their sizes were drawn randomly from the range between 1 and 1000 for poly 1000, between 1 and 3000 for poly 3000, and for the mono-disperse initial condition all the particles were chosen to be of unit size.
Figure 3.11: $\ln\big(M_{\frac{1-p}{1+p}}(t)\big)$ is plotted against ln(t) for various values of p and various initial conditions. The horizontal straight lines indicate that $M_{\frac{1-p}{1+p}}(t)$ is constant in the scaling regime. In all cases, initially 50,000 particles were drawn randomly from the size range between 1 and n, where n = 1000, 3000, denoted as poly n.
$$\Pi = \frac{c(x, t)}{\big((1-p)t\big)^{\theta}}. \qquad (3.54)$$
The numerical value of the right-hand side of the above two equations remains the same even if the time t is changed by some factor, say µ, since the left-hand sides are dimensionless. It means that the two parameters x and t must combine to form a dimensionless quantity $\xi = x/t^{z}$ such that the dimensionless governed parameter Π can depend only on ξ. In other words, we can write
$$\frac{c(x, t)}{\big((1-p)t\big)^{\theta}} = f(x/t^{z}), \qquad (3.55)$$
$$t^{-(\theta+z+1)} = \frac{(1-p)^{2\theta+z}}{F(p, \xi)}\Big[-2\mu_0 f(\xi) + (1+p)\int_0^{\xi} f(\eta)\,f(\xi-\eta)\,d\eta\Big], \qquad (3.57)$$
where
$$F(p, \xi) = \theta(1-p)^{\theta} f(\xi) - z(1-p)^{\theta}\,\xi\,\frac{df(\xi)}{d\xi}, \qquad (3.58)$$
and
$$\mu_0 = \int_0^{\infty} f(\xi)\,d\xi, \qquad (3.59)$$
is the zeroth moment of the scaling function. The right-hand side of Eq. (3.57) is dimensionless, and hence dimensional consistency requires θ + z + 1 = 0, or
$$\theta = -\frac{2}{1-p}. \qquad (3.60)$$
The equation for the scaling function f(ξ), which we have to solve for this value of θ, is
$$(1+p)\Big[\xi\frac{df(\xi)}{d\xi} + \int_0^{\xi} f(\eta)\,f(\xi-\eta)\,d\eta\Big] = 2 f(\xi)\,\big(\mu_0 - 1\big). \qquad (3.61)$$
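As a quick consistency check, one can verify by direct substitution that the pure exponential $f(\xi) = e^{-\xi}$, together with $\mu_0 = 1$, solves Eq. (3.61):

```latex
\mu_0 = \int_0^\infty e^{-\xi}\,d\xi = 1
\;\Rightarrow\; 2f(\xi)\,(\mu_0 - 1) = 0,
\\[4pt]
\int_0^{\xi} e^{-\eta}\, e^{-(\xi-\eta)}\,d\eta = \xi e^{-\xi},
\qquad
\xi\,\frac{df(\xi)}{d\xi} = -\xi e^{-\xi},
```

so the bracket on the left-hand side of Eq. (3.61) vanishes identically, in agreement with the exponential decay of the scaling function reported in Figs. (3.13) and (3.14).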
Figure 3.13: Log-linear plot of the same data as in Fig. (3.12) showing the expo-
nential decay of the particle size distribution function ct (x) with particle size x at
fixed time as seen analytically.
Figure 3.14: The three distinct curves of Figs. (3.12) and (3.13) for three different system sizes are well collapsed onto a single universal curve when c(x, t) is measured in units of $t^{-\frac{2}{1-p}}$ and x is measured in units of $t^{\frac{1+p}{1-p}}$. Such data-collapse implies that the process evolves with time preserving its self-similar character. We have chosen a semi-log scale to demonstrate that the scaling function decays exponentially, $f(\xi) \sim e^{-\xi}$, as predicted by the theory.
Multifractality
4.1 Introduction
Fractals only tell us about an object in which a certain physical quantity is distributed evenly wherever it is found in the embedding space. Consequently, the fractal dimension of the object remains the same even if the content is diluted, changing its density, and then distributed throughout the space exactly as before; this reveals that the fractal dimension tells us nothing about the distribution of the content. There are, however, situations in which physical quantities are unevenly distributed over space. That is, the distribution may not be homogeneous but heterogeneous, in the sense that the density of the content differs from one occupied region to another. Many precious elements, gold for instance, are found in high concentrations at only a few places, in moderate to low concentrations at many places, and in very low concentrations (where the cost of extraction is higher than the price of the gold extracted) almost everywhere. In such cases, the fractal, vis-a-vis the fractal dimension, tells us nothing about the distribution. We therefore have to invoke the idea of multifractality, which tells us about the uneven or heterogeneous distribution of the content on a geometric support. The support itself may or may not be a fractal.
Perhaps an example can provide better understanding than a long description. For this we consider the case of the generalized Sierpinski carpet, which is constructed as follows. Let us consider that the initiator is a square of unit area and unit mass. That is, a unit amount of mass is distributed uniformly over a square of unit area. The generator then divides
the initiator into four equal smaller squares, and the content of the upper left square is pasted onto the lower left square. The process of cutting and pasting continues by applying the generator to the available squares over and over again. In the end we have two things: (i) the support without content, and (ii) mass distributed unevenly on the geometric support, which is the Sierpinski carpet. We shall show rigorously in this chapter that the distribution of mass on the support is multifractal. Note that we talk about fractals if all the occupied squares of equal size at any stage carry the same amount of mass; it is in this sense that we say the mass is evenly distributed. In the cut-and-paste model, however, the mass is not distributed evenly but unevenly. In this section we develop the multifractal formalism in the context of both the deterministic and the random, or stochastic to be more precise, case.
So, a system is a candidate for multifractal analysis if a certain physical quantity or variable fluctuates wildly in space or on a support. Consider the case of a stochastic Cantor set. In the previous chapter we have seen that the nth moment $M_n(t)$ of the interval size distribution function c(x, t) of a stochastic fractal has in general the following form:
$$M_n(t) = \int_0^{\infty} x^n c(x, t)\,dx \sim t^{-(n-D)z}, \qquad (4.1)$$
and
$$\langle x \rangle = \frac{\int_0^{\infty} x\,c(x, t)\,dx}{\int_0^{\infty} c(x, t)\,dx} = \frac{M_1(t)}{M_0(t)}. \qquad (4.3)$$
Using Eq. (4.1) in Eq. (4.2) and Eq. (4.3), we find that there exists a single length scale, namely the mean interval size $\langle x \rangle$, which can characterize all the moments of the distribution function. This is true because the exponent $\frac{n - D(\beta)}{3\beta + 2}$ of $M_n(t)$ is linear in n, which implies that there exists a constant gap between the exponents of consecutive values of n [32]. Such linear dependence of the exponent of the nth moment is taken as the litmus test for simple scaling.
There are cases, however, where one finds
$$\langle x^n \rangle \neq \langle x \rangle^n, \qquad (4.5)$$
$$y = y(x). \qquad (4.8)$$
Now,
$$m = \frac{\partial y}{\partial x} \qquad (4.9)$$
is the slope of the curve y(x) at the point (x, y). In order to treat m as an independent variable in place of x, one may be tempted to eliminate x between Eq. (4.8) and Eq. (4.9), thereby getting y as a function of m. However, this is not correct, since in doing so one would sacrifice some of the mathematical content of Eq. (4.8): knowing y as a function of m, we will not be able to reconstruct the function y = y(x), as emphasized in Fig. (1).
As an illustrative example, consider the function $y = \frac{1}{2}e^{2x}$, for which $m = dy/dx = e^{2x}$ and hence y(m) = m/2. Let us try to reconstruct y(x) from the relation y(m) = m/2. Since m = dy/dx = 2y, we get $y(x) = Ce^{2x}$, where the integration constant C remains unspecified. Clearly, the inadequacy of this procedure arises from the fact that the relation y = y(m) involves dy/dx, and the integration required to get y(x) from this first-order differential equation yields y(x) only up to an unspecified constant of integration.
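The Legendre transformation cures this inadequacy by trading the pair (x, y) for (m, Ψ) with Ψ = y − mx, the combination introduced in general form in Eq. (4.18) below. A brief sketch for the example above:

```latex
x = \tfrac{1}{2}\ln m, \qquad y = \frac{m}{2},
\qquad
\Psi(m) = y - mx = \frac{m}{2} - \frac{m}{2}\ln m,
\\[4pt]
\frac{d\Psi}{dm} = \frac{1}{2} - \frac{1}{2}(\ln m + 1) = -\frac{1}{2}\ln m = -x .
```

Since dΨ/dm = −x, the pair (m, Ψ) returns x, and then y = Ψ + mx, uniquely; no undetermined constant of integration appears.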
(iv) The quantity in the parenthesis of the left hand side of the above
equation can be defined as a new quantity
Ψ ≡ Y − ax, (4.18)
where the exponent τ(q) is called the mass exponent, not the fractal dimension, if it is non-linear in q [23]. The mass exponent is more revealing than the simple fractal dimension, as we shall soon find out. It can equivalently be defined as
$$\tau(q) = -\lim_{\delta \to 0} \frac{\ln Z(q, \delta)}{\ln \delta}. \qquad (4.22)$$
Using Eq. (4.21) in Eq. (4.20) we can write
$$M_d(q, \delta) \sim \delta^{\,d - \tau(q)} \xrightarrow{\;\delta \to 0\;} \begin{cases} 0 & d > \tau(q), \\ \infty & d < \tau(q). \end{cases} \qquad (4.23)$$
$$\tau(1) = 0. \qquad (4.25)$$
where the prime on the sum indicates that only cells with $\mu_i = \mu_{\min}$ contribute. The expression may be written as
$$\frac{d\tau(q)}{dq}\bigg|_{q \to -\infty} = -\lim_{\delta \to 0} \frac{\ln \mu_{\min}}{\ln \delta} = -\alpha_{\max}, \qquad (4.28)$$
where $\mu_{\max}$ is the largest value of $\mu_i$, which leads to the smallest value of α. Eqs. (4.28) and (4.29) imply that
$$\mu_{\min} \sim \delta^{\alpha_{\max}}, \qquad (4.30)$$
and
$$\mu_{\max} \sim \delta^{\alpha_{\min}}. \qquad (4.31)$$
We can combine the above two relations and write a general equation
$$\mu \sim \delta^{\alpha}. \qquad (4.32)$$
Later we shall show that it is indeed the case and that
α(q ) = −dτ /dq, (4.33)
is true in general [16, 22].
For q = 1 we find that dτ/dq has an interesting value:
$$\frac{d\tau(q)}{dq}\bigg|_{q=1} = -\lim_{\delta \to 0} \frac{\sum_i \mu_i \ln \mu_i}{\ln \delta} = \lim_{\delta \to 0} \frac{S(\delta)}{\ln \delta}, \qquad (4.34)$$
The exponent α(1) = −(dτ /dq )|q=1 = dS is also the fractal dimension
of the set onto which the measure concentrates and describes the scal-
ing with the box size δ of the entropy of the measure. Note that the
partition entropy S (δ ) at resolution δ is given in terms of the entropy
S of the measure by S (δ ) = −S ln δ .
and hence
d(τ + αq ) = qdα. (4.41)
The above equation immediately reveals that we can define a new func-
tion
f (α) = τ + αq, (4.42)
where the new function f is a function of α.
We also have
$$\frac{d^2}{d\alpha^2}\Big[q\alpha - f(\alpha)\Big]\bigg|_{\alpha = \alpha(q)} > 0, \qquad (4.47)$$
Let us first consider the dyadic Cantor set [85]. In the nth step of its construction there are $(1+p)^n$ intervals, each of size $x_i = 2^{-n}$, and hence the mean interval size is also $\delta = 2^{-n}$. Thus the partition function is
$$Z_q = \sum_i \mu_i^{q} = N\,(2^{-n})^{q d_f} = \delta^{-(1-q)d_f}, \qquad (4.51)$$
so that
$$\tau(q) = (1-q)\,d_f. \qquad (4.52)$$
Its Legendre transformation gives
$$f(\alpha) = d_f, \qquad (4.53)$$
and hence it is just a simple fractal.
On the other hand, if we apply the formalism to the stochastic dyadic Cantor set, then the partition function is
$$Z_q = \int_0^{\infty} \mu^{q}\,c(x, t)\,dx, \qquad (4.54)$$
where $\mu = x^{d_f}$. We can then immediately write
$$Z_q = M_{q d_f}(t). \qquad (4.55)$$
We know the solution for the nth moment,
$$M_n(t) \sim t^{-(n - d_f)z}. \qquad (4.56)$$
Using Eq. (4.56) in Eq. (4.55) gives
$$Z_q \sim t^{-(q-1)d_f z}. \qquad (4.57)$$
Expressing it in terms of δ we find
$$Z_q \sim \delta^{-(1-q)d_f}, \qquad (4.58)$$
once again the same result as for the deterministic dyadic Cantor set [85].
with $d_f = \ln(1+p)/\ln 2$. The mass exponent satisfies the two conditions (i) τ(0) = ln 3/ln 2 and (ii) τ(1) = 0. It is worthwhile to mention as a passing note that Hentschel and Procaccia introduced another set of dimensions defined by
$$\tau(q) = (1-q)\,D_q. \qquad (4.63)$$
A system can be described as multifractal only if $D_q$ depends on q. In such a case $D_0$ is just the dimension of the support, while $D_1$ is the information dimension and $D_2$ is the correlation dimension. Legendre transformation of τ(q) immediately gives
$$f(\alpha) = \frac{\ln(1+p)}{\ln 2}. \qquad (4.64)$$
Thus, instead of an f(α) spectrum we get a constant value, and hence the system is not a multifractal, since the slope of the mass exponent is the same regardless of the value of q. We conclude with the note that the resulting system is a fractal, since the fraction of the total measure contained in each cell of the same size is the same [97].
Now we slightly change the construction process of the DCS. Instead of throwing away the right half of the interval each time we apply the generator, we paste it onto the left half, which always remains, with probability 1 − p [97]. That is, after step one there are 1 + p intervals, of which the one on the left has mass (2 − p)/2 and the one on the right has mass p/2, so that the sum is equal to the mass of the initiator. In
Figure 4.1: Multifractal f(α) spectrum of the multifractal dyadic Cantor set for p = 1/3, 1/2 and 2/3. The peak of each curve occurs at log(1 + p)/log(2), which is the dimension of the skeleton on which the measure is distributed.
such a case we could assume that the left cell has a fractional mass equal to $p_1 = (2-p)/2$ and the right cell has mass equal to $p_2 = 1/2$ with probability p. That is, after step n = 1 there are two types of mass, $\mu_0 = p_1$ and $\mu_1 = p_2$, such that the total mass $\sum_{k=0}^{1} N_k \mu_k = 1$ is conserved, since $N_k = \binom{1}{k}p^k$ with k = 0, 1. In step two, the generator is applied to the remaining 1 + p intervals, and in each we paste the right half onto the left half with probability (1 − p); hence at the end of step two there are $(1+p)^2$ intervals with three types of masses: $\mu_0 = p_1^2$ with probability one, two cells with mass $\mu_1 = p_1 p_2$ with probability p, and $\mu_2 = p_2^2$ with probability $p^2$. The total mass of all the cells is $\sum_{k=0}^{2} N_k \mu_k = 1$, where the number of cells of type k is $N_k = \binom{2}{k}p^k$ with k = 0, 1, 2. This process of cut and paste is repeated ad infinitum. After the nth step we have $(1+p)^n$ intervals with n + 1 different types of mass. We label each type by an integer k (k = 0, 1, ..., n) such that the mass of each k-type interval is given by $\mu_k = p_1^{n-k} p_2^{k}$. The number of k-type intervals is $N_k = \binom{n}{k}p^k$, where k = 0, 1, ..., n. The distribution of the mass is clearly not uniform but heterogeneous. We shall now analyze it below by invoking the multifractal formalism.
We can now construct the partition function as
$$Z_q = \sum_{k=0}^{n} N_k\,\mu_k^{q} = \sum_{k=0}^{n}\binom{n}{k}\left(\frac{2-p}{2}\right)^{q(n-k)}\left(\frac{p}{2^q}\right)^{k}. \qquad (4.65)$$
We can re-write it as
$$Z_q = \left[\left(\frac{2-p}{2}\right)^{q} + \frac{p}{2^q}\right]^{n}. \qquad (4.66)$$
If we now measure the partition function $Z_q$ in terms of $s = 2^{-n}$, then we can eliminate n from Eq. (4.66) in favour of s and find that $Z_q \sim s^{-\tau(q)}$, where
$$\tau(q) = \frac{\log\left[\left(\frac{2-p}{2}\right)^{q} + \frac{p}{2^q}\right]}{\log 2}. \qquad (4.68)$$
One can easily check that the two essential properties of the mass exponent τ(q) are obeyed: τ(1) = 0, and τ(0) = ln(1+p)/ln 2 is the dimension of the support. How do we obtain the f(α) curve?
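Both checks, as well as the f(α) curve itself, are easy to carry out numerically from Eq. (4.68); a sketch (function names are ours, with α(q) = −dτ/dq approximated by a central difference):

```python
import math

def tau(q, p):
    """Mass exponent of the cut-and-paste dyadic Cantor set, Eq. (4.68)."""
    return math.log(((2 - p) / 2) ** q + p / 2 ** q) / math.log(2)

def alpha(q, p, h=1e-6):
    """Hoelder exponent alpha(q) = -d tau / dq, via a central difference."""
    return -(tau(q + h, p) - tau(q - h, p)) / (2 * h)

def f_spectrum(q, p):
    """Multifractal spectrum f(alpha) = q * alpha(q) + tau(q)."""
    return q * alpha(q, p) + tau(q, p)

p = 0.5
print(round(tau(1, p), 10))                                  # → 0.0
print(round(tau(0, p) - math.log(1 + p) / math.log(2), 10))  # → 0.0
```

Sweeping q over a range and plotting `f_spectrum(q, p)` against `alpha(q, p)` reproduces curves like those of Fig. 4.1, with the peak at q = 0 equal to the support dimension τ(0).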
$$\alpha(q) = -\frac{d\tau(q)}{dq}. \qquad (4.70)$$
We can now write Eq. (4.69) as follows. The above equation immediately reveals that we can define the new function
$$f(\alpha) = \tau(q) + \alpha q, \qquad (4.72)$$
where the new function f is a function of α, since the quantity q is held constant on the right-hand side of Eq. (4.71) while α is the variable [97]. In this way q is replaced by α as the independent variable. We can thus obtain the following expression for the multifractal spectrum
$$f(\alpha) = q\alpha + \frac{\log\left[\left(\frac{2-p}{2}\right)^{q} + \frac{p}{2^q}\right]}{\log 2}, \qquad (4.73)$$
and the Hölder exponent is
$$\alpha(q) = \frac{1}{\ln 2}\;\frac{\frac{p}{2^q}\ln 2 - \left(\frac{2-p}{2}\right)^{q}\ln\!\left(\frac{2-p}{2}\right)}{\left(\frac{2-p}{2}\right)^{q} + \frac{p}{2^q}}, \qquad (4.74)$$
for which exact analytical calculations can be made.

Here the support
is the Sierpinski carpet with b = 2, which we have already discussed in section 4.3.3. However, let us apply the multifractal formalism to the Sierpinski carpet and show that it results in a unique f value instead of an f(α) spectrum. Recall that at step one the generator divides the initiator of unit area into $b^2 = 4$ equal pieces and removes one, which we assume always to be the top left of the four new squares. In the next step, the generator is applied to each of the remaining three blocks to divide them into four equal pieces. If we continue the process, then in the nth step we will have $N = 3^n$ blocks of size $\delta = 2^{-n}$, and we already know that in the large-n limit the resulting system emerges as a fractal of dimension $d_f = \ln 3/\ln 2$. However, here we want to apply the multifractal formalism so that we can appreciate the difference between a fractal and a multifractal.
In order to apply the multifractal formalism we first have to decide what to choose as a measure, and find what fraction of this measure is in the ith cell. To make things simpler, let us first think of the case where the generator divides the initiator into four equal parts but removes nothing. In such a case we could assume that each block is occupied with a content equal to its area, and the sum of all the areas would be equal to one, as the initiator is chosen to be a square of unit area. So we could describe the area of the respective block as the occupation probability $p_i = x_i^2$ of each block, where $x_i^2$ is the area of the ith block, and hence $\sum_i p_i = \sum_{i=1}^{4^n} x_i^2 = 1$. The exponent 2 in $p_i = x_i^2$ is in fact the dimension of the resulting system, which is actually the square lattice. We can generalize this idea. In the context of the Sierpinski carpet we already know its dimension $d_f = \ln 3/\ln 2$. So, in analogy with the square lattice, we find that the $d_f$th moment of the sides of the remaining squares, $\sum_{i=1}^{3^n} x_i^{d_f}$, too is a conserved quantity. We therefore can assume that the ith block of the $3^n$ remaining blocks is occupied with a content equal to $p_i = x_i^{\ln 3/\ln 2}$. It can easily be shown that indeed $\sum_{i=1}^{3^n} p_i = 1$, since
$$\sum_{i=1}^{3^n} x_i^{d_f} = 3^n\,2^{-n d_f} = e^{n\ln 3}\,e^{-n d_f \ln 2} = 1. \qquad (4.76)$$
Figure 4.2: The first two steps are shown to illustrate the cut and paste model on
the Sierpinski carpet.
squares, of which the two on the right have a fractional mass $p_2 = 1/4$ and the one on the bottom left has mass $p_1 = 1/2$, such that $p_1 + 2p_2 = 1$. That is, after step n = 1 there are two types of mass, $\mu_0 = p_1$ and $\mu_1 = p_2$. Repeat this process recursively for each of the smaller squares. The case of n = 2 is shown in Fig. (4.2). So, in step n = 2, the generator, when applied to the lower left square of mass $\mu_0$, will produce two types of mass, $\mu_0 = p_1^2$ and $\mu_1 = p_1 p_2$. Similarly, when the generator is applied to the upper right squares, the bottom left of the four new smaller squares will have mass $\mu_1 = p_2 p_1$ and the two squares on the right will have mass $\mu_2 = p_2^2$. In the same way, when the generator is applied to the third square it will again produce two types of mass, as obtained for the upper right square. Thus at step n = 2 we find three types of mass. It can be shown that at the nth step the linear size of each square is $\delta = 2^{-n}$, and there are (n + 1) different types of squares when classified according to their masses. Each type can be designated by an integer k (k = 0, 1, ..., n) such that the mass of each of the k-type squares is given by $\mu_k = p_1^{n-k} p_2^{k}$. The number of k-type squares is given by $N_k = \binom{n}{k} 2^{k}$.
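The bookkeeping can be verified in a couple of lines: with $N_k = \binom{n}{k}2^k$ and $\mu_k = p_1^{n-k}p_2^k$, the total mass is $\sum_k N_k \mu_k = (p_1 + 2p_2)^n = 1$ at every step. A quick numerical check (function name is ours):

```python
from math import comb

def total_mass(n, p1=0.5, p2=0.25):
    """Sum N_k * mu_k over k = 0..n for the cut-and-paste Sierpinski carpet."""
    return sum(comb(n, k) * 2 ** k * p1 ** (n - k) * p2 ** k
               for k in range(n + 1))

print(round(total_mass(12), 12))  # → 1.0
```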
These results can easily be seen by observing that the different
masses in the n-th step can be obtained in a multiplicative process.
At the n-th step, the possible masses in the squares are given by all the
How to obtain the mass exponent? The fraction of mass in the ith cell, $\mu_i$, can be taken to be the fractal measure. The sequence of mass exponents τ(q) is defined by
$$N(q, \delta) = \sum_i \mu_i^{q} \sim \delta^{-\tau(q)}, \qquad (4.83)$$
Figure 4.3: Multifractal f(α) spectrum of the multifractal dyadic Cantor set for p = 1/3, 1/2 and 2/3. The peak of each curve occurs at log(1 + p)/log(2), which is the skeleton on which the measure is distributed.
Figure 4.4: A snapshot of the weighted planar stochastic lattice containing 30001
blocks.
where the kernel $F(x_1, x_2, y_1, y_2)$ determines the rules and the rate at which a block of sides $(x_1 + x_2)$ and $(y_1 + y_2)$ is divided into four smaller blocks whose sides are the arguments of the kernel [31, 32, 33]. The first term on the right-hand side of equation (4.89) represents the loss
F (x1 , x2 , y1 , y2 ) = 1 (4.90)
$$\frac{dM(m, n; t)}{dt} = \left(\frac{4}{mn} - 1\right) M(m+1, n+1; t). \qquad (4.93)$$
Iterating equation (4.93) to get all the derivatives of M (m, n; t) and
then substituting them into the Taylor series expansion of M (m, n; t)
about t = 0 one can immediately write its solution as
$$M(m, n; t) = {}_2F_2\left(a_+, a_-; m, n; -t\right), \qquad (4.94)$$
One can see that (i) M (1, 1; t) = 1 + 3t is the total number of blocks
N (t) and (ii) M (2, 2; t) = 1 is the sum of areas of all the blocks which
is obviously a conserved quantity [35, 41]. Both properties are again
consistent with the definition of the WPSL depicted in the algorithm.
The behaviour of M(m, n; t) in the long time limit follows from equation (4.94).
Figure 4.5: The plots of $\sum_i^N x_i^{n-1} y_i^{4/n-1}$ vs N for n = 3, 4, 5, drawn using data collected from one realization.
It suggests that a single length scale cannot characterize all the mo-
ments of the distribution function n(x, t).
Figure 4.6: The plots of τ(q) vs q, showing how its slope varies as a function of q.
f (α) = qα + τ (q ). (4.107)
On the other hand, using $p \sim \delta^{\alpha}$ in the expression for the partition function $Z_q = \sum_i p_i^q$ and replacing the sum by an integral, indexing the blocks by a continuous Lipschitz-Hölder exponent $\alpha$ with weight $\rho(\alpha)$, we obtain
$$N(q(\alpha), \delta) \sim \int \rho(\alpha)\, d\alpha\, N(\alpha, \delta)\, \delta^{q\alpha}, \qquad (4.109)$$
$$S(\delta) = \ln \delta^{-\alpha(1)} \qquad (4.112)$$
where the exponent $\alpha(1) = 6/5$.
Therefore, in reality the system obeys only one conservation law: conservation of total area.
We again look into the q-th moment of n(x, t) using equation (4.98) and, appreciating the fact that $M_q(t)$ is equal to $M(q + 1, 1; t)$ or $M(1, q + 1; t)$, we immediately find that
$$M_q(t) \sim t^{-\frac{q-2}{2}}. \qquad (4.122)$$
Unlike in the previous case where the exponent of the power-law so-
lution of the Mq (t) is non-linear, here we have an exponent which is
linear in q . It immediately implies that in the case of a kinetic square
lattice
$$\langle x^q \rangle = \langle x \rangle^q, \qquad (4.123)$$
and hence a single length-scale is enough to characterize all the mo-
ments of n(x, t). That is, the system now exhibits simple scaling instead
of multiscaling. Like before, let us consider that each block is occupied with a fraction of the measure equal to the square of its own length, i.e., its area, $p_i = x_i^2$, and hence the corresponding partition function is
$$Z_q = \sum_i^N p_i^q = M_{2q}(t). \qquad (4.124)$$
τ (q ) = 2 − 2q. (4.127)
f (α) = 2, (4.128)
Dq = 2 . (4.129)
We thus find that if the generator divides the initiator into four equal
squares and we apply it thereafter sequentially then the resulting lat-
tice is no longer a multifractal. The reason is that the distribution of
the population in the resulting support in this case is uniform. The
two models therefore provide a unique opportunity to look for the possible origin of multifractality. The two models discussed in this section differ only in the definition of the generator. When the generator divides the initiator randomly into four blocks and we apply it over and over again sequentially, we have multifractality, since the underlying mechanism is governed by a random multiplicative process. This is not, however, the case if the generator divides the initiator into four equal blocks and is applied over and over again sequentially, since the resulting dynamics is governed by a deterministic multiplicative process instead.
4.10.1 Discussions
In this section, we discuss two main features, (i) finding the explicit
time dependent and scaling properties of the particle size distribution
function, when particles are characterized by more than one variable
and (ii) the connection between the kinetics of the fragmentation pro-
cess and the occurrence of multifractality to describe the rich pattern
formed due to the breakup process. Both these results have important
applications including a unique opportunity to search for the origin
of multifractality and multiscaling. In reality, fragmenting objects will
have both size and shape, i.e., a geometry. Intrigued by the possibil-
ity that the geometry of the fragmenting objects may influence the
fragmentation process, we have investigated three distinct models of
fragmentation. For some simple choices of the fragmentation rule we
give exact and explicit solutions to these geometric models.
We find it difficult to obtain the explicit solution for general homogeneity in the case of two dimensions. Nevertheless, the rate equation for the moments is analytically tractable, which allows us to find the temporal behaviour of the moments in the long time limit. Since the moments keep the signature of some generic features of the particle size distribution function, we confined ourselves to the asymptotic behaviour of the moments for general homogeneity indices, which is essential for detecting the occurrence of the shattering transition. We suggest that the existence of an infinite number of hidden conserved quantities clearly indicates the absence of scaling solutions.
with strong fluctuations between different copies, and each copy can be partitioned into subsets such that each subset scales with a different exponent, yet they can be recognized by their generic features. Note that when three fragments are removed from the system at each time event (s = 1), the dimension $D_f(s)$ of the support on which the measure can be distributed is zero.
Chapter 5
Fractal and
Multifractal in
Stochastic Time Series
5.1 Introduction
In the previous chapters, we have discussed the concept of self-
similarity, fractality and multifractality for both the deterministic and
stochastic processes. In each case, a rule (deterministic or stochastic)
is considered to generate the process and define its fractal or multifractal behaviour. The deterministic rule is mathematical and the corresponding process is called deterministic [47, 49]. On the other hand, a stochastic rule is completely statistical, i.e., the behaviour of the system depends on a probabilistic law, and the associated system is known as a stochastic process [50, 53]. However, these kinds of predefined rules do not work well for predicting real-world phenomena (e.g., human heart oscillations, the nervous system, tumor and cancer growth, the environment, socio-economic systems, etc.). Moreover, it is impossible to define an exact rule for such phenomena. In such cases, the only way to predict the process is by time series analysis. A time series is deterministic or stochastic according to the nature of the underlying process. For a deterministic time series, fractal as well as multifractal analysis can be done by reconstructing the attractor from the given time series [47, 49]. Reconstruction of the attractor requires a suitable
time-delay and proper embedding dimension [14, 47, 49]. On the other
Figure 5.1: A flow chart representing the plan of discussion for the remaining topics. The chart starts at the upper left corner (the rectangle) and ends at the bottom left corner (the square). Each geometrical figure signifies a different section, and the arrows indicate the direction of progression through the successive sections.
Definition 5.2 For a given time series {x}, a measure µ is said to obey a scaling law with an exponent α if $\mu(cs) = c^{\alpha} \mu(s)$ holds, where c is a constant and s represents the scale parameter.
Definition 5.6 Let $\{x(k)\}_{k=1}^N$ be a given time series. Then, the auto-covariance of $\{x(k)\}_{k=1}^N$ with delay τ is denoted by ACOV(τ) and defined as
$$ACOV(\tau) = \frac{1}{N} \sum_{k=1}^{N-\tau} \left(x(k) - \langle x \rangle\right)\left(x(k + \tau) - \langle x(k + \tau) \rangle\right), \qquad (5.2)$$
where τ = 0, 1, 2, 3, . . ..
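Definition 5.6 translates directly into code. The following is a minimal sketch (the function name `acov` is ours), assuming $\langle x \rangle$ and $\langle x(k+\tau) \rangle$ denote the means of the two overlapping windows:

```python
import numpy as np

def acov(x, tau):
    """Auto-covariance with delay tau, a direct transcription of (5.2)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    a = x[:N - tau]       # x(k),       k = 1 .. N - tau
    b = x[tau:]           # x(k + tau), k = 1 .. N - tau
    # each window is centred on its own mean, as in the definition
    return np.sum((a - a.mean()) * (b - b.mean())) / N

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(acov(x, 0))   # the variance of x
print(acov(x, 5))   # near zero for uncorrelated noise
```

At delay τ = 0 the expression reduces to the ordinary (population) variance of the series.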
From Definition 5.7, it can be verified that a time series is stationary if and only if AC(τ) = AC(−τ); otherwise, the time series is called non-stationary. A numerical illustration is given in Example 5.1 to show the applicability of the auto-correlation measure in verifying the stationarity or non-stationarity of an STS.
Example 5.1 First consider two different time series x and y, each of length 500, as given in Fig. 5.2. From Fig. 5.2a, only one mode of variation can be observed in the time series x. It indicates that the corresponding probability distributions are the same in each time window that covers at least one oscillation. So, x is weakly stationary. On the other hand, variable oscillation can be observed in y over different time intervals. In fact, the oscillation in y over the time interval I₁ = (250, 500] is faster than that
Figure 5.2: (a), (b) represent a stationary time series {x} and a non-stationary time series {y} respectively; in each case the length is L = 500. (c) and (d) represent the respective auto-correlation curves for τ ∈ [−1000, 1000]. The dotted lines indicate the auto-correlation at τ = 0.
in I₂ = [0, 250] (see Fig. 5.2b). It implies that the oscillating patterns in I₁ and I₂ always differ. As no common oscillation exists, the corresponding probability distribution changes over time, which indicates non-stationarity in {y}.
Note
In order to apply fluctuation analysis, one of the major tasks is to distinguish the noisy and random-walk natures of the given STS. The method of spectrum analysis (described in Section 5.4.2) can be used to identify these natures. Figure 5.4 shows some STS with β ∈ [0, 2]. An STS with β ∈ [0, 1] is known as a noisy time series. On the other hand, a random walk can be recognized by a value of β in (1, 2]. Moreover, β = 0 and 1 indicate white and pink noise respectively.
However, this analysis does not reveal reliable results until a logarithmic binning procedure is applied to the double logarithmic plot of S(f).
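The spectral classification just described can be sketched in a few lines. The code below is an illustration under our own conventions (the name `spectral_beta` and the choice of 20 logarithmic bins are ours): it log-bins the periodogram and reads off β from the slope of the fitted line.

```python
import numpy as np

def spectral_beta(x, nbins=20):
    """Estimate beta in S(f) ~ 1/f**beta from a log-binned periodogram."""
    x = np.asarray(x, dtype=float)
    f = np.fft.rfftfreq(len(x))[1:]                 # drop the zero frequency
    S = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2  # periodogram
    edges = np.logspace(np.log10(f[0]), np.log10(f[-1] * 1.001), nbins + 1)
    fb, Sb = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (f >= lo) & (f < hi)
        if m.any():                                 # average within each log bin
            fb.append(f[m].mean())
            Sb.append(S[m].mean())
    slope, _ = np.polyfit(np.log(fb), np.log(Sb), 1)
    return -slope                                   # S ~ f**(-beta)

rng = np.random.default_rng(1)
white = rng.normal(size=4096)
brown = np.cumsum(white)        # integrated white noise, beta close to 2
print(spectral_beta(white))     # close to 0
print(spectral_beta(brown))     # close to 2
```

A value of β below 1 would classify the series as noise, above 1 as a random walk, in line with the Note above.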
Figure 5.3: (a), (b) represent white noise ({x}) and pink noise ({y}) respectively. In each case, the length of the noise is N = 2000. (c) and (d) represent the corresponding log S(f) vs. log f plots. The solid (pink) lines indicate straight lines fitted to the log S(f) vs. log f plots.
Figure 5.4: Power noise signals $\frac{1}{f^\beta}$ are represented for β = 0 (gray, white noise), 0.5 (blue), 1 (pink, pink noise), 1.25 (red) and 1.5 (brown); β = 2 corresponds to brown noise, and the regimes β ∈ [0, 1] and β ∈ (1, 2] correspond to noise and random walk respectively.
It can be verified that $F_{RS}(s)$ obeys the scaling law $F_{RS}(s) \sim s^H$, where H is known as the HE. In practice, H is calculated by fitting a straight line to the log $F_{RS}(s)$ vs. log s plot; the slope of the fitted line gives the value of H. Example 5.3 illustrates the calculation of H for three types of power noise.
Example 5.3 Power noise is a kind of stochastic time series whose power spectrum S(f) (f being the frequency of the time series) obeys the law $S(f) \sim f^{-\beta}$, where β ∈ ℝ⁺ ∪ {0}. For our purpose, we consider β = 0, 0.5 and 1. To calculate H, we have investigated the log s vs. log $F_{RS}(s)$ graphs for the respective time series. The corresponding plots are shown in Fig. 5.5a, b and c respectively. From the figures, the slopes of the straight lines fitted to the corresponding plots are found to be 0.4959, 0.7495 and 0.8302 respectively. These are the values of H of the respective power noises. It is noted that the values of H for β = 0 and 0.5 (given in Fig. 5.5a and b) are almost equal to their standard theoretical values. For β = 1, the value of H is 0.8302, which is far from its theoretical value (see Fig. 5.5c). However, non-stationarity is one of the major reasons for this incoherence.
Figure 5.5: (a), (b) and (c) represent the log₁₀ $F_{RS}(s)$ vs. log₁₀ s plots for β = 0, 0.5 and 1 respectively, with fitted slopes H = 0.4959, 0.7495 and 0.8302. For each plot, the length of the noise is 2000. In order to find the HE, straight lines are fitted on the linear region of the log₁₀ $F_{RS}(s)$ vs. log₁₀ s plots. In the computation, the scales are chosen as $10^s$, where s = [0, 1, 2, 3, 4].
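The R/S procedure behind Example 5.3 can be sketched as follows. This is a simplified illustration (the names `rs` and `hurst` and the scale set are ours), and the known small-scale bias of R/S estimates is ignored.

```python
import numpy as np

def rs(x, s):
    """Average rescaled range R/S over non-overlapping windows of length s."""
    x = np.asarray(x, dtype=float)
    n = len(x) // s
    vals = []
    for w in x[:n * s].reshape(n, s):
        y = np.cumsum(w - w.mean())      # cumulative deviation profile
        R = y.max() - y.min()            # range of the profile
        S = w.std()                      # standard deviation of the window
        if S > 0:
            vals.append(R / S)
    return np.mean(vals)

def hurst(x, scales=(16, 32, 64, 128, 256)):
    """H is the slope of log F_RS(s) against log s."""
    F = [rs(x, s) for s in scales]
    H, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return H

rng = np.random.default_rng(2)
print(hurst(rng.normal(size=4096)))  # near 0.5 for white noise
```

For white noise the fitted slope comes out close to the theoretical H = 0.5, mirroring the first panel of Fig. 5.5.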
Note
Before calculating the values of α, we discuss the detrending technique (described in step-2). To do this, we choose $\frac{1}{f}$-noise and construct a profile X(j) by (5.15). Figures 5.6a and b show the same profile X(j) for the $\frac{1}{f}$-noise. Then, we divide the profile into non-overlapping intervals of the same length. In Fig. 5.6a, straight lines are fitted to the profile X(j) in each of the intervals, whereas quadratic polynomials are fitted in Fig. 5.6b. From the two figures, it can be observed that the better fit is obtained in Fig. 5.6b. This indicates that DFA cannot reveal an appropriate result unless the detrending minimizes the fitting error.
Figure 5.6: (a), (b) represent the profile X(j) (j = 1, 2, . . . , 8000) (green) for a $\frac{1}{f}$-noise of length 8000. In both figures, the dotted (red) lines represent the trend $X_\nu(s, j)$. In (a), $X_\nu(s, j)$ is a straight line for each ν; in (b), it is a quadratic polynomial for each ν. The vertical (black) lines indicate the non-overlapping segments.
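The detrending step illustrated in Fig. 5.6 is the core of DFA. Below is a minimal sketch of the whole procedure (our function name `dfa`; the scale set is an assumption), returning the scaling exponent α as the slope of log F(s) vs. log s.

```python
import numpy as np

def dfa(x, scales=(16, 32, 64, 128, 256), order=1):
    """DFA scaling exponent alpha from polynomial detrending of the profile."""
    x = np.asarray(x, dtype=float)
    X = np.cumsum(x - x.mean())                  # the profile
    F = []
    for s in scales:
        n = len(X) // s
        t = np.arange(s)
        res = []
        for seg in X[:n * s].reshape(n, s):
            coef = np.polyfit(t, seg, order)     # local polynomial trend
            res.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))          # fluctuation function F(s)
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(3)
print(dfa(rng.normal(size=8192)))  # near 0.5 for white noise
```

Passing `order=2` corresponds to the quadratic detrending of Fig. 5.6b.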
Figure 5.7: (a), (b) and (c) represent the log F(s) vs. log s plots for β = 0, 0.5 and 1 noise respectively; the slopes of the fitted straight lines give α = 0.4882, 0.7486 and 0.9962. For each plot, the length of the noise is 2000, and the scales are chosen as s = 16, 32, 64, 128.
DFA. The corresponding results are given in Fig. 5.8c. From Fig. 5.8c, it can be observed that the values of α always differ for the different noises. So, the autocorrelation method and power spectral analysis do not always describe the true characteristics of the signals; it is thus better to determine the nature of a signal by DFA.
Figure 5.8: (a), (b) and (c) represent the autocorrelation $A_{corr}(s)$, the power spectrum S(f), and the log s vs. log F(s) plots for $\frac{1}{f^0}$-, $\frac{1}{f^{0.5}}$-, and $\frac{1}{f}$-noise respectively. For each plot, the same length (L = 2000) of the time series is considered. Straight lines are fitted to the linear trend of the log s vs. log F(s) plots, and their gradients are taken as the values of α. In each case, the scales are chosen as s = [16, 32, 64, 128, 256, 512, 1024].
Note
In DFA, the detrended profile $\tilde{X}(j)$ changes abruptly at the boundaries of the segments, since the fitted polynomials in neighbouring segments are sometimes uncorrelated. To smooth the abrupt jumps of $\tilde{X}(j)$, the windows (in which the polynomials are fitted) can be taken to be mutually overlapping. It has been noted, however, that this approach takes too much computation time. Due to this drawback, several modifications of the DFA method have been developed [50]: the Backward Moving Average (BMA) technique, the Centered Moving Average (CMA) method, Modified Detrended Fluctuation Analysis (MDFA), Fourier DFA, empirical mode decomposition, singular value decomposition, and DFA based on high-pass filtering.
Sometimes a single scaling law is not able to identify the proper self-similar structure in the STS. In this case, the fluctuations need to be studied at different scales, and the concept of multifractality is thus introduced. In the following section, we discuss multifractal STS.
Step-1: Find the positions $\tau_j$, where j = 1, 2, . . . , $j_{max}$, for which $L_\psi(\tau, s)$ satisfies the maximum condition. This condition gives the maxima of $|L_\psi(\tau, s)|$, which are known as the modulus maxima. The reason for considering the modulus is
For increasing s, it can be observed that Z(q, s) obeys the law $Z(q, s) \sim s^{\tau(q)}$, where τ(q) is defined by
$$Z_q(s) = \sum_{\nu=1}^{N_s} |X(\nu, s)|^q \sim s^{\tau(q)} \quad \text{for } s > 0. \qquad (5.20)$$
The quantity X(ν, s) is calculated by $X(\nu, s) = \sum_{i=1}^{s} x((\nu - 1)s + i)$ for ν = 1, 2, . . . , $N_s$, where $N_s = [N/s]$ (N being the length of the given STS $\{x(k)\}_{k=1}^N$).
Step-1: Calculate the mean ⟨x⟩ of the time series $\{x(k)\}_{k=1}^N$, where ⟨x⟩ is given by
$$\langle x \rangle = \frac{1}{N} \sum_{k=1}^{N} x(k).$$
$$X(k) = \sum_{i=1}^{k} \left[y(i) - \langle y(i) \rangle\right] \quad (k = 1, 2, \ldots, N). \qquad (5.21)$$
Step-4: For each segment $A_\nu$, fit a local trend $X_{\nu,s}(k)$ (a linear or higher order polynomial) to X(k) and subtract it from X(k). Then calculate the detrended variance $F^2(\nu, s)$ by
$$F^2(\nu, s) = \frac{1}{s} \sum_{k=(\nu-1)s+1}^{\nu s} \left[X(k) - X_{\nu,s}(k)\right]^2 \quad (\nu = 1, 2, \ldots, N_s) \qquad (5.22)$$
where q takes any real value except zero. Using the above method, it can be seen that the 0-th order fluctuation reveals divergent exponents. Instead, a logarithmic average approach gives us
$$F_0(s) = \exp\left\{\frac{1}{2N_s} \sum_{\nu=1}^{N_s} \ln\left[F^2(\nu, s)\right]\right\} \sim s^{h(0)}. \qquad (5.24)$$
STS. The exponent h(q) is called the scaling exponent or generalized Hurst exponent (GHE). The value of h(q) is calculated from the slope of the linear regression of the log $F_q(s)$ vs. log s plot.
Note
For positive and negative values of q, h(q) describes large- and small-scale fluctuations respectively. If a time series is monofractal, then h(q) is independent of q; in this case, the scaling behaviour of the variances $F_q(s)$ is identical for all s. On the other hand, it has been observed that h(q) varies with q for a multifractal process, indicating non-identical $F_q(s)$. So, the monofractality or multifractality of a time series can be identified using the GHE. The computation of the GHE needs appropriate values of q. If we set q > 0, then the segments $A_\nu$ with greater fluctuations (segments with relatively high $F^2(\nu, s)$) are given a larger weight in $F_q(s)$ than the segments with relatively smaller fluctuations. The opposite holds for q < 0. The next important parameter is the degree of the polynomials which are fitted to detrend the fluctuations. Since MF-DFA is suitable for non-stationary processes, MF-DFA can be conducted with different polynomials in order to decide on the best data fit. It has been observed that overfitting leads to values of $F_q(s)$ close to zero for small s. As far as the selection of the scale is concerned, the smallest scale needs to contain enough elements so that the computed local fluctuation is reliable; in most studies, the minimum scale is taken between 10 and 20. The maximum scale must leave enough segments for the computation of the fluctuation function; most studies take s ≤ N/10.
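The MF-DFA steps above can be condensed into a short sketch computing the generalized Hurst exponents h(q) for q ≠ 0 (the name `mfdfa_h` and the scale set are our assumptions; linear detrending is used):

```python
import numpy as np

def mfdfa_h(x, q_list, scales=(16, 32, 64, 128, 256)):
    """Generalized Hurst exponents h(q) (q != 0) from MF-DFA variances."""
    x = np.asarray(x, dtype=float)
    X = np.cumsum(x - x.mean())                  # the profile, as in (5.21)
    F2 = {}
    for s in scales:
        n = len(X) // s
        t = np.arange(s)
        v = []
        for seg in X[:n * s].reshape(n, s):
            c = np.polyfit(t, seg, 1)            # linear detrending
            v.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F2[s] = np.array(v)                      # detrended variances (5.22)
    h = {}
    for q in q_list:
        # q-th order fluctuation function and its log-log slope
        Fq = [np.mean(F2[s] ** (q / 2.0)) ** (1.0 / q) for s in scales]
        h[q], _ = np.polyfit(np.log(scales), np.log(Fq), 1)
    return h

rng = np.random.default_rng(4)
print(mfdfa_h(rng.normal(size=8192), q_list=[-2, 2]))
# for monofractal white noise, h(-2) and h(2) are both near 0.5
```

For a monofractal series h(q) is flat in q, while a spread between h(−2) and h(2) signals multifractality, consistent with the Note above.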
Figure 5.9: (a) and (b) represent power noise $\frac{1}{f^\beta}$ with β = 1.50 and 1.75 respectively; each time series is of length 3000. (c) and (d) represent the corresponding τ(q) vs. q curves for q ∈ [−5, 5], with fitted values τ(5) = 2.5116, h(−5) = 0.75417, h(0) = 0.72715 in (c) and τ(5) = 1.5868, h(−5) = 1.4477, h(0) = 0.98634 in (d). In each computation, the scale is chosen as $2^N$, N = 3, 4, . . . , and the degree of the fitted polynomial is taken as 1.
Example 5.6 We first consider two power noises, $\frac{1}{f^{1.50}}$ and $\frac{1}{f^{1.75}}$, shown in Fig. 5.10a and b respectively. From the figures, it can be seen that both STS appear statistically equivalent in nature. Using (5.25), we calculate the respective f(α) spectra. The corresponding f(α) vs. α curves are shown in
Figure 5.10: (a) and (b) represent power noise $\frac{1}{f^\beta}$ with β = 1.50 and 1.75 respectively. Each time series is of length 3000. (c) represents the multifractal spectra of $\frac{1}{f^{1.50}}$ (solid) and $\frac{1}{f^{1.75}}$ (dotted). In each computation, the scale is chosen as $2^N$, N = 3, 4, . . . , 15 (the degree of the fitted polynomial is taken as 1), and q ∈ [−5, 5].
Fig. 5.10c. From the figure, completely different multifractal spectra can be observed, which indicates a separate multifractal structure in each of the respective STS.
f (α) = a1 + a2 (α − α0 ) + a3 (α − α0 )2 + a4 (α − α0 )3 + a5 (α − α0 )4 . (5.27)
5.6 Discussion
This chapter highlights the statistical self-similarity of a univariate time series obtained from a stochastic system. For an unknown system, it is therefore the primary task to check the stochastic nature of the corresponding time series. Several methods have been proposed to identify the stochasticity of a time series [59, 45]. Among them, the DVV method [60] is the most robust and effective one. The STS can be obtained in two forms, stationary and non-stationary. A stationary STS shows an invariant probability distribution over every time interval. On the contrary, a variable probability distribution is observed in a non-stationary STS over different time intervals. We have numerically investigated the stationary as well as non-stationary nature of time series by the auto-correlation method. The results show the effectiveness of the auto-correlation method.
The self-similar nature is explained for both stationary and non-stationary stochastic time series. In both cases, self-similarity can be characterized through the theory of fluctuations in the STS. In order to define the fluctuation, different types of statistical measures have been proposed. In fact, these measures correspond to scaling laws which characterize the underlying monofractal and multifractal structure of the
Chapter 6
Application in Image Processing
6.1 Introduction
Signal processing is a broad branch of electrical engineering. This area studies different signals, such as image signals, video signals, voice/sound signals, transmission signals, etc., and their properties. Image processing is a subarea of signal processing that focuses on the behavior and characteristics of image signals. These signals are represented in two variants based on their nature: the analog image and the digital image.
An image that is manipulated using analog (electrical) signals is called an 'Analog Image', whereas an image composed of discrete signals obtained using sampling and quantization methods is called a 'Digital Image'.
In addition, dimpT K = ∞, if the condition (2) does not hold for any
n ∈ N, and dimT K = ∞ if the condition (3) does not hold for any
n ∈ N.
If U is any non-empty subset of n-dimensional Euclidean space $\mathbb{R}^n$, the diameter of U is defined as $|U| = \sup\{|x - y| : x, y \in U\}$, i.e., the greatest distance apart of any pair of points in U. If $\{U_i\}$ is a countable (or finite) collection of sets of diameter at most δ that cover K, i.e., $K \subset \bigcup_{i=1}^{\infty} U_i$ with $0 < |U_i| \leq \delta$ for each i, we say that $\{U_i\}$ is a δ-cover of K.
Suppose that K is a subset of $\mathbb{R}^n$ and s is a non-negative number. For any δ > 0 we define
$$\mathcal{H}^s_\delta(K) = \inf\left\{\sum_{i=1}^{\infty} |U_i|^s : \{U_i\} \text{ is a } \delta\text{-cover of } K\right\}.$$
As δ decreases, the class of permissible covers of K shrinks, so $\mathcal{H}^s_\delta(K)$ increases and approaches a limit $\mathcal{H}^s(K) = \lim_{\delta \to 0} \mathcal{H}^s_\delta(K)$. This limit exists for any subset K of $\mathbb{R}^n$, though the limiting value can be 0 or ∞. We call $\mathcal{H}^s(K)$ the s-dimensional Hausdorff measure of K.
Then, the Hausdorff Dimension or Hausdorff-Besicovitch Dimension of K is defined as
$$\dim_H(K) = \inf\{s \geq 0 : \mathcal{H}^s(K) = 0\} = \sup\{s : \mathcal{H}^s(K) = \infty\},$$
so that
$$\mathcal{H}^s(K) = \begin{cases} \infty, & \text{if } s < \dim_H(K), \\ 0, & \text{if } s > \dim_H(K). \end{cases}$$
If $s = \dim_H(K)$, then $\mathcal{H}^s(K)$ may be zero or infinite, or may satisfy $0 < \mathcal{H}^s(K) < \infty$.
Hausdorff dimension has the advantage of being defined for any set,
and is mathematically convenient, as it is based on measures, which are
relatively easy to manipulate. A main disadvantage is that the explicit computation of the Hausdorff dimension of a given set K is rather difficult, since it involves taking the infimum over covers consisting of balls of radius less than or equal to a given ε > 0. A slight simplification is obtained by considering only covers by balls of radius exactly ε. This gives rise to the concept of box dimension.
where M is the total number of pixels of the image and P(k, ε) denotes the probability that there are k points within a box of size ε, centred about an arbitrary point of the image. The probability P(k, ε) can be estimated by the following relation:
$$P(k, \epsilon) = \frac{n(k, \epsilon)}{N_r} \qquad (6.4)$$
where $N_r$ is the number of randomly chosen reference points from the image and n(k, ε) denotes the number of boxes of size ε, centred around each reference point, containing k points of the image.
Differential Box Counting (DBC): Consider an image of size M ×M .
The domain of the image is partitioned into grids of size r × r. On each
grid there is a column of boxes of size r × r × h, where h is the height of
a single box. If the total number of gray levels is G then G/h = M/r.
where $d_r(i, j)$ denotes the difference between the maximum and the minimum gray level of the image in the grid (i, j), k = M/G, and ⌈x⌉ denotes the ceiling function of x, i.e., the smallest integer greater than or equal to x.
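One common variant of the differential box counting scheme just described can be sketched as follows. The function name `dbc_fd`, the scale set, and the use of a single ceiling of the gray-level span per grid are our simplifying assumptions for a square image.

```python
import numpy as np

def dbc_fd(img, sizes=(2, 4, 8, 16, 32)):
    """Differential box counting estimate of the fractal dimension."""
    M = img.shape[0]
    G = int(img.max()) + 1                   # number of gray levels
    counts, lnr = [], []
    for r in sizes:
        h = G * r / M                        # box height, so that G/h = M/r
        n = 0
        for i in range(0, M - M % r, r):
            for j in range(0, M - M % r, r):
                blk = img[i:i + r, j:j + r]
                # boxes of height h needed to span the gray levels in this grid
                n += int(np.ceil((int(blk.max()) - int(blk.min()) + 1) / h))
        counts.append(n)
        lnr.append(np.log(1.0 / r))
    D, _ = np.polyfit(lnr, np.log(counts), 1)
    return D

rng = np.random.default_rng(6)
img = rng.integers(0, 256, size=(64, 64))    # a very rough gray-level surface
print(dbc_fd(img))                           # between 2 and 3
```

A gray-level surface is at least a 2-dimensional object, so the estimate lies between 2 (smooth) and 3 (maximally rough).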
Correlation Algorithm: A very popular way to compute the dimension is to use the correlation algorithm, which estimates the dimension based on the statistics of pairwise distances. According to this algorithm, the dimension is defined as
$$\nu = \lim_{r \to 0} \frac{\ln C(r)}{\ln r},$$
where C(r) is the correlation integral given by
$$C(r) = \frac{\text{number of distances less than } r}{\text{total number of distances}}.$$
The correlation algorithm provides a particularly elegant formulation and has the substantial advantage that the function C(r) can be estimated even for r as small as the minimum interpoint distance. For an image with M pixels, C(M, r) has a dynamic range of O(M²). Logarithmically speaking, this range is twice that available in the box-counting method (see [46, 72, 84]).
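The correlation integral translates into a few lines of code. The sketch below (our function name `correlation_dimension`; the radii are assumptions) estimates ν as the slope of ln C(r) against ln r over a small set of radii.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Correlation dimension: slope of ln C(r) versus ln r."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))             # full pairwise distance matrix
    iu = np.triu_indices(len(pts), k=1)          # each pair counted once
    dist = d[iu]
    C = [np.mean(dist < r) for r in radii]       # fraction of distances < r
    nu, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return nu

rng = np.random.default_rng(5)
pts = rng.random((800, 2))                       # uniform in the unit square
print(correlation_dimension(pts, [0.02, 0.04, 0.08, 0.16]))  # close to 2
```

For uniformly scattered planar points the estimate approaches 2, with a small downward bias from boundary effects at the larger radii.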
$RE_1$ is called the Shannon entropy [34, 43]. The Renyi Fractal Dimensions or Generalized Fractal Dimensions (GFD) of order q ∈ (−∞, ∞) are defined, in terms of the generalized Renyi entropy, as
$$D_q = \lim_{r \to 0} \frac{1}{q - 1} \frac{\ln \sum_{i=1}^{N} p_i^q}{\ln r} \qquad (6.5)$$
where $p_i$ is the probability distribution. As q → 1, $D_q$ converges to $D_1$, which is given by
$$D_1 = \lim_{r \to 0} \frac{\sum_{i=1}^{N} p_i \ln p_i}{\ln r}, \qquad (6.6)$$
where $D_1$ is the information dimension and $D_q$ is a monotonically decreasing function of q such that $D_0 \geq D_1 \geq D_2$. Here $D_0$ and $D_2$ denote the fractal dimension and the correlation dimension respectively.
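Equations (6.5) and (6.6) can be sketched numerically from box probabilities. The names `box_probs` and `renyi_dq` and the scale set are ours; the limit r → 0 is replaced, as usual in practice, by a linear fit over a range of box sizes.

```python
import numpy as np

def box_probs(img, r):
    """Box probabilities p_i: mass of each r-by-r box over the total mass."""
    M = img.shape[0] - img.shape[0] % r
    boxes = img[:M, :M].reshape(M // r, r, M // r, r).sum(axis=(1, 3))
    p = boxes.ravel().astype(float)
    p = p[p > 0]
    return p / p.sum()

def renyi_dq(img, q, sizes=(2, 4, 8, 16, 32)):
    """D_q via a linear fit of the Renyi sum in (6.5) against ln r."""
    xs, ys = [], []
    for r in sizes:
        p = box_probs(img, r)
        if q == 1:
            ys.append(np.sum(p * np.log(p)))     # Shannon limit, eq (6.6)
        else:
            ys.append(np.log(np.sum(p ** q)) / (q - 1))
        xs.append(np.log(r))
    Dq, _ = np.polyfit(xs, ys, 1)
    return Dq

img = np.ones((64, 64))   # uniform image: every D_q equals 2
print(renyi_dq(img, 0), renyi_dq(img, 1), renyi_dq(img, 2))
```

A uniform image gives $D_0 = D_1 = D_2 = 2$; a non-uniform gray-level distribution makes the sequence strictly decreasing.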
Multifractal Spectra: The multifractal spectrum f(α(q)) is defined as
$$f(q) = \lim_{r \to 0} \frac{\sum_{i=1}^{N(r)} \mu_i(q, r) \ln(\mu_i(q, r))}{\ln r}, \qquad (6.7)$$
$$\alpha(q) = \lim_{r \to 0} \frac{\sum_{i=1}^{N(r)} \mu_i(q, r) \ln(P_i(r))}{\ln r}, \qquad (6.8)$$
where the normalized measures $\mu_i$ are defined, in terms of the probabilities $P_i$, as
$$\mu_i(q, r) = \frac{[P_i(r)]^q}{\sum_{i=1}^{N(r)} [P_i(r)]^q}, \qquad (6.9)$$
here N(r) is the number of boxes required to cover the object with box size r and $P_i$ is the probability of the ith box of size r. Generally $P_i$ is defined as
$$P_i = \frac{\text{area of the } i\text{th part}}{\text{total area}}.$$
Some properties of the multifractal spectrum are as follows:
- The spectrum f(α) is concave.
- f(α) has a single inflection point at q = 0.
- At q = 0, f achieves its maximum; f(α(0)) = D₀ is the fractal dimension.
- In the case of a monofractal analysis, the spectrum f(α) reduces to a single point.
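Equations (6.7)-(6.9) can be evaluated directly; the sketch below computes one point (α(q), f(q)) of the spectrum (the names `box_probs` and `falpha_point` and the scale set are our assumptions, with the limits again replaced by linear fits over box sizes).

```python
import numpy as np

def box_probs(img, r):
    """Box probabilities P_i of the measure underlying eq (6.9)."""
    M = img.shape[0] - img.shape[0] % r
    boxes = img[:M, :M].reshape(M // r, r, M // r, r).sum(axis=(1, 3))
    p = boxes.ravel().astype(float)
    p = p[p > 0]
    return p / p.sum()

def falpha_point(img, q, sizes=(2, 4, 8, 16, 32)):
    """One point (alpha(q), f(q)) of the spectrum via eqs (6.7)-(6.9)."""
    fs, als, lnr = [], [], []
    for r in sizes:
        p = box_probs(img, r)
        mu = p ** q / np.sum(p ** q)    # normalized measures, eq (6.9)
        fs.append(np.sum(mu * np.log(mu)))
        als.append(np.sum(mu * np.log(p)))
        lnr.append(np.log(r))
    f, _ = np.polyfit(lnr, fs, 1)       # eq (6.7)
    a, _ = np.polyfit(lnr, als, 1)      # eq (6.8)
    return a, f

img = np.ones((64, 64))                 # uniform image
print(falpha_point(img, 2))             # the spectrum collapses to (2, 2)
```

Sweeping q over a range and plotting f against α traces out the concave spectrum described above; for the uniform (monofractal) image it collapses to the single point (2, 2).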
Image D0 D1 D2
Cameraman 1.1959 1.1881 1.1855
Lena 1.2434 1.2398 1.2379
Mandrill 1.2489 1.2480 1.2471
Peppers 1.2465 1.2433 1.2407
Pirate 1.2412 1.2341 1.2294
Step 2: Let N be the number of boxes required to cover the image with box size r.
$$p_i = \frac{X_i}{X}$$
—————————————————————————————–
Algorithm: Mid-sagittal Plane Detection from MRI
—————————————————————————————–
Phase I: Computation of Generalized Fractal Dimension of MRI
—————————————————————————————–
Step 1: Read a brain MRI I .
Step 3: The probability $P_i$ for the ith box of size r in the image is defined as
$$P_i = \frac{X_i}{X},$$
where $X_i$ is the intensity value of the image in the corresponding ith box of size r and $X = \sum_{i=1}^{N} X_i$.
Begin
  for i : 1 → r
    for j : 1 → r
      mass(i, j) = sum(sum(I(i∗N − (N − 1) : i∗N, j∗N − (N − 1) : j∗N)));
      p(i, j) = mass(i, j)/X;
      if q ≠ 1
        pq(i, j) = [p(i, j)]^q;
      else
        pq(i, j) = q × [p(i, j)];
  pi ← sum(sum(pq));
  D(q) ← ln(pi)/ln(r);
End
—————————————————————————————
Phase II: Detection of Mid-sagittal Plane
—————————————————————————————
Step 6: Extract the initial sagittal region ∆ from the image I by re-
moving/subtracting the extreme 20% of region on both sides.
Step 11: Consider the RoI centered on c∗, and select a sagittal plane s within the RoI centered on [c∗ ± δ], where δ is 0.5 mm.
Step 12: Apply multifractal spectra on s using Eq. (6.7) and Eq. (6.8)
to acquire IM F .
End
Step 13: Compute $\lambda_s \leftarrow \dfrac{\sum_{q \in s} f(\alpha(q))}{\sum_{q \in s} q}$.
Step 14: Find the optimal sagittal plane s0 that gives the ∆γ value for
γ from ROI of MRI planes, where ∆γ = γmax − γmin .
—————————————————————————————–
In the developed algorithm, Phase I elucidates the computational procedure of the generalized fractal dimensions for the input MRI. In order to obtain an accurate estimation of the similarity between the left and right hemispheres, the symmetry measure is defined in terms of the GFD, as illustrated in Step 9. Hence, the proposed algorithm performs well even on asymmetrical or pathological images.
Table 6.4: Brain MRI datasets for evaluation of the proposed method.
Figure 6.6: Simulated brain MRI with intensity non-uniformity 20% at different noise levels, for slice thicknesses 3 mm, 5 mm and 7 mm.
The yaw angle error $\phi'_y$ and roll angle error $\phi'_r$ are estimated as the difference between the MSP detected by the proposed method and the ground truth MSP.
The angular deviation θ is calculated between ground truth line
and MSP estimated by the proposed method. Furthermore, the average
deviation of the distance (d, in pixels) is estimated from the upper and
lower endpoints between the estimated MSPs and the ground truth
lines as
$$d = \frac{\sqrt{(a - x)^2 + (b - y)^2} + \sqrt{(a' - x')^2 + (b' - y')^2}}{2}, \qquad (6.13)$$
where (a, b), (a′, b′) denote the upper and lower endpoints of the MSP estimated by the proposed method, and (x, y), (x′, y′) denote the upper and lower endpoints of the ground truth MSP [82].
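Equation (6.13) is a simple average of two endpoint distances; a minimal sketch (the function name `msp_distance` is ours):

```python
import math

def msp_distance(est_top, est_bot, gt_top, gt_bot):
    """Average endpoint deviation d of eq (6.13)."""
    (a, b), (a2, b2) = est_top, est_bot     # estimated MSP endpoints
    (x, y), (x2, y2) = gt_top, gt_bot       # ground truth endpoints
    return (math.hypot(a - x, b - y) + math.hypot(a2 - x2, b2 - y2)) / 2

# an estimated plane shifted one pixel from the ground truth
print(msp_distance((0, 0), (0, 10), (1, 0), (1, 10)))  # -> 1.0
```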
Figure 6.7: Simulated brain MRI with intensity non-uniformity 40% at different noise levels, for slice thicknesses 3 mm, 5 mm and 7 mm.
that it still requires a certain degree of symmetry between the left and right hemispheres. Therefore, it may not provide accurate results on images with severe global asymmetry, such as substantial hemispheric removal. This limitation cannot be fully overcome by any correlation-based technique. In order to overcome this problem, the proposed method uses the generalized fractal dimensions as a symmetry measure and the multifractal spectra for the accurate estimation of the MSP from brain MRI samples. The results of the proposed method for MSP extraction are presented here.
A fractal is defined as a set whose non-integer Hausdorff dimension exceeds its topological dimension. Choosing q = 0 in Eq. (6.5) provides the fractal dimension $D_0$. In Tables 6.5 and 6.6, the fractal dimension $D_0$ lies between 1 and 2, while the digital MRI datasets used in the proposed method are subsets of the Euclidean plane with topological dimension 1. Hence, these images are fractal objects which have an irregular distribution of gray levels although they possess a self-similar structure.
In Table 6.5, the tabulated values of generalized fractal dimensions
D0 , D1 and D2 of simulated MRI data as shown in Fig. 6.6, which are
Table 6.5: Analysis of the multifractal strength Δα, the maximum value αmax of
the multifractal spectrum, and the generalized fractal dimensions of brain MRI
with INU 20%, for different slice thicknesses.
Table 6.6: Analysis of the multifractal strength Δα, the maximum value αmax of
the multifractal spectrum, and the generalized fractal dimensions of brain MRI
with INU 40%, for different slice thicknesses.
Table 6.7: Comparison of the proposed method with the ground-truth MSP in
terms of the angular deviation θ and the average distance deviation d.
[Figure: Generalized dimensions Dq versus q (left panels) and multifractal
spectra f(q) versus α(q) (right panels) of the simulated brain MRI with
INU = 20%, for slice thicknesses of 3 mm (a, b), 5 mm (c, d) and 7 mm (e, f),
at noise levels of 5%, 7% and 9%.]
[Figure: Generalized dimensions Dq versus q (left panels) and multifractal
spectra f(q) versus α(q) (right panels) of the simulated brain MRI with
INU = 40%, for slice thicknesses of 3 mm (a, b), 5 mm (c, d) and 7 mm (e, f),
at noise levels of 5%, 7% and 9%.]
Figure 6.10: Extracted MSP from the simulated MRI brain volume shown in
Fig. 6.6, for slice thicknesses of 3 mm, 5 mm and 7 mm.
Figure 6.11: Extracted MSP from the simulated MRI brain volume shown in
Fig. 6.7, for slice thicknesses of 3 mm, 5 mm and 7 mm.
Figure 6.12: Comparison between the MSP extracted by the proposed method and
three state-of-the-art methods on MRI images: (a) Normal, (b) Stroke,
(c) Tumour-benign, (d) Infarct, (e) Tumour-malignant.
6.6.4 Conclusion
The proposed algorithm incorporates the generalized fractal dimensions as
a symmetry measure, together with the multifractal spectra, to refine the
optimal mid-sagittal plane. This combination of GFD and the multifractal
spectra yields a more accurate and efficient extraction of the
mid-sagittal plane from brain MRI.
[56] John C. Russ, The Image Processing Handbook, 4th ed., CRC
Press, London, 2002.
[57] K.J. Falconer, Fractal Geometry: Mathematical Foundations and
Applications, 2nd edition, John Wiley & Sons Ltd., England,
2003.
[58] M.E.J. Newman, SIAM Review, 45, 167, 2003.
[59] C. Jingdong, Filtering Techniques for Noise Reduction and
Speech Enhancement, Chapter in Adaptive Signal Processing:
Applications to Real-World Problems, Springer Berlin Heidel-
berg, 129–154, 2003.
[60] T. Gautama, D.P. Mandic and M.M.V. Hulle, The delay vector
variance method for detecting determinism and nonlinearity in
time series, Physica D, 190, 167–176, 2004.
[61] S.N. Majumdar, D.S. Dean and P.L. Krapivsky, Understanding
search trees via statistical physics, Pramana - J. Phys., 64(6),
1175–1189, 2005.
[62] Y.S. Liang and W.Y. Su, The relationship between the fractal
dimensions of a type of fractal functions and the order of their
fractional calculus, Chaos, Solitons and Fractals, 34, 682–692,
2007.
[63] K.R. Castleman, Digital Image Processing, Pearson Education
India, 1st ed., 2007.
[64] G. Edgar, Measure, Topology, and Fractal Geometry, 2nd edition,
Springer, New York, 2008.
[65] Y. Zhang and Q. Hu, A PCA-based approach to the representa-
tion and recognition of MR brain midsagittal plane images, 30th
Annual International Conference of the IEEE-EMBS, 3916–3919,
2008.
[66] M.K. Hassan and M.Z. Hassan, Condensation-driven aggregation
in one dimension, Phys. Rev. E 77, 061404, 2008.
[67] G.W. Delaney, S. Hutzler and T. Aste, Relation between grain
shape and fractal properties in random apollonian packing with
grain rotation, Phys. Rev. Lett., 101, 120602, 2008.
[68] M.K. Hassan and M.Z. Hassan, Emergence of fractal behavior
in condensation-driven aggregation, Phys. Rev. E, 79(2), 021406,
2009.
188 References
[69] P.A. Varotsos, N.V. Sarlis and E.S. Skordas, Detrended fluctu-
ation analysis of the magnetic and electric field variations that
precede rupture, Chaos 19, 023114, 2009.
[70] M.K. Hassan and M.Z. Hassan, Emergence of fractal behavior in
condensation-driven aggregation, Phys. Rev. E, 79, 021406, 2009.
[71] J. Aguirre, R.L. Viana and M.A.F. Sanjuan, Rev. Mod. Phys. 81,
333, 2009.
[72] Jian Li, Qian Du and Caixin Sun, An improved box-counting
method for image fractal dimension estimation, Pattern Recog-
nition, 42, 2460–2469, 2009.
[73] R.K.P. Zia, E.F. Redish and S.R. McKay, Am. J. Phys., 77, 614,
2009.
[74] P.R. Massopust, Interpolation and Approximation with Splines
and Fractals, Oxford University Press, New York, 2010.
[75] M.K. Hassan, M.Z. Hassan and N.I. Pavel, Scale-free network
topology and multifractality in a weighted planar stochastic lat-
tice, New J. Phys., 12, 093045, 2010.
[76] MRI Image Database, website:
http://www.cma.mgh.harvard.edu/ibsr/
[77] MRI Image Database, website:
http://www.bic.mni.mcgill.ca/brainweb/
[78] S.X. Liu, J. Kender, C. Mielinska and A. Laine, Employing sym-
metry features for automatic misalignment correction in neuroim-
ages, Journal of Neuroimaging, 21, 15–33, 2011.
[79] G.C.S. Ruppert, L.A. Teverovskiy, C.P. Yu, A.X. Falcao and
Y. Liu, A new symmetry-based method for mid-sagittal plane
extraction in neuroimages, IEEE International Symposium on
Biomedical Imaging: From Nano to Macro, 285–288, 2011.
[80] M.K. Hassan, M.Z. Hassan and N.I. Pavel, J. Phys: Conf. Ser,
297, 012010, 2011.
[81] A. Biswas, T.B. Zeleke and B.C. Si, Multifractal detrended fluc-
tuation analysis in examining scaling properties of the spatial
patterns of soil water storage, Nonlin. Processes Geophys., 19,
227–238, 2012.
[82] S.A. Jayasuriya, A.W.C. Liew and N.F. Law, Brain symmetry
plane detection based on fractal analysis, Computerized Medical
Imaging and Graphics, 37, 568–580, 2013.
[83] M.K. Hassan, M.Z. Hassan and N. Islam, Emergence of fractals in
aggregation with stochastic self-replication, Phys. Rev. E, 88(4),
042137, 2013.
[84] R. Uthayakumar and A. Gowrisankar, Generalized Fractal Di-
mensions in Image Thresholding Technique, Information Sciences
Letters, Natural Sciences, 3(3), 125–134, 2014.
[85] M.K. Hassan, N.I. Pavel, R.K. Pandit and J. Kurths, Chaos, Soli-
tons & Fractals, 60, 31–39, 2014.
[86] R. Uthayakumar and A. Gowrisankar, Generation of Fractals via
Self-Similar Group of Kannan Iterated Function System, Applied
Mathematics & Information Sciences, Natural Sciences, 9(6),
3245–3250, 2015.
[87] R. Uthayakumar and A. Gowrisankar, Attractor and self-similar
group of generalized fuzzy contraction mapping in fuzzy metric
space, Cogent Mathematics, Taylor & Francis 2(1): 1024579, 1–
12, 2015.
[88] A. Gowrisankar and R. Uthayakumar, Fractional calculus on
fractal interpolation for a sequence of data with countable it-
erated function system, Mediterranean Journal of Mathematics,
Springer, 13(3), 3887–3906, 2016.
[89] R. Uthayakumar and A. Gowrisankar, Mid-sagittal plane detec-
tion in magnetic resonance image based on multifractal tech-
niques, IET Image Processing, 10(10), 751–762, 2016.
[90] A. Gowrisankar, Generation of fractal through iterated function
systems, Ph.D. Thesis, The Gandhigram Rural Institute (Deemed
to be University), 2016.
[91] Y.S. Liang and Q. Zhang, A type of fractal interpolation functions
and their fractional calculus, Fractals, 24(2), 1650026, 2016.
[92] F.R. Dayeen and M.K. Hassan, Chaos, Solitons & Fractals, 91,
228, 2016.
[93] R.C. Gonzalez, R.E. Woods and S.L. Eddins, Digital Image Pro-
cessing Using MATLAB, 2nd ed., McGraw Hill Education, 2017.
Koch curve 40–42, 44, 63, 64
Kummer's function 81

L
Legendre transform 99, 100, 104, 125, 147, 150
Lipschitz-Hölder 148
Lipschitz-Hölder exponent 148
logarithmic scale 25, 29
long-term correlation 136, 140

M
magnetic resonance image 155, 167
magnitude 2, 3, 22
mass exponent 101, 102, 104, 105, 107–109, 112, 114, 121, 122, 125
mass-length relation 34–36, 42
measurements 1–5, 7, 9, 13, 16–19, 21, 167
Medical image 166
Mellin transform 118
MF-DFA 145–147
Midsagittal Plane 167
monofractal 130, 132, 135, 140–142, 147, 149–151, 156, 161
Monofractal dimension 156
monofractal STS 130, 135, 142, 151
multifractal 98, 99, 101, 105, 107–112, 114, 115, 120, 121, 123, 124, 126, 127, 129, 130, 132, 135, 144, 145, 147–151, 160–162, 165–168, 170, 173–178, 180, 181
multifractal analysis 98, 120, 121, 123, 129, 149, 167, 168
Multifractal dimension 160, 162
multifractal formalism 98, 101, 105, 107, 109, 111, 112, 120, 123, 124, 144
multifractal process 132, 147, 149
Multifractal Spectra 160, 161, 167, 168, 170, 173, 174, 176–178, 180, 181
multifractal STS 130, 144, 147, 151
multifractality 97, 101, 114, 123, 124, 126, 127, 129, 130, 144, 147, 149, 151
multi-scaled network 15

N
natural pattern 67
Newton's second law 3, 5
Noise 136–139, 141–143, 148, 149, 155, 171–178, 180
non-integer dimension 32, 35
non-stationary time series 133–135

O
Open ball 48, 51
orthogonal 117, 171
Otsu method 165

P
partition function 101, 105–107, 109, 111, 120, 122, 125, 144
pathological image 170, 175, 180
polynomial fitting 141
power monomial law 17, 18
power noise 137–139, 141, 142, 148, 149
Power spectral density 142
power-law 4, 18, 19, 21–23, 25, 28–30, 36, 41, 72, 76, 77, 88, 89, 91, 101, 121, 124, 125
Power-law distribution 21, 22, 25, 29, 30, 101
pre-fractal 41
Pythagoras theorem 11
Pythagorean theorem 8, 11

Q
q-th order fluctuation 145, 146

R
Random fractal 37, 45, 70, 71, 73, 127
random process 130
Random walk 23–25, 135–137, 139, 140
region nonuniformity 164–166
Relative Differential Box Counting 159
Rényi entropy 160
Rényi Fractal Dimensions 160
rescale range fluctuation 138
right skewed 150
Roll angle 171, 172, 175, 179

S
scale-free 18, 19, 21
scale-invariance 1, 4, 18
scaling function 9, 17, 80, 83, 92, 93, 95
scaling law 16, 20, 130, 132, 135, 138–140, 144, 150, 151, 158
Self-referential equation 60
self-similarity 1, 4, 8, 11, 13–16, 18, 19, 35–37, 45, 67, 69, 75, 85, 95, 130, 132, 135, 137, 150, 167
Shannon entropy 160
short-term correlation 136
Sierpinski gasket 42–44, 62, 63
similarity 1, 4, 8, 11, 13–16, 18, 19, 35–37, 45, 67, 69, 75, 85, 95, 130, 132, 135, 137, 150, 167
similarity parameters 12, 13, 132
singularity spectrum 148, 149
Smoluchowski's equation 85
smooth or regular structure 149
spectrum of multifractal 108, 115
square lattice 111, 115, 124, 125
Statistical self-similarity 15, 45, 137, 150
Stirling's formula 28
Stochastic dyadic Cantor set 77–79, 106
Stochastic fractal 69, 79, 85, 95, 98, 127
stochastic lattice 115, 116
stochastic process 70, 71, 129–132, 150, 151
stochastic self-replication 85, 93
stochastic time series 129–131, 139, 146, 150
Stochastic variable 16, 71
supremum 51, 53
symmetric measure 169, 181
symmetry parameter 149, 150

T
topological dimension 33, 36, 37, 45, 61, 67, 156, 164, 173
triadic Cantor set 38, 70, 72

U
uniform metric 47
uniformly continuous 47, 49, 50
usual metric 46, 49, 52

V
von Koch curve 40, 44, 63, 64

W
wavelet transform 130, 144, 151
weakly stationary 133, 134
Widom scaling 21

Y
yardstick 22, 23, 35, 36, 44
Yaw angle 171, 172, 175, 179

Z
Zipf law 29