Swarm Intelligence Seminar Report
Swarm intelligence (SI) is the collective behaviour of decentralized, self-organized systems, natural or artificial. The concept is employed in work on artificial intelligence. The
expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of
cellular robotic systems.
SI systems are typically made up of a population of simple agents or boids
interacting locally with one another and with their environment. The agents follow very
simple rules, and although there is no centralized control structure dictating how
individual agents should behave, local, and to a certain degree random, interactions
between such agents lead to the emergence of "intelligent" global behavior, unknown to the
individual agents. Natural examples of SI include ant colonies, bird flocking, animal
herding, bacterial growth, and fish schooling.
The application of swarm principles to robots is called swarm robotics, while
'swarm intelligence' refers to the more general set of algorithms. 'Swarm prediction' has
been used in the context of forecasting problems.
The term swarm describes the behaviour of an aggregate of animals of similar size and body orientation, often moving en masse or migrating in the same direction. Swarming is a general term that can be applied to any animal that swarms. The term is applied
particularly to insects, but can also be applied to birds, fish, various microorganisms such
as bacteria, and people.
The term flocking is usually used to refer to swarming behaviour in birds, while the terms
shoaling or schooling are used to refer to swarming behaviour in fish. The swarm size is a
major parameter of a swarm.
I. INTRODUCTION
SI is the property of a system whereby the collective behaviours of agents interacting locally with their environment cause coherent functional global patterns to emerge. SI provides a basis with which it is possible to explore distributed problem solving without centralized control or the provision of a global model. One of the core tenets of SI work is that a decentralized, bottom-up approach to controlling a system is often much more effective than a traditional, centralized approach. Groups that perform tasks effectively using only a small set of rules for individual behaviour are said to exhibit swarm intelligence. Swarm Intelligence is a property of systems of non-intelligent agents exhibiting collectively intelligent behaviour. In Swarm Intelligence, two individuals interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time. For years scientists have been studying social insects such as ants, bees and termites. The most amazing thing about social insect colonies is that there is no individual in charge. Consider, for example, the case of ants: the way social insects form highways and other amazing structures such as bridges, chains and nests, and perform complex tasks, is very different; they self-organize through direct and indirect interactions. The
characteristics of social insects are
1. Flexibility
2. Robustness
3. Self-Organization
Many current models use variations on these rules, often implementing them by
means of concentric "zones" around each animal. In the zone of repulsion, very close to the
animal, the focal animal will seek to distance itself from its neighbours to avoid collision.
Slightly further away, in the zone of alignment, the focal animal will seek to align its
direction of motion with its neighbours. In the outermost zone of attraction, which extends as far away from the focal animal as it is able to sense, the focal animal will seek to move towards a neighbour.
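The following short Python sketch illustrates the three-zone rule described above; the zone radii and the simple blending of the three responses are illustrative assumptions, not values taken from any particular model.

import numpy as np

# Illustrative zone radii (assumed values, not from a specific model).
R_REPULSE, R_ALIGN, R_ATTRACT = 1.0, 5.0, 15.0

def desired_direction(pos, vel, neighbour_pos, neighbour_vel):
    """Return a unit steering direction for the focal animal."""
    repulse, align, attract = np.zeros(2), np.zeros(2), np.zeros(2)
    for p, v in zip(neighbour_pos, neighbour_vel):
        offset = p - pos
        d = np.linalg.norm(offset)
        if d < 1e-9:
            continue
        if d < R_REPULSE:              # zone of repulsion: move away
            repulse -= offset / d
        elif d < R_ALIGN:              # zone of alignment: match heading
            align += v / (np.linalg.norm(v) + 1e-9)
        elif d < R_ATTRACT:            # zone of attraction: move closer
            attract += offset / d
    # Repulsion dominates; otherwise blend alignment and attraction.
    steer = repulse if np.any(repulse) else align + attract
    if not np.any(steer):
        steer = vel                    # no neighbours in range: keep heading
    return steer / (np.linalg.norm(steer) + 1e-9)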
Figure i ACO
Figure ii PSO
As stated before, PSO simulates the behaviour of bird flocking. Suppose the following scenario: a group of birds is randomly searching for food in an area, and there is only one piece of food in the area being searched. None of the birds knows where the food is, but each bird knows how far away the food is in each iteration. So what is the best strategy to find the food? An effective one is to follow the bird that is nearest to the food.
PSO takes this scenario as its model and uses it to solve optimization problems. In PSO, each single solution is a "bird" in the search space, which we call a "particle". All of the particles have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct the flying of the particles. The particles fly through the problem space by following the current optimum particles.
PSO is initialized with a group of random particles (solutions) and then searches for optima
by updating generations. In every iteration, each particle is updated by following two "best"
values. The first one is the best solution (fitness) it has achieved so far. (The fitness value is
also stored.) This value is called pbest. Another "best" value that is tracked by the particle
swarm optimizer is the best value, obtained so far by any particle in the population. This
best value is a global best and called gbest. When a particle takes part of the population as
its topological neighbors, the best value is a local best and is called lbest.
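As a small illustration of the difference between gbest and lbest, the following Python sketch picks lbest from a ring-shaped index neighbourhood; the ring topology, the neighbourhood size and the assumption of a minimisation problem are choices made only for this example.

def lbest_index(fitness, i, k=1):
    """Index of the best (lowest-fitness) particle among particle i's
    ring neighbours, i.e. the k particles on either side of index i."""
    n = len(fitness)
    neighbours = [(i + j) % n for j in range(-k, k + 1)]
    return min(neighbours, key=lambda j: fitness[j])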
After finding the two best values, the particle updates its velocity and position with the following equations (a) and (b):

v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])    (a)
present[] = present[] + v[]    (b)

Where:
v[] is the particle velocity,
present[] is the current particle (solution),
pbest[] and gbest[] are defined as stated before, i.e. the personal best and global best respectively,
rand() is a random number between (0, 1),
c1, c2 are learning factors. Usually c1 = c2 = 2.
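As a minimal sketch, the two update rules can be written directly in Python (numpy is assumed; the variable names mirror the notation above) alongside the pseudocode that follows:

import numpy as np

def update_particle(v, present, pbest, gbest, c1=2.0, c2=2.0):
    """Apply equations (a) and (b) to one particle (all arguments are
    numpy arrays of the same shape)."""
    r1, r2 = np.random.rand(*v.shape), np.random.rand(*v.shape)
    v = v + c1 * r1 * (pbest - present) + c2 * r2 * (gbest - present)  # (a)
    present = present + v                                              # (b)
    return v, present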
Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            Set the current value as the new pBest
    End
    Choose the particle with the best fitness value of all the particles as the gBest
    For each particle
        Calculate particle velocity according to equation (a)
        Update particle position according to equation (b)
    End
While maximum iterations or minimum error criteria is not attained
Particles' velocities on each dimension are clamped to a maximum velocity Vmax, which is a parameter specified by the user. If the sum of accelerations would cause the velocity on a dimension to exceed Vmax, then the velocity on that dimension is limited to Vmax.
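Putting the pieces together, the following self-contained Python sketch implements the loop above, including the Vmax clamping just described. The function and variable names are our own, the parameter defaults follow the typical settings listed later in this report, and a minimisation problem is assumed.

import numpy as np

def pso(fitness, dim, n_particles=20, bounds=(-10.0, 10.0),
        c1=2.0, c2=2.0, vmax=20.0, max_iter=1000, min_error=1e-6):
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.random.uniform(-vmax, vmax, (n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    for _ in range(max_iter):
        for i in range(n_particles):
            r1, r2 = np.random.rand(dim), np.random.rand(dim)
            v[i] = v[i] + c1 * r1 * (pbest[i] - x[i]) \
                        + c2 * r2 * (gbest - x[i])         # equation (a)
            v[i] = np.clip(v[i], -vmax, vmax)              # Vmax clamping
            x[i] = x[i] + v[i]                             # equation (b)
            f = fitness(x[i])
            if f < pbest_val[i]:                           # update pbest
                pbest[i], pbest_val[i] = x[i].copy(), f
                if f < gbest_val:                          # update gbest
                    gbest, gbest_val = x[i].copy(), f
        if gbest_val < min_error:                          # stop criterion
            break
    return gbest, gbest_val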
From the procedure, we can learn that PSO shares many common points with GA. Both
algorithms start with a randomly generated population, and both have fitness values
to evaluate the population. Both update the population and search for the optimum with
random techniques. Both systems do not guarantee success.
Compared with genetic algorithms (GAs), the information sharing mechanism in PSO is
significantly different. In GAs, chromosomes share information with each other, so the whole population moves like one group towards an optimal area. In PSO, only gbest (or lbest) gives out the information to others. It is a one-way information sharing mechanism.
The evolution only looks for the best solution. Compared with GA, all the particles tend to
converge to the best solution quickly even in the local version in most cases.
From the above case, we can learn that there are two key steps when applying PSO to
optimization problems: the representation of the solution and the fitness function. One of
the advantages of PSO is that PSO takes real numbers as particles. It is not like GA, which needs a change to binary encoding, or for which special genetic operators have to be used. For example, if we try to find the solution for f(x) = x1^2 + x2^2 + x3^2, the particle can be set as (x1, x2, x3), and the fitness function is f(x). Then we can use the standard procedure to find the optimum. The search is a repeated process, and the stop criteria are that the maximum iteration number is reached or the minimum error condition is satisfied.
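Using the pso() sketch shown earlier, this example would look roughly as follows (the expected result noted in the comment is illustrative):

import numpy as np

# Minimise f(x) = x1^2 + x2^2 + x3^2 with the particle represented as (x1, x2, x3).
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
print(best, best_val)   # expected to approach (0, 0, 0) with a value near 0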
There are not many parameters that need to be tuned in PSO. Here is a list of the parameters and their typical values.
The number of particles: the typical range is 20 - 40. Actually, for most problems 10 particles is large enough to get good results. For some difficult or special problems, one can try 100 or 200 particles as well.
Range of particles: this is determined by the problem to be optimized; you can specify different ranges for different dimensions of the particles.
Vmax: this determines the maximum change one particle can take during one iteration. Usually we set the range of the particle as Vmax; for example, for the particle (x1, x2, x3) with x1 in [-10, 10], Vmax = 20.
Learning factors: c1 and c2 are usually equal to 2. However, other settings have also been used in different papers, but usually c1 equals c2 and lies in the range [0, 4].
The stop condition: the maximum number of iterations the PSO executes and the minimum error requirement.
Global version vs. local version: we introduced two versions of PSO, the global and the local version. The global version is faster but might converge to a local optimum for some problems. The local version is a little slower but less easily trapped in a local optimum. One can use the global version to get a quick result and use the local version to refine the search.
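Collected as keyword arguments for the pso() sketch shown earlier, these typical settings might look like this (the particle range follows the x1 in [-10, 10] example above):

typical_params = dict(
    n_particles=20,        # 20 - 40 is typical; 10 often suffices
    bounds=(-10.0, 10.0),  # range of particles, problem dependent
    vmax=20.0,             # maximum velocity, here the width of the range
    c1=2.0, c2=2.0,        # learning factors
    max_iter=1000,         # stop condition: maximum number of iterations
    min_error=1e-6,        # stop condition: minimum error requirement
)
best, best_val = pso(lambda p: float((p ** 2).sum()), dim=3, **typical_params)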
a. Crowd Simulation
Artists are using swarm technology as a means of creating complex interactive systems
or simulating crowds.
Stanley and Stella in: Breaking the Ice was the first movie to make use of swarm technology
for rendering, realistically depicting the movements of groups of fish and birds using the
Boids system. Tim Burton's Batman Returns also made use of swarm technology for
showing the movements of a group of bats. The Lord of the Rings film trilogy made use of
similar technology, known as Massive, during battle scenes. Swarm technology is
particularly attractive because it is cheap, robust, and simple.
Airlines have used swarm theory to simulate passengers boarding a plane. Southwest
Airlines researcher Douglas A. Lawson used an ant-based computer simulation employing
only six interaction rules to evaluate boarding times using various boarding methods.
Ants build cemeteries by collecting dead bodies into a single place in the nest. They also
organize the spatial disposition of larvae into clusters with the younger, smaller larvae in
the cluster center and the older ones at its periphery. This clustering behavior has
motivated a number of scientific studies.
Wasps build nests with a highly complex internal structure that is well beyond the cognitive capabilities of a single wasp. Termites build nests whose dimensions are enormous when compared to a single individual, which can measure as little as a few millimeters. Scientists have been studying the coordination mechanisms that allow the construction of these structures and have proposed probabilistic models exploiting insect behaviour. Some
of these models are implemented in computer programs to produce simulated structures
that recall the morphology of the real nests.
Scientists have shown that these elegant swarm-level behaviors can be understood as
the result of a self-organized process where no leader is in charge and each individual
bases its movement decisions solely on locally available information: the distance,
perceived speed, and direction of movement of neighbours. These studies have inspired a
number of computer simulations that are now used in the computer graphics industry for
the realistic reproduction of flocking in movies and computer games.
In ant colony optimization (ACO), a set of software agents called "artificial ants" search
for good solutions to a given optimization problem transformed into the problem of finding
the minimum cost path on a weighted graph. The artificial ants incrementally build
solutions by moving on the graph. The solution construction process is stochastic and is
biased by a pheromone model, that is, a set of parameters associated with graph
components, the values of which are modified at runtime by the ants.
Figure ix ACO
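As an illustration of this construction-and-pheromone-update scheme, the following compact Python sketch applies it to finding a minimum-cost path in a weighted graph. The graph representation, parameter values and the simple evaporation/deposit rule are illustrative assumptions rather than a specific published ACO variant.

import random

def aco_shortest_path(graph, source, target, n_ants=10, n_iter=50,
                      alpha=1.0, beta=2.0, rho=0.1, q=1.0):
    """graph: dict mapping node -> {neighbour: edge_cost} (positive costs assumed)."""
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone trails
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        solutions = []
        for _ in range(n_ants):
            node, path, cost, visited = source, [source], 0.0, {source}
            while node != target:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:                    # dead end: abandon this ant
                    path = None
                    break
                # Choice biased by pheromone (tau) and inverse edge cost.
                weights = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                nxt = random.choices(choices, weights=weights)[0]
                cost += graph[node][nxt]
                path.append(nxt)
                visited.add(nxt)
                node = nxt
            if path is not None:
                solutions.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for edge in tau:                           # pheromone evaporation
            tau[edge] *= (1.0 - rho)
        for path, cost in solutions:               # deposit on edges of each solution
            for u, v in zip(path, path[1:]):
                tau[(u, v)] += q / cost
    return best_path, best_cost

# Small hypothetical example graph: the shortest A-to-D path is A-B-C-D with cost 3.0.
graph = {"A": {"B": 1.0, "C": 4.0}, "B": {"C": 1.0, "D": 5.0}, "C": {"D": 1.0}, "D": {}}
path, cost = aco_shortest_path(graph, "A", "D")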
Particle swarm optimization, in turn, is inspired by social behaviours in flocks of birds and schools of fish. In practice, in the
initialization phase each particle is given a random initial position and an initial velocity.
The position of the particle represents a solution of the problem and therefore has a value,
given by the objective function. At each iteration of the algorithm, each particle moves with
a velocity that is a weighted sum of three components: the old velocity, a velocity
component that drives the particle towards the location in the search space where it
previously found the best solution so far, and a velocity component that drives the particle
towards the location in the search space where the neighbor particles found the best
solution so far.
There are a number of swarm behaviours observed in natural systems that have inspired
innovative ways of solving problems by using swarms of robots. This is what is called
swarm robotics. In other words, swarm robotics is the application of swarm intelligence
principles to the control of swarms of robots. As with swarm intelligence systems in
general, swarm robotics systems can have either a scientific or an engineering flavour.
Clustering in a swarm of robots was mentioned above as an example of an artificial/scientific system.
• It is non-immediate: linear systems tend to be very direct (flip a switch and the light comes on), and simple collective systems tend to operate simply, but complex swarm systems with rich hierarchies take time.