GA vs. PSO
Consider the flowchart shown in Figure 1. In this study, the algorithm starts by
initializing a group of 50 particles, with random positions in a 200-dimensional
hyperspace, constrained between zero and one in each dimension. A set of random
velocities is also initialized, with values between -1 and 1. For our phased array synthesis
problem, the 200-dimensional position vector is mapped to 100 amplitude and 100 phase
weights, and the corresponding far field pattern is scored for each particle. After all these
particles are scored, the best performer is identified as the initial global best.
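The initialization just described might be sketched as follows. This is a minimal illustration, not the authors' code: the array names are ours, and the `cost` function is a placeholder standing in for the far field pattern score defined later in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

N_PARTICLES, N_DIMS = 50, 200  # 200 dims = 100 amplitude + 100 phase weights

# Random positions in [0, 1] and velocities in [-1, 1], per the text.
positions = rng.random((N_PARTICLES, N_DIMS))
velocities = rng.uniform(-1.0, 1.0, (N_PARTICLES, N_DIMS))

def cost(x):
    """Placeholder for the far field pattern score (lower is better)."""
    return float(np.sum(x**2))  # illustrative stand-in only

scores = np.array([cost(p) for p in positions])
personal_best = positions.copy()          # each particle's best-so-far position
global_best = positions[scores.argmin()]  # initial global best = best performer
```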
©2003 IEEE 0-7803-7846-6/03/$17.00

Now the particles are flown through the problem hyperspace for 10,000 iterations
(500,000 cost function evaluations), using stochastic velocity and position update rules.
For each particle in turn, the first step is to update the velocity separately along each
dimension according to the velocity update rule given in Figure 1. Three components
typically contribute to the new velocity. The first part ('inertia,' 'momentum,' or 'habit')
is proportional to the old velocity and is the tendency of the particle to continue in the
same direction it has been traveling. This component is scaled by the constant w, taken
here to have a value of 0.4. The second component of the velocity update equation
('memory,' 'self-knowledge,' 'nostalgia,' or 'remembrance') is a linear attraction toward
the best position ever found by the given particle (often called the local best), scaled by the
product of a fixed constant φ1 and a random number rand() between zero and one. Note
that a different random number is used for each dimension. The third component of the
velocity update equation ('cooperation,' 'social knowledge,' 'group knowledge,' or
'shared information') is a linear attraction toward the best position found by any particle
(often called the global best), scaled by the product of a fixed constant φ2 and another random
number rand() between zero and one, chosen separately for each dimension. Following
common practice, we set φ1 = φ2 = 2. These paradigms allow particles to profit both from
their own discoveries and from the collective knowledge of the entire swarm, mixing
local and global information uniquely for each particle.
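One update step with the three components above can be sketched as follows, using the parameter values from the text (w = 0.4, φ1 = φ2 = 2). Array names are illustrative; the clipping to [0, 1] reflects the position constraint stated earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

W, PHI1, PHI2 = 0.4, 2.0, 2.0  # inertia and attraction constants from the text

def pso_step(pos, vel, pbest, gbest):
    """One stochastic velocity/position update for all particles at once.

    pos, vel, pbest: (n_particles, n_dims) arrays; gbest: (n_dims,) array.
    """
    n, d = pos.shape
    # Independent random numbers per particle AND per dimension, as the text notes.
    r1 = rng.random((n, d))
    r2 = rng.random((n, d))
    vel = (W * vel                         # inertia / momentum / habit
           + PHI1 * r1 * (pbest - pos)     # memory: pull toward each particle's best
           + PHI2 * r2 * (gbest - pos))    # cooperation: pull toward the swarm's best
    pos = np.clip(pos + vel, 0.0, 1.0)     # each dimension constrained to [0, 1]
    return pos, vel
```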
Phased Array Notch Synthesis. To compare the two algorithms, complex phased array
weights are synthesized to meet a far field sidelobe requirement including a 60 dB notch
on one side. This might be desired if the approximate direction to an interference source
were known. The antenna is a linear phased array of one hundred half-wavelength spaced
radiators. For convenience and a fair comparison, both the particle swarm optimizer and
the genetic algorithm are designed to provide vectors of 200 real values between zero and
one. The first 100 values are scaled to the desired amplitude range and the second 100 are
scaled to the desired phase range. The cost measure to be minimized is the sum of
the squares of the excess far field magnitude above the specified sidelobe
envelope. This penalizes sidelobes above the envelope, while neither penalty nor
reward is given for sidelobes below the specification. The main beam is excluded
from the cost function. The lower the cost, the more fit the array distribution. The
optimization is considered fully converged if the cost function reaches -40 dB.
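The cost measure described above might be sketched as follows. This is our illustrative reading of the text, not the authors' implementation: the sampled-angle representation, the function name, and the boolean main-beam mask are assumptions.

```python
import numpy as np

def notch_cost(pattern_db, envelope_db, main_beam_mask):
    """Sum of squared excess of the far field magnitude above the sidelobe envelope.

    pattern_db, envelope_db: pattern and allowed envelope in dB at sampled angles.
    main_beam_mask: boolean array, True where the main beam is (excluded from cost).
    Sidelobes below the envelope earn neither penalty nor reward.
    """
    excess = np.maximum(pattern_db - envelope_db, 0.0)  # only violations count
    excess[main_beam_mask] = 0.0                        # main beam excluded
    return float(np.sum(excess**2))
```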
Results. As seen in Figure 2, the particle swarm algorithm is competitive with the
genetic algorithm. For amplitude-only synthesis, the particle swarm algorithm performs
better early on, but can be outpaced by the genetic algorithm at higher iterations. For
phase-only synthesis, the algorithms have comparable performance, while the genetic
algorithm slightly outperforms particle swarm optimization for complex synthesis.
Figure 3 shows typical complex synthesis results for the particle swarm optimizer and
for the genetic algorithm, with the final amplitude and phase weights inset. Although the
sidelobe envelope constraint allows variations in the sidelobe arrangement, the quality of
the final results is similar regardless of the optimizer used, despite the relative simplicity
of the particle swarm optimizer.

Figure 3. Sidelobe notch synthesis results using particle swarm optimizer (top) and
genetic algorithm (bottom).