Intro To Genetic Algorithms
I400/I590 Artificial Life As An Approach To Artificial Intelligence
Larry Yaeger
Optimization Techniques
Analytical: given y = f(x), take the derivative of f with respect to x, set the result to zero, and solve for x.
[Figure: a curve y = f(x), with the extrema marked where dy/dx = 0]
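For instance (a worked example of my own, not from the original slides):

```latex
f(x) = x^2 - 4x + 1, \qquad
\frac{df}{dx} = 2x - 4 = 0 \;\Rightarrow\; x = 2, \qquad
\frac{d^2f}{dx^2} = 2 > 0 \ \text{(a minimum, with } f(2) = -3\text{)}
```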
Optimization Techniques
Gradient-based, or hill-climbing: given y = f(x),
- pick a point x0
- compute the gradient ∇f(x0)
- step along the gradient to obtain x1 = x0 + η ∇f(x0), for some step size η
- repeat until an extremum is reached
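A minimal sketch of this loop in Python; the function names, the fixed step size eta, and the one-dimensional setting are illustrative assumptions, not part of the original slides:

```python
def hill_climb(grad_f, x0, eta=0.01, tol=1e-8, max_steps=10_000):
    """Gradient ascent: step along the gradient until it (nearly) vanishes."""
    x = x0
    for _ in range(max_steps):
        g = grad_f(x)
        if abs(g) < tol:      # gradient ~ 0: an extremum (to tolerance)
            break
        x = x + eta * g       # x1 = x0 + eta * grad_f(x0), repeated
    return x

# Example: maximize f(x) = -(x - 3)^2, whose gradient is -2(x - 3)
print(hill_climb(lambda x: -2 * (x - 3), x0=0.0))  # ~3.0
```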
Optimization Techniques
- Enumerative - test every point in the space, in order
- Random - test every point in the space, in random order
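A sketch of random search, assuming only a sampler for the space and an evaluation function (both names are mine):

```python
import random

def random_search(f, sample, trials=10_000):
    """Evaluate f at randomly sampled points; keep the best point seen."""
    best = sample()
    for _ in range(trials - 1):
        x = sample()
        if f(x) > f(best):
            best = x
    return best

# Example: maximize f(x) = -(x - 3)^2 over [-10, 10]
print(random_search(lambda x: -(x - 3) ** 2, lambda: random.uniform(-10, 10)))
```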
Optimization Techniques
Genetic Algorithm (Evolutionary Computation)
- Does not require derivatives, just an evaluation function (a fitness function)
- Samples the space widely, like an enumerative or random algorithm, but more efficiently
- Can search multiple peaks in parallel, so is less hampered by local extrema than gradient-based methods
- Crossover allows the combination of useful building blocks, or schemata (mutation avoids evolutionary dead-ends)
- Robust!
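To make these pieces concrete before the slides treat them one at a time, here is a minimal bit-string GA for the toy OneMax problem (maximize the number of 1 bits). Every choice here (population size, rates, tournament selection) is an illustrative assumption, not the course's reference implementation:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 30, 50
CROSSOVER_RATE, MUTATION_RATE = 0.9, 1.0 / GENOME_LEN

def fitness(genome):
    """OneMax: the evaluation function is just the number of 1 bits."""
    return sum(genome)

def tournament(pop, k=3):
    """Pick k individuals at random and return the fittest of them."""
    return max(random.sample(pop, k), key=fitness)

def crossover(p1, p2):
    """Single-point crossover of two parent bit strings."""
    if random.random() < CROSSOVER_RATE:
        point = random.randrange(1, GENOME_LEN)
        return p1[:point] + p2[point:]
    return p1[:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Generational replacement: build a whole new population each generation
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
best = max(pop, key=fitness)
print(fitness(best), best)
```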
John Holland's Adaptation in Natural and Artificial Systems (1975) is considered by most to be the seminal work in the field. It established a formal, theoretical basis for evolutionary optimization with the introduction of schemata (building blocks, partial solutions).
Genetic Programming
- 1985: N. L. Cramer, "A Representation for the Adaptive Generation of Simple Sequential Programs"
- 1987: D. Dickmanns, J. Schmidhuber, and A. Winklhofer, "Der genetische Algorithmus: Eine Implementierung in Prolog" ("The Genetic Algorithm: An Implementation in Prolog")
- 1987: C. Fujiki and J. Dickinson published on using GAs to evolve Lisp source code for solving the prisoner's dilemma problem
- 1988 (patent filed) / 1989 (publication): John Koza, at Stanford, introduced genetic programming, an application of evolutionary computing to tree representations of Lisp programs for solving engineering problems
String-based GAs
Holland's original approach, and probably still the most common. The genome (or chromosome) is a string of bits:
1 1 0 1 0 0 1 0 1 1 0 1 1 1 0 0
Other Representations
The basic vary-and-select GA approach can be applied to almost any data structure:
- Trees
- Arrays
- Lists
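For example, here is what crossover and mutation might look like for an array-of-floats genome; the uniform crossover, the Gaussian perturbation, and the rates are illustrative assumptions:

```python
import random

def crossover_floats(p1, p2):
    """Uniform crossover: each gene is drawn from one parent or the other."""
    return [a if random.random() < 0.5 else b for a, b in zip(p1, p2)]

def mutate_floats(genome, rate=0.1, sigma=0.05):
    """With probability `rate`, perturb each gene by small Gaussian noise."""
    return [g + random.gauss(0.0, sigma) if random.random() < rate else g
            for g in genome]
```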
Fitness
Fitness is computed for each individual. To help maintain diversity or differentiate between similar individuals, raw objective scores are sometimes scaled to produce the final fitness scores:
- Rank - no scaling
- Linear scaling (fitness proportionate) - normalizes based on min and max fitness in the population
- Sigma truncation scaling - normalizes using the population mean and std. dev., truncating low-fitness individuals
- Sharing (similarity scaling) - reduces fitness for individuals that are similar to other individuals in the population
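A sketch of two of these scalings in Python; the exact formulas vary by text, so treat the constant c and the edge-case handling as assumptions:

```python
import statistics

def linear_scale(scores):
    """Normalize raw scores into [0, 1] using the population min and max."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # all individuals equal: avoid division by zero
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sigma_truncate(scores, c=2.0):
    """Shift scores by (mean - c * std. dev.) and truncate negatives to zero."""
    mean = statistics.fmean(scores)
    sd = statistics.pstdev(scores)
    return [max(0.0, s - (mean - c * sd)) for s in scores]
```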
Selection
The selection scheme determines how individuals are chosen for mating, based on their fitness scores. Too great a bias towards the best individuals can result in premature convergence, so the best selection schemes are designed to maintain a diverse population:
- Rank - pick the best individuals every time
- Roulette wheel (proportionate) - probability of selection is proportional to fitness
- Tournament - an initial large number are selected via roulette wheel, then the best ranked are chosen
- Stochastic - various methods of replenishment of less fit stock (useful) or initial selection (not useful)
- Elite - in combination with other selection schemes, always keep the fittest individual around
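A sketch of roulette wheel selection (assuming non-negative fitness scores):

```python
import random

def roulette_select(pop, fitnesses):
    """Spin a wheel whose slots are sized in proportion to fitness."""
    total = sum(fitnesses)
    if total == 0:                    # degenerate population: pick uniformly
        return random.choice(pop)
    spin = random.uniform(0.0, total)
    cumulative = 0.0
    for individual, f in zip(pop, fitnesses):
        cumulative += f
        if cumulative >= spin:
            return individual
    return pop[-1]                    # guard against floating-point drift
```

In practice, Python's random.choices(pop, weights=fitnesses) performs the same proportionate draw in one call.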
Crossover
There are a number of techniques, but all involve swapping genes (sequences of bits in the strings) between two individuals (or between two strands of a diploid individual):

Parent 1: 1 1 0 1 0 0 1 0 1 1 0 1 1 1 0 0
Parent 2: 0 0 1 1 1 0 1 1 0 1 0 0 1 0 1 1
Child:    1 1 0 1 1 0 1 1 0 1 0 1 1 1 0 0
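The example above corresponds to a two-point crossover: the child takes the middle segment from Parent 2 and the outer segments from Parent 1. A sketch in Python:

```python
import random

def two_point_crossover(p1, p2):
    """Take the segment between two random cut points from the second parent."""
    i, j = sorted(random.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:]

parent1 = [1,1,0,1,0,0,1,0,1,1,0,1,1,1,0,0]
parent2 = [0,0,1,1,1,0,1,1,0,1,0,0,1,0,1,1]
# With cut points i=4 and j=11 this reproduces the child shown above:
# [1,1,0,1,1,0,1,1,0,1,0,1,1,1,0,0]
```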
Mutation
Generally, bits are flipped with a small probability. This explores parts of the space that crossover might miss, and helps prevent premature convergence:

Before: 1 1 0 1 1 0 1 1 0 1 0 1 1 1 0 0
After:  1 1 0 1 1 0 1 0 0 1 0 1 1 1 0 0
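A sketch of bit-flip mutation; the per-bit rate of 1/L is a common heuristic, not something the slides specify:

```python
import random

def mutate(genome, rate=1.0 / 16):
    """Flip each bit independently with a small probability."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

before = [1,1,0,1,1,0,1,1,0,1,0,1,1,1,0,0]
# A single flip at the eighth bit yields the "After" string shown above:
# [1,1,0,1,1,0,1,0,0,1,0,1,1,1,0,0]
```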
Replacement
Simple or generational GAs replace the entire population, per the dictates of the selection scheme. Steady state or online GAs use different replacement schemes:
- Replace worst
- Replace best
- Replace parent
- Replace random
- Replace most similar (crowding)
Only replace worst and replace most similar are generally very effective (when replace parent works, it is because the parents are similar).
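Sketches of the two replacement schemes the slide calls effective; the helper names and the Hamming-distance similarity measure are my assumptions:

```python
def replace_worst(pop, child, fitness):
    """Steady-state update: the new child displaces the least fit individual."""
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    pop[worst] = child

def replace_most_similar(pop, child):
    """Crowding: the child displaces the individual with the nearest genome."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    nearest = min(range(len(pop)), key=lambda i: hamming(pop[i], child))
    pop[nearest] = child
```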
Online Resources
- http://ai.bpa.arizona.edu/~mramsey/ga.html (defunct)
- http://www.econ.iastate.edu/tesfatsi/holland.GAIntro.htm
- http://myducksoup.com/family/alex/davidfogel.shtml
- http://www.talkorigins.org/faqs/genalg/genalg.html
- http://www.hao.ucar.edu/public/research/si/pikaia/tutorial.html
- http://lancet.mit.edu/~mbwall/presentations/IntroToGAs/
- http://en.wikipedia.org/wiki/Genetic_algorithm
- http://samizdat.mines.edu/ga_tutorial/
- http://www.generation5.org/content/2000/ga.asp
- http://gaul.sourceforge.net/intro.html
- http://www.burns-stat.com/pages/Tutor/genetic.html
- http://www.cs.bgu.ac.il/~sipper/ga.html
- http://www.natural-selection.com/Library/2000/FogelFraser-CEC2000.pdf