Chapter 4: Beyond Classical Search
Some real-world problems
• Suppose you had to solve VLSI layout problems (minimize distance between components, unused space, etc.)
• schedule airlines
• schedule workers with specific skill sets to do tasks that have resource and ordering constraints
Common to all such problems
These are optimization problems: what matters is finding a good configuration, not the path to it. Local search methods suited to such problems include:
• Hill climbing
• Simulated annealing
• Genetic algorithms
Iterative Improvement Algorithms
• In most problems the solution is the path. For example, the solution to the 8-puzzle is a sequence of moves of the “blank” tile, and the solution to the travelling-in-Romania problem is a sequence of cities leading to Bucharest.
• For many optimization problems, the solution path is irrelevant
– We just want to reach a goal state
Iterative Improvement Algorithms
• State space / search space
– Set of “complete” configurations
– Want to find optimal configuration (or at least
one that satisfies goal constraints)
• For these cases, use an iterative improvement algorithm
– Keep a single current state
– Try to improve it
• Constant memory: only the current state is kept, so the space complexity is constant
Example: Travelling Salesperson Problem
Start with any complete tour, perform pairwise exchanges
[Figure: two five-city tours over cities A–E; five pairwise-exchange steps transform the initial tour (a) into a shorter tour (b).]
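One common reading of a “pairwise exchange” is the 2-opt move, which removes two edges of the tour and reconnects it by reversing the segment between them. The sketch below is only an illustration, not the slides’ code; it assumes a hypothetical distance matrix dist (indexed by city) and a tour given as a list of city indices.

import itertools

def tour_length(tour, dist):
    """Total length of the closed tour, including the edge back to the start."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def pairwise_exchange(tour, dist):
    """Repeatedly apply improving 2-opt moves (reverse a segment of the tour)
    until no pairwise exchange shortens the tour any further."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
    return tour

Starting from any complete tour, each accepted exchange strictly shortens the tour, so the loop terminates at a locally optimal tour.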
function HILL-CLIMBING(problem) returns a state that is a local maximum
  current ← MAKE-NODE(problem.INITIAL-STATE)
  loop do
    neighbor ← a highest-valued successor of current
    if neighbor.VALUE ≤ current.VALUE then return current.STATE
    current ← neighbor
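As a minimal Python sketch of the steepest-ascent pseudocode above (an illustration, not the book’s code), assuming the caller supplies a successors(state) function and an objective value(state) to be maximised:

def hill_climbing(initial_state, successors, value):
    """Steepest-ascent hill climbing: repeatedly move to the highest-valued
    neighbour; stop at a state none of whose neighbours is better."""
    current = initial_state
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=value)
        if value(best) <= value(current):
            return current  # local maximum (or plateau edge) reached
        current = best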
[Figure: a one-dimensional state-space landscape plotting the objective function against the state space; labelled features include the global maximum, a shoulder, a local maximum, a “flat” local maximum, and the current state.]
Difficulties with ridges
The “ridge” creates a sequence of local maxima that are not directly
connected to each other. From each local maximum, all the available
actions point downhill.
Variations of Hill-Climbing Techniques
Stochastic: choose randomly from among the uphill moves; the probability of selection can vary with the steepness of the move. This variation converges more slowly than steepest ascent, but in some state landscapes it finds better solutions.
First-choice: generate successors randomly, one by one, until one better than the current state is found (see the sketch below). This is a good strategy when a state has many (e.g., thousands of) successors.
Random-restart: restart with a randomly generated initial state to escape from local maxima or plateaus.
A better fix than always picking the best move is to pick any move that produces an improvement; this is called randomized hill climbing.
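A rough sketch of the first-choice variation mentioned above, assuming a random_successor(state) sampler and the same value function; the max_tries cut-off is an added assumption to keep the loop bounded:

def first_choice_step(current, random_successor, value, max_tries=1000):
    """Sample random successors one by one and return the first that improves
    on the current state; give up after max_tries unsuccessful samples."""
    for _ in range(max_tries):
        candidate = random_successor(current)
        if value(candidate) > value(current):
            return candidate
    return current  # treat the state as a (likely) local maximum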
Random-Restart Hill-Climbing
Advice: If at first you don’t succeed, try, try again!
Random-restart hill-climbing conducts a series of hill-
climbing searches from randomly generated initial states,
stopping when a goal is found.
With probability approaching 1, we will eventually
generate a goal state as the initial state.
If each hill-climbing search has a probability p of success,
then the expected number of restarts required to reach a
solution is 1/p.
Random-Restart Hill-Climbing Cont’d.
Example: 8-queens. As we saw earlier, p ≈ 0.14.
In this case we need roughly 7 iterations (6 failures and 1 success).
Random-restart hill-climbing is very effective for 8-queens. Even for three million queens, the approach can find solutions in under a minute.
The success of random-restart hill-climbing depends very much on the shape of the state space; some practical problems have state spaces with very bad shapes.
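A rough sketch of random-restart hill climbing, reusing the hill_climbing sketch above and assuming a random_state() generator and a goal test; the 1/p figure follows because the number of independent runs until the first success is geometrically distributed:

def random_restart_hill_climbing(random_state, successors, value, is_goal,
                                 max_restarts=1000):
    """Run hill climbing from fresh random initial states until one run ends
    in a goal state. If each run succeeds with probability p, the expected
    number of runs is 1/p."""
    for _ in range(max_restarts):
        result = hill_climbing(random_state(), successors, value)
        if is_goal(result):
            return result
    return None  # no goal state found within the restart budget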
Hill Climbing
[Figure: hill-climbing animation on a cost-versus-states landscape; the current solution moves step by step to better neighbouring states until it reaches the best state it can find.]
Simulated annealing
function SIMULATED-ANNEALING(problem, schedule) returns a solution state
  inputs: problem, a problem
          schedule, a mapping from time to “temperature”
  current ← MAKE-NODE(problem.INITIAL-STATE)
  for t = 1 to ∞ do
    T ← schedule(t)
    if T = 0 then return current
    next ← a randomly selected successor of current
    ΔE ← next.VALUE − current.VALUE
    if ΔE > 0 then current ← next
    else current ← next only with probability e^(ΔE/T)
The simulated annealing algorithm, a version of stochastic hill
climbing where some downhill moves are allowed. Downhill
moves are accepted readily early in the annealing schedule and
then less often as time goes on. The schedule input determines
the value of the temperature T as a function of time.
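A minimal Python rendering of the pseudocode above (an illustration only), assuming a random_successor(state) sampler, an objective value(state) to be maximised, and a caller-supplied cooling schedule; the geometric schedule shown is an assumption, not part of the slides:

import math
import random

def simulated_annealing(initial_state, random_successor, value, schedule):
    """Accept every uphill move; accept a downhill move of size ΔE ≤ 0
    with probability e^(ΔE/T), where T is the current temperature."""
    current = initial_state
    t = 1
    while True:
        T = schedule(t)
        if T == 0:
            return current
        nxt = random_successor(current)
        delta_e = value(nxt) - value(current)
        if delta_e > 0 or random.random() < math.exp(delta_e / T):
            current = nxt
        t += 1

def geometric_schedule(t, T0=1.0, alpha=0.95, cutoff=1e-4):
    """Example schedule: T decays geometrically and is forced to exactly 0
    once it drops below the cutoff, which terminates the search."""
    T = T0 * (alpha ** t)
    return T if T > cutoff else 0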
Simulated Annealing
[Figure: simulated-annealing animation on a cost-versus-states landscape; the current state occasionally moves to worse states, while the best state found so far is tracked.]
Local Beam Search
Local beam search keeps track of k states rather than just one: at each step it generates all the successors of all k states and keeps the k best, stopping when a goal is found.
[Figure: local-beam-search animation on a cost-versus-states landscape, with several states explored in parallel.]
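A rough sketch of local beam search under the same value-maximisation convention as the earlier sketches; the beam width k, the step budget, and the stop-when-no-improvement rule are assumptions of this illustration:

import heapq

def local_beam_search(initial_states, successors, value, k, max_steps=1000):
    """Keep k states; at each step pool the successors of all of them and
    retain the k highest-valued, stopping when nothing improves."""
    beam = list(initial_states)[:k]
    for _ in range(max_steps):
        candidates = [s for state in beam for s in successors(state)]
        if not candidates:
            break
        best_value_so_far = max(value(state) for state in beam)
        beam_next = heapq.nlargest(k, candidates, key=value)
        if value(beam_next[0]) <= best_value_so_far:
            break
        beam = beam_next
    return max(beam, key=value)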
Genetic Algorithms
Genetic Algorithms are computer algorithms
that search for good solutions to a problem from
among a large number of possible solutions.
They were proposed and developed in the 1960s
by John Holland, his students, and his colleagues
at the University of Michigan. These
computational paradigms were inspired by the
mechanics of natural evolution, including
survival of the fittest, reproduction, and
mutation.
Genetic Algorithms
Basic idea behind GAs
[Figure: one-point crossover of two parent bit strings, with the crossover point after position 3.]
In this example, Offspring 1 inherits the bits in positions 1, 2, and 3 (to the left of the crossover point) from Parent 1 and the rest (to the right of the crossover point) from Parent 2. Similarly, Offspring 2 inherits the bits in positions 1, 2, and 3 from Parent 2 and the rest from Parent 1. Crossover can be one-point, two-point, or uniform.
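A minimal sketch of one-point crossover on bit-string chromosomes, matching the description above; the chromosomes are plain Python lists of 0s and 1s, and the crossover point is supplied by the caller:

def one_point_crossover(parent1, parent2, point):
    """Exchange the tails of two equal-length chromosomes after `point`:
    offspring 1 keeps parent 1's head and takes parent 2's tail, and
    offspring 2 is the mirror image."""
    offspring1 = parent1[:point] + parent2[point:]
    offspring2 = parent2[:point] + parent1[point:]
    return offspring1, offspring2

# Crossover after position 3, as in the description above:
# one_point_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0], 3)
# -> ([1, 1, 1, 0, 0], [0, 0, 0, 1, 1])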
Mutation
Mutation is performed after crossover to prevent all the solutions in the population from falling into a local optimum of the problem being solved. Mutation changes the new offspring by flipping bits from 1 to 0 or from 0 to 1. It can occur at each bit position in the string with some probability, usually very small (e.g., 0.001). For example, consider the following chromosome with a mutation point at position 2:
Original chromosome: 1 0 0 0 1 1 1
Mutated chromosome: 1 1 0 0 1 1 1
The 0 at position 2 flips to 1 after mutation.
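A minimal sketch of the bit-flip mutation described above; the default per-bit rate of 0.001 comes from the text, and chromosomes are again lists of 0s and 1s:

import random

def mutate(chromosome, rate=0.001):
    """Flip each bit independently with a small probability `rate`."""
    return [1 - bit if random.random() < rate else bit for bit in chromosome]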
Genetic Algorithms
• What two requirements should a problem satisfy in order to be suitable for solving by a GA?
Answer: a GA can only be applied to problems that satisfy the following requirements:
– a fitness function can be defined that evaluates how good any candidate solution is, and
– candidate solutions can be encoded as strings (chromosomes) on which crossover and mutation can operate.
Example (one-point crossover after position 2): parents 11111 and 00000 produce the offspring 11000 and 00111.
Mutation
• With small probability, randomly alter 1 bit
• Minor operator
• An insurance policy against lost bits
• Pushes out of local minima
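Putting the pieces together, a rough sketch of one possible GA loop that uses the one_point_crossover and mutate sketches above; fitness-proportional selection, the population handling, and the generation budget are all assumptions of this illustration, not the slides’ algorithm:

import random

def genetic_algorithm(population, fitness, generations=1000):
    """Evolve a population of bit-string chromosomes: pick parents with
    probability proportional to fitness, recombine them with one-point
    crossover, and mutate the offspring. `fitness` is assumed to return a
    non-negative score that is positive for at least one individual
    (e.g. the number of bits matching a goal string, plus one)."""
    for _ in range(generations):
        weights = [fitness(individual) for individual in population]
        offspring = []
        while len(offspring) < len(population):
            p1, p2 = random.choices(population, weights=weights, k=2)
            point = random.randrange(1, len(p1))
            child1, child2 = one_point_crossover(p1, p2, point)
            offspring += [mutate(child1), mutate(child2)]
        population = offspring[:len(population)]
    return max(population, key=fitness)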
[Figure: genetic-algorithm animation; a population of bit strings evolves towards the goal state 0 1 1 1 1 1 through cross-over and mutation.]
Summary
Hill climbing is a steady, monotonic ascent to better nodes.
Simulated annealing, local beam search, and genetic algorithms are “random” searches with a bias towards better nodes.
All need very little space: constant for hill climbing and simulated annealing, and proportional to the beam width or population size for local beam search and genetic algorithms.
None is guaranteed to find the globally optimal solution.