Heuristic Search: Introduction To Artificial Intelligence


Introduction to Artificial Intelligence

Heuristic Search

Ruth Bergman
Fall 2004
Search Strategies
• Uninformed search (= blind search)
– have no information about the number of steps or the path
cost from the current state to the goal
• Informed search (= heuristic search)
– have some domain-specific information
– we can use this information to speed up the search
– e.g. Bucharest is southeast of Arad.
– e.g. the number of tiles that are out of place in an 8-puzzle
position
– e.g. for missionaries and cannibals problem, select moves
that move people across the river quickly
Heuristic Search
• Let us suppose that we have one piece of information: a
heuristic function
• h(n) = 0, n a goal node
• h(n) > 0, n not a goal node
• we can think of h(n) as a “guess” as to how far n is from the goal

Best-First-Search(initial-state)   ;; functions h, succ, and GoalTest defined

  open ← MakePriorityQueue(initial-state, h(initial-state))
  while not(empty(open))
    node ← pop(open), state ← node-state(node)
    if GoalTest(state) succeeds return node
    for each child in succ(state)
      open ← push(child, h(child))
  return failure
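
The following is a minimal Python sketch of this greedy best-first loop, assuming the caller supplies h, succ, and goal_test functions and hashable states (all names are illustrative, not from the slides):

```python
import heapq

def best_first_search(initial_state, h, succ, goal_test):
    """Greedy best-first search: always expand the open node with the lowest h."""
    counter = 0                               # tie-breaker so heapq never compares states
    open_list = [(h(initial_state), counter, initial_state)]
    seen = {initial_state}                    # optional repeated-state check
    while open_list:
        _, _, state = heapq.heappop(open_list)
        if goal_test(state):
            return state
        for child in succ(state):
            if child not in seen:
                seen.add(child)
                counter += 1
                heapq.heappush(open_list, (h(child), counter, child))
    return None                               # failure
```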
Heuristics: Example
• Travel: h(n) = distance(n, goal)
[Figure: road map of Romania with driving distances on the edges; the
straight-line distance from each city to Bucharest is shown in parentheses,
e.g. Arad (366), Sibiu (253), Timisoara (329), and Bucharest (0).]
Heuristics: Example
• 8-puzzle: h(n) = tiles out of place

Start state, h(n) = 3          Goal state
1 2 3                          1 2 3
8   6                          8   4
7 5 4                          7 6 5

[Figure: best-first search tree for this start state. The start state
(h = 3) has three successors with h = 2, 4, and 3; expanding the h = 2
state gives successors with h = 3 and h = 1; expanding the h = 1 state
gives the goal (h = 0) and two further states with h = 2.]
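
As a concrete illustration, here is a small Python version of the tiles-out-of-place heuristic for this puzzle; the tuple-of-tuples state representation with 0 for the blank is an assumption, not something fixed by the slides:

```python
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))          # 0 marks the blank square

def tiles_out_of_place(state, goal=GOAL):
    """h(n): count the non-blank tiles that are not on their goal square."""
    return sum(1 for r in range(3) for c in range(3)
               if state[r][c] != 0 and state[r][c] != goal[r][c])

start = ((1, 2, 3),
         (8, 0, 6),
         (7, 5, 4))
assert tiles_out_of_place(start) == 3        # matches the h(n) = 3 on the slide
```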
Best-First-Search
Performance
• Completeness
– Complete if the search space has finite depth, or if every operator
decreases h by at least some minimum amount
• Time complexity
– Depends on how good the heuristic function is
– A “perfect” heuristic function will lead search directly to the goal
– We rarely have “perfect” heuristic function
• Space Complexity
– Maintains fringe of search in memory
– High storage requirement
• Optimality
– Non-optimal solutions
– Example: suppose the heuristic drops to one everywhere except along
the path on which the solution lies; best-first search then follows the
misleadingly low h values and may return a longer solution.
Iterative Improvement
Algorithms
• Start with a complete configuration and
make modifications to improve its quality
• Consider the states laid out on the surface of
a landscape
• Keep track of only the current state
=> Simplification of Best-First-Search
• Do not look ahead beyond the immediate
neighbors of that state
– Ex: amnesiac climb to summit in a thick fog
Iterative Improvement
Basic Principle

“Like climbing Everest in thick fog with amnesia”


Hill-Climbing

• Simple loop that continually moves in the direction of decreasing
value (here the heuristic h is being minimized)
• Does not maintain a search tree, so the node data structure
need only record the state and its evaluation
• Always try to make changes that improve the current state
• Steepest-descent: pick the lowest-valued (steepest) next state

Hill-Climbing(initial-state)
  state ← initial-state
  do forever
    next ← minimum-valued successor of state
    if (h(next) >= h(state)) return state
    state ← next
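
A minimal Python sketch of this loop (minimizing h, as in the pseudocode); succ and h are assumed to be supplied by the caller:

```python
def hill_climbing(initial_state, h, succ):
    """Steepest-descent hill climbing: stop when no successor improves h."""
    state = initial_state
    while True:
        neighbors = succ(state)
        if not neighbors:
            return state
        next_state = min(neighbors, key=h)    # lowest-h (steepest) successor
        if h(next_state) >= h(state):         # no improvement: local minimum
            return state
        state = next_state
```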
8-queens
State: 8 queens on the board, one per column
Successor function: returns all states generated by moving a single queen
to another square in the same column (8*7 = 56 next states)
h(s) = number of pairs of queens that attack each other in state s

[Figure: two board configurations. Left: h(s) = 17, and its best successor
has h = 12. Right: h(s) = 1, a local minimum; no single-queen move improves it.]
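
A small Python sketch of this heuristic, assuming a state is a length-8 tuple giving the row of the queen in each column (the representation is an assumption):

```python
from itertools import combinations

def attacking_pairs(state):
    """h(s): number of pairs of queens that attack each other.

    With one queen per column, two queens clash only when they share a
    row or a diagonal.
    """
    return sum(1
               for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))
```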


Drawbacks
• Local maxima/minima: the search halts at a local maximum/minimum
• Plateaux: the search degenerates to a random walk
• Ridges: the search oscillates from side to side, limiting progress

• Random-Restart Hill-Climbing
Conducts a series of hill-climbing searches from randomly generated
initial states, keeping the best result (see the sketch below).
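
A minimal sketch of random-restart hill climbing built on the hill_climbing function sketched above; random_state is an assumed caller-supplied generator of random initial states:

```python
def random_restart_hill_climbing(random_state, h, succ, restarts=25):
    """Run hill climbing from several random starts and keep the best result."""
    best = None
    for _ in range(restarts):
        result = hill_climbing(random_state(), h, succ)
        if best is None or h(result) < h(best):
            best = result
    return best
```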
Hill-Climbing
Performance
• Completeness
– Not complete, does not use systematic search
method
• Time complexity
– Depends on heuristic function
• Space Complexity
– Very low storage requirement
• Optimality
– Non-optimal solutions
– Often results in locally optimal solution
Simulated-Annealing
• Take some uphill steps to escape local minima
• Instead of picking the best move, it picks a random
move
• If the move improves the situation, it is executed.
Otherwise, move with some probability less than 1.
• Physical analogy with the annealing process:
– Allowing liquid to gradually cool until it freezes
• The heuristic value is the energy, E
• Temperature parameter, T, controls speed of
convergence.
Simulated-Annealing
Algorithm
Simulated-Annealing(initial-state, schedule)
  state ← initial-state
  for t = 1, 2, …
    T ← schedule(t)
    if T = 0 return state
    next ← a randomly selected successor of state
    ΔE ← h(state) - h(next)
    if (ΔE > 0) state ← next
    else state ← next with probability e^(ΔE/T)
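
A runnable Python sketch of this algorithm; h, succ, and the schedule function mapping step t to temperature T are assumed to be supplied by the caller:

```python
import math
import random

def simulated_annealing(initial_state, h, succ, schedule):
    """Minimize h, accepting worse moves with probability e^(dE/T)."""
    state = initial_state
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return state
        next_state = random.choice(succ(state))
        dE = h(state) - h(next_state)          # dE > 0 means next_state is better
        if dE > 0 or random.random() < math.exp(dE / T):
            state = next_state
        t += 1

# An example schedule in the spirit of the T = 100 - 5t schedule shown later:
def linear_schedule(t):
    return max(0, 100 - 5 * t)
```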
Simulated Annealing
Solution Quality
• The schedule determines the rate at which the temperature is lowered
• If the schedule lowers T slowly enough, the algorithm will find a
global optimum
• High temperature T is characterized by a large portion of accepted
uphill moves, whereas at low temperature only downhill moves are accepted

[Figure: solution quality over time for the example schedule T = 100 - 5t;
at high T the acceptance probability of uphill moves exceeds 0.9.]

=> If a suitable annealing schedule is chosen, simulated annealing has been
found capable of finding a good solution, though this is not guaranteed to
be the absolute minimum.
Beam Search
• Overcomes storage complexity of Best-First-Search
• Maintains the k best nodes in the fringe of the search
tree (sorted by the heuristic function)
• When k = 1, Beam search is equivalent to Hill-
Climbing
• When k is infinite, Beam search is equivalent to Best-
First-Search
• If you add a check to avoid repeated states, memory
requirement remains high
• Incomplete, search may delete the path to the
solution.
Beam Search
Algorithm
Beam-Search(initial-state, k)
  open ← MakePriorityQueue(initial-state, h(initial-state))
  while not(empty(open))
    node ← pop(open), state ← node-state(node)
    if GoalTest(state) succeeds return node
    for each child in succ(state)
      open ← push(child, h(child))
      if size(open) > k
        delete last item in open
  return failure
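
A minimal Python sketch of this pruned best-first loop; h, succ, and goal_test are assumed to be supplied by the caller:

```python
def beam_search(initial_state, h, succ, goal_test, k):
    """Best-first search whose open list is truncated to the k best nodes."""
    open_list = [initial_state]
    while open_list:
        open_list.sort(key=h)
        state = open_list.pop(0)                     # node with the lowest h
        if goal_test(state):
            return state
        open_list.extend(succ(state))
        open_list = sorted(open_list, key=h)[:k]     # keep only the k best
    return None       # failure: the path to the goal may have been pruned
```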
Search Performance
8-puzzle                 Heuristic 1:                    Heuristic 2:
                         tiles out of place              Manhattan distance*
Search Algorithm         expanded / solution length      expanded / solution length
Iterative Deepening      1105 / 9 (no heuristic used)
Hill-Climbing            2 / no solution found           10 / 9
Best-First               495 / 24                        9 / 9

*Manhattan distance = total number of horizontal and vertical moves required
to move all tiles to their position in the goal state from their current position.

Example state (left) and goal state (right):
  3 2 5            1 2
  7 1            3 4 5      h1 = 7 (tiles out of place)
  4 6 8          6 7 8      h2 = 2+1+1+2+1+1+1+0 = 9 (Manhattan distance)

=> Choice of heuristic is critical to heuristic search algorithm performance.
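
A small Python sketch of the Manhattan-distance heuristic, using the same assumed tuple-of-tuples representation (0 for the blank) as the earlier tiles-out-of-place sketch:

```python
def manhattan_distance(state, goal):
    """h2: total horizontal + vertical moves each tile needs to reach its goal square."""
    goal_pos = {goal[r][c]: (r, c) for r in range(3) for c in range(3)}
    total = 0
    for r in range(3):
        for c in range(3):
            tile = state[r][c]
            if tile != 0:                            # the blank does not count
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total
```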


BLAST
• Basic Local Alignment Search Tool
• provides a method for rapid searching
of nucleotide and protein databases
• A heuristic approach to the local
alignment problem.
