6 Search Algorithms 2
Data-Driven Reasoning
- takes the facts of the problem
- applies rules and legal moves
- produces new facts that eventually lead to a goal
- also called forward chaining
Goal-Driven Reasoning
- focuses on the goal
- finds the rules that could produce the goal
- chains backward through subgoals to the given facts
- also called backward chaining (both directions are sketched below)
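A minimal sketch of the two reasoning directions over simple if-then rules. The rules are stored as (premises, conclusion) pairs; the rule set, facts and goal below are illustrative placeholders, not taken from the slides.

```python
# Minimal sketch of data-driven vs. goal-driven reasoning over if-then rules.
# Rules are (premises, conclusion) pairs; facts, rules and goal are illustrative.
RULES = [({"a", "b"}, "c"), ({"c"}, "d")]

def forward_chain(facts, rules):
    """Data-driven: apply rules to the known facts until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: reduce the goal to subgoals until the given facts are reached."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain({"a", "b"}, RULES))        # includes the new facts 'c' and 'd'
print(backward_chain("d", {"a", "b"}, RULES))  # True: d <- c <- {a, b}
```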
Data-Driven and Goal-Driven Search Cont..
Goal-Driven search is recommended if:
• A goal or hypothesis is given in the problem statement or can easily be
formulated.
• There is a large number of rules that match the facts of the problem and thus
produce an increasing number of conclusions or goals.
• Problem data are not given but must be acquired by the problem solver.
HEURISTIC SEARCH
• Introduction:
• AI problem solvers employ heuristics in two situations:
– First: A problem may not have an exact solution, because of inherent ambiguities in the problem statement or available data.
• Optical illusions exemplify these ambiguities.
• Vision systems use heuristics to select the most likely of several possible interpretations of a given scene.
HEURISTIC SEARCH
• Introduction:
• AI problem solvers employ heuristics in two situations:
– Second: A problem may have an exact solution, but the computational cost of finding it may be prohibitive.
• In many problems, state-space growth is combinatorially explosive, with the number of possible states increasing exponentially or factorially with the depth of the search.
Three levels of the tic-tac-toe state space reduced by symmetry
The “most wins” heuristic applied to the first children in tic-tac-toe.
Heuristically reduced state space for tic-tac-toe.
Best-First Search
• Organize the toDo (open) list as an ordered list, i.e. keep it sorted with the “best” node at the front.
• New nodes are always added at the appropriate place in the list: go down the list until you reach the first node that is “worse” than the one to be inserted, and insert the new node in front of it; if there is no worse node, insert the new node at the end.
• Since nodes are always taken off the front, toDo operates as a best-first ordered queue.
• NOTE: This should really be called something like “apparently-best-first search”. Usually there is no guarantee that the chosen node really is the best, just the one that seems best at the time (the ordered insertion is sketched below).
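A minimal sketch of the ordered-insert step described above, assuming lower values are better and that nodes are stored as (name, value) pairs; both the representation and the values are illustrative.

```python
# Keep the toDo (open) list sorted by estimated quality, so the apparently
# best node is always at the front. Node names and values are illustrative.
def insert_ordered(todo, node, value):
    """Walk down the list to the first entry that is worse, insert before it;
    if nothing is worse, append at the end."""
    for i, (_, v) in enumerate(todo):
        if v > value:
            todo.insert(i, (node, value))
            return
    todo.append((node, value))

todo = [("A", 3), ("B", 5)]
insert_ordered(todo, "C", 4)
print(todo)   # [('A', 3), ('C', 4), ('B', 5)]
```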
Best-First-Search
Best first search uses lists to maintain states: an open list of candidate states and a closed list of states already examined.
ALGORITHM FOR HEURISTIC SEARCH
Best-First-Search
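The algorithm figure from the slide is not reproduced here; the following is a generic best-first search sketch in that spirit, with an open queue ordered by the heuristic and a closed set of states already expanded. The goal_test, successors and h arguments are assumed placeholders.

```python
import heapq, itertools

def best_first_search(start, goal_test, successors, h):
    """Generic best-first search: repeatedly expand the open node with the
    best (lowest) heuristic value; closed holds states already expanded."""
    counter = itertools.count()                      # tie-breaker for the heap
    open_list = [(h(start), next(counter), start, [start])]
    closed = set()
    while open_list:
        _, _, state, path = heapq.heappop(open_list)
        if goal_test(state):
            return path                              # path from start to goal
        if state in closed:
            continue
        closed.add(state)
        for child in successors(state):
            if child not in closed:
                heapq.heappush(open_list,
                               (h(child), next(counter), child, path + [child]))
    return None                                      # open list exhausted, no goal
```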
Best-First Search Cont..
A Quick Review
• g(n) = cost from the initial state to the current state n
• Implementation
– expand the “most desirable” node from the fringe queue
– keep the fringe queue sorted in decreasing order of desirability
Greedy Best-First Search
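The greedy variant is not spelled out on the slide; in its usual formulation it evaluates nodes by the heuristic h(x) alone, ignoring the cost accumulated so far. A small self-contained sketch, with an invented graph and heuristic table:

```python
import heapq

# Greedy best-first search: order the open list by h(x) alone (no path cost).
# The graph and heuristic values below are invented for illustration.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
H = {"S": 5, "A": 2, "B": 4, "G": 0}

def greedy_best_first(start, goal):
    open_list = [(H[start], start, [start])]
    visited = set()
    while open_list:
        _, state, path = heapq.heappop(open_list)   # node with smallest h
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for child in GRAPH[state]:
            if child not in visited:
                heapq.heappush(open_list, (H[child], child, path + [child]))
    return None

print(greedy_best_first("S", "G"))   # ['S', 'A', 'G']
```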
A* Search
• Idea: avoid expanding paths that are already expensive; consider the cost of getting to x in addition to the estimated cost of getting from x to the goal.
• Evaluation function: f(x) = g(x) + h(x)
– g(x): cost so far to reach x
– h(x): estimated cost to goal from x (heuristic function)
– f(x): estimated total cost of path through x to goal
• As in greedy search, expand the node with the smallest f(x) value (but this time consider actual costs so far plus the estimate).
Heuristic Functions
• To use A*, the heuristic function must never overestimate the number of steps (cost) to the goal; such a heuristic is called admissible (an illustrative example follows).
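As an illustration (an assumed example, not from the slides): on a grid where each move goes one cell up, down, left or right, the Manhattan distance to the goal never overestimates the number of remaining moves, so it is a safe heuristic for A*.

```python
# Illustrative admissible heuristic (assumed example, not from the slides):
# on a 4-connected grid, Manhattan distance never overestimates the number
# of moves still needed, so A* with this h keeps its guarantees.
def manhattan(state, goal):
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 2)))   # 5 <= true number of moves required
```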
A* graph search
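The figure for A* graph search is not reproduced; below is a sketch of the standard approach, ordering the open queue by f(x) = g(x) + h(x) and skipping states already expanded. The goal_test, successors and h arguments are assumed placeholders; successors is assumed to yield (child, step_cost) pairs.

```python
import heapq, itertools

def a_star(start, goal_test, successors, h):
    """A* graph search: order the open queue by f(x) = g(x) + h(x);
    successors(state) yields (child, step_cost) pairs."""
    counter = itertools.count()                    # tie-breaker for equal f
    open_q = [(h(start), next(counter), 0, start, [start])]
    expanded = set()
    while open_q:
        f, _, g, state, path = heapq.heappop(open_q)
        if goal_test(state):
            return path, g                         # solution path and its cost
        if state in expanded:
            continue                               # skip states already expanded
        expanded.add(state)
        for child, step_cost in successors(state):
            if child not in expanded:
                g2 = g + step_cost
                heapq.heappush(open_q,
                               (g2 + h(child), next(counter), g2, child, path + [child]))
    return None, float("inf")
```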
Local search Algorithms
• Local search algorithms operate using a single current node (rather
than multiple paths) and generally move only to neighbors of that
node. Typically, the paths followed by the search are not retained.
• Two key advantages:
– (1) they use very little memory—usually a constant amount;
and
– (2) they can often find reasonable solutions in large or infinite
(continuous) state spaces for which systematic algorithms are
unsuitable.
• They are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.
• Local search algorithms explore the landscape.
• A complete local search algorithm always finds a goal if one exists;
• An optimal algorithm always finds a global minimum/maximum.
Hill climbing
• Perform depth-first search, BUT: instead of left-to-right selection, FIRST select the child with the best heuristic value.
[Figure: example search tree from start node S, with heuristic values A 10.4, D 8.9, E 6.9, B 6.7, F 3.0, leading to goal G]
Hill climbing_1 algorithm:
1. QUEUE <-- path only containing the root;
2. WHILE QUEUE is not empty AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      sort the new paths by heuristic value;
      add the new paths to the front of the QUEUE;
3. IF goal reached
THEN success;
ELSE failure;
Hill climbing search
• The hill-climbing search algorithm is simply a loop
that continually moves in the direction of increasing
value—that is, uphill.
• Terminates when it reaches a "peak" where no
neighbor has a higher value.
• The algorithm does not maintain a search tree,
– the current node need only record the state and the
value of the objective function.
• Hill climbing does not look ahead beyond the immediate neighbors of the current state (a sketch follows).
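A sketch matching the description above, assuming the caller supplies successors(state) and an objective value(state); it keeps only the current state and climbs while some neighbour improves the value.

```python
def hill_climbing(start, successors, value):
    """Keep only the current state; move to the best neighbour while it
    improves the objective value, and stop at a peak (possibly a local one)."""
    current = start
    while True:
        neighbours = successors(current)
        if not neighbours:
            return current
        best = max(neighbours, key=value)
        if value(best) <= value(current):    # no uphill move left: a peak
            return current
        current = best
```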
Hill-climbing search
Problem: depending on the initial state, hill climbing can get stuck in a local maximum.
Example: n-queens
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal.
• To illustrate hill climbing, we will use the 8-queens problem.
The 8-queens problem
• h = number of pairs of queens that are attacking each other, either directly or indirectly
• h = 7 for the above state
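The board figure for the h = 7 state is not reproduced here. The sketch below computes the same kind of h, assuming a state is encoded as one queen per column with state[col] = row (an assumed encoding; the example state is also invented).

```python
# h = number of pairs of queens attacking each other (directly or indirectly).
# A state is assumed to place one queen per column: state[col] = row.
def attacking_pairs(state):
    h = 0
    n = len(state)
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = state[c1] == state[c2]
            same_diag = abs(state[c1] - state[c2]) == (c2 - c1)
            if same_row or same_diag:
                h += 1
    return h

# Illustrative 8-queens state (not the one from the slide).
print(attacking_pairs([3, 2, 1, 4, 3, 2, 1, 2]))   # 17 attacking pairs
```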
Hill climb advantages
• The relative simplicity of the algorithm
– makes it a popular first choice amongst optimizing
algorithms.
• It is used widely in artificial intelligence, for reaching a
goal state from a starting node.
• Hill climbing can often produce a better result than
other algorithms when the amount of time available to
perform a search is limited,
– such as with real-time systems.
– It is an anytime algorithm: it can return a valid solution even
if it's interrupted at any time before it ends.
Hill climb Disadvantages
• Unfortunately, hill climbing often gets stuck
for the following reasons:
1. Local maxima
2. Plateau
3. Ridges
• In each case, the algorithm reaches a point at
which no progress is being made.
Hill Climbing: Disadvantages
Local maxima
• A state that is better than all of its neighbours, but not better than some other states far away.
Hill Climbing: Disadvantages
Plateau
• A flat area of the search space in which all neighbouring states have the same value.
Hill Climbing: Disadvantages
Ridge
• The orientation of the high region, compared to the set of available moves, makes it impossible to climb up. However, two moves executed serially may increase the height.
Hill Climbing: Disadvantages
• Hill-climbing algorithms often fail to find a goal when one exists because they can get stuck.
Simulated annealing search
• Idea: escape local maxima by allowing some “bad” moves but gradually decrease their frequency (a sketch follows).
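A compact sketch of that idea for a maximization problem; random_neighbour, value and the geometric cooling schedule are assumed placeholders, not taken from the slides.

```python
import math, random

def simulated_annealing(start, random_neighbour, value,
                        t0=1.0, cooling=0.995, t_min=1e-3):
    """Accept every uphill move; accept a downhill move with probability
    exp(delta / T), so "bad" moves become rarer as the temperature T falls."""
    current, t = start, t0
    while t > t_min:
        candidate = random_neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling                     # gradually lower the temperature
    return current
```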
Properties of simulated annealing search
• If the temperature decreases slowly enough, simulated annealing finds a global optimum with probability approaching 1.
Beam search
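The slide's example tree is not reproduced. As a reminder (a standard description, not taken from the slide), beam search proceeds level by level like breadth-first search but keeps only the beam_width most promising nodes at each level. A minimal sketch with assumed goal_test, successors and h:

```python
# Beam search sketch: expand level by level, but keep only the beam_width
# nodes with the best (lowest) heuristic value at each level.
def beam_search(start, goal_test, successors, h, beam_width=2):
    frontier = [start]
    while frontier:
        for state in frontier:
            if goal_test(state):
                return state
        children = [c for s in frontier for c in successors(s)]
        frontier = sorted(children, key=h)[:beam_width]   # prune to the beam
    return None
```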
Genetic algorithms
• A successor state is generated by combining two parent states
• Operations
– Crossover (2 parents -> 2 children)
– Mutation (one bit)
• Basic structure (a sketch follows)
– Create population
– Perform crossover & mutation (on fittest)
– Keep only fittest children
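A minimal sketch of that basic structure for bit-string individuals; the population size, rates and the toy "one-max" fitness are assumed placeholders.

```python
import random

def genetic_algorithm(fitness, bits=8, pop_size=20, generations=100,
                      mutation_rate=0.1):
    """Create a population, breed the fittest by one-point crossover
    (2 parents -> 2 children) and mutation, keep only the fittest children."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # fittest first
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(1, bits - 1)          # one-point crossover
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                if random.random() < mutation_rate:    # flip one random bit
                    i = random.randrange(bits)
                    child[i] = 1 - child[i]
                children.append(child)
        pop = children[:pop_size]
    return max(pop, key=fitness)

# Toy fitness: number of 1-bits ("one-max"), an assumed example problem.
print(genetic_algorithm(fitness=sum))
```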