Search Algorithms 2

Data-Driven and Goal-Driven Search

Data-Driven Reasoning
- takes the facts of the problem
- applies rules and legal moves
- produces new facts that eventually lead to a goal
- also called forward chaining

Goal-Driven Reasoning
- focuses on the goal
- finds the rules that could produce the goal
- chains backward through sub goals to the given facts
- also called backward chaining
Data-Driven and Goal-Driven Search Cont..
Goal-Driven search is recommended if:
• A goal or hypothesis is given in the problem statement or can easily be formulated.
• There is a large number of rules that match the facts of the problem and thus produce an increasing number of conclusions or goals.
• Problem data are not given but must be acquired by the problem solver.

Data-Driven search is recommended if:
• All or most of the data are given in the initial problem statement.
• There is a large number of potential goals, but there are only a few ways to use the facts and given information of the particular problem.
• It is difficult to form a goal or hypothesis.
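Since forward and backward chaining are the operational core of these two strategies, a small sketch may help. Everything here is illustrative: the rule encoding (a set of premises plus a single conclusion) and all names are assumptions, and the backward chainer additionally assumes an acyclic rule set.

```python
# Illustrative rule base: (premises, conclusion) pairs (an assumed encoding).
RULES = [
    ({"croaks", "eats_flies"}, "frog"),
    ({"frog"}, "green"),
]

def forward_chain(facts, rules):
    """Data-driven: apply rules to the known facts, producing new facts,
    until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Goal-driven: chain backward from the goal through sub-goals until
    every branch bottoms out in the given facts (assumes acyclic rules)."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)
```

Forward chaining saturates the fact set; backward chaining only touches rules relevant to the goal, which is why it suits problems where a hypothesis is given up front.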
Informed (Heuristic) Search Strategies
• Informed Search – a strategy that uses problem-specific knowledge beyond the definition of the problem itself
• Best-First Search – an algorithm in which a node is selected for expansion based on an evaluation function f(n)
  – Traditionally the node with the lowest evaluation function is selected
  – Not an accurate name… expanding the truly best node first would be a straight march to the goal
  – In practice: choose the node that appears to be the best

AI: Chapter 4: Informed Search and Exploration (January 31, 2006)
Informed (Heuristic) Search Strategies
• There is a whole family of Best-First Search algorithms with different evaluation functions
  – Each has a heuristic function h(n)
• h(n) = estimated cost of the cheapest path from node n to a goal node
• Example: in route planning, the estimate of the cost of the cheapest path might be the straight-line distance between two cities

HEURISTIC SEARCH

• Introduction:
• AI problem solvers employ heuristics in two situations:
– First: A problem may not have an exact
solution, because of inherent ambiguities in the
problem statement or available data.
• Optical illusions exemplify these ambiguities.
• Vision systems use heuristics to select the most
likely of several possible interpretations of a given
scene.
lec 6b CSC 450-AI by Asma Kausar @UT, Tabouk (4/03/2013)
HEURISTIC SEARCH

• Introduction:
• AI problem solvers employ heuristics in two situations:
– Second: A problem may have an exact solution,
but the computational cost of finding it may be
prohibitive.
• In many problems, state space growth is
combinatorially explosive, with the number of possible
states increasing exponentially or factorially with the
depth of the search.
Three levels of the tic-tac-toe state space reduced by symmetry

The “most wins” heuristic applied to the first children in tic-tac-toe.

Heuristically reduced state space for tic-tac-toe.
Best-First Search

• Organize the toDo (open) list as an ordered list, i.e. keep it sorted with the “best” node at the front.
• New nodes are always added at the appropriate place in the list: go down the list until reaching the first node that is “worse” than the one to be inserted, and insert the new node in front of it. Insert the new node at the end if there is no node worse than it.
• Since nodes are always taken off the front, toDo operates as a best-first ordered queue.
• NOTE: This should really be called something like “apparently-best-first search”. Usually there is no guarantee the chosen node is really the best - just the one that seems the best at the time.

The goal of best-first search is to find the goal state by looking at as few states as possible.
ALGORITHM FOR HEURISTICS SEARCH

Best-First-Search
Best-first search uses two lists to maintain states:
– an OPEN list to keep track of the current fringe of the search.
– a CLOSED list to record states already visited.
• The algorithm orders the states on OPEN according to some heuristic estimate of their closeness to a goal.
ALGORITHM FOR HEURISTICS SEARCH

Best-First-Search
• Each iteration of the loop considers the most promising state on the OPEN list.
• The example algorithm sorts and rearranges OPEN so that states with the lowest heuristic values come first.
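The OPEN/CLOSED bookkeeping above can be sketched in Python, using a priority queue ordered by h(n) as the OPEN list. This is an illustrative sketch, not code from the slides: the `neighbors` callback, the graph encoding, and the heuristic values are assumptions.

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Best-first search with an OPEN list (priority queue ordered by the
    heuristic h) and a CLOSED set of states already visited.
    `neighbors(state)` returns the successor states."""
    open_list = [(h(start), start)]   # OPEN: the current fringe
    closed = set()                    # CLOSED: states already visited
    parent = {start: None}
    while open_list:
        _, state = heapq.heappop(open_list)   # most promising state first
        if state == goal:
            path = []                          # reconstruct path via parents
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        closed.add(state)
        for nxt in neighbors(state):
            if nxt not in closed and nxt not in parent:
                parent[nxt] = state
                heapq.heappush(open_list, (h(nxt), nxt))
    return None
```

For example, with the straight-line-distance values used in the hill-climbing example later in the deck (A 10.4, D 8.9, E 6.9, F 3.0), the search would expand S, D, E, F and reach G.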
Best-First Search Cont..
A Quick Review
• g(n) = cost from the initial state to the current state n
• h(n) = estimated cost of the cheapest path from node n to a goal node
• f(n) = evaluation function used to select a node for expansion (usually the lowest-cost node)

Greedy Best-First Search
• Greedy best-first search tries to expand the node that is closest to the goal, assuming this will lead to a solution quickly
  – f(n) = h(n)
• Implementation
  – expand the “most desirable” node in the fringe queue
  – keep the queue sorted so that the most desirable node (lowest h) is at the front
• Example: consider the straight-line distance heuristic h
  – Expand the node that appears to be closest to the goal
Greedy Best-First Search

A* Search
• A* (A star) is the most widely known form of
Best-First search
– It evaluates nodes by combining g(n) and h(n)
– f(n) = g(n) + h(n)
– Where
• g(n) = cost so far to reach n
• h(n) = estimated cost to goal from n
• f(n) = estimated total cost of path through n

A* Search
• When h(n) = actual cost to goal
– Only nodes in the correct path are expanded
– Optimal solution is found
• When h(n) < actual cost to goal
– Additional nodes are expanded
– Optimal solution is found
• When h(n) > actual cost to goal
– Optimal solution can be overlooked

A* Search
• Avoid expanding paths that are already expensive
• Idea: consider the cost of getting to x in addition to the estimated cost of getting from x to the goal
• Evaluation function: f(x) = g(x) + h(x)
  – g(x): cost so far to reach x
  – h(x): estimated cost to goal from x (heuristic function)
  – f(x): estimated total cost of the path through x to the goal
• As in greedy search, expand the node with the smallest f(x) value (but this time consider the actual cost so far plus the estimate)
Heuristic Functions
• To use A*, a heuristic function must be used that never overestimates the number of steps to the goal (an admissible heuristic)
• h1 = the number of misplaced tiles
• h2 = the sum of the Manhattan distances of the tiles from their goal positions

Heuristic Functions
• For the example 8-puzzle state: h1 = 7
• h2 = 4+0+3+3+1+0+2+1 = 14
A* graph search

1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes on OPEN, do:
   (a) Select the node on OPEN with the lowest f-value and move it to CLOSED.
   (b) Generate its successors.
   (c) For each successor do:
      i. If it has not been generated before (i.e., it is on neither OPEN nor CLOSED), evaluate it, add it to OPEN, and record its parent.
      ii. If it has been generated before, change the parent if this new path is better than the previous one; in that case, update the cost of getting to this node and to any successors that this node may already have.
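The steps above can be sketched in Python. This is an illustrative implementation under stated assumptions: `neighbors(state)` yields (successor, step_cost) pairs, and instead of rewriting costs of already-queued nodes it re-pushes them and skips stale queue entries, which is equivalent for a consistent heuristic.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* graph search: f(n) = g(n) + h(n). Returns (path, cost)."""
    open_list = [(h(start), start)]
    g = {start: 0}                   # g(n): cost so far to reach n
    parent = {start: None}
    closed = set()
    while open_list:
        _, state = heapq.heappop(open_list)
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1], g[goal]
        if state in closed:          # stale entry from a worse path
            continue
        closed.add(state)
        for nxt, cost in neighbors(state):
            new_g = g[state] + cost
            if nxt not in g or new_g < g[nxt]:   # better path found: re-parent
                g[nxt] = new_g
                parent[nxt] = state
                heapq.heappush(open_list, (new_g + h(nxt), nxt))
    return None, float("inf")
```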
Local search Algorithms
Local search Algorithms
• Covers algorithms that perform purely local search in the state space
  – evaluating and modifying one or more current states rather than systematically exploring paths from an initial state.
• These algorithms are suitable for problems in which all that matters is the solution state, not the path cost to reach it.
• The family of local search algorithms includes methods inspired by statistical physics (simulated annealing) and evolutionary biology (genetic algorithms).
Local search Algorithms
• Basic search algorithms explore the search space systematically
  – keeping one or more paths in memory and recording which alternatives have been explored; when a goal is found, the path to that goal also constitutes a solution to the problem.
• In many problems, however, the path to the goal is irrelevant.
  – For example, in the 8-queens problem, what matters is the final configuration of queens, not the order in which they are added.
• Local search algorithms are a different class of algorithms
  – ones that do not worry about paths at all.
Local search Algorithms
• Local search algorithms operate using a single current node (rather
than multiple paths) and generally move only to neighbors of that
node. Typically, the paths followed by the search are not retained.
• Two key advantages:
– (1) they use very little memory—usually a constant amount;
and
– (2) they can often find reasonable solutions in large or infinite
(continuous) state spaces for which systematic algorithms are
unsuitable.
• Are useful for solving pure optimization problems, in which the
aim is to find the best state according to an objective function.
• Local search algorithms explore the landscape.
• A complete local search algorithm always finds a goal if one exists;
• An optimal algorithm always finds a global minimum/maximum.
Hill climbing

A basic heuristic search method: depth-first + heuristic
Hill climbing_1
• Example, using the straight-line distance: perform depth-first, BUT instead of left-to-right selection, FIRST select the child with the best heuristic value.
• [Figure: from S, the children A (10.4) and D (8.9) are generated; D is expanded first, then E (6.9), then F (3.0), reaching the goal G.]
Hill climbing_1 algorithm:
1. QUEUE <-- path only containing the root;

2. WHILE QUEUE is not empty AND goal is not reached
   DO remove the first path from the QUEUE;
      create new paths (to all children);
      reject the new paths with loops;
      sort the new paths by HEURISTIC value;
      add the new paths to the front of the QUEUE;

3. IF goal reached THEN success;
   ELSE failure;
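The pseudocode above transcribes almost line for line into Python. A sketch under assumptions: `children(state)` returns successor states, `h(state)` is the heuristic, and the goal test is done when a path is removed from the queue (equivalent to the WHILE condition).

```python
def hill_climbing_1(root, goal, children, h):
    """Heuristic depth-first search with backtracking, as in the
    Hill climbing_1 pseudocode: QUEUE holds paths, best child first."""
    queue = [[root]]                              # QUEUE <-- path with root
    while queue:                                  # WHILE QUEUE is not empty
        path = queue.pop(0)                       # remove the first path
        if path[-1] == goal:
            return path                           # goal reached: success
        new_paths = [path + [c] for c in children(path[-1])
                     if c not in path]            # reject paths with loops
        new_paths.sort(key=lambda p: h(p[-1]))    # sort by heuristic value
        queue = new_paths + queue                 # add to FRONT of queue
    return None                                   # failure
```

Because the remaining siblings stay on the queue, this variant can backtrack past a dead end, unlike the pure hill-climbing loop described next.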
Hill climbing search
• The hill-climbing search algorithm is simply a loop
that continually moves in the direction of increasing
value—that is, uphill.
• Terminates when it reaches a "peak" where no
neighbor has a higher value.
• The algorithm does not maintain a search tree,
– the current node need only record the state and the
value of the objective function.
• Hill climbing does not look ahead beyond the
immediate neighbors of the current state.
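The loop just described can be sketched in a few lines. Names are illustrative assumptions: `value(state)` is the objective function to maximize and `neighbors(state)` returns the immediate neighbors; no search tree is kept.

```python
def hill_climbing(start, neighbors, value):
    """Steepest-ascent hill climbing: keep moving to the best neighbor
    until no neighbor has a higher value (a peak), then stop."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current       # peak reached: no uphill move exists
        current = best           # move uphill; no look-ahead beyond neighbors
```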
Hill-climbing search

Hill-climbing search
Problem: depending on the initial state, hill climbing can get stuck in local maxima.
Example: n-queens
• Put n queens on an n × n board with no two queens on the same row, column, or diagonal
• To illustrate hill climbing, we will use the 8-queens problem
the 8-queens problem

• Local search algorithms use a complete-state formulation, where each state has 8 queens on the board, one per column.
• The successors of a state are all possible states generated by moving a single queen to another square in the same column
  – (so each state has 8 × 7 = 56 successors).
• The heuristic cost function h is the number of pairs of queens that are attacking each other, either directly or indirectly.
• The global minimum of this function is zero, which occurs only at perfect solutions (no attacks).
The 8-queens problem
[Figure: an 8-queens board state]
• h = number of pairs of queens that are attacking each other, either directly or indirectly
• h = 7 for the state shown
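The cost function h is easy to compute from the complete-state formulation. As a sketch: a state is assumed to be a tuple of 8 row indices, one queen per column (that encoding is an assumption, though it matches the "one per column" formulation above); columns can never clash, so only rows and diagonals are checked.

```python
from itertools import combinations

def attacking_pairs(state):
    """h = number of pairs of queens attacking each other (directly or
    indirectly): same row, or same diagonal (|Δrow| == |Δcolumn|)."""
    h = 0
    for (c1, r1), (c2, r2) in combinations(enumerate(state), 2):
        if r1 == r2 or abs(r1 - r2) == abs(c1 - c2):
            h += 1
    return h
```

All 8 queens in one row give the maximum of 8 × 7 / 2 = 28 attacking pairs; a solution state gives 0.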
The 8-queens problem
[Figure: an 8-queens board state]
• A local minimum with h = 1, close to the optimal solution
Hill climb advantages
• The relative simplicity of the algorithm
– makes it a popular first choice amongst optimizing
algorithms.
• It is used widely in artificial intelligence, for reaching a
goal state from a starting node.
• Hill climbing can often produce a better result than
other algorithms when the amount of time available to
perform a search is limited,
– such as with real-time systems.
  – It is an anytime algorithm: it can return a valid solution even if it is interrupted before it ends.
Hill climb Disadvantages
• Unfortunately, hill climbing often gets stuck
for the following reasons:
1. Local maxima
2. Plateau
3. Ridges
• In each case, the algorithm reaches a point at
which no progress is being made.

Hill Climbing: Disadvantages
Local maxima
• A state that is better than all of its neighbours, but not better than some other states far away.
• A peak that is higher than each of its neighboring states but lower than the global maximum.
Hill Climbing: Disadvantages
Plateau
A flat area of the search space in which all
neighbouring states have the same value.

Hill Climbing: Disadvantages
Ridge
The orientation of the high region, compared to the set
of available moves, makes it impossible to climb up.
However, two moves executed serially may increase the
height.

Hill Climbing: Disadvantages
• Hill-climbing algorithms can often fail to find a goal when one exists because they can get stuck. Possible remedies:
• Backtrack to some earlier node and try going in a different direction.
• Stochastic hill climbing: choose at random from among the uphill moves.
• Random restart: “If at first you don't succeed, try, try again.”
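The random-restart remedy is a thin wrapper around the basic climb. An illustrative sketch (the basic climb loop is repeated here so the snippet is self-contained; all names are assumptions): run hill climbing from several random starting states and keep the best peak found.

```python
def climb(start, neighbors, value):
    """Plain steepest-ascent climb to a peak (same loop as before)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current
        current = best

def random_restart(random_start, neighbors, value, restarts=25):
    """Random-restart hill climbing: 'if at first you don't succeed,
    try, try again' - climb from `restarts` random states, keep the best."""
    peaks = (climb(random_start(), neighbors, value) for _ in range(restarts))
    return max(peaks, key=value)
```

Each restart lands in some basin of attraction, so with enough restarts the global maximum's basin is hit with high probability.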
Simulated Annealing
the idea
• A hill-climbing algorithm is incomplete
– It gets stuck
• In contrast, a purely random walk—that is, moving to
a successor chosen uniformly at random from the set
of successors—is complete but extremely inefficient.
• Therefore, it seems reasonable to combine hill
climbing with a random walk in some way that yields
both efficiency and completeness.
• Simulated annealing is such an algorithm.

Simulated annealing search
• Idea: escape local maxima by allowing some "bad" moves but
gradually decrease their frequency.

Properties of simulated annealing search
• One can prove: if T decreases slowly enough, then simulated annealing search will find a global optimum with probability approaching 1
• Widely used in VLSI layout, airline scheduling, etc.
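The combination of hill climbing and random walk can be sketched as follows. This is illustrative only: the cooling `schedule` and helper names are assumptions; the key line is that a "bad" move (ΔE < 0) is accepted with probability e^(ΔE/T), so bad moves become rarer as the temperature T decreases.

```python
import math
import random

def simulated_annealing(start, neighbors, value, schedule):
    """Simulated annealing for maximization: pick a random neighbor;
    always accept uphill moves, accept downhill moves with
    probability exp(delta / T), where T = schedule(t) cools over time."""
    current = start
    for t in range(1, 10 ** 6):
        T = schedule(t)
        if T <= 0:
            return current                  # frozen: stop
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt                   # sometimes accept a "bad" move
    return current
```

At high T the walk is nearly random (helping escape local maxima); as T approaches 0 it behaves like pure hill climbing.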
Beam search

Narrowing the width of the breadth-first search

Local beam search
• Keep track of k states rather than just one
• Start with k randomly generated states
• At each iteration, all the successors of all k states are generated
• If any one is a goal state, stop; else select the k best successors from the complete list and repeat.
• In a local beam search, useful information is passed among the parallel search threads. In effect, the states that generate the best successors say to the others, “Come over here, the grass is greener!”
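The iteration just described can be sketched directly. Illustrative assumptions: `random_state()` generates a start state, `neighbors`, `value`, and `is_goal` are callbacks, and the successors of all k states are pooled before the k best are chosen (this pooling is what distinguishes it from k independent restarts).

```python
import heapq

def local_beam_search(random_state, neighbors, value, is_goal, k=4, iters=100):
    """Local beam search: keep the k best states; each iteration pools
    ALL successors of all k states and keeps the k best of the pool."""
    states = [random_state() for _ in range(k)]
    for _ in range(iters):
        goals = [s for s in states if is_goal(s)]
        if goals:
            return goals[0]                       # a goal state: stop
        pool = [n for s in states for n in neighbors(s)]
        if not pool:
            break
        states = heapq.nlargest(k, pool, key=value)   # k best successors
    return max(states, key=value)                 # best state found so far
```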
Beam search (1):
• Assume a pre-fixed WIDTH (example: 2)
• Perform breadth-first, BUT only keep the WIDTH best new nodes (depending on the heuristic) at each new level.
• [Figure: at depth 1, S generates A (10.4) and D (8.9); at depth 2 the candidates are B (6.7), D (8.9), A (10.4), and E (6.9), and only the best two, B and E, are kept; D and A are ignored.]
Genetic algorithms
• A successor state is generated by combining two parent states
• Start with k randomly generated states (the population)
• A state is represented as a string over a finite alphabet (often a string of 0s and 1s)
• Evaluation function (fitness function): higher values for better states
• Produce the next generation of states by selection, crossover, and mutation
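The selection / crossover / mutation cycle can be sketched over bit strings. All parameter values and names here are illustrative assumptions; selection weight proportional to fitness and single-point crossover are one common choice among many.

```python
import random

def genetic_algorithm(fitness, n_bits=8, pop_size=20, generations=100,
                      mutation_rate=0.05):
    """Maximize `fitness` over bit strings: fitness-proportional selection,
    single-point crossover, per-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: fitter individuals are more likely to reproduce
        weights = [fitness(ind) + 1e-9 for ind in pop]
        nxt = []
        for _ in range(pop_size):
            p1, p2 = random.choices(pop, weights=weights, k=2)
            cut = random.randint(1, n_bits - 1)      # crossover point
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):                  # mutation: flip bits
                if random.random() < mutation_rate:
                    child[i] = 1 - child[i]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

On the toy "OneMax" problem (fitness = number of 1 bits), this reliably converges to the all-ones string.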
Genetic Algorithms
• Quicker but randomized searching for an optimal
parameter vector

• Operations
– Crossover (2 parents -> 2 children)
– Mutation (one bit)

• Basic structure
– Create population
– Perform crossover & mutation (on fittest)
– Keep only fittest children

Genetic Algorithms
• Children carry parts of their parents’ data

• Only “good” parents can reproduce


– Children are at least as “good” as parents?
• No, but “worse” children don’t last long

• Large population allows many “current points” in


search
– Can consider several regions (watersheds) at once

Genetic Algorithms
• Representation
– Children (after crossover) should be similar to parent, not
random
– Binary representation of numbers isn’t good - what
happens when you crossover in the middle of a number?
– Need “reasonable” breakpoints for crossover (e.g.
between R, xcenter and ycenter but not within them)
• “Cover”
– Population should be large enough to “cover” the range of
possibilities
– Information shouldn’t be lost too soon
– Mutation helps with this issue

Experimenting With GAs
• Be sure you have a reasonable “goodness” criterion
• Choose a good representation (including methods for
crossover and mutation)
• Generate a sufficiently random, large enough
population
• Run the algorithm “long enough”
• Find the “winners” among the population
• Variations: multiple populations, keeping vs. not
keeping parents, “immigration / emigration”,
mutation rate, etc.

Genetic algorithms

• Fitness function: number of non-attacking pairs of queens (min = 0, max = 8 × 7/2 = 28)
• Selection probability is proportional to fitness, e.g. 24/(24+23+20+11) = 31%, 23/(24+23+20+11) = 29%, etc.
Genetic algorithms
Uninformed vs. informed
• Blind (or uninformed) search algorithms:
  – Solution cost is not taken into account.
• Heuristic (or informed) search algorithms:
  – A solution cost estimate is used to guide the search.
  – Neither the optimal solution, nor even any solution, is guaranteed.
