3-Solving problems by searching-31-07-2024

Problem Solving

Dr K Ganesan
Professor – HAG, SCORE
VIT, Vellore – 632 014
[email protected]
Phone: 6382203768
• Introduction
• State space search is like an AI playing the hunter, say,
navigating a map filled with many possible routes.
• The AI's challenge is to find the quickest path to the
destination, namely, the solution to a problem.
• It's like playing a game of chess where the AI has to
think ahead about every move to win the game.
• What is State Space Search in AI?
• It's like having many doors to open, with each door
leading to a room full of other doors.
• Some paths lead to dead ends, some go in circles,
and some lead to the goal and the "state space" is
the name for all the rooms and doors combined.
• Each "state" is a snapshot of the problem at some point.
• The AI looks at all these snapshots and tries to figure
out which sequence of moves will get it to the final
picture, where the problem is solved.
• It's a bit like trying to solve a jigsaw puzzle by picturing
what the completed picture should look like and then
finding pieces that fit together to make that picture.
• AI uses this method to do things like planning the best
route for delivery trucks, figuring out the moves in a
game, or even making decisions about investments.
• It's a fundamental part of how AI systems think and
make decisions.
• Famous Algorithms of State Space Search in AI
• 1. Breadth-First Search (BFS)
• This algorithm starts at the tree root (starting point) and
explores all neighbour nodes at the present depth before
moving on to nodes at the next depth level.
• It's like searching all the rooms on one floor before taking
the stairs to the next.
• 2. Depth-First Search (DFS)
• DFS goes straight down into the tree as deep as it can go,
before backing up and trying different paths.
• It's similar to exploring a path in a cave until we hit a dead
end, then backtracking to try a different path.
• 3. Uniform Cost Search
• This method expands the least costly node first.
• It's like choosing paths based on toll costs, always taking the
cheapest route available until we reach our destination.
• 4. Greedy Best-First Search
• It picks the path that seems best at that moment, like
choosing the brightest corridor in a maze, hoping it leads to the exit.
• 5. A* Search
• A* Search combines the best features of both the Uniform
Cost Search and Greedy Best-First Search.
• It uses a heuristic to estimate the cost of the best path to the
goal from the current state, and also considers the cost
already incurred from the start to the current state.
• It's like choosing paths based on both the toll cost and how
brightly they're lit.
• The choice depends on the problem the AI is trying to solve.
• Some are faster but may miss the shortest path, while others
are slower but guarantee they will find a route in the end.
• Features of State Space Search
• 1. Completeness
• Completeness refers to the algorithm's ability to guarantee
a solution if one exists.
• For example, BFS and A* are complete, meaning they will
always find a solution if there is one to be found.
• 2. Optimality
• An algorithm is optimal if it always finds the least-cost path to the goal.
• Uniform Cost Search and A* (with an admissible heuristic) are
examples of optimal algorithms.
• 3. Time Complexity
• This is a measure of the algorithm's performance in terms of
how quickly it can find a solution.
• DFS can be faster than BFS in some cases because it doesn't
need to store all the fringe nodes at each level.
• 4. Space Complexity
• It measures how much memory the algorithm uses. BFS can
have a high space complexity because it keeps track of all the
nodes at the current depth level.
• 5. Heuristics
• Heuristics (used in A*) can help to estimate the cost of the
cheapest path from the current node to the goal.
• A good heuristic can speed up the search process.
• Steps in State Space Search
• 1. Define the Initial State
• This is where the problem begins. For example, in a puzzle, the
initial state would be the starting arrangement of the pieces.
• 2. Define the Goal State
• The goal state is the desired arrangement or condition that
solves the problem. While travelling, it would be reaching
the destination.
• 3. Generate Successor States
• From the current state, create a set of possible 'next moves'
or states that are reachable directly from the current state.
• 4. Apply Search Strategy
• Choose a path based on the chosen search strategy, such as
depth-first, breadth-first, or best-first search. This decision is
crucial and can affect the efficiency of the search.
• 5. Check for Goal State
• At each new state, check to see if the goal has been reached.
If it has, the search is successful.
• 6. Path Cost
• Calculate the cost of the current path. If the cost is too high, or
if a dead end is reached, backtrack and try a different path.
• 7. Loop and Expand
• Repeat the process: generate successors, apply the search
strategy, and check for the goal state, until the goal is reached
or no more states can be generated.
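The seven steps above can be combined into one generic search loop; a minimal Python sketch, assuming a FIFO frontier (which gives breadth-first order) and caller-supplied `is_goal` and `successors` functions:

```python
from collections import deque

def state_space_search(initial, is_goal, successors):
    """Generic state-space search; the FIFO frontier gives breadth-first order."""
    frontier = deque([(initial, [initial])])  # (state, path taken so far)
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):                    # Step 5: check for the goal state
            return path
        for nxt in successors(state):         # Step 3: generate successor states
            if nxt not in visited:            # avoid circles / repeated states
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                               # no more states can be generated
```

For instance, searching the integers where each state n has successors n+1 and 2n returns the shortest action sequence from 1 to a target.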
• Example of State Space Search
• Consider a classic problem in AI: Puzzle of the water jugs.
• We have a 3-liter jug and a 5-liter jug, and we need to
measure out exactly 4 liters of water using these two jugs.
• Initial State: (similar to input)
• We start with both jugs empty: (0, 0) where first value is the
amount of water in the 3-liter jug, and second value is the
amount in the 5-liter jug.
• Goal State: (similar to output)
• The state where the 5-liter jug contains exactly 4 liters of
water: (x, 4). (The 3-liter jug can never hold 4 liters.)
• Actions: (Similar to allowed functions / rules)
• Fill a jug completely.
• Empty a jug.
• Pour water from one jug to the other until either the first
jug is empty or the second jug is full.
• State Space Representation:
• We can represent this problem with a graph where
each node represents the state of the jugs, and
each edge represents one of the actions that can
change the state.
• For example:
• Filling the 5-liter jug from the tap: (0, 0) → (0, 5)
• Pouring water from the 5-liter jug into the 3-liter
jug: (0, 5) → (3, 2)
• Emptying the 3-liter jug: (3, 2) → (0, 2)
• Pouring water from the 5-liter jug to fill the 3-liter
jug: (0, 2) → (2, 0)
• And so on, until reaching the goal state.
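The actions above can be expressed as a successor function; a minimal Python sketch for the (3-liter jug, 5-liter jug) representation used here:

```python
def successors(state, cap=(3, 5)):
    """All states reachable in one action from (x, y) = (3-liter, 5-liter)."""
    x, y = state
    results = set()
    results.add((cap[0], y))   # fill the 3-liter jug completely
    results.add((x, cap[1]))   # fill the 5-liter jug completely
    results.add((0, y))        # empty the 3-liter jug
    results.add((x, 0))        # empty the 5-liter jug
    pour = min(x, cap[1] - y)  # pour 3-liter -> 5-liter until empty or full
    results.add((x - pour, y + pour))
    pour = min(y, cap[0] - x)  # pour 5-liter -> 3-liter until empty or full
    results.add((x + pour, y - pour))
    results.discard(state)     # drop actions that change nothing
    return results
```

For example, the edge (0, 5) → (3, 2) from the list above is produced by the pour action.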
• Water Jug Problem Definition,
• “You are given two jugs, a 4-liter one and a 3-liter one.
Neither has any measuring markers on it. There is a
pump that can be used to fill the jugs with water. How
can we get exactly 2 liters of water into a 4-liter jug?”

• In the production rules for the water jug problem, let x
denote the amount of water in the 4-litre jug and y the
amount in the 3-litre jug, i.e. x = 0, 1, 2, 3, 4 and y = 0, 1, 2, 3.
• Start state: (0, 0)
• Goal state: (2, n) for any n
• Start from the start state and end up at the goal state.
• Production rules for the water jug problem in AI are:
• Rule 1: (x, y | x < 4) → (4, y). Fill the 4-litre jug.
• Rule 2: (x, y | y < 3) → (x, 3). Fill the 3-litre jug.
• Rule 3: (x, y | x > 0) → (x - d, y). Pour some water out of the 4-litre jug.
• Rule 4: (x, y | y > 0) → (x, y - d). Pour some water out of the 3-litre jug.
• Rule 5: (x, y | x > 0) → (0, y). Empty the 4-litre jug on the ground.
• Rule 6: (x, y | y > 0) → (x, 0). Empty the 3-litre jug on the ground.
• Rule 7: (x, y | x + y ≥ 4 and y > 0) → (4, y - (4 - x)). Pour water from
the 3-litre jug into the 4-litre jug until it is full.
• Rule 8: (x, y | x + y ≥ 3 and x > 0) → (x - (3 - y), 3). Pour water from
the 4-litre jug into the 3-litre jug until it is full.
• Rule 9: (x, y | x + y ≤ 4 and y > 0) → (x + y, 0). Pour all the water
from the 3-litre jug into the 4-litre jug.
• Rule 10: (x, y | x + y ≤ 3 and x > 0) → (0, x + y). Pour all the water
from the 4-litre jug into the 3-litre jug.
• Solution to the Water Jug Problem in Artificial Intelligence
• Current state (0,0)
• Loop till the goal state (2,0) is reached.
• Apply a rule when the left side matches the current state
• Set the new current state to the resulting state
• Start state (0,0)
• (0,3) Apply Rule 2, Fill the 3-litre Jug
• (3,0) Apply Rule 9: Pour all the water from a 3-litre jug into a
4-litre jug as 4 > 3.
• (3,3) Apply Rule 2, Fill the 3-litre Jug
• (4,2) Apply Rule 7: Pour water from the 3-litre jug into the 4-litre
jug until it is full (the 4-litre jug already holds 3 litres, so pour 1 litre
from the 3-litre jug: the first becomes 3+1=4 and the second 3-1=2)
• (0,2) Apply Rule 5, Empty 4-litre jug on the ground
• (2,0) Apply Rule 9: Pour all the water from a 3-litre jug into a
4-litre jug
• Another solution for water jug is:
• (0, 0) – Start State
• (4, 0) – Rule 1: Fill the 4-litre jug
• (1, 3) – Rule 8: Pour water from the 4-litre jug into the
3-litre jug until the 3-litre jug is full (second jug can
take only 3 lit hence first jug becomes 4-3=1 lit)
• (1, 0) – Rule 6: Empty the 3-litre jug on the ground
• (0, 1) – Rule 10: Pour all the water from the 4-litre jug
into the 3-litre jug.
• (4, 1) – Rule 1: Fill the 4-litre jug.
• (2, 3) – Rule 8: Pour water from the 4-litre jug into the
3-litre jug until the 3-litre jug is full (second jug can
take only 2 lit from first jug)
• (2, 0) – Rule 6: Empty the 3-liter jug on the ground
• Goal State reached
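The production rules and the solutions above can be checked mechanically; a minimal Python sketch (rules 3 and 4, which pour some water out, are omitted because the solutions never use them):

```python
def apply_rule(rule, state):
    """Apply one production rule to (x, y) = (4-litre, 3-litre); None if blocked."""
    x, y = state
    rules = {
        1: (4, y) if x < 4 else None,                           # fill the 4-litre jug
        2: (x, 3) if y < 3 else None,                           # fill the 3-litre jug
        5: (0, y) if x > 0 else None,                           # empty the 4-litre jug
        6: (x, 0) if y > 0 else None,                           # empty the 3-litre jug
        7: (4, y - (4 - x)) if x + y >= 4 and y > 0 else None,  # pour 3 -> 4 until full
        8: (x - (3 - y), 3) if x + y >= 3 and x > 0 else None,  # pour 4 -> 3 until full
        9: (x + y, 0) if x + y <= 4 and y > 0 else None,        # pour all of 3 into 4
        10: (0, x + y) if x + y <= 3 and x > 0 else None,       # pour all of 4 into 3
    }
    return rules[rule]

# Replay the second solution: rules 1, 8, 6, 10, 1, 8, 6 from (0, 0)
state = (0, 0)
for r in (1, 8, 6, 10, 1, 8, 6):
    state = apply_rule(r, state)
```

After the loop, `state` is (2, 0), matching the goal state reached above.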
• Develop all production rules for following water jug problem
• You are given two jugs, 6 litre one and 5 litre one. Neither has
any measuring marks on it. There is a pump to fill jugs with
water. How can we get exactly 2 litres of water in 6-litre jug?
• 1.Fill the 6-litre jug.
• 2.Pour water from the 6-litre jug into 5-litre jug until 5-litre jug
is full. This leaves you with 1 litre of water in the 6-litre jug.
• 3.Empty the 5-litre jug.
• 4.Pour 1 litre of water from 6-litre jug into the empty 5-litre jug
• 5.Fill the 6-litre jug again.
• 6. Carefully pour water from the 6-litre jug into the 5-litre jug until
the 5-litre jug is full. Since you already have 1 litre of water in the
5-litre jug, you can only pour 4 litres from the 6-litre jug.
• Now, we have exactly 2 litres of water in the 6-litre jug.
• Consider water jug problem using a 10-liter jug and a 7-liter
jug, where the goal is to measure exactly 6 liters of water:
• A. State Representation and Initial State: Represent a state
of the problem using a tuple (x, y), where x represents the amount
of water in the 10-liter jug, y represents the amount in the 7-liter
jug, and the initial state is (0, 0)
• B. Goal Predicate: The goal state is (6, y), where 0 ≤ y ≤ 7
• Each action, such as filling, emptying, or pouring water,
transitions the system from one state to another.
• C. Operators that transition from one state to another:
• Fill 10-liter jug: (x, y) → (10, y) if x < 10
• Fill 7-liter jug: (x, y) → (x, 7) if y < 7
• Empty 10-liter jug: (x, y) → (0, y) if x > 0
• Empty 7-liter jug: (x, y) → (x, 0) if y > 0
• Pour water from 7-liter jug into 10-liter jug until it is full:
(x, y) → (10, y - (10 - x)) if x + y ≥ 10 and y > 0
• Pour water from 10-liter jug into 7-liter jug until it is full:
(x, y) → (x - (7 - y), 7) if x + y ≥ 7 and x > 0
• Pour all water from 7-liter jug into 10-liter jug: (x, y) →
(x + y, 0) if x + y ≤ 10 and y > 0
• Pour all water from 10-liter jug into 7-liter jug: (x, y) →
(0, x + y) if x + y ≤ 7 and x > 0
• Using state space search (brute force approach), the
following solution can be found:
• Start: (0, 0)
• Fill 10-liter jug: (10, 0)
• Pour from 10-liter jug to 7-liter jug: (3, 7)
• Empty 7-liter jug: (3, 0)
• Pour from 10-liter jug to 7-liter jug: (0, 3)
• Fill 10-liter jug: (10, 3)
• Pour from 10-liter jug to 7-liter jug: (6, 7)
• The final state (6, 7) is achieved - goal of measuring
exactly 6 liters of water in the 10-liter jug.
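A minimal brute-force BFS sketch for this 10-liter/7-liter instance; state generation follows the operators above:

```python
from collections import deque

def solve(cap_a=10, cap_b=7, target=6):
    """Brute-force BFS over jug states (x, y) until the first jug holds `target`."""
    start = (0, 0)
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if x == target:
            return path
        pour_ab = min(x, cap_b - y)          # amount pourable from jug a to jug b
        pour_ba = min(y, cap_a - x)          # amount pourable from jug b to jug a
        moves = [
            (cap_a, y), (x, cap_b),          # fill either jug
            (0, y), (x, 0),                  # empty either jug
            (x - pour_ab, y + pour_ab),      # pour a -> b
            (x + pour_ba, y - pour_ba),      # pour b -> a
        ]
        for nxt in moves:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None
```

Because BFS explores level by level, the returned path is a shortest action sequence ending with 6 liters in the 10-liter jug.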
• Applications of State Space Search
• 1. Puzzle Solving:
• State space search algorithms are often used to solve complex
puzzles like Rubik's Cube, sliding tile puzzles, water jug problem.
• These puzzles can be represented as a state space, and algorithms
can be applied to find solutions.
• 2. Pathfinding:
• In video games and robotics, state space search is used for
pathfinding – determining the shortest or most cost-effective
path from one point to another.
• Algorithms like A* and Dijkstra's are used on maps and game
levels to find paths, considering various terrains and obstacles.
• 3. Planning and Scheduling:
• AI planning involves creating a sequence of actions to reach a goal.
• State space search helps find the best sequence of actions in
logistics, production scheduling, or AI for playing strategy games.
• 4. Natural Language Processing:
• State space search can be used in parsing algorithms for
natural language processing, where states represent
possible interpretations of sentences & search is for most
likely meaning.
• 5. Machine Learning:
• In reinforcement learning, state space search helps in
exploring different states and actions to maximize the
reward function.
• 6. Problem Solving in AI:
• It is used in expert systems, theorem proving, and even in
medical diagnosis systems where different states represent
the presence or absence of symptoms and diseases.
• 7. Robotics:
• Robots use state space search to navigate and interact with
their environment.
• They need to understand current state, explore possible
next states, and choose best action to perform tasks.
• 8. Automated Reasoning:
• State space search is used in automated reasoning,
where a system needs to infer new knowledge from
the known facts. This is crucial in fields like law and
computer-aided verification.
• 9. Optimization Problems:
• Many optimization problems can be framed as state
space searches, where goal is to find the state that
maximizes or minimizes a particular function.
• 10. Quantum Computing:
• In quantum computing, state space search can be used
to explore possibilities of quantum states to solve
problems that are intractable on classical computers.
• Disadvantages of State Space Search
• 1. Complexity:
• The biggest challenge is the combinatorial explosion.
• As the complexity of the problem increases, the number of
states can grow exponentially, making the search space vast
and difficult to navigate within a reasonable time frame.
• 2. Memory Consumption:
• Storing the state space requires significant memory for
complex problems. This can be a limiting factor for systems
with limited resources.
• 3. Time-Consuming:
• Searching through a large state space can be very time-
consuming.
• Algorithms may take an impractical amount of time to find a
solution, or to determine that no solution exists.
• 4. Local Minima:
• Some search algorithms can get stuck in local minima –
suboptimal states that are better than their neighbors but
not the best overall. Escaping local minima can be difficult
and may require additional strategies.
• 5. Overhead:
• The overhead of managing the search, such as keeping track
of visited states and the paths taken, can be significant,
which adds to the computational burden.
• 6. Knowledge Representation:
• The effectiveness of a state space search is heavily dependent
on how well the problem is represented. A poor representation
can lead to inefficient searches and missed solutions.
• 7. Dynamic Environments:
• State space search is less effective in dynamic environments
where state can change independently of the searcher's
actions. This requires the search algorithm to be adaptive
and responsive to changes.
• 8. Heuristic Dependence:
• Many state space search algorithms rely on heuristics
(proceeding to a solution by trial and error or by rules
that are only loosely defined) to guide the search.
• Finding a good heuristic is often a problem-specific
challenge and can be difficult.
• 9. Scalability Issues:
• While state space search is suitable for many problems,
it may not scale well for problems with high-
dimensional spaces or those requiring real-time
solutions.
• 10. Incomplete Information:
• In cases where the state space cannot be fully known
or is partially observable, traditional state space
search techniques may not be applicable or need to
be significantly adapted.
• Types of search algorithms
• Based on the search problem, we can classify search
algorithms into uninformed search (blind search)
and informed search (heuristic search) algorithms.
• Breadth-First Search(BFS)
• It is a graph traversal algorithm that starts traversing the
graph from the root node and explores all the neighboring
nodes.
• Then, it selects the nearest node and explores all the
unexplored nodes.
• While using BFS for traversal, any node in the graph can be
considered as the root node.
• There are many ways to traverse the graph, but among
them, BFS is the most commonly used approach.
• It is an algorithm to search all the vertices of a tree or graph
data structure, typically implemented iteratively with a queue
rather than recursively.
• BFS puts every vertex of the graph into two categories -
visited and non-visited.
• It selects a single node in a graph and, after that, visits all
the nodes adjacent to the selected node.
• Applications of BFS
• BFS can be used to find the neighboring locations from a
given source location.
• In a peer-to-peer network, BFS algorithm can be used as a
traversal method to find all the neighboring nodes.
• BFS can be used in web crawlers to create web page indexes.
• It starts traversing from the source page and follows the links
associated with the page.
• Here, every web page is considered as a node in the graph.
• BFS is used to determine the shortest path and minimum
spanning tree.
• It can be used in the Ford-Fulkerson method to compute the
maximum flow in a flow network.
• In the example, there is a directed graph having 7 vertices.
The minimum path 'P' can be found by using BFS, starting
from Node A and ending at Node E.
• The algorithm uses two queues, namely QUEUE1 and QUEUE2.
QUEUE1 holds all the nodes that are to be processed, while
QUEUE2 holds all the nodes that are processed and deleted
from QUEUE1.
• Now, let's start examining the graph starting from Node A.
• Step 1 - First, add A to queue1 and NULL to queue2.
• QUEUE1 = {A}
• QUEUE2 = {NULL}
• Step 2 - Now, delete node A from queue1 and add it into
queue2. Insert all neighbors of node A to queue1.
• QUEUE1 = {B, D}
• QUEUE2 = {A}
• Step 3 - Now, delete node B from queue1 and add it into
queue2. Insert all neighbors of node B to queue1.
• QUEUE1 = {D, C, F}
• QUEUE2 = {A, B}
• Step 4 - Now, delete node D from queue1 and add it into
queue2. Insert all neighbors of node D to queue1.
• The only neighbor of Node D is F; since it is already inserted,
it will not be inserted again.
• QUEUE1 = {C, F}
• QUEUE2 = {A, B, D}
• Step 5 - Delete node C from queue1 and add it into queue2.
Insert all neighbors of node C to queue1.
• QUEUE1 = {F, E, G}
• QUEUE2 = {A, B, D, C}
• Step 6 - Delete node F from queue1 and add it into queue2.
• Insert all neighbors of node F to queue1. Since all the
neighbors of node F (say, A) are already present, we will not
insert them again.
• QUEUE1 = {E, G}
• QUEUE2 = {A, B, D, C, F}
• Step 7 - Delete node E from queue1 and add it into queue2.
• Since all of its neighbors have already been added (B,F), we
will not insert them again.
• Now, all the nodes are visited, and the target node E is
present in queue2.
• QUEUE1 = {G}
• QUEUE2 = {A, B, D, C, F, E}
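A minimal Python sketch of this two-queue BFS; the adjacency lists below are reconstructed from the steps above (the original graph figure is not reproduced), so treat them as an assumption:

```python
from collections import deque

# Directed 7-vertex graph inferred from the walkthrough above (assumption)
graph = {
    "A": ["B", "D"], "B": ["C", "F"], "C": ["E", "G"],
    "D": ["F"], "E": ["B", "F"], "F": ["A"], "G": [],
}

def bfs_two_queues(start):
    """QUEUE1 holds nodes to be processed; QUEUE2 collects processed nodes."""
    queue1 = deque([start])
    queue2 = []
    while queue1:
        node = queue1.popleft()           # delete from queue1 ...
        queue2.append(node)               # ... and add into queue2
        for nbr in graph[node]:
            # insert a neighbor only if it is in neither queue yet
            if nbr not in queue1 and nbr not in queue2:
                queue1.append(nbr)
    return queue2
```

Running `bfs_two_queues("A")` reproduces the processing order A, B, D, C, F, E, G from the steps above.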
• Suppose we have a 3-liter jug and a 5-liter jug, and we want
to measure exactly 4 liters of water.
• We'll use BFS to find the optimal solution.
• 1. Start with an Empty State: (0, 0)
• Both jugs are initially empty.
• 2. Apply All Possible Actions from the Current State: (0, 0)
• Fill the 3-liter jug: (3, 0)
• Fill the 5-liter jug: (0, 5)
• 3. Expand to the Next Level:
• We now have two new states to explore, (3, 0) and (0, 5).
• 4. Continue Expanding:
• From (3, 0), we can:
– Pour from the 3-liter jug to the 5-liter jug: (0, 3)
– Fill the 3-liter jug again: (3, 3)
• From (0, 5), we can:
– Pour from the 5-liter jug to the 3-liter jug: (3, 2)
– Empty the 5-liter jug: (0, 0)
• 5. Explore Further:
• Continue expanding the states at the next level:
– From (0, 3), we can pour to reach (3, 0).
– From (3, 3), we can reach (0, 3) or (3, 5).
• 6. Goal State Achieved:
• Proceeding further, we eventually reach the goal state (0, 4).
• 7. Backtrack to Find the Solution:
• To find the solution path, we backtrack from the goal state to
the initial state:
– (0, 4) -> (3, 1) -> (0, 1) -> (1, 0) -> (1, 5) -> (3, 3) -> (0, 3) -> (3, 0) -> (0, 0)
• This step-by-step demonstration shows how Breadth-First
Search systematically explores the state space to find the
optimal solution to the Water Jug Problem.
• It ensures that we find the shortest path to the goal state
while examining all possible actions.
• While BFS guarantees optimality, it may not always be the
most efficient choice for larger problem spaces.
• DFS (Depth First Search) algorithm
• It is a recursive algorithm to search all the vertices of a tree
data structure or a graph.
• Stack data structure can be used to implement the DFS
algorithm
• The depth-first search (DFS) algorithm starts with the initial
node of graph G and goes deeper until we find the goal node
or the node with no children.
• Process to implement the DFS traversal is given as follows -
• 1. First, create a stack that can hold the total number of vertices in the graph.
• 2. Choose any vertex as the starting point of the traversal,
and push that vertex onto the stack.
• 3. Push a non-visited vertex (adjacent to the vertex on the
top of the stack) onto the top of the stack.
• 4. Repeat step 3 until no non-visited vertex is reachable
from the vertex on the stack's top.
• 5. If no such vertex is left, backtrack by popping a vertex from the stack.
• Repeat steps 3, 4, and 5 until the stack is empty.
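The steps above can be sketched in Python; a minimal iterative DFS, assuming adjacency lists and alphabetical tie-breaking among unvisited neighbours:

```python
def dfs(graph, start):
    """Iterative DFS: keep exploring an unvisited neighbour of the stack's
    top vertex, backtracking (popping) whenever a dead end is reached."""
    stack = [start]
    visited = [start]          # doubles as the output sequence
    while stack:
        top = stack[-1]
        unvisited = [v for v in sorted(graph[top]) if v not in visited]
        if unvisited:
            stack.append(unvisited[0])   # go deeper via the first unvisited vertex
            visited.append(unvisited[0])
        else:
            stack.pop()                  # dead end: backtrack
    return visited
```

On the graph used in the walkthrough that follows, `dfs(graph, "A")` yields the output sequence A, B, S, C, D, E, H, G, F.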
Depth-first search on a binary tree. The unexplored region is shown in
light gray. Explored nodes with no descendants in the frontier are
removed from memory. Nodes at depth 3 have no successors and M is
the only goal node.
• Depth First Search – How it Works?
• Here stack is used for keeping track of all visited nodes.
• Initially push node A in to stack and add it in the output
sequence and mark node A as visited (yellow shading).
• At the top of the stack is node A and check all the
unvisited nodes of A.
• We have B and S.
• Let us choose the unvisited nodes in alphabetical order.
Hence node B is pushed into the stack.
• Now, add node B into the output sequence and shade
node B as it is already visited.
• Now no neighbouring unvisited node is available from
node B. Hence pop node B from the stack.
• Now A is at the top of the stack.
• S is the other adjacent unvisited node.
• Push S into the top of the stack and add S to the
output sequence and shade S as visited node.
• S is at the top of the stack and it has C and G as its
neighbouring unvisited nodes.
• Following alphabetical order, push C in to stack.
• Store C into the output sequence and mark C as visited.
• C has 3 neighbouring unvisited nodes D, E and F.
• Alphabetically select node D, push into the top of stack.
• Add D into the output sequence and mark D as visited.
• Now D is at the top of the stack and it has only one visited
node, namely, C.
• There is no unvisited node and hence pop D from the
stack. Now C will be at the top of the stack.
• C has 2 unvisited adjacent nodes E and F.
• Choose alphabetically node E, push it on the top of stack
and add E to the output list and shade E as visited node.
• E has one unvisited, adjacent node H and push it in to the
top of the stack and add H to the output list and shade H
as visited node.
• H has one unvisited neighbour node G and push it
into the top of the stack and add G to the output
sequence and mark G as visited node.
• G is at the top of the stack, and F is the unvisited
node of G and hence push F into the top of the
stack and mark F as visited.
• Now F is at the top of the stack and has no adjacent
unvisited nodes and hence pop it out of stack.
• Now G is at the top of the stack and pop it up. Now H
will be at the top of the stack.

• H does not have any unvisited nodes and hence pop it up


from the stack.
Now node E will be at the top of the stack as it does not have
any unvisited adjacent nodes, remove it from stack.
Now node C will be at the top of the stack and it does not
have any unvisited nodes and hence pop it up from the stack.
Now node S will be at the top of the stack and it
does not have any unvisited adjacent nodes and
hence pop it up from the stack.
Then node A will be at the top of the stack.
It does not have any unvisited neighbouring nodes
and hence can be popped up from the stack.
• Finally the stack becomes empty.
• The DFS algorithm stops.
• The output sequence is ABSCDEHGF.
Pseudo code for water jug problem using stack
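A minimal Python sketch of such a stack-based (depth-first) water jug solver; the 4- and 3-litre capacities and the goal of 2 litres follow the earlier problem, everything else is an illustrative assumption:

```python
def dfs_water_jug(cap=(4, 3), target=2):
    """Depth-first water jug search using an explicit stack of (state, path)."""
    stack = [((0, 0), [(0, 0)])]
    visited = {(0, 0)}
    while stack:
        (x, y), path = stack.pop()       # LIFO: explore the newest state first
        if target in (x, y):             # goal: either jug holds `target` litres
            return path
        pour_ab = min(x, cap[1] - y)     # amount pourable from jug a to jug b
        pour_ba = min(y, cap[0] - x)     # amount pourable from jug b to jug a
        for nxt in [(cap[0], y), (x, cap[1]), (0, y), (x, 0),
                    (x - pour_ab, y + pour_ab), (x + pour_ba, y - pour_ba)]:
            if nxt not in visited:
                visited.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None
```

Unlike BFS, the path this returns is simply the first one DFS stumbles into, not necessarily the shortest.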
• In the example, the directed graph has 7 vertices.
• Now, let's start examining the graph starting from Node H.
• Step 1 - First, push H onto the stack.
• STACK: H
• Step 2 - POP the top element from the stack, i.e., H, and print it. Now,
PUSH all the neighbors of H onto the stack that are in ready state.
• Print: H
• STACK: A
• Step 3 - POP the top element from the stack, i.e., A, and print it. Now,
PUSH all the neighbors of A onto the stack that are in ready state.
• Print: A
• STACK: B, D
• Step 4 - POP the top element from the stack, i.e., D, and print it. Now,
PUSH all the neighbors of D onto the stack that are in ready state.
• Print: D
• STACK: B, F
• Step 5 - POP the top element from the stack, i.e., F, and print it. Now,
PUSH all the neighbors of F onto the stack that are in ready state.
• Print: F
• STACK: B (Note A is already removed)
• Step 6 - POP the top element from the stack, i.e., B, and print it. Now,
PUSH all the neighbors of B onto the stack that are in ready state.
• Print: B
• STACK: C (Note F is already removed)
• Step 7 - POP the top element from stack, i.e., C, and print it.
• PUSH all neighbors of C onto stack that are in ready state.
Print: C
• STACK: E, G (H is already removed)
• Step 8 - POP the top element from the stack, i.e., G and
PUSH all neighbors of G onto the stack that are in ready state.
• Print: G
• STACK: E
• Step 9 - POP the top element from the stack, i.e., E and PUSH
all the neighbors of E onto the stack that are in ready state.
• Print: E
• STACK: (B and F are already removed)
• Now all graph nodes have been traversed, and stack is empty
• We have 2 jugs with capacities of 5 and 3 liters, and we want to
measure out exactly 4 liters of water. The initial state of both jugs
is empty. The allowed operations are:
• Fill a jug.
• Empty a jug.
• Pour water from one jug to another until the pouring jug is
empty or the receiving jug is full.
• The goal is to find a sequence of these operations to measure
out exactly 4 liters of water.
• DFS explores one branch of the search tree until it reaches a goal
state or exhausts all possibilities before backtracking.
• Fill the 5-liter jug. (5, 0)
• Pour water from the 5-liter jug into the 3-liter jug. (2, 3)
• Empty the 3-liter jug. (2, 0)
• Pour water from the 5-liter jug into the 3-liter jug. (0, 2)
• Fill the 5-liter jug. (5, 2)
• Pour water from the 5-liter jug into the 3-liter jug until the 3-liter
jug is full. (4, 3)
• We now have exactly 4 liters of water in the 5-liter jug.
• Let the capacity of the first jug be 5 litres and the second jug 3 litres.
• The goal state is to have 2 litres of water in either of the jugs.
• Allowed operations are:
• Filling (5 or 3), emptying (5 or 3) and swapping, i.e. pouring (5->3, 3->5)
• Attempt all 6 possible operations at each state.
• The possible goal states are: (2,3), (2,2), (2,1), (2,0) and
(0,2), (1,2), (2,2), (3,2), (4,2), (5,2)
• Note: In BFS, the steps do not follow any specific sequence of
operations. See the results of BFS below.
• In DFS, as we follow a specific sequence of operations (fill,
empty, swap), the states repeat.
• In the water jug problem, we expand our tree until we reach the
goal state. For this kind of problem DFS is not good.
• When we change the sequence of operations (swap, fill,
empty) we may get immediate results: do swapping as the first,
filling as the second, and emptying the water in the jug as the
third operation.
• Depth Limited Search
• Failure of depth-first search in infinite state spaces can
be alleviated by supplying a predetermined depth
limit l.
• That is, nodes at depth l are treated as if they have no
successors.
• This approach is called depth-limited search.
• It also introduces an additional source of incompleteness
if we choose l < d, that is, if the shallowest goal is beyond
the depth limit.
• Depth limits can be decided based on knowledge of
the problem.
• For example, suppose that on the map of Romania there are 20 cities.
• Then, if there is a solution, it must be of length 19 at the
longest, so l = 19 is a possible choice.
• Iterative deepening search (or iterative deepening
depth-first search) is a strategy, used in
combination with depth-first tree search, that
finds best depth limit.
• It does this by gradually increasing the limit—first
0, then 1, then 2, and so on—until a goal is found.
• This will occur when the depth limit reaches d, the
depth of the shallowest goal node.
• Iterative deepening combines the benefits of
depth-first and breadth-first search.
• Its memory requirements are modest.
• Like breadth-first search, it is complete when the branching
factor is finite, and it is optimal when the path cost is a
nondecreasing function of depth.
• The algorithm is explained in Figure below:
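A minimal Python sketch of depth-limited search and the iterative-deepening loop around it, assuming a graph given as adjacency lists:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that treats nodes at the depth limit as if they had no successors."""
    path = (path or []) + [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # cutoff reached: do not expand further
    for child in graph.get(node, []):
        result = depth_limited_search(graph, child, goal, limit - 1, path)
        if result:
            return result
    return None

def iterative_deepening_search(graph, start, goal, max_depth=50):
    """Run depth-limited search with limit 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result:
            return result
    return None
```

The repeated shallow searches are cheap relative to the final one, which is why iterative deepening keeps BFS-like completeness with DFS-like memory use.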
• Uniform Cost Search Algorithm (UCS)
• UCS is an informed search algorithm that explores a
graph by gradually expanding nodes starting from the
initial node and moving towards the goal node while
considering cost associated with each edge or step.
• This algorithm is used when step costs are not same,
but we need optimal solution to the goal state.
• In such cases, we use Uniform Cost Search to find the
goal and the path, including the cumulative cost to
expand each node from the root node to the goal
node.
• It goes neither purely depth-wise nor breadth-wise;
sometimes it goes deeper and sometimes broader,
depending on the cost values.
• It searches for the next node with the lowest cost,
and in the case of the same path cost, it considers
lexicographical order.
• Suppose we have the following graph with Source = 1 and
Destination = 4.
• To go from node 1 to node 4 we have 4 possible paths:
• 1→2→4, with cost equal to 17
• 1→2→3→4, with cost equal to 10
• 1→3→4, with cost equal to 42
• 1→3→2→4, with cost equal to 51
• When we apply the uniform cost search algorithm, we will get
the minimal cost to go from source node S = 1 to destination
node D = 4.
• In this example, the lowest cost is 10; therefore, the path that
the algorithm will detect between nodes S and D is 1→2→3→4.
In Fig. above, the path chosen is SADG.
Advantages
Uniform cost search is an optimal search method because at every
state, the path with the least cost is chosen.
Disadvantages
It does not care about the number of steps involved in the search
problem; it is only concerned with total path cost.
This algorithm may get stuck in an infinite loop, e.g. on zero-cost cycles.
Consider the following graph. Let the starting
node be A and the goal node be G.
Hence we get:
Actual path => A -- D -- F -- G , with Path Cost = 8.
Traversed path => A -- D -- B -- E -- F -- C -- G
• Consider a graph with following nodes and edges, with their weights:
• Nodes: A, B, C, D, E Edges:
• (A->B, 4), (A->C, 2), (B->D, 5),
• (C->D, 1), (C->E, 7), (D->E, 3)
• Here numbers after comma
• represent weights of the edges.
• Let's say we want to find lowest
• cost path from Node A to Node E
• Start with the initial node A.
• Expand the node with the lowest cost. Initially, it's A.
• Expand the neighboring nodes (B and C) and update their costs.
• Continue expanding node with lowest cost, updating costs along way.
• Repeat until goal node E is reached or all nodes have been explored.
• Path with lowest cost found by the UCS algorithm will be solution.
• Algorithm ensures that it explores paths in increasing order of cost.
• Lowest cost path from A to E A-C-D-E is with a cost of 6 (2+1+3).
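A minimal Python sketch of UCS on this graph using a priority queue (`heapq`); ties on cost break toward the lexicographically smaller path:

```python
import heapq

# (A->B,4), (A->C,2), (B->D,5), (C->D,1), (C->E,7), (D->E,3)
edges = {
    "A": [("B", 4), ("C", 2)], "B": [("D", 5)],
    "C": [("D", 1), ("E", 7)], "D": [("E", 3)], "E": [],
}

def uniform_cost_search(start, goal):
    """Always expand the cheapest frontier path first."""
    frontier = [(0, [start])]            # priority queue of (cumulative cost, path)
    best = {}                            # cheapest cost at which a node was expanded
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue                     # already expanded via a cheaper path
        best[node] = cost
        for nbr, w in edges[node]:
            heapq.heappush(frontier, (cost + w, path + [nbr]))
    return None
```

It returns cost 6 via A-C-D-E, matching the hand computation above.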
• Insert the root into the queue
• While the queue is not empty:
• Dequeue the element with the maximum priority from the
queue (if priorities are the same, the alphabetically smaller
path is chosen)
• If the path ends in the goal state, print the path and exit
• Else, insert all the children of the dequeued element,
with the cumulative costs as priority
• Uniform Cost Search uses a priority queue.
• Uniform Cost Search gives minimum cumulative cost the
maximum priority.
• Each element of the priority queue is written as [path,
cumulative cost].
• Initialization: { [S, 0] }
• Iteration 1: { [S->A, 1], [S->G, 12] }
• Iteration 2: { [S->A->C, 2], [S->A->B, 4], [S->G, 12] }
• Iteration 3: { [S->A->C->D, 3], [S->A->B, 4], [S->A->C->G, 4], [S->G, 12] }
• Iteration 4: { [S->A->B, 4], [S->A->C->G, 4], [S->A->C->D->G, 6], [S->G, 12] }
• Iteration 5: { [S->A->C->G, 4], [S->A->C->D->G, 6], [S->A->B->D, 7], [S->G, 12] }
• Iteration 6: gives the final output as S->A->C->G.
• Greedy best-first search is an informed search algorithm
where evaluation function = heuristic function,
disregarding edge weights in a weighted graph
• To search for a goal node it expands node that is closest to
the goal as determined by the heuristic function.
• However, solution from a greedy best-first search may not
be optimal since a shorter path may exist.
• Here, search cost is at a minimum since solution is found
without expanding a node that is not on solution path.
• This algorithm is minimal, but not complete, since it can
lead to a dead end.
• It’s called “Greedy” because at each step it tries to get as
close to the goal as it can.
• Heuristic Function
• A heuristic function, h(x), evaluates successive node based on how
close it is to target node. It chooses the immediate low-cost option.
It does not necessarily find the shortest path to the goal.
• Suppose a bot is trying to move from point A to point B. In greedy best-first search, the bot will choose to move to the position that brings it closest to the goal, disregarding whether another position would yield a shorter overall path.
• If there is an obstruction, it will re-evaluate the previously seen nodes with the shortest distance to the goal, and continue choosing the node that is closest to the goal.
• The Algorithm
• Initialize a tree with root node being the start node in the open list.
• If the open list is empty, return a failure, otherwise, add current node
to the closed list.
• Remove node with lowest h(x) value from open list for exploration.
• If a child node is target, return a success. Otherwise, if node has not
been in either open or closed list, add it to open list for exploration.
• Consider finding the path from P to S in the following graph:
• Here the cost is measured using a heuristic value (shown in brackets), i.e., how close a node is to the target. P is connected to A, C and R.
• C has the lowest cost of 6. Therefore, the search will continue as:
Node C is connected with nodes M, R and U.
U has the lowest cost compared to M and R, so the search will continue by exploring U.
Node U is connected with nodes N and S.
Finally, S has a heuristic value of 0 since it is the target node.
The total cost for the path (P -> C -> U -> S) evaluates to 11.
The problem with greedy best-first search is that the path (P -> R -> E -> S), with a cost of 10, is lower than (P -> C -> U -> S). Greedy best-first search ignored this path because it does not consider edge weights.
Simplified road map of Romania
• Let us see how this works for route-finding problems in
Romania; we use straight line distance heuristic, call hSLD.
• If the goal is Bucharest, we need to know the straight-line
distances to Bucharest, which are shown in Figure below:
Stages in a greedy best-first tree search for Bucharest with the straight-
line distance heuristic hSLD. Nodes are labeled with their h-values.
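The greedy search over the Romania map can be sketched as follows (a minimal sketch; the adjacency list is a subset of the standard simplified map, and the hSLD values are the usual straight-line distances to Bucharest):

```python
import heapq

# Subset of the simplified Romania road map
roads = {
    'Arad': ['Zerind', 'Sibiu', 'Timisoara'],
    'Sibiu': ['Arad', 'Oradea', 'Fagaras', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
    'Rimnicu Vilcea': ['Sibiu', 'Pitesti'],
    'Pitesti': ['Rimnicu Vilcea', 'Bucharest'],
    'Zerind': ['Arad'], 'Timisoara': ['Arad'], 'Oradea': ['Sibiu'],
    'Bucharest': [],
}
# Straight-line distances to Bucharest
h_sld = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
         'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
         'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    # Always expand the node with the smallest h(n); edge weights are ignored.
    frontier = [(h_sld[start], [start])]
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in roads[node]:
            if nbr not in visited:
                heapq.heappush(frontier, (h_sld[nbr], path + [nbr]))
    return None

print(greedy_best_first('Arad', 'Bucharest'))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```

Greedy best-first reaches Bucharest via Fagaras (total road distance 140 + 99 + 211 = 450), even though the route through Rimnicu Vilcea and Pitesti is shorter (418) — the same non-optimality seen in the P-to-S example.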
• Dijkstra’s Algorithm
• Given a weighted graph and a source vertex, find the shortest paths from the source to all the other vertices in the given graph.
• Input: src=0 Output: 0 4 12 19 21 11 9 8 14
Explanation: Distance from 0 to 1 = 4.
Minimum distance from 0 to 2 = 12.
0->1->2
Minimum distance from 0 to 3 = 19.
0->1->2->3
Minimum distance from 0 to 4 = 21.
0->7->6->5->4
Minimum distance from 0 to 5 = 11.
0->7->6->5
Minimum distance from 0 to 6 = 9.
0->7->6
Minimum distance from 0 to 7 = 8. 0->7
Minimum distance from 0 to 8 = 14.
0->1->2->8
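A sketch of Dijkstra's algorithm with a min-heap; the edge list below is the commonly used 9-vertex example graph that matches the distances listed above (an assumption on our part, since the figure itself is not reproduced here):

```python
import heapq

def dijkstra(adj, src):
    n = len(adj)
    dist = [float('inf')] * n
    dist[src] = 0
    pq = [(0, src)]                      # (distance so far, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                  # stale queue entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:          # relax edge u-v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Undirected edges (u, v, weight) of the assumed 9-vertex example graph
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]
adj = [[] for _ in range(9)]
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

print(dijkstra(adj, 0))   # [0, 4, 12, 19, 21, 11, 9, 8, 14]
```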
• A* Search Algorithm
• A* Search algorithm is an alternative to Dijkstra’s
Shortest Path algorithm.
• It is used to find shortest path between two nodes of
a weighted graph.
• A* Search algorithm performs better than Dijkstra’s
algorithm as it uses heuristics.
• Heuristic we will use is the straight line distance
between a node and the end node (Z).
• This distance will always be the shortest distance
between two points, a distance that cannot be reduced
whatever path we follow to reach node Z.
• A* tries to look for a better path by using a heuristic
function, which gives priority to nodes that are
supposed to be better than others while Dijkstra's just
explore all possible ways.
• Select as the current node the node for which the total distance is minimum (here, C).
• Set the current node (A) to “visited” and use the
unvisited node with the smallest total distance as
the current node (e.g. in this case: Node C).
• Check all unvisited nodes connected to the current
node and add the distance from A to C to all
distances from the connected nodes.
• Replace their values only if the new distance is
lower than the previous one.
• C -> D: 3 + 7 = 10 < ∞ – Change Node D
C -> E: 3 + 10 = 13 < ∞ – Change Node E
• The next current node (unvisited node with the
shortest total distance) could be either node B or
node D. Let’s use node B.
Check all unvisited nodes connected to the current node (B) and add
the distance from A to B to all distances from the connected nodes.
Replace their values only if the new distance is lower than the
previous one. B -> E: 4 + 12 = 16 > 13 – Do not change Node E
B -> F: 4 + 5 = 9 < ∞ – Change Node F
Next current node (unvisited node with shortest total distance) is D.
• We found a path from A to Z, but is it the shortest one?
• Check all unvisited nodes.
• In this e.g, there is only one unvisited node (F).
• But its total distance (20) is greater than the
distance we have from A to Z (17) so there is no
need to visit node F as it will not lead to a shorter
path.
• We found the shortest path from A to Z.
• Read path from Z to A using previous node
column:
Z>E>D>C>A
• So the Shortest Path is:
A – C – D – E – Z with a length of 17
Solution :
SDEFG
Solve: Apply the steps of the A* Search algorithm to find the
shortest path from A to Z using the following graph:
Solve : Apply the steps of the A* Search
algorithm to find the shortest path from A
to Z using the following graph:
Simplified road map of Romania
Solve:
Stages in an A* search for Bucharest. Nodes are labeled with f = g + h. The h values are the straight-line distances to Bucharest taken from the figure given before.
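A minimal A* sketch for the Bucharest route, assuming the standard road distances and hSLD values of the simplified map:

```python
import heapq

# Road distances (subset of the simplified Romania map)
roads = {
    'Arad': [('Zerind', 75), ('Sibiu', 140), ('Timisoara', 118)],
    'Sibiu': [('Arad', 140), ('Oradea', 151), ('Fagaras', 99),
              ('Rimnicu Vilcea', 80)],
    'Fagaras': [('Sibiu', 99), ('Bucharest', 211)],
    'Rimnicu Vilcea': [('Sibiu', 80), ('Pitesti', 97)],
    'Pitesti': [('Rimnicu Vilcea', 97), ('Bucharest', 101)],
    'Zerind': [('Arad', 75)], 'Timisoara': [('Arad', 118)],
    'Oradea': [('Sibiu', 151)], 'Bucharest': [],
}
# Straight-line distances to Bucharest
h = {'Arad': 366, 'Zerind': 374, 'Timisoara': 329, 'Sibiu': 253,
     'Oradea': 380, 'Fagaras': 176, 'Rimnicu Vilcea': 193,
     'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    # Frontier ordered by f = g + h
    frontier = [(h[start], 0, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for nbr, w in roads[node]:
            g2 = g + w
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, path + [nbr]))
    return None

print(a_star('Arad', 'Bucharest'))
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

With f = g + h, A* returns the optimal route through Rimnicu Vilcea and Pitesti with cost 418, unlike greedy best-first, which commits to the Fagaras route.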
• Generate and Test Algorithm
• Generate a possible solution. This means generating a possible
point in the problem space or generating a path from a start state.
• Test to see, if this is actually a solution by comparing the chosen
point or the endpoint of the chosen path to the set of acceptable
goal state. If solution is found, quit. Otherwise repeat the process
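The generate-and-test loop can be sketched in a few lines; the digit-puzzle goal test below is our own toy example, not one from the lecture:

```python
from itertools import product

def generate_and_test(generator, test):
    # Generate candidate solutions one by one; return the first that
    # passes the goal test, or None if the space is exhausted.
    for candidate in generator:
        if test(candidate):
            return candidate
    return None

# Toy problem: find digits (a, b, c) whose sum is 15 and product is 120.
candidates = product(range(10), repeat=3)
goal = lambda t: sum(t) == 15 and t[0] * t[1] * t[2] == 120
print(generate_and_test(candidates, goal))   # (4, 5, 6)
```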
• Simple Hill Climbing Algorithm
• Step 1: Start with an initial state.
• Step 2: Check if initial state is the goal. If so, return success and exit.
• Step 3: Enter a loop to search for a better state continuously.
• Select a neighboring state within the loop by applying an operator to
the current state. Evaluate this new state:
– If it’s the goal state, return success and exit.
– If it’s better than current state, update current state to new state.
– If it’s not better, discard it and continue the loop.
• Step 4: End process if no better state is found & goal isn’t achieved.
• Steepest-Ascent Hill Climbing
• This variant assesses all neighboring solutions, choosing the
one with the most significant improvement.
• Algorithm:
• Step 1: Evaluate the initial state. If it’s the goal, we can return
success; otherwise, set it as the current state.
• Step 2: Repeat until a solution is found or no further
improvement is possible.
• Initialize “BEST_SUCCESSOR” as the best potential
improvement over the current state.
• For each operator, apply to current state, evaluate new state.
– If it’s the goal, return success.
– If better than “BEST_SUCCESSOR,” update “BEST_SUCCESSOR” to
this new state.
• If “BEST_SUCCESSOR” is an improvement, update current
state.
• Step 3: Stop the algorithm if no solution is found or no further improvement is possible.
• Both the Hill climbing algorithms may fail to find a solution, if the
program reaches either (i) local maximum (ii) plateau or (iii) ridge.
• X-axis: denotes the state space i.e states or configuration our
algorithm may reach. Y-axis: denotes the values of objective
function corresponding to a particular state.
• The best solution will be a state space where the objective function
has a maximum value(global maximum).
• Different regions in the State Space Diagram:
• Local maximum: It is a state which is better than its
neighboring state however there exists a state which is
better than it (global maximum). This state is better because
value of the objective function is higher than its
neighbors. It is also called foothill.
• To overcome the local maximum problem: Utilize
backtracking technique. Maintain a list of visited states. If
the search reaches an undesirable state, it can backtrack to
the previous configuration and explore a new path.
• Global maximum: It is the best possible state in the state
space diagram. This is because, at this stage, the objective
function has the highest value.
• Plateau/flat local maximum: It is a flat region of state space
where neighboring states have the same value. Hence it is
difficult to find the best direction to move further.
• To overcome plateaus: Make a big jump. Randomly select a
state far away from the current state. Chances are that we
will land in a non-plateau region.
• Ridge: It is a region that is higher than its neighbors but itself has a
slope. It is a special kind of local maximum. Because it is very
difficult to traverse a ridge by a single move.
• To overcome Ridge: In this kind of obstacle, use two or more rules
before testing. It implies moving in several directions at once.
• Current state: The region of the state space diagram where we are
currently present during the search.
• Shoulder: It is a plateau that has an uphill edge.
Advantages:
• Simplicity: The algorithm is straightforward to understand and implement.
• Memory Efficiency: It’s memory-efficient, maintaining only the current state’s data.
• Rapid Convergence: It often converges swiftly to a solution, which is beneficial in scenarios where time is critical.
Disadvantages:
• Susceptibility to Local Optima: The algorithm can become stuck at locally optimal solutions that aren’t the best overall.
• Limited Exploration: Its tendency to focus on the immediate vicinity limits its exploration, potentially overlooking globally optimal solutions.
• Dependence on Initial State: The quality and effectiveness of the solution found heavily depend on the starting point.
• Hill climbing is a local search algorithm that continuously
moves towards direction of increasing elevation to find
peak of mountain. It is a greedy algorithm as it makes best
choice at each step, hoping to reach the global maximum.
• Initialize: Start with an initial solution.
• Evaluate: Evaluate the current solution.
• Generate neighbors: Create neighboring solutions by making
small modifications to the current solution.
• Select the best neighbor: Choose the neighbor with the
highest evaluation value.
• Update: Move to the selected neighbor and repeat the
process until a peak is reached or no better neighbor is found.
• Numerical Example:
• Consider a simple example of finding the maximum value in a
one-dimensional space.
• Suppose we have the function: f(x) = -x^2 + 4x
• We want to find value of x that maximizes this function using
hill climbing.
• Steps:
• Initialize: Start with an initial solution, let's say x = 2.
• Evaluate: Evaluate the function at x = 2:
• f(2) = -2^2 + 4*2 = -4 + 8 = 4.
• Generate neighbors: Create neighboring solutions by slightly
modifying x. Let's consider x + 1 and x - 1.
• Select best neighbor: Evaluate the function at x + 1 and x - 1.
• If either of them has a higher function value than the
current solution, move to that neighbor. If not, stop.
• Update: Evaluating the neighbors gives f(3) = -9 + 12 = 3 and f(1) = -1 + 4 = 3. Neither is greater than f(2) = 4, so the algorithm stops.
• In general, these steps are repeated until we reach a peak, i.e., until no neighbor is better.
• In this case, x = 2 is the global maximum of the function (the vertex of the parabola), so hill climbing has found the best solution.
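A minimal sketch of this climb (starting from x = 0 so that several uphill moves are visible; step size 1):

```python
def f(x):
    # Objective function from the example
    return -x**2 + 4*x

def hill_climb(x, step=1):
    # Move to the better neighbor (x - step or x + step) until no
    # neighbor improves on the current value.
    while True:
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):        # no improvement: we are at a peak
            return x
        x = best

peak = hill_climb(0)
print(peak, f(peak))   # 2 4
```

From x = 0 the algorithm moves to 1 (f = 3), then to 2 (f = 4), and stops there because both neighbors evaluate to 3.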
Solution : Using Heuristic Function
• Local: Add one point for every block that is resting on the thing it is supposed to be resting on. Subtract one point for every block that is sitting on the wrong thing.
• Initial state (stack, top to bottom): A, H, G, F, E, D, C, B. Goal state (stack, top to bottom): H, G, F, E, D, C, B, A.
• Initial State: Score = +4 (one point added for each of C, D, E, F, G and H = 6; subtract one point for each of A and B = 2; so 6 – 2 = 4).
• Goal State: Score = +8.
• There is only one move from the initial state, namely move block A onto the table.
• Current State (A on the table; stack H, G, F, E, D, C, B): Score = +6.
• Next state: there are 3 alternative moves, with scores:
a) Score = +4   b) Score = +4   c) Score = +4
• Here, hill climbing will halt, because all 3 possible moves have a lower score than the current state.
• The process has reached a local maximum.
• To solve this, we define a new heuristic function.
• Global: For each block that has the correct support
structure ( i.e, the complete structure underneath it is
exactly as it should be), add one point for every block
in the support structure.
• For each block that has incorrect support structure,
subtract one point for every block in the existing
support structure.
• Using this function, the goal state score = +28 (1 for B, 2 for C, ..., 7 for H).
• The initial state has a score of -28.
• Moving A to the table yields a state with score = -21.
• The next 3 states produced will have the scores: a) = -28, b) = -16, c) = -15.
• This time the steepest hill climbing will choose move
c) which is the correct one.
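The global heuristic can be checked mechanically. In the sketch below (our own encoding), stacks are lists of blocks from bottom to top, and the scores reproduce the +28, -28 and -21 computed above:

```python
def global_score(state, goal):
    # Global heuristic: for each block, add the size of its support
    # structure if everything underneath it matches the goal exactly,
    # otherwise subtract the size of its existing support structure.
    goal_support = {}
    for stack in goal:
        for i, block in enumerate(stack):
            goal_support[block] = stack[:i]
    score = 0
    for stack in state:
        for i, block in enumerate(stack):
            support = stack[:i]
            if support == goal_support[block]:
                score += len(support)
            else:
                score -= len(support)
    return score

goal       = [list("ABCDEFGH")]             # A on the table ... H on top
initial    = [list("BCDEFGH") + ["A"]]      # B on the table ... A on top
after_move = [list("BCDEFGH"), ["A"]]       # A moved onto the table

print(global_score(goal, goal))        # 28
print(global_score(initial, goal))     # -28
print(global_score(after_move, goal))  # -21
```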
Two player games – Minimax Algorithm in AI
• It is a recursive or backtracking algorithm used in decision-
making and game theory & it provides an optimal move for
the player assuming that opponent is also playing optimally.
• Mini-Max algorithm uses recursion to search through the
game-tree and this algorithm is used for game playing in AI,
such as Chess, Checkers, tic-tac-toe, go, etc.
• Both the players fight in such a way that the opponent player
gets the minimum benefit while they get maximum benefit.
• Both Players of game are opponent of each other, where MAX
will select maximized value & MIN will select minimized
value (just like one chooses white and black in chess)
• The minimax algorithm proceeds all the way down to the
terminal node of the tree, then backtrack the tree as the
recursion (depth first search).
• Working of Min-Max Algorithm:
• Take an e.g of game-tree representing two-player game.
• There are 2 players - Maximizer and Minimizer.
• Maximizer will try to get the Maximum possible score, and
Minimizer will try to get the minimum possible score.
• As this algorithm applies DFS, we have to go all the way
through the leaves to reach terminal nodes.
• At terminal node, terminal values are given so we will
compare those value, backtrack tree until initial state occurs
• Step-1: Algorithm generates entire game-tree and apply
utility function to get utility values for the terminal states.
• In the tree diagram, let's take A is the initial state of tree.
• Suppose the maximizer takes the first turn, which has a worst-case initial value of -∞, and the minimizer takes the next turn, which has a worst-case initial value of +∞.
Step 2: Now, first we find the utility value for the Maximizer. Its initial value is -∞, so we compare each terminal value with the Maximizer's initial value and keep the higher one. It will find the maximum among them all.
• For node D: max(-∞, -1) = -1, then max(-1, 4) = 4
• For node E: max(-∞, 2) = 2, then max(2, 6) = 6
• For node F: max(-∞, -3) = -3, then max(-3, -5) = -3
• For node G: max(-∞, 0) = 0, then max(0, 7) = 7
• Step 3: In next step, it's a turn for minimizer, so it will
compare all nodes value with +∞, and will find 3rd layer
node values.
• For node B= min(4, 6) = 4
• For node C= min(-3, 7) = -3
• Step 4: Now it's a turn for Maximizer, and it will again
choose the maximum of all nodes value and find the
maximum value for the root node. In this game tree, there
are only 4 layers, hence we reach immediately to the root
node, but in real games, there will be more than 4 layers.
• For node A max(4, -3)= 4
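The whole computation above fits in a short recursive sketch (leaves hold the terminal values from the example tree):

```python
def minimax(node, is_max):
    # Leaves are numbers; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Game tree from the example: A -> B, C; B -> D, E; C -> F, G;
# terminal values D = [-1, 4], E = [2, 6], F = [-3, -5], G = [0, 7].
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))   # 4
```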
• Properties of Mini-Max algorithm:
• Min-Max algorithm is Complete. It will definitely find a
solution (if exist), in the finite search tree.
• Optimal- Min-Max algorithm is optimal if both opponents are
playing optimally.
• As it performs DFS on the game-tree, the time complexity of the Min-Max algorithm is O(b^m), where b is the branching factor of the game-tree and m is the maximum depth of the tree.
• The space complexity of the Mini-max algorithm is similar to DFS, which is O(b·m).
• Limitation of the minimax Algorithm:
• Main drawback of minimax algorithm is that it gets really slow
for complex games such as Chess, go, etc.
• This type of games has a huge branching factor, and the player
has lots of choices to decide.
• Alpha-beta pruning is a modified version of minimax algo.
• It is an optimization technique for the minimax algorithm.
• In the minimax search algorithm, number of game states it
has to examine are exponential in depth of the tree.
• We cannot eliminate exponent, but we can cut it to half.
• Hence there is a technique in which without checking each
node of the game tree we can compute the correct minimax
decision, and this technique is called pruning.
• This involves 2 threshold parameter Alpha and beta for
future expansion, so it is called alpha-beta pruning.
• It is also called as Alpha-Beta Algorithm.
• Alpha-beta pruning can be applied at any depth of a tree, and
sometimes it not only prune tree leaves but entire sub-tree.
• The two-parameter can be defined as:
Alpha: The best (highest-value) choice we have found so far at any
point along the path of Maximizer. Initial value of alpha is -∞.
• Beta: The best (lowest-value) choice we have found so far at
any point along the path of Minimizer.
• The initial value of beta is +∞.
• The Alpha-beta pruning removes all the nodes which are not
really affecting the final decision.
• Hence by pruning these nodes, it makes algorithm fast.
• Condition for Alpha-beta pruning:
• Main condition required for alpha-beta pruning is: α >= β
• Key points about alpha-beta pruning:
• The Max player will only update the value of alpha.
• The Min player will only update the value of beta.
• While backtracking the tree, node values will be passed to
upper nodes instead of values of alpha and beta.
• We will only pass alpha, beta values to the child nodes.
• Let's take an example of two-player search tree to
understand the working of Alpha-beta pruning
• Step 1: At the first step, Max player will start first move
from node A where α= -∞ and β= +∞, these value of alpha
and beta are passed down to child node B where again α=
-∞ and β= +∞, Node B passes same value to its child D.
• Step 2: At node D, the value of α will be calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3, and max(2, 3) = 3 will be the value of α at node D; the node value will also be 3.
• Step 3: Now algorithm backtrack to node B, where the value of β
will change as this is a turn of Min, now β= +∞, will compare with
the available subsequent nodes value, i.e. min (∞, 3) = 3, hence at
node B now α= -∞, and β= 3.
• In the next step, algorithm traverse the next successor of Node B
which is node E, and the values of α= -∞, and β= 3 will be passed.
• Step 4: At node E, Max will take its turn, and the value of alpha will
change. Current value of alpha will be compared with 5, so max (-∞,
5) = 5, hence at node E, α= 5 and β= 3, where α >= β, so the right
successor of E will be pruned, and algorithm will not traverse it, and
the value at node E will be 5.
• Step 5: At next step, algorithm again backtrack the tree, from node B
to node A. At node A, the value of alpha will be changed. The
maximum available value is 3 as max (-∞, 3)= 3, and β= +∞, these
two values are passed to right successor of A which is Node C.
• At node C, α=3 and β= +∞, same values will be passed on to node F.
• Step 6: At node F, again the value of α will be compared with left
child which is 0, and max(3,0)= 3, and then compared with right
child which is 1, and max(3,1)= 3 still α remains 3, but the node
value of F will become 1.
• Step 7: Node F returns the node value 1 to node C, at C α=
3 and β= +∞, here the value of beta will be changed, it
will compare with 1 so min (∞, 1) = 1. Now at C, α=3 and
β= 1, and again it satisfies the condition α>=β, so the next
child of C which is G will be pruned, and the algorithm will
not compute the entire sub-tree G.
• Step 8: C now returns the value of 1 to A here the best
value for A is max (3, 1) = 3. Following is the final game
tree which is the showing the nodes which are computed
and nodes which has never computed. Hence the optimal
value for the maximizer is 3 for this example.
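The same tree can be run through a short alpha-beta sketch. The leaves that the example never evaluates (E's right child and both of G's children) are filled in with placeholder 9s, since their actual values do not affect the result:

```python
import math

visited_leaves = []

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    # Leaves are numbers; internal nodes are lists of children.
    if isinstance(node, (int, float)):
        visited_leaves.append(node)
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:      # cut-off: prune remaining children
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:      # cut-off: prune remaining children
                break
        return value

# Tree from the example: D = [2, 3], E = [5, ?], F = [0, 1], G = [?, ?]
tree = [[[2, 3], [5, 9]], [[0, 1], [9, 9]]]
print(alphabeta(tree, True))   # 3
print(visited_leaves)          # [2, 3, 5, 0, 1]
```

Only five leaves are evaluated: E's second leaf is cut off after 5 ≥ β = 3, and the entire subtree G is pruned once β at C drops to 1.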
• The effectiveness of alpha-beta pruning is highly dependent
on the order in which each node is examined.
• Move order is an important aspect of alpha-beta pruning.
• It can be of two types:
• Worst ordering: In some cases, alpha-beta pruning algorithm
does not prune any of the leaves of the tree, and works
exactly as minimax algorithm.
• In this case, it also consumes more time because of alpha-
beta factors, such a move of pruning is called worst
ordering. In this case, the best move occurs on the right side
of the tree.
• Ideal ordering: The ideal ordering for alpha-beta pruning
occurs when lots of pruning happens in the tree, and best
moves occur at the left side of the tree.
• Since we apply DFS, the left side of the tree is searched first; with ideal ordering, alpha-beta pruning can search roughly twice as deep as the minimax algorithm in the same amount of time.
8 puzzle problem in AI – Heuristic function
• Let us use the heuristic function h1, the number of misplaced tiles.
• The initial state and the goal state are shown in the figure.
• In the above diagram, for the Right 6 state, we
notice that the tiles numbered as 6, 5 and 4 do not
match with the goal states and hence number of
misplaced tiles = h(n) = 3 here.
• Similarly for the Up 4 state, the tiles numbered as 6
and 5 do not match with the goal states and hence
the number of misplaced tiles = h(n) = 2 here.
• Similarly for the Down 3 state, the tiles numbered
as 6, 3, 5 and 4 do not match with the goal states
and hence the number of misplaced tiles = h(n) = 4
here.
• At this level, we consider only the state Up 4 for
further processing as its h(n) value is the minimum.
• In the case of the Right 5 state, the tile numbered 6 does not match with the goal state and hence h(n) = 1.
• In the case of Down 4 state, the tiles numbered as
6, 5 and 4 are not matching with goal states, hence
h(n) = 3.
• At this level, again we choose the Right 5 state for
further processing as it has the lowest h(n) value.
• In the case of Down 6 state, all the tiles exactly
match with the goal state and hence its h(n) = 0.
• But if we use Down 4 state, the tiles numbered as 6,
5 and 4 do not match with the goal states and
hence h(n) = 3.
• As we have already arrived at the goal state from the Down 6 state, we stop the algorithm.
• In the above example, we have not used any heuristic function but an uninformed search method, namely Breadth First Search (BFS).
• We have considered all possible states from the start
state S.
• At level 1 we have got 3 states as the empty tile is at
the left edge.
• Similarly when we expand level 1, as the empty tile is at
the middle we get 4 new states at level 2.
• When we expand level 2, as the empty tile is at the
bottom edge, we get 2 new states and one of them
happens to be our goal state and hence we stop
exploring further.
• But the time complexity of this method, O(3^20), is higher when compared with an informed search method.
• A good heuristic for the 8-puzzle is the number of tiles out of place.
• A better heuristic is the sum of the distances of each tile from its goal position ("Manhattan distance").
• Heuristic 1: tiles 4, 5, 6, 7, and 8 are out of place, so estimate = 5.
• Heuristic 2: estimate = 0 + 0 + 0 + 1 + 1 + 2 + 1 + 1 = 6.
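Both heuristics are easy to compute. Since the figure's actual board is not reproduced in the text, the sketch below uses a sample state of our own:

```python
def misplaced(state, goal):
    # h1: number of tiles (excluding the blank, 0) not in their goal cell
    return sum(1 for s, g in zip(sum(state, []), sum(goal, []))
               if s != 0 and s != g)

def manhattan(state, goal):
    # h2: sum over tiles of |row difference| + |column difference|
    pos = {v: (r, c) for r, row in enumerate(goal)
           for c, v in enumerate(row)}
    return sum(abs(r - pos[v][0]) + abs(c - pos[v][1])
               for r, row in enumerate(state)
               for c, v in enumerate(row) if v != 0)

goal  = [[1, 2, 3], [4, 5, 6], [7, 8, 0]]
state = [[1, 2, 3], [5, 6, 0], [7, 8, 4]]   # our own sample state
print(misplaced(state, goal), manhattan(state, goal))   # 3 5
```

Tiles 5, 6 and 4 are misplaced (h1 = 3), and their Manhattan distances are 1 + 1 + 3 (h2 = 5); since h2 dominates h1, A* with h2 expands no more nodes than A* with h1.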