3-Solving problems by searching-31-07-2024
Dr K Ganesan
Professor – HAG, SCORE
VIT, Vellore – 632 014
[email protected]
Phone: 6382203768
• Introduction
• State space search casts the AI as a hunter navigating a map filled with many possible routes.
• The AI's challenge is to find the quickest path to the destination, namely, the solution to a problem.
• It's like playing a game of chess where the AI has to
think ahead about every move to win the game.
• What is State Space Search in AI?
• It's like having many doors to open, with each door
leading to a room full of other doors.
• Some paths lead to dead ends, some go in circles,
and some lead to the goal and the "state space" is
the name for all the rooms and doors combined.
• Each "state" is a snapshot of the problem at some point.
• The AI looks at all these snapshots and tries to figure
out which sequence of moves will get it to the final
picture, where the problem is solved.
• It's a bit like trying to solve a jigsaw puzzle by picturing
what the completed picture should look like and then
finding pieces that fit together to make that picture.
• AI uses this method to do things like planning the best
route for delivery trucks, figuring out the moves in a
game, or even making decisions about investments.
• It's a fundamental part of how AI systems think and
make decisions.
• Famous Algorithms of State Space Search in AI
• 1. Breadth-First Search (BFS)
• This algorithm starts at the tree root (starting point) and explores all neighbor nodes at the present depth before moving on to nodes at the next level of depth.
• It's like searching all the rooms on one floor before taking
the stairs to the next.
• 2. Depth-First Search (DFS)
• DFS goes straight down into the tree as deep as it can go,
before backing up and trying different paths.
• It's similar to exploring a path in a cave until we hit a dead
end, then backtracking to try a different path.
• 3. Uniform Cost Search
• This method expands the least costly node first.
• It's like choosing paths based on toll costs, always taking the
cheapest route available until we reach our destination.
• 4. Greedy Best-First Search
• It picks the path that seems best at that moment, like choosing the brightest corridor in a maze, hoping it leads to the exit.
• 5. A* Search
• A* Search combines the best features of both the Uniform
Cost Search and Greedy Best-First Search.
• It uses a heuristic to estimate the best path to the goal from the current state, and also considers the cost from the start to the current state.
• It's like choosing paths based on both the toll cost and how
brightly they're lit.
• The choice depends on the problem the AI is trying to solve.
• Some are faster but may miss the shortest path, while others are slower but guarantee they'll find a route in the end.
• Features of State Space Search
• 1. Completeness
• Completeness refers to the algorithm's ability to guarantee
a solution if one exists.
• For example, BFS and A* are complete, meaning they will
always find a solution if there is one to be found.
• 2. Optimality
• An algorithm is optimal if it always finds the least-cost path to the goal.
• Uniform Cost Search and A* are examples of optimal algorithms.
• 3. Time Complexity
• This is a measure of the algorithm's performance in terms of how quickly it can find a solution.
• DFS can be faster than BFS in some cases because it doesn't
need to store all the fringe nodes at each level.
• 4. Space Complexity
• It measures how much memory the algorithm uses. BFS can have a high space complexity because it keeps track of all the nodes at the current depth level.
• 5. Heuristics
• Heuristics (used in A*) can help to estimate the cost of the
cheapest path from the current node to the goal.
• A good heuristic can speed up the search process.
• Steps in State Space Search
• 1. Define the Initial State
• This is where the problem begins. For example, in a puzzle, the initial state would be the starting arrangement of the pieces.
• 2. Define the Goal State
• The goal state is the desired arrangement or condition that solves the problem. While traveling, it would be reaching the destination.
• 3. Generate Successor States
• From the current state, create a set of possible 'next moves'
or states that are reachable directly from the current state.
• 4. Apply Search Strategy
• Choose a path based on the chosen search strategy, such as
depth-first, breadth-first, or best-first search. This decision is
crucial and can affect the efficiency of the search.
• 5. Check for Goal State
• At each new state, check to see if the goal has been reached.
If it has, the search is successful.
• 6. Path Cost
• Calculate the cost of the current path. If cost is too high, or if
a dead end is reached, backtrack and try a different path
• 7. Loop and Expand
• Repeat the process: generate successors, apply the search
strategy, and check for goal state until goal is reached or no
more states can be generated.
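The seven steps above can be sketched as one generic search loop. This is a minimal illustration (the function and variable names are mine, not from the slides); using a FIFO queue as the frontier gives breadth-first behaviour, and swapping the frontier discipline changes the search strategy.

```python
from collections import deque

def state_space_search(initial_state, is_goal, successors):
    """Generic state space search following the steps above.

    The frontier discipline (here a FIFO queue, i.e. breadth-first)
    is the pluggable 'search strategy' of step 4.
    """
    frontier = deque([(initial_state, [initial_state])])  # (state, path so far)
    visited = {initial_state}
    while frontier:                       # step 7: loop and expand
        state, path = frontier.popleft()  # step 4: apply the search strategy
        if is_goal(state):                # step 5: check for the goal state
            return path
        for nxt in successors(state):     # step 3: generate successor states
            if nxt not in visited:        # avoid circles / repeated states
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                           # no more states can be generated
```

Here `successors` is any function that returns the 'next moves' from a state, so the same loop solves puzzles, route finding, and the water jug problems that follow.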
• Example of State Space Search
• Consider a classic problem in AI: the water jug puzzle.
• We have a 3-liter jug and a 5-liter jug, and we need to
measure out exactly 4 liters of water using these two jugs.
• Initial State: (similar to input)
• We start with both jugs empty: (0, 0) where first value is the
amount of water in the 3-liter jug, and second value is the
amount in the 5-liter jug.
• Goal State: (similar to output)
• The state where the 5-liter jug contains exactly 4 liters of water: (x, 4), for example (0, 4).
• Actions: (Similar to allowed functions / rules)
• Fill a jug completely.
• Empty a jug.
• Pour water from one jug to the other until either the first
jug is empty or the second jug is full.
• State Space Representation:
• We can represent this problem with a graph where
each node represents the state of the jugs, and
each edge represents one of the actions that can
change the state.
• For example:
• Filling the 5-liter jug from the tap: (0, 0) → (0, 5)
• Pouring water from the 5-liter jug into the 3-liter
jug: (0, 5) → (3, 2)
• Emptying the 3-liter jug: (3, 2) → (0, 2)
• Pouring all the water from the 5-liter jug into the 3-liter jug: (0, 2) → (2, 0)
• And so on, until reaching the goal state.
• Water Jug Problem Definition:
• “You are given two jugs, a 4-liter one and a 3-liter one.
Neither has any measuring markers on it. There is a
pump that can be used to fill the jugs with water. How
can we get exactly 2 liters of water into a 4-liter jug?”
• Solution to the Water Jug Problem in Artificial Intelligence
• Current state (0,0)
• Loop till the goal state (2,0) is reached.
• Apply a rule when the left side matches the current state
• Set the new current state to the resulting state
• Start state (0,0)
• (0,3) Apply Rule 2, Fill the 3-litre Jug
• (3,0) Apply Rule 9: Pour all the water from a 3-litre jug into a
4-litre jug as 4 > 3.
• (3,3) Apply Rule 2, Fill the 3-litre Jug
• (4,2) Apply Rule 7: Pour water from a 3-litre jug into a 4-litre
jug until it is full (3 lit is there in 1st jug hence pour 1 lit from
2nd jug and 1st becomes 3+1=4 and second becomes 3-1=2)
• (0,2) Apply Rule 5, Empty 4-litre jug on the ground
• (2,0) Apply Rule 9: Pour all the water from a 3-litre jug into a
4-litre jug
• Another solution for water jug is:
• (0, 0) – Start State
• (4, 0) – Rule 1: Fill the 4-litre jug
• (1, 3) – Rule 8: Pour water from the 4-litre jug into the
3-litre jug until the 3-litre jug is full (second jug can
take only 3 lit hence first jug becomes 4-3=1 lit)
• (1, 0) – Rule 6: Empty the 3-litre jug on the ground
• (0, 1) – Rule 10: Pour all the water from the 4-litre jug
into the 3-litre jug.
• (4, 1) – Rule 1: Fill the 4-litre jug.
• (2, 3) – Rule 8: Pour water from the 4-litre jug into the
3-litre jug until the 3-litre jug is full (second jug can
take only 2 lit from first jug)
• (2, 0) – Rule 6: Empty the 3-liter jug on the ground
• Goal State reached
• Develop all production rules for the following water jug problem:
• You are given two jugs, a 6-litre one and a 5-litre one. Neither has any measuring marks on it. There is a pump to fill the jugs with water. How can we get exactly 2 litres of water in the 6-litre jug?
• 1.Fill the 6-litre jug.
• 2.Pour water from the 6-litre jug into 5-litre jug until 5-litre jug
is full. This leaves you with 1 litre of water in the 6-litre jug.
• 3.Empty the 5-litre jug.
• 4.Pour 1 litre of water from 6-litre jug into the empty 5-litre jug
• 5.Fill the 6-litre jug again.
• 6.Carefully pour water from 6-litre jug into the 5-litre jug until
5-litre jug is full. Since you already have 1 litre of water in 5-
litre jug, you can only pour 4 litres from 6-litre jug.
• Now, we have exactly 2 litres of water in the 6-litre jug.
• Consider water jug problem using a 10-liter jug and a 7-liter
jug, where the goal is to measure exactly 6 liters of water:
• A. State Representation and Initial State: Represent a state of the problem using a tuple (x, y), where x represents the amount of water in the 10-liter jug, y represents the amount in the 7-liter jug, and the initial state is (0, 0)
• B. Goal Predicate: The goal state is (6, y), where 0 ≤ y ≤ 7
• Each action, such as filling, emptying, or pouring water,
transitions the system from one state to another.
• C. Operators that transition from one state to another:
• Fill 10-liter jug: (x, y) → (10, y) if x < 10
• Fill 7-liter jug: (x, y) → (x, 7) if y < 7
• Empty 10-liter jug: (x, y) → (0, y) if x > 0
• Empty 7-liter jug: (x, y) → (x, 0) if y > 0
• Pour water from 7-liter jug into 10-liter jug until it is full: (x, y) → (10, y – (10 – x)) if x + y ≥ 10 and y > 0
• Pour water from 10-liter jug into 7-liter jug until it is full: (x, y) → (x – (7 – y), 7) if x + y ≥ 7 and x > 0
• Pour all water from 7-liter jug into 10-liter jug: (x, y) → (x + y, 0) if x + y ≤ 10 and y > 0
• Pour all water from 10-liter jug into 7-liter jug: (x, y) → (0, x + y) if x + y ≤ 7 and x > 0
• Using state space search (brute force approach), the
following solution can be found:
• Start: (0, 0)
• Fill 10-liter jug: (10, 0)
• Pour from 10-liter jug to 7-liter jug: (3, 7)
• Empty 7-liter jug: (3, 0)
• Pour from 10-liter jug to 7-liter jug: (0, 3)
• Fill 10-liter jug: (10, 3)
• Pour from 10-liter jug to 7-liter jug: (6, 7)
• The final state (6, 7) is reached, achieving the goal of measuring exactly 6 liters of water in the 10-liter jug.
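The operators and brute-force search above can be sketched in Python. This is an illustration under my own naming (not from the slides); each "pour until full" / "pour all" pair collapses into a single pour of `min(...)` liters, which covers both cases.

```python
from collections import deque

CAP_A, CAP_B = 10, 7  # capacities of the 10-liter and 7-liter jugs

def successors(state):
    """All states reachable from (x, y) via the operators above."""
    x, y = state
    result = set()
    if x < CAP_A: result.add((CAP_A, y))   # fill the 10-liter jug
    if y < CAP_B: result.add((x, CAP_B))   # fill the 7-liter jug
    if x > 0:     result.add((0, y))       # empty the 10-liter jug
    if y > 0:     result.add((x, 0))       # empty the 7-liter jug
    if y > 0 and x < CAP_A:                # pour 7 -> 10 (as much as fits)
        t = min(y, CAP_A - x); result.add((x + t, y - t))
    if x > 0 and y < CAP_B:                # pour 10 -> 7 (as much as fits)
        t = min(x, CAP_B - y); result.add((x - t, y + t))
    return result

def solve(start=(0, 0), target=6):
    """Brute-force breadth-first search for 6 liters in the 10-liter jug."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == target:   # goal predicate: (6, y)
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
```

Running `solve()` yields a move sequence from (0, 0) to a state with 6 liters in the 10-liter jug, such as the one listed above.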
• Applications of State Space Search
• 1. Puzzle Solving:
• State space search algorithms are often used to solve complex
puzzles like Rubik's Cube, sliding tile puzzles, water jug problem.
• These puzzles can be represented as a state space, and algorithms
can be applied to find solutions.
• 2. Pathfinding:
• In video games and robotics, state space search is used for
pathfinding – determining the shortest or most cost-effective
path from one point to another.
• Algorithms like A* and Dijkstra's are used in maps and game levels to find paths considering various terrains and obstacles.
• 3. Planning and Scheduling:
• AI planning involves creating a sequence of actions to reach a goal. State space search helps in finding the best sequence of events in logistics, production schedules, or AI for playing strategy games.
• 4. Natural Language Processing:
• State space search can be used in parsing algorithms for
natural language processing, where states represent
possible interpretations of sentences & search is for most
likely meaning.
• 5. Machine Learning:
• In reinforcement learning, state space search helps in
exploring different states and actions to maximize the
reward function.
• 6. Problem Solving in AI:
• It is used in expert systems, theorem proving, and even in
medical diagnosis systems where different states represent
the presence or absence of symptoms and diseases.
• 7. Robotics:
• Robots use state space search to navigate and interact with
their environment.
• They need to understand current state, explore possible
next states, and choose best action to perform tasks.
• 8. Automated Reasoning:
• State space search is used in automated reasoning,
where a system needs to infer new knowledge from
the known facts. This is crucial in fields like law and
computer-aided verification.
• 9. Optimization Problems:
• Many optimization problems can be framed as state
space searches, where goal is to find the state that
maximizes or minimizes a particular function.
• 10. Quantum Computing:
• In quantum computing, state space search can be used
to explore possibilities of quantum states to solve
problems that are intractable on classical computers.
• Disadvantages of State Space Search
• 1. Complexity:
• The biggest challenge is the combinatorial explosion.
• As the complexity of the problem increases, number of
states can grow exponentially, making the search space vast
and difficult to navigate within a reasonable time frame.
• 2. Memory Consumption:
• Storing the state space can require significant memory for complex problems. This can be a limiting factor for systems with limited resources.
• 3. Time-Consuming:
• Searching through a large state space can be very time-
consuming.
• Algorithms may take an impractical amount of time to find a
solution, or to determine that no solution exists.
• 4. Local Minima:
• Some search algorithms can get stuck in local minima –
suboptimal states that are better than their neighbors but
not the best overall. Escaping local minima can be difficult
and may require additional strategies.
• 5. Overhead:
• The overhead of managing the search, such as keeping track
of visited states and the paths taken, can be significant,
which adds to the computational burden.
• 6. Knowledge Representation:
• The effectiveness of a state space search is heavily dependent
on how well problem is represented. Poor representation
can lead to inefficient searches and missed solutions.
• 7. Dynamic Environments:
• State space search is less effective in dynamic environments
where state can change independently of the searcher's
actions. This requires the search algorithm to be adaptive
and responsive to changes.
• 8. Heuristic Dependence:
• Many state space search algorithms rely on heuristics
(proceeding to a solution by trial and error or by rules
that are only loosely defined) to guide the search.
• Finding a good heuristic is often a problem-specific
challenge and can be difficult.
• 9. Scalability Issues:
• While state space search is suitable for many problems,
it may not scale well for problems with high-
dimensional spaces or those requiring real-time
solutions.
• 10. Incomplete Information:
• In cases where the state space cannot be fully known
or is partially observable, traditional state space
search techniques may not be applicable or need to
be significantly adapted.
• Types of search algorithms
• Based on the search problem, we can classify search algorithms into uninformed search (blind search) and informed search (heuristic search) algorithms.
• Breadth-First Search(BFS)
• It is a graph traversal algorithm that starts traversing the
graph from the root node and explores all the neighboring
nodes.
• Then, it selects the nearest unexplored node and, in turn, explores all of its unexplored neighbors.
• While using BFS for traversal, any node in the graph can be
considered as the root node.
• There are many ways to traverse the graph, but among
them, BFS is the most commonly used approach.
• It is a systematic algorithm, usually implemented with a queue, for visiting all the vertices of a tree or graph data structure.
• BFS puts every vertex of the graph into two categories -
visited and non-visited.
• It selects a single node in a graph and, after that, visits all
the nodes adjacent to the selected node.
• Applications of BFS
• BFS can be used to find the neighboring locations from a
given source location.
• In a peer-to-peer network, BFS algorithm can be used as a
traversal method to find all the neighboring nodes.
• BFS can be used in web crawlers to create web page indexes.
• It starts traversing from the source page and follows the links
associated with the page.
• Here, every web page is considered as a node in the graph.
• BFS is used to determine the shortest path and minimum
spanning tree.
• It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow network.
• In the example below, there is a directed graph having 7 vertices.
• In this graph, the minimum path 'P' can be found by using BFS, starting from node A and ending at node E.
• The algorithm uses two queues, namely QUEUE1 and QUEUE2.
• QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that are processed and deleted from QUEUE1.
• Now, let's start examining the graph, starting from node A.
• Step 1 - First, add A to queue1 and NULL to queue2.
• QUEUE1 = {A}
• QUEUE2 = {NULL}
• Step 2 - Now, delete node A from queue1 and add it into
queue2. Insert all neighbors of node A to queue1.
• QUEUE1 = {B, D}
• QUEUE2 = {A}
• Step 3 - Now, delete node B from queue1 and add it into
queue2. Insert all neighbors of node B to queue1.
• QUEUE1 = {D, C, F}
• QUEUE2 = {A, B}
• Step 4 - Now, delete node D from queue1 and add it into
queue2. Insert all neighbors of node D to queue1.
• The only neighbor of node D is F; since F is already in QUEUE1, it will not be inserted again.
• QUEUE1 = {C, F}
• QUEUE2 = {A, B, D}
• Step 5 - Delete node C from queue1 and add it into queue2.
Insert all neighbors of node C to queue1.
• QUEUE1 = {F, E, G}
• QUEUE2 = {A, B, D, C}
• Step 6 - Delete node F from queue1 and add it into queue2.
• Insert all neighbors of node F to queue1. Since all the
neighbors of node F (say, A) are already present, we will not
insert them again.
• QUEUE1 = {E, G}
• QUEUE2 = {A, B, D, C, F}
• Step 7 - Delete node E from queue1 and add it into queue2.
• Since all of its neighbors have already been added (B,F), we
will not insert them again.
• The target node E has now been reached and is present in queue2, so the search can stop.
• QUEUE1 = {G}
• QUEUE2 = {A, B, D, C, F, E}
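The two-queue traversal above can be sketched in Python. Since the original figure is not reproduced here, the adjacency list below is an assumption reconstructed from the trace (A's neighbors are B and D, B's are C and F, and so on); treat it and the function name as illustrative.

```python
from collections import deque

# Adjacency reconstructed from the step-by-step trace above (assumption:
# the original figure is not reproduced here).
graph = {
    'A': ['B', 'D'], 'B': ['C', 'F'], 'C': ['E', 'G'],
    'D': ['F'], 'E': ['B', 'F'], 'F': ['A'], 'G': [],
}

def bfs_two_queues(start):
    """BFS with QUEUE1 (to be processed) and QUEUE2 (processed and deleted)."""
    queue1, queue2 = deque([start]), []
    enqueued = {start}
    while queue1:
        node = queue1.popleft()            # delete from QUEUE1 ...
        queue2.append(node)                # ... and add it into QUEUE2
        for neighbour in graph[node]:
            if neighbour not in enqueued:  # already-inserted nodes are skipped
                enqueued.add(neighbour)
                queue1.append(neighbour)
    return queue2
```

With this adjacency, `bfs_two_queues('A')` reproduces the QUEUE2 order of the trace, ending A, B, D, C, F, E, G.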
• Suppose we have a 3-liter jug and a 5-liter jug, and we want
to measure exactly 4 liters of water.
• We'll use BFS to find the optimal solution.
• 1. Start with an Empty State: (0, 0)
• Both jugs are initially empty.
• 2. Apply All Possible Actions from the Current State: (0, 0)
• Fill the 3-liter jug: (3, 0)
• Fill the 5-liter jug: (0, 5)
• 3. Expand to the Next Level:
• We now have two new states to explore, (3, 0) and (0, 5).
• 4. Continue Expanding:
• From (3, 0), we can:
– Pour from the 3-liter jug into the 5-liter jug: (0, 3)
– Fill the 5-liter jug: (3, 5)
• From (0, 5), we can:
– Pour from the 5-liter jug into the 3-liter jug: (3, 2)
– Empty the 5-liter jug: (0, 0)
• 5. Explore Further:
• Continue expanding the states at the next level:
– From (0, 3), we can fill the 3-liter jug to reach (3, 3), or pour back to reach (3, 0).
– From (3, 3), we can pour into the 5-liter jug to reach (1, 5).
• 6. Goal State Achieved:
• Proceeding further, we've reached the goal state (0, 4).
• 7. Backtrack to Find the Solution:
• To find the solution path, we backtrack from the goal state to
the initial state:
– (0, 4) -> (3, 1) -> (0, 1) -> (1, 0) -> (1, 5) -> (3, 3) -> (0, 3) -> (3, 0) -> (0, 0)
• This step-by-step demonstration shows how Breadth-First
Search systematically explores the state space to find the
optimal solution to the Water Jug Problem.
• It ensures that we find the shortest path to the goal state
while examining all possible actions.
• While BFS guarantees optimality, it may not always be the
most efficient choice for larger problem spaces.
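The BFS exploration above can be sketched as a short solver. This is an illustration with my own function names, not code from the slides; note that BFS stops at the first state containing 4 liters, which is (3, 4) after six moves, while reaching (0, 4) exactly as in the trace above takes eight moves.

```python
from collections import deque

def successors(state):
    """Successor states of (a, b), where a is the 3-liter and b the 5-liter jug."""
    a, b = state
    pour_ab = min(a, 5 - b)   # amount moved when pouring 3-liter into 5-liter
    pour_ba = min(b, 3 - a)   # amount moved when pouring 5-liter into 3-liter
    moves = [
        (3, b), (a, 5),                   # fill a jug
        (0, b), (a, 0),                   # empty a jug
        (a - pour_ab, b + pour_ab),       # pour 3-liter into 5-liter
        (a + pour_ba, b - pour_ba),       # pour 5-liter into 3-liter
    ]
    return [m for m in moves if m != state]

def bfs_jugs(start=(0, 0), goal_amount=4):
    """Shortest move sequence until one jug holds goal_amount liters."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if goal_amount in state:
            path = []                     # step 7: backtrack via parent links
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:         # first visit is the shallowest one
                parent[nxt] = state
                frontier.append(nxt)
```

Storing a parent link per state is what makes the final backtracking step cheap.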
• DFS (Depth First Search) algorithm
• It is a recursive algorithm to search all the vertices of a tree
data structure or a graph.
• Stack data structure can be used to implement the DFS
algorithm
• The depth-first search (DFS) algorithm starts with the initial
node of graph G and goes deeper until we find the goal node
or the node with no children.
• Process to implement the DFS traversal is given as follows -
• 1. First, create an empty stack.
• 2. Choose any vertex as the starting point of the traversal and push it onto the stack.
• 3. Push an unvisited vertex that is adjacent to the vertex on the top of the stack.
• 4. Repeat step 3 until no unvisited vertex adjacent to the vertex on the stack's top is left.
• 5. When no such vertex is left, pop a vertex from the stack and go back to step 3.
• 6. Repeat until the stack is empty.
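The stack-based procedure above can be sketched as follows. This is a minimal, commonly used iterative variant (names are mine); it pushes all unvisited neighbours at once rather than one at a time, which produces the same depth-first visiting order.

```python
def dfs(graph, start):
    """Iterative depth-first traversal using an explicit stack."""
    stack, visited, order = [start], {start}, []
    while stack:
        vertex = stack.pop()                  # vertex on the stack's top
        order.append(vertex)
        # Push unvisited neighbours; reversed() puts the first-listed
        # neighbour on top of the stack so it is explored first.
        for neighbour in reversed(graph[vertex]):
            if neighbour not in visited:
                visited.add(neighbour)
                stack.append(neighbour)
    return order
```

For example, on the graph `{'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}`, the traversal goes deep along A, B, D before backtracking to C.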
Depth-first search on a binary tree. The unexplored region is shown in
light gray. Explored nodes with no descendants in the frontier are
removed from memory. Nodes at depth 3 have no successors and M is
the only goal node.
• Depth First Search – How it Works?
• Here a stack is used for keeping track of the traversal, together with a "visited" shading for each node.
• Initially push node A in to stack and add it in the output
sequence and mark node A as visited (yellow shading).
• At the top of the stack is node A and check all the
unvisited nodes of A.
• We have B and S.
• Let us choose the unvisited nodes in alphabetical order; hence node B is pushed onto the stack.
• Now, add node B to the output sequence and shade node B as visited.
• No unvisited neighbouring node is available from node B, hence pop node B from the stack.
• Now A is at the top of the stack.
• S is the other adjacent unvisited node.
• Push S into the top of the stack and add S to the
output sequence and shade S as visited node.
• S is at the top of the stack and it has C and G as its
neighbouring unvisited nodes.
• Following alphabetical order, push C in to stack.
• Store C into the output sequence and mark C as visited.
• C has 3 neighbouring unvisited nodes D, E and F.
• Alphabetically select node D, push into the top of stack.
• Add D into the output sequence and mark D as visited.
• Now D is at the top of the stack, and its only neighbour, C, is already visited.
• There is no unvisited node left, hence pop D from the stack. Now C will be at the top of the stack.
• C has 2 unvisited adjacent nodes E and F.
• Choose alphabetically node E, push it on the top of stack
and add E to the output list and shade E as visited node.
• E has one unvisited, adjacent node H and push it in to the
top of the stack and add H to the output list and shade H
as visited node.
• H has one unvisited neighbour node G and push it
into the top of the stack and add G to the output
sequence and mark G as visited node.
• G is at the top of the stack, and F is the unvisited
node of G and hence push F into the top of the
stack and mark F as visited.
• Now F is at the top of the stack and has no adjacent unvisited nodes, hence pop it from the stack.
• Now G is at the top of the stack; pop it as well, leaving H at the top.
• In the same way H, E, C, S and A are popped in turn, since none of them has an unvisited neighbour, and the stack becomes empty.
• The final DFS output sequence is A, B, S, C, D, E, H, G, F.
• The total cost for the path (P -> C -> U -> S) evaluates to 11.
• The problem with greedy best-first search is that the path (P -> R -> E -> S), with a cost of 10, is lower than (P -> C -> U -> S).
• Greedy best-first search ignored this path because it does not consider edge weights.
Simplified road map of Romania
• Let us see how this works for route-finding problems in Romania; we use the straight-line distance heuristic, called hSLD.
• If the goal is Bucharest, we need to know the straight-line
distances to Bucharest, which are shown in Figure below:
Stages in a greedy best-first tree search for Bucharest with the straight-
line distance heuristic hSLD. Nodes are labeled with their h-values.
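A sketch of greedy best-first search on the Romania map, using the standard AIMA road distances and hSLD values (the function name and code structure are mine). Note the result it finds, Arad - Sibiu - Fagaras - Bucharest with road cost 140 + 99 + 211 = 450, is not the cheapest route, illustrating that greedy best-first search ignores edge weights.

```python
import heapq

# Road map and straight-line distances to Bucharest (h_SLD),
# taken from the standard AIMA Romania example.
roads = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118, 'Lugoj': 111},
    'Lugoj': {'Timisoara': 111, 'Mehadia': 70},
    'Mehadia': {'Lugoj': 70, 'Drobeta': 75},
    'Drobeta': {'Mehadia': 75, 'Craiova': 120},
    'Craiova': {'Drobeta': 120, 'Rimnicu Vilcea': 146, 'Pitesti': 138},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Craiova': 146, 'Pitesti': 97},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
h_sld = {'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329,
         'Lugoj': 244, 'Mehadia': 241, 'Drobeta': 242, 'Craiova': 160,
         'Sibiu': 253, 'Rimnicu Vilcea': 193, 'Fagaras': 176,
         'Pitesti': 100, 'Bucharest': 0}

def greedy_best_first(start, goal):
    """Always expand the node whose h-value (straight-line distance) is smallest."""
    frontier = [(h_sld[start], [start])]   # priority = h only, no path cost
    visited = set()
    while frontier:
        _, path = heapq.heappop(frontier)
        city = path[-1]
        if city == goal:
            return path
        if city in visited:
            continue
        visited.add(city)
        for neighbour in roads[city]:
            if neighbour not in visited:
                heapq.heappush(frontier, (h_sld[neighbour], path + [neighbour]))
```

Because the priority is h alone, Fagaras (h = 176) is preferred over Rimnicu Vilcea (h = 193) even though the route through Rimnicu Vilcea is cheaper overall.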
• Dijkstra’s Algorithm
• Given a weighted graph and a source vertex, find the shortest paths from the source to all the other vertices in the given graph.
• Input: src = 0; Output: 0 4 12 19 21 11 9 8 14
Explanation: Distance from 0 to 1 = 4.
Minimum distance from 0 to 2 = 12.
0->1->2
Minimum distance from 0 to 3 = 19.
0->1->2->3
Minimum distance from 0 to 4 = 21.
0->7->6->5->4
Minimum distance from 0 to 5 = 11.
0->7->6->5
Minimum distance from 0 to 6 = 9.
0->7->6
Minimum distance from 0 to 7 = 8. 0->7
Minimum distance from 0 to 8 = 14.
0->1->2->8
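A priority-queue sketch of Dijkstra's algorithm (names are mine). The edge list below is the standard 9-vertex example graph that produces exactly the distances listed above; since the figure itself is not reproduced here, treat the edges as an assumption.

```python
import heapq

# Edge list of the standard 9-vertex example graph (an assumption: the
# figure producing the distances above is not reproduced here).
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]

def dijkstra(n, edges, src):
    """Shortest distances from src to every vertex of an undirected graph."""
    adj = {v: [] for v in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist = {v: float('inf') for v in range(n)}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:             # stale queue entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:     # relax the edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return [dist[v] for v in range(n)]
```

Calling `dijkstra(9, edges, 0)` returns the distance list 0 4 12 19 21 11 9 8 14 shown above.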
• A* Search Algorithm
• A* Search algorithm is an alternative to Dijkstra’s
Shortest Path algorithm.
• It is used to find shortest path between two nodes of
a weighted graph.
• A* Search algorithm performs better than Dijkstra’s
algorithm as it uses heuristics.
• The heuristic we will use is the straight-line distance between a node and the end node (Z).
• This distance will always be the shortest distance
between two points, a distance that cannot be reduced
whatever path we follow to reach node Z.
• A* tries to look for a better path by using a heuristic
function, which gives priority to nodes that are
supposed to be better than others while Dijkstra's just
explore all possible ways.
• Set the current node (A) to “visited” and use the
unvisited node with the smallest total distance as
the current node (e.g. in this case: Node C).
• Check all unvisited nodes connected to the current
node and add the distance from A to C to all
distances from the connected nodes.
• Replace their values only if the new distance is
lower than the previous one.
• C -> D: 3 + 7 = 10 < ∞ – Change Node D
• C -> E: 3 + 10 = 13 < ∞ – Change Node E
• The next current node (unvisited node with the
shortest total distance) could be either node B or
node D. Let’s use node B.
• Check all unvisited nodes connected to the current node (B) and add the distance from A to B to all distances from the connected nodes.
• Replace their values only if the new distance is lower than the previous one.
• B -> E: 4 + 12 = 16 > 13 – Do not change Node E
• B -> F: 4 + 5 = 9 < ∞ – Change Node F
• The next current node (unvisited node with the shortest total distance) is D.
• We found a path from A to Z, but is it the shortest one?
• Check all unvisited nodes.
• In this example, there is only one unvisited node (F).
• But its total distance (20) is greater than the
distance we have from A to Z (17) so there is no
need to visit node F as it will not lead to a shorter
path.
• We found the shortest path from A to Z.
• Read the path from Z to A using the previous-node column: Z > E > D > C > A
• So the shortest path is: A – C – D – E – Z, with a length of 17.
Solve: Apply the steps of the A* Search algorithm to find the
shortest path from A to Z using the following graph:
Simplified road map of Romania
Solve: Stages in an A* search for Bucharest. Nodes are labeled with f = g + h. The h values are the straight-line distances to Bucharest, taken from the figure given before.
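A sketch of A* on the Romania map, again using the standard AIMA road and straight-line distances (function and variable names are mine). Unlike greedy best-first search, the priority is f = g + h, so it recovers the optimal route Arad - Sibiu - Rimnicu Vilcea - Pitesti - Bucharest with cost 418.

```python
import heapq

# Romania road map and straight-line distances to Bucharest (AIMA figures).
roads = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind': {'Arad': 75, 'Oradea': 71},
    'Oradea': {'Zerind': 71, 'Sibiu': 151},
    'Timisoara': {'Arad': 118, 'Lugoj': 111},
    'Lugoj': {'Timisoara': 111, 'Mehadia': 70},
    'Mehadia': {'Lugoj': 70, 'Drobeta': 75},
    'Drobeta': {'Mehadia': 75, 'Craiova': 120},
    'Craiova': {'Drobeta': 120, 'Rimnicu Vilcea': 146, 'Pitesti': 138},
    'Sibiu': {'Arad': 140, 'Oradea': 151, 'Fagaras': 99, 'Rimnicu Vilcea': 80},
    'Rimnicu Vilcea': {'Sibiu': 80, 'Craiova': 146, 'Pitesti': 97},
    'Fagaras': {'Sibiu': 99, 'Bucharest': 211},
    'Pitesti': {'Rimnicu Vilcea': 97, 'Craiova': 138, 'Bucharest': 101},
    'Bucharest': {'Fagaras': 211, 'Pitesti': 101},
}
h = {'Arad': 366, 'Zerind': 374, 'Oradea': 380, 'Timisoara': 329,
     'Lugoj': 244, 'Mehadia': 241, 'Drobeta': 242, 'Craiova': 160,
     'Sibiu': 253, 'Rimnicu Vilcea': 193, 'Fagaras': 176,
     'Pitesti': 100, 'Bucharest': 0}

def a_star(start, goal):
    """Expand the frontier node with the smallest f = g + h."""
    frontier = [(h[start], 0, [start])]   # entries are (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        city = path[-1]
        if city == goal:
            return path, g
        for neighbour, cost in roads[city].items():
            g2 = g + cost
            if g2 < best_g.get(neighbour, float('inf')):  # cheaper route found
                best_g[neighbour] = g2
                heapq.heappush(frontier, (g2 + h[neighbour], g2, path + [neighbour]))
```

Bucharest is first reached via Fagaras at f = 450, but that entry is beaten by the Pitesti route at f = 418 before the goal is popped, which is exactly the behaviour the stage-by-stage figure illustrates.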
• Generate and Test Algorithm
• Generate a possible solution. This means generating a possible point in the problem space or generating a path from a start state.
• Test to see if this is actually a solution by comparing the chosen point or the endpoint of the chosen path to the set of acceptable goal states. If a solution is found, quit; otherwise, repeat the process.
• Simple Hill Climbing Algorithm
• Step 1: Start with an initial state.
• Step 2: Check if initial state is the goal. If so, return success and exit.
• Step 3: Enter a loop to search for a better state continuously.
• Select a neighboring state within the loop by applying an operator to
the current state. Evaluate this new state:
– If it’s the goal state, return success and exit.
– If it’s better than current state, update current state to new state.
– If it’s not better, discard it and continue the loop.
• Step 4: End process if no better state is found & goal isn’t achieved.
• Steepest-Ascent Hill Climbing
• This variant assesses all neighboring solutions, choosing the
one with the most significant improvement.
• Algorithm:
• Step 1: Evaluate the initial state. If it’s the goal, we can return
success; otherwise, set it as the current state.
• Step 2: Repeat until a solution is found or no further
improvement is possible.
• Initialize “BEST_SUCCESSOR” as the best potential
improvement over the current state.
• For each operator, apply to current state, evaluate new state.
– If it’s the goal, return success.
– If better than “BEST_SUCCESSOR,” update “BEST_SUCCESSOR” to
this new state.
• If “BEST_SUCCESSOR” is an improvement, update current
state.
• Step 3: Stop the algorithm when a solution is found or no further improvement is possible.
• Both hill climbing algorithms may fail to find a solution if the search reaches (i) a local maximum, (ii) a plateau, or (iii) a ridge.
• X-axis: denotes the state space i.e states or configuration our
algorithm may reach. Y-axis: denotes the values of objective
function corresponding to a particular state.
• The best solution will be a state space where the objective function
has a maximum value(global maximum).
• Different regions in the State Space Diagram:
• Local maximum: It is a state which is better than its neighboring states; however, there exists a state which is better than it (the global maximum). The state is "locally" best because the value of the objective function is higher than at its neighbors. It is also called a foothill.
• To overcome the local maximum problem: Utilize
backtracking technique. Maintain a list of visited states. If
the search reaches an undesirable state, it can backtrack to
the previous configuration and explore a new path.
• Global maximum: It is the best possible state in the state
space diagram. This is because, at this stage, the objective
function has the highest value.
• Plateau/flat local maximum: It is a flat region of state space
where neighboring states have the same value. Hence it is
difficult to find the best direction to move further.
• To overcome plateaus: Make a big jump. Randomly select a
state far away from the current state. Chances are that we
will land in a non-plateau region.
• Ridge: It is a region that is higher than its neighbors but itself has a slope. It is a special kind of local maximum, because it is very difficult to traverse a ridge by a single move.
• To overcome Ridge: In this kind of obstacle, use two or more rules
before testing. It implies moving in several directions at once.
• Current state: The region of the state space diagram where we are
currently present during the search.
• Shoulder: It is a plateau that has an uphill edge.
Advantages:
• Simplicity: The algorithm is straightforward to understand and implement.
• Memory Efficiency: It's memory-efficient, maintaining only the current state's data.
• Rapid Convergence: It often converges swiftly to a solution, which is beneficial in scenarios where time is critical.
Disadvantages:
• Susceptibility to Local Optima: The algorithm can become stuck at locally optimal solutions that aren't the best overall.
• Limited Exploration: Its tendency to focus on the immediate vicinity limits its exploration, potentially overlooking globally optimal solutions.
• Dependence on Initial State: The quality and effectiveness of the solution found heavily depend on the starting point.
• Hill climbing is a local search algorithm that continuously moves in the direction of increasing elevation to find the peak of the mountain. It is a greedy algorithm, as it makes the best choice at each step, hoping to reach the global maximum.
• Initialize: Start with an initial solution.
• Evaluate: Evaluate the current solution.
• Generate neighbors: Create neighboring solutions by making
small modifications to the current solution.
• Select the best neighbor: Choose the neighbor with the
highest evaluation value.
• Update: Move to the selected neighbor and repeat the
process until a peak is reached or no better neighbor is found.
• Numerical Example:
• Consider a simple example of finding the maximum value in a
one-dimensional space.
• Suppose we have the function: f(x) = -x^2 + 4x
• We want to find value of x that maximizes this function using
hill climbing.
• Steps:
• Initialize: Start with an initial solution, let's say x = 2.
• Evaluate: Evaluate the function at x = 2:
• f(2) = -2^2 + 4*2 = -4 + 8 = 4.
• Generate neighbors: Create neighboring solutions by slightly
modifying x. Let's consider x + 1 and x - 1.
• Select best neighbor: Evaluate the function at x + 1 and x - 1:
• f(3) = -3^2 + 4*3 = -9 + 12 = 3, and f(1) = -1^2 + 4*1 = -1 + 4 = 3.
• Neither neighbor has a higher function value than f(2) = 4, so the algorithm stops: x = 2 is reported as the peak.
• In this case the starting point was already the global maximum, since the parabola f(x) = -x^2 + 4x peaks at x = 2.
• Had we started at, say, x = 0 (with f(0) = 0), the algorithm would first move to x = 1 (f(1) = 3), then to x = 2 (f(2) = 4), and stop there, because no better neighbor exists.
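The steps above can be sketched as a few lines of Python (a minimal illustration with my own names, using integer neighbours x - 1 and x + 1):

```python
def f(x):
    return -x ** 2 + 4 * x

def hill_climb(x, step=1):
    """Simple hill climbing over the integer neighbours x - step and x + step."""
    while True:
        best = max([x - step, x + step], key=f)   # best neighbouring solution
        if f(best) <= f(x):                       # no improvement: peak reached
            return x, f(x)
        x = best                                  # move uphill and repeat
```

Starting from x = 0 the search climbs through x = 1 to x = 2; starting from x = 2 it stops immediately. Because this f is concave, the peak found is the global maximum; on a function with several humps, the same loop could halt at a local maximum.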
• Solution: Using a local heuristic function
• Add one point for every block that is resting on the thing it is supposed to be resting on. Subtract one point for every block that is sitting on the wrong thing.
• Initial state (a single stack, top to bottom: A, H, G, F, E, D, C, B): score = +4. (One point is added for each of C, D, E, F, G and H = 6; one point is subtracted for each of A and B = 2; so 6 – 2 = 4.)
• Goal state (a single stack, top to bottom: H, G, F, E, D, C, B, A): score = +8.
• There is only one move from the initial state, namely moving block A onto the table.
• Current state (A on the table, stack H, G, F, E, D, C, B): score = +6.
• Next state: there are 3 alternatives (move A back onto H, move H onto the table, or move H onto A), and each of them scores only +4, lower than the current +6.
• Hence simple hill climbing with this local heuristic halts at a local maximum.