Informed Search

Informed Search Algorithms

• Uninformed search algorithms explore the search space for all
possible solutions of the problem without any additional
knowledge about the search space.
• An informed search algorithm uses additional knowledge such as
how far we are from the goal, the path cost, how to reach the goal node,
etc.
• This knowledge helps agents explore less of the search space and
find the goal node more efficiently.
• Informed search algorithms are more useful for large search spaces.
Because informed search uses the idea of a heuristic, it is also
called heuristic search.
• Heuristic function:
• A heuristic is a function used in informed search that finds the most
promising path.
• It takes the current state of the agent as input and produces an estimate of
how close the agent is to the goal.
• The heuristic method might not always give the best solution, but it is
guaranteed to find a good solution in reasonable time.
• The heuristic function estimates how close a state is to the goal. It is represented by
h(n), and it estimates the cost of an optimal path between a pair of states. The
value of the heuristic function is always positive.
• h(n) <= h*(n)
• Here h(n) is the heuristic (estimated) cost and h*(n) is the actual cost of an optimal
path from n to the goal. For the heuristic to be admissible, the estimated cost must
be less than or equal to the actual cost.
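As an illustration, a common admissible heuristic for grid path-finding is the Manhattan distance, which never overestimates the true cost when only unit-cost horizontal and vertical moves are allowed. This is a minimal sketch; the grid setting and the function name are assumptions for illustration, not part of the original text.

```python
def manhattan_distance(state, goal):
    """Heuristic h(n): estimated cost from `state` to `goal` on a grid.

    Assumes states are (row, col) tuples and every move costs 1, so the
    estimate never exceeds the true optimal cost h*(n), i.e. h(n) <= h*(n).
    """
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])


# Example: from (0, 0) to (3, 4) the heuristic estimates 7 moves.
print(manhattan_distance((0, 0), (3, 4)))  # 7
```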
Pure Heuristic Search

• Pure heuristic search is the simplest form of heuristic search
algorithm. It expands nodes based on their heuristic value h(n). It
maintains two lists, an OPEN list and a CLOSED list. In the CLOSED list it places
those nodes which have already been expanded, and in the OPEN list it
places nodes which have not yet been expanded.
• On each iteration, the node n with the lowest heuristic value is
expanded, all its successors are generated, and n is placed in the
CLOSED list. The algorithm continues until a goal state is found.
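A compact sketch of this OPEN/CLOSED bookkeeping, assuming the graph is an adjacency mapping and the heuristic a dictionary; the function name is an illustrative placeholder, and a fuller version that also reconstructs the path appears under best-first search below.

```python
import heapq


def pure_heuristic_search(graph, h, start, goal):
    """Expand the OPEN node with the lowest h(n); move expanded nodes to CLOSED."""
    open_list = [(h[start], start)]    # OPEN: nodes not yet expanded, keyed by h(n)
    closed = set()                     # CLOSED: nodes already expanded
    while open_list:
        _, n = heapq.heappop(open_list)
        if n == goal:
            return True                # goal state found
        closed.add(n)
        for succ in graph.get(n, []):
            if succ not in closed:
                heapq.heappush(open_list, (h[succ], succ))
    return False                       # OPEN exhausted without reaching the goal
```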
Best-first Search Algorithm (Greedy Search)

• The greedy best-first search algorithm always selects the path which appears
best at that moment.
• It is a combination of depth-first search and breadth-first search
algorithms.
• It uses a heuristic function to guide the search.
• With the help of best-first search, at each step we can choose the most
promising node.
• In the best-first search algorithm, we expand the node which is closest to
the goal node, where closeness is estimated by the heuristic function, so the
evaluation function is f(n) = h(n).
• Greedy best-first search is implemented using a priority queue.
Best first search algorithm

Step 1: Place the starting node into the OPEN list.
Step 2: If the OPEN list is empty, stop and return failure.
Step 3: Remove the node n from the OPEN list which has the lowest value of
h(n), and place it in the CLOSED list.
Step 4: Expand node n and generate its successors.
Step 5: Check each successor of node n to find whether any of them is a goal
node. If any successor is a goal node, then return success and
terminate the search; otherwise proceed to Step 6.
Step 6: For each successor node, the algorithm computes the evaluation function
f(n) and then checks whether the node is already in the OPEN or CLOSED list. If
the node is in neither list, add it to the OPEN list.
Step 7: Return to Step 2.
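A minimal sketch of these steps in Python, assuming the graph is given as an adjacency mapping and the heuristic as a dictionary; the function name and data layout are illustrative assumptions, not part of the original text.

```python
import heapq


def greedy_best_first_search(graph, h, start, goal):
    """Greedy best-first search using OPEN/CLOSED lists.

    `graph` maps each node to its successors and `h` maps each node to its
    heuristic value.  Returns the path from start to goal, or None on failure.
    """
    open_list = [(h[start], start)]            # Step 1: put the start node in OPEN
    closed = set()
    parent = {start: None}

    while open_list:                           # Step 2: failure when OPEN is empty
        _, n = heapq.heappop(open_list)        # Step 3: lowest h(n) moves to CLOSED
        closed.add(n)
        for succ in graph.get(n, []):          # Step 4: expand n
            if succ == goal:                   # Step 5: goal test on each successor
                parent[succ] = n
                path = [succ]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return path[::-1]
            if succ not in closed and succ not in parent:  # Step 6: not in OPEN/CLOSED
                parent[succ] = n
                heapq.heappush(open_list, (h[succ], succ))
    return None                                # Step 2: return failure
```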
• Advantages:
• Best-first search can switch between BFS and DFS, thereby gaining the
advantages of both algorithms.
• This algorithm is more efficient than the BFS and DFS algorithms.
• Disadvantages:
• It can behave as an unguided depth-first search in the worst case.
• It can get stuck in a loop, like DFS.
• This algorithm is not optimal.
Consider the search problem below, which we will traverse using greedy best-first search. At each iteration, a
node is expanded using the evaluation function f(n) = h(n), which is given in the table below.
In this search example we use two lists, the OPEN and CLOSED lists. The following are the iterations for traversing the above example.
• Expand the nodes of S and put them in the CLOSED list
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]
             : Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]
             : Open [I, E, A], Closed [S, B, F, G]
• Hence the final solution path is: S ----> B ----> F ----> G
• Time Complexity: The worst-case time complexity of greedy best-first
search is O(b^m), where b is the branching factor and m is the maximum depth of the search space.
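The heuristic table itself is not reproduced in the text, so the graph and h values below are hypothetical, chosen only so that h(B) < h(A), h(F) < h(E) and h(G) = 0; with them, the `greedy_best_first_search` sketch above produces the same S ----> B ----> F ----> G path.

```python
# Hypothetical graph and heuristic values consistent with the iterations above.
graph = {
    'S': ['A', 'B'],
    'B': ['E', 'F'],
    'F': ['I', 'G'],
}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}

print(greedy_best_first_search(graph, h, 'S', 'G'))  # ['S', 'B', 'F', 'G']
```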
A* search
• Minimizing the total estimated solution cost
• The most widely known form of best-first search is called A* search (pronounced "A-star search").
• It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to
get from the node to the goal:
• f(n) = g(n) + h(n).
• Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost
of the cheapest path from n to the goal, we have
• f(n) = estimated cost of the cheapest solution through n.
• Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the node with
the lowest value of g(n) + h(n). It turns out that this strategy is more than just
reasonable: provided that the heuristic function h(n) satisfies certain conditions, A* search is
both complete and optimal. The algorithm is identical to UNIFORM-COST-SEARCH except that A*
uses g + h instead of g.
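A minimal A* sketch under the same assumptions as the earlier greedy sketch, except that the adjacency mapping now carries step costs; the function name and data layout are illustrative, not part of the original text.

```python
import heapq


def a_star_search(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).

    `graph` maps each node to a list of (successor, step_cost) pairs and
    `h` maps each node to its heuristic value.  Returns (path, cost) or None.
    """
    open_list = [(h[start], 0, start, [start])]    # entries are (f, g, node, path)
    best_g = {start: 0}

    while open_list:
        f, g, n, path = heapq.heappop(open_list)
        if n == goal:
            return path, g                          # cheapest solution found
        for succ, step_cost in graph.get(n, []):
            new_g = g + step_cost                   # g(succ): cost to reach succ
            if new_g < best_g.get(succ, float('inf')):
                best_g[succ] = new_g
                f_succ = new_g + h[succ]            # f(succ) = g(succ) + h(succ)
                heapq.heappush(open_list, (f_succ, new_g, succ, path + [succ]))
    return None
```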
Hill Climbing Algorithm in Artificial Intelligence

• The hill climbing algorithm is a local search algorithm which continuously moves in
the direction of increasing elevation/value to find the peak of the mountain or the
best solution to the problem. It terminates when it reaches a peak where
no neighbour has a higher value.
• Hill climbing is a technique used for optimizing mathematical problems. An
example is the Travelling Salesman Problem, in which we need to minimize the
distance travelled by the salesman.
• It is also called greedy local search, as it only looks at its immediate
neighbour states and not beyond them.
• A node in the hill climbing algorithm has two components: state and value.
• Hill climbing is mostly used when a good heuristic is available.
• In this algorithm, we do not need to maintain or handle a search tree or graph,
as it only keeps a single current state.
Features of Hill Climbing:

• Generate and Test variant: Hill climbing is a variant of the Generate
and Test method. The Generate and Test method produces feedback
which helps decide which direction to move in the search space.
• Greedy approach: Hill-climbing search moves in the
direction which optimizes the cost.
• No backtracking: It does not backtrack in the search space, as it does
not remember previous states.
State-space Diagram for Hill Climbing:
• The state-space landscape is a graphical representation of the hill-
climbing algorithm, showing the relationship between the various states
of the algorithm and the objective function/cost.
• On the Y-axis we take the function, which can be an objective
function or a cost function, and the state space on the X-axis. If the
function on the Y-axis is cost, then the goal of the search is to find the global
minimum and local minima. If the function on the Y-axis is an objective
function, then the goal of the search is to find the global maximum
and local maxima.
Different regions in the state space landscape:

• Local maximum: A local maximum is a state which is better than its
neighbour states, but there is another state which is higher than it.
• Global maximum: The global maximum is the best possible state in the state-
space landscape. It has the highest value of the objective function.
• Current state: The state in the landscape diagram where the agent is
currently present.
• Flat local maximum: A flat region in the landscape where all the
neighbour states of the current state have the same value.
• Shoulder: A plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:

• Simple hill climbing
• Steepest-ascent hill climbing
• Stochastic hill climbing
1. Simple Hill Climbing:

• Simple hill climbing is the simplest way to implement a hill climbing algorithm. It evaluates one
neighbour node state at a time and selects the first one which improves the current cost, setting it as the current
state. It only checks one successor state, and if that successor is better than the current state it moves to it; otherwise it stays in
the same state. This algorithm has the following features:
• Less time consuming
• Less optimal solution, and the solution is not guaranteed
• Algorithm for Simple Hill Climbing:
• Step 1: Evaluate the initial state; if it is the goal state then return success and stop.
• Step 2: Loop until a solution is found or there is no new operator left to apply.
• Step 3: Select and apply an operator to the current state.
• Step 4: Check the new state:
• If it is the goal state, then return success and quit.
• Else, if it is better than the current state, then assign the new state as the current state.
• Else, if it is not better than the current state, then return to Step 2.
• Step 5: Exit.
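A minimal sketch of simple hill climbing, assuming a `neighbours(state)` function that yields successor states, an `evaluate(state)` objective to maximise, and an `is_goal(state)` test; all of these names are illustrative placeholders.

```python
def simple_hill_climbing(start, neighbours, evaluate, is_goal):
    """Simple hill climbing: accept the first neighbour that improves the state."""
    current = start
    while not is_goal(current):
        moved = False
        for candidate in neighbours(current):    # examine one successor at a time
            if is_goal(candidate):
                return candidate
            if evaluate(candidate) > evaluate(current):
                current = candidate              # first improving move is accepted
                moved = True
                break
        if not moved:                            # no operator improves the state
            break
    return current
```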
2. Steepest-Ascent hill climbing:

• The steepest-ascent algorithm is a variation of the simple hill climbing algorithm. This algorithm
examines all the neighbouring nodes of the current state and selects the neighbour node which is
closest to the goal state. This algorithm consumes more time because it searches multiple neighbours.
• Algorithm for Steepest-Ascent hill climbing:
• Step 1: Evaluate the initial state; if it is the goal state then return success and stop, else make the
initial state the current state.
• Step 2: Loop until a solution is found or the current state does not change.
• Let SUCC be a placeholder state such that any successor of the current state will be better than it.
• For each operator that applies to the current state:
• Apply the operator and generate a new state.
• Evaluate the new state.
• If it is the goal state, then return it and quit; else compare it to SUCC.
• If it is better than SUCC, then set SUCC to the new state.
• If SUCC is better than the current state, then set the current state to SUCC.
• Step 5: Exit.
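A sketch of steepest-ascent hill climbing under the same assumed `neighbours`/`evaluate`/`is_goal` interface as the previous sketch.

```python
def steepest_ascent_hill_climbing(start, neighbours, evaluate, is_goal):
    """Steepest-ascent: examine all neighbours and move to the best one."""
    current = start
    while not is_goal(current):
        # SUCC starts "worse than any successor"; track the best neighbour seen.
        succ, succ_value = None, float('-inf')
        for candidate in neighbours(current):
            if is_goal(candidate):
                return candidate
            value = evaluate(candidate)
            if value > succ_value:               # better than SUCC so far
                succ, succ_value = candidate, value
        if succ is None or succ_value <= evaluate(current):
            break                                # current state did not change
        current = succ                           # SUCC is better: move to it
    return current
```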
3. Stochastic hill climbing:

• Stochastic hill climbing does not examine all of its neighbours before
moving. Rather, this search algorithm selects one neighbour node at
random and decides whether to move to it as the current state or
examine another state.
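A sketch of stochastic hill climbing; the acceptance rule used here (accept a randomly chosen neighbour only if it is uphill) and the step limit are common choices assumed for illustration.

```python
import random


def stochastic_hill_climbing(start, neighbours, evaluate, max_steps=1000):
    """Stochastic hill climbing: pick a random neighbour, accept it if it is better."""
    current = start
    for _ in range(max_steps):
        options = list(neighbours(current))
        if not options:
            break
        candidate = random.choice(options)        # one neighbour chosen at random
        if evaluate(candidate) > evaluate(current):
            current = candidate                   # accept only improving moves
    return current
```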
Problems in Hill Climbing Algorithm:

• 1. Local Maximum: A local maximum is a peak state in the landscape
which is better than each of its neighbouring states, but there is
another state which is higher than the local maximum.
• Solution: The backtracking technique can be a solution to the local
maximum in the state-space landscape. Create a list of promising
paths so that the algorithm can backtrack in the search space and
explore other paths as well.
• 2. Plateau: A plateau is a flat area of the search space in which all
the neighbour states of the current state have the same value;
because of this, the algorithm cannot find a best direction to move. A
hill-climbing search may get lost in the plateau area.
• Solution: The solution for a plateau is to take big steps or very small
steps while searching. Randomly select a state
which is far away from the current state, so that it is possible that the
algorithm will find a non-plateau region.
• 3. Ridges: A ridge is a special form of local maximum. It is an
area which is higher than its surrounding areas but which itself has a slope,
and it cannot be reached in a single move.
• Solution: With the use of bidirectional search, or by moving in
different directions, we can mitigate this problem.
Simulated Annealing:

• A hill-climbing algorithm which never makes a move towards a lower value
is guaranteed to be incomplete, because it can get stuck on a local maximum. If
the algorithm instead performs a random walk, moving to a randomly chosen successor, it may be complete
but it is not efficient. Simulated annealing is an algorithm which yields both
efficiency and completeness.
• In mechanical terms, annealing is the process of heating a metal or glass to a high
temperature and then cooling it gradually, which allows the material to reach a low-
energy crystalline state. The same idea is used in simulated annealing, in which
the algorithm picks a random move instead of picking the best move. If the
random move improves the state, it is accepted. Otherwise, the algorithm accepts
the move only with a probability less than 1, so it occasionally moves downhill and
escapes local maxima.
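A minimal sketch of simulated annealing, assuming the same `neighbours`/`evaluate` interface as the hill-climbing sketches; the exponential cooling schedule and its constants are illustrative assumptions, not from the original text.

```python
import math
import random


def simulated_annealing(start, neighbours, evaluate,
                        initial_temp=1.0, cooling=0.995, min_temp=1e-3):
    """Simulated annealing: accept downhill moves with probability exp(delta / T)."""
    current = start
    temperature = initial_temp
    while temperature > min_temp:
        options = list(neighbours(current))
        if not options:
            break
        candidate = random.choice(options)             # pick a random move
        delta = evaluate(candidate) - evaluate(current)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate           # accept uphill always, downhill with p < 1
        temperature *= cooling            # gradually "cool" the system
    return current
```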
