Ainotes Module1
Intelligence: It is knowledge in operation towards a solution – how to do it, how to apply
the solution.
Artificial Intelligence: Artificial intelligence is the study of how to make computers do things
which, at the moment, people do better. It refers to intelligence exhibited by a computer
machine.
One View of AI is
About designing systems that are as intelligent as humans
Computers can be given abilities nearly equal to human intelligence
How a system arrives at a conclusion, i.e., the reasoning behind its selection of actions
How a system acts and performs, not so much its reasoning process.
The AI Problem
There are some of the problems contained within AI.
1. Game Playing and theorem proving share the property that people who do them well
are considered to be displaying intelligence.
2. Another important foray into AI is focused on Commonsense Reasoning. It includes
reasoning about physical objects and their relationships to each other, as well as reasoning
about actions and other consequences.
3. To investigate this sort of reasoning, Newell, Shaw and Simon built the General Problem
Solver (GPS), which they applied to several commonsense tasks as well as to the problem of
performing symbolic manipulations of logical expressions. But no attempt was made to
create a program with a large amount of knowledge about a particular problem domain.
Only quite simple tasks were selected.
4. The figure below shows some of the tasks that are the targets of work in AI.
Perception of the world around us is crucial to our survival. Animals with much less intelligence
than people are capable of more sophisticated visual perception than current machines. Perception
tasks are difficult because they involve analog signals. A person who knows how to perform tasks
from several of the categories shown in the figure learns the necessary skills in a standard order:
First perceptual, linguistic and commonsense skills are learned. Later, expert skills such as
engineering, medicine or finance are acquired.
What is an AI Technique?
Artificial Intelligence problems span a very broad spectrum. They appear to have very little in
common except that they are hard. There are, however, techniques that are appropriate for the
solution of a variety of these problems, and the results of AI research tell us what they are.
Important AI Techniques:
Search: Provides a way of solving problems for which no more direct approach is
available as well as a framework into which any direct techniques that are
available can be embedded.
Use of Knowledge: Provides a way of solving complex problems by exploiting the
structures of the objects that are involved.
Abstraction: Provides a way of separating important features and variations from
the many unimportant ones that would otherwise overwhelm any process.
typing questions and receiving typed responses. The interrogator knows them only as Z and X
and aims to determine which is the person and which is the machine.
The goal of machine is to fool the interrogator into believing that it is the person. If the machine
succeeds we conclude that the machine can think. The machine is allowed to do whatever it can
do to fool the interrogator.
For example, if asked the question “How much is 12,324 times 73,981?”, the machine
could wait several minutes and then respond with a wrong answer, as a person might.
The interrogator receives two sets of responses but does not know which set comes from the human
and which from the computer. After careful examination of the responses, if the interrogator cannot
reliably tell which set has come from the computer and which from the human, then the computer
has passed the Turing Test. The more serious issue is the amount of knowledge that a machine
would need to pass the Turing Test.
We will see the introduction of systems which equal or exceed human abilities, and see them
become an important part of most business and government operations as well as our daily
activities.
Definition of AI: Artificial Intelligence is a branch of computer science concerned with the study
and creation of computer systems that exhibit some form of intelligence such as systems that learn
new concepts and tasks, systems that can understand a natural language or perceive and
comprehend a visual scene, or systems that perform other types of feats that require human types
of intelligence.
AI is not the study and creation of conventional computer systems, nor is it the study of the
mind, the body, and languages as customarily found in the fields of psychology, physiology,
cognitive science, or linguistics.
In AI, the goal is to develop working computer systems that are truly capable of performing tasks
that require high levels of intelligence.
A state space is the set of legal positions; starting at the initial state, we use the set of rules
to move from one state to another, attempting to end up in a goal state.
We must make explicit the previously implicit goal of not only playing a legal game of
chess but also winning the game, if possible.
Production System
The entire procedure for finding a solution to an AI problem can be viewed as a “Production
System”. It provides a way of reaching the desired goal. It is a basic building block which
describes the AI problem and also describes the method of searching for the goal. Its main
components are:
A Set of Rules, each consisting of a left side (a pattern) that determines the applicability of
the rule and a right side that describes the operation to be performed if the rule is applied.
Knowledge Base – It contains whatever information is appropriate for the particular task.
Some parts of the database may be permanent, while other parts may pertain only to
the solution of the current problem.
Control Strategy – It specifies the order in which the rules will be compared to the
database and the way of resolving the conflicts that arise when several rules match at once.
o The first requirement of a good control strategy is that it causes motion; a control
strategy that does not cause motion will never lead to a solution.
o The second requirement of a good control strategy is that it should be systematic.
A rule applier: a production rule has the form
if (condition) then
consequence or action
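The components above can be sketched as a tiny program. This is an illustrative sketch, not from the source: the (condition, action) rule format, the set-of-facts database, and the first-match control strategy are all assumptions made for the example.

```python
# Minimal production system sketch. The database is a set of facts; each rule
# is a (condition, action) pair; the control strategy fires the first rule
# whose condition matches, and repeats until the goal test succeeds.

def run_production_system(database, rules, goal, max_steps=100):
    """Repeatedly fire the first applicable rule until the goal holds."""
    for _ in range(max_steps):
        if goal(database):
            return database
        for condition, action in rules:      # control strategy: first match wins
            if condition(database):
                database = action(database)
                break
        else:
            return None                      # no rule applies: failure
    return None

# Toy task: derive "wet" from "raining" and "outside".
rules = [
    (lambda db: "raining" in db and "outside" in db and "wet" not in db,
     lambda db: db | {"wet"}),
]
result = run_production_system({"raining", "outside"}, rules,
                               lambda db: "wet" in db)
```

A more serious system would replace the first-match loop with the conflict-resolution strategy the notes describe.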
A partially commutative production system is a production system with the property that
if the application of a particular sequence of rules transforms state X into state Y, then any
permutation of those rules that is allowable also transforms state X into state Y.
In a formal sense, there is no relationship between kinds of problems and kinds of production
systems, since all problems can be solved by all kinds of systems. But in a practical sense, there
definitely is such a relationship between kinds of problems and the kinds of systems that lend
themselves naturally to describing those problems.
The following table shows the four categories of production systems produced by the two
dichotomies, monotonic versus non-monotonic and partially commutative versus non-partially
commutative, along with some problems that can naturally be solved by each type of system.

                          Monotonic            Non-monotonic
Partially commutative     Theorem proving      Robot navigation
Not partially commutative Chemical synthesis   Bridge
Partially commutative, monotonic production systems are useful for solving ignorable
problems, i.e., problems that involve creating new things rather than changing old ones.
Theorem proving is one example of such a creative process. Partially commutative,
monotonic production systems are important from an implementation standpoint
because they can be implemented without the ability to backtrack to previous states
when it is discovered that an incorrect path has been followed.
Non-monotonic, partially commutative production systems are useful for problems in
which changes occur but can be reversed. This is usually the case in physical
manipulation problems such as “Robot navigation on a flat plane”; the 8-puzzle and the
blocks world can also be considered such problems. Non-monotonic, partially commutative
production systems are significant from an implementation point of view because they
tend to lead to much duplication of individual states during the search process.
Production systems that are not partially commutative are useful for many problems in
which irreversible changes occur, for example “Chemical Synthesis”.
Non-partially-commutative production systems are less likely to produce the same node many
times in the search process.
Problem Characteristics
In order to choose the most appropriate method (or a combination of methods) for a particular
problem, it is necessary to analyze the problem along several key dimensions:
• Is the problem decomposable?
• Can solution steps be ignored or undone?
• Is the universe predictable?
• Is a good solution absolute or relative?
• Is the solution a state or a path?
• What is the role of knowledge?
• Does the task require human-interaction?
At each step the program checks to see whether the problem it is working on is immediately
solvable. If so, the answer is returned directly. If the problem is not easily solvable, the
integrator checks to see whether it can decompose the problem into smaller problems; it
creates those problems and calls itself recursively on them. Using this technique of problem
decomposition we can often solve very large problems easily.
Now consider the 8-puzzle game. A sample game using the 8-puzzle is shown below:
In attempting to solve the 8-puzzle, we might make a stupid move: for example, we slide tile
5 into the empty space when we actually wanted to slide tile 6. We can backtrack and undo
the first move, sliding tile 5 back to where it was, and then slide tile 6. The mistake can
still be recovered from, but not quite as easily as in the theorem-proving problem: an
additional step must be performed to undo each incorrect step.
Now consider the problem of playing chess. Suppose a chess-playing program makes a stupid
move and realizes it a couple of moves later. Here, solution steps cannot be undone.
The above three problems illustrate difference between three important classes of problems:
1) Ignorable: in which solution steps can be ignored.
Example: Theorem Proving
2) Recoverable: in which solution steps can be undone.
Example: 8-Puzzle
3) Irrecoverable: in which solution steps cannot be undone.
Example: Chess
The recoverability of a problem plays an important role in determining the complexity of the
control structure necessary for problem solution.
Ignorable problems can be solved using a simple control structure that never backtracks.
Recoverable problems can be solved by a slightly more complicated control strategy that can
correct mistakes by backtracking. Irrecoverable problems can be solved by a recoverable style of
methods via planning, which expends a great deal of effort making each decision, since the
decision is final.
In uncertain problems, this planning process may not be possible. Example: playing
Bridge. We cannot know exactly where all the cards are or what the other players will do
on their turns.
We can do fairly well since we have available accurate estimates of the probabilities of each of
the possible outcomes. A few examples of such problems are
Controlling a robot arm: The outcome is uncertain for a variety of reasons. Someone
might move something into the path of the arm. The gears of the arm might stick.
Helping a lawyer decide how to defend his client against a murder charge: here we
probably cannot even list all the possible outcomes, which makes the outcome uncertain.
For certain-outcome problems, planning can be used to generate a sequence of operators that is
guaranteed to lead to a solution.
For uncertain-outcome problems, a sequence of generated operators can only have a good
probability of leading to a solution.
Plan revision is made as the plan is carried out and the necessary feedback is provided.
Since we are interested only in the answer to the question, it does not matter which path we
follow. If we follow one path successfully to the answer, there is no reason to go back and see if
some other path might also lead to a solution. These types of problems are called “Any-path
problems”.
Now consider the Travelling Salesman Problem. Our goal is to find the shortest path route that
visits each city exactly once.
Suppose we find a path; it may not be a solution to the problem, so we must also try other paths.
The shortest (best) path is called the solution to the problem. These types of problems are
known as “Best-path” problems. Best-path problems are computationally harder than any-path
problems.
of finding the interpretation, we need to produce only the interpretation itself. No record of the
processing by which the interpretation was found is necessary. But with the “water-jug” problem
it is not sufficient to report the final state; we have to show the “path” as well.
So the solution of the natural language understanding problem is a state of the world, and the
solution of the “water-jug” problem is a path to a state.
The above two problems illustrate the difference between the problems for which a lot of
knowledge is important only to constrain the search for a solution and those for which a lot of
knowledge is required even to be able to recognize a solution.
Problem Classification
When actual problems are examined from the point of view of all of these questions, it becomes
apparent that there are several broad classes into which the problems fall. Each class can be
associated with a generic control strategy that is appropriate for solving the problems in it. There
is a variety of problem-solving methods, but there is no single way of solving all problems. Not
all new problems should be considered as totally new; solutions to similar problems can be
exploited.
PROBLEMS
Water-Jug Problem
The problem is: “You are given two jugs, a 4-litre one and a 3-litre one. Neither has any
measuring markers on it. There is a pump that can be used to fill the jugs with water. How can
you get exactly 2 litres of water into the 4-litre jug?”
Solution:
The state space for the problem can be described as a set of states, where each state represents
the amount of water in each jug. The game starts with the initial state, described as an
ordered pair of integers:
• State: (x, y)
– x = number of litres in the 4-litre jug
– y = number of litres in the 3-litre jug
x = 0, 1, 2, 3, or 4 y = 0, 1, 2, 3
• Start state: (0, 0), i.e., the 4-litre and 3-litre jugs are both empty initially.
• Goal state: (2, n) for any n; that is, the 4-litre jug has 2 litres of water and the 3-litre jug has
any value from 0-3, since it is not specified.
• Attempting to end up in a goal state.
Production Rules: These rules are used as operators to solve the problem. They are represented
as rules whose left sides match the current state and whose right sides describe the new state
that results from applying the rule.
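The state space above can be searched mechanically. The following is an illustrative sketch, not from the source: the `successors` function encodes the fill/empty/pour rules for states (x, y), and breadth-first search is chosen here only as one possible control strategy.

```python
# Water-jug state space: states (x, y), x in 0..4, y in 0..3.
from collections import deque

def successors(x, y):
    """All states reachable by one fill, empty, or pour operation."""
    return {
        (4, y), (x, 3),                            # fill the 4-litre / 3-litre jug
        (0, y), (x, 0),                            # empty the 4-litre / 3-litre jug
        (min(4, x + y), max(0, y - (4 - x))),      # pour 3-litre into 4-litre
        (max(0, x - (3 - y)), min(3, x + y)),      # pour 4-litre into 3-litre
    }

def solve(start=(0, 0)):
    """Breadth-first search from the start state to any goal state (2, n)."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        state = frontier.popleft()
        if state[0] == 2:                          # goal test: 2 litres in 4-litre jug
            path = []
            while state is not None:               # rebuild the path via parent links
                path.append(state)
                state = parents[state]
            return path[::-1]
        for nxt in successors(*state):
            if nxt not in parents:
                parents[nxt] = state
                frontier.append(nxt)
    return None

path = solve()
```

Because the problem asks for a path, the solver records parent links and returns the whole state sequence, not just the final state.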
Chess Problem
Problem of playing chess can be defined as a problem of moving around in a state space where
each state represents a legal position of the chess board.
The game starts with an initial state described as an 8x8 array in which each position contains a
symbol standing for the appropriate piece in the official chess opening position. A set of rules is
used to move from one state to another, attempting to end up in one of a set of final states,
described as any board position in which the opponent does not have a legal move and his/her
king is under attack.
The state space representation is natural for chess, since each state corresponds to a board
position: it is artificial but well organized.
Production Rules:
These rules are used to move around the state space. They can be described easily as a set of rules
consisting of two parts:
1. A left side that serves as a pattern to be matched against the current board position.
2. A right side that describes the change to be made to the board position to reflect the
move.
To describe these rules it is convenient to introduce a notation for patterns and substitutions.
E.g.:
1. White pawn at Square(file e, rank 2)
AND Square(file e, rank 3) is empty
AND Square(file e, rank 4) is empty
→ Move pawn from Square(file e, rank 2) to Square(file e, rank 4)
8-Puzzle Problem
The 8-Puzzle is a square tray in which 8 square tiles are placed; the remaining 9th
square is uncovered. Each tile has a number on it. A tile that is adjacent to the blank space can
be slid into that space. The goal is to transform the starting position into the goal position by
sliding the tiles around.
Solution:
State Space: The state space for the problem can be written as a set of states, where each state is
a position of the tiles on the tray.
Initial State: A square tray having 3x3 cells, with the 8 numbered tiles shuffled:
2 8 3
1 6 4
7 5
Goal State
1 2 3
8 4
7 6 5
Production Rules: These rules are used to move from the initial state to the goal state. They are
also defined in two parts: the left-side pattern must match the current position, and the right
side gives the resulting position after applying the rule.
Solution:
State Space: The state space for this problem represents states as the set of cities traversed so
far by the salesman, with the salesman starting at any city in the given list. A set of
rules is applied such that the salesman does not traverse a city already visited. The rules lead
to states in which the salesman completes the round trip and returns to his starting
position.
Initial State
Salesman starting at any arbitrary city in the given list of cities
Goal State
Visiting all cities once and only once and reaching his starting state
Production rules:
These rules are used as operators to move from one state to another. Since there is a path
between any pair of cities in the city list, we write the production rules for this problem as
• Visited(city[i]) AND Not Visited(city[j])
– Traverse(city[i],city[j])
• Visited(city[i],city[j]) AND Not Visited(city[k])
– Traverse(city[j],city[k])
• Visited(city[j],city[i]) AND Not Visited(city[k])
– Traverse(city[i],city[k])
• Visited(city[i],city[j],city[k]) AND Not Visited(Nil)
– Traverse(city[k],city[i])
Initial State:
Full(T1) | Empty(T2) | Empty(T3)
Goal State:
Empty(T1) | Full(T2) | Empty (T3)
Production Rules:
These are rules used to reach the Goal State. These rules use the following operations:
POP(T) Remove the top element from tower T and update its top; the removed
element is returned.
PUSH(x, T) Push element x onto tower T and update its top.
Now to solve the problem the production rules can be described as follows:
1. Top(T1)<Top(T2) PUSH(POP(T1),T2)
2. Top(T2)<Top(T1) PUSH(POP(T2),T1)
3. Top(T1)<Top(T3) PUSH(POP(T1),T3)
4. Top(T3)<Top(T1) PUSH(POP(T3),T1)
5. Top(T2)<Top(T3) PUSH(POP(T2),T3)
6. Top(T3)<Top(T2) PUSH(POP(T3),T2)
7. Empty(T1) PUSH(POP(T2),T1)
8. Empty(T1) PUSH(POP(T3),T1)
9. Empty(T2) PUSH(POP(T1),T2)
10. Empty(T3) PUSH(POP(T1),T3)
11. Empty(T2) PUSH(POP(T3),T2)
12. Empty(T3) PUSH(POP(T2),T3)
Solution: Example: 3 disks, 3 towers
1) T1 → T2
2) T1 → T3
3) T2 → T3
4) T1 → T2
5) T3 → T1
6) T3 → T2
7) T1 → T2
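The 7-move sequence above can be generated by the classic recursive decomposition. A sketch, not from the source: tower names "T1", "T2", "T3" and the function name `hanoi` are illustrative.

```python
# Tower of Hanoi by recursion: to move n disks from source to target, first
# clear the top n-1 disks onto the spare, move the largest disk, then
# re-stack the n-1 disks on top of it.

def hanoi(n, source, target, spare, moves=None):
    """Collect the move sequence as (source, target) pairs."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((source, target))               # move the largest disk
        hanoi(n - 1, spare, target, source, moves)   # re-stack on top of it
    return moves

moves = hanoi(3, "T1", "T2", "T3")
```

For 3 disks this yields exactly the 7 moves listed above; in general it takes 2^n − 1 moves.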
Solution: The state space for this problem is a set of states representing the position of the
monkey, the position of the chair, the position of the stick, and two flags recording whether the
monkey is on the chair and whether the monkey holds the stick, so there is a 5-tuple
representation.
(M, C, S, F1, F2)
– M: position of the monkey
– C: position of the chair
– S: position of the stick
– F1: 1 if the monkey is on the chair, 0 otherwise
– F2: 1 if the monkey is holding the stick, 0 otherwise
Production Rules:
These are the rules which provide a path for searching toward the goal state. Here we assume
that when the monkey holds the stick it will swing it; this assumption is necessary to simplify
the representation.
Some of the production rules are:
Solution:
1) (M,C,S,0,0)
2) (C,C,S,0,0)
3) (G,G,S,0,0)
4) (S,G,S,0,0)
5) (G,G,G,0,0)
6) (G,G,G,0,1)
7) (G,G,G,1,1)
Solution:
The state space for the problem contains a set of states which represent the present numbers of
cannibals and missionaries on either bank of the river.
(C, M, C1, M1, B)
– C and M are the numbers of cannibals and missionaries on the starting bank
– C1 and M1 are the numbers of cannibals and missionaries on the destination bank
– B is the position of the boat: either the left bank (L) or the right bank (R)
Production System: These are the operations used to move from one state to another. Since at
any bank the number of cannibals must be less than or equal to the number of missionaries
(whenever missionaries are present), we can write the production rules for this problem as
follows:
C | M | Boat position | C1 | M1
3 | 3 | L | 0 | 0
1 | 3 | R | 2 | 0
2 | 3 | L | 1 | 0
0 | 3 | R | 3 | 0
1 | 3 | L | 2 | 0
1 | 1 | R | 2 | 2
2 | 2 | L | 1 | 1
2 | 0 | R | 1 | 3
3 | 0 | L | 0 | 3
1 | 0 | R | 2 | 3
2 | 0 | L | 1 | 3
0 | 0 | R | 3 | 3
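The crossing sequence above can be found by search over the safety constraint. A sketch, not from the source: states are compressed to (C, M, B) for the starting bank (B = 1 when the boat is there), and `safe`/`solve` are illustrative names.

```python
# Missionaries and cannibals (3 of each, boat holds at most 2), solved by
# breadth-first search over safe states.
from collections import deque

def safe(c, m):
    """Missionaries are never outnumbered on either bank."""
    return (m == 0 or c <= m) and (3 - m == 0 or 3 - c <= 3 - m)

def solve():
    start, goal = (3, 3, 1), (0, 0, 0)
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        c, m, b = path[-1]
        for dc, dm in ((1, 0), (2, 0), (0, 1), (0, 2), (1, 1)):  # boat loads
            nc, nm = (c - dc, m - dm) if b else (c + dc, m + dm)
            nxt = (nc, nm, 1 - b)
            if 0 <= nc <= 3 and 0 <= nm <= 3 and safe(nc, nm) and nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

solution = solve()
```

Breadth-first search returns a shortest plan: 11 crossings, matching the table above.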
Algorithm:
1) Create a variable called NODE_LIST and set it to the initial state.
2) Until a goal state is found or NODE_LIST is empty do:
a. Remove the first element from NODE_LIST and call it E. If NODE_LIST was empty
quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is goal state, quit and return this state
iii. Otherwise add the new state to the end of NODE_LIST
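The NODE_LIST procedure above is breadth-first search. A sketch, not from the source: `expand` and `is_goal` are assumed problem-specific helpers, and a visited set is added to avoid regenerating states.

```python
# Breadth-first search following the algorithm's steps; NODE_LIST is a FIFO queue.
from collections import deque

def breadth_first_search(initial, expand, is_goal):
    node_list = deque([initial])            # step 1: NODE_LIST holds the initial state
    visited = {initial}
    while node_list:                        # step 2: until empty or goal found
        state = node_list.popleft()         # 2a: remove the first element, call it E
        for new_state in expand(state):     # 2b: apply every applicable rule
            if is_goal(new_state):          # 2b-ii: goal reached
                return new_state
            if new_state not in visited:    # 2b-iii: add to the end of NODE_LIST
                visited.add(new_state)
                node_list.append(new_state)
    return None

# Toy usage: find 10 starting from 0, where each state n expands to n+1 and n+2.
found = breadth_first_search(0, lambda n: [n + 1, n + 2], lambda n: n == 10)
```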
Algorithm:
1) If the initial state is the goal state, quit return success.
2) Otherwise, do the following until success or failure is signaled
a. Generate a successor E of the initial state, if there are no more successors, signal
failure
b. Call Depth-First Search with E as the initial state
c. If success is returned, signal success. Otherwise continue in this loop.
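The recursive procedure above can be sketched directly. Illustrative only: `expand` and `is_goal` are assumed helpers, and a depth `limit` is added to guard against infinite descent, which the algorithm as stated does not address.

```python
# Depth-first search: recurse on each successor in turn, signalling success
# as soon as any branch finds the goal.

def depth_first_search(state, expand, is_goal, limit=50):
    if is_goal(state):                       # step 1: initial state is the goal
        return state
    if limit == 0:
        return None                          # depth bound reached (added safeguard)
    for successor in expand(state):          # step 2a: generate a successor E
        found = depth_first_search(successor, expand, is_goal, limit - 1)
        if found is not None:                # step 2c: success is returned
            return found
    return None                              # no more successors: signal failure

found = depth_first_search(0, lambda n: [n + 1], lambda n: n == 5)
```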
The time to examine a single path is proportional to N, so the total time required to perform this
search is proportional to N!.
Another strategy is to begin generating complete paths, keeping track of the shortest path found
so far and abandoning paths whose partial length is already greater than the shortest found. This
method is better than the first, but it is still inadequate.
HEURISTIC SEARCH
Heuristic:
– It is a "rule of thumb" used to help guide search
– It is a technique that improves the efficiency of search process, possibly by
sacrificing claims of completeness.
– It is involving or serving as an aid to learning, discovery, or problem-solving by
experimental and especially trial-and-error methods.
Heuristic Function:
– It is a function applied to a state in a search space to indicate a likelihood of
success if that state is selected
– It is a function that maps from problem state descriptions to measures of
desirability usually represented by numbers
– Heuristic function is problem specific.
The purpose of heuristic function is to guide the search process in the most profitable direction by
suggesting which path to follow first when more than one is available (best promising way).
With a heuristic we can handle the TSP problem in less than exponential time. On average,
heuristics improve the quality of the paths that are explored. The following procedure is used to
solve the TSP problem:
– Select an arbitrary city as the starting city
– To select the next city, look at all cities not yet visited, and select the one closest to
the current city
– Repeat these steps until all cities have been visited
Heuristic search methods, which are general-purpose control strategies for controlling search, are
often known as "weak methods" because of their generality and because they do not apply a
great deal of knowledge.
Weak Methods
a) Generate and Test
b) Hill Climbing
c) Best First Search
d) Problem Reduction
e) Constraint Satisfaction
f) Means-ends analysis
Algorithm:
1. Generate a possible solution. For some problems, this means generating a particular point
in the problem space. For others, it means generating a path from a start state.
2. Test to see if this is actually a solution by comparing the chosen point or the endpoint of
the chosen path to the set of acceptable goal states.
If there exists a solution for a problem then this strategy definitely finds it, because a complete
solution is generated before being tested. So we can say that the Generate-and-Test algorithm is
a Depth-First Search procedure. It will take a long time if the problem space is very large.
Alternatively, the strategy can operate by generating solutions randomly instead of
systematically, but then we cannot guarantee that we will ever get a solution.
To implement generate-and-test we usually use a depth-first tree; if there are cycles, we use a
graph rather than a tree. This is not an efficient technique when the problem is hard; it is
acceptable for simple problems. When it is combined with other techniques that restrict the
search space, it can be effective.
For example, one of the most successful AI programs is DENDRAL, which infers the structure of
organic compounds using mass spectrum and nuclear magnetic resonance data. It uses the
strategy called plan-generate-test, in which a planning process that uses constraint-satisfaction
techniques creates lists of recommended structures. The generate-and-test procedure then
uses those lists so that it explores only a limited set of structures. This approach has proved
highly effective.
Examples:
- Searching a ball in a bowl (Pick a green ball) - State
- Water Jug Problem – State and Path
Hill Climbing
Hill climbing is a variant of the GENERATE and TEST procedure in which feedback from the
test procedure is used to help the generator decide which direction to move in the search space.
In the pure Generate and Test procedure the heuristic function responds only yes or no; here the
heuristic function is extended to give an estimate of how close a given state is to a goal state.
Algorithm:
1. Evaluate the initial state. If it is also goal state then return it, otherwise continue with the
initial states as the current state.
2. Loop until the solution is found or until there are no new operators to be applied in the
current state
a) Select an operator that has not yet been applied to the current state and apply it
to produce new state
b) Evaluate the new state
i. If it is a goal state then return it and quit
ii. If it is not a goal state but it is better than the current state, then make it as
current state
iii. If it is not better than the current state, then continue in loop.
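The algorithm above can be sketched as code. Illustrative only: `operators`, `value` (the heuristic evaluation), and `is_goal` are assumed problem-specific helpers.

```python
# Simple hill climbing: apply operators one at a time and move to the FIRST
# successor that is better than the current state.

def simple_hill_climbing(state, operators, value, is_goal):
    while True:
        if is_goal(state):                        # step 1 / 2b-i
            return state
        for new_state in operators(state):        # 2a: one operator at a time
            if is_goal(new_state):
                return new_state
            if value(new_state) > value(state):   # 2b-ii: first better state wins
                state = new_state
                break                             # make it the current state
        else:
            return state                          # no operators improve: stuck

# Toy usage: climb toward 10 by +/-1 steps, scoring states by closeness to 10.
best = simple_hill_climbing(
    0,
    lambda n: [n - 1, n + 1],
    lambda n: -abs(n - 10),
    lambda n: n == 10,
)
```

Note that on a plateau or at a local maximum the for/else clause fires and the current state is returned, which mirrors the failure modes discussed below.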
The key difference between this algorithm and generate and test algorithm is the use of an
evaluation function as a way to inject task-specific knowledge into the control process.
Algorithm:
1. Evaluate the initial state. If it is also a goal state then return it and quit. Otherwise
continue with the initial state as the current state.
2. Loop until a solution is found or until a complete iteration produces no change to current
state:
a. Let SUCC be a state such that any possible successor of the current state will be
better than SUCC.
b. For each operator that applies to the current state do:
i. Apply the operator and generate a new state.
ii. Evaluate the new state. If it is a goal state, then return it and quit. If not
compare it to SUCC. If it is better, then set SUCC to this state. If it is not
better, leave SUCC alone.
c. IF the SUCC is better than current state, then set current state to SUCC.
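The steepest-ascent variant above examines every operator before moving. A sketch with the same assumed helpers (`operators`, `value`, `is_goal`) as before.

```python
# Steepest-ascent hill climbing: consider ALL successors of the current state
# and move only to the single best one (SUCC), if it is an improvement.

def steepest_ascent_hill_climbing(state, operators, value, is_goal):
    while True:
        if is_goal(state):
            return state
        succ = None                                   # 2a: worse than any successor
        for new_state in operators(state):            # 2b: try every operator
            if is_goal(new_state):
                return new_state
            if succ is None or value(new_state) > value(succ):
                succ = new_state                      # 2b-ii: keep the best so far
        if succ is None or value(succ) <= value(state):
            return state                              # complete iteration, no change
        state = succ                                  # 2c: move to SUCC

best = steepest_ascent_hill_climbing(
    0,
    lambda n: [n - 2, n - 1, n + 1, n + 2],
    lambda n: -abs(n - 9),
    lambda n: n == 9,
)
```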
Both basic and steepest-ascent hill climbing may fail to find a solution. Either algorithm may
terminate not by finding a goal state but by reaching a state from which no better states can be
generated. This will happen if the program has reached a local maximum, a plateau or a ridge.
A ridge is a special kind of local maximum. It is an area of the search space that is higher than
surrounding areas, but that itself has a slope which cannot be climbed by single moves.
There are some ways of dealing with these problems, although these methods are by no means
guaranteed:
Backtrack to some earlier node and try going in a different direction. This is particularly
reasonable if at that node there was another direction that looked as promising or almost
as promising as the one that was chosen earlier. This is a fairly good way to deal with
local maxima.
Make a big jump in some direction to try to get to a new section of the search space. This
is a good way of dealing with plateaus.
Apply two or more rules before doing the test. This corresponds to moving in several
directions at once. This is a good strategy for dealing with ridges.
Simulated Annealing:
A variation of hill climbing in which, at the beginning of the process, some downhill moves may
be made.
In simulated annealing, at the beginning of the process some downhill moves may be made. The
idea is to do enough exploration of the whole space early on so that the final solution is
relatively insensitive to the starting state. By doing so we lower the chances of getting caught at
a local maximum, a plateau or a ridge.
In this process we attempt to minimize rather than maximize the value of the objective function.
Thus this process is one of valley descending, in which the objective function is the energy level.
Physical Annealing
• Physical substances are melted and then gradually cooled until some solid state is reached.
• The goal is to produce a minimal-energy state.
• Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be
attained.
• Nevertheless, there is some probability for a transition to a higher energy state, given by
the function:
p = e^(−ΔE/kT)
where
ΔE is the positive change in the energy level
T is the temperature
k is Boltzmann’s constant
The rate at which the system is cooled is called the annealing schedule. In the analogous search
process the units for both ΔE and T are artificial, so it makes sense to incorporate k into T.
Algorithm:
1. Evaluate the initial state. If it is also a goal state then return and quit. Otherwise continue
with the initial state as a current state.
2. Initialize Best-So-Far to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the
current state:
a. Select an operator that has not yet been applied to the current state and apply it
to produce a new state.
b. Evaluate the new state. Compute
ΔE = Value(new state) − Value(current state)
(i) If the new state is a goal state then return it and quit.
(ii) If it is not a goal state but is better than the current state, then make it the
current state. Also set BEST-SO-FAR to this new state.
(iii) If it is not better than the current state, then make it the current state with
probability p′ = e^(−ΔE/T) as defined above. This step is usually implemented by
invoking a random number generator to produce a number in the range
[0,1]. If that number is less than p′ then the move is accepted; otherwise
do nothing.
c. Revise T as necessary according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.
Note:
At each step we compare the successor with the current state. If the successor is better, the
move is accepted; if it is worse, it is accepted only with the annealing probability, and otherwise
the move is rejected and we search in another direction.
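The annealing loop above can be sketched as follows. Illustrative only: here `value` scores desirability (higher is better), so ΔE = value(current) − value(new) is positive for a worse move; the problem, operators, and cooling schedule are made up for the example.

```python
# Simulated annealing sketch: better moves are always taken; worse moves are
# taken with probability e^(-dE/T), where T falls along the annealing schedule.
import math
import random

def simulated_annealing(state, operators, value, schedule, seed=0):
    rng = random.Random(seed)                     # fixed seed for repeatability
    best_so_far = state                           # step 2
    for t in schedule:                            # steps 3 and 4c: T follows the schedule
        for new_state in operators(state, rng):   # step 4a
            delta_e = value(state) - value(new_state)
            if delta_e < 0:                       # better state: always accept
                state = new_state
                if value(state) > value(best_so_far):
                    best_so_far = state
            elif rng.random() < math.exp(-delta_e / t):
                state = new_state                 # worse state: accept with p'
    return best_so_far                            # step 5: return BEST-SO-FAR

# Toy one-dimensional landscape with its maximum at n = 5.
best = simulated_annealing(
    0,
    lambda n, rng: [n + rng.choice((-1, 1))],
    lambda n: -abs(n - 5),
    [10.0, 5.0, 2.0, 1.0, 0.5, 0.2, 0.1] * 10,
)
```

At high temperatures nearly all moves are accepted (exploration); as T shrinks, the loop degenerates into ordinary hill climbing.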
Best-First Search
Best-First Search (BFS) is a way of combining the advantages of both depth-first search and
breadth-first search into a single method: follow a single path at a time, but switch paths
whenever some competing path looks more promising than the current one does.
The process is to select the most promising of the nodes we have generated so far. We then
expand the chosen node by using the rules to generate its successors. If one of them is a solution,
we can quit; else we repeat the process until we reach a goal.
In BFS, one move is selected, but the others are kept around so that they can be revisited later if
the selected path becomes less promising. This is not the case in steepest-ascent hill climbing.
OR Graphs
A graph is called an OR graph, since each of its branches represents an alternative
problem-solving path.
Algorithm:
1. Start with OPEN containing just the initial state
2. Until a goal is found or there are no nodes left on OPEN do:
a. Pick the best node on OPEN
b. Generate its successors
c. For each successor do:
i. If it has not been generated before, evaluate it, add it to OPEN and record
its parent.
ii. If it has been generated before, change the parent if this new path is better
than the previous one. In that case update the cost of getting to this node
and to any successors that this node may already have.
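The OPEN-list procedure above can be sketched with a priority queue ordered by the heuristic value. Illustrative only: `expand` and `h` are assumed problem-specific helpers, and the parent-update case (step c-ii) is omitted for brevity.

```python
# Best-first search: OPEN is a priority queue keyed by the heuristic h, so
# "pick the best node" is a single heap pop.
import heapq

def best_first_search(initial, expand, h, is_goal):
    open_list = [(h(initial), initial)]          # step 1: OPEN holds the initial state
    parents = {initial: None}
    while open_list:                             # step 2
        _, state = heapq.heappop(open_list)      # 2a: pick the best node on OPEN
        if is_goal(state):
            return state
        for successor in expand(state):          # 2b: generate its successors
            if successor not in parents:         # 2c-i: not generated before
                parents[successor] = state       # record its parent
                heapq.heappush(open_list, (h(successor), successor))
    return None

# Toy usage: reach 7 from 0; h is the distance remaining to the goal.
found = best_first_search(0, lambda n: [n - 1, n + 1], lambda n: abs(n - 7),
                          lambda n: n == 7)
```

Replacing the key h with g + h′ turns this sketch into the A* algorithm described next.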
The trace below writes each OPEN entry as node(parent):
Step 1: A(NIL)
Step 2: A(NIL), B(A), C(A), D(A)
Step 3: A(NIL), B(A), C(A), D(A), E(D), F(D)
Step 4: A(NIL), B(A), C(A), D(A), E(D), F(D), G(B), H(B)
Step 5: A(NIL), B(A), C(A), D(A), E(D), F(D), G(B), H(B), I(E), J(E)
The Element with the low cost is the first element. The new states are added according to the
cost value.
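The best-first loop above can be sketched with a priority queue ordered by the heuristic estimate. The graph and h values below are hypothetical, chosen only to mirror the trace above, and the re-parenting step (c.ii) is omitted for brevity:

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Best-first search over an OR graph.

    OPEN is a priority queue ordered by the heuristic estimate h;
    `parent` records the back-pointers used to rebuild the path.
    The re-parenting step (c.ii of the algorithm) is omitted.
    """
    open_list = [(h(start), start)]
    parent = {start: None}
    while open_list:
        _, node = heapq.heappop(open_list)      # best node on OPEN
        if node == goal:
            path = []
            while node is not None:             # follow parent links back
                path.append(node)
                node = parent[node]
            return path[::-1]
        for succ in successors(node):
            if succ not in parent:              # not generated before
                parent[succ] = node
                heapq.heappush(open_list, (h(succ), succ))
    return None

# Hypothetical graph and h values chosen to mirror the trace above.
graph = {'A': ['B', 'C', 'D'], 'D': ['E', 'F'], 'B': ['G', 'H'], 'E': ['I', 'J']}
h = {'A': 6, 'B': 3, 'C': 7, 'D': 2, 'E': 4, 'F': 6, 'G': 8, 'H': 9, 'I': 0, 'J': 5}
path = best_first_search('A', 'I', lambda n: graph.get(n, []), h.get)  # ['A', 'D', 'E', 'I']
```

The expansion order (A, then D, then B, then E) follows the trace: at each iteration the node with the lowest estimate on OPEN is expanded next.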
A* Algorithm:
The A* algorithm is a best-first graph search algorithm that finds a least-cost path from a given
initial node to a goal node. It is a refinement of best-first search.
The algorithm uses the functions f', g and h', as well as the lists OPEN and CLOSED.
For many applications, it is convenient to define the function f' as the sum of two components
that we call g and h'.
• g:
– Measures the cost of getting from the initial state to the current node.
– It is not an estimate; it is the known, exact sum of the costs along the path so far.
• h' :
– Is an estimate of the additional cost of getting from the current node to a goal state.
Algorithm:
1) Start with OPEN containing only the initial node. Set that node's g value to 0, its h' value
to whatever it is, and its f' value to h' + 0, or h'. Set CLOSED to the empty list.
2) Until a goal node is found, repeat the following procedure: if there are no nodes on
OPEN, report failure. Otherwise, pick the node on OPEN with the lowest f' value. Call it
BESTNODE. Remove it from OPEN and place it on CLOSED. If BESTNODE is a goal node, exit
and report a solution. Otherwise, generate the successors of BESTNODE. For each
SUCCESSOR, do the following:
a) Set SUCCESSOR to point back to BESTNODE; these backward links will make it possible to
recover the path once a solution is found.
b) Compute
g(SUCCESSOR) = g(BESTNODE) + cost(BESTNODE, SUCCESSOR)
c) If SUCCESSOR already exists on OPEN, call that node OLD, and we must decide
whether OLD's parent link should be reset to point to BESTNODE (this case arises only
in graphs).
If the old path to OLD is cheaper, we need do nothing. If the new path through
BESTNODE is cheaper, reset OLD's parent link to point to BESTNODE, record the
new cheaper path in g(OLD) and update f'(OLD).
d) If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED
OLD and add OLD to the list of BESTNODE's successors. Check whether the new path
is cheaper, exactly as in step c, and if so record it.
Then, to propagate the new cost downward, do a depth-first traversal of the tree
starting at OLD, changing each node's g value (and thus also its f' value),
terminating each branch when you reach either a node with no successors or a
node for which an equivalent or better path has already been found.
e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and
add it to the list of BESTNODE's successors. Compute
f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR)
The A* algorithm is often used to search for the lowest-cost path from a start to a goal location in
a visibility graph or quadtree. The algorithm solves problems such as the 8-puzzle and the
missionaries and cannibals problem.
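A minimal A* sketch, assuming `successors(n)` yields (neighbor, edge-cost) pairs; the tiny weighted graph and heuristic below are made-up examples. Instead of explicitly re-parenting OLD nodes (steps c and d), this version pushes a duplicate OPEN entry with the cheaper f' and skips stale entries via the CLOSED set:

```python
import heapq

def a_star(start, goal, successors, h):
    """A* search with f'(n) = g(n) + h'(n).

    `successors(n)` yields (neighbor, edge_cost) pairs.  g holds the
    exact cost from the start; a cheaper path to a known node simply
    pushes a new OPEN entry, and stale duplicates are skipped via CLOSED.
    """
    g = {start: 0}
    parent = {start: None}
    open_list = [(h(start), start)]             # entries are (f', node)
    closed = set()
    while open_list:
        _, node = heapq.heappop(open_list)
        if node == goal:
            path = []
            while node is not None:             # rebuild path via back-links
                path.append(node)
                node = parent[node]
            return path[::-1], g[goal]
        if node in closed:                      # stale duplicate entry
            continue
        closed.add(node)
        for succ, cost in successors(node):
            new_g = g[node] + cost
            if succ not in g or new_g < g[succ]:  # cheaper path found
                g[succ] = new_g
                parent[succ] = node
                heapq.heappush(open_list, (new_g + h(succ), succ))
    return None, float('inf')

# Hypothetical weighted graph with an admissible heuristic.
edges = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 3)]}
h = {'S': 5, 'A': 4, 'B': 2, 'G': 0}
path, cost = a_star('S', 'G', lambda n: edges.get(n, []), h.get)
```

Here the direct arc A to G (cost 12) is rejected in favour of the cheaper path through B, illustrating how g keeps the search honest about exact costs while h' only guides the ordering.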
Problem Reduction:
Problem reduction is about planning how best to solve a problem that can be recursively
decomposed into sub-problems in multiple ways.
There can be more than one decomposition of the same problem. We have to decide
which is the best way to decompose the problem so that the total solution, or the cost of the
solution, is good.
Examples:
o Matrix Multiplication
o Towers of Hanoi
o Blocks World Problem
o Theorem Proving
Formulations: (AND/OR Graphs)
o An OR node represents a choice between possible decompositions.
o An AND node represents a given decomposition.
The AND-OR graph (or tree) is useful for representing the solution of problems that can be
solved by decomposing them into a set of smaller problems, all of which must then be solved.
This decomposition, or reduction, generates arcs that we call AND arcs.
One AND arc may point to any number of successor nodes, all of which must be solved in order
for the arc to point to a solution. Just as in an OR graph, several arcs may emerge from a single
node, indicating a variety of ways in which the original problem might be solved.
In order to find solutions in an AND-OR graph, we need an algorithm similar to best-first search
but with the ability to handle the AND arcs appropriately.
To see why our Best-First search is not adequate for searching AND-OR graphs, consider Fig (a).
– The top node A has been expanded, producing 2 arcs, one leading to B and one leading
to C and D. The numbers at each node represent the value of f' at that node.
– We assume for simplicity that every operation has a uniform cost, so each arc with a
single successor has a cost of 1 and each AND arc with multiple successors has a cost of 1
for each of its components.
– If we look just at the nodes and choose for expansion the one with the lowest f' value,
we must select C. It would be better to explore the path going through B since to use C
we must also use D, for a total cost of 9 (C+D+2) compared to the cost of 6 that we get
through B.
– The choice of which node to expand next must depend not only on the f' value of that
node but also on whether that node is part of the current best path from the initial node.
In order to describe an algorithm for searching an AND-OR graph we need to exploit a value
that we call FUTILITY. If the estimated cost of a solution becomes greater than the value of
FUTILITY, then we abandon the search. FUTILITY should be chosen to correspond to a threshold
such that any solution with a cost above it is too expensive to be practical, even if it could ever be
found.
Algorithm:
1. Initialize the graph to the starting node.
2. Loop until the starting node is labeled SOLVED or until its cost goes above FUTILITY:
a. Traverse the graph, starting at the initial node following the current best path and
accumulate the set of nodes that are on that path and have not yet been
expanded or labeled solved.
b. Pick one of those unexpanded nodes and expand it. If there are no successors,
assign FUTILITY as the value of this node. Otherwise, add its successors to the
graph and for each of them compute f' (use only h' and ignore g). If f' of any node is
0, mark that node as SOLVED.
c. Change the f' estimate of the newly expanded node to reflect the new
information provided by its successors. Propagate this change backward through
the graph. If any node contains a successor arc whose descendants are all solved, label
the node itself SOLVED. At each node that is visited while going up the graph,
decide which of its successor arcs is the most promising and mark it as part of the
current best path. This may cause the current best path to change. This
propagation of revised cost estimates back up the tree was not necessary in the
best-first search algorithm because only unexpanded nodes were examined. But
now expanded nodes must be re-examined so that the best current path can be
selected. Thus it is important that their f' values be the best estimates available.
At Step 1, A is the only node, so it is at the end of the current best path. It is expanded,
yielding nodes B, C and D. The arc to D is labeled as the most promising one emerging from
A, since it costs 6 compared to B and C, which costs 9.
In Step 2, node D is chosen for expansion. This process produces one new arc, the AND arc
to E and F, with a combined cost estimate of 10. So we update the f' value of D to 10.
We see that the AND arc B-C is better than the arc to D, so it is labeled as the current best
path. At Step 3, we traverse that arc from A and discover the unexpanded nodes B and C. If
we are going to find a solution along this path, we will have to expand both B and C
eventually. So we explore B first.
This generates two new arcs, to G and to H. Propagating their f' values backward,
we update the f' of B to 6. This requires updating the cost of the AND arc B-C to 12 (6+4+2). Now
the arc to D is again the better path from A, so we record that as the current best path, and
either node E or F will be chosen for expansion at Step 4.
This process continues until either a solution is found or all paths have led to dead ends,
indicating that there is no solution.
Limitations
1. A longer path may be better
Suppose the nodes in Fig (a) have been generated. Now suppose that node J is expanded at
the next step and that one of its successors is node E, producing the graph shown in Fig (b).
The new path to E is longer than the previous path to E going through C. But since the path
through C will lead to a solution only if there is also a solution to D, which there is not,
the path through J is better.
While solving any problem, do not try to traverse nodes that are already labeled
SOLVED, because an implementation may get stuck in a loop.
2. Interacting Sub-goals
Another limitation is that the algorithm fails to take into account
any interaction between sub-goals. Assume in the figure that both
node C and node E ultimately lead to a solution; our algorithm
will report a complete solution that includes both of them. The
AND-OR graph states that for A to be solved, both C and D
must be solved. But the algorithm considers the solution of D as a completely separate
process from the solution of C.
While moving to the goal state, keep track of all the sub-goals and check which
combination of them gives an optimal cost.
AO* Algorithm:
The AO* algorithm is a generalized algorithm which will always find a minimum-cost solution. It is
used for solving cyclic AND-OR graphs. AO* uses a single structure, GRAPH, representing
the part of the search graph that has been explicitly generated so far. Each node in the graph
points both down to its immediate successors and up to its immediate predecessors. A top-down
traversal of the best-known path guarantees that only nodes on the best path
will ever be considered for expansion, so h' serves as the estimate of the goodness of a node.
Algorithm (1):
1) Initialize: Set G* = {s}, f(s) = h(s).
   If s ∈ T, label s as SOLVED, where T is the set of terminal nodes.
2) Terminate: If s is labeled SOLVED, then terminate.
3) Select: Select a non-terminal leaf node n from the marked sub-tree.
4) Expand: Make explicit the successors of n.
   For each new successor m: set f(m) = h(m).
   If m is terminal, label m as SOLVED.
5) Cost revision: For each node m whose cost is to be revised:
   If m is an AND node with successors r1, r2, …, rk:
      Set f(m) = Σ [ f(ri) + c(m, ri) ]
      Mark the edge to each successor of m. If every successor is labeled SOLVED, then label
      m as SOLVED.
   If m is an OR node with successors r1, r2, …, rk:
      Set f(m) = min { f(ri) + c(m, ri) }
      Mark the edge to the best successor of m. If that successor is labeled SOLVED, then label
      m as SOLVED.
6) If the cost or label of m has changed, then insert into Z those parents of m for which m is
   a marked successor.
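The two cost-revision formulas (a sum over an AND arc, a minimum over OR arcs) can be illustrated with a small helper; representing the successors as (f_value, edge_cost) pairs is an assumption made for illustration:

```python
def revised_cost(node_type, h_m, successors):
    """Cost revision for a single node m in an AND-OR graph.

    `successors` is a list of (f_value, edge_cost) pairs for r1..rk.
    AND node: f(m) = sum of f(ri) + c(m, ri) over all successors.
    OR node:  f(m) = min of f(ri) + c(m, ri) over the successors.
    A leaf with no successors keeps its heuristic estimate h(m).
    """
    if not successors:
        return h_m
    costs = [f + c for f, c in successors]
    return sum(costs) if node_type == 'AND' else min(costs)

# With uniform edge cost 1, as in the earlier example: an AND arc to two
# successors with f' = 4 each gives (4+1) + (4+1) = 10.
```

This is the computation the algorithm repeats bottom-up while propagating revised estimates toward the start node.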
Means-Ends Analysis:
One general-purpose technique used in AI is means-end analysis, a step-by-step, or incremental,
reduction of the difference between the current state and the final goal. The program selects
actions from a list of means—in the case of a simple robot this might consist of PICKUP,
PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is
reached. This means we could solve major parts of a problem first and then return to smaller
problems when assembling the final solution.
Usually, we use search strategies that can reason either forward or backward. Often, however, a
mixture of the two directions is appropriate. Such a mixed strategy would make it possible to solve
the major parts of a problem first and then to go back and solve the smaller problems that arise
when combining the parts together. Such a technique is called "Means-Ends Analysis".
This process centers on detecting differences between the current state and the goal state. Once
such a difference is found, we must find an operator that reduces it. But the operator may not be
applicable to the current state; in that case we set up a sub-problem of getting to a state in which
it can be applied. And if the operator does not produce exactly the goal state we want, we set up
a second sub-problem of getting from the state it does produce to the goal. If the difference was
chosen correctly and the operator is effective, the two sub-problems should be easier to solve
than the original problem.
The means-ends analysis process can be applied recursively to these sub-problems. In order to
focus the system's attention on the big problems first, the differences can be assigned priority
levels; higher-priority differences are considered before lower-priority ones.
Like the other problem-solving techniques, means-ends analysis relies on a set of rules that can
transform one problem state into another. These rules are not represented with complete state
descriptions. Instead, each rule is represented as a left side that describes the conditions that must
be met for the rule to be applicable, and a right side that describes those aspects of the problem
state that will be changed by applying the rule.
Consider the simple HOLD ROBOT DOMAIN. The available operators are as follows:
OPERATOR        PRECONDITIONS                                    RESULTS
PUSH(obj,loc)   at(robot,obj) ^ large(obj) ^ clear(obj)          at(obj,loc) ^ at(robot,loc)
                ^ armempty
CARRY(obj,loc)  at(robot,obj) ^ small(obj)                       at(obj,loc) ^ at(robot,loc)
Difference Table
PUSH has four preconditions. Two of them produce differences between the start and goal states;
since the desk is already large, that precondition creates no difference. The robot can be brought to
the right location by using WALK, and the surface can be cleared by two uses of PICKUP. But after
one PICKUP, the second attempt produces another difference: the arm must be empty. PUTDOWN
can be used to reduce that difference.
Once PUSH is performed, the problem state is close to the goal state, but not quite: the objects
must be placed back on the desk. PLACE will put them there, but it cannot be applied
immediately; another difference must be eliminated first, since the robot is still holding the objects.
Then we find the progress shown above. The final difference between C and E can be reduced
by using WALK to get the robot back to the objects, followed by PICKUP and CARRY.
Algorithm:
1. Until the goal is reached or no more procedures are available:
– Describe the current state, the goal state and the differences between the two.
– Use the differences to select a procedure that will hopefully get us nearer to the goal.
– Apply the procedure and update the current state.
2. If the goal is reached, report success; otherwise report failure.
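The loop above can be sketched as follows. States are modelled here as sets of facts and operators as (preconditions, add, delete) triples; this is a simplification that skips the recursive sub-problem of achieving an unmet precondition, and the example operators are hypothetical, only loosely modelled on the robot domain above:

```python
def means_ends(current, goal, operators):
    """Means-ends analysis loop over the steps listed above.

    States are sets of facts; `operators` maps a name to a
    (preconditions, add_facts, delete_facts) triple.  Only operators
    whose preconditions already hold are applied -- the recursive
    sub-problem of achieving an unmet precondition is omitted.
    """
    plan = []
    while not goal <= current:                       # differences remain
        difference = goal - current
        for name, (pre, add, delete) in operators.items():
            if pre <= current and add & difference:  # applicable and relevant
                current = (current - delete) | add
                plan.append(name)
                break
        else:
            return None          # no procedure reduces the difference
    return plan

# Hypothetical operators, loosely modelled on the robot domain above.
ops = {
    'WALK':   (set(),              {'at(robot,desk)'}, set()),
    'PICKUP': ({'at(robot,desk)'}, {'holding(obj)'},   set()),
}
plan = means_ends(set(), {'at(robot,desk)', 'holding(obj)'}, ops)  # ['WALK', 'PICKUP']
```

Note how WALK is chosen first because it reduces the location difference, which in turn makes PICKUP's precondition hold.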
Constraint Satisfaction
The search procedure operates in a space of constraint sets. The initial state contains the original
constraints given in the problem description.
A goal state is any state that has been constrained "enough". In cryptarithmetic, "enough"
means that each letter has been assigned a unique numeric value.
Constraint satisfaction is a 2-step process:
o Constraints are discovered and propagated as far as possible.
o If there is still not a solution, then search begins: a guess about something is made
and added as a new constraint.
Applying constraint satisfaction in a particular problem domain requires the use of two
kinds of rules:
o Rules that define valid constraint propagation
o Rules that suggest guesses when necessary
Goal State:
We have to assign a unique digit to each of the letters specified above.
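The two-step process (propagate constraints, then guess) can be illustrated in miniature with a brute-force cryptarithmetic solver. TO + GO = OUT is a hypothetical example puzzle, not the one from the notes, and this sketch does only the guessing step; a real solver would propagate column-arithmetic constraints first to prune the guesses:

```python
from itertools import permutations

def solve_cryptarithmetic(words, result):
    """Assign a unique digit to each letter so that sum(words) == result.

    Pure guess-and-test over digit permutations; a practical solver
    would first propagate column constraints to narrow the choices.
    """
    letters = sorted(set(''.join(words) + result))
    if len(letters) > 10:
        return None                              # more letters than digits
    leading = {w[0] for w in words + [result]}   # leading letters can't be 0
    for digits in permutations(range(10), len(letters)):
        assign = dict(zip(letters, digits))
        if any(assign[l] == 0 for l in leading):
            continue
        value = lambda w: int(''.join(str(assign[c]) for c in w))
        if sum(value(w) for w in words) == value(result):
            return assign
    return None

solution = solve_cryptarithmetic(['TO', 'GO'], 'OUT')  # e.g. 21 + 81 = 102
```

Even here a little propagation is visible: the leading-letter constraint rules out many permutations before any arithmetic is done.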