Chapter 3-AI
ARTIFICIAL INTELLIGENCE
Introduction
Well defined problems and solutions
Solving Problems by Searching
Problem Solving Agents
Problem Formulation
Search Strategies
Avoiding Repeated States
Constraint Satisfaction Problem
INTRODUCTION
Intelligent agents are supposed to maximize their performance measure; achieving this is
sometimes simplified if the agent can adopt a goal and aim to satisfy it.
Goals help organize behavior by limiting the objectives the agent is trying to achieve.
We consider a goal to be a set of world states – exactly those states in which the goal is
satisfied.
The process of looking for a sequence of actions that reaches the goal is called search.
A search algorithm takes a problem as input and returns a solution in the form of an action
sequence.
A goal together with a set of means for achieving it constitutes a problem.
WELL-DEFINED PROBLEMS AND SOLUTIONS
4. The goal test, which determines whether a given state is a goal state.
Frequently the goal test is intuitive (e.g. checking whether we have arrived at the destination), but it can also be specified by an abstract property (e.g. "checkmate" in chess).
5. A path cost function that assigns a numeric cost to each path.
The problem solving agent chooses a cost function that reflects its own performance
measure.
Commonly (but not always), the cost of a path is additive: it is the sum of the costs of the individual actions along the path. We denote the step cost of taking action a in state s and arriving in state s' by c(s, a, s').
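The components above can be sketched as a minimal problem class. This is an illustrative sketch, not an implementation from any particular library; the class and method names are assumptions.

```python
# A minimal sketch of a search problem definition (illustrative names).
# goal_test and path_cost correspond to components 4 and 5 above.

class Problem:
    def __init__(self, initial, goal_states):
        self.initial = initial
        self.goal_states = goal_states

    def goal_test(self, state):
        # Component 4: is the given state a goal state?
        return state in self.goal_states

    def step_cost(self, s, a, s2):
        # Component 5: the cost c(s, a, s') of taking action a in s,
        # arriving in s'. Here every step costs 1; a concrete problem
        # would override this.
        return 1

    def path_cost(self, path):
        # Additive path cost: the sum of the step costs along the path,
        # where path is a list of (state, action, next_state) triples.
        return sum(self.step_cost(s, a, s2) for (s, a, s2) in path)
```

For example, a route-finding problem from Arad with goal Bucharest would make `goal_test("Bucharest")` true and sum the step costs of each leg of the drive.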
PROBLEM-SOLVING AGENTS
A goal is a set of world states: exactly those states in which the goal is satisfied.
Actions cause transitions between world states, so the agent has to find out which actions will get it to a goal state.
Before it can do this, it needs to decide what sorts of actions and states to consider.
Problem formulation is the process of deciding what actions and states to consider, and follows
goal formulation.
An agent with several immediate options of unknown value can decide what to do by first
examining different possible sequences of actions that lead to states of known value, and then
choosing the best one.
This process of looking for such a sequence is called search.
Once a solution is found, the actions it recommends can be carried out.
This is called the execution phase.
Our agent has now adopted the goal of driving to Bucharest, and is considering which town to
drive to from Arad.
In the vacuum world, the goal is to clean up all the dirt; that is, the goal is equivalent to the state set {7, 8}.
For example, if the agent's initial state is 5, then it can calculate that the action sequence [Right, Suck] will reach a goal state.
This is the simplest case, which we call a single-state problem.
PROBLEM FORMULATION
Multiple-state problem
Suppose that the agent knows all the effects of its actions but has limited access to the world state; in the extreme case, it may have no sensors at all.
In that case, it knows only that its initial state is one of the set {1,2,3,4,5,6,7,8}.
One might suppose that the agent's predicament is hopeless, but in fact it can do quite well.
Because it knows what its actions do, it can, for example, calculate that the action Right will
cause it to be in one of the states {2,4,6,8}.
In fact, the agent can discover that the action sequence [Right, Suck, Left, Suck] is guaranteed
to reach a goal state no matter what the start state.
To summarize: when the world is not fully accessible, the agent must reason about sets of
states that it might get to, rather than single states.
We call this a multiple-state problem.
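This belief-state reasoning can be sketched directly. The encoding below, a physical state as (location, dirt on left square, dirt on right square), is an assumption chosen to match the usual vacuum-world numbering; with no sensors the agent tracks the set of states it might be in.

```python
# A sketch of sensorless (belief-state) reasoning in the vacuum world.
# A physical state is (location, dirt_left, dirt_right); the agent,
# having no sensors, tracks the SET of states it might be in.

def apply(action, state):
    loc, dirt_left, dirt_right = state
    if action == 'Left':
        return ('L', dirt_left, dirt_right)
    if action == 'Right':
        return ('R', dirt_left, dirt_right)
    # 'Suck' removes the dirt at the agent's current location.
    if loc == 'L':
        return (loc, False, dirt_right)
    return (loc, dirt_left, False)

def predict(action, belief):
    # The next belief state: every state the action could lead to.
    return {apply(action, s) for s in belief}

# Start knowing nothing: any of the 8 physical states is possible.
belief = {(loc, dl, dr)
          for loc in 'LR' for dl in (True, False) for dr in (True, False)}
for action in ['Right', 'Suck', 'Left', 'Suck']:
    belief = predict(action, belief)
# The belief state collapses to a single clean state, so the sequence
# reaches the goal no matter what the start state was.
```

Running the loop shrinks the eight-state belief set step by step, confirming that [Right, Suck, Left, Suck] is guaranteed to succeed.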
Contingency problem
The agent does have a way to solve the problem starting from one of {1, 3}: first suck, then move right, then suck only if there is dirt there.
Thus, solving this problem requires sensing during the execution phase.
Notice that the agent must now calculate a whole tree of actions, rather than a single action
sequence.
In general, each branch of the tree deals with a possible contingency that might arise.
For this reason, we call this a contingency problem.
Many problems in the real, physical world are contingency problems, because exact prediction
is impossible.
For this reason, many people keep their eyes open while walking around or driving.
Exploration problems
Consider an agent that has no information about the effects of its actions.
This is somewhat equivalent to being lost in a strange country with no map at all, and is the
hardest task faced by an intelligent agent.
The agent must experiment, gradually discovering what its actions do and what sorts of states
exist.
This is a kind of search, but a search in the real world rather than in a model thereof.
Taking a step in the real world, rather than in a model, may involve significant danger for an
ignorant agent.
If it survives, the agent learns a "map" of the environment, which it can then use to solve
subsequent problems.
SEARCH STRATEGIES
The majority of work in the area of search has gone into finding the right search strategy for a
problem.
We will evaluate strategies in terms of four criteria:
Completeness: is the strategy guaranteed to find a solution when there is one?
Time complexity: how long does it take to find a solution?
Space complexity: how much memory does it need to perform the search?
Optimality: does the strategy find the highest-quality solution when there are several different
solutions?
Breadth-first search
Breadth-first search is a simple search strategy.
The root node is expanded first, then all the nodes generated by the root node are expanded
next, and then their successors, and so on.
All the nodes at depth d in the search tree are expanded before the nodes at depth d + 1.
If there is a solution, breadth-first search is guaranteed to find it, and if there are several
solutions, breadth-first search will always find the shallowest goal state first.
In terms of the four criteria, breadth-first search is complete, and it is optimal provided the path cost is a nondecreasing function of the depth of the node. (This condition is usually satisfied only when all operators have the same cost.)
The memory requirements are a bigger problem for breadth-first search than the execution
time.
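A minimal sketch of breadth-first search over an explicit graph, assuming a successors function that returns a state's neighbors; the FIFO queue is what makes all nodes at depth d expand before any node at depth d + 1.

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Expand the shallowest unexpanded node first; return the state
    sequence of the shallowest solution, or None if there is none."""
    frontier = deque([[start]])   # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for s2 in successors(state):
            if s2 not in explored:
                explored.add(s2)
                frontier.append(path + [s2])
    return None
```

On a small graph where D can be reached via B in two steps or via a longer route, the FIFO order guarantees the two-step path is found first; the `explored` set is what makes the memory cost (and the speed) depend on the number of distinct states generated.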
Uniform-cost search
Because the path to A is cheapest, it is expanded next, generating the path SAG, which is in
fact a solution, though not the optimal one.
However, the algorithm does not yet recognize this as a solution, because it has cost 11, and
thus is buried in the queue below the path SB, which has cost 5.
The next step is to expand SB, generating SBG, which is now the cheapest path remaining in
the queue, so it is goal-checked and returned as the solution.
Uniform cost search finds the cheapest solution provided a simple requirement is met: the cost
of a path must never decrease as we go along the path.
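Uniform-cost search can be sketched with a priority queue ordered by path cost. The graph below is an assumption chosen to reproduce the S, A, B, G example above (S to A costs 1, S to B costs 5, A to G costs 10, B to G costs 5, so SAG costs 11 while SBG costs 10).

```python
import heapq

def uniform_cost_search(start, goal_test, successors):
    """successors(state) yields (step_cost, next_state) pairs.
    Expands the cheapest path first; goal-checking at expansion time
    (not at generation time) is what makes the result optimal."""
    frontier = [(0, [start])]     # priority queue ordered by path cost
    best = {start: 0}             # cheapest known cost to each state
    while frontier:
        cost, path = heapq.heappop(frontier)
        state = path[-1]
        if goal_test(state):
            return cost, path
        for step, s2 in successors(state):
            new_cost = cost + step
            if s2 not in best or new_cost < best[s2]:
                best[s2] = new_cost
                heapq.heappush(frontier, (new_cost, path + [s2]))
    return None

graph = {'S': [(1, 'A'), (5, 'B')], 'A': [(10, 'G')],
         'B': [(5, 'G')], 'G': []}
cost, path = uniform_cost_search('S', lambda s: s == 'G',
                                 lambda s: graph[s])
# SAG (cost 11) sits in the queue until the cheaper SBG (cost 10)
# is expanded and returned, just as described above.
```

Because the queue pops the cheapest path first, the suboptimal path SAG is generated but never returned; this is exactly why the path cost must never decrease along a path.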
Depth-first search
Depth-first search always expands one of the nodes at the deepest level of the tree.
Only when the search hits a dead end (a non-goal node with no expansion) does the search go back and expand nodes at shallower levels.
Depth-first search has very modest memory requirements.
For problems that have very many solutions, depth-first may actually be faster than breadth-
first.
The drawback of depth-first search is that it can get stuck going down the wrong path.
Many problems have very deep or even infinite search trees, so depth-first search will never be
able to recover from an unlucky choice at one of the nodes near the top of the tree.
Depth-first search needs to store only a single path from the root to a leaf node, along with the remaining unexpanded sibling nodes for each node on the path.
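An iterative sketch of depth-first search using an explicit stack; the stack holds exactly the kind of frontier described above, and the cycle check on the current path is an added safeguard (without it, a graph with cycles would trap the search forever).

```python
def depth_first_search(start, goal_test, successors):
    """Always expand a deepest node first (LIFO stack of paths).
    Returns the first solution path found, not necessarily the best."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        state = path[-1]
        if goal_test(state):
            return path
        for s2 in successors(state):
            if s2 not in path:    # do not revisit states on this path
                stack.append(path + [s2])
    return None
```

Note that the first solution returned depends on the order in which successors are pushed, and on an infinite tree this sketch, like plain depth-first search, may never return at all.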
Depth-limited search
Depth-limited search avoids the pitfalls of depth-first search by imposing a cutoff on the
maximum depth of a path.
This cutoff can be implemented with a special depth-limited search algorithm, or by using the
general search algorithm with operators that keep track of the depth.
Depth-limited search is complete provided the depth limit is at least the depth of the shallowest solution, but it is not optimal.
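Depth-limited search can be sketched as depth-first search that carries a depth counter, the second implementation option mentioned above; a limit that is too small simply causes failure.

```python
def depth_limited_search(state, goal_test, successors, limit, path=None):
    """Depth-first search with a cutoff on the maximum depth of a path.
    Returns a solution path, or None on failure or cutoff."""
    path = path or [state]
    if goal_test(state):
        return path
    if limit == 0:
        return None               # cutoff: do not search deeper
    for s2 in successors(state):
        result = depth_limited_search(s2, goal_test, successors,
                                      limit - 1, path + [s2])
        if result is not None:
            return result
    return None
```

With a goal two steps from the start, a limit of 1 fails while a limit of 2 succeeds; this is why completeness hinges on choosing a limit at least as large as the shallowest solution depth.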
AVOIDING REPEATED STATES
Up to this point, we have all but ignored one of the most important complications to the search
process: the possibility of wasting time by expanding states that have already been encountered
and expanded before on some other path.
For some problems, this possibility never comes up; each state can only be reached one way.
For many problems, repeated states are unavoidable.
There are three ways to deal with repeated states:
Do not return to the state you just came from.
Do not create paths with cycles in them.
Do not generate any state that was ever generated before.
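The three checks can be sketched as predicates applied whenever a successor s2 of the last state on a path is generated; the function names are illustrative, and each check is stricter (and costlier) than the one before it.

```python
def not_parent(path, s2):
    """Check 1: do not return to the state you just came from."""
    return len(path) < 2 or s2 != path[-2]

def no_cycle(path, s2):
    """Check 2: do not create paths with cycles in them,
    i.e. never revisit a state already on the current path."""
    return s2 not in path

def never_seen(explored, s2):
    """Check 3: do not generate any state that was ever generated
    before; 'explored' is a set shared across the whole search."""
    return s2 not in explored
```

Check 1 costs nothing extra, check 2 costs time proportional to the path length, and check 3 costs memory proportional to the number of states seen, which is the usual trade-off when deciding which to use.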
CONSTRAINT SATISFACTION SEARCH
A constraint satisfaction problem (or CSP) is a special kind of problem that satisfies some
additional structural properties beyond the basic requirements for problems in general.
In a CSP, the states are defined by the values of a set of variables and the goal test specifies a
set of constraints that the values must obey.
For example, the 8-queens problem can be viewed as a CSP in which the variables are the
locations of each of the eight queens; the possible values are squares on the board; and the
constraints state that no two queens can be in the same row, column or diagonal.
A solution to a CSP specifies values for all the variables such that the constraints are satisfied.
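The 8-queens CSP can be sketched with a common encoding, assumed here: assignment[i] is the row of the queen in column i, which builds the one-queen-per-column constraint into the representation, and a simple backtracking search assigns one variable (column) at a time.

```python
def consistent(assignment, col, row):
    """Goal-test constraints for placing a queen at (col, row):
    no two queens may share a row or a diagonal. (Columns are
    distinct by construction of the encoding.)"""
    for c, r in enumerate(assignment):
        if r == row or abs(c - col) == abs(r - row):
            return False
    return True

def solve(n, assignment=()):
    """Backtracking search: extend the assignment column by column,
    undoing a choice whenever no consistent row remains."""
    if len(assignment) == n:
        return assignment          # all variables assigned: a solution
    col = len(assignment)
    for row in range(n):
        if consistent(assignment, col, row):
            result = solve(n, assignment + (row,))
            if result is not None:
                return result
    return None                    # no row works: backtrack
```

Calling `solve(8)` returns one full assignment of rows to the eight queens in which every constraint is satisfied, which is exactly what a solution to a CSP means.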