
Wollega University

Department of Computer Science

Introduction to Artificial Intelligence

Chapter one: Introduction to AI



Introduction to AI

• The field of artificial intelligence, or AI, attempts to understand intelligent entities.
• Thus, one reason to study it is to learn more about ourselves.
• But unlike philosophy and psychology, which are also
concerned with intelligence, AI strives to build intelligent
entities as well as understand them.



Introduction to AI…
Definition of AI
• The definitions of AI revolve around thought, reasoning, and behavior
• "The exciting new effort to make computers think . . . machines
with minds, in the full and literal sense" (Haugeland, 1985)
• [The automation of] activities that we associate with human
thinking, activities such as decision-making, problem solving,
learning ..."(Bellman, 1978)



Introduction to AI…
• "The study of mental faculties through the use of computational
models" (Charniak and McDermott, 1985)
• "The study of the computations that make it possible to perceive,
reason, and act" (Winston, 1992)
• "A field of study that seeks to explain and emulate intelligent
behavior in terms of computational processes" (Schalkoff, 1990)
• "The branch of computer science that is concerned with the
automation of intelligent behavior" (Luger and Stubblefield, 1993)



Introduction to AI…

• In summary, the definitions are organized into four categories:


Systems that think like humans.

Systems that act like humans.

Systems that think rationally.

Systems that act rationally.



Acting humanly: The Turing Test approach

• The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence.
• Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator.



Acting humanly: The Turing Test approach…

• Roughly speaking, the test he proposed is that the computer should be interrogated by a human via a teletype; it passes the test if the interrogator cannot tell whether there is a computer or a human at the other end.
• For a computer to pass the test, it would need to possess the following capabilities:



Acting humanly: The Turing Test approach…
• natural language processing
– to enable it to communicate successfully in English (or some
other human language)
• knowledge representation
– to store information provided before or during the
interrogation;
• automated reasoning
– to use the stored information to answer questions and to draw
new conclusions;
• machine learning
– to adapt to new circumstances and to detect and extrapolate
patterns
Thinking humanly: The cognitive modeling approach
• If we are going to say that a given program thinks like a human, we
must have some way of determining how humans think.
• We need to get inside the actual workings of human minds.
• Once we have a sufficiently precise theory of the mind,
– it becomes possible to express the theory as a computer program.

• If the program's input/output and timing behavior matches human behavior,
– that is evidence that some of the program's mechanisms may also be operating in humans



Thinking rationally: The laws of thought approach

• Aristotle was one of the first to attempt to codify "right thinking," that is,
irrefutable reasoning processes
– correct conclusions given correct premises

• For example,

Socrates is a man;

all men are mortal;

therefore Socrates is mortal."


• These laws of thought were supposed to govern the operation of the mind,
and initiated the field of logic
Acting rationally: The rational agent approach
• Acting rationally means acting so as to achieve one's goals,
given one's beliefs.
• An agent is just something that perceives and acts

• In this approach, AI is viewed as the study and construction of rational agents.
• In the "laws of thought" approach to AI, the whole emphasis
was on correct inferences.

Acting rationally: The rational agent approach…

• Making correct inferences is sometimes part of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve one's goals, and then to act on that conclusion.
History of AI

• The field was formally initiated in 1956, and the name AI was coined by John McCarthy.
• The advent of general purpose computers provided a vehicle for
creating artificially intelligent entities.
– Used for solving general-purpose problems
• Which one is preferred?
– General purpose problem solving systems
– Domain specific systems



History of AI
• Development of knowledge-based systems: the key to power

– Performance of general-purpose problem solving methods is weak for many complex domains.
– Using knowledge is more suited to make better reasoning in
narrow areas of expertise (like human experts do).
– Early knowledge intensive systems include:

• MYCIN (1976): used for medical diagnosis.


• etc.
History of AI…
• Shifts from procedural to declarative programming paradigm.
– Rather than telling the computer how to compute a solution, AI
program consists of a knowledge base of facts and
relationships.
– Rather than running a program to obtain a solution, the user asks questions and the system searches through the KB to determine the answer.
• Simulating the human mind, and its learning and reasoning behavior (Neural Networks, Belief Networks, Hidden Markov Models, etc.)



Applications of AI

Solving problems that require thinking by humans:

• Playing games (chess, checkers, cards, ...)
• Proving theorems (mathematical theorems, laws of physics, …)
• Classification of text (politics, economics, social, sports, etc.)
• Information filtering and summarization of text
• Giving advice in medical diagnosis



AI vs. HI?
• Artificial Intelligence (or AI) is the concept that it is possible for a computer to think in the same sense as humans do.
 Remember that the computer-based chess program Deep Blue beat the human expert Garry Kasparov. What do you understand from this?
• Does AI equal human intelligence?
– Can we create a KB called a mind?
• What is our concern in designing an intelligent agent?
– Is it to replace human beings or to support them?



Strong AI vs. Weak AI

• Strong AI argues that it is possible that one day a computer will be invented which can be called a mind in its fullest sense.
– Strong AI aims to create an agent that can replicate human intelligence completely; i.e., it can think, reason, imagine, etc., and do all the things that we currently associate with the human brain.



Strong AI vs. Weak AI…
• Weak AI, on the other hand, argues that computers can only
appear to think & are not actually conscious in the same way as
human brains are.
– The weak AI position holds that AI should try to develop
systems which have facets of intelligence, but the objective is
not to build a completely sentient entity.
– Weak AI researchers see their contribution as things like expert systems used for medical diagnosis, which use "intelligent" models but do not amount to a conscious agent



Foundation of AI…
• Although AI itself is a young field, it has inherited many ideas,
viewpoints, and techniques from other disciplines

philosophy
– materialism, which holds that all the world (including the brain and mind) operates according to physical law

– Others argue that the mind has a physical basis, but deny that it can be explained by a reduction to ordinary physical processes

– Philosophy had thus established a tradition in which the mind was conceived of as a physical device operating principally by reasoning with the knowledge that it contained.
Foundation of AI…

• Mathematics

– AI needs a formal science, i.e. a level of mathematical formalization, in three main areas: computation (algorithms), logic, and probability
– Probability:

• an invaluable part of all the quantitative sciences, helping to deal with uncertain measurements and incomplete theories
• a degree of belief rather than an objective ratio of outcomes



Foundation of AI…

• Psychology

– we have the tools with which to investigate the human mind

– specified the three key steps of a knowledge-based agent:

(1) the stimulus must be translated into an internal representation,

(2) the representation is manipulated by cognitive processes to derive new internal representations, and

(3) these are in turn retranslated back into action.

– these steps clearly form a good design for an agent


Foundation of AI…

• Linguistics

– we have theories of the structure and meaning of language

– AI requires an understanding of the subject matter and context, and an understanding of the structure of sentences.

• Computer science

– Finally, from computer science, we have the tools with which to make AI a reality



CHAPTER TWO
INTELLIGENT AGENTS



Intelligent Agent

• I want to build a robot that will (for me as an individual):
– Clean my house
– Handle my emails (information filtering agent)
– Cook when I don't want to
– Take notes when I am in a meeting
– Cut my hair
– Wash my clothes
– Fix my car (or take it to be fixed)
i.e. do the things that I don't feel like doing…
• AI is the science of building software or physical agents that act
rationally with respect to a goal.



Types of Intelligent Agents

• Software agents:
– also called a softbot (software robot), which operates within the confines of a computer system or network.
– It is an agent that interacts with a software environment by issuing commands and interpreting the environment's feedback.
– A softbot's effectors are commands (e.g. mv or compress in a UNIX shell) that change the external environment's state.
– E.g. mail handling agent, information filtering agent



Types of Intelligent Agents…

• Physical agents
– are robots that have the ability to move and act in the
physical world
– can perceive and manipulate objects in that world, possibly
for responding to new perceptions.



Types of Intelligent Agents…

• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors.

– A human agent has eyes, ears, and other organs for sensors, and
hands, legs, mouth, and other body parts for effectors.

– A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors.
• The agent is assumed to exist in an environment in which it perceives
and acts



Types of Intelligent Agents…

Rational Agent

– A rational agent is one that does the right thing.

– An agent is rational if it does the right thing to achieve the specified goal.

– The right action is the one that will cause the agent to be most successful.

– A rational agent has clear preferences, models uncertainty, and acts in a way that maximizes its performance measure over all possible actions.

– How and when should the agent's success be evaluated?




How Agents should act?
• A rational agent should strive to "do the right thing", based on what it
can perceive and the actions it can perform.
– What does the right thing mean? It is an action that will cause the agent to be most successful and is expected to maximize goal achievement, given the available information.
• A rational agent is not omniscient
– An omniscient agent knows the actual outcome of its actions and can act accordingly, but in reality omniscience is impossible.
– Rational agents take actions with expected success, whereas an omniscient agent takes actions that are 100% sure to succeed.



Rational agent
• In summary, what is rational at any given point depends on the PEAS (Performance measure, Environment, Actuators, Sensors) framework.
– Performance measure
• The performance measure that defines degrees of success of the
agent
– Environment
• Knowledge: What an agent already knows about the environment
– Actuators – generating actions
• The actions that the agent can perform back to the environment
– Percept sequence(sensors-receiving percept)
• Perception: Everything that the agent has perceived so far
concerning the current scenario in the environment


PEAS for self-driving cars:

• Let's suppose a self-driving car then PEAS representation will be:

• Performance: Safety, time, legal drive, comfort

• Environment: Roads, other vehicles, road signs, pedestrians

• Actuators: Steering, accelerator, brake, signal, horn

• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
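Since a PEAS description is just structured data, it can be written down directly. Below is a minimal sketch in Python; the class and field names are illustrative assumptions, not part of the slides:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Hypothetical container for a PEAS task description."""
    performance: list
    environment: list
    actuators: list
    sensors: list

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
```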



Example of Agents with their PEAS representation



Mapping from percept sequence to action
• Once we realize that an agent's behavior depends only on its
percept sequence to date, then we can describe any particular
agent by making a table of the action it takes in response to each
possible percept sequence.
• Such a list is called a mapping from percept sequences to
actions
• We can find out which mapping correctly describes an agent by trying out all possible percept sequences and recording which actions the agent does in response
• Specifying which action an agent ought to take in response to any
given percept sequence provides a design for an ideal agent
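The percept-sequence-to-action table can be coded directly. A minimal sketch, assuming a tiny hand-built table (real tables would be far too large to enumerate; the vacuum-world percepts used here are illustrative):

```python
def make_table_driven_agent(table):
    """Return an agent function that appends each percept to the
    sequence seen so far and looks the whole sequence up in a table."""
    percepts = []

    def agent(percept):
        percepts.append(percept)
        # The table maps complete percept sequences (as tuples) to actions.
        return table.get(tuple(percepts))

    return agent

# Tiny illustrative table for a two-location vacuum world.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "dirty")))  # -> suck
```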
Autonomy
• If the agent's actions are based completely on built-in knowledge, such that it need pay no attention to its percepts, then we say that the agent lacks autonomy
• For example, if the clock manufacturer already knows that the clock's owner
would be going to Australia at some particular date, then a mechanism could
be built in to adjust the hands automatically by some hours at just the right
time
• This would certainly be successful behavior, but the intelligence seems to
belong to the clock's designer rather than to the clock itself
• An agent's behavior can be based on both its own experience and the
built-in knowledge



STRUCTURE OF INTELLIGENT AGENTS

• So far we have described the behavior of an agent; here we focus on how the insides work.
• The job of AI is to design the agent program:
– a function that implements the agent mapping from percepts to
actions
• We assume this program will run on some sort of computing device,
which we will call the architecture
– The architecture might be a plain computer, or it might include
special-purpose hardware for certain tasks, such as processing
camera images or filtering audio input
STRUCTURE OF INTELLIGENT AGENTS…

• The relationship among agents, architectures, and programs can be summed up as follows:
agent = architecture + program
Programs
– Accepts percept from an environment and generates actions
• Before designing an agent program, we need to know the
possible percept and actions



STRUCTURE OF INTELLIGENT AGENTS…

Architecture
– makes the percepts from the sensors available to the program,
– runs the program

– and feeds the program's action choices to the effectors as they are
generated.
• The design of an agent is complex because of

1. the complexity of the relationship among the behavior of the agent,
2. the percept sequence generated by the environment,
3. and the goals that the agent is supposed to achieve.



Five types of agents

• Five basic types in order of increasing generality:

1. Simple reflex agents

2. Model-based reflex agent

3. Goal-based agents

4. Utility-based agents

5. Learning agents



Simple reflex agents

• The Simple reflex agents are the simplest agents.


• These agents make decisions on the basis of the current percepts and ignore the rest of the percept history.
• These agents only succeed in the fully observable environment.
• The Simple reflex agent does not consider any part of the percept history during its decision and action process.
• The Simple reflex agent works on Condition-action rule, which
means it maps the current state to action. Such as a Room Cleaner
agent, it works only if there is dirt in the room.
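A condition-action-rule agent for the room-cleaner example might look like the sketch below; the (location, status) percept encoding is an assumption for illustration, not from the slides:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules for a two-location vacuum world.
    Only the current percept (location, status) is used; no history."""
    location, status = percept
    if status == "dirty":   # condition: dirt here -> action: suck
        return "suck"
    if location == "A":     # condition: at A and clean -> move right
        return "right"
    return "left"           # otherwise at B and clean -> move left

print(simple_reflex_vacuum_agent(("A", "dirty")))  # -> suck
```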



Simple reflex agents…

• A simple reflex agent is an agent that selects actions based only on the current percept.

Examples of reflex actions include


– Withdrawal of hand suddenly from a hot object or if it gets pricked.
– Coughing
– Sneezing
– Closing of eyelids when some particles enter the eye.
– Shivering when it is too cold.



Simple reflex agents…

• Problems for the simple reflex agent design approach:


– They have very limited intelligence
– They do not have knowledge of non-perceptual parts of the current state
– The rule tables are mostly too big to generate and to store.
– Not adaptive to changes in the environment.





Model-based reflex agent

• The Model-based agent can work in a partially observable environment, and track the situation.
• A model-based agent has two important factors:

– Model: knowledge about "how things happen in the world"; hence the name Model-based agent.
– Internal State: a representation of the current state based on the percept history.
• These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
• Updating the agent state requires information about:

– How the world evolves


– How the agent's actions affect the world.
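A minimal skeleton of this idea is sketched below; update_state stands in for the model ("how the world evolves" and "how the agent's actions affect the world"), and the rules are condition-action pairs over the internal state. All names and signatures are illustrative assumptions:

```python
def make_model_based_agent(update_state, rules):
    """Sketch of a model-based reflex agent: keeps an internal state
    that is updated from the last action and the current percept."""
    state = {}
    last_action = None

    def agent(percept):
        nonlocal state, last_action
        # 'update_state' encodes the model: how the world evolves and
        # how the agent's actions affect it.
        state = update_state(state, last_action, percept)
        for condition, action in rules:   # condition-action rules
            if condition(state):
                last_action = action
                return action
        last_action = None
        return None

    return agent
```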
Model-based reflex agent…

• an intelligent agent that uses an internal model of its environment to make decisions based on the current percept.

• Self-driving cars are a great example of a model-based reflex agent, as they are equipped with sensors that detect obstacles and feed percepts into the car's memory and internal model of its environment.





Goal-based agents

• Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information (Model-based + Goal).
• They choose an action, so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
Goal-based agents…

• A goal-based agent is an agent that performs actions to achieve a desired goal.
• A goal-based agent needs to have some information about the current
state, the possible actions, and the goal situation.
• A goal-based agent can learn from past experiences and choose the best action to reach the goal (a minimal sketch follows the examples below).
• Examples of goal-based agents include:
– A taxi that decides which way to turn at a junction to reach the
passenger’s destination.
– A group of friends that choose a car over other vehicles for a road trip.
– A person that maps the pathway in his mind while walking in a lane.
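As promised above, a minimal one-step sketch: the agent predicts the result of each action and picks one whose outcome satisfies the goal. Real goal-based agents search over whole action sequences (see Chapter 3); the callables actions, result, and goal_test are assumed, not from the slides:

```python
def goal_based_action(state, actions, result, goal_test):
    """One-step lookahead sketch: pick any action whose predicted
    result satisfies the goal."""
    for action in actions(state):
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal from here
```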





Utility-based agents
• These agents are similar to the goal-based agent but provide an
extra component of utility measurement which makes them
different by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The Utility-based agent is useful when there are multiple possible
alternatives, and an agent has to choose in order to perform the best
action.
• The utility function maps each state to a real number to check how
efficiently each action achieves the goals.
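The same one-step lookahead as before, but ranked by a utility function that maps each state to a real number, as described above. A sketch under assumed call signatures:

```python
def utility_based_action(state, actions, result, utility):
    """Pick the action whose predicted next state has the highest
    utility, rather than the first one that merely satisfies a goal."""
    return max(actions(state), key=lambda a: utility(result(state, a)))
```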



Utility-based agents…

• A utility-based agent is an agent that acts to maximize its own expected utility.
• Some real-life examples of utility-based agents:

• A drone that has a goal and also considers the costs and benefits of its actions
• A tourist that wants to enjoy both the journey and the destination
• An AI tool that helps users optimize the rewards and benefits they get from using cards


Learning Agents

• A learning agent in AI is the type of agent which can learn from its

past experiences, or it has learning capabilities.

• It starts acting with basic knowledge and is then able to act and adapt automatically through learning.



Learning Agents…

• A learning agent has mainly four conceptual components, which are:


– Learning element: responsible for making improvements by learning from the environment
– Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
– Performance element: It is responsible for selecting external action
– Problem generator: This component is responsible for suggesting
actions that will lead to new and informative experiences.
Hence, learning agents are able to learn, analyze performance, and look for
new ways to improve the performance.
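The four components can be wired together as below. This is only a structural skeleton under assumed call signatures, not a real learning algorithm:

```python
class LearningAgent:
    """Skeleton wiring of the four components named above;
    every callable passed in is a placeholder."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # improves it
        self.critic = critic                            # scores behavior
        self.problem_generator = problem_generator      # suggests experiments

    def step(self, percept):
        # Critic compares behavior against a fixed performance standard.
        feedback = self.critic(percept)
        # Learning element uses the feedback to improve the performance element.
        self.learning_element(self.performance_element, feedback)
        # Problem generator may propose an exploratory action for new,
        # informative experiences; otherwise act normally.
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)
```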


Agent Environment in AI

• An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself.

• An environment can be described as a situation in which an agent is present.

• The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
• Agent design is affected by the environment
– actions are done by the agent on the environment, which in turn provides percepts to the agent



Features of Environment
• An environment can have various features from the point of view of an agent:

1. Fully observable vs Partially Observable

2. Static vs Dynamic

3. Discrete vs Continuous

4. Deterministic vs Stochastic

5. Single-agent vs Multi-agent

6. Episodic vs sequential

7. Known vs Unknown

8. Accessible vs Inaccessible
Fully observable vs Partially Observable

o If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a fully observable environment; otherwise it is partially observable.

o A fully observable environment is easy, as there is no need to maintain internal state to keep track of the history of the world.

o If an agent has no sensors in all environments, then such an environment is called unobservable.
Fully observable vs Partially Observable…
• When an agent's sensors are capable of sensing or accessing the complete state of the environment at each point in time, it is said to be a fully observable environment; otherwise it is partially observable.
• A fully observable environment is easy to handle, as there is no need to keep track of the history of the surroundings.
• An environment is called unobservable when the
agent has no sensors in all environments.
• Examples:
• Chess – the board is fully observable, and so are
the opponent’s moves.
• Driving – the environment is partially observable
because what’s around the corner is not known.



Deterministic vs Stochastic:

o If an agent's current state and selected action can completely determine the next state of the environment, then such an environment is called a deterministic environment.

o A stochastic environment is random in nature and cannot be determined completely by an agent.

• In a deterministic, fully observable environment, an agent does not need to worry about uncertainty.



Deterministic vs Stochastic…
• When a uniqueness in the agent’s current state completely
determines the next state of the agent, the environment is
said to be deterministic.
• The stochastic environment is random in nature which is not
unique and cannot be completely determined by the agent.
• Examples:

• Chess – there are only a few possible moves for a piece in the current state, and these moves can be determined.
• Self-Driving Cars – the actions of a self-driving car are not unique; they vary from time to time.
Episodic vs Sequential:

o In an episodic environment, there is a series of one-shot actions, and only the current percept is required for the action.

o However, in a sequential environment, an agent requires memory of past actions to determine the next best actions.



Episodic vs Sequential:

 In an episodic task environment, each of the agent's actions is divided into atomic incidents or episodes. There is no dependency between current and previous incidents. In each incident, an agent receives input from the environment and then performs the corresponding action.

 Example: Consider a pick-and-place robot, which is used to detect defective parts on conveyor belts. Every time, the robot (agent) makes the decision on the current part; i.e., there is no dependency between current and previous decisions.
Episodic vs Sequential..

 In a sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what action it has taken previously and what action it is supposed to take in the future.

 Example:

 Checkers – where the previous move can affect all the following moves.



Single-agent vs Multi-agent

o If only one agent is involved in an environment and operates by itself, then such an environment is called a single-agent environment.

o However, if multiple agents are operating in an environment, then such an environment is called a multi-agent environment.

o The agent design problems in the multi-agent environment are different from those in a single-agent environment.
Single-agent vs Multi-agent…

• An environment consisting of only one agent is said to be a single-agent environment.
• A person left alone in a maze is an example of the
single-agent system.
• An environment involving more than one agent is a
multi-agent environment.
• The game of football is multi-agent as it involves 11
players in each team.



Static vs Dynamic

o If the environment can change itself while an agent is deliberating, then such an environment is called a dynamic environment; otherwise it is called a static environment.

o Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.

o However, for a dynamic environment, agents need to keep looking at the world at each action.

o Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.



Static vs Dynamic…
 An environment that keeps changing while the agent is performing some action is said to be dynamic.
 A roller coaster ride is dynamic as it is set in motion
and the environment keeps changing every instant.
 An idle environment with no change in its state is
called a static environment.
 An empty house is static as there’s no change in the
surroundings when an agent enters.



Discrete vs Continuous

o If in an environment there are a finite number of percepts and actions that can be performed within it, then such an environment is called a discrete environment; otherwise it is called a continuous environment.

o A chess game comes under the discrete environment, as there is a finite number of moves that can be performed.

o A self-driving car is an example of a continuous environment.



Discrete vs Continuous…
 If an environment consists of a finite number of actions
that can be deliberated in the environment to obtain the
output, it is said to be a discrete environment.
 The game of chess is discrete as it has only a finite
number of moves. The number of moves might vary with
every game, but still, it’s finite.
 An environment in which the actions performed cannot be numbered, i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous
environments as their actions are driving, parking, etc.
which cannot be numbered.



Known vs Unknown
o Known and unknown are not actually a feature of an
environment, but it is an agent's state of knowledge to
perform an action.
o In a known environment, the results for all actions are
known to the agent. While in unknown environment,
agent needs to learn how it works in order to perform
an action.
o It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.
o In a known environment, the output for all probable
actions is given. Obviously, in case of unknown
environment, for an agent to make a decision, it has
to gain knowledge about how the environment works.
Accessible vs Inaccessible

o If an agent can obtain complete and accurate information about the state of the environment, then such an environment is called an accessible environment; otherwise it is called inaccessible.

o An empty room whose state can be defined by its temperature is an example of an accessible environment.



Chapter Three

Problem Solving by Searching



Solving Problems by Searching

• A reflex agent is simple
• it bases its actions on
• a direct mapping from states to actions
• but cannot work well in environments
• where this mapping would be too large to store
• and would take too long to learn

• Hence, a goal-based agent is used



Problem-solving agent

• Problem-solving agent
• A kind of goal-based agent
• It solves problem by
• finding sequences of actions that lead to desirable states (goals)
• To solve a problem,
• the first step is the goal formulation, based on the current situation



Goal formulation
• The goal is formulated
• as a set of world states, in which the goal is satisfied

• Reaching from the initial state → a goal state

• Actions are required

• Actions are the operators
• causing transitions between world states
• Actions should be abstract to a certain degree, rather than very detailed
• E.g., "turn left" vs. "turn left 30 degrees", etc.
Problem formulation

• The process of deciding


• what actions and states to consider

• E.g., driving Amman → Zarqa

• in-between states and actions defined
• States: Some places in Amman & Zarqa
• Actions: Turn left, Turn right, go straight, accelerate & brake, etc.



Search

• Because there are many ways to achieve the same goal


• Those ways are together expressed as a tree
• Multiple options of unknown value at a point,
• the agent can examine different possible sequences of actions, and
choose the best
• This process of looking for the best sequence is called search
• The best sequence is then a list of actions, called solution



Search algorithm

• Defined as
• taking a problem
• and returns a solution

• Once a solution is found


• the agent follows the solution
• and carries out the list of actions – execution phase

• Design of an agent
• “Formulate, search, execute”
Well-defined problems and solutions

A problem is defined by 5 components:


1. Initial state
2. Actions
3. Transition model or (Successor functions)
4. Goal Test.
5. Path Cost.



Well-defined problems and solutions…

• A problem is defined by 5 components:
• The initial state
• that the agent starts in
• The set of possible actions
• Transition model: a description of what each action does.
(successor function): any state reachable from a given state by a single action
• Initial state, actions and Transition model define the state space
• the set of all states reachable from the initial state by any
sequence of actions.
• A path in the state space:
• any sequence of states connected by a sequence of actions.
Well-defined problems and solutions…
• The goal test
• Applied to the current state to test
• if the agent is in its goal

- Sometimes there is an explicit set of possible goal states (example: being in Amman).
- Sometimes the goal is described by properties
• instead of stating explicitly the set of states
• Example: Chess
• the agent wins if it can capture the KING of the opponent on the next move (checkmate),
• no matter what the opponent does
Well-defined problems and solutions…

• A path cost function,


• assigns a numeric cost to each path
• = performance measure
• denoted by g
• to distinguish the best path from others

• Usually the path cost is


• the sum of the step costs of the individual actions (in the action
list)



Well-defined problems and solutions…
• Together a problem is defined by
• Initial state
• Actions
• Successor function
• Goal test
• Path cost function

• The solution of a problem is then


• a path from the initial state to a state satisfying the goal test

• Optimal solution
• the solution with lowest path cost among all solutions
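The five components map directly onto a small interface. A sketch; the method names follow common textbook convention and are not prescribed by these slides:

```python
class Problem:
    """Sketch of the five-component problem definition above."""

    def __init__(self, initial_state, goal_states=()):
        self.initial_state = initial_state
        self.goal_states = set(goal_states)

    def actions(self, state):
        """The actions applicable in 'state'."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model / successor function."""
        raise NotImplementedError

    def goal_test(self, state):
        """Explicit goal set here; could instead check properties."""
        return state in self.goal_states

    def step_cost(self, state, action, next_state):
        # Path cost g is the sum of step costs; 1 per step by default.
        return 1
```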
Formulating problems
• Besides the five components for problem formulation
• anything else?

• Abstraction
• the process to take out the irrelevant information
• leave the most essential parts to the description of the states

( Remove detail from representation)


• Conclusion: Only the most important parts that are contributing to
searching are used



Problem-Solving Agents

• agents whose task is to solve a particular problem (steps)


• goal formulation
• what is the goal state
• what are important characteristics of the goal state
• how does the agent know that it has reached the goal
• are there several possible goal states
• are they equal or are some more preferable



Problem-Solving Agents…

• problem formulation
• what are the possible states of the world relevant for solving
the problem
• what information is accessible to the agent
• how can the agent progress from state to state





Our Example
1. Problem: go from Ajlun to Amman
2. Initial State: Ajlun
3. Operator: go from one city to another
4. State Space: {Jarash, Salt, Irbid, ……}
5. Goal Test: is the agent in Amman?
6. Path Cost Function: get the cost from the map
7. Solution: { {Aj → Ja → Ir → Ma → Za → Am}, {Aj → Ir → Ma → Za → Am}, …, {Aj → Ja → Am} }
8. State Set Space: {Ajlun → Jarash → Amman}


Example: Romania

• On holiday in Romania; currently in Arad.

• Flight leaves tomorrow from Bucharest

• Formulate goal:
• be in Bucharest

• Formulate problem:
• states: various cities
• actions: drive between cities

• Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest



Single-state problem formulation
A problem is defined by four items:

1. initial state, e.g., "at Arad"

2. actions or successor function S(x) = set of action–state pairs
• e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }

3. goal test, which can be
• explicit, e.g., x = "at Bucharest"
• implicit, e.g., Checkmate(x)

4. path cost (additive)
• e.g., sum of distances, number of actions executed, etc.
• c(x,a,y) is the step cost, assumed to be ≥ 0
• A solution is a sequence of actions leading from the initial state to a
goal state



Toy problems

• Example: vacuum world


Number of states: 8
Initial state: Any
Number of actions: 4
 left, right, suck, noOp
Goal: clean up all dirt
 Goal states: {7, 8}
 Path Cost:
 Each step costs 1
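Instantiating the formulation: the (location, dirt_in_A, dirt_in_B) state encoding below is my own assumption for illustration; its 2 × 2 × 2 combinations give the 8 states mentioned on the slide:

```python
ACTIONS = ["left", "right", "suck", "noOp"]

def result(state, action):
    """Transition model for the two-location vacuum world."""
    loc, dirt_a, dirt_b = state
    if action == "left":
        return ("A", dirt_a, dirt_b)
    if action == "right":
        return ("B", dirt_a, dirt_b)
    if action == "suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state  # noOp leaves the world unchanged

def goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b  # goal: all dirt cleaned up

def step_cost(state, action, next_state):
    return 1  # each step costs 1, as on the slide
```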


The 8-puzzle

• States:
• a state description specifies the location of each of the eight tiles
and blank in one of the nine squares

• Initial State:
• Any state in state space

• Successor function:
• the blank moves Left, Right, Up, or Down

• Goal test:
• current state matches the goal configuration

• Path cost:
• each step costs 1, so the path cost is just the length of the path
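The successor function can be sketched concretely. Representing the state as a 9-tuple in row-major order, with 0 for the blank, is an encoding assumed here for illustration:

```python
def successors(state):
    """8-puzzle successor function: the blank (0) moves Left, Right,
    Up, or Down. Yields (action, next_state) pairs."""
    i = state.index(0)                  # position of the blank
    row, col = divmod(i, 3)
    moves = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}
    for action, delta in moves.items():
        if action == "Left" and col == 0:
            continue                    # blank already in leftmost column
        if action == "Right" and col == 2:
            continue
        if action == "Up" and row == 0:
            continue
        if action == "Down" and row == 2:
            continue
        j = i + delta
        s = list(state)
        s[i], s[j] = s[j], s[i]         # slide a tile into the blank
        yield action, tuple(s)
```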
The 8-queens

• There are two ways to formulate the problem

• Both formulations have the following in common:


• Goal test: 8 queens on board, not attacking to each other
• Path cost: zero

(1) Incremental formulation


• involves operators that augment the state description starting from an empty state
• Each action adds a queen to the state
• States:
• any arrangement of 0 to 8 queens on board
• Successor function:
• add a queen to any empty square



The 8-queens
• (2) Complete-state formulation
• starts with all 8 queens on the board
• move the queens individually around
• States:
• any arrangement of 8 queens, one per column in the leftmost columns
• Operators: move an attacked queen to a row, not attacked by any other
• Conclusion:
• the right formulation makes a big difference to the size of the search space



Example: River Crossing

• Items: Man, Wolf, Corn, Chicken.

• Man wants to cross river with all items.


• Wolf will eat Chicken
• Chicken will eat corn.
• Boat will take max of two.



3.3 Searching for solutions

• Finding a solution is done by
• searching through the state space

• All problems are transformed
• into a search tree
• generated by the initial state and successor function



Search tree

• Initial state
• The root of the search tree is a search node

• Expanding
• applying successor function to the current state
• thereby generating a new set of states

• leaf nodes
• the states having no successors

Fringe: the set of search nodes that have not been expanded yet.

• Refer to next figure
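A generic sketch of this loop, in which plain states stand in for full search nodes and the fringe discipline fixes the strategy. It assumes the Problem interface sketched earlier in this chapter:

```python
from collections import deque

def tree_search(problem, fringe):
    """Repeatedly take a node from the fringe, test it, and otherwise
    expand it by applying the successor function."""
    fringe.append(problem.initial_state)
    while fringe:
        state = fringe.popleft()       # FIFO fringe -> breadth-first
        if problem.goal_test(state):
            return state
        for action in problem.actions(state):
            fringe.append(problem.result(state, action))  # expansion
    return None  # fringe exhausted: no solution found

# e.g. tree_search(problem, deque()) runs a breadth-first search.
```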


Tree search example
[figures: successive expansions of the search tree, not reproduced]


Search tree

• The essence of searching
• choosing one option and keeping the others for later inspection
• in case the first choice is not correct

• Hence we have the search strategy
• which determines the choice of which state to expand
• good choice → less work → faster

• Important:
• state space ≠ search tree



Search tree

• State space
• has unique states {A, B}
• while a search tree may have cyclic paths: A-B-A-B-A-B- …

• A good search strategy should avoid such paths



Search tree

• A node has five components:


• STATE: which state it is in the state space
• PARENT-NODE: from which node it is generated
• ACTION: which action applied to its parent-node to generate it
• PATH-COST: the cost, g(n), from initial state to the node n itself
• DEPTH: number of steps along the path from the initial state
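These five components translate directly into a small data structure. A sketch, reusing the Problem interface from earlier:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # STATE: state in the state space
    parent: Optional["Node"] = None  # PARENT-NODE: generated from
    action: Any = None               # ACTION: applied to the parent
    path_cost: float = 0.0           # PATH-COST g(n) from initial state
    depth: int = 0                   # DEPTH: steps from initial state

def child_node(problem, parent, action):
    """Expansion step: build the child node for one applicable action."""
    s = problem.result(parent.state, action)
    g = parent.path_cost + problem.step_cost(parent.state, action, s)
    return Node(s, parent, action, g, parent.depth + 1)
```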



Properties of Search Algorithms
• Following are the four essential properties of search algorithms to compare
the efficiency of these algorithms:

• Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution whenever at least one solution exists for any input.

• Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all solutions, then it is said to be an optimal solution.

• Time Complexity: a measure of how long an algorithm takes to complete its task.

• Space Complexity: the maximum storage space required at any point during the search.
Measuring problem-solving performance…

• In AI, complexity is expressed in terms of
• b, the branching factor: the maximum number of successors of any node
• d, the depth of the shallowest goal node (depth of the least-cost solution)
• m, the maximum length of any path in the state space

• Time and space are measured in
• the number of nodes generated during the search
• the maximum number of nodes stored in memory



Measuring problem-solving performance
• For effectiveness of a search algorithm
• we can just consider the total cost
• The total cost = path cost (g) of the solution found + search cost
• search cost = time necessary to find the solution

• Tradeoff:
• (long time, optimal solution with least g)
• vs. (shorter time, solution with slightly larger path cost g)



Uninformed search strategies

• Uninformed search

• no information about the number of steps or the path cost from

the current state to the goal

• search the state space blindly

• Informed search, or heuristic search

• a cleverer strategy that searches toward the goal,

• based on the information from the current state so far



Uninformed search strategies

Following are the various types of uninformed search algorithms:


• Breadth-first search
• Uniform cost search
• Depth-first search
• Depth-limited search
• Iterative deepening search
• Bidirectional search



Uniform cost search

• Breadth-first search finds the shallowest goal state
• but this is not necessarily the least-cost solution
• it works only if all step costs are equal

• Uniform cost search

• modifies breadth-first strategy

• by always expanding the lowest-cost node

• The lowest-cost node is measured by the path cost g(n)
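A sketch of this strategy with a priority queue keyed on g(n). The per-state best-cost bookkeeping is an addition beyond the slide, and successors(state) is assumed to yield (next_state, step_cost) pairs:

```python
import heapq
from itertools import count

def uniform_cost_search(start, goal_test, successors):
    """Always expand the frontier node with the lowest path cost g(n)."""
    tie = count()                        # tie-breaker: never compare states
    frontier = [(0, next(tie), start)]   # priority queue ordered by g
    best_g = {start: 0}
    while frontier:
        g, _, state = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                     # stale entry: cheaper path known
        if goal_test(state):
            return g                     # cost of the cheapest solution
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g, next(tie), nxt))
    return None                          # no solution
```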



Depth-first search

• Always expands one of the nodes at the deepest level of the tree
• Only when the search hits a dead end
• does it go back and expand nodes at shallower levels
• Dead end → leaf nodes that are not the goal
• Backtracking search
• only one successor is generated on expansion
• rather than all successors
• less memory
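A sketch using an explicit LIFO stack; here successors(state) is assumed to return the list of next states:

```python
def depth_first_search(start, goal_test, successors):
    """Depth-first search: the most recently generated node is always
    expanded next; hitting a dead end naturally backtracks."""
    stack = [start]
    while stack:
        state = stack.pop()              # deepest node first
        if goal_test(state):
            return state
        stack.extend(successors(state))  # push children onto the stack
    return None
```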



Depth-limited search
• It is depth-first search
• with a predefined maximum depth
• However, it is usually not easy to choose a suitable maximum depth
• too small → no solution can be found
• too large → it suffers from the same problems as depth-first search
• The search is
• complete only if the depth limit is at least the depth of the shallowest solution
• but still not optimal



Depth-limited search
[search-tree figure with depth limit 3, not reproduced]
Iterative deepening search

• No choosing of the best depth limit

• It tries all possible depth limits:

• first 0, then 1, 2, and so on

• combines the benefits of depth-first and breadth-first search
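Combining the two ideas: a recursive depth-limited search plus the iterative-deepening driver. The max_depth cap is just a practical guard for the sketch, and successors(state) is assumed to return next states:

```python
def depth_limited_search(state, goal_test, successors, limit):
    """Depth-first, but never expand below 'limit'."""
    if goal_test(state):
        return state
    if limit == 0:
        return None                      # cutoff reached
    for nxt in successors(state):
        found = depth_limited_search(nxt, goal_test, successors, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening_search(start, goal_test, successors, max_depth=50):
    """Try depth limits 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        found = depth_limited_search(start, goal_test, successors, limit)
        if found is not None:
            return found
    return None
```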



Iterative deepening search (Analysis)
• optimal
• complete
• Time and space complexities
• reasonable
• suitable for the problem
• having a large search space
• and the depth of the solution is not known



Bidirectional search
• Run two simultaneous searches
• one forward from the initial state another backward from the goal
• stop when the two searches meet
• However, computing backwards is difficult
• There can be a huge number of goal states
• At the goal state, which actions were used to reach it?
• Can the actions be reversed to compute a state's predecessors?



Bidirectional Strategy
2 fringe queues: FRINGE1 and FRINGE2

Time and space complexity = O(b^(d/2)) << O(b^d)
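A quick numeric sanity check of that bound, with illustrative values b = 10 and d = 6 (not from the slides):

```python
b, d = 10, 6                    # illustrative branching factor and depth
one_sided = b ** d              # a single full-depth search: 1,000,000 nodes
two_sided = 2 * b ** (d // 2)   # two half-depth searches: 2,000 nodes
print(one_sided, two_sided)     # 1000000 2000
```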


Bidirectional search
[figure: forward and backward search trees meeting in the middle, not reproduced]
