Summary - Intelligent Agents


Contents

• Summary
• Intelligent Agents
Summary
• AI attempts:
– To understand intelligence, and
– To build intelligent artifacts
• But intelligence is a complex concept
– Therefore, there are variations in AI definitions and
approaches
Approaches to AI
• Human-centered
– Attempts to understand or imitate human intelligence
• “acting humanly”
– If one can’t distinguish the intelligent work of a machine
from that of a person, then the machine is intelligent

• “thinking humanly”
– “Imitating a human being is not the right way”
– By understanding and precisely stating the underlying
principles of human intelligence
– We can build intelligent artifacts that “think” (reason) like a
human being
Rationality-Centered
• Rather than working entirely around the complexity of
human intelligence
– It is based on an ideal concept of intelligence called
rationality
– Rationality is not entirely tied to intelligence as it appears
in a human being
– A rational system, unlike an intelligent system, is defined
precisely
– A system is rational if it does the right thing given
what it knows
…Cont
• “thinking rationally”
– There is a concept called “right thinking”
– It can be represented by logical notations
– Therefore, we can build the system
• “acting rationally”
– “thinking rationally” is not the right approach because:
– Not everything can be expressed in logical notation
(informal knowledge), and
– There are situations that can’t be said to involve “right
thinking” (reflex actions)
…Cont
• So, it is better to build a system that can act rationally,
because:
– “thinking rationally” is a part of “acting rationally” – in
order to act rationally the system has to reason logically
(“think rationally”)
– A precise definition of a “rationally acting” system exists (it
is the one that acts so as to achieve the best outcome)
– By representing knowledge, we can build a system that
can act with it
(we will follow this approach as our basis for understanding AI)
Chapter 2
Intelligent Agents
• What is an agent?
– An agent is anything that can be viewed as
perceiving its environment through sensors and
– Acting upon that environment through actuators
• Example: A human agent
– Sensors: eyes, ears and other organs
– Actuators: hands, legs and other body parts
• Example: A robotic agent
– Sensors: cameras and infrared range finders
– Actuators: various motors
Cont..

• Key terms
– Percept: the agent’s perceptual input at any given instant
– Percept sequence: everything the agent has perceived to
date
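
• A minimal Python sketch of these terms (the class and method
names are illustrative assumptions, not from the slides):

    # A skeletal agent that records its percept sequence (illustrative).
    class Agent:
        def __init__(self):
            self.percept_sequence = []   # everything perceived to date

        def perceive(self, percept):
            # percept: the agent's perceptual input at this instant
            self.percept_sequence.append(percept)

        def act(self):
            # Choose an action based on the percept sequence so far;
            # concrete agents override this.
            raise NotImplementedError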
..Cont
• What makes an agent intelligent (in our case
rational)?
– We said a rational agent is one that does the right
thing.
– Example: Vacuum-Cleaner Agent
…cont
– This agent can perceive which square it is in
– Can choose to move left, move right, suck up the dirt, or do
nothing
• To do the right thing, this agent:
– Should have prior knowledge of the environment
– Should perceive its environment
– Should act on the environment
• But, how do we know whether it does the right thing?
– The right thing is the one that will cause the agent to be most
successful
– Hence, we need some way to measure success
…Cont
• Therefore, what is rational at any given time depends on
four things:
– The performance measure
– The agent’s prior knowledge of the environment
– The actions the agent can perform
– The agent’s percept sequence to date
• Definition of a rational agent:
– For every possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has
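– In symbols (our notation, not from the slides): the selected
action is a* = argmax_{a ∈ A} E[ V | e, K, a ], where V is the
performance measure, e the percept sequence to date, K the
built-in knowledge, and A the set of available actions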
…Cont
• Is the vacuum cleaner a rational agent, given these
conditions:
– The performance measure is one point for each clean square
within some time interval
– The “geography” of the environment is known a priori
– The agent can perceive its location and whether the location
contains dirt
– Can move left, move right, suck, or do nothing
• Under these circumstances the agent is indeed rational; its
expected performance is high
• But, we can still have a better agent
…cont
• Properties of a rational agent
– Rationality is not the same as perfection; rationality
maximizes expected performance
– But that is not to say the agent acts unintelligently
• A rational agent should learn as much as possible
from what it perceives
• A rational agent should be autonomous
– But it may incorporate some prior knowledge during
design
– Just as evolution provides animals with some built-in
knowledge
Agent Function
• Used to describe the behavior of agents
• Maps percept sequence to an action
– f: P* → A
• Example: for the vacuum-cleaner agent
– The agent function can be partially tabulated as follows:
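– A partial tabulation (assuming two squares A and B, as in the
standard textbook example):

    Percept sequence                Action
    [A, Clean]                      Right
    [A, Dirty]                      Suck
    [B, Clean]                      Left
    [B, Dirty]                      Suck
    [A, Clean], [A, Clean]          Right
    [A, Clean], [A, Dirty]          Suck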
Cont..
• The agent function is just an external
characterization of an agent; internally, an agent
is implemented by an agent program
• The agent program runs on the physical
architecture to produce f (the agent function)
• For most agents the table would be very large,
even infinite
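
• A sketch of a table-driven agent program in Python (the helper
name and table contents are illustrative assumptions):

    # Table-driven agent program: looks up the whole percept
    # sequence in a table of actions (impractically large in general).
    def make_table_driven_agent(table):
        percepts = []   # the percept sequence to date

        def program(percept):
            percepts.append(percept)
            return table.get(tuple(percepts))

        return program

    # Usage with a fragment of the vacuum table above:
    table = {(("A", "Dirty"),): "Suck",
             (("A", "Clean"),): "Right"}
    agent = make_table_driven_agent(table)
    print(agent(("A", "Dirty")))   # -> Suck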
Performance Measure
• An objective criterion for success of an agent's
behavior
• E.g., performance measure of a vacuum-
cleaner agent could be amount of dirt cleaned
up, amount of time taken, amount of electricity
consumed, amount of noise generated, etc.
• A measure is better if it is designed based on what is
actually needed in the environment
Agent Environment
• In agent design, the first step is to specify the task
environment, or PEAS (Performance measure,
Environment, Actuators, Sensors) description
• Consider, e.g., the task of designing an automated
taxi driver:

– Performance measure: Safe, fast, legal, comfortable trip,
maximize profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard
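
• A PEAS description can be captured as a simple data structure;
a minimal Python sketch (field names are our illustrative choice):

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list   # what counts as success
        environment: list   # what the agent operates in
        actuators: list     # how it acts
        sensors: list       # how it perceives

    taxi = PEAS(
        performance=["safe", "fast", "legal", "comfortable",
                     "maximize profits"],
        environment=["roads", "other traffic", "pedestrians",
                     "customers"],
        actuators=["steering wheel", "accelerator", "brake",
                   "signal", "horn"],
        sensors=["cameras", "sonar", "speedometer", "GPS",
                 "odometer", "engine sensors", "keyboard"],
    )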
Cont…
• Agent: Medical diagnosis system
– Performance measure: Healthy patient, minimize
costs
– Environment: Patient, hospital, staff
– Actuators: Screen display (questions, tests,
diagnoses, treatments, referrals)
– Sensors: Keyboard (entry of symptoms, findings,
patient's answers)
Cont..
• Agent: Interactive English tutor
– Performance measure: Maximize student's score
on test
– Environment: Set of students
– Actuators: Screen display (exercises, suggestions,
corrections)
– Sensors: Keyboard
Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
– That is, from the point of view of the agent
– Example: a vacuum agent with only a local dirt sensor cannot tell
whether there is dirt in the other square (partially observable)

• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current state
and the action executed by the agent.
– Otherwise stochastic
– Example:
• the vacuum world as we describe it is deterministic
• Taxi driving is stochastic
Cont..
• Episodic (vs. sequential): The agent's experience is divided
into atomic "episodes" (each episode consists of the agent
perceiving and then performing a single action), and the
choice of action in each episode depends only on the
episode itself
– Example
• An agent that spots defective parts on an assembly line is episodic
• Chess and taxi driving are sequential
• Static (vs. dynamic): The environment is unchanged while an
agent is deliberating
– Example
• Crossword puzzle is static
• Taxi driving is dynamic; the other cars keep moving while the agent
dithers
Cont..
• Discrete (vs. continuous): A limited number of distinct,
clearly defined percepts and actions.
– Example
• A chess game is discrete – because it has a finite number of distinct states
• Taxi driving is continuous
• Single agent (vs. multiagent): An agent operating by itself in
an environment.
– Example
• An agent solving a crossword puzzle by itself – single agent
• Chess – two agents
• Clearly the hardest case is partially observable, stochastic,
sequential, dynamic, continuous and multiagent
• The environment type largely determines the agent design
Cont..
• Examples
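– Typical classifications for the examples discussed above
(chess taken without a clock):

    Task environment  Observable  Deterministic  Episodic    Static   Discrete    Agents
    Crossword puzzle  Fully       Deterministic  Sequential  Static   Discrete    Single
    Chess             Fully       Deterministic  Sequential  Static   Discrete    Multi
    Taxi driving      Partially   Stochastic     Sequential  Dynamic  Continuous  Multi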
Agent Types
• Four basic types, in order of increasing
generality, based on their agent program:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
Simple reflex agents
• The simplest kind
• Select action on the basis of the current percept
– Ignoring the percept history
• Example: the vacuum agent
– Its decision is based only on the current location and its dirtiness
• Follows the condition-action rule
– Example: taxi driver agent
• If car-in-front-is-braking then initiate-braking
• Works if the environment is fully observable
Cont..
• Structure of simple reflex agents
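• A minimal sketch of this structure for the vacuum world in
Python (the condition-action rules are hard-coded; names are
illustrative):

    # Simple reflex agent: decides from the CURRENT percept only,
    # ignoring the percept history.
    def simple_reflex_vacuum_agent(percept):
        location, status = percept       # e.g., ("A", "Dirty")
        if status == "Dirty":            # condition-action rules
            return "Suck"
        elif location == "A":
            return "Right"
        else:                            # location == "B"
            return "Left"

    print(simple_reflex_vacuum_agent(("A", "Dirty")))   # -> Suck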
Model-based reflex agents
• Keeps a model of the world
– How the world evolves
– How the agent’s own actions affect the world
• Can handle partially observable cases
• Structure
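• A structural sketch in Python (update_state and rules are
assumed helpers supplied by the designer):

    # Model-based reflex agent: maintains internal state, updated by
    # a model of how the world evolves and how actions affect it.
    def make_model_based_agent(update_state, rules):
        state = None          # the agent's best guess of the world state
        last_action = None

        def program(percept):
            nonlocal state, last_action
            state = update_state(state, last_action, percept)
            last_action = rules(state)   # condition-action rules on state
            return last_action

        return program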
Goal Based Agents
• Knowing about the current state of the
environment is not always enough to decide
what to do
– For example, at a road junction, the taxi can turn
left, turn right, or go straight on
• The correct decision depends on where the
taxi is trying to get to
• The agent needs some sort of goal information
Cont..
• Structure of goal-based agents
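• A structural sketch in Python (predict_result and goal_test are
assumed helpers):

    # Goal-based agent: picks an action whose predicted outcome
    # satisfies the goal (e.g., the turn at a junction that leads
    # toward the taxi's destination).
    def goal_based_action(state, actions, predict_result, goal_test):
        for action in actions:
            if goal_test(predict_result(state, action)):
                return action
        return None   # no single action reaches the goal; planning needed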
Utility-based agents
• Goals alone are not really enough to generate
high-quality behavior
– For example, there are many action sequences
that will get the taxi to its destination
• some are quicker, safer, more reliable, or cheaper than
others
• The word "utility" here refers to "the quality of
being useful"
Cont..
• Structure of utility-based agents
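• A structural sketch in Python (predict_result and utility are
assumed helpers):

    # Utility-based agent: among the available actions, choose the
    # one whose predicted outcome scores highest on the utility
    # function (quicker, safer, cheaper routes score higher).
    def utility_based_action(state, actions, predict_result, utility):
        return max(actions, key=lambda a: utility(predict_result(state, a)))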
Learning Agents
• All agents come into being by learning
• A learning agent has four conceptual
components:
– The learning element
– Performance element
– Critic
– Problem generator
Cont.
• Learning element
– Makes improvements
– Takes feedback from the critic and modifies the
performance element
– Its design depends on the design of the
performance element
Cont..
• Performance element
– Selects external actions
• Critic
– Tells the learning element how the agent is doing
– The agent must not modify it
– It provides an indication of the agent’s success or failure
with respect to some performance measure
Cont..
• Problem generator
– Suggests actions for future improvement
• A general model of learning agents
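• A structural sketch of these four components in Python (all
component callables are assumed, illustrative interfaces):

    # Learning agent: wires together the four conceptual components.
    class LearningAgent:
        def __init__(self, performance_element, learning_element,
                     critic, problem_generator):
            self.performance_element = performance_element  # selects actions
            self.learning_element = learning_element        # makes improvements
            self.critic = critic                            # judges success
            self.problem_generator = problem_generator      # suggests experiments

        def step(self, percept):
            feedback = self.critic(percept)   # success/failure w.r.t. the measure
            # The learning element uses feedback to modify the
            # performance element.
            self.learning_element(feedback, self.performance_element)
            exploratory = self.problem_generator(percept)
            # Try an informative exploratory action, else act normally.
            return exploratory or self.performance_element(percept)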
