Lesson 03 - Intelligent Agents


Intelligent Agents

SIT305: Foundations of Artificial Intelligence

In this lesson:
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents
• An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
• Human agent: eyes, ears, and other organs for
sensors; hands, legs, mouth, and other body
parts for actuators
• Robotic agent: cameras and infrared range
finders for sensors; various motors for actuators
Characteristics of intelligent agents
• They have some level of autonomy that allows them to
perform certain tasks on their own.
• They have a learning ability that enables them to learn
even as tasks are carried out.
• They can interact with other entities such as agents,
humans, and systems.
• They can accommodate new rules incrementally.
• They exhibit goal-oriented behavior.
• They are knowledge-based: they use knowledge about
communication, processes, and entities.
The structure of intelligent agents
• Architecture: The machinery or device, equipped with sensors
and actuators, on which the intelligent agent executes.
Examples include a personal computer, a car, or a camera.
• Agent function: A function that maps a given percept sequence
to an action. A percept sequence is the history of everything
the intelligent agent has perceived.
• Agent program: The implementation of the agent function. The
agent function is produced by running the agent program on the
physical architecture.
Agents and environments

• The agent function maps from percept histories to actions:
  f: P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
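A minimal Python sketch of this decomposition; the Environment
interface (percept() and execute()) and all names here are
illustrative assumptions, not from any specific library:

def make_agent_program(agent_function):
    """Wrap the agent function f: P* -> A as an agent program."""
    history = []                          # percept sequence so far
    def program(percept):
        history.append(percept)
        return agent_function(tuple(history))
    return program

def run(environment, program, steps=10):
    """The 'architecture': feed percepts in, carry actions out."""
    for _ in range(steps):
        percept = environment.percept()   # sense
        action = program(percept)         # decide
        environment.execute(action)       # act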
How intelligent agents work
• Sensors: Devices that detect changes in the environment and
pass this information on to other devices. In artificial
intelligence, intelligent agents observe the system's
environment through sensors.
• Actuators: Components that convert energy into motion,
controlling and moving a system. Examples include rails,
motors, and gears.
• Effectors: Devices that act on the environment. Examples
include legs, fingers, wheels, display screens, and arms.
(Diagram: how sensors, the agent program, actuators, and effectors
are positioned in the AI system.)
Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp

(Image: an iRobot vacuum cleaner. iRobot Corporation was founded
by Rodney Brooks of MIT.)
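A minimal Python sketch of a simple reflex agent for this
two-square world (squares A and B; the move rule mirrors the
percept and action lists above):

def reflex_vacuum_agent(percept):
    """Choose an action from the current (location, status) percept."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    if location == 'A':
        return 'Right'
    if location == 'B':
        return 'Left'
    return 'NoOp'

print(reflex_vacuum_agent(('A', 'Dirty')))  # -> Suck
print(reflex_vacuum_agent(('A', 'Clean')))  # -> Right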
Rational agents
An agent should strive to "do the right thing", based on what:
– it can perceive and
– the actions it can perform.

The right action is the one that will cause the agent to be most
successful.

Performance measure: an objective criterion for success of an
agent's behavior.

Performance measures of a vacuum-cleaner agent: amount of dirt
cleaned up, amount of time taken, amount of electricity consumed,
level of noise generated, etc.

Performance measures of a self-driving car: time to reach
destination (minimize), safety, predictability of behavior for
other agents, reliability, etc.

Performance measures of a game-playing agent: win/loss percentage
(maximize), robustness, unpredictability (to "confuse" the
opponent), etc.
Definition of Rational Agent:

For each possible percept sequence, a rational agent should select
an action that maximizes its performance measure (in expectation),
given the evidence provided by the percept sequence and whatever
built-in knowledge the agent has.

Why "in expectation"?

This captures actions with stochastic / uncertain effects, or
actions performed in stochastic environments. We can then look at
the expected value of an action.

In high-risk settings, we may also want to limit the worst-case
behavior.
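A minimal Python sketch of "maximize expected performance": the
outcome probabilities and scores below are made up for
illustration, not taken from the lesson.

# Each action maps to a list of (probability, performance_score)
# outcomes; the numbers are illustrative assumptions.
outcomes = {
    'Suck': [(0.9, 10), (0.1, -5)],  # usually cleans, may scatter dirt
    'NoOp': [(1.0, 0)],
}

def expected_value(action):
    """Expected performance score of an action."""
    return sum(p * score for p, score in outcomes[action])

# A rational agent picks the action with the highest expected value.
best = max(outcomes, key=expected_value)
print(best, expected_value(best))  # -> Suck 8.5

With these made-up numbers, Suck wins with an expected score of
8.5 even though it sometimes backfires.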
Rational agents
Notes:

• Rationality is distinct from omniscience ("all knowing"). We can
behave rationally even when faced with incomplete information.
• Agents can perform actions in order to modify future percepts so
as to obtain useful information: information gathering, exploration.
• An agent is autonomous if its behavior is determined by its own
experience (with the ability to learn and adapt).
Characterizing a Task Environment

Must first specify the setting for intelligent agent design.

PEAS: Performance measure, Environment, Actuators, Sensors

Example: the task of designing a self-driving car
– Performance measure: safe, fast, legal, comfortable trip
– Environment: roads, other traffic, pedestrians
– Actuators: steering wheel, accelerator, brake, signal, horn
– Sensors: cameras, LIDAR (light detection and ranging),
speedometer, GPS, odometer, engine sensors, keyboard
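One way to keep a PEAS description alongside an agent design is as
a simple data structure; a minimal Python sketch using the
self-driving-car example (the PEAS class itself is our own naming,
not a standard artifact):

from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS specification for a task environment."""
    performance: list[str]
    environment: list[str]
    actuators: list[str]
    sensors: list[str]

self_driving_car = PEAS(
    performance=['safe', 'fast', 'legal', 'comfortable trip'],
    environment=['roads', 'other traffic', 'pedestrians'],
    actuators=['steering wheel', 'accelerator', 'brake', 'signal', 'horn'],
    sensors=['cameras', 'LIDAR', 'speedometer', 'GPS', 'odometer',
             'engine sensors', 'keyboard'],
)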
PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions,
tests, diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of
parts in correct bins
• Environment: Conveyor belt with parts,
bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors
PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard
Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode
depends only on the episode itself.
Environment types
• Static (vs. dynamic): The environment is
unchanged while an agent is deliberating. (The
environment is semidynamic if the environment
itself does not change with the passage of time
but the agent's performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent
operating by itself in an environment.
Environment types
                  Chess with   Chess without   Taxi driving
                  a clock      a clock
Fully observable  Yes          Yes             No
Deterministic     Strategic    Strategic       No
Episodic          No           No              No
Static            Semi         Yes             No
Discrete          Yes          Yes             No
Single agent      No           No              No

• The environment type largely determines the agent design
• The real world is (of course) partially observable, stochastic,
sequential, dynamic, continuous, and multi-agent
Agent functions and programs
• An agent is completely specified by the
agent function mapping percept
sequences to actions
• One agent function (or a small
equivalence class) is rational
• Aim: find a way to implement the rational
agent function concisely
Table-lookup agent
• Drawbacks (see the sketch after this list):
– Huge table
– Takes a long time to build the table
– No autonomy
– Even with learning, need a long time to learn
the table entries
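A minimal Python sketch makes the drawbacks concrete: the entire
agent function is stored explicitly, one entry per percept
sequence. Even this toy table for the vacuum world covers only a
few sequences, and a complete table grows exponentially with the
length of the history (the entries shown are illustrative).

# One entry per percept sequence (a tuple of percepts).
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}

percepts = []  # the growing percept sequence

def table_lookup_agent(percept):
    """Look up the action for the full percept sequence so far."""
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')

print(table_lookup_agent(('A', 'Clean')))  # -> Right
print(table_lookup_agent(('B', 'Dirty')))  # -> Suck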
Agent types
Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
Simple reflex agents
Model-based reflex agents
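For instance, a minimal Python sketch of a model-based reflex
agent in the vacuum world: it keeps internal state about which
squares it believes are clean, so it can stop once its model says
the world is clean. The state-update rule is an illustrative
assumption, not a fixed algorithm.

believed_clean = set()  # internal model: squares believed clean

def model_based_vacuum_agent(percept):
    """Update the internal model, then act on it."""
    location, status = percept
    if status == 'Dirty':
        believed_clean.discard(location)
        return 'Suck'
    believed_clean.add(location)
    if {'A', 'B'} <= believed_clean:
        return 'NoOp'  # the model says everything is clean
    return 'Right' if location == 'A' else 'Left'

print(model_based_vacuum_agent(('A', 'Clean')))  # -> Right
print(model_based_vacuum_agent(('B', 'Clean')))  # -> NoOp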
Goal-based agents
Utility-based agents
Learning agents
Applications of intelligent agents
• Information search, retrieval, and
navigation
• Repetitive office activities
• Medical diagnosis
• Vacuum cleaning
• Autonomous driving
Summary
• An agent perceives and acts in an environment, has an architecture, and is
implemented by an agent program.
• A rational agent always chooses the action which maximizes its expected
performance, given its percept sequence so far.
• An autonomous agent uses its own experience rather than built-in knowledge
of the environment by the designer.
• An agent program maps from percept to action and updates its internal state.
– Reflex agents (simple / model-based) respond immediately to percepts.
– Goal-based agents act in order to achieve their goal(s),
possibly through a sequence of steps.
– Utility-based agents maximize their own utility function.
– Learning agents improve their performance through learning.
• Representing knowledge is important for successful agent design.

• The most challenging environments are partially observable, stochastic,
sequential, dynamic, and continuous, and contain multiple intelligent agents.
