
Agents

G.Victo Sudha George


What is Artificial Intelligence?
• Artificial Intelligence is the branch of computer science concerned with making computers
behave like humans.
• Systems that think like humans
• "The exciting new effort to make computers think … machines with minds, in the full and literal sense." (Haugeland, 1985)

 
• Systems that act like humans
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)

• Systems that think rationally


• "The study of mental faculties through the use of computer models." (Charniak and McDermott, 1985)
• Systems that act rationally
• "Computational intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
The four approaches, in more detail, are as follows:
(a) Acting humanly: The Turing Test approach

• Test proposed by Alan Turing in 1950


• To pass the test, the computer needs to possess the following capabilities:

• Natural language processing to enable it to communicate successfully in English.


• Knowledge representation to store what it knows or hears.
• Automated reasoning to use the stored information to answer questions and to draw new conclusions.
• Machine learning to adapt to new circumstances and to detect and extrapolate patterns.
• To pass the complete Turing Test, the computer will also need:
• Computer vision to perceive objects, and
• Robotics to manipulate objects and move about.
(b) Thinking humanly: The cognitive modeling approach

• We need to get inside the actual workings of the human mind:


• (a) through introspection, trying to capture our own thoughts as they go by;
(b) through psychological experiments.
• Allen Newell and Herbert Simon developed GPS, the "General Problem Solver".
• The interdisciplinary field of cognitive science brings
together computer models from AI and experimental
techniques from psychology to try to construct precise and
testable theories of the workings of the human mind.
(c) Thinking rationally: The "laws of thought" approach

• Aristotle's syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises.
• For example: "Socrates is a man; all men are mortal; therefore Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated a field called logic.
(d) Acting rationally: The rational agent approach

• An agent is something that acts. Computer agents are not mere programs; they are expected to have the following attributes as well: (a) operating under autonomous control, (b) perceiving their environment, (c) persisting over a prolonged time period, (d) adapting to change.
• A rational agent is one that acts so as to achieve the best outcome.
1.2 INTELLIGENT AGENTS
 

• 1.2.1 Agents and environments


An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in Figure 1.2.
 
• A human agent has eyes, ears, and other organs for sensors and hands, legs, mouth,
and other body parts for actuators.
• A robotic agent might have cameras and infrared range finders for sensors and various
motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as sensory
inputs and acts on the environment by displaying on the screen, writing files, and
sending network packets.
• Percept
We use the term percept to refer to the agent's perceptual inputs at any given instant.
• Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
• Agent function
Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action.

• Agent program
Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct: the agent function is an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.
• To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Figure 1.3. This particular world has just two locations:
squares A and B. The vacuum agent perceives which
square it is in and whether there is dirt in the square. It
can choose to move left, move right, suck up the dirt, or
do nothing. One very simple agent function is the
following: if the current square is dirty, then suck,
otherwise move to the other square. A partial
tabulation of this agent function is shown in Figure 1.4.
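The agent function just described is simple enough to sketch in code. Below is a minimal Python sketch; the percept format (a location/status pair) and the action names are illustrative choices for this example, not a fixed specification:

    # Minimal sketch of the vacuum-cleaner agent function:
    # a percept is a (location, status) pair, e.g. ("A", "Dirty").
    def vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"     # clean the current square
        elif location == "A":
            return "Right"    # otherwise move to the other square
        else:
            return "Left"

    # Partial tabulation of the agent function, in the spirit of Figure 1.4:
    for percept in [("A", "Clean"), ("A", "Dirty"), ("B", "Clean"), ("B", "Dirty")]:
        print(percept, "->", vacuum_agent(percept))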
Rational Agent
 
• A rational agent is one that does the right thing, i.e., every entry in the table for the agent function is filled out correctly. The right action is the one that will cause the agent to be most successful.
• Performance measures
A performance measure represents the criterion for success of an agent's behavior. When
an agent is placed in an environment, it generates a sequence of actions according to
the percepts it receives. This sequence of actions causes the environment to go through
a sequence of states. If the sequence is desirable, then the agent has performed well. 
• Rationality, at any given time, depends on four things:
o The performance measure that defines the criterion of success.
o The agent's prior knowledge of the environment.
o The actions that the agent can perform.
o The agent's percept sequence to date.
 
• This leads to a definition of a rational agent:
• For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
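Read as code, this definition is a maximization over actions. The sketch below is schematic: expected_performance is a hypothetical stand-in for whatever model combines the agent's built-in knowledge with the evidence in its percept sequence.

    # Schematic rational-agent choice: pick the action with the highest
    # expected performance, given the percept sequence seen so far.
    # expected_performance is a hypothetical stand-in for the agent's model.
    def rational_action(actions, percept_sequence, expected_performance):
        return max(actions, key=lambda a: expected_performance(a, percept_sequence))

    # Toy usage: in a dirty square, "Suck" scores highest.
    scores = {"Suck": 1.0, "Left": 0.0, "Right": 0.0}
    print(rational_action(list(scores), [("A", "Dirty")],
                          lambda a, seq: scores[a]))   # -> Suck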
Omniscience, learning, and autonomy

• An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.
• Doing actions in order to modify future percepts, sometimes called information gathering, is an important part of rationality.
• A rational agent should gather information and also learn as
much as possible from what it perceives.
• To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.
• A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
Task environments

• Task environments are essentially the "problems" to which rational agents are the "solutions."
• Specifying the task environment
The rationality of the simple vacuum-cleaner agent needs specification of
o the performance measure
o the environment 
o the agent's actuators and sensors.  
 
All these are grouped together under the heading of the task environment.
We call this the PEAS (Performance, Environment, Actuators, Sensors)
description.
• In designing an agent, the first step must always be to specify the task
environment as fully as possible.
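A PEAS description can be written down as a small record. The sketch below fills in plausible values for the vacuum-cleaner world discussed above; the field values are an illustrative reading, not a canonical specification:

    from dataclasses import dataclass

    # A PEAS description grouped into one record.
    @dataclass
    class PEAS:
        performance: list   # criteria for success
        environment: list   # what the agent operates in
        actuators: list     # how the agent acts
        sensors: list       # how the agent perceives

    # Illustrative PEAS description for the simple vacuum-cleaner agent.
    vacuum_peas = PEAS(
        performance=["amount of dirt cleaned", "number of moves made"],
        environment=["squares A and B", "dirt"],
        actuators=["move left", "move right", "suck"],
        sensors=["location sensor", "dirt sensor"],
    )
    print(vacuum_peas)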
Properties of task environments

• Fully observable vs. partially observable
• Deterministic vs. stochastic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent
Fully observable vs. partially observable.

• If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action.
• An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data.
Deterministic vs. stochastic.

• If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
Episodic vs. sequential

• In an episodic task environment, the agent's experience is divided into atomic episodes. Each episode consists of the agent perceiving and then performing a single action. The next episode does not depend on the actions taken in previous episodes.
 
• Example: an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions. In sequential environments, the current decision could affect all future decisions.
• E.g., chess and taxi driving are sequential.
Discrete vs. continuous.
• The discrete or continuous distinction can be applied to the state of
the environment, to the way time is handled, and to the percepts
and actions of the agent.
• For example, a discrete-state environment such as a chess game
has a finite number of distinct states. Chess also has a discrete set
of percepts and actions.
• Taxi driving is a continuous-state and continuous-time problem:
the speed and location of the taxi and of the other vehicles sweep
through a range of continuous values and do so smoothly over
time. Taxi-driving actions are also continuous (steering angles, etc.).
 
Single agent vs. multiagent.

• An agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment.

• Note: As one might expect, the hardest case is partially observable, stochastic, sequential, dynamic, continuous, and multiagent.
Agent programs

• The agent programs all have the same skeleton: they take the current percept as input from the sensors and return an action to the actuators. Notice the difference between the agent program, which takes the current percept as input, and the agent function, which takes the entire percept history. The agent program takes just the current percept as input because nothing more is available from the environment; if the agent's actions depend on the entire percept sequence, the agent will have to remember the percepts.
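This skeleton can be sketched as follows. The sketch assumes a table-driven scheme (discussed on the next slides): the program receives only the current percept and appends it to a remembered history before looking up an action. The names and the "NoOp" default are illustrative.

    # Sketch of the agent-program skeleton: current percept in, action out.
    # Because only the current percept is available, the program itself
    # must remember the history if the agent function depends on it.
    percepts = []   # remembered percept history

    def table_driven_agent(percept, table):
        percepts.append(percept)
        # Look up the action for the entire percept sequence seen so far.
        return table.get(tuple(percepts), "NoOp")

    # Toy table for the two-square vacuum world:
    table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"), ("B", "Dirty")): "Suck",
    }
    print(table_driven_agent(("A", "Clean"), table))   # -> Right
    print(table_driven_agent(("B", "Dirty"), table))   # -> Suck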
Drawbacks

• A table lookup of percept-action pairs, defining all possible condition-action rules, is necessary to interact in an environment.
• Problems
– Too big to generate and to store (Chess has about 10^120 states, for
example)
– No knowledge of non-perceptual parts of the current state
– Not adaptive to changes in the environment; requires entire table to be
updated if changes occur
– Looping: Can't make actions conditional
 
• Takes a long time to build the table
• No autonomy
• Even with learning, it would take a long time to learn the table entries
Some Agent Types

• Table-driven agents
– use a percept sequence/action table in memory to find the next action. They are implemented by a (large) lookup table.
• Simple reflex agents
– are based on condition-action rules, implemented with an appropriate production system. They are stateless devices which do not have memory of past world states (see the sketch after this list).
• Agents with memory
– have internal state, which is used to keep track of past states of the world.
• Agents with goals
– are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
• Utility-based agents
– base their decisions on classic axiomatic utility theory in order to act rationally.
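To make the simple reflex type concrete, here is a minimal agent built from condition-action rules. The rule representation (a list of condition/action pairs where the first matching rule fires) is one plausible encoding for this sketch, not the only one:

    # Minimal simple reflex agent: stateless condition-action rules.
    # Each rule is a (condition, action) pair; the first match fires.
    rules = [
        (lambda p: p[1] == "Dirty", "Suck"),
        (lambda p: p[0] == "A", "Right"),
        (lambda p: p[0] == "B", "Left"),
    ]

    def simple_reflex_agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"   # no rule matched

    print(simple_reflex_agent(("A", "Dirty")))   # -> Suck
    print(simple_reflex_agent(("B", "Clean")))   # -> Left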
