Agents: G.Victo Sudha George
• Systems that act like humans
• The art of creating machines that perform functions that require intelligence when performed by people. (Kurzweil, 1990)
• Agent program: Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct: the agent function is an abstract mathematical description, while the agent program is a concrete implementation, running on the agent architecture.
• To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Figure 1.3. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 1.4.
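As a rough sketch in Python (the percept encoding and table entries below are illustrative assumptions, not the actual contents of Figure 1.4), the agent function can be written down as a lookup table, while the agent program is a short procedure that computes the same mapping:

    # Agent function: an abstract mapping from percept sequences to actions,
    # shown here as a partial lookup table (compare Figure 1.4).
    AGENT_FUNCTION = {
        (("A", "Clean"),):                "Right",
        (("A", "Dirty"),):                "Suck",
        (("B", "Clean"),):                "Left",
        (("B", "Dirty"),):                "Suck",
        (("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Dirty")): "Suck",
        # ... one entry for every possible percept sequence
    }

    # Agent program: a concrete implementation that produces the same behaviour
    # without storing the table explicitly.
    def reflex_vacuum_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))   # -> "Suck"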
Rational Agent
• A rational agent is one that does the right thing, i.e., every entry in the table for the agent function is filled out correctly. The right action is the one that will cause the agent to be most successful.
• Performance measures
A performance measure represents the criterion for success of an agent's behavior. When
an agent is placed in an environment, it generates a sequence of actions according to
the percepts it receives. This sequence of actions causes the environment to go through
a sequence of states. If the sequence is desirable, then the agent has performed well.
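For example, a plausible performance measure for the vacuum world (the scoring rule here is an illustrative assumption) awards one point for every clean square at every time step:

    def performance_measure(state_history):
        # One point per clean square per time step (illustrative scoring rule).
        score = 0
        for state in state_history:            # state maps square -> "Clean"/"Dirty"
            score += sum(1 for status in state.values() if status == "Clean")
        return score

    history = [{"A": "Dirty", "B": "Clean"},   # t = 0: one clean square
               {"A": "Clean", "B": "Clean"}]   # t = 1: two clean squares
    print(performance_measure(history))        # -> 3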
• Rationality, at any given time, depends on four things:
o The performance measure that defines the criterion of success.
o The agent's prior knowledge of the environment.
o The actions that the agent can perform.
o The agent's percept sequence to date.
• This leads to a definition of a rational agent:
• For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
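Read operationally, this definition is an argmax over the available actions. A minimal sketch (the function names and scores are assumptions for illustration):

    def rational_action(percepts, actions, expected_performance):
        # Choose the action with the highest expected performance,
        # given the percept sequence seen so far.
        return max(actions, key=lambda a: expected_performance(percepts, a))

    # Hypothetical estimate for the vacuum world: sucking a dirty square scores.
    def expected_performance(percepts, action):
        location, status = percepts[-1]        # most recent percept
        return 1.0 if (status == "Dirty" and action == "Suck") else 0.0

    print(rational_action([("A", "Dirty")],
                          ["Left", "Right", "Suck", "NoOp"],
                          expected_performance))   # -> "Suck"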
Omniscience, learning, and autonomy
• Table-driven agents
– use a percept sequence/action table in memory to find the next action. They
are implemented by a (large) lookup table.
• Simple reflex agents
– are based on condition-action rules, implemented with an appropriate production
system. They are stateless devices which do not have memory of past world states.
• Agents with memory
– have internal state, which is used to keep track of past states of the world (a small sketch appears after this list).
• Agents with goals
– are agents that, in addition to state information, have goal information that
describes desirable situations. Agents of this kind take future events into
consideration.
• Utility-based agents
– base their decisions on classic axiomatic utility theory in order to act rationally.
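As one illustrative sketch of an agent with memory (the state representation is an assumption, not from the text), the agent can keep an internal model of which squares it has already seen and stop once the whole world is known to be clean:

    class ModelBasedVacuumAgent:
        # Keeps internal state about both squares instead of reacting only
        # to the current percept.
        def __init__(self):
            self.model = {"A": None, "B": None}    # None = status unknown

        def act(self, percept):
            location, status = percept
            self.model[location] = "Clean"         # after this step the square is clean
            if status == "Dirty":
                return "Suck"
            if all(s == "Clean" for s in self.model.values()):
                return "NoOp"                      # everything known clean: do nothing
            return "Right" if location == "A" else "Left"

    agent = ModelBasedVacuumAgent()
    print(agent.act(("A", "Clean")))   # -> "Right" (B's status still unknown)
    print(agent.act(("B", "Clean")))   # -> "NoOp"  (both squares known clean)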