Lecture 1: Intro
International University
School of Computer Science and Engineering
Basic information about course
Instructor: Dr. Nguyen Trung Ky
Ask immediately after class or by appointment via email:
[email protected]
Ph.D., Grenoble Alpes University, 2019; second year at IU
Research on Computational Linguistics (Natural Language Processing,
Natural Language Generation) and Machine Learning.
Agenda (Part 1)
Week 1: Introduction and Intelligent Agents
Week 2: Introduction and Intelligent Agents
This course
Agenda of today’s lecture
Introduction
Intelligent Agents
What is artificial intelligence?
Is AI in our daily life?
What is artificial intelligence?
Four Categories of AI Views
If our system can be more rational than humans in some cases, why not?
Turing Test on unsuspecting judges
It is possible to (temporarily) fool humans who do not
realize they may be talking to a bot
ELIZA, the first chatbot program [Weizenbaum 66], rephrases its
partner’s statements and questions (~psychotherapist)
Is Turing Test the right goal?
Foundations of AI
Philosophy: logic, methods of reasoning, mind as physical
system, foundations of learning, language, rationality
Mathematics: formal representation and proof, algorithms,
computation, (un)decidability, (in)tractability, probability
Economics: utility, decision theory, game theory
Self-Driving Car
AlphaGo
Honda’s Asimo Robot
NAO Robotics
Image-to-Text Application
Text-to-Image Application
Text to Image (CC12M Diffusion).ipynb - Colaboratory (google.com)
Agenda of today’s lecture
Introduction
Intelligent Agents
Intelligent Agents
Agent and environments
Nature of environments influences agent design
Basic “skeleton” agent designs
Agents
An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators
Examples:
Human agent
Robotic agent
Software agent
Terminologies
Percept: the agent’s perceptual inputs
Percept sequence: the complete history of
everything the agent has perceived
Agent function maps any given percept sequence to
an action (f: P* → A)
The agent program runs on the physical architecture
to produce f
Agent = architecture + program
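As a minimal sketch (the function names and the trivial policy are ours, not from the slides), an agent program can record the percept history and thereby implement an agent function f: P* → A:

```python
from typing import List

def agent_program(percept: str, history: List[str]) -> str:
    """Record the percept, then map the percept sequence to an action."""
    history.append(percept)
    # Trivial illustrative policy: only the latest percept matters.
    return "Suck" if percept.endswith("Dirty") else "Right"

history: List[str] = []
print(agent_program("A,Clean", history))  # Right
print(agent_program("A,Dirty", history))  # Suck
```

The program itself only sees the current percept; the history it maintains is what lets it realize a function over percept sequences.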
Questions
Can there be more than one agent program
that implements a given agent function?
Vacuum-Cleaner World
A Simple Agent Function
Percept sequence                        Action
[A, Clean]                              Right
[A, Dirty]                              Suck
[B, Clean]                              Left
[B, Dirty]                              Suck
[A, Clean], [A, Clean]                  Right
[A, Clean], [A, Dirty]                  Suck
…                                       …
[A, Clean], [A, Clean], [A, Clean]      Right
[A, Clean], [A, Clean], [A, Dirty]      Suck
…                                       …
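The partial agent function above can be written down directly as a lookup table keyed by the percept sequence seen so far; a hedged sketch (names are ours):

```python
# Each key is a tuple of (location, status) percepts; the full table would
# need an entry for every possible percept sequence, hence its size.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # grows with every percept, since the table is keyed on history

def lookup_agent(percept):
    percepts.append(percept)
    return table[tuple(percepts)]

print(lookup_agent(("A", "Clean")))  # Right
print(lookup_agent(("A", "Dirty")))  # Suck
```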
Rationality
An agent should "do the right thing", based on what it can
perceive and the actions it can perform. The right action is the
one that will cause the agent to be most successful
Rational Agent
Definition:
For each possible percept sequence, a rational
agent should select an action that is expected
to maximize its performance measure, given
the evidence provided by the percept
sequence and whatever built-in knowledge
the agent has.
Vacuum-Cleaner Example
A simple agent that cleans a square if it is dirty and
moves to the other square if not
Is it rational?
Assumption:
performance measure: 1 point for each clean square at each
time step
environment is known a priori
actions = {left, right, suck, no-op}
agent is able to perceive the location and dirt in that location
Given different assumptions, it might not be rational
anymore
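Under the stated assumptions (1 point per clean square at each time step, known two-square environment), the agent's score can be checked with a small simulation; a sketch with illustrative names:

```python
def simulate(dirt, steps=4):
    """Run the 'suck if dirty, else move' agent; +1 per clean square per step."""
    loc = "A"
    score = 0
    for _ in range(steps):
        score += sum(1 for sq in ("A", "B") if not dirt[sq])  # measure first
        if dirt[loc]:
            dirt[loc] = False                   # Suck
        else:
            loc = "B" if loc == "A" else "A"    # move to the other square
    return score

# Both squares dirty at the start: the agent cleans A, crosses over, cleans B.
print(simulate({"A": True, "B": True}, steps=4))
```

Changing the assumptions (e.g. charging for movement, or an unknown geography) changes which behavior maximizes the measure, which is the point of the slide.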
Omniscience, Learning and Autonomy
Distinction between rationality and omniscience
expected performance vs. actual performance
Agents can perform actions in order to modify future
percepts so as to obtain useful information
(information gathering, exploration)
An agent can also learn from what it perceives
An agent is autonomous if its behavior is determined
by its own experience (with ability to learn and
adapt)
Questions
Intelligent Agents
PEAS
Specifying the task environment is always
the first step in designing an agent
PEAS:
Performance, Environment, Actuators, Sensors
Taxi Driver Example
Performance Measure: safe, fast, legal, comfortable trip, maximize profits
Environment: roads, other traffic, pedestrians, customers
Actuators: steering, accelerator, brake, signal, horn, display
Sensors: camera, sonar, speedometer, GPS, odometer, engine sensors, keyboard, accelerometer
Mushroom-Picking Robot
Properties of Task Environments
Fully observable (vs. partially observable):
An agent's sensors give it access to the complete state of
the environment at each point in time
Properties of Task Environments
Static (vs. dynamic):
The environment is unchanged while an agent is deliberating
Semi-dynamic if the environment itself doesn’t change with
time but the agent's performance score does
Examples
Task Environment    Observable  Deterministic  Episodic    Static  Discrete  Agents
Crossword puzzle    fully       deterministic  sequential  static  discrete  single
Exercises
Develop PEAS descriptions for the following
task environments:
Robot soccer player
Shopping for used AI books on the Internet
Intelligent Agents
Agent = Architecture + Program
The job of AI is to design the agent program that
implements the agent function mapping percepts
to actions
Agent Program vs. Agent Function
Agent program takes the current percept as input;
nothing more is available from the environment
(if the actions depend on the entire percept
sequence, the program must remember the percepts)
Table-Driven Agent
Designer needs to construct a table that
contains the appropriate action for
every possible percept sequence
Drawbacks?
huge table
takes a long time to construct such a table
no autonomy
even with learning, would need a long time to
learn the table entries
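The "huge table" drawback can be made concrete: with P distinct percepts, the table needs an entry for every percept sequence of length up to the horizon T, i.e. P + P^2 + … + P^T rows. A small illustrative calculation:

```python
def table_entries(num_percepts, horizon):
    """Number of distinct percept sequences of length 1..horizon."""
    return sum(num_percepts ** t for t in range(1, horizon + 1))

# Vacuum world: 2 locations x 2 statuses = 4 possible percepts per step.
print(table_entries(4, 10))  # already over a million entries
```

Even this toy environment exceeds a million table rows after ten time steps, which is why table-driven designs do not scale.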
Five Basic Agent Types
Arranged in order of increasing generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
Learning agents
Simple Reflex Agent
Infinite loops are often unavoidable for a simple reflex
agent operating in partially observable environments
(e.g., with no location sensor)
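A sketch of the reflex vacuum agent program (with a location sensor; names are illustrative). The action depends only on the current percept, so removing the location input would leave no basis for choosing a direction on a Clean percept:

```python
def reflex_vacuum_agent(location, status):
    """Condition-action rules applied to the current percept only."""
    if status == "Dirty":
        return "Suck"
    # Without the location input there is no way to make this choice,
    # which is how infinite loops arise under partial observability.
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("B", "Clean"))  # Left
```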
Model-Based Reflex Agent
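A minimal model-based sketch (class and rule names are ours, not from the slides): internal state remembers which squares are believed clean, so the agent can stop once its model says the whole world is clean, something a simple reflex agent cannot do:

```python
class ModelBasedVacuumAgent:
    """Keeps an internal model of the believed status of each square."""

    def __init__(self):
        self.model = {"A": None, "B": None}  # None = status unknown

    def program(self, location, status):
        # Update the model from the percept (and the predicted effect of Suck).
        self.model[location] = "Clean"  # clean now, or clean after sucking
        if status == "Dirty":
            return "Suck"
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"  # the model says everything is clean: stop
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.program("A", "Dirty"))  # Suck
print(agent.program("A", "Clean"))  # Right (B still unknown)
print(agent.program("B", "Clean"))  # NoOp (model: both squares clean)
```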
Goal-Based Agent
Utility-Based Agent
Utility Function
Utility function maps a state or a sequence of
states onto a real number that describes the
degree of happiness
Conflicting goals: speed and safety
Multiple goals
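A hedged sketch of a utility function trading off the conflicting goals of speed and safety (the weights and state fields are illustrative assumptions):

```python
def utility(state):
    """Map a state to a real number; the weights encode the trade-off."""
    return 1.0 * state["speed"] - 5.0 * state["risk"]

fast_risky = {"speed": 100, "risk": 10}
slow_safe = {"speed": 60, "risk": 1}
best = max([fast_risky, slow_safe], key=utility)  # pick the preferred state
print(utility(fast_risky), utility(slow_safe))    # 50.0 55.0
```

Because the utility is a single real number, states that satisfy different goals to different degrees become directly comparable.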
Learning Agent
Critic: determines performance
Performance element: selects the best action
Learning element: makes improvements
Problem generator: suggests exploratory actions
Exercise
Select a suitable agent design for:
Robot soccer player
Shopping for used AI books on the Internet
Summary
Agent, agent function, agent program
Rational agent and its performance measure
PEAS
Five major agent program skeletons