Lecture 1: Introduction


Vietnam National University of HCMC

International University
School of Computer Science and Engineering

Introduction to Artificial Intelligence


(IT097IU)
Lecture 01: Introduction and Intelligent Agents

Instructor: Nguyen Trung Ky

1
Basic information about course

 Monday 10:35–13:05, room A1.109


 From 30/01/2023 to 22/05/2023 (including 2 weeks for
midterm exams)
 Text Book:
[1] Artificial Intelligence: A Modern Approach, Stuart
Russell and Peter Norvig, Fourth Edition.
 References:
[2] Artificial Intelligence: Foundations of Computational
Agents, David L. Poole and Alan K. Mackworth.

2
Basic information about course
 Instructor: Dr. Nguyen Trung Ky
 Ask immediately after class or by appointment via email
[email protected]
 Ph.D. Grenoble Alpes University 2019; second year at IU
 Research on Computational Linguistics (Natural Language Processing,
Natural Language Generation) and Machine Learning.

 Lab Tutor: Dr. Nguyen Trung Ky


 6-8 labs (the first lab begins in week 4), room LA1.302
 each lab runs from 8 A.M. to 12 P.M.

3
Agenda (Part 1)
Week 1  Introduction and Intelligent Agents
Week 2  Introduction and Intelligent Agents
Week 3  States and Searching: Graph Searching Techniques
Week 4  States and Searching: Heuristic Search and More Sophisticated Search
Week 5  Features and Constraints: Constraint Satisfaction Problems
Week 6  Features and Constraints: Constraint Satisfaction Problems (continued)
Week 7  Reasoning Under Uncertainty
Week 8  Reasoning Under Uncertainty (continued)
4
Agenda (Part 2 – Machine Learning)
Week 9   Supervised Learning: Neural Networks
Week 10  Supervised Learning: Neural Networks (continued)
Week 11  Supervised Learning: Support Vector Machine
Week 12  Supervised Learning: Support Vector Machine (continued)
Week 13  Beyond Supervised Learning: Kernels and Clustering
Week 14  Beyond Supervised Learning: Kernels and Clustering (continued)
Week 15  Gaussian Mixture Models and the Expectation-Maximization Algorithm
5
Prerequisites
 Comfortable programming in language such as C (or
Java) or Python
 Some knowledge of algorithmic concepts such as
running times of algorithms
 Ideally, some familiarity with probability (we will go
over this from the beginning but we will cover the
basics only briefly)
 Not scared of mathematics; ideally, some background
in discrete mathematics, able to do simple
mathematical proofs 6
Grading
 Theory: 15 weeks (03 periods/week)
 Practices: 8 weeks (05 periods/week)
 Assignments: 20%
 May discuss with another person (should acknowledge); write up
and code must be your own
 Midterm exams: 30%
 Final exam: 40%
 Homework: 10%
 Attend at least 80% to be eligible for final examination

7
This course

 Focus on general AI techniques that have been useful in
many applications

 Will try to avoid application-specific techniques (still
interesting and worthwhile!)

8
Agenda of today’s lecture

 Introduction
 Intelligent Agents

9
What is artificial intelligence?
Is AI in our daily life?

10
What is artificial intelligence?

Definition from John McCarthy


 It is the science and engineering of making intelligent machines,
especially intelligent computer programs.
 What is Intelligence then?
 Intelligence is the computational part of the ability to achieve goals in
the world. Varying kinds and degrees of intelligence occur in people,
many animals and some machines.
John McCarthy’s what is AI?
http://www-formal.stanford.edu/jmc/whatisai/whatisai.html

11
Four Categories of AI Views

Systems that think like humans     Systems that think rationally
Systems that act like humans       Systems that act rationally

(On "act like humans": focusing on action avoids philosophical
issues such as "is the system conscious?")
(On "act rationally": if our system can be more rational than
humans in some cases, why not?)

• We will follow the "act rationally" or "rational agent" approach
• Agent: something that acts in an environment
• Rational: doing the right thing, i.e., maximizing goal achievement
12
Act Humanly:
Turing Test Approach

 (Human) judge communicates with a human and a


machine over text-only channel,
 Both human and machine try to act like a human,
 Judge tries to tell which is which.
 Numerous variants
(image from http://en.wikipedia.org/wiki/Turing_test)

 Loebner prize (Mitsuku, 2019 winner)

13
Turing Test on unsuspecting judges
 It is possible to (temporarily) fool humans who do not
realize they may be talking to a bot
 ELIZA, the first chatbot program [Weizenbaum 66], rephrases its
partner's statements and questions (~psychotherapist)

14
Is Turing Test the right goal?

“Aeronautical engineering texts do not define the goal of


their field as making ‘machines that fly so exactly like
pigeons that they can fool even other pigeons.’” [Russell
and Norvig]

15
Foundations of AI
Philosophy            logic, methods of reasoning, mind as physical system,
                      foundations of learning, language, rationality
Mathematics           formal representation and proof, algorithms,
                      computation, (un)decidability, (in)tractability, probability
Economics             utility, decision theory, game theory
Neuroscience          physical substrate for mental activity
Psychology            phenomena of perception and motor control,
                      experimental techniques
Computer Engineering  building fast computers
Control Theory        design systems that maximize an objective function over time
Linguistics           knowledge representation, natural language processing
16
History of AI
1943     McCulloch & Pitts: Boolean circuit model of brain
1950     Turing's "Computing Machinery and Intelligence"
1956     Dartmouth meeting: "Artificial Intelligence" adopted
1952–69  "Look, Ma, no hands!"
1950s    Early AI programs, including Samuel's checkers program,
         Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
1965     Robinson's complete algorithm for logical reasoning
1966–73  AI discovers computational complexity;
         neural network research almost disappears
1969–79  Early development of knowledge-based systems
1980–    AI becomes an industry
1986–    Neural networks return to popularity
1987–    AI becomes a science
1995–    The emergence of intelligent agents
2001–    Availability of very large datasets (big data, deep learning)
17
Major Branches of AI
Machine Learning
 Enables a computer to learn from past experience in order to predict
outcomes (supervised learning, unsupervised learning, reinforcement
learning).
Natural Language Processing
 The process of making a machine read, decipher, understand, and make
sense of human language.
Expert System
 Enables the computer to mimic the decision-making ability of humans
(user interfaces, inference engine and knowledge base).
Computer vision
 Program computers to perceive and understand visual information in the
same way that humans can.
Robotics
 Help humans with tedious and bulky tasks (control of computer systems,
manufacturing of automobiles).
18
Some AI Examples

https://www.youtube.com/watch?v=cdgQpa1pUUE (Self Driving Car)


https://www.youtube.com/watch?v=8dMFJpEGNLQ (Alpha Go)
https://www.youtube.com/watch?v=QdQL11uWWcI (Honda’s Asimo)
https://www.youtube.com/watch?v=x89r6X-7lIg (MIT’s Nao)

19
Self Driving Car

20
AlphaGo

21
Honda’s Asimo Robot

22
NAO Robotics


23
Image to Text Application

image2text.ipynb - Colaboratory (google.com)

24
Text to Image Application
Text to Image (CC12M Diffusion).ipynb - Colaboratory (google.com)

25
Agenda of today’s lecture

 Introduction
 Intelligent Agents

26
Intelligent Agents
 Agent and environments
 Nature of environments influences agent design
 Basic “skeleton” agent designs

27
Agents
An agent is anything that can be viewed as
perceiving its environment through sensors and
acting upon that environment through actuators

Examples:
Human agent
Robotic agent
Software agent

28
Terminologies
Percept: the agent’s perceptual inputs
Percept sequence: the complete history of
everything the agent has perceived
Agent function maps any given percept sequence to
an action (f: P* → A)
The agent program runs on the physical architecture
to produce f
Agent = architecture + program

29
Questions
Can there be more than one agent program
that implements a given agent function?

Given a fixed machine architecture, does


each agent program implement exactly one
agent function?

30
Vacuum-Cleaner World

Percepts: location and contents, e.g., [A, Dirty]


Actions: Left, Right, Suck, NoOp

31
A Simple Agent Function
Percept sequence Action
[A, Clean] Right
[A, Dirty] Suck
[B, Clean] Left
[B, Dirty] Suck
[A, Clean], [A, Clean] Right
[A, Clean], [A, Dirty] Suck
… …
[A, Clean], [A, Clean], [A, Clean] Right
[A, Clean], [A, Clean], [A, Dirty] Suck
… …

32
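The first rows of this table can be turned directly into code; a minimal Python sketch (names and data layout are illustrative assumptions, not a prescribed implementation):

```python
# Partial agent function for the vacuum world, written as a lookup
# keyed by the percept sequence seen so far (names hypothetical).
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def agent_function(percept_sequence):
    """Map a percept sequence (a list of (location, status) percepts) to an action."""
    return TABLE[tuple(percept_sequence)]

print(agent_function([("A", "Dirty")]))                  # Suck
print(agent_function([("A", "Clean"), ("A", "Clean")]))  # Right
```

Note that the table is indexed by the *entire* sequence, so even this tiny world needs an entry for every possible history.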
Rationality
An agent should "do the right thing", based on what it can
perceive and the actions it can perform. The right action is the
one that will cause the agent to be most successful

Performance measure: An objective criterion for success of an


agent's behavior

Back to the vacuum-cleaner example


 Amount of dirt cleaned within certain time
 +1 credit for each clean square per unit time

33
Rational Agent
Definition:
 For each possible percept sequence, a rational
agent should select an action that is expected
to maximize its performance measure, given
the evidence provided by the percept
sequence and whatever built-in knowledge
the agent has.

34
Rational Agent
Definition:
 For each possible percept sequence, a rational
agent should select an action that is expected
to maximize its performance measure, given
the evidence provided by the percept
sequence and whatever built-in knowledge
the agent has.

35
Vacuum–Cleaner Example
A simple agent that cleans a square if it is dirty and
moves to the other square if not
Is it rational?
Assumption:
performance measure: 1 point for each clean square at each
time step
environment is known a priori
actions = {left, right, suck, no-op}
agent is able to perceive the location and dirt in that location
Given different assumption, it might not be rational
anymore

36
Omniscience, Learning and Autonomy
Distinction between rationality and omniscience
 expected performance vs. actual performance
Agents can perform actions in order to modify future
percepts so as to obtain useful information
(information gathering, exploration)
An agent can also learn from what it perceives
An agent is autonomous if its behavior is determined
by its own experience (with ability to learn and
adapt)

37
Questions

Discuss possible agent designs for the cases


in which clean squares can become dirty and
the geography of the environment is
unknown.

38
Intelligent Agents

 Agent and environments


 Nature of environments influences agent design
 Basic “skeleton” agent designs

39
PEAS
Specifying the task environment is always
the first step in designing an agent

PEAS:
Performance, Environment, Actuators, Sensors

40
Taxi Driver Example
Performance Measure   safe, fast, legal, comfortable trip, maximize profits
Environment           roads, other traffic, pedestrians, customers
Actuators             steering, accelerator, brake, signal, horn, display
Sensors               camera, sonar, speedometer, GPS, odometer,
                      engine sensors, keyboard, accelerometer

DARPA urban challenge 07:


http://www.youtube.com/watch?v=SQFEmR50HAk
41
Medical Diagnosis System

Performance Measure   healthy patient, minimize costs, lawsuits
Environment           patient, hospital, staff
Actuators             display questions, tests, diagnoses, treatments, referrals
Sensors               keyboard entry of symptoms, findings, patient's answers

42
Mushroom-Picking Robot

Performance Measure   percentage of good mushrooms in the correct bins
Environment           conveyor belt with mushrooms, bins
Actuators             jointed arm and hand
Sensors               camera, joint angle sensors

43
Properties of Task Environments
Fully observable (vs. partially observable):
An agent's sensors give it access to the complete state of
the environment at each point in time

Deterministic (vs. stochastic):


next state of the env. determined by current state and the
agent’s action
If the environment is deterministic except for the actions of
other agents, then the environment is strategic

Episodic (vs. sequential):


Agent's experience is divided into atomic "episodes"
Choice of action in each episode depends only on the
episode itself

44
Properties of Task Environments
Static (vs. dynamic):
The environment is unchanged while an agent is deliberating
Semi-dynamic if the environment itself doesn’t change with
time but the agent's performance score does

Discrete (vs. continuous):


A limited number of distinct, clearly defined percepts and
actions

Single agent (vs. multi-agent):


An agent operating by itself in an environment
Competitive vs. cooperative

45
Examples
Task Environment    Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle    fully       deterministic  sequential  static   discrete    single
Chess with a clock  fully       strategic      sequential  semi     discrete    multi
Taxi driver         partially   stochastic     sequential  dynamic  continuous  multi
Mushroom-picking    partially   stochastic     episodic    dynamic  continuous  single

The environment type largely determines the agent design

The real world is (of course) partially observable, stochastic,


sequential, dynamic, continuous, multi-agent

46
Exercises
Develop PEAS description for the following
task environment:
Robot soccer player
Shopping for used AI books on the Internet

Analyze the properties of the above


environments

47
Intelligent Agents

 Agent and environments


 Nature of environments influences agent design
 Basic “skeleton” agent designs

49
Agent = Architecture + Program
The job of AI is to design the agent program that
implements the agent function mapping percepts
to actions

Aim: find a way to implement the rational agent


function concisely

Same skeleton for agent program: it takes the


current percept as input from the sensors and
returns an action to the actuators

50
Agent Program vs. Agent Function
Agent program takes the current percept as
input
Nothing is available from the environment

Agent function takes the entire percept


history
To do this, remember all the percepts

51
Table-Driven Agent
Designer needs to construct a table that
contains the appropriate action for
every possible percept sequence

Drawbacks?
huge table
take a long time to construct such a table
no autonomy
even with learning, need a long time to
learn the table entries
52
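The table-driven design can be sketched in a few lines of Python: the program appends each percept to a growing history and looks the full sequence up in the table (names are hypothetical; this illustrates the idea, not any particular textbook listing):

```python
def make_table_driven_agent(table):
    """Return an agent program that remembers all percepts and looks
    the full sequence up in a (potentially huge) table."""
    percepts = []  # grows without bound -- one source of the drawbacks above

    def program(percept):
        percepts.append(percept)
        # Fall back to NoOp for sequences missing from the table.
        return table.get(tuple(percepts), "NoOp")

    return program

agent = make_table_driven_agent({(("A", "Dirty"),): "Suck"})
print(agent(("A", "Dirty")))  # Suck
```

The unbounded percept list makes the drawbacks concrete: the table must cover every possible history, not just every possible percept.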
Five Basic Agent Types
Arranged in order of increasing generality:
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents; and
Learning agents

53
Simple Reflex Agent

54
Pseudo-Code

Example: write a simple reflex agent for the vacuum


cleaner example

55
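One possible answer to the exercise above, as a Python sketch (percept and action names follow the vacuum world used earlier; the function name is hypothetical):

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: the decision depends only on the
    current percept, with no memory of earlier percepts."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Unlike the table-driven agent, this program is constant-size: it encodes condition-action rules rather than enumerating percept histories.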
Infinite loops are often unavoidable for simple reflex
agents operating in partially observable environments
No location sensor

Randomization will help


A randomized simple reflex agent might outperform
a deterministic simple reflex agent

Better way: keep track of the part of the world it


can’t see now
Maintain internal states

56
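A randomized variant can be sketched as follows, assuming the agent has no location sensor and perceives only the dirt status (names are hypothetical):

```python
import random

def randomized_reflex_vacuum_agent(status):
    """Without a location sensor, a deterministic rule like "always
    move Left when clean" can loop forever against a wall; choosing
    the direction at random eventually escapes."""
    if status == "Dirty":
        return "Suck"
    return random.choice(["Left", "Right"])
```

Randomization sidesteps the infinite loop, but keeping internal state (next slide) is usually the better fix.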
Model-Based Reflex Agent

57
Pseudo-Code

58
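A minimal Python sketch of the model-based design for the vacuum world, assuming the internal state simply records the last known status of each square (the state representation and stopping rule are illustrative assumptions):

```python
def make_model_based_vacuum_agent():
    """Model-based reflex agent: maintains internal state so it can
    act sensibly on parts of the world it cannot currently see."""
    model = {"A": None, "B": None}  # last known status of each square

    def program(percept):
        location, status = percept
        model[location] = status          # update state from the current percept
        if status == "Dirty":
            return "Suck"
        if model["A"] == model["B"] == "Clean":
            return "NoOp"                 # believes both squares are clean: stop
        return "Right" if location == "A" else "Left"

    return program
```

The only difference from the simple reflex agent is the `model` dictionary, which lets the agent stop once it believes the whole world is clean.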
Goal-Based Agent

59
Utility-Based Agent

60
Utility Function
Utility function maps a state or a sequence of
states onto a real number describing the degree of
happiness

Conflicting goals
Speed and safety
Multiple goals

61
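A toy illustration of a utility function trading off the conflicting goals of speed and safety (the weights and state representation are hypothetical):

```python
def taxi_utility(state, w_speed=1.0, w_risk=5.0):
    """Toy utility for the taxi example: reward speed, penalize risk.
    Conflicting goals are resolved by a weighted trade-off."""
    return w_speed * state["speed"] - w_risk * state["risk"]

fast_risky = {"speed": 100, "risk": 10}
slow_safe = {"speed": 60, "risk": 1}
print(taxi_utility(fast_risky))  # 50.0
print(taxi_utility(slow_safe))   # 55.0 -> the safer trip is preferred
```

A utility-based agent would pick the action leading to the state with the higher utility, here the slower but safer trip.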
Learning Agent

Components (annotations from the learning agent diagram):
• Critic: determines performance
• Performance element: selects the best action
• Learning element: makes improvements
• Problem generator: suggests exploratory actions

62
Exercise
Select a suitable agent design for:
Robot soccer player
Shopping for used AI books on the Internet

63
Summary
Agent, agent function, agent program
Rational agent and its performance measure
PEAS
Five major agent program skeletons

Next class: solving problems by searching

64
