ARTIFICIAL INTELLIGENCE
UNIT – I
In today's world, technology is growing very fast, and we are getting in touch with
different new technologies day by day.
AI is one of the most fascinating and universal fields of Computer Science, with great
scope in the future. AI aims to make a machine work like a human.
Artificial Intelligence:
Artificial Intelligence exists when a machine has human-like skills such as
learning, reasoning, and solving problems.
A machine or a system can be called artificially intelligent when it is equipped with
at least one (and at most all) of the following kinds of intelligence:
1. Reasoning
2. Learning
3. Problem Solving
4. Perception
5. Linguistic Intelligence
3. Problem solving: It is the process in which one perceives a present situation
and tries to arrive at a desired solution by taking some path, which may be
blocked by known or unknown hurdles. Problem solving also includes decision
making, which is the process of selecting the most suitable alternative from the
multiple alternatives available in order to reach the desired goal.
AI is commonly divided into two types:
1. Weak AI and
2. Strong AI.
Weak AI refers to AI systems that are designed to perform specific tasks and are
limited to those tasks only. These AI systems excel at their designated functions
but lack general intelligence. Examples of weak AI include voice assistants like Siri
or Alexa, recommendation algorithms, and image recognition systems. Weak AI
operates within predefined boundaries and cannot generalize beyond its
specialized domain.
Strong AI, also known as general AI, refers to AI systems that possess human-level
intelligence or even surpass human intelligence across a wide range of tasks.
Strong AI would be capable of understanding, reasoning, learning, and applying
knowledge to solve complex problems in a manner like human cognition. However,
the development of strong AI is still largely theoretical and has not been achieved
to date.
Before learning about Artificial Intelligence, we should know why AI is
important and why we should learn it. Following are some main reasons to
learn about AI:
o With the help of AI, you can create software or devices which can solve
real-world problems easily and accurately, such as health issues,
marketing, traffic issues, etc.
o With the help of AI, you can create your personal virtual Assistant, such as
Cortana, Google Assistant, Siri, etc.
o With the help of AI, you can build such Robots which can work in an
environment where survival of humans can be at risk.
o AI opens a path for other new technologies, new devices, and new
Opportunities.
4. Building a machine which can perform tasks that require human intelligence,
such as:
o Proving a theorem
o Playing chess
5. Creating systems which can exhibit intelligent behavior, learn new
things by themselves, demonstrate, explain, and advise their users.
Advantages of Artificial Intelligence:
o High accuracy with fewer errors: AI machines and systems are prone to making
fewer errors; because of this, AI systems can beat a chess champion at chess.
o High reliability: AI machines are highly reliable and can perform the same
action many times with consistent results.
o Useful for risky areas: AI machines can be helpful in situations such as
defusing a bomb or exploring the ocean floor, where employing a human can be
risky.
o Useful as a public utility: AI can be very useful for public utilities such as a
self-driving car which can make our journey safer and hassle-free, facial
recognition for security purpose, Natural language processing to
communicate with the human in human-language, etc.
o Cybersecurity: AI systems can detect and respond to cyber threats in real time,
helping companies protect their data and systems.
Disadvantages of Artificial Intelligence:
o Can't think out of the box: Even though we are making smarter machines with AI,
they still cannot think out of the box, as a robot will only do the work
for which it is trained or programmed.
o Job Concerns: As AI gets better, it might take away not just basic
jobs but also some skilled ones. This worries people about losing jobs
in different fields.
Artificial intelligence (AI) has a wide range of applications across various industries
and domains. Here are some notable applications of AI:
• Robotics and Automation
AI plays a crucial role in robotics and automation systems. Robots equipped with AI
algorithms can perform complex tasks in manufacturing, healthcare, logistics, and
exploration. They can adapt to changing environments, learn from experience, and
collaborate with humans.
• Recommendation Systems
• Financial Services
• Healthcare
• Virtual Assistants and Chatbots
AI-powered virtual assistants and chatbots interact with users, understand their
queries, and provide relevant information or perform tasks. They are used in
customer support, information retrieval, and personalized assistance.
• Gaming
• Smart Home and IoT
AI enables the development of smart home systems that can automate tasks,
control devices, and learn from user preferences. AI can enhance the functionality
and efficiency of Internet of Things (IoT) devices and networks.
• Cybersecurity
Artificial Intelligence (AI) has become an integral part of our daily lives,
revolutionizing various industries and enhancing user experiences. Here are
some notable examples of AI applications:
ChatGPT
Google Maps
Smart Assistants
Smart assistants like Amazon's Alexa, Apple's Siri, and Google Assistant
employ AI technologies to interpret voice commands, answer questions, and
perform tasks. These assistants use natural language processing and
machine learning algorithms to understand user intent, retrieve relevant
information, and carry out requested actions.
Snapchat Filters
Self-Driving Cars
Wearables
MuZero
Intelligent Agents
An AI system can be defined as the study of the rational agent and its environment.
In artificial intelligence, an agent is an independent computer program or system
or anything that is designed to perceive its environment, make decisions and take
actions to achieve a specific goal or set of goals. The agent operates
autonomously, meaning it is not directly controlled by a human operator.
[Figure: an agent perceives its environment through sensors and acts upon it through actuators.]
An intelligent agent may learn from the environment to achieve its goals. A
thermostat is an example of an intelligent agent.
An Agent runs in the cycle of perceiving, thinking, and acting. An agent can be:
o Human-Agent: A human agent has eyes, ears, and other organs which work
as sensors, and hands, legs, and the vocal tract which work as actuators.
o Robotic Agent: A robotic agent can have cameras, infrared range finder,
NLP for sensors and various motors for actuators.
o Software Agent: A software agent can have keystrokes and file contents as
sensory input, act on those inputs, and display output on the screen.
Hence, the world around us is full of agents such as thermostats, cell phones, and
cameras, and even we ourselves are agents. Driverless cars and the Siri virtual assistant are
examples of intelligent agents in AI.
Before moving forward, we should first know about sensors, effectors, and
actuators.
Sensor: A sensor is a device which detects a change in the environment and sends
the information to other electronic devices. An agent observes its environment
through sensors.
Actuators: Actuators are the component of machines that converts energy into
motion. The actuators are only responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
Rational Agent:
A rational agent is an agent which has clear preferences, models uncertainty, and
acts in a way that maximizes its performance measure with all possible actions.
A rational agent is said to perform the right thing. AI is about creating rational
agents, which are used in game theory and decision theory for various real-world scenarios.
Rationality:
The rationality of an agent is measured by its performance measure. Rationality can
be judged based on the following points:
o The performance measure, which defines the success criterion.
o The agent's prior knowledge of its environment.
o The best possible actions that the agent can perform.
o The sequence of percepts observed so far.
Structure of an AI Agent
The task of AI is to design an agent program which implements the agent function.
The structure of an intelligent agent is a combination of architecture and an agent
program. It can be viewed as:
Agent = Architecture + Agent Program
Following are the main three terms involved in the structure of an AI agent:
o Architecture: the machinery (sensors and actuators) that the agent program runs on.
o Agent function: a map from the percept sequence to an action, f: P* → A.
o Agent program: an implementation of the agent function, which runs on the architecture.
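To make the mapping f: P* → A concrete, here is a minimal Python sketch of a table-driven agent program. The percept values and the rule table are purely hypothetical and are only meant to illustrate the idea; they are not part of the original notes.

```python
# Minimal sketch of an agent program implementing f: P* -> A.
# The percepts and the lookup table below are hypothetical (two-square vacuum world).

def table_driven_agent_program(percept_sequence, table):
    """Map the entire percept sequence seen so far to an action."""
    return table.get(tuple(percept_sequence), "NoOp")

# Hypothetical table: percept sequence -> action
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

history = [("A", "Clean"), ("B", "Dirty")]
print(table_driven_agent_program(history, table))  # -> Suck
```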
PEAS Representation:
PEAS is a type of model on which an AI agent works. When we define an AI
agent or rational agent, we can group its properties under the PEAS
representation model. Many AI agents use the PEAS model in their structure. It is
made up of four words:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
Example PEAS descriptions for some agents:
1. Medical Diagnose
o Performance measure: Healthy patient, Minimized cost
o Environment: Patient, Hospital, Staff
o Actuators: Tests, Treatments
o Sensors: Keyboard (entry of symptoms)
2. Vacuum Cleaner
o Performance measure: Cleanness, Efficiency, Battery life, Security
o Environment: Room, Table, Wood floor, Carpet, Various obstacles
o Actuators: Wheels, Brushes, Vacuum extractor
o Sensors: Camera, Dirt detection sensor, Cliff sensor, Bump sensor, Infrared wall sensor
3. Part-picking Robot
o Performance measure: Percentage of parts in correct bins
o Environment: Conveyor belt with parts, Bins
o Actuators: Jointed arms, Hand
o Sensors: Camera, Joint angle sensors
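As an illustration only (not part of the original notes), a PEAS description can be captured in code as a simple record; the vacuum-cleaner values below mirror the list above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    """Illustrative PEAS description of a task environment."""
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

vacuum_cleaner = PEAS(
    performance=["Cleanness", "Efficiency", "Battery life", "Security"],
    environment=["Room", "Table", "Wood floor", "Carpet", "Various obstacles"],
    actuators=["Wheels", "Brushes", "Vacuum extractor"],
    sensors=["Camera", "Dirt detection sensor", "Cliff sensor",
             "Bump sensor", "Infrared wall sensor"],
)
print(vacuum_cleaner.actuators)  # -> ['Wheels', 'Brushes', 'Vacuum extractor']
```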
Artificial Intelligence Agents improve in the same way. The Agent gets better
by saving its previous attempts and states, learning how to respond better
next time. This is where Machine Learning and Artificial Intelligence meet.
Agents are often grouped into five classes based on their degree of perceived intelligence and
capability. All these agents can improve their performance and generate better actions over time.
These are given below:
• Simple Reflex Agent
• Model-Based Reflex Agent
• Goal-Based Agents
• Utility-Based Agent
• Learning Agent
o The Simple reflex agents are the simplest agents. These agents take decisions based on the current
percepts and ignore the rest of the percept history.
o The Simple reflex agent does not consider any part of the percept history during its decision and action
process.
o The Simple reflex agent works on the condition-action rule, which means it maps the current state to an action.
An example is a room cleaner agent: it works only if there is dirt in the room.
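A simple reflex agent can be sketched in a few lines of Python. This is only an illustrative sketch of the condition-action idea for a hypothetical two-square room cleaner, not a standard implementation.

```python
# Simple reflex agent: it looks only at the current percept and applies
# a condition-action rule; no percept history is kept.

def simple_reflex_cleaner(percept):
    location, status = percept        # current percept only
    if status == "Dirty":             # condition-action rule 1
        return "Suck"
    if location == "A":               # condition-action rule 2
        return "Right"
    return "Left"                     # condition-action rule 3

print(simple_reflex_cleaner(("A", "Dirty")))   # -> Suck
print(simple_reflex_cleaner(("B", "Clean")))   # -> Left
```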
o The Model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a Model-
based agent.
o Internal State: It is a representation of the current state based on percept history.
o These agents have the model, "which is knowledge of the world" and based on the model
they perform actions.
o Updating the agent state requires information about:
a. How the world evolves
b. How the agent's action affects the world.
There are some important points that we need to remember about model-based reflex agents:
• Unlike simple reflex agents that rely solely on the current percept, model-based reflex
agents consider a broader context.
• It can adapt its behavior based on changes in the environment or new information.
• The ability to reason, plan, and consider a wider context generally leads to improved
performance compared to simple reflex agents.
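The following Python sketch (illustrative only, using a hypothetical two-square world) shows the two factors above: an internal state that is updated from the percept history, and a simple model of how the agent's own actions affect the world.

```python
# Model-based reflex agent sketch: the internal state records what the agent
# believes about each square, updated from percepts and from the predicted
# effect of its own actions.

class ModelBasedCleaner:
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}   # internal state

    def act(self, percept):
        location, status = percept
        self.state[location] = status                   # how the world evolves (observed)
        if status == "Dirty":
            self.state[location] = "Clean"              # effect of the agent's own action
            return "Suck"
        if all(v == "Clean" for v in self.state.values()):
            return "NoOp"                               # everything believed clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedCleaner()
print(agent.act(("A", "Dirty")))   # -> Suck
print(agent.act(("A", "Clean")))   # -> Right (square B is still unknown)
```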
Goal-Based Agent:
o Knowledge of the current state of the environment is not always sufficient for an agent
to decide what to do.
o The agent needs to know its goal which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
o They choose an action, so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.
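As a sketch of the searching-and-planning idea, the hypothetical goal-based agent below looks ahead over whole action sequences (here with a breadth-first search over a small made-up map of rooms) and returns a plan that reaches the goal. The map and action names are invented for illustration.

```python
# Goal-based agent sketch: search ahead for a sequence of actions that
# reaches the goal state, rather than reacting to the current percept.
from collections import deque

def plan_to_goal(start, goal, successors):
    frontier = deque([(start, [])])          # (state, plan so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                      # sequence of actions achieving the goal
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None

# Hypothetical map of rooms and the actions that move between them.
rooms = {"Hall": [("go-kitchen", "Kitchen")],
         "Kitchen": [("go-hall", "Hall"), ("go-pantry", "Pantry")],
         "Pantry": []}
print(plan_to_goal("Hall", "Pantry", lambda s: rooms[s]))
# -> ['go-kitchen', 'go-pantry']
```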
Utility-Based Agents:
o These agents are like the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given state.
o A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
o The Utility-based agent is useful when there are multiple possible alternatives, and an agent
must choose to perform the best action.
o The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
o These agents aim to make rational decisions that maximize their expected utility.
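A tiny illustrative sketch of the utility idea: a hypothetical utility function maps each predicted state to a real number, and the agent picks the action whose predicted outcome has the highest utility. The state encoding and the numbers are invented for illustration only.

```python
# Utility-based agent sketch: choose the action that maximizes the utility
# of the predicted resulting state. State = (cleanliness, battery level).

def utility(state):
    cleanliness, battery = state
    return 10 * cleanliness + battery        # hypothetical weighting

def best_action(state, actions, result):
    # result(state, action) -> predicted next state
    return max(actions, key=lambda a: utility(result(state, a)))

result = lambda s, a: (1, s[1] - 2) if a == "Suck" else (s[0], s[1])
print(best_action((0, 50), ["Suck", "NoOp"], result))   # -> Suck (utility 58 > 50)
```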
Learning Agents:
o A learning agent in AI is the type of agent which can learn from its past experiences, or it has
learning capabilities.
o It starts acting with basic knowledge and is then able to act and adapt automatically through
learning.
o A learning agent has mainly four conceptual components, which are:
a. Learning element: It is responsible for making improvements by learning from
environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external action.
d. Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
o Hence, learning agents can learn, analyze performance, and look for new ways to improve
the performance.
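The four components can be pictured with the following structural sketch. The class, method names, and feedback scheme are hypothetical and only show how the pieces fit together; this is not a standard API.

```python
# Structural sketch of a learning agent's four conceptual components.

class LearningAgent:
    def __init__(self):
        self.rules = {}                                  # knowledge learned so far

    def performance_element(self, percept):
        return self.rules.get(percept, "explore")        # selects the external action

    def critic(self, percept, action, reward):
        return reward                                    # feedback vs. a performance standard

    def learning_element(self, percept, action, feedback):
        if feedback > 0:
            self.rules[percept] = action                 # improve the rules using the feedback

    def problem_generator(self):
        return "try-new-action"                          # suggests informative experiences

agent = LearningAgent()
feedback = agent.critic("dirty", "suck", reward=1)
agent.learning_element("dirty", "suck", feedback)
print(agent.performance_element("dirty"))                # -> suck
```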
The Functions of an Artificial Intelligence Agent:
Uses of Agents:
Medical Evaluation:
• In medical evaluation, the intelligent agent gathers data about the patient and uses this
data to determine the best plan of action.
Automated Vehicles:
• Various sensors are used in automatic vehicles to gather data from the
surroundings.
• The environment in these agents could consist of people, other cars, roads, or
road signs. Actions are started using a variety of devices. For instance, the car's
brakes are used to stop it. Self-driving vehicles operate better with the assistance
of intelligent agents.
• Customer service and sales are two functional areas that have been automated.
Agent Environment in AI
An environment is everything in the world which surrounds the agent, but it is not a part of
the agent itself. An environment can be described as a situation in which an agent is present.
The environment is where the agent lives and operates, and it provides the agent with something to
sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
As per Russell and Norvig, an environment can have various features from the point of view
of an agent:
1. Fully observable vs Partially observable:
o If an agent's sensors can sense or access the complete state of an environment at
each point in time, then it is a fully observable environment; otherwise, it is partially
observable.
o A fully observable environment is easy, as there is no need to maintain an internal
state to keep track of the history of the world.
o If an agent has no sensors at all, then such an environment is called
unobservable.
2. Deterministic vs Stochastic:
o If an agent's current state and selected action can completely determine the next
state of the environment, then such environment is called a deterministic
environment.
o A stochastic environment is random in nature and cannot be determined completely
by an agent.
o In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
o In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
o However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.
4. Single-agent vs multi-agent
o If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
o However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
o The agent design problems in the multi-agent environment are different from single
agent environment.
5. Static vs Dynamic:
o If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
o Static environments are easy to deal with because an agent does not need to keep
looking at the world while deciding on an action.
o However, for dynamic environment, agents need to keep looking at the world at
each action.
o Taxi driving is an example of a dynamic environment whereas Crossword puzzles are
an example of a static environment.
6. Discrete vs Continuous:
o If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment;
otherwise, it is called a continuous environment.
o A chess game comes under discrete environment as there is a finite number of
moves that can be performed.
o A self-driving car is an example of a continuous environment.
7. Known vs Unknown
o Known and unknown are not actually features of an environment; rather, they describe the
agent's state of knowledge needed to perform an action.
o In a known environment, the results of all actions are known to the agent, while in an
unknown environment, the agent needs to learn how it works in order to perform an action.
o It is quite possible for a known environment to be partially observable and for an
unknown environment to be fully observable.
8. Accessible vs Inaccessible
o If an agent can obtain complete and accurate information about the environment's
state, then such an environment is called an accessible environment; otherwise, it
is called inaccessible.
o An empty room whose state can be defined by its temperature is an example of an
accessible environment.
o Information about an event on earth is an example of Inaccessible environment.
Turing Test in AI
o In 1950, Alan Turing introduced a test to check whether a machine can think like a
human or not; this test is known as the Turing Test. In this test, Turing proposed that
a computer can be said to be intelligent if it can mimic human responses under
specific conditions.
o Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and
Intelligence," which considered the question, "Can Machine think?"
The Turing Test is based on a party game, the "Imitation Game," with some modifications. This
game involves three players: one player is a computer, another player is a human
responder, and the third player is a human interrogator, who is isolated from the other two
players and whose job is to find out which of the two is the machine.
Consider, Player A is a computer, Player B is human, and Player C is an interrogator.
The interrogator is aware that one of them is a machine, but he needs to identify this based on
questions and their responses.
The conversation between all players is via keyboard and screen, so the result does not
depend on the machine's ability to render words as speech.
The test result does not depend on each answer being correct, but only on how closely the
machine's responses resemble human answers. The computer is permitted to do everything
possible to force a wrong identification by the interrogator.
For example:
Interrogator: Are you a computer?
Player A (Computer): No
In this game, if the interrogator is not able to identify which player is a machine and which
is human, then the computer passes the test successfully, and the machine is said to be
intelligent and able to think like a human.
"In 1991, the New York businessman Hugh Loebner announces the prize competition,
offering a $100,000 prize for the first computer to pass the Turing test. However, no AI
program to till date, come close to passing an undiluted Turing test".
ELIZA: ELIZA was a Natural language processing computer program created by Joseph
Weizenbaum. It was created to demonstrate the ability of communication between machines
and humans. It was one of the first chatterbots to attempt the Turing Test.
Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed to
simulate a person with paranoid schizophrenia (a chronic mental disorder). Parry was
described as "ELIZA with attitude." Parry was tested using a variation of the Turing
Test in the early 1970s.
There were many philosophers who really disagreed with the complete concept of Artificial
Intelligence. The most famous argument in this list was "Chinese Room."
In 1980, John Searle presented the "Chinese Room" thought experiment in his paper
"Minds, Brains, and Programs," which argued against the validity of the Turing Test. According to
his argument, "Programming a computer may make it appear to understand a language, but
it will not produce a real understanding of language or consciousness in a computer."
He argued that machines such as ELIZA and Parry could easily pass the Turing Test by
manipulating keywords and symbols, but they have no real understanding of language, so this
cannot be described as the "thinking" capability of a machine in the way that a human thinks.
The reflex agents are known as the simplest agents because they directly map states
into actions. Unfortunately, these agents fail to operate in an environment where the
mapping is too large to store and learn. Goal-based agent, on the other hand, considers
future actions and the desired outcomes.
Problem-solving agent
The problem-solving agent performs precisely by defining problems and their several solutions.
PROBLEM DEFINITION
To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include specification of
the initial situations and also the final situations which constitute an acceptable
solution to the problem.
(ii) Analyze the problem, i.e., identify the important features that have an immense (huge) impact
on the appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the knowledge to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the
problem.
• Search: It identifies all the best possible sequences of actions to reach the
goal state from the current state. It takes a problem as input and returns a
solution as its output.
• Solution: It finds the best algorithm out of various algorithms, which may be
proven as the best optimal solution.
• Execution: It executes the best optimal solution found by the searching algorithms
to reach the goal state from the current state.
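The search-solution-execution cycle above can be sketched as follows. Everything here (the state names, the one-step "search" routine, the tiny problem) is a hypothetical illustration of the cycle, not a real search algorithm.

```python
# Sketch of the problem-solving cycle: formulate a problem, search for a
# solution (a sequence of actions), then execute that solution.

def one_step_search(problem):
    """Placeholder search: succeeds only if the goal is one action away."""
    for action, state in problem["successors"](problem["initial"]):
        if problem["goal_test"](state):
            return [action]
    return None

def problem_solving_agent(problem, search):
    solution = search(problem)               # search phase
    for action in solution or []:            # execution phase
        print("executing:", action)
    return solution

problem = {
    "initial": "at-home",
    "goal_test": lambda s: s == "at-work",
    "successors": lambda s: [("drive", "at-work")] if s == "at-home" else [],
}
problem_solving_agent(problem, one_step_search)   # -> executing: drive
```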
Example Problems
• Toy Problem: It is a concise and exact description of the problem which is used
by the researchers to compare the performance of algorithms.
• Real-world Problem: These are real-world based problems which require solutions.
Unlike a toy problem, it does not depend on descriptions, but we can have a
general formulation of the problem.
Some Toy Problems
• 8 Puzzle Problem: Here, we have a 3×3 matrix with movable tiles numbered
from 1 to 8 and a blank space. A tile adjacent to the blank space can slide
into that space. The objective is to reach the specified goal state shown in the
figure below.
• In the figure, our task is to convert the current (start) state into the goal state by
sliding digits into the blank space.
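A small illustrative helper for the 8-puzzle: states are written as 9-tuples (0 standing for the blank), and the legal moves slide a tile adjacent to the blank into the blank. The start state used in the example is made up; it is not the one from the figure.

```python
# 8-puzzle move generator (illustrative). A state is a 9-tuple read row by
# row, with 0 for the blank square.

def neighbors(state):
    """Return all states reachable by sliding one tile into the blank."""
    successors = []
    i = state.index(0)                     # position of the blank
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]        # swap blank with the adjacent tile
            successors.append(tuple(s))
    return successors

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)        # hypothetical start state
print(len(neighbors(start)))               # -> 4 (blank in the centre)
```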
• 8-Queens Problem: The aim is to place eight queens on a chessboard so that no queen
attacks any other queen. It is noticed from the figure that each queen is placed on the
chessboard in a position where no other queen is placed diagonally, or in the same row
or column. Therefore, it is one right approach to the 8-queens problem.
• States: Arrangement of all 8 queens, one per column, with no queen attacking any
other queen.
• Actions: Move a queen to a location where it is safe from attacks.
This formulation is better than the incremental formulation as it reduces the state space from
1.8 × 10^14 to 2,057, and it is easy to find the solutions.
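The complete-state formulation can be checked with a short function: a board is represented as a list where entry c is the row of the queen in column c (so columns are distinct by construction), and the function tests the row and diagonal constraints. This is an illustrative sketch, not part of the original notes.

```python
# 8-queens constraint check for the complete-state formulation.
# board[c] = row of the queen placed in column c.

def no_attacks(board):
    for c1 in range(len(board)):
        for c2 in range(c1 + 1, len(board)):
            same_row = board[c1] == board[c2]
            same_diagonal = abs(board[c1] - board[c2]) == abs(c1 - c2)
            if same_row or same_diagonal:
                return False
    return True

print(no_attacks([0, 4, 7, 5, 2, 6, 1, 3]))   # -> True  (a valid placement)
print(no_attacks([0, 1, 2, 3, 4, 5, 6, 7]))   # -> False (all on one diagonal)
```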
• Cell layout: Here, the primitive components of the circuit are grouped into cells,
each performing its specific function. Each cell has a fixed shape and size. The task
is to place the cells on the chip without overlapping each other.
• Channel routing: It finds a specific route for each wire through the gaps between
the cells.
• Protein Design: The objective is to find a sequence of amino acids which will fold
into a 3D protein having the property of curing some disease.
We have seen many problems. Now, there is a need to search for solutions to solve them.
In this section, we will understand how searching can be used by the agent to solve a problem.
For solving different kinds of problems, an agent makes use of different strategies to
reach the goal by searching for the best possible algorithm. This process of searching is known
as a search strategy.
Search Algorithms in Artificial Intelligence
Search algorithms are one of the most important areas of Artificial Intelligence. This topic will explain
all about the search algorithms in AI.
Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational
agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve
a specific problem and provide the best result. Problem-solving agents are goal-based agents
and use an atomic representation. In this topic, we will learn various problem-solving search algorithms.
Following are the four essential properties of search algorithms, used to compare the efficiency of these
algorithms:
Completeness: A search algorithm is said to be complete if it is guaranteed to return a solution
whenever at least one solution exists for any random input.
Optimality: If the solution found by an algorithm is guaranteed to be the best solution (lowest path
cost) among all other solutions, then such a solution is said to be an optimal solution.
Time Complexity: Time complexity is a measure of time for an algorithm to complete its task.
Space Complexity: It is the maximum storage space required at any point during the search; it
depends on the complexity of the problem.
Based on the search problems we can classify the search algorithms into uninformed (Blind search)
search and informed search (Heuristic search) algorithms.
Uninformed/Blind Search:
The uninformed search does not contain any domain knowledge, such as the closeness or location of
the goal. It operates in a brute-force way, as it only includes information about how to traverse the
tree and how to identify leaf and goal nodes. Uninformed search applies a strategy in which the search
tree is searched without any information about the search space, such as the initial state, operators,
and test for the goal, so it is also called blind search. It examines each node of the tree until it
reaches the goal node.
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
Informed Search
Informed search algorithms use domain knowledge. In an informed search, problem information is
available which can guide the search. Informed search strategies can find a solution more efficiently
than an uninformed search strategy. Informed search is also called a Heuristic search.
A heuristic is a technique that is not always guaranteed to find the best solution, but is guaranteed
to find a good solution in a reasonable time.
Informed search can solve much more complex problems which could not be solved in other ways.
1. Greedy Search
2. A* Search
1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search
1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This
algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
o BFS algorithm starts searching from the root node of the tree and expands all successor nodes
at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.
o If there is more than one solution for a given problem, then BFS will provide the minimal
solution, i.e., the one that requires the least number of steps.
Disadvantages:
o It requires lots of memory since each level of the tree must be saved into memory to expand
the next level.
o BFS needs lots of time if the solution is far away from the root node.
Example:
In the below tree structure, we have shown the traversing of the tree using BFS algorithm from the
root node S to goal node K. The BFS search algorithm traverses in layers, so it will follow the path
shown by the dotted arrow, and the traversed path will be:
1. S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes
traversed in BFS until the shallowest node: T(b) = 1 + b^2 + b^3 + ... + b^d = O(b^d), where d = the
depth of the shallowest solution and b is the branching factor (number of successors) at every state.
Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier,
which is O(b^d).
Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth,
then BFS will find a solution.
Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
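A compact Python sketch of breadth-first search with a FIFO queue is shown below. The adjacency list is a hypothetical graph loosely shaped like the S-to-K example above, not the exact tree from the figure.

```python
# Breadth-first search: expand nodes level by level using a FIFO queue.
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])              # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                      # shallowest path to the goal
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G", "H"],
         "G": ["I"], "H": ["E", "F"], "I": ["K"]}
print(bfs(graph, "S", "K"))   # -> ['S', 'B', 'G', 'I', 'K']
```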
2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is like the BFS algorithm.
Advantage:
o DFS requires very little memory, as it only needs to store a stack of the nodes on the path from
the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right
path).
Disadvantage:
o There is the possibility that many states keep re-occurring, and there is no guarantee of
finding the solution.
o The DFS algorithm goes for deep-down searching, and sometimes it may go into an infinite loop.
Example:
In the below search tree, we have shown the flow of depth-first search, and it will follow the order
as:
It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will
backtrack the tree, as E has no other successor and the goal node has still not been found. After
backtracking, it will traverse node C and then G, where it will terminate, as it has found the goal node.
Completeness: The DFS search algorithm is complete within a finite state space, as it will expand every
node within a limited search tree.
Time Complexity: The time complexity of DFS will be equivalent to the number of nodes traversed by the
algorithm. It is given by T(b) = 1 + b^2 + b^3 + ... + b^m = O(b^m), where m = the maximum depth of any
node, and this can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node; hence the space
complexity of DFS is equivalent to the size of the fringe set, which is O(bm).
Optimal: The DFS search algorithm is non-optimal, as it may take a large number of steps or incur a high
cost to reach the goal node.
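For comparison with BFS, here is an illustrative depth-first search sketch using an explicit LIFO stack; the graph is a small hypothetical tree chosen so that the visiting order resembles the S, A, B, D, E, C, G walk described above.

```python
# Depth-first search: follow one path to its deepest node before backtracking,
# using a LIFO stack of paths.

def dfs(graph, start, goal):
    stack = [[start]]                        # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in reversed(graph.get(node, [])):
            stack.append(path + [neighbour])
    return None

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D", "E"],
         "C": ["G"], "D": [], "E": [], "G": []}
print(dfs(graph, "S", "G"))   # visits S, A, B, D, E, then C; returns ['S', 'C', 'G']
```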