Machine Vision


UNIT-1

What is artificial intelligence?


Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific
applications of AI include expert systems, natural language processing, speech recognition and machine vision.
How does AI work?
As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI.
Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of
specialized hardware and software for writing and training machine learning algorithms. No one programming language is
synonymous with AI, but a few, including Python, R and Java, are popular.
In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and
patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text
chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe
objects in images by reviewing millions of examples.
AI programming focuses on three cognitive skills: learning, reasoning and self-correction.
Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data
into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step
instructions for how to complete a specific task.
Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.
Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they
provide the most accurate results possible.
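The three skills can be sketched in miniature. The threshold rule, the toy data, and the 0.1 nudge below are all illustrative assumptions, not a real AI system:

```python
# Learning, reasoning, and self-correction in miniature.
# The threshold rule and all numbers are illustrative assumptions.

threshold = 0.0          # learned parameter of a simple yes/no rule

def learn(examples):
    # learning: set the threshold from labeled data (midpoint of the classes)
    global threshold
    yes = [x for x, label in examples if label]
    no = [x for x, label in examples if not label]
    threshold = (min(yes) + max(no)) / 2

def reason(x):
    # reasoning: apply the learned rule to a new input
    return x > threshold

def self_correct(x, actual):
    # self-correction: nudge the rule toward the observed truth on a mistake
    global threshold
    if reason(x) != actual:
        threshold += -0.1 if actual else 0.1

learn([(1.0, False), (2.0, False), (4.0, True), (5.0, True)])
print(reason(3.5))  # -> True (learned threshold is 3.0)
```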
Why is artificial intelligence important?
AI is important because it can give enterprises insights into their operations that they may not have been aware of
previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive,
detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools
often complete jobs quickly and with relatively few errors.
This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger
enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders
to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated
machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get
drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of
online services by using machine learning to understand how people use their services and then improving them. In 2017,
the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.
Today's largest and most successful enterprises have used AI to improve their operations and gain advantage on their
competitors.
What are the advantages and disadvantages of artificial intelligence?
Artificial neural networks and deep learning artificial intelligence technologies are quickly evolving, primarily because AI
processes large amounts of data much faster and makes predictions more accurately than humanly possible.
While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use 
machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary
disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.
Advantages
•Good at detail-oriented jobs;
•Reduced time for data-heavy tasks;
•Delivers consistent results; and
•AI-powered virtual agents are always available.
Disadvantages
•Expensive;
•Requires deep technical expertise;
•Limited supply of qualified workers to build AI tools;
•Only knows what it's been shown; and
•Lack of ability to generalize from one task to another.
Strong AI vs. weak AI
AI can be categorized as either weak or strong.
•Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial
robots and virtual personal assistants, such as Apple's Siri, use weak AI.
•Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive
abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply
knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to
pass both a Turing Test and the Chinese room test.
Types of Artificial Intelligence:
Artificial Intelligence can be divided into several types. There are two main categorizations: one based on capabilities and one based on functionality.

AI type-1: Based on Capabilities


1. Weak AI or Narrow AI:
Narrow AI is a type of AI that performs a dedicated task with intelligence. It is the most common and currently available form of AI.
Narrow AI cannot perform beyond its field or limitations, as it is trained for only one specific task; hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if pushed beyond its limits.
Apple's Siri is a good example of Narrow AI: it operates within a limited, pre-defined range of functions.
IBM's Watson supercomputer also comes under Narrow AI, as it combines an expert-system approach with machine learning and natural language processing.
Other examples of Narrow AI include playing chess, purchasing suggestions on e-commerce sites, self-driving cars, speech recognition, and image recognition.
2. General AI:
•General AI is a type of intelligence that could perform any intellectual task as efficiently as a human.
•The idea behind general AI is to build a system that can think and act on its own, like a human.
•Currently, no system exists that qualifies as general AI and can perform every task as well as a human.
•Researchers worldwide are focused on developing machines with general AI.
•Systems with general AI are still under research, and developing them will take considerable time and effort.
3. Super AI:
•Super AI is a level of system intelligence at which machines surpass human intelligence and can perform any task better than a human, with cognitive properties of their own. It is an outcome of general AI.
•Key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
•Super AI is still a hypothetical concept; building such systems in reality remains a world-changing challenge.
Artificial Intelligence type-2: Based on functionality
1. Reactive Machines
•Purely reactive machines are the most basic type of Artificial Intelligence.
•Such AI systems do not store memories or past experiences for future actions.
•These machines focus only on the current scenario and react to it with the best possible action.
•IBM's Deep Blue system is an example of a reactive machine.
•Google's AlphaGo is another example.
2. Limited Memory
•Limited memory machines can store past experiences or some data for a short period of time.
•These machines can use stored data for a limited time period only.
•Self-driving cars are one of the best examples of Limited Memory systems. These cars can store recent speed of nearby cars, the
distance of other cars, speed limit, and other information to navigate the road.
3. Theory of Mind
•Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
•Machines of this type have not yet been developed, but researchers are making considerable efforts toward building them.
4. Self-Awareness
•Self-aware AI is the future of Artificial Intelligence. These machines will be super-intelligent and will have their own consciousness, sentiments, and self-awareness.
•These machines will be smarter than the human mind.
•Self-aware AI does not yet exist; it remains a hypothetical concept.
What are examples of AI technology and how is it used today?
AI is incorporated into a variety of different types of technology. Here are six examples:
•Automation. When paired with AI technologies, automation tools can expand the volume and types of tasks performed.
An example is robotic process automation (RPA), a type of software that automates repetitive, rules-based data processing
tasks traditionally done by humans. When combined with machine learning and emerging AI tools, RPA can automate
bigger portions of enterprise jobs, enabling RPA's tactical bots to pass along intelligence from AI and respond to process
changes.
•Machine learning. This is the science of getting a computer to act without being explicitly programmed. Deep learning is a subset of
machine learning that, in very simple terms, can be thought of as the automation of predictive analytics. There are three
types of machine learning algorithms:
• Supervised learning. Data sets are labeled so that patterns can be detected and used to label new data sets.
• Unsupervised learning. Data sets aren't labeled and are sorted according to similarities or differences.
• Reinforcement learning. Data sets aren't labeled but, after performing an action or several actions, the AI system
is given feedback.
•Machine vision. This technology gives a machine the ability to see. Machine vision captures and analyzes visual
information using a camera, analog-to-digital conversion and digital signal processing. It is often compared to human
eyesight, but machine vision isn't bound by biology and can be programmed to see through walls, for example. It is used in
a range of applications from signature identification to medical image analysis. Computer vision, which is focused on
machine-based image processing, is often conflated with machine vision.
•Natural language processing (NLP). This is the processing of human language by a computer program. One of the older
and best-known examples of NLP is spam detection, which looks at the subject line and text of an email and decides if it's
junk. Current approaches to NLP are based on machine learning. NLP tasks include text translation, sentiment analysis and
speech recognition.
•Robotics. This field of engineering focuses on the design and manufacturing of robots. Robots are often used to perform
tasks that are difficult for humans to perform or perform consistently. For example, robots are used in assembly lines for
car production or by NASA to move large objects in space. Researchers are also using machine learning to build robots that
can interact in social settings.
•Self-driving cars. Autonomous vehicles use a combination of computer vision, image recognition and deep learning to
build automated skill at piloting a vehicle while staying in a given lane and avoiding unexpected obstructions, such as
pedestrians.
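The three machine learning settings listed above can be sketched with toy one-dimensional data. The numbers, labels, and the `gap` threshold below are illustrative assumptions; reinforcement learning, which needs an environment giving feedback, is omitted for brevity:

```python
# Minimal sketches of supervised and unsupervised learning on toy 1-D data.
# All names and thresholds here are illustrative assumptions.

# Supervised: labeled examples -> label new points by nearest neighbor.
labeled = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (9.1, "large")]

def predict(x):
    # pick the label of the closest training example
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

# Unsupervised: no labels -> group points purely by similarity.
def cluster(points, gap=3.0):
    ordered = sorted(points)
    groups, current = [], [ordered[0]]
    for p in ordered[1:]:
        if p - current[-1] <= gap:
            current.append(p)      # close enough: same group
        else:
            groups.append(current)  # big jump: start a new group
            current = [p]
    groups.append(current)
    return groups

print(predict(1.5))                   # -> "small"
print(cluster([1.0, 1.2, 8.0, 9.1]))  # -> [[1.0, 1.2], [8.0, 9.1]]
```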
What is knowledge representation?
Humans are best at understanding, reasoning, and interpreting knowledge. Humans know things, which is knowledge, and they act in the real world according to that knowledge. How machines can do the same is the subject of knowledge representation and reasoning. Hence we can describe knowledge representation as follows:
•Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI agents think and how thinking contributes to their intelligent behavior.
•It is responsible for representing information about the real world so that a computer can understand it and use this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
•It also describes how knowledge can be represented in artificial intelligence. Knowledge representation is not just storing data in a database; it enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.

The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural
language processing, perception and the ability to move and manipulate objects. General intelligence (the ability to
solve an arbitrary problem) is among the field's long-term goals.
What to Represent:
Following are the kind of knowledge which needs to be represented in AI systems:
•Objects: All the facts about objects in our world domain. E.g., guitars contain strings; trumpets are brass instruments.
•Events: Events are the actions that occur in our world.
•Performance: It describes behavior involving knowledge about how to do things.
•Meta-knowledge: It is knowledge about what we know.
•Facts: Facts are the truths about the real world and what we represent.
•Knowledge-Base: The central component of a knowledge-based agent is the knowledge base, written KB. The knowledge base is a set of sentences (here, "sentence" is a technical term, not identical to a sentence in the English language).
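As a rough sketch of the idea, a knowledge base can be modeled as a set of sentences, with facts and simple if-then rules, and queried by forward chaining. The class design and the guitar facts below are illustrative assumptions:

```python
# A toy knowledge base: TELL adds sentences (facts and rules),
# ASK checks entailment by forward chaining. Illustrative sketch only.

class KB:
    def __init__(self):
        self.facts = set()
        self.rules = []          # (premises, conclusion) pairs

    def tell_fact(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((set(premises), conclusion))

    def ask(self, query):
        # forward-chain until no new facts appear
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return query in self.facts

kb = KB()
kb.tell_fact("has_strings(guitar)")
kb.tell_rule(["has_strings(guitar)"], "string_instrument(guitar)")
print(kb.ask("string_instrument(guitar)"))  # -> True
```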
Knowledge: Knowledge is awareness or familiarity gained by experiences of facts, data, and situations. Following are the types of
knowledge in artificial intelligence:
Types of knowledge
Following are the various types of knowledge:
1. Declarative Knowledge:
•Declarative knowledge is knowledge about something.
•It includes concepts, facts, and objects.
•It is also called descriptive knowledge and is expressed in declarative sentences.
•It is simpler than procedural knowledge.
2. Procedural Knowledge
•It is also known as imperative knowledge.
•Procedural knowledge is a type of knowledge which is responsible for knowing how to do something.
•It can be directly applied to any task.
•It includes rules, strategies, procedures, agendas, etc.
•Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge:
•Knowledge about the other types of knowledge is called Meta-knowledge.
4. Heuristic knowledge:
•Heuristic knowledge represents the knowledge of experts in a field or subject.
•Heuristic knowledge consists of rules of thumb based on previous experience and awareness of approaches that tend to work well but are not guaranteed.
5. Structural knowledge:
•Structural knowledge is basic knowledge used in problem-solving.
•It describes relationships between various concepts such as kind of, part of, and grouping of something.
•It describes the relationship that exists between concepts or objects.
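These "kind of" and "part of" relationships can be sketched as a tiny semantic network; the concepts below are illustrative assumptions:

```python
# Structural knowledge as a tiny semantic network: "kind-of" and
# "part-of" links between concepts. All concepts are illustrative.

kind_of = {"sparrow": "bird", "bird": "animal"}
part_of = {"wing": "bird"}

def is_kind_of(concept, category):
    # follow "kind-of" links upward through the network
    while concept in kind_of:
        concept = kind_of[concept]
        if concept == category:
            return True
    return False

print(is_kind_of("sparrow", "animal"))  # -> True
print(part_of["wing"])                  # -> "bird"
```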
The relation between knowledge and intelligence:
Knowledge of the real world plays a vital role in intelligence, and the same holds for creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent behavior in AI agents. An agent can act accurately on some input only when it has knowledge or experience of that input.
Suppose you meet a person speaking a language you don't know; how would you act on what they say? The same applies to the intelligent behavior of agents.
In the standard picture, a decision maker acts by sensing the environment and using knowledge. If the knowledge part is missing, it cannot display intelligent behavior.
The knowledge pyramid can be used to explain why AI is different from IT. Here is how.
The fundamental difference is that AI works with knowledge, not data, and the significant differences between
knowledge and data are essential. It is much more than just words.
Information as a Hierarchy
A way to understand the difference between AI and IT is to look at how they work with information as a hierarchy. The concept can be illustrated as a pyramid, also known as the knowledge pyramid. The pyramid shows different levels of enriched knowledge.

It is shaped like a pyramid because the upper layers build on the lower ones: with each step up, more knowledge is added, and it is assumed that you master the lower layers before an upper layer can be realized.

Thus, the lowest level of the pyramid of knowledge is data, and the highest is wisdom. The definitions of the four levels
are:
• Data: A collection of facts in raw or in unorganized form.
• Information: Organized and structured data that has been cleared of errors. It can, therefore, be measured,
analyzed, and visualized.
• Knowledge: Learning is the central component at this level. Here you learn on the basis of insights into, and understanding of, data and information.
• Wisdom: The last level is wisdom. Here reflection is the central component, and it is an action-oriented stage.
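A minimal sketch of the four levels on a toy temperature log; the readings and the outlier threshold are illustrative assumptions:

```python
# The four pyramid levels on a toy temperature log:
# raw data -> cleaned information -> learned knowledge -> an action (wisdom).
raw = ["21.5", "22.0", "bad", "35.0", "21.8"]                       # data: raw, with errors
readings = [float(x) for x in raw if x.replace(".", "").isdigit()]  # information: cleaned, structured
average = sum(readings) / len(readings)                             # knowledge: insight from the data
action = "investigate outlier" if max(readings) - average > 5 else "ok"  # wisdom: act on it
print(action)
```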
5 Things AI Does Better Than Humans in 2019
(Wiredelta, updated June 22, 2020)
Artificial Intelligence (AI) already does better than humans in many areas, and it is getting more advanced each day. Examples range from Boston Dynamics' demos of backflipping robots to AI dominating advertising. But it doesn't stop there, so let's get a better sense of what AI can do better than humans.
 
1. AI Does Better In Gaming
When AI beat the reigning 'Go' champion Lee Sedol in 2016, researchers rejoiced. Furthermore, in 2017 a new neural network, 'AlphaGo Zero', beat the old 'AlphaGo' 100 times in a row. Moreover, it taught itself how to play using only the basic rules of the game. Even though 'Go' is far from the only game humans play, there are new AIs in town playing all kinds of games.

2. AI (potentially) Does Better AI


Machines have built machines for a long time, so it isn't unreasonable to think that AI could build better AI than any human could. In theory AI is able to pull this off, but it isn't certain whether it can be done in practice.

Both Google's AutoML and Microsoft's DeepCoder hold an unprecedented capacity to build the next generation of AI. DeepCoder does not simply copy the building blocks of code that researchers have given it; the algorithm looks at how the pieces of code fit together and how they function, and it learns to recognize other code.
Google's CEO described AutoML to his team with the phrase "we have to go deeper", a nod to the movie 'Inception'.
3. AI Does Better At Providing More Accurate Medical Diagnosis
Ever since AI's achievements became the talk of the town, a major focus area for their use has been medical diagnosis. In oncology and cancer diagnosis, for example, it is challenging for humans to produce an absolutely accurate diagnosis. According to research by University Hospitals Birmingham, AI systems correctly detected a disease state 87% of the time, compared with 86% for healthcare professionals, and correctly gave the all-clear 93% of the time, compared with 91% for human experts.
 
More recently, IBM has used Watson's ability to absorb huge volumes of information to help diagnose rare illnesses, some of which most doctors may only see a few cases of in their lifetime. It will help doctors at the Centre for Undiagnosed and Rare Diseases at the University Hospital Marburg, Germany, deal with the thousands of patients referred to them yearly, as well as with the thousands of pages of medical records that patients supply for analysis.
 
 

4. AI Does Better At Transcribing


AI can now transcribe audio better than humans.
Microsoft's researchers have been tweaking an AI-based automated speech recognition system so that it performs as well as, or better than, people, and they have proven this through testing. AI has even been used to write scripts for movies.
 

5. AI Does Better At Creating Entertainment (than most people)


French songwriter Benoît Carré has collaborated with an AI program called Flow Machines, which was used to create a Europop album that exceeds expectations. Carré, who has worked with some of France's biggest names like Johnny Hallyday and Françoise Hardy, explains that the system can write original melodies and even suggest the chords and sounds to play. However, Carré admits that AI-produced music still needs a human touch to bring it all together. So while it is still a very 'human album', there is no denying that the future of music will include lots of AI contributions.

Not long ago, MIDI seemed new and futuristic before it transformed music and pop culture for good. It is no fantasy that AI may come to make music better than us humans. An AI system capable of creating original pieces of art has also been developed by a team of scientists. The idea is to make art that is "novel, but not too novel".
 
Conclusion
AI having a real function in everyday life has moved from lofty dream to reality. In the past decade we have seen artificially intelligent assistants go from a novelty feature on mobile devices to 'home assistants' that run our households and lives. In the next decade, AI will probably become a completely normal tool found in many aspects of our lives, even though today that might seem far-fetched or far off.

AI has already proved it can beat us at games and make art and music. Moreover, it can help build new AI and consolidate medical records to help make diagnoses. It can also transcribe audio.
 
Just as the future of AI is sure to be exciting, so is the future of many other technologies.

• Artificial Intelligence Characteristics


1. Deep Learning
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learning by example. Innumerable developers are leveraging the latest deep learning innovations to take their businesses to new heights.
There are many fields of Artificial Intelligence technology, such as autonomous vehicles, computer vision, and automatic text generation, where the scope and use of deep learning are increasing.
Take the example of the self-driving feature in cars like Tesla's Autopilot, where deep learning is a key technology enabling them to recognize a stop sign or distinguish a pedestrian from a lamppost.
2. Facial Recognition
Artificial Intelligence has made it possible to recognize individual faces using biometric mapping, which has led to path-breaking advancements in surveillance technology. A system compares a captured face with a database of known faces to find a match.
However, this has also faced a lot of criticism over breaches of privacy.
For example, Clearview AI, an American technology company, offers law enforcement agencies a facial recognition tool that matches faces against a large database of images collected from the web.
3. Automate Simple and Repetitive Tasks
AI has the ability to execute the same kind of work over and over again without breaking a sweat. To understand this
feature better, let’s take the example of Siri, a voice-enabled assistant created by Apple Inc. It can handle so many
commands in a single day!
From asking to take up notes for a brief, to rescheduling the calendar for a meeting, to guiding us through the streets
with navigation, the assistant has it all covered.
Earlier, all of these activities had to be done manually which used to take up a lot of time and effort.
The automation would not only lead to increased efficiencies but also result in lower overhead costs and in some cases
a safer work environment.
4. Data Ingestion
With every passing day the data we all produce grows exponentially, and this is where AI steps in. Instead of this data being fed in manually, AI-enabled systems not only gather it but also analyze it in light of their previous experience.
Data ingestion is the transportation of data from assorted sources to a storage medium where it can be accessed, used, and analyzed by an organization.
AI, with the help of neural networks, analyzes large amounts of such data and helps draw logical inferences from it.
5. Chatbots
Chatbots are software that provides a window for solving customer problems through either audio or textual input.
Earlier, bots used to respond only to specific commands: if you said the wrong thing, the bot didn't know what you meant. The bot was only as smart as it was programmed to be. The real change came when chatbots were enabled by artificial intelligence.
Now you don't have to be ridiculously specific when talking to a chatbot. It understands language, not just commands.
For example, Watson Assistant is an AI-powered assistant developed by IBM that can run across various channels, such as websites, messengers, and apps, and requires zero human intervention once programmed.
Many companies have moved from voice-process executives to chatbots to help customers solve their problems.
Chatbots not only offer services revolving around the issues customers face but also provide product suggestions to users. All this, thanks to AI.
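The shift from command-only bots to language-aware ones can be sketched as follows. The keyword matching below is a deliberately crude stand-in for real natural language understanding, and the commands and replies are illustrative assumptions:

```python
# A command-only bot vs. a slightly looser keyword bot, illustrating
# why early chatbots were "only as smart as they were programmed to be".

commands = {"hours": "We are open 9-5.", "refund": "Refunds take 5 days."}

def command_bot(text):
    # exact command match only: anything else fails
    return commands.get(text, "I don't understand.")

def keyword_bot(text):
    # match if any known keyword appears anywhere in the sentence
    for keyword, reply in commands.items():
        if keyword in text.lower():
            return reply
    return "I don't understand."

print(command_bot("when are your hours?"))  # -> "I don't understand."
print(keyword_bot("when are your hours?"))  # -> "We are open 9-5."
```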
6. Quantum Computing
AI is helping solve complex quantum physics problems with supercomputer-level accuracy using quantum neural networks, which can lead to path-breaking developments in the near future.
It is an interdisciplinary field that focuses on building quantum algorithms for improving computational tasks within
AI, including sub-fields like machine learning.
The whole concept of quantum-enhanced AI algorithms remains in the conceptual research domain.
For example, a pioneer in this field is Google AI Quantum, whose objective is to develop superconducting qubit processors and quantum-assisted optimization for varied applications.
7. Cloud Computing
The next Artificial Intelligence characteristic is cloud computing. With such a huge amount of data being churned out every day, storing it all in physical form would be a major problem; the advent of cloud computing has saved us from such worries.
AI capabilities are working within business cloud computing environments to make organizations more efficient, strategic, and insight-driven.
Microsoft Azure is one of the prominent players in the cloud computing industry. It lets you deploy your own machine learning models against data stored on cloud servers without any lock-in.
Problem Representation in AI: Artificial Intelligence (AI) is a fast-growing field of computer science and engineering whose ultimate objective is to build machines capable of acting, thinking, and behaving like human beings. ...
Before an AI problem can be solved it must be represented as a state space.

• Problem Representation in AI
Before a solution can be found, the prime condition is that the problem must be very precisely defined. By defining it
properly, one converts the abstract problem into real workable states that are really understood.

The most common methods of problem representation in AI are:-


•State Space Representation
•Problem Reduction
State Space Representation:  The set of all possible states for a given problem is known as the state space of the problem. Suppose you are asked to make a cup of coffee. What will you do? You will verify whether the necessary ingredients, like instant coffee powder, milk powder, sugar, kettle, stove etc., are available.
If so, you will follow the following steps:
1.     Boil necessary water in the kettle.
2.     Take some of the boiled water in a cup and add the necessary amount of instant coffee powder to make decoction.
3.     Add milk powder to the remaining boiling water to make milk.
4.     Mix decoction and milk.
5.     Add sufficient quantity of sugar to your taste and the coffee is ready.
Now think a bit about what has exactly happened. You started with the ingredients (the initial state), passed through a sequence of intermediate situations (called states), and at last had a cup of coffee (the goal state). Adding just the needed amounts of coffee powder, milk powder, and sugar were the moves that transformed one state into the next (operators).
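The coffee example can be sketched as a state space in code: a state is the set of things prepared so far, and each operator fires when its inputs are present. The step names below are illustrative assumptions:

```python
# The coffee example as a tiny state space: a state is the set of things
# made so far; an operator adds its result when its inputs are present.
# Step names are illustrative assumptions.

operators = {
    "boil_water":     (set(),                    "hot_water"),
    "make_decoction": ({"hot_water"},            "decoction"),
    "make_milk":      ({"hot_water"},            "milk"),
    "mix":            ({"decoction", "milk"},    "coffee"),
}

state = set()                      # initial state: nothing prepared yet
for name, (needs, result) in operators.items():
    if needs <= state:             # operator applicable in current state
        state.add(result)
print("coffee" in state)  # -> True (goal state reached)
```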
Problem Reduction: In this method a complex problem is broken down or decomposed into a set of primitive sub
problems. Solutions for these primitive sub-problems are easily obtained. The solutions for all the sub-problems
collectively give the solution for the complex problem.
In fact, the human mind adopts this strategy for finding solutions to majority of problems it encounters.
For e.g., consider the activities that must be done to set right a punctured tyre. The top level specifies the overall goal, which is a combination of the tasks given in level 2, while levels 3 and 4 indicate primitive sub-problems.
Such a decomposition is naturally pictured as an AND/OR tree. An arc connecting different branches is called an AND arc.
Between the complex and the sub-problem, there exist two kinds of relationships, viz. AND relationship and OR
relationship.
In AND relationship, the solution for the problem is obtained by solving all the sub problems.
In OR relationship, the solution for the problem is obtained by solving any of the sub problems.
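These AND and OR relationships can be sketched as a small AND/OR tree evaluator; the tyre-related leaves and their truth values are illustrative assumptions:

```python
# Evaluating a tiny AND/OR tree for the punctured-tyre example:
# an AND node is solved when all children are solved, an OR node when
# any child is. Leaf truth values are illustrative assumptions.

def solved(node):
    kind, children = node[0], node[1:]
    if kind == "LEAF":
        return children[0]         # True if the primitive step can be done
    if kind == "AND":
        return all(solved(c) for c in children)
    return any(solved(c) for c in children)   # "OR"

tree = ("AND",
        ("LEAF", True),                           # remove wheel
        ("OR", ("LEAF", False), ("LEAF", True)),  # patch tube OR replace tube
        ("LEAF", True))                           # refit wheel
print(solved(tree))  # -> True
```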
AI and Search Process
•Every AI program has to carry out a search process, because the solution steps are not explicit in nature.
•This searching is needed to find solution steps that are not known beforehand.
•Basically, to do a search process, the following are needed:
1.The Initial State description of the problem. For example, the initial positions of all pieces in the chess board.
2.A set of logical operators that change the state. In chess game, it is the rules of the game.
3.The final or the goal state. The final position that a problem solver has to reach in order to succeed.
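Given these three ingredients, a breadth-first search can find the solution steps. The toy puzzle below (reach 10 from 1 using +1 or *2) is an illustrative assumption, not from the original text:

```python
# Breadth-first search driven by the three ingredients above:
# an initial state, a set of operators, and a goal state.
from collections import deque

def bfs(start, goal, operators):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path            # sequence of operator names to the goal
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen and nxt <= goal:   # prune overshoots
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

ops = [("+1", lambda x: x + 1), ("*2", lambda x: x * 2)]
print(bfs(1, 10, ops))  # a shortest operator sequence from 1 to 10
```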

• What are the Components of AI?


a.    Learning
Like humans, computer programs learn in different ways. In AI, learning is divided into several forms. One essential form is learning by trial and error: the program keeps attempting solutions until it arrives at the right result, keeping a note of the moves that produced positive results and storing them in its database, to be reused the next time the computer is given the same problem.
The learning component of AI also includes memorizing individual items, such as solutions to problems, vocabulary, or foreign languages, which is known as rote learning. Generalization then extends what was memorized to new, analogous situations.
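Rote learning as described can be sketched as a lookup table of previously solved problems; the problems and the squaring task below are illustrative assumptions:

```python
# Rote learning: store each solved problem with its result, and reuse
# the stored answer when the same problem recurs.

memory = {}

def solve(problem, compute):
    if problem in memory:            # seen before: recall, don't recompute
        return memory[problem]
    result = compute(problem)        # trial: actually work it out
    memory[problem] = result         # memorize for next time
    return result

print(solve(12, lambda n: n * n))  # computed -> 144
print(solve(12, lambda n: 0))      # recalled from memory -> 144
```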
b.    Reasoning
Until about five decades ago, the art of reasoning was limited to humans. The ability to draw inferences makes reasoning one of the essential components of artificial intelligence. To reason is to allow the platform to draw inferences that fit the provided situation. These inferences are categorized as either deductive or inductive. The difference is that in the deductive case, the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case, the conclusion is only made probable, as in "previous accidents of this sort were caused by instrument failure; therefore this accident was probably caused by instrument failure".
Programming computers to draw deductive inferences has met with considerable success.
True reasoning, however, involves drawing inferences that are relevant to the situation at hand.
c.     Problem-solving
In its general form, problem-solving in AI may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. AI addresses a considerable variety of problems, and the different methods of problem-solving, which are essential components of artificial intelligence, divide into special-purpose and general-purpose techniques.
A special-purpose method is tailor-made for a given problem and often exploits specific features of the situation in which the problem is embedded. A general-purpose method, on the other hand, is applicable to a wide variety of problems. One general-purpose technique is the step-by-step reduction of the difference between the current state and the goal state.
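This step-by-step reduction of difference can be sketched as a greedy loop that always applies the operator bringing the current state closest to the goal (the toy numeric problem and its operators are invented for illustration):

```python
def reduce_difference(current, goal, operators, max_steps=50):
    """Greedy difference reduction: at each step apply the operator
    that brings the state closest to the goal."""
    path = [current]
    for _ in range(max_steps):
        if current == goal:
            return path
        # Pick the successor state with the smallest remaining difference.
        current = min((op(current) for op in operators),
                      key=lambda s: abs(goal - s))
        path.append(current)
    return None  # gave up within the step budget

ops = [lambda s: s + 3, lambda s: s - 1, lambda s: s * 2]
route = reduce_difference(2, 11, ops)
print(route)  # → [2, 5, 10, 13, 12, 11]
```

Note that greedy difference reduction can overshoot and then backtrack, as the printed path shows; fuller means-end analysis introduces subgoals to avoid this.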
d.    Perception
Using the ‘perception’ component of artificial intelligence, a machine scans its environment by means of various sense organs, either artificial or real. Internally, the perceiver analyzes the scene, decomposing it into separate objects and working out their features and relationships. This analysis is complicated by the fact that one and the same item can present many different appearances on different occasions, depending on the angle from which it is viewed.
At its current state, perception is one of those components of artificial intelligence that can propel self-driving cars at moderate speeds. FREDDY was one of the earliest robots to use perception to recognize different objects and assemble simple artifacts.
e.    Language-understanding
In simpler terms, a language can be defined as a system of signs that have meaning by convention. As one of the widely used components of artificial intelligence, language understanding covers different kinds of language and different forms of natural meaning, as exemplified by statements.
An essential characteristic of full human languages, such as English, is that they allow us to differentiate between and talk about different objects. AI is therefore developed so that it can understand the most commonly used human language, English, which allows computers to interact easily with the people who use them.
a.    Supervised Learning:
One of the most common forms of machine learning, supervised learning trains an algorithm on labelled input data so that it learns to map inputs to the correct outputs effectively and without making many errors.
The learning problems in supervised learning include classification and regression: in classification the output belongs to one of a set of categories, while in regression the output is a numerical value. Applications of supervised learning can be seen in recognizing speech, faces, objects, handwriting, or gestures.
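A minimal illustration of supervised learning is a nearest-neighbour classifier, which predicts the label of a new point from labelled training examples (the toy 2-D data below is invented):

```python
def nearest_neighbor(train, query):
    """train: list of ((x, y), label) pairs.
    Returns the label of the training point closest to the query."""
    def dist(p, q):
        # Squared Euclidean distance; square root is unnecessary for ranking.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    best = min(train, key=lambda item: dist(item[0], query))
    return best[1]

# Labelled training data: two well-separated clusters.
train = [((0, 0), "A"), ((1, 0), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(nearest_neighbor(train, (0.5, 0.2)))  # → A
print(nearest_neighbor(train, (5.5, 4.0)))  # → B
```

The "training" here is simply storing the labelled examples; real systems fit a model, but the supervised input-to-label mapping is the same idea.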
b.   Unsupervised Learning
Unlike supervised learning, where the platform uses labelled data to train the model, unsupervised learning uses unlabelled data for its training. More exploratory in nature, unsupervised learning is a reliable means of uncovering unknown features and patterns in data, allowing categorization. Broadly divided into clustering and association problems, this form of learning allows AI to frame the right questions.
By framing the right questions, the program can model the data in several ways to highlight anomalies. Association, in this type of learning, can be applied to discover tendencies based on newly found relationships among variables across a vast database.
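Clustering, the most common unsupervised task, can be sketched with a bare-bones k-means on one-dimensional data (the values are invented for illustration; no labels are used anywhere):

```python
def kmeans_1d(points, k=2, iters=20):
    """Tiny 1-D k-means: alternate cluster assignment and centroid update."""
    centroids = points[:k]  # naive initialisation: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
centroids, clusters = kmeans_1d(data)
print(sorted(round(c, 2) for c in centroids))  # → [1.0, 8.03]
```

The algorithm discovers the two groups hidden in the data without ever being told what the groups are, which is exactly the unsupervised setting described above.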
c.    Semi-supervised Learning (SSL)
Semi-supervised learning falls in between unsupervised and supervised learning. AI uses this method of learning when it needs to balance the two approaches. In many cases the reference data needed to find a solution is available, but it is either inaccurate or incomplete. This is where SSL comes into play, as it can use whatever reference data exists and apply unsupervised learning techniques to find the nearest possible solution.
Interestingly, SSL uses both labeled and unlabeled data. This way, AI can use both data sets to find relationships, patterns, and structures. It is also used to reduce human bias in the process.
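One simple SSL scheme is self-training: start from the labelled points and pseudo-label only those unlabelled points the model is confident about. A sketch (the 1-D data and the distance-based confidence threshold are invented for illustration):

```python
def self_train(labeled, unlabeled, rounds=3):
    """Self-training sketch: label each unlabeled point with the class of
    its nearest labeled point, keeping only confident (close) matches."""
    labeled = list(labeled)            # work on a copy
    for _ in range(rounds):
        still = []
        for x in unlabeled:
            nearest = min(labeled, key=lambda item: abs(item[0] - x))
            if abs(nearest[0] - x) < 1.0:        # confidence threshold
                labeled.append((x, nearest[1]))  # accept the pseudo-label
            else:
                still.append(x)                  # defer: not confident yet
        unlabeled = still
    return labeled

labeled = [(0.0, "low"), (10.0, "high")]        # the scarce labelled data
unlabeled = [0.8, 1.5, 9.4, 8.7, 5.0]           # the plentiful unlabelled data
result = self_train(labeled, unlabeled)
print(len(result))  # → 6
```

Note that the ambiguous point 5.0 never gets pseudo-labelled: it stays too far from either class to pass the confidence threshold, which is how self-training limits the damage from incomplete reference data.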
d.   Reinforcement Learning
A form of dynamic learning, reinforcement learning trains algorithms using a system of rewards and punishments. The reinforcement learning algorithm finds solutions by interacting with its environment: it earns rewards for executing operations correctly and incurs penalties when it does not.
This way, the algorithm learns without being explicitly taught by a human, requiring minimal human intervention. The setup usually consists of three components: an agent, an environment, and actions. The learning process focuses on maximizing reward and minimizing penalty.
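The agent/environment/action loop can be sketched with tabular Q-learning on a tiny corridor world, where reaching the right end earns a reward and every step incurs a small penalty (the environment, rewards, and hyperparameters are all invented for illustration):

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D corridor. Actions: 0 = left, 1 = right.
    Reaching the rightmost state earns +1; every other step costs -0.01."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection (explore vs. exploit).
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == n_states - 1 else -0.01
            # Reward/penalty update toward the best value of the next state.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = q_learn()
# After training, "move right" should dominate in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(4)))
```

The agent is never told the corridor's layout; it learns a policy purely from the reward and penalty signals, which is the defining trait of reinforcement learning.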
• Common AI Applications
Listed below are some of the most common applications of AI in the modern world:  
1.     Chatbots:
In a time where customers want real-time resolution of their issues, AI has been the key to catering to this demand. Today, chatbots deliver flexible and smart analytics by engaging visitors in conversation. Interestingly, over 67% of online visitors prefer chatbots.
2.    Artificial Intelligence in eCommerce:
AI caters to every form of the e-commerce model, making its services available to businesses of every size. Leveraging machine learning, AI software automates the process of tagging products, organizing inventory, and improving visual search. With its market value expected to reach $49 billion by 2021, the best is yet to come!
3.    Human Resource Management:
AI and machine learning have drastically changed the scope of hiring in companies of different sizes. It provides
businesses with the opportunity to retain the best talent, securing a reliable future for operations.
4.    Healthcare department:
Expected to reach $6.6 billion in market valuation by 2021, the healthcare sector has also benefited greatly from AI. In its present form, AI can manage an entire clinic cost-effectively.
5.    Logistics & Supply Chain:
Using a mix of customer data and analytics, AI has helped logistics and supply chain operations reach their peak. It allows businesses to act on consumer data and make all the important decisions regarding supply chain operations.
• Evolution of AI
AI is divided broadly into three stages: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and
artificial super intelligence (ASI).
The first stage, ANI, as the name suggests, is limited in scope with intelligence restricted to only one functional area. ANI is, for
example, on par with an infant. The second stage, AGI, is at an advanced level: it covers more than one field like power of
reasoning, problem solving and abstract thinking, which is mostly on par with adults. ASI is the final stage of the intelligence
explosion, in which AI surpasses human intelligence across all fields.
The transition from the first to the second stage has taken a long time (see chart), but we believe we are currently on the cusp of
completing the transition to the second stage - AGI, in which the intelligence of machines can equal humans. This is by no means
a small achievement.
Artificial intelligence (AI) can have a major impact on the way modern societies respond to the hard challenges they
face. Properly harnessed, AI can create a more fair, healthy, and inclusive society. Today, AI has become a mature
technology and an increasingly important part of the modern life fabric. AI is already deployed in different application
domains, e.g. recommendation systems, spam filters, image recognition, voice recognition, virtual assistants, etc. It spans many sectors, from medicine to transportation, and many decades, since the term was introduced in the 1950s. The approaches have also evolved, from the foundational AI algorithms of the 1950s, to the paradigm shift toward symbolic algorithms and expert system development in the 1970s, the introduction of machine learning in the 1990s, and the deep learning algorithms of the 2010s. Starting with the fundamental definitions and building on the historical context, this report summarizes the evolution of AI, introduces the “seasons” of AI development (i.e. winters for the decline and springs for the growth), describes the current rise of interest in AI, and concludes with the uncertainty about the future of AI, with chances of another AI winter or of an even greater AI spring.
HISTORY OF AI
WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial intelligence is the ability of machines to perform certain tasks, which need the intelligence showcased
by humans and animals. This definition is often ascribed to Marvin Minsky and John McCarthy from the 1950s,
who were also known as the fathers of the field.
 
Artificial intelligence allows machines to understand and achieve specific goals. AI includes machine learning and, within it, deep learning. The former refers to machines automatically learning from existing data without being assisted by human beings, while deep learning allows the machine to absorb huge amounts of unstructured data such as text, images, and audio.
Maturation of Artificial Intelligence (1943-1952)
•Year 1943: The first work which is now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a model of artificial neurons.
•Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now
called Hebbian learning.
•Year 1950: Alan Turing, an English mathematician and a pioneer of machine learning, published "Computing Machinery and Intelligence", in which he proposed a test that can check a machine's ability to exhibit intelligent behavior equivalent to human intelligence, now called the Turing test.
The birth of Artificial Intelligence (1952-1956)
•Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", which was named the "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some of them.
•Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference. For the first time, AI was coined as an academic field.

At that time, high-level computer languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
The golden years-Early enthusiasm (1956-1974)
•Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in 1966.
•Year 1972: The first intelligent humanoid robot, named WABOT-1, was built in Japan.
The first AI winter (1974-1980)
•The period between 1974 and 1980 was the first AI winter. An AI winter refers to a period in which computer scientists dealt with a severe shortage of government funding for AI research.
•During AI winters, public interest in artificial intelligence declined.
A boom of AI (1980-1987)
•Year 1980: After the AI winter, AI came back with "expert systems". Expert systems were programs that emulate the decision-making ability of a human expert.
•In 1980, the first national conference of the American Association for Artificial Intelligence (AAAI) was held at Stanford University.
The second AI winter (1987-1993)
•The period between 1987 and 1993 was the second AI winter.
•Investors and governments again stopped funding AI research because of the high costs and inefficient results. Expert systems such as XCON had become very expensive to maintain.
The emergence of intelligent agents (1993-2011)
•Year 1997: In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to beat a reigning world chess champion.
•Year 2002: For the first time, AI entered the home in the form of Roomba, a robot vacuum cleaner.
•Year 2006: AI had entered the business world by 2006. Companies like Facebook, Twitter, and Netflix started using AI.
Deep learning, big data and artificial general intelligence (2011-present)
•Year 2011: In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions as well as riddles. Watson proved that it could understand natural language and solve tricky questions quickly.
•Year 2012: Google launched the Android app feature "Google Now", which could provide information to the user as a prediction.
•Year 2014: In 2014, the chatbot "Eugene Goostman" won a competition in the famous "Turing test".
•Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed extremely well.
•Google demonstrated an AI program, "Duplex", a virtual assistant that booked a hairdresser appointment over the phone, and the lady on the other end didn't notice she was talking with a machine.
Now AI has developed to a remarkable level. Concepts such as deep learning, big data, and data science are booming. Companies like Google, Facebook, IBM, and Amazon are working with AI and creating amazing devices. The future of artificial intelligence is inspiring and promises ever higher intelligence.

• What is Turing test in artificial intelligence?


The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is
capable of thinking like a human being. The test is named after Alan Turing, the founder of the Turing Test and an
English computer scientist, cryptanalyst, mathematician and theoretical biologist.
Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under
specific conditions. The original Turing Test requires three terminals, each of which is physically separated from the
other two. One terminal is operated by a computer, while the other two are operated by humans.
During the test, one of the humans functions as the questioner, while the second human and the computer function as
respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and
context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent
was human and which was a computer.
The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the
computer is considered to have artificial intelligence because the questioner regards it as "just as human" as the
human respondent.
History of the Turing Test
The test is named after Alan Turing, who pioneered machine learning during the 1940s and 1950s. Turing introduced the test in
his 1950 paper called "Computing Machinery and Intelligence" while at the University of Manchester.
Variations and alternatives to the Turing Test
There have been a number of variations to the Turing Test to make it more relevant. Such examples include:
•Reverse Turing Test -- where a human tries to prove to a computer that they are not a computer. An example of this is a CAPTCHA.
•Total Turing Test -- where the questioner can also test perceptual abilities as well as the ability to manipulate objects.
•Minimum Intelligent Signal Test -- where only true/false and yes/no questions are given.

Alternatives to the Turing Test were later developed because many consider the Turing Test flawed. These alternatives include tests
such as:
•The Marcus Test -- in which a program that can 'watch' a television show is tested by being asked meaningful questions about
the show's content.
•The Lovelace Test 2.0 -- which is a test made to detect AI through examining its ability to create art.
•Winograd Schema Challenge -- which is a test that asks multiple-choice questions in a specific format.
Why is the Turing Test important?
Online, real-time communication of this type can influence an individual human in such a way that they are fooled into believing something is true when in fact it is not. The Turing Test is a vital tool for understanding, and combatting, that threat.
What is the Turing Test?
In 1950, Alan Turing published a seminal paper titled “Computing Machinery and Intelligence” in the journal Mind. The paper posed the question “Can machines think?” and suggested abandoning the quest to define whether a machine can think, and instead testing the machine with the ‘imitation game’. This simple game is played with three people:
•a man (A)
•a woman (B),
•and an interrogator (C) who may be of either sex.
The concept of the game is that the interrogator stays in a room that is separate from both the man (A) and the woman (B), the
goal is for the interrogator to identify who the man is, and who the woman is. In this instance the goal of the man (A) is to
deceive the interrogator, meanwhile the woman (B) can attempt to help the interrogator (C). To make this fair, no verbal cues can
be used, instead only typewritten questions and answers are sent back and forth. The question then becomes: How does the
interrogator know who to trust?
The interrogator only knows them by the labels X and Y, and at the end of the game he simply states either ‘X is A and Y is B’ or ‘X
is B and Y is A’.
The question then becomes, if we remove the man (A) or the woman (B), and replace that person with an intelligent machine,
can the machine use its AI system to trick the interrogator (C) into believing that it’s a man or a woman? This is in essence the
nature of the Turing Test.
Turing Test in Artificial Intelligence

• The Turing test was developed by Alan Turing (computer scientist) in 1950. He proposed that the
Turing test be used to determine whether or not a computer (machine) can think intelligently like
humans.
• Imagine a game of three players: two humans and one computer, with an interrogator (a human) isolated from the other two players. The interrogator's job is to figure out which one is human and which one is a computer by asking questions of both of them. To make things harder, the computer tries to make the interrogator guess wrongly. In other words, the computer tries to be as indistinguishable from a human as possible.
 
The “standard interpretation” of the Turing Test, in which player C, the interrogator, is given the task of trying
to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using
the responses to written questions to make the determination 
The conversation between interrogator and computer would be like this: 
C(Interrogator): Are you a computer? 
A(Computer): No 
C: Multiply one large number by another: 158745887 * 56755647 
A: After a long pause, an incorrect answer! 
C: Add 5478012, 4563145 
A: (Pause about 20 seconds and then give as answer)10041157 
If the interrogator cannot distinguish the answers provided by the human from those of the computer, then the computer passes the test and the machine is considered as intelligent as a human. In other words, a computer would be considered intelligent if its conversation couldn't easily be distinguished from a human's. The whole conversation is limited to a text-only channel such as a computer keyboard and screen. 
He also proposed that by the year 2000 a computer “would be able to play the imitation game so well that an average
interrogator will not have more than a 70-percent chance of making the right identification (machine or human) after
five minutes of questioning.” No computer has come close to this standard. 
But in 1980, John Searle proposed the "Chinese room argument". He argued that the Turing test cannot be used to determine whether or not a machine is as intelligent as a human: programs like ELIZA and PARRY could easily pass the Turing Test simply by manipulating symbols of which they had no understanding, and without understanding they cannot be described as "thinking" in the same sense people do. 
UNIT-2
Expert System

• Components of Expert System

An expert system generally consists of four components: a knowledge base, the search or inference system, a knowledge acquisition system, and the user interface or communication system.

Expert systems in artificial intelligence are a prominent domain of AI research. They were initially introduced by researchers at Stanford University and were developed to solve complex problems in a particular domain.
Introduction to Expert Systems in Artificial Intelligence
An expert system is an area in which artificial intelligence simulates the behavior and judgement of a human, or of an organization containing experts. It acquires relevant knowledge from its knowledge base and interprets it according to the user's problem. The data in the knowledge base is essentially added by humans who are experts in a particular domain; the software, however, is used by non-experts to gain information. Expert systems are used in various areas such as medical diagnosis, accounting, coding, gaming and more.
Broken down, an expert system is essentially AI software that uses knowledge stored in a knowledge base to solve problems. Building one usually requires a human expert, and thus it aims at preserving that expert's knowledge in its knowledge base. Hence, expert systems are computer applications developed to solve complex problems in a particular domain, at an extraordinary level of human intelligence and expertise.
History of Expert Systems in AI
Expert systems were first presented by Stanford University researchers during the 1970s, although they had been on computer scientists' minds since the 1940s and 1950s.
Edward Feigenbaum and Joshua Lederberg, who were key members of the Stanford Heuristic Programming Project, built the first expert system in 1965. The researchers wanted to create a specialized system rather than a general-purpose one. Two of the earliest applications were chemical analysis (DENDRAL) and medical diagnostics (MYCIN). MYCIN, an infectious-disease diagnostic tool, makes its findings through backward chaining.
Expert systems have explanation facilities that let users ask how they arrived at a particular conclusion, or why they could not. In other words, an expert system is capable of justifying its reasoning and output.
Examples of AI Expert Systems
1. MYCIN
MYCIN is amongst the oldest expert systems. It was designed on the fundamentals of backward chaining and was capable of identifying infection-causing bacteria.
MYCIN diagnosed serious bacterial infections, such as bacteremia and meningitis, and recommended a course of antibiotics, with the dosage adjusted for the patient.
2. DENDRAL
An expert system designed to determine the structure of a chemical compound from its spectrographic data. Its primary aim was to study hypothesis formation and discovery in science.
The software program DENDRAL is said to be the first expert system because it automated the decision-making process and problem-solving behavior of organic chemists.
3. R1/XCON
An expert system that had the ability to select the components best suited to build the computing system a customer ordered.
The system ensured the customer was furnished with all the components and software needed to make up the required computing system.
4. PXDES
The Pneumoconiosis X-Ray Diagnosis Expert System (PXDES) is an expert system used to determine the type and degree of pneumoconiosis, a lung disease, in a patient. The shadow on a chest X-ray is used to work out the type and severity of the disease.
5. DXplain
This was an expert system that was capable of diagnosing a number of diseases in a patient based on the input provided.
DXplain provides information and supplies clinical manifestations, if any, for diseases that are unusual or atypical.
Components/ Architecture of Expert Systems
There are 5 Components of expert systems:
•Knowledge Base
•Inference Engine
•Knowledge acquisition and learning module
•User Interface
•Explanation module 
1.) Knowledge base: The knowledge base in an expert system represents facts and rules. It contains knowledge of a specific domain, along with rules for solving problems and procedures relevant to the domain. It is where the data contributed by specialists from the required domains is stored. Think of a knowledge base as a book or an article: to make it sound, you need to draw on data from specialists to make it credible. It contains both factual and heuristic knowledge.
Components of Knowledge Base
a. Factual Knowledge
As the name suggests, factual knowledge is based upon facts: information that is proven and widely accepted by one and all.
b. Heuristic Knowledge
While factual knowledge is about facts, heuristic knowledge is less organized in nature and relies on one's own evaluation, judgement and experience.
2.) Inference engine: The most basic function of the inference engine is to acquire relevant data from the knowledge base, interpret it, and find a solution to the user's problem. Inference engines also have explanatory and debugging abilities. The inference engine is the mind behind the user interface: it contains a predefined set of rules for tackling a particular issue and refers to the information in the knowledge base.
It chooses which facts and rules to apply when attempting to answer the user's query, and it reasons over the information in the knowledge base.
It also helps deduce the problem to find the solution, and is responsible for formulating conclusions.
The two basic strategies used in inference engines are:

a. Forward Chaining
This strategy is used to determine the probable outcome in the future. Starting from the given inputs and conditions, the expert system applies its rules to derive new facts until a particular goal is reached.
b. Backward Chaining
This strategy is used to determine why a particular event happened, reasoning backward from a goal to the conditions that could establish it. It is utilized in automated theorem provers, inference engines, proof assistants, and other AI applications.
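Both strategies can be sketched over the same tiny rule base, where each rule is a (premises, conclusion) pair (the medical facts and rules below are invented for illustration):

```python
# Each rule: (set of premise facts, fact concluded when all premises hold).
RULES = [
    ({"has_fever", "has_rash"}, "measles_suspected"),
    ({"measles_suspected", "not_vaccinated"}, "refer_specialist"),
]

def forward_chain(facts, rules):
    """Forward chaining: fire rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Backward chaining: prove the goal by recursively proving the
    premises of any rule that concludes it."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

facts = {"has_fever", "has_rash", "not_vaccinated"}
print("refer_specialist" in forward_chain(facts, RULES))   # → True
print(backward_chain("refer_specialist", facts, RULES))    # → True
```

Forward chaining derives every fact the rules allow; backward chaining explores only the rules that could prove the queried goal, which is why it suits diagnostic systems like MYCIN.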
3.) Knowledge acquisition and learning module: This component allows the expert system to acquire more data from various sources and store it in the knowledge base.
4.)User interface: This component is essential for a non-expert user to interact with the expert system and find solutions.
5.) Explanation module: As the name suggests, this module provides the user with an explanation of how the system reached its conclusion.
Through the user interface, the expert system interacts with the user: it accepts a query as input in a readable form and passes it to the inference engine. After receiving the response from the inference engine, it displays the output to the user. In other words, the user interface is what enables a non-expert user to communicate with the expert system and find a solution.
The user interface (UI) is the part that enables communication between the system and its users; it is analogous to your PC desktop or your phone's home screen.
Applications of Expert Systems
•Expert systems are being used in designing and manufacturing domain for the production of vehicles and gadgets like cameras.
•In the knowledge domain, Expert Systems are used for delivering the required knowledge to the client. The knowledge can be
legal advice, tax advice, or something other than that.
•In the banking and finance sector, expert systems are widely used for the detection of frauds.
•Expert systems can also be used in the diagnosis and troubleshooting of medical equipment.
•Apart from this, expert systems also have use cases in planning and scheduling tasks.
Advantages of Expert Systems
•Expert systems are easily available, as they are not especially difficult to develop and are thus easy to reproduce.
•Increased accuracy is a prominent advantage of Expert Systems.
•They can be of use at workstations where there is a risk to human lives.
•They can be made to work 24×7 without the need for any human intervention.
•Expert Systems offer a very speedy decision-making process which in most cases is error-free.
Limitations of Expert Systems
•Its judgment is based solely on the information being stored in the knowledge base. Any discrepancy in the information can lead
to flawed decision making.
•Unlike humans, it is not capable of providing out-of-the-box solutions to problems.
•The cost required for the maintenance of these systems is quite hefty.
• Expert System Life Cycle,
Topics covered are the following:
problem identification phase, feasibility study phase, project planning phase, knowledge acquisition phase, knowledge
representation phase, knowledge implementation phase, verification and validation, installation/transition/ training,
operation/evaluation/maintenance.

What are the phases of expert system development life cycle?


Waterman [11] provided a five-stage approach to the development of Expert Systems: Identification, Conceptualization,
Formalization, Implementation, and Testing.
Expert System Development Life Cycle

The term “expert system” could be applied to any computer program that is able to draw conclusions and make decisions
based on the knowledge it holds, represented as a database. An expert system does not have to be a replacement for a human
expert. Such systems are often used as a support when a human cannot gather all the vital information because of its amount
or complexity. That is why there is a need for systems that work in real time and perform their functions faster and better
than a human is able to. There is also another reason: computer programs are much cheaper than human experts (not in terms
of their value, which cannot be compared, but in terms of maintenance: the cost of education, salaries, etc.).
Stages in the Development of an Expert System:
The essential principles:
 get a prototype – a small, preliminary version of the final system – up & running at an early stage;
 present this to the domain expert, for criticism;
 proceed to refine this prototype with repeated debugging & knowledge accretion stages;
 continue with this cycle until the knowledge base is finished.
Expert System Development Life Cycle:
Problem Identification Phase:
Identifying the problem and the opportunity where the organization can obtain benefits from an expert system, and
establishing the expert system's general goals.
Feasibility Study Phase:
Assessing the feasibility of the expert system development in terms of its technical, operational, and economic feasibility.
Project Planning Phase:
Planning for the expert system project, including development team members, working environment, project schedule, and
budget.
Knowledge Acquisition Phase:
Extracting domain knowledge from domain experts and determining the system’s requirements.
Knowledge Representation Phase:
Representing key concepts from the domain and the interrelationships between these concepts using formal representation methods.
Knowledge Implementation Phase:
Coding the formalized knowledge into a working prototype.
Verification and Validation:
Verifying and validating the working prototype against the system requirements, and revising it as necessary according to
the domain expert's feedback.
Installation and Training:
Installing the final prototype in an operating environment, training the users, and developing documentation and a user manual.
Operation/Evaluation/Maintenance:
Running the system in an operating environment, evaluating its performance and benefits, and maintaining the system.
What are rule-based expert systems?
A rule-based expert system is the simplest form of artificial intelligence and uses prescribed knowledge-based rules to
solve a problem. The aim of the expert system is to take knowledge from a human expert and convert it into a number of
hard-coded rules to apply to the input data.

Rule-based systems (also known as production systems or expert systems) are the simplest form of artificial intelligence. A
rule-based system uses rules as the knowledge representation for knowledge coded into the system. Definitions of rule-based
systems depend almost entirely on expert systems, which are systems that mimic the reasoning of a human expert in solving a
knowledge-intensive problem. Instead of representing knowledge in a declarative, static way as a set of things which are
true, rule-based systems represent knowledge in terms of a set of rules that tell what to do or what to conclude in
different situations.
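The "what to conclude in different situations" style described above can be sketched as a tiny forward-chaining rule engine in Python. The rules and facts below are invented purely for illustration; a real system would hold far richer domain knowledge:

```python
# Each rule maps a set of antecedent facts to a conclusion (an if-then rule).
# All rule content here is hypothetical, for illustration only.
RULES = [
    ({"has_fever", "has_rash"}, "possible_measles"),
    ({"engine_wont_start", "battery_dead"}, "replace_battery"),
    ({"possible_measles"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose antecedents are all satisfied
    until no new facts can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, RULES))
```

Note how the third rule fires only after the first one has added `possible_measles` to the fact set; this chaining of conclusions is what distinguishes a rule engine from a flat lookup.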

What do you mean by rule-based expert system in the area of AI?


A rule-based expert system is the simplest form of artificial intelligence and uses prescribed knowledge-based rules to
solve a problem. The aim of the expert system is to take knowledge from a human expert and convert it into a number of
hard-coded rules to apply to the input data.

What is knowledge expert system?
Expert systems have knowledge specific to one problem domain, e.g., medicine, science, or engineering. The expert's
knowledge is called a knowledge base, and it contains accumulated experience that has been loaded into and tested in the system.
What is the role of knowledge in expert system?
Knowledge-based expert systems use human knowledge to solve problems that would normally require human intelligence.
Expert systems are designed to carry the intelligence and information found in the intellect of experts and provide this
knowledge to other members of the organization for problem-solving purposes.
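The knowledge-base idea above can be sketched in a few lines of Python: the inference code stays generic, and only the knowledge base, here a toy medical mapping invented for illustration, carries the expert's accumulated experience. Swapping in a different knowledge base changes the domain without touching the engine:

```python
# Hypothetical knowledge base: disorder -> set of required symptoms.
# The content is illustrative only, not real medical knowledge.
knowledge_base = {
    "flu":  {"fever", "cough", "fatigue"},
    "cold": {"cough", "sneezing"},
}

def diagnose(symptoms, kb):
    """Generic inference step: return every disorder whose required
    symptom set is fully present in the observed symptoms."""
    observed = set(symptoms)
    return [name for name, required in kb.items() if required <= observed]

print(diagnose({"fever", "cough", "fatigue", "sneezing"}, knowledge_base))
```

This separation is the point made above: the knowledge base is data that can be reviewed and extended by the domain expert, while the inference engine never changes.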
Applications of Expert Systems
Design Domain − Camera lens design, automobile design.
Medical Domain − Diagnosis systems to deduce the cause of disease from observed data; conducting medical operations on humans.
Monitoring Systems − Comparing data continuously with observed systems.
Process Control Systems − Controlling physical processes based on monitoring.
Knowledge Domain − Finding faults in vehicles or computers.
Commerce − Detection of possible fraud and suspicious transactions; stock market trading; airline scheduling; cargo scheduling.
Advantages of Expert Systems
•Availability − They are easily available due to mass production of software.
•Less Production Cost − Production costs of expert systems are extremely reasonable and affordable.
•Speed − They offer great speed and reduce the amount of work.
•Less Error Rate − Error rate is much lower as opposed to human errors.
•Low Risks − They are capable of working in environments that are dangerous to humans.
•Steady response − They work steadily, without emotion, tension, or fatigue.
Limitations of Expert Systems
It is evident that no technology can offer easy and complete solutions. Larger systems are not only expensive but
also require a significant amount of development time and computer resources. Limitations of ES include:
•Difficult knowledge acquisition
•Maintenance costs
•Development costs
•Adheres only to specific domains.
•Requires constant manual updates; it cannot learn by itself.
•It is incapable of providing the logic behind its decisions.
Expert systems have managed to evolve to an extent that they have stirred various debates about the fate of humanity in the face
of such intelligence. Considering that Expert systems were among the first truly successful forms of artificial intelligence (AI)
software, it might just be the future of technology. 
Development of an Expert System in Artificial Intelligence

In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.
Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as
if-then rules rather than through conventional procedural code.
How do you develop an expert system?
Here is a six-step formula for building your core expert systems.
1.Step One: Define All Deliverables. ...
2.Step Two: Lay Out the Process. ...
3.Step Three: Determine the Optimal Level of Expertise for Each Step. ...
4.Step Four: Control for Consistency. ...
5.Step Five: Map Out the Key Components of Your Expert System to Refine First.
What are the stages in the development of expert system?
Waterman [11] provided a five-stage approach to the development of Expert Systems: Identification, Conceptualization,
Formalization, Implementation, and Testing.
What is the main purpose of expert system development?
Expert systems aim to rebuild human reasoning from the expertise obtained from experts: they store knowledge, establish
links between pieces of knowledge, and apply that knowledge to perform human intellectual activities.
The following points highlight the five main stages to develop an expert system.The stages are: 1. Identification 2.
Conceptualisation 3. Formalisation (Designing) 4. Implementation 5. Testing (Validation, Verification and Maintenance).
A knowledge engineer is an AI specialist, perhaps a computer scientist or programmer, who is skilled in the ‘art’ of
developing expert systems. Unlike in other engineering disciplines, there are no generally accepted criteria to determine
exactly who is a knowledge engineer; the field is much too new. You don’t need a degree in “knowledge engineering” to
call yourself a knowledge engineer; in fact, nearly everyone who has ever contributed to the technical side of the expert
system development process could be considered a knowledge engineer.
A domain expert is an individual who has significant expertise in the domain of the expert system being developed. It is
not critical that the domain expert understand AI or expert systems; that is one of the functions of the knowledge
engineer.
The knowledge engineer and the domain expert usually work very closely together for long periods of time throughout
the several stages of the development process.
An expert system is developed and refined over a period of several years, since it is typically a computer-based software
project. Fig. 12.8 divides the process of expert system development into five distinct stages. In practice, it may not be
possible to break down the expert system development cycle precisely. However, an examination of these five stages may
serve to provide us with some insight into the ways in which expert systems are developed.
Stage # 1. Identification:
Before we can begin to develop an expert system, it is important to describe, with as much precision as possible, the problem
which the system is intended to solve. It is not enough simply to feel that an expert system would be helpful in a certain situation;
we must determine the exact nature of the problem and state the precise goals which indicate exactly how the expert system is
expected to contribute to the solution.
To begin, the knowledge engineer, who may be unfamiliar with this particular domain, consults manuals and training guides to
gain some familiarity with the subject. Then the domain expert describes several typical problem states. The knowledge engineer
attempts to extract fundamental concepts from the similar cases in order to develop a more general idea of the purpose of the
expert system.
After the domain expert describes several cases, the knowledge engineer develops a ‘first-pass’ problem description. Typically,
the domain expert may feel that the description does not entirely represent the problem. The domain expert then suggests
changes to the description and provides the knowledge engineer with additional examples to illustrate further the problem’s fine
points.
Next, the knowledge engineer revises the description, and the domain expert suggests further changes. This process is repeated
until the domain expert is satisfied that the knowledge engineer understands the problems and until both are satisfied that the
description adequately portrays the problem which the expert system is expected to solve.
This ‘iterative’ procedure (Fig. 12.9) is typical of the entire expert-system development process. The results are evaluated at each
stage of the process and compared to the expectations. If the results do not meet the expectations, adjustments are made to
that stage of the process, and the new results are evaluated. The process continues until satisfactory results are achieved.
Stage # 2. Conceptualisation:
Once the problem that an expert system is to solve has been identified, the next stage involves analysing the problem
further to ensure that its specifics, as well as its generalities, are understood.
In the conceptualisation stage, the knowledge engineer frequently creates a diagram of the problem to depict graphically the
relationships between the objects and processes in the problem domain. It is often helpful at this stage to divide the problem
into a series of sub-problems and to diagram both the relationships among the pieces of each sub-problem and the relationships
among the various sub-problems.
As in the identification stage, the conceptualisation stage involves a circular procedure of iteration and reiteration between the
knowledge engineer and the domain expert. When both agree that the key concepts, and the relationships among them, have
been adequately conceptualised, this stage is complete.
Not only is each stage in the expert system development process circular, the relationships among the stages may be circular as
well. Since each stage of the development process adds a level of detail to the previous stage, any stage may expose a weakness
in a previous stage.
Stage # 3. Formalisation (Designing):
In the preceding stages, no effort has been made to relate the domain problem to the artificial intelligence technology
which may solve it. During the identification and conceptualization stages, the focus is entirely on understanding the
problem. Now, during the formalization stage, the problem is connected to its proposed solution: an expert system is
specified by analyzing the relationships depicted in the conceptualization stage. The knowledge engineer begins to select
the techniques which are appropriate for developing this particular expert system.
During formalization, it is important that the knowledge engineer be familiar with the following:
1. The various techniques of knowledge representation and intelligent search techniques used in expert systems.
2. The expert system tools which can greatly expedite the development process.
3. Other expert systems which may solve similar problems and thus may be adaptable to the problem at hand.
Stage # 4. Implementation:
During the implementation stage the formalized concepts are programmed into the computer which has been chosen for system
development, using the predetermined techniques and tools to implement a ‘first-pass’ (prototype) of the expert system.
Theoretically, if the methods of the previous stages have been followed with diligence and care, the implementation of the
prototype should proceed smoothly. In practice, the development of an expert system may be as much an art as it is a science,
because following all the rules does not guarantee that the system will work the first time it is implemented. In fact, experience
suggests the opposite. Many scientists actually consider the prototype to be a ‘throw-away’ system, useful for evaluating
progress but hardly a usable expert system.
If the prototype works at all, the knowledge engineer may be able to determine if the techniques chosen to implement the
expert system were the appropriate ones. On the other hand, the knowledge engineer may discover that the chosen techniques
simply cannot be implemented. It may not be possible, for example, to integrate the knowledge representation techniques

Stage # 5. Testing (Validation, Verification and Maintenance):

The chances of a prototype expert system executing flawlessly the first time it is tested are so slim as to be virtually
non-existent. A knowledge engineer does not expect the testing process to verify that the system has been constructed
entirely correctly. Rather,
testing provides an opportunity to identify the weaknesses in the structure and implementation of the system and to make the
appropriate corrections.
Depending on the types of problems encountered, the testing procedure may indicate that the system was implemented
incorrectly, or perhaps that the rules were implemented correctly but were poorly or incompletely formulated. Results from the
tests are used as ‘feedback’ to return to a previous stage and adjust the performance of the system.
Once the system has proven to be capable of correctly solving straight-forward problems, the domain expert suggests complex
problems which typically would require a great deal of human expertise. These more demanding tests should uncover more
serious flaws and provide ample opportunity to ‘fine tune’ the system even further.
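The feedback loop described above can be sketched as a small test harness: the prototype's rules are run against cases supplied by the domain expert, and any mismatches are reported back so the rules can be corrected. The loan-approval rule and the test cases below are entirely hypothetical:

```python
# Toy prototype rule (hypothetical domain knowledge, for illustration only).
def classify(loan_amount, credit_score):
    """Approve small loans or applicants with good credit."""
    if credit_score >= 700 or loan_amount < 1000:
        return "approve"
    return "reject"

# Expert-supplied test cases: ((loan_amount, credit_score), expected decision).
cases = [
    ((500, 600), "approve"),
    ((5000, 720), "approve"),
    ((5000, 600), "reject"),
]

# Collect mismatches; these become the 'feedback' used to revise the rules.
failures = [(args, want, classify(*args))
            for args, want in cases if classify(*args) != want]
print("all cases passed" if not failures else f"feedback needed: {failures}")
```

In practice, the case list starts with straightforward problems and grows to include the demanding cases mentioned above, so each testing pass exposes progressively subtler flaws.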
Applications of Expert Systems
Some popular Application of Expert System:
•Information management
•Hospitals and medical facilities
•Help desks management
•Employee performance evaluation
•Loan analysis
•Virus detection
•Useful for repair and maintenance projects
•Warehouse optimization
•Planning and scheduling
•The configuration of manufactured objects
•Financial decision-making
•Knowledge publishing
•Process monitoring and control
•Supervise the operation of the plant and controller
•Stock market trading
•Airline scheduling & cargo schedules
