Unit V - AI
Unit 5 : Applications of AI
The function and popularity of Artificial Intelligence are soaring by the day. Artificial
Intelligence is the ability of a system or a program to think and learn from experience. AI
applications have significantly evolved over the past few years and have found their applications
in almost every business sector. This article will help you learn the top Artificial Intelligence
applications in the real world.
1. AI Application in E-Commerce
Personalized Shopping
Artificial Intelligence technology is used to create recommendation engines through which you
can engage better with your customers. These recommendations are made in accordance with
their browsing history, preferences, and interests, which helps improve your relationship with
your customers and their loyalty towards your brand.
AI-Powered Assistants
Virtual shopping assistants and chatbots help improve the user experience while shopping online.
Natural Language Processing is used to make the conversation sound as human and personal as
possible. Moreover, these assistants can engage with your customers in real time. On
amazon.com, for instance, much of customer service may soon be handled by chatbots.
Fraud Prevention
Credit card frauds and fake reviews are two of the most significant issues that E-Commerce
companies deal with. By considering the usage patterns, AI can help reduce the possibility of
credit card fraud taking place. Many customers prefer to buy a product or service based on
customer reviews. AI can help identify and handle fake reviews.
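One simple way to spot unusual usage patterns is to flag transactions that deviate sharply from a customer's history. The sketch below uses a z-score on transaction amounts; the history values and the 2.5-sigma threshold are illustrative assumptions, and production systems use far more robust, multi-feature models.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.5):
    """Flag amounts more than `threshold` standard deviations from the mean.

    Real fraud systems use many features and robust statistics (e.g. median
    and MAD), since a single large outlier inflates the plain stdev."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Hypothetical card history with one suspicious spike.
history = [20, 25, 22, 18, 24, 21, 23, 19, 900]
```

Here `flag_anomalies(history)` singles out the 900 spike while leaving the routine purchases alone.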
AI Applications in Education
Artificial Intelligence helps create a rich learning experience by generating and providing audio
and video summaries and integrated lesson plans.
Voice Assistants
Without the direct involvement of the lecturer or teacher, a student can access extra
learning material or assistance through Voice Assistants. These reduce the printing costs of
temporary handbooks and also provide easy answers to very common questions.
Personalized Learning
Using hyper-personalization techniques, AI can monitor a student's data and habits
thoroughly, and then easily generate lesson plans, reminders, study guides, flash notes,
revision schedules, and more.
Autonomous Vehicles
Automobile manufacturing companies like Toyota, Audi, Volvo, and Tesla use machine learning
to train computers to think and evolve like humans when it comes to driving in any environment
and object detection to avoid accidents.
Spam Filters
The email services we use every day rely on AI to filter out spam, sending it to
spam or trash folders so that we see only legitimate messages. Gmail, the popular email
provider, reports filtering out approximately 99.9% of spam.
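Spam filters of this kind are classically built on Naive Bayes: count how often each word appears in spam versus legitimate mail, then compare the two likelihoods for a new message. Below is a minimal from-scratch sketch; the six training messages are invented, and real filters train on millions of examples with many extra signals.

```python
import math
from collections import Counter

# Tiny made-up training corpus.
spam = ["win money now", "free prize win", "claim free money"]
ham  = ["meeting at noon", "project update attached", "lunch tomorrow"]

def count_words(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = count_words(spam), count_words(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(msg, counts, total):
    # Laplace (+1) smoothing avoids zero probability for unseen words.
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def is_spam(msg):
    return log_likelihood(msg, spam_counts, spam_total) > \
           log_likelihood(msg, ham_counts, ham_total)
```

With this toy data, `is_spam("free money")` is true and `is_spam("project meeting")` is false, showing how word statistics alone already separate the two classes.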
Facial Recognition
Our favorite devices, like phones, laptops, and PCs, use facial recognition techniques to
detect and identify faces in order to provide secure access. Apart from personal use,
facial recognition is a widely used Artificial Intelligence application in high-security
areas across several industries.
Recommendation System
Various platforms that we use in our daily lives, like e-commerce sites, entertainment websites,
social media, and video-sharing platforms such as YouTube, all use recommendation systems to
gather user data and provide customized recommendations that increase engagement. This is a very
widely used Artificial Intelligence application in almost all industries.
AI-powered robots can be used for carrying goods in hospitals, factories, and warehouses;
cleaning offices and large equipment; and inventory management.
AI can also be used to predict human behavior, which helps improve game design and
testing.
Facebook
Facebook uses Artificial Intelligence together with a tool called DeepText. With this tool,
Facebook can understand conversations better and automatically translate posts between
different languages.
Twitter
AI is used by Twitter for fraud detection and for removing propaganda and hateful content.
Twitter also uses AI to recommend tweets that users might enjoy, based on the type of tweets
they engage with.
Using AI, marketers can deliver highly targeted and personalized ads with the help of behavioral
analysis and pattern recognition in machine learning. AI also helps with retargeting audiences at
the right time to ensure better results and reduce feelings of distrust and annoyance.
AI can help with content marketing in a way that matches the brand's style and voice. It can be
used to handle routine tasks such as performance tracking, campaign reporting, and much more.
Chatbots powered by AI, Natural Language Processing, Natural Language Generation, and
Natural Language Understanding can analyze the user's language and respond in the ways
humans do.
AI can provide users with real-time personalizations based on their behavior and can be used to
edit and optimize marketing campaigns to fit a local market's needs.
Services offered through AI can help to significantly improve a wide range of financial services.
For example, customers looking for help with wealth management solutions can easily get the
information they need through SMS text messaging or online chat, all AI-powered. Artificial
Intelligence can also detect changes in transaction patterns and other potential red flags that can
signify fraud, which humans can easily miss, saving businesses and individuals from
significant loss. Aside from fraud detection and task automation, AI can also better predict and
assess loan risks.
15. AI in Astronomy
If there's one concept that has taken this beautiful world of technology by storm, it
has to be AI (Artificial Intelligence), without question. AI has seen a
wide range of applications throughout the years, including healthcare, robotics, eCommerce, and
even finance.
Astronomy, on the other hand, is a largely unexplored topic that is just as intriguing and thrilling
as the rest. When it comes to astronomy, one of the most difficult problems is analyzing the data.
As a result, astronomers are turning to machine learning and Artificial Intelligence (AI) to create
new tools. Having said that, consider how Artificial Intelligence has altered astronomy and is
meeting the demands of astronomers.
Astronomers study the dips in a star's light curve to characterize an exoplanet's orbit. They
then identify the planet's many parameters, such as its mass, size, and distance from
its star, to mention a few. AI proves especially valuable here: using AI's
time-series analysis capabilities, it is feasible to analyze the light curve as a sequence and
identify planetary signals with up to 96% accuracy.
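The core of transit detection is spotting small, repeated dips in an otherwise flat light curve. The toy sketch below builds a simulated curve (all values invented, not real telescope data) and flags samples that fall noticeably below the median baseline; real pipelines use phase folding and learned models rather than a fixed threshold.

```python
# Simulated light curve: constant brightness with two short transits.
flux = [1.0] * 50
for start in (10, 30):              # two transit events
    for i in range(start, start + 3):
        flux[i] = 0.97              # 3% brightness dip during transit

def find_transits(flux, depth=0.02):
    """Return indices where brightness drops `depth` below the median."""
    baseline = sorted(flux)[len(flux) // 2]   # median as baseline
    return [i for i, f in enumerate(flux) if baseline - f > depth]
```

On this simulated curve, `find_transits(flux)` recovers exactly the samples inside the two dips.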
Finding the signals of the universe's most catastrophic events is critical for astronomers. When
massive objects such as black holes or neutron stars collide with each other, they cause ripples
in space-time, which can be identified by monitoring feeble signals on Earth. The
gravitational-wave detector collaborations LIGO and Virgo have performed admirably in this
regard, both successfully recognizing signals using machine learning. Astronomers now receive
notifications, allowing them to point their telescopes in the appropriate direction.
Data security, which is one of the most important assets of any tech-oriented firm, is one of the
most prevalent and critical applications of AI. With confidential data ranging from consumer
data (such as credit card information) to organizational secrets kept online, data security is vital
for any institution to satisfy both legal and operational duties. This work is now as difficult as it
is vital, and many businesses deploy AI-based security solutions to keep their data out of the
wrong hands.
Because the world is smarter and more connected than ever before, the function of Artificial
Intelligence in business is critical today. According to several estimates, cyberattacks will get
more tenacious over time, and security teams will need to rely on AI solutions to keep systems
and data under control.
A human may not be able to recognize all of the hazards that a business confronts. Every year,
hackers launch hundreds of millions of assaults for a variety of reasons. Unknown threats can
cause severe network damage. Worse, they can have an impact before you recognize, identify,
and prevent them.
As attackers test different tactics ranging from simple malware to sophisticated, targeted
attacks, contemporary solutions should be used to stop them. Artificial Intelligence has proven
to be one of the most effective security solutions for mapping and preventing unexpected threats
from wreaking havoc on a corporation.
Flaw Identification
AI assists in detecting buffer overflows, which occur when a program writes more data to a
buffer than it can hold. Such faults, including those triggered by human error that corrupt
crucial data, can be observed by AI and detected in real time, preventing future dangers.
AI can precisely discover cybersecurity weaknesses, faults, and other problems using Machine
Learning. Machine Learning also assists AI in identifying questionable data sent by any
application. Malware and viruses used by hackers to gain access to systems and steal data
typically exploit such programming flaws.
Threat Prevention
Artificial Intelligence technology is constantly being developed by cybersecurity vendors. In its
advanced versions, AI is designed to detect flaws in a system or even in an update, and it can
instantly block anyone attempting to exploit those flaws. AI is an outstanding tool for
preventing threats before they occur: it may deploy additional firewalls and rectify the code
faults that lead to danger.
Responding to Threats
This is what happens after a threat has entered the system. As previously explained, AI is
used to detect unusual behavior and build a profile of viruses or malware. AI then takes
appropriate action against the virus or malware. The response consists mostly of removing
the infection, repairing the fault, and assessing and repairing the damage done. Finally, AI
takes proper preventive actions to ensure that such an incident does not happen again.
Intelligent transportation systems have the potential to become one of the most effective methods
to improve the quality of life for people all around the world. There are multiple instances of
similar systems in use in various sectors.
The lead vehicle in a truck platoon is steered by a human driver, while the human drivers in
the other trucks ride passively, taking the wheel only in exceptionally dangerous or difficult
situations.
Because all of the trucks in the platoon are linked via a network, they travel in formation and
mirror the actions of the human driver in the lead vehicle in real time. So, if the lead
driver comes to a complete stop, all of the vehicles following do as well.
Traffic Management
Clogged city streets are a key impediment to urban transportation all around the world. Cities
throughout the world have enlarged highways, erected bridges, and established other modes of
transportation such as train travel, yet the traffic problem persists. However, AI advancements in
traffic management provide a genuine promise of changing the situation.
Intelligent traffic management may be used to enforce traffic regulations and promote road
safety. For example, Alibaba's City Brain initiative in China uses AI technologies such as
predictive analysis, big data analysis, and a visual search engine in order to track road networks
in real-time and reduce congestion.
Building a city requires an efficient transportation system, and AI-based traffic management
technologies are powering next-generation smart cities.
Ride-Sharing
Platforms like Uber and OLA leverage AI to improve user experiences by connecting riders and
drivers, improving user communication and messaging, and optimizing decision-making. For
example, Uber has its own proprietary ML-as-a-service platform called Michelangelo that can
anticipate supply and demand, identify trip abnormalities like wrecks, and estimate arrival
timings.
Route Planning
AI-enabled route planning using predictive analytics may help both businesses and people. Ride-
sharing services already achieve this by analyzing numerous real-world parameters to optimize
route planning.
AI-enabled route planning is a terrific approach for businesses, particularly logistics and
shipping industries, to construct a more efficient supply network by anticipating road conditions
and optimizing vehicle routes. Predictive analytics in route planning is the intelligent evaluation
by a machine of a number of road usage parameters such as congestion level, road restrictions,
traffic patterns, consumer preferences, and so on.
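A concrete way to fold such parameters into route planning is to scale each road segment's base travel time by a predicted congestion factor and then run a shortest-path search. The sketch below uses Dijkstra's algorithm over a tiny hypothetical road graph; the node names, times, and congestion factors are invented for illustration.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over travel times that already include congestion."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, base, congestion in graph.get(node, []):
            nd = d + base * congestion    # congestion scales the base time
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    path, node = [], goal
    while node != start:                  # walk back along predecessors
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical graph: node -> [(neighbor, base_minutes, congestion_factor)]
roads = {
    "depot": [("highway", 10, 2.5), ("backroad", 18, 1.0)],
    "highway": [("city", 5, 1.2)],
    "backroad": [("city", 6, 1.0)],
}
```

With heavy congestion on the highway, the planner correctly prefers the longer but free-flowing back road. Predictive analytics would supply the congestion factors; the search itself is unchanged.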
Cargo logistics companies, such as vehicle transport services or other general logistics firms,
may use this technology to reduce delivery costs, accelerate delivery times, and better manage
assets and operations.
With Artificial Intelligence driving more applications to the automotive sector, more businesses
are deciding to implement Artificial Intelligence and machine learning models in production.
Manufacturing
Infusing AI into the production experience allows automakers to benefit from smarter factories,
boosting productivity and lowering costs. AI may be utilized in automobile assembly, supply
chain optimization, employing robots on the manufacturing floor, improving performance using
sensors, designing cars, and in post-production activities.
Supply Chain
The automobile sector has been beset by supply chain interruptions and challenges in 2021 and
2022. AI can also assist in this regard. AI helps firms identify the hurdles they will face in the
future by forecasting and replenishing supply chains as needed. AI may also assist with routing
difficulties, volume forecasts, and other concerns.
Inspections
The procedure of inspecting an automobile by a rental agency, insurance provider, or even a
garage is very subjective and manual. With AI, car inspection may go digital, with modern
technology being able to analyze a vehicle, identify where the flaws are, and produce a thorough
status report.
Quality Control
Everyone desires a premium vehicle and experience. Wouldn't you prefer to know if something
is wrong with your automobile before it breaks down? In this application, AI enables extremely
accurate predictive monitoring, fracture detection, and other functions.
Language Models
What are language models?
Language models, often abbreviated as LM, are a crucial component of natural language
processing (NLP) in the field of artificial intelligence. They are statistical models that analyze
and predict sequences of words to facilitate communication between humans and machines. By
understanding the structure and context of human language, language models enable AI systems
to comprehend and generate human-like text, ultimately enhancing the quality and efficiency of
interactions.
The definition of language models in the AI context underscores their ability to provide a
framework for understanding and generating human language, allowing for seamless integration
and interaction between individuals and AI-powered systems. These models are designed to
capture the nuances and intricacies of language, facilitating tasks such as speech recognition,
language translation, and text generation in a manner that closely resembles human
communication.
The evolution of language models has been shaped by advancements in computational power,
the accumulation of vast linguistic datasets, and innovations in neural network architectures. This
collective progress has propelled language models into the forefront of AI, significantly
enhancing their capabilities in understanding and processing human language with incredible
precision.
The pivotal role of language models in driving AI advancements is evident in their impact on
natural language processing, machine translation, and text generation. Moreover, language
models have facilitated the development of sophisticated chatbots and virtual agents, which are
increasingly adept at understanding and responding to human queries and commands.
Working mechanism
The working mechanism of language models revolves around their ability to analyze and predict
sequences of words based on existing linguistic patterns and structures. This process involves
incorporating contextual information from surrounding words to generate coherent and
contextually relevant text. Key features of language models include their capacity to capture
long-range dependencies, understand semantic nuances, and adapt to diverse linguistic styles and
genres.
Decoding the mechanics of language models reveals their proficiency in handling complex
language tasks such as language modeling, text generation, and sentiment analysis. Whether it's
autocomplete suggestions in search engines or conversational interfaces in smart devices,
language models underpin the seamless integration of AI into various facets of daily life.
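The prediction step at the heart of this mechanism can be illustrated with the simplest possible language model, a bigram model: count which word follows which, then pick the most frequent continuation. The toy corpus below is invented; modern models replace these counts with neural networks, but the "predict the next word from context" objective is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, tokenized by whitespace.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def predict_next(word):
    """Most likely continuation given the previous word."""
    following = bigrams[word]
    return following.most_common(1)[0][0] if following else None
```

Here `predict_next("sat")` yields "on", exactly the autocomplete behavior the paragraph above describes, just at a miniature scale.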
Speech recognition. This involves a machine being able to process speech audio. Voice
assistants such as Siri and Alexa commonly use speech recognition.
Text generation. This application uses prediction to generate coherent and contextually
relevant text. It has applications in creative writing, content generation, and
summarization of structured data and other text.
Chatbots. These bots engage in humanlike conversations with users as well as generate
accurate responses to questions. Chatbots are used in virtual assistants, customer support
applications and information retrieval systems.
Machine translation. This involves the translation of one language to another by a
machine. Google Translate and Microsoft Translator are two programs that do this.
Another is SDL Government, which is used to translate foreign social media feeds in real
time for the U.S. government.
Parts-of-speech tagging. This use involves the markup and categorization of words by
certain grammatical characteristics. This model is used in the study of linguistics. It was
first and perhaps most famously used in the study of the Brown Corpus, a body of
random English prose that was designed to be studied by computers. This corpus has
been used to train several important language models, including one used by Google to
improve search quality.
Parsing. This use involves analysis of any string of data or sentence that conforms to
formal grammar and syntax rules. In language modeling, this can take the form of
sentence diagrams that depict each word's relationship to the others. Spell-checking
applications use language modeling and parsing.
Optical character recognition. This application involves the use of a machine to convert
images of text into machine-encoded text. The image can be a scanned document or
document photo, or a photo with text somewhere in it -- on a sign, for example. Optical
character recognition is often used in data entry when processing old paper records that
need to be digitized. It can also be used to analyze and identify handwriting samples.
Information retrieval. This approach involves searching in a document for information,
searching for documents in general and searching for metadata that corresponds to a
document. Web browsers are the most common information retrieval applications.
Observed data analysis. These language models analyze observed data such as sensor
data, telemetric data and data from experiments.
Sentiment analysis. This application involves determining the sentiment behind a given
phrase. Specifically, sentiment analysis is used to understand opinions and attitudes
expressed in a text. Businesses use it to analyze unstructured data, such as product
reviews and general posts about their product, as well as analyze internal data such as
employee surveys and customer support chats. Some services that provide sentiment
analysis tools are Repustate and HubSpot's Service Hub. Google's NLP tool Bert is also
used for sentiment analysis.
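To make the sentiment-analysis idea concrete, here is a minimal lexicon-based scorer. The word weights are made-up assumptions for illustration; tools such as BERT learn sentiment from data rather than from a hand-written word list.

```python
# Hand-made word weights (illustrative assumptions, not a real lexicon).
LEXICON = {"great": 2, "good": 1, "love": 2, "bad": -1, "terrible": -2}

def sentiment(text):
    """Sum word scores and map the total to a sentiment label."""
    score = sum(LEXICON.get(w, 0) for w in text.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

For example, `sentiment("I love this great product")` is positive while `sentiment("terrible battery")` is negative; negation and sarcasm are exactly the cases where such simple lexicons fail and learned models take over.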
Information Retrieval
Information Retrieval (IR) deals with the organization, storage, retrieval, and evaluation of
information from document repositories, particularly textual information. It is the activity of
obtaining material, usually text of an unstructured nature, that satisfies an information need
from within large collections stored on computers. For example, information retrieval takes
place when a user enters a query into the system.
Not only librarians, professional searchers, etc engage themselves in the activity of information
retrieval but nowadays hundreds of millions of people engage in IR every day when they use
web search engines. Information Retrieval is believed to be the dominant form of Information
access. The IR system assists the users in finding the information they require but it does not
explicitly return the answers to the question. It notifies regarding the existence and location of
documents that might consist of the required information. Information retrieval also extends
support to users in browsing or filtering document collection or processing a set of retrieved
documents. The system searches over billions of documents stored on millions of computers.
Email programs, for example, provide spam filters and manual or automatic means of classifying
mail so that messages can be placed directly into particular folders.
An IR system has the ability to represent, store, organize, and access information items. A set
of keywords is required to search. Keywords are what people search for in search engines, and
they summarize the description of the information.
What is an IR Model?
An Information Retrieval (IR) model selects and ranks the document that is required by the user
or the user has asked for in the form of a query. The documents and the queries are represented
in a similar manner, so that document selection and ranking can be formalized by a matching
function that returns a retrieval status value (RSV) for each document in the collection. Many of
the Information Retrieval systems represent document contents by a set of descriptors, called
terms, belonging to a vocabulary V. One main approach by which an IR model determines the
query-document matching function is the estimation of the probability of the user's relevance
rel for each document d and query q with respect to a set R_q of training documents:
Prob(rel | d, q, R_q).
Components of an Information Retrieval System
Acquisition: In this step, the selection of documents and other objects from various web
resources that consist of text-based documents takes place. The required data is collected
by web crawlers and stored in the database.
Representation: This consists of indexing, which may use free-text terms, a controlled
vocabulary, and both manual and automatic techniques. For example, abstracting involves
summarizing, and bibliographic description covers author, title, sources, date, and
metadata.
File Organization: There are two types of file organization methods. Sequential organization
stores the data document by document, while an inverted file stores, term by term, a list of
records under each term. A combination of both can also be used.
Query: An IR process starts when a user enters a query into the system. Queries are
formal statements of information needs, for example, search strings in web search
engines. In information retrieval, a query does not uniquely identify a single object in the
collection. Instead, several objects may match the query, perhaps with different degrees
of relevancy.
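The inverted file mentioned above is the workhorse of query processing: it maps each term to the documents containing it, so a query becomes a set intersection instead of a scan. Below is a minimal sketch; the three sample documents are invented, and real systems add stemming, ranking, and compressed postings lists.

```python
# Build an inverted index: term -> set of document ids containing it.
docs = {
    1: "information retrieval from large collections",
    2: "data retrieval from a database",
    3: "large scale web search",
}

index = {}
for doc_id, text in docs.items():
    for term in text.lower().split():
        index.setdefault(term, set()).add(doc_id)

def search(*terms):
    """AND query: return ids of documents containing every term."""
    postings = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*postings)) if postings else []
```

For instance, `search("retrieval")` matches documents 1 and 2, while `search("large", "retrieval")` narrows the result to document 1 by intersecting the two postings sets.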
Information Retrieval vs. Data Retrieval:
Information Retrieval is a software process that deals with the organization, storage,
retrieval, and evaluation of information from document repositories, particularly textual
information. Data retrieval deals with obtaining data from a database management system such
as an ODBMS; it is the process of identifying and retrieving data from the database based on
the query provided by the user or application.
In information retrieval, small errors are likely to go unnoticed. In data retrieval, a single
erroneous object means total failure.
Information retrieval does not provide an explicit answer to the user of the system, whereas
data retrieval provides exact answers to the user of the database system.
The User Task: The information need first has to be translated into a query by the user. In an
information retrieval system, the query is a set of words that convey the semantics of the
required information, whereas in a data retrieval system, a query expression conveys the
constraints that must be satisfied by the objects. Example: a user who starts searching for one
thing but ends up looking at related items is browsing rather than searching. The above figure
shows the interaction of the user through different tasks.
Logical View of the Documents: Documents have long been represented through a set of index
terms or keywords. Modern computers can represent documents by their full set of words, and
text operations can then reduce this full-text representation back to a set of representative
keywords, for example by eliminating stopwords such as articles and connectives. These text
operations reduce the complexity of the document representation from full text to a set of
index terms.
Past, Present, and Future of Information Retrieval
1. Early Developments: As there was an increase in the need for a lot of information, it became
necessary to build data structures to get faster access. The index is the data structure for faster
retrieval of information. Over centuries manual categorization of hierarchies was done for
indexes.
2. Information Retrieval in Libraries: Libraries were the first to adopt IR systems for
information retrieval. The first generation consisted of the automation of previous
technologies, with search based on author name and title. The second generation added searching
by subject heading, keywords, etc. The third generation introduced graphical interfaces,
electronic forms, hypertext features, etc.
3. The Web and Digital Libraries: The web is cheaper than many earlier sources of information,
provides greater access through digital communication networks, and gives anyone free access to
publish on a larger medium.
Information Extraction:
Information Extraction's main goal is to find meaningful information in a document set. IE is
one type of IR: it automatically derives structured information from a set of unstructured
documents or a corpus. IE focuses on texts that can be read and written by humans and processes
them with NLP (natural language processing). An information retrieval system, by contrast,
finds documents that are relevant to the user's information need and stored on a computer; it
returns text documents (in unstructured form) from a large corpus.
The information extraction system used in online text extraction should come at a low cost. It
needs to have flexibility in development and must have an easy conversion to new domains.
Consider natural language processing as an example: using information extraction, we want to
make a machine capable of extracting structured information from documents. The importance of
an information extraction system is driven by the growing amount of information available in
unstructured form (data without metadata), such as on the Internet. This knowledge can be made
more accessible by transforming it into relational form, or by marking it up with XML tags.
We try to use automated learning systems in information extraction wherever possible. This type
of IE system decreases faults in extraction and reduces dependence on a particular domain by
diminishing the requirement for supervision. IE of structured information relies on the basic
content management principle: "Content must be in context to have value". Information Extraction
is more difficult than Information Retrieval.
The main goal of IE is to extract meaningful information from corpora of documents that might be
in different languages. Here, meaningful information includes types of information such as
events, facts, components, or relations. These facts are then usually stored automatically into
a database,
which may then be used to analyze the data for trends, to give a natural language summary, or
simply to serve for online access. More formally, Information Extraction gets facts out of
documents while Information Retrieval gets sets of relevant documents.
Information Retrieval vs. Information Extraction:
The goal of IR is to find documents that are relevant to the user's information need; the goal
of IE is to extract pre-specified features from documents or display information.
IR is used in many search engines (Google is the best-known IR system for the web); IE is used
in database systems to enter extracted features automatically.
IR typically uses a bag-of-words model of the source text; IE is typically based on some form of
semantic analysis of the source text.
IR mostly draws on information theory, probability, and statistics; IE emerged from research
into rule-based systems.
Natural Language Processing (NLP) is a subfield of artificial intelligence that deals with the
interaction between computers and humans in natural language. It involves the use of
computational techniques to process and analyze natural language data, such as text and speech,
with the goal of understanding the meaning behind the language.
NLP is used in a wide range of applications, including machine translation, sentiment analysis,
speech recognition, chatbots, and text classification. Some common techniques used in NLP
include:
Part-of-speech tagging: the process of labeling each word in a sentence with its
grammatical part of speech.
Named entity recognition: the process of identifying and categorizing named entities,
such as people, places, and organizations, in text.
Sentiment analysis: the process of determining the sentiment of a piece of text, such as
whether it is positive, negative, or neutral.
Machine translation: the process of automatically translating text from one language to
another.
Text classification: the process of categorizing text into predefined categories or topics.
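The text-classification task above can be sketched with a bag-of-words nearest-centroid approach: build a word-count profile per category from labeled examples, then assign new text to the category whose profile it overlaps most. The two categories and their training sentences below are invented for illustration; real systems use TF-IDF weighting or neural encoders.

```python
from collections import Counter

# Tiny labeled training set (made-up examples).
training = {
    "sports": ["the team won the match", "great goal in the final game"],
    "tech":   ["new phone with faster chip", "software update fixes bugs"],
}

# One word-count "centroid" per category.
centroids = {label: Counter(w for doc in docs for w in doc.split())
             for label, docs in training.items()}

def classify(text):
    """Assign the category whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in centroids.items()}
    return max(scores, key=scores.get)
```

With this data, `classify("the final match")` lands on "sports" and `classify("phone software update")` on "tech", purely from word overlap.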
Recent advances in deep learning, particularly in the area of neural networks, have led to
significant improvements in the performance of NLP systems. Deep learning techniques such as
Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have been
applied to tasks such as sentiment analysis and machine translation, achieving state-of-the-art
results.
Overall, NLP is a rapidly evolving field that has the potential to revolutionize the way we
interact with computers and the world around us.
NLP powers many applications that use language, such as text translation, voice recognition, text
summarization, and chatbots. You may have used some of these applications yourself, such as
voice-operated GPS systems, digital assistants, speech-to-text software, and customer service
bots. NLP also helps businesses improve their efficiency, productivity, and performance by
simplifying complex tasks that involve language.
Human language is filled with ambiguities that make it incredibly difficult to write software that
accurately determines the intended meaning of text or voice data. Homonyms, homophones,
sarcasm, idioms, metaphors, grammar and usage exceptions, variations in sentence structure:
these are just a few of the irregularities of human language that take humans years to learn, but that
programmers must teach natural language-driven applications to recognize and understand
accurately from the start, if those applications are going to be useful.
NLP Tasks
Several NLP tasks break down human text and voice data in ways that help the computer make sense of what it is ingesting; the tasks listed above, such as sentiment analysis and machine translation, are examples.
Natural Language Understanding (NLU) and Natural Language Generation (NLG) are the two key aspects of how NLP systems work. These two aspects are very different from each other and are achieved using different methods.
Individuals working in NLP may have a background in computer science, linguistics, or a related field. They may also have experience with programming languages such as Python and C++, and be familiar with various NLP libraries and frameworks such as NLTK, spaCy, and OpenNLP.
Speech Recognition:
First, the computer must take natural language and convert it into machine-readable language.
This is what speech recognition or speech-to-text does. This is the first step of NLU.
Hidden Markov Models (HMMs) have long been used in voice recognition systems. These are statistical models that use mathematical calculations to determine what you said in order to convert your speech to text.
HMMs do this by listening to you talk, breaking it down into small units (typically 10-20
milliseconds), and comparing it to pre-recorded speech to figure out which phoneme you
uttered in each unit (a phoneme is the smallest unit of speech). The program then
examines the sequence of phonemes and uses statistical analysis to determine the most
likely words and sentences you were speaking.
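The "statistical analysis" step described above is typically the Viterbi algorithm: given per-frame acoustic scores, it finds the most probable phoneme sequence under the HMM. The sketch below uses three phonemes for the word "cat" and probabilities that are entirely invented for illustration; real recognizers work with thousands of states and real acoustic models:

```python
# Toy Viterbi decoding for an HMM whose hidden states are phonemes and whose
# observations are short acoustic frames.
def viterbi(states, start_p, trans_p, emit_p, observations):
    # v[t][s] = probability of the best path ending in state s at frame t
    v = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        v.append({})
        back.append({})
        for s in states:
            prev, p = max(
                ((r, v[t - 1][r] * trans_p[r][s]) for r in states),
                key=lambda x: x[1],
            )
            v[t][s] = p * emit_p[s][observations[t]]
            back[t][s] = prev
    # Trace the best path backwards from the most likely final state.
    last = max(v[-1], key=v[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ["k", "ae", "t"]
start_p = {"k": 0.8, "ae": 0.1, "t": 0.1}
trans_p = {
    "k":  {"k": 0.2,  "ae": 0.7,  "t": 0.1},
    "ae": {"k": 0.05, "ae": 0.25, "t": 0.7},
    "t":  {"k": 0.1,  "ae": 0.1,  "t": 0.8},
}
emit_p = {
    "k":  {"f_k": 0.8, "f_ae": 0.1, "f_t": 0.1},
    "ae": {"f_k": 0.1, "f_ae": 0.8, "f_t": 0.1},
    "t":  {"f_k": 0.1, "f_ae": 0.1, "f_t": 0.8},
}
frames = ["f_k", "f_ae", "f_t"]
best = viterbi(states, start_p, trans_p, emit_p, frames)
print(best)  # ['k', 'ae', 't']
```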
Natural Language Understanding (NLU):
The next and hardest step of NLP is the understanding part.
First, the computer must comprehend the meaning of each word. It tries to figure out
whether the word is a noun or a verb, whether it’s in the past or present tense, and so on.
This is called Part-of-Speech tagging (POS).
A lexicon (a vocabulary) and a set of grammatical rules are also built into NLP systems.
By the conclusion of the process, the machine should be able to grasp what you said.
There are several challenges in accomplishing this when considering problems such as
words having several meanings (polysemy) or different words having similar meanings
(synonymy), but developers encode rules into their NLU systems and train them to learn
to apply the rules correctly.
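A minimal lexicon-plus-rules tagger sketches the Part-of-Speech tagging idea described above. The tiny lexicon and suffix rules here are invented for illustration; real systems such as NLTK or spaCy use statistical or neural models trained on large corpora:

```python
# Toy POS tagging: lexicon lookup with crude suffix fallback rules.
LEXICON = {
    "the": "DET", "a": "DET", "dog": "NOUN", "cat": "NOUN",
    "runs": "VERB", "sees": "VERB", "quickly": "ADV",
}

def pos_tag(sentence: str):
    tags = []
    for word in sentence.lower().split():
        if word in LEXICON:
            tags.append((word, LEXICON[word]))
        elif word.endswith("ly"):   # crude rule: -ly words are often adverbs
            tags.append((word, "ADV"))
        elif word.endswith("s"):    # crude rule: -s often marks a 3rd-person verb
            tags.append((word, "VERB"))
        else:
            tags.append((word, "NOUN"))
    return tags

print(pos_tag("the dog runs quickly"))
```

The suffix rules misfire on plural nouns like "birds", which is exactly the kind of ambiguity (polysemy, irregular forms) that makes NLU the hardest part of NLP.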
Natural Language Generation (NLG):
NLG is much simpler to accomplish. It converts a computer's machine-readable language into text, and can also convert that text into audible speech using text-to-speech technology.
First, the NLP system identifies what data should be converted to text. If you asked the
computer a question about the weather, it most likely did an online search to find your
answer, and from there it decides that the temperature, wind, and humidity are the factors
that should be read aloud to you.
Then, it organizes the structure of how it is going to say it. This is like NLU in reverse: an NLG system can construct full sentences using a lexicon and a set of grammar rules.
Finally, text-to-speech takes over. The text-to-speech engine uses a prosody model to
evaluate the text and identify breaks, duration, and pitch. The engine then combines all
the recorded phonemes into one cohesive string of speech using a speech database.
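The first two NLG steps can be sketched with a simple template. The field names (temperature, wind, humidity) mirror the weather example above; real NLG systems build sentences from grammar rules rather than a fixed template:

```python
# Template-based NLG: turn structured weather data into a readable sentence,
# mirroring the temperature/wind/humidity example above.
def weather_report(data: dict) -> str:
    return (
        f"It is currently {data['temperature']} degrees "
        f"with {data['wind']} winds and {data['humidity']}% humidity."
    )

report = weather_report({"temperature": 22, "wind": "light", "humidity": 60})
print(report)
```

A text-to-speech engine would then take this string and apply its prosody model, as described above.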
Because the field evolves quickly, it is important for individuals working in NLP to stay up to date with the latest developments and advancements.
Spam Filters: These look at the text in all the emails you receive and try to figure out what it means, in order to decide whether or not an email is spam.
Algorithmic Trading: NLP supports algorithmic trading, which is used for predicting stock market conditions. The technology examines news headlines about companies and stocks and attempts to comprehend their meaning in order to determine whether to buy, sell, or hold certain stocks.
Question Answering: NLP can be seen in action in Google Search or Siri. A major use of NLP is to make search engines understand the meaning of what we are asking and to generate natural language in return to give us the answers.
Summarizing Information: On the internet there is a great deal of information, much of it in the form of long documents or articles. NLP is used to decipher the meaning of the data and then provide shorter summaries so that humans can comprehend it more quickly.
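A naive form of this summarization can be sketched as extractive selection: score each sentence by the frequency of its words across the document and keep the top-scoring ones. The example text is invented; real summarizers use far more sophisticated (often neural) models:

```python
# Naive extractive summarization: score sentences by word frequency and
# keep the top-scoring ones, preserving their original order.
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w for s in sentences for w in s.lower().split())
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in s.lower().split()),
        reverse=True,
    )
    keep = set(scored[:n_sentences])
    return ". ".join(s for s in sentences if s in keep) + "."

text = (
    "NLP helps computers understand language. "
    "Computers process language data. "
    "The weather is nice today."
)
summary = summarize(text, 1)
print(summary)
```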
Machine Translation in AI
Machine translation of languages refers to the use of artificial intelligence (AI) and machine
learning algorithms to automatically translate text or speech from one language to another. This
technology has been developed over the years and has become increasingly sophisticated, with
the ability to produce accurate translations across a wide range of languages.
There have been three primary uses of machine translation in the past:
Rough translation, such as that given by free internet services, conveys the "gist" of a foreign statement or document but is riddled with inaccuracies.
Pre-edited translation is used by companies to publish documentation and sales materials in several languages. The original source content is written in a limited language that makes machine translation easier, and the outputs are often edited by a person to rectify any flaws.
Restricted-source translation is totally automated, but only for highly stereotyped language such as a weather report.
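A word-for-word dictionary lookup illustrates why rough translation conveys only the gist: it ignores grammar, word order, and words with several meanings. The tiny English-to-French dictionary below is invented for illustration:

```python
# Word-for-word "rough translation": conveys the gist, but no grammar handling.
EN_TO_FR = {"the": "le", "cat": "chat", "eats": "mange", "fish": "poisson"}

def rough_translate(sentence: str) -> str:
    # Unknown words are passed through untranslated.
    return " ".join(EN_TO_FR.get(w, w) for w in sentence.lower().split())

print(rough_translate("the cat eats the fish"))  # le chat mange le poisson
```

Modern neural machine translation models instead learn to map whole sentences between languages, which is how they handle word order and context.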
Language learning: Machine translation can also be a valuable tool for language learners, helping them to understand the meaning of unfamiliar words and phrases and to improve their language skills.
Perception and Action in Robotics
Perception is the process of acquiring, selecting, interpreting, and organizing sensory information captured from the real world.
For example, human beings have sensory receptors for touch, taste, smell, sight, and hearing. The information received from these receptors is transmitted to the brain, which organizes it.
Based on the received information, actions are taken by interacting with the environment to manipulate and navigate objects.
Perception and action are very important concepts in the field of robotics. The following figures show a complete autonomous robot.
There is one important difference between an artificial intelligence program and a robot: the AI program performs in a computer-simulated environment, while the robot performs in the physical world.
For example:
In chess, an AI program can make a move by searching different nodes, but it has no facility to touch or sense the physical world.
A chess-playing robot, however, can make a move and grasp the pieces by interacting with the physical world.
Robotics Planning
Definition: Robotics planning involves the process of generating a sequence of actions for a
robot to achieve a specific goal while navigating its environment and considering various
constraints and uncertainties.
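A minimal instance of this definition is path planning on a grid: generate a sequence of moves from a start cell to a goal cell while avoiding obstacle cells. The breadth-first search sketch below finds a shortest action sequence; real robot planners must additionally handle uncertainty, continuous space, and kinematic constraints:

```python
# Minimal grid planner: breadth-first search for a shortest sequence of
# moves from start to goal, avoiding obstacle cells (marked 1).
from collections import deque

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [name]))
    return None  # no action sequence reaches the goal

grid = [
    [0, 0, 0],
    [1, 1, 0],  # 1 = obstacle
    [0, 0, 0],
]
route = plan(grid, (0, 0), (2, 0))
print(route)  # ['right', 'right', 'down', 'down', 'left', 'left']
```

Because BFS explores states in order of path length, the first time the goal is reached the returned action sequence is guaranteed to be shortest.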