
NAME: UMAR BADAMASI IBRAHIM

SYSTEM ID: 2018014249


SEC. (IT) G1
ASSIGNMENT-5

PAST, PRESENT AND FUTURE OF ARTIFICIAL INTELLIGENCE

Classical AI:
The goal of classical AI was to explicitly represent human knowledge using facts and
rules. That is, you program a machine with a set of rules, and when you input a query,
it gives an answer based on those rules. Take the example of a calculator: it is
programmed with the rules of mathematical operations, and when you input 2+3,
it returns 5. Expert systems such as WebMD are another example. WebMD has
a database of medical knowledge and rules compiled by experts from the medical
field, such as doctors. When you input your symptoms, it searches the
database to find what you could possibly be suffering from.
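A minimal sketch of this rule-based style (with invented symptoms and rules, nothing like WebMD's real database) might look like:

```python
# A toy rule-based "expert system": every rule is written explicitly by a human.
# The rules and symptom names here are hypothetical examples, not real medical data.
RULES = {
    frozenset(["fever", "cough"]): "possible flu",
    frozenset(["sneezing", "runny nose"]): "possible common cold",
}

def diagnose(symptoms):
    """Return the first diagnosis whose required symptoms are all present."""
    observed = set(symptoms)
    for required, diagnosis in RULES.items():
        if required <= observed:       # all required symptoms are present
            return diagnosis
    return "no matching rule"          # classical AI fails outside its rules

print(diagnose(["fever", "cough", "headache"]))  # possible flu
print(diagnose(["dizziness"]))                   # no matching rule
```

The key point is that the machine contributes nothing of its own: every piece of knowledge must be codified by a person in advance.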

While classical AI worked very well in certain situations, it still suffered from a major
problem: rules. What if there are lots of rules, or the rules are not well-defined?
In these situations it becomes difficult to codify or specify the rules
explicitly. For example, if you want a machine to identify a cat, what rules would
you specify to make that possible? All these problems led to a mismatch between
the expectations and realities of AI and brought on an AI 'winter'. During this period,
primarily in the 70s and 80s, funding for AI research was cut heavily
and the pace of growth slowed.

Since time immemorial, humans have tried to get machines and contraptions to
do what they do, or at least to make it easier, be it the printing press or
automatons that transport water. It started with having machines do simple tasks and,
as humanity advanced, having them do more complex ones. Classical AI is simply
a continuation of that trend and served well during its time. But with further
advancements we needed a different form of AI, one that could handle even more
complex tasks that classical AI could not.
Modern AI
Modern AI is the latest incarnation of AI, meant to address many of the
weaknesses of classical AI. Unlike classical AI, modern AI doesn't need people to
explicitly specify rules for it; it is capable of learning rules on its own. You specify the
inputs/queries and the expected outputs/answers, and the machine infers and
extracts the rules and patterns required to get from the input to the output. If you
want a machine to identify cats, you give it tons of cat pictures and have it
learn the common features of cats. Then, whenever it sees another cat,
it can match those features and identify it as one.
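This learning-from-examples idea can be sketched with a toy nearest-centroid classifier, where the two-dimensional "features" below are invented stand-ins for real image data:

```python
# Toy "learning from examples": instead of hand-written rules, the program
# derives its decision rule from labeled data (a nearest-centroid classifier).
# The feature vectors below are invented stand-ins for real image features.

def centroid(points):
    """Average a list of equal-length feature vectors component-wise."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def predict(model, x):
    """Assign x to the label whose centroid is closest (squared distance)."""
    return min(model, key=lambda lbl: sum((a - b) ** 2 for a, b in zip(model[lbl], x)))

examples = {
    "cat": [(1.0, 1.2), (0.9, 1.0), (1.1, 0.8)],
    "dog": [(3.0, 3.1), (2.8, 3.3), (3.2, 2.9)],
}
model = train(examples)
print(predict(model, (1.0, 1.1)))  # cat
```

No rule for "what a cat looks like" was ever written down; the decision boundary falls out of the examples themselves.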

Modern AI is, in short, a powerful method of pattern recognition and in many ways
mimics how people learn. The rise of modern AI can be attributed to three factors:

1. Data Volume
2. Statistical models
3. Computing power

Recent years have seen a rise in the amount of data collected from the internet and
from the devices we use. The rise in computing power due to better CPUs and GPUs
means that these vast amounts of data are no longer simply stored in databases but can
now be put to use. This allows new statistical models to be applied to vast amounts of
data using better computing resources, extracting valuable information and patterns.
In short, we can call this machine learning.

Let's take one example: Facebook's friend recommendation algorithm. Say person A
has made many friends over the years. The algorithm picks up on all these
friendships and builds a network, allowing it to find common patterns. In the
network there might be predominant clusters corresponding to the circles he
moves in: one cluster might be people he knows from childhood, another the people
he met at university, another his colleagues, and so on. From these clusters the
algorithm learns to identify common features and then predicts whom else person A
might know, recommending them as friends.
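A drastically simplified version of this idea (not Facebook's actual algorithm) ranks candidate friends by the number of mutual friends they share with person A:

```python
# Toy friend recommendation: suggest non-friends ranked by mutual-friend count.
# The network below is a made-up example, not real Facebook data.
friends = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "E"},
    "D": {"A", "F"},
    "E": {"B", "C"},
    "F": {"D"},
}

def recommend(person, k=2):
    """Rank candidates by how many friends they share with `person`."""
    candidates = {}
    for friend in friends[person]:
        for fof in friends[friend]:                 # friends-of-friends
            if fof != person and fof not in friends[person]:
                candidates[fof] = candidates.get(fof, 0) + 1
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(recommend("A"))  # E ranks first: shares two mutual friends (B and C)
```

Real systems layer many more signals on top (shared workplaces, clusters, interaction frequency), but the mutual-friend count captures the core intuition.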

Here machine learning is used to learn the rules and patterns, and AI is the act of
applying them to recommend friends. Classical AI used hand-written rules to power AI,
and modern AI uses machine learning to power AI. Over time the definition of AI has not
changed, only that which powers it and makes its decisions.

Case Study: Using Natural Language Processing for Healthcare

A leading healthcare organization recently engaged Manceps to help them bring
Machine Learning solutions to their case preparation process.
By using natural language processing and state-of-the-art language models to
integrate their wealth of data into a scalable system, the company was able to
automatically structure complex case files into single-page medical narratives.

Our Client

In most complex medical claims, insurers and patients have the right to request a
medical review of prescribed treatment from an independent reviewer. Our client is
such a reviewer, acting as a mediator between payers and providers for medical
necessity reviews and pre-authorizations. Once our client receives the details of the
case, the organization must then validate (or overturn) the insurer's decision.
Validating treatment plans is just one of many ways that this organization helps at
the intersection of the insurer, physician, and patient. In addition to providing an
appeal mechanism, our client can also provide treatment pre-authorizations as
outsourced by insurance providers.

The Problem

When a case is brought before this healthcare organization, it receives an upload
through their application portal of hundreds — if not thousands — of pertinent
medical document pages that it will need to interpret in order to render a verdict. For
liability purposes, this information tends to be overwhelmingly comprehensive. Not
only will the organization receive information about the case, such as the patient’s
medical records and test results but it will also receive documentation relating to the
insurance company, its policies, and other extenuating details. Further complicating
matters, the information can come in a variety of formats such as printed text,
scanned handwritten notes, images, and/or computer-generated EHR dumps, all of
which can have inaccuracies or otherwise be incomplete. It is the job of our client and
its clinical staff to transform this poorly-organized data into a decision — one that
must be made quickly and accurately.

Our Solution

Manceps built a scalable, containerized data engineering system that structures their
patients' files through Natural Language Processing (NLP) to summarize each case,
drastically reducing the number of hours their in-house medical team had to spend
evaluating case files.

Step 1: Organize the Data

Our first step was to organize the crush of content they receive and convert it into a
normalized, structured data set that our Artificial Intelligence system could
eventually interpret. To do this, we built a service that extracts embedded and
scanned language through digital extraction and OCR (optical character recognition),
respectively, in order to process every word on every page into something that could
be read, tagged, and understood by our AI system. During this process, we also built
an exhaustive set of intelligent validators to guarantee the accuracy of the case
materials, ensuring that all the records were accurately associated with the correct
patient and the case at hand.
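As a sketch of what one such validator might check (the field name and ID format here are hypothetical, not the client's real schema), a rule could verify that every extracted page carries the expected patient identifier:

```python
import re

# Toy record validator: check that each extracted page carries a consistent,
# well-formed patient identifier. Field name and ID format are hypothetical.
ID_PATTERN = re.compile(r"^PT-\d{6}$")

def validate_pages(pages, expected_id):
    """Return the indices of pages whose patient ID is missing, malformed,
    or belongs to a different patient."""
    bad = []
    for i, page in enumerate(pages):
        pid = page.get("patient_id", "")
        if not ID_PATTERN.match(pid) or pid != expected_id:
            bad.append(i)
    return bad

pages = [
    {"patient_id": "PT-000123", "text": "..."},
    {"patient_id": "PT-000999", "text": "..."},   # wrong patient
    {"text": "..."},                              # missing ID
]
print(validate_pages(pages, "PT-000123"))  # [1, 2]
```

A production system would run many such checks (dates, case numbers, document types) and route failures to a human reviewer rather than silently dropping pages.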

Step 2: Add Natural Language Processing Capabilities

The core challenge of any NLP project is that people understand sequences of words
while computers understand sequences of numbers. By translating words, sentences,
and language into numbers — or vectors, as Data Scientists call them — computers
are able to map the relationships words have to one another. These word
relationships are the key to understanding language. Only by associating the
word "leopard" with the words "wild", "cat", and "spots" can humans begin to understand
what a leopard is. It is in this way that Natural Language Processing becomes
Natural Language Understanding. Where humans make these associations
holistically, however, computers make them mathematically, converting
words into a veritable constellation of numerical understanding.
The most important part of any NLP implementation is finding the right language
model for translating text into such vectors, while maintaining a common link
between the two distinct entities.
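The vector idea can be sketched with tiny hand-made "embeddings" and cosine similarity; real language models learn vectors with hundreds of dimensions from huge corpora, so the three-dimensional values below are purely illustrative:

```python
import math

# Hand-crafted 3-d "embeddings" for illustration only; real language models
# learn high-dimensional vectors from data rather than having them assigned.
vectors = {
    "leopard": (0.9, 0.8, 0.1),
    "cat":     (0.8, 0.9, 0.2),
    "spots":   (0.7, 0.6, 0.1),
    "banana":  (0.1, 0.0, 0.9),
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 for related words, near 0.0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# "leopard" sits far closer to "cat" than to "banana" in this space.
print(cosine(vectors["leopard"], vectors["cat"]) >
      cosine(vectors["leopard"], vectors["banana"]))  # True
```

Once words are points in a shared space, "relatedness" becomes a distance computation the machine can perform at scale.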

Fortunately, state-of-the-art pre-trained language models are available to perform
these tasks with deep-learning-powered language processing. Once we had built our
data pipeline to properly extract and stream text, we were able to do two things with
it: provide indexed text for dynamic end-user interaction, and funnel language
embeddings to power our ML models' training and inference.

This enabled our Deep Learning models to understand whether particular sections or
sentences of the case file were relevant to the medical procedures under review.
Relevant information was then sent back and forth across the system to different
stakeholders.

By layering the language model onto our client’s data, our Machine Learning system
could now understand the story of the case file and begin to summarize it.

Step 3: Summarize the Case


Pragmatically speaking, using natural language processing to summarize dense text
requires two steps. The first is to extract relevant information. The second is to
rewrite that extracted information into a coherent narrative. Because the source
material for this project was exceedingly long, Manceps performed multiple passes to
produce the best results.

Pre-Extraction.
First, our system dug through the original case file and extracted the 500 most
important sentences, based on the set priorities.

Extraction.
At the extraction phase, our system then reduced the word count further. It chose 10
of the 500 sentences to serve as the most concise summary possible. In this case, we
tuned the system to prioritize comprehensively capturing all information
contained in the source material, even if that meant repeating information.

Generation.
Finally, once the system had reduced the case file down to a single page, we
used Natural Language Generation tools to rewrite those 10 sentences into a
completely summarized, totally comprehensive narrative.
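The extraction step can be sketched with a classic frequency-based sentence scorer, a heavy simplification of the multi-pass pipeline described above (the sample text and scoring scheme are invented for illustration):

```python
import re
from collections import Counter

def extract_top_sentences(text, k=2):
    """Score each sentence by the average corpus frequency of its words and
    keep the top k, returned in their original order — a crude stand-in for
    the real prioritized extractor."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

doc = ("The patient reported knee pain. The MRI showed a torn meniscus. "
       "The surgeon recommended surgery for the torn meniscus. "
       "Parking was available near the clinic.")
print(extract_top_sentences(doc, k=2))  # keeps the two meniscus sentences
```

Repeated clinical terms push the medically central sentences to the top while incidental detail ("parking") is dropped; a generation model would then rewrite the survivors into flowing prose.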

Results

Our system has already saved this organization thousands of hours. By automatically
organizing and summarizing case file information, its physicians are now able to
quickly understand case elements so they can make informed, medically accurate,
and timely determinations.

For health care companies, the stakes of getting this right couldn’t be higher. If our
system were to miss a crucial part of a patient’s case, the consequences could be
serious. By trusting Manceps to build this mission-critical system for them, this
medical organization could serve more cases, more quickly, at a fraction of the cost.

Hardware in Robotics
The hardware side includes the processor, buses, memory, and peripherals such as
co-processors, sensors, robotic arms, controllers, UARTs, etc.

Computer Vision
Computer vision is a field of artificial intelligence that trains computers to interpret
and understand the visual world. Using digital images from cameras and videos
and deep learning models, machines can accurately identify and classify objects —
and then react to what they “see.”

Early experiments in computer vision took place in the 1950s, using some of
the first neural networks to detect the edges of an object and to sort simple
objects into categories like circles and squares. In the 1970s, the first commercial use
of computer vision interpreted typed or handwritten text using optical character
recognition. This advancement was used to interpret written text for the blind.

As the internet matured in the 1990s, making large sets of images available online for
analysis, facial recognition programs flourished. These growing data sets helped
make it possible for machines to identify specific people in photos and videos. Today,
a number of factors have converged to bring about a renaissance in computer vision.
The effects of these advances on the computer vision field have been astounding.
Accuracy rates for object identification and classification have gone from 50 percent
to 99 percent in less than a decade — and today’s systems are more accurate than
humans at quickly detecting and reacting to visual inputs.
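Edge detection, the earliest of those experiments, can be sketched with a simple gradient filter over a tiny made-up grayscale grid (modern systems learn their filters from data instead):

```python
# Toy horizontal edge detector: a [-1, +1] gradient filter over a tiny
# grayscale image (0 = dark, 9 = bright). The image values are invented.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img):
    """Difference between each pixel and its left neighbour; large magnitudes
    mark vertical edges in the image."""
    return [[abs(row[x] - row[x - 1]) for x in range(1, len(row))] for row in img]

edges = horizontal_edges(image)
print(edges[0])  # [0, 9, 0] -- the edge shows up between columns 1 and 2
```

Stacking many such filters, and letting a network learn their weights, is essentially how modern convolutional vision models locate and classify objects.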

Case Study: Usability-Based Navigation Menu Optimization

Not long ago, we published an article about navigation menu usability testing. Now,
it’s time we looked at how it’s put into practice in navigation menu optimization.

We analyzed the navigation menu performance of a European online timber store in
order to ensure an optimal navigation experience for its customers.

To be able to come up with recommendations on how to improve the store's
navigation menu, we tested 8 hypotheses by conducting tests that show how users
navigate the website. We had actual store customers participate in first-click testing
and tree testing to do this. Here are the hypotheses we tested, how we proved them,
and what implications they had for the existing navigation menu of the store. Learn
the art and science of navigation menu optimization as we walk you through the
steps and turns we took to give the best recommendations for this real-world client.

After testing the above hypotheses, we then came up with a set of recommendations
for the new website layout and what to do with the categories in the navigation
menu. Here’s a summary of what an optimized version of the website would look
like:

1. The left navigation is reduced from 11 categories to 8, and moved to the top of the
page.

2. Categories from the original upper menu are modified thus:

● Implemented as filter options to facilitate product search

● Displayed as category pages at the bottom of the page

3. A search bar is prominently placed at the top of the page.

4. A value proposition with a clear CTA is displayed on the hero banner.

5. Unique selling points are displayed below the banner.

Overall, the website should look cleaner and become easier to navigate.
Ambient Intelligence

Ambient intelligence (AmI) is the element of a pervasive computing environment
that enables it to interact with and respond appropriately to the humans in that
environment.

That capacity is enabled by unobtrusive embedded devices in the environment and
natural user interfaces (NUI), providing some services autonomously in response to
perceived needs and accepting user input through voice, gesture and other non-
interruptive methods.

Popular examples of ambient intelligence include Google Assistant and Amazon
Alexa – devices that automatically respond to a person's voice.

Some elements of an AmI environment:

Embeddedness: Computers are not typically standalone devices in the environment;
rather, many man-made and organic systems have built-in intelligence and
computing abilities. The current Internet of Things (IoT) development, which
involves outfitting just about any type of object imaginable with computing ability
and connectivity, is leading us toward embedded computing.

Transparency: Transparency, in the context of transparent computing, essentially
means "invisibility." People interact with embedded systems naturally -- asking a
question rather than, for example, picking up a tablet and typing a search query.

Context awareness: This component is the ability of a system or system
component to gather information about its environment at any given time and adapt
behaviors accordingly. Contextual or context-aware computing uses software and
hardware to automatically collect and analyze data to guide responses. Potential
systems for data collection and response include sensors, emotion analytics and
affective computing software.
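Such context-aware behavior can be sketched as a small mapping from sensor readings to actions (the sensor names, thresholds, and actions below are hypothetical illustrations, not any real product's logic):

```python
# Toy context-aware controller: read ambient sensor values and choose actions.
# Sensor names, thresholds, and actions are invented for illustration.
def respond(context):
    """Map a dict of sensor readings to a list of ambient actions."""
    actions = []
    if context.get("light_lux", 1000) < 50 and context.get("occupied", False):
        actions.append("turn on lights")     # dark room with someone in it
    if context.get("temperature_c", 21) < 18:
        actions.append("raise thermostat")   # room is uncomfortably cold
    if not context.get("occupied", False):
        actions.append("enter power-saving mode")
    return actions

print(respond({"light_lux": 20, "occupied": True, "temperature_c": 16}))
# ['turn on lights', 'raise thermostat']
```

A real AmI system would replace these fixed thresholds with the machine-learning capacity described below, adapting its responses as it observes the occupants over time.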

Machine learning: This capacity makes it possible for devices in the environment
to learn from experience, extrapolate from current data and expand on their
knowledge and capabilities autonomously.

AmI, the Internet of Things, artificial intelligence (AI), robotics, nanotechnology and
other developing trends are transforming the world to such an extent that the current
scenario is sometimes called the fourth industrial revolution.
