2018014249.umar Badamasi I. Ass 5
Classical AI:
The goal of classical AI was to explicitly represent human knowledge using facts and
rules. That is, you program a machine with a set of rules, and when you input a query
it returns an answer based on those rules. Take the example of a calculator: it is
programmed with the rules of arithmetic, and when you input 2 + 3 it returns 5.
Expert systems such as WebMD are another example. WebMD has a database of
medical knowledge and rules compiled by medical experts such as doctors. When you
input your symptoms, it searches the database to find what you could possibly be
suffering from.
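As a toy illustration of this kind of rule-based reasoning, the sketch below hard-codes a few hypothetical symptom rules. The conditions and symptoms are invented for illustration, not medical knowledge:

```python
# A minimal rule-based system in the classical-AI style: every piece of
# knowledge is an explicit, hand-written rule. Rules here are hypothetical.
RULES = {
    frozenset({"fever", "cough", "sore throat"}): "flu",
    frozenset({"sneezing", "runny nose"}): "common cold",
}

def diagnose(symptoms):
    """Return the first condition whose required symptoms are all present."""
    observed = set(symptoms)
    for required, condition in RULES.items():
        if required <= observed:  # every required symptom was observed
            return condition
    return "unknown"

print(diagnose(["fever", "cough", "sore throat"]))  # flu
print(diagnose(["headache"]))                       # unknown
```

The system can only answer queries its rules anticipate; anything outside them falls through to "unknown", which is exactly the brittleness discussed next.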
While classical AI worked very well in certain situations, it suffered from a major
problem: the rules themselves. What if there are very many rules, or the rules are
not well-defined? In these situations it becomes difficult to codify or specify the
rules explicitly. For example, if you want a machine to identify a cat, what rules
would you specify to do so? These problems led to a mismatch between the
expectations and realities of AI and brought on an AI 'winter'. During this period,
primarily in the 1970s and 1980s, funding for AI research was cut heavily and the
pace of progress slowed.
From time immemorial, humans have tried to get machines and contraptions to do
what they do, or at least to make that work easier, whether a printing press or an
automaton for transporting water. It started with machines doing simple tasks, and
as humanity advanced we had them do ever more complex ones. Classical AI is simply
a continuation of that trend, and it served well during its time. But with further
advancement we needed a different form of AI, one that could handle even more
complex tasks than classical AI could.
Modern AI
Modern AI is the latest incarnation of AI, meant to address many of the weaknesses
of classical AI. Unlike classical AI, modern AI doesn't need people to explicitly
specify rules for it; it is capable of learning the rules on its own. You give a
machine the inputs/queries and the expected outputs/answers, and it infers and
extracts the rules and patterns required to get from the input to the output. If you
want a machine to identify cats, you give it a large number of cat pictures and have
it learn the common features of cats. Then, whenever it sees another cat, it can
match those features and identify it as one.
Three factors have enabled this shift:
1. Data volume
2. Statistical models
3. Computing power
Recent years have seen a rise in the amount of data collected from the internet and
from the devices we use. The rise in computing power due to better CPUs and GPUs
means that these vast amounts of data are no longer simply stored in databases but
can now be put to use. New statistical models can be applied to the data, using
better computing resources, to extract valuable information and patterns. In short,
we can call this machine learning.
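As a toy illustration of learning from examples rather than hand-written rules, the sketch below uses a 1-nearest-neighbor classifier. The "cat vs. dog" features and their values are invented purely for illustration:

```python
# A 1-nearest-neighbor classifier: instead of being given rules, it infers
# the input-to-output mapping from labeled examples. Features are made up.
def nearest_neighbor(train, query):
    """train: list of (features, label) pairs; query: a feature tuple."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: dist(example[0], query))[1]

# (whisker_length, ear_pointiness) -- hypothetical measurements
train = [((9.0, 8.5), "cat"), ((8.7, 9.0), "cat"),
         ((2.0, 3.0), "dog"), ((2.5, 2.0), "dog")]

print(nearest_neighbor(train, (8.0, 8.0)))  # cat
```

No one told the program what makes a cat; the "rule" is implicit in the labeled data it was given.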
Let's take Facebook's friend recommendation algorithm as an example. Say person A
has made many friends over the years. The algorithm picks up on all these
friendships and builds a network from them, allowing it to find common patterns. In
the network there might be predominant clusters corresponding to the circles A
moves in: one cluster might be people he knows from childhood, another people he met
at university, another his colleagues, and so on. From these clusters the algorithm
learns to identify common features, predicts who else person A might know, and
recommends them as friends.
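The common-neighbors idea behind this kind of recommendation can be sketched as follows. The friendship graph below is invented, and Facebook's actual algorithm is far more sophisticated:

```python
# Rank candidate friends for a person by how many mutual friends they
# share: a simple "common neighbors" heuristic over a toy graph.
from collections import Counter

friends = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "E"},
    "D": {"A", "F"},
    "E": {"B", "C"},
    "F": {"D"},
}

def recommend(person, graph):
    """Rank non-friends of `person` by number of mutual friends."""
    counts = Counter()
    for friend in graph[person]:
        for fof in graph[friend]:  # friend-of-friend
            if fof != person and fof not in graph[person]:
                counts[fof] += 1
    return counts.most_common()

print(recommend("A", friends))  # [('E', 2), ('F', 1)]
```

E is ranked first because A and E share two mutual friends (B and C), matching the cluster intuition described above.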
Here machine learning is used to learn the rules and patterns, and AI is the act of
applying them to recommend friends. Classical AI used hand-written rules to power
AI; modern AI uses machine learning to power AI. Over time the definition of AI has
not changed, only that which powers it and makes its decisions.
Our Client
In most complex medical claims, insurers and patients have the right to request a
medical review of prescribed treatment from an independent reviewer. Our client is
such a reviewer, acting as a mediator between payers and providers for medical
necessity reviews and pre-authorizations. Once our client receives the details of the
case, the organization must then validate (or overturn) the insurer's decision.
Validating treatment plans is just one of many ways that this organization helps at
the intersection of the insurer, physician, and patient. In addition to providing an
appeal mechanism, our client can also provide treatment pre-authorizations as
outsourced by insurance providers.
The Problem
Our Solution
Our first step was to organize the crush of content they receive and convert it into a
normalized, structured data set that our Artificial Intelligence system could
eventually interpret. To do this, we built a service that extracts embedded and
scanned language through digital extraction and OCR (optical character recognition),
respectively, in order to process every word on every page into something that could
be read, tagged, and understood by our AI system. During this process, we also built
an exhaustive set of intelligent validators to guarantee the accuracy of the case
materials, ensuring that all the records were accurately associated with the correct
patient and the case at hand.
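One simple kind of record validator can be sketched as follows. The field names, identifiers, and record format here are hypothetical, not the client's actual schema:

```python
# Sketch of a validator that checks an extracted page actually mentions
# the expected patient before it is attached to a case. All fields are
# hypothetical examples.
import re

def validate_page(page_text, patient):
    """Return True only if the page mentions the patient's name and DOB."""
    name_found = re.search(re.escape(patient["name"]), page_text, re.IGNORECASE)
    dob_found = patient["dob"] in page_text
    return bool(name_found and dob_found)

patient = {"name": "Jane Doe", "dob": "1984-03-12"}
good_page = "Patient: JANE DOE   DOB: 1984-03-12   Procedure: MRI lumbar spine"
bad_page = "Patient: John Smith   DOB: 1970-01-01   Procedure: X-ray"

print(validate_page(good_page, patient))  # True
print(validate_page(bad_page, patient))   # False
```

A real system would layer many such checks (dates of service, case numbers, provider identifiers) rather than relying on a single name/DOB match.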
The core challenge of any NLP project is that people understand sequences of words
while computers understand sequences of numbers. By translating words, sentences,
and language into numbers — or vectors, as Data Scientists call them — computers
are able to map the relationships words have to one another. These word
relationships are the key to understanding language. Only by associating the
word "leopard" with the words "wild", "cat", and "spots" can humans begin to
understand what a leopard is. It is in this way that Natural Language Processing
becomes Natural Language Understanding. Computers make the same association
between "leopard" and "cat", but mathematically rather than holistically,
converting words into a veritable constellation of numbers.
The most important part of any NLP implementation is finding the right language
model for translating text into such vectors, while maintaining a common link
between the two distinct entities.
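The idea of comparing words as vectors can be illustrated with cosine similarity over toy embeddings. The 3-dimensional vectors below are invented; real language models learn vectors with hundreds of dimensions from large text corpora:

```python
# Cosine similarity between toy word vectors: related words point in
# similar directions, so their similarity is close to 1.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

vectors = {
    "leopard": [0.9, 0.8, 0.1],
    "cat":     [0.8, 0.9, 0.2],
    "car":     [0.1, 0.2, 0.9],
}

# "leopard" is far more similar to "cat" than to "car"
print(cosine(vectors["leopard"], vectors["cat"]))
print(cosine(vectors["leopard"], vectors["car"]))
```

The same arithmetic that ranks these toy vectors is what lets a real language model judge which sentences in a case file are related to the procedure under review.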
This enabled our Deep Learning models to understand whether particular sections or
sentences of the case file were relevant to the medical procedures under review.
Relevant information was then sent back and forth across the system to different
stakeholders.
By layering the language model onto our client’s data, our Machine Learning system
could now understand the story of the case file and begin to summarize it.
Pre-Extraction.
First, our system dug through the original case file and extracted the 500 most
important sentences, based on the set priorities.
Extraction.
At the extraction phase, our system then reduced the word count further. It chose 10
of the 500 sentences to serve as the most concise summary possible. In this case, we
tuned the system to prioritize comprehensively capturing all information
contained in the source material, even if that meant repeating information.
Generation.
Finally, once the system had reduced the case file down to a single page, we
used Natural Language Generation tools to rewrite those 10 sentences into a
completely summarized, totally comprehensive narrative.
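A much-simplified stand-in for the extraction steps above can be sketched with frequency-based sentence scoring. Real systems use deep learning models, not word counts, and the sample text is invented:

```python
# Toy extractive summarizer: score each sentence by how many frequent
# words it contains, then keep the top k sentences.
from collections import Counter
import re

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:k]

doc = ("The patient reported back pain. An MRI was ordered for the back. "
       "The MRI showed a disc bulge. The weather was mild.")

print(summarize(doc, k=2))  # the two MRI sentences; the weather is dropped
```

Even this crude scorer keeps the medically relevant sentences and drops the irrelevant one, which is the essence of the pre-extraction and extraction phases described above.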
Results
Our system has already saved this organization thousands of hours. By automatically
organizing and summarizing case file information, its physicians are now able to
quickly understand case elements so they can make informed, medically accurate,
and timely determinations.
For health care companies, the stakes of getting this right couldn’t be higher. If our
system were to miss a crucial part of a patient’s case, the consequences could be
serious. By trusting Manceps to build this mission-critical system for them, this
medical organization could serve more cases, more quickly, at a fraction of the cost.
Hardware in Robotics
The hardware side of robotics includes the processor, buses, and memory, along with
peripherals such as co-processors, sensors, robotic arms, controllers, UARTs, etc.
Computer Vision
Computer vision is a field of artificial intelligence that trains computers to interpret
and understand the visual world. Using digital images from cameras and videos
and deep learning models, machines can accurately identify and classify objects —
and then react to what they “see.”
Early experiments in computer vision took place in the 1950s, using some of
the first neural networks to detect the edges of an object and to sort simple
objects into categories like circles and squares. In the 1970s, the first commercial use
of computer vision interpreted typed or handwritten text using optical character
recognition. This advancement was used to interpret written text for the blind.
As the internet matured in the 1990s, making large sets of images available online for
analysis, facial recognition programs flourished. These growing data sets helped
make it possible for machines to identify specific people in photos and videos. Today,
a number of factors have converged to bring about a renaissance in computer vision.
The effects of these advances on the computer vision field have been astounding.
Accuracy rates for object identification and classification have gone from 50 percent
to 99 percent in less than a decade — and today’s systems are more accurate than
humans at quickly detecting and reacting to visual inputs.
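The edge detection mentioned in the early history above can be illustrated with a simple horizontal-gradient filter over a tiny binary "image". Modern systems instead learn convolutional filters from data; this is purely illustrative:

```python
# Toy edge detector: the difference between horizontally adjacent pixels
# is large exactly where the image changes from dark to bright.
def horizontal_edges(image):
    """Return per-row absolute differences of neighboring pixels."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in image]

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

for row in horizontal_edges(image):
    print(row)  # the 1 marks the vertical edge between the two halves
```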
Not long ago, we published an article about navigation menu usability testing. Now
it's time to look at how that testing is put into practice in navigation menu
optimization.
After testing the above hypotheses, we then came up with a set of recommendations
for the new website layout and what to do with the categories in the navigation
menu. Here’s a summary of what an optimized version of the website would look
like:
1. The left navigation is reduced from 11 categories to 8, and moved to the top of the
page.
5. Unique selling points are displayed below the banner. Overall, the website should
look cleaner and become easier to navigate.
Ambient Intelligence
Machine learning: This capacity makes it possible for devices in the environment
to learn from experience, extrapolate from current data and expand on their
knowledge and capabilities autonomously.
AmI, the Internet of Things, artificial intelligence (AI), robotics, nanotechnology and
other developing trends are transforming the world to such an extent that the current
scenario is sometimes called the fourth industrial revolution.