
Going Beyond Machine Learning To Machine Reasoning
Ron Schmelzer Contributor
COGNITIVE WORLD Contributor Group

Jan 9, 2020, 11:00pm EST



[Image: From Machine Learning to Machine Reasoning (Getty)]

The conversation around Artificial Intelligence usually revolves around technology-focused topics: machine learning,
conversational interfaces, autonomous agents, and other aspects of
data science, math, and implementation. However, the history and
evolution of AI is more than just a technology story. The story of AI
is also inextricably linked with waves of innovation and research
breakthroughs that run headfirst into economic and technology
roadblocks. There seems to be a continuous pattern of discovery,
innovation, interest, investment, cautious optimism, boundless
enthusiasm, realization of limitations, technological roadblocks,
withdrawal of interest, and retreat of AI research back to academic
settings. These waves of advance and retreat seem to be as
consistent as the back and forth of sea waves on the shore.

This pattern of interest, investment, hype, then decline, and rinse-and-repeat is particularly vexing to technologists and investors
because it doesn't follow the usual technology adoption lifecycle.
As Geoffrey Moore popularized in his book "Crossing the Chasm", technology adoption usually follows a well-defined path. A technology is developed and finds early interest from innovators, then early adopters; if it can make the leap across the "chasm", it gets adopted by the early majority market, and then it's off to the races with demand from the late majority and finally the technology laggards. If the technology can't cross the chasm, then it
ends up in the dustbin of history. However, what makes AI distinct
is that it doesn't fit the technology adoption lifecycle pattern.

But AI isn't a discrete technology. Rather, it's a series of technologies, concepts, and approaches all aligning towards the
quest for the intelligent machine. This quest inspires academicians
and researchers to come up with theories of how the brain and intelligence work, and concepts of how to mimic these
aspects with technology. AI is a generator of technologies, which
individually go through the technology lifecycle. Investors aren't
investing in "AI”, but rather they're investing in the output of AI
research and technologies that can help achieve the goals of AI. As
researchers discover new insights that help them surmount
previous challenges, or as technology infrastructure finally catches
up with concepts that were previously infeasible, then new
technology implementations are spawned and the cycle of
investment renews.

The Need for Understanding

It's clear that intelligence is like an onion (or a parfait): many layers. Once we understand one layer, we find that it only explains
a limited amount of what intelligence is about. We discover there's
another layer that’s not quite understood, and back to our research
institutions we go to figure out how it works. In Cognilytica's exploration of the intelligence of voice assistants, the benchmark aims to tease out one of those next layers: understanding. That is, knowing what something is (recognizing an image among a category of trained concepts, converting audio waveforms into words, identifying patterns in a collection of data, or even playing games at advanced levels) is different from actually understanding what those things are. This lack of understanding is why users get hilarious responses to their questions from voice assistants, and is also why we can't truly get autonomous machine capabilities
in a wide range of situations. Without understanding, there's no
common sense. Without common sense and understanding,
machine learning is just a bunch of learned patterns that can't
adapt to the constantly evolving changes of the real world.

One of the visual concepts that's helpful for understanding these layers of increasing value is the "DIKUW Pyramid" (Data, Information, Knowledge, Understanding, Wisdom):


[DIKUW Pyramid]

While the Wikipedia entry on the DIKW Pyramid conveniently skips the Understanding step, we believe that understanding is the next logical threshold of AI capability. And like all previous layers of this AI onion, tackling this layer will require new research breakthroughs, dramatic increases in compute capabilities, and volumes of data. What? Don't we have almost limitless data and boundless computing power? Not quite. Read on.
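To make the pyramid a little more concrete, here is a minimal, hypothetical walk up the layers for a single raw reading. The domain, thresholds, and conclusions below are our own toy assumptions, not part of the DIKUW model or Cognilytica's benchmark.

```python
# Hypothetical walk up the DIKUW layers for one raw sensor reading.
# Every threshold and conclusion below is a toy assumption.

data = 39.5                                   # Data: a number with no context
information = {"patient_temp_c": data}        # Information: data given meaning

# Knowledge: interpreting the information against something we already know
has_fever = information["patient_temp_c"] > 38.0

# Understanding: connecting the knowledge to why it might be happening
explanation = ("fever is commonly the body's response to infection"
               if has_fever else "temperature is in the normal range")

# Wisdom: deciding what, if anything, to do about it
action = "suggest seeing a doctor" if has_fever else "no action needed"

print(has_fever, "|", explanation, "|", action)
```

Today's machine learning is strongest at the bottom two layers of this example; the upper layers are where understanding comes in.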


The Quest for Common Sense: Machine Reasoning

Early in the development of artificial intelligence, researchers realized that for machines to successfully navigate the real world, they would have to gain an understanding of how the world works and how different things are related to each other. In 1984,
the world's longest-lived AI project started. The Cyc project is
focused on generating a comprehensive "ontology" and knowledge
base of common sense, basic concepts and "rules of thumb" about
how the world works. The Cyc ontology uses a knowledge graph to
structure how different concepts are related to each other, and an
inference engine that allows systems to reason about facts.
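To make the idea of an ontology paired with an inference engine concrete, here is a minimal sketch in Python. It is not Cyc's actual representation language, ontology, or API; the triples and the single transitivity rule are purely illustrative assumptions.

```python
# A tiny, illustrative knowledge base of (subject, relation, object) triples.
# The facts and the single transitivity rule below are toy assumptions,
# not Cyc's actual ontology, CycL language, or inference engine.
facts = {
    ("rain", "is_a", "precipitation"),
    ("precipitation", "is_a", "weather_event"),
    ("umbrella", "protects_from", "rain"),
}

def infer_is_a(kb):
    """Apply one commonsense-style rule to a fixed point:
    if X is_a Y and Y is_a Z, then X is_a Z."""
    kb = set(kb)
    while True:
        derived = {
            (x, "is_a", z)
            for (x, r1, y1) in kb if r1 == "is_a"
            for (y2, r2, z) in kb if r2 == "is_a" and y1 == y2
        }
        if derived <= kb:          # nothing new was inferred
            return kb
        kb |= derived

kb = infer_is_a(facts)
# A fact the system was never explicitly told, but can reason its way to:
print(("rain", "is_a", "weather_event") in kb)  # True
```

The knowledge graph supplies the relationships; the inference rule is what lets the system derive facts it was never given directly.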

The main idea behind Cyc and other understanding-building knowledge encodings is the realization that systems can't be truly
intelligent if they don't understand what the underlying things they
are recognizing or classifying are. This means we have to dig deeper
than machine learning for intelligence. We need to peel this onion
one level deeper, scoop out another tasty parfait layer. We need
more than machine learning - we need machine reasoning.

Machine reasoning is the concept of giving machines the power to make connections between facts, observations, and all the magical things that we can train machines to do with machine learning.
Machine learning has enabled a wide range of capabilities and
functionality and opened up a world of possibility that was not
possible without the ability to train machines to identify and
recognize patterns in data. However, this power is crippled by the
fact that these systems are not really able to functionally use that
information for higher ends, or apply learning from one domain to
another without human involvement. Even transfer learning is
limited in application.
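As a rough, hypothetical illustration of what "making connections" could look like, the sketch below stands in a fake classifier for a trained perception model and chains its output through a few hand-written implications. Every name and rule here is an assumption for illustration, not an established technique or API.

```python
# Illustrative only: a stand-in for a trained perception model plus a
# hand-written rule base. Both are hypothetical; the point is connecting
# a learned label to conclusions the model was never trained on.

def fake_image_classifier(image_path: str) -> str:
    """Stand-in for a trained model that labels what it sees."""
    return "rain_clouds"  # pretend the model recognized rain clouds

# Simple commonsense-style implications, written by hand.
implies = {
    "rain_clouds": "rain_likely",
    "rain_likely": "ground_will_be_wet",
}

def reason(label: str) -> list[str]:
    """Chain implications from a perceived label to further conclusions."""
    conclusions = []
    while label in implies:
        label = implies[label]
        conclusions.append(label)
    return conclusions

label = fake_image_classifier("sky.jpg")
print(label, "->", reason(label))
# rain_clouds -> ['rain_likely', 'ground_will_be_wet']
```

The classifier alone stops at "rain_clouds"; it takes the reasoning step to connect that recognition to anything useful beyond it.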

Indeed, we're rapidly facing the reality that we're soon going to hit a wall at the current edge of machine learning-focused AI's capabilities. To get to that next level we need to break through this
focused AI. To get to that next level we need to break through this
wall and shift from machine learning-centric AI to machine
reasoning-centric AI. However, that's going to require some
breakthroughs in research that we haven't realized yet.

The fact that the Cyc project has the distinction of being the longest-lived AI project is a bit of a back-handed compliment. The
Cyc project is long lived because after all these decades the quest
for common sense knowledge is proving elusive. Codifying common sense into a machine-processable form is a tremendous challenge. Not only do you need to encode the entities themselves in a way that a machine knows what you're talking about, but also
all the inter-relationships between those entities. There are
millions, if not billions, of "things" that a machine needs to know.
Some of these things are tangible like "rain" but others are
intangible such as "thirst". The work of encoding these
relationships is being partially automated, but still requires
humans to verify the accuracy of the connections... because after
all, if machines could do this we would have solved the machine
recognition challenge. It's a bit of a chicken-and-egg problem: you can't solve machine recognition without having some way to codify the relationships between information, but you can't scalably codify all the relationships that machines would need to
know without some form of automation.
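Here is a minimal sketch of what "partially automated, human-verified" relationship encoding might look like, assuming a toy pattern-matching extractor and a review step. None of this describes how Cyc or any specific project actually works; it only illustrates the division of labor the paragraph above describes.

```python
# Hypothetical sketch: machine-proposed relations wait in a queue for a
# human to accept or reject before they enter the knowledge base.
from dataclasses import dataclass

@dataclass
class CandidateRelation:
    subject: str
    relation: str
    obj: str
    confidence: float  # score assigned by some automated extractor

def propose_relations(sentence: str) -> list[CandidateRelation]:
    """Toy extractor: pattern-match sentences of the form 'X causes Y'."""
    words = sentence.lower().rstrip(".").split()
    if "causes" in words:
        i = words.index("causes")
        return [CandidateRelation(" ".join(words[:i]), "causes",
                                  " ".join(words[i + 1:]), confidence=0.6)]
    return []

def human_approves(candidate: CandidateRelation) -> bool:
    """Stand-in for the human verification step described above."""
    return True  # in practice, a reviewer would confirm or reject this

knowledge_base = []
for candidate in propose_relations("Rain causes wet ground."):
    if human_approves(candidate):
        knowledge_base.append(candidate)

print(knowledge_base)
```

The automation proposes; the human disposes. That bottleneck is exactly why scaling this kind of encoding remains so hard.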

Are we still limited by data and compute power?

Machine learning has proven to be very data-hungry and compute-intensive. Over the past decade, many iterative enhancements have
lessened compute load and helped to make data use more efficient.
GPUs, TPUs, and emerging FPGAs are helping to provide the raw
compute horsepower needed. Yet, despite these advancements,
complicated machine learning models with lots of dimensions and
parameters still require intense amounts of compute and data.
Machine reasoning is easily an order of magnitude or more in complexity beyond machine learning. Accomplishing the task of reasoning out
the complicated relationships between things and truly
understanding these things might be beyond today's compute and
data resources.

The current wave of interest and investment in AI doesn't show any signs of slowing or stopping any time soon, but it's inevitable it will
slow at some point for one simple reason: we still don't understand
intelligence and how it works. Despite the amazing work of
researchers and technologists, we're still guessing in the dark about
the mysterious nature of cognition, intelligence, and consciousness.
At some point we will be faced with the limitations of our
assumptions and implementations and we'll work to peel the onion
one more layer and tackle the next set of challenges. Machine
reasoning is quickly approaching as the next challenge we must
surmount on the quest for artificial intelligence. If we can apply our
research and investment talent to tackling this next layer, we can
keep the momentum going with AI research and investment. If not,
the pattern of AI will repeat itself, and the current wave will crest.
It might not be now or even within the next few years, but the ebb
and flow of AI is as inevitable as the waves upon the shore.
Follow me on Twitter or LinkedIn. Check out my website or some
of my other work here.

Ron Schmelzer

Ronald Schmelzer is Managing Partner & Principal Analyst at AI Focused Research and Advisory firm Cognilytica (http://cognilytica.com), a leading analyst firm...
