Nautilus

Why Robot Brains Need Symbols

Nowadays, the words “artificial intelligence” seem to be on practically everyone’s lips, from Elon Musk to Henry Kissinger. At least a dozen countries have mounted major AI initiatives, and companies like Google and Facebook are locked in a massive battle for talent. Since 2012, virtually all the attention has been on one technique in particular, known as deep learning: a statistical approach that uses sets of simplified “neurons” to approximate the dynamics inherent in large, complex collections of data. Deep learning has powered advances in everything from speech recognition and computer chess to automatically tagging your photos. To some people, it probably seems as if “superintelligence” (machines vastly more intelligent than people) is just around the corner.
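For readers who want to see the mechanics, here is a minimal, illustrative sketch in Python of what those “simplified neurons” amount to: each one computes a weighted sum of its inputs and passes the result through a nonlinearity, and stacking layers of them is what makes the network “deep.” The function name and the numbers below are my own, purely for illustration; in a real system, learning would adjust the weights to fit the data.

    import numpy as np

    def neuron_layer(inputs, weights, biases):
        # Each row of `weights` defines one simplified "neuron":
        # a weighted sum of the inputs, plus a bias, then a ReLU nonlinearity.
        return np.maximum(0.0, weights @ inputs + biases)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                      # a tiny input "datum"
    w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

    # Two stacked layers of simplified neurons: a (very) shallow "deep" network.
    print(neuron_layer(neuron_layer(x, w1, b1), w2, b2))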

The truth is, they are not. Getting a machine to recognize the syllables in your sentences is not the same as getting it to understand their meaning. A system like Alexa can understand a simple request like “turn on the lights,” but it’s a long way from holding a meaningful conversation. Similarly, robots can vacuum your floor, but the AI that powers them remains weak, and they are a long way from being clever enough (and reliable enough) to watch your kids. There are lots of things that people can do that machines still can’t.

And lots of controversy about what we should do next. I should know: For the last three decades, since I started graduate school at the Massachusetts Institute of Technology, studying with the inspiring cognitive scientist Steven Pinker, I have been embroiled in an on-again, off-again debate about the nature of the human mind and the best way to build AI. I have taken the sometimes unpopular position that techniques like deep learning (and the predecessors that were around back then) aren’t enough to capture the richness of the human mind.

That on-again, off-again debate flared up in an unexpectedly big way last week, leading to a huge tweetstorm that brought in a host of luminaries, ranging from Yann LeCun, a founder of deep learning and current Chief AI Scientist at Facebook, to (briefly) Jeff Dean, who runs AI at Google, and Judea Pearl, a Turing Award winner at the University of California, Los Angeles.

When 140 characters no longer seemed like enough, I tried to take a step back, to explain why deep learning might not be enough on its own, and where we perhaps ought to look for another idea that could combine with deep learning to take AI to the next level. The following is a slight adaptation of my personal perspective on what the debate is all about.
