Should We Expect Super-Intelligent AI?

Disclaimer: Do I have any kind of a technical understanding of AI progress? Absolutely not. Am I going to let that stop me from wading in with wild speculation? No way! So here goes: 

“Solve intelligence, and then use that to solve everything else.”
Demis Hassabis

From one perspective, it looks like we’re centuries away from creating AI that can compete with humans. The complexity and versatility of a human brain are so far beyond any software we’ve created that it’s like we’re not even in the same league yet. You could have 1000 top developers working for 1000 years and you still might not create anything resembling an adult human brain. It’s just not a project that scales well. You can’t just write a cooking module, then add a chess module and a hundred other narrow skills, and end up with anything close to a functional human being. That approach will never give you any kind of general-purpose intelligence. From this perspective, it looks hopeless.

AND YET…

I think there’s one main misconception that makes people more skeptical of AI than is warranted. I think they’re misunderstanding the goal. Creating software that’s like an adult human brain is not the goal! The goal is to create something more like a human newborn. Babies are stupid! Really stupid! But they have a spectacular framework for learning. Creating an AI that can match a newborn infant is a far less ambitious project.

When babies are born, they know nothing! They don’t even know that they exist in a physical world. They don’t know that the people around them are corporeal beings. They can’t make any sense of the sensory input they are bombarded with. They don’t know that they have hands and feet. But they’re fantastic at sorting through all of that sensory input over time, recognizing patterns, and using those patterns to form expectations and predictions. Essentially, a human baby receives a bunch of input data, forms predictive models about the world, and then uses those models to build new models and further refine the current ones. After eighteen years or so of learning, you get to find out what kind of human you’ve created.
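
To make that loop a little more concrete, here’s a toy sketch in Python of the basic “predict, compare, update” cycle I’m describing. Every name and number in it is my own invention for illustration: an agent watches a stream of noisy observations, guesses what comes next, and nudges its internal model by the size of its surprise. Real learning, in infants or in machine learning systems, is unimaginably richer than this, but the shape of the loop is the point.

```python
import random

# A toy "predict, compare, update" loop. The names and numbers here are
# invented for illustration; real learning is vastly richer than this.

def sensory_stream(n=2000):
    """Stand-in for raw sensory input: noisy readings centered on a hidden value."""
    hidden_value = 7.0
    for _ in range(n):
        yield hidden_value + random.gauss(0, 1.0)

estimate = 0.0          # the agent's current "model of the world"
learning_rate = 0.05

for observation in sensory_stream():
    prediction = estimate                  # form an expectation
    surprise = observation - prediction    # compare it with what actually arrives
    estimate += learning_rate * surprise   # refine the model by the prediction error

print(f"Learned estimate: {estimate:.2f} (the hidden value was 7.0)")
```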

An intelligent AI would be something similar. It would be a general-purpose algorithm that is constantly learning and updating its models and predictions. The difference is, you wouldn’t have to wait 18 years to see what you’ve created. Since the AI is running in computer time, you could feed it a bunch of data to churn through and then wait a few hours or days, instead of years, to see what kind of entity you’ve created. It would learn concepts naturally, the way we do. At first, developers will struggle to create something as smart as a mouse, or a one-year-old. But after each iteration, they’ll make some changes and try again. Some of the best minds in the world are working on this, and I’d be very hesitant to bet against them and against human ingenuity. They’ll keep tweaking the algorithms, and I wouldn’t be surprised if they eventually figure out the secrets of general intelligence. And then the question is, what happens when you get that formula right? How smart does an AI become? Does it hit an intelligence wall, somewhere in the same range where all humans hit it? Or does it smash through that wall, having none of the speed/energy/volume constraints of a human brain in a human skull? Does it level off at an IQ of 200? 500? 1000? If it manages to get smart enough to rewrite its own code, what happens when it starts recursively self-improving and enters a feedback loop where it becomes smarter and smarter? At that point, further speculation about the future of Earth becomes hopelessly inadequate.

We’ve come a long way since Deep Blue, the champion chess program of the 90s that excelled at that one narrow task but was incapable of anything else. Current AI research is already headed in the direction of becoming more versatile and more general-purpose. For example, the AI company DeepMind made a splash in 2015 when its AI mastered 49 Atari arcade games. What’s truly impressive is that it used the same general-purpose algorithm to master all 49 games, and the only input it received was the pixels on the screen. Then in 2016, DeepMind challenged and defeated one of the top players in the world at the ancient game of Go. Go is a complex game of strategy that is notoriously difficult for AI, and this milestone was thought to be years or decades away. The algorithm, named AlphaGo, learned by studying a dataset of previous games played by humans, and then playing millions of games against itself.

Then in 2018, things really started getting interesting when DeepMind unleashed AlphaZero, a new generalized version of the algorithm. AlphaZero conquered the world’s best at three separate games: chess, Go, and shogi. When I say it conquered the world’s best, I mean that it not only outmatched humans, but also defeated the most powerful programs ever written. And this time, it learned entirely through self-play, without observing any human games whatsoever. It learned to play entirely from scratch, and it did so in mere hours. It took just eight hours of training for AlphaZero to surpass the entire history of strategy innovations and progress in the game of Go, insights that had been accumulated over thousands of years. It took just four hours to do the same with chess and to eclipse the world’s best chess programs. Now DeepMind has turned its attention to the strategy game StarCraft II, in an attempt to navigate an even more complex environment, and it has already had some victories against world-class players.
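
To get a feel for what “learned entirely through self-play” means in practice, here’s a heavily simplified sketch in Python. It uses a toy game (Nim: ten stones, take one to three per turn, whoever takes the last stone wins) and a plain lookup table of position values instead of a neural network guided by tree search, and every name in it is my own invention rather than anything from DeepMind’s code. The point is only the structure: the program improves by playing against itself, with no human games anywhere in the process.

```python
import random

# Toy self-play: learn Nim (10 stones, take 1-3, last stone wins) purely by
# playing against yourself and crediting each visited position with the
# final result. An illustration of the idea, not DeepMind's actual method.

values = {0: -1.0}  # stones remaining -> value for the player about to move
                    # (0 stones left means the player to move has already lost)

def choose_move(stones, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)           # occasionally try something new
    # a move is good for us if it leaves the opponent in a bad position
    return max(moves, key=lambda m: -values.get(stones - m, 0.0))

def self_play_game():
    stones, to_move, visited = 10, 0, []
    while stones > 0:
        visited.append((stones, to_move))
        stones -= choose_move(stones)         # both sides use the same policy
        to_move = 1 - to_move
    return visited, 1 - to_move               # whoever took the last stone won

for _ in range(20000):                        # the real systems play millions of games
    visited, winner = self_play_game()
    for stones, player in visited:
        target = 1.0 if player == winner else -1.0
        old = values.get(stones, 0.0)
        values[stones] = old + 0.05 * (target - old)

print({s: round(values.get(s, 0.0), 2) for s in range(1, 11)})
# 4 and 8 stones are the theoretically losing positions in this game,
# so after training their values should come out low
```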

Some point out that if it takes millions and millions of games for these algorithms to learn to beat the top human players, they must be learning at a pretty pathetic rate. But I think this criticism is mostly off-base. When humans play a game like StarCraft, they’ve already learned millions of useful concepts throughout their lives, and they bring all of that knowledge to the game. An AI starts with nothing: no background knowledge whatsoever. To me, this is a pretty good demonstration that we’re getting closer to creating an AI “baby” that learns in a general-purpose way. Exactly how far out is that achievement? Predicting the future of technological breakthroughs is difficult. To quote Eliezer Yudkowsky:

History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.

In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.

In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction.

And of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.

The essay that quote comes from argues that we should start thinking more about AI safety right now, and figuring out how to get a machine’s values to match our own. I won’t get into the AI safety debate here, except to agree that the human-value alignment problem is really hard, and that the default scenario is a super-intelligent AI melting down all the humans and using them for spare atoms. Unless we’re really smart and really careful, attempts to align human and AI values will go horribly wrong. On the flip side, if they go right, we could see the end of all of our greatest troubles.

To summarize my actual prediction about the future of AI: I expect to see a ton of progress in AI in the coming decades. Will it be enough to lead to a general intelligence explosion that ushers in a new era? I have absolutely no idea. And that’s really exciting! I don’t know if humanity will ever create intelligent AI, let alone in my lifetime. But it looks like it could happen. It’s possible. To me, it’s pretty wild that this unfathomable world-changing event might actually happen! And I might even live to see it!
