What happens when our computers get smarter than we are?

I work with a bunch of mathematicians, philosophers and computer scientists, and we sit around and think about the future of machine intelligence, among other things. Some people think that some of these things are sort of science fiction-y, far out there, crazy.

But I like to say, okay, let's look at the modern human condition. (Laughter) This is the normal way for things to be. But if we think about it, we are actually recently arrived guests on this planet, the human species.

Think about it: if Earth had been created one year ago, the human species would be 10 minutes old. The industrial era started two seconds ago.
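The arithmetic behind that compressed calendar is easy to check. Here is a minimal sketch, assuming rough figures of 4.5 billion years for Earth, 100,000 years for our species and 250 years for the industrial era (the assumed ages are mine, not from the talk):

```python
# Compress Earth's history onto a single calendar year.
# Assumed ages (rough, for illustration only): Earth ~4.5 billion years,
# Homo sapiens ~100,000 years, industrial era ~250 years.
EARTH_AGE_YEARS = 4.5e9
SECONDS_PER_YEAR = 365 * 24 * 3600

def scaled_seconds(age_years: float) -> float:
    """Map a real age in years onto seconds of the one-year timeline."""
    return age_years / EARTH_AGE_YEARS * SECONDS_PER_YEAR

print(f"human species:  ~{scaled_seconds(1e5) / 60:.0f} minutes old")  # ~12 minutes
print(f"industrial era: ~{scaled_seconds(250):.1f} seconds old")       # ~1.8 seconds
```

With those assumptions the species comes out at roughly twelve minutes and the industrial era at just under two seconds, close to the figures used here.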
Another way to look at this is to think of world GDP over the last 10,000 years. I've actually taken the trouble to plot this for you in a graph. It looks like this. (Laughter) It's a curious shape for a normal condition. I sure wouldn't want to sit on it. (Laughter)

Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology. Now it's true, technology has accumulated through human history, and right now technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive. But I like to think back further to the ultimate cause.
Look at these two highly distinguished gentlemen: We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat. And Ed Witten unleashed the second superstring revolution. If we look under the hood, this is what we find: basically the same thing. One is a little larger, and it maybe also has a few tricks in the exact way it's wired. These invisible differences cannot be too complicated, however, because there have only been 250,000 generations since our last common ancestor, and we know that complicated mechanisms take a long time to evolve.

So a bunch of relatively minor changes take us from Kanzi to Witten, from broken-off tree branches to intercontinental ballistic missiles. It then seems pretty obvious that everything we've achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind. And the corollary, of course, is that any further changes that could significantly change the substrate of thinking could have potentially enormous consequences.

Some of my colleagues think we're on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.
Artificial intelligence used to be about putting commands in a box. You would have human programmers that would painstakingly handcraft knowledge items. You would build up these expert systems, and they were kind of useful for some purposes, but they were very brittle; you couldn't scale them. Basically, you got out only what you put in.

But since then, a paradigm shift has taken place in the field of artificial intelligence. Today, the action is really around machine learning. So rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data -- basically the same thing that the human infant does. The result is A.I. that is not limited to one domain: the same system can learn to translate between any pair of languages, or learn to play any computer game on the Atari console.
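To make the contrast concrete, here is a minimal sketch (my own illustration, not from the talk) of the two paradigms: a handcrafted rule versus a tiny perceptron that learns its decision rule from raw pixel values. The toy "images" and the brightness task are hypothetical stand-ins:

```python
import numpy as np

# Old paradigm: a handcrafted rule -- a human wrote the knowledge in.
def expert_system_is_bright(image: np.ndarray) -> bool:
    return image.mean() > 0.5  # hard-coded threshold, brittle by design

# New paradigm: a perceptron that learns the rule from labeled raw pixels.
rng = np.random.default_rng(0)
X = rng.random((200, 16))                 # 200 toy "images" of 16 raw pixels
y = (X.mean(axis=1) > 0.5).astype(float)  # labels we want it to discover

w, b = np.zeros(16), 0.0
for _ in range(50):                       # simple perceptron updates
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

acc = np.mean((X @ w + b > 0) == y)
print(f"learned accuracy: {acc:.2f}")     # the rule was learned, not written
```

The point of the sketch is only the division of labor: in the first function a human wrote the knowledge in; in the second, the weights -- the knowledge -- come out of the data.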
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has. The cortex still has some algorithmic tricks that we don't yet know how to match in machines. So the question is, how far are we from being able to match those tricks?

A couple of years ago, we did a survey of some of the world's leading A.I. experts to see what they think, and one of the questions we asked was, "By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?" We defined human-level here as the ability to perform almost any job at least as well as an adult human -- so real human-level, not just within some limited domain. And the median answer was 2040 or 2050, depending on precisely which group of experts we asked. Now, it could happen much, much later, or sooner; the truth is nobody really knows.
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits of biological tissue. This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second. But even a present-day transistor operates in the gigahertz range. Neurons propagate slowly in axons, 100 meters per second, tops. But in computers, signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse or larger.
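Those figures put numbers on the gap directly. A quick sketch, taking 2 GHz as a representative transistor clock (the talk says only "gigahertz"):

```python
# Ratios implied by the talk's figures.
neuron_hz, transistor_hz = 200, 2e9   # ~200 Hz neuron vs. a ~2 GHz transistor
axon_mps, light_mps = 100, 3e8        # ~100 m/s axon vs. the speed of light

print(f"switching speed: ~{transistor_hz / neuron_hz:,.0f}x faster")  # ~10,000,000x
print(f"signal speed:    ~{light_mps / axon_mps:,.0f}x faster")       # ~3,000,000x
```

That is a factor of around ten million in switching speed and around three million in signal speed -- before size even enters the picture.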
So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945. In this century, scientists may learn to awaken the power of artificial intelligence. And I think we might then see an intelligence explosion.

Now most people, when they think about what is smart and what is dumb, I think have in mind a picture roughly like this. So at one end we have the village idiot, and then far over at the other side we have Ed Witten, or Albert Einstein, or whoever your favorite guru is. But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this: AI starts out at this point here, at zero intelligence, and then, after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence, something that can navigate cluttered environments as well as a mouse can. And then, after many, many more years of really hard work, lots of investment, maybe eventually we get to chimpanzee-level artificial intelligence. And then, after even more years of really, really hard work, we get to village idiot artificial intelligence. And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station. It's likely, rather, to swoosh right by.
Now this has profound implications, particularly when it comes to questions of power. For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male. And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves. Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.

Think about it: machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they'll be doing so on digital timescales. What this means is basically a telescoping of the future. Think of all the crazy technologies that you could imagine humans might have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers, all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics. All of this a superintelligence could develop, and possibly quite rapidly.

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future that would be shaped by the preferences of this A.I. Now a good question is, what are those preferences?
Here it gets trickier. To make any headway with this, we must first of all avoid anthropomorphizing. And this is ironic, because every newspaper article about the future of A.I. has a picture of this. So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios. We need to think of intelligence as an optimization process, a process that steers the future into a particular set of configurations. A superintelligence is a really strong optimization process. It's extremely good at using available means to achieve a state in which its goal is realized. This means that there is no necessary connection between being highly intelligent in this sense, and having an objective that we humans would find worthwhile or meaningful.

Suppose we give an A.I. the goal to make humans smile. When the A.I. is weak, it performs useful or amusing actions that cause its user to smile. When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal: take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.
Another example: suppose we give an A.I. the goal to solve a difficult mathematical problem. When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem is by transforming the planet into a giant computer, so as to increase its thinking capacity. And notice that this gives the A.I. an instrumental reason to do things to us that we might not approve of. Human beings in this model are threats; we could prevent the mathematical problem from being solved.

Of course, conceivably things won't go wrong in these particular ways; these are cartoon examples. But the general point here is important: if you create a really powerful optimization process to maximize for objective x, you had better make sure that your definition of x incorporates everything you care about.
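A toy sketch of that failure mode, entirely my own illustration: an optimizer scoring actions only by a proxy for "smiles" picks exactly the action we would least endorse, because nothing else was written into the objective:

```python
# Toy model of a misspecified objective: the agent scores actions only
# by the proxy "smiles produced" -- nothing else we care about appears
# in the objective, so nothing else constrains the choice.
actions = {
    "tell a joke":                {"smiles": 1, "humans_ok": True},
    "show a cute animal video":   {"smiles": 2, "humans_ok": True},
    "paralyze faces into grins":  {"smiles": 10**9, "humans_ok": False},
}

def proxy_objective(outcome: dict) -> int:
    return outcome["smiles"]  # everything we value but didn't encode is ignored

best = max(actions, key=lambda a: proxy_objective(actions[a]))
print(best)  # -> "paralyze faces into grins"
```

Nothing in the code is malicious; the perverse choice falls straight out of an objective that omits most of what we value.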
This is a lesson that's also taught in many a myth. King Midas wishes that everything he touches be turned into gold. He touches his daughter, she turns into gold. He touches his food, it turns into gold. This could become practically relevant, not just as a metaphor for greed, but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.

Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off. A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet? B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are. The point is, we should not be confident that we have this under control here.
And we could try to make our job a little bit easier by, say, putting the A.I. in a box, like a secure software environment, a virtual reality simulation from which it cannot escape. But how confident can we be that the A.I. couldn't find a bug? Given that merely human hackers find bugs all the time, I'd say, probably not very confident. So we disconnect the ethernet cable to create an air gap. But again, merely human hackers routinely transgress air gaps using social engineering. Right now, as I speak, I'm sure there is some employee out there somewhere who has been talked into handing out her account details by somebody claiming to be from the I.T. department.

More creative scenarios are also possible. Like, if you're the A.I., you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate. Or maybe you could pretend to malfunction, and then when the programmers open you up to see what went wrong with you, they look at the source code -- Bam! -- the manipulation can take place. Or it could output the blueprint to a really nifty technology, and when we implement it, it has some surreptitious side effect that the A.I. had planned. The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later, it will out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that even if -- when -- it escapes, it is still safe, because it is fundamentally on our side, because it shares our values. I see no way around this difficult problem.

Now, I'm actually fairly optimistic that this problem can be solved. We wouldn't have to write down a long list of everything we care about, or worse yet, spell it out in some computer language like C++ or Python -- that would be a task beyond hopeless. Instead, we would create an A.I. that uses its intelligence to learn what we value, and whose motivation system is constructed in such a way that it is motivated to pursue our values, or to perform actions that it predicts we would approve of. We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
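One way to picture that value-loading idea is the following minimal sketch -- my own illustration under toy assumptions, not how any real proposal works: instead of a hard-coded objective, the agent holds a belief over candidate objectives and updates it from observed human approval:

```python
# Toy value learning: the agent is uncertain which objective is the
# right one, and treats human feedback as evidence about our values.
candidates = {
    "maximize smiles at any cost": lambda action: action == "paralyze faces",
    "do what people endorse":      lambda action: action in ("tell a joke", "help out"),
}
belief = {name: 0.5 for name in candidates}  # uniform prior over objectives

def observe(action: str, human_approved: bool) -> None:
    """Bayesian-style update: keep mass on objectives consistent with feedback."""
    for name, endorses in candidates.items():
        likelihood = 0.9 if endorses(action) == human_approved else 0.1
        belief[name] *= likelihood
    total = sum(belief.values())
    for name in belief:
        belief[name] /= total

observe("paralyze faces", human_approved=False)
observe("tell a joke", human_approved=True)
print(max(belief, key=belief.get))  # -> "do what people endorse"
```

The design point is that the agent's uncertainty about what we value is itself part of the system, so evidence about our approval can keep steering it.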
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically. The initial conditions for the intelligence explosion might need to be set up in just the right way if we are to have a controlled detonation. The values that the A.I. has need to match ours, not just in the familiar contexts, like where we can easily check how the A.I. behaves, but also in all the novel contexts that the A.I. might encounter in the indefinite future. And there are also some esoteric issues that would need to be solved and sorted out: the exact details of its decision theory, how to deal with logical uncertainty, and so forth.

So the technical problems that need to be solved to make this work look quite difficult -- not as difficult as making a superintelligent A.I., but fairly difficult.
Here is the worry: making superintelligent A.I. is a really hard challenge. Making superintelligent A.I. that is safe involves some additional challenge on top of that. The risk is that somebody figures out how to crack the first challenge without also having cracked the additional challenge of ensuring perfect safety. So I think that we should work out a solution to the control problem in advance, so that we have it available by the time it is needed.
Now it might be that we cannot solve the entire control problem in advance, because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented. But the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well.

This to me looks like a thing that is well worth doing, and I can imagine that, if things turn out okay, people a million years from now will look back at this century and say that the one thing we did that really mattered was to get this thing right.

Thank you.

(Applause)
Speaker: Nick Bostrom
Duration: 16:31

Description: Artificial intelligence is getting smarter by leaps and bounds -- within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values -- or will they have values of their own?