
Can we build AI without losing control over it?

  • 0:01 - 0:03
    I'm going to talk
    about a failure of intuition
  • 0:03 - 0:05
    that many of us suffer from.
  • 0:05 - 0:09
    It's really a failure
    to detect a certain kind of danger.
  • 0:09 - 0:11
    I'm going to describe a scenario
  • 0:11 - 0:14
    that I think is both terrifying
  • 0:14 - 0:16
    and likely to occur,
  • 0:17 - 0:18
    and that's not a good combination,
  • 0:19 - 0:20
    as it turns out.
  • 0:20 - 0:23
    And yet rather than be scared,
    most of you will feel
  • 0:23 - 0:25
    that what I'm talking about
    is kind of cool.
  • 0:25 - 0:28
    I'm going to describe
    how the gains we make
  • 0:28 - 0:30
    in artificial intelligence
  • 0:30 - 0:32
    could ultimately destroy us.
  • 0:32 - 0:35
    And in fact, I think it's very difficult
    to see how they won't destroy us
  • 0:35 - 0:37
    or inspire us to destroy ourselves.
  • 0:37 - 0:39
    And yet if you're anything like me,
  • 0:39 - 0:42
    you'll find that it's fun
    to think about these things.
  • 0:42 - 0:45
    And that response is part of the problem.
  • 0:45 - 0:47
    OK? That response should worry you.
  • 0:48 - 0:51
    And if I were to convince you in this talk
  • 0:51 - 0:54
    that we were likely
    to suffer a global famine,
  • 0:54 - 0:57
    either because of climate change
    or some other catastrophe,
  • 0:57 - 1:01
    and that your grandchildren,
    or their grandchildren,
  • 1:01 - 1:02
    are very likely to live like this,
  • 1:03 - 1:04
    you wouldn't think,
  • 1:05 - 1:07
    "Interesting.
  • 1:07 - 1:08
    I like this TED Talk."
  • 1:09 - 1:11
    Famine isn't fun.
  • 1:12 - 1:15
    Death by science fiction,
    on the other hand, is fun,
  • 1:15 - 1:19
    and one of the things that worries me most
    about the development of AI at this point
  • 1:19 - 1:23
    is that we seem unable to marshal
    an appropriate emotional response
  • 1:23 - 1:25
    to the dangers that lie ahead.
  • 1:25 - 1:28
    I am unable to marshal this response,
    and I'm giving this talk.
  • 1:30 - 1:33
    It's as though we stand before two doors.
  • 1:33 - 1:34
    Behind door number one,
  • 1:34 - 1:37
    we stop making progress
    in building intelligent machines.
  • 1:37 - 1:41
    Our computer hardware and software
    just stops getting better for some reason.
  • 1:41 - 1:44
    Now take a moment
    to consider why this might happen.
  • 1:45 - 1:49
    I mean, given how valuable
    intelligence and automation are,
  • 1:49 - 1:52
    we will continue to improve our technology
    if we are at all able to.
  • 1:53 - 1:55
    What could stop us from doing this?
  • 1:56 - 1:58
    A full-scale nuclear war?
  • 1:59 - 2:01
    A global pandemic?
  • 2:02 - 2:04
    An asteroid impact?
  • 2:06 - 2:08
    Justin Bieber becoming
    president of the United States?
  • 2:08 - 2:11
    (Laughter)
  • 2:13 - 2:17
    The point is, something would have to
    destroy civilization as we know it.
  • 2:17 - 2:22
    You have to imagine
    how bad it would have to be
  • 2:22 - 2:25
    to prevent us from making
    improvements in our technology
  • 2:25 - 2:26
    permanently,
  • 2:26 - 2:28
    generation after generation.
  • 2:28 - 2:30
    Almost by definition,
    this is the worst thing
  • 2:30 - 2:32
    that's ever happened in human history.
  • 2:33 - 2:34
    So the only alternative,
  • 2:34 - 2:36
    and this is what lies
    behind door number two,
  • 2:36 - 2:39
    is that we continue
    to improve our intelligent machines
  • 2:39 - 2:41
    year after year after year.
  • 2:42 - 2:45
    At a certain point, we will build
    machines that are smarter than we are,
  • 2:46 - 2:49
    and once we have machines
    that are smarter than we are,
  • 2:49 - 2:51
    they will begin to improve themselves.
  • 2:51 - 2:53
    And then we risk what
    the mathematician I. J. Good called
  • 2:53 - 2:55
    an "intelligence explosion,"
  • 2:55 - 2:57
    that the process could get away from us.
  • 2:58 - 3:01
    Now, this is often caricatured,
    as I have here,
  • 3:01 - 3:04
    as a fear that armies of malicious robots
  • 3:04 - 3:05
    will attack us.
  • 3:05 - 3:08
    But that isn't the most likely scenario.
  • 3:08 - 3:13
    It's not that our machines
    will become spontaneously malevolent.
  • 3:13 - 3:16
    The concern is really
    that we will build machines
  • 3:16 - 3:18
    that are so much
    more competent than we are
  • 3:18 - 3:22
    that the slightest divergence
    between their goals and our own
  • 3:22 - 3:23
    could destroy us.
  • 3:24 - 3:26
    Just think about how we relate to ants.
  • 3:27 - 3:28
    We don't hate them.
  • 3:28 - 3:30
    We don't go out of our way to harm them.
  • 3:30 - 3:33
    In fact, sometimes
    we take pains not to harm them.
  • 3:33 - 3:35
    We step over them on the sidewalk.
  • 3:35 - 3:37
    But whenever their presence
  • 3:37 - 3:39
    seriously conflicts with one of our goals,
  • 3:39 - 3:42
    let's say when constructing
    a building like this one,
  • 3:42 - 3:44
    we annihilate them without a qualm.
  • 3:44 - 3:47
    The concern is that we will
    one day build machines
  • 3:47 - 3:50
    that, whether they're conscious or not,
  • 3:50 - 3:52
    could treat us with similar disregard.
  • 3:54 - 3:57
    Now, I suspect this seems
    far-fetched to many of you.
  • 3:57 - 4:04
    I bet there are those of you who doubt
    that superintelligent AI is possible,
  • 4:04 - 4:05
    much less inevitable.
  • 4:05 - 4:09
    But then you must find something wrong
    with one of the following assumptions.
  • 4:09 - 4:11
    And there are only three of them.
  • 4:12 - 4:17
    Intelligence is a matter of information
    processing in physical systems.
  • 4:17 - 4:20
    Actually, this is a little bit more
    than an assumption.
  • 4:20 - 4:23
    We have already built
    narrow intelligence into our machines,
  • 4:23 - 4:25
    and many of these machines perform
  • 4:25 - 4:28
    at a level of superhuman
    intelligence already.
  • 4:29 - 4:31
    And we know that mere matter
  • 4:31 - 4:34
    can give rise to what is called
    "general intelligence,"
  • 4:34 - 4:38
    an ability to think flexibly
    across multiple domains,
  • 4:38 - 4:41
    because our brains have managed it. Right?
  • 4:41 - 4:45
    I mean, there's just atoms in here,
  • 4:45 - 4:49
    and as long as we continue
    to build systems of atoms
  • 4:49 - 4:52
    that display more and more
    intelligent behavior,
  • 4:52 - 4:55
    we will eventually,
    unless we are interrupted,
  • 4:55 - 4:58
    we will eventually
    build general intelligence
  • 4:58 - 4:59
    into our machines.
  • 4:59 - 5:03
    It's crucial to realize
    that the rate of progress doesn't matter,
  • 5:03 - 5:06
    because any progress
    is enough to get us into the end zone.
  • 5:06 - 5:10
    We don't need Moore's law to continue.
    We don't need exponential progress.
  • 5:10 - 5:12
    We just need to keep going.
  • 5:13 - 5:16
    The second assumption
    is that we will keep going.
  • 5:17 - 5:20
    We will continue to improve
    our intelligent machines.
  • 5:21 - 5:25
    And given the value of intelligence --
  • 5:25 - 5:29
    I mean, intelligence is either
    the source of everything we value
  • 5:29 - 5:32
    or we need it to safeguard
    everything we value.
  • 5:32 - 5:34
    It is our most valuable resource.
  • 5:34 - 5:36
    So we want to do this.
  • 5:36 - 5:39
    We have problems
    that we desperately need to solve.
  • 5:39 - 5:42
    We want to cure diseases
    like Alzheimer's and cancer.
  • 5:43 - 5:47
    We want to understand economic systems.
    We want to improve our climate science.
  • 5:47 - 5:49
    So we will do this, if we can.
  • 5:49 - 5:52
    The train is already out of the station,
    and there's no brake to pull.
  • 5:54 - 5:59
    Finally, we don't stand
    on a peak of intelligence,
  • 5:59 - 6:01
    or anywhere near it, likely.
  • 6:02 - 6:04
    And this really is the crucial insight.
  • 6:04 - 6:06
    This is what makes
    our situation so precarious,
  • 6:06 - 6:10
    and this is what makes our intuitions
    about risk so unreliable.
  • 6:11 - 6:14
    Now, just consider the smartest person
    who has ever lived.
  • 6:15 - 6:18
    On almost everyone's shortlist here
    is John von Neumann.
  • 6:18 - 6:21
    I mean, the impression that von Neumann
    made on the people around him,
  • 6:21 - 6:25
    and this included the greatest
    mathematicians and physicists of his time,
  • 6:26 - 6:27
    is fairly well-documented.
  • 6:27 - 6:31
    If only half the stories
    about him are half true,
  • 6:31 - 6:32
    there's no question
  • 6:33 - 6:35
    he's one of the smartest people
    who has ever lived.
  • 6:35 - 6:38
    So consider the spectrum of intelligence.
  • 6:38 - 6:40
    Here we have John von Neumann.
  • 6:42 - 6:43
    And then we have you and me.
  • 6:44 - 6:45
    And then we have a chicken.
  • 6:45 - 6:47
    (Laughter)
  • 6:47 - 6:49
    Sorry, a chicken.
  • 6:49 - 6:50
    (Laughter)
  • 6:50 - 6:54
    There's no reason for me to make this talk
    more depressing than it needs to be.
  • 6:54 - 6:55
    (Laughter)
  • 6:56 - 7:00
    It seems overwhelmingly likely, however,
    that the spectrum of intelligence
  • 7:00 - 7:03
    extends much further
    than we currently conceive,
  • 7:04 - 7:07
    and if we build machines
    that are more intelligent than we are,
  • 7:07 - 7:09
    they will very likely
    explore this spectrum
  • 7:09 - 7:11
    in ways that we can't imagine,
  • 7:11 - 7:14
    and exceed us in ways
    that we can't imagine.
  • 7:15 - 7:19
    And it's important to recognize that
    this is true by virtue of speed alone.
  • 7:19 - 7:24
    Right? So imagine if we just built
    a superintelligent AI
  • 7:24 - 7:28
    that was no smarter
    than your average team of researchers
  • 7:28 - 7:30
    at Stanford or MIT.
  • 7:30 - 7:33
    Well, electronic circuits
    function about a million times faster
  • 7:33 - 7:34
    than biochemical ones,
  • 7:35 - 7:38
    so this machine should think
    about a million times faster
  • 7:38 - 7:39
    than the minds that built it.
  • 7:40 - 7:41
    So you set it running for a week,
  • 7:41 - 7:46
    and it will perform 20,000 years
    of human-level intellectual work,
  • 7:46 - 7:48
    week after week after week.
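
    A rough check of the arithmetic behind this claim, taking the stated
    million-fold speed ratio as given:
    $1\ \text{week} \times 10^{6} = 10^{6}\ \text{weeks} \approx 10^{6}/52 \approx 19{,}000\ \text{years}$,
    i.e. on the order of 20,000 years of human-level intellectual work for
    each calendar week the machine runs.
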
  • 7:50 - 7:53
    How could we even understand,
    much less constrain,
  • 7:53 - 7:55
    a mind making this sort of progress?
  • 7:57 - 7:59
    The other thing that's worrying, frankly,
  • 7:59 - 8:04
    is that, imagine the best case scenario.
  • 8:04 - 8:08
    So imagine we hit upon a design
    of superintelligent AI
  • 8:08 - 8:10
    that has no safety concerns.
  • 8:10 - 8:13
    We have the perfect design
    the first time around.
  • 8:13 - 8:15
    It's as though we've been handed an oracle
  • 8:15 - 8:17
    that behaves exactly as intended.
  • 8:17 - 8:21
    Well, this machine would be
    the perfect labor-saving device.
  • 8:22 - 8:24
    It can design the machine
    that can build the machine
  • 8:24 - 8:26
    that can do any physical work,
  • 8:26 - 8:27
    powered by sunlight,
  • 8:27 - 8:30
    more or less for the cost
    of raw materials.
  • 8:30 - 8:33
    So we're talking about
    the end of human drudgery.
  • 8:33 - 8:36
    We're also talking about the end
    of most intellectual work.
  • 8:37 - 8:40
    So what would apes like ourselves
    do in this circumstance?
  • 8:40 - 8:44
    Well, we'd be free to play Frisbee
    and give each other massages.
  • 8:46 - 8:49
    Add some LSD and some
    questionable wardrobe choices,
  • 8:49 - 8:51
    and the whole world
    could be like Burning Man.
  • 8:51 - 8:53
    (Laughter)
  • 8:54 - 8:56
    Now, that might sound pretty good,
  • 8:57 - 9:00
    but ask yourself what would happen
  • 9:00 - 9:02
    under our current economic
    and political order?
  • 9:02 - 9:05
    It seems likely that we would witness
  • 9:05 - 9:09
    a level of wealth inequality
    and unemployment
  • 9:09 - 9:11
    that we have never seen before.
  • 9:11 - 9:13
    Absent a willingness
    to immediately put this new wealth
  • 9:13 - 9:15
    to the service of all humanity,
  • 9:16 - 9:19
    a few trillionaires could grace
    the covers of our business magazines
  • 9:19 - 9:22
    while the rest of the world
    would be free to starve.
  • 9:22 - 9:25
    And what would the Russians
    or the Chinese do
  • 9:25 - 9:27
    if they heard that some company
    in Silicon Valley
  • 9:27 - 9:30
    was about to deploy a superintelligent AI?
  • 9:30 - 9:33
    This machine would be capable
    of waging war,
  • 9:33 - 9:35
    whether terrestrial or cyber,
  • 9:35 - 9:37
    with unprecedented power.
  • 9:38 - 9:40
    This is a winner-take-all scenario.
  • 9:40 - 9:43
    To be six months ahead
    of the competition here
  • 9:43 - 9:46
    is to be 500,000 years ahead,
  • 9:46 - 9:47
    at a minimum.
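
    The same back-of-the-envelope arithmetic, again assuming the million-fold
    speed advantage mentioned earlier, gives
    $6\ \text{months} \times 10^{6} = 6 \times 10^{6}\ \text{months} = 500{,}000\ \text{years}$
    of equivalent head start.
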
  • 9:47 - 9:52
    So it seems that even mere rumors
    of this kind of breakthrough
  • 9:52 - 9:55
    could cause our species to go berserk.
  • 9:55 - 9:58
    Now, one of the most frightening things,
  • 9:58 - 10:00
    in my view, at this moment,
  • 10:00 - 10:05
    are the kinds of things
    that AI researchers say
  • 10:05 - 10:06
    when they want to be reassuring.
  • 10:07 - 10:10
    And the most common reason
    we're told not to worry is time.
  • 10:10 - 10:13
    This is all a long way off,
    don't you know.
  • 10:13 - 10:15
    This is probably 50 or 100 years away.
  • 10:16 - 10:17
    One researcher has said,
  • 10:17 - 10:19
    "Worrying about AI safety
  • 10:19 - 10:21
    is like worrying
    about overpopulation on Mars."
  • 10:22 - 10:24
    This is the Silicon Valley version
  • 10:24 - 10:26
    of "don't worry your
    pretty little head about it."
  • 10:26 - 10:27
    (Laughter)
  • 10:28 - 10:29
    No one seems to notice
  • 10:29 - 10:32
    that referencing the time horizon
  • 10:32 - 10:35
    is a total non sequitur.
  • 10:35 - 10:38
    If intelligence is just a matter
    of information processing,
  • 10:38 - 10:41
    and we continue to improve our machines,
  • 10:41 - 10:44
    we will produce
    some form of superintelligence.
  • 10:44 - 10:48
    And we have no idea
    how long it will take us
  • 10:48 - 10:50
    to create the conditions
    to do that safely.
  • 10:52 - 10:53
    Let me say that again.
  • 10:54 - 10:57
    We have no idea how long it will take us
  • 10:57 - 11:00
    to create the conditions
    to do that safely.
  • 11:01 - 11:04
    And if you haven't noticed,
    50 years is not what it used to be.
  • 11:04 - 11:07
    This is 50 years in months.
  • 11:07 - 11:09
    This is how long we've had the iPhone.
  • 11:09 - 11:12
    This is how long "The Simpsons"
    has been on television.
  • 11:13 - 11:15
    Fifty years is not that much time
  • 11:15 - 11:18
    to meet one of the greatest challenges
    our species will ever face.
  • 11:20 - 11:24
    Once again, we seem to be failing
    to have an appropriate emotional response
  • 11:24 - 11:26
    to what we have every reason
    to believe is coming.
  • 11:26 - 11:30
    The computer scientist Stuart Russell
    has a nice analogy here.
  • 11:30 - 11:35
    He said, imagine that we received
    a message from an alien civilization,
  • 11:35 - 11:37
    which read:
  • 11:37 - 11:39
    "People of Earth,
  • 11:39 - 11:41
    we will arrive on your planet in 50 years.
  • 11:42 - 11:43
    Get ready."
  • 11:43 - 11:48
    And now we're just counting down
    the months until the mothership lands?
  • 11:48 - 11:51
    We would feel a little
    more urgency than we do.
  • 11:53 - 11:55
    Another reason we're told not to worry
  • 11:55 - 11:58
    is that these machines
    can't help but share our values
  • 11:58 - 12:00
    because they will be literally
    extensions of ourselves.
  • 12:00 - 12:02
    They'll be grafted onto our brains,
  • 12:02 - 12:04
    and we'll essentially
    become their limbic systems.
  • 12:05 - 12:07
    Now take a moment to consider
  • 12:07 - 12:10
    that the safest
    and only prudent path forward,
  • 12:10 - 12:11
    recommended,
  • 12:11 - 12:14
    is to implant this technology
    directly into our brains.
  • 12:15 - 12:18
    Now, this may in fact be the safest
    and only prudent path forward,
  • 12:18 - 12:21
    but usually one's safety concerns
    about a technology
  • 12:21 - 12:25
    have to be pretty much worked out
    before you stick it inside your head.
  • 12:25 - 12:27
    (Laughter)
  • 12:27 - 12:32
    The deeper problem is that
    building superintelligent AI on its own
  • 12:32 - 12:34
    seems likely to be easier
  • 12:34 - 12:36
    than building superintelligent AI
  • 12:36 - 12:38
    and having the completed neuroscience
  • 12:38 - 12:40
    that allows us to seamlessly
    integrate our minds with it.
  • 12:41 - 12:44
    And given that the companies
    and governments doing this work
  • 12:44 - 12:48
    are likely to perceive themselves
    as being in a race against all others,
  • 12:48 - 12:51
    given that to win this race
    is to win the world,
  • 12:51 - 12:53
    provided you don't destroy it
    in the next moment,
  • 12:53 - 12:56
    then it seems likely
    that whatever is easier to do
  • 12:56 - 12:57
    will get done first.
  • 12:59 - 13:01
    Now, unfortunately,
    I don't have a solution to this problem,
  • 13:01 - 13:04
    apart from recommending
    that more of us think about it.
  • 13:04 - 13:06
    I think we need something
    like a Manhattan Project
  • 13:06 - 13:08
    on the topic of artificial intelligence.
  • 13:09 - 13:11
    Not to build it, because I think
    we'll inevitably do that,
  • 13:11 - 13:15
    but to understand
    how to avoid an arms race
  • 13:15 - 13:18
    and to build it in a way
    that is aligned with our interests.
  • 13:18 - 13:20
    When you're talking
    about superintelligent AI
  • 13:20 - 13:23
    that can make changes to itself,
  • 13:23 - 13:27
    it seems that we only have one chance
    to get the initial conditions right,
  • 13:27 - 13:29
    and even then we will need to absorb
  • 13:29 - 13:32
    the economic and political
    consequences of getting them right.
  • 13:34 - 13:36
    But the moment we admit
  • 13:36 - 13:40
    that information processing
    is the source of intelligence,
  • 13:41 - 13:46
    that some appropriate computational system
    is what the basis of intelligence is,
  • 13:46 - 13:50
    and we admit that we will improve
    these systems continuously,
  • 13:51 - 13:56
    and we admit that the horizon
    of cognition very likely far exceeds
  • 13:56 - 13:57
    what we currently know,
  • 13:58 - 13:59
    then we have to admit
  • 13:59 - 14:02
    that we are in the process
    of building some sort of god.
  • 14:03 - 14:05
    Now would be a good time
  • 14:05 - 14:07
    to make sure it's a god we can live with.
  • 14:08 - 14:10
    Thank you very much.
  • 14:10 - 14:15
    (Applause)
Title: Can we build AI without losing control over it?
Speaker: Sam Harris
Description: Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 14:27
