A new equation for intelligence

  • 0:01 - 0:05
    Intelligence -- what is it?
  • 0:05 - 0:07
    If we take a look back at the history
  • 0:07 - 0:09
    of how intelligence has been viewed,
  • 0:09 - 0:13
    one seminal example has been
  • 0:13 - 0:17
    Edsger Dijkstra's famous quote that
  • 0:17 - 0:20
    "the question of whether a machine can think
  • 0:20 - 0:21
    is about as interesting
  • 0:21 - 0:24
    as the question of whether a submarine
  • 0:24 - 0:26
    can swim."
  • 0:26 - 0:30
    Now, Edsger Dijkstra, when he wrote this,
  • 0:30 - 0:32
    intended it as a criticism
  • 0:32 - 0:35
    of the early pioneers of computer science,
  • 0:35 - 0:36
    like Alan Turing.
  • 0:36 - 0:39
    However, if you take a look back
  • 0:39 - 0:41
    and think about what have been
  • 0:41 - 0:43
    the most empowering innovations
  • 0:43 - 0:45
    that enabled us to build
  • 0:45 - 0:47
    artificial machines that swim
  • 0:47 - 0:50
    and artificial machines that [fly],
  • 0:50 - 0:53
    you find that it was only through understanding
  • 0:53 - 0:56
    the underlying physical mechanisms
  • 0:56 - 0:58
    of swimming and flight
  • 0:58 - 1:02
    that we were able to build these machines.
  • 1:02 - 1:04
    And so, several years ago,
  • 1:04 - 1:07
    I undertook a program to try to understand
  • 1:07 - 1:10
    the fundamental physical mechanisms
  • 1:10 - 1:13
    underlying intelligence.
  • 1:13 - 1:14
    Let's take a step back.
  • 1:14 - 1:18
    Let's first begin with a thought experiment.
  • 1:18 - 1:20
    Pretend that you're an alien race
  • 1:20 - 1:23
    that doesn't know anything about Earth biology
  • 1:23 - 1:27
    or Earth neuroscience or Earth intelligence,
  • 1:27 - 1:29
    but you have amazing telescopes
  • 1:29 - 1:31
    and you're able to watch the Earth,
  • 1:31 - 1:33
    and you have amazingly long lives,
  • 1:33 - 1:35
    so you're able to watch the Earth
  • 1:35 - 1:38
    over millions, even billions of years.
  • 1:38 - 1:41
    And you observe a really strange effect.
  • 1:41 - 1:46
    You observe that, over the course of the millennia,
  • 1:46 - 1:50
    Earth is continually bombarded with asteroids
  • 1:50 - 1:52
    up until a point,
  • 1:52 - 1:54
    and that at some point,
  • 1:54 - 1:58
    corresponding roughly to our year 2000 AD,
  • 1:58 - 2:00
    asteroids that are on
  • 2:00 - 2:01
    a collision course with the Earth
  • 2:01 - 2:03
    that otherwise would have collided
  • 2:03 - 2:06
    mysteriously get deflected
  • 2:06 - 2:09
    or they detonate before they can hit the Earth.
  • 2:09 - 2:11
    Now of course, as earthlings,
  • 2:11 - 2:13
    we know the reason would be
  • 2:13 - 2:14
    that we're trying to save ourselves.
  • 2:14 - 2:17
    We're trying to prevent an impact.
  • 2:17 - 2:19
    But if you're an alien race
  • 2:19 - 2:20
    who doesn't know any of this,
  • 2:20 - 2:23
    doesn't have any concept of Earth intelligence,
  • 2:23 - 2:25
    you'd be forced to put together
  • 2:25 - 2:27
    a physical theory that explains how,
  • 2:27 - 2:30
    up until a certain point in time,
  • 2:30 - 2:34
    asteroids that would demolish the surface of a planet
  • 2:34 - 2:38
    mysteriously stop doing that.
  • 2:38 - 2:42
    And so I claim that this is the same question
  • 2:42 - 2:46
    as understanding the physical nature of intelligence.
  • 2:46 - 2:50
    So in this program that I
    undertook several years ago,
  • 2:50 - 2:52
    I looked at a variety of different threads
  • 2:52 - 2:56
    across science, across a variety of disciplines,
  • 2:56 - 2:58
    that were pointing, I think,
  • 2:58 - 3:00
    towards a single, underlying mechanism
  • 3:00 - 3:02
    for intelligence.
  • 3:02 - 3:04
    In cosmology, for example,
  • 3:04 - 3:07
    there have been a variety of
    different threads of evidence
  • 3:07 - 3:10
    that our universe appears to be finely tuned
  • 3:10 - 3:13
    for the development of intelligence,
  • 3:13 - 3:15
    and, in particular, for the development
  • 3:15 - 3:17
    of universal states
  • 3:17 - 3:21
    that maximize the diversity of possible futures.
  • 3:21 - 3:23
    In game play, for example, in Go --
  • 3:23 - 3:26
    everyone remembers in 1997
  • 3:26 - 3:30
    when IBM's Deep Blue beat
    Garry Kasparov at chess --
  • 3:30 - 3:32
    fewer people are aware
  • 3:32 - 3:34
    that in the past 10 years or so,
  • 3:34 - 3:35
    the game of Go,
  • 3:35 - 3:37
    arguably a much more challenging game
  • 3:37 - 3:39
    because it has a much higher branching factor,
  • 3:39 - 3:41
    has also started to succumb
  • 3:41 - 3:43
    to computer game players
  • 3:43 - 3:44
    for the same reason:
  • 3:44 - 3:47
    the best techniques right now
    for computers playing Go
  • 3:47 - 3:51
    are techniques that try to maximize future options
  • 3:51 - 3:53
    during game play.
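
(For the curious: the talk doesn't say which techniques these are, but the "maximize future options" flavor is easy to demonstrate. Below is a toy Python sketch of the idea on a stand-in grid game; real Go programs use Monte Carlo tree search, and the game, the scorer, and all parameters here are illustrative assumptions, not any engine's actual method.)

```python
# Hedged sketch: pick the move that keeps the most futures reachable.
# The 9x9 "pawn walk" game and the sampling scorer are illustrative
# stand-ins, not a Go engine.
import random

SIZE = 9  # toy board

def legal_moves(pos):
    x, y = pos
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(x + dx, y + dy) for dx, dy in steps
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def future_options(pos, depth=4, samples=200):
    """Score a position by how many distinct states random play reaches."""
    seen = set()
    for _ in range(samples):
        p = pos
        for _ in range(depth):
            p = random.choice(legal_moves(p))
        seen.add(p)
    return len(seen)

def best_move(pos):
    return max(legal_moves(pos), key=future_options)

print(best_move((0, 0)))  # prefers moving out of the corner
```

From the corner square, the scorer favors moves toward open space, simply because more distinct positions are reachable from there.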
  • 3:53 - 3:57
    Finally, in robotic motion planning,
  • 3:57 - 3:59
    there have been a variety of recent techniques
  • 3:59 - 4:01
    that have tried to take advantage
  • 4:01 - 4:04
    of abilities of robots to maximize
  • 4:04 - 4:05
    future freedom of action
  • 4:05 - 4:08
    in order to accomplish complex tasks.
  • 4:08 - 4:11
    And so, taking all of these different threads
  • 4:11 - 4:12
    and putting them together,
  • 4:12 - 4:15
    I asked, starting several years ago,
  • 4:15 - 4:18
    is there an underlying mechanism for intelligence
  • 4:18 - 4:20
    that we can factor out
  • 4:20 - 4:21
    of all of these different threads?
  • 4:21 - 4:26
    Is there a single equation for intelligence?
  • 4:26 - 4:29
    And the answer, I believe, is yes.
    ["F = T ∇ S_τ"]
  • 4:29 - 4:31
    What you're seeing is probably
  • 4:31 - 4:34
    the closest equivalent to an E = mc²
  • 4:34 - 4:37
    for intelligence that I've seen.
  • 4:37 - 4:39
    So what you're seeing here
  • 4:39 - 4:42
    is a statement of correspondence
  • 4:42 - 4:46
    that intelligence is a force, F,
  • 4:46 - 4:51
    that acts so as to maximize future freedom of action.
  • 4:51 - 4:53
    It acts to maximize future freedom of action,
  • 4:53 - 4:55
    or keep options open,
  • 4:55 - 4:57
    with some strength T,
  • 4:57 - 5:02
    with the diversity of possible accessible futures, S,
  • 5:02 - 5:04
    up to some future time horizon, tau.
  • 5:04 - 5:08
    In short, intelligence doesn't like to get trapped.
  • 5:08 - 5:11
    Intelligence tries to maximize
    future freedom of action
  • 5:11 - 5:13
    and keep options open.
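
(A note on notation: one explicit way to write the on-screen equation, following the form used in Wissner-Gross and Freer's paper on causal entropic forces, is sketched below. The gradient-over-states reading and the symbol X_0 for the present state are my gloss, not something spelled out in the talk.)

```latex
% The on-screen equation written out: intelligence is modeled as a force F
% on the present state X_0 that points up the gradient of S, the entropy
% (diversity) of the futures accessible within the horizon \tau, with
% strength T.
F = T \, \nabla_{X} \, S(X, \tau) \Big|_{X = X_0}
```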
  • 5:13 - 5:16
    And so, given this one equation,
  • 5:16 - 5:18
    it's natural to ask, so what can you do with this?
  • 5:18 - 5:20
    How predictive is it?
  • 5:20 - 5:22
    Does it predict human-level intelligence?
  • 5:22 - 5:25
    Does it predict artificial intelligence?
  • 5:25 - 5:27
    So I'm going to show you now a video
  • 5:27 - 5:30
    that will, I think, demonstrate
  • 5:30 - 5:32
    some of the amazing applications
  • 5:32 - 5:35
    of just this single equation.
  • 5:35 - 5:37
    (Video) Narrator: Recent research in cosmology
  • 5:37 - 5:39
    has suggested that universes that produce
  • 5:39 - 5:42
    more disorder, or "entropy," over their lifetimes
  • 5:42 - 5:45
    should tend to have more favorable conditions
  • 5:45 - 5:48
    for the existence of intelligent
    beings such as ourselves.
  • 5:48 - 5:50
    But what if that tentative cosmological connection
  • 5:50 - 5:52
    between entropy and intelligence
  • 5:52 - 5:54
    hints at a deeper relationship?
  • 5:54 - 5:56
    What if intelligent behavior doesn't just correlate
  • 5:56 - 5:58
    with the production of long-term entropy,
  • 5:58 - 6:01
    but actually emerges directly from it?
  • 6:01 - 6:03
    To find out, we developed a software engine
  • 6:03 - 6:05
    called Entropica, designed to maximize
  • 6:05 - 6:07
    the production of long-term entropy
  • 6:07 - 6:10
    of any system that it finds itself in.
  • 6:10 - 6:12
    Amazingly, Entropica was able to pass
  • 6:12 - 6:15
    multiple animal intelligence
    tests, play human games,
  • 6:15 - 6:18
    and even earn money trading stocks,
  • 6:18 - 6:20
    all without being instructed to do so.
  • 6:20 - 6:22
    Here are some examples of Entropica in action.
  • 6:22 - 6:25
    Just like a human standing
    upright without falling over,
  • 6:25 - 6:27
    here we see Entropica
  • 6:27 - 6:29
    automatically balancing a pole using a cart.
  • 6:29 - 6:31
    This behavior is remarkable in part
  • 6:31 - 6:34
    because we never gave Entropica a goal.
  • 6:34 - 6:37
    It simply decided on its own to balance the pole.
  • 6:37 - 6:39
    This balancing ability will have applications
  • 6:39 - 6:41
    for humanoid robotics
  • 6:41 - 6:43
    and human assistive technologies.
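
(To make that concrete, here is a minimal Python sketch of pole balancing driven only by "keeping futures open": each candidate push is scored by how many random rollouts survive from the state it leads to. The dynamics constants and the survival fraction used as an entropy proxy are my assumptions; the talk does not publish Entropica's actual algorithm.)

```python
# Minimal sketch: cart-pole control by keeping options open.
# Constants follow the textbook cart-pole task; the open_futures()
# proxy for "diversity of futures" is an illustrative assumption.
import math
import random

GRAVITY, CART_M, POLE_M, POLE_L, DT = 9.8, 1.0, 0.1, 0.5, 0.02

def step(state, force):
    """One Euler step of the standard cart-pole dynamics."""
    x, x_dot, theta, theta_dot = state
    total_m = CART_M + POLE_M
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    tmp = (force + POLE_M * POLE_L * theta_dot ** 2 * sin_t) / total_m
    theta_acc = (GRAVITY * sin_t - cos_t * tmp) / (
        POLE_L * (4.0 / 3.0 - POLE_M * cos_t ** 2 / total_m))
    x_acc = tmp - POLE_M * POLE_L * theta_acc * cos_t / total_m
    return (x + DT * x_dot, x_dot + DT * x_acc,
            theta + DT * theta_dot, theta_dot + DT * theta_acc)

def alive(state):
    """A future stays 'open' while the pole is up and the cart in bounds."""
    x, _, theta, _ = state
    return abs(x) < 2.4 and abs(theta) < 0.21

def open_futures(state, horizon=50, samples=64):
    """Fraction of random-action rollouts surviving the whole horizon,
    a crude stand-in for the diversity of accessible futures."""
    survived = 0
    for _ in range(samples):
        s = state
        for _ in range(horizon):
            s = step(s, random.choice((-10.0, 10.0)))
            if not alive(s):
                break
        else:
            survived += 1
    return survived / samples

def choose_force(state):
    """Pick the push whose successor state keeps the most futures open."""
    return max((-10.0, 10.0), key=lambda f: open_futures(step(state, f)))

state = (0.0, 0.0, 0.05, 0.0)  # start with the pole slightly tilted
for t in range(200):
    state = step(state, choose_force(state))
    if not alive(state):
        print(f"fell at step {t}")
        break
else:
    print("balanced for 200 steps")
```

Nothing in the scorer mentions balancing; staying upright is simply the condition that leaves the most futures open, which is the talk's point.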
  • 6:43 - 6:45
    Just as some animals can use objects
  • 6:45 - 6:46
    in their environments as tools
  • 6:46 - 6:48
    to reach into narrow spaces,
  • 6:48 - 6:50
    here we see that Entropica,
  • 6:50 - 6:52
    again on its own initiative,
  • 6:52 - 6:55
    was able to move a large
    disk representing an animal
  • 6:55 - 6:57
    around so as to cause a small disk,
  • 6:57 - 7:00
    representing a tool, to reach into a confined space
  • 7:00 - 7:02
    holding a third disk
  • 7:02 - 7:05
    and release the third disk
    from its initially fixed position.
  • 7:05 - 7:07
    This tool use ability will have applications
  • 7:07 - 7:09
    for smart manufacturing and agriculture.
  • 7:09 - 7:11
    In addition, just as some other animals
  • 7:11 - 7:14
    are able to cooperate by pulling
    opposite ends of a rope
  • 7:14 - 7:16
    at the same time to release food,
  • 7:16 - 7:18
    here we see that Entropica is able to accomplish
  • 7:18 - 7:20
    a model version of that task.
  • 7:20 - 7:23
    This cooperative ability has interesting implications
  • 7:23 - 7:26
    for economic planning and a variety of other fields.
  • 7:26 - 7:28
    Entropica is broadly applicable
  • 7:28 - 7:30
    to a variety of domains.
  • 7:30 - 7:33
    For example, here we see it successfully
  • 7:33 - 7:35
    playing a game of Pong against itself,
  • 7:35 - 7:38
    illustrating its potential for gaming.
  • 7:38 - 7:39
    Here we see Entropica orchestrating
  • 7:39 - 7:41
    new connections on a social network
  • 7:41 - 7:44
    where friends are constantly falling out of touch
  • 7:44 - 7:47
    and successfully keeping
    the network well connected.
  • 7:47 - 7:49
    This same network orchestration ability
  • 7:49 - 7:52
    also has applications in health care,
  • 7:52 - 7:55
    energy, and intelligence.
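
(Here is a small, self-contained Python sketch of that kind of network maintenance: ties lapse at random, and each round the agent adds back the single edge that maximizes how many node pairs remain connected. The random-graph model and the connectedness score are illustrative assumptions, not Entropica's method.)

```python
# Toy sketch: keep a decaying social network well connected by always
# adding the edge that preserves the most connected node pairs.
import itertools
import random

N = 12
edges = {frozenset(e) for e in itertools.combinations(range(N), 2)
         if random.random() < 0.2}

def connected_pairs(es):
    """Count node pairs joined by some path (simple union-find)."""
    parent = list(range(N))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in (tuple(e) for e in es):
        parent[find(a)] = find(b)
    sizes = {}
    for i in range(N):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sum(s * (s - 1) // 2 for s in sizes.values())

for _ in range(20):
    if edges:
        edges.discard(random.choice(list(edges)))   # a friendship lapses
    candidates = [frozenset(e) for e in itertools.combinations(range(N), 2)
                  if frozenset(e) not in edges]
    best = max(candidates, key=lambda e: connected_pairs(edges | {e}))
    edges.add(best)                                 # orchestrate a new tie

print(f"connected pairs: {connected_pairs(edges)} of {N * (N - 1) // 2}")
```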
  • 7:55 - 7:57
    Here we see Entropica directing the paths
  • 7:57 - 7:58
    of a fleet of ships,
  • 7:58 - 8:02
    successfully discovering and
    utilizing the Panama Canal
  • 8:02 - 8:04
    to globally extend its reach from the Atlantic
  • 8:04 - 8:06
    to the Pacific.
  • 8:06 - 8:07
    By the same token, Entropica
  • 8:07 - 8:09
    is broadly applicable to problems
  • 8:09 - 8:14
    in autonomous defense, logistics and transportation.
  • 8:14 - 8:16
    Finally, here we see Entropica
  • 8:16 - 8:19
    spontaneously discovering and executing
  • 8:19 - 8:21
    a buy-low, sell-high strategy
  • 8:21 - 8:23
    on a simulated range-traded stock,
  • 8:23 - 8:26
    successfully growing assets under management
  • 8:26 - 8:27
    exponentially.
  • 8:27 - 8:28
    This risk management ability
  • 8:28 - 8:31
    will have broad applications in finance
  • 8:31 - 8:34
    and insurance.
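
(A toy version of that demo: on a simulated range-bound, mean-reverting price, a simple buy-below / sell-above rule grows assets. The Ornstein-Uhlenbeck-style price model and the thresholds below are my assumptions; the talk does not describe Entropica's trading rule.)

```python
# Toy sketch: buy low, sell high on a simulated range-traded stock.
# The mean-reverting price model and thresholds are illustrative
# assumptions, not the strategy Entropica discovered.
import random

def simulate(days=1000, mean=100.0, reversion=0.1, noise=2.0):
    price, cash, shares = mean, 1000.0, 0
    for _ in range(days):
        # mean reversion keeps the price oscillating in a trading range
        price += reversion * (mean - price) + random.gauss(0, noise)
        if price < mean - noise and cash >= price:    # buy low
            qty = int(cash // price)
            shares += qty
            cash -= qty * price
        elif price > mean + noise and shares > 0:     # sell high
            cash += shares * price
            shares = 0
    return cash + shares * price

print(f"final assets from 1000.0: {simulate():.2f}")  # typically grows
```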
  • 8:34 - 8:36
    Alex Wissner-Gross: So what you've just seen
  • 8:36 - 8:41
    is that a variety of signature human intelligent
  • 8:41 - 8:42
    cognitive behaviors
  • 8:42 - 8:45
    such as tool use and walking upright
  • 8:45 - 8:47
    and social cooperation
  • 8:47 - 8:50
    all follow from a single equation,
  • 8:50 - 8:52
    which drives a system
  • 8:52 - 8:56
    to maximize its future freedom of action.
  • 8:56 - 8:59
    Now, there's a profound irony here.
  • 8:59 - 9:01
    Going back to the beginning
  • 9:01 - 9:04
    of the usage of the term robot,
  • 9:04 - 9:07
    the play "R.U.R.,"
  • 9:07 - 9:09
    there was always a concept
  • 9:09 - 9:13
    that if we developed machine intelligence,
  • 9:13 - 9:16
    there would be a cybernetic revolt.
  • 9:16 - 9:19
    The machines would rise up against us.
  • 9:19 - 9:22
    One major consequence of this work
  • 9:22 - 9:24
    is that maybe all of these decades,
  • 9:24 - 9:27
    we've had the whole concept of cybernetic revolt
  • 9:27 - 9:29
    in reverse.
  • 9:29 - 9:33
    It's not that machines first become intelligent
  • 9:33 - 9:35
    and then megalomaniacal
  • 9:35 - 9:37
    and try to take over the world.
  • 9:37 - 9:38
    It's quite the opposite,
  • 9:38 - 9:41
    that the urge to take control
  • 9:41 - 9:43
    of all possible futures
  • 9:43 - 9:46
    is a more fundamental principle
  • 9:46 - 9:47
    than that of intelligence,
  • 9:47 - 9:51
    that general intelligence may in fact emerge
  • 9:51 - 9:54
    directly from this sort of control-grabbing,
  • 9:54 - 9:58
    rather than vice versa.
  • 9:58 - 10:02
    Another important consequence is goal seeking.
  • 10:02 - 10:06
    I'm often asked, how does the ability to seek goals
  • 10:06 - 10:08
    follow from this sort of framework?
  • 10:08 - 10:11
    And the answer is, the ability to seek goals
  • 10:11 - 10:13
    will follow directly from this
  • 10:13 - 10:15
    in the following sense:
  • 10:15 - 10:18
    just like you would travel through a tunnel,
  • 10:18 - 10:20
    a bottleneck in your future path space,
  • 10:20 - 10:22
    in order to achieve many other
  • 10:22 - 10:24
    diverse objectives later on,
  • 10:24 - 10:26
    or just like you would invest
  • 10:26 - 10:28
    in a financial security,
  • 10:28 - 10:30
    reducing your short-term liquidity
  • 10:30 - 10:33
    in order to increase your wealth over the long term,
  • 10:33 - 10:35
    goal seeking emerges directly
  • 10:35 - 10:37
    from a long-term drive
  • 10:37 - 10:41
    to increase future freedom of action.
  • 10:41 - 10:45
    Finally, Richard Feynman, famous physicist,
  • 10:45 - 10:48
    once wrote that if human civilization were destroyed
  • 10:48 - 10:50
    and you could pass only a single concept
  • 10:50 - 10:51
    on to our descendants
  • 10:51 - 10:54
    to help them rebuild civilization,
  • 10:54 - 10:55
    that concept should be
  • 10:55 - 10:57
    that all matter around us
  • 10:57 - 11:00
    is made out of tiny elements
  • 11:00 - 11:02
    that attract each other when they're far apart
  • 11:02 - 11:05
    but repel each other when they're close together.
  • 11:05 - 11:07
    My equivalent of that statement
  • 11:07 - 11:09
    to pass on to descendants
  • 11:09 - 11:11
    to help them build artificial intelligences
  • 11:11 - 11:14
    or to help them understand human intelligence,
  • 11:14 - 11:15
    is the following:
  • 11:15 - 11:17
    Intelligence should be viewed
  • 11:17 - 11:19
    as a physical process
  • 11:19 - 11:22
    that tries to maximize future freedom of action
  • 11:22 - 11:25
    and avoid constraints in its own future.
  • 11:25 - 11:27
    Thank you very much.
  • 11:27 - 11:31
    (Applause)
Title:
A new equation for intelligence
Speaker:
Alex Wissner-Gross
Description:

Is there an equation for intelligence? Yes. It’s F = T ∇ S_τ. In a fascinating and informative talk, physicist and computer scientist Alex Wissner-Gross explains what in the world that means. (Filmed at TEDxBeaconStreet.)

Video Language:
English
Team:
TED
Project:
TEDTalks
Duration:
11:48
