A new equation for intelligence
-
0:01 - 0:05Intelligence -- what is it?
-
0:05 - 0:07If we take a look back at the history
-
0:07 - 0:09of how intelligence has been viewed,
-
0:09 - 0:13one seminal example has been
-
0:13 - 0:17Edsger Dijkstra's famous quote that
-
0:17 - 0:20"the question of whether a machine can think
-
0:20 - 0:21is about as interesting
-
0:21 - 0:24as the question of whether a submarine
-
0:24 - 0:26can swim."
-
0:26 - 0:30Now, Edsger Dijkstra, when he wrote this,
-
0:30 - 0:32intended it as a criticism
-
0:32 - 0:35of the early pioneers of computer science,
-
0:35 - 0:36like Alan Turing.
-
0:36 - 0:39However, if you take a look back
-
0:39 - 0:41and think about what have been
-
0:41 - 0:43the most empowering innovations
-
0:43 - 0:45that enabled us to build
-
0:45 - 0:47artificial machines that swim
-
0:47 - 0:50and artificial machines that [fly],
-
0:50 - 0:53you find that it was only through understanding
-
0:53 - 0:56the underlying physical mechanisms
-
0:56 - 0:58of swimming and flight
-
0:58 - 1:02that we were able to build these machines.
-
1:02 - 1:04And so, several years ago,
-
1:04 - 1:07I undertook a program to try to understand
-
1:07 - 1:10the fundamental physical mechanisms
-
1:10 - 1:13underlying intelligence.
-
1:13 - 1:14Let's take a step back.
-
1:14 - 1:18Let's first begin with a thought experiment.
-
1:18 - 1:20Pretend that you're an alien race
-
1:20 - 1:23that doesn't know anything about Earth biology
-
1:23 - 1:27or Earth neuroscience or Earth intelligence,
-
1:27 - 1:29but you have amazing telescopes
-
1:29 - 1:31and you're able to watch the Earth,
-
1:31 - 1:33and you have amazingly long lives,
-
1:33 - 1:35so you're able to watch the Earth
-
1:35 - 1:38over millions, even billions of years.
-
1:38 - 1:41And you observe a really strange effect.
-
1:41 - 1:46You observe that, over the course of the millennia,
-
1:46 - 1:50Earth is continually bombarded with asteroids
-
1:50 - 1:52up until a point,
-
1:52 - 1:54and that at some point,
-
1:54 - 1:58corresponding roughly to our year 2000 AD,
-
1:58 - 2:00asteroids that are on
-
2:00 - 2:01a collision course with the Earth
-
2:01 - 2:03that otherwise would have collided
-
2:03 - 2:06mysteriously get deflected
-
2:06 - 2:09or they detonate before they can hit the Earth.
-
2:09 - 2:11Now of course, as earthlings,
-
2:11 - 2:13we know the reason would be
-
2:13 - 2:14that we're trying to save ourselves.
-
2:14 - 2:17We're trying to prevent an impact.
-
2:17 - 2:19But if you're an alien race
-
2:19 - 2:20who doesn't know any of this,
-
2:20 - 2:23doesn't have any concept of Earth intelligence,
-
2:23 - 2:25you'd be forced to put together
-
2:25 - 2:27a physical theory that explains how,
-
2:27 - 2:30up until a certain point in time,
-
2:30 - 2:34asteroids that would demolish the surface of a planet
-
2:34 - 2:38mysteriously stop doing that.
-
2:38 - 2:42And so I claim that this is the same question
-
2:42 - 2:46as understanding the physical nature of intelligence.
-
2:46 - 2:50So in this program that I undertook several years ago,
-
2:50 - 2:52I looked at a variety of different threads
-
2:52 - 2:56across science, across a variety of disciplines,
-
2:56 - 2:58that were pointing, I think,
-
2:58 - 3:00towards a single, underlying mechanism
-
3:00 - 3:02for intelligence.
-
3:02 - 3:04In cosmology, for example,
-
3:04 - 3:07there have been a variety of different threads of evidence
-
3:07 - 3:10that our universe appears to be finely tuned
-
3:10 - 3:13for the development of intelligence,
-
3:13 - 3:15and, in particular, for the development
-
3:15 - 3:17of universal states
-
3:17 - 3:21that maximize the diversity of possible futures.
-
3:21 - 3:23In game play, for example, in Go --
-
3:23 - 3:26everyone remembers in 1997
-
3:26 - 3:30when IBM's Deep Blue beat Garry Kasparov at chess --
-
3:30 - 3:32fewer people are aware
-
3:32 - 3:34that in the past 10 years or so,
-
3:34 - 3:35the game of Go,
-
3:35 - 3:37arguably a much more challenging game
-
3:37 - 3:39because it has a much higher branching factor,
-
3:39 - 3:41has also started to succumb
-
3:41 - 3:43to computer game players
-
3:43 - 3:44for the same reason:
-
3:44 - 3:47the best techniques right now for computers playing Go
-
3:47 - 3:51are techniques that try to maximize future options
-
3:51 - 3:53during game play.
-
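The "maximize future options" idea behind these Go programs can be sketched in miniature: pick the move whose resulting position leaves the most distinct continuations reachable by random play. The snippet below is an illustrative toy under stated assumptions, not any real Go engine; the hypothetical game is a walk on a number line where position 0 is a trap with no legal moves.

```python
import random

def reachable_states(state, legal_moves, apply_move, depth, samples=50):
    """Estimate the diversity of futures from `state` by random rollouts:
    count the distinct states reachable within `depth` random plies."""
    seen = set()
    for _ in range(samples):
        s = state
        for _ in range(depth):
            moves = legal_moves(s)
            if not moves:
                break  # dead end: no options left
            s = apply_move(s, random.choice(moves))
        seen.add(s)
    return len(seen)

def pick_move(state, legal_moves, apply_move, depth=4):
    """Choose the move whose resulting position keeps the most options open."""
    return max(
        legal_moves(state),
        key=lambda m: reachable_states(apply_move(state, m),
                                       legal_moves, apply_move, depth),
    )

# Toy game: positions 0..10 on a line; 0 is a trap with no moves out.
def legal_moves(pos):
    if pos == 0:
        return []
    return [d for d in (-1, +1) if 0 <= pos + d <= 10]

def apply_move(pos, d):
    return pos + d

random.seed(0)  # reproducible rollouts
print(pick_move(1, legal_moves, apply_move))  # steps away from the trap at 0
```

From position 1 the agent moves right, because the position behind it (the trap) has exactly one reachable future, while the open line ahead has many. Real Go programs use far more sophisticated rollout statistics (Monte Carlo tree search), but the option-counting instinct is the same.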
3:53 - 3:57Finally, in robotic motion planning,
-
3:57 - 3:59there have been a variety of recent techniques
-
3:59 - 4:01that have tried to take advantage
-
4:01 - 4:04of abilities of robots to maximize
-
4:04 - 4:05future freedom of action
-
4:05 - 4:08in order to accomplish complex tasks.
-
4:08 - 4:11And so, taking all of these different threads
-
4:11 - 4:12and putting them together,
-
4:12 - 4:15I asked, starting several years ago,
-
4:15 - 4:18is there an underlying mechanism for intelligence
-
4:18 - 4:20that we can factor out
-
4:20 - 4:21of all of these different threads?
-
4:21 - 4:26Is there a single equation for intelligence?
-
4:26 - 4:29And the answer, I believe, is yes.
["F = T ∇ Sτ"]
-
4:29 - 4:31What you're seeing is probably
-
4:31 - 4:34the closest equivalent to an E = mc²
-
4:34 - 4:37for intelligence that I've seen.
-
4:37 - 4:39So what you're seeing here
-
4:39 - 4:42is a statement of correspondence
-
4:42 - 4:46that intelligence is a force, F,
-
4:46 - 4:51that acts so as to maximize future freedom of action.
-
4:51 - 4:53It acts to maximize future freedom of action,
-
4:53 - 4:55or keep options open,
-
4:55 - 4:57with some strength T,
-
4:57 - 5:02with the diversity of possible accessible futures, S,
-
5:02 - 5:04up to some future time horizon, tau.
-
5:04 - 5:08In short, intelligence doesn't like to get trapped.
-
5:08 - 5:11Intelligence tries to maximize future freedom of action
-
5:11 - 5:13and keep options open.
-
5:13 - 5:16And so, given this one equation,
-
5:16 - 5:18it's natural to ask, so what can you do with this?
-
5:18 - 5:20How predictive is it?
-
5:20 - 5:22Does it predict human-level intelligence?
-
5:22 - 5:25Does it predict artificial intelligence?
-
5:25 - 5:27So I'm going to show you now a video
-
5:27 - 5:30that will, I think, demonstrate
-
5:30 - 5:32some of the amazing applications
-
5:32 - 5:35of just this single equation.
-
5:35 - 5:37(Video) Narrator: Recent research in cosmology
-
5:37 - 5:39has suggested that universes that produce
-
5:39 - 5:42more disorder, or "entropy," over their lifetimes
-
5:42 - 5:45should tend to have more favorable conditions
-
5:45 - 5:48for the existence of intelligent beings such as ourselves.
-
5:48 - 5:50But what if that tentative cosmological connection
-
5:50 - 5:52between entropy and intelligence
-
5:52 - 5:54hints at a deeper relationship?
-
5:54 - 5:56What if intelligent behavior doesn't just correlate
-
5:56 - 5:58with the production of long-term entropy,
-
5:58 - 6:01but actually emerges directly from it?
-
6:01 - 6:03To find out, we developed a software engine
-
6:03 - 6:05called Entropica, designed to maximize
-
6:05 - 6:07the production of long-term entropy
-
6:07 - 6:10of any system that it finds itself in.
-
6:10 - 6:12Amazingly, Entropica was able to pass
-
6:12 - 6:15multiple animal intelligence tests, play human games,
-
6:15 - 6:18and even earn money trading stocks,
-
6:18 - 6:20all without being instructed to do so.
-
6:20 - 6:22Here are some examples of Entropica in action.
-
6:22 - 6:25Just like a human standing upright without falling over,
-
6:25 - 6:27here we see Entropica
-
6:27 - 6:29automatically balancing a pole using a cart.
-
6:29 - 6:31This behavior is remarkable in part
-
6:31 - 6:34because we never gave Entropica a goal.
-
6:34 - 6:37It simply decided on its own to balance the pole.
-
6:37 - 6:39This balancing ability will have applications
-
6:39 - 6:41for humanoid robotics
-
6:41 - 6:43and human assistive technologies.
-
6:43 - 6:45Just as some animals can use objects
-
6:45 - 6:46in their environments as tools
-
6:46 - 6:48to reach into narrow spaces,
-
6:48 - 6:50here we see that Entropica,
-
6:50 - 6:52again on its own initiative,
-
6:52 - 6:55was able to move a large disk representing an animal
-
6:55 - 6:57around so as to cause a small disk,
-
6:57 - 7:00representing a tool, to reach into a confined space
-
7:00 - 7:02holding a third disk
-
7:02 - 7:05and release the third disk from its initially fixed position.
-
7:05 - 7:07This tool use ability will have applications
-
7:07 - 7:09for smart manufacturing and agriculture.
-
7:09 - 7:11In addition, just as some other animals
-
7:11 - 7:14are able to cooperate by pulling opposite ends of a rope
-
7:14 - 7:16at the same time to release food,
-
7:16 - 7:18here we see that Entropica is able to accomplish
-
7:18 - 7:20a model version of that task.
-
7:20 - 7:23This cooperative ability has interesting implications
-
7:23 - 7:26for economic planning and a variety of other fields.
-
7:26 - 7:28Entropica is broadly applicable
-
7:28 - 7:30to a variety of domains.
-
7:30 - 7:33For example, here we see it successfully
-
7:33 - 7:35playing a game of Pong against itself,
-
7:35 - 7:38illustrating its potential for gaming.
-
7:38 - 7:39Here we see Entropica orchestrating
-
7:39 - 7:41new connections on a social network
-
7:41 - 7:44where friends are constantly falling out of touch
-
7:44 - 7:47and successfully keeping the network well connected.
-
7:47 - 7:49This same network orchestration ability
-
7:49 - 7:52also has applications in health care,
-
7:52 - 7:55energy, and intelligence.
-
7:55 - 7:57Here we see Entropica directing the paths
-
7:57 - 7:58of a fleet of ships,
-
7:58 - 8:02successfully discovering and utilizing the Panama Canal
-
8:02 - 8:04to globally extend its reach from the Atlantic
-
8:04 - 8:06to the Pacific.
-
8:06 - 8:07By the same token, Entropica
-
8:07 - 8:09is broadly applicable to problems
-
8:09 - 8:14in autonomous defense, logistics and transportation.
-
8:14 - 8:16Finally, here we see Entropica
-
8:16 - 8:19spontaneously discovering and executing
-
8:19 - 8:21a buy-low, sell-high strategy
-
8:21 - 8:23on a simulated range-traded stock,
-
8:23 - 8:26successfully growing assets under management
-
8:26 - 8:27exponentially.
-
8:27 - 8:28This risk management ability
-
8:28 - 8:31will have broad applications in finance
-
8:31 - 8:34and insurance.
-
8:34 - 8:36Alex Wissner-Gross: So what you've just seen
-
8:36 - 8:41is that a variety of signature human intelligent
-
8:41 - 8:42cognitive behaviors
-
8:42 - 8:45such as tool use and walking upright
-
8:45 - 8:47and social cooperation
-
8:47 - 8:50all follow from a single equation,
-
8:50 - 8:52which drives a system
-
8:52 - 8:56to maximize its future freedom of action.
-
8:56 - 8:59Now, there's a profound irony here.
-
8:59 - 9:01Going back to the beginning
-
9:01 - 9:04of the usage of the term robot,
-
9:04 - 9:07the play "R.U.R.,"
-
9:07 - 9:09there was always a concept
-
9:09 - 9:13that if we developed machine intelligence,
-
9:13 - 9:16there would be a cybernetic revolt.
-
9:16 - 9:19The machines would rise up against us.
-
9:19 - 9:22One major consequence of this work
-
9:22 - 9:24is that maybe all of these decades,
-
9:24 - 9:27we've had the whole concept of cybernetic revolt
-
9:27 - 9:29in reverse.
-
9:29 - 9:33It's not that machines first become intelligent
-
9:33 - 9:35and then megalomaniacal
-
9:35 - 9:37and try to take over the world.
-
9:37 - 9:38It's quite the opposite,
-
9:38 - 9:41that the urge to take control
-
9:41 - 9:43of all possible futures
-
9:43 - 9:46is a more fundamental principle
-
9:46 - 9:47than that of intelligence,
-
9:47 - 9:51that general intelligence may in fact emerge
-
9:51 - 9:54directly from this sort of control-grabbing,
-
9:54 - 9:58rather than vice versa.
-
9:58 - 10:02Another important consequence is goal seeking.
-
10:02 - 10:06I'm often asked, how does the ability to seek goals
-
10:06 - 10:08follow from this sort of framework?
-
10:08 - 10:11And the answer is, the ability to seek goals
-
10:11 - 10:13will follow directly from this
-
10:13 - 10:15in the following sense:
-
10:15 - 10:18just like you would travel through a tunnel,
-
10:18 - 10:20a bottleneck in your future path space,
-
10:20 - 10:22in order to achieve many other
-
10:22 - 10:24diverse objectives later on,
-
10:24 - 10:26or just like you would invest
-
10:26 - 10:28in a financial security,
-
10:28 - 10:30reducing your short-term liquidity
-
10:30 - 10:33in order to increase your wealth over the long term,
-
10:33 - 10:35goal seeking emerges directly
-
10:35 - 10:37from a long-term drive
-
10:37 - 10:41to increase future freedom of action.
-
10:41 - 10:45Finally, Richard Feynman, famous physicist,
-
10:45 - 10:48once wrote that if human civilization were destroyed
-
10:48 - 10:50and you could pass only a single concept
-
10:50 - 10:51on to our descendants
-
10:51 - 10:54to help them rebuild civilization,
-
10:54 - 10:55that concept should be
-
10:55 - 10:57that all matter around us
-
10:57 - 11:00is made out of tiny elements
-
11:00 - 11:02that attract each other when they're far apart
-
11:02 - 11:05but repel each other when they're close together.
-
11:05 - 11:07My equivalent of that statement
-
11:07 - 11:09to pass on to descendants
-
11:09 - 11:11to help them build artificial intelligences
-
11:11 - 11:14or to help them understand human intelligence,
-
11:14 - 11:15is the following:
-
11:15 - 11:17Intelligence should be viewed
-
11:17 - 11:19as a physical process
-
11:19 - 11:22that tries to maximize future freedom of action
-
11:22 - 11:25and avoid constraints in its own future.
-
11:25 - 11:27Thank you very much.
-
11:27 - 11:31(Applause)
- Title:
- A new equation for intelligence
- Speaker:
- Alex Wissner-Gross
- Description:
-
Is there an equation for intelligence? Yes. It’s F = T ∇ Sτ. In a fascinating and informative talk, physicist and computer scientist Alex Wissner-Gross explains what in the world that means. (Filmed at TEDxBeaconStreet.)
- Video Language:
- English
- Team:
- closed TED
- Project:
- TEDTalks
- Duration:
- 11:48