
A monkey that controls a robot with its thoughts. No, really.

  • 0:00 - 0:03
    The kind of neuroscience that I do and my colleagues do
  • 0:03 - 0:05
    is almost like the weatherman.
  • 0:05 - 0:09
    We are always chasing storms.
  • 0:09 - 0:14
    We want to see and measure storms -- brainstorms, that is.
  • 0:14 - 0:17
    And we all talk about brainstorms in our daily lives,
  • 0:17 - 0:20
    but we rarely see or listen to one.
  • 0:20 - 0:22
    So I always like to start these talks
  • 0:22 - 0:25
    by actually introducing you to one of them.
  • 0:25 - 0:28
    Actually, the first time we recorded more than one neuron --
  • 0:28 - 0:30
    a hundred brain cells simultaneously --
  • 0:30 - 0:33
    we could measure the electrical sparks
  • 0:33 - 0:35
of a hundred cells in the same animal --
  • 0:35 - 0:37
    this is the first image we got,
  • 0:37 - 0:39
    the first 10 seconds of this recording.
  • 0:39 - 0:43
    So we got a little snippet of a thought,
  • 0:43 - 0:46
    and we could see it in front of us.
  • 0:46 - 0:47
    I always tell the students
  • 0:47 - 0:51
that we could also call neuroscientists some sort of astronomers,
  • 0:51 - 0:52
    because we are dealing with a system
  • 0:52 - 0:55
    that is only comparable in terms of number of cells
  • 0:55 - 0:58
    to the number of galaxies that we have in the universe.
  • 0:58 - 1:01
    And here we are, out of billions of neurons,
  • 1:01 - 1:04
    just recording, 10 years ago, a hundred.
  • 1:04 - 1:06
    We are doing a thousand now.
  • 1:06 - 1:11
    And we hope to understand something fundamental about our human nature.
  • 1:11 - 1:13
    Because, if you don't know yet,
  • 1:13 - 1:18
    everything that we use to define what human nature is comes from these storms,
  • 1:18 - 1:23
    comes from these storms that roll over the hills and valleys of our brains
  • 1:23 - 1:27
    and define our memories, our beliefs,
  • 1:27 - 1:30
    our feelings, our plans for the future.
  • 1:30 - 1:32
    Everything that we ever do,
  • 1:32 - 1:37
everything that every human has ever done, does or will do,
  • 1:37 - 1:42
    requires the toil of populations of neurons producing these kinds of storms.
  • 1:42 - 1:45
    And the sound of a brainstorm, if you've never heard one,
  • 1:45 - 1:48
    is somewhat like this.
  • 1:48 - 1:51
You can turn it up if you can.
  • 1:51 - 1:58
My son calls this "making popcorn while listening to a badly tuned AM station."
  • 1:58 - 1:59
    This is a brain.
  • 1:59 - 2:03
    This is what happens when you route these electrical storms to a loudspeaker
  • 2:03 - 2:06
and you listen to a hundred brain cells firing:
  • 2:06 - 2:10
    your brain will sound like this -- my brain, any brain.
  • 2:10 - 2:14
    And what we want to do as neuroscientists in this time
  • 2:14 - 2:19
    is to actually listen to these symphonies, these brain symphonies,
  • 2:19 - 2:23
    and try to extract from them the messages they carry.
  • 2:23 - 2:26
    In particular, about 12 years ago
  • 2:26 - 2:29
    we created a preparation that we named brain-machine interfaces.
  • 2:29 - 2:31
    And you have a scheme here that describes how it works.
  • 2:31 - 2:37
    The idea is, let's have some sensors that listen to these storms, this electrical firing,
  • 2:37 - 2:40
and see if we can, in the same time that it takes
  • 2:40 - 2:45
    for this storm to leave the brain and reach the legs or the arms of an animal --
  • 2:45 - 2:48
    about half a second --
  • 2:48 - 2:50
    let's see if we can read these signals,
  • 2:50 - 2:54
extract the motor messages that are embedded in them,
  • 2:54 - 2:56
translate them into digital commands
  • 2:56 - 2:58
and send them to an artificial device
  • 2:58 - 3:04
that will reproduce the voluntary motor will of that brain in real time.
  • 3:04 - 3:08
    And see if we can measure how well we can translate that message
  • 3:08 - 3:11
compared to the way the body does it.
  • 3:11 - 3:14
    And if we can actually provide feedback,
  • 3:14 - 3:20
    sensory signals that go back from this robotic, mechanical, computational actuator
  • 3:20 - 3:22
    that is now under the control of the brain,
  • 3:22 - 3:23
    back to the brain,
  • 3:23 - 3:25
let's see how the brain deals with that,
  • 3:25 - 3:30
with receiving messages from an artificial piece of machinery.
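To make the loop just described concrete, here is a minimal sketch, in Python, of one decode-and-command cycle. Everything in it is an illustrative assumption -- the names, the numbers, and the linear (Wiener-filter-style) readout -- not the lab's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the closed loop described above: binned spike
# counts from ~100 cells are decoded into a 2-D velocity command by a
# linear readout, then streamed to an actuator. Names and numbers are
# illustrative, not the lab's real pipeline.

N_CELLS, N_LAGS = 100, 10                   # 100 neurons, 10 history bins
rng = np.random.default_rng(0)
W = rng.normal(size=(2, N_CELLS * N_LAGS))  # stand-in for weights fit offline

def decode_velocity(spike_history):
    """spike_history: (N_LAGS, N_CELLS) array of binned spike counts."""
    return W @ spike_history.ravel()        # -> (vx, vy) command

def control_step(spike_history, send_to_actuator):
    # The whole cycle must finish well within the ~half second the talk
    # allows for a storm to travel from brain to limb.
    send_to_actuator(decode_velocity(spike_history))

# One simulated cycle with Poisson "spikes" and a print as the actuator:
control_step(rng.poisson(2.0, size=(N_LAGS, N_CELLS)),
             lambda v: print("velocity command:", v))
```

The only hard requirement the talk states is the timing budget: decoding and transmission have to beat the body's own brain-to-limb latency.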
  • 3:30 - 3:33
    And that's exactly what we did 10 years ago.
  • 3:33 - 3:36
    We started with a superstar monkey called Aurora
  • 3:36 - 3:38
    that became one of the superstars of this field.
  • 3:38 - 3:40
    And Aurora liked to play video games.
  • 3:40 - 3:42
    As you can see here,
  • 3:42 - 3:47
    she likes to use a joystick, like any one of us, any of our kids, to play this game.
  • 3:47 - 3:51
    And as a good primate, she even tries to cheat before she gets the right answer.
  • 3:51 - 3:56
    So even before a target appears that she's supposed to cross
  • 3:56 - 3:58
    with the cursor that she's controlling with this joystick,
  • 3:58 - 4:02
    Aurora is trying to find the target, no matter where it is.
  • 4:02 - 4:04
And she's doing that
  • 4:04 - 4:07
    because every time she crosses that target with the little cursor,
  • 4:07 - 4:10
    she gets a drop of Brazilian orange juice.
  • 4:10 - 4:13
    And I can tell you, any monkey will do anything for you
  • 4:13 - 4:16
    if you get a little drop of Brazilian orange juice.
  • 4:16 - 4:19
    Actually any primate will do that.
  • 4:19 - 4:20
    Think about that.
  • 4:20 - 4:24
    Well, while Aurora was playing this game, as you saw,
  • 4:24 - 4:26
    and doing a thousand trials a day
  • 4:26 - 4:30
    and getting 97 percent correct and 350 milliliters of orange juice,
  • 4:30 - 4:33
we were recording the brainstorms that were produced in her head
  • 4:33 - 4:35
    and sending them to a robotic arm
  • 4:35 - 4:39
    that was learning to reproduce the movements that Aurora was making.
  • 4:39 - 4:43
    Because the idea was to actually turn on this brain-machine interface
  • 4:43 - 4:47
    and have Aurora play the game just by thinking,
  • 4:47 - 4:50
    without interference of her body.
  • 4:50 - 4:53
    Her brainstorms would control an arm
  • 4:53 - 4:56
    that would move the cursor and cross the target.
  • 4:56 - 4:59
    And to our shock, that's exactly what Aurora did.
  • 4:59 - 5:03
    She played the game without moving her body.
  • 5:03 - 5:05
    So every trajectory that you see of the cursor now,
  • 5:05 - 5:08
    this is the exact first moment she got that.
  • 5:08 - 5:10
    That's the exact first moment
  • 5:10 - 5:17
    a brain intention was liberated from the physical domains of a body of a primate
  • 5:17 - 5:21
    and could act outside, in that outside world,
  • 5:21 - 5:24
    just by controlling an artificial device.
  • 5:24 - 5:29
    And Aurora kept playing the game, kept finding the little target
  • 5:29 - 5:32
and getting the orange juice that she wanted, that she craved.
  • 5:32 - 5:39
    Well, she did that because she, at that time, had acquired a new arm.
  • 5:39 - 5:42
    The robotic arm that you see moving here 30 days later,
  • 5:42 - 5:45
    after the first video that I showed to you,
  • 5:45 - 5:47
    is under the control of Aurora's brain
  • 5:47 - 5:51
    and is moving the cursor to get to the target.
  • 5:51 - 5:55
    And Aurora now knows that she can play the game with this robotic arm,
  • 5:55 - 6:00
    but she has not lost the ability to use her biological arms to do what she pleases.
  • 6:00 - 6:04
    She can scratch her back, she can scratch one of us, she can play another game.
  • 6:04 - 6:06
For all intents and purposes,
  • 6:06 - 6:10
    Aurora's brain has incorporated that artificial device
  • 6:10 - 6:13
    as an extension of her body.
  • 6:13 - 6:16
    The model of the self that Aurora had in her mind
  • 6:16 - 6:20
    has been expanded to get one more arm.
  • 6:20 - 6:23
    Well, we did that 10 years ago.
  • 6:23 - 6:26
    Just fast forward 10 years.
  • 6:26 - 6:31
    Just last year we realized that you don't even need to have a robotic device.
  • 6:31 - 6:36
    You can just build a computational body, an avatar, a monkey avatar.
  • 6:36 - 6:40
And you can actually use it with our monkeys: they can either interact with it,
  • 6:40 - 6:45
    or you can train them to assume in a virtual world
  • 6:45 - 6:48
    the first-person perspective of that avatar
  • 6:48 - 6:53
and use their brain activity to control the movements of the avatar's arms or legs.
  • 6:53 - 6:56
    And what we did basically was to train the animals
  • 6:56 - 6:59
    to learn how to control these avatars
  • 6:59 - 7:03
    and explore objects that appear in the virtual world.
  • 7:03 - 7:05
    And these objects are visually identical,
  • 7:05 - 7:09
    but when the avatar crosses the surface of these objects,
  • 7:09 - 7:16
    they send an electrical message that is proportional to the microtactile texture of the object
  • 7:16 - 7:20
    that goes back directly to the monkey's brain,
  • 7:20 - 7:25
informing the brain what the avatar is touching.
  • 7:25 - 7:30
    And in just four weeks, the brain learns to process this new sensation
  • 7:30 - 7:36
    and acquires a new sensory pathway -- like a new sense.
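As a rough illustration of this artificial touch channel, the sketch below maps a virtual object's texture to a microstimulation pulse rate. The mapping, the function names, and every number are hypothetical, not the published stimulation parameters.

```python
# Hypothetical sketch of the artificial touch channel: when the avatar's
# hand crosses an object, map that object's texture to a pulse rate for
# microstimulation of somatosensory cortex. All values are illustrative.

def texture_to_pulse_rate_hz(coarseness: float, max_rate_hz: float = 300.0) -> float:
    """Map a normalized texture value in [0, 1] to a stimulation rate."""
    return max_rate_hz * min(max(coarseness, 0.0), 1.0)

def on_avatar_contact(coarseness: float, stimulate) -> None:
    # `stimulate` stands in for whatever hardware call delivers the pulses.
    stimulate(texture_to_pulse_rate_hz(coarseness))

# Visually identical objects, different "feel":
for texture in (0.2, 0.7):
    on_avatar_contact(texture, lambda hz: print(f"stimulating at {hz:.0f} Hz"))
```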
  • 7:36 - 7:38
    And you truly liberate the brain now
  • 7:38 - 7:43
    because you are allowing the brain to send motor commands to move this avatar.
  • 7:43 - 7:48
    And the feedback that comes from the avatar is being processed directly by the brain
  • 7:48 - 7:50
    without the interference of the skin.
  • 7:50 - 7:53
So what you see here is the design of the task.
  • 7:53 - 7:57
    You're going to see an animal basically touching these three targets.
  • 7:57 - 8:01
    And he has to select one because only one carries the reward,
  • 8:01 - 8:03
    the orange juice that they want to get.
  • 8:03 - 8:09
    And he has to select it by touch using a virtual arm, an arm that doesn't exist.
  • 8:09 - 8:11
    And that's exactly what they do.
  • 8:11 - 8:14
    This is a complete liberation of the brain
  • 8:14 - 8:19
from the physical constraints of the body in a motor and perceptual task.
  • 8:19 - 8:23
    The animal is controlling the avatar to touch the targets.
  • 8:23 - 8:28
    And he's sensing the texture by receiving an electrical message directly in the brain.
  • 8:28 - 8:32
And the brain is deciding which texture is associated with the reward.
  • 8:32 - 8:36
The captions that you see in the movie don't appear for the monkey.
  • 8:36 - 8:39
    And by the way, they don't read English anyway,
  • 8:39 - 8:44
    so they are here just for you to know that the correct target is shifting position.
  • 8:44 - 8:48
    And yet, they can find them by tactile discrimination,
  • 8:48 - 8:51
    and they can press it and select it.
  • 8:51 - 8:54
    So when we look at the brains of these animals,
  • 8:54 - 8:57
    on the top panel you see the alignment of 125 cells
  • 8:57 - 9:02
    showing what happens with the brain activity, the electrical storms,
  • 9:02 - 9:04
    of this sample of neurons in the brain
  • 9:04 - 9:06
    when the animal is using a joystick.
  • 9:06 - 9:08
    And that's a picture that every neurophysiologist knows.
  • 9:08 - 9:13
    The basic alignment shows that these cells are coding for all possible directions.
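For readers who don't know that picture: it is the classic cosine-tuning result, in which each cell fires most strongly for a preferred movement direction, so a population vector over many cells recovers the intended direction. A small illustrative sketch with simulated numbers, not the recorded data:

```python
import numpy as np

# Illustrative sketch of the picture referenced here: each cell fires
# most for its "preferred direction" (cosine tuning), and a population
# vector over many such cells recovers the movement direction.

rng = np.random.default_rng(1)
n_cells = 125
preferred = rng.uniform(0.0, 2.0 * np.pi, n_cells)   # preferred directions

def firing_rates(theta, baseline=10.0, depth=8.0):
    return baseline + depth * np.cos(theta - preferred)

def population_vector(rates, baseline=10.0):
    r = rates - baseline                              # tuning modulation only
    return np.arctan2(np.sum(r * np.sin(preferred)),
                      np.sum(r * np.cos(preferred)))

true_dir = np.pi / 3
decoded = population_vector(firing_rates(true_dir))
print(f"true: {np.degrees(true_dir):.0f} deg, decoded: {np.degrees(decoded):.0f} deg")
```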
  • 9:13 - 9:19
    The bottom picture is what happens when the body stops moving
  • 9:19 - 9:25
    and the animal starts controlling either a robotic device or a computational avatar.
  • 9:25 - 9:28
    As fast as we can reset our computers,
  • 9:28 - 9:34
    the brain activity shifts to start representing this new tool,
  • 9:34 - 9:39
    as if this too was a part of that primate's body.
  • 9:39 - 9:44
    The brain is assimilating that too, as fast as we can measure.
  • 9:44 - 9:48
    So that suggests to us that our sense of self
  • 9:48 - 9:52
    does not end at the last layer of the epithelium of our bodies,
  • 9:52 - 9:58
    but it ends at the last layer of electrons of the tools that we're commanding with our brains.
  • 9:58 - 10:02
    Our violins, our cars, our bicycles, our soccer balls, our clothing --
  • 10:02 - 10:09
    they all become assimilated by this voracious, amazing, dynamic system called the brain.
  • 10:09 - 10:11
    How far can we take it?
  • 10:11 - 10:15
    Well, in an experiment that we ran a few years ago, we took this to the limit.
  • 10:15 - 10:18
    We had an animal running on a treadmill
  • 10:18 - 10:20
    at Duke University on the East Coast of the United States,
  • 10:20 - 10:23
    producing the brainstorms necessary to move.
  • 10:23 - 10:27
    And we had a robotic device, a humanoid robot,
  • 10:27 - 10:29
    in Kyoto, Japan at ATR Laboratories
  • 10:29 - 10:35
that had been dreaming its entire life of being controlled by a brain,
  • 10:35 - 10:38
    a human brain, or a primate brain.
  • 10:38 - 10:43
    What happens here is that the brain activity that generated the movements in the monkey
  • 10:43 - 10:47
    was transmitted to Japan and made this robot walk
  • 10:47 - 10:51
    while footage of this walking was sent back to Duke,
  • 10:51 - 10:56
    so that the monkey could see the legs of this robot walking in front of her.
  • 10:56 - 11:00
So she could be rewarded, not for what her body was doing
  • 11:00 - 11:05
    but for every correct step of the robot on the other side of the planet
  • 11:05 - 11:07
    controlled by her brain activity.
  • 11:07 - 11:15
    Funny thing, that round trip around the globe took 20 milliseconds less
  • 11:15 - 11:19
than it takes for that brainstorm to leave the head of the monkey
  • 11:19 - 11:23
and reach her own muscle.
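Taking the talk's own figures at face value -- and only those figures -- the arithmetic being hinted at is simple:

```python
# Back-of-envelope using only numbers quoted in this talk (both are the
# talk's figures, not independent measurements): the brain-to-limb time
# was given earlier as "about half a second", and the Duke -> Kyoto ->
# Duke round trip was said to be 20 milliseconds shorter than that
# brain-to-muscle journey.
brain_to_muscle_ms = 500                    # "about half a second"
round_trip_ms = brain_to_muscle_ms - 20     # "20 milliseconds less"
print(f"round trip across the planet: ~{round_trip_ms} ms")
```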
  • 11:23 - 11:29
    The monkey was moving a robot that was six times bigger, across the planet.
  • 11:29 - 11:35
    This is one of the experiments in which that robot was able to walk autonomously.
  • 11:35 - 11:40
    This is CB1 fulfilling its dream in Japan
  • 11:40 - 11:44
    under the control of the brain activity of a primate.
  • 11:44 - 11:46
    So where are we taking all this?
  • 11:46 - 11:48
    What are we going to do with all this research,
  • 11:48 - 11:54
    besides studying the properties of this dynamic universe that we have between our ears?
  • 11:54 - 11:59
    Well the idea is to take all this knowledge and technology
  • 11:59 - 12:04
and try to address one of the most severe neurological problems that we have in the world.
  • 12:04 - 12:09
    Millions of people have lost the ability to translate these brainstorms
  • 12:09 - 12:11
    into action, into movement.
  • 12:11 - 12:16
    Although their brains continue to produce those storms and code for movements,
  • 12:16 - 12:21
    they cannot cross a barrier that was created by a lesion on the spinal cord.
  • 12:21 - 12:24
    So our idea is to create a bypass,
  • 12:24 - 12:28
to use these brain-machine interfaces to read these signals,
  • 12:28 - 12:32
    larger-scale brainstorms that contain the desire to move again,
  • 12:32 - 12:36
    bypass the lesion using computational microengineering
  • 12:36 - 12:43
and send them to a new body, a whole body called an exoskeleton,
  • 12:43 - 12:49
    a whole robotic suit that will become the new body of these patients.
  • 12:49 - 12:53
    And you can see an image produced by this consortium.
  • 12:53 - 12:57
    This is a nonprofit consortium called the Walk Again Project
  • 12:57 - 13:00
    that is putting together scientists from Europe,
  • 13:00 - 13:01
    from here in the United States, and in Brazil
  • 13:01 - 13:06
to work together to actually get this new body built --
  • 13:06 - 13:09
a body that we believe will be assimilated through the same plastic mechanisms
  • 13:09 - 13:15
that allowed Aurora and other monkeys to use these tools through a brain-machine interface,
  • 13:15 - 13:21
and that allow us to incorporate the tools that we produce and use in our daily lives.
  • 13:21 - 13:24
This same mechanism, we hope, will allow these patients
  • 13:24 - 13:28
    not only to imagine again the movements that they want to make
  • 13:28 - 13:31
    and translate them into movements of this new body,
  • 13:31 - 13:38
but also to assimilate this body as the new body that the brain controls.
  • 13:38 - 13:42
    So I was told about 10 years ago
  • 13:42 - 13:47
    that this would never happen, that this was close to impossible.
  • 13:47 - 13:49
    And I can only tell you that as a scientist,
  • 13:49 - 13:52
    I grew up in southern Brazil in the mid-'60s
  • 13:52 - 13:58
watching a few crazy guys telling us that they would go to the Moon.
  • 13:58 - 13:59
    And I was five years old,
  • 13:59 - 14:03
    and I never understood why NASA didn't hire Captain Kirk and Spock to do the job;
  • 14:03 - 14:06
    after all, they were very proficient --
  • 14:06 - 14:09
    but just seeing that as a kid
  • 14:09 - 14:12
    made me believe, as my grandmother used to tell me,
  • 14:12 - 14:14
    that "impossible is just the possible
  • 14:14 - 14:18
    that someone has not put in enough effort to make it come true."
  • 14:18 - 14:22
    So they told me that it's impossible to make someone walk.
  • 14:22 - 14:25
    I think I'm going to follow my grandmother's advice.
  • 14:25 - 14:26
    Thank you.
  • 14:26 - 14:34
    (Applause)
Title:
A monkey that controls a robot with its thoughts. No, really.
Speaker:
Miguel Nicolelis
Description:

Can we use our brains to directly control machines -- without requiring a body as the middleman? Miguel Nicolelis talks through an astonishing experiment, in which a clever monkey in the US learns to control a monkey avatar, and then a humanoid robot in Japan, purely with its thoughts. The research has big implications for quadriplegic people -- and maybe for all of us. (Filmed at TEDMED 2012.)

Video Language:
English
Team:
TED
Project:
TEDTalks
Duration:
14:55
