
Can we create new senses for humans?

  • 0:01 - 0:06
    We are built out of
    very small stuff,
  • 0:06 - 0:08
    and we are embedded in
    a very large cosmos,
  • 0:08 - 0:13
    and the fact is that we are not
    very good at understanding reality
  • 0:13 - 0:14
    at either of those scales,
  • 0:14 - 0:16
    and that's because our brains
  • 0:16 - 0:20
    haven't evolved to understand
    the world at that scale.
  • 0:20 - 0:24
    Instead, we're trapped on this
    very thin slice of perception
  • 0:24 - 0:26
    right in the middle.
  • 0:27 - 0:31
    But it gets strange, because even at
    that slice of reality that we call home,
  • 0:31 - 0:34
    we're not seeing most
    of the action that's going on.
  • 0:34 - 0:38
    So take the colors of our world.
  • 0:38 - 0:42
    These are light waves, electromagnetic
    radiation that bounces off objects
  • 0:42 - 0:46
    and it hits specialized receptors
    in the back of our eyes.
  • 0:46 - 0:49
    But we're not seeing
    all the waves out there.
  • 0:49 - 0:51
    In fact, what we see
  • 0:51 - 0:55
    is less than a 10 trillionth
    of what's out there.
  • 0:55 - 0:58
    So you have radio waves and microwaves
  • 0:58 - 1:02
    and X-rays and gamma rays
    passing through your body right now
  • 1:02 - 1:05
    and you're completely unaware of it,
  • 1:05 - 1:08
    because you don't come with
    the proper biological receptors
  • 1:08 - 1:09
    for picking it up.
  • 1:10 - 1:12
    There are thousands
    of cell phone conversations
  • 1:12 - 1:14
    passing through you right now,
  • 1:14 - 1:16
    and you're utterly blind to it.
  • 1:16 - 1:20
    Now, it's not that these things
    are inherently unseeable.
  • 1:20 - 1:25
    Snakes include some infrared
    in their reality,
  • 1:25 - 1:29
    and honeybees include ultraviolet
    in their view of the world,
  • 1:29 - 1:32
    and of course we build machines
    in the dashboards of our cars
  • 1:32 - 1:35
    to pick up on signals
    in the radio frequency range,
  • 1:35 - 1:39
    and we built machines in hospitals
    to pick up on the X-ray range.
  • 1:39 - 1:42
    But you can't sense
    any of those by yourself,
  • 1:42 - 1:43
    at least not yet,
  • 1:43 - 1:47
    because you don't come equipped
    with the proper sensors.
  • 1:47 - 1:52
    Now, what this means is that
    our experience of reality
  • 1:52 - 1:55
    is constrained by our biology,
  • 1:55 - 1:58
    and that goes against
    the common sense notion
  • 1:58 - 2:00
    that our eyes and our ears
    and our fingertips
  • 2:00 - 2:04
    are just picking up
    the objective reality that's out there.
  • 2:04 - 2:10
    Instead, our brains are sampling
    just a little bit of the world.
  • 2:10 - 2:12
    Now, across the animal kingdom,
  • 2:12 - 2:15
    different animals pick up
    on different parts of reality.
  • 2:15 - 2:18
    So in the blind
    and deaf world of the tick,
  • 2:18 - 2:23
    the important signals
    are temperature and butyric acid;
  • 2:23 - 2:26
    in the world of the black ghost knifefish,
  • 2:26 - 2:31
    its sensory world is lavishly colored
    by electrical fields;
  • 2:31 - 2:33
    and for the echolocating bat,
  • 2:33 - 2:37
    its reality is constructed
    out of air compression waves.
  • 2:37 - 2:42
    That's the slice of their ecosystem
    that they can pick up on,
  • 2:42 - 2:43
    and we have a word for this in science.
  • 2:43 - 2:45
    It's called the umwelt,
  • 2:45 - 2:49
    which is the German word
    for the surrounding world.
  • 2:49 - 2:52
    Now, presumably, every animal assumes
  • 2:52 - 2:56
    that its umwelt is the entire
    objective reality out there,
  • 2:56 - 2:58
    because why would you ever stop to imagine
  • 2:58 - 3:01
    that there's something beyond
    what we can sense?
  • 3:01 - 3:04
    Instead, what we all do
    is we accept reality
  • 3:04 - 3:07
    as it's presented to us.
  • 3:07 - 3:10
    Let's do a consciousness-raiser on this:
  • 3:10 - 3:12
    imagine that you are a bloodhound dog.
  • 3:13 - 3:15
    Your whole world is about smelling.
  • 3:15 - 3:20
    You've got a long snout that has
    200 million scent receptors in it,
  • 3:20 - 3:24
    and you have wet nostrils
    that attract and trap scent molecules,
  • 3:24 - 3:28
    and your nostrils even have slits
    so you can take big nose-fulls of air.
  • 3:28 - 3:31
    Everything is about smell for you.
  • 3:31 - 3:35
    So one day, you stop in your tracks
    with a revelation:
  • 3:35 - 3:39
    you look at your human owner
    and you think,
  • 3:39 - 3:44
    "What is it like to have the pitiful,
    impoverished nose of a human?
  • 3:45 - 3:48
    What is it like when you take
    a feeble little nose-full of air?
  • 3:48 - 3:52
    How can you not know that there's
    a cat a hundred yards away,
  • 3:52 - 3:56
    or that your neighbor was on
    this very same spot six hours ago?"
  • 3:56 - 3:58
    (Laughter)
  • 3:58 - 4:01
    So because we're humans,
  • 4:01 - 4:03
    we've never experienced
    that world of smell,
  • 4:03 - 4:06
    so we don't miss it,
  • 4:06 - 4:10
    because we are firmly settled
    into our umwelt.
  • 4:10 - 4:14
    But the question is,
    do we have to be stuck there?
  • 4:14 - 4:19
    So as a neuroscientist, I'm interested
    in the way that technology
  • 4:19 - 4:21
    might expand our umwelt,
  • 4:21 - 4:25
    and how that's going to change
    the experience of being human.
  • 4:26 - 4:30
    So we already know that we can marry
    our technology to our biology,
  • 4:30 - 4:34
    because there are hundreds of thousands
    of people walking around
  • 4:34 - 4:37
    with artificial hearing
    and artificial vision.
  • 4:37 - 4:42
    So the way this works is, you take
    a microphone and you digitize the signal,
  • 4:42 - 4:45
    and you put an electrode strip
    directly into the inner ear.
  • 4:45 - 4:48
    Or, with the retinal implant,
    you take a camera
  • 4:48 - 4:51
    and you digitize the signal,
    and then you plug an electrode grid
  • 4:51 - 4:54
    directly into the optic nerve.
  • 4:54 - 4:58
    And as recently as 15 years ago,
  • 4:58 - 5:02
    there were a lot of scientists who thought
    these technologies wouldn't work.
  • 5:02 - 5:07
    Why? It's because these technologies
    speak the language of Silicon Valley,
  • 5:07 - 5:12
    and it's not exactly the same dialect
    as our natural biological sense organs.
  • 5:12 - 5:15
    But the fact is that it works:
  • 5:15 - 5:19
    the brain figures out
    how to use the signals just fine.
  • 5:20 - 5:21
    Now, how do we understand that?
  • 5:22 - 5:23
    Well, here's the big secret:
  • 5:23 - 5:29
    your brain is not hearing
    or seeing any of this.
  • 5:29 - 5:35
    Your brain is locked in a vault of silence
    and darkness inside your skull.
  • 5:35 - 5:39
    All it ever sees are
    electrochemical signals
  • 5:39 - 5:42
    that come in along different data cables,
  • 5:42 - 5:46
    and this is all it has to work with,
    and nothing more.
  • 5:47 - 5:49
    Now, amazingly,
  • 5:49 - 5:52
    the brain is really good
    at taking in these signals
  • 5:52 - 5:55
    and extracting patterns
    and assigning meaning,
  • 5:55 - 6:01
    so that it takes this inner cosmos
    and puts together a story of this,
  • 6:01 - 6:04
    your subjective world.
  • 6:04 - 6:06
    But here's the key point:
  • 6:06 - 6:10
    your brain doesn't know,
    and it doesn't care,
  • 6:10 - 6:13
    where it gets the data from.
  • 6:13 - 6:17
    Whatever information comes in,
    it just figures out what to do with it.
  • 6:17 - 6:20
    And this is a very efficient
    kind of machine.
  • 6:20 - 6:24
    It's essentially a general purpose
    computing device,
  • 6:24 - 6:26
    and it just takes in everything
  • 6:26 - 6:29
    and figures out
    what it's going to do with it,
  • 6:29 - 6:33
    and that, I think, frees up Mother Nature
  • 6:33 - 6:37
    to tinker around with different
    sorts of input channels.
  • 6:37 - 6:40
    So I call this the P.H. model of evolution,
  • 6:40 - 6:42
    and I don't want to get
    too technical here,
  • 6:42 - 6:45
    but P.H. stands for "Potato Head,"
  • 6:45 - 6:49
    and I use this name to emphasize
    that all these sensors
  • 6:49 - 6:52
    that we know and love, like our eyes
    and our ears and our fingertips,
  • 6:52 - 6:57
    these are merely peripheral
    plug-and-play devices:
  • 6:57 - 7:00
    you stick them in, and you're good to go.
  • 7:00 - 7:05
    The brain figures out what to do
    with the data that comes in.
  • 7:06 - 7:08
    And when you look across
    the animal kingdom,
  • 7:08 - 7:11
    you find lots of peripheral devices.
  • 7:11 - 7:15
    So snakes have heat pits
    with which to detect infrared,
  • 7:15 - 7:18
    and the ghost knifefish has
    electroreceptors,
  • 7:18 - 7:21
    and the star-nosed mole
    has this appendage
  • 7:21 - 7:24
    with 22 fingers on it
  • 7:24 - 7:27
    with which it feels around and constructs
    a 3D model of the world,
  • 7:27 - 7:31
    and many birds have magnetite
    so they can orient
  • 7:31 - 7:34
    to the magnetic field of the planet.
  • 7:34 - 7:38
    So what this means is that
    nature doesn't have to continually
  • 7:38 - 7:40
    redesign the brain.
  • 7:40 - 7:45
    Instead, with the principles
    of brain operation established,
  • 7:45 - 7:49
    all nature has to worry about
    is designing new peripherals.
  • 7:49 - 7:52
    Okay. So what this means is this:
  • 7:52 - 7:54
    the lesson that surfaces
  • 7:54 - 7:58
    is that there's nothing
    really special or fundamental
  • 7:58 - 8:01
    about the biology that we
    come to the table with.
  • 8:01 - 8:03
    It's just what we have inherited
  • 8:03 - 8:06
    from a complex road of evolution.
  • 8:06 - 8:10
    But it's not what we have to stick with,
  • 8:10 - 8:12
    and our best proof of principle of this
  • 8:12 - 8:14
    comes from what's called
    "sensory substitution."
  • 8:14 - 8:18
    And that refers to feeding
    information into the brain
  • 8:18 - 8:20
    via unusual sensory channels,
  • 8:20 - 8:23
    and the brain just figures out
    what to do with it.
  • 8:23 - 8:26
    Now, that might sound speculative,
  • 8:26 - 8:29
    but the first paper demonstrating this
    was published in the journal "Nature"
  • 8:29 - 8:32
    in 1969.
  • 8:32 - 8:34
    So a scientist named Paul Bach-y-Rita
  • 8:34 - 8:38
    put blind people
    in a modified dental chair,
  • 8:38 - 8:40
    and he set up a video feed,
  • 8:40 - 8:42
    and he put something
    in front of the camera,
  • 8:42 - 8:45
    and then you would feel that
  • 8:45 - 8:48
    poked into your back
    with a grid of solenoids.
  • 8:48 - 8:50
    So if you wiggle a coffee cup
    in front of the camera,
  • 8:50 - 8:52
    you're feeling that in your back,
  • 8:52 - 8:55
    and amazingly, blind people
    got pretty good
  • 8:55 - 8:59
    at being able to determine
    what was in front of the camera
  • 8:59 - 9:03
    just by feeling it
    in the small of their back.
  • 9:03 - 9:06
    Now, there have been many
    modern incarnations of this.
  • 9:06 - 9:10
    The sonic glasses take a video feed
    right in front of you
  • 9:10 - 9:12
    and turn that into a sonic landscape,
  • 9:12 - 9:15
    so as things move around,
    and get closer and farther,
  • 9:15 - 9:17
    it sounds like "Bzz, bzz, bzz."
  • 9:17 - 9:19
    It sounds like a cacophony,
  • 9:19 - 9:23
    but after several weeks, blind people
    start getting pretty good
  • 9:23 - 9:25
    at understanding what's in front of them
  • 9:25 - 9:28
    just based on what they're hearing.
  • 9:28 - 9:30
    And it doesn't have to be
    through the ears:
  • 9:30 - 9:34
    this system uses an electrotactile grid
    on the forehead,
  • 9:34 - 9:37
    so whatever's in front of the video feed,
    you're feeling it on your forehead.
  • 9:37 - 9:40
    Why the forehead? Because you're not
    using it for much else.
  • 9:40 - 9:44
    The most modern incarnation
    is called the BrainPort,
  • 9:44 - 9:48
    and this is a little electrogrid
    that sits on your tongue,
  • 9:48 - 9:52
    and the video feed gets turned into
    these little electrotactile signals,
  • 9:52 - 9:58
    and blind people get so good at using this
    that they can throw a ball into a basket,
  • 9:58 - 10:02
    or they can navigate
    complex obstacle courses.
  • 10:03 - 10:08
    They can come to see through their tongue.
  • 10:08 - 10:10
    Now, that sounds completely insane, right?
  • 10:10 - 10:13
    But remember, all vision ever is
  • 10:13 - 10:17
    is electrochemical signals
    coursing around in your brain.
  • 10:17 - 10:19
    Your brain doesn't know
    where the signals come from.
  • 10:19 - 10:23
    It just figures out what to do with them.
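
The common thread in these devices, from Bach-y-Rita's chair to the sonic glasses and the BrainPort, is a simple mapping from a camera image to a coarse pattern of stimulation. A minimal sketch of that kind of mapping follows; the 20x20 grid, the block averaging, and the 0-255 intensity scale are illustrative assumptions, not the actual parameters of any of these devices.

```python
# Minimal sketch of a vision-to-touch mapping: a grayscale camera frame is
# reduced to a coarse grid of stimulation intensities by block-averaging.
# Grid size and intensity scale are assumptions, not real device parameters.
import numpy as np

def frame_to_tactile_grid(frame: np.ndarray, rows: int = 20, cols: int = 20) -> np.ndarray:
    """Convert a grayscale frame (H x W, values 0-255) into a rows x cols
    grid of stimulation intensities."""
    h, w = frame.shape
    frame = frame[: h - h % rows, : w - w % cols]      # crop so blocks divide evenly
    blocks = frame.reshape(rows, frame.shape[0] // rows,
                           cols, frame.shape[1] // cols)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)   # one intensity per tactile element

# Example: a synthetic 480x640 frame stands in for the camera feed.
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
print(frame_to_tactile_grid(frame).shape)              # (20, 20) pattern for the tactile array
```
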
  • 10:23 - 10:28
    So my interest in my lab
    is sensory substitution for the deaf,
  • 10:28 - 10:31
    and this is a project I've undertaken
  • 10:31 - 10:34
    with a graduate student
    in my lab, Scott Novich,
  • 10:34 - 10:37
    who is spearheading this for his thesis.
  • 10:37 - 10:39
    And here is what we wanted to do:
  • 10:39 - 10:43
    we wanted to make it so that
    sound from the world gets converted
  • 10:43 - 10:47
    in some way so that a deaf person
    can understand what is being said.
  • 10:47 - 10:52
    And given the power
    and ubiquity of portable computing,
  • 10:52 - 10:57
    we wanted to make sure that this
    would run on cell phones and tablets,
  • 10:57 - 10:59
    and also we wanted
    to make this a wearable,
  • 10:59 - 11:02
    something that you could wear
    under your clothing.
  • 11:02 - 11:04
    So here's the concept.
  • 11:05 - 11:10
    So as I'm speaking, my sound
    is getting captured by the tablet,
  • 11:10 - 11:16
    and then it's getting mapped onto a vest
    that's covered in vibratory motors,
  • 11:16 - 11:20
    just like the motors in your cell phone.
  • 11:20 - 11:22
    So as I'm speaking,
  • 11:22 - 11:28
    the sound is getting translated
    to a pattern of vibration on the vest.
  • 11:28 - 11:30
    Now, this is not just conceptual:
  • 11:30 - 11:35
    this tablet is transmitting over Bluetooth,
    and I'm wearing the vest right now.
  • 11:35 - 11:37
    So as I'm speaking -- (Applause) --
  • 11:38 - 11:44
    the sound is getting translated
    into dynamic patterns of vibration.
  • 11:44 - 11:49
    I'm feeling the sonic world around me.
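
One plausible way to realize the sound-to-vibration mapping in this demo is to split each short audio frame's spectrum into frequency bands and drive one motor per band, with intensity following band energy. The motor count, frame length, and log scaling below are assumptions for illustration, not the algorithm actually running on the vest.

```python
# Sketch of a sound-to-vibration mapping: each audio frame's spectrum is split
# into bands, and each band's energy sets one motor's drive level.
# Motor count, frame length, and scaling are illustrative assumptions.
import numpy as np

N_MOTORS = 32          # assumed number of vibratory motors on the vest
SAMPLE_RATE = 16000    # Hz
FRAME_LEN = 320        # 20 ms of audio per update

def audio_frame_to_motor_pattern(frame: np.ndarray) -> np.ndarray:
    """Map one frame of audio samples to per-motor drive levels in [0, 1]."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, N_MOTORS)        # one frequency band per motor
    energy = np.array([band.mean() for band in bands])
    levels = np.log1p(energy)                         # compress dynamic range
    return levels / (levels.max() + 1e-9)             # normalize to [0, 1]

# Example: a 440 Hz tone stands in for speech captured by the tablet's microphone.
t = np.arange(FRAME_LEN) / SAMPLE_RATE
print(audio_frame_to_motor_pattern(np.sin(2 * np.pi * 440 * t)).round(2))
```
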
  • 11:49 - 11:53
    So, we've been testing this
    with deaf people now,
  • 11:53 - 11:57
    and it turns out that after
    just a little bit of time,
  • 11:57 - 12:00
    people can start feeling,
    they can start understanding
  • 12:00 - 12:03
    the language of the vest.
  • 12:03 - 12:08
    So this is Jonathan. He's 37 years old.
    He has a master's degree.
  • 12:08 - 12:10
    He was born profoundly deaf,
  • 12:10 - 12:14
    which means that there's a part
    of his umwelt that's unavailable to him.
  • 12:14 - 12:19
    So we had Jonathan train with the vest
    for four days, two hours a day,
  • 12:19 - 12:22
    and here he is on the fifth day.
  • 12:22 - 12:24
    (Video) Scott Novich: You.
  • 12:24 - 12:27
    David Eagleman: So Scott says a word,
    Jonathan feels it on the vest,
  • 12:27 - 12:30
    and he writes it on the board.
  • 12:30 - 12:34
    (Video) SN: Where. Where.
  • 12:34 - 12:38
    DE: Jonathan is able to translate
    this complicated pattern of vibrations
  • 12:38 - 12:41
    into an understanding
    of what's being said.
  • 12:41 - 12:44
    (Video) SN: Touch. Touch.
  • 12:44 - 12:49
    DE: Now, he's not doing this
  • 12:49 - 12:55
    -- (Applause) --
  • 12:56 - 13:00
    Jonathan is not doing this consciously,
    because the patterns are too complicated,
  • 13:00 - 13:06
    but his brain is starting to unlock
    the pattern that allows it to figure out
  • 13:06 - 13:08
    what the data mean,
  • 13:08 - 13:12
    and our expectation is that,
    after wearing this for about three months,
  • 13:12 - 13:17
    he will have a direct
    perceptual experience of hearing
  • 13:17 - 13:21
    in the same way that when a blind person
    passes a finger over braille,
  • 13:21 - 13:26
    the meaning comes directly off the page
    without any conscious intervention at all.
  • 13:27 - 13:30
    Now, this technology has the potential
    to be a game-changer,
  • 13:30 - 13:34
    because the only other solution
    for deafness is a cochlear implant,
  • 13:34 - 13:37
    and that requires an invasive surgery.
  • 13:37 - 13:42
    And this can be built for 40 times cheaper
    than a cochlear implant,
  • 13:42 - 13:47
    which opens up this technology globally,
    even for the poorest countries.
  • 13:48 - 13:53
    Now, we've been very encouraged
    by our results with sensory substitution,
  • 13:53 - 13:57
    but what we've been thinking a lot about
    is sensory addition.
  • 13:57 - 14:03
    How could we use a technology like this
    to add a completely new kind of sense,
  • 14:03 - 14:06
    to expand the human umwelt?
  • 14:06 - 14:10
    For example, could we feed
    real-time data from the Internet
  • 14:10 - 14:12
    directly into somebody's brain,
  • 14:12 - 14:16
    and can they develop a direct
    perceptual experience?
  • 14:16 - 14:18
    So here's an experiment
    we're doing in the lab.
  • 14:18 - 14:22
    A subject is feeling a real-time
    streaming feed of data from the Net
  • 14:22 - 14:24
    for five seconds.
  • 14:24 - 14:27
    Then, two buttons appear,
    and he has to make a choice.
  • 14:27 - 14:29
    He doesn't know what's going on.
  • 14:29 - 14:32
    He makes a choice,
    and he gets feedback after one second.
  • 14:32 - 14:33
    Now, here's the thing:
  • 14:33 - 14:36
    the subject has no idea
    what all the patterns mean,
  • 14:36 - 14:39
    but we're seeing if he gets better
    at figuring out which button to press.
  • 14:39 - 14:41
    He doesn't know that what we're feeding
  • 14:41 - 14:45
    is real-time data from the stock market,
  • 14:45 - 14:47
    and he's making buy and sell decisions.
  • 14:47 - 14:49
    (Laughter)
  • 14:49 - 14:53
    And the feedback is telling him
    whether he did the right thing or not.
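
The trial structure described here can be written out schematically. The functions standing in for the data feed, the vest driver, and the response buttons are hypothetical placeholders, and the feedback rule (the price rose, so "buy" was the right choice) is an assumption about how the experiment scores a trial.

```python
# Schematic sketch of one trial: five seconds of unlabeled market data go to the
# vest, the subject presses one of two buttons, and feedback arrives a second later.
# get_price_stream, stream_to_vest, and get_button_press are hypothetical stand-ins.
import random
import time

def run_trial(get_price_stream, stream_to_vest, get_button_press) -> bool:
    prices = get_price_stream(seconds=5)           # real-time market data, unlabeled
    stream_to_vest(prices)                         # subject feels it, meaning unknown to them
    choice = get_button_press()                    # 0 or 1: secretly "sell" or "buy"
    correct = 1 if prices[-1] > prices[0] else 0   # assumed rule: price rose, so "buy" was right
    time.sleep(1)                                  # feedback one second after the choice
    return choice == correct                       # shown to the subject as right/wrong

# Example with simulated stand-ins for the data feed and hardware:
fake_stream = lambda seconds: [100 + random.gauss(0, 1) for _ in range(seconds * 10)]
print("correct" if run_trial(fake_stream, lambda p: None, lambda: random.randint(0, 1)) else "wrong")
```
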
  • 14:53 - 14:56
    And what we're seeing is,
    can we expand the human umwelt
  • 14:56 - 14:59
    so that he comes to have,
    after several weeks,
  • 14:59 - 15:05
    a direct perceptual experience
    of the economic movements of the planet.
  • 15:05 - 15:08
    So we'll report on that later
    to see how well this goes.
  • 15:08 - 15:10
    (Laughter)
  • 15:11 - 15:13
    Here's another thing we're doing:
  • 15:13 - 15:17
    during the talks this morning,
    we've been automatically scraping Twitter
  • 15:17 - 15:20
    for the TED2015 hashtag,
  • 15:20 - 15:23
    and we've been doing
    an automated sentiment analysis,
  • 15:23 - 15:27
    which means, are people using positive
    words or negative words or neutral?
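
A toy version of that kind of aggregate sentiment analysis: each tweet carrying the hashtag is scored against small positive and negative word lists, and the running tally is what would be mapped onto the vest. The lexicons and scoring rule below are assumptions; the talk does not specify the method actually used.

```python
# Toy sentiment aggregation: score each tweet positive/negative/neutral against
# small word lists and keep a running tally. Lexicons and rule are assumptions.
from collections import Counter

POSITIVE = {"love", "great", "amazing", "wonderful", "brilliant"}
NEGATIVE = {"boring", "bad", "awful", "terrible", "hate"}

def score_tweet(text: str) -> str:
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def aggregate_sentiment(tweets):
    return Counter(score_tweet(t) for t in tweets)

tweets = ["This talk is amazing #TED2015", "a bit boring tbh #TED2015", "watching #TED2015"]
print(aggregate_sentiment(tweets))   # tally that would drive the vest's vibration pattern
```
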
  • 15:27 - 15:30
    And while this has been going on,
  • 15:30 - 15:33
    I have been feeling this,
  • 15:33 - 15:37
    and so I am plugged in
    to the aggregate emotion
  • 15:37 - 15:41
    of thousands of people in real time,
  • 15:41 - 15:45
    and that's a new kind of human experience,
    because now I can know
  • 15:45 - 15:48
    how everyone's doing
    and how much you're loving this.
  • 15:48 - 15:53
    (Laughter) (Applause)
  • 15:55 - 15:59
    It's a bigger experience
    than a human can normally have.
  • 16:00 - 16:03
    We're also expanding the umwelt of pilots.
  • 16:03 - 16:07
    So in this case, the vest is streaming
    nine different measures
  • 16:07 - 16:08
    from this quadcopter,
  • 16:08 - 16:12
    so pitch and yaw and roll
    and orientation and heading,
  • 16:12 - 16:16
    and that improves
    this pilot's ability to fly it.
  • 16:16 - 16:21
    It's essentially like he's extending
    his skin up there, far away.
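
The telemetry streaming here can be sketched as a fixed mapping from normalized flight measures to vest channels. Only five of the nine measures are named in the talk; the remaining channel names and the value ranges below are placeholders.

```python
# Sketch of streaming flight telemetry to fixed vest channels: each measure is
# clamped and normalized to [0, 1] and drives its own motor (or motor group).
# Unnamed channels and the value ranges are placeholders, not the real mapping.
NAMED = ["pitch", "yaw", "roll", "orientation", "heading"]   # measures named in the talk
CHANNELS = NAMED + [f"measure_{i}" for i in range(6, 10)]    # nine channels in total
RANGES = {ch: (-180.0, 180.0) for ch in CHANNELS}            # assumed min/max per channel

def telemetry_to_vest(sample: dict) -> list:
    levels = []
    for ch in CHANNELS:
        lo, hi = RANGES[ch]
        value = min(max(sample.get(ch, lo), lo), hi)         # clamp to the assumed range
        levels.append((value - lo) / (hi - lo))              # normalize to [0, 1]
    return levels

print(telemetry_to_vest({"pitch": 10.0, "roll": -45.0, "heading": 90.0}))
```
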
  • 16:21 - 16:23
    And that's just the beginning.
  • 16:23 - 16:28
    What we're envisioning is taking
    a modern cockpit full of gauges
  • 16:28 - 16:33
    and instead of trying
    to read the whole thing, you feel it.
  • 16:33 - 16:35
    We live in a world of information now,
  • 16:35 - 16:39
    and there is a difference
    between accessing Big Data
  • 16:39 - 16:42
    and experiencing it.
  • 16:42 - 16:46
    So I think there's really no end
    to the possibilities
  • 16:46 - 16:48
    on the horizon for human expansion.
  • 16:48 - 16:53
    Just imagine an astronaut
    being able to feel
  • 16:53 - 16:57
    the overall health
    of the International Space Station,
  • 16:57 - 17:02
    or, for that matter, having you feel
    the invisible states of your own health,
  • 17:02 - 17:05
    like your blood sugar
    and the state of your microbiome,
  • 17:05 - 17:11
    or having 360-degree vision
    or seeing in infrared or ultraviolet.
  • 17:11 - 17:15
    So the key is this:
    as we move into the future,
  • 17:15 - 17:20
    we're going to increasingly be able
    to choose our own peripheral devices.
  • 17:20 - 17:23
    We no longer have to wait
    for Mother Nature's sensory gifts
  • 17:23 - 17:25
    on her timescales,
  • 17:25 - 17:29
    but instead, like any good parent,
    she's given us the tools that we need
  • 17:29 - 17:34
    to go out and define our own trajectory.
  • 17:34 - 17:35
    So the question now is,
  • 17:35 - 17:41
    how do you want to go out
    and experience your universe?
  • 17:41 - 17:43
    Thank you.
  • 17:43 - 17:51
    (Applause)
  • 17:59 - 18:02
    Chris Anderson: Can you feel it?
    DE: Yeah.
  • 18:02 - 18:05
    Actually, this was the first time
    I felt applause on the vest.
  • 18:05 - 18:07
    It's nice. It's like a massage. (Laughter)
  • 18:07 - 18:11
    CA: Twitter's going crazy.
    Twitter's going mad.
  • 18:11 - 18:13
    So that stock market experiment.
  • 18:13 - 18:18
    This could be the first experiment
    that secures its funding forevermore,
  • 18:18 - 18:20
    right, if successful?
  • 18:20 - 18:23
    DE: Well, that's right, I wouldn't
    have to write to NIH anymore.
  • 18:23 - 18:26
    CA: Well look, just to be
    skeptical for a minute,
  • 18:26 - 18:29
    I mean, this is amazing,
    but isn't most of the evidence so far
  • 18:29 - 18:31
    that sensory substitution works,
  • 18:31 - 18:33
    not necessarily
    that sensory addition works?
  • 18:33 - 18:37
    I mean, isn't it possible that the
    blind person can see through their tongue
  • 18:37 - 18:42
    because the visual cortex is still there,
    ready to process,
  • 18:42 - 18:44
    and that that is needed as part of it?
  • 18:44 - 18:46
    DE: That's a great question.
    We actually have no idea
  • 18:46 - 18:50
    what the theoretical limits are of what
    kind of data the brain can take in.
  • 18:50 - 18:53
    The general story, though,
    is that it's extraordinarily flexible.
  • 18:53 - 18:57
    So when a person goes blind,
    what we used to call their visual cortex
  • 18:57 - 19:02
    gets taken over by other things,
    by touch, by hearing, by vocabulary.
  • 19:02 - 19:06
    So what that tells us is that
    the cortex is kind of a one-trick pony.
  • 19:06 - 19:09
    It just runs certain kinds
    of computations on things.
  • 19:09 - 19:12
    And when we look around
    at things like braille, for example,
  • 19:12 - 19:15
    people are getting information
    through bumps on their fingers.
  • 19:15 - 19:19
    So I don't think we have any reason
    to think there's a theoretical limit
  • 19:19 - 19:20
    that we know the edge of.
  • 19:21 - 19:25
    CA: If this checks out,
    you're going to be deluged.
  • 19:25 - 19:28
    There are so many
    possible applications for this.
  • 19:28 - 19:32
    Are you ready for this? What are you most
    excited about, the direction it might go?
  • 19:32 - 19:34
    DE: I mean, I think there's
    a lot of applications here.
  • 19:34 - 19:38
    In terms of beyond sensory substitution,
    the things I started mentioning
  • 19:38 - 19:42
    about astronauts on the space station,
    they spend a lot of their time
  • 19:42 - 19:45
    monitoring things, and they could instead
    just get what's going on,
  • 19:45 - 19:49
    because what this is really good for
    is multidimensional data.
  • 19:49 - 19:54
    The key is this: our visual systems
    are good at detecting blobs and edges,
  • 19:54 - 19:56
    but they're really bad
    at what our world has become,
  • 19:56 - 19:58
    which is screens
    with lots and lots of data.
  • 19:58 - 20:01
    We have to crawl that
    with our attentional systems.
  • 20:01 - 20:03
    So this is a way of just
    feeling the state of something,
  • 20:03 - 20:07
    just like the way you know the state
    of your body as you're standing around.
  • 20:07 - 20:10
    So I think heavy machinery, safety,
    feeling the state of a factory,
  • 20:10 - 20:13
    of your equipment, that's one place
    it'll go right away.
  • 20:13 - 20:17
    CA: David Eagleman, that was one
    mind-blowing talk. Thank you very much.
  • 20:17 - 20:22
    DE: Thank you, Chris.
    (Applause)
Title:
Can we create new senses for humans?
Speaker:
David Eagleman
Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
20:34