Shape-shifting tech will change work as we know it | Sean Follmer | TEDxCERN

  • 0:20 - 0:23
    We've evolved with tools
    and tools have evolved with us.
  • 0:23 - 0:28
    Our ancestors created these
    hand axes 1.5 million years ago,
  • 0:28 - 0:31
    shaping them to not only
    fit the task at hand,
  • 0:31 - 0:33
    but also their hand.
  • 0:34 - 0:35
    However, over the years,
  • 0:35 - 0:38
    tools have become
    more and more specialized.
  • 0:38 - 0:42
    These sculpting tools
    have evolved through their use,
  • 0:42 - 0:46
    and each one has a different form
    which matches its function,
  • 0:46 - 0:48
    and they leverage
    the dexterity of our hands
  • 0:48 - 0:52
    in order to manipulate things
    with much more precision.
  • 0:52 - 0:55
    But as tools have become
    more and more complex,
  • 0:55 - 0:59
    we need more complex controls
    to control them.
  • 1:00 - 1:05
    And so designers have become
    very adept at creating interfaces
  • 1:05 - 1:08
    that allow you to manipulate parameters
    while you're attending to other things,
  • 1:08 - 1:11
    such as taking a photograph
    and changing the focus
  • 1:11 - 1:13
    or the aperture.
  • 1:13 - 1:18
    But the computer has fundamentally
    changed the way we think about tools,
  • 1:18 - 1:20
    because computation is dynamic.
  • 1:20 - 1:22
    So it can do a million different things
  • 1:22 - 1:24
    and run a million different applications.
  • 1:25 - 1:28
    However, computers have
    the same static physical form
  • 1:28 - 1:30
    for all of these different applications,
  • 1:30 - 1:33
    and the same static
    interface elements as well.
  • 1:33 - 1:36
    And I believe that this
    is fundamentally a problem,
  • 1:36 - 1:39
    because it doesn't really allow us
    to interact with our hands
  • 1:39 - 1:42
    and capture the rich dexterity
    that we have in our bodies.
  • 1:43 - 1:48
    And my belief, then, is that
    we need new types of interfaces
  • 1:48 - 1:52
    that can capture these
    rich abilities that we have,
  • 1:52 - 1:54
    and that can physically adapt to us
  • 1:54 - 1:56
    and allow us to interact in new ways.
  • 1:56 - 1:59
    And so that's what I've been doing
    at the MIT Media Lab
  • 1:59 - 2:00
    and now at Stanford.
  • 2:01 - 2:05
    So with my colleagues,
    Daniel Leithinger and Hiroshi Ishii,
  • 2:05 - 2:06
    we created inFORM,
  • 2:06 - 2:09
    where the interface can actually
    come off the screen
  • 2:09 - 2:11
    and you can physically manipulate it.
  • 2:11 - 2:14
    Or you can visualize
    3D information physically
  • 2:14 - 2:17
    and touch it and feel it
    to understand it in new ways.
  • 2:18 - 2:22
    Or you can interact through gestures
    and direct deformations
  • 2:23 - 2:26
    to sculpt digital clay.
  • 2:26 - 2:29
    Or interface elements can arise
    out of the surface
  • 2:29 - 2:31
    and change on demand.
  • 2:31 - 2:33
    And the idea is that for each
    individual application,
  • 2:33 - 2:37
    the physical form can be matched
    to the application.
  • 2:37 - 2:39
    And I believe this represents a new way
  • 2:39 - 2:41
    that we can interact with information,
  • 2:41 - 2:42
    by making it physical.
  • 2:43 - 2:45
    So the question is, how can we use this?
  • 2:46 - 2:49
    Traditionally, urban planners
    and architects build physical models
  • 2:49 - 2:52
    of cities and buildings
    to better understand them.
  • 2:52 - 2:56
    So with Tony Tang at the Media Lab,
    we created an interface built on inFORM
  • 2:56 - 3:01
    to allow urban planners
    to design and view entire cities.
  • 3:01 - 3:06
    And now you can walk around it,
    but it's dynamic, it's physical,
  • 3:06 - 3:07
    and you can also interact directly.
  • 3:07 - 3:09
    Or you can look at different views,
  • 3:09 - 3:12
    such as population or traffic information,
  • 3:12 - 3:13
    but it's made physical.
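
To make this concrete, here is a minimal Python sketch of switching between data views on a pin display. The layer names, values, and normalization are invented for illustration; this is not the project's actual code.

```python
# A small illustrative sketch (not the project's code) of "different views,
# made physical": the same city grid can be rendered as building heights,
# population density, or traffic load by scaling each layer into the pin range.
import numpy as np

def render_layer(layers, name):
    """Normalize the chosen data layer to pin heights in [0, 1]."""
    data = np.asarray(layers[name], dtype=float)
    lo, hi = data.min(), data.max()
    return (data - lo) / (hi - lo) if hi > lo else np.zeros_like(data)

city = {
    "buildings":  [[12, 40], [8, 95]],      # storeys (invented values)
    "population": [[900, 150], [300, 60]],  # people per cell
    "traffic":    [[0.2, 0.9], [0.4, 0.1]], # congestion index
}
print(render_layer(city, "population"))  # tallest pins over the densest cell
```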
  • 3:15 - 3:19
    We also believe that these dynamic
    shape displays can really change
  • 3:19 - 3:21
    the ways that we remotely
    collaborate with people.
  • 3:22 - 3:24
    So when we're working together in person,
  • 3:24 - 3:25
    I'm not only looking at your face,
  • 3:26 - 3:29
    but I'm also gesturing
    and manipulating objects,
  • 3:29 - 3:32
    and that's really hard to do
    when you're using tools like Skype.
  • 3:34 - 3:38
    And so using inFORM, you can really
    literally reach out from the screen
  • 3:38 - 3:40
    and manipulate things at a distance.
  • 3:40 - 3:43
    So we used the pins of the display
    to represent people's hands,
  • 3:43 - 3:48
    allowing them to actually touch
    and manipulate objects at a distance.
  • 3:50 - 3:54
    And you can also manipulate
    and collaborate on 3D data sets as well,
  • 3:55 - 3:58
    so you can gesture around them
    as well as manipulate them.
  • 3:59 - 4:03
    And that allows people to collaborate
    on these new types of 3D information
  • 4:03 - 4:07
    in a richer way than might
    be possible with traditional tools.
  • 4:07 - 4:09
    And so you can also
    bring in existing objects,
  • 4:09 - 4:13
    and those will be captured on one side
    and transmitted to the other.
  • 4:13 - 4:15
    Or you can have an object that's linked
    between two places,
  • 4:15 - 4:17
    so as I move a ball on one side,
  • 4:18 - 4:19
    the ball moves on the other as well.
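
One way such a linked object could be driven, sketched here purely as an assumption (inFORM's actual control scheme isn't described in the talk): raise the pin just behind the local ball so it rolls one cell toward the remote ball's position. Grid size and ramp height are made-up parameters.

```python
# Hedged sketch of the "linked object" idea: nudge the local ball toward
# the position reported from the remote table by raising a pin behind it.
import numpy as np

GRID = 30          # hypothetical 30x30 pin grid
RAMP_HEIGHT = 0.8  # assumed normalized pin extension used to push the ball

def push_field(local, remote):
    """Pin heights (GRID x GRID, in [0, 1]) that nudge the local ball
    one cell toward the remote ball's grid position."""
    heights = np.zeros((GRID, GRID))
    if local == remote:
        return heights  # balls agree; keep the surface flat
    dx = int(np.sign(remote[0] - local[0]))
    dy = int(np.sign(remote[1] - local[1]))
    # raise the pin on the opposite side of the ball from the target
    bx, by = local[0] - dx, local[1] - dy
    if 0 <= bx < GRID and 0 <= by < GRID:
        heights[bx, by] = RAMP_HEIGHT
    return heights

# e.g. ball at (10, 10) locally, (12, 10) remotely -> raise pin (9, 10)
print(push_field((10, 10), (12, 10))[9, 10])  # 0.8
```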
  • 4:20 - 4:23
    And so we do this by capturing
    the remote user
  • 4:23 - 4:26
    using a depth-sensing camera
    like a Microsoft Kinect.
  • 4:26 - 4:29
    Now, you might be wondering
    how does this all work,
  • 4:29 - 4:33
    and essentially, what it is,
    is 900 linear actuators
  • 4:33 - 4:35
    that are connected to these
    mechanical linkages
  • 4:35 - 4:39
    that allow motion down here
    to be propagated in these pins above.
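
The sensing-to-actuation loop described here can be sketched in a few lines: a depth frame (as from a Kinect) is pooled down to the pin grid and inverted so nearer surfaces, such as a hand, raise taller pins. The frame shape, depth range, and grid resolution below are assumptions for illustration, not inFORM's specifications.

```python
# Hedged sketch: map a Kinect-style depth image to normalized pin heights.
import numpy as np

GRID = 30                  # hypothetical pin resolution (~900 pins)
NEAR, FAR = 500.0, 1500.0  # assumed working depth range in millimetres

def depth_to_pins(depth_mm):
    """Map an (H, W) depth image to a (GRID, GRID) height field in [0, 1]."""
    h, w = depth_mm.shape
    # average-pool the frame down to the pin grid
    pooled = depth_mm[: h - h % GRID, : w - w % GRID]
    pooled = pooled.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # nearer surfaces (smaller depth) become taller pins
    return 1.0 - (np.clip(pooled, NEAR, FAR) - NEAR) / (FAR - NEAR)

frame = np.full((480, 640), 1500.0)  # fake frame: empty scene
frame[200:280, 300:380] = 700.0      # a "hand" 0.7 m from the camera
print(depth_to_pins(frame).max())    # ~0.8 where the hand is
```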
  • 4:39 - 4:43
    So it's not that complex
    compared to what's going on at CERN,
  • 4:43 - 4:45
    but it did take a long time
    for us to build it -
  • 4:47 - 4:49
    we actually had to build it -
  • 4:49 - 4:51
    and so we started with a single motor,
  • 4:51 - 4:53
    a single linear actuator,
  • 4:53 - 4:56
    and then we had to design
    a custom circuit board to control them.
  • 4:56 - 4:58
    And then we had to make a lot of them.
  • 4:58 - 5:02
    And so the problem with having
    900 of something
  • 5:02 - 5:05
    is that you have to do
    every step 900 times.
  • 5:05 - 5:07
    And so that meant that we had
    a lot of work to do.
  • 5:07 - 5:11
    So we sort of set up
    a mini-sweatshop in the media lab
  • 5:11 - 5:15
    and brought undergrads in and convinced
    them to do "research" --
  • 5:15 - 5:16
    (Laughter)
  • 5:16 - 5:19
    and had late nights
    watching movies, eating pizza,
  • 5:19 - 5:21
    and screwing in thousands of screws.
  • 5:21 - 5:22
    You know -- research.
  • 5:22 - 5:23
    (Laughter)
  • 5:23 - 5:27
    But anyway, I think that we were
    really excited by the things
  • 5:27 - 5:29
    that inFORM allowed us to do.
  • 5:31 - 5:35
    Increasingly, we're using mobile devices
    and we interact on the go,
  • 5:35 - 5:37
    but mobile devices, just like computers,
  • 5:37 - 5:40
    are used for so many
    different applications.
  • 5:40 - 5:42
    So you use them to talk on the phone,
  • 5:42 - 5:45
    to surf the web, to play games,
    to take pictures,
  • 5:45 - 5:47
    or even a million different things.
  • 5:47 - 5:50
    But again, they have the same
    static physical form
  • 5:50 - 5:52
    for each of these applications.
  • 5:52 - 5:55
    And so we wanted to know how can we take
    some of the same interactions
  • 5:55 - 5:57
    that we developed for inFORM
  • 5:57 - 5:59
    and bring them to mobile devices.
  • 5:59 - 6:03
    So at Stanford, we created
    this haptic edge display,
  • 6:03 - 6:06
    which is a mobile device
    with an array of linear actuators
  • 6:06 - 6:08
    that can change shape,
  • 6:08 - 6:12
    so you can feel in your hand
    where you are as you're reading a book.
  • 6:12 - 6:16
    Or you can feel in your pocket
    new types of tactile sensations
  • 6:16 - 6:18
    that are richer than the vibration.
  • 6:18 - 6:21
    Or buttons can emerge from the side
    that allow you to interact
  • 6:21 - 6:23
    where you want them to be.
  • 6:23 - 6:27
    Or you can play games
    and have actual buttons.
  • 6:28 - 6:30
    And so we were able to do this
  • 6:30 - 6:34
    by embedding 40 tiny linear
    actuators inside the device,
  • 6:34 - 6:36
    and that allows you not only to touch them,
  • 6:36 - 6:38
    but also back-drive them as well.
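
A back-drivable actuator doubles as a sensor: if the measured position falls short of the commanded one, the user must be pushing the pin in. The sketch below illustrates that idea with a hypothetical actuator class and threshold; the real device's firmware is not described in the talk.

```python
# Hedged sketch: each edge actuator is both output (commanded position)
# and input (a push against it reads as a button press).
NUM_ACTUATORS = 40      # the talk mentions 40 linear actuators
PRESS_THRESHOLD = 0.15  # assumed normalized deflection that counts as a press

class EdgeActuator:
    def __init__(self):
        self.commanded = 0.0  # where we drove the pin (0 = flush, 1 = out)
        self.measured = 0.0   # where the pin actually is (sensor reading)

    def pressed(self):
        # a user pushing the pin in shows up as measured < commanded
        return (self.commanded - self.measured) > PRESS_THRESHOLD

edge = [EdgeActuator() for _ in range(NUM_ACTUATORS)]
edge[7].commanded = 0.5  # raise pin 7 as a button
edge[7].measured = 0.3   # the user pushed it partway back in
print([i for i, a in enumerate(edge) if a.pressed()])  # -> [7]
```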
  • 6:39 - 6:43
    But we've also looked at other ways
    to create more complex shape change.
  • 6:44 - 6:48
    So we've used pneumatic actuation
    to create a morphing device
  • 6:48 - 6:51
    where you can go from something
    that looks a lot like a phone ...
  • 6:52 - 6:54
    to a wristband on the go.
  • 6:54 - 6:57
    And so together with Ken Nakagaki
    at the Media Lab,
  • 6:57 - 6:59
    we created this new
    high-resolution version
  • 6:59 - 7:05
    that uses an array of servo motors
    to change from an interactive wristband
  • 7:05 - 7:07
    to a touch-input device
  • 7:07 - 7:09
    to a phone.
  • 7:09 - 7:10
    (Laughter)
  • 7:11 - 7:13
    And we're also interested
    in looking at ways
  • 7:14 - 7:16
    that users can actually
    deform the interfaces
  • 7:16 - 7:19
    to shape them into the devices
    that they want to use.
  • 7:19 - 7:21
    So you can make something
    like a game controller,
  • 7:21 - 7:24
    and then the system will understand
    what shape it's in,
  • 7:24 - 7:26
    and change to that mode.
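
One simple way such mode detection could work (the talk doesn't specify the actual method) is nearest-template matching on bend-sensor readings. The templates and sensor layout below are invented for the example.

```python
# Hedged sketch: classify the sensed shape by comparing bend angles
# against stored templates and switch the UI to the closest mode.
MODES = {                    # hypothetical bend-angle templates (degrees)
    "phone":      [0, 0, 0, 0],
    "wristband":  [80, 80, 80, 80],
    "controller": [0, 90, 90, 0],
}

def classify(bends):
    """Pick the mode whose template is nearest (L1 distance) to the sensed bends."""
    return min(MODES, key=lambda m: sum(abs(a - b) for a, b in zip(MODES[m], bends)))

print(classify([5, 85, 88, 2]))  # -> "controller"
```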
  • 7:26 - 7:28
    So, where does this point?
  • 7:28 - 7:29
    How do we move forward from here?
  • 7:30 - 7:32
    I think, really, where we are today
  • 7:32 - 7:35
    is in this new age
    of the Internet of Things,
  • 7:35 - 7:37
    where we have computers everywhere --
  • 7:37 - 7:39
    they're in our pockets,
    they're in our walls,
  • 7:39 - 7:42
    they're in almost every device
    that you'll buy in the next five years.
  • 7:42 - 7:45
    But what if we stopped
    thinking about devices
  • 7:45 - 7:48
    and thought instead about environments?
  • 7:48 - 7:50
    And so how can we have smart furniture
  • 7:50 - 7:54
    or smart rooms or smart environments
  • 7:54 - 7:57
    or cities that can adapt to us physically,
  • 7:57 - 8:01
    and allow us to collaborate
    with people in new ways
  • 8:01 - 8:03
    and do new types of tasks?
  • 8:03 - 8:07
    So for the Milan Design Week,
    we created TRANSFORM,
  • 8:07 - 8:11
    which is an interactive table-scale
    version of these shape displays,
  • 8:11 - 8:14
    which can move physical objects
    on the surface; for example,
  • 8:14 - 8:16
    reminding you to take your keys.
  • 8:16 - 8:21
    But it can also transform
    to fit different ways of interacting.
  • 8:21 - 8:22
    So if you want to work,
  • 8:22 - 8:25
    then it can change to sort of
    set up your work system.
  • 8:25 - 8:27
    And so as you bring a device over,
  • 8:27 - 8:30
    it creates all the affordances you need
  • 8:30 - 8:34
    and brings other objects
    to help you accomplish those goals.
  • 8:34 - 8:36
    So in conclusion,
  • 8:36 - 8:40
    I really think that we need to think
    about a new, fundamentally different way
  • 8:40 - 8:42
    of interacting with computers.
  • 8:43 - 8:46
    We need computers
    that can physically adapt to us
  • 8:46 - 8:48
    and adapt to the ways
    that we want to use them,
  • 8:48 - 8:53
    and really harness the rich dexterity
    that we have in our hands,
  • 8:53 - 8:57
    and our ability to think spatially
    about information by making it physical.
  • 8:58 - 9:02
    But looking forward, I think we need
    to go beyond this, beyond devices,
  • 9:02 - 9:05
    to really think about new ways
    that we can bring people together
  • 9:05 - 9:08
    and bring our information into the world,
  • 9:08 - 9:12
    and think about smart environments
    that can adapt to us physically.
  • 9:12 - 9:14
    So with that, I will leave you.
  • 9:14 - 9:15
    Thank you very much.
  • 9:15 - 9:19
    (Applause)
Description:

What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx
