Shape-shifting tech will change work as we know it | Sean Follmer | TEDxCERN
-
0:20 - 0:23We've evolved with tools
and tools have evolved with us. -
0:23 - 0:28Our ancestors created these
hand axes 1.5 million years ago, -
0:28 - 0:31shaping them to not only
fit the task at hand, -
0:31 - 0:33but also their hand.
-
0:34 - 0:35However, over the years,
-
0:35 - 0:38tools have become
more and more specialized. -
0:38 - 0:42These sculpting tools
have evolved through their use, -
0:42 - 0:46and each one has a different form
which matches its function, -
0:46 - 0:48and they leverage
the dexterity of our hands -
0:48 - 0:52in order to manipulate things
with much more precision. -
0:52 - 0:55But as tools have become
more and more complex, -
0:55 - 0:59we need more sophisticated controls
to operate them. -
1:00 - 1:05And so designers have become
very adept at creating interfaces -
1:05 - 1:08that allow you to manipulate parameters
while you're attending to other things, -
1:08 - 1:11such as taking a photograph
and changing the focus -
1:11 - 1:13or the aperture.
-
1:13 - 1:18But the computer has fundamentally
changed the way we think about tools, -
1:18 - 1:20because computation is dynamic.
-
1:20 - 1:22So it can do a million different things
-
1:22 - 1:24and run a million different applications.
-
1:25 - 1:28However, computers have
the same static physical form -
1:28 - 1:30for all of these different applications,
-
1:30 - 1:33and the same static
interface elements as well. -
1:33 - 1:36And I believe that this
is fundamentally a problem, -
1:36 - 1:39because it doesn't really allow us
to interact with our hands -
1:39 - 1:42and capture the rich dexterity
that we have in our bodies. -
1:43 - 1:48And so I believe that we need
new types of interfaces -
1:48 - 1:52that can capture these
rich abilities that we have, -
1:52 - 1:54and that can physically adapt to us
-
1:54 - 1:56and allow us to interact in new ways.
-
1:56 - 1:59And so that's what I've been doing
at the MIT Media Lab -
1:59 - 2:00and now at Stanford.
-
2:01 - 2:05So with my colleagues,
Daniel Leithinger and Hiroshi Ishii, -
2:05 - 2:06we created inFORM,
-
2:06 - 2:09where the interface can actually
come off the screen -
2:09 - 2:11and you can physically manipulate it.
-
2:11 - 2:14Or you can visualize
3D information physically -
2:14 - 2:17and touch it and feel it
to understand it in new ways. -
2:18 - 2:22Or you can interact through gestures
and direct deformations -
2:23 - 2:26to sculpt digital clay.
-
2:26 - 2:29Or interface elements can arise
out of the surface -
2:29 - 2:31and change on demand.
-
2:31 - 2:33And the idea is that for each
individual application, -
2:33 - 2:37the physical form can be matched
to the application. -
2:37 - 2:39And I believe this represents a new way
-
2:39 - 2:41that we can interact with information,
-
2:41 - 2:42by making it physical.
-
2:43 - 2:45So the question is, how can we use this?
-
2:46 - 2:49Traditionally, urban planners
and architects build physical models -
2:49 - 2:52of cities and buildings
to better understand them. -
2:52 - 2:56So with Tony Tang at the Media Lab,
we created an interface built on inFORM -
2:56 - 3:01to allow urban planners
to design and view entire cities. -
3:01 - 3:06And now you can walk around it,
but it's dynamic, it's physical, -
3:06 - 3:07and you can also interact directly.
-
3:07 - 3:09Or you can look at different views,
-
3:09 - 3:12such as population or traffic information,
-
3:12 - 3:13but it's made physical.
-
3:15 - 3:19We also believe that these dynamic
shape displays can really change -
3:19 - 3:21the ways that we remotely
collaborate with people. -
3:22 - 3:24So when we're working together in person,
-
3:24 - 3:25I'm not only looking at your face,
-
3:26 - 3:29but I'm also gesturing
and manipulating objects, -
3:29 - 3:32and that's really hard to do
when you're using tools like Skype. -
3:34 - 3:38And so using inFORM, you can
literally reach out from the screen -
3:38 - 3:40and manipulate things at a distance.
-
3:40 - 3:43So we used the pins of the display
to represent people's hands, -
3:43 - 3:48allowing them to actually touch
and manipulate objects at a distance. -
3:50 - 3:54And you can also manipulate
and collaborate on 3D data sets as well, -
3:55 - 3:58so you can gesture around them
as well as manipulate them. -
3:59 - 4:03And that allows people to collaborate
on these new types of 3D information -
4:03 - 4:07in a richer way than might
be possible with traditional tools. -
4:07 - 4:09And so you can also
bring in existing objects, -
4:09 - 4:13and those will be captured on one side
and transmitted to the other. -
4:13 - 4:15Or you can have an object that's linked
between two places, -
4:15 - 4:17so as I move a ball on one side,
-
4:18 - 4:19the ball moves on the other as well.
-
4:20 - 4:23And so we do this by capturing
the remote user -
4:23 - 4:26using a depth-sensing camera
like a Microsoft Kinect. -
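The capture step described here — turning a depth frame from a Kinect-style camera into pin heights — can be sketched roughly in Python. This is a hypothetical `depth_to_pin_heights` helper under assumed grid and range parameters, not the actual inFORM pipeline:

```python
import numpy as np

def depth_to_pin_heights(depth_mm, grid=(30, 30), min_mm=500.0,
                         max_mm=1500.0, travel_mm=100.0):
    """Map a depth frame (one mm value per pixel) onto a grid of pin heights.

    Pixels closer to the camera raise the corresponding pin higher.
    min_mm/max_mm bound the working volume; travel_mm is the pin's travel.
    All parameter values here are illustrative assumptions.
    """
    h, w = depth_mm.shape
    gh, gw = grid
    # Average-pool the depth image down to one value per pin.
    pooled = depth_mm[:h - h % gh, :w - w % gw]
    pooled = pooled.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Normalize so that nearer objects produce taller pins.
    clipped = np.clip(pooled, min_mm, max_mm)
    heights = (max_mm - clipped) / (max_mm - min_mm) * travel_mm
    return heights  # shape (gh, gw), values in [0, travel_mm]
```

Each frame then yields one target height per pin, which the display tracks in real time.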
4:26 - 4:29Now, you might be wondering
how this all works. -
4:29 - 4:33Essentially, it's
900 linear actuators -
4:33 - 4:35that are connected to these
mechanical linkages -
4:35 - 4:39that allow motion down here
to be propagated in these pins above. -
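Each of those 900 actuators has to track a commanded height; one way to picture that is a per-pin position loop. The following is a minimal sketch assuming simple proportional control with a speed clamp — the talk doesn't describe inFORM's actual control scheme, so `pin_step` and its gains are hypothetical:

```python
def pin_step(target_mm, position_mm, kp=0.8, max_step_mm=2.0):
    """One control tick for a single pin: move toward the target height.

    A proportional step with a per-tick speed clamp. kp and max_step_mm
    are illustrative values, not measured from the real hardware.
    """
    error = target_mm - position_mm
    step = max(-max_step_mm, min(max_step_mm, kp * error))
    return position_mm + step
```

Run once per pin per tick, this converges each pin smoothly onto its target without overshooting the clamp.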
4:39 - 4:43So it's not that complex
compared to what's going on at CERN, -
4:43 - 4:45but it did take a long time
for us to build it - -
4:47 - 4:49we actually had to build it -
-
4:49 - 4:51and so we started with a single motor,
-
4:51 - 4:53a single linear actuator,
-
4:53 - 4:56and then we had to design
a custom circuit board to control them. -
4:56 - 4:58And then we had to make a lot of them.
-
4:58 - 5:02And so the problem with having
900 of something -
5:02 - 5:05is that you have to do
every step 900 times. -
5:05 - 5:07And so that meant that we had
a lot of work to do. -
5:07 - 5:11So we sort of set up
a mini-sweatshop in the Media Lab -
5:11 - 5:15and brought undergrads in and convinced
them to do "research" -- -
5:15 - 5:16(Laughter)
-
5:16 - 5:19and had late nights
watching movies, eating pizza, -
5:19 - 5:21and screwing in thousands of screws.
-
5:21 - 5:22You know -- research.
-
5:22 - 5:23(Laughter)
-
5:23 - 5:27But anyway, I think that we were
really excited by the things -
5:27 - 5:29that inFORM allowed us to do.
-
5:31 - 5:35Increasingly, we're using mobile devices
and we interact on the go, -
5:35 - 5:37but mobile devices, just like computers,
-
5:37 - 5:40are used for so many
different applications. -
5:40 - 5:42So you use them to talk on the phone,
-
5:42 - 5:45to surf the web, to play games,
to take pictures, -
5:45 - 5:47or to do a million other things.
-
5:47 - 5:50But again, they have the same
static physical form -
5:50 - 5:52for each of these applications.
-
5:52 - 5:55And so we wanted to know how we could take
some of the same interactions -
5:55 - 5:57that we developed for inFORM
-
5:57 - 5:59and bring them to mobile devices.
-
5:59 - 6:03So at Stanford, we created
this haptic edge display, -
6:03 - 6:06which is a mobile device
with an array of linear actuators -
6:06 - 6:08that can change shape,
-
6:08 - 6:12so you can feel in your hand
where you are as you're reading a book. -
6:12 - 6:16Or you can feel in your pocket
new types of tactile sensations -
that are richer than vibration alone. -
-
6:18 - 6:21Or buttons can emerge from the side
that allow you to interact -
6:21 - 6:23where you want them to be.
-
6:23 - 6:27Or you can play games
and have actual buttons. -
6:28 - 6:30And so we were able to do this
-
6:30 - 6:34by embedding 40 tiny
linear actuators inside the device, -
6:34 - 6:36which allow you not only to touch them,
-
6:36 - 6:38but also to back-drive them.
-
6:39 - 6:43But we've also looked at other ways
to create more complex shape change. -
6:44 - 6:48So we've used pneumatic actuation
to create a morphing device -
6:48 - 6:51where you can go from something
that looks a lot like a phone ... -
6:52 - 6:54to a wristband on the go.
-
6:54 - 6:57And so together with Ken Nakagaki
at the Media Lab, -
6:57 - 6:59we created this new
high-resolution version -
6:59 - 7:05that uses an array of servo motors
to change from an interactive wristband -
7:05 - 7:07to a touch-input device
-
7:07 - 7:09to a phone.
-
7:09 - 7:10(Laughter)
-
7:11 - 7:13And we're also interested
in looking at ways -
7:14 - 7:16that users can actually
deform the interfaces -
7:16 - 7:19to shape them into the devices
that they want to use. -
7:19 - 7:21So you can make something
like a game controller, -
7:21 - 7:24and then the system will understand
what shape it's in, -
7:24 - 7:26and change to that mode.
-
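Recognizing which shape the user has formed could work as template matching over the sensed pin heights. This `classify_shape` helper is a hypothetical sketch of that idea, not the system's documented method:

```python
import numpy as np

def classify_shape(heights, templates):
    """Return the name of the stored template closest (in L2 distance)
    to the sensed pin-height map. Both are assumed to be same-shaped
    NumPy arrays; the template set is illustrative."""
    best_name, best_dist = None, float("inf")
    for name, tmpl in templates.items():
        dist = np.linalg.norm(heights - tmpl)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```

The device would then switch into the mode associated with the winning template — game controller, phone, and so on.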
7:26 - 7:28So, where does this lead?
-
7:28 - 7:29How do we move forward from here?
-
7:30 - 7:32I think, really, where we are today
-
7:32 - 7:35is in this new age
of the Internet of Things, -
7:35 - 7:37where we have computers everywhere --
-
7:37 - 7:39they're in our pockets,
they're in our walls, -
7:39 - 7:42they're in almost every device
that you'll buy in the next five years. -
7:42 - 7:45But what if we stopped
thinking about devices -
7:45 - 7:48and thought instead about environments? -
-
7:48 - 7:50And so how can we have smart furniture
-
7:50 - 7:54or smart rooms or smart environments
-
7:54 - 7:57or cities that can adapt to us physically,
-
7:57 - 8:01and enable new ways
of collaborating with people -
8:01 - 8:03and new types of tasks? -
-
8:03 - 8:07So for the Milan Design Week,
we created TRANSFORM, -
8:07 - 8:11which is an interactive table-scale
version of these shape displays, -
8:11 - 8:14which can move physical objects
on the surface; for example, -
8:14 - 8:16reminding you to take your keys.
-
8:16 - 8:21But it can also transform
to fit different ways of interacting. -
8:21 - 8:22So if you want to work,
-
8:22 - 8:25then it can change to sort of
set up your work system. -
8:25 - 8:27And so as you bring a device over,
-
8:27 - 8:30it creates all the affordances you need
-
8:30 - 8:34and brings other objects
to help you accomplish those goals. -
8:34 - 8:36So in conclusion,
-
8:36 - 8:40I really think that we need to think
about a new, fundamentally different way -
8:40 - 8:42of interacting with computers.
-
8:43 - 8:46We need computers
that can physically adapt to us -
8:46 - 8:48and adapt to the ways
that we want to use them, -
8:48 - 8:53and really harness the rich dexterity
that we have in our hands, -
8:53 - 8:57and our ability to think spatially
about information by making it physical. -
8:58 - 9:02But looking forward, I think we need
to go beyond this, beyond devices, -
9:02 - 9:05to really think about new ways
that we can bring people together -
9:05 - 9:08and bring our information into the world,
-
9:08 - 9:12and think about smart environments
that can adapt to us physically. -
9:12 - 9:14So with that, I will leave you.
-
9:14 - 9:15Thank you very much.
-
9:15 - 9:19(Applause)
- Title:
- Shape-shifting tech will change work as we know it | Sean Follmer | TEDxCERN
- Description:
-
What will the world look like when we move beyond the keyboard and mouse? Interaction designer Sean Follmer is building a future with machines that bring information to life under your fingers as you work with it. In this talk, check out prototypes for a 3D shape-shifting table, a phone that turns into a wristband, a deformable game controller and more that may change the way we live and work.
This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx
- Video Language:
- English
- Team:
- closed TED
- Project:
- TEDxTalks
- Duration:
- 09:26