Return to Video

Nicholas Carr | The Glass Cage: Automation and Us

  • 0:05 - 0:06
    Welcome.
  • 0:06 - 0:09
    Let's start with a pop quiz.
  • 0:09 - 0:17
    What do Benjamin Franklin, Karl Marx, and the philosopher Hannah Arendt have in common?
  • 0:17 - 0:18
    Anybody?
  • 0:18 - 0:24
    So they all proposed this notion of calling humans "homo faber" -
  • 0:24 - 0:26
    man, the tool maker.
  • 0:26 - 0:30
    So we make tools. We make tools, we alter the environment,
  • 0:30 - 0:33
    and then, the tools alter us.
  • 0:33 - 0:36
    And sometimes, we lament that.
  • 0:36 - 0:43
    And sometimes these tools have big effects - clothing, cooking, fire, automobiles,
  • 0:43 - 0:45
    computers, and so on.
  • 0:45 - 0:47
    Sometimes they have smaller effects.
  • 0:47 - 0:50
    But it looks like right now, we're in a period where we're going to start using
  • 0:50 - 0:54
    even more and more tools, and they're going to be ubiquitous throughout our life.
  • 0:54 - 1:01
    And our speaker today, Nicholas Carr, has taken it upon himself to investigate this.
  • 1:01 - 1:03
    How do these tools change us?
  • 1:03 - 1:07
    What's for the better? What's for the worse?
  • 1:07 - 1:12
    And can we figure out a way to design them so that we'll live better with them?
  • 1:12 - 1:14
    Welcome to Google, Nicholas Carr.
  • 1:14 - 1:18
    (applause)
  • 1:18 - 1:20
    Thank you. Thanks very much, Peter.
  • 1:20 - 1:28
    And thanks to Ann Farmer for shepherding me through the process of bringing me here,
  • 1:28 - 1:31
    and thanks to Google for hosting these events.
  • 1:31 - 1:34
    I've been to a couple of other Google offices,
  • 1:34 - 1:38
    but this is the first time I've been to the headquarters,
  • 1:38 - 1:42
    so it's exciting. The Googleplex has kind of played a role in my fantasy life
  • 1:42 - 1:46
    for a long time, I realize. Not a weird role.
  • 1:46 - 1:48
    Kind of a dull fantasy life, but what can I say?
  • 1:48 - 1:51
    So it's good to be here in person.
  • 1:51 - 1:58
    I started writing about technology about 15 years ago or so,
  • 1:58 - 2:02
    so more or less the same time that Google appeared on the scene,
  • 2:02 - 2:07
    and I think it was good timing for Google, and it was also
  • 2:07 - 2:10
    good timing for me, because there's been plenty,
  • 2:10 - 2:12
    obviously, to write about.
  • 2:12 - 2:14
    And like, I think, most technology writers,
  • 2:14 - 2:20
    I started off writing about the technology itself,
  • 2:20 - 2:24
    features, design, stuff like that, and also about
  • 2:24 - 2:28
    the economic and financial side of the business,
  • 2:28 - 2:30
    so competition between technology companies,
  • 2:30 - 2:36
    and so forth. But over the years I became kind of frustrated
  • 2:36 - 2:39
    by what I saw as the narrowness of that view,
  • 2:39 - 2:42
    that just looks at technology as technology
  • 2:42 - 2:46
    or as an economic factor. Because what was
  • 2:46 - 2:50
    becoming clear was that computers,
  • 2:50 - 2:52
    as they became smaller and smaller, and
  • 2:52 - 2:54
    more powerful, and more connected,
  • 2:54 - 2:58
    and as programmers became more adept at their work,
  • 2:58 - 3:05
    computing, computation, the digital connectivity and everything, was infusing
  • 3:05 - 3:08
    more and more aspects of everybody's life,
  • 3:08 - 3:13
    at work, during their leisure time, and so
  • 3:13 - 3:18
    it struck me that, as is always true with technology, and as Peter said,
  • 3:18 - 3:24
    technology frames, in many ways, the context in which we live
  • 3:24 - 3:27
    and it seemed to me important to look at this phenomenon,
  • 3:27 - 3:31
    the rise of the computer as kind of a central component of our lives,
  • 3:31 - 3:35
    from many different angles, to see what sociology could tell us,
  • 3:35 - 3:41
    what philosophy could tell us, and all these different
  • 3:41 - 3:43
    ways we can approach an important phenomenon that's
  • 3:43 - 3:46
    influencing our life.
  • 3:46 - 3:51
    Four or five years ago, I wrote a book called "The Shallows"
  • 3:51 - 3:55
    that kind of examined how the use of the internet
  • 3:55 - 3:59
    as an informational medium is influencing the way we think,
  • 3:59 - 4:02
    and how we're adapting to this kind of,
  • 4:02 - 4:05
    not only availability, of vast amounts of information,
  • 4:05 - 4:10
    but more and more an actual active barrage of it,
  • 4:10 - 4:13
    and what that meant for our ability to tune out the flow when we needed to,
  • 4:13 - 4:19
    and really engage attentively in one task or one train of thought.
  • 4:19 - 4:25
    And as I was writing "The Shallows", I also started becoming aware
  • 4:25 - 4:30
    of this other realm of research into computers
  • 4:30 - 4:33
    that struck me as dealing with an even broader question,
  • 4:33 - 4:36
    which is "What happens to people in their talents,
  • 4:36 - 4:39
    in their engagement with the world, when they
  • 4:39 - 4:42
    become reliant on computers in their various
  • 4:42 - 4:45
    forms, to do more and more things?"
  • 4:45 - 4:47
    So what happens when we automate, not just
  • 4:47 - 4:51
    factory work, but lots of white collar, professional thinking,
  • 4:51 - 4:56
    and what happens when we begin to automate a lot of just the day-to-day activities
  • 4:56 - 4:58
    that we do? We become more and more
  • 4:58 - 5:01
    reliant on computers, not necessarily to
  • 5:01 - 5:04
    take over all of the work, but to become our aid
  • 5:04 - 5:07
    to help shepherd us through our days.
  • 5:07 - 5:11
    And that was the spark that led to "The Glass Cage",
  • 5:11 - 5:15
    my new book, which tries to look broadly
  • 5:15 - 5:19
    at the repercussions of our dependence
  • 5:19 - 5:22
    on computers and automation in general,
  • 5:22 - 5:24
    but also, looks at the question of "Are we
  • 5:24 - 5:28
    designing this stuff in an optimal fashion?"
  • 5:28 - 5:32
    If we want a world in which we get the benefits
  • 5:32 - 5:36
    of computers, but we also want people to live
  • 5:36 - 5:40
    full, meaningful lives, develop rich talents,
  • 5:40 - 5:43
    interact with the world in diverse ways, are we
  • 5:43 - 5:50
    designing all of these tools, everything from robots to
  • 5:50 - 5:52
    simple smart phone apps, in a way that
  • 5:52 - 5:54
    accomplishes both those things?
  • 5:54 - 6:00
    What I'd like to do is just read a short section from the book that,
  • 6:00 - 6:04
    to me, provides both an example of a lot of the
  • 6:04 - 6:05
    things I'm talking about, a lot of the tensions
  • 6:05 - 6:08
    I'm talking about, but also provides sort of a
  • 6:08 - 6:10
    metaphor for, I think, the circumstances we're in,
  • 6:10 - 6:13
    and the challenges we face.
  • 6:13 - 6:20
    This section, which comes in the middle of the book, is about the use of computers
  • 6:20 - 6:26
    and automation, not in a city or even in a kind of western country,
  • 6:26 - 6:28
    where there's tons of it, but in a place
  • 6:28 - 6:31
    that looks like this. Up in the Arctic Circle,
  • 6:31 - 6:37
    far, far away, a place you might think is shielded
  • 6:37 - 6:40
    from computers and automation, but, in fact,
  • 6:40 - 6:44
    is not. So let me just read this to you.
  • 6:44 - 6:49
    "The small island of Aglulick, lying off the coast
  • 6:49 - 6:52
    of the Melville Peninsula, in the Nunavut territory
  • 6:52 - 6:55
    of the Canadian North, is a bewildering place
  • 6:55 - 6:58
    in the winter. The average temperature hovers
  • 6:58 - 7:01
    around 20 degrees below zero, thick sheets
  • 7:01 - 7:04
    of sea ice cover the surrounding waters, the sun
  • 7:04 - 7:08
    is absent. Despite the brutal conditions,
  • 7:08 - 7:12
    Inuit hunters have for some 4,000 years ventured out from their homes
  • 7:12 - 7:15
    on the island and traversed miles of ice and
  • 7:15 - 7:18
    tundra in search of caribou and other game.
  • 7:18 - 7:22
    The hunters' ability to navigate vast stretches
  • 7:22 - 7:25
    of barren arctic terrain, where landmarks are few,
  • 7:25 - 7:28
    snow formations are in constant flux, and
  • 7:28 - 7:30
    trails disappear overnight,
  • 7:30 - 7:34
    has amazed voyagers and scientists for centuries.
  • 7:34 - 7:37
    The Inuit's extraordinary way-finding skills
  • 7:37 - 7:41
    are born, not of technological prowess,
  • 7:41 - 7:43
    they've eschewed maps, compasses, and
  • 7:43 - 7:46
    other instruments, but of a profound understanding
  • 7:46 - 7:49
    of winds, snow drift patterns, animal behavior,
  • 7:49 - 7:52
    stars, tides, and currents.
  • 7:52 - 7:55
    The Inuit are masters of perception,
  • 7:55 - 7:57
    or at least they used to be.
  • 7:57 - 8:00
    Something changed in Inuit culture at the turn of the millennium.
  • 8:00 - 8:03
    In the year 2000, the US government lifted many
  • 8:03 - 8:05
    of the restrictions on the civilian use
  • 8:05 - 8:08
    of the global positioning system.
  • 8:08 - 8:10
    The Igloolik hunters, who had already swapped
  • 8:10 - 8:13
    their dogsleds for snowmobiles, began to
  • 8:13 - 8:16
    rely on computer-generated maps and directions
  • 8:16 - 8:19
    to get around. Younger Inuit were particularly
  • 8:19 - 8:21
    eager to use the new technology.
  • 8:21 - 8:25
    In the past, a young hunter had to endure a long apprenticeship
  • 8:25 - 8:27
    with his elders, developing his way-finding talents
  • 8:27 - 8:32
    over many years. By purchasing a cheap GPS receiver,
  • 8:32 - 8:34
    he could skip the training and off-load
  • 8:34 - 8:38
    responsibility for navigation to the device.
  • 8:38 - 8:41
    The ease, convenience, and precision of automated
  • 8:41 - 8:43
    navigation made the Inuit's traditional
  • 8:43 - 8:48
    techniques seem antiquated and cumbersome by comparison.
  • 8:48 - 8:52
    But as GPS devices proliferated on the island, reports began to spread
  • 8:52 - 8:55
    of serious accidents during hunts, some
  • 8:55 - 8:57
    resulting in injuries and even deaths.
  • 8:57 - 8:59
    The cause was often traced to an over-reliance
  • 8:59 - 9:04
    on satellites. When a receiver breaks or its
  • 9:04 - 9:06
    batteries freeze, a hunter who hasn't developed
  • 9:06 - 9:10
    strong way-finding skills can easily become lost
  • 9:10 - 9:13
    in the featureless waste, and fall victim to exposure.
  • 9:13 - 9:16
    Even when the devices operate properly,
  • 9:16 - 9:18
    they present hazards. The routes, so meticulously
  • 9:18 - 9:21
    plotted on satellite maps, can give hunters
  • 9:21 - 9:25
    a form of tunnel vision. Trusting the GPS instructions,
  • 9:25 - 9:28
    they'll speed onto dangerously thin ice or
  • 9:28 - 9:30
    into other environmental perils that a skilled
  • 9:30 - 9:32
    navigator would have had the sense and foresight
  • 9:32 - 9:36
    to avoid. Some of these problems may eventually
  • 9:36 - 9:39
    be mitigated by improvements in navigational devices,
  • 9:39 - 9:42
    or by better instruction in their use.
  • 9:42 - 9:45
    What won't be mitigated is the loss of what one
  • 9:45 - 9:50
    tribal elder describes as the wisdom and knowledge of the Inuit.
  • 9:50 - 9:56
    The anthropologist Claudio Aporta, of Carleton University in Ottawa, has been studying
  • 9:56 - 9:58
    Inuit hunters for years.
  • 9:58 - 10:00
    He reports that while satellite navigation
  • 10:00 - 10:03
    offers attractive advantages, its adoption has
  • 10:03 - 10:06
    already brought a deterioration in way-finding
  • 10:06 - 10:09
    abilities, and more generally, a weakened feel
  • 10:09 - 10:12
    for the land. As a hunter on a GPS-equipped
  • 10:12 - 10:15
    snowmobile devotes his attention to the instructions
  • 10:15 - 10:18
    coming from the computer, he loses sight
  • 10:18 - 10:21
    of his surroundings. He travels blindfolded,
  • 10:21 - 10:24
    as Aporta puts it. A singular talent that has
  • 10:24 - 10:26
    defined and distinguished a people for
  • 10:26 - 10:29
    thousands of years may well evaporate over the
  • 10:29 - 10:32
    course of a generation or two.
  • 10:32 - 10:40
    When I relate that story to people, they tend to have one of two reactions,
  • 10:40 - 10:42
    and my guess is both of those reactions
  • 10:42 - 10:46
    are probably represented in this room.
  • 10:46 - 10:50
    One of the reactions is a feeling that
  • 10:50 - 10:54
    this is a poignant story, it's a troubling story,
  • 10:54 - 10:56
    a story about loss, about something essential
  • 10:56 - 10:59
    to the human condition, and that tends to be the
  • 10:59 - 11:02
    reaction I have to it, but then
  • 11:02 - 11:04
    there's a very different reaction, which is
  • 11:04 - 11:07
    "Well, welcome to the modern world."
  • 11:07 - 11:12
    Progress goes on, we adapt,
  • 11:12 - 11:14
    and in the end, things get better.
  • 11:14 - 11:17
    And so if you think about it, most of us,
  • 11:17 - 11:19
    probably all human beings, once had a much
  • 11:19 - 11:22
    more sophisticated inner navigational sense,
  • 11:22 - 11:27
    much more sophisticated perception of the world, the landscape,
  • 11:27 - 11:31
    and for most of us, we've lost almost all of that.
  • 11:31 - 11:34
    And yet we didn't go extinct, we're still here.
  • 11:34 - 11:37
    By most measures we're thriving.
  • 11:37 - 11:43
    I think that is also a completely valid point of view.
  • 11:43 - 11:46
    It's true that we lose lots of skills over time,
  • 11:46 - 11:49
    and we gain new ones, and things go on.
  • 11:49 - 11:52
    So in some ways your reaction to this
  • 11:52 - 11:55
    is a value judgement about what's meaningful in human life.
  • 11:55 - 11:58
    But beyond those value judgements
  • 11:58 - 12:02
    I think one of the things that this story,
  • 12:02 - 12:04
    this experience, tells us is
  • 12:04 - 12:07
    how powerful a new tool can be
  • 12:07 - 12:09
    when introduced into a culture.
  • 12:09 - 12:13
    It can change the way people work,
  • 12:13 - 12:14
    the way people operate,
  • 12:14 - 12:17
    the way they think about what's important,
  • 12:17 - 12:21
    the way they go about their lives, in many different ways.
  • 12:21 - 12:23
    And it can do this very, very quickly.
  • 12:23 - 12:26
    Overturning some skill, or some talent,
  • 12:26 - 12:27
    or some way of life that's been around for
  • 12:27 - 12:29
    thousands of years, just in the course of
  • 12:29 - 12:31
    a year or two.
  • 12:31 - 12:33
    So introducing computer tools,
  • 12:33 - 12:36
    introducing automation, any kind of technology
  • 12:36 - 12:39
    that redefines what human beings do,
  • 12:39 - 12:41
    and redefines what we do versus what
  • 12:41 - 12:44
    we hand off to machines or computers,
  • 12:44 - 12:47
    can have very, very deep and very, very powerful effects.
  • 12:47 - 12:53
    And a lot of these effects are very difficult to anticipate.
  • 12:53 - 12:55
    So the Ennuit hunters, the young hunters
  • 12:55 - 12:58
    didn't go out and buy GPS systems because
  • 12:58 - 13:00
    they wanted to increase the odds that they'd
  • 13:00 - 13:04
    get lost and die, and they probably weren't thinking about
  • 13:04 - 13:07
    eroding some fundamental aspect of culture.
  • 13:07 - 13:08
    They wanted to get the convenience,
  • 13:08 - 13:13
    the ease of the system, which is what
  • 13:13 - 13:14
    many of us are motivated by when we
  • 13:14 - 13:21
    decide to adopt some kind of new form
  • 13:21 - 13:21
    of automation in our lives.
  • 13:21 - 13:26
    When you look at all these unanticipated effects,
  • 13:26 - 13:29
    you can see a very common theme
  • 13:29 - 13:32
    that comes out in research about automation,
  • 13:32 - 13:35
    and particularly about computer automation.
  • 13:35 - 13:37
    It's something that's been documented
  • 13:37 - 13:41
    over and over again by human factors
  • 13:41 - 13:43
    scientists and researchers, people who study
  • 13:43 - 13:47
    how people interact with computers and other machines.
  • 13:47 - 13:51
    And the concept is referred to as the substitution myth.
  • 13:51 - 13:52
    It's very simple.
  • 13:52 - 13:59
    It says that whenever you automate any part of an activity,
  • 13:59 - 14:02
    you fundamentally change the activity.
  • 14:02 - 14:05
    That's very different from what we anticipate.
  • 14:05 - 14:09
    Most people, either users of software or other automated systems,
  • 14:09 - 14:12
    or the designers, the makers, they assume that
  • 14:12 - 14:16
    you can take bits and pieces of what people do,
  • 14:16 - 14:19
    you can automate them, and turn them
  • 14:19 - 14:21
    over to software or something else,
  • 14:21 - 14:23
    and you'll make those parts of the process
  • 14:23 - 14:28
    more efficient or more convenient, or faster, or cheaper,
  • 14:28 - 14:30
    but you won't fundamentally change the way
  • 14:30 - 14:33
    people go about doing their work.
  • 14:33 - 14:34
    You won't change their behavior.
  • 14:34 - 14:36
    In fact, over and over again we see that
  • 14:36 - 14:40
    even small changes, small shifts of responsibility,
  • 14:40 - 14:42
    from people to technology can have
  • 14:42 - 14:45
    very big effects on the way people behave,
  • 14:45 - 14:48
    the way they learn, the way they approach their jobs.
  • 14:48 - 14:53
    We've seen this recently with the increasing
  • 14:53 - 14:57
    automation of medical record keeping.
  • 14:57 - 15:01
    As you probably know we've moved fairly quickly
  • 15:01 - 15:03
    over the last ten years from doctors
  • 15:03 - 15:06
    taking patient notes on paper,
  • 15:06 - 15:08
    either writing them by hand or dictating them,
  • 15:08 - 15:11
    to digital records.
  • 15:11 - 15:15
    So doctors usually as they're going through an exam
  • 15:15 - 15:19
    will take notes, usually going through a template,
  • 15:19 - 15:21
    on a computer or on a tablet.
  • 15:21 - 15:23
    For most of us our initial reaction
  • 15:23 - 15:24
    is "thank goodness for that"
  • 15:24 - 15:29
    because having records on paper was a pain in the neck.
  • 15:29 - 15:31
    You'd have to enter the same information
  • 15:31 - 15:33
    depending on when you went to different doctors,
  • 15:33 - 15:38
    and god forbid you got sick somewhere else
  • 15:38 - 15:40
    in the country or something and
  • 15:40 - 15:42
    doctors couldn't exchange them, had no way
  • 15:42 - 15:44
    to share your old records.
  • 15:44 - 15:46
    So it makes all sorts of sense to
  • 15:46 - 15:50
    automate this and to have digital records.
  • 15:50 - 15:53
    And indeed, ten years ago when the U.S.
  • 15:53 - 15:55
    started down this path there were
  • 15:55 - 15:57
    all sorts of studies that said we're going to
  • 15:57 - 16:00
    save enormous amounts of money,
  • 16:00 - 16:02
    we're going to increase patient care,
  • 16:02 - 16:03
    quality of healthcare,
  • 16:03 - 16:08
    as well as make it easier to share information.
  • 16:08 - 16:10
    There's a big study by the RAND Corporation
  • 16:10 - 16:11
    that documented all this.
  • 16:11 - 16:13
    They had modeled the entire healthcare system
  • 16:13 - 16:17
    in a computer, output various things, and this was
  • 16:17 - 16:18
    going to be all to the good.
  • 16:18 - 16:21
    Well, the government went on to subsidize
  • 16:21 - 16:24
    the adoption of electronic medical records
  • 16:24 - 16:26
    to the tune of something like $30 billion since then.
  • 16:26 - 16:29
    And now we have a lot of information
  • 16:29 - 16:30
    about what's really happened.
  • 16:30 - 16:35
    And nothing that was expected has actually played out.
  • 16:35 - 16:38
    And all sorts of things that weren't expected have.
  • 16:38 - 16:42
    For instance, the cost savings have not materialized.
  • 16:42 - 16:44
    Cost has continued to go up,
  • 16:44 - 16:45
    and there's even some indications that
  • 16:45 - 16:48
    beyond the expense required for the systems
  • 16:48 - 16:52
    themselves, this shift may increase healthcare costs
  • 16:52 - 16:53
    rather than decrease them.
  • 16:53 - 16:56
    The evidence on quality of care is
  • 16:56 - 16:58
    very, very mixed.
  • 16:58 - 16:59
    There seems to be no doubt that
  • 16:59 - 17:01
    for some patients, those with chronic diseases
  • 17:01 - 17:03
    that require a lot of different doctors,
  • 17:03 - 17:04
    quality goes up.
  • 17:04 - 17:06
    But for a lot of patients there hasn't
  • 17:06 - 17:06
    been a change, and there may
  • 17:06 - 17:11
    even have been an erosion of quality in some instances.
  • 17:11 - 17:13
    And finally, we're not even getting the benefits
  • 17:13 - 17:17
    of broad sharing of the records because
  • 17:17 - 17:20
    a lot of the systems are proprietary
  • 17:20 - 17:24
    and so you can't, you know, transfer the records
  • 17:24 - 17:26
    quickly or easily from one hospital to the next
  • 17:26 - 17:27
    or from one practice to the next.
  • 17:27 - 17:29
    And now some of these problems are just
  • 17:29 - 17:33
    coming from the fact that a lot of software is crappy.
  • 17:33 - 17:35
    We've rushed to spend huge amounts
  • 17:35 - 17:36
    of money on it, lots of big software
  • 17:36 - 17:40
    companies that supply this have gotten wealthy,
  • 17:40 - 17:42
    and doctors are struggling with it,
  • 17:42 - 17:43
    patients are struggling with it.
  • 17:43 - 17:45
    And so some of those things will be fixed
  • 17:45 - 17:48
    at more expense over time.
  • 17:48 - 17:50
    But if you look down lower, you see
  • 17:50 - 17:52
    changes in behavior that are much more subtle,
  • 17:52 - 17:54
    much more interesting, and go beyond
  • 17:54 - 17:56
    the quality of the software itself.
  • 17:56 - 17:58
    So for instance, one of the reasons
  • 17:58 - 18:00
    that everybody expected that healthcare
  • 18:00 - 18:03
    costs would go down was the assumption
  • 18:03 - 18:06
    that as soon as doctors can call up images
  • 18:06 - 18:08
    and other test results on their computers
  • 18:08 - 18:10
    when they're in with a patient,
  • 18:10 - 18:13
    they wouldn't order more tests.
  • 18:13 - 18:15
    So we'd see fewer diagnostic tests and
  • 18:15 - 18:17
    fewer costs from those diagnostic tests,
  • 18:17 - 18:20
    a big part of the healthcare system's costs.
  • 18:20 - 18:23
    Actually the opposite seems to be happening.
  • 18:23 - 18:26
    You give the doctor an ability to
  • 18:26 - 18:28
    quickly order tests and quickly pull up
  • 18:28 - 18:31
    the results, doctors actually order more of them,
  • 18:31 - 18:32
    because they know it's going to be easier for them.
  • 18:32 - 18:35
    And so the quality of the outcomes
  • 18:35 - 18:36
    doesn't go up, we're just seeing more
  • 18:36 - 18:40
    diagnostic tests and more costs.
  • 18:40 - 18:42
    Exactly the opposite of what we expected.
  • 18:42 - 18:45
    You see changes in the doctor-patient relationship.
  • 18:45 - 18:50
    If you've been around for a while, and had the experience of
  • 18:50 - 18:53
    going to a doctor's office for a physical or whatever, where
  • 18:53 - 18:57
    the doctor paid his or her whole attention to you,
  • 18:57 - 19:00
    and compare that to the world of electronic medical records,
  • 19:00 - 19:02
    when the doctor has a computer,
  • 19:02 - 19:03
    you know that it intrudes on the
  • 19:03 - 19:05
    doctor-patient relationship.
  • 19:05 - 19:07
    Studies show that doctors now spend,
  • 19:07 - 19:08
    if they have a computer with them, about
  • 19:08 - 19:11
    25-50% of the time during the exam
  • 19:11 - 19:14
    looking at the computer rather than the patient.
  • 19:14 - 19:16
    And doctors aren't happy about that,
  • 19:16 - 19:17
    patients don't tend to be happy about it,
  • 19:17 - 19:21
    but it's a necessary consequence,
  • 19:21 - 19:23
    at least how we've designed these systems,
  • 19:23 - 19:25
    of this transfer.
  • 19:25 - 19:28
    The most interesting, and I'm just going
  • 19:28 - 19:31
    to give you three examples of unexpected results,
  • 19:31 - 19:33
    but what's most interesting to me is the
  • 19:33 - 19:36
    fact that the quality of the records themselves
  • 19:36 - 19:38
    has gone down.
  • 19:38 - 19:41
    And the reason is that, first of all, doctors
  • 19:41 - 19:44
    now use templates and checkboxes a lot of the time,
  • 19:44 - 19:47
    and then when they have to put in text
  • 19:47 - 19:50
    describing the patient's condition
  • 19:50 - 19:53
    rather than dictating it from what they've just experienced,
  • 19:53 - 19:55
    or hand-writing it, they cut and paste.
  • 19:55 - 19:57
    They cut and paste paragraphs and other stuff
  • 19:57 - 20:00
    from other visits that the patient has had
  • 20:00 - 20:03
    or from visits by other patients that have
  • 20:03 - 20:04
    had similar conditions.
  • 20:04 - 20:07
    This is referred to as the cloning of text.
  • 20:07 - 20:10
    And more and more of personal medical records
  • 20:10 - 20:13
    consist of cloned text these days.
  • 20:13 - 20:16
    Which makes the records less useful
  • 20:16 - 20:18
    for doctors, because it has less rich
  • 20:18 - 20:20
    and subtle information, and it also
  • 20:20 - 20:23
    undermines an important role that records
  • 20:23 - 20:28
    used to play in the exchange of information and knowledge.
  • 20:28 - 20:29
    A primary-care physician used to get
  • 20:29 - 20:31
    a lot of information, a lot of knowledge,
  • 20:31 - 20:35
    by reading rich descriptions from specialists.
  • 20:35 - 20:37
    And now more and more, as doctors say,
  • 20:37 - 20:40
    it's just boiler-plate, just cloned text.
  • 20:40 - 20:43
    So we've created this system that eventually
  • 20:43 - 20:46
    will probably have the very important benefit
  • 20:46 - 20:48
    of allowing us to exchange information
  • 20:48 - 20:50
    more and more quickly,
  • 20:50 - 20:52
    more and more easily,
  • 20:52 - 20:53
    but at the same time we're reducing
  • 20:53 - 20:56
    the quality of the information itself and making
  • 20:56 - 20:59
    what's exchanged less valuable.
  • 20:59 - 21:01
    Now, those are three examples of how the
  • 21:01 - 21:04
    substitution myth has played out in this
  • 21:04 - 21:07
    particular area of automation,
  • 21:07 - 21:08
    and they're very specialized and you see
  • 21:08 - 21:11
    all sorts of these things anywhere you look.
  • 21:11 - 21:13
    But there are a couple of bigger themes
  • 21:13 - 21:17
    that tend to cross all aspects of automation.
  • 21:17 - 21:20
    When you introduce software to make jobs easier,
  • 21:20 - 21:25
    to take over jobs, in addition to the benefits
  • 21:25 - 21:31
    there are a couple of big negative developments.
  • 21:31 - 21:33
    Human factors experts, researchers on this,
  • 21:33 - 21:36
    refer to this as automation complacency
  • 21:36 - 21:38
    and automation bias.
  • 21:38 - 21:40
    Automation complacency means exactly
  • 21:40 - 21:43
    what you would expect.
  • 21:43 - 21:46
    When people turn over big aspects of their job
  • 21:46 - 21:48
    to computers, to software, to robots,
  • 21:48 - 21:50
    they tune out.
  • 21:50 - 21:54
    We're very good at trusting a machine,
  • 21:54 - 21:55
    and certainly a computerized machine,
  • 21:55 - 21:59
    to handle our job, to handle any challenge
  • 21:59 - 22:01
    that might arise.
  • 22:01 - 22:03
    And so we become complacent,
  • 22:03 - 22:05
    we tune out, we space out.
  • 22:05 - 22:07
    And that might be fine until
  • 22:07 - 22:08
    something bad happens and we suddenly
  • 22:08 - 22:10
    have to re-engage what we're doing.
  • 22:10 - 22:13
    And then you see people make mistakes.
  • 22:13 - 22:16
    Everybody experiences automation complacency
  • 22:16 - 22:17
    in using computers.
  • 22:17 - 22:21
    A very simple example is autocorrect
  • 22:21 - 22:24
    for spelling.
  • 22:24 - 22:26
    When people have autocorrect going,
  • 22:26 - 22:28
    when they're texting or using a word-processor,
  • 22:28 - 22:31
    they become much more complacent
  • 22:31 - 22:32
    about their spelling.
  • 22:32 - 22:34
    They don't check things, they let it go.
  • 22:34 - 22:38
    And then most people have probably
  • 22:38 - 22:40
    had the experience of sending out a text,
  • 22:40 - 22:43
    or an email, or a report, that has
  • 22:43 - 22:45
    some really stupid typo in it because
  • 22:45 - 22:48
    the computer misunderstood your intent.
  • 22:48 - 22:52
    And that causes maybe a moment of embarrassment.
  • 22:52 - 22:55
    But you take that same phenomenon of complacency,
  • 22:55 - 22:57
    and put it into an industrial control room,
  • 22:57 - 23:01
    into a cockpit, into a battlefield,
  • 23:01 - 23:05
    and you sometimes get very, very dangerous situations.
  • 23:05 - 23:07
    One of the classic examples of
  • 23:07 - 23:10
    automation complacency comes in the
  • 23:10 - 23:12
    cruise-line business.
  • 23:12 - 23:15
    A few years ago, a cruise ship called
  • 23:15 - 23:18
    the Royal Majesty was on the last leg
  • 23:18 - 23:21
    of a cruise off New England.
  • 23:21 - 23:23
    It was going from Bermuda, I think, to Boston,
  • 23:23 - 23:27
    and had a GPS antenna that was connected
  • 23:27 - 23:30
    to an automated navigation system.
  • 23:30 - 23:33
    The crew turned on the automated navigation system
  • 23:33 - 23:36
    and became totally complacent.
  • 23:36 - 23:38
    Just assumed everything's going fine,
  • 23:38 - 23:41
    the computer's plotting our course,
  • 23:41 - 23:42
    don't have to worry about it.
  • 23:42 - 23:47
    And at some point the line to the GPS antenna broke.
  • 23:47 - 23:48
    It was way up somewhere and nobody saw it.
  • 23:48 - 23:51
    Nobody noticed.
  • 23:51 - 23:55
    There were increasing environmental clues
  • 23:55 - 23:58
    that the ship was drifting off-course.
  • 23:58 - 23:59
    Nobody saw it.
  • 23:59 - 24:02
    At one point a mate whose job it was
  • 24:02 - 24:06
    to watch for a locational buoy and report
  • 24:06 - 24:09
    back to the bridge that they'd passed
  • 24:09 - 24:10
    it as they should have, he was out there
  • 24:10 - 24:12
    watching for it and he didn't see it,
  • 24:12 - 24:15
    and he said "well, it must be there
  • 24:15 - 24:17
    because the computer's in charge here.
  • 24:17 - 24:18
    I just must have missed it."
  • 24:18 - 24:20
    So he didn't bother to tell the bridge.
  • 24:20 - 24:22
    He was embarrassed that he had missed
  • 24:22 - 24:24
    what must have been there.
  • 24:24 - 24:26
    Well, hours go by and ultimately the ship
  • 24:26 - 24:29
    crashes into a sand bar off Nantucket Island
  • 24:29 - 24:31
    many miles off-course.
  • 24:31 - 24:33
    Fortunately, no one was killed
  • 24:33 - 24:36
    or badly injured, but there were
  • 24:36 - 24:38
    millions of dollars of damage.
  • 24:38 - 24:42
    And it kind of shows how easily,
  • 24:42 - 24:43
    if you give too much responsibility
  • 24:43 - 24:45
    to the computer, people will
  • 24:45 - 24:49
    tune out and they won't notice things
  • 24:49 - 24:51
    are going wrong, or if they do notice
  • 24:51 - 24:53
    they might make mistakes in responding.
  • 24:53 - 24:56
    Automation bias is closely related
  • 24:56 - 25:00
    to automation complacency.
  • 25:00 - 25:02
    It just means that you place too much
  • 25:02 - 25:06
    trust in the information coming from your computer.
  • 25:06 - 25:09
    To the point where you begin to assume
  • 25:09 - 25:13
    that the computer is infallible, and so you
  • 25:13 - 25:15
    don't have to pay attention to other sources
  • 25:15 - 25:18
    of information, including your own eyes and ears.
  • 25:18 - 25:19
    And this too is something we see
  • 25:19 - 25:22
    over and over again when you automate
  • 25:22 - 25:23
    any kind of activity.
  • 25:23 - 25:27
    A good example is the use of GPS
  • 25:27 - 25:29
    by truck drivers.
  • 25:29 - 25:33
    A truck driver starts to listen to the
  • 25:33 - 25:37
    automated voice of the GPS woman
  • 25:37 - 25:39
    telling him where to go and whatever,
  • 25:39 - 25:42
    and he or she begins to ignore
  • 25:42 - 25:45
    other sources of information like road signs.
  • 25:45 - 25:48
    So we've seen an increase in the incidence
  • 25:48 - 25:50
    of trucks crashing into low overpasses
  • 25:50 - 25:54
    as we've increased the use of GPS.
  • 25:54 - 25:56
    And in Seattle a few years ago,
  • 25:56 - 25:58
    there was a bus driver carrying a
  • 25:58 - 26:01
    load of high-school athletes to a game
  • 26:01 - 26:04
    somewhere--twelve-foot high bus--
  • 26:04 - 26:06
    approached a nine-foot high overpass
  • 26:06 - 26:08
    and there were all these signs along the way
  • 26:08 - 26:10
    "Danger Low Overpass"
  • 26:10 - 26:12
    or even signs that had blinking lights
  • 26:12 - 26:13
    around them.
  • 26:13 - 26:15
    He smashes right into it.
  • 26:15 - 26:17
    Luckily no one died, a bunch of students
  • 26:17 - 26:18
    had to go to the hospital.
  • 26:18 - 26:21
    The police said "what were you thinking?"
  • 26:21 - 26:22
    He said "Well, I had my GPS on,
  • 26:22 - 26:25
    and I just didn't see the signs."
  • 26:25 - 26:28
    So we ignore or don't even see
  • 26:28 - 26:30
    other sources of information.
  • 26:30 - 26:33
    In another very different area,
  • 26:33 - 26:34
    back to healthcare,
  • 26:34 - 26:38
    if you look at how radiologists read
  • 26:38 - 26:40
    diagnostic images today most of them
  • 26:40 - 26:43
    read them as digital images of course,
  • 26:43 - 26:45
    but also there's now software
  • 26:45 - 26:48
    that is designed as a decision-support aid
  • 26:48 - 26:50
    and analytical aid.
  • 26:50 - 26:52
    What it does is it gives the radiologist prompts.
  • 26:52 - 26:55
    It highlights particular regions of the image
  • 26:55 - 26:59
    that the analysis of past data suggests
  • 26:59 - 27:01
    are suspicious.
  • 27:01 - 27:03
    And in many cases this has very good results.
  • 27:03 - 27:07
    The doctor focuses attention on those
  • 27:07 - 27:09
    particular highlighted areas,
  • 27:09 - 27:11
    finds a cancer or other abnormality
  • 27:11 - 27:13
    that the doctor may have missed,
  • 27:13 - 27:14
    and that's fine.
  • 27:14 - 27:17
    But research shows that it also has
  • 27:17 - 27:19
    the exact opposite effect.
  • 27:19 - 27:21
    Doctors become so focused on the
  • 27:21 - 27:22
    highlighted areas that they only pay
  • 27:22 - 27:25
    cursory attention to other areas,
  • 27:25 - 27:29
    and often miss abnormalities or
  • 27:29 - 27:31
    cancers that aren't highlighted.
  • 27:31 - 27:33
    The latest research suggests that
  • 27:33 - 27:36
    these prompt-systems,
  • 27:36 - 27:38
    which as you know are very very common
  • 27:38 - 27:40
    in software in general,
  • 27:40 - 27:43
    these prompt-systems seem to improve
  • 27:43 - 27:49
    the performance of less-expert image readers
  • 27:49 - 27:53
    on simpler challenges, but decrease the performance
  • 27:53 - 27:59
    of expert readers on very, very hard challenges.
  • 27:59 - 28:04
    The phenomenon of automation complacency
  • 28:04 - 28:07
    and automation bias points to
  • 28:07 - 28:11
    an even deeper and more insidious problem
  • 28:11 - 28:14
    that poorly designed software or
  • 28:14 - 28:17
    poorly designed automated systems often trigger.
  • 28:17 - 28:20
    And that is that in both of those cases,
  • 28:20 - 28:22
    with complacency and bias,
  • 28:22 - 28:27
    you see a person disengaging from the world,
  • 28:27 - 28:30
    disengaging from his or her circumstances,
  • 28:30 - 28:33
    disengaging from the task at hand,
  • 28:33 - 28:36
    simply assuming that the computer will handle it.
  • 28:36 - 28:38
    And indeed the computer has been designed,
  • 28:38 - 28:40
    whatever system we're talking about,
  • 28:40 - 28:41
    has been designed to handle as much of
  • 28:41 - 28:43
    the chore as possible.
  • 28:43 - 28:45
    And what happens then is we see
  • 28:45 - 28:49
    an erosion of talent on the part of the person.
  • 28:49 - 28:51
    Either the person isn't developing
  • 28:51 - 28:53
    strong, rich talents,
  • 28:53 - 28:55
    or their existing talents are beginning
  • 28:55 - 28:56
    to get rusty.
  • 28:56 - 28:59
    And the reason is pretty obvious.
  • 28:59 - 29:01
    We all know either intuitively,
  • 29:01 - 29:02
    or if you've read anything about this,
  • 29:02 - 29:05
    how we develop rich talents,
  • 29:05 - 29:07
    sophisticated talents, is by practice.
  • 29:07 - 29:09
    By doing things over and over again,
  • 29:09 - 29:11
    facing lots of different challenges
  • 29:11 - 29:13
    in lots of different circumstances,
  • 29:13 - 29:15
    figuring out how to overcome them.
  • 29:15 - 29:17
    That's how we build the most
  • 29:17 - 29:19
    sophisticated skills and how we continue
  • 29:19 - 29:21
    to refine them.
  • 29:21 - 29:26
    And this crucial element in learning
  • 29:26 - 29:29
    in all sorts of forms is often
  • 29:29 - 29:31
    referred to as the Generation Effect.
  • 29:31 - 29:33
    And what that means is
  • 29:33 - 29:36
    if you're actively engaged in some task,
  • 29:36 - 29:38
    in some form of work,
  • 29:38 - 29:40
    you're going to not only perform better
  • 29:40 - 29:41
    but learn more and become more expert
  • 29:41 - 29:44
    than if you're simply an observer...
  • 29:44 - 29:47
    simply passively watching as things progress.
  • 29:47 - 29:50
    The generation effect was first observed
  • 29:50 - 29:54
    in this very simple experiment
  • 29:54 - 29:58
    involving people's ability to expand vocabulary.
  • 29:58 - 30:01
    Learn vocabulary, remember vocabulary.
  • 30:01 - 30:03
    And what the researchers did,
  • 30:03 - 30:04
    this was back in the '70s,
  • 30:04 - 30:06
    is they got two groups of people
  • 30:06 - 30:09
    to try to memorize lots of pairs of antonyms.
  • 30:09 - 30:11
    Lots of pairs of opposites.
  • 30:11 - 30:14
    And the only difference between the two groups
  • 30:14 - 30:17
    was that one group used flash cards
  • 30:17 - 30:19
    that had both words spelled out entirely
  • 30:19 - 30:21
    (hot, cold),
  • 30:21 - 30:22
    the other had flashcards that just had
  • 30:22 - 30:25
    the first word (hot), but then provided
  • 30:25 - 30:26
    only the first letter of the second word
  • 30:26 - 30:28
    (so C).
  • 30:28 - 30:30
    And what they found was that indeed,
  • 30:30 - 30:33
    the people who used the full words
  • 30:33 - 30:36
    remembered far fewer of the antonyms
  • 30:36 - 30:38
    than the people who had to fill in
  • 30:38 - 30:41
    the second word.
  • 30:41 - 30:44
    There's a little bit more brain activity
  • 30:44 - 30:45
    involved here. You actually
  • 30:45 - 30:49
    have to call to mind what this word is.
  • 30:49 - 30:50
    You have to generate it.
  • 30:50 - 30:52
    And just that small difference
  • 30:52 - 30:56
    gives you better learning, better retention.
  • 30:56 - 30:59
    A few years later, some other researchers,
  • 30:59 - 31:01
    some other professors in this area,
  • 31:01 - 31:03
    realized that actually this is kind of
  • 31:03 - 31:06
    a form of automation.
  • 31:06 - 31:09
    What this does, giving the full word,
  • 31:09 - 31:13
    in essence automates filling in the word.
  • 31:13 - 31:16
    They explained this as, in effect,
  • 31:16 - 31:17
    a phenomenon related to
  • 31:17 - 31:19
    automation complacency.
  • 31:19 - 31:21
    You might be completely unconscious of it,
  • 31:21 - 31:24
    but your brain is a little more complacent,
  • 31:24 - 31:26
    it doesn't have to work as hard,
  • 31:26 - 31:28
    in this mode. And that makes
  • 31:28 - 31:30
    a big difference.
  • 31:30 - 31:32
    And it turns out that the generation effect
  • 31:32 - 31:36
    explains a whole lot about
  • 31:36 - 31:39
    how we learn and develop skill in all sorts of places.
  • 31:39 - 31:44
    It's definitely not just restricted to studies of vocabulary.
  • 31:44 - 31:47
    You see it everywhere.
  • 31:47 - 31:50
    If you're actively involved, you learn more,
  • 31:50 - 31:51
    you develop more expertise.
  • 31:51 - 31:53
    If you're not, you don't.
  • 31:53 - 31:56
    And unfortunately, with software
  • 31:56 - 31:59
    more and more the programmer,
  • 31:59 - 32:02
    the designer, actually gets in the way
  • 32:02 - 32:03
    of the generation effect.
  • 32:03 - 32:05
    And not by accident, but on purpose
  • 32:05 - 32:07
    because of course the things we
  • 32:07 - 32:08
    tend to automate, the things we
  • 32:08 - 32:10
    tend to simplify for people,
  • 32:10 - 32:12
    are the things that are challenging.
  • 32:12 - 32:14
    You look at a process, you look at where
  • 32:14 - 32:15
    people are struggling,
  • 32:15 - 32:19
    and that is both often the most interesting
  • 32:19 - 32:21
    thing to automate, but also the place
  • 32:21 - 32:25
    that whoever is paying you to write software
  • 32:25 - 32:27
    is encouraging you to automate
  • 32:27 - 32:29
    because it seems to create efficiency.
  • 32:29 - 32:31
    It seems to create productivity.
  • 32:31 - 32:34
    But what we're doing is designing
  • 32:34 - 32:36
    lots of systems, lots of software,
  • 32:36 - 32:40
    that actually deliberately (if you look at it
  • 32:40 - 32:42
    in that sense) gets in the way of people's
  • 32:42 - 32:45
    ability to learn and create expertise.
  • 32:45 - 32:48
    There was a series of experiments done
  • 32:48 - 32:53
    beginning about ten years ago
  • 32:53 - 32:55
    by this young cognitive psychologist
  • 32:55 - 32:59
    in Holland named Christof van Nimwegen
  • 32:59 - 33:01
    and he did something very interesting.
  • 33:01 - 33:04
    He got a series of different tasks,
  • 33:04 - 33:06
    one of them was solving a difficult logic problem,
  • 33:06 - 33:09
    one of them was organizing a conference
  • 33:09 - 33:12
    where you had a large number of conference rooms,
  • 33:12 - 33:13
    large number of speakers,
  • 33:13 - 33:15
    large number of time slots and you had
  • 33:15 - 33:20
    to optimize how you put all those things together.
  • 33:20 - 33:22
    So a number of tasks that had lots of components,
  • 33:22 - 33:24
    required a certain amount of smarts,
  • 33:24 - 33:28
    required you to work through a hard problem over time.
  • 33:28 - 33:31
    And in each case he got groups of people,
  • 33:31 - 33:33
    divided them into two,
  • 33:33 - 33:37
    created software--two different applications
  • 33:37 - 33:39
    for doing these.
  • 33:39 - 33:41
    One application was very bare-bones,
  • 33:41 - 33:44
    it just provided you with the scenario
  • 33:44 - 33:45
    and then you had to work through it.
  • 33:45 - 33:47
    The other was very helpful,
  • 33:47 - 33:49
    had prompts, it had highlights,
  • 33:49 - 33:52
    it had on-screen advice when you
  • 33:52 - 33:54
    got to a point where you could
  • 33:54 - 33:56
    do some moves, but you couldn't do others.
  • 33:56 - 33:57
    It would highlight the ones you could do,
  • 33:57 - 33:59
    and gray out the ones you couldn't.
  • 33:59 - 34:00
    And then he let them go,
  • 34:00 - 34:02
    and watched what happened.
  • 34:02 - 34:03
    Well, as you might expect,
  • 34:03 - 34:05
    the people with the more helpful software
  • 34:05 - 34:07
    got off to a great start.
  • 34:07 - 34:09
    The software was guiding them,
  • 34:09 - 34:12
    helping them make their initial decisions and moves.
  • 34:12 - 34:16
    They jumped out to a lead in terms
  • 34:16 - 34:18
    of solving the challenges.
  • 34:18 - 34:22
    But over time, the people using the
  • 34:22 - 34:24
    bare-bones software, the un-helpful software,
  • 34:24 - 34:26
    not only caught up, but actually
  • 34:26 - 34:29
    in all the cases ended up completing
  • 34:29 - 34:33
    the assignment much more efficiently.
  • 34:33 - 34:36
    They made far fewer incorrect moves,
  • 34:36 - 34:38
    far fewer mistakes.
  • 34:38 - 34:41
    They also seemed to have a much clearer strategy,
  • 34:41 - 34:43
    whereas the people using the helpful software
  • 34:43 - 34:44
    kind of just clicked around,
  • 34:44 - 34:47
    and finally van Nimwegen gave them
  • 34:47 - 34:51
    tests afterwards to measure their
  • 34:51 - 34:54
    conceptual understanding of what they had done.
  • 34:54 - 34:55
    People with the un-helpful software
  • 34:55 - 34:58
    had a much clearer conceptual understanding.
  • 34:58 - 35:00
    Then, eight months later, he invited
  • 35:00 - 35:04
    just the logic puzzle group...he invited
  • 35:04 - 35:05
    all the people who did that back,
  • 35:05 - 35:08
    had them solve the problem again.
  • 35:08 - 35:09
    The people who had, eight months earlier,
  • 35:09 - 35:11
    used the unhelpful software,
  • 35:11 - 35:14
    solved the puzzle twice as fast as those
  • 35:14 - 35:17
    who used the helpful software.
  • 35:17 - 35:20
    The more helpful the software,
  • 35:20 - 35:22
    the less learning, the weaker the performance,
  • 35:22 - 35:23
    the less strategic thinking of the
  • 35:23 - 35:25
    people who used it.
  • 35:25 - 35:28
    Again, this underscores a fundamental
  • 35:28 - 35:32
    paradox that people face--
  • 35:32 - 35:34
    people who develop these programs,
  • 35:34 - 35:36
    and people who use them--
  • 35:36 - 35:39
    where our instinct to make things easier,
  • 35:39 - 35:41
    to find the places of friction
  • 35:41 - 35:42
    and remove the friction,
  • 35:42 - 35:47
    can actually lead to counter-productive results
  • 35:47 - 35:49
    where you're eroding performance
  • 35:49 - 35:52
    and eroding learning.
  • 35:52 - 35:55
    So if you look at all the psychological studies
  • 35:55 - 35:58
    and the human factor studies of how
  • 35:58 - 36:00
    people interact with machines and technology
  • 36:00 - 36:02
    and computers, and you also
  • 36:02 - 36:05
    combine it with psychological understanding
  • 36:05 - 36:08
    of how we learn, what you see is that
  • 36:08 - 36:11
    there's a very complex cycle involved.
  • 36:11 - 36:15
    If you have a high degree of engagement
  • 36:15 - 36:18
    with people, if they're really pushed
  • 36:18 - 36:22
    to engage the challenges, work hard,
  • 36:22 - 36:24
    maintain their awareness of their circumstances,
  • 36:24 - 36:29
    you provoke a state of flow
  • 36:29 - 36:32
    (if you've read Mihaly Csikszentmihalyi's book "Flow",
  • 36:32 - 36:34
    or are familiar with it), we perform
  • 36:34 - 36:37
    optimally when we're really immersed in
  • 36:37 - 36:39
    our challenge, when we're stretching our talents,
  • 36:39 - 36:41
    learning new talents.
  • 36:41 - 36:43
    That's the optimal state to be in.
  • 36:43 - 36:45
    It gives us more skills, pushes us
  • 36:45 - 36:47
    to new talents, and it also happens
  • 36:47 - 36:49
    to be the state in which we're most fulfilled,
  • 36:49 - 36:51
    most satisfied.
  • 36:51 - 36:54
    Often, people have this feeling that
  • 36:54 - 36:56
    if they were relieved of work,
  • 36:56 - 36:58
    relieved of effort, they'd be happier.
  • 36:58 - 36:59
    Turns out they're not.
  • 36:59 - 37:01
    They're more miserable, they're actually
  • 37:01 - 37:03
    happier when they are working hard,
  • 37:03 - 37:04
    facing a challenge.
  • 37:04 - 37:06
    And so this sense of fulfillment
  • 37:06 - 37:08
    prolongs your sense of engagement,
  • 37:08 - 37:11
    intensifies it, and you get this very nice cycle.
  • 37:11 - 37:13
    People are performing at a high level,
  • 37:13 - 37:16
    they're learning talents, and they're fulfilled.
  • 37:16 - 37:17
    They're happy. They're satisfied.
  • 37:17 - 37:19
    They like their experience.
  • 37:19 - 37:21
    All too often you stick automation
  • 37:21 - 37:24
    into here, particularly if you haven't
  • 37:24 - 37:27
    thought through all of the implications,
  • 37:27 - 37:28
    and you break this cycle.
  • 37:28 - 37:30
    Suddenly you decrease engagement
  • 37:30 - 37:33
    and all the other things go down as well.
  • 37:33 - 37:36
    You see this today in all sorts of places.
  • 37:36 - 37:38
    You see it with pilots whose jobs
  • 37:38 - 37:40
    have been highly, highly automated.
  • 37:40 - 37:42
    Automation has been a very good,
  • 37:42 - 37:47
    a very positive development for a hundred years
  • 37:47 - 37:50
    in aviation, but recently as pilots'
  • 37:50 - 37:53
    role in control of the aircraft, manual control
  • 37:53 - 37:55
    has gone down to the point where maybe
  • 37:55 - 37:57
    they're in control for three minutes
  • 37:57 - 37:59
    in a flight, you see problems with
  • 37:59 - 38:01
    the erosion of engagement,
  • 38:01 - 38:02
    the erosion of situational awareness,
  • 38:02 - 38:04
    and the erosion of talent.
  • 38:04 - 38:06
    And unfortunately on those rare occasions
  • 38:06 - 38:09
    when the autopilot fails for whatever reason
  • 38:09 - 38:11
    or there's very weird circumstances,
  • 38:11 - 38:14
    you increase the odds that the pilots will
  • 38:14 - 38:19
    make mistakes, sometimes with dangerous implications.
  • 38:19 - 38:25
    So, why do we go down this path so often?
  • 38:25 - 38:29
    Why do we create computer programs,
  • 38:29 - 38:32
    robotic systems, other automated systems,
  • 38:32 - 38:37
    that instead of raising people up
  • 38:37 - 38:39
    to their highest level of talent,
  • 38:39 - 38:41
    highest level awareness and satisfaction,
  • 38:41 - 38:43
    have the opposite effect?
  • 38:43 - 38:45
    I think much of the blame can be placed
  • 38:45 - 38:47
    on what I would argue is the dominant
  • 38:47 - 38:50
    design philosophy or ethic
  • 38:50 - 38:53
    that governs the people who are making these programs,
  • 38:53 - 38:56
    and making these machines.
  • 38:56 - 38:57
    It's what's often referred to as
  • 38:57 - 39:00
    technology-centered design.
  • 39:00 - 39:01
    And basically what that means is
  • 39:01 - 39:03
    the engineer or the programmer or whatever
  • 39:03 - 39:07
    starts by asking "What can the computer do?"
  • 39:07 - 39:09
    "What can the technology do?"
  • 39:09 - 39:11
    And then anything that the computer
  • 39:11 - 39:14
    or technology can do, they give that
  • 39:14 - 39:17
    responsibility to the computer.
  • 39:17 - 39:20
    And you can see why this is what
  • 39:20 - 39:22
    engineers and programmers would want to do
  • 39:22 - 39:23
    because that's their job:
  • 39:23 - 39:28
    to simulate or automate interesting work
  • 39:28 - 39:31
    with software, or with robots.
  • 39:31 - 39:33
    So that's a very natural thing to do.
  • 39:33 - 39:34
    But what happens then is
  • 39:34 - 39:36
    what the human being gets is just
  • 39:36 - 39:38
    what the computer can't do,
  • 39:38 - 39:40
    or what we haven't yet figured out how
  • 39:40 - 39:43
    to get the computer to do.
  • 39:43 - 39:45
    And that tends to be things like
  • 39:45 - 39:48
    monitoring screens for anomalies,
  • 39:48 - 39:50
    entering data, and oh by the way
  • 39:50 - 39:53
    you're also the last line of defense so if
  • 39:53 - 39:55
    everything goes to hell you've got to take over
  • 39:55 - 39:58
    and get us out of the fix.
  • 39:58 - 40:00
    Those are things that people are actually
  • 40:00 - 40:02
    pretty bad at.
  • 40:02 - 40:05
    We're terrible at monitoring things,
  • 40:05 - 40:07
    waiting for an anomaly;
  • 40:07 - 40:11
    we can't focus on it for more than about half an hour.
  • 40:11 - 40:14
    Entering data, becoming the sensor for
  • 40:14 - 40:18
    the computer is a pretty dull job in most cases.
  • 40:18 - 40:21
    If you set up a system that ensures that
  • 40:21 - 40:24
    the operator is going to have a low level
  • 40:24 - 40:26
    of situational awareness then that
  • 40:26 - 40:28
    is not the person you want to have
  • 40:28 - 40:31
    as the last line of defense.
  • 40:31 - 40:35
    The alternative is something called, surprise,
  • 40:35 - 40:36
    human-centered design.
  • 40:36 - 40:38
    Where you start by saying,
  • 40:38 - 40:40
    "what are human beings good at?"
  • 40:40 - 40:41
    And you look at the fact that
  • 40:41 - 40:43
    there's lots of important things that we're
  • 40:43 - 40:46
    actually still much better than computers at.
  • 40:46 - 40:49
    We're creative, we have imagination,
  • 40:49 - 40:51
    we can think conceptually,
  • 40:51 - 40:52
    we have an understanding of the world,
  • 40:52 - 40:55
    we can think critically, we can think skeptically...
  • 40:55 - 40:58
    And then you bring in the software,
  • 40:58 - 41:01
    you bring the automation first to aid
  • 41:01 - 41:05
    the person in exploiting those capabilities,
  • 41:05 - 41:09
    but also to fill in the gaps and the flaws
  • 41:09 - 41:11
    that we all have as human beings.
  • 41:11 - 41:14
    So we're not great at processing huge amounts
  • 41:14 - 41:16
    of information quickly,
  • 41:16 - 41:18
    we're subject to biases in our thinking.
  • 41:18 - 41:20
    You can use software to counteract these,
  • 41:20 - 41:25
    or to provide an additional set of capabilities.
  • 41:25 - 41:28
    And if you go that path you get both
  • 41:28 - 41:31
    the best of the human and the best of the machine,
  • 41:31 - 41:33
    or the best of the technology.
  • 41:33 - 41:37
    Some of the ideas here are very simple.
  • 41:37 - 41:39
    For instance, with pilots instead of
  • 41:39 - 41:43
    allowing them to turn on total flight automation
  • 41:43 - 41:45
    once they're off the ground and then
  • 41:45 - 41:48
    not bother to turn it off until they're about ready to land,
  • 41:48 - 41:51
    you can design the software to
  • 41:51 - 41:55
    give control back to the pilot every once in a while
  • 41:55 - 41:57
    at random moments.
  • 41:57 - 42:00
    And just knowing that you're
  • 42:00 - 42:02
    going to be called upon at some random time
  • 42:02 - 42:05
    to take back control, that improves
  • 42:05 - 42:09
    people's awareness and concentration immeasurably.
  • 42:09 - 42:11
    It makes it less likely that they're going to
  • 42:11 - 42:13
    completely space out.
  • 42:13 - 42:15
    Or in the example of the radiologist,
  • 42:15 - 42:18
    and this goes for examples of decision support
  • 42:18 - 42:22
    or expert system or analytical programs in general,
  • 42:22 - 42:25
    one thing you can do is instead of
  • 42:25 - 42:27
    bringing in the software prompts
  • 42:27 - 42:29
    and the software advice right at the outset,
  • 42:29 - 42:31
    you can first encourage the human being
  • 42:31 - 42:34
    to deal with the problem, to look
  • 42:34 - 42:37
    at the image on his or her own or to
  • 42:37 - 42:39
    do whatever analytical chore is there,
  • 42:39 - 42:41
    and then bring in the software afterwards as
  • 42:41 - 42:46
    a further aid, bringing new information to bear.
  • 42:46 - 42:48
    And that too means you get the best
  • 42:48 - 42:50
    of both the human and the machine.
  • 42:50 - 42:53
    Unfortunately, we don't do that,
  • 42:53 - 42:55
    or at least not very often.
  • 42:55 - 42:57
    We don't pursue human-centered design,
  • 42:57 - 42:59
    and I think it's for a couple of reasons.
  • 42:59 - 43:03
    One is that we human beings, as I said before,
  • 43:03 - 43:07
    are very eager to hand off any kind of work
  • 43:07 - 43:09
    to machines, to software, to other people,
  • 43:09 - 43:13
    because we are afflicted by
  • 43:13 - 43:16
    what psychologists term "miswanting".
  • 43:16 - 43:18
    We think we want to be freed of labor,
  • 43:18 - 43:20
    freed of hard work, freed of challenge.
  • 43:20 - 43:23
    And when we are freed of it we feel miserable,
  • 43:23 - 43:25
    we feel anxious, we get self-absorbed,
  • 43:25 - 43:27
    and actually our optimal experience comes
  • 43:27 - 43:30
    when we are working hard at things.
  • 43:30 - 43:31
    So there's something inside of us
  • 43:31 - 43:33
    that is very eager to get rid of stuff,
  • 43:33 - 43:34
    to get rid of effort,
  • 43:34 - 43:37
    even if it's not to our own benefit.
  • 43:37 - 43:39
    And then the other reason, which I think
  • 43:39 - 43:42
    is one that's even harder to deal with,
  • 43:42 - 43:46
    is the pursuit of efficiency and productivity
  • 43:46 - 43:49
    above all other goals.
  • 43:49 - 43:52
    And you can certainly see why hospitals
  • 43:52 - 43:55
    who want the highest productivity possible
  • 43:55 - 43:57
    from radiologists would be averse
  • 43:57 - 43:59
    to saying, "well we'll let the radiologists
  • 43:59 - 44:02
    look at the image and then we'll bring in the software"
  • 44:02 - 44:05
    because that extends the time that a radiologist
  • 44:05 - 44:07
    is going to look at this, and that's true of
  • 44:07 - 44:09
    any of these kinds of analytical chores.
  • 44:09 - 44:12
    And so there's this tension
  • 44:12 - 44:14
    between the pursuit of efficiency above
  • 44:14 - 44:16
    all other things and productivity,
  • 44:16 - 44:18
    and the development of skill,
  • 44:18 - 44:20
    the development of talent,
  • 44:20 - 44:22
    the development of high levels of human performance
  • 44:22 - 44:25
    and ultimately the sense of satisfaction
  • 44:25 - 44:26
    that people get.
  • 44:26 - 44:29
    I think in the long run you see signs that
  • 44:29 - 44:32
    that begins to backfire.
  • 44:32 - 44:34
    Toyota, earlier this year, announced that
  • 44:34 - 44:36
    it was replacing some of its robots
  • 44:36 - 44:39
    in its Japanese factories with human beings,
  • 44:39 - 44:41
    because even though the robots are more efficient,
  • 44:41 - 44:44
    the company had struggled with quality problems.
  • 44:44 - 44:47
    It's had to recall 20 million cars in recent years,
  • 44:47 - 44:49
    and not only is that bad for business,
  • 44:49 - 44:52
    but Toyota's entire culture is built around
  • 44:52 - 44:55
    quality manufacturing, so it erodes its culture.
  • 44:55 - 44:57
    So by bringing back human beings
  • 44:57 - 45:01
    it wants to bring back both the spirit
  • 45:01 - 45:03
    and the reality of human craftsmanship,
  • 45:03 - 45:06
    of people who can actually think critically
  • 45:06 - 45:08
    about what they're doing.
  • 45:08 - 45:09
    And one of the benefits it believes
  • 45:09 - 45:11
    it will get is that it will be smarter
  • 45:11 - 45:14
    about how it programs its robots.
  • 45:14 - 45:16
    It will be able to continually take new
  • 45:16 - 45:19
    human thinking, new human talent and insight
  • 45:19 - 45:20
    and then incorporate that into
  • 45:20 - 45:23
    the processes that even the robots are doing.
  • 45:23 - 45:27
    That's a good news example,
  • 45:27 - 45:30
    but I'm not going to oversimplify this
  • 45:30 - 45:33
    or lie to you...I think this drive to place
  • 45:33 - 45:36
    efficiency above all other things
  • 45:36 - 45:43
    is a very hard...very hard instinct,
  • 45:43 - 45:46
    a very hard economic imperative to overcome.
  • 45:46 - 45:50
    But nevertheless, I think it's absolutely imperative
  • 45:50 - 45:54
    that everyone who designs software and robotics,
  • 45:54 - 45:56
    and all of us who use them,
  • 45:56 - 46:00
    are conscious of the fact that there is this trade-off.
  • 46:00 - 46:04
    And that technology isn't just a means of production,
  • 46:04 - 46:06
    as we often tend to think of it,
  • 46:06 - 46:09
    it really is a means of experience.
  • 46:09 - 46:10
    And it always has been, since the first
  • 46:10 - 46:14
    technologies were developed by our distant ancestors.
  • 46:14 - 46:17
    Technology at its best, tools at their best
  • 46:17 - 46:20
    bring us out into the world,
  • 46:20 - 46:23
    expand our skills, and our talents,
  • 46:23 - 46:27
    make the world an interesting place.
  • 46:27 - 46:29
    And we shouldn't forget that about ourselves
  • 46:29 - 46:33
    as we continue at high speed into a future
  • 46:33 - 46:36
    where more and more aspects of human experience
  • 46:36 - 46:39
    are going to be off-loaded to computers
  • 46:39 - 46:41
    and to machines.
  • 46:41 - 46:44
    So, thank you very much for your attention.
  • 46:44 - 46:50
    [ applause ] Thank you. [ applause ]
  • 46:50 - 46:52
    One thing that immediately sprung to mind
  • 46:52 - 46:54
    with most of your examples is that
  • 46:54 - 46:57
    they seem like examples of poor automation
  • 46:57 - 47:00
    and I'm wondering if you could...
  • 47:00 - 47:03
    say whether you feel that there could be
  • 47:03 - 47:08
    or are already any sufficiently flawless technologies
  • 47:08 - 47:10
    that we don't have to worry about the
  • 47:10 - 47:12
    problems you're describing.
  • 47:12 - 47:14
    I think in all instances, you have to worry about them.
  • 47:14 - 47:17
    I agree with you that a lot of these problems
  • 47:17 - 47:22
    are not problems about automation per se.
  • 47:22 - 47:24
    No one is going to stop the course of automation.
  • 47:24 - 47:26
    You can argue that the invention of the wheel
  • 47:26 - 47:28
    was an example of automating something,
  • 47:28 - 47:31
    and I don't think any of us regrets that,
  • 47:31 - 47:35
    but I do think it's often unwise design decisions,
  • 47:35 - 47:37
    or unwise assumptions that come in.
  • 47:37 - 47:39
    But as to the question of whether
  • 47:39 - 47:43
    we will create infallible automation...
  • 47:43 - 47:45
    I don't think so.
  • 47:45 - 47:47
    I mean often you get this point of view
  • 47:47 - 47:49
    and it seems to be quite common
  • 47:49 - 47:51
    in Silicon Valley, if I can say that,
  • 47:51 - 47:57
    that people are only going to be a temporary nuisance
  • 47:57 - 47:58
    in a lot of these processes.
  • 47:58 - 48:00
    We're going to have fully self-driving cars.
  • 48:00 - 48:01
    We're going to have fully self-flying planes.
  • 48:01 - 48:08
    We're going to have fully analytical systems,
  • 48:08 - 48:10
    big data systems that can pump out
  • 48:10 - 48:11
    the right answer, we won't have to
  • 48:11 - 48:12
    worry about it.
  • 48:12 - 48:15
    I don't think that that's actually going to happen.
  • 48:15 - 48:17
    I mean, it might happen eventually.
  • 48:17 - 48:19
    It's very, very difficult to remove
  • 48:19 - 48:22
    the human being altogether.
  • 48:22 - 48:27
    And so, to me what that means is
  • 48:27 - 48:30
    okay, fine, you can pursue that
  • 48:30 - 48:34
    as some ideal of totally flawless automation,
  • 48:34 - 48:37
    but in the meantime we live and work
  • 48:37 - 48:39
    in the present not in the future.
  • 48:39 - 48:40
    And for the foreseeable future in all
  • 48:40 - 48:42
    of these processes there are going
  • 48:42 - 48:44
    to be people involved.
  • 48:44 - 48:48
    And there are going to be computers involved.
  • 48:48 - 48:51
    And instead of just...saying let's put
  • 48:51 - 48:53
    the computer's interests before the person's,
  • 48:53 - 48:55
    I think the wise way is, as I said,
  • 48:55 - 48:57
    to go with a more human-centered design
  • 48:57 - 49:01
    that realizes and starts with the assumption
  • 49:01 - 49:03
    that the human being is going to play
  • 49:03 - 49:06
    an essential role in these things for
  • 49:06 - 49:09
    as long as we can imagine,
  • 49:09 - 49:11
    or for as long as we can foresee.
  • 49:11 - 49:14
    And so we better design them to
  • 49:14 - 49:15
    get the most out of the person
  • 49:15 - 49:17
    as well as the technology.
  • 49:18 - 49:20
    [ offscreen audience member ]
    (Thanks, that was great.)
  • 49:20 - 49:23
    (I certainly agree with the idea of focusing)
  • 49:23 - 49:26
    on the human-centered design.
  • 49:26 - 49:27
    I want to make one quick comment,
  • 49:27 - 49:29
    and then a question.
  • 49:29 - 49:31
    I noticed on your "hot/cold" thing
  • 49:31 - 49:33
    there was another researcher who took
  • 49:33 - 49:35
    passages from books and presented them,
  • 49:35 - 49:37
    and then gave a multiple-choice quiz, or whatever,
  • 49:37 - 49:39
    and then they took the same passage
  • 49:39 - 49:42
    and deleted a key sentence.
  • 49:42 - 49:45
    And people did better understanding
  • 49:45 - 49:47
    the point then. But somehow no authors
  • 49:47 - 49:50
    have the guts to do that.
  • 49:50 - 49:50
    [ speaker laughs ]
  • 49:50 - 49:52
    So, will you be that author?
  • 49:52 - 49:55
    To delete the important sentences from your book
  • 49:55 - 49:58
    and make the reader engage more
  • 49:58 - 49:59
    and therefore learn better?
  • 49:59 - 50:01
    If any of you buy my book,
  • 50:01 - 50:03
    I would be happy to take a sharpie
  • 50:03 - 50:05
    and erase certain sentences
  • 50:05 - 50:07
    and you can get the full benefit...
  • 50:07 - 50:08
    But, I will take that under advisement
  • 50:08 - 50:10
    for...future books.
  • 50:10 - 50:11
    [ same off-screen audience member ]
  • 50:11 - 50:12
    (And then a question,)
  • 50:12 - 50:14
    (I'm interested in the difference between)
  • 50:14 - 50:16
    (automation complacency and)
  • 50:16 - 50:18
    authority complacency.
  • 50:18 - 50:21
    So you see a lot of these incident reports,
  • 50:21 - 50:23
    and there will be some underling who said
  • 50:23 - 50:25
    "you know, I kind of noticed something
  • 50:25 - 50:28
    was going wrong, but the surgeon
  • 50:28 - 50:30
    or the pilot or the CEO seems so sure
  • 50:30 - 50:33
    that I didn't want to say anything."
  • 50:33 - 50:35
    And that has nothing to do with automation,
  • 50:35 - 50:37
    it's just...authority.
  • 50:39 - 50:42
    I think that's probably pretty much exactly
  • 50:42 - 50:44
    the same phenomenon.
  • 50:44 - 50:45
    And I actually do think it probably
  • 50:45 - 50:46
    has something to do with automation
  • 50:46 - 50:50
    because you could say that automation complacency
  • 50:50 - 50:55
    comes when the computer or the machine
  • 50:55 - 50:57
    takes the role of authority.
  • 50:57 - 51:00
    So the person defers to it, and I think
  • 51:00 - 51:02
    that's certainly one way to interpret
  • 51:02 - 51:04
    a lot of the findings.
  • 51:04 - 51:06
    That you don't question the machine,
  • 51:06 - 51:08
    you don't question the automation
  • 51:08 - 51:10
    even when it would be wise to do so.
  • 51:10 - 51:12
    So I think they're probably...
  • 51:12 - 51:13
    I think complacency has been a problem
  • 51:13 - 51:15
    since long before computers came around
  • 51:15 - 51:18
    for people for those reasons and others.
  • 51:18 - 51:21
    But, we've created a new way to generate
  • 51:21 - 51:23
    the same phenomenon.
  • 51:23 - 51:27
    [ off-screen audience member ]
    (I'm curious to ask)
  • 51:27 - 51:30
    (if you know much research about)
  • 51:30 - 51:34
    what percentage of time or experiences
  • 51:34 - 51:36
    we need to keep manual in order
  • 51:36 - 51:39
    to make sure that skills don't fade away.
  • 51:39 - 51:41
    And if you have any thoughts on
  • 51:41 - 51:44
    how much this transfers from domain to domain?
  • 51:44 - 51:46
    So like in the wayfinding example of the Inuit.
  • 51:46 - 51:50
    Right? You might say you could use GPS
  • 51:50 - 51:52
    80% of the time, but you've got
  • 51:52 - 51:54
    to do 20% of it manually to keep your skills up.
  • 51:54 - 51:58
    But maybe it's different for airline pilots or
  • 51:58 - 51:59
    the other examples that you cited...
  • 51:59 - 52:01
    Do you know how much of this has
  • 52:01 - 52:02
    been studied and how much it might
  • 52:02 - 52:04
    vary from domain to domain?
  • 52:04 - 52:06
    As far as the second question,
  • 52:06 - 52:11
    I don't know. I don't know...any rules of thumb
  • 52:11 - 52:15
    either in specific domains or across domains.
  • 52:15 - 52:17
    I can say, though, that there are enormous
  • 52:17 - 52:21
    amounts of research that's been done in aviation,
  • 52:21 - 52:24
    because of the fact that the risk is so high,
  • 52:24 - 52:26
    and lots of people can die,
  • 52:26 - 52:28
    and lots of money can be lost.
  • 52:28 - 52:30
    You know, ever since computerization
  • 52:30 - 52:32
    of flight began back in the '70s
  • 52:32 - 52:35
    whether it's NASA or the FAA
  • 52:35 - 52:38
    or universities, there's been tons of research.
  • 52:38 - 52:42
    So my guess is that there has been...
  • 52:42 - 52:44
    There probably have been tests
  • 52:44 - 52:46
    where you have different levels of automation
  • 52:46 - 52:49
    and manual control, and comparing
  • 52:49 - 52:51
    different levels of performance.
  • 52:51 - 52:55
    I didn't come across those specific studies
  • 52:55 - 52:58
    in my work, but my guess is that
  • 52:58 - 53:01
    that would be an obvious thing that would have been done.
  • 53:01 - 53:05
    So, I'm saying in aviation there's probably
  • 53:05 - 53:08
    at least some sense of, you know,
  • 53:08 - 53:11
    at what point does performance start
  • 53:11 - 53:13
    to drop off, or start to drop off dramatically
  • 53:13 - 53:15
    because you've turned over too much
  • 53:15 - 53:17
    responsibility to the machine.
  • 53:17 - 53:20
    Whether that would also translate
  • 53:20 - 53:22
    into the same kind of percentages in different domains?
  • 53:22 - 53:23
    I don't know.
  • 53:23 - 53:24
    [ off-screen audience member ]
  • 53:24 - 53:27
    (Thanks for coming.)
  • 53:27 - 53:29
    (The talk was really interesting.)
  • 53:29 - 53:31
    (In your talk you pointed out that)
  • 53:31 - 53:33
    (technology always comes with trade-offs.)
  • 53:33 - 53:35
    And it's hard to disagree with that.
  • 53:35 - 53:38
    But I'm wondering about the title of your book.
  • 53:38 - 53:40
    The title is "The Glass Cage",
  • 53:40 - 53:42
    and that seems like...
  • 53:42 - 53:43
    calling technology a glass cage seems like
  • 53:43 - 53:45
    a much more negative assessment than
  • 53:45 - 53:48
    merely saying that it comes with trade-offs.
  • 53:48 - 53:49
    So I'm wondering if you can say
  • 53:49 - 53:51
    what motivates this title?
  • 53:51 - 53:57
    Well, the title is a reference back to pilots' experience.
  • 53:57 - 54:00
    Since the '70s, pilots and others
  • 54:00 - 54:02
    in the aviation business, have referred
  • 54:02 - 54:04
    to cockpits as glass cockpits.
  • 54:04 - 54:07
    And it's because, increasingly they're wrapped
  • 54:07 - 54:09
    with computer screens.
  • 54:09 - 54:12
    If you look at a modern...a modern
  • 54:12 - 54:16
    passenger jet it's insane amounts
  • 54:16 - 54:18
    of computer screens, all sorts of
  • 54:18 - 54:20
    input devices and stuff.
  • 54:20 - 54:26
    One aviation expert refers to the cockpit now
  • 54:26 - 54:28
    as a flying computer.
  • 54:28 - 54:31
    So, in one sense it's just kind of
  • 54:31 - 54:34
    a play on that, because what I argue is that
  • 54:34 - 54:38
    we can learn a lot from pilots' experience
  • 54:38 - 54:42
    as we enter into a world where essentially
  • 54:42 - 54:43
    more and more of us are going to be living
  • 54:43 - 54:45
    inside a glass cockpit.
  • 54:45 - 54:47
    We're going to be looking at monitors
  • 54:47 - 54:48
    to do more and more things.
  • 54:48 - 54:50
    We're already there, some would argue.
  • 54:50 - 54:54
    And I do think that what a lot of examples
  • 54:54 - 54:56
    of computer automation tell us is that
  • 54:56 - 55:00
    the glass cockpit can become a glass cage.
  • 55:00 - 55:05
    That if we design it to be the primary
  • 55:05 - 55:07
    or the essential way that we interact with the world
  • 55:07 - 55:10
    then it cuts us off from other sources of learning
  • 55:10 - 55:15
    and information that might be absolutely essential,
  • 55:15 - 55:17
    but we're so focused on what the computer
  • 55:17 - 55:19
    is telling us we lose that.
  • 55:19 - 55:21
    So, I mean it is intended to be a little bit ominous,
  • 55:21 - 55:25
    that we can either get trapped in this
  • 55:25 - 55:28
    glass cage, or we can use technology
  • 55:28 - 55:30
    in what I think is a more humane
  • 55:30 - 55:33
    and more balanced way.
  • 55:33 - 55:34
    [ off-screen audience member ]
    (Thanks for that question)
  • 55:34 - 55:36
    (And on that note, please join me)
  • 55:36 - 55:37
    (in thanking Nicholas Carr)
  • 55:37 - 55:39
    (for coming to Google).
  • 55:39 - 55:40
    Thank you.
  • 55:40 - 55:42
    [ applause ]
  • 55:42 - 55:45
    [ electronic music ]