-
Welcome.
-
Let's start with a pop quiz.
-
What do Benjamin Franklin, Karl Marx, and the philosopher Hannah Arendt have in common?
-
Anybody?
-
So they all proposed this notion of calling humans "homo faber" -
-
man, the tool maker.
-
So we make tools. We make tools, we alter the environment,
-
and then, the tools alter us.
-
And sometimes, we lament that.
-
And sometimes these tools have big effects - clothing, cooking, fire, automobiles,
-
computers, and so on.
-
Sometimes they have smaller effects.
-
But it looks like right now, we're in a period where we're going to start using
-
even more and more tools, and they're going to be ubiquitous throughout our life.
-
And our speaker today, Nicholas Carr, has taken it upon himself to investigate this.
-
How do these tools change us?
-
What's for the better? What's for the worse?
-
And can we figure out a way to design them so that we'll live better with them?
-
Welcome to Google, Nicholas Carr.
-
(applause)
-
Thank you. Thanks very much, Peter.
-
And thanks to Ann Farmer for shepherding me through the process of bringing me here,
-
and thanks to Google for hosting these events.
-
I've been to a couple of other Google offices,
-
but this is the first time I've been to the headquarters,
-
so it's exciting. The Googleplex has kind of played a role in my fantasy life
-
for a long time, I realize. Not a weird role.
-
Kind of a dull fantasy life, but what can I say?
-
So it's good to be here in person.
-
I started writing about technology about 15 years ago or so,
-
so more or less the same time that Google appeared on the scene,
-
and I think it was good timing for Google, and it was also
-
good timing for me, because there's been plenty,
-
obviously, to write about.
-
And like, I think, most technology writers,
-
I started off writing about the technology itself,
-
features, design, stuff like that, and also about
-
the economic and financial side of the business,
-
so competition between technology companies,
-
and so forth. But over the years I became kind of frustrated
-
by what I saw as the narrowness of that view,
-
that just looks at technology as technology
-
or as an economic factor. Because what was
-
becoming clear was that computers,
-
as they became smaller and smaller, and
-
more powerful, and more connected,
-
and as programmers became more adept at their work,
-
computing, computation, the digital connectivity and everything, was infusing
-
more and more aspects of everybody's life,
-
at work, during their leisure time, and so
-
it struck me that, as is always true with technology, and as Peter said,
-
technology frames, in many ways, the context in which we live
-
and it seemed to me important to look at this phenomenon,
-
the rise of the computer as kind of a central component of our lives,
-
from many different angles, to see what sociology could tell us,
-
what philosophy could tell us, and all these different
-
ways we can approach an important phenomenon that's
-
influencing our life.
-
Four or five years ago, I wrote a book called "The Shallows"
-
that kind of examined how the use of the internet
-
as an informational medium is influencing the way we think,
-
and how we're adapting to this kind of,
-
not only availability, of vast amounts of information,
-
but more and more an actual active barrage of it,
-
and what that meant for our ability to tune out the flow when we needed to,
-
and really engage attentively in one task or one train of thought.
-
And as I was writing "The Shallows", I also started becoming aware
-
of this other realm of research into computers
-
that struck me as dealing with an even broader question,
-
which is "What happens to people in their talents,
-
in their engagement with the world, when they
-
become reliant on computers in their various
-
forms, to do more and more things?"
-
So what happens when we automate, not just
-
factory work, but lots of white collar, professional thinking,
-
and what happens when we begin to automate a lot of just the day to day activities
-
that we do? We become more and more
-
reliant on computers, not necessarily to
-
take over all of the work, but to become our aides,
-
to help shepherd us through our days.
-
And that was the spark that led to "The Glass Cage",
-
my new book, which tries to look broadly
-
at the repercussions of our dependence
-
on computers and automation in general,
-
but also, looks at the question of "Are we
-
designing this stuff in an optimal fashion?"
-
If we want a world in which we get the benefits
-
of computers, but we also want people to live
-
full, meaningful lives, develop rich talents,
-
interact with the world in diverse ways, are we
-
designing all of these tools, everything from robots to
-
simple smart phone apps, in a way that
-
accomplishes both those things?
-
What I'd like to do is just read a short section from the book that,
-
to me, provides both an example of a lot of the
-
things I'm talking about, a lot of the tensions
-
I'm talking about, but also provides sort of a
-
metaphor for, I think, the circumstances we're in,
-
and the challenges we face.
-
This section, which comes in the middle of the book, is about the use of computers
-
and automation, not in a city or even in a kind of western country,
-
where there's tons of it, but in a place
-
that looks like this. Up in the Arctic Circle,
-
far, far away, a place you might think is shielded
-
from computers and automation, but, in fact,
-
is not. So let me just read this to you.
-
"The small island of Igloolik, lying off the coast
-
of the Melville Peninsula, in the Nunavut territory
-
of the Canadian North, is a bewildering place
-
in the winter. The average temperature hovers
-
around 20 degrees below zero, thick sheets
-
of sea ice cover the surrounding waters, the sun
-
is absent. Despite the brutal conditions,
-
Inuit hunters have for some 4,000 years ventured out from their homes
-
on the island and traversed miles of ice and
-
tundra in search of caribou and other game.
-
The hunters' ability to navigate vast stretches
-
of barren arctic terrain, where landmarks are few,
-
snow formations are in constant flux, and
-
trails disappear overnight,
-
has amazed voyagers and scientists for centuries.
-
The Inuit's extraordinary way-finding skills
-
are born, not of technological prowess,
-
they've eschewed maps, compasses, and
-
other instruments, but of a profound understanding
-
of winds, snow drift patterns, animal behavior,
-
stars, tides, and currents.
-
The Inuit are masters of perception,
-
or at least they used to be.
-
Something changed in Inuit culture at the turn of the millennium.
-
In the year 2000, the US government lifted many
-
of the restrictions on the civilian use
-
of the global positioning system.
-
The Igloolik hunters, who had already swapped
-
their dogsleds for snowmobiles, began to
-
rely on computer-generated maps and directions
-
to get around. Younger Inuit were particularly
-
eager to use the new technology.
-
In the past, a young hunter had to endure a long apprenticeship
-
with his elders, developing his way-finding talents
-
over many years. By purchasing a cheap GPS receiver,
-
he could skip the training and off-load
-
responsibility for navigation to the device.
-
The ease, convenience, and precision of automated
-
navigation made the Inuit's traditional
-
techniques seem antiquated and cumbersome by comparison.
-
But as GPS devices proliferated on the island, reports began to spread
-
of serious accidents during hunts, some
-
resulting in injuries and even deaths.
-
The cause was often traced to an over-reliance
-
on satellites. When a receiver breaks or its
-
batteries freeze, a hunter who hasn't developed
-
strong way-finding skills can easily become lost
-
in the featureless waste, and fall victim to exposure.
-
Even when the devices operate properly,
-
they present hazards. The routes, so meticulously
-
plotted on satellite maps, can give hunters
-
a form of tunnel vision. Trusting the GPS instructions,
-
they'll speed onto dangerously thin ice or
-
into other environmental perils that a skilled
-
navigator would have had the sense and foresight
-
to avoid. Some of these problems may eventually
-
be mitigated by improvements in navigational devices,
-
or by better instruction in their use.
-
What won't be mitigated is the loss of what one
-
tribal elder describes as the wisdom and knowledge of the Inuit.
-
The anthropologist Claudio Aporta, of Carleton University in Ottawa, has been studying
-
Inuit hunters for years.
-
He reports that while satellite navigation
-
offers attractive advantages, its adoption has
-
already brought a deterioration in way-finding
-
abilities, and more generally, a weakened feel
-
for the land. As a hunter on a GPS-equipped
-
snowmobile devotes his attention to the instructions
-
coming from the computer, he loses sight
-
of his surroundings. He travels blindfolded,
-
as Aporta puts it. A singular talent that has
-
defined and distinguished a people for
-
thousands of years may well evaporate over the
-
course of a generation or two.
-
When I relate that story to people, they tend to have one of two reactions,
-
and my guess is both of those reactions
-
are probably represented in this room.
-
One of the reactions is a feeling that
-
this is a poignant story, a troubling story,
-
a story about loss, about something essential
-
to the human condition, and that tends to be the
-
reaction I have to it, but then
-
there's a very different reaction, which is
-
"Well, welcome to the modern world."
-
Progress goes on, we adapt,
-
and in the end, things get better.
-
And if you think about it, most of us,
-
probably all human beings, once had a much
-
more sophisticated inner navigational sense,
-
a much more sophisticated perception of the world, the landscape,
-
and most of us have lost almost all of that.
-
And yet we didn't go extinct, we're still here.
-
By most measures we're thriving.
-
I think that is also a completely valid point of view.
-
It's true that we lose lots of skills over time,
-
and we gain new ones, and things go on.
-
So in some ways your reaction to this
-
is a value judgement about what's meaningful in human life.
-
But beyond those value judgements
-
I think one of the things that this story,
-
this experience, tells us is
-
how powerful a new tool can be
-
when introduced into a culture.
-
It can change the way people work,
-
the way people operate,
-
the way they think about what's important,
-
the way they go about their lives, in many different ways.
-
And it can do this very, very quickly.
-
Overturning some skill, or some talent,
-
or some way of life that's been around for
-
thousands of years, just in the course of
-
a year or two.
-
So introducing computer tools,
-
introducing automation, any kind of technology
-
that redefines what human beings do,
-
and redefines what we do versus what
-
we hand off to machines or computers,
-
can have very, very deep and very, very powerful effects.
-
And a lot of these effects are very difficult to anticipate.
-
So the Inuit hunters, the young hunters
-
didn't go out and buy GPS systems because
-
they wanted to increase the odds that they'd
-
get lost and die, and they probably weren't thinking about
-
eroding some fundamental aspect of culture.
-
They wanted to get the convenience,
-
the ease of the system, which is what
-
many of us are motivated by when we
-
decide to adopt some kind of new form
-
of automation in our lives.
-
When you look at all these unanticipated effects,
-
you can see a very common theme
-
that comes out in research about automation,
-
and particularly about computer automation.
-
It's something that's been documented
-
over and over again by human-factors
-
scientists and researchers, people who study
-
how people interact with computers and other machines.
-
And the concept is referred to as the substitution myth.
-
It's very simple.
-
It says that whenever you automate any part of an activity,
-
you fundamentally change the activity.
-
That's very different from what we anticipate.
-
Most people, either users of software or other automated systems,
-
or the designers, the makers, they assume that
-
you can take bits and pieces of what people do,
-
you can automate them, and turn them
-
over to software or something else,
-
and you'll make those parts of the process
-
more efficient or more convenient, or faster, or cheaper,
-
but you won't fundamentally change the way
-
people go about doing their work.
-
You won't change their behavior.
-
In fact, over and over again we see that
-
even small changes, small shifts of responsibility,
-
from people to technology can have
-
very big effects on the way people behave,
-
the way they learn, the way they approach their jobs.
-
We've seen this recently with the increasing
-
automation of medical record keeping.
-
As you probably know we've moved fairly quickly
-
over the last ten years from doctors
-
taking patient notes on paper,
-
either writing them by hand or dictating them,
-
to digital records.
-
So doctors usually, as they're going through an exam,
-
will take notes, usually going through a template,
-
on a computer or on a tablet.
-
For most of us our initial reaction
-
is "thank goodness for that"
-
because having records on paper was a pain in the neck.
-
You'd have to enter the same information
-
again and again when you went to different doctors,
-
and God forbid you got sick somewhere else
-
in the country or something and
-
doctors couldn't exchange records, had no way
-
to share your old records.
-
So it makes all sorts of sense to
-
automate this and to have digital records.
-
And indeed, ten years ago when the U.S.
-
started down this path there were
-
all sorts of studies that said we're going to
-
save enormous amounts of money,
-
we're going to increase patient care,
-
quality of healthcare,
-
as well as make it easier to share information.
-
There was a big study by the RAND Corporation
-
that documented all this.
-
They had modeled the entire healthcare system
-
in a computer, output various things, and this was
-
going to be all to the good.
-
Well, the government went on to subsidize
-
the adoption of electronic medical records
-
to the tune of something like $30 billion since then.
-
And now we have a lot of information
-
about what's really happened.
-
And nothing that was expected has actually played out.
-
And all sorts of things that weren't expected have.
-
For instance, the cost savings have not materialized.
-
Cost has continued to go up,
-
and there are even some indications that
-
beyond the expense required for the systems
-
themselves, this shift may increase healthcare costs
-
rather than decrease them.
-
The evidence on quality of care is
-
very, very mixed.
-
There seems to be no doubt that
-
for some patients, those with chronic diseases
-
that require a lot of different doctors,
-
quality goes up.
-
But for a lot of patients there hasn't
-
been a change, and there may
-
even have been an erosion of quality in some instances.
-
And finally, we're not even getting the benefits
-
of broad sharing of the records because
-
a lot of the systems are proprietary
-
and so you can't, you know, transfer the records
-
quickly or easily from one hospital to the next
-
or from one practice to the next.
-
And now some of these problems are just
-
coming from the fact that a lot of software is crappy.
-
We've rushed to spend huge amounts
-
of money on it, lots of big software
-
companies that supply this have gotten wealthy,
-
and doctors are struggling with it,
-
patients are struggling with it.
-
And so some of those things will be fixed
-
at more expense over time.
-
But if you look down lower, you see
-
changes in behavior that are much more subtle,
-
much more interesting, and go beyond
-
the quality of the software itself.
-
So for instance, one of the reasons
-
that everybody expected that healthcare
-
costs would go down was the assumption
-
that as soon as doctors could call up images
-
and other test results on their computers
-
when they're in with a patient,
-
they wouldn't order more tests.
-
So we'd see fewer diagnostic tests and
-
lower costs from those diagnostic tests,
-
a big part of the healthcare system's costs.
-
Actually the opposite seems to be happening.
-
If you give the doctor the ability to
-
quickly order tests and quickly pull up
-
the results, doctors actually order more of them,
-
because they know it's going to be easier for them.
-
And so the quality of the outcomes
-
doesn't go up, we're just seeing more
-
diagnostic tests and more costs.
-
Exactly the opposite of what we expected.
-
You see changes in the doctor-patient relationship.
-
If you've been around for a while, and had the experience of
-
going to a doctor's office for a physical or whatever,
-
where the doctor paid his or her whole attention to you,
-
and then moved to the world of electronic medical records,
-
where the doctor has a computer,
-
you know that it intrudes in the
-
doctor-patient relationship.
-
Studies show that doctors now spend,
-
if they have a computer with them, about
-
25-50% of the time during the exam
-
looking at the computer rather than the patient.
-
And doctors aren't happy about that,
-
patients don't tend to be happy about it,
-
but it's a necessary consequence,
-
at least with how we've designed these systems,
-
of this transfer.
-
I'm just going
-
to give you three examples of unexpected results,
-
but what's most interesting to me is the
-
fact that the quality of the records themselves
-
has gone down.
-
And the reason is that, first of all, doctors
-
now use templates, checkboxes, a lot of the time,
-
and then when they have to put in text
-
describing the patient's condition
-
rather than dictating it from what they've just experienced,
-
or hand-writing it, they cut and paste.
-
They cut and paste paragraphs and other stuff
-
from other visits that the patient has had
-
or from visits by other patients that have
-
had similar conditions.
-
This is referred to as the cloning of text.
-
And more and more of personal medical records
-
consist of cloned text these days.
-
Which makes the records less useful
-
for doctors, because they have less rich
-
and subtle information, and it also
-
undermines an important role that records
-
used to play in the exchange of information and knowledge.
-
A primary-care physician used to get
-
a lot of information, a lot of knowledge,
-
by reading rich descriptions from specialists.
-
And now more and more, as doctors say,
-
it's just boiler-plate, just cloned text.
-
So we've created this system that eventually
-
will probably have the very important benefit
-
of allowing us to exchange information
-
more and more quickly,
-
more and more easily,
-
but at the same time we're reducing
-
the quality of the information itself and making
-
what's exchanged less valuable.
-
Now, those are three examples of how the
-
substitution myth has played out in this
-
particular area of automation,
-
and they're very specialized and you see
-
all sorts of these things anywhere you look.
-
But there are a couple of bigger themes
-
that tend to cross all aspects of automation.
-
When you introduce software to make jobs easier,
-
to take over jobs, in addition to the benefits
-
are a couple of big negative developments.
-
Human-factors experts, researchers on this,
-
refer to these as automation complacency
-
and automation bias.
-
Automation complacency means exactly
-
what you would expect.
-
When people turn over big aspects of their job
-
to computers, to software, to robots,
-
they tune out.
-
We're very good at trusting a machine,
-
and certainly a computerized machine,
-
to handle our job, to handle any challenge
-
that might arise.
-
And so we become complacent,
-
we tune out, we space out.
-
And that might be fine until
-
something bad happens and we suddenly
-
have to re-engage what we're doing.
-
And then you see people make mistakes.
-
Everybody experiences automation complacency
-
in using computers.
-
A very simple example is autocorrect
-
for spelling.
-
When people have autocorrect going,
-
when they're texting or using a word-processor,
-
they become much more complacent
-
about their spelling.
-
They don't check things, they let it go.
-
And then most people have probably
-
had the experience of sending out a text,
-
or an email, or a report, that has
-
some really stupid typo in it because
-
the computer misunderstood your intent.
-
And that causes maybe a moment of embarrassment.
-
But you take that same phenomenon of complacency,
-
and put it into an industrial control room,
-
into a cockpit, into a battlefield,
-
and you sometimes get very, very dangerous situations.
-
One of the classic examples of
-
automation complacency comes in the
-
cruise-line business.
-
A few years ago, a cruise ship called
-
the Royal Majesty was on the last leg
-
of a cruise off New England.
-
It was going from Bermuda, I think, to Boston,
-
had a GPS antenna that was connected
-
to an automated navigation system.
-
The crew turned on the automated navigation system
-
and became totally complacent.
-
Just assumed everything's going fine,
-
the computer's plotting our course,
-
don't have to worry about it.
-
And at some point the line to the GPS antenna broke.
-
It was way up somewhere and nobody saw it.
-
Nobody noticed.
-
There were increasing environmental clues
-
that the ship was drifting off-course.
-
Nobody saw it.
-
At one point a mate, whose job it was
-
to watch for a locational buoy and report
-
back to the bridge that they'd passed
-
it as they should have, was out there
-
watching for it and he didn't see it,
-
and he said, "Well, it must have been there,
-
because the computer's in charge here.
-
I just must have missed it."
-
So he didn't bother to tell the bridge.
-
He was embarrassed that he had missed
-
what must have been there.
-
Well, hours go by and ultimately the ship
-
crashes into a sandbar off Nantucket Island
-
many miles off-course.
-
Fortunately, no one was killed
-
or injured that badly, but there was
-
millions of dollars of damage.
-
And it kind of shows how easily,
-
if you give too much responsibility
-
to the computer, people will
-
tune out and won't notice things
-
are going wrong, or if they do notice,
-
they might make mistakes in responding.
-
Automation bias is closely related
-
to automation complacency.
-
It just means that you place too much
-
trust in the information coming from your computer.
-
To the point where you begin to assume
-
that the computer is infallible, and so you
-
don't have to pay attention to other sources
-
of information, including your own eyes and ears.
-
And this too is something we see
-
over and over again when you automate
-
any kind of activity.
-
A good example is the use of GPS
-
by truck drivers.
-
A truck driver starts to listen to the
-
automated voice of the GPS
-
telling him or her where to go,
-
and begins to ignore
-
other sources of information, like road signs.
-
So we've seen an increase in the incidence
-
of trucks crashing into low overpasses
-
as we've increased the use of GPS.
-
And in Seattle a few years ago,
-
there was a bus driver carrying a
-
load of high-school athletes to a game
-
somewhere--a twelve-foot-high bus--
-
approached a nine-foot-high overpass
-
and there were all these signs along the way
-
"Danger Low Overpass"
-
or even signs that had blinking lights
-
around them.
-
He smashes right into it.
-
Luckily no one died, a bunch of students
-
had to go to the hospital.
-
The police said "what were you thinking?"
-
He said "Well, I had my GPS on,
-
and I just didn't see the signs."
-
So we ignore or don't even see
-
other sources of information.
-
In another very different area,
-
back to healthcare,
-
if you look at how radiologists read
-
diagnostic images today most of them
-
read them as digital images of course,
-
but also there's now software
-
that is designed as a decision-support aid
-
and analytical aid.
-
What it does is it gives the radiologist prompts.
-
It highlights particular regions of the image
-
that analysis of past data suggests
-
are suspicious.
-
And in many cases this has very good results.
-
The doctor focuses attention on those
-
particular highlighted areas,
-
finds a cancer or other abnormality
-
that the doctor may have missed,
-
and that's fine.
-
But research shows that it also has
-
the exact opposite effect.
-
Doctors become so focused on the
-
highlighted areas that they only pay
-
cursory attention to other areas,
-
and often miss abnormalities or
-
cancers that aren't highlighted.
-
The latest research suggests that
-
these prompt-systems,
-
which as you know are very very common
-
in software in general,
-
these prompt-systems seem to improve
-
the performance of less-expert image readers
-
on simpler challenges, but decrease the performance
-
of expert readers on very, very hard challenges.
-
The phenomenon of automation complacency
-
and automation bias points to
-
an even deeper and more insidious problem
-
that poorly designed software or
-
poorly designed automated systems often trigger.
-
And that is that in both of those cases,
-
with complacency and bias,
-
you see a person disengaging from the world,
-
disengaging from his or her circumstances,
-
disengaging from the task at hand,
-
simply assuming that the computer will handle it.
-
And indeed the computer has been designed,
-
whatever system we're talking about,
-
has been designed to handle as much of
-
the chore as possible.
-
And what happens then is we see
-
an erosion of talent on the part of the person.
-
Either the person isn't developing
-
strong, rich talents,
-
or their existing talents are beginning
-
to get rusty.
-
And the reason is pretty obvious.
-
We all know, either intuitively
-
or from reading about this, that
-
the way we develop rich talents,
-
sophisticated talents, is by practice.
-
By doing things over and over again,
-
facing lots of different challenges
-
in lots of different circumstances,
-
figuring out how to overcome them.
-
That's how we build the most
-
sophisticated skills and how we continue
-
to refine them.
-
And this crucial element in learning
-
in all sorts of forms is often
-
referred to as the Generation Effect.
-
And what that means is
-
if you're actively engaged in some task,
-
in some form of work,
-
you're going to not only perform better
-
but learn more and become more expert
-
than if you're simply an observer...
-
simply passively watching as things progress.
-
The generation effect was first observed
-
in this very simple experiment
-
involving people's ability to expand vocabulary.
-
Learn vocabulary, remember vocabulary.
-
And what the researchers did,
-
this was back in the '70s,
-
is they got two groups of people
-
to try to memorize lots of pairs of antonyms.
-
Lots of pairs of opposites.
-
And the only difference between the two groups
-
was that one group used flash cards
-
that had both words spelled out entirely
-
(hot, cold),
-
the other had flashcards that just had
-
the first word (hot), but then provided
-
only the first letter of the second word
-
(so C).
-
And what they found was that indeed,
-
the people who used the full words
-
remembered far fewer of the antonyms
-
than the people who had to fill in
-
the second word.
-
There's a little bit more brain activity
-
involved here. You actually
-
have to call to mind what this word is.
-
You have to generate it.
-
And just that small difference
-
gives you better learning, better retention.
-
A few years later, some other researchers,
-
some other professors in this area,
-
realized that actually this is kind of
-
a form of automation.
-
What this does, giving the full word,
-
in essence automates filling in the word.
-
They explained this as
-
a phenomenon related to
-
automation complacency.
-
You might be completely unconscious of it,
-
but your brain is a little more complacent,
-
it doesn't have to work as hard,
-
in this mode. And that makes
-
a big difference.
-
And it turns out that the generation effect
-
explains a whole lot about
-
how we learn and develop skill in all sorts of places.
-
It's definitely not just restricted to studies of vocabulary.
-
You see it everywhere.
-
If you're actively involved, you learn more,
-
you become more expert.
-
If you're not, you don't.
-
And unfortunately, with software
-
more and more the programmer,
-
the designer, actually gets in the way
-
of the generation effect.
-
And not by accident, but on purpose
-
because of course the things we
-
tend to automate, the things we
-
tend to simplify for people,
-
are the things that are challenging.
-
You look at a process, you look at where
-
people are struggling,
-
and that is both often the most interesting
-
thing to automate, but also the place
-
that whoever is paying you to write software
-
is encouraging you to focus on,
-
because it seems to create efficiency.
-
It seems to create productivity.
-
But what we're doing is designing
-
lots of systems, lots of software,
-
that actually deliberately (if you look at it
-
in that sense) gets in the way of people's
-
ability to learn and create expertise.
-
There was a series of experiments done
-
beginning about ten years ago
-
by this young cognitive psychologist
-
in Holland named Christof van Nimwegen
-
and he did something very interesting.
-
He got a series of different tasks,
-
one of them was solving a difficult logic problem,
-
one of them was organizing a conference
-
where you had a large number of conference rooms,
-
large number of speakers,
-
large number of time slots and you had
-
to optimize how you put all those things together.
-
So a number of tasks that had lots of components,
-
required a certain amount of smarts,
-
required you to work through a hard problem over time.
-
And in each case he got groups of people,
-
divided them into two,
-
created software--two different applications
-
for doing these.
-
One application was very bare-bones,
-
it just provided you with the scenario
-
and then you had to work through it.
-
The other was very helpful,
-
had prompts, it had highlights,
-
it had on-screen advice when you
-
got to a point where you could
-
do some moves, but you couldn't do others.
-
It would highlight the ones you could do,
-
and gray out the ones you couldn't.
-
And then he let them go,
-
and watched what happened.
-
Well, as you might expect,
-
the people with the more helpful software
-
got off to a great start.
-
The software was guiding them,
-
helping them make their initial decisions and moves.
-
They jumped out to a lead in terms
-
of solving the challenges.
-
But over time, the people using the
-
bare-bones software, the un-helpful software,
-
not only caught up, but actually
-
in all the cases ended up completing
-
the assignment much more efficiently.
-
They made far fewer incorrect moves,
-
far fewer mistakes.
-
They also seemed to have a much clearer strategy,
-
whereas the people using the helpful software
-
kind of just clicked around,
-
and finally van Nimwegen gave them
-
tests afterwards to measure their
-
conceptual understanding of what they had done.
-
People with the un-helpful software
-
had a much clearer conceptual understanding.
-
Then, eight months later, he invited
-
just the logic puzzle group...he invited
-
all the people who did that back,
-
had them solve the problem again.
-
The people who had, eight months earlier,
-
used the unhelpful software,
-
solved the puzzle twice as fast as those
-
who used the helpful software.
-
The more helpful the software,
-
the less learning, the weaker performance,
-
the less strategic thinking of the
-
people who used it.
-
Again, this underscores a fundamental
-
paradox that people face--
-
people who develop these programs,
-
and people who use them--
-
where our instinct to make things easier,
-
to find the places of friction
-
and remove the friction,
-
can actually lead to counter-productive results
-
where you're eroding performance
-
and eroding learning.
-
So if you look at all the psychological studies
-
and the human-factors studies of how
-
people interact with machines and technology
-
and computers, and you also
-
combine it with psychological understanding
-
of how we learn, what you see is that
-
there's a very complex cycle involved.
-
If you have a high degree of engagement
-
with people, if they're really pushed
-
to engage with the challenges, work hard,
-
and maintain their awareness of their circumstances,
-
you provoke a state of flow.
-
If you've read Mihaly Csikszentmihalyi's book "Flow,"
-
or are familiar with it, you know we perform
-
optimally when we're really immersed in
-
our challenge, when we're stretching our talents,
-
learning new talents.
-
That's the optimal state to be in.
-
It gives us more skills, pushes us
-
to new talents, and it also happens
-
to be the state in which we're most fulfilled,
-
most satisfied.
-
Often, people have this feeling that
-
if they were relieved of work,
-
relieved of effort, they'd be happier.
-
Turns out they're not.
-
They're more miserable. They're actually
-
happier when they are working hard,
-
facing a challenge.
-
And so this sense of fulfillment
-
prolongs your sense of engagement,
-
intensifies it, and you get this very nice cycle.
-
People are performing at a high level,
-
they're learning talents, and they're fulfilled.
-
They're happy. They're satisfied.
-
They like their experience.
-
All too often you stick automation
-
into the middle of this, particularly if you haven't
-
thought through all of the implications,
-
and you break this cycle.
-
Suddenly you decrease engagement
-
and all the other things go down as well.
-
You see this today in all sorts of places.
-
You see it with pilots whose jobs
-
have been highly, highly automated.
-
Automation has been a very good,
-
a very positive development for a hundred years
-
in aviation, but recently, as pilots'
-
role in manual control of the aircraft
-
has shrunk to the point where they may
-
be in control for only three minutes
-
in a flight, you see problems with
-
the erosion of engagement,
-
the erosion of situational awareness,
-
and the erosion of talent.
-
And unfortunately on those rare occasions
-
when the autopilot fails for whatever reason
-
or there's very weird circumstances,
-
you increase the odds that the pilots will
-
make mistakes, sometimes with dangerous implications.
-
So, why do we go down this path so often?
-
Why do we create computer programs,
-
robotic systems, other automated systems,
-
that instead of raising people up
-
to their highest level of talent,
-
highest level awareness and satisfaction,
-
have the opposite effect?
-
I think much of the blame can be placed
-
on what I would argue is the dominant
-
design philosophy or ethic
-
that governs the people who are making these programs,
-
and making these machines.
-
It's what's often referred to as
-
technology-centered design.
-
And basically what that means is
-
the engineer or the programmer or whatever
-
starts by asking "What can the computer do?"
-
"What can the technology do?"
-
And then anything that the computer
-
or technology can do, they give that
-
responsibility to the computer.
-
And you can see why this is what
-
engineers and programmers would want to do
-
because that's their job:
-
to simulate or automate interesting work
-
with software, or with robots.
-
So that's a very natural thing to do.
-
But what happens then is
-
what the human being gets is just
-
what the computer can't do,
-
or what we haven't yet figured out how
-
to get the computer to do.
-
And that tends to be things like
-
monitoring screens for anomalies,
-
entering data, and oh by the way
-
you're also the last line of defense so if
-
everything goes to hell you've got to take over
-
and get us out of the fix.
-
Those are things that people are actually
-
pretty bad at.
-
We're terrible at monitoring things,
-
waiting for an anomaly to appear;
-
we can't focus on that for more than about half an hour.
-
Entering data, becoming the sensor for
-
the computer is a pretty dull job in most cases.
-
If you set up a system that ensures that
-
the operator is going to have a low level
-
of situational awareness then that
-
is not the person you want to be having
-
as the last line of defense.
-
The alternative is something called, surprise,
-
human-centered design.
-
Where you start by saying,
-
"what are human beings good at?"
-
And you look at the fact that
-
there's lots of important things that we're
-
actually still much better than computers at.
-
We're creative, we have imagination,
-
we can think conceptually,
-
we have an understanding of the world,
-
we can think critically, we can think skeptically...
-
And then you bring in the software,
-
you bring the automation first to aid
-
the person in exploiting those capabilities,
-
but also to fill in the gaps and the flaws
-
that we all have as human beings.
-
So we're not great at processing huge amounts
-
of information quickly,
-
we're subject to biases in our thinking.
-
You can use software to counteract these,
-
or to provide an additional set of capabilities.
-
And if you go that path you get both
-
the best of the human and the best of the machine,
-
or the best of the technology.
-
Some of the ideas here are very simple.
-
For instance, with pilots instead of
-
allowing them to turn on total flight automation
-
once they're off the ground and then
-
not bother to turn it off until they're about ready to land,
-
you can design the software to
-
give control back to the pilot every once in a while
-
at random moments.
-
And just when you know that you're
-
going to be called upon at some random time
-
to take back control, that improves
-
people's awareness and concentration immeasurably.
-
It makes it less likely that they're going to
-
completely space out.
-
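The random-handback idea Carr describes can be sketched in code. This is a hypothetical illustration, not a real avionics API: the function names, the flight length, and the numbers are all assumptions, chosen only to show the shape of a scheduler that returns manual control at unpredictable times.

```python
import random

# Illustrative sketch of the "random handback" design pattern from the talk:
# an autopilot loop that, at unpredictable moments, hands manual control
# back to the pilot for a stretch of time. Names and numbers are assumptions.

def build_handback_schedule(flight_minutes, handbacks=3, seed=None):
    """Pick random minutes during cruise at which control reverts to the pilot."""
    rng = random.Random(seed)
    # Skip the first and last ten minutes; takeoff and landing are manual anyway.
    cruise = range(10, flight_minutes - 10)
    return sorted(rng.sample(list(cruise), handbacks))

def fly(flight_minutes, schedule, manual_stretch=5):
    """Yield (minute, mode) pairs, switching to manual at each scheduled point."""
    manual_until = -1
    for minute in range(flight_minutes):
        if minute in schedule:
            manual_until = minute + manual_stretch  # pilot flies for a while
        mode = "manual" if minute < manual_until else "autopilot"
        yield minute, mode

schedule = build_handback_schedule(120, handbacks=3, seed=42)
modes = dict(fly(120, schedule))
```

The point of the pattern is not the scheduling math, which is trivial, but the behavioral effect Carr cites: knowing a handback can come at any time keeps the operator's attention engaged throughout the flight, not just at the endpoints.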
Or in the example of the radiologist,
-
and this goes for examples of decision support
-
or expert systems or analytical programs in general,
-
one thing you can do is instead of
-
bringing in the software prompts
-
and the software advice right at the outset,
-
you can first encourage the human being
-
to deal with the problem, to look
-
at the image on his or her own or to
-
do whatever analytical chore is there,
-
and then bring in the software afterwards as
-
a further aid, bringing new information to bear.
-
And that too means you get the best
-
of both the person and the software.
-
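The "human first, software second" pattern just described can also be sketched. This is a toy illustration under assumed names; `DeferredAdvisor` and its methods are inventions for this sketch, not any real clinical or expert-system API. The tool simply refuses to reveal its suggestion until the human has committed to an independent judgment.

```python
# Minimal sketch of deferred decision support: the software withholds its
# advice until the human records their own assessment first.
# All class and method names here are illustrative assumptions.

class DeferredAdvisor:
    def __init__(self, model):
        self.model = model          # any callable: case -> suggestion
        self.human_answers = {}

    def record_human_assessment(self, case_id, answer):
        """The human commits to an independent judgment first."""
        self.human_answers[case_id] = answer

    def software_opinion(self, case_id, case):
        """Only after the human has answered does the software weigh in."""
        if case_id not in self.human_answers:
            raise RuntimeError("Look at the case yourself before asking the software.")
        return self.model(case)

# Usage with a toy 'model' that flags long case descriptions.
advisor = DeferredAdvisor(lambda case: "flag" if len(case) > 20 else "clear")
advisor.record_human_assessment("img-001", "clear")
print(advisor.software_opinion("img-001", "short scan note"))  # -> clear
```

The design choice mirrors the radiologist example: the automation still contributes its analysis, but only after the human's own engagement with the problem, so the prompts augment rather than preempt human judgment.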
Unfortunately, we don't do that,
-
or at least not very often.
-
We don't pursue human-centered design,
-
and I think it's for a couple of reasons.
-
One is that we human beings, as I said before,
-
are very eager to hand off any kind of work
-
to machines, to software, to other people,
-
because we are afflicted by
-
what psychologists term "miswanting".
-
We think we want to be freed of labor,
-
freed of hard work, freed of challenge.
-
And when we are freed of it we feel miserable,
-
we feel anxious, we get self-absorbed,
-
and actually our optimal experience comes
-
when we are working hard at things.
-
So there's something inside of us
-
that is very eager to get rid of stuff,
-
to get rid of effort,
-
even when it's not to our own benefit.
-
And then the other reason, which I think
-
is one that's even harder to deal with,
-
is the pursuit of efficiency and productivity
-
above all other goals.
-
And you can certainly see why hospitals
-
who want the highest productivity possible
-
from radiologists would be averse
-
to saying, "well we'll let the radiologists
-
look at the image and then we'll bring in the software"
-
because that extends the time that a radiologist
-
is going to look at this, and that's true of
-
any of these kinds of analytical chores.
-
And so there's this tension
-
between the pursuit of efficiency above
-
all other things and productivity,
-
and the development of skill,
-
the development of talent,
-
the development of high-levels of human performance
-
and ultimately the sense of satisfaction
-
that people get.
-
I think in the long-run you see signs that
-
that begins to backfire.
-
Toyota, earlier this year, announced that
-
it was replacing some of its robots
-
in its Japanese factories with human beings,
-
because even though the robots are more efficient,
-
the company had struggled with quality problems.
-
It's had to recall 20 million cars in recent years,
-
and not only is that bad for business,
-
but Toyota's entire culture is built around
-
quality manufacturing, so it erodes its culture.
-
So by bringing back human beings
-
it wants to bring back both the spirit
-
and the reality of human craftsmanship,
-
of people who can actually think critically
-
about what they're doing.
-
And one of the benefits it believes
-
it will get is that it will be smarter
-
about how it programs its robots.
-
It will be able to continually take new
-
human thinking, new human talent and insight
-
and then incorporate that into
-
the processes that even the robots are doing.
-
That's a good news example,
-
but I'm not going to oversimplify this
-
or lie to you...I think this instinct to
-
place efficiency above all other things
-
is a very hard instinct,
-
a very hard economic imperative, to overcome.
-
But nevertheless, I think it's absolutely imperative
-
that everyone who designs software and robotics,
-
and all of us who use them,
-
are conscious of the fact that there is this trade-off.
-
And that technology isn't just a means of production,
-
as we often tend to think of it,
-
it really is a means of experience.
-
And it always has been, since the first
-
technologies were developed by our distant ancestors.
-
Technology at its best, tools at their best
-
bring us out into the world,
-
expand our skills, and our talents,
-
make the world an interesting place.
-
And we shouldn't forget that about ourselves
-
as we continue at high-speed into a future
-
where more and more aspects of human experience
-
are going to be off-loaded to computers
-
and to machines.
-
So, thank you very much for your attention.
-
[ applause ] Thank you. [ applause ]
-
One thing that immediately sprung to mind
-
with most of your examples is that
-
they seem like examples of poor automation
-
and I'm wondering if you could...
-
say whether you feel that there could be
-
or are already any sufficiently flawless technologies
-
that we don't have to worry about the
-
problems you're describing.
-
I think in all instances, you have to worry about them.
-
I agree with you that a lot of these problems
-
are not problems about automation per se.
-
No one is going to stop the course of automation.
-
You can argue that the invention of the wheel
-
was an example of automating something,
-
and I don't think any of us regrets that,
-
but I do think it's often unwise design decisions,
-
or unwise assumptions that come in.
-
But as to the question of whether
-
we will create infallible automation...
-
I don't think so.
-
I mean, often you get this point of view,
-
and it seems to be quite common
-
in Silicon Valley, if I can say that,
-
that people are only going to be a temporary nuisance
-
in a lot of these processes.
-
We're going to have fully self-driving cars.
-
We're going to have fully self-flying planes.
-
We're going to have fully analytical systems,
-
big data systems that can pump out
-
the right answer, we won't have to
-
worry about it.
-
I don't think that that's actually going to happen.
-
I mean, it might happen eventually.
-
It's very, very difficult to remove
-
the human being altogether.
-
And so, to me what that means is
-
okay, fine, you can pursue that
-
as some ideal, total flawless automation,
-
but in the meantime we live and work
-
in the present not in the future.
-
And for the foreseeable future in all
-
of these processes there are going
-
to be people involved.
-
And there are going to be computers involved.
-
And instead of just...saying let's put
-
the computer's interests before the person's,
-
I think the wise way is, as I said,
-
to go with a more human-centered design
-
that realizes and starts with the assumption
-
that the human being is going to play
-
an essential role in these things for
-
as long as we can imagine,
-
or for as long as we can foresee.
-
And so we better design them to
-
get the most out of the person
-
as well as the technology.
-
[ offscreen audience member ]
(Thanks, that was great.)
-
(I certainly agree with the idea of focusing)
-
on the human-centered design.
-
I want to make one quick comment,
-
and then a question.
-
I noticed on your "hot/cold" thing
-
there was another researcher who took
-
passages from books and presented them,
-
and then gave a multiple-choice quiz, or whatever,
-
and then they took the same passage
-
and deleted a key sentence.
-
And people did better understanding
-
the point then. But somehow no authors
-
have the guts to do that.
-
[ speaker laughs ]
-
So, will you be that author?
-
To delete the important sentences from your book
-
and make the reader engage more
-
and therefore learn better?
-
If any of you buy my book,
-
I would be happy to take a sharpie
-
and erase certain sentences
-
and you can get the full benefit...
-
But, I will take that under advisement
-
for...future books.
-
[ same off-screen audience member ]
-
(And then a question,)
-
(I'm interested in the difference between)
-
(automation complacency and)
-
authority complacency.
-
So you see a lot of these incident reports,
-
and there will be some underling who said
-
"you know, I kind of noticed something
-
was going wrong, but the surgeon
-
or the pilot or the CEO seems so sure
-
that I didn't want to say anything."
-
And that has nothing to do with automation,
-
it's just...authority.
-
I think that's probably pretty much exactly
-
the same phenomenon.
-
And I actually do think it probably
-
has something to do with automation
-
because you could say that automation complacency
-
comes when the computer or the machine
-
takes the role of authority.
-
So the person defers to it, and I think
-
that's certainly one way to interpret
-
a lot of the findings.
-
That you don't question the machine,
-
you don't question the automation
-
in a way that wouldn't be wise.
-
So I think they're probably...
-
I think complacency has been a problem
-
since long before computers came around
-
for people for those reasons and others.
-
But, we've created a new way to generate
-
the same phenomenon.
-
[ off-screen audience member ]
(I'm curious to ask)
-
(if you know much research about)
-
what percentage of time or experiences
-
we need to keep manual in order
-
to make sure that skills don't fade away.
-
And if you have any thoughts on
-
how much this transfers from domain to domain?
-
So like in the wayfinding example of the Inuit.
-
Right? You might say you could use GPS
-
80% of the time, but you've got
-
to do 20% of it manually to keep your skills up.
-
But maybe it's different for airline pilots
-
or the other examples that you cited...
-
Do you know how much of this has
-
been studied and how much it might
-
vary from domain to domain?
-
As far as the second question,
-
I don't know. I don't know...any rules of thumb
-
either in specific domains or in cross-domains.
-
I can say though, that there's enormous
-
amounts of research that's been done in aviation,
-
because of the fact that the risk is so high,
-
and lots of people can die,
-
and lots of money can be lost.
-
You know, even since computerization
-
of flight began back in the '70s
-
whether it's NASA or the FAA
-
or universities, there's been tons of research.
-
So my guess is that there has been...
-
There probably have been tests
-
where you have different levels of automation
-
and manual control, and comparing
-
different levels of performance.
-
I didn't come across those specific studies
-
in my work, but my guess is that
-
that would be an obvious thing that would have been done.
-
So, I'm saying in aviation there's probably
-
at least some sense of, you know,
-
at what point does performance start
-
to drop off, or start to drop off dramatically
-
because you've turned over too much
-
responsibility to the machine.
-
Whether that would also translate
-
into the same kind of percentages in different domains?
-
I don't know.
-
[ off-screen audience member ]
-
(Thanks for coming.)
-
(The talk was really interesting.)
-
(In your talk you pointed out that)
-
(technology always comes with trade-offs.)
-
And it's hard to disagree with that.
-
But I'm wondering about the title of your book.
-
The title is "The Glass Cage",
-
and that seems like...
-
calling technology a glass cage seems like
-
a much more negative assessment than
-
merely saying that it comes with trade-offs.
-
So I'm wondering if you can say
-
what motivates this title?
-
Well, the title is a reference back to pilots' experience.
-
Since the '70s, pilots and others
-
in the aviation business, have referred
-
to cockpits as glass cockpits.
-
And it's because, increasingly they're wrapped
-
with computer screens.
-
If you look at a modern passenger jet,
-
there are insane amounts
-
of computer screens, all sorts of
-
input devices and stuff.
-
One aviation expert refers to the cockpit now
-
as a flying computer.
-
So, in one sense it's just kind of
-
a play on that, because what I argue is that
-
we can learn a lot from pilots' experience
-
as we enter a world where, essentially,
-
more and more of us are going to be living
-
inside a glass cockpit.
-
We're going to be looking at monitors
-
to do more and more things.
-
We're already there, some would argue.
-
And I do think that what a lot of examples
-
of computer automation tell us is that
-
the glass cockpit can become a glass cage.
-
That if we design it to be the primary
-
or the essential way that we interact with the world
-
then it cuts us off from other sources of learning
-
and information that might be absolutely essential,
-
but we're so focused on what the computer
-
is telling us we lose that.
-
So, I mean it is intended to be a little bit ominous,
-
that we can either get trapped in this
-
glass cage, or we can use technology
-
in what I think is a more humane
-
and more balanced way.
-
[ off-screen audience member ]
(Thanks for that question)
-
(And on that note, please join me)
-
(in thanking Nicholas Carr)
-
(for coming to Google).
-
Thank you.
-
[ applause ]
-
[ electronic music ]