Machine intelligence makes human morals more important
-
0:01 - 0:05So, I started my first job
as a computer programmer -
0:05 - 0:07in my very first year of college --
-
0:07 - 0:08basically, as a teenager.
-
0:09 - 0:11Soon after I started working,
-
0:11 - 0:12writing software in a company,
-
0:13 - 0:16a manager who worked at the company
came down to where I was, -
0:16 - 0:18and he whispered to me,
-
0:18 - 0:21"Can he tell if I'm lying?"
-
0:22 - 0:24There was nobody else in the room.
-
0:25 - 0:29"Can who tell if you're lying?
And why are we whispering?" -
0:30 - 0:33The manager pointed
at the computer in the room. -
0:33 - 0:36"Can he tell if I'm lying?"
-
0:38 - 0:42Well, that manager was having
an affair with the receptionist. -
0:42 - 0:43(Laughter)
-
0:43 - 0:45And I was still a teenager.
-
0:45 - 0:47So I whisper-shouted back to him,
-
0:47 - 0:51"Yes, the computer can tell
if you're lying." -
0:51 - 0:53(Laughter)
-
0:53 - 0:56Well, I laughed, but actually,
the laugh's on me. -
0:56 - 0:59Nowadays, there are computational systems
-
0:59 - 1:03that can suss out
emotional states and even lying -
1:03 - 1:05from processing human faces.
-
1:05 - 1:09Advertisers and even governments
are very interested. -
1:10 - 1:12I had become a computer programmer
-
1:12 - 1:15because I was one of those kids
crazy about math and science. -
1:16 - 1:19But somewhere along the line
I'd learned about nuclear weapons, -
1:19 - 1:22and I'd gotten really concerned
with the ethics of science. -
1:22 - 1:23I was troubled.
-
1:23 - 1:26However, because of family circumstances,
-
1:26 - 1:29I also needed to start working
as soon as possible. -
1:29 - 1:33So I thought to myself, hey,
let me pick a technical field -
1:33 - 1:34where I can get a job easily
-
1:34 - 1:38and where I don't have to deal
with any troublesome questions of ethics. -
1:39 - 1:41So I picked computers.
-
1:41 - 1:42(Laughter)
-
1:42 - 1:45Well, ha, ha, ha!
All the laughs are on me. -
1:45 - 1:48Nowadays, computer scientists
are building platforms -
1:48 - 1:52that control what a billion
people see every day. -
1:53 - 1:57They're developing cars
that could decide who to run over. -
1:58 - 2:01They're even building machines, weapons,
-
2:01 - 2:03that might kill human beings in war.
-
2:03 - 2:06It's ethics all the way down.
-
2:07 - 2:09Machine intelligence is here.
-
2:10 - 2:13We're now using computation
to make all sorts of decisions, -
2:13 - 2:15but also new kinds of decisions.
-
2:15 - 2:20We're asking questions to computation
that have no single right answers, -
2:20 - 2:22that are subjective
-
2:22 - 2:24and open-ended and value-laden.
-
2:24 - 2:26We're asking questions like,
-
2:26 - 2:27"Who should the company hire?"
-
2:28 - 2:31"Which update from which friend
should you be shown?" -
2:31 - 2:33"Which convict is more
likely to reoffend?" -
2:34 - 2:37"Which news item or movie
should be recommended to people?" -
2:37 - 2:40Look, yes, we've been using
computers for a while, -
2:40 - 2:42but this is different.
-
2:42 - 2:44This is a historical twist,
-
2:44 - 2:49because we cannot anchor computation
for such subjective decisions -
2:49 - 2:54the way we can anchor computation
for flying airplanes, building bridges, -
2:54 - 2:56going to the moon.
-
2:56 - 3:00Are airplanes safer?
Did the bridge sway and fall? -
3:00 - 3:04There, we have agreed-upon,
fairly clear benchmarks, -
3:04 - 3:06and we have laws of nature to guide us.
-
3:07 - 3:10We have no such anchors and benchmarks
-
3:10 - 3:14for decisions in messy human affairs.
-
3:14 - 3:18To make things more complicated,
our software is getting more powerful, -
3:18 - 3:22but it's also getting less
transparent and more complex. -
3:23 - 3:25Recently, in the past decade,
-
3:25 - 3:27complex algorithms
have made great strides. -
3:27 - 3:29They can recognize human faces.
-
3:30 - 3:32They can decipher handwriting.
-
3:32 - 3:35They can detect credit card fraud
-
3:35 - 3:36and block spam
-
3:36 - 3:38and they can translate between languages.
-
3:38 - 3:40They can detect tumors in medical imaging.
-
3:40 - 3:43They can beat humans in chess and Go.
-
3:43 - 3:48Much of this progress comes
from a method called "machine learning." -
3:48 - 3:51Machine learning is different
from traditional programming, -
3:51 - 3:55where you give the computer
detailed, exact, painstaking instructions. -
3:55 - 4:00It's more like you take the system
and you feed it lots of data, -
4:00 - 4:01including unstructured data,
-
4:01 - 4:04like the kind we generate
in our digital lives. -
4:04 - 4:06And the system learns
by churning through this data. -
4:07 - 4:08And also, crucially,
-
4:08 - 4:13these systems don't operate
under a single-answer logic. -
4:13 - 4:16They don't produce a simple answer;
it's more probabilistic: -
4:16 - 4:19"This one is probably more like
what you're looking for." -
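
[Editor's note: To make the contrast concrete, here is a minimal sketch in Python, using scikit-learn. The spam task, the feature names and the tiny dataset are all invented for illustration: with traditional programming we spell the rule out ourselves; with machine learning we fit a model to examples and get back probabilities, not a single answer.]

```python
# A minimal, hypothetical sketch of the contrast. The feature names
# and tiny dataset are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Traditional programming: we write the exact rule ourselves.
def is_spam_by_rule(num_links: int, has_greeting: bool) -> bool:
    return num_links > 3 and not has_greeting

# Machine learning: we feed the system labeled examples instead.
X = [[0, 1], [1, 1], [5, 0], [7, 0], [2, 1], [6, 0]]  # [num_links, has_greeting]
y = [0, 0, 1, 1, 0, 1]                                # 1 = spam

model = LogisticRegression().fit(X, y)

# The answer comes back probabilistic -- "this one is probably spam" --
# not as a hard-coded yes/no.
print(model.predict_proba([[4, 0]]))
```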
4:20 - 4:23Now, the upside is:
this method is really powerful. -
4:23 - 4:25The head of Google's AI systems called it,
-
4:25 - 4:27"the unreasonable effectiveness of data."
-
4:28 - 4:29The downside is,
-
4:30 - 4:33we don't really understand
what the system learned. -
4:33 - 4:34In fact, that's its power.
-
4:35 - 4:39This is less like giving
instructions to a computer; -
4:39 - 4:43it's more like training
a puppy-machine-creature -
4:43 - 4:46we don't really understand or control.
-
4:46 - 4:48So this is our problem.
-
4:48 - 4:53It's a problem when this artificial
intelligence system gets things wrong. -
4:53 - 4:56It's also a problem
when it gets things right, -
4:56 - 5:00because we don't even know which is which
when it's a subjective problem. -
5:00 - 5:02We don't know what this thing is thinking.
-
5:03 - 5:07So, consider a hiring algorithm --
-
5:08 - 5:12a system used to hire people,
using machine-learning systems. -
5:13 - 5:17Such a system would have been trained
on previous employees' data -
5:17 - 5:19and instructed to find and hire
-
5:19 - 5:22people like the existing
high performers in the company. -
5:23 - 5:24Sounds good.
-
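
[Editor's note: In code, the setup described above might look like this minimal sketch. The features, data and model choice are assumptions for illustration; real hiring vendors' systems are proprietary. The pattern is: fit a model on past employees labeled by performance, then score applicants against it.]

```python
# A hypothetical sketch of such a hiring system. Features, data and
# model choice are invented; real vendors' systems are proprietary.
from sklearn.ensemble import RandomForestClassifier

# Each row: features harvested about a past employee; the label says
# whether the company rated them a high performer.
X_past = [[3, 1, 0], [5, 0, 1], [2, 1, 1], [8, 0, 0], [4, 1, 0], [6, 0, 1]]
y_past = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_past, y_past)

# "Find and hire people like our existing high performers":
applicants = [[7, 0, 1], [2, 1, 0]]
print(model.predict_proba(applicants))  # probability of "high performer"
```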
5:24 - 5:26I once attended a conference
-
5:26 - 5:29that brought together
human resources managers and executives, -
5:29 - 5:30high-level people,
-
5:30 - 5:32using such systems in hiring.
-
5:32 - 5:34They were super excited.
-
5:34 - 5:38They thought that this would make hiring
more objective, less biased, -
5:38 - 5:41and give women
and minorities a better shot -
5:41 - 5:44against biased human managers.
-
5:44 - 5:46And look -- human hiring is biased.
-
5:47 - 5:48I know.
-
5:48 - 5:51I mean, in one of my early jobs
as a programmer, -
5:51 - 5:55my immediate manager would sometimes
come down to where I was -
5:55 - 5:59really early in the morning
or really late in the afternoon, -
5:59 - 6:02and she'd say, "Zeynep,
let's go to lunch!" -
6:03 - 6:05I'd be puzzled by the weird timing.
-
6:05 - 6:07It's 4pm. Lunch?
-
6:07 - 6:10I was broke, so free lunch. I always went.
-
6:11 - 6:13I later realized what was happening.
-
6:13 - 6:17My immediate managers
had not confessed to their higher-ups -
6:17 - 6:20that the programmer they hired
for a serious job was a teen girl -
6:20 - 6:24who wore jeans and sneakers to work.
-
6:25 - 6:27I was doing a good job,
I just looked wrong -
6:27 - 6:29and was the wrong age and gender.
-
6:29 - 6:32So hiring in a gender- and race-blind way
-
6:32 - 6:34certainly sounds good to me.
-
6:35 - 6:38But with these systems,
it is more complicated, and here's why: -
6:39 - 6:45Currently, computational systems
can infer all sorts of things about you -
6:45 - 6:47from your digital crumbs,
-
6:47 - 6:49even if you have not
disclosed those things. -
6:50 - 6:52They can infer your sexual orientation,
-
6:53 - 6:54your personality traits,
-
6:55 - 6:56your political leanings.
-
6:57 - 7:01They have predictive power
with high levels of accuracy. -
7:01 - 7:04Remember -- for things
you haven't even disclosed. -
7:04 - 7:06This is inference.
-
7:06 - 7:09I have a friend who developed
such computational systems -
7:09 - 7:13to predict the likelihood
of clinical or postpartum depression -
7:13 - 7:14from social media data.
-
7:15 - 7:16The results are impressive.
-
7:16 - 7:20Her system can predict
the likelihood of depression -
7:20 - 7:24months before the onset of any symptoms --
-
7:24 - 7:25months before.
-
7:25 - 7:27No symptoms, there's prediction.
-
7:27 - 7:32She hopes it will be used
for early intervention. Great! -
7:33 - 7:35But now put this in the context of hiring.
-
7:36 - 7:39So at this human resources
managers conference, -
7:39 - 7:44I approached a high-level manager
in a very large company, -
7:44 - 7:48and I said to her, "Look,
what if, unbeknownst to you, -
7:48 - 7:55your system is weeding out people
with high future likelihood of depression? -
7:56 - 7:59They're not depressed now,
just maybe in the future, more likely. -
8:00 - 8:03What if it's weeding out women
more likely to be pregnant -
8:03 - 8:06in the next year or two
but aren't pregnant now? -
8:07 - 8:12What if it's hiring aggressive people
because that's your workplace culture?" -
8:13 - 8:16You can't tell this by looking
at gender breakdowns. -
8:16 - 8:17Those may be balanced.
-
8:17 - 8:21And since this is machine learning,
not traditional coding, -
8:21 - 8:26there is no variable there
labeled "higher risk of depression," -
8:26 - 8:28"higher risk of pregnancy,"
-
8:28 - 8:30"aggressive guy scale."
-
8:30 - 8:34Not only do you not know
what your system is selecting on, -
8:34 - 8:36you don't even know
where to begin to look. -
8:36 - 8:37It's a black box.
-
8:37 - 8:40It has predictive power,
but you don't understand it. -
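
[Editor's note: As a sketch of why there is nothing to inspect, consider what a trained model actually exposes. The data below is synthetic noise, and the architecture is an assumption, not any real vendor's system: all you can see after training are layers of unlabeled numeric weights.]

```python
# A sketch of the opacity: after training, all we can see are
# unlabeled numeric weights. The data here is synthetic noise.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # 50 anonymous features per applicant
y = rng.integers(0, 2, size=200)  # 1 = "like our high performers"

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300).fit(X, y)

# No variable called "risk of depression" anywhere -- just arrays of
# floats whose meaning nobody ever assigned.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer}: weight matrix of shape {weights.shape}")
```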
8:40 - 8:43"What safeguards," I asked, "do you have
-
8:43 - 8:47to make sure that your black box
isn't doing something shady?" -
8:49 - 8:53She looked at me as if I had
just stepped on 10 puppy tails. -
8:53 - 8:54(Laughter)
-
8:54 - 8:56She stared at me and she said,
-
8:57 - 9:01"I don't want to hear
another word about this." -
9:01 - 9:03And she turned around and walked away.
-
9:04 - 9:06Mind you -- she wasn't rude.
-
9:06 - 9:12It was clearly: what I don't know
isn't my problem, go away, death stare. -
9:12 - 9:13(Laughter)
-
9:14 - 9:18Look, such a system
may even be less biased -
9:18 - 9:20than human managers in some ways.
-
9:20 - 9:22And it could make monetary sense.
-
9:23 - 9:24But it could also lead
-
9:24 - 9:29to a steady but stealthy
shutting out of the job market -
9:29 - 9:31of people with higher risk of depression.
-
9:32 - 9:34Is this the kind of society
we want to build, -
9:34 - 9:37without even knowing we've done this,
-
9:37 - 9:41because we turned decision-making
to machines we don't totally understand? -
9:41 - 9:43Another problem is this:
-
9:43 - 9:48these systems are often trained
on data generated by our actions, -
9:48 - 9:50human imprints.
-
9:50 - 9:54Well, they could just be
reflecting our biases, -
9:54 - 9:58and these systems
could be picking up on our biases -
9:58 - 9:59and amplifying them
-
9:59 - 10:00and showing them back to us,
-
10:00 - 10:02while we're telling ourselves,
-
10:02 - 10:05"We're just doing objective,
neutral computation." -
10:06 - 10:09Researchers found that on Google,
-
10:10 - 10:15women are less likely than men
to be shown job ads for high-paying jobs. -
10:16 - 10:19And searching for African-American names
-
10:19 - 10:24is more likely to bring up ads
suggesting criminal history, -
10:24 - 10:25even when there is none.
-
10:27 - 10:30Such hidden biases
and black-box algorithms -
10:30 - 10:34that researchers sometimes uncover
but sometimes don't even know about, -
10:34 - 10:37can have life-altering consequences.
-
10:38 - 10:42In Wisconsin, a defendant
was sentenced to six years in prison -
10:42 - 10:43for evading the police.
-
10:45 - 10:46You may not know this,
-
10:46 - 10:50but algorithms are increasingly used
in parole and sentencing decisions. -
10:50 - 10:53He wanted to know:
How is this score calculated? -
10:54 - 10:55It's a commercial black box.
-
10:55 - 11:00The company refused to have its algorithm
be challenged in open court. -
11:00 - 11:06But ProPublica, an investigative
nonprofit, audited that very algorithm -
11:06 - 11:08with what public data they could find,
-
11:08 - 11:10and found that its outcomes were biased
-
11:10 - 11:14and its predictive power
was dismal, barely better than chance, -
11:14 - 11:18and it was wrongly labeling
black defendants as future criminals -
11:18 - 11:22at twice the rate of white defendants.
-
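
[Editor's note: The core of such an audit is simple to state even when the model itself is sealed: compare error rates across groups using only inputs and outcomes. A minimal sketch follows; the records are invented, and ProPublica's actual methodology was more involved.]

```python
# A sketch of the kind of check an outside audit can run on a sealed
# black box: compare false positive rates -- labeled "high risk" but
# never reoffended -- across groups. Records are invented.
from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False), ("white", True, True), ("white", False, False),
    ("white", False, True), ("white", True, False),
]

false_pos = defaultdict(int)  # labeled high risk, did not reoffend
negatives = defaultdict(int)  # everyone who did not reoffend

for group, high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        false_pos[group] += high_risk

for group, total in negatives.items():
    print(f"{group}: false positive rate = {false_pos[group] / total:.0%}")
```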
11:24 - 11:25So, consider this case:
-
11:26 - 11:30This woman was late
picking up her godsister -
11:30 - 11:32from a school in Broward County, Florida,
-
11:33 - 11:35running down the street
with a friend of hers. -
11:35 - 11:39They spotted an unlocked kid's bike
and a scooter on a porch -
11:39 - 11:41and foolishly jumped on it.
-
11:41 - 11:44As they were speeding off,
a woman came out and said, -
11:44 - 11:46"Hey! That's my kid's bike!"
-
11:46 - 11:49They dropped it, they walked away,
but they were arrested. -
11:49 - 11:53She was wrong, she was foolish,
but she was also just 18. -
11:53 - 11:55She had a couple of juvenile misdemeanors.
-
11:56 - 12:01Meanwhile, that man had been arrested
for shoplifting in Home Depot -- -
12:01 - 12:0485 dollars' worth of stuff,
a similar petty crime. -
12:05 - 12:09But he had two prior
armed robbery convictions. -
12:10 - 12:13But the algorithm scored her
as high risk, and not him. -
12:15 - 12:19Two years later, ProPublica found
that she had not reoffended. -
12:19 - 12:21It was just hard for her
to get a job with her record. -
12:21 - 12:23He, on the other hand, did reoffend
-
12:23 - 12:27and is now serving an eight-year
prison term for a later crime. -
12:28 - 12:31Clearly, we need to audit our black boxes
-
12:31 - 12:34and not have them have
this kind of unchecked power. -
12:34 - 12:37(Applause)
-
12:38 - 12:42Audits are great and important,
but they don't solve all our problems. -
12:42 - 12:45Take Facebook's powerful
news feed algorithm -- -
12:45 - 12:50you know, the one that ranks everything
and decides what to show you -
12:50 - 12:52from all the friends and pages you follow.
-
12:53 - 12:55Should you be shown another baby picture?
-
12:55 - 12:56(Laughter)
-
12:56 - 12:59A sullen note from an acquaintance?
-
12:59 - 13:01An important but difficult news item?
-
13:01 - 13:03There's no right answer.
-
13:03 - 13:05Facebook optimizes
for engagement on the site: -
13:06 - 13:07likes, shares, comments.
-
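
[Editor's note: As a toy illustration of what "optimizing for engagement" does to a feed -- the weights and posts below are invented; Facebook's real ranking model is proprietary and far more complex -- a score built from likes, shares and comments buries an important story that nobody wants to "like."]

```python
# A toy engagement ranker. Weights and posts are invented; the real
# news feed model is proprietary and far more complex.
posts = [
    {"title": "Baby picture",         "likes": 900, "shares": 40,  "comments": 120},
    {"title": "Ice Bucket Challenge", "likes": 800, "shares": 300, "comments": 200},
    {"title": "Ferguson protests",    "likes": 30,  "shares": 60,  "comments": 15},
]

def engagement(post):
    return post["likes"] + 2 * post["shares"] + 1.5 * post["comments"]

# Hard-to-"like" news sinks to the bottom, regardless of importance.
for post in sorted(posts, key=engagement, reverse=True):
    print(f"{engagement(post):>7.1f}  {post['title']}")
```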
13:08 - 13:11In August of 2014,
-
13:11 - 13:14protests broke out in Ferguson, Missouri,
-
13:14 - 13:18after the killing of an African-American
teenager by a white police officer, -
13:18 - 13:20under murky circumstances.
-
13:20 - 13:22The news of the protests was all over
-
13:22 - 13:25my algorithmically
unfiltered Twitter feed, -
13:25 - 13:27but nowhere on my Facebook.
-
13:27 - 13:29Was it my Facebook friends?
-
13:29 - 13:31I disabled Facebook's algorithm,
-
13:31 - 13:34which is hard, because Facebook
keeps trying to pull you back -
13:34 - 13:36under the algorithm's control,
-
13:36 - 13:39and saw that my friends
were talking about it. -
13:39 - 13:41It's just that the algorithm
wasn't showing it to me. -
13:41 - 13:44I researched this and found
this was a widespread problem. -
13:44 - 13:48The story of Ferguson
wasn't algorithm-friendly. -
13:48 - 13:49It's not "likable."
-
13:49 - 13:51Who's going to click on "like?"
-
13:52 - 13:54It's not even easy to comment on.
-
13:54 - 13:55Without likes and comments,
-
13:55 - 13:58the algorithm was likely showing it
to even fewer people, -
13:58 - 14:00so we didn't get to see this.
-
14:01 - 14:02Instead, that week,
-
14:02 - 14:04Facebook's algorithm highlighted this,
-
14:05 - 14:07which is the ALS Ice Bucket Challenge.
-
14:07 - 14:11Worthy cause; dump ice water,
donate to charity, fine. -
14:11 - 14:12But it was super algorithm-friendly.
-
14:13 - 14:16The machine made this decision for us.
-
14:16 - 14:19A very important
but difficult conversation -
14:19 - 14:21might have been smothered,
-
14:21 - 14:24had Facebook been the only channel.
-
14:24 - 14:28Now, finally, these systems
can also be wrong -
14:28 - 14:31in ways that don't resemble human systems.
-
14:31 - 14:34Do you guys remember Watson,
IBM's machine-intelligence system -
14:34 - 14:37that wiped the floor
with human contestants on Jeopardy? -
14:37 - 14:39It was a great player.
-
14:39 - 14:42But then, for Final Jeopardy,
Watson was asked this question: -
14:43 - 14:46"Its largest airport is named
for a World War II hero, -
14:46 - 14:48its second-largest
for a World War II battle." -
14:48 - 14:49(Hums Final Jeopardy music)
-
14:50 - 14:51Chicago.
-
14:51 - 14:52The two humans got it right.
-
14:53 - 14:57Watson, on the other hand,
answered "Toronto" -- -
14:57 - 14:59for a US city category!
-
15:00 - 15:02The impressive system also made an error
-
15:03 - 15:06that a human would never make,
a second-grader wouldn't make. -
15:07 - 15:10Our machine intelligence can fail
-
15:10 - 15:13in ways that don't fit
error patterns of humans, -
15:13 - 15:16in ways we won't expect
and be prepared for. -
15:16 - 15:20It'd be lousy not to get a job
one is qualified for, -
15:20 - 15:23but it would triple suck
if it was because of a stack overflow -
15:23 - 15:25in some subroutine.
-
15:25 - 15:27(Laughter)
-
15:27 - 15:29In May of 2010,
-
15:29 - 15:33a flash crash on Wall Street
fueled by a feedback loop -
15:33 - 15:36in Wall Street's "sell" algorithm
-
15:36 - 15:41wiped a trillion dollars
of value in 36 minutes. -
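
[Editor's note: A feedback loop of that kind is easy to sketch. The numbers below are invented, and the real event involved many interacting algorithms; the point is only the self-reinforcing spiral, where each round of selling pushes the price down and the drop triggers still more selling.]

```python
# A toy model of a sell-side feedback loop. All numbers are invented;
# the point is only the self-reinforcing spiral.
price = 100.0
sell_pressure = 1.0

for minute in range(1, 9):
    price -= sell_pressure   # selling pushes the price down...
    sell_pressure *= 1.5     # ...and the drop triggers even more selling
    print(f"minute {minute}: price {price:7.2f}")
```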
15:42 - 15:44I don't even want to think
what "error" means -
15:44 - 15:48in the context of lethal
autonomous weapons. -
15:50 - 15:54So yes, humans have always been biased.
-
15:54 - 15:56Decision makers and gatekeepers,
-
15:56 - 15:59in courts, in news, in war ...
-
15:59 - 16:02they make mistakes;
but that's exactly my point. -
16:02 - 16:06We cannot escape
these difficult questions. -
16:07 - 16:10We cannot outsource
our responsibilities to machines. -
16:11 - 16:15(Applause)
-
16:17 - 16:22Artificial intelligence does not give us
a "Get out of ethics free" card. -
16:23 - 16:26Data scientist Fred Benenson
calls this math-washing. -
16:26 - 16:28We need the opposite.
-
16:28 - 16:33We need to cultivate algorithm suspicion,
scrutiny and investigation. -
16:33 - 16:37We need to make sure we have
algorithmic accountability, -
16:37 - 16:39auditing and meaningful transparency.
-
16:39 - 16:43We need to accept
that bringing math and computation -
16:43 - 16:46to messy, value-laden human affairs
-
16:46 - 16:48does not bring objectivity;
-
16:48 - 16:52rather, the complexity of human affairs
invades the algorithms. -
16:52 - 16:56Yes, we can and we should use computation
-
16:56 - 16:58to help us make better decisions.
-
16:58 - 17:03But we have to own up
to our moral responsibility to judgment, -
17:03 - 17:06and use algorithms within that framework,
-
17:06 - 17:11not as a means to abdicate
and outsource our responsibilities -
17:11 - 17:13to one another as human to human.
-
17:14 - 17:16Machine intelligence is here.
-
17:16 - 17:20That means we must hold on ever tighter
-
17:20 - 17:22to human values and human ethics.
-
17:22 - 17:23Thank you.
-
17:23 - 17:28(Applause)
Title: Machine intelligence makes human morals more important
Speaker: Zeynep Tufekci
Description: Machine intelligence is here, and we're already using it to make subjective decisions on things that have no single right answer. But the complex way AI grows and improves makes it hard to understand -- and even harder to control. In this cautionary talk, sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 17:42