Machine intelligence makes human morals more important

  • 0:01 - 0:05
    So, I started my first job
    as a computer programmer
  • 0:05 - 0:07
    in my very first year of college --
  • 0:07 - 0:08
    basically, as a teenager.
  • 0:09 - 0:11
    Soon after I started working,
  • 0:11 - 0:12
    writing software in a company,
  • 0:13 - 0:16
    a manager who worked at the company
    came down to where I was,
  • 0:16 - 0:18
    and he whispered to me,
  • 0:18 - 0:21
    "Can he tell if I'm lying?"
  • 0:22 - 0:24
    There was nobody else in the room.
  • 0:25 - 0:29
    "Can who tell if you're lying?
    And why are we whispering?"
  • 0:30 - 0:33
    The manager pointed
    at the computer in the room.
  • 0:33 - 0:36
    "Can he tell if I'm lying?"
  • 0:38 - 0:42
    Well, that manager was having
    an affair with the receptionist.
  • 0:42 - 0:43
    (Laughter)
  • 0:43 - 0:45
    And I was still a teenager.
  • 0:45 - 0:47
    So I whisper-shouted back to him,
  • 0:47 - 0:51
    "Yes, the computer can tell
    if you're lying."
  • 0:51 - 0:53
    (Laughter)
  • 0:53 - 0:56
    Well, I laughed, but actually,
    the laugh's on me.
  • 0:56 - 0:59
    Nowadays, there are computational systems
  • 0:59 - 1:03
    that can suss out
    emotional states and even lying
  • 1:03 - 1:05
    from processing human faces.
  • 1:05 - 1:09
    Advertisers and even governments
    are very interested.
  • 1:10 - 1:12
    I had become a computer programmer
  • 1:12 - 1:15
    because I was one of those kids
    crazy about math and science.
  • 1:16 - 1:19
    But somewhere along the line
    I'd learned about nuclear weapons,
  • 1:19 - 1:22
    and I'd gotten really concerned
    with the ethics of science.
  • 1:22 - 1:23
    I was troubled.
  • 1:23 - 1:26
    However, because of family circumstances,
  • 1:26 - 1:29
    I also needed to start working
    as soon as possible.
  • 1:29 - 1:33
    So I thought to myself, hey,
    let me pick a technical field
  • 1:33 - 1:34
    where I can get a job easily
  • 1:34 - 1:38
    and where I don't have to deal
    with any troublesome questions of ethics.
  • 1:39 - 1:41
    So I picked computers.
  • 1:41 - 1:42
    (Laughter)
  • 1:42 - 1:45
    Well, ha, ha, ha!
    All the laughs are on me.
  • 1:45 - 1:48
    Nowadays, computer scientists
    are building platforms
  • 1:48 - 1:52
    that control what a billion
    people see every day.
  • 1:53 - 1:57
    They're developing cars
    that could decide who to run over.
  • 1:58 - 2:01
    They're even building machines, weapons,
  • 2:01 - 2:03
    that might kill human beings in war.
  • 2:03 - 2:06
    It's ethics all the way down.
  • 2:07 - 2:09
    Machine intelligence is here.
  • 2:10 - 2:13
    We're now using computation
    to make all sorts of decisions,
  • 2:13 - 2:15
    but also new kinds of decisions.
  • 2:15 - 2:20
    We're asking questions to computation
    that have no single right answer,
  • 2:20 - 2:22
    that are subjective
  • 2:22 - 2:24
    and open-ended and value-laden.
  • 2:24 - 2:26
    We're asking questions like,
  • 2:26 - 2:27
    "Who should the company hire?"
  • 2:28 - 2:31
    "Which update from which friend
    should you be shown?"
  • 2:31 - 2:33
    "Which convict is more
    likely to reoffend?"
  • 2:34 - 2:37
    "Which news item or movie
    should be recommended to people?"
  • 2:37 - 2:40
    Look, yes, we've been using
    computers for a while,
  • 2:40 - 2:42
    but this is different.
  • 2:42 - 2:44
    This is a historical twist,
  • 2:44 - 2:49
    because we cannot anchor computation
    for such subjective decisions
  • 2:49 - 2:54
    the way we can anchor computation
    for flying airplanes, building bridges,
  • 2:54 - 2:56
    going to the moon.
  • 2:56 - 3:00
    Are airplanes safer?
    Did the bridge sway and fall?
  • 3:00 - 3:04
    There, we have agreed-upon,
    fairly clear benchmarks,
  • 3:04 - 3:06
    and we have laws of nature to guide us.
  • 3:07 - 3:10
    We have no such anchors and benchmarks
  • 3:10 - 3:14
    for decisions in messy human affairs.
  • 3:14 - 3:18
    To make things more complicated,
    our software is getting more powerful,
  • 3:18 - 3:22
    but it's also getting less
    transparent and more complex.
  • 3:23 - 3:25
    In the past decade,
  • 3:25 - 3:27
    complex algorithms
    have made great strides.
  • 3:27 - 3:29
    They can recognize human faces.
  • 3:30 - 3:32
    They can decipher handwriting.
  • 3:32 - 3:35
    They can detect credit card fraud
  • 3:35 - 3:36
    and block spam
  • 3:36 - 3:38
    and they can translate between languages.
  • 3:38 - 3:40
    They can detect tumors in medical imaging.
  • 3:40 - 3:43
    They can beat humans in chess and Go.
  • 3:43 - 3:48
    Much of this progress comes
    from a method called "machine learning."
  • 3:48 - 3:51
    Machine learning is different
    than traditional programming,
  • 3:51 - 3:55
    where you give the computer
    detailed, exact, painstaking instructions.
  • 3:55 - 4:00
    It's more like you take the system
    and you feed it lots of data,
  • 4:00 - 4:01
    including unstructured data,
  • 4:01 - 4:04
    like the kind we generate
    in our digital lives.
  • 4:04 - 4:06
    And the system learns
    by churning through this data.
  • 4:07 - 4:08
    And also, crucially,
  • 4:08 - 4:13
    these systems don't operate
    under a single-answer logic.
  • 4:13 - 4:16
    They don't produce a simple answer;
    it's more probabilistic:
  • 4:16 - 4:19
    "This one is probably more like
    what you're looking for."
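
A minimal sketch of the contrast described above, for readers of this transcript: hand-written rules versus a model that learns from data and answers with probabilities. The spam-filtering task, the toy data and the use of scikit-learn are illustrative assumptions, not details from the talk.

      # Traditional programming: detailed, exact, painstaking instructions.
      def rule_based_spam_filter(message: str) -> bool:
          # The programmer spells out every condition by hand.
          return "free money" in message.lower() or message.count("!") > 3

      # Machine learning: feed the system labeled examples and let it
      # churn through the data; the output is a probability, not a rule.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression

      messages = ["free money now!!!", "meeting at 4pm", "win free money", "lunch tomorrow?"]
      labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (invented toy data)

      vectorizer = CountVectorizer()
      model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

      # "This one is probably more like what you're looking for":
      # the system returns a likelihood rather than a single hard-coded answer.
      prob = model.predict_proba(vectorizer.transform(["free lunch money"]))[0][1]
      print(f"probability of spam: {prob:.2f}")
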
  • 4:20 - 4:23
    Now, the upside is:
    this method is really powerful.
  • 4:23 - 4:25
    The head of Google's AI systems called it,
  • 4:25 - 4:27
    "the unreasonable effectiveness of data."
  • 4:28 - 4:29
    The downside is,
  • 4:30 - 4:33
    we don't really understand
    what the system learned.
  • 4:33 - 4:34
    In fact, that's its power.
  • 4:35 - 4:39
    This is less like giving
    instructions to a computer;
  • 4:39 - 4:43
    it's more like training
    a puppy-machine-creature
  • 4:43 - 4:46
    we don't really understand or control.
  • 4:46 - 4:48
    So this is our problem.
  • 4:48 - 4:53
    It's a problem when this artificial
    intelligence system gets things wrong.
  • 4:53 - 4:56
    It's also a problem
    when it gets things right,
  • 4:56 - 5:00
    because we don't even know which is which
    when it's a subjective problem.
  • 5:00 - 5:02
    We don't know what this thing is thinking.
  • 5:03 - 5:07
    So, consider a hiring algorithm --
  • 5:08 - 5:12
    a system used to hire people,
    using machine-learning systems.
  • 5:13 - 5:17
    Such a system would have been trained
    on previous employees' data
  • 5:17 - 5:19
    and instructed to find and hire
  • 5:19 - 5:22
    people like the existing
    high performers in the company.
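
A rough sketch, for readers, of the kind of hiring system described above: train a model on records of past employees labeled by performance, then rank applicants by how closely they resemble the existing high performers. The numeric features, the toy data and the choice of a random-forest model are assumptions made purely for illustration.

      from sklearn.ensemble import RandomForestClassifier

      # Each row: opaque numeric features derived from a past employee's
      # record and digital traces. Note there is no column named
      # "depression risk" or "pregnancy risk" -- yet the model can still
      # latch onto proxies for such things. All numbers are invented.
      past_employees = [
          [5, 120, 0.8],
          [2,  40, 0.3],
          [7, 200, 0.9],
          [1,  30, 0.2],
      ]
      is_high_performer = [1, 0, 1, 0]

      model = RandomForestClassifier(random_state=0).fit(past_employees, is_high_performer)

      # "Find and hire people like our existing high performers":
      # score applicants by predicted similarity to that group.
      applicants = [[6, 150, 0.7], [2, 35, 0.4]]
      for applicant, score in zip(applicants, model.predict_proba(applicants)[:, 1]):
          print(applicant, "->", round(float(score), 2))
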
  • 5:23 - 5:24
    Sounds good.
  • 5:24 - 5:26
    I once attended a conference
  • 5:26 - 5:29
    that brought together
    human resources managers and executives,
  • 5:29 - 5:30
    high-level people,
  • 5:30 - 5:32
    using such systems in hiring.
  • 5:32 - 5:34
    They were super excited.
  • 5:34 - 5:38
    They thought that this would make hiring
    more objective, less biased,
  • 5:38 - 5:41
    and give women
    and minorities a better shot
  • 5:41 - 5:44
    against biased human managers.
  • 5:44 - 5:46
    And look -- human hiring is biased.
  • 5:47 - 5:48
    I know.
  • 5:48 - 5:51
    I mean, in one of my early jobs
    as a programmer,
  • 5:51 - 5:55
    my immediate manager would sometimes
    come down to where I was
  • 5:55 - 5:59
    really early in the morning
    or really late in the afternoon,
  • 5:59 - 6:02
    and she'd say, "Zeynep,
    let's go to lunch!"
  • 6:03 - 6:05
    I'd be puzzled by the weird timing.
  • 6:05 - 6:07
    It's 4pm. Lunch?
  • 6:07 - 6:10
    I was broke, so free lunch. I always went.
  • 6:11 - 6:13
    I later realized what was happening.
  • 6:13 - 6:17
    My immediate managers
    had not confessed to their higher-ups
  • 6:17 - 6:20
    that the programmer they hired
    for a serious job was a teen girl
  • 6:20 - 6:24
    who wore jeans and sneakers to work.
  • 6:25 - 6:27
    I was doing a good job,
    I just looked wrong
  • 6:27 - 6:29
    and was the wrong age and gender.
  • 6:29 - 6:32
    So hiring in a gender- and race-blind way
  • 6:32 - 6:34
    certainly sounds good to me.
  • 6:35 - 6:38
    But with these systems,
    it is more complicated, and here's why:
  • 6:39 - 6:45
    Currently, computational systems
    can infer all sorts of things about you
  • 6:45 - 6:47
    from your digital crumbs,
  • 6:47 - 6:49
    even if you have not
    disclosed those things.
  • 6:50 - 6:52
    They can infer your sexual orientation,
  • 6:53 - 6:54
    your personality traits,
  • 6:55 - 6:56
    your political leanings.
  • 6:57 - 7:01
    They have predictive power
    with high levels of accuracy.
  • 7:01 - 7:04
    Remember -- for things
    you haven't even disclosed.
  • 7:04 - 7:06
    This is inference.
  • 7:06 - 7:09
    I have a friend who developed
    such computational systems
  • 7:09 - 7:13
    to predict the likelihood
    of clinical or postpartum depression
  • 7:13 - 7:14
    from social media data.
  • 7:15 - 7:16
    The results are impressive.
  • 7:16 - 7:20
    Her system can predict
    the likelihood of depression
  • 7:20 - 7:24
    months before the onset of any symptoms --
  • 7:24 - 7:25
    months before.
  • 7:25 - 7:27
    No symptoms, there's prediction.
  • 7:27 - 7:32
    She hopes it will be used
    for early intervention. Great!
  • 7:33 - 7:35
    But now put this in the context of hiring.
  • 7:36 - 7:39
    So at this human resources
    managers conference,
  • 7:39 - 7:44
    I approached a high-level manager
    in a very large company,
  • 7:44 - 7:48
    and I said to her, "Look,
    what if, unbeknownst to you,
  • 7:48 - 7:55
    your system is weeding out people
    with high future likelihood of depression?
  • 7:56 - 7:59
    They're not depressed now,
    just maybe in the future, more likely.
  • 8:00 - 8:03
    What if it's weeding out women
    more likely to be pregnant
  • 8:03 - 8:06
    in the next year or two
    but aren't pregnant now?
  • 8:07 - 8:12
    What if it's hiring aggressive people
    because that's your workplace culture?"
  • 8:13 - 8:16
    You can't tell this by looking
    at gender breakdowns.
  • 8:16 - 8:17
    Those may be balanced.
  • 8:17 - 8:21
    And since this is machine learning,
    not traditional coding,
  • 8:21 - 8:26
    there is no variable there
    labeled "higher risk of depression,"
  • 8:26 - 8:28
    "higher risk of pregnancy,"
  • 8:28 - 8:30
    "aggressive guy scale."
  • 8:30 - 8:34
    Not only do you not know
    what your system is selecting on,
  • 8:34 - 8:36
    you don't even know
    where to begin to look.
  • 8:36 - 8:37
    It's a black box.
  • 8:37 - 8:40
    It has predictive power,
    but you don't understand it.
  • 8:40 - 8:43
    "What safeguards," I asked, "do you have
  • 8:43 - 8:47
    to make sure that your black box
    isn't doing something shady?"
  • 8:49 - 8:53
    She looked at me as if I had
    just stepped on 10 puppy tails.
  • 8:53 - 8:54
    (Laughter)
  • 8:54 - 8:56
    She stared at me and she said,
  • 8:57 - 9:01
    "I don't want to hear
    another word about this."
  • 9:01 - 9:03
    And she turned around and walked away.
  • 9:04 - 9:06
    Mind you -- she wasn't rude.
  • 9:06 - 9:12
    It was clearly: what I don't know
    isn't my problem, go away, death stare.
  • 9:12 - 9:13
    (Laughter)
  • 9:14 - 9:18
    Look, such a system
    may even be less biased
  • 9:18 - 9:20
    than human managers in some ways.
  • 9:20 - 9:22
    And it could make monetary sense.
  • 9:23 - 9:24
    But it could also lead
  • 9:24 - 9:29
    to a steady but stealthy
    shutting out of the job market
  • 9:29 - 9:31
    of people with higher risk of depression.
  • 9:32 - 9:34
    Is this the kind of society
    we want to build,
  • 9:34 - 9:37
    without even knowing we've done this,
  • 9:37 - 9:41
    because we turned decision-making
    to machines we don't totally understand?
  • 9:41 - 9:43
    Another problem is this:
  • 9:43 - 9:48
    these systems are often trained
    on data generated by our actions,
  • 9:48 - 9:50
    human imprints.
  • 9:50 - 9:54
    Well, they could just be
    reflecting our biases,
  • 9:54 - 9:58
    and these systems
    could be picking up on our biases
  • 9:58 - 9:59
    and amplifying them
  • 9:59 - 10:00
    and showing them back to us,
  • 10:00 - 10:02
    while we're telling ourselves,
  • 10:02 - 10:05
    "We're just doing objective,
    neutral computation."
  • 10:06 - 10:09
    Researchers found that on Google,
  • 10:10 - 10:15
    women are less likely than men
    to be shown job ads for high-paying jobs.
  • 10:16 - 10:19
    And searching for African-American names
  • 10:19 - 10:24
    is more likely to bring up ads
    suggesting criminal history,
  • 10:24 - 10:25
    even when there is none.
  • 10:27 - 10:30
    Such hidden biases
    and black-box algorithms
  • 10:30 - 10:34
    that researchers uncover sometimes
    but sometimes we don't know,
  • 10:34 - 10:37
    can have life-altering consequences.
  • 10:38 - 10:42
    In Wisconsin, a defendant
    was sentenced to six years in prison
  • 10:42 - 10:43
    for evading the police.
  • 10:45 - 10:46
    You may not know this,
  • 10:46 - 10:50
    but algorithms are increasingly used
    in parole and sentencing decisions.
  • 10:50 - 10:53
    He wanted to know:
    How is this score calculated?
  • 10:54 - 10:55
    It's a commercial black box.
  • 10:55 - 11:00
    The company refused to have its algorithm
    be challenged in open court.
  • 11:00 - 11:06
    But ProPublica, an investigative
    nonprofit, audited that very algorithm
  • 11:06 - 11:08
    with what public data they could find,
  • 11:08 - 11:10
    and found that its outcomes were biased
  • 11:10 - 11:14
    and its predictive power
    was dismal, barely better than chance,
  • 11:14 - 11:18
    and it was wrongly labeling
    black defendants as future criminals
  • 11:18 - 11:22
    at twice the rate of white defendants.
  • 11:24 - 11:25
    So, consider this case:
  • 11:26 - 11:30
    This woman was late
    picking up her godsister
  • 11:30 - 11:32
    from a school in Broward County, Florida,
  • 11:33 - 11:35
    running down the street
    with a friend of hers.
  • 11:35 - 11:39
    They spotted an unlocked kid's bike
    and a scooter on a porch
  • 11:39 - 11:41
    and foolishly jumped on it.
  • 11:41 - 11:44
    As they were speeding off,
    a woman came out and said,
  • 11:44 - 11:46
    "Hey! That's my kid's bike!"
  • 11:46 - 11:49
    They dropped it, they walked away,
    but they were arrested.
  • 11:49 - 11:53
    She was wrong, she was foolish,
    but she was also just 18.
  • 11:53 - 11:55
    She had a couple of juvenile misdemeanors.
  • 11:56 - 12:01
    Meanwhile, that man had been arrested
    for shoplifting in Home Depot --
  • 12:01 - 12:04
    85 dollars' worth of stuff,
    a similar petty crime.
  • 12:05 - 12:09
    But he had two prior
    armed robbery convictions.
  • 12:10 - 12:13
    But the algorithm scored her
    as high risk, and not him.
  • 12:15 - 12:19
    Two years later, ProPublica found
    that she had not reoffended.
  • 12:19 - 12:21
    It was just hard for her
    to get a job with her record.
  • 12:21 - 12:23
    He, on the other hand, did reoffend
  • 12:23 - 12:27
    and is now serving an eight-year
    prison term for a later crime.
  • 12:28 - 12:31
    Clearly, we need to audit our black boxes
  • 12:31 - 12:34
    and not have them have
    this kind of unchecked power.
  • 12:34 - 12:37
    (Applause)
  • 12:38 - 12:42
    Audits are great and important,
    but they don't solve all our problems.
  • 12:42 - 12:45
    Take Facebook's powerful
    news feed algorithm --
  • 12:45 - 12:50
    you know, the one that ranks everything
    and decides what to show you
  • 12:50 - 12:52
    from all the friends and pages you follow.
  • 12:53 - 12:55
    Should you be shown another baby picture?
  • 12:55 - 12:56
    (Laughter)
  • 12:56 - 12:59
    A sullen note from an acquaintance?
  • 12:59 - 13:01
    An important but difficult news item?
  • 13:01 - 13:03
    There's no right answer.
  • 13:03 - 13:05
    Facebook optimizes
    for engagement on the site:
  • 13:06 - 13:07
    likes, shares, comments.
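
As a rough illustration of what "optimizes for engagement" means in practice: score each post by its expected likes, shares and comments, and show the highest scorers first. The posts and weights below are invented for this sketch; this is not Facebook's actual ranking formula.

      # Toy news-feed ranking. Stories that are hard to "like" or comment on
      # sink to the bottom, regardless of how important they are.
      posts = [
          {"text": "ALS Ice Bucket Challenge video", "likes": 900, "shares": 300, "comments": 150},
          {"text": "Ferguson protest coverage", "likes": 40, "shares": 25, "comments": 10},
          {"text": "Another baby picture", "likes": 500, "shares": 20, "comments": 80},
      ]

      def engagement_score(post):
          # Invented weights standing in for whatever the real system optimizes.
          return 1.0 * post["likes"] + 2.0 * post["shares"] + 1.5 * post["comments"]

      for post in sorted(posts, key=engagement_score, reverse=True):
          print(round(engagement_score(post)), "-", post["text"])
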
  • 13:08 - 13:11
    In August of 2014,
  • 13:11 - 13:14
    protests broke out in Ferguson, Missouri,
  • 13:14 - 13:18
    after the killing of an African-American
    teenager by a white police officer,
  • 13:18 - 13:20
    under murky circumstances.
  • 13:20 - 13:22
    The news of the protests was all over
  • 13:22 - 13:25
    my algorithmically
    unfiltered Twitter feed,
  • 13:25 - 13:27
    but nowhere on my Facebook.
  • 13:27 - 13:29
    Was it my Facebook friends?
  • 13:29 - 13:31
    I disabled Facebook's algorithm,
  • 13:31 - 13:34
    which is hard because Facebook
    keeps wanting to make you
  • 13:34 - 13:36
    come under the algorithm's control,
  • 13:36 - 13:39
    and saw that my friends
    were talking about it.
  • 13:39 - 13:41
    It's just that the algorithm
    wasn't showing it to me.
  • 13:41 - 13:44
    I researched this and found
    this was a widespread problem.
  • 13:44 - 13:48
    The story of Ferguson
    wasn't algorithm-friendly.
  • 13:48 - 13:49
    It's not "likable."
  • 13:49 - 13:51
    Who's going to click on "like?"
  • 13:52 - 13:54
    It's not even easy to comment on.
  • 13:54 - 13:55
    Without likes and comments,
  • 13:55 - 13:58
    the algorithm was likely showing it
    to even fewer people,
  • 13:58 - 14:00
    so we didn't get to see this.
  • 14:01 - 14:02
    Instead, that week,
  • 14:02 - 14:04
    Facebook's algorithm highlighted this,
  • 14:05 - 14:07
    which is the ALS Ice Bucket Challenge.
  • 14:07 - 14:11
    Worthy cause; dump ice water,
    donate to charity, fine.
  • 14:11 - 14:12
    But it was super algorithm-friendly.
  • 14:13 - 14:16
    The machine made this decision for us.
  • 14:16 - 14:19
    A very important
    but difficult conversation
  • 14:19 - 14:21
    might have been smothered,
  • 14:21 - 14:24
    had Facebook been the only channel.
  • 14:24 - 14:28
    Now, finally, these systems
    can also be wrong
  • 14:28 - 14:31
    in ways that don't resemble human systems.
  • 14:31 - 14:34
    Do you guys remember Watson,
    IBM's machine-intelligence system
  • 14:34 - 14:37
    that wiped the floor
    with human contestants on Jeopardy?
  • 14:37 - 14:39
    It was a great player.
  • 14:39 - 14:42
    But then, for Final Jeopardy,
    Watson was asked this question:
  • 14:43 - 14:46
    "Its largest airport is named
    for a World War II hero,
  • 14:46 - 14:48
    its second-largest
    for a World War II battle."
  • 14:48 - 14:49
    (Hums Final Jeopardy music)
  • 14:50 - 14:51
    Chicago.
  • 14:51 - 14:52
    The two humans got it right.
  • 14:53 - 14:57
    Watson, on the other hand,
    answered "Toronto" --
  • 14:57 - 14:59
    for a US city category!
  • 15:00 - 15:02
    The impressive system also made an error
  • 15:03 - 15:06
    that a human would never make,
    a second-grader wouldn't make.
  • 15:07 - 15:10
    Our machine intelligence can fail
  • 15:10 - 15:13
    in ways that don't fit
    error patterns of humans,
  • 15:13 - 15:16
    in ways we won't expect
    and be prepared for.
  • 15:16 - 15:20
    It'd be lousy not to get a job
    one is qualified for,
  • 15:20 - 15:23
    but it would triple suck
    if it was because of stack overflow
  • 15:23 - 15:25
    in some subroutine.
  • 15:25 - 15:27
    (Laughter)
  • 15:27 - 15:29
    In May of 2010,
  • 15:29 - 15:33
    a flash crash on Wall Street
    fueled by a feedback loop
  • 15:33 - 15:36
    in Wall Street's "sell" algorithm
  • 15:36 - 15:41
    wiped a trillion dollars
    of value in 36 minutes.
  • 15:42 - 15:44
    I don't even want to think
    what "error" means
  • 15:44 - 15:48
    in the context of lethal
    autonomous weapons.
  • 15:50 - 15:54
    So yes, humans have always had biases.
  • 15:54 - 15:56
    Decision makers and gatekeepers,
  • 15:56 - 15:59
    in courts, in news, in war ...
  • 15:59 - 16:02
    they make mistakes;
    but that's exactly my point.
  • 16:02 - 16:06
    We cannot escape
    these difficult questions.
  • 16:07 - 16:10
    We cannot outsource
    our responsibilities to machines.
  • 16:11 - 16:15
    (Applause)
  • 16:17 - 16:22
    Artificial intelligence does not give us
    a "Get out of ethics free" card.
  • 16:23 - 16:26
    Data scientist Fred Benenson
    calls this math-washing.
  • 16:26 - 16:28
    We need the opposite.
  • 16:28 - 16:33
    We need to cultivate algorithm suspicion,
    scrutiny and investigation.
  • 16:33 - 16:37
    We need to make sure we have
    algorithmic accountability,
  • 16:37 - 16:39
    auditing and meaningful transparency.
  • 16:39 - 16:43
    We need to accept
    that bringing math and computation
  • 16:43 - 16:46
    to messy, value-laden human affairs
  • 16:46 - 16:48
    does not bring objectivity;
  • 16:48 - 16:52
    rather, the complexity of human affairs
    invades the algorithms.
  • 16:52 - 16:56
    Yes, we can and we should use computation
  • 16:56 - 16:58
    to help us make better decisions.
  • 16:58 - 17:03
    But we have to own up
    to our moral responsibility to judgment,
  • 17:03 - 17:06
    and use algorithms within that framework,
  • 17:06 - 17:11
    not as a means to abdicate
    and outsource our responsibilities
  • 17:11 - 17:13
    to one another as human to human.
  • 17:14 - 17:16
    Machine intelligence is here.
  • 17:16 - 17:20
    That means we must hold on ever tighter
  • 17:20 - 17:22
    to human values and human ethics.
  • 17:22 - 17:23
    Thank you.
  • 17:23 - 17:28
    (Applause)
Title: Machine intelligence makes human morals more important
Speaker: Zeynep Tufekci
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 17:42
