Machine intelligence makes human morals more important

  • 0:01 - 0:05
    So I started my first job
    as a computer programmer
  • 0:05 - 0:07
    in my very first year of college,
  • 0:07 - 0:09
    basically as a teenager.
  • 0:09 - 0:11
    Soon after I started working,
  • 0:11 - 0:13
    writing software in a company,
  • 0:13 - 0:17
    a manager who worked at the company
    came down to where I was,
  • 0:17 - 0:18
    and he whispered to me,
  • 0:18 - 0:22
    "Can he tell if I'm lying?"
  • 0:22 - 0:25
    There was nobody else in the room.
  • 0:25 - 0:30
    "Can who tell if you're lying,
    and why are we whispering?"
  • 0:30 - 0:33
    The manager pointed
    at the computer in the room.
  • 0:33 - 0:37
    "Can he tell if I'm lying?"
  • 0:38 - 0:43
    Well, that manager was having
    an affair with the receptionist,
  • 0:43 - 0:46
    and I was still a teenager,
  • 0:46 - 0:48
    so I whisper-shouted back to him,
  • 0:48 - 0:52
    "Yes, the computer can tell
    if you're lying."
  • 0:52 - 0:53
    (Laughter)
  • 0:53 - 0:57
    Well, I laughed, but actually
    the laugh's on me.
  • 0:57 - 1:02
    Nowadays, there are computational systems
    that can suss out emotional states,
  • 1:02 - 1:03
    and even lying,
  • 1:03 - 1:05
    from processing human faces.
  • 1:05 - 1:10
    Advertisers and even governments
    are very interested.
  • 1:10 - 1:13
    I had become a computer programmer
    because I was one of those kids
  • 1:13 - 1:16
    crazy about math and science,
  • 1:16 - 1:19
    but somewhere along the line
    I'd learned about nuclear weapons,
  • 1:19 - 1:22
    and I'd gotten really concerned
    with the ethics of science.
  • 1:22 - 1:23
    I was troubled.
  • 1:23 - 1:26
    However, because of family circumstances,
  • 1:26 - 1:29
    I also needed to start working
    as soon as possible.
  • 1:29 - 1:33
    So I thought to myself, hey,
    let me pick a technical field
  • 1:33 - 1:35
    where I can get a job easily
  • 1:35 - 1:39
    and where I don't have to deal
    with any troublesome questions of ethics.
  • 1:39 - 1:42
    So I picked computers.
  • 1:42 - 1:45
    Well ha ha ha, all the laughs are on me.
  • 1:45 - 1:49
    Nowadays, computer scientists
    are building platforms that control
  • 1:49 - 1:53
    what a billion people see every day.
  • 1:53 - 1:58
    They're developing cars that
    could decide who to run over.
  • 1:58 - 2:03
    They're even building machines, weapons,
    that might kill human beings in war.
  • 2:03 - 2:07
    It's ethics all the way down.
  • 2:07 - 2:10
    Machine intelligence is here.
  • 2:10 - 2:13
    We're now using computation
    to make all sorts of decisions,
  • 2:13 - 2:15
    but also new kinds of decisions.
  • 2:15 - 2:21
    We're asking questions to computation
    that have no single right answers,
  • 2:21 - 2:24
    that are subjective and open-ended
    and value-laden.
  • 2:24 - 2:26
    We're asking questions like,
  • 2:26 - 2:28
    "Who should the company hire?"
  • 2:28 - 2:31
    "Which update from which friend
    should you be shown?"
  • 2:31 - 2:34
    "Which convict is more
    likely to re-offend?"
  • 2:34 - 2:37
    "Which news item or movie
    should be recommended to people?"
  • 2:37 - 2:40
    Look, yes, we've been using
    computers for a while,
  • 2:40 - 2:42
    but this is different.
  • 2:42 - 2:44
    This is a historical twist,
  • 2:44 - 2:49
    because we cannot anchor
    computation for such subjective decisions
  • 2:49 - 2:52
    the way we can anchor computation
  • 2:52 - 2:56
    for flying airplanes, building bridges,
    going to the moon.
  • 2:56 - 3:00
    Are airplanes safer?
    Did the bridge sway and fall?
  • 3:00 - 3:04
    There we have agreed-upon,
    fairly clear benchmarks,
  • 3:04 - 3:07
    and we have laws of nature to guide us.
  • 3:07 - 3:10
    We have no such anchors and benchmarks
  • 3:10 - 3:15
    for decisions in messy human affairs.
  • 3:15 - 3:18
    To make things more complicated,
    our software is getting more powerful,
  • 3:18 - 3:23
    but it's also getting less transparent
    and more complex.
  • 3:23 - 3:25
    Recently, in the past decade,
  • 3:25 - 3:28
    complex algorithms have made
    great strides.
  • 3:28 - 3:30
    They can recognize human faces.
  • 3:30 - 3:33
    They can decipher handwriting.
  • 3:33 - 3:36
    They can detect credit card fraud
    and block spam
  • 3:36 - 3:38
    and they can translate between languages.
  • 3:38 - 3:40
    They can detect tumors in medical imaging.
  • 3:40 - 3:44
    They can beat humans in chess and Go.
  • 3:44 - 3:48
    Much of this progress comes from
    a method called machine learning.
  • 3:48 - 3:52
    Machine learning is different
    than traditional programming,
  • 3:52 - 3:55
    where you give the computer
    detailed, exact, painstaking instructions.
  • 3:55 - 4:00
    It's more like you take the system
    and you feed it lots of data,
  • 4:00 - 4:01
    including unstructured data,
  • 4:01 - 4:04
    like the kind we generate
    in our digital lives.
  • 4:04 - 4:07
    And the system learns
    by churning through this data.
  • 4:07 - 4:13
    And also crucially, these systems
    don't operate under a single-answer logic.
  • 4:13 - 4:15
    They don't produce a simple answer.
  • 4:15 - 4:16
    It's more probabilistic.
  • 4:16 - 4:20
    This one is probably more
    like what you're looking for.
  • 4:20 - 4:23
    Now the upside is, this method
    is really powerful.
  • 4:23 - 4:28
    The head of Google's AI systems called it
    "the unreasonable effectiveness of data."
  • 4:28 - 4:33
    The downside is we don't really
    understand what the system learned.
  • 4:33 - 4:35
    In fact, that's its power.
  • 4:35 - 4:39
    This is less like giving instructions
    to a computer.
  • 4:39 - 4:44
    It's more like training a puppy
    machine creature
  • 4:44 - 4:46
    we don't really understand or control.
  • 4:46 - 4:49
    So this is our problem.
  • 4:49 - 4:53
    It's a problem when this artificial
    intelligence system gets things wrong.
  • 4:53 - 4:54
    It's also a problem
  • 4:54 - 4:56
    when it gets things right,
  • 4:56 - 5:00
    because we don't even know which is which
    when it's a subjective problem.
  • 5:00 - 5:04
    We don't know what this thing is thinking.
  • 5:04 - 5:08
    So consider a hiring algorithm,
  • 5:08 - 5:13
    a system used to hire people
    using machine learning systems.
  • 5:13 - 5:17
    Such a system would have been trained
    on previous employees' data
  • 5:17 - 5:21
    and instructed to find and hire
    people like the existing
  • 5:21 - 5:23
    high performers in the company.
  • 5:23 - 5:24
    Sounds good.
  • 5:24 - 5:27
    I once attended a conference
    that brought together
  • 5:27 - 5:30
    human resources managers
    and executives, high-level people,
  • 5:30 - 5:32
    using such systems in hiring.
  • 5:32 - 5:34
    They were super-excited.
  • 5:34 - 5:39
    They thought that this would make hiring
    more objective, less biased,
  • 5:39 - 5:42
    and give women
    and minorities a better shot
  • 5:42 - 5:44
    against biased human managers.
  • 5:44 - 5:47
    And look, human hiring is biased.
  • 5:47 - 5:48
    I know.
  • 5:48 - 5:51
    In one of my early jobs as a programmer,
  • 5:51 - 5:56
    my immediate manager would sometimes
    come down to where I was
  • 5:56 - 5:57
    really early in the morning
  • 5:57 - 5:59
    or really late in the afternoon,
  • 5:59 - 6:03
    and she'd say, "Zeynep,
    let's go to lunch!"
  • 6:03 - 6:05
    I'd be puzzled by the weird timing.
  • 6:05 - 6:07
    It's 4 pm? Lunch?
  • 6:07 - 6:11
    I was broke, so free lunch. I always went.
  • 6:11 - 6:13
    I later realized what was happening.
  • 6:13 - 6:17
    My immediate manager
    had not confessed to their higher-ups
  • 6:17 - 6:20
    that the programmer they hired
    for a serious job was a teen girl
  • 6:20 - 6:25
    who wore jeans and sneakers to work.
  • 6:25 - 6:28
    I was doing a good job.
  • 6:28 - 6:29
    I just looked wrong and was
    the wrong age and gender.
  • 6:29 - 6:32
    So hiring in a gender- and race-blind way
  • 6:32 - 6:35
    certainly sounds good to me.
  • 6:35 - 6:37
    But with these systems,
    it is more complicated, and here's why.
  • 6:37 - 6:45
    Currently, computational systems
    can infer all sorts of things about you
  • 6:45 - 6:47
    from your digital crumbs,
  • 6:47 - 6:50
    even if you have not
    disclosed those things.
  • 6:50 - 6:53
    They can infer your sexual orientation,
  • 6:53 - 6:55
    your personality traits,
  • 6:55 - 6:57
    your political leanings.
  • 6:57 - 7:02
    They have predictive power
    with high levels of accuracy,
  • 7:02 - 7:04
    remember, for things
    you haven't even disclosed.
  • 7:04 - 7:06
    This is inference.
  • 7:06 - 7:09
    I have a friend who developed
    such computational systems
  • 7:09 - 7:13
    to predict the likelihood of clinical
    or post-partum depression
  • 7:13 - 7:15
    from social media data.
  • 7:15 - 7:17
    The results were impressive.
  • 7:17 - 7:20
    Her system can predict
    the likelihood of depression
  • 7:20 - 7:24
    months before the onset of any symptoms,
  • 7:24 - 7:26
    months before.
  • 7:26 - 7:28
    No symptoms, there's prediction.
  • 7:28 - 7:32
    She hopes it will be used
    for early intervention.
  • 7:32 - 7:34
    Great.
  • 7:34 - 7:36
    But now put this in the context of hiring.
  • 7:36 - 7:39
    So at this human resources
    managers' conference,
  • 7:39 - 7:42
    I approached a high-level manager
  • 7:42 - 7:44
    in a very large company,
  • 7:44 - 7:46
    and I said to her, "Look,
    what if, unbeknownst to you,
  • 7:46 - 7:56
    your system is weeding out people with
    high future likelihood of depression?
  • 7:56 - 7:59
    They're not depressed now,
    just maybe in the future, more likely.
  • 7:59 - 8:03
    What if it's weeding out women
    more likely to be pregnant
  • 8:03 - 8:07
    in the next year or two
    but aren't pregnant now?
  • 8:07 - 8:13
    What if it's hiring aggressive people
    because that's your workplace culture?"
  • 8:13 - 8:16
    You can't tell this by looking
    at gender breakdowns.
  • 8:16 - 8:18
    Those may be balanced.
  • 8:18 - 8:20
    And since this is machine learning,
    not traditional coding,
  • 8:20 - 8:23
    there is no variable there
  • 8:23 - 8:26
    labeled "higher risk of depression,"
  • 8:26 - 8:28
    "higher risk of pregnancy,"
  • 8:28 - 8:30
    "aggressive guy scale."
  • 8:30 - 8:34
    Not only do you not know
    what your system is selecting on,
  • 8:34 - 8:36
    you don't even know
    where to begin to look.
  • 8:36 - 8:38
    It's a black box.
  • 8:38 - 8:41
    It has predictive power,
    but you don't understand it.
  • 8:41 - 8:45
    "What safeguards," I asked, "do you have
    to make sure that your black box
  • 8:45 - 8:47
    isn't doing something shady?"
  • 8:47 - 8:50
    So she looked at me
  • 8:50 - 8:54
    as if I had just stepped
    on 10 puppy tails.
  • 8:54 - 8:57
    She stared at me and she said,
  • 8:57 - 9:02
    "I don't want to hear
    another word about this."
  • 9:02 - 9:04
    And she turned around and walked away.
  • 9:04 - 9:07
    Mind you, she wasn't rude.
  • 9:07 - 9:14
    It was clearly a "what I don't know
    isn't my problem, go away" death stare.
  • 9:14 - 9:20
    Look, such a system may even be less
    biased than human managers in some ways,
  • 9:20 - 9:23
    and it could make monetary sense,
  • 9:23 - 9:29
    but it could also lead to a steady but
    stealthy shutting out of the job market
  • 9:29 - 9:32
    of people with higher risk of depression.
  • 9:32 - 9:35
    Is this the kind of society
    we want to build
  • 9:35 - 9:37
    without even knowing we've done this
  • 9:37 - 9:42
    because we turned decision-making
    over to machines we don't totally understand?
  • 9:42 - 9:43
    Another problem is this:
  • 9:43 - 9:48
    these systems are often trained
    on data generated by our actions,
  • 9:48 - 9:50
    human imprints.
  • 9:50 - 9:54
    Well, they could just be
    reflecting our biases,
  • 9:54 - 9:58
    and these systems could be
    picking up on our biases
  • 9:58 - 10:01
    and amplifying them
  • 10:01 - 10:02
    and showing them back to us,
  • 10:02 - 10:06
    while we're telling ourselves, "We're just
    doing objective neutral computation."
  • 10:06 - 10:10
    Researchers found that on Google,
  • 10:10 - 10:13
    women are less likely than men
  • 10:13 - 10:16
    to be shown job ads for high-paying jobs,
  • 10:16 - 10:19
    and searching for African-American names
  • 10:19 - 10:24
    is more likely to bring up ads
    suggesting criminal history,
  • 10:24 - 10:27
    even when there is none.
  • 10:27 - 10:30
    Such hidden biases
    and black-box algorithms,
  • 10:30 - 10:33
    which researchers sometimes uncover
  • 10:33 - 10:34
    but sometimes don't even know about,
  • 10:34 - 10:38
    can have life-altering consequences.
  • 10:38 - 10:41
    In Wisconsin, a defendant was sentenced
  • 10:41 - 10:44
    to six years in prison
    for evading the police.
  • 10:44 - 10:48
    You may not know this, but algorithms
    are increasingly used
  • 10:48 - 10:51
    in parole and sentencing decisions.
  • 10:51 - 10:54
    He wanted to know,
    how is this score calculated?
  • 10:54 - 10:56
    It's a commercial black box.
  • 10:56 - 11:01
    The company refused to have its algorithm
    be challenged in open court,
  • 11:01 - 11:04
    but ProPublica, an investigative nonprofit,
  • 11:04 - 11:08
    audited that very algorithm
    with what public data they could find,
  • 11:08 - 11:11
    and found that its outcomes were biased
  • 11:11 - 11:14
    and its predictive power was dismal,
    barely better than chance,
  • 11:14 - 11:16
    and it was wrongly labeling
    black defendants
  • 11:16 - 11:19
    as future criminals
  • 11:19 - 11:22
    at twice the rate of white defendants.
  • 11:22 - 11:26
    So consider this case.
  • 11:26 - 11:31
    This woman was late picking up
    her godsister from a school
  • 11:31 - 11:33
    in Broward County, Florida,
  • 11:33 - 11:35
    running down a street
    with a friend of hers,
  • 11:35 - 11:39
    and they spotted an unlocked
    kid's bike and a scooter on a porch
  • 11:39 - 11:41
    and foolishly jumped on it.
  • 11:41 - 11:43
    As they were speeding off,
    a woman came out and said,
  • 11:43 - 11:46
    "Hey, that's my kid's bike."
  • 11:46 - 11:48
    They dropped it, they walked away,
  • 11:48 - 11:49
    but they were arrested.
  • 11:49 - 11:53
    She was wrong, she was foolish,
    but she was also just 18.
  • 11:53 - 11:56
    She had a couple of juvenile misdemeanors.
  • 11:56 - 12:00
    Meanwhile, that man had been arrested
    for shoplifting in Home Depot,
  • 12:00 - 12:04
    85 dollars worth of stuff,
    a similar petty crime,
  • 12:04 - 12:10
    but he had two prior
    armed robbery convictions.
  • 12:10 - 12:14
    But the algorithm scored her
    as high risk, and not him.
  • 12:14 - 12:19
    Two years later, ProPublica found
    that she had not reoffended.
  • 12:19 - 12:21
    It was just hard for her
    to get a job with her record.
  • 12:21 - 12:23
    He on the other hand did reoffend
  • 12:23 - 12:28
    and is now serving an eight-year
    prison term for a later crime.
  • 12:28 - 12:32
    Clearly, we need to audit our black boxes
  • 12:32 - 12:34
    and not have them have
    this kind of unchecked power.
  • 12:34 - 12:37
    (Applause)
  • 12:38 - 12:40
    Audits are great and important,
    but they don't solve all our problems.
  • 12:40 - 12:43
    Take Facebook's powerful
    news feed algorithm,
  • 12:43 - 12:46
    you know, the one that ranks everything
  • 12:46 - 12:49
    and decides what to show you
  • 12:49 - 12:53
    from all the friends
    and pages that you follow.
  • 12:53 - 12:57
    Should you be shown
    another baby picture?
  • 12:57 - 13:00
    A sullen note from an acquaintance?
  • 13:00 - 13:02
    An important but difficult
    news item? There's no right answer.
  • 13:02 - 13:06
    Facebook optimizes
    for engagement on the site:
  • 13:06 - 13:07
    likes, shares, comments.
  • 13:07 - 13:11
    So in August of 2014,
  • 13:11 - 13:14
    protests broke out in Ferguson, Missouri,
  • 13:14 - 13:18
    after the killing of an African-American
    teenager by a white police officer
  • 13:18 - 13:20
    under murky circumstances.
  • 13:20 - 13:23
    The news of the protests
    was all over my algorithmically
  • 13:23 - 13:25
    unfiltered Twitter feed
  • 13:25 - 13:27
    but nowhere on my Facebook.
  • 13:27 - 13:29
    Was it my Facebook friends?
  • 13:29 - 13:31
    I disabled Facebook's algorithm,
  • 13:31 - 13:34
    which is hard because Facebook
    keeps wanting to make you
  • 13:34 - 13:37
    come under the algorithm's control,
  • 13:37 - 13:38
    and saw that my friends
    were talking about it.
  • 13:38 - 13:41
    It's just that the algorithm
    wasn't showing it to me.
  • 13:41 - 13:44
    I researched this and found
    this was a widespread problem.
  • 13:44 - 13:46
    The story of Ferguson
    wasn't algorithm-friendly.
  • 13:46 - 13:49
    It's not likable.
  • 13:49 - 13:51
    Who's going to click on like?
  • 13:51 - 13:54
    It's not even easy to comment on.
  • 13:54 - 13:57
    Without likes and comments,
    the algorithm was likely showing it
  • 13:57 - 13:58
    to even fewer people,
  • 13:58 - 14:01
    so we didn't get to see this.
  • 14:01 - 14:04
    Instead, that week, Facebook's
    algorithm highlighted this,
  • 14:04 - 14:07
    which is the ALS ice bucket challenge.
  • 14:07 - 14:11
    Worthy cause. Dump ice water,
    donate to charity, fine.
  • 14:11 - 14:13
    But it was super-algorithm-friendly.
  • 14:13 - 14:16
    The machine made this decision for us.
  • 14:16 - 14:20
    A very important
    but difficult conversation
  • 14:20 - 14:21
    might have been smothered
  • 14:21 - 14:24
    had Facebook been the only channel.
  • 14:24 - 14:27
    Now finally, these systems
  • 14:27 - 14:31
    can also be wrong in ways
    that don't resemble human systems.
  • 14:31 - 14:34
    Do you guys remember Watson,
    IBM's machine intelligence system
  • 14:34 - 14:37
    that wiped the floor with
    human contestants on Jeopardy?
  • 14:37 - 14:39
    It was a great player.
  • 14:39 - 14:43
    But then, for Final Jeopardy,
    Watson was asked this question:
  • 14:43 - 14:46
    "It's largest airport is named
    for a World War II hero,
  • 14:46 - 14:48
    it's second largest
    for a World War II battle."
  • 14:48 - 14:51
    Chicago.
  • 14:51 - 14:53
    The two humans got it right.
  • 14:53 - 14:57
    Watson, on the other hand,
    answered "Toronto"
  • 14:57 - 15:00
    for a US city category.
  • 15:00 - 15:03
    The impressive system also made an error
  • 15:03 - 15:07
    that a human would never make,
    a second grader wouldn't make.
  • 15:07 - 15:12
    Our machine intelligence can fail
    in ways that don't fit
  • 15:12 - 15:13
    error patterns of humans,
  • 15:13 - 15:16
    in ways we won't expect
    and be prepared for.
  • 15:16 - 15:20
    It'd be lousy not to get a job
    one is qualified for,
  • 15:20 - 15:23
    but it would triple suck if it was
    because of stack overflow
  • 15:23 - 15:25
    in some subroutine.
  • 15:25 - 15:27
    (Laughter)
  • 15:27 - 15:32
    In May of 2010, a flash crash
    on Wall Street
  • 15:32 - 15:36
    fueled by a feedback loop
    in Wall Street's sell algorithm
  • 15:36 - 15:42
    wiped a trillion dollars of value
    in 36 minutes.
  • 15:42 - 15:45
    I don't even want to think
    what error means in the context
  • 15:45 - 15:48
    of lethal autonomous weapons.
  • 15:50 - 15:54
    So yes, humans have always had biases.
  • 15:54 - 15:56
    Decision makers and gatekeepers,
  • 15:56 - 16:00
    in courts, in news, in war,
  • 16:00 - 16:03
    they make mistakes,
    but that's exactly my point.
  • 16:03 - 16:07
    We cannot escape
    these difficult questions.
  • 16:07 - 16:10
    We cannot outsource
    our responsibilities to machines.
  • 16:10 - 16:15
    (Applause)
  • 16:17 - 16:23
    Artificial intelligence does not give us
    a "get out of ethics free" card.
  • 16:23 - 16:26
    Data scientist Fred Benenson
    calls this math-washing.
  • 16:26 - 16:28
    We need the opposite.
  • 16:28 - 16:33
    We need to cultivate algorithm suspicion,
    scrutiny, and investigation.
  • 16:33 - 16:37
    We need to make sure we have
    algorithmic accountability,
  • 16:37 - 16:39
    auditing, and meaningful transparency.
  • 16:39 - 16:43
    We need to accept
    that bringing math and computation
  • 16:43 - 16:46
    to messy, value-laden human affairs
  • 16:46 - 16:48
    does not bring objectivity.
  • 16:48 - 16:50
    Rather, the complexity of human affairs
    invades the algorithms.
  • 16:50 - 16:58
    Yes, we can and we should use computation
    to help us make better decisions,
  • 16:58 - 17:03
    but we have to own up
    to our moral responsibility to judgment
  • 17:03 - 17:06
    and use algorithms within that framework,
  • 17:06 - 17:11
    not as a means to abdicate
    and outsource our responsibilities
  • 17:11 - 17:14
    to one another as human to human.
  • 17:14 - 17:17
    Machine intelligence is here.
  • 17:17 - 17:20
    That means we must hold on
    ever tighter to human values
  • 17:20 - 17:22
    and human ethics.
  • 17:22 - 17:24
    Thank you.
  • 17:24 - 17:29
    (Applause)
Title: Machine intelligence makes human morals more important
Speaker: Zeynep Tufekci
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 17:42