This app knows how you feel — from the look on your face

  • 0:01 - 0:05
    Our emotions influence
    every aspect of our lives,
  • 0:05 - 0:08
    from our health and how we learn,
    to how we do business and make decisions,
  • 0:08 - 0:10
    big ones and small.
  • 0:11 - 0:14
    Our emotions also influence
    how we connect with one another.
  • 0:15 - 0:19
    We've evolved to live
    in a world like this,
  • 0:19 - 0:23
    but instead, we're living
    more and more of our lives like this --
  • 0:23 - 0:27
    this is the text message
    from my daughter last night --
  • 0:27 - 0:29
    in a world that's devoid of emotion.
  • 0:29 - 0:31
    So I'm on a mission to change that.
  • 0:31 - 0:35
    I want to bring emotions
    back into our digital experiences.
  • 0:36 - 0:39
    I started on this path 15 years ago.
  • 0:39 - 0:41
    I was a computer scientist in Egypt,
  • 0:41 - 0:46
    and I had just gotten accepted to
    a Ph.D. program at Cambridge University.
  • 0:46 - 0:48
    So I did something quite unusual
  • 0:48 - 0:52
    for a young newlywed Muslim Egyptian wife:
  • 0:54 - 0:57
    With the support of my husband,
    who had to stay in Egypt,
  • 0:57 - 1:00
    I packed my bags and I moved to England.
  • 1:00 - 1:03
    At Cambridge, thousands of miles
    away from home,
  • 1:03 - 1:06
    I realized I was spending
    more hours with my laptop
  • 1:06 - 1:08
    than I did with any other human.
  • 1:08 - 1:13
    Yet despite this intimacy, my laptop
    had absolutely no idea how I was feeling.
  • 1:13 - 1:17
    It had no idea if I was happy,
  • 1:17 - 1:20
    having a bad day, or stressed, confused,
  • 1:20 - 1:22
    and so that got frustrating.
  • 1:24 - 1:29
    Even worse, as I communicated
    online with my family back home,
  • 1:29 - 1:33
    I felt that all my emotions
    disappeared in cyberspace.
  • 1:33 - 1:38
    I was homesick, I was lonely,
    and on some days I was actually crying,
  • 1:38 - 1:43
    but all I had to communicate
    these emotions was this.
  • 1:43 - 1:45
    (Laughter)
  • 1:45 - 1:50
    Today's technology
    has lots of I.Q., but no E.Q.;
  • 1:50 - 1:53
    lots of cognitive intelligence,
    but no emotional intelligence.
  • 1:53 - 1:55
    So that got me thinking,
  • 1:55 - 1:59
    what if our technology
    could sense our emotions?
  • 1:59 - 2:03
    What if our devices could sense
    how we felt and react accordingly,
  • 2:03 - 2:06
    just the way an emotionally
    intelligent friend would?
  • 2:07 - 2:10
    Those questions led me and my team
  • 2:10 - 2:15
    to create technologies that can read
    and respond to our emotions,
  • 2:15 - 2:18
    and our starting point was the human face.
  • 2:19 - 2:22
    So our human face happens to be
    one of the most powerful channels
  • 2:22 - 2:26
    that we all use to communicate
    social and emotional states,
  • 2:26 - 2:29
    everything from enjoyment and surprise
  • 2:29 - 2:33
    to empathy and curiosity.
  • 2:33 - 2:38
    In emotion science, we call each
    facial muscle movement an action unit.
  • 2:38 - 2:41
    So for example, action unit 12,
  • 2:41 - 2:43
    it's not a Hollywood blockbuster,
  • 2:43 - 2:46
    it is actually a lip corner pull,
    which is the main component of a smile.
  • 2:46 - 2:49
    Try it everybody. Let's get
    some smiles going on.
  • 2:49 - 2:52
    Another example is action unit 4.
    It's the brow furrow.
  • 2:52 - 2:54
    It's when you draw your eyebrows together
  • 2:54 - 2:56
    and you create all
    these textures and wrinkles.
  • 2:56 - 3:01
    We don't like them, but they're
    a strong indicator of a negative emotion.
  • 3:01 - 3:03
    So we have about 45 of these action units,
  • 3:03 - 3:06
    and they combine to express
    hundreds of emotions.
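
As a rough illustration of that idea, here is a small Python sketch of a handful of FACS-style action units and some simplified, textbook-style combinations. The numbering follows the standard FACS codes, but the combinations shown are illustrative examples, not the speaker's actual rules.

    # Illustrative sketch only: a few FACS-style action units and some
    # simplified combinations. The full system defines roughly 45 action
    # units; these mappings are textbook examples, not the product's rules.
    ACTION_UNITS = {
        1: "inner brow raiser",
        2: "outer brow raiser",
        4: "brow furrow (brow lowerer)",
        6: "cheek raiser",
        9: "nose wrinkler",
        12: "lip corner pull",
    }

    EXAMPLE_EXPRESSIONS = {
        "smile (enjoyment)": {6, 12},        # AU 12 is the main component of a smile
        "surprise": {1, 2},                  # raised brows
        "confusion / negative affect": {4},  # the brow furrow
    }

    for name, aus in EXAMPLE_EXPRESSIONS.items():
        print(name, "->", ", ".join(ACTION_UNITS[a] for a in sorted(aus)))
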
  • 3:06 - 3:10
    Teaching a computer to read
    these facial emotions is hard,
  • 3:10 - 3:13
    because these action units,
    they can be fast, they're subtle,
  • 3:13 - 3:16
    and they combine in many different ways.
  • 3:16 - 3:20
    So take, for example,
    the smile and the smirk.
  • 3:20 - 3:23
    They look somewhat similar,
    but they mean very different things.
  • 3:23 - 3:25
    (Laughter)
  • 3:25 - 3:28
    So the smile is positive,
  • 3:28 - 3:29
    a smirk is often negative.
  • 3:29 - 3:33
    Sometimes a smirk
    can make you become famous.
  • 3:33 - 3:36
    But seriously, it's important
    for a computer to be able
  • 3:36 - 3:39
    to tell the difference
    between the two expressions.
  • 3:39 - 3:41
    So how do we do that?
  • 3:41 - 3:42
    We give our algorithms
  • 3:42 - 3:47
    tens of thousands of examples
    of people we know to be smiling,
  • 3:47 - 3:50
    from different ethnicities, ages, genders,
  • 3:50 - 3:52
    and we do the same for smirks.
  • 3:52 - 3:54
    And then, using deep learning,
  • 3:54 - 3:57
    the algorithm looks for all these
    textures and wrinkles
  • 3:57 - 3:59
    and shape changes on our face,
  • 3:59 - 4:03
    and basically learns that all smiles
    have common characteristics,
  • 4:03 - 4:06
    all smirks have subtly
    different characteristics.
  • 4:06 - 4:08
    And the next time it sees a new face,
  • 4:08 - 4:10
    it essentially learns that
  • 4:10 - 4:13
    this face has the same
    characteristics as a smile,
  • 4:13 - 4:18
    and it says, "Aha, I recognize this.
    This is a smile expression."
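
To make the training idea concrete, here is a minimal, hypothetical sketch of a smile-versus-smirk classifier learned from labeled example images. The feature extractor and function names are placeholders, and a simple logistic regression stands in for the deep-learning models the talk describes.

    # Hypothetical sketch of the training idea described above: learn to
    # separate smiles from smirks from many labeled example faces.
    # extract_face_features is a placeholder; the real system learns
    # texture, wrinkle and shape features with deep learning.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def extract_face_features(face_image):
        # Placeholder: flatten pixels (assumes all images are the same size).
        return np.asarray(face_image, dtype=float).ravel()

    def train_smile_vs_smirk(face_images, labels):
        # labels: 1 = smile, 0 = smirk, gathered from people of many
        # ethnicities, ages and genders.
        X = np.stack([extract_face_features(img) for img in face_images])
        y = np.asarray(labels)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))
        return clf
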
  • 4:18 - 4:21
    So the best way to demonstrate
    how this technology works
  • 4:21 - 4:23
    is to try a live demo,
  • 4:23 - 4:27
    so I need a volunteer,
    preferably somebody with a face.
  • 4:27 - 4:30
    (Laughter)
  • 4:30 - 4:32
    Chloe's going to be our volunteer today.
  • 4:33 - 4:38
    So over the past five years, we've moved
    from being a research project at MIT
  • 4:38 - 4:39
    to a company,
  • 4:39 - 4:42
    where my team has worked really hard
    to make this technology work,
  • 4:42 - 4:45
    as we like to say, in the wild.
  • 4:45 - 4:47
    And we've also shrunk it so that
    the core emotion engine
  • 4:47 - 4:51
    works on any mobile device
    with a camera, like this iPad.
  • 4:51 - 4:53
    So let's give this a try.
  • 4:55 - 4:59
    As you can see, the algorithm
    has essentially found Chloe's face,
  • 4:59 - 5:00
    so it's this white bounding box,
  • 5:00 - 5:03
    and it's tracking the main
    feature points on her face,
  • 5:03 - 5:06
    so her eyebrows, her eyes,
    her mouth and her nose.
  • 5:06 - 5:09
    The question is,
    can it recognize her expression?
  • 5:09 - 5:10
    So we're going to test the machine.
  • 5:10 - 5:15
    So first of all, give me your poker face.
    Yep, awesome. (Laughter)
  • 5:15 - 5:17
    And then as she smiles,
    this is a genuine smile, it's great.
  • 5:17 - 5:20
    So you can see the green bar
    go up as she smiles.
  • 5:20 - 5:21
    Now that was a big smile.
  • 5:21 - 5:24
    Can you try a subtle smile
    to see if the computer can recognize it?
  • 5:24 - 5:26
    It does recognize subtle smiles as well.
  • 5:26 - 5:28
    We've worked really hard
    to make that happen.
  • 5:28 - 5:31
    And then eyebrow raised,
    indicator of surprise.
  • 5:31 - 5:36
    Brow furrow, which is
    an indicator of confusion.
  • 5:36 - 5:40
    Frown. Yes, perfect.
  • 5:40 - 5:43
    So these are all the different
    action units. There's many more of them.
  • 5:43 - 5:45
    This is just a slimmed down demo.
  • 5:45 - 5:48
    But we call each reading
    an emotion data point,
  • 5:48 - 5:51
    and then they can fire together
    to portray different emotions.
  • 5:51 - 5:56
    So on the right side of the demo,
    look like you're happy.
  • 5:56 - 5:57
    So that's joy. Joy fires up.
  • 5:57 - 5:59
    And then give me a disgust face.
  • 5:59 - 6:04
    Try to remember what it was like
    when Zayn left One Direction.
  • 6:04 - 6:05
    (Laughter)
  • 6:05 - 6:09
    Yeah, wrinkle your nose. Awesome.
  • 6:09 - 6:13
    And the valence is actually quite
    negative, so you must have been a big fan.
  • 6:13 - 6:16
    So valence is how positive
    or negative an experience is,
  • 6:16 - 6:19
    and engagement is how
    expressive she is as well.
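
As a toy illustration only (these are assumed heuristics, not the product's actual formulas), per-frame expression readings could be combined into valence and engagement roughly like this:

    # Toy illustration with assumed heuristics; expression scores are 0-100.
    from dataclasses import dataclass

    @dataclass
    class EmotionDataPoint:
        smile: float
        brow_raise: float
        brow_furrow: float
        nose_wrinkle: float  # the "disgust" cue from the demo

    def valence(p: EmotionDataPoint) -> float:
        # How positive or negative the experience looks: positive cues
        # push valence up, negative cues pull it down.
        return p.smile - max(p.brow_furrow, p.nose_wrinkle)

    def engagement(p: EmotionDataPoint) -> float:
        # How expressive the face is overall, regardless of sign.
        return max(p.smile, p.brow_raise, p.brow_furrow, p.nose_wrinkle)

    frame = EmotionDataPoint(smile=3, brow_raise=5, brow_furrow=10, nose_wrinkle=85)
    print(valence(frame), engagement(frame))  # strongly negative valence, high engagement
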
  • 6:19 - 6:22
    So imagine if Chloe had access
    to this realtime emotion stream,
  • 6:22 - 6:25
    and she could share it
    with anybody she wanted to.
  • 6:25 - 6:28
    Thank you.
  • 6:28 - 6:32
    (Applause)
  • 6:34 - 6:39
    So far, we have amassed
    12 billion of these emotion data points.
  • 6:39 - 6:42
    It's the largest emotion
    database in the world.
  • 6:42 - 6:45
    We've collected it
    from 2.9 million face videos,
  • 6:45 - 6:47
    people who have agreed
    to share their emotions with us,
  • 6:47 - 6:50
    and from 75 countries around the world.
  • 6:50 - 6:52
    It's growing every day.
  • 6:53 - 6:55
    It blows my mind
  • 6:55 - 6:58
    that we can now quantify something
    as personal as our emotions,
  • 6:58 - 7:00
    and we can do it at this scale.
  • 7:00 - 7:02
    So what have we learned to date?
  • 7:03 - 7:05
    Gender.
  • 7:05 - 7:09
    Our data confirms something
    that you might suspect.
  • 7:09 - 7:11
    Women are more expressive than men.
  • 7:11 - 7:14
    Not only do they smile more,
    their smiles last longer,
  • 7:14 - 7:16
    and we can now really quantify
    what it is that men and women
  • 7:16 - 7:19
    respond to differently.
  • 7:19 - 7:21
    Let's do culture: so in the United States,
  • 7:21 - 7:24
    women are 40 percent
    more expressive than men,
  • 7:24 - 7:28
    but curiously, we don't see any difference
    in the U.K. between men and women.
  • 7:28 - 7:30
    (Laughter)
  • 7:31 - 7:33
    Age:
  • 7:33 - 7:35
    people who are 50 years and older
  • 7:35 - 7:39
    are 25 percent more emotive
    than younger people.
  • 7:40 - 7:44
    Women in their 20s smile a lot more
    than men of the same age,
  • 7:44 - 7:48
    perhaps a necessity for dating,
  • 7:48 - 7:50
    but perhaps what surprised us
    the most about this data
  • 7:50 - 7:53
    is that we happen
    to be expressive all the time,
  • 7:53 - 7:56
    even when we are sitting
    in front of our devices alone,
  • 7:56 - 8:00
    and it's not just when we're watching
    cat videos on Facebook.
  • 8:00 - 8:03
    We are expressive when we're emailing,
    texting, shopping online,
  • 8:03 - 8:06
    or even doing our taxes.
  • 8:06 - 8:08
    Where is this data used today?
  • 8:08 - 8:11
    In understanding how we engage with media,
  • 8:11 - 8:13
    so understanding virality
    and voting behavior;
  • 8:13 - 8:16
    and also empowering
    or emotion-enabling technology,
  • 8:16 - 8:21
    and I want to share some examples
    that are especially close to my heart.
  • 8:21 - 8:24
    Emotion-enabled wearable glasses
    can help individuals
  • 8:24 - 8:27
    who are visually impaired
    read the faces of others,
  • 8:27 - 8:32
    and it can help individuals
    on the autism spectrum interpret emotion,
  • 8:32 - 8:34
    something that they really struggle with.
  • 8:36 - 8:39
    In education, imagine if
    your learning apps
  • 8:39 - 8:42
    sense that you're confused and slow down,
  • 8:42 - 8:43
    or that you're bored, so they speed up,
  • 8:43 - 8:46
    just like a great teacher would
    in a classroom.
  • 8:47 - 8:50
    What if your wristwatch tracks your mood,
  • 8:50 - 8:52
    or your car senses that you're tired,
  • 8:52 - 8:55
    or perhaps your fridge
    knows that you're stressed,
  • 8:55 - 9:01
    so it auto-locks to prevent you
    from binge eating. (Laughter)
  • 9:01 - 9:04
    I would like that, yeah.
  • 9:04 - 9:06
    What if, when I was in Cambridge,
  • 9:06 - 9:08
    I had access to my realtime
    emotion stream,
  • 9:08 - 9:11
    and I could share that with my family
    back home in a very natural way,
  • 9:11 - 9:15
    just like I would if we were all
    in the same room together?
  • 9:15 - 9:19
    I think five years down the line,
  • 9:19 - 9:21
    all our devices are going
    to have an emotion chip,
  • 9:21 - 9:25
    and we won't remember what it was like
    when we couldn't just frown at our device
  • 9:25 - 9:29
    and our device would say, "Hmm,
    you didn't like that, did you?"
  • 9:29 - 9:33
    Our biggest challenge is that there are
    so many applications of this technology
  • 9:33 - 9:36
    that my team and I realize we can't
    build them all ourselves,
  • 9:36 - 9:39
    so we've made this technology available
    so that other developers
  • 9:39 - 9:41
    can get building and get creative.
  • 9:41 - 9:46
    We recognize that
    there are potential risks
  • 9:46 - 9:48
    and potential for abuse,
  • 9:48 - 9:51
    but personally, having spent
    many years doing this,
  • 9:51 - 9:54
    I believe that the benefits to humanity
  • 9:54 - 9:56
    from having emotionally
    intelligent technology
  • 9:56 - 9:59
    far outweigh the potential for misuse.
  • 9:59 - 10:02
    And I invite you all to be
    part of the conversation.
  • 10:02 - 10:04
    The more people who know
    about this technology,
  • 10:04 - 10:08
    the more we can all have a voice
    in how it's being used.
  • 10:09 - 10:14
    So as more and more
    of our lives become digital,
  • 10:14 - 10:17
    we are fighting a losing battle
    trying to curb our usage of devices
  • 10:17 - 10:19
    in order to reclaim our emotions.
  • 10:21 - 10:25
    So what I'm trying to do instead
    is to bring emotions into our technology
  • 10:25 - 10:27
    and make our technologies more responsive.
  • 10:27 - 10:29
    So I want those devices
    that have separated us
  • 10:29 - 10:32
    to bring us back together,
  • 10:32 - 10:36
    and by humanizing technology,
    we have this golden opportunity
  • 10:36 - 10:40
    to re-imagine how we
    connect with machines,
  • 10:40 - 10:44
    and therefore, how we, as human beings,
  • 10:44 - 10:46
    connect with one another.
  • 10:46 - 10:48
    Thank you.
  • 10:48 - 10:52
    (Applause)
Title:
This app knows how you feel — from the look on your face
Speaker:
Rana el Kaliouby
Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
11:04