
Giving a voice to the unheard | Rabea Ziuod | TEDxJerusalem

  • 0:08 - 0:09
    Hi.
  • 0:11 - 0:14
    We are living in an exciting era,
  • 0:14 - 0:19
    where innovation and technology
    have the potential to do the unimaginable,
  • 0:19 - 0:23
    and it becomes even more unimaginable
  • 0:23 - 0:27
    when it bridges the gap
    between disability and ability.
  • 0:28 - 0:31
    15% of the world's population
  • 0:33 - 0:35
    - 1 billion people around the world -
  • 0:35 - 0:37
    live with disabilities,
  • 0:37 - 0:42
    which makes people with disabilities
    the largest minority in the world.
  • 0:43 - 0:45
    And they are not living
    on a different planet.
  • 0:45 - 0:50
    They may be part of our families,
    friends, or colleagues.
  • 0:51 - 0:56
    Today, I'm going to tell you
    how people with speech disabilities
  • 0:56 - 0:59
    will have a way to better communicate.
  • 0:59 - 1:03
    I was 7 years old
    when my sister Amal was born.
  • 1:03 - 1:06
    I was too young to see the challenges
  • 1:06 - 1:09
    that my family was facing
    on a daily basis,
  • 1:09 - 1:14
    but I could see that Amal
    couldn't crawl, or eat, or talk
  • 1:14 - 1:17
    like any other baby her age.
  • 1:17 - 1:22
    But with time, we adjusted
    to raising a baby with cerebral palsy,
  • 1:22 - 1:26
    while understanding her special
    communication patterns and needs.
  • 1:28 - 1:30
    Nine years later,
  • 1:30 - 1:33
    my family was blessed
    to have another baby, Ahmad.
  • 1:34 - 1:38
    Ahmad decided to grow up
    exactly like his sister Amal,
  • 1:38 - 1:43
    being so smart, so sharp,
    curious about everything around him,
  • 1:43 - 1:47
    but he also decided to invent
    his special communication patterns
  • 1:47 - 1:49
    to communicate with us,
  • 1:50 - 1:53
    and for the other people
    that couldn't understand him,
  • 1:53 - 1:55
    we had to translate.
  • 1:55 - 2:00
    Amal and Ahmad say "num"
    when they are hungry,
  • 2:00 - 2:05
    and they say "ahh" to call
    the name of Nora, my sister.
  • 2:05 - 2:09
    And when they want to call
    my name, they say "abeya".
  • 2:09 - 2:13
    In case they want to go
    to the bathroom, they say "kkhh".
  • 2:13 - 2:17
    We understand most
    of their special communication patterns,
  • 2:17 - 2:21
    but it's only us, the close circle.
  • 2:21 - 2:25
    And this is the case for most
    of the people who have an unclear voice.
  • 2:26 - 2:29
    One of those people is Urit.
  • 2:29 - 2:34
    Urit is a 34-year-old woman
    with cerebral palsy.
  • 2:34 - 2:36
    She is living an independent life.
  • 2:36 - 2:41
    She can drive her car, go to the gym,
    and do a lot of other things.
  • 2:43 - 2:48
    However, when it comes
    to communicating using her voice,
  • 2:48 - 2:51
    sometimes, it can become
    harder than going to the gym,
  • 2:51 - 2:53
    and more frustrating
  • 2:53 - 2:59
    because she finds herself repeating
    the same words again and again
  • 2:59 - 3:01
    in order to be understood.
  • 3:01 - 3:05
    We asked Urit to say
    a few words in English.
  • 3:06 - 3:08
    Let's listen to her together
  • 3:08 - 3:11
    and see if you can understand
    what she's trying to say.
  • 3:12 - 3:14
    (unclear speech)
  • 3:17 - 3:22
    I don't know how many of you
    could understand her the first time,
  • 3:22 - 3:23
    but let's listen to her again,
  • 3:23 - 3:28
    and really focus and try to understand
    what she's trying to say.
  • 3:28 - 3:29
    (unclear speech)
  • 3:33 - 3:37
    Try to memorize what she has just said;
    we'll get to that later.
  • 3:39 - 3:42
    With my siblings, Urit,
    and the people I got to know,
  • 3:42 - 3:46
    I had the chance to see
    a world full of challenges,
  • 3:46 - 3:49
    - a world of people with special needs.
  • 3:50 - 3:54
    And this allowed me
    to examine the existing technology
  • 3:54 - 3:58
    in search of an answer
    for what my siblings were seeking.
  • 3:59 - 4:02
    Unfortunately, the current
    state-of-the-art assistive technology,
  • 4:02 - 4:07
    including speech recognition applications,
    could not provide an answer.
  • 4:08 - 4:14
    Until now, all the assistive technology
    has completely bypassed the voice,
  • 4:14 - 4:17
    opting to use other modes of communication
  • 4:18 - 4:22
    by replacing the voice
    with symbols and images,
  • 4:22 - 4:26
    or movements of the body,
    the head, or the eyes.
  • 4:27 - 4:32
    This brings me to the other lightweight
    alternative that does use the voice
  • 4:33 - 4:36
    which is speech recognition applications.
  • 4:36 - 4:39
    This technology follows one of two approaches.
  • 4:40 - 4:44
    The first approach attempts
    to discover which word has been said.
  • 4:46 - 4:49
    The second approach relies on phonemes.
  • 4:49 - 4:54
    Phonemes are all the sounds
    we produce using our mouth and nose.
  • 4:56 - 5:00
    Both approaches rely on statistical models built
  • 5:00 - 5:03
    from a large database of standard speech.
  • 5:03 - 5:06
    But once the speech is not standard,
  • 5:06 - 5:10
    - when I say not standard,
    I mean it's enough to have an accent,
  • 5:10 - 5:12
    like most of us here -
  • 5:12 - 5:14
    this will not work.
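
To make the "standard speech" assumption concrete, here is a minimal Python sketch of a toy recognizer that scores an utterance against templates averaged from typical speakers. The templates, feature values, and threshold are invented for illustration; this is not any real recognizer's code.

```python
# A toy "standard speech" recognizer: it compares an utterance's features
# against word templates learned from many typical speakers. All numbers
# here are invented for illustration.
import math

# Hypothetical per-word templates averaged over a large standard-speech corpus.
STANDARD_TEMPLATES = {
    "water":  [0.9, 0.4, 0.7],
    "hungry": [0.2, 0.8, 0.5],
}

def score(features, template):
    """Negative Euclidean distance: higher means a closer match."""
    return -math.sqrt(sum((f - t) ** 2 for f, t in zip(features, template)))

def recognize(features, threshold=-0.5):
    best_word, best_score = max(
        ((word, score(features, template))
         for word, template in STANDARD_TEMPLATES.items()),
        key=lambda pair: pair[1],
    )
    # Non-standard speech lands far from every template, so no word clears
    # the confidence threshold and the recognizer simply gives up.
    return best_word if best_score >= threshold else None

print(recognize([0.88, 0.42, 0.69]))  # close to a template -> "water"
print(recognize([0.10, 0.10, 0.95]))  # atypical pattern -> None (rejected)
```

An atypical pronunciation falls outside every template the model has seen, which is the failure the talk describes for accented and disordered speech.
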
  • 5:14 - 5:20
    My colleagues and I developed
    a new approach to assistive technology
  • 5:20 - 5:22
    that does use the person's own voice
  • 5:22 - 5:26
    and can understand
    non-standard speech patterns,
  • 5:26 - 5:32
    with the mission to give people
    with a speech disability their voice back.
  • 5:33 - 5:36
    So, whose life is this going to change?
  • 5:36 - 5:39
    People with cerebral palsy,
  • 5:39 - 5:42
    people with Parkinson's
    or myasthenia gravis,
  • 5:42 - 5:44
    and so many other neurological disorders,
  • 5:44 - 5:47
    people who are born
    with hearing disabilities,
  • 5:47 - 5:52
    or people who suddenly have a stroke
    and their whole life is changed,
  • 5:52 - 5:55
    but not only theirs.
  • 5:55 - 5:59
    Not only the people who have
    difficulty expressing themselves,
  • 5:59 - 6:03
    but everyone who interacts
    with them on a daily basis.
  • 6:03 - 6:09
    This will make it easier
    for them to be socially included
  • 6:09 - 6:13
    - because every one of us
    wants to be socially included.
  • 6:13 - 6:18
    And now, you may be asking yourself,
    "How does it work?"
  • 6:18 - 6:22
    "How come the current speech recognition
    technology couldn't do the same?"
  • 6:25 - 6:28
    Because our technology works
    in a different way.
  • 6:29 - 6:32
    So, each person has to go
    through two phases.
  • 6:32 - 6:35
    The first phase is called
    the calibration phase,
  • 6:35 - 6:41
    where the person has to teach the device
    and the application his own patterns
  • 6:41 - 6:44
    by entering the patterns
    and building his own dictionary.
  • 6:44 - 6:46
    This phase usually happens
  • 6:46 - 6:49
    with the person
    who understands him the most.
  • 6:49 - 6:51
    Together they will build the dictionary.
  • 6:51 - 6:55
    This generally takes
    only one to three hours,
  • 6:55 - 6:58
    depending on the speaking
    ability of the speaker.
  • 6:58 - 7:00
    After building the dictionary,
  • 7:00 - 7:04
    we move to the second phase
    which is the recognition phase.
  • 7:04 - 7:08
    The application will be able to recognize
    unintelligible speech patterns
  • 7:08 - 7:11
    from the dictionary that is already built
  • 7:11 - 7:14
    and translate them
    into a clear voice in real time.
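
As a rough sketch of the two phases just described, the Python below assumes a hypothetical PersonalDictionary class: calibration stores the user's own recorded patterns alongside the clear words they stand for, and recognition returns the clear word of the closest stored pattern. This is an illustration of the idea, not Talkitt's actual code.

```python
# A sketch of the calibration and recognition phases described in the talk.
# PersonalDictionary and the toy feature sequences are invented for
# illustration only.

class PersonalDictionary:
    def __init__(self):
        # Each entry pairs a recorded pattern with the clear word it stands for.
        self.entries = []

    # Phase 1: calibration - the user, helped by the person who understands
    # them best, records each personal pattern and says what it means.
    def calibrate(self, recorded_pattern, clear_word):
        self.entries.append((recorded_pattern, clear_word))

    # Phase 2: recognition - a new utterance is matched against the stored
    # patterns and the closest entry's clear word is returned.
    def recognize(self, utterance, distance_fn):
        best_pattern, best_word = min(
            self.entries, key=lambda entry: distance_fn(utterance, entry[0])
        )
        return best_word


def simple_distance(a, b):
    """A toy distance between two 1-D feature sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) + abs(len(a) - len(b))


dictionary = PersonalDictionary()
dictionary.calibrate([0.2, 0.9, 0.1], "I am hungry")  # e.g. the user's "num"
dictionary.calibrate([0.8, 0.8, 0.7], "Nora")         # e.g. the user's "ahh"

print(dictionary.recognize([0.25, 0.85, 0.10], simple_distance))  # "I am hungry"
```
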
  • 7:16 - 7:20
    Our approach is user-dependent
    and language-independent
  • 7:20 - 7:23
    which means it can work
    in any language in the world,
  • 7:24 - 7:26
    even the invented ones.
  • 7:26 - 7:30
    And the key word here
    is 'pattern-matching'.
  • 7:30 - 7:35
    Once the person builds his own dictionary,
    and says a word that already exists there,
  • 7:35 - 7:37
    there will be a pattern-matching
  • 7:37 - 7:40
    between what he says
    and what already exists there.
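
One plausible way to do this pattern-matching, offered here only as an assumption and not as the speaker's actual method, is dynamic time warping (DTW), which lets the same word match its dictionary entry even when it is spoken faster or slower than the recorded pattern.

```python
# Pattern-matching sketch using dynamic time warping (DTW) over 1-D feature
# sequences. The dictionary patterns and the test utterance are invented.

def dtw_distance(seq_a, seq_b):
    """Classic DTW cost between two 1-D sequences."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch seq_a
                                 cost[i][j - 1],      # stretch seq_b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# A personal dictionary: clear word -> recorded pattern.
dictionary = {
    "bathroom": [0.1, 0.1, 0.9, 0.8],   # e.g. the user's "kkhh"
    "Rabea":    [0.6, 0.2, 0.2, 0.7],   # e.g. the user's "abeya"
}

def match(utterance):
    return min(dictionary, key=lambda word: dtw_distance(utterance, dictionary[word]))

# The same pattern spoken more slowly still maps to the right word.
print(match([0.1, 0.1, 0.1, 0.9, 0.9, 0.8]))  # -> "bathroom"
```
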
  • 7:40 - 7:42
    But here we found a problem.
  • 7:42 - 7:45
    We found that people
    with a speech disability
  • 7:45 - 7:48
    pronounce different words in similar ways.
  • 7:50 - 7:54
    And the challenge was
    to differentiate between them.
  • 7:54 - 7:57
    So we created a technology
    called Adaptive Framing.
  • 7:58 - 8:04
    Adaptive Framing adapts each frame
    to the width of the event in the pattern.
  • 8:04 - 8:10
    In the existing technology, you can see
    the L and the A in the same frame.
  • 8:10 - 8:15
    But in our new technology, you can see
    that the L and A are in different frames
  • 8:15 - 8:18
    which increases the accuracy
    of the pattern-matching.
  • 8:19 - 8:22
    And this makes
    our pattern-matching so much better.
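
The sketch below illustrates the framing idea as it is described on stage, under the assumption that a sharp change in the signal marks a new acoustic event: fixed-width frames can merge two adjacent sounds such as the L and the A, while adaptive frames keep them apart. The signal values and the jump threshold are invented.

```python
# Fixed-width framing versus a toy "adaptive framing" that starts a new frame
# whenever the signal changes sharply (a new acoustic event). The numbers are
# invented for illustration.

def fixed_frames(signal, width=4):
    """Cut the signal into frames of a fixed width."""
    return [signal[i:i + width] for i in range(0, len(signal), width)]

def adaptive_frames(signal, jump=0.3):
    """Start a new frame at every sharp change in the signal."""
    frames, current = [], [signal[0]]
    for prev, value in zip(signal, signal[1:]):
        if abs(value - prev) > jump:   # event boundary detected
            frames.append(current)
            current = []
        current.append(value)
    frames.append(current)
    return frames

# Toy signal: a sustained "L"-like sound followed by an "A"-like sound.
signal = [0.2, 0.2, 0.2, 0.9, 0.9, 0.9, 0.9, 0.9]

print(fixed_frames(signal))     # the L and the A end up in the same frame
print(adaptive_frames(signal))  # the L and the A land in separate frames
```

Because each frame now covers a single event, two words that differ only in one short sound are less likely to be confused, which is the accuracy gain the talk points to.
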
  • 8:23 - 8:26
    I suppose you still remember Urit, right?
  • 8:26 - 8:31
    Let's listen to her again now,
    but this time using Talkitt:
  • 8:34 - 8:35
    (unclear speech)
  • 8:35 - 8:36
    Now I can ...
  • 8:36 - 8:37
    (unclear speech)
  • 8:37 - 8:38
    ... start
  • 8:38 - 8:40
    (unclear speech)
  • 8:40 - 8:41
    ... speaking freely.
  • 8:43 - 8:45
    (Applause)
  • 8:56 - 8:58
    Talkitt is only one step
  • 8:58 - 9:02
    towards bridging the gap
    between disability and ability
  • 9:02 - 9:05
    by letting people express their potential.
  • 9:05 - 9:07
    The more we challenge our minds,
  • 9:07 - 9:12
    the more gaps will collapse
    to let us all have a normal life.
  • 9:12 - 9:13
    Thank you.
  • 9:13 - 9:14
    (Applause)
Title:
Giving a voice to the unheard | Rabea Ziuod | TEDxJerusalem
Description:

This talk was given at a local TEDx event, produced independently of the TED Conferences.

Talkitt is developing an innovative solution that will enable people who suffer from motor, speech, and language disorders to easily communicate using their own voice, by translating their unintelligible pronunciation into understandable speech.

Video Language:
English
Team:
closed TED
Project:
TEDxTalks
Duration:
09:21
