Hi.
We are living in an exciting era,
where innovation and technology
have the potential to do the unimaginable,
and it becomes even more unimaginable
when it breaks down the gaps
between disability and ability.
15% of the world population
- 1 billion people around the world -
live with disabilities,
which makes people with disabilities
the largest minority in the world.
And they are not living
on a different planet.
They may be part of our families,
friends, or colleagues.
Today, I'm going to tell you
how people with speech disabilities
will have a way to better communicate.
I was 7 years old
when my sister Amal was born.
I was too young to see the challenges
that my family was facing
on a daily basis,
but I could see that Amal
couldn't crawl, or eat, or talk
like any other baby her age.
But with time, we adjusted
to raise a baby with cerebral palsy,
while understanding her special
communication patterns and needs.
Nine years later,
my family was blessed
to have another baby, Ahmad.
Ahmad decided to grow up
exactly like his sister Amal,
being so smart, so sharp,
curious about everything around him,
but he also decided to invent
his special communication patterns
to communicate with us,
and for other people
who couldn't understand him,
we had to translate.
Amal and Ahmad say "num"
when they are hungry,
and they say "ahh" to call
the name of Nora, my sister.
And when they want to call
my name, they say "abeya".
In case they want to go
to the bathroom, they say "kkhh".
We understand most
of their special communication patterns,
but it's only us, the close circle.
And this is the case for most
of the people who have an unclear voice.
One of those people is Urit.
Urit is a 34-year-old woman
with cerebral palsy.
She is living an independent life.
She can drive her car, go to the gym,
and do a lot of other things.
However, when it comes
to communicating using her voice,
sometimes, it can become
harder than going to the gym,
and more frustrating
because she finds herself repeating
the same words again and again
in order to be understood.
We asked Urit to say
a few words in English.
Let's listen to her together
and see if you can understand
what she's trying to say.
(unclear speech)
I don't know how many of you
could understand her the first time,
but let's listen to her again,
and really focus and try to understand
what she's trying to say.
(unclear speech)
Try to memorize what she has just said;
we'll get to that later.
With my siblings, Urit,
and other people I got to know,
I had the chance to see
a world full of challenges,
- a world of people with special needs.
And this allowed me
to examine the existing technology
in search of an answer
for what my siblings were seeking.
Unfortunately, the current
state-of-the-art assistive technology,
including speech recognition applications,
could not provide an answer.
So far, all the assistive technology
has completely bypassed the voice,
opting to use other modes of communication
[by] replacing the voice
with symbols and images,
or movements of the head or the eyes.
This brings me to the one lightweight
alternative that does use the voice:
speech recognition applications.
This technology takes one of two approaches.
The first approach attempts
to discover which word has been said.
The second approach relies on phonemes.
Phonemes are all the sounds
we produce using our mouth and nose.
Both approaches rely on statistical models
trained on a large database of standard speech.
But once the speech is not standard,
- when I say not standard,
I mean it's enough to have an accent,
like most of us here -
this will not work.
My colleagues and I developed
a new approach to assistive technology
that does use the person's own voice
and can understand
non-standard speech patterns,
with the mission to give people
with a speech disability their voice back.
So, whose life is this going to change?
People with cerebral palsy,
people with Parkinson's,
and Myasthenia Gravis,
so many [other] neurological disorders,
people who are born
with hearing disabilities,
or people who suddenly have a stroke
and their whole life is changed,
but not only theirs.
Not only the people who have
difficulty expressing themselves,
but everyone who interacts
with them on a daily basis.
This will make it easier
for them to be socially included
- because every one of us
wants to be socially included.
And now, you may be asking yourself,
"How does it work?"
"How come the current speech recognition
technology couldn't do the same?"
Because our technology works
in a different way.
So, each person has to go
through two phases.
The first phase is called
the calibration phase,
where the person has to teach the device
and the application his own patterns
by entering the patterns
and building his own dictionary.
This phase usually happens
with the person
who understands him the most.
Together they will build the dictionary.
This generally takes
only one to three hours,
and it depends on the speaking
capability of the speaker.
After building the dictionary,
we move to the second phase
which is the recognition phase.
The application will be able to recognize
unintelligible speech patterns
from the dictionary that is already built
and translate them
into a clear voice in real time.
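The two phases described above can be sketched in code. This is a minimal illustration under stated assumptions, not Talkitt's actual implementation: the class, method names, and distance measure are all hypothetical, and a real system would compare acoustic features rather than raw numbers.

```python
class PersonalDictionary:
    """Hypothetical sketch of the two phases: calibration builds the
    user's own dictionary; recognition matches new utterances against it."""

    def __init__(self):
        # Maps a clear word (e.g. "hungry") to the user's recorded
        # example patterns for that word.
        self.entries = {}

    def calibrate(self, word, pattern):
        """Calibration phase: store the user's own pattern for a word."""
        self.entries.setdefault(word, []).append(pattern)

    def recognize(self, pattern):
        """Recognition phase: return the dictionary word whose stored
        patterns are closest to the new utterance."""
        best_word, best_dist = None, float("inf")
        for word, examples in self.entries.items():
            for example in examples:
                d = distance(pattern, example)
                if d < best_dist:
                    best_word, best_dist = word, d
        return best_word


def distance(a, b):
    # Placeholder similarity measure: mean absolute difference between
    # two equal-length feature sequences.
    return sum(abs(x - y) for x, y in zip(a, b)) / max(len(a), len(b))
```

In use, the close circle would record each pattern once during calibration; afterwards, recognition looks up the nearest stored pattern and the matched word can be spoken aloud in a clear voice.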
Our approach is user-dependent
and language-independent
which means it can work
in any language in the world,
even the invented ones.
And the key word here
is 'pattern-matching'.
Once the person builds his own dictionary,
and says a word that already exists there,
there will be a pattern-matching
between what he says
and what already exists there.
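One standard way to implement this kind of pattern-matching between utterances that may be spoken faster or slower is dynamic time warping (DTW). Whether Talkitt uses DTW is an assumption on my part; the sketch below only illustrates the idea of aligning two patterns before comparing them.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.
    Tolerates differences in speaking rate by letting frames stretch
    or compress while aligning one pattern against the other."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]
```

A word spoken twice as slowly still matches its dictionary entry with zero cost, because each frame is simply aligned twice.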
But here we found a problem.
We found that people
with a speech disability
pronounce different words in similar ways.
And the challenge was
to differentiate between them.
So we created a technology
called Adaptive Framing.
Adaptive Framing adapts the frame width
to the width of each event in the pattern.
In the existing technology, you can see
the L and the A in the same frame.
But in our new technology, you can see
that the L and A are in different frames
which makes the pattern-matching
far more accurate.
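The contrast between the two framings can be shown with a toy signal. The event-detection rule below (start a new frame whenever the signal level jumps) is only a guess at the general idea; the talk does not describe the actual Adaptive Framing algorithm.

```python
def fixed_frames(signal, width):
    """Existing approach: cut the signal into equal-width frames,
    regardless of where one sound ends and the next begins."""
    return [signal[i:i + width] for i in range(0, len(signal), width)]


def adaptive_frames(signal, threshold=0.5):
    """Sketch of adaptive framing: close the current frame whenever the
    signal level jumps, so each frame covers one 'event' (one sound),
    however wide or narrow that event happens to be."""
    frames, current = [], [signal[0]]
    for sample in signal[1:]:
        if abs(sample - current[-1]) > threshold:
            frames.append(current)  # the event ended; close its frame
            current = []
        current.append(sample)
    frames.append(current)
    return frames
```

With a signal like `[0.1, 0.1, 0.1, 0.9, 0.9]` standing in for an "L" followed by an "A", fixed framing of width 2 puts the boundary inside a frame, mixing the two sounds, while the adaptive version keeps each sound in its own frame.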
I suppose you still remember Urit, right?
Let's listen to her again now,
but this time using Talkitt:
(unclear speech)
Now I can ...
(unclear speech)
... start
(unclear speech)
... speaking freely.
(Applause)
Talkitt is only one step
towards bridging the gap
between disability and ability
by letting people express their potential.
The more we challenge our minds,
the more gaps will collapse
to let us all have a normal life.
Thank you.
(Applause)