It used to be that if you wanted
to get a computer to do something new,
you would have to program it.
Now, programming, for those of you here
that haven't done it yourself,
requires laying out in excruciating detail
every single step that you want
the computer to do
in order to achieve your goal.
Now, if you want to do something
that you don't know how to do yourself,
then this is going
to be a great challenge.
So this was the challenge faced
by this man, Arthur Samuel.
In 1956, he wanted to get this computer
to be able to beat him at checkers.
How can you write a program,
lay out in excruciating detail,
how to be better than you at checkers?
So he came up with an idea:
he had the computer play
against itself thousands of times
and learn how to play checkers.
And indeed it worked,
and in fact, by 1962,
this computer had beaten
the Connecticut state champion.
So Arthur Samuel was
the father of machine learning,
and I have a great debt to him,
because I am a machine
learning practitioner.
I was the president of Kaggle,
a community of over 200,000
machine learning practitioners.
Kaggle puts up competitions
to try and get them to solve
previously unsolved problems,
and it's been successful
hundreds of times.
So from this vantage point,
I was able to find out a lot
about what machine learning could do in the past,
what it can do today,
and what it could do in the future.
Perhaps the first big success of
machine learning commercially was Google.
Google showed that it is
possible to find information
by using a computer algorithm,
and this algorithm is based
on machine learning.
Since that time, there have been many
commercial successes of machine learning.
Companies like Amazon and Netflix
use machine learning to suggest
products that you might like to buy,
movies that you might like to watch.
Sometimes, it's almost creepy.
Companies like LinkedIn and Facebook
sometimes will tell you about
who your friends might be
and you have no idea how it did it,
and this is because it's using
the power of machine learning.
These are algorithms that have
learned how to do this from data
rather than being programmed by hand.
This is also how IBM was successful
in getting Watson to beat
the two world champions at "Jeopardy,"
answering incredibly subtle
and complex questions like this one.
["The ancient 'Lion of Nimrud' went missing
from this city's national museum in 2003
(along with a lot of other stuff)"]
This is also why we are now able
to see the first self-driving cars.
If you want to be able to tell
the difference between, say,
a tree and a pedestrian,
well, that's pretty important.
We don't know how to write
those programs by hand,
but with machine learning,
this is now possible.
And in fact, this car has driven
over a million miles
without any accidents on regular roads.
So we now know that computers can learn,
and computers can learn to do things
that we actually sometimes
don't know how to do ourselves,
or maybe can do them better than us.
One of the most amazing examples
I've seen of machine learning
happened on a project that I ran at Kaggle
where a team run by a guy
called Geoffrey Hinton
from the University of Toronto
won a competition for
automatic drug discovery.
Now, what was extraordinary here
is not just that they beat
all of the algorithms developed by Merck
or the international academic community,
but nobody on the team had any background
in chemistry or biology or life sciences,
and they did it in two weeks.
How did they do this?
They used an extraordinary algorithm
called deep learning.
So important was this that in fact
the success was covered
in The New York Times in a front page
article a few weeks later.
This is Geoffrey Hinton
here on the left-hand side.
Deep learning is an algorithm
inspired by how the human brain works,
and as a result it's an algorithm
which has no theoretical limitations
on what it can do.
The more data you give it and the more
computation time you give it,
the better it gets.
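To make that concrete, here is a minimal sketch of the idea: a tiny two-layer neural network trained by gradient descent, where every detail (the numpy-only setup, the synthetic ring-shaped task, the layer sizes) is an illustrative assumption rather than any system from this talk.

```python
# A toy two-layer neural network trained by gradient descent (numpy only).
# The point is the scaling behavior described above: more data and more
# training steps make it better. All details here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: label is 1 when the point lies outside the unit circle.
X = rng.normal(size=(1000, 2))
y = ((X ** 2).sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# One hidden layer of 16 tanh units, then a sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: gradients of the mean cross-entropy loss.
    dlogits = (p - y) / len(X)
    dW2 = h.T @ dlogits; db2 = dlogits.sum(axis=0)
    dh = (dlogits @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", ((p > 0.5) == y).mean())
```

Give this loop more data or more training steps and the accuracy climbs, which is the scaling behavior just described.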
The New York Times also
showed in this article
another extraordinary
result of deep learning
which I'm going to show you now.
It shows that computers
can listen and understand.
(Video) Richard Rashid: Now, the last step
that I want to be able
to take in this process
is to actually speak to you in Chinese.
Now the key thing there is,
we've been able to take a large amount
of information from many Chinese speakers
and produce a text-to-speech system
that takes Chinese text
and converts it into Chinese language,
and then we've taken
an hour or so of my own voice
and we've used that to modulate
the standard text-to-speech system
so that it would sound like me.
Again, the result's not perfect.
There are in fact quite a few errors.
(In Chinese)
(Applause)
There's much work to be done in this area.
(In Chinese)
(Applause)
Jeremy Howard: Well, that was at
a machine learning conference in China.
It's not often, actually,
at academic conferences
that you do hear spontaneous applause,
although of course sometimes
at TEDx conferences, feel free.
Everything you saw there
was happening with deep learning.
(Applause) Thank you.
The transcription in English
was deep learning.
The translation to Chinese and the text
in the top right, deep learning,
and the construction of the voice
was deep learning as well.
So deep learning is
this extraordinary thing.
It's a single algorithm that
can seem to do almost anything,
and I discovered that a year earlier,
it had also learned to see.
In this obscure competition from Germany
called the German Traffic Sign
Recognition Benchmark,
deep learning had learned
to recognize traffic signs like this one.
Not only could it
recognize the traffic signs
better than any other algorithm,
the leaderboard actually showed
it was better than people,
about twice as good as people.
So by 2011, we had the first example
of computers that can see
better than people.
Since that time, a lot has happened.
In 2012, Google announced that
they had a deep learning algorithm
watch YouTube videos,
crunching the data
on 16,000 computers for a month,
and the computer independently learned
about concepts such as people and cats
just by watching the videos.
This is much like the way
that humans learn.
Humans don't learn
by being told what they see,
but by learning for themselves
what these things are.
Also in 2012, Geoffrey Hinton,
who we saw earlier,
won the very popular ImageNet competition,
which involves figuring out,
from one and a half million images,
what each one is a picture of.
As of 2014, we're now down
to a six percent error rate
in image recognition.
This is better than people, again.
So machines really are doing
an extraordinarily good job of this,
and it is now being used in industry.
For example, Google announced last year
that they had mapped every single
location in France in two hours,
and the way they did it was
that they fed street view images
into a deep learning algorithm
to recognize and read street numbers.
Imagine how long
it would have taken before:
dozens of people, many years.
This is also happening in China.
Baidu is kind of
the Chinese Google, I guess,
and what you see here in the top left
is an example of a picture that I uploaded
to Baidu's deep learning system,
and underneath you can see that the system
has understood what that picture is
and found similar images.
The similar images actually
have similar backgrounds,
similar directions of the faces,
even some with their tongue out.
This is clearly not looking
at the text of a web page.
All I uploaded was an image.
So we now have computers which
really understand what they see
and can therefore search databases
of hundreds of millions
of images in real time.
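The mechanics behind a search like that can be sketched: represent every image as a feature vector from a deep network, then rank the database by similarity to the query vector. Below, random unit vectors stand in for real network features, and the database size is an arbitrary assumption.

```python
# Sketch of similarity search over image embeddings (numpy only).
# In a real system each vector would come from a deep network's last
# hidden layer; random unit vectors stand in for those features here.
import numpy as np

rng = np.random.default_rng(1)
n_images, dim = 10_000, 512           # assumed database size and width

database = rng.normal(size=(n_images, dim))
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = rng.normal(size=dim)
query /= np.linalg.norm(query)

# On unit vectors, cosine similarity is just a dot product, so one
# matrix-vector multiply scores the entire database at once.
scores = database @ query
top10 = np.argsort(scores)[::-1][:10]
print("most similar image ids:", top10)
```

At hundreds of millions of images, real systems swap the brute-force multiply for approximate nearest-neighbor indexes, but the principle is the same.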
So what does it mean
now that computers can see?
Well, it's not just
that computers can see.
In fact, deep learning
has done more than that.
Complex, nuanced sentences like this one
are now understandable
with deep learning algorithms.
As you can see here,
this Stanford-based system
showing the red dot at the top
has figured out that this sentence
is expressing negative sentiment.
Deep learning now in fact
is near human performance
at understanding what sentences are about
and what they are saying about those things.
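The Stanford system pictured is a recursive neural network trained on a sentiment treebank; as a much simpler stand-in for the same task, here is a bag-of-words logistic regression fit on a few invented sentences:

```python
# A deliberately simple sentiment classifier: bag-of-words features plus
# logistic regression. The Stanford system in the talk is a far more
# powerful recursive neural network; this only illustrates the task of
# mapping a sentence to a sentiment label. Training data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "a stirring, funny and finally transporting film",
    "one of the best movies of the year",
    "utterly incompetent and painfully dull",
    "a bleak, boring mess with no redeeming qualities",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_sentences, train_labels)

print(model.predict(["a funny and transporting film"]))  # -> ['positive']
```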
Also, deep learning has
been used to read Chinese,
again at about native
Chinese speaker level.
This algorithm was developed
in Switzerland
by people, none of whom speak
or understand any Chinese.
As I say, this deep learning system
is about the best in the world for this,
even compared to native
human understanding.
This is a system that we
built at my company
which puts all this stuff together.
These are pictures which
have no text attached,
and as I'm typing in here sentences,
in real time it's understanding
these pictures
and figuring out what they're about
and finding pictures that are similar
to the text that I'm writing.
So you can see, it's actually
understanding my sentences
and actually understanding these pictures.
I know that you've seen
something like this on Google,
where you can type in things
and it will show you pictures,
but actually what it's doing is
searching the web page for the text.
This is very different from actually
understanding the images.
This is something that computers
have only been able to do
for the first time in the last few months.
So we can see now that computers
can not only see but they can also read,
and, of course, we've shown that they
can understand what they hear.
So perhaps it won't surprise you now
when I tell you that they can also write.
Here is some text that I generated
using a deep learning algorithm yesterday.
And here is some text that an algorithm
out of Stanford generated.
Each of these sentences was generated
by a deep learning algorithm
to describe each of those pictures.
This algorithm has never before seen
a man in a black shirt playing a guitar.
It's seen a man before,
it's seen black before,
it's seen a guitar before,
but it has independently generated
this novel description of this picture.
We're still not quite at human
performance here, but we're close.
In tests, humans prefer
the computer-generated caption
one out of four times.
Now, this system is only two weeks old,
so probably within the next year,
the computer algorithm will be
well past human performance
at the rate things are going.
So computers can also write.
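The caption systems just described are deep networks too large to sketch here; as the simplest possible stand-in for the idea of recombining familiar words into a novel sentence, here is a toy bigram text generator on an invented two-sentence corpus:

```python
# Toy bigram text generator: learn which word follows which in a tiny
# corpus, then sample a chain. Real captioners are deep networks; this
# only illustrates recombining seen words into a novel sentence.
import random

corpus = ("a man in a black shirt is playing a guitar "
          "a man in a blue shirt is holding a ball").split()

follows = {}                      # word -> list of observed next words
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

random.seed(0)
word, out = "a", ["a"]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))
    out.append(word)
print(" ".join(out))              # a novel recombination of seen words
```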
So we put all this together and it leads
to very exciting opportunities.
For example, in medicine,
a team in Boston announced
that they had discovered
dozens of new clinically relevant features
of tumors which help doctors
make a cancer prognosis.
Very similarly, at Stanford,
a group there announced that,
looking at tissues under magnification,
they've developed
a machine learning-based system
which in fact is better
than human pathologists
at predicting survival rates
for cancer sufferers.
In both of these cases, not only
were the predictions more accurate,
but they generated new insightful science.
In the radiology case,
these were new clinical indicators
that humans can understand.
In this pathology case,
the computer system actually discovered
that the cells around the cancer
are as important as
the cancer cells themselves
in making a diagnosis.
This is the opposite of what pathologists
had been taught for decades.
In each of those two cases,
they were systems developed
by a combination of medical experts
and machine learning experts,
but as of last year,
we're now beyond that too.
This is an example of
identifying cancerous areas
of human tissue under a microscope.
The system being shown here
can identify those areas
as accurately as, or more accurately than,
human pathologists,
but was built entirely with deep learning
using no medical expertise
by people who have
no background in the field.
Similarly, here is neuron segmentation.
We can now segment neurons
about as accurately as humans can,
but this system was developed
with deep learning
by people with no previous
background in medicine.
So myself, as somebody with
no previous background in medicine,
I seem to be entirely well qualified
to start a new medical company,
which I did.
I was kind of terrified of doing it,
but the theory seemed to suggest
that it ought to be possible
to do very useful medicine
using just these data analytic techniques.
And thankfully, the feedback
has been fantastic,
not just from the media
but from the medical community,
who have been very supportive.
The theory is that we can take
the middle part of the medical process
and turn that into data analysis
as much as possible,
leaving doctors to do
what they're best at.
I want to give you an example.
It now takes us about 15 minutes
to generate a new medical diagnostic test
and I'll show you that in real time now,
but I've compressed it down to
three minutes by cutting some pieces out.
Rather than showing you
creating a medical diagnostic test,
I'm going to show you
a diagnostic test of car images,
because that's something
we can all understand.
So here we're starting with
about 1.5 million car images,
and I want to create something
that can split them up by the angle
from which each photo was taken.
These images are entirely unlabeled,
so I have to start from scratch.
Our deep learning algorithm
can automatically identify
areas of structure in these images.
So the nice thing is that the human
and the computer can now work together.
So the human, as you can see here,
is telling the computer
about areas of interest
which the human wants the computer
to use to improve its algorithm.
Now, these deep learning systems actually
work in a 16,000-dimensional space,
so you can see here the computer
rotating this through that space,
trying to find new areas of structure.
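That rotation through the space can be pictured as projecting the high-dimensional feature vectors down to two dimensions and looking for clusters. Here is a minimal sketch of one such projection, principal component analysis, on synthetic vectors standing in for real network activations:

```python
# Sketch: project high-dimensional "deep features" down to 2-D and look
# for structure. Synthetic clustered vectors stand in for real network
# activations, and 512 dimensions stand in for the 16,000 in the demo.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
dim = 512

# Three hidden groups (say, photo angles), each a cluster in feature space.
centers = rng.normal(scale=5.0, size=(3, dim))
features = np.vstack([c + rng.normal(size=(200, dim)) for c in centers])

# PCA picks the directions of greatest variance: one "rotation" of the
# space under which the hidden groups become visible to a human.
coords = PCA(n_components=2).fit_transform(features)
print(coords.shape)   # (600, 2) -- ready to plot and inspect for clusters
```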
And when it does so successfully,
the human who is driving it can then
point out the areas that are interesting.
So here, the computer has
successfully found areas of structure,
for example, angles.
So as we go through this process,
we're gradually telling
the computer more and more
about the kinds of structures
we're looking for.
You can imagine in a diagnostic test
this would be a pathologist identifying
areas of pathosis, for example,
or a radiologist indicating
potentially troublesome nodules.
And sometimes it can be
difficult for the algorithm.
In this case, it got kind of confused.
The fronts and the backs
of the cars are all mixed up.
So here we have to be a bit more careful,
manually selecting these fronts
as opposed to the backs,
then telling the computer
that this is a type of group
that we're interested in.
So we do that for a while,
we skip over a little bit,
and then we train the
machine learning algorithm
based on these couple of hundred things,
and we hope that it's gotten a lot better.
You can see, it's now started to fade
some of these pictures out,
showing us that it's already beginning
to recognize some of these itself.
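A minimal sketch of that step, with synthetic vectors standing in for the real deep features and an assumed confidence threshold: fit a classifier on the couple of hundred hand-labeled examples, then let its confident predictions fade images out of the human's view.

```python
# Sketch of the human-in-the-loop step: fit a classifier on a couple
# hundred hand-labeled feature vectors, then use its confidence on the
# unlabeled pool to decide which images can fade out of view.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
dim = 512

# Hand-labeled examples (0 = front of car, 1 = back); synthetic
# stand-ins for real deep-network features.
X_labeled = np.vstack([rng.normal(loc=-1.0, size=(100, dim)),
                       rng.normal(loc=+1.0, size=(100, dim))])
y_labeled = np.array([0] * 100 + [1] * 100)

# The still-unlabeled pool, drawn from the same two groups.
X_pool = np.vstack([rng.normal(loc=-1.0, size=(5000, dim)),
                    rng.normal(loc=+1.0, size=(5000, dim))])

clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
confidence = clf.predict_proba(X_pool).max(axis=1)

confident = confidence > 0.95        # assumed threshold for "fading out"
print(f"{confident.mean():.0%} of the pool needs no further review")
```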
We can then use this concept
of similar images,
and using similar images, you can now see
that the computer at this point is able
to find just the fronts of cars.
So at this point, the human
can tell the computer,
okay, yes, you've done
a good job of that.
Sometimes, of course, even at this point
it's still difficult
to separate out groups.
In this case, even after we let the
computer try to rotate this for a while,
we still find that the left-side
and right-side pictures
are all mixed up together.
So we can again give
the computer some hints,
and we say, okay, try and find
a projection that separates out
the left sides and the right sides
as much as possible
using this deep learning algorithm.
And giving it that hint --
ah, okay, it's been successful.
It's managed to find a way
of thinking about these objects
that separates them out.
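Asking for a projection that best separates two labeled groups is essentially what linear discriminant analysis does, so that hint can be sketched like this, again with synthetic stand-in features:

```python
# Sketch: given a handful of labeled examples, find the projection that
# best separates two groups, here via linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
dim = 50                               # stand-in feature width

lefts = rng.normal(loc=-0.2, size=(50, dim))    # hand-picked left sides
rights = rng.normal(loc=+0.2, size=(50, dim))   # hand-picked right sides
X = np.vstack([lefts, rights])
y = np.array([0] * 50 + [1] * 50)

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
projected = lda.transform(X)           # one coordinate per image

# Along this single direction the two groups pull apart, which is
# exactly the kind of "hint" the demo gives the computer.
print("group means:", projected[:50].mean(), projected[50:].mean())
```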
So you get the idea here.
This is a case not where the human
is being replaced by a computer,
but where they're working together.
What we're doing here is taking
something that used to take a team
of five or six people about seven years
and replacing it with something
that takes one person acting alone
about 15 minutes.
So this process takes about
four or five iterations.
You can see we now have 62 percent
of our 1.5 million images
classified correctly.
And at this point, we
can start to quite quickly
grab whole big sections,
check through them to make sure
that there are no mistakes.
Where there are mistakes, we can
let the computer know about them.
And using this kind of process
for each of the different groups,
we are now up to
an 80 percent success rate
in classifying the 1.5 million images.
And at this point, it's just a case
of finding the small number
that aren't classified correctly,
and trying to understand why.
And using that approach,
within 15 minutes we get
to a 97 percent classification rate.
So this kind of technique
could allow us to fix a major problem,
which is that there's a lack
of medical expertise in the world.
The World Economic Forum says
that there's between a 10x and a 20x
shortage of physicians
in the developing world,
and it would take about 300 years
to train enough people
to fix that problem.
So imagine if we could
enhance their efficiency
using these deep learning approaches.
So I'm very excited
about the opportunities.
I'm also concerned about the problems.
The problem here is that
every area in blue on this map
is somewhere where services
are over 80 percent of employment.
What are services?
These are services.
These are also the exact things that
computers have just learned how to do.
So in the developed world,
80 percent of employment
is stuff that computers
have just learned how to do.
What does that mean?
Well, it'll be fine.
They'll be replaced by other jobs.
For example, there will be
more jobs for data scientists.
Well, not really.
It doesn't take data scientists
very long to build these things.
For example, these four algorithms
were all built by the same guy.
So if you think, oh,
this has all happened before,
that in the past, when new things came along,
old jobs were replaced by new ones,
then ask: what are these
new jobs going to be?
It's very hard for us to estimate this,
because human performance
grows at this gradual rate,
but we now have a system, deep learning,
that we know actually grows
in capability exponentially.
And we're here.
So currently, we see the things around us
and we say, "Oh, computers
are still pretty dumb." Right?
But in five years' time,
computers will be off this chart.
So we need to be starting to think
about this capability right now.
We have seen this once before, of course.
In the Industrial Revolution,
we saw a step change
in capability thanks to engines.
The thing is, though,
that after a while, things flattened out.
There was social disruption,
but once engines were used
to generate power in all situations,
things really settled down.
The Machine Learning Revolution
is going to be very different
from the Industrial Revolution,
because the Machine Learning Revolution,
it never settles down.
The better computers get
at intellectual activities,
the more they can be used to build computers
that are even better at intellectual activities,
so this is going to be a kind of change
that the world has actually
never experienced before,
so your previous understanding
of what's possible no longer applies.
This is already impacting us.
In the last 25 years,
as capital productivity has increased,
labor productivity has been flat,
in fact even a little bit down.
So I want us to start
having this discussion now.
I know that when I tell people
about this situation,
they can be quite dismissive.
Well, computers can't really think,
they don't emote,
they don't understand poetry,
we don't really understand how they work.
So what?
Computers right now can do the things
that humans spend most
of their time being paid to do,
so now's the time to start thinking
about how we're going to adjust our
social structures and economic structures
to be aware of this new reality.
Thank you.
(Applause)