Beautiful day out there. Thank you for joining us here today. It gives me great pleasure to introduce you to two thought leaders who inform, inspire, and shape my own thinking about the relationship between technology and society on at least a weekly basis. And I'm not kidding. It's really fantastic to have them for an hour and a bit to talk about a big topic: AI and society.
Iyad is an associate professor at the MIT Media Lab, where he leads the Scalable Cooperation group, among other things. He has done really amazing work over the last couple of years looking at the interplay between autonomous systems and society, and how these systems should interact with each other. He recently published a study in Science that got a lot of press coverage, addressing the question of whether we can program moral principles into autonomous vehicles, and maybe he will talk a bit more about that.
And then, of course, Joi, Director of the MIT Media Lab and professor of the practice, a person who doesn't really need an intro, so I'll keep it extremely brief, just by highlighting two of the must-reads from recent months. One is an interview he did, a conversation actually, with President Obama in Wired magazine on the future of the world, addressing AI issues among other topics. And his book, Whiplash, which is something of a survival guide for the faster future that we're all struggling with. I highly recommend it as a read; I greatly benefited from it.
So, these are not only two amazing thought leaders; they're also wonderful collaborators and colleagues, and I have the great privilege, through the Berkman Klein Center, to work with both of them as part of our recently launched joint venture, the AI Ethics and Governance Initiative. It's just wonderful to have you here, spend some time with all of us, and share your thoughts. So thank you very much, and welcome.
(applause)
Thank you. First of all, some of you may be here thinking, "Wait, this isn't the talk that I signed up for." So just to give you some of the provenance of this: originally, I think, there was a book talk that I was going to do with Merckman, and then I said, "Oh, well, why don't we bring somebody else interesting in," and Josh joined. We were going to have a dialogue about his book and my book. And then he had a family emergency and couldn't make it. So I grabbed Iyad, and I also realized, just as Urs was saying, that we're doing a lot of work with the Berkman Center on AI and society, and I thought this would be a sufficiently relevant topic to what we were going to talk about anyway, so it wouldn't be that much false advertising. And it was sort of an idea that I think relates to my book as well.
I can't remember who it was, but a well-known author once told me: when you give book talks, don't explain your whole book, because then no one will have to buy it. So, this book actually started about four years ago, and we were just wrapping it up as we saw a lot of this AI-and-society controversy and interest start. So the book actually ends where our exploration of AI and society begins. In a way, it overlaps with what the book is about, but it is sufficiently different that you have to read the book in order to understand the whole story.
But let me just start with a few remarks. We'll have Iyad present some of his work, and then we'll have a conversation with all of you. Feel free to interrupt, ask questions, or disagree.
I co-taught a class with Jonathan in January, in the winter semester. The traditional course he teaches is called Internet and Society: The Technologies and Politics of Control. .... was there, others were there; it was a fun class. But one of the framing pieces of how we talked about this was that sort of Lessigian picture that many of you may have seen in his book, where you have law at the top, markets on one side, norms on the other, technology underneath, and you in the middle. And somehow what you are able to do is determined by this relationship between law and technology (I think technology is on top and law is down here).
But anyway, somehow these all affect each other: you can create technologies that affect the law, you can create laws that affect norms, you can create norms that affect technology. So some relationship between norms, markets, law, and technology is how we need to be thinking in order to design all of these systems so they work well in the future. I think one of the key reasons the collaboration between MIT and Harvard Law School, the Media Lab and Berkman, is so important is that you kind of have to get all of the pieces, and the people, in the same room. Because the problem is, once everyone has a solution and they're trying to convince each other of that solution, it's, I call them, people selling dollhouses rather than Legos. What you want is a whole pile of Legos, with lawyers and business people and technologists and policy makers playing with the Legos, rather than trying to sell each other their own dollhouses.
That's what was sort of fun with the class: I think a lot of the lawyers realized that, in fact, whether you're talking about Bitcoin or differential privacy or AI, we still have a lot of choices to make on the technology side, and those choices can be informed by policy and law. And conversely, I think a lot of the technologists thought that law was something like the laws of physics, that just are. But in fact, laws are the result of lawyers and policy makers talking to technologists and imagining what society wants.
So we're sort of in the process right now of struggling through how we think about this. But importantly, it's already happening, so it's not like we have that much time. I think it was Pedro Domingos who says in his book The Master Algorithm, and this isn't exact, I'm paraphrasing the quote, something like: I'm less afraid of a super intelligence coming to take over the world, and more worried about a stupid intelligence that has taken over already. You know? I think that's very close to where we are.
I think if you look at Julia Angwin's article in ProPublica, I guess it was a little over a year ago, where she happens to find a district where they're forced to disclose court records. She was specifically going after the fact that machine learning, AI, is now used by the judiciary to set bail, to decide parole, and even in sentencing. And they have this thing called the risk score, where the machine sort of pops up a number after it does an assessment of a person's history and looks at their interviews. And she found, and this is great, because she's a mathematician in a data sense, she crunched all these numbers, and it shows that for white people, in many cases, it's nearly random. So it's a number, but it's still almost random. And then for black people, it's biased against them.
And what's interesting is, when I talked to ..., a prosecutor, the other day, he said, well, they love these numbers, because you get a risk score that says, okay, this person has a risk rating of 8, and so then the court can say, okay, we'll give you this bail. Because the last thing they want is to give somebody bail and then the person goes out and murders somebody; then it's sort of their fault. If they've taken the risk score, they can say, "I just looked at the risk score"; it absolves them of this responsibility. And so there's this really interesting question: even if the score is random, there's this weird moral hazard, that even though you have agency, you're able to push off this responsibility onto the machine, right? And then you can sort of say, well, it was math.
And the problem right now is that these algorithms are running on data sets and rating systems that are closed. We see this happening in a variety of fields. I think we see this happening in the judiciary, which is a scary place for it to be happening. And so as part of this initiative, with the AI fund that we're doing, we're going to try to look at whether we can create more transparency and auditability.
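To make the auditability point concrete, here is a minimal sketch, with invented records, of the kind of check the ProPublica team ran: compare false positive rates across groups, since a closed score can be biased even when race is never an input. The group labels, numbers, and field names are all hypothetical.

```python
# Minimal sketch of a disparate-impact audit on closed risk scores.
# The records are invented for illustration; a real audit would use
# disclosed court data, as in the ProPublica analysis.
records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("A", False, True), ("B", True, False), ("B", True, False),
    ("B", False, True), ("B", True, True),
]

def false_positive_rate(group):
    """Share of people who did not reoffend but were labeled high risk."""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for group in ("A", "B"):
    print(group, false_positive_rate(group))
# Sharply diverging rates mean the score harms one group more,
# even though race is never an explicit input feature.
```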
We're also seeing it in medicine. There's a study I heard of where, when a doctor overruled the machine in diagnostics, the doctor was wrong 70% of the time. So what does that mean? If you're a doctor and you know for a fact that you're 70% likely, on average, to be wrong, are you ever going to overrule the machine? And what about the 30% of cases where the doctors are right? So it creates a very difficult situation.
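The arithmetic behind that bind is worth spelling out. A back-of-envelope sketch; only the 70% figure comes from the study mentioned above, while the machine's baseline accuracy and the doctors' overrule rate are assumptions for illustration:

```python
# Back-of-envelope look at the overrule bind. Only the 70% figure
# comes from the talk; the machine's baseline accuracy and the
# doctors' overrule rate are assumed for illustration.
machine_accuracy = 0.90               # assumed
overrule_rate = 0.10                  # assumed share of cases overruled
doctor_wrong_when_overruling = 0.70   # figure quoted above

# Policy 1: always defer to the machine.
defer = machine_accuracy

# Policy 2: overrule 10% of the time; those overrules are right 30% of the time.
overrule = (1 - overrule_rate) * machine_accuracy + \
           overrule_rate * (1 - doctor_wrong_when_overruling)

print(defer, overrule)  # 0.9 vs 0.84: deferring wins on average,
# yet it throws away the 30% of overrules that caught a machine error.
```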
You look at... imagine war. We talk about autonomous weapons, and there's this whole fight about it. But what if all of the data, and not what if: in fact, all of the data that's driving intelligence, the way that you get onto the termination list as a target, a lot of it involves statistical analysis of your activity, your emails, your calls. And there's this great interview, I think it was in the Independent. There was this guy who, I think he was in Pakistan, I'm gonna get this wrong, but it's close.
He had been attacked a number of times, where the collateral damage was family members being killed, so he knew he was on the kill list, but he didn't know how to get off it. So he goes to London to kind of fight: "Wait, look at me, talk to me. I'm on this kill list, but I'm not a bad guy. Somehow you got the wrong person." But there's no interface through which he can lobby and petition to get off this kill list.
So even though the person controlling the drone strike and pushing the button may be a human being, if all of the data, or a substantial amount of the data, feeding into the decision to put the person on the kill list is from a machine, I don't know how that's so different from the machine actually being in charge. So we talk about these future autonomous systems and robots running around killing people as a sort of scary thing. But if we are just pushing the button that the robot tells us to push, if the robot gives us A, B, C, or D, and the robot says it's C, you're going to push C. Apparently that was how Kissinger controlled Nixon, through his elbow: the answer was always C.
But anyway, that actually is where we are when we think about practice: we may already be in autonomous mode in many things. And then I'm going to tee it up to Iyad, because one of the first places where the rubber meets the road is with autonomous vehicles. And a lot of the people I talk to say that the real soul-searching around this is going to happen when the next big autonomous vehicle accident happens, where it's clearly the machine's fault. How is that going to play out? So that may be one of the things.
But the last thing I'll say is that I think this is where the Media Lab is excited. I think it's kind of an interface design problem, because part of the problem is that you may think that by pushing the button you have the right to overrule the computer, the right to launch the missile may be at your finger, but if you have no choice, morally or statistically, other than to push the button, you're not in charge anymore, right?
So what I think we need to think about is: how do we bring society and humans into the decision-making process, so that the answer we derive involves human beings? And how does that interface happen? What is the right way to do it? Because I think what we are going to end up with is collective decision-making with machines, and what we want to avoid is human agency with no real decision-making ability. And then we can talk more about some of the ideas, but I'll hand it over to Iyad. Thank you.
So I'll just give a short overview of the research we've been doing on autonomous vehicles. I'm not a driverless car expert; I don't build driverless cars. But I'm interested in them as a kind of social phenomenon, and the reason has to do with this dilemma that we will keep coming back to. You know, what if an autonomous car is going to, for some reason, harm a bunch of pedestrians crossing the street, because the brakes are broken, or because they jumped in front of it, or whatever, but the car can swerve and kill one bystander on the other side, in order to minimize harm, in order to save five or ten people. Should the car do this? And who should decide? And more interestingly, what if the car could swerve and hit a wall, harming or killing the passenger, in order to save those people? Should the car do this as well? What does the car have a duty toward? Minimizing harm, a utilitarian principle? Protection of the owner or the passengers in the car, a duty toward them? Or something else, some sort of negotiation in between? Do we ignore this problem?
Do we just say, well, let the car deal with this problem? It seems to be a very controversial topic, because there are lots of people who love this and lots of people who hate this. And the people who hate it say, well, this is never going to happen, it's just so statistically unlikely. And I think that kind of misses the point, because this is an in vitro exploration of a principle: you strip away all of the things that don't matter in the real world so you can isolate one factor. You know, does drug X cause this particular reaction in a cell, for example? You don't do this in the forest; you do it in a petri dish. And this is the petri dish for studying human perception of machine ethics, and what other factors people seem to be ticked off by.
When we started studying this, we used techniques from social psychology: we framed these problems to people, and we varied things, the number of people being sacrificed or otherwise, whether there's an act of omission versus an act of commission, and things like this. And we were interested in: how do people want to resolve this dilemma?
What's fascinating is that there was something so obvious that we missed it initially, and that was: it's not really an ethical question, it's more of a social dilemma. That is, it's a question about how you negotiate the interests of different people. And this was the strongest finding we found, which was: no one wants to be in a self-sacrificing car, but they want the whole world to drive one. And it's really fascinating how strong that effect is.
You know, if you look at the morality of sacrifice, and this is whether you kill a pedestrian to save ten, kill a passenger to save ten, and so forth, you can see that people say: I think it's moral and desirable, in both my car and other cars, to sacrifice other people for the greater good. So I'm happy to kill pedestrians to save ten, that's great. But as soon as you ask, "Well, would you sacrifice yourself? Would you sacrifice your passenger?", it's: well, I think it's moral, I think it's great, but I would never want this in my car. Not in other cars, and definitely not in my car. This is where you see these things split.
Now, this is the tragedy of the commons, right? I want public safety to be maximized; I would like the world to be a safer place, where the cars make the decisions that minimize harm. But I don't want to contribute to this public good; I wouldn't want to pay the personal cost needed to do this.
So we thought: maybe regulation? You know, that's how public goods problems are solved. Let's set a quota on the number of sheep that can graze so we don't overrun the pasture. Or let's set a quota on the number of fish you can catch, so you don't overfish, kill all the fish, and basically everybody loses out. And we asked people whether they would support this, and we found that people think it's moral, but they don't want it to be legally enforced. At least for now, right? This is the PR problem, and maybe we need to develop the law so that people can feel comfortable with what this means.
(audience) - I'd just like to ask a question, because we talk a lot about the evolution of cultural things, and I assume all of these are people, or I guess you don't know, but most of these are people who have never been in a self-driving car, right? And I think one of the things we found, again, this is not my work but some of our colleagues': they do this self-driving-car, Uber-type pilot, actually for the general public. And people's impression of the safety of self-driving cars changed substantially after they had experienced it for a little while, and they sort of anecdotally felt safer than with dad driving. So I think once you're in a self-driving car and see how much control it has, your view of its safety changes. And the other thing that happens, and this may happen more in Japan than in the US: in Japanese culture you sort of identify with machines and tools like that, and people start to feel trust in the machine, which I think, unless you experience it, you can't imagine. Anyway.
I agree. I think there are all sorts of things. We're now interested in studying, for example, agency perception. You know, do people see these things as having minds, and if not, why not? What's the missing component? Which becomes really interesting with drones, for example.
So the other thing is, when we asked people, well, again, people think it's moral to sacrifice, but they don't want it to be regulated, and they would definitely not buy such a car if it were regulated; they're much more likely to purchase those cars if they were unregulated. I think this is a really important question, because if people don't purchase those cars, you will not save lives. I mean, scientists estimate that 90% of accidents today are due to human error. So, assuming the technology gets there and assuming we have wide adoption, the sooner that happens, the more lives we save.
But if people are so worried about edge cases, or worried that their own safety is not paramount, they may not purchase the cars, and we may not, therefore, have wide adoption, and as a result...
- We can map this onto the quadrants: this is clearly one that you can't just leave up to the market, if people aren't buying the thing that they believe serves the common good.
- Exactly. And if you regulate it, there's a backfire effect, which is: well, fine, that's great, that's a good social contract for other people, but I will continue to drive my own car, and probably be more likely to kill myself as a result. People are not rational in the way they assess risks, like dying on a plane, or "will I be eaten by a shark?" You know, people overestimate those risks, and there's a good chance that if we don't trust these systems, we will overestimate their risks too, and prefer to drive ourselves.
So we have the ethical dilemma we started from; then we realized it's a social dilemma; and now we're realizing there's a meta-ethical dilemma, which is: if you solve the social dilemma by using regulation, you may actually create a bigger dilemma, a bigger trolley problem, which is: do we continue to drive cars ourselves, or do we lead to wide adoption of autonomous vehicles?
So we want to collect more data; we want to understand this issue in more nuanced ways. And we started, I'm gonna move fast on this, we started collecting data. These things have made it into transportation regulations, or guidelines, now, which is good. But we've created a website called Moral Machine, in which we randomly generate scenarios. So in this case it's not just one versus ten, or one versus five: there's a dog in there, and we've varied the ages; sometimes they're children, sometimes they're pregnant women, sometimes people are crossing at red lights. And so, do they deserve the same level of protection? Isn't that interesting?
This group here: what if they're children? Do they, should they, are they expected to know not to cross at a red light? And so it gets really hairy really quickly.
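For a feel of how such scenarios can be generated, here is a toy sketch in the spirit of the Moral Machine generator; the character list and fields are simplifications assumed for illustration, not the site's actual design:

```python
import random

# Toy dilemma generator, loosely in the spirit of the Moral Machine
# site. The character list and fields are simplifications, not the
# actual experimental design.
CHARACTERS = ["man", "woman", "child", "elderly person",
              "pregnant woman", "dog"]

def random_scenario():
    return {
        # who dies if the car stays in its lane
        "stay_victims": random.sample(CHARACTERS, k=random.randint(1, 4)),
        # who dies if the car swerves
        "swerve_victims": random.sample(CHARACTERS, k=random.randint(1, 4)),
        # are the pedestrians crossing against the light?
        "crossing_on_red": random.choice([True, False]),
    }

print(random_scenario())
# Each respondent judges a batch of these; the random variation in
# factors is what lets us estimate which ones drive the choices.
```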
You know, these are cartoons, very simplified scenarios, but I think they still bring out lots of interesting questions. And we show people the results.
(audience groaning at result)
We show people the results. This is a former student of mine who has a cat; he's happy to kill babies to save cats. We also show people how much they care about different factors, and how that compares with others. People love this, because it's kind of a mirror held up to their own morality. You know, do I care about the law a lot, and how do I compare with other people on this? Do I protect passengers more than other people do, or less? And so on.
We also have a design mode where people can create their own scenarios and get a link to them, and a lot of people have been using it to teach ethics in high schools and universities.
And we have all sorts of, you know, species preferences; whether social value should be taken into account; whether age should be taken into account; and so forth. And we also evaluate whether there is an omission/commission distinction: when the action that minimizes harm can be either an omission or a commission, does it matter which? And there is definitely a bias. We're now running the analyses.
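One way to see the omission/commission bias in data of this kind, sketched here with invented responses: hold the casualty counts equal on both sides and check how often people still choose to act.

```python
# Sketch of an omission-bias check. Each tuple is one respondent's
# answer to one dilemma: (deaths_if_stay, deaths_if_swerve,
# chose_swerve). The responses are invented for illustration.
responses = [
    (2, 2, False), (2, 2, False), (2, 2, True),
    (3, 3, False), (3, 3, False), (1, 1, False),
]

# With equal harm either way, an unbiased population would swerve
# about half the time; a much lower rate suggests a preference for
# omission (staying) over commission (swerving).
equal_harm = [r for r in responses if r[0] == r[1]]
swerve_rate = sum(r[2] for r in equal_harm) / len(equal_harm)
print(f"swerve rate under equal harm: {swerve_rate:.2f}")
```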
So far we've translated this into ten languages, and we've received three million users who have completed more than 28 million decisions, binary choices, and we have 300,000 full surveys, and this is still growing fast. These full surveys allow us to tease out whether these people have cars themselves, which age bracket and which income bracket they come from, and so on.
This is really interesting, because then you can start saying, well, people who have cars may be more or less likely to support this particular ethical framework.
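A minimal sketch of that kind of cut, with invented survey rows; the real analyses use proper conjoint models over the full data set, so this only shows the shape of the computation:

```python
# Sketch: does car ownership track support for harm-minimizing
# choices? Rows are invented; the real analyses run conjoint
# models over millions of responses.
surveys = [
    {"owns_car": True,  "chose_minimize_harm": False},
    {"owns_car": True,  "chose_minimize_harm": True},
    {"owns_car": True,  "chose_minimize_harm": False},
    {"owns_car": False, "chose_minimize_harm": True},
    {"owns_car": False, "chose_minimize_harm": True},
    {"owns_car": False, "chose_minimize_harm": False},
]

for owns in (True, False):
    group = [s for s in surveys if s["owns_car"] == owns]
    rate = sum(s["chose_minimize_harm"] for s in group) / len(group)
    print(f"owns_car={owns}: harm-minimizing choice rate {rate:.2f}")
```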
We have a lot of global coverage, and so far we've been looking at cross-cultural differences. Because this is recorded, I don't want to talk about it yet, but basically we're observing some very interesting cross-cultural differences in the degree to which people are utilitarian, or would prioritize the passengers, or are willing to take an action, so omission versus commission, and so forth. I think it's really fascinating, and it would be a very important precondition to any sort of public relations effort to make the cars more acceptable, but it is also potentially relevant to differences in the legal frameworks as well.
We're also beginning to look at partial autonomy, whether it's autonomous cars or drones or judges making bail decisions. Again, you can have a machine do everything, or you can have a human do everything. In the car, you have driver assistance, where the person is in control and the machine sort of watches over them; Toyota has been promoting this model, and other carmakers as well. But there's also autopilot, where the machine does things and the human kind of has to keep an eye on it, whether it's a car or anything else. And then you have full autonomy.
The question we're interested in here is comparing these models: we're investigating empirically whether people assign different degrees of blame and responsibility depending on the control architecture, as we can call it; whether a person overriding a decision made by a machine is different from a machine overriding a decision made by a human. And it happens, again, this is now in submission, but it happens to really matter. It really matters who you think is ultimately responsible and who is liable, and I think this is a sort of psychological input to potential legislation that could come up to deal with these scenarios.
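The comparison might be pictured something like the following sketch, with invented blame ratings on a 0-10 scale across the control architectures just described:

```python
# Sketch: mean blame assigned to the human actor under different
# control architectures. The 0-10 ratings are invented; the study
# itself was still in submission at the time of this talk.
ratings = {
    "human drives":            [9, 8, 9, 10],
    "human overrides machine": [8, 7, 8, 7],
    "machine overrides human": [5, 6, 4, 5],
    "full autonomy":           [2, 3, 2, 1],
}

for architecture, blame in ratings.items():
    print(f"{architecture:24s} mean blame: {sum(blame) / len(blame):.1f}")
# If the two mixed architectures attract systematically different
# blame, liability rules may need to track who had the last word,
# not just whether a human was somewhere in the loop.
```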
So this is the broader picture that I like, which I think Joi alluded to initially: there is a gap in between. On one side we have engineers, who think everything is an engineering problem, you know, everything can be engineered away. And on the other you have people from the humanities and social sciences, who study the nuances of human behavior, but also who know how rules can get abused, and who have a good knack for this. You know, how do you ensure you have a coherent system of ethics and values and checks and balances, and so on.
I think these sides often don't talk to each other. So there is a sizable community of people who complain, who are very good at identifying problems and violations of fairness and rights and so on, but who don't have the tools to express these objections in a way that computer scientists can operationalize. Likewise, we have machine learning, and there are scientists who feel that this is problematic, who can see that this can cause problems and can violate people's rights, but who, again, don't have the intellectual framework to raise these issues in a way that humans, as a society, can evaluate. So that's what we're hoping to do, and this is part of the partnership between the Media Lab and the Berkman Center: they come from that side and they understand us, and we come from this side, from technology, and we work on the interfaces. And we hope that through this we will build some interesting frameworks. This is, I think, where many of the interesting questions are. So I think we're ready for a discussion and to take some questions.
"I guess the one other part I would add
to this is that
just one other acess is going back
to judiciary but we can have
this
in cars as well. Is in the one hand
I don't think anybody
thinks that speeding tickets issues
by speed cameras on the highway
are , I mean some people may
not like them but
that's inappropriate use of machine
because it's really a fact. There's a speed
that you're allowed to go. and the
machine is more likely to measure your s
speed than
a human eye balling it,
and probably more fair.
On the other hand, I don't think anyone believes that Supreme Court decisions, at least for now, should involve machines in any substantial role, at least in the deliberation part. So there's a spectrum: at one end you're just establishing a fact, which is sort of the implementation of the law, where we're not even disputing the justice of it; at the other end is the Supreme Court, which is supposed to try to reflect the norms of the day in making determinations about laws.
But there's a continuum between them, and somewhere in the middle you have this uncanny place where it feels like the machines have some influence. And I think what's kind of interesting is that in just about all of these hypotheticals, there's one extreme where you do want the machines in charge, and another extreme where you do want humans in charge. Those are actually not that difficult. It's the space in between them, and I think that's why it's kind of an interface problem: it's very unclear how the human and machine pieces, whether it's a societal thing or an individual one, get together. So again, it's related to the autonomy question, but there's sort of a technology layer, and is it ethics or morality, there's some sort of stack as well. And maybe everything to an internet person looks like a stack. Maybe that's my problem.
- There's sort of an interesting thought experiment here. I think we need tools too; it's not just a legal question. New kinds of tools and new kinds of data can make a big difference. So let's assume that we invented cars, and they started going at high speed, but we never invented radar that can accurately measure speed, so we relied on human guesstimation of your driving speed. So there's a policeman standing there, sort of eyeballing cars: "that looks like 120, right?" You can very well imagine that under this scenario, if policemen were discriminating against one particular group, they might overestimate the speed of drivers from that particular ethnic group and underestimate the speed of other people. But somehow the tool solves this question, because the measurement is recorded, and somehow it becomes objective. It becomes a fact; it's not disputable. So can we do something similar here, and say...
- But I think that's where it hits a slippery slope. If you're measuring the speed of a car, it's a very small number of data points the machine is using to estimate your speed. But the risk rating may seem very scientific to some people, especially if they don't understand math and statistics, and so they may say: the machine's rating said they have a risk of this. And actually, on the forms, they never ask you your race. It just turns out that when you collect the data and you collect the questions, the result is biased with respect to race. And so part of what's difficult is that you may not understand how these algorithms convert data into results, and this is the problem with the black box thing. A lot of the machines, and again, there's progress on making machines that can explain how they got to a decision, but a lot of the machines we currently use are unable to describe how they got the number; they just give you the number.
So if I may pick up on that and ask a first question. This question of the normativity of the autonomous system, and who makes, where's the source of, the norm, that seems to be a key question. And I'm wondering, picking up on your earlier description, whether we're on a particular trajectory. From what you described, I think there are roughly three phases I've heard. One is: okay, we have these autonomous vehicles, and now it's a question for lawmakers and regulators whether we apply existing norms to these new technologies. Sometimes you need to update the regulations, which we see happening; you made reference to that. But there also seems to be a second phase, which is: can we somehow program some of the values and laws and rules into the systems themselves, so the behavior is closer to the normative consensus we have as a society and as lawmakers and policymakers?
And then there's a potential third phase, and I'm particularly interested in your views on whether that is indeed the trajectory in the area you study, or more broadly. You could envision a future where more and more data accumulates in systems like autonomous vehicles, which, based on the rules we've programmed into them about how to behave, learn how those rules are obeyed or not, what the compliance rate is, and the like, so that suddenly the norm itself becomes computer- or machine-generated. And how do we feel about that? Because that may inadvertently get us to the other end of the spectrum you're describing, where the norms are no longer developed here and then somehow programmed into the system; instead, at least the evolution of the norm happens in the automated system.
- I think you would have to tease apart the norms and the laws, says the engineer to the law student.
(laughing)
One of my favorite examples is a chart that Iyad gave me, but I did this in Japan, so I can do it here.
Imagine you have a car, and there are two motorcycles, one on the left and one on the right. The one on the left has no helmet; the one on the right is wearing a helmet. And there's a helmet law, so the guy on the left is clearly breaking the law, completely disrespecting the law. Someone jumps in front of your car, so you have to swerve. Do you hit the guy without the helmet, or do you hit the guy with the helmet? The guy with the helmet is more likely to survive, but he's following the law. So who hits the guy without the helmet? I did this at a Japanese car company, and half of the people in the room raised their hands: well, of course you go after the guy who broke the law, right? But this is a very interesting normative question, and there are all these versions of it.
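One way to see why the variants multiply: the normative question reduces to an explicit weight inside the swerve decision. A toy sketch, where the survival numbers are invented and law_weight is exactly the value nobody agrees on:

```python
# Toy decision rule for the helmet dilemma. The survival numbers are
# invented, and law_weight, how much breaking the law should count
# against a rider, is precisely the open normative parameter.
def choose_victim(riders, law_weight):
    def cost(rider):
        fatality = 1.0 - rider["survival_prob"]
        # Make law-abiding riders costlier to target.
        return fatality + (law_weight if rider["law_abiding"] else 0.0)
    return min(riders, key=cost)

riders = [
    {"name": "no helmet", "survival_prob": 0.2, "law_abiding": False},
    {"name": "helmet",    "survival_prob": 0.8, "law_abiding": True},
]

print(choose_victim(riders, law_weight=0.3)["name"])  # helmet: minimize harm
print(choose_victim(riders, law_weight=0.9)["name"])  # no helmet: respect the law
```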