WEBVTT

00:00:06.949 --> 00:00:11.521
Imagine you're watching a runaway trolley barreling down the tracks

00:00:11.521 --> 00:00:15.961
straight towards five workers who can't escape.

00:00:15.961 --> 00:00:18.179
You happen to be standing next to a switch

00:00:18.179 --> 00:00:21.680
that will divert the trolley onto a second track.

00:00:21.680 --> 00:00:22.980
Here's the problem.

00:00:22.980 --> 00:00:28.069
That track has a worker on it, too, but just one.

NOTE Paragraph

00:00:28.069 --> 00:00:29.390
What do you do?

00:00:29.390 --> 00:00:32.685
Do you sacrifice one person to save five?

NOTE Paragraph

00:00:32.685 --> 00:00:35.414
This is the trolley problem,

00:00:35.414 --> 00:00:42.103
a version of an ethical dilemma that philosopher Philippa Foot devised in 1967.

00:00:42.103 --> 00:00:45.371
It's popular because it forces us to think about how to choose

00:00:45.371 --> 00:00:48.060
when there are no good choices.

00:00:48.060 --> 00:00:50.200
Do we pick the action with the best outcome

00:00:50.200 --> 00:00:55.400
or stick to a moral code that prohibits causing someone's death?

NOTE Paragraph

00:00:55.400 --> 00:01:00.837
In one survey, about 90% of respondents said that it's okay to flip the switch,

00:01:00.837 --> 00:01:04.250
letting one worker die to save five,

00:01:04.250 --> 00:01:08.600
and other studies, including a virtual reality simulation of the dilemma,

00:01:08.600 --> 00:01:11.040
have found similar results.

NOTE Paragraph

00:01:11.040 --> 00:01:16.061
These judgments are consistent with the philosophical principle of utilitarianism,

00:01:16.061 --> 00:01:18.521
which argues that the morally correct decision

00:01:18.521 --> 00:01:23.351
is the one that maximizes well-being for the greatest number of people.

00:01:23.351 --> 00:01:25.481
The five lives outweigh one,

00:01:25.481 --> 00:01:30.562
even if achieving that outcome requires condemning someone to death.

NOTE Paragraph

00:01:30.562 --> 00:01:33.471
But people don't always take the utilitarian view,

00:01:33.471 --> 00:01:37.062
which we can see by changing the trolley problem a bit.

NOTE Paragraph

00:01:37.062 --> 00:01:40.303
This time, you're standing on a bridge over the track

00:01:40.303 --> 00:01:43.192
as the runaway trolley approaches.

00:01:43.192 --> 00:01:44.873
Now there's no second track,

00:01:44.873 --> 00:01:48.794
but there is a very large man on the bridge next to you.

00:01:48.794 --> 00:01:52.492
If you push him over, his body will stop the trolley,

00:01:52.492 --> 00:01:54.243
saving the five workers,

00:01:54.243 --> 00:01:56.033
but he'll die.

NOTE Paragraph

00:01:56.033 --> 00:01:59.432
To utilitarians, the decision is exactly the same:

00:01:59.432 --> 00:02:01.982
lose one life to save five.

00:02:01.982 --> 00:02:04.584
But in this case, only about 10% of people

00:02:04.584 --> 00:02:08.453
say that it's OK to throw the man onto the tracks.

00:02:08.453 --> 00:02:11.914
Our instincts tell us that deliberately causing someone's death

00:02:11.914 --> 00:02:16.303
is different than allowing them to die as collateral damage.

00:02:16.303 --> 00:02:20.953
It just feels wrong for reasons that are hard to explain.

NOTE Paragraph

00:02:20.953 --> 00:02:23.473
This intersection between ethics and psychology

00:02:23.473 --> 00:02:26.604
is what's so interesting about the trolley problem.
00:02:26.604 --> 00:02:30.984
The dilemma in its many variations reveals that what we think is right or wrong

00:02:30.984 --> 00:02:36.345
depends on factors other than a logical weighing of the pros and cons.

NOTE Paragraph

00:02:36.345 --> 00:02:38.835
For example, men are more likely than women

00:02:38.835 --> 00:02:42.504
to say it's okay to push the man over the bridge.

00:02:42.504 --> 00:02:46.994
So are people who watch a comedy clip before doing the thought experiment.

00:02:46.994 --> 00:02:49.165
And in one virtual reality study,

00:02:49.165 --> 00:02:52.944
people were more willing to sacrifice men than women.

NOTE Paragraph

00:02:52.944 --> 00:02:55.214
Researchers have studied the brain activity

00:02:55.214 --> 00:02:59.535
of people thinking through the classic and bridge versions.

00:02:59.535 --> 00:03:04.054
Both scenarios activate areas of the brain involved in conscious decision-making

00:03:04.054 --> 00:03:06.514
and emotional responses.

00:03:06.514 --> 00:03:10.975
But in the bridge version, the emotional response is much stronger.

00:03:10.975 --> 00:03:13.194
So is activity in an area of the brain

00:03:13.194 --> 00:03:16.884
associated with processing internal conflict.

00:03:16.884 --> 00:03:18.145
Why the difference?

00:03:18.145 --> 00:03:22.912
One explanation is that pushing someone to their death feels more personal,

00:03:22.912 --> 00:03:26.925
activating an emotional aversion to killing another person,

00:03:26.925 --> 00:03:31.424
but we feel conflicted because we know it's still the logical choice.

NOTE Paragraph

00:03:31.424 --> 00:03:36.405
"Trolleyology" has been criticized by some philosophers and psychologists.

00:03:36.405 --> 00:03:41.266
They argue that it doesn't reveal anything because its premise is so unrealistic

00:03:41.266 --> 00:03:45.425
that study participants don't take it seriously.

NOTE Paragraph

00:03:45.425 --> 00:03:48.556
But new technology is making this kind of ethical analysis

00:03:48.556 --> 00:03:50.698
more important than ever.

00:03:50.698 --> 00:03:54.036
For example, driverless cars may have to handle choices

00:03:54.036 --> 00:03:58.007
like causing a small accident to prevent a larger one.

00:03:58.007 --> 00:04:01.626
Meanwhile, governments are researching autonomous military drones

00:04:01.626 --> 00:04:05.976
that could wind up making decisions about whether to risk civilian casualties

00:04:05.976 --> 00:04:09.276
to attack a high-value target.

00:04:09.276 --> 00:04:11.197
If we want these actions to be ethical,

00:04:11.197 --> 00:04:15.397
we have to decide in advance how to value human life

00:04:15.397 --> 00:04:17.667
and judge the greater good.

NOTE Paragraph

00:04:17.667 --> 00:04:20.107
So researchers who study autonomous systems

00:04:20.107 --> 00:04:22.207
are collaborating with philosophers

00:04:22.207 --> 00:04:27.628
to address the complex problem of programming ethics into machines,

00:04:27.628 --> 00:04:30.957
which goes to show that even hypothetical dilemmas

00:04:30.957 --> 00:04:35.058
can wind up on a collision course with the real world.