WEBVTT 00:00:00.739 --> 00:00:04.861 So, I started my first job as a computer programmer 00:00:04.885 --> 00:00:06.841 in my very first year of college -- 00:00:06.865 --> 00:00:08.372 basically, as a teenager. NOTE Paragraph 00:00:08.889 --> 00:00:10.621 Soon after I started working, 00:00:10.645 --> 00:00:12.255 writing software in a company, 00:00:12.799 --> 00:00:16.434 a manager who worked at the company came down to where I was, 00:00:16.458 --> 00:00:17.726 and he whispered to me, 00:00:18.229 --> 00:00:21.090 "Can he tell if I'm lying?" 00:00:21.806 --> 00:00:23.883 There was nobody else in the room. NOTE Paragraph 00:00:25.032 --> 00:00:29.421 "Can who tell if you're lying? And why are we whispering?" NOTE Paragraph 00:00:30.266 --> 00:00:33.373 The manager pointed at the computer in the room. 00:00:33.397 --> 00:00:36.493 "Can he tell if I'm lying?" 00:00:37.613 --> 00:00:41.975 Well, that manager was having an affair with the receptionist. NOTE Paragraph 00:00:41.999 --> 00:00:43.111 (Laughter) NOTE Paragraph 00:00:43.135 --> 00:00:44.901 And I was still a teenager. 00:00:45.447 --> 00:00:47.466 So I whisper-shouted back to him, 00:00:47.490 --> 00:00:51.114 "Yes, the computer can tell if you're lying." NOTE Paragraph 00:00:51.138 --> 00:00:52.944 (Laughter) NOTE Paragraph 00:00:52.968 --> 00:00:55.891 Well, I laughed, but actually, the laugh's on me. 00:00:55.915 --> 00:00:59.183 Nowadays, there are computational systems 00:00:59.207 --> 00:01:02.755 that can suss out emotional states and even lying 00:01:02.779 --> 00:01:04.823 from processing human faces. 00:01:05.248 --> 00:01:09.401 Advertisers and even governments are very interested. NOTE Paragraph 00:01:10.319 --> 00:01:12.181 I had become a computer programmer 00:01:12.205 --> 00:01:15.318 because I was one of those kids crazy about math and science. 00:01:15.942 --> 00:01:19.050 But somewhere along the line I'd learned about nuclear weapons, 00:01:19.074 --> 00:01:22.026 and I'd gotten really concerned with the ethics of science. 00:01:22.050 --> 00:01:23.254 I was troubled. 00:01:23.278 --> 00:01:25.919 However, because of family circumstances, 00:01:25.943 --> 00:01:29.241 I also needed to start working as soon as possible. 00:01:29.265 --> 00:01:32.564 So I thought to myself, hey, let me pick a technical field 00:01:32.588 --> 00:01:34.384 where I can get a job easily 00:01:34.408 --> 00:01:38.426 and where I don't have to deal with any troublesome questions of ethics. 00:01:39.022 --> 00:01:40.551 So I picked computers. NOTE Paragraph 00:01:40.575 --> 00:01:41.679 (Laughter) NOTE Paragraph 00:01:41.703 --> 00:01:45.113 Well, ha, ha, ha! All the laughs are on me. 00:01:45.137 --> 00:01:47.891 Nowadays, computer scientists are building platforms 00:01:47.915 --> 00:01:52.124 that control what a billion people see every day. 00:01:53.052 --> 00:01:56.874 They're developing cars that could decide who to run over. 00:01:57.707 --> 00:02:00.920 They're even building machines, weapons, 00:02:00.944 --> 00:02:03.229 that might kill human beings in war. 00:02:03.253 --> 00:02:06.024 It's ethics all the way down. NOTE Paragraph 00:02:07.183 --> 00:02:09.241 Machine intelligence is here. 00:02:09.823 --> 00:02:13.297 We're now using computation to make all sorts of decisions, 00:02:13.321 --> 00:02:15.207 but also new kinds of decisions.
00:02:15.231 --> 00:02:20.403 We're asking questions to computation that have no single right answers, 00:02:20.427 --> 00:02:21.629 that are subjective 00:02:21.653 --> 00:02:23.978 and open-ended and value-laden. NOTE Paragraph 00:02:24.002 --> 00:02:25.760 We're asking questions like, 00:02:25.784 --> 00:02:27.434 "Who should the company hire?" 00:02:28.096 --> 00:02:30.855 "Which update from which friend should you be shown?" 00:02:30.879 --> 00:02:33.145 "Which convict is more likely to reoffend?" 00:02:33.514 --> 00:02:36.568 "Which news item or movie should be recommended to people?" NOTE Paragraph 00:02:36.592 --> 00:02:39.964 Look, yes, we've been using computers for a while, 00:02:39.988 --> 00:02:41.505 but this is different. 00:02:41.529 --> 00:02:43.596 This is a historical twist, 00:02:43.620 --> 00:02:48.957 because we cannot anchor computation for such subjective decisions 00:02:48.981 --> 00:02:54.401 the way we can anchor computation for flying airplanes, building bridges, 00:02:54.425 --> 00:02:55.684 going to the moon. 00:02:56.449 --> 00:02:59.708 Are airplanes safer? Did the bridge sway and fall? 00:02:59.732 --> 00:03:04.230 There, we have agreed-upon, fairly clear benchmarks, 00:03:04.254 --> 00:03:06.493 and we have laws of nature to guide us. 00:03:06.517 --> 00:03:09.911 We have no such anchors and benchmarks 00:03:09.935 --> 00:03:13.898 for decisions in messy human affairs. NOTE Paragraph 00:03:13.922 --> 00:03:18.159 To make things more complicated, our software is getting more powerful, 00:03:18.183 --> 00:03:21.956 but it's also getting less transparent and more complex. 00:03:22.542 --> 00:03:24.582 Recently, in the past decade, 00:03:24.606 --> 00:03:27.335 complex algorithms have made great strides. 00:03:27.359 --> 00:03:29.349 They can recognize human faces. 00:03:29.985 --> 00:03:32.040 They can decipher handwriting. 00:03:32.436 --> 00:03:34.502 They can detect credit card fraud 00:03:34.526 --> 00:03:35.715 and block spam 00:03:35.739 --> 00:03:37.776 and they can translate between languages. 00:03:37.800 --> 00:03:40.374 They can detect tumors in medical imaging. 00:03:40.398 --> 00:03:42.603 They can beat humans in chess and Go. NOTE Paragraph 00:03:43.264 --> 00:03:47.768 Much of this progress comes from a method called "machine learning." 00:03:48.175 --> 00:03:51.362 Machine learning is different than traditional programming, 00:03:51.386 --> 00:03:54.971 where you give the computer detailed, exact, painstaking instructions. 00:03:55.378 --> 00:03:59.560 It's more like you take the system and you feed it lots of data, 00:03:59.584 --> 00:04:01.240 including unstructured data, 00:04:01.264 --> 00:04:03.542 like the kind we generate in our digital lives. 00:04:03.566 --> 00:04:06.296 And the system learns by churning through this data. 00:04:06.669 --> 00:04:08.195 And also, crucially, 00:04:08.219 --> 00:04:12.599 these systems don't operate under a single-answer logic. 00:04:12.623 --> 00:04:15.582 They don't produce a simple answer; it's more probabilistic: 00:04:15.606 --> 00:04:19.089 "This one is probably more like what you're looking for." NOTE Paragraph 00:04:20.023 --> 00:04:23.093 Now, the upside is: this method is really powerful. 00:04:23.117 --> 00:04:25.193 The head of Google's AI systems called it, 00:04:25.217 --> 00:04:27.414 "the unreasonable effectiveness of data." 00:04:27.791 --> 00:04:29.144 The downside is, 00:04:29.738 --> 00:04:32.809 we don't really understand what the system learned. 
00:04:32.833 --> 00:04:34.420 In fact, that's its power. 00:04:34.946 --> 00:04:38.744 This is less like giving instructions to a computer; 00:04:39.200 --> 00:04:43.264 it's more like training a puppy-machine-creature 00:04:43.288 --> 00:04:45.659 we don't really understand or control. 00:04:46.362 --> 00:04:47.913 So this is our problem. 00:04:48.427 --> 00:04:52.689 It's a problem when this artificial intelligence system gets things wrong. 00:04:52.713 --> 00:04:56.253 It's also a problem when it gets things right, 00:04:56.277 --> 00:04:59.905 because we don't even know which is which when it's a subjective problem. 00:04:59.929 --> 00:05:02.268 We don't know what this thing is thinking. NOTE Paragraph 00:05:03.493 --> 00:05:07.176 So, consider a hiring algorithm -- 00:05:08.123 --> 00:05:12.434 a system used to hire people, using machine-learning systems. 00:05:13.052 --> 00:05:16.631 Such a system would have been trained on previous employees' data 00:05:16.655 --> 00:05:19.246 and instructed to find and hire 00:05:19.270 --> 00:05:22.308 people like the existing high performers in the company. 00:05:22.814 --> 00:05:23.967 Sounds good. 00:05:23.991 --> 00:05:25.990 I once attended a conference 00:05:26.014 --> 00:05:29.139 that brought together human resources managers and executives, 00:05:29.163 --> 00:05:30.369 high-level people, 00:05:30.393 --> 00:05:31.952 using such systems in hiring. 00:05:31.976 --> 00:05:33.622 They were super excited. 00:05:33.646 --> 00:05:38.299 They thought that this would make hiring more objective, less biased, 00:05:38.323 --> 00:05:41.323 and give women and minorities a better shot 00:05:41.347 --> 00:05:43.535 against biased human managers. NOTE Paragraph 00:05:43.559 --> 00:05:46.402 And look -- human hiring is biased. 00:05:47.099 --> 00:05:48.284 I know. 00:05:48.308 --> 00:05:51.313 I mean, in one of my early jobs as a programmer, 00:05:51.337 --> 00:05:55.205 my immediate manager would sometimes come down to where I was 00:05:55.229 --> 00:05:58.982 really early in the morning or really late in the afternoon, 00:05:59.006 --> 00:06:02.068 and she'd say, "Zeynep, let's go to lunch!" 00:06:02.724 --> 00:06:04.891 I'd be puzzled by the weird timing. 00:06:04.915 --> 00:06:07.044 It's 4pm. Lunch? 00:06:07.068 --> 00:06:10.162 I was broke, so free lunch. I always went. 00:06:10.618 --> 00:06:12.685 I later realized what was happening. 00:06:12.709 --> 00:06:17.255 My immediate managers had not confessed to their higher-ups 00:06:17.279 --> 00:06:20.392 that the programmer they hired for a serious job was a teen girl 00:06:20.416 --> 00:06:24.346 who wore jeans and sneakers to work. 00:06:25.174 --> 00:06:27.376 I was doing a good job, I just looked wrong 00:06:27.400 --> 00:06:29.099 and was the wrong age and gender. NOTE Paragraph 00:06:29.123 --> 00:06:32.469 So hiring in a gender- and race-blind way 00:06:32.493 --> 00:06:34.358 certainly sounds good to me. 00:06:35.031 --> 00:06:38.372 But with these systems, it is more complicated, and here's why: 00:06:38.968 --> 00:06:44.759 Currently, computational systems can infer all sorts of things about you 00:06:44.783 --> 00:06:46.655 from your digital crumbs, 00:06:46.679 --> 00:06:49.012 even if you have not disclosed those things. 00:06:49.506 --> 00:06:52.433 They can infer your sexual orientation, 00:06:52.994 --> 00:06:54.300 your personality traits, 00:06:54.859 --> 00:06:56.232 your political leanings. 00:06:56.830 --> 00:07:00.515 They have predictive power with high levels of accuracy. 
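NOTE An illustrative sketch, not part of the talk: a minimal, hypothetical Python example (assuming scikit-learn is available) of the learning-from-data, probabilistic style of system described above. All posts and the "night owl" label below are invented; the point is only that the machine is handed labeled "digital crumbs" rather than explicit rules, and returns a probability about a trait someone never disclosed.

# Hypothetical sketch: learn a statistical pattern from example posts with a
# known (disclosed) trait, then infer a probability for someone who disclosed
# nothing. Not the actual systems described in the talk.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: past users' posts and a trait they happened to disclose.
posts = [
    "late night gaming stream again",
    "volunteering at the church bake sale",
    "new espresso machine, tasting notes thread",
    "up all night patching the server",
]
night_owl = [1, 0, 0, 1]  # hypothetical label

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, night_owl)

# Inference about someone who never told us anything about their habits:
new_post = ["up all night again, patching and gaming"]
print(model.predict_proba(new_post)[0, 1])  # a probability, not a single right answer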
00:07:01.362 --> 00:07:03.940 Remember -- for things you haven't even disclosed. 00:07:03.964 --> 00:07:05.555 This is inference. NOTE Paragraph 00:07:05.579 --> 00:07:08.840 I have a friend who developed such computational systems 00:07:08.864 --> 00:07:12.505 to predict the likelihood of clinical or postpartum depression 00:07:12.529 --> 00:07:13.945 from social media data. 00:07:14.676 --> 00:07:16.103 The results are impressive. 00:07:16.492 --> 00:07:19.849 Her system can predict the likelihood of depression 00:07:19.873 --> 00:07:23.776 months before the onset of any symptoms -- 00:07:23.800 --> 00:07:25.173 months before. 00:07:25.197 --> 00:07:27.443 No symptoms, there's prediction. 00:07:27.467 --> 00:07:32.279 She hopes it will be used for early intervention. Great! 00:07:32.911 --> 00:07:34.951 But now put this in the context of hiring. NOTE Paragraph 00:07:36.027 --> 00:07:39.073 So at this human resources managers conference, 00:07:39.097 --> 00:07:43.806 I approached a high-level manager in a very large company, 00:07:43.830 --> 00:07:48.408 and I said to her, "Look, what if, unbeknownst to you, 00:07:48.432 --> 00:07:54.981 your system is weeding out people with high future likelihood of depression? 00:07:55.761 --> 00:07:59.137 They're not depressed now, just maybe in the future, more likely. 00:07:59.923 --> 00:08:03.329 What if it's weeding out women more likely to be pregnant 00:08:03.353 --> 00:08:05.939 in the next year or two but aren't pregnant now? 00:08:06.844 --> 00:08:12.480 What if it's hiring aggressive people because that's your workplace culture?" 00:08:13.173 --> 00:08:15.864 You can't tell this by looking at gender breakdowns. 00:08:15.888 --> 00:08:17.390 Those may be balanced. 00:08:17.414 --> 00:08:20.971 And since this is machine learning, not traditional coding, 00:08:20.995 --> 00:08:25.902 there is no variable there labeled "higher risk of depression," 00:08:25.926 --> 00:08:27.759 "higher risk of pregnancy," 00:08:27.783 --> 00:08:29.517 "aggressive guy scale." 00:08:29.995 --> 00:08:33.674 Not only do you not know what your system is selecting on, 00:08:33.698 --> 00:08:36.021 you don't even know where to begin to look. 00:08:36.045 --> 00:08:37.291 It's a black box. 00:08:37.315 --> 00:08:40.122 It has predictive power, but you don't understand it. NOTE Paragraph 00:08:40.486 --> 00:08:42.855 "What safeguards," I asked, "do you have 00:08:42.879 --> 00:08:46.552 to make sure that your black box isn't doing something shady?" 00:08:48.863 --> 00:08:52.741 She looked at me as if I had just stepped on 10 puppy tails. NOTE Paragraph 00:08:52.765 --> 00:08:54.013 (Laughter) NOTE Paragraph 00:08:54.037 --> 00:08:56.078 She stared at me and she said, 00:08:56.556 --> 00:09:00.889 "I don't want to hear another word about this." 00:09:01.458 --> 00:09:03.492 And she turned around and walked away. 00:09:04.064 --> 00:09:05.550 Mind you -- she wasn't rude. 00:09:05.574 --> 00:09:11.882 It was clearly: what I don't know isn't my problem, go away, death stare. NOTE Paragraph 00:09:11.906 --> 00:09:13.152 (Laughter) NOTE Paragraph 00:09:13.862 --> 00:09:17.701 Look, such a system may even be less biased 00:09:17.725 --> 00:09:19.828 than human managers in some ways. 00:09:19.852 --> 00:09:21.998 And it could make monetary sense. 00:09:22.573 --> 00:09:24.223 But it could also lead 00:09:24.247 --> 00:09:28.995 to a steady but stealthy shutting out of the job market 00:09:29.019 --> 00:09:31.312 of people with higher risk of depression. 
00:09:31.753 --> 00:09:34.349 Is this the kind of society we want to build, 00:09:34.373 --> 00:09:36.658 without even knowing we've done this, 00:09:36.682 --> 00:09:40.646 because we turned decision-making over to machines we don't totally understand? NOTE Paragraph 00:09:41.265 --> 00:09:42.723 Another problem is this: 00:09:43.314 --> 00:09:47.766 these systems are often trained on data generated by our actions, 00:09:47.790 --> 00:09:49.606 human imprints. 00:09:50.188 --> 00:09:53.996 Well, they could just be reflecting our biases, 00:09:54.020 --> 00:09:57.613 and these systems could be picking up on our biases 00:09:57.637 --> 00:09:58.950 and amplifying them 00:09:58.974 --> 00:10:00.392 and showing them back to us, 00:10:00.416 --> 00:10:01.878 while we're telling ourselves, 00:10:01.902 --> 00:10:05.019 "We're just doing objective, neutral computation." NOTE Paragraph 00:10:06.314 --> 00:10:08.991 Researchers found that on Google, 00:10:10.134 --> 00:10:15.447 women are less likely than men to be shown job ads for high-paying jobs. 00:10:16.463 --> 00:10:18.993 And searching for African-American names 00:10:19.017 --> 00:10:23.723 is more likely to bring up ads suggesting criminal history, 00:10:23.747 --> 00:10:25.314 even when there is none. 00:10:26.693 --> 00:10:30.242 Such hidden biases and black-box algorithms 00:10:30.266 --> 00:10:34.239 that researchers sometimes uncover, and sometimes don't even know about, 00:10:34.263 --> 00:10:36.924 can have life-altering consequences. NOTE Paragraph 00:10:37.958 --> 00:10:42.117 In Wisconsin, a defendant was sentenced to six years in prison 00:10:42.141 --> 00:10:43.496 for evading the police. 00:10:44.824 --> 00:10:46.010 You may not know this, 00:10:46.034 --> 00:10:50.032 but algorithms are increasingly used in parole and sentencing decisions. 00:10:50.056 --> 00:10:53.011 He wanted to know: How is this risk score calculated? 00:10:53.795 --> 00:10:55.460 It's a commercial black box. 00:10:55.484 --> 00:10:59.689 The company refused to have its algorithm be challenged in open court. 00:11:00.396 --> 00:11:05.928 But ProPublica, an investigative nonprofit, audited that very algorithm 00:11:05.952 --> 00:11:07.968 with what public data they could find, 00:11:07.992 --> 00:11:10.308 and found that its outcomes were biased 00:11:10.332 --> 00:11:13.961 and its predictive power was dismal, barely better than chance, 00:11:13.985 --> 00:11:18.401 and it was wrongly labeling black defendants as future criminals 00:11:18.425 --> 00:11:22.320 at twice the rate of white defendants. NOTE Paragraph 00:11:23.891 --> 00:11:25.455 So, consider this case: 00:11:26.103 --> 00:11:29.955 This woman was late picking up her godsister 00:11:29.979 --> 00:11:32.054 from a school in Broward County, Florida, 00:11:32.757 --> 00:11:35.113 running down the street with a friend of hers. 00:11:35.137 --> 00:11:39.236 They spotted an unlocked kid's bike and a scooter on a porch 00:11:39.260 --> 00:11:40.892 and foolishly jumped on it. 00:11:40.916 --> 00:11:43.515 As they were speeding off, a woman came out and said, 00:11:43.539 --> 00:11:45.744 "Hey! That's my kid's bike!" 00:11:45.768 --> 00:11:49.062 They dropped it, they walked away, but they were arrested. NOTE Paragraph 00:11:49.086 --> 00:11:52.723 She was wrong, she was foolish, but she was also just 18. 00:11:52.747 --> 00:11:55.291 She had a couple of juvenile misdemeanors.
00:11:55.808 --> 00:12:00.993 Meanwhile, that man had been arrested for shoplifting in Home Depot -- 00:12:01.017 --> 00:12:03.941 85 dollars' worth of stuff, a similar petty crime. 00:12:04.766 --> 00:12:09.325 But he had two prior armed robbery convictions. 00:12:09.955 --> 00:12:13.437 But the algorithm scored her as high risk, and not him. 00:12:14.746 --> 00:12:18.620 Two years later, ProPublica found that she had not reoffended. 00:12:18.644 --> 00:12:21.194 It was just hard for her to get a job with her record. 00:12:21.218 --> 00:12:23.294 He, on the other hand, did reoffend 00:12:23.318 --> 00:12:27.154 and is now serving an eight-year prison term for a later crime. 00:12:28.088 --> 00:12:31.457 Clearly, we need to audit our black boxes 00:12:31.481 --> 00:12:34.096 and not let them have this kind of unchecked power. NOTE Paragraph 00:12:34.120 --> 00:12:36.999 (Applause) NOTE Paragraph 00:12:38.087 --> 00:12:42.329 Audits are great and important, but they don't solve all our problems. 00:12:42.353 --> 00:12:45.101 Take Facebook's powerful news feed algorithm -- 00:12:45.125 --> 00:12:49.968 you know, the one that ranks everything and decides what to show you 00:12:49.992 --> 00:12:52.276 from all the friends and pages you follow. 00:12:52.898 --> 00:12:55.173 Should you be shown another baby picture? NOTE Paragraph 00:12:55.197 --> 00:12:56.393 (Laughter) NOTE Paragraph 00:12:56.417 --> 00:12:59.013 A sullen note from an acquaintance? 00:12:59.449 --> 00:13:01.305 An important but difficult news item? 00:13:01.329 --> 00:13:02.811 There's no right answer. 00:13:02.835 --> 00:13:05.494 Facebook optimizes for engagement on the site: 00:13:05.518 --> 00:13:06.933 likes, shares, comments. NOTE Paragraph 00:13:08.168 --> 00:13:10.864 In August of 2014, 00:13:10.888 --> 00:13:13.550 protests broke out in Ferguson, Missouri, 00:13:13.574 --> 00:13:17.991 after the killing of an African-American teenager by a white police officer, 00:13:18.015 --> 00:13:19.585 under murky circumstances. 00:13:19.974 --> 00:13:21.981 The news of the protests was all over 00:13:22.005 --> 00:13:24.690 my algorithmically unfiltered Twitter feed, 00:13:24.714 --> 00:13:26.664 but nowhere on my Facebook. 00:13:27.182 --> 00:13:28.916 Was it my Facebook friends? 00:13:28.940 --> 00:13:30.972 I disabled Facebook's algorithm, 00:13:31.472 --> 00:13:34.320 which is hard because Facebook keeps wanting to make you 00:13:34.344 --> 00:13:36.380 come under the algorithm's control, 00:13:36.404 --> 00:13:38.642 and saw that my friends were talking about it. 00:13:38.666 --> 00:13:41.175 It's just that the algorithm wasn't showing it to me. 00:13:41.199 --> 00:13:44.241 I researched this and found this was a widespread problem. NOTE Paragraph 00:13:44.265 --> 00:13:48.078 The story of Ferguson wasn't algorithm-friendly. 00:13:48.102 --> 00:13:49.273 It's not "likable." 00:13:49.297 --> 00:13:50.849 Who's going to click on "like"? 00:13:51.500 --> 00:13:53.706 It's not even easy to comment on. 00:13:53.730 --> 00:13:55.101 Without likes and comments, 00:13:55.125 --> 00:13:58.417 the algorithm was likely showing it to even fewer people, 00:13:58.441 --> 00:13:59.983 so we didn't get to see this. 00:14:00.946 --> 00:14:02.174 Instead, that week, 00:14:02.198 --> 00:14:04.496 Facebook's algorithm highlighted this, 00:14:04.520 --> 00:14:06.746 which is the ALS Ice Bucket Challenge. 00:14:06.770 --> 00:14:10.512 Worthy cause; dump ice water, donate to charity, fine.
00:14:10.536 --> 00:14:12.440 But it was super algorithm-friendly. 00:14:13.219 --> 00:14:15.832 The machine made this decision for us. 00:14:15.856 --> 00:14:19.353 A very important but difficult conversation 00:14:19.377 --> 00:14:20.932 might have been smothered, 00:14:20.956 --> 00:14:23.652 had Facebook been the only channel. NOTE Paragraph 00:14:24.117 --> 00:14:27.914 Now, finally, these systems can also be wrong 00:14:27.938 --> 00:14:30.674 in ways that don't resemble human systems. 00:14:30.698 --> 00:14:33.620 Do you guys remember Watson, IBM's machine-intelligence system 00:14:33.644 --> 00:14:36.772 that wiped the floor with human contestants on Jeopardy? 00:14:37.131 --> 00:14:38.559 It was a great player. 00:14:38.583 --> 00:14:42.152 But then, for Final Jeopardy, Watson was asked this question: 00:14:42.659 --> 00:14:45.591 "Its largest airport is named for a World War II hero, 00:14:45.615 --> 00:14:47.867 its second-largest for a World War II battle." NOTE Paragraph 00:14:47.891 --> 00:14:49.269 (Hums Final Jeopardy music) NOTE Paragraph 00:14:49.582 --> 00:14:50.764 Chicago. 00:14:50.788 --> 00:14:52.158 The two humans got it right. 00:14:52.697 --> 00:14:57.045 Watson, on the other hand, answered "Toronto" -- 00:14:57.069 --> 00:14:58.887 for a US city category! 00:14:59.596 --> 00:15:02.497 The impressive system also made an error 00:15:02.521 --> 00:15:06.172 that a human would never make, a second-grader wouldn't make. NOTE Paragraph 00:15:06.823 --> 00:15:09.932 Our machine intelligence can fail 00:15:09.956 --> 00:15:13.056 in ways that don't fit error patterns of humans, 00:15:13.080 --> 00:15:16.030 in ways we won't expect and be prepared for. 00:15:16.054 --> 00:15:19.692 It'd be lousy not to get a job one is qualified for, 00:15:19.716 --> 00:15:23.443 but it would triple suck if it was because of stack overflow 00:15:23.467 --> 00:15:24.899 in some subroutine. NOTE Paragraph 00:15:24.923 --> 00:15:26.502 (Laughter) NOTE Paragraph 00:15:26.526 --> 00:15:29.312 In May of 2010, 00:15:29.336 --> 00:15:33.380 a flash crash on Wall Street fueled by a feedback loop 00:15:33.404 --> 00:15:36.432 in Wall Street's "sell" algorithm 00:15:36.456 --> 00:15:40.640 wiped a trillion dollars of value in 36 minutes. 00:15:41.722 --> 00:15:43.909 I don't even want to think what "error" means 00:15:43.933 --> 00:15:47.522 in the context of lethal autonomous weapons. NOTE Paragraph 00:15:49.894 --> 00:15:53.684 So yes, humans have always had biases. 00:15:53.708 --> 00:15:55.884 Decision makers and gatekeepers, 00:15:55.908 --> 00:15:59.401 in courts, in news, in war ... 00:15:59.425 --> 00:16:02.463 they make mistakes; but that's exactly my point. 00:16:02.487 --> 00:16:06.008 We cannot escape these difficult questions. 00:16:06.596 --> 00:16:10.112 We cannot outsource our responsibilities to machines. NOTE Paragraph 00:16:10.676 --> 00:16:14.884 (Applause) NOTE Paragraph 00:16:17.089 --> 00:16:21.536 Artificial intelligence does not give us a "Get out of ethics free" card. NOTE Paragraph 00:16:22.742 --> 00:16:26.123 Data scientist Fred Benenson calls this math-washing. 00:16:26.147 --> 00:16:27.536 We need the opposite. 00:16:27.560 --> 00:16:32.948 We need to cultivate algorithm suspicion, scrutiny and investigation. 00:16:33.380 --> 00:16:36.578 We need to make sure we have algorithmic accountability, 00:16:36.602 --> 00:16:39.047 auditing and meaningful transparency.
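NOTE An illustrative sketch, not part of the talk: one small piece of what auditing a black-box score can look like, in the spirit of the ProPublica analysis mentioned earlier, assuming the auditor can obtain the system's "high risk" labels and the actual outcomes. All records below are invented.

# Hypothetical audit sketch: given a black box's "high risk" flags and what
# actually happened later, compare false positive rates across groups.
from collections import defaultdict

# Invented records: (group, flagged_high_risk_by_black_box, reoffended_later)
records = [
    ("group A", True,  False),
    ("group A", True,  False),
    ("group A", False, False),
    ("group A", True,  True),
    ("group B", False, True),
    ("group B", True,  True),
    ("group B", False, False),
    ("group B", False, False),
]

wrongly_flagged = defaultdict(int)   # labeled "future criminal" but did not reoffend
did_not_reoffend = defaultdict(int)

for group, flagged, reoffended in records:
    if not reoffended:
        did_not_reoffend[group] += 1
        if flagged:
            wrongly_flagged[group] += 1

# A large gap between groups in this rate is exactly the kind of disparity
# an audit is meant to surface.
for group in sorted(did_not_reoffend):
    rate = wrongly_flagged[group] / did_not_reoffend[group]
    print(f"{group}: false positive rate = {rate:.2f}")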
00:16:39.380 --> 00:16:42.614 We need to accept that bringing math and computation 00:16:42.638 --> 00:16:45.608 to messy, value-laden human affairs 00:16:45.632 --> 00:16:48.016 does not bring objectivity; 00:16:48.040 --> 00:16:51.673 rather, the complexity of human affairs invades the algorithms. 00:16:52.148 --> 00:16:55.635 Yes, we can and we should use computation 00:16:55.659 --> 00:16:57.673 to help us make better decisions. 00:16:57.697 --> 00:17:03.029 But we have to own up to our moral responsibility to judgment, 00:17:03.053 --> 00:17:05.871 and use algorithms within that framework, 00:17:05.895 --> 00:17:10.830 not as a means to abdicate and outsource our responsibilities 00:17:10.854 --> 00:17:13.308 to one another as human to human. NOTE Paragraph 00:17:13.807 --> 00:17:16.416 Machine intelligence is here. 00:17:16.440 --> 00:17:19.861 That means we must hold on ever tighter 00:17:19.885 --> 00:17:22.032 to human values and human ethics. NOTE Paragraph 00:17:22.056 --> 00:17:23.210 Thank you. NOTE Paragraph 00:17:23.234 --> 00:17:28.254 (Applause)