So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.

Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, "Can he tell if I'm lying?"

There was nobody else in the room.

"Can who tell if you're lying? And why are we whispering?"

The manager pointed at the computer in the room. "Can he tell if I'm lying?"

Well, that manager was having an affair with the receptionist.

(Laughter)

And I was still a teenager. So I whisper-shouted back to him, "Yes, the computer can tell if you're lying."

(Laughter)

Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.

I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics.

So I picked computers.

(Laughter)

Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.

Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.
We're asking questions like, "Who should the company hire?" "Which update from which friend should you be shown?" "Which convict is more likely to reoffend?" "Which news item or movie should be recommended to people?"

Look, yes, we've been using computers for a while, but this is different. This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.

To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.

Much of this progress comes from a method called "machine learning." Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: "This one is probably more like what you're looking for."

Now, the upside is: this method is really powerful. The head of Google's AI systems called it "the unreasonable effectiveness of data." The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control.
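To make that contrast concrete, here is a minimal sketch in Python, assuming scikit-learn and a toy spam-filtering task with made-up example messages: the first function is traditional programming, detailed and exact, while the learned model just churns through labeled examples and hands back a probability rather than a single hard answer.

```python
# A toy contrast: explicit instructions vs. a learned, probabilistic model.
# Assumes scikit-learn is installed; the tiny dataset is illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Traditional programming: detailed, exact, painstaking instructions.
def rule_based_is_spam(message: str) -> bool:
    banned = {"winner", "free", "prize"}
    return any(word in message.lower() for word in banned)

# Machine learning: feed the system labeled data and let it learn.
messages = [
    "You are a winner, claim your free prize now",
    "Free prize waiting, click here",
    "Lunch at noon tomorrow?",
    "Here are the meeting notes from today",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (hypothetical labels)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)

# The output is probabilistic: "this one is probably more like spam."
new_message = "Claim your free ticket"
print(rule_based_is_spam(new_message))            # True or False, by explicit rule
print(model.predict_proba([new_message])[0][1])   # learned probability of spam
```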
So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.

So, consider a hiring algorithm -- a system used to hire people, built with machine learning. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good.

I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.

And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, "Zeynep, let's go to lunch!" I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went.

I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.

So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings.
They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.

I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great!

But now put this in the context of hiring.

So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, "Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women who are more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?"

You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled "higher risk of depression," "higher risk of pregnancy," "aggressive guy scale." Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.

"What safeguards," I asked, "do you have to make sure that your black box isn't doing something shady?"

She looked at me as if I had just stepped on 10 puppy tails.

(Laughter)

She stared at me and she said, "I don't want to hear another word about this." And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.

(Laughter)
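To see why her question is so hard to answer, it helps to look at what such a system even exposes. Here is a minimal sketch, assuming a hypothetical resume screener built with scikit-learn on made-up records: all you can inspect afterward is a set of learned weights over word features, with no variable named "risk of depression" anywhere in sight.

```python
# A minimal sketch of why the black box is hard to interrogate.
# The resumes, labels and model choice are hypothetical, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

resumes = [
    "led migration of billing service, mentored two junior engineers",
    "shipped mobile features, hackathon winner, on-call rotation",
    "maintained internal tools and documentation, part-time availability",
    "steady contributor, organized team events, prefers remote work",
]
was_high_performer = [1, 1, 0, 0]  # past hiring outcomes (hypothetical)

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(resumes, was_high_performer)

# What is there to inspect? Only word-level weights the model happened to learn.
vocab = screener.named_steps["tfidfvectorizer"].get_feature_names_out()
weights = screener.named_steps["logisticregression"].coef_[0]
for word, weight in sorted(zip(vocab, weights), key=lambda pair: pair[1]):
    print(f"{word:15s} {weight:+.3f}")
# No field here is called "risk of depression" or "likely pregnancy" --
# if the model keys on a proxy for those, nothing in this output will say so.
```

And with a deep model instead of this toy, even these word weights disappear into layers of parameters.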
Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making over to machines we don't totally understand?

Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, "We're just doing objective, neutral computation."

Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms, which researchers sometimes uncover and sometimes we never know about, can have life-altering consequences.

In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and that it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.
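The core comparison in that kind of audit fits in a few lines. Here is a minimal sketch with hypothetical records standing in for ProPublica's actual data: among people who did not go on to reoffend, how often did each group get labeled high risk?

```python
# Sketch of a false-positive-rate comparison by group, the kind of disparity
# an audit can surface. The records below are hypothetical, not ProPublica's data.
from collections import defaultdict

# Each record: (group, labeled_high_risk, reoffended_within_two_years)
records = [
    ("group_a", True,  False), ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_b", False, False), ("group_b", False, False),
    ("group_b", True,  False), ("group_b", True,  True),
]

counts = defaultdict(lambda: {"flagged": 0, "did_not_reoffend": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                         # people who did NOT reoffend...
        counts[group]["did_not_reoffend"] += 1
        if high_risk:                          # ...but were labeled future criminals
            counts[group]["flagged"] += 1

for group, c in counts.items():
    rate = c["flagged"] / c["did_not_reoffend"]
    print(f"{group}: false positive rate = {rate:.2f}")
```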
So, consider this case: a woman was late picking up her godsister from a school in Broward County, Florida. Running down the street with a friend of hers, they spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, "Hey! That's my kid's bike!" They dropped it, they walked away, but they were arrested. She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors.

Meanwhile, a man had been arrested for shoplifting at Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. Yet the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended. It was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime.

Clearly, we need to audit our black boxes and not let them have this kind of unchecked power.

(Applause)

Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture?

(Laughter)

A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.

In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.

The story of Ferguson wasn't algorithm-friendly. It's not "likable." Who's going to click on "like"? It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this.
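The dynamic can be sketched with a deliberately simplified engagement score. This is not Facebook's actual algorithm, whose features and weights are not public; the posts, numbers and weights below are assumptions for illustration, but they show how a story people hesitate to like or comment on sinks in a ranking that optimizes for engagement.

```python
# A deliberately simplified engagement-ranking sketch; every number here
# is hypothetical, and the real news feed model is far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Optimize for engagement on the site: likes, shares, comments.
    return 1.0 * post.likes + 2.0 * post.shares + 1.5 * post.comments

feed = [
    Post("Ice bucket challenge video", likes=900, shares=300, comments=240),
    Post("Difficult news item about Ferguson", likes=40, shares=25, comments=15),
    Post("Another baby picture", likes=350, shares=10, comments=60),
]

# Items that draw few likes or comments end up shown to fewer people.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```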
Instead, that week, Facebook's algorithm highlighted the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.

Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: "Its largest airport is named for a World War II hero, its second-largest for a World War II battle."

(Hums Final Jeopardy music)

Chicago. The two humans got it right. Watson, on the other hand, answered "Toronto" -- for a US city category! The impressive system also made an error that a human would never make, that a second-grader wouldn't make.

Our machine intelligence can fail in ways that don't fit the error patterns of humans, in ways we won't expect or be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of a stack overflow in some subroutine.

(Laughter)

In May of 2010, a flash crash on Wall Street, fueled by a feedback loop in Wall Street's "sell" algorithm, wiped out a trillion dollars of value in 36 minutes. I don't even want to think what "error" means in the context of lethal autonomous weapons.

So yes, humans have always had biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines.
(Applause)

Artificial intelligence does not give us a "Get out of ethics free" card. Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency.

We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.

Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics.

Thank you.

(Applause)