Protecting Twitter users (sometimes from themselves)
-
0:01 - 0:02My job at Twitter
-
0:02 - 0:04is to ensure user trust,
-
0:04 - 0:07protect user rights and keep users safe,
-
0:07 - 0:08both from each other
-
0:08 - 0:12and, at times, from themselves.
-
0:12 - 0:17Let's talk about what scale looks like at Twitter.
-
0:17 - 0:19Back in January 2009,
-
0:19 - 0:23we saw more than two million new tweets each day
-
0:23 - 0:24on the platform.
-
0:24 - 0:30January 2014, more than 500 million.
-
0:30 - 0:33We were seeing two million tweets
-
0:33 - 0:35in less than six minutes.
-
0:35 - 0:42That's a 24,900-percent increase.
-
0:42 - 0:45Now, the vast majority of activity on Twitter
-
0:45 - 0:47puts no one in harm's way.
-
0:47 - 0:49There's no risk involved.
-
0:49 - 0:54My job is to root out and prevent activity that might.
-
0:54 - 0:56Sounds straightforward, right?
-
0:56 - 0:58You might even think it'd be easy,
-
0:58 - 1:00given that I just said the vast majority
-
1:00 - 1:04of activity on Twitter puts no one in harm's way.
-
1:04 - 1:06Why spend so much time
-
1:06 - 1:09searching for potential calamities
-
1:09 - 1:11in innocuous activities?
-
1:11 - 1:14Given the scale that Twitter is at,
-
1:14 - 1:17a one-in-a-million chance happens
-
1:17 - 1:22500 times a day.
-
1:22 - 1:23It's the same for other companies
-
1:23 - 1:24dealing at this sort of scale.
-
1:24 - 1:26For us, edge cases,
-
1:26 - 1:30those rare situations that are unlikely to occur,
-
1:30 - 1:32are more like norms.
-
1:32 - 1:36Say 99.999 percent of tweets
-
1:36 - 1:38pose no risk to anyone.
-
1:38 - 1:39There's no threat involved.
-
1:39 - 1:42Maybe people are documenting travel landmarks
-
1:42 - 1:44like Australia's Heart Reef,
-
1:44 - 1:47or tweeting about a concert they're attending,
-
1:47 - 1:52or sharing pictures of cute baby animals.
-
1:52 - 1:56After you take out that 99.999 percent,
-
1:56 - 2:00that tiny percentage of tweets remaining
-
2:00 - 2:02works out to roughly
-
2:02 - 2:06150,000 per month.
-
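[Editor's note: the arithmetic behind these figures is easy to verify. A minimal sketch, assuming roughly 500 million tweets per day and a 30-day month:]

```python
tweets_per_day = 500_000_000

# A one-in-a-million event, at this scale, happens daily:
one_in_a_million_per_day = tweets_per_day / 1_000_000
print(one_in_a_million_per_day)  # 500.0

# If 99.999 percent of tweets pose no risk, 0.001 percent remain:
risky_per_day = tweets_per_day * (1 - 0.99999)
risky_per_month = risky_per_day * 30  # assuming a 30-day month
print(round(risky_per_month))  # 150000

# Growth from 2 million to 500 million tweets per day:
percent_increase = (500_000_000 - 2_000_000) / 2_000_000 * 100
print(percent_increase)  # 24900.0
```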
2:06 - 2:08The sheer scale of what we're dealing with
-
2:08 - 2:11makes for a challenge.
-
2:11 - 2:12You know what else makes my role
-
2:12 - 2:15particularly challenging?
-
2:15 - 2:20People do weird things.
-
2:20 - 2:22(Laughter)
-
2:22 - 2:24And I have to figure out what they're doing,
-
2:24 - 2:26why, and whether or not there's risk involved,
-
2:26 - 2:29often without much in terms of context
-
2:29 - 2:30or background.
-
2:30 - 2:33I'm going to show you some examples
-
2:33 - 2:35that I've run into during my time at Twitter --
-
2:35 - 2:36these are all real examples --
-
2:36 - 2:39of situations that at first seemed cut and dried,
-
2:39 - 2:40but the truth of the matter was something
-
2:40 - 2:42altogether different.
-
2:42 - 2:44The details have been changed
-
2:44 - 2:45to protect the innocent
-
2:45 - 2:49and sometimes the guilty.
-
2:49 - 2:52We'll start off easy.
-
2:52 - 2:53["Yo bitch"]
-
2:53 - 2:57If you saw a tweet that only said this,
-
2:57 - 2:58you might think to yourself,
-
2:58 - 3:00"That looks like abuse."
-
3:00 - 3:03After all, why would you
want to receive the message,
-
3:03 - 3:05"Yo, bitch."
-
3:05 - 3:10Now, I try to stay relatively hip
-
3:10 - 3:12to the latest trends and memes,
-
3:12 - 3:15so I knew that "yo, bitch"
-
3:15 - 3:18was also often a common greeting between friends,
-
3:18 - 3:23as well as being a popular "Breaking Bad" reference.
-
3:23 - 3:25I will admit that I did not expect
-
3:25 - 3:28to encounter a fourth use case.
-
3:28 - 3:31It turns out it is also used on Twitter
-
3:31 - 3:34when people are role-playing as dogs.
-
3:34 - 3:39(Laughter)
-
3:39 - 3:41And in fact, in that case,
-
3:41 - 3:43it's not only not abusive,
-
3:43 - 3:46it's technically just an accurate greeting.
-
3:46 - 3:49(Laughter)
-
3:49 - 3:51So okay, determining whether or not
-
3:51 - 3:52something is abusive without context,
-
3:52 - 3:54definitely hard.
-
3:54 - 3:57Let's look at spam.
-
3:57 - 3:59Here's an example of an account engaged
-
3:59 - 4:00in classic spammer behavior,
-
4:00 - 4:02sending the exact same message
-
4:02 - 4:04to thousands of people.
-
4:04 - 4:07While this is a mockup I put
together using my account,
-
4:07 - 4:10we see accounts doing this all the time.
-
4:10 - 4:12Seems pretty straightforward.
-
4:12 - 4:14We should just automatically suspend accounts
-
4:14 - 4:17engaging in this kind of behavior.
-
4:17 - 4:20Turns out there's some exceptions to that rule.
-
4:20 - 4:23Turns out that that message
could also be a notification
-
4:23 - 4:27you signed up for that the International
Space Station is passing overhead
-
4:27 - 4:29because you wanted to go outside
-
4:29 - 4:31and see if you could see it.
-
4:31 - 4:32You're not going to get that chance
-
4:32 - 4:34if we mistakenly suspend the account
-
4:34 - 4:36thinking it's spam.
-
4:36 - 4:40Okay. Let's make the stakes higher.
-
4:40 - 4:41Back to my account,
-
4:41 - 4:45again exhibiting classic behavior.
-
4:45 - 4:48This time it's sending the same message and link.
-
4:48 - 4:50This is often indicative of
something called phishing,
-
4:50 - 4:54somebody trying to steal another
person's account information
-
4:54 - 4:56by directing them to another website.
-
4:56 - 5:00That's pretty clearly not a good thing.
-
5:00 - 5:02We want to, and do, suspend accounts
-
5:02 - 5:05engaging in that kind of behavior.
-
5:05 - 5:08So why are the stakes higher for this?
-
5:08 - 5:11Well, this could also be a bystander at a rally
-
5:11 - 5:13who managed to record a video
-
5:13 - 5:16of a police officer beating a non-violent protester
-
5:16 - 5:19who's trying to let the world know what's happening.
-
5:19 - 5:21We don't want to gamble
-
5:21 - 5:23on potentially silencing that crucial speech
-
5:23 - 5:26by classifying it as spam and suspending it.
-
5:26 - 5:29That means we evaluate hundreds of parameters
-
5:29 - 5:31when looking at account behaviors,
-
5:31 - 5:33and even then, we can still get it wrong
-
5:33 - 5:35and have to reevaluate.
-
5:35 - 5:39Now, given the sorts of challenges I'm up against,
-
5:39 - 5:41it's crucial that I not only predict
-
5:41 - 5:45but also design protections for the unexpected.
-
5:45 - 5:47And that's not just an issue for me,
-
5:47 - 5:49or for Twitter, it's an issue for you.
-
5:49 - 5:52It's an issue for anybody who's building or creating
-
5:52 - 5:54something that you think is going to be amazing
-
5:54 - 5:57and will let people do awesome things.
-
5:57 - 5:59So what do I do?
-
5:59 - 6:03I pause and I think,
-
6:03 - 6:05how could all of this
-
6:05 - 6:09go horribly wrong?
-
6:09 - 6:13I visualize catastrophe.
-
6:13 - 6:16And that's hard. There's a sort of
-
6:16 - 6:18inherent cognitive dissonance in doing that,
-
6:18 - 6:20like when you're writing your wedding vows
-
6:20 - 6:23at the same time as your prenuptial agreement.
-
6:23 - 6:25(Laughter)
-
6:25 - 6:27But you still have to do it,
-
6:27 - 6:31particularly if you're marrying
500 million tweets per day.
-
6:31 - 6:34What do I mean by "visualize catastrophe"?
-
6:34 - 6:37I try to think of how something as
-
6:37 - 6:40benign and innocuous as a picture of a cat
-
6:40 - 6:42could lead to death,
-
6:42 - 6:44and what to do to prevent that.
-
6:44 - 6:46Which happens to be my next example.
-
6:46 - 6:49This is my cat, Eli.
-
6:49 - 6:51We wanted to give users the ability
-
6:51 - 6:53to add photos to their tweets.
-
6:53 - 6:55A picture is worth a thousand words.
-
6:55 - 6:57You only get 140 characters.
-
6:57 - 6:58You add a photo to your tweet,
-
6:58 - 7:01look at how much more content you've got now.
-
7:01 - 7:03There's all sorts of great things you can do
-
7:03 - 7:05by adding a photo to a tweet.
-
7:05 - 7:07My job isn't to think of those.
-
7:07 - 7:10It's to think of what could go wrong.
-
7:10 - 7:12How could this picture
-
7:12 - 7:15lead to my death?
-
7:15 - 7:19Well, here's one possibility.
-
7:19 - 7:22There's more in that picture than just a cat.
-
7:22 - 7:24There's geodata.
-
7:24 - 7:26When you take a picture with your smartphone
-
7:26 - 7:27or digital camera,
-
7:27 - 7:29there's a lot of additional information
-
saved along with that image.
-
7:31 - 7:32In fact, this image also contains
-
7:32 - 7:34the equivalent of this,
-
7:34 - 7:37more specifically, this.
-
7:37 - 7:39Sure, it's not likely that someone's going to try
-
7:39 - 7:42to track me down and do me harm
-
7:42 - 7:43based upon image data associated
-
7:43 - 7:45with a picture I took of my cat,
-
7:45 - 7:49but I start by assuming the worst will happen.
-
7:49 - 7:51That's why, when we launched photos on Twitter,
-
7:51 - 7:55we made the decision to strip that geodata out.
-
7:55 - 8:01(Applause)
-
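[Editor's note: that stripping step can be illustrated with a small sketch. This is not Twitter's actual pipeline — just a stdlib-only illustration of removing the APP1 (Exif) segment, which is where cameras store metadata such as GPS coordinates, from a JPEG byte stream:]

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 (Exif) segments from a JPEG byte stream.

    Cameras write metadata, including GPS coordinates, into the
    APP1 segment. Walking the marker segments and omitting APP1
    removes that geodata without touching the image data itself.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            out += jpeg_bytes[i:]  # unexpected data; copy verbatim
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded scan data follows
            out += jpeg_bytes[i:]
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        segment = jpeg_bytes[i : i + 2 + length]
        # Keep every segment except APP1 blocks tagged "Exif".
        if not (marker == 0xE1 and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

[In practice a service would more likely re-encode the image, which discards all metadata segments at once; libraries such as Pillow can also save a JPEG without its Exif data.]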
8:01 - 8:04If I start by assuming the worst
-
8:04 - 8:05and work backwards,
-
8:05 - 8:07I can make sure that the protections we build
-
8:07 - 8:09work for both expected
-
8:09 - 8:11and unexpected use cases.
-
8:11 - 8:14Given that I spend my days and nights
-
8:14 - 8:16imagining the worst that could happen,
-
8:16 - 8:21it wouldn't be surprising if
my worldview was gloomy.
-
8:21 - 8:22(Laughter)
-
8:22 - 8:24It's not.
-
8:24 - 8:28The vast majority of interactions I see --
-
8:28 - 8:32and I see a lot, believe me -- are positive,
-
8:32 - 8:34people reaching out to help
-
8:34 - 8:37or to connect or share information with each other.
-
8:37 - 8:40It's just that for those of us dealing with scale,
-
8:40 - 8:44for those of us tasked with keeping people safe,
-
8:44 - 8:47we have to assume the worst will happen,
-
8:47 - 8:51because for us, a one-in-a-million chance
-
8:51 - 8:54is pretty good odds.
-
8:54 - 8:56Thank you.
-
8:56 - 9:00(Applause)
-
Title: Protecting Twitter users (sometimes from themselves)
Speaker: Del Harvey
Description: Del Harvey heads up Twitter’s Trust and Safety Team, and she thinks all day about how to prevent worst-case scenarios — abuse, trolling, stalking — while giving voice to people around the globe. With deadpan humor, she offers a window into how she works to keep 240 million users safe.
Video Language: English
Team: closed TED
Project: TEDTalks
Duration: 09:19