Public Sphere Interview: Sheera Frenkel on Facebook's "Ugly Truth"

In a new book, two New York Times reporters go deep on the last five years at the company as it has massively grown and been at the center of domestic and global political crises

Sheera Frenkel is a New York Times technology reporter based in San Francisco. Along with Times technology reporter Cecilia Kang, she is the author of a new book, An Ugly Truth: Inside Facebook's Battle for Domination. The book covers the last five years of Facebook, as it was roiled by crises including misinformation, conspiracy theories, the inflammatory rhetoric of Donald Trump, and Russian election interference. Frenkel and Kang drew on extensive sources inside the company, lawmakers, regulators, academics, and researchers around the globe, conducting over 400 interviews. Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg ultimately did not respond to requests from the authors for on-the-record interviews.

Frenkel and Kang write extensively about Facebook's role in domestic and global politics, which is starkly different from several previous books that primarily covered the company's rise as a social-networking startup. I spoke with Frenkel last week over Google Meet. Our conversation is condensed and edited for clarity.

Luke Johnson: Let's start with the title, "An Ugly Truth." Can you tell readers what that truth is?

Sheera Frenkel: The title of the book comes from Andrew Bosworth, known as Boz, one of Facebook's executives. It's a memo that he lays out to his own employees, which says the cost of growth is some of these harms that we've wrought upon the world, that in order to grow as fast as we can, in order to connect to people, we have to accept a certain amount of damage being done, whether that's a terrorist using our platform in a way that isn't intended, or a person being bullied, that those ugly truths are something we have to accept. And we thought that was really such a powerful idea and memo, and spoke to so many of the problems at Facebook.

LJ: Let's talk about Facebook's newsworthiness standard, created during the 2016 Trump campaign, which allows the company to leave up speech that might otherwise violate its community standards if it is "newsworthy, significant, or important to the public interest." Can you tell me what you discovered in your reporting about the creation of that standard?

SF: I think the newsworthiness standard has been one of the most consequential and impactful decisions made at Facebook. As a reporter, I wanted to go into it and figure out which academics, which research, which books they consulted in making this really monumental decision. What we found is they didn't. This was a decision that flowed downstream from one single moment: when Donald Trump ran for president of the United States. He made this announcement in December 2015 that he wanted to ban Muslims from the United States. He posted a video of that to his Facebook page. Mark Zuckerberg's own employees raised this and said, "Wait, does this violate our rules? Isn't this hate speech?"

Mark Zuckerberg makes a one-off decision that we have to create a carve-out for politicians, because they're running for office, and it's important for the public to know their views. Somebody close to him said Mark's thinking was, people will see this and think, well, that's awful -- we shouldn't ban all Muslims from the United States. People will use their own judgment on that post. This came from a person close to him -- Mark wasn't really thinking about how Facebook's own algorithms were going to promote that kind of post. It's so sensational. It's so emotive. Whether you agree with it or disagree with it, it's going to inspire an emotion in you. That's what Facebook's algorithms are so good at doing. You make that one-off decision, and then you end up down the line with this newsworthiness carve-out, which is going to favor politicians who make the most sensational statements possible.

LJ: One of the chapters in the book is about the mass murder of the Rohingya in Myanmar, and the company, again, didn't seem to know that there was all this misinformation on its platform. There was one Burmese speaker [in the company], and there are all sorts of languages spoken in Burma. What do you think that says about the company's stance on moderating content? Have they gotten any better?

SF: That chapter was one of the most personally powerful because I reported in Myanmar in 2015. I was one of the reporters that was sending Facebook emails saying, what is going on here? The hate speech is out of control. And everyone here says you're not doing anything about it. I remember receiving emails back from them saying you're blowing this out of proportion. 

When I started to do the reporting for that chapter, and I spoke to so many academics and human rights activists who were hearing the same thing as me, it felt like this overwhelming body of evidence that Facebook was getting the warnings. They were being told over and over again, something bad is happening here. It's out of control. If you don't do something, people can get hurt. It [Facebook] just didn't dedicate the time or the resources to get ahead of it. As you said, they had a single Burmese speaker who wasn't even based in the country, he was based in Dublin, moderating content for a country where over 100 languages are spoken. If you're an executive and you're told that statistic, surely there must be a moment where you pause and say, we're not doing enough. I'll note that they eventually end up hiring two more people. Ultimately, when the massacres start to happen, they have three Burmese speakers. But I think objectively people will say, how can that possibly be enough?

LJ: Another chapter is devoted to the [2016] Russian interference in the U.S. election. And again, it seems like that the [Facebook] leadership was caught flat-footed, and largely the U.S. government was as well. Can you talk about your reporting on the security team, and the individuals who tried to sound the alarm?

SF: The Facebook security team -- I just want to stop for a minute and give them credit. Ultimately, these were experts, a lot of them came from intelligence agencies, and they were among the few who knew what to look for and could sound the alarm. They were looking for Russian election interference a year before the elections, because they knew it was a problem. And they knew what Russia was doing in other countries. The problem was that they were looking for the traditional types of hacks Russia had carried out in Eastern Europe, and which it did carry out in the United States, hacking the Hillary Clinton campaign. They weren't looking for the kind of I.R.A. [Internet Research Agency] activity that they ultimately ended up finding almost a year after the elections. It was a failure of imagination, and a failure of their own executives to pass along the warnings they were receiving.

We found out that in 2014, a group of European lawmakers -- MPs from Estonia, Ukraine, Hungary -- came to Facebook's offices and said, we're having a huge problem on Facebook with trolls putting fake news on our accounts, attacking our accounts; we think these are Russian trolls. We need you to do something about it, because if you don't, it's only a matter of time before they do it in the United States. They meet with people on Facebook's policy team, and nothing is done. The security team didn't even know that group of MPs was in the building making this warning.

LJ: You have a whole chapter devoted to how Facebook catches leakers called "The Rat Catcher," which is a very apt title. To the extent that you are able to, can you talk about your reporting process for the book?

SF: Because we're so aware of how aggressively Facebook tries to catch leakers, we were very, very careful. For our original article, "Delay, Deny, Deflect," which came out in 2018, we already had a lot of Facebook sources. After that article was published, we were getting inundated with messages from current and former Facebook employees who kept saying to us, this is the tip of the iceberg. That's one reason we really wanted to write this book. 

There are certain security precautions we take. I don't put the phone numbers of my sources into my phone; I don't ever even enter their names into my phone. I have other ways that I communicate with them, some very old school, others using technology that is a little bit safer. When I meet them, I don't bring my cell phone, I don't bring my laptop, I just bring a notebook and a piece of paper. For Cecilia [Kang] and me, keeping our sources safe was the most important thing, making sure they didn't get fired. I'll note that to this day, no one at Facebook who has been a source of mine has been fired, either as a result of my original article or of the book.

One thing that's been funny for me as a reporter is that Facebook constantly tells its employees not to talk to us. I feel like every time they put that out, someone gets in touch with me out of anger. I think these companies have really struggled with leakers and their own employees who are very keen to get the real version out there, what they see as mistakes by the managers and executives of the company.

LJ: Now onto 2020. Can you talk about how Facebook finally drew the line on QAnon? How effective has that [ban] been? Do you think that was a function of the fact that Biden was likely to win the election?

SF: I'm going to be generous here for a moment. I think Facebook knew it was a growing problem. Their own security teams were terrified about what they were seeing with QAnon. I've interviewed people who have said it was frightening. It was scary to them as people who could see the content, just how many Americans were falling prey to far-right conspiracies, which were really damaging. I think they also politically read the writing on the wall in terms of Biden's likely win, and it was a safe time for them to make those kinds of changes.

One thing that says a lot about them as a company: even when they decide that it's time to take real action on QAnon, and their own security teams present them with this list and say, here are all these QAnon pages, they're really dangerous, let's take them down, the company's policy team says, wait, hold on, it's going to look really one-sided if we just take down QAnon, because QAnon is seen as a right-wing movement; let's stop for a minute and find a left-wing equivalent and take that down as well. The security team is baffled when they hear this. They scramble and eventually come up with Antifa. It's not really the same as QAnon, but it's the only other one they can think of that people know and that is scary to people. And then they have to spend weeks looking for Antifa groups they can remove. It's all about public perception. It's all about looking even, fair, and neutral. I think Facebook's own security team would say it's a false equivalence in terms of groups that were actually damaging to the public.

LJ: I underlined this passage because I thought it was shocking, about how Facebook engineers tried to run algorithmic experiments showing users accurate content versus more divisive stories. "The bottom line was that we couldn't hurt our bottom line," you quoted a Facebook data scientist who worked on making Facebook less divisive as saying. "Mark still wanted people using Facebook as much as possible, as often as possible." Is it really true that Facebook cannot be profitable without misinformation?

SF: I don't think it's that they can't be profitable, because they've shown immense growth. The question is how profitable. I think the problem Facebook has is that it has shown such astronomical growth year after year that the moment they start to slow down even a tiny bit, it's read with panic by some of their investors. That is something their own executives have decided they can't risk.

Removing all misinformation would be very, very difficult at this point, because their policies don't support it. But if they started to become very aggressive, let's say in showing people more accurate sources of information, their own experiments have found that's kind of boring. Maybe you see a bunch of New York Times and Wall Street Journal and Washington Post and other articles at the top of your News Feed. You may not engage with it as much; you might log out and go do something else. Maybe you go to TikTok, maybe you go to Twitter. That is really dangerous for Facebook.

LJ: The book ends with Facebook banning Trump after January 6. Can you tell me what that decision looked like from the inside, and also how you see them wrestling with the big question of 2023, which is when he could possibly be allowed back?

SF: January 6 rolls around and Facebook knows this is going to be an important day, and they're watching Donald Trump's account, but they're also watching his speeches on television. They're watching his Twitter feed. They're looking at him really holistically and saying, this is it, right? At this point, people are in Washington, they're storming the Capitol. They seem to make an internal decision that if he goes on to encourage them, or doesn't back down from this idea that the election was stolen from him, that's the line.

The [Oversight] Board asks, how does that make sense? You drew this line as the last thing you were no longer going to tolerate? It could have been drawn a year earlier, when he said that disinfectants and UV lights could be used to treat COVID, which also violated Facebook's rules against spreading misinformation about COVID.

Facebook just chose that moment in time to say, this is where we're drawing the line; we will not allow Trump to continue on the platform. But it's fairly arbitrary. And that's something they're going to have to explain going forward, because when 2023 rolls around, they're going to have to say, we've again looked at his speeches, we've looked at every other medium in which he still communicates, and we're going to make a call on whether or not we allow him back on the platform. It's going to happen right in time for a potential second run for office. I think it's a very precarious time for Facebook.

LJ: One of the big themes of the book is whether Facebook can fix itself. I'm curious where you come down on that question.

SF: I think that if Facebook wanted to, it could make serious, significant changes that would lead to it fixing itself, but I don't see any moves toward that happening. I think one of the biggest problems is that Mark Zuckerberg sits at the helm of a company where he cannot be controlled or influenced in a significant way. The board is there to advise him; executives are there to advise him.

I would make an argument that if you look at historical examples, it's not healthy for people to have that kind of unchecked power. It's good for human beings to be vulnerable, to be forced to listen to other opinions and ideas, to know that they can be fired if they're not doing a good job and if they've made mistakes. He has put himself in a position where those things can't happen, and I think that's quite dangerous. I don't see change happening anytime soon.
