4 Questions That The January 6 Social Media Subpoenas Could Answer
What did platforms know, and when?
After months of negotiations, the January 6 Committee has decided to subpoena social media companies. On January 13, Chairman Bennie Thompson (D-Miss.) sent subpoenas to Meta, the parent company of Facebook; Alphabet, which owns YouTube; Twitter; and Reddit. Thompson said in a statement, "Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps—if any—social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence." He went on, "It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions."
I have four questions for the companies that I hope these subpoenas will answer:
In the lead-up to January 6, how much did algorithms promote the event?
In 2018, Facebook started prioritizing "meaningful social interaction" in its News Feed, a change that has had the effect of elevating divisive and angry content. The lead-up to January 6 was precisely the kind of divisive and angry event that would drive engagement -- likes, shares, and comments. YouTube's recommendation algorithm has also come under fire for continuing to promote misinformation. Even in its efforts to de-rank misinformation purveyors, its algorithm ended up boosting Fox News.
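To make that mechanic concrete, here is a minimal sketch of engagement-weighted ranking. The scoring function, the weights, and the example posts are illustrative assumptions, not Facebook's actual formula or data.

```python
# A toy engagement-ranking sketch. The Post fields, the weights, and the example
# numbers are illustrative assumptions -- not Facebook's actual formula or data.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares count more than likes, so content that provokes
    # argument and resharing -- often the most divisive content -- rises.
    return post.likes + 5 * post.comments + 10 * post.shares

feed = [
    Post("Local charity drive", likes=900, comments=30, shares=20),
    Post("Angry election-fraud rumor", likes=400, comments=350, shares=200),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(post.title, engagement_score(post))
# The rumor (4150) outranks the charity post (1250) despite having fewer likes.
```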
After Election Day, Facebook changed its algorithm to promote news from authoritative sources like The New York Times and CNN over partisan pages run by right-wing figures like Ben Shapiro and Dan Bongino. The result was a boost in traffic to mainstream sources. In December 2020, Facebook confirmed that it had turned off the change and reverted to its engagement-based ranking. Facebook's algorithm change worked -- and if it had been kept in place, it likely would have slowed the spread of "Stop the Steal" content. What was the decision-making process behind turning off this change, at a time when there was an active challenge to the results of the election?
Why did Facebook disband its Civic Integrity team after the November elections?
According to press reports and Facebook whistleblower Frances Haugen, Facebook disbanded its Civic Integrity team, which had been focused on election-related misinformation, and folded its responsibilities into a larger team. In front of Congress, Haugen said, "they told us, 'we're dissolving civic integrity.' Like they basically said, 'Oh good, we made it through the election. There wasn't riots. We can get rid of Civic Integrity now.' Fast forward a couple of months, we got the insurrection."
The company has objected to this characterization. Meta's Vice President of Integrity Guy Rosen said, "We did not disband Civic Integrity. We integrated it into a larger Central Integrity team so that the incredible work pioneered for elections could be applied even further, for example across health related issues."
What was the decision-making process behind disbanding the team? Did Facebook executives think that the danger had passed despite an ongoing challenge to the legitimacy of the election results?

What if "Stop the Steal" content had been treated as a movement, rather than as the actions of unconnected individuals?
In documents released by Frances Haugen, which were previously reported on by BuzzFeed News, employees noted that many "Stop the Steal" pages and groups did not violate company policy individually, but they would have if they were treated as part of a collective effort. An internal analysis read, "Because we were looking at each entity individually, rather than as a cohesive movement, we were only able to take down individual Groups and Pages once they exceeded a violation threshold." Only after the insurrection was the offending content more thoroughly removed.
Facebook has policies against "coordinated inauthentic behavior." (The Internet Research Agency in St. Petersburg, which created fake personas of American voters during the 2016 election, is an example.) But the analysis said that the company had "little policy around coordinated authentic harm" -- real people putting their names behind misinformation to rally others to reject the election results. The analysis noted that the speech of real people may have been individually protected, but at a network level it was causing harm. What is Facebook's response to its employees' postmortem analysis? Is it considering any changes?
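A toy sketch can make the distinction concrete. The Group names, violation counts, and both thresholds below are invented for illustration; they are not Facebook's real policy parameters.

```python
# A toy sketch of the gap the employees describe: reviewing Groups one at a time
# versus as one connected movement. The Group names, violation counts, and both
# thresholds are invented for illustration -- they are not real policy numbers.
ENTITY_THRESHOLD = 5    # assumed per-Group takedown threshold
NETWORK_THRESHOLD = 10  # hypothetical threshold for a movement taken as a whole

# Violation counts for three Groups assumed to be linked by shared admins,
# cross-posting, and coordinated invites.
violations = {"stop_the_steal_ohio": 4, "stop_the_steal_pa": 4, "audit_the_vote": 3}

# Entity-by-entity review: no single Group crosses the threshold, so none come down.
actionable_alone = [g for g, v in violations.items() if v >= ENTITY_THRESHOLD]
print(actionable_alone)  # []

# Movement-level review: the same behavior, summed across the network, does.
movement_total = sum(violations.values())
print(movement_total, movement_total >= NETWORK_THRESHOLD)  # 11 True
```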
How much did YouTube's "grace period" affect content?
After the votes had been counted and it was clear that Biden had won, on December 9, 2020, YouTube announced a stricter content moderation policy against election-related misinformation. The company explained, "yesterday was the safe harbor deadline for the U.S. Presidential election and enough states have certified their election results to determine a President-elect." Given that fact, the company said it would start removing content alleging widespread fraud or errors in vote-counting.
As with previous content moderation policies, YouTube gave itself a grace period for enforcement, running from the announcement until Inauguration Day. During that window, new content violating the policy would be removed, but without penalty to the channel. Once the grace period expired, a channel that received a strike for offending content was temporarily suspended, and three strikes within 90 days earned a ban. YouTube ended its grace period early, on January 7, after the insurrection. But what if it had started enforcement sooner? More channels would have been at risk of suspension or a ban. The January 6 Committee noted that Steve Bannon was streaming his podcast on YouTube in the lead-up to the day.
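For readers who want the mechanics spelled out, here is a rough sketch of that enforcement logic, assuming a simple strike counter. The January 7 cutoff, the 90-day window, and the three-strike ban mirror the reporting above, but the function itself and the example channels are simplified assumptions, not YouTube's actual system.

```python
# A rough sketch of the enforcement mechanics described above: a grace period
# during which violating videos are removed without penalty, then strikes that
# accumulate toward a ban. The function and the example channels are simplified
# assumptions -- not YouTube's actual system.
from datetime import date, timedelta

ENFORCEMENT_START = date(2021, 1, 7)   # grace period cut short after the attack
STRIKE_WINDOW = timedelta(days=90)
BAN_THRESHOLD = 3

def channel_status(violation_dates: list[date]) -> str:
    # Violations during the grace period: content removed, but no strike recorded.
    strikes = sorted(d for d in violation_dates if d >= ENFORCEMENT_START)
    if not strikes:
        return "no strikes"
    # Three strikes within any 90-day window means a ban.
    for i in range(len(strikes) - BAN_THRESHOLD + 1):
        if strikes[i + BAN_THRESHOLD - 1] - strikes[i] <= STRIKE_WINDOW:
            return "banned"
    return "temporarily suspended"

# A channel posting fraud claims daily in December accrues nothing; the same
# behavior after January 7 reaches a ban within days.
print(channel_status([date(2020, 12, d) for d in range(10, 20)]))  # no strikes
print(channel_status([date(2021, 1, d) for d in (7, 9, 11)]))      # banned
```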

***
Meta and Reddit did not respond to requests for comment about the subpoena. Alphabet said in a statement that it had been responding to the committee and was committed to working with Congress. Twitter declined to comment.
Twitter is the only company to acknowledge its role in the insurrection. At a congressional hearing in March, then-CEO Jack Dorsey answered in the affirmative when asked whether his company played a role. "Yes," he said. "But you also have to take into consideration the broader ecosystem. It’s not just about the technological systems that we use."
Even without social media, the insurrection would likely have happened. Plenty of the planning took place away from the major platforms -- on encrypted apps like Signal and Telegram, which have featured prominently in the indictments of rioters.
But these companies played a role. The January 6 event was advertised openly on platforms like Facebook, Twitter, and YouTube. The question is how much the platforms' engagement-based algorithms, and their failure to moderate content more aggressively beforehand, contributed to the scope and violence of the day. While members of both parties agree on reining in big tech, Democrats focus on regulating algorithms while Republicans want to limit platforms' ability to moderate content. Those views are hard to reconcile, which makes it unlikely that legislative tech regulation will pass in 2022.