Twitter purges QAnon accounts; Facebook targets ‘Stop the Steal’

Both companies say the moves are intended to limit the incitement of further violence ahead of Inauguration Day.
Supporters of President Donald Trump outside the U.S. Capitol Building on Jan. 6. A new Graphika and Stanford report documents how Russian disinformation has thrived on alt-right sites. (Elvert Barnes / Flickr)

The latest moves by Twitter and Facebook against misinformation look a lot like efforts from 2020.

One platform suspended tens of thousands of accounts that were amplifying the QAnon conspiracy theory; the other is removing content suggesting that the election was rigged.

Twitter said Tuesday it removed more than 70,000 accounts for “sharing harmful QAnon-associated content at scale,” one day after Facebook said it is targeting the phrase “stop the steal” — a favorite hashtag of President Donald Trump’s supporters — in the interest of stopping “misinformation and content that could incite further violence.”

Both moves come as the social media giants have blocked Trump in the aftermath of the riot by right-wing extremists at the U.S. Capitol on Jan. 6, which occurred as Congress was preparing to certify Joe Biden’s victory over the incumbent president in November’s election. Twitter, Facebook and other Silicon Valley companies say incitements to violence from Trump and prominent supporters have violated their terms of service. Social media platform Parler, one of the apps that extremists used to communicate, was booted from Amazon Web Services for similar reasons.


With Biden’s inauguration looming next week, the Capitol Building and surrounding grounds are now guarded by thousands of National Guard troops. The FBI has warned that violent protests could happen at all 50 state capitols.

The struggle to contain QAnon content and other election-related misinformation dates back before 2020, as Twitter and Facebook faced intensified calls to tighten their policing of content after the 2016 elections. By late summer of 2020, both companies were touting progress. Facebook said it had more than doubled its removal of posts that violated its hate speech policies, and Twitter announced that it would clearly label election misinformation.

Twitter said its latest action involved “many instances of a single individual operating numerous accounts,” but it did not provide specific examples. That kind of amplification was made famous in 2016, as Russia-backed operations used bot accounts and other tools to sow division online. In 2020, Twitter identified similar behavior from U.S. adversaries, including Iran.

Facebook’s announcement comes as the company’s leadership is trying to downplay the idea that the platform was a place where Jan. 6 rioters planned some of their activities.

“We again took down QAnon, Proud Boys, Stop the Steal, anything that was talking about possible violence last week,” COO Sheryl Sandberg said in an interview with Reuters. “Our enforcement is never perfect, so I’m sure there were still things on Facebook. I think these events were largely organized on platforms that don’t have our abilities to stop hate, don’t have our standards and don’t have our transparency.”


Twitter, meanwhile, responded directly to backlash from prominent Trump supporters who said the company was deliberately decreasing their follower counts. The purge of QAnon accounts had a direct effect on those numbers, Twitter said.

“Our updated enforcement on QAnon content on Twitter, along with routine spam challenges, has resulted in changes in follower count for some people’s Twitter accounts,” the announcement said. “In some cases, these actions may have resulted in follower count changes in the thousands.” 
