
Civil society groups press platforms to step up election integrity work

A coalition of civil society groups wants social media companies to re-invest in human content moderation and commit to holding bad actors accountable.
In this photo illustration, social media apps are displayed on an iPad on February 26, 2024, in Miami, Florida. (Photo illustration by Joe Raedle/Getty Images)

Over 200 nonprofits, online research organizations and civil society groups are calling on social media platforms to step up their content moderation efforts ahead of elections in the United States and dozens of other countries this year.

In an open letter sent to executives at 12 major tech and social media companies, the groups chided the platforms over their lack of investment in personnel and resources to combat election-related disinformation and influence operations.

“Research illustrates that individual users can have an outsized impact on online discourse, which results in real-world harms, such as the rise of extremism and violent attempts to overthrow democratic governments,” the signatories wrote. “And yet, many of the largest social-media companies have reduced, not reinforced, interventions necessary to keep online platforms safe and healthy.”

The coalition is calling on platforms to commit to a number of actions this year, including publishing regular transparency reports on election-related efforts, boosting non-English-speaking trust and safety staffing, reinstating moderation policies that curtail false or unsubstantiated claims of 2020 election fraud, enhancing disclosures around AI-generated political content and advertising, and committing to hold large influencers accountable for intentionally spreading false information about elections.


The letter was addressed to the CEOs of Google, Meta, X, TikTok, Instagram, Snapchat, Twitch, YouTube, Pinterest, Reddit, Rumble and Discord.

While many of these platforms have been engaged in content moderation for multiple election cycles, the message comes as social media platforms have laid off tens of thousands of employees over the past two years, including many who worked on content moderation and disinformation.

Written responses to questions posed by the House Judiciary Committee in March indicate that X (formerly Twitter) has cut more than 1,000 trust and safety positions since 2022. X CEO Linda Yaccarino said the company is building a new trust and safety center of excellence in Austin, Texas, to consolidate its content moderation efforts and reduce the company’s reliance on contractors. She also said the company plans to hire an additional 100 employees to staff the center.

Meta, which underwent four rounds of layoffs between 2022 and 2023, told the committee that it still monitors and investigates networks of coordinated inauthentic behavior on its platforms. For election protection and voter interference, however, Meta relies on federal, state and local authorities and other “officials responsible for election protection” to flag potential incidents or posts before it takes action.

“When they identified potential voter interference or other violations of our policies, we investigated and took action if warranted, and we have established strong channels of communication to respond to any election-related threats,” Meta wrote.


Last month, Meta announced that on Aug. 14 it would be sunsetting CrowdTangle, a tool widely used by online researchers to track disinformation on the platform. Researchers have urged the platform to continue supporting the tool through at least January 2025.

Responses to the committee from other platforms, such as Discord and Snap, also revealed substantial cuts to trust and safety teams. The groups behind the letter are also concerned that both domestic and foreign actors will increasingly target marginalized populations and exploit the dearth of non-English-speaking employees dedicated to content moderation by crafting disinformation campaigns in other languages.

“Social-media platforms have profited enormously by privatizing and monetizing the means by which we discover, debate and decide as a society, and they have a moral responsibility to ensure those information flows aren’t toxified by foreign state actors, hate preachers, bot networks and sly propagandists,” Imran Ahmed, CEO of the Center for Countering Digital Hate, said in a statement.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.