
Senators slam social media companies for failure to keep disinformation from going viral

Tech executives say they are working hard to fight disinformation, but lawmakers and critics say they simply aren't doing enough.
From left, Chris Cox, chief product officer for Meta, Neal Mohan, chief product officer for YouTube, Vanessa Pappas, chief operating officer for TikTok, and Jay Sullivan, general manager of Bluebird at Twitter, are sworn in during a US Senate Homeland Security hearing regarding social media's impact on homeland security and disinformation on September 14, 2022. The executives are under fire for the vast amount of disinformation on their platforms, but they say that if the Supreme Court upholds Texas and Florida laws seeking to bar them from curating content, the problem will grow much worse. (Photo by STEFANI REYNOLDS/AFP via Getty Images)

Six current and former social media executives appeared at Senate hearings Wednesday focused on disinformation, with some facing blistering attacks from lawmakers and former colleagues who alleged that their companies allow the spread of untrue, divisive and extremist content because it is profitable.

Sen. Gary Peters, D-Mich., told executives from Meta, YouTube, TikTok and Twitter that by pushing “the most engaging posts to more users, they end up amplifying extremist, dangerous, and radicalizing content. This includes QAnon, Stop the Steal, and other conspiracy theories, as well as white supremacist and Anti-Semitic rhetoric.”

The daylong grilling and critique of the platforms took place over two Senate Homeland Security Committee hearings and also included ex-social media executives turned critics.

The hearings come at a pivotal moment for the companies, whose content dissemination practices have come under increasing fire in the wake of Facebook whistleblower Frances Haugen’s blockbuster revelations last September about how the company often chose to let disinformation spread — including around Donald Trump’s claims of election fraud — rather than rein it in and sacrifice growth.


On Tuesday, Twitter’s former head of security, Peiter “Mudge” Zatko, testified before the Senate Judiciary Committee about a whistleblower complaint he filed last month alleging the company deceived regulators, consumers and board members about its security practices.

Former insiders reveal why disinformation flourishes

The former Twitter and Facebook executives appearing Wednesday offered an unvarnished picture of how the companies’ engineers are incentivized to disseminate content that will engage users even if it could be dangerous.

“Regulators must understand these companies’ incentives, culture, and internal processes to fully appreciate how resistant they will be to changing the status quo that has been so lucrative for them,” said Alex Roetter, the former head of engineering at Twitter, who argued that more government regulation is urgently needed.

Roetter said Twitter engineering teams use an experimental system to test ways to get the most engagement from users. “This system logs a slew of data for every live experiment,” he testified. “Teams use this data to show per-experiment effects on various user and revenue metrics. Noticeably absent were any values tracking impacts on trust and safety metrics.”


The same unyielding drive for user engagement is pervasive at Facebook, according to Brian Boland, who left the company in 2020, after 11 years, as a vice president for product engineering, marketing, strategic operations and analytics.

Boland said that after acquiring CrowdTangle, a company that provides what he called “industry leading transparency” into Facebook’s public newsfeed content, parent company Meta “attempted to delegitimize the CrowdTangle-generated data” after it showed the platform was fueling political and racial divisions in the summer of 2020.

“What finally convinced me that it was time to leave was that despite growing evidence that the newsfeed may be causing harm globally, the focus on and investments in safety remained small and siloed,” Boland said. “Rather than address the serious issues raised by its own research, Meta leadership chooses growing the company over keeping more people safe.”

He also pointed out that Facebook disbanded its so-called Responsible Innovation team last week.


Boland offered a bleak vision of the future if legislation isn’t enacted, saying that as machine learning technology advances, algorithms will only get better at targeting users who are vulnerable to disinformation and extremist content.

Platforms say they are fighting disinformation

Current social media company executives told the senators they are doing what they can. Meta Chief Product Officer Chris Cox said the company employs 80 fact checkers in 60 countries.

“We employ tens of thousands of people and we use industry leading technology” to root out disinformation and hate speech, Cox testified. “I’m proud that we’ve invested around $5 billion last year alone and have over 40,000 people working on safety and security.”

Twitter executive Jay Sullivan asserted that Twitter prioritizes safety throughout product development.


Current TikTok and YouTube executives also testified at the hearing. Several senators focused on reporting from BuzzFeed News revealing that TikTok, which is owned by the Chinese company ByteDance, has reportedly allowed China-based engineers to access American users’ data. Sen. Mitt Romney, R-Utah, suggested he would like to see TikTok banned from operating in the U.S.

Despite the strong language from the senators, however, longtime critics aren’t optimistic that legislation will be forthcoming.

Heidi Beirich, co-founder of the Global Project Against Hate and Extremism, told CyberScoop that social media companies have been under fire from the White House and Congress for years yet remain virtually unregulated. Even absent stronger enforcement, Beirich said, the government must at a minimum find a way to force the companies to release data so that outsiders can better understand how their algorithms fuel disinformation and hate speech.

Beirich said it is very clear to her that social media is fueling violence. She pointed to platforms’ promotion of the white supremacist conspiracy theory known as the “great replacement,” which posits that white people are being replaced by immigrants, Muslims and other people of color in the countries where they live. The theory has been embraced by several mass shooters — many of whom have streamed their killings on social media platforms — over the past few years. Beirich said her organization has been asking YouTube to remove content promoting the theory for more than two years, but nothing has been done.

“It is expensive to content moderate — that eats into your profit,” she said. “The only way that tech companies have ever changed their practices, including banning white supremacist material, for example, is after PR disasters … They are not going to self-regulate.”

Written by Suzanne Smalley

Suzanne joined CyberScoop from Inside Higher Ed, where she covered educational technology, and from Yahoo News, where she worked as an investigative reporter. Prior to Yahoo News, Suzanne worked as a consultant to the economist Raj Chetty as he launched his Harvard-based research institute, Opportunity Insights. Earlier in her career, Suzanne covered the Boston Police Department for the Boston Globe and covered two presidential campaigns for Newsweek. She holds a master's in journalism from Northwestern and a BA from Georgetown. A Miami native, Suzanne lives in upper Northwest Washington with her family.