Foreign governments, particularly Russia, Iran and China, are likely to continue inauthentic messaging campaigns via phony social media accounts heading into elections around the world in 2024, Meta officials warn in a new report.
Unlike in prior election cycles in the United States, however, the U.S. government has stopped proactively sharing information with Meta and other social networking platforms, cutting off a key source of intelligence on major, nation-state influence operations, company officials told reporters Wednesday.
“We believe that it’s important to build on the progress the defender community has made since 2016, and make sure that we work together to keep evolving our defenses against foreign interference,” said Ben Nimmo, Meta’s global threat intelligence lead.
Nathaniel Gleicher, Meta’s head of security policy, said U.S. information sharing stopped in July, but declined to discuss the government’s motives.
In July, a federal judge restricted some government agencies and officials from meeting and communicating with social media companies about “protected speech,” the Washington Post reported at the time, raising concerns among some observers that progress on curbing coordinated disinformation campaigns on major platforms could stall.
The comments came ahead of the release of Meta’s third-quarter security and integrity report, in which the company details efforts it is taking to combat inauthentic accounts and campaigns, as well as share thoughts on key issues heading into the election year.
The report — part of a series that began in the wake of the company’s disclosures about the 2016 Russian election interference operation — revealed that Meta had identified and neutralized three covert influence campaigns in the third quarter: two from China and one from Russia.
The report also identified areas that Meta is focused on heading into the 2024 election year, including myriad foreign covert influence operations, “perception hacking” — influencing audiences with claims that look true but aren’t — and the company’s approach to the threats and opportunities associated with generative artificial intelligence.
“Information sharing between tech companies, governments and law enforcement has also proven critical to identifying and disrupting foreign interference early, ahead of elections,” the company wrote in the report.
“This type of information sharing can be particularly critical in disrupting malicious foreign campaigns by sophisticated threat actors who coordinate their operations outside of our platforms,” the company reported. “While we’ve continued to strengthen our internal capacity to detect and enforce against malicious activity since 2017, external insights from counterparts in government, as well as researchers and investigative journalists, can be particularly important in detecting and disrupting threat activity early in its planning taking place off-platform.”
Gleicher said the company will “continue to share information with our partners in government and civil society and industry, and we’re going to keep doing that going forward.” The company will be in contact with governments “where it’s appropriate or where we’ve seen their citizens being targeted,” he added.
Gleicher said that Meta has gotten much better at detecting and removing nation-state covert influence operations that originate and operate from its platform. But more and more, he noted, groups looking to carry out such campaigns are either self-hosting and linking to material, or spreading operations across multiple platforms in hopes they won’t easily be shut down.
“When we have particularly sophisticated threat actors, in the context of foreign interference, nation states that are trying to run these campaigns, we have seen a small number of cases where they plan and coordinate the campaign off of our platforms, which means that our investigators might not know that a campaign is coming until the last minute,” Gleicher said.
“For this small set of sophisticated actors, if they’re operating off our platforms, there are a number of times when a tip from government has enabled us to take action against them quickly,” he added.
Russia, Iran and China are the top three geopolitical sources of coordinated inauthentic behavior, according to the report. If narratives of interest to these countries become election issues in the U.S. or anywhere else in 2024, the report said, coordinated covert influence efforts are likely to follow.
“We anticipate that if relations with China become an election issue in a particular country, it is likely that we’ll see China-based influence operations pivot to attempt to influence those debates,” the report notes. “The more domestic debates in Europe and North America focus on support for Ukraine, the more likely that we should expect to see Russian attempts to interfere in those debates.”