
Meta scrubbed a fake scientist’s account that spread bogus COVID-19 claims

Facebook removed 524 inauthentic Facebook accounts, 20 pages, four Groups and 86 Instagram accounts boosting the claims.
A medical worker prepares a dose of the Sinopharm Covid-19 coronavirus vaccine at a hospital in Wuhan in China's central Hubei province on November 25, 2021. (Photo by STR/AFP via Getty Images)

On July 24, 2021, a Swiss biologist named Wilson Edwards claimed on Facebook and Twitter that the United States was pressuring the World Health Organization to blame the origin of COVID-19 on the Chinese government.

Within an hour, Chinese officials were promoting the message on social media, using the apparent claim to turn public opinion against the U.S. after China attracted scrutiny for reportedly rejecting further investigation into the origins of the virus.

But Wilson Edwards wasn’t real, the Swiss Embassy in Beijing announced on Twitter on Aug. 10.

Instead, the ruse was part of an elaborate coordinated campaign based in China to discredit the U.S., researchers at Meta, Facebook’s parent company, revealed in a report out Wednesday.


What researchers found was a “hall of mirrors,” Ben Nimmo, global information operations threat intelligence lead at Facebook, said in the report.

In total, Facebook removed 524 Facebook accounts, 20 pages, four Groups and 86 Instagram accounts that were part of the network. The network primarily targeted English-speaking audiences in the U.S. and U.K., and Chinese state infrastructure companies shared its posts.

The fake accounts all followed a similar posting pattern. Before and after the Wilson Edwards post, the hundreds of accounts posted the same links to news articles and social media posts, which praised China or attacked critics of its government.

Meta found two occasions in which accounts in the network published the designated links along with directions for the campaign, which instructed accounts when and what to post, as well as how to track engagement.

The campaign was largely unsuccessful, with few real users engaging with the material, the company said.


The inability of such coordinated campaigns to pick up organic engagement has been a common thread across Meta’s research, Nimmo notes in a report outlining the findings. Similar efforts that relied on traditional spamming techniques have frequently failed to attract viewers, in part because of clumsy language or signs of obvious bias.

The findings were released as part of Meta’s most recent threat report. While past reports have focused on coordinated inauthentic behavior, researchers for the first time included research on other security threats, including “brigading.” The tactic involves people working together to mass comment, post or engage in behaviors like mass reporting in order to stifle or harass opposing viewpoints.

Under its anti-brigading policy, Meta also took down an anti-vaccine group called “V_V,” which has targeted medical professionals and journalists on Meta’s platforms as well as on other platforms such as Twitter and YouTube. The network, which coordinated through Telegram, relied on mass harassment of pro-vaccination individuals. It also hijacked high-interest Meta Pages to share anti-vaccination misinformation.

Additionally, Meta took down a network of Vietnamese accounts used to mass report activists critical of the Vietnamese government as a means of getting them de-platformed. Some members of the network also impersonated individuals in an effort to report the legitimate accounts as fraudulent.

Researchers also uncovered separate networks in both Poland and Belarus focused on the border crisis between the European Union and Belarus. Meta linked the Belarusian campaign, which consisted of 41 Meta accounts, five groups, and four Instagram accounts, to the Belarusian KGB. The Belarusian military has been tied to a similar disinformation campaign.

Advertisement

The Polish campaign included two events that garnered 90 interested users. The planning of one protest in Minsk received local news coverage, but Meta could not confirm whether the events in question took place.

The Polish, Chinese and Belarusian networks all used artificial intelligence-generated images for inauthentic profiles.

Written by Tonya Riley

Tonya Riley covers privacy, surveillance and cryptocurrency for CyberScoop News. She previously wrote the Cybersecurity 202 newsletter for The Washington Post and before that worked as a fellow at Mother Jones magazine. Her work has appeared in Wired, CNBC, Esquire and other outlets. She received a BA in history from Brown University. You can reach Tonya with sensitive tips on Signal at 202-643-0931. PR pitches to Signal will be ignored and should be sent via email.