Research outs poorly constructed disinfo campaign aimed at Hong Kong protests
Hackers who appear to be acting in the interest of China’s government have been hijacking and creating fake accounts on Facebook, Twitter and YouTube to push narratives denigrating the Hong Kong protests, according to research from Graphika.
Dubbed “Spamouflage Dragon,” the campaign attempted to avoid detection algorithms by posting a small amount of political content interspersed with higher volumes of spam, such as cat and landscape pictures, TikTok videos and sports content. Twitter and Facebook accounts in the spam network also have interspersed political postings with inspirational quotes as well as food and travel content.
The sweeping, cross-platform campaign — which Graphika assesses is still active and is focused on promoting YouTube videos — appears to have been in operation for years, although it largely went silent in 2017. In June, as the Hong Kong protests against China’s controversial extradition bill gained traction, the spam network started up again, with accounts, pages and channels linking to content across the platforms.
Much of the messaging has been critical of Hong Kong protesters while applauding Hong Kong police, according to Graphika.
But in contrast with some recent takedowns related to Hong Kong, this campaign does not appear to be directly linked to Beijing, in part because it seems to have been sloppily put together, according to Graphika. Although one of the channels in question had 1.6 million views, the campaign has been low-impact and has not demonstrated a sophisticated approach to generating engagement, Graphika assesses.
“Very few of the videos gained any degree of traction, and their viewing figures were extremely low,” Graphika researchers write. “Virtually no assets outside the [campaign] network reacted to, shared, or commented on any of the posts outside of its network.”
Efforts to conceal the operation
Some videos the spam network has posted were so poorly produced that they cut out after a few seconds, Graphika researchers note. In one instance, the spam network used a YouTube channel that had approximately 800,000 views — but for the most part it had just been posting snippets of the U.K. television show “The Good Life.”
The researchers suggest this kind of behavior is not the product of a social media information operation planned well in advance.
“Such repurposing is characteristic of hijacked or otherwise compromised assets that are sold on the black market, rather than dedicated assets created and run by a single operation; it indicates that [the campaign] prioritized the quick acquisition of assets over credible appearance,” the researchers write.
In at least one case, the campaign deviated from this blended posting behavior. A cluster of Twitter accounts centered around one account, which called itself Microview Review, tweeted only political content about Hong Kong and the protests.
Many of the older inauthentic Twitter accounts in this campaign, which date to 2012 and 2013, use stolen profile pictures. Other accounts have used more generic stock photos. In some cases, the campaign appears to have used Facebook pages posing as individual user accounts, possibly in an effort to avoid being flagged for takedowns.
Facebook removed all the associated accounts and Twitter took down at least some of these accounts this month, according to Graphika. When reached for comment, Twitter did not say why it did so or whether it took down every account flagged.
YouTube took down one prominent account in the second quarter of this year, but when reached for comment, a YouTube spokesperson neither said why it did so nor answered whether the platform had taken down every account flagged.