Russia’s GRU propped up fake media personas, mostly failed at social media promotion after DNC hack
Russian military hackers who stole emails from the Democratic National Committee in 2016 were acting as just one part of a larger, coordinated effort to spread Kremlin-approved messaging before and after that year's election, according to new findings from Stanford University.
Stanford’s Internet Observatory on Tuesday released a trove of analysis detailing how the GRU, a Russian military intelligence unit, was unable for more than a month to generate public interest in the data stolen from Hillary Clinton’s campaign.
The hackers first linked to the stolen emails in a set of Facebook posts on June 14, 2016, pointing to messages supposedly leaked from the campaign. Engagement with the DC Leaks Facebook page, later attributed to Russia, totaled a mere 834 interactions across 22 posts published over four months. International attention began only when WikiLeaks tweeted a link to a database containing thousands of documents revealing internal strife in the party as the presidential race was accelerating.
The Stanford analysis demonstrates in stark detail how Russian information operations grew. After the hack on the DNC, Kremlin-backed trolls relied on state media outlets like RT, and then WikiLeaks, to push their narrative into the mainstream media. Meanwhile, the GRU also created accounts for bogus organizations and personas, using Twitter either to attract more attention or to discredit skeptics.
“The GRU created think tanks and media outlets to serve as initial content drops, and fabricated personas — fake online identities — to serve as authors,” the Stanford authors wrote. “A network of accounts additionally served as distributors, posting the content to platforms such as Twitter and Reddit. In this way, GRU-created content could make its way from a GRU media property to an ideologically aligned real independent media website to Facebook to Reddit — a process designed to reduce skepticism in the original unknown blog.”
The most notable example of a fabricated media organization cited by Stanford’s researchers is the Inside Syria Media Center, propped up to support Russia’s client state amid the civil war there. The site posted original stories that “appear to be almost exclusively written by sock puppets with an obviously pro-Bashar Assad and anti-Western slant.” Articles alleged that the U.S. military was deploying chemical weapons, encouraged Syrian refugees to return home and pushed skepticism about the White Helmets, a volunteer medical and rescue organization.
The same outlet also used Twitter and Facebook pages to spread its message. The now-suspended Twitter account, @Inside_Syria, had more than 19,000 followers, though researchers noted “it is not possible to determine the extent to which those followers were real people, purchased engagement, or bot accounts.”
This larger GRU effort overlapped with similar activity carried out by Russia’s Internet Research Agency. Like the GRU, the IRA, which specializes in social media propaganda, posted frequently about Syria, Ukraine and race relations in the U.S., often through fabricated personas. But while the IRA used social media as a first resort, the GRU appears to have used Facebook and Twitter with the aim of drawing media attention to its other activity.
The GRU effort also failed to prioritize obvious audience-building strategies, like purchasing meaningful advertising space on social media sites, a “perplexing” oversight that researchers suggested could simply mean GRU staffers were focused on other tasks.
“A second explanation is that they didn’t fully understand the dynamics of the social platforms,” the Stanford authors wrote. “A third is that they were simply ineffectual or incompetent in their execution.”