
Online influence operators continue fine-tuning use of AI to deceive their targets, researchers say

The use of artificial intelligence for malign purposes is limited but growing and maturing in key ways, researchers with Google's Mandiant said Thursday.

Hackers, cybercrime groups and other digital adversaries are increasingly using artificial intelligence to generate images and video and will likely continue to capitalize on the average person’s inability to distinguish digital fakes, researchers with Google’s Mandiant said Thursday.

The adoption of AI for intrusion operations “remains limited and primarily related to social engineering,” the researchers added, noting also that they’d so far only seen one example of information operations actors referencing AI-generated text or the large language models underlying the array of generative AI tools on the market.

But state-aligned hacking campaigns and online influence operators are continuing to experiment with and refine how they use publicly available AI tools designed to generate convincing images and higher-quality content.

The analysis comes as the White House said it would fast-track an executive order on federal agencies’ use of AI, with Congress pursuing its own approach to regulation. In July, FBI Director Christopher Wray warned of the increasing risk of AI-enabled threats, primarily from China, as well as threats to U.S. companies working in the AI space.


“Mandiant judges that generative AI technologies have the potential to significantly augment information operations actors’ capabilities in two key aspects: the efficient scaling of activity beyond the actors’ inherent means; and their ability to produce realistic fabricated content toward deceptive ends,” the researchers wrote.

The researchers said that AI tools “have the potential to significantly augment malicious operations in the future, enabling threat actors with limited resources and capabilities, similar to the advantages provided by exploit frameworks including Metasploit or Cobalt Strike,” referring to two software frameworks used for legitimate penetration testing but also frequently abused in hacking operations.

Newer and better tools allow for much faster and easier development of plausible content, potentially improving the success rates of information and influence operations. Since 2019, information operations attributed to nearly a dozen countries and private parties have used AI-generated headshots produced by generative adversarial networks (GANs) to bolster fake personas.
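One widely documented tell of those GAN headshots is alignment: generators in the StyleGAN family are trained on face crops aligned to fixed landmarks, so the eyes sit at nearly identical coordinates in every output. The sketch below is an illustration of that idea, not anything Mandiant describes; it measures eye-position uniformity across a folder of collected profile photos using OpenCV’s stock Haar cascades, and the folder path and 0.02 threshold are assumptions rather than calibrated values.

```python
# Toy heuristic: StyleGAN-family headshots come from aligned training data,
# so eye centers land at nearly the same normalized coordinates in every
# image. Unrelated real profile photos, framed by different photographers,
# should spread far more. Requires opencv-python.
import sys
from pathlib import Path
from statistics import pstdev

import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml"
)

def eye_centers(path: Path) -> list[tuple[float, float]]:
    """Return detected eye centers normalized to [0, 1]."""
    img = cv2.imread(str(path))
    if img is None:
        return []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    boxes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [((x + bw / 2) / w, (y + bh / 2) / h) for x, y, bw, bh in boxes]

def alignment_spread(image_dir: str) -> float:
    """Average std-dev of normalized eye positions across all headshots."""
    xs, ys = [], []
    for p in Path(image_dir).glob("*.jpg"):
        for cx, cy in eye_centers(p):
            xs.append(cx)
            ys.append(cy)
    if len(xs) < 4:
        raise ValueError("not enough detected eyes to measure alignment")
    return (pstdev(xs) + pstdev(ys)) / 2

if __name__ == "__main__":
    spread = alignment_spread(sys.argv[1])
    # The 0.02 cutoff is illustrative, not calibrated.
    verdict = "suspiciously uniform" if spread < 0.02 else "normal spread"
    print(f"eye-position spread: {spread:.4f} ({verdict})")
```

The signal is weak on its own, since cropped or re-aligned real photos can also cluster, but it is cheap enough to run across thousands of suspected persona accounts.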

Text-to-image models such as OpenAI’s DALL-E or Midjourney, by contrast, have appeared far less often so far, but “could pose a more significant deceptive threat” than GANs, the researchers said, because the models can be applied to a wider range of use cases and may be harder to detect, both for humans and for AI detection software.

A Chinese-linked information operation tracked as Dragonbridge, for instance, shared AI-generated images in March 2023, including one depicting former President Donald Trump in an orange jumpsuit in jail.
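On the detection side, the property that makes generated images cheap to deploy, easy reuse across many accounts, also leaves a trail. The minimal sketch below, which is not from Mandiant’s analysis, groups near-duplicate campaign images by perceptual hash, a fingerprint that survives re-encoding and light cropping; the directory name is hypothetical and the 6-bit distance cutoff is an assumption. It relies on the third-party Pillow and imagehash packages.

```python
# Sketch: cluster near-duplicate images by perceptual hash (pHash).
# Influence operations often repost one generated image across many
# accounts; pHash Hamming distance stays small through re-encoding.
from pathlib import Path

import imagehash
from PIL import Image

def find_reused_images(image_dir: str, max_distance: int = 6):
    """Group images whose pHashes differ by at most max_distance bits."""
    hashes = {
        p: imagehash.phash(Image.open(p))
        for p in Path(image_dir).glob("*.jpg")
    }
    paths = list(hashes)
    groups, seen = [], set()
    for i, a in enumerate(paths):
        if a in seen:
            continue
        group = [a]
        for b in paths[i + 1:]:
            if b not in seen and hashes[a] - hashes[b] <= max_distance:
                group.append(b)
                seen.add(b)
        seen.add(a)
        if len(group) > 1:
            groups.append(group)
    return groups

# "collected_profile_photos" is a hypothetical directory of scraped images.
for group in find_reused_images("collected_profile_photos"):
    print("likely reuse:", [p.name for p in group])
```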


AI is also improving social engineering, which seeks to trick humans into divulging information they otherwise wouldn’t, including access credentials or other sensitive material. Large language models, which power technologies such as OpenAI’s ChatGPT and Google’s Bard, can also be used to create more convincing phishing material crafted for specific targets, the researchers said.
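One defensive consequence is that the misspellings and broken grammar that once flagged phishing are disappearing from LLM-written lures, pushing triage toward structural signals instead. The toy scorer below is a minimal sketch under that assumption, not a production filter; its phrase list, weights, and sample message are all illustrative.

```python
# Toy phishing triage using structural signals rather than spelling errors:
# Reply-To/From domain mismatch, urgency language, and links pointing at
# domains unrelated to the claimed sender. Standard library only.
import re
from email import message_from_string

URGENCY = re.compile(
    r"urgent|immediately|verify your account|suspended|expires", re.I
)
LINK_DOMAIN = re.compile(r'href="https?://([^/"]+)', re.I)

def phishing_score(raw: str) -> int:
    msg = message_from_string(raw)
    score = 0
    sender = msg.get("From", "").strip("<>").lower()
    reply_to = msg.get("Reply-To", "").strip("<>").lower()
    # Signal 1: replies routed to a different domain than the sender's.
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 2
    body = msg.get_payload()
    body = body if isinstance(body, str) else ""
    # Signal 2: pressure language common to credential lures.
    score += len(URGENCY.findall(body))
    # Signal 3: links whose domain does not match the claimed sender.
    sender_domain = sender.split("@")[-1]
    for link_domain in LINK_DOMAIN.findall(body):
        if sender_domain and sender_domain not in link_domain.lower():
            score += 2
    return score

sample = (
    "From: it-support@example.com\n"
    "Reply-To: helpdesk@attacker.test\n"
    "\n"
    "Your account expires today. Verify your account here: "
    '<a href="https://attacker.test/login">portal</a>\n'
)
print("score:", phishing_score(sample))  # higher = more suspicious
```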

The researchers also said that large language models could be used to help generate malware, even if human operators still need to correct mistakes and shortcomings. That is a concern, they noted, because the “ability of these tools to significantly assist in malware creation can still augment proficient malware developers, and enable those who might lack technical sophistication.”
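A correspondingly simple defensive move is capability triage of code from untrusted sources. The sketch below is a toy illustration, not a malware detector and not anything the researchers describe: it uses Python’s standard ast module to flag scripts that combine dynamic execution, encoding, and network or process primitives, a mix that seldom co-occurs in benign one-off scripts.

```python
# Toy static triage: parse a Python script and report which "risky"
# capabilities it combines. A broad capability profile is a prompt for
# human review, not proof of malice. Standard library only.
import ast
import sys

RISKY_CALLS = {"exec", "eval", "compile"}
RISKY_MODULES = {"socket", "subprocess", "ctypes", "base64"}

def capability_profile(source: str) -> set[str]:
    """Return the set of risky calls and imports found in the source."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                found.add(f"call:{node.func.id}")
        elif isinstance(node, ast.Import):
            for alias in node.names:
                root = alias.name.split(".")[0]
                if root in RISKY_MODULES:
                    found.add(f"import:{root}")
        elif isinstance(node, ast.ImportFrom):
            root = (node.module or "").split(".")[0]
            if root in RISKY_MODULES:
                found.add(f"import:{root}")
    return found

if __name__ == "__main__":
    profile = capability_profile(open(sys.argv[1]).read())
    print(sorted(profile))
    if len(profile) >= 3:  # threshold is illustrative, not calibrated
        print("flag for human review: unusual capability combination")
```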

For all the hype surrounding AI, the rapid evolution and increasing sophistication of publicly available tools should cause concern, according to the analysis.

“Threat actors regularly evolve their tactics, leveraging and responding to new technologies as part of the constantly changing cyber threat landscape,” the researchers concluded. “Mandiant anticipates that threat actors of diverse origins and motivations will increasingly leverage generative AI as awareness and capabilities surrounding such technologies develop.”
