
OpenAI says it has disrupted 20-plus foreign influence networks in past year

Threat actors were observed using ChatGPT and other tools to scope out attack surfaces, debug malware and create spearphishing content.
[Photo illustration: the ChatGPT logo at an office in Washington, D.C., March 15, 2023. (Stefani Reynolds/AFP via Getty Images)]

OpenAI said it has disrupted more than 20 operations and networks over the past year from foreign actors attempting to use the company’s generative AI technologies to influence political sentiment around the world and meddle in elections, including in the United States.

In some cases, the actors attempted to use ChatGPT and other OpenAI tools to analyze and generate social media content, create fake articles for websites, debug malware, write biographies and perform a host of other tasks that support online influence efforts. However, despite a surge of malicious use, the technology’s overall impact appears to be more complementary than game-changing.

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” OpenAI principal investigator Ben Nimmo and Michael Flossman, a member of the company’s technical staff, wrote.

The findings, included in OpenAI’s latest quarterly threat report, detail activity originating from countries including China, Russia, Iran, Rwanda and Vietnam, though not all of the disclosed operations were explicitly tied to government actors or agencies.


Some of the findings were included in past reports from OpenAI, covering attempts by governments like Russia and Iran to leverage generative AI to target American voters online. The report also covers more recent efforts, like attempts by Iranian-linked actors in August to use OpenAI tools to mass-generate social media comments and long-form articles around divisive topics like the war in Gaza, Israel’s relationship with Western countries, Venezuelan politics and Scottish independence.

In some cases, threat actors attempted to turn OpenAI’s own technologies against the company. A set of accounts from a network linked to China used the tools to generate spearphishing emails that were then sent to OpenAI’s own employees. The actor posed as a ChatGPT user requesting support and included a malicious .zip file attachment titled “Some Problems” that was laced with a remote access trojan that would have given the attacker control over compromised devices. OpenAI does not name the source that first alerted it, but screenshots of the spearphishing emails included in the report credit the cybersecurity firm Proofpoint. A spokesperson for Proofpoint confirmed to CyberScoop that the company contacted OpenAI regarding the incident.
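The incident is also a reminder of why mail defenses inspect archive attachments before they reach an inbox. As a rough illustration (not a description of OpenAI’s or Proofpoint’s actual defenses), the short Python sketch below flags executable-looking payloads inside a .zip without extracting it; the extension list is an assumption, and production mail gateways layer sandboxing and signature checks on top.

# zip_attachment_check.py -- a minimal defensive sketch suggested by the
# incident above: inspect an inbound .zip attachment for executable
# payloads without extracting it. The extension list is an assumption;
# real mail gateways go much further (sandboxing, signature checks).
import zipfile

# File types a RAT dropper commonly hides behind.
SUSPECT_EXTENSIONS = (".exe", ".scr", ".js", ".vbs", ".lnk", ".bat", ".dll")

def flag_suspicious(zip_path: str) -> list[str]:
    """Return archive members with executable-looking extensions."""
    with zipfile.ZipFile(zip_path) as zf:
        return [
            name for name in zf.namelist()
            if name.lower().endswith(SUSPECT_EXTENSIONS)
        ]

if __name__ == "__main__":
    # "Some Problems" is the attachment name reported in the incident.
    hits = flag_suspicious("Some Problems.zip")
    if hits:
        print("quarantine recommended:", ", ".join(hits))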

Another group of accounts that allegedly relied on the same IT infrastructure was also observed using ChatGPT to answer questions and complete scripting and vulnerability research tasks.

Among the questions posed were queries about how to find specific versions of software that remain vulnerable to the popular Log4Shell vulnerability, how to use tools like sqlmap to upload web shells to target servers, social engineering techniques for targeting government employees, and how to exploit the technological infrastructure of automobiles from a prominent car company.
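The first of those queries, locating software still running vulnerable Log4j builds, mirrors the reconnaissance defenders run on their own hosts. A minimal, defensive Python sketch, assuming the standard log4j-core JAR naming convention, might look like this:

# defenders_log4j_scan.py -- a minimal sketch of the version
# reconnaissance described in the report, framed defensively: find
# log4j-core JARs on a host you administer and flag releases still
# exposed to the Log4Shell family of CVEs (CVE-2021-44228 et al.).
import os
import re
import sys

# 2.17.1 closed out the last of the Log4Shell-era CVEs (CVE-2021-44832);
# anything older in the 2.x line deserves a closer look.
FIXED = (2, 17, 1)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def scan(root: str) -> list[str]:
    """Walk `root` and return paths of log4j-core JARs older than FIXED."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = JAR_PATTERN.search(name)
            if m and tuple(map(int, m.groups())) < FIXED:
                hits.append(os.path.join(dirpath, name))
    return hits

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "/opt"
    for path in scan(root):
        print(f"potentially vulnerable: {path}")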

Another group, the Iranian-linked “CyberAv3ngers,” used ChatGPT to create and refine malicious scripts and to ask about common default username and password combinations for programmable logic controllers (PLCs), which are often used to interact with and operate machinery or other equipment in critical infrastructure sectors.
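The corresponding defensive step is to audit one’s own devices for surviving factory credentials. The Python sketch below is illustrative only: the HTTP login path and the credential list are placeholders rather than real vendor defaults, and it should be run only against equipment the operator is authorized to test.

# plc_default_cred_audit.py -- a minimal, defensive sketch of the audit
# that mitigates the PLC risk described above: check whether devices you
# own still answer to factory-default credentials. The "/login" path and
# the credential list are placeholders, not real vendor defaults.
import requests

# Hypothetical defaults for illustration only; a real audit would use the
# vendor's documented factory credentials for each device model.
DEFAULT_CREDS = [("admin", "admin"), ("admin", "1234"), ("operator", "operator")]

def audit_device(host: str, timeout: float = 3.0) -> list[tuple[str, str]]:
    """Return the default credential pairs a device's web UI accepts."""
    accepted = []
    for user, password in DEFAULT_CREDS:
        try:
            # Many PLCs expose an HTTP management interface; the path is
            # an assumed placeholder for this sketch.
            r = requests.get(f"http://{host}/login", auth=(user, password), timeout=timeout)
            if r.status_code == 200:
                accepted.append((user, password))
        except requests.RequestException:
            break  # host unreachable; stop probing it
    return accepted

if __name__ == "__main__":
    # Only run against inventory you are authorized to test.
    for host in ["192.0.2.10", "192.0.2.11"]:  # documentation-range IPs
        weak = audit_device(host)
        if weak:
            print(f"{host}: still accepts default credentials {weak}")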


Private researchers have linked CyberAv3ngers to the Islamic Revolutionary Guard Corps (IRGC), noting that the group’s operations often function as impactful information campaigns rather than actual cyberattacks. OpenAI said it was able to use the group’s queries to identify other possible technologies and brands of interest to the group.

OpenAI’s findings align with statements made this year by U.S. intelligence officials about their own observations of foreign actors using AI to target the 2024 U.S. elections.

In particular, they said foreign nations continue to struggle to develop their own sophisticated AI models and to overcome the guardrails now built into many commercially available generative AI tools to detect and prevent malicious use.

“Foreign actors are using AI to more quickly and convincingly tailor synthetic content. The [intelligence community] considers AI a malign influence accelerant, not a revolutionary influence tool,” an ODNI official said in a September briefing.

Intelligence officials described Russia as the most active government leveraging AI for global disinformation, deploying algorithmically generated content in support of Moscow’s broader goal of “denigrating” Vice President Kamala Harris and the Democratic Party. Iranian actors have used it to generate content in English and Spanish, amplifying divisive political issues in the U.S. Intelligence officials have concluded that leaders in Tehran are seeking to harm the electoral prospects of Republican candidate Donald Trump.


The report also details how OpenAI has expanded its threat detection capabilities over the past year, specifically by building a set of new AI-powered tools that the company said have cut the time spent on some analytical tasks from “days to minutes.” Threat analysts at OpenAI also believe that AI tools now sit at a critical intersection for many influence operations, giving providers potentially unique insight into ongoing operations.
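OpenAI has not published that internal tooling, but a hedged sketch can convey the shape of the speedup: a model produces a first-pass label for a queue of suspect posts, and a human analyst reviews every call. Everything below (the prompt wording, the labels, the model name) is an assumption for illustration, not OpenAI’s actual workflow.

# triage_sketch.py -- an illustrative sketch only: OpenAI has not
# published its internal tooling, so this simply shows how an analyst
# might use a model to compress a first-pass review of suspect posts
# from days to minutes. The prompt wording and labels are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = (
    "You are assisting a threat analyst. Label the following social media "
    "post as LIKELY_INFLUENCE_OP, BENIGN, or UNCLEAR, then give a one-line "
    "reason. Consider repetition, manufactured outrage, and astroturf cues."
)

def triage(posts: list[str], model: str = "gpt-4o-mini") -> list[str]:
    """Return a first-pass label for each post; humans review everything."""
    labels = []
    for post in posts:
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": TRIAGE_PROMPT},
                {"role": "user", "content": post},
            ],
        )
        labels.append(resp.choices[0].message.content.strip())
    return labels

if __name__ == "__main__":
    sample = ["Everyone I know is saying the election was stolen!!! Share!"]
    for post, label in zip(sample, triage(sample)):
        print(f"{label}\n  {post}")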

“Threat actors most often used our models to perform tasks in a specific, intermediate phase of activity — after they had acquired basic tools such as internet access, email addresses and social media accounts, but before they deployed ‘finished’ products such as social media posts or malware across the internet via a range of distribution channels,” Nimmo and Flossman wrote. 

“Investigating threat actor behavior in this intermediate position,” they continued, “allows AI companies to complement the insights of both ‘upstream’ providers — such as email and internet service providers — and ‘downstream’ distribution platforms such as social media.”


Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
