
Tech companies pledge to protect 2024 elections from AI-generated media

Twenty major tech companies committed to policies that make it harder for bad actors to leverage AI to influence elections
In this photo illustration, OpenAI's newly released text-to-video "Sora" tool is advertised on its website on a monitor in Washington, DC, on February 16, 2024. (Photo by Drew Angerer / AFP)

A coalition of major technology companies committed on Friday to limit the malicious use of deepfakes and other forms of artificial intelligence to manipulate or deceive voters in democratic elections.

The AI elections accord, announced at the Munich Security Conference, outlines a series of commitments to make it harder for bad actors to use generative AI, large language models and other AI tools to deceive voters ahead of a busy year of elections across the globe.

Signed by 20 major companies, the document features a who’s-who of technology firms, including OpenAI, Microsoft, Amazon, Meta, TikTok and the social media platform X. It also includes key but lesser-known players in the AI industry, like StabilityAI and ElevenLabs — whose technology has already been implicated in the creation of AI-generated content used to influence voters in New Hampshire. Other signatories include Adobe and TruePic, two firms that are working on detection and watermarking technologies.

Friday’s agreement commits these companies to supporting the development of tools that can better detect, verify or label media that is synthetically generated or manipulated. They also committed to dedicated assessments of AI models to better understand how they may be leveraged to disrupt elections and to develop enhanced methods to track the distribution of viral AI-generated content on social media platforms. The signatories committed to labeling AI media where possible while respecting legitimate uses like satire.


The agreement marks the most comprehensive effort to date by global tech companies to address the ways in which AI might be used to manipulate elections, and comes on the heels of several incidents in which deepfakes have been used as part of influence operations.

Earlier this week, a video in which an AI-generated French news anchor claimed French President Emmanuel Macron had canceled a trip to Ukraine due to a foiled assassination plot went viral online. In January, voters in New Hampshire were targeted by robocalls featuring an AI-generated voice of President Joe Biden urging them to stay away from the polls. 

Sen. Mark Warner, D-Va., said the agreement represented a healthy evolution from earlier elections when social media companies were in denial about how their platforms were being abused and leveraged by foreign actors.

“When we think about the techniques that were used in 2016, 2018 and 2020, they were literally child’s play compared to the threats and challenges we face across the board today,” Warner said in remarks to reporters at the Munich conference.

As AI tools proliferate, policymakers are pushing hard for companies to move faster to integrate ways to identify AI-generated content, and Friday’s agreement includes language encouraging companies to include “provenance signals to identify the origin of content where appropriate and technically feasible.” 


One such set of provenance standards is being developed by the Coalition for Content Provenance and Authenticity, a group formed by Adobe, Intel, Microsoft and other companies to create and promote open, interoperable technical standards for digital media. Those standards rely on a process called Secure Controlled Capture that creates a traceable history for each piece of media at the file level, with a master file kept in local or cloud-based storage for comparison. In theory, it would give the public a verifiable history of an image or video file: when it was created, by whom and whether any aspect of that media was altered along the way.
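To make the idea concrete, the minimal sketch below shows what a file-level provenance record might look like in spirit: a hash of the media is logged at capture, each edit appends a new entry, and a later copy can be checked against the most recent digest. The record format, function names and "Newsroom Camera 7" author are illustrative assumptions for this example, not the C2PA manifest format or the Secure Controlled Capture process itself.

```python
# Illustrative sketch only: a simplified, hypothetical provenance record,
# not the C2PA specification or Secure Controlled Capture.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_capture(media: Path, author: str) -> dict:
    """Create an initial provenance record when media is first captured."""
    return {
        "file": media.name,
        "author": author,
        "created": datetime.now(timezone.utc).isoformat(),
        "history": [{"action": "captured", "digest": sha256_of(media)}],
    }


def record_edit(provenance: dict, media: Path, action: str) -> dict:
    """Append an edit entry so later copies can be compared to the master file."""
    provenance["history"].append({"action": action, "digest": sha256_of(media)})
    return provenance


def verify(provenance: dict, media: Path) -> bool:
    """Check whether a file matches the most recently recorded digest."""
    return sha256_of(media) == provenance["history"][-1]["digest"]


if __name__ == "__main__":
    photo = Path("photo.jpg")
    photo.write_bytes(b"example image bytes")      # stand-in for a real capture
    record = record_capture(photo, author="Newsroom Camera 7")
    print(json.dumps(record, indent=2))
    print("unaltered:", verify(record, photo))     # True
    photo.write_bytes(b"tampered image bytes")     # simulate an undisclosed edit
    print("unaltered:", verify(record, photo))     # False
```

In the real standards, the provenance history is cryptographically signed and embedded in or bound to the media itself, which is what makes the chain tamper-evident rather than merely a log.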

But the content provenance provisions of Friday’s agreement have major limitations. The agreement does not require companies to implement such standards. Rather, companies have pledged to “support” their “development” with “the understanding that all such solutions have limitations.” Companies are committed to “working toward” those solutions, including machine-readable versions of content provenance information in AI-generated media.

And even if major companies incorporate content provenance information, many experts believe that malicious actors will instead turn toward the large ecosystem of open-source models that don’t require content provenance tools.

At the National Association of Secretaries of State winter conference earlier this month, Josh Lawson, the director of AI and democracy at the Aspen Institute, noted that in discussions with dozens of technology experts about how to limit malicious uses of AI in elections, "one of the things that came up time after time is that the bad actors are much more likely to leverage open models and jailbroken models than they would" models developed by major corporations.

Detection technologies, meanwhile, can offer only a probability estimate of whether a piece of media has been synthetically manipulated, and most of those tools rely on similar machine learning technologies that bad actors can study to create more convincing fakes. 
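The toy scoring function below illustrates why such detectors output probabilities rather than verdicts: measurable artifacts are weighted into a score between 0 and 1, and any hard label depends on a threshold the operator chooses. The features, weights and threshold here are invented for the example, not drawn from any real detection tool.

```python
# Illustrative sketch only: a toy detector score, not a real deepfake classifier.
import math


def synthetic_probability(features: dict[str, float]) -> float:
    """Map hypothetical artifact measurements to a 0-1 score via a sigmoid."""
    weights = {
        "frequency_artifacts": 2.1,   # made-up weight for spectral anomalies
        "blink_irregularity": 1.4,    # made-up weight for unnatural eye motion
        "compression_noise": -0.8,    # made-up weight; heavy compression hides cues
    }
    bias = -1.5
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic function


if __name__ == "__main__":
    clip = {"frequency_artifacts": 0.9, "blink_irregularity": 0.4, "compression_noise": 0.7}
    p = synthetic_probability(clip)
    print(f"estimated probability of manipulation: {p:.2f}")
    # A hard label only appears once an operator picks a threshold, trading
    # false alarms against missed fakes; the model itself only estimates odds.
    print("flag for review" if p > 0.7 else "no action")
```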


While efforts to address the election-related risks of AI remain nascent, the field continues to move ahead at a blistering pace. Just this week, OpenAI revealed a new model for generating high-quality video, a development that prompted a reporter at Friday’s press conference in Munich to ask what benefits the tool would provide to society at a time when policymakers are unclear on how to run secure elections against the backdrop of ever-more available AI tools. 

Anna Makanju, OpenAI’s vice president for global affairs, noted that the company is not making the tool, called Sora, publicly available, and said a small group of red teamers had been given access to study ways to make it safer. “It’s important for people to understand where the state of the art with current technology is and really build societal resilience around this,” she said. 

Tech companies are facing pushback from critics who have cast efforts to rein in technology-enabled disinformation as a broader campaign to censor or suppress political speech, but Microsoft President Brad Smith drew a line between free speech rights and using AI to broadcast manipulated political speech.

“The right of free expression does not put one person in the position to put their words in someone else’s mouth, in a way that deceives the public, by making it seem like a person uttered words they never uttered,” Smith said. 

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications, beginning in 2017. Derek has a bachelor's degree in print journalism from Hofstra University in New York and a master's degree in public policy from George Mason University in Virginia.
