AI companies promise to protect our elections. Will they live up to their pledges?

Critics in government and industry want more transparency, reporting and accountability for recent industry promises around AI and elections.
A frame from a video generated by Sora, an artificial intelligence text-to-video tool unveiled by OpenAI, shown in Paris on February 16, 2024. OpenAI, the creator of ChatGPT and the image generator DALL-E, said it was testing the model, which allows users to create realistic videos from a simple prompt. (Stefano Rellandini/AFP via Getty Images)

Three months after a who’s-who of AI developers pledged to deploy safeguards to protect elections against machine-learning technologies, policymakers and researchers are warning that they haven’t seen enough concrete action by major technology firms to live up to their promises. 

At the Munich Security Conference in February, a group of 20 major tech companies signed on to an AI elections accord that committed them to developing tools and implementing policies to prevent bad actors from using AI to meddle in elections. An additional six firms have since signed on to that voluntary compact, which built on a similar White House pledge from last year that saw AI firms commit to prioritizing safety and security in the design of machine-learning systems. 

Now, many observers of AI are growing concerned that these voluntary commitments are not translating into tangible measures. 

On the sidelines of the RSA conference in San Francisco this month, Sen. Mark Warner, D-Va., told reporters that while he was encouraged by the firms’ commitments, it is not clear what has been done to implement them.

Amid ongoing elections in India, European parliamentary elections in June and U.S. elections in November, Warner said it was time for AI companies to “show us the beef.”

This week, Warner sent a letter to the companies that signed the elections accord asking for an update on their progress, echoing criticism from civil society groups.

Earlier this year, an open letter from the Massachusetts Institute of Technology called on AI companies to put in place clear safe harbor policies to support independent research and evaluation of their tools, a move in line with pledges to better expose models to outside testing.  

OpenAI’s current usage policy prohibits outside researchers from intentionally circumventing safeguards and mitigations “unless supported by OpenAI,” and AI safety advocates would like firms such as OpenAI to create more opportunities for researchers to scrutinize their models.

OpenAI does deploy a network of third-party red teamers to conduct adversarial research on its models, but researchers must apply to be part of the program, and OpenAI ultimately sets the rules of engagement.

That posture can make it harder for outside researchers to validate that the guardrails these companies have installed to keep their models from being used for disinformation actually work as intended, and to ensure they aren’t circumvented in other ways.

“These accords are all first steps … but the larger problem is around accountability: First of all, what exactly are they doing and to what extent is it effective?” said Dhanaraj Thakur, a research director at the left-leaning Center for Democracy and Technology.

The MIT letter, which has been signed by Thakur and more than 350 members of the tech industry, academia and media, urges AI companies to adopt terms-of-service language similar to what largely exists today for cybersecurity research. Under that approach, AI vulnerability research would be governed by disclosure policies designed both to protect companies from bad-faith actors operating under the guise of legitimate research and to prevent companies from blocking or restricting unfriendly parties from accessing their systems.

Irene Solaiman, head of global policy at Hugging Face, a machine learning and data science platform, said access to models by outside researchers often provides fresh eyes that can discover unique vulnerabilities or attack vectors in large language models.

“Access to models can help to scrutinize them, and can also help study them and improve them and build on them in novel ways,” Solaiman said in an email. “External views can provide insights to systems that may never occur within developer organizations.”

Since the elections accord was signed, there has been some progress on technical measures to address election security. Google, for example, has developed a watermarking system it calls SynthID, and on Tuesday said it had figured out how to apply that method to text. Such methods for assessing the provenance of content represent a key theme of recent AI safety pledges, but experts caution that watermarking systems remain relatively immature and can be easily bypassed.

Nonetheless, the Brennan Center for Justice, a left-leaning think tank, has expressed concern that while many of the pledges made by AI companies strike the right notes when it comes to reining in abuse, they are “entirely unenforceable and don’t include ways to gauge the signatories’ progress in accomplishing them.”

An analysis from earlier this year by the Brennan Center’s Larry Norden and the scholar David Evan Harris argued that AI companies should be doing more to demonstrate that these agreements are “more than PR window-dressing” and publish monthly public reports detailing new policies around election-related content and investments in the detection of AI-generated media. Other transparency measures might include information about the number of employees working on election-related issues, the malicious use of AI systems by American adversaries and how much election-related content has been blocked.

After researchers in 2020 and 2022 identified numerous misinformation and disinformation campaigns that targeted Spanish-speaking voters, Norden and Harris also urged companies to bolster their ranks with employees who speak other languages.

Solaiman echoed that view, arguing that “peoples who are affected by systems will often have the best insights on what is best for them.”

Nicole Schneidman, a technology policy strategist at Protect Democracy, a nonprofit founded by three former Obama administration lawyers to fight against what it views as encroaching authoritarianism in American politics, told CyberScoop that she is encouraged to see AI companies coalesce around principles for how their technologies should be used in elections.

“There are good commitments in there,” she said. “I’m just really hoping that they follow through on them, and work amongst themselves to make sure they’re exchanging information and best practices, recognizing that we’re all facing a really novel scenario and landscape.”

A number of tech and election experts said the predicament AI companies face today parallels the pressures that many social media companies experienced in the wake of the 2016 U.S. presidential election. During that cycle, the Russian government used American platforms — as well as mainstream media outlets — to conduct disinformation campaigns and spread hacked emails.

Tammy Patrick, CEO for programs at the National Association of Election Officials, told CyberScoop she takes AI executives at their word when they say they’re committed to protecting elections — but they must not repeat the mistakes that social media companies made in the past. 

“Election officials don’t have the bandwidth or skillset in most cases” to combat deepfakes and other AI-generated disinformation, Patrick said. There needs to be “very serious conversations” with AI companies on the matter “because we want to make sure that there is some accountability in this moment.”  

A study conducted in January by the nonprofit AI Democracy Projects found that five AI chat models — Google’s Gemini, OpenAI’s GPT-4, Anthropic’s Claude, Meta’s Llama 2 and Mistral’s Mixtral — regularly served up inaccurate or false answers to election and voting-specific questions.

That same month, OpenAI announced a slew of election-related policies and said that it was collaborating with organizations like the National Association of Secretaries of State to ensure that its systems push users toward more authoritative sources, like CanIVote.org, when they ask about election procedures.

The use of AI to generate high-quality video — and create content that might be used to meddle in elections — is a key concern of researchers, but even as AI companies announce highly capable video-generation tools, the most cutting-edge of them haven’t yet been made widely available.

Both OpenAI and Google have recently previewed tools that can be used to create alarmingly lifelike videos — Sora and Veo, respectively — but both companies appear to be holding back their widespread release. 

Tina Sui, a spokesperson for OpenAI, said the company is restricting Sora to a group of red teamers to explore possible abuse, and in a conversation with the Brookings Institution earlier this month, OpenAI CEO Sam Altman said he did not expect the model to be released “before an election this year.” 

Google’s Veo, which was announced on Tuesday, will be available only to “select creators” in the coming weeks.

Tech executives acknowledge that as their AI products continue to evolve and increasingly interact with the election space, there will be a learning curve.

David Vorhaus, the director for global election integrity at Google, told CyberScoop that the company is committed to engaging with external researchers and civil society to make sure it is held accountable for its promises.

“I think we are humble enough to believe that we don’t necessarily have all of this right, and so welcome the external scrutiny on it,” Vorhaus said.

Dave Leichtman, director of corporate civic responsibility at Microsoft, told CyberScoop he’s confident the companies will meet their commitments on AI and elections.

“The proof is in the pudding, right?” he said. “When the election comes around and we can look back and say, ‘yes, we did a good job,’ and hopefully the Brennan Center [and other critics] agree with that.”

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Before joining CyberScoop, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications, beginning in 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
