Tech giants reveal plans to combat AI-fueled election antics

Letters sent to Sen. Warner and shared exclusively with CyberScoop show the platforms’ approach to AI and elections amid criticism that they aren’t doing enough.
A mobile billboard, deployed by Accountable Tech, is seen outside the Meta headquarters on Jan. 17, 2023, in Menlo Park, Calif. (Photo by Kimberly White/Getty Images for Accountable Tech)

When a who’s-who of AI companies committed in February at the Munich Security Conference that they would take a series of steps to protect elections from the harmful effects of artificial intelligence, policymakers and researchers hailed it as a major win for protecting democratic systems. 

Six months later, policymakers and experts say that while AI firms are living up to some of those commitments, they have failed to deliver on others and have not been sufficiently transparent about the steps they are taking. 

In letters sent to Sen. Mark Warner, the Democratic chairman of the Senate Intelligence Committee, the firms describe the progress they have made in building systems to identify and label synthetic content and in banning or limiting the use of their technologies in election contexts. Shared exclusively with CyberScoop, the letters demonstrate the tentative progress made by key technology firms — including OpenAI, Microsoft, Google, Amazon, TikTok, Meta and X — to prevent the potential harms to elections from AI systems. 

But Warner wants to see more from these companies.  

“I’m disappointed that few of the companies provided users with clear reporting channels and remediation mechanisms against impersonation-based misuses,” Warner said in a statement to CyberScoop. “I am deeply concerned by the lack of robust and standardized information-sharing mechanisms within the ecosystem. With the election less than 100 days away, we must prioritize real action and robust communication to systematically catalog harmful AI-generated content.” 

Warner said the responses show “promising avenues” for collaboration and standard-setting among tech companies but highlighted significant gaps in the industry’s approach. 

In the view of researchers, the work described in the letters lacks detailed plans, resources for enforcement and measurable impact on AI-generated political disinformation.  

“Scanning the documents, what I was looking for was a number, any number, to indicate just how deeply — if at all — these efforts were impacting the threat environment. I didn’t find one,” said Matthew Mittelsteadt, a technologist and research fellow at the right-leaning Mercatus Center’s AI and Progress program at George Mason University. 

Mittelsteadt cautioned that in this early stage of AI development, arriving at the right solutions takes time. “Still, many of these highlighted efforts, with some exceptions on the cybersecurity side, seem largely geared to highlight a ‘sense’ of action to policymakers without actually proving impact for voters/consumers,” he said. 

Despite the efforts taken by AI companies to prevent their technologies from undermining elections, generative AI content meant to mislead voters has flooded the internet and social media this election cycle. 

A political consultant faces criminal charges and millions of dollars in fines for using AI to generate a fake robocall of President Joe Biden during New Hampshire’s Democratic primary. More recently, X owner Elon Musk shared a video that used AI to mimic the voice of Vice President Kamala Harris. Online accounts supporting former President Donald Trump’s campaign have also shared AI-generated images depicting the candidate alongside Black supporters.

Silicon Valley’s approach to protecting elections

As described in their letters, tech companies are racing to implement and refine newly developed detection algorithms, watermarking, and other measures to curb generative AI misuse in this year’s elections. Many have also collaborated with fact-checking organizations, local and state officials, and industry efforts like the Coalition for Content Provenance and Authenticity and the National Institute of Standards and Technology’s AI Safety Institute to enhance detection and watermarking methods. 

Several companies touted their work developing or implementing detection and watermarking technologies, two of the most popular policy solutions offered thus far for curbing political deepfakes. 

Like several other companies, OpenAI highlighted its integration of C2PA, an open technical standard that can be used to trace the provenance of synthetic media. That technology is built into the image generator DALL-E 3 and the forthcoming video generation platform Sora. DALL-E 3 includes watermarking, and OpenAI informed Warner that its voice model, Voice Engine, also features identification flags, though the model is not yet publicly available. 
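For readers curious what C2PA provenance looks like in practice, the standard embeds a signed manifest inside the media file itself. The sketch below is a minimal, purely illustrative heuristic: it scans a local file (the path is hypothetical) for the byte label that C2PA manifests carry. Real verification would instead use a C2PA SDK to parse the manifest and validate its cryptographic signature.

```python
# Minimal heuristic sketch: does an image file appear to carry a C2PA manifest?
# This does NOT verify anything; production tools parse the signed manifest
# and check its certificate chain.

from pathlib import Path


def has_c2pa_marker(image_path: str) -> bool:
    """Return True if the file contains the 'c2pa' label used by embedded manifests."""
    data = Path(image_path).read_bytes()
    # C2PA manifest stores are labeled "c2pa" inside the file's metadata boxes.
    # Presence hints at provenance data; it says nothing about whether the
    # signature is valid or the claims are trustworthy.
    return b"c2pa" in data


if __name__ == "__main__":
    print(has_c2pa_marker("generated_image.jpg"))  # hypothetical file path
```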

OpenAI said it is also working on detection classifiers for identifying when content has been created by a generative AI system like DALL-E. Anna Makanju, the company’s vice president of global affairs, said in her letter that these classifiers can reliably detect generative AI-manipulated imagery even when bad actors make modifications to images through techniques like compression, cropping and saturation changes.

Makanju cautioned, however, that other modifications to AI-generated images can reduce the accuracy of these systems, and that OpenAI’s technology still struggles to distinguish images created by DALL-E from those created by other models. 
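A rough sense of how that robustness might be measured: apply the edits Makanju mentions and compare detector scores before and after. The sketch below is an assumption-laden illustration, not OpenAI’s tooling; `classify_ai_generated` is a placeholder for a real detector, and the perturbation settings are arbitrary choices made for the example.

```python
# Sketch of spot-checking a detector against common evasion edits.
# Requires Pillow (pip install Pillow).

import io

from PIL import Image, ImageEnhance


def classify_ai_generated(img: Image.Image) -> float:
    """Placeholder for a real detection model; returns a 0-1 'AI-generated' score."""
    return 0.5  # stand-in value; a real classifier would inspect the pixels


def perturb(img: Image.Image) -> dict:
    """Apply edits commonly used to evade detection: compression, cropping, saturation."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=40)  # heavy JPEG compression
    buf.seek(0)
    compressed = Image.open(buf)

    w, h = img.size
    cropped = img.crop((w // 10, h // 10, w - w // 10, h - h // 10))  # trim 10% per side

    saturated = ImageEnhance.Color(img).enhance(1.8)  # boost saturation

    return {"compressed": compressed, "cropped": cropped, "saturated": saturated}


def robustness_report(img: Image.Image) -> dict:
    """Score the original and each perturbed variant to see how much the score drifts."""
    scores = {"original": classify_ai_generated(img)}
    for name, variant in perturb(img).items():
        scores[name] = classify_ai_generated(variant)
    return scores


if __name__ == "__main__":
    sample = Image.new("RGB", (512, 512), color=(200, 40, 40))  # synthetic test image
    print(robustness_report(sample))
```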

TikTok, for its part, said more than 37 million creators have flagged “realistic” AI-generated content. In the coming months, the platform will implement “Content Credentials,” or tamper-evident metadata, to identify manipulated videos and begin automatically labeling synthetically altered content from other platforms. 

Despite these efforts, manipulated content impersonating politicians continues to flood TikTok and other social platforms amid the ongoing campaign in the United States and the recently concluded election in the UK.

The contrast between tech companies’ claims and users’ experience may increase cynicism among critics and policymakers if not addressed.

“This is a rare issue where impact can be clearly ‘felt’ on the voter/consumer side,” Mittelsteadt said. “Given enough time, companies will have a hard time convincing congresspeople and their staff these efforts are impactful when impact or lack thereof can be directly observed online.”

As for political campaign content, TikTok prohibits paid political advertising and doesn’t allow politicians or parties to monetize their accounts. The company also bans the use of generative artificial intelligence to mislead people about endorsements.

TikTok “does not allow content that shares or shows fake authoritative sources or crisis events, or falsely shows public figures such as politicians in certain contexts, as even when appropriately labeled, AIGC [AI-generated content] or edited media in these contexts may still be harmful,” wrote Michael Beckerman, vice president and head of public policy for the Americas. 

Similarly, Meta is increasing disclosures and labels for synthetic AI content, including its own and any generated by third-party services. The company’s AI Research Lab is also working on Stable Signature, its AI watermarking technology. Advertisers using this kind of technology may have to disclose its use in political ads. 

Wifredo Fernández, the head of global government affairs for the U.S. and Canada at X, cited the company’s policies against spam, bots and election interference, as well as its rules against misleading people about an account’s identity and against deceptive uses of synthetic media. He also emphasized the company’s Community Notes program. 

Google highlighted its content provenance technology, including SynthID, a DeepMind-built watermarking service that works within services like ImageFX, Gemini and Veo, Google’s generative video platform. The company requires disclosure when synthetic content is used in election ads, and YouTube likewise requires labels on content made with synthetic media. Google is also imposing restrictions on Gemini and AI Overviews for election content, out of what it says is “an abundance of caution.” 
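Watermarking services such as SynthID and Stable Signature embed an imperceptible signal in generated media that the provider can later check for. The toy sketch below shows only that embed-and-detect contract, using least-significant-bit tweaks; it is not how SynthID or Stable Signature actually work, since production watermarks are produced by trained models and are designed to survive compression, cropping and re-encoding.

```python
# Toy illustration of invisible watermarking: hide and recover a short bit
# pattern in an image's least significant bits. Deliberately simplistic and
# fragile; shown only to illustrate the embed/detect idea.

import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # arbitrary tag


def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first few pixel values."""
    marked = pixels.copy()
    flat = marked.reshape(-1)
    flat[: WATERMARK.size] = (flat[: WATERMARK.size] & 0xFE) | WATERMARK
    return marked


def detect(pixels: np.ndarray) -> bool:
    """Check whether the expected watermark bits are present."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[: WATERMARK.size] & 1, WATERMARK))


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    print(detect(embed(image)))  # True
    print(detect(image))         # almost certainly False
```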

But with an explosion of commonly available commercial and open-source models over the past two years, many firms warned that their solutions will have difficulty identifying or labeling deepfakes created with other companies’ models. The companies stressed that broader adoption and information-sharing measures are needed across the industry to accurately capture a wide variety of AI-generated imagery, audio and text. 

Fernández wrote that “while companies are starting to include signals in their image generators, they have not started including them in AI tools that generate audio and video at the same scale.”

Tim Harper, a senior policy analyst for democracy and elections at the left-leaning Center for Democracy and Technology, told CyberScoop that challenges around detecting third-party deepfakes appear to be widespread across the industry. Firms like OpenAI, Anthropic, Google and Meta have said they’re generally able to detect AI content generated by their own models, but they’ve acknowledged that it’s far more difficult to detect such material when it’s produced by other companies’ models, Harper said. 

Harper said the overwhelming focus in the letters on content provenance was “disappointing” given the current efficacy of the technology. “We know that watermarking is easily removable, and it’s still a developing technology. Its ability to really prevent harm this election cycle is quite limited,” Harper said. 

He added that the responses also indicate companies aren’t dedicating enough resources to detecting and tracing the origins of AI-generated text, which makes up huge swaths of the online mis- and disinformation ecosystem. 

Some companies touted their use of outside red teams to identify vulnerabilities in their models, but there is little transparency into their testing processes. Firms like OpenAI have highly restrictive terms of service in place that allow them to handpick researchers and set the rules of engagement. CDT and hundreds of other groups have called for tech companies to allow a broader range of researchers to test generative AI systems without fear of legal reprisal, akin to modern norms in other areas of cybersecurity. 

Crucially, Harper and others argued that technological progress can’t overcome larger issues, such as cuts to trust and safety teams, lax enforcement of disinformation policies, and impulsive partisan actions by key stakeholders like Elon Musk.

After all, the best deepfake detection technology, and ever-more stringent content moderation rules, may not matter when a CEO or owner simply posts deepfakes to tens of millions of users with little or no context.

Musk eventually acknowledged the Harris video was altered and called it a parody. However, Harper said those kinds of choices by Musk undermine the rationale behind current AI content moderation efforts.

“In many instances, leadership may want a different outcome than a policy team, but typically policy teams bear enough influence to [impact] the outcome of especially high-profile situations,” he said. “We’re not seeing that at X, and that’s a concern.”

You can read the complete list of company responses here: Adobe, Amazon, Anthropic, Arm, Google, IBM, Intuit, LG, McAfee, Microsoft, Meta, OpenAI, Snap, Stability AI, TikTok, Trend Micro, True Media, Truepic, and X.
