FCC pushes new disclosure requirements for AI in political ads

The proposal comes as campaigns and super PACs have sought to leverage generative AI tools in attack ads this election cycle.
FCC Commissioner Jessica Rosenworcel testifies before Congress on Dec. 5, 2019. (Photo by Chip Somodevilla/Getty Images)

The Federal Communications Commission is considering whether to require disclosure when a campaign or group uses artificial intelligence in political advertisements.

Under a proposed regulation put forward Wednesday by Chair Jessica Rosenworcel, AI-generated imagery would still be permitted for use in political advertisements on radio, cable and satellite television, but the groups behind those ads would need to disclose use of the technology on air or in writing.

“As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used,” Rosenworcel said in a statement. “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”

Rosenworcel cited the FCC’s authority under the Bipartisan Campaign Reform Act to protect consumers from false, misleading or deceptive programming.


If adopted, the proposal would have the agency seek public comment on a specific definition of AI-generated content and on whether such rules should apply to issue-focused political advertisements as well as those run by candidates. An FCC spokesperson told CyberScoop that it is agency practice not to release the full text of a proposed regulation unless and until it is adopted by the full commission.

The proposal arrives as campaigns and super PACs have sought to leverage generative AI tools in attack ads this election cycle, creating background imagery or mimicking the voice and likeness of political opponents.

The most prominent example came in January when a Democratic operative working on behalf of the Dean Phillips campaign created a deepfake robocall of President Joe Biden warning Democrats to stay away from the polls ahead of the party’s New Hampshire primary.

That incident is part of an ongoing state criminal investigation, though no charges have been filed and Steve Kramer, the operative behind the call, told CyberScoop in April that he is cooperating with state and federal authorities.

The incident pushed the FCC to approve new language less than a month later affirming that AI-generated robocalls are covered under the Telephone Consumer Protection Act’s prohibition on unwanted calls that use artificial or prerecorded voices.


A number of bills have been introduced in Congress that would mandate similar disclosures in political ads, place new restrictions around voice cloning of political candidates and push agencies like the Federal Election Commission or Federal Trade Commission to develop or enforce specific regulations around the issue. However, none have made it to a floor vote or been passed into law.

The FEC voted last year to open public comment on whether to require disclosure of AI in political ads, but campaign finance experts have told CyberScoop that any proposed regulation would not be approved in time to impact the 2024 election.

Last year, the American Association of Political Consultants condemned the use of generative AI in political advertisements, saying its use goes against the organization’s professional code of ethics, even if candidates and organizations disclose their use of the technology upfront.

“Citizens must have confidence in the basic truthfulness of political campaigns,” the organization said. “While the public’s trust in institutions and campaigns has been shaken in recent decades, the use of ‘deepfake’ generative AI content is a dramatically different and dangerous threat to democracy.”

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.