The FTC’s AI portfolio is about to get bigger
The Federal Trade Commission is poised to deepen its involvement in curbing the use of AI for malicious purposes, including the spread of nonconsensual sexualized deepfakes and voice cloning scams.
Last year, Congress passed the Take It Down Act, a law that allows for criminal prosecution of individuals who share or distribute nonconsensual intimate images and digital forgeries, including those that are AI-generated.
At a Senate oversight hearing last week, FTC Chair Andrew Ferguson called the new law one of the “greatest legislative achievements” of the current Congress and President Donald Trump’s administration, and said the FTC was preparing for “robust enforcement.”
Earlier this month, the Department of Justice secured its first conviction under the new law, when 37-year-old Columbus, Ohio, resident James Strahler pleaded guilty to using AI-generated deepfake nudes as part of a harassment campaign targeting at least six women.
Another section of the law, set to take effect in May, will permit individuals to file “take down” notices with websites that publish or host sexual deepfakes. Companies will have 48 hours to remove the content or face FTC investigation and enforcement.
Commissioner Mark Meador said at a March 30 conference in Washington, D.C., that while he hopes they “never have to enforce it,” the FTC is treating Take It Down enforcement as a top priority and “actively spinning everything up that we need” to enforce the take down provision.
That could quickly set up one of the first major confrontations with the tech sector, especially companies like xAI. Its Grok tool continues to be used to create and host nonconsensual deepfake images of real people, even after the scandal it faced earlier this year.
Following his speech, CyberScoop asked Meador how the take down provisions might apply to the mass nudification spree carried out by Grok’s users. He said the law specifies that the commission can’t take action against a company until it receives formal complaints, starting in May.
“This is coming into place, and then if they don’t [remove the content] we would get the complaints and then we would go after them at that point,” Meador said. “So, we kind of have to wait and see how…companies respond to complaints and requests being made, and my hope would be that every company that gets a request to take something down would immediately take it down.”
xAI’s press office did not respond to CyberScoop’s request for comment on its preparations to comply with the Take It Down Act.
Strahler, who has yet to be sentenced, also admitted to using photos of children in his neighborhood to create deepfake pornography. A strategic plan published by the FTC earlier this month flagged protecting children online as a “key concern” for the commission that merits more consumer tools and resources.
The commission is “dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act,” the plan states.
Casey Waughn, a privacy lawyer and senior associate at Armstrong Teasdale, told CyberScoop that the current commission’s focus on child online safety leaves ample room for the law to be brought to bear in creative ways.
“We’ve seen enforcing technology and privacy violations related to young children is a priority, so I think it’s relatively easy to parlay that into some Take It Down Act enforcement,” she said.
Waughn said the one-year delay in the provision’s enforcement was intended to give platforms time to prepare, but added that the FTC could do more to publicly signal to companies what lawful compliance looks like, similar to the resources it provides around major privacy laws.
“I think what would be helpful for all organizations…would be guidance explaining what constitutes a good faith effort, for example, to attempt to address a take down request,” said Waughn.
Living in a scammer’s paradise
The FTC is also grappling with the impact of AI on criminal scams targeting Americans online.
Ferguson told lawmakers that AI is “increasing both the sophistication of the actual mechanisms by which the scams are accomplished, but it’s also making it easier for scammers to choose their targets.”
But the FTC’s powers are limited, as the Federal Communications Commission regulates the telephone and internet providers that transmit most scams. Ferguson also noted that many call center scams are located overseas “where they don’t bat an eye at the risk of civil enforcement from the FTC.” He said the commission was open to additional legislative authorities to tackle the problem.
At the March conference, Meador said AI-fueled deception was something the commission thinks about “daily,” and that it is lowering the barrier to entry for many criminal schemes.
“The biggest place that we’ve seen [is] the way that some of these AI tools are being used to triple charge scams, to be honest,” he said.
Last year, the FBI reported that voice cloning scams impersonating distressed family members had bilked Americans out of nearly $900 million, and the technology has been used to impersonate high-level Trump administration officials in conversations with businesses and political leaders.
Senator Maggie Hassan wrote to four AI voice cloning companies – ElevenLabs, LOVO, Speechify and VEED – asking what policies and programs they had in place to prevent or deter fraud enabled by their tools.
But Meador said that when it comes to deceptive claims, it’s particularly difficult to gauge consumer credulity around the use of AI. Many deepfakes, he said, are seen and consumed by people online with the same sort of “willing suspension of disbelief” that they bring to computer-generated effects in movies.
As such, the FTC will likely have to adjudicate on a case-by-case basis rather than through “broad brush strokes.”
“I think we’ll see a lot of that in the AI context, where if you know something wasn’t meant to be real or authentic, that’s not a concern,” he said. “The question is then, what are those situations where there is an expectation that you’re being shown something authentic and quote, unquote ‘real’ as opposed to being AI-generated, and was there misrepresentation or material omission to disclose that?”