
New legislation targets scammers that use AI to deceive

Following a rash of AI-assisted impersonations of U.S. officials, the bill would raise the financial and criminal penalties around using the technology to defraud. 
Representative Ted Lieu, D-Calif., speaks at a press conference. (Photo by Kayla Bartkowski/Getty Images)

A new bipartisan bill introduced in the House would increase the criminal penalties for committing fraud and impersonation with the assistance of AI tools.

The AI Fraud Deterrence Act, introduced by Reps. Ted Lieu, D-Calif., and Neal Dunn, R-Fla., would raise the ceilings for criminal fines and prison time for fraudsters who use AI tools to create convincing fake audio, video or text to carry out their schemes.

For instance, the maximum fines for mail fraud, wire fraud, bank fraud and money laundering would all be increased to between $1 million and $2 million, with new language specifying that committing those crimes with AI-assisted tools carries a maximum prison sentence of 20 to 30 years.

Meanwhile, scammers who use AI to impersonate government officials could be fined up to $1 million and sentenced to up to three years in prison.


“Both everyday Americans and government officials have been victims of fraud and scams using AI, and that can be ruinous for people who fall prey to financial scams, and can be disastrous for our national security if government officials are impersonated by bad actors,” Lieu said in a statement.

The bill comes after a rash of high-profile incidents over the past year where unidentified parties have been able to communicate with or impersonate top U.S. officials in the government, seemingly with the assistance of AI voice and video tools.

In May, The Wall Street Journal reported that federal authorities were investigating fraudulent calls and texts sent to senators, governors, business leaders and other VIPs from someone impersonating White House Chief of Staff Susie Wiles’ voice and number. Wiles reportedly said her phone had been hacked, which President Donald Trump later confirmed publicly, telling the press “they breached the phone; they tried to impersonate her.” Some of the recipients said the voice sounded AI-generated.

Less than two months later, the State Department warned diplomats that someone was impersonating Secretary of State Marco Rubio in voicemails, texts and Signal messages. The messages were sent to at least three foreign ministers, a U.S. senator and a governor in what appeared to be a scam. Rubio was also targeted in a deepfake earlier this year that made it appear he was on CNN vowing to persuade Elon Musk to cut off Starlink access to Ukraine.

Other high-profile figures like singer Taylor Swift have seen their likeness and image used in scams, pornography or political attacks, while former President Joe Biden had his voice cloned by AI in a scheme hatched by a Democratic consultant working for rival Dean Phillips ahead of the 2024 New Hampshire presidential primary.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
