
In Paris, U.S. signals shift from AI safety to deregulation

The Trump administration made it clear that innovation and competition with China would be bigger priorities than safety.
US vice-president J.D. Vance attends a plenary session at the Artificial Intelligence (AI) Action Summit, at the Grand Palais, in Paris, on February 11, 2025. (Photo by LUDOVIC MARIN/AFP via Getty Images)

As technology and policy representatives from around the world convened in Paris this week to find a balance between safety and innovation in AI, Vice President JD Vance was blunt about how the Trump administration plans to position itself.

“I’m not here to talk about AI safety, which was the title of this conference a few years ago,” Vance said in his opening remarks at the Paris AI Action Summit. “I’m here to talk about AI opportunity.”

For the past three years, policymakers in Washington, D.C., and Europe have focused on competing with China for market dominance, boosting domestic investment and preventing AI tools from being used to cause harm or exacerbate existing biases in society. The Biden administration particularly emphasized responsible AI development to prevent some of the technology’s worst abuses, such as jailbreaking, deepfakes, malware creation and disinformation campaigns.

Yet the Trump administration seems headed in a different direction. Vance laid out four areas of emphasis: ensuring U.S. dominance in the AI race, ensuring that excessive regulation doesn’t “kill a transformative industry,” keeping AI tools free from ideological bias, and fostering a growth-oriented, worker-friendly environment.


While safety wasn’t completely ignored, Vance made clear later in his speech where the issue ranked in his mind.

“We want to ensure the Internet is a safe place,” he said. “But it is one thing to prevent a predator from preying on a child on the internet, and it is something quite different to prevent a grown man or woman from accessing an opinion that the government thinks is misinformation.” 

The remarks signal a potentially seismic shift in how the U.S. government and American companies, which Vance said accounted for roughly half of global investment in the burgeoning space last year, approach AI development.

While Vance showed willingness to collaborate with allies in order to foster innovation, the U.S. and United Kingdom opted not to sign an international agreement at the summit that called for open, inclusive and ethical AI development.

Some experts have welcomed the shift, arguing it will help the U.S. stay ahead in the AI race in the face of new lower-cost, open-weight models from Chinese firms and a stricter European regulatory environment.


Amit Elazari, CEO of OpenPolicy, a policy intelligence company that specializes in AI, told CyberScoop that both Vance’s speech and the Trump administration’s AI action plan augur “a multifaceted approach,” but one “less focused on safety and governance and a lot more focused on national competitiveness and national security.”

Elazari, who attended the summit, noted that the Trump administration’s hesitation to sign international agreements is unsurprising given that it’s still formulating its own policy.

Still, she said there does appear to be a role for safety and security protections for AI when they align with the Trump administration’s broader vision.

“In that way, cybersecurity considerations are important when it comes to preventing large theft of intellectual property and undermining national security priorities,” she said. “But they [must be] instrumental to the broader picture of national security competitiveness and making sure the U.S. is leading both the technology and productivity conversations around AI.” 

Elazari is among those who ultimately welcome the shift, arguing it will help boost U.S. industry amid rising competition from more affordable and efficient Chinese models.


But others were less optimistic, worried that the emphasis on deregulation would eliminate or roll back existing efforts to enhance security and prevent misuse by bad actors.

Mark Scott, a senior resident fellow at the Atlantic Council’s Digital Forensic Research Lab, said the difference between the Paris summit and previous international gatherings indicates that “lawmakers and policymakers have fundamentally altered their perspective on how policy should be harnessed around AI.”

“Attention on safety, including long-term fears [of] the emerging technology eventually destroying humanity, has given way to short-term needs around galvanizing AI for economic gain,” Scott wrote in a piece reacting to the summit.

In previous international pledges like the Munich AI security accords, dozens of companies committed to building safeguards to prevent AI’s use in disinformation campaigns. 

Lawrence Norden, vice president of the elections and government program at the Brennan Center for Justice, told CyberScoop that even with the Trump administration’s pivot, neither the government nor the companies innovating in the AI space need to sacrifice AI safety or security in order to compete on the global stage.   


“I don’t see a contradiction between the kind of regulation that we are calling for and that we think is necessary to protect future elections and to protect democracy,” Norden said. “These are things that are normal for other industries and there’s no reason to me why AI should be immune or think that those basic things we expect in other industries shouldn’t be expected from” them.

Meanwhile, there may be a split among European countries, with some following the United States’ lead in an effort to keep pace with China and others embracing their role as a counterweight to the U.S. deregulatory approach.

Elazari noted that in the next six months, European governments will engage in those conversations, while U.S. states like California and New York may lead on AI safety and regulation. Norden added that civil society and private cybersecurity groups must address gaps left by the federal government, such as identifying vulnerabilities in AI systems and tracking the use of AI in disinformation campaigns.

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
