UN seeks to build consensus on ‘safe, secure and trustworthy’ AI

Secretary-General António Guterres said the organization is looking to “move from principles to practice” when it comes to setting global AI standards.
The United Nations is making a push to more directly influence global policy on artificial intelligence, including the promotion of policymaking and technical standards around “safe, secure and trustworthy” AI. 

Last month, the world body finalized plans to create a new expert panel focused on developing scientific, technical and policy standards for the emerging technology. The Independent Scientific Panel on AI will be staffed by 40 international experts serving three-year terms, chosen with “balanced geographic representation to promote scientific understanding” of the technology’s risks and impacts.

The same resolution also created the Global Dialogue on AI Governance, which will aim to bring governments, businesses and experts together to “discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on artificial intelligence governance.” The first task listed for the dialogue is “the development of safe, secure and trustworthy artificial intelligence.”

On Thursday, Secretary-General António Guterres said the actions will help the UN move “from principles to practice” and help further promote the organization as a global forum for shaping AI policy and standards. 

It will also be an opportunity to build international consensus on a range of thorny issues, including AI system energy consumption, the technology’s impact on the human workforce, and the best ways to prevent its misuse for malicious ends or repression of citizens. 

The UN’s work “will complement existing efforts around the world – including at the OECD, the G7, and regional organizations – and provide an inclusive, stable home for AI governance coordination efforts,” he said. “In short, this is about creating a space where governments, industry and civil society can advance common solutions together.”

Guterres wielded lofty rhetoric to argue that the technology is destined to become integral to the lives of billions of people and to fundamentally restructure life on Earth (computer scientists and AI experts hold more mixed views on this point).

“The question is no longer whether AI will transform our world – it already is,” said Guterres. “The question is whether we will govern this transformation together – or let it govern us.”

The UN’s push on safety, security and trust in AI systems comes as high-spending, high-adoption governments like the United States, the United Kingdom and the European Union have either moved away from emphasizing those same concerns or leaned more heavily into arguing for deregulation to help their industries compete with China. 

International tech experts told CyberScoop that this may leave an opening for the UN or another credible body to have a larger voice in shaping discussions around safe and responsible AI. But they were also realistic about the UN’s limited authority to do much more than encourage good policy.

Pavlina Pavova, a cyber policy expert at the UN Office on Drugs and Crime in Vienna, Austria, told CyberScoop that the United Nations has been building a foundation to have more substantive discussions around AI and remains “the most inclusive forum for international dialogue” around the technology. 

However, she added: “The newly established formats are consultative and lack enforcement authority, playing a confidence-building role at best.”

James Lewis, a senior adviser at the Center for European Policy Analysis, echoed some of those sentiments, saying the UN’s efforts will have “a limited impact.” But he also said it’s clear that the AI industry is “completely incapable of judging risk” and that putting policymakers with real “skin in the game” in charge of developing solutions could help counter that dynamic.

That mirrors an approach taken by organizers of the U.S. Cyberspace Solarium Commission, who filled the body with influential lawmakers and policy experts in order to get buy-in around concrete proposals. It worked: the commission estimates that 75% of its final recommendations have since been adopted into law. 

“The most important thing they can do is have a strong chair, because a strong chair can make sure that the end product is useful,” Lewis said.

Another challenge Lewis pointed to: AI adoption and investment tends to be highest in the US, UK and European Union, all governments that will likely seek to blaze their own trail on AI policies. Those governments may wind up balking at recommendations from a panel staffed by experts from countries with lower AI adoption rates, something Lewis likened to passengers “telling you how to drive the bus.”

For Tiffany Saade, a technology expert, AI policy consultant to the Lebanese government and adjunct adviser at the Institute for Security and Technology, the inclusion of those nontraditional perspectives is the point: it gives those countries an opportunity to shape policy for a technology that will soon affect their lives. 

Saade, who attended UN discussions in New York City this week around AI, told CyberScoop that trust was a major theme, particularly for countries with lesser technological and financial resources.

But any good ideas that come out of the UN’s process will need to have real incentives built in to nudge countries and companies into adopting preferred policies.

“We have to figure out structures around that to incentivize leading governments and frontier labs to comply with [the recommendations] without compromising innovation,” she said. 

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he provided award-winning coverage of cybersecurity news across the public and private sectors for various publications, dating back to 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.