Does the world need an arms control treaty for AI?

Organizations like the IAEA offer an imperfect but instructive model for designing systems to control AI proliferation.

At the dawn of the atomic age, the nuclear scientists who invented the atomic bomb realized that the weapons of mass destruction they had created desperately needed to be controlled. Physicists such as Niels Bohr and J. Robert Oppenheimer believed that as knowledge of nuclear science spread so, too, would bombs. That realization marked the beginning of the post-war arms control era.

Today, there’s a similar awakening among the scientists and researchers behind advances in artificial intelligence. Convinced that AI may pose an extinction threat to humankind — as many in the field claim — experts are examining whether the tools used to limit the spread of nuclear warheads could also check the rampant spread of AI.

Already, OpenAI, the world’s leading AI lab, has called for the formation of “something like” an International Atomic Energy Agency — the global nuclear watchdog — but for AI. United Nations Secretary-General António Guterres has since backed the idea, and rarely a day goes by in Washington without one elected official or another calling for stricter AI regulation.

Early efforts to control AI — such as export controls targeting the chips that power bleeding-edge models — show how tools designed to stem the spread of nuclear weapons might be applied to AI. But at this point in the technology’s development, it’s far from certain that the arms control lessons of the nuclear era translate elegantly to the era of machine intelligence.

Arms control frameworks for AI 

Most proposals for controlling the spread of AI models turn on a quirk of the technology. Building an advanced AI system today requires three key ingredients: data, algorithms and computing power — what the researcher Ben Buchanan popularized as the “AI Triad.” Data and algorithms are essentially impossible to control, but only a handful of companies produce the computing hardware — powerful graphics processing units — needed to train cutting-edge language models. And a single company — Nvidia — dominates the upper end of that market.

Because leading AI models rely on high-end GPUs — at least for now — controlling the hardware used to build large language models offers a way to apply arms control concepts to limit the proliferation of the most powerful models. “It’s not the best governance we could imagine, but it’s the best one we have available,” said Lennart Heim, a researcher at the Centre for the Governance of AI, a British nonprofit, who studies computing resources.

U.S. officials have in recent months embarked on an experiment that offers a preview of what an international regime to control AI might look like. In October, the U.S. banned the export to China of high-end GPUs and the chipmaking equipment needed to manufacture the most advanced chips, in an attempt to keep advanced AI models from proliferating there. “If you look at how AI is currently being governed,” Heim said, “it’s being governed right now by the U.S. government. They’re making sure certain chips don’t go to China.”

Biden administration officials are now considering expanding these controls to lagging-edge chips and limiting Chinese access to cloud computing resources, moves that would further cut Beijing off from the hardware it needs to build competitive AI models.

While Washington is the driving force behind these export controls, which are aimed at ensuring U.S. supremacy in microelectronics, quantum computing and AI, it also relies on allies. In restricting the flow of chips and chipmaking equipment to China, the U.S. has enlisted support from other key manufacturers of such goods: the Netherlands, Japan, South Korea and Taiwan.

By virtue of their chokehold on the chips used to train high-end language models, these countries are showing how the spread of AI models might be checked — for now through ad hoc measures, but perhaps one day through an international body.

But that’s only one half of the puzzle of international arms control. 

Carrots and sticks 

In the popular imagination, the IAEA is an organization primarily charged with sending inspectors around the world to ensure that peaceful nuclear energy programs aren’t being subverted to build nuclear bombs. Less well known is the agency’s work facilitating the transfer of nuclear science. Its basic bargain is something like this: sign up to the Nuclear Non-Proliferation Treaty, pledge not to build a bomb, and the IAEA will help you reap the benefits of peaceful nuclear energy.

“That’s the big reason that most states are enthusiastic about the IAEA: They’re in it for the carrots,” said Carl Robichaud, who helps lead the existential risk and nuclear weapons program at Longview Philanthropy, a nonprofit based in London. “They show up in Vienna in order to get assistance with everything from radiotherapy to building nuclear power plants.”

Building an international control regime of this sort for AI requires working out first how to govern the spread of the technology and then how to make its benefits available, argues Paul Scharre, the executive vice president and director of studies at the Center for a New American Security in Washington. By controlling where advanced AI chips go and who amasses them, licensing the data centers used to train models and monitoring who is training very capable models, such a regime could limit the proliferation of those models, Scharre argued.

Countries that buy into this arrangement would then gain easier access to very capable models for peaceful use. “If you want to access the model to do scientific discovery, that’s available — just not to make biological weapons,” Scharre said.

These types of access controls have grown more feasible as leading AI labs have abandoned the open source approach that has been a hallmark of the industry in recent years. Today, the most advanced models are only available via online apps or APIs, which allows for monitoring how they are used. Controlling access in this way — both to monitor use and to provide beneficial access — is essential for any regime to control the spread of advanced AI systems, Scharre argued. 

But it’s not clear that the economic incentives for participating in such a regime translate from the world of nuclear arms control to AI governance. Institutions like the IAEA help countries build capital- and knowledge-intensive nuclear energy industries; it’s unclear whether AI presents similar hurdles that would give countries a reason to join an arms control regime.

“I like the idea of an international agency that helps humanity benefit more equitably from AI and helps this technology reach and help everyone. It’s not clear right now that there is market failure as to why that wouldn’t happen,” Robichaud said.

It’s also not clear that access controls can be maintained in the long run. Unlike nuclear weapons, which are fairly large physical devices that are difficult to move around, AI models are just software that can be easily copied and spread online. “All it takes is one person to leak the model and then the cat’s out of the bag,” Scharre said.

That places an intense burden on AI labs to keep their products from escaping the lab — as has already occurred — and is an issue U.S. policymakers are trying to address.

In an interview with CyberScoop, Anne Neuberger, a top White House adviser on cybersecurity and emerging technology, said that as leading AI firms increasingly move away from open source models and seek to control access, the U.S. government has given them defensive cybersecurity briefings to help ensure that their models aren’t stolen or leaked.

What are we trying to prevent? 

When AI safety researchers speak of the potentially existential threat posed by AI — whether that be a flood of disinformation or the development of novel biological weapons — they are speculating. Looking at the exponential progress of machine learning systems over the past decade, many AI safety researchers believe that if current trends hold, machine intelligence may very well surpass human intelligence. And if it does, there’s reason to think machines won’t be kind to humans.

But that isn’t a sure thing, and it’s not clear exactly what catastrophic AI harms the future holds that need to be prevented today. That’s a major problem for trying to build an international regime to govern the spread of AI. “We don’t know exactly what we’re going to need because we don’t know exactly what the technology is going to do,” said Robert Trager, a political scientist at the University of California, Los Angeles, studying how to govern emerging technology. 

In trying to prevent the spread of nuclear weapons, the international community was inspired by the immense violence visited upon Hiroshima and Nagasaki. The destruction of these cities provided an illustration of the dangers posed by nuclear weapons technology and an impetus to govern their spread — which only gained momentum with the advent of more destructive thermonuclear bombs. 

By contrast, the catastrophic risks posed by AI are theoretical and draw from the realm of science fiction, which makes it difficult to build the consensus necessary for an international non-proliferation regime. “I think these discussions are suffering a little bit from being maybe ahead of their time,” said Helen Toner, an AI policy and safety expert at the Center for Security and Emerging Technology at Georgetown University who sits on OpenAI’s board of directors.

If, 10 or 20 years from now, companies are building AI systems that are clearly reaching a point where they threaten human civilization, “you can imagine there being more political will and more political consensus around the need to have something quite, quite strong,” Toner said. But if major treaties and conventions are the product of tragedy and catastrophe, those arguing for AI controls now have a simple request, Toner observes: “Do we have to wait? Can we not skip that step?”

But that idea hasn’t broken through with policymakers, who appear more focused on immediate risks, such as biased AI systems and the spread of misinformation. Neuberger, the White House adviser, said that while international efforts to govern AI are important, the Biden administration is more focused on how the technology is being used and abused today and what steps to take via executive order and congressional action before moving to long-term initiatives.

“There’s a time sequence here,” Neuberger said. “We can talk about longer term efforts, but we want to make sure we’re focusing on the threats today.”

In Europe, where EU lawmakers are at work on a landmark AI Act that would limit the technology’s use in high-risk contexts, regulators have taken a similarly skeptical approach toward the existential risks of AI and are instead focusing on how to address the risks posed by AI as it is used today.

The risk of extinction might exist, “but I think the likelihood is quite small,” the EU’s competition chief Margrethe Vestager recently told the BBC. “I think the AI risks are more that people will be discriminated [against], they will not be seen as who they are.”

Long-term control 

Today’s leading AI models are built by funneling ever more data into ever more powerful data centers to produce ever more capable models. But as the algorithms that process that data become more efficient, it’s not clear that ever more powerful data centers — and the chips that power them — will remain necessary. As algorithms improve, model developers “get better capability” for “less compute,” Heim from the Centre for the Governance of AI explains. In the future, that may mean developers can train far more advanced models with less advanced hardware.

Today, efforts to limit the spread of AI rest on controlling hardware, but if access to the most advanced chips is no longer essential for building the most advanced models, the current approach to controlling AI crumbles.

These shifts in how models are trained are already taking place. Last year, researchers at Together, an open source AI firm, trained a model known as GPT-JT on a collection of GPUs strung together over slow internet connections — suggesting that high-performing models could be trained in a decentralized manner by linking large numbers of lagging-edge chips. And as ever more capable open source models become publicly available, the moat separating AI labs from independent developers continues to narrow — and may disappear altogether.

What’s more, arguments that algorithmic efficiency will make compute less relevant don’t account for entirely new approaches to training models. Today’s leading models rely on a compute-intensive transformer architecture, but future models may use an entirely different approach that would undermine today’s efforts to control AI models, Toner observes.

Moreover, arms control experts observe that past efforts to control the spread of dangerous weapons should force a measure of humility on any policymaker trying to control the spread of AI. In the aftermath of World War II, President Truman and many of his key aides, ignoring their scientific advisers, convinced themselves that it would take the Soviet Union decades to build an atomic bomb — when it took the Kremlin only five years. And in spite of export controls, China succeeded in building “two bombs and one satellite” — an atomic bomb, a thermonuclear bomb and a space program.

That history makes Trager, the political scientist, skeptical about “grand visions for what export restrictions can do.” 

With private companies currently conducting the most advanced AI research, efforts to control the technology have understandably focused on managing industry, but in the long run, military applications may be far more concerning than commercial applications. And that does not bode well for arms control efforts. According to Trager, there is no example in history of major powers “agreeing to limit the development of a technology that they see as very important for their security, and for which they don’t have military substitutes.”

But even if arms control frameworks are imperfect vessels for regulating AI, such regimes have evolved over time and grown more stringent in response to setbacks. The discovery of Iraq’s nuclear program in the 1990s, for example, spurred the creation of the IAEA’s additional protocols.

“We’re 80 years into the nuclear age, and we haven’t had a detonation in wartime since 1945 and we only have nine nuclear-armed states,” Robichaud from Longview Philanthropy argues. “We’ve gotten lucky a few times, but we’ve also built the systems that started off really weak and have gotten better over time.” 

Correction, July 17, 2023: The additional protocols are agreements between states and the IAEA to supplement their nuclear safeguards but are not a component of the Nuclear Non-Proliferation Treaty.
