Open letter calls for prohibition on superintelligent AI, highlighting growing mainstream concern

An open letter released Wednesday has called for a ban on the development of artificial intelligence systems considered to be “superintelligent” until there is broad scientific consensus that such technologies can be created both safely and in a manner the public supports.
The statement, issued by the nonprofit Future of Life Institute, has been signed by more than 700 individuals, including Nobel laureates, technology industry veterans, policymakers, artists, and public figures such as Prince Harry and Meghan Markle, the Duke and Duchess of Sussex.
The letter reflects deep and accelerating concerns over projects undertaken by technology giants like Google, OpenAI, and Meta Platforms that are seeking to build artificial intelligence capable of outperforming humans on virtually every cognitive task. According to the letter, such ambitions have raised fears about unemployment due to automation, loss of human control and dignity, national security risks, and the possibility of far-reaching social or existential harms.
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” the statement reads.
Signatories include AI pioneers Yoshua Bengio and Geoffrey Hinton, both recipients of the Turing Award, Apple co-founder Steve Wozniak, businessman Richard Branson, and actor Joseph Gordon-Levitt. On the political front, names range from Steve Bannon, former White House chief strategist under Donald Trump, to Susan Rice, former national security adviser in the Obama administration, and former chairman of the U.S. Joint Chiefs of Staff Mike Mullen.
The open letter notes that while there is recognition of AI’s “unprecedented” potential for improving health and prosperity, the development of superintelligent systems poses risks that have not yet been sufficiently addressed. Proponents of the letter argue that the race among major technology corporations could push development past a point of no return, making oversight and control impossible. Concerns about national security, civil liberties, and potential human disempowerment are front and center, as are warnings about unforeseen consequences if human-level or greater intelligence is achieved in machines.
The Future of Life Institute previously published a widely circulated letter in 2023 calling for a pause in the development of powerful AI models, a request that was not heeded by leading technology companies. Organizers of the latest campaign say the issue is now more urgent, citing polls indicating broad public skepticism about pursuing superintelligence before ensuring safety and strong oversight. The question of whether governments will intervene in time or companies will self-regulate remains open.
The full list of signatories can be viewed on the Future of Life Institute’s website.