An open letter calling for the prohibition of the development of superintelligent AI was announced on Wednesday, with the signatures of more than 700 celebrities, AI scientists, faith leaders, and policymakers.
Among the signatories are five Nobel laureates; two so-called “Godfathers of AI”; Steve Wozniak, a co-founder of Apple; Steve Bannon, a close ally of President Trump; Paolo Benanti, an adviser to the Pope; and even Harry and Meghan, the Duke and Duchess of Sussex.
The open letter says, in full:
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”
The letter was coordinated and published by the Future of Life Institute, a nonprofit that in 2023 published a different open letter calling for a six-month pause on the development of powerful AI systems. Although widely circulated, that letter did not achieve its goal.
Organizers said they decided to mount a new campaign, with a more specific focus on superintelligence, because they believe the technology—which they define as a system that can surpass human performance on all useful tasks—could arrive in as little as one to two years. “Time is running out,” says Anthony Aguirre, the FLI’s executive director, in an interview with TIME. The only thing likely to stop AI companies from barreling toward superintelligence, he says, “is for there to be widespread realization among society at all its levels that this is not actually what we want.”
Polling released alongside the letter showed that 64% of Americans believe that superintelligence “shouldn’t be developed until it’s provably safe and controllable,” and only 5% believe it should be developed as quickly as possible. “It’s a small number of very wealthy companies that are building these, and a very, very large number of people who would rather take a different path,” says Aguirre.
Actors Joseph Gordon-Levitt and Stephen Fry, rapper will.i.am, and author Yuval Noah Harari also signed their names to the letter. Susan Rice, the national security advisor in Barack Obama’s Administration, signed. So did Leo Gao, a serving member of technical staff at OpenAI, which its CEO, Sam Altman, has described as a “superintelligence research company.” Aguirre expects more people to sign as the campaign unfolds. “The beliefs are already there,” he says. “What we don’t have is people feeling free to state their beliefs out loud.”
“The future of AI should serve humanity, not replace it,” said Prince Harry, Duke of Sussex, in a message accompanying his signature. “I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.”
Joseph Gordon-Levitt’s signature was accompanied by the message: “Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that. But that’s what these big tech companies mean when they talk about building ‘Superintelligence’.”
The statement was kept minimal to attract a broad and diverse set of signatories. But for meaningful change, Aguirre thinks regulation is necessary. “A lot of the harms come from the perverse incentive structures companies are subject to at the moment,” he says, noting that companies in America and China are competing to be first in creating superintelligence.
“Whether it’s soon or it takes a while, after we develop superintelligence, the machines are going to be in charge,” says Aguirre. “Whether or not that goes well for humanity, we really don’t know. But that is not an experiment that we want to just run toward.”