Last week, a coalition of scientific, religious, and political leaders called for a global prohibition on developing superintelligence: AI that outperforms humans across all cognitive tasks. I was one of the early signatories, alongside Nobel laureates like Geoffrey Hinton; Yoshua Bengio, the world’s most-cited AI scientist; Steve Bannon, former advisor to President Donald Trump; Mike Mullen, former Chairman of the Joint Chiefs of Staff; and Prince Harry and Meghan, Duchess of Sussex.
What’s bringing this unprecedented coalition together? The urgent, extinction-level threat posed by superintelligence. Tech companies are pouring billions of dollars into a private race to reach superintelligence as fast as possible. No one knows how to control AIs that are vastly more competent than any human, yet we are getting closer and closer to developing them; at the current pace, many experts expect superintelligence within the next five years.
This is why leading AI scientists warn that developing superintelligence could result in humanity’s extinction.
The need for a ban on superintelligence
Once we develop machines significantly more competent than us across all domains, we will most likely be at the mercy of whichever person or country controls them, or at the mercy of the superintelligent machines themselves, since no country, no company, and no person currently knows how to control them. In theory, a superintelligent AI would pursue its own goals, and if those goals are incompatible with sustaining human life, we will be annihilated.
To make matters worse, AI developers do not understand how current powerful AI systems actually work. Unlike bridges or power plants, which are designed to precise human specifications, today’s AI systems are “grown” from vast datasets through processes their own creators cannot interpret. Even Anthropic CEO Dario Amodei admits that we only “understand 3% of how they work.”
Despite this danger, superintelligence remains the explicit goal of the leading AI companies: OpenAI, Anthropic, Google DeepMind, Meta, xAI, and DeepSeek. And given their skyrocketing valuations, they are not about to stop on their own.
Governments worldwide must step in before it is too late. Yet the international situation is not encouraging. We live in an era of rising geopolitical tension, marked by a trade war between the U.S. and China. Countries are rushing to invest billions in data centers to power AI at a time when developing and deploying dangerous AI systems remains less regulated than opening a new restaurant or building a house.
How to ban superintelligence
In this climate, is an international ban on the development of superintelligence even possible?
Yes, because we’ve achieved such global prohibitions before.
In 1985, the world learned there was a hole in the ozone layer above Antarctica, thanks to three scientists from the British Antarctic Survey. The culprits for this atmospheric crime were chlorofluorocarbons (CFCs), ubiquitous industrial chemicals. Unless something was done, the hole would keep growing, and millions of people would get skin cancer or go blind from the loss of UV protection.
Instead, millions banded together to ban CFCs. Scientists made the threat tangible with colored satellite pictures and clear discussion of the health consequences. NGOs orchestrated boycotts of huge brands and directed thousands of concerned citizens to write protest letters. Schools worldwide ran educational programs, and the UN endorsed public awareness campaigns.
In 1987, a mere two years after the ozone hole was made public, nations signed the Montreal Protocol, which went on to become the first treaty in UN history to be ratified by every country on Earth. Negotiated in the midst of the Cold War, the Montreal Protocol demonstrates that quick and decisive international agreements are possible even amid geopolitical tension.
One key factor was that the ozone hole endangered nearly everybody in the world. It was not an externality pushed by some people onto others, but something that everyone would suffer from. Superintelligence is a similarly universal threat: loss of control of AI means that even those who develop it will not be spared from its dangers. The extinction risk from superintelligence thus has the potential to cut through every division. It can unite people across political parties, religions, nations, and ideologies. Nobody wants their life, their family, their world to be destroyed.
When people learn about superintelligence and the extinction risk it poses, many see the danger and start worrying about it. As with the ozone hole, this worry must be catalyzed into civic engagement, building a global movement that works with governments to make a prohibition on superintelligence a reality.
Unfortunately, most lawmakers still do not know about the threat of superintelligence or its urgency, and AI companies are now deploying hundreds of millions of dollars to crush attempts to regulate AI.
The best counterbalance to this gargantuan lobbying effort is for lawmakers to hear from their constituents what they truly think about superintelligence. Very often, lawmakers will find that most of their constituents want them to say “no” to superintelligence, and “yes” to a future where humanity survives and thrives.
In an era of declining political engagement and rising partisanship, prohibiting superintelligence is a common-sense issue that unites people across the political spectrum.
As with the depletion of the ozone layer, everyone stands to lose from the development of superintelligence. We know the movement to avoid this fate can be built.
The only question left is: can we build it fast enough?
