For the first time in history, nearly every nation on Earth has agreed that artificial intelligence is too consequential to leave ungoverned. In a moment when global cooperation feels broken, 193 countries have chosen to act together.
This week, the United Nations will launch two new institutions approved by resolution: an independent scientific panel to assess the risks and opportunities of AI, and a global dialogue where governments, companies, and civil society can collaborate on governing this technology.
From years of working alongside governments, multilateral bodies, and civil society, I have seen how often ambition is lost in the machinery of politics. That is why this moment, fragile as it is, merits special attention—and maybe even a bit of hope. In this case, nations recognized that no single country could govern artificial intelligence alone, and that recognition created the space to begin building lasting institutions for AI governance.
The reality we must confront is that for years, our debates about AI have been dominated by hype and fear, recycled narratives that misdirect our imagination and our policies. The U.N.'s resolution represents the first attempt to break that cycle by creating institutions that can anchor AI in science, evidence, and cooperation. If they succeed, they can create a new narrative for AI: one that serves public purpose rather than profit or panic.
Too often, we retell the same stories: an evil mogul in his tower building AI systems no one else can control, a machine that outgrows its makers, a gleaming future where technology erases our flaws. Each carries a fragment of truth, but together they obscure the realities already shaping human lives. Narratives like these shape policy and investment, while the most consequential applications are too often ignored.
Consider just a few examples from across the globe. In California, AI now scans camera feeds across fire-prone landscapes. By distinguishing between early morning fog and a rising plume of smoke, it can alert firefighters within minutes, a margin that often determines whether a blaze is contained or a community burns. In Rajasthan, a nonprofit organization called Khushi Baby has developed a predictive model that enables health workers to identify households most at risk of malnutrition, thereby doubling the number of children reached with lifesaving care.
These glimpses demonstrate how AI can augment human capacity, and they remind us how easily such possibilities can be overshadowed when spectacle takes over. They are proof that AI can support and sustain us by buying firefighters time and sparing families the grief of preventable loss. And they underscore why governance matters.
We have already seen how quickly the louder stories can capture the stage. Two decades ago, social media promised connection and knowledge. We trusted that markets would deliver fairness and that governance could wait. By the time the consequences were clear, the damage was already done. Connection had become commerce. Access had become advertising.
Artificial intelligence gives us another chance. The U.N.'s new mechanisms will not answer every question, nor will they overcome entrenched power on their own. But they are scaffolding, institutions that can evolve, adapt, and persist: a scientific panel to anchor decisions in evidence, and a global dialogue to ensure that evidence informs cooperation.
Expanding connectivity and digital literacy will be essential so that billions of people are not excluded from AI’s benefits. Building public repositories of data, algorithms, and expertise can help ensure that the foundations of AI are not controlled by a handful of corporations. And governance must reflect not only governments and companies, but the communities that live with the consequences.
The first test will come quickly, when U.N. Secretary-General António Guterres opens nominations for the new Scientific Panel. Its credibility will rest on who is chosen to serve. A body dominated by the same narrow set of voices, a few governments and powerful firms, will lose legitimacy before it begins. A panel that reflects the breadth of global expertise, from Nairobi to New Delhi to New York, could instead establish the independence and authority this moment requires.
Credibility will also depend on how AI innovation is financed. Today, the incentives shaping AI are set largely by venture capital and private markets, where short horizons and profit targets drive decisions. That model rewards speed and scale but cannot carry the responsibility of building equitable systems. Encouragingly, the U.N. has begun exploring voluntary financing mechanisms for AI capacity-building through its Office of Emerging and Digital Technologies, and philanthropy has committed billions of dollars to align capital with public purpose. Financing itself must become part of the governance infrastructure for AI.
Civil society and multilateral institutions, from the United Nations to nonprofits, universities, and community organizations, are often the first to recognize how AI is reshaping daily life and the first to develop solutions tailored to local needs. They are not an accessory to governance; they are the only way to connect global rules with lived realities. Without their leadership, AI's future will be authored by states and corporations alone.
We will continue telling stories about AI, and the ones that endure will determine the kind of future we inherit. Left unchecked, the familiar tales of fear and profit will drown out the quieter truths: families spared from wildfire, babies who live to see their first birthday. Stories can change, and with institutions built to last, they finally have a chance to take root.
The U.N.'s vote marks the first time nations have tried to govern AI together. If these institutions hold, they could prove that even in an age of fracture, the world is still capable of building technology in the service of humanity.