Welcome back to In the Loop, TIME’s twice-weekly newsletter about the world of AI. If you’re reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.
What to Know: The AI social contract
At a lakefront venue in Sweden earlier this month, 18 individuals from OpenAI, Google DeepMind, the U.K. AI Security Institute, the OECD, and other groups gathered for an invite-only summit. On the agenda: arriving at a consensus on the likely ways that advanced AI will impact the “social contract” between working people, governments, and corporations.
Top AI CEOs like DeepMind’s Demis Hassabis and OpenAI’s Sam Altman have recently been urging academics and governments to grapple with this issue more deeply, to better prepare the world for what they expect will be a highly disruptive economic shock. So, every day for a week—in breakout rooms and in a nightly communal sauna—these 18 experts hashed out a picture of what economic shocks might be coming down the track… and what to do about them.
Bad news — One outcome of the so-called “AGI social contract summit” was a list of four consensus statements, according to the summit’s organizers. These statements have not previously been reported. They paint a grim picture of where the world could be headed, absent significant interventions by governments and societies. “AI is likely to exacerbate increasing wealth and income inequality within countries, worsening economic conditions for many working and middle-class people and families,” the first reads. “AI will increase inequality between countries that have access to AI infrastructure and those that don’t—both in terms of access to benefits as well as ability to respond to shocks,” says the second. “Without intervention, AI-enabled inequalities may lead to the political dominance of wealthy individuals and corporations, eroding democratic institutions and increasing levels of political dissatisfaction,” the third says. And the fourth: “The encroachment of AI systems and the erosion of the value of labor could lead to the increasing disempowerment of most humans, causing a degradation in individual well-being and purpose.”
Human disempowerment — Attendees at the summit agreed that the existing social contract—in which people receive security and a stake in society in return for their labor—is in trouble due to AI, says Deric Cheng, the event’s organizer, who serves as Director of Research at the Windfall Trust, a non-profit founded this year to grapple with these issues. “We’re essentially worried that labor will be disempowered relative to corporations, and also to some degree that governments might be disempowered relative to corporations,” Cheng says. “The obvious result of lower labor power is decreased real wages.” This view holds that people in wealthy democracies enjoy a high standard of living not due to their rights enshrined on paper—but due to their ability to withhold their labor. Remove labor from that equation, and standards of living are vulnerable to going down, even if overall GDP or productivity statistics rise.
Ways forward — Without intervention by governments, attendees agreed, the default path of advanced AI would likely result in bad economic outcomes for the average person. But fortunately, they also identified several possible actions that governments could take to push things in a better direction, Cheng says. For example: developing new institutions, in the vein of the IMF, to ensure that wealth derived from AI is distributed globally, rather than within the one or two powerful countries where AI companies are located. States could also run pilots today, Cheng says, for policies like basic income and reduced working weeks, to gather evidence about what kinds of safety nets are effective.
Google DeepMind declined to comment on the consensus statements that arose from the summit. OpenAI did not respond to a request for comment.
If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.
Who to Know: U.S. District Judge Amit Mehta
Last year, U.S. District Judge Amit Mehta ruled that Google had illegally maintained a monopoly over online search and ads. This week, he is expected to announce the court’s decision on what to do about it—a ruling that could range from making Google share data with rivals, to forcing a breakup of the search giant itself.
Payments to rivals — The U.S. Department of Justice’s case against Google revolved around the multibillion-dollar yearly payments that Google made to Apple in order to secure Google as the default search engine on iPhones. Observers expect the court to, at a minimum, place limits on these kinds of payments, which Mehta ruled were anticompetitive.
Spinning off Chrome — Another possibility is that Mehta could order Google to sell Chrome, the most popular browser in the world, with a 67% market share. Chrome allows Google to collect intricate data about users’ browsing patterns that shore up its dominance of the search and ad space. Any of Google’s competitors would no doubt jump at the chance to buy the world’s top browser, given the opportunity it affords to point users toward their LLM of choice.
Sharing user data — The data that Google collects on its users is part of the secret sauce of its search engine. Mehta could rule that Google must share this data with competitors—perhaps in an anonymized form, to ward off accusations of privacy violations.
AI in Action
The public trusts AI chatbots more than companies or community leaders, according to polling of users in 68 countries carried out by the Collective Intelligence Project.
More than half (56.6%) of respondents said they trust AI chatbots, the polling found. That’s a higher share than said they trust the AI companies that make them (34.6%), or even faith and community leaders (44.2%).
More than one in 10 people (14.9%) use AI for emotional support on a daily basis, the survey found. And 30% of people have “at some point thought their AI chatbot might be self-aware.”
And 56% of people polled said that the proliferation of AI across society was likely to worsen access to good jobs.
As always, if you have an interesting story of AI in Action, we’d love to hear it. Email us at: intheloop@time.com
What We’re Reading
The Race for Artificial General Intelligence Poses New Risks to an Unstable World, by Billy Perrigo in TIME
A shameless plug for my own story here. Earlier this year I traveled to Paris to sit in on a fascinating exercise: a simulated war-game, where four teams played out the impact of advanced AI on geopolitics. It was sort of like watching a game of Dungeons and Dragons, except the players were former government officials and AI researchers—and the game board was planet Earth. I use the war-game as a jumping-off point in the story to explore how Artificial General Intelligence has become an increasingly salient dimension of great power competition between the U.S. and China. I hope you’ll give it a read, and let me know what you think!