“i never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now,” wrote Sam Altman, CEO of OpenAI, last week on X in his typical all-lowercase style.
Coming from the head of the company behind ChatGPT, the world’s most popular AI text generator, the remark drew mockery on X. “You’re absolutely right! This observation isn’t just smart—it shows you’re operating on a higher level,” one user replied, mimicking the sycophantic tone of ChatGPT’s output.
Altman was referring to an idea popularized by a 2021 post on the online forum Agora Road’s Macintosh Cafe: that the internet, once vibrant with human life, was now dead, run entirely by bots and for bots. “The Internet feels empty and devoid of people,” wrote IlluminatiPirate, the pseudonymous author of the theory, at the time. Gone was the promise of free exchange between people. The internet had been “hijacked by a powerful few.”
A conspiracy theory
In 2021, almost two years before the launch of ChatGPT, the idea that robots ran the internet sounded far-fetched, as did the explanation that “the U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population.” The Atlantic ran a story on the theory with the headline “The ‘Dead-Internet Theory’ Is Wrong but Feels True.”
Bots, automated scripts that crawl websites for search engines and rank content for social media platforms, were part of the internet, but they couldn’t generate convincing content of their own.
“We didn’t have AI working at that scale where you actually really could have believable AI accounts running the internet,” Adam Aleksic, a linguist and author of Algospeak: How Social Media Is Transforming the Future of Language, told TIME. The dead internet theory “used to be a lunatic fringe conspiracy theory, but it’s looking a lot more real.”
Death of the internet
The business model of content creation on the internet is simple: advertisers pay creators for the eyeballs that their content attracts, which allows creators to go on creating—and humans to go on looking at things they like. Except that, in the last few years, humans have become surplus to requirements.
A March report by Adalytics, an ad-analysis firm, found millions of cases since at least 2020 in which ads for brands from Pfizer to the NYPD were served to bots crawling the web rather than to real users, undermining advertisers’ investment. In some comical cases, the ads were served by Google’s ad server to Google’s own bots. The fraction of internet traffic made up by bots has grown over the last ten years, according to Imperva, a cybersecurity company. In 2024, it hit 51 percent, the first time bot traffic surpassed traffic from humans.
Even when some of the internet’s denizens were bots, they were mostly passive observers. That changed in 2022, when Sam Altman’s OpenAI kicked off the generative-AI race. Since then, the quantity of AI-generated content has skyrocketed. The fraction of websites in Google’s top-20 search results containing AI-generated content has increased 400% since ChatGPT was released, according to Originality AI, a startup that builds AI content detectors.
“It is in the business interest of platforms to cram slop down our throats, because over time, if there’s more AI accounts, they have to pay human creators less,” said Aleksic.
Search engines like Google began providing AI summaries of articles on the internet. Rather than having to visit content creators’ pages, users could get an overview without leaving the search engine. Fewer clicks on content meant less advertising revenue flowing to creators.
As AI-generated content has grown more sophisticated, it has spread beyond social media platforms. In August, Dispatch reported that stories published by “Margaux Blanchard” in Wired and at least five other outlets had been taken down after the author turned out to be an AI. For scammers with a creative flair, AI presents a novel way to make a quick buck.
Thus, IlluminatiPirate’s vision of a virtual wasteland created by and for bots is more plausible than ever.
The human cost
The rising tide of “slop” is causing problems for AI developers, too. Large language models, or LLMs, like ChatGPT are trained on text from the internet. If AI-served summaries continue to divert revenue from original content creators, high-quality content may dry up, leaving model developers with nothing to train on. A paper published in Nature in 2024 showed that AI models “collapse” when trained on data that they themselves generated.
In response, some internet infrastructure providers, such as Cloudflare, have proposed restricting access to the websites they serve and forcing bots to pay to enter. This could help creators recapture the revenue they need to keep on creating. “My utopian vision is a world where humans get content for free, and robots have to pay a ton for it,” Matthew Prince, the company’s CEO, told TIME in an earlier interview.
The stakes are higher than just the internet. Humans, like large language models, learn from what they read. It’s not “just that bots are surrounding us,” said Aleksic. “It’s that we are starting to become more like the bots.” In July, Scientific American reported that words commonly used by ChatGPT, like “delve” and “meticulous,” began showing up in conversational podcasts more often after the product’s release in 2022. “real people have picked up quirks of LLM-speak,” Altman observed in a post on Monday.
There’s nothing inherently bad about language changing. But the algorithms that curate what we see online “already represent reality differently than it actually exists” by promoting extreme content from human users. AI-generated content could unmoor our collective reality further by reducing the human input that anchors online discourse in what real people think.
“We have a growing perception gap in America where people think that other people’s views are more extreme than they actually are,” said Aleksic. “It’s AI psychosis on a mass scale.”