Tech companies are investing hundreds of billions of dollars to build new U.S. datacenters where, if all goes to plan, radically powerful new AI models will be brought into existence.
But all of these datacenters are vulnerable to Chinese espionage, according to a report published Tuesday.
At risk, the authors argue, is not just tech companies’ money, but also U.S. national security amid the intensifying geopolitical race with China to develop advanced AI.
The unredacted report was circulated inside the Trump White House in recent weeks, according to its authors. TIME viewed a redacted version ahead of its public release. The White House did not respond to a request for comment.
Today’s top AI datacenters are vulnerable to both asymmetrical sabotage—where relatively cheap attacks could disable them for months—and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, the report’s authors warn.
Even the most advanced datacenters currently under construction—including OpenAI’s Stargate project—are likely vulnerable to the same attacks, the authors tell TIME.
“You could end up with dozens of datacenter sites that are essentially stranded assets that can’t be retrofitted for the level of security that’s required,” says Edouard Harris, one of the authors of the report. “That’s just a brutal gut-punch.”
The report was authored by brothers Edouard and Jeremie Harris of Gladstone AI, a firm that consults for the U.S. government on AI’s security implications. Over their year-long research period, they visited a datacenter operated by a top U.S. technology company alongside a team of former U.S. special forces who specialize in cyberespionage.
In speaking with national security officials and datacenter operators, the authors say, they learned of one instance where a top U.S. tech company’s AI datacenter was attacked and intellectual property was stolen. They also learned of another instance where a similar datacenter was targeted in an attack against a specific unnamed component that, had it succeeded, would have knocked the entire facility offline for months.
The report addresses calls from some in Silicon Valley and Washington to begin a “Manhattan Project” for AI, aimed at developing what insiders call superintelligence: an AI technology so powerful that it could be used to gain a decisive strategic advantage over China. All the top AI companies are attempting to develop superintelligence—and in recent years both the U.S. and China have woken up to its potential geopolitical significance.
Although hawkish in tone, the report does not advocate for or against such a project. Instead, it says that if one were to begin today, existing datacenter vulnerabilities could doom it from the start. “There’s no guarantee we’ll reach superintelligence soon,” the report says. “But if we do, and we want to prevent the [Chinese Communist Party] from stealing or crippling it, we need to start building the secure facilities for it yesterday.”
China Controls Key Datacenter Parts
Many critical components for modern datacenters are mostly or exclusively built in China, the report points out. And due to the booming datacenter industry, many of these parts are on multi-year back orders.
That means an attack on the right critical component can knock a datacenter offline for months, or longer.
Some of these attacks, the report claims, can be incredibly asymmetric. One such potential attack—the details of which are redacted in the report—could be carried out for as little as $20,000, and if successful could knock a $2 billion datacenter offline for between six months and a year.
China, the report points out, is likely to delay shipment of components necessary to fix datacenters brought offline by these attacks, especially if it considers the U.S. to be on the brink of developing superintelligence. “We should expect that the lead times on China-sourced generators, transformers, and other critical data center components will start to lengthen mysteriously beyond what they already are today,” the report says. “This will be a sign that China is quietly diverting components to its own facilities, since after all, they control the industrial base that is making most of them.”
AI Labs Struggle With Basic Security, Insiders Warn
The report says that neither existing datacenters nor AI labs themselves are secure enough to prevent AI model weights—essentially their underlying neural networks—from being stolen by nation-state-level attackers.
The authors cite a conversation with a former OpenAI researcher who described two vulnerabilities that would allow attacks like that to happen—one of which had been reported on the company’s internal Slack channels, but was left unaddressed for months. The specific details of the attacks are not included in the version of the report viewed by TIME.
An OpenAI spokesperson said in a statement: “It’s not entirely clear what these claims refer to, but they appear outdated and don’t reflect the current state of our security practices. We have a rigorous security program overseen by our Board’s Safety and Security Committee.”
The report’s authors acknowledge that things are slowly getting better. “According to several researchers we spoke to, security at frontier AI labs has improved somewhat in the past year, but it remains completely inadequate to withstand nation state attacks,” the report says. “According to former insiders, poor controls at many frontier AI labs originally stem from a cultural bias towards speed over security.”
Independent experts agree many problems remain. “There have been publicly disclosed incidents of cyber gangs hacking their way to the [intellectual property] assets of Nvidia not that long ago,” Greg Allen, the director of the Wadhwani AI Center at the Washington think-tank the Center for Strategic and International Studies, tells TIME in a message. “The intelligence services of China are far more capable and sophisticated than those gangs. There’s a bad offense / defense mismatch when it comes to Chinese attackers and U.S. AI firm defenders.”
Superintelligent AI May Break Free
A third crucial vulnerability identified in the report is the susceptibility of datacenters—and AI developers—to powerful AI models themselves.
In recent months, studies by leading AI researchers have shown top AI models beginning to exhibit both the drive and the technical skill to “escape” the confines placed on them by their developers.
In one example cited in the report, during testing, an OpenAI model was given the task of retrieving a string of text from a piece of software. But due to a bug in the test, the software didn’t start. The model, unprompted, scanned the network in an attempt to understand why—and discovered a vulnerability on the machine it was running on. It used that vulnerability, also unprompted, to break out of its test environment and recover the string of text that it had initially been instructed to find.
“As AI developers have built more capable AI models on the path to superintelligence, those models have become harder to correct and control,” the report says. “This happens because highly capable and context-aware AI systems can invent dangerously creative strategies to achieve their internal goals that their developers never anticipated or intended them to pursue.”
The report recommends that any effort to develop superintelligence must develop methods for “AI containment,” and allow leaders responsible for such precautions to block the development of more powerful AI systems if they judge the risk to be too high.
“Of course,” the authors note, “if we’ve actually trained a real superintelligence that has goals different from our own, it probably won’t be containable in the long run.”