Tue. Jan 20th, 2026

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?

What to Know: Musk v. Altman

Two artificial intelligence heavyweights will face off in court this spring, in a case that could have far-reaching consequences for the future of AI.


A judge ruled on Thursday that Elon Musk’s lawsuit against Sam Altman, other OpenAI co-founders, and Microsoft can proceed to a jury trial, rejecting OpenAI’s attempts to get the case thrown out.

Musk’s argument — The lawsuit relates to the early days of OpenAI, which started as a nonprofit funded by around $38 million in donations from Musk. The Tesla CEO alleges that Altman and others fraudulently misled him about OpenAI’s plans to transition to a for-profit. That transition yielded nothing for Musk, whose contributions were treated as charitable donations rather than seed investments, yet it ultimately helped make OpenAI staff billions of dollars. Musk is seeking up to $134 billion in damages from OpenAI and Microsoft, calling the funds “wrongful gains.”

OpenAI’s rebuttal — OpenAI has strongly denied Musk’s allegations, calling them legal harassment, and noting that Musk is a competitor who owns a rival AI company. Musk, OpenAI alleges, in fact agreed that OpenAI needed to transition to a for-profit company, and only quit because executives rebuffed his effort to secure total control of the fledgling AI lab and merge it with Tesla. “Elon’s latest variant of this lawsuit is his fourth attempt at these particular claims, and part of a broader strategy of harassment aimed at slowing us down and advantaging his own AI company, xAI,” OpenAI said in a blog post on Friday. OpenAI also called Musk’s request for billions in damages an “unserious demand.”

Internal documents — However the case is ultimately decided, it promises to be a bonanza for lovers of drama, intrigue, and OpenAI lore. Earlier this month, the judge unsealed thousands of pages of documents obtained during discovery, including excerpts from OpenAI co-founder Greg Brockman’s 2017 personal notes. “It’d be wrong to steal the nonprofit from [Musk]. To convert to a b-corp without him. That’d be pretty morally bankrupt,” reads one of these excerpts, which was cited by the judge on Thursday in her decision to let the case proceed to trial. (OpenAI said this quote was taken out of context by Musk’s legal team to make Brockman look bad, and that Brockman was referring to the possible outcomes of something that “never happened.”)

Implications for the world — It is no exaggeration to say that this lawsuit could be a matter of life and death for OpenAI. If OpenAI loses, it might be forced to pay Musk billions of dollars—money that could hurt, or even doom, its high-stakes effort to turn a profit by 2029. Other potential legal remedies might include unwinding OpenAI’s current structure, preventing any future IPO, or forcing Microsoft to divest—all things that could significantly complicate OpenAI’s future plans. A Musk win would also be a strategic and symbolic victory for xAI—a company that has seemingly committed to building AI models with only the vaguest pretense of guardrails, as exemplified by the recent Grok scandal, in which Musk’s AI generated sexualized depictions of women and children. For all of OpenAI’s many alleged trust and safety failings, it undoubtedly takes its responsibilities on that front far more seriously than Musk’s companies do.

Who to Know: Miles Brundage

When it comes to safety and security, the AI industry has less oversight than food, drugs, or aviation. The few measures that do exist are largely examples of companies voluntarily “grading their own homework,” according to Miles Brundage, OpenAI’s former policy head, who has just started a new nonprofit that aims to fix this problem.

New acronym alert — Brundage is the founder of the AI Verification and Evaluation Research Institute (AVERI), which proposes a new system of checks and balances, in which third-party auditors could review an AI company’s practices. This would go beyond existing safety-testing regimes like those practiced by government AI Security Institutes (AISIs): not only testing individual AI models, but also examining corporate governance setups, internal-only model deployments, training data, and computing infrastructure. The end result would be a set of scores, or “AI Assurance Levels,” which would denote the degree to which companies and their AIs could be trusted in high-stakes domains.

AVERI hard problem — In an interview with TIME, Brundage acknowledges his project could face some of the same limitations as the AISIs: namely, depending on tech companies to give auditors the access required to do their jobs, which creates a disincentive to publish findings that might jeopardize that access. But Brundage says he believes there are areas where companies will be incentivized to let auditors in, like when insurers refuse to underwrite AI companies in the absence of a solid assurance score. “To put it bluntly, I’m interested in: what would force companies to come to the table?” Brundage says. “We’re trying to change the incentives, not just taking them as given.”

Agentic auditing — Top AI companies pride themselves on moving quickly and using their own tools to accelerate their work. Brundage is enthusiastic about using the same approach to hold them to account. “In the same way that the companies they’re auditing are making heavy use of AI, the auditor also will be doing things like [saying to a model:] ‘Okay, here’s a database of a million Slack messages; do an analysis of safety culture at this company,’” Brundage says. “We need to be exploring those kinds of things in order to make sure that this is scalable.”
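
For a sense of what that might look like in practice, here is a minimal, hypothetical sketch of the kind of model-assisted audit Brundage describes: handing a batch of internal messages to a language model and asking for a safety-culture assessment. The model name, prompt wording, and helper function are illustrative assumptions, not AVERI’s actual tooling.

```python
# Hypothetical sketch of "agentic auditing": ask a model to assess safety
# culture from a sample of internal Slack messages. This is not AVERI's real
# tooling; the model name and prompt wording are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def assess_safety_culture(slack_messages: list[str]) -> str:
    """Return a model-written assessment of safety culture from message samples."""
    sample = "\n".join(slack_messages[:200])  # a real audit would batch far more data
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": "You are an independent AI-safety auditor."},
            {
                "role": "user",
                "content": (
                    "Based on the internal messages below, assess this company's "
                    "safety culture: recurring concerns, how leadership responds, "
                    "and any signs that safety work is being deprioritized.\n\n" + sample
                ),
            },
        ],
    )
    return response.choices[0].message.content
```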

AI in Action

An anonymous group of tech company employees has built a “data poisoning” tool that aims to seed AI training data with information that could damage AI models’ utility, The Register reports. It is a rare example of guerrilla action against AI companies, and it exploits a vulnerability in AI training whereby a small amount of “poisoned” data can have an outsized effect on the final model.

“We agree with Geoffrey Hinton: machine intelligence is a threat to the human species,” the initiative’s website says. “In response to this threat we want to inflict damage on machine intelligence systems,” it goes on, before urging website owners to “assist the war effort” by retransmitting the poisoned data, thus making it more likely to be picked up by the crawler bots that collect training data for AI companies.

What We’re Reading

From Tokens to Burgers: A Water Footprint Face-Off, in Semianalysis

It has become a meme, especially in left-leaning spaces on the internet, that AI is unethical because it uses gargantuan quantities of water. So the cracked team at Semianalysis ran the numbers on how the world’s biggest datacenter compares to a much older American institution: gorging oneself on fast food. With some back-of-the-envelope math, they find that xAI’s Colossus 2 datacenter uses the same amount of water in a day as the burgers sold by two In-N-Out burger joints. That’s not nothing, but it also puts into perspective how AI use compares to other daily activities that people may not think twice about. Nicolas Bontigui and Dylan Patel write: “A single burger’s water footprint equals using Grok for 668 years, 30 times a day, every single day.”
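
To make that comparison concrete, here is a rough back-of-the-envelope reconstruction of the arithmetic. The burger and per-query water numbers below are illustrative ballpark assumptions, not Semianalysis’s exact inputs; the point is the calculation, not the precise totals.

```python
# Rough reconstruction of the burger-vs-chatbot water comparison.
# Both inputs are assumptions for illustration, not Semianalysis's exact figures.
BURGER_WATER_LITERS = 2_500   # ballpark water footprint often cited for one beef burger
WATER_PER_QUERY_ML = 0.34     # assumed direct water use per chatbot query
QUERIES_PER_DAY = 30          # usage pattern quoted in the article

queries_per_burger = BURGER_WATER_LITERS * 1_000 / WATER_PER_QUERY_ML
years_of_use = queries_per_burger / (QUERIES_PER_DAY * 365)

print(f"{queries_per_burger:,.0f} queries, or about {years_of_use:,.0f} years "
      f"at {QUERIES_PER_DAY} queries a day")
# With these assumptions: roughly 7.4 million queries, on the order of 670 years,
# the same ballpark as the 668-year figure quoted above.
```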

