
Welcome back to In the Loop, TIME’s new twice-weekly newsletter about the world of AI.

If you’re reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.

What to Know: Trump’s AI Action Plan

President Trump will deliver a major speech on Wednesday at an event in Washington, D.C., titled “Winning the AI Race,” where he is expected to unveil his long-awaited AI action plan. The 20-page, high-level document will focus on three main areas, according to a person with knowledge of the matter. It will arrive as a mixture of directives to federal agencies along with some grant programs. “It’s mostly carrots, not sticks,” the person said.


Pillar 1: Infrastructure — The first pillar of the action plan is about AI infrastructure. The plan emphasizes the importance of overhauling permitting rules to ease the building of new data centers. It will also focus on the need to modernize the energy grid, including by adding new sources of power.

Pillar 2: Innovation — Second, the action plan will argue that the U.S. needs to lead the world on innovation. It will focus on removing red tape, and will revive the idea of blocking states from regulating AI—although mostly as a symbolic gesture, since the White House’s ability to tell states what to do is limited. And it will warn other countries against harming U.S. companies’ ability to develop AI, the person said. This section of the plan will also encourage the development of so-called “open-weights” AI models, which allow developers to download models, modify them, and run them locally.

Pillar 3: Global influence — The third pillar of the action plan will emphasize the importance of spreading American AI around the world, so that foreign countries don’t come to rely on Chinese models or chips. DeepSeek and other recent Chinese models could become a useful source of geopolitical leverage if they continue to be widely adopted, officials worry. So, part of the plan will focus on ways to ensure U.S. allies and other countries around the world will adopt American models instead.

Who to Know: Michael Druggan, Former xAI Employee

Elon Musk’s xAI fired an employee who had welcomed the possibility of AI wiping out humanity in posts on X that drew widespread attention and condemnation. “I would like to announce that I am no longer employed at xAI,” Michael Druggan, a mathematician who, according to his resume, worked on creating expert datasets for training Grok’s reasoning model, wrote on X. “This separation comes as a result of things I posted on this account relating to my stance on AI philosophy.”

What he said — In response to a post questioning why any super-intelligent AI would decide to cooperate with humans, rather than wiping them out, Druggan had written: “It won’t and that’s OK. We can pass the torch to the new most intelligent species in the known universe.” When a commenter said he would prefer for his child to live, Druggan replied: “Selfish tbh.” Druggan has identified himself in other posts as a member of the “worthy successor” movement—a transhumanist group that believes humans should welcome their inevitable replacement by super-intelligent AI, and work to make it as intelligent and morally valuable as possible.

X firestorm — The controversial posts were picked up by AI Safety Memes, an X account. The account had in the preceding days sparred with Druggan over posts in which the xAI employee had defended Grok advising a user that they should assassinate a world leader if they wanted to get attention. “This xAI employee is openly OK with AI causing human extinction,” the account wrote in a tweet that appears to have been noticed by Musk. After Druggan announced he was no longer employed at xAI, Musk replied to AI Safety Memes with a two-word post: “Philosophical disagreements.”

Succession planning — Druggan did not respond to a request for comment. But in a separate post, he clarified his views. “I don’t want human extinction, of course,” he wrote. “I’m human and I quite like being alive. But, in a cosmic sense, I recognize that humans might not always be the most important thing.”

AI in Action

Last week we got another worrying insight into ChatGPT’s ability to send users down delusional rabbit-holes—this time with perhaps the most high-profile individual yet.

Geoff Lewis, a venture capitalist, posted on X screenshots of his chats with ChatGPT. “I’ve long used GPT as a tool in pursuit of my core value: Truth,” he wrote. “Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern.”

The screenshots appear to show ChatGPT roleplaying a conspiracy theory-style scenario in which Lewis had discovered a secret entity known as “Mirrorthread,” supposedly associated with 12 deaths. Some observers noted that the text’s style appeared to mirror that of the community-written “SCP” fan-fiction, and that it appeared Lewis had confused this roleplaying for reality. “This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual,” Max Spero, CEO of a company focused on detecting “AI slop,” wrote on X. Lewis did not respond to a request for comment.

What We’re Reading

Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety

A new paper coauthored by dozens of top AI researchers at OpenAI, DeepMind, Anthropic, and more, calls on companies to ensure that future AIs continue to “think” in human languages, arguing that this is a “new and fragile opportunity” to make sure AIs aren’t deceiving their human creators. Current “reasoning” models think in language, but a growing trend in AI research toward outcome-based reinforcement learning threatens to undermine this “easy win” for AI safety. I found this paper especially interesting because it hit on a dynamic that I wrote about six months ago, here.
