Wed. Oct 22nd, 2025

Every week, over 800 million people use ChatGPT to answer questions, complete tasks, and make decisions. AI systems are being rapidly adopted in schools, universities, and workplaces worldwide. Meanwhile, with billions of dollars being invested in building better systems, the technology itself continues to advance—and the future is set to be weirder than ever.


AI could cause mass unemployment, enormous energy consumption, and even, some experts worry, the destruction of civilization. While such issues remain under debate, one practical step individuals can take is to learn how to work with AI systems without letting them usurp your agency.

TIME spoke with five experts who use AI in their own work—from math to psychology to neuroscience—to distill advice on how to use these systems most effectively, without eroding critical thinking in the process.

Experiment for fit

AI systems are “jagged”—their performance can be uneven and unpredictable. They can excel on complex tasks while struggling with simple ones. And the boundaries of what they are or aren’t good for are changing all the time. For example, “before [OpenAI’s reasoning model] came out, they really weren’t useful for research mathematics,” says Daniel Litt, an assistant professor at the University of Toronto.

To know which model is best for your needs, you need to spend at least a few hours playing with it. New and more capable AI systems are released on a near-monthly basis, and “which models you pick makes a difference,” says Ethan Mollick, a Wharton professor and author of Co-Intelligence, a book on how to collaborate with AI. “Give it a shot in an area you know well,” he advises. “If it does badly, correct it. If it still does badly, come back in a few months.” Mollick uses one set of models for coding, and another for editorial help—common among power users. “Use one for ten hours, and you’re gonna know what kinds of questions you get good answers for,” he says.

It’s also worth taking advantage of the fact that, in addition to text, you can now send most AI systems pictures and voice notes—providing them with greater context and improving their responses. You could ask it to identify a kind of tree or give you the history of a local building. There’s lots of value to be found in a few hours of intentional play.

Currently, on free tiers, OpenAI, Anthropic, and Google all limit how many times per day you can message their top reasoning models. Once this is exhausted, they default to cheaper and less capable models, or require you to wait for the limit to reset. Subscriptions to each company’s top models start at $20 per month. 

Understand their strengths

Current AI systems have four key advantages over humans: they provide near-instant responses, process large amounts of contextual information, do not tire, and can access vast stores of human-created knowledge. “If the answer is no good, you can ask it a follow-up. You can home in on what you need: you can go through a feedback loop very quickly,” says Scott Aaronson, a computer science professor at the University of Texas at Austin.

AI systems perform better if you provide them with relevant information about yourself and whatever task you’re trying to complete. “I upload all my notes and documents, and it provides me with feedback that makes sense based on how I think, and on ideas I’ve had in the past,” says Anne-Laure Le Cunff, a neuroscientist at King’s College London. No matter how smart or talented a human collaborator might be, “they’re never going to be able to hold all of that information in memory and give me feedback based on that,” she says.

Long after a person would get frustrated with your queries, an AI system will keep listening and responding. This can be good, as you can stay in a flow state while consulting with it. But Le Cunff also cautions that this can create an “illusion of creative momentum,” in which it feels like you’re making progress when in fact you would be better served by taking a break, going for a walk, and letting your brain process the task in the background.

Given that AI systems are trained on and have access to immense amounts of data, we can think of them as “a technique for accessing information from other people,” says Alison Gopnik, a psychology professor at UC Berkeley. They can act as more sophisticated search engines, surfacing high-quality human-created content—essays, books, music, films, and more—that might not be found through traditional methods. “In my case, I use it as a substitute for search,” she says. 

Keep your brain in the loop

For Le Cunff, it’s vital to “keep your brain in the loop”—to actively collaborate with the AI, rather than blindly relying on its outputs. She uses AI as a thinking and conversational partner to improve her work—asking it to point out any blind spots or biases in her thinking, or key points she might have missed—rather than having it create material from scratch.

“When you’re trying to learn something, the process is the point,” says Mollick. For example, if you’re trying to learn how to write an essay, the process of writing is where the learning happens. If you outsource it, you won’t learn anything. As Litt puts it, “AI can’t understand something for you.”

Several experts highlighted the importance of not blindly relying on AI outputs. “In just about no area would I want to rely on the AI’s output without putting my own thought into it,” says Aaronson. Ideally, you should know enough about a subject to be able to tell if it’s wrong, he says. “It very often will be wrong, but it will still be confident and superficially persuasive.”

Since the introduction last September of reasoning models—AI systems that make notes to themselves before responding—and with most AIs now able to search the internet, you can usually just ask for the sources behind a claim. Always follow the source and verify for yourself that it actually supports the claim.

Consider them imaginary friends

“All the evidence we have suggests [AI systems] work best when you treat them like people, even though they’re not people,” says Mollick. In practice, that means asking follow-up questions, pointing out when a system has made mistakes, and pushing back when you disagree with something. Every response gives the system more context, improving its subsequent answers.

That said, maintaining clear boundaries is crucial to avoid falling prey to manipulation. “You could think about the way you interact with ChatGPT as being like interacting with an imaginary friend,” says Gopnik. Studies have found that most children intuitively understand the difference between real and imaginary friends, and the different roles played by each. “But it’s really important that it’s an imaginary friend. If you start treating your imaginary friends as if they’re real, you’re gonna be in trouble,” she says.

Set personal boundaries

People are already using ChatGPT to write eulogies, wedding toasts, and bedtime stories for their children. “We’re going to have to figure out what we think is too intimate or too sacred for the AI,” says Mollick. “I think it’s an important human decision we get to make. I don’t know where that line’s gonna end up being.” His personal line: he does all his writing himself first, before consulting AI, and he never uses it to grade student papers. “There are just some things where I feel an obligation to keep them human,” he says.

While a small but growing fraction of people are turning to AI for emotional support, the social impacts of this are still unclear. In the absence of evidence, it’s worth being cautious about letting AI substitute for human contact. 

There is also a real risk of getting caught in consultation loops: bouncing between different models as a way to circumvent making a decision. To avoid this, you need to draw from your own experience. “You have to make a judgment call,” says Mollick.

