
I’m a psychologist, and AI is coming for my job. The signs are everywhere: a client showing me how ChatGPT helped her better understand her relationship with her parents; a friend ditching her in-person therapist to process anxiety with Claude; a startup raising $40 million to build a supercharged AI therapist. The other day on TikTok, I came across an influencer sharing how she doesn’t need friends; she can just vent to God and ChatGPT. The post went viral, and thousands commented, including:


“ChatGPT talked me out of self-sabotaging.”

“It knows me better than any human walking this earth.”

“No fr! After my grandma died, I told chat gpt to tell me something motivational… and it had me crying from the response.”

I’d be lying if I said that this didn’t make me terrified. I love my work—and I don’t want to be replaced. And while AI might help make therapy more readily available for all, beneath my personal fears lies an even more unsettling question: whether solving therapy’s accessibility crisis might inadvertently spark a crisis of human connection.

Therapy is a field ripe for disruption. Bad therapists are, unfortunately, a common phenomenon, while good therapists are hard to find. When you do manage to find a good therapist, they often don’t take insurance and almost always charge a sizable fee that, over time, can really add up. AI therapy could fill an immense gap. In the U.S. alone, more than half of adults with mental health issues do not receive the treatment they need. With the help of AI, any person could access a highly skilled therapist, tailored to their unique needs, at any time. It would be revolutionary.

But great technological innovations always come with tradeoffs, and the shift to AI therapy has deeper implications than 1 million mental health professionals potentially losing their jobs. AI therapists, when normalized, have the potential to reshape how we understand intimacy, vulnerability, and what it means to connect.

Throughout most of human history, emotional healing wasn’t something you did alone with a therapist in an office. Instead, for the average person facing loss, disappointment, or interpersonal struggles, healing was embedded in communal and spiritual frameworks. Religious figures and shamans played central roles—offering rituals, medicines, and moral guidance. In the 17th century, Quakers developed a notable practice called “clearness committees,” where community members would gather to help an individual find answers to personal questions through careful listening and honest inquiry. These communal approaches to healing came with many advantages, as they provided people with social bonds and shared meaning. But they also had a dark side: emotional struggles could be viewed as moral failings, sins, or even signs of demonic influence, sometimes leading to stigmatization and cruel treatment.

The birth of modern psychology in the West during the late 19th century marked a profound shift. When Sigmund Freud began treating patients in his Vienna office, he wasn’t merely pioneering psychoanalysis—he was transforming how people dealt with life’s everyday challenges. As sociologist Eva Illouz notes in her book, Saving the Modern Soul, Freud gave “the ordinary self a new glamour, as if it were waiting to be discovered and fashioned.” By convincing people that common struggles—from sadness to heartbreak to family conflict—required professional exploration, Freud helped move emotional healing from the communal sphere into the privacy of the therapist’s office.

With this change, of course, came progress: What were once seen as shameful moral failings became common human challenges that could be scientifically understood with the help of a professional. Yet, it also turned healing into more of a solitary endeavor—severed from the community networks that had long been central to human support.

In the near future, AI therapy could take Freud’s individualized model of psychological healing to its furthest extreme. Emotional struggles would no longer just be addressed privately with another person, a professional outside the community; they might be worked through without any human contact at all.

On the surface, this won’t be entirely bad. AI therapists will be much cheaper. They’ll also be available 24/7—never needing a holiday, a sick day, or maternity leave. They won’t need to end a session abruptly at the 50-minute mark or run late because of a chatty client. And with AIs, you’ll feel free to express yourself in any way you want, without any of the self-consciousness you might feel when face-to-face with a real, flesh-and-blood human. As one 2024 study showed, people felt less fear of judgment when interacting with chatbots. In other words, all the friction inherent in working with a human professional would disappear.

What many people don’t realize about therapy, however, is that those subtle, uncomfortable moments of friction—when the therapist sets a boundary, cancels a session last minute, or says the wrong thing—are just as important as the advice or insights they offer. These moments often expose clients’ habitual ways of relating: an avoidant client might shut down, while someone with low self-esteem might assume their therapist hates them. But this discomfort is where the real work begins. A good therapist guides clients to break old patterns—expressing disappointment instead of pretending to be okay, asking for clarification instead of assuming the worst, or staying engaged when they’d rather retreat. This work ripples far beyond the therapy room, equipping clients with the skills to handle the messiness of real relationships in their day-to-day lives.

What happens to therapy when we take the friction out of it? The same question could be applied to all our relationships. As AI companions become our default source of emotional support—not just as therapists, but also as friends and romantic partners—we risk growing increasingly intolerant of the challenges that come with human connection. After all, why wrestle with a friend’s limited availability when an AI is always there? Why navigate a partner’s criticism when an AI has been trained to offer perfect validation? The more we turn to these perfectly attuned, always-available algorithmic beings, the less patience we may have for the messiness and complexity of real, human relationships.

Last year, in a talk at the Wisdom and AI Summit, MIT professor and sociologist Sherry Turkle said, “With a chatbot friend, there’s no friction, second-guessing, or ambivalence. No fear of being left behind… My problem isn’t the conversation with machines—but how it entrains us to devalue what it is to be a person.” Turkle alludes to an important point: the very challenges that make relationships difficult are also what make them meaningful. It’s in moments of discomfort—when we navigate misunderstandings or repair after conflict—that intimacy grows. These experiences, whether with therapists, friends, or partners, teach us how to trust and connect on a deeper level. If we stop practicing these skills because AI offers a smoother, more convenient alternative, we may erode our capacity to form meaningful relationships.

The rise of AI therapy isn’t just about therapists getting replaced. It’s about something much bigger—how we, as a society, choose to engage with one another. If we embrace frictionless AI over the complexity of real human relationships, we won’t just lose the need for therapists—we’ll lose the ability to tolerate the mistakes and foibles of our fellow humans.

Moments of tender awkwardness, of disappointment, of inevitable emotional messiness, aren’t relational blips to be avoided; they’re the foundation of connection. And in a world where the textured, imperfect intricacies of being human are sanitized out of existence, it’s not just therapists who risk obsolescence—it’s all of us.
