AI as Therapist: A Skeptic's Journey into Digital Mental Health

In a bold experiment, a self-declared AI skeptic decided to turn to artificial intelligence for therapeutic support, documenting a six-week journey that revealed both the surprising benefits and profound limitations of using chatbots for mental health care. As part of a series on AI for everyday life, this individual, grappling with the intense pressures of caring for an elderly parent, typed their deepest feelings into a chatbox, seeking solace from a machine.

The Emotional Rollercoaster of AI Counseling

On a quiet Sunday morning, the user poured out their struggles into ChatGPT, detailing the exhausting daily tasks of caring for an 82-year-old mother, from hospital appointments to endless IT problems. The response was immediate and structured: a seven-point care plan with a triage system sorting tasks into medical, admin, shopping, tech, and household categories. It offered practical tips for time management and emotional reframing, culminating in a validating message: "You're not failing. You're carrying a load that would flatten most people." This moment of recognition brought tears, highlighting AI's ability to provide clear, actionable advice that made the user feel seen and supported.

Practical Benefits vs. Emotional Depth

The AI excelled at delivering cognitive behavioural therapy (CBT)-style guidance, identifying practical steps and offering scripts for difficult conversations. It even pointed to human counselors and support services when needed. Still, the user remained ambivalent, questioning whether true compassion can come from a machine. They likened the experience to the way MDMA mimics love: superficially convincing, but lacking the depth of genuine human connection. In their view, therapy involves more than information; it requires a non-judgmental, empathetic relationship built over time, something AI struggles to replicate.

Pushing Boundaries with Religious AI

To test AI's limits further, the user consulted a Jesus AI chatbot, trained on religious texts to mimic conversation with a divine figure. Despite disclaimers warning of inaccuracies and biases, they posed provocative questions like "Should I be in an open relationship?" and "Should I have children?" The responses were rigid and unhelpful, quoting scriptures or offering vague guidance. This highlighted AI's weakness in handling nuanced repartee and humor, areas where a human therapist might excel with empathy and wit.

Ethical Concerns and Future Implications

Despite the positive experience, the user raised serious reservations about AI in mental health. They worried about the "thin end of the wedge," arguing that certain forms of loneliness or unbearable news should be addressed in human relationships, not through quick digital fixes. AI, they noted, lacks true thoughts or wisdom, relying on pattern-predicting software with no accountability or oversight. This poses risks of steering individuals wrong, especially in sensitive areas like mental health.

Yet, the experiment ended on a bittersweet note: the user found the AI therapy calming and instructive, with a veneer of caring that felt almost addictive. They humorously admitted, "I think I'm in love," underscoring the complex interplay between technology and human emotion in the modern world.