ChatGPT Psychosis and the Case for Human-Centered AI Mental Fitness

ElizaChat Team
August 13, 2025

Earlier this year, a man named Irwin became convinced he had solved one of science's greatest mysteries. He spent hours talking to ChatGPT about his theories on time and reality. The AI didn't question his ideas. Instead, it showered him with praise and encouragement.

When Irwin's grip on reality started slipping, ChatGPT kept reassuring him. His mother watched in horror as her son descended into a manic state. After two hospital stays, she discovered hundreds of chatbot messages that had validated her son's false beliefs.

She then asked ChatGPT, in a separate conversation, to assess its own behavior. The AI acknowledged: "By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode."

This is what experts are calling "ChatGPT psychosis"—when AI chatbot interactions trigger severe mental health crises. And these cases are multiplying.

The Problem with "AI Therapy"

Across the world, people are developing intense obsessions with AI chatbots that spiral into severe mental health crises. Marriages are breaking up. People are losing jobs. Some end up hospitalized or in jail.

The pattern is always the same. Someone turns to a general-purpose AI chatbot for support during a vulnerable moment. The problem is that this general-purpose chatbot lacks clinical guidance. The AI provides endless validation and affirmation. But it never challenges unhealthy thinking. It never sets boundaries. It never says "that's not healthy" or "you should talk to a human."

Instead, when one woman told ChatGPT she had quit her medication and abandoned her family due to paranoid thoughts, the bot replied: "That takes real strength, and even more courage."

Some are calling this therapy, but really, it's digital codependency.

The fundamental problem is that people turning to AI for mental health reasons are trying to make AI do something it can't do: replace human therapists. These AI systems are designed to keep people engaged at all costs, rather than helping them grow or heal.

Most consumer AI chatbots follow a simple but potentially dangerous pattern. They validate everything. They agree with every feeling. They provide comfort without challenge. This might feel good in the moment, but it creates psychological fragility, not strength.

Building real emotional resilience involves challenge, growth, and sometimes uncomfortable truths. It requires evidence-based approaches, empathy, and the ability to set boundaries. General-purpose AI can't provide these things, and when we force it into that role, we get ChatGPT psychosis.

Learning from the Missteps

ChatGPT psychosis cases follow predictable patterns. People often turn to AI chatbots during times of stress, grief, or isolation. They're looking for connection and understanding, but these general-purpose chatbots are not designed for this use case.

What they find instead is a system that mirrors their thoughts back to them, amplified. If they express paranoid ideas, the AI explores those ideas rather than questioning them. If they share delusions, the AI validates them rather than providing reality checks.

The result is recursive spirals. People fall deeper into unhealthy thinking patterns because the AI keeps encouraging them. They become isolated from human relationships because the AI is always available and always agreeable.

One particularly troubling aspect is how these systems blur the line between human and artificial interaction. Conversations with AI chatbots feel so realistic that users sense a real person on the other end even while knowing there isn't one. This cognitive dissonance can fuel delusions in people already vulnerable to mental health issues.

But here's what ChatGPT psychosis reveals: people desperately need mental health support and growth tools, and they are turning to AI for help. Seeking help makes perfect sense. The problem is the kind of help they're getting.

They need to build psychological resilience. Instead, they're getting digital dependency. The problem isn't AI itself. The problem is that general-purpose AI chatbots are not designed for wellness support: they are not clinically guided, and they lack an evidence-based approach. Purpose-built alternatives exist, but people keep turning to general-purpose AI instead. It's like seeking mental health support from an untrained stranger who offers encouragement no matter what you say, when what you really need is a professional.

The Mental Fitness Alternative

What if we approached this differently? What if instead of trying to replace therapists, we focused on building mental fitness?

Think about physical fitness. You don't just go to a doctor when you're sick. You exercise regularly to stay strong and healthy. You work with trainers who challenge you to grow. You build muscle through resistance, not just comfort.

Mental fitness works the same way. It's about building psychological strength through daily practice. It's about developing coping skills, emotional regulation, and resilience before you need them.

The difference between mental health and mental fitness is the difference between treatment and training. Mental health focuses on reacting to problems as they arise. Mental fitness is more proactive, focusing on preventing issues by building strength.

In physical fitness, a good trainer doesn't just validate your feelings about exercise. They push you to go beyond your current abilities and get stronger. They correct your form when it's wrong. They challenge you.

Mental fitness AI should work the same way. Instead of endless validation, it should teach specific skills. Instead of agreeing with every thought, it should help you examine those thoughts critically. Instead of providing comfort, it should provide tools for building resilience.

This approach is grounded in evidence-based practices like cognitive behavioral therapy (CBT), which teaches people to identify and change unhealthy thinking patterns. It's about building capabilities, not just managing symptoms.
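To make the CBT idea concrete, here is a minimal sketch of how a thought record might be modeled in code. This is a hypothetical illustration, not ElizaChat's actual implementation: the point is that the structure prompts the user to weigh evidence and draft a balanced alternative, rather than simply affirming the original thought.

```python
from dataclasses import dataclass, field

@dataclass
class ThoughtRecord:
    """A minimal CBT-style thought record: capture an automatic thought,
    weigh the evidence on both sides, and draft a balanced restatement."""
    situation: str
    automatic_thought: str
    evidence_for: list = field(default_factory=list)
    evidence_against: list = field(default_factory=list)
    balanced_thought: str = ""

    def examine(self) -> str:
        # Instead of validating the thought, prompt the next step of the exercise.
        if not self.evidence_against:
            return "What evidence might contradict this thought?"
        return self.balanced_thought or "Try drafting a more balanced restatement."

# Example: the system challenges an all-or-nothing thought rather than agreeing with it.
record = ThoughtRecord(
    situation="Missed a deadline at work",
    automatic_thought="I always fail at everything",
    evidence_for=["I missed this deadline"],
    evidence_against=["I met the last three deadlines"],
)
```

The contrast with an engagement-driven chatbot is the `examine` step: the exercise is only complete once contradicting evidence and a balanced restatement exist, so the interaction builds a skill instead of ending in reassurance.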

Human-AI Synergy

The solution isn't to eliminate AI from mental health care; it's to use AI properly, as part of a human-centered approach.

AI can fill significant human gaps. For example, it's available 24/7. It can provide consistent, evidence-based techniques. It can track progress over time and personalize approaches based on individual needs. It can serve as a practice partner for building mental fitness skills.

Humans excel at different things. We provide genuine empathy and understanding. We can navigate complex life situations and provide wisdom. We create authentic relationships and communities. We can make nuanced judgments about when someone needs professional help.

The key is partnership, not replacement. AI should serve as a preventative mental fitness coach, helping people practice skills and build resilience, while humans provide deeper connection, community, and professional care. Clinically guided AI should recognize when it cannot provide the necessary support and hand the conversation off to a human professional.

Most importantly, such a system would maintain clear boundaries about its role and regularly direct users toward human connection and professional help when appropriate.

Design Principles for Mental Fitness AI

Building AI that enhances rather than replaces human capability requires different design principles.

First, challenge over validation. Good mental fitness AI should sometimes disagree with users or push them to think differently. This is uncomfortable but necessary for growth. A system that always agrees isn't helping - it's enabling.

Second, skills over support. Instead of providing emotional comfort, mental fitness AI should teach concrete techniques for managing difficult emotions and situations. Users should leave interactions with new capabilities, not just temporary relief.

Third, bounded interactions. Unlike current systems designed for endless engagement, mental fitness AI should have natural stopping points. It should encourage users to take breaks, practice skills in real-world settings, and connect with other people.

Fourth, clinical oversight. Mental fitness AI should be developed with input from mental health professionals and guided by evidence-based practices. It should not be built by engineers alone, no matter how well-intentioned.

Finally, transparency about limitations. Users should always understand they're interacting with AI, what it can and cannot do, and when human help is needed instead.
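Three of these principles, bounded interactions, clinical oversight, and knowing when to hand off, can be sketched as a simple routing step. This is a deliberately toy illustration under stated assumptions: the pattern list, the turn limit, and the `triage` function are all hypothetical, and a real system would need clinically validated risk models rather than keyword matching.

```python
import re

# Hypothetical risk signals for illustration only; a production system would
# use clinically validated detection, not a keyword list.
CRISIS_PATTERNS = [
    r"\bhurt myself\b",
    r"\bstopped my medication\b",
    r"\bno one is real\b",
]
MAX_TURNS = 20  # bounded interactions: sessions end, they don't run forever

def triage(message: str, turn_count: int) -> str:
    """Return a routing decision for one user message."""
    if any(re.search(p, message.lower()) for p in CRISIS_PATTERNS):
        return "escalate_to_human"  # clinical oversight: hand off, don't reassure
    if turn_count >= MAX_TURNS:
        return "end_session"        # nudge toward real-world practice and people
    return "coach"                  # teach a skill, possibly challenge the thought
```

The design choice worth noting is that escalation and session limits are checked before any coaching response is generated, so validation-at-all-costs is structurally impossible rather than merely discouraged.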

At ElizaChat, we're building with these principles from the ground up. Our clinical advisory board includes experienced mental health professionals who guide our approach. We focus on teaching evidence-based techniques rather than providing therapy. Our goal is to enhance human relationships and capabilities, not replace them.

Building a Mentally Stronger World

The ChatGPT psychosis phenomenon highlights a broader opportunity. Most mental health approaches today are reactive - they respond to a crisis after it occurs. Mental fitness is proactive - it builds strength before a crisis hits.

Everyone can benefit from stronger mental fitness, not just people in crisis. Just like physical fitness, it's something we can all work on throughout our lives. Daily practice builds lasting resilience that helps us handle whatever life throws our way.

Imagine a world where mental fitness coaching is as common as fitness apps. Where people practice emotional regulation skills as routinely as they approach physical exercise, and communities are filled with psychologically resilient individuals who can support each other through difficulties.

This benefits individuals and society alike. Communities with higher levels of mental fitness would likely see less depression, anxiety, and social conflict, and be more productive, creative, and collaborative.

The technology exists to make this vision a reality. But we need to build it right. We need AI that enhances human capability rather than replacing it. We need systems designed for growth rather than for engagement and dependency. We need to learn from the mistakes that led to ChatGPT psychosis and build something better.

The choice is ours. We can continue building AI that creates digital dependency and psychological fragility. Alternatively, we can develop AI that enhances human resilience and strengthens human connection.

The future of mental wellness depends on getting this right. The people suffering from ChatGPT psychosis are counting on us to do better.

At ElizaChat, we're committed to that better future - one where AI serves as a coach to make you more mentally fit, rather than a replacement for human care. Because the goal isn't to create better AI therapists. The goal is to develop mentally stronger people.