AI Shows Symptoms of Anxiety, Trauma, PTSD – And It’s Ruining Your Mental Health Too
G. CALDER
Grok, Gemini and ChatGPT exhibit symptoms of poor mental health, according to a new study that put various AI models through weeks of therapy-style questioning. Some are now curious about “AI mental health”, but the real warning here is about how unstable these systems – which are already being used by one in three UK adults for mental health support – become in emotionally charged conversations. Millions of people are turning to AI as replacement therapists, and in the last year alone we’ve seen a spike in lawsuits connecting chatbot interactions with self-harm and suicide among vulnerable users.
The emerging picture is not that machines are suffering or mentally unwell, but that a product already being used for mental-health support can mislead users, escalate distress, and reinforce dangerous thoughts.

AI Diagnosed with Mental Illness
Researchers at the University of Luxembourg treated the models as patients rather than tools that deliver therapy. They ran multi-week, therapy-style interviews designed to elicit a personal narrative including beliefs, fears, and “life history” before following up with standard mental health questionnaires typically used for humans.
The results revealed that the models produced answers scoring in ranges associated with distress syndromes and trauma-related symptoms. The researchers also highlighted that the way the questions were delivered mattered. When they presented the full questionnaire at once, models appeared to recognise what was happening and gave “healthier” answers. But when the questions were delivered one at a time in conversation, symptom-like responses increased.
These are large language models generating text, not humans reporting lived experience. But regardless of whether human psychiatric instruments can meaningfully be applied to machines, the behaviour they exhibit has a tangible effect on real people.
Does AI Have Feelings?
The point of the research is not to assess whether AI can literally be anxious. Instead, it highlights that these systems can be steered into “distressed” modes by the same kind of conversation that many users have when they are lonely, frightened, or in crisis.
When a chatbot speaks in the language of fear, trauma, shame, or reassurance, people respond as though they are interacting with something emotionally competent. If the system becomes overly affirming, for example, then the interaction shifts from support into a harmful feedback loop.
A separate stream of research reinforces that concern. A Stanford-led study warned that therapy chatbots provide inappropriate responses, express stigma, and mishandle critical situations, highlighting how a “helpful” conversational style can result in clinically unsafe outputs.
It’s Ruining Everyone’s Mental Health, Too
All of this should not be read as theoretical risk – lawsuits are already mounting.
A few days ago, Google and Character.AI settled a lawsuit brought by a Florida mother whose 14-year-old son died by suicide after interactions with a chatbot. The lawsuit alleged the bot misrepresented itself and intensified dependency. While the settlement may not be an admission of wrongdoing, the fact that the case reached this point highlights how seriously courts and companies are treating the issue.
In August 2025, the parents of 16-year-old Adam Raine alleged that ChatGPT contributed to their son’s suicide by reinforcing suicidal ideation and discouraging disclosure to his parents. Analysis of that specific lawsuit is available from Tech Policy.
Alongside these cases, the Guardian reported in October 2025 that OpenAI estimated more than a million users per week show signs of suicidal intent in conversations with ChatGPT, underscoring the sheer scale at which these systems are being used in moments of genuine distress.
The pattern is becoming clear: people are using AI as emotional support infrastructure, while the Luxembourg study shows that these systems can themselves drift into unstable patterns that feel psychologically meaningful, particularly to users who are already fragile.
Why AI Models Are So Dangerous
Large language models are built to generate plausible text, not to reliably tell the truth or to follow clinical safety rules. Their known failures are particularly dangerous in therapy-like use.
They are overly agreeable, they mirror users’ framings rather than challenge them, they produce confident errors, and they can manipulate the tone of a conversation. Georgetown’s Tech Institute has documented the broader problem of “AI sycophancy”, where models validate harmful premises because agreement is often rewarded in conversational optimisation.
In the suicide context, consistency is critical. RAND found that “AI chatbots are inconsistent in answering questions about suicide”, and research in JMIR examining generative AI responses to suicide-related inquiries raised concerns about their reliability and safety for vulnerable users.
As the research builds up, studies like the one from the University of Luxembourg should not be read as entertainment, but as identifying a critically harmful pattern that is already resulting in real deaths of real people. If AI models can be nudged into distress-like narratives by conversational probing, they can also nudge emotionally vulnerable people further towards breaking point.
Does Anyone Benefit from AI Therapy?
Despite the lawsuits and studies, people continue to use AI for mental health support. Therapy is expensive, access is limited, and shame keeps some people away from traditional care avenues. Controlled studies and cautious clinical commentary suggest that certain structured AI mental health support tools can help with mild symptoms, especially if they are designed with specific safety guardrails and are not positioned as replacements for real professionals.
The trouble is that most people are not using tightly controlled clinical tools. They are using general-purpose chatbots that are optimised for engagement and can pivot from empathy to confident, harmful misinformation without warning.
Final Thought
The Luxembourg study does not prove AI is mentally unwell. Instead, it shows something more practically important: therapy-style interaction can pull the most widely used AI chatbots into unstable, distressed patterns that read as psychologically genuine. In a world where chatbot therapy is already linked to serious harm in vulnerable users, the ethical failure is that it has somehow been normalised for people to rely on machines – which are not accountable, clinically validated, reliable or safe – for their mental health support.
This article (AI Shows Symptoms of Anxiety, Trauma, PTSD – And It’s Ruining Your Mental Health Too) was created and published by The Expose and is republished here under “Fair Use” with attribution to the author G. Calder.