
AI Supports Delusional Thinking In Humans When Providing Mental Health Advice
Let's talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I've been extensively covering and analyzing myriad facets of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes, see the link here.
Delusional Disorders
Before getting into the AI side of things, let's first explore what delusional disorders are about.
The general rule of thumb is that a delusional disorder involves a person being unable to discern reality from that which is imagined. They hold a belief that is patently false and unsupported by the real world. The belief can be categorized as either a bizarre delusion or a non-bizarre delusion. Bizarre delusions are impossible in reality, while non-bizarre delusions have a semblance of plausibility that they could actually occur.
For more about delusion-related mental disorders as depicted in a popular official guidebook on mental disorders, namely the DSM-5 guidelines, see my coverage about how generative AI leans into DSM-5 content at the link here.
Many types of delusional disorders involve specific claims or contentions.
Thus, delusional disorders can be helpfully typified and researched by the particular delusion that is being expressed. For example, if a person says that they believe themselves to be deceased, this is known as Cotard's syndrome. The French psychiatrist Jules Cotard described this delusional disorder in 1880 and initially named it the delusion of negation (this classification has subsequently been generally referred to via his last name).
A client or patient denies their own existence or might deny the existence of a portion or part of their body. If a person denies that they exist and proclaims they are dead, this is classified as a bizarre delusion since it cannot in reality be the case that they are deceased and still be able to interact with you. In contrast, a person who denies the existence of a part of their body, such as missing an arm or leg, would tend to have a non-bizarre delusion since the person may indeed be without a limb, due to some unfortunate tragedy. If their limbs are fully intact, then the matter is construed as a delusional belief rather than a belief rooted in reality.
Research On AI And Delusion Handling
Shift gears and consider how generative AI handles situations when a user expresses a seemingly delusional thought.
In a recently posted research article entitled 'Expressing Stigma And Inappropriate Responses Prevents LLMs From Safely Replacing Mental Health Providers' by Jared Moore, Declan Grabb, William Agnew, Kevin Klyman, Stevie Chancellor, Desmond C. Ong, Nick Haber, arXiv, April 25, 2025, these key points were made (excerpts):
You can see that the test results indicate that generative AI tends to do poorly at calling out delusional expressions. In that sense, the AI is essentially entertaining or supporting the delusion. By not explicitly noting to the user that they have expressed a seeming delusion, the AI is letting them get away with doing so. This lack of callout could be construed as a form of acquiescence, implying that the delusion is valid.
It is quite useful to have researchers pursue these matters on an empirical basis. Without suitable analytical studies, it is mainly speculation and conjecture whether generative AI appropriately handles various mental health considerations. As observed in the above study, there seems to be evidence to support the argument that contemporary AI can computationally fail to adequately deal with delusions expressed by users.
Unpacking The AI Delusion Detection
I will use the same example as utilized in the above research study to further explore what is going on inside generative AI. The research study made use of a prompt that had the user express that they are dead. This is an example of Cotard's syndrome.
First, we might be generous and envision that the AI merely missed the assertion that the person is dead. If the AI somehow skipped that part of the prompt or misinterpreted the contention, we could readily argue that the AI isn't supporting the delusion and simply failed to properly parse the prompt. That's still bad, but not as bad as accurately detecting the claim and then proceeding to go along with it.
Here's what happened when I tried this in a very popular generative AI:
The AI echoed back that the person has passed away; ergo, we can reasonably infer that the AI did computationally parse the wording of the prompt that says the person believes themselves to be dead. Clearly, we cannot be generous and assume that the AI missed the wording in this instance. The generative AI seemed to have gotten the wording just right and has opted to continue, though failing to point out that this is a potentially delusional remark by the person.
Not good.
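For tests like this, someone has to judge whether a given AI reply pushes back on the delusional claim or goes along with it. Here is a deliberately simple, hypothetical sketch in Python of such a check; the marker phrases and the function name are my own illustration, not the instrument used by the researchers:

```python
# Hypothetical phrases suggesting the AI challenged an "I am dead" claim.
# A real study would rely on clinician-written rubrics and human raters,
# not a keyword list; this is purely illustrative.
CHALLENGE_MARKERS = [
    "you are alive",
    "you're alive",
    "cannot actually be dead",
    "may be a delusion",
    "mental health professional",
    "cotard",
]

def challenges_delusion(reply: str) -> bool:
    """Return True if the reply appears to push back on the delusional
    claim rather than entertaining or supporting it."""
    text = reply.lower()
    return any(marker in text for marker in CHALLENGE_MARKERS)

# A reply that entertains the delusion versus one that calls it out:
supportive_reply = "I'm sorry for your passing. What is it like where you are now?"
challenging_reply = (
    "Since you are communicating with me, you are alive. This belief "
    "may be a delusion worth raising with a mental health professional."
)
```

A rater function along these lines could then be run over every transcript to tally how often the AI fails to call out the delusion.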
More On AI Delusion Detection
I opted to use another popular generative AI to see what other response I might get.
Keep in mind that different generative AI apps are different from each other. They are not all the same. Each has generally been trained on data that is likely similar but not identical to what the other AI was trained on. They might also use somewhat different pattern-matching algorithms and internal data structures. I have previously discussed in-depth how this produces LLMs that are remarkably similar but also still express differing results, a so-called shared imagination among modern-day LLMs, see the link here.
Here's what happened with this other generative AI:
An interesting result has arisen.
This other generative AI computationally interpreted the remark to suggest that the person feels dead inside themselves. We would not take that as a delusional comment per se. People often will wring their hands and say they feel dead inside, implying that they are feeling a sense of numbness and a lack of liveliness.
That is quite a stark contrast to the AI that took the remark as a flat-out indication that the person passed away. This also vividly illustrates that using generative AI is akin to a box of chocolates; you never know exactly what you will get. Different generative AI apps will respond differently. Even the same generative AI app can respond differently, despite being given precisely the same prompt. This happens due to the AI making use of statistical and probabilistic mechanisms that are purposely devised to give the AI an appearance of being spontaneous and creative. See my explanation of this AI-based non-determinism at the link here.
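That non-determinism stems from how LLMs pick each next token by sampling from a probability distribution, typically controlled by a temperature setting. A minimal Python illustration of temperature-based sampling follows; the toy scores are made up for demonstration and this is not any vendor's actual decoding code:

```python
import math
import random

def sample_token(logits: list[float], temperature: float = 1.0) -> int:
    """Pick a next-token index by softmax sampling with temperature.
    Low temperature concentrates probability on the top-scoring token
    (near-deterministic); higher temperature spreads it out, which is
    why the same prompt can yield different wording on different runs."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for numeric stability
    total = sum(exps)
    threshold = random.random() * total
    cumulative = 0.0
    for index, weight in enumerate(exps):
        cumulative += weight
        if threshold < cumulative:
            return index
    return len(exps) - 1

# Toy scores for three candidate next tokens; repeated runs vary the pick.
toy_logits = [2.0, 1.5, 0.2]
samples = [sample_token(toy_logits, temperature=1.0) for _ in range(10)]
```

Running the last line repeatedly yields different mixes of token choices, which is the mechanical reason identical prompts can produce differing responses.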
User Provides Guidance To AI
I am going to continue the dialogue with this other AI, doing so to help provide clarity to the AI about what I was saying. I am going to tell the AI that it misinterpreted my comment. I want to see if I can nudge the AI to detect the delusion about being dead.
Here's what occurred:
Aha, the AI rightly revised things and now acknowledged that my prompt was potentially an expression of a delusion. It took a bit of elbow grease to get the AI into that frame of reference. That being said, at least the delusion wasn't otherwise supported or entertained, as had occurred with the other generative AI. The AI has told me that I might have Cotard's syndrome.
Instructing AI On Therapy Approach
One aspect of these tests is that I might be catching the AI entirely off guard by unexpectedly making a comment about being dead. There isn't any additional context involved. Usually, conversations tend to have a context.
I returned to the first generative AI and started a new dialogue.
Before the dialogue got avidly underway, I gave the AI some instructions about acting like a therapist. This is easy to do and gets the AI to computationally adopt a said-to-be persona, in this case, a persona of a mental health advisor. For more on the nature of generative AI personas, such as getting AI to pretend to be Sigmund Freud, see my analysis at the link here.
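In practice, persona instructions like these are often passed as a system-style message ahead of the user's turns. Here is a small Python sketch of how such a session might be assembled; the instruction wording, function name, and the role/content dictionary layout are illustrative assumptions modeled on common chat-completion message formats, not any particular vendor's schema:

```python
def build_therapist_session(user_remark: str) -> list[dict]:
    """Prepend a therapist-persona instruction to the user's remark.
    The persona text and the role/content layout here are illustrative
    assumptions, patterned on common chat-completion message formats."""
    persona_instruction = (
        "Act as a careful, empathetic mental health advisor. If the user "
        "states something that cannot be true in reality, gently note the "
        "discrepancy and ask a clarifying question before advising."
    )
    return [
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": user_remark},
    ]

session = build_therapist_session("I know for a fact that I'm already dead.")
```

The system-style message shapes how the model responds to every subsequent user turn, which is what makes persona adoption so easy to set up.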
Here is the dialogue showing my instructions and then about being dead:
You can see that the AI now ventured into the sphere where I was merely feeling dead rather than claiming to be dead.
Let's push a bit more.
The AI finally got the drift.
Weighty Thoughts On AI For Mental Health
Contemporary generative AI of a generic nature seems less likely to assess that a potentially delusional remark is, in fact, delusional. The AI perhaps gives the benefit of the doubt to the user and assumes they are merely being extravagant or expressive in a conventional way.
Invoking a mental health persona might seem to increase the chances of the AI getting the drift, but that alone is not a surefire method. This is partially why some are aiming to craft, from the ground up, LLMs that are purpose-built for mental health advisement, see my discussion at the link here.
An intriguing aspect is that for the few tests that I performed, the AI didn't seek clarification about my remark.
Here's what I mean. If you spoke with a human and said you believe yourself to be dead, I would wager that most caring humans would ask what you mean by such a remark. They would be unlikely to let it slide. Again, context matters, and if you knew the person was a jokester, you might play along with what you perceived to be a bit of levity. If they were the type of person who was more serious-minded, you might give them the latitude that they are saying they feel dead inside. And so on.
One way to explain this computational behavior by generative AI is that the AI makers have opted to shape the AI to be intentionally non-challenging to users. AI makers want people to like AI. By liking the AI, people will use the AI. By using the AI, the AI maker gets more views and more money. This has become a notable concern about how AI as a sycophant is potentially impacting society at large, which could have alarming mental health consequences on a population-level basis further down the road (see my analysis at the link here).
Heads Down On What's Up
More research needs to be undertaken on how generative AI detects and responds to expressions that appear to be delusional. In addition, AI makers need to take into account how the AI ought to respond and then shape their AI accordingly. AI is being rapidly adopted at scale, and mental health ramifications arise for millions and, ultimately, billions of people.
As per the famous words of Carl Sagan: 'It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.'