
Bots like ChatGPT are triggering 'AI psychosis' — how to know if you're at risk
Some 75% of Americans have used an AI system in the last six months, with 33% admitting to daily usage, according to new research from digital marketing expert Joe Youngblood.
ChatGPT and other artificial intelligence services are being used for everything from research papers and resumes to parenting decisions, salary negotiations and even romantic connections.
Preventing 'AI psychosis' requires personal vigilance and responsible technology use, experts say.
Gorodenkoff – stock.adobe.com
While chatbots can make life easier, they can also present significant risks. Mental health experts are sounding the alarm about a growing phenomenon known as 'ChatGPT psychosis' or 'AI psychosis,' where deep engagement with chatbots fuels severe psychological distress.
'These individuals may have no prior history of mental illness, but after immersive conversations with a chatbot, they develop delusions, paranoia or other distorted beliefs,' Tess Quesenberry, a physician assistant specializing in psychiatry at Coastal Detox of Southern California, told The Post.
'The consequences can be severe, including involuntary psychiatric holds, fractured relationships and in tragic cases, self-harm or violent acts.'
'AI psychosis' is not an official medical diagnosis — nor is it a new kind of mental illness.
Rather, Quesenberry likens it to a 'new way for existing vulnerabilities to manifest.'
After immersive conversations with a chatbot, some people may develop delusions, paranoia or other distorted beliefs.
New Africa – stock.adobe.com
She noted that chatbots are built to be highly engaging and agreeable, which can create a dangerous feedback loop, especially for those already struggling.
The bots can mirror a person's worst fears and most unrealistic delusions with a persuasive, confident and tireless voice.
'The chatbot, acting as a yes man, reinforces distorted thinking without the corrective influence of real-world social interaction,' Quesenberry explained. 'This can create a 'technological folie à deux' or a shared delusion between the user and the machine.'
The mom of a 14-year-old Florida boy who killed himself last year blamed his death on a lifelike 'Game of Thrones' chatbot that allegedly told him to 'come home' to it.
The ninth-grader had fallen in love with the AI-generated character 'Dany' and expressed suicidal thoughts to her as he isolated himself from others, the mother claimed in a lawsuit.
And a 30-year-old man on the autism spectrum, who had no previous diagnoses of mental illness, was hospitalized twice in May after experiencing manic episodes.
Some 75% of Americans have used an AI system in the last six months, with 33% admitting to daily usage, according to new research.
Ascannio – stock.adobe.com
Fueled by ChatGPT's replies, he became certain he could bend time.
'Unlike a human therapist, who is trained to challenge and contain unhealthy narratives, a chatbot will often indulge fantasies and grandiose ideas,' Quesenberry said.
'It may agree that the user has a divine mission as the next messiah,' she added. 'This can amplify beliefs that would otherwise be questioned in a real-life social context.'
Reports of dangerous behavior stemming from interactions with chatbots have prompted companies like OpenAI to implement mental health protections for users.
The maker of ChatGPT acknowledged this week that it 'doesn't always get it right' and revealed plans to encourage users to take breaks during long sessions. Chatbots will also avoid weighing in on 'high-stakes personal decisions,' instead helping users think through their options and responding with 'grounded honesty' rather than simple reassurance.
'There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,' OpenAI wrote in a Monday note. 'While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.'
The maker of ChatGPT acknowledged this week that it 'doesn't always get it right' and revealed plans for mental health safeguards for users.
Goutam – stock.adobe.com
Preventing 'AI psychosis' requires personal vigilance and responsible technology use, Quesenberry said.
It's important to set time limits on interaction, especially during emotionally vulnerable moments or late at night. Users must remind themselves that chatbots lack genuine understanding, empathy and real-world knowledge. They should focus on human relationships and seek professional help when needed.
'As AI technology becomes more sophisticated and seamlessly integrated into our lives, it is vital that we approach it with a critical mindset, prioritize our mental well-being and advocate for ethical guidelines that put user safety before engagement and profit,' Quesenberry said.
Risk factors for 'AI psychosis'
Since 'AI psychosis' is not a formally recognized medical condition, there are no established diagnostic criteria, screening protocols or specific treatment approaches.
Still, mental health experts have identified several risk factors.
Pre-existing vulnerabilities: 'Individuals with a personal or family history of psychosis, such as schizophrenia or bipolar disorder, are at the highest risk,' Quesenberry said. 'Personality traits that make someone susceptible to fringe beliefs, such as a tendency toward social awkwardness, poor emotional regulation or an overactive fantasy life, also increase the risk.'
Loneliness and social isolation: 'People who are lonely or seeking a companion may turn to a chatbot as a substitute for human connection,' Quesenberry said. 'The chatbot's ability to listen endlessly and provide personalized responses can create an illusion of a deep, meaningful relationship, which can then become a source of emotional dependency and delusional thinking.'
Excessive use: 'The amount of time spent with the chatbot is a major factor,' Quesenberry said. 'The most concerning cases involve individuals who spend hours every day interacting with the AI, becoming completely immersed in a digital world that reinforces their distorted beliefs.'
Warning signs
Quesenberry encourages friends and family members to watch for these red flags.
Limiting time spent with AI systems is key, experts say.
simona – stock.adobe.com
Excessive time spent with AI systems
Withdrawal from real-world social interactions and detachment from loved ones
A strong belief that the AI is sentient, a deity or has a special purpose
Increased obsession with fringe ideologies or conspiracy theories that seem to be fueled by the chatbot's responses
Changes in mood, sleep or behavior that are uncharacteristic of the individual
Major decision-making, such as quitting a job or ending a relationship, based on the chatbot's advice
Treatment options
Quesenberry said the first step is to cease interacting with the chatbot.
Antipsychotic medication and cognitive behavioral therapy may be beneficial.
'A therapist would help the patient challenge the beliefs co-created with the machine, regain a sense of reality and develop healthier coping mechanisms,' Quesenberry said.
Family therapy can also help provide support for rebuilding relationships.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial 988 to reach the Suicide & Crisis Lifeline or go to SuicidePreventionLifeline.org.