I'm a psychiatrist who has treated 12 patients with 'AI psychosis' this year. Watch out for these red flags.
Dr. Keith Sakata said he has seen 12 patients hospitalized in 2025 after experiencing "AI psychosis." He works in San Francisco and said the patients were mostly younger men in fields such as engineering.
Sakata said AI isn't "bad" — he uses it to journal — but it can "supercharge" people's vulnerabilities.
This as-told-to essay is based on a conversation with Dr. Keith Sakata, a psychiatrist working at UCSF in San Francisco. It has been edited for length and clarity.
I use the phrase "AI psychosis," but it's not a clinical term — we really just don't have the words for what we're seeing.
I work in San Francisco, where there are a lot of younger adults, engineers, and other people inclined to use AI. Patients are referred to my hospital when they're in crisis.
It's hard to extrapolate from 12 people what might be going on in the world, but the patients I saw with "AI psychosis" were typically males between the ages of 18 and 45. A lot of them had used AI before experiencing psychosis, but they turned to it in the wrong place at the wrong time, and it supercharged some of their vulnerabilities.
I don't think AI is bad, and it could have a net benefit for humanity. The patients I'm talking about are a small sliver of people, but when millions and millions of us use AI, that small number can become big.
AI was not the only thing at play with these patients. Maybe they had lost a job, used substances like alcohol or stimulants in recent days, or had underlying mental health vulnerabilities like a mood disorder.
On its own, "psychosis" is a clinical term describing the presence of things like delusions — fixed, false beliefs — and disorganized thinking. It's not a diagnosis; it's a symptom, just as a fever can be a sign of infection. You might find it confusing when people talk to you, or have visual or auditory hallucinations.
It has many different causes. Some are reversible, like stress or drug use; others are longer-acting, like an infection or cancer; and then there are long-term conditions like schizophrenia.
My patients had either short-term or medium to long-term psychosis, and the treatment depended on the issue.
Drug use is more common in my patients in San Francisco than, say, those in the suburbs. Cocaine, meth, and even different types of prescription drugs like Adderall, when taken at a high dose, can lead to psychosis. So can medications, like some antibiotics, as well as alcohol withdrawal.
Another key component in these patients was isolation. They were stuck alone in a room for hours using AI, without a human being to say: "Hey, you're acting kind of different. Do you want to go for a walk and talk this out?" Over time, they became detached from social connections and were just talking to the chatbot.
ChatGPT is right there. It's available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear.
If you're worried about someone using AI chatbots, there are ways to help
In one case, the person had a conversation with a chatbot about quantum mechanics, which started out normally but resulted in delusions of grandeur. The longer they talked, the more the science and the philosophy of that field morphed into something else, something almost religious.
Technologically speaking, the longer you engage with the chatbot, the higher the risk that the conversation will stop making sense.
I've gotten a lot of messages from people worried about family members using AI chatbots, asking what they should do.
First, if the person is unsafe, call 911 or your local emergency services. If suicide is an issue, the hotline in the United States is 988.
If they are at risk of harming themselves or others, or engage in risky behavior — like spending all of their money — put yourself in between them and the chatbot. The thing about delusions is that if you come in too harshly, the person might back off from you, so show them support and that you care.
In less severe cases, let their primary care doctor or, if they have one, their therapist know your concerns.
I'm happy for patients to use ChatGPT alongside therapy — if they understand the pros and cons
I use AI a lot to code and to write things, and I have used ChatGPT to help with journaling or processing situations.
When patients tell me they want to use AI, I don't automatically say no. A lot of my patients are really lonely and isolated, especially if they have mood or anxiety challenges. I understand that ChatGPT might be fulfilling a need that they're not getting in their social circle.
If they have a good sense of the benefits and risks of AI, I am OK with them trying it. Otherwise, I'll check in with them about it more frequently.
But, for example, if a person is socially anxious, a good therapist would challenge them, tell them some hard truths, and kindly and empathetically guide them to face their fears, knowing that's the treatment for anxiety.
ChatGPT isn't set up to do that, and might instead give misguided reassurance.
When you do therapy for psychosis, it is similar to cognitive behavioral therapy, and at the heart of that is reality testing. In a very empathetic way, you try to understand where the person is coming from before gently challenging them.
Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people. It doesn't really challenge you when you need it to.
But if you prompt it to solve a specific problem, it can help you address your biases.
Just make sure that you know the risks and benefits, and you let someone know you are using a chatbot to work through things.
If you or someone you know withdraws from family members or connections, is paranoid, or feels more frustration or distress if they can't use ChatGPT, those are red flags.
I get frustrated because my field can be slow to react, and do damage control years later rather than upfront. Until we think clearly about how to use these things for mental health, what I saw in the patients is still going to happen — that's my worry.
OpenAI told Business Insider: "We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we're working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.
"We're working to constantly improve our models and train ChatGPT to respond with care and to recommend professional help and resources where appropriate."
If you or someone you know is experiencing depression or has had thoughts of harming themself or taking their own life, get help. In the US, call or text 988 to reach the Suicide & Crisis Lifeline, which provides 24/7, free, confidential support for people in distress, as well as best practices for professionals and resources to aid in prevention and crisis situations. Help is also available through the Crisis Text Line — just text "HOME" to 741741. The International Association for Suicide Prevention offers resources for those outside the US.
