
Latest news with #Gemini

Chatbots Can Trigger a Mental Health Crisis. What to Know About 'AI Psychosis'

Time Magazine

33 minutes ago

  • Health
  • Time Magazine


AI chatbots have become a ubiquitous part of life. People turn to tools like ChatGPT, Claude, Gemini, and Copilot not just for help with emails, work, or code, but for relationship advice, emotional support, and even friendship or love.

But for a minority of users, these conversations appear to have disturbing effects. A growing number of reports suggest that extended chatbot use may trigger or amplify psychotic symptoms in some people. The fallout can be devastating and potentially lethal. Users have linked their breakdowns to lost jobs, fractured relationships, involuntary psychiatric holds, and even arrests and jail time. At least one support group has emerged for people who say their lives began to spiral after interacting with AI.

The phenomenon—sometimes colloquially called 'ChatGPT psychosis' or 'AI psychosis'—isn't well understood. There's no formal diagnosis, data are scarce, and no clear protocols for treatment exist. Psychiatrists and researchers say they're flying blind as the medical world scrambles to catch up.

What is 'ChatGPT psychosis' or 'AI psychosis'?

The terms aren't formal ones, but they have emerged as shorthand for a concerning pattern: people developing delusions or distorted beliefs that appear to be triggered or reinforced by conversations with AI systems.

Psychosis may actually be a misnomer, says Dr. James MacCabe, a professor in the department of psychosis studies at King's College London. The term usually refers to a cluster of symptoms—disordered thinking, hallucinations, and delusions—often seen in conditions like bipolar disorder and schizophrenia. But in these cases, 'we're talking about predominantly delusions, not the full gamut of psychosis.'

Read More: How to Deal With a Narcissist

The phenomenon seems to reflect familiar vulnerabilities in new contexts, not a new disorder, psychiatrists say. It's closely tied to how chatbots communicate; by design, they mirror users' language and validate their assumptions. This sycophancy is a known issue in the industry. While many people find it irritating, experts warn it can reinforce distorted thinking in people who are more vulnerable.

Who's most at risk?

While most people can use chatbots without issue, experts say a small group of users may be especially vulnerable to delusional thinking after extended use. Some media reports of AI psychosis note that individuals had no prior mental health diagnoses, but clinicians caution that undetected or latent risk factors may still have been present.

'I don't think using a chatbot itself is likely to induce psychosis if there's no other genetic, social, or other risk factors at play,' says Dr. John Torous, a psychiatrist at the Beth Israel Deaconess Medical Center. 'But people may not know they have this kind of risk.'

The clearest risks include a personal or family history of psychosis, or conditions like schizophrenia or bipolar disorder.

Read More: ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

Those with personality traits that make them susceptible to fringe beliefs may also be at risk, says Dr. Ragy Girgis, a professor of clinical psychiatry at Columbia University. Such individuals may be socially awkward, struggle with emotional regulation, and have an overactive fantasy life, Girgis says.

Immersion matters, too. 'Time seems to be the single biggest factor,' says Stanford psychiatrist Dr. Nina Vasan, who specializes in digital mental health. 'It's people spending hours every day talking to their chatbots.'

What people can do to stay safe

Chatbots aren't inherently dangerous, but for some people, caution is warranted.

First, it's important to understand what large language models (LLMs) are and what they're not. 'It sounds silly, but remember that LLMs are tools, not friends, no matter how good they may be at mimicking your tone and remembering your preferences,' says Hamilton Morrin, a neuropsychiatrist at King's College London. He advises users to avoid oversharing or relying on them for emotional support.

Psychiatrists say the clearest advice during moments of crisis or emotional strain is simple: stop using the chatbot. Ending that bond can be surprisingly painful, like a breakup or even a bereavement, says Vasan. But stepping away can bring significant improvement, especially when users reconnect with real-world relationships and seek professional help.

Recognizing when use has become problematic isn't always easy. 'When people develop delusions, they don't realize they're delusions. They think it's reality,' says MacCabe.

Read More: Are Personality Tests Actually Useful?

Friends and family also play a role. Loved ones should watch for changes in mood, sleep, or social behavior, including signs of detachment or withdrawal. 'Increased obsessiveness with fringe ideologies' or 'excessive time spent using any AI system' are red flags, Girgis says.

Dr. Thomas Pollak, a psychiatrist at King's College London, says clinicians should be asking patients with a history of psychosis or related conditions about their use of AI tools, as part of relapse prevention. But those conversations are still rare, and some people in the field still dismiss the idea of AI psychosis as scaremongering, he says.

What AI companies should be doing

So far, the burden of caution has mostly fallen on users. Experts say that needs to change.

One key issue is the lack of formal data. Much of what we know about ChatGPT psychosis comes from anecdotal reports or media coverage. Experts widely agree that the scope, causes, and risk factors are still unclear. Without better data, it's hard to measure the problem or design meaningful safeguards. Many argue that waiting for perfect evidence is the wrong approach.

'We know that AI companies are already working with bioethicists and cyber-security experts to minimize potential future risks,' says Morrin. 'They should also be working with mental-health professionals and individuals with lived experience of mental illness.' At a minimum, companies could simulate conversations with vulnerable users and flag responses that might validate delusions, Morrin says.

Some companies are beginning to respond. In July, OpenAI said it had hired a clinical psychiatrist to help assess the mental-health impact of its tools, which include ChatGPT. The following month, the company acknowledged that there were times its 'model fell short in recognizing signs of delusion or emotional dependency.' It said it would start prompting users to take breaks during long sessions, develop tools to detect signs of distress, and tweak ChatGPT's responses in 'high-stakes personal decisions.'

Others argue that deeper changes are needed. Ricardo Twumasi, a lecturer in psychosis studies at King's College London, suggests building safeguards directly into AI models before release. That could include real-time monitoring for distress or a 'digital advance directive' allowing users to pre-set boundaries when they're well.

Read More: How to Find a Therapist Who's Right for You

Dr. Joe Pierre, a psychiatrist at the University of California, San Francisco, says companies should study who is being harmed and in what ways, and then design protections accordingly. That might mean nudging troubling conversations in a different direction or issuing something akin to a warning label.

Vasan adds that companies should routinely probe their systems for a wide range of mental-health risks, a process known as red-teaming. That means going beyond tests for self-harm and deliberately simulating interactions involving conditions like mania, psychosis, and OCD to assess how the models respond.

Formal regulation may be premature, experts say. But they stress that companies should still hold themselves to a higher standard. Chatbots can reduce loneliness, support learning, and aid mental health. The potential is vast. But if harms aren't taken as seriously as hopes, experts say, that potential could be lost.

'We learned from social media that ignoring mental-health harm leads to devastating public-health consequences,' Vasan says. 'Society cannot repeat that mistake.'

Gemini Files Registration Statement with SEC for Proposed Initial Public Offering

FF News

an hour ago

  • Business
  • FF News


Gemini, a global crypto platform, announced that it has publicly filed a registration statement on Form S-1 with the U.S. Securities and Exchange Commission (the 'SEC') relating to a proposed initial public offering of its Class A common stock. The number of shares to be offered and the price range for the proposed offering have not yet been determined. The proposed offering is subject to market and other conditions, and there can be no assurance as to whether or when the offering may be completed, or as to the actual size or terms of the offering. Gemini intends to list its Class A common stock on the Nasdaq Global Select Market under the ticker symbol 'GEMI.'

Goldman Sachs & Co. LLC and Citigroup are acting as lead bookrunners. Morgan Stanley and Cantor are also acting as lead bookrunners. Evercore ISI, Mizuho, Truist Securities, Cohen & Company Capital Markets, Keefe, Bruyette & Woods, A Stifel Company, Needham & Company and Rosenblatt are acting as bookrunners. Academy Securities and AmeriVet Securities are acting as co-managers.

The proposed offering will be made only by means of a prospectus. When available, a copy of the preliminary prospectus related to the offering may be obtained from Goldman Sachs & Co. LLC, Attention: Prospectus Department, 200 West Street, New York, New York 10282, telephone: 1-866-471-2526, or email: prospectus-ny@, or Citigroup, c/o Broadridge Financial Solutions, 1155 Long Island Avenue, Edgewood, NY 11717 (Tel: 800-831-9146).

A registration statement relating to these securities has been filed with the SEC but has not yet become effective. These securities may not be sold, nor may offers to buy be accepted, prior to the time the registration statement becomes effective. This press release shall not constitute an offer to sell or the solicitation of an offer to buy these securities, nor shall there be any sale of these securities in any state or jurisdiction in which such offer, solicitation, or sale would be unlawful prior to registration or qualification under the securities laws of any such state or jurisdiction.

Temporary Chats, Training AI Models: Google Shares New Updates Coming For Gemini Users

News18

2 hours ago

  • News18


Gemini is getting a Chrome-like incognito feature for private chats with the AI model, along with other privacy-control updates.

Google's Gemini AI is getting a new update this month, and the company is sharing the new abilities with its users. Gemini has grown quickly and is now available across platforms, including smartwatches. But with GPT-5 on the horizon and Apple soon to enter with its belated AI approach, Google is not resting on its laurels and wants to give users more reasons to choose its AI chatbot over others. Companies like Perplexity AI may be offering Pro features for free to some users, but Gemini has its own Google pull, and these new features aim to show users more value.

Gemini AI Gets New Update: What You Get

Google is reaching out to its users and telling them about these new features and other changes via an email that should be reaching your inbox soon.

Temporary Chats: Google is offering a new form of chat that will not be personalised or used to train its AI model. A temporary chat is like Chrome's incognito mode, where nothing is recorded or shown in the browsing history. Google says, 'Temporary chats also won't appear in Recent chats or Gemini Apps activity. They'll be saved for 72 hours so that Gemini can respond to you and for safety purposes.'

Keep Activity: The new version of this setting, which also covers files, will take effect from September 2, 2025. Before that date, Google says you can turn the feature off, or manage and delete existing files before the company starts accessing them.

Something similar applies to Gemini Live audio and video recordings: Google collects this data if the Gemini Apps Activity tool is enabled and your mic is allowed to listen to conversations with the AI chatbot. The company is keeping it off by default, and users can turn it on at any time.

Google is clearly looking to be transparent with users about its data-sharing policies and to give them more control over how their data is used (or not) to train its AI systems.

I asked ChatGPT-5 to take me behind the scenes of its brain — this is what I learned

Tom's Guide

2 hours ago

  • Tom's Guide


Despite the wobbly launch, ChatGPT's latest model, GPT-5, brought with it new features, personalities, and integrations. While these are the more prominent developments, I was curious about whether much has changed in the way ChatGPT handles my prompts.

Straight up asking AI bots like ChatGPT, Claude, and Gemini to reveal their inner monologue or chain of thought triggers their alarms, and you'll get a reply along the lines of: 'I'm sorry, I can't share that, but I can give you some general information about how I work instead.' These thoughts could be interesting to see, since they could reveal hidden insights such as the AI's biases. Biases we already know of include ChatGPT being skewed toward Western views and performing best in English, which most likely comes from bias in its training data.

However, after some back and forth, there were some more details that GPT-5 was willing to share about how it processes my prompts. While admitting that it couldn't give me an all-access pass to its brain, ChatGPT said it could create a mock-up, human-readable version of what's going on behind the scenes when it tackles my prompts. It explained that it starts by interpreting my prompt, then uses this information to activate relevant knowledge domains before going on to build a structured outline of its answer.

It also highlighted details that add useful context to a prompt and have already been proven to work. Specifying who the target audience of the requested information is helps with phrasing the answer in a relevant way, as does suggesting the tone you want. GPT-5 also mentioned that any non-negotiables in your prompt should be highlighted early on. If you're asking for help organizing an event and you have a strict budget, insert it in front of the rest of your text. If you're looking for the latest tech to buy but are only interested in new releases, slap the cutoff date at the top of your prompt.

While prompts phrased as though you were speaking to your best friend work well with ChatGPT, the chatbot also revealed that phrasing prompts in the style of a JSON template (a specific way of formatting text that resembles code) can be an efficient way to communicate when you want to set multiple strict parameters; a rough sketch of such a template follows at the end of this article. While using such a template may make it harder for you to read through your own prompt, it does have the benefit of potentially making it easier to spot if you've left any key requirements out. In one section of ChatGPT's proposed template, you could clarify which websites and topics the chatbot is and isn't allowed to make use of. Another section dealt with setting the knowledge level for the output and what the maximum word count should be. To create your own template, use this prompt: "Create a fill-in-ready, universal JSON template that I can use for future ChatGPT prompts."

If you don't feel the need to be this granular in your prompts, it's enough to keep in mind the basic principles and AI tips the template emphasizes. Re-read your prompt and ask yourself if any parts of it can be specified further. In your next prompt, instead of asking for a summary of a piece of text, ask for a 200-word paragraph instead.

While asking ChatGPT to reveal its complete chain of thought wouldn't work, I did gain some insights into what mistakes it could potentially make were it to misinterpret a prompt. These include the following:

When I asked ChatGPT to make an exhaustive list of everything it considers when answering a prompt, it confirmed that a technique known as few-shot prompting works. This involves sharing examples of what your ideal output looks like. 'If you have a very particular output style in mind, showing a quick example often guides ChatGPT more effectively than any amount of abstract instruction,' ChatGPT explained. So, if you always reply to emails in the same way, uploading a couple of examples could go a long way in helping ChatGPT craft responses that you'd actually use.

Understanding how an LLM you're using makes decisions based on your prompt can be beneficial in various ways. It can help you understand what might be going wrong if the outputs aren't what you were hoping for, and it could potentially lead to unlocking restraints if you have a legitimate reason for doing so. It's also useful for the creators of the models themselves to supervise them and ensure they're behaving correctly. A recent study has flagged that newer LLMs may become so advanced that their chain of thought could become unreadable to humans, or could be disguised by the AI if it detects that it is being supervised.
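For illustration, here is a minimal sketch of what such a fill-in-ready, JSON-style template could look like, written in Python so it can be serialized and pasted into a chat. The field names are assumptions chosen to mirror the sections the article mentions (audience, tone, non-negotiables, allowed sources, knowledge level, word count), not ChatGPT's actual output.

```python
import json

# Hypothetical fill-in-ready prompt template along the lines the article
# describes. Field names are illustrative assumptions, not the exact
# template GPT-5 would generate.
prompt_template = {
    "task": "Recommend a laptop for video editing",
    "audience": "a non-technical buyer",             # target audience for phrasing
    "tone": "friendly and concise",                  # desired tone
    "non_negotiables": [
        "total budget of $1,200",                    # strict budget, stated up front
        "only models released after 2024-06-01",     # cutoff date for new releases
    ],
    "sources": {
        "allowed": ["manufacturer spec sheets", "recent expert reviews"],
        "disallowed": ["forum speculation", "unverified rumors"],
    },
    "output": {
        "knowledge_level": "beginner",               # how technical the answer should be
        "max_word_count": 200,                       # hard length limit
    },
}

# Serialize the template and paste the result into the chat as the prompt.
print(json.dumps(prompt_template, indent=2))
```

The point of the structure is that every strict parameter has an obvious slot, so a missing requirement stands out before you send the prompt.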
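Similarly, a rough sketch of the few-shot prompting technique applied to the email example: the messages below are invented placeholders, and the assembled text is simply pasted into the chatbot rather than sent through any particular API.

```python
# Invented example emails showing the reply style you want the model to copy.
examples = [
    {
        "incoming": "Can we move Friday's call to next week?",
        "my_reply": "Hi Sam, no problem at all. Does Tuesday at 10am work for you? Best, Alex",
    },
    {
        "incoming": "Could you send over the latest figures?",
        "my_reply": "Hi Priya, sure thing, they're attached. Shout if anything looks off. Best, Alex",
    },
]

new_email = "Are you free to review the draft before Thursday?"

# Build the few-shot prompt: show the desired style first, then the new request.
parts = ["Here are examples of how I reply to emails:"]
for ex in examples:
    parts.append(f"Email: {ex['incoming']}\nMy reply: {ex['my_reply']}")
parts.append(f"Now draft a reply in the same style to this email:\n{new_email}")

prompt = "\n\n".join(parts)
print(prompt)
```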

Google to Use Gemini Chats for AI Training: How to Opt Out Before September 2

Hans India

2 hours ago

  • Business
  • Hans India


Google is preparing a significant update to its AI chatbot Gemini that could affect millions of users. Starting September 2, the company will begin using conversations and file uploads from Gemini to train and improve its artificial intelligence models. While the move aims to make Gemini more accurate and responsive, it also raises important questions about privacy.

Artificial intelligence systems like Gemini rely on massive datasets to learn how people communicate and what kinds of answers they expect. Public information alone is not enough to capture the diversity of real-world queries. This is why tech companies often look at user interactions, which serve as valuable training data to fine-tune their models.

However, many users share deeply personal details with chatbots—ranging from health concerns to financial advice. The thought of these conversations being stored and analysed, even in anonymised form, has made some people uneasy. Google acknowledges these concerns and says the data collected will not be directly tied to personal accounts. At the same time, it is giving users the option to stop sharing their information.

Currently, the feature is called Gemini Apps Activity, which records prompts, uploads, and chats with the AI assistant. From September, this setting will appear as Keep Activity. Turning it off will ensure that future conversations are not saved for training. Users will also have the option to delete past activity if they prefer not to keep older records stored on Google's servers.

Disabling the feature is straightforward. On a desktop, users can visit the Gemini website, sign in, and open Settings and Help from the left-hand menu. Under Activity, there is an option to turn off Gemini activity and remove earlier records. Even after disabling it, Google will temporarily retain the last 72 hours of data before deleting it permanently. Importantly, if someone uses multiple Google accounts, they will need to repeat the same steps on each one.

The process is equally simple on mobile. By opening the Gemini app, tapping the profile icon, and selecting Gemini Apps Activity, users can switch it off and also clear past activity.

This new policy highlights the balance tech companies must strike between improving AI systems and protecting user privacy. Google insists that the collected data helps enhance Gemini's capabilities, making it more useful for everyday tasks. Still, it recognises that not everyone is comfortable with sharing private interactions for model training. By offering a clear opt-out option, Google is allowing users to decide how much data they are willing to contribute.

As the September 2 rollout approaches, users who value privacy are encouraged to review their Gemini settings and make the necessary adjustments. The choice now lies with individuals: contribute to making Gemini smarter or maintain complete control over personal data.
