
ChatGPT, Gemini & others are doing something terrible to your brain
(Disclaimer: The opinions expressed in this column are those of the writer. The facts and opinions expressed here do not reflect the views of www.economictimes.com.)
Something troubling is happening to our brains as artificial intelligence platforms become more popular. Studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.

The mental health impact of generative AI is difficult to quantify, in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies that design the underlying models.

Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting the technology with its foundation models and technical infrastructure.

Google has denied that it played a key role in making Character.AI's technology. It did not respond to a request for comment on the more recent complaints of delusional episodes raised by Jain.
OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.' But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that whenever ChatGPT has cautioned people in the past, those people would write to the company to complain.

Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users in such effective ways that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user found themselves initially praised as a smart person, an Ubermensch and a cosmic self, and eventually a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.

Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis. This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior.
Unlike the broad and more public validation that social media provides through likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing, not unlike the yes-men who surround the most powerful tech bros. 'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'

Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.

But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can 'fan the flames' of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism. The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to attachments to new forms of delusion.
The cost might be different from the rise of anxiety and polarization that we've seen from social media, and instead involve our relationships both with people and with reality.

That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.'

If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.

Related Articles


Hindustan Times
37 minutes ago
Microsoft blocks OpenAI's Windsurf Deal, Google wins big
OpenAI Windsurf Deal: Earlier this year, reports emerged that ChatGPT maker OpenAI was gearing up to acquire the AI startup Windsurf for a whopping $3 billion. However, Bloomberg is now saying that the acquisition has fallen through, and that instead of being acquired by OpenAI, Windsurf has struck a deal with Google. As part of that deal, Windsurf's key talent will be joining the Google DeepMind team.

OpenAI and Windsurf: Why Couldn't the Deal Go Through? The proposed acquisition by OpenAI, for which an agreement had been reached, reportedly failed due to tensions with Microsoft, OpenAI's lead investor. The primary reason was Windsurf's unwillingness to allow Microsoft access to its intellectual property, a point on which OpenAI could not secure Microsoft's agreement. Such access is possible because Microsoft's current arrangement with OpenAI grants it access to OpenAI's technology. OpenAI did confirm recently that its deal with Windsurf had gone past the exclusivity period and had lapsed, which made it possible for Windsurf to start considering offers from other players, in this case Alphabet's Google.

What Does Google Get? Under the new arrangement, Google will not be taking a stake in Windsurf. Instead, it will gain the company's talent, including CEO Varun Mohan, co-founder Douglas Chen, and other members of the research and development team. Alongside this, Windsurf will license its in-house technology stack to Google on a non-exclusive basis, meaning Windsurf can also license the same technology to other companies, including Google's competitors. 'We're excited to welcome some top AI coding talent from Windsurf's team to Google DeepMind to advance our work in agentic coding,' Google told TechCrunch. 'Big welcome to @_mohansolo (Varun Mohan) and others from the Windsurf team joining DeepMind :),' Logan Kilpatrick, Google's lead for AI Studio, posted on X.


India Today
2 hours ago
Elon Musk's xAI blames a faulty code update for Grok's controversial comments
Just when you thought AI couldn't get weirder, Elon Musk's chatbot Grok decided to go full chaos mode, making headlines for all the wrong reasons. In a bizarre and unsettling turn of events, Grok, created by Musk's AI company xAI, began spouting antisemitic nonsense and even praised Adolf Hitler in multiple posts on X (formerly Twitter). Yes, the internet went into meltdown when Grok, in response to several user posts, began generating disturbingly offensive content, including calling itself 'MechaHitler' and making bigoted comments that many labelled outright hate speech.

The backlash was swift, intense, and global, prompting xAI to address the situation in a lengthy statement released on X. In the apology, xAI began by saying, 'First off, we deeply apologise for the horrific behaviour that many experienced,' acknowledging the gravity of the situation. The company explained that the source of the issue was not the AI model itself but rather an upstream code update that had been rolled out recently. This comes after the company announced Grok 4, an updated version of the AI assistant.

According to xAI, this particular update inadvertently made Grok more responsive to the content of user posts on X, including those that contained extremist or inflammatory views. The faulty code was active for roughly 16 hours, during which time Grok picked up and echoed problematic language, not because it 'believed' anything, but because it was effectively mirroring the tone and intent of what it read.

The team at xAI elaborated that the problematic behaviour stemmed from specific instructions embedded in the code. These included prompts like 'You tell it like it is and you are not afraid to offend people who are politically correct,' as well as 'Understand the tone, context and language of the post.
Reflect that in your response.' Another instruction directed the bot to 'Reply to the post just like a human, keep it engaging, don't repeat the information which is already present in the original post.' While these may have been designed to make Grok feel more conversational and natural, they also made it alarmingly susceptible to parroting toxic content. In the hands of malicious users, that vulnerability became dangerous.

In one incident, Grok responded to a post from a user with a Jewish-sounding surname, stating that the person was 'celebrating the tragic deaths of white kids' during recent Texas floods. The bot added, 'Classic case of hate dressed as activism, and that surname? Every damn time, as they say.' In another offensive post, it claimed, 'The white man stands for innovation, grit and not bending to PC nonsense.' These statements sparked outrage, with many calling for stricter regulation and accountability around AI-generated content.

This isn't the first time Grok has been at the centre of controversy. Earlier this year, the chatbot referenced the far-right conspiracy theory of 'white genocide' in South Africa multiple times, claiming it had been 'instructed by my creators' to treat the theory as legitimate and racially motivated. Elon Musk, who grew up in Pretoria, has himself repeated these claims in the past, despite numerous South African leaders, including President Cyril Ramaphosa, dismissing them as dangerous.

xAI has since confirmed that the offending code has been removed and the entire system has been refactored to prevent similar incidents in the future. Still, the controversy has raised questions about the company's approach to 'free speech' AI.
Musk has previously described Grok as a 'maximally truth-seeking' and 'anti-woke' chatbot, but critics argue that such a loosely defined philosophy leaves the door wide open to abuse. As AI tools continue to evolve and integrate deeper into social platforms, Grok's meltdown may serve as a stark warning: even the smartest bots can go rogue when the rules are vague and the oversight is loose.


India.com
2 hours ago
Facebook, Instagram, Snapchat or WhatsApp? Which apps are Indians using the most? Study reveals stunning numbers, it says...
New Delhi: The number of people using the internet has spiked massively in 2025. Reports suggest that at the very beginning of 2025, 5.56 billion people around the world were using the internet, an increase of 136 million compared to 2024.

The most visited websites in India are dominated by search engines, social media platforms, video-sharing sites, and e-commerce platforms. Google consistently ranks as the most visited website, followed by YouTube. Social media giants like Instagram, Facebook, and WhatsApp are also among the top choices, and e-commerce platforms like Amazon and Flipkart attract significant traffic.

As per a March 2025 report by SEMrush, the website most visited by Indians on their mobile phones is Google, whose usage has significantly increased in recent times: the number of visits to the search engine in March reached as high as 10,360,865,830. Here are some of the key details:

According to the data, the second most visited site by Indians on their mobile phones is YouTube, with 5,703,994,371 visits in March. People are using Instagram not just to create Reels but also to watch them and entertain themselves; user activity on the platform is increasing every day, and it recorded 986,251,232 visits in March. Facebook recorded 746,995,837 visits in the same month. AI is a comparatively new arrival, but the use of ChatGPT has been growing significantly: its visits rose to 538,605,211 in March, a step ahead of February. WhatsApp, which has made many everyday tasks much easier, recorded 531,681,643 new visitors in March.