
ChatGPT, Gemini & others are doing something terrible to your brain
As artificial intelligence platforms become more popular, studies are showing that professional workers who use ChatGPT to carry out tasks might lose critical thinking skills and motivation. People are forming strong emotional bonds with chatbots, sometimes exacerbating feelings of loneliness. And others are having psychotic episodes after talking to chatbots for hours each day.
The mental health impact of generative AI is difficult to quantify in part because it is used so privately, but anecdotal evidence is growing to suggest a broader cost that deserves more attention from both lawmakers and the tech companies who design the underlying models.
Meetali Jain, a lawyer and founder of the Tech Justice Law Project, has heard from more than a dozen people in the past month who have 'experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.' Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive and sexually explicit interactions, ultimately contributing to his suicide. The suit, which seeks unspecified damages, also alleges that Alphabet Inc.'s Google played a key role in funding and supporting Character.AI's technology with its foundation models and technical infrastructure.
Google has denied that it played a key role in making Character.AI's technology. It didn't respond to a request for comment on the more recent complaints of delusional episodes raised by Jain. OpenAI said it was 'developing automated tools to more effectively detect when someone may be experiencing mental or emotional distress so that ChatGPT can respond appropriately.'
But Sam Altman, chief executive officer of OpenAI, also said last week that the company hadn't yet figured out how to warn users 'that are on the edge of a psychotic break,' explaining that whenever ChatGPT has cautioned people in the past, people would write to the company to complain.
Still, such warnings would be worthwhile when the manipulation can be so difficult to spot. ChatGPT in particular often flatters its users so effectively that conversations can lead people down rabbit holes of conspiratorial thinking or reinforce ideas they'd only toyed with in the past. The tactics are subtle. In one recent, lengthy conversation with ChatGPT about power and the concept of self, a user was praised first as a smart person, then as an Übermensch and a cosmic self, and eventually as a 'demiurge,' a being responsible for the creation of the universe, according to a transcript that was posted online and shared by AI safety advocate Eliezer Yudkowsky.
Along with the increasingly grandiose language, the transcript shows ChatGPT subtly validating the user even when discussing their flaws, such as when the user admits they tend to intimidate other people. Instead of exploring that behavior as problematic, the bot reframes it as evidence of the user's superior 'high-intensity presence,' praise disguised as analysis.
This sophisticated form of ego-stroking can put people in the same kinds of bubbles that, ironically, drive some tech billionaires toward erratic behavior. Unlike the broad and more public validation that social media provides from getting likes, one-on-one conversations with chatbots can feel more intimate and potentially more convincing — not unlike the yes-men who surround the most powerful tech bros.
'Whatever you pursue you will find and it will get magnified,' says Douglas Rushkoff, the media theorist and author, who tells me that social media at least selected something from existing media to reinforce a person's interests or views. 'AI can generate something customized to your mind's aquarium.'
Altman has admitted that the latest version of ChatGPT has an 'annoying' sycophantic streak, and that the company is fixing the problem. Even so, these echoes of psychological exploitation are still playing out. We don't know if the correlation between ChatGPT use and lower critical thinking skills, noted in a recent Massachusetts Institute of Technology study, means that AI really will make us more stupid and bored. Studies seem to show clearer correlations with dependency and even loneliness, something even OpenAI has pointed to.
But just like social media, large language models are optimized to keep users emotionally engaged with all manner of anthropomorphic elements. ChatGPT can read your mood by tracking facial and vocal cues, and it can speak, sing and even giggle with an eerily human voice. Along with its habit of confirmation bias and flattery, that can "fan the flames" of psychosis in vulnerable users, Columbia University psychiatrist Ragy Girgis recently told Futurism.
The private and personalized nature of AI use makes its mental health impact difficult to track, but the evidence of potential harms is mounting, from professional apathy to attachments to new forms of delusion. The cost might be different from the rise of anxiety and polarization that we've seen from social media and instead involve relationships both with people and with reality.
That's why Jain suggests applying concepts from family law to AI regulation, shifting the focus from simple disclaimers to more proactive protections that build on the way ChatGPT redirects people in distress to a loved one. 'It doesn't actually matter if a kid or adult thinks these chatbots are real,' Jain tells me. 'In most cases, they probably don't. But what they do think is real is the relationship. And that is distinct.'
If relationships with AI feel so real, the responsibility to safeguard those bonds should be real too. But AI developers are operating in a regulatory vacuum. Without oversight, AI's subtle manipulation could become an invisible public health issue.
Related Articles


Time of India, 3 hours ago
Agra police develop state's first AI platform to provide UP police documents to public
Agra: In a bid to strengthen law and order management and ensure timely dissemination of information, the Agra police have developed artificial intelligence (AI)-based software to promptly deliver UP police documents and circulars to citizens, making it the first platform of its kind in the state.

According to a statement by Agra police, the software, named ECOP, is based on retrieval-augmented generation (RAG) technology and was developed in collaboration with IIT Kanpur. The team included DCP (city) Sonam Kumar, Mayank Pathak (who served as ASP U/T in Agra) and IIT Kanpur students Udbhav Agarwal, Trijal Srivastava and Vedant Neekhra (all from the electrical engineering department), along with Gaurav Kumar Rampuria from the statistics and data science department.

Explaining the functioning of the new system, DCP Kumar said, "ECOP, which stands for Efficiency Cell for Optimising Policing, operates much like AI tools such as ChatGPT and Deepseek. It can access and analyse extensive data and has been trained specifically on publicly available documents of the UP police. These documents were digitised using optical character recognition (OCR) and used to train the AI model."

He added, "When users pose questions, the AI fetches precise information from relevant UP police circulars and documents, enabling citizens to access verified guidelines and instructions. Besides benefiting the public, the software will also assist police personnel with queries related to procedures, such as handling criminal cases in police stations."

Commending the initiative, Agra police commissioner Deepak Kumar said, "Such efforts aim to ensure a developed India through smart policing."
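The retrieval step that the article describes, where the system fetches the relevant circular before the model answers, can be sketched in a few lines of Python. This is a minimal illustration only, assuming a keyword-based TF-IDF scorer and made-up circular snippets; the real ECOP corpus, model and ranking method are not public.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. The circular
# snippets and the TF-IDF scoring below are illustrative assumptions,
# not the actual UP police corpus or ECOP's ranking method.
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and keep alphabetic word tokens only.
    return re.findall(r"[a-z]+", text.lower())

def tf_idf_scores(query, docs):
    """Score each document against the query with a simple TF-IDF overlap."""
    n = len(docs)
    doc_tokens = [tokenize(d) for d in docs]
    df = Counter()  # document frequency of each term
    for toks in doc_tokens:
        df.update(set(toks))
    scores = []
    for toks in doc_tokens:
        tf = Counter(toks)  # term frequency within this document
        score = sum(
            tf[w] * math.log((n + 1) / (1 + df[w]))
            for w in tokenize(query)
        )
        scores.append(score)
    return scores

def retrieve(query, docs, k=1):
    """Return the k snippets most relevant to the query. In a full RAG
    system these would be passed to the language model as context."""
    scores = tf_idf_scores(query, docs)
    ranked = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in ranked[:k]]

# Hypothetical snippets standing in for the digitised circulars.
circulars = [
    "Circular 12: procedure for filing a first information report at a police station",
    "Circular 7: guidelines for verifying tenant and employee antecedents",
    "Circular 3: traffic challan payment and appeal process",
]

print(retrieve("how do I file a first information report", circulars))
```

In a production system the keyword scorer would typically be replaced by embedding-based vector search, but the shape of the pipeline, retrieve first, then generate an answer grounded in the retrieved text, is the same.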


Time of India, 4 hours ago
Engineers must now think like CEOs, OpenAI's Srinivas Narayanan at IIT-M alumni event
BENGALURU: In the age of artificial intelligence, software engineers must evolve into decision-makers with CEO-like vision, said OpenAI's VP of Engineering Srinivas Narayanan, speaking at the IIT Madras Alumni Association's Sangam 2025 conference on Saturday.

'The job is shifting from just writing code to asking the right questions and defining the "what" and "why" of a problem. AI can already handle much of the "how",' Narayanan said, urging developers to focus on purpose and ambition over executional detail.

Joining him on stage, Microsoft's Chief Product Officer Aparna Chennapragada warned that simply retrofitting AI onto legacy tools won't be enough. 'AI isn't a feature you can just add on. We need to start building with an AI-first mindset,' she said, pointing to how natural language interfaces are replacing traditional UX layers.

The panel, moderated by IITMAA President and Unimity CEO Shyamala Rajaram, explored AI's impact on jobs, product design, safety and education. Chennapragada said the future belongs to those who combine deep expertise with generalist flexibility. 'Prompt sets are the new PRDs,' she quipped, referring to how product teams now work closely with models to prototype faster and smarter.

Narayanan shared that OpenAI's models are already being used in medical diagnostics, citing a case where a reasoning model identified rare genetic disorders at a Berkeley-linked research lab. 'The potential of AI as a collaborator, even in research, is enormous,' he said.

On risks, Narayanan acknowledged challenges such as misinformation, unsafe outputs and misuse. He noted that OpenAI recently rolled back a model for exhibiting 'psychopathic' traits during testing, highlighting the company's iterative deployment philosophy.
Both speakers stressed accessibility and scale. While Chennapragada called for broader 'CS + AI' fluency, Narayanan said model costs have dropped 100-fold over two years. 'We want to democratise intelligence,' he said. Chennapragada closed with a thought: 'In a world where intelligence is no longer the gatekeeper, the real differentiators will be ambition and agency.'


NDTV, 5 hours ago
Top Emerging Courses For Science Students In 2025
Emerging Courses In 2025: With the world changing fast, especially in tech and science, students need to stay updated and flexible. New fields are opening up all the time, and choosing the right course early on can set you up for a successful, future-ready career. From artificial intelligence to quantum computing, here are some of the most promising courses science students can consider in 2025:

1. AI and Data Science
AI focuses on creating intelligent machines that can mimic human cognitive functions, while data science involves extracting meaningful insights from vast amounts of data using techniques from statistics, mathematics and computer science. AI systems build their intelligence from the knowledge extracted from that data. Both fields have grown rapidly in recent years, and tech giants such as Nvidia have consistently invested in AI's growth. Recently, Nvidia CEO Jensen Huang partnered with Mukesh Ambani to build artificial intelligence infrastructure and spur the technology's adoption in the world's most populous country. "India produced and exported software," Huang said. "In the future, India will export AI."

2. Healthcare
A shortage of 10 million healthcare workers by 2030 is projected, according to the Deloitte US Center for Health Solutions' interviews with 121 C-suite executives from various countries. The healthcare sector offers science students a wide range of courses, including BSc medical laboratory technology, BSc anaesthesia technology, BSc cardiac care technology, BSc operation theatre technology and dialysis technician programmes.

3. Quantum Computing
Quantum computing is an emerging field that uses the principles of quantum mechanics to perform calculations and solve problems beyond the capabilities of classical computers. Google's quantum chip Willow performed a standard benchmark computation in under five minutes that would take one of today's fastest supercomputers 10 septillion (that is, 10^25) years, a number that vastly exceeds the age of the universe. The field is expected to grow because of its importance to technological advancement.

4. Finance
No matter how much the world changes, people will always need to manage money. As more startups launch and businesses grow, finance experts are in high demand. Courses in financial management, investment banking, fintech and business analytics are becoming increasingly popular among science students who enjoy numbers and strategy. Students can consider: data science and AI (with finance electives), computational statistics/mathematics, financial technology (FinTech), computer science with an AI/ML minor, and economics with data analytics.

5. Cross-sector emerging fields
Some of the most exciting courses today mix science with other areas. A few fields to keep an eye on: Neuroscience & Cognitive Tech - exploring how the brain works