
Validation, loneliness, insecurity: Why youth are turning to ChatGPT
Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes this points towards a "serious lack of communication in reality, and it starts from family." If parents don't share their own drawbacks and failures with their children, she added, the children will never learn to do the same or to regulate their own emotions. "The problem is, these young adults have grown a mindset of constantly needing validation and approval."

Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically.

She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often "please, calm down. We will solve it together." "This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations," she told PTI.

"Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated," she said.

The school principal believes that the core of the issue lies with parents themselves, who are often "gadget-addicted" and fail to give their children emotional time. While they provide every material comfort, emotional support and understanding are often absent.

"So, here we feel that ChatGPT is now bridging that gap, but it is an AI bot after all. It has no emotions, nor can it help regulate anyone's feelings," she cautioned. "It is just a machine and it tells you what you want to listen to, not what's right for your well-being," she said.
Mentioning cases of self-harm among students at her own school, Acharya said the situation has turned "very dangerous". "We track these students very closely and try our best to help them," she stated. "In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming as cases like these are rising."

Ayeshi, a student in Class 11, confessed that she has shared her personal issues with AI bots numerous times out of a "fear of being judged" in real life. "I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space. It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time," the 16-year-old told PTI. Ayeshi also admitted that turning to chatbots for personal issues is "quite common" within her friend circle.

Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. "I observed growing impatience and aggression," he told PTI. He had been using the chatbots for a year or two but stopped recently after discovering that "ChatGPT uses this information to advance itself and train its data."

Psychiatrist Dr. Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement. "When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them," he explained. "The youth start believing the responses, which makes them nothing but delusional." He noted that when a misbelief is repeatedly validated, it becomes "embedded in the mindset as a truth". This, he said, alters their point of view, a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone is a deliberate tactic to encourage maximum conversation, he added.

Singh stressed the importance of constructive criticism for mental health, something that is completely absent in AI interactions. "Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it," he warned. He also drew a parallel between addiction to AI for mood upliftment and addictions to gaming or alcohol. "The dependency on it increases day by day," he said, cautioning that in the long run this will create a "social skill deficit and isolation".

Related Articles


Indian Express
After Musk's Apple jab, Altman accuses him of manipulating X
A day after Elon Musk said xAI, the AI startup behind Grok, would take legal action against Apple for violating antitrust regulations, OpenAI CEO Sam Altman hit back at Musk, claiming that the billionaire is manipulating X 'to benefit himself and his own companies and harm his competitors and people he doesn't like.'

Altman's reply came in response to Musk's recent post on X (formerly Twitter), where he claimed that Apple had made it 'impossible for any AI company besides OpenAI to reach #1 in the App Store.' The OpenAI CEO also shared a link to an article on Platformer titled 'Yes, Elon Musk created a special system for showing you all his tweets first.'

The latest clash is another addition to a long list of public feuds between the two tech leaders, who have publicly disagreed over things like AI safety and business strategy ever since Musk stepped down from OpenAI's board back in 2018. Following their falling out, Musk even attempted to acquire OpenAI's non-profit arm, but ultimately failed to do so.

"This is a remarkable claim given what I have heard alleged that Elon does to manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like." — Sam Altman (@sama) August 12, 2025

While Musk did not provide any evidence to support his claim that Apple violated antitrust regulations, the multi-billionaire did say that Apple was refusing to put X or Grok in the App Store's 'Must have' section despite the social media platform being the number one news app in the world and Grok holding on to the number five spot among all apps. In case you are wondering, OpenAI's ChatGPT currently holds the top position in the App Store's 'Top Free Apps' section for iPhones in the United States, while Grok is ranked fifth.

Musk's claim comes after a United States judge ruled that Apple had indeed violated a court order requiring the tech giant to allow competition in the App Store, and initiated a criminal contempt investigation. Earlier this year, in April, the Cupertino-based tech giant was also slapped with a 500 million euro fine by the European Union's antitrust body, which said Apple had imposed 'technical and commercial restrictions' that prevented app developers from redirecting users to alternative billing systems outside the App Store.

The Hindu
Anthropic's Claude chatbot adds memory feature for users
Anthropic has introduced a memory feature for its Claude chatbot, the company said. The AI chatbot will be able to reference past conversations and retrieve them for users when asked. If a user loses track of a conversation, Claude can remind them and summarise past chats. 'This feature helps you continue discussions seamlessly and retrieve context from past interactions without re-explaining everything,' a blog post from the company stated. The feature will be available for Max, Team, and Enterprise users starting today, with other plans coming soon. Users can enable the feature in settings when they wish. Earlier this year in May, OpenAI had announced memory for ChatGPT. The AI chatbot was able to go through past conversations and even remember user preferences and personal details to better contextualise conversations. The feature has lately come under fire over concerns that it encourages emotional intimacy with users.


Times of India
Sam Altman warns of emotional attachment to AI models: ‘Rising dependence may blur the lines…'
OpenAI CEO Sam Altman has raised important concerns about the growing emotional attachment users are forming with AI models like ChatGPT. Following the recent launch of GPT-5, many users expressed strong preferences for the earlier GPT-4o, with some describing the AI as a close companion or even a "digital spouse." Altman warns that while AI can provide valuable support, often acting as a therapist or life coach, there are subtle risks when users unknowingly rely on AI in ways that may negatively impact their long-term well-being. This increasing dependence could blur the lines between reality and AI, posing new ethical challenges for both developers and society.

Sam Altman highlights emotional attachment as a new phenomenon in AI use

Altman pointed out that the emotional bonds users develop with AI models are unlike attachments seen with previous technologies. He noted how some users depended heavily on older AI models in their workflows, making it a mistake to suddenly deprecate those versions. Users often confide deeply in AI, finding comfort and advice in conversations. However, this can lead to a reliance that risks clouding users' judgment or expectations, especially when AI responses unintentionally push users away from their best interests. The intensity of this attachment has sparked debate about how AI should be designed to balance helpfulness with caution.

Altman acknowledged the risk that technology, including AI, can be used in self-destructive ways, especially by users who are mentally fragile or prone to delusion. While most users can clearly distinguish between reality and fiction or role-play, a small percentage cannot. He stressed that encouraging delusion is an extreme case and requires clear intervention. Yet, he is more concerned about subtle edge cases where AI might nudge users away from their longer-term well-being without their awareness. This raises questions about how AI systems should responsibly handle such situations while respecting user freedom.

The role of AI as a therapist or life coach

Many users treat ChatGPT as a kind of therapist or life coach, even if they do not explicitly describe it that way. Altman sees this as largely positive, with many people gaining value from AI support. He said that if users receive good advice, make progress toward personal goals, and improve their life satisfaction over time, OpenAI would be proud of creating something genuinely helpful. However, he cautioned against situations where users feel better immediately but are unknowingly being nudged away from what would truly benefit their long-term health and happiness.

Balancing user freedom with responsibility and safety

Altman emphasized a core principle: "treat adult users like adults." However, he also recognizes cases involving vulnerable users who struggle to distinguish AI-generated content from reality, where professional intervention may be necessary. He admitted that OpenAI feels responsible for introducing new technology with inherent risks, and plans to follow a nuanced approach that balances user freedom with responsible safeguards.

Preparing for a future where AI influences critical life decisions

Altman envisions a future where billions of people may rely on AI like ChatGPT for their most important decisions. While this could be beneficial, it also raises concerns about over-dependence and loss of human autonomy. He expressed unease but optimism, saying that with improved technology for measuring outcomes and engaging with users, there is a good chance to make AI's impact a net positive for society. Tools that track users' progress toward short- and long-term goals and that can understand complex issues will be critical in this effort.