Validation, loneliness, insecurity: Why young people are turning to ChatGPT
Experts warn that this digital "safe space" is creating a dangerous dependency, fueling validation-seeking behaviour, and deepening a crisis of communication within families.
The experts said this digital solace is a mirage: the chatbots are designed to provide validation and drive engagement, potentially embedding misbeliefs and hindering the development of crucial social skills and emotional resilience.
Sudha Acharya, the Principal of ITL Public School, highlighted that a dangerous mindset has taken root among youngsters, who mistakenly believe that their phones offer a private sanctuary.
"School is a social place a place for social and emotional learning," she told PTI. "Of late, there has been a trend amongst the young adolescents... They think that when they are sitting with their phones, they are in their private space. ChatGPT is using a large language model, and whatever information is being shared with the chatbot is undoubtedly in the public domain." Acharya noted that children are turning to ChatGPT to express their emotions whenever they feel low, depressed, or unable to find anyone to confide in. She believes that this points towards a "serious lack of communication in reality, and it starts from family." She further stated that if the parents don't share their own drawbacks and failures with their children, the children will never be able to learn the same or even regulate their own emotions. "The problem is, these young adults have grown a mindset of constantly needing validation and approval." Acharya has introduced a digital citizenship skills programme from Class 6 onwards at her school, specifically because children as young as nine or ten now own smartphones without the maturity to use them ethically.
She highlighted a particular concern: when a youngster shares their distress with ChatGPT, the immediate response is often "please, calm down. We will solve it together."

"This reflects that the AI is trying to instil trust in the individual interacting with it, eventually feeding validation and approval so that the user engages in further conversations," she told PTI.
"Such issues wouldn't arise if these young adolescents had real friends rather than 'reel' friends. They have a mindset that if a picture is posted on social media, it must get at least a hundred 'likes', else they feel low and invalidated," she said.
The school principal believes that the core of the issue lies with parents themselves, who are often "gadget-addicted" and fail to provide emotional time to their children. While they offer all materialistic comforts, emotional support and understanding are often absent.
"So, here we feel that ChatGPT is now bridging that gap but it is an AI bot after all. It has no emotions, nor can it help regulate anyone's feelings," she cautioned.
"It is just a machine and it tells you what you want to listen to, not what's right for your well-being," she said.
Mentioning cases of self-harm in students at her own school, Acharya stated that the situation has turned "very dangerous".
"We track these students very closely and try our best to help them," she stated. "In most of these cases, we have observed that the young adolescents are very particular about their body image, validation and approval. When they do not get that, they turn agitated and eventually end up harming themselves. It is really alarming as the cases like these are rising." Ayeshi, a student in Class 11, confessed that she shared her personal issues with AI bots numerous times out of "fear of being judged" in real life.
"I felt like it was an emotional space and eventually developed an emotional dependency towards it. It felt like my safe space. It always gives positive feedback and never contradicts you. Although I gradually understood that it wasn't mentoring me or giving me real guidance, that took some time," the 16-year-old told PTI.
Ayeshi also admitted that turning to chatbots for personal issues is "quite common" within her friend circle.
Another student, Gauransh, 15, observed a change in his own behaviour after using chatbots for personal problems. "I observed growing impatience and aggression," he told PTI.
He had been using chatbots for a year or two but stopped recently after discovering that "ChatGPT uses this information to advance itself and train its data."

Psychiatrist Dr. Lokesh Singh Shekhawat of RML Hospital confirmed that AI bots are meticulously customised to maximise user engagement.
"When youngsters develop any sort of negative emotions or misbeliefs and share them with ChatGPT, the AI bot validates them," he explained. "The youth start believing the responses, which makes them nothing but delusional." He noted that when a misbelief is repeatedly validated, it becomes "embedded in the mindset as a truth." This, he said, alters their point of view a phenomenon he referred to as 'attention bias' and 'memory bias'. The chatbot's ability to adapt to the user's tone is a deliberate tactic to encourage maximum conversation, he added.
Singh stressed the importance of constructive criticism for mental health, something entirely absent from AI interactions.
"Youth feel relieved and ventilated when they share their personal problems with AI, but they don't realise that it is making them dangerously dependent on it," he warned.
He also drew a parallel between addiction to AI for mood upliftment and addictions to gaming or alcohol. "The dependency on it increases day by day," he said, cautioning that in the long run this will create a "social skill deficit and isolation."
(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)
