AI companions and humans, a toxic relationship: Scientists warn of emotional dependency risks

Time of India · a day ago
AI is now well beyond a personal assistant on the keyboard, fetching a precise answer from the internet and saving a bit of extra time. AI today is a friend, a teacher, a music instructor, a wizard that can bring the dead back to life. Anything and everything that you command it to be, it will be.

As AI becomes more humanlike in its language, tone and presence, scientists and psychologists are sounding the alarm about a growing side-effect: emotional dependency. According to a 2024 systematic review in Smart Learning Environments, long-term emotional bonds with AI companions can produce psychological responses similar to those experienced when losing a real friend or even a family member.

AI companions today come in many forms: some are chatbots such as Replika, others voice assistants or personalised avatars that can hold deep, thought-provoking conversations. As these tools become more advanced, users report not just affection for their AI companion but emotional reliance on it. The AI companion provides a safe, non-judgemental space, so users inevitably open up to it in ways they wouldn't to real people. This may seem harmless, but a malfunction can make the AI disappear in a moment, leaving the user grief-stricken in much the same way as losing a loved one.

Replika temporarily removed its 'romantic features' from its chatbot after regulatory pressure. The backlash was intensely emotional: users reported feeling shattered, as though they had 'lost' a partner or a friend, and forums and social networks were flooded with grief posts about a digital entity.

This form of dependency can also become a market risk, as AI firms offering companionship services walk a tightrope between user satisfaction, ethical AI design and psychological safety.
Startups entering the emotional AI or 'grief tech' space must now build mental health safeguards into their platforms. India, with a young, digitally curious population and growing mental health awareness, is fertile ground for both caution and innovation.

AI offers a space that is non-judgemental, safe and easy to access, making it ideal for people to connect and disclose their deepest, darkest secrets. But at what cost? If losing access to these AI chatbots can trigger such deep and shattering despair, then it is time for policymakers, investors and developers to question themselves: are we creating these AI companions to fill human gaps, or are we creating new ones that leave humans more vulnerable?
Disclaimer Statement: This content is authored by a 3rd party. The views expressed here are that of the respective authors/ entities and do not represent the views of Economic Times (ET). ET does not guarantee, vouch for or endorse any of its contents nor is responsible for them in any manner whatsoever. Please take all steps necessary to ascertain that any information and content provided is correct, updated, and verified. ET hereby disclaims any and all warranties, express or implied, relating to the report and any content therein.


Related Articles

RBI for ethical adoption of AI

Hans India · an hour ago

Mumbai: The Reserve Bank will prepare a framework for responsible and ethical adoption of artificial intelligence (AI) in the financial sector during the current financial year, the central bank's annual report said.

Driven by rapid advances in computing power and the vast availability of digital data, AI and machine learning (ML) technologies have seen growing interest and significant progress in recent years, with financial institutions globally and domestically increasingly adopting them. The AI framework is part of the agenda for 2025-26 spelt out in the RBI's annual report for 2024-25. 'The Reserve Bank is exploring and implementing AI/ML-driven solutions in its own functions,' it said. In December 2024, it constituted an external committee of experts with a mandate to recommend a Framework for Responsible and Ethical Enablement of AI in the financial sector.

The Reserve Bank said it undertook several measures to safeguard the financial system by further strengthening the regulatory and supervisory framework of the banking and non-banking sectors in line with global best practices. Going forward, the RBI said concerted efforts would be made towards, inter alia, rationalisation and harmonisation of regulations across regulated entities; preparing the framework for responsible and ethical adoption of AI in the financial sector; and strengthening liquidity stress tests of commercial banks. Further, fine-tuning the existing complaint management and grievance redress mechanism, including exploring the use of AI, would remain in focus, the report said.

On AI governance, the report said work will also begin on an AI policy for the Reserve Bank itself, governing responsible and ethical use of AI/ML technologies by employees, vendors and third-party partners.
By providing clear guidelines on data handling, consent and security, the policy seeks to maintain the integrity of the Reserve Bank's operations while using the opportunities that AI offers, it said.

AI mission crosses 34K GPU capacity, 3 new startups to help develop models

Hindustan Times · 2 hours ago

India has added 15,916 graphics processing units (GPUs) to its national artificial intelligence infrastructure while selecting three new startups to develop indigenous AI models, the government announced on Friday, accelerating efforts to build sovereign AI capabilities that developers can tap into.

According to Union minister for electronics and information technology (Meity) Ashwini Vaishnaw, who made the announcement at an event in Delhi, seven companies will provide the additional GPU capacity, taking the total in the IndiaAI compute portal to 34,333 units, at subsidised rates of ₹67 per hour, substantially below the average market rate of ₹115 per GPU hour.

'When we began the India AI mission, we had an initial target of 10,000 GPUs, which we at the time thought would be a lot,' Vaishnaw said. 'But we got a promising response from the industry.' The government has already empanelled 18,417 GPUs in the first tranche, with 14,000 units currently accessible through the IndiaAI compute portal that became operational in March. Vaishnaw announced a third round of GPU empanelment will be completed soon, suggesting continued expansion beyond the original targets.

The IndiaAI mission centres on offering AI technology access through government-subsidised infrastructure, including a marketplace of sorts for access to compute (which typically requires expensive hardware) and government databases. Among these efforts is the development of indigenous large language models (LLMs), the technology that underpins AI chatbots such as ChatGPT. The three startups, chosen from 506 proposals, will develop LLMs optimised for India's 22 major languages and hundreds of dialects. They join Sarvam AI, which was chosen in April to build a 120-billion-parameter model, in developing indigenous AI systems trained on Indian datasets. The government has loaded 367 datasets onto AI Kosha, a platform supporting AI research across India.
Soket AI will develop a 120-billion-parameter open-source model optimised for Indian languages, targeting defence, healthcare and education applications. CEO Abhishek Upperwala set a one-year delivery timeline, saying 'India needs a strong base model that others can build on.' Gnani AI is building a 14-billion-parameter voice system for real-time, multilingual speech processing. Gan AI focuses on text-to-speech technology with a 70-billion-parameter multilingual model targeting 'superhuman' capabilities.

The expansion, coupled with the hardware build-out, represents a dramatic scaling of the programme's original ambitions. 'Common compute is a very important part of the principle of democratising technology,' Vaishnaw said. 'Technology should not be left in the hands of a few.'

Rather than building government data centres, officials opted to empanel private providers who invest in the infrastructure while receiving 40% subsidies. The strategy yielded bids already more than 40% below prevailing market rates before the additional subsidy was applied. 'The government thought building data centres was going to take longer, so they decided to empanel service providers and private sector players who will invest in the compute infrastructure,' a senior official told HT. The subsidised pricing represents a significant departure from international GPU markets, where Nvidia H100 hourly rates range from $2.99 to $9.98 across major cloud providers in 2025, according to estimates.
The seven newly empanelled companies are Cyfuture India, Ishan Infotech, Locuz Enterprise Solutions, Netmagic IT Services, Sify Digital Services, Vensysco Technologies, and Yotta Data Services. Cyfuture emerged as the dominant supplier, offering multiple chip types including Nvidia's H100 and AMD's MI300X processors, while Yotta Data Services provides access to Nvidia's latest B200 chips.
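The pricing gap the article reports can be checked with simple arithmetic. A minimal sketch, using only the ₹67 portal rate and ₹115 average market rate quoted above (the exact subsidy mechanics are not detailed in the article):

```python
# Figures quoted in the article: average market rate vs. IndiaAI portal rate,
# both in rupees per GPU-hour.
market_rate = 115.0
portal_rate = 67.0

# Fractional discount relative to the market average.
discount = (market_rate - portal_rate) / market_rate
print(f"Portal rate is {discount:.0%} below the quoted market average")
# Works out to roughly 42%, consistent with the article's "substantially below".
```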

Judge wrestles with far-reaching remedy proposals in US antitrust case against Google

Mint · 4 hours ago

WASHINGTON — The fate and fortunes of one of the world's most powerful tech companies now sit in the hands of a U.S. judge wrestling with whether to impose far-reaching changes upon Google in the wake of its dominant search engine being declared an illegal monopoly.

U.S. District Judge Amit Mehta heard closing arguments Friday from Justice Department lawyers who argued that a radical shake-up is needed to promote a free and fair market. Their proposed remedies include a ban on Google paying to lock its search engine in as the default on smart devices and an order requiring the company to sell its Chrome browser.

Google's legal team argued that only minor concessions are needed and urged Mehta not to unduly punish the company with a harsh ruling that could squelch future innovations. Google also argued that upheaval triggered by advances in artificial intelligence is already reshaping the search landscape, as conversational search options roll out from AI startups hoping to use the Department of Justice's four-and-a-half-year-old case to gain the upper hand in the next technological frontier.

It was an argument Mehta appeared to give serious consideration as he marveled at the speed at which the AI industry is growing. He also indicated he was still undecided on how much AI's potential to shake up the search market should be incorporated in his forthcoming ruling. 'This is what I've been struggling with,' Mehta said.

Mehta spoke frequently at Friday's hearing, often asking probing and pointed questions of lawyers for both sides, while hinting that he was seeking a middle ground between the two camps' proposed remedies. 'We're not looking to kneecap Google,' the judge said, adding that the goal was to 'kickstart' competitors' ability to challenge the search giant's dominance. Mehta will spend much of the summer mulling a decision that he plans to issue before Labor Day.
Google has already vowed to appeal the ruling that branded its search engine a monopoly, a step it can't take until the judge orders a remedy. Google's attorney John Schmidtlein asked Mehta to put a 60-day delay on implementing any proposed changes, which Justice prosecutor David Dahlquist immediately objected to. 'We believe the market's waited long enough,' Dahlquist said.

While both sides of this showdown agree that AI is an inflection point for the industry's future, they have disparate views on how the shift will affect Google. The Justice Department contends that AI technology by itself won't rein in Google's power, arguing additional legal restraints must be placed on a search engine that's the main reason its parent company, Alphabet Inc., is valued at $2 trillion. Google has already been deploying AI to transform its search engine into an answer engine, an effort that has so far helped it maintain its perch as the internet's main gateway despite inroads by alternatives from the likes of OpenAI and Perplexity.

The Justice Department contends a divestiture of the Chrome browser, which Google CEO Sundar Pichai helped build nearly 20 years ago, would be among the most effective countermeasures against Google continuing to amass the massive volumes of browser traffic and personal data that could be leveraged to retain its dominance in the AI era. Executives from both OpenAI and Perplexity testified last month that they would be eager bidders for the Chrome browser if Mehta orders its sale.

The debate over Google's fate has also pulled in opinions from Apple, mobile app developers, legal scholars and startups. Apple, which collects more than $20 billion annually to make Google the default search engine on the iPhone and its other devices, filed briefs arguing against the Justice Department's proposed 10-year ban on such lucrative lock-in agreements.
Apple told the judge that prohibiting the contracts would deprive the company of money it funnels into its own research, and that the ban might make Google even more powerful because the company would be able to hold onto its money while consumers would end up choosing its search engine anyway. The Cupertino, California, company also told the judge a ban wouldn't compel it to build its own search engine to compete against Google.

In other filings, a group of legal scholars said the Justice Department's proposed divestiture of Chrome would be an improper penalty that would inject unwarranted government interference into a company's business. Meanwhile, former Federal Trade Commission officials James Cooper and Andrew Stivers warned that another proposal, which would require Google to share its data with rival search engines, 'does not account for the expectations users have developed over time regarding the privacy, security, and stewardship' of their personal information.

Mehta said Friday that compared with some of the Justice Department's other proposals, there was 'less speculation' about what might happen in the broader market if Google were forced to divest Chrome. Schmidtlein said that was untrue, and such a ruling would be a wild overreach. 'I think that would be inequitable in the extreme,' he said. Dahlquist mocked some of the arguments against divesting Chrome. 'Google thinks it's the only one who can invest things,' he said.

This article was generated from an automated news agency feed without modifications to text.
