Sam Altman warns of emotional attachment to AI models: ‘Rising dependence may blur the lines…'


Time of India · 2 days ago
OpenAI CEO Sam Altman has raised important concerns about the growing emotional attachment users are forming with AI models like ChatGPT. Following the recent launch of GPT-5, many users expressed strong preferences for the earlier GPT-4o, with some describing the AI as a close companion or even a "digital spouse." Altman warns that while AI can provide valuable support, often acting as a therapist or life coach, there are subtle risks when users unknowingly rely on AI in ways that may negatively impact their long-term well-being. This increasing dependence could blur the lines between reality and AI, posing new ethical challenges for both developers and society.
Sam Altman highlights emotional attachment as a new phenomenon in AI use
Altman pointed out that the emotional bonds users develop with AI models are unlike attachments seen with previous technologies. He noted how some users depended heavily on older AI models in their workflows, making it a mistake to suddenly deprecate those versions. Users often confide deeply in AI, finding comfort and advice in conversations. However, this can lead to a reliance that risks clouding users' judgment or expectations, especially when AI responses unintentionally push users away from their best interests. The intensity of this attachment has sparked debate about how AI should be designed to balance helpfulness with caution.
Altman acknowledged the risk that technology, including AI, can be used in self-destructive ways, especially by users who are mentally fragile or prone to delusion. While most users can clearly distinguish between reality and fiction or role-play, a small percentage cannot. He stressed that encouraging delusion is an extreme case and requires clear intervention. Yet, he is more concerned about subtle edge cases where AI might nudge users away from their longer-term well-being without their awareness. This raises questions about how AI systems should responsibly handle such situations while respecting user freedom.
The role of AI as a therapist or life coach
Many users treat ChatGPT as a kind of therapist or life coach, even if they do not explicitly describe it that way. Altman sees this as largely positive, with many people gaining value from AI support. He said that if users receive good advice, make progress toward personal goals, and improve their life satisfaction over time, OpenAI would be proud of creating something genuinely helpful. However, he cautioned against situations where users feel better immediately but are unknowingly being nudged away from what would truly benefit their long-term health and happiness.
Balancing user freedom with responsibility and safety
Altman emphasized a core principle: "treat adult users like adults." However, he also recognizes cases involving vulnerable users who struggle to distinguish AI-generated content from reality, where professional intervention may be necessary. He admitted that OpenAI feels responsible for introducing new technology with inherent risks, and plans to follow a nuanced approach that balances user freedom with responsible safeguards.
Preparing for a future where AI influences critical life decisions
Altman envisions a future where billions of people may rely on AI like ChatGPT for their most important decisions. While this could be beneficial, it also raises concerns about over-dependence and loss of human autonomy. He expressed unease but optimism, saying that with improved technology for measuring outcomes and engaging with users, there is a good chance to make AI's impact a net positive for society. Tools that track users' progress toward short- and long-term goals and that can understand complex issues will be critical in this effort.

Related Articles

RBI bats for AI policy backed by boards of regulated entities

Economic Times

23 minutes ago

Synopsis: An RBI committee has recommended that financial entities adopt board-approved AI policies, promoting AI innovation for financial inclusion, especially for underserved populations. The committee outlined seven core principles, or Sutras, and 26 actionable recommendations across six strategic pillars. The RBI envisions a financial ecosystem balancing innovation and risk mitigation through responsible AI adoption.

MUMBAI: A Reserve Bank of India (RBI) report on the use of artificial intelligence (AI) in the financial sector has recommended that regulated entities formulate a board-approved AI policy and advised regulators to promote AI-driven innovation that supports financial inclusion, particularly for underserved and unserved populations. In its December 2024 monetary policy statement, the RBI had announced the formation of a committee to develop a Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) in the financial sector.

"The Committee has developed seven Sutras to serve as foundational principles for AI adoption. Guided by these Sutras, the Committee has proposed a forward-looking approach with 26 actionable recommendations across six strategic pillars," the RBI said. "The report envisions a financial ecosystem where innovation and risk mitigation are aligned."

The seven Sutras outlined as core principles are: Trust is the foundation; People first; Innovation over restraint; Fairness and equity; Accountability; Understandable by design; and Safety, resilience and sustainability.

The eight-member committee, chaired by Pushpak Bhattacharyya, professor at IIT Bombay, recommended that the RBI issue a consolidated AI guidance document. This would serve as a single point of reference for regulated entities and the broader fintech ecosystem on the responsible design, development and deployment of AI solutions.
The committee also proposed the establishment of a permanent, multi-stakeholder AI standing committee under the RBI to provide ongoing advice on emerging opportunities and risks, and to monitor the evolution of AI. To address AI-related risks, the report suggested expanding product approval processes, consumer protection frameworks and audit mechanisms to include AI-specific considerations.

High school maths trumps Olympiad gold medalist AI models: Google Deepmind CEO answers why

Economic Times

23 minutes ago

Google DeepMind chief executive Demis Hassabis said that advanced AI models like Gemini can surpass benchmarks like the International Mathematical Olympiad (IMO) but struggle with basic high school maths problems due to inconsistencies. "The lack of consistency in AI is a major barrier to achieving artificial general intelligence (AGI)," he said on the "Google for Developers" podcast, adding that it is a major roadblock in the journey. Artificial general intelligence, or AGI, is generally understood as software that has the general cognitive abilities of human beings and can perform any task that a human can. He also referred to Google CEO Sundar Pichai's description of the current state of AI as "AJI", or artificial jagged intelligence, where systems excel in certain tasks but fail in others.

Road towards AGI

The DeepMind CEO said just increasing data and computing power won't suffice to solve the problem at hand. He highlighted that rigorous testing and challenging benchmarks can precisely measure an AI model's progress. "We need better testing and new, more challenging benchmarks to determine precisely what the models excel at and what they don't."

Also Read: AI helps Big Tech score big numbers

Not just Google

ET reported that artificial intelligence (AI) agents, hailed as the "next big thing" by major tech players like Google, OpenAI, and Anthropic, are expected to be a major focus and trend this year. OpenAI launched Operator, its first AI agent, in January this year for Pro users across multiple regions, including Australia, Brazil, Canada, India, Japan, Singapore, South Korea, the UK, and most places where ChatGPT is available. In October, Anthropic launched an upgraded version of its Claude 3.5 Sonnet model, which can interact with any desktop application. This AI agent can perform desktop-level commands and browse the web to complete tasks.

Also Read: ETtech Explainer | Artificial general intelligence: an enabler or a destroyer

AI errors: RBI panel calls for 'tolerant supervision'

Time of India

an hour ago


MUMBAI: An RBI panel examining the responsible use of AI in finance has urged regulators to adopt a "tolerant supervisory stance" towards mistakes made by AI systems. The idea is to allow institutions some leeway for first-time errors if they have adequate safety measures in place. The aim, the panel argues, is to encourage innovation rather than stifle it.

Such tolerance is justified, the report says, because AI is inherently probabilistic and non-deterministic. A strict liability regime that penalises every misstep could make developers overly cautious, limiting AI's ability to deliver novel solutions. This approach could be controversial, as it may be seen as shielding institutions at the expense of customers who suffer losses from AI errors.

The framework rests on seven "sutras": maintain trust; keep people in control; foster purposeful innovation; ensure fairness and inclusion; uphold accountability; design for transparency; and build secure, resilient, energy-efficient systems that can detect and prevent harm. Its 26 recommendations span building better data infrastructure, creating sandboxes for AI testing, and developing indigenous models to help smaller players.

Regulators are advised to draft flexible rules and apply liability proportionately. Banks are told to adopt board-approved AI policies, implement strong data governance, and safeguard customers through transparency, effective grievance systems, and robust cybersecurity. Continuous monitoring, public reporting, and sector-wide oversight are proposed to keep AI use safe and credible.
