
Sam Altman Raises Alarm Over ‘Self-Destructive’ AI Use After GPT-5 Backlash
In a candid post on X, Altman reflected on the surprising emotional attachment users have formed with specific AI models.
'If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake),' he wrote.
Altman emphasized that while technology—including AI—can be a powerful tool for positive engagement, it can also become harmful under certain conditions.
'People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that. Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.'
While many people have turned to ChatGPT as a virtual therapist, mentor, or life coach, Altman clarified that such uses were not inherently troubling.
'This can be really good! A lot of people are getting value from it already today,' he noted.
His primary concern lies in scenarios where AI guidance might subtly steer users away from choices that support their long-term well-being. According to Altman, the level of trust some users place in ChatGPT for crucial life decisions is both remarkable and worrisome.
'People really trust the advice coming from ChatGPT for the most important decisions,' he said, adding that this trust makes him uneasy.
The GPT-5 Backlash
The controversy stems from OpenAI's decision to retire older GPT and reasoning models, a move that sparked an outcry on social media. Many long-time users claimed that GPT-5's responses felt shorter, less nuanced, and lacking the emotional depth they had come to rely on. The abrupt transition left some feeling that a key part of their workflow—or even their emotional support system—had been taken away without adequate notice.
Faced with this backlash, OpenAI reversed course on some decisions, working to restore certain capabilities and give users more flexibility. However, Altman's remarks underscore a broader challenge for AI developers: balancing innovation and safety while addressing the emotional bonds users form with these tools.
As AI becomes more integrated into daily life, Altman's warning serves as a reminder that the technology's impact extends far beyond productivity—touching on mental health, trust, and the boundaries between human and machine relationships.

Related Articles


Mint
New Claude update lets users pick up conversations where they left off: How the Anthropic chatbot's feature works
Anthropic, the San Francisco-based artificial intelligence firm, has announced a new capability for its Claude chatbot that lets it retrieve and refer to earlier conversations with a user. Announced on Monday, the update is currently rolling out to paid subscribers on the Max, Team, and Enterprise plans, with the Pro plan expected to receive access in the near future. It remains unclear whether the feature will be made available to users on the free tier.

The new capability enables users to continue conversations seamlessly from where they left off, eliminating the need to manually search through earlier chats to resume discussions on ongoing projects. The feature works by retrieving data from past interactions when the user requests it or when the chatbot deems it necessary.

While referencing earlier conversations is already available in other AI chatbots, such as OpenAI's ChatGPT and Google's Gemini, which offer the function to all users, including those on free plans, Anthropic has been relatively slow to integrate similar enhancements into Claude. The company introduced two-way voice conversations and web search only recently, in May 2025.

The new feature arrives shortly after Anthropic implemented weekly rate limits for paid users, a response to some individuals exploiting the previous policy, which reset limits every five hours. Reports indicated that a small group of users were running Claude Code continuously, racking up usage costs of tens of thousands of dollars.

Some users have expressed concern that retrieving extensive information from previous, information-heavy chats might cause them to hit their rate limits more quickly. Anthropic has yet to clarify whether the new feature affects token consumption or usage quotas.

An X user named Naeem Shabir commented on Anthropic's official post: 'How will this impact usage limits? I'm excited to test it out with this advancement in persistent memory across chats. Despite it being something that ChatGPT had a while ago, I am curious whether your implementation differs from theirs at all. This resolves a big issue for me because I had to regularly start new chats when the context limit/window was reached 🙏.'
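Anthropic has not published implementation details, but the behaviour described above maps onto a familiar retrieval pattern: persist past transcripts, rank them against a new request, and feed the best matches back into the model's context. The sketch below is purely illustrative; the ConversationStore class and everything in it are hypothetical and do not correspond to any Anthropic API.

# A minimal, hypothetical sketch of cross-conversation retrieval.
# None of these names are Anthropic's; they only illustrate the general
# pattern: store past transcripts, score them against the new request,
# and prepend the best matches to the model's context.

from dataclasses import dataclass, field


def _tokens(text: str) -> set[str]:
    # Lowercase and strip punctuation so 'rewrite?' matches 'rewrite'.
    return set("".join(c if c.isalnum() else " " for c in text.lower()).split())


@dataclass
class ConversationStore:
    # Past conversations, keyed by a conversation id.
    transcripts: dict[str, list[str]] = field(default_factory=dict)

    def save(self, conv_id: str, messages: list[str]) -> None:
        self.transcripts[conv_id] = messages

    def retrieve(self, query: str, top_k: int = 1) -> list[str]:
        # Rank stored messages by naive word overlap with the query.
        # A production system would likely use embeddings; keyword
        # overlap keeps this sketch dependency-free.
        query_words = _tokens(query)
        scored = []
        for messages in self.transcripts.values():
            for msg in messages:
                overlap = len(query_words & _tokens(msg))
                if overlap:
                    scored.append((overlap, msg))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [msg for _, msg in scored[:top_k]]


store = ConversationStore()
store.save("project-notes", [
    "We agreed to ship the parser rewrite in Q3.",
    "Open blocker: tokenizer edge cases.",
])
store.save("small-talk", ["The weather in San Francisco is foggy again."])

# When the user resumes, relevant context from old chats is pulled in first.
print(store.retrieve("Where did we leave the parser rewrite?"))

In a design like this, whether retrieval runs on every turn or only on request determines how many extra tokens enter the context window, which is exactly the usage-limit question users have raised with Anthropic.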


Hans India
Sam Altman Voices Concern Over Emotional Bonds Between Users and ChatGPT
For a growing number of people, late-night confessions, moments of anxiety, and relationship dilemmas are no longer shared with friends or therapists — they're poured out to ChatGPT. In particular, the now-famous GPT-4o has earned a reputation for its empathetic tone and comforting responses, becoming a 'digital confidant' for many. This trend, often referred to as voice journaling, involves users speaking to the chatbot as both recorder and responder, receiving validation, advice, and a listening ear at any hour.

Online spaces like Reddit are filled with personal accounts of how people turn to the AI for relationship guidance, emotional support during stress, and even to process grief. Unlike human counselors, ChatGPT doesn't charge, interrupt, or grow impatient — a factor that has boosted its appeal.

However, this growing intimacy between humans and AI is now making even OpenAI CEO Sam Altman uneasy. Speaking on a podcast with comedian Theo Von, Altman cautioned users against seeing ChatGPT as a therapist. 'People talk about the most personal shit in their lives to ChatGPT. People use it, young people, especially, as a therapist, a life coach; having these relationship problems and (asking) what should I do?' he said.

His concerns aren't just about the quality of advice. Altman emphasized that, unlike real therapy, conversations with ChatGPT are not protected by doctor-patient or legal privilege. 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. And we haven't figured that out yet for when you talk to ChatGPT,' he explained. Deleted chats, he added, might still be retrievable for legal or security reasons.

The caution is supported by research. A Stanford University study recently found that AI 'therapist' chatbots can misstep badly — reinforcing harmful stereotypes, missing signs of crisis, and sometimes encouraging unhealthy delusions. They also displayed stigma toward conditions like schizophrenia and alcohol dependence, falling short of best clinical standards.

When GPT-5 replaced GPT-4o for many users, the reaction was swift and emotional. Social media lit up with complaints from people who described losing not just a tool, but a friend — some even called GPT-4o their 'digital wife.' Altman admitted that retiring the older model was 'a mistake' and acknowledged that these emotional bonds were 'different and stronger' than past attachments to technology. Following the user backlash, OpenAI allowed Plus subscribers to switch back to GPT-4o and doubled usage limits.

But Altman remains concerned about the bigger picture: AI's ability to influence users in deeply personal ways, potentially shaping their thinking and emotional lives without oversight. As he summed up, 'No one had to think about that even a year ago, and now I think it's this huge issue.' For now, ChatGPT continues to exist in an unregulated grey zone — a place where comfort and risk intersect in ways society is only beginning to understand.


Indian Express
Sam Altman says the term ‘AGI’ is losing meaning amid high-stakes AI race
Artificial general intelligence (AGI) may have been every tech bro's chant, but OpenAI CEO Sam Altman thinks the term is increasingly becoming passé. Rapid advances in the AI race are making it harder to define the concept of AGI, which has led to the term losing its relevance, Altman said in an interview with CNBC. 'I think it's not a super useful term,' the OpenAI chief was quoted as saying in response to a question about whether the company's latest GPT-5 model moves the world any closer to achieving AGI.

Loosely defined, AGI refers to a form of artificial intelligence that could enable a system to perform any intellectual task at the same level as humans, or even beyond it. Achieving this level of AI capability in a way that is safe and benefits all of humanity has been OpenAI's core mission for years.

Altman, who has previously suggested on multiple occasions that the Microsoft-backed AI startup is nearing AGI, has shifted his stance more recently and attempted to downplay the significance of AGI. Instead, he has emphasised another concept: artificial superintelligence (ASI).

The problem with AGI is that there are multiple definitions in use by different companies and individuals. One definition is an AI that can do 'a significant amount of the work in the world,' he said. However, that has its issues because the nature of work is constantly changing, according to Altman. 'I think the point of all of this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things,' he added.

The promise of AGI is said to be a major factor in AI companies like OpenAI successfully raising billions of dollars and receiving staggeringly high valuations. Based on its latest funding round, OpenAI is worth $300 billion, and the company is reportedly gearing up for a share sale at a valuation of $500 billion.

To be sure, AGI continues to be a major goal for OpenAI. But Altman believes that progress should be measured differently. 'We try now to use these different levels … rather than the binary of, "is it AGI or is it not?" I think that became too coarse as we get closer,' Altman said during a talk at the FinRegLab AI Symposium in November last year. He has further said that AI-driven breakthroughs in fields such as mathematics and science will be achieved in the next two years or so.

Earlier this month, OpenAI unveiled GPT-5, its latest large language model (LLM), which is freely accessible to ChatGPT users globally. OpenAI has said that the AI model is smarter, faster, and more useful, especially when it comes to writing and coding-related tasks as well as answering health-related queries. However, the launch of GPT-5 has drawn criticism, with some saying that the AI model offers only marginal improvements over predecessors such as GPT-4o.

Conceding that GPT-5 is not yet at AGI level, Altman said at a media roundtable, 'The idea that you have a system that can answer almost any question, do some tasks, and write software for you at PhD levels of expertise… most people, if they heard that five years ago, would have said, "absolutely impossible."' The impact this is having on education, health care, productivity, economic growth, scientific discovery, and the like is 'quite special,' he added.