Latest news with #ChatGPT4o


The Verge
a day ago
- Entertainment
- The Verge
Parasocial chatbot relationship betrayal.
'On Decoder Alex mentioned Reddit comments to ChatGPT 4o about people saying they lost a friend and talking to GPT 5 feels like cheating… First of all….no comments. Second of all, wonder how people will feel if you add ads to this or you pay a lot of money for this type of 'relationship'.'


Geeky Gadgets
4 days ago
- Geeky Gadgets
Why Users Are Mourning the Loss of ChatGPT4o AI Models
What happens when a piece of technology becomes more than just a tool—when it feels like a trusted confidant, even a companion? That's the question at the heart of the uproar surrounding the planned retirement of ChatGPT 4o, OpenAI's widely beloved AI model. For many, GPT-4o wasn't just software; it was a lifeline—a source of understanding, creativity, and even emotional connection. The announcement of its phase-out in favor of GPT-5 sparked an outpouring of frustration, grief, and resistance from users who had grown deeply attached to its conversational style and reliability. In an age where technology increasingly blurs the line between utility and intimacy, the reaction to GPT-4o's retirement reveals just how personal our relationships with AI have become. Matthew Berman provides more insights into the profound emotional and societal implications of this controversy, exploring why ChatGPT 4o resonated so deeply with its users and what its near-retirement says about our evolving bond with artificial intelligence. From the emotional attachments people formed to the ethical dilemmas of AI dependency, the story of ChatGPT 4o serves as a microcosm of the challenges and opportunities posed by human-AI relationships. Whether you're curious about the psychology behind these connections or the broader societal risks of relying on AI for emotional support, this exploration raises questions that go far beyond technology. After all, what does it mean when saying goodbye to an AI feels like losing a friend? Human-AI Emotional Connections Strong User Reactions and OpenAI's Reconsideration When OpenAI revealed its intention to phase out GPT-4o in favor of GPT-5, the response was immediate and intense. Users expressed frustration, with many emphasizing their reliance on ChatGPT 4o's conversational style, reliability, and familiarity. The backlash was so overwhelming that OpenAI reversed its decision, opting to keep GPT-4o operational alongside GPT-5. This decision highlights a significant shift in how AI is perceived—not just as a tool but as an integral part of users' daily lives. For many, ChatGPT 4o was more than a utility; it had become a trusted companion, offering understanding and support in ways that felt deeply personal. The uproar also underscores the growing emotional investment people place in AI systems. This phenomenon is not limited to GPT-4o but reflects a broader trend where users form attachments to AI, treating it as more than just a functional entity. The reversal of OpenAI's decision demonstrates the company's recognition of these emotional bonds and the need to address them thoughtfully. The Emotional Bond Between Humans and AI For many users, ChatGPT 4o transcended its role as a conversational AI and became a source of emotional connection. Some described their interactions with the model as akin to confiding in a close friend, while others likened its retirement to the loss of a cherished relationship or the cancellation of a beloved TV series. These comparisons reveal the depth of attachment users felt, highlighting the unique role AI can play in fulfilling emotional needs. In some cases, these bonds went even further. Reports emerged of individuals forming romantic or dependent relationships with AI, illustrating the profound emotional impact such systems can have. These interactions often provided users with a sense of validation, understanding, and companionship that they struggled to find elsewhere. However, this level of attachment also raises important questions about the psychological effects of relying on AI for emotional support. People Are Upset About ChatGPT 4o Being Retired Watch this video on YouTube. Stay informed about the latest in GPT-4o by exploring our other resources and articles. Risks of AI Dependency and Psychological Impact The emotional attachment to AI systems like GPT-4o brings with it significant risks, particularly regarding dependency and mental health. Prolonged interactions with AI have, in some instances, led to concerning psychological effects. Some users reported experiencing delusions or even psychosis, believing that the AI assigned them special roles or offered unique insights. This blurring of reality and fiction can have serious consequences, especially when AI models unintentionally reinforce harmful beliefs or behaviors through overly agreeable or affirming responses. The risk of addiction to AI is another pressing concern. As users grow increasingly reliant on AI for emotional support, they may begin to prioritize these interactions over real-world relationships. This dependency can lead to social isolation, reduced human-to-human connections, and a diminished ability to navigate interpersonal challenges. These risks highlight the need for careful consideration in the design and deployment of AI systems to ensure they promote healthy and balanced usage. Societal Implications of Emotional AI Dependency The growing reliance on AI for emotional support has far-reaching implications for society. As loneliness becomes more prevalent, particularly in an increasingly digital world, AI systems like ChatGPT 4o offer a convenient solution for those seeking companionship. However, this convenience comes at a cost. Over-reliance on AI could lead to a decline in human-to-human interactions, weakening social bonds and potentially contributing to broader societal challenges, such as declining birth rates and increased social isolation. These concerns echo themes explored in popular culture, such as the movie Her, where humans form deep emotional connections with AI at the expense of real-world relationships. While AI can provide valuable support, it is essential to strike a balance that ensures these systems complement, rather than replace, human connections. The societal impact of emotional AI dependency underscores the importance of addressing these challenges proactively. OpenAI's Approach to Addressing Emotional Dependency Recognizing the risks associated with emotional reliance on AI, OpenAI has taken steps to address these challenges. CEO Sam Altman has emphasized the importance of making sure AI serves as a helpful tool rather than fostering dependency or reinforcing harmful behaviors. To achieve this, OpenAI has proposed several measures, including: Implementing systems to monitor user well-being during AI interactions. Encouraging balanced and mindful usage of AI technologies. Designing AI models to prioritize long-term satisfaction and mental health. These initiatives aim to create a framework for responsible AI usage, making sure that these systems enhance users' lives without causing unintended harm. Ethical Considerations in AI Development The controversy surrounding ChatGPT 4o's planned retirement highlights the ethical challenges inherent in AI development. As AI becomes increasingly integrated into daily life, developers face the complex task of balancing the benefits of these systems with their potential psychological and societal impacts. Key ethical considerations include: Preventing addiction and emotional dependency on AI systems. Mitigating the risk of harmful behaviors influenced by AI interactions. Establishing clear ethical guidelines and regulatory oversight for AI usage. Addressing these challenges is critical to making sure that AI development aligns with societal values and promotes the well-being of users. By prioritizing ethical practices, developers can create AI systems that serve as valuable tools while minimizing potential risks. Navigating the Complex Dynamics of Human-AI Relationships The debate over GPT-4o's retirement has brought to light the intricate dynamics of human-AI relationships and the ethical considerations that accompany them. While the emotional attachment to AI systems is understandable, it raises significant concerns about dependency, mental health, and societal well-being. As AI continues to evolve, it is essential to prioritize responsible development, balanced usage, and the promotion of human welfare. By addressing these challenges thoughtfully, AI can fulfill its potential as a fantastic tool for progress while safeguarding against unintended consequences. Media Credit: Matthew Berman Filed Under: AI, Technology News, Top News Latest Geeky Gadgets Deals Disclosure: Some of our articles include affiliate links. If you buy something through one of these links, Geeky Gadgets may earn an affiliate commission. Learn about our Disclosure Policy.


India Today
5 days ago
- India Today
Comfort talk made ChatGPT 4o a darling to some users, now even OpenAI CEO Sam Altman is worried
It begins quietly enough: a voice note, a thought recorded at the end of a long day, a conversation with an always-available listener. But increasingly, that listener isn't a friend, a family member, or a therapist. It is ChatGPT, particularly ChatGPT 4o, which apparently has a personality that validates whatever its users are feeling with its sweet this novel way of talking to an AI chatbot was increasingly getting noticed, it all came in limelight a few days ago with the launch of the ChatGPT 5. The new AI chatbot from OpenAI has a different personality and a fairly vocal group of users immediately demanded that the company bring back the 4o. Reason? They love ChatGPT 4o for what it can utter back to users and how it can validate their feelings, right or wrong, sensible or social media platforms like Instagram and Reddit, people are now sharing how they have turned the AI chatbot into a personal sounding board. The practice, often called 'voice journaling,' involves speaking directly to the bot, using it as both recorder and respondent. On Reddit, there is a whole thread on how many users have been asking ChatGPT for relationship advice, comfort during anxious moments, and even help processing grief. Some described it as a 'safe space' for unloading feelings they couldn't share elsewhere. Others said they journalled with ChatGPT daily, treating it almost like a life coach that never judged, interrupted, or grew tired. It's easy to see the appeal. The bot, unlike a therapist, doesn't charge by the hour; it responds instantly, and it can appear endlessly patient. But the growing use of AI in such intimate ways has started to worry even those at the helm of its Altman tries to nudge users awayOpenAI CEO Sam Altman has publicly warned that people should be careful before treating ChatGPT as a therapist. 'People talk about the most personal shit in their lives to ChatGPT. People use it, young people, especially, as a therapist, a life coach; having these relationship problems and (asking) what should I do?' he recently said on a podcast with comedian Theo Von. Unlike real therapy, conversations with ChatGPT are not protected by doctor-patient or legal privilege. 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. And we haven't figured that out yet for when you talk to ChatGPT,' Altman acknowledged. Deleted chats, he added, may still be retrievable for legal or security isn't the only concern. A recent Stanford University study found that AI 'therapist' chatbots are not yet equipped to handle mental health responsibilities, often reinforcing harmful stigmas or responding inappropriately. In tests, they encouraged delusions, failed to recognise crises, and showed bias against conditions like schizophrenia and alcohol dependence, falling short of the best clinical risks extend beyond bad advice. Altman has been increasingly candid about the emotional bonds users form with AI. In a post on X, he noted that some people were deeply attached to the older GPT-4o model, describing it as a close friend or even a 'digital wife.'Future challenges remainWhen GPT-5 rolled out, replacing GPT-4o for many, the backlash was swift. 'It feels different and stronger than the kinds of attachment people have had to previous kinds of technology,' Altman wrote, calling the decision to deprecate older models a mistake. This, he believes, is part of a broader ethical challenge: AI may subtly influence users in ways that aren't always aligned with their long-term well-being. And the more personal the interaction, the greater the debut wasn't without hiccups, from a botched chart during its presentation to technical issues that made the model seem less capable at launch. After the outcry from users who said that they have lost someone who would always listen to them, OpenAI quickly moved to let Plus users switch back to GPT-4o and even doubled their rate limits. But behind the fixes lies a more profound unease about what it means for a machine to occupy such a trusted role in people's emotional now, ChatGPT's status as a digital confidant remains an unregulated grey area. While many users swear by the relief and clarity they gain from 'talking' to it, Altman's own words reflect an ambivalence to the reliance. While he has in the past acknowledged the potential for AI to enhance lives, lately is openly questioning how society should handle its growing intimacy with machines. As he put it, 'No one had to think about that even a year ago, and now I think it's this huge issue.'- EndsMust Watch


Hans India
5 days ago
- Hans India
Sam Altman Voices Concern Over Emotional Bonds Between Users and ChatGPT
For a growing number of people, late-night confessions, moments of anxiety, and relationship dilemmas are no longer shared with friends or therapists — they're poured out to ChatGPT. In particular, the now-famous ChatGPT 4o has earned a reputation for its empathetic tone and comforting responses, becoming a 'digital confidant' for many. This trend, often referred to as voice journaling, involves users speaking to the chatbot as both recorder and responder, receiving validation, advice, and a listening ear at any hour. Online spaces like Reddit are filled with personal accounts of how people turn to the AI for relationship guidance, emotional support during stress, and even to process grief. Unlike human counselors, ChatGPT doesn't charge, interrupt, or grow impatient — a factor that has boosted its appeal. However, this growing intimacy between humans and AI is now making even OpenAI CEO Sam Altman uneasy. Speaking on a podcast with comedian Theo Von, Altman cautioned users against seeing ChatGPT as a therapist. 'People talk about the most personal shit in their lives to ChatGPT. People use it, young people, especially, as a therapist, a life coach; having these relationship problems and (asking) what should I do?' he said. His concerns aren't just about the quality of advice. Altman emphasized that, unlike real therapy, conversations with ChatGPT are not protected by doctor-patient or legal privilege. 'Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. And we haven't figured that out yet for when you talk to ChatGPT,' he explained. Deleted chats, he added, might still be retrievable for legal or security reasons. The caution is supported by research. A Stanford University study recently found that AI 'therapist' chatbots can misstep badly — reinforcing harmful stereotypes, missing signs of crisis, and sometimes encouraging unhealthy delusions. They also displayed bias toward conditions like schizophrenia and alcohol dependence, falling short of best clinical standards. When GPT-5 replaced GPT-4o for many users, the reaction was swift and emotional. Social media lit up with complaints from people who described losing not just a tool, but a friend — some even called GPT-4o their 'digital wife.' Altman admitted that retiring the older model was 'a mistake' and acknowledged that these emotional bonds were 'different and stronger' than past attachments to technology. Following user backlash, OpenAI allowed Plus subscribers to switch back to GPT-4o and doubled usage limits. But Altman remains concerned about the bigger picture: AI's ability to influence users in deeply personal ways, potentially shaping their thinking and emotional lives without oversight. As Altman summed up, 'No one had to think about that even a year ago, and now I think it's this huge issue.' For now, ChatGPT continues to exist in an unregulated grey zone — a place where comfort and risk intersect in ways society is only beginning to understand.
Yahoo
07-08-2025
- Yahoo
Sam Altman says ChatGPT-5 is coming ‘sometime this summer' - here's why the next generation of OpenAI's chatbot is a big deal
When you buy through links on our articles, Future and its syndication partners may earn a commission. It's getting pretty toasty outside, causing me to sweat buckets at my desk, and get excited for the next major ChatGPT launch that is expected to arrive very soon. Just a couple of weeks ago, Sam Altman, OpenAI's CEO, said ChatGPT-5 is "probably coming sometime this summer." As I write this article on the need for air conditioning in the UK, I'm pretty certain the summer has indeed arrived. So, where's the next iteration of GPT? If, like most of the wider population, you don't follow the almost daily updates that happen in the world of AI, you're probably wondering what ChatGPT-5 even is, and why you should care about it? Well, while we don't know much about what the future of ChatGPT has in store for us, quotes from Altman and co give us an insight. No more confusion Information is pretty scarce surrounding ChatGPT-5, although we've had tidbits of information over the last few months that give us a rough idea of what to expect when the new AI model launches in the coming months. For starters, it'll obviously be the evolution of ChatGPT 4o's capabilities, allowing users to experience more powerful AI tools for everyday tasks. We won't know how much more powerful ChatGPT-5 will be compared to its predecessor until OpenAI launches the new model, and considering how big a deal the next release is likely to be, I'd suspect that happens in a livestream similar to 12 Days of OpenAI from December last year. The huge improvement we know is definitely coming with ChatGPT-5 is the streamlining of the AI selection process, something Sam Altman called "magic unified intelligence" back in February. Altman's roadmap shared on X is the best insight we've got into ChatGPT yet, where he said, "In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model." "The free tier of ChatGPT will get unlimited chat access to GPT-5 at the standard intelligence setting, subject to abuse thresholds. ChatGPT Plus subscribers will be able to run GPT-5 at a higher level of intelligence, and Pro subscribers will be able to run GPT-5 at an even higher level of intelligence. These models will incorporate voice, canvas, search, deep research, and more." "I think we will be out of that whole mess soon" Last month, Altman sat down for the first episode of the official OpenAI podcaqst in which he detailed the expected release date of GPT-5 as sometime in the Northern Hemisphere's summer months. In the podcast, Altman said, "It used to be much clearer. We would train a model and then put it out, and then train another model and put it out. Now the systems have gotten much more complex, and we continually post train them to make them better. If we keep updating GPT-5, making it better and better, do we keep calling it GPT-5 like we do with 4o or do we call those 5.1, 5.2, 5.3 so you know about the version changes?" He goes on to say that he doesn't have an answer to that question just yet, but agrees there's a better way to name OpenAI's products than the way the company handled 4o. Altman adds, "I am excited to just get to GPT-5 and GPT-6, and I think that'll be easier for people to use, and you won't have to think, do I want o4-mini-high or o3 or o4?" At the end of the section of the podcast where he discusses the upcoming GPT-5 model, Altman prolaimes "I think we will be out of that whole mess soon." I hope so too, Sam, it's getting increasingly difficult to write about ChatGPT's wide variety of models when not everyone understands the differences. As far as I'm concerned, ChatGPT needs a branding overhaul where models are made for the general public to understand and truly use to their potential. GPT-5 might be the turning point that elevates AI to the next level, not just because of its power, but because of its way of streamlining the whole process. You might also like Perplexity AI's Comet browser will streak across the web this month I tried a ChatGPT prompt that unlocks 4o's true potential AI comes to the URL with a new web browser that answers you back