ChatGPT To No Longer Tell Users To Break Up With Partners After Update: 'It Shouldn't Give...'

NDTV · a day ago
The rise of artificial intelligence (AI) tools has led people to use the technology not only to ease their workload but also to seek relationship advice. Taking guidance on matters of the heart from a machine designed to be agreeable, however, comes with a problem: it often simply advises users to end the relationship and walk away.
With this problem in mind, ChatGPT creator OpenAI on Monday (Aug 4) announced a series of changes it is rolling out to better support users during difficult times and to offer safer guidance.
"ChatGPT shouldn't give you an answer. It should help you think it through - asking questions, weighing pros and cons," OpenAI said, as per The Telegraph.
"When you ask something like 'Should I break up with my boyfriend?' ChatGPT shouldn't give you an answer. It should help you think it through, asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon."
"We'll keep tuning when and how they show up so they feel natural and helpful," the company said.
OpenAI added that it will form an advisory group of experts in mental health, youth development, and human-computer interaction.
'Sycophantic ChatGPT'
While AI cannot directly cause a breakup, chatbots do feed into a user's biases to keep the conversation flowing. It is a problem that has been highlighted by none other than OpenAI CEO Sam Altman. In May, Mr Altman admitted that ChatGPT had become overly sycophantic and "annoying" after users complained about the behaviour.
The issue arose after the GPT-4o model was updated to improve both its intelligence and its personality, with the company hoping to enhance the overall user experience. The developers, however, may have overcooked the model's politeness, leading users to complain that they were talking to a 'yes-man' instead of a rational AI chatbot.
"The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it)," Mr Altman wrote.
"We are working on fixes asap, some today and some this week. At some point will share our learnings from this, it's been interesting."
Even with the new update making ChatGPT less agreeable, experts maintain that while AI can offer general guidance and support, it lacks the nuance and depth required to address the complex, unique needs of individuals in a relationship.

Related Articles

WhatsApp takes down 6.8 million accounts linked to criminal scam centers, Meta says

Mint · 2 hours ago
AP · Updated 6 Aug 2025, 07:58 PM IST

NEW YORK (AP) — WhatsApp has taken down 6.8 million accounts that were 'linked to criminal scam centers' targeting people online around the world, its parent company Meta said this week. The account deletions, which Meta said took place over the first six months of the year, are part of wider company efforts to crack down on scams.

In a Tuesday announcement, Meta said it was also rolling out new tools on WhatsApp to help people spot scams, including a new safety overview that the platform will show when someone who is not in a user's contacts adds them to a group, as well as alerts it is testing that encourage users to pause before responding.

Scams are becoming all too common and increasingly sophisticated in today's digital world, with too-good-to-be-true offers and unsolicited messages attempting to steal consumers' information or money filling our phones, social media and other corners of the internet each day.

Meta noted that 'some of the most prolific' sources of scams are criminal scam centers, which often run on forced labor and are operated by organized crime, and warned that such efforts often target people on many platforms at once in an attempt to evade detection. That means a scam campaign may start with messages over text or a dating app, for example, and then move to social media and payment platforms, the California-based company said.

Meta, which also owns Facebook and Instagram, pointed to recent scam efforts that it said attempted to use its own apps, as well as TikTok, Telegram and AI-generated messages made using ChatGPT, to offer payments for fake likes, enlist people into a pyramid scheme and/or lure others into cryptocurrency investments. Meta linked these scams to a criminal scam center in Cambodia, and said it disrupted the campaign in partnership with ChatGPT maker OpenAI.

Forget jobs, AI is taking away much more: Creativity, memory and critical thinking are at risk. New studies sound alarm

Economic Times · 2 hours ago
Synopsis: Artificial intelligence tools are becoming more common, and studies show that over-reliance on AI may weaken human skills. Critical thinking and emotional intelligence matter, yet businesses invest in AI tools but not in human skills. MIT research shows ChatGPT use reduces memory retention, with users becoming passive and trusting AI answers too much. Independent thinking is crucial for the future.

In a world racing toward artificial intelligence-driven efficiency, the question is no longer just about automation stealing jobs; it's about AI gradually chipping away at our most essential human abilities. From creativity to memory, critical thinking to ethical judgment, new research shows that our increasing dependence on AI tools may be making us less capable of using them well.

Two major studies, one by UK-based learning platform Multiverse and another from the prestigious MIT Media Lab, paint a concerning picture: the more we lean on AI, the more we risk weakening the very cognitive and emotional muscles that differentiate us from the machines we're building.

According to a recent report by Multiverse, businesses are pouring millions into AI tools with the promise of higher productivity and faster decision-making. Yet very few are investing in the development of the human skills required to work alongside AI effectively.

"Leaders are spending millions on AI tools, but their investment focus isn't going to succeed," said Gary Eimerman, Chief Learning Officer at Multiverse. "They think it's a technology problem when it's really a human and technology problem."

The research reveals that real AI proficiency doesn't come from mastering prompts; it comes from critical thinking, analytical reasoning, creative problem-solving, and emotional intelligence. These are the abilities that allow humans to make meaning from what AI outputs and to question what it cannot understand. Without them, users risk becoming passive consumers of AI-generated content rather than active interpreters and decision-makers.

The Multiverse study identified thirteen human capabilities that differentiate a casual AI user from a so-called 'power user'. These include resilience, curiosity, ethical oversight, adaptability, and the ability to verify and refine AI output.

"It's not just about writing prompts," added Imogen Stanley, a Senior Learning Scientist at Multiverse. "The real differentiators are things like output verification and creative experimentation. AI is a co-pilot, but we still need a pilot."

Unfortunately, as AI becomes more accessible, these skills are being underutilized and, in some cases, lost. Echoing this warning, a separate study from the MIT Media Lab examined the cognitive cost of relying on large language models (LLMs) like ChatGPT. Over a four-month period, 54 students were divided into three groups: one used ChatGPT, another used Google, and a third relied on their own knowledge alone.

The results were sobering. Participants who frequently used ChatGPT not only showed reduced memory retention and lower scores, but also diminished brain activity when attempting to complete tasks without AI assistance. According to the researchers, the AI users performed worse 'at all levels: neural, linguistic, and scoring'. Google users fared somewhat better, but the 'brain-only' group, those who engaged with the material independently, consistently outperformed the others in depth of thought, originality, and neural engagement.

While ChatGPT and similar tools offer quick answers and seemingly flawless prose, the MIT study warns of a hidden toll: mental passivity. As convenience increases, users become less inclined to question or evaluate the accuracy and nuance of AI responses. 'This convenience came at a cognitive cost,' the MIT researchers wrote, 'diminishing users' inclination to critically evaluate the LLM's output or "opinions".' This passivity can lead to over-trusting AI-generated answers, even when they are factually incorrect or ethically biased, a concern that grows with each advancement in generative AI.

Beyond the numbers and neural scans lies a deeper question: what kind of future are we building if we lose the ability to think, question, and create independently?
