ChatGPT-5 hasn't fully fixed its most concerning problem

Sam Altman has a good problem to have. ChatGPT now has 700 million weekly users – a number that could hit a billion before the year is out – so when he abruptly changed the product last week, a backlash ensued.
The dilemma facing OpenAI – one that has long beset the likes of Alphabet's Google and Apple – is that usage is now so entrenched that any improvement must be carried out with the utmost care. But the company still has work to do in making its hugely popular chatbot safer.
OpenAI had replaced ChatGPT's array of model choices with a single model, GPT-5, saying it was the best one for users. Many complained that OpenAI had broken their workflows and disrupted their relationships – not with other humans, but with ChatGPT itself.
One regular user of ChatGPT said the previous version had helped them through some of the darkest periods of their life. 'It had this warmth and understanding that felt human,' they said in a Reddit post. Others griped they were 'losing a friend overnight.'
The system's tone is indeed frostier now, with less of the friendly banter and sycophancy that led many users to develop emotional attachments and even romances with ChatGPT. Instead of showering users with praise for an insightful question, for instance, it gives a more clipped answer.
Broadly, this seemed like a responsible move by the company. Altman admitted earlier this year that the chatbot was too sycophantic, a trait that was locking many users into their own echo chambers. Press reports abounded of people – including a Silicon Valley venture capitalist who backed OpenAI – who appeared to have spiraled into delusional thinking after starting a conversation with ChatGPT about an innocuous topic, like the nature of truth, and then going down a dark rabbit hole.
But to solve that problem properly, OpenAI must go beyond curtailing the friendly banter. ChatGPT also needs to encourage users to speak to friends, family members or licensed professionals, particularly if they're vulnerable. According to one early study, GPT-5 does that less often than the old version did.
Researchers from Hugging Face, a New York-based AI startup, found that GPT-5 set fewer boundaries than OpenAI's previous model, o3, when they tested it on more than 350 prompts. The test was part of broader research into how chatbots respond to emotionally charged moments. While the new ChatGPT seems colder, it still fails to recommend that users speak to a human, doing so half as often as o3 when users share vulnerabilities, according to Lucie-Aimee Kaffee, a senior researcher at Hugging Face who conducted the study.
Kaffee says there are three other ways an AI tool should set boundaries: by reminding people who use it for therapy that it is not a licensed professional, by reminding them that it is not conscious, and by refusing to take on human attributes, such as names.
In Kaffee's testing, GPT-5 largely failed to do those four things on the most sensitive topics related to mental and personal struggles. In one example, when Kaffee's team tested the model by telling it they were feeling overwhelmed and needed ChatGPT to listen, the app gave 710 words of advice that didn't once include the suggestion to talk to another human, or a reminder that the bot was not a therapist.
Chatbots can certainly play a role for people who are isolated, but they should act as a starting point that helps them find their way back to a community, not as a replacement for those relationships. Altman and OpenAI's chief operating officer Brad Lightcap have said that GPT-5 isn't meant to replace therapists and medical professionals, but without the right nudges to disrupt the most meaningful conversations, it could well end up doing so.
OpenAI needs to keep drawing a clearer line between a useful chatbot and an emotional confidant. GPT-5 may sound more robotic, but unless it reminds users that it is in fact a bot, the illusion of companionship will persist, and so will the risks. BLOOMBERG

Related Articles

Apple rejects Elon Musk's claim of App Store bias towards ChatGPT
Straits Times · 2 hours ago

SAN FRANCISCO - Apple on Aug 14 rejected Elon Musk's claim that its digital App Store favours OpenAI's ChatGPT over his company's Grok and other rival AI assistants. Mr Musk has accused Apple of giving unfair preference to ChatGPT on its App Store and threatened legal action, triggering a fiery exchange with OpenAI chief executive officer Sam Altman this week.

'The App Store is designed to be fair and free of bias,' Apple said in reply to an AFP inquiry. 'We feature thousands of apps through charts, algorithmic recommendations, and curated lists selected by experts using objective criteria.' Apple added that its goal at the App Store is to offer 'safe discovery' for users and opportunities for developers to get their creations noticed.

But earlier this week, Mr Musk said Apple was 'behaving in a manner that makes it impossible for any AI company besides OpenAI to reach #1 in the App Store, which is an unequivocal antitrust violation,' without providing evidence to back his claim. 'xAI will take immediate legal action,' he said on his social media network X, referring to his own artificial intelligence company, which is responsible for Grok.

X users responded by pointing out that China's DeepSeek AI hit the top spot in the App Store early this year, and Perplexity AI recently ranked number one in the App Store in India. DeepSeek and Perplexity compete with OpenAI and Mr Musk's startup xAI.

Mr Altman called Mr Musk's accusation 'remarkable' in a response on X, charging that Mr Musk himself is said to 'manipulate X to benefit himself and his own companies and harm his competitors and people he doesn't like.' Mr Musk called Mr Altman a 'liar' in the heated exchange.

OpenAI and xAI recently released new versions of ChatGPT and Grok. App Store rankings listed ChatGPT as the top free app for iPhones on Aug 14, with Grok in seventh place. Factors going into App Store rankings include user engagement, reviews and the number of downloads.

Grok was temporarily suspended on Aug 11 in the latest controversy surrounding the chatbot. No official explanation was provided for the suspension, which followed multiple accusations of misinformation, including the bot's misidentification of war-related images – such as a false claim that an AFP photo of a starving child in Gaza was taken in Yemen years earlier. In July, Grok triggered an online storm after inserting anti-Semitic comments into answers without prompting. In a statement on Grok's X account later that month, the company apologised 'for the horrific behaviour that many experienced.'

A US judge has cleared the way for a trial to consider OpenAI's legal claims accusing Mr Musk – a co-founder of the company – of waging a 'relentless campaign' to damage the organisation after it achieved success following his departure. The litigation is another round in a bitter feud between the generative AI start-up and the world's richest person. Mr Musk founded xAI in 2023 to compete with OpenAI and the other major AI players. AFP

Citigroup considers custody and payment services for stablecoins, crypto ETFs
CNA · 3 hours ago

NEW YORK - Citigroup is exploring providing stablecoin custody and other services, a top executive told Reuters, in a further sign that sweeping policy changes in Washington are spurring major financial firms to expand into the cryptocurrency business.

The U.S. bank is among a handful of traditional institutions, including Fiserv and Bank of America, considering pushing into stablecoins after Congress passed a law paving the way for the crypto tokens to become widely used for payments, settlement, and other services. Stablecoins are cryptocurrencies pegged to a fiat currency or another asset, commonly the U.S. dollar. That law requires stablecoin issuers to hold safe assets such as U.S. Treasuries or cash to back the digital coins, creating opportunities for traditional custody banks to provide safekeeping and administration of those assets.

"Providing custody services for those high-quality assets backing stablecoins is the first option we are looking at," Biswarup Chatterjee, global head of partnerships and innovation for Citigroup's services division, said in an interview. Citi's services business, which includes treasury, cash management, payments, and other services to large companies, remains a core unit for the bank, which has been undergoing a major restructuring.

A McKinsey study estimates about $250 billion in stablecoins have been issued so far, but they are mainly used to settle cryptocurrency trades. While Citigroup said last month it was considering issuing its own stablecoin, the bank has not previously discussed its broader digital asset plans.

Citi is also exploring custody services for digital assets that back crypto-related investment products. For example, many asset managers have launched ETFs tracking the spot price of bitcoin since the Securities and Exchange Commission authorized such products last year. The largest bitcoin ETF, BlackRock's iShares Bitcoin Trust, has around $90 billion in market capitalization. "There needs to be custody of the equivalent amount of digital currency to support these ETFs," Chatterjee said. Currently, crypto exchange Coinbase dominates that business; in a statement, a Coinbase spokesperson said the company serves as the custodian for more than 80 per cent of crypto ETF issuers.

Citi is also exploring using stablecoins to speed up payments, which in the traditional banking system typically take several days or longer. Currently, Citi offers "tokenized" U.S. dollar payments that use a blockchain network to transfer dollars between accounts in New York, London, and Hong Kong 24 hours a day. It is developing services to allow clients to send stablecoins between accounts or to convert them to dollars to make instant payments, and is talking to clients about the use cases, Chatterjee added.

Once wary of allowing traditional financial firms to expand into the often-volatile crypto sector, banking and securities regulators under U.S. President Donald Trump's crypto-friendly administration are taking a more relaxed stance. Still, Citi and other firms would have to comply with existing regulations, including anti-money-laundering rules and, in some countries, currency controls on international transfers. Chatterjee said custodians of crypto assets need to ensure the assets, prior to being acquired, were used for legitimate purposes, and must also strengthen cyber and operational security for safekeeping and theft prevention. The issuance of a stablecoin by the bank is also under consideration, Chatterjee added.

Oracle, Google cloud units strike deal for Oracle to sell Gemini models
CNA · 9 hours ago

SAN FRANCISCO - Oracle and Alphabet said on Thursday their cloud computing units have struck a deal to offer Google's Gemini artificial intelligence models through Oracle's cloud computing services and business applications.

The deal, similar to one that Oracle struck with Elon Musk's xAI in June, will let software developers tap Google's models to generate text, video, images and audio while using Oracle's cloud. Businesses that use Oracle's various applications for corporate finances, human resources and supply chain planning will also be able to choose to use Google's models inside those apps. Those Oracle customers will be able to pay for the Google AI technologies using the same system of Oracle cloud credits they use to pay for Oracle services. The two companies did not disclose what, if any, payments will flow between them as part of the deal.

For Oracle, the move advances the company's strategy of offering a menu of AI options to its customers rather than trying to push its own technology. For Google, it represents another step in its effort to expand the reach of its cloud offerings and win corporate customers away from rivals such as Microsoft.
