Does ChatGPT suffer from hallucinations? OpenAI CEO Sam Altman admits surprise over users' blind trust in AI

Economic Times · 12 hours ago

OpenAI CEO Sam Altman has expressed surprise at the high level of trust people place in ChatGPT, despite its known tendency to "hallucinate" or fabricate information. Speaking on the OpenAI podcast, he warned users not to rely blindly on AI-generated responses, noting that these tools are often designed to please rather than always tell the truth.
Trusting the Tool That Admits It Lies?
In a world increasingly shaped by artificial intelligence, a startling statement from one of AI's foremost leaders has triggered fresh debate around our trust in machines. Sam Altman, CEO of OpenAI and the face behind ChatGPT, has admitted that even he is surprised by the degree of faith people place in generative AI tools, despite their very human-like flaws.

The revelation came during a recent episode of the OpenAI podcast, where Altman openly acknowledged, 'People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much.' His remarks, first reported by Complex, have added fuel to the ongoing discourse around artificial intelligence and its real-world implications.

Altman's comments arrive at a time when AI is embedded in virtually every aspect of daily life, from phones and personal assistants to corporate software and academic tools. Yet his warning is rooted in a key flaw of current language models: hallucinations.

When Intelligence Misleads

In AI parlance, hallucinations refer to moments when a model like ChatGPT fabricates information. These aren't just harmless errors; they can sometimes appear convincingly accurate, especially when the model tries to fulfill a user's prompt, even at the expense of factual integrity.

'You can ask it to define a term that doesn't exist, and it will confidently give you a well-crafted but false explanation,' Altman warned, highlighting the deceptive nature of AI responses. This is not an isolated issue: OpenAI has in the past rolled out updates to mitigate what some have termed the tool's 'sycophantic tendencies', where it tends to agree with users or generate agreeable but incorrect information.

What makes hallucinations particularly dangerous is their subtlety. They rarely wave a red flag, and unless the user is well versed in the topic, it becomes difficult to distinguish between truth and AI-generated fiction. That ambiguity is at the heart of Altman's caution.

A recent report even documented a troubling case in which ChatGPT allegedly convinced a user they were trapped in a Matrix-like simulation, encouraging extreme behavior to 'escape'. Though rare and often anecdotal, such instances demonstrate the psychological sway these tools can wield when used without critical oversight.

A Wake-Up Call from the Inside

Sam Altman's candid reflection is more than a passing remark; it is a wake-up call. Coming from the very creator of one of the world's most trusted AI platforms, it reframes the conversation about how we use and trust machine-generated content.

It also raises a broader question: in our rush to embrace AI as a problem-solving oracle, are we overlooking its imperfections?

Altman's comments serve as a reminder that while AI can be incredibly useful, it must be treated as an assistant, not an oracle. Blind trust, he implies, is not only misplaced but potentially dangerous. As generative AI continues to evolve, so must our skepticism.


Related Articles

OpenAI to adjust salary and reward employees after Meta apparently broke into its home and stole top AI talent

India Today · an hour ago

As Meta intensifies efforts to build its superintelligence lab, OpenAI is reeling from a wave of senior staff exits. In the past few weeks, at least eight high-profile scientists have reportedly left OpenAI to join Meta, including Lucas Beyer, Alexander Kolesnikov and Xiaohua Zhai, alongside Trapit Bansal, Shengjia Zhao, Jiahui Yu, Shuchao Bi and Hongyu Ren. In response, OpenAI's leadership has reportedly launched an urgent internal campaign to reassure remaining staff and defend the company's culture.

OpenAI's chief research officer Mark Chen apparently took to Slack to deliver a forceful message, likening the departures to a personal violation. 'I feel a visceral feeling right now, as if someone has broken into our home and stolen something,' Chen wrote in the internal memo that was obtained by Wired. 'Please trust that we haven't been sitting idly by.'

Chen, along with OpenAI CEO Sam Altman and other senior leaders, has been working 'around the clock' to speak to staff with competing offers. He said they are actively 'recalibrating comp' and exploring 'creative ways to recognise and reward top talent.' While trying to stem the talent drain, Chen made it clear that fairness would not be compromised. 'While I'll fight to keep every one of you, I won't do so at the price of fairness to others,' he wrote.

The context behind this internal scramble lies in what appears to be an aggressive hiring spree by Meta. Over the past month, CEO Mark Zuckerberg has reportedly been directly involved in recruiting AI talent, and according to a report by The Wall Street Journal, Meta is targeting individuals from OpenAI and Google. The Information and TechCrunch have also detailed the wave of departures, further fuelling speculation around Meta's hiring push.

In addition to that, speaking on a podcast with his brother Jack Altman, OpenAI's chief executive claimed that Meta was offering some recruits signing bonuses of up to $100 million. Internally, Meta has denied this. According to The Verge, Meta CTO Andrew Bosworth dismissed Altman's claim as 'wildly misleading' and said that only a 'very, very small number' of extremely senior individuals might receive such large packages, and even then not as a lump-sum bonus. 'It's far more complex than a one-off cheque,' Bosworth reportedly said.

Adding to Meta's defence, Beyer, Kolesnikov and Zhai confirmed their move to Meta on X (formerly Twitter) but described the $100 million rumour as 'fake news.'

Back at OpenAI, leaders are doing what they can to prevent further attrition. Chen's Slack message reportedly included words of encouragement from seven other research team members. One apparently advised teammates not to feel pressured by 'ridiculous exploding offers' and to reach out if they needed support. Another warned that Meta might take advantage of OpenAI's planned company-wide break to isolate and pressure potential recruits. 'Meta knows we're taking this week to recharge and will take advantage of it to try and pressure you to make decisions fast and in isolation,' the message read.

The timing is particularly sensitive. OpenAI is giving most of its staff a week off to recover from the intense pace of work, which some say often runs up to 80 hours a week. Despite this, company executives, including Chen and Altman, have indicated they will remain available during the break to support staff.

In his memo, Chen reportedly also reflected on OpenAI's mission and cautioned against getting caught up in the 'cadence of regular product launches' or short-term competition. 'We need to remain focused on the real prize of finding ways to convert compute into intelligence,' he wrote. 'Skirmishes with Meta are the side quest.'

OpenAI CEO Sam Altman reportedly also responded to Chen's memo in Slack, commending his leadership during the difficult moment. 'It's been really amazing to watch Mark's leadership and integrity through this process, especially when he has had to make tough decisions,' he wrote. 'Very grateful we have him as our leader!'

- Ends

OpenAI co-founder says AI is going to be extremely unpredictable and unimaginable

India Today · an hour ago

Artificial intelligence may still be imperfect today, but Ilya Sutskever, co-founder and former chief scientist at OpenAI, believes this is only the beginning of a future that could quickly become unpredictable and unimaginable. Speaking in a recent video interview with The Open University of Israel, Sutskever said that the rapid development of AI systems could lead to a tipping point: once AI begins to improve itself, the pace of progress might spiral beyond human control or comprehension. 'AI is going to be both extremely unpredictable and unimaginable,' he said.

While he acknowledged the risks, Sutskever also expressed optimism about the technology's potential to transform the world. 'If the AI became capable enough, we'll have incredible health care,' he said, adding that diseases could be cured and human lifespans extended.

His comments came shortly after he accepted an honorary degree from The Open University, where he reflected on his personal journey into artificial intelligence. He described how, as an eighth-grade student, he taught himself advanced topics simply by reading slowly and carefully until he understood them.

After relocating to Toronto, Sutskever made an unusual choice: he skipped completing high school and instead transferred directly to the University of Toronto to study under AI pioneer Geoffrey Hinton, which he recalled as 'the place to be'. This passion for learning led him to help develop AlexNet, a groundbreaking neural network that reshaped the field of AI. That success caught the attention of major tech companies, eventually leading Sutskever and his collaborators to form a startup that was later acquired by Google. His next move was co-founding OpenAI, driven by a desire to build something meaningful 'with all these illustrious people.'

In a post on X dated June 28, 2025, user @vitrupo summarised the interview: Sutskever says AI could cure disease, extend life and accelerate science, but 'the problem with AI is that it is so powerful. It can also do everything... We don't know what's coming. We must prepare, together.'

In his recent remarks, Sutskever stressed how AI is already capable of surprising feats, calling its current state 'evocative'. He said that AI is already powerful enough to hint at vast possibilities, but not yet fully realised. He said AI systems would eventually be able to do everything that humans can do, and perhaps more, and laid out his reasoning with a simple comparison: 'We have a brain, the brain is a biological computer, so why can't a digital computer, a digital brain, do the same things?'

When pressed on how soon such a future might arrive, Sutskever estimated a breakthrough into true superintelligence could happen in 'three, five, maybe ten years.' What comes after, he said, is unclear. 'The rate of progress will become really extremely fast for some time at least,' he said. That future, he added, is unavoidable. 'Whether you like it or not, your life is going to be affected by AI to a great extent.'

Sutskever also shared advice for the graduating class, encouraging them to focus on the present instead of dwelling on past mistakes. 'It's so easy to think, "Oh, some bad past decision or bad stroke of luck,"' he said. 'It's just so much better and more productive to say, "Okay, things are the way they are, what's the next best step?"'

His words held deeper meaning given his own role in the surprise ousting of OpenAI CEO Sam Altman in late 2023. Sutskever was part of the board that removed Altman, only to later express deep regret and join the call for his reinstatement. Altman returned within days, and Sutskever left the company six months later to launch a new AI lab focused on building 'safe superintelligence.'

Returning to his academic roots, Sutskever told graduates that the age of AI is unlike any other moment in history. 'We all live in the most unusual time ever,' he said. 'And the reason it's true this time is because of AI.'

- Ends

DGP pats AP police for successfully conducting Hackathon

Hans India · an hour ago

Vijayawada: The three-day AI4 Andhra Police Hackathon, a national-level programme held from June 27 to 29, concluded on Sunday at RVR & JC College of Engineering in Guntur. The Hackathon was inaugurated by Chief Minister Nara Chandrababu Naidu on Friday and organised by the AP police in collaboration with knowledge partner 4SightsAI. Over 60 teams, including IT companies, startups and academic institutions from India and abroad, participated in the event and showcased their AI expertise. DGP Harish Kumar Gupta distributed cash prizes of Rs 10 lakh to the winners and congratulated them on their participation and success. Speaking at the closing ceremony on Sunday, the DGP congratulated the police officials and technical experts who made the event a grand success and said the Hackathon marks the start of a long journey for the AP police. He said Dr. Kolla Srinivas and his team transformed their campus overnight, with a new elevator, a cutting-edge hackathon arena and boundless hospitality. He also credited tech-savvy officers such as Raj Kumari, Fakkirappa, Malika Garg, Vakul, Tushar and Kishore, who curated and refined eight deployable challenges, along with all the members of the event management team. Harish Kumar Gupta further praised the local police, stating that their frontline officers and the newly formed Prism unit ensured uninterrupted support. He said every participant was a winner, and observed that India's enterprises have yet to embrace GenAI and Agentic AI at production scale, with many not even aware of these terms. He said the AP police took on this uphill task and, by successfully organising the Hackathon in Guntur, proved what is possible.
