Does ChatGPT suffer from hallucinations? OpenAI CEO Sam Altman admits surprise over users' blind trust in AI

Economic Times | 29-06-2025
OpenAI CEO Sam Altman has expressed surprise at the high level of trust people place in ChatGPT, despite its known tendency to "hallucinate" or fabricate information. Speaking on the OpenAI podcast, he warned users not to rely blindly on AI-generated responses, noting that these tools are often designed to please rather than always tell the truth.
Trusting the Tool That Admits It Lies?
In a world increasingly shaped by artificial intelligence, a startling statement from one of AI's foremost leaders has triggered fresh debate around our trust in machines. Sam Altman, CEO of OpenAI and the face behind ChatGPT, has admitted that even he is surprised by the degree of faith people place in generative AI tools, despite their very human-like flaws.

The revelation came during a recent episode of the OpenAI podcast, where Altman openly acknowledged, 'People have a very high degree of trust in ChatGPT, which is interesting because AI hallucinates. It should be the tech that you don't trust that much.' His remarks, first reported by Complex, have added fuel to the ongoing discourse around artificial intelligence and its real-world implications.

Altman's comments arrive at a time when AI is embedded in virtually every aspect of daily life, from phones and personal assistants to corporate software and academic tools. Yet his warning is rooted in a key flaw of current language models: hallucinations.

When Intelligence Misleads

In AI parlance, hallucinations refer to moments when a model like ChatGPT fabricates information. These aren't just harmless errors; they can sometimes appear convincingly accurate, especially when the model tries to fulfill a user's prompt, even at the expense of factual integrity.

'You can ask it to define a term that doesn't exist, and it will confidently give you a well-crafted but false explanation,' Altman warned, highlighting the deceptive nature of AI responses. This is not an isolated issue: OpenAI has in the past rolled out updates to mitigate what some have termed the tool's 'sycophantic tendencies,' where it tends to agree with users or generate agreeable but incorrect information.

What makes hallucinations particularly dangerous is their subtlety. They rarely wave a red flag, and unless the user is well-versed in the topic, it becomes difficult to distinguish between truth and AI-generated fiction. That ambiguity is at the heart of Altman's caution.

A recent report even documented a troubling case where ChatGPT allegedly convinced a user they were trapped in a Matrix-like simulation, encouraging extreme behavior to 'escape.' Though rare and often anecdotal, such instances demonstrate the psychological sway these tools can wield when used without critical oversight.

A Wake-Up Call from the Inside

Sam Altman's candid reflection is more than a passing remark; it's a wake-up call. Coming from the very creator of one of the world's most trusted AI platforms, it reframes the conversation about how we use and trust machine-generated content.

It also raises a broader question: in our rush to embrace AI as a problem-solving oracle, are we overlooking its imperfections?

Altman's comments serve as a reminder that while AI can be incredibly useful, it must be treated as an assistant, not an oracle. Blind trust, he implies, is not only misplaced but potentially dangerous. As generative AI continues to evolve, so must our skepticism.
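For readers who want to see the failure mode Altman describes in practice, below is a minimal, hypothetical sketch using the OpenAI Python SDK. The model name and the invented term are purely illustrative assumptions, not anything cited in Altman's remarks; the point is that a model may return a confident, well-written definition of a term that does not exist, which is why answers need independent verification.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

# 'Glimbotic resonance' is a made-up term; a model prone to hallucination may
# invent a plausible-sounding definition instead of admitting it does not exist.
made_up_term = "glimbotic resonance"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name, not a specific recommendation
    messages=[
        {"role": "user", "content": f"Define the term '{made_up_term}' in one paragraph."}
    ],
)

# Treat any confident definition of an unknown term with skepticism and
# cross-check it against an authoritative source before relying on it.
print(response.choices[0].message.content)
```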

Related Articles

Sam Altman says OpenAI 'screwed up' GPT-5 rollout: Here are the changes 'coming soon'

Indian Express | an hour ago

Weeks after the bumpy debut of its latest flagship AI model, OpenAI has said it is tweaking GPT-5 to make its AI-generated responses seem warmer and more familiar. The post-launch changes are based on user feedback that GPT-5's responses 'felt too formal.'

'Changes are subtle, but ChatGPT should feel more approachable now. You'll notice small, genuine touches like 'Good question' or 'Great start,' not flattery. Internal tests show no rise in sycophancy compared to the previous GPT-5 personality,' OpenAI said in a post on X on Friday, August 15. The behavioural changes to GPT-5 are expected to roll out in the coming week.

It is one of many updates announced by the Microsoft-backed AI startup since the model was launched on August 7. Users have been left disappointed by the underwhelming release of GPT-5, which had been hyped up since the company's 2023 release of GPT-4. GPT-5 also suffered several delays due to safety testing and compute limitations. When the model became freely available in ChatGPT this month, users pointed out that the advancements they had been expecting seemed incremental, with GPT-5's main improvements related to cost and speed.

In response to these issues, OpenAI CEO Sam Altman reportedly told journalists at a dinner last week, 'I think we totally screwed up some things on the rollout.' 'On the other hand, our API traffic doubled in 48 hours and is growing. We're out of GPUs. ChatGPT has been hitting a new high of users every day. A lot of users really do love the model switcher. I think we've learned a lesson about what it means to upgrade a product for hundreds of millions of people in one day,' Altman was quoted as saying by The Verge. ChatGPT has quadrupled its user base in a year and is nearing 700 million users each week, as per reports.

On GPT-5's behavioural issues, Nick Turley, the product head of ChatGPT, said, 'GPT-5 was just very to the point. I like that. I use the robot personality — I'm German, you know, whatever. But many people do not, and they really like the fact that ChatGPT would actually check in with you.'

Here's a brief list of all the changes and improvements announced by OpenAI since the launch of GPT-5.

GPT-5 includes Auto, Fast, and Thinking modes. Fast mode gives users faster answers from GPT-5, while Thinking mode means the model takes more time to give deeper answers. Auto mode routes between Fast and Thinking modes. The three modes can be selected by users within the model picker in ChatGPT. 'GPT-5 will seem smarter starting today. Yesterday, the autoswitcher broke and was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber,' Altman said in a post on X.

During a Reddit Ask Me Anything (AMA) session, multiple users requested OpenAI to bring back GPT-4o. Replying to the Reddit thread, Altman said that the OpenAI team had heard user feedback and decided to offer an option for Plus users to continue using GPT-4o, and 'will watch usage to determine how long to support it.' GPT-4o is available under 'Legacy models' by default for paid users. Other legacy AI models such as o3 and GPT-4.1, as well as GPT-5 Thinking mini, can be added to the model picker within ChatGPT by enabling 'Show additional models' in ChatGPT's settings. To be sure, this option is only available for paid users. Moving forward, Altman said the company will give users a clearer 'transition period' when deprecating AI models in the future, as per a report by TechCrunch.

ChatGPT Plus and Team subscribers now get up to 3,000 messages per week when using GPT-5 in Thinking mode, with extra capacity on GPT-5 Thinking mini when they hit this limit. The company has also made GPT-5 available for ChatGPT Enterprise and Edu subscribers. Additionally, OpenAI has said it is working on improvements at the user interface (UI) level so that users can more easily enable Thinking mode in GPT-5 and more clearly see which AI model is responding to their query or prompt.
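For developers working outside the ChatGPT interface, choosing between variants comes down to which model identifier is passed to the API. The sketch below is a minimal, hypothetical illustration using the OpenAI Python SDK; the identifiers "gpt-5" and "gpt-5-mini" and the routing heuristic are assumptions for illustration, not OpenAI's actual Auto-mode logic or a guaranteed model lineup.

```python
from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI()

def ask(prompt: str, model: str = "gpt-5") -> str:
    """Send a single prompt to the chosen model and return its reply."""
    response = client.chat.completions.create(
        model=model,  # illustrative identifier; use whichever model your account exposes
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def auto_ask(prompt: str) -> str:
    """Toy router: short prompts go to a lighter model, longer ones to the full model."""
    model = "gpt-5-mini" if len(prompt) < 80 else "gpt-5"
    return ask(prompt, model=model)

if __name__ == "__main__":
    print(auto_ask("What changed in the GPT-5 rollout?"))
```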

Google's Gemini AI is training on your personal conversations by default. Here's how you can turn it off

Mint | an hour ago

Artificial Intelligence chatbots have seriously taken off since ChatGPT became a viral sensation back in late 2022. While ChatGPT remains the most popular chatbot on the market, Google's Gemini AI has caught up with some strong new model launches in the last few months. What also helps Gemini's case is that the chatbot is present across a number of Google services, including Gmail and Calendar, making for easier integration.

By default, Google uses your conversations with Gemini to train its upcoming AI models. Notably, large language models (LLMs) like Gemini are trained on massive datasets in order to learn patterns in language, reasoning and context. In one sense, modern LLMs are essentially pattern recognisers, and while publicly available datasets can help a model learn some useful patterns, they are not enough for it to handle natural queries better. By training on user interactions, LLMs like Gemini learn what queries users ask most and how the model could adapt to deliver more useful content.

With users increasingly turning to chatbots with questions about everything in their lives, from simple tax troubles to past emotional traumas, the thought of Google having access to this data for training its new AI models could be unsettling. But if you want to revoke Google's access to train its AI models on your conversations, there's a quick fix.

In order to stop Gemini from training on your personal conversations, you'll need to turn off the 'Gemini Apps Activity' option by going to the settings page on the website or the iOS/Android app. Google is soon going to rename this setting to 'Keep activity' in an upcoming update, but the process to turn it off will remain the same.

Gemini website settings

- Visit the Gemini website in your browser and sign in to your Google account
- Click on the three-bar menu on the left-hand side of the page and tap on Settings and help
- Tap on Activity and you will be taken to a new settings page
- Click on the Turn off option next to Gemini activity to stop the chatbot from training on your conversations
- For even more privacy, you can delete past Gemini activity to remove data the chatbot collected before you disabled the feature
- Google will continue to store your Gemini activity for 72 hours before deleting it from its servers

Moreover, if you use Gemini from multiple Google accounts, make sure to repeat the process for each one in order to stop Gemini from training on your conversations.

Gemini app activity settings

- Open the Gemini app on your phone
- Tap on the accounts option in the top-right corner and click on Gemini Apps Activity
- Follow the same process to turn off Gemini apps activity and delete previous activity if needed

Transformer by Mint: The man shaping India's AI dreams, and continuing chaos at Vodafone

Mint | an hour ago

I've known Abhishek Singh, a senior bureaucrat, for some time now. He's been in the Indian tech ecosystem for a while, leading multiple government-backed digitisation initiatives. Now, as chief of the billion-dollar India AI Mission, he faces one of his biggest challenges in a public-service career spanning three decades.

The reasons for this are varied. For one, the fact that AI presents a huge opportunity to a long-serving government official shows just how far the technology has come, and how it now affects everyone. More importantly, though, India could potentially gain or lose a lot depending on what we do with AI.

Let me take you back a few decades. If you've read the venerable Chip War by Chris Miller (whom I had the pleasure of meeting this January), you know that during America's push for leadership in electronic machines at the start of the world's tryst with semiconductors, India missed the bus. This allowed Japan and Taiwan to become global technology leaders despite being societies steeped in tradition. Then came the mobile revolution, and apart from emerging as a big global market, India almost missed the bus there, too. But then the Digital India and Make in India initiatives emerged, digital skills took centre stage, and India is now at a point where tech manufacturing is at least on the ascendancy.

To cut a long story short, after having missed out on tectonic global shifts, India has a chance to show with AI that it is not just the world's tech back-office and can lead from the front, too. Singh has a plan for this: building a voice-based foundational model that, along with India's government-supported base of thousands of Nvidia GPUs, would become India's next big export to the world after UPI. Here's why he thinks this will work.

Speaking of tech's back offices… Jas Bardia, our resident correspondent for India's nearly $300-billion IT services industry, reported last week that there's a war brewing at India's mid-sized tech services firms, which truly believe they can take on the behemoths and win. India's IT services industry began booming in the early 1990s, turning Tata Consultancy Services, Infosys, Wipro and the like into the mammoths they are today. During the late 1990s and early 2000s, almost every household around where I grew up had at least one person working at these IT giants. The world, however, has changed considerably since then. Over the past two years, companies such as Coforge and Persistent Systems have emerged as serious competitors, pitching themselves as specialised firms with a deeper understanding of technology. Where does this leave TCS and its ilk? Will they lose out? Maybe not so soon, but market dynamics are undeniably changing.

Also changing is the top job at Vodafone-Idea. The beleaguered telecom operator began its India journey as Command Telecom, a telco operated under Kolkata's Usha Martin. In 2000, Hutchison Max acquired Command, leading to the creation of network provider Hutch in 2005. In 2007, Vodafone entered the market and created Vodafone Essar Limited, the entity's longest-standing identity so far. Despite its more than three decades of history, the Vodafone-Idea entity of today is in perilous financial health. Last week the telco appointed erstwhile chief operating officer Abhijit Kishore as CEO for three years as outgoing chief Akshay Moondra's term ended. Now, being a CEO is a dream for anyone in corporate India, but Vi faces a veritable nightmare. After all, it needs to catch up with Airtel and Jio on quality of service while paying off its eye-watering dues and needing $30 billion of capital immediately. Suddenly, Kishore's job doesn't seem like a dream. One thing's clear, though: whichever way this goes, Vodafone-Idea's story will make for a fascinating case study in India's telecom sector for years to come. Mint's telecom correspondent Jatin Grover brings you all the juicy details.

Finally, satellites on the frontline. Last week, Jatin and I wrote about India's potential revamp of sensitive defence networks in an exclusive report. The full story: over the past two years, the government has been exploring ways for modern satellite internet providers such as Elon Musk's Starlink and Bharti Airtel's OneWeb to offer their services to India's defence forces. The reason is clear: it's now imperative to have secure and blazing-fast internet connectivity even in remote boundary regions. India needs drones, consistent satellite feeds, and a host of other technologies to stay ahead of its enemies. Older satellite connections, which serve only as a backup, aren't up to the task.

In other news: the battle for Chrome, and an iPhone 'Air'. Last week, Perplexity CEO Aravind Srinivas put in a bid for Google Chrome, saying his company was willing to spend $34.5 billion to buy the world's leading browser. However, he doesn't have that kind of money. You see, Perplexity is only worth about $18 billion. Chrome, on the other hand, is valued at more than $50 billion. Then, OpenAI CEO Sam Altman added fuel to the fire, asking, 'Is Google really selling Chrome? If they are, we'd be interested. Why not?' Welcome to Silicon Valley's newest battleground, one that we'll be tracking. We've already reported on Google and OpenAI's silent fight, and how it forced Sergey Brin, a Valley legend, back to the engineering table.

Finally, it's that time of the year when we expect to see new Google Pixels and Apple iPhones. This year, rumours are that Apple will launch an 'iPhone Air' as part of its range. If you've followed Apple, you'll know the 'Air' branding refers to ultra-thin and light devices. The first MacBook Air, in fact, remains one of the most legendary consumer devices to date. Will the iPhone Air live up to this? Here's what we've gathered so far.

Transformer by Mint is a weekly newsletter that brings India's most important and interesting technology updates under one umbrella. As the world transforms with every day of innovation, Transformer will keep tabs on the impact these technologies will make on each of our lives. Published every week, the newsletter brings some of the Indian tech landscape's most insightful coverage to date.
