AI godfather Geoffrey Hinton, who warned against supersmart AI, says he trusts ChatGPT more than he should

India Today · 20-05-2025

Geoffrey Hinton, often described as the godfather of AI, has made headlines once again, but this time not for sounding the alarm. Instead, he has admitted to a surprising degree of trust in the very technology he has warned the world about. In a recent CBS interview, Hinton revealed that he relies on OpenAI's GPT-4 model for his daily tasks and confessed, 'I tend to believe what it says, even though I should probably be suspicious.'

This statement stands out coming from a man who has spent years cautioning the world about the dangers of artificial intelligence. Hinton, who received the 2024 Nobel Prize in Physics for his groundbreaking work on neural networks, has previously warned that superintelligent AI could eventually manipulate humans or become uncontrollable. Yet, when asked which tool he uses most, his answer was simple: ChatGPT.

During the interview, Hinton tested GPT-4 with a riddle: 'Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?' The correct answer is one, because Sally is one of the two sisters. But GPT-4 got it wrong. 'It surprises me. It surprises me it still screws up on that,' Hinton said. Despite the error, he acknowledged he still finds himself trusting its answers more than he should.

When asked if he thinks the next version, GPT-5, would get it right, he said, 'Yeah, I suspect.'

While Hinton may be right to expect improvements, he's not alone in his fascination. Since GPT-4 was introduced by OpenAI in March 2023, it has been widely adopted across industries for its ability to write code, summarise documents, and solve complex problems. However, OpenAI officially discontinued GPT-4 as a standalone product earlier this month. Its latest models, GPT-4o and GPT-4.1, are more advanced and boast faster response times, lower costs, and a broader range of capabilities, including real-time audio and visual input.

For Hinton, though, this trust in AI is layered with caution. He has long expressed concerns about the potential misuse of AI. From warning that AI could spread misinformation to arguing that it could pose a threat to humanity if machines ever surpass human intelligence, Hinton has consistently urged scientists and consumers to be wary of the technology. Hinton worked at Google's AI division for a decade; in 2023, he resigned to speak more openly about these dangers. He has warned of systems so persuasive that they could influence public opinion or deceive users, especially once AI starts understanding the world better than we do.

And yet, there's a human side to Hinton that often goes overlooked, one that celebrates curiosity and mentorship. During a press conference after his Nobel win, he proudly acknowledged his students, especially Ilya Sutskever, co-founder of OpenAI. With a touch of dry humour, he said, 'I'm particularly proud of the fact that one of my students fired Sam Altman,' referring to the high-drama leadership crisis at OpenAI in 2023.

Still, his outlook on AI hasn't softened. In several public remarks, Hinton has compared AI's rise to the industrial revolution, only this time the power at stake is intellectual rather than physical. 'We have no experience in having things which are smarter than us,' he warned. He has advocated for responsible development and governance frameworks, saying the benefits of AI in fields like healthcare and climate science could be immense, but only if managed wisely.


Related Articles

WWDC 2025: Apple could let developers use its AI models to build new features, says report

Mint · 2 hours ago

Apple's Worldwide Developer Conference is set to begin on 9 June, marking the Cupertino-based tech giant's second-biggest event of the year, where it showcases its software plans. Over the last year, Apple has received a lot of flak for its patchy rollout of new software updates, along with delays in AI features, some of which still haven't arrived. However, the company will likely try to shift attention from those concerns as it focuses on a major rebrand with the unveiling of iOS 26 at WWDC 2025, a significant departure from its current naming scheme.

Bloomberg's Mark Gurman says the new AI features to be unveiled by Apple at WWDC 2025 will be minor and are unlikely to impress industry watchers, especially at a time when the pace of AI progress is accelerating, with companies like Google, Meta, Microsoft and OpenAI announcing new products almost every month. Among the AI features Apple is reportedly planning this year is a systemwide push into translation. The Translate functionality will be integrated across Apple's operating systems as part of Apple Intelligence, with its main use case being live translation of phone calls and text messages.

Gurman adds that the biggest AI-related announcement will be Apple opening up its large language models (LLMs), the building blocks behind Apple Intelligence, to outside app developers. This will allow them to build their own AI features using the same technology that powers tools like Genmoji and Apple's writing aids. Apple is also expected to announce an upgraded version of its foundation models for both on-device and cloud use at WWDC 2025. Notably, developers will be given access to the on-device version of these LLMs.

In terms of other AI announcements, there is unlikely to be any update on Apple's long-awaited Siri revamp. Moreover, Apple's partnership with Google to use Gemini in its apps is also unlikely to be announced at this year's WWDC.

You can now schedule tasks with Gemini as Google's powerful new AI feature rivals ChatGPT's capabilities

Hindustan Times · 4 hours ago

Google is steadily evolving Gemini into a smarter, more proactive AI assistant that now competes directly with OpenAI's ChatGPT. The tech giant has started rolling out a feature called Scheduled Actions, which lets users automate recurring or timed tasks without repeating commands. Originally previewed during Google I/O, Scheduled Actions is now arriving on both Android and iOS devices. The feature is currently available to subscribers of Google One AI Premium and select Google Workspace business and education plans. With this rollout, Google is pushing Gemini closer to becoming a fully integrated productivity companion.

Scheduled Actions lets users instruct Gemini to perform specific tasks at set times or intervals. This includes sending daily calendar summaries, generating weekly content prompts, or setting one-time reminders. Once scheduled, Gemini handles them automatically in the background with no follow-up required. For example, a user might say, 'Send me a summary of today's meetings every morning at 8 AM' or 'Generate weekly blog ideas every Friday at 10 AM.' These tasks run quietly behind the scenes, transforming Gemini from a reactive chatbot into a daily-use productivity tool. The setup process is built to be intuitive, making automation easy for both everyday users and professionals. Within the Gemini app, users can define a task, set the time, and choose the frequency through a clean and accessible interface.

Scheduled Actions puts Google in direct competition with the kind of automation ChatGPT users create through Zapier or custom workflows. What gives Gemini a clear edge is its deep integration with Google's suite of apps. Functioning across Gmail, Calendar, Docs, and Tasks, Gemini offers a smooth setup and efficient task execution experience. Since it is built into tools people already use, Gemini can interact directly with information across Google's ecosystem. There is no need for third-party services or custom scripts. For users already invested in Google's platform, the experience is more seamless than ChatGPT's dependence on external integrations.

Scheduled Actions signals a shift in expectations for how AI assistants should function. Instead of waiting for commands, Gemini can now anticipate and handle repetitive tasks, offering a more personal and assistant-like experience. While this may be just the beginning, it is a clear step toward positioning Gemini as a truly productivity-first AI assistant. And as Gemini continues to evolve, it may not just catch up to ChatGPT but define the next generation of digital assistance.
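For context on what that do-it-yourself route involves, below is a minimal sketch of the kind of custom workflow the article contrasts with Gemini's built-in feature: a small Python script that asks a Gemini model for a morning briefing on a fixed schedule. It is an illustration only, assuming the google-generativeai and schedule Python packages, an API key in the GEMINI_API_KEY environment variable, and placeholder text standing in for a real calendar integration.

    import os
    import time

    import google.generativeai as genai
    import schedule

    # Configure the Gemini client; GEMINI_API_KEY is assumed to be set in the environment.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")

    def morning_briefing():
        # In a real workflow the prompt would include today's events pulled from
        # the Google Calendar API; a placeholder string is used here instead.
        prompt = "Write a short morning briefing summarising these meetings: (calendar data goes here)"
        response = model.generate_content(prompt)
        print(response.text)

    # Mirror the article's example command: run the task every morning at 8 AM.
    schedule.every().day.at("08:00").do(morning_briefing)

    while True:
        schedule.run_pending()
        time.sleep(60)

The comparison in the article still holds: with Scheduled Actions, the schedule, the model call, and the delivery all live inside the Gemini app, so none of this plumbing is required.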

Google DeepMind CEO Predicts AI-Powered Humanity Will 'Colonise The Galaxy' Starting 2030

NDTV · 4 hours ago

Google DeepMind CEO Demis Hassabis predicts human galaxy colonisation. AI will significantly enhance human productivity and exploration of the universe, he claims. Hassabis envisions a future of radical abundance and solutions to major global issues.

Google DeepMind CEO Demis Hassabis has claimed that humans will be able to 'colonise the galaxy', starting 2030, with artificial intelligence (AI) being the driving force behind this revolution. The 2024 Nobel Prize in Chemistry winner said AI tools will turbocharge human productivity and carry us to faraway corners of the universe. Mr Hassabis said the 'golden era' was only five years away, with AI models set to bring about a renaissance in human existence.

"If everything goes well, then we should be in an era of radical abundance, a kind of golden era. AGI can solve what I call root-node problems in the world, curing terrible diseases, much healthier and longer lifespans, finding new energy sources," said Mr Hassabis in an interview with WIRED, referring to human-level AI, popularly known as Artificial General Intelligence (AGI).

"If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonise the galaxy. I think that will begin to happen in 2030," he added. Asked whether that abundance would still be distributed unequally, as it is today, Mr Hassabis said AI will make "things feel like a non-zero-sum game".

AGI's future

While AGI may help humanity unlock new frontiers, Mr Hassabis has previously said that society is not ready for it and that the technology and its perils keep him up at night. "It's a sort of like probability distribution. But it's [AGI] coming, either way it's coming very soon and I'm not sure society's quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well," said Mr Hassabis.

He has also called for the establishment of a UN-like umbrella organisation to oversee AGI's development. "I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research-focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible," he added.
