ChatGPT boss compares new AI to inventing nukes and reveals he's terrified of super-smart bot


The Irish Sun, 30-07-2025
CHATGPT mastermind Sam Altman has opened up about his AI fears, saying the tech has left him feeling "useless".
The OpenAI chief likened the feeling of spearheading AI to that of scientists behind the Manhattan Project, a WWII research and development programme devised to create the first nuclear weapons.
Altman says he felt 'useless' compared to the AI's recent enhancement, GPT-5
Credit: Getty
A new version of the powerful ChatGPT system, GPT-5, is set to be unleashed soon with enhanced capabilities in understanding and processing like never before.
Altman, 40, said it "feels very fast" when testing the update recently.
After receiving a question he didn't quite understand via email, he put it into GPT-5, which "answered it perfectly".
"I really kind of sat back in my chair and I was just like, oh man, 'here it is' moment," he said.
"I felt useless relative to the AI, in this thing that I felt like I should have been able to do and I couldn't, it was really hard, but the AI just did it like that.
"It was a weird feeling."
The OpenAI boss went on to talk about moments in the history of science when a group of scientists look at their creation and ask, "what have we done?".
"Maybe the most iconic example is thinking about the scientists working on the Manhattan Project in 1945, sitting there and watching the Trinity test...
"It was a completely new, not human scale kind of power and everyone knew it was going to reshape the world.
"And I do think people working on AI have that feeling in a very deep way. You know, we just don't know."
Altman also laughed as he was likened to a "Charming Terminator".
It's not the first time Altman has been frank about his fears around AI.
He's previously admitted that "if this technology goes wrong, it can go quite wrong".
What was the Manhattan Project?
Here's what you need to know...
The Manhattan Project was a US WWII research and development project intended to create the first nuclear weapons
It was led by the US but was also supported by the UK and Canada
It started because of fears that Nazi Germany was developing nuclear weapons during WWII
Early research began in 1939, and the project itself ran from 1942 into the late 1940s
Most of the project was based at a facility in New Mexico
It created the nuclear bombs which were dropped over Hiroshima and Nagasaki in 1945
These remain the only uses of nuclear weapons in armed conflict, killing between 129,000 and 226,000 people
Altman says GPT-5 is 'very fast'
Credit: Alamy

Related Articles

Conor Skehan: No, Dermot Desmond — AI transport won't derail MetroLink, but here's what it will change

Irish Independent, 27 minutes ago

The short answer is no. The longer answer is more important — the danger isn't that MetroLink will be outpaced by technology. The danger is that simplistic visions like this distort how we think about transport and lead to decisions we will regret for decades. Transport is not just a matter of getting from A to B. It is a slow-moving, overlapping tangle of infrastructure, emotion, status, cost, culture and geography. Generalisations about AI-powered futures overlook how messy and uneven transport is.

Karen Hao on AI tech bosses: 'Many choose not to have children because they don't think the world is going to be around much longer'

Irish Times, 8 hours ago

Scarlett Johansson never intended to take on the might of Silicon Valley. But last summer the Hollywood star discovered a ChatGPT model had been developed whose voice – husky, with a hint of vocal fry – bore an uncanny resemblance to the AI assistant voiced by Johansson in the 2013 Spike Jonze movie Her. On the day of the launch, OpenAI chief executive Sam Altman, maker of ChatGPT, posted on X a one-word comment: 'her'. Later Johansson released a furious statement revealing she had been asked to voice the new aide but had declined. Soon the model was scrapped. Johansson and a phalanx of lawyers had defeated the tech behemoths.

That skirmish is one among the many related in Karen Hao's new book Empire of AI: Inside the Reckless Race for Total Domination, a 482-page volume that, in telling the story of San Francisco company OpenAI and its founder, Altman, concerns itself with large and worrying truths. Could AI steal your job, destabilise your mental health and, via its energy-guzzling servers, plunge the environment into catastrophe? Yes to all of the above, and more.

As Hao puts it in the book: 'How do we govern artificial intelligence? AI is one of the most consequential technologies of this era. In a little over a decade, it has reformed the backbone of the Internet. It is now on track to rewire a great many other critical functions in society, from healthcare to education, from law to finance, from journalism to government. The future of AI – the shape this technology takes – is inextricably tied to our future.'

It's a rainy day in Dublin when I travel to Dalkey to meet Hao, a Hong Kong-dwelling, New Jersey-raised journalist who has become a thorn in Altman's side. Educated at MIT, she writes for the Atlantic and leads the Pulitzer Centre AI Spotlight series, a programme that trains journalists in covering AI matters.
Among families grabbing a bite to eat in a local hotel, the boisterous kids running around tables in the lobby and tourists checking in and out, Hao, neat and professional in a cream blazer with her hair tied back, radiates an air of calm authority.

'AI is such an urgent story,' she says. 'The pursuit of AI becomes dangerous as an idea because it's eroding people's data privacy. It's eroding people's fundamental rights. It's exploiting labour, but it's humans that are doing that, in the name of AI.'

Whether you're in Dublin or San Diego, AI is hurtling into our lives. ChatGPT has 400 million weekly users. You can't go on to WhatsApp, Google or Meta without encountering an AI bot. It was revealed in a recent UK Internet Matters survey that 12 per cent of kids and teens use chatbots to offset feelings of loneliness. Secondary school students are changing their CAO forms to give themselves the best chance of thwarting the broken career ladder that AI has created. The impact of AI on the environment is extraordinary. Just one ChatGPT search about something as simple as the weather consumes vast energy, 10 times more than a Google search. Or, as Des Traynor of Intercom put it at Dalkey Book Festival recently, it's like using a 'massive diesel generator to power a calculator'.

It's far from the utopian ideal of a medical solutions-focused, climate-improving enterprise that was first trumpeted to Hao when she began investigating OpenAI and Altman in 2019. As a 20-something reporter at MIT Technology Review covering artificial intelligence, Hao became intrigued by the company. Founded as a non-profit, OpenAI claimed not to chase commercialisation. Even its revamp into a partially for-profit model didn't alter its mission statement: to safely build artificial intelligence for the benefit of humanity. And to be open and transparent while doing it.
But when Hao arrived at the plush headquarters on San Francisco's 18th and Folsom Streets, all exposed wood beam ceilings and comfy couches, she noticed that nobody seemed to be allowed to talk to her casually. Her photograph had been sent to security. She couldn't even eat lunch in the canteen with the employees. 'They were really secretive, even though they kept saying they were transparent,' Hao says. 'Later on, I started sourcing my own interviews. People started telling me: this is the most secretive organisation I've ever worked for.'

Karen Hao in Dublin during the Dalkey Book Festival. Photograph: Nick Bradshaw

The meetings Hao had with OpenAI executives did not impress her. 'In the first meeting, they could not articulate what the mission was. I was like, well, this organisation has consistently been positioning itself as anti-Silicon Valley. But this feels exactly like Silicon Valley, where men are thrown boatloads of money when they don't yet have a clear idea of what they're even doing.'

Simple questions appeared to wrong-foot the executives. They spoke about AGI (artificial general intelligence), the theoretical notion that silicon chips could one day give rise to a human-like consciousness. AGI would help solve complex problems in medicine and climate change, they enthused. But how would they achieve this and how would AGI technology be successfully distributed? They hedged. 'Fire is another example,' Hao was told. 'It's also got some real drawbacks to it.'

Since that time, AGI has not been developed, but billions have been pumped into large language models such as ChatGPT, which can perform tasks such as question answering and translation. Built by consuming vast amounts of often garbage data from the bottom drawer of the Internet, AI chatbots are frequently unreliable. An AI assistant might give you the right answer. Or it might, as Elon Musk's AI bot Grok did recently, praise Adolf Hitler and cast doubt on people with Jewish surnames.
'Quality information and misinformation are being mixed together constantly,' Hao says, 'and no one can tell any more what are the sources of truth.'

It didn't have to be this way. 'Before ChatGPT and before OpenAI took the scaling approach, the original trend in AI research was towards tiny AI models and small data sets,' Hao says. 'The idea was that you could have really powerful AI systems with highly curated data sets that were only a couple of hundred images or data points. But the key was you needed to do the curation on the way in. When it's the other way around, you're culling the gunk and toxicity and that becomes content moderation.'

One particularly moving section of Hao's book is when she journeys to poorer countries to look at how people who work on the content moderation side of OpenAI cope day-to-day. Meagre incomes, job instability and exposure to hate speech, child sex abuse and rape fantasies online are just some of the realities contractors face. In Kenya, one worker's sanity became so frayed his wife and daughter left him. When he told Hao his story, the author says she felt like she'd been punched in the gut. 'I went back to my hotel, and I cried because I was like, this is tearing people's families apart.'

Hao nearly didn't get her book out. She had thought she would have some collaboration with Altman and OpenAI, but the participation didn't happen. 'I was devastated,' she admits. 'Fortunately I had a lot of amazing people in my life who were like, "Are you going to let them win or are you going to continue being the excellent journalist you know you can be, and report it without them?"' Understanding companies such as OpenAI is becoming more important for everyone.
In recent weeks, Meta, Microsoft, Amazon and Alphabet, Google's parent company, delivered their quarterly public financial reports, disclosing that their year-to-date capital expenditure ran into tens of billions, much of it required for the creation and maintenance of data centres to power AI's services. In Ireland, there are more than 80 data centres, gobbling up 50 per cent of the electricity in the Dublin region, and hoovering up more than 20 per cent nationally, as they work to process and distribute huge quantities of digital information.

Hao believes governments must force tech companies to have more transparency in relation to the energy their data centres consume. 'If you're going to build data centres, you have to report to the public what the actual energy consumed is, how much water is actually used. That enables the public and the government to decide if this is a trade-off worth continuing. And they need to invest more in independent institutions for cultivating AI expertise.'

While governments have to play their part, it's difficult reading the book not to find yourself asking the simple question: why aren't tech bosses themselves concerned about what they're doing? Tech behemoths may be making billions – AI researchers are negotiating pay packages of $250 million from companies such as Meta – but surely they've given a care to their children's future? And their children's children? Wouldn't they prefer them to live in a world that still has flowers and polar bears and untainted water?

'What's interesting is many of them choose not to have children because they don't think the world is going to be around much longer,' Hao says.
'With some people in more extreme parts of the community, their idea of Utopia is all humans eventually going away and being superseded by this superior intelligence. They see this as a natural force of evolution.'

'It's like a very intense version of utilitarianism,' she adds. 'You'd maximise morality in the world if you created superior intelligences that are more moral than us, and then they inherited our Earth.'

Offering a more positive outlook, there are many in the AI community who would say that the work they are doing will result in delivering solutions that benefit the planet. AI has the potential to accelerate scientific discoveries: its possibilities are exciting because they are potentially paradigm-shifting. Is that enough to justify the actions being taken? Not according to Hao.

'The problem is: we don't have time to continue destroying our planet with the hope that one day maybe all of it will be solved by this thing that we're creating,' she says. 'They're taking real world harm today and offsetting it with a possible future tomorrow. That possible future could go in the opposite direction.'

'They can make these trade-offs because they're the ones that are going to be fine. They're the ones with the wealth to build the bunkers. If climate change comes, they have everything ready.'

Empire of AI: Inside the Reckless Race for Total Domination by Karen Hao is published by Allen Lane

What is OpenAI's GPT-5, and should I worry about my job?

Irish Times, 19 hours ago

An incredible on-demand superpower or overhyped tech? The latest artificial intelligence model from OpenAI, GPT-5, is live and makes some big promises. OpenAI founder Sam Altman claims it is similar to having a PhD-level intelligence in your pocket, but one that is easier to use, is more honest and is less prone to making things up. So what exactly does it mean?

What is GPT-5?

Let's start at the beginning. GPT-5 is the latest model underpinning OpenAI's chatbot. It combines reasoning with the ability to answer queries quickly, and is considered another step towards the creation of artificial general intelligence (AGI – more on that later).

OpenAI says the new model is a 'significant leap' – its 'smartest, fastest, most useful' model that puts expert-level intelligence in everyone's hands. It says it combines the best of the company's previous models – quick answers, reasoning – but the model itself makes the decision. That means it is more efficient with the use of its resources too.

What is AGI?

Artificial general intelligence is an autonomous artificial intelligence that is capable of performing tasks as well as any human. It learns and adapts without retraining, taking things it has learned and applying them to new areas. That could put people out of jobs, but we aren't quite there yet, even with GPT-5.

So what can GPT-5 do?

OpenAI says the new model improves on a lot of things. Altman compared GPT-3, the 2020 model whose chatbot successor kick-started the AI arms race in 2022, to talking to a high school or secondary school student. GPT-4 upgraded that to a college student. He has described GPT-5 as a PhD-level expert in anything.

Apart from being smarter, GPT-5 is designed to be easier to communicate with in a more natural way. According to OpenAI, the system performs better than its predecessors at a range of tasks, from writing text and producing advanced computer code to solving maths equations and answering health-related questions.
It further reduces hallucinations – where AI invents things – and also improves its ability to follow instructions. It is more honest about its abilities – and potentially the lack thereof – when answering your questions, OpenAI says. It won't be overconfident about its answers and it is less sycophantic too. GPT-5 will use fewer emojis too, which is always welcome.

Who can access it?

OpenAI said GPT‑5 is available to all users, but Plus subscribers get more usage and Pro subscribers get access to a more advanced version that has extended reasoning capabilities.

Will it take my job?

Not just yet. While it may be capable of PhD-level intelligence, OpenAI chief executive Altman says it is not quite at the level of artificial general intelligence, where it can work independently and reliably take over human jobs. Who knows what will happen in the future, though?

Altman compared the development of AI to the Manhattan Project, which led to the creation of nuclear weapons, in terms of the unforeseen impact it had on the world. AI is in its nascent stages, despite the hype. We don't know what AI's impact will be on society by the end of the decade, let alone the midpoint of the century. By then, the articles you read on AI might be written by ChatGPT itself.
