
What is OpenAI's GPT-5, and should I worry about my job?

The latest artificial intelligence model from OpenAI, GPT-5, is live and makes some big promises. OpenAI co-founder and chief executive Sam Altman claims it is similar to having PhD-level intelligence in your pocket, but one that is easier to use, more honest and less prone to making things up.
So what exactly does that mean?
What is GPT-5?
Let's start at the beginning. GPT-5 is the latest model underpinning OpenAI's chatbot, ChatGPT. It combines reasoning with the ability to answer queries quickly, and is considered another step towards the creation of artificial general intelligence (AGI – more on that later).
OpenAI says the new model is a 'significant leap' – its 'smartest, fastest, most useful' model yet, one that puts expert-level intelligence in everyone's hands.
It says GPT-5 combines the strengths of the company's previous models – quick answers and deeper reasoning – and decides for itself which approach a query needs. That also makes it more efficient with its computing resources.
What is AGI?
Artificial general intelligence is an autonomous artificial intelligence that is capable of performing tasks as well as any human. It learns and adapts without retraining, taking things it has learned in one area and applying them to new ones.
That could put people out of jobs but we aren't quite there yet, even with GPT-5.
So what can GPT-5 do?
OpenAI says the new model improves on a lot of things. Altman compared GPT-3, which preceded the ChatGPT launch that kick-started the AI arms race in 2022, to talking to a secondary-school student. GPT-4 upgraded that to a college student. He has described GPT-5 as a PhD-level expert in anything.
Apart from being smarter, GPT-5 is designed to be more natural to communicate with. According to OpenAI, the system performs better than its predecessors at a range of tasks, from writing text and producing advanced computer code to solving maths equations and answering health-related questions.
It further reduces hallucinations – instances where the AI invents things – and improves its ability to follow instructions.
It is also more honest about its abilities – and potentially the lack thereof – when answering your questions, OpenAI says. It won't be overconfident about its answers, it is less sycophantic, and it uses fewer emojis, which is always welcome.
Who can access it?
OpenAI says GPT-5 is available to all users, but Plus subscribers get more usage, and Pro subscribers get access to a more advanced version with extended reasoning capabilities.
Will it take my job?
Not just yet. While it may be capable of PhD-level intelligence, Altman says it is not yet at the level of artificial general intelligence, where it could work independently and reliably take over human jobs.
Who knows what will happen in the future, though? Altman compared the development of AI to the Manhattan Project, which led to the creation of nuclear weapons, in terms of the unforeseen impact it had on the world. AI is in its nascent stages, despite the hype. We don't know what AI's impact will be on society by the end of the decade, let alone the midpoint of the century. By then, the articles you read on AI might be written by ChatGPT itself.
