Latest news with #GPT-4o


India Today
7 days ago
- Health
- India Today
OpenAI COO says GPT-5 is 5x more accurate, but only some users will see big changes
OpenAI recently unveiled GPT-5, describing it as its most capable AI model yet. In an interview with Big Technology, OpenAI's COO Brad Lightcap has now claimed that this version is four to five times more accurate than its earlier versions. He says the upgrade will be more noticeable for some users than others, largely because of how the model now works behind the scenes. "I think now with GPT-5, and obviously with future models, we have seen consistently the rates of accuracy and the rates of hallucination go up and down respectively. GPT-5, it depends on how you measure it, but it's four to five times more accurate than its predecessors," he said.

Lightcap explained that GPT-5 changes the way people interact with ChatGPT. In earlier versions, users had to manually select between different models - some better for quick replies, others designed for deeper reasoning. This model switching often confused people. GPT-5 removes that choice altogether by letting the AI itself decide how much time and computational effort to spend on a question. In practice, this means the model might take extra time to "think" through a complex query, while keeping simpler answers fast. Lightcap described this as more than a convenience upgrade. By dynamically adjusting its reasoning process, GPT-5 can deliver better responses across different types of questions, whether it is writing, coding, or tackling a health-related topic.

The model's improvements are not limited to speed and reasoning. He asserted that the model has also been trained with a particular emphasis on medical reasoning and health-related queries, reportedly in response to a growing number of people using AI tools to understand health conditions or manage ongoing ones. The company is clear, however, that GPT-5 is not meant to replace doctors. Instead, it is designed to give people more awareness and context, such as explaining a medical condition, its common symptoms, and possible care options.
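The automatic routing described above can be illustrated with a toy sketch: a gate estimates how hard a query is and decides whether to answer quickly or spend extra "thinking" effort. The function names and heuristics below are purely hypothetical, not OpenAI's actual implementation.

```python
# Toy illustration of auto-routing between a fast mode and a reasoning mode.
# The heuristics are invented for demonstration only.

def estimate_complexity(query: str) -> float:
    """Toy heuristic: longer, multi-part, reasoning-heavy questions score higher."""
    score = min(len(query) / 500, 1.0)            # length contributes a little
    score += 0.3 * query.count("?")               # multiple sub-questions
    if any(w in query.lower() for w in ("prove", "diagnose", "debug", "why")):
        score += 0.5                              # reasoning-heavy keywords
    return min(score, 1.0)

def route(query: str) -> str:
    """Pick a response mode based on estimated complexity."""
    return "extended_reasoning" if estimate_complexity(query) > 0.6 else "fast_reply"

print(route("What time is it in Tokyo?"))  # fast_reply
print(route("Why does my recursive function overflow the stack?"))  # extended_reasoning
```

The point of the design, as Lightcap describes it, is that the user never makes this choice: simple queries stay fast, while hard ones earn more compute automatically.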
According to Lightcap, many people struggle with understanding their own diagnoses because the healthcare system often leaves little time for detailed explanations. OpenAI hopes GPT-5 can fill some of that gap while maintaining high factual accuracy. On that point, the company says GPT-5's answers are far less likely to contain factual errors compared to previous versions. Measured against GPT-4o, responses are about 45 per cent less likely to be incorrect, and when the model engages in extended reasoning, the accuracy improvement rises to around 80 per cent.

For casual ChatGPT users, especially those on the free tier, the jump to GPT-5 could feel substantial. Many of them have only experienced the more basic GPT-4o, which works like a fast, search-style assistant. With GPT-5, these users will see, for the first time, a model that can decide when to take extra time to deliver a more thoughtful answer. For long-time Plus subscribers and advanced users, who have already had access to reasoning models like GPT-4.1 and OpenAI's 'o-series', the improvement will still be noticeable, though perhaps less dramatic.

Despite its advances, GPT-5 is not being called artificial general intelligence (AGI). Lightcap said the technology shows early signs of AGI-like abilities, such as multi-step reasoning and better tool usage, but is still part of a gradual evolution rather than a clear-cut breakthrough.
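The two accuracy figures in the article are consistent with Lightcap's "four to five times" claim, which a quick calculation makes clear: an 80 per cent reduction in errors leaves one-fifth of the old error rate, i.e. errors become five times rarer. The baseline error rate below is illustrative, not a published number.

```python
# Sanity-checking the reported figures. A 45% reduction leaves 0.55x the old
# error rate; an 80% reduction leaves 0.20x, i.e. errors are 1/0.20 = 5x
# rarer - matching the "four to five times more accurate" claim.
# The baseline rate is an illustrative assumption, not real data.

baseline_error_rate = 0.10                      # assumed GPT-4o error rate

standard = baseline_error_rate * (1 - 0.45)     # 45% fewer errors
reasoning = baseline_error_rate * (1 - 0.80)    # 80% fewer with extended reasoning

print(f"standard mode errors:  {standard:.3f}")                          # 0.055
print(f"reasoning mode errors: {reasoning:.3f}")                         # 0.020
print(f"improvement factor:    {baseline_error_rate / reasoning:.1f}x")  # 5.0x
```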


Sky News
28-01-2025
- Business
- Sky News
How Chinese DeepSeek can be as good as US AI rivals at fraction of cost
Based on the limited number of comparisons made so far, DeepSeek's AI models appear to be faster, smaller, and a whole lot cheaper than the best offerings from the supposed titans of AI like OpenAI, Anthropic and Google. And here's the kicker: the Chinese offering appears to be just as good. So how have they done it?

Firstly, it looks like DeepSeek's engineers have thought about what an AI needs to do rather than what it might be able to do. It doesn't need to work out every possible answer to a question, just the best one - to two decimal places, for example, instead of 20. Their models are still massive computer programmes: DeepSeek-V3 has 671 billion parameters, while GPT-4 is reported to have a colossal 1.76 trillion.

Doing more with less seems to be down to the architecture of the model, which uses a technique called "mixture of experts". Where OpenAI's latest model GPT-4o attempts to be Einstein, Shakespeare and Picasso rolled into one, DeepSeek's is more like a university broken up into expert departments. This allows the AI to decide what kind of query it's being asked, and then send it to a particular part of its digital brain to be dealt with. The other parts remain switched off, saving time, energy and, most importantly, the need for computing power.

It is this equivalent performance with significantly less computing power that has shocked the big AI developers and financial markets. The state-of-the-art AI models had been developed using more and more powerful graphics processing units (GPUs) made by the likes of Nvidia in the US. The only way to improve them, so the market logic went, was more and more "compute". Partly to stay ahead of China in the AI arms race, the US restricted the sale of the most powerful GPUs to China. What DeepSeek's engineers have demonstrated is what engineers do when you present them with a problem: they come up with a workaround.
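The "university of expert departments" idea above can be sketched in miniature: a gating function inspects the query and hands it to exactly one specialist, so the other specialists never execute at all. This is a purely illustrative toy, not DeepSeek's actual architecture (real mixture-of-experts models gate between learned sub-networks inside each layer, not whole keyword-matched departments).

```python
# Toy sketch of the "mixture of experts" routing idea: only the expert the
# gate selects actually runs; the rest stay switched off, saving compute.
# Experts and keywords are invented for illustration.

EXPERTS = {
    "maths":   lambda q: f"[maths expert] answering: {q}",
    "code":    lambda q: f"[code expert] answering: {q}",
    "writing": lambda q: f"[writing expert] answering: {q}",
    "general": lambda q: f"[general expert] answering: {q}",
}

def gate(query: str) -> str:
    """Toy gating function: a keyword match decides which expert runs."""
    q = query.lower()
    if any(w in q for w in ("integral", "equation", "sum")):
        return "maths"
    if any(w in q for w in ("python", "bug", "function")):
        return "code"
    if any(w in q for w in ("essay", "poem", "rewrite")):
        return "writing"
    return "general"

def moe_answer(query: str) -> str:
    """Route the query to exactly one expert; the other three never execute."""
    return EXPERTS[gate(query)](query)

print(moe_answer("Fix this Python bug for me"))  # [code expert] answering: ...
```

Because only the selected expert's computation happens per query, the cost of answering scales with one department rather than the whole university, which is the efficiency win the article describes.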
Learning from what OpenAI and others have done, they redesigned a model from the ground up so that it could work on GPUs designed for computer games, not superintelligence. What's more, their model is open source, meaning it will be easier for developers to incorporate into their products. Being far more efficient and open source makes DeepSeek's approach look like a far more attractive offering for everyday AI applications.

The result, of course, was a nearly $600bn overnight haircut for Nvidia. But it will survive its sudden reversal in fortunes. The large language models (LLMs) pioneered by OpenAI and now improved on by DeepSeek aren't the be-all and end-all of AI development. "General intelligence" from an AI is still a way off, and lots of high-end computing will likely be needed to get us there.

The fate of firms like OpenAI is less certain. Their supposedly game-changing GPT-5 model, requiring mind-blowing amounts of computing power to function, is still to emerge. Now the game appears to have changed around them, and many are clearly wondering what return they're going to get on their AI investment.