
China's DeepSeek quietly releases upgraded R1 AI model, ramping up competition with OpenAI
The company did not make an official announcement, but the upgrade of DeepSeek R1 was released on AI model repository Hugging Face.
DeepSeek rose to prominence this year after its free, open-source R1 reasoning model outperformed offerings from rivals including Meta and OpenAI. The model's low cost and short development time shocked global markets, sparking concerns that U.S. tech giants were overspending on infrastructure and wiping billions of dollars of value off major U.S. tech stocks like AI stalwart Nvidia. Those stocks have since broadly recovered.
As with DeepSeek R1's debut, the upgraded model was released with little fanfare. It is a reasoning model, meaning the AI works through more complicated tasks via a step-by-step logical thought process.
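Reasoning models typically surface that step-by-step process in their output. DeepSeek's R1 models, for instance, wrap the reasoning trace in <think>…</think> tags ahead of the final answer, so the two can be separated with simple parsing. A minimal sketch (the sample completion below is invented for illustration):

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Separate an R1-style reasoning trace from the final answer."""
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if match is None:
        return "", completion.strip()  # no visible reasoning trace
    reasoning = match.group(1).strip()
    answer = completion[match.end():].strip()
    return reasoning, answer

# Invented sample completion in the R1 output format.
sample = "<think>17 has no divisors besides 1 and itself.</think>Yes, 17 is prime."
trace, answer = split_reasoning(sample)
print(answer)  # → Yes, 17 is prime.
```

Exposing the trace is part of what makes these models useful for complicated tasks: users and downstream tools can inspect the intermediate steps, not just the conclusion.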
The upgraded DeepSeek R1 model is just behind OpenAI's o4-mini and o3 reasoning models on LiveCodeBench, a benchmark that evaluates models on coding tasks.
DeepSeek has become the poster child for Chinese artificial intelligence's continued progress despite U.S. attempts to restrict the country's access to chips and other technology. This month, Chinese technology giants Baidu and Tencent revealed how they were making their AI models more efficient to deal with U.S. semiconductor export curbs.
Jensen Huang, CEO of Nvidia, which designs the graphics processing units required to train huge AI models, slammed U.S. export controls on Wednesday.
"The U.S. has based its policy on the assumption that China cannot make AI chips," Huang said. "That assumption was always questionable, and now it's clearly wrong."
"The question is not whether China will have AI," Huang added. "It already does."
Related Articles

Business Insider
What is ChatGPT? Here's everything you need to know about OpenAI's famous chatbot.
ChatGPT is OpenAI's flagship AI chatbot. In August 2025, the company unveiled GPT-5, its latest iteration. OpenAI CEO Sam Altman says GPT-5 is its most advanced AI model yet, marked by increased general intelligence and enhanced usability, including a "real-time router" that selects the most appropriate model to handle each user request. Here's everything you need to know.
What is ChatGPT?
OpenAI is the leading AI startup. Its ultimate mission is to develop artificial general intelligence, or AGI (a still-theoretical AI that reasons as well as humans), in a way that benefits humanity as a whole. The company first released ChatGPT in November 2022, and it has yet to achieve AGI-level intelligence. For now, ChatGPT is a conversational chatbot that relies on a large language model to generate responses to questions. Some people use it much as they would Google Search, but it can also do deeper research, generate images and reports, write just about anything, code, and solve problems that involve quantitative reasoning. Since its debut, the chatbot's user base has exploded: OpenAI said in a blog post in August that it had reached 700 million weekly users.
How to use ChatGPT
ChatGPT is available online and as an app for both iOS and Android. Users engage with it through conversation by simply typing in a prompt — an instruction for the chatbot. OpenAI also unveiled an "advanced voice mode" in 2024 — following a legal battle with Scarlett Johansson over the use of a voice that sounded too similar to hers — that lets users engage with the chatbot in natural, real-time conversations with the ability to sense emotions. Since the release of ChatGPT, OpenAI has unveiled several different models, all of which can be used in conjunction with the chatbot, including a series of reasoning models designed to think more deeply about problems.
It also unveiled GPT-4.5, which Altman described on X as "the first model that feels like talking to a thoughtful person." Until the release of GPT-5, it was up to the user to decide which model was best for their needs; now GPT-5 can make that decision for them, in essence deciding how long it needs to think about a problem to reach the best answer. ChatGPT also offers dozens of plug-ins to paying subscribers: an Expedia plug-in can help you book a trip, while one from OpenTable will nab you a dinner reservation. OpenAI has also launched Code Interpreter, a version of ChatGPT that can code and analyze data. Despite the bot's impressive capabilities, it remains imperfect. ChatGPT relies on its training data for responses, which means it can sometimes give misinformation, and OpenAI has been accused of using personal or copyrighted data to train it. The bot has also made it easier for students to cheat and plagiarize on their assignments.
How does ChatGPT work?
Chatbots like ChatGPT are powered by large amounts of data and computing techniques that let them make predictions and string words together meaningfully. They not only tap into a vast vocabulary and body of information but also understand words in context, which helps them mimic speech patterns while dispatching encyclopedic knowledge. When a user prompts a large language model, the query is broken into tokens, the smallest units of text a model processes. For OpenAI's models, tokens can be "as short as a single character or as long as a full word, depending on the language and context. Spaces, punctuation, and partial words all contribute to token counts," according to OpenAI.
ChatGPT's growing influence
Users have flocked to ChatGPT to improve their personal lives and boost productivity. The chatbot attracted 100 million users in its first five days on the market, a record at the time.
Some workers have used the AI chatbot to develop code, write real estate listings, and create lesson plans, while others have made teaching the best ways to use ChatGPT a career in itself. Businesses, including consulting firms, are also scrambling to adopt AI. The popularity of ChatGPT crystallized the value of a conversational tool, McKinsey senior partner Delphine Zurkiya told Business Insider. "There wasn't a major shift in our strategy in the sense that we had already been developing a lot of tools internally. It's just these tools now have become, we'll say faster, in delivering value thanks to that natural user interface," she said of the firm's internal chatbot, Lilli. Many consulting firms are also building similar tools for clients. KPMG, for example, has been collecting data on how its workers prompt AI and has used that information to build new tools for itself and clients. AI is also making waves in the legal world: Gibson Dunn is piloting ChatGPT Enterprise for its roughly 500 lawyers and staff, while judges say they've seen an increase in fake legal citations from lawyers relying too much on AI.
A slate of ChatGPT competitors has also emerged since its launch. Meta AI, built on the Llama 4 model, offers users an AI assistant that "gets to know" user preferences, remembers context, and is personalized. Anthropic's Claude has become the leading AI assistant for coding. Elon Musk's xAI built Grok, a chatbot trained in line with Musk's support for free speech. Google has Gemini, a multimodal model that CEO Sundar Pichai called "one of the biggest science and engineering efforts we've undertaken as a company." For OpenAI, which continues to unveil new models at a healthy clip, the chatbot is an eternal work in progress. "There is no analogy for what we're building," Nick Turley, the company's head of ChatGPT, said on a podcast in August.
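The tokenization step described earlier in this article can be pictured with a toy greedy longest-match tokenizer. The vocabulary below is invented for illustration; OpenAI's real tokenizers use byte-pair encoding over vocabularies of tens of thousands of entries:

```python
def tokenize(text: str, vocab: set[str]) -> list[str]:
    """Greedily split text into the longest subword tokens found in vocab."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

# Invented toy vocabulary: one word can split into several tokens.
vocab = {"token", "ization", "un", "believ", "able", " "}
print(tokenize("tokenization", vocab))  # → ['token', 'ization']
```

This matches OpenAI's description above: depending on the vocabulary, a token may be a full word, a word fragment, or a single character, and spaces and punctuation count toward token totals.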


CNN
The latest ChatGPT is supposed to be 'PhD level' smart. It can't even label a map
A version of this story appeared in CNN Business' Nightcap newsletter. To get it in your inbox, sign up for free here.
Sam Altman, the artificial intelligence hype master, is in damage-control mode. OpenAI's latest version of its vaunted ChatGPT bot was supposed to be 'PhD-level' smart. It was supposed to be the next great leap forward for a company into which investors have poured billions of dollars. Instead, ChatGPT got a flatter, more terse personality that can't reliably answer basic questions. The resulting public mockery has forced the company to make sweaty apologies while standing by its highfalutin claims about the bot's capabilities. In short: It's a dud.
The misstep on the model, called GPT-5, is notable for a couple of reasons.
1. It highlighted the many existing shortcomings of generative AI that critics were quick to seize on (more on that in a moment, because they were quite funny).
2. It raised serious doubts about OpenAI's ability to build and market consumer products that human beings are willing to pay for. That should be particularly concerning for investors, given that OpenAI, which has never turned a profit, is reportedly worth $500 billion.
Let's rewind a bit to last Thursday, when OpenAI finally released GPT-5 to the world — about a year behind schedule, according to the Wall Street Journal. Now, one thing this industry is really good at is hype, and on that metric, CEO Sam Altman delivered. During a livestream ahead of the launch, Altman said talking to GPT-5 would be like talking to 'a legitimate PhD-level expert in anything, any area you need.' In his typically lofty style, he said GPT-5 reminds him of 'when the iPhone went from those giant-pixel old ones to the retina display.' The new model, he said in a press briefing, is 'significantly better in obvious ways and subtle ways, and it feels like something I don't want to ever have to go back from.' Then people started actually using it.
Users had a field day testing GPT-5 and mocking its wildly incorrect answers. The journalist Tim Burke said on Bluesky that he prompted GPT-5 to 'show me a diagram of the first 12 presidents of the United States with an image of their face and their name under the image.' The bot returned an image of nine people instead, with rather creative spellings of America's early leaders, like 'Gearge Washingion' and 'William Henry Harrtson.' A similar prompt for the last 12 presidents returned an image that included two separate versions of George W. Bush. No, not George H.W. Bush and then Dubya. It had 'George H. Bush.' And then his son, twice. Except the second time, George Jr. looked like just some random guy.
Labeling basic maps of the United States also proved tricky for GPT-5 (but again, pretty funny, as tech writer Ed Zitron's post on Bluesky showed). GPT-5 did slightly better when I asked it on Wednesday for a map of the US. Some people can, in fact, label the great state of Vermont correctly without a PhD, but not GPT-5. And this is the first I'm hearing of states named 'Yirginia.'
The slop coming out of GPT-5 was funny when it was just us nerds trying to find its blind spots. But some regular fans of ChatGPT weren't laughing, particularly those alarmed by the new version's personality – or rather, its lack thereof. In rolling out the new model, OpenAI essentially retired its earlier models, including the wildly popular GPT-4o that had been on the market for over a year, making it so that even people who loved the previous iteration of the chatbot suddenly couldn't use it. More than 4,000 people signed a petition to compel OpenAI to resurrect it. 'I'm so done with ChatGPT 5,' one user wrote on Reddit, explaining how they tried to use the new model to run 'a simple system' of tasks that an earlier ChatGPT model used to handle. The user said GPT-5 'went rogue,' deleting tasks and moving deadlines.
And while OpenAI's defenders could chalk that up to an isolated or even made-up incident, within 24 hours of the GPT-5 launch Altman was doing damage control, seemingly caught off guard by the bad reception. On X, he announced a laundry list of updates, including the return of GPT-4o for paid subscribers. 'We expected some bumpiness as we roll out so many things at once,' Altman said in a post. 'But it was a little more bumpy than we hoped for!'
The CEO's failure to anticipate the outrage suggests he doesn't have a firm grasp on how an estimated 700 million weekly active users are engaging with his product. Perhaps Altman missed all the coverage — from CNN, the New York Times, the Wall Street Journal — of people forming deep emotional attachments to ChatGPT or rival chatbots, having endless conversations with them as if they were real people. A simple search of Reddit could have offered insights into how others are integrating the tool into their workflows and lives. Basic market research should have shown OpenAI that a mass update sunsetting the tools people rely on would be more than just a bit bumpy. When asked about the backlash to GPT-5, an OpenAI representative pointed CNN to Altman's public statements on social media announcing the return of older models, as well as a blog post about how the company is optimizing GPT-5.
The messy rollout speaks to how AI companies as a whole are struggling to prove themselves as producers of consumer goods rather than 'labs' — as they love to call themselves, because it sounds more scientific and distracts from the fact that they are backed by people trying to make unfathomable amounts of cash for themselves. AI companies often base their fanfare on how a model performs in behind-the-scenes benchmark tests of things like complex math. For all we know, GPT-5 sailed through those evaluations.
But the problem is that OpenAI hyped the thing so far into the stratosphere that disappointment was (or should have been) inevitable. 'I honestly didn't think OpenAI would burn the brand name on something so mid,' wrote prominent researcher and AI critic Gary Marcus. 'In a rational world, their valuation would take a hit,' he added, noting OpenAI still hasn't turned a profit, is slashing prices to keep its user numbers up, and is hemorrhaging talent as competition heats up. For critics like Marcus, the GPT-5 flop was a kind of vindication. As he noted in a blog post, other models like Elon Musk's Grok aren't faring much better, and the backlash from even AI proponents feels like a turning point.
When people talk about AI, they're talking about one of two things: the AI we have now — chatbots with limited, defined utility — and the AI that companies like Altman's claim they can build — machines that can outsmart humans and tell us how to cure cancer, fix global warming, drive our cars, and grow our crops, all while entertaining and delighting us along the way. But the gap between the promise and the reality of AI only seems to widen with every new model.
CNN's Lisa Eadicicco contributed reporting.
Yahoo
Bumps in the Machine: OpenAI's GPT-5 Rollout Stumbles Into the Spotlight
OpenAI's much-hyped launch of GPT-5—touted as a groundbreaking leap in artificial intelligence—has instead hit a familiar snag called reality. The company billed the model as its most advanced yet, but early users say the rollout has been anything but seamless. Reports of sluggish performance, erratic outputs, and missing features have fueled growing skepticism about whether GPT-5, and OpenAI, can deliver on the company's promises.
On Friday, OpenAI CEO Sam Altman offered a mea culpa on X. 'Rolling out to everyone is taking a bit longer,' he wrote. 'It's a massive change at big scale.' Altman acknowledged the rocky rollout, conceding it was rougher than OpenAI had planned. 'We will continue to work to get things stable and will keep listening to feedback,' he said. 'As we mentioned, we expected some bumpiness as we roll out so many things at once. But it was a little more bumpy than we hoped for!'
Here's a breakdown of the early complaints and controversies surrounding GPT-5 and what they might signal for the future of AI rollouts.
Performance Woes: Less Chatty, More Clunky
Many users on the Free and Plus tiers say GPT-5 feels lazy and clunky: slower responses, shorter replies, and a more robotic tone prompted comparisons to early-generation bots rather than an 'expert-level' AI. Some even argue it's a step backward, especially compared with the snappy and contextual GPT-4o. 'Incredible how ChatGPT Plus went from essential to garbage with the release of GPT-5,' wrote Nillion Network CTO John Woods on X. Hyperbolic Labs co-founder and CTO Yuchen Jin called the model a letdown—still prone to hallucinations, overusing em dashes, and struggling to follow instructions. 'I miss 4o, 4.5, and o3. The big router keeps failing me,' he wrote. 'Turns out I liked the long model list…please, get my friends out of the funeral.'
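The router complaints above can be pictured with a toy dispatcher that sends 'hard-looking' prompts to a larger model. The model names and heuristics here are invented; OpenAI has not published how GPT-5's router actually decides:

```python
def route(prompt: str) -> str:
    """Toy router: pick a model tier from surface features of the prompt."""
    hard_signals = ("think harder", "prove", "step by step", "derive")
    # Hypothetical model names, for illustration only.
    if any(s in prompt.lower() for s in hard_signals) or len(prompt) > 500:
        return "large-reasoning-model"
    return "small-fast-model"

print(route("think harder: why is the sky blue?"))  # → large-reasoning-model
```

A heuristic like this would explain the reported workaround: adding 'think harder' flips the routing decision, which is why users leaned on it.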
And while OpenAI marketed GPT-5 as a reasoning powerhouse, users say it often needs heavy-handed prompt engineering just to perform at the level expected. 'ChatGPT has some very serious bugs with routing for GPT-5,' Raindrop AI CTO Ben Hylak wrote. 'Unless you say 'think harder,' almost every request gets routed to a much smaller model that is incredibly stupid and circular.' Some developers flagged what they saw as regressions in basic coding skills, with GPT-5 reportedly stumbling over fundamental programming concepts such as variable scope and initialization—a troubling sign for a model marketed as the future of intelligent agents and autonomous coding. Worse, GPT-5 introduced 'thinking modes' that operate like internal gears, but users can't see or control them. The result? Confusion. One moment it's a philosopher; the next it can't tell how many Bs are in "blueberry."
Rollout Frustrations: Where's My Old Bot?
If you felt pushed into GPT-5, you're not alone. Many users complain that older model options like GPT-4 and 4o were abruptly removed or made hard to access, leaving them stuck with a model they didn't ask for. The rollout has also exposed stark disparities between pricing tiers. Free-tier and Plus users get throttled with usage limits and a nerfed 'mini' version, while Pro and Team subscribers access the full GPT-5 Pro. That's nothing new—but in the context of widespread dissatisfaction, it's particularly galling. Even Pro users have reported delays, outages, and throttling during peak hours, suggesting that OpenAI may be struggling with capacity.
PR Misfires and Ethical Red Flags
Any high-stakes tech launch carries the risk of a PR faceplant, and GPT-5 delivered. OpenAI drew criticism for using performance charts that some observers called misleading. The company also fumbled a basic math example during its live demo, a slip that raised eyebrows among both engineers and investors.
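The 'variable scope and initialization' stumbles developers reported above are the kind of pitfall that trips up human programmers too. A classic Python example (the reports do not say which language or construct GPT-5 got wrong, so this is purely illustrative):

```python
def make_counters():
    # Pitfall: each lambda captures the loop variable i itself, not its
    # value at that iteration, so every counter sees the final value of i.
    counters = [lambda: i for i in range(3)]
    return [c() for c in counters]

def make_counters_fixed():
    # Fix: bind i's current value via a default argument at definition time.
    counters = [lambda i=i: i for i in range(3)]
    return [c() for c in counters]

print(make_counters())        # → [2, 2, 2]
print(make_counters_fixed())  # → [0, 1, 2]
```

Errors of this class are subtle precisely because the broken version runs without complaint, which is why reviewers treat them as a red flag for autonomous coding tools.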
Ethical concerns also continue to dog the rollout: GPT-5's massive context window and AI-agent abilities have reignited fears about misuse, ranging from fraud and misinformation to the creation of synthetic media designed to deceive. Longstanding issues such as algorithmic bias, privacy violations, and job displacement have returned to the conversation with renewed urgency, intensifying calls for regulation.
The Good News (Yes, There Is Some)
Not everything is broken. OpenAI claimed GPT-5 showed progress on several fronts: fewer hallucinations, less sycophantic flattery, and more consistent reasoning across a broader range of topics. Its larger context window means it can now track and integrate information across longer conversations, which is genuinely useful for power users. The safety system has also been upgraded to offer more nuanced responses to sensitive prompts, though some still feel GPT-5 errs on the side of bland risk aversion. For developers with the right prompts and patience, GPT-5 can produce impressive code and tackle complex reasoning tasks. But for many, it still falls short of the 'game-changer' billing.
Conclusion: A Soft Launch in a Hard World
GPT-5's debut offers a cautionary tale in AI development: technical prowess isn't enough. Expectations are sky-high, and the margin for error is shrinking. Users want speed, accuracy, personality, and control—and they want it all the time. OpenAI now faces the twin challenge of managing those expectations while continuing to iterate on a product that is, for all its flaws, still at the frontier of AI. The company's rollout strategy may need as much fine-tuning as the model itself. Because if this is the future of AI… it might need a patch.