
China's DeepSeek Upgrades R1 AI Model, Narrowing Gap with Western Counterparts
The latest version of the R1 model reportedly performs just below OpenAI's o3 and o4-mini, based on evaluations by LiveCodeBench
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
Chinese AI startup DeepSeek has quietly released an upgraded version of its R1 model, 'DeepSeek-R1-0528', on the open-source platform Hugging Face, claiming substantial improvements in mathematical reasoning, programming, and logic, alongside a reduction in hallucination rates.
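For readers who want to inspect the release themselves, the weights are published on Hugging Face. The sketch below shows one way such a checkpoint is typically loaded and queried with the Hugging Face transformers library; the repository id deepseek-ai/DeepSeek-R1-0528 is inferred from the model name in the announcement, and the full model is far too large for a single consumer GPU, so treat this as an illustration rather than a turnkey recipe.

```python
# Illustrative sketch: querying DeepSeek-R1-0528 via the Hugging Face transformers library.
# The repository id is inferred from the model name in the article and may differ;
# the full checkpoint is very large, so in practice you would use a hosted endpoint
# or a smaller distilled/quantised variant rather than loading it locally like this.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-0528"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",      # use the dtype stored in the checkpoint
    device_map="auto",       # shard across available GPUs/CPU
    trust_remote_code=True,  # assumes the repo ships custom model code
)

# Chat-style prompt; reasoning models typically emit their chain of thought
# before the final answer.
messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```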
The update positions DeepSeek's model closer to top-tier systems such as OpenAI's o3 and Google's Gemini 2.5 Pro, according to performance data cited on Hugging Face. The company's low-key release strategy continues its trend of disrupting the AI landscape without formal fanfare.
"In terms of comprehensive performance, the updated model is approaching the level of the industry's leading systems," DeepSeek said in its Hugging Face release.
DeepSeek first drew global attention in January with the debut of its original R1 model, which delivered competitive performance despite limited resources. The company's progress raised concerns in the West, with R1's launch contributing to brief market volatility and a dip in Nvidia's stock price.
The latest version of the R1 model reportedly performs just below OpenAI's o3 and o4-mini, based on evaluations by LiveCodeBench, an AI benchmarking platform. The model's capabilities are built on cost-efficiency and reasoning-focused design—traits that have made DeepSeek a symbol of China's broader AI ambitions.
Founded in Hangzhou, DeepSeek has become a key player in China's push to rival American AI giants, navigating restrictions on advanced semiconductors by focusing on optimisation and resourceful development. The company's founder, Liang Wenfeng, has gained domestic prominence, recently participating in a high-level economic forum chaired by President Xi Jinping. His rise reflects growing confidence in China's home-grown AI talent, amid intensifying global competition in the sector.

Related Articles


Forbes
37 minutes ago
New Study Reveals True AI Capabilities And Job Replacement Risk
The OECD has unveiled groundbreaking AI Capability Indicators that map artificial intelligence progress against human abilities across nine key domains, revealing where AI currently stands and what's coming next.

Imagine trying to navigate the digital transformation of your business using a compass that only points to "somewhere north." That's essentially what we've been doing with AI assessment until now. While tech companies have been throwing around impressive-sounding claims of superhuman performance in narrow tasks, business leaders and policymakers have been left squinting through the hype, trying to figure out what any of it actually means for the real world.

The OECD has just delivered something we've desperately needed: a proper GPS system for AI capabilities. Their new AI Capability Indicators represent the most comprehensive attempt yet to create a standardized framework for understanding what AI can actually do compared to human abilities. Think of it as moving from vague headlines about "AI breakthrough" to having a detailed performance review that actually tells you something useful about real-world capabilities.

Unlike the typical parade of cherry-picked benchmarks that dominate tech headlines, the OECD's approach cuts through the marketing noise. They've developed nine distinct capability scales that map AI progress against fundamental human abilities: Language, Social Interaction, Problem Solving, Creativity, Metacognition and Critical Thinking, Knowledge and Memory, Vision, Manipulation, and Robotic Intelligence. Each scale runs from Level 1 (basic, solved problems) to Level 5 (full human equivalence), with clear descriptions of what AI systems can actually accomplish at each stage.

What makes this particularly helpful is how it sidesteps the technical jargon that usually makes AI assessment reports about as accessible as quantum physics textbooks. Instead of drowning in discussions of transformer architectures or neural network parameters, you get straightforward descriptions like whether an AI system can "adapt teaching methods to meet students' varying needs" or "handle objects of diverse shapes and materials in cluttered environments."

The methodology behind these indicators is equally impressive. Over 50 experts across computer science and psychology spent five years developing this framework, combining rigorous academic research with practical, real-world applications.

Here's where things get interesting and perhaps a bit sobering for those caught up in the AGI hype cycle. The assessment reveals that current AI systems are clustered around Levels 2 and 3 across most capabilities. We're not at the finish line; we're not even close to it.

Large language models like ChatGPT score at Level 3 for language capabilities, meaning they can understand and generate semantic meaning with sophisticated knowledge, but they still struggle with analytical reasoning and have that persistent habit of confidently stating complete nonsense. It's like having a brilliant conversationalist who occasionally insists that gravity flows upward.

In social interaction, even the most advanced systems barely reach Level 2. They can combine simple movements to express emotions and learn from interactions, but they're essentially sophisticated actors with no real understanding of the social dynamics they're performing.

The vision capabilities tell an equally nuanced story. While AI can handle variations in lighting and target objects, performing multiple subtasks with known data variations (Level 3), it's still leagues away from the adaptable, learning-oriented visual intelligence that characterizes higher levels.

For business leaders, this framework offers something really valuable: a reality check that cuts through vendor marketing speak. When a sales representative promises their AI solution will "revolutionize your operations," you can now ask pointed questions about which capability levels their system actually achieves and in which specific domains. The gap analysis between current AI capabilities and the requirements of specific business tasks becomes clearer when standardized benchmarks are in place.

Consider customer service, where companies are deploying AI chatbots with the enthusiasm of gold rush prospectors. The OECD framework suggests that while AI can handle structured interactions reasonably well, anything requiring genuine social intelligence, nuanced problem-solving, or creative thinking quickly exposes current limitations. This doesn't mean AI isn't useful in customer service, but it helps set realistic expectations about how much human oversight will still be necessary. It's the difference between using AI as a sophisticated tool versus expecting it to be a replacement employee. One approach leads to productivity gains; the other leads to customer complaints and public relations disasters.

The framework also reveals opportunities that might not be immediately obvious. Areas where AI performs at Level 3 or higher represent genuine automation potential, while Level 2 capabilities suggest powerful augmentation opportunities. Smart businesses will use this intelligence to identify the low-hanging fruit while preparing for the longer-term implications of advancing capabilities.

Perhaps nowhere are the implications more immediate and profound than in the field of education. The report's analysis of teaching capabilities reveals why educators are feeling simultaneously excited and terrified about AI's expanding role in classrooms. Many core teaching tasks require capabilities at Levels 4 and 5, particularly when it comes to adapting instruction to individual student needs or managing the complex social dynamics that make learning environments work.

This creates a fascinating paradox worthy of a philosophy textbook: AI might be able to deliver standardized instruction more efficiently than humans, but the most transformational aspects of teaching, the inspiration, emotional connection, and creative problem-solving that actually change lives, remain firmly in human territory.

The implications suggest we're heading toward a hybrid model that could fundamentally reshape education. AI handles routine instructional delivery, assessment, and administrative tasks, while humans focus on motivation, emotional support, creative problem-solving, and the kind of inspirational mentoring that transforms students into lifelong learners. This isn't displacement; it's specialization at a scale we've never seen before.

The OECD's systematic approach provides something invaluable for strategic planning: a clear picture of what breakthrough capabilities we should be monitoring. The jump from Level 3 to Level 4 across multiple domains would represent a genuine inflection point, particularly in areas like creative problem-solving and social intelligence.

What's especially revealing is how the framework illuminates the interconnectedness of different capabilities. True robotic intelligence, for instance, requires simultaneous advances across multiple domains. You can't have Level 5 robotic intelligence without corresponding progress in vision, manipulation, social interaction, and problem-solving.

The framework also highlights capability areas where progress might stall or slow dramatically. Social interaction and creativity appear to have particularly steep curves between current performance and human-level capability.

What the OECD has created is essentially a report card system for the AI age. Instead of being swept along by breathless predictions about artificial general intelligence arriving next week, we now have a framework for systematically tracking progress and understanding real-world implications.

For businesses, this means more informed decisions about where to invest in AI capabilities and where to double down on human talent development. For policymakers, it provides a foundation for regulations and workforce planning grounded in evidence rather than science fiction. For educators, it offers a roadmap for preparing students for a world where human and artificial intelligence must work together effectively.

The OECD framework isn't predicting exactly when AI will achieve human-level performance across all domains; that's still anyone's guess. Instead, it provides a common language for discussing AI capabilities and a systematic way to track progress that everyone, from CEOs to school principals, can understand and use. In a field notorious for moving fast and breaking things, having a reliable measurement system might be just what is needed.
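To make the gap-analysis idea concrete, here is a small illustrative sketch in Python. The nine capability scales and the Level 1-5 range come from the article; the specific level values assigned below (Language 3, Social Interaction 2, Vision 3, and placeholder values elsewhere) and the example task requirements are hypothetical, not OECD figures.

```python
# Illustrative sketch of the OECD-style capability framework described above.
# Scale names and the 1-5 levels come from the article; the current-level values
# and the example task requirements are hypothetical, for demonstration only.
from dataclasses import dataclass

SCALES = [
    "Language", "Social Interaction", "Problem Solving", "Creativity",
    "Metacognition and Critical Thinking", "Knowledge and Memory",
    "Vision", "Manipulation", "Robotic Intelligence",
]

# Hypothetical current AI levels (the article reports most capabilities around 2-3).
CURRENT_AI_LEVELS = {
    "Language": 3, "Social Interaction": 2, "Problem Solving": 2, "Creativity": 2,
    "Metacognition and Critical Thinking": 2, "Knowledge and Memory": 3,
    "Vision": 3, "Manipulation": 2, "Robotic Intelligence": 2,
}

@dataclass
class Task:
    name: str
    required_levels: dict  # scale name -> minimum level the task needs

def gap_analysis(task: Task, ai_levels: dict) -> dict:
    """Return the shortfall (required minus current) per scale the task uses."""
    return {
        scale: max(0, need - ai_levels.get(scale, 1))
        for scale, need in task.required_levels.items()
    }

# Hypothetical customer-service task: structured language is covered,
# but genuine social intelligence and nuanced problem solving are not.
customer_service = Task(
    name="Tier-1 customer support",
    required_levels={"Language": 3, "Social Interaction": 4, "Problem Solving": 3},
)

print(gap_analysis(customer_service, CURRENT_AI_LEVELS))
# -> {'Language': 0, 'Social Interaction': 2, 'Problem Solving': 1}
```

Under these assumed numbers, the shortfall shows up exactly where the article says it would: not in language handling, but in social interaction and problem solving.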


Business Insider
38 minutes ago
Quanta Services price target raised to $414 from $364 at Goldman Sachs
Goldman Sachs raised its price target on Quanta Services (PWR) to $414 from $364 and keeps a Buy rating on the shares. The firm is positive on the stock and its trading multiple, believing that over the longer term the substantial volume of transmission and distribution spending will continue to support shares even in periods of AI headline-driven softness, the analyst tells investors in a research note. Goldman adds, however, that investors now have a relatively better understanding of the fundamental drivers of estimates and recognize that under 10% of revenue comes from data centers.
Yahoo
43 minutes ago
Austin city manager removes item on automatic license plate readers from council agenda
The Brief
- Item regarding automated license plate reader program removed from Austin City Council agenda
- City manager cites concerns expressed by residents during Tuesday's work session
- Testimony focused on city's vendor Flock, which works with ICE and uses AI

AUSTIN, Texas - Austin's city manager has removed an item concerning the city's automated license plate reader (ALPR) program from Thursday's city council agenda.

What we know
T.C. Broadnax said in a statement Tuesday night that staff will be withdrawing Item 67, a proposed extension of the city's ALPR program, from Thursday's agenda. Broadnax cites concerns expressed by Austin residents during the council's work session on Tuesday as a reason behind his decision.

What they're saying
"Given concerns expressed today, I have decided to withdraw this item from the agenda at this time to provide more opportunities to address council members' questions and do our due diligence to alleviate concerns prior to bringing this item back to City Council for consideration," Broadnax said in his message to the Mayor and Council.

Local perspective
A press release from the office of council member Mike Siegel says that dozens of residents showed up to the work session to testify about the program. "The speakers overwhelmingly testified against the use of ALPRs, citing concerns about personal privacy, threats to immigrant families, threats to political dissidents, and more. Much of the testimony focused on the City's ALPR vendor, Flock, which works closely with Immigration and Customs Enforcement (ICE) and also uses artificial intelligence (AI) to develop profiles of vehicles based on camera footage and other sources," said the release.

What's next
Broadnax's decision essentially means Austin's ALPR program will end on June 30. A press conference is scheduled for Wednesday, June 4 at 11:30 a.m., where immigration, reproductive rights and data privacy advocates will join Mayor Pro Tem Vanessa Fuentes, Council member Zo Qadri and Siegel. They are expected to speak against the use of ALPRs and mass surveillance tools in Austin.

The Source
Information in this report comes from a release from Austin City Council member Mike Siegel's office.