Robots lose against humans in half marathon

Yahoo | 19-04-2025
Robot Tiangong Ultra finished a half-marathon in 2 hours 40 minutes on Saturday in Beijing at the E-town Humanoid Robot Half Marathon. But it was no match for the fastest human finish of 1 hour, 11 minutes, 7 seconds.
The world's first human and humanoid robot half-marathon (21 kilometers or about 13 miles) included 21 bipedal robots racing alongside 10,000 humans.
The robots from Chinese manufacturers, such as DroidVP and Noetix Robotics, came in a variety of shapes and sizes, some shorter than 120 centimeters (3.9 feet), others as tall as 1.8 meters. One company boasted that its robot looked almost human, with feminine features and the ability to wink and smile. There was no comment on how that would help the robot run faster in the race.
Engineers operating the robots could make adjustments at aid stations. While the human racers had water and snacks along the way, the robots were treated to batteries and technical tools.
Organizers said the race was a technical demonstration, and no robot actually had a chance of winning.
The race was part of a Chinese government push to promote AI and robotics, as Beijing works to build its technological strength relative to the United States and to spur economic growth through investment in the sector.
"Chinese companies have really focused on showing off walking, running, dancing, and other feats of agility. Generally, these are interesting demonstrations, but they don't demonstrate much regarding the utility of useful work or any type of basic intelligence," Alan Fern, professor of computer science, artificial intelligence and robotics at Oregon State University, told Reuters news agency.
China is hoping that investment in frontier industries like robotics can help create new engines of economic growth. Some analysts, though, question whether having robots enter marathons is a reliable indicator of their industrial potential.
Edited by: Sean Sinico

Related Articles

NIO Stock Soars 10% as Newly Launched ES8 SUV Takes Aim at TSLA

Business Insider | 23 minutes ago

Chinese EV maker Nio Inc. (NIO) has introduced its all-new ES8 SUV, a spacious three-row model now available for pre-order in China. Following the news, NIO stock jumped 10% on Thursday, as investors reacted positively to the company's latest push in the premium EV market. Nio's ES8 is seen as a direct response to Tesla's (TSLA) recently launched Model Y L in China, which saw a strong start, gaining attention for its spacious design and strong performance.

Nio's ES8 Gets Bigger and More Affordable

The ES8 is Nio's largest battery-electric SUV to date, available in six- and seven-seat versions. It features a 520-kW dual-motor system and advanced autonomous driving supported by three LiDAR sensors and a 4D imaging radar. Importantly, the price starts at about $58,000, making it about 25% cheaper than its predecessor and bringing it closer in price to Tesla's Model Y. The pricing move is a key part of Nio's strategy to gain market share in the highly competitive Chinese EV market. To attract early orders, Nio has rolled out promotional incentives, including pre-order discounts and benefits for existing Nio owners who repurchase.

NIO Rolls Out Back-to-Back New Models

The ES8 launch follows Nio's July rollout of the ONVO L90, a mid-size SUV under its new family-focused brand. The L90 is priced under $37,000 and targets everyday buyers looking for value and comfort. Just 10 days after launch, Nio delivered over 4,000 units, and it is expected to pass 10,000 deliveries in August. This strong demand is helping Nio bounce back in China's competitive EV market.

Is Nio a Buy, Sell, or Hold?

Overall, Wall Street has a Hold consensus rating on NIO stock, based on three Buys, seven Holds, and one Sell assigned in the last three months.
The average NIO stock price target of $4.67 implies 15.55% downside risk from current levels.
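As a quick sanity check of those figures, the share price the math assumes can be back-solved from the average target and the stated downside (the article itself does not quote NIO's current price, so the value below is derived, not reported):

```python
# Back-solve the implied NIO share price from the article's figures:
# a $4.67 average target said to imply 15.55% downside.
target = 4.67
stated_downside = 0.1555

implied_price = target / (1 - stated_downside)       # price the math assumes
downside = (implied_price - target) / implied_price  # recover the downside

print(round(implied_price, 2))   # ~5.53
print(round(downside * 100, 2))  # 15.55
```

The 15.55% figure is therefore consistent with a share price of roughly $5.53 at the time the target was computed.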

China's DeepSeek quietly releases an open-source rival to GPT-5—optimized for Chinese chips and priced to undercut OpenAI

Yahoo | 2 hours ago

Chinese AI startup DeepSeek shocked the world in January with an AI model, called R1, that rivaled OpenAI's and Anthropic's top LLMs. It was built at a fraction of the cost of those other models, using far fewer Nvidia chips, and was released for free. Now, just two weeks after OpenAI debuted its latest model, GPT-5, DeepSeek is back with an update to its flagship V3 model that experts say matches GPT-5 on some benchmarks, and that is strategically priced to undercut it.

DeepSeek's new V3.1 model was quietly released in a message to one of its groups on WeChat, China's all-in-one messaging and social app, as well as on the Hugging Face platform. Its debut touches several of today's biggest AI narratives at once. DeepSeek is a core part of China's broader push to develop, deploy, and control advanced AI systems without relying on foreign technology. (In fact, DeepSeek's new V3 model is specifically tuned to perform well on Chinese-made chips.) While U.S. companies have been hesitant to embrace DeepSeek's models, they've been widely adopted in China and increasingly in other parts of the world. Even some American firms have built applications on DeepSeek's R1 reasoning model. At the same time, researchers warn that the models' outputs often hew closely to Chinese Communist Party-approved narratives, raising questions about their neutrality and trustworthiness.

China's AI push goes beyond DeepSeek: its industry also includes models such as Alibaba's Qwen, Moonshot AI's Kimi, and Baidu's Ernie. DeepSeek's new release, however, coming just after OpenAI's GPT-5 (a rollout that fell short of industry watchers' high expectations), underscores Beijing's determination to keep pace with, or even leapfrog, top U.S. labs.

OpenAI is concerned about China and DeepSeek

DeepSeek's efforts are certainly keeping U.S. labs on their toes.
In a recent dinner with reporters, OpenAI CEO Sam Altman said that rising competition from Chinese open-source models, including DeepSeek, influenced his company's decision to release its own open-weight models two weeks ago. 'It was clear that if we didn't do it, the world was gonna be mostly built on Chinese open-source models,' Altman said. 'That was a factor in our decision, for sure. Wasn't the only one, but that loomed large.'

In addition, last week the U.S. granted Nvidia and AMD licenses to export China-specific AI chips, including Nvidia's H20, but only if they agree to hand over 15% of revenue from those sales to Washington. Beijing quickly pushed back, moving to restrict purchases of Nvidia chips after Commerce Secretary Howard Lutnick told CNBC on July 15: 'We don't sell them our best stuff, not our second-best stuff, not even our third-best.' By optimizing DeepSeek for Chinese-made chips, the company is signaling resilience against U.S. export controls and a drive to reduce reliance on Nvidia. In DeepSeek's WeChat post, it noted that the new model format is optimized for 'soon-to-be-released next-generation domestic chips.'

Altman, at that same dinner, warned that the U.S. may be underestimating the complexity and seriousness of China's progress in AI, and said export controls alone likely aren't a reliable solution. 'I'm worried about China,' he said.

Less of a leap, but still striking incremental advances

Technically, what makes the new DeepSeek model notable is how it was built, with a few advances that would be invisible to consumers. But for developers, these innovations make V3.1 cheaper to run and more versatile than many closed and more expensive rival models. For instance, V3.1 is huge: 685 billion parameters, on the level of many top 'frontier' models. But its 'mixture-of-experts' design means only a fraction of the model activates when answering any query, keeping computing costs lower for developers.
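The mixture-of-experts idea described above can be sketched in a few lines: a small gating network scores a set of expert sub-networks, and only the top-k of them run for any given token. This is an illustrative toy with random weights and made-up sizes, not DeepSeek's actual architecture or code:

```python
# Toy mixture-of-experts routing: a gate picks top-k experts per token, so
# only a fraction of the total parameters is used for each query.
# All sizes and weights here are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # real frontier MoE models use far more, far larger experts
TOP_K = 2         # experts activated per token
DIM = 16          # toy hidden dimension

# Each "expert" is just a small weight matrix in this sketch.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
gate_w = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ gate_w                                # one score per expert
    top = np.argsort(logits)[-TOP_K:]                  # indices of top-k experts
    weights = np.exp(logits[top] - logits[top].max())  # softmax over chosen ones
    weights /= weights.sum()
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
# Only TOP_K / NUM_EXPERTS of the expert parameters ran for this token.
```

The design choice the article points at is visible even in the toy: compute per token scales with TOP_K, not with NUM_EXPERTS, which is how a 685B-parameter model can stay affordable to serve.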
And unlike earlier DeepSeek models, which split tasks that could be answered instantly from the model's pretraining apart from those that required step-by-step reasoning, V3.1 combines fast answers and reasoning in one model. GPT-5, as well as the most recent models from Anthropic and Google, have a similar ability, but few open-weight models have managed it so far. V3.1's hybrid architecture is 'the biggest feature by far,' Ben Dickson, a tech analyst and founder of the TechTalks blog, told Fortune.

Others point out that while this DeepSeek model is less of a leap than the company's R1 model (a reasoning model distilled from the original V3 that shocked the world in January), the new V3.1 is still striking. 'It is pretty impressive that they continue making non-marginal improvements,' said William Falcon, founder and CEO of AI developer platform Lightning AI. But he added that he would expect OpenAI to respond if its own open-source model 'starts to meaningfully lag,' and pointed out that the DeepSeek model is harder for developers to get into production, while OpenAI's version is fairly easy to deploy.

For all the technical details, though, DeepSeek's latest release highlights the fact that AI is increasingly seen as part of a simmering technological cold war between the U.S. and China. With that in mind, if Chinese companies can build better AI models for what they claim is a fraction of the cost, U.S. competitors have reason to worry about staying ahead.
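To a developer, that hybrid design amounts to one model with a per-request switch between fast answers and step-by-step reasoning, rather than two separate deployments. The sketch below is hypothetical; the tag names and function are invented for illustration and are not DeepSeek's real template or API:

```python
# Hypothetical sketch of a hybrid chat template: one model serves both modes,
# selected by a flag at request time. Tag names are invented for illustration.
def build_prompt(question: str, thinking: bool) -> str:
    mode_tag = "<think>" if thinking else "<no_think>"
    return f"User: {question}\nAssistant: {mode_tag}"

# Same model, two behaviors: a quick lookup vs. a long reasoning trace.
fast = build_prompt("What is the capital of France?", thinking=False)
slow = build_prompt("Prove there are infinitely many primes.", thinking=True)
```

The practical appeal is operational: one set of weights to host, with latency and cost chosen per query instead of per deployment.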

