Humanoid robot malfunctions, sparks viral panic
Fox News · 3 days ago

A chilling video circulating on social media has reignited old anxieties about robots turning against their creators. The footage shows a Unitree H1 humanoid robot, a machine about the size of an adult human, suddenly flailing its arms and legs with alarming force during a test, coming dangerously close to two technicians.
The scene has sparked heated debate about the safety of advanced robotics and artificial intelligence. But is this truly the beginning of something out of our worst fears, or is there just a straightforward technical explanation for what happened?
In the viral clip first posted on Reddit, the Unitree H1 is seen suspended from a crane at a Chinese factory, surrounded by two handlers. Without warning, the robot loses control, thrashing its limbs, knocking over equipment and forcing the technicians to scramble out of harm's way. The chaos is palpable, and the images quickly drew comparisons to movies like "The Terminator" and "I, Robot," with many viewers wondering if the age of rogue machines had finally arrived.
The Unitree H1 is not a prototype but a commercially available, general-purpose humanoid robot. Standing 5.9 feet tall and weighing 104 pounds, it's designed to walk, run and even perform dynamic movements like backflips and dancing. Its joints can generate up to 365 pound-feet of torque, enough to lift heavy objects or, in the wrong circumstances, cause serious harm.
Despite the frightening visuals, the reality is far less sinister. According to engineers and robotics experts, the root cause of the malfunction was a combination of a software flaw and a design oversight. During the test, the H1 was tethered by its head for safety, a common practice during public demonstrations. However, this physical restraint was not accounted for in the robot's balance algorithm.
The robot's sensors interpreted the resistance from the tether as if it were constantly falling. In response, the H1's stabilization software tried to correct its position, but the tether prevented normal movement. This created a feedback loop: the robot made increasingly aggressive corrections, resulting in the violent flailing seen in the video. Investigators concluded that this was not a case of emergent AI behavior but rather a known failure mode triggered by an unanticipated physical constraint and software flaw.
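The failure mode described above resembles what control engineers call "integrator windup." The sketch below is a purely illustrative toy model with made-up numbers, not Unitree's actual controller: when an unmodeled constraint (the tether) prevents a correction from ever reducing the sensed error, a controller that accumulates past error keeps escalating its commands.

```python
# Toy model (hypothetical gains and units, not the H1's real control code):
# a balance controller with an accumulating (integral) term "winds up" when
# an unmodeled tether keeps the sensed tilt error from ever clearing.

def simulate_tether_windup(steps=10, gain_p=0.5, gain_i=0.2):
    tilt_error = 1.0   # tether resistance read as a constant "falling" signal
    integral = 0.0
    commands = []
    for _ in range(steps):
        integral += tilt_error                    # error never clears, so this grows
        command = gain_p * tilt_error + gain_i * integral
        commands.append(command)
        # The tether blocks the correction, so the sensed error stays the same.
    return commands

cmds = simulate_tether_windup()
# Commands escalate even though the error is constant: 0.7, 0.9, 1.1, ...
```

Each correction is larger than the last, which is the software-level analogue of the increasingly violent flailing seen in the footage. Real controllers guard against this with anti-windup limits and by modeling external constraints.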
Although no one was seriously injured, the incident set off a wave of panic online. Many viewers saw the video without any technical context, fueling fears of a robot uprising. The imagery alone was enough to make people question whether advanced robots are safe to have around humans.
Experts, however, were quick to clarify that the malfunction was not evidence of a conscious or rebellious machine. Instead, it highlighted the importance of thorough safety protocols and testing, especially when deploying powerful machines in environments shared with people.
This event highlights some important lessons for both the robotics industry and the public. First, safety protocols are essential. Even with the most advanced hardware, unexpected interactions between software and the physical world can create dangerous situations.
Second, transparency from manufacturers plays a crucial role. When companies provide quick and clear explanations, they can help prevent panic and stop misinformation from spreading.
Finally, it is important to remember that artificial intelligence is not sentient, at least not yet. The Unitree H1's behavior was caused by programming and sensor misinterpretation, not by any independent thought or intent.
The viral Unitree H1 video is a reminder that technology, especially when it's powerful and autonomous, demands respect and caution. While the footage is unsettling, the true story is one of technical error, not a robot rebellion. As robots become more common in our workplaces and public spaces, incidents like this will serve as important lessons for engineers, regulators and the public alike. For now, the machines are not plotting against us, but they do need careful supervision and thoughtful design to keep everyone safe.
If you saw a robot lose control right in front of you, would you trust having machines like this in your daily life? Why or why not? Let us know by writing us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter.
Copyright 2025 CyberGuy.com. All rights reserved.


Related Articles

Perplexity received 780 million queries last month, CEO says

TechCrunch · 9 minutes ago

Perplexity received 780 million queries in May, CEO Aravind Srinivas shared on stage at Bloomberg's Tech Summit on Thursday. Srinivas said that the AI search engine is seeing more than 20% growth month-over-month.

'Give it a year, we'll be doing, like, a billion queries a week if we can sustain this growth rate,' Srinivas said. 'And that's pretty impressive because the first day in 2022, we did 3,000 queries, just one single day. So from there to doing 30 million queries a day now, it's been phenomenal growth.'

Srinivas went on to note that the same growth trajectory is possible, especially with the new Comet browser the company is working on. 'If people are in the browser, it's infinite retention,' he said. 'Everything in the search bar, everything on the new tab page, everything you're doing on the sidecar, any of the pages you're in, these are all going to be extra queries per active user, as well as seeking new users who just are tired of legacy browsers, like Chrome. I think that's going to be the way to grow over the coming year.'

Srinivas said the reason Perplexity is developing Comet is to shift the role of AI from simply providing answers to actually completing actions on your behalf. He explained that an AI-powered answer is essentially four or five searches in one, while an AI performing an action would be an entire browsing session completed with a single prompt. 'You really need to actually have a browser and hybridize the compute on the client and the server side in the most seamless way possible,' he said. 'And that calls for rethinking the whole browser.' He went on to explain that Perplexity isn't thinking of Comet as 'yet another browser,' but as a 'cognitive operating system.'

'It'll be there for you every time, anytime, for work or life, as a system on the side, or, like, just going and doing browsing sessions for you,' Srinivas said. 'And I think that'll fundamentally make us rethink how we even think about the internet. Like, earlier we would browse the internet, but now people are increasingly living on the internet. Like a lot of our life actually exists there. And if you want to build a proactive, personalized AI, it needs to live together with you, and that's why we need to rethink the browser entirely.'

While the company hasn't revealed much about the browser, Srinivas said in April that one reason Perplexity is developing its own browser is to track user activity beyond its own app so that it can sell premium ads, essentially mirroring what Google quietly did to become the giant it is today. It's currently unknown when exactly Comet will launch, but Srinivas previously said on X that it will arrive in the coming weeks.
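As a quick back-of-envelope check of the figures Srinivas cites (assumed values from his quotes, not official Perplexity data), 780 million queries in a 31-day month works out to roughly 25 million per day, and sustaining ~20% month-over-month growth for a year would indeed put the service well past a billion queries a week:

```python
# Back-of-envelope check of the quoted growth figures (assumptions, not
# official data): 780M queries in May, ~20% month-over-month growth.

monthly = 780_000_000
per_day = monthly / 31            # May has 31 days
growth = 1.20                     # ~20% month-over-month

per_day_in_a_year = per_day * growth ** 12   # compound 12 months
per_week_in_a_year = per_day_in_a_year * 7

print(f"{per_day / 1e6:.0f}M queries/day today")                 # ~25M/day
print(f"{per_week_in_a_year / 1e9:.2f}B queries/week in a year")
```

The compounding works out to roughly 1.6 billion queries a week after twelve months, so the "billion a week" claim is arithmetically plausible if, and only if, the growth rate holds.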

Tesla China EV sales decline continues in May

Yahoo · 10 minutes ago

Tesla's sales of China-made electric vehicles have continued to decline, marking an eight-month downturn with a 15% year-on-year drop in May, reported Reuters. Deliveries of the Model 3 and Model Y, including both domestic sales and exports, fell to 61,662 vehicles year-on-year, although that figure was up 5.5% from April.

The US electric vehicle specialist's sales woes in China were further exacerbated by intense price wars in the world's largest auto market. In an effort to boost sales, Tesla has offered smart assisted driving capability transfers to new vehicles and included the Model 3 and Model Y in a government-backed campaign to promote EV sales in rural areas.

However, Tesla's challenges are not limited to China. The company also faced a sales rout across much of Europe last month, attributed to its aging model lineup and CEO Elon Musk's political activities, which may have deterred buyers.

In response to the competitive market, Tesla ignited a price war in 2023, drawing in over 40 brands. This aggressive pricing strategy is under scrutiny as China has urged a halt to the bruising price wars. Following Tesla's price cuts, other manufacturers such as BYD, Geely Auto, and Chery have offered fresh incentives, intensifying the competition. Meanwhile, BYD, Tesla's biggest rival, reported a 14.1% year-on-year rise in global passenger vehicle sales, although this was a slowdown from April's 19.4% growth.

In a strategic move to enhance its advanced driving assistance system (ADAS) in China, Tesla partnered with Baidu in March. Baidu's engineers have collaborated with Tesla's Beijing team to integrate Baidu's mapping data with Tesla's full self-driving (FSD) Version 13 software, aiming to refine the system with more accurate and up-to-date mapping information for navigating Chinese roads.
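For context, the reported percentages imply rough baselines for the two comparison periods. This is simple arithmetic on the figures quoted above, not official delivery data:

```python
# Rough baselines implied by the reported figures (sketch, not official data):
# 61,662 May 2025 deliveries, down 15% year-on-year, up 5.5% from April.

may_2025 = 61_662
may_2024 = may_2025 / (1 - 0.15)      # implied May 2024 baseline
april_2025 = may_2025 / (1 + 0.055)   # implied April 2025 figure

print(f"Implied May 2024: ~{may_2024:,.0f}")      # ~72,544
print(f"Implied April 2025: ~{april_2025:,.0f}")  # ~58,447
```

In other words, the May figure recovered modestly from April's implied low while remaining roughly 11,000 vehicles below the prior year's level.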
"Tesla China EV sales decline continues in May" was originally created and published by Just Auto, a GlobalData owned brand.

Reddit Sues Anthropic Over the Unlicensed Use of Its User Posts

Gizmodo · 26 minutes ago

It's well known that the AI industry rests on shaky legal ground. Companies like OpenAI have built their multi-billion-dollar businesses on the backs of vast tranches of training data, much of which is sourced from copyrighted content. The creators of that content know they're being ripped off and, more and more, it's leading to lawsuits. We got another reminder of this conundrum this week, when Reddit sued Anthropic over its use of Redditors' posts in its training data.

Reddit's lawsuit, which was filed Wednesday, accuses the Amazon-backed AI company of breaching its user agreement. 'As far back as December 2021…Anthropic was already—without authorization and in direct violation of Reddit's User Agreement—training Claude on Reddit users' posts,' the lawsuit claims.

Anthropic, whose flagship product is the AI chatbot Claude, has tried to position itself as the 'good guy' of the AI industry—a company that plays by the rules and advances AI frameworks that are considerate of safety and ethical considerations. But, despite its 'white knight' PR, the company has repeatedly run into legal issues that throw its supposedly 'ethical' business practices into question. This week's litigation is yet another reminder of that.

The lawsuit accuses Anthropic of unjustly enriching itself while also breaching the platform's user agreement. The suit claims that the AI company's bots have visited Reddit's website over 100,000 times since 2024. 'This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets,' the litigation states. It adds that Anthropic 'continues to publicly admit that it trains its AI technologies on Reddit content.'

When reached for comment by Gizmodo, an Anthropic spokesperson provided the following statement: 'We disagree with Reddit's claims and will defend ourselves vigorously.'

The war over AI content usage has become one of the industry's most prominent dilemmas. Platforms and artists are aware that their content is being pilfered for the sake of AI fuel, and they're firing up the lawsuit machine to fight back. At this point, OpenAI has been sued by so many different people and institutions that it's hard to keep track of it all—everyone from Sarah Silverman, Ta-Nehisi Coates, George R. R. Martin, and Jonathan Franzen, to the Center for Investigative Reporting, The Intercept, a variety of newspapers (including The Denver Post and the Chicago Tribune), and some YouTubers. The New York Times is currently suing the company on similar grounds.

Reddit has sought to insulate itself from getting ripped off by developing contracts with AI companies that clearly stipulate an exchange of content for money. Last February, Reddit struck a deal with Google that allowed the tech giant to use the content on its platform as AI fodder, so long as the company coughed up $60 million a year. Not long afterward, a similar deal was struck with OpenAI. Anthropic doesn't seem to have gotten the memo, but it surely will now.

More and more, it seems like this is the new model for the AI industry: to quote one of my favorite TV shows, you're going to have to pay the troll toll if you don't want to get pounded by a lawsuit. It's obviously a situation that favors large companies. AI companies with the resources will be able to buy access to large amounts of data to fuel their AI habits. Smaller, lesser-resourced firms will be shit out of luck.
