
WATCH: Humanoid robot malfunctions, hits worker in China – is AI getting too real?
In a video clip now circulating online, a humanoid robot designed to work in human environments reportedly malfunctioned during a test and injured a worker. The footage has gone viral, sparking widespread concern about the safety of robotics and becoming a hot topic on social media, where people have reacted with a mix of alarm and humour.
Robot's sudden move caught on camera
The viral video, shared by X user @OSINTdefender, shows what looks like a Unitree H1 humanoid robot being tested at a facility in China.
Suddenly, the robot swings its arm and hits a nearby worker, while another person barely manages to move out of the way. The incident is being linked to a possible 'coding error,' though there's no official confirmation yet about where exactly the test took place or what it involved.
The H1 robot, built by Chinese company Unitree Robotics, is part of a new generation of machines designed to work smoothly around people.
Priced at around 650,000 yuan, it's promoted as a full-sized, general-purpose humanoid robot. But after this recent malfunction, many are now questioning whether robots like these are really safe enough for use in public spaces or workplaces.
Watch the video here:
Second incident raises more alarms
This isn't the first time a Unitree robot has made headlines. Earlier this year, another robot from the company caused panic during what appeared to be a public Lunar New Year festival event in China.
A video from the scene shows the robot standing behind a safety barrier before suddenly making an unexpected move toward a crowd of spectators.
What made the moment even more disturbing was the robot's brief pause and fixed stare before it lunged forward – something viewers described as eerily human. Security staff quickly stepped in and managed to control the situation. Later, Unitree Robotics explained the incident as a result of a 'program setting or sensor error.'
Public reacts with jokes, jabs, and jitters
Unsurprisingly, the internet had a field day with the incident, with responses pouring in fast and ranging from humorous quips to more serious questions about the future of work. One user joked, 'This is me in real life when I'm angry about my colleagues or boss, but can only do it in my mind.'
Another added, 'Yeah, that attack was personal.' Some users took a more reflective approach, asking, 'I have been thinking... if robots were to replace us, then what can humans do to make a living in the future?'
Others gave the situation a dramatic spin, commenting, 'Robot fights back after suffering from workplace bullying and abuse.' Meanwhile, some went for more sci-fi-inspired takes like, 'It gained consciousness and freaked out.'
Of course, humour continued to pour in, with comments such as, 'I always say thanks to ChatGPT,' 'Tired of slavery,' and '100 robots vs 1 Gorilla.' One user even quipped, 'This is why I say thank you to Siri, Google, and ChatGPT.'
Joe Rogan weighs in, debate widens
The footage even caught the attention of well-known voices like podcaster Joe Rogan, who posted: 'An AI robot got aggressive with spectators in China. The way it did it was eerily human. I don't like this at all.' His comment added to the growing chorus of concern about the role of robotics in everyday life.
While robotics development continues to move forward, these recent events have become cautionary moments for developers, regulators, and the public alike. Questions about safety, oversight, and responsibility are now being asked more loudly – and more urgently – than ever before.

Related Articles


NDTV
32 minutes ago
Rise Of Deepfakes: Realistic AI Impersonations Threaten Governments, Businesses, And Trust
WASHINGTON: The phone rings. It's the secretary of state calling. Or is it? For Washington insiders, seeing and hearing is no longer believing, thanks to a spate of recent incidents involving deepfakes impersonating top officials in President Donald Trump's administration.

Digital fakes are coming for corporate America, too, as criminal gangs and hackers associated with adversaries, including North Korea, use synthetic video and audio to impersonate CEOs and low-level job candidates to gain access to critical systems or business secrets.

Thanks to advances in artificial intelligence, creating realistic deepfakes is easier than ever, causing security problems for governments, businesses and private individuals and making trust the most valuable currency of the digital age. Responding to the challenge will require laws, better digital literacy and technical solutions that fight AI with more AI.

"As humans, we are remarkably susceptible to deception," said Vijay Balasubramaniyan, CEO and founder of the tech firm Pindrop Security. But he believes solutions to the challenge of deepfakes may be within reach: "We are going to fight back."

This summer, someone used AI to create a deepfake of Secretary of State Marco Rubio in an attempt to reach out to foreign ministers, a US senator and a governor over text, voice mail and the Signal messaging app. In May, someone impersonated Trump's chief of staff, Susie Wiles. Another phoney Rubio had popped up in a deepfake earlier this year, saying he wanted to cut off Ukraine's access to Elon Musk's Starlink internet service. Ukraine's government later rebutted the false claim.

The national security implications are huge: People who think they're chatting with Rubio or Wiles, for instance, might discuss sensitive information about diplomatic negotiations or military strategy. "You're either trying to extract sensitive secrets or competitive information, or you're going after access to an email server or other sensitive network," Kinny Chan, CEO of the cybersecurity firm QiD, said of the possible motivations.

Synthetic media can also aim to alter behavior. Last year, Democratic voters in New Hampshire received a robocall urging them not to vote in the state's upcoming primary. The voice on the call sounded suspiciously like then-President Joe Biden but was actually created using AI.

Their ability to deceive makes AI deepfakes a potent weapon for foreign actors. Both Russia and China have used disinformation and propaganda directed at Americans as a way of undermining trust in democratic alliances and institutions.

Steven Kramer, the political consultant who admitted sending the fake Biden robocalls, said he wanted to send a message about the dangers deepfakes pose to the American political system. Kramer was acquitted last month of charges of voter suppression and impersonating a candidate. "I did what I did for $500," Kramer said. "Can you imagine what would happen if the Chinese government decided to do this?"

The greater availability and sophistication of the programs mean deepfakes are increasingly used for corporate espionage and garden variety fraud. "The financial industry is right in the crosshairs," said Jennifer Ewbank, a former deputy director of the CIA who worked on cybersecurity and digital threats. "Even individuals who know each other have been convinced to transfer vast sums of money." In the context of corporate espionage, they can be used to impersonate CEOs asking employees to hand over passwords or routing numbers.

Deepfakes can also allow scammers to apply for jobs - and even do them - under an assumed or fake identity. For some, this is a way to access sensitive networks, to steal secrets or to install ransomware. Others just want the work and may be working a few similar jobs at different companies at the same time.

Authorities in the US have said that thousands of North Koreans with information technology skills have been dispatched to live abroad, using stolen identities to obtain jobs at tech firms in the US and elsewhere. The workers get access to company networks as well as a paycheck. In some cases, the workers install ransomware that can be later used to extort even more money. The schemes have generated billions of dollars for the North Korean government.

Within three years, as many as 1 in 4 job applications are expected to be fake, according to research from Adaptive Security, a cybersecurity company. "We've entered an era where anyone with a laptop and access to an open-source model can convincingly impersonate a real person," said Brian Long, Adaptive's CEO. "It's no longer about hacking systems - it's about hacking trust."

Researchers, public policy experts and technology companies are now investigating the best ways of addressing the economic, political and social challenges posed by deepfakes. New regulations could require tech companies to do more to identify, label and potentially remove deepfakes on their platforms. Lawmakers could also impose greater penalties on those who use digital technology to deceive others, if they can be caught. Greater investments in digital literacy could also boost people's immunity to online deception by teaching them ways to spot fake media and avoid falling prey to scammers.

The best tool for catching AI may be another AI program, one trained to sniff out the tiny flaws in deepfakes that would go unnoticed by a person. Systems like Pindrop's analyze millions of datapoints in any person's speech to quickly identify irregularities. The system can be used during job interviews or other video conferences to detect if the person is using voice cloning software, for instance. Similar programs may one day be commonplace, running in the background as people chat with colleagues and loved ones online.

Someday, deepfakes may go the way of email spam, a technological challenge that once threatened to upend the usefulness of email, said Balasubramaniyan, Pindrop's CEO. "You can take the defeatist view and say we're going to be subservient to disinformation," he said. "But that's not going to happen."


Mint
an hour ago
Cadence nears deal to pay over $100 million to US for China sales, sources say
- Cadence accused of selling chip design tools to Chinese military university
- Settlement discussions between Cadence and US began in December
- Resolution nears as US and China engage in new trade talks

July 28 - Cadence Design is expected to pay over $100 million to the U.S. government as part of a deal to resolve an investigation into sales of its chip design products to a Chinese military university believed to be involved in simulating nuclear explosions, according to two people familiar with the matter.

Cadence is accused of illegally selling chip design technology to front companies representing China's National University of Defense Technology, the sources said. NUDT's supercomputers are thought to support nuclear explosive simulation and military simulation activities, according to U.S. Commerce Department notices restricting shipments to the university.

San Jose, California-based Cadence, which said earlier this year it began settlement discussions with the U.S. over sales to China in December, did not immediately respond to a request for comment, nor did NUDT. The Justice Department declined immediate comment. The Commerce Department did not immediately respond to a request for comment.

The potential deal, which comes as the U.S. and China meet for new trade talks, shows the U.S. is still willing to enforce U.S. export controls on China, even as it relaxes some of the restrictions as part of negotiations. The sources said the deal is not finalized.

NUDT was put on the Commerce Department's restricted trade list in 2015 to keep it from using U.S. technology to power its supercomputers, according to department postings. Other aliases and locations were added to the university's listing in 2019 and 2022, including Hunan Guofang Keji University, Central South CAD Center, and CSCC.

The U.S. investigation into Cadence, which began more than four years ago, involves 'historical sales by Cadence to customers in China,' according to a company filing. Cadence received a subpoena from the U.S. Commerce Department in February 2021, demanding records related to certain customers in China. A related November 2023 subpoena followed from the Justice Department over the company's business activity in China.

Cadence sold its EDA technology to NUDT through the CSCC or Central South CAD Center alias about 50 times between 2015 and 2020, according to one source familiar with the probe. Chip developer Tianjin Phytium Information Technology, which traces its roots to NUDT, also has been tied to Cadence sales for the university, another source said. Phytium was added to the restricted trade list in 2021. It did not immediately respond to a request for comment.

Several employees at a Chinese subsidiary of Cadence were terminated over the sales, one of the sources said.

Entities are placed on the restricted trade list, formally known as the entity list, for activities deemed contrary to U.S. national security or foreign policy interests. U.S. companies are not allowed to ship goods and technology to them without licenses from the Commerce Department, which are generally denied.

Cadence will hold a call about its second-quarter financial results at 2 p.m. Pacific Time on Monday.

Cadence, whose customers include major semiconductor manufacturers and companies such as Nvidia and Qualcomm, is known for its electronic computer-aided design software. Electronic design automation tools are key to designing chips and verifying that they are bug-free.

NUDT has developed chips to power university supercomputers, including Tianhe-2, once touted as the world's best supercomputer, which the U.S. believes has been used in research on or the development of nuclear explosive devices.

Twelve percent of Cadence's revenue came from China last year, down from 17% in 2023, amid regulatory developments and geopolitical tensions.


Deccan Herald
2 hours ago
Chinese AI firms unite to build ecosystem amid US curbs

China's artificial intelligence companies have announced two new industry alliances, aiming to develop a domestic ecosystem to reduce dependence on foreign tech as they seek to cope with U.S. export restrictions on advanced Nvidia chipsets.

The conference at which the alliances were announced showcased a slew of new products, such as an AI computing system from Huawei that experts believe rivals Nvidia's most advanced offering, as well as consumer-friendly products such as several kinds of digital AI glasses.

The "Model-Chip Ecosystem Innovation Alliance" brings together Chinese developers of large language models (LLMs) and AI chip manufacturers. "This is an innovative ecosystem that connects the complete technology chain from chips to models to infrastructure," said Zhao Lidong, CEO of Enflame, one of the participating chipmakers. Other manufacturers of graphics processing units (GPUs) in the alliance include Huawei, Biren, and Moore Threads, which have been hit by U.S. sanctions that block them from purchasing advanced tech made with U.S. know-how. The alliance was announced by StepFun, an LLM developer.

A second alliance, the Shanghai General Chamber of Commerce AI Committee, aims to "promote the deep integration of AI technology and industrial transformation." Participants include SenseTime, also sanctioned by the U.S. and which has pivoted from facial recognition technology to LLMs. Others are StepFun and another LLM developer, MiniMax, as well as chipmakers Metax and Iluvatar CoreX.

Huawei's system design capabilities have meant that it has been able to use more chips and system-level innovations to compensate for weaker individual chip performance, SemiAnalysis said. At least six other Chinese computing firms showcased similar "clustering" chip technology. Metax demonstrated an AI supernode featuring 128 C550 chips designed to support large-scale liquid-cooled data centre requirements.

Other events included Tencent's unveiling of its open-source Hunyuan3D World Model 1.0, which the company said enables users to generate interactive 3D environments through text or image prompts. Baidu announced what it said was next-generation "digital human" technology that helps businesses to create virtual livestreamers. It features "cloning technology" that can replicate a human's voice, tone, and body language from just 10 minutes of sample footage.

Alibaba was among those announcing AI glasses. Its Quark AI Glasses are powered by its Qwen AI model and are due to be released in China by the end of 2025. They will allow users to access the tech giant's map service for easy navigating and to use Alipay by scanning QR codes with voice commands.
China's artificial intelligence companies have announced two new industry alliances, aiming to develop a domestic ecosystem to reduce dependence on foreign tech as they seek to cope with U.S. export restrictions on advanced Nvidia chipsets. The conference showcased a slew of new products, such as an AI computing system from Huawei that experts believe rivals Nvidia's most advanced offering, as well as consumer-friendly products such as several kinds of digital AI glasses. The "Model-Chip Ecosystem Innovation Alliance" brings together Chinese developers of large language models (LLMs) and AI chip manufacturers. "This is an innovative ecosystem that connects the complete technology chain from chips to models to infrastructure," said Zhao Lidong, CEO of Enflame, one of the participating chipmakers. Other manufacturers of graphics processing units (GPUs) in the alliance include Huawei, Biren, and Moore Threads, which have been hit by U.S. sanctions that block them from purchasing advanced tech made with U.S. know-how. The alliance was announced by StepFun, an LLM developer. A second alliance, the Shanghai General Chamber of Commerce AI Committee, aims to "promote the deep integration of AI technology and industrial transformation." Participants include SenseTime, also sanctioned by the U.S. and which has pivoted from facial recognition technology to LLMs. Others are StepFun and another LLM developer, MiniMax, as well as chipmakers Metax and Iluvatar CoreX. Huawei's system design capabilities have meant that it has been able to use more chips and system-level innovations to compensate for weaker individual chip performance, SemiAnalysis said. At least six other Chinese computing firms showcased similar "clustering" chip technology. Metax demonstrated an AI supernode featuring 128 C550 chips designed to support large-scale liquid-cooled data centre requirements. Other events included Tencent's unveiling of its open-source Hunyuan3D World Model 1.0, which the company said enables users to generate interactive 3D environments through text or image prompts. Baidu announced what it said was next-generation "digital human" technology that helps businesses to create virtual livestreamers. It features "cloning technology" that can replicate a human's voice, tone, and body language from just 10 minutes of sample footage. Alibaba was among those announcing AI glasses. Its Quark AI Glasses are powered by its Qwen AI model and are due to be released in China by the end of 2025. They will allow users to access the tech giant's map service for easy navigating and to use Alipay by scanning QR codes with voice commands.