
Terrifying moment robot 'wakes up' and attacks humans while 'trying to escape'
Footage taken inside a factory in China has shown the moment a humanoid robot seemed to gain consciousness and began thrashing its arms around, apparently trying to escape its restraints
Chilling CCTV footage has shown the moment a robot appeared to wake up in a factory - before violently thrashing its arms about in a bid to break free from its restraints.
The android was being transferred via a crane when it seemed to suddenly become aware of its surroundings. In scenes reminiscent of sci-fi movies, it reacts with fury and attempts to free itself as nearby human workers move away and cower in fear. Its wild thrashing sees it knock an expensive computer to the floor, while other nearby items are sent flying with the sheer force.
The footage ends with the handlers reaching towards the robot, presumably in an attempt to shut it off before it could break free.
One person said online of the footage: 'We are cooked. No wires pulling, just pure chaos.'
Another added: 'This stuff right here is why robots freak me out. This is one technology I think we have taken it too far.'
Another said: 'I've seen this film, it doesn't end well for mankind and, frankly, I don't see Michael Biehn around to save us this time. Seriously, what the heck was going on there?'
One more joked: 'It's like I'm the only person in the world who saw Terminator.'
It comes after an official US study last month claimed China could be building an army of soldiers as hard to kill as Arnold Schwarzenegger's Terminator. The US National Security Commission on Emerging Biotechnology (NSCEB) predicts China could produce legions of 'genetically enhanced PLA super-soldiers' which would fuse human and artificial intelligence, making them next to impossible to destroy.
The 'human machine team' could be ready as early as the 2040s. The idea was the brainchild of a rogue Chinese scientist who created genetically modified babies and was jailed - but is now back at work, as the report warns the time to act is now.
The new report, Charting the Future of Biotechnology, says: 'At the outset of the First World War, the United States did not yet fully appreciate how airplanes would rapidly change the nature of war.
'But once we understood the significance of aviation for force projection, reconnaissance, logistical support, and beyond, we dominated the skies. Similarly, the full impact of the biotechnology revolution will not be clear until it arrives.
'One thing is certain: it is coming. There will be a ChatGPT moment for biotechnology, and if China gets there first, no matter how fast we run, we will never catch up.'

Related Articles


The Guardian
2 hours ago
‘Nobody wants a robot to read them a story!' The creatives and academics rejecting AI – at work and at home
The novelist Ewan Morrison was alarmed, though amused, to discover he had written a book called Nine Inches Pleases a Lady. Intrigued by the limits of generative artificial intelligence (AI), he had asked ChatGPT to give him the names of the 12 novels he had written. 'I've only written nine,' he says. 'Always eager to please, it decided to invent three.' The 'nine inches' from the fake title it hallucinated was stolen from a filthy Robert Burns poem. 'I just distrust these systems when it comes to truth,' says Morrison. He is yet to write Nine Inches – 'or its sequel, Eighteen Inches', he laughs. His actual latest book, For Emma, imagining AI brain-implant chips, is about the human costs of technology. Morrison keeps an eye on the machines, such as OpenAI's ChatGPT, and their capabilities, but he refuses to use them in his own life and work. He is one of a growing number of people who are actively resisting: people who are terrified of the power of generative AI and its potential for harm and don't want to feed the beast; those who have just decided that it's a bit rubbish, and more trouble than it's worth; and those who simply prefer humans to robots. Go online, and it's easy to find AI proponents who dismiss refuseniks as ignorant luddites – or worse, smug hipsters. I possibly fall into both camps, given that I have decidedly Amish interests (board games, gardening, animal husbandry) and write for the Guardian. Friends swear by ChatGPT for parenting advice, and I know someone who uses it all day for work in her consultancy business, but I haven't used it since playing around after it launched in 2022. Admittedly ChatGPT might have done a better job, but this piece was handcrafted using organic words from my artisanal writing studio. (OK, I mean bed.) I could have assumed my interviewees' thoughts from plundering their social media posts and research papers, as ChatGPT would have done, but it was far more enjoyable to pick up the phone and talk, human to human. 
Two of my interviewees were interrupted by their pets, and each made me laugh in some way (full disclosure: AI then transcribed the noise). On X, where Morrison sometimes clashes with AI enthusiasts, a common insult is 'decel' (decelerationist), but it makes him laugh when people think he's the one who isn't keeping up. 'There's nothing [that stops] accelerationism more than failure to deliver on what you promised. Hitting a brick wall is a good way to decelerate,' he says. One recent study found that AI answered more than 60% of queries inaccurately. Morrison was drawn into the argument by what he would now call 'alarmist fears about the potential for superintelligence and runaway AI. The more I've got into it, the more I realise that's a fiction that's been dangled before the investors of the world, so they'll invest billions – in fact, half a trillion – into this quest for artificial superintelligence. It's a fantasy, a product of venture capital gone nuts.' There are also copyright violations – generative AI is trained on existing material – that threaten him as a writer, and his wife, screenwriter Emily Ballou. In the entertainment industry, he says, people are using 'AI algorithms to determine what projects get the go-ahead, and that means we're stuck remaking the past. The algorithms say 'More of the same', because it's all they can do.' Morrison says he has a long list of complaints. 'They've been stacking up over the past few years.' He is concerned about the job losses (Bill Gates recently predicted AI would lead to a two-day work week). Then there are 'tech addiction, the ecological impact, the damage to the education system – 92% of students are now using AI'. He worries about the way tech companies spy on us to make AI personalised, and is horrified at AI-enabled weapons being used in Ukraine. 'I find that ethically revolting.' Others cite similar reasons for not using AI. 
April Doty, an audiobook narrator, is appalled at the environmental cost – the computational power required to perform an AI search and answer is huge. 'I'm infuriated that you can't turn off the AI overviews in Google search,' she says. 'Whenever you look anything up now you're basically torching the planet.' She has started to use other search engines. 'But, more and more, we're surrounded by it, and there's no off switch. That makes me angry.' Where she still can, she says, 'I'm opting out of using AI.' In her own field, she is concerned about the number of books that are being 'read' by machines. Audible, the Amazon-owned audiobook provider, has just announced it will allow publishers to create audiobooks using its AI technology. 'I don't know anybody who wants a robot to read them a story, but I am concerned that it is going to ruin the experience to the point where people don't want to subscribe to audiobook platforms any more,' says Doty. She hasn't lost jobs to AI yet but other colleagues have, and chances are, it will happen. AI models can't 'narrate', she says. 'Narrators don't just read words; they sense and express the feelings beneath the words. AI can never do this job because it requires decades of experience in being a human being.' Emily M Bender, professor of linguistics at the University of Washington and co-author of a new book, The AI Con, has many reasons why she doesn't want to use large language models (LLMs) such as ChatGPT. 'But maybe the first one is that I'm not interested in reading something that nobody wrote,' she says. 'I read because I want to understand how somebody sees something, and there's no 'somebody' inside the synthetic text-extruding machines.' It's just a collage made from lots of different people's words, she says. Does she feel she is being 'left behind', as AI enthusiasts would say? 'No, not at all. My reaction to that is, 'Where's everybody going?'' She laughs as if to say: nowhere good. 
'When we turn to synthetic media rather than authentic media, we are losing out on human connection,' says Bender. 'That's both at a personal level – what we get out of connecting to other people – and in terms of strength of community.' She cites Chris Gilliard, the surveillance and privacy researcher. 'He made the very important point that you can see this as a technological move by the companies to isolate us from each other, and to set things up so that all of our interactions are mediated through their products. We don't need that, for us or our communities.' Despite Bender's well-publicised position – she has long been a high-profile critic of LLMs – incredibly, she has seen students turn in AI-generated work. 'That's very sad.' She doesn't want to be policing, or even blaming, students. 'My job is to make sure students understand why it is that turning to a large language model is depriving themselves of a learning opportunity, in terms of what they would get out of doing the work.' Does she think people should boycott generative AI? 'Boycott suggests organised political action, and sure, why not?' she says. 'I also think that people are individually better off if they don't use them.' Some people have so far held out, but are reluctantly realising they may end up using it. Tom, who works in IT for the government, doesn't use AI in his tech work, but found colleagues were using it in other ways. Promotion is partly decided on annual appraisals they have to write, and he had asked a manager whose appraisal had impressed him how he'd done it, thinking he'd spent days on it. 'He said, 'I just spent 10 minutes – I used ChatGPT,'' Tom recalls. 'He suggested I should do the same, which I don't agree with. I made that point, and he said, 'Well, you're probably not going to get anywhere unless you do.'' Using AI would feel like cheating, but Tom worries refusing to do so now puts him at a disadvantage. 
'I almost feel like I have no choice but to use it at this point. I might have to put morals aside.' Others, despite their misgivings, limit how they use it, and only for specific tasks. Steve Royle, professor of cell biology at the University of Warwick, uses ChatGPT for the 'grunt work' of writing computer code to analyse data. 'But that's really the limit. I don't want it to generate code from scratch. When you let it do that, you spend way more time debugging it afterwards. My view is, it's a waste of time if you let it try and do too much for you.' Accurate or not, he also worries that if he becomes too reliant on AI, his coding skills will atrophy. 'The AI enthusiasts say, 'Don't worry, eventually nobody will need to know anything.' I don't subscribe to that.' Part of his job is to write research papers and grant proposals. 'I absolutely will not use it for generating any text,' says Royle. 'For me, in the process of writing, you formulate your ideas, and by rewriting and editing, it really crystallises what you want to say. Having a machine do that is not what it's about.' Generative AI, says film-maker and writer Justine Bateman, 'is one of the worst ideas society has ever come up with'. She says she despises how it incapacitates us. 'They're trying to convince people they can't do the things they've been doing easily for years – to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies – to write that for you.' We will get to the point, she says with a grim laugh, 'that you will essentially become just a skin bag of organs and bones, nothing else. You won't know anything and you will be told repeatedly that you can't do it, which is the opposite of what life has to offer. Capitulating all kinds of decisions like where to go on vacation, what to wear today, who to date, what to eat. People are already doing this. 
You won't have to process grief, because you'll have uploaded photos and voice messages from your mother who just died, and then she can talk to you via AI video call every day. One of the ways it's going to destroy humans, long before there's a nuclear disaster, is going to be the emotional hollowing-out of people.' She is not interested. 'It is the complete opposite direction of where I'm going as a film-maker and author. Generative AI is like a blender – you put in millions of examples of the type of thing you want and it will give you a Frankenstein spoonful of it.' It's theft, she says, and regurgitation. 'Nothing original will come out of it, by the nature of what it is. Anyone who uses generative AI, who thinks they're an artist, is stopping their creativity.' Some studios, such as the animation company Studio Ghibli, have sworn off using AI, but others appear to be salivating at the prospect. In 2023, Dreamworks founder Jeffrey Katzenberg said AI would cut the costs of its animated films by 90%. Bateman thinks audiences will tire of AI-created content. 'Human beings will react to this in the way they react to junk food,' she says. Deliciously artificial to some, if not nourishing – but many of us will turn off. Last year she set up an organisation, Credo 23, and a film festival, to showcase films made without AI. She likens it to an 'organic stamp for films, that tells the audience no AI was used.' People, she says, will 'hunger for something raw, real and human'. In everyday life, Bateman is trying 'to be in a parallel universe, where I'm trying to avoid [AI] as much as possible.' It's not that she is anti-tech, she stresses. 'I have a computer science degree, I love tech. I love salt, too, but I don't put it on everything.' In fact, everyone I speak to is a technophile in some way. Doty describes herself as 'very tech-forward', but she adds that she values human connection, which AI is threatening. 
'We keep moving like zombies towards a world that nobody really wants to live in.' Royle codes and runs servers, but also describes himself as a 'conscientious AI objector'. Bender specialises in computational linguistics and was named by Time as one of the top 100 people in AI in 2023. 'I am a technologist,' she says, 'but I believe that technology should be built by communities for their own purposes, rather than by large corporations for theirs.' She also adds, with a laugh: 'The Luddites were awesome! I would wear that badge with pride.' Morrison, too, says: 'I quite like the Luddites – people standing up to protect the jobs that keep their families and their communities alive.'


Geeky Gadgets
17 hours ago
The Ultimate Guide to AI Tools: What's Worth Your Money?
What if the tools you rely on today could work smarter, faster, and more intuitively, saving you hours every week? With the explosion of AI tools flooding the market, the promise of greater productivity has never been more enticing. Yet the sheer number of options can feel overwhelming, and not every tool delivers on its claims. From privacy concerns to hidden limitations in free versions, finding the right AI solution often feels like navigating a maze. Whether you're a professional juggling tight deadlines or a creative looking to streamline your workflow, understanding which tools are truly worth your investment is no longer a luxury, it's a necessity.

In this comprehensive how-to by Rick Mulready, you'll uncover the strengths, weaknesses, and unique features of five leading AI tools: ChatGPT, Claude, Google Gemini, Perplexity, and Grok. What makes this guide stand out is its focus on helping you weigh value for money against your specific needs, whether that's privacy, versatility, or seamless integration with existing tools. You'll also gain insights into how these platforms handle tasks like brainstorming, coding, or managing large-scale projects. By the end, you'll not only know which tools deserve your attention but also feel empowered to make an informed decision that aligns with your goals.

Top 5 AI Tools Compared

ChatGPT: A Versatile Powerhouse

ChatGPT, created by OpenAI, stands out as a highly versatile AI tool capable of handling a wide range of tasks. From brainstorming ideas and analyzing documents to reasoning and even generating images, ChatGPT offers robust functionality. The paid version significantly enhances its utility with a 1-million-token context window, making it particularly effective for extensive research and long-term projects.
The free version, while functional, is more restrictive, offering a smaller context window and limiting users to 10 messages every 3 hours. A notable drawback is ChatGPT's tendency to 'hallucinate', or produce inaccurate or fabricated information. Privacy is another critical factor: by default, user conversations are used to train OpenAI's models, though opting out is possible. For those concerned about privacy, a temporary chat option is available. If you frequently require advanced features or extended access, upgrading to the paid version is a practical choice.

Claude: Ethical AI with Privacy at Its Core

Anthropic's Claude AI emphasizes ethical design and privacy, making it a standout option for users who prioritize data security. Claude excels in nuanced conversations, content creation, and coding tasks. Unlike many competitors, it does not use user data for training by default, offering stronger privacy protections. Its context window, while substantial at 200,000 tokens, is smaller than some alternatives, and it lacks the ability to generate images. The free version of Claude is limited in scope, which makes the paid plan more appealing for users who require advanced coding support or seamless integration with Google apps. If maintaining privacy and ethical AI practices are your top priorities, Claude is a compelling choice that aligns with these values.

Google Gemini: Multimodal AI for Complex Tasks

Google Gemini is a multimodal AI tool that integrates seamlessly with Google's ecosystem, making it a natural fit for users already relying on Google apps. Its standout feature is a 1-million-token context window, which is particularly useful for processing large documents.
Additionally, Gemini supports multimodal capabilities, allowing it to handle and generate content across various formats, including text and images. Despite its strengths, Gemini has notable limitations. It struggles with creative content generation and often delivers responses that lack personality. Accuracy issues and limited features in the free version further detract from its appeal. Privacy is governed by Google's policies, requiring users to opt out of data usage for training purposes. Gemini is best suited for users who frequently work with large documents or rely heavily on Google's suite of applications.

Perplexity: Quick, Citation-Supported Searches

Perplexity distinguishes itself with its focus on real-time information retrieval and search results backed by citations. This feature makes it an excellent choice for users who need quick, reliable answers supported by credible sources. The paid version unlocks access to advanced AI models, further enhancing its capabilities. However, Perplexity has its limitations. It lacks depth in research responses and restricts free searches to just three per day. Privacy concerns also arise from its default query logging, which users must manually disable. Perplexity is particularly well-suited for professionals or academics who require frequent, citation-supported searches to inform their work.

Grok: Real-Time Insights for X Users

Grok is uniquely tailored for users of X (formerly Twitter), offering real-time insights and a distinctive, edgy personality. It excels in providing up-to-date social media data, making it a valuable tool for users who rely on such information. However, its niche focus limits its broader applicability, and its accuracy can sometimes be inconsistent. Additionally, its limited integrations restrict its utility for users seeking a more versatile AI tool. Privacy is another consideration, as users must opt out of data usage for training purposes.
A private chat option is available for those who prioritize confidentiality. Grok is best suited for heavy X users who value real-time data integration and social media insights.

How to Choose the Right AI Tool

Selecting the right AI tool depends on your unique requirements and priorities. Each tool offers distinct advantages and limitations, making it essential to evaluate how their features align with your goals. Below is a breakdown of the key strengths of each tool to help you decide:

ChatGPT: A versatile option for brainstorming, research, and extended projects. Best for users who need advanced features and frequent access.
Claude: Prioritizes privacy and ethical AI design. Ideal for coding tasks and users who value strong data protections.
Google Gemini: Excels in handling large documents and integrates seamlessly with Google apps. Best for users working within the Google ecosystem.
Perplexity: Perfect for quick, citation-backed searches. Suited for professionals and academics needing reliable, real-time information.
Grok: Designed for X users who rely on real-time social media insights. Best for niche use cases involving social media data.

Making an Informed Decision

No single AI tool can meet every need, so understanding your priorities is crucial. Whether you value versatility, privacy, multimodal capabilities, or real-time insights, there is an AI tool designed to support your objectives.
By carefully assessing your workflows and the features most relevant to your tasks, you can confidently choose the AI solution that best aligns with your goals.
Media Credit: Rick Mulready


The Independent
20 hours ago
Chinese paraglider's death-defying thundercloud video may partly be fake
A viral video of a Chinese paraglider claiming he was swept up into a thundercloud and dragged 8km into the sky could be partly fake. Peng Yujiang, 55, a certified B-level paraglider, claimed in the video that he had a narrow escape after reaching an altitude of 8,598m without oxygen. The video was aired across China and distributed internationally by state broadcaster CCTV, and it quickly went viral. The video supposedly showed a sudden surge of wind pulling Mr Peng up into a rapidly forming cumulonimbus cloud and trapping him there, leaving him to face icy conditions with his face exposed and without oxygen. The paraglider said he suspected he briefly lost consciousness during his eventual descent. But experts who examined the video, first shared on Douyin, China's TikTok, on 24 May, said it likely employed artificial intelligence to fake some of the footage, Reuters reported. The first five seconds of Mr Peng's video contained images generated by AI, American digital security firm GetReal said, adding that it was 'fairly confident' about its findings. GetReal as well as other paragliders noted that the video showed Mr Peng's legs initially dangling without the insulating cocoon shown later. In another concrete inconsistency, the paraglider's helmet appeared white and then black. 'Nobody intentionally lets themselves be sucked into a thunderstorm cloud in an attempt to break a record; it is something that any sane paragliding pilot tries to avoid at all costs,' said Jakub Havel, a Czech paraglider who runs XContest, a popular website for paragliders. Mr Peng's flight should not be considered a record, he said, pointing out that the Chinese paraglider had recorded and then deleted his flight log on XContest while his other flights remained on the site. At least four paragliders interviewed by Reuters challenged Mr Peng's claim that the flight had been an unavoidable accident.
The experts, however, said it was possible that the paraglider actually went up to an altitude of 8,598m and survived. Pilots said there were reasons to question Mr Peng's account of the flight as a fluke accident, saying he was either attempting an unauthorised high ascent or should have seen the risk. Storm clouds like the one Mr Peng flew in 'don't just appear above your head and hoover you into space' and 'build over a period of time', Daniel Wainwright, a flight instructor in Australia, told Reuters. 'He shouldn't have been flying.' The record for a planned flight is held by French pilot Antoine Girard, who flew at an altitude of 8,407m over a stretch of the Himalayas in 2021. Brad Harris, president of the Tasmanian Hang Gliding and Paragliding Association, said the specialised heavy mittens shown in the video seemed to undercut Mr Peng's claim that he had not intended to take off. He believed Mr Peng could have made up the accidental take-off to avoid sanction for entering restricted airspace.