Self-charging robots pave way for never-ending revolt

If you've always been one to scoff at the idea of a robot uprising, then this story out of China might give you pause for thought.
It's about what is apparently the first-ever humanoid robot that's able to change its own battery pack. Yes, you read that right — a humanoid robot that's able to realize when it's running low on juice, and then go through the process of swapping out its battery for a fully charged one. All by itself … without any human intervention.
The robot, called Walker S2, is built by Shenzhen-based Ubtech, so we know who to blame if those bots do ever take over.
Ubtech released a video showing Walker S2 autonomously swapping out its battery, a process that lets it get straight back to work, whether that involves subjugating humans or, hopefully, something a little less alarming, like explaining meal times to newly arrived hotel guests.
Walker S2, which has been in development since 2015, is 64 inches tall (162 cm), tips the scales at 94.8 pounds (43 kg), and runs on a 48-volt lithium battery.
Each fully charged battery gives the robot enough power to walk for two hours or stand for four hours. Once the robot slots a depleted battery into the charger, that battery takes about 90 minutes to fully recharge.
Ubtech's humanoid robot is still in the research and development stage, though it is also being tested in a range of commercial and industrial settings, as well as in education as a platform for teaching robotics and AI.
The company's aim is for the robot to integrate smoothly into human environments, enhancing human capabilities and improving people's quality of life, particularly in healthcare, education, and the service industries.
The technology powering humanoid robots has advanced rapidly in the last few years, with major developments in AI helping to make them smarter than ever.
Tech companies in China, the U.S., and beyond are racing to produce the most sophisticated robots, machines that not only move in a human-like way but also think like a human. Clear progress is being made on both physical movement and decision-making.
But a full-on takeover? Well, any chance of that still feels a long way off.

Related Articles

Trump signs executive orders to fast-track data center construction, target 'woke' AI

The Hill

President Trump signed a trio of executive orders related to artificial intelligence (AI) on Wednesday, focusing on boosting data center construction and the adoption of American technology while targeting 'woke' AI. The three executive orders seek to fast-track permitting for data centers, promote the export of the American technology stack abroad and bar 'woke' AI systems from federal contracting.

'Under this administration, our innovation will be unmatched, and our capabilities will be unrivaled,' Trump said at an AI summit hosted by the Hill & Valley Forum and the 'All-In' podcast, where he signed the orders Wednesday evening. 'With the help of many of the people in this room, America's ultimate triumph will be absolutely unstoppable,' he continued. 'We will be unstoppable as a nation. Again, we're way ahead, and we want to stay that way.'

The orders accompany the Trump administration's AI Action Plan released earlier Wednesday, which lays out a three-pronged approach to 'winning the race' on AI. In the framework, the administration called to cut federal and state AI regulations in an effort to boost innovation, pushed to expedite the buildout of AI infrastructure and sought to encourage the adoption of American technology abroad. Each of Trump's executive orders seeks to target at least some of the policy goals detailed in his AI action plan.

The data center order calls on the Council on Environmental Quality to establish new categorical exclusions for certain data center projects that 'normally do not have a significant effect on the human environment.' It also seeks to identify projects that qualify for expedited permitting review. 'My administration will use every tool at our disposal to ensure that the United States can build and retain the largest, most powerful and most advanced AI infrastructure anywhere on the planet,' Trump said Wednesday evening.

Meanwhile, his AI export order calls for the creation of an American AI Exports Program that will develop full-stack AI export packages, featuring U.S. chips, AI models and applications. Trump contrasted his approach with that of former President Biden, who released the AI diffusion rule at the tail end of his presidency, placing caps on chip sales to most countries around the world. The rule faced pushback from the semiconductor industry and was repealed by the Trump administration in May.

The third order targeting 'woke' AI seeks to limit agencies from signing contracts for AI models unless they are considered 'truth seeking' and maintain 'ideological neutrality,' which it defines as those that 'do not manipulate responses in favor of ideological dogmas such as DEI.'

Trump's AI Action Plan Is Here: 5 Key Takeaways

CNET

The Trump administration on Wednesday laid out the steps it plans to take to ensure "global AI dominance" for the US, with an AI Action Plan that calls for cutting regulations to speed up the development of artificial intelligence tools and the infrastructure to power them. Critics said the plan is a handout to tech and fossil fuel companies, slashing rules that could protect consumers, prevent pollution and fight climate change. Though the plan itself isn't binding (it includes dozens of policy recommendations), Trump did sign three executive orders to put some of these steps into action.

The changes and proposals follow how the Trump administration has approached AI and technology over the past six months -- giving tech companies a largely free hand; focusing on beating China; and prioritizing the construction of data centers, factories and fossil fuel power plants over environmental regulations. The administration is seizing on the moment created by the arrival of ChatGPT less than three years ago and the ensuing wave of generative AI efforts by Google, Meta and others.

"My administration will use every tool at our disposal to ensure that the United States can build and maintain the largest and most powerful and advanced AI infrastructure anywhere on the planet," Trump said during remarks Wednesday evening at a summit presented by the Hill and Valley Forum and the All-In Podcast. He signed the three executive orders at the event.

The administration and tech industry groups touted the plan as a framework for US success in a race against China. "President Trump's AI Action Plan presents a blueprint to usher in a new era of US AI dominance," Jason Oxman, president and CEO of the tech industry trade group ITI, said in a statement.

Consumer groups said the plan focuses on deregulation and would hurt consumers by reducing the rules that could protect them. "Whether it's promoting the use of federal land for dirty data centers, giving the FTC orders to question past cases, or attempting to revive some version of the soundly defeated AI moratorium by tying federal funds to not having 'onerous regulation' according to the FCC, this is an unwelcome distraction at a critical time for government to get consumer protection right with increasing AI use and abuse," Ben Winters, director of AI and privacy at the Consumer Federation of America, said in a statement.

Here's a look at the proposals in the plan.

Slashing regulations for AI infrastructure

The plan says AI growth will require infrastructure, including chip factories, data centers and more energy generation. And it blames environmental regulations for getting in the way. In response, it proposes exemptions for AI-related construction from certain environmental regulations, including those aimed at protecting clean water and air. It also suggests making federal lands available for data center construction and related power plants.

To provide energy for all those data centers, the plan calls for steps to prevent the "premature decommissioning of critical power generation resources." This likely refers to keeping coal-fired power plants and other mostly fossil-fuel-driven infrastructure online for longer. In his remarks, Trump specifically touted his support for coal and nuclear power plants. The administration also called to prioritize the connection of new "reliable, dispatchable power sources" to the grid and specifically named nuclear fission and fusion and advanced geothermal generation.

Earlier this month, the president signed a bill that would end many tax credits and incentives for renewable energy -- wind and solar -- years earlier than planned. Wind and solar make up the bulk of the new energy generation being added to the US grid right now.

"This US AI Action Plan doesn't just open the door for Big Tech and Big Oil to team up, it unhinges and removes any and all doors -- it opens the floodgates, continuing to kneecap our communities' rights to protect ourselves," KD Chavez, executive director of the Climate Justice Alliance, said in a statement. "With tech and oil's track records on human rights and their role in the climate crisis, and what they are already doing now to force AI dominance, we need more corporate and environmental oversight, not less."

Fewer rules around AI technology

Congress ended up not including a moratorium on state AI rules in the recently passed tax and spending bill, but efforts to cut regulations around AI continue from the executive branch in the action plan. "AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level," the plan says.

The plan recommends that several federal agencies review whether existing or proposed rules would interfere with the development and deployment of AI. The feds would consider whether states' regulatory climate is favorable for AI when deciding to award funding. Federal Trade Commission investigations and orders would be reviewed to determine that they don't "advance theories of liability that unduly burden AI innovation."

Those rule changes could undermine efforts to protect consumers from problems caused by AI, critics said. "Companies -- including AI companies -- have a legal obligation to protect their products from being used for harm," Justin Brookman, director of tech policy at Consumer Reports, said in a statement. "When a company makes design choices that increase the risk their product will be used for harm, or when the risks are particularly serious, companies should bear legal responsibility."

Ideology and large language models

The plan proposes some steps around ensuring AI "protects free speech and American values," further steps in the Trump administration's efforts to roll back federal policies around what it refers to as "diversity, equity and inclusion," along with references to the problems of misinformation and climate change. It calls for eliminating references to those items in the National Institute of Standards and Technology's AI Risk Management Framework. Federal agencies would only be allowed to contract with AI developers who "ensure that their systems are objective and free from top-down ideological bias." The Trump administration has recently announced contracts of up to $200 million each to developers Anthropic, Google, OpenAI and xAI. Grok, the model from Elon Musk's xAI, has recently come under fire for spouting antisemitism and hate speech.

Dealing with workforce challenges

The plan acknowledges that AI will "transform how work gets done across all industries and occupations, demanding a serious workforce response to help workers navigate that transition" and recommends actions by federal agencies, including the Department of Labor, intended to mitigate the harms of AI-driven job displacement. The plan calls for the Bureau of Labor Statistics, Census Bureau and Bureau of Economic Analysis to monitor how AI affects the labor market using data already collected. An AI Workforce Research Hub under the Department of Labor would lead monitoring and issue policy recommendations. Most of the actual plans to help workers displaced by AI involve retraining those workers for other jobs or helping states do the same. Other jobs-related recommendations are aimed at boosting the kinds of jobs needed for all those data centers and chip manufacturing plants -- like electricians and HVAC technicians.

These plans and others to encourage AI literacy and AI use in education drew praise from the Software & Information Industry Association, a tech industry trade group. "These are key components for building trust and ensuring all communities can participate in and benefit from AI's potential," Paul Lekas, SIIA's senior vice president of global public policy, said in a statement.

More AI in government

The plan envisions more use of AI by the federal government. A talent exchange program would allow employees with experience or talent in AI to be detailed to other agencies in need. The General Services Administration would create a toolbox of AI models that would help agencies see which models to choose from and use cases in other parts of the government. Every government agency would also be required to ensure employees who could use AI in their jobs have access to and training for AI tools.

Many recommendations focus specifically on the Department of Defense, including creating a virtual proving ground for AI and autonomous systems. AI companies have already been signing contracts with the DOD to develop AI tools for the military.

A new AI coding challenge just published its first results – and they aren't pretty

TechCrunch

A new AI coding challenge has revealed its first winner — and set a new bar for AI-powered software engineers. On Wednesday at 5pm PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner was a Brazilian prompt engineer named Eduardo Rocha de Andrade, who will receive the $50,000 prize. But more surprising than the win was his final score: he won with correct answers to just 7.5% of the questions on the test.

'We're glad we built a benchmark that is actually hard,' said Konwinski. 'Benchmarks should be hard if they're going to matter.' Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.

Like the well-known SWE-Bench system, the K Prize pits models against flagged issues from GitHub to gauge how well they can handle real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a 'contamination-free version of SWE-Bench,' using a timed entry system to guard against any benchmark-specific training. For round one, models were due by March 12th. The K Prize organizers then built the test using only GitHub issues flagged after that date.

The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier 'Verified' test and 34% on its harder 'Full' test. Konwinski still isn't sure whether the disparity is due to contamination on SWE-Bench or just the challenge of collecting new issues from GitHub, but he expects the K Prize project to answer the question soon. 'As we get more runs of the thing, we'll have a better sense,' he told TechCrunch, 'because we expect people to adapt to the dynamics of competing on this every few months.'

It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available – but with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI's growing evaluation problem.

'I'm quite bullish about building new tests for existing benchmarks,' says Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. 'Without such experiments, we can't actually tell if the issue is contamination, or even just targeting the SWE-Bench leaderboard with a human in the loop.'

For Konwinski, it's not just a better benchmark, but an open challenge to the rest of the industry. 'If you listen to the hype, it's like we should be seeing AI doctors and AI lawyers and AI software engineers, and that's just not true,' he says. 'If we can't even get more than 10% on a contamination-free SWE-Bench, that's the reality check for me.'
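The timed-entry mechanic is simple enough to sketch. Below is a minimal, hypothetical Python illustration of how a contamination-free test set can be assembled by keeping only GitHub issues opened after a submission deadline; the deadline's year, the helper names, and the sample data are assumptions for illustration, not the K Prize's actual pipeline, though the "created_at" timestamp mirrors the field GitHub's issues API actually returns.

# Minimal sketch of the cutoff-date filtering behind a "contamination-free" benchmark.
# Hypothetical: the deadline year, the helper names, and the sample issues are assumptions,
# not the K Prize's actual pipeline.
from datetime import datetime, timezone

SUBMISSION_DEADLINE = datetime(2025, 3, 12, tzinfo=timezone.utc)  # entries frozen by this date

def is_eligible(issue: dict) -> bool:
    """Keep only issues opened after the deadline, so no entrant could have trained on them."""
    created = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
    return created > SUBMISSION_DEADLINE

def build_test_set(issues: list[dict]) -> list[dict]:
    """Filter already-fetched GitHub issues down to the post-deadline ones."""
    return [issue for issue in issues if is_eligible(issue)]

# Example usage with GitHub-style ISO-8601 timestamps:
sample = [
    {"id": 1, "created_at": "2025-02-28T10:00:00Z"},  # opened before the deadline: excluded
    {"id": 2, "created_at": "2025-04-02T09:30:00Z"},  # opened after the deadline: included
]
print([i["id"] for i in build_test_set(sample)])  # -> [2]

The point of the cutoff is that no model frozen before the deadline could have seen, let alone trained on, the issues that end up in the test.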
