Latest news with #FrankaFR3


Hans India
10 hours ago
Google's Gemini AI Now Powers Robots Without Internet Access
New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.

Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. 'It's small and efficient enough to run directly on a robot,' she told The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.'

Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. 'We're actually quite surprised at how strong this on-device model is,' Parada added, pointing to its effectiveness even with minimal training. The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones.

Initially developed using Google's ALOHA robot, it has since been adapted to other robotic systems, including Apptronik's Apollo humanoid and a bi-arm Franka FR3 setup. Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without the latency caused by cloud interaction. This is a key differentiator compared to other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing.

The local-processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' Parada noted, emphasizing the model's flexibility and adaptability.

However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools. Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

This announcement follows Google's recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, the app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face. Together, these launches signal Google's broader move to decentralize AI, bringing high-performance intelligence directly to user devices, whether phones or robots.
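The offline pattern the article attributes to the AI Edge Gallery, a model loaded and run entirely on the device through TensorFlow Lite, is a standard one. The sketch below shows that general pattern with the TensorFlow Lite Python interpreter; the `model.tflite` filename and the dummy input are placeholders, not the actual Gemma 3 1B packaging or the app's internals.

```python
# Minimal sketch of on-device TensorFlow Lite inference: the model file is
# loaded and invoked locally, with no network calls. "model.tflite" is a
# placeholder, not the real Gemma 3 1B distribution.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's expected shape and dtype.
shape = input_details[0]["shape"]
dtype = input_details[0]["dtype"]
dummy_input = np.zeros(shape, dtype=dtype)

interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()  # Runs entirely on the local device.

output = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", output.shape)
```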


Indian Express
12 hours ago
- Business
Google's new Gemini Robotics On-Device AI model runs directly on robots: Watch it in action
Google's DeepMind division on Tuesday, June 24, released a new AI model called Gemini Robotics On-Device that runs locally on robotic devices. In a blog post, Google says the new AI model has been optimised to run efficiently on the robot and shows 'strong general-purpose dexterity and task generalisation.' The new offline AI model builds on the company's Gemini Robotics model, which the tech giant unveiled earlier this year in March.

The Gemini Robotics On-Device model can control a robot's movement and, like ChatGPT, can understand natural language prompts. Since it works without an active internet connection, Google says it is particularly useful for latency-sensitive applications or in areas where there is no connectivity. Google explains that Gemini Robotics On-Device, designed for robots with two arms, is engineered to require 'minimal computational resources' and can complete highly dexterous tasks such as folding clothes and unzipping bags. You can watch the new AI model in action in the video below.

Compared to other on-device alternatives, Google claims that Gemini Robotics On-Device outperforms the competition when it comes to completing complex multi-step instructions and challenging out-of-distribution tasks. Coming to the benchmarks, it looks like Google's new offline model comes close to its cloud-based offering.

The company says its new model, initially trained to work with ALOHA robots, has since been adapted and successfully run on a bi-arm Franka FR3 robot and an Apollo humanoid as well. The tech giant said that on the bi-arm Franka FR3, the model was able to follow general-purpose instructions and handle previously unseen objects and scenes, like executing industrial belt assembly. As for Apollo, the model allowed the humanoid robot to manipulate different, previously unseen objects in a general manner. Developers can try out Gemini Robotics On-Device using the software development kit (SDK).

Google isn't the only tech giant working on AI models for robots. At GTC 2025, NVIDIA unveiled GR00T N1, an AI model for humanoid robots, while Hugging Face is developing its very own robot powered by an in-house, open-source model.

Yahoo
a day ago
Google rolls out new Gemini model that can run on robots locally
Google DeepMind on Tuesday released a new language model called Gemini Robotics On-Device that can run tasks locally on robots without requiring an internet connection. Building on the company's previous Gemini Robotics model, which was released in March, Gemini Robotics On-Device can control a robot's movements. Developers can control and fine-tune the model to suit various needs using natural language prompts.

In benchmarks, Google claims the model performs at a level close to the cloud-based Gemini Robotics model. The company says it outperforms other on-device models in general benchmarks, though it didn't name those models. In a demo, the company showed robots running this local model doing things like unzipping bags and folding clothes.

Google says that while the model was trained for ALOHA robots, it later adapted it to work on a bi-arm Franka FR3 robot and the Apollo humanoid robot by Apptronik. Google claims the bi-arm Franka FR3 was successful in tackling scenarios and objects it hadn't 'seen' before, like doing assembly on an industrial belt.

Google DeepMind is also releasing a Gemini Robotics SDK. The company said developers can train robots on new tasks by showing them 50 to 100 demonstrations, using these models on the MuJoCo physics simulator.

Other AI model developers are also dipping their toes in robotics. Nvidia is building a platform to create foundation models for humanoids; Hugging Face is not only developing open models and datasets for robotics, it is actually working on robots too; and Mirae Asset-backed Korean startup RLWRLD is working on creating foundational models for robots.
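For readers unfamiliar with MuJoCo, the physics simulator mentioned above, here is a minimal sketch of loading a scene and stepping it with the standard `mujoco` Python bindings. It illustrates the simulator itself, not the Gemini Robotics SDK, whose interface the article does not describe; the single-joint scene and the short torque sequence standing in for a "demonstration" are placeholders.

```python
# Minimal MuJoCo sketch using the standard `mujoco` Python bindings.
# The scene and the "demonstration" below are placeholders; this is not
# the Gemini Robotics SDK.
import mujoco

# Placeholder scene: one hinge joint with a motor, standing in for a robot.
XML = """
<mujoco>
  <worldbody>
    <body>
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.02" fromto="0 0 0 0 0 0.3"/>
    </body>
  </worldbody>
  <actuator>
    <motor joint="hinge" name="torque"/>
  </actuator>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Replay a recorded "demonstration" as a sequence of control inputs.
demonstration = [0.1, 0.1, 0.0, -0.1, -0.1]  # Placeholder torques.
for ctrl in demonstration:
    data.ctrl[:] = ctrl
    mujoco.mj_step(model, data)  # Advance the physics by one timestep.

print("Final joint angle:", float(data.qpos[0]))
```

In practice, a simulator like this is where demonstrations can be collected and replayed cheaply before anything runs on physical hardware, which is presumably why the SDK ties into it.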


The Verge
a day ago
- Business
Google DeepMind's optimized AI model runs directly on robots
Google DeepMind is rolling out an on-device version of its Gemini Robotics AI model that allows it to operate without an internet connection. The vision-language-action (VLA) model comes with dexterous capabilities similar to the one released in March, but Google says 'it's small and efficient enough to run directly on a robot.'

The flagship Gemini Robotics model is designed to help robots complete a wide range of physical tasks, even if it hasn't been specifically trained on them. It allows robots to generalize to new situations, understand and respond to commands, and perform tasks that require fine motor skills.

Carolina Parada, head of robotics at Google DeepMind, tells The Verge that the original Gemini Robotics model uses a hybrid approach, allowing it to operate on-device and in the cloud. But with this device-only model, users can access offline features that are almost as good as those of the flagship. The on-device model can perform several different tasks out of the box, and it can adapt to new situations 'with as few as 50 to 100 demonstrations,' according to Parada.

Google trained the model only on its ALOHA robot, but the company was able to adapt it to different robot types, such as the humanoid Apollo robot from Apptronik and the bi-arm Franka FR3 robot. 'The Gemini Robotics hybrid model is still more powerful, but we're actually quite surprised at how strong this on-device model is,' Parada says. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.' It could also be useful for companies with strict security requirements.

Alongside this launch, Google is releasing a software development kit (SDK) for the on-device model that developers can use to evaluate and fine-tune it, a first for one of Google DeepMind's VLAs. The on-device Gemini Robotics model and its SDK will be available to a group of trusted testers while Google continues to work toward minimizing safety risks.
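As a rough illustration of what a vision-language-action model does at runtime, taking a camera image and a natural-language instruction and returning a short chunk of motor commands, here is a conceptual sketch. Every name in it is a hypothetical placeholder; none of it reflects a published Gemini Robotics interface.

```python
# Conceptual shape of a VLA control loop as described in the article:
# camera image plus instruction in, a chunk of motor commands out.
# All names are hypothetical placeholders.
import numpy as np


class PlaceholderVlaPolicy:
    """Stand-in for an on-device VLA model (hypothetical)."""

    def predict_actions(self, image: np.ndarray, instruction: str) -> list:
        # A real VLA would condition on the image and instruction; this
        # stand-in just returns a chunk of zero joint-velocity commands.
        return [np.zeros(7) for _ in range(10)]


def control_loop(policy: PlaceholderVlaPolicy, instruction: str, steps: int = 3) -> None:
    for step in range(steps):
        image = np.zeros((224, 224, 3), dtype=np.uint8)  # Placeholder camera frame.
        chunk = policy.predict_actions(image, instruction)
        print(f"step {step}: received {len(chunk)} commands")
        # A real robot driver would execute each command in the chunk here.


control_loop(PlaceholderVlaPolicy(), "fold the shirt on the table")
```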