Latest news with #GeminiRoboticsOnDevice


Mint
2 hours ago
- Mint
Google Gemini AI model brings real-time intelligence to bi-arm robots
Google DeepMind has announced the launch of a new artificial intelligence model tailored for robotics, capable of functioning entirely on a local device without requiring an active data connection. Named Gemini Robotics On-Device, the advanced model is designed to enable bi-arm robots to carry out complex tasks in real-world environments by combining vision, language and action (VLA) processing.

In a blog post, Carolina Parada, Senior Director and Head of Robotics at Google DeepMind, introduced the new model, highlighting its low-latency performance and flexibility. As it operates independently of the cloud, the model is especially suited to latency-sensitive environments and real-time applications where constant internet connectivity is not feasible. Currently, access to the model is restricted to participants in Google's trusted tester programme. Developers can experiment with the AI system through the Gemini Robotics software development kit (SDK) and the company's MuJoCo physics simulator.

Although Google has not disclosed specific details about the model's architecture or training methodology, it has outlined the model's capabilities. Designed for bi-arm robotic platforms, Gemini Robotics On-Device requires minimal computing resources. Remarkably, the system can adapt to new tasks using only 50 to 100 demonstrations, a feature that significantly accelerates deployment in diverse settings.

In internal trials, the model demonstrated the ability to interpret natural language commands and perform a wide array of sophisticated tasks, from folding clothes and unzipping bags to handling unfamiliar objects. It also completed precision tasks such as industrial belt assembly, showcasing a high level of dexterity. Though originally trained on ALOHA robotic systems, Gemini Robotics On-Device has also been adapted to work with other bi-arm robots, including Franka Emika's FR3 and Apptronik's Apollo humanoid robot.
According to the American tech giant, the model exhibited consistent generalisation performance across different platforms, even when faced with out-of-distribution tasks or multi-step instructions.


Time of India
3 hours ago
- Time of India
Google launches Gemini Robotics model capable of running locally on robots
Synopsis: Google DeepMind has introduced Gemini Robotics On-Device. This model allows robots to function independently, even without internet. It's designed for quick responses and use in areas with limited connectivity. The model has been tested on various robots, including the Apollo humanoid. Developers can now evaluate Gemini Robotics On-Device using Google's software development kit.


Hans India
8 hours ago
- Hans India
Google's Gemini AI Now Powers Robots Without Internet Access
New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.

Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. 'It's small and efficient enough to run directly on a robot,' she told The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.'

Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. 'We're actually quite surprised at how strong this on-device model is,' Parada added, pointing to its effectiveness even with minimal training. The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones. Initially developed using Google's ALOHA robot, it has since been adapted to other robotic systems including Apptronik's Apollo humanoid and the dual-armed Franka FR3.

Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without latency caused by cloud interaction. This is a key differentiator compared to other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing. The local processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' Parada noted, emphasizing the model's flexibility and adaptability.

However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools.
Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

This announcement follows Google's recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, this app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face. Together, these launches signal Google's broader move to decentralize AI, bringing high-performance intelligence directly to user devices – be it phones or robots.
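Every excerpt above repeats the claim that the model learns a new task from only 50 to 100 demonstrations, but Google has not published how that fine-tuning works. Purely as a conceptual illustration of "a policy learned from a handful of demonstrations" — not Google's method, with all names and data made up — the simplest possible stand-in is to replay the action of the closest recorded demonstration state:

```python
# Toy illustration of acting from a small set of demonstrations.
# This is NOT Gemini Robotics' (undisclosed) adaptation method; it is a
# hypothetical nearest-neighbour stand-in using invented example data.
import math

# Each demonstration pairs an observation vector with the action taken.
# In practice, 50-100 such pairs might come from teleoperating the robot.
demos = [
    ((0.0, 0.0), "reach"),
    ((0.5, 0.1), "grasp"),
    ((0.9, 0.8), "lift"),
]

def nearest_neighbor_policy(observation):
    """Return the action whose demo observation is closest (Euclidean)."""
    def dist_to(demo):
        obs, _ = demo
        return math.dist(obs, observation)
    _, action = min(demos, key=dist_to)
    return action

print(nearest_neighbor_policy((0.45, 0.2)))  # closest demo is "grasp"
```

A real VLA model generalises far beyond lookup — the articles describe it handling unseen objects and multi-step instructions — but the sketch conveys why a small demonstration set can still define useful behaviour.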


India Today
9 hours ago
- India Today
Google's new Gemini AI can power robots and make them work without internet
Google DeepMind has launched a new version of its Gemini Robotics AI model that allows robots to operate entirely without internet access. Called Gemini Robotics On-Device, the system is designed to power robots in real-world settings where speed, autonomy, and privacy are crucial. This update marks a significant shift from earlier models that relied on cloud connectivity. By enabling robots to process information and make decisions on the device itself, Google hopes to make robotics more practical in offline environments such as remote areas, secure facilities, and latency-sensitive settings.

'It's small and efficient enough to run directly on a robot,' said Carolina Parada, head of robotics at Google DeepMind, in a statement to The Verge. 'I would think about it as a starter model or as a model for applications that just have poor connectivity.' Despite being a smaller variant, the on-device version holds its own. 'We're actually quite surprised at how strong this on-device model is,' Parada added.

Gemini Robotics On-Device brings several new features to the table. The model can carry out tasks straight out of the box and learn new ones from as few as 50 to 100 demonstrations. The model was initially trained using Google's ALOHA robot, but it has since been successfully adapted for use with other robotic systems, such as Apptronik's Apollo humanoid and the dual-armed Franka FR3. Google says that it can perform detailed actions such as folding clothes or unzipping bags, all while running low-latency inference. For perspective, Tesla's humanoid robot, Optimus, can also do all those things – folding a t-shirt, boiling an egg, dancing, etc – but it needs an internet connection to send data to cloud servers. However, in the case of Gemini Robotics On-Device, a standout feature is that all data is processed locally.

That makes it particularly useful for privacy-sensitive applications, such as healthcare and industrial automation, where data security is a concern. 'When we play with the robots, we see that they're surprisingly capable of understanding a new situation,' said Parada, highlighting the model's flexibility. Since the system does not rely on the cloud, it also keeps functioning in places with weak or no connectivity, making it highly reliable. 'It's drawing from Gemini's multimodal world understanding in order to do a completely new task,' Parada said.

However, unlike the cloud-based hybrid version, the on-device model does not include built-in semantic safety tools. Google recommends that developers implement their own safety systems, including using Gemini Live APIs and connecting to low-level safety controllers. 'With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,' said Parada.

The launch comes shortly after Google introduced the AI Edge Gallery, an Android app that lets users run AI models offline on their smartphones. Powered by the compact Gemma 3 1B model, the app allows users to generate images, write text, and interact with AI tools directly on their devices – no internet required. Much like Gemini Robotics On-Device, AI Edge Gallery focuses on privacy and low-latency performance. It uses open-source models from platforms like Hugging Face and technologies like TensorFlow Lite to ensure smooth experiences across a range of devices.


Indian Express
10 hours ago
- Business
- Indian Express
Google's new Gemini Robotics On-Device AI model runs directly on robots: Watch it in action
Google's DeepMind division, on Tuesday, June 24, released a new AI model called Gemini Robotics On-Device that runs locally on robotic devices. In a blog post, Google says that the new model has been optimised to run efficiently on the robot itself and shows 'strong general-purpose dexterity and task generalisation.' The new offline AI model builds on the company's Gemini Robotics model, which the tech giant unveiled earlier this year in March.

The Gemini Robotics On-Device model can control a robot's movements and, like ChatGPT, can understand natural language prompts. Since it works without an active internet connection, Google says it is especially useful for latency-sensitive applications or in areas with zero connectivity. Designed for robots with two arms, Gemini Robotics On-Device is engineered to require 'minimal computational resources' while completing highly dexterous tasks like folding clothes and unzipping bags, to name a few.

Compared with other on-device alternatives, Google claims that Gemini Robotics On-Device outperforms the competition at completing complex multi-step instructions and challenging out-of-distribution tasks. On benchmarks, the new offline model comes close to Google's cloud-based offering.

Initially trained on ALOHA robots, the new model has since been adapted and shown to work successfully on a bi-arm Franka FR3 robot and an Apollo humanoid as well. The tech giant said that on the bi-arm Franka FR3, the model was able to follow general-purpose instructions and handle previously unseen objects and scenes, such as executing industrial belt assembly. As for Apollo, the model allowed the humanoid robot to manipulate different and unseen objects in a general manner. Developers can try out Gemini Robotics On-Device using the Gemini Robotics software development kit (SDK).
Google isn't the only tech giant working on AI models for robots. At GTC 2025, NVIDIA unveiled GR00T N1, an AI model for humanoid robots, while Hugging Face is working on its very own robot powered by an in-house open-source model.