
NVIDIA unveils new Isaac GR00T tools powering robot evolution
NVIDIA has introduced updated cloud-to-robot computing platforms designed to support humanoid robot development using new data generation blueprints, simulation frameworks, and powerful workstation and server hardware.
NVIDIA has announced NVIDIA Isaac GR00T N1.5, the first update to its open, customisable foundation model for humanoid reasoning and skills. In addition, the company unveiled Isaac GR00T-Dreams, a new synthetic motion data generation blueprint, and expanded its NVIDIA Blackwell systems portfolio to accelerate research, simulation, and deployment in the robotics industry.
A range of robotics companies, including Agility Robotics, Boston Dynamics, Fourier, Foxlink, Galbot, Mentee Robotics, NEURA Robotics, General Robotics, Skild AI and XPENG Robotics, have adopted NVIDIA's Isaac platform technologies to progress humanoid robot development.
Jensen Huang, Founder and Chief Executive Officer of NVIDIA, stated, "Physical AI and robotics will bring about the next industrial revolution. From AI brains for robots, to simulated worlds to practice in, to AI supercomputers for training foundation models, NVIDIA provides building blocks for every stage of the robotics development journey."
NVIDIA Isaac GR00T-Dreams, demonstrated by Huang, is positioned as a blueprint for generating synthetic motion data known as neural trajectories. This data supports physical AI developers in teaching robots new behaviours and in improving adaptation to various environments.
The Isaac GR00T-Dreams workflow involves the post-training of Cosmos Predict world foundation models (WFMs) for robots. Developers can use a single image as input, and GR00T-Dreams will output videos of the robot executing new tasks in novel environments. The system then extracts action tokens—small, processed data units—utilised to teach robots how to perform these tasks.
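The image-to-video-to-action-token flow described above can be sketched as a simple pipeline. This is a minimal illustration only: every function and class name here is hypothetical, standing in for GR00T-Dreams components that NVIDIA has not published as a public API in this form.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ActionToken:
    """A small, processed data unit describing one step of robot motion."""
    step: int
    command: str

def dream_video(seed_image: str, task: str, num_frames: int = 4) -> List[str]:
    # Stand-in for the world-model rollout: a single input image becomes
    # a sequence of imagined frames of the robot performing the new task.
    return [f"{seed_image}:{task}:frame{i}" for i in range(num_frames)]

def extract_action_tokens(frames: List[str]) -> List[ActionToken]:
    # Stand-in for the step that turns imagined frames into action tokens
    # usable for teaching the robot the task.
    return [ActionToken(step=i, command=f"act_from({frame})")
            for i, frame in enumerate(frames)]

def neural_trajectory(seed_image: str, task: str) -> List[ActionToken]:
    # One seed image in, one synthetic "neural trajectory" out.
    frames = dream_video(seed_image, task)
    return extract_action_tokens(frames)

tokens = neural_trajectory("workcell.png", "sort_parts")
print(len(tokens))  # 4 tokens, one per imagined frame
```

The point of the sketch is the data flow, not the models themselves: a single image fans out into a video rollout, and the rollout is compressed back into discrete action tokens for training.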
This blueprint complements NVIDIA's previously released Isaac GR00T-Mimic, which uses the NVIDIA Omniverse and Cosmos platforms to augment existing data sets. By contrast, GR00T-Dreams generates fresh synthetic data, enabling further enrichment of robot training datasets.
NVIDIA Research applied the GR00T-Dreams blueprint to quickly develop GR00T N1.5, updating its original model in 36 hours, compared to the nearly three months typically required for manual human data collection.
The updated GR00T N1.5 model is capable of more effectively adapting to new workspaces and recognising objects based on user instructions. NVIDIA reports significant improvements in the model's success rate when performing common material handling and manufacturing tasks, such as sorting and storing objects.
Early adopters of the GR00T N models include AeiRobot, Foxlink, Lightwheel and NEURA Robotics. These organisations are pursuing varied applications: AeiRobot is using the models to enable its ALICE4 robot to follow natural language instructions and perform complex pick-and-place tasks in industrial settings, while Foxlink Group is employing the models to enhance industrial robot manipulator efficiency. Lightwheel focuses on validating synthetic data for streamlined humanoid deployment in factories, and NEURA Robotics is evaluating the models to accelerate household automation system development.
NVIDIA has also released additional simulation and data generation frameworks intended to close the data gap and mitigate the costs and risks of real-world robot testing. These include NVIDIA Cosmos Reason, a new world foundation model that uses chain-of-thought reasoning to curate higher-quality synthetic data for training physical AI models, and Cosmos Predict 2, which will be available soon and offers improved world generation with reduced hallucination.
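Curation of the kind attributed to Cosmos Reason amounts to scoring each synthetic sample with a reasoning model and keeping only the high-quality ones. The sketch below shows that filtering shape with a trivial stand-in critic; the function names and scoring rule are invented for illustration and do not reflect the actual Cosmos Reason interface.

```python
from typing import List, Tuple

def reasoning_score(sample: str) -> Tuple[float, str]:
    # Stand-in for a chain-of-thought critic: returns a quality score plus
    # a short rationale. A real curator would query a world foundation
    # model here rather than a keyword check.
    score = 1.0 if "robot" in sample else 0.2
    rationale = "mentions the robot" if score > 0.5 else "off-task clip"
    return score, rationale

def curate(samples: List[str], threshold: float = 0.5) -> List[str]:
    # Keep only the synthetic samples the critic rates above the threshold.
    kept = []
    for sample in samples:
        score, _ = reasoning_score(sample)
        if score >= threshold:
            kept.append(sample)
    return kept

clips = ["robot stacks boxes", "empty conveyor", "robot sorts parts"]
print(curate(clips))  # ['robot stacks boxes', 'robot sorts parts']
```

The design choice being illustrated is that curation happens before training: low-quality synthetic samples are discarded up front rather than left for the robot policy to learn around.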
Other new tools include Isaac GR00T-Mimic for generating vast quantities of synthetic motion trajectories using few human demonstrations, an open-source physical AI dataset incorporating 24,000 high-quality humanoid robot motion trajectories, and upcoming availability of Isaac Sim 5.0 and Isaac Lab 2.2 on open platforms such as GitHub. These simulation frameworks aim to make development pipelines more efficient and scalable.
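The GR00T-Mimic idea of multiplying a few human demonstrations into vast quantities of synthetic trajectories can be sketched as perturbation-based augmentation. This is a toy sketch under that assumption; the function names, waypoint representation, and noise model are all hypothetical and much simpler than the real Omniverse/Cosmos pipeline.

```python
import random
from typing import List

def augment_trajectory(demo: List[float], noise: float,
                       rng: random.Random) -> List[float]:
    # Perturb each waypoint slightly so the synthetic trajectory stays
    # close to the human demonstration while adding variety.
    return [p + rng.uniform(-noise, noise) for p in demo]

def mimic(demos: List[List[float]], copies_per_demo: int,
          noise: float = 0.05, seed: int = 0) -> List[List[float]]:
    # Expand a handful of demonstrations into a large synthetic dataset.
    rng = random.Random(seed)
    out = []
    for demo in demos:
        for _ in range(copies_per_demo):
            out.append(augment_trajectory(demo, noise, rng))
    return out

human_demos = [[0.0, 0.5, 1.0], [1.0, 0.5, 0.0]]  # two toy demonstrations
synthetic = mimic(human_demos, copies_per_demo=1000)
print(len(synthetic))  # 2000 synthetic trajectories from 2 demos
```

Even this crude version shows the economics the article describes: the cost of data scales with the number of human demonstrations, not with the size of the final training set.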
Various companies are already utilising these simulation tools and blueprints. Foxconn and Foxlink use GR00T-Mimic for synthetic motion generation, while Agility Robotics, Boston Dynamics, Fourier, Mentee Robotics, NEURA Robotics and XPENG Robotics train their robots with Isaac Sim and Isaac Lab. Skild AI develops general robot intelligence with these frameworks, and General Robotics integrates them into its platform.
Regarding hardware, international system manufacturers have announced new RTX PRO 6000 Blackwell workstations and servers built with NVIDIA technology. These platforms are intended for robot developers to handle training, data generation, robot learning, and simulation workloads on a single architecture. Providers such as Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro offer NVIDIA RTX PRO-powered servers, with Dell Technologies, HPI and Lenovo supplying RTX PRO 6000 Blackwell workstations as well.
Developers with greater computational needs can access NVIDIA Blackwell systems including the GB200 NVL72, which is made available through NVIDIA DGX Cloud and cloud partners, enabling up to 18 times higher data processing performance for large-scale projects.
NVIDIA plans to enable deployment of robot foundation models to its Jetson Thor platform, which will support accelerated on-robot inference and runtime performance.
Related Articles


Techday NZ
a day ago
Oracle & NVIDIA expand OCI partnership with 160 AI tools
Oracle and NVIDIA have expanded their partnership to give customers access to more than 160 AI tools and agents, alongside the computing resources needed for AI development and deployment. The collaboration brings NVIDIA AI Enterprise, a cloud-native software platform, natively to the Oracle Cloud Infrastructure (OCI) Console. Oracle customers can now use the platform across OCI's distributed cloud, including public regions, Government Clouds, and sovereign cloud solutions.

Platform access and capabilities
By integrating NVIDIA AI Enterprise directly through the OCI Console rather than a marketplace, Oracle allows customers to use their existing Universal Credits, streamlining transactions and support. This approach is designed to speed up deployment and help customers meet security, regulatory, and compliance requirements for enterprise AI.

The more than 160 AI tools focus on training and inference and include NVIDIA NIM microservices, which aim to simplify the deployment of generative AI models and support a broad set of application-building and data management needs across various deployment scenarios.

"Oracle has become the platform of choice for AI training and inferencing, and our work with NVIDIA boosts our ability to support customers running some of the world's most demanding AI workloads," said Karan Batta, Senior Vice President, Oracle Cloud Infrastructure. "Combining NVIDIA's full-stack AI computing platform with OCI's performance, security, and deployment flexibility enables us to deliver AI capabilities at scale to help advance AI efforts globally."

The partnership also makes NVIDIA GB200 NVL72 systems available on the OCI Supercluster, supporting up to 131,072 NVIDIA Blackwell GPUs. The liquid-cooled architecture targets large-scale AI training and inference requirements.
Governments and enterprises can take advantage of these "AI factories", using platforms like NVIDIA's GB200 NVL72 for agentic AI tasks that rely on advanced reasoning models and efficiency enhancements.

Developer access to advanced resources
Oracle has become one of the first major cloud providers to integrate with NVIDIA DGX Cloud Lepton, which links developers to a global marketplace of GPU compute. The integration offers developers access to OCI's high-performance GPU clusters for a range of needs, including AI training, inference, digital twin implementations, and parallel HPC applications.

Ian Buck, Vice President of Hyperscale and HPC at NVIDIA, said: "Developers need the latest AI infrastructure and software to rapidly build and launch innovative solutions. With OCI and NVIDIA, they get the performance and tools to bring ideas to life, wherever their work happens."

The integration also lets developers select compute resources in precise regions, helping them achieve strategic and sovereign AI aims and satisfy both long-term and on-demand requirements.

Customer projects using joint capabilities
Enterprises in Europe and internationally are making use of the expanded partnership. Almawave, based in Italy, uses OCI AI infrastructure and NVIDIA Hopper GPUs to run generative AI model training and inference for its Velvet model family, which supports Italian alongside other European languages and is being deployed within Almawave's AIWave platform.

"Our commitment is to accelerate innovation by building a high-performing, transparent, and fully integrated Italian foundational AI in a European context—and we are only just getting started," said Valeria Sandei, Chief Executive Officer, Almawave. "Oracle and NVIDIA are valued partners for us in this effort, given our common vision around AI and the powerful infrastructure capabilities they bring to the development and operation of Velvet."
Danish health technology company Cerebriu is using OCI along with NVIDIA Hopper GPUs to build an AI-driven tool for clinical brain MRI analysis. Cerebriu's deep learning models, trained on thousands of multi-modal MRI images, aim to reduce the time required to interpret scans, potentially benefiting the clinical diagnosis of time-sensitive neurological conditions.

"AI plays an increasingly critical role in how we design and differentiate our products," said Marko Bauer, Machine Learning Researcher, Cerebriu. "OCI and NVIDIA offer AI capabilities that are critical to helping us advance our product strategy, giving us the computing resources we need to discover and develop new AI use cases quickly, cost-effectively, and at scale. Finding the optimal way of training our models has been a key focus for us. While we've experimented with other cloud platforms for AI training, OCI and NVIDIA have provided us the best cloud infrastructure availability and price performance."

With the expanded Oracle-NVIDIA partnership, customers can now choose from a wide variety of AI tools and infrastructure options within OCI, supporting both research and production environments for AI solution development.


Techday NZ
a day ago
Milestone launches Project Hafnia for AI-driven city management
Milestone has launched Project Hafnia to develop AI-driven solutions for urban infrastructure and traffic management, with Genoa in Italy as the first city. The initiative aims to improve city operations by harnessing computer vision technologies, using high-quality video data that adheres to European regulatory frameworks, including the GDPR and the AI Act. Video data for the project is curated with NVIDIA's NeMo Curator on NVIDIA DGX Cloud.

Collaboration and compliance
Milestone is among the first companies to utilise the new NVIDIA Omniverse Blueprint for Smart City AI, a framework designed for optimising city operations through digital twins and AI agents. The company is also enhancing its data platform by generating synthetic video data via NVIDIA Cosmos, which processes real-world inputs. This combination of real and synthetic video data is used to build and train vision language models (VLMs) in a manner the company states is responsible and regulation-compliant. European cloud provider Nebius will supply the GPU compute for training these models, helping keep data processing anchored within European borders and compliant with regional data protection regulations.

The application of AI within Project Hafnia spans smart traffic and transportation management, as well as improvements in safety and security for cities. VLMs establish connections between textual data and visual information from images or videos, enabling AI models to generate insights and summaries from visual sources. These efforts, the company asserts, are based upon regulatory integrity, data diversity, and relevance to European legal frameworks.

"I'm proud that with Project Hafnia we are introducing the world's first platform to meet the EU's regulatory standards, powered by NVIDIA technology. With Nebius as our European cloud provider, we can now enable compliant, high-quality video data for training vision AI models — fully anchored in Europe. This marks an important step forward in supporting the EU's commitment to transparency, fairness, and regulatory oversight in AI and technology — the foundation for responsible AI innovation," says Thomas Jensen, CEO of Milestone.

Genoa as a first
Project Hafnia's first European service offering is a vision language model built specifically for transportation management, drawing on transportation data sourced from Genoa. The model is powered by NVIDIA technology and has been trained on data that is both responsibly sourced and compliant with applicable regulations.

"AI is achieving extraordinary results, unthinkable until recently, and research in the area is in constant development. We enthusiastically joined forces with Project Hafnia to allow developers to access fundamental video data for training new Vision AI models. This data-driven approach is a key principle in the Three-Year Plan for Information Technology, aiming to promote digital transformation in Italy and particularly within the Italian Public Administration," says Andrea Sinisi, Information Systems Officer, City of Genoa.

The structure of Project Hafnia's collaborations allows for scalability, as the framework is designed to operate across multiple domains and data types. The compliant datasets and fine-tuned VLMs will be supplied to participating cities via a controlled-access licence model, supporting the region's AI ambitions within ethical standards.

Role of Nebius
Nebius has been selected as Project Hafnia's European cloud provider. The company operates EU-based data centres, supporting digital sovereignty objectives and ensuring that sensitive public sector data remains within the jurisdiction of European data protection laws.

"Project Hafnia is exactly the kind of real-world, AI-at-scale challenge Nebius was built for," says Roman Chernin, Chief Business Officer of Nebius. "Supporting AI development today requires infrastructure engineered for high-throughput, high-resilience workloads, with precise control over where data lives and how it's handled. From our EU-based data centers to our deep integration with NVIDIA's AI stack, we've built a platform that meets the highest standards for performance, privacy and transparency."

Project Hafnia data platform
Project Hafnia acts as what Milestone calls a 'trusted librarian' of AI-ready video data: the platform curates, tags, and delivers video data described as ethically sourced and regulation-ready for AI model training, with an emphasis on precision, compliance, and citizen privacy throughout. According to Milestone, its network of customers, distributors, and technology partners enables it to organise a comprehensive video data ecosystem that advances the development of AI in video analytics.

Project Hafnia is positioned as a resource companies can use to build AI models while meeting compliance and quality standards. Both the compliant dataset and the fine-tuned vision language model will be made available to participating cities on a controlled basis as part of the effort to support AI development across Europe.


Techday NZ
a day ago
Vertiv unveils 142kW AI data centre design for NVIDIA GB300
Vertiv has introduced a 142kW cooling and power reference architecture for the NVIDIA GB300 NVL72 platform, aiming to facilitate higher density and energy efficiency in data centres supporting advanced AI workloads. The reference architecture is designed for customisation in bespoke data centre environments, reducing both planning times and the risks associated with modern data centre buildouts. Vertiv's solutions are now available as SimReady 3D assets within the NVIDIA Omniverse Blueprint, supporting AI factory design and operations through digital simulation and validation.

Reference architecture capabilities
The architecture supports rack densities of up to 142 kW and offers integrated end-to-end cooling and power strategies for AI-driven data centre deployments. These capabilities address the increasing requirements of data centres as AI workloads become more prevalent and power consumption rises accordingly.

Vertiv collaborates closely with NVIDIA on AI infrastructure strategies and designs that anticipate higher rack power densities. The company is developing support for 800 VDC data centre power infrastructure, including 1 MW IT racks and beyond, with these solutions expected to be available from 2026. The Vertiv 360AI infrastructure platform, under which the new reference architecture sits, aims to help customers meet the demands of powering and cooling AI workloads and other high-performance computing requirements.

Simulation and deployment path
A key aspect of Vertiv's solution is its emphasis on digital simulation to streamline deployment. Leveraging NVIDIA Omniverse technologies, the architecture bridges physical and digital environments, enabling real-time collaboration and allowing data centre teams to test and optimise their designs before construction.

The reference architecture for the NVIDIA GB300 NVL72 has several highlighted benefits: it allows simulation to deployment in a unified workflow; it is built to support the increasing power and cooling needs of large-scale AI operations; and it promises accelerated performance, scale, and speed, claiming 1.5 times more AI performance, up to 50% faster on-site builds, and operation in 30% less physical space than traditional data centre builds. The system is also liquid cooling-ready and adaptable to air- and hybrid-cooled configurations, enabling up to a 70% improvement in annual energy efficiency by operating at higher water temperatures.

Vertiv's global reach, with over 4,000 field service engineers, underpins its capability to support large-scale, international rollouts of the reference architecture for the GB300 NVL72.

Industry collaboration
The announcement reflects ongoing collaboration between the two companies as they equip data centres for the evolving requirements of AI infrastructure. Dion Harris, Senior Director of HPC and AI Infrastructure at NVIDIA, said: "By combining NVIDIA's advanced AI platforms with Vertiv's expertise in power and cooling infrastructure, we're enabling customers to deploy next-generation data centres that are more efficient, scalable, and ready for the most demanding AI workloads. Together, we're helping organisations unlock new levels of performance and sustainability as they build the future of AI."

As AI workloads continue to accelerate globally, data centre providers and operators are seeking new infrastructure strategies to meet demand efficiently and sustainably. Vertiv's latest reference architecture, together with its SimReady assets, is positioned to enable deployment-ready designs that anticipate future industry requirements. The company continues to develop energy-efficient cooling and power delivery solutions in response to the escalating computing needs of next-generation AI applications, focusing on digital optimisation and global serviceability across data centre deployments.