Latest news with #RevLebaredian


Hans India
4 days ago
- Business
- Hans India
Nvidia Unveils Cosmos Reason AI to Give Robots Human-Like Thinking and Planning Skills
Nvidia has taken a bold step toward the future of robotics with the introduction of Cosmos Reason AI, a next-generation reasoning vision language model (VLM) built to help robots think, plan, and act more like humans in the physical world. The 7-billion-parameter open and customizable model is tailored specifically for physical-world AI and robotics, offering far more than traditional VLMs. According to Nvidia, while models like OpenAI's CLIP excel at identifying objects and patterns, they often struggle with complex or ambiguous instructions. Cosmos Reason aims to bridge that gap by incorporating prior knowledge, physics-based understanding, and common sense reasoning, skills crucial for breaking down tricky commands into smaller steps, adapting to unfamiliar surroundings, and making deliberate, methodical choices.

'By combining AI reasoning with scalable, physically accurate simulation, we're enabling developers to build tomorrow's robots and autonomous vehicles that will transform trillions of dollars in industries,' said Rev Lebaredian, vice president of Omniverse and simulation technologies at Nvidia.

Real-World Uses and Early Adoption
The company says Cosmos Reason can handle a range of tasks including data curation and annotation, robot planning and reasoning, and video analytics. For instance, it could help automate the labeling of massive, varied datasets, serve as a robot's 'brain' integrating vision, language, and actions, or process huge volumes of video to extract insights or detect problems. Already, Nvidia's robotics and DRIVE teams are employing the technology for training data filtering and annotation. Major companies such as Uber, Magna, VAST Data, Milestone Systems, and Linker Vision are exploring its potential for applications like autonomous driving, delivery robots, traffic monitoring, industrial inspection, and safety enhancements. Nvidia notes that in autonomous vehicles, Cosmos Reason could add 'world understanding' to improve trajectory planning.

Part of a Larger AI Ecosystem
Cosmos Reason was developed alongside Nvidia's Cosmos world foundation models (WFMs), which have been downloaded more than two million times. Alongside this launch, Nvidia introduced Cosmos Transfer-2, an upgraded synthetic data platform that streamlines photorealistic 3D scene creation. The update reduces the process from 70 steps to just one, enabling much faster AI training on Nvidia RTX PRO servers.

Simulation and Hardware Boosts
To support the new AI model, Nvidia has also rolled out updates to its Omniverse simulation platform, adding SDKs and libraries for industrial AI and robotics. New features include interoperability between MuJoCo (MJCF) and Universal Scene Description (OpenUSD), plus Omniverse NuRec libraries for 3D Gaussian splatting. In robotics simulation, Isaac Sim 5.0 and Isaac Lab 2.2, now open source on GitHub, come with integrated NuRec rendering. These tools are already being linked with simulators like CARLA for autonomous vehicle testing. On the hardware side, Nvidia unveiled RTX PRO Blackwell Servers, designed to handle unified robot development workloads, and expanded DGX Cloud availability on the Microsoft Azure Marketplace for Omniverse developers.

With Cosmos Reason AI, Nvidia is not just teaching robots to 'see' the world; it's training them to truly understand and navigate it with intelligence and adaptability.
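The article describes Cosmos Reason as a reasoning VLM that can break an ambiguous command into smaller, executable steps. As a rough illustration of how a developer might query such a model for task decomposition, the sketch below sends an instruction to an OpenAI-compatible chat endpoint and asks for an ordered plan. The endpoint URL, model identifier, and prompt format here are assumptions made for illustration only, not Nvidia's documented API.

```python
# Minimal sketch: asking a reasoning VLM to decompose an ambiguous command
# into ordered sub-tasks for a robot. The endpoint URL and model name are
# hypothetical placeholders, not a documented Nvidia interface.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-inference-endpoint/v1",  # assumed OpenAI-compatible server
    api_key="YOUR_API_KEY",
)

command = "Tidy up the workbench and put the tools back where they belong."

response = client.chat.completions.create(
    model="cosmos-reason-7b",  # hypothetical model identifier
    messages=[
        {"role": "system",
         "content": "You are a robot task planner. Break the user's instruction "
                    "into short, ordered, physically feasible steps."},
        {"role": "user", "content": command},
    ],
    temperature=0.2,
)

# The returned plan would then be mapped to the robot's low-level skills
# (grasping, navigation, placement) by downstream controllers.
print(response.choices[0].message.content)
```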


India Today
4 days ago
- Business
- India Today
Nvidia debuts Cosmos Reason AI to help robots think and plan in the physical world
Nvidia has unveiled Cosmos Reason AI, a reasoning vision language model (VLM) designed to give robots the ability to plan and act with human-like understanding of the physical world. The 7-billion-parameter open and customisable model - Cosmos Reason - is built specifically for physical-world AI and robotics, Nvidia says. Other VLMs such as OpenAI's CLIP excel at object and pattern recognition but fall short in handling complex or ambiguous tasks, Nvidia explains. Cosmos Reason, however, uses prior knowledge, physics understanding, and common sense so robots can - in theory - break down complex commands into smaller tasks, adapt to unfamiliar settings, and make deliberate, methodical decisions, making them smarter and more efficient.

'By combining AI reasoning with scalable, physically accurate simulation, we're enabling developers to build tomorrow's robots and autonomous vehicles that will transform trillions of dollars in industries,' said Rev Lebaredian, vice president of Omniverse and simulation technologies at Nvidia.

Detailing real-world applications of its new model, Nvidia says Cosmos Reason can be used for data curation and annotation, robot planning and reasoning, and video analytics. Giving an example, Nvidia explains that Cosmos Reason can help companies automate the labelling of big, varied datasets, act as the 'brain' for robots by combining vision, language, and actions, and analyse large volumes of video to find insights or detect problems. The company adds that its robotics and DRIVE teams are already using Cosmos Reason for training data filtering and annotation, while companies including Uber, Magna, VAST Data, Milestone Systems, and Linker Vision are exploring its use for autonomous vehicles, delivery robots, traffic monitoring, safety improvements, and industrial inspection. Nvidia says the new model will add world understanding to autonomous vehicles' trajectory planning.

Nvidia has developed Cosmos Reason alongside its Cosmos world foundation models (WFMs), which have been downloaded over 2 million times, per the company. Additionally, Nvidia has announced Cosmos Transfer-2, an update to its synthetic data platform that speeds up photorealistic 3D scene creation from simulations or spatial inputs. This update, according to the company, reduces processing from 70 steps to just one, enabling rapid generation on Nvidia RTX PRO servers and making AI training and development faster and more efficient.

Omniverse and simulation upgrades
Nvidia has also rolled out updates to its Omniverse simulation platform, with new SDKs and libraries for industrial AI and robotics. These include interoperability between MuJoCo (MJCF) and Universal Scene Description (OpenUSD), and the new Omniverse NuRec libraries for 3D Gaussian splatting. For robot simulation, the company has introduced Isaac Sim 5.0 and Isaac Lab 2.2, which are now open source on GitHub, with NuRec rendering integrated into simulators like CARLA for autonomous vehicle testing. To support these new capabilities, Nvidia has introduced RTX PRO Blackwell Servers for unified robot development workloads and has expanded DGX Cloud availability on the Microsoft Azure Marketplace for Omniverse developers.


Euronews
26-05-2025
- Science
- Euronews
Humanoid robots are the next step for AI. Here's how to train yours
The next phase of artificial intelligence (AI) is robots, which will help with the global labour shortage, an Nvidia executive told Euronews Next.

"We are at a very interesting point in time. The promise of robotics has existed for a long time. It's been in our imaginations and science fiction," Rev Lebaredian, vice president of Omniverse and simulation technology at Nvidia, told Euronews Next at the Computex technology fair in Taiwan.

He said that although tech companies have been trying to build a general-purpose robot for years, the problem has been that, even when the physical robot could be built, programming it has always been a challenge. "AI has changed all that. We now have the technology to make robots really programmable in a general-purpose way and make it so that normal people can programme them, not just specific robot programming engineers," he said.

Companies such as Tesla are racing to build humanoid robots and have made strides. Last week, Elon Musk's company said its Optimus robot had learned to perform household chores. However, there is still much for robots to learn.

Nvidia says robots should learn their tasks in the virtual world, both for safety and because it would take too long to train robots with humans. "The only way to actually create these robots, intelligent ones, is to employ simulation," Lebaredian said. "The fundamental problem that we have with physical AI is that AI is data hungry. You have to feed into your AI factory lots and lots of quality data to give it life experience to train from".

He said that with large language models (LLMs), there is a large amount of data online to train them. But in physical AI, there is no such data that can be mined. "To get all of the information we need to train a robot on how to pick up an object, we have to go create it somehow," he said. "Collecting it from the real world is not possible. We can't create enough data. Even if you can, in some cases, it's dangerous, it's time-consuming, and it is expensive".

What is needed is "a way to go from fossil data to renewable data sources," Lebaredian said. And the best renewable data source for physical data is a physical simulator, he added.

Once your robot is tested, or has "graduated" and looks like it is working well, it can then go to its first employer. "A new college graduate is trained on a corpus of publicly available data. You study from textbooks and information that everybody has access to everywhere. And you have a generalist that enters your company, and they're useful," Lebaredian told Euronews Next. "But they're not really useful until you train them for a few years on the specific proprietary information and data in your company that's about your domain and your particular practices and how things are done," he added. In robot terms, this means you could then specialise your robot with your own data to make it work best for you.

Lebaredian did not specify a date when humanoid robots would come into our lives, but he said it would be "soon". The first use cases for them would be in factories and warehouses. "I think industrial use is going to be the first one because even if we can build a perfect robot that you can use in your home, it's not clear that all humans will want one," according to Lebaredian. "But industry, there is a great need for it. There aren't enough young people replacing the older skilled workers who are retiring in every country".
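Lebaredian's argument is that a physics simulator is the "renewable" source of training data for physical AI. As a loose illustration of what that means in practice, the sketch below harvests observation-action transitions from a simulated environment instead of from real-world data collection. It uses the open-source Gymnasium API with a standard example environment purely as a stand-in; it is not Nvidia's Isaac tooling, and the policy, episode count, and file name are illustrative choices.

```python
# Rough sketch: generating synthetic training data from a physics simulator
# rather than collecting it in the real world. CartPole is used only as a
# stand-in for any simulated robot environment.
import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1")
dataset = []  # (observation, action, next_observation) tuples

for episode in range(100):
    obs, info = env.reset(seed=episode)  # vary the seed to diversify scenarios
    done = False
    while not done:
        # Placeholder policy; a real pipeline would use scripted or learned behaviour.
        action = env.action_space.sample()
        next_obs, reward, terminated, truncated, info = env.step(action)
        dataset.append((obs, action, next_obs))
        obs = next_obs
        done = terminated or truncated

env.close()

# Persist the synthetic rollouts so a physical-AI model can train on them later.
np.savez(
    "synthetic_rollouts.npz",
    observations=np.array([d[0] for d in dataset]),
    actions=np.array([d[1] for d in dataset]),
    next_observations=np.array([d[2] for d in dataset]),
)
print(f"Collected {len(dataset)} simulated transitions")
```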
Global labour shortages have reached historically high levels in the past decade, according to the OECD. Population declines, ageing populations, and the fact that many people do not want the "three D" jobs - which, according to the Nvidia executive, are "jobs that are dangerous, dull, and dirty" - are all driving the shortfall.

Taiwan has jumped on this robotics need and is set to launch a five-year plan to boost the robotics industry in a bid to plug labour shortages, the government announced last week. Taiwan's population decline would strain the economy and the nation's ability to care for vulnerable and elderly people, Peter Hong, who heads the National Science and Technology Council's (NSTC) Department of Engineering and Technologies, was reported as saying by local media.

Lebaredian said that after factory use, humanoid robots could help in retail, as he hears many companies saying they cannot hire enough people to stack shelves. He also said they could be used in mines, nuclear reactors, or even in space. Eventually, they could be used to take care of the elderly if the demand is there.

But even as this next phase of AI takes shape, LLMs still get plenty wrong, which sometimes causes them to make things up. Errors caused by a robot in the physical world could be much more dangerous. However, Lebaredian believes that just as autonomous vehicles seemed like science fiction at first, people eventually get used to them, and the technology improves. "In generative AI, yes, there's still some stuff that's inaccurate, but I think you have to admit, in the last two and a half years since ChatGPT was introduced, accuracy and the quality of what it's producing have increased exponentially as well," he said.

But he added that perhaps chatbots will never be quite right, because we want humans to perform those tasks. "There's actually no right answer for a lot of that stuff," he said. "But for tasks that we have in industry, that is actually something that's very measurable, for example, did it accurately pick up this object and move it over here and do that safely and robustly?" He said such systems can be created, tested, and verified as safe before deployment. "We have machinery and systems that we create that are quite dangerous if they're not set up right. But we've managed to create nuclear reactors and these systems, and keep them safe somehow. We can do the same with physical AI," he said.

Women's jobs are at a higher risk of automation by artificial intelligence (AI) than those occupied by men, according to a new study from the United Nations. The recent report from the UN's International Labour Organisation (ILO) and Poland's National Research Institute of the Ministry of Digital Affairs (NASK) found that automation could replace just under 10 per cent of female-dominated positions in high-income countries, compared with 3.5 per cent of male-dominated ones. The disparity between male- and female-dominated jobs is largest in high-income countries, where 41 per cent of women's work could be exposed to AI, compared with 28 per cent of men's jobs. In Europe and Central Asia, 39 per cent of women's jobs could be affected, compared with 26 per cent of men's. The patterns identified by the study "reflect both occupational structures" and the fact that AI-exposed jobs are "concentrated in higher-income countries".
Overall, the ILO found that one in four workers globally works in an occupation with some AI exposure. To reach these findings, a survey was conducted with 1,640 people employed in various fields in Poland, with the results analysed by a small group of international experts. Researchers then developed an AI model that used this survey data alongside national job information to estimate how likely it is that 2,500 professions and more than 29,000 work tasks could be automated.

The study found that clerical occupations such as data entry clerks, typists, word processing operators, accountants, and bookkeeping clerks are the most exposed to AI, because of some of the tasks performed in those professions, like taking meeting notes or scheduling appointments. Other professions identified as highly exposed to AI are web and media developers, database specialists, and financial and software-related jobs.

The study notes that these numbers reflect "potential exposure" rather than any actual job losses. Full replacement by AI is still "limited," the report continued, noting that human involvement is still needed to oversee certain tasks. "As most occupations consist of tasks that require human input, transformation of jobs is the most likely impact of generative AI," the report reads. What could influence the number of jobs lost, or AI adoption more broadly, are technological constraints, infrastructure gaps, and skills shortages, the report continued.

The report asks governments, employers, and workers' organisations to shape "inclusive strategies" that can help protect job quality and productivity in endangered fields. "It's easy to get lost in the AI hype," Janine Berg, senior economist at the ILO, said in a statement. "What we need is clarity and context".
Yahoo
18-03-2025
- Automotive
- Yahoo
NVIDIA Omniverse Physical AI Operating System Expands to More Industries and Partners
NVIDIA Omniverse
Accenture, Ansys, Cadence, Databricks, Dematic, Hexagon, Omron, SAP, Schneider Electric With ETAP, Siemens Connect Omniverse to Leading Software Tools
Four New Blueprints Enable Robot-Ready Factories and Large-Scale Synthetic Data Generation
Foxconn, General Motors, Hyundai Motor Group, KION Group, Mercedes-Benz, Pegatron and Schaeffler Adopt Omniverse for Industrial AI Transformation

SAN JOSE, Calif., March 18, 2025 (GLOBE NEWSWIRE) -- GTC -- NVIDIA today unveiled that leading industrial software and service providers Ansys, Databricks, Dematic, Omron, SAP, Schneider Electric with ETAP, Siemens and more are integrating the NVIDIA Omniverse™ platform into their solutions to accelerate industrial digitalization with physical AI. New NVIDIA Omniverse Blueprints connected to NVIDIA Cosmos™ world foundation models are now available to enable robot-ready facilities and large-scale synthetic data generation for physical AI development.

'Omniverse is an operating system that connects the world's physical data to the realm of physical AI,' said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. 'With Omniverse, global industrial software, data and professional services leaders are uniting industrial ecosystems and building new applications that will advance the next generation of AI for industries at unprecedented speed.'

New Blueprints Enable Robot-Ready Facilities and Large-Scale Synthetic Data Generation
Mega, an Omniverse Blueprint for testing multi-robot fleets at scale in industrial digital twins, is now available in preview. Also available is the NVIDIA AI Blueprint for video search and summarization, powered by the NVIDIA Metropolis platform, for building AI agents that monitor activity across entire facilities.

Manufacturing leaders are using the blueprints to optimize their industrial operations with physical AI. In automotive manufacturing, Schaeffler and Accenture are starting to adopt Mega to test and simulate fleets of Agility Robotics Digit for material-handling automation. Hyundai Motor Group is using the blueprint to simulate Boston Dynamics Atlas robots on its assembly lines, and Mercedes-Benz is using it to simulate Apptronik's Apollo humanoid robots to optimize vehicle assembly operations. In electronics manufacturing, Pegatron is using Mega to develop physical AI-based NVIDIA Metropolis video analytics agents to improve factory operations and worker safety. Foxconn is using the blueprint to simulate industrial manipulators, humanoids and mobile robots in its manufacturing facilities for the NVIDIA Blackwell platform.

'Foxconn is constantly exploring ways to transform our operations as we continue our journey toward building the factories of the future,' said Brand Cheng, CEO of Fii, a core subsidiary of Foxconn. 'Using NVIDIA Omniverse and Mega, we're testing and training humanoids to operate in our leading factories as we advance to the next wave of physical AI.'

For warehouses and supply chain solutions, KION Group, Dematic and Accenture announced they are integrating Mega to advance next-generation AI-powered automation. idealworks is integrating Mega into its fleet management software to simulate, test and optimize robotic fleets. SAP customers and partners can use Omniverse to develop their own virtual environments for warehouse management scenarios.
A new Omniverse Blueprint for AI factory digital twins lets data center engineers design and simulate AI factory layouts, cooling and electrical to maximize utilization and efficiency. Cadence Reality Digital Twin Platform and Schneider Electric with ETAP are the first to integrate their simulation software into the blueprint, while Vertiv and Schneider Electric are providing Omniverse SimReady 3D models of their power and cooling units to accelerate the development of AI factory digital twins.

The NVIDIA Isaac GR00T Blueprint for synthetic manipulation motion generation is also now available for robotics developers, enabling large-scale synthetic data generation from Omniverse and Cosmos. The blueprint helps humanoid developers reduce data collection time from hours to minutes, fast-tracking robot development.

Omniverse Physical AI Operating System Expands Across Industries
Digitalization is challenging for industries grounded in the physical world. Massive amounts of digital and physical world data from legacy systems create silos. Omniverse is an operating system built on the OpenUSD framework that enables developers to unify physical-world data and applications. Ansys, Cadence, Hexagon, Omron, Rockwell Automation and Siemens are integrating Omniverse data interoperability and visualization technologies into their leading industrial software, simulation and automation solutions to accelerate product development and optimize manufacturing processes.

For physical AI, Intrinsic, an Alphabet company, is enabling Omniverse workflows and NVIDIA robotics foundation models to transition from digital twins to hardware deployments using Flowstate. Databricks is integrating NVIDIA Omniverse with the Databricks Data Intelligence Platform, which will enable large-scale synthetic data generation for physical AI.

General Motors, America's largest auto manufacturer, announced its adoption of Omniverse to enhance its factories and train platforms for operations such as material handling, transportation and precision welding. At the other end of the manufacturing life cycle, Unilever announced its adoption of Omniverse and physically accurate digital twins to streamline and optimize marketing content creation for its products.

Omniverse in Every Cloud
To simplify development, deployment and scale-out of OpenUSD-based applications, NVIDIA Omniverse is now available as virtual desktop images on EC2 G6e instances with NVIDIA L40S GPUs in AWS Marketplace. The Microsoft Azure Marketplace now features preconfigured Omniverse instances and Omniverse Kit App Streaming on NVIDIA A10 GPUs, allowing developers to easily develop and stream their custom Omniverse applications. These cloud-based NVIDIA Omniverse developer tools and services are expected to be available later this year on Oracle Cloud Infrastructure compute bare-metal instances with NVIDIA L40S GPUs, as well as the newly announced NVIDIA RTX PRO™ 6000 Blackwell Server Edition on Google Cloud.

OpenUSD Unifies Robotics Workflows
At GTC, NVIDIA introduced the OpenUSD Asset Structure Pipeline for Robotics with Disney Research and Intrinsic. This new structure and data pipeline uses today's best practices within OpenUSD to work toward unifying robotic workflows, providing a common language for all data sources.

Learn more by watching the NVIDIA GTC keynote and registering for OpenUSD, physical AI and industrial AI sessions, as well as trainings featuring NVIDIA experts and industry leaders at the show, which runs through March 21.
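Because the release repeatedly presents OpenUSD as the common language for industrial and robotics data, a brief sketch may help readers unfamiliar with it. The snippet below uses the open-source pxr Python bindings (installable as the usd-core package) to compose a small scene that references an external robot asset; the file names are invented for illustration, and this is generic OpenUSD usage, not an Omniverse- or blueprint-specific workflow.

```python
# Minimal OpenUSD sketch: build a small stage and reference an external
# robot asset, the basic composition pattern that lets different tools
# contribute layers to one shared scene. File names are hypothetical.
from pxr import Usd, UsdGeom

# Create a new USD stage (the container for the composed scene).
stage = Usd.Stage.CreateNew("factory_cell.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

# Define a root transform for the factory cell and a child prim for the robot.
UsdGeom.Xform.Define(stage, "/FactoryCell")
robot = UsdGeom.Xform.Define(stage, "/FactoryCell/Robot")

# Reference an externally authored robot asset (e.g. exported from a CAD or
# simulation tool) instead of copying its geometry into this file.
robot.GetPrim().GetReferences().AddReference("./robot_arm.usd")

# Save the composed layer to disk.
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```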
About NVIDIA
NVIDIA (NASDAQ: NVDA) is the world leader in accelerated computing.

© 2025 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, NVIDIA Cosmos, NVIDIA Omniverse and NVIDIA RTX PRO are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated.