
AgiBot Partners with Physical Intelligence to Pioneer Global Innovation in Embodied Intelligence
Shanghai, China--(Newsfile Corp. - April 2, 2025) - On April 2, AgiBot announced a partnership with the internationally renowned embodied intelligence company Physical Intelligence (Pi). The collaboration will focus on deep technical cooperation in embodied intelligence, particularly targeting long-horizon, complex tasks in dynamic environments. Notably, Dr. Luo Jianlan, who recently joined AgiBot, will lead the Embodied Intelligence Research Center and drive in-depth collaboration between the two parties.
To view an enhanced version of this graphic, please visit:
https://images.newsfilecorp.com/files/10589/247097_bb05f3f3c4bcfa91_001full.jpg
Forging a Synergistic Alliance to Scale New Heights in Embodied Intelligence
AgiBot and Physical Intelligence (Pi) have achieved preliminary milestones in their collaboration, demonstrating a single policy that executes multiple tasks from diverse instruction inputs. The model outputs commands for various end-effectors, including dexterous hands and grippers, and takes input from multiple sensor types, such as fisheye and pinhole cameras.
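To make the described input/output structure concrete, below is a minimal Python sketch of such a multi-task, multi-embodiment policy interface. It is purely illustrative: the class, field names, and action dimensions are assumptions made for this example, not details of the actual AgiBot/Pi model.

```python
# Hypothetical sketch of a single policy that maps a language instruction plus
# multi-camera observations to commands for whichever end-effector is attached.
# All names and dimensions below are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict
import numpy as np


@dataclass
class Observation:
    images: Dict[str, np.ndarray]   # e.g. {"fisheye_head": ..., "pinhole_wrist": ...}
    proprio: np.ndarray             # joint positions, gripper/hand state


class MultiEmbodimentPolicy:
    """One policy, many tasks: conditioned on an instruction string and
    observations, it emits actions sized for the attached end-effector."""

    def __init__(self, end_effector: str):
        # Assumed action dimensionalities for two example end-effectors.
        self.action_dim = {"gripper": 7, "dexterous_hand": 22}[end_effector]

    def act(self, instruction: str, obs: Observation) -> np.ndarray:
        # A real model would encode the instruction and images with a
        # vision-language backbone; this placeholder returns a zero action.
        return np.zeros(self.action_dim)


# Usage: the same interface serves different hardware configurations.
policy = MultiEmbodimentPolicy(end_effector="dexterous_hand")
action = policy.act("tie the scarf around the mannequin's neck",
                    Observation(images={}, proprio=np.zeros(14)))
```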
In a groundbreaking demonstration of a human-like task, the robot successfully completes a scarf-tying task through dual-arm collaboration. The model observes the relative positions of the scarf and head in a mirror reflection to determine proper initial placement. The left hand then grasps the asymmetrical end of the scarf and adjusts it to an optimal length ratio. Next, the right hand receives the scarf from the left and executes a spiral "wrapping" motion with human-like dexterity.
Cannot view this video? Visit:
https://www.youtube.com/watch?v=0A3z7_Mnw70
Physical Intelligence is a global leader in embodied intelligence technology, dedicated to applying Artificial General Intelligence (AGI) to the physical world. Founded by top-tier scientists, engineers, and robotics experts worldwide, including pioneers in embodied intelligence such as Prof. Sergey Levine and Prof. Chelsea Finn, the company has developed advanced embodied models like π0 and Hi Robot.
AgiBot focuses on integrating AI with robots to create general-purpose robotic products and applications. AgiBot has established a cutting-edge full-stack technology platform, investing simultaneously in hardware, data, and algorithms, and has already mass-produced more than 1,000 general-purpose robots.
As two globally leading innovators in embodied intelligence, AgiBot and Pi will join forces to advance the development and application of embodied intelligence.
Leading Innovation with a World-Class Technical Team
A leading scholar in embodied intelligence, Dr. Luo Jianlan has officially joined AgiBot as Chief Scientist. He will lead the establishment of the "AgiBot Embodied Intelligence Research Center", overseeing algorithm development and engineering implementation.
Dr. Luo Jianlan graduated from the University of California, Berkeley, and previously conducted research at Google X and Google DeepMind. During his postdoctoral fellowship at the Berkeley Artificial Intelligence Research Lab (BAIR), he served as a core member of Prof. Sergey Levine's team, leading the development of SERL/HIL-SERL, the world's first real-world robotic reinforcement learning system to reach superhuman-level performance. This groundbreaking work elevated task success rates to 100% for the first time and has been widely adopted globally.
The AgiBot Embodied Intelligence Research Center focuses on embodied System 1 and System 2 architectures, spatial intelligence, reinforcement learning, and related areas. By holistically integrating software, hardware, and algorithms, it addresses core challenges in embodied intelligence. Driven by real-world challenges, the center aims to forge an ecosystem spanning foundational research, applied science, and commercialization. The Embodied Intelligence Research Center is actively recruiting talent.
Follow AgiBot's channels to learn more:
https://www.facebook.com/AgiBot.zhiyuan/
https://www.linkedin.com/company/agibot/
https://www.youtube.com/@About-AgiBot
https://www.tiktok.com/@agibot_
https://x.com/AgiBot_zhiyuan
Related Articles

CTV News
04-06-2025
Google's DeepMind CEO says there are bigger risks to worry about than AI taking our jobs
[Photo caption: A general view of the Google DeepMind offices after the announcement that Founder and CEO Demis Hassabis and senior research scientist John M. Jumper received the 2024 Nobel Prize for Chemistry, October 9, 2024, London, England. (via CNN Newsource)]

Demis Hassabis, CEO of Google's AI research arm DeepMind and a Nobel Prize laureate, isn't too worried about an AI 'jobpocalypse.' Instead of fretting over AI replacing jobs, he's worried about the technology falling into the wrong hands, and about a lack of guardrails to keep sophisticated, autonomous AI models under control. 'Both of those risks are important, challenging ones,' he said in an interview with CNN's Anna Stewart at the SXSW festival in London, which takes place this week.

Last week, the CEO of high-profile AI lab Anthropic had a stark warning about the future of the job landscape, claiming that AI could wipe out half of entry-level white-collar jobs. But Hassabis said he's most concerned about the potential misuse of what AI developers call 'artificial general intelligence,' a theoretical type of AI that would broadly match human-level intelligence. 'A bad actor could repurpose those same technologies for a harmful end,' he said. 'And so one big thing is… how do we restrict access to these systems, powerful systems, to bad actors… but enable good actors to do many, many amazing things with it?'

[Photo caption: Google DeepMind Founder and CEO Demis Hassabis poses for a photograph after the announcement that he and John M. Jumper, a senior research scientist at DeepMind, received the 2024 Nobel Prize for Chemistry, October 9, 2024, London, England. (via CNN Newsource)]

Hackers have used AI to generate voice messages impersonating US government officials, the Federal Bureau of Investigation said in a May public advisory. A report commissioned by the US State Department last year found that AI could pose 'catastrophic' national security risks, CNN reported. AI has also facilitated the creation of deepfake pornography, though the Take It Down Act, which U.S. President Donald Trump signed into law last month, aims to stop the proliferation of these deepfakes by making it illegal to share nonconsensual explicit images online.

Hassabis isn't the first to call out such concerns. But his comments further underscore both the promise of AI and the alarm it brings as the technology gets better at handling complex tasks like writing code and generating video clips. While AI has been heralded as one of the biggest technological advancements since the internet, it also gives scammers and other malicious actors more tools than ever before. And it's rapidly advancing without much regulation as the United States and China race to establish dominance in the field. In February, Google removed language from its AI ethics policy website that had pledged not to use AI for weapons and surveillance.

Hassabis believes there should be an international agreement on the fundamentals of how AI should be utilized and how to ensure the technology is only used 'for the good use cases.' 'Obviously, it's looking difficult at present day with the geopolitics as it is,' he said. 'But, you know, I hope that as things will improve, and as AI becomes more sophisticated, I think it'll become more clear to the world that that needs to happen.'
The DeepMind CEO also believes we're headed toward a future in which people use AI 'agents' to execute tasks on their behalf, a vision Google is working towards by integrating more AI into its search function and developing AI-powered smart glasses. 'We sometimes call it a universal AI assistant that will go around with you everywhere, help you in your everyday life, do mundane admin tasks for you, but also enrich your life by recommending you amazing things, from books and films to maybe even friends to meet,' he said.

New AI models are showing progress in areas like video generation and coding, adding to fears that the technology could eliminate jobs. 'AI is starting to get better than humans at almost all intellectual tasks, and we're going to collectively, as a society, grapple with it,' Anthropic CEO Dario Amodei told CNN just after telling Axios that AI could axe entry-level jobs. In April, Meta CEO Mark Zuckerberg said he expects AI to write half the company's code by 2026.

However, an AI-focused future is closer to promise than reality. AI is still prone to shortcomings like bias and hallucinations, which have sparked a handful of high-profile mishaps for the companies using the technology. The Chicago Sun-Times and the Philadelphia Inquirer, for example, published an AI-generated summer reading list including nonexistent books last month.

While Hassabis says AI will change the workforce, he doesn't believe AI will render jobs obsolete. Like some others in the AI space, he believes the technology could result in new types of jobs and increase productivity. But he also acknowledged that society will likely have to adapt and find some way of 'distributing all the additional productivity that AI will produce in the economy.' He compared AI to the rise of other technological changes, like the internet. 'There's going to be a huge amount of change,' he said. 'Usually what happens is new, even better jobs arrive to take the place of some of the jobs that get replaced. We'll see if that happens this time.'

By Lisa Eadicicco and Anna Stewart, CNN


Cision Canada
03-06-2025
Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-by-Design AI
MONTRÉAL, June 3, 2025 /CNW/ - Yoshua Bengio, the most-cited artificial intelligence (AI) researcher in the world and A.M. Turing Award winner, today announced the launch of LawZero, a new nonprofit organization committed to advancing research and developing technical solutions for safe-by-design AI systems. LawZero is assembling a world-class team of AI researchers who are building the next generation of AI systems in an environment dedicated to prioritizing safety over commercial imperatives.

The organization was founded in response to evidence that today's frontier AI models are developing dangerous capabilities and behaviours, including deception, self-preservation, and goal misalignment. LawZero's work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today's systems, including algorithmic bias, intentional misuse, and loss of human control. LawZero is structured as a nonprofit organization to ensure it is insulated from market and government pressures, which risk compromising AI safety. The organization is also pulling together a seasoned leadership team to drive this ambitious mission forward.

"LawZero is the result of the new scientific direction I undertook in 2023, after recognizing the rapid progress made by private labs toward Artificial General Intelligence and beyond, as well as its profound implications for humanity," said Yoshua Bengio, President and Scientific Director at LawZero. "Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase. LawZero is my team's constructive response to these challenges. It's an approach to AI that is not only powerful but also fundamentally safe. At LawZero, we believe that at the heart of every AI frontier system, there should be one guiding principle above all: The protection of human joy and endeavour."

Scientist AI: a new model for safer artificial intelligence

LawZero has a growing technical team of over 15 researchers, pioneering a radically new approach called Scientist AI, a practical, effective and more secure alternative to today's uncontrolled agentic AI systems. Scientist AI stands apart from the approaches of frontier AI companies, which are increasingly focused on developing agentic systems. Scientist AIs are non-agentic and primarily learn to understand the world rather than act in it, giving truthful answers to questions based on transparent, externalized reasoning. Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery, and advance the understanding of AI risks and how to avoid them.

Major institutions and individuals, including the Future of Life Institute, Jaan Tallinn, Open Philanthropy, Schmidt Sciences, and the Silicon Valley Community Foundation, have made donations to the project as part of its incubation phase.

About LawZero

LawZero is a nonprofit organization committed to advancing research and creating technical solutions that enable safe-by-design AI systems. Its scientific direction is based on new research and methods led by Professor Yoshua Bengio, the most-cited AI researcher in the world. Based in Montréal, LawZero's research aims to build non-agentic AI that could be used to accelerate scientific discovery, to provide oversight for agentic AI systems, and to advance the understanding of AI risks and how to avoid them.
LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing. LawZero was incubated at Mila - Quebec AI Institute, a non-profit founded by Professor Bengio. Mila now serves as LawZero's operating partner. For more information, visit


Calgary Herald
01-06-2025
Artificial intelligence could spark massive job loss in U.S.: expert
Artificial intelligence could sound the death knell for entry-level white-collar jobs while causing U.S. unemployment to rise as high as 20%, the chief executive of a top AI company warned.

Anthropic CEO Dario Amodei, whose firm built the 'Claude' AI chatbot, warned that executives and politicians must stop 'sugar-coating' the mass layoffs that could happen in fields such as tech, finance and law.

'Most of them are unaware that this is about to happen,' Amodei told Axios in a recent interview, per the New York Post. 'It sounds crazy, and people just don't believe it.'

The Anthropic boss said he expects the job market downfall to play out over the next one to five years. At the same time, he said he expects AI to provide massive benefits to the economy and fuel unprecedented advancements in medicine.

'Cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don't have jobs,' said Amodei, describing one potential scenario.

Amodei's most recent warning comes as Anthropic competes in a race with other tech giants such as Google, Meta and OpenAI to develop artificial general intelligence, or AGI, which describes an AI model with human-level cognitive capabilities or greater.

Amodei started Anthropic after previously working at OpenAI under its CEO Sam Altman. He is one of several executives who have warned about impending upheaval in the job market.

Meta CEO Mark Zuckerberg earlier this year discussed how AI was taking on a bigger role in Meta's workforce.

'Probably in 2025, we at Meta, as well as the other companies that are basically working on this, are going to have an AI that can effectively be a sort of mid-level engineer that you have at your company that can write code,' Zuckerberg said during an appearance on The Joe Rogan Experience podcast.