# Latest news with #ArtificialGeneralIntelligence

Yoshua Bengio launches LawZero to advance safe-by-design AI

Yahoo

20 hours ago

  • Business
  • Yahoo


Yoshua Bengio, an AI researcher, has launched LawZero, a new nonprofit organisation focused on developing technical solutions for safe-by-design AI systems. LawZero was established in response to mounting concerns over the capabilities and behaviours of current frontier AI models, including tendencies toward deception, self-preservation, and goal misalignment. Its mission is to mitigate risks such as algorithmic bias, deliberate misuse, and the potential loss of human control over advanced AI systems, the organisation said in a statement.

The nonprofit structure of LawZero is intended to shield the organisation from market and political pressures that could undermine its safety objectives. As president and scientific director of the organisation, Bengio will lead a group of over 15 researchers in developing a novel technical solution, named Scientist AI. Unlike the agentic AI systems currently being pursued by frontier companies, Scientist AI is designed to be non-agentic: it will focus on understanding the world rather than acting within it, offering transparent and truthful responses grounded in externalised reasoning. Potential applications include providing oversight for agentic systems, contributing to scientific discovery, and enhancing the understanding of AI risks.

The initiative launched with $30m in funding from several backers, including a philanthropic arm of former Google CEO Eric Schmidt and Skype co-founder Jaan Tallinn, Bloomberg reported. A professor of computer science at the Université de Montréal, Bengio is recognised as one of the pioneers of modern AI, alongside Geoffrey Hinton and Yann LeCun.

Bengio said: "LawZero is the result of the new scientific direction I undertook in 2023, after recognising the rapid progress made by private labs toward Artificial General Intelligence and beyond, as well as its profound implications for humanity.

"Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase. LawZero is my team's constructive response to these challenges. It's an approach to AI that is not only powerful but also fundamentally safe. At LawZero, we believe that at the heart of every AI frontier system, there should be one guiding principle above all: the protection of human joy and endeavour."

"Yoshua Bengio launches LawZero to advance safe-by-design AI" was originally created and published by Verdict, a GlobalData owned brand.

Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-by-Design AI

Yahoo

a day ago

  • Business
  • Yahoo


MONTRÉAL, June 3, 2025 /CNW/ - Yoshua Bengio, the most-cited artificial intelligence (AI) researcher in the world and A.M. Turing Award winner, today announced the launch of LawZero, a new nonprofit organization committed to advancing research and developing technical solutions for safe-by-design AI systems. LawZero is assembling a world-class team of AI researchers who are building the next generation of AI systems in an environment dedicated to prioritizing safety over commercial imperatives.

The organization was founded in response to evidence that today's frontier AI models are developing dangerous capabilities and behaviours, including deception, self-preservation, and goal misalignment. LawZero's work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today's systems, including algorithmic bias, intentional misuse, and loss of human control. LawZero is structured as a nonprofit organization to ensure it is insulated from market and government pressures, which risk compromising AI safety. The organization is also pulling together a seasoned leadership team to drive this ambitious mission forward.

"LawZero is the result of the new scientific direction I undertook in 2023, after recognizing the rapid progress made by private labs toward Artificial General Intelligence and beyond, as well as its profound implications for humanity," said Yoshua Bengio, President and Scientific Director at LawZero. "Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase. LawZero is my team's constructive response to these challenges. It's an approach to AI that is not only powerful but also fundamentally safe. At LawZero, we believe that at the heart of every AI frontier system, there should be one guiding principle above all: The protection of human joy and endeavour."

Scientist AI: a new model for safer artificial intelligence

LawZero has a growing technical team of over 15 researchers, pioneering a radically new approach called Scientist AI, a practical, effective and more secure alternative to today's uncontrolled agentic AI systems. Scientist AI stands apart from the approaches of frontier AI companies, which are increasingly focused on developing agentic systems. Scientist AIs are non-agentic and primarily learn to understand the world rather than act in it, giving truthful answers to questions based on transparent, externalized reasoning. Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery, and advance the understanding of AI risks and how to avoid them.

Major institutions and individuals, including the Future of Life Institute, Jaan Tallinn, Open Philanthropy, Schmidt Sciences, and the Silicon Valley Community Foundation, have made donations to the project as part of its incubation phase.

About LawZero

LawZero is a nonprofit organization committed to advancing research and creating technical solutions that enable safe-by-design AI systems. Its scientific direction is based on new research and methods led by Professor Yoshua Bengio, the most cited AI researcher in the world. Based in Montréal, LawZero's research aims to build non-agentic AI that could be used to accelerate scientific discovery, to provide oversight for agentic AI systems, and to advance the understanding of AI risks and how to avoid them. LawZero believes that AI should be cultivated as a global public good, developed and used safely towards human flourishing. LawZero was incubated at Mila - Quebec AI Institute, a non-profit founded by Professor Bengio. Mila now serves as LawZero's operating partner.

For more information, visit

SOURCE LawZero

Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-by-Design AI

Cision Canada

a day ago

  • Business
  • Cision Canada


Rise of AGI: Danger of humanity entering the first stage of Kübler-Ross model for terminal diseases

Time of India

20-05-2025

  • Time of India


Elisabeth Kübler-Ross and her colleagues proposed that when a person faces an inevitable tragedy, the psychological phases he goes through are typically denial, anger, bargaining, depression, and acceptance (in that order). When I look at the way we are treating the arrival of Artificial General Intelligence (AGI), I sense that we have entered the first stage of denial, and that is probably a very dangerous situation, as we may end up failing to act while we still can, and we may regret it very shortly.

Though it has still not percolated all the way down to the bottom of the pyramid, most of those who work in close proximity to AGI have realised that it has reached the zone of unpredictability; not in terms of what it can do, but in terms of what its owners will make it do. The biggest mistake we are making today is imagining that AGI is a global phenomenon that will evolve democratically because it will be subjected to market forces, i.e., the choices we humans make while living. The truth is, AGI is neither global nor democratic. AGI is raw power, and it is available only to a handful of people. We may carry on making H/Bollywood movies in which a genius kid saves the world, but real life is showing very little evidence of that happening.

AGI has ushered in, and shall continue to usher in, the era of THE BIG, as AGI lacks the limitations that being BIG traditionally brought to the table. AGI transcends the generalist-specialist divide, as it can be 'everything', ranging from advising a gardener on how to protect her French beans from an aphid attack to (if need be, simultaneously) guiding a drone to locate a human being on a battlefield and terminate him. Thus AGI can replace every human expertise, and it is slowly dawning upon some of us that the process has already started. When Microsoft lays off 6,000 humans from their jobs, it is just a small beginning. Soon the masters of AGI will come for every job that requires a highly trained, and thus expensive, human being.

As the cull will start from the top, humans are unlikely to mount a revolution, as the bottom of the pyramid, the traditional fodder for any revolution, will benefit from the new economics that AGI will lead to. Though my sympathies are with the doctors, engineers, lawyers and other experts who will face the first wave of the cull, that is the least of my worries. AGI will change the world's economics, and may even lead to a major economic collapse, but that is not half as scary as the more plausible deployment of AGI that we may end up with because a small group of people controls it.

What scares me is the fact that AGI is a source of power, and those who understand how the world works know that money has no real value once one has power. This means that AGI owners may initially deploy AGI to make money, but soon they will go for the bigger game, i.e., the game of controlling humanity. Let's recognise that humanity produced Alexanders and Genghis Khans even when it was not actually possible to control the world after winning it. With the rise of AGI, the limitation of not being able to control the world is also gone, making it a very attractive idea for a (most probably male) member of a species designed to try to rise higher and higher in the social hierarchy. With wars raging across the planet and powerful nations getting a chance to deploy autonomous weapons and learn from the experiment, the training and weaponisation of AGI is obviously on the fast track now. It is this arms race that is probably keeping ambitions at bay, but there is no doubt that, even as I write, ideas of ruling the world are passing through a few brains, and maybe even being discussed in war rooms somewhere on this planet.

Humanity did see something similar with the arrival of the atom bomb, but the way the bomb worked was a bit too diabolical and horrible to allow the USA to go for direct world dominance, and that edge was soon lost as many other nations made the required scientific breakthroughs and went nuclear. The weaponisation of AGI is similar to the atom bomb in terms of disruption, and it may settle into a power balance the way nuclear power did, but my concern is that AGI differs in what world dominance acquired through it would look like, making it worth a try. The masters of AGI could take over the world first through financial and manufacturing disruption, and then go for the kill by taking on any armed resistance. From that point onwards, humanity would enter an unprecedented situation. AGI-driven domination would allow its masters to track every individual human and prevent humans from building a collective with the critical mass required for a successful revolution.

With the current distribution of AGI empowerment, it looks very difficult for us to escape this fate, but the first step towards liberation is to wake up to the reality. If AGI is to live alongside us, it needs to be recognised as a natural product (as we did with genetic information), making it impossible to patent or monetise. This may slow down its growth dramatically and return AGI to the academies (where it actually belongs), but if this is not done, we will soon enter the second phase, i.e., anger. And then it will be a downhill journey with no way to stop.

Disclaimer: Views expressed above are the author's own.

OpenAI Co-Founder's Doomsday Bunker Plan for AGI Apocalypse Revealed

NDTV

20-05-2025

  • Science
  • NDTV


OpenAI co-founder Ilya Sutskever once proposed building a doomsday bunker that could protect the company's top researchers in case of an end-of-the-world "rapture" triggered by the release of a new form of artificial intelligence (AI), popularly referred to as Artificial General Intelligence (AGI), capable of surpassing the cognitive abilities of humans. The revelation was made by Karen Hao in her upcoming book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.

Mr Sutskever, long regarded as the brains behind ChatGPT, made the comment during a meeting with key scientists at the company in 2023. "Once we all get into the bunker..." said Mr Sutskever, only to be interrupted by a confused colleague: "I'm sorry, the bunker?" To which he replied: "We're definitely going to build a bunker before we release AGI." As per the book's excerpts, published in The Atlantic, it wasn't the first or only time that Mr Sutskever broached the topic. Two other sources told Ms Hao that Mr Sutskever regularly referenced the bunker in internal discussions.

Society not ready for AGI

This is not the first instance of a top-level executive working on AI models sounding the alarm about the future of AGI. Google DeepMind CEO Demis Hassabis has already warned that society is not ready for AGI. "I think we are on the cusp of that. Maybe we are five to 10 years out. Some people say shorter, I wouldn't be surprised," said Mr Hassabis when quizzed about the timeline for AGI becoming a reality. "It's a sort of like probability distribution. But it's coming, either way it's coming very soon and I'm not sure society's quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well," he added.

The 2024 Nobel Prize in Chemistry winner has previously called for the establishment of a UN-like umbrella organisation to oversee AGI's development.

What is AGI?

AGI takes AI a step further. While AI is task-specific, AGI aims to possess intelligence that can be applied across a wide range of tasks, similar to human intelligence. In essence, AGI would be a machine with the ability to understand, learn, and apply knowledge in diverse domains, much like a human being.
