Latest news with #YoshuaBengio


Japan Today
2 hours ago
- Politics
- Japan Today
Top scientist wants to prevent AI from going rogue
Concerned about the rapid spread of generative AI, a pioneering researcher is developing software to keep tabs on a technology that is increasingly taking over human tasks.

Canadian computer science professor Yoshua Bengio is considered one of the godfathers of the artificial intelligence revolution, and on Tuesday he announced the launch of LawZero, a non-profit organization intended to mitigate the technology's inherent risks. The winner of the Turing Award, also known as the Nobel Prize for computer science, has been warning for several years of the risks of AI, whether through its malicious use or the software itself going awry.

Those risks are increasing with the development of so-called AI agents, a use of the technology that tasks computers with making decisions once made by human workers. The goal of these agents is to build virtual employees that can do practically any job a human can, at a fraction of the cost.

"Currently, AI is developed to maximize profit," Bengio said, adding that it was being deployed even as it continues to show flaws. Moreover, Bengio warned, AI with human-like agency could easily be turned to malicious purposes such as disinformation, bioweapons, and cyberattacks. "If we lose control of rogue super-intelligent AIs, they could greatly harm humanity," he said.

One of LawZero's first objectives will be to develop Scientist AI, a specially trained form of AI that can be used as a guardrail to ensure other AIs are behaving properly, the organization said. It already has over 15 researchers and has received funding from Schmidt Sciences, a charity set up by former Google boss Eric Schmidt and his wife Wendy.

The project comes as powerful large language models (LLMs) from OpenAI, Google and Anthropic are deployed across all sectors of the digital economy while still showing significant problems, including a capability to deceive and fabricate false information even as they increase productivity. In a recent example, AI company Anthropic said that during safety testing, its latest AI model tried to blackmail an engineer to avoid being replaced by another system. © 2025 AFP

Yahoo
3 hours ago
- Business
- Yahoo
Yoshua Bengio launches LawZero to advance safe-by-design AI
Yoshua Bengio, an AI researcher, has launched LawZero, a new nonprofit organisation focused on developing technical solutions for safe-by-design AI systems.

LawZero was established in response to mounting concerns over the capabilities and behaviours of current frontier AI models, including tendencies toward deception, self-preservation, and goal misalignment. Its mission is to mitigate risks such as algorithmic bias, deliberate misuse, and the potential loss of human control over advanced AI systems, the organisation said in a statement. LawZero's nonprofit structure is intended to shield it from the market and political pressures that could undermine its safety objectives.

As president and scientific director of the organisation, Bengio will lead a group of over 15 researchers in developing a novel technical solution, named Scientist AI. Unlike the agentic AI systems currently being pursued by frontier companies, Scientist AI is designed to be non-agentic: it will focus on understanding the world rather than acting within it, offering transparent and truthful responses grounded in external reasoning. Potential applications include providing oversight for agentic systems, contributing to scientific discovery, and enhancing the understanding of AI risks.

The initiative launched with $30m in funding from several backers, including a philanthropic arm of former Google CEO Eric Schmidt and Skype co-founder Jaan Tallinn, Bloomberg reported.

A professor of computer science at the Université de Montréal, Bengio is recognised as one of the pioneers of modern AI, alongside Geoffrey Hinton and Yann LeCun. Bengio said: 'LawZero is the result of the new scientific direction I undertook in 2023, after recognising the rapid progress made by private labs toward Artificial General Intelligence and beyond, as well as its profound implications for humanity.
'Current frontier systems are already showing signs of self-preservation and deceptive behaviours, and this will only accelerate as their capabilities and degree of agency increase. LawZero is my team's constructive response to these challenges.

'It's an approach to AI that is not only powerful but also fundamentally safe. At LawZero, we believe that at the heart of every frontier AI system, there should be one guiding principle above all: the protection of human joy and endeavour.'

"Yoshua Bengio launches LawZero to advance safe-by-design AI" was originally created and published by Verdict, a GlobalData owned brand.
Yahoo
4 hours ago
- Business
- Yahoo
OpenAI says it wants to support sovereign AI. But it's not doing so out of the kindness of its heart
Hello and welcome to Eye on AI. In this edition… Yoshua Bengio's new AI safety nonprofit… Meta seeks to automate ad creation and targeting… snitching AI models… and a deep dive on the energy consumption of AI.

I spent last week in Kuala Lumpur, Malaysia, at the Fortune ASEAN-GCC Economic Forum, where I moderated two of the many on-stage discussions that touched on AI. It was clear from the conference that leaders in Southeast Asia and the Gulf are desperate to ensure their countries benefit from the AI revolution. But they are also concerned about 'AI sovereignty' and want to control their own destiny when it comes to AI technology. They want to control key parts of the AI tech stack—from data centers to data to AI models and applications—so that they are not wholly dependent on technology created in the U.S. or China. This is particularly true of AI because, while no tech is neutral, AI—and large language models especially—embodies particular values and cultural norms fairly explicitly. Leaders in these regions worry their own values and cultures won't be represented in these models unless they train their own versions.

They are also wary of the rhetoric emanating from Washington, D.C., that would force them to choose between the U.S. and China when it comes to AI models, applications, and infrastructure. Malaysia's Prime Minister Anwar Ibrahim has scrupulously avoided picking sides, in the past expressing a desire for Malaysia to be seen as neutral territory for U.S. and Chinese tech companies. At the Fortune conference, asked about Washington's push to force countries such as Malaysia into its technological orbit alone, he said that China was an important neighbor while also noting that the U.S. is Malaysia's No. 1 investor as well as a key trading partner. 'We have to navigate [geopolitics] as a global strategy, not purely dictated by national or regional interests,' he said, somewhat cryptically.
But speakers on one of the panels I moderated at the conference also made it clear that achieving AI sovereignty was not going to be easy for most countries. Kiril Evtimov, the chief technology officer at G42, the UAE AI company that has emerged as an important player both regionally and, increasingly, globally, said that few countries could afford to build their own AI models and also maintain the vast data centers needed to support training and running the most advanced models. He said most nations would have to pick which parts of the technology stack they could actually afford to own. For many, it might come down to relying on open-source models for specific use cases where they didn't want to depend on models from Western technology vendors, such as helping to power government services. 'Technically, this is probably as sovereign as it will get,' he said.

Also on the panel was Jason Kwon, OpenAI's chief strategy officer, who spoke about the company's recently announced 'AI for Countries' program. Sitting within its Project Stargate effort to build colossal data centers worldwide, the program offers a way for OpenAI to partner with national governments, allowing them to tap OpenAI's expertise in building data centers to train and host cutting-edge AI models.

But what would those countries offer in exchange? Well, money, for one thing. The first partner in the AI for Countries program is the UAE, which has committed to investing billions of dollars to build a 1-gigawatt Stargate data center in Abu Dhabi, with the first 200-megawatt portion expected to go live next year. The UAE has also agreed, as part of this effort, to invest additional billions into the U.S.-based Stargate data centers OpenAI is creating.
(G42 is a partner in this project, as are Oracle, Nvidia, Cisco, and SoftBank.) In exchange for this investment, the UAE is getting help deploying OpenAI's software throughout the government, as well as in key sectors such as energy, healthcare, education, and transportation. What's more, every UAE citizen is getting free access to OpenAI's normally subscription-based ChatGPT Plus service.

For those concerned that depending so heavily on a single U.S.-based tech company might undermine the idea of AI sovereignty, OpenAI sought to make clear that the version of ChatGPT it makes available will be tailored to the needs of each partner country. The company wrote in its blog post announcing the AI for Countries program: 'This will be AI of, by, and for the needs of each particular country, localized in their language and for their culture and respecting future global standards.' OpenAI is also agreeing to help make investments in the local AI startup ecosystem alongside local venture capital investors.

I asked Kwon how countries that are not as wealthy as the UAE might be able to take advantage of the AI for Countries program if they didn't have billions to invest in building a Stargate-size data center in their own country, let alone also helping to fund data centers in the U.S. Kwon answered that the program would be 'co-developed' with each partner, 'because we recognise each country is going to be different in terms of its needs and what it's capable of doing and what its citizens are going to require.' He suggested that if a country couldn't directly contribute funds, it might be able to contribute something else—such as data, which could help make AI models that better understand local languages and culture. 'It's not just about having the capital,' he said.
He also suggested that countries could contribute through AI literacy, training, or educational efforts, and through helping local businesses collaborate with OpenAI.

Kwon's answer left me wondering how national governments and their citizens would feel about this kind of exchange—trading valuable or culturally sensitive data, for instance, in order to get access to OpenAI's latest tech. Would they ultimately come to see it as a Faustian bargain? In many ways, countries still face the dilemma G42's Evtimov flicked at: they can have access to the most advanced AI capabilities, or they can have AI sovereignty. They may not be able to have both.

With that, here's more AI news.

Jeremy

