
Latest news with #LawZero

‘Godfather of AI' now fears it's unsafe and has a plan to fix it

Asia Times

5 hours ago


This week, the US Federal Bureau of Investigation revealed that two men suspected of bombing a fertility clinic in California last month allegedly used artificial intelligence (AI) to obtain bomb-making instructions. The FBI did not disclose the name of the AI program in question.

This brings into sharp focus the urgent need to make AI safer. We are currently living in the 'wild west' era of AI, where companies are fiercely competing to develop the fastest and most entertaining AI systems. Each company wants to outdo its competitors and claim the top spot. This intense competition often leads to intentional or unintentional shortcuts – especially when it comes to safety.

Coincidentally, at around the same time as the FBI's revelation, one of the godfathers of modern AI, Canadian computer science professor Yoshua Bengio, launched a new nonprofit organisation dedicated to developing a new AI model specifically designed to be safer than other AI models – and to target those that cause social harm. So what is Bengio's new AI model? And will it actually protect the world from AI-facilitated harm?

In 2018, Bengio, alongside his colleagues Yann LeCun and Geoffrey Hinton, won the Turing Award for the groundbreaking research on deep learning they had published three years earlier. A branch of machine learning, deep learning attempts to mimic the processes of the human brain by using artificial neural networks to learn from computational data and make predictions.

Bengio's new nonprofit organisation, LawZero, is developing 'Scientist AI'. Bengio has said this model will be 'honest and not deceptive', and will incorporate safety-by-design principles. According to a preprint paper released online earlier this year, Scientist AI will differ from current AI systems in two key ways. First, it can assess and communicate its confidence level in its answers, helping to reduce the problem of AI giving overly confident and incorrect responses. Second, it can explain its reasoning to humans, allowing its conclusions to be evaluated and tested for accuracy.

Interestingly, older AI systems had this feature. But in the rush for speed and new approaches, many modern AI models can't explain their decisions. Their developers have sacrificed explainability for speed.

Bengio also intends Scientist AI to act as a guardrail against unsafe AI. It could monitor other, less reliable and harmful AI systems – essentially fighting fire with fire. This may be the only viable solution to improve AI safety: humans cannot properly monitor systems such as ChatGPT, which handle over a billion queries daily. Only another AI can manage this scale. Using an AI system against other AI systems is not just a sci-fi concept – it's a common practice in research to compare and test different levels of intelligence in AI systems.

Large language models and machine learning are just small parts of today's AI landscape. Another key addition Bengio's team is making to Scientist AI is a 'world model', which brings certainty and explainability. Just as humans make decisions based on their understanding of the world, AI needs a similar model to function effectively.

The absence of a world model in current AI systems is clear. One well-known example is the 'hand problem': most of today's AI models can imitate the appearance of hands but cannot replicate natural hand movements, because they lack an understanding of the physics – a world model – behind them.
Another example is how models such as ChatGPT struggle with chess, failing to win and even making illegal moves. This is despite simpler AI systems, which do contain a model of the 'world' of chess, beating even the best human players. These issues stem from the lack of a foundational world model in these systems, which are not inherently designed to model the dynamics of the real world.

Yoshua Bengio is recognized as one of the godfathers of AI. Photo: Alex Wong / Getty Images via The Conversation

Bengio is on the right track, aiming to build safer, more trustworthy AI by combining large language models with other AI technologies. However, his journey isn't going to be easy. LawZero's US$30 million in funding is small compared with efforts such as the US$500 billion project announced by US President Donald Trump earlier this year to accelerate the development of AI. Making LawZero's task harder is the fact that Scientist AI – like any other AI project – needs huge amounts of data to be powerful, and most data are controlled by major tech companies.

There's also an outstanding question: even if Bengio can build an AI system that does everything he says it can, how will it be able to control other systems that might be causing harm?

Still, this project, with talented researchers behind it, could spark a movement toward a future where AI truly helps humans thrive. If successful, it could set new expectations for safe AI, motivating researchers, developers and policymakers to prioritize safety. Perhaps if we had taken similar action when social media first emerged, we would have a safer online environment for young people's mental health. And maybe, if Scientist AI had already been in place, it could have prevented people with harmful intentions from accessing dangerous information with the help of AI systems.

Armin Chitizadeh is a lecturer in the School of Computer Science at the University of Sydney.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
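The guardrail mechanism described above lends itself to a concrete illustration. The short Python sketch below shows, in toy form, a monitor that attaches a calibrated probability of harm and a human-readable rationale to another system's proposed action, and blocks the action when the estimated risk crosses a threshold. Every name, number and interface in it is a hypothetical stand-in for exposition – it is not LawZero's design or code, only a minimal rendering of the behaviour the article describes.

    # Toy guardrail sketch. All names and the 0.05 threshold are
    # hypothetical; a real monitor would derive the probability and
    # rationale from a trained model, not receive them as arguments.
    from dataclasses import dataclass

    @dataclass
    class Verdict:
        action: str
        harm_probability: float  # calibrated estimate, not a bare yes/no
        rationale: str           # stated reasoning, so humans can audit it
        allowed: bool

    def review_action(action: str, harm_probability: float,
                      rationale: str, threshold: float = 0.05) -> Verdict:
        """Allow the proposed action only if the estimated probability
        of harm stays below the threshold; always surface the estimate
        and the reasoning alongside the decision."""
        return Verdict(action, harm_probability, rationale,
                       allowed=harm_probability <= threshold)

    # Example: the monitor flags a risky action instead of letting it run.
    verdict = review_action(
        action="send unverified medical advice",
        harm_probability=0.32,  # would come from the guardrail model
        rationale="Advice contradicts established dosage guidelines.",
    )
    print(verdict.allowed, f"p(harm)={verdict.harm_probability:.2f}")

The design point mirrors the 'humility' Bengio describes: the monitor reports a probability and its reasoning rather than a confident verdict, so its own judgments remain open to inspection.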

‘Godfather Of AI' Launches Nonprofit Focused On Safer Systems

Forbes

2 days ago


Yoshua Bengio testifies on the importance of AI regulation at a U.S. Senate Judiciary Committee hearing in July 2023.

Computer scientist Yoshua Bengio, often referred to as the 'godfather' of AI, has launched a nonprofit aimed at creating AI systems that prioritize safety over business priorities. The organization, called LawZero, 'was founded in response to evidence that today's frontier AI models are developing dangerous capabilities and behaviors, including deception, self-preservation and goal misalignment,' reads a statement posted to its website on Tuesday. 'LawZero's work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today's systems, including algorithmic bias, intentional misuse and loss of human control.'

LawZero is structured as a nonprofit 'to ensure it is insulated from market and government pressures, which risk compromising AI safety,' the statement says.

Bengio is a worldwide leader in AI and a co-recipient of the 2018 A.M. Turing Award, the Association for Computing Machinery's prestigious annual prize that's sometimes called the Nobel Prize of Computing. He won the award alongside two other deep-learning pioneers — Geoffrey Hinton, another 'godfather of AI' who worked at Google, and Yann LeCun — for conceptual and engineering breakthroughs, made over decades, that have positioned deep neural networks as a critical component of computing.

While artificial intelligence has sparked considerable excitement across industries — and Bengio recognizes its potential as a driver of significant innovation — it has also led to mounting concerns about possible pitfalls. Generative AI tools are capable of producing text, images and video that spread almost instantly over social media and can be difficult to distinguish from the real thing. Bengio has called for slowing the development of AI systems to better understand and regulate them. 'There is no guarantee that someone in the foreseeable future won't develop dangerous autonomous AI systems with behaviors that deviate from human goals and values,' the University of Montreal professor wrote in a blog post explaining why he'd signed a 2023 open letter calling for a slowdown in the development of some AI tools. He has signed other such statements and appeared before the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law to outline the risks of AI misuse.

Bengio also serves as scientific director at Mila (Montreal Institute for Learning Algorithms), an artificial-intelligence research institute. Now, he'll add LawZero president and scientific director to his resume. LawZero says it's assembling a team of world-class AI researchers, though it did not immediately respond to a request for comment on who is included in that group. Together, the scientists are working on a system called Scientist AI, which LawZero calls a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. 'Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery and advance the understanding of AI risks and how to avoid them,' LawZero says. 'LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing.'

AI godfather Yoshua Bengio launches non-profit for honest AI, warns current models are lying to you

India Today

2 days ago


'Today's AI agents are trained to please and imitate—not always to tell the truth,' says Yoshua Bengio, one of the world's most respected AI researchers and one of the three AI godfathers. Bengio said this as he launched a new non-profit called LawZero with a big mission: stop rogue AI before it does real harm. With $30 million in initial funding and a team of expert researchers, Bengio wants to build something called 'Scientist AI' – a tool that acts like a psychologist for other AI systems. 'We want to build AIs that will be honest and not deceptive,' Bengio said.

Unlike today's AI agents, which he describes as 'actors' trying to imitate humans and please users, Scientist AI will work more like a neutral observer. Its job is to predict when another AI might act in a harmful or dishonest way – and flag or stop it. 'It has a sense of humility,' Bengio said of his new model. Instead of pretending to know everything, it will give probabilities, not firm answers. 'It isn't sure about the answer,' he said.

The goal? Create a kind of safety net that can monitor powerful AI agents before they go off track. These agents are increasingly being used to complete tasks without human supervision, raising fears about what could happen if one starts making dangerous decisions or tries to avoid being shut down. Scientist AI would assess how likely it is that an AI's actions could cause harm. If the risk is too high, it could block that action.

It's an ambitious plan, and Bengio knows it has to scale. 'The point is to demonstrate the methodology so that then we can convince either donors or governments or AI labs to put the resources that are needed to train this at the same scale as the current frontier AIs,' he said. 'It is really important that the guardrail AI be at least as smart as the AI agent that it is trying to monitor and control.'

Bengio's efforts are backed by major names in AI safety, including the Future of Life Institute, Skype co-founder Jaan Tallinn, and Schmidt Sciences, a research group set up by former Google CEO Eric Schmidt.

The initiative comes at a time when concerns about AI safety are rising — even among those who helped build the technology. Take Geoffrey Hinton – another AI godfather and Bengio's co-winner of the 2018 Turing Award – for instance. Hinton has spent the last few years warning the public about AI's risks. He's talked about machines that could spread misinformation, manipulate people, or become too smart for us to control.

But in a recent interview with CBS, Hinton made a surprising confession: he trusts AI more than he probably should. He uses OpenAI's GPT-4 model every day and admitted, 'I tend to believe what it says, even though I should probably be suspicious.' That said, Hinton, who left Google in 2023 to speak more freely about AI dangers, remains deeply concerned about where the technology is heading. He's warned that AI systems could become persuasive enough to influence public opinion or destabilise society. Still, his recent comments show the dilemma many experts face: they're impressed by AI's power, but worried by its risks.

And then there's Yann LeCun, the third godfather of AI and Meta's top AI scientist. Unlike Bengio or Hinton, LeCun isn't too worried. In fact, he thinks people are overreacting. In an interview with the Wall Street Journal last year, LeCun said that today's AI systems don't even come close to human intelligence – or animal intelligence, for that matter. 'It's complete BS,' he said of the doomsday talk around AI. 'It seems to me that before "urgently figuring out how to control AI systems much smarter than us" we need to have the beginning of a hint of a design for a system smarter than a house cat,' he said.

LeCun played a major role in shaping today's AI, especially in image and speech recognition. At Meta, his teams continue to build powerful tools that help run everything from automatic translation to content moderation. He believes AI is still just a useful tool – not something to fear.

These different approaches highlight an important truth: when it comes to AI, even the experts don't agree. But if Bengio's project takes off, we might soon have systems smart enough – and honest enough – to keep each other in check.

AI pioneer launches non-profit to develop safe-by-design AI models

Euronews

2 days ago


One of the world's most cited artificial intelligence (AI) researchers is launching a new non-profit that will design safe AI systems. Yoshua Bengio, a Canadian-French AI scientist who has won the prestigious Turing Award for his work on deep learning and has been dubbed one of the "godfathers" of AI, announced the launch of LawZero in Montreal.

The new non-profit is assembling a "world-class" team of AI researchers dedicated to "prioritising safety over commercial imperatives," a statement from the non-profit reads. "Today's frontier AI models are developing dangerous capabilities and behaviours, including deception, self-preservation, and goal misalignment," Bengio said in the statement, noting that the organisation will help unlock the "immense potential" of AI while reducing these risks.

Bengio said the non-profit was born of a new "scientific direction" he took in 2023, which has culminated in "Scientist AI," a new non-agentic AI system he and his team are developing to act as a guardrail against "uncontrolled" agentic AI systems. This approach sets LawZero apart from other companies in that it prioritises non-agentic AI, which needs direct instructions for each task instead of independently deciding how to pursue a goal, as most agentic systems do. The non-agentic AIs built by LawZero will "learn to understand the world rather than act in it," and will be trained to give "truthful answers to questions based on [external] reasoning".

Bengio elaborated on Scientist AI in a recent opinion piece for Time, where he wrote that he is "genuinely unsettled by the behaviour unrestrained AI is already demonstrating, in particular self-preservation and deception". "Rather than trying to please humans, Scientist AI could be designed to prioritise honesty," he wrote.

In its incubator phase, the organisation has received donations from backers including the Future of Life Institute, Skype co-founder Jaan Tallinn, and the Silicon Valley Community Foundation. LawZero will work out of Mila – Quebec AI Institute in Montreal, which Bengio co-founded.
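The agentic/non-agentic distinction drawn above can also be made concrete. The toy Python contrast below is purely illustrative – the class names and canned outputs are assumptions, not anything from LawZero: an agentic assistant chooses and carries out its own steps toward a goal, while a non-agentic oracle never acts, answering one directly posed question at a time with a probability and an explanation.

    # Hypothetical sketch of the two interaction styles.
    class AgenticAssistant:
        """Agentic: given a goal, it decides on its own steps and
        'executes' them without further instruction."""
        def pursue(self, goal: str) -> list[str]:
            steps = [f"search for '{goal}'", "draft a result", "send the result"]
            return [f"executed: {step}" for step in steps]  # stand-in for acting

    class NonAgenticOracle:
        """Non-agentic: never acts. It answers the single question it
        was asked, with a confidence level and its reasoning."""
        def answer(self, question: str) -> tuple[str, float, str]:
            # A real system would derive these from a learned world model.
            return ("probably unsafe", 0.8, "explanation of the reasoning")

    print(AgenticAssistant().pursue("book a flight"))
    print(NonAgenticOracle().answer("Is this proposed action safe?"))

The oracle's three-part answer (answer, probability, explanation) matches the statement that Scientist AI will give "truthful answers to questions based on [external] reasoning" rather than pursuing goals of its own.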

Top scientist wants to prevent AI from going rogue

Economic Times

2 days ago


Concerned about the rapid spread of generative AI, a pioneer researcher is developing software to keep tabs on a technology that is increasingly taking over human tasks.

Canadian computer science professor Yoshua Bengio is considered one of the godfathers of the artificial intelligence revolution and on Tuesday announced the launch of LawZero, a non-profit organization intended to mitigate the technology's inherent risks.

The winner of the Turing Award, also known as the Nobel Prize for computer science, has been warning for several years of the risks of AI, whether through its malicious use or the software itself going awry. Those risks are increasing with the development of so-called AI agents, a use of the technology that tasks computers with making decisions that were once made by human workers. The goal of these agents is to build virtual employees that can do practically any job a human can, at a fraction of the cost.

"Currently, AI is developed to maximize profit," Bengio said, adding that it was being deployed even as it continues to show flaws. Moreover, Bengio warned, AI with human-like agency could easily be used for malicious purposes such as disinformation, bioweapons, and cyberattacks. "If we lose control of rogue super-intelligent AIs, they could greatly harm humanity," he said.

One of the first objectives at LawZero will be to develop Scientist AI, a form of specially trained AI that can be used as a guardrail to ensure other AIs are behaving properly, the organization said. The organization already has over 15 researchers and has received funding from Schmidt Sciences, a charity set up by former Google boss Eric Schmidt and his wife Wendy.

The project comes as powerful large language models (LLMs) from OpenAI, Google and Anthropic are deployed across all sectors of the digital economy, while still showing significant flaws. These include AI models that show a capability to deceive and fabricate false information even as they increase in capability. In a recent example, AI company Anthropic said that during safety testing, its latest AI model tried to blackmail an engineer to avoid being replaced by another system.
