School safety patrollers rewarded with science-themed activities at University of Lethbridge

CTV News · 23-05-2025

Safety patrollers from across southern Alberta were treated to a day of science-themed activities at the University of Lethbridge as a reward for their hard work.

Related Articles

Will AI go rogue? Noted researcher Yoshua Bengio launches venture to keep it safe

Globe and Mail · 2 hours ago

Famed Canadian artificial-intelligence researcher Yoshua Bengio is launching a non-profit organization backed by close to US$30-million in philanthropic funding to develop safe AI systems that cannot deceive or harm humans, and to find ways to ensure that humanity remains in control of the powerful technology.

The Turing Award winner, whose work helped pave the way for today's generative AI technologies, already holds multiple titles. He is a professor at the Université de Montréal, the scientific adviser at the Mila - Quebec Artificial Intelligence Institute and recently chaired the first international report on AI safety. His new venture will operate differently. 'This is more like what a company would do to solve a particular problem. It's much more top-down and mission-oriented,' he said.

The non-profit is called LawZero, a reference to science fiction writer Isaac Asimov's Three Laws of Robotics, which stipulate that intelligent machines may not harm human beings.

LawZero, based in Montreal, will develop a concept called Scientist AI, which Prof. Bengio and his colleagues outlined in a paper earlier this year. In short, it is an AI system designed without the negative traits found in today's large language models and chatbots, such as sycophancy, overconfidence and deception. Instead, the system would answer questions, prioritize honesty and help unlock new insights to aid scientific discovery.

The system could also be used to develop a tool to keep AI agents, which can plan and complete tasks on their own, from going rogue. 'The plan is to build an AI that will help to manage the risks and control AIs that are not trusted. Right now, we don't know how to build agents that are trustworthy,' he said. The tool, which he hopes will be adopted by companies, would act as a gatekeeper, rejecting actions from AI systems that could be harmful. The plan is to build a prototype in the next 18 to 24 months.

AI agents are fairly rudimentary today. They can browse the web, fill out forms, analyze data and use other applications. AI companies are making these tools smarter so they can take over more complex tasks, ostensibly to make our lives easier. Some AI experts argue that the risk grows as these tools become more powerful, especially if they are integrated into critical infrastructure or used for military purposes without adequate human oversight. AI agents can misinterpret instructions and achieve goals in harmful or unexpected ways, a failure known as the alignment problem.

Researchers at AI company Hugging Face Inc. recently argued against developing autonomous agents. 'We find no clear benefit of fully autonomous AI agents, but many foreseeable harms from ceding full human control,' they wrote, pointing to an incident in 1980 when computer systems mistakenly warned of an impending Soviet missile attack. Human verification revealed the error.

Prof. Bengio also highlighted recent research showing that popular AI models are capable of scheming, deceiving and hiding their true objectives when pushed to pursue a goal at all costs. 'When they get much better at strategizing and planning, that increases the chances of loss-of-control accidents, which could be disastrous,' he said.

Around 15 people are working with LawZero, and Prof. Bengio intends to bring on more by offering salaries competitive with corporate AI labs, which would be impossible in academia, he said. The non-profit setting is also ideal for this kind of work because it is free of the pressure to maximize profit over safety. 'The leading companies are, unfortunately, in this competitive race,' he said.

The project has been incubated at Mila and has received funding from Skype co-founder Jaan Tallinn, along with the Future of Life Institute, Schmidt Sciences and Open Philanthropy, organizations concerned about the potential risks posed by AI.

After the release of ChatGPT in late 2022, many AI researchers, including Prof. Bengio and Geoffrey Hinton, began speaking up about the profound dangers posed by superintelligent AI systems, which some experts believe are closer to reality than originally thought. The potential downsides of AI run the gamut from biased decision-making, turbocharged disinformation campaigns and a concentration of corporate and geopolitical power to bad actors using the technology to develop bioweapons, mass unemployment and the disempowerment of humanity at large. None of these outcomes is a given, and all are hotly debated. Experts such as Prof. Bengio who focus on what other researchers see as far-off and outlandish concerns have been branded 'doomers.'

Some governments took these warnings seriously, with the United Kingdom organizing major international summits on AI safety and regulation. But the conversation has since swung heavily in the other direction, toward rapid AI development and adoption to capture the economic benefits. U.S. Vice-President JD Vance set the tone in February with a speech at an AI conference in France. 'The AI future is not going to be won by hand-wringing about safety. It will be won by building,' he said.

Prof. Bengio, among the more vigorous hand-wringers, was in the audience for that speech. He laughed when asked what he was thinking that day but answered more generally. 'I wish that the current White House had a better understanding of the objective data that we've seen over the last five years, and especially in the last six months, which really triggers red flags and the need for wisdom and caution,' he said.
