Latest news with #AISecurityInstitute


Cision Canada
30-07-2025
- Business
- Cision Canada
Government of Canada partners with United Kingdom to invest in groundbreaking AI alignment research Français
OTTAWA, ON, July 30, 2025 /CNW/ - Investing in artificial intelligence (AI) is key to unlocking Canada's prosperity, resiliency and security as well as strengthening the country's leadership. The Government of Canada is committed to scaling up Canada's AI ecosystem, building AI infrastructure, increasing the adoption of AI systems and strengthening trust. In doing so, it is essential to develop AI in a safe and responsible manner so that it benefits all Canadians.

Today, the Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario, announced that the Canadian AI Safety Institute, through the Canadian Institute for Advanced Research, will contribute $1 million to the UK AI Security Institute's Alignment Project, a cutting-edge initiative to advance research on AI alignment. This critical field is focused on making advanced AI systems operate in a reliable and beneficial way, without unintended or harmful actions. The Alignment Project is backed by a CAN$29 million (£15.9 million) investment from an international coalition that includes Schmidt Sciences, Amazon Web Services, Halcyon Futures, the Safe AI Fund and the Advanced Research and Invention Agency. The project will support pioneering work to keep advanced systems safe by maintaining transparency, predictability and responsiveness to human oversight.
Through its collaborative approach, the project will remove key barriers that have previously limited alignment research by offering three distinct support streams:
- Grant funding: Up to $1.8 million for researchers across disciplines, including computer science and cognitive science
- Compute access: Dedicated compute resources, enabling technical experiments beyond typical academic reach
- Venture capital: Investment from private funders to accelerate commercial alignment solutions

The project will be guided by a world-class advisory board that includes Canadian AI expert Yoshua Bengio as well as Zico Kolter, Shafi Goldwasser and Andrea Lincoln. This partnership will allow Canada to navigate this pivotal period of rapid technological advancement alongside profound geopolitical shifts and to position our country and its partners for success.

Quotes

"We are at a hinge moment in the story of AI, where our choices today will shape Canada's economic future and influence the global trajectory of this technology. By investing strategically in scale, infrastructure and adoption, we're not just fuelling opportunity for Canadians—we're making sure progress is matched by purpose and responsibility. That's why this partnership, uniting the Canadian AI Safety Institute and the Canadian Institute for Advanced Research with the UK AI Security Institute, matters. Together, we're advancing cutting-edge research to ensure next generation of AI systems are not only powerful but also reliable—serving societies here at home and around the world." – The Honourable Evan Solomon, Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario

"CIFAR is proud to partner in this vital international effort to ensure advanced AI systems remain aligned with human values. CIFAR's mandate is to convene the world's top researchers to address the most pressing challenges facing humanity, and few challenges are more urgent than ensuring AI is safe, predictable and beneficial for all. Through our leadership of the Canadian AI Safety Institute Research Program at CIFAR, we are advancing foundational research that will help safeguard the transformative potential of AI while protecting the public interest." – Elissa Strome, Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Quick facts
- Established in November 2024, the Canadian AI Safety Institute (CAISI) seeks to advance scientific understanding of the risks associated with advanced AI systems, develop measures to reduce those risks and build trust to foster AI innovation.
- CAISI is partnering with counterparts around the world, including the UK AI Security Institute, to advance common understandings of and responses to safety risks.
- CAISI leverages the robust Canadian AI research ecosystem and advances AI safety research through two research streams: investigator-led research via the Canadian Institute for Advanced Research (CIFAR) and government-directed projects led by the National Research Council of Canada.
- CIFAR is a globally influential research organization based in Canada. It mobilizes experts from across disciplines and at various career stages to advance transformative knowledge and solve complex problems.
- The Government of Canada has also launched other initiatives to support the safe and responsible development and deployment of AI systems and safe AI adoption across the Canadian economy, including the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems and a guide for managers on implementing the Code of Conduct.

Stay connected

Find more services and information on the Innovation, Science and Economic Development Canada website.
For easy access to government programs for businesses, download the Canada Business app. SOURCE Innovation, Science and Economic Development Canada Contacts: Sofia Ouslis, Press Secretary, Office of the Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario, [email protected]; Media Relations, Innovation, Science and Economic Development Canada, [email protected]


Calgary Herald
30-07-2025
- Business
- Calgary Herald
Canada commits funding to joint AI safety effort with the U.K.
Canada is collaborating with the U.K. on a new artificial intelligence safety initiative as the U.S. pushes its vision of a zero-sum AI race that will see it remove guardrails on development and deployment of the burgeoning technology.

On Wednesday, Canada announced that it will commit $1 million to a $29 million joint AI safety effort with the U.K. that will bankroll research and commercial projects focused on keeping advanced AI systems in line — in other words, ensuring that they operate safely, reliably, and in a useful way, and without unintended or harmful actions.

'Together, we're advancing cutting-edge research to ensure (the) next generation of AI systems are not only powerful but also reliable — serving societies here at home and around the world,' said minister of AI and digital innovation Evan Solomon in a statement.

The initiative, called the AI Alignment Project, is spearheaded by the U.K.'s AI Security Institute (AISI) and will involve the Canadian AI Safety Institute (CAISI) under the umbrella of the Canadian Institute for Advanced Research (CIFAR), a research organization focused on innovation and deep tech. Other financial backers include enterprise partners such as Amazon Web Services, Inc. and venture firm the Safe AI Fund, and non-profits such as New York-headquartered Schmidt Sciences, LLC.

The project will provide grant funding for researchers across disciplines including computer and cognitive science; help organizations and individuals access venture capital investment; and secure 'compute,' the computational power needed to train and run AI models. The call for proposals, which launches Wednesday, will be open until September.
Prominent AI experts including Canadian-French computer scientist Yoshua Bengio, known as one of the 'godfathers of AI,' will serve on the advisory board to help steer the effort and to select the successful proposals by November.

Canada's participation in the U.K.-led AI safety effort fits into Ottawa's broader AI vision for Canada — one that involves building trust in, and adoption of, the technology — and lets Canadian researchers have a seat at the table when it comes to global efforts on AI safety, according to Elissa Strome, executive director of pan-Canadian AI strategy at CIFAR. There are 'few challenges more urgent than ensuring AI is safe, predictable, and beneficial for all,' she said.

The launch of the new initiative takes place amid broader shifts in the conversation on AI. As the tech becomes an increasingly important feature of national security and economic competition, the U.S. has peddled a light-touch regulatory approach, while jurisdictions such as the EU have championed tougher-on-tech rules. Industry groups and AI safety scholars have also clashed over how AI should be regulated, developed and deployed.


Axios
03-03-2025
- Business
- Axios
Untangling safety from AI security is tough, experts say
Recent moves by the U.S. and the U.K. to frame AI safety primarily as a security issue could be risky, depending on how leaders ultimately define "safety," experts tell Axios.

Why it matters: A broad definition of AI safety could encompass issues like AI models generating dangerous content, such as instructions for building weapons or providing inaccurate technical guidance. But a narrower approach might leave out ethical concerns, like bias in AI decision-making.

Driving the news: The U.S. and the U.K. declined to sign an international AI declaration at the Paris summit this month that emphasized an "open," "inclusive" and "ethical" approach to AI development. Vice President JD Vance said at the summit that "pro-growth AI policies" should be prioritized over AI safety regulations. The U.K. recently rebranded its AI Safety Institute as the AI Security Institute. And the U.S. AI Safety Institute could soon face workforce cuts.

The big picture: AI safety and security often overlap, but where exactly they intersect depends on perspective. Experts universally agree that AI security focuses on protecting models from external threats like hacks, data breaches and model poisoning. AI safety, however, is more loosely defined. Some argue it should ensure models function reliably — like a self-driving car stopping at red lights or an AI-powered medical tool correctly identifying disease symptoms. Others take a broader view, incorporating ethical concerns such as AI-generated deepfakes, biased decision-making, and jailbreaking attempts that bypass safeguards.

Yes, but: Overly rigid definitions could backfire, Chris Sestito, founder and CEO of AI security company HiddenLayer, tells Axios. "We can't be flippant and just say, 'Hey, this is just on the bias side and this is on the content side,'" Sestito says. "It can get very out of control very quickly."

Between the lines: It's unclear which AI safety initiatives may be deprioritized as the U.S. shifts its approach.
In the U.K., some safety-related work — such as preventing AI from generating child sexual abuse materials — appears to be continuing, says Dane Sherrets, AI researcher and staff solutions architect at HackerOne. Sestito says he's concerned that AI safety will be seen as a censorship issue, mirroring the current debate on social platforms. But he says AI safety encompasses much more, including keeping nuclear secrets out of models.

Reality check: These policy rebrands may not meaningfully change AI regulation. "Frankly, everything that we have done up to this point has been largely ineffective anyway," Sestito says.

What we're watching: AI researchers and ethical hackers have already been integrating safety concerns into security testing — work that is unlikely to slow down, especially given recent criticisms of AI red teaming in a DEF CON paper. But the biggest signals may come from AI companies themselves, as they refine policies on whom they sell to and what security issues they prioritize in bug bounty programs.


The Guardian
24-02-2025
- Business
- The Guardian
UK delays plans to regulate AI as ministers seek to align with Trump administration
Ministers have delayed plans to regulate artificial intelligence as the UK government seeks to align itself with Donald Trump's administration on the technology, the Guardian has learned.

A long-awaited AI bill, which ministers had originally intended to publish before Christmas, is not expected to appear in parliament before the summer, according to three Labour sources briefed on the plans.

Ministers had intended to publish a short bill within months of entering office that would have required companies to hand over large AI models such as ChatGPT for testing by the UK's AI Security Institute. The bill was intended to be the government's answer to concerns that AI models could become so advanced that they pose a risk to humanity, and was separate from proposals to clarify how AI companies can use copyrighted material.

Trump's election has led to a rethink, however. A senior Labour source said the bill was 'properly in the background' and that there were still 'no hard proposals in terms of what the legislation looks like'. 'They said let's try and get it done before Christmas – now it's summer,' the source added.

Another Labour source briefed on the legislation said an iteration of the bill had been prepared months ago but was now up in the air because of Trump, with ministers reluctant to take action that could weaken the UK's attractiveness to AI companies.

Trump has torpedoed plans by his predecessor Joe Biden for regulating AI and revoked an executive order on making the technology safe and trustworthy. The future of the US AI Safety Institute, founded by Biden, is uncertain after its director resigned this month. At an AI summit hosted in Paris, JD Vance, the US vice-president, railed against Europe's planned regulation of the technology. The UK government chose to side with the US by refusing to sign the Paris declaration endorsed by 66 other countries at the summit.
Peter Mandelson, the UK's ambassador to Washington, has reportedly drafted proposals to make the UK the main hub for US AI investment.

Speaking to a parliamentary committee in December, Peter Kyle, the science and technology secretary, appeared to suggest the AI bill was at an advanced stage. But earlier this month Patrick Vallance, the science minister, told MPs that 'there is no bill at the moment'.

A government spokesperson said: 'This government remains committed to bringing forward legislation which allows us to safely realise the enormous benefits of AI for years to come.

'As you would expect, we are continuing to engage extensively to refine our proposals and will launch a public consultation in due course to ensure our approach is future-proofed and effective against this fast-evolving technology.'

Ministers are under pressure over separate plans to allow AI companies to draw on online material including creative work to train their models without needing copyright permission. Artists including Paul McCartney and Elton John are campaigning against the move, which they have warned would allow firms to 'ride roughshod over the traditional copyright laws that protect artists' livelihoods'.


The Guardian
22-02-2025
- Business
- The Guardian
Creative industries are among the UK's crown jewels – and AI is out to steal them
There are decades when nothing happens (as Lenin is – wrongly – supposed to have said) and weeks when decades happen. We've just lived through a few weeks like that.

We've known for decades that some American tech companies were problematic for democracy because they were fragmenting the public sphere and fostering polarisation. They were a worrying nuisance, to be sure, but not central to the polity. And then, suddenly, those corporations were inextricably bound into government, and their narrow sectional interests became the national interest of the US. Which means that any foreign government with ideas about regulating, say, hate speech on X, may have to deal with the intemperate wrath of Donald Trump or the more coherent abuse of JD Vance.

The panic that this has induced in Europe is a sight to behold. Everywhere you look, political leaders are frantically trying to find ways of 'aligning' with the new regime in Washington. Here in the UK, the Starmer team has been dutifully doing its obeisance bit. First off, it decided to rename Rishi Sunak's AI Safety Institute as the AI Security Institute, thereby 'shifting the UK's focus on artificial intelligence towards security cooperation rather than a 'woke' emphasis on safety concerns', as the Financial Times put it. But, in a way, that's just a rebranding exercise – sending a virtue signal to Washington.

Coming down the line, though, is something much more consequential; namely, pressure to amend the UK's copyright laws to make it easier for predominantly American tech companies to train their AI models on other people's creative work without permission, acknowledgment or payment. This stems from recommendation 24 of the AI Opportunities Action Plan, a hymn sheet written for the prime minister by a fashionable tech bro with extensive interests (declared, naturally) in the tech industry. I am told by a senior civil servant that this screed now has the status of holy writ within Whitehall.
To which my response was, I'm ashamed to say, unprintable in a family newspaper. The recommendation in question calls for 'reform of the UK text and data-mining regime'. This is based on a breathtaking assertion that: 'The current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI, as well as the growth of our creative industries.'

As I pointed out a few weeks ago, representatives of these industries were mightily pissed off by this piece of gaslighting. No such uncertainty exists, they say. 'UK copyright law does not allow text and data mining for commercial purposes without a licence,' says the Creative Rights in AI Coalition. 'The only uncertainty is around who has been using the UK's creative crown jewels as training material without permission and how they got hold of it.'

As an engineer who has sometimes thought of IP law as a rabbit hole masquerading as a profession, I am in no position to assess the rights and wrongs of this disagreement. But I have academic colleagues who are, and last week they published a landmark briefing paper, concluding: 'The unregulated use of generative AI in the UK economy will not necessarily lead to economic growth, and risks damaging the UK's thriving creative sector.'

And it is a thriving sector. In fact, it's one of the really distinctive assets of this country. The report says that the creative industries contributed approximately £124.6bn, or 5.7%, to the UK's economy in 2022, and that for decades it has been growing faster than the wider economy (not that this would be difficult). 'Through world-famous brands and production capabilities,' the report continues, 'the impact of these industries on Britain's cultural reach and soft power is immeasurable.' Just to take one sub-sector of the industry, the UK video games industry is the largest in Europe.

There are three morals to this story.
The first is that the stakes here are high: get it wrong and we kiss goodbye to one of 'global' Britain's most vibrant industries. The aim of public policy should be building a copyright regime that respects creative workers and engenders the confidence that AI can be fairly deployed to the benefit of all rather than just tech corporations. It's not just about 'growth', in other words.

The second is that any changes to UK IP law in response to the arrival of AI need to be carefully researched and thought through, and not implemented on the whims of tech bros or of ministers anxious to 'align' the UK with the oligarchs now running the show in Washington.

The third comes from watching Elon Musk's goons mess with complex systems that they don't think they need to understand: never entrust a delicate clock to a monkey. Even if he is as rich as Croesus.

The man who would be king
Trump As Sovereign Decisionist is a perceptive guide by Nathan Gardels to how the world has suddenly changed.

Technical support
Tim O'Reilly's The End of Programming As We Know It is a really knowledgeable summary of AI and software development.

Computer says yes
The most thoughtful essay I've come across on the potential upsides of AI by a real expert is Machines of Loving Grace by Dario Amodei.