
Latest news with #AISecurityInstitute

Untangling safety from AI security is tough, experts say

Axios

03-03-2025

  • Business

Recent moves by the U.S. and the U.K. to frame AI safety primarily as a security issue could be risky, depending on how leaders ultimately define "safety," experts tell Axios.

Why it matters: A broad definition of AI safety could encompass issues like AI models generating dangerous content, such as instructions for building weapons or providing inaccurate technical guidance. But a narrower approach might leave out ethical concerns, like bias in AI decision-making.

Driving the news: The U.S. and the U.K. declined to sign an international AI declaration at the Paris summit this month that emphasized an "open," "inclusive" and "ethical" approach to AI development. Vice President JD Vance said at the summit that "pro-growth AI policies" should be prioritized over AI safety regulations. The U.K. recently rebranded its AI Safety Institute as the AI Security Institute. And the U.S. AI Safety Institute could soon face workforce cuts.

The big picture: AI safety and security often overlap, but where exactly they intersect depends on perspective. Experts universally agree that AI security focuses on protecting models from external threats like hacks, data breaches and model poisoning. AI safety, however, is more loosely defined. Some argue it should ensure models function reliably — like a self-driving car stopping at red lights or an AI-powered medical tool correctly identifying disease symptoms. Others take a broader view, incorporating ethical concerns such as AI-generated deepfakes, biased decision-making, and jailbreaking attempts that bypass safeguards.

Yes, but: Overly rigid definitions could backfire, Chris Sestito, founder and CEO of AI security company HiddenLayer, tells Axios. "We can't be flippant and just say, 'Hey, this is just on the bias side and this is on the content side,'" Sestito says. "It can get very out of control very quickly."

Between the lines: It's unclear which AI safety initiatives may be deprioritized as the U.S. shifts its approach. In the U.K., some safety-related work — such as preventing AI from generating child sexual abuse materials — appears to be continuing, says Dane Sherrets, AI researcher and staff solutions architect at HackerOne. Sestito says he's concerned that AI safety will be seen as a censorship issue, mirroring the current debate on social platforms. But he says AI safety encompasses much more, including keeping nuclear secrets out of models.

Reality check: These policy rebrands may not meaningfully change AI regulation. "Frankly, everything that we have done up to this point has been largely ineffective anyway," Sestito says.

What we're watching: AI researchers and ethical hackers have already been integrating safety concerns into security testing — work that is unlikely to slow down, especially given recent criticisms of AI red teaming in a DEF CON paper. But the biggest signals may come from AI companies themselves, as they refine policies on whom they sell to and what security issues they prioritize in bug bounty programs.

UK delays plans to regulate AI as ministers seek to align with Trump administration

The Guardian

24-02-2025

  • Business

Ministers have delayed plans to regulate artificial intelligence as the UK government seeks to align itself with Donald Trump's administration on the technology, the Guardian has learned.

A long-awaited AI bill, which ministers had originally intended to publish before Christmas, is not expected to appear in parliament before the summer, according to three Labour sources briefed on the plans.

Ministers had intended to publish a short bill within months of entering office that would have required companies to hand over large AI models such as ChatGPT for testing by the UK's AI Security Institute. The bill was intended to be the government's answer to concerns that AI models could become so advanced that they pose a risk to humanity, and was distinct from separate proposals to clarify how AI companies can use copyrighted material.

Trump's election has led to a rethink, however. A senior Labour source said the bill was 'properly in the background' and that there were still 'no hard proposals in terms of what the legislation looks like'. 'They said let's try and get it done before Christmas – now it's summer,' the source added.

Another Labour source briefed on the legislation said an iteration of the bill had been prepared months ago but was now up in the air because of Trump, with ministers reluctant to take action that could weaken the UK's attractiveness to AI companies.

Trump has torpedoed plans by his predecessor Joe Biden for regulating AI and revoked an executive order on making the technology safe and trustworthy. The future of the US AI Safety Institute, founded by Biden, is uncertain after its director resigned this month. At an AI summit hosted in Paris, JD Vance, the US vice-president, railed against Europe's planned regulation of the technology. The UK government chose to side with the US by refusing to sign the Paris declaration endorsed by 66 other countries at the summit.

Peter Mandelson, the UK's ambassador to Washington, has reportedly drafted proposals to make the UK the main hub for US AI investment.

Speaking to a parliamentary committee in December, Peter Kyle, the science and technology secretary, appeared to suggest the AI bill was at an advanced stage. But earlier this month Patrick Vallance, the science minister, told MPs that 'there is no bill at the moment'.

A government spokesperson said: 'This government remains committed to bringing forward legislation which allows us to safely realise the enormous benefits of AI for years to come.

'As you would expect, we are continuing to engage extensively to refine our proposals and will launch a public consultation in due course to ensure our approach is future-proofed and effective against this fast-evolving technology.'

Ministers are under pressure over separate plans to allow AI companies to draw on online material, including creative work, to train their models without needing copyright permission. Artists including Paul McCartney and Elton John are campaigning against the move, which they have warned would allow firms to 'ride roughshod over the traditional copyright laws that protect artists' livelihoods'.

Creative industries are among the UK's crown jewels – and AI is out to steal them

The Guardian

22-02-2025

  • Business

There are decades when nothing happens (as Lenin is – wrongly – supposed to have said) and weeks when decades happen. We've just lived through a few weeks like that.

We've known for decades that some American tech companies were problematic for democracy because they were fragmenting the public sphere and fostering polarisation. They were a worrying nuisance, to be sure, but not central to the polity. And then, suddenly, those corporations were inextricably bound into government, and their narrow sectional interests became the national interest of the US. Which means that any foreign government with ideas about regulating, say, hate speech on X may have to deal with the intemperate wrath of Donald Trump or the more coherent abuse of JD Vance.

The panic that this has induced in Europe is a sight to behold. Everywhere you look, political leaders are frantically trying to find ways of 'aligning' with the new regime in Washington. Here in the UK, the Starmer team has been dutifully doing its obeisance bit. First off, it decided to rename Rishi Sunak's AI Safety Institute as the AI Security Institute, thereby 'shifting the UK's focus on artificial intelligence towards security cooperation rather than a 'woke' emphasis on safety concerns', as the Financial Times put it. But, in a way, that's just a rebranding exercise – sending a virtue signal to Washington.

Coming down the line, though, is something much more consequential; namely, pressure to amend the UK's copyright laws to make it easier for predominantly American tech companies to train their AI models on other people's creative work without permission, acknowledgment or payment. This stems from recommendation 24 of the AI Opportunities Action Plan, a hymn sheet written for the prime minister by a fashionable tech bro with extensive interests (declared, naturally) in the tech industry. I am told by a senior civil servant that this screed now has the status of holy writ within Whitehall. To which my response was, I'm ashamed to say, unprintable in a family newspaper.

The recommendation in question calls for 'reform of the UK text and data-mining regime'. This is based on a breathtaking assertion: 'The current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI, as well as the growth of our creative industries.' As I pointed out a few weeks ago, representatives of these industries were mightily pissed off by this piece of gaslighting. No such uncertainty exists, they say. 'UK copyright law does not allow text and data mining for commercial purposes without a licence,' says the Creative Rights in AI Coalition. 'The only uncertainty is around who has been using the UK's creative crown jewels as training material without permission and how they got hold of it.'

As an engineer who has sometimes thought of IP law as a rabbit hole masquerading as a profession, I am in no position to assess the rights and wrongs of this disagreement. But I have academic colleagues who are, and last week they published a landmark briefing paper, concluding: 'The unregulated use of generative AI in the UK economy will not necessarily lead to economic growth, and risks damaging the UK's thriving creative sector.'

And it is a thriving sector. In fact, it's one of the really distinctive assets of this country. The report says that the creative industries contributed approximately £124.6bn, or 5.7%, to the UK's economy in 2022, and that for decades the sector has been growing faster than the wider economy (not that this would be difficult). 'Through world-famous brands and production capabilities,' the report continues, 'the impact of these industries on Britain's cultural reach and soft power is immeasurable.' Just to take one sub-sector: the UK video games industry is the largest in Europe.

There are three morals to this story. The first is that the stakes here are high: get it wrong and we kiss goodbye to one of 'global' Britain's most vibrant industries. The aim of public policy should be building a copyright regime that respects creative workers and engenders the confidence that AI can be fairly deployed to the benefit of all rather than just tech corporations. It's not just about 'growth', in other words. The second is that any changes to UK IP law in response to the arrival of AI need to be carefully researched and thought through, not implemented on the whims of tech bros or of ministers anxious to 'align' the UK with the oligarchs now running the show in Washington. The third comes from watching Elon Musk's goons mess with complex systems that they don't think they need to understand: never entrust a delicate clock to a monkey. Even if he is as rich as Croesus.

The man who would be king
Trump As Sovereign Decisionist is a perceptive guide by Nathan Gardels to how the world has suddenly changed.

Technical support
Tim O'Reilly's The End of Programming As We Know It is a really knowledgeable summary of AI and software development.

Computer says yes
The most thoughtful essay I've come across on the potential upsides of AI by a real expert is Machines of Loving Grace by Dario Amodei.

UK drops 'safety' from its AI body, now called AI Security Institute, inks MOU with Anthropic

Yahoo

14-02-2025

  • Business

The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it's pivoting an institution that it founded a little over a year ago for a very different purpose.

Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute the 'AI Security Institute.' With that, it will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically 'strengthening protections against the risks AI poses to national security and crime.'

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced, but the memorandum of understanding indicates the two will 'explore' using Anthropic's AI assistant Claude in public services, and that Anthropic will aim to contribute to work in scientific research and economic modelling. At the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.

"AI has the potential to transform how governments serve their citizens," Anthropic co-founder and CEO Dario Amodei said in a statement. "We look forward to exploring how Anthropic's AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents."

Anthropic is the only company being announced today — coinciding with a week of AI activities in Munich and Paris — but it's not the only one working with the government. A series of new tools unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the secretary of state for technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)

The government's switch-up of the AI Safety Institute — launched just over a year ago with a lot of fanfare — to AI Security shouldn't come as too much of a surprise. When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words 'safety,' 'harm,' 'existential,' and 'threat' did not appear at all in the document. That was not an oversight. The government's plan is to kickstart investment in a more modernized economy, using technology and specifically AI to do that. It wants to work more closely with Big Tech, and it also wants to build its own homegrown big techs. The main messages it has been promoting have been development, AI, and more development. Civil servants will have their own AI assistant called 'Humphrey,' and they're being encouraged to share data and use AI in other areas to speed up how they work. Consumers will be getting digital wallets for their government documents, and chatbots.

So have AI safety issues been resolved? Not exactly, but the message seems to be that they can't be considered at the expense of progress. The government claims that despite the name change, the song will remain the same.

"The changes I'm announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change," Kyle said in a statement. "The work of the AI Security Institute won't change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life."

"The Institute's focus from the start has been on security and we've built a team of scientists focused on evaluating serious risks to the public," added Ian Hogarth, who remains the chair of the institute. "Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks."

Further afield, priorities definitely appear to have changed around the importance of 'AI safety.' The biggest risk the AI Safety Institute in the U.S. is contemplating right now is that it's going to be dismantled. U.S. Vice President JD Vance telegraphed as much earlier this week during his speech in Paris.

Rebranded AI Security Institute to drop focus on bias and free speech

The Independent

14-02-2025

  • Politics

Britain's AI Safety Institute will drop its focus on bias and free speech to concentrate on crime and national security issues, the Technology Secretary will announce on Friday.

The agency will also be rebranded as the AI Security Institute (AISI), emphasising its renewed focus on crime and security issues. Technology Secretary Peter Kyle is expected to announce the agency's new name at the Munich Security Conference on Friday, along with a new 'criminal misuse' team, in partnership with the Home Office.

Speaking ahead of the event, Mr Kyle said: 'The changes I'm announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our plan for change.'

He added that the 'renewed focus' on security would 'ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values and way of life'.

Crime and security concerns already form part of the institute's remit, but it currently also covers wider societal impacts of artificial intelligence, the risk of AI becoming autonomous and the effectiveness of safety measures for AI systems.

When the institute was established in 2023, then-prime minister Rishi Sunak said it would 'advance the world's knowledge of AI safety', including exploring 'all the risks from social harms like bias and misinformation, through to the most extreme risks of all'.

Although Mr Kyle insisted the institute's work 'won't change', the Department for Science, Innovation and Technology said the rebranded agency 'will not focus on bias or freedom of speech'. The refocusing on security is expected to include addressing how AI can be used to develop chemical and biological weapons, carry out cyber attacks and enable crimes such as fraud and child sexual abuse.

AISI chairman Ian Hogarth said: 'The institute's focus from the start has been on security and we've built a team of scientists focused on evaluating serious risks to the public.

'Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.'

Also on Friday, Mr Kyle announced a new partnership between the UK and Anthropic, a San Francisco-based AI company, the first deal involving the Government's new sovereign AI unit. Anthropic chief executive Dario Amodei said: 'We look forward to exploring how Anthropic's AI assistant Claude could help UK Government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents.'

The Government began the year with a major focus on AI, setting out plans to both boost the UK's AI industry and incorporate the technology into the public sector. Alongside the launch in January of an AI action plan, Prime Minister Sir Keir Starmer wrote to ministers instructing them to make driving AI adoption and growth in their departments a top priority.
