
Latest news with #AISafetyInstitute

Scoop: AI Safety Institute to be renamed Center for AI Safety and Leadership

Axios

5 days ago



The Trump administration is looking to change the AI Safety Institute's name to the Center for AI Safety and Leadership in the coming days, per two sources familiar with the matter.

Why it matters: The U.S. standards-setting and AI testbed, housed inside the Commerce Department's National Institute of Standards and Technology, has been bracing for changes since President Trump took office. An early draft of a press release seen by one source tasks the agency with largely the same responsibilities it previously had, including engaging internationally. AISI was left out of a Paris AI summit earlier this year. More details of what the name shift means for the mission were not immediately clear.

Context: Under the Biden administration, the AI Safety Institute acted as a testing ground of sorts for new AI models, working with private-sector companies on evaluation and standards, and was viewed as important by both Republicans and Democrats. After narrowly dodging major DOGE cuts earlier this year, as Axios previously reported, the government body has been changing its identity and purpose as Republican Cabinet secretaries, Trump and Congress figure out their AI strategy. A Commerce Department spokesperson did not immediately respond to a request for comment.

Between the lines: More than a name change, the resources the Trump administration invests in NIST will be a key indicator of how much of a priority AI safety and leadership is.

Understanding shift from AI Safety to Security, and India's opportunities

Indian Express

08-05-2025



Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar

In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. This triggered several debates about what the change means for AI safety. As India prepares to host the AI Summit, a key question will be how to approach AI safety.

The What and How of AI Safety

In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of the increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse. A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society — leading to the Bletchley Declaration. The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation.

Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation. India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private sector entities under the Safe and Trusted pillar of the IndiaAI Mission.

UK's Shift from Safety to Security

The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in February 2025, when the UK rebranded its Safety Institute as the Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI in developing chemical and biological weapons, cybercrimes, and child sexual abuse. It clarified that the Institute would not prioritise issues like bias or free speech but would focus on the most serious risks, helping policymakers ensure national safety. The UK government also announced a partnership with Anthropic to deploy AI systems for public services, assess AI security risks, and drive economic growth.

India's Understanding of Safety

Given the UK's recent developments, it is important to explore what AI safety means for India. Firstly, when we refer to AI safety — i.e., making AI systems safe — we usually talk about mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.' The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security — all of which are relevant to AI safety. Thus, adherence to RAI principles could be one way of operationalising AI safety.

Secondly, safety and security should not be seen as mutually exclusive. We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient.

Lastly, AI safety must align with AI governance and be viewed through a risk-mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model or system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored.

Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety — both in the national context and as part of a broader global dialogue.

Ravindran is Head, Wadhwani School of Data Science and AI & CeRAI; Mithal is Associate Research Fellow, CeRAI (& Associate Partner, Anand and Anand); and Kumar is Policy Analyst, CeRAI. CeRAI is the Centre for Responsible AI, IIT Madras.

Greens Senator Warns Australia Not ‘Nimble Enough' to Deal With Surge in AI Capabilities

Epoch Times

21-04-2025



Greens Senator David Shoebridge has called on the Australian federal parliament to be more nimble in addressing the risks around AI development. At a recent online event on AI safety, Shoebridge said one of the greatest challenges was getting the parliament to respond fast enough.

'We can't spend eight years working out a white paper before we roll out regulation in this space,' he said. 'We can't see a threat emerging and say, 'Okay, cool, we're going to begin a six-year parliamentary process in order to work out how we respond to a high-risk deployment of AI.'

'We need to be much more nimble, and we need the resources and assistance in parliament to get us there. And I think if you look at the last three years, you can see how non-nimble the parliament has been.'

The senator also noted that while some work on AI safety had managed to get attention, not much progress had been made.

'What's come out? Where's the product from parliament? Where is the AI Safety Act? Where is the national regulator?' he asked. 'Where's the resource agency that can help parliament navigate through this bloody hard pathway we're going to have to do in the next three years?'

Shoebridge's remarks came as the Greens push for a standalone AI Act and a national regulator.

Greens' Proposal for National AI Regulator

To address AI risks, Shoebridge said the Greens would put forward a standalone 'AI Act' to legislate guardrails and create a national regulator.

'We don't call it an AI Safety Institute, but it has the functions of an AI Safety Institute,' he said. 'So it's well-resourced. It's a national regulator. And its focus is on, first of all, guiding parliament so that we get the right regulatory models in place, strict handrails, strict guidelines, and they're legislated.'

The senator further stated that the proposed national AI regulator would have a team of on-call, highly qualified experts led by an independent statutory officer to test high-risk deployments of AI. The expert team would also be responsible for establishing a reliable process to test AI models before they are deployed, to identify any risks in real time.

In addition, the Greens would propose to set up a 'digital rights commissioner' whose role is to regulate digital rights and the impacts of AI on those rights.

'I would think of a digital rights commissioner as a kind of an ombudsman in the [digital] space to ensure that our data isn't being fed without our consent into large language models, [and] to put in remedy so that if that happens, people are held to account, and our data is removed from training data sets,' Shoebridge said.

Expert Says Liability Already a Hazy Area

Kimberlee Weatherall, a law professor at the University of Sydney, said there were existing challenges with identifying where problems start or occur in AI automation processes.

'Automation makes liability hard at a general level – it's just harder to pin liability on a company or a person if they can say, well, it was the system what done it? It wasn't me,' she said.

'And if the technology underlying that automated system is in any way unpredictable, which some of the AI is, or we don't understand it, it makes it even harder to pin things like liability and to hold companies responsible.

'Another reason why we need to be thinking about things like guard rails [is] to ensure that systems are safe before they go out, and monitoring and auditing that goes on afterwards.'

Untangling safety from AI security is tough, experts say

Axios

03-03-2025



Recent moves by the U.S. and the U.K. to frame AI safety primarily as a security issue could be risky, depending on how leaders ultimately define "safety," experts tell Axios.

Why it matters: A broad definition of AI safety could encompass issues like AI models generating dangerous content, such as instructions for building weapons or providing inaccurate technical guidance. But a narrower approach might leave out ethical concerns, like bias in AI decision-making.

Driving the news: The U.S. and the U.K. declined to sign an international AI declaration at the Paris summit this month that emphasized an "open," "inclusive" and "ethical" approach to AI development. Vice President JD Vance said at the summit that "pro-growth AI policies" should be prioritized over AI safety regulations. The U.K. recently rebranded its AI Safety Institute as the AI Security Institute. And the U.S. AI Safety Institute could soon face workforce cuts.

The big picture: AI safety and security often overlap, but where exactly they intersect depends on perspective. Experts universally agree that AI security focuses on protecting models from external threats like hacks, data breaches and model poisoning. AI safety, however, is more loosely defined. Some argue it should ensure models function reliably — like a self-driving car stopping at red lights or an AI-powered medical tool correctly identifying disease symptoms. Others take a broader view, incorporating ethical concerns such as AI-generated deepfakes, biased decision-making, and jailbreaking attempts that bypass safeguards.

Yes, but: Overly rigid definitions could backfire, Chris Sestito, founder and CEO of AI security company HiddenLayer, tells Axios. "We can't be flippant and just say, 'Hey, this is just on the bias side and this is on the content side,'" Sestito says. "It can get very out of control very quickly."

Between the lines: It's unclear which AI safety initiatives may be deprioritized as the U.S. shifts its approach. In the U.K., some safety-related work — such as preventing AI from generating child sexual abuse materials — appears to be continuing, says Dane Sherrets, AI researcher and staff solutions architect at HackerOne. Sestito says he's concerned that AI safety will be seen as a censorship issue, mirroring the current debate on social platforms. But he says AI safety encompasses much more, including keeping nuclear secrets out of models.

Reality check: These policy rebrands may not meaningfully change AI regulation. "Frankly, everything that we have done up to this point has been largely ineffective anyway," Sestito says.

What we're watching: AI researchers and ethical hackers have already been integrating safety concerns into security testing — work that is unlikely to slow down, especially given recent criticisms of AI red teaming in a DEF CON paper. But the biggest signals may come from AI companies themselves, as they refine policies on whom they sell to and what security issues they prioritize in bug bounty programs.

Creative industries are among the UK's crown jewels – and AI is out to steal them

The Guardian

22-02-2025



There are decades when nothing happens (as Lenin is – wrongly – supposed to have said) and weeks when decades happen. We've just lived through a few weeks like that. We've known for decades that some American tech companies were problematic for democracy because they were fragmenting the public sphere and fostering polarisation. They were a worrying nuisance, to be sure, but not central to the polity. And then, suddenly, those corporations were inextricably bound into government, and their narrow sectional interests became the national interest of the US. Which means that any foreign government with ideas about regulating, say, hate speech on X, may have to deal with the intemperate wrath of Donald Trump or the more coherent abuse of JD Vance.

The panic that this has induced in Europe is a sight to behold. Everywhere you look, political leaders are frantically trying to find ways of 'aligning' with the new regime in Washington. Here in the UK, the Starmer team has been dutifully doing its obeisance bit. First off, it decided to rename Rishi Sunak's AI Safety Institute as the AI Security Institute, thereby 'shifting the UK's focus on artificial intelligence towards security cooperation rather than a "woke" emphasis on safety concerns', as the Financial Times put it. But, in a way, that's just a rebranding exercise – sending a virtue signal to Washington.

Coming down the line, though, is something much more consequential: namely, pressure to amend the UK's copyright laws to make it easier for predominantly American tech companies to train their AI models on other people's creative work without permission, acknowledgment or payment. This stems from recommendation 24 of the AI Opportunities Action Plan, a hymn sheet written for the prime minister by a fashionable tech bro with extensive interests (declared, naturally) in the tech industry. I am told by a senior civil servant that this screed now has the status of holy writ within Whitehall. To which my response was, I'm ashamed to say, unprintable in a family newspaper.

The recommendation in question calls for 'reform of the UK text and data-mining regime'. This is based on a breathtaking assertion: 'The current uncertainty around intellectual property (IP) is hindering innovation and undermining our broader ambitions for AI, as well as the growth of our creative industries.' As I pointed out a few weeks ago, representatives of these industries were mightily pissed off by this piece of gaslighting. No such uncertainty exists, they say. 'UK copyright law does not allow text and data mining for commercial purposes without a licence,' says the Creative Rights in AI Coalition. 'The only uncertainty is around who has been using the UK's creative crown jewels as training material without permission and how they got hold of it.'

As an engineer who has sometimes thought of IP law as a rabbit hole masquerading as a profession, I am in no position to assess the rights and wrongs of this disagreement. But I have academic colleagues who are, and last week they published a landmark briefing paper, concluding: 'The unregulated use of generative AI in the UK economy will not necessarily lead to economic growth, and risks damaging the UK's thriving creative sector.'

And it is a thriving sector. In fact, it's one of the really distinctive assets of this country. The report says that the creative industries contributed approximately £124.6bn, or 5.7%, to the UK's economy in 2022, and that for decades they have been growing faster than the wider economy (not that this would be difficult). 'Through world-famous brands and production capabilities,' the report continues, 'the impact of these industries on Britain's cultural reach and soft power is immeasurable.' Just to take one sub-sector of the industry, the UK video games industry is the largest in Europe.

There are three morals to this story. The first is that the stakes here are high: get it wrong and we kiss goodbye to one of 'global' Britain's most vibrant industries. The aim of public policy should be building a copyright regime that respects creative workers and engenders the confidence that AI can be fairly deployed to the benefit of all rather than just tech corporations. It's not just about 'growth', in other words. The second is that any changes to UK IP law in response to the arrival of AI need to be carefully researched and thought through, and not implemented on the whims of tech bros or of ministers anxious to 'align' the UK with the oligarchs now running the show in Washington. The third comes from watching Elon Musk's goons mess with complex systems that they don't think they need to understand: never entrust a delicate clock to a monkey. Even if he is as rich as Croesus.

The man who would be king: Trump As Sovereign Decisionist is a perceptive guide by Nathan Gardels to how the world has suddenly changed.

Technical support: Tim O'Reilly's The End of Programming As We Know It is a really knowledgeable summary of AI and software development.

Computer says yes: The most thoughtful essay I've come across on the potential upsides of AI, by a real expert, is Machines of Loving Grace by Dario Amodei.
