Latest news with #U.S.AISafetyInstitute

NBC News
2 days ago
- Business
Trump administration cuts 'Safety' from AI Safety Institute
The Trump administration says it's reforming a Biden-era artificial intelligence safety institute, renaming and reformulating one of the only federal government departments dedicated to oversight of the burgeoning technology.

Commerce Secretary Howard Lutnick said in a news release Tuesday that the Trump administration would transform the former U.S. AI Safety Institute — which former President Joe Biden established in November 2023 — into the Center for AI Standards and Innovation.

The shift away from 'safety' is in line with the Trump administration's statements and actions signaling its belief that oversight efforts for AI companies could unnecessarily dull the United States' competitive edge in the space.

'For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards,' Lutnick said in the release. 'CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards.'

The U.S. AI Safety Institute was created to evaluate and test AI models and to set standards for safety and security. It also formed a consortium on AI safety made up of more than 200 members, including OpenAI, Meta and Anthropic.

Although it's unclear whether the transformation will mean any major changes to the institute's operations, the move appears to reflect the Trump administration's 'pro-innovation' approach to deregulating AI technology. Unlike Biden's executive order on AI and the former institute, the reformed center is set to take on additional tasks, such as evaluating 'potential security vulnerabilities and malign foreign influence arising from use of adversaries' AI systems, including the possibility of backdoors and other covert, malicious behavior,' and working to 'guard against burdensome and unnecessary regulation of American technologies by foreign governments.'

In January, the Chinese-created AI app DeepSeek heightened national security concerns around AI with a release whose advancements made waves. President Donald Trump said the app 'should be a wake-up call' about the prospect of international competition for American tech companies. Lawmakers introduced a bill to ban DeepSeek from government devices, and the Navy advised its members not to use it 'in any capacity.'

The move to reform the institute appears to have been in development for a while. Reuters reported this year that no one from the U.S. AI Safety Institute's staff would attend an AI summit in Paris in February alongside Vice President JD Vance. The institute's inaugural director, Elizabeth Kelly, also announced that month that she would step down.

In his speech at the summit, Vance echoed Lutnick's sentiments, saying, 'We need international regulatory regimes that fosters the creation of AI technology rather than strangles it.' He also said he believes AI should be free from 'ideological bias.'

Since he returned to office, Trump has made it clear that his administration wants to embrace the expansion of AI. Within his first week, he announced the $500 billion Stargate initiative, a collaboration with OpenAI, Oracle and SoftBank that aims to make the United States a world leader in AI.

Trump also signed an executive order on AI in his first week in office that focuses on easing regulations on AI technology and revoking 'existing AI policies and directives that act as barriers to American AI innovation.' Biden's executive order on AI, which focused on safety and privacy standards for the technology, has been removed from the White House's website.


Fast Company
05-05-2025
- Business
Anthropic hires a top Biden official to lead its new AI for social good team (exclusive)
Anthropic is turning to a Biden administration alum to run its new Beneficial Deployments team, which is tasked with helping extend the benefits of its AI to organizations focused on social good—particularly in areas like health research and education—that may lack market-driven incentives.

The new team will be led by Elizabeth Kelly, who in 2024 was tapped by the Biden administration to lead the U.S. AI Safety Institute within the National Institute of Standards and Technology (NIST). Kelly helped form agreements with OpenAI and Anthropic that let NIST safety-test the companies' new models prior to their deployment. She left the government in early March and joined Anthropic in mid-March.

'Our mission is to support the development and deployment of AI in ways that are good for the world but might not be incentivized by the market,' Kelly tells Fast Company.

Anthropic views the new group as a reflection of its mission as a public benefit corporation, which commits it to distributing the advantages of its AI equitably, not just to deep-pocketed corporations. In an essay published last year, Anthropic CEO Dario Amodei emphasized AI's potential to drive progress in areas like life sciences, physical health, education, and poverty alleviation.

The Beneficial Deployments team sits within Anthropic's go-to-market organization, which the company says ensures that its AI software and services are designed and deployed with customer needs in mind. Kelly says her team will collaborate across departments—including with Anthropic's Applied AI group and its sciences and social impact specialists—to help mission-aligned customers build successful products and services powered by Anthropic models.

'We need to treat nonprofits, ed techs, health techs, those organizations that are developing really transformative solutions the same way that we treat our biggest enterprise customers,' Kelly says. In fact, the smaller organizations, which often lack budget and in-house AI expertise, may get a level of support that's not considered standard for Anthropic's larger customers.

'Our primary focus here is making sure that … the work that we're doing has the biggest impact in terms of lives that we're improving, diseases that we're curing, educational outcomes we're improving,' Kelly says. When considering new beneficiaries, Kelly says she'll take input from members of Anthropic's 'long-term benefit trust,' an independent governance body whose five trustees have experience in global development.