Latest news with #AIExecutiveOrder


Forbes
11-04-2025
- Business
US AI Policy Pivots Sharply From 'Safety' To 'Security'
The Trump administration has pivoted its AI policies away from safety guardrails and toward national defense amid growing global competition.

Efforts from firms and governments to prioritize AI safety, which emphasizes ethics, transparency and predictability, have been replaced in the Trump era by a starkly realist doctrine of AI security. For those of us who have been watching this space, the demise of AI safety happened slowly during the last half of 2024, in anticipation of a potential change in administration, and then all at once. (Disclosure: I previously served as senior counselor for AI at the Department of Homeland Security during the Biden administration.)

President Donald Trump rescinded former President Joe Biden's AI Executive Order on day one of his term, and Vice President JD Vance opened the Paris AI Action Summit, a convening originally launched to advance the field of AI safety, by firmly stating that he was not there to discuss AI safety and would instead be addressing 'AI opportunity.' Vance went on to say that the U.S. would 'safeguard American AI' and stop adversaries from attaining AI capabilities that 'threaten all of our people.'

Without more context, these sound like meaningless buzzwords — what's the difference between AI safety and AI security, and what does this shift mean for the consumers and businesses that continue to adopt AI?

Simply put, AI safety is primarily focused on developing AI that behaves ethically and reliably, especially when it's used in high-stakes contexts like hiring or healthcare. To help prevent AI systems from causing harm, AI safety legislation typically includes risk assessments, testing protocols and requirements for human oversight.

AI security, by contrast, does not fixate on developing ethical and safe AI. Rather, it assumes that America's adversaries will inevitably use AI in malicious ways and seeks to defend U.S. assets from intentional threats, like AI being exploited by rival nations to target U.S. critical infrastructure. These are not hypothetical risks — U.S. intelligence agencies continue to track growing offensive cyber operations from China, Russia and North Korea. To counter these types of deliberate attacks, organizations need a strong baseline of cybersecurity practices that also account for threats presented by AI.

Both of these fields are important and interconnected — so why does it seem like one has eclipsed the other in recent months? I would guess that prioritizing AI security is inherently more aligned with today's foreign policy climate, in which the worldviews most in vogue are realist depictions of ruthless competition among nations for geopolitical and economic advantage. Prioritizing AI security aims to protect America from its adversaries while maintaining America's global dominance in AI.

AI safety, on the other hand, can be a lightning rod for political debates about free speech and unfair bias. The question of whether a given AI system will cause actual harm is also context dependent, as the same system deployed in different environments could produce vastly different outcomes. In the face of so much uncertainty, combined with political disagreements about what truly constitutes harm to the public, legislators have struggled to justify passing safety legislation that could hamper America's competitive edge. News of DeepSeek, a Chinese AI company, achieving competitive performance with U.S. AI models at substantially lower costs only reaffirmed this move, stoking widespread fear about the steadily diminishing gap between U.S. and Chinese AI capabilities.

What happens now, when the specter of federal safety legislation no longer looms on the horizon? Public comments from OpenAI, Anthropic and others on the Trump administration's forthcoming 'AI Action Plan' provide an interesting picture of how AI priorities have shifted. For one, 'safety' hardly appears in the submissions from industry, and where safety issues are mentioned, they are reframed as national security risks that could disadvantage the U.S. in its race to out-compete China. In general, these submissions lay out a series of innovation-friendly policies, from balanced copyright rules for AI training to export controls on semiconductors and other valuable AI components (e.g. model weights). Beyond trying to meet the spirit of the Trump administration's initial messaging on AI, these submissions also seem to reveal what companies believe the role of the U.S. government should be when it comes to AI: funding infrastructure critical to further AI development, protecting American IP, and regulating AI only to the extent that it threatens our national security.

To me, this is less of a strategy shift on the part of AI companies than it is a communications shift. If anything, these comments from industry seem more mission-aligned than their previous calls for strong and comprehensive data legislation. Even so, not everyone in the industry supports a no-holds-barred approach to U.S. AI dominance.

In their paper 'Superintelligence Strategy,' three prominent AI voices, Eric Schmidt, Dan Hendrycks and Alexandr Wang, advise caution when it comes to pursuing a Manhattan Project-style push for developing superintelligent AI. The authors instead propose 'Mutual Assured AI Malfunction,' or MAIM, a defensive strategy reminiscent of Cold War-era deterrence that would forcefully counter any state-led efforts to achieve an AI monopoly. If the United States were to pursue this strategy, it would need to disable threatening AI projects, restrict access to advanced AI chips and open-weight models, and strengthen domestic chip manufacturing. Doing so, according to the authors, would enable the U.S. and other countries to peacefully advance AI innovation while lowering the overall risk of rogue actors using AI to create widespread damage.

It will be interesting to see whether these proposals gain traction in the coming months as the Trump administration forms a more detailed position on AI. We should expect to see more such proposals — specifically, those that focus persistently on the geopolitical risks and opportunities of AI and suggest legislation only to the extent that it helps prevent large-scale catastrophes, such as the creation of biological weapons or foreign attacks on critical U.S. assets.

Unfortunately, safety issues don't disappear when you stop paying attention to them or rename a safety institute. While strengthening our security posture may help to boost our competitive edge and counter foreign attacks, it's the safety interventions that help prevent harm to individuals or society at scale. The reality is that AI safety and security work hand-in-hand — AI safety interventions don't work if the systems themselves can be hacked; by the same token, securing AI systems against external threats becomes meaningless if those systems are inherently unsafe and prone to causing harm.
Cambridge Analytica offers a useful illustration of this relationship: the incident revealed that Facebook's inadequate safety protocols around data access served to exacerbate security vulnerabilities that were then exploited for political manipulation. Today's AI systems face similarly interconnected challenges. When safety guardrails are dismantled, security risks inevitably follow.

For now, AI safety is in the hands of state legislatures and corporate trust and safety teams. The companies building AI know — perhaps better than anyone else — what the stakes are. A single breach of trust, whether it's data theft or an accident, can be destructive to their brand. I predict that they will therefore continue to invest in sensible AI safety practices, but discreetly and without fanfare. Emerging initiatives like ROOST, which enables companies to collaboratively build open safety tools, may be a good preview of what's to come: a quietly burgeoning AI safety movement, supported by the experts, labs and institutions that have pioneered this field over the past decade. Hopefully, that will be enough.

Associated Press
19-02-2025
- Business
From Policy to Practice: Responsible AI Institute Announces Bold Strategic Shift to Drive Impact in the Age of Agentic AI
AUSTIN, Texas--(BUSINESS WIRE)--Feb 19, 2025-- The Responsible AI Institute (RAI Institute) is taking bold action to reshape and accelerate the future of responsible AI adoption. In response to rapid regulatory shifts, corporate FOMO, and the rise of agentic AI, RAI Institute is expanding beyond policy advocacy to deploy AI-driven tools, agentic AI services, and new AI verification, badging, and benchmarking programs. Backed by a new partner ecosystem, university collaborations in the U.S., U.K., and India, and a pledge from private foundations, RAI Institute is equipping organizations to confidently adopt and govern multi-vendor agent ecosystems.

THE AI LANDSCAPE HAS CHANGED — AND RAI INSTITUTE IS MOVING FROM POLICY TO IMPACT

Global AI policy and adoption are at an inflection point. AI adoption is accelerating, but trust and governance have not kept pace. Regulatory rollbacks, such as the revocation of the U.S. AI Executive Order and the withdrawal of the EU's AI Liability Directive, signal a shift away from oversight, pushing businesses to adopt AI without sufficient safety frameworks.

- 51% of companies have already deployed AI agents, and 78% are planning implementation soon (LangChain, 2024).
- 42% of workers say accuracy and reliability are top priorities for improving agentic AI tools (Pegasystems, 2025).
- 67% of IT decision-makers across the U.S., U.K., France, Germany, Australia, and Singapore report adopting AI despite reliability concerns, driven by FOMO (fear of missing out) (ABBYY Survey, 2025).

At the same time, AI vendors like OpenAI and Microsoft are urging businesses to 'accept imperfection,' a stance that directly contradicts the principles of responsible AI governance. AI-driven automation is already reshaping the workforce, yet most organizations lack structured transition plans, leading to job displacement, skill gaps, and growing concerns over AI's economic impact. The RAI Institute sees this moment as a call to action that goes beyond policy frameworks: creating concrete, operational tools, sharing real-world experiences, and learning from member experiences to safeguard AI deployment at scale.

STRATEGIC SHIFT: FROM POLICY TO PRACTICE

Following a six-month review of its operations and strategy, RAI Institute is realigning its mission around three core pillars:

1. EMBRACING HUMAN-LED AI AGENTS TO ACCELERATE RAI ENABLEMENT

The Institute will lead by example, integrating AI-powered processes across its operations as 'customer zero.' From AI-driven market intelligence to verification and assessment acceleration, RAI Institute is actively testing the power and exposing the limitations of agentic AI, ensuring it is effective, safe, and accountable in real-world applications.

2. SHIFTING FROM AI POLICY TO AI OPERATIONALIZATION

RAI Institute is shifting from policy to action by deploying AI-driven risk management tools and real-time monitoring agents to help companies automate evaluation and third-party verification against frameworks like NIST RMF, ISO 42001, and the EU AI Act. Additionally, RAI Institute is partnering with leading universities and research labs in the U.S., U.K., and India to co-develop, stress-test, and pilot responsible agentic AI, ensuring enterprises can measure agent performance, alignment, and unintended risks in real-world scenarios.
3. LAUNCHING THE RAISE AI PATHWAYS PROGRAM

RAI Institute is accelerating responsible AI adoption with the RAISE AI Pathways Program, delivering a suite of new human-augmented, AI agent-powered insights, assessments, and benchmarking to help businesses evaluate AI maturity, compliance, and readiness for agentic AI ecosystems. This program will leverage collaborations with industry leaders, including the Green Software Foundation and FinOps Foundation, and be backed by a matching grant pledge from private foundations, with further funding details to be announced later this year.

'The rise of agentic AI isn't on the horizon — it's already here, and we are shifting from advocacy to action to meet member needs,' said Jeff Easley, General Manager, Responsible AI Institute. 'AI is evolving from experimental pilots to large-scale deployment at an unprecedented pace. Our members don't just need policy recommendations — they need AI-powered risk management, independent verification, and benchmarking tools to help deploy AI responsibly without stifling innovation.'

RAISE AI PATHWAYS: LEVERAGING HUMAN-LED AGENTIC AI FOR ACCELERATED IMPACT

Beginning in March, RAI Institute will start a phased launch of its six AI Pathways Agents, developed in collaboration with leading cloud and AI tool vendors and university AI labs in the U.S., U.K., and India. These agents are designed to help enterprises access external tools to independently evaluate, build, deploy, and manage responsible agentic AI systems with safety, trust, and accountability. The phased rollout will ensure real-world testing, enterprise integration, and continuous refinement, enabling organizations to adopt AI-powered governance and risk management solutions at scale. Early access will be granted to select partners and current members, with broader availability expanding throughout the year. Sign up now to join the early access program!

Introducing the RAI AI Pathways Agent Suite:

- RAI Watchtower Agent – Real-time AI risk monitoring to detect compliance gaps, model drift, and security vulnerabilities before they escalate.
- RAI Corporate AI Policy Copilot – An intelligent policy assistant that helps businesses develop, implement, and maintain AI policies aligned with global policy and standards.
- RAI Green AI eVerification – A benchmarking program for measuring and optimizing AI's carbon footprint, in collaboration with the Green Software Foundation.
- RAI AI TCO eVerification – Independent Total Cost of Ownership verification for AI investments, in collaboration with the FinOps Foundation.
- RAI Agentic AI Purple Teaming – Proactive adversarial testing and defense strategies using industry standards and curated benchmarking data. This AI security agent identifies vulnerabilities, stress-tests AI systems, and mitigates risks such as hallucinations, attacks, bias, and model drift.
- RAI Premium Research – Exclusive, in-depth analysis on responsible AI implementation, governance, and risk management to stay ahead of emerging risks, regulatory changes, and AI best practices.

MOVING FORWARD: BUILDING A RESPONSIBLE AI FUTURE

The Responsible AI Institute is not merely adapting to AI's rapid evolution — it is leading the charge in defining how AI should be integrated responsibly. Over the next few months, RAI Institute will introduce:

- Scholarships, hackathons, and long-term internships funded by private foundations.
- A new global advisory board focused on agentic AI regulations, safety, and innovation.
- Upskilling programs to equip organizations with the tools to navigate the next era of AI governance.

JOIN THE MOVEMENT: THE TIME FOR RESPONSIBLE AI IS NOW!

Join us in shaping the future of responsible AI. Sign up for early access to the RAI AI Agents and RAISE Pathways Programs.

About the Responsible AI Institute

Since 2016, the Responsible AI Institute has been at the forefront of advancing responsible AI adoption across industries. As a non-profit organization, RAI Institute partners with policymakers, industry leaders, and technology providers to develop responsible AI benchmarks, governance frameworks, and best practices. With the launch of RAISE Pathways, RAI Institute equips organizations with expert-led training, real-time assessments, and implementation toolkits to strengthen AI governance, enhance transparency, and drive innovation at scale. Members include leading companies such as Boston Consulting Group, AMD, KPMG, Chevron, Ally, Mastercard and many others dedicated to bringing responsible AI to all industry sectors.

Media Contact
Nicole McCaffrey
Head of Strategy & Marketing, RAI Institute
[email protected]
+1 (440) 785-3588

SOURCE: Responsible AI Institute