From Policy to Practice: Responsible AI Institute Announces Bold Strategic Shift to Drive Impact in the Age of Agentic AI
The Responsible AI Institute (RAI Institute) is taking bold action to reshape and accelerate the future of responsible AI adoption. In response to rapid regulatory shifts, corporate FOMO, and the rise of agentic AI, RAI Institute is expanding beyond policy advocacy to deploy AI-driven tools, agentic AI services, and new AI verification, badging, and benchmarking programs. Backed by a new partner ecosystem, university collaborations in the U.S., U.K., and India, and a pledge from private foundations, RAI Institute is equipping organizations to confidently adopt and govern multi-vendor agent ecosystems.
THE AI LANDSCAPE HAS CHANGED — AND RAI INSTITUTE IS MOVING FROM POLICY TO IMPACT
Global AI policy and adoption are at an inflection point. AI adoption is accelerating, but trust and governance have not kept pace. Regulatory rollbacks, such as the revocation of the U.S. AI Executive Order and the withdrawal of the EU's AI Liability Directive, signal a shift away from oversight, pushing businesses to adopt AI without sufficient safety frameworks.
51% of companies have already deployed AI agents, and a further 78% plan to implement them soon (LangChain, 2024).
42% of workers say accuracy and reliability are top priorities for improving agentic AI tools (Pegasystems, 2025).
67% of IT decision-makers across the U.S., U.K., France, Germany, Australia, and Singapore report adopting AI despite reliability concerns, driven by FOMO (fear of missing out) (ABBYY Survey, 2025).
At the same time, AI vendors like OpenAI and Microsoft are urging businesses to 'accept imperfection,' a stance that directly contradicts the principles of responsible AI governance. AI-driven automation is already reshaping the workforce, yet most organizations lack structured transition plans, leading to job displacement, skill gaps, and growing concerns over AI's economic impact.
The RAI Institute sees this moment as a call to action that goes beyond policy frameworks: creating concrete, operational tools and learning from members' real-world experiences to safeguard AI deployment at scale.
STRATEGIC SHIFT: FROM POLICY TO PRACTICE
Following a six-month review of its operations and strategy, RAI Institute is realigning its mission around three core pillars:
1. EMBRACING HUMAN-LED AI AGENTS TO ACCELERATE RAI ENABLEMENT
The Institute will lead by example, integrating AI-powered processes across its operations as 'customer zero.' From AI-driven market intelligence to verification and assessment acceleration, RAI Institute is actively testing the power and exposing the limitations of agentic AI, ensuring it is effective, safe, and accountable in real-world applications.
2. SHIFTING FROM AI POLICY TO AI OPERATIONALIZATION
RAI Institute is shifting from policy to action by deploying AI-driven risk management tools and real-time monitoring agents to help companies automate evaluation and third-party verification against frameworks such as the NIST AI Risk Management Framework, ISO/IEC 42001, and the EU AI Act. Additionally, RAI Institute is partnering with leading universities and research labs in the U.S., U.K., and India to co-develop, stress-test, and pilot responsible agentic AI, ensuring enterprises can measure agent performance, alignment, and unintended risks in real-world scenarios.
3. LAUNCHING THE RAISE AI PATHWAYS PROGRAM
RAI Institute is accelerating responsible AI adoption with the RAISE AI Pathways Program, delivering a suite of new human-augmented, AI agent-powered insights, assessments, and benchmarking to help businesses evaluate AI maturity, compliance, and readiness for agentic AI ecosystems. This program will leverage collaborations with industry leaders, including the Green Software Foundation and the FinOps Foundation, and will be backed by a matching grant pledge from private foundations, with further funding details to be announced later this year.
'The rise of agentic AI isn't on the horizon — it's already here, and we are shifting from advocacy to action to meet member needs,' said Jeff Easley, General Manager, Responsible AI Institute. 'AI is evolving from experimental pilots to large-scale deployment at an unprecedented pace. Our members don't just need policy recommendations — they need AI-powered risk management, independent verification, and benchmarking tools to help deploy AI responsibly without stifling innovation.'
RAISE AI PATHWAYS: LEVERAGING HUMAN-LED AGENTIC AI FOR ACCELERATED IMPACT
In March, RAI Institute will begin a phased launch of its six AI Pathways Agents, developed in collaboration with leading cloud and AI tool vendors and university AI labs in the U.S., U.K., and India. These agents are designed to help enterprises access external tools to independently evaluate, build, deploy, and manage responsible agentic AI systems with safety, trust, and accountability.
The phased rollout will ensure real-world testing, enterprise integration, and continuous refinement, enabling organizations to adopt AI-powered governance and risk management solutions at scale. Early access will be granted to select partners and current members, with broader availability expanding throughout the year. Sign up now to join the early access program!
Introducing the RAI AI Pathways Agent Suite:
RAI Watchtower Agent – Real-time AI risk monitoring to detect compliance gaps, model drift, and security vulnerabilities before they escalate.
RAI Corporate AI Policy Copilot – An intelligent policy assistant that helps businesses develop, implement, and maintain AI policies aligned with global policy and standards.
RAI Green AI eVerification – A benchmarking program for measuring and optimizing AI's carbon footprint, in collaboration with the Green Software Foundation.
RAI AI TCO eVerification – Independent Total Cost of Ownership verification for AI investments, in collaboration with the FinOps Foundation.
RAI Agentic AI Purple Teaming – Proactive adversarial testing and defense strategies using industry standards and curated benchmarking data. This AI security agent identifies vulnerabilities, stress-tests AI systems, and mitigates risks such as hallucinations, attacks, bias, and model drift.
RAI Premium Research – Access exclusive, in-depth analysis on responsible AI implementation, governance, and risk management. Stay ahead of emerging risks, regulatory changes, and AI best practices.
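For readers unfamiliar with the model-drift monitoring mentioned in the Watchtower Agent description above, the core idea can be sketched in a few lines. This is an illustrative example only, not RAI Institute's actual implementation: it uses the Population Stability Index (PSI), a common drift metric that compares a feature's distribution at training time against live production traffic.

```python
# Illustrative sketch of model-drift detection (not RAI's implementation):
# Population Stability Index (PSI) between a reference distribution and
# live data. Rule of thumb: < 0.1 stable, 0.1-0.25 moderate, > 0.25 major drift.
import numpy as np

def psi(reference, live, bins=10):
    """Population Stability Index between two 1-D samples."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Floor empty buckets to avoid log(0)
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature distribution
drifted = rng.normal(1.0, 1.2, 10_000)   # shifted production traffic

print(psi(baseline, baseline[:5000]))  # small: same distribution, no drift
print(psi(baseline, drifted))          # large: flags major drift
```

A monitoring agent of this kind would typically run such checks per feature on a schedule and raise an alert when the index crosses a threshold, which is what "detecting model drift before it escalates" amounts to in practice.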
MOVING FORWARD: BUILDING A RESPONSIBLE AI FUTURE
The Responsible AI Institute is not merely adapting to AI's rapid evolution — it is leading the charge in defining how AI should be integrated responsibly. Over the next few months, RAI Institute will introduce:
Scholarships, hackathons, and long-term internships funded by private foundations.
A new global advisory board focused on Agentic AI regulations, safety, and innovation.
Upskilling programs to equip organizations with the tools to navigate the next era of AI governance.
JOIN THE MOVEMENT: THE TIME FOR RESPONSIBLE AI IS NOW!
Join us in shaping the future of responsible AI. Sign up for early access to the RAI AI Agents and RAISE Pathways Programs.
About the Responsible AI Institute
Since 2016, the Responsible AI Institute has been at the forefront of advancing responsible AI adoption across industries. As a non-profit organization, RAI Institute partners with policymakers, industry leaders, and technology providers to develop responsible AI benchmarks, governance frameworks, and best practices. With the launch of RAISE Pathways, RAI Institute equips organizations with expert-led training, real-time assessments, and implementation toolkits to strengthen AI governance, enhance transparency, and drive innovation at scale.
Members include leading companies such as Boston Consulting Group, AMD, KPMG, Chevron, Ally, Mastercard and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact:
Nicole McCaffrey
Head of Strategy & Marketing, RAI Institute
[email protected]
+1 (440) 785-3588
SOURCE: Responsible AI Institute
Copyright Business Wire 2025.
PUB: 02/19/2025 09:11 AM/DISC: 02/19/2025 09:12 AM