
Latest news with #RAIInstitute

The Responsible AI Institute appoints Matthew Martin as Global Advisor

Zawya

12 June 2025



Matthew brings over two decades of cybersecurity expertise to help organizations navigate evolving regulatory landscapes and deploy responsible AI with confidence.

Texas, U.S. – The Responsible AI Institute (RAI Institute), a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations, has appointed Matthew Martin, founder and CEO of Two Candlesticks and an international leader in cybersecurity, to its Global Advisory Board. Matthew's extensive cybersecurity expertise will help organizations strengthen AI governance, enhance transparency, and scale innovation responsibly.

With over 25 years of experience in the cybersecurity industry, Matthew has led and implemented security operations at Fortune 100 financial services companies. As CEO of Two Candlesticks, he provides high-level cybersecurity consultancy, strategy, and frameworks to underserved markets and regions. He will apply this expertise in his role at the RAI Institute to build awareness of transparent AI practices and help organizations overcome critical technological, ethical, and regulatory challenges.

'AI has the power to truly transform the world. If done correctly, it democratizes a lot of capabilities that used to be reserved just for developed markets. This is exactly why industries need organizations like the RAI Institute,' said Matthew Martin, Global Advisor at RAI Institute and CEO of Two Candlesticks. 'I'm proud to be a part of such a forward-thinking institute that's leading the way in advancing responsible AI innovation across diverse markets. Its mission directly aligns with my passion for playing an active role in establishing a resilient, future-ready cybersecurity foundation for all.'

Through its global network of responsible AI experts, the RAI Institute offers valuable insights to practitioners, policymakers, and regulators. With over 34,000 members and collaborators, its community spans technology, finance, healthcare, academia, and government agencies. Its goal is to operationalize responsible AI through education, benchmarking, verification, and third-party risk assessments.

'We are so pleased to have Matthew on board as a Global Advisor for the RAI Institute. His drive for serving the underserved in cybersecurity makes him a perfect addition to the board as we advance responsible AI across the entire ecosystem,' said Manoj Saxena, Chairman and Founder of the Responsible AI Institute. 'Trusted AI foundations lead to sustainable and scalable AI solutions. It's through the expert contributions of industry leaders like Matthew that we can strengthen our mission to ensure a secure future for AI.'

In addition to his role at the RAI Institute, Matthew holds advisory positions on the boards of Ironscales, Trustwise, Stealth, and Surge Ventures. Through his work at Two Candlesticks, he is making robust cybersecurity strategies accessible, efficient, and impactful across Africa, Asia, Europe, the Middle East, and the Americas.

About the Responsible AI Institute (RAI Institute)

Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI conformity assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations. Members include leading companies such as Amazon Web Services, Boston Consulting Group, KPMG, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.

From Policy to Practice: Responsible AI Institute Announces Bold Strategic Shift to Drive Impact in the Age of Agentic AI

Associated Press

19 February 2025



AUSTIN, Texas--(BUSINESS WIRE)--Feb 19, 2025-- The Responsible AI Institute (RAI Institute) is taking bold action to reshape and accelerate the future of responsible AI adoption. In response to rapid regulatory shifts, corporate FOMO, and the rise of agentic AI, the RAI Institute is expanding beyond policy advocacy to deploy AI-driven tools, agentic AI services, and new AI verification, badging, and benchmarking programs. Backed by a new partner ecosystem, university collaborations in the U.S., U.K., and India, and a pledge from private foundations, the RAI Institute is equipping organizations to confidently adopt and govern multi-vendor agent ecosystems.

THE AI LANDSCAPE HAS CHANGED — AND RAI INSTITUTE IS MOVING FROM POLICY TO IMPACT

Global AI policy and adoption are at an inflection point. AI adoption is accelerating, but trust and governance have not kept pace. Regulatory rollbacks, such as the revocation of the U.S. AI Executive Order and the withdrawal of the EU's AI Liability Directive, signal a shift away from oversight, pushing businesses to adopt AI without sufficient safety frameworks.

  • 51% of companies have already deployed AI agents, with another 78% planning implementation soon (LangChain, 2024).
  • 42% of workers say accuracy and reliability are top priorities for improving agentic AI tools (Pegasystems, 2025).
  • 67% of IT decision-makers across the U.S., U.K., France, Germany, Australia, and Singapore report adopting AI despite reliability concerns, driven by FOMO (fear of missing out) (ABBYY Survey, 2025).

At the same time, AI vendors like OpenAI and Microsoft are urging businesses to 'accept imperfection,' a stance that directly contradicts the principles of responsible AI governance. AI-driven automation is already reshaping the workforce, yet most organizations lack structured transition plans, leading to job displacement, skill gaps, and growing concerns over AI's economic impact.

The RAI Institute sees this moment as a call to action that goes beyond policy frameworks: creating concrete, operational tools and learning from real-world member experiences to safeguard AI deployment at scale.

STRATEGIC SHIFT: FROM POLICY TO PRACTICE

Following a six-month review of its operations and strategy, the RAI Institute is realigning its mission around three core pillars:

1. EMBRACING HUMAN-LED AI AGENTS TO ACCELERATE RAI ENABLEMENT

The Institute will lead by example, integrating AI-powered processes across its operations as 'customer zero.' From AI-driven market intelligence to verification and assessment acceleration, the RAI Institute is actively testing the power and exposing the limitations of agentic AI, ensuring it is effective, safe, and accountable in real-world applications.

2. SHIFTING FROM AI POLICY TO AI OPERATIONALIZATION

The RAI Institute is shifting from policy to action by deploying AI-driven risk management tools and real-time monitoring agents that help companies automate evaluation and third-party verification against frameworks such as the NIST AI RMF, ISO/IEC 42001, and the EU AI Act. Additionally, the RAI Institute is partnering with leading universities and research labs in the U.S., U.K., and India to co-develop, stress-test, and pilot responsible agentic AI, ensuring enterprises can measure agent performance, alignment, and unintended risks in real-world scenarios.

3. LAUNCHING THE RAISE AI PATHWAYS PROGRAM

The RAI Institute is accelerating responsible AI adoption with the RAISE AI Pathways Program, delivering a suite of new human-augmented, AI-agent-powered insights, assessments, and benchmarking to help businesses evaluate AI maturity, compliance, and readiness for agentic AI ecosystems. The program will leverage collaborations with industry leaders, including the Green Software Foundation and the FinOps Foundation, and will be backed by a matching grant pledge from private foundations, with further funding details to be announced later this year.

'The rise of agentic AI isn't on the horizon — it's already here, and we are shifting from advocacy to action to meet member needs,' said Jeff Easley, General Manager, Responsible AI Institute. 'AI is evolving from experimental pilots to large-scale deployment at an unprecedented pace. Our members don't just need policy recommendations — they need AI-powered risk management, independent verification, and benchmarking tools to help deploy AI responsibly without stifling innovation.'

RAISE AI PATHWAYS: LEVERAGING HUMAN-LED AGENTIC AI FOR ACCELERATED IMPACT

Beginning in March, the RAI Institute will launch its six AI Pathways Agents in phases, developed in collaboration with leading cloud and AI tool vendors and university AI labs in the U.S., U.K., and India. These agents are designed to help enterprises independently evaluate, build, deploy, and manage responsible agentic AI systems with safety, trust, and accountability. The phased rollout will ensure real-world testing, enterprise integration, and continuous refinement, enabling organizations to adopt AI-powered governance and risk management solutions at scale. Early access will be granted to select partners and current members, with broader availability expanding throughout the year. Sign up now to join the early access program!

Introducing the RAI AI Pathways Agent Suite:

  • RAI Watchtower Agent – Real-time AI risk monitoring to detect compliance gaps, model drift, and security vulnerabilities before they escalate.
  • RAI Corporate AI Policy Copilot – An intelligent policy assistant that helps businesses develop, implement, and maintain AI policies aligned with global policy and standards.
  • RAI Green AI eVerification – A benchmarking program for measuring and optimizing AI's carbon footprint, in collaboration with the Green Software Foundation.
  • RAI AI TCO eVerification – Independent total cost of ownership (TCO) verification for AI investments, in collaboration with the FinOps Foundation.
  • RAI Agentic AI Purple Teaming – Proactive adversarial testing and defense strategies using industry standards and curated benchmarking data. This AI security agent identifies vulnerabilities, stress-tests AI systems, and mitigates risks such as hallucinations, attacks, bias, and model drift.
  • RAI Premium Research – Exclusive, in-depth analysis of responsible AI implementation, governance, and risk management, helping members stay ahead of emerging risks, regulatory changes, and AI best practices.

MOVING FORWARD: BUILDING A RESPONSIBLE AI FUTURE

The Responsible AI Institute is not merely adapting to AI's rapid evolution — it is leading the charge in defining how AI should be integrated responsibly. Over the next few months, the RAI Institute will introduce:

  • Scholarships, hackathons, and long-term internships funded by private foundations.
  • A new global advisory board focused on agentic AI regulations, safety, and innovation.
  • Upskilling programs to equip organizations with the tools to navigate the next era of AI governance.

JOIN THE MOVEMENT: THE TIME FOR RESPONSIBLE AI IS NOW!

Join us in shaping the future of responsible AI. Sign up for early access to the RAI AI Agents and RAISE Pathways Programs.

About the Responsible AI Institute

Since 2016, the Responsible AI Institute has been at the forefront of advancing responsible AI adoption across industries. As a non-profit organization, the RAI Institute partners with policymakers, industry leaders, and technology providers to develop responsible AI benchmarks, governance frameworks, and best practices. With the launch of RAISE Pathways, the RAI Institute equips organizations with expert-led training, real-time assessments, and implementation toolkits to strengthen AI governance, enhance transparency, and drive innovation at scale. Members include leading companies such as Boston Consulting Group, AMD, KPMG, Chevron, Ally, Mastercard, and many others dedicated to bringing responsible AI to all industry sectors.

CONTACT: Media Contact
Nicole McCaffrey
Head of Strategy & Marketing, RAI Institute
[email protected]
+1 (440) 785-3588

SOURCE: Responsible AI Institute

Copyright Business Wire 2025. PUB: 02/19/2025 09:11 AM/DISC: 02/19/2025 09:12 AM
