
Credo AI Collaborates with Microsoft to Launch AI Governance Developer Integration to Fast-Track Compliant, Trustworthy Enterprise AI
SAN FRANCISCO--(BUSINESS WIRE)--Credo AI, a global pioneer and leader in AI governance for the enterprise, today launched an integration with Microsoft Azure AI Foundry. The collaboration, first announced in November, now takes its next step: bridging a long-standing divide between technical development and AI governance teams, empowering enterprises to innovate with AI at speed and scale while ensuring trust, safety, and compliance.
A recent Gartner report predicted that 60% of GenAI projects will fail after proof-of-concept due to gaps in governance, data, and cost control. Governance teams often lack the technical context to define or interpret AI evaluation results, while developers lack clarity on how to meet emerging governance requirements. The result is misalignment, friction, and AI innovation stuck in R&D.
'As AI becomes central to enterprise value creation, governance must shift from reactive oversight to proactive enablement,' said Navrina Singh, Founder and CEO of Credo AI. 'Our integration with Microsoft Azure AI Foundry represents a breakthrough: actionable, real-time governance that lives where AI is built. It's how innovation accelerates with responsibility.'
'Credo AI's integration tackles one of the biggest blockers in enterprise AI: the communication and alignment gap between AI governance teams and developers,' said Sarah Bird, Chief Product Officer for Responsible AI, Microsoft. 'The integration delivers prescriptive guidance to AI governance leaders on what to evaluate and empowers developers to run governance-aligned evaluations directly within their workflow.'
The First Step to Solving the AI R&D Bottleneck
This integration marks a breakthrough in Credo AI's vision to operationalize policy-to-code translation: turning abstract governance goals into concrete, actionable metrics and steps. By bridging policy and execution, it empowers governance teams to convert risk-management and innovation strategies into code-level evaluations, enabling scalable, measurable risk management across the AI lifecycle.
The benefits of Credo AI's integration with Azure AI Foundry:
Governance teams receive structured, validated technical evidence tied to each use case.
Developers get code to run evaluators (such as groundedness, hallucination, and bias) to ensure their development process is aligned with AI governance and business objectives.
Evaluator results automatically flow back into the Credo AI platform, linking risk insights directly to governance workflows.
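The loop described above — governance teams define required evaluations, developers run them in code, and results flow back to the governance platform — can be sketched in plain Python. Every name here (the toy `groundedness` metric, the evaluator registry, the report schema) is a hypothetical illustration, not the actual Credo AI or Azure AI Foundry SDK.

```python
# Hypothetical sketch of a governance-aligned evaluation loop.
# None of these names come from the real Credo AI or Azure SDKs.

def groundedness(response: str, context: str) -> float:
    """Toy groundedness metric: fraction of response tokens found in the context."""
    tokens = response.lower().split()
    ctx = set(context.lower().split())
    return sum(t in ctx for t in tokens) / max(len(tokens), 1)

# Registry mapping governance-defined evaluator names to implementations.
EVALUATORS = {"groundedness": groundedness}

# A use case's governance requirements, as the platform might express them.
REQUIREMENTS = [{"evaluator": "groundedness", "threshold": 0.8}]

def run_governance_evaluations(response: str, context: str) -> list[dict]:
    """Run each required evaluator and build a report for the governance platform."""
    report = []
    for req in REQUIREMENTS:
        score = EVALUATORS[req["evaluator"]](response, context)
        report.append({
            "evaluator": req["evaluator"],
            "score": round(score, 3),
            "passed": score >= req["threshold"],
        })
    return report

report = run_governance_evaluations(
    response="the model is approved for internal use",
    context="governance review: the model is approved for internal use only",
)
print(report)
```

The point of the sketch is the shape of the handshake: requirements travel one way as data, evaluation results travel back as structured evidence tied to a use case, so neither team has to interpret the other's artifacts by hand.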
Unlocking Innovation with Built-In Trust
As part of this integration, all Azure AI Foundry models are governable within the Credo AI Platform, made possible by Credo AI's automatic, integration-specific mapping of each model to the appropriate policies, risks, and evaluation requirements. This ensures:
Faster AI adoption and approvals through contextual risk insights
End-to-end compliance visibility aligned with the EU AI Act, the NIST AI RMF, and ISO/IEC 42001
Smarter investment decisions based on governance readiness and risk-adjusted ROI
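The "automatic mapping" described above can be pictured as a lookup from a model to the governance artifacts it must satisfy before approval. The schema below is an illustrative assumption, not Credo AI's actual data model; the frameworks named are the real standards cited in this release.

```python
# Hypothetical mapping from a deployed model to its governance requirements.
# The schema is illustrative only; "gpt-4o" stands in for any Azure AI Foundry model.
POLICY_MAP = {
    "gpt-4o": {
        "risk_tier": "high",
        "frameworks": ["EU AI Act", "NIST AI RMF", "ISO/IEC 42001"],
        "required_evaluations": ["groundedness", "hallucination", "bias"],
    },
}

def governance_requirements(model_name: str) -> dict:
    """Return the policies and evaluations a model must satisfy before approval."""
    entry = POLICY_MAP.get(model_name)
    if entry is None:
        raise KeyError(f"No governance mapping registered for {model_name!r}")
    return entry

reqs = governance_requirements("gpt-4o")
print(reqs["required_evaluations"])
```

A lookup like this is what lets approval status, framework coverage, and outstanding evaluations be computed per use case rather than negotiated ad hoc between teams.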
The integration is already in active pilots with select Global 2000 enterprises and has been met with strong enthusiasm from Microsoft teams and customers alike. Early users report accelerated model approval, clearer cross-team collaboration among stakeholders, and faster time-to-value for high-risk AI initiatives.
'At Version1, we're using the new Credo AI and Microsoft Azure AI Foundry integration to streamline AI governance for our clients—embedding policy, risk, and compliance into development and easing the load on our AI Labs team,' said Brad Mallard, CTO of Version1.
More information on the Credo AI integration for Azure AI Foundry can be found here. To request a demo of Credo AI's Platform, visit credo.ai.
Credo AI's AI Governance Platform and AI Governance Advisory Services empower your enterprise to adopt and scale trusted AI with confidence. From Generative AI to Agentic AI, Credo AI's centralized platform measures, monitors, and manages AI risk, enabling your organization to maximize AI's value while mitigating security, privacy, compliance, and operational challenges. Credo AI also future-proofs your AI investments by aligning with global regulations, industry standards, and company values. Recognized among Fast Company's Most Innovative Companies, the CB Insights AI 100, Inc.'s Best Workplaces, and as a World Economic Forum Technology Pioneer, Credo AI is leading the charge in accelerating the adoption of trusted AI.