15-07-2025
The Missing Piece In AI Strategy: Governance That Drives Trust
Aditya Vikram Kashyap, VP, Firmwide Innovation at Morgan Stanley, driving innovation, AI strategy and transformative leadership.
I spend much of my time working at the intersection of innovation and risk. Through this experience, I've witnessed AI evolve from a curiosity in research labs to a force reshaping nearly every dimension of business and society.
The pace is staggering. We now talk in terms of weeks—not years—when it comes to breakthroughs in generative AI, predictive models and intelligent automation. Amid this whirlwind of progress, there is one conversation I believe leaders are not having often enough, and it may be the most important of all: How do we govern the AI we unleash into the world?
This is not a legal formality or an afterthought. It is a matter of trust, reputation and long-term business resilience. In my view, it must now be owned at the very top of the enterprise.
Why This Keeps Me Up At Night
I see the risks firsthand. AI models that hallucinate. Automated decisions that encode societal bias. Brilliant prototypes that cannot be explained to regulators—or even to the teams that built them.
And I see the governance gap growing. A study from MIT Sloan Management Review and BCG found that although nearly 80% of organizations are ramping up AI investments, only one in five has fully implemented responsible AI governance practices.
That number should give every leader pause.
Without governance, the AI revolution risks becoming a crisis of trust. We will not get a second chance to do this right. Once customer trust is lost, once regulators step in forcefully, once reputations are damaged, the harm will be very difficult to unwind.
Governance: Not A Brake, An Engine For Scaling Trustworthy AI
One common concern I hear from fellow executives is that governance might slow down innovation. I would argue the opposite: Good governance enables sustainable innovation.
In my work with AI initiatives, I've seen this dynamic clearly. When governance is embedded from the start—across data sourcing, model design, deployment and monitoring—AI systems are more reliable, more explainable and more trusted by customers. And that trust becomes a competitive advantage.
In fact, according to an Accenture survey of 1,000 executives from around the world, "Companies expect a 25% increase in customer loyalty and satisfaction from offering responsible AI-enabled products and services."
In short: Governance is not red tape. It is the scaffolding that allows AI innovation to scale safely.
Principles Every Leader Should Embrace
Through experience and a fair share of missteps, I've come to believe that AI governance must rest on a few key pillars:
1. Start with purpose, not just compliance. AI projects must be grounded in a clear understanding of what you are trying to achieve—and what human values the system must reflect. Compliance alone is not enough. Governance that starts with purpose will ultimately serve the organization better.
2. Embed governance across the full life cycle. Too often, AI governance is treated as a late-stage review. But the risks, and opportunities, are present from day one. We need governance embedded at every stage, from data collection to post-deployment monitoring.
3. Make governance cross-functional. AI governance cannot live in a single silo. It must be co-owned by technology, risk, legal, ethics, business units and the board. Diversity of perspective is essential to navigating the complexities of AI's real-world impacts.
4. Demand transparency and accountability. Transparency is not optional. We must be able to explain model behavior, document decision logic and assign clear accountability for outcomes. Without this, we undermine trust.
A Leadership Imperative, Not A Technical One
Perhaps the most important lesson I've learned is this: AI governance cannot be delegated. It belongs on the leadership agenda.
Boards must be asking: Are our governance frameworks keeping pace with AI adoption? How do we ensure our AI reflects our values? What accountability structures are in place? Are we building AI systems that our customers—and society—can trust?
In the years ahead, the winners in AI will not be those who race ahead the fastest. They will be those who build AI that is trustworthy, transparent and human-centered. That is a leadership challenge. And it is one we must rise to—together.