25-07-2025
AI Governance In Tax Technology: The New Mandate For Trust And Transparency
Gaurav Aggarwal, Senior Vice President at Onix, Global Lead, Data & AI Solutions Engineering.
In an era of shifting regulation, tax technology is no longer merely operational; it is becoming strategic. As GenAI embeds itself into core business systems, tax leaders are no longer dealing only with automation but with accountability.
For indirect tax departments, too often the behind-the-scenes guardians of confidential financial information, AI adoption promises faster compliance, cleaner audits and more intelligent document workflows. But here's the catch: The smarter our systems get, the more explainable, ethical and observable they have to be.
Because in tax, trust is truly the currency.
The Stakes Are Different In Tax
A customer support chatbot that gives a wrong product recommendation is irritating. An AI program that misclassifies tax obligations or fails to track jurisdictional thresholds is something else entirely. The consequences aren't merely technical; they're legal.
Tax mistakes can lead to penalties, regulatory probes or, worse, loss of stakeholder trust. According to Thomson Reuters' "2025 Generative AI in Professional Services Report," over 70% of tax professionals support the use of GenAI, yet many lack the policies to govern its responsible use. That's why tax tech leaders require more than automation; they require assurance. They need to know not only what decisions were made but also how and why GenAI solutions made them.
This shift, from performance-first to governance-first, is unavoidable. It's the cornerstone of accountable AI in a trust-sensitive environment.
From 'Black Box' To Boardroom-Ready Transparency
We've all heard the analogy: AI should no longer be a 'black box.' But in tax, that's not a metaphor—it's a board-level mandate.
For example, if an AI model recommends a different VAT treatment for a transaction in the EU versus Asia-Pacific, compliance officers need clarity on the inputs, decision path and data models that led there. That's where explainable AI (XAI) and observability come in.
According to a McKinsey 2024 global survey, 78% of organizations now use GenAI for at least one business function, yet concerns around explainability and risk remain top priorities for leaders.
By instrumenting systems with logs, decision trees, input-output tracing and confidence scores, companies take the first steps toward what I call a "glass box" strategy, where AI isn't only auditable but intelligible. In business segments governed by trust and oversight, that isn't a best practice; it's a business necessity. As Gartner highlights, embedding explainability into enterprise AI systems is now considered a foundational governance requirement, especially in highly regulated fields like finance and tax.
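As a concrete illustration of that instrumentation, here is a minimal Python sketch of a "glass box" decision record. All names here are hypothetical, not any specific vendor's API: it captures the inputs, the decision path, and a confidence score, and routes low-confidence outputs to human review.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TaxDecisionRecord:
    """Audit record for one AI-assisted tax determination (a 'glass box' trace)."""
    model_version: str
    inputs: dict          # the facts the model saw (transaction, jurisdiction, ...)
    decision: str         # the recommendation produced
    confidence: float     # model confidence score, 0.0-1.0
    decision_path: list   # ordered rule/feature steps that led to the output
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def needs_human_review(self, threshold: float = 0.85) -> bool:
        # Low-confidence decisions are routed to a tax professional.
        return self.confidence < threshold

    def to_log_line(self) -> str:
        # One JSON line per decision keeps the trail auditable and greppable.
        return json.dumps(asdict(self))

# Example: record a hypothetical VAT treatment recommendation.
record = TaxDecisionRecord(
    model_version="vat-classifier-2025.07",
    inputs={"jurisdiction": "EU", "category": "digital services"},
    decision="standard VAT rate applies",
    confidence=0.62,
    decision_path=["matched category: digital services", "EU place-of-supply rule"],
)
print(record.needs_human_review())  # low confidence, so route to a reviewer
```

The design choice worth noting is that the trace is created at decision time, not reconstructed after the fact; that is what makes the record usable in an audit.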
Governance In Action: Five AI Mandates For Tax Leaders
Here are five practical steps companies must take to deploy GenAI ethically and securely across their tax stack:
1. Keep Humans In The Loop

GenAI can analyze enormous data volumes, but context still matters. Tax professionals need to stay in the loop to approve AI suggestions, identify edge cases and bring judgment into decision making. AI is meant to support, not supplant, the domain expert.
Consider an "AI-augmented tax strategy," where humans define the ethical compass and GenAI provides guidance through complexity.
2. Interrogate Models For Bias

Even in taxation, bias can creep in, particularly when models are trained on outdated rules or a narrow set of geographies. Leaders should ask vendors:
• What did the model get trained on?
• How is bias quantified and addressed?
• Is the model inspected on a regular basis for fairness?
This isn't about assuming bad faith. It's about making sure the tools work in the complex tax environments we actually operate in.
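One lightweight way to start quantifying bias, sketched below against a hypothetical review log (field names and values are illustrative assumptions), is to compare how often human reviewers override the model's suggestions across jurisdictions. A large gap between groups is a signal to investigate the training data.

```python
from collections import defaultdict

def override_rate_by_group(records, group_key="jurisdiction"):
    """Compare how often AI recommendations are overridden by human
    reviewers, broken out by a grouping attribute (e.g., jurisdiction).
    A persistent gap between groups suggests the model was trained on
    data that underrepresents some of them."""
    totals = defaultdict(int)
    overrides = defaultdict(int)
    for r in records:
        group = r[group_key]
        totals[group] += 1
        if r["overridden"]:
            overrides[group] += 1
    return {g: overrides[g] / totals[g] for g in totals}

# Hypothetical review log: one entry per AI suggestion plus the human outcome.
log = [
    {"jurisdiction": "EU", "overridden": False},
    {"jurisdiction": "EU", "overridden": True},
    {"jurisdiction": "APAC", "overridden": True},
    {"jurisdiction": "APAC", "overridden": True},
]
print(override_rate_by_group(log))  # {'EU': 0.5, 'APAC': 1.0}
```

A 100% override rate in one region versus 50% in another, as in this toy log, is exactly the kind of disparity the vendor questions above are meant to surface.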
3. Be Transparent With Clients

Clients shouldn't need a Ph.D. in AI. But they should be told where and how AI is influencing their filings or risk assessments.
A large global tax firm I recently had the opportunity to advise started sending "AI contribution summaries" to clients—short, plain-language summaries of how GenAI had influenced results. The outcome? Increased trust, reduced escalations and improved client retention.
4. Establish Clear Governance Policies

Without explicit governance, GenAI use can quickly drift into a legal gray area. Organizations must:
• Establish ownership of AI-based tax suggestions.
• Revamp employee policies and ethics education.
• Define escalation routes when AI results appear to be in error.
AI governance is not a back-office activity; it should be part of risk, legal and operations discussions from the start.
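A defined escalation route can be as simple as a small routing function. The sketch below is illustrative only; the thresholds and queue names are assumptions, not a standard, and real policies would live in configuration owned by risk and legal.

```python
def route_ai_result(confidence: float, flagged_by_reviewer: bool) -> str:
    """Route an AI-generated tax result to the right escalation path.
    Hypothetical policy: reviewer flags trump everything, low confidence
    goes to a human queue, and even auto-accepted results keep their trace."""
    if flagged_by_reviewer:
        # Possible-error path: the result goes to legal/tax counsel review.
        return "escalate-to-tax-counsel"
    if confidence < 0.70:
        # Low confidence: a domain expert must sign off before use.
        return "human-review-queue"
    # High confidence: accept, but retain the audit trail.
    return "auto-accept-with-audit-log"

print(route_ai_result(0.95, False))  # auto-accept-with-audit-log
print(route_ai_result(0.55, False))  # human-review-queue
print(route_ai_result(0.95, True))   # escalate-to-tax-counsel
```

The point is ownership: each returned route names a team that is accountable for the result, which is what "establish ownership of AI-based tax suggestions" means in practice.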
5. Pilot In A Sandbox First

Before deploying GenAI in core tax functions, pilot it in sandbox environments. Use synthetic data. Simulate edge cases. Run parallel analyses.
One CFO I advised insisted on a "no production without sandbox" policy for finance tech. Within three months, that policy caught two costly compliance mistakes and ensured AI models were refined before going live.
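The parallel-analysis step can be sketched as follows. Both the legacy rule engine and the GenAI model here are hypothetical stand-ins; the technique is simply to run both systems on the same synthetic transactions and flag every disagreement for human review before go-live.

```python
def legacy_rule_engine(txn: dict) -> str:
    """Stand-in for the existing deterministic tax logic."""
    if txn["jurisdiction"] == "EU" and txn["category"] == "digital":
        return "standard"
    return "reduced"

def genai_model(txn: dict) -> str:
    """Stand-in for the GenAI system under evaluation."""
    return "standard" if txn["category"] == "digital" else "reduced"

def parallel_run(transactions: list) -> list:
    """Run both systems on the same synthetic data and collect every
    disagreement for human review; nothing ships until the gaps are
    explained or the model is retrained."""
    disagreements = []
    for txn in transactions:
        expected = legacy_rule_engine(txn)
        candidate = genai_model(txn)
        if expected != candidate:
            disagreements.append(
                {"txn": txn, "legacy": expected, "genai": candidate}
            )
    return disagreements

# Synthetic transactions: the second one exposes a disagreement.
synthetic = [
    {"jurisdiction": "EU", "category": "digital"},
    {"jurisdiction": "APAC", "category": "digital"},
]
print(parallel_run(synthetic))
```

Each flagged disagreement becomes a sandbox finding to resolve, which is how the "no production without sandbox" policy above catches compliance mistakes before they are expensive.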
Closing Thought: Redefining Trust In The Age Of Intelligent Systems
Over the next five years, tax organizations will not only be embracing GenAI—they'll be measured by whether they do so responsibly.
Will they go all in on automation without reflection, or prioritize governance? Will they use AI merely to cut costs, or to build trust?
The tax leaders who treat AI governance as a strength, not a box to check, will be positioned ahead, not just in compliance but in credibility. Because in a field defined by scrutiny, how you use AI may matter more than whether you use it at all.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.