21-05-2025
Inside KPMG's Global AI Trust Study
There is a large conversation about trust in generative AI, and KPMG's latest study is an incredibly comprehensive review of trust, use, and attitudes towards AI. Their study captures the attitudes of 48,000 people across 47 countries. On average, 58% of people surveyed view AI systems as trustworthy, but only 46% are willing to trust them. Many are also concerned about AI-generated misinformation, with 70% not knowing if online content can be trusted because it might be AI-generated. Ruth Svensson, a partner at KPMG UK who serves as the Global Head of People and HR CoE, and Samantha Gloede, who leads AI Trusted Transformation for KPMG International, discuss the key findings.
Given the deep interest in trust (and mistrust), it naturally raises the question of whether the technology has actually broken anyone's trust yet. Gloede and Svensson think the answer to that question is yes. "There is a breakdown in trust because AI is moving so quickly, and people's literacy is lagging behind the adoption," says Gloede. "People are using AI without proper education, so they don't quite know how to use it effectively or accurately."
People are also taking their experiences of interacting with AI outside of work and assuming that the same experience will hold true in the workplace. Svensson uses the example of companion technology and how, over time, those businesses started manipulating their users. It's the manipulation of humans for profit that doesn't sit well with many and drives mistrust. That said, trust is also often context-specific. "There's quite a big difference in using AI in society and using AI within organizations," says Svensson. "So, I think there are pockets in society where that usage is causing mistrust." That mistrust can then transfer over to the workplace.
Another factor exacerbating trust issues is the fear of falling behind the times and the threat of job displacement, which creates uncertainty. This fear is largely driven by a lack of tools and training. "People either don't have the tools available yet through their organization, or they're there, but they don't quite know how to use them. Or, people aren't sure if the organization will approve of using AI to do things. That can drive people to use AI secretly and not tell anyone," says Gloede. In fact, 61% of people surveyed avoided revealing their use of AI, even though it's rampant in some pockets of the organization. "By far, the most widely used generative AI tools are the ones available to the public — but they're using them at work," says Svensson. Around 70% of people use public tools, versus just 42% using their organization's purpose-built tools.
"We're so passionate about the trusted story at KPMG because there is a lack of clear regulation about what is or isn't acceptable. For example, human manipulation for profit, when it leads to negative consequences from a mental health perspective, should never be acceptable," says Svensson. That is why clear AI usage policies at work are so important. Yet, only two in five employees surveyed said their organization had such a policy in place. "I am 100% sure more than two in five organizations have a policy in place. Rather, there's just this huge lack of communication," says Svensson.
To that end, companies can learn a lot from KPMG's practices on both AI governance and communication.
"We created an 'AI Responsible Use' policy early on," says Gloede. "It's values-led, so it's all about being open and inclusive and operating at the highest ethical standards." The policy also borrows heavily from KPMG's Trusted AI framework, which poses thoughtful questions to teams embarking on AI usage — questions regarding transparency, fairness, bias, ethics, and more. KPMG also provides mandatory foundational AI training along with more specific role-based training.
Given that, how is large-scale adoption going? "AI is more complex than legacy technology, but I think it's very achievable if you approach it in a systematic and holistic way," says Gloede. "Ultimately, people are going to have to want to use it," says Svensson. "That takes a level of effort and motivation that often isn't in there." Luckily, increasing trust, governance, and training can lessen that effort and increase motivation for adoption.