Latest news with #SamanthaGloede


Forbes
21-05-2025
- Business
- Forbes
Inside KPMG's Global AI Trust Study
There is a broad conversation about trust in generative AI, and KPMG's latest study is a remarkably comprehensive review of trust, use, and attitudes towards AI. The study captures the attitudes of 48,000 people across 47 countries. On average, 58% of people surveyed view AI systems as trustworthy, but only 46% are willing to trust them. Many are also concerned about AI-generated misinformation, with 70% unsure whether online content can be trusted because it might be AI-generated.

Ruth Svensson, a partner at KPMG UK who serves as the Global Head of People and HR CoE, and Samantha Gloede, who leads AI Trusted Transformation for KPMG International, discuss the key findings. Given the deep interest in trust (and mistrust), a natural question is whether the technology has actually broken anyone's trust yet. Gloede and Svensson think the answer is yes. "There is a breakdown in trust because AI is moving so quickly, and people's literacy is lagging behind the adoption," says Gloede. "People are using AI without proper education, so they don't quite know how to use it effectively or accurately."

People are also taking their experiences of interacting with AI outside of work and assuming that the same experience will hold true in the workplace. Svensson uses the example of companion technology and how, over time, those businesses started manipulating their users. It is this manipulation of humans for profit that doesn't sit well with many and drives mistrust. That said, trust is often context-specific. "There's quite a big difference in using AI in society and using AI within organizations," says Svensson. "So, I think there are pockets in society where that usage is causing mistrust." That mistrust can then transfer over to the workplace. Another issue exacerbating trust concerns is the fear of falling behind the times and the threat of job displacement, which creates uncertainty.
This fear is largely driven by a lack of tools and training. "People either don't have the tools available yet through their organization, or they're there, but they don't quite know how to use them. Or, people aren't sure if the organization will approve of using AI to do things. That can drive people to use AI secretly and not tell anyone," says Gloede. In fact, 61% of people have avoided revealing their use of AI, even though it is rampant in some pockets of the organization. "By far, the most widely used generative AI tools are the ones available to the public — but they're using them at work," says Svensson. Around 70% of people are using public tools, versus just 42% using their organization's purpose-built tools.

"We're so passionate about the trusted story at KPMG because there is a lack of clear regulation about what is or isn't acceptable. For example, human manipulation for profit when it leads to negative consequences from a mental health perspective should never be acceptable," says Svensson. That is why clear AI usage policies at work are so important. Yet only two in five employees surveyed said their organization had such a policy in place. "I am 100% sure more than two in five organizations have a policy in place. Rather, there's just this huge lack of communication," says Svensson.

To that end, companies can learn a lot from KPMG's practices on both AI governance and communication. "We created an 'AI Responsible Use' policy early on," says Gloede. "It's values-led, so it's all about being open and inclusive and operating at the highest ethical standards." It also borrows heavily from KPMG's trusted AI framework, which poses thoughtful questions to teams embarking on AI usage, covering transparency, fairness, bias, ethics, and more. KPMG also provides mandatory foundational AI training along with more specific role-based training.
Given that, how is large-scale adoption going? "AI is more complex than legacy technology, but I think it's very achievable if you approach it in a systematic and holistic way," says Gloede. "Ultimately, people are going to have to want to use it," says Svensson. "That takes a level of effort and motivation that often isn't in there." Luckily, increasing trust, governance, and training can lessen that effort and increase motivation for adoption.


Euronews
05-05-2025
- Business
- Euronews
Almost half of workers who use AI on the job don't trust it, new survey shows
A vast majority of people use artificial intelligence (AI) every day, even though they don't trust its outputs, according to a new study. Researchers from the University of Melbourne in Australia and the global consulting firm KPMG surveyed over 48,000 people in 47 countries between November 2024 and January 2025 about their trust, use, and attitudes towards AI.

The study found that while over two-thirds of respondents use AI with some regularity, whether at work, for school, or in their personal time, only 46 per cent are willing to trust these systems. Participants were asked to rank how much they trusted the technical ability of AI systems and the safety, security, and ethical soundness of those systems. Researchers then rated the responses on a nine-point scale to determine how much each respondent trusted the AI.

"Trust is sort of the strongest predictor of AI's acceptance," Samantha Gloede, managing director at KPMG, told Euronews Next. "We don't think any organisation can move faster than the speed of trust".

What people struggle to believe in is AI's ability to be fair and do no harm, according to the study authors. Where people have more faith is in the technical ability of AI to provide accurate and reliable output and services.

'Taking it into their own hands'

The study also found that 58 per cent of respondents use AI regularly at work, with 33 per cent of these respondents using it weekly or daily in their jobs. These employees say it makes them more efficient, gives them greater access to information, and lets them be more innovative. In almost half of the cases, respondents noted that AI has increased revenue-generating activity.
There's added risk, though, for companies whose employees do use AI at work, because half of the respondents who use chatbots at work say they do so in violation of company policies. "People feel almost pressured that if they don't use it, they will be… set behind their competitors," Gloede said.

Gloede said they heard examples where employees admitted to uploading sensitive company information into free public tools like ChatGPT, or where deepfakes were being made of senior leadership, which could damage their reputation or that of the company. Employees have also presented the work of AI chatbots as their own, with 57 per cent saying they've hidden the fact that they've used AI in their work. These employees have also done so without necessarily evaluating the accuracy of the content that the AI generated for them. Another 56 per cent report making mistakes in their work due to their use of AI without subsequent fact-checking, the report continues.

"For organisations that don't have an AI literacy study in place … [employees] are taking things into their own hands," she said.

'We have so much to gain'

The study also found that half of the employees surveyed didn't understand AI and how it's used. Furthermore, only two out of five employees reported getting any AI-related training or education about how to use it. One example of what companies could do to teach their employees about how to use AI is to create a "trusted AI framework," Gloede said, that includes 10 different principles to consider when using the technology in their work. She said she's hoping that the survey findings will encourage C-suite executives, tech companies, and public policy makers to take action.
"We have so much to gain from [AI] if it is executed by organisations, by governments, in a responsible way," she said.

Associated Press
29-04-2025
- Business
- Associated Press
The American Trust in AI Paradox: Adoption Outpaces Governance
New York, New York--(Newsfile Corp. - April 29, 2025) - AI adoption in the U.S. workplace has outpaced most companies' ability to govern AI use, according to the KPMG Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025. Half of the U.S. workforce reported that they use AI tools at work without knowing whether it is allowed, and more than four in ten (44%) are knowingly using it improperly at work. In addition, 58% of U.S. workers admit to relying on AI to complete work without properly evaluating the outcomes, and 53% claim to present AI-generated content as their own.

"This survey makes one thing clear: if you don't give people access to AI, they'll find their way into it anyway - often using it in ways that bypass policies, introduce errors, and blur accountability," said Steve Chase, Vice Chair of AI & Digital Innovation. "We're seeing this with clients too, especially those that have been slow to roll out tools or encourage responsible experimentation. If you haven't already, now's the time to invest in strong Trusted AI capabilities. And as agents become more and more a part of everyday workflows, getting this right only becomes more critical."

AI tools used without proper authorization or in inappropriate ways

Nearly half (44%) of employees are using AI tools at work in ways that their employers haven't authorized, with 46% uploading sensitive company information and intellectual property to public AI platforms, violating policies and creating vulnerabilities for their organizations. Furthermore, while two-thirds of U.S. workers are leveraging AI at work, many are not properly evaluating the outcomes. Sixty-four percent of employees admit to putting less effort into their work, knowing they can rely on AI, and 58% rely on AI output without thoroughly assessing the information. This reliance has led to 57% making mistakes in their work, and 53% avoid disclosing when they have used AI, often presenting AI-generated content as their own.
"Half of U.S. workers are using AI tools without clear authorization, and many have admitted to using AI inappropriately," said Samantha Gloede, Trusted Enterprise Leader, KPMG LLP. "This highlights a significant gap in governance and raises serious concerns about transparency, ethical behavior, and the accuracy of AI-generated content. This should be a wake-up call for employers to provide comprehensive AI training, not only to manage risks but also to maintain trust."

Trust is the bedrock of AI adoption

While 70% of U.S. workers are eager to leverage AI's benefits and 61% have already experienced positive impacts, 75% remain concerned about negative outcomes. Although the majority (80%) believe AI has improved operational efficiency and innovative strategy, because it can process massive volumes of data at incomprehensible speeds and strengthen humans' capabilities, insights, and productivity, trust in AI remains low: 43% have low confidence in both commercial and government entities to develop and use AI responsibly.

"Employees are asking for greater investments in AI training and the implementation of clear governance policies to bridge the gap between AI's potential and its responsible use," said Bryan McGowan, Trusted AI Leader, KPMG LLP. "It's not enough for AI to simply work; it needs to be trustworthy. Building this strong foundation is an investment that will pay dividends in future productivity and growth."

AI governance is struggling to keep pace with the rapid integration of AI

Only 54% of U.S. consumers believe their organizations have policies for responsible AI use, and another 25% think no such policies exist at all. Similarly, 55% believe their organizations regularly monitor AI systems, while only three-fifths (59%) of U.S. workers believe there are people within their organizations accountable for overseeing the use of AI.
"AI is advancing rapidly, yet governance in many organizations has not kept pace. Organizations must incorporate comprehensive safeguards into AI systems, proactively prepare for foreseeable challenges, and mitigate operational, financial, and reputational risks," said Gloede.

Survey participants' perceptions mirror these concerns: only 29% of U.S. consumers believe current regulations are sufficient for AI safety, and 72% say more regulation is needed. Trust in AI could improve if laws and policies were in place, as 81% of U.S. consumers would be more willing to trust AI systems under such conditions. However, U.S. consumers currently have low confidence in commercial and government entities to develop and use AI, with most putting their trust in universities, research institutions, healthcare providers, and big technology companies to develop and use AI in the best interests of the public. There are also specific areas where U.S. consumers are most keen to see additional government oversight; notably, 85% of U.S. consumers express a strong desire for laws and policies to combat AI-generated misinformation.

"U.S. consumers see the value in guardrails and accountability," said McGowan. "The majority of our survey participants want regulation to combat AI-generated misinformation, and nearly all agreed that news and social media companies must ensure people can detect AI-generated content."

About the report

The Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025 was led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr. Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG. It is the most comprehensive global study of the public's trust, use, and attitudes towards AI, surveying over 48,000 people across 47 countries between November 2024 and January 2025, including 1,019 people in the U.S.
About KPMG LLP

KPMG LLP is the U.S. member firm of the KPMG global organization of independent member firms providing audit, tax and advisory services. The KPMG global organization operates in 142 countries and territories and has more than 275,000 people working in member firms around the world. Each KPMG firm is a legally distinct and separate entity and describes itself as such. KPMG International Limited is a private English company limited by guarantee. KPMG International Limited and its related entities do not provide services to clients.

Media Contact: Andreas Marathovouniotis [email protected]