
Latest news with #NicoleGillespie

76 pc Indians trust AI, far ahead of global average at 46 pc: Report

Hans India

06-05-2025


New Delhi: About 76 per cent of Indians are confident in using artificial intelligence (AI) technologies, a figure far higher than the global average of 46 per cent, a new report said on Tuesday. The report by KPMG, which surveyed over 48,000 people across 47 countries, highlighted India as a global leader in public trust and adoption of AI.

The report, titled 'Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025', found that India is not only more optimistic about AI but also more prepared to use it in everyday life and at work. According to the report, 90 per cent of Indian respondents said AI has improved accessibility and effectiveness in various areas, making it a transformative force in the country. At the same time, 97 per cent of Indians said they intentionally use AI at work, and 67 per cent said they couldn't complete their tasks without it. In comparison, only 58 per cent of employees globally report intentionally using AI, with just 31 per cent using it regularly.

The report was led by Professor Nicole Gillespie and Dr Steve Lockey from Melbourne Business School, in collaboration with KPMG. KPMG India's Akhilesh Tuteja said the findings show that "India is well-positioned to lead the world in ethical and innovative AI use". He noted that while optimism is high, responsible governance and policy frameworks are essential to ensure that AI is used safely and fairly. Professor Gillespie added that the global public wants reassurance that AI is being developed and used in a secure and transparent way, and emphasised the importance of trust and governance in ensuring AI technologies are accepted and adopted widely.

Nearly 86 per cent of Indian respondents have personally experienced or seen positive outcomes from AI, including better productivity, improved innovation, and reduced time spent on routine tasks. The report also found that AI training and understanding are higher in India than in advanced economies: about 78 per cent of Indian respondents feel confident in their ability to use AI, 64 per cent have received some form of AI training, and 83 per cent feel they can use AI tools effectively.

AI at work: Weighing up the benefits and issues of trust

The Star

06-05-2025


A global survey involving over 48,000 people from 47 countries shows that nearly six in ten employees say they use AI on their own initiative. A third of them use it at least once a week. — AFP Relaxnews

AI is gradually making its way into our everyday working lives. From translating an email to analysing data or writing a report, these tasks can be delegated to tools like ChatGPT in just a few clicks. But while the variety of uses grows, trust in AI remains a challenge.

AI is going mainstream, becoming a true partner in the workplace. This is the finding of a global survey conducted by Melbourne Business School and KPMG, involving over 48,000 people from 47 countries. Nearly six in ten employees say they use AI on their own initiative, and a third of them use it at least once a week. The benefits are numerous: time savings, better access to information, and a real boost for innovation. Nearly half of those surveyed even believe that AI has increased revenue-generating activity in their workplace.

But behind the enthusiasm, doubt persists. For some, the use of AI raises a fundamental question: is it really still work? Others dread the judgments that will come their way if those around them at work – and especially their managers – discover that they are using these tools. Because AI, by changing the way we produce and collaborate, is forcing everyone to rethink their place, their skills, and the very essence of their professional commitment. As a result, a massive phenomenon of hidden use is developing. In fact, 57% of employees present AI-generated content as their own, without mentioning that this kind of tool has been involved. And 66% don't even check the answers provided, leading to errors for 56% of them.

Lack of training

Part of the reason for this is a glaring lack of guidance or training. Less than half of employees say they have received training in artificial intelligence, and only 40% say their company has a clear policy on its use. Added to this is growing pressure: half of all respondents are afraid of being left behind professionally if they don't quickly familiarise themselves with these tools.

"The findings [of this report] reveal that employees' use of AI at work is delivering performance benefits but also opening up risk from complacent and non-transparent use," says Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, quoted in a news release.

This survey highlights the sometimes risky and poorly supervised use of these tools. Nearly one employee in two admits to having entered sensitive data into public tools such as ChatGPT. Plus, 44% admit to having violated their company's internal policy by preferring these solutions to those provided by their organisation. Younger employees, aged between 18 and 34, are the most inclined to adopt these unwise practices.

This type of behaviour is not without consequences. It exposes both organisations and their employees to major risks, whether in terms of significant financial losses, serious reputational damage or breaches of data confidentiality. It is therefore urgent to strengthen governance around AI. "It is without doubt the greatest technology innovation of a generation and it is crucial that AI is grounded in trust given the fast pace at which it continues to advance. Organizations have a clear role to play when it comes to ensuring the AI revolution happens responsibly, which is vital to ensuring a future where AI is both trustworthy and trusted," says KPMG International's Global Head of AI David Rowlands.

For companies, this means creating a healthy working environment, where everyone can share their use of AI without fear of judgment. This culture of trust is essential for experimenting, learning, and making AI a real lever for innovation rather than a poorly controlled risk. Because without support, without a clear framework and without dialogue, the AI revolution could well elude us. – AFP Relaxnews

Global study reveals public trust is lagging growing AI adoption

Techday NZ

30-04-2025


A global study has found that trust in artificial intelligence remains a significant hurdle, despite widespread and increasing use of the technology.

The survey, titled 'Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025', was led by Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, and Dr Steve Lockey, Research Fellow at Melbourne Business School, in collaboration with KPMG. Covering over 48,000 participants across 47 countries between November 2024 and January 2025, the study is described as the most comprehensive of its kind to date.

The findings show that 66% of respondents already use AI regularly, and 83% believe the technology can offer a wide range of benefits. However, only 46% of those surveyed are willing to trust AI systems, and 58% of global respondents consider AI to be untrustworthy, highlighting a growing disconnect between usage and confidence in the technology. Compared to a similar study covering 17 countries prior to the release of systems such as ChatGPT in 2022, the researchers found that public trust has dropped while concerns have increased as AI becomes more integrated into daily life.

Professor Gillespie said, "The public's trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption." She added, "Given the transformative effects of AI on society, work, education, and the economy - bringing the public voice into the conversation has never been more critical."

In the workplace, the study reports that 58% of employees actively use AI, with 31% using it at least weekly. Reported benefits include increased efficiency, improved access to information, and greater innovation. Almost half of respondents (48%) said that AI has contributed to increased revenue-generating activity.

Despite these positives, the use of AI at work is also associated with a number of risks. Nearly half of employees admitted to using AI in ways that go against company policies, such as uploading sensitive company data to public platforms like ChatGPT. There is evidence of complacency, as 66% of employees rely on AI-generated outputs without verifying accuracy, and 56% admitted to making mistakes because of AI. Over half (57%) of employees surveyed said they conceal their use of AI and present AI-generated content as their own work. Only 47% reported having received training in AI, and just 40% said their workplaces have clear policies or guidance on the use of generative AI. Factors contributing to this behaviour include a sense of urgency, with half of employees expressing concern about falling behind if they do not actively use AI.

Professor Gillespie commented, "The findings reveal that employees' use of AI at work is delivering performance benefits but also opening up risk from complacent and non-transparent use. They highlight the importance of effective governance and training, and creating a culture of responsible, open and accountable AI use."

The survey also examined the impact of AI across wider society. Four in five people indicated they had personally experienced or observed benefits, such as reduced time on routine tasks, enhanced personalisation, lower costs, and improved accessibility. Nonetheless, the same proportion expressed concerns about potential risks, with two in five noting negative impacts, including diminished human interaction, cybersecurity issues, the spread of misinformation, inaccurate outcomes, and deskilling. Specifically, 64% of respondents are worried about elections being influenced by AI-powered bots and synthetic content.

There is a strong perceived need for regulation, with 70% indicating that AI requires both national and international regulation, but only 43% believing current laws are adequate. A large majority (87%) called for stricter laws to combat AI-generated misinformation, and expect media and social media organisations to adopt stronger fact-checking processes.

Professor Gillespie said, "The research reveals a tension where people are experiencing benefits from AI adoption at work and in society, but also a range of negative impacts. This is fuelling a public mandate for stronger regulation and governance of AI, and a growing need for reassurance that AI systems are being used in a safe, secure and responsible way."

David Rowlands, KPMG International's Global Head of AI, said the latest findings point to opportunities for organisations to play a greater role in strengthening governance and building trust with employees, consumers, and regulators. "It is without doubt the greatest technology innovation of a generation and it is crucial that AI is grounded in trust given the fast pace at which it continues to advance. Organizations have a clear role to play when it comes to ensuring that AI is both trustworthy and trusted." Rowlands also said, "People want assurance over the AI systems they use, which means AI's potential can only be fully realized if people trust the systems making decisions or assisting in them. This is why KPMG developed our Trusted AI approach, to make trust not only tangible but measurable for clients."

The study notes marked differences in attitudes and adoption between advanced and emerging economies. Adoption rates, trust, and optimism about AI are higher in emerging economies, along with reported AI literacy (64% compared to 46%) and training (50% compared to 32%). In these regions, three in five people trust AI systems, compared to only two in five in advanced economies. Professor Gillespie said, "The higher adoption and trust of AI in emerging economies is likely due to the greater relative benefits and opportunities AI affords people in these countries and the increasingly important role these technologies play in economic development."

Emerging economies lead the way in AI trust, survey shows

Economic Times

29-04-2025


People in emerging economies are more willing to trust AI than those in advanced economies, and are more optimistic and excited about its benefits, a major survey by the University of Melbourne and professional services firm KPMG has found.

The global study showed two-thirds of those surveyed were now using artificial intelligence regularly, and even more - 83% - believed it would result in a wide range of benefits. However, 58% of respondents viewed the technology as untrustworthy, an increase on the level found in a previous study conducted before the release of ChatGPT, a groundbreaking generative AI chatbot, in 2022.

"The public's trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption," said study leader Nicole Gillespie, chair of trust at Melbourne Business School.

The survey found a clear split between emerging economies, where three in five people trust AI, and advanced countries, where only two in five do. Gillespie attributed the higher adoption and trust of AI in emerging economies to the greater relative benefits and opportunities the technology affords people in these countries and the increasingly important role it plays in economic development.

As AI gains broader use, businesses and governments have been grappling with how to balance innovation with ethical considerations such as job displacement and data privacy.

The study surveyed more than 48,000 people across 47 countries between November 2024 and January 2025.

‘Falling behind’: New report reveals big AI wake up call for Australia

News.com.au

29-04-2025


Australia is falling behind in a key area, with a new report revealing we need to make some major changes – and quickly – if we want to keep up with the rest of the world.

No matter your feelings about artificial intelligence (AI), there is no denying that it is becoming increasingly embedded within people's daily lives. From asking ChatGPT to help you write your shopping list to using AI to help give your work presentation some extra pizzazz, most people have used AI in some form or another.

The University of Melbourne's report, 'Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025', found half of Australians regularly use AI, but just 36 per cent are willing to trust it. As part of the study, which was completed in collaboration with KPMG, 48,340 people were surveyed across 47 countries, including Australia, between November 2024 and January 2025.

The report has highlighted just how different Australia's perception of and education around AI is compared to other countries. Australia ranks among the lowest globally on acceptance, excitement and optimism about AI, alongside New Zealand and the Netherlands. Just 30 per cent of Australians believe the benefits of AI outweigh the risks, the lowest ranking of any country. On top of this, 55 per cent say they realise the benefits, compared to 73 per cent globally.

Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne and one of the leads of the study, said there are three key factors contributing to Australia's comparatively low level of AI trust and acceptance.

'One is that Australians are worried about AI and its impact on society,' she told news.com.au. Almost 80 per cent of Australians were concerned about the negative outcomes of AI use, and 37 per cent said they have personally experienced or observed those negative outcomes.

'In and of itself, that's not that different to other countries. The difference is that Australians have those concerns, but they're not being offset by the experience of the benefits,' Professor Gillespie said. So, while Australians see the benefits of AI, we aren't of the mind that these benefits outweigh the risks to the extent that some other countries are.

Another contributor to this gap between Australia and other parts of the world is our low rate of AI training. Only 24 per cent of those surveyed have had formal or informal training in AI, compared to almost two in five globally.

Professor Gillespie also noted that regulation of AI is 'lagging' in Australia. It is clear that Australians want regulation, but just 30 per cent feel the current laws and safeguards are adequate. 'Again, we see this gap between our expectations of regulation and what's actually being delivered,' she said.

Speaking to news.com.au, KPMG Australia Chief Digital Officer John Munnelly said it is time for Australia to 'take the next step' in terms of regulation. He said that when people and corporates feel like there are ground rules and boundaries in place when it comes to AI, then we will see a surge in investment.

'I think we are just a little bit behind as a country compared to some of the other countries that are clearly moving ahead pretty quickly embracing it,' Mr Munnelly said, noting that India in particular is surging ahead with AI. 'That's an economy that you would think will be severely impacted economically by AI and so they're driving into it quickly.

'They're investing, they're pushing out free education. I think we could learn a little bit from some of the way the emerging economies are working.'
Mr Munnelly said not embracing AI and improving regulation would be a 'missed opportunity' for Australia, noting we are one of the economies that has 'the most' to gain from AI in terms of productivity. He said the majority of Australia's economy is made up of services, meaning there are a lot of opportunities for businesses to incorporate AI.

'There's great ways for AI to help if it's trusted, and people will use it if they trust it. So I think it's solving this trust thing that really does then unlock the productivity that brings revenue back on shore,' he said. 'I just think we're at the beginning of a great opportunity, but we're probably just not awake to it like some of the other economies are.'

One particularly concerning finding from the report centred on the current use of AI in Aussie workplaces. Almost half of local employees admitted to using AI in ways that go against company policies, including uploading sensitive company information into free public AI tools like ChatGPT. Of those surveyed, 57 per cent said they also rely on AI output without evaluating accuracy, and many admitted to hiding their use of it at work and presenting AI-generated content as their own.

Professor Gillespie said these statistics were 'particularly concerning' as they open companies up to a number of security and reputational risks, along with the potential for financial loss. If Australia doesn't get serious about regulation and education, then it is 'only a matter of time before this does start to create more problems'.

'This is where we do believe that this complacent and inappropriate use is being fuelled by a lack of training of employees. So it comes back to AI literacy, but also insufficient guidance and governance around how these tools should be used,' Professor Gillespie said. 'It is a wake up call. And I think our findings do shine a light on the need for this governance of how employees are using AI at work to be strengthened.'

Mr Munnelly agreed, saying there is a 'real risk' that if companies don't start to push ahead with some better structures around AI use, they will end up with 'accidents' happening as people interact with this technology. He also noted that we are now seeing a changing workforce, with younger generations coming in who are used to using AI in their daily lives and are expecting that to continue at work.

'So I do think it's changing and it's probably going to change fairly rapidly, because in a couple of years' time, 30 or 40 per cent of the workforce will have had [AI] in their education environments,' Mr Munnelly said. 'So we need to make sure that everybody in the workforce can keep up.'
