AI at work: Weighing up the benefits and issues of trust


The Star, 06-05-2025

A global survey involving over 48,000 people from 47 countries shows that nearly six in ten employees say they use AI on their own initiative. A third of them use it at least once a week. — AFP Relaxnews
AI is gradually making its way into our everyday working lives. From translating an email to analysing data or writing a report, in just a few clicks, these tasks can be delegated to tools like ChatGPT. But while the variety of uses grows, trust in AI remains a challenge.
AI is going mainstream, becoming a true partner in the workplace. This is the finding of a global survey conducted by Melbourne Business School and KPMG, involving over 48,000 people from 47 countries. Nearly six in ten employees say they use AI on their own initiative. A third of them use it at least once a week.
The benefits are numerous: time savings, better access to information, and a real boost for innovation. Nearly half of those surveyed even believe that AI has increased revenue-generating activity in their workplace.
But behind the enthusiasm, doubt persists. For some, the use of AI raises a fundamental question: is it really still work? Others dread the judgments that will come their way if those around them at work – and especially their managers – discover that they are using these tools.
Because AI, by changing the way we produce and collaborate, is forcing everyone to rethink their place, their skills, and the very essence of their professional commitment. As a result, a massive phenomenon of hidden use is developing. In fact, 57% of employees present AI-generated content as their own, without mentioning that this kind of tool has been involved. And 66% don't even check the answers provided, leading to errors for 56% of them.
Lack of training
Part of the reason for this is a glaring lack of guidance or training. Less than half of employees say they have received training in artificial intelligence. And only 40% say their company has a clear policy on its use.
Added to this is growing pressure. Half of all respondents are afraid of being left behind professionally if they don't quickly familiarise themselves with these tools. "The findings [of this report] reveal that employees' use of AI at work is delivering performance benefits but also opening up risk from complacent and non-transparent use," says Professor Nicole Gillespie, Chair of Trust at Melbourne Business School at the University of Melbourne, quoted in a news release.
This survey highlights the sometimes risky and poorly supervised use of these tools. Nearly one employee in two admits to having entered sensitive data into public tools such as ChatGPT. Plus, 44% admit to having violated their company's internal policy by preferring these solutions to those provided by their organisation. Younger employees, aged between 18 and 34, are the most inclined to adopt these unwise practices.
This type of behaviour is not without consequences. It exposes both organisations and their employees to major risks, whether in terms of significant financial losses, serious reputational damage or breaches of data confidentiality.
It is therefore urgent to strengthen governance around AI. "It is without doubt the greatest technology innovation of a generation and crucial that AI is grounded in trust given the fast pace at which it continues to advance. Organizations have a clear role to play when it comes to ensuring the AI revolution happens responsibly, vital to ensuring a future where AI is both trustworthy and trusted," says KPMG International's Global Head of AI David Rowlands.
For companies, this means creating a healthy working environment, where everyone can share their use of AI without fear of judgment. This culture of trust is essential for experimenting, learning, and making AI a real lever for innovation, rather than a poorly controlled risk. Because without support, without a clear framework and without dialogue, the AI revolution could well slip beyond our grasp. – AFP Relaxnews


Related Articles

Yageo says it will protect technology if Shibaura purchase succeeds

The Star, 2 hours ago

Yageo's founder and Chairman Pierre Chen speaks at the company's headquarters in New Taipei City, Taiwan June 7, 2025. REUTERS/Wen-Yee Lee

TAIPEI/TOKYO (Reuters) - Taiwan's Yageo said it will implement strict controls to prevent technology from leaking if it succeeds in acquiring Japan's Shibaura Electronics, responding to concerns in Japan over what the deal could mean for national security.

Chairman Pierre Chen told reporters in Taipei on Saturday that the company will meet with Shibaura in mid-June in Tokyo to discuss potential cooperation.

Yageo, the world's largest maker of chip resistors, launched an unsolicited tender offer for Shibaura in February, seeking full control of the Japanese firm, which specialises in thermistor technology. Yageo offered to buy Shibaura at 4,300 yen per share, valuing the company at more than 65 billion yen ($450 million).

Spurning Yageo's overture, Shibaura tapped Japanese components supplier Minebea Mitsumi as a white knight. Minebea and Yageo entered a bidding war, with the latter now offering 6,200 yen. The stock closed at 6,100 yen on Friday.

"Our strategy is to inject resources and strengthen R&D for advanced technologies. We're also preparing to make larger investments to expand their facilities in Japan," Chen said. Asked about Japan's national security concerns, he said: "We will implement strict controls to ensure technology does not leak."

Unsolicited takeovers were once rare in Japan, where companies often mounted elaborate defences. The Japanese industry ministry's M&A guidelines in 2023 cracked down on what it considered excessive defence tactics, de-stigmatising unsolicited buyouts and leading some such deals to succeed. Chen said that negotiations with Japan's Ministry of Economy, Trade and Industry had been going smoothly.

He said that if Yageo acquires Shibaura, the deal would address a gap in its portfolio of thermistors, making Yageo's offerings more complete for global customers and helping Shibaura expand its access to markets outside of Japan. Yageo said it aims to ease the burden of managing smaller component suppliers for its major clients, including Apple and Nvidia, by offering more comprehensive product portfolios and solutions.

Yageo is also the world's number three manufacturer of multilayer ceramic capacitors and provides key components used in Apple's iPhones, Nvidia's AI servers, and Tesla's electric vehicles.

($1 = 144.8500 yen) (Reporting by Wen-Yee Lee in Taipei and Makiko Yamazaki in Tokyo; Editing by William Mallard)

Human coders are still better than AI, says this expert developer

The Star, 7 hours ago

Your team members may be tempted to rely on AI to help them write code for your company, either for cost or speed rationales or because they lack particular expertise. But you should be wary. — Pixabay

In the complex 'will AI steal my job?' debate, software developers are among the workers most immediately at risk from powerful AI tools. It's certainly looking like the tech sector wants to reduce the number of humans working those jobs. Bold statements from the likes of Meta's Mark Zuckerberg and Anthropic's Dario Amodei support this, since both of them say AI is already able to take over some code-writing roles. But a new blog post from a prominent coding expert strongly disputes their arguments, and supports some AI critics' position that AI really can't code.

Salvatore Sanfilippo, an Italian developer who created Redis (an online database which calls itself the 'world's fastest data platform' and is beloved by coders building real-time apps), published a blog post this week, provocatively titled 'Human coders are still better than LLMs.' His title refers to the large language model systems that power AI chatbots like OpenAI's ChatGPT and Anthropic's Claude.

Sanfilippo said he's 'not anti-AI' and actually does 'use LLMs routinely,' and explained some specific interactions he'd had with Google's Gemini AI about writing code. These left him convinced that AIs are 'incredibly behind human intelligence,' so he wanted to make a point about it. The billions invested in the technology and the potential upending of the workforce mean it's 'impossible to have balanced conversations' on the matter, he wrote.

Sanfilippo blogged that he was trying to 'fix a complicated bug' in Redis's systems. He made an attempt himself, and then asked Gemini, 'hey, what we can do here? Is there a super fast way' to implement his fix? Then, using detailed examples of the kind of software he was working with and the problem he was trying to fix, he blogged about the back-and-forth dialogue he had with Gemini as he tried to coax it toward an acceptable answer. After numerous interactions where the AI couldn't improve on his idea or really help much, he said he 'asked Gemini to do an analysis' of his last idea, and it was finally happy.

We can ignore the detailed code itself and just concentrate on Sanfilippo's final paragraph. 'All this to say: I just finished the analysis and stopped to write this blog post, I'm not sure if I'm going to use this system (but likely yes), but, the creativity of humans still have an edge, we are capable of really thinking out of the box, envisioning strange and imprecise solutions that can work better than others,' he wrote. 'This is something that is extremely hard for LLMs.' Gemini was useful, he admitted, to simply 'verify' his bug-fix ideas, but it couldn't outperform him and actually solve the problem itself.

This stance from an expert coder goes up against some other pro-AI statements. Zuckerberg has said he plans to fire mid-level coders from Meta to save money, employing AI instead. In March, Amodei hit the headlines when he boldly predicted that all code would be written by AIs inside a year. Meanwhile, on the flip side, a February report from Microsoft warned that young coders coming out of college were already so reliant on AI to help them that they failed to understand the hard computer science behind the systems they were working on – something that may trip them up if they encountered a complex issue like Sanfilippo's bug.

Commenters on a piece talking about Sanfilippo's blog post on coding news site Hacker News broadly agreed with his argument. One commenter likened the issue to a popular meme about social media: 'You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.' Another writer noted that AIs were useful because even though they give pretty terrible coding advice, 'It still saves me time, because even 50 percent accuracy is still half that I don't have to write myself.' Lastly, another coder pointed out a very human benefit from using AI: 'I have ADHD and starting is the hardest part for me. With an LLM it gets me from 0 to 20% (or more) and I can nail it for the rest. It's way less stressful for me to start now.'

Why should you care about this? At first glance, it looks like a very inside-baseball discussion about specific coding issues. You should care because your team members may be tempted to rely on AI to help them write code for your company, either for cost or speed rationales or because they lack particular expertise. But you should be wary. AIs are known to be unreliable, and Sanfilippo's argument, supported by other coders' comments, points out that AI really isn't capable of certain key coding tasks. For now, at least, coders' jobs may be safe… and if your team does use AI to code, they should double and triple check the AI's advice before implementing it in your IT system. – Inc./Tribune News Service

Calling for ethical and responsible use of AI

New Straits Times, 9 hours ago

LETTERS: In an era where artificial intelligence (AI) is rapidly shaping every facet of human life, it is critical that we ensure this powerful technology is developed and deployed with a human-centric approach. AI holds the potential to solve some of humanity's most pressing challenges, from healthcare innovations to environmental sustainability, but it must always serve the greater good.

To humanise AI is to embed ethical considerations, transparency, and empathy into the heart of its design. AI is not just a tool; it reflects the values of those who create it. Therefore, AI development should prioritise fairness, accountability, and inclusivity. This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few.

Governments, industries, and communities must work together to create a governance framework that fosters innovation while protecting privacy and rights. We must also emphasise the importance of educating our workforce and future generations to work alongside AI, harnessing its capabilities while maintaining our uniquely human traits of creativity, compassion, and critical thinking.

As AI continues to transform the way we live, work, and interact, it is becoming increasingly urgent to ensure that its development and use are grounded in responsibility, accountability, and integrity. The Alliance for a Safe Community calls for clear, forward-looking regulations and a comprehensive ethical framework to govern AI usage to safeguard the public interest.

AI technologies are rapidly being adopted across sectors, from healthcare and education to finance, law enforcement, and public services. While these advancements offer significant benefits, they also pose risks, including:

• Invasion of privacy and misuse of personal data;
• Algorithmic bias leading to discrimination or injustice;
• Job displacement and economic inequality;
• Deepfakes and misinformation.

Without proper regulation, AI could exacerbate existing societal challenges and even introduce new threats. There must be checks and balances to ensure that AI serves humanity and does not compromise safety, security, or fundamental rights.

We propose the following elements as part of a robust regulatory framework:

1. AI Accountability Laws – Define legal responsibility for harm caused by AI systems, especially in high-risk applications.
2. Transparency and Explainability – Mandate that AI decisions affecting individuals (e.g., in hiring, credit scoring, or medical diagnoses) must be explainable and transparent.
3. Data Protection and Privacy Standards – Strengthen data governance frameworks to prevent unauthorised access, misuse, or exploitation of personal data by AI systems.
4. Risk Assessment and Certification – Require pre-deployment risk assessments and certification processes for high-impact AI tools.
5. Public Oversight Bodies – Establish independent agencies to oversee compliance, conduct audits, and respond to grievances involving AI.

Technology alone cannot determine what is right or just. We must embed ethical principles into every stage of AI development and deployment. A Code of Ethics should include:

• Human-Centric Design – AI must prioritise human dignity, autonomy, and well-being.
• Non-Discrimination and Fairness – AI systems must not reinforce or amplify social, racial, gender, or economic bias.
• Integrity and Honesty – Developers and users must avoid deceptive practices and be truthful about AI capabilities and limitations.
• Environmental Responsibility – Developers should consider the energy and environmental impact of AI technologies.
• Collaboration and Inclusivity – The development of AI standards must include voices from all segments of society, especially marginalised communities.

AI is one of the most powerful tools of our time. Like any powerful tool, it must be handled with care, guided by laws, and shaped by ethical values. We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity.

The future of AI should not be one where technology dictates the terms of our humanity. Instead, we must chart a course where AI amplifies our best qualities, helping us to live more fulfilling lives, build fairer societies, and safeguard the well-being of future generations. Only by humanising AI can we ensure that its promise is realised in a way that serves all of mankind.
