
Being honest about using AI at work makes people trust you less, research finds
By Oliver Schilke and Martin Reimann
Whether you're using AI to write cover letters, grade papers or draft ad campaigns, you might want to think twice about telling others. That simple act of disclosure can make people trust you less, our new peer-reviewed article found.
As researchers who study trust, we see this as a paradox. After all, being honest and transparent usually makes people trust you more. But across 13 experiments involving more than 5,000 participants, we found a consistent pattern: Revealing that you relied on AI undermines how trustworthy you seem.
Participants in our study included students, legal analysts, hiring managers and investors, among others. Interestingly, we found that even evaluators who were tech-savvy were less trusting of people who said they used AI. While having a positive view of technology reduced the effect slightly, it didn't erase it.
Why would being open and transparent about using AI make people trust you less? One reason is that people still expect human effort in writing, thinking and innovating. When AI steps into that role and you highlight it, your work looks less legitimate.
But there's a caveat: If you're using AI on the job, the cover-up may be worse than the crime. We found that quietly using AI can trigger the steepest decline in trust if others uncover it later. So being upfront may ultimately be a better policy.
Why it matters
A global survey of 13,000 people found that about half had used AI at work, often for tasks such as writing emails or analyzing data. People typically assume that being open about using these tools is the right choice.
Yet our research suggests doing so may backfire. This creates a dilemma for those who value honesty but also need to rely on trust to maintain strong relationships with clients and colleagues. In fields where credibility is essential – such as finance, health care and higher education – even a small loss of trust can damage a career or brand.
The consequences go beyond individual reputations. Trust is often called the social 'glue' that holds society together. It drives collaboration, boosts morale and keeps customers loyal. When that trust is shaken, entire organizations can feel the effects through lower productivity, reduced motivation and weakened team cohesion.
If disclosing AI use sparks suspicion, users face a difficult choice: embrace transparency and risk a backlash, or stay silent and risk being exposed later – an outcome our findings suggest erodes trust even more.
That's why understanding the AI transparency dilemma is so important. Whether you're a manager rolling out new technology or an artist deciding whether to credit AI in your portfolio, the stakes are rising.
What still isn't known
It's unclear whether this transparency penalty will fade over time. As AI becomes more widespread – and potentially more reliable – disclosing its use may eventually seem less suspect.
There's also no consensus on how organizations should handle AI disclosure. One option is to make transparency completely voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy across the board. Our research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors.
A third approach is cultural: building a workplace where AI use is seen as normal, accepted and legitimate. We think this kind of environment could soften the trust penalty and support both transparency and credibility.
Oliver Schilke is Director of the Center for Trust Studies, Professor of Management and Organizations, University of Arizona. Martin Reimann is Associate Professor of Marketing, University of Arizona.
The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts.
External Link
https://theconversation.com/being-honest-about-using-ai-at-work-makes-people-trust-you-less-research-finds-253590
© The Conversation

Related Articles


Japan Today
Nvidia chief calls AI ‘the greatest equalizer' — but warns Europe risks falling behind
By THOMAS ADAMSON and KELVIN CHAN

Will artificial intelligence save humanity — or destroy it? Lift up the world's poorest — or tighten the grip of a tech elite? Jensen Huang, the global chip tycoon, offered his opinion on Wednesday: neither dystopia nor domination. AI, he said, is a tool for liberation.

Wearing his signature biker jacket and mobbed by fans for selfies, the Nvidia CEO cut the figure of a tech rock star as he took the stage at VivaTech in Paris. 'AI is the greatest equalizer of people the world has ever created,' Huang said, kicking off one of Europe's biggest technology industry fairs.

But beyond the sheeny optics, Nvidia used the Paris summit to unveil a wave of infrastructure announcements across Europe, signaling a dramatic expansion of the AI chipmaker's physical and strategic footprint on the continent. In France, the company is deploying 18,000 of its new Blackwell chips with startup Mistral AI. In Germany, it's building an industrial AI cloud to support manufacturers. Similar rollouts are underway in Italy, Spain, Finland and the UK, including a new AI lab in Britain.

Other announcements include a partnership with AI startup Perplexity to bring sovereign AI models to European publishers and telecoms, a new cloud platform with Mistral AI, and work with BMW and Mercedes-Benz to train AI-powered robots for use in auto plants. The announcements reflect how central AI infrastructure has become to global strategy, and how Nvidia — the world's most valuable chipmaker — is positioning itself as the engine behind it.

At the center of the debate is Huang's concept of the AI factory: not a plant that makes goods, but a vast data center that creates intelligence. These facilities train language models, simulate new drugs, detect cancer in scans, and more. Asked if such systems risk creating a 'technological priesthood' — hoarding computing power and stymying the bottom-up innovation that fueled the tech industry for the past 50 years — Huang pushed back.
'Through the velocity of our innovation, we democratize,' he told The Associated Press. 'We lower the cost of access to technology.' As Huang put it, these factories 'reason,' 'plan,' and 'spend a lot of time talking to' themselves, powering everything from ChatGPT to autonomous vehicles and diagnostics. But some critics warn that without guardrails, such all-seeing, self-reinforcing systems could go the way of Skynet in 'The Terminator' movie — vast intelligence engines that outpace human control.

'Just as electricity powered the last industrial revolution, AI will power the next one,' he said. 'Every country now needs a national intelligence infrastructure.' He added: 'AI factories are now part of a country's infrastructure. That's why you see me running around the world talking to heads of state — they all want AI to be part of their infrastructure. They want AI to be a growth manufacturing industry for them.'

Europe, long praised for its leadership on digital rights, now finds itself at a crossroads. As Brussels pushes forward with world-first AI regulations, some warn that over-caution could cost the bloc its place in the global race. With the U.S. and China surging ahead and most major AI firms based elsewhere, the risk isn't just falling behind — it's becoming irrelevant.

Huang has a different vision: sovereign AI. Not isolation, but autonomy — building national AI systems aligned with local values, independent of foreign tech giants. 'The data belongs to you,' Huang said. 'It belongs to your people, your country... your culture, your history, your common sense.'

But fears over AI misuse remain potent — from surveillance and deepfake propaganda to job losses and algorithmic discrimination. Huang doesn't deny the risks. But he insists the technology can be kept in check — by itself.
'In the future, the AI that is doing the task is going to be surrounded by 70 or 80 other AIs that are supervising it, observing it, guarding it, ensuring that it doesn't go off the rails.'

The VivaTech event was part of Huang's broader European tour. He had already appeared at London Tech Week and is scheduled to visit Germany. In Paris, he joined French President Emmanuel Macron and Mistral AI CEO Arthur Mensch to reinforce his message that AI is now a national priority.

© Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

Japan Times
Can Apple salvage the AI iPhone in China?
If you thought Apple's artificial intelligence woes in the U.S. were bad, just know that they could actually be worse. Look at China. The AI iPhone features haven't even arrived on the mainland. And those holding their breath for an update at this week's Worldwide Developers Conference were again let down.

Despite earlier reports that Apple Intelligence could launch in May in the country, where demand for AI is among the highest in the world, the lack of a fresh announcement isn't a total shock. The rollout has long been dogged by outsized regulatory scrutiny on both sides of the Pacific. Last week, the Financial Times reported that Apple's partnership with Alibaba Group Holding to bring its AI services to China was being further stalled by a Beijing regulator in the midst of U.S. President Donald Trump's trade war. Apple can't catch a break here. But there are reasons to hold onto hope that the service can still be salvaged, even if the window of opportunity for Chief Executive Officer Tim Cook is rapidly closing.

Apple's popularity in China has undoubtedly waned amid the rise of domestic rivals such as the premium smartphone offerings from Huawei Technologies. Just 21% of consumers picked it as their top choice for their next phone, according to a Bloomberg Intelligence survey released this week, compared to 29% a year ago. And perhaps more concerning for Cook, the marquee smartphone is losing its allure even for existing customers: 62% of iPhone users in China said they are planning on sticking with the brand, versus 81% a year earlier.

But the data also reveal a bigger story. It's not just Apple losing its appeal. The popularity of homegrown titan Huawei's smartphones fell by a much larger share than Apple's, with consumers across the board putting off replacing their existing phones for longer.
While the West has been focused on Apple's problems in China, this is also about the pain of domestic consumers, who are holding back on spending due to deflationary pressures and macroeconomic uncertainty. As Beijing works to turn this around, it could buy much-needed time for the iPhone maker to eventually launch an AI smartphone into a more hospitable consumer environment.

Of course, the biggest hurdle is obtaining regulatory approval in Beijing. The more Cook can lean on local partner Alibaba to speed up this process and portray it to Chinese authorities as a joint win, the better the chance of success. While this collaboration has already ruffled feathers in Washington, Apple should emphasize that it would still maintain control over the hardware, while its presence in China slows the global rise of firms like Huawei, which has long been a target for U.S. lawmakers.

The company has been typically hush-hush about the scope of its partnership with Alibaba. But there are reasons to be bullish — perhaps even more so than for its U.S. alliance with OpenAI — that this collaboration can revitalize its AI ambitions, at least in China. While Apple's in-house efforts have sputtered, Alibaba's major investments in the technology have propelled it to become a top global player. Even Nvidia Chief Executive Officer Jensen Huang last month called its Qwen offerings 'among the best open-source AI models' as he touted the benefits of having them run on American hardware. Their open-source nature also means that developers can continuously optimize the Qwen models, as well as download them to run on their own servers, mitigating some of the national security concerns Washington associates with Chinese technology.

Still, relying on Alibaba to do the heavy lifting in AI development would be very un-Apple-like, given the company's history of seeking to maintain as much control as possible. But it must find a way to adapt.
And as AI becomes increasingly commoditized in China, having a local partner offers the benefit of outsourcing some of the expensive legwork. This would also allow Apple to focus resources on gaining an edge in the next generation of AI consumer devices. This area is still in its infancy, giving the company a fighting chance to catch up and do what it does best: release perfected versions of devices launched prematurely by competitors. The consequences of falling further behind here threaten its global hardware dominance.

Much of the recent discourse on Apple's relationship with China has centered on risks from the company's reliance on the country as a manufacturing mothership. But when it comes to tapping into the consumer base, that relationship is also an asset: proof to Beijing that Apple isn't just a foreign firm exploiting the market, but a partner supporting millions of jobs.

It remains to be seen how tenable this precarious position is for Cook. But he's spent decades buying time and playing both sides. For Apple's sake, let's hope there are more decades to go.

Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech.

Many Japanese Children Using AI for Study
Jun 11, 2025

A survey in Japan found that many children have experience using various types of AI tools, usually without any established rules at home or school.

A survey of Japanese children found that 84% of elementary students and 80.4% of junior high students have used digital assistants like Alexa or Siri (defined in the survey as AI). It was conducted by the internet service provider Nifty among schoolchildren visiting its website Nifty Kids between March and April 2025, obtaining 1,430 valid responses.

While fewer students use ChatGPT because of the age restriction and the need for parental consent before opening an account, 50.7% of elementary and 62.5% of junior high students had used it. In addition, 36.6% of elementary and 44.6% of junior high students reported that they had used AI for studying or homework.

Students are using AI in a variety of ways depending on the situation, including 'getting help with structuring an essay,' 'asking for explanations of math problems,' or 'checking homework.' Most students are using AI as a support tool, but some said they had used AI 'to write an entire essay' for them.

When asked whether there were any rules in place at school or home regarding the use of AI, 87.5% responded that they had no definite rules.

(Translated from Japanese. Banner photo © Pixta.)