
AI is becoming a secret weapon for workers
42% of office workers say they use generative AI tools (like ChatGPT) at work. — AFP Relaxnews
Artificial intelligence is gradually becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this revolutionary technology and integrate it into their processes.
But behind the official rhetoric, a very different reality is emerging. Many employees have chosen to take the initiative, adopting these tools discreetly, out of sight of their managers.
A recent survey* conducted by software company Ivanti reveals the extent of this under-the-radar adoption of AI. One-third of employees surveyed use AI tools without their managers' knowledge. There are several distinct reasons for this covert strategy.
For 36% of them, it is primarily a matter of gaining a "secret advantage" over their colleagues. Meanwhile, 30% of respondents fear that revealing their dependence on this technology could cost them their jobs. This fear is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.
The figures reveal an explosion in clandestine use. Forty-two percent of office workers say they use generative AI tools such as ChatGPT at work (+16 points in one year). Among IT professionals, this proportion reaches an impressive 74% (+8 points). Now, nearly half of office workers use AI tools not provided by their company.
Underestimating the risks
This covert use exposes organizations to considerable risks. Indeed, unauthorized platforms do not always comply with security standards or corporate data protection requirements. From confidential data to business strategies to intellectual property, anything and everything can potentially be fed into AI tools unchecked.
"It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards," emphasizes Brooke Johnson, Chief Legal Counsel at Ivanti.
The survey also reveals a troubling paradox. While 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fueled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.
This situation highlights a huge gap between management and employees. Although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.
In the face of this silent revolution, Brooke Johnson advocates a proactive approach: "To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications."
This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimization: the most successful organizations will need to balance technological use with the enhancement of human potential.
By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution runs the risk of deepening mutual distrust between management and employees, to everyone's detriment. – AFP Relaxnews
*This survey was conducted by Ivanti in February 2025 among more than 6,000 office workers and 1,200 IT and cybersecurity professionals.

Related Articles


Job interviews enter a strange new world with AI that talks back
For better or worse, the next generation of job interviews has arrived: employers are now rolling out artificial intelligence that simulates live, two-way screener calls using synthetic voices.

Startups like Apriora, HeyMilo AI and Ribbon all say they're seeing swift adoption of their software for conducting real-time AI interviews over video. Job candidates converse with an AI "recruiter" that asks follow-up questions, probes key skills and delivers structured feedback to hiring managers. The idea is to make interviewing more efficient for companies – and more accessible for applicants – without requiring recruiters to be online around the clock.

"A year ago this idea seemed insane," said Arsham Ghahramani, co-founder and chief executive officer of Ribbon, a Toronto-based AI recruiting startup that recently raised US$8.2mil (RM34.81mil) in a funding round led by Radical Ventures. "Now it's quite normalised."

Employers are drawn to the time savings, especially if they're hiring at high volume and running hundreds of interviews a day. And job candidates – especially those in industries like trucking and nursing, where schedules are often irregular – may appreciate the ability to interview at odd hours, even if a majority of Americans polled last year by Consumer Reports said they were uncomfortable with the idea of algorithms grading their video interviews.

At Propel Impact, a Canadian social impact investing nonprofit, a shift to AI screener interviews came about because of the need to scale up the hiring process. The organisation had traditionally relied on written applications and alumni-conducted interviews to assess candidates. But with plans to bring on more than 300 fellows this year, that approach quickly became unsustainable. At the same time, the rise of ChatGPT was diluting the value of written application materials. "They were all the same," said Cheralyn Chok, Propel's co-founder and executive director. "Same syntax, same patterns."

Technology allowing AI to converse with job candidates on a screen has been in the works for years. Companies like HireVue pioneered one-way, asynchronous video interviews in the early 2010s and later layered on automated scoring using facial expressions and language analysis – features that drew both interest and criticism. (The visual analysis was rolled back in 2020.) But those platforms largely left the experience static: candidates talking into a screen with no interaction, leaving recorded answers for a human to dissect after the fact.

It wasn't until the public release of large language models like ChatGPT in late 2022 that developers began to imagine – and build – something more dynamic. Ribbon was founded in 2023 and began selling its offering the following year. Ghahramani said the company signed nearly 400 customers in just eight months. HeyMilo and Apriora launched around the same time and also report fast growth, though each declined to share customer counts.

"The first year ChatGPT came out, recruiters weren't really down for this," said HeyMilo CEO Sabashan Ragavan. "But the technology has gotten a lot better as time has gone on."

Technical stumbles

Even so, the rollout hasn't been glitch-free. A handful of clips circulating on TikTok show interview bots repeating phrases or misinterpreting simple answers. One widely shared example involved an AI interviewer created by Apriora repeatedly saying the phrase "vertical bar Pilates". Aaron Wang, Apriora's co-founder and CEO, attributed the error to a voice model misreading the term "Pilates".

He said the issue was fixed promptly and emphasised that such cases are rare. "We're not going to get it right every single time," he said. "The incident rate is well under 0.001%."

Chok said Propel Impact had also seen minor glitches, though it was unclear whether they stemmed from Ribbon itself or a candidate's WiFi connection. In those cases, the applicant was able to simply restart.

Braden Dennis, who has used chatbot technology to interview candidates for his AI-powered investment research startup FinChat, noted that AI sometimes struggles when candidates ask specific follow-up questions. "It is definitely a very one-sided conversation," he said. "Especially when the candidate asks questions about the role. Those can be tricky to field from the AI."

Startups providing the technology emphasised their approach to monitoring and support. HeyMilo maintains a 24/7 support team and automated alerts to detect issues like dropped connections or failed follow-ups. "Technology can fail," Ragavan said, "but we've built systems to catch those corner cases."

Ribbon has a similar protocol. Any time a candidate clicks a support button, an alert is triggered that notifies the CEO. "Interviews are high stakes," Ghahramani said. "We take those issues really seriously."

And while the videos of glitches are a bad look for the sector, Ghahramani said he sees the TikToks making fun of the tools as a sign the technology is entering the mainstream.

Preparing job applicants

Candidates applying to FinChat, which uses Ribbon for its screener interviews, are notified up front that they'll be speaking to an AI and that the team is aware it may feel impersonal. "We let them know when we send them the link to complete it that we know it is a bit dystopian and takes the 'human' out of human resources," Dennis said. "That part is not lost on us."

Still, he said, the asynchronous format helps widen the talent pool and ensures strong applicants aren't missed. "We have had a few folks drop out of the running once I sent them the AI link," Dennis said. "At the end of the day, we are an AI company as well, so if that is a strong deterrent then that's OK."

Propel Impact prepares candidates by communicating openly about its reasons for using AI in interviews, while hosting information sessions led by humans to maintain a sense of connection with candidates. "As long as companies continue to offer human touch points along the way, these tools are going to be seen far more frequently," Chok said.

Regulators have taken notice. While AI interview tools in theory promise transparency and fairness, they could soon face more scrutiny over how they score candidates – and whether they reinforce bias at scale. Illinois now requires companies to disclose whether AI is analysing interview videos and to get candidates' consent, and New York City mandates annual bias audits for any automated hiring tools used by local employers.

Beyond screening calls

Though AI interviewing technology is mainly being used for initial screenings, Ribbon's Ghahramani said 15% of the interviews on its platform now happen beyond the screening stage, up from just 1% a few months ago. This suggests customers are using the technology in new ways. Some employers are experimenting with AI interviews to collect compensation expectations or feedback on the interview process – potentially awkward conversations that some candidates, and hiring managers, may prefer to see delegated to a bot.

In a few cases, AI interviews are being used for technical evaluations or even to replace second-round interviews with a human. "You can actually compress stages," said Wang. "That first AI conversation can cover everything from 'Are you authorised to work here?' to fairly technical, domain-specific questions."

Even as AI handles more of the hiring process, most companies selling the technology still view it as a tool for gathering information, not making the final call. "We don't believe that AI should be making the hiring decision," Ragavan said. "It should just collect data to support that decision." – Bloomberg


AI 'vibe coding' startups burst onto scene with sky-high valuations
NEW YORK, NY (Reuters) – Two years after the launch of ChatGPT, return on investment in generative AI has been elusive, but one area stands out: software development.

So-called code generation or 'code-gen' startups are commanding sky-high valuations as corporate boardrooms look to use AI to aid, and sometimes to replace, expensive human software engineers.

Cursor, a code generation startup based in San Francisco whose tool can suggest and complete lines of code and write whole sections of code autonomously, raised $900 million at a $10 billion valuation in May from a who's who list of tech investors, including Thrive Capital, Andreessen Horowitz and Accel.

Windsurf, a Mountain View-based startup behind the popular AI coding tool Codeium, attracted the attention of ChatGPT maker OpenAI, which is now in talks to acquire the company for $3 billion, sources familiar with the matter told Reuters. Its tool is known for translating plain English commands into code, sometimes called 'vibe coding,' which allows people with no knowledge of computer languages to write software. OpenAI and Windsurf declined to comment on the acquisition.

'AI has automated all the repetitive, tedious work,' said Scott Wu, CEO of code-gen startup Cognition. 'The software engineer's role has already changed dramatically. It's not about memorizing esoteric syntax anymore.'

Founders of code-gen startups and their investors believe they are in a land-grab situation, with a shrinking window to gain a critical mass of users and establish their AI coding tool as the industry standard. But because most are built on AI foundation models developed elsewhere, such as those from OpenAI, Anthropic or DeepSeek, their costs per query are also growing, and none are yet profitable.

They're also at risk of being disrupted by Google, Microsoft and OpenAI, which all announced new code-gen products in May; Anthropic is working on one as well, two sources familiar with the matter told Reuters.

The rapid growth of these startups comes despite the fact that they are competing on big tech's home turf. Microsoft's GitHub Copilot, launched in 2021 and considered code-gen's dominant player, grew to over $500 million in revenue last year, according to a source familiar with the matter. Microsoft declined to comment on GitHub Copilot's revenue. On Microsoft's earnings call in April, the company said the product has over 15 million users.

LEARN TO CODE?

As AI revolutionizes the industry, many jobs – particularly entry-level coding positions that are more basic and involve repetition – may be eliminated. Signalfire, a VC firm that tracks tech hiring, found that new hires with less than a year of experience fell 24% in 2024, a drop it attributes to tasks once assigned to entry-level software engineers now being fulfilled in part with AI.

Google's CEO said in April that 'well over 30%' of Google's code is now AI-generated, and Amazon CEO Andy Jassy said last year the company had saved 'the equivalent of 4,500 developer-years' by using AI. Google and Amazon declined to comment.

In May, Microsoft CEO Satya Nadella said at a conference that approximately 20 to 30% of the company's code is now AI-generated. The same month, the company announced layoffs of 6,000 workers globally, with over 40% of those being software developers in Microsoft's home state, Washington.

'We're focused on creating AI that empowers developers to be more productive, creative, and save time,' a Microsoft spokesperson said. 'This means some roles will change with the revolution of AI, but human intelligence remains at the center of the software development life cycle.'

MOUNTING LOSSES

Some 'vibe-coding' platforms already boast substantial annualized revenues. Cursor, with just 60 employees, went from zero to $100 million in recurring revenue by January 2025, less than two years after its launch. Windsurf, founded in 2021, launched its code generation product in November 2024 and is already bringing in $50 million in annualized revenue, according to a source familiar with the company. But both startups operate with negative gross margins, meaning they spend more than they make, according to four investor sources familiar with their operations.

'The prices people are paying for coding assistants are going to get more expensive,' Quinn Slack, CEO at coding startup Sourcegraph, told Reuters. To make the higher cost an easier pill to swallow for customers, Sourcegraph now offers a drop-down menu that lets users choose which models they want to work with – from open-source models such as DeepSeek to the most advanced reasoning models from Anthropic and OpenAI – so they can opt for cheaper models for basic questions.

Both Cursor and Windsurf are led by recent MIT graduates in their twenties, and exemplify the gold-rush era of the AI startup scene. 'I haven't seen people working this hard since the first Internet boom,' said Martin Casado, a general partner at Andreessen Horowitz, an investor in Anysphere, the company behind Cursor.

What's less clear is whether the dozen or so code-gen companies will be able to hang on to their customers as big tech moves in. 'In many cases, it's less about who's got the best technology – it's about who is going to make the best use of that technology, and who's going to be able to sell their products better than others,' said Scott Raney, managing director at Redpoint Ventures, whose firm invested in Sourcegraph and Poolside, a software development startup that's building its own AI foundation model.

CUSTOM AI MODELS

Most of the AI coding startups currently rely on the Claude AI model from Anthropic, which crossed $3 billion in annualized revenue in May, in part due to fees paid by code-gen companies. But some startups are attempting to build their own models. In May, Windsurf announced its first in-house AI models, optimized for software engineering, in a bid to control the user experience. Cursor has also hired a team of researchers to pre-train its own large frontier-level models, which could reduce how much the company has to pay foundation model companies, according to two sources familiar with the matter.

Startups looking to train their own AI coding models face an uphill battle, as it could easily cost millions to buy or rent the computing capacity needed to train a large language model. Replit earlier dropped plans to train its own model. Poolside, which has raised more than $600 million to make a coding-specific model, has announced a partnership with Amazon Web Services and is testing with customers, but hasn't made any product generally available yet. Another code-gen startup, Magic Dev, which has raised nearly $500 million since 2023, told investors a frontier-level coding model was coming in summer 2024 but has yet to launch a product. Poolside declined to comment. Magic Dev did not respond to a request for comment.

(Reporting by Anna Tong and Krystal Hu in New York. Editing by Kenneth Li and Michael Learmonth)


Contradictheory: AI and the next generation
Here's a conversation I don't think we'd have heard five years ago: 'You know what they do? They send in their part of the work, and it's so obviously ChatGPT. I had to rewrite the whole thing!'

This wasn't a chat I had with the COO of some major company but with a 12-year-old child. She was talking about a piece of group work they had to do for class. And this Boy, as she called him (you could hear the capitalised italics in her voice), had waited until the last minute to submit his part.

To be honest, I shouldn't be surprised. These days, lots of people use AI in their work. It's normal. According to the 2024 Work Trend Index released by Microsoft and LinkedIn, 75% of employees then used artificial intelligence (AI) to save time and focus on their most important tasks.

But it's not without its problems. An adult using AI to help draft an email is one thing. A student handing in their weekly assignment is another. The adult uses AI to communicate more clearly, but the student is taking a shortcut. So, in an effort to deliver better work, the child might actually be learning less.

And it's not going away. A 2024 study by Impact Research for the Walton Family Foundation found that 48% of students use ChatGPT at least weekly, a jump of 27 percentage points over 2023. And more students use AI chatbots to write essays and assignments (56%) than to study for tests and quizzes (52%).

So what about the other students, the ones who don't use AI, like the girl I quoted above? I find they often take a rather antagonistic view. Some kids I talk to (usually the ones already doing well in class) seem to look down on classmates who use AI and, in the process, on the idea of using AI for schoolwork at all. And I think that's wrong.

As soon as I learned about ChatGPT, I felt that the key to using AI tools well is obvious. It lies in the name: tools. Like a ruler for drawing straight lines, or a dictionary for looking up words, AI chatbots are tools, only far more versatile ones.

One of the biggest problems, of course, is that AI chatbots don't always get their facts right (in AI parlance, they 'hallucinate'). So if you ask one for an essay on the 'fastest marine mammal', there's a chance it'll include references to 'sailfish' and 'peregrine falcon'. In one test of AI chatbots, hallucination rates for newer AI systems were as high as 79%. Even OpenAI, the company behind ChatGPT, isn't immune. Its o3 release hallucinated 33% of the time on the company's PersonQA benchmark, which measures how well a model answers questions about public figures. The newer o4-mini performed even worse, hallucinating 48% of the time.

There are ways to work around this, but I think most people don't know them. For example, many chatbots now have a 'Deep Research' mode that actively searches the internet and presents answers along with sources. The point is that you, the reasonable, competent and capable human being, can check the original source to see if it's something you trust. Instead of the machine telling you what it 'knows', it tells you what it found, and it's up to you to verify it.

Another method is to feed the chatbot the materials you want it to use, like a PDF of your textbook or a research paper. Google's NotebookLM is designed for this. It only works with the data you supply, drastically reducing hallucinations. You can then be more sure of the information it produces.
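For readers who like to tinker, here is a minimal sketch of that grounding idea in code. It is my own illustration, not something from NotebookLM or this column: it assumes the official OpenAI Python SDK with an OPENAI_API_KEY set in the environment, and the model name, prompt wording and helper function are placeholder choices.

# Rough sketch: ask a question answered only from text you supply,
# so the model works from your material rather than what it "knows".
from openai import OpenAI

client = OpenAI()

def ask_from_source(question: str, source_text: str) -> str:
    # Tell the model to stick to the supplied material and to admit
    # when the answer simply isn't there.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Answer only from the source material provided. "
                        "If the answer is not in it, say you don't know."},
            {"role": "user",
             "content": f"Source material:\n{source_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# Usage: paste in a textbook excerpt, then ask about it.
# print(ask_from_source("Which is the fastest marine mammal?", textbook_excerpt))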
In one stroke, you've turned the chatbot into a hyper-intelligent search engine that not only finds what you're looking for but also understands context, identifies patterns, and helps organise the information. That's just a small part of what AI can do. But even just helping students find and organise information better is a huge win.

And ideally, teachers should lead the charge in classrooms, guiding students on how to work with AI responsibly and effectively. Instead, many feel compelled to ban it or to try to 'AI-proof' assignments, for example by demanding handwritten submissions or choosing topics that chatbots are more likely to hallucinate on.

But we can do better. We should allow AI in and teach students how to use it in a way that makes them better. For example, teachers could say that the 'slop' AI generates is the bare minimum. Hand it in as-is, and you'll scrape a C or D. But if you use it to refine your thoughts, to polish your voice, to spark better ideas, then that's where the value lies. And students can use it to help them revise by getting it to generate quizzes to test themselves with (they, of course, have to verify that the answers the AI gives are correct).

Nevertheless, what I've written about so far is about using AI as a tool. The future is about using it as a collaborator. Right now, according to the 2025 Microsoft Work Trend Index, 50% of Malaysian workers see AI as a command-based tool, while 48% treat it as a thought partner. The former issue basic instructions; the latter hold conversations, and that is where human-machine collaboration begins.

The report goes on to say explicitly that this kind of partnership is what all employees should strive for when working with AI. That means knowing how to iterate on the output it gives, when to delegate, when to refine the results, and when to push back. In short: the same skills we want kids to learn anyway when working with classmates and teachers.

And the truth is that while I've used AI to find data, summarise reports and – yes – proofread this article, I haven't yet actively collaborated with AI. However, the future seems to be heading in that direction. Just a few weeks ago, I wrote about mathematician Terence Tao, who predicts that it won't be long before computer proof assistants powered by AI are cited as co-authors on mathematics papers.

Clearly, I still have a lot to learn about using AI day-to-day. And it's hard. It involves trial and error and wasted effort while battling looming deadlines. I may deliver inferior work in the meantime that collaborators have to rewrite. But I remain, as ever, optimistic. Because technology – whether as a tool or a slightly eccentric collaborator – ultimately has the potential to make us and our work better.

Logic is the antithesis of emotion, but mathematician-turned-scriptwriter Dzof Azmi's theory is that people need both to make sense of life's vagaries and contradictions. Write to Dzof at lifestyle@ The views expressed here are entirely the writer's own.