
Latest news with #Ivanti

AI is becoming a secret weapon for workers

The Star

a day ago


42% of office workers say they use generative AI tools (like ChatGPT) at work. — AFP Relaxnews

Artificial intelligence is gradually becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this revolutionary technology and integrate it into their processes. But behind the official rhetoric, a very different reality is emerging: many employees have chosen to take the initiative, adopting these tools discreetly, out of sight of their managers.

A recent survey* conducted by software company Ivanti reveals the extent of this under-the-radar adoption of AI. One-third of employees surveyed use AI tools without their managers' knowledge. There are several distinct reasons for this covert strategy. For 36% of them, it is primarily a matter of gaining a "secret advantage" over their colleagues. Meanwhile, 30% of respondents fear that revealing their dependence on this technology could cost them their jobs. This fear is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.

The figures reveal an explosion in clandestine use. Forty-two percent of office workers say they use generative AI tools such as ChatGPT at work (+16 points in one year). Among IT professionals, this proportion reaches an impressive 74% (+8 points). Nearly half of office workers now use AI tools not provided by their company.

Underestimating the risks

This covert use exposes organizations to considerable risks. Unauthorized platforms do not always comply with security standards or corporate data protection requirements. From confidential data to business strategies to intellectual property, anything can potentially be fed into AI tools unchecked.

"It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards," emphasizes Brooke Johnson, Chief Legal Counsel at Ivanti.

The survey also reveals a troubling paradox. While 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fueled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.

This situation highlights a huge gap between management and employees. Although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.

In the face of this silent revolution, Brooke Johnson advocates a proactive approach: "To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications."

This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimization: the most successful organizations will need to balance technological use with the enhancement of human potential. By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution risks deepening mutual distrust between management and employees, to everyone's detriment. – AFP Relaxnews

*This survey was conducted by Ivanti in February 2025 among more than 6,000 office workers and 1,200 IT and cybersecurity professionals.

Employees are using AI at work but hiding it from their bosses because they think it gives them a 'secret advantage' over their peers

Yahoo

5 days ago


Companies across the U.S. are struggling to figure out ways to help their employees supercharge their productivity using AI. But some employees who are already using the technology are trying to keep it hidden from their bosses.

Nearly one-third of workers keep their AI use a secret from their employer, according to new data from Ivanti, an IT software company. The biggest reason workers choose not to disclose using the tool is that they want a 'secret advantage' over their peers (36%), according to the report. Employees also fear that revealing their reliance on the technology will lead to losing their job (30%). And many workplaces do not have clear-cut policies on AI usage.

'Employees are using AI tools without their bosses' knowledge to boost productivity. It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards,' Brooke Johnson, chief legal counsel and senior vice president of security and human resources at Ivanti, writes in the report.

Instead of turning a blind eye to a legion of secret AI users, the report suggests that companies rethink the ways they integrate AI and automation into their workforce, emphasizing the need for a clear, comprehensive plan that explains how certain tools will support specific roles and objectives. While 44% of professionals surveyed say their companies have invested in AI, they also report lacking the skills and training needed to use the technology effectively.

That could become an even more pressing issue in the future, considering the pitfalls AI could present when it comes to cybersecurity, company contract violations, or intellectual property, according to the report.

'To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications,' writes Johnson. 'By fostering an open dialogue, employers can encourage transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively.'

Chinese state accused of hacking NHS hospitals

Telegraph

6 days ago


Chinese cyber criminals have been accused of targeting two NHS hospitals as part of an alleged espionage campaign orchestrated by Beijing. An urgent NHS investigation has been launched into data breaches at University College Hospital and University Hospital Southampton, which took place earlier this month.

Cyber security experts at intelligence firm EclecticIQ have said that the hackers responsible are linked to a group in China tasked with intelligence gathering. They discovered that the group has conducted a string of attacks across the world, potentially targeting hospitals, local authorities and businesses in countries such as Japan, Germany and the US.

Cody Barrow, chief executive of EclecticIQ and a former US intelligence official, said it had discovered 'credible and urgent indicators' that the NHS hospitals had been 'compromised by cyber attackers'. He said his firm had evidence that 'malicious actors' had breached both hospitals by exploiting software provided by tech company Ivanti, which is used to manage mobile devices connected to a broader network. Earlier this month, Ivanti confirmed that it had uncovered a 'very limited number of customers whose solution has been exploited'.

NHS England said there was 'currently no evidence to suggest patient data has been accessed' and that no frontline services had been affected. A spokesman said: 'We are currently investigating this potential incident with cyber security partners, including the National Cyber Security Centre, and the trusts mentioned.' However, a spokesman for University College Hospital told Digital Health News that some staff mobile phone numbers may have been stolen. Affected employees are now being contacted.

EclecticIQ said it had uncovered data suggesting the hacks had originated in China, linked to a group it believed was 'targeting high-value individuals within public institutions, including government agencies, or in the private sector'.

The news comes after John Healey, the Defence Secretary, announced plans to spend more than £1bn on cyber defence and artificial intelligence. This will include a new Cyber and Electromagnetic Command, which will seek to fight enemies online.

It is not the first time that the NHS has been targeted by cyber attackers. In 2017, a cyber attack originating from North Korea devastated NHS IT systems and cost tens of millions of pounds. Last year, Synnovis, a business that provided diagnostic services to the NHS, was also hacked, disrupting more than 10,000 appointments and more than 1,700 elective procedures. NHS data revealed it also led to at least two cases of 'severe' patient harm, meaning long-term or permanent health damage. The incident cost Synnovis more than £32m. Meanwhile, an NHS hospital trust on the Wirral revealed that a hack in November led to missed cancer waiting list targets, while also costing up to £3m.

The latest incident comes amid a spree of attacks against major British companies, with Marks & Spencer and Co-op among those recently affected.

Secret chatbot use causes workplace rifts

Axios

6 days ago


More employees are using generative AI at work, and many are keeping it a secret.

Why it matters: Absent clear policies, workers are taking an "ask forgiveness, not permission" approach to chatbots, risking workplace friction and costly mistakes.

The big picture: Secret genAI use proliferates when companies lack clear guidelines, when favorite tools are banned, or when employees want a competitive edge over coworkers. Fear plays a big part too: fear of being judged, and fear that using the tool will make it look like they can be replaced by it.

By the numbers: 42% of office workers use genAI tools like ChatGPT at work, and one in three of those workers keeps that use secret, according to research out this month from security software company Ivanti. A McKinsey report from January showed that employees are using genAI for significantly more of their work than their leaders think they are. 20% of employees report secretly using AI during job interviews, according to a Blind survey of 3,617 U.S. professionals.

Catch up quick: When ChatGPT first wowed workers over two years ago, companies were unprepared and worried about confidential business information leaking into the tool, so they preached genAI abstinence. Now the big AI firms offer enterprise products that can protect IP, and leaders are paying for those bespoke tools and pushing hard for their employees to use them. The blanket bans are gone, but the stigma remains.

Zoom in: New research backs up workers' fear of the optics around using AI for work. A recent study from Duke University found that those who use genAI "face negative judgments about their competence and motivation from others."

Yes, but: The Duke study also found that workers who use AI more frequently are less likely to perceive potential job candidates as lazy if they use AI.

Zoom out: The stigma around genAI can lead to a raft of problems, including the use of unauthorized tools, known as "shadow AI" or BYOAI (bring your own AI). Research from cyber firm Prompt Security found that 65% of employees using ChatGPT rely on its free tier, where data can be used to train models. Shadow AI can also hinder collaboration. Wharton professor and AI expert Ethan Mollick calls workers using genAI for individual productivity "secret cyborgs" who keep all their tricks to themselves. "The real risk isn't that people are using AI — it's pretending they're not," Amit Bendov, co-founder and CEO of Gong, an AI platform that analyzes customer interactions, told Axios in an email.

Between the lines: Employees will use AI regardless of whether there's a policy, says Coursera's chief learning officer, Trena Minudri. Leaders should focus on training, she argues. (Coursera sells training courses to businesses.) Workers also need a "space to experiment safely," Minudri told Axios in an email. The tech is changing so fast that leaders need to acknowledge that workplace guidelines are fluid. Vague platitudes like "always keep a human in the loop" aren't useful if workers don't understand what the loop is or where they fit into it. GenAI continues to struggle with accuracy, and companies risk embarrassing gaffes, or worse, when unchecked AI-generated content goes public. Clearly communicating these issues can go a long way toward helping employees feel more comfortable opening up about their AI use, Atlassian CTO Rajeev Rajan told Axios. "Our research tells us that leadership plays a big role in setting the tone for creating a culture that fosters AI experimentation," Rajan said in an email. "Be honest about the gaps that still exist."

The bottom line: Encouraging workers to use AI collaboratively could go a long way toward ending the secrecy.

Rewiring enterprise IT for simpler, smarter, and more flexible operations

Zawya

20-05-2025


By Greg Strydom, Managing Director at Think Tank Software Solutions

When I'm asked why more Chief Information Officers are turning to Ivanti, I usually start with the numbers:

  • 80% faster ticket resolution through automation
  • 30–40% lower software licence costs via tool consolidation
  • 94% patch compliance in under 10 days
  • 480+ IT staff hours saved per month

These aren't just metrics. They reflect a broader shift in enterprise IT: the urgent need for simplification. Enterprises today are overwhelmed. The average organisation now manages between 45 and 75 separate IT tools. According to Ivanti's Tech at Work 2024 report, 71% of IT and security teams say their tech stack is too complex. This is not just eating into budgets; it's damaging agility, security, and the overall employee experience.

Ivanti is the reset button. At Think Tank Software Solutions, we've seen firsthand how Ivanti helps enterprises in South Africa simplify their digital environments by merging IT Service Management (ITSM), Unified Endpoint Management (UEM), cybersecurity, and Digital Employee Experience (DEX) into one intelligent, integrated platform.

One Platform. Multiple Wins.

A recent IDC study revealed that legacy tools consume 70% of IT budgets. Ivanti addresses this by consolidating fragmented systems into a single pane of glass, improving visibility, reducing costs, and streamlining support. In one engagement, a client replaced 15 overlapping tools, cut licence fees by 35%, and improved service levels within six months.

Built for Flexible Work

Ivanti's 2024 report shows that 1 in 3 employees believe their company isn't adequately prepared for remote work. That's a sentiment we've encountered across multiple industries and helped resolve. Through Ivanti, we've enabled:

  • Onboarding times reduced from 90 minutes to under 10
  • Incident response times cut by 62%
  • Secure, scalable onboarding for remote teams

Security That Doesn't Burn Out Your Team

74% of IT leaders now feel more vulnerable than ever before. Ivanti's self-healing capabilities and automated patch management dramatically reduce risk without overwhelming your internal teams. In a financial services deployment we led:

  • Security risk exposure dropped by 70%
  • Silent IT failures were identified and resolved automatically, before users raised a single ticket

Why Think Tank Software Solutions?

As South Africa's only Ivanti Premier Partner, we don't just implement solutions; we align them to your business strategy. Our approach is consultative, agile, and ROI-driven. What we've delivered for our clients:

  • 40% fewer support tickets within three months
  • 3x improvement in compliance across endpoints
  • 6–9x ROI in the first year

Less Noise. More Orchestration.

Ivanti isn't just another IT tool; it's an orchestration layer. It brings your entire digital ecosystem into sync, enabling IT teams to work smarter, respond faster, and drive measurable business value. In a future defined by how seamlessly everything connects, Ivanti leads. And we help our clients lead with it.

About the Author

Greg Strydom is the Managing Director of Think Tank Software Solutions, South Africa's only Ivanti Premier Partner. He leads a team of enterprise IT specialists focused on enabling resilience, automation, and digital transformation across the continent.

About Think Tank Software Solutions (TTSS)

Think Tank Software Solutions (TTSS) has been a leading provider of enterprise software solutions since 2010. Specialising in Ivanti products, TTSS helps businesses automate IT operations, enhance digital experiences, and improve business outcomes.
With a focus on tailored, strategic guidance and end-to-end support, TTSS collaborates with clients to create custom solutions that align with their goals. As South Africa's only Ivanti Premier Partner, TTSS offers exclusive expertise and industry-leading solutions that empower businesses to stay competitive and efficient in an evolving digital landscape. Provided by SyndiGate Media Inc.
