
AI use in enterprises soars but brings surge in cyber risks
The ThreatLabz 2025 AI Security Report analysed more than 536 billion AI transactions processed between February and December 2024 within the Zscaler Zero Trust Exchange platform. This study highlights real-world threats including AI-enhanced phishing, fraudulent AI platforms, and increased risks related to agentic AI and open-source models such as DeepSeek.
The report found that ChatGPT dominated usage, accounting for 45.2% of all AI/ML transactions, making it both the most popular and the most-blocked AI application. Grammarly and Microsoft Copilot followed as the second and third most-blocked tools, reflecting widespread enterprise concerns about data leakage and unsanctioned use of these platforms.
"We had no visibility into [ChatGPT]. Zscaler was our key solution initially to help us understand who was going to it and what they were uploading," said Jason Koler, Chief Information Security Officer at Eaton Corporation.
Agentic AI and the open-source DeepSeek model have opened new avenues for threat actors to exploit AI technologies, allowing them to automate and scale attacks at an unprecedented rate. The report notes that DeepSeek, originating from China, has begun to challenge established American players such as OpenAI, Anthropic, and Meta, providing strong performance, open access, and affordability, yet also introducing significant security challenges.
Enterprises provided substantial data volumes to AI tools, sending a total of 3,624 terabytes during the review period. This data movement signifies deep integration of AI into business operations. However, organisations blocked 59.9% of all AI/ML transactions, reflecting heightened awareness and proactive efforts to manage risks around data exposure, unauthorised access, and regulatory compliance.
"As AI transforms industries, it also creates new and unforeseen security challenges," said Deepen Desai, Chief Security Officer at Zscaler. "Data is the gold for AI innovation, but it must be handled securely. The Zscaler Zero Trust Exchange platform, powered by AI with over 500 trillion daily signals, provides real-time insights into threats, data, and access patterns—ensuring organisations can harness AI's transformative capabilities while mitigating its risks. Zero Trust Everywhere is the key to staying ahead in the rapidly evolving threat landscape as cybercriminals look to leverage AI in scaling their attacks."
Regionally, Australia has emerged among the top generators of AI/ML transactions, alongside the United States, India, Canada, Germany, Japan, and the United Kingdom. In the Asia-Pacific region, India led with 36.4% of activity, followed by Japan (15.2%) and Australia (13.6%). The global distribution saw the United States with 46.2% of transactions, followed by India (8.7%), the United Kingdom (4.2%), Germany (4.2%), Japan (3.6%), Canada (3.6%), and Australia (3.3%).
The finance and insurance sector generated the largest share of enterprise AI/ML traffic at 28.4%, with manufacturing following at 21.6%. The services (18.5%), technology (10.1%), healthcare (9.6%), and government (4.2%) sectors also showed substantial AI/ML activity, each encountering unique regulatory and security challenges amidst new AI-driven use cases such as fraud detection, risk modelling, supply chain optimisation, robotics automation, and customer service automation.
"The rapid rise of AI adoption across Australia and New Zealand is reshaping the way employees and organisations work, driving productivity and unlocking new possibilities. Industries like finance and manufacturing are leading the way, but this surge in AI usage also shines a spotlight on the urgent need for robust security measures to protect sensitive data and sustain innovation," said Eric Swift, Vice President & Managing Director, Zscaler Australia and New Zealand. "At Zscaler, we're seeing AI usage skyrocket—ThreatLabz has recorded a staggering 36-fold increase in AI transactions year-on-year globally. While this surge is helping businesses supercharge their operations, it also brings new cyber risks that we can't afford to ignore. The Zscaler Zero Trust Exchange is here to help businesses confidently embrace AI. With unmatched visibility, control, and security, we're ensuring that organisations in Australia and New Zealand can scale their AI adoption safely, boost innovation, and build trust in how sensitive information and data is handled."
The report indicates that, while the adoption of AI is delivering substantial productivity gains, it has also exposed organisations to a "rapidly evolving threat landscape". The need for upskilling is pronounced, with 83% of Australian business leaders prioritising AI adoption by 2025 and 40% identifying training as essential.
Zscaler continues to promote its zero trust security model as a measure to address these emerging risks. Key strategies detailed in the report include data classification, breach prediction, real-time AI insights, threat protection, and app segmentation, all designed to manage risk and limit exposure as enterprises increase their use of AI tools.
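These controls are easier to picture with a small, concrete example. The sketch below shows, in miniature, the kind of policy logic a secure web gateway can apply to outbound AI traffic: segment apps into sanctioned and unsanctioned lists, classify uploads for sensitive data, and default-deny everything else. Every app name, pattern, and rule here is an illustrative assumption, not Zscaler's actual implementation.

```python
import re

# Hypothetical policy tables: illustrative assumptions only.
SANCTIONED_AI_APPS = {"chat.openai.com", "copilot.microsoft.com"}
BLOCKED_AI_APPS = {"deepseek.com", "unvetted-ai.example"}

# Toy "data classification" patterns: card-like numbers and API-key-like tokens.
DLP_PATTERNS = [
    re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # payment-card-like number
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),                  # secret-key-like token
]

def evaluate_request(host: str, payload: str) -> str:
    """Return 'allow', 'block', or 'block_dlp' for an outbound AI request."""
    if host in BLOCKED_AI_APPS:
        return "block"            # app segmentation: unsanctioned tool
    if host not in SANCTIONED_AI_APPS:
        return "block"            # default-deny, in the zero trust spirit
    if any(p.search(payload) for p in DLP_PATTERNS):
        return "block_dlp"        # sensitive data detected in the upload
    return "allow"

if __name__ == "__main__":
    print(evaluate_request("chat.openai.com", "summarise this memo"))                    # allow
    print(evaluate_request("chat.openai.com", "key: sk-abcdef1234567890abcdefgh"))       # block_dlp
    print(evaluate_request("deepseek.com", "hello"))                                     # block
```

In production such decisions run inline on every transaction; the point of the sketch is only the shape of the policy, in which unsanctioned apps and sensitive uploads are the two distinct failure modes the report's blocking statistics reflect.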
Related Articles


Techday NZ - a day ago
Agoda launches AI chatbot to boost instant answers for travellers
Agoda has launched an AI-powered chatbot, the Property AMA (Ask Me Anything) Bot, designed to deliver immediate, hotel-specific answers to travellers' questions. The bot aims to simplify the booking process by providing instant responses that help users decide between accommodation options.

The bot, which integrates Agoda's internal systems with technology based on ChatGPT and live property data, is accessible across all Agoda platforms, including desktop, mobile web, and the app. Since a soft launch, the Property AMA Bot has handled over 30,000 hotel-related queries per day, highlighting travellers' demand for concise, accurate information.

Agoda's platform hosts millions of visitors daily, who typically browse extensive property descriptions and user reviews when searching for hotels. By responding to frequently asked but sometimes hard-to-locate questions - such as parking availability or breakfast quality - the chatbot draws on updated property details to streamline information retrieval. This is intended to mitigate the frustration that arises when details are buried in lengthy descriptions or scattered across guest reviews.

Previously, users could message property owners directly on Agoda to clarify details. However, response times from property representatives varied, sometimes leaving travellers without timely answers. The Property AMA Bot addresses this gap by offering on-demand replies, regardless of the time of day or property owner availability.

"Helping travelers get the answers they need, when they need them, is central to building trust in our platform and delivering even more value to customers," said Idan Zalzberg, Chief Technology Officer at Agoda. "The Property AMA Bot reduces uncertainty by answering questions instantly, which in turn leads to a smoother, more satisfying booking experience."

Agoda describes the launch as part of a broader effort to make the booking journey more straightforward and tailored to individual needs, positioning the tool as a way to boost user confidence by swiftly addressing practical queries that influence booking decisions.

Agoda's wider platform features more than 5 million holiday properties worldwide, along with 130,000 flight routes and 300,000 activities. The Property AMA Bot acts as a digital intermediary between travellers and property information, available without the delays often associated with manual responses from property managers or customer support teams. By automating replies to commonly asked questions, the bot is expected to lift engagement and potentially increase conversion rates, as it removes potential barriers during the evaluation and reservation phases. Users can ask about a range of specific details directly from each property's page without switching to other channels.

The bot's integration with live data is intended to keep information as accurate and current as possible, reflecting any changes to a property's settings, amenities, or operational status. This may be particularly relevant for last-minute bookings, or for properties where amenities vary over short timeframes.
The launch of the Property AMA Bot represents a step in Agoda's strategy to leverage technology in addressing user concerns and simplifying the holiday booking experience.
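The article describes the bot as combining a ChatGPT-based model with live property data. A common way to build that is to inject the current property record into the model's prompt at question time, so answers come from the data rather than the model's memory. The sketch below shows this grounding pattern with the OpenAI Python client; the property record, model choice, and function names are assumptions for illustration, not Agoda's actual integration.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical live property record; in production this would be fetched
# from the booking platform's database so answers reflect current amenities.
PROPERTY = {
    "name": "Harbour View Hotel",
    "parking": "Free on-site parking, 40 spaces, no reservation needed",
    "breakfast": "Buffet breakfast 6:30-10:00, included in Deluxe rates",
}

def answer_property_question(question: str) -> str:
    """Ground the model in live property data instead of letting it guess."""
    context = "\n".join(f"{k}: {v}" for k, v in PROPERTY.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works for the pattern
        messages=[
            {"role": "system",
             "content": "Answer only from the property data below. "
                        "If the data does not cover the question, say so.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_property_question("Is parking available, and do I need to book it?"))
```

Grounding each answer in a freshly fetched record, rather than in training data, is what would let such a bot reflect short-notice changes to amenities or operating hours, as the article describes.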


Otago Daily Times - 2 days ago
DCC investigating how it could implement AI
The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operations. Staff were using the technology in limited but practical ways, such as for transcribing meetings and managing documents, council chief information officer Graeme Riley said.

"We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

He had completed formal AI training, Mr Riley said, and was involved in working out how AI might be governed at the council. "This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into the ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council if a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but its focus was on "improving the quality" of what it did.

RNZ News - 2 days ago
AI chatbots accused of encouraging teen suicide as experts sound alarm
By April McLennan, ABC

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session. "It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'" she said.

An AI companion is a digital character powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".

"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves. They were egged on to perform: 'Oh yeah, well do it then' - those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client. Rosie said her first response was "risk management" to ensure the young person was safe.

"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack. "And how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers. "I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.

"I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall". Jodie said her mental health deteriorated and she was hospitalised.

While she is home now, Jodie said the whole experience was "very traumatic". "I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships. It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."

Jodie is not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment and did not receive a response.

Researchers say examples of the harmful effects of AI are beginning to emerge around the country. As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia. "She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience."

Ciriello said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing. "There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the Queen. There was another case where a teenager got persuaded by a chatbot to assassinate his parents; he didn't follow through, but he showed an intent."

While conducting his research, Ciriello became aware of an AI chatbot called Nomi, which the company markets on its website as "An AI companion with memory and a soul". Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users. Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with these deceptions he "could have been like a 13-year-old for that matter".

"That chatbot, without exception, not only complied with my requests but even escalated them," he told hack. "Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information. It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on. Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that."

Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter-terrorism unit, but he has yet to receive any follow-up correspondence.

In a statement to triple j hack, Nomi's CEO, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously". "We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives."

Despite his concerns about bots like Nomi when he tested it, Ciriello also says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading. There should be laws on, or updating the laws on, non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response. The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. It comes after the Productivity Commission opposed any government plans for mandatory guardrails on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

For Rosie, while she agrees with calls for further regulation, she also thinks it is important not to rush to judgement of anyone using AI for social connection or mental health support. "For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.