
F5 enhances AI data protection with real-time leak detection tools
The newly introduced functionalities aim to increase visibility into encrypted traffic, enabling businesses to identify and block unauthorised data sharing and defend against shadow AI.
F5's updates include improvements to its AI Gateway and BIG-IP SSL Orchestrator, intended to simplify security management and regulatory compliance across modern hybrid and multicloud environments.
AI security and compliance
The enhancements to F5's Application Delivery and Security Platform (ADSP) are designed to address difficulties businesses encounter as they adopt AI solutions and move towards hybrid cloud infrastructures.
Sensitive information increasingly traverses encrypted channels, exposing organisations to new risks when it is accessed or transmitted via unapproved AI tools. F5's solution seeks to tackle these risks by offering real-time detection and mitigation of data leakage within complex, encrypted environments.
F5 officials highlighted the scale of the issue, noting the pressures facing business leaders in balancing the adoption of AI with stringent requirements for data protection. Kunal Anand, Chief Innovation Officer at F5, said: "The core tension in every boardroom today is the race to adopt AI versus the mandate to protect the firm's data. Forcing a choice between the two is a losing strategy. We're eliminating that choice. By providing deep visibility into encrypted AI conversations, we're giving leaders the controls to stop data leakage and govern AI use, effectively turning the CISO from a gatekeeper into the primary enabler of secure innovation."
According to F5, the newly deployed data leakage detection and prevention capabilities for the F5 AI Gateway use technology acquired from LeakSignal to analyse AI prompts and responses.
The system can spot sensitive information - such as personal or confidential data - and act according to customer-specified rules, including redacting, blocking or logging those details.
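As a rough illustration of how rule-driven prompt inspection of this kind typically works - not F5's or LeakSignal's actual implementation; the detectors, policy table and function names below are hypothetical - a gateway can match each prompt against sensitive-data detectors and apply the customer's configured action per category:

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway-policy")

# Hypothetical detectors for illustration only; production classifiers
# cover far more categories than two regular expressions.
DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Customer-specified rules: each sensitive category maps to an action.
POLICY = {"email": "redact", "card_number": "block"}

def enforce(prompt):
    """Apply redact/block/log actions to a prompt per the policy table.

    Returns the (possibly redacted) prompt, or None if it was blocked.
    """
    for category, pattern in DETECTORS.items():
        if not pattern.search(prompt):
            continue
        action = POLICY.get(category, "log")
        if action == "block":
            log.warning("blocked prompt: found %s", category)
            return None
        if action == "redact":
            prompt = pattern.sub("[%s REDACTED]" % category.upper(), prompt)
            log.info("redacted %s in prompt", category)
        else:
            log.info("observed %s in prompt", category)
    return prompt

print(enforce("Contact jane@example.com about the Q3 numbers"))
```

The same check can run on model responses on the way back out, which is how redaction applies in both directions of an AI conversation.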
Key features detailed
The F5 AI Gateway enhancements include real-time detection and policy enforcement to protect and redact sensitive data as it flows into AI environments, and to prevent information from leaving approved environments without authorisation.
Detailed reporting and audit logs can be integrated into existing Security Information and Event Management (SIEM) tools, giving organisations clearer oversight and simplifying compliance procedures.
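In practice, SIEM integration of this sort usually means emitting each enforcement decision as a structured event. A minimal sketch, assuming a JSON-over-UDP collector - the schema, field names and endpoint here are invented for illustration, not F5's actual format:

```python
import json
import socket
from datetime import datetime, timezone

def emit_audit_event(collector_host, collector_port, event):
    """Send one JSON audit record to a SIEM collector over UDP.

    The field names are illustrative; a real deployment would follow the
    vendor's documented schema (e.g. CEF or a native JSON source type).
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": "ai-gateway",
        **event,
    }
    payload = json.dumps(record).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (collector_host, collector_port))

# 127.0.0.1 stands in for the SIEM collector's address.
emit_audit_event("127.0.0.1", 514, {
    "action": "redact",
    "category": "email",
    "user": "jdoe",
    "destination": "approved-llm.example.com",
})
```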
F5 plans to upgrade its BIG-IP SSL Orchestrator later in 2025, introducing expanded AI data protection through real-time, deep visibility into encrypted traffic. The update will enable organisations to classify, detect and block unauthorised AI use and sharing of sensitive data while allowing permitted traffic through. Reports and dashboards will be centralised to support audit and investigation activities.
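One common way to make that classify-and-block decision, shown below as a simplified stand-in with made-up hostnames, is to start from the destination hostname in the TLS handshake's Server Name Indication (SNI); a full SSL Orchestrator deployment goes further, decrypting and inspecting the traffic itself:

```python
# Hypothetical service lists; a real deployment would maintain these
# centrally and combine hostname checks with decrypted-payload inspection.
APPROVED_AI_SERVICES = {"approved-llm.example.com", "copilot.corp.example"}
KNOWN_AI_SERVICES = APPROVED_AI_SERVICES | {"chat.unsanctioned.example"}

def classify_connection(sni_hostname):
    """Return 'allow', 'block' (shadow AI) or 'inspect' for a TLS flow."""
    if sni_hostname in APPROVED_AI_SERVICES:
        return "allow"    # sanctioned AI service: let traffic through
    if sni_hostname in KNOWN_AI_SERVICES:
        return "block"    # recognised AI service outside the allowlist
    return "inspect"      # unknown destination: send to deeper inspection

for host in ("approved-llm.example.com",
             "chat.unsanctioned.example",
             "intranet.corp.example"):
    print(host, "->", classify_connection(host))
```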
The company emphasises that these changes give organisations the ability to:
- Detect, classify and stop data leaks in encrypted and AI-driven traffic in real time.
- Prevent risks from unauthorised AI use (shadow AI) and the exposure of sensitive information.
- Apply consistent security and compliance policies across all applications, APIs and AI services.
Addressing regulatory demands
With the push for more effective data controls, organisations are contending with increased scrutiny from regulators and growing obligations to demonstrate robust management of AI infrastructure.
The adoption of these F5 technologies is positioned to help companies maintain compliance across on-premises, cloud and multicloud deployments.
The update to F5's AI Gateway allows organisations to define their own sensitive data policies, leveraging up-to-date inspection technology to intercept and manage data at key points of transit. This is anticipated to reduce risk by ensuring that sensitive details do not inadvertently leave the network or become exposed to third-party AI systems.
F5 confirmed that the new measures will support compliance and audit readiness as more enterprises adopt AI at scale within critical workflows and regulatory oversight increases in response to rapid AI integration.
"Imagine a Clippy-like assistant - but useful - that knows your role, your habits, and quietly keeps you safe behind the scenes," he said. He also discussed how application programming interfaces (APIs) play a crucial role in integrating Mimecast's human risk platform with other systems. "We pull in data from HR, endpoint and identity platforms to paint a picture of risk - right down to the individual level," he explained. "If someone's on notice or switching roles, their risk profile changes. APIs help us adapt protection accordingly." Importantly, AI in cybersecurity is no longer just about detection and defence. Mimecast also uses it for prediction and prevention. "With data from 44,000 companies and billions of emails daily, our AI tools can identify emerging threats early and act before damage is done," he said. "That's where we're moving - from reactive to proactive security." But for smaller organisations, predictive security can seem out of reach. "The average Australian SMB doesn't have the budget or capacity for that level of protection," he noted. "We offer it as a service - so they benefit without the overhead." As for the future of cybersecurity training, O'Hara predicts a shift from generic instruction to highly tailored behavioural nudges. "Instead of monthly sessions, we'll see hyper-contextual, AI-generated interventions in the moment," he said. "That's the power of AI - it knows how to reach each individual in a way that resonates." He added that balancing automation with human oversight remains a key concern. "Right now, most organisations use automation to assist - not replace - analysts. And that's wise," he said. "False positives can grind a business to a halt if something like Salesforce gets blocked. But as AI improves, that balance will shift." Ultimately, he believes that the most exciting developments are still unknown. "I'm genuinely excited by what we don't yet see coming," he said. "AI has unlocked possibilities that feel like magic." And while security teams dream of AI replacing their most tedious tasks, O'Hara points out there's a long way to go. "If AI can act like Cinderella's godmother - guiding users to return home just before the stroke of midnight - then we're on the right track," he said.