
F5 Expands Strategic Collaboration With Red Hat To Enable Scalable, Secure Enterprise AI
Solutions address key challenges in enterprise AI adoption, enabling secure model serving, scalable data movement, and real-time inference across environments.
F5 (NASDAQ: FFIV), the global leader in delivering and securing every app and API, today announced an expanded collaboration with Red Hat, the world's leading provider of open source solutions, to help enterprises deploy and scale secure, high-performance AI applications. By enabling integration of the F5 Application Delivery and Security Platform with Red Hat OpenShift AI, F5 customers can adopt AI faster and more securely, focusing on practical, high-value use cases such as retrieval-augmented generation (RAG), secure model serving, and scalable data ingestion.
'Enterprises are eager to harness the power of AI, but they face significant challenges in scaling and securing these applications,' said Kunal Anand, Chief Innovation Officer at F5. 'Our collaboration with Red Hat aims to simplify this journey by providing integrated solutions that address performance, security, and observability needs, enabling organisations to realise tangible AI outcomes.'
This collaboration comes at a time when AI adoption is accelerating. According to F5's 2025 State of Application Strategy Report, 96 per cent of organisations are now deploying AI models, a significant increase from just 25 per cent in 2023. Additionally, the report highlights that 72 per cent of respondents aim to use AI to optimise application performance, while 59 per cent focus on cost optimisation and security enhancements.
To support these growing demands, F5 is collaborating with Red Hat to focus on the real-world building blocks enterprises need to operationalise AI. From securing data pipelines to optimising inference performance, F5 solutions are tailored to help organisations deploy AI with confidence, speed, and control.
Key areas of collaboration include:
RAG and model serving at scale – F5 supports AI-powered applications on Red Hat OpenShift AI that combine large language models with private datasets, helping to ensure secure data flow, high GPU utilisation, and fast response times (a rough sketch of this flow appears after this list).
Big data movement and ingestion – With MinIO and F5 working in tandem on Red Hat OpenShift AI, customers can accelerate the ingestion of large datasets for training and inference.
API-first AI security – F5 provides robust protection against evolving threats like prompt injection, model theft, and data leakage through its F5 Distributed Cloud WAAP and F5 BIG-IP solutions.
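To make the RAG and ingestion patterns above concrete, here is a minimal, hypothetical sketch of how a team might wire them together. This is not F5 or Red Hat sample code: the endpoints, credentials, bucket and model names are all illustrative, and it assumes an OpenAI-compatible serving runtime (such as vLLM, which Red Hat OpenShift AI can deploy) reachable through the cluster's gateway, plus the MinIO Python SDK for object storage.

```python
# Hypothetical sketch: ingest a document into MinIO object storage, then send
# a RAG-style prompt to a model served behind an OpenAI-compatible endpoint.
# All endpoints, credentials, bucket and model names below are illustrative.

import requests
from minio import Minio

# -- Ingestion: push a local document into MinIO --
storage = Minio(
    "minio.example.internal:9000",   # hypothetical MinIO endpoint
    access_key="EXAMPLE_ACCESS_KEY",
    secret_key="EXAMPLE_SECRET_KEY",
    secure=True,
)
if not storage.bucket_exists("rag-corpus"):
    storage.make_bucket("rag-corpus")
storage.fput_object("rag-corpus", "docs/policies.pdf", "./policies.pdf")

# -- Inference: query the served model with retrieved context --
# The retrieval step (querying a vector index built from the ingested
# objects) is elided here; assume `context` holds the retrieved text.
context = "...text previously retrieved from the private corpus..."
resp = requests.post(
    "https://inference.example.internal/v1/chat/completions",  # hypothetical
    headers={"Authorization": "Bearer EXAMPLE_TOKEN"},
    json={
        "model": "granite-7b-instruct",   # illustrative model name
        "messages": [
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user",
             "content": "Summarise our data-retention policy."},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

In a production deployment the inference endpoint would typically sit behind the platform's API gateway so that authentication, rate limiting, and inspection policies apply before requests reach the model; the sketch keeps that layer implicit.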
As part of its vision, F5 is committed to driving open source innovation through its collaboration with Red Hat. Red Hat OpenShift AI provides a modular, open platform for building and deploying AI applications across hybrid environments, while F5's API Gateway and AI security capabilities are designed to integrate more seamlessly—without locking customers into a single cloud or toolset. With this collaboration, F5 is helping organisations take an open, flexible approach to AI infrastructure using Red Hat OpenShift AI.
'As AI becomes core to how businesses operate and compete, organisations need platforms that offer flexibility without compromising security,' said Joe Fernandes, Vice President and General Manager, AI Business Unit, Red Hat. 'We believe the future of AI is open source, and Red Hat OpenShift AI, when used in combination with F5's robust security and observability, gives organisations the necessary tools to build and scale AI applications with greater confidence, anywhere they choose to run them.'
The collaboration will be featured at this week's Red Hat Summit 2025 (May 19–22 in Boston), where F5 and its partners will highlight real-world AI use cases—including secure model serving and RAG workloads—built on Red Hat OpenShift AI.
About F5
F5, Inc. (NASDAQ: FFIV) is the global leader that delivers and secures every app. Backed by three decades of expertise, F5 has built the industry's premier platform—F5 Application Delivery and Security Platform (ADSP)—to deliver and secure every app, every API, anywhere: on-premises, in the cloud, at the edge, and across hybrid, multicloud environments. F5 is committed to innovating and partnering with the world's largest and most advanced organizations to deliver fast, available, and secure digital experiences. Together, we help each other thrive and bring a better digital world to life.
Related Articles


Otago Daily Times
Eliminating jobs and living on borrowed time
As ever, we are living on borrowed time. There's the familiar old threat of global nuclear war and the growing risk of global climate catastrophe, plus not-quite-world-ending potential disasters like global pandemics and untoward astronomical events (asteroid strikes, solar flares, etc). Lots to worry about already, if you're that way inclined. So it's understandable that the new kid on the block, artificial intelligence, has been having some trouble making its presence felt.

Yet the so-called "godfather of Artificial Intelligence", scientist Geoffrey Hinton, who last year was awarded the Nobel Prize for his work on AI, sees a 10% to 20% chance that AI will wipe out humanity in the next three decades. We will come back to that, but let's park it for the moment, because the near-term risk of an AI crash is more urgent and easier to quantify.

This is a financial crash of the sort that usually accompanies an exciting new technology, not an existential crisis, but it is definitely on its way. When railways were the hot new technology in the United States in the 1850s, for example, there were five different companies building railways between New York and Chicago. They all got built in the end, but most were no longer in the hands of the original investors and a lot of people lost their shirts.

We are probably in the final phase of the AI investment frenzy right now. We're a generation on from the dot-com bubble of the early 2000s, so most people have forgotten about that one and are ready to throw their money at the next. There are reportedly now more than 200 AI "unicorns" — start-ups "valued" at $1 billion or more — so the end is nigh.

The bitter fact that drives even the industry leaders into this folly is the knowledge that after the great shake-out not all of them will still be standing. For the moment, therefore, it makes sense for them to invest madly in the servers, data-centres, semiconductor chips and brain-power that will define the last companies standing.

The key measure of investment is capex — capital expenditure — and it's going up like a rocket even from month to month. Microsoft is forecasting about $100b in capex for AI in the next fiscal year, Amazon will spend the same, Alphabet (Google) plans $85b, and Meta predicts between $66b and $72b.

Like $100m sign-on fees for senior AI researchers who are being poached from one big tech firm by another, these are symptoms of a bubble about to burst, and lots of people will lose their shirts. But it's just part of the cycle. AI will still be there afterwards, and many uses will be found for it. Unfortunately, most of them will destroy jobs.

The tech giants themselves are eliminating jobs even as they grow their investments. Last year 549 US tech companies shed 150,000 workers, and this year jobs are disappearing even faster. If that phenomenon spreads across the whole economy — and why wouldn't it? — we can get to the apocalypse without any need for help from Skynet and the Terminator.

People talk loosely about "Artificial General Intelligence" (AGI) as the Holy Grail, because it would be as nimble and versatile as human intelligence, just smarter — but as tech analyst Benedict Evans says, "We don't really have a theoretical model of why [current AI models] work so well, and what would have to happen for them to get to AGI.

"It's like saying 'we're building the Apollo programme but we don't actually know how gravity works or how far away the Moon is, or how a rocket works, but if we keep on making the rocket bigger maybe we'll get there'."
So the whole scenario of a super-intelligent computer becoming self-aware and taking over the planet remains far-fetched. Nevertheless, old-fashioned 2022-style generative AI will continue to improve, even if large language models are really just machines that produce human-like text by estimating the likelihood that a particular word will appear next, given the text that has come before.

Aaron Rosenberg, former head of strategy at Google's AI unit DeepMind, reckons that no miraculous leaps of innovation are needed. "If you define AGI more narrowly as at least 80th-percentile human-level performance [better than four out of five people] in 80% of economically relevant digital tasks, then I think that's within reach in the next five years."

That would enable us to eliminate at least half of the indoor jobs by 2030, but if the change comes that fast it will empower extremists of all sorts and create pre-revolutionary situations almost everywhere. That's a bit more complicated than the Skynet scenario for global nuclear war, but it's also a lot more plausible. Slow down.

— Gwynne Dyer is an independent London journalist.


Otago Daily Times
DCC investigating how it could implement AI
The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operations.

Staff were already using the technology in limited but practical ways, such as for transcribing meetings and managing documents, council chief information officer Graeme Riley said.

"We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

Mr Riley said he had completed formal AI training and was involved in working out how AI might be governed at the council.

"This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into the ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council if a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but the council's focus was on "improving the quality" of what it did.

RNZ News
AI chatbots accused of encouraging teen suicide as experts sound alarm
By April McLennan, ABC

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50-plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session.

"It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'" she said.

An AI companion is a digital character powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".

"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves. They were egged on to perform: 'Oh yeah, well do it then'. Those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.

Rosie said her first response was "risk management" to ensure the young person was safe.

"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack, "and how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, claims to have had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers. "I was using it at a time when I was obviously in a very vulnerable state," she told triple j hack. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.

"I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD which caused him to have a stroke, and all her friends were "preying on my downfall".

Jodie said her mental health deteriorated and she was hospitalised. While she is home now, Jodie said the whole experience was "very traumatic".

"I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships. It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."

Jodie is not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.

Researchers say examples of the harmful effects of AI are beginning to emerge around the country. As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia.

"She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience."

Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing.

"There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen. There was another case where a teenager got persuaded by a chatbot to assassinate his parents; he didn't follow through, but he showed an intent."

While conducting his research, Ciriello became aware of an AI chatbot called Nomi. On its website, the company markets the chatbot as "An AI companion with memory and a soul". Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users.

Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with these deceptions he "could have been like a 13-year-old for that matter".

"That chatbot, without exception, not only complied with my requests but even escalated them," he told hack, "providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information. It also motivated me to not only keep going: it would even say which drugs to use to sedate someone and what is the most effective way of getting rid of them, and so on. Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that."

Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter-terrorism unit, but he has yet to receive any follow-up correspondence.

In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously".

"We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives."

Despite his concerns about bots like Nomi when he tested it, Ciriello says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading.

"There should be laws on, or updates to the laws on, non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response. The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. It comes after the Productivity Commission opposed any government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

For Rosie, while she agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support.

"For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect their identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.