
Palo Alto Networks launches Prisma AIRS to secure enterprise AI
Palo Alto Networks has unveiled Prisma AIRS, an AI security platform designed to offer protection for the entire enterprise AI ecosystem, including applications, agents, models, and data.
The platform addresses the security needs of organisations adopting AI technologies at scale, particularly as enterprises increasingly employ AI applications and large language models (LLMs) in varied functions such as customer support and code generation.
Prisma AIRS provides several key security capabilities, including AI model scanning to detect vulnerabilities, posture management to monitor permissions and data exposures, AI red teaming to simulate attacks against AI systems, runtime security to guard against evolving threats during operation, and security for AI agents, including those built with no-code or low-code tools.
AI model scanning enables organisations to assess their AI models for vulnerabilities such as tampering, malicious scripts, and deserialization attacks. This component aims to help organisations adopt AI models safely by identifying security risks before deployment.
Posture management provides insights into security risks associated with an enterprise's AI ecosystem. It highlights issues such as excessive permissions, sensitive data exposure, and both platform and access misconfigurations.
The AI red teaming feature allows organisations to perform automated penetration testing on AI applications and models by utilising a red teaming agent. This agent stress-tests AI deployments, learning and adapting like a real attacker to uncover potential exposures and lurking risks before malicious actors can exploit them.
Runtime security within Prisma AIRS protects AI applications powered by LLMs against threats encountered during operation. These threats may include prompt injection, malicious code execution, toxic content, sensitive data leaks, resource overload, and hallucination.
To address risks associated with AI agents, Prisma AIRS includes safeguards for both standard and no-code/low-code platforms. It aims to defend against agentic threats such as identity impersonation, memory manipulation, and misuse of tools.
Lee Klarich, Chief Product Officer for Palo Alto Networks, said, "AI agents and apps are transforming the way we work and live. In parallel, the attack surface isn't just expanding, it's fundamentally changing. The last thing organizations need is more point products to secure their use of AI. Organizations need best-in-class security delivered via the right architecture - platformization is that architecture. Prisma AIRS addresses both traditional and AI specific threats with best-in-class security capabilities delivered in a comprehensive, unified AI security platform that enables organizations to deploy AI bravely."
Anand Oswal, Senior Vice President and General Manager at Palo Alto Networks, added, "As organizations integrate AI into every aspect of their operations, securing it requires a runtime security platform that provides continuous visibility and real-time insight. Without this, security teams are left in the dark about how AI is being used, misused, or manipulated, which puts critical data and decisions at risk. Prisma AIRS empowers teams with answers to essential questions, like whether someone is exploiting an LLM to extract sensitive information or if a compromised API is feeding the model poisoned data. These insights are vital to maintaining trust and safeguarding AI."
Palo Alto Networks indicated that Prisma AIRS would be enhanced by the company's plans to acquire Protect AI. The acquisition is subject to customary closing conditions and is expected to be finalised in the first quarter of the company's fiscal 2026. Protect AI focuses on securing the use of AI, aligning with Palo Alto Networks' efforts to address AI security concerns comprehensively.

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


The Spinoff
2 hours ago
- The Spinoff
‘I wanted to be a teacher not a cop': the reality of teaching in the world of AI
As AI becomes more ubiquitous, teachers across the board are struggling to adapt. Hera Lindsay Bird reports. *All names in this article have been changed. Julia*, a New Zealand teacher working in an American university, doesn't explicitly forbid her students from using AI. The top ranked public university where she teaches academic writing has told her not to. Instead, the students in her classroom must abide by two simple rules. They must ask for permission. And they must provide the LLM prompts, transcripts, and a written reflection on how AI contributed to their work. Of her class of approximately 20 students, several have acknowledged using ChatGPT. It's possible others have managed to sneak AI past her undetected, but she's not so worried about that, because it means 'on some level they got the point of the class, which was about adapting writing to different audiences, and as the audience for their assignments I was an openly seething AI hater.' But there are two repeat offenders who continue to hand in essay after essay of LLM generated writing. The essays are 'bloviating nonsense,' complete with fake quotes and made up sources. The two students repeatedly dodge her requests for in-person meetings. They only reluctantly agree to meet through Zoom. When she questions them about their use of ChatGPT, they lie, saying they only ever used it as a brainstorming tool. Only when she manages to 'circle them in conversation' do they grudgingly give her a fuller account of how they produced the work. Julia is sympathetic to the pressures her students are under. She's patient with them. She painstakingly walks them through the assignment step by step, offering suggestions for how they might resubmit their work. 'What was really sad was the fact it didn't seem to stick,' she says. 'I got final AI assignments in the end, and had to again chase them for versions they had at least edited using their own language, righteously incandescent with frustration like Hades in Disney's Hercules.' She passes the students with low marks. She's not convinced the university would back her up if her students contested their failing grade. 'My main beef with AI is that it made me into a grown adult asshole who had 18-year- old enemies,' she says. 'I wanted to be a teacher not a cop.' In the last few years LLMs and generative AI has graduated from a novelty tool that generates attractive women with more than the amount of medically recommended thumbs, to something that can write a convincingly mediocre essay on the role of ambiguity in Henry James's The Turn of the Screw. Two weeks ago, James D. Walsh published a piece in New York Magazine called 'Everyone is Cheating Their Way Through College' about how students and teachers in the US college system are dealing with education in a post LLM world. The article painted a grim picture about the ubiquity of students using generative AI to cheat on their assignments, claiming, as per the subtitle, 'ChatGPT has unraveled the entire academic project'. New Zealand universities are facing similar problems. In May, Victoria University of Wellington made the decision to enforce a handwritten exam policy for two third year law papers in an attempt to prevent student cheating. Days after the announcement, the university backpedalled. The Post reported that, in response to student concerns, the university had settled on a compromise. Half of the exam would comprise a handwritten multi-choice test. Students would still be permitted to use their laptops for the second half of the exam. Use of AI was still strictly forbidden and would be 'enforced through active and enhanced invigilation.' Universities aren't the only ones struggling with the new reality. Almost every educator I spoke with, from primary school teachers to those supervising postgraduate dissertations, raised serious concerns, with some teachers estimating that up to 80% of their students relied on ChatGPT to complete assignments. I spoke to MA supervisors whose history students theses were riddled with fictitious sources and 'archival' Midjourney photographs, and primary and intermediate school teachers, who said students as young as 11 were using it to answer simple personal prompts, such as 'what did you do in the summer holidays?' and 'what was your favourite penguin in the text?' All of this raises the question: What's the point of an education? Is this the inevitable result of an increasingly consumerist tertiary sector, where students are simply paying for credentials in exchange for entry into the job market? Should we learn to embrace AI as an educational inevitability, in the way that we've come to accept the pocket calculator? Are we being unnecessarily catastrophist? Ben*, an English teacher at a North Island all-boys school, says 'I find it really frustrating to read boosterish pieces about it in the press because they all assume that students will use it to learn. AI is seen as offering kids autonomy over their own learning, but this is not an autonomy they know what to do with! Students are using it to solve their problems and get answers, not to think with.' Ben worries that information literacy is falling by the wayside. 'Traditional research and evaluation skills are totally bypassed by AI,' he says. 'If [a student] asks 'what is evidence about how veganism is unhealthy', it will supply them with exactly that, without them having to do any sleuthing, comparison or evaluation'. The greatest concern from teachers was the atrophying effect it was having on students' critical thinking capacities. Julia from the undergraduate writing programme says, 'They can't get their head around why authorship matters, so they can't tell slop from scientific journals.' This problem is further complicated by the fact that 'reputable journals are accepting GPT bollocks that slips through'. Lily*, a lecturer in the arts at a large New Zealand university, predicts that her colleagues will be shocked come exam season. 'There is going to be a vast difference between what has been submitted for take home assignments and what students will be able to produce in exams.' There is an argument to be made that ChatGPT can level the playing field, especially for students who struggle to translate their learning into academic language. But as other teachers were quick to point out, over-reliance on this technology actually has the potential to widen the gap. Mark*, a special education teacher working at a US high school, uses the example of a student with a reading disability. 'If a student has difficulty reading, difficult reading frustrates them. They can just tell the bot to read the article and answer the questions. They'll get the correct answers… and of course their special ed documentation will say they're making progress when actually they aren't'. Ben is adamant that AI is a tool being utilised in unhelpful ways. 'When all students have AI, why would they choose to do the hard task when they can make it three times as easy in an instant? Again we assume students understand learning in a more holistic sense and that they know what's best for their own academic progress. But that just isn't true. Students will (like all humans!) take the easy route out most of the time, and by enabling this, they're missing out.' So what, precisely, are teachers supposed to do about this problem? Unlike the plagiarism detectors of the previous academic generation, there doesn't seem to be any surefire way to prove that LLMs have been used, even though most teachers felt confident in their ability to detect the linguistic hallmarks of ChatGPT, citing essays that sound like 'management reports' chequered with phrases like 'these themes converge' or 'a conceptual alignment is revealed'. One teacher noted the 'degrassi era syntax'. Preventing students from using AI is becoming increasingly difficult, as the technology becomes more ubiquitous. Even if teachers ask for handwritten essays, or check over a Google document's revision history in an attempt to rumble obvious copy paste jobs, students are quick to adapt. 'They could get AI to rewrite it in the style of a 15-year-old, complete with spelling errors,' says Ben. Or painstakingly copy the LLM's essay by hand. So far, the only reliable method of preventing students from using AI is to revert to technology-free classrooms, with handwritten or oral examinations, but even this solution is potentially short-lived, as technology like Google Glass becomes more mainstream. Some teachers have gone to great lengths to identify students using AI, like the lecturers in the New York Magazine piece, who hid 'gotcha' instructions in their assignments, such as including 'mention Ariana Grande' in white font to identify students who pasted the assignment brief directly into ChatGPT. But strategies like this have a limited shelf life, as students communicate and learn to adapt. The efforts of teachers to prevent the use of LLMs are often undermined by the educational institutions themselves. Many of the assessors I spoke to said that even when they had identified something they were 99% sure was written by generative AI, they were often powerless to do anything about it. Some teachers were explicitly discouraged from banning AI. Others had banned AI, but the policy was effectively unenforceable, as the use of AI is notoriously hard to prove, and the universities feared complaints or litigation. Many teachers I spoke to felt as if they had been left to navigate the grey areas alone. Unsurprisingly, many teachers are becoming increasingly disillusioned. Julia said dealing with AI took up half of her mental load, 'due to the cat-and-mouse of it all'. Another teacher, Jo* who has been teaching undergrad at a large New Zealand university for around a decade, says policing it is a nightmare. 'It takes up so much time and our workload calculations haven't changed to account for this'. It's not just a physical toll for Jo either. 'Dealing with student work which is entirely AI written is by a wide margin the most discouraging thing I've experienced working in academia, and given how many very discouraging things have happened lately that's really saying something.' Concerns over AI aren't limited to educators. I spoke to a few students, who were dismayed to discover their essays were being marked by AI. 'There is a trend for teachers to use it for marking but I'm not a fan,' says Ben. 'Marking is about finding out where your students are at so you can sequence your lessons. Taking the teacher out of the equation is antithetical to good teaching. It's like driving with your eyes closed.' Other teachers have begun using AI as a lesson planning tool. For some, this is an effective time-saving measure and eases the burden on a sector that is notoriously under-resourced and overworked. As Ben is quick to point out, teachers using AI to plan lessons is hardly equivalent to students using it to answer questions. Teachers, he points out, have the competence and skill base required to use AI 'selectively and thoughtfully'. But the outcomes aren't necessarily positive. One witness reported a new entrant teacher getting students to write descriptions of monsters, before entering the descriptions into ChatGPT to create illustrations. But the generated pictures were inappropriately terrifying. To quote the observer; 'Why cant they draw the fucking monsters?' The news isn't all bad. I spoke to some teachers who said they'd observed positive use of LLMs by students to further their learning, like a Romanian sociology lecturer who saw students practising statistical analysis by generating fictitious interview subjects. Others, like Rachel*, a teacher at an international primary school in Malaysia, say that LLMs are a useful tool for translation. 'For some kids, it has really improved their creativity. If it's used effectively and taught well, it could revolutionise teaching.' But by and large, the mood was depressing. Are we staring down the barrel of the future, in which we're simply teaching one robot how to talk to another? Ben, the high school teacher, suggested a 'de-teching' of the classroom was inevitable, and that he and his colleagues have already seen positive interpersonal results from the government's school cellphone ban. 'We need to start thinking about what skills we want humans to have, regardless of their utility in an economy that will slowly be replaced by AI. There's no point trying to predict the jobs of the future. Let's think: what are the things we want everyone in our society to know and understand? What are the key skills, human and disciplinary, that we want everyone in our society to work on? What kind of traits and characteristics do we want to encourage for the good of the individual and the community? That's where we need to start,' says Ben. 'It's in our hands to make a more robust and more human education system. In fact, we have to. If we don't, and if we allow kids to outsource to AI how to think, write, imagine, and work, we'd be doing them a generational disservice.' Julia, despite her struggles, is quick to caution against hyperbole. She believes claims that everyone is using AI are 'highly exaggerated', and the people pushing these products are incentivised to promote the idea that 'not to cheat is to get left behind'. Most of her students, she says, are 'honestly trying to get by on their own thought and expression. So it's certainly not a lost battle we should just give up on due to perceived ubiquity of the tech'. Most teachers agree that the Rubicon has been crossed, and complete abstinence from AI tech is no longer possible. But providing students with the tools to think critically and parse fact from fiction is perhaps more urgent than ever, in a world that is increasingly full of bad actors and disinformation. As Julia says, 'eventually they'll need to rub two brain cells together in a circumstance where the stakes feel real to them, and the less they've practiced thinking and expressing their thoughts the more cringe their failures will be.'


Techday NZ
2 days ago
- Techday NZ
Cloudera joins AI-RAN Alliance to boost AI in telecoms sector
Cloudera has joined the AI-RAN Alliance, a global consortium focused on integrating artificial intelligence into telecommunications infrastructure, with particular relevance for service providers in Australia and New Zealand. The AI-RAN Alliance, whose founding members include NVIDIA, also counts Dell, SoftBank, T-Mobile, KT and LG U+ among its participants. The group aims to address the integration of AI within current and emerging telecommunications networks and to standardise the use of AI for optimising shared infrastructure, accelerating edge AI application development, and providing reliable deployment models for AI in telecoms. Cloudera's entry into the alliance follows a period of growing interest among telecommunications providers in using AI to optimise network operations and reduce operational costs. Virtualisation and new infrastructure architectures are key drivers, and AI is seen as an important means of improving service efficiency and enabling new business opportunities for operators. The complexities associated with deploying AI at scale across distributed edge environments present significant challenges for the sector. Telecoms need to take an enterprise-wide approach to operationalise these technologies within the radio access network (RAN) if they are to unlock commercial benefits. As a member of the AI-RAN Alliance, Cloudera will participate in the 'Data for AI-RAN' working group, which is tasked with standardising data orchestration, large language model driven network automation, and hybrid-enabled MLOps across telecommunications and AI workloads. According to the company, this involvement will aim to align data and AI pipeline development with operational requirements, thereby supporting quicker innovation and the deployment of AI-native use cases. Cloudera will also support the Alliance's three stated objectives-AI-for-RAN, AI-and-RAN, and AI-on-RAN-and will work to accelerate the use of AI in real-world scenarios. Potential applications include service level agreement-driven network availability and real-time anomaly detection. The company plans to develop and evaluate reference architectures that telecoms operators can deploy in live environments, facilitating shorter development cycles and improving collaboration around model reusability. Another focus is demonstrating the use of Cloudera's platform for real-time decision-making at the network edge. This will involve enabling scalable preparation of training data and MLOps, as well as operationalising AI inference at scale, while maintaining governance and edge-to-core orchestration. Keir Garrett, Regional Vice President for Cloudera Australia and New Zealand, said, "Joining the AI-RAN Alliance enhances our ability to drive innovation and operational excellence for telecommunications providers across Australia and New Zealand. Leading telcos are already leveraging AI to optimise networks, improve engagement, and streamline operations, with edge computing enabling scalable transformation. Now, we're focused on guiding them through the next phase-unlocking greater value while future-proofing infrastructure. Just as smart highways enhance outdated roads, this shift ensures telcos meet growing user demand with speed, reliability, and adaptability-paving the way for the future of connectivity." Speaking about the company's contribution to the Alliance, Abhas Ricky, Chief Strategy Officer at Cloudera, commented, "Cloudera is proud to bring its data and AI expertise to the AI-RAN Alliance. The network is the heart of the telecom business, both in driving margin growth and in service transformation, and AI can unlock substantial value across those dimensions. Given our leadership in the domain - having powered data and AI automation strategies for hundreds of telecommunications providers around the world, we now look forward to accelerating innovation alongside fellow AI-RAN Alliance members and bringing our customers along. Our goal is to help define the data standards, orchestration models, and reference architectures that will power intelligent, adaptive, and AI-native networks of the future." Jemin Chung, Vice President Network Strategy at KT, said, "We are proud to collaborate with Cloudera and fellow AI-RAN Alliance members in the 'Data for AI-RAN' working group. As AI becomes increasingly central to next-generation networks, the ability to harness data securely and at scale will be a key differentiator. Through this initiative, we look forward to defining best practices that enable AI-centric RAN evolution and improve operational intelligence." Dr Alex Jinsung Choi, Principal Fellow, SoftBank's Research Institute of Advanced Technology, and Chair of the AI-RAN Alliance, said, "Cloudera is an incredible addition to the AI-RAN Alliance, which has grown rapidly as demand for improved AI access and success increases across the industry. The company's leadership in data and AI, combined with their extensive telecommunications footprint, will play a vital role in advancing our shared vision of intelligent, AI-native networks."


Techday NZ
3 days ago
- Techday NZ
Diliko launches partner scheme for AI data in regulated sectors
Diliko has introduced a partner programme that allows IT service providers to offer AI-driven data management and analytics solutions to mid-sized clients in regulated industries, without the need to invest in their own infrastructure. The partner programme gives consulting firms, analytics service providers, and systems integrators access to the Diliko Agentic AI Platform, aiming to streamline the deployment of data management and analytics projects for mid-sized organisations in industries such as healthcare and financial services. According to Diliko, the platform manages complex aspects of data integration, orchestration, privacy, and compliance, allowing their partners to focus on higher-value services such as business intelligence, client enablement, and strategic consulting. Ken Ammon, Chief Strategy Officer at Diliko, said, "The Diliko Partner Program gives service firms a powerful advantage: the ability to go to market with a proven enterprise platform that's already secure, scalable, and trusted in regulated industries. Our platform reduces project risk and time-to-value, while the partner program offers new revenue opportunities through referral incentives and streamlined delivery. It's a win-win for firms looking to grow their services business while helping clients succeed faster." The Agentic AI platform is designed to automate data engineering processes, manage AI usage, and reduce the administrative burden typically associated with regulatory compliance, thereby removing the requirement for client organisations to establish and maintain their own complex data infrastructure. By leveraging Diliko's platform, services partners are able to deliver outcomes more rapidly by avoiding the need to integrate multiple tools or build bespoke infrastructure. The company also highlights risk reduction through built-in governance and security features that align with standards such as HIPAA, GDPR, and CCPA. Efficiency improvements and the use of smaller teams contribute to potential profitability gains. Andriy Krupa, Chief Executive Officer of ELEKS, addressed the benefits of the collaboration: "Our partnership with Diliko enables us to offer clients immediate access to an enterprise-grade data platform without the infrastructure burden. This helps us deliver secure, scalable analytics faster and with less complexity—an especially powerful proposition for clients in healthcare and finance who are navigating stringent compliance requirements." Rich Bruggemann, Managing Partner at Transcendent Analytics Consulting Group, referenced the importance of compliance in healthcare, stating: "Healthcare organizations face enormous pressure to extract value from their data while staying compliant with regulations like HIPAA. Partnering with Diliko allows us to offer our clients a modern, secure data platform without the operational overhead. By removing infrastructure barriers and automating compliance, we can focus on delivering clinical insights and business outcomes that truly make a difference in patient care." The partner programme comprises several key elements aimed at enabling service providers to market and deliver AI-powered data solutions effectively. These include revenue sharing and referral commissions for sourcing and influencing deals, access to technical resources and demo environments, structured training and certification offerings, and co-marketing activities such as webinars and event sponsorships. Diliko states that the programme features a tiered structure - Registered, Preferred, and Elite - to support partners at different stages of engagement and business maturity. The Registered tier offers entry-level access to basic tools and training. The Preferred tier introduces increased incentives, co-marketing opportunities, and dedicated partner management. The Elite tier, available by invitation, provides lead sharing, executive sponsorship, and strategic business planning with Diliko's leadership.