logo
Threat spotlight: How attackers poison AI tools and defences

Threat spotlight: How attackers poison AI tools and defences

Techday NZ2 days ago
Barracuda has reported on how generative AI is being used to create and distribute spam emails and craft highly persuasive phishing attacks. These threats continue to evolve and escalate – but they are not the only ways in which attackers leverage AI.
Security researchers are now seeing threat actors manipulate companies' AI tools and tamper with their AI security features to steal and compromise information and weaken a target's defences.
Email attacks targeting AI assistants
AI assistants and the Large Language Models (LLMs) that support their functionality are vulnerable to abuse.
Barracuda's threat analysts have found attacks where malicious prompts are hidden inside benign looking emails. This malicious payload is designed to manipulate the behaviour of the target's AI information assistants.
For example, a recently reported – and fixed – vulnerability in Microsoft 365's AI assistant, Copilot, could allow anyone to extract information from a network without authorisation. Threat actors can exploit to collect and exfiltrate sensitive information from a target.
They do this by leveraging the ability of internal AI assistants to look for and collate contextual data from internal emails, messages and documents when answering queries or completing tasks. First, the attackers send one or more employees a seemingly harmless email containing a concealed and embedded malicious prompt payload.
This email needs no interaction from the user and lives benignly in their inbox. When the employee asks the AI assistant for help with a task or query, the assistant scans look through older emails, files and data to provide context for its response. As a result, the AI assistant unwittingly infects itself with the malicious prompt.
The malicious prompt could then ask the AI assistant to silently exfiltrate sensitive information, to execute malicious commands or to alter data.
Weaponised emails also try to manipulate AI assistants by corrupting their underlying memory or data retrieval logic. These include emails with exploits targeting vulnerabilities in RAG (Retrieval-Augmented Generation) deployments. RAG is a technique that enables the LLMs to retrieve and incorporate new information beyond their training model.
Such attacks can lead to AI assistants making incorrect decisions, providing false information, or performing unintended actions based on corrupted data.
Tampering with AI-based protection
Attackers are also learning how to manipulate the AI components of defensive technologies.
Email security platforms are being enhanced with AI-powered features that make them easier to use and more efficient. These include features such as auto-replies, 'smart' forwarding, auto-triage to remove spam, automated ticket creation for issues, and more. This is expanding the potential attack surface that threat actors can target.
If an attacker successfully manipulates these security features, they could: Manipulate intelligent email security tools to autoreply with sensitive data.
Abuse AI security features to escalate helpdesk tickets without verification. This could lead to unauthorised access to systems or data, as attackers could exploit the escalated privileges to perform malicious activities.
Trigger workflow automation based on a malicious prompt. This could lead to the execution of harmful actions, such as deploying malware, altering critical data, or disrupting business operations.
Casting doubt on reality
Identity confusion and spoofing
When AI systems operate with high levels of autonomy, they can be tricked into either impersonating users or trusting impersonators. This can lead to: 'Confused Deputy' attacks: This involves an AI agent with higher privileges performing unauthorised tasks on behalf of a lower-privileged user (such as an attacker.)
Spoofed API access: This involves existing AI-based integrations with Microsoft 365 or Gmail, for example, being manipulated to leak sensitive data or send fraudulent emails.
Cascading hallucinations: trusting the untrue
As mentioned above, email attacks targeting AI assistants can try to manipulate the assistant's functionality. This could lead the assistant to summarise a user's inbox, generate reports, and set the calendar – but based on false or manipulated data.
In such cases, a single poisoned email could: Mislead task prioritisation. For example, send "urgent" emails from fake executives.
Skew summaries and recommendations.
Influence critical business decisions based on hallucinations.
How email defenses need to adapt
Legacy email gateways, traditional email authentication protocols such as SPF or DKIM and standard IP blacklists are no longer enough to defend against these threats. Organisations need an email security platform that is generative-AI resilient. This platform should include: LLM-aware filtering: Able to understand email context (topic, target, type etc.), tone and behavioural patterns in addition to the email content.
Contextual memory validation: This helps to sanitise what AI-based filters learn over time and can prevent long-term manipulation.
Toolchain isolation: AI assistants need to operate in sandboxes, with measures in place to block any unverified action based on a received email prompt.
Scoped identity management: This involves using minimal-privilege tokens and enforcing identity boundaries for AI integrations.
Zero trust AI execution: Just because an email claims to be "from the CEO" doesn't mean the AI should automatically act on it. Tools should be set to verify everything before execution.
The future of email security is 'agent-aware'
The AI tools being used within organisations are increasing built on 'agentic' AI. These are AI systems capable of independent decision-making and autonomous behavior. These systems can reason, plan and perform actions, adapting in real time to achieve specific goals.
This powerful capability can be manipulated by attackers and security measures must shift from passive filtering to proactive threat modelling for AI agents.
Email is a great example. Email is becoming an AI-augmented workspace, but it remains one of the top attack vectors. Security strategies need to stop seeing email as a channel. Instead, they need to approach it as an execution environment requiring zero trust principles and constant AI-aware validation.
How Barracuda email protection helps defend against AI attacks
Barracuda's integrated cybersecurity platform is purpose-built to meet the dual challenge of AI-based attacks and attacks targeting AI components.
Our email protection suite combines intelligent detection, adaptive automation, and human-centric design to help customers outpace AI-powered threats.
This includes: Advanced AI-based detection: Barracuda uses behavioural AI and NLP to spot social engineering even without obvious malware or links. It catches impersonation, business email compromise (BEC), and tone-shift anomalies that traditional filters miss.
Defence in depth: Barracuda covers every stage of the kill chain from phishing prevention to account takeover detection and automated incident response, closing the gaps that attackers exploit.
Real time threat intelligence: With data from a global detection network, Barracuda rapidly adapts to evolving threats like prompt injection, RAG poisoning, and AI hallucination abuse.
User training and awareness: Technology alone isn't enough. Barracuda empowers employees to recognise AI-powered phishing through ongoing awareness training because trust is the new vulnerability.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

Riskified & HUMAN join forces to tackle AI-driven eCommerce risks
Riskified & HUMAN join forces to tackle AI-driven eCommerce risks

Techday NZ

time2 days ago

  • Techday NZ

Riskified & HUMAN join forces to tackle AI-driven eCommerce risks

Riskified has announced a partnership with HUMAN Security to help merchants manage risks and opportunities created by the rise of AI shopping agents in eCommerce. The collaboration between Riskified, which provides eCommerce fraud prevention and risk intelligence, and HUMAN Security, a cybersecurity company, is focused on advancing a unified security framework for merchants participating in agentic commerce channels. Both companies will leverage their AI platforms and network insights in a joint effort to help merchants address the challenges of increased AI-driven transactions. Online consumers are rapidly adopting large language models (LLMs) such as ChatGPT, Claude, Gemini, Grok, Llama, and Perplexity to research products, compare prices, and identify deals. This trend is gaining pace as LLM providers enhance browser experiences and integrations, broadening the impact of AI on eCommerce behaviour. However, as AI agents become intermediaries in shopping decisions and purchases, merchants face new risks as well as opportunities. Changing risks Traditional, rules-based fraud management tools often rely on behavioural signals from human shoppers. When an AI agent conducts a transaction, these signals can be absent, leading to increased false declines or undetected fraud. Merchants adopting AI-driven shopping features can potentially win new customers and improve conversion rates, but they also encounter risks such as revenue loss, inventory manipulation and reputational harm. Data from Riskified's merchant network highlights the risks associated with LLM-referred web traffic. LLM-referred traffic for a large ticketing merchant was found to be 2.3 times riskier than traffic originating from Google search, and for an electronics merchant, AI-driven traffic was 1.8 times riskier. Riskified also reports early signs of automated reseller arbitrage, where AI agents purchase inventory rapidly to resell at higher prices via fraudulent storefronts recommended by other agents. Such activities can undermine pricing strategies, damage customer trust, and result in substantial financial losses if not properly managed. New solutions Riskified is introducing a suite of new products and tools designed to help merchants monitor and control eCommerce orders emanating from AI shopping agents while preventing fraud and policy abuse. Announced solutions include AI Agent Approve, which enables merchants and LLMs to interact securely with Riskified's APIs through a package available on AWS Marketplace; AI Agent Intelligence, offering dashboard monitoring of eCommerce orders originating from AI agents; and AI Agent Policy Builder, which detects and enforces policies related to returns abuse, reseller arbitrage and promotional abuse. The partnership allows merchants to apply consistent trust policies and automated transaction decisions across both human and AI-driven interactions, supported by HUMAN's recently launched Sightline platform featuring AgenticTrust and Riskified's expertise in chargeback and policy abuse prevention. "In a world where AI agents transact on behalf of individuals, resolving identity and trust becomes more complex. By working with HUMAN and developing new agentic tools and capabilities, we give merchants a way to safely embrace this shift, turning what could be a threat into a new, profitable digital channel," said Assaf Feldman, CTO and Co-Founder of Riskified. John Searby, Chief Strategy Officer at HUMAN Security, added, "We are incredibly excited to be working with Riskified as a launch partner, bringing together HUMAN Sightline featuring AgenticTrust with their eCommerce risk management expertise to help establish a trusted ecosystem for agentic commerce. HUMAN provides the trust layer and visibility to identify and govern AI shopping agent interactions, empowering merchants to set and enforce 'trust or not' policies." Searby continued, "Riskified brings deep expertise in eCommerce transaction fraud prevention, chargeback protection, and policy abuse prevention. Together, we enable merchants to approve more legitimate AI-driven orders, reduce false declines, and protect margins, setting the standard for how agentic commerce can grow safely and profitably." By combining Riskified's and HUMAN's technologies and expertise, the two companies aim to help merchants confidently manage the ongoing evolution of eCommerce as AI agents play a larger role in online shopping and digital transactions.

Sage unveils AI features to boost speed & sustainability in finance
Sage unveils AI features to boost speed & sustainability in finance

Techday NZ

time2 days ago

  • Techday NZ

Sage unveils AI features to boost speed & sustainability in finance

Sage has announced a new range of AI-powered features for its Sage Intacct platform, intended to enhance the operations of finance teams through automation, embedded payment capabilities, and sustainability insights. The company's updates focus on automating key finance tasks, streamlining payment workflows, and delivering actionable sustainability data, aiming to address the growing need among finance leaders for greater speed, efficiency, and strategic value in their roles. AI-driven automation The newly introduced upgrades include AI-supported automation for the close process, enhanced accounts payable (AP) workflows, and sustainability monitoring capabilities. These features are designed to simplify operations, accelerate period closures, and provide finance professionals with more time to concentrate on higher-value activities. Recent research cited by Sage indicates strong demand among finance professionals for advanced tools. According to the research, 84% of surveyed finance leaders expressed a desire to close the books faster, while 87% called for more advanced AP automation to support their objectives. Dan Miller, EVP, Financials & ERP Division at Sage, emphasised the company's commitment to supporting finance teams with technology that delivers measurable time savings and operational clarity. "Finance teams are always looking to save time so they can invest in growth and their business's success while maintaining clarity and control," said Dan Miller, EVP, Financials & ERP Division, Sage. "We're delivering AI-powered automation in everyday finance tasks, helping leaders move with agility, reduce risk and operate with greater confidence." New feature breakdown The updates to Sage Intacct include several tools set for availability across various markets. In close management, the Subledger Reconciliation Assistant leverages Sage Copilot to monitor subledger and general ledger balances, flag discrepancies automatically, and provide detailed resolution pathways prior to final close. A dedicated Close Workspace offers customisable task templates, checklists, and real-time notifications from Copilot to keep end-of-period financial processes on track. For accounts payable and procurement teams, embedded vendor payments powered by MineralTree now allow customers to handle payments via a range of methods, including ACH, cheques, and virtual cards within the Sage Intacct interface. Additionally, an AI-powered line-level matching system automatically pairs invoice lines to purchase order lines, highlighting discrepancies and aiming to reduce manual reviews and approval bottlenecks. These AP automation tools are being extended to customers across regions and industries, with US customers gaining immediate access to eProcurement capabilities. Support features such as Copilot Search Help use natural language processing to answer product-related queries rapidly, a function available to users in English across the US, UK, Canada, South Africa, and Australia. Sustainability capabilities Sage has also embedded sustainability insights directly into its finance suite with the launch of Sage Earth Carbon Accounting. This tool provides UK customers with quick, no-setup carbon footprint estimates based on their industry and revenue information, aimed at supporting early-stage sustainability planning and helping organisations prepare for upcoming reporting requirements. Feedback from partners Sage's initiatives have been well received by implementation partners, who note tangible benefits to their customers resulting from the new automation capabilities. "Sage is always at the forefront of helping customers realise the value of automation and insight," said Scott Schimberg, Partner, Armanino. "By further automating reconciliation, payments, and procurement, Sage Intacct is helping them save time and make smarter decisions. It's the kind of innovation finance teams need to stay ahead." Industry context The enhancements come at a time when finance teams face mounting pressures from manual operations, fragmented systems, and the need to address environmental, social, and governance (ESG) requirements. Sage's recent global survey of more than 1,700 CFOs and finance leaders highlighted acceleration of closing processes and improved automation as top priorities amid these challenges. The company reports that the latest wave of updates to Sage Intacct is an extension of its broader High-Performance Finance strategy, aimed at giving finance leaders the integrated tools and insights required to eliminate inefficiencies, reinforce compliance, improve resilience, and support sustainable organisational growth.

Threat spotlight: How attackers poison AI tools and defences
Threat spotlight: How attackers poison AI tools and defences

Techday NZ

time2 days ago

  • Techday NZ

Threat spotlight: How attackers poison AI tools and defences

Barracuda has reported on how generative AI is being used to create and distribute spam emails and craft highly persuasive phishing attacks. These threats continue to evolve and escalate – but they are not the only ways in which attackers leverage AI. Security researchers are now seeing threat actors manipulate companies' AI tools and tamper with their AI security features to steal and compromise information and weaken a target's defences. Email attacks targeting AI assistants AI assistants and the Large Language Models (LLMs) that support their functionality are vulnerable to abuse. Barracuda's threat analysts have found attacks where malicious prompts are hidden inside benign looking emails. This malicious payload is designed to manipulate the behaviour of the target's AI information assistants. For example, a recently reported – and fixed – vulnerability in Microsoft 365's AI assistant, Copilot, could allow anyone to extract information from a network without authorisation. Threat actors can exploit to collect and exfiltrate sensitive information from a target. They do this by leveraging the ability of internal AI assistants to look for and collate contextual data from internal emails, messages and documents when answering queries or completing tasks. First, the attackers send one or more employees a seemingly harmless email containing a concealed and embedded malicious prompt payload. This email needs no interaction from the user and lives benignly in their inbox. When the employee asks the AI assistant for help with a task or query, the assistant scans look through older emails, files and data to provide context for its response. As a result, the AI assistant unwittingly infects itself with the malicious prompt. The malicious prompt could then ask the AI assistant to silently exfiltrate sensitive information, to execute malicious commands or to alter data. Weaponised emails also try to manipulate AI assistants by corrupting their underlying memory or data retrieval logic. These include emails with exploits targeting vulnerabilities in RAG (Retrieval-Augmented Generation) deployments. RAG is a technique that enables the LLMs to retrieve and incorporate new information beyond their training model. Such attacks can lead to AI assistants making incorrect decisions, providing false information, or performing unintended actions based on corrupted data. Tampering with AI-based protection Attackers are also learning how to manipulate the AI components of defensive technologies. Email security platforms are being enhanced with AI-powered features that make them easier to use and more efficient. These include features such as auto-replies, 'smart' forwarding, auto-triage to remove spam, automated ticket creation for issues, and more. This is expanding the potential attack surface that threat actors can target. If an attacker successfully manipulates these security features, they could: Manipulate intelligent email security tools to autoreply with sensitive data. Abuse AI security features to escalate helpdesk tickets without verification. This could lead to unauthorised access to systems or data, as attackers could exploit the escalated privileges to perform malicious activities. Trigger workflow automation based on a malicious prompt. This could lead to the execution of harmful actions, such as deploying malware, altering critical data, or disrupting business operations. Casting doubt on reality Identity confusion and spoofing When AI systems operate with high levels of autonomy, they can be tricked into either impersonating users or trusting impersonators. This can lead to: 'Confused Deputy' attacks: This involves an AI agent with higher privileges performing unauthorised tasks on behalf of a lower-privileged user (such as an attacker.) Spoofed API access: This involves existing AI-based integrations with Microsoft 365 or Gmail, for example, being manipulated to leak sensitive data or send fraudulent emails. Cascading hallucinations: trusting the untrue As mentioned above, email attacks targeting AI assistants can try to manipulate the assistant's functionality. This could lead the assistant to summarise a user's inbox, generate reports, and set the calendar – but based on false or manipulated data. In such cases, a single poisoned email could: Mislead task prioritisation. For example, send "urgent" emails from fake executives. Skew summaries and recommendations. Influence critical business decisions based on hallucinations. How email defenses need to adapt Legacy email gateways, traditional email authentication protocols such as SPF or DKIM and standard IP blacklists are no longer enough to defend against these threats. Organisations need an email security platform that is generative-AI resilient. This platform should include: LLM-aware filtering: Able to understand email context (topic, target, type etc.), tone and behavioural patterns in addition to the email content. Contextual memory validation: This helps to sanitise what AI-based filters learn over time and can prevent long-term manipulation. Toolchain isolation: AI assistants need to operate in sandboxes, with measures in place to block any unverified action based on a received email prompt. Scoped identity management: This involves using minimal-privilege tokens and enforcing identity boundaries for AI integrations. Zero trust AI execution: Just because an email claims to be "from the CEO" doesn't mean the AI should automatically act on it. Tools should be set to verify everything before execution. The future of email security is 'agent-aware' The AI tools being used within organisations are increasing built on 'agentic' AI. These are AI systems capable of independent decision-making and autonomous behavior. These systems can reason, plan and perform actions, adapting in real time to achieve specific goals. This powerful capability can be manipulated by attackers and security measures must shift from passive filtering to proactive threat modelling for AI agents. Email is a great example. Email is becoming an AI-augmented workspace, but it remains one of the top attack vectors. Security strategies need to stop seeing email as a channel. Instead, they need to approach it as an execution environment requiring zero trust principles and constant AI-aware validation. How Barracuda email protection helps defend against AI attacks Barracuda's integrated cybersecurity platform is purpose-built to meet the dual challenge of AI-based attacks and attacks targeting AI components. Our email protection suite combines intelligent detection, adaptive automation, and human-centric design to help customers outpace AI-powered threats. This includes: Advanced AI-based detection: Barracuda uses behavioural AI and NLP to spot social engineering even without obvious malware or links. It catches impersonation, business email compromise (BEC), and tone-shift anomalies that traditional filters miss. Defence in depth: Barracuda covers every stage of the kill chain from phishing prevention to account takeover detection and automated incident response, closing the gaps that attackers exploit. Real time threat intelligence: With data from a global detection network, Barracuda rapidly adapts to evolving threats like prompt injection, RAG poisoning, and AI hallucination abuse. User training and awareness: Technology alone isn't enough. Barracuda empowers employees to recognise AI-powered phishing through ongoing awareness training because trust is the new vulnerability.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store