
MirrorWeb launches Sentinel to cut false compliance alerts by 90%
The increasing use of tools such as Teams, Slack, WhatsApp, LinkedIn and iMessage has led to a surge in data volumes that compliance teams must monitor. According to figures from the Institute of International Finance, 75% of financial firms experienced a 50% increase in compliance alerts during the past year. This overload has created concerns for compliance officers, with 67% reporting that they fear missing critical risks, which could result in fines from regulators.
The financial sector has already faced penalties for lapses in compliance, as highlighted when the SEC fined JPMorgan USD 125 million in December 2021 for inadequate management of communications compliance.
Sentinel, MirrorWeb's newly launched platform, aims to help organisations reduce the number of false positive alerts generated by legacy monitoring systems. In product testing, the company reports reducing such irrelevant alerts by up to 90%.
The solution is built using natural language processing and intelligent risk scoring to highlight genuinely risky communications, rather than relying on basic keyword matching. This approach enables the system to assess the intent and context behind messages, offering what MirrorWeb describes as a more accurate identification of potential compliance risks.
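MirrorWeb has not published Sentinel's internals, but the difference between keyword matching and intent-aware scoring can be sketched in a few lines. The example below is a minimal, generic illustration: it assumes an off-the-shelf zero-shot classifier (Hugging Face's facebook/bart-large-mnli) and an invented keyword list, neither of which is part of Sentinel.

```python
# Illustrative only: MirrorWeb has not published Sentinel's implementation.
# This contrasts a legacy-style keyword check with intent-aware scoring using
# an open-source zero-shot classifier (facebook/bart-large-mnli) as a stand-in.
from transformers import pipeline

KEYWORDS = {"guarantee", "off the record", "delete this"}  # hypothetical watch list


def keyword_flag(message: str) -> bool:
    """Legacy-style check: fires on any keyword hit, regardless of context."""
    text = message.lower()
    return any(keyword in text for keyword in KEYWORDS)


classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

RISK_INTENTS = [
    "promising guaranteed investment returns",
    "moving the conversation off monitored channels",
    "sharing confidential client information",
    "routine business discussion",
]


def risk_score(message: str) -> dict:
    """Context-aware check: scores the message against candidate risk intents."""
    result = classifier(message, candidate_labels=RISK_INTENTS)
    return {"top_intent": result["labels"][0], "confidence": round(result["scores"][0], 3)}


if __name__ == "__main__":
    msg = "Our money-back guarantee policy is described in the attached brochure."
    print(keyword_flag(msg))  # True - a keyword hit, likely a false positive
    print(risk_score(msg))    # most likely scored as routine business discussion
```

In this sketch the keyword filter flags an innocuous message, while the intent classifier scores it against risk categories in context, which is the general distinction the company is drawing.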
Key features highlighted for Sentinel include intelligent risk scoring, a pre-configured scenario library, comprehensive conversation capture, audit-ready reporting, and security features designed with privacy in mind. The risk scoring function analyses communications for intent, sentiment, and likely impact, allowing teams to focus resources where they matter most. The scenario library covers over 110 scenarios across eight risk categories, while the conversation capture function records entire threads, including message edits and deletions, to provide investigators with full context.
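As an illustration of what comprehensive conversation capture can involve, the hypothetical data model below (class and field names are invented, not MirrorWeb's schema) keeps a message's full event history so that edits and deletions remain visible to investigators.

```python
# Hypothetical data model, not MirrorWeb's schema: one way a capture store could
# preserve a message's full event history, including edits and deletions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class MessageEvent:
    event_type: str      # "sent", "edited", or "deleted"
    timestamp: datetime
    body: Optional[str]  # message text; None for deletions


@dataclass
class CapturedMessage:
    author: str
    channel: str                                # e.g. "teams", "whatsapp", "slack"
    events: list = field(default_factory=list)  # ordered MessageEvent history

    def current_body(self) -> Optional[str]:
        """Latest visible text, or None if the message was deleted."""
        last = self.events[-1]
        return None if last.event_type == "deleted" else last.body

    def history(self) -> list:
        """Every revision, so reviewers can see what was changed or removed."""
        return [f"{e.timestamp.isoformat()} {e.event_type}: {e.body!r}" for e in self.events]
```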
Every alert flagged by Sentinel is accompanied by reasoning that references specific policy requirements. This is intended to help compliance professionals prepare for regulatory audits and inquiries. Security measures are also emphasised; all communications data is encrypted, not used to further train AI models, and is managed under standards such as SOC 2 and ISO 27001.
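For a sense of what an audit-ready, policy-referenced alert might contain, the record below is entirely hypothetical; the fields and policy citations are assumptions rather than Sentinel's actual output format.

```python
# Entirely hypothetical: field names and policy references are illustrative,
# not Sentinel's actual output format.
import json

alert = {
    "alert_id": "2025-000123",
    "channel": "whatsapp",
    "risk_category": "off-channel communication",
    "risk_score": 0.87,  # 0-1, higher means more likely to need human review
    "reasoning": (
        "Participant proposes continuing a client discussion on a personal "
        "device, which conflicts with the firm's recordkeeping policy."
    ),
    "policy_references": ["SEC Rule 17a-4 recordkeeping", "Internal policy COM-04"],
    "thread_context": {"messages_captured": 14, "edits": 1, "deletions": 0},
}

print(json.dumps(alert, indent=2))
```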
Jamie Hoyle, Vice President of Product at MirrorWeb, said, "Compliance has evolved beyond just ticking boxes; it's about making informed decisions that safeguard the business. Sentinel helps customers cut through the noise, focusing on real risks - the needles in the expanding data haystack. We have worked with our customers to develop innovations that meet their needs and address today's most pressing compliance challenges."
"Our Risk Scoring system and comprehensive Scenarios Library minimise the burden of false positive alerts, providing compliance professionals with the clarity and confidence to efficiently manage today's spiraling communication risks."
As the range of communication channels requiring supervision continues to grow in regulated industries, companies face mounting regulatory scrutiny. Tools such as Sentinel are positioned to support compliance efforts by focusing investigative attention on genuinely high-risk content and offering audit-ready data for regulatory review.
Related Articles


Techday NZ, 18 hours ago
Rubrik & Sophos launch advanced Microsoft 365 resilience tool
Rubrik and Sophos have entered a strategic partnership to offer a new Microsoft 365 backup and recovery solution optimised for Managed Detection and Response customers. The collaboration introduces Sophos M365 Backup and Recovery Powered by Rubrik, which enables organisations to recover Microsoft 365 data, including Teams, OneDrive, Exchange, and SharePoint, from within the Sophos Central platform. Sophos Central is currently used by over 75,000 MDR (Managed Detection and Response) and XDR (Extended Detection and Response) customers.

Integration for resilience

This integrated solution is designed to help IT and cybersecurity teams bolster their defences against cyber threats such as ransomware, account compromise, insider threats, and data loss across Microsoft 365 applications. Sophos Central currently integrates telemetry from more than 350 sources, employing deep learning and language models to monitor and respond to threats across endpoints, cloud, networks, identity, email, and business applications.

Joe Levy, Chief Executive Officer of Sophos, said the partnership addresses the ongoing challenges faced by organisations in the digital era.

"We are reshaping what it means to stay operational in a world shaped by constant digital disruption. This is the future of cyber resilience: an intelligent, adaptive partnership that ensures organisations remain secure, responsive, and uninterrupted. By combining Sophos' prevention-first approach with Rubrik's unwavering recovery capabilities, we empower businesses to withstand attacks and maintain continuity, even under pressure."

Sophos will offer this backup and recovery capability as an add-on for its MDR and XDR customers. The integration brings Rubrik's protection and recovery technology into the Sophos Central platform, giving organisations additional tools for secure data recovery in the face of accidental or malicious data loss. The new offer aims to provide flexibility and enhanced operational resilience for users already invested in Sophos security solutions.

Industry context

The need for more robust protection for Microsoft 365 data is highlighted by recent research. According to The State of Ransomware report by Sophos, nearly half of organisations impacted by ransomware incidents paid a ransom to restore their data. Only 54 per cent of affected companies restored their data using backups, pointing to a gap in effective resilience strategies. Further data suggests that 60 per cent of Microsoft 365 tenants have experienced account takeovers and 81 per cent have encountered email compromise. Compromised global admin credentials are a specific risk, as attackers can sometimes manipulate data retention policies to permanently delete critical information.

Bipul Sinha, Chief Executive Officer, Chairman, and Co-founder of Rubrik, said the complex threat environment requires comprehensive strategies that cover both prevention and recovery.

"The reality of today's threat landscape demands a holistic approach to cyber resilience. With AI-enabled attacks and sophisticated breaches on the rise, organisations need more than just prevention; they need the ability to recover rapidly and reliably. Our partnership with Sophos delivers this critical capability directly within a platform security teams already use and trust, raising the bar for Microsoft 365 resilience."

Features and benefits

Sophos MDR and XDR customers are set to benefit from a range of security and recovery capabilities. Rubrik will provide secure, immutable backups for Microsoft 365 data using air-gapped storage, WORM locks, and customer-held encryption keys. Additional security measures such as multifactor authentication and data locks help prevent unauthorised tampering, even if credentials have been compromised. The solution enables restoration of Microsoft 365 content - emails, OneDrives, SharePoint sites, Teams channels, and more - to original or alternative users, including inactive accounts. Rubrik's software will automatically detect users, sites, and mailboxes for protection, apply appropriate policies, and support delegated administration, all while being fully integrated with Sophos Central to minimise administrative overhead.

Both companies stated their commitment to supporting organisations in maintaining operational confidence despite evolving risks. The new joint solution is intended to provide customers and partners with the means to recover from cyber threats with minimal disruption. The offering will be made available through Sophos' channel partner network in the coming months.
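Neither company has published a schema for the integration, but the controls described above (WORM locks, air-gapped storage, customer-held keys, MFA on restore) can be sketched as a generic backup policy; every name and default below is invented for illustration.

```python
# Hypothetical policy sketch: neither Rubrik nor Sophos has published a public
# schema for this integration; names and defaults here are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class M365BackupPolicy:
    name: str
    workloads: tuple               # e.g. ("exchange", "onedrive", "sharepoint", "teams")
    backup_frequency_hours: int
    retention_days: int
    worm_locked: bool              # write-once-read-many: backups cannot be altered or deleted
    air_gapped: bool               # copies kept logically isolated from the production tenant
    customer_held_keys: bool       # encryption keys remain under customer control
    require_mfa_for_restore: bool  # restores need a second authentication factor


DEFAULT_POLICY = M365BackupPolicy(
    name="mdr-standard",
    workloads=("exchange", "onedrive", "sharepoint", "teams"),
    backup_frequency_hours=24,
    retention_days=365,
    worm_locked=True,
    air_gapped=True,
    customer_held_keys=True,
    require_mfa_for_restore=True,
)
```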


NZ Herald, 21 hours ago
Meta bans 6.8 million WhatsApp accounts linked to scam operations
A security expert said in a video accompanying Meta's blog post that users should pause before responding to messages on internet platforms, noting that scammers often create a fake sense of urgency to get people to respond quickly. Photo / Getty Images

Meta said today that it had banned more than 6.8 million WhatsApp accounts this year linked to scam operations. A wave of criminal activity on the internet has wrung billions of dollars out of victims' savings. Scam accounts were often linked to criminal centres across Southeast Asia, where...


Techday NZ, 7 days ago
AMD brings 128B LLMs to Windows PCs with Ryzen AI Max+ 395
AMD has announced a free software update enabling 128 billion parameter Large Language Models (LLMs) to be run locally on Windows PCs powered by AMD Ryzen AI Max+ 395 128GB processors, a capability previously only accessible through cloud infrastructure.

With this update, AMD is allowing users to access and deploy advanced AI models locally, bypassing the need for third-party infrastructure, which can provide greater control, lower ongoing costs, and improved privacy. The company says this shift addresses growing demand for scalable and private AI processing at the client device level. Previously, models of this scale, approaching the size of GPT-3, were operable only within large-scale data centres.

The new functionality comes through an upgrade to AMD Variable Graphics Memory, included with the upcoming Adrenalin Edition 25.8.1 WHQL drivers. This upgrade leverages the 96GB of Variable Graphics Memory available on the Ryzen AI Max+ 395 128GB machine, supporting the execution of memory-intensive LLM workloads directly on Windows PCs.

A broader deployment

This update also makes the AMD Ryzen AI Max+ 395 (128GB) the first Windows AI PC processor to run Meta's Llama 4 Scout 109B model, with full vision and Model Context Protocol (MCP) support. The processor can hold all 109 billion parameters in memory, although the mixture-of-experts (MoE) architecture means only 17 billion parameters are active at any given time. The company reports output rates of up to 15 tokens per second for this model.

According to AMD, the ability to handle such large models locally is important for users who require high-capacity AI assistants on the go. The system also supports flexible quantisation and can run a range of LLMs in the GGUF format, from compact 1B parameter models to Mistral Large. This isn't just about bringing cloud-scale compute to the desktop; it's about expanding the range of options for how AI can be used, built, and deployed locally.

The company further states that performance in MoE models like Llama 4 Scout correlates with the number of active parameters, while dense models depend on the total parameter count. The memory capacity of the AMD Ryzen AI Max+ platform also allows users to opt for higher-precision models, up to 16-bit, when the trade-off between quality and performance warrants it.

Context and workflow

AMD also highlights the importance of context size when working with LLMs. The AMD Ryzen AI Max+ 395 (128GB), equipped with the new driver, can run Meta's Llama 4 Scout at a context length of 256,000 tokens (with Flash Attention enabled and a Q8 KV cache), significantly exceeding the 4,096 token default in many applications.

Examples provided include demonstrations where an LLM summarises extensive documents, such as an SEC EDGAR filing, requiring over 19,000 tokens to be held in context. Another example cited the summarisation of a research paper from the arXiv repository, needing more than 21,000 tokens from query initiation to final output. AMD notes that more complex workflows might require even greater context capacity, particularly for multi-tool and agentic scenarios.

AMD states that while occasional users may manage with a context length of 32,000 tokens and a lightweight model, more demanding use cases will benefit from hardware and software that support expansive contexts, as offered by the AMD Ryzen AI Max+ 395 128GB.
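AMD's announcement does not tie the capability to a specific runtime, but a rough sketch using the open-source llama-cpp-python bindings shows the general workflow of loading a GGUF-quantised model locally with an enlarged context window; the model filename, context size, and other settings below are assumptions for illustration only.

```python
# Illustrative sketch: AMD's announcement does not prescribe a runtime. This uses
# the open-source llama-cpp-python bindings; the model filename, context size,
# and other settings are assumptions, not AMD-published configuration.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-4-scout-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=262_144,      # large context window, far beyond the common 4,096 default
    n_gpu_layers=-1,    # offload all layers to the GPU where memory allows
    flash_attn=True,    # Flash Attention, as referenced in AMD's example settings
)

with open("filing.txt", encoding="utf-8") as f:
    document = f.read()  # e.g. a long SEC EDGAR filing to be summarised

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "Summarise long documents accurately."},
        {"role": "user", "content": f"Summarise the following filing:\n\n{document}"},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```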
Looking ahead, AMD points to an expanding set of agentic workflows as LLMs and AI agents become more widely adopted for local inferencing. Industry trends indicate that model developers, including Meta, Google, and Mistral, are increasingly integrating tool-calling capabilities into their training runs to facilitate local personal assistant use cases.

AMD also advises caution when enabling tool access for large language models, noting the potential for unpredictable system behaviour and outcomes, and recommends installing LLM implementations only from trusted sources. The AMD Ryzen AI Max+ 395 (128GB) is now positioned to support most models available through popular local inference tools, offering flexible deployment and model selection options for users with high-performance local AI requirements.