Google Cloud unveils advanced AI security tools & SOC updates

Techday NZ, 12 hours ago
Google Cloud has announced new security solutions and enhanced capabilities focused on securing AI initiatives and supporting defenders in the context of growing enterprise adoption of artificial intelligence technologies.
With the introduction of AI across various sectors, organisations are increasingly concerned about the risks presented by sophisticated AI agents. Google Cloud has responded by expanding the security measures available within its Security Command Centre, emphasising protection for AI agents and ecosystems through tools such as Sensitive Data Protection and Model Armour.
According to Jon Ramsey, Vice President and General Manager, Google Cloud Security, "AI presents an unprecedented opportunity for organizations to redefine their security posture and reduce the greatest amount of risk for the investment. From proactively finding zero-day vulnerabilities to processing vast amounts of threat intelligence data in seconds to freeing security teams from toilsome work, AI empowers security teams to achieve not seen before levels of defence and efficiency."
Expanded protection for agentic AI
Google Cloud has detailed three new capabilities for securing AI agents in Google Agentspace and Google Agent Builder. The first, expanded AI agent inventory and risk identification, will enable automated discovery of AI agents and Model Context Protocol (MCP) servers. This feature aims to help security teams quickly identify vulnerabilities, misconfigurations, and high-risk interactions across their AI agent estate.
The second, advanced in-line protection and posture controls, extends Model Armour's real-time security assurance to Agentspace prompts and responses. This enhancement is designed to provide controls against prompt injection, jailbreaking, and sensitive data leakage during agent interactions. In parallel, the introduction of specialised posture controls will help AI agents adhere to defined security policies and standards.
Proactive threat detection rounds out these developments, introducing detections for risky behaviours and external threats to AI agents. These detections, supported by intelligence from Google and Mandiant, assist security teams in responding to anomalous and suspicious activity connected to AI agents.
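The in-line controls described above amount to screening agent prompts and responses before they reach the model. The sketch below is purely illustrative, since Model Armour's actual detection logic and API are not described in this article; the patterns and function are invented for the example:

```python
import re

# Hypothetical illustration of in-line prompt screening; not Model Armour's
# actual implementation, which is not detailed in this article.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
    r"reveal your (system|hidden) prompt",
]

SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-style identifier
    r"\b\d{16}\b",             # bare 16-digit card-like number
]

def screen_prompt(prompt: str) -> dict:
    """Flag likely prompt-injection attempts and sensitive data in a prompt."""
    findings = []
    for pat in INJECTION_PATTERNS:
        if re.search(pat, prompt, re.IGNORECASE):
            findings.append(("prompt_injection", pat))
    for pat in SENSITIVE_PATTERNS:
        if re.search(pat, prompt):
            findings.append(("sensitive_data", pat))
    return {"allowed": not findings, "findings": findings}
```

A real in-line control would sit between the user and the agent, blocking or redacting a prompt whenever `allowed` is false, and applying the same screen to the agent's responses on the way back out.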
Agentic security operations centre
Google Cloud is advancing its approach to security operations through an 'agentic SOC' vision in Google Security Operations, which leverages AI agents to enhance efficiency and detection capabilities. By automating processes such as data pipeline optimisation, alert triage, investigation, and response, Google Cloud aims to address traditional gaps in detection engineering workflows. The company explained: "We've introduced our vision of an agentic security operations center (SOC) that includes a system where agents can coordinate their actions to accomplish a shared goal. By offering proactive, agent-supported defense capabilities built on optimizing data pipelines, automating alert triage, investigation, and response, the agentic SOC can streamline detection engineering workflows to address coverage gaps and create new threat-led detections."
The new Alert Investigation agent, currently in preview, can autonomously enrich events, analyse command lines, and build process trees. It produces recommendations for next steps, aiming to reduce the manual effort and response times for security incidents.
Expert guidance and consulting
Google Cloud's Mandiant Consulting arm is extending its AI consulting services in response to demand for robust governance and security frameworks in AI deployments. These services address areas such as risk-based AI governance, pre-deployment environment hardening, and comprehensive threat modelling.
Mandiant Consulting experts noted, "As more organizations lean into using generative and agentic AI, we've seen a growing need for AI security consulting. Mandiant Consulting experts often encounter customer concerns for robust governance frameworks, comprehensive threat modeling, and effective detection and response mechanisms for AI applications, underscoring the importance of understanding risk through adversarial testing."
Clients working with Mandiant can access pre-deployment security assessments tailored to AI and benefit from continuous updates as threats evolve.
Unified platform enhancements
Google Unified Security, a platform integrating Google's security solutions, now features updates in Google Security Operations and Chrome Enterprise. Within Security Operations, the new SecOps Labs offers early access to AI-powered experiments related to parsing, detection, and response, many of which use Google Gemini technology. Dashboards with native security orchestration, automation, and response (SOAR) data integration are now generally available, reflecting user feedback from previous previews.
On the endpoint side, Chrome Enterprise enhancements bring secured browsing to mobile, including Chrome on iOS, with features such as easy account separation and URL filtering. This allows companies to block access to unauthorised AI sites and provides enhanced reporting for investigation and compliance purposes.
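URL filtering of this kind comes down to checking each navigation against a blocklist of hosts. The following sketch is illustrative only; the domain list and helper function are invented for the example and are not Chrome Enterprise's actual policy format:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of unauthorised AI sites; domains are placeholders.
BLOCKED_AI_DOMAINS = {"unapproved-ai.example.com", "chat.shadow-ai.example.net"}

def is_blocked(url: str) -> bool:
    """Block navigation when the host matches or falls under a blocked domain."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_AI_DOMAINS or any(
        host.endswith("." + d) for d in BLOCKED_AI_DOMAINS
    )
```

Matching on the parsed hostname, rather than on the raw URL string, avoids trivial bypasses such as embedding a blocked domain in a path or query parameter.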
Trusted Cloud and compliance
Recent updates in Trusted Cloud focus on compliance and data security. Compliance Manager, now in preview, enables unified policy configuration and extensive auditing within Google Cloud. Data Security Posture Management, also in preview, delivers governance for sensitive data and integrates natively with BigQuery Security Centre. The Security Command Centre's Risk Reports can now summarise unique cloud security risks to inform both security specialists and broader business stakeholders.
Updates in identity management include Agentic IAM, launching later in the year, which will facilitate agent identities across environments to simplify credential management and authorisation for both human and non-human agents. Additionally, the IAM role picker powered by Gemini, currently in preview, assists administrators in granting least-privileged access through natural language queries.
Enhanced Sensitive Data Protection now monitors assets in Vertex AI, BigQuery, and CloudSQL, with improvements in image inspection for sensitive data and additional context model detection.
Network security innovations announced include expanded tag support for Cloud NGFW, Zero Trust networking for RDMA networks in preview, and new controls for Cloud Armour, such as hierarchical security policies and content-based WAF inspection updates.
Commitment to responsible AI security
Jon Ramsey emphasised Google Cloud's aim to make security a business enabler: "The innovations we're sharing today at Google Cloud Security Summit 2025 demonstrate our commitment to making security an enabler of your business ambitions. By automating compliance, simplifying access management, and expanding data protection for your AI workloads, we're helping you enhance your security posture with greater speed and ease. Further, by using AI to empower your defenders and meticulously securing your AI projects from inception to deployment, Google Cloud provides the comprehensive foundation you need to thrive in this new era."

Related Articles

Workday breach exposes business contact data via CRM attack

Techday NZ, 2 hours ago

Workday has disclosed a data breach after attackers exploited a third-party Customer Relationship Management (CRM) platform through social engineering tactics. The company confirmed that no customer tenant or core system data was affected, with the exposed information limited to business contact details such as names, email addresses and phone numbers.

The breach, discovered on 6 August and disclosed on 15 August, involved attackers impersonating HR and IT staff to trick employees via SMS and phone calls. This enabled access to the CRM through malicious OAuth applications. Workday said it has since blocked unauthorised access, introduced additional safeguards, and urged stakeholders to remain vigilant against phishing attempts. The company stressed that official communications will never request passwords or sensitive data over the phone.

The incident follows a wave of similar CRM-targeted breaches affecting companies including Google, Adidas and Qantas, underscoring the growing threat of OAuth abuse and the risks associated with third-party integrations.

Expert reaction

Security experts have warned that the breach highlights the growing risks posed by social engineering and third-party applications. Dray Agha, senior manager of security operations at Huntress, said: "This incident underscores three non-negotiable defences: Eliminate OAuth blind spots and enforce strict allow-listing for third-party app integrations and review connections at regular intervals. Adopt phishing-resistant MFA: Hardware tokens are essential, as 'MFA fatigue' attacks remain trivial. A huge number of attacks begin with social engineering, users being deceived, and user enrolment in execution of malware - effective security awareness training is a must for any organisation that wishes to repudiate cyber-attacks."

Tim Ward, CEO and co-founder at Redflags, noted the psychological risks of such attacks: "Workday's warning is correct; any information that attackers can use to increase 'familiarity' in subsequent social engineering attacks will significantly increase their impact. Psychological effects like authority bias, cognitive ease, social proof, and the mere exposure effect mean we are more likely to trust communications from them and be less likely to check for or notice telltale signs of social engineering. A healthy scepticism combined with helpful security awareness nudges at the point of risk to help encourage caution can be critical to protect people in organisations from these threats."

Boris Cipot, senior security engineer at Black Duck, emphasised the manipulative nature of such attacks: "Social engineering is a manipulative attack method that relies on psychology and social interaction skills to deceive victims into releasing sensitive information. Attackers trick victims into performing actions that aid in gaining access to sensitive information, often requiring multiple interactions and 'internal' information to appear legitimate. To protect against social engineering, organisations should establish and enforce strict procedures for handling sensitive information, such as not providing information over the phone, even to high-ranking executives, including the CEO." He added: "Although the breached information may be limited to commonly known business data in this case, individuals should still be vigilant to avoid falling prey to further attacks."

Jamie Akhtar, CEO and co-founder at CyberSmart, said training is crucial: "This breach demonstrates two things. Firstly, given that Workday is the latest in a long list that includes Adidas, Qantas, Google, and Air France-KLM to be compromised in this way, it shows how effective and sophisticated social engineering campaigns have become. Secondly, it highlights the need for every business to engage in proper, targeted cybersecurity awareness training. It's very difficult to completely eliminate social engineering threats through technical means alone."

Third-party risk

Darren Guccione, CEO and co-founder of Keeper Security, warned that integration points remain vulnerable: "The data breach impacting Workday is a perfect illustration of the persistent and evolving risk posed by social engineering tactics targeting third-party platforms. The situation is reflective of a troubling trend across enterprise software vendors, and it appears connected to a broader wave of recent attacks similarly targeting CRM systems at multiple global enterprises via sophisticated social engineering and OAuth-based tactics." He added that organisations must "require all partners and third-party platforms to undergo regular security assessments and continuous monitoring".

Javvad Malik, lead security awareness advocate at KnowBe4, said: "Social engineering continues to be the most common way organisations get breached, for this very reason, that technical controls have their limitations. We currently don't have effective ways for technology to screen and block phone calls in the same way that we can reduce some of the risk with emails."

Chris Hauk, consumer privacy advocate at Pixel Privacy, called for stronger internal processes: "Organisations like Workday need to put processes in place that will foil vishing calls like the ones that took down Workday. Companies need to train their employees and executives on how to recognise schemes like this and provide ways to immediately contact IT when an attempt occurs."

Chris Linnell, associate director of data privacy at Bridewell, highlighted the importance of supply chain security: "The recent disclosure by Workday regarding a breach of its third-party CRM platform has understandably raised concerns across the data protection and security community. On the surface, the impact appears to be low – primarily because the compromised data consists of business contact information, much of which is already publicly accessible. However, this should not lull organisations into complacency. The real risk lies in the potential for targeted social engineering attacks." He concluded: "This incident underscores the ongoing need for robust employee training around social engineering. Traditional phishing simulations are no longer sufficient. Organisations must explore more creative and engaging methods to ensure that awareness messaging resonates and drives behavioural change. Finally, the breach serves as a reminder of the importance of supply chain security. As the saying goes, you're only as strong as your weakest link."
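The OAuth allow-listing defence recommended above can be illustrated with a small sketch. The client IDs and inventory shape here are invented for the example; a real audit would pull the list of connected apps from the CRM vendor's admin API:

```python
# Hypothetical allow-list of vetted OAuth integrations (placeholder IDs).
ALLOWED_CLIENT_IDS = {
    "crm-reporting-tool",
    "calendar-sync-connector",
}

def audit_connected_apps(connected_apps: list[dict]) -> list[dict]:
    """Return connected OAuth apps that are not on the allow-list."""
    return [app for app in connected_apps
            if app["client_id"] not in ALLOWED_CLIENT_IDS]

# Example inventory as it might come back from an admin API (illustrative).
inventory = [
    {"client_id": "crm-reporting-tool", "granted_scopes": ["read:contacts"]},
    {"client_id": "bulk-export-helper", "granted_scopes": ["read:contacts", "export:all"]},
]
violations = audit_connected_apps(inventory)
# "bulk-export-helper" is flagged for review or revocation
```

Run at regular intervals, a check like this surfaces the malicious OAuth applications described in the breach before they can quietly accumulate access.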

Transcend launches AI tools for compliance with data privacy laws

Techday NZ, 4 hours ago

Transcend has launched two new AI governance tools, "Do Not Train" and "Deep Deletion," aimed at providing B2B AI companies with enhanced data privacy controls. The tools address specific privacy concerns among organisations using artificial intelligence, particularly around the use and deletion of customer data. These concerns have become prominent as enterprises increasingly demand proof that their data is not being improperly used for model training and is deleted in compliance with regulatory requirements.

AI data use controls

Transcend's "Do Not Train" feature gives AI developers the ability to guarantee on a record-level basis that particular customer data will not be utilised for model training or development. This solution satisfies both user preferences and the contractual obligations often required in enterprise agreements. The second tool, Deep Deletion, enables companies to identify and permanently erase customer data from their data systems. It also provides verifiable documentation that the deletion has occurred, meeting increasing demands from regulators for proof of data erasure.

Together, these mechanisms allow AI companies to exercise full lifecycle governance over customer data - preventing non-compliant information from entering AI models and ensuring data can be reliably purged if required by contract or law. With regulation such as the General Data Protection Regulation (GDPR), the EU AI Act, and varying state privacy and AI statutes, there is growing pressure on AI vendors not only to provide opt-outs for training data but also to produce audit-ready confirmation of deletion. Vendors failing to provide these assurances may risk losing enterprise customers or running afoul of new legal frameworks.

Industry adoption

Transcend reports that its tools are already deployed by some of the world's largest AI companies, collectively processing over two hundred million workflows to date. This indicates broad uptake among vendors providing enterprise-grade AI services. "We've seen firsthand that enterprise AI contracts hinge on a vendor's ability to prove both 'Do Not Train' compliance and true data deletion," said Ben Brook, Co-Founder and Chief Executive Officer of Transcend. "These capabilities are already helping AI-Native industry leaders land enterprise customers - and now we're scaling them to power the next wave of responsible AI adoption."

The infrastructure underpinning these controls works in real time, enforcing compliance directly within data systems. This approach is designed to ensure that a company's data processing activities remain aligned with privacy commitments and regulatory obligations, eliminating the need for manual oversight.

Market context

The expansion of Transcend's governance tools comes as more organisations are scrutinised over their handling of training data. The EU AI Act, along with ongoing shifts in state and international privacy regulations, has heightened expectations for transparency in data management, specifically around AI model development and data deletion. Industry observers note that providing robust data governance is increasingly considered a competitive differentiator for vendors seeking to acquire or retain enterprise customers. The demand for audit-ready compliance and demonstrable action on data privacy is anticipated to continue rising as regulations evolve.

Transcend's new offerings position the company to cater to these needs by supplying vendors with the ability to offer proof of their data handling practices during contract negotiations and regulatory reviews. The company's solutions are designed to replace manual processes with automation, aiming for continual adherence to privacy requirements as enterprise adoption of AI expands.
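Record-level "Do Not Train" enforcement, as described above, reduces to excluding flagged records before they ever reach a training pipeline. The field name and record shape below are assumptions made for illustration, not Transcend's actual schema:

```python
# Illustrative record-level "do not train" filter; the "do_not_train" flag
# and record structure are invented for this example.
def build_training_set(records: list[dict]) -> list[dict]:
    """Exclude any record flagged do-not-train before it enters training."""
    return [r for r in records if not r.get("do_not_train", False)]

records = [
    {"id": 1, "text": "support ticket", "do_not_train": True},
    {"id": 2, "text": "public FAQ entry"},
    {"id": 3, "text": "contract excerpt", "do_not_train": True},
]
trainable = build_training_set(records)  # only record 2 survives
```

Filtering at the record level, rather than per customer or per dataset, is what lets a vendor honour a mix of user preferences and contractual opt-outs within the same corpus.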

ISACA unveils AI security credential to boost cyber expertise

Techday NZ, 4 hours ago

ISACA has launched a new AI-centred security management certification for cybersecurity professionals. The new Advanced in AI Security Management (AAISM) credential is now available to Certified Information Security Managers (CISM) and Certified Information Systems Security Professionals (CISSP), the organisation has confirmed. This development comes as ISACA's latest AI Pulse Poll shows that 95 percent of digital trust professionals are concerned that generative AI will be exploited by malicious actors. The use of AI in business operations across sectors has led to heightened cyber threats and an increased demand for skilled security professionals capable of managing and protecting AI systems.

The AAISM certification is intended to enable security professionals to implement enterprise AI solutions while identifying, assessing, monitoring and mitigating associated risks. It is the first credential of its kind offered by ISACA and has been designed to provide a comprehensive learning path across three core domains: AI governance and programme management, AI risk management, and AI technologies and controls.

Eligibility for the AAISM is limited to professionals who already hold either a CISM or CISSP. ISACA outlined that the credential builds upon the security management best practices from these certifications, with a specific focus on the threat landscape related to AI. This approach aims to help professionals manage risk profiles and leverage AI within security operations effectively. Exam preparation materials for the AAISM are available in both digital and print formats, including an official review manual, an online review course, and an extensive database of questions, answers, and explanations. Access to these learning options is provided for one year, allowing candidates sufficient time to prepare for the examination.

AI risk skills

"The AAISM credential validates information security managers' commitment to elevating their expertise and proving they are attuned to how AI is reshaping enterprise security," said Goh Ser Yoong, Head of Compliance, and member of the ISACA Emerging Trends and IT Risk Advisory Working Groups. "AAISM's synergy with existing, award-winning security credentials with a focus on AI is a key differentiator that will equip security leaders to excel and grow their careers in this dynamic security landscape."

The AAISM is positioned for professionals with proven experience in security or advisory roles, as well as those with some expertise in assessing, implementing and maintaining AI systems. Its content reflects the changing requirements for managing organisational security in an era where AI technologies are rapidly advancing. ISACA has been expanding its suite of AI-related courses and resources to meet demand, including recent course offerings on the AI threat landscape and ethical perspectives in AI. The AAISM joins the ISACA Advanced in AI Audit (AAIA) credential, which is available to audit professionals holding appropriate high-level audit certifications such as the Certified Information Systems Auditor (CISA).

Supporting career progression

"We're proud to be the leader in developing world-class AI-focused training and credentialing for professionals in IT audit and security," says Erik Prusch, ISACA CEO. "From a robust slate of courses and resources to the first advanced audit-specific AI certifications for experienced auditors and security managers, we are committed to finding groundbreaking ways to empower digital trust leaders to harness the transformative potential of artificial intelligence responsibly and effectively, while propelling their careers forward."

The broader initiative from ISACA reflects ongoing industry requirements for digital trust professionals to keep pace with the increasing impact of AI. With its global membership, ISACA has developed the AAISM and associated materials to provide accessible pathways for professionals to validate their knowledge and skills in this developing field.
