
Should You Trust AI With Your Company's Secrets?

Forbes

05-08-2025



Priya Mohan, Manager, Cybersecurity & Technology Risk @ KPMG | Author of LinkedIn Learning AI Courses | AI Compliance Strategist.

Gone are the days when audits meant weeks of spreadsheets, screenshots and endless calendar invites. AI promises to do the heavy lifting by spotting risks, automating controls and helping teams avoid burnout. Adding fuel to this, leadership at just about every major company is pushing to bring AI automation into every corner of the business, including audits. The idea of letting AI handle your company's sensitive data and automate IT and security audits sounds like a dream. But before you hand over the keys to your digital kingdom, it is worth pausing: Can you really trust AI with your company's most sensitive secrets?

How AI Is Changing Security Audits

Traditionally, IT audits have been time-consuming and manual: auditors sampled data and hoped they caught everything. AI changes that in the following ways:

• Speed: It can scan huge amounts of data quickly to flag anomalies and unusual behaviors.
• Better Detection: Instead of just sampling, AI scans datasets continuously and identifies control issues that humans might miss. It continuously monitors systems to catch threats as they emerge.
• Data Collection: AI automates the collection of evidence from logs, emails and transactions.
• Always On: AI tools don't take breaks. They monitor your systems 24/7 and alert you to risks right away.
• Cost Reduction: AI cuts down on manual data sifting by audit teams.
• Automated Reporting: AI can streamline compliance reporting with real-time dashboards and insight into the status of compliance initiatives.

The Elephant In The Room: Where Does The Data Go?

Here's the trade-off: AI-based audit tools often require deep read-level access to sensitive areas of your stack, such as source code, production databases, employee access logs or cloud config files. Many AI security tools use a shared SaaS model in which your data may be anonymized and aggregated, but is still used to fine-tune the vendor's detection capabilities. Even more concerning, some tools offer ChatGPT-like interfaces on top of your infrastructure data without clarifying whether prompts or results are stored or processed by third parties.

How To Use AI Tools Without Losing Control of Your Data

So, how do you get the benefits of AI without handing over the keys to the kingdom? Here's what I believe companies should do to protect themselves:

1. Choose tools that offer private cloud or on-prem LLM hosting. Some vendors now offer fully private LLM deployments that never leave your VPC or store data externally. When layering LLMs on top of audit data, insist on private inference.

2. Mandate data residency and isolation. Push vendors to store your data in your chosen geography and isolate it from other tenants. Some tools may offer EU-only data residency or dedicated instances.

3. Negotiate explicit data use clauses in contracts. Get clarity on whether your data is used to improve the AI model; whether logs are retained, and for how long; whether AI prompts (and responses) are stored; and whether the vendor uses subprocessors. Add kill-switch clauses and DPA rights to revoke access or force deletion upon request.

4. Use synthetic or redacted data for POC/testing. Use synthetic IAM policies and redacted GitHub repos to test out an AI audit tool before committing. This lets you evaluate detection logic without exposing real secrets.

5. Build a human-in-the-loop approval model. Even if AI flags a control failure (e.g., an overly permissive S3 bucket), have it routed through a compliance analyst before triggering remediation or reports. This ensures context isn't lost and reduces false positives. A minimal sketch of this kind of routing appears after this list.
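To make point 5 concrete, here is a minimal Python sketch of a human-in-the-loop review queue. It is an illustration only, not taken from any particular audit tool: the Finding fields, the ReviewQueue class and the example bucket finding are all hypothetical, and a real deployment would persist findings and enforce reviewer permissions.

```python
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Finding:
    control_id: str
    description: str
    severity: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer_note: str = ""


class ReviewQueue:
    """Holds AI-generated findings until a compliance analyst signs off."""

    def __init__(self) -> None:
        self._findings: list[Finding] = []

    def submit(self, finding: Finding) -> None:
        # AI output always enters as PENDING; nothing downstream fires yet.
        self._findings.append(finding)

    def pending(self) -> list[Finding]:
        return [f for f in self._findings if f.status is ReviewStatus.PENDING]

    def review(self, finding: Finding, approve: bool, note: str) -> None:
        # Only a human decision moves a finding out of PENDING.
        finding.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED
        finding.reviewer_note = note

    def approved(self) -> list[Finding]:
        # Only approved findings are released to remediation or the audit report.
        return [f for f in self._findings if f.status is ReviewStatus.APPROVED]


if __name__ == "__main__":
    queue = ReviewQueue()
    # Hypothetical AI-flagged control failure: an overly permissive S3 bucket.
    queue.submit(Finding("AWS-S3-001", "Bucket 'finance-exports' allows public read", "high"))
    for finding in queue.pending():
        queue.review(finding, approve=True, note="Confirmed against the bucket policy and business context.")
    print([f.control_id for f in queue.approved()])
```

The design choice is simple: the AI can only add findings in a pending state, and only a reviewer's explicit decision releases them, so automation never triggers remediation or reporting on its own.
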
What AI Tool Can Auditors Start With?

The most common way to start using AI in audits is through a privately hosted LLM (e.g., GPT-4o) within a secure cloud environment. This keeps data secure while enabling smart automation. Start with the following:

• Process Change Identification: Compare the prior years' process narratives against this year's narratives to identify changes and assess their impact; a minimal sketch of this kind of call appears after this list.
• Analyze Code Configurations: Review the code configuration provided by control owners to determine whether the current configuration satisfies the control requirements.
• Test Of Design Report: Upload meeting notes, narratives and supporting evidence. The LLM can generate a draft test of design and conclusion, which should be reviewed and validated by a qualified auditor.
• 100% Testing Of Large Datasets: Upload large datasets and provide instructions on how to perform the analysis needed to identify control exceptions. Validate the output's accuracy.
• Produce An Audit Report: Provide the LLM with the prior year's audit report and a list of current findings. The LLM can produce a well-structured draft report that reflects current issues and matches the established formatting.
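As a rough illustration of the first use case above, here is a Python sketch that sends prior- and current-year process narratives to a privately hosted, OpenAI-compatible endpoint. The base_url, API key, model name and prompt wording are placeholders rather than any specific vendor's setup; it assumes the openai Python client (v1.x) pointed at an LLM running inside your own environment.

```python
from openai import OpenAI

# Placeholder endpoint: a private, OpenAI-compatible deployment inside your VPC,
# so narratives never leave your environment.
client = OpenAI(base_url="https://llm.internal.example.com/v1", api_key="INTERNAL_KEY")


def compare_narratives(prior_year: str, current_year: str) -> str:
    """Ask the private LLM to list process changes between two audit narratives."""
    response = client.chat.completions.create(
        model="gpt-4o",  # whichever model your private deployment exposes
        temperature=0,   # keep the comparison as repeatable as possible
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an IT audit assistant. Identify process changes between "
                    "the two narratives and note the potential impact on controls."
                ),
            },
            {
                "role": "user",
                "content": f"PRIOR YEAR:\n{prior_year}\n\nCURRENT YEAR:\n{current_year}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = compare_narratives(
        "Access reviews are performed quarterly by IT operations.",
        "Access reviews are performed monthly through an automated workflow.",
    )
    print(draft)  # a draft only; a qualified auditor still reviews and validates it
```

The same pattern extends to the other use cases: swap the prompt and the uploaded evidence, and keep the auditor as the final reviewer of whatever the model drafts.
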

The Bottom Line: Is AI Ready To Guard Your Secrets?

AI can supercharge audits by making them faster, smarter and more cost-effective. But it's not a magic bullet. You need to balance automation with strong controls, transparency and careful management of AI tool access. So yes, you can trust AI with your company's secrets, but only if you keep a close eye on how it's used.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
