ISACA launches first advanced AI audit certification for auditors

Techday NZ, 20-05-2025
ISACA has introduced the Advanced in AI Audit (AAIA) certification aimed at experienced auditors working with artificial intelligence in their audit processes.
The AAIA is positioned as the first advanced audit-specific certification for artificial intelligence, targeting professionals who hold credentials such as Certified Information Systems Auditor (CISA), Certified Internal Auditor (CIA), or Certified Public Accountant (CPA). The new certification is designed to validate expertise in areas such as AI governance, risk management, operations, and the use of AI auditing tools.
According to ISACA, the certification aims to address the evolving requirements of audit and compliance as AI becomes more integral across various industries. It signals an adaptation to the increasing demand for AI literacy, highlighted in a recent LinkedIn report listing AI skills among the fastest-growing in today's professional environment.
Shannon Donahue, ISACA Chief Content and Publishing Officer, stated: "ISACA is proud to have served the global audit community for more than 55 years through our audit and assurance standards, frameworks and certifications, and we are continuing to help the community evolve and thrive with the certifications and training they need in this new era of audits involving AI. Through AAIA, auditors can demonstrate their expertise and trusted advisory skills in navigating AI-driven challenges while upholding the highest industry standards."
The AAIA certification programme is based on established standards underpinning credentials such as CISA from ISACA, CIA from the Institute of Internal Auditors, and CPA from the American Institute of Certified Public Accountants. It is designed to ensure that certified auditors can address the specific challenges linked with AI integration, AI compliance, and audit process enhancement through AI-driven insights.
ISACA highlights that the certification not only verifies the ability of professionals to audit AI-powered systems but also enables auditors to use AI tools to streamline their own audit processes. This may lead to reduced manual effort and potentially more timely and accurate decision-making, while maintaining the standards required for accuracy and regulatory compliance.
Eligibility for the AAIA is currently extended to those who hold an active CISA, CIA, or CPA certification. The AAIA exam covers three main domains: AI governance and risk, AI operations, and AI auditing tools and techniques. Preparation resources include the AAIA Review Manual, an AAIA Online Review Course, and a Questions, Answers, and Explanations Database, each offering a full year of access to support candidates preparing for the examination.
ISACA points to recent internal research highlighting the growing urgency for AI-related skills within audit and digital trust professions. According to the data, 85 percent of digital trust professionals, including auditors, believe they will need to expand their knowledge and skills in AI within the next two years to maintain or advance in their roles. Additionally, 94 percent agree that AI skills will be important for professionals in this field.
To support ongoing learning, ISACA has expanded its training offerings to include new AI-focused courses such as Introduction to AI for Auditors and Auditing Generative AI. An Artificial Intelligence Audit Toolkit has also been introduced to supplement professional development in this area.
ISACA also indicated plans for additional AI credentials, including the Advanced in AI Security Management (AAISM) certification, to be launched in the third quarter. This qualification is expected to cater to information security managers and professionals holding credentials such as CISM and CISSP.

Related Articles

Is AI the future of deciding prices?

RNZ News

Delta Airlines has hit turbulence after publicly stating that it wants AI to set 20% of its domestic ticket prices by the end of the year, raising concerns about so-called surveillance pricing. Delta has since issued a statement saying it does not intend to use AI to leverage individual consumer-specific data, such as prior purchasing activity, but several senators remain concerned and are working on legislation to stop the practice. Patrick Dodd, a professional teaching fellow at the University of Auckland and an expert in the use of AI and digital technologies in marketing, spoke to Lisa Owen.

Pentera unveils AI web attack testing to boost cyber defences

Techday NZ

Pentera has launched AI-powered Web Attack Testing with new features designed to emulate advanced cyber threats and enhance security validation for organisations. The latest addition introduces AI-driven payload generation and adaptive testing logic, aiming to provide security teams with tools to emulate contemporary threats more effectively. These capabilities are intended to deliver more nuanced, context-aware attack emulation, supporting organisations in validating their defences against increasingly sophisticated, AI-assisted cyberattacks.

Pentera's Chief Product Officer, Ran Tamir, commented on the growing impact of artificial intelligence in the cybersecurity landscape, stating: "AI is leveling the playing field, turning even keyboard kiddies into credible threat actors. Leveraging AI, attackers can move faster and with more precision than ever before. With the addition of AI to our adversarial testing arsenal we're giving defenders that same advantage, adapting in real time to new threat patterns and tuning each test with the context needed to uncover what traditional scans miss. We have a strong vision for how AI will permeate throughout the security validation practice, and these additions are only the beginning."

The new capabilities extend Pentera's AI suite, which began with the introduction of AI Insight Reporting earlier in the year. Drawing on the experience from that launch, the company is now focusing on the external-facing web attack surface, incorporating AI in several key areas.

AI-driven payload generation

According to Pentera, the system can now generate attack payloads informed by current threat intelligence, allowing for faster emulation of newly discovered attack techniques. By building payloads based on the latest trends, the platform is designed to ensure that testing keeps pace with the evolution of real-world cyberattacks.

PII-aware attack chaining

The system proactively identifies and extracts exposed Personally Identifiable Information (PII) during testing, automatically leveraging that data within identity threat attack emulations where relevant. This aims to reflect how attackers might exploit such data in actual intrusion attempts.

No language or cultural barriers

Pentera's platform reportedly accommodates variations in language, naming conventions, and terminology across different regions. The company states this enables consistent and accurate attack simulations regardless of regional differences in how user-facing components are labelled or structured, improving the realism and applicability of tests in diverse environments.

System-aware logic

The platform also features system-aware logic within its attack tactics. It can recognise the type of system it is interacting with and attempt the most relevant default credentials based on how authentication is structured in each case. This approach is intended to support more precise, context-driven attack scenarios.

AI security insights reporting

Alongside the AI-based web attack testing, Pentera has introduced AI-powered security posture reporting for externally exposed assets. These reports analyse historical test data across a selected timeframe, surfacing trends in security posture, regressions, and top remediation priorities. The reports are exportable, supporting both technical teams and executives with a clear overview of exposure and progress over time. The goal, according to Pentera, is to furnish stakeholders with actionable intelligence to guide security priorities and track the effectiveness of remediation efforts.

Pentera's growing suite of AI tools reflects a broader movement in the cyber defence sector, where rapid advancements in attack automation and adversarial AI present ongoing challenges to enterprise security. The organisation focuses on supporting security teams by equipping them with assessment and validation functions that align with developments in the threat landscape.

CrowdStrike & OpenAI enhance SaaS security with AI agent oversight

Techday NZ

CrowdStrike has announced a new integration with OpenAI aimed at improving security and governance for AI agents used throughout the software-as-a-service (SaaS) landscape. The company's Falcon Shield product now integrates with the OpenAI ChatGPT Enterprise Compliance API, providing the ability to discover and manage both GPT and Codex agents created within OpenAI's ChatGPT Enterprise environment. This expansion supports more than 175 SaaS applications, addressing the increasing use of agentic AI in business operations.

AI and the expanding attack surface

As enterprises leverage AI agents to automate workflows and increase efficiency, the number of such agents is rising rapidly. CrowdStrike highlighted that while these agents deliver operational benefits, they also introduce new security challenges. Organisations may struggle to monitor agent activities, understand the data and systems these agents can access, and determine who is responsible for creating or controlling them. Autonomous AI agents frequently operate with non-human identities and persistent privileges. If a human identity associated with such an agent is compromised, adversaries could use the agent to exfiltrate data, manipulate systems, or move across key business applications undetected. The proliferation of these agents increases the attack surface and can significantly amplify the impact of a security incident.

Enhanced visibility and governance

Falcon Shield's new capabilities are intended to help organisations address these risks by mapping each AI agent to its human creator, identifying risky behaviour, and aiding real-time policy enforcement. When combined with the company's Falcon Identity Protection, CrowdStrike's platform aims for unified visibility and protection for both human and non-human identities.

"AI agents are emerging as superhuman identities, with the ability to access systems, trigger workflows, and operate at machine speed," said Elia Zaitsev, Chief Technology Officer, CrowdStrike. "As these agents multiply across SaaS environments, they're reshaping the enterprise attack surface, and are only as secure as the human identities behind them. Falcon Shield and Falcon Identity Protection help secure this new layer of identity to prevent exploitation."

Key features of the Falcon Shield integration include the discovery of embedded AI tools such as GPTs and Codex agents across various platforms, including ChatGPT Enterprise, Microsoft 365, Snowflake, and Salesforce. This is designed to give security teams increased visibility into AI agent proliferation within an organisation's digital environment.

Accountability and threat containment

The integration links each AI agent to its respective human creator. According to CrowdStrike, this supports greater accountability and enables organisations to trace access and manage privileges using contextual information. Falcon Identity Protection works alongside these capabilities to further secure human identities associated with AI agent activity. CrowdStrike stated that the system can analyse identity, application, and data context to flag risks such as overprivileged agents, GPTs with sensitive abilities, and unusual activity. Threats can be contained automatically using Falcon Fusion, the company's no-code security orchestration, automation, and response (SOAR) engine, which can block risky access, disable compromised agents, and trigger response workflows as required.

Unified protection approach

The product suite combines Falcon Shield, Falcon Identity Protection, and Falcon Cloud Security to provide what the company describes as end-to-end visibility and control over AI agent activity, tracking actions from the person who created an agent to the cloud systems it is able to access.
Organisations using agentic AI in their operations are being encouraged to consider tools and approaches that not only monitor the agents themselves but also strengthen oversight of the human identities behind these digital entities.
