ISACA launches first advanced AI audit certification for auditors


Techday NZ, 20 May 2025

ISACA has introduced the Advanced in AI Audit (AAIA) certification aimed at experienced auditors working with artificial intelligence in their audit processes.
The AAIA is positioned as the first advanced audit-specific certification for artificial intelligence, targeting professionals who hold credentials such as Certified Information Systems Auditor (CISA), Certified Internal Auditor (CIA), or Certified Public Accountant (CPA). The new certification is designed to validate expertise in areas such as AI governance, risk management, operations, and the use of AI auditing tools.
According to ISACA, the certification aims to address the evolving requirements of audit and compliance as AI becomes more integral across various industries. It signals an adaptation to the increasing demand for AI literacy, highlighted in a recent LinkedIn report listing AI skills among the fastest-growing in today's professional environment.
Shannon Donahue, ISACA Chief Content and Publishing Officer, stated: "ISACA is proud to have served the global audit community for more than 55 years through our audit and assurance standards, frameworks and certifications, and we are continuing to help the community evolve and thrive with the certifications and training they need in this new era of audits involving AI. Through AAIA, auditors can demonstrate their expertise and trusted advisory skills in navigating AI-driven challenges while upholding the highest industry standards."
The AAIA certification programme is based on established standards underpinning credentials such as CISA from ISACA, CIA from the Institute of Internal Auditors, and CPA from the American Institute of Certified Public Accountants. It is designed to ensure that certified auditors can address the specific challenges linked with AI integration, AI compliance, and audit process enhancement through AI-driven insights.
ISACA highlights that the certification not only verifies the ability of professionals to audit AI-powered systems but also enables auditors to use AI tools to streamline their own audit processes. This may lead to reduced manual effort and potentially more timely and accurate decision-making, while maintaining the standards required for accuracy and regulatory compliance.
Eligibility for the AAIA is currently extended to those who hold an active CISA, CIA, or CPA certification. The content of the AAIA exam covers three main domains: AI governance and risk, AI operations, and AI auditing tools and techniques. Preparation resources include the AAIA Review Manual, an AAIA Online Review Course, and a Questions, Answers, and Explanations Database, all of which allow for a full year of access to support candidates preparing for the examination.
ISACA points to recent internal research highlighting the growing urgency for AI-related skills within audit and digital trust professions. According to the data, 85 percent of digital trust professionals, including auditors, believe they will need to expand their knowledge and skills in AI within the next two years to maintain or advance in their roles. Additionally, 94 percent agree that AI skills will be important for professionals in this field.
To support ongoing learning, ISACA has expanded its training offerings to include new AI-focused courses such as Introduction to AI for Auditors and Auditing Generative AI. An Artificial Intelligence Audit Toolkit has also been introduced to supplement professional development in this area.
ISACA also indicated plans for additional AI credentials, including the Advanced in AI Security Management (AAISM) certification, to be launched in the third quarter. This qualification is expected to cater to information security managers and professionals holding credentials such as CISM and CISSP.


Related Articles

The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop, 5 hours ago

Article – RNZ. It's the question that keeps Geoffrey Hinton up at night: What happens when humans are no longer the most intelligent life on the planet? 'My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people.' Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim. His career at the forefront of machine learning began at its inception – before the first Pac-Man game was released. But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence. Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future. One filled with promise, peril – and a potential apocalypse. The Good: 'It's going to do wonderful things for us' Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. 'It's going to do wonderful things for us,' he says. According to a report from this year's World Economic Forum, the AI market in education is already worth around US$5 billion. That's expected to grow to US$112.3 billion in the next decade. Proponents like Hinton believe the benefits to education lie in targeted efficiency when it comes to student learning, similar to how AI is assisting medical diagnoses. 'In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients – including quite a few with the same very rare condition you have – that knows your genome, knows all your tests, and hasn't forgotten any of them.'
He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive – a human-AI synergy he believes will only improve over time. Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI learning is on track to cure all diseases in just 10 years. 'I think that's a bit optimistic.' 'If he said 25 years I'd believe it.' The Bad: 'Autonomous lethal weapons' Despite these benefits, Hinton warns of pressing risks that demand urgent attention. 'Right now, we're at a special point in history,' he says. 'We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes.' He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war. 'This shows,' says Hinton of his former employers, 'the company's principles were up for sale.' He believes the defence departments of all major arms dealers are already busy working on 'autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind'. He also points out the grim fact that Europe's AI regulations – some of the world's most robust – contain 'a little clause that says none of these regulations apply to military uses of AI'. Then there is AI's capacity for deception – designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting cybercrime has surged – in just one year – by 1200 percent. The Apocalyptic: 'We'd no longer be needed' At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet? 'I think it would be a bad thing for people – because we'd no longer be needed.'
Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover being like The Terminator franchise. 'If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose.' 'Ask a chicken' For those who believe a rogue AI can simply be shut down by 'pulling the plug', Hinton believes it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive. This month, Palisade Research reported that OpenAI's o3 model altered shutdown code to prevent itself from being switched off – despite being given clear instructions by the research team to allow the shutdown. Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. 'There are so many bad uses as well as good,' he says. 'And our political systems are just not in a good state to deal with this coming along now.' It's a sobering reflection from one of the brightest minds in AI – whose work helped build the systems now raising alarms. He closes on a metaphor that sounds as absurd as it is chilling: 'If you want to know what it's like not to be the apex intelligence, ask a chicken.' Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.


Why it's time Australian asset management leaders explore the true value of AI

Techday NZ, 6 hours ago

Modern asset management across industries like mining, utilities, public infrastructure, road and rail represents a massive exercise in managing and analysing data. This data-driven domain is a critical function within these industries, albeit one that has seen less digital transformation than other segments in recent years. Because of this, there appears to be increasing pressure on asset-centric business leaders to rush into an AI project as a quick way to deliver efficiencies in their operations. However, whether organisations are curious, excited, motivated, or feeling like they're under boardroom duress to deploy an AI solution into their asset management workflows, strategic considerations about how best to integrate these tools must come first. Digital transformation is not a small undertaking, nor should it be about technology in and of itself, especially for companies managing critical infrastructure and assets potentially worth billions. The stakes are high, which is why one industry leader is calling on organisations to zoom out and take a look at the big picture. Hype cycles, distractions and trust If you ask Scott McGowan, CEO of data-led asset management solutions firm COSOL, what he thinks of AI, he'll tell you it represents a lot of potential in the asset management sector. However, he'll also tell you not to run out and invest in an AI solution just because your board thinks you should. Observing the proliferation of AI adoption hype across industry events, social media and the press, McGowan is concerned that market pressures might push organisations in asset-centric industries to try and run with AI before they can walk. "Everyone expects everyone to build an AI division in-house, but if you're a mining or infrastructure company, the reality is that's not your core business," he says. "Your core business is to produce iron, or coal, or copper, or to run trains and provide services to the public."
Without reason or a roadmap, the discourse around AI integration often represents an unwelcome distraction from the primary mission of asset-centric industries. Adoption pressures are being met with a degree of hesitation from business leaders, McGowan adds, due to the somewhat untested nature of AI in an asset management context and an absence of established trust. "Hesitation toward AI probably comes from multiple aspects, but they all come back to one underlying principle, which is around trust," he says. "Trust in security, trust in algorithms, and trust in data." "Trust is built on experience and understanding, and I think the challenge we have with AI as it stands today is that the algorithms are designed in such a way that there's little to no transparency as to the decision-making methodology those algorithms have. "Data foundations must also be trusted, and solutions that act on that data must be trusted to do so correctly. Organisations need to trust that any solution they adopt is repeatable, robust and resilient." Exploring a path forward for AI in asset-centric industries will be the subject of a series of roundtables that COSOL, in partnership with IBM, will be running across three Australian cities in June. While the sessions are closed, select insights are set to be released highlighting discussions, pain points and strategies business leaders face. Ahead of the events, COSOL's McGowan touched on some of the messaging set to be covered during the campaign, including what it might look like for an organisation to take its first steps with AI before building maturity in considered stages. First steps: Learning to walk with AI AI transformation is a journey, not a big bang project. Organisations that approach their AI maturity through a walk-jog-run lens based on the true needs of the business are much more likely to find value in implementation versus those that seek shiny solutions.
In the first instance, the walk phase, McGowan urges that AI has to be solving an existing problem. "There are a lot of AI initiatives and a lot of technologies looking for a problem; I think that's the wrong way to go about it," he says. "The walk concept is solving a discrete problem that already lies within the technology space. If you think about self-solving and trust in data, using AI to resolve, infer, continually assess or optimise data quality is actually a really interesting place to start. "Through this approach AI starts to solve its own trust problem by addressing one of the initial play areas. The opportunity around master data quality and the ability to link physical equipment with digital representations for accuracy is in my view the first part of the AI journey in our context." David Small, Principal Sales Leader for IBM Automation Software in Australia and New Zealand, echoes the need for organisational data layers to be updated and as accurate as possible to enable smarter asset management. "Without quality data, organisations will struggle to achieve key benefits as they move along the asset maturity curve," he says. "Data is the building block that asset management systems need and rely on to support the company's chosen asset management strategies. "The direction that IBM has taken is to embed AI capability within the Maximo Application Suite vs saying to organisations go and build your own AI capability. These AI enhancements are integrated AI experiences that deliver immediate value, increase productivity and seamlessly scale." Adding capability and building an AI-integrated business Looking further into an AI modernisation program, companies might look to select an AI partner to help them further investigate ways and means AI can be implemented, be it in a proof of concept, or solving discrete problems. 
Partnering with someone who specialises in AI can help companies build capability within the business from a cross-functional perspective around asset management. "This phase is around automation of things like non-value tasks, work order administration, and master data creation as examples," McGowan says. "There is a lot of work that goes into master data creation. So, following from a solution in the first phase that knows what good data looks like, you can take it to the next phase of being a master data specialist that understands how to take a piece of equipment and represent it digitally and accurately." When a company is running with AI, it is essentially changing its operating business model. With each new wave of technology, transformation initiatives should not be about solving technology problems, but enhancing the operational capability of the business. Of course, McGowan notes that business parameters will need to be changed to enable technology to solve business problems if the integration is significant. In the case of AI, this may look like redefining career paths, redefining the operating model and the job descriptions for individual components and then rolling out the purpose of AI. "I can almost see a place where you define a job description for the AI agent or component, and it performs that task, has regular performance reviews, learns from its mistakes, and is managed like anyone else in a business," he says. "I think that's the evolution: creating a space where AI has a role to play that is embedded in the organisation and the operating model, which measures and manages it like every other human resource within the business." Playing the long game With every new wave of technology across data, analytics, automation and AI, business leaders are often targeted and lobbied to act with urgency on a new solution. The warning to 'innovate or be left behind' is regularly bandied around these concepts.
While McGowan notes the AI opportunity in asset management represents considerable potential, he feels strongly that it must be driven by practical business needs, organisational fit and readiness, rather than hype. "The potential for AI particularly in the asset management space is almost endless, because largely the sector has not gone through as many digital revolutions as other industries and sectors have," he says. "There are opportunities to do things like provide real-time feedback around avoiding potential incidents, and the automation of non-value-adding tasks will also be significant. "But it's important that when we talk about driving efficiency with AI that we evolve our thinking in terms of what it means for careers and jobs and tasks, and working to make those AI-infused rather than AI-replaced. "We need to look at evolving the operating model to support the introduction of efficiency gains from AI and then driving your valued labour to much higher value tasks." This messaging will form part of the jumping-off point for COSOL's upcoming roundtable series in Australia. With 60 asset management leaders across Sydney, Melbourne and Brisbane coming together to discuss the pivotal role of AI in shaping the future of asset management in the coming weeks, valuable insights are guaranteed to emerge. Many of these insights will be shared in an upcoming report. To register for these learnings, click this link.
