EY & ACCA urge trustworthy AI with robust assessment frameworks
EY and the Association of Chartered Certified Accountants (ACCA) have released a joint policy paper offering practical guidance aimed at strengthening confidence in artificial intelligence (AI) systems through effective assessments.
The report, titled "AI Assessments: Enhancing Confidence in AI", examines the expanding field of AI assessments and their role in helping organisations ensure their AI technologies are well governed, compliant, and reliable. The paper is positioned as a resource for business leaders and policymakers amid rapid AI adoption across global industries.
Boosting trust in AI
According to the paper, comprehensive AI assessments address a pressing challenge for organisations: boosting trust in AI deployments. The report outlines how governance, conformity, and performance assessments can help businesses ensure their AI systems perform as intended, meet legal and ethical standards, and align with organisational objectives.
The guidance comes as recent research highlights an ongoing trust gap in AI. The EY Response AI Pulse survey found that 58% of consumers are concerned that companies are not holding themselves accountable for potential negative uses of the technology, underscoring the need for greater transparency and assurance around AI applications.
Marie-Laure Delarue, EY's Global Vice-Chair, Assurance, underscored the significance of the current moment for AI: "AI has been advancing faster than many of us could have imagined, and it now faces an inflection point, presenting incredible opportunities as well as complexities and risks. It is hard to overstate the importance of ensuring safe and effective adoption of AI. Rigorous assessments are an important tool to help build confidence in the technology, and confidence is the key to unlocking AI's full potential as a driver of growth and prosperity."
She continued, "As businesses navigate the complexities of AI deployment, they are asking fundamental questions about the meaning and impact of their AI initiatives. This reflects a growing demand for trust services that align with EY's existing capabilities in assessments, readiness evaluations, and compliance."
Types of assessments
The report categorises AI assessments into three main areas: governance assessments, which evaluate the internal governance structures around AI; conformity assessments, determining compliance with laws, regulations and standards; and performance assessments, which measure AI systems against specific quality and performance metrics.
The paper provides recommendations for businesses and policymakers alike. It calls for business leaders to consider both mandatory and voluntary AI assessments as part of their corporate governance and risk management frameworks. For policymakers, it advocates for clear definitions of assessment purposes, methodologies, and criteria, as well as support for internationally compatible assessment standards and market capacity-building.
Public interest and skills gap
Helen Brand, Chief Executive of ACCA, commented on the wider societal significance of trustworthy AI systems. "As AI scales across the economy, the ability to trust the technology is vital for the public interest. This is an area where we need to bridge skills gaps and build trust in the AI ecosystem as part of driving sustainable business. We look forward to collaborating with policymakers and others in this fascinating and important area."
The ACCA and EY guidance addresses several challenges to the robustness and reliability of current AI assessments, noting that well-specified objectives, clear assessment criteria, and professional, objective assessment providers are essential for meaningful scrutiny of AI systems.
Policy landscape
The publication coincides with ongoing changes in the policy environment on AI evaluation. The report references recent developments such as the AI Action Plan released by the Trump administration, which highlighted the importance of rigorous evaluations for defining and measuring AI reliability and performance, particularly in regulated sectors.
As AI technologies continue to proliferate across industries, the report argues that meaningful and standardised assessments could support the broader goal of safe and responsible AI adoption in both the private and public sectors.
In outlining a potential way forward, the authors suggest both businesses and governments have roles to play in developing robust assessment frameworks that secure public confidence and deliver on the promise of emerging technologies.