Forbes

12 hours ago


How Leaders Can Choose The Right AI Auditing Services

Now that the 'big four' accounting firms (Deloitte, PwC, Ernst & Young, and KPMG) are beginning to offer AI audit services, what do leaders need to know about choosing the right AI audit services and about responsible AI (RAI)?

The first step is understanding the key vulnerabilities that come with implementing AI systems, and how to mitigate those risks. It is important to understand the unintended consequences of black box AI systems and of a lack of transparency in their deployment. In consumer-facing industries, deploying black box AI systems without due attention to how they are trained, and on what data, can result in harm to consumers, such as price discrimination or quality discrimination. Disparate impact laws allow individuals to sue for such unintentional discrimination.

Next, leaders need to understand frameworks for managing such risks. The National Institute of Standards and Technology offers an AI Risk Management Framework, which outlines a comprehensive set of risks. Management frameworks help leaders better manage the risks AI poses to individuals, organizations, and society, much like standards in other industries that mandate transparency.

When rightly used, AI audits can be effective in examining whether an AI system is lawful, ethical, and technically robust. However, there are vast gaps in how companies understand these principles and integrate them into their organizational goals and values. A 2022 study by the Boston Consulting Group and the MIT Sloan Management Review found that RAI programs typically neglect three dimensions (fairness and equity, social and environmental impact mitigation, and human plus AI) because they are difficult to address. Responsible AI principles cannot exist in a vacuum; they need to be tied to a company's broader goals for being a responsible business. For example, is top management intentionally connecting RAI with its governance, methods, and processes?

Have Clear Goals For AI Audits

Standard frameworks used in the procurement of technology typically focus on performance, cost, and quality considerations. Evaluating AI tools, however, also requires values such as equity, fairness, and transparency. Leaders need to weigh values such as trustworthiness and alignment with the organizational mission, human-AI teaming, and explainability and interpretability when deploying AI.

A study by researchers Yueqi Li and Sanjay Goel found significant knowledge gaps around AI audits. These gaps stem from immature AI governance implementation and insufficient operationalization of AI governance processes. A cohesive approach to AI audits requires a foundation of ethical principles integrated into AI governance.

To take one example, a financial institution could explicitly mandate fairness as a criterion in its AI-enabled decision-making models. For that we would first need a clear and consistent criterion of fairness, one that can be supported by the principles of law and by a settled body of trade and commerce practice. Second, we need clear standards that can establish whether norms of fairness are violated, which could be used as a stress test to determine whether AI-based models are indeed fair.
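To make that concrete, here is a minimal sketch of such a check (an illustration, not a method the article prescribes): given a model's automated decisions and a group label, compare favorable-outcome rates across groups. The synthetic data, the fairness_report helper, and the 0.8 threshold echoing the "four-fifths rule" used in disparate impact analysis are all assumptions for illustration.

```python
# Minimal fairness check on automated decisions (illustrative sketch).
# Assumes decisions are coded 1 = favorable outcome, 0 = unfavorable.
import numpy as np

def fairness_report(y_pred, group):
    """Compare favorable-outcome (selection) rates across groups."""
    rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
    hi, lo = max(rates.values()), min(rates.values())
    return {
        "selection_rates": rates,
        "demographic_parity_diff": hi - lo,  # 0.0 means equal rates
        "disparate_impact_ratio": lo / hi,   # < 0.8 flags the four-fifths rule
    }

# Hypothetical audit of 10,000 decisions from a black box model.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000)
y_pred = (rng.random(10_000) < np.where(group == "A", 0.30, 0.22)).astype(int)
print(fairness_report(y_pred, group))
```

A pre-agreed standard of this kind is what gives the audit teeth: the threshold is fixed in advance, and any model that falls below it fails the test regardless of its accuracy.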
Auditing the predictions of automated business decisions against fairness criteria allows companies to establish whether their policies disadvantage some groups more than others. If a bank is interested in predicting whom to lend to, adding fairness as a criterion does not mean that the bank would have to stop screening borrowers altogether. It would require the bank to avoid metrics that place a more stringent burden on some groups of borrowers (holding different groups of people to different standards).

Algorithmic stress tests before deploying black box AI models allow us to visualize different scenarios. This not only helps in establishing the goals of the fairness audit; it also allows decision makers to specify different performance criteria, both from a technical perspective and in terms of business objectives. Such stress tests allow vendors to quantify legal and operational constraints on the business, the history of practices in the industry, and policies to protect confidential data, to name a few. Companies such as Microsoft and Google have used AI 'red teams' to stress test their AI systems.
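As an illustration of what such a stress test might look like in practice (a sketch under assumed data, scenarios, and thresholds, not how any particular vendor implements it), one pattern is to freeze the model, re-run it under perturbed scenarios such as an income shock, and check a pre-agreed per-group error criterion:

```python
# Sketch of a pre-deployment stress test for a lending model (illustrative).
# predict_fn, the scenarios, and the 5-point gap threshold are assumptions.
import numpy as np

def stress_test(predict_fn, X, y, group, scenarios, max_fnr_gap=0.05):
    """Check, per scenario, that false-negative rates stay close across groups."""
    results = {}
    for name, perturb in scenarios.items():
        y_hat = predict_fn(perturb(X))
        # False-negative rate per group among truly creditworthy applicants:
        fnr = {g: float(np.mean(y_hat[(group == g) & (y == 1)] == 0))
               for g in np.unique(group)}
        gap = max(fnr.values()) - min(fnr.values())
        results[name] = {"fnr_by_group": fnr, "gap": gap,
                         "pass": gap <= max_fnr_gap}
    return results

# Hypothetical applicants: column 0 plays the role of income.
rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 3))
y = (X[:, 0] + rng.normal(size=5_000) > 0).astype(int)  # creditworthiness
group = rng.choice(["A", "B"], size=5_000)
predict_fn = lambda data: (data[:, 0] > 0).astype(int)  # stand-in model
scenarios = {
    "baseline":     lambda data: data,
    "income_shock": lambda data: data * np.array([0.9, 1.0, 1.0]),
}
print(stress_test(predict_fn, X, y, group, scenarios))
```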
Cross-Functional Leadership Can Leverage AI

The above-mentioned BCG/SMR survey identified a key role for leaders: most organizations in the leading stage of RAI maturity have both an individual and a committee guiding their RAI strategy. Increasingly, practitioners are also calling for institutional review boards (IRBs) for the use of AI. Low-frequency but high-business-impact decisions, such as the choice of credit rating models, need a systematic process to build consensus. An RAI champion, working with a cross-departmental team, could be entrusted with such a responsibility. The institutional review board needs to map algorithmic harms into the organization's risk framework. Smaller organizations can rely on best-practice checklists developed by auditing bodies and industry standards organizations.

Recognizing when a human decision maker is needed and when automated decisions can be employed will be increasingly important as we learn to navigate the algorithmic era. It is equally important to understand how business processes demarcate the boundaries between the judgment exercised by a human actor and what is automated. The IRB can consider questions such as who should set these boundaries: is it the responsibility of division heads or of mid-level managers? The AI ethics team and the legal team need to consider the policy implications of such boundaries and the legal implications of such a demarcation.

Foundation For AI Audits

Three key aspects need to be understood before leaders embark on AI audits:

  • Define goals: Understand that an AI audit is not about the technology itself, but about how AI is intertwined with organizational values.
  • Establish AI governance: Before undertaking AI audits, a comprehensive AI governance framework needs to be in place.
  • Establish cross-functional teams: Algorithmic risks need to be understood in the context of the organization's own risk profile. Cross-functional teams are key to building this understanding.

AI is increasingly intertwined with almost every aspect of business. Leaders should be cognizant of the algorithmic harms from the lack of transparency and oversight in AI, alongside the considerable benefits of digital transformation. Establishing the right governance frameworks and auditing AI will ensure transparency in AI model development, deployment, and use.
