

Lack of Responsible AI Safeguards Coming Back To Bite, Survey Suggests

Forbes

2 days ago



Everyone is talking about 'responsible AI,' but few are doing anything about it. A new survey shows that most executives see the logic and benefits of pursuing a responsible AI approach, but little has been done to make it happen. As a result, many have already experienced issues such as privacy violations, systemic failures, inaccurate predictions, and ethical violations. While 78% of companies see responsible AI as a business growth driver, a meager 2% have adequate controls in place to safeguard against the reputational risk and financial loss stemming from AI, according to the survey of 1,500 AI decision-makers published by Infosys Knowledge Institute, the research arm of Infosys. The survey was conducted in March and April of this year.

So what exactly constitutes responsible AI? The survey report's authors outlined several elements essential to responsible AI, starting with explainability, 'a big part of gaining trust in AI systems.' Technically, explainability involves techniques to 'explain a single prediction by showing the features that mattered most for a specific result,' as well as counterfactual analysis, which 'identifies the smallest input changes needed to change a model outcome.' Another technique, chain-of-thought reasoning, 'breaks down tasks into intermediate reasoning stages, making the process transparent.'

Other processes essential to attaining responsible AI include continuous monitoring, anomaly detection, rigorous testing and validation, robust access controls, adherence to ethical guidelines, human oversight, and data quality and integrity measures. Most companies do not yet use these techniques, the survey's authors found. Only 4% have implemented at least five of the above measures, and 83% deliver responsible AI in a piecemeal manner. On average, executives believe they are underinvesting in responsible AI by at least 30%. There is an urgency to adopting more responsible AI measures.
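To make the first two explainability techniques concrete, here is a minimal sketch of per-prediction feature contributions and counterfactual analysis on a hypothetical linear loan-scoring model. The model, weights, feature names, and threshold are all invented for illustration and are not from the survey.

```python
# Hypothetical linear model: score >= 0 means "approve".
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2
THRESHOLD = 0.0

def score(applicant):
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Rank the features that mattered most for this specific result."""
    contribs = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

def counterfactual(applicant, step=0.05, max_steps=200):
    """Find the smallest single-feature change that flips the decision."""
    original = score(applicant) >= THRESHOLD
    best = None  # (feature, new_value, size_of_change)
    for f in WEIGHTS:
        for direction in (+1, -1):
            for n in range(1, max_steps + 1):
                changed = dict(applicant)
                changed[f] = applicant[f] + direction * n * step
                if (score(changed) >= THRESHOLD) != original:
                    delta = abs(changed[f] - applicant[f])
                    if best is None or delta < best[2]:
                        best = (f, changed[f], delta)
                    break
    return best

applicant = {"income": 0.6, "debt_ratio": 0.7, "years_employed": 2.0}
print(score(applicant))           # decision score for one applicant
print(explain(applicant))         # feature contributions, largest first
print(counterfactual(applicant))  # minimal input change that flips the outcome
```

Real systems would use library tooling (for example, SHAP values or model-specific explainers) rather than a brute-force search, but the logic is the same: attribute the score to individual inputs, then look for the nearest input that changes the outcome.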
Just about all the survey's respondents, 95%, report having had AI-related incidents in the past two years. At least 77% reported financial loss as a result of those incidents, and 53% suffered reputational impact. Three quarters cited damage that was at least 'substantial,' with 39% calling it 'severe' or 'extremely severe.' AI errors "can inflict damage faster and more widely than a simple database error or a rogue employee," the authors pointed out. Those leading the way with responsible AI have seen 39% lower financial losses and 18% lower average severity from their AI incidents.

The executives with more advanced responsible AI initiatives take measures such as developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and having a clear incident response plan, the survey report's authors stated.
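Proactively evaluating bias, one of the measures the leaders take, can start with something as simple as comparing outcome rates across groups. Here is a minimal sketch of one common metric, the demographic parity gap; the data and group labels are made up for illustration.

```python
def positive_rate(outcomes):
    """Fraction of positive (e.g. approved) outcomes in a list of 0/1 values."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-outcome rate between the groups present."""
    by_group = {}
    for pred, g in zip(predictions, groups):
        by_group.setdefault(g, []).append(pred)
    rates = [positive_rate(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5: group "a" approved at 75%, "b" at 25%
```

A large gap does not prove unfairness on its own, but it is exactly the kind of signal a monitoring pipeline would flag for human review, feeding the oversight and incident-response processes the report describes.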
