

Can AI wrongly flag legitimate transactions as 'suspicious'? RBI flags key concerns in latest report

Mint

3 days ago



In its latest report, the Reserve Bank of India (RBI) has flagged several concerns about the impact of artificial intelligence on the world of finance. The FREE-AI Committee Report (Framework for Responsible and Ethical Enablement of Artificial Intelligence) notes that automation can amplify faults across high-volume transactions. For example, an AI-powered fraud detection system that incorrectly flags legitimate transactions as suspicious, or fails to detect actual fraud because of model drift, can cause financial losses and reputational damage.

Another risk the report highlights relates to credit scoring: a credit scoring model that depends on real-time data feeds could fail because of data corruption in upstream systems. Emphasising the importance of monitoring, the report warns that if monitoring is not done consistently, AI systems can degrade over time, delivering sub-optimal or inaccurate outcomes.

The Financial Stability Board (FSB) has also highlighted that artificial intelligence can reinforce existing vulnerabilities. One such concern is that AI models, learning from historical patterns, could reinforce market trends, thereby exacerbating boom-bust cycles. When multiple institutions use similar AI models or strategies, a herding effect could emerge, where synchronised behaviour magnifies market volatility and stress. Excessive dependence on AI for risk management and trading could expose institutions to model convergence risk, just as reliance on analogous algorithms could undermine market diversity and resilience. The opacity of AI systems could make it difficult to predict how shocks transmit through interconnected financial systems, especially in times of crisis.

AI deployments also blur the lines of responsibility between various stakeholders.
This difficulty in allocating liability can expose institutions to legal risk, regulatory sanctions, and reputational harm, particularly when AI-driven decisions affect customer rights, credit approvals, or investment outcomes. For example, if an AI model produces biased outcomes because of inadequately representative training data, questions may arise as to whether the responsibility lies with the deploying institution, the model developer, or the data provider.
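The report's point about inconsistent monitoring can be made concrete. The following is a minimal, hypothetical sketch (not from the RBI report) of one common monitoring approach: tracking a deployed fraud model's false-positive rate week by week, so that drift that starts flagging legitimate transactions as suspicious trips an alert instead of going unnoticed. The function names, threshold, and data are all illustrative assumptions.

```python
# Hypothetical monitoring sketch: watch a fraud model's false-positive rate (FPR)
# over weekly batches and alert when it drifts past a tolerance above baseline.

def false_positive_rate(flags, labels):
    """flags: model said 'suspicious'; labels: transaction was actually fraud.
    FPR = share of legitimate transactions wrongly flagged as suspicious."""
    legit_flags = [f for f, is_fraud in zip(flags, labels) if not is_fraud]
    return sum(legit_flags) / len(legit_flags) if legit_flags else 0.0

def drift_alert(weekly_batches, baseline_fpr, tolerance=0.05):
    """Return the index of the first week whose FPR exceeds
    baseline_fpr + tolerance, or None if no week breaches it."""
    for week, (flags, labels) in enumerate(weekly_batches):
        if false_positive_rate(flags, labels) > baseline_fpr + tolerance:
            return week
    return None

# Illustrative data: week 0 flags 1 of 10 legitimate transactions (FPR 0.10);
# week 1 has drifted and flags 4 of 10 (FPR 0.40).
batches = [
    ([True] + [False] * 9, [False] * 10),
    ([True] * 4 + [False] * 6, [False] * 10),
]
print(drift_alert(batches, baseline_fpr=0.10))  # -> 1 (week 1 breaches 0.15)
```

In production such checks would run on confirmed outcomes rather than toy lists, but the principle is the one the report describes: without a consistently applied check like this, degradation is only discovered after losses occur.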
