

What is AI bias?

Finextra

03-06-2025


This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.

The term 'AI bias' refers to situations in which an artificial intelligence (AI) system produces prejudiced results as a consequence of flaws in its machine learning process. Often, AI bias mirrors society's inequalities, be they around race, gender, class, nationality, and so on. In this instalment of Finextra's Explainer series, we ask where bias in AI originates, what the consequences of feeding models skewed data are, and how the risks can be mitigated.

The sources of bias

AI bias can develop in three key areas, namely:

1. The data fed to the model. If the data used to train an AI is not representative of the real world, or contains existing societal biases, the model will ingrain these prejudices and perpetuate them in its decisioning.

2. The algorithms. The very design of an AI algorithm can also introduce bias and misrepresentation. Some algorithms may overstate or understate patterns in the data, resulting in skewed predictions.

3. The programmed objectives. The goals that an AI system is programmed to achieve can also be biased. If objectives are not designed with fairness and equity in mind, the engine may discriminate against certain groups.

Loan applications: A case study

So, what relevance does AI bias have to the financial services industry? Tools powered by AI are currently being rolled out by financial institutions (FIs) across the globe. Indeed, AI is being deployed to automate operations; detect fraud; manage economic risk; trade algorithmically; design personalised products; support data analytics and reporting; and even improve customer services. This technology is embedding itself in bank operations, and our dependency on it will only become deeper. It is vital for the integrity of our financial systems, therefore, that AI bias is identified, understood, and controlled.
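The first source of bias above, unrepresentative training data, can be surfaced with a simple representativeness check before a model is ever trained. The sketch below is purely illustrative: the group labels, population shares, and 5% tolerance are hypothetical assumptions, not figures from any real lending dataset.

```python
# Hypothetical sketch: flagging groups that are underrepresented in
# training data relative to a reference population. All numbers are toy
# assumptions for illustration only.
from collections import Counter

def representation_gaps(training_groups, population_shares, tolerance=0.05):
    """Return groups whose share of the training data falls short of
    their share of the reference population by more than `tolerance`."""
    counts = Counter(training_groups)
    total = len(training_groups)
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = counts.get(group, 0) / total
        if pop_share - train_share > tolerance:
            gaps[group] = round(pop_share - train_share, 3)
    return gaps

# Toy data: group "B" makes up 30% of the population but only 10% of
# the training set, so it is flagged as underrepresented.
training = ["A"] * 90 + ["B"] * 10
population = {"A": 0.7, "B": 0.3}
print(representation_gaps(training, population))  # {'B': 0.2}
```

In practice, banks would run checks of this kind across many more attributes and against carefully chosen reference statistics; the point here is only that representativeness can be measured, not assumed.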
For example, in the world of loan applications, AI-powered approval systems are increasingly being leveraged to streamline banks' back-office processes. In some cases, loans have been denied to individuals from certain socioeconomic backgrounds as a result of bias baked into AI models' data or algorithms. In 2021, an investigation by The Markup found lenders were more likely to deny mortgages to people of colour than to white people with similar financial characteristics. Under AI-powered mortgage approval systems, Black applicants were 80% more likely to be rejected than comparable white applicants, Latino applicants 40% more likely, and Native American applicants 70% more likely.

Unchecked AI bias and its consequences

Failing to address AI bias is not just discriminatory towards the end-users of financial services. It can also expose institutions to legal liabilities, reputational damage, financial and operational risks, and regulatory non-compliance. The European Union (EU)'s AI Act compels providers of AI systems to ensure their training, validation, and testing datasets undergo appropriate examination for biases, with correction measures applied. Failure to meet this instruction could trigger penalties and fines.

Operational and ethical issues aside, allowing bias in AI to fester is simply bad for business. Bias can reduce the overall accuracy and effectiveness of AI tools, hindering their potential to deliver the outcomes they were designed for.

Mitigating the risks

The fuel of today's industrial revolution, also known as Industry 4.0, is data, and the locomotive it powers is the intelligent system. If the fuel is not refined (and bias-free), it will damage both AI's ability to run efficiently and consumers' trust in the technology itself. Though bias in datasets may never be entirely erased, it is incumbent on AI providers, and on the institutions that deploy their systems, to mitigate the risks. Banks must strive for data diversity, to ensure training data is representative.
Algorithm design must be reviewed to guarantee processes are fair and equitable, and all AI systems should be made transparent and explainable, so that any flaws can be ironed out. Banks' efforts are supported by bias detection and mitigation tools, which help to flag and remedy cases of AI bias as and when they appear.
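One widely used bias-detection check of the kind mentioned above is the disparate impact ratio: the approval rate for a protected group divided by the approval rate for a reference group. The sketch below is a minimal illustration with made-up loan decisions; the 0.8 threshold follows the US "four-fifths rule" convention, and nothing here reflects any real lender's data.

```python
# Illustrative sketch of a disparate impact check on loan approvals.
# Decisions and group labels below are toy data, not real outcomes.

def disparate_impact(decisions, groups, protected, reference):
    """decisions: parallel list of booleans (True = approved);
    groups: parallel list of group labels.
    Returns protected-group approval rate / reference-group approval rate."""
    def approval_rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return approval_rate(protected) / approval_rate(reference)

decisions = [True, True, False, False, True, True, True, False]
groups    = ["B",  "B",  "B",   "B",   "A",  "A",  "A",  "A"]

ratio = disparate_impact(decisions, groups, protected="B", reference="A")
print(f"{ratio:.2f}")  # 0.67, below the 0.8 threshold, so potential bias is flagged
```

A ratio below 0.8 does not prove discrimination on its own, but it is the kind of automated signal that prompts a closer review of the data and the algorithm behind the decisions.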
