Latest news with #predictiveAI

Finextra
27-05-2025
- Business
- Finextra
AI adoption in financial services and fintech in 2025: By John Adam
A few weeks ago, I attended several events in London during UK Fintech Week. I listened to a lot of speakers and panels and spoke to a lot of people in the sector: at established financial services companies, scaleups, those still early in their product journeys and other ecosystem participants.

By far the most common use cases of predictive AI in financial services are KYC (Know Your Customer) and AML (anti-money laundering) compliance. KYC and AML flows contain a lot of rules-based repeat processes that require flawless accuracy in their execution. Rare mistakes and oversights led to fines totaling almost $24 billion in 2024; American companies alone were slapped with over $3 billion in fines. Automating these processes drastically reduces errors and oversights and requires a fraction of the resources of legacy KYC and AML processes.

In April 2024, Visa announced that its AI-powered fraud detection system helped prevent $41 billion in fraudulent transactions in a single year by analyzing customer behaviour patterns, geolocation and transaction velocity. Another compelling example of an automated AML feature is SEON's transaction monitoring and screening system, which reduced its clients' manual fraud reviews by 37%, or by over a third. Similarly, Revolut has released an AI-powered feature to protect customers against APP (Authorised Push Payment) scams, which often act as a precursor to money laundering. The feature uses an ML (machine learning) model to flag potential scams in real time by comparing irregularities against normal user transaction patterns, identifying suspicious user behaviour automatically and simplifying a task that is otherwise quite tedious.

A few other common use cases are:
- Extracting data from 10-Ks.
- Automating loan application processing and customer data cross-referencing.
- Identifying customer relationships to decrease the time spent on, and overall cost of, KYC reviews.

Financial services companies can also apply predictive AI to other low-complexity, high-impact processes that are currently resource intensive.

There are also opportunities to add extra value for customers with more adventurous features and gain a competitive edge. It is not easy standing out in an often-crowded sector dominated by financial giants, especially as a newcomer, and doing so requires a culture of innovation. Predictive AI offers fintech innovators the insights to better forecast how new features will be received, adopted or rejected by different segments of their audiences before spending money and time building them. It can also be applied to critique the ideation process and prototypes.

Predictive AI is useful at most steps of the feature ideation-to-development process. Use it, for instance, to validate ideas by projecting historical user behaviour patterns onto feature prototypes. Monzo, for example, uses ML models to identify patterns in user behaviour like login activity, onboarding flow interactions and when customers use certain features like making payments. When building a new feature, Monzo can now use this data to predict how users might interact with the proposed feature. The model could show whether a certain user profile would use or ignore the feature, whether it may increase engagement with core or prioritised services, fail to inspire an uptick in meaningful engagement or perhaps offer value to users in some unexpected way.

Predictive AI can also be used to assess the compliance implications of new products or features before they are fully developed and launched.
Upstart, a U.S.-based AI-powered consumer lending fintech, uses predictive AI tools to run pre-deployment simulations that gauge whether its loan platform is compliant with the ECOA (Equal Credit Opportunity Act), simulating variables like demographic groups to measure whether any are disproportionately declined. To make the model's findings transparent to regulators, Upstart uses an XAI (Explainable Artificial Intelligence) model to unpack its logic and show whether decisions meet regulatory standards. Many financial services providers, including Upstart, use proxy models to simulate potential biases. However, the CFPB (Consumer Financial Protection Bureau) has not set clear legal rules for proxy models, and that lack of clarity means using them entails a degree of compliance risk. Traditionally, U.K. regulators have focused more on general decision-making transparency without any particular focus on demographics. As a result, financial institutions usually put the emphasis on making outcomes explainable to the FCA (Financial Conduct Authority).

Predictive AI can also be used to predict churn and to prioritise the development of certain features over others. ML models trained on enough user data can identify potential markers of disengagement before human analysts are able to connect the dots. For instance, normally when David receives his paycheck he checks his account that same day and splits his salary up between accounts. But over the last four months, he has waited two to three days to take any action. ML models could keep track of which accounts are reacting to financial events with a delay, like David's, and take steps to re-engage him and others like him before churn happens, whether by building a new feature to meet their needs or by targeting them with a new campaign. (A rough sketch of this kind of signal appears at the end of this article.)

Blockers for adopting predictive AI

Predictive AI models estimate outcomes before companies allocate resources, helping to optimise R&D. But the quality of predicted outcomes relies on the fullness and quality of the data the AI models making them are trained on. If data is obsolete or incomplete, predictions will be less accurate. This is the Achilles' heel for many older and larger financial institutions: bringing together siloed data in multiple formats. Two solutions in this case are data integration techniques like data fabric architecture and AI-driven document understanding, which can bridge gaps between legacy systems and unify documentation with minimal manual intervention.

Younger financial services providers built as digitally native have an advantage when it comes to data accessibility, but they also have less data than older, more established rivals. If larger, mature financial institutions can put their deep reservoirs of historical data to use, they can give themselves an edge when developing AI models and tools whose performance correlates with data volume. Some of the digitally native new providers are now getting large enough to be taken seriously by the big beasts of the financial services sector, with Revolut the most obvious example.

The financial sector and predictive AI going forward

Digitalization has allowed companies in the financial sector to ramp up operations exponentially, but most have still been somewhat constrained by 'human factor' limitations like time zones and working hours. That is beginning to change. At the start of the year, Goldman Sachs introduced an internal AI assistant for its employees and BBVA released a customer-facing AI assistant.
Revolut announced it will release its AI-powered assistant later this year. The autonomous nature of AI agents and assistants will let companies increase productivity and go beyond what is financially viable with a legacy human workforce alone. Market research by McKinsey estimates that AI adoption will represent $1 trillion in value to the global banking sector annually through a combination of efficiency gains and new commercial opportunities.

As financial services companies build and refine their AI adoption processes and regulation is defined, use cases for AI will continue to expand past KYC and AML compliance automations and become commonplace in areas like new feature idea validation, prototype testing and churn prediction. With success stories like Visa avoiding $41 billion worth of fraudulent transactions, and more companies prioritizing AI assistants and agents, the stakes of AI adoption and integration are rising. Wins like Visa's come down to access to data and the quality of that data, and enterprises fall into one category and young fintechs another: one with decades' worth of data points split between various databases and systems, the other with sparser data but unified access. It will be interesting to see which turns out to be the more advantageous starting point for AI adoption: deep reservoirs of data and long-term experience, or digitally native systems and agile teams.
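Returning to the churn illustration above: a minimal, hypothetical sketch of the 'delayed reaction' signal might look like the following. The column names, thresholds and data layout are illustrative assumptions, not any provider's actual model.

```python
# Hypothetical sketch of the "delayed reaction" churn signal described above.
# Column names, thresholds and the data itself are illustrative assumptions.
import pandas as pd

def flag_delayed_reactors(events: pd.DataFrame,
                          baseline_months: int = 9,
                          recent_months: int = 3,
                          min_extra_delay_days: float = 1.5) -> pd.Series:
    """Flag customers whose reaction time to salary deposits has recently grown.

    `events` is assumed to have one row per salary deposit with columns:
    customer_id, deposit_date, first_action_date (both dates as datetime64).
    """
    events = events.copy()
    # Days between the salary arriving and the customer doing anything with it.
    events["delay_days"] = (events["first_action_date"]
                            - events["deposit_date"]).dt.days
    events["month"] = events["deposit_date"].dt.to_period("M")

    flags = {}
    for customer_id, grp in events.groupby("customer_id"):
        monthly = grp.groupby("month")["delay_days"].mean().sort_index()
        if len(monthly) < baseline_months + recent_months:
            continue  # not enough history to compare against a personal baseline
        baseline = monthly.iloc[:-recent_months].tail(baseline_months).mean()
        recent = monthly.iloc[-recent_months:].mean()
        # Flag customers whose recent delay is meaningfully above their own norm.
        flags[customer_id] = (recent - baseline) >= min_extra_delay_days
    return pd.Series(flags, name="possible_disengagement")
```

A production model would combine many such behavioural features, but even this single signal shows how an account like David's could surface for re-engagement months before it goes quiet.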


Forbes
27-05-2025
- Business
- Forbes
Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It
Most predictive AI projects neglect to estimate the potential profit – a practice known as ML valuation – and that spells project failure. Here's the how-to.

To be a business is to constantly work toward improved operations. As a business grows, this usually leads to the possibility of using predictive AI, which is the kind of analytics that improves existing, large-scale operations. But the mystique of predictive AI routinely kills its value. Rather than focusing on the concrete win that its deployment could deliver, leaders get distracted by the core tech's glamor. After all, learning from data to predict is sexy. This in turn leads to skipping a critical step: forecasting the operational improvement that predictive AI operationalization would deliver. As with any kind of change to large-scale operations, you can't move forward without a credible estimate of the business improvement you stand to gain – in straightforward terms like profit or other business KPIs. Not doing so makes deployment a shot in the dark. Indeed, most predictive AI launches are scrubbed.

So why do most predictive AI projects fail to estimate the business value, much to their own demise? Ultimately, this is not a technology fail – it's an organizational one, a glaring symptom of the biz/tech divide. Business stakeholders delegate almost every aspect of the project to data scientists. Meanwhile, data scientists as a species are mostly stuck on arcane technical metrics, with little attention to business metrics. The typical data scientist's training, practice, shop-talk and toolset omits business metrics. Technical metrics define their comfort zone.

Estimating the profit or other business upside of deploying predictive AI – aka ML valuation – is only a matter of arithmetic. It isn't the "rocket science" part, the ML algorithm that learns from data. Rather, it's the much-needed prelaunch stress-testing of the rocket.

Say you work at a bank processing 10 million credit card and ATM card transactions each quarter. With 3.5% of the transactions fraudulent, the pressure is on to predictively block those transactions most likely to fall into that category. With ML, your data scientists have developed a fraud-detection model that calculates a risk level for each transaction. Within the most risky 150,000 transactions – that is, the 1.5% of transactions that the model considers most likely to be fraud – 143,000 are fraudulent. The other 7,000 are legitimate.

So, should the bank block that group of high-risk transactions? Sounds reasonable off the cuff, but let's actually calculate the potential winnings. Suppose that those 143,000 fraudulent transactions represent $18,225,000 in charges – that is, they're about $127 each on average. That's a lot of fraud loss to be saved by blocking them. But what about the downside of blocking them? If it costs your bank an average of $75 each time you wrongly block due to cardholder inconvenience – which would be the case for each of the 7,000 legit transactions – that will come to $525,000. That barely dents the upside, with the net win coming to $17,700,000. So yeah, if you'd like to gain almost $18 million, then block those 1.5% most risky transactions.

This is the monetary savings of fraud detection, and a penny saved is a penny earned. But that doesn't necessarily mean that 1.5% is the best place to draw the line. How much more might we save by blocking even more?
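Here is that example restated as arithmetic, as a quick sanity check. All figures come from the text above; the variable names are mine.

```python
# The bank example above restated as arithmetic. All figures come from the article;
# nothing here is specific to any real fraud-detection system.
transactions_per_quarter = 10_000_000
blocked = int(transactions_per_quarter * 0.015)  # top 1.5% riskiest = 150,000
fraud_caught = 143_000                           # fraudulent transactions in that group
legit_blocked = blocked - fraud_caught           # 7,000 legitimate transactions

fraud_savings = 18_225_000                       # value of the fraud prevented (~$127 each)
cost_per_wrong_block = 75                        # assumed cost of inconveniencing a cardholder

net_win = fraud_savings - legit_blocked * cost_per_wrong_block
print(f"Wrong-block cost: ${legit_blocked * cost_per_wrong_block:,}")  # $525,000
print(f"Net quarterly win: ${net_win:,}")                              # $17,700,000
```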
The more we block, the more lower-risk transactions we block – and yet the net value might continue to increase if we go a ways further. Where to stop? The 2% most risky? The 2.5% most risky? To navigate the range of predictive AI deployment options, you've just got to look at it: a savings curve comparing the potential money saved by blocking the most risky payment card transactions with fraud-detection models, showing the performance of three competing models.

This shows the monetary win for a range of deployment options. The vertical axis represents the money saved with fraud detection – based on the same kind of calculations as those in the previous example – and the horizontal axis represents the portion of transactions blocked, from most risky (far left) to least risky (far right). This view has zoomed into the range from 0% to 15%, since a bank would normally block at most only the top, say, two or three percent. The three colors represent three competing ML models: two variations of XGBoost and one random forest (these are popular ML methods). The first XGBoost model is the best one overall. The savings are calculated over a real collection of e-commerce transactions, as were the previous example's calculations.

Let's jump to the curve's peak. We would maximize the expected win to more than $26 million by blocking the top 2.94% most risky transactions according to the first XGBoost model. But this deployment plan isn't a done deal yet – there are other, competing considerations.

First, consider how often transactions would be wrongly blocked. It turns out that blocking that 2.94% would inconvenience legit cardholders an estimated 72,000 times per quarter. That adverse effect is already baked into the expected $26 million estimate, but it could incur other intangible or longer-term costs; the business doesn't like it. But the relative flatness that you can see near the curve's peak signals an opportunity: If we block fewer transactions, we could greatly reduce the expected number wrongly blocked with only a small decrease in savings. For example, it turns out that blocking 2.33% rather than 2.94% cuts the number of estimated bad blocks in half to 35,000, while still capturing an expected $25 million in savings. The bank might be more comfortable with this plan.

As compelling as these estimated financial wins are, we must take steps to shore up their credibility, since they hinge on certain business assumptions. After all, the actual win of any operational improvement – whether driven by analytics or otherwise – is only certain after it's been achieved, in a "post mortem" analysis. Before deployment, we're challenged to estimate the expected value and to demonstrate its credibility.

One business assumption within the analysis described so far is that unblocked fraudulent transactions cost the bank the full magnitude of the transaction. A $100 fraudulent transaction costs $100 (while blocking it saves $100). And a $1,000 fraudulent transaction indeed costs ten times as much. But circumstances may not be that simple, and they may be subject to change. For example, certain enforcement efforts might serve to recoup some fraud losses by investigating fraudulent transactions even after they were permitted. Or the bank might hold insurance that covers some losses due to fraud. If there's uncertainty about exactly where this factor lands, we can address it by viewing how the overall savings would change if such a factor changed.
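As a rough illustration of how such a curve and its what-if variants can be computed, here is a minimal sketch. It assumes you already have historical transaction amounts, fraud labels and model risk scores; the function and parameter names are mine, not the article's.

```python
# A sketch of computing a savings curve over candidate decision boundaries.
# `risk_scores`, `amounts` and `is_fraud` are assumed historical arrays; the
# loss_fraction and wrong_block_cost parameters support the what-if analysis.
import numpy as np

def savings_curve(risk_scores, amounts, is_fraud,
                  block_rates=np.arange(0.001, 0.15, 0.001),
                  loss_fraction=1.0,       # share of a fraudulent amount the bank actually loses
                  wrong_block_cost=75.0):  # cost of inconveniencing a legitimate cardholder
    """Return (block_rate, estimated_savings) pairs, one per candidate boundary."""
    amounts = np.asarray(amounts, dtype=float)
    is_fraud = np.asarray(is_fraud, dtype=bool)
    order = np.argsort(-np.asarray(risk_scores))  # riskiest transactions first
    amounts, is_fraud = amounts[order], is_fraud[order]

    curve = []
    for rate in block_rates:
        n_blocked = int(len(amounts) * rate)
        blocked_amounts, blocked_fraud = amounts[:n_blocked], is_fraud[:n_blocked]
        fraud_saved = blocked_amounts[blocked_fraud].sum() * loss_fraction
        wrong_block_losses = (~blocked_fraud).sum() * wrong_block_cost
        curve.append((rate, fraud_saved - wrong_block_losses))
    return curve

# What-if analysis: rerun with loss_fraction=0.8 or wrong_block_cost=50.0 and
# see how the peak (the rate with the highest savings) shifts.
```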
Here's the curve when fraud costs the bank only 80% rather than 100% of each transaction amount: the same chart, except with each unblocked fraudulent transaction costing only 80% of the transaction amount rather than 100%. It turns out the peak decreases from $26 million down to $20 million. This is because there's less money to be saved by fraud detection when fraud itself is less costly. But the position of the peak has moved only a little: from 2.94% to 2.62%. In other words, not much doubt is cast upon where to draw the decision boundary.

Another business assumption we have in place is the cost of wrongly blocking, currently set at $75 – since an inconvenienced cardholder is likely to use their card less often (or cancel it entirely). The bank would like to decrease this cost, so it might consider taking measures accordingly. For example, it could consider providing a $10 "apology" gift card each time it realizes its mistake – an expensive endeavor, but one that might turn out to decrease the net cost of wrongly blocking from $75 down to $50. Here's how that would affect the savings curve: the same chart, except with each wrongly blocked transaction costing only $50 rather than $75. This increases the peak estimated savings to $28.6 million and moves that peak from 2.94% up to 3.47%. Again, we've gained valuable insight: This scenario would warrant a meaningful increase in how many transactions are blocked (drawing the decision boundary further to the right), but would only increase profit by $2.6 million. Considering that this guesstimated cost reduction is a pretty optimistic one, is it worth the expense, complexity and uncertainty of even testing this kind of "apology" campaign in the first place? Perhaps not.

For a predictive AI project to defy the odds and stand a chance at successful deployment, business-side stakeholders must be empowered to make an informed decision as to whether, which and how: whether the project is ready for deployment, which ML model to deploy and with what decision boundary (percent of cases to be treated versus not treated). They need to see the potential win in terms of business metrics like profit, savings or other KPIs, across a range of deployment options. And they must see how certain business factors that could be subject to change or uncertainty affect this range of options and their estimated value.

We have a name for this kind of interactive visualization: ML valuation. This practice is the main missing ingredient in how predictive AI projects are typically run. ML valuation stands to rectify today's dismal track record for predictive AI deployment, boosting the value captured by this technology up closer to its true potential. Given how frequently predictive AI fails to demonstrate a deployed ROI, the adoption of ML valuation is inevitable. In the meantime, it will be a true win for professionals and stakeholders to act early, get out ahead of it and differentiate themselves as value-focused practitioners of the art.


Forbes
15-05-2025
- Business
- Forbes
5 Ways To Hybridize Predictive AI And Generative AI
AI is in trouble. Both of its two main flavors, generative AI and predictive AI, face crippling limitations that compromise their ability to realize value. The solution? GenAI helps predictive AI and vice versa.

GenAI's problem is reliability. For example, while almost three-quarters of lawyers plan to use genAI for their work, their AI tools hallucinate at least one-sixth of the time. Predictive AI's problem is that it's hard to use. While it has enjoyed decades of success improving large-scale business operations, it still realizes only a fraction of its potential because its deployment demands that stakeholders hold a semi-technical understanding. These two flavors of AI – strictly speaking, two categories of use cases of machine learning – are positioned to solve one another's problems. Here are five ways they can work together.

Predictive AI has the potential to do what might otherwise be impossible: realize genAI's bold, ambitious promise of autonomy – or at least a great deal of that often overzealous promise. By predicting which cases require a human in the loop, predictive AI can give an otherwise unusable genAI system the trust needed to unleash it broadly. For example, consider a question-answering system based on genAI. Such systems can be quite reliable if only meant to answer questions pertaining to several pages' worth of knowledge, but performance comes into question for more ambitious, wider-scoped systems. Let's assume the system is 95% reliable, meaning users receive false or otherwise problematic information 5% of the time. Often, that's a deal-killer; it's not viable for deployment. The solution is predictive intervention. If predictive AI flags for human review the, say, 15% of cases most likely to be problematic, this might decrease the rate of problematic content reaching customers to an acceptable 1% (a rough sketch of this routing appears a little further below). For more information, see this Forbes article, where I cover this approach in greater detail.

The remaining four ways to hybridize predictive and generative AI each help in the opposite direction: genAI making predictive AI easier and more accessible. Anyone can use genAI, since it's trained to respond to human-language prompts, but predictive AI isn't readily accessible to business users in general. To use it, a business professional needs the assistance of data scientists as well as a semi-technical understanding of how ML models improve operations. Since this understanding is generally lacking, most predictive AI projects fail to deploy – even when there are data scientists on hand. An AI chatbot does the trick. With the right configuration, it puts into the hands of the business user a virtual, plain-spoken data scientist that helps guide the project and answers any question about predictive AI in general. It serves as an assistant and thought partner that elucidates, clarifies and suggests, answering endless questions (without the user ever fearing they're pestering, overtaxing or asking 'stupid questions'). For example, for a project targeting marketing with predictive AI, I asked a well-prompted chatbot (powered by Anthropic's Claude 3 Sonnet large language model) to explain the profit curve 'for a 10-year-old using a story.' It responded with a charming and easily understood description of the diminishing returns you face when marketing your lemonade stand. For more information, see this Forbes article, where I cover this use of a chatbot in greater detail.
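Here's the predictive-intervention idea above as a minimal sketch. Only the principle of routing the riskiest slice of genAI output to a human comes from the article; the function names, the scoring model and the cutoff mechanics are illustrative assumptions.

```python
# Illustrative sketch of predictive triage for genAI output: a predictive model
# scores each draft answer for the risk of being problematic, and the riskiest
# slice is routed to a human reviewer instead of going straight to the customer.
import numpy as np

def route_answers(draft_answers, risk_of_problem, review_fraction=0.15):
    """Split genAI draft answers into (auto_send, human_review) by predicted risk.

    `risk_of_problem` is an assumed scoring function: any predictive model that
    estimates the probability an answer is false or otherwise problematic.
    """
    risks = np.array([risk_of_problem(a) for a in draft_answers])
    cutoff = np.quantile(risks, 1.0 - review_fraction)  # boundary of the riskiest slice
    human_review = [a for a, r in zip(draft_answers, risks) if r >= cutoff]
    auto_send = [a for a, r in zip(draft_answers, risks) if r < cutoff]
    return auto_send, human_review
```

If the underlying system is wrong about 5% of the time and the risk model concentrates most of those failures in the reviewed slice, the error rate reaching customers can fall toward the acceptable 1% described above.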
Crazy story: although I've been a data scientist for more than 30 years, the thought leadership side of my career 'distracted' me from hands-on practice for so long that, until recently, I had never used scikit-learn, which has become the leading open source solution for machine learning. But now that we're in the genAI age, I found getting started extremely easy. I simply asked an LLM: 'Write Python code to use scikit-learn to split the data into a training set and test set, train a random forest model and then evaluate the model on the test set. For the training data, load a (local) file [...]. The dependent variable is called 'isFraud'. Include clear comments on every line. Make sure your code can be used within Jupyter notebooks and be sure to include any necessary 'import' lines.' It worked. Moreover, the code it generated served as a tutorial for various uses, without me needing to pore through any documentation about scikit-learn (boring!). (A rough sketch of the kind of code such a prompt yields appears at the end of this article.) For more information, this approach will be covered by a Machine Learning Week training workshop, 'Automating Building of Predictive Models: Predictive AI + Generative AI,' to be held on June 5, 2025.

Since LLMs are well suited to processing human language – the domain of natural language processing, also described as processing unstructured data – they may outperform standard machine learning methods for certain language-heavy tasks, such as detecting misinformation or detecting the sentiment of online reviews. To create a proof of concept, we tapped a Stanford project that tested various LLMs on various benchmarks, including one that gauges how often a model can establish whether a given statement is true or false. Under certain business assumptions, the resulting detection capabilities proved valuable, as I detailed in this Forbes article. More generally, rather than serving as a complete predictive model, an LLM may better serve as a way to perform feature engineering – turning unstructured data fields into features that can serve as input to a predictive model. For example, Dataiku does this, allowing the user (typically a data scientist) to select which LLM to use and what kind of task to perform, such as sentiment analysis. As another example, Clay derives new model inputs from across the web with an LLM. For decades, NLP has been applied to turn unstructured data into structured data that can then be used by standard machine learning methods. LLMs serve as a more advanced type of NLP for this purpose.

Even as LLMs have been making a splash, another incoming AI wave has been quietly emerging: large database models. LDMs complement LLMs by capitalizing on the world's other main data source: enterprise databases. Rather than tapping the great wealth of human writing such as books, documents and the web itself – as LLMs do – LDMs tap a company's tabular data. Swiss Mobiliar, Switzerland's oldest private insurance company, put LDMs to use to drive a predictive AI project. Their system tells sales staff the odds of closing a new client so that they can adjust their proposed insurance quotes accordingly. The deployed system delivered a substantial increase in sales. Swiss Mobiliar will present these results at Machine Learning Week 2025. For further detail, see also my Forbes article on large database models.
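For reference, the kind of code such a prompt produces looks roughly like this. It's a sketch rather than the actual output I received: the file name 'transactions.csv' is a placeholder for the local file mentioned in the prompt, while the 'isFraud' target column and the overall recipe (split, train a random forest, evaluate) come from the prompt itself.

```python
# Load the data and the scikit-learn pieces needed for the task.
import pandas as pd                                                 # data loading
from sklearn.model_selection import train_test_split               # train/test split
from sklearn.ensemble import RandomForestClassifier                # random forest model
from sklearn.metrics import accuracy_score, classification_report  # evaluation metrics

# Load the (local) file; 'transactions.csv' is a stand-in for the real path.
data = pd.read_csv("transactions.csv")

# Separate the features from the dependent variable 'isFraud'.
X = data.drop(columns=["isFraud"])
y = data["isFraud"]

# Split the data into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a random forest model on the training set.
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate the model on the test set.
predictions = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
```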
Predictive AI and genAI need one another. Marrying the two will solve their respective problems, expand the ecosystem of tools and approaches available to AI practitioners and reunite what is now a siloed field into something more cohesive. But perhaps most important of all, these hybrid approaches will place AI value above AI hype by turning the focus to project outcomes rather than treating any one technical approach as a panacea.

In a few weeks, I'll deliver a keynote address on this topic, 'Five Ways to Hybridize Predictive and Generative AI,' at Machine Learning Week, June 2-5, 2025 in Phoenix, AZ. Beyond my keynote, the conference will also feature an entire track of sessions covering how organizations are applying such hybrid approaches. You can also view the archive of a presentation on this topic that I gave at this online event.


Free Malaysia Today
15-05-2025
- Business
- Free Malaysia Today
AI could identify biases of judges in future, says lawyer
Constitutional lawyer GK Ganesan pointed to the Huckabee v Bloomberg case in 2024, in which the use of predictive AI secured the dismissal of major claims.

KUALA LUMPUR: Constitutional lawyer GK Ganesan suggests artificial intelligence, or AI, could be used to strategise legal arguments based on the biases of individual judges, citing a copyright lawsuit in the US.

'The Huckabee v Bloomberg litigation case was a popular lawsuit as the defence team utilised predictive AI to analyse the judge's history of scepticism.

'Judge McMahon would always dismiss complaints of digital copyright infringements. So, instead the defence argued about statutory interpretation, a subject she liked very much.

'That strategy secured the dismissal of major claims in November 2024,' he said at Marsden's Supreme Today AI launch event held at the AIAC Auditorium, Bangunan Sulaiman, here.

The Supreme Today AI model offers a variety of services, including an advanced database of legal documents and a precedent map for case law analysis, featuring detailed information on prior judgments rendered by local courts.

However, Federal Court Justice Nallini Pathmanathan cautioned against the undeclared usage of AI in the courtroom. She emphasised that all legal practitioners, whether judges or counsel, should disclose their usage of AI in court to uphold integrity.

'I think it's particularly serious for judges, because if any sort of cut and paste happened, it would be a disciplinary issue. It's about ethics,' she said.

Nallini said the same would naturally also apply to lawyers, who 'already owe a huge duty of disclosure'.

The Thomson Reuters Foundation recently conducted a survey showing approximately 26% of lawyers acknowledge actively using generative AI in their work.

While High Court Justice Atan Mustaffa Yussof Ahmad acknowledged that lawyers may use whatever tools are at their disposal, he highlighted their professional responsibility to apply their minds to the questions before them.

'AI may assist but your ethical duties cannot be outsourced to an algorithm. Be mindful of the specific risks that AI tools present,' he said.

He said these risks include AI hallucinations, where falsehoods are presented as fact, and client confidentiality.

'Your ultimate professional responsibility is to focus on developing uniquely human skills, that is, empathy, ethical judgment, creative problem solving and strategic thinking.

'You bear full responsibility for the content and advice rendered to your client,' he said.