
Latest news with #predictiveAI

Why AI Knows You're Shopping — Before You Even Do

Entrepreneur

28-07-2025

  • Business
  • Entrepreneur

Why AI Knows You're Shopping — Before You Even Do

In 2025, predictive AI is changing search by anticipating user intent before queries are made, transforming how brands engage and convert in a privacy-conscious digital landscape. Opinions expressed by Entrepreneur contributors are their own.

The rules of digital engagement are changing rapidly, thanks to the rise of artificial intelligence and everything it brings to the table. One of the biggest shifts we're seeing in 2025 is happening in the way we search. In the past, search was all about keywords — you typed in what you needed, whether it was a product, service or piece of information. Now, search is evolving into something smarter, something that can anticipate what you're looking for before you even start typing. This shift toward predictive search capabilities is not just a technological leap; it's a seismic change in how businesses connect with intent, personalize experiences and drive conversions. For digital marketers, product teams and CX leaders, understanding the mechanics and applications of predictive AI in search is no longer optional; it is part and parcel of success.

The evolution from keyword to intent

Search used to be reactive: a person had a need, typed it into a search engine and looked for an answer. Brands, in turn, optimised for what people were searching for, using keywords, trends, SEO tactics and other methods to be ranked by search engines and found by people. But this approach responded rather than anticipated, and it required users and consumers to make the first move. In 2025, predictive AI is flipping the script. Instead of waiting for consumers to express intent, platforms are now learning to recognise patterns, analyze behaviors and predict probable actions. That means consumers are seeing content, products or answers they were about to search for, sometimes even before realising they needed them. This shift is part of a broader movement toward proactive digital experiences, powered by big data, machine learning and hyper-personalisation. That isn't to say that search is dead, but it is becoming increasingly invisible, ambient and eerily prescient.

How predictive AI understands intent

At the heart of predictive search is an algorithmic cocktail: machine learning, natural language processing, deep behavioral analytics and vast datasets pulled from across channels — web activity, location data, app usage, purchase history and even social media sentiment. AI models today can map micro-behaviors like scroll speed, dwell time or mouse hover to determine intent. How long you spend on a website or watching a TikTok video feeds into the content shown to you across the board. Whether you are logging onto a shopping platform or a social media platform, your behaviors carry forward, surfacing similar things you might be interested in. For example, if a user browses organic skincare on Instagram, likes a product review and then opens a wellness app later in the day, an AI-driven search platform could predict that they're likely to seek "best clean moisturisers for sensitive skin" later that evening — and serve that result proactively, even before the user searches.
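To make the idea concrete, here is a minimal, hypothetical sketch of how micro-behaviour signals such as dwell time, scroll depth and hover counts might feed an intent model. The features, synthetic data and model choice are illustrative assumptions, not a description of any platform's actual system.

```python
# Illustrative sketch only: scoring purchase/search intent from behavioural
# micro-signals. The features, synthetic data and model are assumptions,
# not any real platform's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

dwell_seconds = rng.gamma(2.0, 30.0, n)        # time spent on the page
scroll_depth = rng.uniform(0.0, 1.0, n)        # fraction of the page scrolled
product_hovers = rng.poisson(2.0, n)           # hovers over product tiles
recent_sessions = rng.poisson(3.0, n)          # visits in the last 7 days

X = np.column_stack([dwell_seconds, scroll_depth, product_hovers, recent_sessions])

# Synthetic label: "searched or bought within 24h", loosely tied to the signals.
logit = 0.01 * dwell_seconds + 1.5 * scroll_depth + 0.3 * product_hovers - 2.5
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Score a new session: a long, deep, hover-heavy visit by a frequent user.
new_session = [[240.0, 0.9, 6, 5]]
print("Estimated intent probability:", model.predict_proba(new_session)[0, 1])
```

In production such a model would be trained on logged outcomes and judged against business metrics, but the shape of the problem is the same: behavioural signals in, an intent probability out.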
Google, Microsoft and the race for predictive dominance

The tech giants are locked in a quiet arms race to own the predictive future. Google's Search Generative Experience — now fully mainstream in 2025 — uses AI to blend traditional search with contextual understanding, generating summaries and proactive suggestions based on intent, not just input. Microsoft's integration of Copilot into Bing and Microsoft 365 has also led to smarter enterprise search: employees no longer have to look up files or protocols; they're suggested in the workflow before the query forms. Both platforms are investing heavily in LLMs (large language models) fine-tuned for intent prediction, not just language generation. The goal: remove friction and surface what users need before they ask for it.

What this means for brands in 2025

For brands, this is a goldmine of opportunity — but only if they're prepared. Predictive AI doesn't just change how users search; it changes how businesses must structure, tag and deploy their digital content. Here's how brands are responding:

1. Creating content for "pre-intent" moments. Instead of focusing solely on transactional keywords ("buy running shoes"), forward-thinking marketers are now creating content for precursor behaviors. Consuming information like "How to avoid knee pain when jogging" or "Signs your shoes need replacing" signals AI algorithms to show you shoes that protect your knees. It's about mapping the customer journey upstream, anticipating the questions that come before the conversion, and positioning your brand as the default source before the user is even aware of their need.

2. Structured data and AI-friendly taxonomy. To appear in predictive search, content must be easy for machines to read and index. Brands are investing in structured data, semantic markup and content taxonomies designed for AI interpretation. This helps AI systems link product attributes, FAQs and guides to broader intent signals. So the next time you search for "how to pet-proof a rental apartment", you'll likely get ads for products tagged with things like "pet-proof" or "small-space friendly", along with other pet-related products and furniture that are non-destructive and ideal for rental spaces.

3. Integrating first-party data with predictive engines. Brands with strong CRM and loyalty ecosystems are integrating first-party data with predictive platforms. This includes purchase cycles, user preferences and engagement history. When done ethically and securely, this allows companies to anticipate individual needs with astonishing precision. A beauty brand, for instance, might know that a customer repurchases foundation every six weeks. In week five, a push notification appears: "Running low? Your shade is in stock — and 10% off today."
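To illustrate the replenishment idea, here is a minimal, hypothetical sketch of estimating a customer's repurchase cycle from order history and deciding when to send a reminder. The data, the median-gap heuristic and the seven-day lead time are assumptions for illustration, not any brand's actual system.

```python
# Illustrative sketch: estimate a customer's repurchase cycle from order
# history and trigger a reminder shortly before the next expected purchase.
# The data, thresholds and the reminder rule are invented examples.
from datetime import date, timedelta

def median_gap_days(order_dates: list[date]) -> float:
    """Median number of days between consecutive orders."""
    gaps = sorted((b - a).days for a, b in zip(order_dates, order_dates[1:]))
    mid = len(gaps) // 2
    return gaps[mid] if len(gaps) % 2 else (gaps[mid - 1] + gaps[mid]) / 2

def should_remind(order_dates: list[date], today: date, lead_days: int = 7) -> bool:
    """Remind when we are within `lead_days` of the predicted repurchase date."""
    if len(order_dates) < 3:          # not enough history to trust a cycle
        return False
    cycle = median_gap_days(order_dates)
    next_expected = order_dates[-1] + timedelta(days=round(cycle))
    return timedelta(0) <= next_expected - today <= timedelta(days=lead_days)

# Example: foundation bought roughly every six weeks; it's now week five.
orders = [date(2025, 1, 6), date(2025, 2, 17), date(2025, 3, 31), date(2025, 5, 12)]
if should_remind(orders, today=date(2025, 6, 16)):
    print("Running low? Your shade is in stock - and 10% off today.")
```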
The privacy-intent tradeoff: A delicate balance

One of the biggest debates in 2025 is where the line lies between convenience and intrusion. Predictive AI walks a fine line between helpfulness and creepiness. Consumers are growing more aware of how their data is used — and more selective about who gets access to it. This has led to a renewed focus on consent-based tracking, zero-party data and transparency. Companies that overstep with overly personal or mistimed suggestions risk backlash and lost trust. The key is relevance without overreach: predictive search must feel like intuition, not surveillance.

For one consumer, a "rain expected this weekend – here are your most-viewed waterproof boots at 15% off" nudge might signal convenience; for another, it might feel like tech encroaching on their privacy. AI models, though, can learn consumer behaviors and tailor the approach to each person. For the more privacy-wary consumer, AI models might subtly provide ads targeted at their subconscious needs or desires rather than their current situation. For example, drawing on stress indicators or mood predictors, AI models may surface weekend getaway ideas along with current deals and promos. This not only offers what the stressed user might need, but it also doesn't feel too hard-sell, which can be a turn-off for some.

What marketers need to do now

As predictive AI reshapes search, here's how marketers can future-proof their strategy:

  • Invest in clean, structured data: Make sure your product and content assets are indexed in machine-readable ways.
  • Map out intent journeys: Don't just optimise for conversion — optimise for the moments that lead to it.
  • Collaborate with AI teams: Work closely with data scientists to align content production with AI discovery.
  • Respect privacy and trust: Make sure predictive suggestions feel empowering, not invasive.
  • Test, learn, iterate: Predictive tools will improve rapidly — brands that experiment early will gain a lasting edge.

We're entering an era where search is no longer a conscious act but a seamless service. Predictive AI in 2025 is transforming how intent is understood, how brands are discovered and how decisions are made. It rewards those who can think ahead about their customers, their data and their digital footprint. For businesses willing to embrace this shift, the payoff is enormous: smoother journeys, higher engagement and deeper loyalty. Because in the end, the smartest brands won't wait for their customers to ask — they'll already be there with the answer.

5 Things Businesses Need To Know Before Automating Work With AI Agents

Forbes

07-07-2025

  • Business
  • Forbes

5 Things Businesses Need To Know Before Automating Work With AI Agents

Harvey Hu is Founder & CTO at General Agency, building state-of-the-art AI agents to automate complex web workflows.

Duolingo's CEO recently emailed employees announcing the company will go 'AI First.' This means replacing most contractor work with AI and requiring every team to try automation before adding headcount. A few days later, Shopify's CEO told staff that using AI is now a 'fundamental expectation' for every role. They are not alone: McKinsey's latest global survey finds 78% of companies already apply AI in business functions. In 2025, effectively automating your work often means going beyond ChatGPT and using agentic AI that closely integrates with your day-to-day workflow. As a business leader navigating the many AI tools on the market today, you should weigh five important considerations before deciding if and how your organization should join this 'AI-First' wave.

1. Predictive Vs. Generative AI—Understanding What's New About This Wave Of AI

The first order of business is understanding what's different about this wave of AI and why it may influence your business more than past ones. Prior to 2023, artificial intelligence in industry was mostly predictive. Predictive AI typically uses historical data to forecast a single target. Often, this is a user behavior metric that the business cares about, such as Click-Through Rate (CTR) in advertising or the probability that a user finishes watching a video on social media. In the past 10 years (since landmark publications like Deep & Cross Networks and TensorFlow by Google), big tech companies have leveraged these models to earn billions of dollars in revenue. Though powerful, these models have narrow applications: a single model designed and trained specifically for Google ad prediction, for example, cannot be used for anything else. Coupled with their high training costs, this has made it hard for businesses outside of tech to harness predictive AI's power.

Generative AI, on the other hand, creates new text, images or audio without a strict target in mind. This means a single model can be trained to learn many things simultaneously, from answering customers' questions to solving math problems. Though still expensive, a single model now becomes useful to many businesses at the same time, from customer support to education. The downside of weaker supervision and the lack of a single training target is that generative models are harder to evaluate. They can more easily 'hallucinate,' for example, producing false answers confidently.

2. Horizontal Agents Vs. AI Workflows

When browsing through the many options for AI tools in the market, it helps to distinguish between AI workflows and AI agents. Most 'AI agents' creating business value today, outside of simple chatbots, are actually 'AI workflows': sequences of steps hand-designed to solve specific problems in a certain vertical, often built upon common tools like Zapier and n8n. Identifying the right workflow and setting it up can be time-consuming, but seeing through the 'agent' facade and understanding the underlying technology helps. AI agents, on the other hand, decide autonomously what actions to take, and they can learn and excel at a broad range of tasks. Horizontal AI agents are starting to create value in the market in early 2025. It's still the early days, but we could see significantly better user experiences offered by them compared to 'AI workflows' this year, especially with stronger reasoning models and better multimodal LLMs.
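To make the distinction concrete, here is a small, hypothetical sketch contrasting a hand-designed workflow (a fixed sequence of steps) with an agent loop that chooses its next action itself. The tools and the decision stub are invented for illustration; a real agent would typically call an LLM where pick_next_action sits.

```python
# Illustrative sketch: a fixed "AI workflow" vs. an autonomous "agent" loop.
# All tools and the decision function are hypothetical stubs.

def extract_invoice_fields(doc: str) -> dict:
    return {"vendor": "Acme", "amount": 120.0}        # stub

def update_ledger(fields: dict) -> str:
    return f"ledger updated: {fields['vendor']} ${fields['amount']}"  # stub

def send_confirmation(msg: str) -> str:
    return f"email sent: {msg}"                        # stub

TOOLS = {"extract": extract_invoice_fields, "ledger": update_ledger, "email": send_confirmation}

def invoice_workflow(doc: str) -> str:
    """'AI workflow': a hand-designed, fixed sequence of steps."""
    fields = extract_invoice_fields(doc)
    receipt = update_ledger(fields)
    return send_confirmation(receipt)

def pick_next_action(goal: str, history: list[str]) -> str:
    """Stand-in for the model that decides the next step; a real agent would
    consult an LLM here. This stub simply walks the same three steps in order."""
    return ["extract", "ledger", "email", "stop"][min(len(history), 3)]

def invoice_agent(goal: str, doc: str, max_steps: int = 5) -> list[str]:
    """'Agent': repeatedly decides what to do next until it chooses to stop."""
    history: list[str] = []
    state = doc
    for _ in range(max_steps):
        action = pick_next_action(goal, history)
        if action == "stop":
            break
        state = TOOLS[action](state)
        history.append(f"{action} -> {state}")
    return history

print(invoice_workflow("invoice.pdf"))
print(*invoice_agent("process this invoice", "invoice.pdf"), sep="\n")
```

The practical difference: the workflow's quality depends on how well its fixed steps were designed up front, while the agent's quality depends on how well it decides at each step, which is why stronger reasoning models matter for the latter.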
3. Target High-ROI, Repetitive Tasks First

A Zapier study shows 94% of knowledge workers spend part of every day on repetitive, time-consuming chores such as copying data or formatting reports. These tasks follow clear patterns and are ripe for automation. Start with processes that look like assembly lines: data entry, report compilation, standard email replies, document classification. Freeing even one hour a day per employee yields ~250 hours a year—more than a month of recovered capacity. Not to mention, these low-value tasks drain morale and contribute to burnout.

4. Data Access, Security And Privacy—Treat The AI Like A New Hire

In 2023, a Samsung engineer pasted confidential source code into ChatGPT; weeks later, the company banned public chatbots internally to prevent future leaks. Apple, JPMorgan, Verizon and others enacted similar restrictions. Instead of reacting to incidents like this with drastic measures, think ahead about what data and systems the AI should see—no more, no less. Use the 'intern test': if you wouldn't hand a summer intern unrestricted access to a customer database, don't hand it to an AI. On the other hand, if an intern needs access to some of your internal tools to contribute effectively, AI agents probably need the same access as well.

5. Research What Human Supervision Is Available

AI agents still make errors. A U.S. judge fined two lawyers $5,000 after ChatGPT invented six non-existent cases they failed to verify. PwC advises firms to 'rigorously oversee GenAI.' Depending on your business, keeping a human in the loop with your AI automations could:

  • Reduce business risk. Keep an eye on your new AI agents, just as you wouldn't let a new hire handle a client meeting alone.
  • Build trust with AI agents to unlock more capabilities. Models today have limitations; working closely with them to understand their capabilities can unlock more potential.

Over time, you can relax oversight, but it's essential to understand what level of human supervision a given tool supports when building new automations. At a minimum, you should be able to audit details of the AI agent's work, ex post facto, in case something goes wrong.

Conclusion: Automate Boldly With Eyes Open

McKinsey estimates that generative AI could unlock up to $4.4 trillion in annual economic value—roughly the size of Germany's economy. The AI-first era has begun; steer into it with eyes open and a hand on the wheel by understanding the technology, its limitations and what to look out for when making your choice.
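As a small illustration of points 4 and 5 above, here is a hypothetical sketch of scoping an AI agent's data access in the spirit of the 'intern test' and keeping an auditable record of what it touched. The resource names, policy and stubs are invented, not part of any specific product.

```python
# Illustrative sketch: least-privilege access plus an audit trail for an AI
# agent, in the spirit of the "intern test". The resources, policy and the
# agent itself are hypothetical stubs.
from datetime import datetime, timezone

ALLOWED = {"support-faq", "order-status"}           # what this agent may read
AUDIT_LOG: list[dict] = []                          # reviewable after the fact

def read_resource(agent_id: str, resource: str) -> str:
    allowed = resource in ALLOWED
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "resource": resource,
        "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{agent_id} may not access {resource}")
    return f"<contents of {resource}>"              # stand-in for real data

# The agent can use what an intern would be trusted with...
print(read_resource("support-bot", "order-status"))

# ...but not the full customer database.
try:
    read_resource("support-bot", "customer-database")
except PermissionError as err:
    print("blocked:", err)

# Everything remains auditable ex post facto.
for entry in AUDIT_LOG:
    print(entry)
```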

Regalis Capital Harnesses AI to Improve M&A Outcomes

Associated Press

22-06-2025

  • Business
  • Associated Press

Regalis Capital Harnesses AI to Improve M&A Outcomes

06/22/2025, Ontario // KISS PR Brand Story PressWire // In a high-stakes market where timing and accuracy dictate success, Regalis Capital is redefining the business acquisition landscape. Its game-changing approach is driven by predictive AI and a white-glove service model that makes it possible for U.S. buyers to seize cash-flowing businesses with unmatched speed, accuracy and personalized support. This innovation marks a transformative shift in how entrepreneurs approach business ownership, replacing outdated M&A methods with a faster and more precise AI-supported strategy.

While many firms provide advisory services, Regalis Capital takes the process several steps further. Its three-phase M&A strategy—Preparation, Execution and Optimization—equips buyers with a powerful system built on real-world insights, deal data and operational discipline. By combining predictive technology with hands-on support, Regalis Capital simplifies acquisitions without sacrificing strategic depth. 'Buying a business shouldn't feel overwhelming. It should feel empowering,' said a company representative. 'That's why our process is built to give buyers the clarity, support, and confidence they need to move forward.'

Regalis Capital's competitive edge comes from its proprietary AI-backed deal flow system. It scans thousands of opportunities to match U.S. buyers with businesses that meet strict criteria—from profitability to industry fit. Once a target is identified, the firm supports negotiations, lending discussions and deal preparation, and ensures a seamless close. This all-in-one approach is a radical departure from conventional advisory services, which often leave clients to manage critical steps themselves, and it raises the bar for new and seasoned business buyers.

Regalis Capital helps U.S. buyers acquire cash-flowing businesses across industries with minimal friction. Through tailored deal assessments, operational insights and access to vetted lenders, it removes the usual M&A roadblocks. 'We do as much of the heavy lifting as humanly possible,' the representative added. 'That's not just a promise—it's how we operate.' This hands-on model positions Regalis Capital as a reliable guide in the complex world of business acquisition. Unlike generalist firms or high-volume platforms, it curates each opportunity with precision and strategy, providing a personalized experience that leaves buyers informed, equipped and ready to lead.

Founded to bring clarity and control to the acquisition process for U.S. buyers, the Ontario-based firm continues to lead with strategy and innovation. Using predictive tools and expert guidance, it streamlines business ownership transitions. As the only M&A advisory offering a fully done-for-you acquisition model powered by AI, Regalis Capital is more than a partner—it offers a competitive advantage. With a proven process and predictive strategy, Regalis Capital is changing how entrepreneurship through acquisition gets done. For U.S. buyers seeking clarity, speed and success in M&A, the firm delivers not just support but also outcomes. To learn more about Regalis Capital, visit

About Regalis Capital

Regalis Capital is a boutique M&A advisory firm based in Niagara-on-the-Lake, Ontario, Canada, specializing in helping U.S. citizens acquire cash-flowing businesses through a fully done-for-you process.
As the only white-glove firm of its kind, Regalis Capital combines proprietary AI technology with deep industry expertise to deliver deals that are vetted, funded, negotiated, and closed with precision. With a mission to remove complexity from business acquisition, the firm empowers everyday entrepreneurs to own high-performing companies with confidence and clarity.

Media Contact
Regalis Capital
Address: Ontario, Canada
Phone: (647) 946-8687
Website:

AI adoption in financial services and fintech in 2025: By John Adam

Finextra

27-05-2025

  • Business
  • Finextra

AI adoption in financial services and fintech in 2025: By John Adam

A few weeks ago, I visited several events in London over UK Fintech Week. I listened to a lot of speakers and panels and spoke to a lot of people in the sector—at established financial services companies, scaleups, those still early in their product journeys and other ecosystem participants.

By far the most common use cases of predictive AI in financial services are KYC (Know Your Customer) and AML (anti-money laundering) compliance. KYC and AML flows contain a lot of rules-based repeat processes that require flawless accuracy in their execution. Rare mistakes and oversights led to fines totaling almost $24 billion in 2024; American companies alone were slapped with over $3 billion in fines. Automating these processes reduces errors and oversights drastically and requires a fraction of the resources of legacy KYC and AML processes.

In April 2024, Visa announced that its AI-powered fraud detection system helped prevent $41 billion in fraudulent transactions in a single year by analyzing customer behaviour patterns, geolocation and transaction velocity. Another compelling example of an automated AML feature is SEON's transaction monitoring model and screening system, used to reduce its clients' manual fraud reviews by 37%, or over a third. Similarly, Revolut has released an AI-powered feature to protect customers against APP (Authorised Push Payment) scams, which often act as a precursor to money laundering. The feature uses an ML (machine learning) model to flag potential scams in real time by comparing normal user transaction patterns against irregularities, identifying suspicious user behaviour automatically and simplifying a task that is otherwise quite tedious.

A few other common use cases are:

  • Extracting data from 10-Ks.
  • Automating loan application processing and customer data cross-referencing.
  • Identifying customer relationships to decrease time spent on, and the overall costs of, KYC reviews.

Beyond these, financial services companies can apply predictive AI to other low-complexity, high-impact processes that are currently resource intensive.
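As a minimal illustration of the behavioural transaction monitoring described above, the sketch below flags a payment when it deviates sharply from a customer's usual pattern. The features, thresholds and data are invented and far simpler than any production AML system.

```python
# Illustrative sketch: flag a card transaction when it deviates sharply from
# the customer's usual behaviour (amount, hour of day, country). Features,
# thresholds and data are invented, not any provider's monitoring system.
import statistics

def is_suspicious(history: list[dict], txn: dict, z_threshold: float = 3.0) -> bool:
    """Flag `txn` if its amount is an outlier vs. history, or if it happens
    in an unusual country at an unusual hour for this customer."""
    amounts = [t["amount"] for t in history]
    mean, stdev = statistics.mean(amounts), statistics.pstdev(amounts) or 1.0
    amount_z = abs(txn["amount"] - mean) / stdev

    usual_hours = {t["hour"] for t in history}
    usual_countries = {t["country"] for t in history}
    odd_time = txn["hour"] not in usual_hours
    odd_place = txn["country"] not in usual_countries

    return amount_z > z_threshold or (odd_time and odd_place)

history = [
    {"amount": 42.0, "hour": 12, "country": "GB"},
    {"amount": 18.5, "hour": 9, "country": "GB"},
    {"amount": 60.0, "hour": 19, "country": "GB"},
    {"amount": 35.0, "hour": 13, "country": "GB"},
]

print(is_suspicious(history, {"amount": 39.0, "hour": 12, "country": "GB"}))   # False
print(is_suspicious(history, {"amount": 950.0, "hour": 3, "country": "RO"}))   # True
```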
There are also opportunities to add extra value for customers with more adventurous features and to gain a competitive edge. It's not easy standing out in an often-crowded sector dominated by financial giants, especially as a newcomer, and doing so requires a culture of innovation. Predictive AI offers fintech innovators the insights to better forecast how new features will be received, adopted or rejected by different segments of their audiences before spending money and time building them. It can also be applied to critique the ideation process and prototypes. Predictive AI is useful at most steps of the journey from feature ideation to development. Use it, for instance:

To validate ideas by projecting historical user behaviour patterns onto feature prototypes. Monzo, for example, uses ML models to identify patterns in user behaviour like login activity, onboarding flow interactions and when customers use certain features like making payments. When building a new feature, Monzo can now use this data to predict how users might interact with the proposed feature. The model could show whether a certain user profile would use or ignore the feature, whether it may increase engagement with core or prioritised services, fail to inspire an uptick in meaningful engagement, or perhaps offer value to users in some unexpected way.

To assess compliance implications of new products or features before they are fully developed and launched. Upstart, a U.S.-based AI-powered consumer lending fintech, uses predictive AI tools to run pre-deployment simulations that gauge whether its loan platform is compliant with the ECOA (Equal Credit Opportunity Act), simulating variables like demographic groups to measure whether any are disproportionately declined. To make the model's findings transparent to regulators, Upstart uses an XAI (explainable artificial intelligence) model to unpack its logic and show whether decisions meet regulatory standards. Many financial services providers, including Upstart, use proxy models to simulate potential biases. However, the CFPB (Consumer Financial Protection Bureau) has not set clear legal rules for proxy models, and that lack of clarity means using them entails a degree of compliance risk. Traditionally, U.K. regulators have focused more on general decision-making transparency without any particular focus on demographics; as a result, financial institutions usually put the emphasis on making outcomes explainable to the FCA (Financial Conduct Authority).

To predict churn and to prioritize the development of certain features over others. ML models trained on enough user data can identify potential markers of disengagement before human analysts are able to connect the dots. For instance, normally when David receives his paycheck he checks his account that same day and splits his salary between accounts. But over the last four months, he has waited two to three days to take any action. ML models could keep track of which accounts show a delayed reaction to financial events, like David's, and take steps to re-engage him and others like him before churn happens, building a new feature to meet their needs or targeting them with a new campaign.
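Here is a minimal sketch of that delayed-reaction pattern: measure how long a customer takes to act after a salary deposit and flag the account when the recent lag drifts well above their baseline. The data and the doubling rule are invented for illustration, not any bank's actual churn model.

```python
# Illustrative sketch: flag possible disengagement when the gap between a
# salary deposit and the customer's next action keeps widening. The data
# and the "recent lag is double the baseline" rule are invented examples.

def reaction_lags_days(events: list[tuple[str, int]]) -> list[int]:
    """Days between each salary deposit and the next customer action.
    `events` is a chronological list of ("salary"|"action", day_number)."""
    lags, pending_salary_day = [], None
    for kind, day in events:
        if kind == "salary":
            pending_salary_day = day
        elif kind == "action" and pending_salary_day is not None:
            lags.append(day - pending_salary_day)
            pending_salary_day = None
    return lags

def churn_risk(events: list[tuple[str, int]], recent: int = 3) -> bool:
    """Risky if the average recent lag is at least double the earlier baseline."""
    lags = reaction_lags_days(events)
    if len(lags) <= recent:
        return False
    baseline = sum(lags[:-recent]) / len(lags[:-recent])
    recent_avg = sum(lags[-recent:]) / recent
    return recent_avg >= 2 * max(baseline, 0.5)

# David used to react the same day; in recent months he waits 2-3 days.
events = [("salary", 0), ("action", 0), ("salary", 30), ("action", 30),
          ("salary", 61), ("action", 61), ("salary", 91), ("action", 93),
          ("salary", 122), ("action", 125), ("salary", 152), ("action", 154)]
print("Flag for re-engagement:", churn_risk(events))
```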
Blockers for adopting predictive AI

Predictive AI models estimate outcomes before companies allocate resources, helping to optimise R&D. But the quality of predicted outcomes depends on the completeness and quality of the data the models are trained on. If data is obsolete or incomplete, predictions will be less accurate. This is the Achilles' heel for many older and larger financial institutions: bringing together siloed data held in multiple formats. Two solutions here are data integration techniques like data fabric architecture and AI-driven document understanding, which can be used to bridge gaps between legacy systems and unify documentation with minimal manual intervention. Younger financial services providers built as digitally native have an advantage when it comes to data accessibility, but they also have less data than older, more established rivals. If larger, mature financial institutions can put their deep pockets of historical data to work, they can give themselves an edge when developing AI models and tools whose performance is correlated with data volume. Some of the digitally native new providers are now getting large enough to be taken seriously by the big beasts of the financial services sector—Revolut being the most obvious example.

The financial sector and predictive AI going forward

Digitalization has allowed companies in the financial sector to ramp up operations exponentially, but most have still been somewhat constrained by 'human factor' limitations like time zones and working hours. That is beginning to change. At the start of the year, Goldman Sachs introduced an internal AI assistant for its employees and BBVA released a customer-facing AI assistant. Revolut has announced it will release its own AI-powered assistant later this year. The autonomous nature of AI agents and assistants will let companies increase productivity and go beyond what a traditional human workforce can feasibly achieve. Market research by McKinsey estimates that AI adoption will represent $1 trillion in annual value to the global banking sector through a combination of efficiency gains and new commercial opportunities.

As financial services companies build and refine their AI adoption processes and regulation is defined, use cases for AI will continue to expand past KYC and AML compliance automation and become commonplace in areas like new feature validation, prototype testing and churn prediction. With success stories like Visa avoiding $41B worth of fraudulent transactions, and more companies prioritizing AI assistants and agents, the stakes of AI adoption and integration are rising. Wins like Visa's come down to access to and quality of data, and established enterprises fall into one category and young fintechs into another: one with decades' worth of data points split between various databases and systems, the other with sparser data but unified access. It will be interesting to see which turns out to be the more advantageous starting point for AI adoption: deep reservoirs of data and long-term experience, or digitally native systems and agile teams.

Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It

Forbes

27-05-2025

  • Business
  • Forbes

Predictive AI Must Be Valuated – But Rarely Is. Here's How To Do It

Most predictive AI projects neglect to estimate the potential profit – a practice known as ML valuation – and that spells project failure. Here's the how-to.

To be a business is to constantly work toward improved operations. As a business grows, this usually leads to the possibility of using predictive AI, which is the kind of analytics that improves existing, large-scale operations. But the mystique of predictive AI routinely kills its value. Rather than focusing on the concrete win that its deployment could deliver, leaders get distracted by the core tech's glamor. After all, learning from data to predict is sexy. This in turn leads to skipping a critical step: forecasting the operational improvement that predictive AI operationalization would deliver. As with any kind of change to large-scale operations, you can't move forward without a credible estimate of the business improvement you stand to gain – in straightforward terms like profit or other business KPIs. Not doing so makes deployment a shot in the dark. Indeed, most predictive AI launches are scrubbed.

So why do most predictive AI projects fail to estimate the business value, much to their own demise? Ultimately, this is not a technology fail – it's an organizational one, a glaring symptom of the biz/tech divide. Business stakeholders delegate almost every aspect of the project to data scientists. Meanwhile, data scientists as a species are mostly stuck on arcane technical metrics, with little attention to business metrics. The typical data scientist's training, practice, shop-talk and toolset omit business metrics; technical metrics define their comfort zone.

Estimating the profit or other business upside of deploying predictive AI – aka ML valuation – is only a matter of arithmetic. It isn't the "rocket science" part, the ML algorithm that learns from data. Rather, it's the much-needed prelaunch stress-testing of the rocket.

Say you work at a bank processing 10 million credit card and ATM card transactions each quarter. With 3.5% of the transactions fraudulent, the pressure is on to predictively block those transactions most likely to fall into that category. With ML, your data scientists have developed a fraud-detection model that calculates a risk level for each transaction. Within the most risky 150,000 transactions – that is, the 1.5% of transactions that the model considers most likely to be fraud – 143,000 are fraudulent. The other 7,000 are legitimate. So, should the bank block that group of high-risk transactions? Sounds reasonable off the cuff, but let's actually calculate the potential winnings. Suppose that those 143,000 fraudulent transactions represent $18,225,000 in charges – that is, they're about $127 each on average. That's a lot of fraud loss to be saved by blocking them. But what about the downside of blocking them? If it costs your bank an average of $75 each time you wrongly block due to cardholder inconvenience – which would be the case for each of the 7,000 legit transactions – that comes to $525,000. That barely dents the upside, with the net win coming to $17,700,000. So yeah, if you'd like to gain almost $18 million, then block those 1.5% most risky transactions. This is the monetary savings of fraud detection, and a penny saved is a penny earned.
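To keep the assumptions explicit, the same arithmetic can be written as a tiny script; the figures below are the illustrative numbers from the example above, not real bank data.

```python
# The worked example above as explicit arithmetic. All figures are the
# article's illustrative numbers, not real bank data.
fraud_blocked_value = 18_225_000   # $ of fraudulent charges stopped (143,000 txns)
wrongly_blocked = 7_000            # legitimate transactions caught in the net
cost_per_wrong_block = 75          # $ cost of inconveniencing a cardholder

net_savings = fraud_blocked_value - wrongly_blocked * cost_per_wrong_block
print(f"Net quarterly savings from blocking the riskiest 1.5%: ${net_savings:,}")
# -> Net quarterly savings from blocking the riskiest 1.5%: $17,700,000
```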
But that doesn't necessarily mean that 1.5% is the best place to draw the line. How much more might we save by blocking even more? The more we block, the more lower-risk transactions we block – and yet the net value might continue to increase if we go a ways further. Where to stop? The 2% most risky? The 2.5% most risky? To navigate the range of predictive AI deployment options, you've just got to look at it.

Chart: A savings curve comparing the potential money saved by blocking the most risky payment card transactions with fraud-detection models; the performance of three competing models is shown.

The curve shows the monetary win for a range of deployment options. The vertical axis represents the money saved with fraud detection – based on the same kind of calculations as those in the previous example – and the horizontal axis represents the portion of transactions blocked, from most risky (far left) to least risky (far right). This view is zoomed into the range from 0% to 15%, since a bank would normally block at most only the top, say, two or three percent. The three colors represent three competing ML models: two variations of XGBoost and one random forest (these are popular ML methods). The first XGBoost model is the best one overall. The savings are calculated over a real collection of e-commerce transactions, as were the previous example's calculations.

Let's jump to the curve's peak. We would maximize the expected win at more than $26 million by blocking the top 2.94% most risky transactions according to the first XGBoost model. But this deployment plan isn't a done deal yet – there are other, competing considerations. First, consider how often transactions would be wrongly blocked. It turns out that blocking that 2.94% would inconvenience legit cardholders an estimated 72,000 times per quarter. That adverse effect is already baked into the expected $26 million estimate, but it could incur other intangible or longer-term costs; the business doesn't like it. But the relative flatness you can see near the curve's peak signals an opportunity: if we block fewer transactions, we could greatly reduce the expected number wrongly blocked with only a small decrease in savings. For example, it turns out that blocking 2.33% rather than 2.94% cuts the number of estimated bad blocks in half, to 35,000, while still capturing an expected $25 million in savings. The bank might be more comfortable with this plan.
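A curve like this can be computed by scoring transactions with a model, sorting them from most to least risky and sweeping the blocking cut-off while tracking net savings. The sketch below does this on synthetic data with placeholder cost assumptions; it is not the article's e-commerce dataset or models.

```python
# Illustrative sketch: compute a savings curve by sweeping the blocking
# threshold over model-scored transactions. Synthetic data and costs are
# placeholders, not the article's real dataset or fraud models.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
is_fraud = rng.random(n) < 0.035                  # ~3.5% fraud rate
amount = rng.gamma(2.0, 60.0, n)                  # transaction amounts ($)
# Stand-in "model score": higher for fraud, noisy for everything else.
score = rng.normal(0, 1, n) + 2.5 * is_fraud

COST_PER_WRONG_BLOCK = 75.0                       # business assumption
FRAUD_LOSS_FRACTION = 1.0                         # 1.0 = full amount lost if unblocked

order = np.argsort(-score)                        # riskiest first
fraud_sorted = is_fraud[order]
amount_sorted = amount[order]

# Cumulative net savings as the cut-off moves down the ranked list.
saved = np.cumsum(np.where(fraud_sorted,
                           FRAUD_LOSS_FRACTION * amount_sorted,
                           -COST_PER_WRONG_BLOCK))
fractions = np.arange(1, n + 1) / n

best = int(np.argmax(saved))
print(f"Peak net savings ${saved[best]:,.0f} "
      f"at blocking the riskiest {fractions[best]:.2%} of transactions")
```

Because the cost figures are parameters, the same sweep can be rerun under different assumptions, which is exactly the kind of sensitivity check discussed next.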
As compelling as these estimated financial wins are, we must take steps to shore up their credibility, since they hinge on certain business assumptions. After all, the actual win of any operational improvement – whether driven by analytics or otherwise – is only certain after it's been achieved, in a "post mortem" analysis. Before deployment, we're challenged to estimate the expected value and to demonstrate its credibility.

One business assumption within the analysis described so far is that unblocked fraudulent transactions cost the bank the full magnitude of the transaction. A $100 fraudulent transaction costs $100 (while blocking it saves $100), and a $1,000 fraudulent transaction indeed costs ten times as much. But circumstances may not be that simple, and they may be subject to change. For example, certain enforcement efforts might recoup some fraud losses by investigating fraudulent transactions even after they were permitted, or the bank might hold insurance that covers some losses due to fraud. If there's uncertainty about exactly where this factor lands, we can address it by viewing how the overall savings would change if the factor changed. Here's the curve when fraud costs the bank only 80% rather than 100% of each transaction amount.

Chart: The same savings curve, except with each unblocked fraudulent transaction costing the bank only 80% of the transaction amount, rather than 100%.

It turns out the peak decreases from $26 million down to $20 million. This is because there's less money to be saved by fraud detection when fraud itself is less costly. But the position of the peak has moved only a little: from 2.94% to 2.62%. In other words, not much doubt is cast on where to draw the decision boundary.

Another business assumption we have in place is the cost of wrongly blocking, currently set at $75 – since an inconvenienced cardholder is likely to use their card less often (or cancel it entirely). The bank would like to decrease this cost, so it might consider taking measures accordingly. For example, it could consider providing a $10 "apology" gift card each time it realizes its mistake – an expensive endeavor, but one that might turn out to decrease the net cost of wrongly blocking from $75 down to $50. Here's how that would affect the savings curve.

Chart: The same savings curve, except with each wrongly blocked transaction costing only $50, rather than $75.

This increases the peak estimated savings to $28.6 million and moves the peak from 2.94% up to 3.47%. Again, we've gained valuable insight: this scenario would warrant a meaningful increase in how many transactions are blocked (drawing the decision boundary further to the right), but would only increase profit by $2.6 million. Considering that this guesstimated cost reduction is a pretty optimistic one, is it worth the expense, complexity and uncertainty of even testing this kind of "apology" campaign in the first place? Perhaps not.

For a predictive AI project to defy the odds and stand a chance at successful deployment, business-side stakeholders must be empowered to make an informed decision as to whether, which and how: whether the project is ready for deployment, which ML model to deploy and with what decision boundary (the percent of cases to be treated versus not treated). They need to see the potential win in terms of business metrics like profit, savings or other KPIs, across a range of deployment options. And they must see how certain business factors that could be subject to change or uncertainty affect this range of options and their estimated value. We have a name for this kind of interactive visualization: ML valuation. This practice is the main missing ingredient in how predictive AI projects are typically run. ML valuation stands to rectify today's dismal track record for predictive AI deployment, boosting the value captured by this technology closer to its true potential. Given how frequently predictive AI fails to demonstrate a deployed ROI, the adoption of ML valuation is inevitable. In the meantime, it will be a true win for professionals and stakeholders to act early, get out ahead of it and differentiate themselves as value-focused practitioners of the art.
