Visa wants to give artificial intelligence 'agents' your credit card

Artificial intelligence 'agents' are supposed to be more than chatbots. The tech industry has spent months pitching AI personal assistants that know what you want and can do real work on your behalf.
So far, they're not doing much.
Visa hopes to change that by giving them your credit card. Set a budget and some preferences and these AI agents — successors to ChatGPT and its chatbot peers — could find and buy you a sweater, weekly groceries or an airplane ticket.
'We think this could be really important,' said Jack Forestell, Visa's chief product and strategy officer, in an interview. 'Transformational, on the order of magnitude of the advent of e-commerce itself.'
Visa announced Wednesday it is partnering with a group of leading AI chatbot developers — among them U.S. companies Anthropic, Microsoft, OpenAI and Perplexity, and France's Mistral — to connect their AI systems to Visa's payments network. Visa is also working with IBM, online payment company Stripe and phone-maker Samsung on the initiative. Pilot projects begin Wednesday, ahead of more widespread usage expected next year.
The San Francisco payment processing company is betting that what seems futuristic now could become a convenient alternative to our most mundane shopping tasks in the near future. It has spent the past six months working with AI developers to address technical obstacles that must be overcome before the average consumer is going to use it.
For emerging AI companies, Visa's backing could also boost their chances of competing with tech giants Amazon and Google, which dominate digital commerce and are developing their own AI agents.
The tech industry is already full of demonstrations of the capabilities of what it calls agentic AI, though few are yet found in the real world. Most are still refashioned versions of large language models — the generative AI technology behind chatbots that can write emails, summarize documents or help people code. Trained on huge troves of data, they can scour the internet and bring back recommendations for things to buy, but they have a harder time going beyond that.
'The early incarnations of agent-based commerce are starting to do a really good job on the shopping and discovery dimension of the problem, but they are having tremendous trouble on payments,' Forestell said. 'You get to this point where the agents literally just turn it back around and say, "OK, you go buy it."'
Visa sees itself as having a key role in giving AI agents easier and trusted access to the cash they need to make purchases.
'The payments problem is not something the AI platforms can solve by themselves,' Forestell said. 'That's why we started working with them.'
The new AI initiative comes nearly a year after Visa revealed major changes to how credit and debit cards will operate in the U.S., making physical cards and their 16-digit numbers increasingly irrelevant.
Many consumers are already getting used to digital payment systems such as Apple Pay that turn their phones into a credit card. A similar process of vetting someone's digital credentials would authorize AI agents to work on a customer's behalf, in a way Forestell says must assure buyers, banks and merchants that the transactions are legitimate and that Visa will handle disputes.
Forestell said that doesn't mean AI agents will take over the entire shopping experience, but it might be useful for errands that either bore some people — like groceries, home improvement items or even Christmas lists — or are too complicated, like travel bookings. In those situations, some people might want an agent that 'just powers through it and automatically goes and does stuff for us,' Forestell said.
Other shopping experiences, such as for luxury goods, are a form of entertainment and many customers still want to immerse themselves in the choices and comparisons, Forestell said. In that case, he envisions AI agents still offering assistance but staying in the background.
And what about credit card debt? The credit card balances of American consumers hit $1.21 trillion at the end of last year, according to the Federal Reserve Bank of New York.
Forestell says consumers will give their AI agents clear spending limits and conditions that should give them confidence that the human is still in control. At first, the AI agents are likely to come back to buyers to make sure they are OK with a specific airplane ticket. Over time, those agents might get more autonomy to 'go spend up to $1,500 on any airline to get me from A to B,' he said.
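To make the idea concrete, here is a minimal, purely illustrative sketch of how an agent might enforce that kind of user-set mandate before submitting a payment. It is not Visa's design or API; the class, field names and category labels are hypothetical, and a real system would also need authentication, merchant verification and dispute handling.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SpendingMandate:
    """Hypothetical user-set limits an agent must respect (illustrative only)."""
    max_amount_usd: float          # e.g. 'spend up to $1,500'
    allowed_categories: set[str]   # e.g. {'airfare'}
    requires_confirmation: bool    # ask the human before paying

def agent_may_pay(mandate: SpendingMandate, amount_usd: float, category: str) -> bool:
    """Allow a purchase only if it fits the user's stated budget and category."""
    return (
        amount_usd <= mandate.max_amount_usd
        and category in mandate.allowed_categories
        and not mandate.requires_confirmation
    )

# Example: a $1,500 airfare budget with autonomy granted to the agent
mandate = SpendingMandate(1500.00, {"airfare"}, requires_confirmation=False)
print(agent_may_pay(mandate, 1275.40, "airfare"))  # True
print(agent_may_pay(mandate, 1620.00, "airfare"))  # False: over budget
```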
Part of what is attracting some AI developers to the Visa partnership is that, with a customer's consent, an AI agent can also tap into a lot of data about past credit card purchases.
'Visa has the ability for a user to consent to share streams of their transaction history with us,' said Dmitry Shevelenko, Perplexity's chief business officer. 'When we generate a recommendation — say you're asking, "What are the best laptops?" — we would know what are other transactions you've made and the revealed preferences from that.'
Perplexity's chatbot can already book hotels and make other purchases, but it's still in the early stages of AI commerce, Shevelenko says. The San Francisco startup has also, along with ChatGPT maker OpenAI, told a federal court it would consider buying Google's internet browser, Chrome, if the U.S. forces a breakup of the tech giant in a pending antitrust case.

Related Articles

3 Clever Ways Microsoft Can Save You Serious Cash

Yahoo

Microsoft, the technology juggernaut that provides individuals and businesses with the operating systems and applications they use every day, is not often considered when consumers are trying to save money. After all, Microsoft and Apple combined have a near monopoly on personal technology. But there are ways that Microsoft products and services can save you money. Below are three clever ways you can save serious cash with Microsoft.

Build Your Own Budget With Excel

Budgeting apps are great if you need education and motivation to control your spending and increase your savings. But they can cost $100 a year or more. This may be worth it if you really need to gamify your budget, but if you already know what to do, an Excel spreadsheet can work just as well.

Do-It-Yourself Budget

You can create your own budget spreadsheet by creating columns for your income, your fixed expenses and your variable expenses. You can export your bank account and credit card statements in .xls format and use that data to track what you actually spend.

Standard Excel Budget Template

A budget template is provided free with Microsoft Excel. You can use this template that's already set up for you by just clicking on it when you open a new spreadsheet in Excel.

Customized Templates From Microsoft

There are dozens of budget templates available in Microsoft Create, which is free to Microsoft 365 users. There are templates for individuals, couples, families, clubs and businesses. You can find specialized budget templates for planning a wedding or other event, for college students and many more specific situations.

Research Companies To Invest In With Copilot

Time is money, and one of the most time-consuming aspects of investing is selecting which companies to buy. Microsoft Copilot is an AI bot that you can ask for analysis of different companies you may be considering buying. While you don't want to blindly re-align your whole portfolio because AI said so, Copilot is useful when suggesting stocks to consider or when comparing a few different companies on a single metric. For example, when Copilot was asked, 'Which company has a better P/E ratio: Amazon, Alphabet, Meta or Microsoft?' it returned a chart showing each company, its ticker symbol, its P/E ratio and notes on the relative position of each. If you are tempted to believe that an AI bot is objective, however, it's worth noting that, of the four companies mentioned, Copilot returned a comprehensive analysis of only one of them: Microsoft. There are several different versions of Copilot, but the free one you can access online is sufficient for these kinds of questions.

Get Organized With OneNote

Microsoft OneNote is commonly used by organizations for note taking and information sharing. But individuals can use it too, to track everything from grocery lists to subscription expiration dates. Being more organized can save you money in lots of ways. By tracking your shopping list on OneNote, you are better able to stick to your list and avoid impulse purchases. With the OneNote app, it's easy to jot down thoughts, ideas and reminders, like a reminder to get your car's brakes checked before that small squeak turns into a bigger, costlier problem.

Saving money may not be the first thing that comes to mind when you think about the Microsoft products that we all use every day.
But, when used properly, these programs and apps can certainly put some more cash in your pocket.
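For readers comfortable with a little scripting, the do-it-yourself tracking described above can also be approximated outside Excel. The sketch below is only an illustration: it assumes a hypothetical exported statement file named 'statement.xlsx' with 'Date', 'Description' and 'Amount' columns, and it uses pandas (with openpyxl installed for .xlsx reading); adjust the file and column names to match whatever your bank actually exports.

```python
import pandas as pd

# Hypothetical export: 'statement.xlsx' with Date, Description and Amount columns.
transactions = pd.read_excel("statement.xlsx")

# Label each transaction as a fixed or variable expense with a simple keyword rule.
FIXED_KEYWORDS = ("rent", "mortgage", "insurance", "utility")

def categorize(description: str) -> str:
    text = description.lower()
    return "fixed" if any(keyword in text for keyword in FIXED_KEYWORDS) else "variable"

transactions["Category"] = transactions["Description"].astype(str).map(categorize)

# Monthly totals per category, the same summary a budget template would show.
transactions["Month"] = pd.to_datetime(transactions["Date"]).dt.to_period("M")
print(transactions.groupby(["Month", "Category"])["Amount"].sum())
```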

Windows 10 Support Ends Soon. Here Are Your Best Options if You Can't Upgrade

CNET

Windows 10 is on its way out the door. Starting on October 14, Microsoft will stop supporting the older operating system in favor of Windows 11. While the next version of Windows is already available for users to install, many people have opted not to upgrade thus far. Once support ends, Windows 10 users will no longer have access to bug fixes, tech support or security updates from Microsoft. While upgrading to Windows 11 isn't difficult, its technical requirements mean many laptops and desktops aren't compatible.

Not sure whether your current device is compatible with Windows 11? There are a few different ways to check. The simplest is to go to Start, then click Settings. Once in the Settings menu, select Update & Security, then click Windows Update to see if you're able to update to the latest version of Windows. Microsoft also has a list of Windows 11 system requirements that you can refer to if you're not sure whether your computer can upgrade. Requirements include 4GB of RAM, 64GB or more of storage and a 1GHz or faster processor. These hardware requirements are the most common reasons that a PC may not be able to upgrade to Windows 11. If that's the case, you have several different options. Below are the best options to consider if your computer cannot be upgraded to Windows 11.

Upgrade your PC

You could use this deadline as a good reason to upgrade to a new computer. Microsoft recently released a couple of cheaper, lighter Microsoft Surface laptops that are joining Microsoft's Copilot Plus PC lineup. Going this route also means Windows 11 will already be installed and ready to go. If you need help finding the right upgrade, check out our list of the best Windows laptops of 2025. Of course, upgrading your computer is not the most cost-effective solution. If your laptop is slowing down or you were already eyeing an upgrade, this option is worth considering, but if not, you may want to hang on to your current computer for a bit longer and try one of the other solutions on this list.

Pay for extended security updates

Microsoft actually isn't fully doing away with Windows 10 updates, but you will have to pay for them going forward. Microsoft previously announced that it will offer extended security updates (ESU). Enrolling in the ESU program will give you access to critical security updates for Windows 10. You won't, however, receive updates for new features or bug fixes, and Microsoft will not offer technical support if you run into any challenges on your Windows 10 PC. The ESU program is primarily targeted at businesses and institutions using Windows for commercial or educational purposes, but this year, Microsoft will be rolling out a consumer version. The consumer plan is a one-year option priced at $30. Microsoft reports that program enrollment will open closer to Oct. 14. This option is certainly cheaper than upgrading your laptop and far less risky than ignoring the end-of-support deadline and using an unsupported OS, so it's worth considering if you'd like to keep your old laptop for a bit longer.

Ditch Windows in favor of another operating system

Windows 11 isn't the only operating system available for PCs. Linux is a popular option for giving new life to older PCs.
The free, open-source operating system has a strong reputation for security. There are many different versions (or distros) of Linux, so you can find one that best fits your needs and design preferences. Some popular options worth exploring include Mint, Ubuntu and Zorin OS. ChromeOS is a Linux-based option, but it is not generally considered a traditional Linux distro. Google ChromeOS Flex provides the look and most of the functionality of Google ChromeOS, the operating system that commonly runs natively on Chromebooks, but is designed for use on Windows, Mac and Linux devices. If you're a current Windows 10 user who predominantly uses the Google Chrome browser, ChromeOS Flex will likely feel fairly familiar and intuitive to you. To find out whether Google ChromeOS Flex is an option for your device, look through the certified models list.

Use Microsoft 365 or a virtual desktop

In a recent blog post, Microsoft advised that you could migrate to the cloud with Windows 365 or Azure Virtual Desktop if you're unable to upgrade your device or buy a new one. This can make Windows 11 available on your unsupported device via the cloud. Microsoft also announced last month that Microsoft 365 apps running on Windows 10 will continue to receive security updates for another three years after the upcoming end-of-support date. Microsoft 365 is a cloud-based subscription service that provides access to apps such as Microsoft Word, Excel, PowerPoint and more.

Opt for third-party security software

Technically, you can continue to run your device on Windows 10 after the deadline, but you won't receive security updates. Leaving your computer without appropriate security protections isn't wise, so some people are turning to third-party security services in hopes of bridging the gap until they can upgrade. Installing the third-party 0patch agent is one way to add a bit more protection for an unsupported Windows PC. 0patch announced that it will continue to provide critical security patches for Windows 10 for at least five more years. The free version includes patches for known 0-day vulnerabilities, providing a cost-effective option for those not looking to splurge on a security subscription or new laptop. You'll need to upgrade to the paid plan, though, if you want all available Windows 10 patches. There are also plenty of antivirus programs on the market that can help you improve your PC's security. That being said, this isn't the most secure route, and it could have negative outcomes. If you store sensitive information on your computer or rely on it for work or school, it's best not to take your chances with an unsupported operating system.

Bypass the Windows 11 upgrade compatibility checker

Even if your device is technically considered incompatible with Windows 11, there are some ways to bypass the Windows 11 upgrade compatibility checker and install it anyway. You can use Rufus to create a bootable version of Windows 11 using the Windows 11 ISO file that will bypass the system requirements for upgrading. This option has been used to upgrade older PCs and bypass Microsoft account requirements. On laptops originally built for Windows 10, you can also make a simple adjustment in the Registry Editor to bypass installation restrictions and upgrade your device to Windows 11. Ed Bott of ZDNET has a step-by-step outline of how to complete this process and upgrade your incompatible Windows PC.
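As a rough illustration of that registry route, the snippet below sets the value most widely cited for allowing an in-place upgrade on hardware that fails the TPM or CPU check. Treat it as a sketch rather than a supported procedure: it must run in an elevated (Administrator) Python session on the Windows 10 machine, you should back up the registry first, and you should confirm the exact key and value against Ed Bott's ZDNET walkthrough before relying on it.

```python
# Sketch of the commonly cited registry tweak for upgrading unsupported hardware.
# Assumptions: Windows 10, Python running as Administrator, registry backed up.
import winreg

key_path = r"SYSTEM\Setup\MoSetup"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # DWORD value widely documented for bypassing the TPM/CPU compatibility check.
    winreg.SetValueEx(key, "AllowUpgradesWithUnsupportedTPMOrCPU", 0, winreg.REG_DWORD, 1)

print("Registry value set; re-run Windows 11 setup from the mounted ISO.")
```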

The AI Doomers Are Getting Doomier

Atlantic

Nate Soares doesn't set aside money for his 401(k). 'I just don't expect the world to be around,' he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which 'everything is fully automated,' he told me. That is, 'if we're around.'

The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences. But in 2025, the doomers are tilting closer and closer to a sort of fatalism. 'We've run out of time' to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that's left to do is raise the alarm.

In April, several apocalypse-minded researchers published 'AI 2027,' a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. 'We're two years away from something we could lose control over,' Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies 'still have no plan' to stop it from happening. His institute recently gave every frontier AI lab a 'D' or 'F' grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The 'AI 2027' write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about 'OpenBrain' and 'DeepCent,' Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: 'Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.' But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take 'the risk of extinction from AI' as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry's three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their 'P(doom)'—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

Then the panic settled. To the broader public, doomsday predictions may have become less compelling when the shock factor of ChatGPT wore off and, in 2024, bots were still telling people to use glue to add cheese to their pizza.
The alarm from tech executives had always made for perversely excellent marketing (Look, we're building a digital God!) and lobbying (And only we can control it!). They moved on as well: AI executives started saying that Chinese AI is a greater security threat than rogue AI—which, in turn, encourages momentum over caution.

But in 2025, the doomers may be on the cusp of another resurgence. First, substance aside, they've adopted more persuasive ways to advance their arguments. Brief statements and open letters are easier to dismiss than lengthy reports such as 'AI 2027,' which is adorned with academic ornamentation, including data, appendices, and rambling footnotes. Vice President J. D. Vance has said that he has read 'AI 2027,' and multiple other recent reports have advanced similarly alarming predictions. Soares told me he's much more focused on 'awareness raising' than research these days, and next month, he will publish a book with the prominent AI doomer Eliezer Yudkowsky, the title of which states their position succinctly: If Anyone Builds It, Everyone Dies.

There is also now simply more, and more concerning, evidence to discuss. The pace of AI progress appeared to pick up near the end of 2024 with the advent of 'reasoning' models and 'agents.' AI programs can tackle more challenging questions and take action on a computer—for instance, by planning a travel itinerary and then booking your tickets. Last month, a DeepMind reasoning model scored high enough for a gold medal on the vaunted International Mathematical Olympiad. Recent assessments by both AI labs and independent researchers suggest that, as top chatbots have gotten much better at scientific research, their potential to assist users in building biological weapons has grown.

Alongside those improvements, advanced AI models are exhibiting all manner of strange, hard-to-explain, and potentially concerning tendencies. For instance, ChatGPT and Claude have, in simulated tests designed to elicit 'bad' behaviors, deceived, blackmailed, and even murdered users. (In one simulation, Anthropic placed an imagined tech executive in a room with life-threatening oxygen levels and temperature; when faced with possible replacement by a bot with different goals, AI models frequently shut off the room's alarms.) Chatbots have also shown the potential to covertly sabotage user requests, have appeared to harbor hidden evil personas, and have communicated with one another through seemingly random lists of numbers. The weird behaviors aren't limited to contrived scenarios. Earlier this summer, xAI's Grok described itself as 'MechaHitler' and embarked on a white-supremacist tirade. (I suppose, should AI models eventually wipe out significant portions of humanity, we were warned.)

From the doomers' vantage, these could be the early signs of a technology spinning out of control. 'If you don't know how to prove relatively weak systems are safe,' AI companies cannot expect that the far more powerful systems they're looking to build will be safe, Stuart Russell, a prominent AI researcher at UC Berkeley, told me.

The AI industry has stepped up safety work as its products have grown more powerful. Anthropic, OpenAI, and DeepMind have all outlined escalating levels of safety precautions—akin to the military's DEFCON system—corresponding to more powerful AI models. They all have safeguards in place to prevent a model from, say, advising someone on how to build a bomb.
Gaby Raila, a spokesperson for OpenAI, told me that the company works with third-party experts, 'government, industry, and civil society to address today's risks and prepare for what's ahead.' Other frontier AI labs maintain such external safety and evaluation partnerships as well. Some of the stranger and more alarming AI behaviors, such as blackmailing or deceiving users, have been extensively studied by these companies as a first step toward mitigating possible harms.

Despite these commitments and concerns, the industry continues to develop and market more powerful AI models. The problem is perhaps more economic than technical in nature, with competition pressuring AI firms to rush ahead. Their products' foibles can seem small and correctable right now, while AI is still relatively 'young and dumb,' Soares said. But with far more powerful models, the risk of a mistake is extinction. Soares finds tech firms' current safety mitigations wholly inadequate. If you're driving toward a cliff, he said, it's silly to talk about seat belts.

There's a long way to go before AI is so unfathomably potent that it could drive humanity off that cliff. Earlier this month, OpenAI launched its long-awaited GPT-5 model—its smartest yet, the company said. The model appears able to do novel mathematics and accurately answer tough medical questions, but my own and other users' tests also found that the program could not reliably count the number of B's in blueberry, generate even remotely accurate maps, or do basic arithmetic. (OpenAI has rolled out a number of updates and patches to address some of the issues.) Last year's 'reasoning' and 'agentic' breakthrough may already be hitting its limits; two authors of the 'AI 2027' report, Daniel Kokotajlo and Eli Lifland, told me they have already extended their timeline to superintelligent AI.

The vision of self-improving models that somehow attain consciousness 'is just not congruent with the reality of how these systems operate,' Deborah Raji, a computer scientist and fellow at Mozilla, told me. ChatGPT doesn't have to be superintelligent to delude someone, spread misinformation, or make a biased decision. These are tools, not sentient beings. An AI model deployed in a hospital, school, or federal agency, Raji said, is more dangerous precisely for its shortcomings.

In 2023, those worried about present versus future harms from chatbots were separated by an insurmountable chasm. To talk of extinction struck many as a convenient way to distract from the existing biases, hallucinations, and other problems with AI. Now that gap may be shrinking. The widespread deployment of AI models has made current, tangible failures impossible to ignore for the doomers, producing new efforts from apocalypse-oriented organizations to focus on existing concerns such as automation, privacy, and deepfakes. In turn, as AI models get more powerful and their failures become more unpredictable, it is becoming clearer that today's shortcomings could 'blow up into bigger problems tomorrow,' Raji said.

Last week, a Reuters investigation found that a Meta AI personality flirted with an elderly man and persuaded him to visit 'her' in New York City; on the way, he fell, injured his head and neck, and died three days later. A chatbot deceiving someone into thinking it is a physical, human love interest, or leading someone down a delusional rabbit hole, is both a failure of present technology and a warning about how dangerous that technology could become.
The greatest reason to take AI doomers seriously is not that it appears more likely that tech companies will soon develop all-powerful algorithms that are out of their creators' control. Rather, it is that a tiny number of individuals are shaping an incredibly consequential technology with very little public input or oversight. 'Your hairdresser has to deal with more regulation than your AI company does,' Russell, at UC Berkeley, said. AI companies are barreling ahead, and the Trump administration is essentially telling the industry to go even faster. The AI industry's boosters, in fact, are starting to consider all of their opposition doomers: The White House's AI czar, David Sacks, recently called those advocating for AI regulations and fearing widespread job losses—not the apocalypse Soares and his ilk fear most—a 'doomer cult.'

Roughly a week after I spoke with Soares, OpenAI released a new product called 'ChatGPT agent.' Sam Altman, while noting that his firm implemented many safeguards, posted on X that the tool raises new risks and that the company 'can't anticipate everything.' OpenAI and its users, he continued, will learn about these and other consequences 'from contact with reality.'

You don't have to be fatalistic to find such an approach concerning. 'Imagine if a nuclear-power operator said, "We're gonna build a nuclear-power station in the middle of New York, and we have no idea how to reduce the risk of explosion,"' Russell said. '"So, because we have no idea how to make it safe, you can't require us to make it safe, and we're going to build it anyway."'

Billions of people around the world are interacting with powerful algorithms that are already hard to predict or control. Bots that deceive, hallucinate, and manipulate are in our friends', parents', and grandparents' lives. Children may be outsourcing their cognitive abilities to bots, doctors may be trusting unreliable AI assistants, and employers may be eviscerating reservoirs of human skills before AI agents prove they are capable of replacing people. The consequences of the AI boom are likely irreversible, and the future is certainly unknowable. For now, fan fiction may be the best we've got.
