
Arcade.dev and Lithic Announce Partnership to Create E-Commerce Agents
To make fully agentic commerce a reality, Lithic and Arcade introduced the pioneering approach of 'just-in-time auth' for agentic payments, which ensures agents can access only the exact payment permissions they need, precisely when they need them. At checkout, the application creates a single-use credit card bound to the exact merchant and price of the user's cart. The technology is intended to be open source, so that any developer can build an agent with these capabilities. This approach clears one of the last major hurdles to secure and simple AI-powered commerce: the ability for agents to check out simply and securely.
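The just-in-time pattern described above can be sketched in a few lines. This is a minimal illustration, not the actual Lithic or Arcade API: the field names and payload shape are assumptions chosen to mirror a card-issuing request, where the card is single-use and capped at the exact cart total.

```python
import json

def build_single_use_card_request(merchant_name: str, cart_total_cents: int) -> dict:
    """Build a payload for a just-in-time, single-use virtual card.

    The card is scoped to one transaction and capped at the exact cart
    total, so an agent can pay at checkout but nothing more. Field names
    are illustrative, not the exact Lithic API schema.
    """
    return {
        "type": "SINGLE_USE",                   # card is dead after one authorization
        "spend_limit": cart_total_cents,        # bound to the exact cart price
        "spend_limit_duration": "TRANSACTION",  # limit applies per transaction
        "memo": f"agent checkout at {merchant_name}",
    }

payload = build_single_use_card_request("example-merchant.com", 4999)
print(json.dumps(payload, indent=2))
```

The key design choice is that authorization is derived from the cart itself: the agent never holds a reusable credential, only a credential that expires with the purchase it was minted for.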
'The shift to AI-native commerce will be a multi-trillion dollar transformation, and, until now, payments have not existed within the AI ecosystem,' said Lithic Co-founder and CEO Bo Jiang. 'Partnering with Arcade allows us to create the first truly secure payment rails for AI agents, enabling highly trustworthy AI-driven commerce for Lithic clients.'
AI is reshaping how consumers discover and evaluate products online, from personalized search results to intelligent product recommendations. Large language models help millions of users research purchases and compare options, but consumers are still unable to complete transactions in an AI-native environment. This partnership aims to eliminate this friction and deliver comprehensive AI-powered commerce by enabling AI agents to securely complete purchases on behalf of users.
'Security is the foundation of any reliable agent capable of taking real action,' said Arcade.dev Co-Founder and CEO Alex Salazar. 'Arcade is making reliable agentic transactions possible. We're building a future where consumers don't think twice about an agent making a purchase for them.'
Arcade is building the essential authentication and authorization layer that lets these agents securely transact across a broad catalogue of merchants and platforms.
About Arcade
Arcade is the industry's first agent action platform with enterprise-grade authorization enabling AI to take secure, real-world actions. Arcade's platform transforms AI applications from conversational interfaces into trusted automation tools by solving the fundamental challenges of authentication and integration. Arcade provides best-in-class security and developer-friendly infrastructure to deploy AI that can act on behalf of users. Learn more and try it for free at www.arcade.dev.
About Lithic
Lithic is a leading card issuing technology company built for high growth technology companies. Lithic's APIs enable businesses to move money, build card programs, and issue debit, credit, and prepaid cards to consumers and businesses with unparalleled ease and flexibility. With a focus on empowering businesses to scale globally, Lithic is committed to providing innovative solutions that meet the evolving needs of the financial services industry. For more information, visit www.lithic.com.

Related Articles


Forbes
DeepSeek: A Paradigm Shift, What It Means For Humanity
The whale that is DeepSeek was invisible prior to Jan 20th, 2025. Then the blue whale breached into the whole world's sight on Jan 20th, and that body slam sent shockwaves around the world. The release of DeepSeek-R1 immediately cratered the market cap of several hardware and software companies that had been buoyed by what investors thought was American exceptionalism. Withholding the latest chips and AI intellectual property from China was thought to be the strategy to follow. Except it was wrong. Such is the stuff that leapfrogging is made of, especially for a manufacturing and design powerhouse such as China. Ironically, the latest models from DeepSeek are free to use; they even run them on their servers for free.
Development of general-purpose large language models through scaling of parameters and training data led to many breakthroughs. The release of ChatGPT (GPT-3.5) and GPT-4 in 2022-23 unleashed the general-purpose potential of AI to the general public. This approach also increased costs tremendously, as compute and data demands spurred bigger and better processors. In late 2023, 2024, and even now, the construction of power-hungry data centers was thought to be the only way to improve the performance of the models. Limiting access to computing and the latest chips was thought to restrain China as a source of these powerful models. With DeepSeek, that paradigm was shifted.
Companies like Nvidia, whose stock was heavily affected by the announcement, have since recovered and thrived. The lessons were lost on global markets. The worst may be yet to come, as the companies buoyed by the rise of AI are brought down to earth by a combination of new methods and the lessening of compute needed for training as well as inference. Sunk costs and switching costs, each with powerful economic constituencies, prevent a longer-term view and lock American AI companies into their current paths. Success breeds complacency and adherence to the model that produced success.
In AI, a rapidly developing field, getting stuck on algorithms, process, and practice is deadly. DeepSeek showed that just piling on computing and data does not make for exponential progress. This is a lesson from many fields that is often ignored with an overused but wrong dictum: 'This time it is different.' Innovation follows a familiar pattern: slowly, then rapidly.
Efficiency
The costs of training and running DeepSeek are much lower than for other models. The ratio in a recent presentation showed $6M for DeepSeek versus $600M for Llama (the open-source model from Meta): one hundredth the cost. The costs for other models, including ChatGPT, are even higher. The cost savings are a result of implementing DeepSeek's own discoveries in reinforcement learning and of training using distillation. Further, the model is very efficient at generating Chinese language. As of three months ago, a large number of Chinese companies had joined the AI revolution by subscribing to DeepSeek. As the national champion, DeepSeek is supported by the government's industrial policy.
Reinforcement learning (RL) as a training method was invented at the University of Massachusetts Amherst. The recipients of the 2024 ACM Turing Award, Andrew Barto and Richard Sutton, were the inventors of the classic reinforcement learning techniques. For LLMs and other large models, the classic approach refines the model with feedback from humans, called RLHF (Reinforcement Learning from Human Feedback); this is a form of supervised fine-tuning, with humans as the supervisors. The paper released by the creators of DeepSeek-R1 goes into detail on the way they modified RL. Anything that involves humans in the loop at scale requires a lot of money, and removing the human from the loop makes training cheaper. Instead, a version of the model is used to fine-tune the other: one model functions as the supervisor, and the other is trained.
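The supervisor-model idea above can be shown with a toy loop. This is a deliberately simplified sketch, not DeepSeek's actual method: the `judge_score` heuristic is an invented stand-in for a frozen reward model, and real systems update model weights from the scores rather than just picking a winner.

```python
def judge_score(answer: str) -> float:
    """Toy stand-in for a frozen 'supervisor' model scoring an answer.

    Here the heuristic rewards longer answers that contain 'because'
    (a crude proxy for showing reasoning). In a real pipeline, another
    model produces this score -- no human in the loop.
    """
    return len(answer.split()) / 10 + (1.0 if "because" in answer else 0.0)

def select_best(candidates: list[str]) -> str:
    """One training step's core idea: the supervisor model scores
    candidate outputs, and the trainee learns from the best-scored one."""
    return max(candidates, key=judge_score)

best = select_best(["42", "42 because 6 x 7 = 42", "maybe 42"])
print(best)
```

The point is the division of labor: the scorer replaces the paid human annotators of RLHF, which is exactly where the cost savings come from.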
The arrival of new companies with models such as MiniMax-M1 epitomizes this shift even more. Such techniques will overtake models created using conventional scaling. DeepSeek-R1 was effective through its evolution utilizing multiple strategies: a combination of novel methods based on existing techniques made the training and inference efficient in time and resources. More details can be found in this article. In short, all aspects of the creation and running of large language models were changed, enhanced, or reworked for cost and time efficiency.
MiniMax-M1
MiniMax-M1 claims to have cut the cost of DeepSeek-R1's training by 90%: they trained their model for a cost of $500K. Contrast this with the $6M cost for DeepSeek-R1 and $600M for Llama. (There have been doubts cast on the numbers publicized by both DeepSeek and MiniMax.) The efficiencies have come from further refining RL with what is called lightning attention. This is mostly for deterministic problems such as mathematical and logical reasoning, and for long-context problems such as coding. MiniMax is also available through Hugging Face, the open-source AI host.
Privacy
There is concern that DeepSeek is harvesting private data for its own use. This phenomenon is rife in the world of AI and social media in general. What makes the sharing of private data with DeepSeek or other private companies worrisome is that it will be used to refine their models. In the case of DeepSeek and other China-based companies, there is a fear of the data reaching the Chinese government. Private AI companies, even those in the United States, do the same, except they will share that data with the US government if forced to by law. At this juncture, such a scenario is even more disquieting: the Fourth Amendment will fall by the wayside if the government can search not only our persons and our homes, but our minds, without a warrant. To read more about the risks of DeepSeek, read this analysis from Hidden Layer.
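The cost figures quoted above can be sanity-checked with a couple of lines of arithmetic; the dollar amounts are the approximate (and disputed) figures reported in this article.

```python
# Reported training costs from the article (approximate, disputed figures).
costs = {"Llama": 600_000_000, "DeepSeek-R1": 6_000_000, "MiniMax-M1": 500_000}

# DeepSeek vs Llama: 600M / 6M = 100x cheaper, the "one hundredth" claim.
deepseek_vs_llama = costs["Llama"] / costs["DeepSeek-R1"]

# MiniMax vs DeepSeek: 1 - 500K / 6M, roughly the claimed 90% cut.
minimax_cut = 1 - costs["MiniMax-M1"] / costs["DeepSeek-R1"]

print(f"DeepSeek-R1 is {deepseek_vs_llama:.0f}x cheaper than Llama")
print(f"MiniMax-M1 cut DeepSeek-R1's training cost by {minimax_cut:.0%}")
```

The exact MiniMax figure works out to about 92%, consistent with the rounded 90% claim.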
Since Hidden Layer's business model is based on these kinds of analyses, it is best to look closely at the analysis and compare it with their work on other open models.
Open Source AI Models
The Open Source Initiative (OSI) has a definition of Open Source AI. It is at version 1.0 right now, subject to revision. Like the Open Source Definition for software, it allows users to use, observe, modify, and distribute a model without any restrictions. AI models depend a lot on their training data, and using a model involves inference, which consumes resources; the expenditure on training is separate from the expense of inference. In the classic definition of open-source software, the source code is available to any user to use, observe, modify, and distribute. In a strict interpretation of AI open source, the 'source' should include the data used to train the model. However, this may not be practical, nor is it part of the OSI definition of Open Source AI. This is drastically different from the OSI guidance for open-source software.
The other difference is the observability of the model weights and hyperparameters. During the learning phase, model weights are refined; they embody the model in its current form, crystallizing all the training the model has undergone. Hyperparameters control the initial configuration of the learning setup. In an open model, model weights and hyperparameters are meant to be open, which is why Open Source AI models are often called open-weights models. Many models from China are open-weights models, including Qwen (from Alibaba). This competition has also forced OpenAI to release an open-weights model: the gpt-oss base model, with two variants.
The Future
We have not delved into the technology behind the creation of multi-modal prompts and multi-modal generation. By multi-modal, we mean not only text but also images, audio, and video. MiniMax as well as DeepSeek have these capabilities. It is clear that limiting access to hardware and know-how cannot hold true innovation back.
Such constraints also drive multiple paradigm shifts, making AI cheaper to develop with lower hardware and power requirements, and creating a democratized and decentralized future in which we could fine-tune and run models on commodity hardware. These developments give us hope that we will be able to control and bend these capabilities to help humanity rather than harm ourselves.
Yahoo
Paycom raises 2025 revenue and profit forecasts on AI-driven demand
(Reuters) - Payroll processor Paycom Software raised its forecast for annual revenue and profit on Wednesday, as the addition of AI features helps accelerate demand for its employee management services, sending its shares up 7% in extended trading.
The company now expects fiscal 2025 revenue of $2.05 billion to $2.06 billion, up from its previous projection of $2.02 billion to $2.04 billion. Analysts on average expect $2.03 billion, according to data compiled by LSEG.
Paycom has been integrating artificial intelligence features into its software with its 'smart AI' suite, which automates tasks such as writing job descriptions and helps employers identify which employees are most at risk of leaving. This has boosted demand for Paycom's services as businesses look to simplify workforce management functions. "We are well positioned to extend our product lead and eclipse the industry with even greater AI and automation," CEO Chad Richison said in a statement.
Paycom expects 2025 core profit in the range of $872 million to $882 million, up from previous expectations of $843 million to $858 million. The payroll processor reported revenue of $483.6 million for the second quarter ended June 30, beating analysts' estimate of $472 million. Adjusted core profit was $198.3 million, compared with $159.7 million in the same period last year.
Paycom's expectation of strong growth comes despite a sharp deterioration in U.S. labor market conditions. U.S. employment growth was weaker than expected in July, while the nonfarm payrolls count for the prior two months was revised down by 258,000 jobs, according to a Labor Department report.


CNET
Researchers Seize Control of Smart Homes With Malicious Gemini AI Prompts
Recent reports and demonstrations from the Black Hat computer-security conference have shown how outside Gemini AI prompts -- dubbed promptware -- could fool the AI and force it to control Google Home-connected smart devices. That's an issue for Google, which has been working to add Gemini features to its Google Home app and replace Google Assistant with the new AI helper.
The secret to these serious vulnerabilities is how Gemini is designed to respond to basic commands in English. Demonstrations show how a prompt sneakily added to a Google Calendar invite will be read by Gemini the same way it scans other Google app data, such as when it is summarizing emails. But in this case, the addition gives Gemini a very specific order, like creating an agent to control everyday devices from Google Home.
The Tel Aviv University researchers, including Ben Nassi, Stav Cohen and Or Yair, have created their own website showcasing their report, "Invitation is All You Need." It includes videos showing how the right Gemini prompts could be used to open windows, turn off lights, turn on a boiler, or geolocate the current user. As the research shows, a detailed prompt can be hidden in an innocuous Calendar invite title or a similar spot. These commands can make Gemini create a hidden agent and wait for a common response (like saying "thank you" in an email) to trigger certain actions. Even if your calendar controls are tight, some of these promptware attacks could be performed through other things that Gemini scans, such as an email subject line. Other demonstrations showed how similar commands could lead to spam messages, deleted events, automatic Zoom streaming and more unpleasant tricks.
Should you worry about your Google Home devices? Google told CNET it has introduced multiple fixes to address the promptware vulnerabilities since the researchers provided Google with their report in February 2025.
That's the point of the Black Hat conferences -- to uncover problems before real cybercriminals seize on them, and to get the fixes in fast. Andy Wen, senior director of security product management at Google Workspace, told CNET, "We fixed this issue before it could be exploited thanks to the great work and responsible disclosure by Ben Nassi and team. Their research helped us better understand novel attack pathways, and accelerated our work to deploy new, cutting edge defenses which are now in place protecting users."
If you're still concerned, you can disable Gemini entirely in most cases. As I've covered before as CNET's home security editor, smart home hacking is very rare and very difficult with today's latest security measures. But as these new generative AIs get added to smart homes (the slowly rolling out Alexa Plus and eventual Siri AI upgrades included), there's a chance they could bring new vulnerabilities with them. Now we're seeing how that actually works, and I'd like these AI features to get another security pass, ASAP.