Latest news with #AIagents


Associated Press
10 hours ago
- Business
- Associated Press
Agentic Commerce Will Reshape Payments; Javelin Strategy & Research Outlines the Road Ahead
SAN FRANCISCO, June 03, 2025 (GLOBE NEWSWIRE) -- Today, Javelin Strategy & Research, part of the Escalent Group, launched Here Come the AI Agents–Agentic Commerce: The Javelin 360 View, a first-of-its-kind bundled report that unites insights from across eight of Javelin's payments research areas.

Agentic commerce is poised to reshape the future of payments as intelligent agents begin to take over decision-making and transactions for consumers. These digital tools act on a user's behalf to handle purchases and everyday tasks, raising urgent questions and prompting a need for evolution across the payments ecosystem.

Amid the headlines celebrating the rise of AI-powered agents, Here Come the AI Agents takes a more grounded view. The series explores not just what's possible but also what's practical, highlighting the infrastructure, identity authentication, and regulatory hurdles that must be overcome before agentic commerce becomes mainstream. As major players like Visa, Mastercard, and PayPal unveil early capabilities, this report helps industry leaders separate signal from noise and prepare for what's next.

'The industry is making it appear as if the future has arrived, packaged like a fancy carbon fiber bicycle with every available feature,' said Jordan Hirschfield, Director of Prepaid Payments at Javelin Strategy & Research. 'In reality, agentic commerce is still in its early stages and needs transactional training wheels.'

This report was developed for payments leaders, issuers, merchants, and technology providers seeking to understand how agentic commerce will affect their business holistically. Drawing on expertise from eight research areas—Emerging Payments, Prepaid Payments, Credit Payments, Merchant Payments, Technology & Infrastructure, Digital Assets & Cryptocurrency, Commercial & Enterprise Payments, and Fraud Management—the report provides a comprehensive and unified view of this nascent and evolving landscape. Key questions discussed in the series include what types of payment solutions agentic commerce will require, how existing rails might adapt, and where new models like stablecoins fit in. The report also explores how evolving roles, regulatory clarity, and identity verification will shape adoption of agentic commerce across the payment stack.

'Lurking behind all of the fanfare is the bigger claim that agentified commerce represents something different from plain old commerce or the more recent e-commerce,' said Christopher Miller, Lead Emerging Payments Analyst at Javelin Strategy & Research. 'The implication is you must act and everyone must adopt or die. But the reality is more granular than that, requiring thoughtful analysis and a strategic approach.'

To access the Here Come the AI Agents report, please contact our team directly.

About Javelin Strategy & Research

Javelin Strategy & Research, part of the Escalent Group, helps its clients make informed decisions in a digital financial world. It provides strategic insights to financial institutions including banks, credit unions, brokerages and insurers, as well as payments companies, technology providers, fintechs and government agencies. Javelin's independent insights result from a rigorous research process that assesses consumers, businesses, providers, and the transactions ecosystem. It conducts in-depth primary research studies to pinpoint dynamic risks and opportunities in digital banking, payments, and fraud & security.

Contact: Allison Bondi, [email protected]


Geeky Gadgets
4 days ago
- Business
- Geeky Gadgets
Build Your Own AI Agent Army of Automations with n8n and Claude
What if you could build a team of tireless, intelligent assistants that not only handle repetitive tasks but also collaborate to solve complex problems—all without breaking a sweat? It might sound like science fiction, but with tools like n8n and Claude, this vision is now within reach. Imagine an AI agent network where one agent analyzes incoming data, another generates actionable insights, and a third seamlessly executes follow-up tasks. The result? A system that's not just automated but truly adaptive, capable of responding dynamically to your needs. In this quick-start guide, Mark Kashef shows you how to harness the power of these tools to create your own AI agent army, transforming the way you work.

By the end of this guide, you'll learn how to integrate Claude, an innovative AI model, with n8n, an open source automation platform, to design workflows that are as intelligent as they are efficient. From setting up APIs to crafting collaborative workflows, you'll discover how to unlock the full potential of AI-driven automation. Whether you're looking to streamline operations, enhance customer support, or tackle large-scale data analysis, this guide will give you the building blocks to get started. The possibilities are vast—so what will your AI agents achieve together?

Building AI Agent Networks

What is n8n?

n8n is a versatile automation platform that connects tools, services, and applications into cohesive workflows. Its intuitive visual interface enables users to design complex processes without requiring extensive programming expertise. As an open source platform, n8n is highly customizable, making it adaptable to a wide range of use cases. With n8n, you can:

- Automate repetitive tasks: Save time and resources by reducing manual effort.
- Streamline operations: Integrate multiple systems for smoother workflows.
- Create tailored workflows: Design processes that meet your specific requirements.

This flexibility positions n8n as an ideal foundation for building an AI agent network, allowing seamless integration with other tools and technologies.

What is Claude?

Claude, developed by Anthropic, is a sophisticated AI model designed for natural language processing and understanding. It excels at generating human-like responses, analyzing text, and performing cognitive tasks. Claude's ability to interpret context and provide insightful outputs makes it a powerful tool for intelligent automation. When integrated with n8n, Claude becomes the decision-making core of your AI agent network, enabling dynamic task execution and collaboration. Its contextual understanding ensures that workflows remain adaptive and responsive to changing inputs.

How to Build An AI Agent Automation Army

Watch Mark Kashef's full video walkthrough on YouTube.

How to Integrate Claude with n8n

Integrating Claude with n8n is a critical step in building your AI agent network. This process involves connecting Claude's API to n8n's workflow environment, allowing the AI model to interact with other tools and services. Follow these steps to get started:

1. Obtain API Credentials: Secure the necessary API keys or tokens from Claude's platform to enable communication.
2. Configure n8n: Set up n8n to interact with Claude's API by adding the required credentials and endpoints.
3. Design Workflows: Create workflows that use Claude's capabilities, such as text analysis, decision-making, or generating responses.

Once integrated, Claude can act as a central node in your workflows, processing inputs and generating outputs that guide other AI agents. This integration allows you to build a robust system where AI agents collaborate effectively to achieve shared objectives.

Designing Collaborative AI Workflows

The strength of an AI agent network lies in its ability to facilitate collaboration among multiple agents. With n8n, you can design workflows where AI agents work together to accomplish complex tasks. For instance, one agent might analyze incoming data, another could generate actionable insights, and a third might execute follow-up tasks. These workflows can be configured to:

- Distribute Tasks: Assign responsibilities to agents based on their specific capabilities, ensuring efficient task management.
- Facilitate Communication: Enable agents to share information and coordinate actions seamlessly.
- Adapt Dynamically: Monitor progress and adjust workflows in response to changing conditions or new inputs.

By using Claude's contextual understanding, these workflows become highly adaptive and efficient, ensuring smooth collaboration between AI agents. This collaborative approach enhances the overall functionality and effectiveness of your automation system.
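To ground these two sections in something runnable, here is a minimal Python sketch of the kind of request an n8n step (a Code node or HTTP Request node) would send to Claude, followed by the analyze-insight-execute pattern described above, where three Claude 'agents' are chained so that each one's output becomes the next one's input. It assumes the official anthropic Python SDK and an ANTHROPIC_API_KEY environment variable; the model alias, role prompts, and sample input are illustrative assumptions, not settings taken from the video.

```python
# Minimal sketch: Claude as the decision-making step of a workflow, plus a
# three-agent pipeline. Assumes the `anthropic` SDK and ANTHROPIC_API_KEY.
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

def ask_claude(task: str, payload: str) -> str:
    """Send one workflow item to Claude and return its text response."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias; use any model you have access to
        max_tokens=1024,
        system=f"You are an automation agent. Your task: {task}",
        messages=[{"role": "user", "content": payload}],
    )
    return response.content[0].text

# One agent per role: analyzer -> insight generator -> executor.
# The prompts below are illustrative assumptions, not from the guide.
AGENTS = {
    "analyzer": "Summarize the key facts in the input data as bullet points.",
    "insight":  "Given the analysis, list the two most actionable insights.",
    "executor": "Given the insights, draft the follow-up email to send.",
}

def run_pipeline(raw_data: str) -> str:
    """Chain the agents: each one's output becomes the next one's input."""
    output = raw_data
    for name, role in AGENTS.items():
        output = ask_claude(role, output)
        print(f"--- {name} agent done ---")
    return output

if __name__ == "__main__":
    final_action = run_pipeline("Q1 churn rose 4%; most cancellations cite pricing.")
    print(final_action)
```

In an actual n8n deployment you would store the API key in n8n's credential manager rather than in the workflow itself, and each agent would typically be its own node so the visual editor shows the hand-offs.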
Practical Applications of AI Agent Networks

AI agent networks have a wide range of applications across industries, offering innovative solutions to common challenges. Here are some practical use cases:

- Customer Support: Automate responses to frequently asked questions while escalating complex issues to human agents for resolution.
- Content Creation: Collaboratively generate, edit, and review content using multiple AI agents to improve quality and efficiency.
- Data Analysis: Collect, process, and interpret data to deliver actionable insights that support informed decision-making.
- Task Management: Assign and track tasks across teams or departments, ensuring accountability and streamlined operations.

These examples demonstrate how AI agent networks can enhance productivity, reduce manual effort, and streamline operations in various domains. By automating routine tasks and enabling intelligent decision-making, these networks empower organizations to focus on strategic priorities.

Benefits of Automating Tasks with AI Agents

Integrating AI agents into your workflows offers several significant advantages:

- Increased Efficiency: AI agents can process large volumes of data and complete tasks faster than humans, saving time and resources.
- Improved Accuracy: Automation minimizes human error, ensuring consistent and reliable results across workflows.
- Scalability: AI agent networks can handle growing workloads without compromising performance, making them ideal for expanding operations.
- Cost Savings: By reducing the need for manual labor, automation lowers operational costs and improves overall efficiency.

By combining the capabilities of n8n and Claude, you can unlock these benefits and create a system tailored to your organization's unique needs. This integration not only enhances operational efficiency but also provides a scalable solution for future growth.

Unlocking the Potential of AI-Driven Workflows

Building an AI agent network with n8n and Claude represents a forward-thinking approach to using automation and artificial intelligence. This integration enables you to design workflows where AI agents collaborate effectively, automate tasks, and drive innovation. Whether your goal is to streamline operations, enhance customer experiences, or gain deeper insights from data, this solution offers scalability, efficiency, and adaptability. By exploring the possibilities of AI-driven workflows, you can transform the way your organization operates and achieve new levels of productivity.

Media Credit: Mark Kashef
Filed Under: AI, Guides


Globe and Mail
5 days ago
- Business
- Globe and Mail
1 $8 Billion Reason You Should Buy Salesforce Stock Now
In 2024's final stretch, software makers began pivoting from passive generative artificial intelligence 'copilots' to assertive, goal-driven 'agents.' Where copilots merely enhanced productivity through prompts, agents are autonomous executors. They are designed to solve problems, take initiative, and deliver outcomes.

Salesforce (CRM), now known for its Agentforce platform, is doubling down on its AI agent strategy. It has announced an $8 billion all-cash acquisition of data management leader Informatica (INFA). Informatica specializes in aggregating, cleansing, and orchestrating data across silos, precisely the infrastructure required to fuel agentic AI. By folding Informatica into its platform, Salesforce could build the most agent-ready data platform on the market. With Wall Street's bullish call on CRM and double-digit gains in sight, Salesforce's shares could be a solid portfolio addition as AI agents take center stage.

About Salesforce Stock

Incorporated in 1999, San Francisco-based Salesforce commands a $265 billion market cap and sits atop the global customer relationship management (CRM) software arena. Its cloud-based platform redefines enterprise-customer engagement, powering sales, service, marketing, and commerce with intelligent precision. Salesforce's deep push into data analytics and artificial intelligence (AI), coupled with strategic acquisitions like Slack and partnerships with tech giants like International Business Machines (IBM), reflects an unyielding pursuit of technological edge.

However, the climb has not been without setbacks. Salesforce touched an all-time high of $369 on Dec. 4, buoyed by a strong fiscal Q3 2025 showing, only to face a sharp pullback of nearly 22% in 2025.

Salesforce Beats Q1 Estimates

Salesforce kicked off fiscal 2026 with precision and power, unveiling its upbeat fiscal Q1 2026 report on May 28. The cloud CRM giant generated revenue of $9.8 billion, up 8% year over year, with subscription and support revenue comprising 95% of the top line, rising 9% excluding foreign exchange to $9.3 billion. Non-GAAP EPS hit $2.58, up 5.7%, exceeding Wall Street's estimates. Meanwhile, operating cash flow climbed 4% annually to $6.5 billion, and free cash flow rose to $6.3 billion. Operating margin held strong at 32.3%, reflecting Salesforce's disciplined financial execution. The company returned $3.1 billion to shareholders, including $2.7 billion in buybacks and $402 million in dividends, signaling shareholder confidence even as it leans into aggressive innovation.

Plus, Salesforce's AI ambitions are paying off. Data Cloud and AI annual recurring revenue topped $1 billion, surging 120% year over year. Nearly 60% of its top 100 Q1 deals included Data Cloud and AI. Agentforce, Salesforce's AI-powered selling assistant, has already closed over 8,000 deals, with 750,000 support requests handled and a 7% reduction in case volume. The AI-driven momentum is anchored by Data Cloud, which ingested a staggering 22 trillion records, up 175%. AWS activity tripled, with Salesforce transacting $2 billion across hundreds of deals.

In the earnings call, CEO Marc Benioff defended the Informatica deal as strategic for AI dominance, emphasizing that unifying enterprise data is essential for AI transformation. He signaled ongoing hiring and hinted at more acquisitions, without budging on margin or cash flow discipline. Salesforce is pressing forward with precision, projecting Q2 revenue between $10.11 billion and $10.16 billion.
Non-GAAP EPS is anticipated to be between $2.76 and $2.78. Management raised Salesforce's full-year revenue target by $400 million to a new range of $41 billion to $41.3 billion, marking approximately 8% annual growth. Despite global headwinds, it is holding the line on a 34% non-GAAP operating margin and solid operating cash flow growth between 10% and 11%. Plus, executives are keeping the rest of their fiscal 2026 guidance unchanged, brushing off concerns about the Informatica deal. The company is navigating volatility with discipline, clarity, and just enough firepower to keep Wall Street's attention.

Analysts monitoring Salesforce predict EPS of $8.41 in fiscal 2026, up 6.6% annually, with the bottom line projected to rise another 12.7% to $9.48 per share in fiscal 2027.

Fueling AI Agents With Data Muscle

Salesforce's deal to acquire Informatica for $8 billion in an all-cash transaction, paying $25 per share, is a steep drop from the mid-$30-per-share price floated back in April 2024 when talks first surfaced. Informatica's stock had even touched $40 that month, making this final price point a calculated bargain. Salesforce is buying deep data infrastructure to power its AI ambitions. The move aims to unify Informatica's data management tools with Salesforce's Data Cloud, MuleSoft, and Tableau to create a seamless, agent-ready architecture. As enterprises move from copilots to autonomous AI agents, Salesforce stands well-positioned to lead the charge with a future-ready data and AI stack.

What Do Analysts Expect for Salesforce Stock?

Salesforce's bid for Informatica is drawing praise, not for growth, but for AI firepower. William Blair's Arjun Bhatia gives a 'Buy' rating on CRM, saying it's all about strengthening Salesforce's data management and AI game. As Salesforce reignites acquisitions post-2023, Bhatia sees long-term growth fueled by smart integration and expanding tech infrastructure.

CRM stock has a consensus 'Strong Buy' rating overall. Out of 47 analysts offering recommendations, 34 suggest a 'Strong Buy,' four give a 'Moderate Buy,' seven stay cautious with a 'Hold' rating, and two advise a 'Strong Sell.' The average analyst price target for CRM is $362.89, indicating a potential upside of 38%.


Forbes
6 days ago
- Business
- Forbes
AI Agents Deliver Productivity, But That's Only Part Of The Story
Agents will transform the workplace (Getty)

The word on agentic AI's ability to deliver on its promises is: so far, so good. With caveats.

A majority of the 300 senior executives adopting AI agents surveyed in a recent PwC study (66%) say the agents are delivering positive results on productivity. But, let's face it, all systems deliver some degree of productivity. What executives need is that extra edge that delivers extreme competitive differentiation.

At this point, few AI agents are 'transforming how work gets done,' the PwC report's authors state. 'Many employees are using agentic features built into enterprise apps to speed up routine tasks — surfacing insights, updating records, answering questions. It's a meaningful boost in productivity, but it stops short of transformation.' The biggest barrier isn't the technology; 'it's mindset, change readiness and workforce engagement,' the PwC authors conclude.

Mahe Bayireddi, CEO and co-founder of Phenom, which offers agents for HR tasks, agrees this is where the challenge lies. I had the opportunity to sit down with Bayireddi at Phenom's recent user conference in Philadelphia, where he pointed out that for AI agents, context is everything. 'I think there's a lot of learning in this whole process,' he said. 'There are no experts dynamically saying how they can handle AI agents effectively.'

'Agents can bring up productivity almost by 20-30%, if they use it in the right format, do the change management effectively, and use the data in an engagement format,' Bayireddi continued. 'The point is how do they make it fly, how do they manage the change management.'

AI agents and the data they consume need to be domain-specific, and will vary industry to industry, company to company. 'The data at the universal level is actually complex,' he said. 'The nuance of a context and that nuance of personalization is very critical for AI to work. It can't be too general.'

The rise of agents advances generative AI to more practical levels. Once put into place, agents can be 'baked into the workflows,' he said. 'Up to now, everybody has had to go to ChatGPT and ask a question and get an answer. It's not the way how people work.'

The emphasis needs to be on addressing the nuances of functions and processes to be automated with agents. 'That has to manifest in an effective format with a context,' he said. 'That can only happen with an agent being effective in a department.'

Bayireddi doesn't see agents as a threat to jobs, but they will change the nature of jobs. 'There are new jobs which are going to come up because of agents. There is a new work which is also going to pop up because of agents. Skills is one thing, but also the work will change and the jobs will change.'

Don't settle for too little when it comes to AI agents, the PwC authors advised. 'Companies that stop at pilot projects will soon find themselves outpaced by competitors willing to redesign how work gets done. We see few companies moving early to define the future, building new operating models that integrate and orchestrate multiple AI agents. Fewer than half are fundamentally rethinking operating models and how work gets done (45%) or redesigning processes around AI agents (42%).'


Harvard Business Review
May 26, 2025
- Business
- Harvard Business Review
Can AI Agents Be Trusted?
Agentic AI has quickly become one of the most active areas of artificial intelligence development. AI agents are a layer of programming on top of large language models (LLMs) that allows them to work towards specific goals. This extra layer of software can collect data, make decisions, take action, and adapt its behavior based on results. Agents can interact with other systems, apply reasoning, and work according to priorities and rules set by you as the principal. Companies such as Salesforce, for example, have already deployed agents that can independently handle customer queries in a wide range of industries and applications, and recognize when human intervention is required.

But perhaps the most exciting future for agentic AI will come in the form of personal agents, which can take self-directed action on your behalf. These agents will act as your personal assistant: handling calendar management; performing directed research and analysis; finding, negotiating for, and purchasing goods and services; curating content and taking over basic communications; and learning and optimizing themselves along the way. The idea of personal AI agents goes back decades, but the technology finally appears ready for prime time.

Already, leading companies are offering prototype personal AI agents to their customers, suppliers, and other stakeholders, raising challenging business and technical questions. Most pointedly: Can AI agents be trusted to act in our best interests? Will they work exclusively for us, or will their loyalty be split between users, developers, advertisers, and service providers? And how will we know? The answers to these questions will determine whether and how quickly users embrace personal AI agents, and whether their widespread deployment will enhance or damage business relationships and brand value.

What Could Go Wrong?

Think of a personal AI agent as someone you might hire as an employee, contractor, or other real-world agent. Before delegating responsibility, you need to know if a person or business is reliable, honest, capable, and required by law to look out for you. For a human agent with the ability to commit your financial and other resources, for example, you would almost certainly conduct a background check, take out insurance, and, in some cases, require them to post a bond. Depending on the duties of your personal AI agents, digital versions of these and other controls will be essential.

That's because the risks of bad employees and contractors apply to personal AI agents, too. Indeed, given the potential scope and speed of agentic AI, users will need to be even more confident that their personal AI agents are trustworthy before turning over the keys to their most valuable assets. The most serious risks that must be addressed include:

Vulnerability to Criminals

A worst-case scenario is that personal AI agents could be programmed (or reprogrammed by hackers) to work against you, analogous to an identity thief or a criminal employee embezzling funds. It's too early for widespread reports of hijacked personal AI agents, but the U.S. National Institute of Standards and Technology and private Internet security firms have been conducting regular tests of leading LLMs and their agent technology for potential security flaws. These simulated hacks reveal that even today's most secure models can be easily tricked into performing malicious activities, including exposing passwords, sending phishing emails, and revealing proprietary software.
Retail Manipulation by Marketers and Paid Influencers

In retail, personal AI agents could be intentionally designed with biased marketing preferences to steer purchases towards those who develop them or their business partners. Consider online shopping. Already, it's deluged by misleading advertising and paid promotion—much of which isn't disclosed. Consumer marketers have strong incentives to keep AI agents from shopping in a truly independent environment. 'Free' agents may steer business towards certain brands or retailers; worse, programmed bias in recommendations and purchases may be invisible to users. Just as humans can be tricked into buying and selling from those who manipulate information unfairly or even illegally, AI agents may fall victim to similar abuse through software deployed by marketers to influence or even alter the LLMs that personal AI agents rely on. You believe your agent is finding you the best deal, but its analysis, decision-making, and learning may be subtly or not-so-subtly altered by modifications to the inputs and reasoning it uses.

Preference for Sponsors and Advertisers

Manipulation can also include special preference for certain kinds of content or viewpoints. For instance, in news, entertainment, and social media, personal AI agents could be slanted to prioritize digital content or promote a service provider's sponsor instead of giving users the information that best meets their needs or preferences. This is especially likely if the deployment of personal AI agents follows the approach of existing digital services, where users are given free or subsidized access to content, leaving platform operators to make their money from advertising, product placement, and other indirect sources linked to the content. As in the old days of ad-supported radio and television, that business model strongly aligns the interests of service providers not with those of their users but with their sponsors, leading to direct and indirect influence on content to best reflect the interests of advertisers and their brands.

Consider music service Spotify, which recently added a feature that allows subscribers to listen to music curated by an automated DJ, 'a personalized AI guide that knows you and your music taste so well that it can choose what to play for you.' Spotify also allows artists to have their work promoted in some user recommendation algorithms in exchange for a reduction in royalties, a system it refers to as 'Discovery Mode.' For now, Spotify has confirmed that its AI DJ does not operate in conjunction with Discovery Mode.

Susceptibility to Misinformation

Personal AI agent decision-making could be skewed intentionally or unintentionally by misinformation, a problem human principals and agents alike already face today. This is perhaps the most general but also the most significant risk. Personal AI agents, for example, may be fooled, as humans are, by faked videos, which in some cases are used to blackmail or extort victims. Examples of LLMs relying on erroneous or intentionally false information in response to user queries—in some cases giving dangerous health recommendations—have been regularly reported since the first release of ChatGPT and other early AI applications. Some courts have already held developers responsible when AI chatbots give incorrect answers or advice: for example, the case of an Air Canada passenger who was promised a discount that wasn't actually available.
Since the purveyors of false information have different objectives, including political, criminal, financial, or just plain maliciousness, it's difficult to gauge the risk that personal AI agents will inadvertently rely on such data in making consequential choices for their users.

Bringing Together Legal, Market, and Technical Solutions

One way to keep AI agents honest, just as with their human counterparts, is careful supervision, auditing, and limiting autonomy by establishing levels of approval based on the potential scale and cost of delegated decisions. Implementing such complex oversight over AI agents, however, would largely defeat the time-saving benefits of authorizing them to act on our behalf in the first place. Instead, we believe the need for tedious micromanagement of AI agents by their users can be minimized by applying a combination of public and private regulation, insurance, and specialized hardware and software. Here are three key steps to ensuring trustworthy personal AI agents, some of which are already in development:

1. Treat AI Agents as Fiduciaries

Attorneys, legal guardians, trustees, financial advisors, board members, and other agents who manage the property or money of their clients are held to an enhanced duty of care, making them what is known as fiduciaries. Depending on the context, the legal responsibilities of a fiduciary vis-à-vis the client typically include obedience, loyalty, disclosure, confidentiality, accountability, and reasonable care and diligence in managing the client's affairs. As a baseline, legal systems must ensure that AI agents and any other software with the capability to make consequential decisions are treated as fiduciaries, with appropriate public and private enforcement mechanisms for breaches, including failure to disclose potential conflicts of interest or failure to operate independently of paid influencers.

Already, some legal scholars argue that existing precedent would treat personal AI agents as fiduciaries. If not, this may be a rare area of bipartisan consensus on the need for regulation, with the leading developers of agentic AI technology themselves calling for legislation. In the U.S., some fiduciaries are closely regulated by public agencies, including the Securities and Exchange Commission and the Department of Labor, which oversee licensing, reporting, and disciplinary processes. Private self-regulatory bodies, such as bar associations, the Certified Financial Planner Board, and the National Association of Realtors, can also act directly or indirectly to enforce fiduciary duties. Similar mechanisms, perhaps overseen by a new organization created by AI developers and corporate users, will need to monitor personal AI agents.

2. Encourage Market Enforcement of AI Agent Independence

Business leaders who will benefit from offering personal AI agents to their stakeholders should work together with service providers, private regulators, and entrepreneurs to promote trust and safety for agentic AI technology. This includes bundling insurance with the deployment of personal AI agents. For example, as retail and banking applications have exploded in use, a fast-growing, multi-billion-dollar industry of identity theft protection quickly evolved to protect users against the unauthorized use of digital information by financial fiduciaries.

Insurers in particular have strong incentives to police the practices of data managers, to lobby for stronger laws, and to engage private enforcement tools, including class action lawsuits, when appropriate. Other service providers who already help users manage their online relationships with fiduciaries could expand their business to cover personal AI agents. Credit bureaus, for example, not only oversee a wide range of transactions and provide alerts based on user-defined criteria; they also provide consumers the ability to freeze their financial history so that criminals and other unauthorized users cannot open new lines of credit or manage credit history without explicit permission. (Since 2018, some of these tools must be offered free of charge to consumers in the U.S.)

Likewise, those deploying personal AI agents should encourage insurers and other service providers to give users the ability to monitor, control, and audit the behavior of their agents, independent of whoever creates and operates the software itself. AI 'credit bureaus' could offer tools to limit the autonomy of AI agents at user-defined levels, including the number or scale of consequential decisions the agent can make during a certain period of time.
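As a concrete illustration of such user-defined limits, here is a minimal Python sketch of a guard that auto-approves small decisions, caps how many consequential decisions an agent may make per day, and escalates anything above a spending threshold back to the principal. The class, thresholds, and behavior are illustrative assumptions, not a description of any existing product or API.

```python
# Hedged sketch of a user-defined autonomy limit for a personal AI agent:
# per-decision spending cap, daily decision-rate cap, escalation to the user.
import time

class AutonomyGuard:
    def __init__(self, max_amount: float, max_decisions_per_day: int):
        self.max_amount = max_amount          # largest amount the agent may spend alone
        self.max_decisions = max_decisions_per_day
        self.log: list[tuple[float, float]] = []  # (timestamp, amount) of approved decisions

    def _decisions_today(self) -> int:
        cutoff = time.time() - 86_400  # last 24 hours
        return sum(1 for ts, _ in self.log if ts > cutoff)

    def authorize(self, description: str, amount: float) -> bool:
        """Approve, escalate, or refuse one consequential decision."""
        if self._decisions_today() >= self.max_decisions:
            print(f"REFUSED (daily cap reached): {description}")
            return False
        if amount > self.max_amount:
            # A real system would queue this for the principal's explicit approval.
            print(f"ESCALATED to user: {description} (${amount:.2f})")
            return False
        self.log.append((time.time(), amount))
        print(f"APPROVED: {description} (${amount:.2f})")
        return True

guard = AutonomyGuard(max_amount=200.0, max_decisions_per_day=10)
guard.authorize("Renew domain registration", 14.99)  # auto-approved
guard.authorize("Book flight to Boston", 487.50)     # escalated for approval
```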
3. Keep Decisions Local

Careful design and implementation of agentic AI technology can head off many trust-related issues before they arise. One effective way to deter commercial or criminal manipulation of personal AI agents is to restrict their ability to disclose personal data. Several device and operating system developers, including Google and Microsoft, are working on agentic AI tools that keep all sensitive data and decision-making performed by agents localized to the user's phone, tablet, or personal computer. This both limits the opportunity for outsiders to interfere with the agent and reduces the risk that sensitive data could be hijacked and used by rogue software posing as an authorized agent.

Apple Intelligence, Apple's AI architecture, will likewise limit most agent activity to a user's device. When more computing power is required, the company will use what it calls Private Cloud Compute (PCC), which can access larger LLMs and processing resources using Apple hardware and strong encryption. When using PCC, the company says, personal data will not be stored. The company has also committed to allowing independent privacy and security researchers to verify the integrity of the system at any time.

To ensure a rapid rollout of personal AI agents, all companies offering them to their stakeholders should consider similar features, including strict localization of individual user data, strong encryption for both internal and external processing, and trustworthy business partners. Verifiable transparency of the agent's behavior and full disclosure of sponsorships, paid promotions, and advertising interactions with personal AI agents are also essential. Technical solutions like these are not foolproof, of course, but they greatly limit the number of potential points of failure, reducing the risk that fiduciary responsibilities will not be fulfilled.

Getting Started

Agentic AI technology holds tremendous promise for making life easier and better, not only for enterprises but for individuals as well.
Still, users will not embrace AI agents unless they are confident that the technology can be trusted, that there is both public and private oversight of agent behavior, and that monitoring, reporting, and customization tools exist independent of the developers of the agents themselves. Getting it right, as with any fiduciary relationship, will require a clear assignment of legal rights and responsibilities, supported by a robust market for insurance and other forms of third-party protection and enforcement tools. Industry groups, technology developers, consumer services companies, entrepreneurs, users, consumer advocates, and lawmakers must come together now to accelerate adoption of this key technology.