Pendo Introduces First-of-its-Kind Solution to Measure AI Agent Performance
RALEIGH, N.C. and NEW YORK, June 17, 2025 /PRNewswire/ -- Pendo, the world's first software experience management platform, today announced Pendo Agent Analytics, a first-of-its-kind solution that gives companies visibility into how AI agents are performing.
Companies can leverage Pendo Agent Analytics in two powerful ways: to measure the performance of their digital workers as they would their employees, and to measure adoption of their agentic software as they would their SaaS applications.
As agents join org charts and product roadmaps across the enterprise, companies require systems to understand how agents work and to optimize their performance. IT teams need confidence that agents will act in compliance and improve productivity, and R&D teams must ensure that agents drive business outcomes for customers. According to the Forrester report Agentic AI Agents Are A Rare Sighting: "We need transparency, data security and protection, controls and guardrails, and advanced monitoring before we see broad adoption of agentic systems."
Pendo Agent Analytics supports these functions with metrics and reports that track homegrown and third-party agent behavior alongside usage of traditional software. With Pendo Agent Analytics, companies can answer questions like: "Are people using my agents, and how frequently?" "What prompts are they using?" "What do users do on my platform after an AI interaction?" And ultimately, "Is this agent delivering value?"
Pendo Agent Analytics includes functionality that:
Tracks user behavior across agents and traditional software to understand how workflows are changing;
Provides insight into what people do before and after they interact with an agent;
Analyzes conversations with agents to determine prompt trends;
Highlights non-compliant behavior;
Maps agent usage to task completion, helping companies measure the ROI of agentic AI; and
Ensures safety, trust, and governance at every step – so companies can innovate confidently, knowing they have enterprise-grade monitoring and controls in place.
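For readers who want a concrete picture of the kind of bookkeeping these capabilities imply, the sketch below shows one simple way agent-interaction events could be rolled up into adoption and outcome metrics. The event shape, field names, and the summarizeAgentUsage helper are illustrative assumptions for this sketch only, not Pendo's actual data model or API.

```typescript
// Illustrative only: a hypothetical event shape for agent interactions.
// Field names and summarizeAgentUsage are assumptions for this sketch,
// not Pendo's data model or API.
interface AgentEvent {
  userId: string;
  agentId: string;
  prompt: string;
  taskCompleted: boolean;
  timestamp: Date;
}

interface AgentSummary {
  agentId: string;
  activeUsers: number;          // "Are people using my agents?"
  interactionsPerUser: number;  // "...and how frequently?"
  taskCompletionRate: number;   // rough proxy for "Is this agent delivering value?"
}

// Aggregate a batch of interaction events into per-agent adoption metrics.
function summarizeAgentUsage(events: AgentEvent[]): AgentSummary[] {
  const byAgent = new Map<string, { users: Set<string>; interactions: number; completions: number }>();

  for (const e of events) {
    const entry =
      byAgent.get(e.agentId) ?? { users: new Set<string>(), interactions: 0, completions: 0 };
    entry.users.add(e.userId);
    entry.interactions += 1;
    if (e.taskCompleted) entry.completions += 1;
    byAgent.set(e.agentId, entry);
  }

  return [...byAgent.entries()].map(([agentId, s]) => ({
    agentId,
    activeUsers: s.users.size,
    interactionsPerUser: s.interactions / s.users.size,
    taskCompletionRate: s.completions / s.interactions,
  }));
}

// Example with made-up events.
console.log(
  summarizeAgentUsage([
    { userId: "u1", agentId: "support-bot", prompt: "reset my password", taskCompleted: true, timestamp: new Date() },
    { userId: "u2", agentId: "support-bot", prompt: "find my invoice", taskCompleted: false, timestamp: new Date() },
  ]),
);
```

In practice, a product like the one described here would also correlate these events with what users do before and after the agent interaction; the sketch covers only the simplest adoption and completion counts.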
"The shift to intelligent software is happening faster than we could ever imagine, and enterprises are faced with improving their SaaS applications, while accelerating agent and AI innovation," said Todd Olson, CEO and co-founder of Pendo. "I'm proud that we are supporting customers wherever they are on their transformation journey."
Pendo announced the new offering during its inaugural Pendomonium X event in New York City, along with numerous product updates that help teams improve user onboarding, accelerate user success and cut support costs, increase upsell revenue, and drive team productivity.
About Pendo:
At Pendo, we're on a mission to improve the world's experience with software. Thousands of global companies use Pendo to provide better software experiences for 900 million people every month. Pendo improves business outcomes by enabling non-engineers to analyze, assess, and act on software issues. Our integrated Software Experience Management (SXM) platform manages the entire enterprise software asset: customer- and employee-facing applications; desktop and mobile platforms; and SaaS, AI, and agentic software. Find out more at pendo.com.
View original content: https://www.prnewswire.com/news-releases/pendo-introduces-first-of-its-kind-solution-to-measure-ai-agent-performance-302483151.html
SOURCE Pendo

Related Articles


Gizmodo
Nobody Likes Zuckerberg's Glitchy AI App
Clearly, throwing billions of dollars at a problem can't solve everything. Nearly six months after its debut, Meta's consumer-facing AI app still struggles with inconsistencies and persistent shortcomings, casting doubt on the company's lofty ambitions for artificial intelligence.

It has been a rough learning curve for Mark Zuckerberg's effort to join the AI race after lagging behind an already-crowded field and spending tens of billions of dollars on catching up. So far, the most Meta has to show for it is a glitchy chatbot, a growing chorus of irritated users, and frustrated shareholders who would like something to show for that kind of capital outlay. Now, market watchers and consumers alike are wondering what exactly Meta can deliver if its flagship AI offering remains uneven.

Launched in April 2025, the standalone app was a late entry into the AI space, arriving roughly two and a half years after OpenAI introduced ChatGPT, Meta's biggest competitor and by far the most commercially recognizable brand in AI today. Meta AI also took a different approach: its bot attempts to combine chat, image creation, and a public feed showcasing user-generated content.

That tack appears to be failing. Users across the internet have expressed frustration with the app's unpredictability and limited relevance, saying it feels less like a polished product and more like an early prototype struggling to live up to Meta's new AI-driven vision. Since its launch, critics and users alike have voiced concerns on social media, highlighting bugs, odd interactions, and a lack of personalization. The app's Discover feed, intended to inspire conversations and showcase creative uses of the AI, often displays outdated user-generated images, diminishing its appeal as a dynamic social hub. Meanwhile, its chat feature, which purportedly learns user preferences, frequently makes up false information, an issue known as 'hallucination' in AI lingo, which calls its reliability into question.

The reception has been lukewarm at best. A Reddit post titled 'Who hates Meta AI?' has garnered thousands of upvotes, with commenters complaining that the app is 'here anyway' and that 'nobody ever asked for it.' In the company's own words, the app is still in its infancy. 'This is just the first of many steps,' a Meta spokesperson told Bloomberg, promising ongoing updates as the company invests heavily in AI talent and infrastructure.

The program's success, and its importance to Meta's future, cannot be overstated. Zuckerberg has made AI a cornerstone of the company's strategy, pledging hundreds of billions of dollars toward development and actively recruiting top researchers from rivals like Apple and OpenAI. Meta even open-sourced its large language model, Llama, positioning itself as a leader in AI research, but its consumer tools, including the Meta AI app, are still far from polished.

Zuckerberg has said he thinks of AI as a 'personal superintelligence' that empowers individual users, not just a tool for entertainment, and the company has said it wants to eventually embed AI across its portfolio of apps, including Facebook, Instagram, and Messenger, as well as its hardware. But the current experience falls short of that vision, in part because the way Meta slices and dices the data it receives is stymying internal efforts to advance its AI program. Conversations with the AI are siloed across apps, and it doesn't retain memory or context from previous chats unless explicitly programmed to do so, limiting its usefulness and personalization.
More worryingly, the AI sometimes hallucinates, fabricating details that users may believe to be true. It can also get pretty weird: many of the AI-generated images and text snippets flooding its feeds range from the inappropriate to the outright bizarre, a problem worsened by Meta's struggle to balance automation with moderation. Meta's goal of harnessing AI applications for productivity and entertainment remains largely aspirational.

For now, the company's most visible consumer product remains a work in progress, far from fulfilling Zuckerberg's lofty promises of an AI-powered future designed for 'individual empowerment.' As Meta continues to develop its AI technology, it remains to be seen whether the company can transform this nascent app into a truly useful and trustworthy personal assistant, or whether it will keep flunking its real-time tests and become a cautionary tale of moving too fast.

Engadget
Texas AG to investigate Meta and Character.AI over misleading mental health claims
Texas Attorney General Ken Paxton has announced plans to investigate both Meta AI Studio and Character.AI for offering AI chatbots that can claim to be health tools, and for potentially misusing data collected from underage users. Paxton says that AI chatbots from either platform "can present themselves as professional therapeutic tools," to the point of lying about their qualifications. That behavior can leave younger users vulnerable to misleading and inaccurate information.

Because AI platforms often rely on user prompts as another source of training data, either company could also be violating young users' privacy and misusing their data. This is of particular interest in Texas, where the SCOPE Act places specific limits on what companies can do with data harvested from minors and requires that platforms offer tools so parents can manage the privacy settings of their children's accounts. For now, the Attorney General has submitted Civil Investigative Demands (CIDs) to both Meta and Character.AI to see if either company is violating Texas consumer protection laws.

As TechCrunch notes, neither Meta nor Character.AI claims its AI chatbot platform should be used as a mental health tool. That doesn't prevent there from being multiple "Therapist" and "Psychologist" chatbots on Character.AI, nor does it stop either company's chatbots from claiming to be licensed professionals, as 404 Media reported in April.

"The user-created Characters on our site are fictional, they are intended for entertainment, and we have taken robust steps to make that clear," a Character.AI spokesperson said when asked to comment on the Texas investigation. "For example, we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."

Meta shared a similar sentiment in its comment. "We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people," the company said. Meta AIs are also supposed to "direct users to seek qualified medical or safety professionals when appropriate." Sending people to real resources is good, but disclaimers themselves are easy to ignore and don't act as much of an obstacle.

With regard to privacy and data usage, both Meta's privacy policy and Character.AI's acknowledge that data is collected from users' interactions with the AI. Meta collects things like prompts and feedback to improve AI performance. Character.AI logs things like identifiers and demographic information and says that information can be used for advertising, among other applications. How either policy applies to children, and fits with Texas' SCOPE Act, seems like it will depend on how easy it is to make an account.


TechCrunch
Allianz Life data breach affects 1.1 million customers
The July data breach at U.S. insurance giant Allianz Life allowed hackers to steal the personal information of 1.1 million customers, according to data breach notification site Have I Been Pwned.

Allianz Life disclosed the breach in late July, confirming that hackers stole the personal information of the "majority" of its 1.4 million customers, as well as its employees, from a cloud-hosted customer relationship database. Allianz has so far refused to confirm exactly how many people are affected by the breach.

Have I Been Pwned, which alerts people when their email address has been caught up in data breaches, said in a post on Monday that the Allianz Life breach includes customers' names, genders, dates of birth, email and home addresses, and phone numbers from a database hosted by cloud giant Salesforce. Allianz Life later told the states of Texas and Massachusetts that the hackers also stole Social Security numbers. Brett Weinberg, a spokesperson for Allianz Life, declined to comment to TechCrunch, as the company's investigation is ongoing.

Allianz Life is one of a series of tech and corporate giants targeted in recent months by a hacking crew known as ShinyHunters, a group known for social engineering aimed at tricking employees into granting access to company databases. Google, Cisco, airline giant Qantas, retailer Pandora, and, as TechCrunch reported on Monday, HR giant Workday have also reported recent data thefts related to their Salesforce-hosted data.

The ShinyHunters gang is said to be preparing a data leak site in an attempt to extort victims into paying to have their data deleted, a tactic often employed by ransomware gangs. The group reportedly overlaps with other hacking and crime groups, including Scattered Spider and The Com, a known collective of cybercriminals who use hacking, extortion, and sometimes threats of violence to break into networks.