
Here Comes The Customer Service Resolution Revolution
When Mikkel Svane and his co-founders, Morten Primdahl and Alexander Aghassipour, founded Zendesk in 2007, they aimed to revolutionize customer service by creating beautifully simple software that enabled businesses to easily interact with their customers and act on their feedback.
Eighteen years later, one could argue that they have been successful. However, Tom Eggemeier, CEO of Zendesk, hopes they will be one of those rare companies that can reinvent an industry for a second time.
This time around, though, they want to start a resolution revolution because, according to Eggemeier, 'The only metric that matters in customer service is resolution.'
He's not wrong.
Who wouldn't want to be able to resolve their problems faster and easier?
Customers definitely would.
In fact, research finds that most customers in the UK and the US are frustrated with the effort required to find answers to their questions.
#revolution.
Central to Zendesk's resolution revolution is its new Resolution Platform, which it launched at its recent annual customer event, Relate, held in Las Vegas.
The Zendesk Resolution Platform is built on five core components.
While all five core components are impressive and make sense, the Knowledge Graph component stood out to me.
Why?
Well, for two reasons:
Firstly, customers have long complained about how difficult it is to find answers to their questions and to serve themselves. The research cited above isn't groundbreaking; it simply adds to the evidence that customers remain frustrated.
Secondly, the comprehensive knowledge graph is an idea that has been long in the making.
I remember talking to Adrian McDermott, Zendesk's CTO, back in 2018 about a new AI-powered knowledge management product they had developed.
At that time, their application was based on Content Cue technology, which uses artificial intelligence and supervised learning to proactively identify gaps in existing knowledge and pinpoint where new content needs to be developed.
While impressive, it didn't solve one big problem: actually writing the new content. Anyone who has been involved with customer service and knowledge base development knows that getting already-pressured agents to write articles is hard, and that using third-party agencies or other resources takes time, effort, and money that teams often don't have.
That was more than four years before ChatGPT burst onto the scene.
Now, their new knowledge graph product can, as Eggemeier explains, 'analyze all of your articles, all of your tickets, and all of your voice data to identify any gaps in your knowledge base. We then use generative AI to create articles for you to approve, to help you plug those gaps in your knowledge center. Those new articles can then be automatically translated across languages.'
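The workflow Eggemeier describes — mine the support record for topics, check which ones the knowledge base already covers, and queue the uncovered ones as candidates for AI-drafted articles awaiting human approval — can be sketched in miniature. This is a purely illustrative toy, not Zendesk's implementation or API; the function names, keyword-overlap heuristic, and threshold are all assumptions standing in for what would, in practice, be semantic matching over a knowledge graph.

```python
# Hypothetical sketch of a knowledge-gap pipeline: flag ticket subjects
# that no existing help-center article appears to cover. A real system
# would use semantic similarity and a knowledge graph, not word overlap.

def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

def find_knowledge_gaps(ticket_subjects, article_titles, overlap_threshold=0.5):
    """Return ticket subjects whose words aren't sufficiently covered
    by any existing article title."""
    gaps = []
    for subject in ticket_subjects:
        ticket_words = tokenize(subject)
        covered = any(
            len(ticket_words & tokenize(title)) / len(ticket_words) >= overlap_threshold
            for title in article_titles
        )
        if not covered:
            gaps.append(subject)
    return gaps

tickets = ["reset account password", "export invoice history", "reset account password"]
articles = ["How to reset your account password"]

# Uncovered topics become candidates for generative-AI article drafts
# that a human then reviews and approves.
gaps = sorted(set(find_knowledge_gaps(tickets, articles)))
print(gaps)  # prints ['export invoice history']
```

In a production pipeline, the human-approval step Eggemeier mentions matters as much as the detection step: the model proposes drafts for the gaps, but an agent signs off before anything is published or translated.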
McDermott, who has been working on this problem for some time, adds that he is 'really excited about Knowledge Builder because I think it will take some of the grind out of knowledge gap identification, creation and maintenance.'
In a world of ever-expanding and changing product portfolios, new service lines, new customers and new geographies, keeping up with the creation of knowledge to help customers help themselves, empower agents and enable AI agents is no small task.
As McDermott points out, 'If the AI revolution is the latest industrial revolution, knowledge is the coal.'
I've seen a demo of the Knowledge Builder in action. It was impressive and will go a long way toward solving that problem.