When The Bots Pick Up The Phone: AI And Customer Service

Forbes | 2 days ago
Federico Sendra, CEO and cofounder of SpaceDev, a consultancy and development services company with a focus on blockchain and web3.
Some time ago, I had to resolve a billing issue with my phone service provider. I was ready to endure one of those drawn-out sagas where you cycle through five menu options, wait 12 minutes and finally get routed to someone who asks you to repeat everything. Instead, things went smoothly from the start.
Something dawned on me: I hadn't interacted with a person at any point during the call. Besides being able to understand me quite well, the voice assistant was pleasantly cordial and patient. The problem was resolved in three minutes, no transfer, no hold music.
Moments like these are becoming more common, and not by accident. The quiet revolution in customer service is being powered by AI, and we're not talking about the clunky chatbots of 2017. This is something far more sophisticated, nuanced and, ultimately, human-centered.
The Current Customer Service Landscape
AI customer service involves using intelligent technologies—like agents and automated workflows—to enhance every aspect of the support journey. It may sound like we're far away from seeing it implemented on a massive scale, but current use cases abound: Financial institutions use AI to triage fraud inquiries, airlines deploy virtual agents to handle rebookings during travel disruptions and e-commerce brands rely on AI to manage things like product recommendations and order updates.
According to a 2023 report by Gartner, "by 2025, 80% of customer service and support organizations will be applying generative AI technology in some form to improve agent productivity and customer experience (CX)." And it's working: Zendesk's "CX Trends 2024" report indicates that over two-thirds of CX leaders believe AI's more personalized, empathetic interactions help foster long-term loyalty.
What AI Can (And Can't) Do
There's a familiar rationale that leads most executives toward AI: cost savings. And it's true: AI-enabled customer service platforms can reduce operational costs by up to 30% by handling routine queries, shortening resolution times and lowering headcount.
AI's existence may seem to threaten someone's dream job, but let's be honest: No one's dream gig is appointment scheduling, order tracking, password resets and the like. These are all repetitive, predictable interactions with a clear logic tree, ideal for automation. The ultimate point here is to create better experiences, and AI doesn't get tired, lose its temper or forget to follow up.
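To make that "clear logic tree" concrete, here is a minimal sketch of how a router for routine queries can look. The intent names, confidence threshold and handlers are illustrative assumptions, not any particular platform's API:

```python
# A minimal logic-tree router for routine support queries.
# Intent names, the 0.85 threshold and the handlers are illustrative placeholders.

ROUTINE_INTENTS = {
    "order_tracking": lambda ctx: f"Order {ctx['order_id']} is {ctx['status']}.",
    "password_reset": lambda ctx: "A reset link has been sent to the email on file.",
    "appointment_scheduling": lambda ctx: f"You're booked for {ctx['slot']}.",
}

def route(intent: str, confidence: float, ctx: dict) -> str:
    """Automate only high-confidence, routine intents; everything else escalates."""
    if intent in ROUTINE_INTENTS and confidence >= 0.85:
        return ROUTINE_INTENTS[intent](ctx)
    return "escalate_to_human"

print(route("order_tracking", 0.93, {"order_id": "A-1042", "status": "out for delivery"}))
```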
Where things get more complex is in emotional nuance, exception handling or conflict resolution. That's where hybrid models, in which AI assists the human agent, become powerful. For example, AI can summarize the customer's last three conversations before the agent picks up the call, allowing for quicker context and greater rapport. And, in any case, most people still want humans on the other end.
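Here is a rough sketch of that pre-call briefing pattern. The `fetch_conversations` and `call_llm` helpers are hypothetical stand-ins for your CRM lookup and whichever model endpoint you use:

```python
# Sketch: brief the agent with a summary of the customer's last three conversations.
# `fetch_conversations` and `call_llm` are hypothetical stand-ins, not real APIs.

def build_agent_briefing(customer_id: str, fetch_conversations, call_llm) -> str:
    history = fetch_conversations(customer_id, limit=3)  # most recent first
    transcript = "\n\n".join(
        f"[{c['date']}] {c['channel']}: {c['text']}" for c in history
    )
    prompt = (
        "Summarize these three support conversations in five bullets, noting any "
        "unresolved issues and the customer's overall sentiment:\n\n" + transcript
    )
    return call_llm(prompt)  # surfaced to the agent before they pick up
```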
How To Think About Tools
One of the questions I get most often is, "Should we build our own AI or use an existing platform?" The answer depends on your scale, your data and your ambitions.
If you're a growing business with a modest support team, you'll likely get a lot of mileage from off-the-shelf tools like Intercom's Fin, Zendesk's AI suite or HubSpot's ChatSpot. These platforms are trained on general customer service patterns and integrate easily with CRM systems.
But if your product has complex logic or regulatory nuance—let's say, a fintech—then custom models fine-tuned on your internal knowledge base might be worth the investment. We've helped several mid-size companies transition from "rule-based" bots to AI agents powered by retrieval-augmented generation (RAG) models. These aren't plug-and-play, but when done right, the results are transformative.
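For readers who haven't seen the pattern, here is a dependency-free sketch of the retrieve-then-generate loop at the heart of a RAG agent. The `embed` and `generate` callables are placeholders; a production system would use a real vector store and model endpoint:

```python
# Minimal RAG loop: retrieve the most relevant knowledge-base passages, then
# ask the model to answer using only that context. `embed` and `generate` are
# placeholders for real embedding and generation calls.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer(question: str, knowledge_base: list[str], embed, generate, k: int = 3) -> str:
    q_vec = embed(question)
    ranked = sorted(knowledge_base, key=lambda doc: cosine(embed(doc), q_vec), reverse=True)
    context = "\n---\n".join(ranked[:k])  # top-k passages ground the answer
    prompt = (
        "Answer the customer's question using only the context below. "
        "If the context is insufficient, say the question needs a human agent.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```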
Lessons From The Field
One of our clients—a logistics company operating across Latin America—was struggling with a high churn rate in support staff and inconsistent service quality. Their goal wasn't just automation; it was knowledge preservation: How do you make sure new support agents don't start from zero?
We built them a system where AI agents acted as first responders and knowledge companions. These bots could answer 70% of incoming queries with precision. Within six months, resolution time dropped by 38%, and support team satisfaction increased because they got to focus on more meaningful, non-scripted work.
But we've also seen missteps. Another client deployed a chatbot without sufficient training data or fallback logic. It misunderstood basic terms, didn't recognize when customers were angry and, ultimately, eroded trust. The lesson? AI needs human guidance, not just technical tuning.
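As an illustration, this is roughly the kind of fallback logic that was missing: escalate whenever the model is unsure or the customer is clearly frustrated. The thresholds and the `classify`, `detect_sentiment` and `respond` helpers are assumptions for the sketch:

```python
# Fallback sketch: hand off to a human when confidence is low or sentiment is negative.
# Thresholds and the classify/detect_sentiment/respond helpers are illustrative only.

def handle_message(message: str, classify, detect_sentiment, respond) -> dict:
    intent, confidence = classify(message)   # e.g. ("billing_dispute", 0.42)
    sentiment = detect_sentiment(message)    # e.g. -0.8 on a -1..1 scale

    if confidence < 0.6 or sentiment < -0.5:
        return {"action": "handoff_to_human",
                "reason": "low confidence or frustrated customer"}
    return {"action": "reply", "text": respond(intent, message)}
```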
Where To Begin And What To Expect
Companies that succeed with AI in customer service have to do their homework first. This entails understanding workflows, data and customer pain points.
A few practical tips if you're considering the move:
• Don't automate everything. Start with a high-volume, low-risk interaction, like order status inquiries or booking confirmations.
• Train your AI with real conversations. The more representative the data, the smarter the bot.
• Involve your support team. They're being enhanced, not replaced. Make them part of the implementation, and use their insights to refine the AI.
• Test with shadow mode. Run the AI in the background while humans still handle queries, and compare the two sets of outputs before going live (a minimal sketch follows this list).
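Here is what shadow mode can look like in practice: the AI drafts a reply for every ticket, but only the human answer ships, and the pairs are logged for later comparison. The `draft_ai_reply` and `log_comparison` helpers are hypothetical:

```python
# Shadow mode: the AI drafts an answer for every ticket a human handles, but only
# the human reply is sent. Drafts are logged so the two can be compared offline.
# `draft_ai_reply` and `log_comparison` are hypothetical stand-ins.

def handle_ticket_in_shadow_mode(ticket: dict, human_reply: str,
                                 draft_ai_reply, log_comparison) -> str:
    ai_reply = draft_ai_reply(ticket)                      # never sent to the customer
    log_comparison(ticket_id=ticket["id"], human=human_reply, ai=ai_reply)
    return human_reply                                     # this is what actually goes out
```

A few weeks of those logs can show which intents the AI answers as well as your team does, and which ones it should never touch.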
As for timelines, a basic implementation of a third-party tool can take weeks. A custom model that integrates across systems? Think months. But if you do it right, it's an investment that compounds because AI keeps learning.
A Mirror Of What You Value
The thing about customer service is that it's not just a function; it's a reflection. A company that automates support without care sends a message: You're a cost to be managed. A company that uses AI to augment the experience sends a different message: We value your time, and we're using technology to honor it.
At SpaceDev, we've watched companies fail fast with AI because they treated it like a Band-Aid. And we've seen others succeed, because they saw it as an opportunity to rethink how they serve, listen and respond.
AI in customer service ultimately comes down to deciding what's meaningful. And that's a question only humans can answer.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.