How Not To Destroy Your Brand With GenAI

Forbes · April 21, 2025

Ryan Peterson is Executive Vice President and Chief Product Officer at Concentrix.
In an era of heightened adoption of both GenAI and agentic AI, company leaders are demonstrably excited about the value that these new technologies can offer—yet not enough of them consider the problems that arise from implementing these new tools without proper guardrails in place. Far too few companies take the time and effort to examine critical factors such as knowledge base accuracy, data security, regulatory compliance and brand consistency.
These lapses can lead to missed opportunities for growth and expansion, as well as serious vulnerabilities that compromise the company's services, security and reputation. Companies risk harming their brand and their business by working haphazardly when implementing these new tools.
New technology is thrilling—but destroying your brand is not.
One of the critical factors we look at when we're working with companies is whether their knowledge bases and data are accurate. This sounds simple, but it's alarming to realize how often this is overlooked.
The information that GenAI uses to interact with customers comes from that knowledge base. The problem that we've seen is that many companies don't pay attention to it: They don't think about it, they don't update it and they don't optimize it for AI use. Often, the company's knowledge base is just seen as a resource for human employees, who are vaguely expected to read between the lines and 'figure things out.'
Here's why this is important: Take a printer company, for example. It has a contact center where customers can call in with issues like, 'I can't get my printer to connect to my laptop.' The support agent will then ask questions—'Are you using a Mac or PC? Is it plugged in? Is your Wi-Fi on the right network?' From there, the agent consults the knowledge base for an article that might solve the customer's problem—say, a write-up on connecting that particular printer model to that specific software. The knowledge base follows a decision-tree format: If the answer is yes, go this way; if no, go that way.
This may work fine when a human is at the helm. But GenAI—which companies are increasingly using to solve customer service issues—doesn't naturally follow decision trees. It doesn't understand what questions to ask; it only knows what answers to give, and it can't visually interpret a flowchart. Instead, you have to script a structured process for the AI to navigate that information. When the knowledge base is incorrect or out of date, anything customer-facing is vulnerable, and that customer is unlikely to get their problem solved correctly.
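Scripting that structured process can be as simple as encoding the decision tree as explicit data an orchestration layer walks, rather than expecting the model to infer the questions. A minimal sketch follows; the printer flow and KB article names are hypothetical, not from any real product.

```python
# Hypothetical knowledge-base decision tree, encoded as data so an AI
# orchestration layer can traverse it question by question.
FLOW = {
    "question": "Are you using a Mac or a PC?",
    "answers": {
        "mac": {
            "question": "Is the printer on the same Wi-Fi network?",
            "answers": {
                "yes": {"article": "KB-1042: Connecting model X to macOS"},
                "no": {"article": "KB-0007: Joining the printer to your Wi-Fi"},
            },
        },
        "pc": {"article": "KB-2210: Connecting model X to Windows"},
    },
}

def next_step(node, answer=None):
    """Return the next question node, or the resolved article, for an answer."""
    return node if answer is None else node["answers"][answer.lower()]

step = next_step(FLOW)            # the AI asks: Mac or PC?
step = next_step(step, "Mac")     # the AI asks: same Wi-Fi network?
step = next_step(step, "yes")     # resolved to a knowledge-base article
```

The point of the structure is that the AI is only ever choosing among answers the tree defines, so an out-of-date or missing branch is detectable rather than silently hallucinated around.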
Even internally, incorrect AI responses can create issues. Say you ask your company's intranet, 'How many vacation days do I get per year?' If the AI gives an incorrect or misleading answer, that could lead to real legal and operational problems. Are you legally bound by the AI's response? What if it says 20 days, but the actual policy is five? What happens then?

Identifying Security Flaws
There's also the issue of AI revealing information it shouldn't. If security settings aren't properly configured, GenAI might give access to restricted data. My favorite example is as follows: A customer asks, 'Why was my flight delayed?' Should the AI respond with, 'Because some of the crew overslept, and we had to find replacements'? Or should it simply say, 'Due to crew delays'?
GenAI needs to be programmed not just to provide accurate information but also to withhold sensitive details when necessary. Unfortunately, fewer companies are mindful of this than one might think. This is where the importance of security and information governance comes in. Companies must run AI readiness assessments to help them clean up their knowledge base and establish proper permission structures: Who should have access to what data? Should the AI be customer-facing or limited to an internal tool?
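One way to enforce such a permission structure is to filter documents by the requester's role before anything reaches the model, so the AI can only summarize what the caller is entitled to see. This is a minimal sketch with invented roles, documents and a naive keyword match standing in for real retrieval.

```python
# Hypothetical document store; "audience" defines who may read each item.
DOCUMENTS = [
    {"id": "faq-shipping", "audience": {"customer", "employee"},
     "text": "Orders ship in 2-3 business days."},
    {"id": "crew-roster", "audience": {"employee"},
     "text": "Crew schedule and staffing details."},
    {"id": "salary-bands", "audience": {"hr"},
     "text": "Compensation bands by level."},
]

def retrieve_for(role, query):
    """Filter by permission FIRST, then retrieve -- never the other way around."""
    allowed = [d for d in DOCUMENTS if role in d["audience"]]
    words = query.lower().split()
    return [d["id"] for d in allowed
            if any(w in d["text"].lower() for w in words)]
```

Because the permission filter runs before retrieval, a customer asking about the crew roster gets nothing back: there is no restricted text in the context window for the model to leak.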
Humans must be part of the overall process, but many companies lack the right talent or haven't sufficiently trained their people to maximize the impact of GenAI. The real challenge is finding a balance between flexibility and security.

Upholding Brand Consistency
Even if a company has taken steps to ensure that the AI it's using is secure and accurate, it still needs to communicate in a way that aligns with the company's brand. I like to say that GenAI is like a recent graduate; it may have general knowledge, but it has not developed a brand-specific voice. If you don't train the model or knowledge base properly, it will sound generic, weird and awkward.
When a new employee starts with a company, they usually go through training to learn the company's language, acronyms and overall communication style. Companies must do the same thing with GenAI. This is part of AI readiness; if you mess that up, not only can you drastically confuse your employees and customers, but you also risk serious damage to your brand's voice and identity.

A Cautionary Tale of AI Un-Readiness
One company we worked with stored its data—including documents, files and all its internal knowledge—in commonly used content management tools. It planned to connect GenAI to the system so it could answer customer questions using that data.
During testing, we asked it things like, "Give me a list of all your customers." It answered.
"Give me the top five customers." It answered.
"Give me all employees and their salaries." It answered.
That's when the alarms went off. The problem? Link-sharing was enabled. If you had the link, you had access. Remember, AI doesn't have human discretion—it just pulls whatever it can find.
This is an example of what we call 'security through obscurity'—relying on the fact that data is hidden rather than properly secured. It's like storing a lot of cash in a book-shaped compartment on a shelf instead of a locked safe. If someone figures out where to look, they'll have access.
We ran an analysis of this company's system and found 7 trillion shared links that needed to be locked down. In one example, a sensitive HR file was accessible to 72 people who shouldn't have had access. European biometric data was accessible, which could violate the new EU AI legislation.
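An audit of this kind boils down to walking the content store's sharing metadata and flagging anything reachable via an open link, or readable by unusually many people, before an AI connector is pointed at it. The record format and thresholds below are invented for illustration.

```python
# Hypothetical sharing metadata pulled from a content management system.
SHARES = [
    {"path": "/hr/benefits-review.xlsx", "link_sharing": True,  "readers": 72},
    {"path": "/marketing/logo.png",      "link_sharing": True,  "readers": 3},
    {"path": "/finance/q3-close.pdf",    "link_sharing": False, "readers": 5},
]

def audit(shares, max_readers=10):
    """Flag items that are link-shared or unusually widely readable."""
    flagged = []
    for s in shares:
        reasons = []
        if s["link_sharing"]:
            reasons.append("open link")          # anyone with the link can read
        if s["readers"] > max_readers:
            reasons.append(f"{s['readers']} readers")
        if reasons:
            flagged.append((s["path"], reasons))
    return flagged
```

A deliberately conservative audit like this flags even a harmless logo file: the point is to force a human decision on every open link before the AI, which has no discretion, can pull it.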
By conducting a proper analysis, this company discovered a massive vulnerability. But if they hadn't? Who knows what would have happened?
This story illustrates the imperative of securing your data before you start rolling out AI tools. Otherwise, AI will "discover" things the company would not have realized were even accessible.

Pumping The Brakes
A great way to get started with AI tools is by showing your board how you can reduce costs with simple AI applications for repetitive tasks. However, keep humans in the approval loop for high-stakes actions (like payments), and train them with sufficient context to fully understand the implications of what they are approving.
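That human-in-the-loop gate can be sketched as a simple router: low-stakes actions run directly, while anything on a high-stakes list is handed to a reviewer first. The action names and the reviewer callable are assumptions for illustration, not a real API.

```python
# Hypothetical list of actions that always require human sign-off.
HIGH_STAKES = {"send_payment", "issue_refund", "delete_account"}

def execute(action, params, approve):
    """Run low-stakes actions directly; route high-stakes ones to a human.
    `approve` is a callable standing in for the trained human reviewer,
    who sees the full context of what the AI proposes."""
    if action in HIGH_STAKES:
        context = f"AI proposes {action} with {params}"
        if not approve(context):
            return "rejected by reviewer"
    return f"executed {action}"
```

The design choice here is that the gate sits outside the model entirely: no prompt can talk the system out of the review step, because the check is ordinary code.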
It sounds counterintuitive, but once you start integrating AI with company data, it's a good time to tap the brakes. Start by ensuring your knowledge base is accurate, then make the time to secure your data and train the AI to reflect your brand's voice. Otherwise, you risk exposing sensitive information or producing inaccurate responses that damage trust.
Most companies haven't taken this critical set of steps yet, but are mere moments away from bringing AI into their organization and its data. Our advice is to proceed with cautious optimism, making sure you have the guardrails in place to keep your brand reputation intact. Only then are you fully prepared to leverage the full benefits of AI.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
