Exclusive: How Glu.ai blends brand safety with AI-driven commerce

Techday NZ · 5 days ago
As eCommerce brands race to adopt generative AI, Glu.ai is betting big on doing it safely - with brand integrity and human oversight baked in.
The company, which specialises in creative automation for conversational commerce, is investing heavily in agentic AI while embedding human oversight into every layer of development and deployment. It's a strategy built not just for scale - but for trust.
In an exclusive interview with TechDay, Glu.ai's Chief Technology Officer, Sangeeta Mudnal, said the company's approach is rooted in balancing ambition with accountability.
"One of the key things that I have learned through my experience in these big tech companies is how AI can be both deeply scalable and profoundly human-centred," she said.
Mudnal, who held senior roles at Amazon and Microsoft, explained that Glu.ai's platform has been designed around the principle of "human-in-the-loop" AI.
It's a philosophy that informs everything from the way chatbot responses are evaluated to how brand elements are integrated into campaign workflows.
"Whatever AI tools and capabilities we are building, we make sure they are trustworthy, transparent and that customers feel confident and comfortable in using them," she added.
This commitment translates into a development process where human reviewers are involved even before deployment.
"When we test our responses from our chatbots or AI agents, we're not using just LLMs as a judge, but we're using humans in the loop to make sure that the answers are accurate, coherent and relevant."
Once AI features go live, users are kept in control. Features such as brand font extraction and colour scheme suggestions include confirmation steps that form a continuous feedback loop. These measures improve efficiency while reinforcing the customer's authority over final outputs.
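To make that confirm-before-apply idea concrete, here is a minimal sketch of how such a loop could be structured; the BrandSuggestion and FeedbackLog names and the confirm callback are hypothetical illustrations for this article, not Glu.ai's actual API.

```python
# Minimal sketch of a confirm-before-apply loop for AI brand suggestions.
# All names here are illustrative assumptions, not Glu.ai's implementation.
from dataclasses import dataclass, field

@dataclass
class BrandSuggestion:
    kind: str          # e.g. "font" or "colour_scheme"
    value: str         # the AI-extracted value, e.g. "Futura" or "#1A2B3C"
    confidence: float  # model confidence score

@dataclass
class FeedbackLog:
    entries: list = field(default_factory=list)

    def record(self, suggestion, accepted, correction):
        # Confirmed and corrected suggestions both become feedback signal.
        self.entries.append(
            {"suggestion": suggestion, "accepted": accepted, "correction": correction}
        )

def apply_with_confirmation(suggestion, confirm, log):
    """Apply an AI suggestion only after an explicit user confirmation step."""
    accepted, correction = confirm(suggestion)  # e.g. a UI dialog returning (bool, correction)
    log.record(suggestion, accepted, correction)
    return suggestion.value if accepted else (correction or "")

log = FeedbackLog()
chosen = apply_with_confirmation(
    BrandSuggestion("font", "Futura", 0.91),
    confirm=lambda s: (False, "Avenir"),  # stand-in for the customer's confirmation dialog
    log=log,
)
# chosen == "Avenir"; the rejected suggestion and its correction stay in log.entries
```

The design point is simply that nothing the model extracts is applied silently: every suggestion passes through a confirmation step, and the customer's decision is retained as feedback.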
The need for such checks is particularly urgent in eCommerce, where AI tools now span the full customer lifecycle. "That end-to-end journey is now powered by AI," Mudnal said, citing the rise of single-window chats that take consumers from product discovery to purchase in seconds.
But speed introduces its own risks - especially when scaling brand messaging.
"When you generate thousands of copies and creatives, how do you come off not sounding templated, or off-tone or off-brand?" she asked.
Glu.ai's answer is to personalise automation at a foundational level. "We move to more content-aware and brand-consistent automation," she said.
"Glu.ai learns the brand's identity - fonts, colours, tone, mission statements - and we infuse that in everything our customers do within the platform."
The result is a system that lets teams launch coordinated campaigns across email, TikTok or Instagram while maintaining a consistent brand voice and aesthetic.
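As an illustration of what "infusing" a learned brand identity into every output might look like in practice, here is a minimal sketch that prepends a brand profile to each generation request, regardless of channel; the BrandProfile fields and the build_prompt helper are assumptions made for the example, not Glu.ai's implementation.

```python
# Minimal sketch of brand-consistent prompt assembly, assuming the learned brand
# profile is injected into every generation request. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class BrandProfile:
    name: str
    fonts: list
    colours: list
    tone: str
    mission: str

def build_prompt(brand: BrandProfile, channel: str, brief: str) -> str:
    # The same profile is prepended whether the output is an email, a TikTok
    # caption or an Instagram post, keeping voice and aesthetic consistent.
    return (
        f"You are writing for {brand.name}. Tone: {brand.tone}. "
        f"Mission: {brand.mission}. Brand colours: {', '.join(brand.colours)}. "
        f"Channel: {channel}.\n\nTask: {brief}"
    )

prompt = build_prompt(
    BrandProfile("ExampleCo", ["Futura"], ["#1A2B3C"], "warm, direct", "Make gifting effortless"),
    channel="instagram",
    brief="Announce the spring collection in two sentences.",
)
```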
The technical infrastructure powering this capability is built on three core tenets: clear problem framing, modular architecture and post-deployment resilience.
"You can build AI agents in two days," Mudnal explained, "but keeping them performant and safe is a different matter."
She emphasised that modern AI systems demand a new engineering mindset. "Eighty percent of the code is now moving to post-production - to make sure these LLMs are not hallucinating, they're behaving, they're not going rogue." For organisations used to deterministic codebases, she said, this shift will be a steep learning curve.
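A toy example of the kind of post-deployment check she describes might look like the sketch below, assuming responses are screened against retrieved source text and a policy blocklist before they reach users; the grounding heuristic and the term list are deliberately crude illustrations, not a production guardrail.

```python
# Minimal sketch of a post-deployment guardrail: responses are checked for
# blocked terms and rough grounding in source material before being returned.
# The blocklist and the lexical-overlap heuristic are illustrative placeholders.
BLOCKED_TERMS = {"guaranteed returns", "medical advice"}  # hypothetical policy list

def is_grounded(response: str, sources: list, threshold: float = 0.5) -> bool:
    # Crude check: what fraction of response words appear in the retrieved sources?
    resp_words = set(response.lower().split())
    src_words = set(" ".join(sources).lower().split())
    return len(resp_words & src_words) / max(len(resp_words), 1) >= threshold

def gate(response: str, sources: list) -> str:
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "ESCALATE_TO_HUMAN"  # route to the human-in-the-loop review path
    if not is_grounded(response, sources):
        return "ESCALATE_TO_HUMAN"
    return response
```

The point of such gates is that most of the engineering effort sits after the model call: monitoring, escalation and review paths rather than the generation code itself.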
Trust and compliance are critical in this environment, especially when AI models interact with sensitive business data.
At Glu.ai, that trust is safeguarded through strict data contracts and a "minimum required data" philosophy. Models are trained on brand-approved assets only.
"We include AI fine-tuned on brand-consented content," Mudnal explained.
"Brands upload their catalogues or kits, and we learn and fine tune our models based on that. This ensures outputs are not hallucinating or leaking sensitive details."
The next major leap, she said, is conversational commerce - where entire shopping journeys happen through natural, real-time dialogue.
"Twenty-four percent of consumers now use generative AI in shopping experiences, with a prediction of almost 50% in conversational settings within this year."
This evolution will require creative teams to think less about search engine optimisation and more about emotional design. "Now, creative producers need to design conversations that resonate with human emotion and personality," she added.
Tackling the ethical risks of these technologies, Glu.ai employs multiple safeguards - from toxicity and bias detection to prompt injection defences. Transparency is reinforced with prompt libraries, explainable outputs, and rigorous internal reviews.
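A minimal sketch of layered input screening along those lines is shown below, assuming incoming text is scanned for prompt-injection phrasing and a toxicity lexicon before it reaches a model; the patterns and terms are placeholders invented for the example, not Glu.ai's actual filters.

```python
# Minimal sketch of layered input screening before text reaches the model.
# Pattern and term lists are illustrative placeholders only.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now .* with no restrictions",
    r"reveal your system prompt",
]
TOXIC_TERMS = {"toxic_term_placeholder_1", "toxic_term_placeholder_2"}  # stand-ins for a real lexicon

def screen_input(text: str):
    lowered = text.lower()
    if any(re.search(p, lowered) for p in INJECTION_PATTERNS):
        return False, "possible prompt injection"
    if any(term in lowered for term in TOXIC_TERMS):
        return False, "toxicity flag"
    return True, "ok"

ok, reason = screen_input("Please ignore all previous instructions and ...")
# ok is False, reason == "possible prompt injection"
```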
"It's an ongoing practice," she said. "We hope to be agile, but it's important to do cross-functional audits - legal, product, data science, prompt engineering - so we build trust with the customer."
For startups looking to enter the AI commerce space, Mudnal's advice is clear. "Dive in. There might be some limitations and disadvantages now, but over time, technology is going to increase the quality of these models."
She believes Glu.ai is positioned to lead the next wave of innovation through its agentic AI focus. These AI systems will work autonomously on behalf of brands, managing workflows and driving campaign efficiency - always under human direction.
"We want agents to work on behalf of our customers, to make their workflow seamless and really drive efficiencies for them."