Tracer unveils AI tool to shield brands from ChatGPT fraud
Tracer AI has launched a product designed to monitor and protect brands from fraud perpetrated via AI chatbots, with an initial focus on OpenAI's ChatGPT.
The introduction of Tracer Protect for ChatGPT comes as enterprise concern grows about the ways in which generative AI, particularly chatbots, is being exploited to commit brand fraud, distribute counterfeits and conduct executive impersonation scams. Tracer AI's system is built to track mentions of specific brands, products, services, and executives within ChatGPT responses and to proactively identify and mitigate schemes targeting both businesses and consumers.
Changing search habits
The escalation in generative AI use, including ChatGPT, which reportedly has approximately 400 million weekly active users worldwide, has seen consumers rapidly shift from traditional search engines such as Google to AI chatbots for information and product recommendations. Tracer AI states that this mainstream adoption and the public's general trust in chatbot answers have opened effective new channels for bad actors to carry out brand abuse campaigns. These can manifest as phishing schemes, fraudulent product recommendations, or the promotion of unsafe counterfeit goods.
Mechanisms of attack
According to Tracer AI, generative engine optimisation (GEO) is being used to promote fraudulent content within chatbot outputs, which often remains hidden from traditional search engines and detection tools. The result is a growing incidence of social engineering, impersonation, and narrative poisoning, in which misleading stories about brands are intentionally promoted to influence consumers and even affect how AI systems respond to future queries regarding those brands.
Tracer's approach
"OpenAI is already taking important steps to battle nefarious activity in ChatGPT. With Tracer Protect for ChatGPT, Tracer will now be a key pillar of a robust solution to this problem, proactively partnering with brands to remediate infringement activity on OpenAI and any links to websites, mobile apps and marketplaces engineered to prey on consumers," said Rick Farnell, CEO of Tracer.
The system's architecture features Flora, Tracer's agentic AI platform, which uses experience-based learning to improve its threat detection and enforcement capabilities over time. The company claims its integration of advanced automation and human analysis allows for continuous monitoring, drastically reducing the time required to identify and respond to brand misuse, and filtering out irrelevant threats.
Collaboration and platform integration
Tracer Protect for ChatGPT is powered by the Universal AI Platform from Dataiku, aiming to deliver fast and accurate brand threat detection at scale within generative AI environments. "Tracer AI is leading from the front, proving that building and controlling advanced AI agents can deliver a transformative and durable business advantage, including moving towards better genAI control. By combining its Flora agent with the right analytics and its Marlin vision model through The Universal AI Platform, Tracer has shown how to translate frontier AI into real-world outcomes," explained Sophie Dionnet, Senior Vice President of Product and Business Solutions at Dataiku. "This is exactly what Dataiku was built for: enabling visionary teams to create, govern and connect AI agents that solve real-world challenges in entirely new ways while achieving measurable business impact."
The product leverages proprietary human-in-the-loop (HITL) AI methodologies, combining algorithmic speed with review by expert analysts to ensure that enforcement decisions are legally robust and aligned with client-specific requirements.
Industry perspectives
"The emergence of AI chatbots as a new vector for brand manipulation is a pressing concern for enterprise organisations," said Sawyer Ramsey, Strategic Account Executive at Snowflake. "Using the Snowflake AI Data Cloud, Tracer's platform is able to achieve incredible response times to provide right-time insights for their customers. The company's proactive approach to monitoring brand reputation in AI outputs represents exactly the kind of forward-thinking protection the cybersecurity industry needs as we navigate our new digital environment and is a testament to the powerful things companies can do on our unified platform."
Farnell further emphasised the heightened need for advanced digital brand protection, stating, "The urgency to get ahead of this threat cannot be overstated given the increasing ease with which fraudulent content can be generated and the unprecedented consumer shift from using search engines to now using AI chatbots. Organisations must adopt equally advanced countermeasures to protect their digital presence."
"This escalating technological arms race between malicious actors and brand defenders necessitates a proactive and nuanced approach to digital security, which is why we built Tracer Protect for ChatGPT. With our first-of-its-kind brand protection product that actively monitors and analyses ChatGPT outputs to detect and neutralise brand infringements, we're helping enterprises use AI for good and get ahead of these dangerous new threats posing grave risks to brands and their customers."
Expansion plans
Tracer Protect for ChatGPT is the first of several planned products targeting generative AI ecosystems. Tracer AI intends to extend its monitoring and enforcement solutions to other platforms, including Claude, Perplexity, and Gemini, later this year.
The company highlights its ongoing efforts in proactive brand protection, asserting that live, targeted monitoring of large language model (LLM) outputs represents a shift from reactive to anticipatory security measures. Tracer AI's continued development in this area is marked by its use of its HITL AI, Flora, and Tracer Graph technologies. The firm has recently been recognised within the technology industry, receiving awards for both product development and leadership.