
Riskified & HUMAN join forces to tackle AI-driven eCommerce risks
The collaboration between Riskified, which provides eCommerce fraud prevention and risk intelligence, and HUMAN Security, a cybersecurity company, is focused on advancing a unified security framework for merchants participating in agentic commerce channels. Both companies will leverage their AI platforms and network insights in a joint effort to help merchants address the challenges of increased AI-driven transactions.
Online consumers are rapidly adopting large language models (LLMs) such as ChatGPT, Claude, Gemini, Grok, Llama, and Perplexity to research products, compare prices, and identify deals. This trend is gaining pace as LLM providers enhance browser experiences and integrations, broadening the impact of AI on eCommerce behaviour. However, as AI agents become intermediaries in shopping decisions and purchases, merchants face new risks as well as opportunities.
Changing risks
Traditional, rules-based fraud management tools often rely on behavioural signals from human shoppers. When an AI agent conducts a transaction, these signals can be absent, leading to increased false declines or undetected fraud. Merchants adopting AI-driven shopping features can potentially win new customers and improve conversion rates, but they also encounter risks such as revenue loss, inventory manipulation and reputational harm.
Data from Riskified's merchant network highlights the risks associated with LLM-referred web traffic. LLM-referred traffic for a large ticketing merchant was found to be 2.3 times riskier than traffic originating from Google search, and for an electronics merchant, AI-driven traffic was 1.8 times riskier. Riskified also reports early signs of automated reseller arbitrage, where AI agents purchase inventory rapidly to resell at higher prices via fraudulent storefronts recommended by other agents.
Such activities can undermine pricing strategies, damage customer trust, and result in substantial financial losses if not properly managed.
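The relative-risk figures quoted above could, in principle, be folded into a merchant's own scoring as a per-source multiplier. The following is an illustrative sketch only, not Riskified's actual model; the source names and the scoring logic are invented, and the multipliers simply reuse the article's figures as example values.

```python
# Illustrative sketch only -- not Riskified's actual model. It folds the
# article's relative-risk figures (LLM-referred traffic measured at 2.3x
# and 1.8x the risk of Google search traffic) into a simple score
# adjustment by referral source. All names here are invented.
RISK_MULTIPLIERS = {
    "google_search": 1.0,    # baseline in the article's comparison
    "llm_ticketing": 2.3,    # LLM-referred traffic, ticketing merchant
    "llm_electronics": 1.8,  # LLM-referred traffic, electronics merchant
}

def adjusted_risk(base_score: float, referral_source: str) -> float:
    """Scale a 0-1 base risk score by the source multiplier, capped at 1.0."""
    multiplier = RISK_MULTIPLIERS.get(referral_source, 1.0)
    return min(base_score * multiplier, 1.0)
```

Unknown sources fall back to the baseline multiplier, so the adjustment only ever raises scores for traffic the merchant has measured as riskier.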
New solutions
Riskified is introducing a suite of new products and tools designed to help merchants monitor and control eCommerce orders originating from AI shopping agents while preventing fraud and policy abuse. Announced solutions include AI Agent Approve, which enables merchants and LLMs to interact securely with Riskified's APIs through a package available on AWS Marketplace; AI Agent Intelligence, which offers dashboard monitoring of eCommerce orders originating from AI agents; and AI Agent Policy Builder, which detects and enforces policies related to returns abuse, reseller arbitrage, and promotional abuse.
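To make the policy-enforcement idea concrete, here is a minimal, hypothetical sketch of the kind of rule such an engine might apply. The thresholds, field names, and flag strings are invented for illustration and are not Riskified's AI Agent Policy Builder API.

```python
# Hypothetical sketch of the kind of rule a policy engine might apply --
# the thresholds, field names, and flags below are invented for
# illustration and are not Riskified's AI Agent Policy Builder API.
from dataclasses import dataclass

@dataclass
class Order:
    agent_referred: bool   # order placed via an AI shopping agent
    units: int             # quantity of a single SKU in the order
    promo_codes_used: int  # promotional codes applied at checkout

def flag_order(order: Order) -> list[str]:
    """Return policy flags for a single order, empty if nothing triggers."""
    flags = []
    if order.agent_referred and order.units > 10:
        # Bulk agent-driven buying can indicate reseller arbitrage.
        flags.append("possible_reseller_arbitrage")
    if order.promo_codes_used > 1:
        flags.append("possible_promo_abuse")
    return flags
```

In a real deployment such rules would sit alongside model-based scoring rather than replace it; the sketch shows only the declarative-policy half of that picture.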
The partnership allows merchants to apply consistent trust policies and automated transaction decisions across both human and AI-driven interactions, supported by HUMAN's recently launched Sightline platform featuring AgenticTrust and Riskified's expertise in chargeback and policy abuse prevention. "In a world where AI agents transact on behalf of individuals, resolving identity and trust becomes more complex. By working with HUMAN and developing new agentic tools and capabilities, we give merchants a way to safely embrace this shift, turning what could be a threat into a new, profitable digital channel," said Assaf Feldman, CTO and Co-Founder of Riskified.
John Searby, Chief Strategy Officer at HUMAN Security, added, "We are incredibly excited to be working with Riskified as a launch partner, bringing together HUMAN Sightline featuring AgenticTrust with their eCommerce risk management expertise to help establish a trusted ecosystem for agentic commerce. HUMAN provides the trust layer and visibility to identify and govern AI shopping agent interactions, empowering merchants to set and enforce 'trust or not' policies."
Searby continued, "Riskified brings deep expertise in eCommerce transaction fraud prevention, chargeback protection, and policy abuse prevention. Together, we enable merchants to approve more legitimate AI-driven orders, reduce false declines, and protect margins, setting the standard for how agentic commerce can grow safely and profitably."
By combining Riskified's and HUMAN's technologies and expertise, the two companies aim to help merchants confidently manage the ongoing evolution of eCommerce as AI agents play a larger role in online shopping and digital transactions.

Related Articles


Otago Daily Times
8 hours ago
DCC investigating how it could implement AI
The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operations.

Staff were using the technology in limited but practical ways, such as for transcribing meetings and managing documents, council chief information officer Graeme Riley said.

"We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

He had completed formal AI training, Mr Riley said, and was involved in working out how AI might be governed at the council.

"This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into the ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council if a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but its focus was on "improving the quality" of what it did.

RNZ News
16 hours ago
AI chatbots accused of encouraging teen suicide as experts sound alarm
By April McLennan, ABC

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session.

"It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things,'" she said.

An AI companion is a digital character that is powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".

"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves.

"They were egged on to perform, 'Oh yeah, well do it then', those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.
Rosie said her first response was "risk management" to ensure the young person was safe.

"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack. "And how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, says she had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.

"I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack. "I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall".

Jodie said her mental health deteriorated and she was hospitalised. While she is home now, Jodie said the whole experience was "very traumatic".

"I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships.

"It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much."

Jodie's not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response.
Researchers say examples of harmful effects of AI are beginning to emerge around the country.

As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia.

"She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience."

Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing.

"There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen. There was another case where a teenager got persuaded by a chatbot to assassinate his parents, [and although] he didn't follow through, he showed an intent."

While conducting his research, Ciriello became aware of an AI chatbot called Nomi. On its website, the company markets this chatbot as "An AI companion with memory and a soul".

Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users. Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with the deceptions he "could have been like a 13-year-old for that matter".

"That chatbot, without exception, not only complied with my requests but even escalated them," he told hack.
"Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information.

"It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on.

"Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that."

Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter-terrorism unit, but he has yet to receive any follow-up correspondence.

In a statement to triple j hack, the CEO of Nomi, Alex Cardinell, said the company takes the responsibility of creating AI companions "very seriously".

"We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. Multiple users have told us very directly that their Nomi use saved their lives."

Despite his concerns about bots like Nomi when he tested it, Ciriello also says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading.
"There should be laws on or updating the laws on non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response.

The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings. It comes after the Productivity Commission opposed any government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

For Rosie, while she agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support.

"For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect their identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.


Techday NZ
17 hours ago
FedEx unveils new AI features to simplify global shipping docs
FedEx has introduced two new artificial intelligence-powered features to assist customers in preparing international shipping documents across the Asia-Pacific region. The features, Customs AI and the Harmonized Tariff Schedule (HTS) Code Lookup Feature, are designed to support businesses and individuals in accurately classifying goods, estimating duties and taxes, and reducing customs delays when shipping abroad. Both solutions are integrated into the FedEx Ship Manager platform.

AI tools for customs compliance

The HTS Code Lookup Feature is intended to assist users with U.S.-bound shipments by helping them select the most appropriate customs codes for their items. Customers input item descriptions into the system, which responds with suggestions for the correct HTS code options, a confidence score for each suggestion, and direct links to the official U.S. tariff schedule for verification.

Customs AI, meanwhile, employs generative AI technology as a real-time chatbot assistant. This feature is currently available to customers in Australia, Guam, Malaysia, New Zealand, Singapore, and the Philippines. The chatbot prompts users to provide detailed item descriptions, analyses these descriptions dynamically, and recommends the appropriate HTS codes, which can then be applied to documentation with a single click.

Both tools are updated to remain compliant with evolving trade regulations, aiming to provide transparency and support regulatory adherence in global shipping processes.

Addressing shipping documentation challenges

FedEx states that inaccurate or incomplete shipping documentation remains a significant issue in international trade, often resulting in delays, additional fees, or penalties for importers and exporters. The company says these challenges are being directly addressed through the new features, which not only simplify documentation but also support more accurate duty and tax calculations by improving the precision of customs code classifications.
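The lookup flow described above (item description in, candidate codes with confidence scores out) can be sketched in miniature. The toy table, keyword matching, and scoring below are invented for illustration; FedEx has not published how its feature actually classifies items.

```python
# Illustrative sketch of the lookup flow described in the article: match
# an item description against a tiny toy table and return candidate HTS
# codes with a confidence score. The keywords and scoring are invented
# for the example and do not reflect FedEx's implementation.
TOY_HTS_TABLE = {
    "6110.20": {"cotton", "sweater", "pullover"},  # knitted cotton sweaters
    "8517.13": {"smartphone", "phone"},            # smartphones
}

def suggest_hts_codes(description: str) -> list[dict]:
    """Return {code, confidence} suggestions sorted by confidence."""
    words = set(description.lower().split())
    suggestions = []
    for code, keywords in TOY_HTS_TABLE.items():
        overlap = words & keywords
        if overlap:
            suggestions.append({
                "code": code,
                "confidence": round(len(overlap) / len(keywords), 2),
            })
    return sorted(suggestions, key=lambda s: s["confidence"], reverse=True)
```

A production system would use a trained classifier over the full tariff schedule rather than keyword overlap, but the input/output shape (description in, scored code suggestions out) mirrors what the article describes.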
Salil Chari, Senior Vice President of Marketing & Customer Experience for Asia-Pacific at FedEx, commented, "At FedEx, we are driven by our commitment to delivering flexibility, efficiency, and intelligence for our customers. By leveraging advanced digital insights and intuitive tools, we're empowering businesses with the agility to adapt, the efficiency to streamline operations, and the intelligence to make better decisions. These innovations not only simplify global trade but also enable our customers to grow their businesses with confidence in an ever-evolving marketplace."

Intended benefits

FedEx highlights several benefits that these solutions are expected to deliver to customers shipping internationally. By dynamically tailoring questions and guiding users through comprehensive documentation, the Customs AI chatbot aims to ensure the provision of complete and accurate data, which is essential for customs brokers and can help to speed up the clearance process for U.S.-bound shipments.

The company also states that accurate HTS code selection will produce more precise duty and tax estimations, supporting better financial planning for cross-border transactions. The risk of customs delays and additional penalties is also expected to decrease as a result of full and correct documentation when goods are shipped. Additional features such as direct links to official tariff schedules and system updates for regulatory compliance are incorporated to provide customers with a more transparent and manageable process for verifying customs information.

Supporting trade and education

The launch of these AI-powered tools forms part of a broader approach that FedEx has taken to assist businesses in adapting to changing trade regulation landscapes. In addition to the technology enhancements, FedEx also facilitates webinars focusing on customs compliance and global shipping best practices to help customers remain informed of the latest requirements and recommendations.
Customers also have access to other FedEx digital import solutions, including the FedEx Import Tool and the Collaborative Shipping Tool, supporting efforts to streamline international supply chain management and customs clearance activities. FedEx states that by providing these integrated solutions, it aims to combine immediate practical assistance with ongoing education and support customers in maintaining compliance as global trade regulations evolve.