logo
#

Latest news with #consumerTrust

Customers Are Already Demanding AI Security. Are You Listening?
Customers Are Already Demanding AI Security. Are You Listening?

Forbes

time08-07-2025

  • Business
  • Forbes

Customers Are Already Demanding AI Security. Are You Listening?

AI Data Security There was a time when companies decided how data was collected and used, with little input from the people it came from. That time has passed. Customers are asking sharper questions, expecting accountability, and making choices based on how much they trust the organizations they engage with. Cisco's 2024 Consumer Privacy Survey reflects this shift, with 75% of consumers saying they won't buy from companies they don't trust with their data. More than half have already changed providers because of privacy concerns. In addition, 78% expect AI to be used responsibly. These numbers reflect a change in how people evaluate businesses and therefore, a significant impact on a company's bottom line. Recent findings from Prosper Insights & Analytics reinforce that sentiment. When asked about concerns related to AI, 39% of adults said the technology needs more human oversight. Another 32% pointed to a lack of transparency, and more than a quarter were concerned about AI making incorrect decisions. Respondents also cited fears around job displacement and algorithmic bias, highlighting the demand for responsible AI is rooted in both practical fears and ethical expectations. People want systems they can understand, challenge and trust. Prosper - Concerns About Recent Developments in AI For organizations investing in AI, this change affects how technology decisions are made and how success is measured. AI systems increasingly play a role in customer-facing experiences, whether they're used to deliver product recommendations, support decisions or streamline transactions. These systems operate on personal data and are often judged by the quality of those interactions. That means trust, reliability and transparency have become just as important as accuracy or speed. The security environment is evolving in parallel. New risks are emerging as AI systems become more advanced. Vulnerabilities like model inversion, adversarial prompts and data poisoning create entry points for attackers that didn't exist with traditional software. Appknox recently conducted security reviews of AI-driven apps Perplexity and Deepseek and found issues ranging from weak network configurations to lax authentication and insufficient privacy protections. These findings underscore how new technology introduces new exposure and how security needs to evolve alongside capability. Internally, IT teams are feeling this pressure as they weigh the risks of adoption against the demands of innovation. A ShareGate survey of 650 professionals across North America and Europe showed that 57% of those exploring or deploying Microsoft Copilot identified security and access management as top concerns. Another 57% flagged data retention and quality as areas that needed improvement. These responses suggest that building the right foundation for trust is more important than building models or writing policies. That foundation can be difficult to establish when usage and understanding vary widely across the organization. According to a recent Prosper Insights & Analytics survey, 44% of executives already use generative AI tools like ChatGPT and Copilot, while only 27% of employees report the same. An additional 32% of employees said they've heard of these tools but don't understand them. This gap in experience and understanding introduces operational risk, especially when AI tools are adopted faster than organizations can educate and align their teams. Prosper - Heard of Generative AI Customers are paying attention to how companies approach this. Cisco's research shows that awareness of privacy laws has grown significantly in recent years. More than half of consumers say they are now familiar with their data rights. People are reviewing how their information is used, adjusting settings and opting out when they feel companies don't offer enough control or clarity. This level of engagement shows that trust must be earned, not assumed. Prosper Insights & Analytics data further reinforces this, with 59% of respondents reporting that they are either extremely or very concerned about their privacy being violated by AI systems. These findings reflect a deep emotional undercurrent that companies must take seriously if they want customers to stay engaged and confident in their use of AI-enabled services. Prosper - How Concerned are You About Privacy Being Violated From AI Using Your Data In healthcare, the importance of trust becomes even more pronounced. A recent Iris Telehealth survey found that 70% of respondents had concerns about how their mental health data would be protected when using AI-powered tools. When asked what would influence their trust, people pointed to clear explanations, strong encryption, collaboration with licensed professionals and systems that make it easy to shift from AI assistance to human care. Technology needs to be effective, but understandable and respectful of user autonomy. That expectation extends beyond healthcare. In any industry where AI interacts with customers, explainability matters. Business leaders are seeing that even well-functioning systems can lose credibility if their logic and purpose aren't communicated clearly. The case of Amazon's AI recruiting tool, which was found to disadvantage female applicants due to biased training data, remains a cautionary example. The company ultimately pulled the system, but the incident left a lasting impression of what happens when organizations overlook the importance of oversight and transparency. Responsible AI should reflect how companies see their role in the broader ecosystem of data, ethics, and service. Customers are forming opinions based on whether companies appear to handle information responsibly, communicate honestly and design technology in ways that respect the people who use it. Even simple measures like minimizing how long personal data is stored can signal that a business takes privacy seriously. Those efforts will soon be measured against evolving regulatory frameworks. The EU's AI Act introduces new requirements around transparency and risk management, especially for high-impact systems. In the US, emerging privacy laws are raising expectations across sectors. These legal changes reflect a growing belief that companies need to be more deliberate about how AI systems are developed and deployed. 'AI is evolving fast, but trust moves slower. Businesses need to meet regulatory expectations today while building systems flexible enough to meet tomorrow's. That means aligning with GDPR and the AI Act now, but also investing in explainability, continuous monitoring and ethical review processes. That's how you stay compliant and competitive,' said Bill Hastings, CISO, Language I/O. Many businesses are acting now rather than waiting for regulation. Some are embedding privacy-by-design principles into their development cycles. Others are producing clear AI usage policies and making transparency reports available to customers. Internal education is becoming more common too, with teams working to ensure employees understand how AI tools work and how to use them responsibly. 'Securing AI starts with visibility,' added Hastings. 'You can't protect what you don't fully understand, so begin by mapping where AI is being used, what data it touches and how decisions are made. From there, build in access controls, auditing and explainability features from day one. Trust grows when systems are designed to be clear, not just clever.' Doing this well often requires cross-functional coordination. Security, legal, product and compliance teams must work together from the start, not just at review points. Vendor evaluation processes need to include questions about AI ethics and security posture. Technical audits should examine how models behave under real-world conditions, including how they handle edge cases or unexpected inputs. This level of care goes beyond basic risk avoidance to shaping how customers perceive the entire relationship. Businesses that take the time to explain what their AI systems do, how decisions are made and how information is protected are showing customers they deserve their trust. These are the companies that build deeper loyalty and differentiate themselves in markets where products and services can otherwise feel interchangeable. Trust builds slowly through a pattern of responsible choices, clear communication, and consistent follow-through. AI is a powerful tool, but it works best in the hands of teams that treat security and ethics as shared values, not as checklists. As the landscape continues to evolve, the companies that earn lasting trust will be the ones that take the time to build systems and relationships that are meant to last.

Amazon to ban fake reviewers and punish sellers that cheat system
Amazon to ban fake reviewers and punish sellers that cheat system

Yahoo

time06-06-2025

  • Business
  • Yahoo

Amazon to ban fake reviewers and punish sellers that cheat system

Amazon has vowed to ban users who post fake reviews and punish companies that cheat its rating system after a UK crackdown on the practice. The retail giant has told regulators it will block businesses from selling on its website if they are found to have boosted their star ratings via bogus reviews. It will also take action against users who post the false reviews. Amazon said it would enhance its existing systems for tackling the problem, including instances where reviews of highly rated products are hijacked and added to separate items. Amazon has been investing heavily in clamping down on fake star ratings, using expert investigators to spot bogus reviews. It is also using artificial intelligence (AI) to track how many reviews an account is posting and their sign-in activity to identify potential fraudsters. The company said it blocked more than 275m fake reviews last year. The tech giant told the Competition and Markets Authority (CMA) that it was committed to tackling fake reviews. Sarah Cardell, the chief executive of the regulator, said it meant 'people can make decisions with greater confidence – knowing that those who seek to pull the wool over their eyes will be swiftly dealt with'. She said the CMA would consider further action if shoppers did not see real change. Around 90pc of consumers use reviews to make decisions on what to buy online. Official figures have previously suggested that at least one in 10 of all product reviews on websites used by third-party sellers are likely to be fake. A large proportion are written by bots. However, in some instances sellers 'merge' listings for new products with older and more popular items in order to acquire their review history. In 2019, a study by Which? found that reviews for a soap dispenser and a phone screen cover were listed under a pair of headphones. In a report by the Department for Business and Trade from 2023, officials said 'the presence of well-written 'subtle' fake reviews leads to a statistically significant increase in the proportion of consumers buying the product with these fake reviews'. The Amazon pledges come months after new laws came into force that make online retailers liable for false reviews on their websites. Since April, companies have been legally required to take steps to prevent and remove the publication of fake reviews. The work is being overseen by the CMA, whose interim chairman, Doug Gurr, is the former UK boss of Amazon. He was appointed last year amid a push by ministers to ensure regulators 'tear down the barriers hindering business and refocus their efforts on promoting growth'. Researchers have warned that the rise of AI threatens to lead to more fake reviews. Amazon claims that more than 99pc of all products in its store contain only authentic reviews. A spokesman for Amazon said: 'Ever since we introduced reviews in 1995 we have continued to innovate and work constructively with regulators around the world, including the CMA, to help customers share feedback that is relevant, helpful and trustworthy, in both Amazon's store and across the industry.' Broaden your horizons with award-winning British journalism. Try The Telegraph free for 1 month with unlimited access to our award-winning website, exclusive app, money-saving offers and more.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store