
Customers Are Already Demanding AI Security. Are You Listening?
There was a time when companies decided how data was collected and used, with little input from the people it came from. That time has passed. Customers are asking sharper questions, expecting accountability, and making choices based on how much they trust the organizations they engage with.
Cisco's 2024 Consumer Privacy Survey reflects this shift, with 75% of consumers saying they won't buy from companies they don't trust with their data. More than half have already changed providers because of privacy concerns. In addition, 78% expect AI to be used responsibly. These numbers reflect a change in how people evaluate businesses, and that change carries directly through to a company's bottom line.
Recent findings from Prosper Insights & Analytics reinforce that sentiment. When asked about concerns related to AI, 39% of adults said the technology needs more human oversight. Another 32% pointed to a lack of transparency, and more than a quarter were concerned about AI making incorrect decisions. Respondents also cited fears around job displacement and algorithmic bias, highlighting that the demand for responsible AI is rooted in both practical fears and ethical expectations. People want systems they can understand, challenge and trust.
Chart: Prosper Insights & Analytics, Concerns About Recent Developments in AI
For organizations investing in AI, this change affects how technology decisions are made and how success is measured. AI systems increasingly play a role in customer-facing experiences, whether they're used to deliver product recommendations, support decisions or streamline transactions. These systems operate on personal data and are often judged by the quality of those interactions. That means trust, reliability and transparency have become just as important as accuracy or speed.
The security environment is evolving in parallel. New risks are emerging as AI systems become more advanced. Vulnerabilities like model inversion, adversarial prompts and data poisoning create entry points for attackers that didn't exist with traditional software. Appknox recently conducted security reviews of AI-driven apps Perplexity and DeepSeek and found issues ranging from weak network configurations to lax authentication and insufficient privacy protections. These findings underscore how new technology introduces new exposure and how security needs to evolve alongside capability.
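To make one of these new risks concrete, the sketch below shows roughly where a basic adversarial-prompt screen might sit in front of a language-model endpoint. Everything here is a hypothetical illustration rather than a production defense: the pattern list, function names and the `call_model` callable are assumptions, and pattern matching alone is easy to evade, so real systems layer on classifier-based filtering, output checks and strict tool permissions.

```python
import re

# Illustrative deny-list of phrases often seen in prompt-injection attempts.
# A real defense would not rely on patterns alone.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
    r"disregard .* safety",
]

def looks_like_injection(text: str) -> bool:
    """Return True if the user input matches a known injection pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def handle_request(user_text: str, call_model) -> str:
    # Screen untrusted input before it ever reaches the model.
    if looks_like_injection(user_text):
        return "Request flagged for review."
    return call_model(user_text)
```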
Internally, IT teams are feeling this pressure as they weigh the risks of adoption against the demands of innovation. A ShareGate survey of 650 professionals across North America and Europe showed that 57% of those exploring or deploying Microsoft Copilot identified security and access management as top concerns. Another 57% flagged data retention and quality as areas that needed improvement. These responses suggest that the hard work lies less in building models or writing policies than in laying the right foundation for trust: secure access and reliable, well-governed data.
That foundation can be difficult to establish when usage and understanding vary widely across the organization. According to a recent Prosper Insights & Analytics survey, 44% of executives already use generative AI tools like ChatGPT and Copilot, while only 27% of employees report the same. An additional 32% of employees said they've heard of these tools but don't understand them. This gap in experience and understanding introduces operational risk, especially when AI tools are adopted faster than organizations can educate and align their teams.
Chart: Prosper Insights & Analytics, Heard of Generative AI
Customers are paying attention to how companies approach this. Cisco's research shows that awareness of privacy laws has grown significantly in recent years. More than half of consumers say they are now familiar with their data rights. People are reviewing how their information is used, adjusting settings and opting out when they feel companies don't offer enough control or clarity. This level of engagement shows that trust must be earned, not assumed.
Prosper Insights & Analytics data further reinforces this, with 59% of respondents reporting that they are either extremely or very concerned about their privacy being violated by AI systems. These findings reflect a deep emotional undercurrent that companies must take seriously if they want customers to stay engaged and confident in their use of AI-enabled services.
Chart: Prosper Insights & Analytics, How Concerned Are You About Privacy Being Violated From AI Using Your Data
In healthcare, the importance of trust becomes even more pronounced. A recent Iris Telehealth survey found that 70% of respondents had concerns about how their mental health data would be protected when using AI-powered tools. When asked what would influence their trust, people pointed to clear explanations, strong encryption, collaboration with licensed professionals and systems that make it easy to shift from AI assistance to human care. Technology needs to be effective, but it also has to be understandable and respectful of user autonomy.

That expectation extends beyond healthcare. In any industry where AI interacts with customers, explainability matters. Business leaders are seeing that even well-functioning systems can lose credibility if their logic and purpose aren't communicated clearly. The case of Amazon's AI recruiting tool, which was found to disadvantage female applicants because of biased training data, remains a cautionary example. The company ultimately pulled the system, but the incident left a lasting impression of what happens when organizations overlook the importance of oversight and transparency.
Responsible AI should reflect how companies see their role in the broader ecosystem of data, ethics, and service. Customers are forming opinions based on whether companies appear to handle information responsibly, communicate honestly and design technology in ways that respect the people who use it. Even simple measures like minimizing how long personal data is stored can signal that a business takes privacy seriously.
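As a small illustration of what minimizing data retention can look like in practice, here is a hedged sketch of a purge job that deletes interaction records once they pass a retention window. The table name, column and 90-day window are assumptions made for the example; the real schema, storage layer and policy would differ by organization.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed policy window for this example

def purge_expired_records(db_path: str) -> int:
    """Delete personal-data rows older than the retention window.

    Assumes a table interactions(created_at TEXT in ISO-8601, ...);
    adapt to your own schema and storage layer.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    with sqlite3.connect(db_path) as conn:
        cursor = conn.execute(
            "DELETE FROM interactions WHERE created_at < ?",
            (cutoff.isoformat(),),
        )
        return cursor.rowcount  # rows purged, worth recording in audit logs
```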
Those efforts will soon be measured against evolving regulatory frameworks. The EU's AI Act introduces new requirements around transparency and risk management, especially for high-impact systems. In the US, emerging privacy laws are raising expectations across sectors. These legal changes reflect a growing belief that companies need to be more deliberate about how AI systems are developed and deployed.
"AI is evolving fast, but trust moves slower. Businesses need to meet regulatory expectations today while building systems flexible enough to meet tomorrow's. That means aligning with GDPR and the AI Act now, but also investing in explainability, continuous monitoring and ethical review processes. That's how you stay compliant and competitive," said Bill Hastings, CISO, Language I/O.
Many businesses are acting now rather than waiting for regulation. Some are embedding privacy-by-design principles into their development cycles. Others are producing clear AI usage policies and making transparency reports available to customers. Internal education is becoming more common too, with teams working to ensure employees understand how AI tools work and how to use them responsibly. "Securing AI starts with visibility," added Hastings. "You can't protect what you don't fully understand, so begin by mapping where AI is being used, what data it touches and how decisions are made. From there, build in access controls, auditing and explainability features from day one. Trust grows when systems are designed to be clear, not just clever."
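Hastings' sequence, mapping where AI is used and then adding access controls, auditing and explainability from day one, can be sketched as a thin wrapper around whatever model a team already calls. The role names, `model_fn` callable and log destination below are hypothetical placeholders; the point is simply that every call is permission-checked and leaves an auditable record of who asked for what and what came back.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

# Hypothetical role-to-permission mapping; in practice this would come
# from an identity provider or policy engine.
ALLOWED_ROLES = {"analyst", "support_agent"}

def audited_model_call(user_id: str, role: str, payload: dict, model_fn):
    """Run a model call with an access check and an audit trail."""
    if role not in ALLOWED_ROLES:
        audit_log.warning("denied user=%s role=%s", user_id, role)
        raise PermissionError(f"role {role!r} may not invoke this model")

    result = model_fn(payload)

    # Record who called the model, on what data, and a summary of the
    # output, so individual decisions can be reviewed and explained later.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "role": role,
        "input_keys": sorted(payload.keys()),
        "output_summary": str(result)[:200],
    }))
    return result
```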
Doing this well often requires cross-functional coordination. Security, legal, product and compliance teams must work together from the start, not just at review points. Vendor evaluation processes need to include questions about AI ethics and security posture. Technical audits should examine how models behave under real-world conditions, including how they handle edge cases or unexpected inputs.
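One lightweight way to exercise the edge cases and unexpected inputs mentioned above is a small audit harness that feeds a model wrapper deliberately awkward inputs and checks that it fails safely. The cases and the `predict` stub here are illustrative assumptions, not a complete audit methodology; in practice the stub would be replaced with the real model client and the cases drawn from the system's actual domain.

```python
def predict(text: str) -> dict:
    """Stand-in for the production model call, used only for this sketch."""
    if not text.strip():
        raise ValueError("empty input")
    return {"label": "ok", "confidence": 0.9}

# Deliberately degenerate inputs a customer-facing system may encounter.
EDGE_CASES = [
    "",                      # empty input
    " " * 10_000,            # oversized whitespace
    "DROP TABLE users;--",   # injection-style text
    "🙂" * 500,              # non-ASCII flood
]

def run_edge_case_audit() -> None:
    for case in EDGE_CASES:
        try:
            result = predict(case)
            # A well-behaved system should not be highly confident
            # about degenerate input.
            assert result["confidence"] < 0.99, f"overconfident on {case!r}"
        except ValueError:
            pass  # failing safely with a clear error is acceptable
    print("edge-case audit complete")

if __name__ == "__main__":
    run_edge_case_audit()
```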
This level of care goes beyond basic risk avoidance to shaping how customers perceive the entire relationship. Businesses that take the time to explain what their AI systems do, how decisions are made and how information is protected are showing customers they deserve their trust. These are the companies that build deeper loyalty and differentiate themselves in markets where products and services can otherwise feel interchangeable.
Trust builds slowly through a pattern of responsible choices, clear communication, and consistent follow-through. AI is a powerful tool, but it works best in the hands of teams that treat security and ethics as shared values, not as checklists. As the landscape continues to evolve, the companies that earn lasting trust will be the ones that take the time to build systems and relationships that are meant to last.