
Latest news with #dataanonymization

Lessons from a startup shutdown: Seattle founder on product-market fit, finding value, and self-worth

GeekWire

a day ago



GeekWire's startup coverage documents the Pacific Northwest entrepreneurial scene. Sign up for our weekly startup newsletter, and check out the GeekWire funding tracker and venture capital directory.

The Privacy Dynamics team at a company offsite. (Photo courtesy of Graham Thompson)

Privacy Dynamics had the signs of startup success. Revenue approached $1 million, happy customers ran billions of records through its platform, and investors put $13 million behind the business. But the Seattle startup, which helped customers reduce data privacy risks, made the difficult decision to shut down earlier this year.

Founded in 2018, Privacy Dynamics built a data anonymization tool to help companies meet regulatory compliance standards such as GDPR and CPRA. Its software processed datasets with personal information and created anonymized versions.

Graham Thompson, CEO and co-founder of Privacy Dynamics, said one of the biggest lessons learned was about identifying value.

'You have to get as close to the value that your customers are getting as possible,' he said. 'For us, the value that they were getting was out of the data we were making available to them.'

Instead of pitching itself as a privacy product, it may have been better for the company to focus on selling anonymized datasets.

Graham Thompson.

'Had we approached it this way from the beginning, I believe a privacy-oriented data brokerage would have been a better play,' Thompson said. 'We could have leveraged our privacy tech to deliver a better data product to customers.'

His advice for other founders: figure out what customers need and value most.

'They'll tell you, but you have to ask the right questions,' he said. 'I think if we did that, and we were really open to the response, we would have heard that the data is what was valuable.'

Customer education was another hurdle for Privacy Dynamics. The company had a new type of product, and it was tough convincing customers to rethink their infrastructure for regulatory compliance with ambiguous enforcement.

'Customers don't know when to use your product or why they need it — that can be incredibly difficult,' he said.

And privacy never rose to become a dominant 'wave' that was top of mind for customers, the way the current generative AI boom has. 'It was never a top-tier problem,' Thompson said.

In the end, product-market fit remained elusive. Privacy Dynamics was able to close deals, but getting there required significant effort and resources.

'We never quite got to the point where I felt there was a correlation between the gas we poured on, or the throttle that we pushed, and the response,' Thompson said.

Thompson, who spent six years at Microsoft before launching Privacy Dynamics, wrote about the process of shutting down the company last month on LinkedIn, saying he 'drastically underestimated how difficult the process would be for me emotionally.'

'There have been many sleepless nights, stress-induced outbursts, and plenty of questioning my professional abilities during this process,' he wrote. 'Shutting down a company flat out sucks.'

But he has learned to separate his own self-worth from the company. 'I highly urge founders to not focus on the outcome and focus on the journey,' he said.

Thompson said having the opportunity to be a founder is the greatest privilege of his professional life, despite also calling it a painful job. 'I wouldn't wish it upon anyone,' he said. 'But I can't wait to do it again.'

Anonymizing Data For AI: Protect Privacy, Preserve Value

Forbes

13-05-2025



Sean Nathaniel is CEO of DryvIQ, the Unstructured Data Management Company trusted by over 1,100 organizations worldwide.

Artificial intelligence (AI) is hungry for data—the more real-world data we can give it, the better. AI applications can deliver immense value when provided with a continuous stream of robust, high-quality data. But a critical challenge looms for organizations increasing their adoption of AI: how to satiate these models without feeding them sensitive information.

Enterprise data often contains personally identifiable information (PII), information about internal processes, customer data or intellectual property (IP). Removing this data from training sets may seem like the safest route, but it can weaken the models' ability to generate quality outputs. For enterprises managing large volumes of unstructured data, leveraging AI without exposing sensitive information is a concern shared by many CSOs and CISOs.

The key to finding the sweet spot between data privacy and business impact is to incorporate data anonymization into AI data readiness strategies. Data anonymization strikes a smart middle ground: it enables organizations to retain the value of their data without compromising their responsibility to protect confidential customer, employee or non-public company data.

The true value of AI comes from its ability to uncover actionable insights from high-volume, context-rich data. Never before have organizations been able to feed knowledge worker content such as meeting notes, call summaries, project outcomes and customer feedback assessments into a machine and quickly identify trends, predict outcomes and use that information to drive business impact or unlock new revenue streams. But much of this information is as sensitive as it is valuable.

Consider a consulting firm using AI to enhance its services. By analyzing past client engagements, including statements of work, customer success metrics and proprietary strategies, consultants can refine their methods to repeat successes, providing more targeted recommendations and driving greater value for future clients. Ensuring that client identifiers, PII, IP and other sensitive data remain protected during this process is essential, but can feel impossible to do at scale. Without the right safeguards, those consultants risk exposing not only confidential customer information but also their own company's private data.

According to Cisco's 2025 Data Privacy Benchmark Study, more than half of respondents admitted to entering personal employee data or non-public information into GenAI tools. When this data is fed into AI systems without protections, businesses expose themselves to risks including data breaches, regulatory penalties and serious damage to their brand and reputation.

Anonymized data plays a crucial role in advancing responsible AI, enabling businesses to mitigate risks while harnessing the full value of real-world information. By classifying, encrypting, redacting or replacing sensitive identifiers within the data, companies can strike a balance between their objective of driving business value with AI and their obligation to protect private information.

• Enhancing Customer Trust: Protecting customer data builds brand trust and confidence. Companies ensuring secure and responsible AI usage position themselves as leaders in an increasingly privacy-conscious landscape.
• Ensuring Regulatory Preparedness: Anonymized, encrypted and redacted datasets mitigate the risk of data leaks and ensure compliance with data protection and privacy laws.
• Protecting Intellectual Property: By using anonymized data, enterprises can safely train AI models without exposing their IP or trade secrets, thereby maintaining their competitive edge.
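As a rough illustration of the redact-and-replace approach described above, the sketch below uses simple regular expressions to mask email addresses and phone numbers in free text. The patterns and placeholder tokens are illustrative assumptions, not any vendor's implementation; production systems would rely on dedicated PII-detection tooling with far broader entity coverage (names, account numbers, addresses, and so on).

```python
import re

# Minimal sketch: regex-based detection of two common PII types.
# Real systems use dedicated PII/NER tooling with much broader coverage.
EMAIL_RE = re.compile(r"[\w+-]+(?:\.[\w+-]+)*@[\w-]+(?:\.[\w-]+)+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def anonymize(text: str) -> str:
    """Replace emails and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

note = "Call Jane at 206-555-0147 or email jane.doe@example.com."
print(anonymize(note))  # Call Jane at [PHONE] or email [EMAIL].
```

Because every match maps to the same token, this variant redacts rather than pseudonymizes; swapping in consistent per-entity surrogates would preserve more analytical value, at the cost of a mapping table to secure.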
To effectively anonymize and prepare data for AI, data owners must go beyond basic data curation and adopt a robust framework for data readiness. We've developed a recommended framework to help our customers prepare their data for the AI era, centered on data being relevant, organized, cleansed and secure. In practice, organizations that use this framework guarantee their data is:

• Relevant: Has the data been curated to include only the information necessary to achieve the desired outcomes (as determined by the business objective of the AI initiative), while excluding outdated, trivial or redundant content?
• Organized: Is the data categorized, labeled and structured to accelerate AI model training? Proper organization helps surface high-value insights while enabling more precise control over sensitive information.
• Cleansed: Have customer identifiers, IP or other sensitive information been anonymized, redacted or encrypted to protect privacy without diminishing the quality of insights?
• Secure: Are robust governance policies and access controls in place to ensure data is only accessible to authorized users and used strictly within the scope of the AI initiative?

Building on the framework of relevant, organized, cleansed and secure data ensures that information is appropriately managed and protected when being used to train AI models. This requires a structured approach from the outset of any AI initiative. Here are four recommended steps to ensure your data is ready for responsible AI use:

1. Audit existing data repositories. Analyze all of your unstructured data repositories to understand where your most sensitive information resides and how it's currently being secured. This helps you assess your current risk level and determine the required steps to prepare your data for specific AI projects.

2. Build privacy-first AI initiatives. Commit to using anonymized datasets to power your AI initiatives. This will maintain the quality of AI outputs without the risk of leaking confidential customer, employee or company information.

3. Adopt a modern data readiness framework. Leverage intelligent data management platforms to continually ensure your data stays relevant, organized, cleansed and secure for your AI initiatives, including the automation of data anonymization techniques.

4. Reinforce governance policies. Establish robust protocols and conduct regular audits to ensure compliance with relevant privacy laws and regulations.

Using high-value knowledge worker data in the AI era doesn't have to come at the cost of privacy. By implementing data anonymization as a core component of their data readiness strategy, organizations can fuel innovation while safeguarding sensitive information and intellectual property. Adopting a robust framework for data readiness enables organizations to prepare their data responsibly for an AI-driven future. Enterprises that prioritize privacy alongside innovation position themselves as leaders in this space, building strong foundations for sustainable success.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
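Returning to step 1 above (auditing existing data repositories): at a small scale that audit amounts to scanning files for likely sensitive content and triaging the riskiest locations first. The sketch below is a hedged approximation under that assumption; the directory layout, file-type filter and patterns are illustrative only, not any product's API.

```python
# Hedged sketch of step 1: audit a repository for likely-PII content.
# Paths, patterns and file types are illustrative assumptions.
import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"[\w+-]+(?:\.[\w+-]+)*@[\w-]+(?:\.[\w-]+)+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit(root: str) -> dict[str, dict[str, int]]:
    """Count likely-PII hits per text file under `root`, for risk triage."""
    report: dict[str, dict[str, int]] = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        hits = {name: len(pat.findall(text)) for name, pat in PII_PATTERNS.items()}
        if any(hits.values()):  # only flag files with at least one hit
            report[str(path)] = hits
    return report
```

Files with the highest hit counts are the natural starting point for the anonymization, access-control and governance steps that follow.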
