

Data Defense Agents For People

Forbes

a day ago


In a world where AI agents are everywhere, how do we ensure that people still have agency? One idea that's surfacing, albeit in a vague way, is similar to the concept of a service dog or emotional support animal: a person would have a dedicated personal AI entity that works as their guardian angel in a world of peril. Think about trying to navigate all of the AI coming your way as a human: all of the scams, all of the drama of other people's communications, not to mention government and business messaging churned out in automated ways.

'Consumers are out there trying to navigate a really complex marketplace, and as AI is injected into the marketplace by many companies, it's probably going to become even harder for consumers to understand if they're getting a good deal, to understand the different options out there when they're making a purchase,' said Ginny Fahs of Consumer Reports in a recent panel aimed at just such an idea: personal defense AI. 'And so an AI that is loyal to the consumer, loyal to us as individuals, first and foremost, is really going to be essential for building trust in these AI systems, and for … migrating to a more authentic economy.'

Fahs was among a set of expert panelists at Imagination in Action in April, and I found this to be one of the more compelling talks, not least because of interviews I've seen over the last two years. Take the data rights advocate who famously coined the term 'idatity' to describe the intersection of personal data and technology. My colleague Sandy Pentland moderated this group discussion, which covered a lot of thoughts on just how this kind of AI advocacy would work.

'There was a need to reform laws to keep up, to have electronic signatures, electronic contracts, automated transactions,' said panelist Dazza Greenwood of the Internet age, relating that era to today's efforts.
'And I helped to write those laws as a young lawyer and technologist.'

Panelist Amir Sarhangi spoke about the value of trust and familiarity with a person's AI advocate. 'Having that trust being established there, and having the ability to know who the agent is and who the enterprise is, becomes very important,' he said.

'Part of it is this general problem of how do you make sure that agents don't break laws, introduce unexpected liabilities, and (that they) represent the authentic interest of the consumer, and (that they can) actually be loyal, by design?' said panelist Tobin South, who earned his PhD at MIT.

How It Might Work

Panelists also discussed some of the procedural elements of such technology. 'In collaboration with the OpenID Foundation, who kind of leads all the standards and protocols keeping our internet safe, we are pushing forward standards that can help make agents safe and reliable in this kind of new digital age,' South said.

Fahs talked about something her organization developed called a 'permission slip.' 'You could go to a company through the agent, and the agent would say to the company, 'please delete this person's data,' or 'please opt out of the sale of this person's data,'' she said. 'It was a version of an agentic interaction that (existed) prior to the explosion of AI, but where we really were getting an authorization from a user for a specific purpose to help them manage their data, and then going out to a company and managing that transaction, and then reporting back to the customer on how it went.'

On privacy, Greenwood discussed how such systems would deal with laws like California's CCPA, which he called a 'mini-GDPR,' and encouraged people to use the term 'fiduciary' to describe the agent's responsibilities to the user.

Sarhangi talked about the history of building KYA. 'One of the things we started talking about is KYA, which is 'know your agent,' and 'know your agent' really is about understanding who's behind the agent,' he said.
'These agents will have wallets, basically, on the internet, so you know what transactions are being conducted by the agent. And that's really powerful, because when they do something that's not good, then you have a good way of understanding what the history of that agent has been, and that will go as part of their … reputation.'

Crowdsourcing Consumer Information

Another aspect that came up is the agents' ability to pool their users' experiences and share them, automating word of mouth.

'A really key type of thing I'm excited about is what Consumer Reports does without thinking about it,' said Pentland, 'which is compiling all the experiences of all your millions of members to know that 'these blenders are good' and 'those blenders are bad,' and 'don't buy that' and 'you don't trust that dude over there.' So once an agent is representing you, you can begin doing this automatically, where all the agents sort of talk about how these blenders are no good, right?'

Fahs agreed. 'I can so casually mention to my AI agent, 'oh, this purchase, I don't like that one feature,'' she said. 'And if that agent has a memory, and has the ability to coordinate and communicate with other agents, that becomes kind of known in the network, and it means that future consumers can purchase better, or future consumers have more awareness of that feature.'

South added some thoughts on data tools. 'There are many really cool cryptographic tools you can build to make the sharing of data really safe, right?' he said. 'You don't need to trust Google to just own all your data and promise not to do anything wrong with it. There are real security tools you can build into this, and we're seeing this explosion right now.'

South also mentioned NANDA, a protocol being developed by people like my colleague Ramesh Raskar at MIT.
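Sarhangi's 'know your agent' idea, in which an agent's wallet doubles as an auditable transaction history feeding its reputation, can be sketched minimally. Everything below (the class names, the toy scoring rule) is a hypothetical illustration for this article, not an actual KYA implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentWallet:
    """Hypothetical sketch: an agent's append-only transaction log,
    from which a simple reputation score is derived."""
    agent_id: str
    history: list = field(default_factory=list)

    def record(self, action: str, ok: bool) -> None:
        # Transactions are only ever appended, never edited,
        # so the log can serve as an audit trail.
        self.history.append({"action": action, "ok": ok})

    def reputation(self) -> float:
        # Toy scoring rule: fraction of transactions that went well.
        if not self.history:
            return 0.0
        return sum(t["ok"] for t in self.history) / len(self.history)

wallet = AgentWallet("agent-123")
wallet.record("purchase:blender", ok=True)
wallet.record("opt-out-request", ok=True)
wallet.record("unauthorized-charge", ok=False)
print(round(wallet.reputation(), 2))  # 2 of 3 transactions succeeded
```

A real system would sign each entry and anchor the log somewhere tamper-evident; the point here is only that a visible history makes misbehavior part of the agent's record.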
NANDA is a way to build a decentralized Internet with AI, and it seems likely to blossom into one of the supporting pillars of tomorrow's global interface.

Agents and Agency

The panel also talked about some of the logistics, for instance: how will the agent really know what you want?

'You want the user to feel like they can provide very, very fine-grained permissions, but you also don't want to be bugging them all the time saying, 'Do I have permission for this? Do I have permission for that?'' Fahs said. 'And so … what the interface is to articulate those preferences, and to, even, as the agent, have real awareness of the consumer's intent, and where that can be extended, and where there really does need to be special additional permission granted, I think is a challenge that product managers and designers and many of us are going to be trying to thread the needle on.'

'One of the things that current LLMs don't do very well is recognize what a specific person wants,' Pentland added. 'In other words, values alignment for a specific person. It can do it for groups of people, sort of, with big interviews, but an agent like this really wants to represent me, not necessarily you, or you. And I think one of the most interesting problems there is, how do we do that?'

'Finally, we have the tools that (resemble) something like fiduciary loyal agents,' Greenwood said.

'There's an expression going around Stanford, which is: the limiting factor on AI is context: not the size of the window, but your ability to structure information, to feed it to the AI, both for understanding consumers, but to also do preference solicitation,' South said. 'If you want the agent to act on your behalf, or an AI to do things you actually want, you need to extract that information somehow, and so both as individuals, making your data available to AI systems, but also as an organization, structuring information so that AIs can know how to work with your systems.'
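The 'permission slip' Fahs describes, combined with the fine-grained-permission problem above, can be sketched as a scoped-authorization check: the agent acts on the user's behalf only within scopes the user has explicitly granted, and anything outside that scope triggers a fresh request for consent. All names here (`PermissionSlip`, `ConsumerAgent`, the scope strings) are hypothetical illustrations, not Consumer Reports' actual system.

```python
class PermissionSlip:
    """Hypothetical sketch of a user-granted, scoped authorization:
    the agent may act for the user only within these scopes."""
    def __init__(self, user: str, scopes: set):
        self.user = user
        self.scopes = scopes

class ConsumerAgent:
    def __init__(self, slip: PermissionSlip):
        self.slip = slip

    def request(self, company: str, action: str) -> str:
        # Fine-grained check: act only if the user pre-authorized
        # this kind of action; otherwise ask for new permission
        # instead of silently exceeding the mandate.
        if action not in self.slip.scopes:
            return f"NEEDS_PERMISSION: ask {self.slip.user} to approve '{action}'"
        # In a real system this would be an authenticated API call.
        return f"SENT to {company}: {action} for {self.slip.user}"

slip = PermissionSlip("alice", {"delete_data", "opt_out_of_sale"})
agent = ConsumerAgent(slip)
print(agent.request("ExampleCo", "delete_data"))
print(agent.request("ExampleCo", "share_purchase_history"))
```

The design choice mirrors the panel's tension: scopes broad enough that the agent isn't constantly interrupting the user, but narrow enough that anything novel requires explicit consent.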
The Race Toward Personal Advocacy

I think all of this is very necessary right now, in 2025, as we try to really integrate AI into our lives. This is happening, it seems, pretty much in real time, so this is the time to ask the questions, to find the answers, and to build the solutions.

23andMe files for bankruptcy and looks for a buyer - here's what that means for your data

The Independent

24-03-2025


The genetic testing company 23andMe is filing for bankruptcy and looking for a buyer. The company announced the move after losing millions of dollars over the last several quarters. Now, experts are warning that users' genetic data could be in danger. Here's what you need to know about protecting your 23andMe data.

What the sale of 23andMe could mean for your data

Officials across the US are warning 23andMe users to delete their data as soon as possible following the bankruptcy announcement. 'I remind Californians to consider invoking their rights and directing 23andMe to delete their data and destroy any samples of genetic material held by the company,' California Attorney General Rob Bonta said Friday.

23andMe said in a statement there have been 'no changes' to its data storage or protection, and that any buyer of its assets would have to observe applicable privacy laws for customer data. 'Our users' privacy and data are important considerations in any transaction, and we remain committed to our users' privacy and to being transparent with our customers about how their data is managed,' the company said in an open letter to customers.

But once the sale is complete, users' genetic information could be used in unexpected ways by someone else, The Washington Post reports. 23andMe's privacy policy even states that your personal information could be sold or transferred in the event of a sale: 'If we are involved in a bankruptcy, merger, acquisition, reorganization, or sale of assets, your Personal Information may be accessed, sold or transferred as part of that transaction,' the policy reads.

Consumer Reports' Ginny Fahs told The Washington Post users are 'right to be concerned' that their data could be up for grabs. 'The DNA data could be used to discern your relatives and ancestry, unearth family secrets, and reveal clues about diseases you have or could be predisposed to,' Fahs said.
'If the data makes its way to certain insurers, they may deny you coverage or charge you more for life, disability, or long-term care insurance because of your genetics.' 'This is some of the most precious data that exists about you,' she added.

Why is 23andMe filing for bankruptcy?

23andMe has kickstarted voluntary Chapter 11 proceedings in the US, meaning it intends to reorganize its debts and assets for a fresh start while remaining in business. The company is now searching for a buyer. The move comes in the aftermath of a data hack and heavy financial losses. The company has also announced the immediate resignation of Anne Wojcicki, its co-founder and chief executive. Wojcicki said she was 'disappointed' by the bankruptcy plan but that she had resigned so she could 'be in the best position to pursue the company as an independent bidder.'

How to delete your 23andMe data

To delete your 23andMe account, first log into your account. Then go to your profile and hit the 'settings' button. Scroll down to the '23andMe Data' section and hit 'view.' From there, scroll to 'delete data' and select 'permanently delete data.' The company will then send you an email prompting you to confirm the deletion request.
