
Latest news with #EuropeanGDPR

'We follow strict protocols' - popular period tracking app hits back at backlash

Daily Mirror

17 hours ago


A report from the University of Cambridge has claimed that menstrual apps are a risk to privacy, but period tracking app Clue has hit back, detailing exactly how it uses users' data.

After the damning report from the University of Cambridge claimed that select period tracking apps are harvesting and selling user information, popular tracking app Clue has set the record straight. Clue is a science-based, data-driven menstrual and reproductive health app, trusted by 10 million people globally, and despite its mission to help women, it has come under scrutiny following the release of a report from the University of Cambridge's Minderoo Centre.

The report said the tracking apps were a "gold mine" for consumer profiling. By collecting customer data, they could allow companies to produce targeted advertisements linked to information users think is kept private.

Under EU and UK law, the data from these period-tracking apps falls under a special category, which means it should have special protections against being sold on - but the report highlights that consent options are not always enforced or implemented. This then allows the data to be sold to advertisers and tech giants such as Facebook and Google.

However, Clue has assured users the app follows "strict protocols" when it comes to how data is managed, and said keeping its users safe is at its "core". Clue CEO Rhiannon White told The Mirror: "We adhere to the very strict standards the European GDPR sets for data security and storage. This applies to the data we hold regardless of where in the world our users are located. Our policy and firm commitment is that no matter where our users are in the world, we will never allow their private health data to be used against them.

"We have never disclosed such data to any authority, and we never will. Anything that does not fundamentally serve female health and the empowerment of people with cycles would be at odds with our principles at Clue," she added.
One of Clue's missions is to help close the research gap in women's health, and White assured that when using data for research, Clue takes "utmost care" and follows "strict protocols".

She said gaining insight from de-identified data is an "important part of our mission" because the historical lack of data for research into female health is a major contributing factor to the health gap, so Clue shares this anonymised data with researchers from leading global institutions, such as Stanford and the University of Oxford.

"It is up to each user whether they want to help to close that data gap by consenting to their de-identified data being used for this purpose, which is why we offer granular consent options," she said, adding: "This de-identified data is only shared with user consent and all research projects are carefully vetted against our strict criteria to ensure they're in the interest of our community.

"We have never and will never sell or share sensitive data with advertisers, insurers or data brokers. That is not our business model - our business model is direct to consumer subscriptions, ensuring that our users are our customers, and we serve them."

White further detailed that the third-party tools Clue relies on are "vetted and assessed" against the strictest GDPR standards, and assured that the privacy policy transparently details exactly what data is handled by each tool and how.

"Our servers are located in the EU, in Germany and in Ireland. When your data is sent between your device and our Clue servers, we use encrypted data transmission, which scrambles the information being sent so it's unreadable. Doing this increases the security of your data transfer," she added.

But the researchers from the Cambridge study warn that by collecting this information, companies could produce targeted advertisements linked to information users think is kept private.
They also worry that if this data gets into the wrong hands, it could affect access to abortion and lead to health insurance discrimination, cyberstalking and risks to job prospects. "There are real and frightening privacy and safety risks to women as a result of the commodification of the data collected by cycle tracking app companies," said Dr Stefanie Felsberger, the lead author of the report.

The report calls on organisations such as the NHS and other health bodies to create a "safer", trustworthy alternative.

Three Aspects Of Intellectual Property With AI

Forbes

14-04-2025


Amid all of the amazing innovations going on right now, there's a lot of concern about how to treat human 'content', and before anything changes, societies have to look more closely at the issue.

First of all, we have to define human content, and bring a broadness to that category of information. You have creative works like songs, poems and pieces of visual art. But you also have professional intellectual property - information about how someone does their job, their routines, their strategies, and how they excel in their given role. Then you have their likeness: their characteristics, their voices, their faces, and their bodies - what makes them 'them'. All of this is personal data, and all of it should be protected. In fact, you can already see some canary-in-the-coal-mine cases around compliance with the European GDPR.

So the issues of intellectual property and fair use in AI go far beyond what we are used to dealing with in the legal world. With that in mind, here are three aspects of this that leaders are talking about in trying to figure out appropriate regulation for AI systems.

One of the biggest questions is: in all of this broad application of personal data, how do you know when the system has crossed the line from taking general influence to stealing content? In other words, the owners or leaders of AI companies could argue that the systems are just taking information piecemeal from different places, while critics counter that the underlying intellectual property, whatever it is, is being siphoned off. Some would argue that the lawsuits from the NYT and others against OpenAI and other model companies represent a case of exactly this.

I wanted to showcase part of a conversation that Chris Anderson recently had with Sam Altman of OpenAI, and I'll be coming back to it, because they really discussed all three of my categories here.
One part that's relevant to influence begins with Altman describing how AI can take more direct influence, or coalesce from broader training sets, and how it's often hard to tell the difference. 'If you can't tell the difference, how much do you care?'

Chris Anderson: 'So that's what you're saying - it doesn't matter. But isn't that, though, at first glance, just IP theft?'

The two then discuss how there may be more than one human source, and questions about how to divide up the money. Altman seemed to suggest that in his view, humans will still be at the center of the process. The consensus seems to be that it's hard to tell when the system is crossing the line.

Here's some additional thought on fair use in AI: 'The goal of (AI) training is to teach AI how to recognize patterns and generate outputs that mimic human creativity, which has worked to varying degrees of success,' writes Syed Balkhi at copyrighted. 'As I'm sure you can imagine, this crosses critical legal lines. Specifically, most people want to know if using copyrighted material to train an AI is an infringement or is fair use. AI advocates would say that training data is used in transformative ways: the AI does not reproduce the original content; rather, it abstracts patterns to create new works. However, critics say that even such indirect use is an exploitation of copyrighted material, especially when the original creators did not agree to the use of their works. This makes it a serious copyright infringement issue. Legal cases are just beginning to address these issues.'

Another key issue is consent. I've seen this happen before with my own eyes - someone comes on stage with a new AI application, and they talk to the host about what it can do. The host wants a demo, so the presenter works out a little piece based on the host's own data, and they have a laugh about how well the AI system does.
But once in a while, someone will turn to the purveyor of AI systems and say, 'you know, I never really gave consent for that.' And that makes everyone take a minute, and stop and think. How do we enforce consent? This is a question we have to be thinking about.

Going back to the discussion between Altman and Anderson: there's a piece where Anderson brings up the movie 'Her,' and talks about an AI reviewing someone's work, deciding it is good, and influencing people to make a decision on bringing their artistic efforts to the world. This is something that creative people can be pretty excited about. It seems counterintuitive, and sort of strange, that an AI could succeed in promoting an author, artist, or musician where humans have failed. But it makes a kind of odd sense as well. If we understand that AI acts as an agent for our collective consciousness, using the information on the Internet, then we would value its evaluations of anything, whether it's recommendations of previously published content, or enthusiasm for a new unpublished work.

And then there's AGI. Without going directly back to the above conversation, I came up with two themes that Altman and Anderson discussed, and that I've heard elsewhere, characterizing what these systems - systems endowed with a form of artificial general intelligence - will be able to do:

  1. Do my job
  2. Do stuff for me

So first, AGI agents would be able to mine your professional data and replicate your role in your company or business, even if you're a strategy consultant or leader - maybe especially if you're a strategy consultant or leader. The second one is a little more straightforward: AI will be doing tasks for us that we don't want to do ourselves. And that can be a broad range of tasks, from utilizing a robot to do dishes, to coming up with a report or drafting a grant application.
This is all a lot to think about, not just in terms of job displacement, but in trying to create a new regulatory regime for something brand new. Let's keep thinking about it together.
