
Latest news with #DanielleCitron

5 questions for Danielle Citron

Politico

25-07-2025



Danielle Citron is a University of Virginia School of Law professor. As a leading expert in online privacy and harassment, she has advised Facebook (now Meta), Twitter (now X), Bumble, Spotify and TikTok on issues like trust and safety. She also served on then-California Attorney General Kamala Harris' task force to combat online exploitation and abuse of women between 2014 and 2016. Citron won the MacArthur Foundation's 'Genius Grant' in 2019 in recognition of her work on sexual privacy on online platforms.

Citron talks to us about how tech companies have dismantled many of the safety features in their systems, and how the country has strayed from its once-bipartisan wariness of data collection. The following has been edited for length and clarity.

What's one big, underrated idea?

That data collection is not an imperative and not a given. Since the late 1950s, with the advent of data banks, we have essentially presumed that data collection is a good, almost in some religious way. We have long ignored this important inclination that collection itself is endangering our privacy and civil liberties. We had moments of agreement in Congress in the 1970s that we shouldn't be collecting data unless we have congressional authorization and a really good reason. These were Democrats and Republicans — we were really worried that the amassing of information was control, and would lead to really dangerous power in the government and private sector.

What technology right now do you think is overhyped?

What we overhype is the magical idea that AGI [artificial general intelligence] will synthesize information in a way that's close to thinking. Normatively it is so troubling, because humanity is messy and wonderful. It's the joy and love and grief.

There are some things I think are underhyped. The promise for me of all this data is health. Let's cure Type 1 diabetes, let's work on cancer. We're underleveraging where it most matters, and we're overleveraging on fake promises.

What do you think the government could be doing now about tech that it isn't?

[Implementing] the precautionary principle. I thought we'd learned our lessons. We built cars without seat belts, and a lot of people died. And then the car industry faced liability. They then had to internalize the costs. We have learned time and again that when you build things just because you can, you don't think about the risks and the harms that aren't in view, and you don't test for them. We really ought to think hard about regulation that requires, especially when it comes to certain technologies, that we just don't build [things] unless we've got not only a proven use case, but also an assessment of harms.

What has surprised you most this year?

Having worked on trust and safety for the last 20 years voluntarily — pretty much no one paid me to help them on nonconsensual intimate imagery and stalking and threats and harassment — what surprises me is the gutting of trust and safety staff at these big companies and the ripping apart of their commitments. I really bought into the story that there was some virtue to what we were doing. I feel like I got snookered into thinking this was genuine. I thought they had seen that people were feeling safer on the platforms. It's just that the eyeballs are much more worth it to them, because the salacious sells better.

What book most shaped your conception of the future?

'Databanks in a Free Society' by Alan Westin and Michael Baker. It's a report for the National Academy of Sciences.
It's like 600 pages in which they show how many data banks we have. We get a sense for counties, localities, states and the federal government amassing information and sharing it. If we looked at this book and took it seriously, we would've really in earnest said, 'Hold on, we've got to do something.' This isn't new, this is not unregulable, this isn't something we don't understand. We've been building databases of information effectively since the mid-1960s, and we've been sharing it through network systems since the early 1970s.

Intel cancels chipmaking plans in the EU

Intel is canceling billion-dollar projects in Germany and Poland, which POLITICO's Pieter Haeck reports will have major consequences for the European Union. During an earnings call late on Thursday, Intel said it was nixing plans to build a €30 billion chip-manufacturing complex in Magdeburg, Germany, and a €5 billion plant in Wroclaw, Poland. The facilities were set to begin production in 2027, and national governments had pledged large subsidies to the efforts. In a note tied to the earnings call, Intel announced that it also expects to cut 15 percent of its employees this year.

Intel's moves are spoiling the EU's ambitions to boost its domestic infrastructure for manufacturing semiconductors. The bloc has pledged to increase its share of the global microchips value chain to 20 percent by 2030, yet the EU reported last year that it had only raised its share to 10.5 percent.

California's AI rules could set a new national standard

After much dissension, California privacy regulators have finalized rules on automated decisionmaking tools, POLITICO's Tyler Katzenberger reports. On Thursday, the California Privacy Protection Agency's (CPPA) board passed the regulations in a unanimous vote. The nation-leading protections enable consumers to opt out of some automated systems that make decisions on everything from college admissions to hiring. However, California's Office of Administrative Law still has to approve the plan.

Labor and privacy advocates, business groups and Big Tech companies such as Apple and Google have been bickering over the scope of the rules. Tyler reports that none of them are happy with the final result, as many of their recommendations didn't make it into the final plan. While companies handling personal data do have to submit to regular audits and risk assessments, the regulations won't apply to generative AI systems or first-party behavioral advertising.

It's 1PM. Do you know where your children are scrolling?

The Verge

19-06-2025



Adi Robertson

Maybe, argues longtime internet law scholar Danielle Citron, sometimes you shouldn't. We've got a slow holiday Thursday here at The Verge, so it's time for me to finally read this paper from early June about alternatives to the 'parental control model' of children's privacy online — a topic that's not going away any time soon. From the paper:

'The parental control model is a wolf in sheep's clothing. It is an empowering façade that leaves parents unable to protect children and undermines the intimate privacy that youth need to thrive. It is bad for parents, children, and parent-child relationships. And it is bad for the pursuit of equality.'
