Judge rejects class action status in Google privacy lawsuit
People who accused Google of illegally collecting their personal information after they chose not to synchronise their Google Chrome browsers with their Google accounts cannot sue the Alphabet unit as a group in a class action, a U.S. judge ruled.
In a decision on Monday, U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, agreed with Google that it was appropriate to address case by case whether millions of Chrome users understood and agreed to its data collection policies.
"Inquiries relating to Google's implied consent defense will overwhelm the damages claims for all causes of action," Rogers wrote.
She dismissed the proposed damages class action with prejudice, meaning it cannot be brought again. The judge also said Chrome users cannot seek policy changes as a group.
David Straite, a lawyer for the plaintiffs, declined to comment on Tuesday. Sandi Knight, vice president of litigation at Google, said in a statement that the company appreciated the decision and that Chrome Sync has "clear privacy controls."
Class actions let plaintiffs seek potentially greater recoveries at lower cost than they could in individual lawsuits.
The decision followed a ruling last August by the federal appeals court in San Francisco, which said Rogers should consider whether reasonable Chrome users consented to letting Google collect their data when they browsed online.
The plaintiffs pointed to Chrome's privacy notice, which said users "don't need to provide any personal information to use Chrome" and that Google would not collect such information unless they turned on the "sync" function.
Rogers had dismissed the case in December 2022. She noted that she oversees two other privacy cases against Mountain View, California-based Google, but said the claims in those cases differed "significantly."
The appeals court ruling followed Google's 2023 agreement to destroy billions of records to settle a lawsuit claiming it tracked people who thought they were browsing privately, including in Chrome's "Incognito" mode.
The case is Calhoun et al v Google LLC, 9th U.S. Circuit Court of Appeals, No. 22-16993.
Related Articles


Time of India
38 minutes ago
Can AI offer the comfort of a therapist?
One evening, feeling overwhelmed, 24-year-old Delhi resident Nisha Popli typed, 'You're my psychiatrist now,' into ChatGPT. Since then, she has relied on the AI tool to process her thoughts and seek mental support. 'I started using it in late 2024, especially after I paused therapy due to costs. It's been a steady support for six months now,' says Popli. Similarly, a 30-year-old Mumbai lawyer, who uses ChatGPT for various tasks like checking recipes and drafting emails, turned to it for emotional support: 'The insights and help were surprisingly valuable. I chose ChatGPT because it's already a part of my routine.'

With AI tools and apps available 24/7, many are turning to them for emotional support. 'More people are increasingly turning to AI tools for mental health support, tackling everything from general issues like dating and parenting to more specific concerns, such as sharing symptoms and seeking diagnoses,' says Dr Arti Shroff, a clinical psychologist. But what drives individuals to explore AI-generated solutions for mental health?

WHY USERS ARE USING AI

Therapy is expensive
'As someone who values independence, I found therapy financially difficult to sustain,' shares Popli. 'That's when I turned to ChatGPT. I needed a safe, judgment-free space to talk, vent, and process my thoughts. Surprisingly, this AI offered just that — with warmth, logic, and empathy. It felt like a quiet hand to hold.'

People feel shy about in-person visits
Dr Santosh Bangar, senior consultant psychiatrist, says, 'Many people often feel shy or hesitant about seeking in-person therapy. As a result, they turn to AI tools to express their feelings and sorrows, finding it easier to open up to chatbots. These tools are also useful in situations where accessing traditional therapy is difficult.'

Nobody to talk to
Kolkata-based Hena Ahmed, a user of the mental health app Headspace, says she started using it after experiencing loneliness. 'I've been using Headspace for about a month now. The AI tool in the app helps me with personalised suggestions on which mindfulness practices I should follow and which calming techniques can help me overcome my loneliness. I was feeling quite alone after undergoing surgery recently and extremely stressed while trying to manage everything. It was responsive and, to a certain extent, quite helpful,' she shares.

Users see changes in themselves
The Mumbai-based corporate lawyer says, 'ChatGPT offers quick solutions and acts as a reliable sounding board for my concerns. I appreciate the voice feature for instant responses. It helps create mental health plans, provides scenarios, and suggests approaches for tackling challenges effectively.' Popli adds, 'My panic attacks have become rare, my overthinking has reduced, and emotionally, I feel more grounded. AI didn't fix me, but it walked with me through tough days — and that's healing in itself.'

CAN AI REPLACE A THERAPIST?
Dr Shroff says, 'AI cannot replace a therapist. Often, AI can lead to incorrect diagnoses since it lacks the ability to assess you in person. In-person interactions provide valuable non-verbal cues that help therapists understand a person's personality and traits.'
Echoing similar thoughts, Dr Bangar says, 'AI can support mental health by offering helpful tools, but it shouldn't replace a therapist. Chatbots can aid healing, but for serious issues like depression, anxiety, or panic attacks, professional guidance remains essential for safe and effective treatment.'

DO CHATBOTS EXPERIENCE STRESS?
Researchers found that AI chatbots like ChatGPT-4 can show signs of stress, or 'state anxiety', when responding to trauma-related prompts. Using a recognised psychological tool, they measured how emotionally charged language affects AI, raising ethical questions about its design, especially for use in mental health settings. In another development, researchers at Dartmouth College are working to legitimise the use of AI in mental health care through Therabot, a chatbot designed to provide safe and reliable therapy. Early trials show positive results, with further studies planned to compare its performance with traditional therapy, highlighting AI's growing potential to support mental wellbeing.

ARE USERS CONCERNED ABOUT DATA PRIVACY?
While some users have not checked whether the data they share during chats is secure, others approach the question cautiously. Ahmed says she hasn't considered privacy: 'I haven't looked into the data security part, though. Moving forward, I'd like to check the terms and policies related to it.' In contrast, Popli shares: 'I don't share sensitive identity data, and I'm cautious. I'd love to see more transparency in how AI tools safeguard emotional data.' The Mumbai-based lawyer adds, 'Aside from ChatGPT, we share data across other platforms. Our data is already prevalent online, whether through social media or email, so it doesn't concern me significantly.'

Experts say most people aren't fully aware of the security risks; there is a gap between what users assume is private and what these tools actually do. Pratim Mukherjee, senior director of engineering at McAfee, explains, 'Many mental health AI apps collect more than what you type — they track patterns, tone, usage, and emotional responses. This data may not stay private. Depending on the terms, your chat history could help train future versions or be shared externally. These tools may feel personal, but they gather data.' He adds, 'Even when users feel anonymous, these tools collect data like IP addresses, device type, and usage patterns. They store messages and uploads, which, when combined, can reveal personal patterns. This data can be used to create profiles for targeted content, ads, or even scams.'

Tips for protecting privacy with AI tools/apps
- Understand the data the app collects and how it's used
- Look for a clear privacy policy, opt-out options, and data deletion features
- Avoid sharing location data or limit it to app usage only
- Read reviews, check the developer, and avoid apps with vague promises

What to watch for in mental health AI apps
- Lack of transparency in data collection, storage, or sharing practices
- Inability to delete your data
- Requests for unnecessary permissions
- Absence of independent security checks
- Lack of clear information on how sensitive mental health data is used


Indian Express
an hour ago
Targeted killing plot by Arsh Dalla foiled, two arrested: Punjab Police
The State Special Operations Cell (SSOC) of Punjab Police has claimed to have foiled a targeted killing plot allegedly masterminded by Canada-based designated terrorist Arsh Dalla. Acting on intelligence, the SSOC apprehended two key operatives: Kawaljit Singh, a resident of Dharamkot, and Navdeep Singh, alias Hani, a resident of Badduwal. According to initial investigations, the duo were operating under Dalla's directions and had been tasked with eliminating rival gang members and extortion targets. 'This timely and intelligence-driven operation has helped dismantle a targeted killing module and has averted a major threat to public safety,' a senior SSOC official said. During the operation, police recovered a Zigana .30 bore pistol along with nine live cartridges from the possession of the accused. An FIR has been registered at the SSOC police station in Mohali under relevant sections of the law. Further investigations are underway to identify both forward and backward linkages of the module and unravel the full extent of the conspiracy. Reaffirming the force's stance, the Punjab Police DGP posted from the official X handle @PunjabPoliceInd: 'We remain committed to eliminating organized crime and ensuring peace, safety, and harmony across the state.'


Time of India
an hour ago
Targeted killing plot foiled, 2 gang members held
Mohali: The Punjab Police State Special Operations Cell (SSOC) on Sunday foiled a targeted killing plot by arresting two key operatives of Canada-based gangster Arshdeep Singh, alias Arsh Dala, from Moga district. Kawaljit Singh, alias Kaka, and Navdeep Singh, alias Honey, both residents of Moga, were arrested, and a pistol and nine live cartridges were recovered from them. According to AIG Ravjot Grewal, the SSOC received intelligence that Dala had assigned the duo a high-profile killing in Faridkot. Acting on this input, a case was registered on June 13 under sections 25 and 25(7) of the Arms Act and Section 61(2) of the Bharatiya Nyaya Sanhita at the SSOC police station in Mohali. The accused were arrested during a raid in Dharamkot, Moga, on Friday. Interrogation revealed that Kawaljit, a repeat offender with NDPS cases against him, was in direct contact with Dala. He had recently received the weapon and Rs 1 lakh in cash to eliminate a Faridkot-based target, along with instructions to kill any family members present at the time. Navdeep was providing logistical support, while another associate had already conducted a recce of the target. Efforts are underway to identify other members of the module. The SSOC has secured a four-day police remand of the accused.