
Meta wins $167 million in damages from NSO Group over Pegasus spyware
A US jury has ordered NSO Group, the company behind the notorious Pegasus spyware, to pay more than $167 million in punitive damages to Meta for deploying malware via WhatsApp. The decision marks a significant legal victory for Meta following years of courtroom battles.
Meta first sued NSO Group in 2019, alleging that the Israeli firm used its Pegasus spyware to target over 1,400 individuals across 20 countries—including journalists, human rights activists, and political dissidents. According to Meta, the malware was delivered through WhatsApp video calls, even if those calls went unanswered.
A US judge previously found that NSO had violated the Computer Fraud and Abuse Act, setting the stage for the jury trial to determine financial damages. On Tuesday, the jury awarded $444,719 in compensatory damages and $167,254,000 in punitive damages to Meta.

Meta calls ruling a 'critical deterrent'
Carl Woog, WhatsApp's VP of Global Communications, welcomed the verdict, calling it 'a critical deterrent to this malicious industry against their illegal acts aimed at American companies and the privacy and security of the people we serve.'
Meta has also said it plans to seek a court injunction to prevent NSO from targeting WhatsApp in the future and hopes to donate the awarded funds to digital rights organisations.

NSO pledges to appeal
NSO Group, which describes itself as a 'cyber intelligence' company, maintained in court that Pegasus cannot be used on US phone numbers and claimed WhatsApp had suffered no actual harm.
Gil Lainer, a spokesperson for NSO, criticised the verdict, calling it 'another step in a lengthy judicial process.' He said the firm would explore 'further proceedings' or an appeal, adding: 'We firmly believe our technology plays a critical role in preventing serious crime and terrorism… this perspective was excluded from the jury's consideration.'
Despite the ruling, Meta acknowledged that recovering the damages may be a lengthy process.

Related Articles


Time of India
Forget texts, calls or emails: Woman catches cheating husband using electric toothbrush. Here's how
In a world where cheating is often uncovered through texts, calls, or shady messages, one UK woman cracked her husband's lies using something far more mundane: an electric toothbrush. Yes, you read that right. It wasn't WhatsApp or a lipstick-stained shirt that gave him away. It was a synced toothbrush app that recorded his every brushstroke. Private investigator Paul Jones, who recounted the bizarre case to Mirror UK, called it one of the strangest infidelity busts he has seen in his career. But in a digital age where even our toothbrushes are smart, perhaps the betrayal wasn't so surprising after all.

Brushing Off the Truth, Until It Brushed Back

The woman had connected the toothbrush app to her phone to monitor her children's dental hygiene. But soon she noticed irregularities in the brushing logs, not from her kids but from her husband. The logs showed he was brushing at home during school hours and regular workdays, especially on Fridays.

At first, the data seemed trivial. But week after week, brushing sessions popped up at suspiciously similar times: late mornings on Fridays, when he was supposedly at work. The digital trail didn't lie. It told a precise, timestamped story: someone was at home brushing their teeth while pretending to be at the office.

The Dirty Truth

Curious and increasingly suspicious, the woman hired Jones to investigate further. What he found confirmed her worst fears. Her husband hadn't worked on a Friday in over three months. Instead, he was inviting his mistress, one of his coworkers, over to the family home while the rest of the household was out.

All the while, his alibi was spotless, or so he thought. But his toothbrush had kept a perfect record of his infidelity, logging every session and outing him with brutal digital honesty.

Data Doesn't Lie, People Do

Jones emphasized the importance of noticing seemingly insignificant digital cues. 'It's timestamped, often location-based, and emotionless,' he said. 'When a device says someone brushed their teeth at 10:48 am on a workday, that's very hard to explain away.' He urged those suspicious of infidelity not to limit their attention to text messages or phone records. Instead, he advised looking into smart devices, be it a toothbrush, a voice assistant, or even a supermarket loyalty app, that quietly collect data in the background.

The New Digital Detectives

Jones isn't the only private eye warning about hidden digital trails. Another UK investigator, Aaron Bond, said supermarket loyalty cards like Tesco's Clubcard are another untapped resource for uncovering deception. These apps log the dates and locations of purchases, meaning a simple trip to a different store can contradict an alibi.

'If your partner says they were working late, but the app shows they were shopping across town, that's a red flag,' Bond explained. From toothbrushes to shopping apps, our everyday devices are evolving into unintentional truth-tellers. While lies can be rehearsed and stories spun, data remains precise and unfeeling. It doesn't care about guilt, shame, or excuses; it just logs what happened and when.
So next time something feels off in your relationship, you might not need to dig through messages or check the call history. Just open the toothbrush app—you might find more than just plaque.
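The pattern Jones describes, a timestamped record that contradicts a stated schedule, is simple to check once the data is exported. Below is a minimal, purely illustrative Python sketch; the log entries, work hours, and function name are invented for the example and are not taken from any real toothbrush app.

```python
from datetime import datetime

# Hypothetical brushing log, in the shape a toothbrush app might export:
# each entry is simply the timestamp of a completed session.
brushing_log = [
    "2025-03-03 07:05",
    "2025-03-03 22:10",
    "2025-03-07 10:48",   # a Friday, mid-morning
    "2025-03-14 10:52",   # another Friday, mid-morning
]

WORK_HOURS = (9, 17)        # hours the partner claims to be at the office
WORKDAYS = {0, 1, 2, 3, 4}  # Monday=0 ... Friday=4

def sessions_during_declared_work_hours(log):
    """Return home brushing sessions that fall inside declared office hours."""
    flagged = []
    for entry in log:
        ts = datetime.strptime(entry, "%Y-%m-%d %H:%M")
        if ts.weekday() in WORKDAYS and WORK_HOURS[0] <= ts.hour < WORK_HOURS[1]:
            flagged.append(ts)
    return flagged

for ts in sessions_during_declared_work_hours(brushing_log):
    print(f"Brushing session at home during declared work hours: {ts}")
```

Running it prints only the two Friday mid-morning sessions, the same kind of anomaly that first caught the woman's eye in the app's logs.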


Time of India
Hey Siri, Am I Okay?: AI tools are being trained to detect suicidal signals
Suicide risk identification on SNS

The prompts fed to AI do not remain confined to everyday requests, such as asking Alexa to play the family's favourite song, asking Siri on a random Tuesday to set a reminder, or asking Google Assistant to find a song by humming it. But what if users, in an especially low moment, were to ask, 'Am I okay?', or type other prompts that hint at a desire to harm themselves, whether through self-harm or suicide? Self-harm and suicide attempts remain alarmingly prevalent, requiring more effective strategies to identify and support individuals at high risk.

Current methods of suicide risk assessment rely largely on direct questioning, which can be limited by subjectivity and inconsistent interpretation. Simply put, their accuracy and predictive value remain limited, regardless of the wide variety of scales used to assess risk; predictive power has barely improved over the past 50 years.

Artificial intelligence and machine learning offer new ways to improve risk detection, but their accuracy depends heavily on access to large datasets that can help identify patient profiles and key risk factors. As outlined in a clinical review, AI tools can help identify patterns in the data, generate risk algorithms, and determine the effect of risk and protective factors on suicide. The use of AI reassures healthcare professionals with an improved accuracy rate, especially when combined with their skills and expertise, even if diagnostic accuracy can never reach 100%.

According to Burke et al., there are three main goals of machine learning studies in suicide: improving the accuracy of risk prediction, identifying important predictors and the interactions between them, and modelling subgroups of patients. At an individual level, AI could allow for better identification of people in crisis and appropriate intervention, while at a population level, algorithms could find groups at risk and individuals at risk of suicide attempts within those groups.

Social media platforms sit on both sides of the mental health crisis: they are often criticized for contributing to it, yet they also provide a rich source of real-time data, enabling AI to identify individuals showing signs of suicidal intent. This is done by analyzing users' posts, comments, and behavioral patterns, allowing AI tools to detect linguistic cues, such as expressions of hopelessness or other emotional signals that may indicate psychological distress. For instance, Meta employs AI algorithms to scan user content and identify signs of distress, allowing the company to reach out and offer support or even connect users with crisis helplines. Studies such as those by the Black Dog Institute also demonstrate how natural language processing can flag at-risk individuals earlier than traditional methods, enabling timely intervention.

There are also companies such as Samurai Labs and Sentinet that have developed AI-driven systems to monitor social media content and flag posts suggesting suicidal ideation. For example, Samurai Labs' 'One Life' project scans online conversations for signs of high suicide risk; upon detecting them, the platform directs the user to support resources or emergency assistance.
In the same manner, Sentinet's algorithms analyze thousands of posts a day, triggering alerts when users express emotional distress and allowing for timely intervention. While AI isn't a replacement for human empathy or professional mental health care, it offers a promising advance in suicide prevention. By identifying warning signs faster and more precisely than human assessment alone and enabling early intervention, AI tools can serve as valuable allies in the fight against suicide.
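The systems described above rely on trained language models and behavioral signals, but the core idea of scanning text for linguistic cues of distress can be sketched in a few lines of Python. The phrase list, threshold, and sample posts below are invented for illustration; this is not the actual logic used by Meta, Samurai Labs, or Sentinet.

```python
# Toy illustration of flagging distress cues in text. Real systems use trained
# NLP models, context, and behavioral patterns rather than a fixed keyword list.
DISTRESS_CUES = [
    "hopeless",
    "can't go on",
    "no reason to live",
    "better off without me",
    "want to disappear",
]

def flag_for_review(post: str, threshold: int = 1) -> bool:
    """Return True if the post contains enough distress cues to warrant human review."""
    text = post.lower()
    hits = sum(cue in text for cue in DISTRESS_CUES)
    return hits >= threshold

posts = [
    "Had a great day at the beach with friends!",
    "I feel completely hopeless, like everyone would be better off without me.",
]

for post in posts:
    if flag_for_review(post):
        print("Flagged for human review:", post)
```

In a real deployment, a flag like this would route the post to trained reviewers or crisis resources, as the article describes, rather than triggering any automated action on its own.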


India Today
Anthropic working on building AI tools exclusively for US military and intelligence operations
Artificial Intelligence (AI) company Anthropic has announced that it is building custom AI tools specifically for the US military and intelligence community. These tools, under the name 'Claude Gov', are already being used by some of the top US national security agencies.

Anthropic explains in its official blog post that Claude Gov models are designed to assist with a wide range of tasks, including intelligence analysis, threat detection, strategic planning, and operational support. According to Anthropic, these models have been developed based on direct input from national security agencies and are tailored to meet the specific needs of classified environments.

'We're introducing a custom set of Claude Gov models built exclusively for US national security customers,' the company said. 'Access to these models is limited to those who operate in such classified environments.'

Anthropic claims that Claude Gov has undergone the same safety checks as its regular AI models but has added capabilities. These include better handling of classified materials, improved understanding of intelligence and defence-related documents, stronger language and dialect skills critical to global operations, and deeper insights into cybersecurity data. While the company has not disclosed which agencies are currently using Claude Gov, it stressed that all deployments are within highly classified environments and the models are strictly limited to national security use. Anthropic also reiterated its 'unwavering commitment to safety and responsible AI development.'

Anthropic's move highlights a growing trend of tech companies building advanced AI tools for defence. Earlier this year, OpenAI introduced ChatGPT Gov, a tailored version of ChatGPT built exclusively for the US government. ChatGPT Gov runs within Microsoft's Azure cloud, giving agencies full control over how it is deployed and managed. The Gov model shares many features with ChatGPT Enterprise, but it places added emphasis on meeting government standards for data privacy, oversight, and responsible AI usage.

Besides Anthropic and OpenAI, Meta is also working with the US government to offer its technology for military use. Last month, Meta CEO Mark Zuckerberg revealed a partnership with Anduril Industries, founded by Oculus creator Palmer Luckey, to develop augmented and virtual reality gear for the US military. The two companies are working on a project called EagleEye, which aims to create a full ecosystem of wearable tech, including helmets and smart glasses, that gives soldiers better battlefield awareness. Anduril has said these wearable systems will allow soldiers to control autonomous drones and robots using intuitive, AR-powered interfaces.

'Meta has spent the last decade building AI and AR to enable the computing platform of the future,' Zuckerberg said. 'We're proud to partner with Anduril to help bring these technologies to the American service members that protect our interests at home and abroad.'

Together, these developments point to a larger shift in the US defence industry, where traditional military tools are being paired with advanced AI and wearable tech.