Latest news with #digitalrights


TechCrunch
22-07-2025
- Business
Apple alerted Iranians to iPhone spyware attacks, say researchers
Apple notified more than a dozen Iranians in recent months that their iPhones had been targeted with government spyware, according to security researchers. Miaan Group, a digital rights organization that focuses on Iran, and Hamid Kashfi, an Iranian cybersecurity researcher who lives in Sweden, said they spoke with several Iranians who received the notifications in the last year. Bloomberg first wrote about these spyware notifications. Miaan Group published a report on Tuesday on the state of cybersecurity of civil society in Iran, which mentioned that the organization's researchers have identified three cases of government spyware attacks against Iranians, two in Iran and one in Europe, who were alerted in April of this year. 'Two people in Iran come from a family with a long history of political activism against the Islamic Republic. Many members of their family have been executed, and they have no history of traveling abroad,' Amir Rashidi, Miaan Group's director of digital rights and security, told TechCrunch. 'I believe there have been three waves of attacks, and we have only seen the tip of the iceberg.' Rashidi said that Iran is likely the government behind the attacks, although more investigation is needed to reach a conclusive determination. 'I see no reason for members of civil society to be targeted by anyone other than Iran,' he said. Kashfi, who founded the security firm DarkCell, said in an email that he helped two victims go through preliminary forensics steps, but he wasn't able to confirm which spyware maker was behind the attacks. And, he added, some of the victims he worked with preferred not to continue the investigation.
'Pretty much all victims spooked out and ghosted us as soon as we explained the seriousness of the case to them. I presume partly because of their place of work and sensitivity of the matters related to that,' said Kashfi, who added that one of the victims received the notification in 2024. It's unclear which spyware maker is behind these attacks. Over the last few years, Apple has sent several rounds of notifications to people whom the company believes have been targeted with government spyware, such as NSO Group's Pegasus or Paragon's Graphite. This kind of malware is also known as 'mercenary' or 'commercial' spyware. The notifications have helped security researchers who focus on spyware to document abuses in several countries, such as India, El Salvador, and Thailand. On Apple's support page for what the company calls 'threat notifications,' last updated in April, the tech giant said that since 2021 it has notified users 'in over 150 countries,' which shows how widespread the use of government spyware is. Apple does not disclose the names of the countries, nor the total number of people it has notified. To help victims, since last year Apple has recommended that those who receive these threat notifications reach out to the digital rights group AccessNow, which runs an around-the-clock helpline staffed with researchers who can investigate spyware attacks. AccessNow has documented cases of spyware abuse all over the world. Apple did not respond to a request for comment on the notifications sent to Iranians.

Reuters
22-07-2025
- Business
Trips Acquires Mentaport; Rebrands as KINETK
NEW YORK, NY, July 22, 2025 (EZ Newswire) -- Trips, the leading pioneer in IP tokenization, announced today that it has acquired Mentaport, the leading provider of cutting-edge IP watermarking and agentic AI tracking. Together, the two companies will unite under a new name, KINETK, serving IP holders across the entire spectrum of media, from individual creators to global IP enterprises. As part of the transaction, Mariale Montenegro, CEO and co-founder of Mentaport, will be joining KINETK as CTO and co-founder. KINETK brings together two first movers in IP infrastructure, establishing the next generation of IP rights and fixing the current content data model. Today, IP is everywhere, driving consumption behaviors across the global digital economy. KINETK embraces a bigger, more interconnected vision: a future where every piece of content carries its own signature, where IP holders have full transparency into their reach, their rights, and their value wherever their content goes and is consumed. 'From the very beginning, our mission has been to reshape the way creative work is protected, tracked, and valued in the digital world,' said Michael Finkelstein, CEO and co-founder of KINETK. 'We've always believed that IP holders deserve cutting-edge infrastructure to not only defend their work, but also amplify their impact. Today, we're taking a bold step forward in that journey. I am beyond thrilled to partner with Mariale and her entire team in bringing this to life.' 'KINETK captures the core of what we started at Mentaport: a dynamic network powered by invisible watermarking, real-time tracking, and creator-first intelligence,' said Mariale Montenegro, CTO and co-founder of KINETK. 'Every post. Every platform. Every time that content moves, KINETK moves with it. I couldn't be happier to partner with Michael and his team in seeing the initial Mentaport vision come to life.'
To learn more, visit KINETK's website. About KINETK KINETK is a first-of-its-kind platform providing anyone with the unprecedented ability to safeguard, track, and control where their content goes and how it is used. The next generation of IP holders requires a robust and transparent system to manage their IP in real time, across platforms, and without friction. Through the combination of proprietary invisible watermarking, agentic AI detection and tracking, and on-chain registration, KINETK is providing the infrastructure that will underpin the future data models of the digital landscape. Real IP. Real protection. Real speed. Media Contact: Karsen Daily, karsen@ SOURCE: KINETK Copyright 2025 EZ Newswire


Bloomberg
22-07-2025
Iranians Targeted With Spyware in Lead-Up to War With Israel
More than a dozen Iranians' mobile phones were targeted with spyware in the months prior to the country's war with Israel, according to new research. Miaan Group, a digital human rights organization based in Austin, Texas, found a number of Iranians who received threat notifications from Apple Inc. in the first half of 2025, and researchers believe they only identified a fraction of the total targets. Another round of Iranian spyware targets was discovered by Hamid Kashfi, a Sweden-based cybersecurity researcher and founder of the firm DarkCell.


Fast Company
03-07-2025
- Politics
Denmark wants you to copyright yourself. It might be the only way to stop deepfakes
'Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that,' Danish Culture Minister Jakob Engel-Schmidt recently told The Guardian after Denmark introduced an amendment to its copyright legislation so people could own their own likeness. 'In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice, and their own facial features, which is apparently not how the current law is protecting people against generative AI.' The Danish culture minister is right. We need to stop this problem decisively. Deepfakes are a serious problem, one that is fundamentally altering our perception of reality. People are getting bullied, coerced into doing things against their will, and even framed for crimes they didn't commit. Stopping the software will not work. That ship sailed a long time ago. And normal people don't have the resources to fight in court for a deepfake to be taken down. The answer, as the Danish government has recognized, is to include personal likeness in copyright law. The proposal establishes legal definitions for unauthorized digital reproductions, specifically targeting 'very realistic digital representation of a person, including their appearance and voice.' The Danish administration intends to introduce the legislative proposal for public input ahead of the summer parliamentary break, with formal submission planned for autumn. Under the revised copyright framework, Danish citizens would gain legal authority to request removal of nonconsensual deepfake content from digital platforms. The legislation extends protection to cover unauthorized artificial recreations of artistic performances, with potential financial remedies for victims. Creative works such as parody and satirical content remain exempt from these restrictions.
'Of course this is new ground we are breaking, and if the platforms are not complying with that, we are willing to take additional steps,' Engel-Schmidt said. Digital platforms that fail to comply face substantial financial penalties, with potential escalation to European Commission oversight. 'That is why I believe the tech platforms will take this very seriously indeed,' the minister added. Denmark plans to leverage its upcoming EU presidency to promote similar legislative approaches across European nations. Fix copyright to fix the deepfake problem If imposing heavy penalties on any social network or video service that hosts a copyrighted work sounds familiar, that's because it is how the Digital Millennium Copyright Act (DMCA) works in the United States. Under U.S. copyright law and similar systems globally, copyright protection is granted exclusively to original creative works fixed in a tangible form, such as writings, music, artwork, software, films, or photographs. Crucially, copyright law explicitly excludes protection for abstract concepts like ideas, facts, systems, methods, or short phrases, which may fall under trademark law but can't be copyrighted. Most importantly, it does not extend to fundamental aspects of an individual's identity, including their likeness, voice, or persona. Copyright protects specific, authored expressions, like a particular photograph of you or a recording of your voice singing a song, but not the underlying person. Your face, body, or general identity can be reproduced, although there are rights concerning the commercial or personal use of one's likeness, voice, or identity. They are addressed by separate legal doctrines, primarily the right of publicity and the right to privacy. The problem is that, to stop someone from using your likeness under that framework, you need a lot of power and money.
Someone like Scarlett Johansson could take down OpenAI's version of her voice because it sounded too much like her with a simple tweet and the threat of litigation. Likewise, the lawyers of famous people like President Obama or footballer Cristiano Ronaldo can strike down any unsanctioned use of their likeness. 'If Ronaldo complains about a deepfake video of him, a platform will take the video down,' Metaphysic CEO Tom Graham told me in an interview last year about his company's efforts to copyright anonymous people's likenesses. 'But if Joe Schmoe complains about his right of publicity or privacy, the platform will shrug. Unless Mr. Schmoe fires a DMCA complaint, that is. Then YouTube will take down the deepfake instantly, because not complying with a DMCA takedown notice will have serious consequences for YouTube that could reach millions of dollars.' Graham has been trying to fix this issue for a while. Metaphysic was the company that made deepfake Tom Cruise viral and then went on to work with iconic 'brands' like ABBA, Tom Hanks, and Elvis himself, to make legal digital clones for use in concerts, movies, and TV. The last time I spoke with him, his company was working on a pioneering system that allowed famous people and individuals to register the copyright of AI-generated versions of themselves. 'Copyright law says that you can't copyright anything other than works of human authorship,' Graham explains. 'So, you can't copyright yourself because you are from nature, right? You are not a work of human authorship.' But what if you could use AI to create a digital self and copyright that? That would effectively give you rights over any digital representation of yourself, potentially putting you under DMCA protection without the Danish copyright patch. 'What we're doing here is we are creating the AI character of you.
So, just like Disney can own Mickey Mouse and the Avatar characters, you can own the character that happens to look exactly, perfectly like you,' he told me. The process involves creating an AI-generated avatar from user-provided video, which becomes a copyrightable work because it's technically an artificial creation, even though it looks identical to the real person. 'If somebody takes a video of you in real life, you don't have any claim. But if someone makes a character that looks just like you, that looks exactly like your character, then we are trying to say that that unauthorized character infringes your character,' Graham described. 'So, you're not copyrighting yourself. You're copyrighting this AI character of yourself.' In theory, this would give people instant practical enforcement benefits. Under current takedown procedures, when someone with registered copyright complains about infringing content, platforms must remove it within 24 hours. 'That's the remedy. That's the thing you're looking to do,' Graham says. For deepfake victims, this creates a powerful tool. He has already submitted his own AI likeness for copyright registration with the U.S. Copyright Office, though he's still awaiting a decision. The process was designed specifically to address recent Copyright Office decisions that denied protection for AI-generated images from tools like Midjourney, which were deemed to lack sufficient human authorship. 'We designed this system to embed that human control and authorship into every layer of the process. Just the same as if you were using Photoshop to design a new character,' Graham explains. The system requires users to manually curate their video data and select specific frames, creating what Graham argues is sufficient human involvement to qualify for copyright protection. If that sounds convoluted to you, you are not wrong.
Denmark's legislative approach offers a more direct path than the complex workarounds required in countries like the United States. By explicitly granting individuals rights over their digital likeness, the Danish law could provide the legal foundation needed to effectively combat deepfake abuse. Whether the European Union follows Denmark's lead may determine how quickly this new form of digital rights protection spreads across the world, hopefully changing the mind of U.S. legislators in the process.


Times
30-06-2025
- Entertainment
Four-year-olds ‘exploited' by tech giants' app store age ratings
Children as young as four are being exploited because of misleading age ratings on Apple and Google's app stores, it has been claimed. The recommended app store ages for some of the most popular apps, such as Candy Crush Saga, Whiteout Survival and Toca Boca World, are much younger than the limits set by developers in the terms and conditions. This leads to young children being left in the 'firing line' of in-app purchases, targeted advertising and data processing, campaigners say. The Good Law Project and 5Rights, a charity protecting children's digital rights, have filed a legal complaint with the Competition and Markets Authority (CMA) over the issue. Candy Crush Saga, which has 275 million monthly users, has an age rating of 4+ on Apple and 3 on Google, but its terms and conditions say players have to be at least 13. For Toca Boca World, which has 60 million monthly users, the ages are 4+ on Apple and 3 on Google but the terms and conditions say under-18s need parental consent. Whiteout Survival, which has 10 million monthly users, is rated 4+ on Apple and 7 on Google but its policies set a minimum age of 13 and under-18s need parental consent. All these games are free to download but generate revenue from in-app purchases, as well as data processing and advertising. Apple and Google can take up to 30 per cent of this revenue. The disparity arises because the app stores rate games on their content, while developers set age limits based on data-processing laws. Of the top 500 apps by in-app revenue, 45 per cent display a lower age rating in the app store than in their terms and conditions, and 74 per cent have a lower app-store age than their privacy policy, the complaint says. Duncan McCann, Good Law Project's tech and data policy lead, said: 'These tech giants are refusing to do the right thing and act, simply because it is so lucrative not to do so.'
Leanda Barrington-Leach, executive director of 5Rights, said: 'It is unfathomable how Apple and Google can so blatantly mislead consumers.' The CMA is investigating whether Apple and Google have 'strategic market status'. If the regulator finds that they do, it can impose conduct requirements on them. Apple said: 'We are committed to protecting user privacy and security and providing a safe experience for children.' Google said: 'Google Play does not control app ratings — these are the responsibility of the app developers and the International Age Rating Coalition. Ratings in Europe (including the United Kingdom) are maintained by Pan European Game Information.'