Latest news with #AI-manipulated

The Hindu
20 hours ago
- Science
- The Hindu
Cornell researchers working on light-watermarking tactic to detect AI videos
Cornell researchers have proposed a way to help forensic experts distinguish AI-manipulated videos from genuine ones: using specially designed light sources at key events whose output reveals when footage has been morphed. A paper titled 'Noise-Coded Illumination for Forensic and Photometric Video' describes how light sources featured in a video could be secretly encoded through small fluctuations that look like visual noise. In essence, this watermarks the light source itself, rather than trying to watermark every individual video shot at an event to keep those clips from being morphed. The coded light sources carry a 'secret code' that can be used to verify a video's authenticity and check whether the visuals have been manipulated.

Computer scientist and graduate student Peter Michael led the work on Noise-Coded Illumination (NCI). 'Our approach effectively adds a temporal watermark to any video recorded under coded illumination. However, rather than encoding a specific message, this watermark encodes an image of the unmanipulated scene as it would appear lit only by the coded illumination,' the paper states.

This would allow forensic experts to compare a manipulated video to an easily accessible version of the original, instead of having to search for the source material manually. 'When an adversary manipulates video captured under coded illumination, they unwittingly change the code images contained therein. Knowing the codes used by each light source lets us recover and examine these code images, which we can use to identify and visualize manipulation,' the paper states.

The researchers note that such an approach could be useful for public events and interviews, to prevent clips of key meetings from being morphed. Its success, however, depends on widespread adoption of the specially designed lights. As AI-generated and AI-morphed clips become more realistic, experts are looking at more ways to watermark original content. The need of the hour is a watermarking method that even malicious attackers cannot remove from the videos they work with.
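To illustrate the idea at a toy level, here is a minimal Python sketch. It is not the Cornell method itself; the frame counts, flicker strength, array shapes and the matched-filter recovery step are all illustrative assumptions. It simulates a scene lit by a light whose brightness flickers with a secret zero-mean noise code, recovers the 'code image' by correlating each pixel's time series with the known code, and shows how a tampered region stands out because it no longer carries the code.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy parameters (not taken from the paper).
T = 600          # number of video frames
H, W = 32, 48    # toy frame resolution

# Secret zero-mean noise code driving the coded light's brightness over time
# (about 2% flicker, intended to be visually imperceptible).
code = rng.choice([-1.0, 1.0], size=T) * 0.02

# "Code image": the scene as it would appear lit only by the coded light.
code_image = rng.random((H, W))

# Scene contribution from ordinary, un-coded lighting.
ambient = rng.random((H, W)) * 5.0

# Simulated recording: ambient + coded-light contribution + sensor noise.
video = ambient[None, :, :] + code[:, None, None] * code_image[None, :, :]
video = video + rng.normal(scale=0.01, size=video.shape)


def recover_code_image(frames: np.ndarray, known_code: np.ndarray) -> np.ndarray:
    """Matched-filter estimate: correlate each pixel's time series with the
    known code to demodulate the code image hidden in the flicker."""
    c = known_code - known_code.mean()
    centered = frames - frames.mean(axis=0)   # remove the static ambient component
    return np.tensordot(c, centered, axes=(0, 0)) / np.sum(c ** 2)


recovered = recover_code_image(video, code)

# Simulate tampering: overwrite a patch (e.g. a pasted-in object) with content
# that never saw the coded light, destroying the code in that region.
tampered = video.copy()
tampered[:, 10:20, 15:30] = ambient[10:20, 15:30]
recovered_tampered = recover_code_image(tampered, code)

# The residual against the expected code image stays small where the video is
# genuine and becomes large where it was manipulated.
residual = np.abs(recovered_tampered - code_image)
print("max residual, untouched region:", residual[:10, :10].max())
print("max residual, tampered region: ", residual[10:20, 15:30].max())
```

The real NCI system deals with physical light sources, camera response and human-imperceptibility constraints that this sketch ignores; it is only meant to show why a manipulated region that never saw the coded light fails the recovery check.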


India.com
15-07-2025
- Entertainment
- India.com
Archita Phukan aka Babydoll Archi's shocking truth: Ex-boyfriend's chilling revenge made her an adult star due to...
When Archita Phukan, also known as 'Babydoll Archi', first lit up the digital space, it was for her glam transitions and eye-catching reels. But no one, least of all Archita, knew that this overnight fame would turn into a deeply disturbing digital betrayal. What began as social media admiration quickly spiralled into one of India's most troubling AI-driven cybercrime cases.

Who is Archita Phukan, and how did she become an internet sensation?
Born and raised in Assam, 23-year-old Archita Phukan was already a rising social media influencer before her name exploded across trending charts in early July 2025. It all began with a now-viral transition reel set to Kate Linn's Dame Un Grrr, in which Archita switches from a saree to a high-glam avatar. The video racked up millions of views, making her a new face of internet virality.

What happened with Kendra Lust?
A seemingly innocent picture with American adult star Kendra Lust pushed things even further. Archita posted a selfie with Kendra along with a caption expressing admiration: 'Felt truly inspired after meeting her.' That one post triggered a wildfire of speculation, with meme pages and online gossip circuits claiming Archita was headed for the adult film industry.

When did things turn dark?
As the buzz grew, a page titled Just Assam Things began circulating claims that Archita wasn't real at all. Some accused her of being AI-generated, pointing out that her page lacked behind-the-scenes content or public sightings. The name on the profile also changed from Archita Phukan to 'Amira Ishtara.' Suspicion turned to conspiracy. But the truth was far worse.

Who was behind it all?
It was a revenge plot crafted by none other than Archita's ex-boyfriend. After a painful breakup, 27-year-old Pratim Bora, a mechanical engineer from Tinsukia, allegedly stole Archita's photos from social media and morphed them using AI tools. The result was a fake, explicit persona named Babydoll Archi. The entire online identity, from the viral reel to the edited images with Kendra Lust and the suggestive bios, was created without Archita's knowledge. She only found out after the profile exploded online. Her brother filed an FIR with the Dibrugarh police, who later traced the fake content back to Pratim's device.

What was his motive?
'This was a deliberate attempt to malign her image following a personal fallout,' confirmed the Dibrugarh police chief. 'The visuals are fake, and Archita has no connection with adult content or any activities abroad.' Pratim's aim, investigators revealed, was to humiliate Archita and damage her reputation using AI-manipulated content.

The case has sparked national outrage, particularly among women and digital rights activists. Influencers are demanding tougher cyber laws and clearer accountability when it comes to AI misuse. The Archita Phukan case is no longer just a personal tragedy; it is a wake-up call. Archita's case isn't isolated. As AI tools become more accessible, so does the potential for tech-driven revenge and defamation. Her story is a chilling reminder that one viral post can turn into a digital warzone, especially when the attacker is someone you once trusted.

Engadget
24-06-2025
- Politics
- Engadget
The Oversight Board calls Meta's uneven AI moderation 'incoherent and unjustifiable'
As Meta's platforms fill up with more AI-generated content, the company still has a lot of work to do when it comes to enforcing its policies around manipulated media. The Oversight Board is once again criticizing the social media company over its handling of such posts, writing in its latest decision that its inability to enforce its rules consistently is "incoherent and unjustifiable."

If that sounds familiar, it's because this is the second time since last year the Oversight Board has used the word "incoherent" to describe Meta's approach to manipulated media. The board had previously urged Meta to update its rules after a misleadingly edited video of Joe Biden went viral on Facebook. In response, Meta said it would expand its use of labels to identify AI-generated content and apply more prominent labels in "high risk" situations. These labels note when a post was created or edited using AI. (A screenshot from Meta shows an example of the label applied when the company determines a piece of AI-manipulated content is "high risk.")

This approach is still falling short, though, the board said. "The Board is concerned that, despite the increasing prevalence of manipulated content across formats, Meta's enforcement of its manipulated media policy is inconsistent," it said in its latest decision. "Meta's failure to automatically apply a label to all instances of the same manipulated media is incoherent and unjustifiable."

The statement came in a decision related to a post that claimed to feature audio of two politicians in Iraqi Kurdistan. The supposed "recorded conversation" included a discussion about rigging an upcoming election and other "sinister plans" for the region. The post was reported to Meta for misinformation, but the company closed the case "without human review," the board said. Meta later labeled some instances of the audio clip but not the one originally reported.

The case, according to the board, is not an outlier. Meta apparently told the board that it can't automatically identify and apply labels to audio and video posts, only to "static images." This means multiple instances of the same audio or video clip may not get the same treatment, which the board notes could cause further confusion.

The Oversight Board also criticized Meta for often relying on third parties to identify AI-manipulated video and audio, as it did in this case. "Given that Meta is one of the leading technology and AI companies in the world, with its resources and the wide usage of Meta's platforms, the Board reiterates that Meta should prioritize investing in technology to identify and label manipulated video and audio at scale," the board wrote. "It is not clear to the Board why a company of this technical expertise and resources outsources identifying likely manipulated media in high-risk situations to media outlets or Trusted Partners."

In its recommendations to Meta, the board said the company should adopt a "clear process" for consistently labeling "identical or similar content" when it adds a "high risk" label to a post. The board also recommended that these labels appear in a language that matches the rest of the user's settings on Facebook, Instagram and Threads. Meta didn't respond to a request for comment. The company has 60 days to respond to the board's recommendations.


DW
20-06-2025
- Business
- DW
DW appoints Barbara Massing as new Director General – DW – 06/20/2025
Germany's international broadcaster will be headed by a woman for the first time, after the DW Broadcasting Council appointed Barbara Massing as its new Director General. She will replace Peter Limbourg on October 1, 2025.

German international broadcaster Deutsche Welle announced Friday that Barbara Massing will replace Peter Limbourg as the company's director general on October 1, 2025. "I am thrilled to appoint Barbara Massing as the next director general," said Karl Jüsten, chair of the DW Broadcasting Council and its selection committee. "She brings not only top-tier leadership and journalistic expertise but also the strategic foresight needed to position Deutsche Welle for long-term success in a challenging global media environment."

As managing director for Business Administration, Massing has been key to expanding DW programming as well as streamlining the organization, said Jüsten, who emphasized that she "is exactly the leader Deutsche Welle needs to strengthen its role as a trusted, independent global voice for democracy and freedom." Achim Dercks, deputy director of DW's Advisory Board, also praised Massing's success in expanding and restructuring DW's activities and pledged to work alongside her to ensure that DW "remains a relevant voice in the world, providing people with free information" in what he described as "geopolitically challenging times."

Massing thanked the council for its trust in her leadership and for the opportunity to help shape DW's future. "Fact-based, reliable journalism is our most valuable asset and it is more important now, in times of AI-manipulated content and disinformation, than it has ever been," said Massing on Friday. Her nomination was put forward after a unanimous decision by the Broadcasting Council's seven-member selection committee. She will replace outgoing Director General Peter Limbourg, who announced his retirement in September 2024 after holding the position since 2013.

Barbara Massing will be the first woman to lead DW
A fully qualified lawyer, Massing joined DW in 2006 and became part of its Management Team in 2014, after previously working as a producer for German public broadcaster ARD and for the Franco-German broadcaster Arte. Massing, who holds positions on, among others, the advisory boards of the city of Bonn's International Beethovenfest and the University Hospital Bonn, will become the first woman to lead DW since its founding on May 3, 1953. During her career, Massing has focused on digital transformation, organizational culture and sustainability.

The director general is responsible for steering and coordinating DW's strategic and operational activities in close collaboration with its governing bodies. According to the DW Act, the director general must be elected by secret ballot of the Broadcasting Council for a term of six years. Re-election to the post is permitted, and a two-thirds majority is required for appointment.

DW is Germany's independent international broadcaster. It provides news and information in 32 languages around the world through TV, online and radio services, reaching 320 million users every week, and employs around 4,000 people from 140 different countries. DW's work focuses on topics such as freedom and human rights, democracy and the rule of law, world trade and social justice, health education and environmental protection, and technology and innovation.


Time of India
13-06-2025
- Politics
- Time of India
‘You're going to take my job': Donald Trump praises first lady Melania after revenge porn law passes; watch video
US President Donald Trump praised first lady Melania Trump's push to protect victims of digital exploitation, joking that she might end up taking his job after her widely backed bill cleared Congress with overwhelming bipartisan support. Speaking during the annual Congressional Picnic on the South Lawn of the White House, Trump singled out Melania for helping pass the Take It Down Act, a new law that criminalises the sharing of explicit images, including deepfakes, without consent.

'When I saw that bill passed bipartisan, I said, "You know, I think you're going to take my job, Melania,"' Trump said to cheers, with the first lady standing beside him on the Truman Balcony. 'We don't get so much bipartisan,' he added.

The new law was signed by the president last month and is the first federal legislation targeting revenge porn and the spread of manipulated sexual images made using artificial intelligence. The bill passed the Senate by unanimous consent and was approved in the House with only two lawmakers opposing it, Thomas Massie of Kentucky and Eric Burlison of Missouri. Melania Trump had played a major role in championing the measure, which aims to protect both children and adults from digital abuse.

'We've even come together on a bipartisan basis with the help of our great first lady to pass the Take It Down Act, protecting our youth from exploitation,' the president said. Trump also revealed that his wife was surprised at how rare bipartisan cooperation is in Washington. 'She said, "Why is that?" I said, "There is no reason for it. But you did it,"' he told lawmakers and guests. 'Congratulations. It's a great job.'

The law requires websites and social media platforms to remove non-consensual explicit content, including AI-manipulated images, within 48 hours of a request from the victim. Offenders who knowingly post such material can now face prison time. At the bill's signing ceremony, Trump handed the pen to Melania for her to add her signature. Melania called the law 'a national victory that will help parents and families protect children from online exploitation.'

Earlier this year, Melania warned of the growing dangers posed by artificial intelligence and social media. 'It's heartbreaking to witness young teens, especially girls, grappling with the overwhelming challenges posed by malicious online content like deep fakes,' she said. 'This toxic environment can be severely damaging.' She also cautioned that new digital technologies are being misused. 'Artificial intelligence and social media are the digital candy for the next generation — sweet, addictive and engineered to have an impact on the cognitive development of our children,' she said. 'They can be weaponised, shape beliefs and, sadly, affect emotions and even be deadly.'

The law holds major tech companies accountable for removing abusive content and aims to curb the spread of harmful material. Platforms such as Meta, Snapchat and TikTok have voiced support for the act. However, some digital rights groups have expressed concern that it could lead to censorship or misuse through false takedown requests. The measure gained momentum following several high-profile incidents, including AI-generated sexual images of celebrities such as Taylor Swift and Rep. Alexandria Ocasio-Cortez, as well as of ordinary young women.
The Take It Down Act was introduced last year by Senators Ted Cruz and Amy Klobuchar.