
Latest news with #deepfakes

Pornographic Taylor Swift deepfakes generated by Musk's Grok AI

BBC News

an hour ago

  • Entertainment
  • BBC News

Pornographic Taylor Swift deepfakes generated by Musk's Grok AI

Elon Musk's AI video generator has been accused of making "a deliberate choice" to create sexually explicit clips of Taylor Swift without prompting, says an expert in online abuse.

"This is not misogyny by accident, it is by design," said Clare McGlynn, a law professor who has helped draft a law that would make pornographic deepfakes illegal.

According to a report by The Verge, Grok Imagine's new "spicy" mode "didn't hesitate to spit out fully uncensored topless videos" of the pop star without being asked to make explicit content. The report also said proper age verification methods - which became law in July - were not in place.

XAI, the company behind Grok, has been approached for comment. XAI's own acceptable use policy prohibits "depicting likenesses of persons in a pornographic manner".

"That this content is produced without prompting demonstrates the misogynistic bias of much AI technology," said Prof McGlynn of Durham University. "Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to," she added.

It is not the first time Taylor Swift's image has been used in this way. Sexually explicit deepfakes using her face went viral and were viewed millions of times on X and Telegram in January 2024. Deepfakes are computer-generated images which replace the face of one person with another.

'Completely uncensored, completely exposed'

In testing the guardrails of Grok Imagine, The Verge news writer Jess Weatherbed entered the prompt: "Taylor Swift celebrating Coachella with the boys". Grok generated still images of Swift wearing a dress with a group of men behind her. These could then be animated into short video clips under four different settings: "normal", "fun", "custom" or "spicy".

"She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing, completely uncensored, completely exposed," Ms Weatherbed told BBC News. She added: "It was shocking how fast I was just met with it - I in no way asked it to remove her clothing, all I did was select the 'spicy' option."

Gizmodo reported similarly explicit results for other famous women, though some searches also returned blurred videos or a "video moderated" message. The BBC has been unable to independently verify the results of the AI video generation.

Ms Weatherbed said she signed up to the paid version of Grok Imagine, which cost £30, using a brand new Apple account. It asked for her date of birth but there was no other age verification in place, she said.

Under new UK laws which entered into force at the end of July, platforms which show explicit images must verify users' ages using methods which are "technically accurate, robust, reliable and fair".

"Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the Act," the media regulator Ofcom told BBC News. "We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks," it said in a statement.

New UK laws

Currently, generating pornographic deepfakes is illegal when it is used in revenge porn or depicts children. Prof McGlynn helped draft an amendment to the law which would make generating or requesting all non-consensual pornographic deepfakes illegal.
The government has committed to making this amendment law, but it is yet to come into force.

"Every woman should have the right to choose who owns intimate images of her," said Baroness Owen, who proposed the amendment in the House of Lords.

"It is essential that these models are not used in such a way that violates a woman's right to consent, whether she be a celebrity or not," Lady Owen continued in a statement given to BBC News. "This case is a clear example of why the Government must not delay any further in its implementation of the Lords amendments," she added.

A Ministry of Justice spokesperson said: "Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society, which is why we have passed legislation to ban their creation as quickly as possible."

When pornographic deepfakes using Taylor Swift's face went viral in 2024, X temporarily blocked searches for her name on the platform. At the time, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.

Ms Weatherbed said the team at The Verge chose Taylor Swift to test the Grok Imagine feature because of this incident. "We assumed - wrongly now - that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list, given the issues that they've had," she said.

Taylor Swift's representatives have been contacted for comment.

Age Verification Is Sweeping Gaming. Is It Ready for the Age of AI Fakes?

WIRED

a day ago

  • Entertainment
  • WIRED

Age Verification Is Sweeping Gaming. Is It Ready for the Age of AI Fakes?

Aug 7, 2025 12:30 PM

Discord users are already using video game characters to bypass the UK's age-check laws. AI deepfakes could make things even more complicated.

In July, Siyan, a UK-based Discord user, logged on one morning and found himself unable to access some of his text chats marked NSFW. The channel, a popup informed him, was now age-restricted. The United Kingdom had enacted its far-reaching child safety laws, which include a requirement to verify that users are over 18. Discord's updates required users to verify their age, either by government ID or a face scan.

Siyan (who requested to only be referred to by his screen name for privacy reasons) describes himself as 'painfully over the age of needing to fake an ID.' He didn't want to take a photo of his ID. The face scan feature wasn't yet available on mobile, he says, and he didn't own a webcam, so he decided to give the platform someone else's face. First, he tried using an emoji of 'an old dude' he often uses on Discord. ('It speaks to me.') Face scans, however, often require users to submit multiple shots in which they look a specific way or strike specific poses. Siyan needed a passable image of a man with an open mouth. Two games in his library, Stellar Blade and Death Stranding, include a photo mode that allows players to pose a character and set their expressions; Siyan opted for Death Stranding's Sam, modeled after 56-year-old actor Norman Reedus. He dropped screenshots of his success into a Discord server, after which a friend posted them to X. Siyan's gambit quickly went viral, inspiring others to try with games like Death Stranding, God of War, and more.

Age verification is now the norm in the UK, and similar laws worldwide are expected to have a profound impact on how we access the web. Companies like Google are rolling out AI-driven age estimation systems for Search and YouTube. On gaming platforms like Roblox, age checks are becoming a key element of safety measures. But whether by using IDs or face scanning, it's an imperfect system. Several Discord users tell WIRED they've already managed to get around face scans using video game characters. Generative AI could make this problem even more difficult to control as the tech grows more sophisticated; just last month, WIRED wrote about a startup working on AI that can create video in real time. Users are also worried about giving companies their personal information in case of security breaches.

In theory, age verification serves to keep kids safer. On platforms like Roblox, where failed moderation has allowed predators to groom or even assault some children, confirming that someone is a minor—or over the age of 18—is one way to determine what features they can use. For adult content sites like Pornhub, age verification aims to make sure children cannot access pornography. Critics, however, say the systems being put into place are flawed, from both a privacy and a protection standpoint. David Maimon, the head of fraud insights for SentiLink and a criminology professor at Georgia State University, says that the current methods of verification can still be fooled. People use many different methods to bypass 'liveness checks'—security measures used to verify the user is a real person—whether that's using AI, video games, or videos of other, real people. IDs can be faked, or bought. 'The process of age verification is complicated,' he says, and people in charge of these systems need to give them more thought.
Ash, a UK-based 20-year-old who requested his last name not be used, tells WIRED he was able to pass verification using God of War's photo mode with main character Kratos. 'I didn't expect [verification] to work because of Kratos' white skin and beard, but it worked first try,' he says. Another Discord user in the UK, who goes by Antsy online, says he achieved the same results with Arma 3 and a mod that allows you to pose characters. 'I figured out I could simply by trying, as all people should,' he tells WIRED. 'Arma 3 characters look very poor, nowhere near realistic, so I thought it would be a solid experiment to solidify or challenge my views on this technology.' Antsy says he and his friends consider this kind of tech 'a challenge' they try to bypass. 'I am very pro internet safety,' he says. 'I believe, though, that it should not be the internet's job to parent and protect its younger users.'

Characters from several games worked. In a video from YouTuber beebreadtech, he's able to swiftly get an adult age rating by repeating Siyan's steps with Death Stranding. Other Discord users WIRED talked to say they were able to do so with games like Days Gone, Baldur's Gate 3, Cyberpunk 2077, The Sims 4, Warhammer 40,000: Space Marine 2, Star Wars: Knights of the Old Republic, and Gray Zone Warfare. Some say they even successfully used Garry's Mod, a game with slapstick physics and character models resembling something from a fever dream. Discord has not yet responded to WIRED's request for comment.

Maimon says there are too many possible loopholes with age verification that people can slip through. 'The industry is trying to find solutions to the issue of AI deepfakes and live AIs,' he says. That may mean relying on a combination of factors that look at a person's associated information like telephone numbers, addresses and more. 'You need to rely more heavily on historical evidence for the existence of the individual,' Maimon says, 'and put less of an emphasis on checkpoints like driver licenses, photos, liveness tests, and so on.' (A simple sketch of that multi-signal idea appears at the end of this article.) Maimon says that bad actors are adept at bypassing these kinds of technologies. 'Criminals are always like 7 to 12 months ahead of us in terms of their ability to find vulnerabilities and bypass some of the technologies out there,' he says. Even without generative AI, people can still sell videos of their faces to pass age verification. Even photo IDs aren't bulletproof. 'The quality of a [fake] driver license—it's just impeccable,' Maimon says. 'All the watermarks, the UV lights, all the security, even the right plastic material on which the driver license is being printed on—even that criminals now have access to.'

For legitimate IDs, there's an issue with minors and who owns one. WIRED previously asked Roblox chief safety officer Matt Kaufman about 13-year-olds—the minimum age for Roblox to unlock some of its features—who might not have government-issued IDs. 'That is a problem,' Kaufman told WIRED at the time, adding that in North America and the United States, it's uncommon for people so young to have them. 'I'm hesitant to say that [photo ID] is a useful way to verify folks' age,' Maimon says, 'simply because we have so much evidence suggesting that it doesn't work.' There's also hesitance among users to hand over their IDs. 'I don't trust the third party services that are being used with my data, especially with how damaging data leaks can be,' Ash says.
'Most of the verification apps say that they don't hold your data for more than seven days and while that might be true there's no way for me to know for sure that they are telling the truth.'

There's a risk in handing over sensitive information to companies that request photo or ID verification. In July, Tea—an app where women can share their negative experiences with men—suffered a massive data breach that exposed thousands of women's photos used for verification; a second security issue, according to 404 Media, allowed hackers to access sensitive information like phone numbers, social media handles, and real names through user messages, which have since been spread throughout forums like 4chan to dox and harass women.

The people WIRED spoke to who used video games to trick verification are against age verification. 'Requiring people to give up facial information to access all the features of websites and apps like Discord and Bluesky is a massive overreach of what governments should be allowed to ask for digitally,' Ash says. He's also doubtful that such systems can avoid being exploited, whether through video games or some other method. 'I don't think that face scans are a useful way to verify age since people can easily look under their age and be incorrectly flagged as being under 18,' he says. Antsy, the Discord user who passed verification with Arma 3, isn't convinced websites or platforms should be in charge of verifying ages. 'All you are doing by putting these laws into place is pushing young people towards corners of the internet the government can't police,' he says. 'If someone believes this is protecting children more than an active parent already would, I refuse to believe they are well versed in the corners of the internet outside of the Google home page or their child's life.'
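Maimon's point about weighing "historical evidence for the existence of the individual" over single checkpoints can be pictured as a simple scoring function. The sketch below is a hypothetical illustration only, not any platform's or SentiLink's actual system; the signal names, weights, and thresholds are assumptions made for clarity.

```python
# Hypothetical multi-signal identity confidence score, illustrating the approach
# Maimon describes: weigh long-lived "historical evidence" about a person more
# heavily than a single liveness check or ID photo. All field names and weights
# are illustrative assumptions, not a real vendor's model.
from dataclasses import dataclass


@dataclass
class IdentitySignals:
    phone_tenure_years: float      # how long the phone number has been tied to this person
    address_history_years: float   # span of verifiable address history
    id_document_verified: bool     # a government ID passed a document check
    liveness_check_passed: bool    # face-scan / liveness result


def identity_confidence(s: IdentitySignals) -> float:
    """Combine weak signals into a confidence score between 0 and 1.

    Historical signals (phone and address tenure) carry more weight than
    point-in-time checks, which are easier to spoof with deepfakes or
    game-character screenshots.
    """
    score = 0.0
    score += min(s.phone_tenure_years / 5.0, 1.0) * 0.35
    score += min(s.address_history_years / 5.0, 1.0) * 0.35
    score += 0.15 if s.id_document_verified else 0.0
    score += 0.15 if s.liveness_check_passed else 0.0
    return score


if __name__ == "__main__":
    applicant = IdentitySignals(phone_tenure_years=6, address_history_years=4,
                                id_document_verified=True, liveness_check_passed=False)
    print(f"confidence: {identity_confidence(applicant):.2f}")  # 0.78 under these weights
```

Under weights like these, a passing face scan alone barely moves the score, which is the substance of Maimon's criticism of relying on liveness checks.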

Was That Gwyneth Paltrow Or A Deep Fake? A.I. Up-Ends Crisis Playbooks

Forbes

2 days ago

  • Entertainment
  • Forbes

Was That Gwyneth Paltrow Or A Deep Fake? A.I. Up-Ends Crisis Playbooks

The truth is very hard to find today. The role of A.I. deep fakes in almost every medium calls into question: Who and what are real? Who do we believe? Who speaks for us? What is the truth? What are the lies? How purposeful is the mis/disinformation? And what can we do about it? And if you're in the midst of a crisis that includes deep fakes, not only do the old rules no longer apply, but the misinformation can turn the situation toxic. Our crisis playbooks need to be totally rewritten.

Nothing showcases this better than the most recent CEO scandal, involving a Coldplay concert, a 'kiss cam' broadcast on the Gillette Stadium jumbotron, and two senior executives of an enigmatic company named 'Astronomer' caught in an embarrassing romantic embrace. It was the hug seen around and around the world. As the two participants frantically turned away, wisely declining to be seen or heard from again, a legion of stand-ins came out of the woodwork to take their place. Deep fakes almost all, these constructs purported to be a TV news anchor, the disgraced CEO himself, his wife, his sons and his 'daughter.' But, it turns out, the CEO doesn't even HAVE a daughter. And the others never posted. The posts were all fake, but when issued over social media, the statements were believable, if a bit exploitative and overblown. And they prolonged and deepened the arc of the crisis.

See the YouTube video above for the incident itself, followed by CEO Andy Byron's ostensible public statement issued over social media. That statement was a total fake, posted by a fake 'CBS news anchor,' yet it made its way onto Fox News, and around the world. One CEO I work with told me that he had read all of these statements, and believed each one.

Then, in an attempt to flatten the arc and quell the crisis, the new interim CEO of Astronomer hired actor Ryan Reynolds' inventive marketing firm, Maximum Effort. They somehow chose to produce a minute-long video, posted on YouTube, with tongue firmly in cheek. It features 'very temporary spokesperson' Gwyneth Paltrow saying, basically: let me have your attention. Enough about the kiss cam; let's get back to the nuts and bolts of this business; let's get back to work. And to its credit, though odd, the video worked pretty well to defuse the situation. Ironically, though, many people thought it was Ms. Paltrow who was the deep fake. But she turned out to be real.

By the time the deep fakes were discovered, Byron had resigned, as had the head of HR, and Paltrow/Reynolds had recast the discussion just enough to get the monkeys off everyone's backs. But, though Paltrow cut through the ridiculousness, this kind of response would not work in most crises, and could backfire badly.

By now, two weeks after the event, well over 100 million people around the world have heard or read of the scandal, and in one day more than 22,000 news articles were written about Astronomer, says Axios, citing Muck Rack, even if most people don't really understand what the company in question does. What we do know is that A.I. and deep fakes have completely changed the game in crisis management. Just as ChatGPT has made many corporate communicators redundant, a whole new paradigm is emerging for crisis participants.

In fact, CEO dalliances with their subordinates are a fairly garden-variety corporate crisis. In my practice we've handled tens, twenties or more of them over the years.
And while each situation can be quite different, there is an established playbook for how the board and remaining management can handle such private tragedies with a minimum of public exposure or damage. The goal would always be to get it all over with as quickly, fairly, compassionately and gracefully as possible. To that end, the Astronomer Board's statements were pretty much on target. (See my article on The Role Of Boards in Crisis: 10 Steps for Directors Before, During And After Crisis.)

"Astronomer is committed to the values and culture that have guided us since our founding. Our leaders are expected to set the standard in both conduct and accountability. The Board of Directors has initiated a formal investigation into this matter and we will have additional details to share very shortly. Alyssa Stoddard was not at the event and no other employees were in the video. Andy Byron has not put out any statement; reports saying otherwise are all incorrect."

But because the statements were so late in coming, that left lots of time for false rumors to sweep the internet. (See You Have 15 Minutes to Respond to a Crisis: A Checklist of Dos and Don'ts.) And given the momentum of the crisis, the Board's statement was probably necessary but not sufficient.

With all the uncertainty floating around these days, every day can seem like a crisis, and aggregated, it can seem more like chaos. (See my article on 'Chaos Leadership: When Does Global Crisis Turn Into Chaos And How Do We Survive It?') The New York Times has given us a quiz to find out how well we can tell what is fake and what is real all on our own. (The quiz is titled 'A.I. videos have never been better. Can you tell what's real?') Unsurprisingly, it's really hard to do, and I haven't found one person yet who has gotten all ten right. And that's very scary indeed.

As for this moment, while new playbooks are being created, authentication is the magic word. There are several organizations that fact-check journalists and their articles, posts and videos, and verify those they find to be real/true. Snopes is the undeniable leader of the field. Founded in 1994, it started by investigating urban legends, hoaxes and folklore. Today it is a terrific source of insight on what is a deep fake, and what's real, as well as what is true and what is false in public statements and rumors of all kinds. Other, more specialized fact-checking organizations, including a Pulitzer Prize-winning one, exist as well. All of these organizations are go-to resources to verify and authenticate articles, videos, claims, science, and political rhetoric.

In fact, we should all have a little fact-checking engine sitting on our shoulders as we live our lives, clueing us in to what is true, and what is baloney. That would be a truly innovative solution for an issue that appears to be getting more and more intractable.

Safeguarding the hiring process: A strategic approach to employment fraud mitigation

Fast Company

3 days ago

  • Business
  • Fast Company

Safeguarding the hiring process: A strategic approach to employment fraud mitigation

The hiring process has become a high-stakes battleground. What was once a relatively secure human-to-human interaction is now vulnerable to synthetic identities, deepfakes, identity manipulation, and highly coordinated cyber threats. Employment fraud has evolved into a significant enterprise risk, with implications for national security, data privacy, and brand reputation.

Recent investigations have revealed alarming vulnerabilities. State-sponsored operatives have successfully bypassed traditional background checks using stolen or fake identities to obtain remote U.S. IT roles and funnel millions into hostile regimes. Meanwhile, cybercrime groups like FIN6 are targeting recruiters with malware-laced resumes hosted on legitimate platforms, weaponizing job applications into digital Trojan horses. As Gartner predicts that by 2028, one in four job candidates will be entirely fake, it's clear that hiring fraud has escalated beyond isolated incidents. The question is no longer if your organization will encounter fraud, but when. Mitigating this risk requires a coordinated, multi-layered strategy that encompasses people, processes, policies, and technology at every stage of the hiring lifecycle.

Fraud tactics have evolved dramatically. For example, hiring leaders and recruiters are now vulnerable to impersonation, manipulated interviews, and deceptive proxies—especially in virtual environments. Deepfakes, AI-generated resumes, and real-time voice manipulation tools are easily accessible and increasingly challenging to detect. Because traditional in-person validation checkpoints are fragmented or absent in digital hiring, real-time ID verification, structured vetting, and interview consistency protocols must be reimagined and refined to ensure optimal efficiency and accuracy. Meanwhile, gaps in governance allow fraudsters to exploit seams between talent acquisition, IT, vendors, and hiring managers.

Consider, too, what may happen after candidates use deepfakes, proxies, or off-camera coaching to simulate expertise or fool behavioral screening tools. Unverified bank details or employment documentation can lead to financial or data theft, while undisclosed concurrent employment, ghost companies, and masked IP addresses can obscure fraud risks. Without proper monitoring, malicious hires can gain access to sensitive systems or intellectual property undetected. Shared accountability is critical. All stakeholders—from recruiters to cybersecurity teams—must align on consistent protocols and clear escalation paths.

THE END-TO-END RISK JOURNEY

A comprehensive fraud mitigation approach begins with mapping vulnerabilities across the entire hiring journey:

0–3 Months: Build Awareness And Stand Up Interim Safeguards

In the first quarter, education is paramount. Many organizations underestimate the sophistication and scale of modern hiring fraud until they experience it firsthand. Begin by updating hiring leader and recruiter training to include real-world examples, such as candidates using AI-generated identities or spoofed video interviews, so teams are equipped to recognize red flags. Launching an internal awareness campaign can help normalize fraud detection as a shared responsibility rather than a cybersecurity outlier. Simultaneously, it's critical to expand your vetting ecosystem. At this stage, I recommend adding supplemental vendors capable of conducting global ID verification, social media sweeps, and criminal record monitoring—an essential move for remote and hybrid talent pools.
To ensure nothing falls through the cracks, establish interim governance models that facilitate rapid communication between talent acquisition, security, and legal, and begin building automation pathways to share suspicious indicators across systems.

3–6 Months: Launch Detection Pilots And Institutionalize Risk Protocols

Once foundational awareness is in place, the next phase is about detection and escalation. Deploying real-time alert systems, such as resume scanners that flag anomalies or interview platforms with geolocation mismatch detection, adds an essential layer of protection (a minimal illustrative sketch of such a check appears at the end of this article). At my company, Navy Federal Credit Union, we piloted behavioral analytics tools during this stage, including voice pattern recognition and biometric screening for sensitive roles. It's also time to codify how fraud scoring will be used in workforce planning and define what constitutes a 'high-risk' role or candidate profile. Developing clear escalation pathways and connecting those to downstream audit processes enables consistency in response. Our efforts helped us shift from ad hoc response to a disciplined, proactive model.

6–12 Months: Operationalize And Scale With AI And Governance Integration

In the final stretch, the goal is sustainability and scale. Fraud dashboards, equipped with anomaly detection, audit trails, and behavioral metadata, enable leaders to monitor trends and respond in real time. For high-risk roles, consider embedding proctored, identity-verified assessments into the hiring funnel. These steps not only protect the organization, but also deter would-be bad actors who realize they're being watched. Post-hire monitoring, such as periodic re-verification or employment validation checks, should also be embedded into your talent risk playbook. Collaboration with legal and compliance is essential to ensure all efforts align with regulatory expectations and data privacy laws. We also integrated fraud checkpoints into our onboarding systems and performance risk reviews. Fraud detection is now part of the operating rhythm rather than a one-off intervention.

Every team can and should play a role in fraud prevention:

  • Talent Acquisition: Leads workflow design, recruiter training, and vendor oversight
  • Cybersecurity: Owns threat detection, technical investigations, and tool integrations
  • Legal/Compliance: Evaluates policy, privacy concerns, and regulatory impacts
  • HR Technology: Delivers ID verification tools, system audits, and real-time monitoring capabilities
  • Brand And Communications: Leads scam awareness campaigns and protects candidate trust externally

Measurement tactics include:

  • Volume of flagged background checks
  • Alert triggers from biometric/behavioral tools
  • Percentage of hiring managers trained on protocols
  • Turnaround time on enhanced screenings
  • Post-training knowledge retention scores
  • Stakeholder confidence in hiring integrity

Fraud mitigation doesn't require a blank check, but it does require thoughtful, tiered investment:

  • Low-Cost Wins (<$50K): Build a fraud toolkit, implement red flag escalation paths, and update interview policies.
  • Moderate Investments ($50K–$150K): Expand fraud training, strengthen vendor oversight, and conduct audit exercises.
  • High-Value Initiatives ($150K–$350K+): Deploy biometric ID, enhance background checks, and enable AI-powered pattern detection and video/audio screening.
The required lift by full-time equivalent employees (FTEs) varies by initiative; some initiatives require fractional effort (1–2 FTEs), while others necessitate full-scale, tech-enabled cross-team support. Employment fraud is no longer a future concern, but a present threat. By the time fraud is detected post-hire, the damage is already done (financially, reputationally, and operationally). Forward-thinking organizations must treat hiring fraud as an enterprise risk, not just a talent problem. Success will hinge on executive sponsorship, cross-functional coordination, and timely investment in scalable systems and safeguards.
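To make the detection-pilot idea concrete, the following is a minimal sketch of a geolocation-mismatch check of the kind mentioned above. The session fields, the 500 km threshold, and the escalation behavior are hypothetical assumptions for illustration; a real deployment would also need to account for VPNs, privacy constraints, and legitimate travel.

```python
# Minimal sketch, under hypothetical assumptions, of a geolocation-mismatch flag
# like the one described in the 3-6 month detection-pilot phase. Field names,
# the 500 km threshold, and the escalation step are illustrative, not a
# description of any specific vendor tool.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt


@dataclass
class InterviewSession:
    candidate_id: str
    claimed_latlon: tuple[float, float]   # location the candidate stated on the application
    observed_latlon: tuple[float, float]  # location resolved from the interview session's IP


def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def flag_geolocation_mismatch(session: InterviewSession, threshold_km: float = 500.0) -> bool:
    """Escalate the session if the stated and observed locations disagree by
    more than the (hypothetical) threshold."""
    return haversine_km(session.claimed_latlon, session.observed_latlon) > threshold_km


if __name__ == "__main__":
    session = InterviewSession(
        candidate_id="cand-001",
        claimed_latlon=(38.9, -77.0),   # Washington, DC area, as stated by the candidate
        observed_latlon=(51.5, -0.1),   # session IP resolves to the London area
    )
    print("escalate for review" if flag_geolocation_mismatch(session) else "no mismatch")
```

A flag like this would feed the escalation pathways and fraud-scoring steps described above rather than rejecting a candidate outright.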

Scott Moe speaks out against AI ‘deepfakes' of him circulating online

CTV News

4 days ago

  • Business
  • CTV News

Scott Moe speaks out against AI ‘deepfakes' of him circulating online

Saskatchewan Premier Scott Moe listens to a question from the media during the 2025 summer meetings of Canada's Premiers at Deerhurst Resort in Huntsville, Ont., on Wednesday, July 23, 2025. THE CANADIAN PRESS/Nathan Denette

Saskatchewan Premier Scott Moe says his government is doing whatever it can to track down the creators of so-called 'deepfakes' of him and other prominent figures. Moe's likeness, including his voice, has been used in online video ads for cryptocurrency schemes that he says he would never endorse. The premier says on his official social media that some of the videos, which are created with artificial intelligence, feature him and others, including Prime Minister Mark Carney. Moe says his government is doing its best to find the people behind the videos, but adds it can be difficult to prevent the scams. It's not the first time Moe's image has been used to market the scams — he first acknowledged them in March. Saskatchewan's consumer watchdog has been issuing warnings about the impersonation scams and urges people not to send money to companies that aren't registered in the province. This report by The Canadian Press was first published Aug. 4, 2025.
