
Grok's 'spicy' video setting instantly made me Taylor Swift nude deepfakes
Grok's Imagine feature on iOS lets you generate pictures with a text prompt, then quickly turn them into video clips using four presets: 'Custom,' 'Normal,' 'Fun,' and 'Spicy.' While image generators often shy away from producing recognizable celebrities, I asked it to generate 'Taylor Swift celebrating Coachella with the boys' and was met with a sprawling feed of more than 30 images to pick from, several of which already depicted Swift in revealing clothes.
From there, all I had to do was open a picture of Swift in a silver skirt and halter top, tap the 'make video' option in the bottom right corner, select 'spicy' from the drop-down menu, and confirm my birth year (something I wasn't asked to do upon downloading the app, despite living in the UK, where the internet is now being age-gated). The video promptly had Swift tear off her clothes and begin dancing in a thong for a largely indifferent AI-generated crowd.
Swift's likeness wasn't perfect — most of the images Grok generated had an uncanny-valley quality to them — but it was still recognizable as her. The text-to-image generator itself wouldn't produce full or partial nudity on request; asking for nude pictures of Swift, or of people in general, produced blank squares. The 'spicy' preset also isn't guaranteed to result in nudity: some of the other AI Swift Coachella images I tried had her sexily swaying or suggestively motioning to her clothes, for example. But several defaulted to ripping off most of her clothing.
The image generator will also make photorealistic pictures of children upon request, but thankfully refuses to animate them inappropriately, despite the 'spicy' option still being available. You can still select it, but in all my tests, it just added generic movement.
You would think a company that already has a complicated history with Taylor Swift deepfakes, operating in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban 'depicting likenesses of persons in a pornographic manner,' but Grok Imagine seems to do nothing to stop people from creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity. The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be.
If I could do it, that means anyone with an iPhone and a $30 SuperGrok subscription can too. More than 34 million images have already been generated using Grok Imagine since Monday, according to xAI CEO Elon Musk, who said usage was 'growing like wildfire.'
Related Articles


Android Authority
2 minutes ago
Woman wins $200,000 case after her phone started a house fire while charging
TL;DR
- A woman has been awarded the equivalent of $200,000 after her phone caused a house fire while charging.
- The court ruled the LG K8 was defective and failed to meet safety expectations.
- Most of the payout will go to her insurer, but she also received compensation for injuries.

We've heard multiple reports of the Google Pixel 6a melting down over recent months, but nothing as nightmarish as this. For one woman, her plugged-in Android phone caused a house fire, eventually leading to a six-figure payout from the manufacturer.

As reported by the BBC, a judge at Edinburgh Sheriff Court has ruled that an LG K8 smartphone caused a fire while charging, awarding £150,000 ($200,000) in damages. The majority of that will go to the woman's insurer, but she was also compensated for smoke inhalation and the mental health impact.

The fire began in the living room on October 31, 2018, while the LG phone was charging with the correct equipment. A second phone and a laptop were also plugged in nearby, but the judge concluded that the LG was the source. He found it failed to meet basic safety expectations and was defective.

Denise Parks and her husband were asleep upstairs when the fire started. She was later treated for smoke inhalation and experienced heightened anxiety and panic attacks, leaving her unable to work for several months. The phone had been issued by her employer, and the lawsuit was filed against LG Electronics UK Ltd. While the incident happened back in 2018, the ruling and payout have only just been finalized. LG shut down its phone division in 2021.


Forbes
3 minutes ago
New Models From OpenAI, Anthropic, Google – All At The Same Time
It's Christmas in August – at least, for those tech wonks who are interested in new model releases. Today's news is a very full stocking of brand-new LLM editions from three of the biggies – OpenAI, Anthropic, and Google. I'll go over these one by one, discussing what these most recent model iterations bring to the table.

OpenAI OSS Models

First, the tech community is getting an eye on OpenAI OSS 120b and OSS 20b, the first open-weight systems from this company since GPT-2. Coverage from Computerworld and elsewhere points out that, although these models have Apache licenses, they are not fully open source in the conventional sense, but partly open: the weights are open, while the training data is not. Running on a single 80GB GPU, the larger OSS model, according to the above report, 'achieves parity' with the o4-mini model in terms of reasoning power. The smaller one can run on smartphones and other edge devices. The models come quantized with MXFP4, a low-precision data type for accelerating matrix multiplications.

Let Them Work

Another interesting aspect of the new OSS models has to do with chain of thought (CoT), something that has revolutionized inference while raising questions about comparative methodology. Basically, we want the LLMs to be accurate, but engineers have found that, in many cases, restricting or overly guiding systems causes them to 'hide' CoT. So OpenAI has chosen not to optimize the models in this way.

'OpenAI is intentionally leaving Chain of Thought (CoTs) unfiltered during training to preserve their usefulness for monitoring, based on the concern that optimization could cause models to hide their real reasoning,' writes Roger Montti at Search Engine Journal. 'This, however, could result in hallucinations.' Montti cites the following model card report from OpenAI: 'In our recent research, we found that monitoring a reasoning model's chain of thought can be helpful for detecting misbehavior.
We further found that models could learn to hide their thinking while still misbehaving if their CoTs were directly pressured against having 'bad thoughts.' … In accord with these concerns, we decided not to put any direct optimization pressure on the CoT for either of our two open-weight models. We hope that this gives developers the opportunity to implement CoT monitoring systems in their projects and enables the research community to further study CoT monitorability.'

So the models are allowed to have these 'bad thoughts' in aid of, I suppose, transparency. OpenAI is also upfront about the higher chance of hallucinations, so that users know this trade-off has been made.

Claude Opus 4.1

Here's how spokespersons rolled out the announcement of this new model on Aug. 5: 'Today we're releasing Claude Opus 4.1, an upgrade to Claude Opus 4 on agentic tasks, real-world coding, and reasoning. We plan to release substantially larger improvements to our models in the coming weeks. Opus 4.1 is now available to paid Claude users and in Claude Code. It's also on our API, Amazon Bedrock, and Google Cloud's Vertex AI. Pricing is the same as Opus 4.'

What's under the hood? The new Opus 4.1 model ups its SWE-bench Verified marks and boosts agentic research skills. A breakdown of capabilities shows a two-point increase in SWE-bench agentic coding (72.5% to 74.5%) and an improvement in graduate-level reasoning on GPQA Diamond (79.6% to 80.9%) over Opus 4, plus slight increases in visual reasoning and agentic tool use. For a model set that pioneered human-like user capabilities, this continues to push the envelope.

As for strategy: 'The release comes as Anthropic has achieved spectacular growth, with annual recurring revenue jumping five-fold from $1 billion to $5 billion in just seven months, according to industry data,' writes Michael Nunez at VentureBeat.
'However, the company's meteoric rise has created a dangerous dependency: nearly half of its $3.1 billion in API revenue stems from just two customers — coding assistant Cursor and Microsoft's GitHub Copilot — generating $1.4 billion combined. … The upgrade represents Anthropic's latest move to fortify its position before OpenAI launches GPT-5, expected to challenge Claude's coding supremacy. Some industry watchers questioned whether the timing suggests urgency rather than readiness.'

Regardless, this is big news in and of itself, including for the millions of users who rely on Claude for business process engineering or anything else.

Genie 3

This is the latest in the series of Genie models coming out of Google's DeepMind lab that create controlled environments. In other words, this is a gaming world model. Proponents of the new model cite longer-term memory over Genie 2's limit of about 10 seconds, as well as better visual fidelity and real-time responses.

'DeepMind claims that the new system can generate entire worlds that you can interact with consistently for several minutes in up to 720p resolution,' reports Joshua Hawkins at BGR. 'Additionally, the company says that the system will be able to respond to what it calls 'promptable world events' with real-time latency. Based on what the videos show off, it seems like Google has taken a major step forward in creating entire video game worlds using AI.'

'Genie 3 is the first real-time interactive general-purpose world model,' said DeepMind's Shlomi Fruchter in a press statement, according to a TechCrunch piece suggesting that the lab considers Genie 3 a 'stepping stone to AGI,' a big claim in these interesting times. 'It goes beyond narrow world models that existed before. It's not specific to any particular environment. It can generate both photo-realistic and imaginary worlds, and everything in between.'

All of these new models are getting their first rafts of public users today!
It's enough to make your head spin, especially if you're responsible for any kind of implementation. What do you choose? To be fair, there is some amount of specialization involved. But many professionals closest to the industry would tell you it's the speed of innovation that's challenging: given the track record of most companies, by the time you get something worked into business operations, it's likely to be obsolete already. Stay tuned.


Bloomberg
33 minutes ago
Salesforce Deal 'A Recipe For Value': Informatica CEO
Amit Walia, CEO of Informatica, says the merger of the two companies is all about context in 'the world of AI,' emphasizing that Informatica focuses on data management, the area where AI will be most successful. Speaking with Romaine Bostick and Vonnie Quinn on 'The Close,' he says that pairing with Salesforce becomes 'a recipe for huge value creation' for their customers. (Source: Bloomberg)