Elon Musk's 'spicy' upgrade to Grok spits out deepfake clip of Taylor Swift dancing topless: report

New York Post · 5 hours ago
Elon Musk's xAI chatbot Grok got a 'spicy' upgrade that spits out explicit deepfake videos, including a clip of Taylor Swift dancing topless, according to a report.
Grok Imagine, the startup's new generative AI tool launched Monday, created a six-second clip of the 'Shake It Off' singer whipping off a silver dress to reveal her breasts and skimpy panties, according to the tech-centric news site The Verge.
Even more troubling, the spicy mode conjured up the NSFW clip without being asked for nudity: the prompt simply requested Swift celebrating at a music festival, the outlet found while testing the software.
The video-generating tool on Elon Musk's xAI chatbot Grok has a 'spicy' mode that allowed users to create a sexualized deepfake clip featuring singer Taylor Swift.
Getty Images for TAS Rights Management
The Verge published the video but ran a black bar to cover the superstar's bare chest.
The new feature's lack of safeguards against creating celebrity deepfakes and adult materials 'feels like a lawsuit waiting to happen,' The Verge wrote.
The Post reached out to Musk, xAI and representatives for Swift for comment.
Swift has been a frequent target of AI-generated explicit content across various platforms.
In January of last year, explicit AI-generated images of the 'Cruel Summer' singer were widely shared on platforms like Musk-owned X and 4chan, sparking public outrage and urgent content takedowns.
The situation escalated with deepfake videos falsely portraying her in political and sexual contexts, including through Grok.
The controversy led to tech companies tightening safeguards and prompted Swift to consider legal action. US lawmakers began advancing bipartisan legislation to criminalize non-consensual deepfake pornography.
The user guidelines for xAI, which Musk has positioned as a rival to ChatGPT maker OpenAI, prohibit creating pornographic depictions of real people's likenesses.
Grok Imagine, which is available for those with Heavy or Premium+ subscriptions to Grok, takes AI-generated images and transforms them into video clips of up to 15 seconds using style options that include Custom, Normal, Fun, and Spicy.
'Usage was growing like wildfire,' Musk wrote on X on Tuesday, though he did not address the content moderation challenges that have emerged alongside this growth.
Musk has touted the features of Grok Imagine, stating that more than 34 million images have been created since the feature launched on Monday.
xAI, Musk's AI startup, rolled out Grok Imagine, a video generation tool that includes a 'spicy' mode.
AFP via Getty Images
The timing of the Swift controversy could prove especially problematic for xAI, given the company's previous entanglements with deepfake incidents targeting the singer.
Deepfakes are synthetic media, typically videos, images or audio, created using artificial intelligence to realistically mimic a person's likeness or voice.

Related Articles

Macronix Introduces Cutting-Edge Secure-Boot NOR Flash Memory

Yahoo · 22 minutes ago

ArmorBoot MX76 Features Ultra-Fast Boot Times, High Performance, Integrated Authentication Protection, Data Integrity Verification, Secure Update, SPI Interface, Low-Power Options

HSINCHU, Aug. 6, 2025 /PRNewswire/ -- Macronix International Co., Ltd. (TSE: 2337), a leading integrated device manufacturer in the non-volatile memory (NVM) market, today announced ArmorBoot MX76, a robust NOR flash memory that combines, in a single device, the essential performance and an array of security features that deliver rapid boot times and iron-clad data protection.

The newest member of Macronix's highly successful and widely deployed ArmorFlash™ family, ArmorBoot MX76 is a high-performance, highly secure flash memory designed for artificial intelligence (AI), Internet of Things (IoT), automotive electronics and other demanding applications that require ultra-fast boot times, integrated authentication protection, data integrity verification, secure flash updates, a Serial Peripheral Interface (SPI), power options of 3V or 1.8V, and capacities up to 1 Gb.

"We developed ArmorBoot MX76 to address specific challenges designers in the AI, IoT and automotive-electronics markets face: achieving fast boot times, enhanced security and seamless updates," said F.L. Ni, vice president of marketing at Macronix International. "Those, along with other rapidly growing markets such as industrial automation, healthcare and wireless communications, represent a broad spectrum of system and device manufacturers demanding the greater performance and heightened security that ArmorBoot MX76 provides. In an age when cybersecurity is no longer a luxury but a necessity, and boot times are increasingly consequential to products' success, Macronix is delivering the highest level of security and performance in flash memory."

ArmorBoot MX76's architecture sets it apart from previous secure NOR flash solutions in that it combines in-demand performance and security functions in a single device; no additional chips are needed to ensure required levels of data protection. During a system's critical startup or update phase, the memory's performance and security features kick in automatically and seamlessly. In addition to being a single-device solution for ultra-fast booting and robust security, ArmorBoot MX76's SPI interface enables smooth integration into new or existing systems. Top worldwide manufacturers who have already designed ArmorFlash memories into their AI, IoT and automotive-focused solutions are presently evaluating ArmorBoot as a way to expand their product portfolios with enhanced performance and secure-boot capabilities.

ArmorBoot's Key Features

- Support for the secure boot process without exposing code on the data bus
- Device authentication
- Unmatched array of security features that ensure data protection
- SPI interface for seamless design integration
- Ideal flash solution for AI, IoT, automotive, healthcare, communications and industrial systems
- 3V or 1.8V options for optimal power efficiency
- Capacities up to 1 Gb, suitable for targeted applications

Broad Array of Secure NOR Flash

Macronix's award-winning ArmorFlash family, of which ArmorBoot MX76 is the latest addition, features a broad array of NOR flash memories developed for a wide range of applications demanding data security. It features security schemes such as Physical Unclonable Function (PUF) and unique ID, with authenticated and encrypted links for NOR, NAND or flash.
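The release does not publish a programming interface, but the secure-boot flow it describes follows a familiar pattern: authenticate the firmware image before any of it runs, and refuse to boot if the check fails. The sketch below illustrates that general pattern in Python; it is not Macronix code, and the HMAC-SHA-256 scheme, tag layout and key handling are assumptions made for the example.

    import hmac
    import hashlib

    # Illustrative secure-boot check (not a Macronix API): the boot loader
    # recomputes an authentication tag over the firmware image and compares
    # it with the tag stored alongside the image before handing over control.

    TAG_LEN = 32  # bytes; assumes an HMAC-SHA-256 tag appended to the image

    def verify_firmware(image: bytes, device_key: bytes) -> bool:
        """Return True only if the stored tag matches the recomputed one."""
        code, stored_tag = image[:-TAG_LEN], image[-TAG_LEN:]
        expected = hmac.new(device_key, code, hashlib.sha256).digest()
        # Constant-time comparison avoids leaking tag bytes through timing.
        return hmac.compare_digest(stored_tag, expected)

    def boot(image: bytes, device_key: bytes) -> None:
        if not verify_firmware(image, device_key):
            raise RuntimeError("firmware failed authentication; refusing to boot")
        # ...transfer control to the verified code here...

A device like ArmorBoot performs this kind of check in hardware, against keys that never leave the chip, which is the point of features such as PUF-derived unique IDs.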
ArmorFlash has achieved several safety and security certifications, including from standards bodies focused on automotive electronics and cybersecurity. For more information and specifications on ArmorFlash and ArmorBoot, go to

Availability

ArmorBoot MX76 is sampling now, with production scheduled for later in 2025.

Showcased at FMS

Macronix will showcase ArmorBoot, along with a range of other NVM solutions, at the Future of Memory and Storage conference, August 5-7, at the Santa Clara Convention Center in Santa Clara, California. Macronix will be in booth #1142.

About Macronix

Macronix, a leading integrated device manufacturer in the non-volatile memory (NVM) market, provides a full range of NOR Flash, NAND Flash, and ROM products. With its world-class R&D and manufacturing capability, Macronix continues to deliver high-quality, innovative and performance-driven products to its customers in the consumer, communication, computing, automotive, networking and other market segments. Find out more at

ArmorFlash is a trademark of Macronix International Co., Ltd. eMMC is a trademark of JEDEC/MMCA.

SOURCE Macronix

Trump Media Is Testing an AI Search Engine Powered by Perplexity

CNET · 23 minutes ago

President Donald Trump's media company, Trump Media, is beta-testing a new AI search feature, Truth Search AI, on the Truth Social platform. The Florida-based company announced the news on Wednesday in a press release.

Trump Media and Technology Group is perhaps best known for its social media platform Truth Social. The company is separate from the New York-based Trump Organization.

"We're proud to partner with Perplexity to launch our public Beta testing of Truth Social AI, which will make Truth Social an even more vital element in the Patriot Economy," Trump Media CEO Devin Nunes said in the statement. "We plan to robustly refine and expand our search function based on user feedback as we implement a wide range of additional enhancements to the platform."

Truth Search AI is now available on the web version of Truth Social and will begin public beta testing on the Truth Social iOS and Android apps at an unnamed future date. Representatives for Trump Media and Perplexity didn't immediately respond to a request for comment.

Will results be politically biased?

In today's divided political landscape, one immediate concern is that a search engine from a conservative president's media company will select only search results that favor conservative opinions. UAE state-owned newspaper The National conducted searches using the new product and reported that the AI-generated answers, perhaps unsurprisingly, source conservative-leaning media outlets.

But 404Media was able to get some possibly surprising results. When reporters asked how the American economy is doing, the new search engine said it was "currently facing significant headwinds, with signs of slowdown." The media outlet pressed further, asking if the president's international tariffs are to blame. "Recent tariff increases in the United States have generally had a negative effect on economic growth and employment, raising costs for businesses and consumers while providing only limited benefits to some manufacturing sectors," Truth Search AI replied.

Read more: What Is Perplexity? Here's Everything You Need to Know About This AI Chatbot

Perplexity's history

San Francisco-based Perplexity was founded in 2022. As CNET noted in a review, it calls itself the world's first "answer engine": instead of showing a list of links, it pulls information directly from the sources and summarizes it.

The company has made headlines for how it acquires its content. In June, the BBC threatened to sue Perplexity for unauthorized use of its content, alleging the artificial intelligence company reproduced BBC material "verbatim." At the time, Perplexity gave a statement to the Financial Times calling the BBC's claims "manipulative and opportunistic" and saying the broadcasting giant fundamentally doesn't understand how the technology, the internet or IP law works. Perplexity also alleged that the threat of litigation shows "how far the BBC is willing to go to preserve Google's illegal monopoly for its own self-interest."

As 404Media notes, Forbes, the New York Times, the New York Post and Dow Jones have all accused Perplexity of plagiarism, and News Corp's Dow Jones & Co., which publishes the Wall Street Journal and the New York Post, sued Perplexity in 2024 for copyright infringement.
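The "answer engine" pattern CNET describes (retrieve sources for a query, then synthesize a summary with attributions instead of returning a list of links) can be outlined in a few lines. The sketch below is a generic illustration of that pattern, not Perplexity's implementation; search, fetch and summarize are hypothetical stand-ins supplied by the caller.

    # Generic retrieve-then-summarize loop illustrating the "answer engine"
    # idea; none of these components are Perplexity's actual code.

    def answer(query, search, fetch, summarize, k=5):
        """Return a synthesized answer built from the top-k sources."""
        results = search(query)[:k]                 # top-k hits for the query
        passages = [fetch(r.url) for r in results]  # pull each source's text
        # Summarize the passages against the query rather than listing links,
        # keeping URLs so each claim can be attributed to its source.
        return summarize(query=query, passages=passages,
                         citations=[r.url for r in results])

Whether the generated summary stays faithful to those sources is exactly where the plagiarism and bias disputes described above arise.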

AI Sucks at Sudoku. Much More Troubling Is That It Can't Explain Why

CNET · an hour ago

Chatbots can be genuinely impressive when you watch them do things they're good at, like writing realistic-sounding text or creating weird futuristic-looking images. But ask generative AI to solve one of those puzzles you find in the back of a newspaper, and things can quickly go off the rails.

That's what researchers at the University of Colorado Boulder found when they challenged different large language models to solve Sudoku. And not even the standard 9x9 puzzles: an easier 6x6 puzzle was often beyond the capabilities of an LLM without outside help (in this case, specific puzzle-solving tools).

The more important finding came when the models were asked to show their work. For the most part, they couldn't. Sometimes they lied. Sometimes they explained things in ways that made no sense. Sometimes they hallucinated and started talking about the weather.

If gen AI tools can't explain their decisions accurately or transparently, that should make us cautious as we give these things more and more control over our lives and decisions, said Ashutosh Trivedi, a computer science professor at the University of Colorado Boulder and one of the authors of the paper published in July in the Findings of the Association for Computational Linguistics.

"We would really like those explanations to be transparent and be reflective of why AI made that decision, and not AI trying to manipulate the human by providing an explanation that a human might like," Trivedi said.

When you make a decision, you can at least try to justify it or explain how you arrived at it. That's a foundational component of society: we are held accountable for the decisions we make. An AI model may not be able to accurately or transparently explain itself. Would you trust it?

Why LLMs struggle with Sudoku

We've seen AI models fail at basic games and puzzles before. OpenAI's ChatGPT (among others) has been totally crushed at chess by the computer opponent in a 1979 Atari game, and a recent research paper from Apple found that models can struggle with other puzzles, like the Tower of Hanoi.

It has to do with the way LLMs work and fill in gaps in information. These models try to complete those gaps based on what happens in similar cases in their training data or other things they've seen in the past. With a Sudoku, the question is one of logic. The AI might try to fill each gap in order, based on what seems like a reasonable answer, but to solve it properly, it instead has to look at the entire picture and find a logical order that changes from puzzle to puzzle.

Read more: AI Essentials: 29 Ways You Can Make Gen AI Work for You, According to Our Experts

Chatbots are bad at chess for a similar reason. They find logical next moves but don't necessarily think three, four or five moves ahead, which is the fundamental skill needed to play chess well. Chatbots also sometimes move chess pieces in ways that don't really follow the rules or put pieces in meaningless jeopardy.

You might expect LLMs to be able to solve Sudoku because they're computers and the puzzle consists of numbers, but the puzzles themselves are not really mathematical; they're symbolic. "Sudoku is famous for being a puzzle with numbers that could be done with anything that is not numbers," said Fabio Somenzi, a professor at CU and one of the research paper's authors.
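For contrast, a conventional program treats Sudoku as pure constraint search: try a digit, check it against every row, column and box constraint, and backtrack the moment a choice leads to a dead end. Here is a minimal sketch of that classic approach for a standard 9x9 grid (zeros mark empty cells); it illustrates the general technique and is not the researchers' code.

    # Minimal backtracking Sudoku solver: unlike an LLM filling cells from
    # pattern memory, it tests each candidate against the global constraints
    # and undoes any choice that leads to a dead end.

    def candidates(grid, r, c):
        """Digits that don't conflict with row r, column c, or the 3x3 box."""
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
        return [d for d in range(1, 10) if d not in used]

    def solve(grid):
        """Fill zeros in place; return True if a full solution exists."""
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    for d in candidates(grid, r, c):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # backtrack: this branch failed
                    return False  # no digit fits; an earlier guess was wrong
        return True  # no empty cells left

The solver works because it can revise its own earlier guesses, which is exactly the whole-picture reasoning the researchers found LLMs struggling to perform, let alone explain.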
I used a sample prompt from the researchers' paper and gave it to ChatGPT. The tool showed its work, and repeatedly told me it had the answer before showing a puzzle that didn't work, then going back and correcting it. It was like the bot was turning in a presentation that kept getting last-second edits: This is the final answer. No, actually, never mind, this is the final answer. It got the answer eventually, through trial and error. But trial and error isn't a practical way for a person to solve a Sudoku in the newspaper. That's way too much erasing, and it ruins the fun.

AI and robots can be good at games if they're built to play them, but general-purpose tools like large language models can struggle with logic puzzles. Ore Huiying/Bloomberg via Getty Images

AI struggles to show its work

The Colorado researchers didn't just want to see whether the bots could solve puzzles. They asked for explanations of how the bots worked through them. Things did not go well.

Testing OpenAI's o1-preview reasoning model, the researchers saw that the explanations, even for correctly solved puzzles, didn't accurately explain or justify the models' moves and got basic terms wrong.

"One thing they're good at is providing explanations that seem reasonable," said Maria Pacheco, an assistant professor of computer science at CU. "They align to humans, so they learn to speak like we like it, but whether they're faithful to what the actual steps need to be to solve the thing is where we're struggling a little bit."

Sometimes, the explanations were completely irrelevant. Since the paper's work was finished, the researchers have continued to test newly released models. Somenzi said that when he and Trivedi were running OpenAI's o4 reasoning model through the same tests, at one point it seemed to give up entirely. "The next question that we asked, the answer was the weather forecast for Denver," he said.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Explaining yourself is an important skill

When you solve a puzzle, you're almost certainly able to walk someone else through your thinking. The fact that these LLMs failed so spectacularly at that basic job isn't a trivial problem. With AI companies constantly talking about "AI agents" that can take actions on your behalf, being able to explain yourself is essential.

Consider the types of jobs being given to AI now, or planned for in the near future: driving, doing taxes, deciding business strategies and translating important documents. Imagine what would happen if you, a person, did one of those things and something went wrong.

"When humans have to put their face in front of their decisions, they better be able to explain what led to that decision," Somenzi said.

It isn't just a matter of getting a reasonable-sounding answer; it needs to be accurate. One day, an AI's explanation of itself might have to hold up in court, but how can its testimony be taken seriously if it's known to lie? You wouldn't trust a person who failed to explain themselves, and you also wouldn't trust someone you found was telling you what you wanted to hear instead of the truth.

"Having an explanation is very close to manipulation if it is done for the wrong reason," Trivedi said. "We have to be very careful with respect to the transparency of these explanations."
