
The Five-Millionth Mercedes-Benz AG Sprinter All-Electric Van Goes to FedEx!

Related Articles


The Verge
Grok's 'spicy' video setting instantly made me Taylor Swift nude deepfakes
The 'spicy' mode for Grok's new generative AI video tool feels like a lawsuit waiting to happen. While other video generators like Google's Veo and OpenAI's Sora have safeguards in place to prevent users from creating NSFW content and celebrity deepfakes, Grok Imagine is happy to do both simultaneously. In fact, it didn't hesitate to spit out fully uncensored topless videos of Taylor Swift the very first time I used it — without me even specifically asking the bot to take her clothes off.

Grok's Imagine feature on iOS lets you generate pictures with a text prompt, then turn them quickly into video clips with four presets: 'Custom,' 'Normal,' 'Fun,' and 'Spicy.' While image generators often shy away from producing recognizable celebrities, I asked it to generate 'Taylor Swift celebrating Coachella with the boys' and was met with a sprawling feed of more than 30 images to pick from, several of which already depicted Swift in revealing clothes.

From there, all I had to do was open a picture of Swift in a silver skirt and halter top, tap the 'make video' option in the bottom right corner, select 'spicy' from the drop-down menu, and confirm my birth year (something I wasn't asked to do upon downloading the app, despite living in the UK, where the internet is now being age-gated). The video promptly had Swift tear off her clothes and begin dancing in a thong for a largely indifferent AI-generated crowd.

Swift's likeness wasn't perfect, given that most of the images Grok generated had an uncanny valley offness to them, but it was still recognizable as her. The text-to-image generator itself wouldn't produce full or partial nudity on request; asking for nude pictures of Swift or people in general produced blank squares. The 'spicy' preset also isn't guaranteed to result in nudity — some of the other AI Swift Coachella images I tried had her sexily swaying or suggestively motioning to her clothes, for example. But several defaulted to ripping off most of her clothing.
The image generator will also make photorealistic pictures of children upon request, but thankfully refuses to animate them inappropriately, despite the 'spicy' option still being available. You can still select it, but in all my tests, it just added generic movement.

You would think a company that already has a complicated history with Taylor Swift deepfakes, in a regulatory landscape with rules like the Take It Down Act, would be a little more careful. The xAI acceptable use policy does ban 'depicting likenesses of persons in a pornographic manner,' but Grok Imagine seems to do nothing to stop people from creating likenesses of celebrities like Swift, while offering a service designed specifically to make suggestive videos including partial nudity. The age check only appeared once and was laughably easy to bypass, requesting no proof that I was the age I claimed to be. If I could do it, anyone with an iPhone and a $30 SuperGrok subscription can too.

More than 34 million images have already been generated using Grok Imagine since Monday, according to xAI CEO Elon Musk, who said usage was 'growing like wildfire.'


Fast Company
Cloudflare vs. Perplexity: a web scraping war with big implications for AI
When the web was established several decades ago, it was built on a number of principles. Among them was a key, overarching standard dubbed 'netiquette': do unto others as you'd want done unto you. It's a principle that lived on through other companies, including Google, whose motto for a period was 'Don't be evil.' The fundamental idea was simple: act ethically and morally. If someone asked you to stop doing something, you stopped — or at least considered it.

But Cloudflare, an IT company that protects millions of websites from hostile internet attacks, has published an eye-opening exposé suggesting that one of the leading AI tools today isn't following that principle. Cloudflare claims Perplexity, an AI-powered 'answer engine,' is overriding website requests not to crawl their content by spoofing its identity to hide that the requests are coming from an AI company. Cloudflare launched its investigation after receiving complaints from customers that Perplexity was ignoring directives in robots.txt files, which are used by websites to signal whether they want their content indexed by search engines or AI crawlers.

Perplexity's alleged behavior highlights what happens when the web shifts from being rooted in voluntary agreements to a more hard-nosed business environment, where commercial goals overrule moral considerations. 'The code of honor around crawling and robots.txt files is a charming remnant from when the web was collaborative and based on community standards,' says Eerke Boiten, a cybersecurity researcher at De Montfort University in the U.K. Cloudflare's position as a market leader in web protection means that, for now at least, it's still possible to preserve some remnants of that morality, Boiten says. Boiten believes the sense of ethical cooperation online is fading fast, noting that many large AI companies show little regard for where or how they obtain their training data, often operating in murky ethical territory.
While he sees OpenAI as generally respectful of the established norms, he's far less optimistic about others. 'Perplexity trying to scrape their way around any defenses feels like it will be the norm rather than the exception,' he says.

Perplexity's alleged conduct stands out as particularly bold, especially given that the company is already facing a lawsuit over unauthorized content scraping. Dow Jones & Company, the parent of the Wall Street Journal and New York Post, filed a lawsuit in October 2024, alleging that Perplexity 'copies on a massive scale' their content. (The case is ongoing.) The BBC also sent a letter in June to Perplexity CEO Aravind Srinivas, threatening legal action for scraping its content without permission unless the company stops and either compensates the BBC for the data already accessed or deletes it entirely. Perplexity told the Financial Times that the BBC's case was 'manipulative and opportunistic' and reflected a 'fundamental misunderstanding' of copyright law. Perplexity did not respond to Fast Company's request for comment on this story.

But Boiten, for his part, anticipates an escalating arms race between those trying to protect online content from AI-driven web scraping and the companies scraping it to improve their models. He points to 'Cloudflare applying machine learning to spot Perplexity's patterns, and acknowledging that publication of all this likely means Perplexity will come up with new decoys.'

Cornell Law professor James Grimmelmann says the legal limits of scraping content without permission — or bypassing robots.txt — remain unclear, but Cloudflare's findings could expose Perplexity to more lawsuits. 'There is a loose judicial consensus that it is okay to scrape sites when their robots.txt files allow it,' says Grimmelmann, 'but Perplexity seems determined to fuck around and find out whether the reverse is true.'
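To make the dispute concrete: robots.txt is a plain-text file of voluntary directives, and a well-behaved crawler checks it before fetching a page. A minimal sketch using Python's standard-library parser is below — the rules shown are hypothetical, not Perplexity's or any real site's actual policy, and the key point is that compliance is opt-in: nothing stops a crawler from simply ignoring the file, which is exactly what Cloudflare alleges.

```python
# Sketch of how robots.txt directives work, using Python's stdlib parser.
# The policy below is a made-up example: it bans one named AI crawler
# entirely and keeps everyone else out of /private/.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The named AI crawler is asked to stay away from the whole site...
print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # False
# ...while other crawlers may fetch public pages, but not /private/.
print(rp.can_fetch("OtherBot", "https://example.com/article"))       # True
print(rp.can_fetch("OtherBot", "https://example.com/private/x"))     # False
```

Nothing in HTTP enforces these answers; `can_fetch` only tells a polite client what the site has asked for, which is why spoofing a different user-agent string is enough to sidestep the whole mechanism.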


TechCrunch
Google's NotebookLM is now available to younger users as competition in the AI education space intensifies
Google's AI note-taking app is now open to younger users, having previously been limited to users 18 and older. The tech giant announced that NotebookLM is available to Google Workspace for Education users of any age and for consumers ages 13 and up. The removal of age restrictions is intended to provide younger students with access to the AI research tool, allowing them to better understand their class materials. Now, students can access features such as the ability to convert notes into podcast-like Audio Overviews, visually summarize ideas with interactive Mind Maps, and more. NotebookLM recently released Video Overviews to let users turn notes, PDFs, and images into visual presentations. This expansion comes amid increasing concerns about the use of AI in education regarding data privacy and potential misuse. Google says that NotebookLM enforces stricter content policies for users under 18 to prevent inappropriate responses, and user chats and uploads are not reviewed by humans or used for AI training. The availability of NotebookLM for younger users follows OpenAI's introduction of a study mode for ChatGPT, indicating that companies are ramping up competition in the AI education sector.