
Studio Ghibli-Based Imagery Proves AI Art Keeps Triggering Our Gag Reflex
HOLLYWOOD - JULY 27: Director Hayao Miyazaki arrives at the special screening of "Ponyo" held at the El Capitan Theatre on July 27, 2009 in Hollywood, California. (Photo by Michael Tran/FilmMagic)
This week, AI-generated imagery in the style of beloved Japanese animation company Studio Ghibli began flooding the internet, courtesy of a new filter available in the latest version of OpenAI's GPT-4o. The filter turns images such as photographs into simulations of the hand-drawn, pastel-toned, cel-shaded look seen in Ghibli films such as Princess Mononoke, Howl's Moving Castle and The Boy and the Heron.
While some people are beguiled by the tool, the reaction that has greeted this surge of faux anime art is the same one we've seen over and over again when AI artwork breaks into the popular consciousness: disgust at both the aesthetics and the ethics. This time, the stakes are amplified because the project appears designed to blunt criticism of AI art by undermining one of its most prominent critics.
For an example of a previous negative reaction, consider the Coca-Cola commercial released this holiday season, in which hundreds of hours of AI-generated video footage were edited down to a fast-cut, 30-second spot designed to play on sentimental memories of a classic holiday ad from the 1990s. Though technically impressive in some ways, even squeezing professional-level AI tools as hard as possible still produced a few uncanny artifacts of detail and scale. Most of them went by quickly enough to evade conscious perception, but the overall effect was unsettling. Online critics highlighted the imperfections using frame grabs and zooms, raising issues that ordinary human-created, computer-assisted 3D animation, by its nature, rarely exhibits.
Toys-R-Us, another brand with nostalgic connections to childhood, got caught in the AI honeytrap last summer with an ad that inspired one commenter to write on X: 'There's something so cynical about creating an ad about the limitless imagination of a child and then rendering it with soulless AI slop.'
Sometimes these campaigns include elements meant to anticipate criticism of AI and turn it into part of the story. Liquor brand Absolut Vodka launched a bizarre campaign on YouTube aimed at promoting acceptance of AI imagery by suggesting that we be more inclusive about the people and ethnicities depicted in synthetic photographs, so that future AI models trained on these images would themselves be more diverse. This did not have the intended effect, and the campaign was listed among 'The Bad' implementations of AI in advertising by Creative Bloq.
AI imagery is particularly loathed in the world of comics and graphic novels. DC got embroiled in controversy last year when a fellow artist spotted telltale AI imperfections in artist Francesco Mattina's artwork for a Superman cover. The discovery resulted in the cancellation of solicitations for the edition of the comic featuring that cover, along with strong statements from DC and other comic book companies clarifying their policies and prohibitions regarding the use of AI.
In all of these cases, the negative response to the AI imagery occurs on several levels. Our brains register inconsistencies in the imagery at a detailed level, even if the exact nature of those inconsistencies eludes our conscious perception. This is similar to the 'uncanny valley' we experience when we see computer-generated imagery, usually realistic 3D animation, intended to duplicate reality but missing some of the subtle atmospherics of lighting and physics that we'd see in nature. At least with CGI, the shortcomings are consistent: shadows and mist never look quite right, or objects in the distance appear a little too clearly. Over time, we can identify the things that 'look wrong,' and they reflect lapses in taste or craft by the humans using the technology as much as limitations of the technology itself.
With AI, the artifacts occur at random because of how diffusion models generate imagery. In still images, everything can look 'right' except that the pattern on the drapes doesn't follow any consistent design logic. Or a piece of machinery is rendered perfectly in terms of materiality, reflection and mass, but includes gears or controls that have no clear function. In video, small changes from frame to frame can lead to jitter in the background or, as in the Coke ad, people in the crowd who look too big relative to the scale of other objects in the scene.
In depictions of humans, the effect is even more pronounced. According to emerging research, uncanny valley reactions sometimes occur when we pick up on mismatched features, like realistic eyes paired with unrealistic skin. This particular incongruity is a common problem for diffusion-generated AI images.
Even if the people responsible for prompting the AI systems notice these problems, there isn't much they can do about them. Post-processing AI-generated images to remove these subtle issues is almost as time-consuming as creating the imagery from scratch, and requires the kind of technical mastery that the AI systems are designed to replace.
Typically, the MO of the proponents of AI video and imagery is to suggest that we lower our standards to meet the capabilities of the system, recognizing the technical virtuosity of getting things close enough, especially considering the savings in time and effort.
That on its own might be enough to satisfy some people, especially since emerging tech like the Ghibli simulator is getting pretty good at avoiding the 'extra finger' problems that bedeviled early generations of AI imagery. But then there are the overarching ethical and economic considerations. As most people know by now, online critics have the knives out for AI imagery because the models were built on images obtained without consent, compensation or control, while the technology threatens the livelihoods of the very artists whose work it has taken. This criticism is picking up steam as more people, not just artists, begin to realize that AI is coming for their jobs as well.
AI proponents, both in business and on the consumer side, are obviously tired of this criticism. They are invested, financially or emotionally, in the ultimate success of this technology and they don't want to hear any more whining from a bunch of artists and elitist critics. Billions, maybe trillions, of dollars are on the line, and companies are not going to let a little consumer disgust and rejection get in the way of their big plans. After all, they're the masters of the universe, and we are their increasingly disempowered 'end users.'
That edge of 'what are you going to do about it?' gives the Ghibli situation added cultural salience. Hayao Miyazaki, Ghibli's co-founder and artistic visionary, famously objected to AI art in the strongest possible terms, saying he found it disgusting and anti-human. Building an AI art tool on his aesthetic is not just spitting in the face of a global icon whose work has brought joy to millions. It's also a way for the AI tech lords to show all the aggrieved dissidents who's boss, as one does these days.
In a sense, this seems to set up a final battle between the forces of 'wow, this is really cool!' and the forces of 'ick, stop, this is gross on every level!' employing the charming Ghibli aesthetic to divide the opposition. After all, who among us can resist turning ourselves into anime characters? And once we're all implicated, then maybe we'll shut up and let them make money.