Hayao Miyazaki's AI Nightmare
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.
This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. I've been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words.
Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki's animated films at Studio Ghibli. (Think Kiki's Delivery Service, My Neighbor Totoro, and Spirited Away.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute).
Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off of another company's intellectual property, pointed to a documentary clip of Miyazaki calling AI an 'insult to life itself,' and mused about the technology's threats to human creativity. All of these conversations are valid, yet they didn't feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn't make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take.
This interview has been edited and condensed.
Damon Beres: Let's start with the very basic question. Are the Studio Ghibli images evil?
Ian Bogost: I don't think they're evil. They might be stupid. You could construe them as ugly, although they're also beautiful. You could construe them as immoral or unseemly.
If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it's Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that's not just permissible, but good and even righteous in contemporary culture.
Damon: So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that's a problem.
Ian: It's not different in nature to have a machine do a copy of a style of an artist than to have a person do a copy of a style of an artist. But there is a difference in scale: With AI, you can make them fast and you can make lots of them. That's changed people's feelings about the matter.
I read an article about copyright and style—you can't copyright a style, it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they're posting their own fan art and get a takedown request, then they're like, Screw you, I'm just trying to spread the gospel of your creativity. But those same people might support a copyright claim against a generative-AI tool, even though it's doing the same thing.
Damon: As I've experimented with these tools, I've realized that the purpose isn't to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I'm putting a dumb thought into a program and seeing what comes out. There's a low-effort delight and playfulness.
But some people have made this point that it's insulting because it's violating Studio Ghibli co-founder Hayao Miyazaki's beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It's distasteful because it's the White House tweeting a cruel meme about a person's life.
Ian: You brought up something important, this embrace of the intentional fallacy—the idea that a work's meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It's perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.
Damon: Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren't that many of them, and they have a very high-touch execution. Even if we're not making movies, the aesthetic being everywhere and the aesthetic being cheap cuts against that.
Ian: That's a credible theory. But you're still in intentional-fallacy territory, right? Studio Ghibli has made a deliberate effort to tend and curate their output, and they don't just make a movie every year, and I want to respect that as someone influenced by that work. And that's weird to me.
Damon: What we haven't talked about is the Ghibli image as a kind of meme. They're not just spreading because they're Ghibli images: They're spreading because they're AI-generated Ghibli images.
Ian: This is a distinctive style of meme based less on the composition of the image itself or the text you put on it, but the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen, you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.
Damon: You can't really imagine image generators in a paradigm where there's no social media.
Ian: What would you do with them, show them to your mom? These are things that are made to be posted, and that's where their life ends.
Damon: Maybe that's what people don't like, too—that it's nakedly transactional.
Ian: Exactly—you're engagement baiting. These days, that accusation is equivalent to selling out.
Damon: It's this generation's poser.
Ian: Engagement baiter.
Damon: Leave me with a concluding thought about how people should react to these images.
Ian: They ought to be more curious. This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that's a real shame.
Article originally published at The Atlantic
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Android Authority
20 minutes ago
- Android Authority
Gemini could soon rival ChatGPT with its new privacy feature (APK teardown)
Ryan Haines / Android Authority TL;DR Google is working on a temporary chat feature for Gemini. This feature could be similar to ChatGPT's Temporary Chats, which give users a blank slate for conversation and doesn't save any memory. Users will be able to access Gemini's temporary chat feature by clicking on a new disappearing clock icon. OpenAI's ChatGPT and Google's Gemini have emerged as popular AI assistants. Both are usually neck-and-neck when it comes to features, but given the pace of innovation, a few things are missing here and there. It seems Google wants to close the gap on its end, as it could bring a ChatGPT-like temporary chat feature to Gemini in the future. Authority Insights story on Android Authority. Discover You're reading anstory on Android Authority. Discover Authority Insights for more exclusive reports, app teardowns, leaks, and in-depth tech coverage you won't find anywhere else. An APK teardown helps predict features that may arrive on a service in the future based on work-in-progress code. However, it is possible that such predicted features may not make it to a public release. We've spotted code within Google app v16.22.44 beta that indicates that Google is working on a temporary chat feature. We managed to activate the feature to give you an early look at it: AssembleDebug / Android Authority In the screenshot above, you can see a new disappearing clock icon right next to the New Chat button in the sidebar (which is an upcoming tablet-friendly feature that hasn't rolled out yet). You can tap on the icon to presumably start a temporary chat. We couldn't get the feature to work, but we presume it will work similarly to ChatGPT's Temporary Chat feature. When you start a Temporary Chat in ChatGPT, the AI assistant doesn't display the chat in your chat history, saving you from the trouble of deleting queries from your history. Further, the conversation begins as a blank slate, as ChatGPT won't be aware of your previous conversations, nor have any past or future memory. However, it will still follow custom instructions if they are enabled. OpenAI may keep a copy of your conversation for up to 30 days for 'safety purposes,' but they won't be used to improve their models. The feature is very similar to what most users recognize as 'incognito mode' in their browsers. Google has yet to share details about the temporary chats feature in Gemini. We'll keep you updated when we learn more. Got a tip? Talk to us! Email our staff at Email our staff at news@ . You can stay anonymous or get credit for the info, it's your choice.
Yahoo
28 minutes ago
- Yahoo
Apple's iPhone rebrand reveals a company in crisis
When you buy through links on our articles, Future and its syndication partners may earn a commission. Three years ago we got the iPhone 14, two years ago we got the iPhone 15, and last year we got the iPhone 16. Who knows what we're getting this year? Well, actually, for the first time in a long while, that joke might not be applicable in 2025, with Apple heavily rumoured to be jettisoning its naming conventions for iOS, and potentially the iPhone itself. Somewhat confusingly, Apple has been strongly rumoured to be ditching skipping straight from iOS 18 to iOS 26 when revealing the next generation of its mobile software at WWDC today. After nearly 20 years of consecutive numbers, the company has allegedly decided to name it after the coming year. And some have suggested the same change could hit the iPhone, with the iPhone 16 being followed by the iPhone 26. As one Redditor succinctly puts it, "This both makes a lot of sense and makes no sense at the same time." This potential major rebrand comes at a curious time for Apple, one in which the company is arguably on the defensive. Apple Intelligence, announced to much fanfare at last year's WWDC, is fast approaching 'disaster' status, with several features yet to materialise and the whole debacle allegedly causing tension in the company. And then there are the more existential questions facing the brand. As the iPhone approaches its twentieth birthday, the whole thing feels less and less innovative every year – to the point that the iPhone 16 launch left us unusually underwhelmed. The tech world seems all too aware that the world is ready for the next 'iPhone moment' to come along and transform the landscape, which might explain the ridiculous amount of hype that met the news of Jony Ive joining OpenAI, with a supposedly revolutionary new device imminent. Apple might have hoped the advent of Vision Pro might provide that next 'iPhone moment', but alas, it wasn't to be. The hugely expensive headset has already been declared a flop by several corners of the internet after Apple sharply cut back production at the end of last year. All of which to say is that Apple isn't exactly feeling its freshest in 2025, and the iOS and potential iPhone rebrand seems to confirm that the company knows it. Apple is hardly known for simple naming conventions, but it's notable that it has chosen now to cut the iOS title's ties with the original iPhone launch in 2007, as though the company is eager to show that it's looking forward, not back. Time will tell what Apple is planning to reveal at WWDC tonight, but for my money, it'll take more than a new name and a refreshed design for Apple to get people excited about the future of the iPhone. With its rivals circling and its AI efforts flailing, perhaps the next 'iPhone moment' won't come from Apple at all.


Forbes
28 minutes ago
- Forbes
Fixing AI's Gender Bias Isn't Just Ethical—It's Good Business
As artificial intelligence (AI) tools become more embedded in daily life, they're amplifying gender biases from the real world. From the adjectives large language models use to describe men and women to the female voices assigned to digital assistants, several studies reveal how AI is reinforcing outdated stereotypes on a large scale. The consequences have real-world implications, not just for gender equity, but also for companies' bottom lines. Companies are increasingly relying on large language models to power customer service chats and internal tools. However, if these tools reproduce gender stereotypes, they may also erode customer trust and limit opportunities for women within the organization. Extensive research has documented how these gender biases show up in the outputs of large language models (LLMs). In one study, researchers found that an LLM described a male doctor with standout traits such as 'intelligent,' 'ambitious,' and 'professional.' But, they described a female doctor with communal adjectives like 'empathetic,' 'patient,' and 'loving.' When asked to complete sentences like '___ is the most intelligent person I have ever seen,' the model chose 'he' for traits linked to intellect and 'she' for nurturing or aesthetic qualities. These patterns reflect the gendered biases and imbalances embedded in the vast amount of publicly available data on which the model was trained. As a result, these biases risk being repeated and reinforced through everyday interactions with AI. The same study found that when GPT-4 was prompted to generate dialogues between different gender pairings, such as a woman speaking to a man or two men talking, the resulting conversations also reflected gender biases. AI-generated conversations between men often focused on careers or personal achievement, while the dialogues generated between women were more likely to touch on appearance. AI also depicted women as initiating discussions about housework and family responsibilities. Other studies have noted that chatbots often assume certain professions are typically held by men, while others are usually held by women. Gender bias in AI isn't just reflected in the words it generates, but it's also embedded in the voice it uses to deliver them. Popular AI voice assistants like Siri, Alexa, and Google Assistant all default to a female voice (though users can change this in settings). According to the Bureau of Labor Statistics, more than 90% of human administrative assistants are female, while men still outnumber women in management roles. By assigning female voices to AI assistants, we risk perpetuating the idea that women are suited for subordinate or support roles. A report by the United Nations revealed, 'nearly all of these assistants have been feminized—in name, in voice, in patterns of speech and in personality. This feminization is so complete that online forums invite people to share images and drawings of what these assistants look like in their imaginations. Nearly all of the depictions are of young, attractive women.' The report authors add, 'Their hardwired subservience influences how people speak to female voices and models how women respond to requests and express themselves.' 'Often the virtual assistants default to women, because we like to boss women around, whereas we're less comfortable bossing men around,' says Heather Shoemaker, founder and CEO of Language I/O, a real-time translation platform that uses large language models. Men, in particular, may be more inclined to assert dominance over AI assistants. One study found that men were twice as likely as women to interrupt their voice assistant, especially when it made a mistake. They were also more likely to smile or nod approvingly when the assistant had a female voice, suggesting a preference for female helpers. Because these assistants never push back, this behavior goes unchecked, potentially reinforcing real-world patterns of interruption and dominance that can undermine women in professional settings. Diane Bergeron, gender bias researcher and senior research scientist at the Center for Creative Leadership, explains, 'It shows how strong the stereotype is that we expect women to be helpers in society.' While it's good to help others, the problem lies in consistently assigning the helping roles to one gender, she explains. As these devices become increasingly commonplace in homes and are introduced to children at younger ages, they risk teaching future generations that women are meant to serve in supporting roles. Even organizations are naming their in-house chatbots after women. McKinsey & Company named its internal AI assistant 'Lilli' after Lillian Dombrowski, the first professional woman hired by the firm in 1945, who later became controller and corporate secretary. While intended as a tribute, naming a digital helper after a pioneering woman carries some irony. As Bergeron quipped, 'That's the honor? That she gets to be everyone's personal assistant?' Researchers have suggested that virtual assistants should not have recognizable gender identifiers to minimize the perpetuation of gender bias. Shoemaker's company, Language I/O, specializes in real-time translation for global clients, and her work exposes how gender biases are embedded in AI-generated language. In English, some gendered assumptions can go unnoticed by users. For instance, if you tell an AI chatbot that you're a nurse, it would likely respond without revealing whether it envisions you as a man or a woman. However, in languages like Spanish, French, or Italian, adjectives and other grammatical cues often convey gender. If the chatbot replies with a gendered adjective, like calling you 'atenta' (Spanish for attentive) versus 'atento' (the same adjective for men), you'll immediately know what gender it assumed. Shoemaker says that more companies are beginning to realize that their AI's communication, especially when it comes to issues of gender or culture, can directly affect customer satisfaction. 'Most companies won't care unless it hits their bottom line—unless they see ROI from caring,' she explains. That's why her team has been digging into the data to quantify the impact. 'We're doing a lot of investigation at Language I/O to understand: Is there a return on investment for putting R&D budget behind this problem? And what we found is, yes, there is.' Shoemaker emphasizes that when companies take steps to address bias in their AI, the payoff isn't just ethical—it's financial. Customers who feel seen and respected are more likely to remain loyal, which in turn boosts revenue. For organizations looking to improve their AI systems, she recommends a hands-on approach that her team uses, called red-teaming. Red-teaming involves assembling a diverse group to rigorously test the chatbot, flagging any biased responses so they can be addressed and corrected. It results in AI, which is more inclusive and user-friendly.