Hayao Miyazaki's AI Nightmare

The Atlantic · March 28, 2025

This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age. Sign up here.
This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. I've been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words.
Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki's animated films at Studio Ghibli. (Think Kiki's Delivery Service, My Neighbor Totoro, and Spirited Away.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute).
Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off of another company's intellectual property, pointed to a documentary clip of Miyazaki calling AI an 'insult to life itself,' and mused about the technology's threats to human creativity. All of these conversations are valid, yet they didn't feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn't make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take.
This interview has been edited and condensed.
Damon Beres: Let's start with the very basic question. Are the Studio Ghibli images evil?
Ian Bogost: I don't think they're evil. They might be stupid. You could construe them as ugly, although they're also beautiful. You could construe them as immoral or unseemly.
If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it's Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that's not just permissible, but good and even righteous in contemporary culture.
Damon: So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that's a problem.
Ian: It's not different in nature to have a machine do a copy of a style of an artist than to have a person do a copy of a style of an artist. But there is a difference in scale: With AI, you can make them fast and you can make lots of them. That's changed people's feelings about the matter.
I read an article about copyright and style— you can't copyright a style, it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they're posting their own fan art and get a takedown request, then they're like, Screw you, I'm just trying to spread the gospel of your creativity. But those same people might support a copyright claim against a generative-AI tool, even though it's doing the same thing.
Damon: As I've experimented with these tools, I've realized that the purpose isn't to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I'm putting a dumb thought into a program and seeing what comes out. There's a low-effort delight and playfulness.
But some people have made the point that it's insulting because it violates Studio Ghibli co-founder Hayao Miyazaki's beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It's distasteful because it's the White House tweeting a cruel meme about a person's life.
Ian: You brought up something important, this embrace of the intentional fallacy—the idea that a work's meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It's perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.
Damon: Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren't that many of them, and they have a very high-touch execution. Even if we're not making movies, the aesthetic being everywhere and the aesthetic being cheap cuts against that.
Ian: That's a credible theory. But you're still in intentional-fallacy territory, right? Studio Ghibli has made a deliberate effort to tend and curate their output, and they don't just make a movie every year, and I want to respect that as someone influenced by that work. And that's weird to me.
Damon: What we haven't talked about is the Ghibli image as a kind of meme. They're not just spreading because they're Ghibli images: They're spreading because they're AI-generated Ghibli images.
Ian: This is a distinctive style of meme based less on the composition of the image itself or the text you put on it than on the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen; you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.
Damon: You can't really imagine image generators in a paradigm where there's no social media.
Ian: What would you do with them, show them to your mom? These are things that are made to be posted, and that's where their life ends.
Damon: Maybe that's what people don't like, too—that it's nakedly transactional.
Ian: Exactly—you're engagement baiting. These days, that accusation is equivalent to selling out.
Damon: It's this generation's poser.
Ian: Engagement baiter.
Damon: Leave me with a concluding thought about how people should react to these images.
Ian: They ought to be more curious. This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that's a real shame.

Related Articles

OpenAI updates ChatGPT's voice mode with more natural-sounding speech
Yahoo · 34 minutes ago

ChatGPT's conversational voice mode just got an upgrade. Over the weekend, OpenAI rolled out an update to Advanced Voice, the feature that lets users have dialogues with ChatGPT out loud. The company says ChatGPT's voices now sound more natural and fluid, with "subtler intonation," "realistic cadence" (including pauses and emphasis), and more "on-point expressiveness" for emotions like empathy and sarcasm. Voice mode now also allows users to translate languages more easily. Ask ChatGPT to interpret, and it will continue translating the conversation until you tell it to stop or switch to another language. The feature is available for all paid ChatGPT users across markets and platforms. OpenAI said there may be minor dips in audio quality, including "unexpected variations in tone and pitch," and noted that the update doesn't fix voice mode's occasional hallucination-related bugs, such as unintended sounds, gibberish, or background music. This article originally appeared on TechCrunch.

Does the new Google mark the end of Search as we know it?
Yahoo · an hour ago

As the buzz around Google's momentous 2025 I/O conference begins to wind down, much of the web is taking stock of what the word 'web' will even mean soon, as AI, chatbots, and search continue to converge. But though AI and chatbots have been some of the fastest-growing tech categories of the past ten years, usage of old stalwarts like traditional web search from Google and Bing remains at an all-time high. Will the 'AI revolution' really kill search, or is it just another tool in our growing online kit?

At Google I/O 2025, the company unveiled its plan for the future of its Search product family. With what the company calls 'AI Mode' now included in every search bar and Chrome installation, Google sees its new Search as a conversational, back-and-forth process that happens completely within the confines of a Google product. Rather than searching for an answer, scrolling to a page, and clicking through to get it, Google's AI Mode retrieves and summarizes all the necessary information in one place. This isn't unlike OpenAI's ChatGPT in search mode or how Google's chatbot, Gemini, operates. The difference is that, unlike those options, which are separate products that take new tabs to access, Google's AI Mode is an omnipresent addition to any Google search you're already making, available at the press of a button.

Could this change lead to the end of search as we know it? Industry analysts aren't so sure. In a recent look at the past 24 months of available web-traffic data, collected via the SEO reporting service SEMRush and relayed by OneLittleWeb, analysts found that while chatbot usage is growing exponentially, it's still completely dwarfed by the totality of search traffic. OpenAI's ChatGPT has exploded in users over the past two years, but overall, chatbots still trail total search volume by a factor of nearly 34x. To put this figure into context, the top 10 search engines globally generated 1.8 trillion page views between April 2024 and March 2025, while the top 10 chatbots collectively generated a comparatively moderate 55.2 billion (a quick check of that ratio appears below).

Now, for what it's worth, pageviews aren't as reliable a metric as they were in years past. For example, say you want to find a pet groomer in your town. Before chatbots and AI tools, your first query on Google Search might be 'pet groomer,' but the results are too broad. You then go back to the search bar and type 'dog groomer' to whittle things down further. Each of these individual searches counts as a pageview. Chatbots like ChatGPT or Gemini, meanwhile, are a one-window, one-pageview solution: You open the interface once, then have a conversation with the chatbot that reaches your eventual answer, somewhat like the multiple searches mentioned above. The difference is that these interactions are structured so that SEO services like SEMRush can only see metrics from the top of the funnel and nothing that came after it.

This is to say that evaluations like this should be taken with a grain of salt. Chatbots are fundamentally rewriting the way people interface with the web, and as a result, more traditional methods of evaluating traffic, popularity, and growth aren't as applicable as they used to be. Knowing this, it could be argued that the best metric for determining how much chatbots are actually growing would be time on page rather than straight pageviews. We haven't found any analysis that approaches the problem from this angle rather than as a sheer numbers game of how often users visited a search engine or chatbot. It remains to be seen whether chatbots will destroy search as we know it in the next decade, but LaptopMag will be keeping an eye on Google, OpenAI, and more as the landscape continues to evolve.
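For readers who want to sanity-check the gap described above, here is a minimal back-of-the-envelope sketch in Python, using only the two totals quoted in the article (1.8 trillion search page views versus 55.2 billion chatbot page views). The "nearly 34x" figure presumably comes from SEMRush and OneLittleWeb's underlying per-site data, so a small discrepancy with this rough calculation is expected; treat it as a ballpark check, not a reproduction of their methodology.

```python
# Rough check of the search-vs-chatbot traffic gap using the totals quoted above.
# These are the article's figures, not independently verified data.
search_pageviews = 1.8e12   # top 10 search engines, April 2024 - March 2025
chatbot_pageviews = 55.2e9  # top 10 chatbots over the same period

ratio = search_pageviews / chatbot_pageviews
print(f"Search-to-chatbot pageview ratio: {ratio:.1f}x")
# Prints roughly 32.6x, in the same ballpark as the "nearly 34x" cited.
```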

Google quietly gave Gemini a big upgrade that could change everything
Yahoo · an hour ago

Google made it clear that AI is a big part of its business at the company's I/O event last month. Its Gemini AI model was the star of the show, and the tech giant plans to inject the AI into all of its devices and services, including its upcoming smart glasses, Google Search results, and Gmail. In the big AI race, Google is usually in second place behind OpenAI's ChatGPT, but a new update for Gemini could give it a big lead over the competition.

With its latest update, Google Gemini can now act more as an AI assistant, a common goal for the different tech companies. Gemini Pro and Ultra users can now have the AI carry out scheduled actions by asking it to perform a task at a certain time or to set up a recurring action. "Now you can wake up with a summary of your calendar and unread emails, or get a creative boost by having Gemini write five ideas for your blog every Monday," Dave Citron, senior director of product management for the Gemini app, said in a blog post. "Stay informed by getting updates on your favorite sports team, or schedule a one-off task like asking Gemini to give you a summary of an award show the day after it happens. Just tell Gemini what you need and when, and it will take care of the rest." The Gemini update is already live for those with Gemini Pro and Ultra subscriptions, along with individuals using qualifying Google Workspace business and education plans.

All the big tech companies are trying to make their AI model the go-to assistant. The company that is still behind with its AI agent is Apple. It could be said that Siri was the first agent out of the gate when it came out in 2011, but its usefulness continues to lag behind the likes of ChatGPT, Claude, and Gemini. Apple did plan to release an overhauled Siri this year. The company announced this change last year when it revealed its Apple Intelligence feature, but turmoil within the company has been setting back the reveal of the new Siri. Apple reportedly changed the leadership of the team handling the new Siri, but the overhaul is unlikely to make its debut this year. AI, in general, has been a sore spot for Apple. Although it has a partnership with OpenAI and makes use of ChatGPT for its AI services, it has been rumored that Apple won't have much AI to talk about at the upcoming Worldwide Developers Conference that starts on Monday.
