Latest news with #AtlanticIntelligence
Yahoo
28-03-2025
- Entertainment
- Yahoo
Hayao Miyazaki's AI Nightmare
This is Atlantic Intelligence, a newsletter in which our writers help you wrap your mind around artificial intelligence and a new machine age.

This week, OpenAI released an update to GPT-4o, one of the models powering ChatGPT, that allows the program to create high-quality images. I've been surprised by how effective the tool is: It follows directions precisely, renders people with the right number of fingers, and is even capable of replacing text in an image with different words. Almost immediately—and with the direct encouragement of OpenAI CEO Sam Altman—people started using GPT-4o to transform photographs into illustrations that emulate the style of Hayao Miyazaki's animated films at Studio Ghibli. (Think Kiki's Delivery Service, My Neighbor Totoro, and Spirited Away.) The program was excellent at this task, generating images of happy couples on the beach (cute) and lush illustrations of the Kennedy assassination (not cute). Unsurprisingly, backlash soon followed: People raised concerns about OpenAI profiting off another company's intellectual property, pointed to a documentary clip of Miyazaki calling AI an 'insult to life itself,' and mused about the technology's threats to human creativity. All of these conversations are valid, yet they didn't feel altogether satisfying—complaining about a (frankly, quite impressive!) thing doesn't make that thing go away, after all. I asked my colleague Ian Bogost, also the Barbara and David Thomas Distinguished Professor at Washington University in St. Louis, for his take. This interview has been edited and condensed.

Damon Beres: Let's start with the very basic question. Are the Studio Ghibli images evil?

Ian Bogost: I don't think they're evil. They might be stupid. You could construe them as ugly, although they're also beautiful. You could construe them as immoral or unseemly. If they are evil, why are they evil? Where does that get us in our understanding of contemporary technology and culture? We have backed ourselves into this corner where fandom is so important and so celebrated, and has been for so long. Adopting the universe and aesthetics of popular culture—whether it's Studio Ghibli or Marvel or Harry Potter or Taylor Swift—that's not just permissible, but good and even righteous in contemporary culture.

Damon: So the idea is that fan art is okay, so long as a human hand literally drew it with markers. But if any person is able to type a very simple command into a chatbot and render what appears at first glance to be a professional-grade Studio Ghibli illustration, then that's a problem.

Ian: It's not different in nature to have a machine copy the style of an artist than to have a person copy the style of an artist. But there is a difference in scale: With AI, you can make them fast, and you can make lots of them. That's changed people's feelings about the matter. I read an article about copyright and style—you can't copyright a style, it argued—that made me realize that people conflate many different things in this conversation about AI art. People who otherwise might hate copyright seem to love it now: If they're posting their own fan art and get a takedown request, then they're like, Screw you, I'm just trying to spread the gospel of your creativity. But those same people might support a copyright claim against a generative-AI tool, even though it's doing the same thing.

Damon: As I've experimented with these tools, I've realized that the purpose isn't to make art at all; a Ghibli image coming out of ChatGPT is about as artistic as a photo with an Instagram filter on it. It feels more like a toy to me, or a video game. I'm putting a dumb thought into a program and seeing what comes out. There's a low-effort delight and playfulness. But some people have made the point that it's insulting because it violates Studio Ghibli co-founder Hayao Miyazaki's beliefs about AI. Then there are these memes—the White House tweeted a Ghiblified image of an immigrant being detained, which is extremely distasteful. But the image is not distasteful because of the technology: It's distasteful because it's the White House tweeting a cruel meme about a person's life.

Ian: You brought up something important, this embrace of the intentional fallacy—the idea that a work's meaning is derived from what the creator of that work intended that meaning to be. These days, people express an almost total respect for the intentions of the artist. It's perfectly fine for Miyazaki to hate AI or anything else, of course, but the idea that his opinion would somehow influence what I think about making AI images in his visual style is fascinating to me.

Damon: Maybe some of the frustration that people are expressing is that it makes Studio Ghibli feel less special. Studio Ghibli movies are rare—there aren't that many of them, and they have a very high-touch execution. Even if we're not making movies, the aesthetic being everywhere and the aesthetic being cheap cuts against that.

Ian: That's a credible theory. But you're still in intentional-fallacy territory, right? Studio Ghibli has made a deliberate effort to tend and curate its output—it doesn't just make a movie every year—and I want to respect that as someone influenced by that work. And that's weird to me.

Damon: What we haven't talked about is the Ghibli image as a kind of meme. They're not just spreading because they're Ghibli images: They're spreading because they're AI-generated Ghibli images.

Ian: This is a distinctive style of meme, based less on the composition of the image itself or the text you put on it than on the application of an AI-generated style to a subject. I feel like this does represent some sort of evolutionary branch of internet meme. You need generative AI to make that happen—you need it to be widespread and good enough and fast enough and cheap enough. And you need X and Bluesky in a way as well.

Damon: You can't really imagine image generators in a paradigm where there's no social media.

Ian: What would you do with them, show them to your mom? These are things that are made to be posted, and that's where their life ends.

Damon: Maybe that's what people don't like, too—that it's nakedly transactional.

Ian: Exactly—you're engagement baiting. These days, that accusation is equivalent to selling out.

Damon: It's this generation's poser.

Ian: Engagement baiter.

Damon: Leave me with a concluding thought about how people should react to these images.

Ian: They ought to be more curious. This is deeply interesting, and if we refuse to give ourselves the opportunity to even start engaging with why, and instead jump to the most convenient or in-crowd conclusion, that's a real shame.

Article originally published at The Atlantic




Atlantic
28-02-2025
- Business
- Atlantic
The Complicated Relationship Between Sam Altman and Donald Trump
President Donald Trump has been clear about his vision for America as an AI superpower, signing in his first week an executive order geared toward helping AI 'promote human flourishing, economic competitiveness, and national security.' To achieve this goal, the Trump administration has forged a relationship with OpenAI and its CEO, Sam Altman—but that could complicate things for Elon Musk. As my colleague Matteo Wong wrote for The Atlantic on Wednesday, the two technologists are bitter rivals. Musk was one of OpenAI's initial investors and served on the company's board, but he left in 2018. Ever since ChatGPT made OpenAI a household name, Musk has routinely taken potshots at the company, calling its chatbot too 'woke' and using the nickname 'Scam Altman' to refer to its CEO. Meanwhile, he's launched his own AI firm, xAI, whose products have lagged behind OpenAI's.

Musk and Altman may both need Trump's blessing to unlock considerable resources for their projects and to stave off inconvenient regulations. 'Anything that OpenAI might gain from Trump, xAI could reap as well,' Matteo writes. The companies are in competition with each other, so any advantage that one gets may be to the detriment of the other, building up a tension between Musk and Altman that could eventually snap.

How Sam Altman Could Break Up Elon Musk and Donald Trump
By Matteo Wong

The rivalry between Sam Altman and Elon Musk is entering its Apprentice era. Both men have the ambition to redefine how the modern world works—and both are jockeying for President Donald Trump's blessing to accelerate their plans. Altman's company, OpenAI, as well as Musk's ventures—which include SpaceX, Tesla, and xAI—all depend to some degree on federal dollars, permits, and regulatory support. The president could influence whether OpenAI or xAI produces the next major AI breakthrough, whether Musk can succeed in sending a human to Mars, and whether Altman's big bet on nuclear energy, and fusion reactors in particular, pans out.

What to Read Next

'Terrified' federal workers are clamming up: Karen Hao recently spoke with more than a dozen federal workers about the culture of fear and paranoia that they say is spreading through their agencies under the Trump administration. They allege that they are being hindered from doing their work—some of which touches on risks emerging from AI. 'Federal workers I spoke with now say that neither they nor their colleagues want to be associated in any way with working on or promoting disinformation research,' Hao writes, 'even as they are aware that the U.S. government's lack of visibility into such networks could create a serious national vulnerability, especially as AI gives state-backed operations powerful upgrades.'
Yahoo
21-02-2025
- Business
- Yahoo
What Could DOGE Do With Federal Data?
When the Department of Government Efficiency stormed the federal government, it did so with a clear premise: To remake the government, one must remake the civil service. In particular, the team of Elon Musk acolytes 'focused on accessing the terminals, uncovering the button pushers, and taking control,' Michael Scherer, Ashley Parker, Shane Harris, and I wrote this week in an investigation into the DOGE takeover. Computers, they figured, run the government.

DOGE members and new political appointees have sought access to data and IT systems across the government—at the Treasury Department, the IRS, the Department of Health and Human Services, and more. Government technologists have speculated that DOGE's next step will be to centralize those data and feed them into AI systems, making bureaucratic processes more efficient while also identifying fraud and waste, or perhaps simply uncovering further targets to dismantle. Musk's team has reportedly already fed Department of Education data into an AI system, and Thomas Shedd, a former Tesla engineer recently appointed to the General Services Administration, has repeatedly spoken with staff about an AI strategy, mentioning using the technology to develop coding agents and analyze federal contracts. No matter DOGE's goal, putting so much information in one place and under the control of a small group of people with little government experience has raised substantial security concerns. As one recently departed federal technology official wrote in draft testimony for lawmakers, which we obtained, 'DOGE is one romance scam away from a national security emergency.'

This Is What Happens When the DOGE Guys Take Over
By Michael Scherer, Ashley Parker, Matteo Wong, and Shane Harris

They arrived casually dressed and extremely confident—a self-styled super force of bureaucratic disrupters, mostly young men with engineering backgrounds on a mission from the president of the United States, under the command of the world's wealthiest online troll. On February 7, five Department of Government Efficiency representatives made it to the fourth floor of the Consumer Financial Protection Bureau headquarters, where the executive suites are located. They were interrupted while trying the handles of locked office doors. 'Hey, can I help you?' asked an employee of the agency that was soon to be forced into bureaucratic limbo. The DOGE crew offered no clear answer.

What to Read Next

DOGE and new Trump appointees' access to federal data and computer systems is growing in both breadth and depth. Defense technologies, Americans' sensitive personal and health data, dangerous biological research, and more are in reach. Within at least one agency, USAID, they have achieved 'God mode,' according to an employee in senior leadership—meaning Elon Musk's team has 'total control over systems that Americans working in conflict zones rely on, the ability to see and manipulate financial systems that have historically awarded tens of billions of dollars, and perhaps much more,' Charlie Warzel, Ian Bogost, and I reported this week. With this level of control, the USAID staffer feared, DOGE could terminate federal workers in 'a conflict zone like Ukraine, Sudan, or Ethiopia.' In the coming weeks, we reported, 'the team is expected to enter IT systems at the CDC and Federal Aviation Administration.' Just how far Musk and his team can go is uncertain; they face various lawsuits, which have thus far had varying success. The team may be trying to improve the government's inner workings, as is its stated purpose. 'But in the offices where the team is reaching internal IT systems,' Charlie, Ian, and I wrote, 'some are beginning to worry that [Musk] might prefer to destroy' the government, 'to take it over, or just to loot its vaults for himself.'

