The real argument artists should be making against AI


Vox · April 16, 2025

Sigal Samuel is a senior reporter for Vox's Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.
Many artists are upset at companies like OpenAI and Meta for using their work to train AI systems. Getty Images/Westend61
Every artist I know is furious. The illustrators, the novelists, the poets — all furious. These are people who have painstakingly poured their deepest yearnings onto the page, only to see AI companies pirate their work without consent or compensation.
The latest surge of anger is a response to OpenAI integrating new image-generation capabilities into ChatGPT and showing how they can be used to imitate the animation style of Studio Ghibli. That triggered an online flood of Ghiblified images, with countless users (including OpenAI CEO Sam Altman) getting the AI to remake their selfies in the style of Spirited Away or My Neighbor Totoro.
Couple that with the recent revelation that Meta has been pirating millions of published books to train its AI, and you can see how we got a flashpoint in the culture war between artists and AI companies.
When artists voice their outrage at these companies, they say things like, "They should at least ask my permission or offer to pay me!" Sometimes they go a level deeper: "This is eroding the essence of human creativity!"
These are legitimate points, but they're also easy targets for the supporters of omnivorous AI. These defenders typically make two arguments.
First, using online copyrighted materials to train AI is fair use — meaning, it's legal to copy them for that purpose without artists' permission. (OpenAI makes this claim about its AI training in general and notes that it allows users to copy a studio's house style — Studio Ghibli being one example — but not the style of an individual living artist. Lawyers say the company is operating in a legal gray area.)
Second, defenders argue that even if it's not fair use, intellectual property rights shouldn't be allowed to stand in the way of innovation that will greatly benefit humanity.
The strongest argument artists can make, then, is that the unfettered advance of AI technologies that experts can neither understand nor control won't greatly benefit humanity on balance — it'll harm us. And for that reason, forcing artists to be complicit in the creation of those technologies is inflicting something terrible on them: moral injury.
Moral injury is what happens when you feel you've been forced to violate your own values. Psychiatrists coined the term in the 1990s after observing Vietnam-era veterans who'd had to carry out orders — like dropping bombs and killing civilians — that completely contradicted the urgings of their conscience. Moral injury can also apply to doctors who have to ration care, teachers who have to implement punitive behavior-management programs, and anyone else who's been forced to act contrary to their principles. In recent years, a swell of research has shown that people who've experienced moral injury often carry a sense of shame that can lead to severe anxiety and depression.
Maybe you're thinking that this psychological condition sounds a world away from AI-generated art — that having your images or words turned into fodder for AI couldn't possibly trigger moral injury. I would argue, though, that this is exactly what's happening for many artists who are seeing their work sucked up to enable a project they fundamentally oppose, even if they don't yet know the term to describe it.
Framing their objection in terms of moral injury would be more effective. Unlike other arguments, it challenges the AI boosters' core narrative that everyone should support AI innovation because it's essential to progress.
Why AI art is more than just fair use or remixing
By now, you've probably heard people argue that trying to rein in AI development means you're anti-progress — like the Luddites who fought against power looms at the dawn of the Industrial Revolution, or the people who, when the camera was first invented, said photographers should be barred from capturing your likeness in public without your consent.
Some folks point out that as recently as the 1990s, many people saw remixing music or sharing files on Napster as progressive and actually considered it illiberal to insist on intellectual property rights. In their view, music should be a public good — so why not art and books?
To unpack this, let's start with the Luddites, so often invoked in discussions about AI these days. Despite the popular narrative we've been fed, the Luddites were not anti-progress or even anti-technology. What they opposed was the way factory owners used the new machines: not as tools that could make it easier for skilled workers to do their jobs but as a means to fire and replace them with low-skilled, low-paid child laborers who'd produce cheap, low-quality cloth. The owners were using the tech to immiserate the working class while growing their own profit margins.
That is what the Luddites opposed. And they were right to oppose it because it matters whether tech is used to make all classes of people better off or to empower an already-powerful minority at others' expense.
Narrowly tailored AI — tools built for specific purposes, such as enabling scientists to discover new drugs — stands to be a huge net benefit to humanity as a whole, and we should cheer it on. But we have no compelling reason to believe the same is true of the race to build AGI — artificial general intelligence, a hypothetical system that can match or exceed human problem-solving abilities across many domains. In fact, those racing to build it, like Altman, will be the first to tell you that it might break the world's economic system or even lead to human extinction.
They cannot argue in good faith, then, that intellectual property should be swept aside because the race to AGI will be a huge net benefit to humanity. They might hope it will benefit us, but they themselves say it could easily doom us instead.
But what about the argument that shoveling the whole internet into AI is fair use?
That ignores the fact that when you take something from someone else, it really matters exactly what you do with it. Under the fair use doctrine, the purpose and character of the use is key: Is it commercial or not-for-profit? Does it harm the market for the original work?
Think about the people who sought to limit photographers' rights in the 1800s, arguing that photographers shouldn't be able to take your photo without permission. Now, it's true that the courts ruled that I can take a photo with you in it even if you didn't explicitly consent. But that doesn't mean the courts allowed any and all uses of your likeness. I cannot, for example, legally take that photo of you and non-consensually turn it into pornography.
Pornography — not music remixing or file sharing — is the right analogy here. Because AI art isn't just about taking something from artists; it's about transforming it into something many of them detest, since they believe it contributes to the "enshittification" of the world, even if it won't literally end the world.
That brings us back to the idea of moral injury.