Opinion - AI companies are committing mass theft and hiding behind the language of 'training'
Ursula K. Le Guin once wrote, 'There is no right answer to the wrong question.' And while AI might struggle to understand the quip, we human readers get the point immediately. To solve a problem, you have to focus on what really matters about it.
Asking the wrong question is the big problem with the ongoing debate, most recently joined by Professor Nicholas Creel, over whether artificial intelligence models 'learn' the same way humans do.
The simple answer, spelled out well in Erik J. Larson's book 'The Myth of Artificial Intelligence,' is that they don't. They can't handle or generate novelty, they become stymied when forced to operate with uncertainty or incomplete information, and they lack empathy and any genuine capacity for understanding. They can copy and exploit almost infinite volumes of data, but they cannot extrapolate or intuit new truths from mere scraps of information as humans can.
I asked Microsoft's AI, Copilot, the question, 'Does AI reason in the same way humans reason?' This is the answer I received:
'AI relies on large datasets and algorithms to make decisions and predictions. It processes information based on patterns and statistical analysis. AI follows predefined rules and models to arrive at conclusions. It doesn't have intuition or emotions influencing its decisions. AI can learn from data through techniques like machine learning, but this learning is based on mathematical models and not personal experiences.'
As the Human Artistry Campaign's Moiya McTier has explained, real creativity flows from far more than crunching big data sets to pull out patterns and connections. It 'is the product of lived experience and grows organically from the culture, geography, family, and moments that shape us as individuals.'
Thus, it's clear that AI learns and produces outputs in fundamentally different ways than humans do.
But for those of us living and dealing with AI in the real world, including musicians like me whose work has been scraped off the internet and fed into AI models without any kind of consent, it's also clear this sterile philosopher's dispute isn't that important. Per Le Guin, the real question we should be asking is what AI does — and whether that is worth the cost.
To develop their models and launch their pattern-detecting algorithms, AI companies indisputably must cause a machine to reproduce copyrighted works or to produce new works derived from them, and they must distribute those works across large networks. Reproduction, preparation of derivative works, and distribution are three of the exclusive rights reserved to authors under federal law.
Normally, a company that wishes to engage in this kind of activity would simply license the works from their authors. But AI companies, ostensibly competitors, have all pretty much decided not to license the works but to use them anyway, effectively setting the price for these copyrights at zero.
That's a mass devaluing of the world's creative legacy — a huge cost in lost opportunities and jobs for real people. What's more, it will create a dumbed-down and derivative culture and, if left unchecked too long, a gaping hole where the next generation of truly fresh or novel creations should be. AI models may excel at producing variations on the works they have copied and analyzed (reassembled from enough different sources to avoid immediate liability for plagiarism), but they cannot break the mold and give us something truly new.
Sadly, AI evangelists seem determined to anthropomorphize commercial AI as cute little robots that learn the same way humans learn. They have co-opted the language, reframing art as 'data' and mass copying as 'training,' as if AIs were pets. This is a fairytale designed to conceal the fact that there is a cartel of trillion-dollar companies and deep-pocketed technology investors committing or excusing mass copyright infringement.
Maybe that game would fool an AI, but we humans see right through it.
David Lowery is a mathematician, writer, musician, producer, and singer-songwriter for the bands Cracker and Camper Van Beethoven.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
