The great intellectual property paradox

Express Tribune | 08-05-2025
Here's a modern moral riddle for the post-bookstore, post-truth, post-everything age: If a machine trained on pirated novels writes a bestselling story, does the story earn money? And if it does, who gets paid?
As the cultural world grapples with the ever-expanding footprint of artificial intelligence (AI), two camps have emerged, often with overlapping memberships: those who decry the use of artists' and writers' work to train AI without consent, and those who passionately defend access to pirated academic and literary materials in the name of equity and anti-capitalist resistance. What do we make of the contradiction?
Z-Library, LibGen, Anna's Archive and the likes of them have long been the darlings of underfunded students, overworked researchers, and book-starved readers in the Global South. They offer vast repositories of paywalled knowledge and commercial literature for free. In late 2022, when the FBI took down Z-Library and arrested its alleged founders, a wave of online outrage followed, not because people supported copyright infringement per se, but because it laid bare a deeper issue: the inaccessibility of knowledge in a commodified system.
Yet fast forward to 2025, and the outrage has flipped. AI companies are scraping the same pirated datasets (books, journals, news articles) to train large language models. This includes OpenAI, Meta, Google, and likely any company whose model seems suspiciously well-read. The reaction has been widespread: indignation, protests, copyright lawsuits, and authors furious that their work was ingested without permission. And understandably so.
But here's the twist: some of the same people who once romanticised the digital Robin Hoods of Anna's Archive are now drawing sharp lines in the sand against AI. It is worth asking what changed.
Meet your new librarian
AI tools like ChatGPT, Claude, and Perplexity are often trained on datasets that include pirated or publicly scraped material. That means your out-of-print poetry book, your obscure research paper, or your lovingly crafted fan fiction might now be part of a model's brain. The moral offence for many writers isn't just that they weren't paid, it's that they weren't asked.
In April, The Guardian reported on the UK's plans for a collective licence to ensure authors are compensated for their work used in AI training. It's a step toward recognising that writing is labour. But here's where things get messy: if AI must pay to train on data, we have to reckon with what that means for the legitimacy of free-access archives like Z-Library. We can't easily champion equitable access while condemning models trained on that same material.
This dilemma isn't just for writers. Last month, an AI-generated video mimicking the art style of Studio Ghibli went viral, prompting backlash from fans and the studio itself. Hayao Miyazaki, famous for his staunch opposition to AI-generated art, once called machine-generated imagery "an insult to life itself."
But let's not forget that much of Ghibli's global fandom was born on pirated DVDs, torrents, and unauthorised fan subs. The same applies to early anime culture, punk zines, indie comics, and underground music. Culture has always moved through shadow economies before it becomes mainstream.
Anti-capitalism with caveats
What this all reveals is a hierarchy of theft. It feels noble to pirate for the sake of access. It feels extractive to pirate for the sake of automation. The difference, many would argue, is power. AI companies are not poor students or struggling researchers. They're multibillion-dollar entities capitalising on collective cultural labour.
This power imbalance is real, but it doesn't resolve the paradox. If we believe information should be free, what limits do we place on that freedom? If we believe artists deserve consent and compensation, how do we justify the informal economies that brought many of us to art in the first place?
In a particularly surreal twist, the BBC announced last month that it has used AI to simulate writing classes from Agatha Christie and other long-dead authors. While the company insists it used only public domain material, the uncanny effect of conjuring an author's voice without their consent sparked a minor literary panic.
Here, it's important to ask what it means to preserve legacy in the age of simulation. Since AI can remix a dead author's style into a monetisable product, we have to be clear on whether it counts as homage or exploitation. And again, what if those same Christie novels were your gateway to literature, discovered on a secondhand hard drive or a dodgy file-sharing site?
AI has made this contradiction harder to ignore: we want art and knowledge to be free and sacred. We want machines to be ethical, but we're fine when people break the rules for the right reasons. We bristle at corporate extraction, but applaud individual resistance. We believe in fair pay, except when we can't afford it.
The truth is, intellectual property has always been a slippery concept. Shakespeare was a remixer. Folk tales were anonymous. Hip hop was built on sampling. Piracy made Hollywood nervous but also made it global. And now, AI is here, remixing everything with the cold neutrality of math.
So what do we do?
That depends on what you believe culture is for. If you believe art is a commodity, then AI training must come with licensing fees and permissions. If you believe art is a commons, then access matters more than ownership. If you believe art is both, well, welcome to the mess.
The call for a collective licence in the UK is promising, but it's not a solution for everyone. Especially not for writers outside the Anglosphere, whose work may be scraped but who may never see a dime. And it's not a solution for what happens to the culture downstream: the classroom relying on pirated PDFs, the rural library using a chatbot trained on pirated data, the next great writer raised on illegal ebooks.
We are in a moment of reckoning where two long-standing ideas collide: that creativity is sacred and that knowledge should be free. AI didn't create this contradiction. It just forced us to face it.
We cheer when piracy democratises access, jeer when AI exploits the same, and now we're stuck in an intellectual property paradox. Whichever side you lean toward, the question is no longer who owns culture, but who gets to use it, and why.
